{"text":"author: Marie Csete\ndate: 2010\nreferences:\ntitle: Q&A: What can microfluidics do for stem-cell research?\n\n# What can microfluidics do for stem-cell research?\n\nStem-cell biology and microfluidics have both been hotbeds of research activity for the past few years, yet neither field has been able to successfully commercialize a clinical 'killer application'. Stem-cell behavior is exquisitely sensitive to environmental cues, and the important cues are difficult to establish, manipulate and quantify in traditional cell culture. Because the microenvironment can be controlled in microfluidics platforms, microfluidics has a lot to offer stem-cell biology and there are many good reasons for the fields to join forces.\n\n# What exactly is microfluidics?\n\nMicrofluidics is the characterization and manipulation of fluids on the nanoliter or picoliter scale. The behavior and properties of fluids change as amounts decrease from the macroscale (volumes used for everyday applications) to the microscale. This means that microfluidic devices cannot be built by simply scaling down macroscale devices. For instance, at low microliter volumes, fluids act more like solids, and two fluids flowing alongside each other in a microchannel will not mix well (except by diffusion); therefore, a variety of techniques (pumps, valves, electrokinetics) are used in microfluidics platforms to actuate mixing and fluid flows. Most microfluidics applications in research labs concentrate on the 10 to 100 \u03bcm scale, basically the diameter of a single cell.\n\nMicrofluidics lab-on-a-chip devices allow standard laboratory analyses, such as sample purification, labeling, detection and separation, to be carried out automatically as the sample is moved, via microchannels, to different regions of a chip. Various methods have been used to produce microfluidic devices, but inkjet printers offer an easily accessible way of printing channels and other features directly onto the device. This technique has been used to print precise patterns of proteins or protein gradients onto a surface on which cells can subsequently be cultured to investigate or control their behavior. A technically more advanced use of microfluidics is the integration of microchannels with nanoelectrospray emitters for preparing material for mass spectrometry in high-throughput proteomics analyses of biologic samples \\[1\\].\n\n# What background do you need for microfluidics?\n\nPhysics (in particular fluid dynamics), mechanical engineering, or bioengineering backgrounds, the common feature of these being a strong mathematical foundation.\n\n# Why should stem-cell biologists care about miniaturization of cell culture and analysis tools?\n\nOn the one hand, scientists working on the development of pluripotent stem cells for clinical use are encountering a major challenge in scaling up cell cultures for master banks to be used as sources of cell therapies for large numbers of patients. Microfluidics is clearly not the answer to this problem. But on the front end of developing therapies from stem cells, rigorous identification of the starting stem cell and its progeny is a major technical challenge and a regulatory requirement, analogous to the precise chemical identity of a drug. Classically, identification of stem cells is done clonally (at the single-cell level), and it is generally difficult to follow or analyze single cells in mass cell culture. 
# What background do you need for microfluidics?

Physics (in particular fluid dynamics), mechanical engineering, or bioengineering backgrounds, the common feature of these being a strong mathematical foundation.

# Why should stem-cell biologists care about miniaturization of cell culture and analysis tools?

On the one hand, scientists working on the development of pluripotent stem cells for clinical use are encountering a major challenge in scaling up cell cultures for master banks to be used as sources of cell therapies for large numbers of patients. Microfluidics is clearly not the answer to this problem. But on the front end of developing therapies from stem cells, rigorous identification of the starting stem cell and its progeny is a major technical challenge and a regulatory requirement, analogous to the precise chemical identity of a drug. Classically, identification of stem cells is done clonally (at the single-cell level), and it is generally difficult to follow or analyze single cells in mass cell culture. Microfluidics techniques can be used for sensitive discrimination of gene (and protein) expression levels at the single-cell level, and they are therefore increasingly useful in stem-cell biology for understanding the heterogeneity of stem-cell populations.

Separation of rare stem cells (or rare cancer cell types) from a mixed population is also not easy using flow cytometers developed for clinical use; the harsh conditions imposed on the cells during standard flow cytometry mean that cell recovery is low. Microfluidics-based, benchtop flow cytometry allows separation of small numbers of stem cells under direct visualization, and is less damaging to cells than traditional cell sorters. For both analysis and separation, microfluidics offers the means of controlling the cells' environment rigorously. Several groups have also reported that stem cells (and stem cells committed to a particular lineage) can be separated from mixed cell populations using their dielectric properties (their responses to applied electric fields).

# In what ways are microfluidics culture conditions superior to those of traditional mass cell culture?

Stem-cell fate (growth, death, differentiation, migration) is highly dependent on environmental cues, but the usual cell culture environment does not mimic the *in vivo* microenvironment in several fundamental ways (20% oxygen is unphysiologically high; physiologic fluid flow and shear stresses are not present; three-dimensional environments cannot be standardized), and overall the environment in conventional cell culture is not controllable. For example, pH inevitably drifts in conventional tissue culture, but in well-designed microfluidics devices, the pH can be held constant by controlling medium inflow and outflow. In other words, engineers can provide steady-state conditions for cells, as well as fast and predictable changes in the environment surrounding the cells. Of particular importance, the best microfluidics devices are supported by mathematical descriptions of the microenvironment, and information from experiments can be fed back into mathematical models to determine optimal design features to promote specific stem-cell behaviors.

Gradient cues, so important in embryonic development, can be constructed quite precisely on microfluidics devices, as noted above. For example, migration of stem cells in response to chemotactic gradients is often studied in mass cultures using repeated studies in Boyden chambers (two chambers separated by a filter through which cells migrate), but molecular gradients established with microfluidics tools yield inherently more detailed and precise information because gradient characteristics such as slope and concentration can be quantified and correlated with migration behavior. Overall, major advantages of microfluidics-based cell-culture systems are the flexibility in the configuration of microchips and the ease with which fluid flows can be controlled over time and space.

Human embryonic stem cells (hESCs) are particularly sensitive to handling in culture, and automation of hESC growth and differentiation *in vitro* on microfluidics platforms produces more standardized outcomes.
Many investigators believe that the stress of manual handling of hESCs is an important factor in their instability over time, and automated techniques for passaging and expansion may therefore help overcome the problem of karyotypic instability.

Three-dimensional mass culture systems are especially 'noisy' and difficult to control using conventional tissue-culture methods. Embryoid bodies - floating aggregations of undifferentiated cells - are often used as an intermediate stage in differentiation protocols, and are generated from hESCs by passaging the cells onto non-adherent plates. The resulting embryoid bodies are widely heterogeneous in size unless special engineering protocols are used. This size heterogeneity means that diffusion patterns for signaling through the embryoid bodies and cell-cell interactions are also heterogeneous, resulting in lack of control over the differentiation patterns. Printed topographic features of various shapes on microchips or microchannels are a proven method for gaining control over how cells aggregate. The size and development of embryoid bodies can be controlled with microfluidics techniques, providing a more predictable differentiation pattern and organization of the cells into phenotypically distinct layers. In fact, engineers have successfully manipulated parts of embryoid bodies in different ways using microfluidics tools, directing distinct fates in different parts of the cell aggregates.

The 'micro' in microfluidics plus the configurability of channels makes it possible to study simultaneous signals to two parts of a single cell, for example the apical versus basal signals that will be encountered by a polarized cell. In traditional mass culture, cells align in random fashion, and although matrix coatings on tissue-culture plastic can be used to line cells up relative to the matrix, it is impossible to present signals to separate subcellular domains. Epithelial cells are the classic example of polarized cells, in which specific receptors are largely confined to either the apical or basal surface, and signals received at these subcellular domains determine cell function. At the very small scale of microfluidics devices, the apical and basal faces of a cell can be exposed to separate chambers whose composition can be defined and manipulated independently, making it possible to determine the hierarchy of stimuli that determine cell behavior.

An obvious advantage of microfluidics is that it provides economy in terms of reagent use, especially for high-throughput assays. Of course, this economy will only be realized if device fabrication is also inexpensive.

# What are some of the major limitations of microfluidics-based cell culture systems?

Not surprisingly, from a biologist's perspective, the materials-cell interface is still a problem. Polydimethylsiloxane (PDMS) is commonly used to make microchips because it is cheap, optically transparent, gas permeable, and can be manipulated outside a clean room. Although many groups have reported using PDMS chips for hESC studies, my experience is that PDMS has to be considerably modified (and coated), because it is very toxic to the cells. Other, more biocompatible surfaces are available, but the ideal material for exquisitely sensitive cells such as hESCs has not been developed.
Again from the biologist's perspective, cellular debris can occlude small channels, and washing methods still need improvement for some applications.

Engineers have pointed out that the best mathematical framework for handling models of processes, such as differentiation, that start at small scales but result in large-scale outcomes is still evolving [2]. So along with the constant improvement in hardware and software needed to make inexpensive devices work optimally, the mathematical tools that make microfluidics approaches so valuable also need continuous refinement. Ultimately, feedback between biologists using the devices and engineers designing them is essential for moving microfluidics-based cell culture forward.

A major issue limiting wide application of microfluidics is that the devices still require experts to operate them, and are not yet user-friendly for biologists.

# What major problems in translational stem-cell biology can be addressed using microfluidics tools?

Here again, microfluidics techniques afford the ability to define the microenvironment surrounding stem cells. The disease environment into which stem cells will be transplanted is certain to alter their behavior, and is not adequately mimicked in most animal models of disease. Microfluidics-controlled environments can be used to test the tolerance of cells to mechanical and shear forces, gases, oxidants and other extracellular cues that characterize the disease environment. Physical, mechanical and biochemical factors can be tested quantitatively at relatively high throughput on the benchtop using microfluidics to help predict behavior of stem cells *in vivo*.

Overall, microfluidics tools can be used for spatiotemporal control over the stem-cell microenvironment, so that the ideal *ex vivo* niche for cell survival and differentiation can be defined quantitatively and in high throughput. Control over the culture environment also allows investigators to perturb cell fate to generate desired outcomes, and to define the limits of physical, mechanical and biochemical factors that are tolerated by stem cells at different stages of differentiation.

# What have been the important contributions of microfluidics in biology in general?

George Whitesides points out that one of the best-developed applications of microfluidics is in protein crystallographic studies, to screen the conditions that encourage growth and protection of crystals [3]. For cell biologists, the major impact has been in cell separation, single-cell resolution of the dynamics of gene expression, and insights into how mechanical forces applied to individual cells determine their behavior.

# Where can I find out more?

Cai L, Friedman N, Xie XS: **Stochastic protein expression in individual cells at the single molecule level**. *Nature* 2006, **440**:358-362.

Cimetta E, Figallo E, Cannizzaro C, Elvassore N, Vunjak-Novakovic G: **Micro-bioreactor arrays for controlling cellular environments: Design principles for human embryonic stem cell applications**. *Methods* 2009, **47**:81-89.

Melin J, Quake SR: **Microfluidic large-scale integration: the evolution of design rules for biological automation**. *Annu Rev Biophys Biomol Struct* 2007, **36**:213-231.

Stroock AD, Dertinger SK, Ajdari A, Mezic I, Stone HA, Whitesides GM: **Chaotic mixer for microchannels**.
*Science* 2002, **295**:647-651.

Tung Y-C, Torisawa Y, Futai N, Takayama S: **Small volume low mechanical stress cytometry using computer-controlled Braille display microfluidics**. *Lab Chip* 2007, **7**:1497-1503.

abstract: # Background

Genomic tumor information, such as identification of amplified oncogenes, can be used to plan treatment. The two commonly available sources of brain tumor tissue are formalin-fixed, paraffin-embedded (FFPE) sections from the small diagnostic biopsy and the ultrasonic surgical aspiration that contains the bulk of the tumor. In research centers, frozen tissue of a brain tumor may also be available. This study compared ultrasonic surgical aspiration and FFPE specimens from the same brain tumors for retrieval of DNA and molecular assessment of amplified oncogenes.

# Methods

Surgical aspirations were centrifuged to separate erythrocytes from the tumor cells, which predominantly formed large, overlying buffy coats. These were sampled to harvest nuclear pellets for DNA purification. Four glioblastomas, 2 lung carcinoma metastases, and an ependymoma were tested. An inexpensive PCR technique, multiplex ligation-dependent probe amplification (MLPA), quantified 79 oncogenes using 3 kits. Copy number (CN) results were normalized to DNA from non-neoplastic brain (NB) in calculated ratios, [tumor DNA]/[NB DNA]. Bland-Altman and Spearman rank correlation comparisons were performed. Regression analysis identified outliers.

# Results

Purification of DNA from ultrasonic surgical aspirations was rapid (<3 days) versus FFPE (weeks) and yielded greater amounts in 6 of 7 tumors. Gene amplifications up to 15-fold corresponded closely between ultrasonic aspiration and FFPE assays in Bland-Altman analysis. Correlation coefficients ranged from 0.71 to 0.99 using 3 kit assays per tumor. Although normalized CN ratios greater than 2.0 were more numerous in FFPE specimens, some were found only in the ultrasonic surgical aspirations, consistent with tumor heterogeneity. Additionally, CN ratios revealed 9 high-level (≥6.0) gene amplifications in FFPE, of which 8 were also detected in the ultrasonic aspirations at increased levels. The ultrasonic aspiration levels of these amplified genes were also greater than a 6.0 CN ratio, except in one case (3.53 CN ratio).
Ten of 17 mid-level (≥3.0 and <6.0 CN ratio) amplifications detected in FFPE were also detected as increased (≥2.0 CN ratio) in the aspirations.

# Conclusions

Buffy coats of centrifuged ultrasonic aspirations contained abundant tumor cells whose DNA permitted rapid, multiplex detection of high-level oncogene amplifications that were confirmed in FFPE.

# Virtual slides

author: Long N Truong; Shashikant Patil; Sherry S Martin; Jay F LeBlanc; Anil Nanda; Mary L Nordberg; Marie E Beckner
date: 2012
institute: 1Department of Biological Sciences, Louisiana State University - Shreveport, One University Place, Shreveport, LA 71115, USA; 2Department of Neurosurgery, Baylor College of Medicine, 1327 Lake Point Pkwy, Suite 400, Sugar Land, TX 77478, USA; 3Delta Pathology Group, One Saint Mary Place, Shreveport, LA 71101, USA; 4Department of Emergency Medicine, Long Medical Center, Louisiana State University Health Sciences Center – New Orleans, 5825 Airline Hwy, Baton Rouge, LA 70805, USA; 5Department of Neurosurgery, Louisiana State University Health Sciences Center – Shreveport, Rm. 3-215, 1501 Kings Highway, Shreveport, LA 71130, USA; 6Feist-Weiller Cancer Center, Louisiana State University Health Sciences Center – Shreveport, Rm. B-215, 1501 Kings Highway, Shreveport, LA 71130, USA; 7Departments of Pediatrics & Medicine, Louisiana State University Health Sciences Center-Shreveport, Rm. 2-303, 1501 Kings Highway, Shreveport, LA 71130, USA; 8Delta Pathology Molecular Diagnostics, One Saint Mary Place, Shreveport, LA 71101, USA; 9Department of Neurology, Louisiana State University Health Sciences Center – Shreveport, Rm. 3-438, 1501 Kings Highway, Shreveport, LA 71130, USA
title: Rapid detection of high-level oncogene amplifications in ultrasonic surgical aspirations of brain tumors

# Background

Oncogenes encode proteins that promote tumor growth, survival under adverse conditions, and invasion. Many studies have detected amplified oncogenes in high-grade brain tumors, especially glioblastoma or glioblastoma multiforme (GBM). Therefore, assays to identify amplified genes are expected to become critical in patient care as monoclonal antibodies and small molecules that inhibit proteins encoded by oncogenes become available. Identification of amplified genes in a rapid, multiplex manner is relevant for stratifying patients to clinical trials and treatments. Brain tumor specimens for molecular studies include ultrasonic surgical aspirations available at the time of surgery and small, diagnostic biopsies that are processed as formalin-fixed, paraffin-embedded (FFPE) samples. Sometimes tumor fragments are resected and are also processed as FFPE samples. Although tissue frozen for storage at the time of surgery can also be released for DNA studies, it is not usually available outside of research settings. Ultrasonic surgical aspirations are not routinely used for diagnostic purposes and represent an untapped source of abundant, fresh tumor DNA. With laboratory support for processing cellular fluids and the expertise to confirm tumor cell content morphologically, pathology departments are well suited to provide cellular or nuclear pellets from ultrasonic surgical aspirations as the source of tumor DNA needed in molecular testing.
Surgical aspirations can quickly provide DNA, speeding up turnaround times and producing higher yields than FFPE sections of small biopsies.

Brain tumors debulked by ultrasonic surgical aspiration yield suspensions of single cells, minute tissue fragments, and blood in saline. Following introduction of this technique in the 1970s, ultrasonic aspiration became routine in microneurosurgery [1]. Unlike the characteristically small diagnostic biopsies of brain tumors, ultrasonic surgical aspirations contain ample tumor cells in large volumes of fluid. The ultrasonic apparatus consists of a computerized control unit with settings for different amplitudes of ultrasonic waves and speeds of irrigation and aspiration. Tumor tissues are targeted and selectively aspirated. An advantage of the ultrasonic aspiration technique over tissue dissection is the reduced need for retraction of normal brain in the patient. Accordingly, the use of ultrasonic surgical aspiration to debulk intracranial tumors is common and explains the unfortunate paradox of having only small diagnostic tissue biopsies from large brain tumors available for molecular studies outside of research settings. Although ultrasonic aspiration specimens have yielded viable tumor cells for experimental studies [2-4], their lack of tissue architecture greatly diminishes their usefulness in diagnostic pathology. For example, regions of tumor necrosis that are a useful diagnostic feature in high-grade tumors are prone to disintegrate during the aspiration procedure. Although some institutions preserve small portions of ultrasonic aspirations in FFPE blocks, the intact tissue fragments from biopsies are preferred and routinely relied upon for diagnostic evaluation of primary and metastatic brain tumors. Ultrasonic surgical aspirations, which contain the bulk of brain tumor tissue in a dispersed and disaggregated form, seldom provide the complete architectural features of the tumors that aid in determining morphologic diagnoses.

In this study, ultrasonic aspirations of brain tumors were tested to see if they would yield tumor DNA appropriate for studies to detect amplified oncogenes. Testing methods that identify amplified genes as potential treatment targets in individual tumors would ideally be performed as soon after surgery as possible. The genes tested with commercial MLPA kits are known to be amplified in association with malignancy. The amounts of DNA purified from ultrasonic aspirations of GBMs were found to be especially ample. These DNA samples were analyzed for amplifications in seventy-nine oncogenes, with results compared to those from FFPE-derived DNA and with fluorescence *in situ* hybridization (FISH) results for *EGFR* gene amplification in FFPE tissue sections. The MLPA assays on ultrasonic aspirations identified high-level amplified genes within a few days at low cost and served as a preview of the more sensitive FFPE assay results that became available later.

# Methods

## Patient Population, Specimen Retrieval and DNA Purification

Permission from our Institutional Review Board was obtained for retrieval and study of the excess ultrasonic aspiration and FFPE specimens taken from brain tumors resected in 2008–2009, in a pilot study of the use of these specimens for molecular tests. Ultrasonic aspirations from seven brain tumors, including four glioblastomas (GBM1-4), two lung carcinoma metastases (LCM1-2) and an ependymoma (EP1), were tested.
The patients (all male except for one female with a GBM) were 44 to 81 yrs of age. Samples were obtained in saline with a Cavitron Ultrasonic Aspirator Excel (Valleylab-Tyco International, Boulder, CO) set at 23 to 36 kHz. The volume (cc) of each tumor present preoperatively and postoperatively was estimated by the neurosurgeon. The ultrasonic surgical aspirations were not designated to be used for any diagnostic purpose. The specimens in this study were delivered to the laboratory within 24 hr of surgical resection, either fresh or after overnight refrigeration. Portions of ultrasonic aspiration specimens, 15 to 30 ml, were centrifuged at 500 g for 10 min. Tumor cells sedimented as large buffy coats above the erythrocytes and beneath floating necrotic debris. According to the manufacturer's recommendations (Qiagen Midi Kits, Valencia, CA), 500 μl of each buffy coat were pipetted and combined with purification kit components to obtain a nuclear pellet that was processed immediately or frozen for later purification. All specimens yielded measurable amounts of DNA. For comparisons in each tumor, DNA from routine FFPE sections (10 μm thick, 10 per tumor) was purified with a Qiagen FFPE DNA purification kit. In each case tumor cells represented at least 90% of nucleated cells in the tissue sections. Proteinase K digestions at 55°C were extended as needed to achieve digestion of the tissues. Purified DNA was evaluated and quantified by spectrophotometry using absorbance measurements at 260 and 280 nm (JENWAY Genova, Jenway Limited, Essex, England).

## Histological Review of Tumor Specimens

Diagnostic FFPE tissue sections, available from all tumors, were reviewed microscopically to characterize the pathological features. Portions of ultrasonic aspiration specimens that had also been processed as FFPE specimens were available. Hematoxylin and eosin stained FFPE sections were evaluated. The number of mitotic figures in ten high power fields (HPFs) (400X magnification) was determined for each tumor in duplicate counts. Also, stains for Ki67-reactive cells were performed at the time of diagnosis for some tumors using a pre-diluted anti-Ki67 antibody (Ventana Medical Systems, Oro Valley, AZ) for direct staining on an automated immunohistochemical stainer (Benchmark XT, Ventana Medical Systems). Pre-diluted, non-reactive mouse monoclonal negative control (Ventana) solutions were applied to separate tissue sections. The percentages of tumor cells positive for Ki67-specific staining were determined with an automated cellular imaging system (Chromavision ACIS, San Juan Capistrano, CA), with verification by manual counts at the time of review. If interference from background staining was present, manual counts were substituted.

## *EGFR* FISH

Inclusion of probe sets for *EGFR* in the MLPA kits permitted comparison of MLPA and FISH copy number (CN) data for this gene in each tumor. FISH probes for a control locus, 7p11.1-q11.1, D7Z1, and the *EGFR* band region, 7p11.2-7p12 (Vysis Locus Specific Identifier (LSI) *EGFR* SpectrumOrange/CEP 7 SpectrumGreen, Abbott Molecular Inc., Des Plaines, IL), were hybridized to interphase nuclei. Paraffin sections of all tumors and cytologic smears prepared from some of the ultrasonic aspiration specimens were examined. Smears from aspirations were fixed, denatured, and hybridized with probes overnight. Un-hybridized probes were washed away.
The fluorescent blue counterstain 4′,6-diamidino-2-phenylindole (DAPI; Abbott Molecular) stained the nuclei. Slides were scanned on a fluorescence microscope (Leica DMR, Wetzlar, Germany) for analysis, with images captured using a digital camera (Applied Imaging, San Jose, CA) and CytoVision v4.02 imaging software (Applied Imaging). Sections from FFPE tissue were cut at a thickness of five microns, deparaffinized (Paraffin Pretreatment Reagent Kit II, Abbott Molecular), processed on a VP2000 Processor (Abbott Molecular), and then FISH probes for *EGFR* and CEP 7 were applied and analyzed as described above. Amplifications of *EGFR* were recorded when average *EGFR* to CEP 7 signal ratios were greater than two for at least 20 cells (usually many more). The results were determined from averages of all cells that could be evaluated. Proteinaceous debris hindered FISH interpretations in some of the ultrasonic surgical aspiration specimens. Ratios of total *EGFR* to total CEP 7 signals per tumor, individual *EGFR*/CEP 7 signals per cell, and medians of the signal ratios per cell were described.

## MLPA

The PCR-based technique, MLPA, involved multiple steps, including exposure of DNA to gene-specific probe sets, enzymatic ligation, PCR, and capillary electrophoresis (CE) to separate PCR products. Briefly, as explained earlier [5], 200 ng of DNA from each tumor sample and normal brain in buffer were denatured at 98°C for 30 min in a thermocycler (Mastercycler personal, Eppendorf, Hamburg, Germany) and were then hybridized to MLPA probes (SALSA P171, P172 and P173 kits, MRC-Holland, The Netherlands) according to kit instructions. The manufacturer selected genes according to literature documenting amplification in tumors. In addition to gene-specific sequences, the probes also contained universal PCR primer sequences, X and Y, and stuffer sequences for subsequent identification of specific gene amplicons by size using CE. Ligase-65 (MRC-Holland) generated ligations specific for perfectly hybridized probe pairs (sets) at ligation junctions in the DNA, and the enzyme was then heat-inactivated. Ligated probes were PCR amplified with polymerase (MRC-Holland) according to instructions. PCR products were separated on a CEQ 8000 (Beckman-Coulter, Fullerton, CA) and were then identified according to the length of amplicons.

Fragment analysis of PCR products produced a series of peaks (fluorescence signals) whose heights corresponded to the relative quantities of PCR products, which were proportional to the initial amounts of DNA (or CN) of the targeted genes. Slightly less efficient amplification of longer amplicons accounted for minor reductions in their peak heights. All of the genes tested in the three MLPA kits, P171, P172, and P173, are listed as follows: *AKT1, AURKA, BCAR2, BCAS1-2, BCL2, BCL2A1, BCL2L1*, *11* and *13, BCL6, BCLG, BIRC1-5, BRAF, BRMS1, CCNA1, CCND1-2, CCNE1, CDK4* and *6, CENPF, CYP27B1, EGFR, EMS1, ERBB2* and *4, ESR1, EVI1, FGF3* and *4, FGFR1, FLJ20517, GNAS, GSTP1, HMGA1, IGF1R, IGFBP2, 4* and *5, IRS2, JAK2, MDM2* and *4, MET, MMP7, MOS, MYCL1, MYBL1* and *2, MYC, MYCN, NFKBIE, NRAS, NTRK1-3, PDGFRA* and *B, PIK3C2B, PIK3CA, PPM1D, PSMB4, PTK2, PTP4A3, PTPN1, RELA, RNF139, RUNX1, SERPINB2, 7* and *9, TERT* and *TOM1L2*.

Occasional failures to detect peaks in one of the eight capillary tubes during a CE run were attributed to low current when the peaks were recovered after repeating the CE.
Successful concurrent results for NB were required for a CE run of tumor samples to be analyzed further.

## Normalization of CN

At the time of this study, there were no known reference probes that could be relied on to maintain normal CN when detecting somatic changes in highly malignant tissues. Large numbers of genes are lost or amplified in malignant tumors, leading to considerable variability in the CN of individual tumors. Also, genetic variations and mutations in tumors, such as nucleotide deletions, insertions, or substitutions near the ligation junctions, could potentially impact ligation efficiency. In this study the result of each probe set in each tumor specimen was normalized to the CN for the same probe set in DNA from non-neoplastic brain (NB) assayed concurrently. Comparison of all tumors with the same source of NB controlled for assay-to-assay variation. Aliquots of NB were from an 82-year-old woman's normal occipital lobe (Biochain, Hayward, CA).

The CN ratios, or fold-differences from normal for each gene, are represented by the ratio [tumor DNA]/[NB DNA], derived from measurements of the CE peaks. The CN ratio was calculated after peaks of non-amplified genes on CE graphs, representing relative amounts of DNA, were matched with the average of two NB samples analyzed in the same assay run. Peak heights of non-amplified genes in tumor samples and NB were adjusted to approximately the same scale when graphical printouts from CE were produced. Final, finer adjustments were made in spreadsheets by maximizing alignments of trendlines for non-amplified genes. Scatterplots were evaluated to check the reactions of NB DNA with the MLPA probes against expected values provided by the manufacturer. Overlays of NB results obtained in GBM1's assays of FFPE, using P171, P172, and P173 kits, with graphs of normal values provided in the MLPA kits' literature, demonstrated comparable overall detection of the two populations of normal DNA data points (Figure 1).

Regression analysis of NB's scatterplots demonstrated that the result obtained using one of the two probe sets for *ERBB4* was an outlier. Among the P171 data points, *ERBB4*'s probe set result was at the 95% confidence limit for the overall NB sample population. All other probe set results fell above the lower 95% confidence limit for the NB sample. Also, three probe sets were slightly out of range in the opposite direction, so that missing a low-level amplification was a concern; however, other probe sets for these genes were within the 95% confidence limits, so that missing a significant CN gain after their results were averaged would be unlikely. The probe sets with results above the confidence limits included 1 of 2 for *IGFBP2*, 1 of 2 for *NTRK1*, and 1 of 4 for *IGF1R*.

The normal CN range was set at ≥0.75 to ≤1.50, with CN ratios ≥2.0 considered to be amplified. High-level gene amplifications in glioblastomas have previously been set at 6-fold greater than diploid, or 12 or more copies per nucleus [6]. In this study CN ratios ≥6.0 were likewise designated as high-level amplifications, and lesser amplifications were set at CN ratios of ≥3.0 and <6.0 for mid-level and ≥2.0 and <3.0 for low-level amplifications.
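To make the normalization and banding scheme concrete, the short sketch below computes a normalized CN ratio and assigns it to one of these bands. It is a minimal illustration only: the peak values and function names are hypothetical and are not part of the study's actual spreadsheet workflow.

```python
# Minimal sketch of the CN normalization and banding described above.
# A gene's tumor peak is divided by the matched non-neoplastic brain (NB)
# peak from the same run, replicates are averaged, and the ratio is banded.
# Peak values and names are hypothetical illustrations, not study data.

def cn_ratio(tumor_peak: float, nb_peak: float) -> float:
    """Normalized copy number for one probe set: [tumor DNA]/[NB DNA]."""
    return tumor_peak / nb_peak

def band(ratio: float) -> str:
    """Assign a normalized CN ratio to the bands defined in the text."""
    if ratio >= 6.0:
        return "high-level amplification"
    if ratio >= 3.0:
        return "mid-level amplification"
    if ratio >= 2.0:
        return "low-level amplification"
    if 0.75 <= ratio <= 1.50:
        return "normal"
    if ratio < 0.75:
        return "loss"
    return "gain below the amplification cutoff"  # >1.50 and <2.0

# Two replicate measurements of one probe set (hypothetical peak heights):
tumor_peaks = [8800.0, 9400.0]
nb_peaks = [1000.0, 1050.0]  # concurrent NB replicates in the same run

ratios = [cn_ratio(t, n) for t, n in zip(tumor_peaks, nb_peaks)]
mean_ratio = sum(ratios) / len(ratios)
print(f"CN ratio = {mean_ratio:.2f} -> {band(mean_ratio)}")
```

With the hypothetical peaks shown, the averaged ratio of about 8.9 would fall in the high-level band.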
Averaged results for two replicates of amplified and non-amplified genes in ultrasonic aspiration tumor DNA were compared to results for two replicates of genes in NB for each gene's probe set (or ligated pair). Results for multiple probe sets for a gene were averaged. MLPA analysis using the P171, P172, and P173 kits was performed on FFPE DNA from each of the 7 tumors in two replicates for 15 of the 21 assays, and in single replicates for 6 assays, owing to DNA depletion from the small specimens.

## Statistical methods and correction factors

Excel was used to prepare graphs for images, determine Spearman rank correlation coefficients, and perform regression analysis. Outlier identification was performed with regression analysis to detect data points at or beyond 95% confidence intervals for residuals. A Bland-Altman plot comparing MLPA results of cavitronic ultrasonic surgical aspiration and FFPE assays was generated with the statistical software R, version 2.10 (The R Foundation for Statistical Computing, http://www.R-project.org).

# Results

## DNA purification yields

Successful DNA purification was achieved for all of the tumors. The greatest yields of DNA (43.3 to 77.0 μg) were obtained from buffy coats of glioblastoma ultrasonic aspiration specimens. The FFPE sections from all of the tumors yielded less DNA (6.4 to 20.5 μg), as seen in Table 1.

DNA data according to sample type, cavitronic ultrasonic surgical aspiration (CUSA) or formalin-fixed, paraffin-embedded (FFPE) sections

| **Brain tumors** | **Specimen type** | **A260/A280 ratio** | **DNA yield per specimen (μg)** |
|:---|:---|:---|:---|
| GBM1 | CUSA | 1.38 | 49.1 |
|   | FFPE | 1.98 | 6.7 |
| GBM2 | CUSA | 1.84 | 77.0 |
|   | FFPE | 1.98 | 7.9 |
| GBM3 | CUSA | 1.84 | 43.3 |
|   | FFPE | 1.86 | 6.4 |
| GBM4 | CUSA | 1.67 | 44.6 |
|   | FFPE | 1.90 | 7.0 |
| LCM1 | CUSA | 1.85 | 17.9 |
|   | FFPE | 1.81 | 11.3 |
| LCM2 | CUSA | 1.16 | 14.6 |
|   | FFPE | 1.63 | 20.5 |
| EP1 | CUSA | 1.88 | 12.1 |
|   | FFPE | 1.55 | 10.1 |

GBM = glioblastoma, LCM = lung carcinoma metastasis, EP1 = ependymoma.

## Clinicopathologic features of the tumors

Features of the tumors were typical for their diagnostic classification as glioblastomas, metastatic tumors, and an ependymoma. Photomicrographs of the ultrasonic surgical aspirations (100X and 400X magnifications) and FFPE sections (400X magnification) illustrate the similarity of tumor cells in the two types of specimens (Figure 2). There were too few cases to evaluate whether characteristic morphologic profiles for amplified gene(s) in these tumors were present.
Mitotic figures among the glioblastomas were most numerous in GBM1, and mitoses among all tumors were most numerous in LCM2 (Table 2).

Selected clinicopathologic features of the brain tumors in this study

| **Tumor** | **Age & sex** | **Pre-op tumor volume (cc)** | **Post-op tumor volume (cc)** | **Mitoses per 10 HPF** | **Ki67-positive tumor cells** |
|:---|:---|:---|:---|:---|:---|
| GBM1 | 50 yr M | 80.8 | 19.1 | 46.5 ± 6.4 | 61% |
| GBM2 | 81 yr M | 51.4 | 0.0 | 19.5 ± 2.1 | 55% |
| GBM3 | 52 yr F | 124.0 | 96.1 | 10 ± 2.8 | 38% |
| GBM4 | 44 yr M | 35.6 | 0.0 | 18.5 ± 2.1 | Not tested |
| LCM1 | 55 yr M | 19.0 | 0.0 | 51.5 ± 9.2 | Not tested |
| LCM2 | 61 yr M | 6.5 | 0.0 | 64.5 ± 5.0 | Not tested |
| EP1 | 73 yr M | 20.6 | 1.38 | 0 | 10% |

The volumes of tumor were provided as estimates by the neurosurgeon. HPF = high power field (400X).

## *EGFR* FISH

FISH analysis of *EGFR* was used for correlation with the MLPA assay results. Signals were counted in the FFPE sections. The CN for *EGFR* was increased in the four glioblastomas, with ratios of *EGFR* to CEP 7 increased to values of more than 20 in at least some cells of each glioblastoma. In the metastatic tumors, averages of CNs for *EGFR* were 2.5 to 2.8 per cell, with *EGFR* to CEP 7 ratios remaining below 2.0. Distributions of the *EGFR*/CEP 7 ratios are shown in Figure 3A and illustrate "tailing off" in the data towards high-level amplifications in individual cells. Representative cells with amplified *EGFR* from the glioblastomas, and from one of the metastases lacking amplification, are shown in Figure 3B. The *EGFR*/CEP 7 ratios are listed for all tumors in Table 3. The CN of CEP 7 varied, with increased averages of 2.7 to 3.8 in the glioblastomas and averages of 2.5 and 2.6 in LCM1 and LCM2, respectively.

**FISH versus MLPA results for *EGFR***

| **Tumor** | **Total *EGFR*/total CEP 7 signals (FISH, FFPE)** | **Median *EGFR*/CEP 7 ratio (FISH, FFPE)** | ***EGFR* tumor/NB CN ratio (MLPA, CUSA), mean of 2 probe sets ± SD** | ***EGFR* tumor/NB CN ratio (MLPA, FFPE), mean of 2 probe sets ± SD** |
|:---|:---|:---|:---|:---|
| GBM1 | 7.3 | 7.2 | 8.8 ± 0.2 | 21.1 ± 2.0 |
| GBM2 | 8.6 | 9.3 | 22.5 ± 7.9 | 19.3 ± 7.2 |
| GBM3 | 10.3 | 9.0 | 10.5 ± 3.8 | 33.1 ± 17.2 |
| GBM4 | 11.1 | 11.6 | 19.5 ± 9.3 | 26.7 ± 6.5 |
| LCM1 | 1.0 | 1.0 | 1.2 ± 0.5 | 1.2 ± 0.4 |
| LCM2 | 1.1 | 1.0 | 2.3 ± 0.9 | 2.3 ± 1.4 |
| EP1 | 1.0 | 1.0 | 1.7 ± 0.7 | 2.9 ± 2.6 |

Although amplification of *EGFR* seen in individual cells of GBMs varied greatly and MLPA provided collective results from numerous cells, both techniques demonstrated corresponding large elevations in copy number for this oncogene in GBMs. Polysomy may have accounted for the low-level amplifications found with MLPA in LCM2 and EP1. Median ratios were derived from the distributions of signal ratios in individual cells. *EGFR* = *epidermal growth factor receptor*. CEP 7 = centromere of chromosome 7. FISH = fluorescence *in situ* hybridization. SD = standard deviation.
CUSA = cavitronic ultrasonic surgical aspiration. FFPE = formalin-fixed, paraffin-embedded tissue.

## MLPA molecular results

Productive CE runs were obtained for all tumors. Prior to normalization of tumor CN, the output of CE data from all four glioblastomas and one of the two metastases revealed obvious amplifications in at least one gene per tumor. Measurements of the peaks for each gene were connected to create line graphs. The obvious peaks representing high-level and some mid-level gene amplifications in the glioblastomas are shown in Figure 4. Genes are listed along the *x*-axes according to amplicon size within each kit (P171, P172 and P173). Easily recognized amplifications occurred for *EGFR*, *CDK4*, *MDM2*, *CYP27B1*, and *PDGFRA* in one or more GBMs, and also for *CCNE1* in one of the brain metastases (not shown), using DNA from ultrasonic surgical aspirations. Comparable amplifications of the same genes were also detected in corresponding FFPE specimens (not shown). Results for non-amplified genes in the tumors and in normal brain (assayed concurrently) constitute the baselines of the graphs in Figure 4. The *EGFR* data obtained with MLPA are included in Table 3 along with the FISH data.

Following NB normalization to generate CN ratios, Bland-Altman analysis (*n* = 889, derived from 127 data points/tumor × 7 tumors) found that data from ultrasonic surgical aspiration assays corresponded to data from FFPE assays for the same tumor within 1.96 SD limits, except for some of the amplifications that exceeded CN ratios of 15 (log value = 1.176). A normal CN ratio of 1 corresponds to a log value of 0 (Figure 5A). Correlation coefficients for the comparisons ranged from 0.71 to 0.99 in studies of each tumor according to the kits used. Consistency was high, with the exception of LCM2, which showed slightly less correspondence, as seen in Figure 5. The *n*'s were 42, 42, and 43 for the P171, P172, and P173 kits, respectively, in each tumor.

Normalized CN ratios for specific genes in glioblastomas, including those with high-level amplifications (CN ratio ≥6.0) and those with no alterations (all CN ratios ≥0.75 and ≤1.5), are shown in Figure 6 (A & B, respectively). Both the ultrasonic surgical aspiration and corresponding FFPE results from the same tumors are shown. The only high-level gene amplification in the brain metastases was *EVI1* (7.02 CN ratio) in LCM2 FFPE, but it was not amplified in the corresponding aspiration specimen. The metastases, LCM1 and LCM2, had 2 and 6 mid-level gene amplifications, respectively. The ependymoma (EP1) had only 1 mid-level gene amplification. Among the 10 genes with no alterations in any of the GBMs, including replicates of all probe sets for a specific gene, there were 9 genes that were also unaltered in EP1, and 9 and 6 genes that were unaltered in LCM1 and LCM2, respectively. Totals of all deviations from normal values (CN ratios <0.75 or ≥2.0) in ultrasonic aspiration and FFPE specimens are shown in Table 4. Alterations in CN were more frequent in FFPE than in ultrasonic aspiration specimens, 85% and 49% of total changes, respectively. However, FFPE did not completely encompass all of the changes. Restriction of some genomic gains and losses to only one type of specimen from the same tumor is consistent with tumor heterogeneity.
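For readers who want to reproduce this style of agreement analysis on their own paired data, here is a minimal sketch. The study itself used R 2.10 and Excel; the Python version below, and the example arrays, are hypothetical stand-ins for paired, NB-normalized CN ratios from the CUSA and FFPE assays.

```python
# Minimal sketch of a Bland-Altman comparison plus a Spearman rank
# correlation for paired CUSA and FFPE CN ratios. The study used R 2.10
# and Excel; this Python version and the example values are hypothetical.
import numpy as np
from scipy.stats import spearmanr
import matplotlib.pyplot as plt

# Paired NB-normalized CN ratios for the same genes (hypothetical values).
cusa = np.array([1.0, 1.1, 0.9, 2.3, 8.8, 1.2, 0.7, 22.5])
ffpe = np.array([1.1, 0.9, 1.0, 2.9, 21.1, 1.2, 0.6, 19.3])

rho, pval = spearmanr(cusa, ffpe)
print(f"Spearman rho = {rho:.2f} (p = {pval:.3f})")

# Bland-Altman on log10 CN ratios: a normal ratio of 1 maps to log10 = 0.
log_cusa, log_ffpe = np.log10(cusa), np.log10(ffpe)
mean = (log_cusa + log_ffpe) / 2   # average of the two assays per gene
diff = log_cusa - log_ffpe         # disagreement between the assays
bias = diff.mean()
sd = diff.std(ddof=1)

plt.scatter(mean, diff)
plt.axhline(bias, linestyle="--", label="bias")
plt.axhline(bias + 1.96 * sd, linestyle=":", label="1.96 SD limits")
plt.axhline(bias - 1.96 * sd, linestyle=":")
plt.xlabel("Mean of log10 CN ratios (CUSA, FFPE)")
plt.ylabel("Difference of log10 CN ratios (CUSA - FFPE)")
plt.legend()
plt.show()
```

In such a plot, points falling outside the dotted limits would correspond to discordant results like the very high-level amplifications noted above.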
Copy number changes in formalin-fixed, paraffin-embedded (FFPE) and cavitronic ultrasonic surgical aspiration (CUSA) specimen DNA

| **Tumors** | **Gains (CN ratio ≥2.0): total** | **FFPE** | **CUSA** | **Only FFPE** | **Only CUSA** | **Losses (CN ratio <0.75): total** | **FFPE** | **CUSA** | **Only FFPE** | **Only CUSA** |
|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|
| GBM1 | 18 | 17 | 7 | 11 | 1 | 16 | 12 | 6 | 10 | 4 |
| GBM2 | 10 | 10 | 5 | 5 | 0 | 21 | 17 | 9 | 12 | 4 |
| GBM3 | 13 | 13 | 7 | 6 | 0 | 21 | 18 | 8 | 13 | 3 |
| GBM4 | 10 | 8 | 8 | 2 | 2 | 15 | 13 | 7 | 8 | 2 |
| LCM1 | 7 | 7 | 5 | 2 | 0 | 15 | 12 | 11 | 4 | 3 |
| LCM2 | 15 | 13 | 5 | 10 | 2 | 32 | 22 | 18 | 14 | 10 |
| EP1 | 7 | 7 | 3 | 4 | 0 | 16 | 15 | 6 | 10 | 1 |
| Totals | 80 | 75 | 40 | 40 | 5 | 136 | 109 | 65 | 71 | 27 |

GBM1-4 had increased CN ratios for 22 genes (*AURKA, BCL2A1, BCL6, BIRC1, 2, 4, BRAF, CDK4, 6, CYP27B1, EGFR, ERBB4, ESR1, EVI1, GNAS, GSTP1, MDM2, NRAS, PDGFRA, PIK3CA, RNF139* & *SERPINB9*) and decreased CN ratios for 35 (*AKT1, BCAR3, BCAS1, BCL2, BCL2L13, BCLG, BIRC5, BRMS1, CCND1, CCNE1, CDK4, CENPF, EMS, FGF4, FGFR1, FLJ20517, GSTP1, IGF1R, IGFBP2, IRS2, JAK2, MDM4, MYBL2, MYC, MYCL1, MYCN, NFKBIE, NTRK1, PDGFRB, PIK3C2B, PTK2, PTP4A3, PTPN1, TERT* & *TOM1L2*). LCM1-2 had increased CN ratios for 16 genes that overlapped with those for GBM1-4 plus *CCNE1, MET, PPM1D* and *PSMB4*, and decreased CN ratios for thirty-six genes that overlapped with those listed for GBM1-4 plus *BCL2L1, BIRC3, CYP27B1, MMP7, NTRK2, NTRK3, PDGFRA, RELA* & *SERPINB2*. EP1 had increased CN ratios for genes among those for GBM1-4 and decreased CN ratios for those for GBM1-4 or LCM1-2 plus *CCND3, HMGA1* & *MOS*.

# Discussion

## Overview of tumor genomics

Brain tumors are known for harboring genetic abnormalities. Tumor genomes are being evaluated to varying extents by FISH, array comparative genomic hybridization (aCGH), single nucleotide polymorphism (SNP) arrays, specific mutation detection with various PCR methods, methylation studies, targeted sequencing of selected genes, and whole genome sequencing. Therapies are planned accordingly in some institutions. Ideally, molecular testing for potential treatment targets occurs at the time of diagnosis and again in tumor recurrences to indicate appropriate treatment alterations.

## Amplified oncogenes in tumor fluids

This study demonstrates that high-level amplified oncogenes can be quickly detected by inexpensive multiplex, PCR-based studies of cellular fluids using standard molecular laboratory equipment and techniques. Fluids with significant tumor cellularity offer the opportunity to retrieve fresh tumor DNA in abundant amounts. Advantages of testing the DNA of tumor cells in fluids rather than FFPE tissue sections include avoidance of exposure to formalin, heat, and organic solvents, and enrichment for tumor via pelleting, filters, and various other methods, such as those developed for isolating tumor cells from blood. Tumor cells in fluids not needed diagnostically constitute a potential source of DNA for molecular assays.
This study demonstrated detection of amplified oncogenes despite concerns regarding ultrasonic, mechanical, and osmotic distortions and the mixing of the tumor cells with saline and blood. Endothelial cells were the predominant type of non-tumor cell present, but these did not exceed what was originally present in the tumor parenchyma. A tendency for short sections of blood vessels to remain intact and sediment differently from tumor cells was noted but was not investigated.

## Adaptive Amplification and Co-amplification of Oncogenes

It is proposed that oncogenes can be adaptively amplified in tumor cells to increase key gene products while circumventing promoters and other traditional methods of gene regulation [7,8]. Tumor cells are already well known for developing resistance to chemotherapy by being permissive to amplification of a gene that encodes a key protein that aids drug metabolism [7]. Although genetic instability of brain tumors may account for some genes being amplified by chance, amplification of multiple oncogenes involved in a similar function, such as cellular proliferation or resistance to apoptosis, suggests an adaptive response. Co-amplification of multiple oncogenes that provoke redundant malignant behavior or adaptation to cancer treatments strengthens the rationale for planning multi-agent therapies based on identification of amplified oncogenes with multiplex techniques.

Although co-amplifications of several oncogenes have been reported in glioblastomas, the lack of predictable patterns in individual tumors indicates that each tumor needs to be tested. Despite a tendency for genes in close proximity to be amplified together in tumor genomes, the span of amplifications in these regions is variable and frequently interrupted. Amplified expanses of chromosomes include both "driver" and "passenger" or "bystander" genes in regard to their effects on tumor behavior. The benefits of amplified "driver" genes could underlie retention of amplified chromosomal regions in malignant tumor clones.

## Specific oncogenes amplified in glioblastomas

Several oncogenes are commonly amplified in brain tumors. Amplification of *EGFR* or one of its variants, *EGFRvIII*, whose encoded protein is constitutively active, is well known to occur in primary glioblastomas. Amplification of *EGFR* has also been found in anaplastic astrocytomas [9]. Gains of *EGFR* occurred in 70% of 40 glioblastomas in one study, with high levels of gene amplification occurring as double minutes in 42% of the cases. Lower levels of *EGFR* amplification occurred as insertions of extra gene copies distributed along chromosome 7 [10]. In a previous MLPA study of 104 glioblastomas, 74 (71%) had additional copies of *EGFR* [11]. Although high levels of *EGFR* most likely underlie some malignant adaptations in glioblastomas, only small subsets of patients have responded to therapies targeted to EGFR in clinical trials [12-15]. Five moderate- to high-level amplified genes in glioblastomas identified in this study, *EGFR, PDGFRA, CDK4, MDM2* and *CYP27B1*, have also been reported previously, sometimes with co-amplification [16-20]. Although the proximity of the loci for *CDK4*, *CYP27B1* and *MDM2*, at 12q14, 12q13.1-q13.3 and 12q14.3-q15, respectively, contributes to co-amplification, this cannot be assumed to occur for all three genes in individual tumors. In 20 glioblastomas that harbored at least one of these amplifications, only 7 had amplification of all 3 genes [20].
In one study, among 5 glioblastomas that contained amplification of either *MDM2* or *CDK4*, both genes were amplified in only 3 [18]. In another study, among the 5 glioblastomas that contained amplification of either *CDK4* or *MDM2*, only one tumor had both amplified [17]. In a survey of 456 glioblastomas, 13.4% had amplification of *CDK4* but only 9.2% had amplification of *MDM2* [16]. In our study, 2 of 4 glioblastomas had co-amplification of *CDK4* and *CYP27B1*, and only one also had amplification of *MDM2*. Amplification of *EGFR* was present in all four glioblastomas.

## Amplifications of oncogenes in other brain tumors

Brain metastases from lung primaries in this study also contained amplified oncogenes, suggesting that genomic analysis of metastases will detect amplified genes to serve as treatment targets, such as *CCNE1*, which encodes cyclin E1 [21,22]. Additional analyses of brain metastases are indicated to identify the full range of oncogenes that can be amplified. Interestingly, in a previous MLPA study of non-typical meningiomas, 19 oncogenes were found to have amplifications in two or more invasive/atypical/anaplastic (mostly Grade II) meningiomas (total of 15), and the sums of copy numbers were inversely correlated with patient age [5]. Some of those genes were among the high-level amplified genes detected here, but none of the amplifications in the non-typical meningiomas were in the high-level range. In this study, sums of CN ratios normalized with NB for the same 19 genes were higher in all 4 glioblastomas and in 1 of the 2 metastases than in the non-typical meningiomas studied previously (not shown).

## Implications of amplified oncogenes

As products of amplified genes become treatment targets, combinations of specific inhibitors and monoclonal antibodies will need to be tailored for individual patients according to the amplifications found in the tumor DNA. Additionally, there is a tendency for amplified oncogenes to undergo mutations. Therefore, testing each brain tumor for oncogene amplifications would be useful for detecting key molecular treatment targets.

## Methods for assessments of oncogenes

With multiplex PCR methodology, rapid assessment of a relatively large group of oncogenes can be obtained using routine molecular laboratory techniques and equipment. In comparison to FISH assays, numerous genes can be tested simultaneously, with comparisons to multiple non-amplified genes, whereas FISH is limited to far fewer target genes. However, in this study FISH did offer the advantage of detecting centromere duplication as a surrogate for chromosomal duplication, so that low levels of *EGFR* amplification attributable to polysomy could be predicted. In comparison with the specificity of other PCR-based assays, amplification occurred with MLPA only if the probes hybridized to the gene targets in pairs (probe sets) and then underwent enzymatic ligation based on perfect sequence correspondence at their junctions with the tumor DNA. Also, the use of only one-fourth of the reaction volume for PCR in MLPA further reduced the chances of non-specific carryover of PCR amplicons from previous reactions. However, our study did not compare MLPA with other multiplex PCR techniques. Quantitative real-time PCR using multiwell plates is among the other rapid molecular techniques to consider for testing multiple oncogenes simultaneously at low cost.

The validity of MLPA has been supported by other techniques.
Detection of gene amplification (*ERBB2* or *HER-2/neu*) with fluorescent and chromogenic *in situ* hybridization has closely correlated with MLPA results [23,24]. Results of *EGFR* amplification with MLPA have been comparable to detection of increased protein and RNA expression using immunohistochemistry and non-MLPA PCR, respectively [10]. In this study, FISH confirmed MLPA results for the presence or absence of mid- and high-level *EGFR* amplification. Correlations were not quantified, owing to the potential for FISH signals to merge in tandem repeats or as aggregates of double minutes, and also to the heterogeneity of results in individual tumor cells.

The cost of kit reagents in a single MLPA assay is less than $15. Although testing a tumor and normal brain DNA with two replicates each in three reactions to encompass probes for 79 genes increases the cost, multiple tumors could be tested with the same control DNA. Also, the number of genes to be screened could be reduced so that only 1 kit would be needed. Although high-level gene amplification results were obtained from the ultrasonic aspirations within a few days and the content of tumor DNA was plentiful, it should be noted that the FFPE assays demonstrated higher sensitivity in detecting low to mid-level gene amplifications. Therefore, ideally both types of specimens would be tested if detection of all levels of oncogene amplification is desired. Multiplex detection of high-level amplifications using both types of samples strengthens confidence in the results at a relatively low cost and helps to allay concerns about tumor heterogeneity.

Although DNA from patients' blood cells has been used as a normal control when searching for somatic genetic alterations in tumors, quantifying high-level amplified oncogenes poses a problem when using blood samples. Very low numbers of circulating tumor cells with high-level amplifications, or their free DNA in the blood, could bias analyses. Thus this study used normal brain DNA whose results were comparable to the normal values expected with MLPA, as provided by the manufacturer. Pooled DNA from multiple normal donors should be considered for future studies. Based on this pilot study, automation of steps in the MLPA procedure and subsequent data analysis are future goals, so that this type of assay can be streamlined, validated, and used clinically.

# Conclusions

In summary, multiplex detection of high-level amplifications among oncogenes was successful in ultrasonic surgical aspiration DNA obtained from malignant brain tumors when compared with FFPE from the same tumors. The results indicate that morphologic evaluation of ultrasonic surgical aspirations to confirm tumor cell content, followed by retrieval of DNA, would aid molecular testing of brain tumors for oncogene amplifications. Many of the oncogenes with copy number gains encode proteins that are potential therapeutic targets. Therefore, rapid identification of high-level gene amplifications could stratify patients to clinical trials and treatment plans shortly after surgical resection of brain tumors. Although ultrasonic surgical aspiration specimens are less sensitive than FFPE for detecting low to mid-level amplifications, the bulk of the tumor is present in the aspirations, they are fresh and homogeneously mixed, and high-level amplifications can be detected in them.
Abundant tumor DNA harvested from cellular fluids could also be used for targeted sequencing of amplified oncogenes to detect activating mutations.

# Abbreviations

CE: Capillary electrophoresis; CGH: Comparative genomic hybridization; CN: Copy number; CUSA: Cavitronic ultrasonic surgical aspiration; DAPI: Diamidino-phenylindole; EP1: Ependymoma; FFPE: Formalin-fixed, paraffin-embedded; FISH: Fluorescence in-situ hybridization; GBM: Glioblastoma multiforme; HPF: High power field; LCM: Lung carcinoma metastasis; MHz: Megahertz; MLPA: Multiplex ligation-dependent probe amplification; NB: Non-neoplastic (normal) brain; PCR: Polymerase chain reaction; SNP: Single nucleotide polymorphism.

# Competing interests

The authors declare that they have no competing interests.

# Authors' contributions

MEB, LNT, SSM, and JFL carried out the molecular genetic studies. MEB, LNT, and SP drafted the manuscript. SSM and MLN performed the FISH assays. MEB, SP, MLN and AN participated in the design of the study. SP, JFL, and AN participated in and coordinated specimen and clinical data retrieval and characterization. MEB conceived and coordinated the study. All authors read and approved the final manuscript. This study was conducted with approval from the Institutional Review Board (IRB) of LSUHSC-S.

## Acknowledgements

We acknowledge Lee Ellen Brunson, Dwain D'Souza, JoAnn Dismuke and Neelam Joshi, Department of Pathology, Louisiana State University Health Sciences Center-Shreveport (LSUHSC-S), for advice and technical assistance, and also Ashley B. Flowers, Kristopher M. Katira, and Raj B. Patel, while medical students at the LSUHSC-S School of Medicine, for technical assistance. We thank the Feist-Weiller Cancer Center and the Department of Pathology, LSUHSC-S, for providing equipment. This study was funded by an LSUHSC-S Grant-in-Aid Award (2008–2010) to Marie E. Beckner, MD, with the purpose of encouraging high-value intramural research.

abstract: Peer review is the foundation of quality in science, but now, in the US, the system is broken. What's the best way to fix things?
author: Gregory A Petsko
date: 2006
institute: 1Rosenstiel Basic Medical Sciences Research Center, Brandeis University, Waltham, MA 02454-9110, USA
title: Instructions for repair

Last month I wrote about the sharp decline in the success rate for scientific research proposals submitted to the US National Institutes of Health (NIH) and other agencies. That column provoked numerous responses from both administrators of the funding organizations and life scientists. The administrators, while not denying some of the problems I discussed, argued that things aren't quite as bad as they seem, and that a large part of the difficulty stems from sizeable increases in the number of grant applications and the amounts requested, rather than from poor choices in managing the doubling of the NIH budget that took place not long ago.
The scientists, on the other hand, all said that things were even worse than I had claimed.

Care has to be taken in drawing conclusions from either of these sources. I'm sure that people who have experienced difficulty in obtaining funding are more likely to respond to that column than those who've had success. And administrators probably feel the need to defend themselves, and their agencies, from what they might, with some justification, see as an attack by someone who doesn't know the whole story the way they do.

Nevertheless, although I think both sets of comments are useful, I also think both largely missed the point. People who wrote to me were all concerned, in one way or another, with the amount of money available for research and how it is being allocated. That's what seems to be on everybody's minds, and it's certainly worth talking about. Whether or not we're allocating the available funding sensibly is something that ought to be engaging officials as well as researchers in an ongoing dialog about priorities in science. (But that dialog isn't taking place. Somehow it just seems easier to keep asking for more money.) Yet, that wasn't the main point of the column. What concerns me is that, whether there really is a crisis in scientific funding or not, the perception that there is - and believe me, that is the perception on the part of just about every researcher I have talked to - has crippled the peer-review system.

Peer review is the foundation of quality in science. It prevents widespread cronyism and slowly weeds out unproductive lines of inquiry. But it requires that reviewers be both fair and wise. When the perception is that there's not nearly enough money to fund even all of the highest-quality proposals, a defensive turf-protection replaces a spirit of curiosity and egalitarianism. When it seems as if the primary job of a reviewer is to eliminate most proposals rather than to fight for the good ones, nit-picking replaces generosity. When the feeling is that every dollar counts so much that no risks dare be taken, conservatism and incremental advances get rewarded at the expense of bold new ideas. And when all of these things happen - and I believe they are happening, now, in the US - then the system is broken.

Societies based on scarcity tend to become hierarchical, with a well-fed elite and starving masses. As can be seen from publicly available data, during the recent doubling of the NIH budget over a seven-year period, the number of investigators getting funded changed very little. Where did the money go? Besides a very large increase in the funding for NIH's own intramural research program, it seems to have gone to large increases in funding for established investigators who renewed their grants successfully during this period, or wrote additional ones. Instead of bringing lots of new people into the system, we ended up with more money for roughly the same set of grant holders. Now that funding is tight, those bloated operations are under tremendous pressure to at least maintain their size, which makes it even more difficult for new investigators - or new ideas - to enter the system. The average age at which a scientist receives his or her first NIH grant in the US is currently 42 for PhDs (even older for MDs), and in this time of perceived scarcity a broken peer-review system is not likely to change that.

What's the best way to fix things?
It could be argued that the problem is temporary, and that when funding loosens up again, as it always has in past boom-bust cycles, peer review will recover along with everything else. After all, that's what happened in the 1970s. No need to tamper with the system. Time will take care of the problem.

I have my doubts. There's one big difference between peer review in 1975 and peer review today: the number of senior investigators participating in the process. Back then most review panels had a preponderance of such scientists, who provided the system with institutional memory of the way things were supposed to work. Nowadays, most established investigators feel they are too busy to put in the considerable time required to deal with the glut of proposals that every panel faces. The result is that less experienced scientists, with no history of a different gestalt, are being fed into a system where fault-finding and conservatism are the norm, so when the funding situation improves, there's no guarantee that the peer-review system will improve with it. (If you doubt this, consider the former Soviet bloc. When communism collapsed there in 1989, countries like Poland, Hungary and Czechoslovakia, where there was a generation of people who still had a memory of how a market-based economy should work, did much better than Russia, where no one alive had experienced any system but communism.) In addition, the insistence that the composition of the panels must satisfy a requirement for geographic and institutional balance means that it's hard to have a large number of top scientists on any panel, even if they wanted to serve.

So my first repair instruction is simple: Do away with the misguided concept of balance, and require that all holders of research grants serve at least one year on a reviewing panel for every five years of funding they receive, regardless of seniority. Renewal of funding would be contingent on fulfillment of this service. If there is a surplus of available talent, then grants administrators could forgive the obligation for any given five-year cycle, but the requirement would kick in again when a grant was renewed. There would need to be a mechanism to deal with people who hold multiple grants - perhaps they would only incur a single one-year debt for every five years of total funding, or the length of service could scale with the total budget; these details can be worked out. The important point is to create a pool of the best researchers, and to make sure that they represent the majority on all peer-review panels. As a dyed-in-the-wool advocate of personal freedom, the coercive aspects of this suggestion do trouble me somewhat, but it isn't really all that different from the way things work in the other main form of peer review - the jury system.

My second idea for how to fix things is meant to address the problem of reviewer morale. When someone is given twelve grants to review, and knows that there is only a small probability that even the best one is actually going to be funded, he or she rapidly becomes discouraged. It's even more depressing when some less knowledgeable reviewer nit-picks one's best proposals, and depression is not the best mindset from which to make judgments. I suspect the program officers at the funding agencies must feel equally demoralized: it's no fun having to say "no" all the time, and to watch conservative study sections pass over the most exciting new ideas in favor of more of the usual.
The solution, I think, is to give the program officers more autonomy in funding decisions. Some NIH institutes and centers claim that they do this, but in practice I have found that program officers rarely go against the recommendations of the reviewing panels. I suggest taking at least 10% of the budget of each institute or center and allowing the program officers to use it to fund grants that they believe to be exciting but that would otherwise miss the payline cut-off. They would need to justify each decision to the council, of course, but this suggestion would empower them to rectify some of the worst mistakes of the panels. In my experience, funding officials tend to be bright, committed individuals with a good broad knowledge of their field; I have no hesitation in giving them more autonomy. This is the way things actually work at the National Science Foundation, a funding agency that many believe has a better long-term history of supporting innovative research than does the NIH.

There also needs to be a way to improve the judgments coming out of the panels. Having more experienced reviewers would help, but it's hard to deal thoroughly and fairly with each proposal when the number being reviewed has increased so greatly. The way to solve the problem of proposal overload is to reduce proposal size. NIH proposals now are limited to 25 pages for the scientific description (that includes background and significance, progress during the past budget period, and the plan for future research). I think that should be shortened to 15 pages. If you can't describe clearly in 15 pages what you've already done, what you intend to do, and why it's important, you probably can't do it in 50.

But I think the proposals should be structured differently for different investigators. Scientists submitting their first proposal need to spend more space detailing how they are planning to carry out the work than established investigators should. In fact, I would argue that established investigators shouldn't have to describe their proposed methods in any detail at all, except if these are novel. To ask someone who has demonstrated for years that they can deliver the goods to prove that they know what they're doing is silly and borders on insulting. It also provides the nit-pickers with extra ammunition. People who tout stocks are constantly warning investors that past performance is no guarantee of future returns. But there is one area where it is: scientific research. The best predictor I know of as to whether a project will work is the track record of the principal investigator. Someone who has been consistently successful is not likely to fail, even when doing something risky. We need to stop pretending that isn't true. Most organizations that award pre- and post-doctoral fellowships spend very little time picking over the details of the applicant's research proposal, because they know that these young people haven't had any experience writing proposals and anyway usually end up doing something different, or in a very different way, from what they propose. Instead, fellowship reviewers tend to consider the qualities of the individual to be the most important factor on which to base their judgments. I think that makes sense at all levels of science. We need to be much less concerned with the details of projects, and put our bets on people and ideas.

While we're waiting for funding levels to improve, we need additional mechanisms to get young people started.
The observation that, while investigator funding went way up during the NIH budget doubling, the number of investigators changed very little, suggests that we should consider putting a cap on the size of each award so as to make more money available for funding new projects and people. This is a serious matter, because it potentially has an impact on current employees, so if we implement a cap we will need to phase it in gradually. I am not proposing that we limit the total amount of funding that an individual can have - I think if someone can justify the need for millions of dollars to do first-rate science they should be able to obtain it. But I do think that we should exercise more scrutiny in such cases, and one way to do that is to force someone who claims to need, say, a million dollars to support a project to submit two or three proposals instead of one. I also think we will do better science, as a community, if we have more individual investigator-initiated projects and fewer mega-sized 'me-too' programs. Most innovation comes from small projects by relatively new people.

Two final ideas pertain to the machinery of the reviewing process itself. Turf protection is one of the biggest problems in peer review: as fields try to survive in a time of scarce resources, they often fight to fund their own mediocre science at the expense of quality in other areas. This largely stems from the personal and professional relationships that develop among members of a particular discipline. It's less of a problem when there's more money to go around, but right now we need to fight it. Here's a heretical and possibly crazy idea: I think we should consider not allowing people to review grants in their own field. Instead, they should only be allowed to comment on any questions of technical feasibility that come up during the review. This may seem absurd, but I'm not sure it is. If we follow my suggestion to bet on people rather than projects, detailed technical expertise isn't so important. And if we have the best, most experienced people back on our review panels, they usually will have a pretty broad knowledge of genomics, or biology, or whatever the main subject is. That will allow them to assess the importance of the proposed research and the impact of the applicant's previous contributions, which I maintain are the only two criteria that really should matter. Reviewing outside one's primary area of technical expertise happens all the time on fellowship panels, and they usually make pretty good decisions. After all, you don't have to be able to lay an egg in order to tell a good one - or to smell a bad one.

The second procedural change we should consider is aimed at addressing the issue of possible bias in the system. In times of scarce dollars, reviewers worried about their own chances of obtaining funding have an incentive to prevent others from being funded. Even if we assume such a thing rarely happens, we should want to ensure that proposals are reviewed wisely as well as fairly. I think the best way to guarantee the quality of the peer-review process is to review the reviewers. The data to do so exist, because there is a record of how every member of a panel has voted on every grant. Since most panelists only read their assigned proposals in detail, we need only be concerned with how a reviewer's scoring of such applications compares with the average score awarded to those same applications by the other assigned reviewers.
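To make the bookkeeping concrete, here is a minimal sketch of that comparison - purely hypothetical, not anything the agencies actually run - assuming score records of the form (reviewer, proposal, score); any flagging threshold would be layered on top of the per-reviewer averages it returns.

```python
from collections import defaultdict
from statistics import mean

def mean_deviation_per_reviewer(scores):
    """scores: iterable of (reviewer, proposal, score) records for the
    proposals each reviewer was assigned. Returns each reviewer's average
    deviation from the mean score given by the other assigned reviewers."""
    by_proposal = defaultdict(list)
    for reviewer, proposal, score in scores:
        by_proposal[proposal].append((reviewer, score))
    deviations = defaultdict(list)
    for entries in by_proposal.values():
        for reviewer, score in entries:
            others = [s for r, s in entries if r != reviewer]
            if others:  # need at least one other reviewer to compare against
                deviations[reviewer].append(score - mean(others))
    # A consistently large positive or negative mean suggests a pattern,
    # not a single legitimate difference of opinion.
    return {r: mean(ds) for r, ds in deviations.items()}
```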
Abnormally high or low scores would not be damning in and of themselves (there's plenty of room for legitimate differences of opinion in science) but a consistent pattern of low or high scores could indicate either poor judgment or bias. You may wonder how to be sure about such an evaluation, but it's actually easy, because we can compare each reviewer with him or herself. Bias or territoriality should be relatively easy to detect by examining how a suspect reviewer treats the same grants when they are resubmitted after revision. Since unsuccessful applicants try hard to answer the criticisms raised by the previous review, the scores of resubmitted grant proposals should improve, on average. If a reviewer's scoring on such resubmissions remains abnormally low compared with other reviewers who are also seeing the proposal for a second time, then there is reason to question the impartiality, or the judgment, of that reviewer, and they can be eased off the panel. There might even be no need to evaluate every reviewer all the time: random checking might be all that is needed to discourage trying to rig the game.

When fear and discouragement drive peer-review decisions, then the system is broken. But it's not broken beyond repair. The suggestions I've offered here can help mend it; at the very least, I hope they will start a dialog about what should be done. The worst thing we can do as a community is to throw up our hands in despair or pretend that everything will right itself magically when more money becomes available. Peer review is too important to give up on, or be left to chance. If we're serious about funding the best science and making our profession attractive to the brightest, most creative young minds, then we need to fix the system so that it once again serves those ends. Let's get to work.

abstract: Lessons in personal genome analysis, social networking or health information?
author: Gregory A Petsko
date: 2009
institute: 1Rosenstiel Basic Medical Sciences Research Center, Brandeis University, Waltham, MA 02454-9110, USA
title: What my genome told me - and what it didn't

Well, it turns out I'm not descended from Genghis Khan. I'm sure that's as surprising to you as it is to me. I mean, according to what we hear from people who use genomics to track human migrations, a huge percentage of the human race is actually descended from Genghis Khan. But not me.

That's one of the things I learned when I submitted a sample of my DNA for genome-wide single nucleotide polymorphism (SNP) analysis by one of the companies that have sprung up to perform such tests for ordinary individuals for a fee.
I was curious to see what sort of information they provide, and wanted to know something about my own genomic makeup, to be honest, so I followed the directions of the company I had selected and spat into a plastic container until I produced the required volume of saliva, mailed it in, and awaited the results. Would I have an allele that doomed me to a rare genetic disorder as I got older? Was I at much higher than normal risk for heart disease, diabetes, or any of the other thousand natural shocks that flesh is heir to? Was I descended from Genghis Khan?

The company I sent my saliva sample to doesn't actually do any DNA sequencing or hybridization itself; it contracts this out to a specialist laboratory. Once the lab received my sample, DNA was extracted from cheek cells in the saliva and amplified by PCR to produce enough material for the genotyping step. Next, the DNA was cut by restriction digestion into smaller, more manageable pieces. These DNA pieces were then applied to a DNA chip, which in this specific case is a small glass slide with millions of microscopic beads on its surface. Attached to each bead are the probes - bits of DNA complementary to those specific sites in the human genome where important SNPs are located. There is a pair of probes for each SNP, corresponding to its 'normal' and 'non-normal' versions. Hybridization to the particular probe, detected by fluorescence just as in the case of any other DNA chip experiment, serves to identify the allele.

The DNA chip that this particular company uses reads 550,000 SNPs that are spread across the entire genome. Although this is still only a fraction of the 10 million SNPs that are estimated to be in the human genome, these 550,000 SNPs are specially selected 'tag SNPs' - because many SNPs are linked to one another, the genotype at many SNPs can often be determined by looking at one SNP that 'tags' its group. This tagging procedure maximizes the information from every SNP analyzed, while keeping the cost of analysis low.

In addition, all the DNA analysis companies have hand-picked tens of thousands of additional SNPs of particular interest from the scientific literature and added their corresponding probes to the DNA chip. These SNPs include risk factors for common and rare human diseases, genetic traits such as color blindness, and so on.

Access to the resulting data is through the company's website, which includes the ability to download the entire set of SNP information. Once I was notified that my results were in, I did that, and being a scientist I performed my own bioinformatics on the data, but the website actually does a pretty good job of providing the customer with specific information about alleles for various illnesses, physical traits, and so on.

Here are a few of the things I learned about myself, physically speaking:

According to my genome, my eye color is likely to be brown (good guess). I should be lactose tolerant (I am). My cytochrome P450 data show that I would be quite sensitive to the anti-clotting drug warfarin if I ever had to take it (which I hope I never do - it's a nasty drug). The SNPs in my androgen receptor gene say that I am considerably decreased in risk for male pattern baldness (I have news for them; I'm getting rather thin on top). I have a SNP in a dopamine receptor gene that, in one German study, was found to be associated with reduced efficiency in learning to avoid errors (unless I got the facts wrong).
According to a single SNP in one gene associated with insulin metabolism, I have increased odds of living to be 100 (that is, if all the mistakes I don't learn to avoid don't get me first). There are a number of SNPs that have, in some studies, been associated with increased athletic performance (faster running, quicker reaction times, and so on). I don't have any of them, which will come as no surprise to any of my gym teachers.

I am at slightly increased risk, relative to the norm, for rheumatoid arthritis and psoriasis (that last one is interesting, because my father suffered from it). I am at slightly decreased risk for Celiac disease, Crohn's disease, type 1 diabetes, and prostate cancer. In all cases, the change is small - less than a two-fold difference - and not enough to cause me to consider any lifestyle changes.

But one thing did jump out at me when I looked at my data. I have a guanine at rs1799945, which is located in the gene coding for a protein called HFE. HFE is the protein mutated in hereditary hemochromatosis. Hereditary hemochromatosis, the most common form of iron overload disease, is an autosomal recessive genetic disorder that causes the body to absorb and store too much iron. Excess iron is stored throughout the body in organs and tissues, including the pancreas, liver, and skin. Without treatment, the iron deposits can damage these organs and tissues. There are two primary variants that give rise to this disease.

Genetic variant 1 (C282Y/rs1800562) is in the *HFE* gene. The *HFE* gene makes a membrane protein that is structurally similar to MHC class I-type proteins and associates with β2-microglobulin. It is thought that HFE helps cells in the intestines, liver, and immune system control iron absorption by regulating the interaction of the transferrin receptor with transferrin. The C282Y substitution disrupts the interaction between HFE and its β2-microglobulin light chain and prevents cell-surface expression. Pamela Bjorkman's 2.6 Å resolution crystal structure of HFE confirms that, as predicted from its sequence, Cys282 (residue 260 in the mature form of the protein) is involved in a disulfide bridge analogous to those found in class I MHC α3 domains. Loss of the disulfide destabilizes the native fold of the protein. The second most common variant is also in the *HFE* gene. It is a change of histidine 63 to aspartic acid. In the crystal structure of HFE, His63 (41 in the sequence of the mature form) is involved in a crucial salt bridge, which would be destroyed by mutation to a negatively charged residue, thereby also destabilizing the protein. Thus, like so many other hereditary disorders, hemochromatosis is a protein conformational disease.

In the US, variant 1 is the more frequent. The 'normal' Cys282 allele has guanine at this position on both chromosomes and is found in about 876 out of 1,000 people of European ancestry. The most common form of hereditary hemochromatosis is typically associated with people homozygous for adenine at both positions; this occurs in about 4 out of 1,000 people of European ancestry (0.4%). However, penetrance is incomplete: only about a third to a half of the homozygotes will show elevated iron levels, and perhaps fewer than 10% of the males (and 1 to 2% of the females) will develop the full clinical symptoms of the disease, which include joint pain, fatigue, abdominal pain, liver dysfunction, and heart problems.
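Checking one's own raw download for these two variants takes only a few lines of scripting. The sketch below is hypothetical: it assumes a tab-separated export with columns rsid, chromosome, position and genotype, which may not match any particular company's actual format.

```python
# Hypothetical lookup of the two HFE variants discussed above in a raw
# genotype export; the file layout is an assumption, not a published format.
HFE_SNPS = {"rs1800562": "C282Y", "rs1799945": "H63D"}

def hfe_genotypes(path):
    calls = {}
    with open(path) as fh:
        for line in fh:
            if line.startswith("#"):
                continue  # skip comment/header lines
            fields = line.rstrip("\n").split("\t")
            if len(fields) < 4:
                continue
            rsid, genotype = fields[0], fields[3]
            if rsid in HFE_SNPS:
                calls[HFE_SNPS[rsid]] = genotype
    return calls

# e.g. {"C282Y": "GG", "H63D": "CG"} would indicate an H63D heterozygote.
```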
As Ernest Beutler has pointed out, the hemochromatosis mutation is relatively common; the hemochromatosis disease is rare. Mutation of the HFE gene is a necessary, but not sufficient, condition. The challenge, as in the case of so many diseases in the age of genomics, is to understand what other genetic, epigenetic, and environmental factors determine why a few homozygotes for the C282Y (or H63D) mutations develop severe iron-storage disease, while the majority go through life pretty much unscathed by this genotype.

Heterozygotes for C282Y have an adenine on only one chromosome and represent about 120 in 1,000 people of European ancestry; they almost never develop clinical symptoms. Heterozygotes for H63D are less common, but also are unlikely to develop clinical symptoms. Like one person in ten in the United States, I am a hemochromatosis carrier. I am heterozygous for H63D.

Now that I know that, what does it do for me? Not much, I guess, but I will remember it, and should I ever develop any of the symptoms of iron overload, I will probably tell my physician to check my iron levels. Maybe that's worth knowing.

But if you go to the website of the company that did my analysis, you will find that the sort of information I've talked about here is actually not that prominently displayed. What is displayed is all manner of data connected to ancestry. I spoke to the CEO of the company, and she confirmed that, much to their surprise, people who use their service are much more interested in tracing their roots, genetically speaking, than they are in things related to their health or physical condition. The site offers several tools for connecting yourself with others who share your genetic ancestry. In other words, for the present, the primary use of personal genome-wide SNP analysis is social networking.

My maternal haplogroup is T2b2. Haplogroup T originated about 33,000 years ago in the Middle East, as modern humans first expanded out of eastern Africa. Its present-day geographic distribution is strongly influenced by multiple migrations out of the Middle East into Europe, India, and eastern Africa after about 15,000 years ago. T2 is currently widespread in northern Africa and Europe. My mother's family most recently came from Italy, so I guess this makes sense. You can find famous people with your haplogroup on the site: for example, if your maternal haplogroup is H4a, you have the same type as Warren Buffett, one of the richest men in the world. You'll be delighted - and perhaps not surprised - to know that the only famous person the site lists as sharing my maternal haplogroup is the notorious old-west outlaw Jesse James.

My paternal haplogroup is I2. Haplogroup I2 is most abundant in eastern Europe and on the Mediterranean island of Sardinia, where it is found in 40% of the male population. Like its brother haplogroup, I1, I2 expanded northward at the end of the Ice Age about 12,000 to 14,000 years ago. But unlike I1, which expanded from the Iberian peninsula into northwestern Europe, I2 radiated outward from the Balkans and southwestern Russia into the eastern half of the continent. That again makes sense, as my father's family were Cossacks. If my paternal haplogroup were the extremely common C3, I would be descended from Genghis Khan. I'm not. If it were type T, I would share a paternal lineage with the great American president and founding father Thomas Jefferson. I don't.
In fact, the company website doesn't list a single famous person with paternal haplogroup I2 (unless you count me, of course).

So now, thanks to my own personal genome SNP analysis, I know that I'm not likely to be exceptionally athletic and that I'm not a blue-eyed balding blonde, neither of which comes as any surprise whatsoever. But I have also learned that I'm not descended from Genghis Khan. So I've got that going for me. Which is something, I suppose.

abstract: Aortic valve replacement (AVR) is the gold standard for the treatment of severe symptomatic aortic stenosis. Complications directly related to the surgical procedure are relatively infrequent. Coronary ostial stenosis is generally reported as a late complication. Anecdotal reports describe coronary ostial stenosis as an acute complication. A unique fatal case of intraoperative, bilateral coronary ostial obstruction by a prosthetic valve, leading to an extensive myocardial infarction, is reported. Surgeons must have a high level of vigilance regarding the occurrence of acute myocardial ischemia and sudden death soon after AVR.
author: Emanuela Turillazzi; Gabriele Di Giammarco; Margherita Neri; Stefania Bello; Irene Riezzo; Vittorio Fineschi
date: 2011
institute: 1Department of Forensic Pathology, University of Foggia, Ospedale Colonnello D'Avanzo, Via degli Aviatori 1, 71100 Foggia, Italy; 2Institute of Cardiac Surgery, University of Chieti, Via dei Vestini, 29, 66100 Chieti, Italy
references:
title: Coronary ostia obstruction after replacement of aortic valve prosthesis

# Background

Aortic valve replacement (AVR) remains the gold standard for the treatment of severe symptomatic aortic stenosis. Coronary ostial stenosis is described as a late complication of the surgical procedure [1]. Anecdotal reports describe coronary ostial stenosis as an acute complication: right ostial occlusion from an aortotomy suture, ostial thrombosis as a traumatic consequence of an aortic retractor, coronary artery spasm, calcium debris embolization, and partial direct occlusion by the device or an edematous reaction have been described [2-4].

A unique fatal case of intraoperative, bilateral coronary ostial obstruction by a prosthetic valve leading to myocardial infarction is reported.

# Case presentation

## Clinical findings

A 50-year-old woman, who had undergone surgical aortic valve replacement for aortic valve stenosis in 2003, was admitted to a cardiac surgery unit for replacement of a dysfunctioning mechanical valve prosthesis. Echocardiographic evaluation documented prosthetic dysfunction, with evidence of an increased peak gradient of up to 105 mmHg and a prosthetic valve area of 0.58 cm^2^ with normal left ventricular function.
Prosthetic valve replacement was then performed in August 2010. Cardiopulmonary bypass (CPB) was instituted between the right atrium and the ascending aorta, and moderate hypothermia was reached. Myocardial protection was achieved by retrograde injection of blood cardioplegia through the coronary sinus as induction; it was completed by antegrade injection directly into both coronary ostia. After the ascending aorta had been transversely opened, prosthetic dysfunction was evident, as one of the hemidisks appeared to be locked by pannus ingrowth and fresh pivotal thrombi. No perivalvular leaks were described. The prosthesis was then removed; both coronary ostia were noted to be quite close to the aortic annulus. A bovine pericardium bioprosthesis (Edwards Magna Ease 21 mm) was then implanted in the supraannular position. The aortotomy was sutured using Teflon felt strips as reinforcement, owing to an extremely thin aortic wall, and the patient was uneventfully weaned from CPB. During skin suture, the patient experienced sudden severe hypotension; an ECG demonstrated signs of transmural myocardial ischemia. CPB was promptly reinstituted. Severe dilatation and hypokinesia of the right ventricle and severe hypokinesia of the interventricular septum were registered. An intraaortic balloon pump (IABP) was inserted through a femoral artery, and a saphenous vein graft to the right coronary artery (RCA) was immediately added on the beating heart on the hypothesis of RCA ostial obstruction. At the end of the operation, cardiopulmonary support by means of an extracorporeal membrane oxygenator (ECMO) through femoro-femoral access was initiated and the IABP was removed. The patient was then transferred to the intensive care unit (ICU) for the postoperative course. Graft verification was not performed, either intraoperatively or postoperatively. Two days later, following progressive weaning of ECMO support along with satisfactory hemodynamic recovery, she was transferred to the operating theatre to remove the mechanical assistance. One hour after readmission to the ICU, cardiac arrest occurred, prompting the surgical team to reinstitute emergency ECMO at the ICU bedside. The day after, the patient was transferred again to the theatre to remove thrombi from the left atrium that were evident on echographic inspection. At that point, cardiac activity was silent and the systemic condition was severely compromised. Two days after readmission to the ICU under renewed femoro-femoral support, she was declared dead.

## Pathological findings

A complete post mortem examination was performed. Cerebral and pulmonary edema was detected. Cardiac size was mildly increased (11.5 × 10 × 7.7 cm), with a conical shape; heart weight was increased (547 g). At inspection, after the aorta was opened, a horizontal aortotomy was evident with the prosthetic valve in situ. Meticulous attention was devoted to the coronary ostia: the right one could not be found on inspection of the inner surface of the sinus of Valsalva. Using a metal probe inserted into the distally cut right coronary artery, the entrapment of the ostium in a stitch at the level of the prosthetic annulus became evident. The left coronary ostium was identified behind one of the bioprosthesis posts, which was responsible for its complete obstruction (Figure 1). The coronary vessels were fully patent along their course, as was the saphenous vein graft to the RCA.
Inspection of both ventricles showed many foci of pale myocardium, diffusely involving both the free walls and the interventricular septum.

Histological investigation showed blood stasis in all organs and mild cerebral and pulmonary edema. Specimens taken from the heart showed diffuse stretching of the myocardium, with elongation of the sarcomeres and nuclei. Polymorphonuclear leucocyte infiltration was well evident at the periphery of the zones of necrosis (Figure 2). Numerous foci of hypercontracted myocardial cells, with markedly shortened sarcomeres, anomalous and extreme thickening of the Z lines, and rhexis of the myofibrillar apparatus into anomalous, irregular cross-fibre bands, up to total granular disruption, were also observed. Pathological features of the myocells were also studied by confocal laser scanning microscopy: stretching of the myocardium in flaccid paralysis, resulting in very early elongation of sarcomeres and nuclei; polymorphonuclear leucocyte infiltration was well evident.

The extramural coronary arteries showed nodular hyperplasia of smooth muscle cells and elastic tissue with fibrous replacement. Proteoglycan accumulation in the deep intima, between the tunica media and the fibrous cap, was also observed. No critical stenosis was evident.

A final diagnosis of extensive myocardial infarction, due to intraoperative occlusion of both coronary ostia by bioprosthesis malpositioning, was established as the cause of death.

# Conclusions

In the present case, extensive myocardial infarction due to the intraoperative occlusion of both coronary ostia by the prosthetic valve was recorded as the cause of death. Since there are, to date, no reports of similar deaths in patients who have undergone AVR, our report provides useful information on this complication of AVR.

Coronary ostial stenosis following AVR is believed to occur in 1% to 5% of AVR procedures. It is a life-threatening complication which generally becomes evident from 1 to 6 months after the operation [5]. Several different mechanisms are thought to be involved, including the possibility of microinjuries and a local hyperplastic reaction related to the infusion pressure and/or low temperature of the cardioplegic solution, and overdilation of the vessel by the tip of the cardioplegic catheter [1,5]. Other mechanisms have been hypothesized, such as widespread intimal thickening and fibrous proliferation in proximity to the aortic root, presumably as a reaction to turbulence around aortic ball valve prostheses. An immunological reaction to the heterograft after AVR has been considered in cases of bilateral coronary ostial stenoses appearing several months after the surgical procedure [1,2]. On the basis of this evidence, the exact mechanism underlying late coronary ostial stenosis following AVR remains unclear.

Anecdotal reports describe the rare occurrence of acute coronary ostial stenosis; right ostial occlusion from aortotomy sutures and ostial post-traumatic thrombosis due to an aortic retractor have been described [2]. In exceptional cases, embolism of debris, most often calcium from aortic valve decalcification or left atrial thrombectomy, can be involved [1,2]. Coronary artery spasm has been recognized as a possible cause of hemodynamic and arrhythmic instability after aortic valve replacement [6]. Occasionally, secondary fibrosis in the area of suture placement may occur, causing ostial stenosis [7].
Finally, the use of surgical glue in aortic surgery, or external compression from the glue used to protect the anastomosis, may cause stenosis of one or both coronary ostia [8].

In conclusion, there is a reasonable body of evidence that acute coronary ostial stenosis may occur, albeit rarely, after AVR, and this complication may be life threatening if not promptly recognized, leading to myocardial ischemia, infarction, or fatal arrhythmia. Consequently, it is important to maintain a high index of diagnostic suspicion if circulatory collapse and/or signs of myocardial ischemia occur soon after surgery. Transesophageal echocardiography can be useful in the diagnosis of acute complications of cardiac surgery; however, urgent coronary angiography remains the diagnostic gold standard [8].

In our case, both coronary ostia were iatrogenically occluded by the prosthesis: the right one was entrapped by a stitch and the left one was occluded by a prosthetic post. Neither intraoperative transesophageal echocardiography nor postoperative coronary angiography was performed, which prevented a prompt diagnosis.

From a clinical point of view, surgeons must maintain a high level of vigilance regarding the occurrence of acute myocardial ischemia soon after AVR, and must be ready to perform either intraoperative verification of patency or early coronary angiography during the ICU stay, since these diagnostic tools may reveal the mechanisms underlying the ischemia and indicate the need for intervention (coronary stenting, device removal and/or re-replacement with a smaller valve, with or without annular enlargement) [3,8].

# Consent statement

Written informed consent was obtained from the Medical Examiner Department, Court of Justice, for publication of this case report and accompanying images. A copy of the written consent is available for review by the Editor-in-Chief of this journal.

# List of abbreviations

AVR: aortic valve replacement; CPB: cardiopulmonary bypass; IABP: intraaortic balloon pumping; RCA: right coronary artery; ECMO: extracorporeal membrane oxygenator; ICU: intensive care unit.

# Competing interests

The authors declare that they have no competing interests.

# Authors' contributions

SB drafted the manuscript. MN carried out the histological analysis. IR performed the microscopic analysis. ET, GDG and VF conceived of the study, and participated in its design and coordination and helped to draft the manuscript.

All authors read and approved the final manuscript.

author: Klaus Kayser
date: 2011
title: Diagnostic Pathology in 2011: reflecting on the development of an open access journal during the last five years

Today, electronic communication is involved in all parts of our lives, whether in a directed and active, a communicative, or a passive manner. Whether to live with electronic communication or to ignore it is no longer a question.
Instead, the question we have to answer is: How shall we live in our communicative environment? What can we expect? What are we forced to develop in order "to survive"?

This Editorial, written at the end of a really successful year for our journal *Diagnostic Pathology*, tries to give some answers from different points of view.

Let us start with the publisher's interest, which stands on two feet: commercial success and scientific reputation. Both feet are linked: the higher the scientific reputation of the journal, the higher the financial benefit in terms of profit per article, as more articles are likely to be submitted as the reputation increases. The limiting factor on the required fee is the absolute number of published articles: if the publication procedure has not been outsourced, publication expenses consist of a fixed amount that is mandatory for hardware, software, salaries, etc., and a variable amount that scales with the number of published articles (the manpower required for each article). Thus, publishing more and more articles per year will usually, though not always, increase the publisher's financial profit. Publishing a low number of articles with a very high reputation might endanger the journal's existence. At present, our journal *Diagnostic Pathology* is in good shape: in 2011 we will publish more than 100 articles. The citation index has slowly climbed to 1.39. The percentage of research articles has increased to 60% - 70%, whereas the percentage of case reports has decreased to about 20%. This situation favours a rising citation index despite the negative influence of doubling the number of published articles.

Naturally, the authors' point of view is somewhat different from the publisher's interest. The publication fee is not small, and can usually only be covered by specific grants. The granting of waivers can only partly solve the "Tom Sawyer problem" of courting outstanding articles and afterwards asking for money to cover their publication. This conflict can only be resolved by the journal's attractiveness to its readers, through its formal presentation and its content. Only if these qualities outweigh the constraint of the publication fee will *Diagnostic Pathology* survive. The steep increase in submissions and published articles seems to confirm that our journal *Diagnostic Pathology* is on a successful path.

The main aim of any Editor-in-Chief is to guide the scientific journal to higher levels of scientific reputation. Here, the citation index, with all its open questions, and the geographical distribution of the journal are the main components. The open access journal *Diagnostic Pathology* started six years ago. It has more than 1200 registered readers living in all parts of the world. Its scientific reputation lies in the middle of all Thomson Reuters-registered scientific journals. The number of submissions and publications doubled last year, and our rejection rate is currently about 50%. So far, these data are comparable to those of other journals, whether more traditional print-based journals or other open access journals. What else?

We have been involved in electronic publication since its beginning in the early 1990s, starting with the solely electronically distributed Electronic Journal of Pathology and Histology. Since that time, new ideas and experiments in scientific publication have remained in our focus.
The first trials of interactive publication were undertaken, as well as the publication of articles that included executable programs. The reader could start and execute these programs by selecting specific parameters or functions, without influence from the authors. Our philosophy of continuing to develop electronic scientific publication has remained unchanged since then. Since January 2011, we have been proud to offer our authors the possibility of publishing whole digitized glass slides (virtual slides, VS) without any additional cost. To our knowledge, *Diagnostic Pathology* is the first, and to date the only, electronic scientific journal that offers its authors the chance to publish whole VS, and not only specific areas of interest in the form of still images. This opportunity has been taken up by about 50% of authors with suitable articles. It took some time to solve the difficult logistic problems, but the gateway is now open, and additional innovative offerings derived from it will follow. One of these is that VS and their corresponding articles will be collected to form a repository that can be used for education, as a basis for additional secondary publications (interactive publication), and for assistance in routine diagnostic work. It will also serve as quality assurance for newly submitted articles.

An additional approach will be the "opening" of published articles for the inclusion of related data submitted by other authors. We are also undertaking tests on how to include the reader in published articles, and some of the results of these tests so far are really promising. Of course, the original authors have to be informed and have to give their permission for such investigations.

Open access journals can provide the framework that is mandatory for such new publication and communication techniques. Further trials would automatically feed published articles into specific social forums, for example Facebook or LinkedIn. Certainly, we are also thinking about how to improve the review process, as one of our main aims is to judge all submitted articles by methods that are free of personal bias and related only to scientific issues.

*Diagnostic Pathology* is a living scientific journal. All active journals require certain changes and exchanges from time to time. Unfortunately, one of our active Section Editors, Professor Torsten Goldmann, PhD, will be leaving, and we want to express our gratitude for all of his efforts in helping to develop our journal *Diagnostic Pathology*. Similarly, some colleagues on our Editorial Board will leave in a routine rotation. We would like to thank all of them for serving on the board and supporting the journal. We also regret to announce that one of our internationally well-known Editorial Board members, Professor Dr. Anthony Leong, has passed away. We will certainly keep him in good memory.

We are looking forward to further developing our open access journal *Diagnostic Pathology*, focusing on fields of electronic scientific communication that have not yet, or have only poorly, been investigated. We will also keep an eye on the progress of molecular and genetic pathology, and on new technologies such as in vivo biopsies, live imaging publication, and nanotechnologies.
For all these promising and growing fields of pathology, we need and appreciate the interest, knowledge, information, and assistance of the scientific community, specifically of our readers and authors.

For the progress made so far, we would like to express our deep gratitude to our authors, readers, reviewers, and our publication team, and we wish them all a Blessed and Merry Christmas and a Happy, Healthy, and Fruitful New Year.

Klaus Kayser

Editor-in-Chief

author: MH Hoffmann; J Tuncel; K Skriner; M Tohidast-Akrad; G Schett; JS Smolen; R Holmdahl; G Steiner
date: 2005
institute: 1Center of Molecular Medicine of the Austrian Academy of Sciences, Vienna, Austria; 2Department of Rheumatology, Medical University of Vienna, Austria; 3Section for Medical Inflammation Research, Lund University, Sweden; 4Department of Rheumatology, Charité University Hospital, Berlin, Germany; 5Ludwig Boltzmann Institute for Rheumatology, Vienna, Austria
references:
title: Identification of hnRNP-A2 (RA33) as a major B-cell and T-cell autoantigen in pristane-induced arthritis

# Background

Pristane-induced arthritis (PIA) in rats is considered an excellent model for rheumatoid arthritis (RA), since it fulfils the criteria for RA, including a chronic relapsing disease course, and is not dependent on immunization with an exogenous antigen. Although the adjuvant pristane is not immunogenic, the disease is MHC-associated and dependent on the activation of (autoreactive) T cells. However, so far it has not been possible to link the immune response to joint antigens or other endogenous components. HnRNP A2, the RA33 autoantigen (A2/RA33), is a multi-functional RNA-binding protein involved in splicing and other aspects of post-transcriptional regulation of gene expression. Autoantibodies as well as autoreactive T cells against A2/RA33 have been found in patients with RA, but the pathogenetic role of these autoimmune responses is unresolved [1]. It was therefore the aim of this study to elucidate a potential involvement of A2/RA33 in PIA.

# Methods

Autoantibodies against A2/RA33 were determined by immunoblotting, and the MHC association of the anti-A2/RA33 immune response was specified by the presence of autoantibodies, delayed-type hypersensitivity reactions and T-cell cytokine secretion in DA rats of different haplotypes. Interferon gamma and tumour necrosis factor secretion was determined in T cells isolated from draining lymph nodes 10 days after pristane injection and restimulated with A2/RA33 *in vitro*. Expression of A2/RA33 in joints and organs was analysed by immunohistochemistry and western blotting.
Nasal vaccinations were performed with A2/RA33 7 days prior to pristane injection.

# Results

Although anti-A2/RA33 autoantibodies were detected in all four rat strains investigated, the immune response appeared to be particularly linked to the F and U MHC haplotypes. Autoantibodies to A2/RA33 were found in 60% of DA1.F sera, and T cells of all DA1.F rats tested produced intermediate to high levels of interferon gamma upon stimulation with A2/RA33. The response appeared to be produced entirely by CD4^+^ cells showing a Th1 phenotype. Furthermore, nasal vaccination with A2/RA33 significantly delayed the onset and decreased the severity of arthritis in DA1.F rats. Finally, immunohistochemical and western blot analyses revealed pronounced overexpression of A2/RA33 in the joints of rats suffering from acute PIA, but not in healthy joints or in joints from animals with chronic PIA.

# Conclusion

The A2/RA33 autoantigen is targeted by autoantibodies and Th1 cells in rats with PIA shortly after pristane injection. The presence of autoreactive Th1 cells in conjunction with synovial overexpression of A2/RA33 strongly suggests involvement of this autoantigen in the pathogenesis of PIA. This is further bolstered by the observed alleviation and delayed onset of PIA following nasal vaccination with A2/RA33. Thus, A2/RA33 seems to be one of the primary autoantigens in PIA, which, in conjunction with previous observations of autoreactive T cells in RA patients, may argue for a pathogenetic role in human RA as well.

## Acknowledgements

This work was supported by a grant from the Austrian Academy of Sciences and by Marie Curie Host Fellowship number HPMT-CT-2000-00126 of the European Commission Research Directorate.

abstract: # Background
.
Recent advances in proteomics technologies such as two-hybrid, phage display and mass spectrometry have enabled us to create a detailed map of biomolecular interaction networks. Initial mapping efforts have already produced a wealth of data. As the size of the interaction set increases, databases and computational methods will be required to store, visualize and analyze the information in order to effectively aid in knowledge discovery.
.
# Results
.
This paper describes a novel graph theoretic clustering algorithm, "Molecular Complex Detection" (MCODE), that detects densely connected regions in large protein-protein interaction networks that may represent molecular complexes. The method is based on vertex weighting by local neighborhood density and outward traversal from a locally dense seed protein to isolate the dense regions according to given parameters.
The algorithm has the advantage over other graph clustering methods of having a directed mode that allows fine-tuning of clusters of interest without considering the rest of the network, and of allowing examination of cluster interconnectivity, which is relevant for protein networks. Protein interaction and complex information from the yeast *Saccharomyces cerevisiae* was used for evaluation.

# Conclusion

Dense regions of protein interaction networks can be found, based solely on connectivity data, many of which correspond to known protein complexes. The algorithm is not affected by a known high rate of false positives in data from high-throughput interaction techniques. The program is available from .

author: Gary D Bader; Christopher WV Hogue
date: 2003
institute: 1Samuel Lunenfeld Research Institute, Mt. Sinai Hospital, Toronto ON Canada M5G 1X5, Dept. of Biochemistry, University of Toronto, Toronto ON Canada M5S 1A8; 2Current address: Memorial Sloan-Kettering Cancer Center, 1275 York Avenue, Box 460, New York, NY, 10021, USA
references:
title: An automated method for finding molecular complexes in large protein interaction networks

# Background

Recent papers published in *Science* and *Nature*, among others, describe large-scale proteomics experiments that have generated large data sets of protein-protein interactions and molecular complexes [1-7]. Protein structure [8] and gene expression data [9] are also accumulating at a rapid rate. Bioinformatics systems for storage, management, visualization and analysis of this new wealth of data must keep pace. We previously published a simple graph theory method that identified a functional protein complex around the yeast protein Las17 that is involved in actin cytoskeleton rearrangement [10]. Here we extend the method to better apply it to the accumulating information in protein networks.

Currently, most proteomics data is available for the model organism *Saccharomyces cerevisiae*, by virtue of the availability of a defined and relatively stable proteome, full genome clone libraries [11], established molecular biology experimental techniques and an assortment of well designed genomics databases [12-14]. Using the Biomolecular Interaction Network Database (BIND) [15] as an integration platform, we have collected 15,143 yeast protein-protein interactions among 4,825 proteins (about 75% of the yeast proteome). Much larger data sets than this will eventually be available for other well studied model organisms as well as for the human proteome. These complex data sets present a formidable challenge for computational biology to develop automated data mining analyses for knowledge discovery.

Here we present the first report that uses a clustering algorithm to identify molecular complexes in a large protein interaction network derived from heterogeneous experimental sources. Based on our previous observation that highly interconnected, or dense, regions of the network may represent complexes [10], the "Molecular Complex Detection" (MCODE) algorithm has been implemented and evaluated on our yeast protein interaction compilation using known molecular complex data from a recent systematic mass spectrometry study of the proteome [7] and from the MIPS database [13].

Predicting molecular complexes from protein interaction data is important because it provides another level of functional annotation above other guilt-by-association methods.
Since subunits of a molecular complex generally function towards the same biological goal, prediction of an unknown protein as part of a complex also allows increased confidence in the annotation of that protein.

MCODE also makes the visualization of large networks manageable by extracting the dense regions around a protein of interest. This is important, as it is now obvious that the current visualization tools present on many interaction databases [15], originally based on the Sun Microsystems embedded spring graph layout Java applet, do not scale well to large networks.

## Algorithm

The MCODE algorithm operates in three stages: vertex weighting, complex prediction and, optionally, post-processing to filter or add proteins in the resulting complexes by certain connectivity criteria.

A network of interacting molecules can be intuitively modeled as a graph, where vertices are molecules and edges are molecular interactions. If temporal pathway or cell signalling information is known, it is possible to create a directed graph with arcs representing direction of chemical action or direction of information flow; otherwise an undirected graph is used. Using this graph representation of a biological system allows graph theoretic methods to be applied to aid in analysis and solve biological problems. This graph theory approach has been used by other biomolecular interaction database projects such as DIP [16], CSNDB [17], TRANSPATH [18], EcoCyc [19] and WIT [20] and is discussed by Wagner and Fell [21].

Algorithms for finding clusters, or locally dense regions, of a graph are an ongoing research topic in computer science and are often based on network flow/minimum cut theory [22,23] or, more recently, spectral clustering [24]. To find locally dense regions of a graph, MCODE instead uses a vertex-weighting scheme based on the clustering coefficient, C~i~, which measures 'cliquishness' of the neighborhood of a vertex [25]. C~i~ = 2*n*/*k*~*i*~(*k*~*i*~-1), where *k*~*i*~ is the vertex size of the neighborhood of vertex *i* and *n* is the number of edges in the neighborhood (the immediate neighborhood density of *v* not including *v*). A clique is defined as a maximally connected graph (one in which every vertex is connected to every other). There is no standard graph theory definition of density, but definitions are normally based on the connectivity level of a graph. Density of a graph, G = (V,E), with number of vertices, |V|, and number of edges, |E|, is defined here as |E| divided by the theoretical maximum number of edges possible for the graph, |E|~max~. For a graph with loops (an edge connecting back to its originating vertex), |E|~max~ = |V|(|V|+1)/2, and for a graph with no loops, |E|~max~ = |V|(|V|-1)/2. So, density of G, D~G~ = |E|/|E|~max~, is a real number ranging from 0.0 to 1.0. These two measures are illustrated in the sketch below.
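To make these two measures concrete, here is a minimal Python sketch (an illustration only, not the published ANSI C implementation described under Implementation below) that computes the clustering coefficient and graph density on a dict-of-sets adjacency list; the toy graph `g` is a hypothetical example:

```python
# Minimal sketch of the two graph measures used by MCODE, on a dict-of-sets
# adjacency representation of an undirected graph.

def density(num_vertices, num_edges, loops=False):
    """D_G = |E| / |E|max, using the 'loop' or 'no loop' formula above."""
    max_edges = (num_vertices * (num_vertices + 1) // 2 if loops
                 else num_vertices * (num_vertices - 1) // 2)
    return num_edges / max_edges if max_edges else 0.0

def clustering_coefficient(graph, v):
    """C_i = 2n / (k_i (k_i - 1)): density of the open neighborhood of v."""
    neighbors = graph[v]
    k = len(neighbors)
    if k < 2:
        return 0.0
    # n = number of edges among the neighbors of v (v itself excluded)
    n = sum(1 for a in neighbors for b in graph[a] if b in neighbors) // 2
    return 2 * n / (k * (k - 1))

# Toy example: triangle a-b-c with a pendant vertex d attached to a.
g = {"a": {"b", "c", "d"}, "b": {"a", "c"}, "c": {"a", "b"}, "d": {"a"}}
print(clustering_coefficient(g, "a"))  # 1/3: of b-c, b-d, c-d only b-c exists
print(density(4, 4))                   # 4 edges of a possible 6 -> ~0.67
```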
The first stage of MCODE, vertex weighting, weights all vertices based on their local network density using the highest *k*-core of the vertex neighborhood. A *k*-core is a graph of minimal degree *k* (graph G, for all *v* in G, deg(*v*) >= *k*). The highest *k*-core of a graph is the central most densely connected subgraph. We define here the term core-clustering coefficient of a vertex, *v*, to be the density of the highest *k*-core of the immediate neighborhood of *v* (vertices connected directly to *v*) including *v* (note that C~i~ does not include *v*). The core-clustering coefficient is used here instead of the clustering coefficient because it amplifies the weighting of heavily interconnected graph regions while removing the many less connected vertices that are usually part of a biomolecular interaction network, known to be scale-free [6,21,26-29]. A scale-free network has a vertex connectivity distribution that follows a power law, with relatively few highly connected vertices (high degree) and many vertices having a low degree. A given highly connected vertex, *v*, in a dense region of a graph may be connected to many vertices of degree one (singly linked vertices). These low degree vertices do not interconnect within the neighborhood of *v* and thus would reduce the clustering coefficient, but not the core-clustering coefficient. The final weight given to a vertex is the product of the vertex core-clustering coefficient and the highest *k*-core level, *k*~max~, of the immediate neighborhood of the vertex. This weighting scheme further boosts the weight of densely connected vertices. This specific weighting function is based on local network density; many other functions are possible, and some may have better performance for this algorithm, but these are not evaluated here. A sketch of this weighting step follows.
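Continuing the sketch above (and reusing its `density` helper), the weighting step might look like the following; `k_core`, `highest_k_core` and `vertex_weight` are hypothetical helper names, not the authors' API:

```python
# Minimal sketch of stage 1: weight = k_max * density(highest k-core of the
# closed neighborhood of v). Reuses density() from the previous sketch.

def k_core(graph, k):
    """Subgraph left after repeatedly deleting vertices of degree < k."""
    g = {v: set(nbrs) for v, nbrs in graph.items()}
    while True:
        low = [v for v, nbrs in g.items() if len(nbrs) < k]
        if not low:
            return g
        for v in low:
            for u in g[v]:
                g[u].discard(v)
            del g[v]

def highest_k_core(graph):
    """Try k = 1, 2, ... and return the last non-empty core and its k."""
    k, core = 0, graph
    while True:
        nxt = k_core(graph, k + 1)
        if not nxt:
            return k, core
        k, core = k + 1, nxt

def vertex_weight(graph, v):
    closed = graph[v] | {v}                        # v plus its direct neighbors
    nbhd = {u: graph[u] & closed for u in closed}  # induced closed neighborhood
    k_max, core = highest_k_core(nbhd)
    edges = sum(len(nbrs) for nbrs in core.values()) // 2
    return k_max * density(len(core), edges)
```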
The second stage, molecular complex prediction, takes as input the vertex-weighted graph, seeds a complex with the highest weighted vertex and recursively moves outward from the seed vertex, including vertices in the complex whose weight is above a given threshold, which is a given percentage away from the weight of the seed vertex. This is the vertex weight percentage (VWP) parameter. If a vertex is included, its neighbours are recursively checked in the same manner to see if they are part of the complex. A vertex is not checked more than once, since complexes cannot overlap in this stage of the algorithm (see below for a possible overlap condition). This process stops once no more vertices can be added to the complex based on the given threshold and is repeated for the next highest unseen weighted vertex in the network. In this way, the densest regions of the network are identified. The vertex weight threshold parameter defines the density of the resulting complex. A threshold that is closer to the weight of the seed vertex identifies a smaller, denser network region around the seed vertex.

The third stage is post-processing. Complexes are filtered if they do not contain at least a 2-core (graph of minimum degree 2). The algorithm may be run with the 'fluff' option, which increases the size of the complex according to a given 'fluff' parameter between 0.0 and 1.0. For every vertex in the complex, *v*, its neighbors are added to the complex if they have not yet been seen and if the neighborhood density (including *v*) is higher than the given fluff parameter. Vertices that are added by the fluff parameter are not marked as seen, so there can be overlap among predicted complexes with the fluff parameter set. If the algorithm is run using the 'haircut' option, the resulting complexes are 2-cored, thereby removing the vertices that are singly connected to the core complex. If both options are specified, fluff is run first, then haircut.

Resulting complexes from the algorithm are scored and ranked. The complex score is defined as the product of the complex subgraph, C = (V,E), density and the number of vertices in the complex subgraph (D~C~ × |V|). This ranks larger, more dense complexes higher in the results. Other scoring schemes are possible, but are not evaluated here. The seed-and-expand stage and the scoring are sketched below.
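A minimal sketch of the seed-and-expand stage and of the D~C~ × |V| score, again reusing the toy representation and `density` helper above (names are illustrative, and the haircut and fluff options are omitted):

```python
# Minimal sketch of stage 2: expand outward from a seed, keeping neighbors
# whose weight is within the VWP fraction of the seed's weight.

def find_complex(graph, weights, seed, vwp, seen):
    threshold = weights[seed] * (1 - vwp)
    members, stack = {seed}, [seed]
    seen.add(seed)
    while stack:
        v = stack.pop()
        for u in graph[v]:
            if u not in seen and weights[u] > threshold:
                seen.add(u)       # complexes cannot overlap in this stage
                members.add(u)
                stack.append(u)
    return members

def find_complexes(graph, weights, vwp):
    seen, complexes = set(), []
    for v in sorted(graph, key=weights.get, reverse=True):  # densest seeds first
        if v not in seen:
            complexes.append(find_complex(graph, weights, v, vwp, seen))
    return complexes

def complex_score(graph, members):
    """Score = D_C * |V|; density() as in the earlier sketch."""
    edges = sum(len(graph[v] & members) for v in members) // 2
    return density(len(members), edges) * len(members)
```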
MCODE may also be run in a directed mode where a seed vertex is specified as a parameter. In this mode, MCODE only runs once, to predict the single complex that the specified seed is a part of. Typically, when analyzing complexes in a given network, one would find all complexes present (undirected mode) and then switch to the directed mode for the complexes of interest. The directed mode allows one to experiment with MCODE parameters to fine-tune the size of the resulting complex according to existing biological knowledge of the system. In directed mode, MCODE will first pre-process the input network to ignore all vertices with higher vertex weight than the seed vertex. If this were not done, MCODE would preferentially branch out to denser regions of the graph, if they exist, which could belong to separate, but denser, complexes. Thus, a seed vertex for directed mode should always be the highest-density vertex among the suspected complex. There is an option to turn this pre-processing step off, which will allow seeded complexes to branch out into denser regions of the graph, if desired.

The time complexity of the entire algorithm is polynomial, O(*nmh*^3^), where *n* is the number of vertices, *m* is the number of edges and *h* is the vertex size of the average vertex neighbourhood in the input graph, G. This comes from the vertex-weighting step. Finding a *k*-core in a graph proceeds by progressively removing vertices of degree < *k* until all remaining vertices are connected to each other by degree *k* or more, and is thus O(*n*^2^). The highest *k*-core is found by trying to find *k*-cores from one upward until all vertices have been found, and this cannot go beyond a number of steps equal to the highest degree in the graph; thus the highest *k*-core step is O(*n*^3^). Since this *k*-core step operates only on the neighbourhood of a vertex, the *n* in this case is the number of vertices in the average neighbourhood of a vertex, *h*. The inner loop of the algorithm only operates twice for every edge in the input graph, and thus is O(2*mh*^3^). The outer loop operates once on all vertices in the input graph, so the entire time complexity of the weighting stage is O(*n* × 2*mh*^3^) = O(*nmh*^3^). The complex prediction stage is O(*n*), and the optional post-processing step can be up to O(*cs*^2^), where *c* is the number of complexes that were found in the previous step and *s* is the number of vertices in the largest complex (O(*cs*^2^) to find the 2-core once for each complex).

Even though the fastest min-cut graph clustering algorithms are faster, at O(*n*^2^log *n*) [30], MCODE has a number of advantages. Since weighting is done once and comprises most of the time complexity, many algorithm parameters can be tried, in O(*n*), once weighting is complete. This is useful when evaluating many different parameters. MCODE is relatively easy to implement and, since it is local-density based, has the advantage of a directed mode and a complex connectivity mode. These two modes are generally not useful in typical clustering applications, but are useful for examining molecular interaction networks. Additionally, only those proteins above a given local density threshold are assigned to complexes. This is in contrast to many clustering applications that force all data points to be part of clusters, whether they truly should be part of a cluster or not.

## Pseudocode

### Stage 1: Vertex Weighting

**procedure** MCODE-VERTEX-WEIGHTING
   **input**: **graph**: G = (V,E)
   **for all** *v* in G **do**
      N = find neighbors of *v* to depth 1
      K = get highest *k*-core graph from N
      *k* = get highest *k*-core number from N
      *d* = get density of K
      set weight of *v* = *k* × *d*
   **end for**
**end procedure**

### Stage 2: Molecular Complex Prediction

**procedure** MCODE-FIND-COMPLEX
   **input**: **graph**: G = (V,E); **vertex weights**: W; **vertex weight percentage**: *d*; **seed vertex**: *s*
   **if** *s* already seen **then return**
   **for all** *v* neighbors of *s* **do**
      **if** weight of *v* > (weight of *s*)(1 - *d*) **then** add *v* to complex C
      **call**: MCODE-FIND-COMPLEX (G, W, *d*, *v*)
   **end for**
**end procedure**

**procedure** MCODE-FIND-COMPLEXES
   **input**: **graph**: G = (V,E); **vertex weights**: W; **vertex weight percentage**: *d*
   **for all** *v* in G **do**
      **if** *v* not already seen **then call**: MCODE-FIND-COMPLEX(G, W, *d*, *v*)
   **end for**
**end procedure**

### Stage 3: Post-Processing (optional)

**procedure** MCODE-FLUFF-COMPLEX
   **input**: **graph**: G = (V,E); **vertex weights**: W; **fluff density threshold**: *d*; **complex graph**: C = (U,F)
   **for all** *u* in C **do**
      **if** weight of *u* > *d* **then** add *u* to complex C
   **end for**
**end procedure**

**procedure** MCODE-POST-PROCESS
   **input**: **graph**: G = (V,E); **vertex weights**: W; **haircut flag**: *h*; **fluff flag**: *f*; **fluff density threshold**: *t*; **set of predicted complex graphs**: C
   **for all** *c* in C **do**
      **if** *c* not 2-core **then** filter
      **if** *h* is TRUE **then** 2-core complex
      **if** *f* is TRUE **then call**: MCODE-FLUFF-COMPLEX(G, W, *t*, *c*)
   **end for**
**end procedure**

### Overall Process

**procedure** MCODE
   **input**: **graph**: G = (V,E); **vertex weight percentage**: *d*; **haircut flag**: *h*; **fluff flag**: *f*; **fluff density threshold**: *t*; **set of predicted complex graphs**: C
   **call**: W = MCODE-VERTEX-WEIGHTING (G)
   **call**: C = MCODE-FIND-COMPLEXES (G, W, *d*)
   **call**: MCODE-POST-PROCESS (G, W, *h*, *f*, *t*, C)
**end procedure**

For reference, a compact runnable rendering of this overall process, in terms of the Python sketches above, is given below.
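This sketch reuses the hypothetical helpers defined earlier (`vertex_weight`, `find_complexes`, `complex_score`); the stage 3 options are omitted, with the 2-core filter approximated by a minimum-size filter:

```python
# Sketch of the overall driver: weight once, then predict and rank complexes.

def mcode(graph, vwp=0.2, min_size=2):
    weights = {v: vertex_weight(graph, v) for v in graph}  # stage 1 (dominant cost)
    complexes = find_complexes(graph, weights, vwp)        # stage 2, O(n) per run
    complexes = [c for c in complexes if len(c) >= min_size]
    return sorted(complexes, key=lambda c: complex_score(graph, c), reverse=True)
```

Because the weighting is computed once, many VWP settings can be re-run cheaply on the same `weights` map, which mirrors the parameter-evaluation argument above.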
## Implementation

MCODE has been implemented in ANSI C using the cross-platform NCBI Toolkit and the BIND graph library in the SLRI Toolkit. Both of these source code libraries are freely available. The actual MCODE source code is not yet freely available. The MCODE program has been compiled and tested on UNIX, Mac OS X and Windows. Because a yeast gene name dictionary is used to recognize input and generate output, the MCODE executable currently only works for yeast proteins in a user-friendly manner. The algorithm, however, is completely general, via the graph theory abstraction, to any graph and thus to any biomolecular interaction network. MCODE binaries are available from .

# Results

## Evaluation of MCODE

The evaluation of MCODE requires a set of experimentally determined biomolecular interactions and a set of associated experimentally determined molecular complexes. Currently, the largest source for such data is for proteins from the budding yeast, *Saccharomyces cerevisiae*. Recently, a large-scale mass spectrometry study by Gavin et al. [7] provided a large data set of protein interactions with manually annotated molecular complexes. Also available are the protein interaction and complex tables of MIPS [13] and YPD [14]. MCODE was used to automatically predict protein complexes in our collected protein-protein interaction data sets. Resulting complexes were then matched to known molecular complexes from Gavin et al. (the Gavin benchmark) and the MIPS benchmark using an overlap score. Parameter optimization was then used to maximize the biological relevance of predicted complexes according to the given benchmarks. YPD was not used as a current version could not be acquired.

To ensure that MCODE is not unduly affected by the expected high false-positive rate in large-scale interaction data sets, large-scale and literature-derived MCODE predictions were compared. MCODE was then used to predict complexes in the entire set of machine-readable protein-protein interactions that we could collect for yeast. Complexes of interest were then further examined using the directed mode and complex connectivity mode of MCODE.

## Evaluation of MCODE using the Gavin data set of protein interactions and complexes

In this study, we wanted to use all forms of protein interaction data available, which requires mixing of different types of experiments, such as yeast two-hybrid and co-immunoprecipitation. Two-hybrid results are inherently pairwise, whereas copurification results are sets of one or more identified proteins. For a copurification result, only a set of size 2 can be directly considered a pairwise interaction; otherwise it must be modeled as a set of hypothetical interactions. Biochemical copurifications can be thought of as populations of complexes with some underlying pairwise protein interaction topology that is unknown from the experiment. In the general case of the purification used by Gavin et al., one affinity-tagged protein was used as bait to pull associated proteins out of a yeast cell lysate. The two extreme cases for the topology underlying the population of complexes from a single purification experiment are a minimally connected 'spoke' model, where the data are modeled as direct bait-associated protein pairwise interactions, and a maximally connected 'matrix' model, where the data are modeled as all proteins connected to all others in the set. The real topology of the set of proteins must lie somewhere between these two extremes.
Population of complexes: *C* = {*b, c, d, e*} (*b* = bait)

Spoke model hypothetical interactions: *i*~*S*~ = {*b-c, b-d, b-e*}

Matrix model hypothetical interactions: *i*~*M*~ = {*b-b, b-c, b-d, b-e, c-c, c-d, c-e, d-d, d-e, e-e*}

Advantages of the spoke model are that it is biologically intuitive, that biologists often represent their copurification results in this manner, and that it is about 3 times more accurate than the matrix model [31]. A disadvantage is that it can misrepresent interactions. The matrix model, alternatively, cannot misrepresent interactions, as all possible interactions are generated, but this is at the cost of generating a large number of false interactions. Matrix topologies are also physically implausible for larger complexes because of the increased possibility of steric clash if all subunits interact with all others. Ultimately, the spoke model should be reasonable for use in evaluating MCODE. Both expansions are sketched below.
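The two expansions are mechanical enough to state in a short sketch; `spoke` and `matrix` are hypothetical helper names applied to the worked example above:

```python
# Minimal sketch of the spoke and matrix expansions of one copurification.
from itertools import combinations_with_replacement

def spoke(bait, preys):
    return [(bait, p) for p in preys]

def matrix(bait, preys):
    # All unordered pairs among bait and preys, self-pairs included,
    # matching the i_M set shown above.
    return list(combinations_with_replacement([bait] + preys, 2))

print(spoke("b", ["c", "d", "e"]))        # [('b','c'), ('b','d'), ('b','e')]
print(len(matrix("b", ["c", "d", "e"])))  # 10 hypothetical interactions
```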
Gavin et al. raw data from 588 biochemical purifications were represented using the spoke model, described above, to get 3,225 hypothetical protein-protein interactions among 1,363 proteins for input to MCODE. A list of 232 manually annotated protein complexes based on the original purification data reported by Gavin et al. was filtered to remove five reported 'complexes' each composed of a single protein and six complexes of two or three proteins that were already in the data set as part of a larger complex. This yielded a filtered set of 221 complexes that were used to evaluate MCODE, although some of these complexes have significant overlap with other complexes in the set.

To evaluate which parameter choice would allow automatic prediction of protein complexes from the spoke-modeled Gavin et al. interaction set that best matched the manually annotated complexes, MCODE was run using all four possible combinations of the two Boolean parameters (haircut: true/false, fluff: true/false) over a full range of 20 vertex weight percentage (VWP) and fluff parameters (0 to 0.95 in 0.05 increments). During this parameter optimization process, MCODE was limited to find complexes of size two or higher.

A scoring scheme was developed to determine how effectively an MCODE predicted complex matched a complex from the benchmark set of complexes. In this case, the benchmark complex set was the Gavin et al. hand-annotated complex set. The overlap score was defined as ω = *i*^2^/*a*·*b*, where *i* is the size of the intersection set of a predicted complex with a known complex, *a* is the size of the predicted complex and *b* is the size of the known complex. A protein is part of the intersection set only if it is present in both predicted and known complexes. Thus, a predicted complex that has no proteins in a known complex has ω = 0, and a predicted complex that perfectly matches a known complex has ω = 1. Also, predicted complexes that fully overlap, but are much larger or much smaller than any known complexes, will get a low ω. The overlap score of a predicted complex versus a benchmark complex is then a measure of the biological significance of the prediction, assuming that the benchmark set of complexes is biologically relevant. The best parameter choice for MCODE on this protein interaction data set is one that predicts the largest set of complexes that match the largest number of benchmark complexes above a threshold ω. Since there is overlap in the Gavin benchmark complex database, a predicted complex may match more than one known complex with a high ω. The overlap score is simple enough to state directly in code, as in the sketch below.
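As a minimal sketch (the protein identifiers are hypothetical):

```python
# Overlap score omega = i^2 / (a * b) between a predicted and a known complex.

def overlap_score(predicted, known):
    i = len(set(predicted) & set(known))
    return i * i / (len(predicted) * len(known))

# Sharing 3 of 4 predicted proteins with a 6-protein known complex gives
# 9 / 24 = 0.375, above the 0.2 threshold adopted below.
print(overlap_score({"p1", "p2", "p3", "p4"}, {"p1", "p2", "p3", "k4", "k5", "k6"}))
```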
To choose an overlap score threshold that maximizes the biological relevance of the predicted complexes without filtering away too many predictions, results from each of the 840 parameter combinations tested during the parameter optimization stage were examined. The number of MCODE predicted complexes was plotted against the number of matched known complexes over a range of ω thresholds from 'no threshold' to 0.1 to 0.9 (in 0.1 increments). If no ω threshold is used, a predicted complex only needs at least one protein in common with a known complex to be considered a match. If predicted and known complexes are only counted as a match when their ω is above a specific threshold, the number of matched complexes declines with increasing ω threshold, as shown in Figure 1. Interestingly, the average and maximum number of matched known complexes drops more quickly from zero until a ω threshold of 0.2 than from 0.2 to 0.9, indicating that many predicted complexes only have one or a few proteins that overlap with known complexes. A ω threshold of 0.2 to 0.3 thus seems to filter out most predicted complexes that have insignificant overlap with known complexes.

Figure 2 shows the range of the number of complexes predicted and the number of known complexes matched for the 0.2 ω threshold over all tried MCODE parameters. A y = x line is also plotted to show that data points tend to be skewed towards a higher number of matched known complexes than predicted complexes because of the redundancy in the Gavin complex benchmark. Data points closest to the upper right portion of the graph maximize both the number of matched known complexes and the number of predicted complexes. MCODE parameter combinations that result in these data points therefore optimize MCODE on this data set (according to the overlap score threshold). This result shows that the number of predicted complexes should be similar to the number of matched known complexes for a parameter choice to be reasonable, although the number of matched known complexes may be larger, again, because of some commonality among complexes in the benchmark set. The parameter combination corresponding to the best data point (63,88) at an overlap score threshold of 0.2 is haircut = FALSE, fluff = TRUE, VWP = 0.05 and a fluff density threshold between 0 and 0.1. These parameter optimization results for MCODE over this data set were stable over a range of ω thresholds up to 0.5. Above 0.5, the result was not stable, as there were generally too few predicted complexes with high overlap scores (Figure 1).

A specificity versus sensitivity analysis [32] was also performed. The number of true positives (TP) was defined as the number of MCODE predicted complexes with ω over a threshold value, and the number of false positives (FP) as the total number of predicted MCODE complexes minus TP. The number of false negatives (FN) equals the number of known benchmark complexes not matched by predicted complexes. Sensitivity was defined as TP/(TP+FN) and specificity was defined as TP/(TP+FP). The MCODE parameter choice that optimizes both specificity and sensitivity is the same as from the above analysis. The optimal sensitivity of this analysis was ~0.31 and the corresponding specificity was ~0.79. These definitions are sketched below.
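A minimal sketch of this bookkeeping, reusing the `overlap_score` helper from the previous sketch:

```python
# TP: predictions matching some benchmark complex with omega over a threshold;
# FP: the remaining predictions; FN: benchmark complexes left unmatched.

def evaluate(predicted, benchmark, threshold=0.2):
    tp = sum(1 for p in predicted
             if any(overlap_score(p, k) > threshold for k in benchmark))
    fp = len(predicted) - tp
    fn = sum(1 for k in benchmark
             if not any(overlap_score(p, k) > threshold for p in predicted))
    sensitivity = tp / (tp + fn) if tp + fn else 0.0
    specificity = tp / (tp + fp) if tp + fp else 0.0
    return sensitivity, specificity
```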
The 63 MCODE predicted complexes only matched 88 of the 221 complexes in the known data set, indicating that MCODE could not recapitulate the majority of the Gavin complex benchmark solely using protein connectivity information. As mentioned above, there are more matched complexes than predicted because of some redundancy in the benchmark. This low sensitivity is not surprising, since many of the hand-annotated complexes were created directly from single co-immunoprecipitation results, which are not highly interconnected in the spoke model. For example, Cdc3 was used as bait to co-immunoprecipitate Cdc10, Cdc11, Cdc12 and Ydl225w. A complex was annotated as containing these five proteins, but only Cdc3 was used as bait. If more elements of a complex are used as baits, the proteins become more interconnected and more readily predicted by MCODE. A good example of this is the Arp2/3 complex, which is highly conserved in eukaryotes and is involved in actin cytoskeleton rearrangement. The structure of this complex is known by X-ray crystallography [33], thus actual protein-protein interactions from the structure can be matched up to the co-immunoprecipitation results. MCODE predicted all seven components of the Arp2/3 complex crystal structure and five extra proteins using the optimized parameters. Six out of the seven Arp2/3 subunits were used as baits by Gavin et al., and the resulting benchmark complex included the five extra proteins that MCODE also predicted (Nog2, Pfk1, Prt1, Cct8 and Cct5) that are not in the crystal structure. Cct5 and Cct8 are known to be involved in actin assembly, but Nog2, Pfk1 and Prt1 are not. These extra proteins likely represent non-specific binding in the experimental approach. These two cases are shown diagrammatically in Figure 3. Interestingly, using the haircut parameter would remove all five extra proteins that are not in the crystal structure, leaving only the seven that are present. This shows that while the parameter optimization allows maximum matching of the hand-annotated known complexes, these complexes may not all be physiologically relevant, and thus another parameter set may better predict 'real' complexes.

To explore the effect of certain MCODE parameters on resulting predicted complexes, various features of these complexes were examined while changing specific parameters and keeping all else constant. Linearly increasing the VWP parameter increased the size of the predicted complexes exponentially while reducing the number of complexes predicted in a linear fashion. Figure 4 shows this effect with both fluff and haircut parameters turned off. At high VWP values, very large complexes were predicted, and these encompassed most of the data set, thus were not very useful.

Because using haircut = TRUE would have led MCODE to predict the Arp2/3 complex perfectly (according to the crystal structure, as discussed above), we examined whether the haircut parameter has any general effect on the number of matched predicted complexes. Setting haircut = TRUE had no significant effect on the number of complexes predicted at high ω thresholds, but generally reduced the number of matched known complexes at low ω thresholds (0 to 0.1) compared to haircut = FALSE. Since the haircut = TRUE option removes less-connected proteins on the fringe of a predicted complex, and this reduces the number of predicted complexes with low overlap scores, these fringe proteins likely contribute to low-level overlap (<0.2 ω) with the known complexes.

We also investigated the effect of changing the fluff density threshold, with fluff = TRUE, on the number of matched benchmark complexes. Linearly increasing the fluff density threshold in the MCODE post-processing step linearly decreased the number of matched complexes above an overlap score of 0.2.

## Evaluation of MCODE using MIPS data set of protein interactions and complexes

Since the Gavin et al. data set was developed by only one group using a single experimental method, it may not accurately represent protein complex knowledge for yeast. The MIPS protein complex catalogue is a curated set of 260 protein complexes for yeast that was compiled from the literature and is thus a more realistic data set comprised of varied experiments from many labs using different techniques. After filtering away 50 'complexes' each composed of a single protein and 2 highly similar complexes, we were left with 208 complexes for the MIPS known set. This set did not include information from the recent large-scale mass spectrometry studies [6,7]. While the MIPS complex catalogue may be incomplete, it is currently the best available public resource for yeast protein complexes that we are aware of.

MCODE was run again with a full combination of parameters, this time over a set of 9,088 protein-protein interactions among 4,379 proteins, which did not include the recent large-scale mass spectrometry studies but included all interactions from the MIPS, YPD and PreBIND databases as well as from the majority of large-scale yeast two-hybrid experiments to date [2-4,10,34]. This interaction set is termed 'Pre HTMS'. All of the interactions in this set were published before the last update specified on the MIPS protein complex catalogue, and many are included in the MIPS protein interaction table; thus we assumed that the MIPS complex catalogue took into account the information in the known interaction table. Protein complexes found by MCODE in this set were compared to the MIPS protein complex catalogue to evaluate how well MCODE performed at locating protein complexes *ab initio*.

The same evaluation of MCODE that was done using the Gavin et al. data set was performed with the MIPS data set. From this analysis, including specificity versus sensitivity plots (optimized sensitivity = ~0.27 and specificity = ~0.31), the MIPS complex benchmark optimized parameters were haircut = TRUE, fluff = TRUE, VWP = 0.1 and a fluff density threshold of 0.2. This result was stable up to a ω threshold of 0.6, after which it was difficult to evaluate the results, as there were generally too few predicted complexes above the high ω thresholds. This parameter combination led MCODE to predict 166 complexes, of which 52 matched 64 MIPS complexes with a ω of at least 0.2. Examining the ω distribution for this parameter set reveals that, even though this prediction is optimized, most of the predicted complexes do not show overlap with those in the known MIPS set (Figure 5). The complexes predicted here are also different from those predicted from the Gavin interaction data. Nine complexes have an overlap score above 0.2 between these two sets, with the highest overlap score being 0.43 and all the rest being below 0.27. This might signify that the MIPS complex catalogue is not complete, that there is not enough data in the dataset that MCODE was run on, or that a human-annotated definition of a complex does not perfectly match a graph density based definition.
The effect of the VWP parameter on complex size, and of the haircut and fluff parameters on the number of matched complexes, was very similar to that seen when evaluating MCODE on the Gavin complex benchmark.

## Effect of data set properties on MCODE

Since many large-scale protein interaction data sets from yeast are known to contain a high level of false positives [35], we examined the effect these might have on MCODE predictions. Sensitivity versus specificity was plotted for MCODE predictions, with parameters chosen to maximize these values at a ω threshold of 0.2, against the MIPS and Gavin complex benchmarks for the various data sets (Figure 6).

MCODE predictions on the high-throughput data sets, termed 'Gavin Spoke', 'Y2H' and 'HTP only' (see Methods), are about as specific as the literature-derived interaction data set, but not as sensitive (Figure 6A). MCODE predictions on interaction data sets containing the literature-derived benchmark, labelled 'Benchmark', 'Pre HTMS' and 'AllYeast', are generally more sensitive and specific than those containing just the large-scale interaction sets. Since the specificity drops from Benchmark to Pre HTMS to AllYeast, with increasing amounts of large-scale data, it could be argued that the addition of this data negatively affects MCODE. However, large-scale data is known to contain a high number of false positives, so it should be expected that these false positives would not randomly contribute to the formation of dense regions, which are highly unlikely to occur by chance (see below). More complexes should be predicted with the addition of the large-scale data, assuming this data explores previously unseen regions of the interactome, but the high number of false positives should limit the number of new complexes compared to the number of added interactions. The MIPS complex benchmark used here is not expected to contain complexes newly found in large-scale studies, explaining the decrease in specificity. This is exactly what occurs in our analysis. In an effort to further test the effect of large-scale data on MCODE prediction performance, the Benchmark interaction data set was augmented with interactions from large-scale experiments that only connect proteins in the Benchmark set with each other. Over 3,100 interactions were added to the Benchmark data set to create a set of over 6,400 interactions. MIPS complex benchmark optimized MCODE predicted 52 complexes matching 66 MIPS benchmark complexes, almost exactly the same number of complexes found using the Benchmark set by itself (Table 1).
These analyses strongly suggest that the addition of large-scale experimentally derived interactions does not unduly affect the prediction of complexes by MCODE.

Summary of MCODE Results with Best Parameters on Various Data Sets.

| Data Set | Number of Proteins | Number of Interactions | Number of Predicted Complexes | MCODE Complexes Predicted Above ω = 0.2 | Matched Benchmark Complexes | Complex Benchmark | Best MCODE Parameters |
|----|----|----|----|----|----|----|----|
| Gavin Spoke | 1363 | 3225 | 82 | 63 | 88 | Gavin | hFfT\0.05\0.05 |
| Gavin Spoke | 1363 | 3225 | 53 | 20 | 20 | MIPS | hTfT\0.1\0.35 |
| Pre HTMS | 4379 | 9088 | 158 | 21 | 28 | Gavin | hTfT\0\0.2 |
| Pre HTMS | 4379 | 9088 | 166 | 52 | 64 | MIPS | hTfT\0.1\0.2 |
| AllYeast | 4825 | 15143 | 209 | 52 | 76 | Gavin | hFfT\0\0.1 |
| AllYeast | 4825 | 15143 | 209 | 54 | 63 | MIPS | hTfT\0\0.1 |
| AllYeast | 4825 | 15143 | 203 | 80 | 150 | MIPS+Gavin | hTfT\0\0.15 |
| Benchmark | 1762 | 3310 | 141 | 23 | 30 | Gavin | hTfT\0\0.3 |
| Benchmark | 1762 | 3310 | 163 | 58 | 67 | MIPS | hTfT\0.1\0.05 |
| HTP Only | 4557 | 12249 | 138 | 46 | 77 | Gavin | hTfT\0.05\0.1 |
| HTP Only | 4557 | 12249 | 122 | 29 | 35 | MIPS | hTfT\0.05\0.15 |
| Y2H | 3847 | 6133 | 73 | 7 | 7 | Gavin | hTfT\0.2\0.1 |
| Y2H | 3847 | 6133 | 78 | 21 | 26 | MIPS | hTfT\0\0.1 |

Statistics and a summary of results are shown for the various data sets used to evaluate MCODE. 'Gavin Spoke' is the Gavin et al. data set represented as binary interactions using the spoke model; 'Pre HTMS' is the set of all yeast interactions not including the recent high-throughput mass spectrometry studies [6,7]; 'AllYeast' is the set of all yeast interactions that we could collect; 'Benchmark' is a set of interactions found in the literature from YPD, MIPS and PreBIND; 'HTP Only' is the combination of all large-scale and high-throughput yeast two-hybrid and mass spectrometry data sets; 'Y2H' is the set of all yeast two-hybrid results from large-scale and literature sources. See Methods for a full explanation of the data sets. The 'Best MCODE Parameters' are formatted as haircut True or False, fluff True or False\VWP\fluff density threshold.

It can be seen from Figure 6B that the Gavin complex benchmark set is biased towards the Gavin et al. spoke-modeled interaction data. This is expected and is the main reason why the less biased MIPS complex set is used throughout this work as a benchmark instead of the Gavin set.

Since the result of a co-immunoprecipitation experiment is a set of proteins, which we model as binary interactions using the spoke method, we wished to evaluate whether this affects complex prediction compared to an experimental system that generates purely binary interaction results, such as yeast two-hybrid. As can be seen in Table 1, MCODE does find known complexes in the 'Y2H' set of only yeast two-hybrid results, thus this set does contain dense regions that are known protein complexes. That said, the Y2H set is the least dense of all data sets examined here, so it is expected to have fewer dense regions of the network and thus fewer MCODE-predictable complexes per protein present in the set.
MCODE predicts a similar number of complexes, and finds a similar number of known complexes, in the Y2H and Gavin Spoke data sets, indicating that these data sets are not significantly different from each other in the number of dense network regions that they contain, even though they are different sizes. Taken together, the latter results and those in Figure 6B show that the spoke model is a reasonable representation of the Gavin et al. tandem affinity purification data.

## Predicting complexes in the yeast interactome

Given that MCODE performed reasonably well on test data, we decided to predict complexes in a much larger network. All machine-readable protein-protein interaction data from various data sets [2-7,10,13,14] were collected and integrated to form a non-redundant set of 15,143 experimentally determined yeast protein interactions encompassing 4,825 proteins, or approximately three quarters of the proteome. This set was termed 'AllYeast'. MCODE was parameter optimized, as above, using the MIPS benchmark. The best resulting parameter set was haircut = TRUE, fluff = TRUE, VWP = 0 and a fluff density threshold of 0.1. With these parameters, MCODE predicted 209 complexes, of which 54 matched 63 MIPS benchmark complexes above an overlap score of 0.2 (see Additional file 1). Complexes found in this manner should be further studied using MCODE in directed mode by specifying a seed vertex and trying different parameters, to examine how large a complex can get before seemingly biologically irrelevant proteins are added (see below).

Figure 5 shows that even when a large set of interactions is used as input to MCODE, most of the MCODE predicted complexes do not match well with known complexes in MIPS. The complex size distribution of MCODE predicted complexes matches the shape of the MIPS set, but the MCODE complexes are on average larger (average MIPS size = 6.0, average MCODE predicted size = 9.7). The average number of YPD and GO functional annotation terms per protein in an MCODE predicted complex is similar to that of MIPS complexes (Table 2). This seems to indicate that MCODE is predicting complexes that are functionally relevant. Also, closer examination of the top, middle and bottom five scoring MCODE complexes shows that MCODE can predict biologically relevant complexes (Table 3).

Average Number of YPD and GO Annotation Terms in Complex Sets.

| Data Set | YPD Functions | YPD Roles | GO Components | GO Processes |
|----|----|----|----|----|
| MCODE on All Yeast Interactions | 0.58 | 0.89 | 0.39 | 0.59 |
| MIPS Complex Database | 0.50 | 0.75 | 0.39 | 0.48 |
| MCODE Random Model (100 AllYeast network permutations) | 0.72 | 1.24 | 0.52 | 0.85 |

The average number of YPD and GO functional annotation terms per protein in an MCODE predicted complex is shown for MCODE predicted complexes on the AllYeast set, the MIPS complex database and the MCODE random model. A lower number indicates that the complexes from a set contain more functionally related proteins (or unannotated proteins). In the cases of multiple annotation, all terms are taken into account.
Even though there are multiple annotation terms per protein and a variable number of unannotated proteins per complex, these numbers should perform well in relative comparisons, based on the assumption that the distribution of the latter two factors is similar in each data set.

Statistics for Top, Middle and Bottom Five Scoring Optimized MCODE Predicted Complexes Found in the All Known Yeast Protein Interaction Data Set
| Complex Rank | Score | Proteins | Interactions | Density | Cell Role | Cell Localization | Protein Names |
|----|----|----|----|----|----|----|----|
| 1 | 10.04 | 46 | 236 | 0.22 | RNA processing/modification and protein degradation (26S Proteasome) | Nuclear | Dbf2, Ecm29, Gcn4, Hsm3, Hyp2, Lhs1, Mkt1, Nas6, Pre1, Pre2, Pre4, Pre5, Pre6, Pre7, Pre8, Pre9, Pup3, Rad23, Rad24, Rad50, Rfc3, Rfc4, Rpn1, Rpn10, Rpn11, Rpn12, Rpn13, Rpn3, Rpn4, Rpn5, Rpn6, Rpn7, Rpn8, Rpn9, Rpt1, Rpt2, Rpt3, Rpt4, Rpt5, Rpt6, Scl1, Ubp6, Ura7, Ygl004c, Yku70, Ypl070w |
| 2 | 9 | 19 | 90 | 0.51 | RNA processing/modification | Nuclear | Cft1, Cft2, Fip1, Fir1, Hca4, Mpe1, Pap1, Pcf11, Pfs2, Pta1, Pti1, Ref2, Rna14, Ssu72, Uba2, Ufd1, Yor179c, Ysh1, Yth1 |
| 3 | 7.72 | 56 | 220 | 0.14 | Pol II transcription | Nuclear | Ada2, Adr1, Ahc1, Cdc23, Cdc36, Epl1, Esa1, Fet4, Fun19, Gal4, Gcn5, Hac1, Hfi1, Hhf2, Hht1, Hht2, Ire1, Luc7, Med7, Myo4, Ngg1, Pcf11, Pdr1, Prp40, Rna14, Rpb2, Rpo21, Sap185, Sgf29, Sgf73, Spt15, Spt20, Spt3, Spt7, Spt8, Srb6, Swi5, Taf1, Taf10, Taf11, Taf12, Taf13, Taf14, Taf2, Taf3, Taf5, Taf6, Taf7, Taf8, Taf9, Tra1, Ubp8, Yap1, Yap6, Ybr270c, Yng2 |
| 4 | 7.58 | 18 | 72 | 0.44 | Cell cycle control, protein degradation, mitosis (Anaphase Promoting Complex) | Nuclear | Apc1, Apc11, Apc2, Apc4, Apc5, Apc9, Cdc16, Cdc23, Cdc26, Cdc27, Dmc1, Doc1, Leu3, Rpt1, Sic1, Spc29, Spt2, Ybr270c |
| 5 | 7 | 15 | 56 | 0.52 | Vesicular transport (TRAPP Complex) | Golgi | Bet1, Bet3, Bet5, Fks1, Gsg1, Gyp6, Kre11, Sec22, Trs120, Trs130, Trs20, Trs23, Trs31, Trs33, Uso1 |
| 102 | 3 | 3 | 3 | 1 | RNA splicing | Nuclear | Msl5, Mud2, Smy2 |
| 103 | 3 | 3 | 3 | 1 | Signal transduction, cell cycle control, DNA repair, DNA synthesis | Nuclear | Ptc2, Rad53, Ydr071c |
| 104 | 3 | 3 | 3 | 1 | Cell cycle control, mating response | Unknown | Far3, Vps64, Ynl127w |
| 105 | 3 | 3 | 3 | 1 | Chromatin/chromosome structure | Nuclear | Gbp2, Hpr1, Mft1 |
| 106 | 3 | 3 | 3 | 1 | Pol II transcription | Nuclear | Ctk1, Ctk2, Ctk3 |
| 205 | 2 | 3 | 4 | 1 | Vesicular transport | ER | Rim20, Snf7, Vps4 |
| 206 | 2 | 3 | 4 | 1 | Protein translocation | Cytoplasmic | Srp14, Srp21, Srp54 |
| 207 | 2 | 3 | 4 | 1 | Protein translocation | Cytoplasmic | Srp54, Srp68, Srp72 |
| 208 | 2 | 3 | 4 | 1 | Energy generation | Mitochondrial | Atp1, Atp11, Atp2 |
| 209 | 2 | 4 | 5 | 0.67 | Nuclear-cytoplasmic and vesicular transport | Varied | Kap123, Nup145, Sec7, Slc1 |

Score is defined as the product of the complex subgraph density and the number of vertices (proteins) in the complex subgraph (D~C~ × |V|). This ranks larger, more dense complexes higher in the results. Density is calculated using the "loop" formula if homodimers exist in the complex; otherwise the "no loop" formula is used. The cell role column is a manual combination of annotation terms for the proteins reported in the complex.

Many of the 209 predicted complexes are of size 2 (9 predicted complexes) or 3 (54 predicted complexes). Complexes of this size may not be significant, since it is easy to create high-density subgraphs of size 2 or 3, but it becomes combinatorially more difficult to randomly create high-density subgraphs as the size of the subgraph increases. To examine the relevance of these small predicted complexes of size 2 or 3, we calculated the sensitivity and specificity of the optimized MCODE predictions against the MIPS complex benchmark while disregarding the small complexes. First, complexes of size 2, then of size 3, were removed from the optimized MCODE predicted complex set. Removing each of these sets independently resulted in only small sensitivity and specificity changes. Because both sets overlap the MIPS benchmark, small complexes have been reported as predictions. Also, because MCODE found these small complexes in regions of high local density, they may be good cores for further examination with MCODE in directed mode, especially since the haircut option was turned on here to produce them.

Complexes that are larger and denser are ranked higher by MCODE, and these generally correspond to known complexes (see below). Interestingly, some MCODE complexes contain unknown proteins that are highly connected to known complex subunits. For example, the second highest ranked MCODE complex is involved in RNA processing/modification and contains the known polyadenylation factor I complex (Cft1, Cft2, Fip1, Pap1, Pfs2, Pta1, Ysh1, Yth1 and Ykl059c). Seven other proteins involved mainly in RNA processing/modification (Fir1, Hca4, Pcf11, Pti1, Ref2, Rna14, Ssu72) and protein degradation (Uba2 and Ufd1) are highly connected within this predicted complex. Two unknown proteins, Pti1 and Yor179c, are highly connected to RNA processing/modification proteins and are therefore likely involved in the same process (Figure 7). Pti1 may be an unknown component of the polyadenylation factor I complex. The 23^rd^ highest ranked predicted complex is interesting in that it is involved in cell polarity and cytokinesis and contains two proteins of unknown function, Yhr033w and Yal027w. Yal027w interacts with two kinases, Gin4 and Kcc4, which in turn interact with the components of the Septin complex (Cdc3, Cdc10, Cdc11 and Cdc12) (Figure 8).

## Significance of MCODE predictions

Naïvely, the chance of randomly picking a known protein complex from a protein interaction network depends on the size of the complex and the network. It is easier to pick out a smaller known complex by chance from a smaller network. For instance, in our network of 15,143 interactions among 4,825 proteins, the chance of picking a specific known complex of size three is about one in 1.9 × 10^10^ (4,825 choose 3). A more realistic model would assume that the proteins are connected and thus would only consider complex choices of size three where all three proteins are connected. The number of choices then depends on the topology of the network. In our large network, there are 6,799 fully connected subnetworks of size three and 313,057 subnetworks of size three with only two interactions (from the triadic census feature of Pajek). Thus our chance of picking a more realistic complex is one out of 319,856 (1/(6,799 + 313,057) = 3.1 × 10^-6^). These numbers are easy to verify, as in the check below.
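A quick sanity check of this counting argument (the triad counts are taken from the Pajek census quoted above):

```python
# Verify the naive and connected-triple odds quoted in the text.
from math import comb

print(comb(4825, 3))                  # 18,709,863,900 -> ~1.9e10 protein triples
connected_triples = 6_799 + 313_057   # triangles + two-edge paths (Pajek census)
print(1 / connected_triples)          # ~3.1e-6: one specific connected triple
```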
As the size of the complex increases, the number of possible complex topologies increases exponentially and, in a connected network of some reasonable density, so does the number of possible subgraphs that could represent a complex. The density of our large protein interaction network is 0.0013 and it is mostly connected (4,689 proteins are in one connected component). Thus, it is expected that if a complex found in a network with MCODE matches a known complex, the result is highly significant. To understand the significance of complex prediction further, the topology of the protein interaction network would have to be understood in general, in order to build a null model to compare against.

Recent research on modeling complex systems [21,25,27] has found that networks such as the world wide web, metabolic networks [26] and protein-protein interaction networks [36] are scale-free. That is, the connectivity distribution of the vertices of the graph follows a power law, with many vertices of low degree and few vertices of high degree. Scale-free networks are known to have large clustering coefficients, or clustered regions of the graph. In biological networks, at least in yeast, these clustered regions seem to correspond to molecular complexes, and these subgraphs are what MCODE is designed to find.

To test the significance of clustered regions in biological networks, 100 random permutations of the large set of all 15,143 yeast interactions were made. If the graph to be randomized is considered as a set of edges between two vertices (*v*~1~, *v*~2~), a network permutation is made by randomly permuting the set of all *v*~2~ vertices. The random networks have the same number of edges and vertices as the original network and follow a power-law connectivity distribution, as do the original data sets [37]. A minimal sketch of this permutation scheme is shown below.
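A minimal sketch of this permutation, treating the network as a list of (v1, v2) edge tuples:

```python
# Null model: shuffle the second endpoints, preserving vertex and edge counts
# (and the degree sequence of the first endpoints).
import random

def permute_network(edges, seed=None):
    rng = random.Random(seed)
    v2s = [b for _, b in edges]
    rng.shuffle(v2s)
    return [(a, b) for (a, _), b in zip(edges, v2s)]
```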
Running MCODE with the same parameters as on the original network (haircut = TRUE, fluff = TRUE, VWP = 0 and a fluff density threshold of 0.1) on the 100 random networks resulted in an average of 27.4 (SD = 4.4) complexes per network. The size distribution of complexes found by MCODE did not match that of the complexes found in the original network, as some complexes found in the random networks were composed of >1500 proteins. One random network that had an approximately average number of predicted complexes (27) was parameter optimized using the MIPS benchmark to see how parameter choice affects the size distribution and number of predicted complexes. Parameters of haircut = TRUE, fluff = TRUE, VWP = 0.1 and a fluff density threshold of zero produced the maximal number of 81 complexes for this network, but these complexes were composed of on average 27 proteins (without counting an outlier complex of size 1961), which is much larger than normal (e.g. larger than the MIPS set average of 6.0). None of these predicted complexes matched any MIPS complexes above an overlap score of 0.1. Also, the random network complexes had a much higher average number of YPD and GO annotation terms per protein per complex than for MIPS or MCODE on the original network (Table 2). This indicates, as expected, that the random network complexes are composed of a higher proportion of unrelated proteins than complexes in the original network. Thus, the number, size and functional composition of complexes that MCODE predicts in the large set of all yeast interactions are highly unlikely to occur by chance.

To evaluate the effectiveness of our scoring scheme, which scores larger, more dense complexes higher than smaller, more sparse complexes, we examined the accuracy of MCODE predictions at various score thresholds. As the score threshold for inclusion of complexes is increased, fewer complexes are included, but a higher percentage of the included complexes match complexes in the benchmark. This is at the expense of sensitivity, as many benchmark-matching complexes are not included at higher score thresholds (Figure 9). For example, of the ten predicted complexes with an MCODE score greater than or equal to six, nine match a known complex in either the MIPS or Gavin benchmark above a 0.2 threshold overlap score, yielding an accuracy of 90%. All five of the complexes with an MCODE score greater than or equal to seven matched known complexes. Thus, complexes that score highly on our simple density-based scoring scheme are very likely to be real.

## Directed mode of MCODE

To simulate an obvious example where the directed mode of MCODE would be useful, MCODE was run with relaxed parameters (haircut = TRUE, fluff = TRUE, VWP = 0.05 and a fluff density threshold of 0.2) compared to the best parameters on the AllYeast network. The resulting fourth highest ranked complex, when visualized, shows two clustered components and represents two protein complexes, the proteasome and an RNA processing complex, both found in the nucleus (Figure 10). This is an example where a lower VWP parameter would have been superior, since it would have divided this large complex into two more functionally related complexes. The highest weighted vertices in the center of each of the two dense regions in Figure 10 are the Rpt1 and Lsm4 proteins. MCODE was run in directed mode starting with these two proteins over a range of VWP parameters from 0 to 0.2, at 0.05 increments. For Lsm4, the parameter set of haircut = TRUE, fluff = FALSE, VWP = 0 was used to find a core complex, which contained 9 proteins fully connected to each other (Dcp1, Kem1, Lsm2, Lsm3, Lsm4, Lsm5, Lsm6, Lsm7 and Pat1). Above this VWP parameter, the core complex branched out into proteasome subunit proteins, which are not part of the Lsm complex (see Figure 11A). Using this VWP parameter, combinations of haircut and fluff parameters were used to further expand the core complex. This process was stopped when the predicted complexes began to include proteins of sufficiently different known biological function from the seed vertex. Proteins such as Vam6 and Yor320c were included in the complex at moderate fluff parameters (0.4-0.6), but not at higher fluff parameters, and these are known to be localized in membranes outside of the nucleus, and thus are likely not functionally related to the Lsm complex proteins. Therefore, the 9 proteins listed above were decided to be the final complex (Figure 11B).
Using this same method of known biological role \"titration\" on Rpt1 found a complex of 34 proteins (Gal4, Gcn4, Hsm3, Lhs1, Nas6, Pre1, Pre2, Pre3, Pre4, Pre5, Pre6, Pre7, Pre9, Pup3, Rpn10, Rpn11, Rpn13, Rpn3, Rpn5, Rpn6, Rpn7, Rpn8, Rpn9, Rpt1, Rpt2, Rpt3, Rpt4, Rpt6, Rri1, Scl1, Sts1, Ubp6, Ydr179c, Ygl004c) and 160 interactions using the parameter set haircut = TRUE, fluff = TRUE, VWP = 0.2 and a fluff density threshold of 0.3. Two regions of density can be seen here, corresponding to the two known subunits of the 26S proteasome. The 20S proteolytic subunit of the proteasome comprises 15 proteins (Pre1 to Pre10, Pup1, Pup2, Pup3, Scl1 and Ump1), of which Pre7, Pre8, Pre10, Pup1, Pup2 and Ump1 are not found with MCODE. The 19S regulatory subunit of the proteasome is known to have 21 subunits (Nas6, Rpn1 to Rpn13, Rpt1 to Rpt6 and Ubp6), of which Rpn1, Rpn2, Rpn4, Rpn12 and Rpt5 are not found with MCODE. Known complex components not found by MCODE do not lie in regions of sufficiently high local density in the interaction network, possibly because not enough experiments involving these proteins are present in our data set. Figure 11C<\/a> shows the final Rpt1-seeded complex. Of note, Ygl004c has unknown function and binds to almost every Rpt and Rpn protein in the complex, although all of these interactions were from a single immunoprecipitation experiment \[6\]. As well, Rri1 and Ydr179c have unknown function and both bind to each other and to Rpn5. Thus one would predict that these three unknown proteins function with, or as part of, the 26S proteasome. The protein Hsm3 binds to eight other 19S subunits and is involved in DNA mismatch repair pathways, but is not known to be part of the proteasome, although all of these Hsm3 interactions are from a single large-scale experiment \[7\]. Interestingly, Gal4, a transcription factor involved in galactose metabolism, is found to be part of the proteasome complex. While this metabolic functionality seems unrelated to protein degradation, it has recently been shown that the binding is physiologically relevant \[38\]. These cases illustrate the possible unreliability of both functional annotation and interaction data, but also show that seemingly unrelated proteins should not be immediately discounted when MCODE places them in a complex.\n\nNotably, the known topology of the 26S proteasome \[39\] compares favourably with the complex visualization of Figure 11C<\/a>, without considering stoichiometry. Thus, if enough interactions are known, visualizing complexes may reveal the rough structural outline of large complexes. This should be expected when dealing with actual physical protein-protein interactions, since there are few allowed topologies for large complexes given the specific set of defining interactions and the steric clashes between protein subunits.\n\n## Complex connectivity\n\nMCODE may also be used to examine the connectivity and relationships between molecular complexes. Once a complex is known using the directed mode, the MCODE parameters can be relaxed to allow branching out into other complexes. The MCODE directed-mode preprocessing step must also be turned off to allow MCODE to branch into other connected complexes, which may reside in denser regions of the graph than the seed vertex. As an example, this was done with the Lsm4-seeded complex (Figure 12<\/a>). MCODE parameters were relaxed to haircut = TRUE, fluff = FALSE, VWP = 0.2, although they could be relaxed further for greater extension out into the network.
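In terms of the sketch above, this branching corresponds simply to weakening the density control. A hypothetical usage, reusing the `seeded_complex` function from the earlier sketch on a toy graph with two dense cores:

```python
import networkx as nx

# Two 8-cliques joined by a short path: a toy stand-in for two dense
# complexes connected in a larger network.
G = nx.barbell_graph(8, 2)

core     = seeded_complex(G, 0, density_threshold=0.9)  # stays in one core
branched = seeded_complex(G, 0, density_threshold=0.3)  # crosses the bridge
assert core <= branched   # relaxing the control only ever widens the result
```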
# Discussion\n\nThis method represents an initial step in taking advantage of the protein function data being generated by many large-scale protein interaction studies. As the experimental methods are further developed, an increasing amount of data will be produced, which will require computational methods for efficient interpretation. The algorithm described here allows the automated prediction of protein complexes from qualitative protein-protein interaction data and is thus able to help predict the function of unknown proteins and to aid in the understanding of the functional connectivity of molecular complexes in the cell. The general nature of this method may allow complex prediction for molecules other than proteins as well, for example metabolic complexes that include small molecules.\n\nMCODE cannot stand alone in this task; it must be combined with a graph visualization system to ease the understanding of the relationships among molecules in the data set. We use the Pajek program for large network analysis \[40\] with the Kamada-Kawai graph layout algorithm \[41\]. Kamada-Kawai models the edges in the graph as springs, randomly places the vertices in a high-energy state and then attempts to minimize the energy of the system over a number of time steps. The result is that the Euclidean distance, here in a plane, is close to the graph-theoretic, or path, distance between the vertices, so the vertices are visually clustered based on connectivity (a small layout sketch appears below). Biologically, this visualization can allow one to see the rough structural outline of large complexes, if enough interactions are known, as evidenced in the proteasome complex analysis above (Figure 11C<\/a>).\n\nWhen analyzing the resulting data, it is important to note and understand the limitations of the current experimental methods (e.g. yeast two-hybrid and co-immunoprecipitation) and of the protein interaction networks these techniques generate. One common class of false-positive interactions arising from many different kinds of experimental methods is that of indirect interactions. For instance, an interaction may be seen between two proteins using a specific experimental method, but in reality those proteins do not physically bind each other; one or more other molecules, generally part of the same complex, mediate the observed interaction. As can be seen for the Arp2\/3 complex shown in Figure 3<\/a>, when pairwise interactions between all combinations of proteins in a complex are studied, this creates a very dense graph. Interestingly, this false-positive effect is normally considered a disadvantage, but it is an advantage with MCODE, as it increases the density in the region of the graph containing a complex, which can then be more easily predicted.\n\nApart from the experimental factors that lead to false-positive and false-negative interactions, representational limitations also exist computationally. Temporal and spatial information is not currently described in interaction networks. A complex found by the MCODE approach may not actually exist even though all of the component proteins bind each other *in vitro*. Those proteins may never be present at the same time and place. For example, molecular complexes that perform different functions sometimes have common subunits, as with the three types of eukaryotic RNA polymerases.
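As promised above, here is a minimal sketch of the layout idea. It assumes networkx and matplotlib (the paper itself uses Pajek, not these libraries), with a toy graph standing in for a network containing two dense complexes:

```python
import networkx as nx
import matplotlib.pyplot as plt

# Two 8-cliques joined by a short path: a stand-in for two dense complexes
# connected in a larger interaction network.
G = nx.barbell_graph(8, 2)

# Kamada-Kawai minimises a spring-energy function so that Euclidean
# distances in the plane approximate graph path distances; densely
# connected regions therefore appear as visual clusters.
pos = nx.kamada_kawai_layout(G)
nx.draw(G, pos, node_size=60)
plt.show()
```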
Complex stoichiometry, another important aspect of biological data, is not represented either. While it is possible to include full stoichiometry in a graph representation of a biomolecular interaction network, many experimental methods do not provide this information, so a homo-multimeric complex is normally represented as a simple homodimer. When an experiment does provide stoichiometry information, it is not stored in most current databases, such as MIPS and YPD. Thus, one is forced to return to the primary literature to extract the data, an extremely time-consuming task for large data sets.\n\nSome quantitative and statistical information is available when integrating the results of large-scale approaches, and this is not used in our current graph model. For instance, the number of different types of experiments that find the same interaction, the quality of those experiments, the date each experiment was conducted (newer methods may be superior in certain aspects) and other factors that pertain to the reliability of the interaction could all be considered to determine a reliability index, or p-value, on edges in the graph (a toy illustration is given below). For example, one may wish to rank results published in high-impact journals above other journals (or vice versa) and rank classical purification methods above high-throughput yeast two-hybrid techniques when determining the quality of the interaction data. It may also be possible to weight vertices on the graph by other quality criteria, such as whether a protein is hypothetical (from a gene prediction) or whether it is known to be expressed at a particular time and place in the cell. For example, if one were interested in a certain stage of the cell cycle, proteins that are known to be absent at that stage could be reduced in weight (VWP in the case of MCODE) compared to proteins that are present. It should be noted that any weighting scheme that tries to assess the quality of an interaction might make false assumptions that would prevent the discovery of new and interesting data.\n\nThis paper shows that the structure of a biological network can define complexes, which can be seen as dense regions, an effect that may be attributed to indirect interactions accumulating in the literature. Thus, interaction data taken out of context may be erroneous. For instance, if one has a collection of protein interactions from a specific complex, gathered from various experiments done at different times in different labs, that together form a clique, and one chooses a single interaction from this clique, how can one verify whether it is direct or indirect? We would only begin to know if we had a very detailed description of each experiment from the original papers, from which we could judge the amount and quality of the work that went into measuring each interaction. Thus, with only a qualitative view of interactions, and in reference to Dobzhansky \[42\], nothing in the biomolecular interaction network would make sense except in light of molecular complexes and the functional connections between them. If one had a highly detailed representation of each interaction, including time, place, experimental condition, number of experiments, binding sites, chemical actions and chemical state information, one would be able to computationally delve into molecular complexes to resolve topology, structure, function and mechanism down to the atomic level. This information would also help to judge the biological relevance of an interaction. Thus, we require databases like BIND \[15\] to store this information. The integration of known qualitative and quantitative molecular interaction data in a machine-readable format should allow increasingly accurate protein interaction, molecular complex and pathway prediction, including actual binding site and mechanism information in a sequence and structural context.
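As a toy illustration of the reliability-index idea mentioned above (the data, the saturating formula and the cut-off are invented for the example, not taken from the paper):

```python
from collections import Counter

# Toy observations: (protein1, protein2, experimental method).
observations = [
    ("Lsm4", "Pat1", "y2h"),
    ("Lsm4", "Pat1", "coip"),   # same pair seen by a second method
    ("Lsm4", "Vam6", "coip"),
]

# Count independent sightings of each pair and map the count to a
# saturating confidence score in (0, 1): one sighting -> 0.5, two -> 0.75.
counts = Counter((a, b) for a, b, _ in observations)
edge_weights = {pair: 1 - 0.5 ** n for pair, n in counts.items()}

print(edge_weights)
# {('Lsm4', 'Pat1'): 0.75, ('Lsm4', 'Vam6'): 0.5}

# A downstream algorithm could then, for example, drop edges below 0.6
# before searching for dense regions.
reliable = {pair for pair, w in edge_weights.items() if w >= 0.6}
```

A real scheme would also fold in method quality, publication venue and experiment date, as the text suggests, but the principle of converting such evidence into an edge weight is the same.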
Based on our scale-free network analysis, it would seem that real biological networks are organized differently from random models of scale-free networks, in that they have higher clustering coefficients around specific regions (complexes), and the vertices in these regions are related to each other by biological function. Thus, attempts to model biological networks and their evolution in a global way solely using the statistics of scale-free networks may not work; rather, modeling should take into account as much extant biological knowledge as possible.\n\nFuture work on MCODE could include researching different, possibly adaptive, vertex-scoring functions that take into account, for example, the local density of the network beyond the immediate neighborhood of a vertex, as well as the inclusion of functional annotation and p-values on edges. Time, space and stoichiometry should also be represented on networks and in visualization systems. The process of 'functional annotation titration' in the directed mode of MCODE could also be automated.\n\n# Conclusions\n\nMCODE effectively finds densely connected regions of a molecular interaction network, many of which correspond to known molecular complexes, based solely on connectivity data. That this approach to analyzing protein interaction networks performs well using minimal qualitative information implies that a large amount of knowledge lies buried in large protein interaction networks. More accurate data-mining algorithms and systems models could be constructed to understand and predict interactions, complexes and pathways by taking into account more existing biological knowledge; structured molecular interaction data resources such as BIND will be vital in creating such models.\n\n# Methods\n\n## Data sources\n\nAll protein interaction data sets from MIPS \[13\], Gene Ontology \[43\] and PreBIND were collected as described previously \[6\]. The YPD protein interaction data are from March 2001 and were originally requested from Proteome, Inc. Other interaction data sets are from BIND. A BIND yeast import utility was developed to integrate data from SGD \[12\], RefSeq \[44\], the Gene Registry, the list of essential genes from the yeast deletion consortium \[11\] and GO terms \[43\]. This database ensures proper matching of yeast gene names among the multiple data sets, which may use different names for the same genes. The yeast proteome used here is defined by SGD and RefSeq and contains 6,334 ORFs, including those on the mitochondrial chromosome. Before performing comparisons, the various interaction data sets were entered into a local instance of BIND as pairwise protein interaction records. The MIPS complex catalogue was downloaded in February 2002.\n\nThe protein interaction data sets used here were composed as follows. 'Gavin Spoke' is the spoke model of the raw purifications from Gavin et al. \[7\]. 'Y2H' is all known large-scale yeast two-hybrid data \[2-5,10\] combined with normal yeast two-hybrid results from MIPS.
'HTP Only' is only high-throughput or large-scale data \[2-7,10\]. The 'Benchmark' set was constructed from MIPS, YPD and PreBIND as previously described \[6\]. 'Pre HTMS' was composed of all yeast sets except the recent large-scale mass spectrometry data sets \[6,7\]. 'AllYeast' is the combination of all of the above data sets. All data sets are non-redundant.\n\n## Network visualization\n\nVisualization of networks was performed using the Pajek program for large network analysis \[40\], as described previously \[6,10\], using the Kamada-Kawai graph layout algorithm followed by manual vertex adjustments; figures were formatted using CorelDraw 10. Power-law analysis was also performed as previously described \[6\].\n\n# Authors' contributions\n\nGB conceived of the study and carried out all programming and analyses as a Ph.D. student in the lab of CH. CH supervised the study and provided valuable input for the evaluation analyses.\n\n# Supplementary Material\n\n###### Additional File 1\n\nAllYeastPredictedComplexes.zip: a Zip file containing Pajek .net and annotation files for all 209 complexes found using MCODE on the set of all yeast interactions reported here. Various MCODE report files are also included, as well as basic instructions for using Pajek.\n\n### Acknowledgements\n\nMichel Dumontier, Sherrie Kelly, Katerina Michalickova, Tony Pawson and Mike Tyers provided helpful discussion. This work was supported in part by grants from the Canadian Institutes of Health Research (CIHR), the Ontario Research and Development Challenge Fund and MDS-Sciex to C.H. G.D.B. is supported by an Ontario Graduate Scholarship (OGS).\n\nabstract: Pandemics such as the Black Death have altered the course of history. The ravages of the Spanish Flu pandemic were so terrible that at least one culture decided the most humane way of dealing with the aftermath was simply to ignore it\u2014to pretend the disease had not happened. Could a new pandemic possibly be worse than the Black Death or the Spanish Flu? Could it be as bad as the Scarlet Plague of Jack London's story?\nauthor: Stephen Webb\ndate: 2019-01-08\nreferences:\ntitle: Pandemic\n\n# The Scarlet Plague (Jack London)\n\n## I\n\nTHE way led along upon what had once been the embankment of a railroad. But no train had run upon it for many years. The forest on either side swelled up the slopes of the embankment and crested across it in a green wave of trees and bushes. The trail was as narrow as a man's body, and was no more than a wild-animal runway. Occasionally, a piece of rusty iron, showing through the forest-mould, advertised that the rail and the ties still remained.
In one place, a ten-inch tree, bursting through at a connection, had lifted the end of a rail clearly into view. The tie had evidently followed the rail, held to it by the spike long enough for its bed to be filled with gravel and rotten leaves, so that now the crumbling, rotten timber thrust itself up at a curious slant. Old as the road was, it was manifest that it had been of the monorail type.\n\nAn old man and a boy travelled along this runway. They moved slowly, for the old man was very old, a touch of palsy made his movements tremulous, and he leaned heavily upon his staff. A rude skullcap of goat-skin protected his head from the sun. From beneath this fell a scant fringe of stained and dirty-white hair. A visor, ingeniously made from a large leaf, shielded his eyes, and from under this he peered at the way of his feet on the trail. His beard, which should have been snow-white but which showed the same weather-wear and camp-stain as his hair, fell nearly to his waist in a great tangled mass. About his chest and shoulders hung a single, mangy garment of goat-skin. His arms and legs, withered and skinny, betokened extreme age, as well as did their sunburn and scars and scratches betoken long years of exposure to the elements.\n\nThe boy, who led the way, checking the eagerness of his muscles to the slow progress of the elder, likewise wore a single garment\u2014a ragged-edged piece of bear-skin, with a hole in the middle through which he had thrust his head. He could not have been more than twelve years old. Tucked coquettishly over one ear was the freshly severed tail of a pig. In one hand he carried a medium-sized bow and an arrow.\n\nOn his back was a quiverful of arrows. From a sheath hanging about his neck on a thong, projected the battered handle of a hunting knife. He was as brown as a berry, and walked softly, with almost a catlike tread. In marked contrast with his sunburned skin were his eyes\u2014blue, deep blue, but keen and sharp as a pair of gimlets. They seemed to bore into all about him in a way that was habitual. As he went along he smelled things, as well, his distended, quivering nostrils carrying to his brain an endless series of messages from the outside world. Also, his hearing was acute, and had been so trained that it operated automatically. Without conscious effort, he heard all the slight sounds in the apparent quiet\u2014heard, and differentiated, and classified these sounds\u2014whether they were of the wind rustling the leaves, of the humming of bees and gnats, of the distant rumble of the sea that drifted to him only in lulls, or of the gopher, just under his foot, shoving a pouchful of earth into the entrance of his hole.\n\nSuddenly he became alertly tense. Sound, sight, and odor had given him a simultaneous warning. His hand went back to the old man, touching him, and the pair stood still. Ahead, at one side of the top of the embankment, arose a crackling sound, and the boy's gaze was fixed on the tops of the agitated bushes. Then a large bear, a grizzly, crashed into view, and likewise stopped abruptly, at sight of the humans. He did not like them, and growled querulously. Slowly the boy fitted the arrow to the bow, and slowly he pulled the bowstring taut. But he never removed his eyes from the bear.\n\nThe old man peered from under his green leaf at the danger, and stood as quietly as the boy.
For a few seconds this mutual scrutinizing went on; then, the bear betraying a growing irritability, the boy, with a movement of his head, indicated that the old man must step aside from the trail and go down the embankment. The boy followed, going backward, still holding the bow taut and ready. They waited till a crashing among the bushes from the opposite side of the embankment told them the bear had gone on. The boy grinned as he led back to the trail.\n\n\"A big un, Granser,\" he chuckled.\n\nThe old man shook his head.\n\n\"They get thicker every day,\" he complained in a thin, undependable falsetto. \"Who'd have thought I'd live to see the time when a man would be afraid of his life on the way to the Cliff House. When I was a boy, Edwin, men and women and little babies used to come out here from San Francisco by tens of thousands on a nice day. And there weren't any bears then. No, sir. They used to pay money to look at them in cages, they were that rare.\"\n\n\"What is money, Granser?\"\n\nBefore the old man could answer, the boy recollected and triumphantly shoved his hand into a pouch under his bear-skin and pulled forth a battered and tarnished silver dollar. The old man's eyes glistened, as he held the coin close to them.\n\n\"I can't see,\" he muttered. \"You look and see if you can make out the date, Edwin.\"\n\nThe boy laughed.\n\n\"You're a great Granser,\" he cried delightedly, \"always making believe them little marks mean something.\"\n\nThe old man manifested an accustomed chagrin as he brought the coin back again close to his own eyes.\n\n\"2012,\" he shrilled, and then fell to cackling grotesquely. \"That was the year Morgan the Fifth was appointed President of the United States by the Board of Magnates. It must have been one of the last coins minted, for the Scarlet Death came in 2013. Lord! Lord!\u2014think of it! Sixty years ago, and I am the only person alive today that lived in those times. Where did you find it, Edwin?\"\n\nThe boy, who had been regarding him with the tolerant curiousness one accords to the prattlings of the feeble-minded, answered promptly.\n\n\"I got it off of Hoo-Hoo. He found it when we was herdin' goats down near San Jos\u00e9 last spring. Hoo-Hoo said it was *money*. Ain't you hungry, Granser?\"\n\nThe ancient caught his staff in a tighter grip and urged along the trail, his old eyes shining greedily.\n\n\"I hope Hare-Lip's found a crab\u2026 or two,\" he mumbled. \"They're good eating, crabs, mighty good eating when you've no more teeth and you've got grandsons that love their old grandsire and make a point of catching crabs for him. When I was a boy\u2014\"\n\nBut Edwin, suddenly stopped by what he saw, was drawing the bowstring on a fitted arrow. He had paused on the brink of a crevasse in the embankment. An ancient culvert had here washed out, and the stream, no longer confined, had cut a passage through the fill. On the opposite side, the end of a rail projected and overhung. It showed rustily through the creeping vines which overran it. Beyond, crouching by a bush, a rabbit looked across at him in trembling hesitancy. Fully fifty feet was the distance, but the arrow flashed true; and the transfixed rabbit, crying out in sudden fright and hurt, struggled painfully away into the brush. The boy himself was a flash of brown skin and flying fur as he bounded down the steep wall of the gap and up the other side. His lean muscles were springs of steel that released into graceful and efficient action. 
A hundred feet beyond, in a tangle of bushes, he overtook the wounded creature, knocked its head on a convenient tree-trunk, and turned it over to Granser to carry.\n\n\"Rabbit is good, very good,\" the ancient quavered, \"but when it comes to a toothsome delicacy I prefer crab. When I was a boy\u2014\"\n\n\"Why do you say so much that ain't got no sense?\" Edwin impatiently interrupted the other's threatened garrulousness.\n\nThe boy did not exactly utter these words, but something that remotely resembled them and that was more guttural and explosive and economical of qualifying phrases. His speech showed distant kinship with that of the old man, and the latter's speech was approximately an English that had gone through a bath of corrupt usage.\n\n\"What I want to know,\" Edwin continued, \"is why you call crab 'toothsome delicacy'? Crab is crab, ain't it? No one I never heard calls it such funny things.\"\n\nThe old man sighed but did not answer, and they moved on in silence. The surf grew suddenly louder, as they emerged from the forest upon a stretch of sand dunes bordering the sea. A few goats were browsing among the sandy hillocks, and a skin-clad boy, aided by a wolfish-looking dog that was only faintly reminiscent of a collie, was watching them. Mingled with the roar of the surf was a continuous, deep-throated barking or bellowing, which came from a cluster of jagged rocks a hundred yards out from shore. Here huge sea-lions hauled themselves up to lie in the sun or battle with one another. In the immediate foreground arose the smoke of a fire, tended by a third savage-looking boy. Crouched near him were several wolfish dogs similar to the one that guarded the goats.\n\nThe old man accelerated his pace, sniffing eagerly as he neared the fire. \"Mussels!\" he muttered ecstatically. \"Mussels! And ain't that a crab, Hoo-Hoo? Ain't that a crab? My, my, you boys are good to your old grandsire.\"\n\nHoo-Hoo, who was apparently of the same age as Edwin, grinned.\n\n\"All you want, Granser. I got four.\"\n\nThe old man's palsied eagerness was pitiful. Sitting down in the sand as quickly as his stiff limbs would let him, he poked a large rock-mussel from out of the coals. The heat had forced its shells apart, and the meat, salmon-colored, was thoroughly cooked. Between thumb and forefinger, in trembling haste, he caught the morsel and carried it to his mouth. But it was too hot, and the next moment was violently ejected. The old man spluttered with the pain, and tears ran out of his eyes and down his cheeks.\n\nThe boys were true savages, possessing only the cruel humor of the savage. To them the incident was excruciatingly funny, and they burst into loud laughter. Hoo-Hoo danced up and down, while Edwin rolled gleefully on the ground. The boy with the goats came running to join in the fun.\n\n\"Set 'em to cool, Edwin, set 'em to cool,\" the old man besought, in the midst of his grief, making no attempt to wipe away the tears that still flowed from his eyes. \"And cool a crab, Edwin, too. You know your grandsire likes crabs.\"\n\nFrom the coals arose a great sizzling, which proceeded from the many mussels bursting open their shells and exuding their moisture. They were large shellfish, running from three to six inches in length. The boys raked them out with sticks and placed them on a large piece of driftwood to cool.\n\n\"When I was a boy, we did not laugh at our elders; we respected them.\"\n\nThe boys took no notice, and Granser continued to babble an incoherent flow of complaint and censure.
But this time he was more careful, and did not burn his mouth. All began to eat, using nothing but their hands and making loud mouth-noises and lip-smackings. The third boy, who was called Hare-Lip, slyly deposited a pinch of sand on a mussel the ancient was carrying to his mouth; and when the grit of it bit into the old fellow's mucous membrane and gums, the laughter was again uproarious. He was unaware that a joke had been played on him, and spluttered and spat until Edwin, relenting, gave him a gourd of fresh water with which to wash out his mouth.\n\n\"Where's them crabs, Hoo-Hoo?\" Edwin demanded. \"Granser's set upon having a snack.\"\n\nAgain Granser's eyes burned with greediness as a large crab was handed to him. It was a shell with legs and all complete, but the meat had long since departed. With shaky fingers and babblings of anticipation, the old man broke off a leg and found it filled with emptiness.\n\n\"The crabs, Hoo-Hoo?\" he wailed. \"The crabs?\"\n\n\"I was fooling Granser. They ain't no crabs! I never found one.\"\n\nThe boys were overwhelmed with delight at sight of the tears of senile disappointment that dribbled down the old man's cheeks. Then, unnoticed, Hoo-Hoo replaced the empty shell with a fresh-cooked crab. Already dismembered, from the cracked legs the white meat sent forth a small cloud of savory steam. This attracted the old man's nostrils, and he looked down in amazement.\n\nThe change of his mood to one of joy was immediate. He snuffled and muttered and mumbled, making almost a croon of delight, as he began to eat. Of this the boys took little notice, for it was an accustomed spectacle. Nor did they notice his occasional exclamations and utterances of phrases which meant nothing to them, as, for instance, when he smacked his lips and champed his gums while muttering: \"Mayonnaise! Just think\u2014mayonnaise! And it's sixty years since the last was ever made! Two generations and never a smell of it! Why, in those days it was served in every restaurant with crab.\" When he could eat no more, the old man sighed, wiped his hands on his naked legs, and gazed out over the sea. With the content of a full stomach, he waxed reminiscent.\n\n\"To think of it! I've seen this beach alive with men, women, and children on a pleasant Sunday. And there weren't any bears to eat them up, either. And right up there on the cliff was a big restaurant where you could get anything you wanted to eat. Four million people lived in San Francisco then. And now, in the whole city and county there aren't forty all told. And out there on the sea were ships and ships always to be seen, going in for the Golden Gate or coming out. And airships in the air\u2014dirigibles and flying machines. They could travel two hundred miles an hour. The mail contracts with the New York and San Francisco Limited demanded that for the minimum. There was a chap, a Frenchman, I forget his name, who succeeded in making three hundred; but the thing was risky, too risky for conservative persons. But he was on the right clew, and he would have managed it if it hadn't been for the Great Plague. When I was a boy, there were men alive who remembered the coming of the first aeroplanes, and now I have lived to see the last of them, and that sixty years ago.\"\n\nThe old man babbled on, unheeded by the boys, who were long accustomed to his garrulousness, and whose vocabularies, besides, lacked the greater portion of the words he used. 
It was noticeable that in these rambling soliloquies his English seemed to recrudesce into better construction and phraseology. But when he talked directly with the boys it lapsed, largely, into their own uncouth and simpler forms.\n\n\"But there weren't many crabs in those days,\" the old man wandered on. \"They were fished out, and they were great delicacies. The open season was only a month long, too. And now crabs are accessible the whole year around. Think of it\u2014catching all the crabs you want, any time you want, in the surf of the Cliff House beach!\"\n\nA sudden commotion among the goats brought the boys to their feet. The dogs about the fire rushed to join their snarling fellow who guarded the goats, while the goats themselves stampeded in the direction of their human protectors. A half dozen forms, lean and gray, glided about on the sand hillocks and faced the bristling dogs. Edwin arched an arrow that fell short. But Hare-Lip, with a sling such as David carried into battle against Goliath, hurled a stone through the air that whistled from the speed of its flight. It fell squarely among the wolves and caused them to slink away toward the dark depths of the eucalyptus forest.\n\nThe boys laughed and lay down again in the sand, while Granser sighed ponderously. He had eaten too much, and, with hands clasped on his paunch, the fingers interlaced, he resumed his maunderings.\n\n\"The fleeting systems lapse like foam,\" he mumbled what was evidently a quotation. \"That's it\u2014foam, and fleeting. All man's toil upon the planet was just so much foam. He domesticated the serviceable animals, destroyed the hostile ones, and cleared the land of its wild vegetation. And then he passed, and the flood of primordial life rolled back again, sweeping his handiwork away\u2014the weeds and the forest inundated his fields, the beasts of prey swept over his flocks, and now there are wolves on the Cliff House beach.\" He was appalled by the thought. \"Where four million people disported themselves, the wild wolves roam today, and the savage progeny of our loins, with prehistoric weapons, defend themselves against the fanged despoilers. Think of it! And all because of the Scarlet Death\u2014\"\n\nThe adjective had caught Hare-Lip's ear.\n\n\"He's always saying that,\" he said to Edwin. \"What is *scarlet?*\"\n\n\"The scarlet of the maples can shake me like the cry of bugles going by,\" the old man quoted.\n\n\"It's red,\" Edwin answered the question. \"And you don't know it because you come from the Chauffeur Tribe. They never did know nothing, none of them. Scarlet is red\u2014I know that.\"\n\n\"Red is red, ain't it?\" Hare-Lip grumbled. \"Then what's the good of gettin' cocky and calling it scarlet?\"\n\n\"Granser, what for do you always say so much what nobody knows?\" he asked. \"Scarlet ain't anything, but red is red. Why don't you say red, then?\"\n\n\"Red is not the right word,\" was the reply. \"The plague was scarlet. The whole face and body turned scarlet in an hour's time. Don't I know? Didn't I see enough of it? And I am telling you it was scarlet because\u2014well, because it *was* scarlet. There is no other word for it.\"\n\n\"Red is good enough for me,\" Hare-Lip muttered obstinately. \"My dad calls red red, and he ought to know. He says everybody died of the Red Death.\"\n\n\"Your dad is a common fellow, descended from a common fellow,\" Granser retorted heatedly. \"Don't I know the beginnings of the Chauffeurs? Your grandsire was a chauffeur, a servant, and without education. 
He worked for other persons. But your grandmother was of good stock, only the children did not take after her. Don't I remember when I first met them, catching fish at Lake Temescal?\"\n\n\"What is *education?*\" Edwin asked.\n\n\"Calling red scarlet,\" Hare-Lip sneered, then returned to the attack on Granser. \"My dad told me, an' he got it from his dad afore he croaked, that your wife was a Santa Rosan, an' that she was sure no account. He said she was a *hash-slinger* before the Red Death, though I don't know what a *hash-slinger* is. You can tell me, Edwin.\"\n\nBut Edwin shook his head in token of ignorance.\n\n\"It is true, she was a waitress,\" Granser acknowledged. \"But she was a good woman, and your mother was her daughter. Women were very scarce in the days after the Plague. She was the only wife I could find, even if she was a *hash-slinger*, as your father calls it. But it is not nice to talk about our progenitors that way.\"\n\n\"Dad says that the wife of the first Chauffeur was a *lady*\u2014\"\n\n\"What's a *lady?*\" Hoo-Hoo demanded.\n\n\"A *lady*'s a Chauffeur squaw,\" was the quick reply of Hare-Lip.\n\n\"The first Chauffeur was Bill, a common fellow, as I said before,\" the old man expounded; \"but his wife was a lady, a great lady. Before the Scarlet Death she was the wife of Van Worden. He was President of the Board of Industrial Magnates, and was one of the dozen men who ruled America. He was worth one billion, eight hundred millions of dollars\u2014coins like you have there in your pouch, Edwin. And then came the Scarlet Death, and his wife became the wife of Bill, the first Chauffeur. He used to beat her, too. I have seen it myself.\"\n\nHoo-Hoo, lying on his stomach and idly digging his toes in the sand, cried out and investigated, first, his toe-nail, and next, the small hole he had dug. The other two boys joined him, excavating the sand rapidly with their hands till there lay three skeletons exposed. Two were of adults, the third being that of a part-grown child. The old man hudged along on the ground and peered at the find.\n\n\"Plague victims,\" he announced. \"That's the way they died everywhere in the last days. This must have been a family, running away from the contagion and perishing here on the Cliff House beach. They\u2014what are you doing, Edwin?\"\n\nThis question was asked in sudden dismay, as Edwin, using the back of his hunting knife, began to knock out the teeth from the jaws of one of the skulls.\n\n\"Going to string 'em,\" was the response.\n\nThe three boys were now hard at it; and quite a knocking and hammering arose, in which Granser babbled on unnoticed.\n\n\"You are true savages. Already has begun the custom of wearing human teeth. In another generation you will be perforating your noses and ears and wearing ornaments of bone and shell. I know. The human race is doomed to sink back farther and farther into the primitive night ere again it begins its bloody climb upward to civilization. When we increase and feel the lack of room, we will proceed to kill one another. And then I suppose you will wear human scalp-locks at your waist, as well\u2014as you, Edwin, who are the gentlest of my grandsons, have already begun with that vile pigtail. 
Throw it away, Edwin, boy; throw it away.\"\n\n\"What a gabble the old geezer makes,\" Hare-Lip remarked, when, the teeth all extracted, they began an attempt at equal division.\n\nThey were very quick and abrupt in their actions, and their speech, in moments of hot discussion over the allotment of the choicer teeth, was truly a gabble. They spoke in monosyllables and short jerky sentences that was more a gibberish than a language. And yet, through it ran hints of grammatical construction, and appeared vestiges of the conjugation of some superior culture. Even the speech of Granser was so corrupt that were it put down literally it would be almost so much nonsense to the reader. This, however, was when he talked with the boys.\n\nWhen he got into the full swing of babbling to himself, it slowly purged itself into pure English. The sentences grew longer and were enunciated with a rhythm and ease that was reminiscent of the lecture platform.\n\n\"Tell us about the Red Death, Granser,\" Hare-Lip demanded, when the teeth affair had been satisfactorily concluded.\n\n\"The Scarlet Death,\" Edwin corrected.\n\n\"An' don't work all that funny lingo on us,\" Hare-Lip went on. \"Talk sensible, Granser, like a Santa Rosan ought to talk. Other Santa Rosans don't talk like you.\"\n\n## II\n\nTHE old man showed pleasure in being thus called upon. He cleared his throat and began.\n\n\"Twenty or thirty years ago my story was in great demand. But in these days nobody seems interested\u2014\"\n\n\"There you go!\" Hare-Lip cried hotly. \"Cut out the funny stuff and talk sensible. What's *interested?* You talk like a baby that don't know how.\"\n\n\"Let him alone,\" Edwin urged, \"or he'll get mad and won't talk at all. Skip the funny places. We'll catch on to some of what he tells us.\"\n\n\"Let her go, Granser,\" Hoo-Hoo encouraged; for the old man was already maundering about the disrespect for elders and the reversion to cruelty of all humans that fell from high culture to primitive conditions.\n\nThe tale began.\n\n\"There were very many people in the world in those days. San Francisco alone held four millions\u2014\"\n\n\"What is millions?\" Edwin interrupted. Granser looked at him kindly.\n\n\"I know you cannot count beyond ten, so I will tell you. Hold up your two hands. On both of them you have altogether ten fingers and thumbs. Very well. I now take this grain of sand\u2014you hold it, Hoo-Hoo.\" He dropped the grain of sand into the lad's palm and went on. \"Now that grain of sand stands for the ten fingers of Edwin. I add another grain. That's ten more fingers. And I add another, and another, and another, until I have added as many grains as Edwin has fingers and thumbs. That makes what I call one hundred. Remember that word\u2014one hundred. Now I put this pebble in Hare-Lip's hand. It stands for ten grains of sand, or ten tens of fingers, or one hundred fingers. I put in ten pebbles. They stand for a thousand fingers. I take a mussel-shell, and it stands for ten pebbles, or one hundred grains of sand, or one thousand fingers\u2026.\" And so on, laboriously, and with much reiteration, he strove to build up in their minds a crude conception of numbers. As the quantities increased, he had the boys holding different magnitudes in each of their hands. For still higher sums, he laid the symbols on the log of driftwood; and for symbols he was hard put, being compelled to use the teeth from the skulls for millions, and the crab-shells for billions. 
It was here that he stopped, for the boys were showing signs of becoming tired.\n\n\"There were four million people in San Francisco\u2014four teeth.\"\n\nThe boys' eyes ranged along from the teeth and from hand to hand, down through the pebbles and sand-grains to Edwin's fingers. And back again they ranged along the ascending series in the effort to grasp such inconceivable numbers.\n\n\"That was a lot of folks, Granser,\" Edwin at last hazarded.\n\n\"Like sand on the beach here, like sand on the beach, each grain of sand a man, or woman, or child. Yes, my boy, all those people lived right here in San Francisco. And at one time or another all those people came out on this very beach\u2014more people than there are grains of sand. More\u2014more\u2014more. And San Francisco was a noble city. And across the bay\u2014where we camped last year, even more people lived, clear from Point Richmond, on the level ground and on the hills, all the way around to San Leandro\u2014one great city of seven million people.\u2014Seven teeth\u2026 there, that's it, seven millions.\"\n\nAgain the boys' eyes ranged up and down from Edwin's fingers to the teeth on the log.\n\n\"The world was full of people. The census of 2010 gave eight billions for the whole world\u2014eight crab-shells, yes, eight billions. It was not like today. Mankind knew a great deal more about getting food. And the more food there was, the more people there were. In the year 1800, there were one hundred and seventy millions in Europe alone. One hundred years later\u2014a grain of sand, Hoo-Hoo\u2014one hundred years later, at 1900, there were five hundred millions in Europe\u2014five grains of sand, Hoo-Hoo, and this one tooth. This shows how easy was the getting of food, and how men increased. And in the year 2000 there were fifteen hundred millions in Europe. And it was the same all over the rest of the world. Eight crab-shells there, yes, eight billion people were alive on the earth when the Scarlet Death began.\n\n\"I was a young man when the Plague came\u2014twenty-seven years old; and I lived on the other side of San Francisco Bay, in Berkeley. You remember those great stone houses, Edwin, when we came down the hills from Contra Costa? That was where I lived, in those stone houses. I was a professor of English literature.\"\n\nMuch of this was over the heads of the boys, but they strove to comprehend dimly this tale of the past.\n\n\"What was them stone houses for?\" Hare-Lip queried.\n\n\"You remember when your dad taught you to swim?\" The boy nodded. \"Well, in the University of California\u2014that is the name we had for the houses\u2014we taught young men and women how to think, just as I have taught you now, by sand and pebbles and shells, to know how many people lived in those days. There was very much to teach. The young men and women we taught were called students. We had large rooms in which we taught. I talked to them, forty or fifty at a time, just as I am talking to you now. I told them about the books other men had written before their time, and even, sometimes, in their time\u2014\"\n\n\"Was that all you did?\u2014just talk, talk, talk?\" Hoo-Hoo demanded. \"Who hunted your meat for you? and milked the goats? and caught the fish?\"\n\n\"A sensible question, Hoo-Hoo, a sensible question. As I have told you, in those days food-getting was easy. We were very wise. A few men got the food for many men. The other men did other things. As you say, I talked. 
I talked all the time, and for this food was given me\u2014much food, fine food, beautiful food, food that I have not tasted in sixty years and shall never taste again. I sometimes think the most wonderful achievement of our tremendous civilization was food\u2014its inconceivable abundance, its infinite variety, its marvellous delicacy. O my grandsons, life was life in those days, when we had such wonderful things to eat.\"\n\nThis was beyond the boys, and they let it slip by, words and thoughts, as a mere senile wandering in the narrative.\n\n\"Our food-getters were called *freemen*. This was a joke. We of the ruling classes owned all the land, all the machines, everything. These food-getters were our slaves. We took almost all the food they got, and left them a little so that they might eat, and work, and get us more food\u2014\"\n\n\"I'd have gone into the forest and got food for myself,\" Hare-Lip announced; \"and if any man tried to take it away from me, I'd have killed him.\"\n\nThe old man laughed.\n\n\"Did I not tell you that we of the ruling class owned all the land, all the forest, everything? Any food-getter who would not get food for us, him we punished or compelled to starve to death. And very few did that. They preferred to get food for us, and make clothes for us, and prepare and administer to us a thousand\u2014a mussel-shell, Hoo-Hoo\u2014a thousand satisfactions and delights. And I was Professor Smith in those days\u2014Professor James Howard Smith. And my lecture courses were very popular\u2014that is, very many of the young men and women liked to hear me talk about the books other men had written.\n\n\"And I was very happy, and I had beautiful things to eat. And my hands were soft, because I did no work with them, and my body was clean all over and dressed in the softest garments\u2014\"\n\nHe surveyed his mangy goat-skin with disgust.\n\n\"We did not wear such things in those days. Even the slaves had better garments. And we were most clean. We washed our faces and hands often every day. You boys never wash unless you fall into the water or go swimming.\"\n\n\"Neither do you, Granser,\" Hoo-Hoo retorted.\n\n\"I know, I know, I am a filthy old man, but times have changed. Nobody washes these days, there are no conveniences. It is sixty years since I have seen a piece of soap.\n\n\"You do not know what soap is, and I shall not tell you, for I am telling the story of the Scarlet Death. You know what sickness is. We called it a disease. Very many of the diseases came from what we called germs. Remember that word\u2014germs. A germ is a very small thing. It is like a woodtick, such as you find on the dogs in the spring of the year when they run in the forest. Only the germ is very small. It is so small that you cannot see it\u2014\"\n\nHoo-Hoo began to laugh.\n\n\"You're a queer un, Granser, talking about things you can't see. If you can't see 'em, how do you know they are? That's what I want to know. How do you know anything you can't see?\"\n\n\"A good question, a very good question, Hoo-Hoo. But we did see\u2014some of them. We had what we called microscopes and ultramicroscopes, and we put them to our eyes and looked through them, so that we saw things larger than they really were, and many things we could not see without the microscopes at all. Our best ultramicroscopes could make a germ look forty thousand times larger. A mussel-shell is a thousand fingers like Edwin's.
Take forty mussel-shells, and by as many times larger was the germ when we looked at it through a microscope. And after that, we had other ways, by using what we called moving pictures, of making the forty-thousand-times germ many, many thousand times larger still. And thus we saw all these things which our eyes of themselves could not see. Take a grain of sand. Break it into ten pieces. Take one piece and break it into ten. Break one of those pieces into ten, and one of those into ten, and one of those into ten, and one of those into ten, and do it all day, and maybe, by sunset, you will have a piece as small as one of the germs.\"\n\nThe boys were openly incredulous. Hare-Lip sniffed and sneered and Hoo-Hoo snickered, until Edwin nudged them to be silent.\n\n\"The woodtick sucks the blood of the dog, but the germ, being so very small, goes right into the blood of the body, and there it has many children. In those days there would be as many as a billion\u2014a crab-shell, please\u2014as many as that crab-shell in one man's body. We called germs micro-organisms. When a few million, or a billion, of them were in a man, in all the blood of a man, he was sick. These germs were a disease. There were many different kinds of them\u2014more different kinds than there are grains of sand on this beach. We knew only a few of the kinds. The micro-organic world was an invisible world, a world we could not see, and we knew very little about it. Yet we did know something. There was the *bacillus anthracis*; there was the *micrococcus*; there was the *Bacterium termo*, and the *Bacterium lactis*\u2014that's what turns the goat milk sour even to this day, Hare-Lip; and there were *Schizomycetes* without end. And there were many others\u2026.\"\n\nHere the old man launched into a disquisition on germs and their natures, using words and phrases of such extraordinary length and meaninglessness, that the boys grinned at one another and looked out over the deserted ocean till they forgot the old man was babbling on.\n\n\"But the Scarlet Death, Granser,\" Edwin at last suggested.\n\nGranser recollected himself, and with a start tore himself away from the rostrum of the lecture-hall, where, to another world audience, he had been expounding the latest theory, sixty years gone, of germs and germ-diseases.\n\n\"Yes, yes, Edwin; I had forgotten. Sometimes the memory of the past is very strong upon me, and I forget that I am a dirty old man, clad in goat-skin, wandering with my savage grandsons who are goatherds in the primeval wilderness. 'The fleeting systems lapse like foam,' and so lapsed our glorious, colossal civilization. I am Granser, a tired old man. I belong to the tribe of Santa Rosans. I married into that tribe. My sons and daughters married into the Chauffeurs, the Sacramentos, and the Palo-Altos. You, Hare-Lip, are of the Chauffeurs. You, Edwin, are of the Sacramentos. And you, Hoo-Hoo, are of the Palo-Altos. Your tribe takes its name from a town that was near the seat of another great institution of learning. It was called Stanford University. Yes, I remember now. It is perfectly clear. I was telling you of the Scarlet Death. Where was I in my story?\"\n\n\"You was telling about germs, the things you can't see but which make men sick,\" Edwin prompted.\n\n\"Yes, that's where I was. A man did not notice at first when only a few of these germs got into his body.
But each germ broke in half and became two germs, and they kept doing this very rapidly so that in a short time there were many millions of them in the body. Then the man was sick. He had a disease, and the disease was named after the kind of a germ that was in him. It might be measles, it might be influenza, it might be yellow fever; it might be any of thousands and thousands of kinds of diseases.\n\n\"Now this is the strange thing about these germs. There were always new ones coming to live in men's bodies. Long and long and long ago, when there were only a few men in the world, there were few diseases. But as men increased and lived closely together in great cities and civilizations, new diseases arose, new kinds of germs entered their bodies. Thus were countless millions and billions of human beings killed. And the more thickly men packed together, the more terrible were the new diseases that came to be. Long before my time, in the middle ages, there was the Black Plague that swept across Europe. It swept across Europe many times. There was tuberculosis, that entered into men wherever they were thickly packed. A hundred years before my time there was the bubonic plague. And in Africa was the sleeping sickness. The bacteriologists fought all these sicknesses and destroyed them, just as you boys fight the wolves away from your goats, or squash the mosquitoes that light on you. The bacteriologists\u2014\"\n\n\"But, Granser, what is a what-you-call-it?\" Edwin interrupted.\n\n\"You, Edwin, are a goatherd. Your task is to watch the goats. You know a great deal about goats. A bacteriologist watches germs. That's his task, and he knows a great deal about them. So, as I was saying, the bacteriologists fought with the germs and destroyed them\u2014sometimes. There was leprosy, a horrible disease. A hundred years before I was born, the bacteriologists discovered the germ of leprosy. They knew all about it. They made pictures of it. I have seen those pictures. But they never found a way to kill it. But in 1984, there was the Pantoblast Plague, a disease that broke out in a country called Brazil and that killed millions of people. But the bacteriologists found it out, and found the way to kill it, so that the Pantoblast Plague went no farther. They made what they called a serum, which they put into a man's body and which killed the pantoblast germs without killing the man. And in 1910, there was Pellagra, and also the hookworm. These were easily killed by the bacteriologists. But in 1947 there arose a new disease that had never been seen before. It got into the bodies of babies of only ten months old or less, and it made them unable to move their hands and feet, or to eat, or anything; and the bacteriologists were eleven years in discovering how to kill that particular germ and save the babies.\n\n\"In spite of all these diseases, and of all the new ones that continued to arise, there were more and more men in the world. This was because it was easy to get food. The easier it was to get food, the more men there were; the more men there were, the more thickly were they packed together on the earth; and the more thickly they were packed, the more new kinds of germs became diseases. There were warnings. Soldervetzsky, as early as 1929, told the bacteriologists that they had no guaranty against some new disease, a thousand times more deadly than any they knew, arising and killing by the hundreds of millions and even by the billion. You see, the micro-organic world remained a mystery to the end. 
They knew there was such a world, and that from time to time armies of new germs emerged from it to kill men.\n\n\"And that was all they knew about it. For all they knew, in that invisible micro-organic world there might be as many different kinds of germs as there are grains of sand on this beach. And also, in that same invisible world it might well be that new kinds of germs came to be. It might be there that life originated\u2014the 'abysmal fecundity,' Soldervetzsky called it, applying the words of other men who had written before him\u2026.\"\n\nIt was at this point that Hare-Lip rose to his feet, an expression of huge contempt on his face.\n\n\"Granser,\" he announced, \"you make me sick with your gabble. Why don't you tell about the Red Death? If you ain't going to, say so, an' we'll start back for camp.\"\n\nThe old man looked at him and silently began to cry. The weak tears of age rolled down his cheeks and all the feebleness of his eighty-seven years showed in his grief-stricken countenance.\n\n\"Sit down,\" Edwin counselled soothingly. \"Granser's all right. He's just gettin' to the Scarlet Death, ain't you, Granser? He's just goin' to tell us about it right now. Sit down, Hare-Lip. Go ahead, Granser.\"\n\n## III\n\nTHE old man wiped the tears away on his grimy knuckles and took up the tale in a tremulous, piping voice that soon strengthened as he got the swing of the narrative.\n\n\"It was in the summer of 2013 that the Plague came. I was twenty-seven years old, and well do I remember it. Wireless despatches\u2014\"\n\nHare-Lip spat loudly his disgust, and Granser hastened to make amends.\n\n\"We talked through the air in those days, thousands and thousands of miles. And the word came of a strange disease that had broken out in New York. There were seventeen millions of people living then in that noblest city of America. Nobody thought anything about the news. It was only a small thing. There had been only a few deaths. It seemed, though, that they had died very quickly, and that one of the first signs of the disease was the turning red of the face and all the body. Within twenty-four hours came the report of the first case in Chicago. And on the same day, it was made public that London, the greatest city in the world, next to Chicago, had been secretly fighting the plague for two weeks and censoring the news despatches\u2014that is, not permitting the word to go forth to the rest of the world that London had the plague.\n\n\"It looked serious, but we in California, like everywhere else, were not alarmed. We were sure that the bacteriologists would find a way to overcome this new germ, just as they had overcome other germs in the past. But the trouble was the astonishing quickness with which this germ destroyed human beings, and the fact that it inevitably killed any human body it entered. No one ever recovered. There was the old Asiatic cholera, when you might eat dinner with a well man in the evening, and the next morning, if you got up early enough, you would see him being hauled by your window in the death-cart. But this new plague was quicker than that\u2014much quicker.\n\n\"From the moment of the first signs of it, a man would be dead in an hour. Some lasted for several hours. Many died within ten or fifteen minutes of the appearance of the first signs.\n\n\"The heart began to beat faster and the heat of the body to increase. Then came the scarlet rash, spreading like wildfire over the face and body.
Most persons never noticed the increase in heat and heart-beat, and the first they knew was when the scarlet rash came out. Usually, they had convulsions at the time of the appearance of the rash. But these convulsions did not last long and were not very severe. If one lived through them, he became perfectly quiet, and only did he feel a numbness swiftly creeping up his body from the feet. The heels became numb first, then the legs, and hips, and when the numbness reached as high as his heart he died. They did not rave or sleep. Their minds always remained cool and calm up to the moment their heart numbed and stopped. And another strange thing was the rapidity of decomposition. No sooner was a person dead than the body seemed to fall to pieces, to fly apart, to melt away even as you looked at it. That was one of the reasons the plague spread so rapidly. All the billions of germs in a corpse were so immediately released.

"And it was because of all this that the bacteriologists had so little chance in fighting the germs. They were killed in their laboratories even as they studied the germ of the Scarlet Death. They were heroes. As fast as they perished, others stepped forth and took their places. It was in London that they first isolated it. The news was telegraphed everywhere. Trask was the name of the man who succeeded in this, but within thirty hours he was dead. Then came the struggle in all the laboratories to find something that would kill the plague germs. All drugs failed. You see, the problem was to get a drug, or serum, that would kill the germs in the body and not kill the body. They tried to fight it with other germs, to put into the body of a sick man germs that were the enemies of the plague germs—"

"And you can't see these germ-things, Granser," Hare-Lip objected, "and here you gabble, gabble, gabble about them as if they was anything, when they're nothing at all. Anything you can't see, ain't, that's what. Fighting things that ain't with things that ain't! They must have been all fools in them days. That's why they croaked. I ain't goin' to believe in such rot, I tell you that."

Granser promptly began to weep, while Edwin hotly took up his defence. "Look here, Hare-Lip, you believe in lots of things you can't see."

Hare-Lip shook his head.

"You believe in dead men walking about. You never seen one dead man walk about."

"I tell you I seen 'em, last winter, when I was wolf-hunting with dad."

"Well, you always spit when you cross running water," Edwin challenged.

"That's to keep off bad luck," was Hare-Lip's defence.

"You believe in bad luck?"

"Sure."

"An' you ain't never seen bad luck," Edwin concluded triumphantly. "You're just as bad as Granser and his germs. You believe in what you don't see. Go on, Granser."

Hare-Lip, crushed by this metaphysical defeat, remained silent, and the old man went on. Often and often, though this narrative must not be clogged by the details, was Granser's tale interrupted while the boys squabbled among themselves. Also, among themselves they kept up a constant, low-voiced exchange of explanation and conjecture, as they strove to follow the old man into his unknown and vanished world.

"The Scarlet Death broke out in San Francisco. The first death came on a Monday morning. By Thursday they were dying like flies in Oakland and San Francisco. They died everywhere—in their beds, at their work, walking along the street.
It was on Tuesday that I saw my first death\u2014Miss Collbran, one of my students, sitting right there before my eyes, in my lecture-room. I noticed her face while I was talking. It had suddenly turned scarlet. I ceased speaking and could only look at her, for the first fear of the plague was already on all of us and we knew that it had come. The young women screamed and ran out of the room. So did the young men run out, all but two. Miss Collbran's convulsions were very mild and lasted less than a minute. One of the young men fetched her a glass of water. She drank only a little of it, and cried out:\n\n\"'My feet! All sensation has left them.'\n\n\"After a minute she said, 'I have no feet. I am unaware that I have any feet. And my knees are cold. I can scarcely feel that I have knees.'\n\n\"She lay on the floor, a bundle of notebooks under her head. And we could do nothing. The coldness and the numbness crept up past her hips to her heart, and when it reached her heart she was dead. In fifteen minutes, by the clock\u2014I timed it\u2014she was dead, there, in my own classroom, dead. And she was a very beautiful, strong, healthy young woman. And from the first sign of the plague to her death only fifteen minutes elapsed. That will show you how swift was the Scarlet Death.\n\n\"Yet in those few minutes I remained with the dying woman in my classroom, the alarm had spread over the university; and the students, by thousands, all of them, had deserted the lecture-room and laboratories. When I emerged, on my way to make report to the President of the Faculty, I found the university deserted. Across the campus were several stragglers hurrying for their homes. Two of them were running.\n\n\"President Hoag, I found in his office, all alone, looking very old and very gray, with a multitude of wrinkles in his face that I had never seen before. At the sight of me, he pulled himself to his feet and tottered away to the inner office, banging the door after him and locking it. You see, he knew I had been exposed, and he was afraid. He shouted to me through the door to go away. I shall never forget my feelings as I walked down the silent corridors and out across that deserted campus. I was not afraid. I had been exposed, and I looked upon myself as already dead. It was not that, but a feeling of awful depression that impressed me. Everything had stopped. It was like the end of the world to me\u2014my world. I had been born within sight and sound of the university. It had been my predestined career. My father had been a professor there before me, and his father before him. For a century and a half had this university, like a splendid machine, been running steadily on. And now, in an instant, it had stopped. It was like seeing the sacred flame die down on some thrice-sacred altar. I was shocked, unutterably shocked.\n\n\"When I arrived home, my housekeeper screamed as I entered, and fled away. And when I rang, I found the housemaid had likewise fled. I investigated. In the kitchen I found the cook on the point of departure. But she screamed, too, and in her haste dropped a suitcase of her personal belongings and ran out of the house and across the grounds, still screaming. I can hear her scream to this day. You see, we did not act in this way when ordinary diseases smote us. We were always calm over such things, and sent for the doctors and nurses who knew just what to do. But this was different. It struck so suddenly, and killed so swiftly, and never missed a stroke. 
When the scarlet rash appeared on a person's face, that person was marked by death. There was never a known case of a recovery.

"I was alone in my big house. As I have told you often before, in those days we could talk with one another over wires or through the air. The telephone bell rang, and I found my brother talking to me. He told me that he was not coming home for fear of catching the plague from me, and that he had taken our two sisters to stop at Professor Bacon's home. He advised me to remain where I was, and wait to find out whether or not I had caught the plague.

"To all of this I agreed, staying in my house and for the first time in my life attempting to cook. And the plague did not come out on me. By means of the telephone I could talk with whomsoever I pleased and get the news. Also, there were the newspapers, and I ordered all of them to be thrown up to my door so that I could know what was happening with the rest of the world.

"New York City and Chicago were in chaos. And what happened with them was happening in all the large cities. A third of the New York police were dead. Their chief was also dead, likewise the mayor. All law and order had ceased. The bodies were lying in the streets unburied. All railroads and vessels carrying food and such things into the great city had ceased running and mobs of the hungry poor were pillaging the stores and warehouses. Murder and robbery and drunkenness were everywhere. Already the people had fled from the city by millions—at first the rich, in their private motor-cars and dirigibles, and then the great mass of the population, on foot, carrying the plague with them, themselves starving and pillaging the farmers and all the towns and villages on the way.

"The man who sent this news, the wireless operator, was alone with his instrument on the top of a lofty building. The people remaining in the city—he estimated them at several hundred thousand—had gone mad from fear and drink, and on all sides of him great fires were raging. He was a hero, that man who staid by his post—an obscure newspaperman, most likely.

"For twenty-four hours, he said, no transatlantic airships had arrived, and no more messages were coming from England. He did state, though, that a message from Berlin—that's in Germany—announced that Hoffmeyer, a bacteriologist of the Metchnikoff School, had discovered the serum for the plague. That was the last word, to this day, that we of America ever received from Europe. If Hoffmeyer discovered the serum, it was too late, or otherwise, long ere this, explorers from Europe would have come looking for us. We can only conclude that what happened in America happened in Europe, and that, at the best, some several score may have survived the Scarlet Death on that whole continent.

"For one day longer the despatches continued to come from New York. Then they, too, ceased. The man who had sent them, perched in his lofty building, had either died of the plague or been consumed in the great conflagrations he had described as raging around him. And what had occurred in New York had been duplicated in all the other cities. It was the same in San Francisco, and Oakland, and Berkeley. By Thursday the people were dying so rapidly that their corpses could not be handled, and dead bodies lay everywhere. Thursday night the panic outrush for the country began.
Imagine, my grandsons, people, thicker than the salmon-run you have seen on the Sacramento river, pouring out of the cities by millions, madly over the country, in vain attempt to escape the ubiquitous death. You see, they carried the germs with them. Even the airships of the rich, fleeing for mountain and desert fastnesses, carried the germs.\n\n\"Hundreds of these airships escaped to Hawaii, and not only did they bring the plague with them, but they found the plague already there before them. This we learned, by the despatches, until all order in San Francisco vanished, and there were no operators left at their posts to receive or send. It was amazing, astounding, this loss of communication with the world. It was exactly as if the world had ceased, been blotted out. For sixty years that world has no longer existed for me. I know there must be such places as New York, Europe, Asia, and Africa; but not one word has been heard of them\u2014not in sixty years. With the coming of the Scarlet Death the world fell apart, absolutely, irretrievably. Ten thousand years of culture and civilization passed in the twinkling of an eye, 'lapsed like foam.'\n\n\"I was telling about the airships of the rich. They carried the plague with them and no matter where they fled, they died. I never encountered but one survivor of any of them\u2014Mungerson. He was afterwards a Santa Rosan, and he married my eldest daughter. He came into the tribe eight years after the plague. He was then nineteen years old, and he was compelled to wait twelve years more before he could marry. You see, there were no unmarried women, and some of the older daughters of the Santa Rosans were already bespoken. So he was forced to wait until my Mary had grown to sixteen years. It was his son, Gimp-Leg, who was killed last year by the mountain lion.\n\n\"Mungerson was eleven years old at the time of the plague. His father was one of the Industrial Magnates, a very wealthy, powerful man. It was on his airship, the Condor, that they were fleeing, with all the family, for the wilds of British Columbia, which is far to the north of here. But there was some accident, and they were wrecked near Mount Shasta. You have heard of that mountain. It is far to the north. The plague broke out amongst them, and this boy of eleven was the only survivor. For eight years he was alone, wandering over a deserted land and looking vainly for his own kind. And at last, travelling south, he picked up with us, the Santa Rosans.\n\n\"But I am ahead of my story. When the great exodus from the cities around San Francisco Bay began, and while the telephones were still working, I talked with my brother. I told him this flight from the cities was insanity, that there were no symptoms of the plague in me, and that the thing for us to do was to isolate ourselves and our relatives in some safe place. We decided on the Chemistry Building, at the university, and we planned to lay in a supply of provisions, and by force of arms to prevent any other persons from forcing their presence upon us after we had retired to our refuge.\n\n\"All this being arranged, my brother begged me to stay in my own house for at least twenty-four hours more, on the chance of the plague developing in me. To this I agreed, and he promised to come for me next day. We talked on over the details of the provisioning and the defending of the Chemistry Building until the telephone died. It died in the midst of our conversation. 
That evening there were no electric lights, and I was alone in my house in the darkness. No more newspapers were being printed, so I had no knowledge of what was taking place outside.\n\n\"I heard sounds of rioting and of pistol shots, and from my windows I could see the glare of the sky of some conflagration in the direction of Oakland. It was a night of terror. I did not sleep a wink. A man\u2014why and how I do not know\u2014was killed on the sidewalk in front of the house. I heard the rapid reports of an automatic pistol, and a few minutes later the wounded wretch crawled up to my door, moaning and crying out for help. Arming myself with two automatics, I went to him. By the light of a match I ascertained that while he was dying of the bullet wounds, at the same time the plague was on him. I fled indoors, whence I heard him moan and cry out for half an hour longer.\n\n\"In the morning, my brother came to me. I had gathered into a handbag what things of value I purposed taking, but when I saw his face I knew that he would never accompany me to the Chemistry Building. The plague was on him. He intended shaking my hand, but I went back hurriedly before him.\n\n\"'Look at yourself in the mirror,' I commanded.\n\n\"He did so, and at sight of his scarlet face, the color deepening as he looked at it, he sank down nervelessly in a chair.\n\n\"'My God!' he said. 'I've got it. Don't come near me. I am a dead man.' Then the convulsions seized him. He was two hours in dying, and he was conscious to the last, complaining about the coldness and loss of sensation in his feet, his calves, his thighs, until at last it was his heart and he was dead.\n\n\"That was the way the Scarlet Death slew. I caught up my handbag and fled. The sights in the streets were terrible. One stumbled on bodies everywhere. Some were not yet dead. And even as you looked, you saw men sink down with the death fastened upon them. There were numerous fires burning in Berkeley, while Oakland and San Francisco were apparently being swept by vast conflagrations. The smoke of the burning filled the heavens, so that the midday was as a gloomy twilight, and, in the shifts of wind, sometimes the sun shone through dimly, a dull red orb. Truly, my grandsons, it was like the last days of the end of the world.\n\n\"There were numerous stalled motor cars, showing that the gasoline and the engine supplies of the garages had given out. I remember one such car. A man and a woman lay back dead in the seats, and on the pavement near it were two more women and a child. Strange and terrible sights there were on every hand. People slipped by silently, furtively, like ghosts\u2014white-faced women carrying infants in their arms; fathers leading children by the hand; singly, and in couples, and in families\u2014all fleeing out of the city of death. Some carried supplies of food, others blankets and valuables, and there were many who carried nothing.\n\n\"There was a grocery store\u2014a place where food was sold. The man to whom it belonged\u2014I knew him well\u2014a quiet, sober, but stupid and obstinate fellow, was defending it. The windows and doors had been broken in, but he, inside, hiding behind a counter, was discharging his pistol at a number of men on the sidewalk who were breaking in. In the entrance were several bodies\u2014of men, I decided, whom he had killed earlier in the day. 
Even as I looked on from a distance, I saw one of the robbers break the windows of the adjoining store, a place where shoes were sold, and deliberately set fire to it. I did not go to the groceryman's assistance. The time for such acts had already passed. Civilization was crumbling, and it was each for himself."

## IV

"I WENT away hastily, down a cross-street, and at the first corner I saw another tragedy. Two men of the working class had caught a man and a woman with two children, and were robbing them. I knew the man by sight, though I had never been introduced to him. He was a poet whose verses I had long admired. Yet I did not go to his help, for at the moment I came upon the scene there was a pistol shot, and I saw him sinking to the ground. The woman screamed, and she was felled with a fist-blow by one of the brutes. I cried out threateningly, whereupon they discharged their pistols at me and I ran away around the corner. Here I was blocked by an advancing conflagration. The buildings on both sides were burning, and the street was filled with smoke and flame. From somewhere in that murk came a woman's voice calling shrilly for help. But I did not go to her. A man's heart turned to iron amid such scenes, and one heard all too many appeals for help.

"Returning to the corner, I found the two robbers were gone. The poet and his wife lay dead on the pavement. It was a shocking sight. The two children had vanished—whither I could not tell. And I knew, now, why it was that the fleeing persons I encountered slipped along so furtively and with such white faces. In the midst of our civilization, down in our slums and labor-ghettos, we had bred a race of barbarians, of savages; and now, in the time of our calamity, they turned upon us like the wild beasts they were and destroyed us. And they destroyed themselves as well.

"They inflamed themselves with strong drink and committed a thousand atrocities, quarreling and killing one another in the general madness. One group of workingmen I saw, of the better sort, who had banded together, and, with their women and children in their midst, the sick and aged in litters and being carried, and with a number of horses pulling a truck-load of provisions, they were fighting their way out of the city. They made a fine spectacle as they came down the street through the drifting smoke, though they nearly shot me when I first appeared in their path. As they went by, one of their leaders shouted out to me in apologetic explanation. He said they were killing the robbers and looters on sight, and that they had thus banded together as the only means by which to escape the prowlers.

"It was here that I saw for the first time what I was soon to see so often. One of the marching men had suddenly shown the unmistakable mark of the plague. Immediately those about him drew away, and he, without a remonstrance, stepped out of his place to let them pass on. A woman, most probably his wife, attempted to follow him. She was leading a little boy by the hand. But the husband commanded her sternly to go on, while others laid hands on her and restrained her from following him. This I saw, and I saw the man also, with his scarlet blaze of face, step into a doorway on the opposite side of the street. I heard the report of his pistol, and saw him sink lifeless to the ground.

"After being turned aside twice again by advancing fires, I succeeded in getting through to the university.
On the edge of the campus I came upon a party of university folk who were going in the direction of the Chemistry Building. They were all family men, and their families were with them, including the nurses and the servants. Professor Badminton greeted me, and I had difficulty in recognizing him. Somewhere he had gone through flames, and his beard was singed off. About his head was a bloody bandage, and his clothes were filthy.

"He told me he had prowlers, and that his brother had been killed the previous night, in the defence of their dwelling.

"Midway across the campus, he pointed suddenly to Mrs. Swinton's face. The unmistakable scarlet was there. Immediately all the other women set up a screaming and began to run away from her. Her two children were with a nurse, and these also ran with the women. But her husband, Doctor Swinton, remained with her.

"'Go on, Smith,' he told me. 'Keep an eye on the children. As for me, I shall stay with my wife. I know she is as good as dead, but I can't leave her. Afterwards, if I escape, I shall come to the Chemistry Building, and do you watch for me and let me in.'

"I left him bending over his wife and soothing her last moments, while I ran to overtake the party. We were the last to be admitted to the Chemistry Building. After that, with our automatic rifles we maintained our isolation. By our plans, we had arranged for a company of sixty to be in this refuge. Instead, every one of the number originally planned had added relatives and friends and whole families until there were over four hundred souls. But the Chemistry Building was large, and, standing by itself, was in no danger of being burned by the great fires that raged everywhere in the city.

"A large quantity of provisions had been gathered, and a food committee took charge of it, issuing rations daily to the various families and groups that arranged themselves into messes. A number of committees were appointed, and we developed a very efficient organization. I was on the committee of defence, though for the first day no prowlers came near. We could see them in the distance, however, and by the smoke of their fires knew that several camps of them were occupying the far edge of the campus. Drunkenness was rife, and often we heard them singing ribald songs or insanely shouting. While the world crashed to ruin about them and all the air was filled with the smoke of its burning, these low creatures gave rein to their bestiality and fought and drank and died. And after all, what did it matter? Everybody died anyway, the good and the bad, the efficients and the weaklings, those that loved to live and those that scorned to live. They passed. Everything passed.

"When twenty-four hours had gone by and no signs of the plague were apparent, we congratulated ourselves and set about digging a well. You have seen the great iron pipes which in those days carried water to all the city-dwellers. We feared that the fires in the city would burst the pipes and empty the reservoirs. So we tore up the cement floor of the central court of the Chemistry Building and dug a well. There were many young men, undergraduates, with us, and we worked night and day on the well. And our fears were confirmed. Three hours before we reached water, the pipes went dry.

"A second twenty-four hours passed, and still the plague did not appear among us. We thought we were saved.
But we did not know what I afterwards decided to be true, namely, that the period of the incubation of the plague germs in a human's body was a matter of a number of days. It slew so swiftly when once it manifested itself, that we were led to believe that the period of incubation was equally swift. So, when two days had left us unscathed, we were elated with the idea that we were free of the contagion.\n\n\"But the third day disillusioned us. I can never forget the night preceding it. I had charge of the night guards from eight to twelve, and from the roof of the building I watched the passing of all man's glorious works. So terrible were the local conflagrations that all the sky was lighted up. One could read the finest print in the red glare. All the world seemed wrapped in flames. San Francisco spouted smoke and fire from a score of vast conflagrations that were like so many active volcanoes. Oakland, San Leandro, Haywards\u2014all were burning; and to the northward, clear to Point Richmond, other fires were at work. It was an awe-inspiring spectacle. Civilization, my grandsons, civilization was passing in a sheet of flame and a breath of death. At ten o'clock that night, the great powder magazines at Point Pinole exploded in rapid succession. So terrific were the concussions that the strong building rocked as in an earthquake, while every pane of glass was broken. It was then that I left the roof and went down the long corridors, from room to room, quieting the alarmed women and telling them what had happened.\n\n\"An hour later, at a window on the ground floor, I heard pandemonium break out in the camps of the prowlers. There were cries and screams, and shots from many pistols. As we afterward conjectured, this fight had been precipitated by an attempt on the part of those that were well to drive out those that were sick. At any rate, a number of the plague-stricken prowlers escaped across the campus and drifted against our doors. We warned them back, but they cursed us and discharged a fusillade from their pistols. Professor Merryweather, at one of the windows, was instantly killed, the bullet striking him squarely between the eyes. We opened fire in turn, and all the prowlers fled away with the exception of three. One was a woman. The plague was on them and they were reckless. Like foul fiends, there in the red glare from the skies, with faces blazing, they continued to curse us and fire at us. One of the men I shot with my own hand. After that the other man and the woman, still cursing us, lay down under our windows, where we were compelled to watch them die of the plague.\n\n\"The situation was critical. The explosions of the powder magazines had broken all the windows of the Chemistry Building, so that we were exposed to the germs from the corpses. The sanitary committee was called upon to act, and it responded nobly. Two men were required to go out and remove the corpses, and this meant the probable sacrifice of their own lives, for, having performed the task, they were not to be permitted to reenter the building. One of the professors, who was a bachelor, and one of the undergraduates volunteered. They bade good-bye to us and went forth. They were heroes. They gave up their lives that four hundred others might live. After they had performed their work, they stood for a moment, at a distance, looking at us wistfully. Then they waved their hands in farewell and went away slowly across the campus toward the burning city.\n\n\"And yet it was all useless. 
The next morning the first one of us was smitten with the plague\u2014a little nurse-girl in the family of Professor Stout. It was no time for weak-kneed, sentimental policies. On the chance that she might be the only one, we thrust her forth from the building and commanded her to be gone.\n\n\"She went away slowly across the campus, wringing her hands and crying pitifully. We felt like brutes, but what were we to do? There were four hundred of us, and individuals had to be sacrificed.\n\n\"In one of the laboratories three families had domiciled themselves, and that afternoon we found among them no less than four corpses and seven cases of the plague in all its different stages.\n\n\"Then it was that the horror began. Leaving the dead lie, we forced the living ones to segregate themselves in another room. The plague began to break out among the rest of us, and as fast as the symptoms appeared, we sent the stricken ones to these segregated rooms. We compelled them to walk there by themselves, so as to avoid laying hands on them. It was heartrending. But still the plague raged among us, and room after room was filled with the dead and dying. And so we who were yet clean retreated to the next floor and to the next, before this sea of the dead, that, room by room and floor by floor, inundated the building.\n\n\"The place became a charnel house, and in the middle of the night the survivors fled forth, taking nothing with them except arms and ammunition and a heavy store of tinned foods. We camped on the opposite side of the campus from the prowlers, and, while some stood guard, others of us volunteered to scout into the city in quest of horses, motor cars, carts, and wagons, or anything that would carry our provisions and enable us to emulate the banded workingmen I had seen fighting their way out to the open country.\n\n\"I was one of these scouts; and Doctor Hoyle, remembering that his motor car had been left behind in his home garage, told me to look for it. We scouted in pairs, and Dombey, a young undergraduate, accompanied me. We had to cross half a mile of the residence portion of the city to get to Doctor Hoyle's home. Here the buildings stood apart, in the midst of trees and grassy lawns, and here the fires had played freaks, burning whole blocks, skipping blocks and often skipping a single house in a block. And here, too, the prowlers were still at their work. We carried our automatic pistols openly in our hands, and looked desperate enough, forsooth, to keep them from attacking us. But at Doctor Hoyle's house the thing happened. Untouched by fire, even as we came to it the smoke of flames burst forth.\n\n\"The miscreant who had set fire to it staggered down the steps and out along the driveway. Sticking out of his coat pockets were bottles of whiskey, and he was very drunk. My first impulse was to shoot him, and I have never ceased regretting that I did not. Staggering and maundering to himself, with bloodshot eyes, and a raw and bleeding slash down one side of his bewhiskered face, he was altogether the most nauseating specimen of degradation and filth I had ever encountered. I did not shoot him, and he leaned against a tree on the lawn to let us go by. It was the most absolute, wanton act. Just as we were opposite him, he suddenly drew a pistol and shot Dombey through the head. The next instant I shot him. But it was too late. Dombey expired without a groan, immediately. 
I doubt if he even knew what had happened to him.\n\n\"Leaving the two corpses, I hurried on past the burning house to the garage, and there found Doctor Hoyle's motor car. The tanks were filled with gasoline, and it was ready for use. And it was in this car that I threaded the streets of the ruined city and came back to the survivors on the campus. The other scouts returned, but none had been so fortunate. Professor Fairmead had found a Shetland pony, but the poor creature, tied in a stable and abandoned for days, was so weak from want of food and water that it could carry no burden at all. Some of the men were for turning it loose, but I insisted that we should lead it along with us, so that, if we got out of food, we would have it to eat.\n\n\"There were forty-seven of us when we started, many being women and children. The President of the Faculty, an old man to begin with, and now hopelessly broken by the awful happenings of the past week, rode in the motor car with several young children and the aged mother of Professor Fairmead. Wathope, a young professor of English, who had a grievous bullet-wound in his leg, drove the car. The rest of us walked, Professor Fairmead leading the pony.\n\n\"It was what should have been a bright summer day, but the smoke from the burning world filled the sky, through which the sun shone murkily, a dull and lifeless orb, blood-red and ominous. But we had grown accustomed to that blood-red sun. With the smoke it was different. It bit into our nostrils and eyes, and there was not one of us whose eyes were not bloodshot. We directed our course to the southeast through the endless miles of suburban residences, travelling along where the first swells of low hills rose from the flat of the central city. It was by this way, only, that we could expect to gain the country.\n\n\"Our progress was painfully slow. The women and children could not walk fast. They did not dream of walking, my grandsons, in the way all people walk today. In truth, none of us knew how to walk. It was not until after the plague that I learned really to walk. So it was that the pace of the slowest was the pace of all, for we dared not separate on account of the prowlers. There were not so many now of these human beasts of prey. The plague had already well diminished their numbers, but enough still lived to be a constant menace to us. Many of the beautiful residences were untouched by fire, yet smoking ruins were everywhere. The prowlers, too, seemed to have got over their insensate desire to burn, and it was more rarely that we saw houses freshly on fire.\n\n\"Several of us scouted among the private garages in search of motor cars and gasoline. But in this we were unsuccessful. The first great flights from the cities had swept all such utilities away. Calgan, a fine young man, was lost in this work. He was shot by prowlers while crossing a lawn. Yet this was our only casualty, though, once, a drunken brute deliberately opened fire on all of us. Luckily, he fired wildly, and we shot him before he had done any hurt.\n\n\"At Fruitvale, still in the heart of the magnificent residence section of the city, the plague again smote us. Professor Fairmead was the victim. Making signs to us that his mother was not to know, he turned aside into the grounds of a beautiful mansion. He sat down forlornly on the steps of the front veranda, and I, having lingered, waved him a last farewell. That night, several miles beyond Fruitvale and still in the city, we made camp. 
And that night we shifted camp twice to get away from our dead. In the morning there were thirty of us. I shall never forget the President of the Faculty. During the morning's march his wife, who was walking, betrayed the fatal symptoms, and when she drew aside to let us go on, he insisted on leaving the motor car and remaining with her. There was quite a discussion about this, but in the end we gave in. It was just as well, for we knew not which ones of us, if any, might ultimately escape.\n\n\"That night, the second of our march, we camped beyond Haywards in the first stretches of country. And in the morning there were eleven of us that lived. Also, during the night, Wathope, the professor with the wounded leg, deserted us in the motor car. He took with him his sister and his mother and most of our tinned provisions. It was that day, in the afternoon, while resting by the wayside, that I saw the last airship I shall ever see. The smoke was much thinner here in the country, and I first sighted the ship drifting and veering helplessly at an elevation of two thousand feet. What had happened I could not conjecture, but even as we looked we saw her bow dip down lower and lower. Then the bulkheads of the various gas-chambers must have burst, for, quite perpendicular, she fell like a plummet to the earth.\n\n\"And from that day to this I have not seen another airship. Often and often, during the next few years, I scanned the sky for them, hoping against hope that somewhere in the world civilization had survived. But it was not to be. What happened with us in California must have happened with everybody everywhere.\n\n\"Another day, and at Niles there were three of us. Beyond Niles, in the middle of the highway, we found Wathope. The motor car had broken down, and there, on the rugs which they had spread on the ground, lay the bodies of his sister, his mother, and himself.\n\n\"Wearied by the unusual exercise of continual walking, that night I slept heavily. In the morning I was alone in the world. Canfield and Parsons, my last companions, were dead of the plague. Of the four hundred that sought shelter in the Chemistry Building, and of the forty-seven that began the march, I alone remained\u2014I and the Shetland pony. Why this should be so there is no explaining. I did not catch the plague, that is all. I was immune. I was merely the one lucky man in a million\u2014just as every survivor was one in a million, or, rather, in several millions, for the proportion was at least that.\"\n\n## V\n\n\"FOR two days I sheltered in a pleasant grove where there had been no deaths. In those two days, while badly depressed and believing that my turn would come at any moment, nevertheless I rested and recuperated. So did the pony. And on the third day, putting what small store of tinned provisions I possessed on the pony's back, I started on across a very lonely land. Not a live man, woman, or child, did I encounter, though the dead were everywhere. Food, however, was abundant. The land then was not as it is now. It was all cleared of trees and brush, and it was cultivated. The food for millions of mouths was growing, ripening, and going to waste. From the fields and orchards I gathered vegetables, fruits, and berries. Around the deserted farmhouses I got eggs and caught chickens. And frequently I found supplies of tinned provisions in the store-rooms.\n\n\"A strange thing was what was taking place with all the domestic animals. Everywhere they were going wild and preying on one another. 
The chickens and ducks were the first to be destroyed, while the pigs were the first to go wild, followed by the cats. Nor were the dogs long in adapting themselves to the changed conditions. There was a veritable plague of dogs. They devoured the corpses, barked and howled during the nights, and in the daytime slunk about in the distance. As the time went by, I noticed a change in their behavior. At first they were apart from one another, very suspicious and very prone to fight. But after a not very long while they began to come together and run in packs. The dog, you see, always was a social animal, and this was true before ever he came to be domesticated by man. In the last days of the world before the plague, there were many many very different kinds of dogs—dogs without hair and dogs with warm fur, dogs so small that they would make scarcely a mouthful for other dogs that were as large as mountain lions. Well, all the small dogs, and the weak types, were killed by their fellows. Also, the very large ones were not adapted for the wild life and bred out. As a result, the many different kinds of dogs disappeared, and there remained, running in packs, the medium-sized wolfish dogs that you know today."

"But the cats don't run in packs, Granser," Hoo-Hoo objected.

"The cat was never a social animal. As one writer in the nineteenth century said, the cat walks by himself. He always walked by himself, from before the time he was tamed by man, down through the long ages of domestication, to today when once more he is wild.

"The horses also went wild, and all the fine breeds we had degenerated into the small mustang horse you know today. The cows likewise went wild, as did the pigeons and the sheep. And that a few of the chickens survived you know yourself. But the wild chicken of today is quite a different thing from the chickens we had in those days.

"But I must go on with my story. I travelled through a deserted land. As the time went by I began to yearn more and more for human beings. But I never found one, and I grew lonelier and lonelier. I crossed Livermore Valley and the mountains between it and the great valley of the San Joaquin. You have never seen that valley, but it is very large and it is the home of the wild horse. There are great droves there, thousands and tens of thousands. I revisited it thirty years after, so I know. You think there are lots of wild horses down here in the coast valleys, but they are as nothing compared with those of the San Joaquin. Strange to say, the cows, when they went wild, went back into the lower mountains. Evidently they were better able to protect themselves there.

"In the country districts the ghouls and prowlers had been less in evidence, for I found many villages and towns untouched by fire. But they were filled by the pestilential dead, and I passed by without exploring them. It was near Lathrop that, out of my loneliness, I picked up a pair of collie dogs that were so newly free that they were urgently willing to return to their allegiance to man. These collies accompanied me for many years, and the strains of them are in those very dogs there that you boys have today. But in sixty years the collie strain has worked out. These brutes are more like domesticated wolves than anything else."

Hare-Lip rose to his feet, glanced to see that the goats were safe, and looked at the sun's position in the afternoon sky, advertising impatience at the prolixity of the old man's tale. Urged to hurry by Edwin, Granser went on.
\"There is little more to tell. With my two dogs and my pony, and riding a horse I had managed to capture, I crossed the San Joaquin and went on to a wonderful valley in the Sierras called Yosemite. In the great hotel there I found a prodigious supply of tinned provisions. The pasture was abundant, as was the game, and the river that ran through the valley was full of trout. I remained there three years in an utter loneliness that none but a man who has once been highly civilized can understand. Then I could stand it no more. I felt that I was going crazy. Like the dog, I was a social animal and I needed my kind. I reasoned that since I had survived the plague, there was a possibility that others had survived. Also, I reasoned that after three years the plague germs must all be gone and the land be clean again.\n\n\"With my horse and dogs and pony, I set out. Again I crossed the San Joaquin Valley, the mountains beyond, and came down into Livermore Valley. The change in those three years was amazing. All the land had been splendidly tilled, and now I could scarcely recognize it, such was the sea of rank vegetation that had overrun the agricultural handiwork of man. You see, the wheat, the vegetables, and orchard trees had always been cared for and nursed by man, so that they were soft and tender. The weeds and wild bushes and such things, on the contrary, had always been fought by man, so that they were tough and resistant. As a result, when the hand of man was removed, the wild vegetation smothered and destroyed practically all the domesticated vegetation. The coyotes were greatly increased, and it was at this time that I first encountered wolves, straying in twos and threes and small packs down from the regions where they had always persisted.\n\n\"It was at Lake Temescal, not far from the one-time city of Oakland, that I came upon the first live human beings. Oh, my grandsons, how can I describe to you my emotion, when, astride my horse and dropping down the hillside to the lake, I saw the smoke of a campfire rising through the trees. Almost did my heart stop beating. I felt that I was going crazy. Then I heard the cry of a babe\u2014a human babe. And dogs barked, and my dogs answered. I did not know but what I was the one human alive in the whole world. It could not be true that here were others\u2014smoke, and the cry of a babe.\n\n\"Emerging on the lake, there, before my eyes, not a hundred yards away, I saw a man, a large man. He was standing on an outjutting rock and fishing. I was overcome. I stopped my horse. I tried to call out but could not. I waved my hand. It seemed to me that the man looked at me, but he did not appear to wave. Then I laid my head on my arms there in the saddle. I was afraid to look again, for I knew it was an hallucination, and I knew that if I looked the man would be gone. And so precious was the hallucination, that I wanted it to persist yet a little while. I knew, too, that as long as I did not look it would persist.\n\n\"Thus I remained, until I heard my dogs snarling, and a man's voice. What do you think the voice said? I will tell you. It said: '*Where in hell did you come from?*'\n\n\"Those were the words, the exact words. That was what your other grandfather said to me, Hare-Lip, when he greeted me there on the shore of Lake Temescal fifty-seven years ago. And they were the most ineffable words I have ever heard. I opened my eyes, and there he stood before me, a large, dark, hairy man, heavy-jawed, slant-browed, fierce-eyed. 
How I got off my horse I do not know. But it seemed that the next I knew I was clasping his hand with both of mine and crying. I would have embraced him, but he was ever a narrow-minded, suspicious man, and he drew away from me. Yet did I cling to his hand and cry.\"\n\nGranser's voice faltered and broke at the recollection, and the weak tears streamed down his cheeks while the boys looked on and giggled.\n\n\"Yet did I cry,\" he continued, \"and desire to embrace him, though the Chauffeur was a brute, a perfect brute\u2014the most abhorrent man I have ever known. His name was\u2026 strange, how I have forgotten his name. Everybody called him Chauffeur\u2014it was the name of his occupation, and it stuck. That is how, to this day, the tribe he founded is called the Chauffeur Tribe.\n\n\"He was a violent, unjust man. Why the plague germs spared him I can never understand. It would seem, in spite of our old metaphysical notions about absolute justice, that there is no justice in the universe. Why did he live?\u2014an iniquitous, moral monster, a blot on the face of nature, a cruel, relentless, bestial cheat as well. All he could talk about was motor cars, machinery, gasoline, and garages\u2014and especially, and with huge delight, of his mean pilferings and sordid swindlings of the persons who had employed him in the days before the coming of the plague. And yet he was spared, while hundreds of millions, yea, billions, of better men were destroyed.\n\n\"I went on with him to his camp, and there I saw her, Vesta, the one woman. It was glorious and\u2026 pitiful. There she was, Vesta Van Warden, the young wife of John Van Warden, clad in rags, with marred and scarred and toil-calloused hands, bending over the campfire and doing scullion work\u2014she, Vesta, who had been born to the purple of the greatest baronage of wealth the world had ever known. John Van Warden, her husband, worth one billion, eight hundred millions and President of the Board of Industrial Magnates, had been the ruler of America. Also, sitting on the International Board of Control, he had been one of the seven men who ruled the world. And she herself had come of equally noble stock. Her father, Philip Saxon, had been President of the Board of Industrial Magnates up to the time of his death. This office was in process of becoming hereditary, and had Philip Saxon had a son that son would have succeeded him. But his only child was Vesta, the perfect flower of generations of the highest culture this planet has ever produced. It was not until the engagement between Vesta and Van Warden took place, that Saxon indicated the latter as his successor. It was, I am sure, a political marriage. I have reason to believe that Vesta never really loved her husband in the mad passionate way of which the poets used to sing. It was more like the marriages that obtained among crowned heads in the days before they were displaced by the Magnates.\n\n\"And there she was, boiling fish-chowder in a soot-covered pot, her glorious eyes inflamed by the acrid smoke of the open fire. Hers was a sad story. She was the one survivor in a million, as I had been, as the Chauffeur had been. On a crowning eminence of the Alameda Hills, overlooking San Francisco Bay, Van Warden had built a vast summer palace. It was surrounded by a park of a thousand acres. When the plague broke out, Van Warden sent her there. Armed guards patrolled the boundaries of the park, and nothing entered in the way of provisions or even mail matter that was not first fumigated. 
And yet did the plague enter, killing the guards at their posts, the servants at their tasks, sweeping away the whole army of retainers\u2014or, at least, all of them who did not flee to die elsewhere. So it was that Vesta found herself the sole living person in the palace that had become a charnel house.\n\n\"Now the Chauffeur had been one of the servants that ran away. Returning, two months afterward, he discovered Vesta in a little summer pavilion where there had been no deaths and where she had established herself. He was a brute. She was afraid, and she ran away and hid among the trees. That night, on foot, she fled into the mountains\u2014she, whose tender feet and delicate body had never known the bruise of stones nor the scratch of briars. He followed, and that night he caught her. He struck her. Do you understand? He beat her with those terrible fists of his and made her his slave. It was she who had to gather the firewood, build the fires, cook, and do all the degrading camp-labor\u2014she, who had never performed a menial act in her life. These things he compelled her to do, while he, a proper savage, elected to lie around camp and look on. He did nothing, absolutely nothing, except on occasion to hunt meat or catch fish.\"\n\n\"Good for Chauffeur,\" Hare-Lip commented in an undertone to the other boys. \"I remember him before he died. He was a corker. But he did things, and he made things go. You know, Dad married his daughter, an' you ought to see the way he knocked the spots outa Dad. The Chauffeur was a son-of-a-gun. He made us kids stand around. Even when he was croaking he reached out for me, once, an' laid my head open with that long stick he kept always beside him.\"\n\nHare-Lip rubbed his bullet head reminiscently, and the boys returned to the old man, who was maundering ecstatically about Vesta, the squaw of the founder of the Chauffeur Tribe.\n\n\"And so I say to you that you cannot understand the awfulness of the situation. The Chauffeur was a servant, understand, a servant. And he cringed, with bowed head, to such as she. She was a lord of life, both by birth and by marriage. The destinies of millions, such as he, she carried in the hollow of her pink-white hand. And, in the days before the plague, the slightest contact with such as he would have been pollution. Oh, I have seen it. Once, I remember, there was Mrs. Goldwin, wife of one of the great magnates. It was on a landing stage, just as she was embarking in her private dirigible, that she dropped her parasol. A servant picked it up and made the mistake of handing it to her\u2014to her, one of the greatest royal ladies of the land! She shrank back, as though he were a leper, and indicated her secretary to receive it. Also, she ordered her secretary to ascertain the creature's name and to see that he was immediately discharged from service. And such a woman was Vesta Van Warden. And her the Chauffeur beat and made his slave.\n\n\"\u2014Bill\u2014that was it; Bill, the Chauffeur. That was his name. He was a wretched, primitive man, wholly devoid of the finer instincts and chivalrous promptings of a cultured soul. No, there is no absolute justice, for to him fell that wonder of womanhood, Vesta Van Warden. The grievousness of this you will never understand, my grandsons; for you are yourselves primitive little savages, unaware of aught else but savagery. Why should Vesta not have been mine? I was a man of culture and refinement, a professor in a great university. 
Even so, in the time before the plague, such was her exalted position, she would not have deigned to know that I existed. Mark, then, the abysmal degradation to which she fell at the hands of the Chauffeur. Nothing less than the destruction of all mankind had made it possible that I should know her, look in her eyes, converse with her, touch her hand\u2014ay, and love her and know that her feelings toward me were very kindly. I have reason to believe that she, even she, would have loved me, there being no other man in the world except the Chauffeur. Why, when it destroyed eight billions of souls, did not the plague destroy just one more man, and that man the Chauffeur?\n\n\"Once, when the Chauffeur was away fishing, she begged me to kill him. With tears in her eyes she begged me to kill him. But he was a strong and violent man, and I was afraid. Afterwards, I talked with him. I offered him my horse, my pony, my dogs, all that I possessed, if he would give Vesta to me. And he grinned in my face and shook his head. He was very insulting. He said that in the old days he had been a servant, had been dirt under the feet of men like me and of women like Vesta, and that now he had the greatest lady in the land to be servant to him and cook his food and nurse his brats. 'You had your day before the plague,' he said; 'but this is my day, and a damned good day it is. I wouldn't trade back to the old times for anything.' Such words he spoke, but they are not his words. He was a vulgar, low-minded man, and vile oaths fell continually from his lips.\n\n\"Also, he told me that if he caught me making eyes at his woman he'd wring my neck and give her a beating as well. What was I to do? I was afraid. He was a brute. That first night, when I discovered the camp, Vesta and I had great talk about the things of our vanished world. We talked of art, and books, and poetry; and the Chauffeur listened and grinned and sneered. He was bored and angered by our way of speech which he did not comprehend, and finally he spoke up and said: 'And this is Vesta Van Warden, one-time wife of Van Warden the Magnate\u2014a high and stuck-up beauty, who is now my squaw. Eh, Professor Smith, times is changed, times is changed. Here, you, woman, take off my moccasins, and lively about it. I want Professor Smith to see how well I have you trained.'\n\n\"I saw her clench her teeth, and the flame of revolt rise in her face. He drew back his gnarled fist to strike, and I was afraid, and sick at heart. I could do nothing to prevail against him. So I got up to go, and not be witness to such indignity. But the Chauffeur laughed and threatened me with a beating if I did not stay and behold. And I sat there, perforce, by the campfire on the shore of Lake Temescal, and saw Vesta, Vesta Van Warden, kneel and remove the moccasins of that grinning, hairy, apelike human brute.\n\n\"\u2014Oh, you do not understand, my grandsons. You have never known anything else, and you do not understand.\n\n\"'Halter-broke and bridle-wise,' the Chauffeur gloated, while she performed that dreadful, menial task. 'A trifle balky at times, Professor, a trifle balky; but a clout alongside the jaw makes her as meek and gentle as a lamb.'\n\n\"And another time he said: 'We've got to start all over and replenish the earth and multiply. You're handicapped, Professor. You ain't got no wife, and we're up against a regular Garden-of-Eden proposition. But I ain't proud. I'll tell you what, Professor.' He pointed at their little infant, barely a year old. 
'There's your wife, though you'll have to wait till she grows up. It's rich, ain't it? We're all equals here, and I'm the biggest toad in the splash. But I ain't stuck up\u2014not I. I do you the honor, Professor Smith, the very great honor of betrothing to you my and Vesta Van Warden's daughter. Ain't it cussed bad that Van Warden ain't here to see?'\"\n\n## VI\n\n\"I LIVED three weeks of infinite torment there in the Chauffeur's camp. And then, one day, tiring of me, or of what to him was my bad effect on Vesta, he told me that the year before, wandering through the Contra Costa Hills to the Straits of Carquinez, across the Straits he had seen a smoke. This meant that there were still other human beings, and that for three weeks he had kept this inestimably precious information from me. I departed at once, with my dogs and horses, and journeyed across the Contra Costa Hills to the Straits. I saw no smoke on the other side, but at Port Costa discovered a small steel barge on which I was able to embark my animals. Old canvas which I found served me for a sail, and a southerly breeze fanned me across the Straits and up to the ruins of Vallejo. Here, on the outskirts of the city, I found evidences of a recently occupied camp.\n\n\"Many clam-shells showed me why these humans had come to the shores of the Bay. This was the Santa Rosa Tribe, and I followed its track along the old railroad right of way across the salt marshes to Sonoma Valley. Here, at the old brickyard at Glen Ellen, I came upon the camp. There were eighteen souls all told. Two were old men, one of whom was Jones, a banker. The other was Harrison, a retired pawnbroker, who had taken for wife the matron of the State Hospital for the Insane at Napa. Of all the persons of the city of Napa, and of all the other towns and villages in that rich and populous valley, she had been the only survivor. Next, there were the three young men\u2014Cardiff and Hale, who had been farmers, and Wainwright, a common day-laborer. All three had found wives. To Hale, a crude, illiterate farmer, had fallen Isadore, the greatest prize, next to Vesta, of the women who came through the plague. She was one of the world's most noted singers, and the plague had caught her at San Francisco. She has talked with me for hours at a time, telling me of her adventures, until, at last, rescued by Hale in the Mendocino Forest Reserve, there had remained nothing for her to do but become his wife. But Hale was a good fellow, in spite of his illiteracy. He had a keen sense of justice and right-dealing, and she was far happier with him than was Vesta with the Chauffeur.\n\n\"The wives of Cardiff and Wainwright were ordinary women, accustomed to toil with strong constitutions\u2014just the type for the wild new life which they were compelled to live. In addition were two adult idiots from the feeble-minded home at Eldredge, and five or six young children and infants born after the formation of the Santa Rosa Tribe. Also, there was Bertha. She was a good woman, Hare-Lip, in spite of the sneers of your father. Her I took for wife. She was the mother of your father, Edwin, and of yours, Hoo-Hoo. And it was our daughter, Vera, who married your father, Hare-Lip\u2014your father, Sandow, who was the oldest son of Vesta Van Warden and the Chauffeur.\n\n\"And so it was that I became the nineteenth member of the Santa Rosa Tribe. There were only two outsiders added after me. 
One was Mungerson, descended from the Magnates, who wandered alone in the wilds of Northern California for eight years before he came south and joined us. He it was who waited twelve years more before he married my daughter, Mary. The other was Johnson, the man who founded the Utah Tribe. That was where he came from, Utah, a country that lies very far away from here, across the great deserts, to the east. It was not until twenty-seven years after the plague that Johnson reached California. In all that Utah region he reported but three survivors, himself one, and all men. For many years these three men lived and hunted together, until, at last, desperate, fearing that with them the human race would perish utterly from the planet, they headed westward on the possibility of finding women survivors in California. Johnson alone came through the great desert, where his two companions died. He was forty-six years old when he joined us, and he married the fourth daughter of Isadore and Hale, and his eldest son married your aunt, Hare-Lip, who was the third daughter of Vesta and the Chauffeur. Johnson was a strong man, with a will of his own. And it was because of this that he seceded from the Santa Rosans and formed the Utah Tribe at San Jos\u00e9. It is a small tribe\u2014there are only nine in it; but, though he is dead, such was his influence and the strength of his breed, that it will grow into a strong tribe and play a leading part in the recivilization of the planet.\n\n\"There are only two other tribes that we know of\u2014the Los Angelitos and the Carmelitos. The latter started from one man and woman. He was called Lopez, and he was descended from the ancient Mexicans and was very black. He was a cowherd in the ranges beyond Carmel, and his wife was a maidservant in the great Del Monte Hotel. It was seven years before we first got in touch with the Los Angelitos. They have a good country down there, but it is too warm. I estimate the present population of the world at between three hundred and fifty and four hundred\u2014provided, of course, that there are no scattered little tribes elsewhere in the world. If there be such, we have not heard from them. Since Johnson crossed the desert from Utah, no word nor sign has come from the East or anywhere else. The great world which I knew in my boyhood and early manhood is gone. It has ceased to be. I am the last man who was alive in the days of the plague and who knows the wonders of that far-off time. We, who mastered the planet\u2014its earth, and sea, and sky\u2014and who were as very gods, now live in primitive savagery along the water courses of this California country.\n\n\"But we are increasing rapidly\u2014your sister, Hare-Lip, already has four children. We are increasing rapidly and making ready for a new climb toward civilization. In time, pressure of population will compel us to spread out, and a hundred generations from now we may expect our descendants to start across the Sierras, oozing slowly along, generation by generation, over the great continent to the colonization of the East\u2014a new Aryan drift around the world.\n\n\"But it will be slow, very slow; we have so far to climb. We fell so hopelessly far. If only one physicist or one chemist had survived! But it was not to be, and we have forgotten everything. The Chauffeur started working in iron. He made the forge which we use to this day. But he was a lazy man, and when he died he took with him all he knew of metals and machinery. What was I to know of such things? 
I was a classical scholar, not a chemist. The other men who survived were not educated. Only two things did the Chauffeur accomplish—the brewing of strong drink and the growing of tobacco. It was while he was drunk, once, that he killed Vesta. I firmly believe that he killed Vesta in a fit of drunken cruelty though he always maintained that she fell into the lake and was drowned.

"And, my grandsons, let me warn you against the medicine-men. They call themselves *doctors*, travestying what was once a noble profession, but in reality they are medicine-men, devil-devil men, and they make for superstition and darkness. They are cheats and liars. But so debased and degraded are we, that we believe their lies. They, too, will increase in numbers as we increase, and they will strive to rule us. Yet are they liars and charlatans. Look at young Cross-Eyes, posing as a doctor, selling charms against sickness, giving good hunting, exchanging promises of fair weather for good meat and skins, sending the death-stick, performing a thousand abominations. Yet I say to you, that when he says he can do these things, he lies. I, Professor Smith, Professor James Howard Smith, say that he lies. I have told him so to his teeth. Why has he not sent me the death-stick? Because he knows that with me it is without avail. But you, Hare-Lip, so deeply are you sunk in black superstition that did you awake this night and find the death-stick beside you, you would surely die. And you would die, not because of any virtues in the stick, but because you are a savage with the dark and clouded mind of a savage.

"The doctors must be destroyed, and all that was lost must be discovered over again. Wherefore, earnestly, I repeat unto you certain things which you must remember and tell to your children after you. You must tell them that when water is made hot by fire, there resides in it a wonderful thing called steam, which is stronger than ten thousand men and which can do all man's work for him. There are other very useful things. In the lightning flash resides a similarly strong servant of man, which was of old his slave and which some day will be his slave again.

"Quite a different thing is the alphabet. It is what enables me to know the meaning of fine markings, whereas you boys know only rude picture-writing. In that dry cave on Telegraph Hill, where you see me often go when the tribe is down by the sea, I have stored many books. In them is great wisdom. Also, with them, I have placed a key to the alphabet, so that one who knows picture-writing may also know print. Some day men will read again; and then, if no accident has befallen my cave, they will know that Professor James Howard Smith once lived and saved for them the knowledge of the ancients.

"There is another little device that men inevitably will rediscover. It is called gunpowder. It was what enabled us to kill surely and at long distances. Certain things which are found in the ground, when combined in the right proportions, will make this gunpowder. What these things are, I have forgotten, or else I never knew. But I wish I did know. Then would I make powder, and then would I certainly kill Cross-Eyes and rid the land of superstition—"

"After I am man-grown I am going to give Cross-Eyes all the goats, and meat, and skins I can get, so that he'll teach me to be a doctor," Hoo-Hoo asserted. "And when I know, I'll make everybody else sit up and take notice. 
They'll get down in the dirt to me, you bet.\"\n\nThe old man nodded his head solemnly, and murmured:\n\n\"Strange it is to hear the vestiges and remnants of the complicated Aryan speech falling from the lips of a filthy little skin-clad savage. All the world is topsy-turvy. And it has been topsy-turvy ever since the plague.\"\n\n\"You won't make me sit up,\" Hare-Lip boasted to the would-be medicine-man. \"If I paid you for a sending of the death-stick and it didn't work, I'd bust in your head\u2014understand, you Hoo-Hoo, you?\"\n\n\"I'm going to get Granser to remember this here gunpowder stuff,\" Edwin said softly, \"and then I'll have you all on the run. You, Hare-Lip, will do my fighting for me and get my meat for me, and you, Hoo-Hoo, will send the death-stick for me and make everybody afraid. And if I catch Hare-Lip trying to bust your head, Hoo-Hoo, I'll fix him with that same gunpowder. Granser ain't such a fool as you think, and I'm going to listen to him and some day I'll be boss over the whole bunch of you.\"\n\nThe old man shook his head sadly, and said:\n\n\"The gunpowder will come. Nothing can stop it\u2014the same old story over and over. Man will increase, and men will fight. The gunpowder will enable men to kill millions of men, and in this way only, by fire and blood, will a new civilization, in some remote day, be evolved. And of what profit will it be? Just as the old civilization passed, so will the new. It may take fifty thousand years to build, but it will pass. All things pass. Only remain cosmic force and matter, ever in flux, ever acting and reacting and realizing the eternal types\u2014the priest, the soldier, and the king. Out of the mouths of babes comes the wisdom of all the ages. Some will fight, some will rule, some will pray; and all the rest will toil and suffer sore while on their bleeding carcasses is reared again, and yet again, without end, the amazing beauty and surpassing wonder of the civilized state. It were just as well that I destroyed those cave-stored books\u2014whether they remain or perish, all their old truths will be discovered, their old lies lived and handed down. What is the profit\u2014\"\n\nHare-Lip leaped to his feet, giving a quick glance at the pasturing goats and the afternoon sun.\n\n\"Gee!\" he muttered to Edwin, \"The old geezer gets more long-winded every day. Let's pull for camp.\"\n\nWhile the other two, aided by the dogs, assembled the goats and started them for the trail through the forest, Edwin stayed by the old man and guided him in the same direction. When they reached the old right of way, Edwin stopped suddenly and looked back. Hare-Lip and Hoo-Hoo and the dogs and the goats passed on. Edwin was looking at a small herd of wild horses which had come down on the hard sand. There were at least twenty of them, young colts and yearlings and mares, led by a beautiful stallion which stood in the foam at the edge of the surf, with arched neck and bright wild eyes, sniffing the salt air from off the sea.\n\n\"What is it?\" Granser queried.\n\n\"Horses,\" was the answer. \"First time I ever seen 'em on the beach. It's the mountain lions getting thicker and thicker and driving 'em down.\"\n\nThe low sun shot red shafts of light, fan-shaped, up from a cloud-tumbled horizon. And close at hand, in the white waste of shore-lashed waters, the sea-lions, bellowing their old primeval chant, hauled up out of the sea on the black rocks and fought and loved.\n\n\"Come on, Granser,\" Edwin prompted. 
And old man and boy, skin-clad and barbaric, turned and went along the right of way into the forest in the wake of the goats.

THE END

## John Griffith "Jack" London

(1876–1916) was an American novelist, one of the first writers to become rich from writing fiction. His most famous novels—*The Call of the Wild* (1903) and *White Fang* (1906)—were adventure stories set during the Klondike Gold Rush. A number of London's stories were science fiction, and dealt with topics such as invisibility, germ warfare, and energy weapons.

#### Commentary

In 1918, just six years after "The Scarlet Plague" was published in *London Magazine*, the deadly Spanish flu pandemic struck humanity. (The disease got its name not because it originated in Spain but simply because reporters were free to describe its dread effects there. In order to maintain morale, wartime censors suppressed early reports of the illness in the nations fighting World War I. At the time, therefore, it seemed as if neutral Spain had been particularly badly affected.) The H1N1 influenza virus circulating in the period January 1918 to December 1920 infected half a billion people around the globe and killed as many as 100 million people—five percent of the human population. Flu deaths far exceeded the number killed in four years of battle. The disease struck everywhere, from the Arctic to isolated islands in the Pacific. In some communities the impact was so terrible that survivors decided it would be best never to talk about it—to pretend, as it were, that this grim event had simply not happened.

Spanish flu was merely the most recent of large-scale pandemics; infectious disease has attacked humankind throughout history. Bubonic plague has been responsible for the most notable pandemics. The first recorded outbreak was the Plague of Justinian (541–542), which struck the Byzantine Empire. Historians estimate that 25 million people died of the plague—less than the number of fatalities caused by Spanish flu, but the total population was much smaller back in Roman times. The Plague of Justinian killed about 13% of the world's population. The plague returned over the following two centuries, killing a further 25 million people.

The second major outbreak of plague began with the Black Death in 1347. The disease made numerous returns over the following three centuries. The Black Death was one of the most devastating events in human history: it killed between 75 and 200 million people. The dead were placed in mass graves known as plague pits; in some towns and villages there weren't enough living to bury the dead. (Figure 3.1 shows an example of a plague pit.) The world population did not recover to pre-Black Death levels until the seventeenth century. Many historians argue that the devastation unleashed by this pandemic had a significant impact on the course of European history: the labour shortages it caused accelerated several economic, social, and technical developments, and might even have helped usher in the Renaissance.

So we know pandemics occur. It's entirely possible—even probable—that a destructive flu pandemic will strike again. And it's not hard to imagine how a mutation in one of the viral haemorrhagic fevers—Ebola, say, or the Marburg virus—could lead to a disease causing widespread suffering. Therefore the basic premise of "The Scarlet Plague" is not unrealistic. But Jack London was writing more than a century ago. 
Although elements of his story were prescient (for example, he guessed a global population of eight billion people by 2010—not bad), science and technology have advanced to a level he could scarcely have imagined. Suppose a disease such as the scarlet plague did break out, and let's assume it was as lethal as, say, Spanish flu. Would our modern civilisation collapse in the way London suggests?

This is not an easy question to answer.

On the one hand, as I write, the world population is about 7.6 billion. A large fraction of the population is mobile to an extent that would have astonished Jack London. In the past, disease travelled from continent to continent at the speed of sailing boats. Nowadays, if people develop a disease in Beijing, say, they can carry it to Berlin within hours. The combination of a large pool of people in which disease can develop and the rapid, large-scale movement of people means that infections can be transmitted more efficiently than ever before. Furthermore, the threats posed by viruses and bacteria are always evolving. The flu virus, for example, changes constantly. One type of change is a gradual "drift" in genetic make-up that leads, over time, to a virus our immune system fails to recognise even if we have previously been exposed to a similar virus. (This is one of the reasons why flu vaccine effectiveness is so hit-and-miss: the vaccine must be reformulated every year, based on an informed guess about which strains are likely to cause most suffering in the coming year.) Another type of change is an abrupt "shift" in genetic make-up, caused by a random mutation. When this happens, a pandemic becomes possible: most people will possess no immune protection against the new virus. In short, if the microbial world is our enemy then we face a crafty foe. We might well choose to think of microbes as a threat to our civilisation.

On the other hand, our understanding of medicine in general and of public health in particular is vastly more advanced than in London's day. Furthermore, information can travel even more quickly than people. These advances help mitigate the threat of pandemic disease. Consider, for example, the case of SARS. Between November 2002 and July 2003, a viral disease causing flu-like symptoms caused 774 deaths in China and neighbouring countries. The disease, which was given the name Severe Acute Respiratory Syndrome (SARS for short), was new and it was dangerous—it had a 9.6% fatality rate. There was no vaccine against SARS; there isn't one now. Nevertheless, the dire threat posed to our civilisation by SARS did not materialise. The rapid response of public health authorities, at national and international level, broke the chain of transmission. Since 2004, no case of SARS has been reported anywhere in the world.

Or consider the case of the swine flu outbreak of 2009. A new strain of the H1N1 virus (see Fig. 3.2) began circulating and, since H1N1 caused the dreadful Spanish flu pandemic, it's no surprise that individuals and organisations were worried. Fortunately, the virus that caused swine flu was about one hundred times less lethal than the 1918 virus. Even if that had not been the case, I suspect the outcome of the 2009 pandemic would have been less severe than what happened in 1918. For one thing, doctors could prescribe antiviral medicines—the antivirals weren't hugely successful, but they were better than nothing. More importantly, the public health response was rapid. 
My own university was soon covered with posters explaining how to slow the transmission of the disease. Some of those posters are still to be found; they are fading now, but they still provide basic but effective hygiene advice. Furthermore, although it turned out not to be needed, organisations developed business continuity plans. In my own university, these continuity plans involved teaching online if a pandemic caused students to stay away from lecture theatres. (In 1665, the University of Cambridge closed down as a precaution against the Great Plague, the last major outbreak of bubonic plague in England. Cambridge was unable to offer online learning back then, but that turned out not to be a hindrance for Isaac Newton. During enforced private study at his home in Woolsthorpe he developed his ideas on optics, calculus, and the law of universal gravitation!)

I'm writing this almost exactly one hundred years after doctors observed the first cases of Spanish flu. For an entire century, humans have managed to avoid a pandemic on the scale of the Spanish flu. So it's tempting to conclude that although technological developments might promote a pandemic they also provide us with tools to help prevent a pandemic. It's comforting to suppose that our science, technology, and medicine will avert the disaster envisaged in "The Scarlet Plague".

Our knowledge might well save us. But only a fool would be complacent.

Humanity faces a growing threat from bacteria—a self-inflicted threat we could ward off if people would only act rationally. The menace stems from the misuse of our most effective weapon against bacteria.

People are naturally enamoured of antibiotics. Patients demand, and often receive, antibiotics as a treatment for colds, sore throats, earaches… even in cases where doctors know antibiotics won't work. Antibiotics are used in crop production, as pesticides, or to treat disease in plants. Antibiotics are given to animals as freely as they are given to humans—vets use them to treat disease while farmers use them to promote growth. We live in a world awash with antibiotics. The problem? Well, in order to survive in this antibiotic-filled environment bacteria have evolved resistance. Some bacteria are now resistant to all known antibiotics.

A world without effective antibiotics is a terrifying prospect. Many routine medical interventions we now take for granted—appendix operations, hip replacements, transplant surgery—would be dangerous: patients might survive the knife but succumb to infection. We'd face the same risks people faced before 1928, when Alexander Fleming discovered penicillin. Worse, though, is the possibility of pandemic.

Consider the plague. The disease is caused by the bacterium *Yersinia pestis*. When this pestilence stalked our ancestors the prognosis was poor for anyone infected: chances were high the sufferer would die a horrible death. Antibiotics changed the story. The plague infects people to this day, but nowadays if patients are given streptomycin quickly enough then the chances are high they'll survive. So if a strain of *Yersinia pestis* evolves resistance to all antibiotics then doctors—and, more to the point, patients—will be in trouble. 
It's a similar story with many other infectious diseases.

Science and medicine might have provided us with tools to help prevent a pandemic, but we're letting one of our best tools get rusty.

Black Death, Spanish flu, or something along the lines of the scarlet plague—another pandemic will surely happen eventually. When it comes, though, we'll at least have the comfort of knowing the pandemic agent won't be *trying* to kill us. Some bacteria and viruses cause us harm, but that's just a byproduct of their life cycle. It's nothing personal. As mentioned in Chapter 2, however, advances in biotechnology will soon permit a group or even an individual to *design* a microbial agent that's *intended* to kill. The terrorist, or perhaps merely the misogynist, will have the ability to target disease at specific groups—males, females, the pre-pubescent, those possessing too much or too little skin melanin. And an engineered pandemic could be designed to kill more effectively than the natural variety.

Consider the West African Ebola epidemic of 2013–2016. According to official statistics the virus caused 11,310 deaths. This was a shocking outbreak, of course, but in some ways the virus killed too effectively for its own good. Symptoms became obvious between 2 and 21 days after exposure to the virus, and this made containment and control possible. Anyone who was in contact with a patient was tracked for 21 days; communities were made aware of risk factors and preventative measures; and quarantines were put in place. Roughly 900 days after the first case was diagnosed, the epidemic was over. But the single-minded bioterrorist could *engineer* an infectious agent in such a way that obvious countermeasures would be ineffective. The agent, for example, might be airborne and easily transmitted through the simple act of breathing. It might establish itself in its host whilst producing no symptoms. After a long latency period it could be "switched on"—and death would follow for all those infected.

SF writers have long imagined something along these lines. In Frank Herbert's 1982 novel *The White Plague*, for example, the titular plague was designed by a molecular biologist. The biologist, driven insane by the death of his wife and children in a car bomb, desires revenge—so he develops a deadly plague, which is carried by men but kills only women. In his 1997 novel *The Cobra Event*, Richard Preston has an antagonist called Archimedes release a genetically engineered virus (called "Cobra")—a fusion of the common cold and smallpox viruses—which results in a horrifying disease called brainpox. (Preston is the author of the non-fiction book *The Hot Zone*, which gives a well-written account of the viral haemorrhagic fevers.) In Paolo Bacigalupi's award-winning 2009 novel *The Windup Girl*, large corporations release bioengineered plagues that attack crops—a lucrative activity if you possess plague-resistant seeds.

White plague, Cobra, bioengineered crop plagues… these nightmares remain science fictional. The knowledge and techniques needed to realise them don't exist yet. But, as described in the commentary to the previous chapter, the rate of progress in biotechnology is astounding. 
In a few years there'll be research labs possessing the knowledge necessary to create a deadly life form from scratch; a few years later those same techniques will be available to undergraduates.

Such technology is so dangerous it presents an existential risk. Should we not therefore police it, in the same way we police other existential risks? After all, the world has managed to avoid a nuclear catastrophe by cooperating at the international level to limit the spread of the technology that permits the construction of hydrogen bombs. Unfortunately, nuclear non-proliferation techniques won't work in the case of bioterrorism. The construction of a nuclear arsenal can't easily be hidden from view—the resources of a nation state are required to build a hydrogen bomb—and so the activity can in principle be monitored. But one day soon a small terrorist cell working quietly in a garden shed might be able to engineer a virus using only a few test tubes. How could society possibly police that situation? The human race can survive natural pandemics: there have been many in our past and there'll be more in our future. But could humanity survive an *engineered* pandemic?

If a pandemic, natural or artificial, did wipe out humanity, what might Earth be like without us? Following London, it's interesting to speculate. Many of our buildings would likely soon vanish under the onslaught of wind, rain, and vegetation; roads would crack and bridges would collapse; a few constructions—the Channel Tunnel, for example—might last much longer. But eventually, most traces of humanity's time on this planet would be erased. Perhaps after a few tens of millions of years—the same sort of timescale separating us from the dinosaurs—evolution might produce another intelligent species. Would all traces of the achievements we so value be dissolved by the passage of time? Or would such a species be able to find evidence that humans once walked the Earth?

If some future intelligence were indeed able to infer the existence of a bipedal creature, which chose to dig up fossilised carbon and transform it into plastics and a source of power, then we'd surely seem bizarre to them. But would we seem as bizarre as the creatures depicted in Chapter 4?

#### Notes and Further Reading

- *the impact was so terrible*—The Spanish flu is the subject of numerous books; for an entertaining account (if "entertaining" is the correct word to use about such a grim event), see Spinney (2017).

- *attacked humankind throughout history*—In *Plagues and Peoples*, William McNeill (1976) explores the effects of disease on human history. He gives an in-depth account of the Justinian Plague, as well as more recent pandemics.

- *the Black Death in 1347*—One of the best of many accounts of the Black Death, its causes and consequences, is that by John Kelly (2006).

- *According to official statistics*—For further details of the Ebola epidemic that peaked during 2014–2015, see World Health Organisation (2016). For more recent information about this terrible disease, see World Health Organisation (2018).

- *writers have long imagined something along these lines*—For the books and novels mentioned here, see: Bacigalupi (2009); Herbert (1982); and Preston (1995, 1997).

- *be able to find evidence that humans once walked the Earth?*—This question lies at the core of a fascinating novel by a world-renowned astrophysicist. 
Jayant Narlikar's (2015) novel *The Return of Vaman* appears in the Springer Science & Fiction series. For non-fiction accounts exploring what the world might be like if humans vanished, see for example Weisman (2007) and Zalasiewicz (2009).

abstract: A report on the 6th annual Future of Genomic Medicine conference, held at the Scripps Seaside Forum, La Jolla, CA, USA, March 7-8, 2013.
author: Konrad J Karczewski
date: 2013
institute: 1Biomedical Informatics Training Program, Stanford University School of Medicine, Stanford, CA 94305, USA; 2Department of Genetics, Stanford University School of Medicine, Stanford, CA 94305, USA
title: The future of genomic medicine is here

On his flight over to San Diego to lead the Future of Genomic Medicine (FoGM) conference, Dr Eric Topol (Scripps Translational Science Institute, USA) used a heart monitor device attached to his smartphone to diagnose a distressed passenger with atrial fibrillation. Already, mobile technologies such as this one are beginning to transform medicine, and genome sequencing, with its rapidly decreasing costs, is no exception. As we get closer to mini-sequencers and what George Church (Harvard Medical School, USA) termed 'wearable sequencing', a future of genomically informed medicine becomes possible. The FoGM conference integrated the patient-oriented perspective of genomic medicine with cutting-edge technologies, data integration, and developing methods and models, all with the aim of clinical utility.

# Patient-oriented genomic medicine

It is becoming increasingly clear, at this and other conferences, such as the Cold Spring Harbor Personal Genomes meetings, that genomics can have a profound role in guiding diagnoses and treatments. A major theme of this year's conference was the patient perspective and patients' reactions to having their genomes sequenced in a clinical setting. The conference started with a direct perspective from the parents of Lilly Grossman, a patient with a lifelong undiagnosed disease, marked by tremors and sleepless nights. 
After her full genome was sequenced through the Idiopathic Diseases of Man (IDIOM) study, led by Topol, mutations in *ADCY5* and *DOCK3* putatively explained her phenotype and suggested a possible treatment, which provided a few weeks of regular sleep. While the result was not a conclusive answer, it provided hope for the patient and her family. Howard Jacob (Medical College of Wisconsin, USA) agreed, stressing that even in the absence of clinical utility (if a diagnosis is not actionable), the personal utility of having a diagnosis is important to the patient and the patient's family. Jacob suggested a consumer-driven economy for personal genomics, and that even though variants and annotations are subject to change as technologies and interpretations improve, involving patients in the process can be an effective way to deal with these changes. Misha Angrist (Duke University, USA) mirrored these sentiments, drawing parallels to open-access publishing: subjects should have the right to their own data and to see results of the studies that use their data. Randy Scott (Invitae, USA) outlined his and Invitae's mission of bringing genetics to the masses by building databases and infrastructure for managing genetic information. The books that were handed out to participants reflected this mission: AJ Jacobs's *Drop Dead Healthy*, a foray into taking control of one's health, and *Exploring Personal Genomics*, a handbook to understanding and interpreting personal genetic data, by myself (Konrad Karczewski, Stanford University, USA) and Joel Dudley (Mount Sinai School of Medicine, USA).

While much of this discussion of patient perspectives was anecdotal, Cinnamon Bloss (Scripps Translational Science Institute, USA) presented hard data on the perceptions of both patients and physicians, and the differences therein, through surveys of families. Parents of patients and their doctors agreed that the doctor was knowledgeable about genetics, but the parents were much less satisfied with the doctor's explanations of the results. However, Bloss noted that the majority of patients, parents and physicians were interested in receiving secondary findings, regardless of age of onset or actionability, and desire for these results increased with actionability for all three groups.

# Going beyond SNPs

Another major scientific theme of FoGM this year involved the expansion beyond single nucleotide polymorphisms (SNPs) to other data types and technologies that could be integrated to inform diagnosis and treatment. In addition to getting his genome sequenced, Michael Snyder (Stanford University, USA) tracked a number of omics technologies over time, including his transcriptome, proteome and metabolome, and used this information to track the onset of diabetes concurrent with infection. Since every patient is unique, an 'N of 1' study, followed longitudinally over time, provided him with interesting observations of altered physiological states (such as infection) compared with his healthy state.

While Snyder's analysis was aimed at comprehensive profiling of a healthy individual, Elaine Mardis (Washington University in St Louis, USA) suggested a similar approach for cancers, where sequencing RNAs in addition to DNA would inform predictions of peptides binding with HLA class I. Such an analysis would prioritize antigens that could then be used for personalized immunotherapy. 
Eric Schadt (Mount Sinai School of Medicine, USA) used similar data types to discover personal cancer drivers and create patient-specific networks that could suggest personalized cancer treatments.

George Weinstock (Washington University in St Louis, USA) and Jonathan Eisen (University of California, Davis, USA) both brought the microbiome into the mix. Weinstock described efforts to sequence neonatal microbiomes to predict antimicrobial resistance, and discussed a future of fecal transplants and microbiome-based acne treatments. Eisen shared the optimism that the microbiome will become important for human phenotypes, but cautioned against overselling it and urged care in separating correlation from causation, a distinction that becomes difficult when large numbers of hypotheses are tested on outcomes that may be linked to their putative causes. Additionally, genetic counselors are already in high demand, and new data types may bring similar demands: Eisen called for microbiome counselors.

# Scaling up and towards unified models

Finally, a conference of this nature would not be complete without a call to arms for developing methods and models that will ultimately enable physicians to use genomic information in a clinical setting. Daniel MacArthur (Massachusetts General Hospital, USA) cautioned that consistent calling of exomes and genomes is of utmost importance for variant accuracy. He laid out the challenges of scaling up to variant calling of more than 26,000 exomes, but presented one solution in reduced BAMs, a compressed format that can then be used for joint variant calling to increase accuracy. From this dataset, MacArthur was able to catalog tolerated protein-coding variation. David Goldstein (Duke University, USA) used this catalog of tolerated variation to identify genes intolerant of functional variation, and focused on these to narrow down genetic factors for epileptic encephalopathy.

Peter Visscher (University of Queensland, Australia) described models for predicting complex traits from genotype, which will become increasingly important for sequencing of healthy individuals and prioritization of disease risks. Atul Butte (Stanford University, USA) brought this message back to the clinic, using likelihood ratios, which doctors already use, to combine variant risk information into a unified risk factor. While six gigabytes of genetic data may seem overwhelming at first, Butte reminds us that there is a specialty that routinely analyzes gigabyte-scale data: the radiologist. All that are needed are the proper tools.
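
To make the likelihood-ratio arithmetic concrete, the sketch below shows how per-variant likelihood ratios can be combined on the odds scale into a single post-test risk; the pre-test probability and the individual ratios are hypothetical values chosen for illustration, not figures from Butte's talk.

```python
# Minimal sketch (hypothetical values, not from the talk): combining
# per-variant likelihood ratios into a single post-test disease risk.

def post_test_risk(pre_test_prob, likelihood_ratios):
    """Update a pre-test probability with a set of likelihood ratios.

    Naively assumes the variants are independent, so their likelihood
    ratios simply multiply on the odds scale.
    """
    odds = pre_test_prob / (1.0 - pre_test_prob)  # probability -> odds
    for lr in likelihood_ratios:
        odds *= lr                                # each result updates the odds
    return odds / (1.0 + odds)                    # odds -> probability

# Example: 5% population (pre-test) risk and three genotyped variants
# with hypothetical likelihood ratios of 1.7, 0.8 and 2.1.
print(f"post-test risk: {post_test_risk(0.05, [1.7, 0.8, 2.1]):.3f}")
```

Because each new result simply multiplies the running odds, the bookkeeping is the same as that already applied to conventional diagnostic tests, which is precisely the appeal of the approach.
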
# Conclusions

The consensus among speakers at FoGM this year was clear: the genome has an important role in the clinic. With the price of sequencing dropping to costs amenable to regular clinical use, the question now is not if, but how, genomic information will be integrated. There is a coming onslaught of 'big data' that will improve individual healthcare and enable genomic personalized medicine. Challenges still remain, including establishing a patient-centric view of genomic data, which is tied to educating the public and encouraging participation in personal health, as well as standardizing models to most accurately identify causal variation and portray disease risk. As these and new challenges arise, it will take a concerted effort of physicians and scientists to bring forth a future of genomic medicine.

# Abbreviations

BAM: Binary Sequence Alignment Map; FoGM: Future of Genomic Medicine; HLA: human leukocyte antigen; SNP: single nucleotide polymorphism.

# Competing interests

The author declares that they have no competing interests.

abstract: # Background

Problems of quality and safety persist in health systems worldwide. We conducted a large research programme to examine culture and behaviour in the English National Health Service (NHS).

# Methods

Mixed-methods study involving collection and triangulation of data from multiple sources, including interviews, surveys, ethnographic case studies, board minutes and publicly available datasets. We narratively synthesised data across the studies to produce a holistic picture and in this paper present a high-level summary.

# Results

We found an almost universal desire to provide the best quality of care. We identified many 'bright spots' of excellent caring and practice and high-quality innovation across the NHS, but also considerable inconsistency. Consistent achievement of high-quality care was challenged by unclear goals, overlapping priorities that distracted attention, and compliance-oriented bureaucratised management. The institutional and regulatory environment was populated by multiple external bodies serving different but overlapping functions. Some organisations found it difficult to obtain valid insights into the quality of the care they provided. Poor organisational and information systems sometimes left staff struggling to deliver care effectively and disempowered them from initiating improvement. Good staff support and management were also highly variable, though they were fundamental to culture and were directly related to patient experience, safety and quality of care.

# Conclusions

Our results highlight the importance of clear, challenging goals for high-quality care. 
Organisations need to put the patient at the centre of all they do, get smart intelligence, focus on improving organisational systems, and nurture caring cultures by ensuring that staff feel valued, respected, engaged and supported.
author: Mary Dixon-Woods; Richard Baker; Kathryn Charles; Jeremy Dawson; Gabi Jerzembek; Graham Martin; Imelda McCarthy; Lorna McKee; Joel Minion; Piotr Ozieranski; Janet Willars; Patricia Wilkie; Michael West
correspondence: Professor Mary Dixon-Woods, Department of Health Sciences, University of Leicester, 22–28 Princess Road West, Leicester LE1 6TP, UK
date: 2014-02
institute: 1Department of Health Sciences, University of Leicester, Leicester, UK; 2Imperial College Centre for Patient Safety and Service Quality (CPSSQ), London, UK; 3Institute of Work Psychology and School of Health and Related Research, University of Sheffield, Sheffield, UK; 4Aston Business School, Aston University, Birmingham, UK; 5Health Services Research Unit, University of Aberdeen, Aberdeen, UK; 6Department of Social and Policy Sciences, University of Bath, Bath, UK; 7National Association for Patient Participation, Surrey, UK; 8Lancaster University Management School, Lancaster, UK
references:
title: Culture and behaviour in the English National Health Service: overview of lessons from a large multimethod study

# Introduction

A commitment to delivering high-quality, safe healthcare has been a policy goal of governments worldwide for more than a decade, but progress in delivering on these aspirations has been modest:1 patients everywhere continue to suffer avoidable harm and substandard care.2 3 England's National Health Service (NHS) has not been immune to these problems. Despite some encouraging evidence of improvement in quality and safety,4 5 large and inexplicable variations in quality of care are evident across multiple domains and sectors of healthcare, from primary through to community and secondary care.6 7 England has also seen a number of high-profile scandals involving egregious failings in the quality and safety of individual providers. These include the case of Mid Staffordshire NHS Foundation Trust,8 the subject of a recently published public inquiry by Sir Robert Francis into how catastrophic failings in the quality and safety of care went undetected and uncorrected.9

Francis identified the causes of organisational degradation at Mid Staffordshire as systemic; he saw the underlying faults as institutional and cultural in character. He found significant weaknesses in NHS systems for oversight, accountability and influence for patient safety and quality of care. Central to his analysis was evidence of a large-scale failure of control and leadership at multiple levels, from what social scientists term the 'blunt end' of the system where decisions, policies, rules, regulations, resources and incentives are generated,10 through to the 'sharp end', often known as the 'frontline', where care is provided to patients. The distinction between the blunt end and the sharp end is of course a heuristic one; many within healthcare organisations function in hybrid positions as managers and practitioners. Nonetheless, it is useful to recognise how the blunt end, by shaping the environment where care is delivered, may create the 'latent conditions'11 that increase the risks of failure at the sharp end, but may equally generate organisational contexts that are conducive to providing high-quality care. 
Such contexts include culture: Francis blamed an 'insidious negative culture involving a tolerance of poor standards and a disengagement from managerial and leadership responsibilities'.8 Culture is, of course, a term that is widely used but notoriously escapes consensual definition.12 Many definitions of culture (including Schein's13 influential approach) nonetheless have in common an emphasis on the shared basic assumptions, norms, and values and repeated behaviours of particular groups into which new members are socialised, to the extent that culture becomes 'the way things are done around here'.

The findings of the Francis inquiry are depressingly familiar. England is not alone in experiencing organisational crises in healthcare; examples of failures in healthcare systems have occurred as far apart as New Zealand, the USA and the Netherlands. Several demonstrate precisely the same features as Mid Staffordshire, including long incubation periods during which warning signs were discounted, poor management systems, failure to respond to patient concerns, cultures of secrecy and protectionism, fragmentation of knowledge about problems and responsibility for addressing them, and cultures of denial of uncomfortable information.14 An important question thus concerns the extent to which the features of the Mid Staffordshire case might be symptoms of more widespread pathologies, given that other organisations in the NHS are exposed to the same institutional and regulatory environment. In this article, we offer lessons from a large multimethod research programme on culture and behaviour related to quality and safety in the NHS.

The research programme covered a critical period between 2010, following the initial inquiry into Mid Staffordshire15 and the White Paper on the NHS,16 and 2012, when the Health and Social Care Act was passed. The programme involved a large number of substudies using different methods to seek evidence from staff and patients throughout the English NHS, from large subsamples of NHS organisations, strategic level stakeholders, teams, and patient and carer organisations, and from detailed case studies. It was thus able to provide graduated levels of focus and multiple lenses. Each of the individual substudies will be reported separately, but there is considerable value in bringing the learning from them together holistically. In this article, we provide a synthesis across the studies to draw out high-level learning about culture and behaviour in NHS organisations; what influences culture and behaviour; and what needs to change to give effect to the vision of a safe, compassionate service in which patients and their families could have trust and confidence.

# Methods

We conducted a large, mixed-method research programme involving seven separate substudies (table 1). The programme received ethical approval from an NHS Research Ethics Committee. 
In summary, primary data were drawn from:\n\n- 107 interviews with key, senior level stakeholders from across the NHS and beyond;\n\n- 197 interviews from the 'blunt end' (executive and board level) of NHS primary care and acute organisations through to the 'sharp end' (frontline clinicians) where staff care for patients;\n\n- over 650\u2005h of ethnographic observation in hospital wards, primary care practices, and accident and emergency units;\n\n- 715 survey responses from patient and carer organisations;\n\n- two focus groups and 10 interviews with patient and carer organisations;\n\n- team process and performance data from 621 clinical teams, drawn from the acute, ambulance, mental health, primary care and community trust sectors;\n\n- 793 sets of minutes from the meetings of 71 NHS trust boards from multiple sectors over an 18-month period, including detailed analysis of eight boards' minutes.\n\nSummary of elements of the research programme\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
| Study element | Participants and scheduling | Setting | Focus of research | Analytic approach |
|:---|:---|:---|:---|:---|
| 1. Stakeholder interviews | 107 semi-structured telephone interviews with those closely involved in quality and safety | Acute trusts, ambulance trusts, mental health trusts, community trusts, foundation trusts, primary care trusts, strategic health authorities, general practices and healthcare commissioning organisations | Understanding of vision of high-quality and safe care; what is required to make it happen; theories of change; plans to implement quality and safety improvement, enhance leadership and promote staff engagement; views on what quality improvement means, how it could best be secured, and obstacles | Analysis based on constant comparative method; use of QSR NVivo 8 software |
| 2. Ethnographic case studies: observations and interviews | Comparative case studies across seven purposively chosen cases; 650 h of observation; 197 semi-structured interviews with executive and board-level staff and frontline staff | Four hospital trusts; a quality improvement collaborative; a large-scale quality improvement programme involving dozens of organisations; one primary care provider involving a chain of practices | Assessing culture and behaviour in relation to quality, staff engagement with quality, leadership for quality, quality improvement, practical actions for promoting cultures of high-quality care | Analysis based on constant comparative method; coding within and across cases, systematically searching for where clusters of codes formed a pattern; combining data from interviews across cases and stakeholders to form a single dataset |
| 3a. Patient and public involvement: survey | 715 survey responses; cross-sectional | Patient participation groups | The survey consisted of 14 statements about patient experience, with an open text box provided for each statement | Quantitative analysis, largely descriptive; open-ended responses subject to content analysis to derive themes inductively |
| 3b. Patient and public involvement: focus groups and interviews | Two focus groups and 10 interviews | Patient and carer organisations | Interpreting the findings of the survey; assessing views on obstacles to delivering improved quality and safety and greater accountability in the NHS | Qualitative analysis of key themes |
| 4a. NHS staff and patient surveys: patient satisfaction survey data | 165 acute trusts; data from 2007, 2009 and 2011 | Acute trusts | Patient satisfaction data from the National Acute Inpatient Survey, using patients' overall ratings of care | Descriptive statistics and paired sample t tests |
| 4b. NHS staff and patient surveys: national staff survey data | 309 NHS trusts from the 2007, 2009 and 2011 national staff surveys | Primary care, ambulance, acute care and mental health trusts | Staff engagement, organisational climate, job satisfaction, manager support, job design, errors and reporting, work pressure, bullying, harassment and abuse, team working, training, appraisal, stress | Descriptive statistics and paired sample t tests |
| 4c. NHS staff and patient surveys: outcome measures | 2005–2009 | Primary care, ambulance, acute care and mental health trusts | Patient mortality (acute sector only; hospital standardised mortality ratio); quality of services and use of resources (Annual Health Check ratings by the Healthcare Commission between 2005/2006 and 2008/2009); infection rates (MRSA) per 10,000 bed days; staff absenteeism; staff turnover | Detailed correlation analysis between staff survey and inpatient survey; multiple and multilevel regression analysis, using HR practice variables to predict engagement; regression and ordinal logistic regression analysis to predict patient satisfaction, patient mortality, staff absenteeism, staff turnover, infection rates, and Annual Health Check ratings, controlling for trust type, size and location; latent growth curve modelling to predict outcomes |
| 5. Clinical teams functioning, effectiveness and innovation | 621 teams (4604 responses) completing the Aston Team Performance Inventory; cross-sectional data, with data on team changes collected from 388 teams (1299 individuals) 3 months later; team performance data from team leaders/external raters | 51 trusts (13 acute, 17 mental health, 10 ambulance and 11 primary care trusts) | Team functioning: task design, team effort and skills, organisational support, resources, objectives, participation, creativity, conflict, reflexivity, task focus, leadership, satisfaction, attachment, effectiveness, inter-team relationships, innovation; leaders'/external raters' evaluations of effectiveness; innovations introduced by teams; sources of frustration and resilience | Descriptive analysis, ANOVA, regression and relative importance analysis; analysis and ratings from domain-relevant experts; open-ended responses subject to content analysis to derive themes |
| 6a. Objectives and team working of trust boards | 34 boards (306 individuals); administered the processes section of the Aston Team Performance Inventory; details of board objectives | Primary care, ambulance, secondary care and mental health trusts | Team processes and content: objectives, participation, reflexivity, task focus, (lack of) team conflict, creativity and innovation; clarity and challenge of board objectives | Descriptive analysis, regression and relative importance analysis; analysis and ratings from domain-relevant experts |
| 6b. Trust board innovation | 71 NHS trust boards; 793 sets of minutes, covering 18 months of board meetings | Primary care, ambulance, secondary care and mental health trusts | Innovations introduced by boards and domain of focus (e.g. productivity, targets, organisational effectiveness, quality, safety, patient complaints, clinical effectiveness) | Analysis and ratings from domain-relevant experts |
| 6c. Quality and safety in trust boards | Detailed analysis of minutes for eight boards | Primary care, ambulance, secondary care and mental health trusts | Board discussions of quality and safety | Ethnographic content analysis and summative analysis |

ANOVA, analysis of variance; HR, human resources; MRSA, methicillin-resistant *Staphylococcus aureus*; NHS, National Health Service.

We did not use a formal protocol for integrating the findings across these studies,17 instead deploying a more interpretive, narrative approach.18 We engaged in extensive discussions as a team, and identified points of convergence and updated our analytic categories as we came closer to agreement. Given the size of our datasets, we are able to provide only very limited primary data in support of our analysis in this article; our focus is on high-level messages. Further details of the methods and the data are available in a longer report.19
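
To make one of the analytic approaches in table 1 concrete, the following is a minimal sketch of a paired sample t test of the kind used to compare trust-level survey scores across years (see tables 2 and 3 below). The data here are simulated and the use of Python's scipy is our illustrative choice; neither reflects the programme's actual datasets or analysis software.

```python
# Minimal sketch (simulated data, not the study's own): a paired sample
# t test comparing trust-level survey scores across two survey years.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n_trusts = 309  # number of trusts with data in all years

# Hypothetical mean scores on a 1-5 agreement scale for each trust.
scores_2007 = rng.normal(loc=3.22, scale=0.18, size=n_trusts)
scores_2009 = scores_2007 + rng.normal(loc=0.14, scale=0.10, size=n_trusts)

# Paired test: each trust is compared with itself across survey years.
t_stat, p_value = stats.ttest_rel(scores_2009, scores_2007)
print(f"mean change = {np.mean(scores_2009 - scores_2007):+.2f}, "
      f"t = {t_stat:.2f}, p = {p_value:.3g}")
```

As the footnotes to tables 2 and 3 explain, a mean change of +0.10 on such a scale is equivalent to roughly 10% of respondents moving up one response category.
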
They reported that targets, standards, incentives and measures seemed to crowd in from multiple external sources; that the same information was required many times in different formats; and that answering to so many masters and producing data for so many external audiences was costly and distracting. The proliferation of externally set priorities and the number of different agencies and actors created what we termed 'priority thickets'—dense patches of overlapping or disjointed goals that commanded very substantial attention and resources, but did not necessarily provide clear direction or facilitate the development of clear goals, internally coherent visions or strategies linked to local priorities.

Faced with so many competing demands, some organisations tended to revert to a highly bureaucratised form of management, characterised by a proliferation of rules, procedures and forms corresponding to externally imposed demands. Many of these seemed to be motivated mostly by a need to make displays of compliance,20 rather than by genuine efforts to make systems safer or of better quality. Much of this activity could be characterised as defensive and reactive. It was a source of frustration throughout organisations; frontline teams complained of 'blanket' policies which were seen as 'very prescriptive and not concentrated on clinical work'.

We also found considerable variability in how far organisations succeeded in making their aspirations for high-quality care real: what we termed 'bright spots' and 'dark spots' were both evident, even within the same organisations. Bright spots included teams and individuals who demonstrated caring, compassion, cooperation and civility, and a commitment to learning and innovation. Direct observations found that in many settings patients were often treated with kindness and respect, systems functioned well, and staff were busy but knew what they were doing and why. Compliance with many standards of good practice, such as hygiene and equipment counting, was observed to be very good in many cases.

Though much care was of such high quality as to be inspiring, substandard care or 'dark spots' were also evident. Dark spots were found where staff were challenged to provide quality care, were harried or distracted, or were preoccupied with bureaucracy. Across our interviews, surveys and observations we found evidence of staff and patient concern about variability in quality of care, and a lack of confidence that care would be reliably good. Interviews and surveys with patient and carer groups suggested that patients and their carers were often concerned about quality and safety. Vulnerable patients, including older patients, young patients or those who lacked the ability to 'speak up', were reported to be at risk of being left to 'fend for themselves' or 'being forgotten'. Our observations in clinical areas and our interviews confirmed that inconsistency was a feature of many settings. For example, many staff spoke to patients politely and with kindness, but some others were brusque, impatient or discourteous. Some senior clinical nursing staff highlighted their concern at what they saw as a tendency towards task-focused rather than person-centred care.

Further evidence of the challenges of realising a vision of consistently high-quality, safe care came from our analysis of the NHS Staff Survey and inpatient survey data over the period 2007–2011 (tables 2 and 3).
This suggested that there had been improvements in scores relating to quality and safety reported by patients and staff nationally between 2007 and 2009, but subsequently these improvements stalled or went into reverse. Some of the plateauing may reflect a natural maximum level being reached. For example, the percentage of staff receiving health and safety training increased from 2007 to 2009, and appeared to remain relatively constant in 2011. Likewise, levels of job satisfaction increased from 2007 to a moderately high level in 2009, and then stayed approximately the same in 2011. However, measures on the staff survey relating to error and incident reporting, blame cultures and improvements following incidents, where there was headroom for improvement, appeared to have shown only very modest gains. The number of staff working paid extra hours decreased consistently over the period, but after 2009 the number working unpaid extra hours increased sharply. The percentage of staff receiving training in infection control-related issues increased from 2007 to 2009, but fell in 2011.

Changes in the National Staff Survey 2007–2011 (NHS trusts in England)

| | 2007 | 2009 | 2011 | Change 2007–2009 | Change 2009–2011 | Standard deviation (2007) |
|:---|:--:|:--:|:--:|:--:|:--:|:--:|
| 'I have adequate materials, supplies and equipment to do my work' | 3.22 | 3.36 | 3.38 | +0.14\*\* | +0.02\* | 0.18 |
| 'There are enough staff at this Trust for me to do my job properly' | 2.61 | 2.78 | 2.72 | +0.17\*\* | −0.06\*\* | 0.17 |
| 'I do not have time to carry out all my work' | 3.30 | 3.26 | 3.25 | −0.04\*\* | −0.01 | 0.21 |

Results are based on the 309 NHS trusts in England with data from all 3 years shown. p values are based on paired sample t tests. Responses were on a 1–5 scale with 5 indicating greater agreement with statements; an increase of 0.10 is equivalent to 10% of respondents moving up one category of response, for example, from 'neither agree nor disagree' to 'agree'.

\*p\<0.05; \*\*p\<0.01.

NHS, National Health Service.

Changes in the National Staff Survey and Acute Inpatient Survey 2007–2011 (NHS trusts in England)

| | 2007 | 2009 | 2011 | Change 2007–2009 | Change 2009–2011 | Standard deviation (2007) |
|:---|:--:|:--:|:--:|:--:|:--:|:--:|
| Staff survey | | | | | | |
| 'My trust encourages us to report errors, near misses or incidents' | 3.84 | 3.93 | 3.94 | +0.09\*\* | +0.01\* | 0.11 |
| 'My trust treats reports of errors, near misses or incidents confidentially' | 3.55 | 3.63 | 3.66 | +0.08\*\* | +0.03\*\* | 0.10 |
| 'My trust blames or punishes people who are involved in errors, near misses or incidents' | 2.75 | 2.67 | 2.68 | −0.08\*\* | +0.01 | 0.12 |
| 'When errors, near misses or incidents are reported, my trust takes action to ensure that they do not happen again' | 3.47 | 3.54 | 3.57 | +0.07\*\* | +0.03\*\* | 0.13 |
| Acute inpatient survey | | | | | | |
| In your opinion, how clean was the hospital room or ward that you were in? | 3.45 | 3.60 | 3.63 | +0.15\*\* | +0.03\*\* | 0.13 |
| As far as you know, did doctors wash or clean their hands between touching patients? (% of positive responses) | 53 | 59 | 57 | +6\*\* | −2\* | 6 |

Staff survey results are based on the 309 NHS trusts in England with data from all 3 years shown. p values are based on paired sample t tests.
Staff survey responses were on a 1–5 scale with 5 indicating greater agreement with statements; an increase of 0.10 is equivalent to 10% of respondents moving up one category of response, for example, from 'neither agree nor disagree' to 'agree'. Inpatient survey results are based on the 157 NHS acute trusts in England with data from all 3 years shown. Inpatient survey responses were on a 1–4 scale, with 4 indicating greater agreement with statements.

\*p\<0.05; \*\*p\<0.01.

NHS, National Health Service.
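To make the statistics behind these tables concrete, the following is a minimal sketch of a paired sample t test comparing trust-level scores between years, together with the footnotes' rule of thumb that a +0.10 change in a mean corresponds to roughly 10% of respondents moving up one response category. All numbers are hypothetical and the sketch assumes the scipy library; it is not the analysis code used in the programme.

```python
from scipy.stats import ttest_rel

# Hypothetical trust-level mean scores (1-5 scale) for one survey item,
# paired by trust across two survey years.
scores_2007 = [3.1, 3.3, 3.2, 3.4, 3.0, 3.3]
scores_2009 = [3.3, 3.4, 3.3, 3.5, 3.2, 3.4]

# Paired sample t test across trusts (each trust contributes one pair).
t_stat, p_value = ttest_rel(scores_2009, scores_2007)
mean_change = sum(b - a for a, b in zip(scores_2007, scores_2009)) / len(scores_2007)
print(f"mean change = {mean_change:+.2f}, p = {p_value:.3f}")

# Rule of thumb: if a fraction f of respondents each move up exactly one
# category on the 1-5 scale, the mean increases by f * 1, so a +0.10 shift
# is equivalent to 10% of respondents moving up one category.
f = 0.10
print(f"fraction moving up one category: {f:.0%} -> mean shift {f:+.2f}")
```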
## Variability in intelligence

A major challenge to achieving goals relating to quality and safety was that of ensuring that high-quality intelligence was available to organisations, teams, and individuals about how well they were doing and where the deficits and risks in organisational systems lay. The NHS organisations that we studied were putting considerable time, effort and resources into data collection and monitoring systems. They typically used a combination of routinely collected data, specific data collection initiatives, and sporadic sources such as spot checks and audits. To a varying extent they also drew on feedback provided by clinical staff and patients as a means of assessing trends. However, the degree to which data collection efforts translated into actionable knowledge, and then into effective organisational responses, differed markedly between organisations.

Some behaviours in relation to data gathering might be described as 'problem-sensing'; other, less positive behaviours were 'comfort-seeking'. Problem-sensing involved actively seeking out weaknesses in organisational systems, and it made use of multiple sources of data—not just mandated measures, but also softer intelligence. Soft intelligence could be gathered in many ways, including active listening to patients and staff; informal, unannounced visits to clinical areas; and techniques such as 'mystery shoppers', shadowing of staff, and swapping roles for a short period. While sometimes discomfiting, this less routinely gathered knowledge enabled fresh, more penetrating insights to complement quantitative data. Senior teams displaying problem-sensing behaviours tended to be cautious about being self-congratulatory; perhaps more importantly, when they did uncover problems, they often used strategies that went beyond merely sanctioning staff at the sharp end, making more holistic efforts to strengthen their organisations and teams.

Comfort-seeking behaviours are defined here as being focused on external impression management and on seeking reassurance that all was well; consequently, what was available to organisations was data, but not intelligence. Serious blind spots could arise when organisations used a very limited range of methods for gathering data, were preoccupied with demonstrating compliance with external expectations, failed to listen to negative signals from staff or lacked knowledge of the real issues at the frontline. Comfort-seeking tended to involve a preoccupation with positive news and results from staff, and could lead to concerns and critical comments being dismissed as 'whining' or disruptive behaviour. When comfort-seeking was the predominant behaviour, data collection activities were prone to being treated by sharp-end staff as wearisome and fruitless accountability exercises. Some staff reported that they felt the main purpose of much data collection was to allow individuals to be blamed if something did go wrong, not to make the system safer.

## Variability in systems

Interviews, observations and surveys showed that when staff had access to appropriate resources, perceived that staffing levels were adequate with the right skill mix, and had systems that functioned effectively, they felt that they could complete their work successfully, could explore new ways of improving quality and could develop reflective practices. This reinforced their levels of motivation and morale in a virtuous circle. But deficits in systems often obstructed and frustrated well-motivated staff in their mission to provide good care for patients. Our analysis of questionnaire reports from 621 clinical teams showed that many staff felt unable to achieve their goals for patients because of organisational factors outside their control. Observations showed that staff wasted time working with poorly designed IT systems, negotiating clinical pathways with obstructions and gaps, and battling with multiple professional groups and subsystems (e.g. pharmacy, microbiology and imaging, and many others) that did not operate in integrated ways. We also found evidence of problematic handovers between shifts, departments and teams, of team conflict, and of a diffusion of responsibility relating to particular patients. Patient and carer groups reported discontinuities in care between institutional boundaries and even within single organisations. These 'responsibility cordons' left patients variously ill-informed, distressed and disappointed, and sometimes in danger.

Staff at the sharp end were very often aware of systems problems but felt powerless to bring about change. Changes within organisations, uncertainty about priorities, poor systems, heavy workloads and staff shortages were all blamed for staff feeling they lacked support, further reducing their motivation and morale. Given that many systems required significant improvement, it was disappointing that we found a clear trend of decreasing levels of board innovation, especially in relation to quality and safety.

We defined innovation as the intentional introduction of processes and procedures, new to the unit of adoption (team or organisation) and designed to significantly benefit the unit of adoption, staff, patients or the wider public. An analysis of board minutes from 71 NHS trusts covering an 18-month period between January 2010 and June 2011 identified a total of 144 innovations that were implemented in organisations, an average of only approximately two per organisation over the period. More than half were focused on increasing productivity (73), with very few related to safety (14). The largest number of innovations (62) was identified between January and June 2010, followed by 56 innovations between July and December 2010. Only 26 innovations were identified in data covering the time between January and June 2011. Separately, analysis of 4976 responses to open-ended questions in our survey of 486 clinical teams identified 183 innovations over a 6-month period. This also suggested relatively low rates of innovation among frontline teams, though many of the solutions they did devise were ingenious and resourceful.
The largest number of frontline staff innovations was focused on enhancing quality of patient care; fewer aimed at improving administrative effectiveness, and the smallest number concerned staff wellbeing.

Many organisations were using specific quality improvement methods to achieve change, including Plan–Do–Study–Act cycles, Collaboratives, Lean, Six Sigma, and Productive Ward (an NHS programme to support ward teams in reviewing the processes and environment used to provide patient care21). Some organisations also used wider techniques to improve quality, including organisation-level campaigns. Great enthusiasm for these approaches was often reported by those leading improvement efforts, but we also sometimes observed a tendency towards uncritical or indiscriminate use, and some evidence of 'magical thinking' ('this initiative will solve many problems easily and quickly'). Frontline staff who had to implement these initiatives were often not consulted or adequately informed about their purpose and implementation, and sometimes initiatives were abandoned or forgotten after a short period of intense activity. In some cases, there was insufficient acknowledgement of the effort, expertise and investment required to make such approaches work, and substantial problems with quality of data collection and interpretation.

## Culture and behaviour

Leadership was important for setting mission, direction and tone. Our observations, interviews and surveys all emphasised the importance of high-quality management in ensuring positive, innovative and caring cultures at the sharp end of care. Some senior teams encouraged and enabled frontline teams to address challenges and to innovate, but recognised that, along with demanding personal accountability from staff, they also needed to fix systems problems that prevented staff from functioning well. A strong focus by executive and board teams on their own role in identifying and addressing systems problems was powerful in supporting cultural change that delivered benefits for patients, and our observations and interviews identified many examples of impressive gains being made by the sharp and blunt ends working together around unifying goals.

Nevertheless, an important consequence of the failure to clarify goals, to gather appropriate intelligence or to address systems deficits was the existence of frequent misalignments between the ways the blunt end and the sharp end of organisations conceived of quality and safety problems and their solutions. For sharp-end staff, threats to safety and quality were identified as weaknesses in systems, failures of reliability, suboptimal staffing, inadequate resources and poor leadership. Lack of support, appreciation and respect, and not being consulted and listened to were seen as endemic problems by staff in some organisations. In contrast, some senior managers—particularly those engaged in comfort-seeking—tended to see frontline staff behaviour and culture as the cause of quality problems. In consequence, those at the blunt and those at the sharp end often did not agree on the causes of variation in quality and safety and, therefore, on how they should be addressed.

We also found substantial variation in the quality of management. Our analyses of NSS data showed that hospital standardised mortality ratios were inversely associated with positive and supportive organisational climates.
Higher levels of staff engagement and health and wellbeing were associated with lower levels of mortality, as were staff reporting support from line managers, well-structured appraisals (e.g. agreeing objectives, ensuring the individual feels valued, respected and supported), and opportunities to influence and contribute to improvements at work. NSS data also showed that staff perceptions of the supportiveness of their immediate managers, the extent of positive feeling among staff, staff satisfaction and staff commitment were associated with other important outcomes, including patient satisfaction. In places where staff reported high work pressure, patients on the national surveys also reported too few nurses, insufficient support, and problems with information, privacy and respect. In trusts with poor staff health and wellbeing, high injury rates, and a high level of staff intention to quit their jobs, patients reported that they were generally less satisfied, and Care Quality Commission ratings described poorer care and poorer use of resources. These findings were consistent across trust types (primary care, ambulance, mental health, community and acute).

Also concerning was evidence that though team-working seemed well established and widespread on the surface, there was a surprising lack of clarity about team purpose, objectives, membership, leadership and performance among many teams. Our survey of 621 clinical teams demonstrated that team inputs and team processes were significantly associated with the effective provision of good-quality care, but senior managers were sometimes unable to identify teams and team leaders. When team leaders were identified, they were often confused about who their team members were. Team members themselves had low agreement about who was part of their team. Factors associated with successful teams were the effort and skills of team members, the resources made available and good processes. Clarity and agreement about team objectives were key to clinical team effectiveness, along with a participative approach to decision-making that engaged all team members. Teams who regularly took time out to reflect on their objectives, how they were going about achieving these and how their performance needed to change were particularly likely to be effective and innovative.

# Discussion

This large mixed-method study identified many 'bright spots' of excellent caring and practice and high-quality innovation across the NHS, but also considerable inconsistency. Though Mid Staffordshire may have been one particularly 'dark spot' in the NHS, organisations throughout the NHS are likely to have at least some shadows: there was little confidence that care could be relied upon to be good at all times in all parts of organisations, and we found evidence of structural and cultural threats to quality and safety. Our analysis points to how things may improve. First, clear and explicit goals that are coherent from ward to Whitehall are essential. Second, organisations need intelligence: they need to know how well they are really doing as an organisation, and where they need to improve. This means actively seeking uncomfortable and challenging information from patients and staff, rather than relying solely on formal data collection against narrow performance indicators that may not give a fully rounded picture of quality of care. Third, organisations must constantly review, strengthen and improve their systems.
System improvement and strengthening may be needed at many different levels: smoothing clinical pathways, improving communication and teamwork, ensuring that clinical areas are adequately staffed with the right mix of skills, and providing for personal development, equipment standardisation and training. Organisations also need to focus on developing cultures that are person-centred—not just task-focused—by valuing and building on the excellent care and commitment delivered by many staff throughout the NHS. This involves modelling and reinforcing values and behaviours that underpin high-quality care, patient safety and positive patient experience from the blunt end to the sharp end of the whole system.

Our work involving a very large number of organisations confirms that achieving quality and safety in NHS organisations requires a robust strategy and unifying vision. National leadership sets the tone, signals importance, legitimises, and creates accountability mechanisms. Yet the Francis public inquiry showed that a major problem for Mid Staffordshire was the large number of different agencies and bodies with a say in the NHS. This contributed to fragmentation, multiple competing pressures, ambiguity and diffusion of responsibility. Our work similarly demonstrates that proliferation of external agencies and expectations creates conflicts, distraction and confusion from the blunt to the sharp end of organisations about where resources and attention should be directed. Where incentives and external expectations conflict, compete or fail to cohere, the ability of organisations to set themselves clear, internally valued goals for achieving their aspirations is weakened.22

In a distributed and complex system such as the NHS, failures are least likely when the goals are clear and uniting, and when appropriate, sensitively designed regimes of control and support are found at every level: from policymaking, through the layers of formal regulatory systems, the institutional environment, individual organisations, teams and practitioners, through to patients' experiences.23 Coherence of national direction is therefore essential to avoid dispersing responsibility and accountability, and creating confusing messages and signals.24 As new bodies move forward, including NHS England and the Clinical Commissioning Groups, it is important that they avoid creating further competing priorities, and instead ensure focus and coherence.

National leadership needs to be matched by high-quality leadership across multiple organisational levels, underpinned by clear, patient-focused goals and objectives. The role of organisational boards in securing the quality and safety of health services has become an increasing focus of academic and policy interest,25 26 not least because of evidence of the link between leadership from the top and the priority and resources given to quality25 and clinician engagement.27 But we found worrying evidence of NHS trust boards failing to set clear goals for themselves as boards and for their organisations. Goals do need to be set, and they should be limited in number (to identify priorities while avoiding the creation of priority thickets) and known not only by all board members but (if appropriate) more widely within their trusts. Goals should be shaped by the need to promote quality and safety, to ensure sound financial performance, and to value dignity and respect for patients.
They should provide a framework for objectives at all levels of trusts, from senior management to clinical teams at the sharp end. They should be framed to encourage innovation at all levels, and the quality and safety of patient care must be overriding. Not all innovations need to be grand and overarching: fixing (apparently) small problems may result in major gains.28

The Francis public inquiry showed that discounting of warning signs of deterioration was a key feature of board and executive behaviour at Mid Staffordshire. Those in senior positions appear to have developed 'blindsight'—a way of not seeing what was going wrong. Our work confirms the importance of high-quality intelligence (not just data) and of making that intelligence actionable. But we found sobering evidence that NHS organisations are not always smart with intelligence, and need to orient themselves more towards problem-sensing than comfort-seeking. At the national level, care needs to be taken to ensure that the number of measures organisations are expected to report externally is well managed,29 and that measures are aligned with local priorities, avoid imposing excessive burdens, generate accurate intelligence,30 and, most of all, are useful for informing improvement locally. Thus the right intelligence needs to be gathered, interpreted correctly and fed back clearly to staff at the sharp end of care, so that they can consolidate and improve their performance.31

Organisations need to be especially alert to the possibility of blind spots where they are unaware of problems. They should use multiple strategies to generate intelligence and undertake self-assessment32 of local culture and behaviours—not just rely on mandated measures—and use a range of techniques for hearing the patient's voice and the voice and insights of those at the sharp end of care. Consistent with Francis' findings, good management is as important as good leadership in our analysis: the wellbeing of staff is closely linked to the wellbeing of patients, and staff engagement is a key predictor of a wide range of outcomes in NHS trusts. Achieving high levels of engagement is only possible in cultures that are generally positive, when staff feel valued, respected and supported, and when relationships are good between managers, staff, teams and departments and across institutional boundaries. Staff experience frustration and conflict when asked to work in systems that do not effectively serve them or the patients they care for; these system defects include staff shortages or an inappropriate skill mix for the needs of specific clinical areas. Our analysis suggests that improving culture, behaviour and systems requires system improvement and better communication between the blunt end and the sharp end. This needs to be sustained, intense, mutually respectful and focused on achieving a shared understanding of quality problems and joint working to put them right. Trusts can develop these cultures by using specific strategies (box 1), while recognising the complexities of trying purposefully to engineer culture.12

###### Strategies for creating positive cultures

Senior leaders should:

1. Continually reinforce an inspiring vision of the work of their organisations

2. Promote staff health and wellbeing

3. Listen to staff and encourage them to be involved in decision making, problem solving and innovation at all levels

4. Provide staff with helpful feedback on how they are doing and celebrate good performance
5. Take effective, supportive action to address system problems and other challenges when improvement is needed

6. Develop and model excellent teamwork

7. Make sure that staff feel safe, supported, respected and valued at work.35

This article has some limitations. In the available space, we have not been able to provide full details of methods or data from this unusually large research programme. Instead we have sought to provide an overview of the key findings of the component studies. Our synthesis of findings was interpretive and narrative, and did not use a formal protocol. Others might reach somewhat dissimilar conclusions or interpretations of our data.33 However, we believe that our careful scrutiny of the data, extensive discussions, and detailed analysis of themes have enabled us to produce a powerful, robust and rich picture. Future work should assess the generalisability of these findings in other contexts, including the other countries of the UK.

# Conclusions

This very large-scale research programme suggests that there is room for improvement in the quality and safety of care offered by the NHS, and that this improvement can build on the progress already made. Trusts need continually to refresh, reinforce and model an inspiring vision that keeps the patient at the centre. It is essential to commit to an ethic of learning and honesty,34 to work continually to improve organisational systems, and to nurture the core values of compassion, patient dignity and patient safety through high-quality leadership. This implies equal attention to systems, cultures and behaviours: setting coherent and challenging goals and monitoring progress towards them; empowering staff to provide high-quality care and providing them with the means to achieve this through routine practice and innovation; and exemplifying and encouraging sound behaviours.

We thank all of the many organisations and individuals who participated in this research programme for their generosity and support. We thank our collaborators and the advisory group for the programme.

# References

abstract: CRISPR (Clustered Regularly Interspaced Short Palindromic Repeats) loci, together with *cas* (CRISPR-associated) genes, form the CRISPR/Cas adaptive immune system, a primary defense strategy that eubacteria and archaea mobilize against foreign nucleic acids, including phages and conjugative plasmids. Short spacer sequences separated by the repeats are derived from foreign DNA and direct interference against future infections.
The availability of hundreds of shotgun metagenomic datasets from the Human Microbiome Project (HMP) enables us to explore the distribution and diversity of known CRISPRs in human-associated microbial communities and to discover new CRISPRs. We propose a targeted assembly strategy to reconstruct CRISPR arrays, which whole-metagenome assemblies fail to identify. For each known CRISPR type (identified from reference genomes), we use its direct repeat consensus sequence to recruit reads from each HMP dataset and then assemble the recruited reads into CRISPR loci; the unique spacer sequences can then be extracted for analysis. We also identified novel CRISPRs or new CRISPR variants in contigs from whole-metagenome assemblies and used targeted assembly to more comprehensively identify these CRISPRs across samples. We observed that the distributions of CRISPRs (including 64 known and 86 novel ones) are largely body-site specific. We provide detailed analysis of several CRISPR loci, including novel CRISPRs. For example, known streptococcal CRISPRs were identified in most oral microbiomes, totaling ∼8,000 unique spacers: samples resampled from the same individual and oral site shared the most spacers; different oral sites from the same individual shared significantly fewer, while different individuals had almost no common spacers, indicating the impact of subtle niche differences on the evolution of CRISPR defenses. We further demonstrate potential applications of CRISPRs to the tracing of rare species and the virus exposure of individuals. This work indicates the importance of effective identification and characterization of CRISPR loci to the study of the dynamic ecology of microbiomes.
author: Mina Rho; Yu-Wei Wu; Haixu Tang; Thomas G. Doak; Yuzhen Ye\* E-mail: [^1]
date: 2012-06
institute: 1School of Informatics and Computing, Indiana University, Bloomington, Indiana, United States of America; 2Center for Genomics and Bioinformatics, Indiana University, Bloomington, Indiana, United States of America; 3Department of Biology, Indiana University, Bloomington, Indiana, United States of America; University of Toronto, Canada
references:
title: Diverse CRISPRs Evolving in Human Microbiomes

# Introduction

CRISPRs, together with *cas* genes (CRISPR-associated genes), provide acquired resistance against viruses and conjugative plasmids \[1\], \[2\], and are found in most archaeal (∼90%) and bacterial (∼40%) genomes \[3\], \[4\], \[5\]. CRISPR arrays consist of 24–47 bp direct repeats, separated by unique sequences (spacers) that are acquired from viral or plasmid genomes \[6\]. Even though some CRISPR arrays may contain hundreds of spacers (an extreme case is the CRISPR array in the *Haliangium ochraceum* DSM 14365 genome, which has 588 copies of its repeat), they tend to be much smaller, generally with dozens of spacers. The repeat sequences of some CRISPRs are partially palindromic, and have stable, highly conserved RNA secondary structures, while others lack detectable structures \[7\].

CRISPR arrays are usually adjacent to *cas* genes, which encode a large and heterogeneous family of proteins with functional domains typical of nucleases, helicases, polymerases, and polynucleotide-binding proteins. CRISPR/Cas systems commonly use repeat and spacer-derived short guide CRISPR RNAs (crRNAs) to silence foreign nucleic acids in a sequence-specific manner \[8\].
CRISPR/Cas defense pathways involve several steps, including integration of viral or plasmid DNA-derived spacers into the CRISPR array, expression of short crRNAs consisting of unique single repeat-spacer units, and interference with invading foreign genomes at both the DNA and RNA levels, by mechanisms that are not yet fully understood \[8\], \[10\]. The diversity of *cas* genes suggests that multiple pathways have arisen to use the basic information contained in the repeat-spacer units in diverse defense mechanisms. The CRISPR components are evolutionarily closely linked and potentially evolve simultaneously as an intact locus—sequence analysis reveals that the direct repeats in a CRISPR locus and the linked *cas* genes co-evolve under analogous evolutionary pressures \[11\].

Previous studies have shown that CRISPR loci are very diverse and abundant in the genomes of bacteria and archaea. In addition, it has been shown that CRISPR loci with the same repeat sequence and *cas* gene set can be found in multiple bacterial species, implying horizontal gene transfer (HGT) \[12\]. Moreover, CRISPR loci can change their spacer content rapidly, as a result of interactions between viruses (or plasmids) and bacteria: several metagenomic studies investigating host-virus population dynamics have shown that CRISPR loci evolve in response to viral predation and that CRISPR spacer content and sequential order provide both historical and geographical insights \[13\], \[14\], \[15\], \[16\]—essentially, epidemiology.

As a reflection of the infectious dynamics of microbial communities, the study of CRISPRs is an essential complement to the study of the human microbiome, encompassing both disease ecology and ecological immunology \[17\]. Infectious disease works to maintain both species diversity \[18\], \[19\] and genotypic diversity \[20\] within a species, as has recently been shown for marine microbiomes \[21\], \[22\]. As such, infectious agents may be at least partially responsible for the amazing species diversity and turnover found throughout the human microbiome \[23\]. The ability of CRISPR loci to prevent plasmid spread is medically relevant, in that the exchange of conjugative elements is perhaps the dominant mechanism by which antibiotic resistance genes (notably multi-drug resistance) move within a biome, and by which pathogens acquire resistance \[24\]; CRISPR activities could be expected to retard this exchange (*e.g.* \[25\]).

CRISPR composition in human microbial communities, the relative rate of CRISPR locus change, and how CRISPR loci vary between body sites and between the microbiota of different individuals are less well studied than they are in other environments. A recent analysis of streptococcal CRISPRs from human saliva, in which CRISPR spacers and repeats were amplified from salivary DNA using the conserved streptococcal CRISPR repeat sequence for priming, revealed substantial spacer sequence diversity within and between subjects over time \[26\], which is imagined to reflect the dynamics of phage and other infectious agents in the human mouth \[2\].

The availability of more than 700 shotgun metagenomic datasets from the Human Microbiome Project (HMP) enables us to explore the distribution and diversity of many more CRISPRs, and to discover new ones, across different body sites, in a systematic manner.
We developed a targeted assembly strategy (see Figure 1) to better identify CRISPRs in shotgun metagenomic sequences, as whole-metagenome assembly failed to reconstruct many CRISPRs that otherwise could be identified. All of the programs available to date \[27\], \[28\], \[29\], \[30\] are designed to find CRISPRs in assembled contigs that are long enough to contain at least partial CRISPR loci; however, it is very difficult to assemble metagenome reads into contigs containing CRISPR loci, because of their repeated structures. We thus needed to collect sequencing reads associated with CRISPRs and assemble them specifically. For known CRISPRs (identified in reference genomes), we identified consensus sequences of CRISPR repeats, collected the reads containing these sequences, and assembled these reads into CRISPR contigs. We also identified CRISPRs from the whole-metagenome assemblies, and for the novel CRISPRs or new CRISPR variants (those not seen in the reference genomes), applied the same assembly strategy to achieve a more comprehensive identification of the novel CRISPRs across the samples. This approach allows us to study the evolution of CRISPRs in human microbiomes.

# Results

We identified and selected 64 known CRISPRs—including the streptococcal CRISPR—from complete and draft bacterial genomes and 86 novel CRISPRs from the 751 HMP whole-metagenome assemblies, using metaCRT and CRISPRAlign (see Methods). For each selected CRISPR, we then applied the targeted assembly approach (for each CRISPR, first pool the reads that contain the repeat, and then assemble the pooled reads only; see Methods for a validation of the targeted assembly approach using simulated datasets) to achieve a more comprehensive characterization of the CRISPR loci in the human microbiome shotgun datasets. Below we provide detailed analysis of the targeted assembly approach, and the resulting CRISPR loci (listed in Table 1 and Tables S1 and S2).

###### List of selected CRISPRs discussed in the paper.

| ID | Species (or HMP sample ID) | Consensus sequence of the CRISPR repeats |
|:---|:---|:---|
| **Known** | | |
| AhydrL30 | *Anaerococcus hydrogenalis* DSM 7454 (NZ_ABXA01000037) | ATTTCAATACATCTAATGTTATTAATCAAC |
| AlactL29 | *Anaerococcus lactolyticus* ATCC 51172 (NZ_ABYO01000191) | AGGATCATCCCCGCTTGTGCGGGTACAAC |
| BcoprL32 | *Bacteroides coprophilus* DSM 18228 (NZ_ACBW01000156) | GTCGCACCCTGCGTGGGTGCGTGGATTGAAAC |
| FalocL36 | *Filifactor alocis* ATCC 35896 (NZ_GG745527) | TTTGAGAGTAGTGTAATTTCATATGGTAGTCAAAC |
| GhaemL36 | *Gemella haemolysans* ATCC 10379 (EQ973306) | GTTTGAGAGATATGTAAATTTTGAATTCTACAAAAC |
| LcrisL29 | *Lactobacillus crispatus* ST1 (NC_014106) | AGGATCACCTCCACATACGTGGAGAATAC |
| LjassL36 | *Lactobacillus gasseri* JV-V03 (NZ_ACGO01000006) | GTTTTAGATGGTTGTTAGATCAATAAGGTTTAGATC |
| LjensL36 | *Lactobacillus jensenii* 115-3-CHN (NZ_GG704745) | GTTTTAGAAGGTTGTTAAATCAGTAAGTTGAAAAAC |
| Neis_t014_L28 | *Neisseria* sp. oral taxon 014 str. F0314 (NZ_GL349412) | GTTACCTGCCGCACAGGCAGCTTAGAAA |
| Neis_t014_L36 | *Neisseria* sp. oral taxon 014 str. F0314 (NZ_GL349412) | GTTGTAGCTCCCTTTCTCATTTCGCAGTGCTACAAT |
| PacneL29 | *Propionibacterium acnes* J139 (NZ_ADFS01000004) | GTATTCCCCGCCTATGCGGGGGTGAGCCC |
| PpropL29 | *Pelobacter propionicus* DSM 2379 (NC_008609) | CGGTTCATCCCCGCGCATGCGGGGAACAC |
| SmutaL36 | *Streptococcus mutans* NN2025 | GTTTTAGAGCTGTGTTGTTTCGAATGGTTCCAAAAC |
| **Novel** | | |
| SRS012279L38 | SRS012279 (dataset from a tongue dorsum sample) | TATAAAAGAAGAGAATCCAGTAGAATAAGGATTGAAAC |
| SRS018394L37 | SRS018394 (dataset from a supragingival plaque sample) | GTATTGAAGGTCATCCATTTATAACAAGGTTTAAAAC |
| SRS023604L36 | SRS023604 (dataset from a posterior fornix sample) | GTTTGAGAGTAGTGTAATTTATGAAGGTACTAAAAC |

The IDs of the CRISPRs are assigned using the following rules: 1) if a CRISPR (*e.g.*, SmutaL36) is identified from a known complete/draft genome with a species name (for SmutaL36, the genome is *Streptococcus mutans* NN2025), its ID uses five letters from the species name (*i.e.*, Smuta) followed by the length of the repeats (a length of 36 is shown as L36); 2) if a CRISPR (Neis_t014_L28) is identified from a known complete/draft genome that has only general genus information (*e.g.*, *Neisseria sp*. oral taxon 014 str. F0314), then its ID is four letters from the genus name, followed by the taxon ID, and the length of the repeats; and 3) the CRISPRs identified in the HMP datasets are named with the ID of the dataset followed by the repeat length.

## Targeted assembly improves the characterization of CRISPRs

We first asked if our targeted assembly strategy helps to identify CRISPR elements from metagenomic datasets, and found that it greatly improved detection (see comparison in Table 2). The improvements are twofold. First, the targeted assembly approach identifies known CRISPRs in more human microbiome datasets, as compared to the annotation of CRISPRs using whole-metagenome assemblies. Second, targeted assembly resulted in longer CRISPR arrays, from which we can extract many more diverse spacers for analyzing the evolution of the CRISPRs and other purposes. Here we use three examples to demonstrate the performance of the targeted assembly.

###### Comparison of CRISPR identification using whole-metagenome assembly and targeted assembly.

| CRISPR | Sample dataset | Whole-metagenome assembly: spacers in longest locus | Whole-metagenome assembly: spacers in all contigs | Reads recruited | Targeted assembly: spacers in longest locus | Targeted assembly: spacers in all contigs |
|:---|:--:|:--:|:--:|:--:|:--:|:--:|
| SmutaL36 (386 vs 38) | SRS017025 (plaque) | 1 | 1 | 1078 | 26 | 76 |
| | SRS011086 (tongue) | 1 | 2 | 4018 | 24 | 78 |
| GhaemL36 (257 vs 9) | SRS019071 (tongue) | 0 | 0 | 1718 | 47 | 21 |
| | SRS014124 (tongue) | 3 | 3 | 490 | 21 | 58 |
| SRS018394L37 (238 vs 39) | SRS049389 (tongue) | 0 | 0 | 5778 | 25 | 492 |
| | SRS049318 (plaque) | 1 | 1 | 1463 | 38 | 134 |

For each CRISPR, the numbers in parentheses are the total number of samples in which the CRISPR was identified using targeted assembly versus using whole-metagenome assembly. 'Spacers in longest locus' is the number of spacers found in the longest CRISPR locus in the given dataset; 'spacers in all contigs' is the total number of spacers found in all contigs assembled from the given dataset; 'reads recruited' is the total number of sequences that contain the repeats of the given CRISPR, *i.e.*, the reads used for targeted assembly.
See Table S1 for a comparison of all the CRISPRs studied in this paper.

The first example is the streptococcal CRISPR SmutaL36 (see Table 1), a CRISPR that is conserved in streptococcal species such as *Streptococcus mutans* \[26\]. This CRISPR was observed in only a limited number of samples (38 of the 751 datasets) when using contigs from whole-metagenome assembly, but our targeted CRISPR assembly identified instances of CRISPR SmutaL36 in ∼10 times more (386) datasets. Consistent with the distribution of *Streptococcus* across body sites, most of the 386 datasets are from oral samples: 120 of 128 supragingival plaques (94%), 128 of 135 tongue dorsum samples (95%), and 97 of 121 buccal mucosa samples (80%) (see Table 3). CRISPR SmutaL36 was found in only a small proportion of samples from other body locations, where *Streptococcus* rarely exists (*e.g.*, 4 of 148 stool samples, and none of the posterior fornix datasets). Table 2 shows the details of targeted assembly of this CRISPR in two datasets.

###### Distribution of selected CRISPRs across body sites.

| CRISPR | Anterior nares (94) | Stool (148) | Buccal mucosa (121) | Supragingival plaque (128) | Tongue dorsum (135) | Posterior fornix (61) | L-retroauricular crease (9) | R-retroauricular crease (18) |
|:---|:--:|:--:|:--:|:--:|:--:|:--:|:--:|:--:|
| SmutaL36 | 11 | 4 | 97 | 120 | 128 | 0 | 0 | 1 |
| AhydrL30 | 0 | 53 | 0 | 0 | 0 | 0 | 0 | 0 |
| BcoprL32 | 0 | 65 | 0 | 0 | 0 | 0 | 0 | 0 |
| FalocL36 | 0 | 63 | 1 | 18 | 50 | 0 | 0 | 0 |
| Neis_t014_L28 | 0 | 51 | 15 | 58 | 15 | 0 | 0 | 0 |
| Neis_t014_L36 | 0 | 0 | 37 | 66 | 82 | 0 | 0 | 0 |
| PacneL29 | 1 | 0 | 0 | 0 | 0 | 0 | 4 | 7 |

Numbers in parentheses in the header are the total number of datasets for each body site; table entries are the number of datasets in which each CRISPR was identified. Buccal mucosa, supragingival plaque and tongue dorsum are oral sites; the left (L-) and right (R-) retroauricular creases are skin sites. Note that not all body sites are listed in this table.

The other two examples are GhaemL36 and SRS018394L37 (see details in Table 2). CRISPR GhaemL36 was initially identified from the genome of *Gemella haemolysans* ATCC 10379 using metaCRT. Targeted assembly further identified instances of this CRISPR in 258 oral-associated samples. The longest contig—of 3121 bases—was assembled from the SRS019071 dataset. This CRISPR array has even more repeats (48; *i.e.*, 47 spacers) than the CRISPR array in the *Gemella haemolysans* reference genome, which has 29 repeats. CRISPR SRS018394L37 (not yet associated with a host genome) was initially identified from the whole-metagenome assembly of SRS018394, but targeted assembly revealed the presence of this CRISPR in 238 oral-associated microbiomes. The contig assembled in SRS049389 is the longest (2014 bps), containing 25 spacers.

In most of the cases we tested, targeted assembly dramatically improved the identification of CRISPRs in the HMP datasets: for 142 of the 150 CRISPRs, targeted assembly resulted in CRISPR identification in more HMP samples than did whole-metagenome assembly, and for 36 CRISPRs, targeted assembly identified instances of the corresponding CRISPR in at least 10 times more datasets (see Table S1).
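To make the read-recruitment step at the heart of targeted assembly concrete, the following is a minimal sketch of pooling, from a FASTQ file, the reads that carry a given repeat consensus on either strand. The file name, the three-mismatch tolerance, and the helper functions are illustrative assumptions rather than the pipeline used in this study; in practice the recruited reads would then be handed to an assembler.

```python
from gzip import open as gzopen

def revcomp(seq):
    """Reverse complement of a DNA sequence."""
    comp = {"A": "T", "C": "G", "G": "C", "T": "A"}
    return "".join(comp.get(base, "N") for base in reversed(seq))

def contains_repeat(read, repeat, max_mismatch=3):
    """Slide the repeat consensus along the read; report a hit if any window
    matches with at most max_mismatch mismatches (Hamming distance)."""
    k = len(repeat)
    for i in range(len(read) - k + 1):
        mismatches = sum(1 for a, b in zip(read[i:i + k], repeat) if a != b)
        if mismatches <= max_mismatch:
            return True
    return False

def recruit_reads(fastq_gz, repeat):
    """Yield (header, sequence) for reads carrying the repeat on either
    strand; the pooled reads would then be passed to an assembler."""
    rc = revcomp(repeat)
    with gzopen(fastq_gz, "rt") as fh:
        while True:
            header = fh.readline().rstrip()
            if not header:
                break
            seq = fh.readline().rstrip().upper()
            fh.readline()  # '+' separator line
            fh.readline()  # quality line
            if contains_repeat(seq, repeat) or contains_repeat(seq, rc):
                yield header, seq

if __name__ == "__main__":
    # SmutaL36 repeat consensus from Table 1; the FASTQ file name is hypothetical.
    smuta_repeat = "GTTTTAGAGCTGTGTTGTTTCGAATGGTTCCAAAAC"
    for name, seq in recruit_reads("SRS017025.fastq.gz", smuta_repeat):
        print(name, seq)
```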
These results suggest that specifically designed assembly approaches, such as the targeted assembly approach for CRISPRs presented here, are important for the characterization of functionally important repetitive elements that otherwise may be poorly assembled in a whole-metagenome assembly (which tends to be confused by repeats), and such comprehensive identification is important for deriving an unbiased distribution of these functional elements across body sites and individuals.

## Novel CRISPRs are found in human microbiome samples

In order to identify novel CRISPR loci, with which to seed further targeted assemblies, we set out to find loci based simply on the structural patterns of CRISPR loci, using the program metaCRT, which we modified from CRT (see Methods). As a result, we found, and selected for further targeted assembly, 86 different types of novel CRISPR repeats in the metagenomic samples that could not be found in reference genomes (see Methods). Table 1 lists selected examples of novel CRISPRs that we identified in HMP datasets (see Table 1 for the naming conventions). A full list of CRISPRs (including the number of CRISPR contigs assembled in each metagenomic dataset) is available as Table S1. In this section, we highlight two examples of novel CRISPRs.

CRISPR SRS012279L38 was identified from a whole-metagenome assembly contig of dataset SRS012279 (derived from a tongue dorsum sample; see Figure 2A). The identified CRISPR contig has 6 copies of a 38-bp repeat (the last copy is incomplete; see Table 1 for the consensus sequence of the repeats). *De novo* gene prediction by FragGeneScan \[31\] reveals 10 protein-coding genes in this contig, among which 9 share similarities with *cas* genes from other genomes, including *Leptotrichia buccalis* DSM 1135 (NC_013192, an anaerobic, gram-negative species that is a constituent of normal oral flora \[32\]) and *Fusobacterium mortiferum* ATCC 9817, by BLASTP search using the predicted protein sequences as queries (see Figure 2B). (By contrast, a BLASTX search of this contig against the nr database achieved annotations for only 7 genes.) In addition, similarity searches revealed a single identical copy of this repeat in the genome of *Leptotrichia buccalis* DSM 1135 (from 1166729 to 1166764; *de novo* CRISPR prediction shows that this genome has several CRISPR arrays, including an array that has 84 copies of a 29-bp repeat, but none of these CRISPRs have the same repeat sequence as SRS012279L38). These two lines of evidence (similar *cas* genes, and an identical region in the genome) suggest that the SRS012279L38 CRISPR we found in the human microbiomes could have evolved from *Leptotrichia buccalis* or a related species.

Targeted assembly of this novel CRISPR (SRS012279L38) in the HMP datasets resulted in 278 contigs from 97 datasets, confirming the presence of this CRISPR in human microbiomes. In particular, the CRISPR fragment (407 bps) identified from the whole-metagenome assembly of SRS012279 was assembled into a longer CRISPR contig (890 bps) by targeted assembly. A total of 14 unique but related repeat sequences were identified from the 278 CRISPR contigs; two of them (which differ at 3 positions) are dominant, constituting 71% of the repeats in the CRISPR contigs. Notably, all the repeats could be clustered into a single consensus sequence at an identity threshold of 88%. By contrast, the spacer sequences are very diverse across different samples: we obtained a total of 352 unique spacer sequences, which were clustered into 345 consensus sequences at an identity threshold of 90%, and among the 352 unique spacers, 114 were shared by multiple samples—a single spacer was shared by at most eight samples.
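Once CRISPR contigs are assembled, spacer extraction amounts to locating approximate copies of the repeat and taking the sequences between consecutive copies. The following is a minimal sketch of that step; the mismatch tolerance and the spacer length bounds are illustrative assumptions, not the parameters used in this study.

```python
def hamming(a, b):
    """Number of mismatches between two equal-length strings."""
    return sum(1 for x, y in zip(a, b) if x != y)

def find_repeat_starts(contig, repeat, max_mismatch=3):
    """Start positions of non-overlapping, approximate repeat copies."""
    k, starts, i = len(repeat), [], 0
    while i <= len(contig) - k:
        if hamming(contig[i:i + k], repeat) <= max_mismatch:
            starts.append(i)
            i += k  # jump past this repeat copy
        else:
            i += 1
    return starts

def extract_spacers(contig, repeat, min_len=20, max_len=60):
    """Spacers are the sequences between consecutive repeat copies; the
    length bounds guard against partial or mis-called repeat copies."""
    starts = find_repeat_starts(contig, repeat)
    spacers = []
    for s1, s2 in zip(starts, starts[1:]):
        spacer = contig[s1 + len(repeat):s2]
        if min_len <= len(spacer) <= max_len:
            spacers.append(spacer)
    return spacers

# Toy check: a contig built from three repeat copies and two 30-bp spacers.
repeat = "GTTTTAGAGCTGTGTTGTTTCGAATGGTTCCAAAAC"
toy_contig = repeat + "A" * 30 + repeat + "C" * 30 + repeat
assert extract_spacers(toy_contig, repeat) == ["A" * 30, "C" * 30]
```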
The second example is CRISPR SRS023604L36, initially identified in a whole-metagenome assembly contig of dataset SRS023604 (derived from a posterior fornix sample), which has 5 copies of a 36-bp repeat (see the consensus sequence of the CRISPR repeat in Table 1). Targeted assembly of this CRISPR across all HMP metagenomic datasets revealed further instances of this CRISPR in several other datasets, including two from stool and two from posterior fornix. Moreover, the CRISPR contig was assembled into a longer contig of 778 bps containing 12 copies of the CRISPR repeat. BLAST search of the CRISPR repeat against the nr database did not reveal any significant hits.

## Expanding the CRISPR space by human microbiomes

To investigate how much the CRISPRs identified in the HMP datasets can expand the CRISPR space (the sequence space of CRISPR repeats), we built a network of CRISPRs based on the sequence similarity between CRISPR repeats. An edge in the network between two CRISPR repeats, each represented by a node, indicates that the two repeats can be transformed from one to the other by at most 10 edit operations (substitutions, insertions, and deletions). Since it is difficult to determine the direction of CRISPR repeats \[7\] (especially for CRISPR arrays that have incomplete structures), given two repeats we calculated two edit distances—one between the two repeats, and the other between one repeat and the reverse complement of the other—and used the smaller value as the edit distance between the two repeats. The global network (Figure 3A; see Figure S1 for a version with node labels) shows that most of the novel CRISPRs identified in the human microbiomes are only remotely related to ones identified in complete (or draft) genomes. Still, there are small clusters that contain only novel HMP CRISPRs, indicating that these CRISPRs are substantially different from ones identified in the reference genomes. In Figure 3B, we have colored nodes by body site: while specific CRISPR repeats can be highly specific to body site (see below), the larger families of repeats shown in Figure 3B do not appear to cluster based on body site.

We further studied the sequence patterns of the repeats for each group, and our results show 1) distinct patterns among the groups, and 2) high conservation around the stem and start/end positions in the CRISPR repeats of each group (see sequence logos for the large groups in Figure S2). The consensuses revealed by the logos are consistent with the results of a previous study, which used a similar alignment-based approach for the classification of CRISPR repeats \[7\].
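The strand-agnostic edit distance used to draw the network edges can be written out directly: compute the Levenshtein distance between two repeats and between one repeat and the reverse complement of the other, and keep the smaller value, connecting repeats that are within 10 operations. A self-contained sketch follows; the two example repeats are taken from Table 1, and everything else is an illustrative assumption.

```python
from itertools import combinations

def edit_distance(a, b):
    """Classic Levenshtein distance (substitutions, insertions, deletions)."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

def revcomp(seq):
    comp = {"A": "T", "C": "G", "G": "C", "T": "A"}
    return "".join(comp.get(base, "N") for base in reversed(seq))

def repeat_distance(r1, r2):
    """Strand-agnostic distance: the smaller of d(r1, r2) and d(r1, revcomp(r2)),
    since the orientation of a CRISPR repeat is hard to determine."""
    return min(edit_distance(r1, r2), edit_distance(r1, revcomp(r2)))

def build_network(repeats, threshold=10):
    """Edges connect repeat consensuses within `threshold` edit operations."""
    return [(a, b) for a, b in combinations(repeats, 2)
            if repeat_distance(repeats[a], repeats[b]) <= threshold]

repeats = {  # two repeat consensuses from Table 1
    "LjassL36": "GTTTTAGATGGTTGTTAGATCAATAAGGTTTAGATC",
    "LjensL36": "GTTTTAGAAGGTTGTTAAATCAGTAAGTTGAAAAAC",
}
print(build_network(repeats))
```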
## CRISPRs have diverse distributions across human body sites and individuals

Overall, the distributions of CRISPRs are largely body-site specific (see Figure 4 and Figure S3; the name of each CRISPR and the number of samples in which it was found are listed in Table S3). For example, CRISPRs AhydrL30 and BcoprL32 are found only in stool samples (see Table 3). Exceptions include two CRISPRs that were found in a significant number of both gut- and oral-associated samples: Neis_t014_L28 was found in 51 gut samples and 92 oral-associated samples, and FalocL36, identified from *Filifactor alocis* ATCC 35896, was found in 63 gut samples and 72 oral-associated samples, including 50 tongue dorsum samples (see Table 3).

The first 50 CRISPRs shown in Figure 4 are mainly found in stool samples. AshahL36, which was initially identified from *Alistipes shahii* WAL 8301, was found in more than half of the gut-related samples (96 out of 147). On the other hand, 99 CRISPRs are mainly found in oral samples, in particular tongue dorsum, supragingival plaque, and buccal mucosa. We found 5 CRISPRs that exist in more than half of the 417 oral-associated samples: SmutaL36; KoralL32 from *Kingella oralis* ATCC 51147; Veil_sp3_1_44_L36 and Veil_sp3_1_44_L35 from *Veillonella sp.* 3_1_44; and SoralL35 from *Streptococcus oralis* ATCC 35037. Four CRISPRs (AlactL29, LjensL36, LjassL36, and LcrisL29) are found mostly in vaginal samples, and one CRISPR (PacneL29) is skin-specific, found mainly in skin samples. Below we discuss the body-site distributions of a few examples.

Neis_t014_L28 and Neis_t014_L36 were inferred from a single genome, *Neisseria sp.* oral taxon 014 str. F0314, but the two CRISPRs show distinct absence/presence profiles across body sites (see Table 3). In stool samples, only CRISPR Neis_t014_L28 was found (in 51 datasets), not Neis_t014_L36; conversely, Neis_t014_L36 is relatively more prevalent than Neis_t014_L28 in oral-associated samples. The different body-site distributions can be explained by the fact that the two CRISPRs are found in different sets of genomes (although both can exist in a common genome, *Neisseria sp.* oral taxon 014 str. F0314). Neis_t014_L36 has been identified in multiple *Neisseria* genomes, including *Neisseria meningitidis* ATCC 13091 and *Neisseria meningitidis* 8013 (so Neis_t014_L36 belongs to the Nmeni subtype among the 8 subtypes defined by Haft et al. \[33\]), as well as *Neisseria flavescens* SK114 and *Actinobacillus minor* NM305. Neis_t014_L28, however, was found only in *Neisseria sp.* oral taxon 014 str. F0314. On the other hand, even though we could not find any CRISPRs containing exactly the same repeat as Neis_t014_L28 in complete/draft genomes other than *Neisseria sp.* oral taxon 014 str. F0314, many CRISPRs, when a few mismatches are allowed, were found in diverse genomes from environmental samples (for example, *Crenothrix polyspora*, *Legionella pneumophila* 2300/99 Alcoy, and *Thioalkalivibrio sp.* K90mix).

Four CRISPRs (AlactL29, LjensL36, LjassL36, and LcrisL29) exist mostly in vaginal samples. AlactL29, initially identified from the *Anaerococcus lactolyticus* genome, was found in only 3 vaginal samples. Notably, LjensL36 was found in 28 vaginal samples (43% of the vaginal samples collected) and 1 skin sample. This observation is consistent with a previous study showing that *Lactobacillus* constitutes over 70% of all bacteria sampled from the vaginas of healthy, fertile women, with *Lactobacillus jensenii* one of the major species \[34\]. Interestingly, we found evidence of adaptation in LjensL36 spacers against *Lactobacillus* phage Lv-1 (NC_011801) (see below). LjassL36 was found in 33 vaginal samples by targeted assembly.
We confirmed by BLAST search that the same repeat is present in different *Lactobacillus* genomes, such as *Lactobacillus gasseri* and *Lactobacillus crispatus*. CRISPR LcrisL29, which was identified in the *Lactobacillus crispatus* genome, was found in 31 vaginal samples, and we found the same repeat sequence in the *Lactobacillus helveticus* genome.

PacneL29 was the only skin-specific CRISPR we found in the HMP datasets. Interestingly, instances of PacneL29 are found in *Propionibacterium acnes* HL110PA4 and *Propionibacterium acnes* J139, but not in other *P. acnes* isolates (including KPA171202, SK137, J165, and SK187). This indicates a potential application of CRISPRs in the characterization of specific strains of a species in human microbiomes.

## CRISPRs have very diverse spacers

The HMP project enables us to explore the diversity of streptococcal CRISPRs (and others) at a much broader scale (with 751 samples from 104 healthy individuals). The CRISPRs that we identified in human microbiomes exhibited substantial sequence diversity in their spacers among subjects. Targeted assembly of the streptococcal CRISPR (SmutaL36) in the HMP datasets resulted in a total of 15,662 spacers identified from 386 samples, among which 7,815 were unique (clustering of the spacers at 80% identity resulted in a non-redundant collection of 7,436 sequences). See Figure S4 for the sharing of spacers in streptococcal CRISPRs among all individuals, which shows several large clusters of spacers that are shared by multiple individuals (for clarity, we kept only spacers shared by more than eight samples in this figure). In particular, the most common spacer is shared by 25 individuals (in 32 samples).

More importantly, we could check the sharing of CRISPR spacers across different body sites and sub-body sites (*e.g.*, multiple oral sites) using the HMP datasets (Pride *et al.* examined streptococcal CRISPRs in saliva samples from 4 individuals [26]). Figure 5 shows the spacer sharing among 6 selected individuals, each of whom has multiple samples with identified streptococcal CRISPRs from multiple body sites (see Figure S5 for the spacer sharing with spacers clustered at 80% sequence identity). By examining the distribution of spacers across samples, we observed that samples re-sampled from the same individual and oral site shared the most spacers, different oral sites from the same individual shared significantly fewer, and different individuals had almost no common spacers, indicating the impact of subtle niche differences and histories on the evolution of CRISPRs. Our observation is largely consistent with the conclusion of Pride *et al.* [26]. But our study showed that different samples from the same oral site of the same person, even samples collected many months apart, can still share a significant number of spacers (*e.g.*, the supragingival plaque samples from individual 1 in visits 1 and 2, with 238 days between the two visits, and the tongue dorsum samples from individual 5 in visits 1 and 3, with 336 days between the two visits, as shown in Figure 5).
Our study also showed that although different oral sites of the same individual share some spacers, this sharing (*e.g.*, between the supragingival plaque sample and the buccal mucosa sample of individual 1) is minimal compared with the spacer sharing between samples collected in different visits but from the same oral site (*e.g.*, between the supragingival plaque samples from visits 1 and 2 of individual 1). Finally, our study shows that spacer turnover varies among individuals: of the 6 selected individuals, individual 3 shows significantly higher turnover of spacers between visits than the others.

We also checked the spacer diversity of CRISPR KoralL32, since it and its variants are among the most abundant CRISPRs. This CRISPR was assembled from 339 samples: 327 from oral sites and 2 from gut. The targeted assembly of KoralL32 found 7,282 unique spacers, among which the most commonly shared spacer is shared by 35 individuals (in 58 samples). Figure S6 shows the sharing of spacers among individuals for this CRISPR, with spacer-sharing patterns similar to those found for the streptococcal CRISPRs.

The similarity between spacers from the same individual suggests that we may still be able to trace the evolution of CRISPRs, especially in the same body site of the same individual, even though CRISPR loci tend to have extremely high turnover of their spacers.
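The spacer-sharing comparisons above reduce to set intersections over a spacer-by-sample presence map. Below is a minimal sketch under that assumption; the sample names and spacer identifiers are invented for illustration.

```python
# Sketch of pairwise spacer sharing between samples. Input is a mapping from
# sample name to its set of (pre-clustered) spacer identifiers; output is the
# number of shared spacer clusters for each pair of samples.
from itertools import combinations

def shared_spacer_counts(spacers_by_sample: dict[str, set[str]]) -> dict[tuple[str, str], int]:
    counts = {}
    for s1, s2 in combinations(sorted(spacers_by_sample), 2):
        counts[(s1, s2)] = len(spacers_by_sample[s1] & spacers_by_sample[s2])
    return counts

if __name__ == "__main__":
    demo = {
        "SupraPlaque_v1_p1": {"sp1", "sp2", "sp3"},
        "SupraPlaque_v2_p1": {"sp1", "sp2", "sp4"},   # same site, later visit
        "Buccal_v1_p1": {"sp2"},                      # different site, same individual
    }
    for pair, n in shared_spacer_counts(demo).items():
        print(pair, n)
```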
## CRISPR spacer sequences can be used to trace the viral exposure of microbial communities

As a consequence of CRISPR adaptation, the spacer contents of CRISPR arrays reflect the diverse phages and plasmids that have passed through the host genome [1], [35], [36], [37]. However, previous studies have shown that only 2% of spacer sequences have matches in GenBank, probably because bacteriophages and plasmids are still poorly represented in databases [13], [14]. Similarity searches of identified spacers against viral genomes enable identification of the viral sources of the spacers (*i.e.*, proto-spacers) captured in each CRISPR locus. For example, similarity searches of the 7,815 unique spacers of the streptococcal CRISPR against viral genomes revealed similarities between streptococcal spacers and 22 viral genomes (species names and accession IDs are listed in Table S4); the two most prevalent viruses are *Streptococcus* phage PH10 (NC_012756) and *Streptococcus* phage Cp-1 (NC_001825) (see Figure 6A). Figure 6B suggests that the potential proto-spacers are rather evenly distributed along the phage genomes (except for a few regions, including a region that encodes an integrase, highlighted in red in Figure 6B). Although the positional distribution of the proto-spacers is close to random, the sequences adjoining the proto-spacers that we identified in the virus genomes for the streptococcal CRISPR show a conserved short sequence motif (GG) (see Figure S7 for the sequence logo), which is also the most common proto-spacer adjacent motif (PAM) shared by several CRISPR groups, as reported in [38].

Another example is CRISPR PacneL29, which is found mainly in skin-associated microbiomes. BLAST search of the identified spacers against the virus genome dataset revealed similarity between the spacers and several regions of *Propionibacterium* phage PA6 (NC_009541). We also found evidence of adaptation in LjensL36 against *Lactobacillus* phage Lv-1 (NC_011801): BLAST search shows significant matches to a total of 38 regions of the phage genome. Overall, we found 23 CRISPRs that have spacers with high sequence similarity (≥90% over 30 bps) to virus genomes collected from the NCBI ftp site (Table S5).

We also searched the spacers against plasmid sequences (collected in the IMG database). For example, matches were found between the detected streptococcal CRISPR spacers and more than 10 plasmid sequences (including *Streptococcus thermophilus* plasmids pER35, pER36, pSMQ308, and pSMQ173b; *Bacillus subtilis* plasmid pTA1040; and *Streptococcus pneumoniae* plasmids pSMB1, pDP1 and pSpnP1). See Table S6 for a summary of the plasmids that share high homology with the CRISPR spacers.

The CRISPR spacers can also be used to identify viral contigs in metagenome assemblies that contain proto-spacers. As an example, similarity searches of the identified streptococcal CRISPR spacers against the HMP assemblies revealed 37 potential viral contigs (of lengths from 2,134 to 56,413 bps); these contigs show high homology (>80% sequence similarity) with known viral genomes. The largest contig (of 56,413 bps) is similar to the genome of *Streptococcus* phage Dp-1 (NC_015274), with 88% sequence identity, and covers almost the entire viral genome (of 59,241 bps). A future paper will fully explore this approach.
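A minimal sketch of the similarity thresholds applied above (≥90% identity over at least 30 bp) follows, assuming the spacers have been searched with BLASTN and saved in the standard 12-column tabular format (-outfmt 6); the file name is hypothetical.

```python
# Sketch of the spacer-to-virus match filter: keep only hits with >=90%
# identity over an alignment of at least 30 bp, read from BLAST tabular output.

def viral_hits(blast_tab_path: str, min_identity: float = 90.0, min_length: int = 30):
    hits = []
    with open(blast_tab_path) as fh:
        for line in fh:
            fields = line.rstrip("\n").split("\t")
            query, subject = fields[0], fields[1]          # spacer ID, virus accession
            identity, aln_len = float(fields[2]), int(fields[3])
            if identity >= min_identity and aln_len >= min_length:
                hits.append((query, subject, identity, aln_len))
    return hits

# Example: viral_hits("spacers_vs_viruses.tab") would keep only the spacer/virus
# pairs meeting the thresholds reported in the text.
```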
## Conserved CRISPR repeat sequences can be used to reveal rare species in human microbiomes

Because of the large number of repeats that many CRISPR loci contain, CRISPR repeats of rare species with low sequence coverage in a community can still be found. It has been reported that repeat-based classification [7] corresponds to a *cas* gene-based classification of CRISPRs [33], which revealed several subtypes of CRISPRs largely constrained within groups of evolutionarily related species (*e.g.*, the Ecoli subtype). As such, we may use the presence of the repeats of a particular CRISPR as a first indication of the presence of related genome(s) in a microbiome, even though CRISPR loci have been found to transfer horizontally as complete packages among genomes [11].

We use CRISPR PpropL29 as an example to demonstrate this potential application, as PpropL29 was identified in only a small proportion of the HMP samples (11 datasets): 7 supragingival plaque samples (out of 125) and 4 tongue dorsum samples (out of 138). All the PpropL29-related repeats identified in these samples can be clustered into 7 unique sequences. To find the most likely reference genomes for these 7 unique repeat sequences, we blasted them against the human microbiome reference genomes and found 100% identity matches in the *Lautropia mirabilis* genome. To investigate the overall coverage of this genome by the reads (not only the CRISPR regions), we mapped the entire collections of reads from four samples: SRS019980 and SRS021477 (both from supragingival plaque, each with a 100% identity match between the CRISPR repeat and the *Lautropia mirabilis* genome); SRS019974 (from tongue dorsum, with a slightly different CRISPR repeat sequence, differing at 3 positions); and SRS019906 (which does not contain any CRISPR repeats similar to PpropL29, used as a control). The mapping results show that the reads from the two samples SRS019980 and SRS021477 each cover ~80% of the *Lautropia mirabilis* genome, which is very significant evidence that these two microbiomes include *Lautropia mirabilis*. By contrast, the other two samples have only a limited number of reads mapped to the genome (*e.g.*, only 3089 reads from SRS019906 were mapped to *Lautropia mirabilis*). This contrast suggests that identification of CRISPRs by targeted assembly can provide significant evidence for the existence of certain rare genomes.
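The coverage check just described can be sketched as follows, assuming read alignments have already been reduced to half-open (start, end) intervals on the reference genome; the actual mapping pipeline used in the study is not specified here.

```python
# Sketch of genome-coverage computation: the fraction of reference positions
# covered by at least one aligned read, via a sweep over sorted intervals.

def covered_fraction(intervals: list[tuple[int, int]], genome_length: int) -> float:
    covered = 0
    last_end = 0
    for start, end in sorted(intervals):
        if end > last_end:
            covered += end - max(start, last_end)
            last_end = end
    return covered / genome_length

# Example: two overlapping reads covering 1000 of 2000 positions -> 0.5.
print(covered_fraction([(0, 600), (400, 1000)], 2000))
```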
# Discussion

We have applied a targeted assembly approach to CRISPR identification, to characterize CRISPRs across body sites in different individuals. Our studies show that a directed approach, such as our targeted assembly approach, is important for a comprehensive (and thus less biased) estimation of the distribution of CRISPRs across body sites and individuals, and of their dynamics. Note that in this study we focused only on CRISPRs identified in eubacterial genomes, since archaea are rare in human microbiomes (we looked for, but did not find, archaeal CRISPRs in the HMP assemblies). Also, for the sake of simplicity, we derived a non-redundant list of CRISPRs based on the similarity of the CRISPR repeats (see Methods), and detailed targeted assembly was applied only to these CRISPRs.

Although many CRISPR arrays may be missed by whole-metagenome assembly, we show that whole-metagenome assemblies are useful for identifying novel CRISPRs (as *de novo* prediction of CRISPRs relies on sequence features of CRISPRs that do not exist in short reads). Once seeding CRISPRs are identified from whole-metagenome assemblies, we can go back to the original short-read datasets and pursue a comprehensive characterization of the CRISPRs using the targeted assembly approach. Also, we did not fully utilize the presence of *cas* genes for identification of novel CRISPRs in our study, since in many cases we can identify arrays of repeats but not their associated *cas* genes. A future direction is to combine targeted assembly of CRISPRs with whole-metagenome assembly, aiming to achieve even better assembly of the CRISPR loci with more complete structures, including *cas* genes and the arrays of repeats and spacers. Such an improvement is necessary for a more comprehensive characterization of, especially, the novel CRISPRs discovered in metagenomes, and of the temporal order of spacer addition to arrays.

The immediate utility of this study is to provide more complete inventories of CRISPR loci in human microbiomes, their distributions across human body sites, and the spacer content of these loci. The identification of CRISPR spacers opens up several potential applications, including tracing the viral exposure of the hosts, studying the sequence patterns of the regions adjoining the spacer precursors in viral genomes, and discovering viral contigs in metagenome assemblies. It has been shown that short sequence motifs found in the regions adjacent to the spacer precursors in viral genomes determine the targets of the CRISPR defense system [38], and we were able to analyze the sequence patterns of regions adjacent to spacer precursors for several of the CRISPRs with the most spacers identified in the HMP datasets (including SmutaL36, LjensL36, and KoralL32; see sequence logos in Figure S7). When more metagenomic datasets become available, we will extend the analysis to more CRISPRs, which may provide insights into the mechanisms of the CRISPR defense system (including the turnover patterns of CRISPR spacers, and target recognition by CRISPR defense systems). Our preliminary exploration of viral contigs, by searching CRISPR spacers against whole-metagenome assemblies, suggests that we can identify new virus genomes in metagenome assemblies; further computational and experimental analysis will be needed to confirm these contigs.

We look forward to being able to utilize CRISPR spacer sequences to better understand human and human-microbiome biology, utilizing the metadata associated with the HMP datasets. This awaits a more complete sampling of individuals over time and of known relationships, and a far better characterization of bacteriophage and other selfish genetic elements in the human biome (our inventory of spacers is a standard against which phage and plasmid collections can be judged).

# Methods

## *De novo* identification of CRISPRs

CRT [28] is a tool for fast, *de novo* identification of CRISPRs in long DNA sequences. CRT works by first detecting repeats that are separated by a similar distance, and then checking for other CRISPR-specific requirements (*e.g.*, the spacers need to be non-repeating and similarly sized). We modified CRT to consider incomplete repeats at the ends of contigs from whole-metagenome assembly, and call the modified program metaCRT.

## Identification of CRISPRs by similarity search

We implemented CRISPRAlign for identifying CRISPRs in a target sequence (a genome or a contig) that has repeats similar to those of a given CRISPR (the query CRISPR). CRISPRAlign works by first detecting substrings in the target sequence (or its reverse complement) that are similar to the repeat sequence of the query CRISPR, and then checking for other requirements, as in metaCRT. Both metaCRT and CRISPRAlign are available for download.

## Selection of known and novel CRISPRs for targeted assembly in HMP datasets

Using metaCRT and CRISPRAlign, we prepared a list of known CRISPR repeats (identified from complete/draft bacterial genomes) and a list of potentially novel ones (identified only in the whole-metagenome assemblies from the HMP datasets) for further detailed study of their distributions among the HMP datasets. As we show in Results, the targeted assembly strategy is important for an efficient and comprehensive characterization of these CRISPRs in human microbiome datasets.

Known CRISPRs were first identified from the bacterial genomes (or drafts) collected in the IMG dataset (version 3.3), using metaCRT. We then selected a subset of the identified CRISPRs that meet the following requirements: the direct repeats are 24–40 bps long; there are a minimum of 4 copies of the direct repeats; and the individual repeats each differ by at most one nucleotide, on average, from the repeat consensus sequence. The parameters were chosen to minimize false CRISPRs, considering that a CRISPR array typically contains 27 repeats, with an average repeat length of 32 base pairs [28]. A minimal sketch of these selection filters is shown below.
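The sketch assumes each candidate is given as a list of equal-length repeat copies (real repeats may contain indels, which this simplification ignores) and uses a per-column majority consensus; the data layout is illustrative.

```python
# Sketch of the CRISPR candidate filter: keep a candidate if its direct repeat
# is 24-40 bp, there are at least 4 repeat copies, and the copies differ from
# the consensus by at most one nucleotide on average.
from collections import Counter

def consensus(repeats: list[str]) -> str:
    # Per-column majority base across the repeat copies.
    return "".join(Counter(col).most_common(1)[0][0] for col in zip(*repeats))

def mismatches(a: str, b: str) -> int:
    return sum(x != y for x, y in zip(a, b))

def keep_candidate(repeats: list[str]) -> bool:
    if len(repeats) < 4:
        return False
    length = len(repeats[0])
    if not 24 <= length <= 40 or any(len(r) != length for r in repeats):
        return False
    cons = consensus(repeats)
    avg_diff = sum(mismatches(r, cons) for r in repeats) / len(repeats)
    return avg_diff <= 1.0
```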
We also kept only CRISPRs that could be found in at least one of the whole-metagenome assemblies, using CRISPRAlign. We further reduced the number of candidate CRISPRs by keeping only those that share at most 90% sequence identity along their repeats, using CD-HIT [39], as there are CRISPRs that share very similar repeats and our targeted assembly strategy can recover CRISPRs with slight repeat differences. To avoid including both a repeat and its reverse complement (metaCRT does not consider the orientation of the repeats) in the non-redundant list, we included the reverse complement sequences of the CRISPR repeats in the clustering process. A repeat could therefore be classified into two clusters by CD-HIT (with its reverse complement falling into a different cluster), one of which was removed to reduce redundancy.

We consider a CRISPR identified in the HMP assemblies to be novel if we find no instances of it in the IMG bacterial genomes or the HMP reference genomes, allowing at most 4 mismatches with CRISPRAlign. Similarly, we kept only a non-redundant list of the novel candidates.

In total, we selected a collection of non-redundant CRISPRs (64 known CRISPRs and 86 novel ones) for further targeted assembly from HMP shotgun reads. The detailed information for these CRISPRs (repeat sequences, their sources, and references for the CRISPRs already collected in the CRISPRdb database [6]) is provided in Tables S1 and S2.

## Targeted assembly of CRISPRs

For the targeted assembly of CRISPRs, we first carried out a BLASTN search with each putative CRISPR repeat sequence as the query, to collect reads that contain the repeat sequence (see Figure 1). To make the similarity search tolerant to sequencing errors and to the genomic variations observed among the multiple copies of a CRISPR repeat (within one CRISPR locus or between different CRISPR loci), we allowed three mismatches over the entire CRISPR repeat sequence: we retained only the reads that align to the entire CRISPR repeat sequence with a maximum of three mismatches. With these reads containing CRISPR repeat sequences, we ran SOAPdenovo [40] with k-mers of 45 bps, which are sufficiently long to assemble reads with the repetitive sequences found in CRISPRs. In general, whole-metagenome contigs are assembled using shorter k-mers (for example, 21–23 bps in MetaHit [41] and 25 bps in the HMP assembly [42]), as longer k-mers often fragment assemblies into short contigs. After the CRISPR contigs were assembled, the exact boundaries of the repeats and spacers were obtained using CRISPRAlign.
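The read-recruitment criterion can be sketched as a direct mismatch scan, shown below. The study used BLASTN for this step; the brute-force scan here is an illustrative stand-in with the same acceptance rule (the full repeat contained in a read with at most three mismatches).

```python
# Sketch of read recruitment for targeted assembly: retain a read only if it
# (or its reverse complement) contains the entire CRISPR repeat with at most
# three mismatches at some offset.

COMP = str.maketrans("ACGT", "TGCA")

def matches_repeat(read: str, repeat: str, max_mismatch: int = 3) -> bool:
    for seq in (read, read.translate(COMP)[::-1]):
        for i in range(len(seq) - len(repeat) + 1):
            window = seq[i:i + len(repeat)]
            if sum(a != b for a, b in zip(window, repeat)) <= max_mismatch:
                return True
    return False

def recruit_reads(reads: list[str], repeat: str) -> list[str]:
    return [r for r in reads if matches_repeat(r, repeat)]
```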
## Validation of the targeted assembly approach using simulated datasets

We simulated short reads from 6 reference genomes (*Azospirillum* B510, *Streptococcus mutans* NN2025, *Deferribacter desulfuricans* SSM1, *Dehalococcoides* GT, *Erwinia amylovora* ATCC 49946, and *Escherichia coli* K12 MG1655), and applied our method to assemble the 10 known CRISPRs in these genomes. All 54 contigs assembled by our targeted assembly approach match perfectly to known CRISPRs in the reference genomes. The genome names, the CRISPR repeats, the coordinates of the known CRISPRs in the reference genomes, and the coordinates of the contigs aligned on the reference genomes are listed in Table S7.

## Datasets

We used the dataset Human Microbiome Illumina WGS Reads (HMIGWS) Build 1.0 and the whole-metagenome assemblies from the HMP consortium. The bacterial genomes were downloaded from the IMG database, the NCBI ftp site, and the human microbiome project website. The viral genomes were downloaded from the NCBI ftp site. Additional phage genomes were downloaded from the PhAnToMe database site.

# Supporting Information

**Figure S1.** A network of 150 CRISPRs. The CRISPR names are shown in each node. The host species for each known CRISPR are listed in Table S2. Known CRISPRs are shown as blue nodes (except for several CRISPRs highlighted in green), and the novel CRISPRs identified in the HMP datasets are shown as red nodes. (TIF)

**Figure S2.** The consensus of CRISPR repeats for 6 large clusters. See cluster IDs in Figure S1. The sequence logos were prepared using WebLogo. (TIF)

**Figure S3.** Distribution of CRISPRs in different body sites. The x-axis represents the 150 CRISPRs (listed in Table S2) and the y-axis represents the proportion of samples in which instances of each CRISPR are found. (TIF)

**Figure S4.** Clusters of spacers shared by more than eight samples. In this map, rows are spacers (clustered at 80% identity) and columns are samples: cluster (a) is shared by 22 samples, cluster (b) by 23 samples, cluster (c) by 12 samples, and cluster (d) by 32 samples. The red lines indicate the presence of spacers in each of the samples. Multiple lines in the same row represent a spacer that is shared by multiple samples. (TIF)

**Figure S5.** Sharing of streptococcal CRISPR spacers among samples from 6 individuals. In this map, the rows are the 761 spacers (clustered at 80% identity; see Figure 5 for the plot using 98% identity) identified in one or more of these 6 individuals, and the columns are samples (e.g., Stool_v1_p1 means a stool sample from individual 1 at visit 1; Tongue_v2_p1 indicates a dataset from the tongue of individual 1 at visit 2). Buccal stands for buccal mucosa, and SupraPlaque stands for supragingival plaque. The red lines indicate the presence of spacers in each of the samples. Multiple lines in the same row represent a spacer that is shared by multiple samples. (TIF)

**Figure S6.** Sharing of KoralL32 CRISPR spacers among samples from 6 individuals. In this map, rows are the 598 spacers (clustered at 80% identity), and the columns are samples (e.g., Stool_v1_p1 means a stool sample from individual 1 at visit 1; Tongue_v2_p1 indicates a dataset from the tongue of individual 1 at visit 2). The red lines indicate the presence of spacers in each of the samples.
Multiple lines in the same row represent a spacer that is shared by multiple samples. (TIF)

**Figure S7.** Sequence logos showing the short sequence motifs in regions adjacent to proto-spacers in the viral genomes for three CRISPRs. (TIF)

**Table S1.** List of the 150 CRISPRs studied in this manuscript and the targeted assembly results in the HMP datasets. (DOCX)

**Table S2.** List of CRISPRs identified from the reference genomes, and their cross-references in CRISPRdb. (DOCX)

**Table S3.** Numbers of datasets from different body sites that have reads (the first number) or CRISPRs (the second number) identified for each CRISPR. (XLSX)

**Table S4.** List of viral genomes and their accession IDs plotted in Figure 6A. (DOCX)

**Table S5.** List of viral genomes sharing high sequence similarity (≥90% identity over 30 bps) with CRISPR spacers. (DOCX)

**Table S6.** List of plasmids sharing high sequence similarity (≥90%) with CRISPR spacers. (DOCX)

**Table S7.** Targeted assembly results of 10 CRISPRs using reads simulated from 6 genomes. (DOCX)

The authors thank the Human Microbiome Project (HMP) consortium for providing the sequencing data and the whole-metagenome assemblies of the HMP datasets, and the anonymous reviewers for their helpful suggestions.

# References

[^1]: Conceived and designed the experiments: MR YY. Performed the experiments: MR Y-WW YY. Analyzed the data: MR Y-WW HT TGD YY. Wrote the paper: MR Y-WW HT TGD YY.

abstract: # Abstract

Hemangioblastoma is a benign and morphologically distinctive tumor that can occur sporadically or, in approximately 25% of cases, in association with von Hippel-Lindau disease, and which involves the central nervous system in the majority of cases. Rare occurrences of hemangioblastoma in peripheral nerves and extraneural tissues have been reported. This report describes a case of sporadic renal hemangioblastoma in a 16-year-old Chinese female patient presenting with hematuria and low back pain. Histologically, the tumor was circumscribed and composed of sheets of large polygonal cells traversed by arborizing thin-walled blood vessels. The diagnosis of hemangioblastoma was confirmed by negative immunostaining for cytokeratin and positive staining for α-inhibin, S100 and neuron-specific enolase (NSE).
This benign neoplasm, which can be mistaken for various malignancies such as renal cell carcinoma, epithelioid hemangiopericytoma and epithelioid angiomyolipoma, deserves wider recognition for its occurrence as a primary renal tumor.

# Virtual slides

The virtual slide(s) for this article can be found here: http://www.diagnosticpathology.diagnomx.eu/vs/5445834246942699

author: Yang Liu; Xue-shan Qiu; En-Hua Wang
date: 2012
institute: 1Department of Pathology, the First Affiliated Hospital and College of Basic Medical Sciences, China Medical University, Shenyang, 110001, China; 2Institute of Pathology and Pathophysiology, China Medical University, Shenyang, 110001, China
references:
title: Sporadic Hemangioblastoma of the Kidney: a rare renal tumor

# Background

Hemangioblastoma, also known as capillary hemangioblastoma, is a benign tumor of uncertain histogenesis, consisting of networks of small blood vessels interspersed with lipid-laden stromal cells [1]. Although the stromal cells are often bland-looking, they can exhibit significant nuclear pleomorphism, mimicking carcinoma or other malignancies. Hemangioblastoma occurs sporadically, or in the setting of von Hippel-Lindau disease in approximately one-quarter of cases [2]. This tumor typically occurs within the central nervous system, predominantly in the cerebellum, but also occasionally in the meninges, retina, spinal cord, corpus callosum, lateral ventricle, pituitary gland, and the optic nerve. Exceptionally, hemangioblastoma occurs in other sites, such as peripheral nerve, soft tissue, retroperitoneum, skin, liver, pancreas, lung, adrenal, kidney, and urinary bladder, usually in the setting of known von Hippel-Lindau disease. There are only a few reports of sporadic hemangioblastoma occurring outside the central nervous system, including in the kidney [3]. We report one such case involving the kidney, which might be mistaken for other renal tumors, in particular clear cell renal cell carcinoma.

# Case presentation

## Clinical History

A 16-year-old Chinese female presented with gross hematuria and low back pain of 18 months' duration. A homogeneous solid mass in the upper pole of the left kidney was detected on a computed tomographic scan. Nephrectomy was performed, revealing a 1.2 cm well-encapsulated brownish-white tumor with some hemorrhagic areas in the upper pole of the left kidney. There was no polycythemia or clinical evidence of von Hippel-Lindau disease. On imaging, no other tumor was detected, particularly in the central nervous system (CNS). The patient was alive with no tumor recurrence or metastasis at six months of follow-up.

## Gross features

The left kidney was normal in size (12.0 × 5.0 × 4.0 cm). A 1.2 cm well-encapsulated mass was found in the upper pole of the left kidney. The cut surface of the tumor was brownish-white, solid and homogeneous.

## Microscopic features

The tumor was characterized by an alternation of cellular and paucicellular areas, surrounded by a thick fibrous capsule and well demarcated from the surrounding renal parenchyma (Figure 1A-E). The paucicellular areas were mainly composed of fibrous stroma containing reticular vascular channels, hemosiderin pigment, and rare stromal cells (Figure 1D). The cellular areas were composed of a rich capillary network of single-layered flat endothelial cells enclosing stromal cells.
The latter showed pale or eosinophilic cytoplasm exhibiting occasional lipid droplets but no hyaline globules, oval nuclei, delicate chromatin, and inconspicuous nucleoli (Figure 1F-G). Pleomorphic or bizarre tumor cell nuclei were barely seen (Figure 1H), and no necrosis or mitoses were found.

## Immunohistochemistry

The immunohistochemical study showed that the stromal cells were diffusely positive for NSE (Figure 2A), S-100 protein (strong cytoplasmic and nuclear staining, Figure 2B), and vimentin (Figure 2C). The stromal cells expressed α-inhibin more focally (Figure 2D). They were strictly negative for epithelial membrane antigen (EMA), AE1/AE3, CD10, and HMB-45 (Figure 2E-H). The other markers studied were also strictly negative (melan-A, CD56, chromogranin, synaptophysin, calretinin, smooth muscle actin, and desmin). Finally, CD34 and CD31 highlighted the rich and delicate vascular channels (Figure 2I, J). All of the immunohistochemical results are summarized in Table 1.

Panel of immunohistochemical stains

| **Immunohistochemical stain** | **Result** |
|:------------------------------------|:-----------|
| Pan-cytokeratin (AE1/AE3) | − |
| Epithelial membrane antigen (EMA) | − |
| Vimentin | + |
| α-inhibin | + |
| Neuron-specific enolase (NSE) | + |
| S100 protein | + |
| Synaptophysin | − |
| Chromogranin | − |
| HMB-45 (melanoma-associated marker) | − |
| Melan-A | − |
| Calretinin | − |
| Smooth muscle actin | − |
| Desmin | − |
| CD10 | − |
| CD34 | − |
| CD31 | − |

# Discussion

Because there are so few reports of sporadic primary hemangioblastoma, the progression and prognosis of this tumor remain unclear. To our knowledge, only five cases of hemangioblastoma involving the kidney have been reported [4,5]. Hemangioblastoma is likely an underrecognized tumor of the kidney, because it mimics many tumor types morphologically and is usually not considered in the differential diagnosis. A correct diagnosis is important because hemangioblastoma is benign even when there are highly atypical tumor cells, and the patient should be evaluated for possible von Hippel-Lindau disease. Because of its prominent vasculature and large neoplastic cells with atypical nuclei, renal hemangioblastoma can be mistaken for many renal neoplasms, including renal cell carcinoma (RCC), epithelioid angiomyolipoma, adrenal cortical carcinoma and paraganglioma (pheochromocytoma). It can also be mistaken for other rare tumors in this location, such as epithelioid hemangiopericytoma or lobular capillary hemangioma.

Although renal hemangioblastoma mimics various renal neoplasms, it can be recognized or suspected on morphologic grounds. The clues to the diagnosis are: circumscribed borders; a paucity of mitotic figures despite the prominence of atypical cells; fine vacuoles in some tumor cells, indicating the presence of intracytoplasmic lipid; and a rich capillary network with a focal pericytomatous pattern.

In this case, all the features described above were observed except for fine cytoplasmic vacuoles, so the first diagnosis that came to mind was epithelioid hemangiopericytoma rather than hemangioblastoma. Numerous variants of RCC have been described in recent years, markedly expanding the morphologic spectrum and making it difficult to rule out RCC entirely on morphologic grounds.
Among the variants of RCC, clear cell RCC is the main differential diagnosis, as it shares several morphologic features with hemangioblastoma. This tumor can occasionally have a hemangioblastoma-like pattern, which makes it nearly impossible to distinguish from hemangioblastoma on morphologic grounds. In such cases, immunohistochemical staining may be the only solution. In contrast to hemangioblastoma, clear cell RCC is usually negative for α-inhibin, S100, and NSE and positive for AE1/AE3, EMA and CD10 [5,6]. In this case, the tumor cells also showed striking morphologic mimicry of epithelioid angiomyolipoma. The epithelioid tumor cells of the latter often show homogeneous or reticulated (spidery) cytoplasm, sometimes with grumous basophilic material, instead of lipid-containing vacuolated cytoplasm. Fat cells and thick-walled blood vessels with spindled cells radiating from the wall, if present, provide further support for that diagnosis. In addition, melanocytic markers (HMB-45 or melan-A) and muscle markers (smooth muscle actin or desmin) are helpful in the diagnosis.

Adrenal cortical carcinoma may directly invade or metastasize to the kidney, but its tumor cells commonly show lipid-rich cytoplasmic vacuoles, and mitotic activity, infiltrative growth and vascular invasion are often identifiable. In addition, immunohistochemical staining is helpful for further distinction. Paraganglioma (pheochromocytoma) often shows a definite nested pattern in at least some foci, a pattern hardly seen in this case.

To our surprise, the immunophenotype (CD34−, melan-A−, HMB-45−, smooth muscle actin−, muscle-specific actin− and desmin−) ruled out the diagnoses of epithelioid hemangiopericytoma and angiomyolipoma. Negative staining for AE1/AE3, EMA and CD10 excluded RCC, and negative staining for synaptophysin, chromogranin and calretinin excluded adrenal cortical carcinoma and paraganglioma. In addition, S100 was diffusely positive in this case, whereas in paraganglioma S100-positive sustentacular cells are typically found surrounding the nests of tumor cells. With all of these diagnoses ruled out by immunohistochemical staining, we searched for similar cases on PubMed. After learning that sporadic hemangioblastoma can occur in this location, we reviewed the case carefully and found a few lipid droplets in the tumor cell cytoplasm and occasional bizarre tumor cell nuclei (arrows in Figure 1G-H). NSE and α-inhibin stains were then added. The immunophenotype (NSE+, S100+ and α-inhibin+) supported the diagnosis of hemangioblastoma. Furthermore, multiple sharply delineated cytoplasmic vacuoles were observed in the slides prepared for immunohistochemistry (Figure 2A, B, and D). The morphologic characteristics of this case were strikingly superimposable on those previously described in this location, including the clues to the diagnosis defined by Ip et al. [4]. The immunophenotypic profile (AE1/AE3−, S100+, NSE+, and α-inhibin+) reported by Ip et al. [4] was also noted.

# Conclusions

In this case, the tumor was nearly mistaken for epithelioid hemangiopericytoma, which shows how difficult the distinction can be in practice. Cytoplasmic lipid droplets may point to a diagnosis of hemangioblastoma, but the diagnosis should not be excluded when they are hard to find, especially in unfamiliar renal tumors with an abundant capillary network.
The stromal cells may or may not be atypical, with bizarre nuclei. Therefore, hemangioblastoma must be included in the differential diagnosis of renal tumors, so as not to overlook this tumor in this location, to better evaluate its real frequency, and to avoid wrongly diagnosing this benign tumor as a malignancy. A combination of immunohistochemical markers can be helpful in diagnosing such rare renal tumors.

# Competing interests

The authors declare that they have no competing interests.

# Authors' contributions

YL analyzed the data and wrote the manuscript as a major contributor. X-SQ and E-HW helped to revise the discussion section of this manuscript. All authors have read and approved the final manuscript.

# Consent

Written informed consent was obtained from the parents of the patient for publication of this case report and accompanying images. A copy of the written consent is available for review by the Editor-in-Chief of this journal.

## Acknowledgements

The authors express their sincere appreciation of the language editing assistance given by Liang Wang, MD and Juan-han Yu, MD, Department of Pathology, China Medical University.

abstract: The vast majority of all agents used to directly kill cancer cells (ionizing radiation, most chemotherapeutic agents and some targeted therapies) work through either directly or indirectly generating reactive oxygen species that block key steps in the cell cycle. As mesenchymal cancers evolve from their epithelial cell progenitors, they almost inevitably possess much-heightened amounts of antioxidants that effectively block otherwise highly effective oxidant therapies. Also key to better understanding is why and how the anti-diabetic drug metformin (the world's most prescribed pharmaceutical product) preferentially kills oxidant-deficient mesenchymal p53^−/−^ cells. A much faster timetable should be adopted towards developing new drugs effective against p53^−/−^ cancers.
author: Jim Watson
e-mail:
date: 2013-01
institute: Cold Spring Harbor Laboratory, Cold Spring Harbor, New York, NY 11724, USA
references:
title: Oxidants, antioxidants and the current incurability of metastatic cancers

Although the mortality from many cancers, particularly those of haematopoietic cells, has been steadily falling, the more important statistic may be that so many epithelial cancers (carcinomas) and effectively all mesenchymal cancers (sarcomas) remain largely incurable.
Even though an increasing variety of intelligently designed, gene-targeted drugs are now in clinical use, they generally only temporarily hold back the fatal ravages of major cancers such as those of the lung, colon and breast that have become metastatic and gone beyond the reach of the skilled surgeon or radiotherapist. Even though we will soon have comprehensive views of how most cancers arise and function at the genetic and biochemical level, their 'curing' now seems to many seasoned scientists an even more daunting objective than when the 'War on Cancer' was started by President Nixon in December 1971.

Propelling me then, 40 years ago, to turn the Cold Spring Harbor Laboratory into a major site for unravelling the genetic underpinnings of cancer was the belief that once the gene-induced molecular pathways to cancer became known, medicinal chemists would go on to develop much more effective gene-targeted drugs. Unlike most early proponents of the 'War on Cancer', who thought that DNA-damaging chemotherapeutic agents would bring real victories in one to two decades, I thought three if not four more decades of focused research would need to pass before we would be in a position to go all out for total victory [1]. In fact, only after the 1988–2003 Human Genome Project provided the world with the highly accurate sequences of three billion human DNA letters has it been possible to begin to approach the true genetic complexity of cancer.

# Molecular pathways to cancer as revealed through DNA sequencing

By now we know that mutations in at least several hundred human genes (out of a total of 21 000 genes) become serious 'drivers' of the abnormal cell growth and division process that generates human cancer [2]. They do so because they encode the protein components of 'signal transduction pathways' that enable external signals (growth factors) to move from cell surface receptors to key promoter–enhancer regions along the 24 human chromosomes. There they turn up the expression of genes needed for cell growth and division, as well as for the evasion of programmed cell death, the latter of which much underlies the ever-growing resistance of late-stage aggressive cancer cells to radio- and chemotherapeutic therapies. Most importantly, there exist multiple molecular pathways that bring about cell growth and proliferation, each with its own specific surface receptors, cytoplasmic transducers, and promoters and enhancers of gene expression [3].

Much potential cross talk exists between these pathways, allowing new DNA mutations to create new pathways to cancer when pre-existing ones are blocked. Already we know that the emergence of resistance to the *BRAF*-targeted anti-melanoma drug Zelboraf frequently results from driver pathway cross talk, as does resistance to the targeted drugs Iressa and Tarceva when they are deployed against EGFR-driven lung cancers. Given the seemingly almost intrinsic genetic instability of many late-stage cancers, we should not be surprised when key old timers in cancer genetics doubt being able to truly cure most victims of widespread metastatic cancer.

Resistance to gene-targeted anti-cancer drugs also comes about as a consequence of the radical changes in underlying patterns of gene expression that accompany the epithelial-to-mesenchymal cell transitions (EMTs) that cancer cells undergo when their surrounding environments become hypoxic [4].
EMTs generate free-floating mesenchymal cells whose flexible shapes and still-high ATP-generating potential give them the capacity for amoeboid cell-like movements that let them metastasize to other body locations (brain, liver, lungs). Only when they have so moved do most cancers become truly life-threatening.

# Epithelial-to-mesenchymal transitions are a consequence of changes in transcriptional regulation

EMTs leave intact the pre-existing order of DNA bases while changing the way they are read into RNA transcripts. Underlying transcriptional regulation are site-specific DNA-binding proteins, and sometimes regulatory RNAs, that recruit to genes the machinery required to read those genes. This includes the general transcription machinery and also enzymes that modify the histones around which chromosomal DNA is wound, and the DNA itself. These enzymes mediate methylation and acetylation of histones, as well as remodelling of nucleosomes in various ways, and methylation of DNA bases, changes that can influence how a given gene is expressed. Regulation of transcription extends far beyond its role in influencing how cancer cells respond to changes in their environmental surroundings. This regulation underlies all the multiple switches that accompany the transition of fertilized eggs into the differentiated cells (lung, kidney, etc.) of mature organisms.

# IL6-like cytokines drive mesenchymal cells to commence cell proliferation

Much holding back the creation of effective drugs against mesenchymal cancer cells has long been ignorance of the externally driven signalling pathways propelling them into stem cell growth and subsequent differentiation. Most attention until now has been focused on the Wnt signalling pathway, which sends β-catenin into the cell nucleus to activate the TCF transcription factor for essential roles in EMTs as well as in stem cell functioning [5,6]. An even more important villain may have been virtually staring us in the face for almost two decades: one or more of the cytokine mediators of inflammation and immunity, in particular the interleukin IL6. IL6 blood serum levels, for example, steadily go up as incurable cancers become more life-threatening [7,8]. Autocrine loops probably exist in which cytokine binding to the respective cell surface receptors sets into motion downstream gene-activating pathways that not only generate more IL6 molecules but give the respective cancer cells an aura of almost true immortality by blocking the major pathway to programmed cell death (apoptosis). *Pushing by cytokines of otherwise quiescent mesenchymal cancer cells to grow and divide probably explains why anti-inflammatory agents such as aspirin lead to much less cancer in those human beings who regularly take them* [9].

Unfortunately, the inherently very large number of proteins whose expression goes either up or down as mesenchymal cancer cells move out of quiescent states into the cell cycle makes it still very tricky to know, beyond the cytokines, which other driver proteins to focus on for drug development. Ideally, we should largely focus first on finding inhibitors of cancer cell proliferation as opposed to inhibitors of cancer cell growth. Inhibiting, say, the synthesis of cellular molecular building blocks will slow down not only the metabolism of cancer cells but also that of our body's normally functioning cells.
By contrast, blocking proteins that specifically move cells through the cell cycle should leave untouched the normal functioning of the vast majority of our body's cells and so generate far fewer unwanted side effects.

# The gene transcription activator Myc allows cells to move through the cell cycle

Long thought to be a key, if not *the* key, protein against which to develop cell-proliferation-inhibiting drugs is the powerful gene transcription activator Myc. First known for its role in driving cancers of blood-forming lymphocytes (e.g. Burkitt's lymphoma), Myc has now also been found to be a key driver of the rapidly fatal 'small cell' lung cancers, as well as the likely driver of many late-stage incurable cancers, including receptor-negative and ductal breast cancers. *Lots of Myc may turn out to be an essential feature of much of the truly incurable cancer.* It simultaneously turns up the synthesis of the more than 1000 different proteins required to move all cells through the cell cycle. Although precisely how this almost 400-amino-acid-long polypeptide works at the molecular level remains to be worked out, it seems to play a unique role that cannot be handled by any other class of transcription factors. Unlike our first hunch that Myc was somehow an on–off specifier of gene activity, it is a nonlinear amplifier of expression acting universally on active genes, except for the immediate early genes that become expressed before *Myc* [10,11].

Using a dominant negative plasmid that blocks all Myc functions, Gerard Evan's laboratory, first at UCSF and now in Cambridge, UK, has used mouse xenograft models of several major human cancers to show Myc's indispensable role in moving cells through the cell cycle [12]. Although mouse stem cells in Myc's absence stop growing and dividing, they resume normal functioning when *Myc* is turned back on. By contrast, turning off *Myc* in human cancer cells preferentially drives them into programmed cell death (apoptosis), with one important exception: pancreatic adenocarcinoma cells do not enter into apoptosis, quite possibly explaining why pancreatic cancer is so resistant to virtually all cell-killing reagents (G. Evan 2012, personal communication). Already many serious efforts have been made to develop drugs that block Myc's cell-proliferation-promoting activities. Unfortunately, all such direct efforts have so far failed.

# Bromodomain 4 proteins play essential roles in maintaining the Myc levels necessary for leukaemic cell growth and division

An unanticipated powerful way of lowering Myc levels in haematopoietic cancers has emerged from the discovery that the incurable nature of *MLL-AF9* acute myeloid leukaemia (AML) depends upon the presence of the not yet well understood protein bromodomain 4 (BRD4). When JQ1, developed last year to treat the BRD4-driven rare *NUT* midline carcinoma, was used on human *MLL-AF9* AML cells, they rapidly stopped multiplying and differentiated into macrophages [13,14]. At the same time, Myc levels rapidly plunged. Most importantly, JQ1 does not block normal macrophage production, suggesting that Myc levels in macrophage-forming stem cells do not depend upon BRD4. Their formation must depend on a different chromosomal remodeller.

# *Myc* is turned on through multiple molecular pathways

How *Myc* is turned on, not only in other cancers but also during normal human development, remains largely to be worked out.
Likewise not known is how the BRD4 protein helps, at the molecular level, to turn on Myc synthesis in *MLL-AF9*-driven leukaemia. Until JQ1 goes into the clinic against leukaemia late this year, we will moreover not know for sure whether resistance to JQ1 will compromise its clinical utility. Unfortunately, the answer is probably yes, because artificially turning up *Myc* by means that bypass *BRD4* causes JQ1 resistance. Moreover, there are already known to be multiple ways to turn on *Myc* expression in normal cells, each starting with signals binding to specific cell surface receptors and then moving through one or more layers of signal transducers to the nucleus to turn up the transcription of genes needed for cell growth and division. Myc synthesis is not only downstream of the cytokine Jak–Stat3 signal transduction pathway but also downstream of the HER2–RAS–RAF–SHp2–ERK3 pathway that helps drive the growth of much, if not most, breast cancer [15]. Whether these pathways in turn feed into BRD-protein-dependent gene-activating pathways remains for the future to reveal. A multiplicity of Myc-inhibiting specific drugs may have to be in our arsenal before we can routinely move beyond delaying death from incurable cancers to true lifetime-long cures.

# Detecting key cancer cell vulnerabilities through RNAi screens

That the BRD4 protein is among the major Achilles' heels of incurable AML became known not because of a chance observation but through a powerful new methodology for detecting molecular weaknesses that are cancer cell-specific. At its heart has been the deployment, over the past several years, by Greg Hannon at Cold Spring Harbor Laboratory of short hairpin RNA molecules (shRNAs) specifically designed to knock back the functioning of single human genes [16]. A genome shRNA library containing multiple probes (four to six) for each human gene comprises some 100 000 shRNAs. Testing all of them extensively against just one type of cancer still poses a formidable logistical challenge, likely to require 1- to 2-year-long intervals even for 'big science laboratories'.

Much smaller, highly focused libraries, however, can now be deployed by high-quality, university-level science laboratories, provided there already exist hints as to what molecular vulnerabilities might be found. Forearmed by the knowledge that invariably incurable forms of acute myeloblastic leukaemia (AML) originate from rearrangements of a key gene involved in epigenetic chromosomal remodelling, Chris Vakoc and Johannes Zuber at the Cold Spring Harbor Laboratory found the gene-activating BRD4 to be the most pronounced potential molecular weakness of an *MLL-AF9* human AML. They did so by screening libraries of only some 1000 probes designed to knock out 234 genes coding for the key proteins involved in epigenetically driven gene expression.

Most recently, Vakoc has found three other major protein players (Menin, Ezh1/2 and Eed) that work together with BRD4 to make *MLL-AF9* AML incurable by currently deployed anti-cancer drugs [17]. Drugs inhibiting their respective functioning should also provide effective anti-AML agents. *Ezh1/2* and *Eed* code for polycomb proteins that block specific gene expression, whereas the *Menin* gene, like the *BRD4* gene, activates gene expression. Loss of functional Ezh1/2 and Eed de-represses the *CdKn2a* gene-encoded p16 and p19 proteins, which have widespread cell-cycle-progression-blocking roles.
The Menin protein's molecular role probably involves its already-known binding to MLL. Like BRD4, it may have a Myc-level-raising role. Finding out how such chromosome remodelling dependencies emerge and evolve during tumour progression will directly impact the clinical implementation of epigenetics-based anti-cancer therapies.

# BRD4 functioning is vital not only for fast-growing leukaemias but also for many, if not most, dangerous lymphomas and myelomas

As soon as possible, we must find out in more detail how far the drug JQ1's anti-cancer actions extend beyond *MLL-AF9*-specific AMLs. Already we know that in mice it stops equally well the more curable, non-*MLL*-rearranged strains of AML, as well as all forms of acute lymphocytic leukaemia (ALL). BRD4's capacity to heighten Myc levels thus probably extends over almost all leukaemias. Whether the polycomb proteins of ALL, like those of AML, also turn off the cell-cycle-inhibiting *CdKn2a*-coded proteins p16 and p19 remains to be seen. JQ1 also stops the growth in mice of many fast-growing B- and T-cell lymphomas, suggesting that their untreated BRD4 protein maintains the high Myc levels necessary to make them fatal. In JQ1-resistant lymphomas (e.g. Jurkat cells), Myc synthesis must be turned on by a different route. Cell lines from most human multiple myeloma victims also frequently show high sensitivity to JQ1 [18]. There, JQ1 and the now widely deployed proteasome inhibitor Velcade reinforce each other's anti-myeloma actions. When JQ1 becomes broadly available clinically, hopefully by mid-2013, it may considerably lengthen the 3–5 years of additional life provided to most myeloma victims by Velcade administration.

JQ1 also significantly slows down the growth of a small but real number of cell lines derived from many major solid cancers (e.g. prostate and melanoma). BRD4 may have been called into play only late, as these cancers evolved to become more aggressive. Of more importance is JQ1's failure to stop the growth of the vast majority of solid tumour cell lines. The heightened Myc levels needed by, say, cancers of the prostate and breast may instead be provided by the intervention of one or more of the some 35 other BRD proteins or other chromatin regulators. Unfortunately, we do not yet know how the vast majority of them function, beyond the fact that their BRD pockets, by binding to acetyl groups, help turn gene activity on, not off. JQ1's unanticipated blocking of sperm functioning has, most excitingly, led to the recent discovery of a testis-specific bromodomain protein (BRDT) essential for chromatin remodelling during spermatogenesis. Occupancy of the BRDT acetyl-lysine pocket by JQ1 generates a complete and reversible contraceptive effect [19]. Early evidence suggests that BRDT does not promote Myc synthesis. There may soon be found, say, breast-specific or prostate-specific BRD gene activators. Most important to learn is whether they do or do not also drive Myc synthesis.

# The circadian rhythm regulator PER2, by negatively regulating Myc levels, functions as an important tumour suppressor

Myc's paramount role in moving cancer cells through the cell cycle has recently been reinforced by two highly independent RNAi screens to find genes whose loss of function selectively kills cancer cells [20,21]. Although sampling largely different sets of genes, both homed in on the gene *CSNK1E*, coding for the protein kinase casein kinase 1 epsilon.
Among its many targets for phosphorylation and subsequent proteasome-mediated degradation is the product of the *PER2* gene, a transcription regulator whose selective binding to DNA turns off the function of many genes, including *Myc*. Long known has been the involvement of PERIOD 2 (PER2) as a clock protein at the heart of the circadian rhythms of higher animal cells. Later, quite unexpectedly, PER2 was found to function as a tumour suppressor, with the absence of both its copies causing the rate of radiation-induced cancers to rise. It now seems obvious that its anti-cancer action arises from its ability to turn off *Myc*. In PER2's absence, Myc levels greatly rise, thereby explaining why tumours of many types display higher levels of CSNK1E than found in their normal cell equivalents. Common sense suggests that specific CSNK1E inhibitors should soon be broadly tested against a large variety of human cancers.

# High-Myc-driven, fast-proliferating cells possess cell cycle vulnerabilities

Cells proliferating under high Myc levels proceed less efficiently through the mitotic cycle than cells driven by lower Myc levels. Why high Myc leads to many more mitosis-generated chromosome abnormalities has recently been explained through a large RNAi screen designed to reveal 'synthetic lethal' genes that have vital functions only under conditions of high Myc. Most unexpectedly, this screen pinpointed key roles for the SUMO-activating genes *SAE1* and *SAE2*, involved in proteasome-specific protein degradation [22]. When they are blocked from functioning, large numbers of Myc-driven genes somehow become switched from on to off. As expected, many function in the formation and breakdown of the mitotic spindle. A much less anticipated second class functions in ubiquitin-based, proteasome-mediated protein degradation. Conceivably, the fast growth rates of cells driven to proliferate by high Myc levels generate more mitosis-involved proteins than their respective proteasomes can break down in a timely fashion. In any case, drugs designed to block *SAE1* and *SAE2* should preferentially kill fast-proliferating cancer cells.

High-Myc-level vulnerability is also generated by suboptimal supplies of cyclin-dependent kinase 1 (CDK1), which functions with the A-type cyclins during the late S phase of the cell cycle. As long as Myc levels are those of normal cells, proliferating cells have sufficient CDK1. But when more Myc leads to faster cell cycles, much more CDK1 is required to prevent failed cell divisions. This makes it a prime candidate for the development of an effective drug against high-Myc-driven cancers [23].

# Selectively killing cancer cells through exploiting cancer-specific metabolic and oxidative weaknesses

We must focus much, much more on the wide range of metabolic and oxidative vulnerabilities that arise as consequences of the uncontrolled growth and proliferation capacities of cancer cells. As human cancers become driven to more aggressive glycolytic states, their ever-increasing metabolic stress makes them especially vulnerable to sudden lowering of their vital ATP energy supplies. 3-Bromopyruvate, a powerful dual inhibitor of hexokinase as well as of oxidative phosphorylation, kills highly dangerous hepatocellular carcinoma cells more than 10 times faster than the more resilient normal liver cells, and so has the capacity to truly cure, at least in rats, an otherwise highly incurable cancer [24,25].
The structurally very different hexokinase inhibitor 2-deoxyglucose, through its ability to block glycolysis, also has the potential for being an important anti-cancer drug. Not surprisingly, it works even better when combined with inhibitors of ATP-generating oxidative phosphorylation such as the mitochondria-targeted drug MitoQ \\[26\\].\n\nA key mediator of the cellular response to falling ATP levels is the AMP-activated protein kinase AMPK, which in times of nutritional stress phosphorylates key target proteins to push metabolism away from anabolic growth patterns \\[27\\]. By inhibiting mTOR it slows protein synthesis, and by phosphorylating acetyl-CoA carboxylase it slows down lipid synthesis. The glycolytic pathways that produce the cellular building blocks are indirectly controlled by AMPK through its phosphorylation of the p53 transcription factor. Activated p53 slows down glycolysis during cell cycle arrest through turning on its *TIGAR* gene target, whose protein breaks down fructose 2,6-bisphosphate, a key regulator of glycolysis; activated p53 also blocks further cell cycling through turning on the *p21* gene.\n\n# Preferential cancer cell killing by apoptosis reflects high p53 levels\n\nThe enhanced apoptosis capability of early-stage epithelial cancer cells, in comparison with their normal cell equivalents, reflects their higher content of activated p53 transcription factor. Overexpression and amplification of the p53 repressors MDM2 and MDM4 are common across cancer types. In the case of melanomas, p53 function is commonly shut down by overexpression of MDM4. Already a drug exists that, through its inhibition of MDM4, makes melanoma much more treatable \\[28\\]. Knowing more about why p53 activation sometimes leads to cell cycle arrest (senescence) and under different circumstances results in apoptosis remains an important challenge for the immediate future.\n\n# p53 induces apoptosis by turning on genes whose primary function is the synthesis of reactive oxygen species\n\nHow p53 turns on apoptosis was first revealed through elegant gene expression studies carried out in Bert Vogelstein's Johns Hopkins laboratory in 1997 \\[29\\]. While looking for genes expressed only during apoptosis, they discovered a set of 13 p53-induced genes (*PIG* genes), each of which is likely a key player in the cellular synthesis of reactive oxygen species (ROS; hydrogen peroxide H~2~O~2~, the hydroxyl radical and the superoxide anion O~2~^\u2212^). *PIG3*, for example, codes for a quinone oxidoreductase that is a potent generator of ROS \\[30,31\\]. p53 also plays major roles in downstream processes through turning on the synthesis of some 10 different mitochondria-functioning proteins such as BAX, PUMA and NOXA, as well as death receptors such as DR4 and DR5, which in ways yet to be elucidated help carry out the many successive proteolysis stages in apoptosis \\[32\\].\n\nEqually important, p53 turns on the synthesis of the key proteins involved in the apoptotic (programmed cell death) elimination of cells that have no long-term future, say, through unsustainable metabolic stress or damage to cellular chromosomes brought about by exposure to ultraviolet or ionizing radiation. The removal of such cells thus proceeds through complex sets of largely mitochondria-sited degradation events.
As the successive stages of apoptosis unfold, the dying cells lose mitochondrial functioning and release cytochrome c, culminating in DNA-liberating cell dissolution.\n\n# Leakage from drug-impaired mitochondrial electron transport chains raises reactive oxygen species levels\n\nThe mitochondrial electron transport generation of ATP and heat is obligatorily accompanied by the production of ROS (such as the hydroxyl radical, H~2~O~2~ and the O~2~^\u2212^ superoxide anion). Normally, potent antioxidative molecules such as glutathione and thioredoxin prevent ROS molecules from irreversibly damaging key nucleic acid and protein molecules \\[33\\]. When present in normal amounts, however, they cannot handle the much larger amount of ROS generated when oxidative phosphorylation becomes inhibited by mitochondria-specific drugs such as rotenone, which blocks feeding of NADH into the respiratory chain, or by 3,3\u2032-diindolylmethane (DIM), the active component of the long-reputed chemo-preventative *Brassica* vegetables, which inhibits the mitochondrial F1F0 ATP synthase complex \\[34\\]. The remaining ROS molecules, through oxidizing intra-mitochondrial targets, induce the apoptotic elimination of cells damaged by excessive oxidative stress. Already, DIM is used as an adjuvant therapy for recurrent respiratory papillomatosis in humans. The molecular mechanism(s) through which ROS induce apoptosis remains to be found\u2014hopefully soon. We will be surprised if they do not somehow directly oxidize, and so activate, one or more of the BAX-like proteins involved in p53-mediated apoptosis.\n\nThat ROS by themselves can mediate apoptosis was recently convincingly shown by the finding that the 'first-in-class' anti-cancer mitochondrial drug elesclomol (discovered by Synta Pharmaceuticals through screening for pro-apoptotic agents) kills cancer cells through promoting ROS generation \\[35\\]. When these resulting ROS molecules are destroyed through the simultaneous administration of the antioxidant molecule *N*-acetylcysteine, preferential killing of cancer cells stops. The failure of elesclomol to generate apoptosis in non-cancerous cells probably arises from the inherently lower ROS level generated by their normal mitochondrial electron transport machinery.\n\n# Reactive oxygen species may directly induce most apoptosis\n\nThat elesclomol promotes apoptosis through ROS generation raises the question whether much more, if not most, programmed cell death caused by anti-cancer therapies is also ROS-induced. Long puzzling has been why the highly oxygen-sensitive 'hypoxia-inducible transcription factor' HIF1\u03b1 is inactivated both by the 'microtubule-binding' anti-cancer taxanes such as paclitaxel and by the anti-cancer DNA-intercalating topoisomerase inhibitors such as topotecan or doxorubicin (until now thought to act very differently), as well as by frame-shifting mutagens such as acriflavine \\[36,37\\]. All these seemingly unrelated facts finally make sense by postulating that not only does ionizing radiation produce apoptosis through ROS but that today's most effective anti-cancer chemotherapeutic agents, as well as the most efficient frame-shifting mutagens, also induce apoptosis through generating ROS \\[38\u201340\\]. That the taxane paclitaxel kills through generating ROS became known from experiments showing that its relative effectiveness against cancer cell lines of widely different sensitivity is inversely correlated with their respective antioxidant capacities \\[41,42\\].
A common ROS-mediated way through which almost all anti-cancer agents induce apoptosis would explain why cancers that become resistant to chemotherapeutic control become equally resistant to ionizing radiotherapy.\n\nRecent use of a 50 000-member chemical library at MIT's Koch Cancer Center to search out molecules that selectively killed *K-RAS*-transformed human fibroblasts revealed the piperidine derivative lanperisone \\[43\\]. ROS generation underlies its cancer cell killing action. Surprisingly, this already clinically used muscle relaxant induced non-apoptotic cell death in a p53-independent (p53^+\/+^ versus p53^\u2212\/\u2212^) manner. When lanperisone was applied in the presence of the ROS-destroying scavenger molecules deferoxamine and butylated hydroxyanisole or the antioxidant trolox, no killing activity was observed.\n\n# Blockage of reactive-oxygen-species-driven apoptosis by antioxidants\n\nAlthough we now know ROS as a positive force for life through their apoptosis-inducing role, for much longer we have feared them for their ability to irreversibly damage key proteins and nucleic acid molecules. So when not needed, they are constantly being neutralized by antioxidant molecules such as glutathione and by antioxidative proteins such as superoxide dismutase, catalase and thioredoxin. Controlling their synthesis, as well as that of many more minor antioxidants, is the Nrf2 transcription factor, which probably came into existence soon after life as we know it started. Most importantly, at Cancer Research UK in Cambridge, David Tuveson's laboratory has recently shown that Nrf2 synthesis is somehow upregulated by the cell-growth- and division-promoting *RAS*, *RAF* and *MYC* oncogenes \\[44\\]. Biologically, this makes sense because we want antioxidants present when DNA functions to make more of itself.\n\nThe fact that cancer cells largely driven by RAS and Myc are among the most difficult to treat may thus often be due to their high levels of ROS-destroying antioxidants. Whether their high antioxidative levels totally explain the effective incurability of pancreatic cancer remains to be shown. The fact that late-stage cancers frequently have multiple copies of *RAS* and *MYC* oncogenes strongly hints that their general incurability more than occasionally arises from high antioxidant levels. Clearly important to learn is what other molecules exist that turn on Nrf2 expression. During the yeast life cycle, and probably that of most organisms, oxidative phosphorylation is clearly separated in time from DNA synthesis. Whether Nrf2 levels also go up and down during the cell cycle is important to learn soon.\n\n# Enhancing apoptotic killing using pre-existing drugs that lower antioxidant levels\n\nAlready there exist experiments with haematopoietic cells in which the cancer-cell-killing capacity of the ROS generator arsenic trioxide (As~2~O~3~) has been shown to be inversely correlated with the cellular levels of the major antioxidant glutathione \\[45\\]. As~2~O~3~ also knocks down the reductive power of thioredoxin, which is necessary for several key steps in cellular metabolism. Its capacity to inhibit both thioredoxin and glutathione widens its potential for successful deployment against many major cancers beyond acute promyelocytic leukaemia. Also capable of enhancing the cytotoxic effect of As~2~O~3~ is ascorbic acid, which, though known for its antioxidant role in cells, is converted into its oxidizing form dehydroascorbic acid.
Unfortunately, we do not yet have clinically effective ways to lower glutathione levels. Lowering its level through deployment of the drug buthionine sulphoximine, which blocks its synthesis, leads quickly to upregulation of the Nrf2 transcription factor, which in turn upregulates glutathione synthesis \\[46\\]. A more general way to reduce antioxidant levels deploys motexafin gadolinium, a member of a class of porphyrin molecules called texaphyrins. Through a process called futile redox cycling, it transfers electrons from antioxidants to oxygen to produce ROS. Unfortunately, clinical trials designed to show its enhancement of chemo- and radiotherapies have so far shown only modest life extensions as opposed to cures.\n\nThrough selecting for compounds that preferentially induce apoptosis in cancer cells as opposed to normal cells, the natural product piperlongumine from the *Piper longum* plant was recently revealed as a potential anti-cancer drug \\[47\\]. Most excitingly, it mediates its action through binding to the active sites of several key cellular antioxidant enzymes (e.g. glutathione *S*-transferase and carbonyl reductase 1) known to participate in cellular responses to ROS-induced oxidative stress. That piperlongumine failed to raise ROS levels in non-cancerous cells probably resulted from their inherently lower levels of these enzymes, which, in turn, result from less activation of the Nrf2 transcription factor.\n\n# Anti-angiogenic drugs work only when used in conjunction with reactive oxygen species generators\n\nThe non-toxic anti-angiogenesis protein endostatin (discovered and promoted in the late 1990s in Judah Folkman's Boston laboratory and now resurrected by Yongzhang Luo in Beijing) shows anti-cancer activity only when it is used together with conventional chemotherapeutic agents. This fact, long puzzling to me, may be due to the chemotherapeutic component providing the ROS needed for cancer cell killing \\[48\\]. By itself, the hypoxia resulting from endostatin action may not be sufficient for cancer cell killing. A similar explanation may account for why Genentech's Avastin also works only when combined with chemotherapy. By contrast, the killing of mutant *BRAF* melanoma cells by Zelboraf works very well in the absence of any obvious direct source of ROS. Conceivably, the metabolic stress resulting from its turning off of the RAS\u2013ERK pathways somehow shuts down the Nrf2 pathways, letting ROS rise to the level needed to kill the drug-weakened melanoma cells.\n\n# Lower reactive oxygen species levels in stem cells reflect higher levels of antioxidants\n\nFor more than a decade, there has existed too-long-ignored evidence that normal stem cells have lower ROS levels than their differentiated progeny. Just a year ago, even more convincing experimentation showed that breast cancer stem cells also contain lower ROS levels than those found in their cancerous epithelial-like progeny cells \\[49\\]. All stem cells, be they normal or cancerous, probably have lower ROS levels as a result of their correspondingly higher levels of prominent antioxidant molecules such as glutathione and thioredoxin. Most likely, these heightened amounts have evolved to protect chromosomal DNA from ROS-induced damage as its more exposed regions undergo changes in compaction while moving through the cell cycle. Whether all dividing cells have higher antioxidant levels remains to be worked out.
If so, all stem cells will be inherently much more resistant to ROS-induced apoptotic killing than their more differentiated, much less antioxidant-rich progeny cells.\n\n# Metformin selectively targets (kills) mesenchymal cancer stem cells\n\nAlready we have at our disposal a relatively non-toxic, exceptionally well-tested drug that preferentially kills mesenchymal cancer stem cells. In a still much unappreciated article published three years ago in *Cancer Research*, Kevin Struhl's laboratory at Harvard Medical School first showed that metformin, a blocker of complex I of oxidative phosphorylation, selectively targets stem cells. When applied together with chemotherapeutic agents to block xenograft tumour growth, it induces prolonged remission if not real cures \\[50,51\\]. But when metformin was left out of these experiments, subsequent multiplication of unkillable mesenchymal stem cells let these xenografts grow into life-threatening forms, showing that chemotherapy by itself does not kill stem cells. This most widely used anti-diabetic drug's heightened ability to kill late-stage mesenchymal cancer cells probably explains why those humans who use it regularly have reduced incidences of many cancers.\n\nMetformin is presently being added to a number of anti-cancer chemotherapeutic regimes to see whether it magnifies their effectiveness in humans. The fact that metformin works much more effectively against p53^\u2212\/\u2212^ cells suggests that it may be most active against late-stage cancers, the vast majority of whose cells have lost both of their *p53* genes. By contrast, the highly chemo-radio-sensitive early-stage cancers on which most anti-cancer drug development has focused might very well show little metformin effectiveness. By the end of 2013, we should know whether it radically improves any therapies now in use. Highly focused new drug development should be initiated towards finding compounds beyond metformin that selectively kill stem cells. And the reason why metformin preferentially kills p53^\u2212\/\u2212^ stem cells should be even more actively sought out.\n\n# Free-radical-destroying antioxidative nutritional supplements may have caused more cancers than they have prevented\n\nFor as long as I have been focused on the understanding and curing of cancer (I taught a course on Cancer at Harvard in the autumn of 1959), well-intentioned individuals have been consuming antioxidative nutritional supplements as cancer preventatives, if not actual therapies. The most prominent past scientific proponent of their value was the great Caltech chemist Linus Pauling, who near the end of his illustrious career wrote a book with Ewan Cameron in 1979, *Cancer and Vitamin C*, about vitamin C's great potential as an anti-cancer agent \\[52\\]. At the time of his death from prostate cancer in 1994, at the age of 93, Linus was taking 12 g of vitamin C every day. In light of the recent data strongly hinting that much of late-stage cancer's untreatability may arise from its possession of too many antioxidants, the time has come to seriously ask whether antioxidant use much more likely causes than prevents cancer.\n\nAll in all, the by now vast number of nutritional intervention trials using the antioxidants \u03b2-carotene, vitamin A, vitamin C, vitamin E and selenium have shown no obvious effectiveness in preventing gastrointestinal cancer, nor in lengthening lives \\[53\\]. In fact, they seem to slightly shorten the lives of those who take them.
Future data may, in fact, show that antioxidant use, particularly that of vitamin E, leads to a small number of cancers that would not have come into existence but for antioxidant supplementation. Blueberries best be eaten because they taste good, not because their consumption will lead to less cancer.\n\n# A much faster timetable for developing anti-metastatic drugs\n\nThe world of Physics already knew 20 years ago that it had no choice but to go very big for the Higgs boson. To the civilized world's great relief, they now finally have it. Biology and Medicine must likewise now again aim big\u2014as when we first promised the world in 1988 that the still-to-be-sequenced human genome would later prove indispensable for the curing of most cancers, and so went for it big. If, however, we continue to move forward at today's never-frantic, largely five-day working week pace, the never-receding, 10\u201320-years-away final victory that our cancer world now feels safe to project will continue to sink the stomachs of informed cancer victims and their families. That we now have no General of influence, much less power, say an Eisenhower or even better a Patton, leading our country's War on Cancer says everything. Needed soon is a leader who has our cancer drug development world working every day and all through the night.\n\nThe now much-touted genome-based personal cancer therapies may turn out to be much less important tools for future medicine than the newspapers of today lead us to hope \\[54\\]. Sending more government cancer monies towards innovative anti-metastatic drug development at appropriate high-quality academic institutions would make better use of the National Cancer Institute's (NCI) funds than the large sums now spent testing drugs for which we have little hope of true breakthroughs. The biggest obstacle today to moving forward effectively towards a true *war against cancer* may, in fact, come from the inherently conservative nature of today's cancer research establishments. They are still too closely wedded to moving forward with cocktails of drugs targeted against the growth-promoting molecules (such as HER2, RAS, RAF, MEK, ERK, PI3K, AKT and mTOR) of signal transduction pathways instead of against the Myc molecules that specifically promote the cell cycle.\n\nMost needed now are many new anti-Myc drugs beyond the exciting new BRD4 inhibitors, such as JQ1, as well as multiple drugs that inhibit the antioxidative molecules that likely make, say, pancreatic cancer so incurable. They should much enhance the effectiveness of all current radio- and chemotherapeutic regimes. As such, they will likely cure many more now-incurable cancers. How they will interact as cocktail partners with the newer targeted therapies that do not directly generate ROS remains to be seen. Equally important may be an expanded search for drugs that prevent p53 breakdown.\n\n# A billion dollars should suffice to identify all the remaining proteins needed for curing most metastatic cancer\n\nThe total sum of money required for RNAi methodologies to reveal the remaining major molecular targets for future anti-cancer drug development need not be more than 500\u20131000 million dollars. Unfortunately, the NCI is now unlikely to take on still one more big science project when it is so hard-pressed to fund its current cancer programmes. Still dominating NCI's big science budget is The Cancer Genome Atlas (TCGA) project, which by its very nature finds only cancer cell drivers as opposed to vulnerabilities (synthetic lethals).
While I initially supported TCGA getting big monies, I no longer do so. Further 100-million-dollar annual injections so spent are not likely to produce the truly breakthrough drugs that we now so desperately need.\n\nHappily, the first whole-genome RNAi big screen backed by a 'big pharma' firm has just started, with Pfizer working with the Cold Spring Harbor Laboratory. Even, however, if several more giants working separately join in, collectively they will naturally focus on major cancers such as those of the breast, colon and lung. I doubt they will soon go big against, say, either melanoma or oesophageal cancer. Greg Hannon here at Cold Spring Harbor will probably be the first academic scientist to come to grips with the non-trivial experimental challenges posed by whole-genome, 100 000-RNAi-probe screens, through both his collaboration with Pfizer and through using monies separately provided by the Long Island-based Lustgarten Foundation's support for a comprehensive pancreatic cancer target screen and by Hollywood's Stand Up To Cancer support for breast cancer drug target identification. Although our enthusiasm for big RNAi screens remains far from universally shared, lack of money should not now keep us from soon seeing whether whole-genome methodologies live up to their much-touted expectations \\[55\\]. The Cold Spring Harbor Laboratory happily has the means to move forward almost as if it were in a true war.\n\nFurther financial backing, allowing many more cancer-focused academic institutions to also go big using RNAi-based target discovery as well as letting them go on to the early stages of subsequent drug discovery, is not beyond the might of the world's major government research funding bodies nor that of our world's many, many super billionaires. The main factor holding us back from overcoming most of metastatic cancer over the next decade may soon no longer be lack of knowledge but our world's increasing failure to intelligently direct its 'monetary might' towards more human-society-benefiting directions.\n\n![](rsob-3-120144-g1.gif)\n\n**Jim Watson's** (JDW's) interest in cancer first publicly expressed itself through his teaching on tumour viruses after he joined the Harvard University Biology Department in the fall of 1956. Later, for the new Introductory Biology II, the last of his 10 lectures focused on how cancer might be induced by DNA tumour viruses, the smallest of which probably had only DNA sufficient to code for 3\u20135 proteins. In his 1965 textbook, *The Molecular Biology of the Gene*, the last chapter ('A geneticist's view of cancer') raised the question of how a virus might have the capacity to turn on the cell cycle. Upon becoming director of the Cold Spring Harbor Laboratory in 1968, he changed its major research emphasis from microbial genetics to cancer (through recruiting Joe Sambrook from Renato Dulbecco's lab at the Salk Institute). Major among the Cold Spring Harbor Laboratory's early eukaryotic accomplishments was the 1977 co-discovery of RNA splicing by Richard Roberts and Phil Sharp (MIT). JDW then necessarily devoted much of his time to scientific politics, first toward gaining National Institutes of Health (NIH) acceptance of the safety of recombinant DNA procedures (1973\u20131978), and second arguing for and then leading NIH's role in the Human Genome Project (1986\u20131992).
In 2008, JDW's main interest moved to the curing of cancer, focusing on the biochemistry of cancer cells as opposed to their genetic origins.\n\n# References\n\nabstract: The thermal properties of nanofluids are an especially interesting research topic because of the variety of potential applications, which range from bio-utilities to next-generation heat-transfer fluids. In this study, photopyroelectric calorimetry for measuring the thermal diffusivity of urchin-like colloidal gold nanofluids as a function of particle size, concentration and shape in water, ethanol and ethylene glycol is reported. Urchin-like gold nanoparticles were synthesised in the presence of hydroquinone through seed-mediated growth with homogeneous shape and size ranging from 55 to 115 nm. The optical response, size and morphology of these nanoparticles were characterised using UV-visible spectroscopy and transmission electron microscopy. The thermal diffusivity of these nanofluids decreased as the size of the nanoparticles increased, and the enhancement depended on the thermal diffusivity of the solvent. The opposite effect (increase in thermal diffusivity) was observed when the nanoparticle concentration was increased. These effects were more evident for urchin-like gold nanofluids than for the corresponding spherical gold nanofluids.\nauthor: Gerardo A L\u00f3pez-Mu\u00f1oz; Jos\u00e9 Abraham Balderas-L\u00f3pez; Jaime Ortega-Lopez; Jos\u00e9 A Pescador-Rojas; Jaime Santoyo Salazar\ndate: 2012\ninstitute: 1Qu\u00edmica Arom\u00e1tica SA, R\u00edo Grande S\/N, Col. Santa Catarina, Acolman, Mexico State, CP 55875, Mexico; 2UPIBI-IPN, Av. Acueducto S\/N, Col. Barrio la Laguna Ticom\u00e1n, Mexico City, CP, 07340, Mexico; 3Biotechnology and Bioengineering Department, CINVESTAV, Av. IPN 2508, Col. San Pedro Zacatenco, Mexico City, CP, 07360, Mexico; 4ICGDE-BUAP, 4 Sur 104 Edificio Carolino, Col. Centro Hist\u00f3rico, Puebla, CP, 72000, Mexico; 5Physics Department, CINVESTAV, Av. IPN 2508, Col. San Pedro Zacatenco, Mexico City, CP, 07360, Mexico\nreferences:\ntitle: Thermal diffusivity measurement for urchin-like gold nanofluids with different solvents, sizes and concentrations\/shapes\n\n# Background\n\nCurrently, thermal properties of nanofluids, i.e. mixtures of nanomaterials suspended in an organic or inorganic base fluid, are intriguing because of their various uses, which range from bio-applications to the next generation of heat-transfer fluids. For example, gold nanofluids are a promising material for bio-applications because of their biocompatibility and thermo-optical properties. In particular, urchin-like gold nanoparticles have a higher surface area that endows them with specific catalytic and thermo-optical qualities \\[1\\].
The surface plasmon resonance of urchin-like gold nanofluids can be tuned to the near-infrared region by increasing the particle size, which makes the particles potentially effective in photothermal therapy and as contrast agents for diagnosis \\[2\\]. In addition, local electromagnetic field enhancement at the tips of branched particles and thermal conductivity enhancement make these materials a candidate for application in electromagnetic hyperthermia therapy \\[3,4\\]. Thermal studies of nanofluids have mainly focused on thermal conductivity measurements, but recently, other techniques have been developed for thermal diffusivity measurements in nanofluids, such as the hot-wire technique and thermal lens spectrometry \\[5-7\\].\n\nIn the present study, the aqueous synthesis of urchin-like gold nanoparticles in the presence of hydroquinone through a seed-mediated growth process is presented. The thermal diffusivity of urchin-like gold nanofluids as a function of particle size (55 to 115 nm) and concentration in different solvents was investigated by photopyroelectric calorimetry. A comparative study of thermal diffusivity in urchin-like and spherical gold nanofluids with approximately equal particle sizes as a function of concentration is also reported. These results will open new horizons for thermal studies of nanofluids for bio-applications and heat-transfer applications.\n\n# Methods\n\n## Hydroquinone synthesis of urchin-like gold nanoparticles\n\nUrchin-like gold nanoparticles are commonly synthesised through a self-seeding growth process, where larger particles grow through the deposition of smaller seeds that form from the epitaxial deposition of atoms. The reduction of gold chloride on the nanoparticle seeds (gold nanoparticles, \\<20 nm) paired with hydroquinone as a selective reducing agent improves the homogeneous size and shape distribution of the urchin-like particles through physicochemical effects \\[8,9\\].\n\nGold nanoparticle seeds were synthesised by sodium citrate reduction. For this process, a solution of gold chloride is brought to a boil, whereupon a solution of sodium citrate is immediately added. The solution is then removed from the heat source once nanoparticle maturation is completed, as indicated by the colour transition.\n\nUrchin-like gold nanoparticles were synthesised by the hydroquinone method using consistent concentrations of gold chloride, sodium citrate and hydroquinone; however, the number of seeds was gradually decreased, which resulted in the formation of larger urchin-like gold nanoparticles. Nanofluids with varying particle size were centrifuged at 6,000 rpm for 30 min and re-dispersed in high-performance liquid chromatography (HPLC) water, ethanol and ethylene glycol (EG) at a final concentration of 0.1 mg\/ml.\n\nTo provide nanofluids with different shapes and concentrations, stock solutions of urchin-like gold nanoparticles and spherical gold nanofluids synthesised by the hydroquinone method \\[9,10\\] were centrifuged at 6,000 rpm for 30 min. The concentrated solutions were re-dispersed in HPLC water, ethanol and ethylene glycol to obtain different nanoparticle concentrations.\n\nAll chemicals used were of analytical grade as obtained from Sigma-Aldrich Corporation (St.
Louis, MO, USA) and were used as received; HPLC water was used for the synthesis of all the urchin-like and spherical gold nanoparticles.\n\n## Basic photopyroelectric theoretical scheme\n\nAs shown in the literature \\[11\\], the photopyroelectric signal in the transmission configuration, assuming that the sample thickness *L* is the only variable (with fixed modulation frequency *f*) in the thermally thick regime of the liquid sample, can be expressed as follows:\n\n$${\delta P} = E\exp\left( {- \sigma_{s}L} \right),$$\n\nwhere *E* is a complex constant and *\u03c3*~s~\u2009=\u2009(1\u2009+\u2009*i*)(*\u03c0f*\/*\u03b1*~s~)^1\/2^, where *\u03b1*~s~ is the thermal diffusivity of the sample. The amplitude \u2502\u03b4*P*\u2502 and phase \u03a6 of this signal can be written as follows:\n\n$$|{\delta P}| = |E|\exp\left( {- \sqrt{\frac{\pi f}{\alpha_{s}}}L} \right),$$\n\n$$\Phi = \Phi_{0} - \sqrt{\frac{\mathit{\pi f}}{\alpha_{s}}}L.$$\n\nThe signal phase \u03a6 is thus a linear function of the sample thickness, and the magnitude *B* of its slope is given by the following:\n\n$$B = \left( \frac{\mathit{\pi f}}{\alpha_{s}} \right)^{\frac{1}{2}},$$\n\nfrom which the thermal diffusivity of the sample can be determined.\n\n## Experimental setup\n\nA transverse section of the photopyroelectric experimental setup for thermal diffusivity measurements is shown in Figure 1. The photopyroelectric sensor consisted of a PVDF film (25-\u03bcm thick) with metal electrodes (Ni-Al) on both sides, mounted on a stainless steel body. A silicon foil was attached to the top surface of the photopyroelectric sensor to prevent any possible damage that could result from exposure to the liquid environment. The resultant pyroelectric signals were processed by a lock-in amplifier (model SR830; Stanford Research, Menlo Park, CA, USA) for amplification and de-modulation; the transistor-transistor logic output of the lock-in was used for the modulation control of a 660-nm laser diode system (model IFLEX-2000; Qioptiq Photonics Ltd., Hamble, Hampshire, UK) at a fixed modulation frequency of 1 Hz.\n\nThe pyroelectric signal was recorded as a function of the sample thickness by measuring 20 experimental points from a relative sample thickness *l*~*0*~, at 10-\u03bcm intervals using a micro-linear stage (model T-LSM025A; Zaber Technologies, Inc., Vancouver, Canada). Linear fits were performed for the pyroelectric phase to obtain the *B* parameter, as defined in the 'Basic photopyroelectric theoretical scheme' section, from which the thermal diffusivity of the sample was obtained by means of the relationship *\u03b1*~s~\u2009=\u2009*\u03c0f*\/*B*^2^ (with *f*\u2009=\u20091 Hz). Measurements were performed at room temperature (22 \u00b1 2\u00b0C).
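As a concrete illustration of this fitting procedure, the following minimal Python sketch simulates a 20-point phase-versus-thickness scan for water and recovers the thermal diffusivity from the fitted slope. It is not part of the original study: the noise level and synthetic data are illustrative assumptions.

```python
import numpy as np

f = 1.0                  # modulation frequency (Hz), as in the setup above
alpha_true = 14.29e-4    # assumed thermal diffusivity of water (cm^2/s)
B_true = np.sqrt(np.pi * f / alpha_true)     # expected phase slope (rad/cm)

# 20 phase readings at 10-um (1e-3 cm) steps, with illustrative phase noise.
L = 1e-3 * np.arange(20)                     # relative sample thickness (cm)
phase = -B_true * L + 0.01 * np.random.randn(L.size)  # Phi = Phi_0 - B*L

# Linear fit of phase versus thickness; the slope magnitude gives B,
# and alpha_s = pi*f / B^2 then recovers the thermal diffusivity.
B_fit = -np.polyfit(L, phase, 1)[0]
print(f"alpha_s = {np.pi * f / B_fit**2:.2e} cm^2/s")   # ~1.43e-03
```

In practice, as described above, the fit is performed on measured lock-in phases rather than simulated ones; the recovered value should reproduce the input diffusivity to within the quoted uncertainty.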
# Results and discussion\n\n## Nanoparticle characterization\n\nThe particle size, shape\/morphology and nanoparticle distribution in the base medium were determined by transmission electron microscopy (TEM) using a JEOL JEM2010 (JEOL Ltd., Akishima, Tokyo, Japan) operating at 200 kV with a beam current of 105 \u03bcA. The TEM specimens were prepared by adding 50 \u03bcl of the stock nanoparticle solution onto 200-mesh carbon-coated copper grids. In Figure 2, the TEM images show the larger urchin-like gold nanoparticles that were synthesised by decreasing the number of nanoparticle seeds, as was expected. The urchin-like gold nanoparticles had a median size dispersion of nearly 6% for all of the samples, including the large-diameter nanoparticles. This technique is advantageous over single-step methods that use silver ions to increase the facet selectivity during synthesis of the urchin-like nanoparticles; such methods are also less capable of continuously controlling the diameter of branched and spherical particles, which is an important means of tailoring the plasmon resonance absorption \\[8\\].\n\nThe absorption spectra of the synthesised nanoparticles were measured after the HPLC water background was established using a UV-visible (vis) spectrophotometer (model 8453; Agilent Technologies, Inc., Santa Clara, CA, USA). Figure 3A shows the UV-vis absorption spectra of the different-sized urchin-like gold nanofluids. As expected, increases in the diameter of the gold nanoparticles resulted in larger absorption-maximum wavelengths (*\u03bb*~max~) (Figure 3B), in accord with Mie theory \\[12,13\\]; these results provide further evidence that the hydroquinone growth method allows superior tuning of the surface plasmon resonance of urchin-like gold nanoparticles.\n\n## Thermal diffusivity measurement\n\nThe plots in Figure 4 show the signal phase as a function of sample thickness for the solvents used. The recorded thermal diffusivity values were 14.29 \u00d7 10^\u22124^ \u00b1 0.03 \u00d7 10^\u22124^ cm^2^\u00b7s^\u22121^, 9.25 \u00d7 10^\u22124^ \u00b1 0.03 \u00d7 10^\u22124^ cm^2^\u00b7s^\u22121^ and 8.32 \u00d7 10^\u22124^ \u00b1 0.03 \u00d7 10^\u22124^ cm^2^\u00b7s^\u22121^ for HPLC water (Sigma-Aldrich Corporation), EG and ethanol (Sigma-Aldrich Corporation), respectively. These measurements, used as reference samples for characterising the photopyroelectric system, agree within 1% with values reported in the literature. The resulting thermal diffusivity values for colloidal urchin-like gold nanoparticles with different sizes in water, ethanol and ethylene glycol are summarised in Table 1. The thermal diffusivity values of colloidal urchin-like and spherical gold nanofluids of two different particle sizes at different concentrations in water, ethanol and ethylene glycol are summarised in Tables 2 and 3. All of the values reported are averaged from ten measurements for each sample, and the standard deviation was calculated as an estimation of uncertainty.\n\nThermal diffusivity of urchin-like gold nanofluids for different nanoparticle diameters and solvents\n\n| **Particle size (nm)** | **Water**^**a**^ | **EG**^**a**^ | **Ethanol**^**a**^ |\n|:----------------------:|:----------------:|:-------------:|:------------------:|\n| 55 | 14.76 | 9.83 | 9.05 |\n| 67 | 14.71 | 9.78 | 8.93 |\n| 70 | 14.68 | 9.73 | 8.84 |\n| 79 | 14.48 | 9.49 | 8.61 |\n| 115 | 14.34 | 9.33 | 8.44 |\n\nAt a constant nanoparticle concentration of 0.1 mg\/ml.
^a^Values are presented as *\u03b1* (10^\u22124^ cm^2^ s^\u22121^) \u00b1 0.03.\n\nThermal diffusivity of urchin-like gold nanofluids at different nanoparticle concentrations and in different solvents\n\n| **Concentration (mg\/ml)** | **Water**^**a**^ | **EG**^**a**^ | **Ethanol**^**a**^ | **Water**^**b**^ | **EG**^**b**^ | **Ethanol**^**b**^ |\n|:--:|:--:|:--:|:--:|:--:|:--:|:--:|\n| 0.2 | 14.79 | 9.87 | 9.09 | 14.36 | 9.36 | 8.47 |\n| 0.4 | 14.83 | 9.92 | 9.15 | 14.37 | 9.39 | 8.51 |\n| 0.6 | 14.91 | 10.01 | 9.28 | 14.42 | 9.45 | 8.56 |\n| 0.8 | 14.98 | 10.13 | 9.39 | 14.46 | 9.49 | 8.62 |\n| 1.0 | 15.11 | 10.25 | 9.56 | 14.53 | 9.56 | 8.71 |\n\n^a^Values are presented as *\u03b1*~55nm~ (10^\u22124^ cm^2^ s^\u22121^) \u00b1 0.03. ^b^Values are presented as *\u03b1*~115nm~ (10^\u22124^ cm^2^ s^\u22121^) \u00b1 0.03.\n\nThermal diffusivity of spherical gold nanofluids at different nanoparticle concentrations and in different solvents\n\n| **Concentration (mg\/ml)** | **Water**^**a**^ | **EG**^**a**^ | **Ethanol**^**a**^ | **Water**^**b**^ | **EG**^**b**^ | **Ethanol**^**b**^ |\n|:--:|:--:|:--:|:--:|:--:|:--:|:--:|\n| 0.2 | 14.74 | 9.83 | 9.05 | 14.33 | 9.32 | 8.44 |\n| 0.4 | 14.79 | 9.87 | 9.10 | 14.34 | 9.34 | 8.47 |\n| 0.6 | 14.84 | 9.91 | 9.22 | 14.37 | 9.39 | 8.51 |\n| 0.8 | 14.91 | 10.02 | 9.32 | 14.41 | 9.44 | 8.57 |\n| 1.0 | 14.99 | 10.13 | 9.43 | 14.46 | 9.50 | 8.64 |\n\n^a^Values are presented as *\u03b1*~55nm~ (10^\u22124^ cm^2^ s^\u22121^) \u00b1 0.03. ^b^Values are presented as *\u03b1*~120nm~ (10^\u22124^ cm^2^ s^\u22121^) \u00b1 0.03.\n\n### Nanoparticle size\n\nFigure 5 shows the thermal diffusivity values of urchin-like colloidal gold nanofluids as a function of nanoparticle size in different solvents; the plot also reveals that, at a constant nanoparticle concentration (0.1 mg\/ml), the thermal diffusivity ratio (*\u03b1*~sample~\/*\u03b1*~base\\ fluid~) of the nanofluids increased as the nanoparticle diameter decreased and as the thermal diffusivity of the solvent decreased. This behaviour can be explained as follows: as the particle size decreases, the effective surface area-to-volume ratio of the particle increases; therefore, the thermal diffusivity and thermal conductivity of the nanofluids are increased. However, for the large-diameter nanoparticles, the surface area-to-volume ratio decreases, and there is consequently no enhancement of thermal diffusivity and thermal conductivity \\[10,14\\]. Moreover, ethanol-based nanofluids were observed to have a higher thermal diffusivity ratio than EG- and water-based nanofluids because of the solvent's lower thermal diffusivity.
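These solvent and size trends can be checked directly from the numbers already given; the short sketch below recomputes the thermal diffusivity ratio from the Table 1 values and the pure-solvent measurements above (all in units of 10^-4 cm^2/s), reproducing the ordering just described.

```python
# Pure-solvent diffusivities and Table 1 values (10^-4 cm^2/s) from the text.
base  = {"water": 14.29, "EG": 9.25, "ethanol": 8.32}
nf55  = {"water": 14.76, "EG": 9.83, "ethanol": 9.05}   # 55-nm particles
nf115 = {"water": 14.34, "EG": 9.33, "ethanol": 8.44}   # 115-nm particles

for solvent in base:
    r55, r115 = nf55[solvent] / base[solvent], nf115[solvent] / base[solvent]
    print(f"{solvent:7s} ratio(55 nm) = {r55:.3f}   ratio(115 nm) = {r115:.3f}")

# water:   1.033 / 1.003;  EG: 1.063 / 1.009;  ethanol: 1.088 / 1.014 --
# smaller particles and lower-diffusivity solvents give the larger enhancement.
```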
### Concentration\/shape\n\nFigure 6 shows the dependency of the nanofluid thermal diffusivity on nanoparticle concentration for 55- and 115-nm urchin-like gold nanoparticles; as the nanoparticle concentration increases, the thermal diffusivity ratio (*\u03b1*~sample~\/*\u03b1*~base\\ fluid~) of the nanofluids also increases. Similar results have been reported in the literature for metal nanofluids measured by the hot-wire method \\[14,15\\]. This dependency can be attributed to the increasing volume fraction of the nanoparticles, which effectively decreases the specific heat of the nanofluids; consequently, the thermal diffusivity of the colloidal suspension increases. The enhanced surface area-to-volume ratio, paired with the decrease in specific heat, resulted in a larger increase in the thermal diffusivity ratio for the concentrated 55-nm nanofluids than for the concentrated 115-nm nanofluids.\n\nTEM images of spherical gold nanoparticles with sizes similar to those of the urchin-like gold nanoparticles are shown in Figure 7. The thermal diffusivity ratio (*\u03b1*~sample~\/*\u03b1*~base\\ fluid~) for 55- and 120-nm spherical gold nanofluids increased as a function of concentration, as shown in Figure 8. However, the enhancement of the thermal diffusivity ratio was smaller than that of the urchin-like gold nanofluids. This difference can be ascribed to the higher surface area of the urchin-like gold nanoparticles compared with the spherical gold nanoparticles, which gives an increased surface area-to-volume ratio and a decreased specific heat. These parameters affect the thermal diffusivity ratio of nanofluids containing urchin-like nanoparticles more strongly than that of nanofluids containing spherical nanoparticles.\n\n# Conclusions\n\nThe present study investigated the effects of concentration, size and solvent on the thermal diffusivity of urchin-like gold nanofluids prepared by the hydroquinone method through seed-mediated growth. The low size dispersion of urchin-like gold nanoparticles synthesised by the hydroquinone method is advantageous when compared with single-step methods that are less capable of continuously controlling the diameters of branched particles; such control is critical as it is directly related to the tunable surface plasmon resonance of the urchin-like nanoparticles. The thermal diffusivity ratio changed inversely with nanoparticle size, which varied from 55 to 115 nm for the urchin-like gold nanofluids. The thermal diffusivity ratio was found to increase with nanoparticle concentration and was investigated within the range of 0.2 to 1 mg\/ml for the gold nanofluids. The thermal diffusivity ratio also varied inversely with the thermal diffusivity of the nanofluid solvent. Moreover, the particle shape was found to have an effect on the thermal diffusivity of nanofluids. Experimental data for the thermal diffusivity ratio as a function of concentration and size for gold nanofluids show enhancement behaviour similar to that measured with high accuracy by other techniques reported in the literature. Because of the small amount of sample required (600 \u03bcl) and the possibility of obtaining additional information, such as thermal and optical parameters of the sample, with the developed sensor, photopyroelectric calorimetry is a promising alternative to classical techniques for measuring the thermal diffusivity of nanofluids.\n\n# Abbreviations\n\nEG: Ethylene glycol; HPLC: High-performance liquid chromatography; TEM: Transmission electron microscopy; Vis: Visible.\n\n# Competing interests\n\nThe authors declare that they have no competing interests.\n\n# Authors' contributions\n\nGALM prepared all the samples, set up the photopyroelectric method and measured and analysed the photopyroelectric data. JAPR and JSS measured and analysed the UV-vis and TEM data. JOL and JABL designed the experiments and wrote the manuscript. All authors read and approved the final manuscript.\n\n## Acknowledgements\n\nThe authors acknowledge Zaber Technologies, Inc.
for providing a micro-linear stage and Qu\u00edmica Arom\u00e1tica SA, COFAA-IPN and CONACyT for the partial support of this work.\n\nabstract: We have developed software for fully automated tracking of vibrissae (whiskers) in high-speed videos (\\>500 Hz) of head-fixed, behaving rodents trimmed to a single row of whiskers. Performance was assessed against a manually curated dataset consisting of 1.32 million video frames comprising 4.5 million whisker traces. The current implementation detects whiskers with a recall of 99.998% and identifies individual whiskers with 99.997% accuracy. The average processing rate for these images was 8 Mpx\/s\/cpu (2.6 GHz Intel Core2, 2 GB RAM). This translates to 35 processed frames per second for a 640 px\u00d7352 px video of 4 whiskers. The speed and accuracy achieved enables quantitative behavioral studies where the analysis of millions of video frames is required. We used the software to analyze the evolving whisking strategies as mice learned a whisker-based detection task over the course of 6 days (8148 trials, 25 million frames) and measure the forces at the sensory follicle that most underlie haptic perception.\nauthor: Nathan G. Clack; Daniel H. O'Connor; Daniel Huber; Leopoldo Petreanu; Andrew Hires; Simon Peron; Karel Svoboda; Eugene W. Myers\\* E-mail: [^1]\ndate: 2012-07\ninstitute: Janelia Farm Research Campus, Howard Hughes Medical Institute, Ashburn, Virginia, United States of America; University of California, San Diego, United States of America\nreferences:\ntitle: Automated Tracking of Whiskers in Videos of Head Fixed Rodents\n\n> \"This is a *PLoS Computational Biology* Software article.\"\n\n# Introduction\n\nRats and mice move their large whiskers (vibrissae), typically in a rhythmic pattern, to locate object features \\[1\\], \\[2\\], or to identify textures and objects \\[3\\], \\[4\\]. Whisker movements in turn are influenced by touch \\[1\\], \\[5\\]. The whisker system is a powerful model for studying the principles underlying sensorimotor integration and active somatosensation \\[6\\]. Critical to any mechanistic study at the level of neurons is the quantitative analysis of behavior. Whisker movements have been measured in a variety of ways. Electromyograms, recorded from the facial muscles, correlate with whisker movements \\[7\\]. This invasive method does not report whisker position and shape per se and is complementary to whisker tracking. Imaging individual labeled whiskers, for example by gluing a high-contrast particle on the whisker \\[8\\], provides the position of the marked whisker, but does not reveal whisker shape. In addition, the particle will change the whisker mass and stiffness and thereby perturb whisker dynamics.
Monitoring a single point along the whisker using linescan imaging \\[9\\], or a linear light sheet \\[10\\], can provide high-speed information about whisker position, but only at a single line of intersection.\n\nHigh-speed (\\>500 Hz) videography is a non-invasive method for measuring whisker movements and forces acting on whiskers and yields nearly complete information about whiskers during behavior \\[1\\], \\[11\\]\u2013\\[15\\]. The position of the whisker base with respect to the face reveals the motor programs underlying behavior. The deformation of whisker shape by touch can be used to extract the forces felt by the mouse sensory follicles \\[16\\], \\[17\\]. However, high-speed videography brings its own technical challenges. The large number of images required makes manual analysis impossible for more than a few seconds of video. Comprehensive studies require fully automated analysis. In addition, extraction of motions and touch forces demands accurate measurement of whisker shape, often with sub-pixel precision, and identification of rapidly moving whiskers across time. Finally, the large volume of video data potentially places severe demands even on advanced computational infrastructures, making efficient algorithms necessary.\n\nTracking whiskers is challenging. Whiskers can move at high speeds \\[12\\] and in complex patterns \\[1\\], \\[18\\]. Adjacent whiskers can have distinct trajectories. Moreover, whiskers are thin hairs (e.g. mouse whiskers taper to a thickness of a few micrometers) and thus provide only limited contrast in imaging experiments.\n\nTo address these challenges, we have developed software for tracking a single row of whiskers in a fully automated fashion. Over a manually curated database of 400 video sequences of head-fixed mice (1.32\u00d710^6^ cumulative images, 4.5\u00d710^6^ traced whiskers, 8 mice), whiskers were correctly detected and identified with an accuracy of 99.997% (1 error per 3\u00d710^4^ traced whiskers). In other whisker tracking systems, models constraining possible motions and whisker shapes have been used to aid tracking. In contrast, our approach uses statistics gleaned from the video itself to estimate the most likely identity of each traced object that maintains the expected order of whiskers along the face. As a result, the shapes of highly strained whiskers (curvature\\>0.25\/mm) can be traced with sub-pixel accuracy, and tracking is faithful despite occasional fast motion (deflections \\>10,000 degrees\/s).\n\nOur method consists of two steps performed in succession: tracing and linking. Tracing produces a set of piecewise linear curves that represent whisker-like objects for each individual image in a video. Each image is analyzed independently. The linking algorithm is then applied to determine the identity of each traced curve. The trajectory of a whisker is described by collecting curves with the same identity throughout the video. Importantly, once a faithful tracing is completed, the original voluminous image data set is no longer required for downstream processing.\n\nOnce tracing is complete, a small set of features is tabulated for each curve. Based on these features, a heuristic is used to make an initial guess as to the identity of curves in a set of images where it works well. These initial identifications are then used to get a statistical description of the shapes and motions of whiskers.
These statistics are then used to compute a final optimal labeling for each traced curve subject to the constraint that whiskers are ordered along the face.\n\nWe validated this approach using manually curated video acquired from a head-fixed, whisker-dependent object localization task \\[1\\]. In these experiments the field of view contains a single row of partially and wholly imaged whiskers as well as the stimulus object (a thin pole, normal to the imaging plane) (Figure 1A). The camera is positioned below the whiskers, with illumination from above (Figure 1B), producing silhouetted views of the whiskers (Figure 1C\u2013D). Automated tracing is robust to occlusions introduced by the stimulus and to whisker crossings. Tracing yields curves for both whiskers and facial hairs (Figure 1C\u2013D). Linking is used to uniquely identify each whisker so features such as angle of deflection (Figure 1F) and curvature (Figure 1G) can be extracted as time-series.\n\n# Design and Implementation\n\n## Tracing\n\nThe process of tracing the backbone of a whisker consists of three phases: initiation, extension, and termination. Tracing starts by using the angle and position estimated at the highest scoring candidate initiation site. As a curve is extended, initiation sites that fall under the traced curve are removed from consideration. When none remain, potentially duplicate curves are resolved.\n\nBefore any analysis, images were preprocessed to remove imaging artifacts specific to particular cameras (Text S1, Figure S1). Tracing begins by searching the image for candidate initiation sites. These are found by performing a pixel-level segmentation isolating locally line-like features in the image. With each segmented pixel, a score and an angle are computed. This is an optimization step; it is possible to initiate tracing at any pixel in an image. However, tracing is relatively expensive, and, by filtering out unproductive initiation sites, computation can be focused on the small number of relevant pixels. On the other hand, the filtration threshold should be set conservatively so that every whisker is sure to have at least one initiation site. Parameters were set by examining results over a small set of 10 images but worked well over the entire data set of over 1 million images. Typically, 50\u2013100 candidate sites were found per whisker with 10\u201320 false positives per image.\n\nA variety of methods have been developed to detect interest points in natural images \\[19\\], and the Hough transform is conventionally used to find linear features. However, given the high contrast and controlled imaging conditions used to acquire data (Text S1), we developed a linear-time algorithm, summarized in Figure 2, that requires only local image information and yields an estimate of salience and line orientation in a single pass.\n\nFirst, a 7\u00d77 square box is centered on a pixel of interest (Figure 2A). The box is divided into two partitions, and the positions of the intensity minima are found within each subset of the two partitions. These are designed to ensure that when a whisker passes through the box, one partition will preferentially collect minima along the whisker backbone and, as a result, the positions of those minima will be linearly correlated. For each partition, principal components are computed over the covariance of the collected positions. The principal components describe the major and minor axes of an ellipse.
The partition with the highest eccentricity is chosen, and the eccentricity is used to score how line-like the image is at the queried point (Figure 2B). The orientation of the major axis is used to estimate the direction of the line (Figure 2C, Video S1).\n\nOne advantage of this approach is that the entire image need not be queried for line-like objects. For example, in the data analyzed here, whiskers span hundreds of pixels. A grid of lines spaced appropriately (50 px; this is an adjustable parameter) will cross each whisker at least once. Restricting the search for line-like features to this grid greatly reduces the amount of time required for this step. Additionally, since the algorithm relies on local minima, demands on background subtraction and other preprocessing are minimal. The structure of the partitions ensures that lines in any orientation are detected. Greater than 80% of the pixels that could be used to start tracing a whisker (typically 1000 per whisker over the full image) are detected using an eccentricity threshold of 0.95; only one is required to fully trace a whisker. This detector systematically avoids potential starting locations near the face, near occlusions, or where whiskers cross or nearly touch.
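The following Python sketch illustrates the eccentricity-based scoring idea. The row/column split used for the two partitions is one plausible reading of the scheme in Figure 2, not the software's actual implementation, and all details beyond the 7x7 box are illustrative.

```python
import numpy as np

def line_score(img, r, c, half=3):
    """Score how line-like a dark-on-light image is at (r, c): collect the
    intensity-minimum position once per column and once per row of a 7x7 box,
    and measure how collinear each set of minima is via the eccentricity of
    its covariance ellipse.  Returns (eccentricity, angle in radians)."""
    box = img[r - half:r + half + 1, c - half:c + half + 1].astype(float)
    best = (0.0, 0.0)
    for axis in (0, 1):                     # minima per column, then per row
        idx = np.argmin(box, axis=axis)
        if axis == 0:                       # minimum row within each column
            pts = np.stack([idx, np.arange(box.shape[1])], axis=1)
        else:                               # minimum column within each row
            pts = np.stack([np.arange(box.shape[0]), idx], axis=1)
        cov = np.cov(pts.T)                 # 2x2 covariance of (row, col)
        evals, evecs = np.linalg.eigh(cov)  # eigenvalues in ascending order
        if evals[1] <= 0:
            continue                        # degenerate (constant) patch
        ecc = np.sqrt(1.0 - max(evals[0], 0.0) / evals[1])
        if ecc > best[0]:
            major = evecs[:, 1]             # major axis of the ellipse
            best = (ecc, np.arctan2(major[0], major[1]))
    return best

# Toy check: a dark diagonal stroke on a bright background scores as highly
# line-like (eccentricity near 1) with a ~45 degree orientation.
img = np.full((32, 32), 255.0)
for i in range(32):
    img[i, i] = 0.0
print(line_score(img, 16, 16))
```

With a threshold of 0.95 on the returned eccentricity, as quoted above, such a point would be accepted as a candidate initiation site.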
Occasionally, overlapping curves are generated during the tracing step. These need to be resolved before tracking so that each whisker has exactly one associated curve per frame. Pairs of potentially redundant curves are identified by searching for curves that cross the same pixel. If all the points within an interval of one curve lie within a small distance (typically 2 px) of the other curve, and that interval makes up over 50% of the total length, then the curve is considered redundant. The shorter of the two curves is discarded; the shapes of shorter curves were not reliable enough to average. This procedure is repeated until no redundant curves are left.

## Linking

Identifying individual whiskers is challenging. First, the tracing algorithm generates curves (false positives) for facial hairs, trimmed whiskers, or other artifacts in the scene (Figure 1C). The linking algorithm must correctly identify true whiskers against this noise. Second, the motion of whiskers can be difficult to model reliably [8], [12]. A typical trial may consist of slow, predictable motions interrupted by rapid transient motions that are occasionally too fast even for high-speed (1000 fps) cameras to sample adequately [1]. Although difficult to track, these unpredictable motions may correspond to some of the most interesting data.

Our approach is based on the observation that whiskers are relatively long and maintain a consistent ordering along the face. We use a statistical model to describe the shapes and motions that are characteristic of whiskers. The model is then used to assign each traced whisker the most probable identity, subject to the constraint that whiskers are ordered along the face.

For the majority of images, a simple length threshold can be used to remove false positives and leave exactly one traced curve per whisker. The linking algorithm is either supplied the number, *N*, of whiskers on the subject mouse, or estimates it as follows. For a given length threshold τ, let *F*(τ,*n*) be the set of frames that have exactly *n* curves longer than τ. If *N* is not supplied, then we find the *N* and τ for which the number of frames in *F*(τ,*N*) is maximal over all possible choices. In most of the data sets we have tested, the *N* estimated this way matches the true whisker count, i.e. this heuristic is spot on. But when one or more whiskers frequently leave the field of view it may fail. If *N* is supplied, then we let τ be the value for which the number of frames in *F*(τ,*N*) is maximal, because the long curves in each frame of *F*(τ,*N*) are always true whiskers. For the data sets tested here, *F*(τ,*N*) represented 99.8% of a video on average (min: 90.1%, max: 100%).
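A compact sketch of this frame-counting heuristic, assuming curve lengths have already been extracted per frame (the function and argument names are illustrative):

```python
from collections import Counter

def choose_threshold(lengths_per_frame, thresholds, n_whiskers=None):
    """Pick the length threshold (and whisker count N when not given)
    that maximizes the number of frames with exactly N curves longer
    than the threshold."""
    best = (-1, None, None)  # (frames in F(tau, N), tau, N)
    for tau in thresholds:
        counts = Counter(sum(length > tau for length in frame)
                         for frame in lengths_per_frame)
        if n_whiskers is None:
            n, size = counts.most_common(1)[0]
        else:
            n, size = n_whiskers, counts.get(n_whiskers, 0)
        if size > best[0]:
            best = (size, tau, n)
    return best
```

Frames in the winning set then serve as the per-video training data used below.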
The curves in the frames of *F*(τ,*N*), and their heuristic classification into whisker and non-whisker based on the threshold τ, supply a training data set for the specific video. From these, we can learn various characteristics to estimate the probability of a curve being one or the other. Specifically, we build normalized histograms, *n~f,W~* and *n~f,FP~* (Figure 5C), one for whiskers (*W*) and one for false positives (*FP*), for each of six features, *f*, using the training set. The features used are the angle near the face, average curvature, average tracing score, length, and the endpoints of a curve. The angle and curvature are computed from a parametric third-degree polynomial fit to each curve. These histograms are then used to estimate $P(c \mid k)$, the probability that a curve *c* is of kind *k*, either *W* or *FP*, as the product

$$P(c \mid k) = \prod_{f} n_{f,k}\bigl(f(c)\bigr) \qquad (1)$$

To ensure no feature has zero likelihood, a count of one is added to each bin in a histogram before normalizing.

In (1) above, the features are assumed to be conditionally independent in order to simplify estimating the feature distributions, even though this is not strictly true in practice. Moreover, errors in the heuristic may introduce a systematic sampling bias. For example, at high deflection angles near the edge of the field of view, only a small segment of a whisker is imaged (Figure 1E, purple whisker). The traced curves from such segments might systematically fall below the length threshold and be excluded from the whisker feature distributions. As a result, estimated probabilities will be biased towards longer whiskers at milder deflection angles. Despite these caveats, the use of these feature distributions leads to a highly accurate result.
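A minimal illustration of equation (1), assuming per-curve features are already collected into a NumPy array (the bin count and function names are arbitrary choices):

```python
import numpy as np

def fit_histograms(feature_matrix, bins=32):
    """One normalized histogram per feature column, with add-one
    smoothing so that no feature can contribute a zero likelihood."""
    hists = []
    for column in feature_matrix.T:
        counts, edges = np.histogram(column, bins=bins)
        counts = counts + 1
        hists.append((counts / counts.sum(), edges))
    return hists

def shape_likelihood(features, hists):
    """P(c | k) as the product over per-feature histogram lookups."""
    p = 1.0
    for value, (h, edges) in zip(features, hists):
        idx = int(np.clip(np.searchsorted(edges, value) - 1, 0, len(h) - 1))
        p *= h[idx]
    return p
```

Two sets of histograms would be fit, one from curves classified as whiskers and one from false positives, and their likelihoods compared.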
Appearance alone is not sufficient to uniquely identify individual whiskers in some cases. To address this, we designed a naive Bayes classifier to determine the most probable identity of each traced curve subject to ordering constraints. The traced curves are ordered into a sequence, *C*, according to their time and relative anterior-posterior position. Identifying a curve, *c*, involves assigning a particular label, *l*, from a set of *2N+1* labels. There is a label for each whisker, *W~1~* to *W~N~*, and labels *F~0~* to *F~N~*, where *F~i~* identifies all false positives between whisker *W~i~* and *W~i+1~* (Figure 5A). The kind of a label, *K(l)*, is *W* if *l* is a whisker label, and *FP* if *l* is a false positive label. A curve labeled *W~i~* or *F~i~* must be posterior to curves labeled *W~j~* or *F~j~* for all *i* < *j* (Figure 5B). For a video with *T* frames,

$$P(L \mid C) = \prod_{t=1}^{T} \prod_{i=1}^{N^t} P\bigl(c_i^t \mid l_i^t\bigr)\, P\bigl(l_i^t \mid l_{i-1}^t\bigr) \qquad (2)$$

where $N^t$ is the number of curves traced in frame *t*, and $l_i^t$ is the label of the *i*'th curve found in frame *t*. Values for the initial and transition probabilities, $P(l_1^t)$ and $P(l_i^t \mid l_{i-1}^t)$, are estimated by computing the frequency of observed label pairs that occur in the training set, *F*(τ,*N*). This describes a hidden Markov model for labeling the curves in a single video frame. The optimal labeling can be computed efficiently with well-known dynamic programming techniques [21].
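The per-frame optimization is a standard Viterbi decode over the *2N+1* labels; a generic sketch, assuming the ordering constraint has been encoded as zero-probability transitions:

```python
import numpy as np

def viterbi(emissions, transitions, initial):
    """Most probable label sequence for the ordered curves of one frame.

    emissions:   (n_curves, n_labels) likelihoods P(c_i | l)
    transitions: (n_labels, n_labels) P(l_i | l_{i-1}); anterior-posterior
                 ordering is enforced by zeros in this matrix
    initial:     (n_labels,) distribution over the first curve's label
    """
    eps = 1e-300  # guard against log(0)
    n, m = emissions.shape
    logp = np.log(initial + eps) + np.log(emissions[0] + eps)
    back = np.zeros((n, m), dtype=int)
    for i in range(1, n):
        scores = logp[:, None] + np.log(transitions + eps)
        back[i] = scores.argmax(axis=0)
        logp = scores.max(axis=0) + np.log(emissions[i] + eps)
    labels = [int(logp.argmax())]
    for i in range(n - 1, 0, -1):
        labels.append(int(back[i, labels[-1]]))
    return labels[::-1]
```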
The likelihood, $P(c \mid l)$, is computed under the simplifying assumption that the likelihood of a single curve depends only on the curves in the previous or following frame. Using this and applying Bayes' theorem,

$$P\bigl(c \mid L(c), C^{t-1}, L(C^{t-1})\bigr) = P\bigl(c \mid K(L(c))\bigr)\; P_{\Delta}\bigl(c \mid L(c), C^{t-1}, L(C^{t-1})\bigr) \qquad (4)$$

where $L(c)$ is the label *L* assigns to the curve *c*, $C^t$ is the (possibly empty) set of curves found in frame *t*, and $L(C^t)$ are the labels assigned to the curves in $C^t$. The first component, $P(c \mid K(L(c)))$, is the likelihood that *c* is an object of the kind denoted by its label, which we estimate with formula (1). The second component of (4), $P_{\Delta}$, is interpreted as the likelihood that a curve is part of the same trajectory as the corresponding curves in the previous (or following) frame.

Similar to the approach used in equation (1) for estimating $P(c \mid k)$, we need normalized histograms, $m_{f,W}$ and $m_{f,FP}$, of the changes of whiskers and false positives between successive frames for each feature *f*, computed over a "training" data set in which the correspondence between curves in successive frames is known. While we could use the assignment implied by *F*(τ,*N*), we first estimate a hopefully better assignment by restricting the model in (4) to use shape features alone. That is, $P_{\Delta}$ is treated as a constant, so the label assignment in a frame can be computed independently of other frames. We then use this preliminary labeling over the frames of *F*(τ,*N*) as the training set over which to build $m_{f,W}$ and $m_{f,FP}$.

Given these change histograms, one can estimate the correspondence likelihood according to the formula

$$P_{\Delta}\bigl(c \mid l, C^{t-1}, L(C^{t-1})\bigr) = \prod_{f} m_{f,K(l)}\bigl(f(c) - f(c')\bigr)$$

where $c'$ is the curve carrying the same label in the adjacent frame. Note that when evaluating this likelihood, a unique corresponding curve is not always present in the previous (or following) frame. There may be zero or many false-positive curves in the previous frame with the same label. Similarly, a whisker may be missing because it exited the field of view.

Directly solving for the most probable labeling (2) is complicated by the fact that the likelihood of a labeling in one frame depends on the labeling in neighboring frames. Our approach is to initially score each frame by computing $P(L^t \mid C^t)$, where $L^t$ is obtained using shape features alone. In decreasing order of this score, we 'visit' the next best frame, say *t*, and update the label assignment for each of the two adjacent frames so as to maximize the full version of (4), provided the adjacent frame has not already been visited. The new assignment replaces the current assignment, and the frame's visitation order is updated according to the score of this new assignment (under the full model of (4)). In this way, we let the most confident frames influence their neighbors in a transitive fashion until every frame has been visited. This was critical for achieving high accuracy. Previous whisker tracking algorithms have relied on propagating the identity of a whisker frame-wise from the beginning of the video to the end, and as a result, an error in one frame is propagated throughout the rest of the video [11]–[13].
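This best-first sweep can be expressed with a priority queue. A simplified sketch, assuming `score` and `relabel` wrap the model evaluations of (2) and (4):

```python
import heapq

def propagate_labels(n_frames, score, relabel):
    """Visit frames in decreasing confidence; a visited frame relabels
    its unvisited neighbours, whose queue priorities are refreshed.
    Simplified: stale heap entries are skipped when popped."""
    heap = [(-score(t), t) for t in range(n_frames)]
    heapq.heapify(heap)
    visited = set()
    while heap:
        neg, t = heapq.heappop(heap)
        if t in visited:
            continue
        visited.add(t)
        for u in (t - 1, t + 1):
            if 0 <= u < n_frames and u not in visited:
                relabel(u, t)  # maximize (4) for frame u given frame t
                heapq.heappush(heap, (-score(u), u))
```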
# Results/Discussion

Average processing time was 8 Mpx/s/cpu (35 fps for 640×352 pixel video), measured on a 2.6 GHz Intel Core2 Duo MacBook Pro with 2 GB of RAM, and was dominated by the CPU time required to trace curves; linking time was 0.5 ms per frame. This is faster than published speeds of similar whisker analysis packages (typically 1–5 fps) [11], [13], [15]. However, performance can be difficult to compare, as implementations may improve over time. For example, our software is as fast as the current version of the only other publicly available whisker tracker [13]. More importantly, our software can readily be run on inexpensive cluster nodes to process videos in parallel. This is not the case for whisker trackers that require supervision [13] or that depend on software with expensive licensing requirements (such as Matlab) [11]–[15].

Tracing was accurate to within 0.2 px, as estimated by analyzing 39 hand-traced whiskers. Individual hand-tracings had an accuracy of 0.23±0.2 pixels when compared against consensus curves. Mouse whiskers (row C) were automatically traced in images with resolutions from 5 µm/px to 320 µm/px using default settings, and in noisy, low-contrast images (Text S1). For the best results, it is important to minimize motion blur and to illuminate the scene uniformly, although illumination inhomogeneities can be partially corrected with background subtraction. In contrast with methods that use Kalman filtering [13], traced curves are not biased by whisker dynamics. Additionally, tracing is faithful even when curves are not well approximated by low-order polynomials, in contrast to published tracing methods [13]–[15]. Whiskers could be detected and traced in videos of freely behaving rats (Video S2) and mice (Video S3) with all whiskers intact.

Linking accuracy was measured against a hand-annotated set of videos selected by choosing 100 random trials from behavioral sessions of 4 mice (1.32 million frames) [1], [17]. The curated videos captured a range of behavior including protracted bouts (>1 s) of whisking (Video S4), multiple whiskers simultaneously contacting an object (a thin metal pole) (Video S4), and extremely fast motion (>10,000°/second) (Video S5). Of the 4.5 million traced whiskers, 130 were incorrectly identified or not detected, less than one mistake per behavioral trial on average. Linking is robust to whiskers that occasionally leave the field of view (Video S6), and works well even at relatively low frame rates (100 fps; see Text S1). Lesser image quality will ultimately degrade results.

There are two principal sources of error in the linking. First, covariance between different features is ignored. For example, when the field of view is small, whiskers at high deflection angles might not be fully contained within the image. Any curve tracing one of these whiskers would appear shorter than the full whisker length. Under these conditions there is a strong correlation between angle and whisker length. The current model penalizes the shorter length because it is rarely observed; accounting for the covariance should eliminate this problem. Second, the estimated feature distributions are affected by systematic biases introduced by the heuristic used to initially guess whisker identity. A heuristic relying on a length threshold, like the one used here, may systematically reject these short curves at high deflection angles. This will result in a bias against strongly deflected whiskers.

Fundamentally, our linking method is applicable only to single rows of whiskers. The whiskers in videos imaging multiple rows will still be traced, but, because the whisker images no longer maintain a strict anterior-posterior ordering, the linking will likely fail. Although this software was validated with images of one side of the mouse's whiskers, we have successfully tracked whiskers in images containing both whisker fields (Video S7). Also, the whiskers of freely moving rats and mice can be traced (Videos S2, S3). This suggests that it will be possible to use this software for two-dimensional tracking of single rows of whiskers in freely moving animals by incorporating head tracking [2], [11], [15], though we have not explicitly tested this.

We analyzed data acquired during learning of an object detection task (see Text S1) [17]. We tracked whiskers from four mice during the first 6 days of training (8148 trials, 3122 frames each). This allowed us to observe the changes in whisking behavior that emerge during learning of the task. We looked specifically at correct rejection trials (2485 trials), which were not perturbed by contact between object and whisker, to analyze the distribution of whisking patterns during learning. We characterized whisking by computing normalized histograms of whisker angle (1° resolution) over time (100 ms bins) for a single representative whisker (C1). The angle and position of the other whiskers were strongly correlated throughout (r^2^>0.95) in these trials. Two patterns emerged. After learning, two mice (JF25607 and JF25609) showed changes in whisking amplitude and mean angular position when the stimulus was present (Figure 6A). The other two mice (JF27332 and JF26706) did not show nearly as much stereotypy and appeared to rely on lower-amplitude whisking (Figure 6B). All mice held their whiskers in a non-resting anterior position during most of the trial [1]. For JF25607 and JF25609, the reduction in task error rate was correlated with an increase in mean deflection angle during stimulus presentation (r^2^ = 0.97 and 0.84, respectively) (Figure 6C). This correlation was smaller for the other two mice, which also learned the task but appeared to rely on a different whisking strategy.

The forces at the base of the whisker constitute the information available to an animal for somatosensory perception. These forces are proportional to changes in whisker curvature [16]: the instantaneous moment acting at a point on the whisker is proportional to the curvature change at that point. In contrast to whisker tracing software that represents whiskers as line segments [14], [15] or non-parametric cubic polynomials [13], the curves produced by our tracing algorithm provide a high-resolution representation of whisker shape, allowing measurement of varying curvature along the length of the whisker.

To illustrate this feature we measured curvature in the following manner. For each video frame (Figure 7A), the traced whisker midline (Figure 7B) was fit via least-squares to a parametric 5^th^-degree polynomial of the form

$$x(t) = \sum_{i=0}^{5} a_i t^i, \qquad y(t) = \sum_{i=0}^{5} b_i t^i,$$

where [*x(t)*, *y(t)*] are points that span the curve over the interval $t \in [0,1]$, and *a~i~*, *b~i~* are the fit parameters (Figure 7C). Curvature was measured at a specified distance along the curve from the follicle, chosen for the best signal to noise. The follicle was not imaged, and so the intersection between the curve and an arc circumscribing the face was used as a reference point instead. The curvature was then measured as the mean curvature inside a 1–2.5 mm long region about the point of interest, where it was approximately constant. To ensure the measurement was not biased by shape outside the interval, another parametric polynomial fit (degree 2) was performed, but over the region of interest rather than the entire curve (Figure 7D). The follicle position was set by linearly extrapolating a set distance into the face. The point of whisker-pole contact was determined as the closest point to the center of the pole on the curve, or on a line extrapolated from the nearest end of the curve (Figure 7E).
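A sketch of this fit-and-measure step in Python, assuming the curve is parameterized over $t \in [0,1]$ (the function names are ours):

```python
import numpy as np

def fit_parametric_poly(xy, degree=5):
    """Least-squares fit of x(t), y(t) to a traced midline, t in [0, 1]."""
    t = np.linspace(0.0, 1.0, len(xy))
    ax = np.polyfit(t, xy[:, 0], degree)
    ay = np.polyfit(t, xy[:, 1], degree)
    return ax, ay

def curvature(ax, ay, t):
    """Signed curvature kappa = (x'y'' - y'x'') / (x'^2 + y'^2)^(3/2)."""
    dx = np.polyval(np.polyder(ax), t)
    dy = np.polyval(np.polyder(ay), t)
    ddx = np.polyval(np.polyder(ax, 2), t)
    ddy = np.polyval(np.polyder(ay, 2), t)
    return (dx * ddy - dy * ddx) / (dx * dx + dy * dy) ** 1.5
```

The mean curvature over the 1–2.5 mm region of interest would then be obtained by averaging `curvature` evaluated at samples of *t* within that region.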
Time series of curvature measurements (Figure 7F, measurements at 5 mm from the follicle) can be used to identify most contact events. For whisker-pole contacts first made during a protraction, the first touch tends to be followed by a series of more forceful contacts. Contacts are correlated with a relatively constant angular position of the whisker (Figure 7G). This indicates that the mouse may press the whisker against the pole by translating the whisker pad rather than by rotating the whisker. The distribution of peak curvature change during a whisker-pole contact indicates that the bulk of contacts were made during protraction and that these were more forceful (Figure 7H).

## Availability

The software and documentation are freely available online () as cross-platform (Windows, OS X, Linux) source code written in C, with Python and Matlab interfaces included. Pre-built binaries are also available. A graphical user interface is provided to aid semi-automated tracing as well as the viewing and editing of results. The software is capable of analyzing 8-bit grayscale StreamPix (Norpix Sequence Format: SEQ), TIFF, MPEG, MOV, and AVI formatted video.

# Supporting Information

Sample data and tutorial. (ZIP)

Line bias correction. (A) Raw data exhibit a fixed-pattern artifact in which odd-numbered lines are systematically darker than even-numbered lines. (B) This artifact is corrected by multiplying odd lines by a factor estimated from the raw data itself. This removes the stripes without blurring or reducing the contrast of whiskers. Scale bar, 0.5 mm. (TIF)

Methodological details. (DOC)

Video overlaid with high scoring seeds (eccentricity > 0.99) for each frame. Pixels are colored according to the orientation indicated by the seed. (MP4)

Results of automatic tracing applied to a 500 Hz video (1.7 s duration) of a freely behaving rat exploring a corner with its full whisker field. The video was obtained with permission from the BIOTACT Whisker Tracking Benchmark [22] (Clip ID: behavingFullContact, courtesy of Igor Perkon and Goren Gordon). It has been cropped, and the frame rate has been changed for presentation. (MP4)

Results of automatic tracing applied to a video acquired at 100 Hz (8 s duration) of a freely behaving mouse with its full whisker field. Background subtraction was performed before tracing to reduce the effects of non-uniform illumination. Because no image of the scene without the animal present was available, the background was estimated for each pixel as the maximum intensity observed at that point throughout the video. The video was obtained with permission from the BIOTACT Whisker Tracking Benchmark [22] (Clip ID: behavingMouse100, courtesy of Ehud Fonio and Ehud Ahissar). It has been cropped, and the frame rate has been changed for presentation. (MP4)

Results from a full trial. The results of automated tracing and linking applied to a video (500 Hz, 9.2 s duration) of a head-fixed mouse trimmed to a single row of 4 whiskers interacting with a pole. Curves that were classified as whiskers are colored according to their identity; otherwise they are not shown. Multiple whiskers simultaneously interact with the pole at 1.2–1.4 s into the trial. Protracted bouts of whisking can be observed throughout the video. (MP4)
Tracing and linking are robust to rapid changes in whisker angle. The results of automated tracing and linking applied to a video (500 Hz, 150 ms) of a head-fixed mouse trimmed to a single row of 3 whiskers interacting with a pole. Curves that were classified as whiskers are colored according to their identity; otherwise they are not shown. One whisker (red) has been trimmed so that it cannot contact the pole. The green whisker presses against the pole and quickly flicks past it as it is removed from the field. This is the fastest angular motion (16°/ms) observed in the data set used to measure tracking accuracy. (MP4)

Linking is robust to whiskers that leave the field of view. The results of automated tracing and linking applied to a video (500 Hz, 190 ms) of a head-fixed mouse trimmed to a single row of 3 whiskers interacting with a pole. Curves that were classified as whiskers are colored according to their identity; otherwise they are not shown. Two whiskers (green and blue) are frequently occluded by the lick-port (black bar, lower right), but they are properly identified before and after such events. (MP4)

Tracing and linking of whiskers bilaterally. The results of automated tracing and linking applied to a video (500 Hz, 1 s duration) of a bilateral view of a head-fixed mouse trimmed to a single row of whiskers (2 on each side). (MP4)

# References

[^1]: Conceived and designed the experiments: NGC DHO KS EWM. Performed the experiments: NGC DHO DH LP. Analyzed the data: NGC DHO DH LP AH SP KS. Contributed reagents/materials/analysis tools: NGC DHO DH LP. Wrote the paper: NGC KS EWM.

abstract: # Background

Medications necessary for disease management can simultaneously contribute to weight gain, especially in children. Patients with preexisting obesity are more susceptible to medication-related weight gain.

How equipped are primary care practitioners to identify and potentially reduce medication-related weight gain? To inform this question, germane to public health, we sought to identify potential gaps in clinician knowledge related to the adverse metabolic drug effect of weight gain.

# Methods

The study analyzed practitioner responses to the pre-activity questions of six continuing medical education (CME) activities from May 2009 through August 2010.

# Results

The 20,705 consecutive, self-selected respondents indicated varied levels of familiarity with the adverse metabolic effects and psychiatric indications of atypical antipsychotics. Correct responses were lower than predicted for drug indications pertaining to autism (−17% vs. predicted); drug effects on insulin resistance (−62% vs. predicted); chronic disease risk in mental illness (−34% vs. predicted); and drug safety research (−40% vs. predicted).
Pediatrician knowledge scores were similar to those of other primary care practitioners.

# Conclusions

Clinicians' knowledge of medication-related weight gain may lead them to overestimate the benefits of a drug in relation to its metabolic risks. The knowledge base of pediatricians appears comparable to that of their counterparts in adult medicine, even though metabolic drug effects in children have only recently become prevalent.

author: Ingrid Kohlstadt; Gerold Wharton
date: 2013
institute: 1Johns Hopkins Bloomberg School of Public Health, Center for Human Nutrition, 198 Prince George St, Annapolis MD 21401, Maryland, USA; 2The U.S. Food and Drug Administration, Office of Pediatric Therapeutics, Silver Spring, Maryland, USA
references:
title: Clinician uptake of obesity-related drug information: a qualitative assessment using continuing medical education activities

# Background

No study to date assesses the knowledge base around medication-related weight gain in pediatric or adult primary care medicine. We therefore sought to characterize what practitioners know about metabolic drug effects in the context of clinical decision-making.

Informed clinicians can often modify their patients' risk of adverse metabolic drug effects, even when medications are essential for disease management [1]. Practitioners can choose the lowest effective dose and therapies with fewer metabolic effects; treat underlying medical conditions that can contribute to weight gain, such as sleep apnea and hypothyroidism; correct nutritional deficiencies, such as vitamins B~12~ [2] and D [3], to facilitate lifestyle adherence; and counsel patients on drug-related increases in appetite, emphasizing adherence to medication and healthful lifestyle choices.

Among the patient groups most vulnerable to metabolic drug effects are children. Children are more susceptible to the central nervous system effects of medications [4]. Some metabolic drug effects are unique to children at certain growth stages and demonstrate a prolonged effect [5,6]. Metabolic drug effects also tend to be delayed relative to the therapeutic benefit, especially in children. Concurrently, drug exposure is increasing in children, the age group with the fastest growing number of prescriptions [7], in part due to obesity-related chronic diseases. Preexisting overweight and obesity heighten vulnerability to metabolic drug effects.

Managing adverse metabolic drug effects is relatively new to the practice of pediatrics. Historically, pediatricians focused on medication-related weight loss and stunting, recorded as step-offs on patient growth charts. Today's pediatric practice may require as diligent a diagnosis and management of medication-related weight gain, especially since preexisting overweight and obesity, defined as a body mass index at or above the 85^th^ percentile, affect approximately 32% of the U.S. population ages 2-19 [8,9].

Disseminating drug safety updates to pediatricians holds other challenges as well. Safety information specific to children represents a recent advance, and practitioners may not realize they need to watch for such updates [10]. Metabolic drug effects specific to children and adolescents may first be identified years after a drug is on the market [11], because the metabolic effects in children tend to manifest beyond the timeframe of clinical trials. Disseminating drug safety information may be additionally complicated by practice patterns.
For example, psychiatrists may diagnose and prescribe highly specialized treatment and look to primary care practitioners to monitor patients for adverse drug effects.

Clinicians draw on their knowledge base of adverse metabolic drug effects for clinical decision-making. The elevated and unique risks of metabolic drug effects in pediatrics, together with major shifts in disease prevalence and practice patterns, prompted our interest in confirming that primary care clinicians who care for children have a knowledge base comparable to that of their adult medicine counterparts.

# Methods

## CME partners

Continuing medical education (CME) activities were developed in partnership with CME providers. Inclusion criteria for partners were: experience implementing pre-activity questions, having primary care practitioners as a target audience, willingness to co-develop programs relevant to medication-associated weight gain, providing free public access to associated media and print materials, and collaborating within time and budget constraints. Partners were selected across different media (audio, lectures, and web-based activities): Audio-Digest Foundation, Medscape CME, The Maryland Academy of Family Physicians, and The FDA Scientific Rounds Program.

## Instrument development

The instrument in this study, pre-CME activity questions, measures practitioners' baseline knowledge relevant to the content of the CME activities. Pre-activity questions were 4-choice multiple-choice questions or true-false questions. They were directed at clinical decision-making and were organized into four categories: 1) drug indications, 2) metabolic drug effects, 3) drug safety updates, and 4) patients most at risk. Each CME program partner selected among the pre-activity questions and adapted the wording to its standard format.

## CME content

The 6 CME activities pertained to either atypical antipsychotic use in children or obesogenic medications in general. They were provided through the CME partners at varied intervals between June 2009 and August 2010. Each activity was an audio program, web-based program, or conference lecture, and awarded a maximum of 0.5 to 2 Category 1 CME credits.
The program content, participant characteristics, and pre-activity questions are presented in Table 1.

Summary of continuing medical education (CME) programs, June 2009 – August 2010

| **CME partner and activity** | **Date, audience and promotions** | **Pre-activity questions** |
|:---|:---|:---|
| Activity 1: Maryland Academy of Family Physicians, 2009 Annual CME Assembly; Drug–nutrient interactions | 6/09 Conference attendees, primarily family physicians in the mid-Atlantic | 5 interspersed questions using an audience response system |
| Activity 2: Audio-Digest Pediatrics on Atypical antipsychotics in children | 8/09-8/10 Subscribers, practitioners who care for children, approximately 75% physicians | 10 questions submitted by mail, fax, or website |
| Activity 3: Audio-Digest Special Topics on Atypical antipsychotics in children \* | 8/09-8/10 Responders to free CME post on the Audio-Digest homepage; American Academy of Pediatrics newsletter feature | 10 questions submitted via website |
| Activity 4: Medscape CME on Atypical antipsychotics in children \* | 11/09-1/10 Participants in Medscape's Psychiatry and Mental Health listserv with multiple cross-posts | 6 web-based interspersed questions |
| Activity 5: Medscape CME on Medication-related weight gain \* | 5/10-8/10 Participants in Medscape's Preventive Medicine and Public Health listserv with multiple cross-posts; American College of Preventive Medicine feature | 4 web-based interspersed questions |
| Activity 6: Food and Drug Administration Scientific Rounds on Autism | 5/10 FDA employees with regulatory and clinical interests | 3 questions using an audience response system, 1 of which was developed to better understand responses from prior CME programs |

\*Three of the activities may be viewed online [12-14].

In order to compare the knowledge of practitioners specializing in pediatrics with that of adult medicine practitioners, we developed Activity 5, which is applicable to the care of both children and adults.

Activity 6 was the only program whose target audience was not primary care practitioners. This biweekly activity is attended by a diverse group of health care practitioners and scientists, all of whom work in regulation. The activity was included to better characterize practitioner knowledge of the autism indication for atypical antipsychotics.

## Response analysis

The information used for the response analysis was obtained from the CME providers as anonymized source data with no way to match responses with individuals. No personal identifiers were used. The respondents were participating in CME activities, where responses to the related learning questions are routinely aggregated to inform future CME development and related research.

We analyzed the data with and without comparing it to predicted scores. Predicted scores facilitate comparison between multiple-choice questions with four choices and binomial true-false questions, which differ in the likelihood of selecting a correct answer by chance alone. For this analysis, predicted scores were 70% for multiple-choice and 85% for true-false questions.
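The "correct vs. predicted" rows reported in the results tables below are simple differences against these chance-aware baselines; a minimal illustration (the helper name is ours):

```python
PREDICTED = {"multiple_choice": 0.70, "true_false": 0.85}

def correct_vs_predicted(n_correct, n_total, kind):
    """Observed correct-response rate minus the predicted baseline."""
    return n_correct / n_total - PREDICTED[kind]

# 912 of 1237 correct on a 4-choice item (Table 2, Activity 2) -> +4%
print(f"{correct_vs_predicted(912, 1237, 'multiple_choice'):+.0%}")
```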
The basis for these numbers comes from Audio-Digest's overall average pretest scores, which are 70% [personal communication, August 2010], and the pedagogic intent of a CME activity to build on participants' existing practice-relevant knowledge.

Each response holds an inherent error, since a participant with a constant knowledge base could score better or worse on the pre-activity questions depending on the circumstances at that moment. We estimated this variability, expressed as a two-tailed interval of two standard deviations, to be ten percent. We also analyzed the participants' responses to the choice identified as close to correct, also called the second-best answer.

STATA® statistical software was used to run discrete-response regression analyses on pre-activity question responses. Probit regressions were used for binomial dependent variables, analyzing whether the respondent answered the CME question correctly. The probit models give a standard normal z-score rather than a t-statistic, with the total variability explained as a pseudo-R^2^ rather than an ordinary R^2^; McFadden's pseudo-R^2^ is reported. The probit analysis reports the overall significance of the model using an LR chi-square. The effect of a control variable in predicting correct responses (a certain percentage above/below the average) is calculated as the difference in probability of getting a question correct versus a baseline probability. For this analysis, the baseline probability is that obtained with all control variables set to their population means.

The control variables used in the probit models were educational degree, medical specialty, and CME participation date. Geographic region was only provided by some respondents and was therefore not included in the analysis. The type of medical practice, such as hospital-based or solo practice, was not among the data obtained by the CME providers.
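As an illustration, an equivalent analysis could be run in Python with statsmodels (the study used STATA; the file and column names here are hypothetical):

```python
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("cme_responses.csv")  # hypothetical anonymized export

# Probit of answering correctly on degree, specialty, and participation
# quartile, mirroring the control variables described above.
fit = smf.probit("correct ~ C(degree) + C(specialty) + C(quartile)",
                 data=df).fit()
print(fit.summary())    # coefficient z-scores and the LR test
print(fit.prsquared)    # McFadden's pseudo-R^2

# Effect of each control variable versus the baseline at the means.
print(fit.get_margeff(at="mean").summary())
```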
Partially complete responses were included in the analysis. Entries with all pre-activity questions left blank were considered equivalent to nonparticipation and were excluded. Since the sample sizes of the distinct CME activities varied, both the unweighted and weighted averages of correct responses are reported.

## Instrument validation

To assess the extent to which the pre-activity responses could be generalized among primary care practitioners, the responses were compared across the diverse CME programs detailed in Table 1.

The scores on pre-activity responses were compared to self-reported learning in Activities 2–3, where participants were asked, "Please list one concept or strategy gained from this activity."

Participant evaluations of the CME programs were recorded to confirm satisfactory evaluations. The rating of a CME activity on a 1–5 Likert scale is a composite score that reflects practice relevance and an appropriate teaching level for the target population.

Not all participants completed all pre-activity questions. The data were analyzed both including and excluding question-specific non-responders, to detect any bias introduced by partial completion.

Activities 2 and 3 were the longest-running programs, each offered for 13 months. They were analyzed for a temporal trend, since a news story or regulatory change during the interval could potentially change practitioner baseline knowledge or practice patterns.

# Results

There were 20,705 participants in the combined six CME activities, which spanned 15 months. Each participant answered one or more of the following questions.

## Drug indications

See Table 2. For the first question, both the average correct response rate of 76% and the weighted correct response rate of 79% are within the predicted range. For the second question, the average correct response rate is 53% (17% below predicted) and the weighted average correct response rate is 52% (18% below predicted).

Responses to multiple choice pre-activity questions on use of antipsychotic medications

| **CME activity** | **Activity 2** | **Activity 3** | **Activity 4** |
|:---|:---|:---|:---|
| Sample size | *n*=1237 | *n*=611 | *n*=2400 |
| Responses to: Which of the following is a labeled indication for one or more atypical antipsychotic drugs? | | | |
| A. Refractory epilepsy | | | |
| B. Refractory major depression in adolescents | | | |
| **C. Acute mania associated with bipolar-I disorder** | | | |
| D. Attention deficit/hyperactivity disorder | | | |
| Incorrect (A) | 82 (7%) | 40 (6%) | 72 (3%) |
| Incorrect (B) | 188 (15%) | 92 (15%) | 264 (11%) |
| **Correct (C)** | **912 (74%)** | **439 (72%)** | **1992 (83%)** |
| Incorrect (D) | 55 (4%) | 40 (6%) | 72 (3%) |
| Correct vs. Predicted | +4% | +2% | +13% |
| Responses to: Which of the following is a labeled indication for an atypical antipsychotic in a child, 7 years of age? | | | |
| **A. Irritability associated with autism** | | | |
| B. Acute bipolar mania | | | |
| C. Schizophrenia | | | |
| D. Generalized anxiety disorder | | | |
| **Correct (A)** | **777 (63%)** | **288 (47%)** | **1152 (48%)** |
| Incorrect (B) | 197 (16%) | 114 (19%) | 456 (19%) |
| Incorrect (C) | 222 (18%) | 189 (31%) | 720 (30%) |
| Incorrect (D) | 38 (3%) | 19 (3%) | 72 (3%) |
| Correct vs. Predicted | −7% | −23% | −22% |

See Table 3. The average correct response rate is 65% (20% below predicted) and the weighted average is 67% (18% below predicted).

Responses to the true-false pre-activity question on use of antipsychotic medications

| **CME activity** | **Activity 2** | **Activity 3** |
|:---|:---|:---|
| Sample size | *n*=1237 | *n*=611 |
| Responses to: The intramuscular administration of at least one atypical antipsychotic agent has been approved by the FDA for use in children. True or false? | | |
| Incorrect (True) | 341 (28%) | 265 (43%) |
| **Correct (False)** | **896 (72%)** | **346 (57%)** |
| Correct vs. Predicted | −13% | −28% |

The participants in Activity 6 were asked: Recommended treatment of autism includes all EXCEPT:

A. Correct nutritional deficiencies: 6 (12%)
B. Treatment of concurrent attention-deficit hyperactivity disorder: 5 (10%)
C. Prescribe atypical antipsychotics: 29 (57%)
D. Use behavioral therapies following early diagnosis: 11 (21%)

The rate of correct response, response C, is 57% (13% below predicted).

## Adverse metabolic effects

Participants in Activity 5 were asked to respond to: After diagnosing Ed with metabolic syndrome, Ed's doctor advised him to reduce his weight by 10%, a total of 18 pounds, by diet and exercise. Which of the following medications potentially makes it more difficult for Ed to achieve his goal?

A. Angiotensin-converting enzyme inhibitor: 6667 (41%)
B. Diuretic: 1243 (8%)
C. Vitamin D: 1106 (7%)
D. Biguanide: 7021 (43%)

The rate of correct response, choice B, is 8% (62% below predicted).
Specialty did not predict responses to this question.

Participants in Activity 5 were also asked to respond to: Within months of being diagnosed with bipolar disorder at age 14, Sara gained 20 pounds. Which of the following is likely to contribute to her recent weight gain and body mass index of 28?

A. Vitamin D deficiency: 952 (6%)
B. Atypical antipsychotic agent: 10349 (63%)
C. An eating disorder: 1072 (7%)
D. Psychostimulant agent: 3948 (24%)

The average correct response rate of 63% falls within the predicted range. The predicted probability of answering the question correctly, given the regression control variables, is 65%. Analysis by specialty indicates that mental health specialists scored 28% better than average (z=27; p<0.01), family practitioners scored 14% higher (z=12; p<0.01), internal medicine specialists scored 9% higher (z=7; p<0.01), and endocrinologists scored 8% higher (z=3; p<0.01). The regression explains 9% (pseudo-R^2^=0.09) of the total variability in responses and was very significant in predicting scores (LR chi-square=1833; p<0.01).

Table 4 indicates the responses to a pre-activity question on adverse drug effects; see also Table 5. For the first question, the average correct response rate was 61%, with a weighted average of 67%. For the second question, the average correct response rate was 75%, with a weighted average also of 75%. These are within the predicted range.

Responses to the true-false pre-activity question on adverse drug effects

| **CME activity** | **Activity 2** | **Activity 3** |
|:---|:---|:---|
| Sample size | *n*=1237 | *n*=611 |
| Responses to: Hyperglycemia, hyperlipidemia, and elevated liver enzymes are labeled side effects for one or more atypical antipsychotics. True or false? | | |
| **Correct (True)** | **1154 (93%)** | **582 (95%)** |
| Incorrect (False) | 83 (7%) | 29 (5%) |
| Correct vs. Predicted | +8% | +10% |

Responses to multiple choice pre-activity questions on adverse drug effects

| **CME activity** | **Activity 1** | **Activity 2** | **Activity 3** | **Activity 4** |
|:---|:---|:---|:---|:---|
| Sample size | *n*=39 | *n*=1237 | *n*=611 | *n*=2400 |
| Responses to: Jeff is an athletic 14-year-old who has been diagnosed with schizophrenia and prescribed an atypical antipsychotic. You counsel the family about potential cardio-metabolic side effects. Which risk would you emphasize? | | | | |
| A. Sudden death from myocardial infarction | | | | |
| B. Cardiac arrhythmias | | | | |
| **C. Increases in triglycerides, total cholesterol, and low-density lipoprotein** | | | | |
| D. Because the patient has normal weight, he is unlikely to experience significant weight gain | | | | |
| Incorrect (A) | n/a | 88 (7%) | 34 (6%) | 120 (5%) |
| Incorrect (B) | n/a | 173 (14%) | 132 (22%) | 408 (17%) |
| **Correct (C)** | **n/a** | **726 (59%)** | **288 (47%)** | **1848 (77%)** |
| Incorrect (D) | n/a | 245 (20%) | 156 (25%) | 48 (2%) |
| Correct vs. Predicted | n/a | −11% | −23% | +7% |
| Responses to: Three months after initiating treatment with antipsychotic medication, Jeff has blood-work to monitor lipids, liver enzymes, and glucose. A work-up for what endocrine condition may additionally be indicated? | | | | |
| A. Hypothyroidism | | | | |
| **B. Hyperprolactinemia** | | | | |
| C. Hyperparathyroidism | | | | |
| D. Hyperthyroidism | | | | |
| Incorrect (A) | 4 (10%) | 144 (12%) | 101 (17%) | 432 (18%) |
| **Correct (B)** | **27 (75%)** | **974 (79%)** | **443 (73%)** | **1752 (73%)** |
| Incorrect (C) | 3 (8%) | 57 (5%) | 22 (3%) | 72 (3%) |
| Incorrect (D) | 5 (13%) | 55 (4%) | 44 (7%) | 144 (6%) |
| Correct vs. Predicted | +5% | +9% | +3% | +3% |
## Patients at increased risk

Responses to a question about vulnerable populations are presented in Table 6. The average correct response rate is 36% (34% below predicted), with a weighted average of 33% (37% below predicted).

Responses to the pre-activity question on vulnerable populations

| **CME activity** | **Activity 1** | **Activity 2** | **Activity 3** | **Activity 4** | **Activity 5** |
|:---|:---|:---|:---|:---|:---|
| Sample size | *n*=45 | *n*=1237 | *n*=611 | *n*=2400 | *n*=16361 |
| Responses to: Mental illness shortens lifespan as follows: | | | | | |
| **A. Patients with mental illness die 25 years earlier than the general population, mostly due to earlier onset of chronic medical conditions.** | | | | | |
| B. The average 15 years of potential life lost can be explained by premature death early in life. | | | | | |
| C. The causes of mortality are similar to the general population, but occur 10 years earlier on average. | | | | | |
| D. Suicide and accidents including motor vehicle accidents account for most of the 10 years of shorter life expectancy. | | | | | |
| **Correct (A)** | **13 (29%)** | **629 (51%)** | **215 (35%)** | **840 (35%)** | **5166 (32%)** |
| Incorrect (B) | 9 (20%) | 285 (23%) | 184 (30%) | 144 (6%) | 1584 (10%) |
| Next best answer (C) | 20 (44%) | 265 (21%) | 169 (28%) | 528 (22%) | 4557 (28%) |
| Incorrect (D) | 3 (7%) | 55 (5%) | 42 (7%) | 888 (37%) | 4718 (29%) |
| Correct vs. Predicted | −41% | −19% | −35% | −35% | −38% |

Figure 1 illustrates the correct responses compared to the predicted responses for the pre-activity question on mental illness and chronic disease risk. The responses are presented across CME Activities 1–5, with standard error bars shown. Since one of the three incorrect responses (choice C) was close to the correct answer, it may reflect a stronger knowledge base than the other two incorrect choices; we therefore included this response in the figure.

Activity 5's large sample size allowed further analysis. Participants had a predicted probability of 31% of answering correctly. Participants specializing in mental health scored 14% higher than average (z=11; p<0.01) and family practitioners scored 4% higher (z=3; p<0.05). The probit regression explained 1% (pseudo-R^2^=0.01) of the total variability in the responses and was significant (LR chi-square=202, p<0.01).

## Drug safety updates

Participants in Activity 5 were asked to respond to: Which of the following statements is correct?

A. Comparative effectiveness trials are part of the drug approval process: 5622 (34%)
B. Phase 3 clinical trials are powered to identify appetite-stimulating effects of medication: 3346 (20%)
C. Incidence of weight gain can be calculated from a passive adverse events reporting system: 4424 (27%)
D. Current legislation requires clinical trials in pediatric populations: 2452 (15%)

On average, 15% of participants answered correctly, selecting choice D (55% below predicted). The predicted probability of answering correctly, given the regression variables, is 14%.
Analysis by specialty indicates that pediatricians scored 7% higher than average (z=6; p<0.01) and mental health specialists scored 2% higher (z=2; p<0.02). General practitioners scored 3% below average (z=−2; p<0.03) and emergency medicine specialists scored 4% lower (z=−2; p<0.04). The regression explained 2% of the total variability in answers (pseudo-R^2^=0.02) and was very significant (LR chi-square=242; p<0.01).

Note that this question had the highest non-response rate, with 517 (3%) of participants leaving the question blank. Regression analysis excluding non-responders had the same significant outcome variables as the analysis including non-responders.

See Table 7. The average correct response rate was 47% (23% below predicted) and the weighted average was 51% (19% below predicted).

Responses to pre-activity questions on drug safety information

| **CME activity** | **Activity 2** | **Activity 3** |
|:---|:---|:---|
| Sample size | *n*=1237 | *n*=611 |
| Responses to: Practitioners are asked to report drug adverse events: | | |
| **A. To MedWatch** | | |
| B. Only when the adverse event is not specified on the drug label | | |
| C. Within 30 days of occurrence, as required by law | | |
| D. Voluntarily, if observed within 30 days of the first dose | | |
| **Correct (A)** | **698 (56%)** | **237 (39%)** |
| Incorrect (B) | 85 (7%) | 53 (9%) |
| Incorrect (C) | 203 (10%) | 174 (29%) |
| Incorrect (D) | 243 (20%) | 146 (24%) |
| Correct vs. Predicted | −14% | −31% |

For Activity 5 (*n*=16,361), the top three professions of participants were nurse practitioners (52%, *n*=8407), physicians (38%, *n*=6212), and physician assistants (3%, *n*=476). The top specialties were psychiatry/mental health (12%, *n*=2022), family medicine (11%, *n*=1875), internal medicine (10%, *n*=1639), general practice (6%, *n*=946), and pediatrics (6%, *n*=906).

The regression analysis predicting correct pre-activity responses was controlled for specialty, professional degree, and date of CME participation by quartile. The time of participation was included in the regression analysis because it explained a significant portion of the variability but yielded no clear pattern for interpretation. Results of the regression analysis follow each applicable question. The results of the analysis by specialty concur with the practice demands of each specialty. For example, family physicians, practitioners who follow patients across the lifespan, were more likely to correctly identify the profound extent to which mental illness shortens life expectancy due to chronic diseases.

## Instrument analysis

The strength of the instrument is its ease of use in the context of CME programming and its associated ability to identify trends in practitioner knowledge, along with some broad comparisons among practitioners. However, since the instrument is comprised of multiple-choice questions, responses to any one question are more appropriately viewed in the context of the full instrument.

In order to assess the variability of the instrument, responses were compared across CME programs, which varied in content, timeframe, recruitment, and question administration. Figure 1 depicts the responses. Responses among CME programs varied within the pre-established ±10% test error, except for one program with a small sample size.
The unweighted, correct response averages across CME programs are reported.

To assess the extent to which recruitment methods may influence pre-activity responses, the overall scores of the two Audio-Digest programs were compared. The two programs differ only in how the participants were recruited: either as subscribers or as one-time participants. The 10% difference in responses falls within the pre-established test error.

Practice relevance and the perception of the CME program's usefulness were considered in the instrument analysis. The participants in each of the 6 activities were asked to evaluate the program on a 1–5 Likert scale, 5 being the highest score. The ratings for each program ranged from 4.0 to 5.0, with an unweighted mean score of 4.5, suggesting that all were well received and applicable to participants' clinical practice.

The pre-activity question responses were correlated with what participants said they learned from the activity. Participants in Activities 2 and 3 were asked to "Please list one concept or strategy gained from this activity." The written responses fell into categories consistent with the pre-activity responses: pediatric indications (20), patient adherence (2), adverse effects (71), MedWatch reporting (9), drug interactions (5), and patient risk factors (8).

No temporal trend in correct responses was observed between the first three months and the full 13 months of responses to Activities 2 and 3. Neither was any single news event or regulatory change identified that might be anticipated to influence practitioner knowledge on this topic during the study period.

# Discussion

The childhood obesity epidemic is recent; however, practitioners who care for children appear as familiar with adverse metabolic drug effects as practitioners who care for adults. Those specializing in pediatrics performed better on a question about drug research, perhaps reflecting recent educational activities directed towards pediatricians.

Across medical specialties, practitioner knowledge of medication-related weight gain was low in four areas of our study. Each of these knowledge gaps, where practice relevant, would lead to overestimating a medication's benefits or underestimating its adverse metabolic effects. The net effect of each knowledge gap would therefore push clinical decision-making in the same direction, potentially contributing to excess metabolic dysfunction. The four areas of low practitioner knowledge are as follows.

Responses to questions about drug indications and the use of antipsychotics in autism suggest that some practitioners may mistake the management of aggressive symptoms for treatment of the underlying disease process. Additionally, new oral preparations are available for children who have difficulty swallowing pills. These preparations should be used before prescribing intramuscular preparations, which have greater metabolic effects and do not have pediatric use indications at the time this manuscript is written.

Among the questions pertaining to adverse metabolic effects, only 8% of practitioners selected the intended response, that some diuretics have been associated with promoting insulin resistance.
The 41% who incorrectly selected "angiotensin-converting enzyme inhibitors" are unlikely to have been aware that some diuretics promote insulin resistance whereas angiotensin-converting enzyme inhibitors, in contrast, may be insulin sensitizing [15]; the distinction between these two antihypertensive therapies in a patient with metabolic syndrome would be practice relevant. Furthermore, these respondents may have erroneously equated the reduction in peripheral edema with meaningful, long-term weight loss among their patients. The 43% of practitioners who incorrectly selected "biguanide" may not have realized that metformin is in this medication class, so the response would have been more informative had the answer read "biguanide (metformin)."

Responses reflected low baseline knowledge of drug safety research and MedWatch, a passive surveillance program. It is possible that practitioners lack a framework for managing the escalating volume of drug-related information. Our findings parallel those of a recent study of physician knowledge and adverse event reporting for dietary supplements [16].

Mental illness is associated with increased vulnerability to adverse metabolic effects. The profound chronic disease mortality among patients with mental illness was under-recognized across specialties, as measured by our instrument. Awareness of the high mortality from chronic diseases among patients with mental illness might be unlikely to cause practitioners to alter a patient's psychiatric medications; however, it would guide overall care, such as screening, referring, concurrent medication prescribing, and managing co-morbidities.

The knowledge gaps parallel the paucity of peer-reviewed publications on medication-related weight gain from drugs other than the atypical antipsychotics. Reviewing the proceedings of a large international conference on obesity [17] revealed a similar paucity of research and translational initiatives surrounding medication-related weight gain. Additionally, current drug product information and labeling lack a consistent format or location for communicating the potential effects of a drug on the patient's appetite and underlying metabolism.

Practitioners were familiar with the general indications for the use of atypical antipsychotics in children and their adverse metabolic effects, including hyperprolactinemia, dyslipidemia, elevated liver enzymes, insulin resistance, and weight gain, findings that correlate with the lay and medical literature's recent attention to the topic [4]. Similarly, education initiatives about pediatric drug labeling have been directed to pediatricians, and pediatricians were more knowledgeable than other practitioners about ongoing pharmacovigilance.

The instrument demonstrated internal consistency across diverse CME programs (Table 1), suggesting the findings may appropriately be generalized across U.S. primary care practitioners. The sampling frame captures participants across the United States, with diverse patient populations in diverse practice settings. The degrees of the participants (nurse practitioners, physician assistants, and medical doctors) correctly represent the educational diversity of primary care practitioners. Pediatricians scored as well as their adult medicine counterparts, suggesting that future initiatives could appropriately be directed to all primary care practitioners.

Additional merits of the instrument are that it can be implemented in a timely and cost-sensitive way and that it can be applied to assess evidence-based practice knowledge [18].
It can be applied to assess evidence-based practice knowledge [18]. Study findings can provide baseline data by which to gauge the effectiveness of future interventions. The instrument also provides a continuing-education curriculum developed free of industry interests. An internet curriculum on safe medication use has been shown to measurably improve clinician practice choices [19,20].

Knowledge is only one of many clinical practice barriers to modifying medication-related weight gain, but it merits incorporation into future initiatives. The findings, taken together with the population prevalence of obesity, the emerging treatment options, and the central role of the primary care practitioner, suggest a significant prevention opportunity.

# Conclusions

Pediatricians' knowledge of adverse metabolic drug effects appears comparable to that of their counterparts in adult medicine. Regardless of medical specialty, practitioners participating in the CME programs showed low knowledge on specific questions pertaining to drug indications, adverse metabolic effects, patient risk profiles, and safety updates. Each of the four knowledge gaps would potentially influence clinical decision-making in the same direction, leading clinicians to overestimate the benefits of a drug relative to its metabolic risks. Therefore, future efforts to detail cross-specialty practitioner knowledge of metabolic drug effects, and to initiate education strategies that bolster that knowledge, could meaningfully contribute to obesity prevention.

## Availability of supporting data

The full instrument (CME questions from all activities) is available at the journal's request.

# Competing interests

Neither author has financial or non-financial competing interests.

# Authors' contributions

IK developed the CME modules, in collaboration with colleagues acknowledged elsewhere in the manuscript, and published the CME materials. She designed the study in collaboration with the Office of Pediatric Therapeutics and drafted the manuscript. GW participated in the design of the study and performed the statistical analysis. Both authors read and approved the final manuscript.

# Author's information

IK is a physician nutrition specialist board-certified in preventive medicine and public health. She is the editor of Advancing Medicine with Food and Nutrients, Second Edition (CRC Press, December 2012) and serves on the faculty of the Johns Hopkins Bloomberg School of Public Health. As an inaugural FDA Commissioner's Fellow she worked within the Office of Pediatric Therapeutics on nutrition-related issues, which gave rise to this research collaboration.

## Acknowledgements

We thank Anne Myers for her background work on pediatrician focus groups; Lon Osmond, Executive Director, Audio-Digest Foundation, for his collaboration; Michelle Surrichio of the American College of Preventive Medicine for her technical assistance; and Rachel Barr of the Maryland Academy of Family Physicians for her conference preparations.

## Disclaimer

This manuscript is a professional contribution developed by its authors.
No endorsement by the FDA is intended or should be inferred.

abstract: Circulating interleukin (IL)-18 is elevated in obesity, but paradoxically causes hypophagia. We hypothesized that IL-18 may attenuate high-fat diet (HFD)-induced insulin resistance by activating AMP-activated protein kinase (AMPK). We studied mice with a global deletion of the α-isoform of the IL-18 receptor (IL-18R^−/−^) fed a standard chow or HFD. We next performed gain-of-function experiments in skeletal muscle, in vitro, ex vivo, and in vivo. We show that IL-18 is implicated in metabolic homeostasis, inflammation, and insulin resistance via mechanisms involving the activation of AMPK in skeletal muscle. IL-18R^−/−^ mice display increased weight gain, ectopic lipid deposition, inflammation, and reduced AMPK signaling in skeletal muscle. Treating myotubes or skeletal muscle strips with IL-18 activated AMPK and increased fat oxidation. Moreover, in vivo electroporation of IL-18 into skeletal muscle activated AMPK and concomitantly inhibited HFD-induced weight gain. In summary, IL-18 enhances AMPK signaling and lipid oxidation in skeletal muscle, implicating IL-18 in metabolic homeostasis.
author: Birgitte Lindegaard; Vance B. Matthews; Claus Brandt; Pernille Hojman; Tamara L. Allen; Emma Estevez; Matthew J. Watt; Clinton R. Bruce; Ole H. Mortensen; Susanne Syberg; Caroline Rudnicka; Julie Abildgaard; Henriette Pilegaard; Juan Hidalgo; Susanne Ditlevsen; Thomas J. Alsted; Andreas N. Madsen; Bente K. Pedersen; Mark A. Febbraio. Corresponding authors: Mark A. Febbraio or Bente K. Pedersen.
date: 2013-09
references:
title: Interleukin-18 Activates Skeletal Muscle AMPK and Reduces Weight Gain and Insulin Resistance in Mice

The cytokine interleukin (IL)-18 was identified ∼15 years ago as a cofactor that, together with IL-12, stimulates production of interferon-γ (1). This ∼18-kDa cytokine, which has structural similarities to the IL-1 cytokine family, is widely expressed in many mammalian cells and tissues, including liver, adipose tissue, skeletal muscle, pancreas, brain, and endothelium (2). IL-18 is best known for its role in inflammation, whereby proinflammatory stimuli such as lipopolysaccharide, Fas ligand, and tumor necrosis factor-α lead to caspase-1–mediated cleavage of pro-IL-18 into mature IL-18. IL-18 can then signal via a heterodimer of the transmembrane IL-18 receptors (α and β) and via a Toll-like receptor signaling cascade, ultimately leading to the activation of nuclear factor-κB and subsequent regulation of gene transcription (3).
Although this is the best-characterized signaling pathway for this cytokine, it is worth noting that IL-18 has also been implicated in mitogen-activated protein kinase, phosphatidylinositol 3-kinase, and signal transducer and activator of transcription (STAT) 3 signaling (4), all of which are implicated in energy metabolism.

It is now well accepted that obesity results in a state of low-grade chronic inflammation (5); it is therefore not surprising that circulating IL-18 levels are elevated in human obesity (6) and in patients with type 2 diabetes (7). Somewhat paradoxically, two separate groups have reported that mice with a global deletion of IL-18 become obese and insulin-resistant, whereas exogenous administration of recombinant IL-18 rescues this phenotype (8,9). These previous studies ascribed the mechanism of action of IL-18 in modulating nutrient homeostasis exclusively to neuronal control of food intake (8,9). However, in the earlier study (8), the authors demonstrated that the IL-18^−/−^ mice displayed decreased peripheral insulin sensitivity and that IL-18 signals via STAT3 in the liver, raising the possibility that IL-18 may play a role in peripheral energy metabolism, because STAT3 plays a major role in maintaining metabolic homeostasis in the liver (10). In addition, we (11,12) and others (13,14) have shown that cytokines that activate STAT3 via transmembrane receptor signaling can activate the AMP-activated protein kinase (AMPK) signaling pathway to enhance fat oxidation in skeletal muscle, thereby attenuating high-fat diet (HFD)-induced insulin resistance. Together, these previous studies raise the possibility that IL-18 may attenuate HFD-induced insulin resistance by affecting metabolic processes, such as activation of AMPK, in skeletal muscle. This is important from a therapeutic viewpoint because drugs that modulate food intake by targeting the central nervous system have, to date, proven unsuccessful because of side effects associated with activation of the lateral hypothalamus (15). In the current study, we tested the hypothesis that IL-18 signaling can modulate nutrient homeostasis via mechanisms associated with peripheral energy metabolism. We show that IL-18 activates AMPK and increases lipid oxidation in skeletal muscle, implicating IL-18 in diet-induced obesity and insulin resistance.

# RESEARCH DESIGN AND METHODS

## Animal experimental protocols.

For the diet intervention study, 12-week-old *Il18R^−/−^* (back-crossed 11 generations to C57BL/6J) and wild-type C57BL/6J mice were obtained from Charles River Laboratories (L'Arbresle, France). Subsequent to this initial cohort, we performed experiments in *Il18R^−/−^* and wild-type mice obtained from heterozygous matings. Eight-week-old C57BL/6J mice (inbred; Herlev, Denmark) were used for the DNA electroporation study, and 13-week-old C57BL/6 mice (inbred; Melbourne, Australia) were used for the ex vivo experiments. All experiments were approved by The Animal Experiments Inspectorate in Denmark and/or the Baker IDI Alfred Medical Research and Education Precinct Animal Ethics Committee, in accordance with the National Health and Medical Research Council of Australia Guidelines on Animal Experimentation.
Mice were maintained on a 12-h light, 12-h dark cycle on a standard rodent chow diet (27%, 13%, and 60% kcal from protein, fat, and carbohydrate, respectively) or an HFD providing 60% of calories from fat (Research Diets 12492; 20%, 60%, and 20% kcal from protein, fat, and carbohydrate, respectively) for 18 weeks (diet-induced obesity study) or 4 weeks (in vivo electroporation study).

In vivo electroporation experiments were performed as previously described (16). Briefly, the regulatory plasmid pTet-On was obtained from Clontech (Palo Alto, CA). Because IL-18 lacks a typical signal sequence, the V-J2-C region of the murine immunoglobulin κ-chain was cloned upstream of the mature IL-18 sequence. The tibialis anterior muscle of each mouse was injected directly with 20 µL plasmid solution (0.5 µg/µL), and electric pulsing was applied using 4-mm plate electrodes: one high-voltage pulse (100 µs, 800 V/cm) followed by one low-voltage pulse (400 ms, 100 V/cm) (17). Gene expression was induced by providing drinking water (distilled water) containing 0.2 mg/mL doxycycline (doxycycline hyclate; Sigma-Aldrich) (18).

For the ex vivo experiments, mice were fed a standard chow diet with drinking water available ad libitum. To examine palmitate metabolism, mice were anesthetized and soleus muscles were carefully dissected into longitudinal strips from tendon to tendon. Strips were removed and [^14^C]palmitate oxidation was analyzed as previously described (19). IL-18 was used at a dose of 100 ng/mL, with PBS serving as placebo; strips from the same animal were used for both the IL-18 and PBS conditions.

## Skeletal muscle cell culture.

To examine whether IL-18 affected phosphorylation of AMPK (Thr^172^) and acetyl-CoA carboxylase-β (ACCβ; Ser^79^), fully fused L6 myotubes were treated with recombinant rat IL-18 (MBL, Woburn, MA). Cells were treated with IL-18 for 10, 30, and 60 min at doses of 1 and 10 ng/mL.

## Analysis of body composition.

We measured body composition in IL-18R^−/−^ mice by dual-energy X-ray absorptiometry (DXA) using the Lunar PIXImus Mouse Densitometer (GE Medical Systems) and in IL-18 electroporated mice using a Lunar Prodigy scanner with a small-animal software application (GE Healthcare Systems). Animals were anesthetized by intraperitoneal injection of Hypnorm (0.4 mL/kg; Janssen, Saunderton, U.K.) and Dormicum (2 mg/kg; Roche, Basel, Switzerland) and laid flat on the scanning platform on their ventral side. To confirm the DXA fat mass determination, mice were killed and two intra-abdominal fat pads (gonadal, retroperitoneal) and one subcutaneous fat pad (inguinal) were dissected and weighed.

## Insulin tolerance tests.

We performed insulin tolerance tests in 7.5-month-old male mice 4 h after removal of food. Blood samples were obtained by tail cut and analyzed for glucose content using a glucometer (Accu-Chek Compact Plus) immediately before and at 15, 30, 45, 60, 90, and 120 min after an intraperitoneal injection of insulin (0.75 units/kg; Actrapid; Novo Nordisk).

## Indirect calorimetry.

Mice were placed in a 16-chamber indirect calorimetry system (TSE Systems, Bad Homburg, Germany) for 10 days; the first 5 days were considered the acclimation phase and only data from the final days were analyzed. After 3 days of data collection, the mice were fasted for 24 h. VO~2~ (mL/h/kg), the respiratory exchange ratio (RER), and activity (beam breaks) were measured using the system. Mice had free access to food and water while in the chambers, and food intake was measured for the duration of data collection.
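For readers unfamiliar with indirect calorimetry, the RER is simply the ratio of CO~2~ produced to O~2~ consumed; values near 0.7 indicate predominantly fat oxidation and values near 1.0 predominantly carbohydrate oxidation. The sketch below is a minimal illustration of this calculation; the gas-exchange values are invented, and the linear interpolation of fuel mix is a textbook simplification rather than the authors' method.

```python
def rer(vco2: float, vo2: float) -> float:
    """Respiratory exchange ratio: CO2 produced / O2 consumed (same units)."""
    return vco2 / vo2

def carb_fraction(rer_value: float) -> float:
    """Rough fraction of energy from carbohydrate, interpolating linearly
    between pure fat oxidation (RER ~0.70) and pure carbohydrate (RER ~1.00)."""
    frac = (rer_value - 0.70) / (1.00 - 0.70)
    return min(max(frac, 0.0), 1.0)

# Invented hourly gas-exchange values, mL/h/kg
vo2, vco2 = 3000.0, 2400.0
r = rer(vco2, vo2)  # 0.80
print(f"RER = {r:.2f}, ~{carb_fraction(r):.0%} of energy from carbohydrate")
```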
## Plasma parameters analysis.

Blood samples were collected from the tail vein into EDTA tubes, which were immediately spun at 6,000*g* for 10 min at 4°C; plasma was removed and stored at −80°C until analysis. Plasma insulin was determined by ELISA (Crystal Chem). Blood glucose was measured by glucometer (Accu-Chek Compact Plus). Adiponectin concentration was measured by RIA kit. Leptin, monocyte chemoattractant protein 1, plasminogen activator inhibitor 1, IL-6, and tumor necrosis factor-α were measured using a Lincoplex mouse serum adipokine panel (Linco).

## Insulin signaling tissue collection.

Animals were anesthetized with an injection of Hypnorm (0.4 mL/kg; Janssen, Saunderton, U.K.) and Dormicum (2 mg/kg; Roche, Basel, Switzerland), and the gastrocnemius muscle as well as the right lobe of the liver (after a ligature around the blood vessel) were removed and stored in liquid nitrogen until further processing. Insulin (1.5 units/kg lean body mass) was then injected into the abdominal aorta, and the contralateral gastrocnemius and liver lobe were removed 2 min after injection. Samples were stored in liquid nitrogen until further processing.

## RNA extraction and real-time quantitative PCR.

Mouse tissue was isolated, frozen in liquid nitrogen or in dry ice and absolute alcohol, and stored at −80°C until extraction. Total RNA was isolated from adipose tissue with TRIzol (Life Technologies) as described by the manufacturer, and 500 ng RNA was reverse-transcribed to cDNA using random hexamers (TaqMan reverse-transcription reagents; Applied Biosystems). Real-time PCR was performed on an ABI PRISM 7900 sequence detector or a 7500 Fast sequence detector (Applied Biosystems). Each assay included, in triplicate: a cDNA standard curve of five serial dilution points (range, 1–0.01); a no-template control; a no-reverse-transcriptase control; and 7.5 ng (0.375 ng for 18S rRNA) of each sample cDNA. For *18S rRNA*, *SREBP1c*, fatty acid synthase (*FAS*), *HADB*, phosphoenolpyruvate carboxykinase (*PEPCK*), glucose-6 phosphate dehydrogenase, and carnitine palmitoyl transferase 1 (*CPT1*), the amplification mixtures were prepared with 2X TaqMan Universal PCR master mix. All assay reagents were from Applied Biosystems. Primers and TaqMan probes for sterol regulatory element–binding protein-1c (*SREBP1c*) and *FAS* were designed using a mouse-specific database (Ensembl) and Primer Express (Applied Biosystems), and primers for *CPT1* and *HADB* were designed using the free program Primer3.
The sequences used to amplify a fragment of *SREBP1c* were as follows: FP: 5′-GACCACGGAGCCATGGAT-3′; RP: 5′-GGCCCGGGAAGTCACTGT-3′; TaqMan probe: 5′-ACATTTGAAGACATGCTCCAGCTCATCAACA-3′. For a fragment of *FAS*: FP: 5′-ATCCTGGAACGAGAACACGATCT-3′; RP: 5′-GGACTTGGGGGCTGTCGTGTCA-3′; TaqMan probe: 5′-CACGCTGCGGAAACTTCAGGAAATGT-3′. For a fragment of *HAD*: FP: GTGGAGAAGACCCTGAGCTA; RP: GCAAATCGGTCTTGTCTAGT. For a fragment of *PEPCK*: FP: GGCGGAGCATATGCT; RP: CCACAGGCACTAGGGAAGGC. For a fragment of glucose-6 phosphate dehydrogenase: FP: TCAACCTCGTCTTCAAGTGGATT; RP: GCTGTAGTAGTCGGTGTCCAGGA. For *CPT1*: FP: GTCGCTTCTTCAAGGTCTGG; RP: AAGAAAGCAGCACGTTCCAT. Oligos for *SREBP1c* and *FAS* were obtained from TAG Copenhagen (Copenhagen, Denmark), oligos for *CPT1* and *HAD* were obtained from DNA Technology (Aarhus, Denmark), and oligos for *PEPCK* and glucose-6 phosphate dehydrogenase were obtained from GeneWorks (South Australia, Australia). The *18S rRNA* content was determined using a predeveloped assay reagent (Applied Biosystems). The relative concentration of each measured mRNA was determined by plotting the threshold cycle against the log of the serial dilution points, and the relative expression of each gene of interest was determined after normalization to *18S*, which was unaffected by genotype and diet.
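The standard-curve quantification just described can be reproduced in a few lines. The sketch below is a minimal illustration, not the authors' analysis code; the Ct values and dilution series are invented for demonstration.

```python
import numpy as np

def rel_conc(ct, ct_std, dilution_std):
    """Relative concentration from a dilution standard curve.

    Fit Ct = slope * log10(relative input) + intercept on the standards,
    then invert the fit for each sample Ct.
    """
    slope, intercept = np.polyfit(np.log10(dilution_std), ct_std, 1)
    return 10 ** ((np.asarray(ct) - intercept) / slope)

# Invented example: five-point standard curve spanning 1 to 0.01
dilutions = np.array([1, 0.3, 0.1, 0.03, 0.01])
ct_target_std = np.array([22.1, 23.9, 25.6, 27.4, 29.1])  # target-gene standards
ct_18s_std = np.array([9.8, 11.5, 13.3, 15.0, 16.8])      # 18S standards

# Sample Cts (triplicate means), again invented
target = rel_conc([24.8, 26.0], ct_target_std, dilutions)
ref18s = rel_conc([12.1, 12.2], ct_18s_std, dilutions)

normalized = target / ref18s  # expression of target normalized to 18S
print(normalized)
```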
## AMPK activity and lipid measurements.

AMPK activity was assayed in frozen tibialis anterior muscle homogenized in lysis buffer, as described previously (12). Briefly, muscle lysate containing 200 μg protein was immunoprecipitated with an antibody specific to the α2 or α1 catalytic subunit of AMPK and protein A/G agarose beads. Beads were washed five times, and the activity of the immobilized enzyme was assayed based on the phosphorylation of a substrate peptide for AMPK (SAMS; 0.2 mmol/L) by 0.2 mmol/L ATP (containing 2 μCi [γ-^32^P]ATP) in the presence and absence of 0.2 mmol/L AMP. Label incorporation into the substrate peptide was measured on a Racbeta 1214 scintillation counter. For measurement of intramuscular triglycerides, freeze-dried muscle samples were dissected free of visible connective tissue and blood. Lipid was extracted by a Folch extraction, the triacylglycerol was saponified in an ethanol/KOH solution at 60°C, and glycerol content was determined fluorometrically, as described previously (12).

## Protein analysis.

Tissue lysates (40 μg) were solubilized in Laemmli sample buffer and boiled for 5 min, resolved by SDS-PAGE on 10% polyacrylamide gels, transferred (semi-dry) to nitrocellulose membranes, blocked with 5% BSA, and immunoblotted with primary antibodies (in 2.5% BSA) overnight. After incubation in horseradish peroxidase–conjugated secondary antibody (in 2.5% BSA; Amersham Bioscience), immunoreactive proteins were detected with enhanced chemiluminescence (Amersham Bioscience) and quantified by densitometry (ChemiDoc XRS). Membranes were stripped, washed, and reprobed for total protein content or a housekeeping protein when appropriate. Total AKT was run on a separate gel that was not stripped. Samples from mice fed a chow diet or an HFD were run on separate gels. The antibodies used for detection of total AKT and β-actin and of phosphorylated JNK1/2, AKT, AMPKα, and ACCβ were purchased from Cell Signaling. The α-tubulin antibody was obtained from Sigma-Aldrich.

## Statistical analysis.

All analyses were performed using SAS 9.1. Comparisons between two groups were performed using unpaired Student *t* tests, and comparisons among more than two groups using one-way ANOVA followed by the Tukey post hoc test. Time-series data were analyzed with PROC MIXED. *P* < 0.05 was considered significant. When appropriate, values were logarithmically transformed to ensure normality and equal variance.
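For readers working outside SAS, the two-group and multi-group comparisons above can be sketched in Python with scipy and statsmodels. This is a minimal illustration with invented data, not the authors' analysis.

```python
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(1)

# Invented body-mass data (g) for two genotypes: unpaired Student t test
wt = rng.normal(28, 2, 10)
ko = rng.normal(31, 2, 10)
t, p = stats.ttest_ind(wt, ko)
print(f"t = {t:.2f}, P = {p:.3f}")

# Invented data for four groups (genotype x diet): one-way ANOVA + Tukey
groups = np.array(["wt_chow"] * 8 + ["ko_chow"] * 8 + ["wt_hfd"] * 8 + ["ko_hfd"] * 8)
values = np.concatenate([
    rng.normal(28, 2, 8), rng.normal(31, 2, 8),
    rng.normal(38, 3, 8), rng.normal(39, 3, 8),
])
f, p = stats.f_oneway(*(values[groups == g] for g in np.unique(groups)))
print(f"F = {f:.2f}, P = {p:.3f}")
print(pairwise_tukeyhsd(values, groups))  # Tukey post hoc comparisons
```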
# RESULTS

## IL-18 receptor–deficient mice are prone to weight gain that is not associated with hyperphagia.

To evaluate the role of IL-18 signaling in the etiology of body weight homeostasis, we first performed loss-of-function experiments by phenotyping mice with a global deletion of the α-isoform of the IL-18 receptor (IL-18R^−/−^). Consistent with a previous study (8), IL-18R^−/−^ mice became heavier than their wild-type counterparts (control [CON]) at ∼6 months of age when fed a regular chow diet (Fig. 1A).

The increase in body mass was attributable to an increase in adiposity, because we observed differences in percent fat (Fig. 1B) but not in percent fat-free mass (data not shown). This increase in adiposity was attributable to increases in visceral, but not subcutaneous, fat mass, because both epididymal and retroperitoneal, but not inguinal, fat pad masses were higher in IL-18R^−/−^ mice relative to CON (Fig. 1C). Although both IL-18R^−/−^ and CON mice had markedly increased body mass and fat pad mass when fed an HFD for 16 weeks, the differences observed between genotypes on a regular chow diet were not evident when animals were fed an HFD (Fig. 1A–D). As discussed, previous studies have demonstrated that IL-18^−/−^ mice are prone to weight gain because of hyperphagia (8,9). Although the earlier study reported weight gain and insulin resistance in IL-18R^−/−^ mice, it did not make reference to altered feeding behavior in these animals (8). To determine whether the increase in adiposity observed in IL-18R^−/−^ mice was attributable to an increase in food intake, a decrease in energy expenditure, or both, we next performed whole-body indirect calorimetry experiments. We observed no difference in food intake (Fig. 1E), whole-body oxygen consumption (Fig. 1F), or activity (data not shown) measured over 24 h when comparing IL-18R^−/−^ mice with CON, irrespective of diet. As expected, consumption of the HFD decreased whole-body RER, indicative of an increase in whole-body fat oxidation. When mice were provided with the HFD ad libitum, average RER (over a 72-h period) did not differ between IL-18R^−/−^ and CON mice, irrespective of diet (Fig. 1H). Of note, however, when mice were fasted and refed at the end of the 72-h period, IL-18R^−/−^ mice displayed a significantly higher RER in both the fasted and refed conditions relative to CON (Fig. 1H).

## IL-18R^−/−^ mice are insulin-resistant.

We next examined whether IL-18R^−/−^ mice were insulin-resistant. At 3 months of age, before the IL-18R^−/−^ mice became obese on a chow diet, there were no differences in insulin sensitivity as measured by an intraperitoneal insulin tolerance test (ITT) (data not shown). However, with age and irrespective of diet, IL-18R^−/−^ mice displayed whole-body insulin resistance, as measured by both fasting hyperinsulinemia (Fig. 2A) and impaired glucose clearance during an ITT (Fig. 2B and [Supplementary Fig. 1](http://diabetes.diabetesjournals.org/lookup/suppl/doi:10.2337/db12-1095/-/DC1)). We next examined insulin signaling in both muscle and liver by analyzing the phosphorylation of Akt (Ser^473^) before and 2 min after a bolus dose of insulin. Although insulin increased pAkt in the skeletal muscle (Fig. 2C) and liver (Fig. 2E) of CON animals fed a chow diet, this effect was markedly blunted in IL-18R^−/−^ mice (Fig. 2C and E). No differences in pAkt in either skeletal muscle (Fig. 2D) or liver (Fig. 2F) were observed when comparing IL-18R^−/−^ with CON mice fed an HFD.

## IL-18R^−/−^ mice store excess lipid in skeletal muscle and have inflamed livers and skeletal muscle.

Excess adiposity is often associated with ectopic lipid storage in metabolic tissues such as liver and skeletal muscle, which can mediate insulin resistance either directly or via the upregulation of serine/threonine kinases such as c-Jun N-terminal kinase (JNK) and inhibitor of κB kinase (5). Accordingly, we next measured intramyocellular and intrahepatic lipid content and the phosphorylation of JNK (Thr^183^/Tyr^185^) and inhibitor of κB kinase-αβ (Ser^180^/Ser^181^) in these tissues. Irrespective of diet, triacylglycerol content was higher in the skeletal muscle of IL-18R^−/−^ relative to CON mice (Fig. 3A). This was associated with elevated phosphorylation of JNK (Fig. 3B) but not of inhibitor of κB kinase ([Supplementary Fig. 2](http://diabetes.diabetesjournals.org/lookup/suppl/doi:10.2337/db12-1095/-/DC1)). Conversely, we did not observe any differences in intrahepatic triacylglycerol concentration when comparing IL-18R^−/−^ with CON mice, irrespective of diet (Fig. 3C). Notwithstanding, phosphorylation of JNK (Fig. 3D) and the mRNA expression of the key fatty acid synthesis transcription factor and enzyme, sterol regulatory element–binding protein-1c and *FAS* (Fig. 3E and F), were elevated in the liver of IL-18R^−/−^ relative to CON mice on both the chow diet and the HFD. However, no differences were observed in the mRNA expression of the key gluconeogenic enzymes *PEPCK* and glucose-6 phosphate dehydrogenase when comparing IL-18R^−/−^ with CON mice, irrespective of diet ([Supplementary Fig. 3](http://diabetes.diabetesjournals.org/lookup/suppl/doi:10.2337/db12-1095/-/DC1)). Given that neither hepatosteatosis nor the expression of key enzymes involved in regulating hepatic glucose production differed between IL-18R^−/−^ and CON mice, it is unlikely that changes in liver insulin sensitivity were responsible for the reduced whole-body insulin sensitivity observed in IL-18R^−/−^ mice, although this possibility cannot be entirely ruled out.

## IL-18R^−/−^ mice have reduced AMPK signaling in metabolic tissues.

Because IL-18R^−/−^ mice are prone to weight gain on a normal diet and to ectopic lipid storage independent of feeding, we next examined whether pathways associated with lipid oxidation were impaired.
One major pathway regulating fatty acid oxidation is AMPK. AMPK phosphorylates ACCβ, inhibiting ACC activity; this in turn decreases malonyl-CoA content, relieving the inhibition of CPT1 and increasing fatty acid oxidation. No significant differences between genotypes were observed in the phosphorylation of AMPK (Thr^172^) (data not shown). However, the phosphorylation of ACCβ (Ser^218^), a downstream marker of AMPK activity, was reduced in the skeletal muscle, liver, and adipose tissue of IL-18R^−/−^ mice fed a chow diet (Fig. 4A–C). This effect was maintained in skeletal muscle (Fig. 4A) and liver (Fig. 4B), but not adipose tissue (Fig. 4C), when mice were fed the HFD.

## Exogenous IL-18 treatment increases AMPK signaling and fat oxidation in skeletal muscle in vitro and ex vivo.

Because IL-18R^−/−^ mice become obese, store more lipid in skeletal muscle, and have defective ACCβ phosphorylation in this organ, we next performed in vitro and ex vivo experiments in muscle cells and whole muscle strips to confirm the role of IL-18 in AMPK signaling and fat oxidation in this important metabolic tissue. In initial experiments, we demonstrated that as little as 1.0 ng/mL recombinant IL-18 protein was sufficient to increase phosphorylation of both AMPK (Thr^172^) (Fig. 5A) and ACCβ (Ser^218^) (Fig. 5B) in L6 myotubes. We next performed experiments in isolated intact soleus muscle, as previously reported (16,20,21). Treating these muscles with 100 ng/mL recombinant IL-18 was sufficient to increase both palmitate oxidation (Fig. 5C) and AMPK phosphorylation (Thr^172^) (Fig. 5D).

## Ectopic expression of IL-18 in a single tibialis anterior muscle is sufficient to protect against excess adipose tissue storage in mice fed an HFD.

To determine whether the results obtained in muscle cells and ex vivo muscle strips also held in vivo, we used in vivo electroporation to overexpress *IL-18* cDNA in the tibialis anterior muscles of C57BL/6J mice placed on an HFD for 4 weeks. Using this technique, we have previously observed a transfection efficiency of ∼60%, as measured by electroporation of a GFP construct as a control (21). IL-18 protein expression in the tibialis anterior was increased 30- to 40-fold above basal compared with mice in which the tibialis anterior was electroporated with an empty vector (sham) (Fig. 6A). No difference in body weight was observed when comparing IL-18 with sham electroporated mice (Fig. 6B). Nevertheless, adiposity was reduced when comparing IL-18 with sham electroporated mice at 4 weeks (Fig. 6C and D and [Supplementary Fig. 4](http://diabetes.diabetesjournals.org/lookup/suppl/doi:10.2337/db12-1095/-/DC1)). However, this reduction in adiposity was insufficient to increase whole-body insulin sensitivity or glucose tolerance, as measured by ITT and glucose tolerance test ([Supplementary Fig. 5](http://diabetes.diabetesjournals.org/lookup/suppl/doi:10.2337/db12-1095/-/DC1)).

## Ectopic expression of IL-18 in a single tibialis anterior muscle increases AMPK signaling and markers of lipid oxidation in this organ.

We next measured AMPK signaling and markers of lipid oxidation in IL-18 and sham electroporated muscles.
Both AMPK activity (Fig. 7A) and ACCβ phosphorylation (Ser^218^) (Fig. 7B) were markedly elevated in IL-18 compared with sham electroporated muscles. In addition, we observed increased mRNA abundance of β-hydroxyacyl-CoA dehydrogenase, a key enzyme involved in mitochondrial function (Fig. 7C), and of *CPT1* (Fig. 7D), the enzyme that controls the transfer of long-chain fatty acyl-CoA into mitochondria and enhances rates of fatty acid oxidation, in the IL-18 electroporated mice relative to sham mice. It is now well known that skeletal muscle can act as an endocrine organ, producing "myokines" that mediate tissue cross-talk (22). To examine whether IL-18 could act as a myokine when overexpressed in skeletal muscle, we examined circulating levels of IL-18 and markers of insulin sensitivity, fat oxidation, and inflammation in other tissues such as the liver and adipose tissue. Despite the increase in intramuscular IL-18 expression with electroporation, plasma IL-18 was not elevated in the IL-18 electroporated mice relative to sham ([Supplementary Fig. 5](http://diabetes.diabetesjournals.org/lookup/suppl/doi:10.2337/db12-1095/-/DC1)). It was therefore not surprising that pAkt, pAMPK, pACC, and pJNK were unaltered in the liver and adipose tissue of IL-18 electroporated mice relative to sham mice ([Supplementary Figs. 5 and 6](http://diabetes.diabetesjournals.org/lookup/suppl/doi:10.2337/db12-1095/-/DC1)). Together, these data provide evidence that IL-18 can activate AMPK in skeletal muscle in vivo.

# DISCUSSION

IL-18 signaling has been implicated in the etiology of metabolic homeostasis and insulin resistance; however, studies in IL-18–deficient mice have suggested that IL-18 is required to prevent hyperphagia (8,9). In the current study, rather than using a model of genetic deletion of IL-18, we initially studied mice that harbor a global deletion of the functional IL-18R. We show that IL-18R^−/−^ mice are prone to weight gain on a chow diet and develop insulin resistance, a phenotype associated with ectopic skeletal muscle lipid deposition, inflammation, and reduced AMPK signaling.

The IL-18R^−/−^ mice displayed significantly increased body weight and fat pad mass after ∼26 weeks of age. Although others have shown that IL-18–deficient mice display hyperphagia (8,9), we did not observe this in the IL-18R^−/−^ mice. We were careful to monitor food intake both in the animals' normal habitat and during metabolic caging, so we are confident that food intake was comparable between the IL-18R^−/−^ and CON mice. Given the identical oxygen consumption between the IL-18R^−/−^ and CON mice, why would the IL-18R^−/−^ mice gain more weight if both energy intake and expenditure were comparable? It should be noted that the difference in body weight between the IL-18R^−/−^ and CON mice was ∼10% by the end of the study (Fig. 1A), whereas the overall difference in percent body fat was ∼4% (Fig. 1B). This would equate to an increase of approximately 0.15 g fat, or 0.70 kJ of energy, per day. The daily energy expenditure of a mouse has been estimated at 42 kJ (23). Consequently, a difference of less than 2% in daily energy expenditure would be sufficient to produce the increased body weight but would be unlikely to be detected with available techniques, such as metabolic caging.
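This back-of-envelope calculation is easy to verify. The sketch below simply restates the figures quoted in the text (0.70 kJ/day of excess storage against an estimated 42 kJ/day of expenditure) and checks the under-2% claim; it introduces no new data.

```python
# Energy-balance check using the figures quoted in the text.
excess_kj_per_day = 0.70      # energy stored as the extra ~0.15 g fat per day
daily_expenditure_kj = 42.0   # estimated daily energy expenditure of a mouse (23)

fraction = excess_kj_per_day / daily_expenditure_kj
print(f"Required imbalance: {fraction:.1%} of daily expenditure")  # ~1.7%, i.e. <2%
```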
Similar problems have been encountered in other mouse models of obesity (23,24), and work from our group has recently observed such an anomaly (25). Notwithstanding this apparent anomaly, it appears, based on the data reported here, that IL-18R^−/−^ mice gain weight and store excess lipid in skeletal muscle, resulting in whole-body and skeletal muscle insulin resistance.

It has been suggested that the IL-18R might also be activated by ligands other than IL-18 (26,27), which could potentially explain why hyperphagia was previously observed in IL-18–deficient mice (8,9) but not in the IL-18R^−/−^ mice in the current study. Interestingly, when mice fed an HFD were fasted and refed, IL-18R^−/−^ mice displayed a significantly higher RER in the refed condition (Fig. 1H). This suggests that mice lacking the IL-18R cannot readily oxidize lipids and instead rely on carbohydrate as the preferred energy substrate during refeeding. This interpretation is supported by our gain-of-function data showing that IL-18 increased fat oxidation. Interestingly, Zorrilla et al. (9) observed that IL-18–deficient mice were not hyperphagic when fed an HFD and suggested that these mice differentially process carbohydrate-rich compared with lipid-rich diets, or differentially use these macronutrients as fuel. Our study supports this hypothesis.

Because we performed whole-body insulin tolerance tests, we cannot ascertain whether the whole-body insulin resistance observed in the IL-18R^−/−^ mice relative to control mice was attributable to insulin resistance in skeletal muscle, liver, or a combination of the two. However, several lines of evidence suggest that the defect in the IL-18R^−/−^ mice was primarily in skeletal muscle, and this is the rationale for studying this organ in depth. First, it is well acknowledged that ectopic lipid deposition is a primary mechanism leading to insulin resistance (28). Whereas intramuscular triglycerides were elevated in the IL-18R^−/−^ mice relative to CON mice, no such increase was observed in the liver (Fig. 3). Second, the mRNA expression of the key gluconeogenic enzymes PEPCK and glucose-6 phosphate dehydrogenase did not differ in the liver of the IL-18R^−/−^ mice. Third, even though the IL-18R^−/−^ mice fed an HFD were insulin-resistant relative to CON mice, there was no evidence of decreased pAkt (Fig. 2F) or of altered mRNA expression of *SREBP1* or *FAS* (Fig. 3E and F) in the liver. Together, these data suggest that skeletal muscle was the site of the primary defect, although we acknowledge that effects of IL-18R deletion on the liver cannot be completely discounted, because we did not directly measure insulin sensitivity in this organ.

Of note was our observation that pJNK was markedly elevated in the livers of the IL-18R^−/−^ mice relative to CON mice despite equivalent lipid content when animals were fed a chow diet (Fig. 3). Although this may seem counterintuitive, the observation is not novel: we have previously observed such a phenomenon in IL-6–deficient mice (25).
A potential mechanism has recently been proposed by Flavell and colleagues, who demonstrated that the NLRP6 and NLRP3 inflammasomes and, importantly, IL-18 negatively regulate nonalcoholic fatty liver disease/nonalcoholic steatohepatitis progression, as well as multiple aspects of the metabolic syndrome, via modulation of the gut microbiota, in a manner not necessarily related to hepatosteatosis (29). Although speculative, the increased hepatic inflammation observed in the IL-18R^−/−^ mice in the presence of relatively normal lipid levels may be related to such a mechanism.

Based on our loss-of-function and gain-of-function models, IL-18 signaling affects fatty acid oxidation rates in skeletal muscle through activation of AMPK. As discussed, AMPK phosphorylates ACCβ, inhibiting ACC activity; this decreases malonyl-CoA content, relieving the inhibition of CPT1 and increasing fatty acid oxidation. The phosphorylation of ACCβ was reduced in the skeletal muscle, liver, and adipose tissue of IL-18R^−/−^ mice (Fig. 4). Moreover, when cultured skeletal muscle cells or isolated skeletal muscle strips were treated with IL-18, phosphorylation of AMPK and/or ACC was increased and, in the case of intact ex vivo–treated skeletal muscle, this increase was associated with enhanced fatty acid oxidation. Finally, when IL-18 was overexpressed in skeletal muscle in vivo, AMPK activity and ACC phosphorylation were increased in the electroporated muscle (Fig. 7A and B). Taken together, the data provide evidence implicating IL-18 in the activation of AMPK.

It is now well known that many cytokines, including leptin, adiponectin, ghrelin, IL-6, and ciliary neurotrophic factor (CNTF), can activate AMPK (30,31), but this is the first report indicating that IL-18 can act as an AMPK agonist. This observation is consistent with those previous studies, because IL-18 can act as an activator of STAT3 (4). Work from our group has previously shown that members of the IL-6 family of cytokines, which potently activate STAT3, also enhance fat oxidation via AMPK (11,12). Importantly, when mice harboring a truncation of the COOH-terminal domain of the gp130 receptor, which eliminates the tyrosine residues required for STAT3 activation (gp130^ΔSTAT^ mice), are treated with CNTF, the phosphorylation of STAT3 is abolished, as is the activation of AMPK (12).

Although feeding mice an HFD abolished the genotype differences in body weight, fat mass, and insulin signaling in skeletal muscle and liver, the IL-18R^−/−^ mice nevertheless displayed elevated fasting insulin levels and impaired insulin tolerance as measured by an ITT (Fig. 2). Although speculative, this may be attributable to the fact that under HFD conditions the activation of the AMPK pathway remained impaired in the IL-18R^−/−^ mice, at least in skeletal muscle and liver (Fig. 4), leading to elevated lipid levels in the skeletal muscle of IL-18R^−/−^ mice under HFD conditions (Fig. 3).

In summary, we have identified that IL-18 can activate AMPK. Moreover, mice that harbor a genetic deletion of the functional IL-18R are prone to weight gain and to the development of insulin resistance and inflammation in important metabolic tissues such as skeletal muscle and liver.
Therefore, our data add IL-18 to a growing list of catabolic proinflammatory cytokines that, paradoxically, are required to maintain pathways important for fatty acid oxidation and thereby prevent insulin resistance.

## ACKNOWLEDGEMENTS

This study was supported, in part, by a grant from the National Health and Medical Research Council of Australia (NHMRC grant no. 526606). This study was further supported by the Danish Council for Independent Research–Medical Sciences, the Commission of the European Communities (grant agreement no. 223576-MYOAGE), and by grants from the Novo Nordisk Foundation, Hørslevfonden, the Danish National Research Foundation (#10-083807), Højmosegårdlegatet, Fonden for Lægevidenskabens Fremme, and Direktør Jacob Madsen og Hustru Olga Madsens Fond. The Centre of Inflammation and Metabolism (CIM) and The Rodent Metabolic Phenotyping Center are part of the UNIK Project: Food, Fitness & Pharma for Health and Disease (see [www.foodfitnesspharma.ku.dk](http://www.foodfitnesspharma.ku.dk)), supported by the Danish Ministry of Science, Technology, and Innovation. The CIM is a member of DD2, the Danish Center for Strategic Research in Type 2 Diabetes (the Danish Council for Strategic Research, grant nos. 09-067009 and 09-075724). The Copenhagen Muscle Research Centre is supported by a grant from the Capital Region of Denmark. M.A.F. is a Senior Principal Research Fellow, M.J.W. is a Senior Research Fellow, and C.R.B. and V.B.M. are Career Development Fellows of the NHMRC. The CIM is supported by a grant from the Danish National Research Foundation (#02-512-55). B.L. was supported by a grant from the Danish National Research Foundation (#09-063656) and received postdoctoral fellowship support from the Danish National Research Foundation.

No other potential conflicts of interest relevant to this article were reported.

B.L. designed research, performed and/or analyzed research, and wrote the manuscript. V.B.M., C.B., P.H., T.L.A., E.E., M.J.W., C.R.B., O.H.M., S.S., C.R., J.A., H.P., S.D., T.J.A., and A.N.M. performed and/or analyzed research. J.H. contributed new reagents and analytical tools. B.K.P. designed research. M.A.F. designed research and wrote the manuscript. All authors contributed to the writing of the final submitted version of the manuscript.
B.L. is the guarantor of this work and, as such, had full access to all of the data in the study and takes responsibility for the integrity of the data and the accuracy of the data analysis.

The authors acknowledge the assistance of Betina Mentz, Ruth Rousing, Hanne Villumsen, Lone Nielsen, Andreas Nygaard Madsen (University of Copenhagen), Steve Risis (Baker IDI Heart and Diabetes Institute), and the Rodent Metabolic Phenotyping Center (University of Copenhagen).

# REFERENCES

author: Elaine R Mardis
date: 2010
institute: 1The Genome Center at Washington University School of Medicine, 4444 Forest Park Blvd, St Louis, MO 63108, USA
references:
title: The $1,000 genome, the $100,000 analysis?

Having recently attended the Personal Genomes meeting at Cold Spring Harbor Laboratory (I was an organizer this year), I was struck by the number of talks that described the use of whole-genome sequencing and analysis to reveal the genetic basis of disease in patients. These patients included a child with inflammatory bowel disease, a child with severe combined immunodeficiency, two siblings affected with Miller syndrome, and several with cancers of different types. Although each presenter emphasized the rapidity with which these data can now be generated using next-generation sequencing instruments, they also listed the large number of people involved in the analysis of these datasets. The expertise required to 'solve' each case included molecular and computational biologists, geneticists, pathologists and physicians with exquisite knowledge of the disease and of treatment modalities, research nurses, genetic counselors, and IT and systems support specialists, among others. While much of the attendant effort was focused on the absolute importance of obtaining the correct diagnosis, the large number of specialists was critical for the completion of the data analysis, the annotation of variants, the interpretive 'filtering' necessary to deduce the causative or 'actionable' variants, the clinical verification of these variants, and the communication of results and their ramifications to the treating physician, and ultimately to the patient. At the end of the day, although the idea of clinical whole-genome sequencing for diagnosis is exciting and potentially life-changing for these patients, one does wonder how, in the clinical translation required for this practice to become commonplace, such a 'dream team' of specialists would be assembled for each case. In other words, even if the cost and turnaround time of generating sequencing data continue to fall precipitously, the cost of 'team' analysis seems unlikely to follow suit immediately.
However, rather than concluding from this reasoning that diagnosis by sequencing is unlikely to become widespread, it is perhaps more fruitful to consider what would be required for it to happen. I therefore offer the following as food for thought.

One source of difficulty in using resequencing approaches for diagnosis centers on the need to improve the quality and completeness of the human reference genome. In terms of quality, it is clear that the clone-based methods used to map, assign a minimal tiling path, and sequence the human reference genome did not yield a properly assembled or contiguous sequence equally across all loci. Lack of proper assembly is often due to the collapsing of sequence within repetitive regions, such as segmental duplications, wherein genes can be found once the correct clones are identified and sequenced. At some loci, the current reference contains a single nucleotide polymorphism (SNP) allele that occurs at the minor allele frequency rather than the major allele. In addition, some loci cannot be represented by a single tiling path and require multiple clone tiling paths to capture all of the sequence variation. All of these deficiencies, and others not cited here, provide a less-than-optimal alignment target for next-generation sequencing data and can confound the analytical validity of the variants necessary to properly interpret patient-derived data. Hence, although it is difficult work to perform, the ongoing efforts of the Genome Reference Consortium [1] to improve the overall completeness and correctness of the human reference genome should be enhanced.

Along these lines, although projects such as the early SNP Consortium [2], the subsequent HapMap projects [3-5], and more recently the 1,000 Genomes Project [6] have identified millions of SNPs in multiple ethnic groups, there is much more diversity in the human genome than single-base differences. In some ways, the broader scope of 'beyond SNP' diversity of the genome across human populations remains mysterious, including common copy number polymorphisms, large insertions and deletions, and inversions. Mining the 1,000 Genomes data using methods that identify genome-wide structural variation should augment this considerably [7], with validation playing an important role, as many methods are still nascent. Lastly, devising clever ways to provide all such classes of variants as a 'searchable space' for sequence data alignment remains a significant challenge, as does the development of sequence alignment algorithms that facilitate the analysis of structurally complex loci.

How well do we understand the functions encoded by our genome? Certainly, comprehensive functional information about proteins, including the impact of mutations, is complete for relatively few genes. The development of high-throughput systems for biochemistry and enzymology could have a dramatic impact on this deficiency and would add vitality to these areas of scientific endeavor. Efforts to annotate regulatory protein binding sites, sites of RNA-mediated regulatory mechanisms, and other motifs that contribute to transcriptional regulation in the human genome must continue. Improved understanding of these regions, and thus their annotation, will require the power of model-organism-based systems to identify and characterize functional proteins and mechanisms that are shared with humans.
We must also transfer these findings into human cell experimental systems that allow researchers to examine the impact of mutations and other genomic alterations on cellular pathways and the resulting disease biology. With functional consequences in hand, we can begin to understand and establish the clinical validity of genomic variants, effectively enabling the correlation of variant(s) with resultant phenotype(s).

If our efforts to improve the quality, variation catalogs, and annotation of the human reference sequence are successful, how do we avoid the pitfall of having cheap human genome resequencing but complex and expensive manual analysis to make clinical sense of the data? One approach would emphasize the development of 'clinical grade' interpretational analysis pipelines to perform much of the initial discovery from datasets derived from massively parallel sequencing [8]. Although such pipelines already exist in the research setting [9], manual checks and orthogonal validation of variants are required because the analytical approaches are still under development. For patient diagnosis, such validation could initially be performed in a clinical laboratory medicine setting, but ultimately we must develop sophisticated analytical approaches and quality filters that enable high-confidence variant detection solely from the primary data. All discovered variants would then be interpreted in the context of the ever-improving human genome annotation and evaluated in the contexts of medical genetics, demonstrated clinical validity, and the pharmaceutical databases (when appropriate) to identify causative or therapeutically actionable genes. Ultimately, as in medicine today, the results will require interpretation by a physician, which raises a separate but equally important issue: the significant need to develop and implement training programs in genomics for medical professionals. Pathologists and genetic counselors will be first in line for training programs focused on genomic diagnostics, and improving the genomics education of medical students will also be an early priority. More challenging will be the genomics education of practicing physicians and other medical professionals, many of whom do not need genetics to perform their valuable roles in daily health care, but who will soon be confronted by increasingly well-informed patients who expect their doctors to be as well versed as they are about genome-guided diagnosis and treatment.
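To make the idea of an interpretational pipeline concrete, here is a deliberately simplified sketch of the kind of quality-and-annotation filtering such a pipeline might automate. The thresholds, field names, and gene list are illustrative assumptions, not part of any published pipeline.

```python
from dataclasses import dataclass

@dataclass
class Variant:
    gene: str
    qual: float      # variant call quality score
    pop_freq: float  # population allele frequency (e.g., from a reference panel)
    effect: str      # predicted consequence, e.g. "missense", "synonymous"

# Hypothetical list of genes with established clinical interpretability
ACTIONABLE_GENES = {"GENE_A", "GENE_B"}

def passes_filters(v: Variant) -> bool:
    """Keep high-confidence, rare, protein-altering calls in genes of interest."""
    return (
        v.qual >= 30.0                  # quality filter
        and v.pop_freq < 0.01           # rare in the population
        and v.effect != "synonymous"    # likely to alter the protein
        and v.gene in ACTIONABLE_GENES  # clinically interpretable today
    )

calls = [
    Variant("GENE_A", 99.0, 0.0002, "missense"),
    Variant("GENE_A", 12.0, 0.0001, "missense"),  # fails the quality filter
    Variant("GENE_C", 99.0, 0.0003, "missense"),  # not on the gene list
]
for v in filter(passes_filters, calls):
    print(v)
```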
A final word on the important topic of patient access to genome-guided medicine seems necessary and appropriate. The current high cost of whole-genome sequencing and analysis relative to most clinical diagnostic assays, coupled with the fact that these costs are not currently reimbursed by insurers, might mean that only those with the means to pay for the test will have access. Perhaps worse, those with the fattest wallets might pay extra for a place higher in the queue, denying earlier access to patients who more desperately need the information. Although there are no easy answers here, one plausible solution might be the establishment of funds at major medical centers, where genome-guided medicine is likely to be practiced first, to pay for genomic sequencing, diagnosis, and associated costs, thereby allowing equitable access to this new assay.

# Competing interests

The author declares that they have no competing interests.

## Acknowledgements

I thank Deanna Church, Timothy Ley and W Richard McCombie for their critical reading and suggestions.

date: 2018-05
title: WHO and UNICEF issue new guidance to promote breastfeeding in health facilities globally

**11 APRIL 2018 | GENEVA -** WHO and UNICEF today issued new ten-step guidance to increase support for breastfeeding in health facilities that provide maternity and newborn services. Breastfeeding all babies for the first 2 years would save the lives of more than 820 000 children under age 5 annually.

The Ten Steps to Successful Breastfeeding underpin the Baby-friendly Hospital Initiative, which both organizations launched in 1991. The practical guidance encourages new mothers to breastfeed and informs health workers how best to support breastfeeding.

Breastfeeding is vital to a child's lifelong health, and reduces costs for health facilities, families, and governments. Breastfeeding within the first hour of birth protects newborn babies from infections and saves lives. Infants are at greater risk of death due to diarrhoea and other infections when they are only partially breastfed or not breastfed at all. Breastfeeding also improves IQ, school readiness and attendance, and is associated with higher income in adult life. It also reduces the risk of breast cancer in the mother.

"Breastfeeding saves lives. Its benefits help keep babies healthy in their first days and last well into adulthood," says UNICEF Executive Director Henrietta H. Fore. "But breastfeeding requires support, encouragement and guidance. With these basic steps, implemented properly, we can significantly improve breastfeeding rates around the world and give children the best possible start in life."

WHO Director-General Dr Tedros Adhanom Ghebreyesus says that in many hospitals and communities around the world, whether a child can be breastfed or not can make the difference between life and death, and whether a child will develop to reach his or her full potential.

"Hospitals are not there just to cure the ill. They are there to promote life and ensure people can thrive and live their lives to their full potential," says Dr Tedros.
\"As part of every country's drive to achieve universal health coverage, there is no better or more crucial place to start than by ensuring the Ten Steps to Successful Breastfeeding are the standard for care of mothers and their babies.\"\n\nThe new guidance describes practical steps countries should take to protect, promote and support breastfeeding in facilities providing maternity and newborn services. They provide the immediate health system platform to help mothers initiate breastfeeding within the first hour and breastfeed exclusively for six months.\n\nIt describes how hospitals should have a written breastfeeding policy in place, staff competencies, and antenatal and post-birth care, including breastfeeding support for mothers. It also recommends limited use of breastmilk substitutes, rooming-in, responsive feeding, educating parents on the use of bottles and pacifiers, and support when mothers and babies are discharged from hospital.\n\n**Note to editors**\n\nThe Ten Steps are based on the WHO guidelines, issued in November 2017, titled Protecting, promoting and supporting breastfeeding in facilities providing maternity and newborn services.\n\nEarly initiation of breastfeeding, within one hour of birth, protects the newborn from acquiring infections and reduces newborn mortality. Starting breastfeeding early increases the chances of a successful continuation of breastfeeding. Exclusive breastfeeding for six months has many benefits for the infant and mother. Chief among these is protection against gastrointestinal infections and malnutrition, which are observed not only in developing but also industrialized countries.\n\nBreast-milk is also an important source of energy and nutrients in children aged 6\u201323 months. It can provide half or more of a child's energy needs between 6-12 months, and one-third of energy needs between 12-24 months. Breast-milk is also a critical source of energy and nutrients during illness, and reduces mortality among children who are malnourished.\n\nChildren and adolescents who were breastfed as babies are less likely to be overweight or obese.\n\nAvailable from: ","meta":{"dup_signals":{"dup_doc_count":174,"dup_dump_count":48,"dup_details":{"curated_sources":2,"2023-40":4,"2023-23":1,"2023-14":3,"2023-06":1,"2022-49":2,"2022-40":3,"2022-33":1,"2022-27":2,"2022-21":1,"2022-05":1,"2021-43":2,"2021-39":4,"2021-25":2,"2021-21":2,"2021-17":2,"2021-10":2,"2021-04":2,"2020-50":3,"2020-45":1,"2020-40":5,"2020-34":2,"2020-29":3,"2020-16":1,"2020-10":3,"2020-05":2,"2019-51":1,"2019-43":1,"2019-39":5,"2019-35":3,"2019-30":2,"2019-26":5,"2019-22":5,"2019-18":12,"2019-13":7,"2019-09":8,"2019-04":10,"2018-51":9,"2018-47":14,"2018-43":5,"2018-39":3,"2018-34":4,"2018-30":1,"2018-26":5,"2018-17":10,"2024-30":2,"2024-26":2,"2024-18":1,"2024-10":2}},"file":"PMC6118173"},"subset":"pubmed_central"} {"text":"abstract: The high levels of intelligence seen in humans, other primates, certain cetaceans and birds remain a major puzzle for evolutionary biologists, anthropologists and psychologists. It has long been held that social interactions provide the selection pressures necessary for the evolution of advanced cognitive abilities (the 'social intelligence hypothesis'), and in recent years decision-making in the context of cooperative social interactions has been conjectured to be of particular importance. 
Here we use an artificial neural network model to show that selection for efficient decision-making in cooperative dilemmas can give rise to selection pressures for greater cognitive abilities, and that intelligent strategies can themselves select for greater intelligence, leading to a Machiavellian arms race. Our results provide mechanistic support for the social intelligence hypothesis, highlight the potential importance of cooperative behaviour in the evolution of intelligence and may help us to explain the distribution of cooperation with intelligence across taxa.
author: Luke McNally; Sam P. Brown; Andrew L. Jackson (*author for correspondence)
date: 2012-08-07
institute: 1Department of Zoology, School of Natural Sciences, Trinity College Dublin, Dublin 2, Republic of Ireland; 2Centre for Biodiversity Research, Trinity College Dublin, Dublin 2, Republic of Ireland; 3Centre for Immunity, Infection and Evolution, School of Biological Sciences, University of Edinburgh, Edinburgh EH9 3JT, UK
references:
title: Cooperation and the evolution of intelligence

# Introduction

Natural selection never favours excess; if a lower-cost solution is present, it is selected for. Intelligence is a hugely costly trait. The human brain is responsible for 25 per cent of total glucose use, 20 per cent of oxygen use and 15 per cent of our total cardiac output, although making up only 2 per cent of our total body weight [1]. Explaining the evolution of such a costly trait has been a long-standing goal in evolutionary biology, leading to a rich array of explanatory hypotheses, ranging from evasion of predators to intelligence acting as an adaptation for the evolution of culture [2–4]. Among the proposed explanations, arguably the most influential has been the 'social intelligence hypothesis', which posits that it is the varied demands of social interactions that have led to advanced intelligence [4–12].

In recent years, the cognitive demands of reciprocity, one of the mechanisms posited as important in the maintenance of cooperation in humans and other intelligent taxa, have been suggested to be a causal factor in the evolution of advanced intelligence and human language. This has been particularly apparent in the evolutionary game theory literature, where conjecture regarding this relationship is frequent [13–16]. Indeed, there is a rich history of work relating intelligence and reciprocity in the game theory literature, though most of this work has focused on the cognitive abilities required for the evolution of cooperation, rather than the possible role that the negotiation of these interactions has in the evolution of intelligence [17–24]. As well as the cognitive abilities required for the coordination of partners during cooperative acts, both direct (decisions based on what you do to me) and indirect (decisions based on what you do to others) reciprocity have additional demands in terms of the ability to remember previous interactions and to integrate across these interactions to make decisions in cooperative dilemmas [25–31].
These cognitive demands, combined with the occurrence of cooperative behaviour between unrelated individuals in intelligent taxa, suggest that selection for these mechanisms of cooperation could, at least in part, be responsible for advanced cognitive abilities [26].

The many subfields within the social intelligence hypothesis have shown a rich elaboration of verbal arguments, and data from comparative studies support many of their predictions [32–34]. However, verbal reasoning and comparative analysis alone are not sufficient to assess the relative merit of competing hypotheses [35]; mechanistic models are needed to assess the plausibility of these different explanations for advanced cognition.

Here, we use an artificial neural network model to focus on the potential for direct reciprocity, a behaviour that is widespread in humans, to select for advanced cognitive abilities. Rather than manufacturing some form of functional relationship between intelligence and fitness, we allow this relationship to emerge based on the demands of decision-making in two social dilemmas, and analyse the consequences for the evolution of intelligence.

# Material and methods

## The social dilemmas

In order to consider the dynamics of cooperative social interactions, we use the framework of two classic social dilemmas: the iterated prisoner's dilemma (IPD) and the iterated snowdrift game (ISD). In both games, two players must choose between cooperation and defection during repeated rounds. In the event of mutual cooperation or mutual defection, both players receive payoffs *R* or *P*, respectively, while a defector exploiting a cooperator gets *T* and the cooperator gets *S*. In the prisoner's dilemma, the benefit of an individual's cooperative behaviour goes to their opponent, while they pay all of the costs (e.g. food sharing, reciprocal coalitionary behaviour). This results in a payoff order of *T* > *R* > *P* > *S*. Here, the worst possible outcome for an individual is to cooperate while their opponent defects, while the best outcome is to defect while the opponent cooperates. In the snowdrift game, the benefits of cooperative behaviours are shared between opponents, and the costs are shared if both individuals cooperate (e.g. cooperative hunting, coalitionary behaviour with shared benefits). This results in a payoff order of *T* > *R* > *S* > *P*. Again, the best outcome for an individual is to defect while their opponent cooperates, though the worst possible outcome for an individual is for neither them nor their opponent to cooperate. In both games, the overall payoff (sum of both individuals' payoffs) is greatest for mutual cooperation and lowest for mutual defection.

All of this means that the equilibrium frequency of cooperation for a single interaction (single-interaction Nash equilibrium) will be zero in the prisoner's dilemma but will be non-zero for a single-interaction snowdrift game [36]. These single-interaction Nash equilibria provide a useful benchmark against which to assess the effects of contingent behaviours (i.e. those that depend on the behaviour of others) in repeated interactions.

## The neural network model

Any attempt to define a metric of intelligence will always be a contentious matter.
However, comparative studies across taxa have usually focused on two main classes of brain properties as proxies of intelligence: metrics based on relative or absolute size of the brain or certain brain regions, and metrics based on more specific properties such as numbers of cortical neurons [37]. It is with this tradition in mind that we develop our artificial neural network model, with evolving network structure, using the number of neurons, *i*, as our proxy for intelligence. Each individual can display varying levels of intelligence, from simply being characterized by a binary response of always cooperate or always defect to large neural networks that possess complex neuronal structure, allowing for computations to inform decisions based on payoffs and the integration of longer-term memory into their current decision-making processes.

Each individual in our simulated populations possesses a neural network that determines their behaviour in social dilemmas (illustrated in figure 1). The networks each have two input nodes (which receive the payoffs of the individual and their opponent in the previous round as inputs) and one output node (giving the probability that they cooperate during their next interaction). The hidden layer of each individual's network has an evolving structure, possessing different numbers of cognitive and context nodes [38] (figure 1). Cognitive nodes allow for computation based on the values of network inputs and context nodes, which in turn allow for the build-up of memory based on previous states of their associated cognitive nodes.

Computation in the network is implemented via synchronous updating of nodes. The value of each input node is passed to each of the network's cognitive nodes, multiplied by the weight linking the two nodes. Each cognitive node is also passed the current value of its associated context node (if it possesses one), multiplied by the weight linking the two nodes. The cognitive nodes sum across all of the weighted values that they receive and pass this value through a sigmoidal squashing function, resulting in a value between 0 and 1, analogous to a probability of activation. All context nodes are then passed the value of their associated cognitive nodes. This allows the context nodes to build up memory of previous interactions without having to store the actual sequence of events that have occurred. The internal states of these context nodes could be considered analogous to emotional states. Finally, the values at all cognitive nodes are then passed to the output node (multiplied by their weights), summed and again passed through a sigmoidal squashing function. This output gives the probability that the individual will cooperate in the current round. As the sigmoidal function asymptotes to 0 at −∞ and 1 at ∞, there will always be inherent noise in the network's probabilistic decision. This property of the function also means that it is easy to minimize noise in the network's behaviour if that behaviour shows a lack of contingency (as the node can always be near one of the asymptotes), while contingent behaviour will show greater noise (as switching is more difficult to achieve near the asymptotes). This formulation has intuitive appeal over simply adding extraneous noise to individual decisions, as in nature we would expect individuals to make few mistakes when their behaviour is non-contingent, while more complex decisions would be expected to be more error-prone.
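
To make the updating scheme concrete, the following is a minimal sketch of one forward pass through such a network, assuming the wiring described above (two payoff inputs, a hidden layer of cognitive nodes with optional Elman-style context nodes, and a threshold per node); the class, its names and the exact parameterization are our own illustration, not the authors' code:

```python
import numpy as np

def sigmoid(x):
    # the sigmoidal squashing function described in the text
    return 1.0 / (1.0 + np.exp(-x))

class DecisionNetwork:
    """Sketch of one individual's network: n cognitive nodes, plus a boolean
    mask marking which of them carry a context (memory) node."""

    def __init__(self, n, ctx_mask, rng):
        self.w_in = rng.normal(size=(n, 2))       # weights from the 2 payoff inputs
        self.w_ctx = rng.normal(size=n)           # weight from each context node
        self.w_out = rng.normal(size=n)           # weights to the output node
        self.thresholds = rng.normal(size=n + 1)  # per-node thresholds (last: output)
        self.ctx_mask = ctx_mask
        self.context = np.zeros(n)                # memory carried between rounds

    def decide(self, own_payoff, opp_payoff):
        """Return the probability of cooperating in the next round."""
        inputs = np.array([own_payoff, opp_payoff])
        h = sigmoid(self.w_in @ inputs
                    + self.w_ctx * self.context * self.ctx_mask
                    - self.thresholds[:-1])
        # context nodes copy the current state of their cognitive node
        self.context = np.where(self.ctx_mask, h, self.context)
        return sigmoid(self.w_out @ h - self.thresholds[-1])

net = DecisionNetwork(3, np.array([True, False, True]), np.random.default_rng(0))
p_cooperate = net.decide(own_payoff=6.0, opp_payoff=6.0)
```

Note that the context vector is the only state carried between rounds, matching the description of memory building up in the context nodes rather than in any stored history of moves.
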
As the network cannot make decisions without an input, each individual has an additional trait encoding whether they cooperate or defect in the first round.

We allowed networks to evolve according to natural selection using a genetic algorithm where fitness is the mean payoff per round from the iterated games minus a penalty for the individual's intelligence, *i*. When individuals reproduce, mutations allow for the gain and loss of nodes from the hidden layer of their network with a fixed probability. Context nodes could only be gained if there was already a cognitive node present without an associated context node. The loss of a cognitive node with an associated context node also resulted in the loss of that context node.

The addition of extra cognitive nodes gives networks the potential to perform complex computation based on payoffs by increasing the dimensions of internal representation of the network. The addition of context nodes gives the potential for the integration of longer-term memory of previous interactions in these computations. If an individual possessed no hidden-layer nodes in its network, its behaviour in all rounds was decided by its first-round move (i.e. they either always cooperated or always defected). The weights of each node in the network (arrowed lines in figure 1) and the threshold of each node (see the electronic supplementary material) were encoded as continuous genetic traits, again subject to mutation during reproduction. This means that, while the number of nodes in the network constrains the possible behavioural repertoire, it is the way that the constituent parts of the network interact that actually decides the individual's behaviour. In this way, our metric of intelligence assesses the potential for complex behaviour that the individual possesses, rather than the appropriateness or 'wisdom' of their behaviour, similarly to the measures of intelligence used in comparative studies.

## Model implementation

In order to elucidate when selection favoured intelligence, we ran 10 replicates of our model for both the IPD and ISD, with each replicate lasting 50 000 generations. The payoff values used for all simulations were *R* = 6, *P* = 2, *T* = 7 and *S* = 1 for the IPD, and *R* = 5, *P* = 1, *T* = 8 and *S* = 2 for the ISD. The genetic algorithm was implemented as follows (see the electronic supplementary material for further details):

1. an initial population of random networks was generated;

2. each individual played every other individual in the population (50 individuals) in an IPD or ISD;

3. each individual network's fitness was calculated as their mean payoff per round minus a fitness penalty for their level of intelligence, *i*;

4. individuals were selected to reproduce asexually with probability proportional to their fitness;

5. newly produced offspring underwent mutation of their network weights, node thresholds and network structure with constant probabilities;

6. the previous generation died; and

7. the algorithm returned to step 2 until 50 000 generations had been reached.

During simulations we recorded the frequency of cooperation in the population, the intelligence of individuals (*i*) and assessments of the behaviour of individuals against a pre-determined test set of moves (see the electronic supplementary material).
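
Steps 1-7 compress naturally into a single evolutionary loop. The sketch below is our own condensed rendering of that loop, not the published implementation: the population size and generation count are those given above, while the per-node fitness penalty and the helper functions (`new_random_network`, `play_iterated_game`, `mutate`) are hypothetical stand-ins:

```python
import numpy as np

POP, GENERATIONS = 50, 50_000
COST_PER_NODE = 0.01  # assumed value; the actual penalty is in the supplement

def evolve(rng, new_random_network, play_iterated_game, mutate):
    population = [new_random_network(rng) for _ in range(POP)]          # step 1
    for generation in range(GENERATIONS):
        payoff = np.zeros(POP)
        for a in range(POP):                                            # step 2
            for b in range(a + 1, POP):
                pay_a, pay_b = play_iterated_game(population[a], population[b])
                payoff[a] += pay_a
                payoff[b] += pay_b
        mean_payoff = payoff / (POP - 1)
        nodes = np.array([net.intelligence for net in population])
        fitness = mean_payoff - COST_PER_NODE * nodes                   # step 3
        fitness = fitness - fitness.min() + 1e-9   # keep selection weights > 0
        parents = rng.choice(POP, size=POP, p=fitness / fitness.sum())  # step 4
        population = [mutate(population[i], rng) for i in parents]      # steps 5-6
    return population                                                   # step 7
```
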
We then analysed the gradients of selection for intelligence across these simulations by taking selection for intelligence as the covariance between fitness and intelligence in any given generation [39]. As 50 million individual neural networks were simulated in our study, and individuals were not constrained to base their behaviour only on the previous move, our simulations generated a great diversity of strategies. In order to gain a coarse-grained overview of the strategic composition of the population, we clustered individuals based on their proximity to four canonical strategy types: always-defect-like, always-cooperate-like, tit-for-tat-like (do what your opponent did to you) and Pavlov-like (if your payoff is over a threshold, repeat your previous move). Assignment to each of these strategy types was based on which of these four strategies each individual network clustered closest to based on its behaviour against the test set. While this clustering is only a coarse-grained view, it allows assessment of the effects of shifts towards contingent cooperative strategies on selection for intelligence. Additionally, contingent human cooperation has previously clustered as either tit-for-tat-like or Pavlov-like [40], though longer-term memory is often included [41]. For full details of our data analysis, we direct readers to the electronic supplementary material.

# Results

Our model shows the spontaneous evolutionary emergence of behaviours similar to strategies known to perform well in the IPD and ISD, such as tit-for-tat and Pavlov, as well as simple always-cooperate or always-defect strategies (figure 2) [42]. Although our networks' behaviours are similar to these strategies, they often show integration over many previous rounds to decide on their next moves. For example, manual interrogation of networks revealed that, of the tit-for-tat type strategies that emerge, many are tit-for-2(or more)-tats, and many of the Pavlov-like strategies also show a threshold mechanism, switching to constant defection against opponents that show behaviour close to an always-defect strategy. Behaviour was observed that appeared to be close to many other strategies: for example, grim variants (cooperate until the opponent defects, then defect forever), though often requiring more than one defection to trigger permanent defection; false cooperator (cooperate first then switch to defection), though often giving another cooperative move after many defections; and many other variants of tit-for-tat such as 2-tits-for-1-tat and 2-tits-for-2-tats. It is worth noting that strategies of these types that use longer-term memory are observed in behavioural experiments of repeated games with noise [41]. These responsive strategies require greater cognitive abilities in order to carry out computations based on payoffs, memorize past rounds and integrate across them to make decisions, in comparison with the lower requirements of simply always cooperating or defecting. We hasten to add, however, that the strategies emerging only resemble these strategies; the strategies vary in a continuous manner and often incorporate memory over more rounds.
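
Two of the population summaries used in what follows (the per-generation selection gradient and the coarse strategy assignment) are simple enough to state in code; this is our own sketch of the measures described in the Methods, with hypothetical variable names:

```python
import numpy as np

def selection_gradient(fitness, intelligence):
    """Selection on intelligence in one generation, taken as the covariance
    between relative fitness and intelligence (the Price-equation form;
    whether raw or relative fitness was used is a detail of the supplement)."""
    w = np.asarray(fitness, float) / np.mean(fitness)
    z = np.asarray(intelligence, float)
    return np.mean((w - 1.0) * (z - z.mean()))

def nearest_strategy(responses, canonical):
    """Assign a network to whichever canonical strategy (always-defect,
    always-cooperate, tit-for-tat or Pavlov) its responses on the fixed
    test set lie closest to, by Euclidean distance; 'canonical' maps
    strategy names to response vectors over the same test set."""
    return min(canonical, key=lambda k: np.linalg.norm(responses - canonical[k]))
```
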
Our goal here is not to describe the strategies that can emerge in repeated games, as there is already extensive literature on this topic (see table 2.1 in [43]), but rather to elucidate the potential effects of their evolution on selection for cognitive capacities.

In order to elucidate the causal factors leading to the evolution of more complex strategies, we analyse the gradient of selection for intelligence in response to population features such as the prevalence of cooperative acts. We find that the selection for intelligence is maximized as the level of cooperation in the population moves above the single-interaction Nash equilibria towards more cooperative regimes (figure 3). In the IPD, this maximum occurs during increases in cooperation from the single-interaction Nash equilibrium, whereas in the ISD selection for intelligence is maximized at levels of cooperation just above the single-interaction Nash equilibrium. This discrepancy between the games is explained by the different natures of their single-interaction Nash equilibria. In the IPD, this equilibrium is zero, meaning that declining cooperation near this equilibrium is caused by increases in the frequency of individuals that always defect, requiring only little cognitive ability. In the ISD, the equilibrium is non-zero (0.23 in our simulations), meaning that decreases in cooperation back towards the equilibrium can be caused by 'meaner' contingent strategies (e.g. 2-tits-for-1-tat and false-cooperator variants), as well as individuals that always defect. As a result, transitions back to the single-interaction Nash equilibrium in the ISD can in principle select for intelligence, while this is very unlikely in the IPD.

We also find that increasing intelligence decreases the mean frequency of cooperative acts in the IPD (Spearman's rank correlation test; *ρ* = −0.2333, *p* < 0.001; figure 4*a*), while slightly increasing cooperation in the ISD (Spearman's rank correlation test; *ρ* = 0.0089, *p* < 0.0001, figure 4*b*). Increasing intelligence increases the variance in the frequency of cooperative acts in the population in both the IPD (Breusch–Pagan test; intercept = 0.0294, slope = 0.0582, *p* < 0.0001; figure 4*a*) and the ISD (Breusch–Pagan test; intercept = 0.0309, slope = 0.0384, *p* < 0.001; figure 4*b*), showing that intelligence can facilitate greater extremes of cooperation. These results can be explained by assortment of individuals' cooperative acts [44]; the contingent strategies facilitated by increased intelligence allow an individual to increase the probability that they assort their cooperative acts with other cooperative acts. This leads to a synergistic process, where this increase in cooperation due to increased intelligence creates further opportunities for intelligent individuals to engage in mutual cooperation. However, as levels of cooperation increase further this feedback can break down, as there may be enough cooperation occurring for unconditional cooperators to increase in the population, allowing in turn for the invasion of unconditional defectors or 'meaner' intelligent strategies (e.g. grim variants, false-cooperator variants, etc.).
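
As an aside, the non-zero ISD single-interaction equilibrium quoted above can be recovered from the textbook indifference condition for a mixed equilibrium; this is a standard calculation, not one taken from the paper. If opponents cooperate with probability $p$, cooperating and defecting pay equally when

$$
pR + (1-p)S = pT + (1-p)P
\quad\Longrightarrow\quad
p^{*} = \frac{S-P}{(S-P)+(T-R)} = \frac{2-1}{(2-1)+(8-5)} = 0.25
$$

for the ISD payoffs used here, close to the simulated value of 0.23. For the IPD ordering *T* > *R* > *P* > *S*, defection strictly dominates, the indifference condition has no solution in (0, 1), and the single-interaction equilibrium is pure defection, as stated above.
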
The result is that intelligence facilitates cycles in the levels of cooperation seen in the population (figure 2), increasing both the variance in, and the maximum level of, cooperation.

In addition to dependency on the prevalence of, and change in, cooperative actions (figure 3), we find that intelligence can be subject to a Machiavellian runaway process [11,12]. In the ISD, as the frequency of contingent (intelligent) strategies increases, so too does selection for intelligence in the population (tit-for-tat-like strategies: Spearman's *ρ* = 0.2025, *p* < 0.0001; Pavlov-like strategies: *ρ* = 0.2352, *p* < 0.0001; see figure 5; electronic supplementary material, table S1 and figure S1). In the IPD, increasing frequencies of tit-for-tat and Pavlov reduce selection for intelligence at low levels of cooperation (frequency of cooperation < 0.5; tit-for-tat: *ρ* = −0.0945, *p* < 0.0001; Pavlov: *ρ* = −0.0529, *p* < 0.0001), but do increase selection for intelligence when cooperation is more frequent (frequency of cooperation ≥ 0.5; tit-for-tat: *ρ* = 0.5491, *p* < 0.0001; Pavlov: *ρ* = 0.3187, *p* < 0.0001). The reason for this distinction between the IPD and the ISD is that there must already be some cooperation occurring for cooperative contingent strategies to be favoured via their ability to assort cooperative acts. In the ISD, the partially cooperative single-interaction equilibrium provides sufficient baseline cooperation for tit-for-tat and Pavlov to be favoured, whereas in the IPD the single-interaction equilibrium of zero cooperation means that unless contingent strategies (or random drift) have already increased cooperation, tit-for-tat-like and Pavlov-like strategies cannot be favoured, and hence cannot lead to an arms race. Note that it is still the case that intelligence is selected for in the IPD when cooperative acts are rare yet increasing (figure 3*a*). However, the cooperative acts that drive selection for intelligence in this case are generated by less cooperative contingent strategies, which cluster with always-defect as their closest pure strategy (*ρ* = 0.1095, *p* < 0.0001; see electronic supplementary material, table S1 and figure S1). This means that there is a succession in the arms race in the IPD, with 'mean' contingent strategies initially increasing selection for intelligence at low cooperation, and 'kind' contingent strategies increasing selection for intelligence as cooperation increases.

It is not any particular single strategy that drives these arms races; rather, as the complexity of the strategies in the population increases, there is selection for other complex strategies to outwit them. Unlike previous analyses where fixed strategies or strategies with constrained memory were used, our open-ended system allows for near-infinite strategic variations to outwit opponents. In this way, selection for intelligence occurs owing to a constantly shifting strategic environment, where the 'best response' to the population of strategies can be shifting from generation to generation. Increases in memory allow for the potential of the recognition of opponents' strategies, allowing for the alteration of one's own strategy in response (e.g. Pavlov-like strategies that can recognize individuals that always defect). In turn this can allow for attempts to deceive opponents regarding one's own strategy (e.g.
false cooperators).

# Discussion

It is important here to note the closed nature of our model system; individuals can only choose within one social task (whether to cooperate or defect against another individual based on their behaviour in previous interactions with them). However, our results may apply in principle to other social scenarios where individuals use strategies to decide who to cooperate with or when to cooperate, such as indirect reciprocity [17,18], policing/punishment [45,46] and partner choice [47–49]. Along with kin selection, these are the major mechanisms thought to lead to transitions to cooperative groups. As the intelligence of an individual increases, it is likely that more of these behavioural repertoires will become available to them, with increased intelligence acting as a pre-adaptation. For example, increased intelligence owing to selection for direct reciprocity could facilitate the evolution of indirect reciprocity or partner choice, highlighting the contingent nature of social evolution in multiple strategic dimensions [50]. The facilitation of new social behaviours due to emergent intelligence could allow for a perpetuated Machiavellian arms race leading to ever-greater levels of intelligence. Additionally, in our simulations populations evolved to play only a single game (either the IPD or ISD). The simultaneous play of multiple games could greatly increase strategic complexity, with the possibility of the integration of information from previous interactions in games with different payoff structures into the decision-making process.

It has previously been suggested that the pinnacles of cooperative behaviour in nature show a bimodal distribution with intelligence, with the most cooperative species displaying either limited cognition (e.g. microbes, social hymenoptera) or exceptional intelligence (e.g. humans and other primates, certain cetaceans and birds) [26]. It is clear in the former case that cooperation has evolved primarily owing to combinations of kin selection and ecological factors [51]. However, in the latter case, kin selection is not the only mechanism leading to cooperation, and may not even be the most important. A recent study has in fact suggested that relatedness was too low in human hunter–gatherer groups for kin selection to drive the evolution of human cooperation [52]. In highly intelligent species, contingent behaviours (reciprocity, partner choice, etc.) appear to have been important in the evolution of cooperation [26]. Our results may help us to explain this pattern by showing that the selection for appropriate behavioural assortment of cooperative acts can lead to selection for greater cognitive abilities and Machiavellian arms races, and that intelligence facilitates greater extremes of cooperation. Additionally, although kin selection is still of importance in highly intelligent taxa, high relatedness may hinder the evolution of intelligence by driving unconditional cooperation to fixation in the population, without any need of contingent behaviours.

A trait as complex as advanced intelligence is likely to have evolved owing to a combination of several factors rather than a single factor [4].
Along with the social intelligence hypothesis, many other theories attempting to explain the evolution of advanced intelligence have been suggested, among them that intelligence is an adaptation for tool use [53,54], that intelligence is an adaptation for social learning and the accumulation of culture [55–57], and that intelligence is the result of sexual selection [58]. All of these theories are supported by evidence from at least some of the most intelligent animals. However, the difficulty lies in disentangling the traits that are causal factors in the evolution of intelligence from those that are by-products of advanced intelligence. The combination of game theoretic frameworks and artificial neural network models presented here may provide a framework for the evaluation of the mechanistic strengths of these different hypotheses. While previous models have sought to relate cooperation and intelligence, the focus has most frequently been on the cognitive requirements of cooperation, rather than on the selection for intelligence. Many of these models have lacked an explicit brain structure [17–22], and among those studies that have used artificial neural networks, we know of no examples where the network structure was allowed to freely evolve or where the implications of selection on decision-making strategies for the evolution of intelligence were directly addressed [23,24]. While artificial neural networks are not real brains, relying on abstraction of the activity of millions of real neurons down to a manageable number of artificial neurons, they can provide insight into the dynamics of cognitive evolution and allow for the flexible evolution of behaviour [59]. Our results show that, in a freely evolving system, selection for efficient decision-making in social interactions can give rise to selection pressures for advanced cognition, supporting the view that the transition to the cooperative groups seen in the most intelligent taxa may be the key to their intellect.

## Acknowledgments

We thank G. D. Ruxton, C. J. Tanner, I. Donohue, N. M. Marples, K. Irvine, R. Bshary and two anonymous reviewers for helpful comments on a previous version of this manuscript. We also thank F. J. Weissing, S. Gavrilets, J. A. R. Marshall and R. Kümmerli for helpful discussions. L.M. was supported by a scholarship from the Irish Research Council for Science, Engineering and Technology. S.P.B.
was supported by the Wellcome Trust.

# References

abstract: Two new approaches have identified physiological nonsense-mediated mRNA decay (NMD) substrates, and suggest that NMD functions as a multipurpose tool in the modulation of gene expression.
author: Gabriele Neu-Yilik; Niels H Gehring; Matthias W Hentze; Andreas E Kulozik
date: 2004
institute: 1Department of Pediatric Oncology, Hematology and Immunology, University of Heidelberg, Im Neuenheimer Feld 150, 69120 Heidelberg, Germany; 2Molecular Medicine Partnership Unit, EMBL/University of Heidelberg, Im Neuenheimer Feld 156, 69120 Heidelberg, Germany; 3EMBL Heidelberg, Gene Expression Programme, Meyerhofstrasse 1, 69117 Heidelberg, Germany
references:
title: Nonsense-mediated mRNA decay: from vacuum cleaner to Swiss army knife

Nonsense-mediated mRNA decay (NMD) is a specific pathway for the degradation of mRNAs that have premature termination codons (PTCs) in their open reading frames (ORFs). Its importance is highlighted by its conservation in all eukaryotes. NMD counteracts the potentially harmful impact of mRNAs that have PTCs as a result of errors at various levels of gene expression, such as nonsense and frameshift mutations, transcriptional errors and faulty splicing. Thus, NMD serves as a 'cellular vacuum cleaner' that protects the cell from the potentially harmful effects of truncated proteins by eliminating mRNAs with PTCs in a sequence of events that is not yet fully understood. In recent years numerous biochemical and cell-biological investigations in *Saccharomyces cerevisiae* [1], *Drosophila melanogaster* [2], *Caenorhabditis elegans* [3] and human [4,5] cells have helped to elucidate some of the mechanistic details underlying the NMD pathway. A role for NMD in the regulation of mRNA metabolism beyond the mere vacuum cleaner function for faulty mRNAs has been suspected, and was foreshadowed by work on the splicing factor SC35 and some ribosomal proteins [6,7]. Now, 'genome-wide' approaches - one in yeast using microarrays and another *in silico*, analyzing information mined from mRNA and protein databases - have added powerful evidence to suggest that NMD may serve multiple purposes in gene expression [8-11].

# Inspirations from yeast

In yeast, NMD depends on the expression of the Upf1, Upf2 (Nmd2) and Upf3 proteins. Single or simultaneous inactivation of the UPF genes stabilizes nonsense-containing mRNAs, indicating that their protein products interact functionally in the same pathway. He *et al*.
[8] used high-density oligonucleotide arrays to analyze genome-wide expression profiles of yeast strains containing single deletions of the *UPF1*, *UPF2* or *UPF3* genes, as well as of the *DCP1* and *XRN1* genes, which encode proteins with activities thought to be involved in the NMD pathway - an essential component of the mRNA decapping enzyme and the 5'-3' exonuclease, respectively. They also tested double deletions of the *XRN1* gene in combination with each of the UPF genes. Two-dimensional clustering analysis of the expressed genes for the *Δupf1*, *Δupf2* and *Δupf3* strains yielded several interesting results.

The deletion of *UPF1*, *UPF2* or *UPF3* generated nearly identical expression profiles. Thus, all three gene products act on the same targets, consistent with the function of Upf1, Upf2 and Upf3 in a single, linear pathway in yeast. The abundance of most mRNAs upregulated in Upf-deficient cells was also increased in *Δdcp1* and *Δxrn1* strains, suggesting that these mRNAs are largely degraded by decapping and subsequent 5'-3' exonucleolytic decay. This approach also identified a considerable number of NMD-regulated transcripts (765 out of the 7,839 genes represented on the microarrays) and showed that NMD substrates are generally expressed at below-average levels. In addition, most NMD substrates were found to be upregulated upon NMD inactivation, but some were downregulated, pointing to the existence of higher-order NMD targets (or additional functions of the Upf proteins in alternative gene-regulation pathways). Finally, only one-third of the identified transcripts can be classified into structural or functional groups, some of which are surprising and hitherto unrecognized. Representatives of previously described NMD-substrate categories were identified, including mRNAs with nonsense mutations, transcripts resulting from faulty or regulated alternative splicing, mRNAs subject to leaky scanning during translation initiation, and mRNAs with an upstream ORF or with AATGA or ATGAA motifs immediately upstream of their translation initiation codons. More intriguing is the discovery of several new classes of NMD targets, including mRNAs that use translational +1 frameshifting, bicistronic mRNAs and, most interestingly, two classes of noncoding RNAs: pseudogene transcripts and transcripts encoded by transposable elements or their long terminal repeat (LTR) sequences.

A significant fraction of the protein-encoding transcripts upregulated in the strains with mutations in the NMD-pathway genes [8] could be grouped into clusters of proteins that act in similar pathways. Among these are proteins coordinately involved in telomere maintenance, pre-mRNA splicing, peroxisomal function and DNA repair. This suggests the exciting possibility that NMD could orchestrate the expression of functional groups of genes. Several important results of the work by He *et al*. [8] confirm findings obtained by another lab using a similar approach several years ago [12]. Notably, the coordinate upregulation of genes involved in telomere maintenance by NMD inactivation has already sparked interesting follow-up investigations. The yeast NMD pathway has been shown to accelerate the rate of senescence promoted by the loss of the telomerase enzyme or by the erosion of telomeres that results from altering the stoichiometry of telomere-cap components [13-15].
At least one likely primary target of NMD is the mRNA of Stn1p, an essential protein involved in chromosome-end protection. The observation by He *et al*. [8] that 35.9% of all ORFs encoded in the telomere region were upregulated in strains with mutations in components of the NMD pathway further illustrates that NMD controls many genes near telomere ends, although probably indirectly. These genes are usually silenced and may be derepressed when the protection of chromosome ends is disturbed by the loss of NMD. This enlightening example demonstrates how NMD can affect whole pathways by regulating the expression of one or a few primary target mRNAs, with consequences for groups of downstream secondary effectors. The discovery of additional examples of NMD-mediated control of functional pathways can be expected, promoting NMD to a gene-expression tool with many utilities. The cellular vacuum cleaner has therefore become a Swiss army knife (Figure 1).

# *In silico* veritas?

In a series of three recent publications [9-11] Steve Brenner and colleagues suggested, by data-mining, that one-third of reliably inferred human splice products form a major class of natural targets for NMD. Alternative splicing is thought to occur in 30-60% of human genes, expanding the coding repertoire of the limited number of genes in the genome and modulating tissue-specific and developmental gene functions. Brenner and colleagues [9-11] now suggest that alternative splicing provides a mechanism to generate PTC-containing splice products that are subsequently degraded by NMD and, as a consequence, that cooperation between alternative splicing and NMD pathways offers a major and currently underappreciated way to regulate gene expression.

For their *in silico* analysis, they mapped well-characterized human RefSeq [16] and LocusLink [17] mRNA sequences to genomic sequences, and then performed high-stringency alignments between these 'RefSeq-coding genes' and expressed sequence tags (ESTs). The reliably inferred splice variants were only accepted as likely NMD targets when they conformed to the '50-nucleotide rule': this hallmark of mammalian NMD holds that stop codons located at least 50 nucleotides upstream of the last exon junction will be interpreted as 'premature' and trigger NMD. This approach leads to an underestimate of potential NMD targets, because some PTCs (for example, in T-cell receptor gene transcripts) will trigger NMD even when they are not followed by a sufficiently distant intron [18]. Moreover, Brenner and colleagues [9-11] also excluded mRNA variants that are indistinguishable from products of faulty splicing. These studies have unearthed several groups of functionally related proteins whose expression appears to be regulated by NMD, including translation factors and ribosomal proteins. This is in remarkable contrast to the yeast data of He *et al*. [8], where proteins with a function in translation were, if anything, underrepresented in the pool of NMD-regulated genes.

NMD is not (yet) on everyone's mind. As a consequence, Brenner and colleagues [11] found several entries for truncated proteins in the Swiss-Prot database. In some of these cases the available experimental evidence confirms that the mRNAs that encode these truncated proteins are *bona fide* NMD substrates. We are left with a consolation and a surprise. The consolation is that traces of NMD can be uncovered even though they had been overlooked before.
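
The '50-nucleotide rule' used as the inclusion criterion above is mechanical enough to encode directly; the following function is our own simplification for illustration (positions in spliced-transcript coordinates, 5' to 3'), not a published tool:

```python
def predicted_nmd_target(stop_pos, junction_positions):
    """True if the stop codon lies at least 50 nt upstream of the last
    exon-exon junction of the spliced mRNA; such stops are read as
    'premature' by the mammalian NMD machinery under the 50-nt rule."""
    if not junction_positions:   # intronless transcript: the rule does not apply
        return False
    return max(junction_positions) - stop_pos >= 50

# a stop codon at nt 900 with the last junction at nt 1000 is a predicted target
assert predicted_nmd_target(900, [300, 650, 1000])
assert not predicted_nmd_target(980, [300, 650, 1000])
```

As the text notes, this rule is conservative: some transcripts, such as T-cell receptor transcripts, trigger NMD even when the condition fails.
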
Having recently had to accept that the number of genes in the human genome is too limited to explain the far higher number of proteins (not to mention other gene products), we then had to learn that one plausible and elegant explanation lies in alternative splicing, which enables a gene to code for a whole family of related, or sometimes antagonistic, proteins. And now the surprise is that a large portion of this effort is supposedly expended only to direct many of the primary products to decay. Several conclusions can be drawn from these observations. First, databases need to be read and annotated with a full realization of the implications of NMD; and second, NMD seems to serve as a tool for rapidly switching off gene expression. This view extends the idea of NMD as a mechanism for ridding the cell of the potentially harmful products of faulty splicing. But there may be more to it. NMD rarely downregulates the expression of a transcript completely; more commonly, 10-30% of the PTC-containing transcripts survive and may allow the production of physiologically relevant levels of truncated protein products.

For NMD researchers it has always been hard to reconcile these observations with the presumed protective role of NMD, especially as very low levels of biological products can sometimes have enormous effects. Given the problems of detecting the low levels of proteins or peptides produced from downregulated transcripts, in addition to some lingering lack of awareness of NMD, examples to prove otherwise are hard to come by. A recent publication [19] describes a PTC-containing transcript of the high-affinity immunoglobulin E (IgE) receptor, FcεRIβ, arising from retention of an intron. This alternative transcript not only conforms to the 50-nucleotide rule but its expression levels are very low compared to those of the full-length transcript, as would be expected for an NMD target. Nonetheless, the truncated protein is not only detectable, it even competes effectively with the full-length protein to control FcεRIβ expression on the cell surface. Thus, even low endogenous expression levels of NMD targets can suffice to generate a product with a biological function. A similar example of the utility of a *bona fide* NMD substrate is illustrated by the *unc-49* locus in *C. elegans*. This locus uses alternative splicing to produce three GABA-receptor subunits, two of which (A and B) undergo several splice events in their 3' UTRs, rendering them predicted NMD substrates [20]. While A-form transcripts are hardly detectable, B-form transcripts represent the most abundant form and code for a protein essential for the worm's locomotion. Either the B-form transcript escapes NMD, or the residual mRNA left after NMD suffices for the necessary protein production.

Among the examples presented by Brenner and colleagues [11] or in other studies [21] are genes with complex alternative splice patterns resulting in multiple transcript isoforms with or without PTCs. The lower abundance of these isoforms when compared to the full-length form supports the notion that they are targeted to the NMD pathway. But is the possibility that cells engage in complex alternative-splicing procedures generating multiple products, just to finally dispose of them, the only conceivable option?
Four out of five PTC-containing isoforms of the mRNA for the LARD death receptor are readily detectable in non-activated lymphocytes, whereas only the full-length form is expressed in activated lymphocytes [11]. While it is entirely possible that the PTC-containing forms are generated to switch off receptor expression in resting lymphocytes, there are attractive alternatives: as in the case of the FcεRIβ receptor [19], the PTC-containing mRNAs may produce proteins or peptides with relevant, although currently unknown, functions. Thus, quantitative control of the expression of low amounts of protein isoforms could represent yet another facet of the function of the NMD pathway.

A role for NMD in controlling the levels of noncoding RNAs (including noncoding alternative splice products) must also be considered. RNA accounts for more than 95% of the human genome's output and there is increasing evidence that noncoding RNAs (including introns, and spliced and polyadenylated transcripts) can have a function, for example as a modulating network or an additional layer of information [22,23]. Importantly, noncoding RNAs have been discovered among natural NMD targets in yeast [8], where only a relatively small portion of the genome is transcribed into noncoding RNAs. What might be the role of NMD in mammals, which transcribe a far higher percentage of their genome into noncoding RNAs [22,24,25]? Clearly, the recent insights into the RNA-substrate spectrum of the NMD system should enhance the appreciation of NMD as a versatile, multipurpose mechanism that controls the transcriptome qualitatively and quantitatively.

abstract: A report on the Human Genome Organisation (HUGO) 11th Human Genome Meeting, Helsinki, Finland, 31 May-3 June 2006.
author: Kristine Kleivi
date: 2006
institute: 1Medical Biotechnology, VTT Technical Research Center of Finland, 20521 Turku, Finland and Department of Genetics, Institute for Cancer Research, Rikshospitalet-Radiumhospitalet Medical Center, 0310 Oslo, Norway
title: Advances in the genetics and epigenetics of gene regulation and human disease

At the recent annual meeting on the human genome in Helsinki, organized by the Human Genome Organisation (HUGO), close to 700 scientists gathered to present and discuss the latest advances in genome research. This report presents some selected highlights.

# Genome variation, gene expression and disease susceptibility

Through their effects on gene expression, polymorphisms in the human genome can contribute to phenotypic variation and disease susceptibility. For many diseases, such as cancer, great effort is being made to study the sequence variants that contribute to disease susceptibility.
The impact of genetic variation on common diseases was addressed by Kari Stefansson (deCODE Genetics, Reykjavik, Iceland), who gave an update on the identified sequence variants that may increase the risk of developing type 2 diabetes, prostate cancer, myocardial infarction, stroke and schizophrenia. In the past decades, type 2 diabetes has become a major health problem in the Western world, as both its incidence and its prevalence have increased rapidly. Stefansson reported his group's recent discovery of an inherited variant of the gene *TCF7L2*, encoding a protein called transcription factor 7-like 2 located on chromosome 10, which is estimated to account for about 20% of diabetes cases. They have also shown an association between a common genetic variant in the microsatellite DG8S737 at chromosome band 8q24 and the development of prostate cancers in European and African populations.

Single-nucleotide polymorphism (SNP) genotypes correlated with gene-expression data in breast tumors were presented by Vessela Kristensen (The Norwegian Radium Hospital, Oslo, Norway). For genotyping, she and her colleagues selected sets of genes involved in reactive oxygen species (ROS) signaling and the repair of DNA damage caused by ROS - that is, pathways that are generally affected by chemotherapy and radiation therapy. Using various statistical approaches, the genetic association between SNPs in genes involved in the ROS pathways and the expression levels of mRNA transcripts from a panel of breast cancer patients was assessed. Regulatory SNPs in the genes *EGF*, *IL1A*, *MAPK8*, *XPC*, *SOD2* and *ALOX12* were associated with alterations in the expression levels of several transcripts. Kristensen also showed that a set of SNPs was linked to a cluster of transcripts participating in the same functional pathway.

Thomas Hudson (McGill University, Montreal, Canada) described several resources and technologies that are available to study the impact of genome variation on gene expression. He and his colleagues systematically studied a subset of genes whose alleles show large differences in expression in lymphoblastoid cell lines. These data were integrated with HapMap data to search for haplotypes associated with mRNA expression at flanking genes. Hudson described the discovery of 16 loci harboring a common haplotype affecting the total expression of a gene, that is, all the alleles of the gene, and of 17 loci that affected relative allelic expression in heterozygous samples. To better understand the mechanisms controlling this gene expression, *cis*-acting polymorphisms need to be studied in the human genome in larger sample sets and tissue panels.

# DNA methylation and epigenetic modification of the genome

Methylation of CpG islands has an important role in controlling gene expression during mammalian development, and is frequently altered in diseases such as cancer. DNA methylation was extensively discussed at the meeting. For example, Carmen Sapienza (Temple University Medical School, Philadelphia, USA) reported that imprinted regions in humans are historical hotspots of recombination. Together with specific DNA sequences, epigenetic factors may have an important influence on the rate of meiotic recombination and the position of cross-overs. Using *in silico* and *in vitro* analyses, Sapienza's group have shown a relationship between increased rates of meiotic recombination and genomic imprinting.
Imprinted regions showed more linkage disequilibrium, and had a significantly higher number of small haplotype blocks, than the non-imprinted regions. Their findings suggest that several factors, including both specific DNA sequences and epigenetics, are involved in controlling meiotic recombination in humans.

Nutritional influences during prenatal and early postnatal development may affect gene expression, and subsequently the phenotype, through epigenetic gene regulatory mechanisms. Nutrition is important in providing methyl donors for DNA, and some genes are especially sensitive to nutritional changes during embryogenesis. Rob Waterland (Baylor College of Medicine, Houston, USA) has used mouse models to show that some alleles are particularly susceptible to changes in methylation due to maternal nutrition. For example, supplementary nutrition can lead to increasing body weight across several generations of offspring. He and his colleagues postulate that maternal nutrition before and during pregnancy may affect the establishment of CpG methylation and the life-long expression of metastable epialleles (epigenetically modified alleles) in humans.

Whereas the majority of CpG islands in the genome are normally unmethylated, a sizeable fraction is prone to methylation in various cell types and pathological conditions. Christoph Bock (Max Planck Institute for Informatics, Saarbrücken, Germany) described his group's work predicting CpG methylation on the basis of DNA sequence and genomic location. Using a bioinformatics approach, they were able to distinguish CpG islands that are prone to methylation from those that are not. For example, on chromosome 21, they were able to predict the CpG island methylation rate with 90% accuracy, which was later confirmed by *in vitro* analyses. This study revealed that the DNA composition of CpG islands (the sequence, the structure and the number of repeats) plays an important role in predisposing CpG islands to DNA methylation. Furthermore, these features can also be used to predict the CpG methylation pattern of the whole genome.

# Regulatory genomics

Knowledge about transcription factors, their binding specificities and the assembly of their binding sites to form tissue-specific enhancer elements is critical for understanding key regulatory mechanisms of human gene expression. Outi Hallikas (University of Helsinki, Finland) aimed to determine the binding specificities of transcription factors that are involved in growth control, and to find evolutionarily conserved enhancer elements that drive organ-specific expression of genes that regulate a cell's progression through the cell cycle. By using a high-throughput method for determining transcription factor binding sites, Hallikas reported the binding sites of the transcription factors GLIs, TCF4 and c-ETS1, which are involved in different signaling pathways such as those leading from the signal protein Wnt and the Ras/MAPK intracellular signaling module. To identify the mammalian enhancer elements, Hallikas and colleagues have developed a new computational tool (Enhancer Element Locator; available online), and used it to predict active transcription factor binding sites.
Validation of these in transgenic mice revealed the presence of enhancers in c-*Myc* and N-*Myc*, genes that play a role in growth control and tumorigenesis.

From the same group, Mikael Björklund presented a genome-wide RNA interference (RNAi) analysis of genes that are involved in cell-cycle control and cell-size regulation in *Drosophila*. Björklund and colleagues analyzed the effects of RNAi-induced loss-of-function of 70% of the genes in *Drosophila*, including those conserved in humans, on cell-cycle progression by flow cytometry. Genes controlling several cellular processes were identified, including cell size, cytokinesis, apoptosis and phases of the cell cycle. In addition, a translational regulator (eIF-3p66) associated with the cyclin/cyclin-dependent kinase pathway was identified.

The combination of RNAi and gene-expression profiling provides further insight into gene function and the regulatory networks controlling expression. Ilaria Piccini (Max Planck Institute for Molecular Genetics, Berlin, Germany) presented a genome-wide analysis of transcription factors in association with gene regulation. Using RNAi, Piccini and colleagues are knocking down the expression of 200 transcription factors involved in human developmental processes in a panel of cell lines, and investigating the downstream targets of these at the transcriptome level. They have initially focused on transcription factors encoded by chromosome 21; as an example, Piccini described the identification of 72 potential target genes, containing both known and novel targets, that were dysregulated when expression of the transcription factor gene *BACH1* was silenced.

# DNA amplifications in cancer genomes

Gene amplifications are seen in a variety of human cancers, and are often associated with poor clinical outcome for the patient. Target genes within amplified regions are often oncogenes, which may be used as therapeutic, prognostic and diagnostic targets. Therefore, increasing the knowledge of DNA copy number amplifications in human neoplasms is important. This issue was addressed by Samuel Myllykangas (University of Helsinki, Finland), who presented an extensive analysis of DNA copy number amplifications in human cancers. Using data from published comparative genomic hybridization studies, they performed an *in silico* analysis of DNA copy number changes in approximately 4,500 samples from over 70 different neoplasms. Computational analysis identified the different amplification hotspots, which were spread over large parts of the genome and frequently co-localized with known fragile sites, cancer genes and virus integration sites. Amplification of some chromosomal regions was observed in the majority of the cancers studied, whereas other amplifications were cancer-site specific. From the characteristic amplification profiles, Myllykangas showed that cancers with similar cellular origin and histology, such as breast and prostate adenocarcinomas, clustered together. Their discoveries show the relevance of global studies of DNA amplifications in human cancers, and suggest diagnostic and predictive possibilities.

The annual Human Genome Meeting of 2006 was an inspiring meeting, updating us on the latest knowledge in human genomics. The power of combining high-throughput experimental approaches with genome-wide bioinformatics, systems biology and data integration was emphasized.
Thus, it was a successful demonstration of strategies that will be increasingly useful in human genetics in the future.

author: Raymond Hutubessy
date: 2013
institute: 1Initiative for Vaccine Research, World Health Organization, 20 Avenue Appia, CH-1211, Geneva, Switzerland
title: Q&A - Economic analyses for vaccine introduction decisions in low- and middle-income countries

# Introduction

Raymond Hutubessy is a senior health economist affiliated to the Immunization, Vaccines and Biologicals (IVB) Department of the World Health Organization (WHO), and is the executive secretary of the WHO Immunization and Vaccines related Implementation Research Advisory Committee (IVIR-AC). His main research interests focus on economic and financial analyses of vaccine introduction decisions in low- and middle-income countries (LMICs). In this Q&A, he will discuss the importance of this work in relation to the global context of vaccine introduction decisions (Figure 1).

## 1) What is the importance of conducting economic analyses for vaccine introduction decisions?

Vaccines for prevention of communicable diseases have been shown to be extremely effective in terms of health outcomes. Therefore, conducting economic analyses to get the most value for money from vaccine introduction decisions is of high importance; the evidence and information resulting from these analyses are not the only inputs to the decision-making process for vaccine introduction decisions, but they are important ones.

The relationship between health and economic growth is one of the cornerstones of development economics: health status is a determinant of productivity that can be shown to influence economic growth. Specifically, vaccines have a broader value in terms of their indirect effects (for example, herd immunity) and other externalities (for example, improvements in the cognitive development of children, higher school attendance and attainment, macroeconomic impact). Therefore, in addition to the traditional economic appraisals for vaccine introduction decisions, it is useful to demonstrate to policy makers and other stakeholders involved with vaccine introduction decisions the broader added value of vaccines and of investments in health in general.

Economic appraisals address different key issues with regard to decisions on vaccine introduction. These appraisals range from priority-setting issues across vaccines and other competing health interventions, to affordability and budget impact analysis, and costing and financing issues with regard to the introduction decisions of immunization programs.
For these different policy questions, different analytical tools are available, such as cost-effectiveness analyses, costing studies, budget impact and optimization analysis.\n\n## 2) What are the main issues that should be considered?\n\nFirst, because many economic evaluations are based on analytical decision tools such as mathematical infectious disease models, costing tools, decision tree models and so on, transparency is needed on the choice of the modeling methodologies, the parameters and country data used, and the assumptions made by the analyst. Standardization of cost-effectiveness methods is therefore needed, and analysts in the field should adhere to the resulting guidelines. This allows users to make comparisons of different study results by different groups. The WHO, in addition to other organizations, has developed several guidelines on economic evaluations in health, and on vaccines and immunization programs in particular.\n\nSecond, local decision makers have country-specific policy questions and therefore need contextualized study results, driven by country-specific demographic, epidemiological and economic data, and by local needs. However, this does not mean that for each country (in my field of work, this primarily involves LMICs), economists and analysts need to start from scratch - they can build on work from other groups, who often put their models, including the computer program codes, in the public domain.\n\nIn light of this, efforts should be put into the collection of local data and the building of local technical capacity in LMICs so that they are able to perform their own analyses and interpret their own results, with the aim of increasing local ownership of the evidence generated. The WHO, along with partners including the Pan American Health Organization (PAHO), Agence de M\u00e9decine Pr\u00e9ventive (AMP), the Program for Appropriate Technology in Health (PATH), the Sabin Vaccine Institute and the US Centers for Disease Control and Prevention, recently started the ProVac International Working Group to promote the use of economic analysis for vaccine introduction decisions in LMICs.\n\n## 3) Are there differences in conducting economic analyses for vaccine introduction decisions between higher-income and resource-limited settings?\n\nIn principle, the methods applied and tools used are similar in higher-income and resource-limited settings. However, because country demographics, disease burden, epidemiological and socioeconomic background, and health systems and infrastructure differ, the methods of measurement and valuation, the interpretation of findings, and hence the key drivers of the results of economic evaluations will also differ.\n\nFor example, the price at which human papillomavirus (HPV) vaccination is considered to be cost-effective is heavily dependent on HPV prevalence, the existing local cervical cancer services, and the ceiling cost-effectiveness ratio or cost-effectiveness threshold (that is, societies' willingness to pay for an additional health gain). In high-income countries, access to health care services is better and delivery systems of vaccines to reach adolescent girls are more advanced than in LMICs. 
As a result, the affordability question in such resource-rich settings has focused on the relatively high public market prices of HPV vaccines, which may go up to 150 USD per dose.\n\nBy contrast, in countries eligible for the Global Alliance for Vaccines and Immunization's (GAVI Alliance) support, the manufacturers of one of the vaccines have offered an indicative price of 5 USD per dose. This is a 64% reduction from the lowest public prices. As a result, rather than the vaccine price *per se*, it is securing the delivery costs to get the vaccine from the port of entry to the girls in need that has become a main barrier in many of these countries. This barrier is in addition to other capacity issues, such as the existing delivery infrastructure being already over-stretched with traditional Expanded Programme on Immunization vaccines and other competing new vaccines, such as rotavirus and pneumococcal vaccines.\n\n## 4) Are there any specific ways in which these economic analyses have improved clinical outcomes?\n\nIn my opinion, economic analyses have helped to increase the level of health that health care spending can buy, and hence have aided promotion of the use of effective and cost-effective health interventions. For example, by identifying barriers and uptake issues affecting vaccine introduction decisions, economic analyses have helped to bridge the gap between the evidence on theoretical vaccine efficacy and real-life effectiveness. Another example is that economic analyses do not just appraise vertical disease programs but more often also take an integrated and combined disease program perspective. This has the potential to account for synergistic effects of disease programs and therefore will improve the overall public health impact.\n\n## 5) Where can I find more information?\n\nThe WHO and other partners have vaccine-specific and generic websites on health economic information and initiatives. In addition, the WHO has published several key documents on vaccine economics.\n\n**WHO Initiative for Vaccine Research (IVR)** \\[\\]\n\n**CHOosing Interventions that are Cost Effective (WHO-CHOICE)** \\[\\]\n\n**WHO guide for standardization of economic evaluations of immunization programmes** \\[\\]\n\n**Pan American Health Organization (PAHO ProVac)** \\[\\]\n\n**International Health Economics Association** \\[\\]\n\nJit M, Levin C, Brison M, Levin A, Resch S, Berkhof J, Kim J, Hutubessy R: **Economic analyses to support decisions about HPV vaccination in low- and middle-income countries: a consensus report and guide for analysts**. *BMC Med* 2013, **11**:23.\n\nHutubessy R, Levin A, Wang S, Morgan W, Ally M, John T, Broutet N: **A case study using the United Republic of Tanzania: costing nationwide HPV vaccine delivery using the WHO Cervical Cancer Prevention and Control Costing Tool**. *BMC Med* 2012, **10**:136.\n\nDeogaonkar R, Hutubessy R, van der Putten I, Evers S, Jit M: **Systematic review of studies evaluating the broader economic impact of vaccination in low and middle income countries**. *BMC Public Health* 2012, **12**:878.\n\nPostma MJ, Jit M, Rozenbaum MH, Standaert B, Tu HA, Hutubessy RC: **Comparative review of three cost-effectiveness models for rotavirus vaccines in national immunization programs; a generic approach applied to various regions in the world**. 
*BMC Med* 2011, **9**:84.\n\nHutubessy R, Henao AM, Namgyal P, Moorthy V, Hombach J: **Results from evaluations of models and cost-effectiveness tools to support introduction decisions for new vaccines need critical appraisal**. *BMC Med* 2011, **9**:55.\n\nJit M, Demarteau N, Elbasha E, Ginsberg G, Kim J, Praditsitthikorn N, Sinanovic E, Hutubessy R: **Human papillomavirus vaccine introduction in low-income and middle-income countries: guidance on the use of cost-effectiveness models**. *BMC Med* 2011, **9**:54.\n\nChaiyakunapruk N, Somkrua R, Hutubessy R, Henao AM, Hombach J, Melegaro A, Edmunds JW, Beutels P: **Cost effectiveness of pediatric pneumococcal conjugate vaccines: a comparative assessment of decision-making tools**. *BMC Med* 2011, **9**:53.\n\n# Author information\n\nRH is a staff member of the World Health Organization. The views expressed in this article are those of the author and do not necessarily represent the views of the World Health Organization.\n\n# Pre-publication history\n\nThe pre-publication history for this paper can be accessed here:\n\nabstract: Nutrient timing is a popular nutritional strategy that involves the consumption of combinations of nutrients--primarily protein and carbohydrate--in and around an exercise session. Some have claimed that this approach can produce dramatic improvements in body composition. It has even been postulated that the timing of nutritional consumption may be more important than the absolute daily intake of nutrients. The post-exercise period is widely considered the most critical part of nutrient timing. Theoretically, consuming the proper ratio of nutrients during this time not only initiates the rebuilding of damaged muscle tissue and restoration of energy reserves, but it does so in a supercompensated fashion that enhances both body composition and exercise performance. Several researchers have made reference to an anabolic \"window of opportunity\" whereby a limited time exists after training to optimize training-related muscular adaptations. However, the importance - and even the existence - of a post-exercise 'window' can vary according to a number of factors. Not only is nutrient timing research open to question in terms of applicability, but recent evidence has directly challenged the classical view of the relevance of post-exercise nutritional intake with respect to anabolism. 
Therefore, the purpose of this paper will be twofold: 1) to review the existing literature on the effects of nutrient timing with respect to post-exercise muscular adaptations; and 2) to draw relevant conclusions that allow practical, evidence-based nutritional recommendations to be made for maximizing the anabolic response to exercise.\nauthor: Alan Albert Aragon; Brad Jon Schoenfeld\ndate: 2013\ninstitute: 1California State University, Northridge, CA, USA; 2Department of Health Science, Lehman College, Bronx, NY, USA\nreferences:\ntitle: Nutrient timing revisited: is there a post-exercise anabolic window?\n\n# Introduction\n\nOver the past two decades, nutrient timing has been the subject of numerous research studies and reviews. The basis of nutrient timing involves the consumption of combinations of nutrients--primarily protein and carbohydrate--in and around an exercise session. The strategy is designed to maximize exercise-induced muscular adaptations and facilitate repair of damaged tissue \\[1\\]. Some have claimed that such timing strategies can produce dramatic improvements in body composition, particularly with respect to increases in fat-free mass \\[2\\]. It has even been postulated that the timing of nutritional consumption may be more important than the absolute daily intake of nutrients \\[3\\].\n\nThe post-exercise period is often considered the most critical part of nutrient timing. An intense resistance training workout results in the depletion of a significant proportion of stored fuels (including glycogen and amino acids) as well as causing damage to muscle fibers. Theoretically, consuming the proper ratio of nutrients during this time not only initiates the rebuilding of damaged tissue and restoration of energy reserves, but it does so in a supercompensated fashion that enhances both body composition and exercise performance. Several researchers have made reference to an \"anabolic window of opportunity\" whereby a limited time exists after training to optimize training-related muscular adaptations \\[3-5\\].\n\nHowever, the importance \u2013 and even the existence \u2013 of a post-exercise 'window' can vary according to a number of factors. Not only is nutrient timing research open to question in terms of applicability, but recent evidence has directly challenged the classical view of the relevance of post-exercise nutritional intake with respect to anabolism. Therefore, the purpose of this paper will be twofold: 1) to review the existing literature on the effects of nutrient timing with respect to post-exercise muscular adaptations; and 2) to draw relevant conclusions that allow evidence-based nutritional recommendations to be made for maximizing the anabolic response to exercise.\n\n## Glycogen repletion\n\nA primary goal of traditional post-workout nutrient timing recommendations is to replenish glycogen stores. Glycogen is considered essential to optimal resistance training performance, with as much as 80% of ATP production during such training derived from glycolysis \\[6\\]. MacDougall et al. \\[7\\] demonstrated that a single set of elbow flexion at 80% of 1 repetition maximum (RM) performed to muscular failure caused a 12% reduction in mixed-muscle glycogen concentration, while three sets at this intensity resulted in a 24% decrease. Similarly, Robergs et al. 
\\[8\\] reported that 3 sets of 12 RM performed to muscular failure resulted in a 26.1% reduction of glycogen stores in the vastus lateralis while six sets at this intensity led to a 38% decrease, primarily resulting from glycogen depletion in type II fibers compared to type I fibers. It therefore stands to reason that typical high-volume bodybuilding-style workouts involving multiple exercises and sets for the same muscle group would deplete the majority of local glycogen stores.\n\nIn addition, there is evidence that glycogen serves to mediate intracellular signaling. This appears to be due, at least in part, to its negative regulatory effects on AMP-activated protein kinase (AMPK). Muscle anabolism and catabolism are regulated by a complex cascade of signaling pathways. Several pathways that have been identified as particularly important to muscle anabolism include mammalian target of rapamycin (mTOR), mitogen-activated protein kinase (MAPK), and various calcium- (Ca^2+^) dependent pathways. AMPK, on the other hand, is a cellular energy sensor that serves to enhance energy availability. As such, it blunts energy-consuming processes including the activation of mTORC1 mediated by insulin and mechanical tension, while heightening catabolic processes such as glycolysis, beta-oxidation, and protein degradation \\[9\\]. mTOR is considered a master network in the regulation of skeletal muscle growth \\[10,11\\], and its inhibition has a decidedly negative effect on anabolic processes \\[12\\]. Glycogen has been shown to inhibit purified AMPK in cell-free assays \\[13\\], and low glycogen levels are associated with an enhanced AMPK activity in humans *in vivo* \\[14\\].\n\nCreer et al. \\[15\\] demonstrated that changes in the phosphorylation of protein kinase B (Akt) are dependent on pre-exercise muscle glycogen content. After performing 3 sets of 10 repetitions of knee extensions with a load equating to 70% of 1 repetition maximum, early-phase post-exercise Akt phosphorylation was increased only in the glycogen-loaded muscle, with no effect seen in the glycogen-depleted contralateral muscle. Glycogen inhibition also has been shown to blunt S6K activation, impair translation, and reduce the amount of mRNA of genes responsible for regulating muscle hypertrophy \\[16,17\\]. In contrast to these findings, a recent study by Camera et al. \\[18\\] found that high-intensity resistance training with low muscle glycogen levels did not impair anabolic signaling or muscle protein synthesis (MPS) during the early (4 h) postexercise recovery period. The reason for the discrepancy between these studies is not clear at this time.\n\nGlycogen availability also has been shown to mediate muscle protein breakdown. Lemon and Mullin \\[19\\] found that nitrogen losses more than doubled following a bout of exercise in a glycogen-depleted versus glycogen-loaded state. Other researchers have demonstrated a similar inverse relationship between glycogen levels and proteolysis \\[20\\]. Considering the totality of evidence, maintaining a high intramuscular glycogen content at the onset of training appears beneficial to desired resistance training outcomes.\n\nStudies show a supercompensation of glycogen stores when carbohydrate is consumed immediately post-exercise, and delaying consumption by just 2 hours attenuates the rate of muscle glycogen re-synthesis by as much as 50% \\[21\\]. 
Exercise enhances insulin-stimulated glucose uptake following a workout, with a strong correlation noted between the amount of uptake and the magnitude of glycogen utilization \\[22\\]. This is in part due to an increase in the translocation of GLUT4 during glycogen depletion \\[23,24\\], thereby facilitating entry of glucose into the cell. In addition, there is an exercise-induced increase in the activity of glycogen synthase\u2014the principal enzyme involved in promoting glycogen storage \\[25\\]. The combination of these factors facilitates the rapid uptake of glucose following an exercise bout, allowing glycogen to be replenished at an accelerated rate.\n\nThere is evidence that adding protein to a post-workout carbohydrate meal can enhance glycogen re-synthesis. Berardi et al. \\[26\\] demonstrated that consuming a protein-carbohydrate supplement in the 2-hour period following a 60-minute cycling bout resulted in significantly greater glycogen resynthesis compared to ingesting a calorie-equated carbohydrate solution alone. Similarly, Ivy et al. \\[27\\] found that consumption of a combination of protein and carbohydrate after a 2+ hour bout of cycling and sprinting increased muscle glycogen content significantly more than a carbohydrate-only supplement matched for either carbohydrate content or total calories. The synergistic effects of protein-carbohydrate have been attributed to a more pronounced insulin response \\[28\\], although it should be noted that not all studies support these findings \\[29\\]. Jentjens et al. \\[30\\] found that, given ample carbohydrate dosing (1.2 g\/kg\/hr), the addition of a protein and amino acid mixture (0.4 g\/kg\/hr) did not increase glycogen synthesis during a 3-hour post-depletion recovery period.\n\nDespite a sound theoretical basis, the practical significance of expeditiously repleting glycogen stores remains dubious. Without question, expediting glycogen resynthesis is important for a narrow subset of endurance sports where the duration between glycogen-depleting events is limited to less than approximately 8 hours \\[31\\]. Similar benefits could potentially be obtained by those who perform two-a-day split resistance training bouts (i.e. morning and evening) provided the same muscles will be worked during the respective sessions. However, for goals that are not specifically focused on the performance of multiple exercise bouts in the same day, the urgency of glycogen resynthesis is greatly diminished. High-intensity resistance training with moderate volume (6\u20139 sets per muscle group) has only been shown to reduce glycogen stores by 36\u201339% \\[8,32\\]. Certain athletes (e.g., competitive bodybuilders) are prone to performing significantly more volume than this, but increased volume typically accompanies decreased frequency. For example, training a muscle group with 16\u201320 sets in a single session is done roughly once per week, whereas routines with 8\u201310 sets are done twice per week. In scenarios of higher volume and frequency of resistance training, incomplete resynthesis of pre-training glycogen levels would not be a concern aside from the far-fetched scenario where exhaustive training bouts of the same muscles occur after recovery intervals shorter than 24 hours. However, even in the event of complete glycogen depletion, replenishment to pre-training levels occurs well within this timeframe, regardless of a significantly delayed post-exercise carbohydrate intake. 
For example, Parkin et al. \\[33\\] compared the immediate post-exercise ingestion of 5 high-glycemic carbohydrate meals with a 2-hour wait before beginning the recovery feedings. No significant between-group differences were seen in glycogen levels at 8 hours and 24 hours post-exercise. In further support of this point, Fox et al. \\[34\\] saw no significant reduction in glycogen content 24 hours after depletion despite adding 165 g fat collectively to the post-exercise recovery meals and thus removing any potential advantage of high-glycemic conditions.\n\n## Protein breakdown\n\nAnother purported benefit of post-workout nutrient timing is an attenuation of muscle protein breakdown. This is primarily achieved by spiking insulin levels, as opposed to increasing amino acid availability \\[35,36\\]. Studies show that muscle protein breakdown is only slightly elevated immediately post-exercise and then rapidly rises thereafter \\[36\\]. In the fasted state, muscle protein breakdown is significantly heightened at 195 minutes following resistance exercise, resulting in a net negative protein balance \\[37\\]. These values are increased as much as 50% at the 3-hour mark, and elevated proteolysis can persist for up to 24 hours into the post-workout period \\[36\\].\n\nAlthough insulin has known anabolic properties \\[38,39\\], its primary impact post-exercise is believed to be anti-catabolic \\[40-43\\]. The mechanisms by which insulin reduces proteolysis are not well understood at this time. It has been theorized that insulin-mediated phosphorylation of PI3K\/Akt inhibits transcriptional activity of the proteolytic Forkhead family of transcription factors, resulting in their sequestration in the sarcoplasm away from their target genes \\[44\\]. Down-regulation of other aspects of the ubiquitin-proteasome pathway is also believed to play a role in the process \\[45\\]. Given that muscle hypertrophy represents the difference between myofibrillar protein synthesis and proteolysis, a decrease in protein breakdown would conceivably enhance accretion of contractile proteins and thus facilitate greater hypertrophy. Accordingly, it seems logical to conclude that consuming a protein-carbohydrate supplement following exercise would promote the greatest reduction in proteolysis, since the combination of the two nutrients has been shown to elevate insulin levels to a greater extent than carbohydrate alone \\[28\\].\n\nHowever, while the theoretical basis behind spiking insulin post-workout is inherently sound, it remains questionable whether these benefits extend into practice. First and foremost, research has consistently shown that, in the presence of elevated plasma amino acids, the effect of insulin elevation on net muscle protein balance plateaus within a range of 15\u201330 mU\/L \\[45,46\\]; roughly 3\u20134 times normal fasting levels. This insulinogenic effect is easily accomplished with typical mixed meals, considering that it takes approximately 1\u20132 hours for circulating substrate levels to peak, and 3\u20136 hours (or more) for a complete return to basal levels, depending on the size of a meal. For example, Capaldo et al. \\[47\\] examined various metabolic effects during a 5-hour period after ingesting a solid meal composed of 75 g carbohydrate, 37 g protein, and 17 g fat. This meal was able to raise insulin 3 times above fasting levels within 30 minutes of consumption. At the 1-hour mark, insulin was 5 times greater than fasting. 
At the 5-hour mark, insulin was still double the fasting levels. In another example, Power et al. \\[48\\] showed that a 45 g dose of whey protein isolate takes approximately 50 minutes to cause blood amino acid levels to peak. Insulin concentrations peaked 40 minutes after ingestion, and remained at elevations seen to maximize net muscle protein balance (15\u201330 mU\/L, or 104\u2013208 pmol\/L) for approximately 2 hours. The addition of carbohydrate to this protein dose would cause insulin levels to peak higher and stay elevated even longer. Therefore, the recommendation for lifters to spike insulin post-exercise is somewhat trivial. The classical post-exercise objective to quickly reverse catabolic processes to promote recovery and growth may only be applicable in the absence of a properly constructed pre-exercise meal.\n\nMoreover, there is evidence that the effect of protein breakdown on muscle protein accretion may be overstated. Glynn et al. \\[49\\] found that the post-exercise anabolic response associated with combined protein and carbohydrate consumption was largely due to an elevation in muscle protein synthesis, with only a minor influence from reduced muscle protein breakdown. These results were seen regardless of the extent of circulating insulin levels. Thus, it remains questionable as to what, if any, positive effects are realized with respect to muscle growth from spiking insulin after resistance training.\n\n## Protein synthesis\n\nPerhaps the most touted benefit of post-workout nutrient timing is that it potentiates increases in MPS. Resistance training alone has been shown to promote a twofold increase in protein synthesis following exercise, which is counterbalanced by the accelerated rate of proteolysis \\[36\\]. It appears that the stimulatory effects of hyperaminoacidemia on muscle protein synthesis, especially from essential amino acids, are potentiated by previous exercise \\[35,50\\]. There is some evidence that carbohydrate has an additive effect on enhancing post-exercise muscle protein synthesis when combined with amino acid ingestion \\[51\\], but others have failed to find such a benefit \\[52,53\\].\n\nSeveral studies have investigated whether an \"anabolic window\" exists in the immediate post-exercise period with respect to protein synthesis. For maximizing MPS, the evidence supports the superiority of post-exercise free amino acids and\/or protein (in various permutations with or without carbohydrate) compared to solely carbohydrate or non-caloric placebo \\[50,51,54-59\\]. However, despite the common recommendation to consume protein as soon as possible post-exercise \\[60,61\\], evidence-based support for this practice is currently lacking. Levenhagen et al. \\[62\\] demonstrated a clear benefit to consuming nutrients as soon as possible after exercise as opposed to delaying consumption. Employing a within-subject design, 10 volunteers (5 men, 5 women) consumed an oral supplement containing 10 g protein, 8 g carbohydrate and 3 g fat either immediately following or 3 hours post-exercise. Protein synthesis of the legs and whole body was increased threefold when the supplement was ingested immediately after exercise, as compared to just 12% when consumption was delayed. A limitation of the study was that training involved moderate-intensity, long-duration aerobic exercise. Thus, the increased fractional synthetic rate was likely due to greater mitochondrial and\/or sarcoplasmic protein fractions, as opposed to synthesis of contractile elements \\[36\\]. 
In contrast to the timing effects shown by Levenhagen et al. \\[62\\], previous work by Rasmussen et al. \\[56\\] showed no significant difference in leg net amino acid balance between 6 g essential amino acids (EAA) co-ingested with 35 g carbohydrate taken 1 hour versus 3 hours post-exercise. Compounding the unreliability of the post-exercise 'window' is the finding by Tipton et al. \\[63\\] that immediate pre-exercise ingestion of the same EAA-carbohydrate solution resulted in a significantly greater and more sustained MPS response compared to the immediate post-exercise ingestion, although the validity of these findings has been disputed based on flawed methodology \\[36\\]. Notably, Fujita et al. \\[64\\] saw opposite results using a similar design, except the EAA-carbohydrate was ingested 1 hour prior to exercise compared to ingestion immediately pre-exercise in Tipton et al. \\[63\\]. Adding yet more incongruity to the evidence, Tipton et al. \\[65\\] found no significant difference in net MPS between the ingestion of 20 g whey immediately pre- versus the same solution consumed 1 hour post-exercise. Collectively, the available data lack any consistent indication of an ideal post-exercise timing scheme for maximizing MPS.\n\nIt also should be noted that measures of MPS assessed following an acute bout of resistance exercise do not always occur in parallel with chronic upregulation of causative myogenic signals \\[66\\] and are not necessarily predictive of long-term hypertrophic responses to regimented resistance training \\[67\\]. Moreover, the post-exercise rise in MPS in untrained subjects is not recapitulated in the trained state \\[68\\], further confounding practical relevance. Thus, the utility of acute studies is limited to providing clues and generating hypotheses regarding hypertrophic adaptations; any attempt to extrapolate findings from such data to changes in lean body mass is speculative, at best.\n\n## Muscle hypertrophy\n\nA number of studies have directly investigated the long-term hypertrophic effects of post-exercise protein consumption. The results of these trials are curiously conflicting, seemingly because of varied study design and methodology. Moreover, a majority of studies employed both pre- and post-workout supplementation, making it impossible to tease out the impact of consuming nutrients after exercise. These confounding issues highlight the difficulty in attempting to draw relevant conclusions as to the validity of an \"anabolic window.\" What follows is an overview of the current research on the topic. Only those studies that specifically evaluated immediate (\u2264 1 hour) post-workout nutrient provision are discussed (see Table\u20091 for a summary of data).\n\nTable 1. Post-exercise nutrition and muscle hypertrophy\n\n| **Study** | **Subjects** | **Supplementation** | **Protein matched with control?** | **Measurement instrument** | **Training protocol** | **Results** |\n|:---|:---|:---|:---|:---|:---|:---|\n| Esmarck et al. \\[69\\] | 13 untrained elderly males | 10 g milk\/soy protein combo consumed either immediately or 2 hours after exercise | Yes | MRI and muscle biopsy | Progressive resistance training consisting of multiple sets of lat pulldown, leg press and knee extension performed 3 days\/wk for 12 wk | Significant increase in muscle CSA with immediate vs. 
delayed supplementation |\n| Cribb and Hayes \\[70\\] | 23 young recreational male bodybuilders | 1 g\/kg of a supplement containing 40 g whey isolate, 43 g glucose, and 7 g creatine monohydrate consumed either immediately before and after exercise or in the early morning and late evening | Yes | DXA and muscle biopsy | Progressive resistance training consisting of exercises for the major muscle groups performed 3 days\/wk for 10 wks | Significant increases in lean body mass and muscle CSA of type II fibers in immediate vs. delayed supplementation |\n| Willoughby et al. \\[71\\] | 19 untrained young males | 20 g protein or 20 g dextrose consumed 1 hour before and after exercise | No | Hydrostatic weighing, muscle biopsy, surface measurements | Progressive resistance training consisting of 3 sets of 6\u20138 repetitions for all the major muscles performed 4 days\/wk for 10 wks | Significant increase in total body mass, fat-free mass, and thigh mass with protein vs. carbohydrate supplementation |\n| Hulmi et al. \\[72\\] | 31 untrained young males | 15 g whey isolate or placebo consumed immediately before and after exercise | No | MRI, muscle biopsy | Progressive, periodized total body resistance training consisting of 2\u20135 sets of 5\u201320 repetitions performed 2 days\/wk for 21 wks. | Significant increase in CSA of the vastus lateralis but not of the other quadriceps muscles in supplemented group versus placebo. |\n| Verdijk et al. \\[73\\] | 28 untrained elderly males | 10 g casein hydrolysate or placebo consumed immediately before and after exercise | No | DXA, CT, and muscle biopsy | Progressive resistance training consisting of multiple sets of leg press and knee extension performed 3 days\/wk for 12 wks | No significant differences in muscle CSA between groups |\n| Hoffman et al. \\[74\\] | 33 well-trained young males | Supplement containing 42 g protein (milk\/collagen blend) and 2 g carbohydrate consumed either immediately before and after exercise or in the early morning and late evening | Yes | DXA | Progressive resistance training consisting of 3\u20134 sets of 6\u201310 repetitions of multiple exercises for the entire body performed 4 days\/wk for 10 weeks. | No significant differences in total body mass or lean body mass between groups. |\n| Erskine et al. \\[75\\] | 33 untrained young males | 20 g high-quality protein or placebo consumed immediately before and after exercise | No | MRI | 4\u20136 sets of elbow flexion performed 3 days\/wk for 12 weeks | No significant differences in muscle CSA between groups |\n\nEsmarck et al. \\[69\\] provided the first experimental evidence that consuming protein immediately after training enhanced muscular growth compared to delayed protein intake. Thirteen untrained elderly male volunteers were matched in pairs based on body composition and daily protein intake and divided into two groups: P0 and P2. Subjects performed a progressive resistance training program of multiple sets for the upper and lower body. P0 received an oral protein\/carbohydrate supplement immediately post-exercise, while P2 received the same supplement 2 hours following the exercise bout. Training was carried out 3 days a week for 12 weeks. At the end of the study period, cross-sectional area (CSA) of the quadriceps femoris and mean fiber area were significantly increased in the P0 group while no significant increase was seen in P2. 
These results support the presence of a post-exercise window and suggest that delaying post-workout nutrient intake may impede muscular gains.\n\nIn contrast to these findings, Verdijk et al. \\[73\\] failed to detect any increases in skeletal muscle mass from consuming a post-exercise protein supplement in a similar population of elderly men. Twenty-eight untrained subjects were randomly assigned to receive either a protein or placebo supplement consumed immediately before and immediately following the exercise session. Subjects performed multiple sets of leg press and knee extension 3 days per week, with the intensity of exercise progressively increased over the course of the 12-week training period. No significant differences in muscle strength or hypertrophy were noted between groups at the end of the study period, indicating that post-exercise nutrient timing strategies do not enhance training-related adaptation. It should be noted that, as opposed to the study by Esmarck et al. \\[69\\], this study only investigated adaptive responses to supplementation in the thigh musculature; it therefore is not clear based on these results whether the upper body might respond differently to post-exercise supplementation than the lower body.\n\nIn an elegant single-blinded design, Cribb and Hayes \\[70\\] found a significant benefit to post-exercise protein consumption in 23 recreational male bodybuilders. Subjects were randomly divided into either a PRE-POST group that consumed a supplement containing protein, carbohydrate and creatine immediately before and after training or a MOR-EVE group that consumed the same supplement in the morning and evening at least 5 hours outside the workout. Both groups performed regimented resistance training that progressively increased intensity from 70% 1RM to 95% 1RM over the course of 10 weeks. Results showed that the PRE-POST group achieved a significantly greater increase in lean body mass and increased type II fiber area compared to MOR-EVE. Findings support the benefits of nutrient timing on training-induced muscular adaptations. The study was limited by the addition of creatine monohydrate to the supplement, which may have facilitated increased uptake following training. Moreover, the fact that the supplement was taken both pre- and post-workout confounds whether an anabolic window mediated results.\n\nWilloughby et al. \\[71\\] also found that nutrient timing resulted in positive muscular adaptations. Nineteen untrained male subjects were randomly assigned to receive either 20 g of protein or 20 g of dextrose administered 1 hour before and after resistance exercise. Training consisted of 3 sets of 6\u20138 repetitions at 85%\u201390% intensity. Training was performed 4 times a week over the course of 10 weeks. At the end of the study period, total body mass, fat-free mass, and thigh mass were significantly greater in the protein-supplemented group compared to the group that received dextrose. Given that the group receiving the protein supplement consumed an additional 40 grams of protein on training days, it is difficult to discern whether results were due to the increased protein intake or the timing of the supplement.\n\nIn a comprehensive study, Hoffman et al. \\[74\\] randomly assigned 33 well-trained males to receive a protein supplement either in the morning and evening (n\u2009=\u200913) or immediately before and immediately after resistance exercise (n\u2009=\u200913). Seven participants served as unsupplemented controls. 
Workouts consisted of 3\u20134 sets of 6\u201310 repetitions of multiple exercises for the entire body. Training was carried out on a 4-day-a-week split routine with intensity progressively increased over the course of the study period. After 10 weeks, no significant differences were noted between groups with respect to body mass and lean body mass. The study was limited by its use of DXA to assess body composition, which lacks the sensitivity to detect small changes in muscle mass compared to other imaging modalities such as MRI and CT \\[76\\].\n\nHulmi et al. \\[72\\] randomized 31 young untrained male subjects into 1 of 3 groups: protein supplement (n\u2009=\u200911), non-caloric placebo (n\u2009=\u200910) or control (n\u2009=\u200910). High-intensity resistance training was carried out over 21 weeks. Supplementation was provided before and after exercise. At the end of the study period, muscle CSA was significantly greater in the protein-supplemented group compared to placebo or control. A strength of the study was its long-term training period, providing support for the beneficial effects of nutrient timing on chronic hypertrophic gains. Again, however, it is unclear whether enhanced results associated with protein supplementation were due to timing or increased protein consumption.\n\nMost recently, Erskine et al. \\[75\\] failed to show a hypertrophic benefit from post-workout nutrient timing. Subjects were 33 untrained young males, pair-matched for habitual protein intake and strength response to a 3-week pre-study resistance training program. After a 6-week washout period during which no training was performed, subjects were randomly assigned to receive either a protein supplement or a placebo immediately before and after resistance exercise. Training consisted of 6\u20138 sets of elbow flexion carried out 3 days a week for 12 weeks. No significant differences were found in muscle volume or anatomical cross-sectional area between groups.\n\n# Discussion\n\nDespite claims that immediate post-exercise nutritional intake is essential to maximize hypertrophic gains, evidence-based support for such an \"anabolic window of opportunity\" is far from definitive. The hypothesis is based largely on the pre-supposition that training is carried out in a fasted state. During fasted exercise, a concomitant increase in muscle protein breakdown causes the pre-exercise net negative amino acid balance to persist in the post-exercise period despite training-induced increases in muscle protein synthesis \\[36\\]. Thus, in the case of resistance training after an overnight fast, it would make sense to provide immediate nutritional intervention--ideally in the form of a combination of protein and carbohydrate--for the purposes of promoting muscle protein synthesis and reducing proteolysis, thereby switching a net catabolic state into an anabolic one. Over a chronic period, this tactic could conceivably lead cumulatively to an increased rate of gains in muscle mass.\n\nThis inevitably raises the question of how pre-exercise nutrition might influence the urgency or effectiveness of post-exercise nutrition, since not everyone engages in fasted training. In practice, it is common for those with the primary goal of increasing muscular size and\/or strength to make a concerted effort to consume a pre-exercise meal within 1\u20132 hours prior to the bout in an attempt to maximize training performance. 
Depending on its size and composition, this meal can conceivably function as both a pre- and an immediate post-exercise meal, since the time course of its digestion\/absorption can persist well into the recovery period. Tipton et al. \\[63\\] observed that a relatively small dose of EAA (6 g) taken immediately pre-exercise was able to elevate blood and muscle amino acid levels by roughly 130%, and these levels remained elevated for 2 hours after the exercise bout. Although this finding was subsequently challenged by Fujita et al. \\[64\\], other research by Tipton et al. \\[65\\] showed that the ingestion of 20 g whey taken immediately pre-exercise elevated muscular uptake of amino acids to 4.4 times pre-exercise resting levels during exercise, and uptake did not return to baseline levels until 3 hours post-exercise. These data indicate that even minimal-to-moderate pre-exercise EAA or high-quality protein taken immediately before resistance training is capable of sustaining amino acid delivery into the post-exercise period. Given this scenario, immediate post-exercise protein dosing for the aim of mitigating catabolism seems redundant. The next scheduled protein-rich meal (whether it occurs immediately or 1\u20132 hours post-exercise) is likely sufficient for maximizing recovery and anabolism.\n\nOn the other hand, there are others who might train before lunch or after work, where the previous meal was finished 4\u20136 hours prior to commencing exercise. This lag in nutrient consumption can be considered significant enough to warrant post-exercise intervention if muscle retention or growth is the primary goal. Layman \\[77\\] estimated that the anabolic effect of a meal lasts 5\u20136 hours based on the rate of postprandial amino acid metabolism. However, infusion-based studies in rats \\[78,79\\] and humans \\[80,81\\] indicate that the postprandial rise in MPS from ingesting amino acids or a protein-rich meal is more transient, returning to baseline within 3 hours despite sustained elevations in amino acid availability. It thus has been hypothesized that a \"muscle full\" status can be reached, in which MPS becomes refractory and circulating amino acids are shunted toward oxidation or fates other than MPS. In light of these findings, when training is initiated more than \\~3\u20134 hours after the preceding meal, the classical recommendation to consume protein (at least 25 g) as soon as possible seems warranted in order to reverse the catabolic state, which in turn could expedite muscular recovery and growth. However, as illustrated previously, minor pre-exercise nutritional interventions can be undertaken if a significant delay in the post-exercise meal is anticipated.\n\nAn interesting area of speculation is the generalizability of these recommendations across training statuses and age groups. Burd et al. \\[82\\] reported that an acute bout of resistance training in untrained subjects stimulates both mitochondrial and myofibrillar protein synthesis, whereas in trained subjects, protein synthesis becomes more preferential toward the myofibrillar component. This suggests a less global response in advanced trainees that potentially warrants closer attention to protein timing and type (e.g., high-leucine sources such as dairy proteins) in order to optimize rates of muscular adaptation. In addition to training status, age can influence training adaptations. 
Elderly subjects exhibit what has been termed \"anabolic resistance,\" characterized by a lower receptivity to amino acids and resistance training \\[83\\]. The mechanisms underlying this phenomenon are not clear, but there is evidence that, in younger adults, the acute anabolic response to protein feeding appears to plateau at a lower dose than in elderly subjects. Illustrating this point, Moore et al. \\[84\\] found that 20 g whole egg protein maximally stimulated post-exercise MPS, while 40 g increased leucine oxidation without any further increase in MPS in young men. In contrast, Yang et al. \\[85\\] found that elderly subjects displayed greater increases in MPS when consuming a post-exercise dose of 40 g whey protein compared to 20 g. These findings suggest that older subjects require higher individual protein doses for the purpose of optimizing the anabolic response to training. Further research is needed to better assess post-workout nutrient timing response across various populations, particularly with respect to trained\/untrained and young\/elderly subjects.\n\nThe body of research in this area has several limitations. First, while there is an abundance of acute data, controlled, long-term trials that systematically compare the effects of various post-exercise timing schemes are lacking. The majority of chronic studies have examined pre- and post-exercise supplementation simultaneously, as opposed to comparing the two treatments against each other. This prevents the possibility of isolating the effects of either treatment. That is, we cannot know whether pre- or post-exercise supplementation was the critical contributor to the outcomes (or lack thereof). Another important limitation is that the majority of chronic studies neglect to match total protein intake between the conditions compared. As such, it is not possible to ascertain whether positive outcomes were influenced by timing relative to the training bout, or simply by a greater protein intake overall. Further, dosing strategies employed in the preponderance of chronic nutrient timing studies have been overly conservative, providing only 10\u201320 g protein near the exercise bout. More research is needed using protein doses known to maximize acute anabolic response, which has been shown to be approximately 20\u201340 g, depending on age \\[84,85\\]. There is also a lack of chronic studies examining the co-ingestion of protein and carbohydrate near training. Thus far, chronic studies have yielded equivocal results. On the whole, they have not corroborated the consistency of positive outcomes seen in acute studies examining post-exercise nutrition.\n\nAnother limitation is that the majority of studies on the topic have been carried out in untrained individuals. Muscular adaptations in those without resistance training experience tend to be robust, and do not necessarily reflect gains experienced in trained subjects. It therefore remains to be determined whether training status influences the hypertrophic response to post-exercise nutritional supplementation.\n\nA final limitation of the available research is that current methods used to assess muscle hypertrophy are widely disparate, and the measures obtained are inexact \\[68\\]. As such, it is questionable whether these tools are sensitive enough to detect small differences in muscular hypertrophy. 
Although minor variances in muscle mass would be of little relevance to the general population, they could be very meaningful for elite athletes and bodybuilders. Thus, despite conflicting evidence, the potential benefits of post-exercise supplementation cannot be readily dismissed for those seeking to optimize a hypertrophic response. By the same token, widely varying feeding patterns among individuals challenge the common assumption that the post-exercise \"anabolic window of opportunity\" is universally narrow and urgent.\n\n## Practical applications\n\nDistilling the data into firm, specific recommendations is difficult due to the inconsistency of findings and scarcity of systematic investigations seeking to optimize pre- and\/or post-exercise protein dosage and timing. Practical nutrient timing applications for the goal of muscle hypertrophy inevitably must be tempered with field observations and experience in order to bridge gaps in the scientific literature. With that said, high-quality protein dosed at 0.4\u20130.5 g\/kg of LBM at both pre- and post-exercise is a simple, relatively fail-safe general guideline that reflects the current evidence showing a maximal acute anabolic effect of 20\u201340 g \\[53,84,85\\]. For example, someone with 70 kg of LBM would consume roughly 28\u201335 g protein in both the pre- and post-exercise meal (a worked calculation appears in the sketch below). Exceeding this would have minimal detriment, if any, whereas significantly undershooting or neglecting it altogether would not maximize the anabolic response.\n\nDue to the transient anabolic impact of a protein-rich meal and its potential synergy with the trained state, pre- and post-exercise meals should not be separated by more than approximately 3\u20134 hours, given a typical resistance training bout lasting 45\u201390 minutes. If protein is delivered within particularly large mixed meals (which are inherently more anticatabolic), a case can be made for lengthening the interval to 5\u20136 hours. This strategy covers the hypothetical timing benefits while allowing significant flexibility in the length of the feeding windows before and after training. Specific timing within this general framework would vary depending on individual preference and tolerance, as well as exercise duration. As one of many possible examples, a 60-minute resistance training bout could have up to 90-minute feeding windows on both sides of the bout, given central placement between the meals. In contrast, bouts exceeding typical duration would default to shorter feeding windows if the 3\u20134 hour pre- to post-exercise meal interval is maintained. Shifting the training session closer to the pre- or post-exercise meal should be dictated by personal preference, tolerance, and lifestyle\/scheduling constraints.\n\n
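To make the dosing and scheduling guidelines above concrete, the following minimal Python sketch restates them as arithmetic; the function names and structure are our own illustration and are not drawn from the cited literature.\n\n```python\ndef protein_dose_g(lbm_kg, low=0.4, high=0.5):\n    """Suggested per-meal protein range (g) from the 0.4-0.5 g/kg LBM guideline."""\n    return lbm_kg * low, lbm_kg * high\n\ndef feeding_window_h(meal_interval_h, bout_h):\n    """Feeding window (h) on each side of the bout, assuming the bout is\n    placed centrally between the pre- and post-exercise meals."""\n    return (meal_interval_h - bout_h) / 2\n\nlo, hi = protein_dose_g(70)  # 70 kg LBM, the example used in the text\nprint(f"protein per meal: {lo:.0f}-{hi:.0f} g")             # 28-35 g\nprint(f"window each side: {feeding_window_h(4, 1):.1f} h")  # 1.5 h (90 min)\n```\n\nWith a 3-hour meal-to-meal interval and the same 60-minute bout, the window shrinks to 1 hour per side, consistent with the point above that longer bouts or shorter intervals default to narrower feeding windows.\n\n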
Koopman et al \\[52\\] found that after full-body resistance training, adding carbohydrate (0.15, or 0.6 g\/kg\/hr) to amply dosed casein hydrolysate (0.3 g\/kg\/hr) did not increase whole body protein balance during a 6-hour post-exercise recovery period compared to the protein-only treatment. Subsequently, Staples et al \\[53\\] reported that after lower-body resistance exercise (leg extensions), the increase in post-exercise muscle protein balance from ingesting 25 g whey isolate was not improved by an additional 50 g maltodextrin during a 3-hour recovery period. For the goal of maximizing rates of muscle gain, these findings support the broader objective of meeting total daily carbohydrate need instead of specifically timing its constituent doses. Collectively, these data indicate an increased potential for dietary flexibility while maintaining the pursuit of optimal timing.\n\n# Competing interests\n\nThe authors declare that they have no competing interests.\n\n# Authors' contribution\n\nAAA and BJS each contributed equally to the formulation and writing of the manuscript. Both authors read and approved the final manuscript.","meta":{"dup_signals":{"dup_doc_count":211,"dup_dump_count":85,"dup_details":{"curated_sources":4,"2023-40":1,"2023-23":3,"2023-14":2,"2022-49":1,"2022-40":2,"2022-33":3,"2022-27":1,"2022-21":3,"2021-49":1,"2021-43":2,"2021-39":1,"2021-31":3,"2021-25":1,"2021-21":2,"2021-17":1,"2021-10":2,"2020-50":1,"2020-40":1,"2020-34":1,"2020-29":1,"2020-24":1,"2020-16":3,"2020-10":1,"2020-05":1,"2019-47":1,"2019-43":2,"2019-35":1,"2019-30":2,"2019-22":1,"2019-13":2,"2019-09":1,"2019-04":1,"2018-51":2,"2018-47":1,"2018-39":2,"2018-30":4,"2018-26":1,"2018-22":2,"2018-17":2,"2018-13":4,"2018-09":2,"2017-51":5,"2017-47":1,"2017-43":4,"2017-39":2,"2017-34":4,"2017-30":1,"2017-26":5,"2017-22":4,"2017-17":1,"2017-09":5,"2017-04":3,"2016-50":3,"2016-44":3,"2016-40":3,"2016-36":3,"2016-30":4,"2016-26":3,"2016-22":3,"2016-18":3,"2016-07":2,"2015-48":3,"2015-40":3,"2015-35":3,"2015-32":3,"2015-27":3,"2015-22":3,"2015-14":2,"2014-52":3,"2014-49":4,"2014-42":8,"2014-41":4,"2014-35":6,"2014-23":6,"2014-15":3,"2023-50":1,"2024-22":2,"2024-18":4,"2024-10":3,"2017-13":3,"2015-18":2,"2015-11":2,"2015-06":2,"2014-10":1,"2013-48":1}},"file":"PMC3577439"},"subset":"pubmed_central"} {"text":"abstract: Following extensive examination of published and unpublished materials, we provide a history of the use of dexamethasone in pregnant women at risk of carrying a female fetus affected by congenital adrenal hyperplasia (CAH). This intervention has been aimed at preventing development of ambiguous genitalia, the urogenital sinus, tomboyism, and lesbianism. We map out ethical problems in this history, including: misleading promotion to physicians and CAH-affected families; de facto experimentation without the necessary protections of approved research; troubling parallels to the history of prenatal use of diethylstilbestrol (DES); and the use of medicine and public monies to attempt prevention of benign behavioral sex variations. Critical attention is directed at recent investigations by the U.S. Food and Drug Administration (FDA) and Office of Human Research Protections (OHRP); we argue that the weak and unsupported conclusions of these investigations indicate major gaps in the systems meant to protect subjects of high-risk medical research.\nauthor: Alice Dreger; Ellen K. 
Feder; Anne Tamar-Mattis\ndate: 2012-07-31\nreferences:\nsubtitle: An Ethics Canary in the Modern Medical Mine\ntitle: Prenatal Dexamethasone for Congenital Adrenal Hyperplasia\n\n# Introduction\/Our Backgrounds\n\nIn this article, we provide a condensed history of the use of prenatal dexamethasone for congenital adrenal hyperplasia, with an eye toward the ethically problematic aspects of this history. Congenital adrenal hyperplasia (CAH) is a disease of the endocrine system that can cause virilization (i.e., development of masculine traits) in female fetuses. In an attempt to prevent CAH-affected female fetuses from developing in a sexually atypical fashion, some physicians use the steroid dexamethasone to treat pregnant women \"at risk\" of carrying an affected daughter. This intervention starts as soon as pregnancy is confirmed and continues throughout the pregnancy if the fetus is ultimately diagnosed as a CAH-affected female. If\u2014several weeks into the dosing\u2014the fetus is determined to be male or not CAH-affected, the intervention is immediately stopped, because the intention is only to alter the course of development in CAH-affected females.\n\nThis use of dexamethasone was first described in 1984 in *The Journal of Pediatrics* by Michel David and Maguelone Forest, French clinician-researchers. David and Forest reported apparent effectiveness of prenatal dexamethasone in eliminating genital virilization in a single girl affected by CAH (David and Forest 1984). In the nearly three decades since David and Forest's paper, many specialists have come to believe that prenatal dexamethasone for CAH constitutes the standard of care. A 2000\u20132001 survey of members of the European Society for Paediatric Endocrinology, representing 125 institutions, found that, \"\\[i\\]n 57\u00a0% of the centres prenatal diagnosis and treatment \\[of CAH with dexamethasone\\] are routine\" (Riepe et al. 2002, 199). A 2010 Continuing Medical Education review article in the *Obstetrical and Gynecological Survey* concluded: \"Given the data available at this moment, antenatal treatment with corticosteroids is recommended\" (Vos and Bruinse 2010, 203).\n\nBut what data are \"available at this moment\" such that prenatal dexamethasone for CAH can be recommended to obstetricians, and recommended by them to prospective patients?\n\nA systematic review and meta-analysis of this intervention, published in 2010 in *Clinical Endocrinology*, indicated that a search of the literature \"identified 1083 candidate studies for review; of which, only four studies were confirmed eligible\" for serious scientific consideration (Fern\u00e1ndez-Balsells et al. 2010, 438). That is to say, as late as 2010, less than one half of one percent of published \"studies\" of this intervention were regarded as being of high enough quality to provide meaningful data for a meta-analysis. Even these four studies were of low quality:\n\n> All the eligible studies were observational and were conducted by two groups of investigators (one from the US and one from Europe). \u2026 Studies lacked details regarding the use of methodological features that protect against bias. None of the studies reported blinding of the outcome assessors to the exposure (i.e., the researchers estimating each patient's degree of virilization). Loss to follow-up was, in most cases, substantial (Fern\u00e1ndez-Balsells et al. 
2010, 438).\n\nIn spite of at least a thousand pregnant women likely having been exposed by the time of the review, the four studies judged worthy of inclusion in the meta-analysis covered only 325 pregnancies. Even more stunning, the meta-analysis revealed, \"*there were no data on long-term follow-up of physical and metabolic outcomes in children exposed to dexamethasone*\" prenatally for CAH (Fern\u00e1ndez-Balsells et al. 2010, 436, *emphasis added*). It was not that the data about long-term physical and metabolic outcomes were unclear; there simply were none available.\n\nToday, some clinicians promote prenatal dexamethasone for CAH as \"an excellent example of pharmacological therapy during pregnancy\" (Rosner et al. 2006, 803) and even as \"a paradigm of prenatal diagnosis and treatment\" (Nimkarn and New 2010a, 5). Yet the Endocrine Society Task Force that had commissioned the 2010 meta-analysis concluded: \"The evidence regarding fetal and maternal sequelae \u2026 is of low or very low quality due to methodological limitations and sample sizes\" (Speiser et al. 2010a, 4137). This would hardly seem to qualify prenatal dexamethasone for CAH as \"an excellent example\" or a \"paradigm\" of a prenatal pharmacological intervention. Indeed, for lack of quality clinical studies, the 2010 Task Force could not even say with any confidence whether prenatal dexamethasone works to reduce genital virilization. Notice the specific qualifications included in the Task Force's statement on efficacy: \"*\\[T\\]he groups advocating and performing prenatal treatment appear to agree* that it is effective in reducing and often eliminating virilization of female fetal genitalia and that the success rate is about 80\u201385\u00a0%\" (Speiser et al. 2010a, 4138, *emphasis added*).\n\nRegardless of some people's enthusiastic endorsements of the intervention, we show below that ethical debates within medicine about prenatal dexamethasone for CAH are actually not new. Those internal debates, however, have focused on potential risks and benefits to mothers and children exposed. There are a number of other ethical problems in the history of this intervention also deserving of attention, including: de facto experimentation on fetuses and pregnant women, largely outside of prospective long-term trials and without adequate informed consent; failure to appropriately collect and publish evidence when promoting and providing a high-risk intervention; use of medicine and public monies for research to prevent benign behavioral sex variations, including tomboyism and lesbianism (cf. Murphy 1997); and inadequacy in the United States of systems designed to protect subjects of medical experimentation, including especially pregnant women and their offspring.\n\nIn the United States, the use of prenatal dexamethasone remains \"off-label.\" This means that the indication has never received approval by the Food and Drug Administration (FDA).[^1] Nonetheless, we want to be clear: The problem we see with the use of prenatal dexamethasone for CAH is not per se that it is an off-label use; it is rather that\u2014as this paper documents\u2014prenatal dexamethasone for CAH has sometimes been promoted to prospective patients and clinicians in misleading ways, and sometimes promoted for uses that are not legitimately medical (e.g., for the prevention of tomboyism and lesbianism). 
Furthermore, this intervention\u2014*intended* to alter the course of fetal development\u2014has been \"studied\" in ways so slipshod as to breach professional standards of medical ethics.\n\nWe come to this work as history, philosophy, and legal scholars interested in the medical treatment of children with atypical sex, but also as women who have long advocated clinical reform in this general area. Late in 2009, clinicians working in the pediatric care of children born with sex anomalies made one of us (Dreger) aware of their growing alarm about prenatal dexamethasone for CAH. These clinicians were concerned that pregnant women at risk for having daughters with CAH were being given prenatal dexamethasone without being informed that (a) this use has consistently been labeled experimental by expert panels; (b) benefits and risks have not been established, due to inadequate scientific study; and (c) some children who had been exposed in utero were being studied retrospectively, in many cases years later, by the very clinicians who had been (and were still) actively promoting the use to pregnant women as \"safe for mother and child,\" to find out what the risks might really be.\n\nDreger (a historian) then reviewed the medical literature as well as Internet-based advertisements directed at affected families and became quite concerned. In December 2009 and January 2010, respectively, Dreger and UCLA pediatric geneticist Eric Vilain separately asked Mount Sinai School of Medicine pediatric endocrinologist Maria New, the most prominent promoter of this intervention, about the informed consent process she used. New is a highly distinguished pediatric endocrinologist and member of the National Academy of Sciences. By 2003, she had already publicly taken credit for having \"treated\" more than 600 pregnant women with dexamethasone in an attempt to prevent virilization in CAH-affected female fetuses (Kitzinger 2003), putting her efforts in this area well beyond any other clinical researcher's (see also New et al. 2001). Dreger's preliminary research indicated that, even while obtaining a federal grant promising to determine the actual safety and efficacy of prenatal dexamethasone for CAH through retrospective follow-up studies, New was functioning as a uniquely aggressive promoter of the intervention among parents and clinicians, repeatedly describing this intervention as \"safe for mother and child\" (New 2010a, \u00b64). But when Dreger and Vilain attempted to ask New about informed consent, both were rebuffed by New. Indeed, at the Miami medical conference session where Vilain pressed the issue (at New 2010b), New publicly admonished Vilain that his question to her was inappropriate.\n\nDreger then asked colleagues in bioethics and allied fields to join with her in raising concerns to the U.S. government regarding the potential failure to protect the rights of these women and their offspring. The second author of this paper (Feder) became the corresponding author on the resulting (February 2010) letters of concern from a total of 32 academicians to the Office of Human Research Protections (OHRP) and the FDA. The third author (Tamar-Mattis) handled most of our subsequent communications with federal agencies and conducted additional legal research.\n\nIn September 2010, the OHRP and FDA informed us they could find nothing worth pursuing further. 
The OHRP decided that abuse had not occurred because: (a) at least some of the pregnant women had been enrolled in IRB-approved studies when given the drug at Weill Cornell Medical College, where most of the interventions appear to have occurred during New's tenure; (b) follow-up studies conducted at Mount Sinai School of Medicine under New had been IRB-approved; and (c) the fetal intervention itself does not require IRB oversight. The FDA explained that (to our surprise) regulations allow a clinician to promote an off-label use—even an experimental use intended to alter fetal development—as "safe and effective" so long as the clinician does not simultaneously work for the drug maker or count as an FDA-approved investigator of the drug (Borror 2010). (Dreger has made the agencies' full responses available at fetaldex.org.) Nevertheless, we show here that the material generated by the government's own investigations—along with further scholarly inquiry on our part—appears actually to *confirm* the concerns we expressed at the outset, suggesting a major failure of the layered systems designed to protect subjects of research, especially pregnant women and their fetuses.

Because the following history necessarily focuses attention on the actions of Maria New, we want to be sure that our readers appreciate that New's career has included critically important research and clinical care that has improved the lives, health, and fertility of a very large population. Her promotion of prenatal dexamethasone for CAH was no doubt motivated by a desire to improve the lives of her patients. The same beneficent attitude was likely present in all or most of the clinicians who used prenatal dexamethasone for CAH. But it is worth remembering that many cases in the history of medicine now rightly understood as ethically problematic were carried out by clinical researchers with good intentions (Eder 2011; Reverby 2009; Skloot 2010).

It has been impossible for us not to be aware that, as we have researched this fetal intervention, the medical world has been marking the 40^th^ anniversary of the 1971 publication of a study reporting a relatively high number of occurrences of a rare vaginal cancer in girls and young women who had been exposed in utero to diethylstilbestrol (DES). It was this small 1971 study—eight subjects with adenocarcinoma of the vagina matched to 32 untreated controls—that marked the beginning of the end of DES administration to pregnant women (Herbst, Ulfelder, and Poskanzer 1971). The use of DES during pregnancy was driven by admirable intentions, primarily the prevention of miscarriage. Already by 1953, DES had actually been found ineffective for miscarriage prevention, but doctors continued to prescribe it to pregnant women anyway. Over the years, millions of fetuses were exposed, putting females and males at increased risk for rare cancers, genital anomalies, reproductive problems, etc. (Goodman, Schorge, and Greene 2011).

As we have spoken with others about how the use of prenatal dexamethasone for CAH has played out, it has been our experience that many have been skeptical that a DES-like scenario could occur with prenatal dexamethasone for CAH after what physicians learned from DES.
We find ourselves wondering whether it is the assumption that \"we'll never do *that* again\" that paradoxically has blinded many clinicians to the striking parallels between DES and prenatal dexamethasone for CAH: Both DES and dexamethasone are powerful synthetic hormones; in both cases the practice involved administration of the intervention starting early in pregnancy; both were introduced into medical practice without much study as to efficacy and safety (Dreger et al. forthcoming).\n\nThree striking differences between these interventions are that: (1) this poorly studied yet widespread use of prenatal dexamethasone is happening *after* the lessons supposedly learned from DES; (2) while DES was never intended to alter fetal development, prenatal dexamethasone for CAH has explicitly aimed to do so; and (3) while DES was aimed at preventing fetal death, dexamethasone is directed at preventing something we would hope most people would understand to be substantially less dire, namely the development of atypical sex.\n\nYet rather than suggesting that the case of prenatal dexamethasone for CAH should be understood as one of the \"big\" stories of the history of medicine (like DES), we are suggesting something more disturbing: that this case appears to be representative of problems endemic in modern medicine, problems that threaten the health, lives, and rights of patients who continue to become unwitting subjects of (problematic) medical experimentation. Because so many systems of protection appear to have failed these women and children, we fear that prenatal dexamethasone for CAH is a canary in the modern medical mine.\n\n# CAH's Effects and the Goals of Using Prenatal Dexamethasone\n\nCongenital adrenal hyperplasia (CAH) is a genetic disease involving malfunction of the adrenal glands, endocrine organs that contribute to the production of sex steroids. CAH can occur in both males and females and can cause a number of metabolic problems, some of which may lead to postnatal adrenal crisis and death if left untreated. Because some forms of CAH are very dangerous, all U.S. states require newborn screening for CAH.\n\nThe prenatal administration of dexamethasone, a potent synthetic steroid of the glucocorticoid class, cannot prevent an affected child from being born with CAH. The intervention is aimed instead at causing CAH-affected female fetuses to develop in a more female-typical fashion than they otherwise might. Androgens contribute to sex differentiation, including in the brain and genitals; relatively low prenatal levels ordinarily result in a more female-typical development; relatively high levels usually result in male-typical development. In certain forms of CAH\u2014including 21-hydroxylase deficiency (21-OHD CAH), i.e., the type of CAH most at issue here\u2014the prenatal production of high levels of androgens may result in a genetic female (46,XX) fetus developing along a more masculine pathway neurologically and genitally. Prenatal dexamethasone is meant to engineer the CAH-affected female fetus's hormonal system to be typically female.\n\nThe intensity of CAH's effects on developing females varies. An affected genetic-female child might be born fairly female-typical, or she may be born with a large clitoris, labia that fuse and appear to form a scrotum, and an occluded or even absent lower vagina. 
Some affected genetic females have developed such masculine genitalia that they have been assumed at birth to be typical males and have been raised as boys (Eder 2011; Lee and Houk 2010). Newborn screening for CAH, aimed primarily at saving lives, has greatly reduced the likelihood of such children's conditions going undetected.\n\nAtypical clitorises and labia generally require no medical intervention for health. Despite evidence that elective genitoplasty may harm a girl (Crouch et al. 2004), pediatric urologists often perform surgery to \"feminize\" atypical clitorises and labia for what they call \"social reasons,\" including promotion of parent\u2013child bonding. They do so even while admitting we lack evidence that this approach is necessary or effective (Lee et al. 2006).\n\nSome CAH-affected girls are also born with a urogenital sinus, a condition in which the urethra and vagina are joined together. This creates potential for repetitive infections and also presents problems for sexual intercourse. Thus the urogenital sinus has traditionally been treated with pediatric surgery. It represents part of what prenatal dexamethasone is meant to prevent.\n\nWomen \"at risk\" of giving birth to a CAH-affected daughter are most often identified because they have already given birth to a CAH-affected child. Others are identified through genetic screening. Because \"the period during which the genitalia of a female fetus may become virilized begins only 6 \\[weeks\\] after conception, treatment must be instituted as soon as the woman knows she is pregnant\" (Speiser et al. 2010a, 4137; cf. Nimkarn and New 2007). Although this intervention is sometimes termed \"low-dose therapy\" (New 2010c), researchers estimate that \"the effective glucocorticoid doses reaching the fetus are 60\u2013100 times physiologic\" (Miller 2008, 17), meaning this intervention exposes the developing fetus to 60 to 100 times the normal level of glucocorticoids. The potential harms of prenatal dexamethasone represent a growing source of concern for clinical researchers because emerging research indicates glucocorticoids may alter \"fetal programming,\" potentially resulting in serious metabolic problems that will not become apparent until adulthood (Hirvikoski et al. 2007; Marciniak et al. 2011). Some animal studies, for example, suggest long-term risk to the cardiovascular system (Lajic, Nordenstr\u00f6m, and Hirvikoski 2011).\n\nCAH is an autosomal recessive disorder, so offspring of carrier parents have a 1 in 4 chance of having CAH and thus only a chance of 1 in 8 of being a CAH-affected female (since only half of the 1 in 4 will be females). While recent advances now may enable determination of fetal sex by the seventh week (Devaney et al. 2011), before 2011 the sex status of fetuses could not be determined reliably before 10 to 12\u00a0weeks. As a consequence, about 7 out of 8 (87.5\u00a0percent) of the individuals who have been exposed to many weeks of prenatal dexamethasone\u2014at 60 to 100 times normal levels\u2014never even had the condition that was being targeted with the prenatal intervention. These individuals have been \"necessarily\" exposed to risk in order to try to engineer the development of the 1 in 8 who would be CAH-affected females. 
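To make the arithmetic behind these figures explicit, here is a minimal worked calculation of our own, using only the probabilities stated above:

```latex
% Offspring of two carriers of an autosomal recessive disorder:
\[ P(\text{CAH}) = \tfrac{1}{4}, \qquad P(\text{female}) = \tfrac{1}{2} \]

% Probability that a given pregnancy carries the intervention's
% actual target, a CAH-affected female:
\[ P(\text{CAH-affected female}) = \tfrac{1}{4} \times \tfrac{1}{2} = \tfrac{1}{8} \]

% Share of dexamethasone-exposed fetuses who stood no chance of benefiting:
\[ 1 - \tfrac{1}{8} = \tfrac{7}{8} = 87.5\% \]
```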
Many of the clinicians who have raised ethical concerns about prenatal dexamethasone for CAH have specifically been concerned about the 87.5\u00a0percent (those 7 out of 8) exposed to the risk of first-trimester glucocorticoid \"therapy\" who stood no chance to benefit (e.g., Frias et al. 2001; Hirvikoski et al. 2007; Miller 2008; Speiser et al. 2010a).\n\nMost clinicians who have written about prenatal dexamethasone have spoken of its purpose as the prevention of ambiguous genitalia and the urogenital sinus. (Sometimes they speak of the urogenital sinus as part of \"ambiguous genitalia,\" and sometimes they speak of these as two sets of concerns.) But interestingly, some clinicians also speak of prenatal dexamethasone's purpose as the prevention of feminizing genital *surgeries*. So one elective risky intervention (prenatal dexamethasone) has been represented as necessary to prevent another (feminizing genital surgeries) (Nimkarn and New 2010a). Notably, in 2006, a major consensus of the American and European pediatric endocrine groups, known as \"the Chicago consensus,\" urged a more conservative approach to surgeries for genital anomalies, stating that, for girls, \"surgery should only be considered in cases of severe virilization\" (Lee et al. 2006, e491). But this recommendation does not appear to have had much (if any) effect on the promotion of prenatal dexamethasone as a preventive to \"feminizing\" surgery.\n\nStudies have shown that girls with 21-OHD CAH exhibit increased rates of what clinicians call \"behavioral masculinization,\" i.e., behaviors that are more male-typical. These girls are, on average, more interested in boy-typical play, hobbies, and subjects than non-affected females, less interested in becoming mothers, and more likely to grow up to be lesbian or bisexual (Meyer-Bahlburg 1999; Meyer-Bahlburg et al. 2006). The rate of adult identification as male is significantly higher in this group than in the general population of people with an XX karyotype; clinician-researchers report that about 5\u00a0percent of CAH-affected genetic-females may ultimately self-identify as male (Dessens, Slijper, and Drop 2005). For this reason, some clinicians have considered recommending raising highly virilized 46,XX CAH-affected babies as boys (Lee and Houk 2010; cf. Eder 2011).\n\nAlthough many researchers have hinted that prenatal dexamethasone might be good for preventing the \"behavioral masculinization\" associated with CAH (e.g., Lajic et al. 1998), this potential \"benefit\" has been spelled out most explicitly in the work of Maria New (Dreger, Feder, and Tamar-Mattis 2010). Writing with another pediatric endocrinologist in the 2010 *Annals of the New York Academy of Sciences* in an article that disregarded the 2006 Chicago consensus, New summed up the situation thus:\n\n> Without prenatal therapy, masculinization of external genitalia in females is potentially devastating. It carries the risk of wrong sex assignment at birth, difficult reconstructive surgery, and subsequent long-term effects on quality of life. Gender-related behaviors, namely childhood play, peer association, career and leisure time preferences in adolescence and adulthood, maternalism \\[interest in being a mother\\], aggression, and sexual orientation become masculinized in 46,XX girls and women with 21HOD deficiency. \u2026 Genital sensitivity impairment and difficulties in sexual function in women who underwent genitoplasty early in life have likewise been reported. 
We anticipate that prenatal dexamethasone therapy will reduce the well-documented behavioral masculinization and difficulties related to reconstructive surgeries (Nimkarn and New 2010a, 9).\n\nAmong those advocating prenatal dexamethasone, New seems to have been particularly concerned that CAH-affected girls may fail to grow up to be heterosexual wives and mothers. Speaking to a group of parents of children with CAH in 2001 at a meeting organized by the CARES Foundation, New showed a photo of a girl with ambiguous genitalia and said:\n\n> The challenge here is \u2026 to see what could be done to restore this baby to the normal female appearance which would be compatible with her parents presenting her as a girl, with her eventually becoming somebody's wife, and having normal sexual development, and becoming a mother. And she has all the machinery for motherhood, and therefore nothing should stop that, if we can repair her surgically and help her psychologically to continue to grow and develop as a girl (New 2001a).\n\nIn a 1999 paper entitled \"What Causes Low Rates of Child-Bearing in Congenital Adrenal Hyperplasia?\", New's chief collaborator in psychoneuroendocrine studies of CAH, Heino Meyer-Bahlburg of Columbia University, noted:\n\n> CAH women as a group have a lower interest than controls in getting married and performing the traditional child-care\/housewife role. As children, they show an unusually low interest in engaging in maternal play with baby dolls, and their interest in caring for infants, the frequency of daydreams or fantasies of pregnancy and motherhood, or the expressed wish of experiencing pregnancy and having children of their own appear to be relatively low in all age groups. (Meyer-Bahlburg 1999, 1845\u20131846).\n\nMeyer-Bahlburg posited that \"\\[l\\]ong term follow-up studies of the behavioral outcome will show whether \\[prenatal\\] dexamethasone treatment also prevents the effects of prenatal androgens on brain and behavior\" (1999, 1846).\n\nSurprisingly, results from our Freedom of Information Act (FOIA) requests\u2014made as part of our attempt to understand this history\u2014indicate that the U.S. National Institutes of Health (NIH) have funded New to see whether prenatal dexamethasone \"works\" to make more CAH-affected girls straight and interested in having babies. New's 1996 grant application states that\n\n> genital abnormalities and often multiple corrective surgeries needed affect social interaction, self image, romantic and sexual life, and fertility. As a consequence, many of these patients, and the majority of women with the salt-losing variant \\[of CAH\\], appear to remain childless and single. Preventive prenatal dexamethasone exposure is expected to improve this situation (New 1996a, 38).\n\nNew's NIH grant application specifically promised to try to determine \"the success of DEX in suppressing behavioral masculinization\" (New 1996b, 17).\n\n# A Long History of Ineffective Calls for Ethical, Scientifically Rigorous Studies\n\nThe 2010 systematic review and meta-analysis of prenatal dexamethasone for CAH (mentioned in our introduction) was commissioned by an Endocrine Society Task Force charged with developing new consensus guidelines for the treatment of CAH. 
That Task Force was in turn co-sponsored by the American Academy of Pediatrics, the Lawson Wilkins Pediatric Endocrine Society, the European Society for Paediatric Endocrinology, the Society of Pediatric Urology, the European Society of Endocrinology, the CARES Foundation, and the Androgen Excess and PCOS Society. Based on the systematic review and meta-analysis, in its 2010 practice guidelines the Task Force concluded\n\n> that prenatal therapy *continue* to be regarded as experimental. \u2026 We suggest that prenatal therapy be pursued through protocols approved by Institutional Review Boards \\[i.e., ethics committees\\] at centers capable of collecting outcomes data on a sufficiently large number of patients so that risks and benefits of this treatment can be defined more precisely (Speiser et al. 2010a, 4137, *emphasis added*).\n\nThe named authors of this 2010 consensus (including Meyer-Bahlburg) appeared to have recognized that their call for use only within scientifically-meaningful clinical trials pre-approved by ethics committees was not novel. In fact, expert panels have consistently judged prenatal dexamethasone for CAH to be risky and experimental. In 2002, a joint statement from the Lawson Wilkins Pediatric Endocrine Society and the European Society for Paediatric Endocrinology had already said something similar\u2014even stronger:\n\n> We believe that this specialized and demanding therapy should be undertaken by designated teams using nationally or multinationally approved protocols, subject to institutional review boards \\[IRBs\\] or ethics committees in recognized centers. Written informed consent must be obtained. \u2026 Families and clinicians should be obliged to undertake *prospective* follow-up of prenatally treated children whether they have CAH or not. The data should be entered into a central database *audited by an independent safety committee*. (Joint LWPES\/ESPE CAH Working Group 2002, 4049, *emphasis added*).\n\nAnd a year earlier, in a tense exchange of letters with New in the journal *Pediatrics*, representatives of the Section on Endocrinology and Committee on Genetics of the American Academy of Pediatrics had admonished New that\n\n> it remains the physician's ethical obligation to remain very cautious, even if using lower doses. The fact that only 1 in 8 treated pregnancies may benefit from the therapy confounds this equation even further. There is much information on the effect of glucocorticoids on the brain. Data continue to accumulate that indicate that high-dose glucocorticoid therapy is harmful for the developing (prenatal and postnatal) brain. Further, Dr. New herself coauthored a paper reporting increased frequency of white matter abnormalities and temporal lobe atrophy on magnetic resonance imaging in patients with CAH. Although cause and effect remain to be established, one must consider that glucocorticoid therapy may have played a role in causing these abnormalities (Frias et al. 2001, 805).\n\nThe AAP Committee concluded, in their response to New:\n\n> The maxim of \"first do no harm\" requires a cautious, long-term approach, which is why the Academy Committee unanimously agrees that prenatal glucocorticoid therapy for CAH should be confined to centers doing *controlled prospective, long-term studies*. The memory of the tragedies associated with prenatal use of dexamethasone and thalidomide demands no less (Frias et al. 
2001, 805, *emphasis added*).

A few months later, *Pediatrics* issued an erratum, correcting what may have been a Freudian slip; the AAP Committee had meant to say "the memory of the tragedies associated with the prenatal use of *diethylstilbestrol (DES)* and thalidomide demands no less" (*Pediatrics* 2001, 1450, *emphasis added*).

Over the years, many individual researchers and clinicians expressed similar concerns about the use of prenatal dexamethasone for CAH outside of appropriate studies or, in some cases, about its use at all (Seckl and Miller 1997; Miller 1999; Ritzen 2001; Hughes 2003; Lajic et al. 2004; Hirvikoski et al. 2007). Even Forest, the French pioneer of the intervention, wrote in 2004, "the prenatal treatment of CAH remains an experimental therapy and, hence, must only be done with fully informed consent in *controlled prospective* trials approved by human experimentation committees at centre's that see enough of these patients to collect meaningful data" (2004, 479, *emphasis added*). By 2008, Walter Miller, distinguished professor of pediatrics and chief of endocrinology at the University of California–San Francisco, declared, "It is this author's opinion that this experimental treatment is not warranted and should not be pursued, even in prospective clinical trials" (Miller 2008, 17). Miller would later add: "It seems to me that the main point of prenatal therapy is to allay parental anxiety. In that construct, one must question the ethics of using the fetus as a reagent to treat the parent, especially when the risks are non-trivial" (in Dreger 2010, ¶20).

In spite of all these challenges and warnings, New and her collaborators in the United States appear never to have entered into a prospective, long-term, continuous study of the use of prenatal dexamethasone for CAH. New and her group do appear to have occasionally done some trials of prenatal dexamethasone during pregnancy, tracking outcomes up to the birth; among the 325 pregnancies included in studies seriously considered by the 2010 meta-analysis, 281 came from New's group (Fernández-Balsells et al. 2010). Even those, however, were not studied in a manner adequate to establish efficacy or safety according to the standards of evidence-based medicine. The trials lacked adequate controls and methods to mitigate bias.

Moreover, our recent FOIA findings on New's pregnancy trials of dexamethasone raise multiple ethical red flags. For example, in her 1985 application to the Cornell IRB to study prenatal dexamethasone for CAH, New did not check the boxes indicating that her subject population included pregnant women and fetuses (New 1985, 2), even though, deeper in the application, she indicated that the study design called for administration of dexamethasone starting at two to four weeks of gestation (New 1985, 9e). The accompanying consent form describes this use as experimental, but then goes on to minimize the risks:

> I understand that my participation in the project involves the following risks: transient and reversible suppression of the maternal and fetal adrenal gland. For this reason, extra doses of steroids will be administered to the mother at the time of delivery to cover for the additional stress. Although complications of glucocorticoid therapy (cleft palate, growth retardation, placental degeneration and fetal death) have been reported in laboratory animals, the doses used were extremely high.
Congenital malformations associated with dexamethasone therapy are rare in humans, even when large doses are given. The pregnant women and fetuses treated to date with this regimen have not experienced complications (New 1985, 9e).

A pregnant woman reading this might reasonably have read "transient and reversible" to mean that the risks included no long-term harms to her or her offspring.

In another New York Presbyterian Hospital\/Cornell Weill Medical College consent form for the prenatal administration of dexamethasone for CAH, marked "IRB Approved" and dated April 2004, New required pregnant women to sign a form saying they understood the use was "experimental." But the form also advised the women: "Over the last decade, treatment with dexamethasone in the prescribed doses has been shown to be effective in reducing the masculinization of the female fetus with CAH and has been shown to be safe for the fetus" (New 2004).

Thus, it remains unclear—but doubtful—whether U.S. federal regulation 45 CFR 46 subpart B, which regulates experimentation on pregnant women, was consistently followed in this use of prenatal dexamethasone. Because the OHRP removed, from its FOIA response to us, sections of its correspondence with Weill Cornell that might have indicated how many women were included in pregnancy trials of prenatal dexamethasone conducted by New, we have been unable to determine how many of the 600-plus pregnant women and fetuses she says she had "treated" at Weill Cornell by 2003 had the benefit of IRB surveillance (such as it was). What we do know is that, given what we have seen, even those engaged in IRB-approved trials at Weill Cornell apparently cannot be said to have given informed consent.

So far as we can ascertain, there has been and remains an absence of IRB oversight for prenatal dexamethasone administration for CAH at Mount Sinai, where New began working in June 2004, in spite of the fact that New has claimed the exposure of fetuses at Mount Sinai as evidence of her research progress there. For example, in her 2006 grant "progress report" to the NIH, New indicated that she and her team had settled in at Mount Sinai and had been continuing the intervention there: "The prenatal diagnosis and treatment program has galloped ahead so that we now have diagnosed and or treated 768 fetuses," including 27 fetuses exposed to dexamethasone from February 2005 to January 2006, i.e., when she was at Mount Sinai (New 2006, 3).

Our FOIA digging also turned up a 2004 letter from Jeffrey Silverstein, chair of the Mount Sinai IRB, assuring the NIH that New had Mount Sinai IRB approval for a project called "Prenatal Diagnosis and Treatment" (Silverstein 2004); the accompanying documents indicate that the project specifically involved the use of prenatal dexamethasone for CAH. (New probably needed this IRB approval letter to continue her NIH funding.) Yet in his 2010 response to the OHRP on behalf of Mount Sinai, addressing our letters of concern, Silverstein indicated that Mount Sinai did not believe IRB oversight of the decision to undertake this fetal intervention was needed, because New was not herself writing the prescriptions (Silverstein et al. 2010).

Mount Sinai's 2010 conclusion that New had not been doing research on those pregnancies appears to be based on the confusing premise that *other* doctors had been doing the prescribing to the pregnant women whose fetuses were then included in the growing numbers of New's subjects.
Perhaps Mount Sinai's respondents to the OHRP in 2010 did not know that New had been reporting statistics on fetal exposure as part of the federally-funded research progress she described to the NIH as "gallop\[ing\] ahead" at Mount Sinai? But surely Silverstein, who lead-authored the Mount Sinai defense in 2010, should have known that he had himself assured the government in 2004 that New was intending prenatal exposure as part of a study supposedly under IRB supervision. If these fetuses were not regarded as research subjects locally in terms of having IRB protections, they appear to have at least been counted as research subjects federally.

With Meyer-Bahlburg, New has also conducted a few follow-up studies, mostly low-quality questionnaire studies in which parents have been asked, long-distance, to reply to a battery of questions. (The 2010 review found these of such low quality that its authors did not even consider them for the meta-analysis, and Meyer-Bahlburg has admitted that "fewer than 50 % of mothers and offspring have responded to questionnaires" \[Speiser et al. 2010b, under "3. Prenatal Treatment of CAH"\].) For these studies, the researchers had IRB approval, but it does not appear that the families enrolled in these studies were made aware that they might have been misled about the status of prenatal dexamethasone when it was administered to the pregnant women. Thus it is questionable whether their participation in the follow-up study can be called fully informed.

So far as we can ascertain, New was not alone in her less-than-rigorous approach to scientific study of prenatal dexamethasone for CAH. Indeed, although the intervention has been offered in many other nations, Sweden appears to be the only nation to have specifically restricted prenatal dexamethasone for CAH to women who agreed to participate in a continuous, prospective, long-term study of the intervention, a restriction instituted there in 1999. (Before that it could be obtained outside of trials.) The Swedish group has been led by Svetlana Lajic, associate professor of molecular medicine and surgery at the Karolinska University Hospital. In a recent exchange with one of us (Feder), Lajic revealed that the Swedish team has actually halted the intervention part of the study due to concerns about adverse effects.

Lajic wrote to Feder that "due to our findings we have addressed the Regional Ethics Committee in Stockholm, in November 2010, and stated that due to possible adverse events we wish to put on hold further recruitment of patients to the on-going prospective study of prenatal DEX treatment of CAH" (Lajic 2011). While our paper was in press, the Swedish team published news of this development, along with follow-up data on "43 children treated in Sweden and Norway during 1985–1995" (Hirvikoski et al. 2012, 1). When compared to controls,

> \[i\]n general, treated children were born at term and were not small for gestational age. As a group, they did not exhibit teratogenous effects\/gross malformations, although eight severe adverse events were noted in the treated group, compared with one in the control group. Three children failed to thrive during the first year of life; in addition, one had developmental delay and hypospadias; one had hydrocephalus; two girls were born small for gestational age, and one of these girls was later diagnosed with mental retardation; and one child had severe mood fluctuations that caused hospital admission.
In the control group, only one child was admitted because of Down's syndrome (Hirvikoski et al. 2012, 2).

The Swedish team has clearly been alarmed by the extraordinary number of serious medical problems among the small group of children treated prenatally.

As for the cognitive and behavioral outcomes, in their most recent report the Swedish team acknowledged that "The small sample size, relatively high refusal rate, and the retrospective study design limited the conclusiveness of the results" of their follow-up of the behavioral development in 40 Swedish children treated prenatally. Nevertheless, they could report that

> \[a\]n adverse effect was observed in the form of impaired verbal working memory in CAH-unaffected short-term-treated cases \[i.e., the children who were not the intended targets of the intervention\]. The verbal working memory capacity correlated with the children's self-perception of difficulties in scholastic ability, another measure showing significantly lower results in CAH-unaffected, DEX-exposed children. These children also reported increased social anxiety. In the studies on gender role behavior, we found indications of more neutral behaviors in DEX-exposed boys (Hirvikoski et al. 2012, 2).

In other words, the boys appeared relatively less masculine—another unintended effect.

The Swedish team argued that "the\[se\] results cause concern because no side effects should be tolerated in CAH-unaffected children who do not benefit from the treatment per se" (Hirvikoski et al. 2012, 2). Indeed, the team concluded its 2012 report with as strongly worded an ethics statement as has ever been issued by a team engaged in the use of prenatal dexamethasone for CAH: "We find it unacceptable that, globally, fetuses at risk for CAH are still treated prenatally with DEX without follow-up" (Hirvikoski et al. 2012, 2).

As we urged United States governmental agencies in 2010, the Swedish team now "urge\[s\] the scientific community to perform additional retrospective studies, preferably on all treated children and young adults" (Hirvikoski et al. 2012, 2). Yet we see no signs that rigorous, independently-audited, retrospective studies will be pursued in the United States, where the great majority of interventions appear to have occurred, largely under the guidance of Maria New. Although we find plenty of suggestions in New's grant materials and presentations that results from her retrospective follow-up studies are forthcoming, and although the NIH repeatedly renewed her prenatal dexamethasone research funding, little appears to have been produced in terms of longer-term data on prenatal dexamethasone for CAH, particularly with regard to long-term safety. We can find no publications resulting from New's 2007 Rare Diseases Clinical Research Network (RDCRN\/NIH) follow-up study protocol, except a paper she co-authored in a 2011 issue of *Advances in Experimental Medicine and Biology*. This paper purports to demonstrate the long-term safety of prenatal dexamethasone for CAH. But the article contains no methods section, no description of the study allegedly being reported, no description of the controls, nor even of the intervention. It consists simply of unexplained tables of data and unsupported narrative (New and Parsa 2011). Needless to say, it is not the kind of report that will ever make it into a meta-analysis.

Although New had told the U.S.
government in 1996 "we propose to continue our studies of prenatal diagnosis and treatment" (New 1996b, 61), we can find no evidence of there ever having been, in the United States, a reasonably-designed, IRB-approved, prospective, long-term study—nothing like the rigorous approach taken in Sweden. In a recent debate with Dreger on this subject, Meyer-Bahlburg confirmed the total absence of prospective continuous studies in the United States and offered no indication that any has ever even been planned (Meyer-Bahlburg 2011). This is very concerning, and is even more worrisome when one considers the design of New's 2007 RDCRN\/NIH retrospective follow-up study (to be carried out with Meyer-Bahlburg). As we discovered from our FOIA requests, that protocol specifically lists this among its "exclusion criteria for all groups": "mental impairment which prevents understanding of questionnaire" (New 2007, 25).

So, while the researchers in Sweden are documenting cognitive impairment among those exposed to prenatal dexamethasone for CAH, researchers in America have specifically designed a follow-up study that *prevents detection* of certain negative effects on cognitive development.

# Promotion of Prenatal Dexamethasone for CAH

New has a clear track record of promoting the use of prenatal dexamethasone for CAH as safe and effective to clinicians who might reasonably have expected her to represent accurately to them what was actually known and unknown. In a co-authored article for the 2000 edition of the *Cecil Textbook of Medicine*, New advised her colleagues: "Prenatal treatment with dexamethasone has been shown to be safe and effective for both mother and child in the largest human studies" (New and Josso 2000, 1302), implying that there were large human studies of sufficiently high quality to establish safety and efficacy. A few years later, New's discussion of prenatal dexamethasone (published with two co-authors) in the 2007 edition of the textbook *Pediatric Endocrinology* reads, "we believe that proper prenatal treatment of fetuses at risk for CAH can be considered effective and safe. Long-term studies on the psychological development of patients treated prenatally are currently underway"—a tacit admission of more interest in outcomes related to gender identity and cognitive development than in metabolic outcomes like those involving cardiovascular function (New, Ghizzoni, and Lin-Su 2007, 237).

New's determined influence on prenatal treatment practices for CAH is also illustrated by the work she has published in *GeneReviews*, the NIH-sponsored free online textbook regarded as an authoritative source in prenatal counseling. Until recently (for reasons we explain below), New's article described prenatal dexamethasone for CAH as if it were standard practice, explaining precisely how clinicians in the field could implement the intervention. No mention was made of its experimental status, of the fact that the use is off-label, or of the absence of evidence for its efficacy and safety (Nimkarn and New 2009; cf. Witchel and Miller 2012).

In her grants with the NIH, New also represented prenatal dexamethasone for CAH as having been shown safe and effective. For example, in her 2003 report, she indicated: "Based on our experience and other large human studies, proper prenatal diagnosis and treatment of 21-OHD is safe for mother and child, and is effective" (New 2003a, 98).
In her NIH grant materials, New used the fact that \"\\[w\\]e are the only group in the U.S.A. routinely carrying out prenatal diagnosis and treatment of CAH\" as a major reason why the government should fund her studies on the \"large population of prenatally-treated infants\" she had \"accumulated\" (New 1996b, 2). Already by her 1996 report to the NIH, New was claiming her clinic \"receive\\[d\\] requests \\[for the intervention\\] from all over the U.S. and foreign countries at the rate of 3\u20134 per week\" (New 1996b, 61). A decade later, New's protocol for a follow-up study again indicated that her clinic had been drawing patients from all over the United States (New 2007, 25\u201326); recall that by 2006 she reported use of the intervention in 768 pregnancies (New 2006, 3).\n\nThe large draw of potential subjects for her grants perhaps occurred because New actively promoted prenatal dexamethasone for CAH as safe and effective not only to clinicians but also directly to parents, including through the CARES Foundation, a non-profit organization dedicated to supporting individuals and families with CAH and to advancing research and improved clinical practices (e.g., newborn screening). New's 2001 lecture to parents at CARES, quoted above, represents one example. Another appears in the CARES Foundation Winter 2003 newsletter, for which Elizabeth Kitzinger of Weill Cornell Medical College provided a short article effectively advertising New's clinic as *the* place to go for at-risk mothers:\n\n> In the United States, the only center routinely offering prenatal diagnosis and treatment is Dr. Maria New's clinic at New York Presbyterian Hospital-Weill Medical Center (Cornell) in New York City. Dr. New has treated over 600 pregnant women at risk for the birth of a CAH-affected child. \u2026 The results are remarkable. Dr. New maintains contact with all children treated prenatally, and has found no adverse developmental consequences. Thus, with nearly 20\u00a0years' experience, the treatment appears to be safe for mother and child, though there are endocrinologists who are wary of using dexamethasone prenatally even now (Kitzinger 2003, \u00b62\u2013\u00b63).\n\nSome parents reading this might hear in that last line a warning that not all clinicians were convinced this use was safe and effective. But others might reasonably read it as an indication that Dr. New would be the only clinician willing to help them. The text continues:\n\n> It is important to note that prenatal diagnosis and treatment should ONLY be done in a clinic like Dr. New's with long experience and commitment to follow-up. Only by tracking the growth of prenatally treated children can the long-term effects of treatment be exhaustively studied. Administering dexamethasone to achieve normal genitalia requires the judgment and experience of specialists. The benefits to families of classically affected girls cannot be underestimated. We hope that the availability of this treatment will be shared with all families at risk for the birth of CAH-affected children. (Kitzinger 2003, \u00b64, capitalization original).\n\nEven since the 2010 meta-analysis, the website of the Maria New Children's Hormone Foundation continues to claim \"the treatment has been found safe for mother and child\" (New 2010a, \u00b64). In 2010, our complaints to the FDA about advertisement of this off-label use as \"safe for mother and child\" were referred to Robert \"Skip\" Nelson, an FDA-based pediatrician and ethicist. 
As mentioned above, our discussions with Nelson clarified that regulations prohibit only two groups from advertising off-label uses as "safe and effective": employees of a drug's maker (which New is not) and FDA-approved investigators of drugs.

According to Nelson, New did seek and obtain an IND (investigational new drug) exemption from the FDA in 1996 for prenatal dexamethasone for CAH; this would mean she sought and obtained the FDA's permission at that time for a specific study of this drug use. But as Nelson explained to us, since New does not *currently* have an approval from the FDA to study prenatal dexamethasone, she does not count as an FDA investigator, so she is not currently prohibited from calling this intervention "safe and effective." In other words, because of a regulatory loophole in the United States, if you inappropriately investigate an off-label use of a drug, then you can also inappropriately advertise it—even to pregnant women and even when the drug use is meant to alter the course of fetal development.

Notably, we think Nelson may be incorrect in his claim that in 1996 the FDA granted New an IND exemption for the use of prenatal dexamethasone to prevent sex atypicality in CAH. Through the FOIA, we obtained the 1996 FDA letter cited by Nelson in 2010, and it actually refers only to a "proposal to utilize dexamethasone to treat pregnant women with a \[sic\] congenital adrenal hyperplasia" (Sobel 1996). It is entirely possible the 1996 letter referred to a study of the use of dexamethasone to treat women with CAH who became pregnant, i.e., an off-label use meant to treat a woman *herself* afflicted with CAH to keep *her* healthy during her pregnancy, an entirely different medical matter from an explicit attempt at fetal engineering wherein the mother is merely a carrier of CAH. There appears to be no evidence to support Nelson's representation of the 1996 IND exemption letter.

# A Worrisome Track Record

We feel optimistic that our efforts to draw attention to this use of prenatal dexamethasone have increased the likelihood that physicians and researchers will respect the rights of the population exposed. Nevertheless, we feel pessimistic about ever knowing what really happened to most of those already exposed—for weeks or months of pregnancy or fetal development. Recall that in 2002 a joint statement by the European and American pediatric endocrinology societies called for data on prenatal dexamethasone outcomes to be entered into "a central database audited by an independent safety committee" (Joint LWPES\/ESPE CAH Working Group 2002, 4049). But—except for a still unpublished European study known as "PREDEX," which enrolled only 24 subjects from 1999 to 2004, including subjects from the Swedish prospective cohort (Lajic et al. 2004)—no such thing has happened. Instead, it appears from the 2007 RDCRN\/NIH study protocol that Maria New retains control of the database of contact information for what she reports is now more than 768 children and their mothers who have been exposed to this prenatal intervention through her clinics (New 2006, 3). It seems likely that that population represents at least a significant minority of those exposed.
In fact, New's 2001 NIH \"application for continuation grant\" provides a table describing human subjects under the \"specific \\[study\\] aim\" called \"prenatal dx and treatment in families at risk,\" and there the total number of subjects is stated as 2,144 (New 2001b, 14).\n\nOur research has caused us substantial concern about what appears to be a pattern of misrepresentation by Maria New, even beyond what we take to be misrepresentations regarding the status of prenatal dexamethasone for CAH. As reported in *The Wall Street Journal* in 2005, one of New's NIH grants (which included work on CAH) formed the subject of a fraud suit brought against Weill Cornell Medical College\u2014a suit over \"phantom studies\" that resulted in a settlement whereby Cornell paid the government \\$4.4 million (Wysocki 2005). While the settlement required no admission of wrongdoing, it seems significant that, at the time of the fraud case, New's status at Cornell radically changed; substantial portions of her Cornell titles and salary appear to have been administratively withdrawn.[^2]\n\nAn oblique exchange in which we participated in 2010 raises additional concerns about New's track record for trustworthiness. In a project aimed at making known the experimental status of prenatal dexamethasone for CAH, Dreger wrote with Taylor Sale (Dreger's student and a genetic counselor) to the editors of *GeneReviews* to request that Dr. New's article in that publication be changed to reflect medical consensuses concerning prescription of prenatal dexamethasone for CAH (Dreger and Sale 2010). The editors reviewed the consensus statements Dreger and Sale provided them and then wrote to New with recommended changes. Some time later, one of the editors wrote to Dreger and Sale to report that: \"Dr. New's initial reply to our proposed edits to the 21-hydroxylase deficient \\[sic\\] CAH *GeneReview* is that 'Prenatal dexamethasone treatment has been FDA approved by Dr. Sobel.'\" The editor added, \"We are interested in your thoughts on \\[Dr. New's\\] comment\" (Dolan 2010). We informed the editors that Solomon Sobel was the FDA physician who had signed off in 1996 on a single IND exemption (Sobel 1996). As we surely did not need to explain to the *GeneReviews* editors, this did not qualify prenatal dexamethasone for CAH as \"FDA approved.\"\n\nToday, New's co-authored *GeneReviews* article on 21-OHD CAH says this: \"Prenatal treatment should continue to be considered experimental and should only be used within the context of a formal IRB-approved clinical trial\" (Nimkarn and New 2010b, \u00b61 under \"Therapies Under Investigation\"). Since the 2010 revision, the article no longer provides detailed instructions on how to conduct the intervention. And yet, the change reflected in the *GeneReviews* article does not appear to mark a major change in approach for New. 
Just after the revision of the *GeneReviews* piece, New published a short article in *The American Journal of Bioethics* entitled, "Vindication of Prenatal Diagnosis and Treatment of Congenital Adrenal Hyperplasia with Low-Dose Dexamethasone." There she wrote: "The recent reports by the Office of Human Research Protections and the FDA … make crystal clear that my research on prenatal treatment of CAH is and always has been both legally and ethically proper at every level" (New 2010c, 68).

In spite of our petitions, the Office of Human Research Protections (OHRP) appears to be quite a bit less concerned than we are about the veracity of New's claims. For example, the agency appears to have accepted a claim that New had not been conducting fetal experimentation with dexamethasone at Mount Sinai, apparently without learning or considering that her 2006 NIH grant report described "gallop\[ing\] ahead" with the intervention and specified, as part of her "research progress report," at least 27 fetuses exposed to prenatal dexamethasone for CAH since New had relocated her clinic to Mount Sinai (New 2006). As part of its investigation following our letters, the OHRP uncovered the deeply problematic IRB-approved consent forms from Weill Cornell mentioned above, yet the OHRP accepted New's claims that appropriate IRB oversight and informed consent had occurred. The OHRP staff did not indicate in their response to us how many of the pregnant women New claims to have "treated" were even in IRB-approved trials, nor did they address the problematic disjuncture between her own advertisement of prenatal dexamethasone as "safe for mother and child" and her simultaneous federally funded study to investigate whether in fact it is safe for mother and child.

This all seems particularly strange since Tamar-Mattis's research turned up two strongly-worded 2004 determination letters from the OHRP finding major faults with New's IRB-approved studies at Weill Cornell (Tamar-Mattis 2010). (OHRP determination letters are the final result of investigations that reveal problems in the conduct of research.) New was not the only researcher at Cornell named in the findings; in fact, the OHRP found the entire system for reviewing pediatric research at Cornell so problematic that it placed a restriction on Cornell's Federalwide Assurance and took the extraordinary step of requiring review by the IRB of all active research involving children (McNeilly 2004). The problems the OHRP found with protection for subjects in New's studies were particularly alarming: an informed consent form that did not include a description of the purpose of research, as required (McNeilly 2004, 5); consent forms that contained extremely complex language that a subject would be unlikely to understand (McNeilly 2004, 6); an informed consent form that did not state that "the tests conducted for the protocol could be obtained outside the research" (McNeilly 2004, 5); "subjects enrolled outside the protocol age range prior to IRB review and approval" (McNeilly 2004, 4); missing IRB records (McNeilly 2004, 3); and research initiated without first obtaining legally informed consent (McNeilly 2004, 3). The OHRP also noted that Weill Cornell had suspended New's research protocol in the fall of 2002 but did not report the suspension to the OHRP until June 2003 (McNeilly 2004, 4).

Why so much scrutiny of New and Weill Cornell by the OHRP in 2004 and so little by the OHRP in 2010?
This seems to be part of a problematic trend forming at the OHRP\u2014again suggesting that prenatal dexamethasone for CAH is not a unique case but a bellwether. A March 2011 analysis from the *Report on Research Compliance* on the OHRP has found a substantial drop-off of OHRP determination letters in the last four\u00a0years: \"Agency observers and others expressed concern about the steep drops in letters and open cases, telling \\[the *Report on Research Compliance*\\] they raise questions about the agency's current commitment to serious oversight of human subjects research and investigations into possible wrongdoing\" (National Council of University Research Administrators 2011, 1). Notably, the \"possibility that compliance is increasing at institutions was not among the reasons OHRP cited for the drop, and funding is not a problem\" (National Council of University Research Administrators 2011, 1). Nor has the number of complaints made to the OHRP fallen.\n\n# Final Thoughts\n\nThe OHRP and FDA have not yet fully responded to our FOIA requests. (Dreger is presently suing to obtain the remainder of the documents.) But even what we already have found indicates that the government could well have made a case for inappropriate behaviors here and could have issued statements echoing at least some of our concerns about the way that prenatal dexamethasone for CAH has been administered and studied.\n\nDespite the disappointing responses from the OHRP and FDA, we do not regret raising the alarm as we did starting in January 2010. Doing so increased awareness among the affected population about the actual status of prenatal dexamethasone and has also shone light into corners we would not otherwise be able to view. For example, it made available to the public New's retrospective study protocol and IRB-approved consent forms, and it also led to a reporter for *Time* magazine finding women who indicated that they did not know, when given prenatal dexamethasone to attempt prevention of virilization in female fetuses, that it was an off-label and controversial drug use (Elton 2010). Without the investigation, this valuable information would have remained hidden from view.\n\nThe investigation also seems to have driven Mount Sinai to take action, judging from Silverstein's response to the OHRP:\n\n> The Committee \\[at Mount Sinai charged with responding\\] determined that there are widely differing opinions amongst the staff, with some staff members expressing significant concerns regarding the use of dexamethasone for the prenatal treatment of CAH. A particular concern is the current necessity to treat potentially unaffected fetuses until a diagnosis is determined. Therefore, the Committee concluded that the clinical use of dexamethasone in this situation should require a rigorous informed consent process with detailed documentation that the risks and benefits of this treatment have been clearly communicated to the parents making a decision to engage in prenatal treatment. The Committee also recommends that this issue be referred to the Medical Board of The Mount Sinai Hospital for further consideration of the consent issue. (Silverstein et al. 2010, 11).\n\nIt would appear, then, that Mount Sinai now shares our concerns about the practices associated with the administration of dexamethasone. 
Nevertheless, it is still the case that the Maria New Children's Hormone Foundation website declares that prenatal dexamethasone for CAH \"has been found safe for mother and child\" and provides a phone number to call to make an appointment (New 2010a, \u00b64). When the number is called, it rings to a clinic at Mount Sinai.\n\n# Epilogue\n\nAs our paper was in press, Maria New's research group, led by Meyer-Bahlburg, published a study including some cognitive outcome data for 67 children prenatally exposed to dexamethasone for CAH (Meyer-Bahlburg et al. 2012). The children studied came from New's database (Meyer-Bahlburg et al. 2012, line 92) and included eight CAH-affected girls who were \"long-term\" exposed in utero and 59 boys and CAH-unaffected girls who were \"short-term\" exposed (Meyer-Bahlburg et al. 2012, line 23). These children were compared to 73 unexposed controls (Meyer-Bahlburg et al. 2012, line 24).\n\nThe new paper concludes: \"Our studies do not replicate a previously reported adverse effect of short-term prenatal DEX exposure on working memory, while our findings on cognitive function in CAH girls with long-term DEX exposure contribute to concerns about potentially adverse cognitive aftereffects of such exposure\" (Meyer-Bahlburg et al. 2012, lines 34\u201336). The \"previously reported adverse effect of short-term prenatal DEX exposure\" was that reported by the Swedish team (Hirvikoski et al. 2007). But whereas the Swedish team employed a relatively rigorous design (a prospective, controlled, long-term study), in the new study from New's group, the 67 exposed children were selected via convenience sampling performed retrospectively.\n\nFurthermore, although the new Meyer-Bahlburg et al. publication purports to seek information on the effects of dexamethasone exposure (Meyer-Bahlburg et al. 2012, lines 26\u201327), among the eight girls long-term exposed the degree of exposure varies from a total of nine\u00a0weeks of fetal life to a total of 39\u00a0weeks (Meyer-Bahlburg et al. 2012, lines 117\u2013120), an exposure-length difference of more than four times, making it very difficult to establish meaningful dose-effect.\n\nIn marked contrast to the Swedish team, and as if to confirm the weakness of the study by New's group, Meyer-Bahlburg et al. found some \"positive\" cognitive outcomes in the short-term treated children. Understandably, they found this hard to explain. (No one suspects first-trimester glucocorticoid exposure at 60 to100 times normal levels to be good for children's brains.) In their discussion, the authors try to explain the lack of corroboration of their results by other studies or by logic (Meyer-Bahlburg et al. 2012, lines 278\u2013298), but they do not propose the most obvious explanation for this very strange finding: The paper's study population is a highly skewed sample.\n\nWe document here that New has claimed to have \"treated\" somewhere between 600 and 2,144 fetuses with dexamethasone for CAH, yet her group is now reporting cognitive outcomes on only 67 children in her database, a tiny fraction of those who ought to be available for study. Furthermore, based on New's claims to the public and to her granting agencies, her efforts should have produced between about 98 and 268 girls who were long-term dexamethasone-exposed, yet here her group reports on cognitive outcomes in only eight. Thus it would appear that the children included in the new study from Meyer-Bahlburg et al. 
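The percentages just given follow from simple arithmetic on the figures cited above. The following back-of-envelope check is ours, not a calculation that appears in Meyer-Bahlburg et al.; the division by 8 assumes the usual expectation that roughly 1 in 8 at-risk pregnancies yields a CAH-affected girl, which reproduces the upper estimate of 268 long-term-exposed girls:

\[
\frac{67}{2144} \approx 3.1\%, \qquad
\frac{67}{600} \approx 11.2\%, \qquad
\frac{2144}{8} = 268.
\]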
What the new paper from Meyer-Bahlburg et al. actually appears to replicate is our finding: that the approach taken by New and her collaborators to prenatal dexamethasone for CAH has been so scientifically weak as to be both clinically uninformative and profoundly unethical, especially in light of the history of DES.

The authors thank Heather Appelbaum, J.J. Burchman, James A. Bruce, Joel Frader, Susan Gilbert, Janet Green, Philip Gruppuso, Mary Ann Harrell, Shelley Harshe, Hilde Lindemann, Johanna Michael, Thad Morgan, Robert Nelson, Nigel Paneth, Taylor Sale, David Sandberg, Aron Sousa, Valerie Thonger, Kiira Triea, Katie Watson, Eric Vilain, two anonymous reviewers, and the journal's editors for assistance with this work. Funding for this project has been provided to Alice Dreger by the Office of the Provost of Northwestern University.

### Open Access

This article is distributed under the terms of the Creative Commons Attribution License, which permits any use, distribution, and reproduction in any medium, provided the original author(s) and the source are credited.

# References

[^1]: For an analysis of the problems with off-label usages, see Dresser and Frader (2009).

[^2]: On February 12, 2003, New wrote to the NIH asking for money to support herself, under the NIH Merit Award system: "As I am no longer Chairman of Pediatrics, Chief of Pediatric Endocrinology, and Program Director of the CCRC, I have much more time to devote to the research proposed in my Merit Award. … This is a hard time for me and I am deeply appreciative of your consideration" (New 2003b). In a subsequent internal memo, the NIH staff "enthusiastically support[ed]" giving New more money under the Merit Award system, without mentioning *why* "her circumstances changed abruptly this year when she had to relinquish the chairmanship of the Department of Pediatrics and the directorship of the Children's Clinical Research Center at Weill-Cornell Medical School and the salaries entailed in these positions" (NIH Staff 2003). Shortly after, New was hired by Mount Sinai School of Medicine.

abstract: Flaws in the design, conduct, analysis, and reporting of randomised trials can cause the effect of an intervention to be underestimated or overestimated. The Cochrane Collaboration's tool for assessing risk of bias aims to make the process clearer and more accurate.
author: Julian P T Higgins; Douglas G Altman; Peter C Gøtzsche; Peter Jüni; David Moher; Andrew D Oxman; Jelena Savović; Kenneth F Schulz; Laura Weeks; Jonathan A C Sterne
Correspondence to: J P T Higgins
date: 2011
references:
title: The Cochrane Collaboration's tool for assessing risk of bias in randomised trials
Randomised trials, and systematic reviews of such trials, provide the most reliable evidence about the effects of healthcare interventions. Provided that there are enough participants, randomisation should ensure that participants in the intervention and comparison groups are similar with respect to both known and unknown prognostic factors. Differences in outcomes of interest between the different groups can then in principle be ascribed to the causal effect of the intervention.1

Causal inferences from randomised trials can, however, be undermined by flaws in design, conduct, analyses, and reporting, leading to underestimation or overestimation of the true intervention effect (bias).2 However, it is usually impossible to know the extent to which biases have affected the results of a particular trial.

Systematic reviews aim to collate and synthesise all studies that meet prespecified eligibility criteria3 using methods that attempt to minimise bias. To obtain reliable conclusions, review authors must carefully consider the potential limitations of the included studies. The notion of study "quality" is not well defined but relates to the extent to which its design, conduct, analysis, and presentation were appropriate to answer its research question. Many tools for assessing the quality of randomised trials are available, including scales (which score the trials) and checklists (which assess trials without producing a score).4 5 6 7 Until recently, Cochrane reviews used a variety of these tools, mainly checklists.8 In 2005 the Cochrane Collaboration's methods groups embarked on a new strategy for assessing the quality of randomised trials. In this paper we describe the collaboration's new risk of bias assessment tool, and the process by which it was developed and evaluated.

# Development of risk assessment tool

In May 2005, 16 statisticians, epidemiologists, and review authors attended a three day meeting to develop the new tool. Before the meeting, JPTH and DGA compiled an extensive list of potential sources of bias in clinical trials. The items on the list were divided into seven areas: generation of the allocation sequence; concealment of the allocation sequence; blinding; attrition and exclusions; other generic sources of bias; biases specific to the trial design (such as crossover or cluster randomised trials); and biases that might be specific to a clinical specialty. For each of the seven areas, a nominated meeting participant prepared a review of the empirical evidence, a discussion of specific issues and uncertainties, and a proposed set of criteria for assessing protection from bias as adequate, inadequate, or unclear, supported by examples.

During the meeting decisions were made by informal consensus regarding items that were truly potential biases rather than sources of heterogeneity or imprecision. Potential biases were then divided into domains, and strategies for their assessment were agreed, again by informal consensus, leading to the creation of a new tool for assessing potential for bias. Meeting participants also discussed how to summarise assessments across domains, how to illustrate assessments, and how to incorporate assessments into analyses and conclusions. Minutes of the meeting were transcribed from an audio recording in conjunction with written notes.

After the meeting, pairs of authors developed detailed criteria for each included item in the tool and guidance for assessing the potential for bias. Documents were shared and feedback requested from the whole working group (including six members who could not attend the meeting). Several email iterations took place, which also incorporated feedback from presentations of the proposed guidance at various meetings and workshops within the Cochrane Collaboration and from pilot work by selected review teams in collaboration with members of the working group. The materials were integrated by the co-leads into comprehensive guidance on the new risk of bias tool. This was published in February 2008 and adopted as the recommended method throughout the Cochrane Collaboration.9

## Evaluation phase

A three stage project to evaluate the tool was initiated in early 2009. A series of focus groups was held in which review authors who had used the tool were asked to reflect on their experiences. Findings from the focus groups were then fed into the design of questionnaires for use in three online surveys of review authors who had used the tool, review authors who had not used the tool (to explore why not), and editorial teams within the collaboration. We held a meeting to discuss the findings from the focus groups and surveys and to consider revisions to the first version of the risk of bias tool. This was attended by six participants from the 2005 meeting and 17 others, including statisticians, epidemiologists, coordinating editors and other staff of Cochrane review groups, and the editor in chief of the *Cochrane Library*.

# The risk of bias tool

At the 2005 workshop the participants agreed the seven principles on which the new risk of bias assessment tool was based (box).

## Principles for assessing risk of bias

### 1. Do not use quality scales

Quality scales and resulting scores are not an appropriate way to appraise clinical trials. They tend to combine assessments of aspects of the quality of reporting with aspects of trial conduct, and to assign weights to different items in ways that are difficult to justify. Both theoretical considerations10 and empirical evidence11 suggest that associations of different scales with intervention effect estimates are inconsistent and unpredictable

### 2. Focus on internal validity

The internal validity of a study is the extent to which it is free from bias. It is important to separate assessment of internal validity from that of external validity (generalisability or applicability) and precision (the extent to which study results are free from random error). Applicability depends on the purpose for which the study is to be used and is less relevant without internal validity. Precision depends on the number of participants and events in a study. A small trial with low risk of bias may provide very imprecise results, with a wide confidence interval. Conversely, the results of a large trial may be precise (narrow confidence interval) but have a high risk of bias if internal validity is poor

### 3. Assess the risk of bias in trial results, not the quality of reporting or methodological problems that are not directly related to risk of bias

The quality of reporting, such as whether details were described or not, affects the ability of systematic review authors and users of medical research to assess the risk of bias but is not directly related to the risk of bias. Similarly, some aspects of trial conduct, such as obtaining ethical approval or calculating sample size, are not directly related to the risk of bias. Conversely, results of a trial that used the best possible methods may still be at risk of bias. For example, blinding may not be feasible in many non-drug trials, and it would not be reasonable to consider the trial as low quality because of the absence of blinding. Nonetheless, many types of outcome may be influenced by participants' knowledge of the intervention received, and so the trial results for such outcomes may be considered to be at risk of bias because of the absence of blinding, despite this being impossible to achieve

### 4. Assessments of risk of bias require judgment

Assessment of whether a particular aspect of trial conduct renders its results at risk of bias requires both knowledge of the trial methods and a judgment about whether those methods are likely to have led to a risk of bias. We decided that the basis for bias assessments should be made explicit, by recording the aspects of the trial methods on which the judgment was based and then the judgment itself

### 5. Choose domains to be assessed based on a combination of theoretical and empirical considerations

Empirical studies show that particular aspects of trial conduct are associated with bias.2 12 However, these studies did not include all potential sources of bias. For example, available evidence does not distinguish between different aspects of blinding (of participants, health professionals, and outcome assessment) and is very limited with regard to how authors dealt with incomplete outcome data. There may also be topic specific and design specific issues that are relevant only to some trials and reviews. For example, in a review containing crossover trials it might be appropriate to assess whether results were at risk of bias because there was an insufficient "washout" period between the two treatment periods

### 6. Focus on risk of bias in the data as represented in the review rather than as originally reported

Some papers may report trial results that are considered as at high risk of bias, for which it may be possible to derive a result at low risk of bias. For example, a paper that inappropriately excluded certain patients from analyses might report the intervention groups and outcomes for these patients, so that the omitted participants can be reinstated

### 7. Report outcome specific evaluations of risk of bias

Some aspects of trial conduct (for example, whether the randomised allocation was concealed at the time the participant was recruited) apply to the trial as a whole. For other aspects, however, the risk of bias is inherently specific to different outcomes within the trial. For example, all cause mortality might be ascertained through linkages to death registries (low risk of bias), while recurrence of cancer might have been assessed by a doctor with knowledge of the allocated intervention (high risk of bias)

The risk of bias tool covers six domains of bias: selection bias, performance bias, detection bias, attrition bias, reporting bias, and other bias. Within each domain, assessments are made for one or more items, which may cover different aspects of the domain, or different outcomes. Table 1 shows the recommended list of items. These are discussed in more detail in the appendix on bmj.com.

Table 1: Cochrane Collaboration's tool for assessing risk of bias (adapted from Higgins and Altman^13^)

| Bias domain | Source of bias | Support for judgment | Review authors' judgment (assess as low, unclear or high risk of bias) |
|----|----|----|----|
| Selection bias | Random sequence generation | Describe the method used to generate the allocation sequence in sufficient detail to allow an assessment of whether it should produce comparable groups | Selection bias (biased allocation to interventions) due to inadequate generation of a randomised sequence |
| | Allocation concealment | Describe the method used to conceal the allocation sequence in sufficient detail to determine whether intervention allocations could have been foreseen before or during enrolment | Selection bias (biased allocation to interventions) due to inadequate concealment of allocations before assignment |
| Performance bias | Blinding of participants and personnel\* | Describe all measures used, if any, to blind trial participants and researchers from knowledge of which intervention a participant received. Provide any information relating to whether the intended blinding was effective | Performance bias due to knowledge of the allocated interventions by participants and personnel during the study |
| Detection bias | Blinding of outcome assessment\* | Describe all measures used, if any, to blind outcome assessment from knowledge of which intervention a participant received. Provide any information relating to whether the intended blinding was effective | Detection bias due to knowledge of the allocated interventions by outcome assessors |
| Attrition bias | Incomplete outcome data\* | Describe the completeness of outcome data for each main outcome, including attrition and exclusions from the analysis. State whether attrition and exclusions were reported, the numbers in each intervention group (compared with total randomised participants), reasons for attrition or exclusions where reported, and any reinclusions in analyses for the review | Attrition bias due to amount, nature, or handling of incomplete outcome data |
| Reporting bias | Selective reporting | State how selective outcome reporting was examined and what was found | Reporting bias due to selective outcome reporting |
| Other bias | Anything else, ideally prespecified | State any important concerns about bias not covered in the other domains in the tool | Bias due to problems not covered elsewhere |

\*Assessments should be made for each main outcome or class of outcomes.

For each item in the tool, the assessment of risk of bias is in two parts. The support for judgment provides a succinct free text description or summary of the relevant trial characteristic on which judgments of risk of bias are based and aims to ensure transparency in how judgments are reached. For example, the item about concealment of the randomised allocation sequence would provide details of what measures were in place, if any, to conceal the sequence. Information for these descriptions will often come from a single published trial report but may be obtained from a mixture of trial reports, protocols, published comments on the trial, and contacts with the investigators. The support for the judgment should provide a summary of known facts, including verbatim quotes where possible. The source of this information should be stated, and when there is no information on which to base a judgment, this should be stated.

The second part of the tool involves assigning a judgment of high, low, or unclear risk of material bias for each item. We define material bias as bias of sufficient magnitude to have a notable effect on the results or conclusions of the trial, recognising the subjectivity of any such judgment. Detailed criteria for making judgments about the risk of bias from each of the items in the tool are available in the *Cochrane Handbook*.13 If insufficient detail is reported of what happened in the trial, the judgment will usually be unclear risk of bias. A judgment of unclear risk should also be made if what happened in the trial is known but the associated risk of bias is unknown—for example, if participants take additional drugs of unknown effectiveness as a result of being aware of their intervention assignment. We recommend that judgments be made independently by at least two people, with any discrepancies resolved by discussion in the first instance.

Some of the items in the tool, such as methods for randomisation, require only a single assessment for each trial included in the review. For other items, such as blinding and incomplete outcome data, two or more assessments may be used because they generally need to be made separately for different outcomes (or for the same outcome at different time points). However, we recommend that review authors limit the number of assessments used by grouping outcomes—for example, as subjective or objective for the purposes of assessing blinding of outcome assessment, or as "patient reported at 6 months" and "patient reported at 12 months" for assessing risk of bias due to incomplete outcome data.
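To make the two-part structure concrete, here is a minimal Python sketch of how a single assessment might be recorded. It is our own illustration with hypothetical names, not part of any Cochrane software such as RevMan:

```python
from dataclasses import dataclass
from typing import Literal, Optional

# The three categorical judgments defined in the text.
Judgment = Literal["low", "high", "unclear"]

@dataclass
class BiasAssessment:
    domain: str                    # e.g. "Selection bias"
    item: str                      # e.g. "Allocation concealment"
    support: str                   # free-text summary or verbatim quotes backing the judgment
    judgment: Judgment             # judged risk of material bias
    outcome: Optional[str] = None  # set for outcome-specific items (blinding, attrition)

example = BiasAssessment(
    domain="Selection bias",
    item="Allocation concealment",
    support='Quote (hypothetical): "central telephone randomisation".',
    judgment="low",
)
print(example.item, "->", example.judgment)
```

Pairing the free-text `support` field with the categorical `judgment` mirrors the tool's requirement that every judgment remain auditable against the trial characteristics it was based on.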
# Evaluation of initial implementation

The first (2008) version of the tool was slightly different from the one we present here. The 2008 version did not categorise biases by the six domains (selection bias, performance bias, etc); had a single assessment for blinding; and expressed risk of bias in the format "yes," "no," or "unclear" (referring to lack of a risk) rather than as low, high, or unclear risk. The 2010 evaluation of the initial version found wide acceptance of the need for the risk of bias tool, with a consensus that it represents an improvement over methods previously recommended by the Collaboration or widely used in systematic reviews.

Participants in the focus groups noted that the tool took longer to complete than previous methods. Of 187 authors surveyed, 88% took longer than 10 minutes to complete the new tool, 44% longer than 20 minutes, and 7% longer than an hour, but 83% considered the time taken acceptable. There was consensus that classifying items in the tool according to categories of bias (selection bias, performance bias, etc) would help users, so we introduced these. There was also consensus that assessment of blinding should be separated into blinding of participants and health professionals (performance bias) and blinding of outcome assessment (detection bias), and that the phrasing of the judgments about risk should be changed to low, high, and unclear risk. The domains reported to be the most difficult to assess were risk of bias due to incomplete outcome data and selective reporting of outcomes. There was agreement that improved training materials and availability of worked examples would increase the quality and reliability of bias assessments.

# Presentation of assessments

Results of an assessment of risk of bias can be presented in a table, in which judgments for each item in each trial are presented alongside their descriptive justification. Table 2 presents an example of a risk of bias table for one trial included in a Cochrane review of therapeutic monitoring of antiretrovirals for people with HIV.14 Risks of bias due to blinding and incomplete outcome data were assessed across all outcomes within each included study, rather than separately for different outcomes as will be more appropriate in some situations.

Table 2: Example of risk of bias table from a Cochrane review^14^

| Bias | Authors' judgment | Support for judgment |
|----|----|----|
| Random sequence generation (selection bias) | Low risk | Quote: "Randomization was one to one with a block of size 6. The list of randomization was obtained using the SAS procedure plan at the data statistical analysis centre" |
| Allocation concealment (selection bias) | Unclear risk | The randomisation list was created at the statistical data centre, but further description of allocation is not included |
| Blinding of participants and researchers (performance bias) | High risk | Open label |
| Blinding of outcome assessment (detection bias) | High risk | Open label |
| Incomplete outcome data (attrition bias) | Low risk | Losses to follow-up were disclosed and the analyses were conducted using, firstly, a modified intention to treat analysis in which missing=failures and, secondly, on an observed basis. Although the authors describe an intention to treat analysis, the 139 participants initially randomised were not all included; five were excluded (four withdrew and one had lung cancer diagnosed). This is a reasonable attrition and not expected to affect results. Adequate sample size of 60 per group was achieved |
| Selective reporting (reporting bias) | Low risk | All prespecified outcomes were reported |
| Other bias | Unclear risk | No description of the uptake of the therapeutic drug monitoring recommendations by physicians, which could result in performance bias |

Presenting risk of bias tables for every study in a review can be cumbersome, and we suggest that illustrations are used to summarise the judgments in the main systematic review document. The figure provides an example; here the judgments apply to all meta-analyses in the review. An alternative would be to illustrate the risk of bias for a particular meta-analysis (or for a particular outcome if a statistical synthesis is not undertaken), showing the proportion of information that comes from studies at low, unclear, or high risk of bias for each item in the tool, among studies contributing information to that outcome.
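The tallies behind such an illustration are straightforward to compute from the per-study judgments. Below is a minimal sketch (our own, with hypothetical data and function name); for simplicity it counts studies rather than weighting them by the amount of information they contribute, which a real summary figure might do:

```python
from collections import Counter

def judgment_proportions(studies):
    """Per item, the proportion of studies judged at low/unclear/high risk."""
    items = {item for study in studies for item in study}
    result = {}
    for item in items:
        counts = Counter(study[item] for study in studies if item in study)
        total = sum(counts.values())
        result[item] = {j: counts.get(j, 0) / total for j in ("low", "unclear", "high")}
    return result

# Hypothetical judgments for three trials
studies = [
    {"Random sequence generation": "low", "Allocation concealment": "unclear"},
    {"Random sequence generation": "low", "Allocation concealment": "high"},
    {"Random sequence generation": "unclear", "Allocation concealment": "low"},
]
for item, proportions in judgment_proportions(studies).items():
    print(item, proportions)
```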
# Summary assessment of risk of bias

To draw conclusions about the overall risk of bias within or across trials it is necessary to summarise assessments across items in the tool for each outcome within each trial. In doing this, review authors must decide which domains are most important in the context of the review, ideally when writing their protocol. For example, for highly subjective outcomes such as pain, blinding of participants is critical. The way that summary judgments of risk of bias are reached should be explicit and should be informed by empirical evidence of bias when it exists, the likely direction of bias, and the likely magnitude of bias. Table 3 provides a suggested framework for making summary assessments of the risk of bias for important outcomes within and across trials.

Table 3: Approach to formulating summary assessments of risk of bias for each important outcome (across domains) within and across trials (adapted from Higgins and Altman^13^)

| Risk of bias | Interpretation | Within a trial | Across trials |
|----|----|----|----|
| Low risk of bias | Bias, if present, is unlikely to alter the results seriously | Low risk of bias for all key domains | Most information is from trials at low risk of bias |
| Unclear risk of bias | A risk of bias that raises some doubt about the results | Low or unclear risk of bias for all key domains | Most information is from trials at low or unclear risk of bias |
| High risk of bias | Bias may alter the results seriously | High risk of bias for one or more key domains | The proportion of information from trials at high risk of bias is sufficient to affect the interpretation of results |
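The within-trial column of Table 3 amounts to a small decision rule, sketched below in Python (our own rendering; which domains count as "key" remains a review-specific judgment made at the protocol stage):

```python
def within_trial_summary(key_domain_judgments):
    """Summarise risk of bias across key domains for one outcome in one trial,
    following the within-trial logic of Table 3."""
    judgments = set(key_domain_judgments.values())
    if "high" in judgments:
        return "high"     # high risk of bias for one or more key domains
    if "unclear" in judgments:
        return "unclear"  # low or unclear risk of bias for all key domains
    return "low"          # low risk of bias for all key domains

print(within_trial_summary({
    "Allocation concealment": "low",
    "Blinding of outcome assessment": "unclear",  # e.g. a subjective outcome
    "Incomplete outcome data": "low",
}))  # -> unclear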
## Assessments of risk of bias and synthesis of results

Summary assessments of the risk of bias for an outcome within each trial should inform the meta-analysis. The two preferable analytical strategies are to restrict the primary meta-analysis to studies at low risk of bias or to present meta-analyses stratified according to risk of bias. The choice between these strategies should be based on the context of the particular review and the balance between the potential for bias and the loss of precision when studies at high or unclear risk of bias are excluded. Meta-regression can be used to compare results from studies at high and low risk of bias, but such comparisons lack power,15 and lack of a significant difference should not be interpreted as implying the absence of bias.

A third strategy is to present a meta-analysis of all studies while providing a summary of the risk of bias across studies. However, this runs the risk that bias is downplayed in the discussion and conclusions of a review, so that decisions continue to be based, at least in part, on flawed evidence. This risk could be reduced by incorporating summary assessments into broader, but explicit, measures of the quality of evidence for each important outcome, for example using the GRADE system.16 This can help to ensure that judgments about the risk of bias, as well as other factors affecting the quality of evidence (such as imprecision, heterogeneity, and publication bias), are considered when interpreting the results of systematic reviews.17 18

# Discussion

Discrepancies between the results of different systematic reviews examining the same question19 20 and between meta-analyses and subsequent large trials21 have shown that the results of meta-analyses can be biased, which may be partly caused by biased results in the trials they include. We believe our risk of bias tool is one of the most comprehensive approaches to assessing the potential for bias in randomised trials included in systematic reviews or meta-analyses. Inclusion of details of trial conduct, on which judgments of risk of bias are based, provides greater transparency than previous approaches, allowing readers to decide whether they agree with the judgments made. There is continuing uncertainty, and great variation in practice, over how to assess potential for bias in specific domains within trials, how to summarise bias assessments across such domains, and how to incorporate bias assessments into meta-analyses.

A recent study found that the tool takes longer to complete than other tools (the investigators took a mean of 8.8 minutes per person for a single predetermined outcome using our tool, compared with 1.5 minutes for a previous rating scale for quality of reporting).22 The reliability of the tool has not been extensively studied, although the same authors observed that larger effect sizes were observed on average in studies rated as at high risk of bias compared with studies at low risk of bias.22

By explicitly incorporating judgments into the tool, we acknowledge that agreement between assessors may not be as high as for some other tools. However, we also explicitly target the risk of bias rather than reported characteristics of the trial. It would be easier to assess whether a drop-out rate exceeds 20% than whether a drop-out rate of 21% introduces an important risk of bias, but there is no guarantee that results from a study with a drop-out rate lower than 20% are at low risk of bias. Preliminary evidence suggests that incomplete outcome data and selective reporting are the most difficult items to assess; kappa measures of agreement of 0.32 (fair) and 0.13 (slight), respectively, have been reported for these.22 It is important that guidance and training materials continue to be developed for all aspects of the tool, but particularly these two.
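The kappa statistics quoted above measure agreement between assessors corrected for chance. For readers unfamiliar with the measure, a self-contained sketch follows; the judgments are made up for illustration and are not data from the cited study:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Unweighted Cohen's kappa for two raters' categorical judgments."""
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    categories = set(rater_a) | set(rater_b)
    expected = sum(freq_a[c] * freq_b[c] for c in categories) / n ** 2
    return (observed - expected) / (1 - expected)

# Two reviewers' made-up judgments on six trials for one item
a = ["low", "high", "unclear", "low", "low", "unclear"]
b = ["low", "unclear", "unclear", "low", "high", "unclear"]
print(round(cohens_kappa(a, b), 2))  # 0.48 on this toy data
```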
We hope that widespread adoption and implementation of the risk of bias tool, both within and outside the Cochrane Collaboration, will facilitate improved appraisal of evidence by healthcare decision makers and patients and ultimately lead to better healthcare. Improved understanding of the ways in which flaws in trial conduct may bias their results should also lead to better trials and more reliable evidence. Risk of bias assessments should continue to evolve, taking into account any new empirical evidence and the practical experience of authors of systematic reviews.

## Summary points

1. Systematic reviews should carefully consider the potential limitations of the studies included

2. The Cochrane Collaboration has developed a new tool for assessing risk of bias in randomised trials

3. The tool separates a judgment about risk of bias from a description of the support for that judgment, for a series of items covering different domains of bias

Cite this as: *BMJ* 2011;343:d5928

date: 2020
title: IN WHO GLOBAL PULSE SURVEY, 90% OF COUNTRIES REPORT DISRUPTIONS TO ESSENTIAL HEALTH SERVICES SINCE COVID-19 PANDEMIC

**31 August 2020 News release** - WHO to roll out learning and monitoring tools to improve service provision during pandemic

The World Health Organization (WHO) today published a first indicative survey on the impact of COVID-19 on health systems, based on reports from 105 countries. Data collected from five regions over the period from March to June 2020 illustrate that almost every country (90%) experienced disruption to its health services, with low- and middle-income countries reporting the greatest difficulties. Most countries reported that many routine and elective services have been suspended, while critical care – such as cancer screening and treatment and HIV therapy – has seen high-risk interruptions in low-income countries.

"The survey shines a light on the cracks in our health systems, but it also serves to inform new strategies to improve healthcare provision during the pandemic and beyond," said Dr Tedros Adhanom Ghebreyesus, WHO Director-General. "COVID-19 should be a lesson to all countries that health is not an 'either-or' equation. We must better prepare for emergencies but also keep investing in health systems that fully respond to people's needs throughout the life course."

Services hit across the board: Based on reports from key informants, countries on average experienced disruptions in 50% of a set of 25 tracer services. The most frequently disrupted areas included routine immunization (outreach services, 70%; facility-based services, 61%), non-communicable disease diagnosis and treatment (69%), family planning and contraception (68%), treatment for mental health disorders (61%), and cancer diagnosis and treatment (55%). Countries also reported disruptions in malaria diagnosis and treatment (46%), tuberculosis case detection and treatment (42%), and antiretroviral treatment (32%). While some areas of health care, such as dental care and rehabilitation, may have been deliberately suspended in line with government protocols, the disruption of many of the other services is expected to have harmful effects on population health in the short, medium, and long term.
Potentially life-saving emergency services were disrupted in almost a quarter of responding countries: 24-hour emergency room services were affected in 22% of countries, urgent blood transfusions were disrupted in 23%, and emergency surgery was affected in 19%.

Disruption due to a mix of supply- and demand-side factors: 76% of countries reported reductions in outpatient care attendance due to lower demand and other factors such as lockdowns and financial difficulties. The most commonly reported factor on the supply side was cancellation of elective services (66%). Other factors reported by countries included staff redeployment to provide COVID-19 relief, unavailability of services due to closings, and interruptions in the supply of medical equipment and health products.

Adapting service delivery strategies: Many countries have started to implement some of the WHO recommended strategies to mitigate service disruptions, such as triaging to identify priorities, shifting to online patient consultations, and changes to prescribing practices and to supply chain and public health information strategies. However, only 14% of countries reported removal of user fees, which WHO recommends to offset potential financial difficulties for patients.

The pulse survey also provides an indication of countries' experiences in adapting strategies to mitigate the impact on service provision. Despite the limitations of such a survey, it highlights the need to improve real-time monitoring of changes in service delivery and utilization as the outbreak is likely to wax and wane over the coming months, and to adapt solutions accordingly.

To that end, WHO will continue to work with countries and to provide supportive tools to address the fallout from COVID-19. Given countries' urgent demand for assistance during the pandemic response, WHO is developing the COVID19: Health Services Learning Hub, a web-based platform that will allow sharing of experiences and learning from innovative country practices that can inform the collective global response. WHO is also devising additional surveys at the sub-national level and in health facilities to gauge the longer-term impact of disruptions and help countries weigh the benefits and risks of pursuing different mitigation strategies.

Available from:

author: Joshua P. Thaler; Stephan J. Guyenet; Mauricio D. Dorfman; Brent E. Wisse; Michael W. Schwartz
Corresponding author: Michael W. Schwartz
date: 2013-08
references:
title: Hypothalamic Inflammation: Marker or Mechanism of Obesity Pathogenesis?

Obesity and its associated metabolic and cardiovascular disorders are among the most challenging health problems confronting developed countries. Not only is obesity remarkably common, affecting more than one-third of U.S. adults, and very challenging to treat, but it is also tightly linked to type 2 diabetes and related metabolic disorders. A major obstacle to effective obesity treatment is that lost weight tends to be regained over time (1). Although the mechanisms underlying recovery of lost weight are incompletely understood, a large literature suggests that body fat stores are subject to homeostatic regulation, and that this process occurs in obese as well as normal-weight individuals. From this perspective, obesity can be viewed as a disorder in which the biologically defended level of body fat mass is increased. Recent findings implicate inflammation in key hypothalamic areas for body weight control in this process. In this review, we present an overview of energy homeostasis—the biological process that underlies the control of body fat mass—and describe evidence that defects in this regulatory system contribute to obesity pathogenesis. We then address molecular characteristics of hypothalamic inflammation and their implications for obesity pathogenesis (detailed more extensively in refs. 2,3), followed by evidence linking high-fat diet (HFD) feeding to neuropathological alteration of key hypothalamic areas controlling energy balance. We conclude by considering how cell-cell interactions may contribute to this deleterious hypothalamic response and the implications of these interactions for obesity pathogenesis.
# The case for energy homeostasis

## Homeostatic response to weight loss.

Following a period of caloric restriction, lost weight is gradually but inexorably recovered in most individuals. This effect involves the capacity of the brain to sense the reduction of body energy stores and activate responses to compensate for this deficit. In simple terms, voluntary weight loss triggers increases of both appetite and energy efficiency, such that both sides of the energy balance equation shift in favor of weight gain (4,5). Perhaps the clearest example of this compensatory response to lost weight comes from the large clinical literature of diet studies in human subjects. Some 20 years ago, Safer (6) reported that the vast majority (>90%) of weight lost through hypocaloric dieting in humans is regained over a 5-year period. Even with the addition of exercise (which slows the rate of weight regain [7]), caloric restriction through lifestyle modification is a Sisyphean task over the long term. As an example, obese subjects enrolled in the lifestyle intervention arm of the landmark Diabetes Prevention Program (DPP) study averaged ∼7% weight loss during the trial but regained virtually all of the lost weight over the full 10 years of postintervention monitoring (8). Lifestyle intervention in the follow-up Look AHEAD trial similarly failed to produce durable weight loss (9).

Robust defense of body energy stores is similarly observed in conditions that promote weight loss by increasing energy expenditure rather than reducing intake, such as housing mice in a cold environment. To survive, a marked increase of energy expenditure is required to support thermogenesis and maintain core body temperature. Increased heat production occurs initially at the expense of modest body fat depletion, but as fat loss proceeds, food intake increases in a commensurate manner to offset further loss of body fat stores (10). Consequently, graded decreases of environmental temperature trigger proportionate increases of energy expenditure that are compensated by increased energy intake so as to maintain body fat stores.

## Adaptive responses to weight gain.

From the foregoing, it is evident that the energy homeostasis system responds robustly to states of negative energy balance, whether secondary to caloric restriction or increased energy expenditure. Does the energy homeostasis system also protect against weight gain? Available data from both animal models and humans support this possibility. A classic demonstration of this phenomenon was provided by the "Experimental Obesity in Man" studies performed at the University of Vermont almost 50 years ago (11). In these experiments, lean human volunteers were challenged to increase their body weight by 20% through overeating for a period of up to 6 months. Although overeating was initially accomplished without difficulty, the subjects' capacity to do so soon began to wane, presumably because counterregulatory mechanisms were engaged that reduce appetite. After reaching the prescribed 20% increase of body weight and being released from the experimental mandate to overeat, the subjects displayed several weeks of profound anorexia that caused the excess pounds to be shed. Only when body weight began to converge on preintervention values did food intake also return to normal levels. Similar observations were made in subsequent human overfeeding studies (12), and across a variety of animal models as well (13).

In the face of a challenge to body fat stores in either direction, therefore, robust homeostatic responses drive the return of body weight to its original, biologically defended level (sometimes referred to as a set-point). If normal-weight individuals defend against experimentally induced changes of body fat mass in either direction, it follows that obesity pathogenesis should involve impairment of the energy homeostasis system. Since overweight individuals appear, by and large, to defend their body fat mass as robustly as normal-weight individuals (14), obesity does not appear to reflect a failure of energy homeostasis so much as the biological defense of an elevated level of body fat mass.

# Obesity and energy homeostasis

An important strength of the homeostasis model for understanding obesity pathogenesis is that it offers a plausible explanation for why conservative approaches to weight loss fail over time. To understand how a slow, progressive increase in the defended level of body fat stores might occur, it is helpful to consider how the energy homeostasis system operates in normal-weight individuals.

## How does energy homeostasis work?

Energy homeostasis involves an adiposity negative feedback system through which circulating signals such as the hormone leptin inform key brain centers about body energy stores (15). In response to a change of body fat mass, corresponding changes of plasma leptin levels, and hence of leptin signaling in these brain areas, elicit changes of both energy intake and energy expenditure that favor the return of body weight to preintervention levels. In response to weight loss due to caloric restriction, this model predicts that falling plasma levels of leptin and other adiposity negative feedback signals (e.g., insulin) elicit brain responses that minimize further weight loss and promote the eventual recovery of lost weight. Leptin also responds robustly to short-term changes of energy balance, prior to major changes in adiposity, and therefore integrates information about both long-term and short-term energy status (16,17).
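To make the feedback logic concrete, the following deliberately crude toy simulation (our own illustration, not a model from the cited literature; all parameters are arbitrary) shows the behavior the model predicts: a leptin-like signal tracks fat mass, intake rises as the signal falls below its reference level, and fat mass therefore returns to the defended level once an imposed diet ends.

```python
def simulate(days=300, diet_end=50):
    """Toy adiposity negative-feedback loop (arbitrary units)."""
    fat = 20.0   # starting (and, by construction, defended) fat mass
    ref = 20.0   # leptin-like signal reference at the defended level
    history = []
    for day in range(days):
        leptin = fat                                     # signal tracks fat stores
        intake = max(0.0, 100.0 - 5.0 * (leptin - ref))  # feedback on appetite
        if day < diet_end:
            intake = min(intake, 80.0)                   # imposed caloric restriction
        expenditure = 100.0
        fat += 0.01 * (intake - expenditure)             # surplus/deficit moves fat mass
        history.append(fat)
    return history

trajectory = simulate()
print(round(trajectory[49], 1))   # ~10.0: fat depleted during the diet
print(round(trajectory[-1], 1))   # ~20.0: defended level restored afterwards
```

The same rule acts symmetrically: with fat mass pushed above the defended level, intake is suppressed until the excess is shed, mirroring the Vermont overfeeding experiments described above.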
Leptin also responds robustly to short-term changes of energy balance, prior to major changes in adiposity, and therefore integrates information about both long-term and short-term energy status (16,17).\n\nA growing number of leptin-responsive neuronal subsets have been identified, and their roles in adaptive changes of energy balance are increasingly well understood. Perhaps best studied of these are two leptin-sensitive neuronal subsets situated in the hypothalamic arcuate nucleus (ARC). Activation of neuropeptide Y and agouti-related peptide (NPY\/AgRP) neurons potently increases food intake and reduces energy expenditure through the paracrine action of the neuropeptides NPY and AgRP themselves and through inhibition of downstream neuronal activity via GABAergic signaling (15,18,19). Conversely, reduced intake and increased expenditure occurs with activation of proopiomelanocortin (POMC) neurons largely through the action of the anorexigenic neuropeptide \u03b1-melanocyte\u2013stimulating hormone (15) (although direct neurotransmitter action likely contributes as well \\[20\\]). Weight loss reduces leptin levels causing activation of NPY\/AgRP neurons and inhibition of POMC neurons (15). This combination potently favors positive energy balance and is sustained until lost weight has been regained.\n\nA large literature now indicates that proper functioning of these neurocircuits is required for normal energy homeostasis in rodent models, and many models of obesity caused by impairment of the leptin-melanocortin pathway now exist. A similar role was recently invoked for a distinct subset of leptin-sensitive ARC neurons containing neuronal nitric oxide synthetase. Deletion of leptin receptors from these neurons causes hyperphagia and obesity even more severe than that induced by defective melanocortin signaling (21). Neurons in other hypothalamic areas, including the ventromedial, lateral hypothalamic, and paraventricular nuclei, and in extrahypothalamic areas, including hindbrain (e.g., nucleus of the solitary tract), midbrain (ventral tegmental area), and forebrain (nucleus accumbens), are also critical for normal energy homeostasis.\n\nThat the aforementioned hypothalamic neurocircuits are essential for normal energy homeostasis in humans is supported by insights gained from rare monogenic forms of human obesity. Loss-of-function mutations affecting the leptin-melanocortin pathway, including genes encoding leptin, melanocortin-4 receptor (Mc4r), leptin receptor, and the prohormone convertase enzyme that processes POMC, together constitute the most prevalent, highly penetrant genetic causes of human obesity (22). However, \u223c95% of human obesity is due not to a single, highly penetrant mutation but rather to complex interactions between genetic and environmental factors that favor both weight gain and the defense of elevated body weight. Impaired leptin responsiveness is a feature of these more common forms of obesity as well and hence may contribute to obesity pathogenesis in susceptible individuals.\n\n## Hypothalamic inflammation and leptin resistance.\n\nLeptin resistance can be defined simply as an acquired or inherited defect in the response to leptin. This simplistic definition is problematic, however, since such leptin resistance can potentially arise from a long list of defects\u2014not only those affecting leptin receptor signal transduction, but at innumerable points downstream of or in parallel to leptin-mediated effects (23). 
For example, obesity due to Mc4r mutation, the most common monogenic form of human obesity, is characterized by leptin resistance, since leptin action on energy balance requires that it activate the melanocortin pathway. From this perspective, the term leptin resistance is easily misused, since a blunted feeding response to leptin is present in virtually all forms of obesity, save for that caused by leptin deficiency (23). This conundrum has to some extent confounded efforts to delineate underlying mechanisms.\n\nDespite this caveat, growing evidence suggests a causal role for leptin resistance in obesity pathogenesis and that obesity-associated hypothalamic inflammation can underlie this resistance. This hypothesis originated following clear evidence that inflammatory signaling contributes to obesity-associated insulin resistance in peripheral tissues such as liver, muscle, and adipose tissue (24). Among neuronal consequences of proinflammatory signaling is a disruption of intracellular signal transduction downstream of both insulin and leptin receptors via the insulin receptor substrate\u2013phosphatidylinositol 3-kinase pathway (2). Leptin signaling via the janus kinase\u2013signal transducer and activator of transcription pathway can also be impaired by cellular inflammation (25,26), suggesting that this mechanism may contribute not only to obesity-associated leptin and insulin resistance, but also to the associated increase in the defended level of body fat stores. However, dissociating cause and effect has been challenging, and evidence of leptin resistance caused by inflammation independently of obesity is lacking (though studies of aging, injury, and endotoxin-induced leptin resistance are suggestive of this possibility \\[27\u201329\\]). We favor the hypothesis that a vicious cycle exists involving obesity, leptin resistance, and inflammation. According to this model, diet-induced increases of inflammation beget a state of leptin resistance that promotes weight gain, which in turn triggers further inflammation and leptin resistance, eventually resulting in the biological defense of an elevated level of body fat. This perspective highlights the challenges inherent in determining the extent to which leptin resistance and inflammation are causes or consequences of weight gain, a challenge that cannot be met without methodologies that can distinguish cellular responses to diet from metabolic alterations induced by obesity itself.\n\n## Does hypothalamic inflammation cause obesity?\n\nThe first report of obesity-associated hypothalamic inflammation was in a rat model of diet-induced obesity (DIO) published in 2005 (30), and many investigators have since replicated this finding (2,3,31\u201336). The observation that genetic interventions that disrupt neuronal inflammation can block both obesity and hypothalamic leptin resistance during HFD feeding implicates this inflammation in obesity pathogenesis. For this to be the case, however, hypothalamic inflammation would have to occur prior to obesity onset, unlike the later onset of inflammation in peripheral tissues. This hypothesis is supported by our recent observation that in rats predisposed to DIO, expression of proinflammatory biomarkers increases in the mediobasal hypothalamus within 24 h of HFD onset (37). The mediobasal hypothalamus level of these markers increases more by day 3 and is followed by a transient normalization of inflammation, only to rise again in 2\u20133 weeks and remain elevated thereafter. 
Interestingly, this initial rise and fall of hypothalamic inflammatory markers follows a temporal pattern similar to that of caloric intake following the switch to an HFD.

Several molecules and pathways have been identified as candidate mediators of hypothalamic inflammation during HFD feeding. Among these are the serine kinases c-Jun N-terminal kinase (Jnk) and inhibitor of nuclear factor-κB kinase-β (IKKβ) as well as Toll-like receptor 4 (TLR4), the ceramide biosynthesis pathway, and the endoplasmic reticulum (ER) stress pathway (30–36). Each of these is upregulated in the rodent hypothalamus during DIO (30,36,38), and Jnk and IKKβ can inhibit insulin and leptin signaling by at least two mechanisms: induction of the signal termination molecule suppressor of cytokine signaling-3 and serine phosphorylation of the essential pathway intermediate insulin receptor substrate (3,24). Targeted disruption of each of these pathways in a hypothalamus-specific manner (by pharmacologic, viral gene transfer, or genetic means) limits the extent of HFD-induced obesity, hypothalamic leptin resistance, and systemic insulin resistance (30–36), implying a contributory role for these mechanisms of hypothalamic inflammation in the pathogenesis of DIO and its metabolic sequelae. Conversely, although acute or robust central nervous system inflammation is typically associated with anorexia and weight loss, interventions that generate more modest levels of hypothalamic inflammation (e.g., virally mediated neuronal expression of constitutively active IKKβ, or central administration of either low-dose tumor necrosis factor-α or the ER stress inducer thapsigargin) can produce mild obesity/metabolic syndrome phenotypes (34,36,39,40). These data collectively suggest that in some contexts, hypothalamic inflammation is both necessary and sufficient for DIO. As such, an important research priority is to determine the relative importance of the many inflammatory signaling molecules and pathways implicated in promoting leptin resistance and obesity.

Although these findings collectively offer evidence of a causal role for hypothalamic inflammation in the pathogenesis of DIO in rodent models, there are conflicting results that cast doubt on this hypothesis. First, several models of increased tissue inflammation (e.g., fasting in adipose tissue \[41\] and p50 knockout in liver \[42\]) show improved rather than worsened insulin sensitivity, suggesting that the relationship between hormone signaling and inflammation is context dependent. More importantly, a number of total-body proinflammatory cytokine and cytokine receptor knockout models (e.g., interleukin \[IL\]-1, IL-6, IL-1 receptor, and tumor necrosis factor-α receptors \[43–46\]) display obesity phenotypes rather than being protected from DIO. Conversely, IL-1 receptor antagonist–deficient mice show a lean, hypermetabolic phenotype with increased leptin sensitivity despite higher levels of IL-1 activity (47). Reconciling these results with the literature supporting a role for hypothalamic inflammation in DIO is inherently difficult because these are largely whole-body gene knockouts that affect multiple tissues and may cause developmental compensation. In many instances, the degree of change in inflammatory signaling in these models is much greater than that observed with HFD-induced obesity. Lastly, the hypothalamus was not specifically examined in these models.
However, two recent studies have demonstrated dissociation between weight changes and alterations in hypothalamic inflammation. In the first, DIO-sensitive HFD-fed rats switched back to regular chow for 8 weeks were observed to lose all their excess weight despite sustained elevations in hypothalamic inflammatory gene expression (48). Conversely, neuronal peroxisome proliferator–activated receptor-δ knockout mice are resistant to HFD-induced hypothalamic inflammation yet are more susceptible to DIO than wild-type controls (49). Thus, the evidence supporting a direct role of hypothalamic inflammation in promoting weight gain is clearly mixed. These considerations raise the possibility that HFD-induced hypothalamic inflammation is but one element of a more comprehensive injury process that itself contributes to obesity pathogenesis, a hypothesis that we explore further below.

# Hypothalamic gliosis and neuron injury

At the cellular level, exposure of neurons to nutrient excess represents a significant stress that not only engages adaptive mechanisms such as autophagy and ER stress that limit neuronal damage, but also involves neighboring cell populations. More than 50% of the cellular composition of the brain is nonneuronal, including glial, vascular, and periventricular constituents (50). Astrocytes and microglia are the most abundant of these specialized cell types and, in addition to comprising much of the basic architecture of brain parenchyma, they maintain the blood-brain barrier, support neuronal metabolism, and both guard against and react to local tissue injury. To accomplish these various functions, both astrocytes and microglia are capable of remarkable plasticity, altering both their genetic programs and cellular morphologies (a process called reactive gliosis) to combat infection, support or consume damaged neurons, and direct the restorative process (51,52). Interestingly, rats receiving systemic leptin administration 3 h prior to cerebral ischemia had reduced infarct volume, with signal transducer and activator of transcription-3 activation in astrocytes of the ischemic penumbra (53). Similarly, leptin administration and fasting reciprocally modulate glucose and glutamate transporter expression in hypothalamic astrocytes (54). These observations are suggestive of an interaction between nutritional status and glial cell function.

Whereas the link between hypothalamic inflammation and obesity pathogenesis is the focus of a sizeable literature, the role of astrocytes and microglia in energy homeostasis is comparatively unexplored. Both cell types play crucial roles in synaptic remodeling (55), metabolic coupling at active synapses (56), and neurotransmitter reuptake (57), functions that ultimately influence neuronal activity and behavior. Astrocytes and microglia play central roles in the central nervous system response to injury but can either provide neuroprotection or perpetuate neurotoxicity, depending on the nature of the underlying insult (51,58). In neurodegenerative conditions such as Alzheimer's disease, amyotrophic lateral sclerosis, and multiple sclerosis, microglia and astrocytes often accelerate the disease process by generating cytokines, reactive oxygen species, and other toxic mediators involved in clearing degenerating cells.
By amplifying inflammatory signals in the hypothalamus, in a manner analogous to the recruitment and activation of proinflammatory immune cells in adipose and other peripheral tissues (59,60), both cell types have the potential to impair neuronal function in ways that favor leptin resistance and associated weight gain. Lending support to this argument, TLR4, a putative mediator of saturated fatty acid–induced inflammatory signaling, is abundantly expressed by microglia (33), and acute inactivation of TLR4 by central administration of an anti-TLR4 antibody can reduce weight gain during HFD feeding (33). In addition, reactive astrocytes and microglia accumulate in the hypothalamus during long-term HFD consumption, at times when hypothalamic inflammation is clearly evident (33,37). In contrast, HFD-fed mice subjected to an involuntary exercise regimen exhibit modest improvements in glucose tolerance along with reduced hypothalamic microglial activation (61), suggesting a link between microglial phenotype and obesity-associated metabolic impairment. Finally, in a nonhuman primate model, microglial activation along with increased expression of inflammatory pathway genes was observed in fetuses from HFD-fed mothers, raising the possibility that hypothalamic microglia contribute to the effect of intrauterine programming on the adult phenotype (62).

Other studies, however, suggest a more complex picture in which microglia and astrocytes play a protective role by limiting the deleterious hypothalamic consequences of consuming an HFD. First, mice with moderately increased production of IL-6 from astrocytes were protected from DIO, rather than being more susceptible (63). Moreover, adult rats overfed during the neonatal period manifest hypothalamic microglial activation (as evidenced by major histocompatibility complex class II expression) without local inflammation (64), and hypothalamic microglia from mice fed an HFD accumulate the generally anti-inflammatory molecule IgG (65). Finally, our recent work suggests that the effects of short-term HFD feeding on hypothalamic inflammation and reactive gliosis are separable from one another (37). Specifically, whereas both processes were evident within the first week of HFD consumption, hypothalamic inflammation subsided over the next 2 weeks, whereas the accumulation of enlarged microglia in the ARC continued unabated.

Although definitive answers are still awaited, these results are consistent with a model (Fig. 1) in which gliosis develops initially as a neuroprotective response to cope with neuronal stress induced by HFD feeding. In this scenario, glial responses initially constrain hypothalamic inflammatory signaling. With prolonged exposure to an HFD, however, astrocytes and/or microglia may eventually convert to a more proinflammatory, neurotoxic phenotype. Detailed cellular phenotyping and carefully timed functional interventions should help clarify these possibilities.

# Integrated model of neuronal injury and obesity pathogenesis

Like cells in the peripheral tissues of animals fed an HFD, hypothalamic neurons confronted by nutrient excess are proposed to respond by engaging mechanisms that limit excessive hormone- and nutrient-related signaling, manage organellar stress, and maintain structural integrity. At the same time, these neurons also integrate signaling inputs from neurons in other brain regions that may themselves be reacting to the change in diet.
These various challenges to neuronal function are compatible with evidence that in rats predisposed to DIO, hypothalamic neurons initiate a stress response (as measured by up-regulation of the chaperone protein hsp72) within the first week of HFD exposure, close to the onset of hypothalamic inflammation (37). The accumulation of reactive astrocytes and microglia in the mediobasal hypothalamus occurs around the same time and precedes a subsequent reduction in the level of inflammation (Fig. 1).

With prolonged HFD exposure, however, POMC neurons exhibit increased autophagic activity and eventually (after 8 months) appear to decrease in number by 25% (37), consistent with prior reports of apoptosis of these neurons in mice with DIO (66). Based on these considerations, we speculate that hypothalamic inflammation during HFD feeding originates primarily within neurons rather than glial cells, and that the associated gliosis occurs in response to neuron injury and is initially neuroprotective. This hypothesis does not exclude the possibility that glial cells responding directly to dietary components and/or nutrient excess are also drivers of inflammation and/or neuronal injury, but to date we are unaware of data that directly support this view. If chronic exposure to an HFD causes permanent damage to, or death of, neurons comprising energy balance neurocircuits, obesity susceptibility and other phenotypic outcomes may depend on the degree to which these neuronal subsets are susceptible to this type of stress. For example, if neuron injury and loss occur preferentially in POMC over AgRP neurons, the balance of functional anorexigenic to orexigenic inputs should shift so as to favor leptin resistance, excess weight gain, and an upward resetting of the defended level of body fat mass.

Although the extent to which obesity and/or HFD feeding impacts various hypothalamic cell populations in humans remains uncertain, early insights from brain imaging studies are beginning to support this type of neuropathological model. A retrospective analysis of magnetic resonance images from 34 subjects (BMI range 17.7–44.1 kg/m^2^) revealed evidence of gliosis in the mediobasal hypothalamus that correlated with BMI (37), and a separate study reported an inverse correlation between systemic inflammation (measured as serum fibrinogen) and the integrity of brain structures involved in food reward and feeding behavior, as measured by magnetic resonance imaging in 44 overweight/obese subjects (67). Brain imaging (including new approaches to the use of PET imaging to detect microglial activation) and other modalities have important potential to ascertain whether human obesity involves cumulative ARC neuronal damage and whether such an effect is reversible and/or predictive of either future weight gain or weight regain following voluntary weight loss.

# Summary

Our understanding of obesity-associated hypothalamic inflammation (its underlying causes, the contributions made by distinct cell types, the extent to which it reflects tissue injury versus repair, and its implications for obesity pathogenesis and treatment) remains incomplete, but the field is evolving rapidly. Although hypothalamic inflammation shares several features with responses observed in peripheral tissues (e.g., comparable increases of some of the same inflammatory biomarkers, activation of local immune cells), its many unique features identify it as a fundamentally distinct process.
For one, inflammation occurs much more rapidly in the hypothalamus than in peripheral tissues following the switch to an obesogenic HFD, such that it precedes weight gain. Available data are compatible with a model in which the initial cause of hypothalamic inflammation induced by HFD feeding involves injury to the neurons that comprise energy balance neurocircuits. In turn, this injury may undermine homeostatic responses that protect against weight gain, thereby contributing to obesity pathogenesis. Additional studies are warranted to critically test this model and to determine whether therapeutic interventions targeting this process have a role in the future of obesity treatment.

## ACKNOWLEDGMENTS

This work was supported by a National Institutes of Health (NIH) Fellowship Training Program Award (T32DK007247), an NIH National Research Service Award (F32DK091989), the NIH-funded University of Washington Nutrition Obesity Research Center and Diabetes Research Center, and NIH grants to M.W.S. (DK090320, DK083042, and DK052989) and J.P.T. (DK088872). J.P.T. was also supported by a Beginning Grant-in-Aid from the American Heart Association.

M.W.S. has consulted for Santarus and Pfizer. No other potential conflicts of interest relevant to this article were reported.

J.P.T. and M.W.S. wrote the manuscript. S.J.G. and B.E.W. reviewed and edited the manuscript. M.D.D. contributed the figure and reviewed and edited the manuscript.

# REFERENCES

author: PH Remans; JM van Laar; ME Sanders; P-P Tak; KA Reedquist
date: 2005
institute: 1AMC, Amsterdam, The Netherlands; 2LUMC, Leiden, The Netherlands
title: Synovial monocytes mediate Rap1-dependent oxidative stress in rheumatoid arthritis T lymphocytes

# Background

Transient production of reactive oxygen species (ROS) plays an important role in optimizing transcriptional and proliferative responses to T-cell receptor signaling. Conversely, chronic oxidative stress leads to mitogenic hyporesponsiveness and enhanced transcription of inflammatory gene products. It has recently been demonstrated that constitutive activation of the small GTPase Ras and simultaneous inhibition of Rap1 in synovial fluid (SF) T cells result in high intracellular ROS production, which is thought to underlie many of the functional abnormalities observed in these cells in rheumatoid arthritis.

# Objectives

To identify the factor(s) responsible for modulation of intracellular ROS production in synovial T lymphocytes.

# Methods

Purified rheumatoid arthritis peripheral blood (PB) T cells were incubated in the presence of different cytokines, in 50% autologous SF, or with autologous SF monocytes for 72 hours.
Activation status of Ras and Rap1 GTPases was determined using activation-specific probes for these GTPases. Intracellular ROS production was measured by FACS analysis of oxidation of the dye DCF.

# Results

Chronic stimulation of PB T cells for 72 hours with tumor necrosis factor alpha (TNF-α) or 50% autologous SF resulted in a slight increase in basal ROS production, but did not increase intracellular ROS production to the levels found in SF T cells. Exposure of PB T cells to SF (but not PB) monocytes for 72 hours, however, led to a strong increase in ROS production in PB T cells, comparable with ROS levels in SF T cells. Moreover, Rap1 inhibition similar to that found in SF T cells was observed in PB T cells after exposure to SF monocytes. To demonstrate that the inhibition of Rap1 is critical for the subsequent increase in ROS production, PB T cells were nucleofected with the constitutively active isoform of Rap1 (RapV12). In RapV12-nucleofected PB T cells, the SF monocyte-induced ROS production was prevented. Cell–cell contact is critical, since in PB T cells separated from SF monocytes by a transwell membrane, the inhibition of Rap1 was relieved, concomitant with an absence of excess ROS production. Additionally, we found that addition of 10 μg/ml recombinant CTLA-4-Ig fusion protein also prevented oxidative stress in PB T cells exposed to SF monocytes, suggesting a central role for CD28. PB T cells were therefore stimulated with TNF-α, interferon gamma, IL-1β, or transforming growth factor beta, in the presence or absence of anti-CD28. Here we found that stimulation with anti-CD28 by itself was sufficient to induce Rap1 inhibition and a moderate increase in ROS production. Co-incubation with TNF-α strongly enhanced this intracellular ROS production.

# Conclusion

*In vitro* exposure of PB T cells from rheumatoid arthritis patients to synovial monocytes leads to a strong increase in intracellular ROS production. This is mediated by simultaneous Ras activation and inhibition of Rap1. Whereas Ras can be activated by a variety of stimuli, Rap1 inhibition is induced by SF monocytes through CD28 costimulatory signaling.

abstract: A report on the Cold Spring Harbor Laboratory meeting 'Systems Biology: Global Regulation of Gene Expression', Cold Spring Harbor, USA, 27-30 March 2008.
author: Stein Aerts; Stefanie Butland
date: 2008
institute: 1Laboratory of Neurogenetics, Department of Molecular and Developmental Genetics, VIB, Leuven B-3000, Belgium; 2Centre for Molecular Medicine and Therapeutics, CFRI, University of British Columbia, Vancouver, V5Z 4H4, Canada
title: Sequencing the regulatory genome

The line between the biological and computational research communities has disappeared in the field of gene regulation.
The group of regulatory biology researchers represented at the recent meeting at Cold Spring Harbor on systems biology shares the same goal: to develop and apply experimental and computational technologies to decipher the genomic regulatory code and the gene regulatory networks that are the driving forces of development and evolution. As in previous years, important gaps in solving the complex problem of gene regulation were bridged. This year featured the emerging massively parallel sequencing technologies, which are now being applied to every conceivable step in the gene regulation process, from gene annotation and alternative splicing, to transcription factor binding, and chromatin structure. Topics covered at the meeting ranged widely and in this report, we give our impressions of some highlights in two dominant themes: gene regulation in a nuclear context and transcription factor binding specificity.

# Genome geography in three dimensions

When transcription factors are reading the genomic regulatory code to determine the complement of active genes in a cell at a given time, they can be aided, guided, or obstructed by the chromatin they operate on. To catch chromatin in the regulatory act, laboratories are sequencing the sites associated with histone modifications that mark repressive, activating, and bivalent chromatin states, high-resolution DNase I hypersensitive sites (DHSs) that mark accessible chromatin, possible insulator sites, sites bound by transcription factors and RNA polymerase II, and *in vivo* cross-linked sites that represent long-range regulatory interactions in a locus. In an increasing number of laboratories, the regulatory geography of the genome is now being assessed within the three-dimensional context of the nucleus.

Bas van Steensel (Netherlands Cancer Institute, Amsterdam, the Netherlands) provided an elegant picture of human gene regulation in three dimensions by identifying nuclear-lamina-associated domains (LADs) in interphase chromosomes. LADs range from 100 kb to 10 Mb and have sharp borders that define chromatin regions with distinctive characteristics: they tend to have fewer genes, with lower expression levels compared with genes outside LADs, low RNA polymerase II occupancy, and enrichment of the repressive histone mark H3 trimethylated on lysine 27 (H3K27me3) at their borders. Thirty percent of LAD borders have at least one of three other marks: binding sites for the transcription factor CTCF (commonly held to act as insulators), a CpG island, or a promoter directing transcription away from the LAD.

Sites of chromatin accessibility across the human genome were precisely delineated by John Stamatoyannopoulos (University of Washington School of Medicine, Seattle, USA) in nine cell types by 'digital DNase I', an *in vivo* assay of DNase I hypersensitive sites identified by single-molecule sequencing. Stamatoyannopoulos has identified around 400,000 DHSs genome-wide, of which around 170,000 were highly regulated cell-type specific elements. A subset of these are organized into approximately 2,000 tissue 'regulons', each comprising a large cluster of lineage-specific elements spread out over tens or even hundreds of kilobases. Genes that were marked by these regulons in a given cell type showed striking over-representation of Gene Ontology terms for processes associated with the cell lineage in which they were observed (a toy calculation of this kind of over-representation statistic is sketched below). In addition to accessibility to DNase, active enhancers show specific histone modifications.
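Over-representation claims of this kind, whether for Gene Ontology terms among regulon-marked genes or for one histone mark over another, are conventionally scored with a hypergeometric test. A minimal sketch using SciPy, with entirely invented counts for illustration:

```python
from scipy.stats import hypergeom

# Hypothetical numbers for illustration only: of M annotated genes,
# n carry a given Gene Ontology term; a regulon marks N genes,
# k of which carry that term.
M = 20000  # annotated genes in the genome
n = 300    # genes with the GO term of interest
N = 150    # genes marked by the regulon in this cell type
k = 12     # marked genes that also carry the GO term

# P(X >= k): probability of drawing at least k term-bearing genes
# among N draws without replacement from the genome.
p_enrichment = hypergeom.sf(k - 1, M, n, N)

fold = (k / N) / (n / M)  # fold enrichment over genomic background
print(f"fold enrichment = {fold:.1f}, P = {p_enrichment:.2e}")
```

In this made-up example the term is roughly 5-fold enriched; in practice such P values would also be corrected for the many terms tested.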
Gary Hon (Ludwig Institute for Cancer Research, San Diego, USA) was able to distinguish cell-type specific enhancers from promoters by their enrichment for H3K4me1 over H3K4me3 modifications. He proposed that these enhancers are what drive cell-type specific patterns of gene expression.

With such data it is critical to determine how different regulon elements interact with each other to elicit a response. Job Dekker (University of Massachusetts Medical School, Worcester, USA) described his 'chromosome conformation capture carbon copy' (5C) method to detect many-by-many chromatin interactions for a picture of the spatial conformation of genomic regions. His analysis of a 1 Mb region around the human beta-globin locus showed that an alternative promoter 250 kb upstream physically interacts with the globin locus control region. Dekker pointed out that "simple models are insufficient" for gene regulation, as CTCF sites at the beta-globin locus, usually considered to act as insulators, actually facilitate long-range interactions between promoters and enhancers.

# Transcription factor binding specificity

We were reminded by Kevin Struhl (Harvard Medical School, Boston, USA) that the epigenetic states of chromatin cannot explain the specificity of gene expression, but are rather instructed by the sequence-specific transcription factors that translate the regulatory code and recruit chromatin-modifying activities. A cornerstone of our understanding of the regulatory language is the knowledge of a transcription factor's DNA-binding specificity. Significant progress has been achieved in deriving high-quality DNA-binding profiles through a variety of approaches with a large dose of collaboration, particularly with Martha Bulyk (Brigham & Women's Hospital and Harvard Medical School, Boston, USA) for protein-binding microarrays (PBMs). Scot Wolfe (University of Massachusetts Medical School, Worcester, USA) reported new binding profiles for 84 homeodomain transcription factors from *Drosophila melanogaster* obtained through a bacterial one-hybrid system, while Gong-Hong Wei (University of Helsinki, Finland) described binding profiles for all 27 human and 26 mouse ETS family members using a microwell-based high-throughput assay and PBMs, and Christian Grove (University of Massachusetts Medical School, Worcester, USA) reported profiles for most of the basic helix-loop-helix (bHLH) dimers in *Caenorhabditis elegans* using a novel version of the PBM assay. Timothy Hughes (University of Toronto, Canada) described profiles for 300 human and mouse transcription factors across 23 structural classes, including 168 profiles (of 175 total) for homeodomain transcription factors, also using PBMs.

All these profiles are highly conserved across species and can be ported between orthologous transcription factors when the DNA-contacting amino acids are conserved. This implies that a full compendium of transcription factor binding specificities across all animals can be accomplished in the near future, with about one third being finished and released by these groups very soon. (A toy example of scoring candidate sites with such a binding profile follows.)
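To make the notion of a binding profile concrete, here is a minimal sketch of scanning a sequence with a position weight matrix derived from a count matrix; the motif and its values are invented for illustration and are not taken from any of the datasets above:

```python
import math

# Toy position frequency matrix for a 4-bp motif (rows: positions,
# columns: counts of A, C, G, T); values invented for illustration.
PFM = [
    [8, 1, 1, 0],   # position 1: mostly A
    [0, 9, 0, 1],   # position 2: mostly C
    [1, 0, 8, 1],   # position 3: mostly G
    [0, 1, 0, 9],   # position 4: mostly T
]
BASES = "ACGT"
BACKGROUND = 0.25  # uniform base composition assumed

def pwm_log_odds(pfm, pseudocount=1.0):
    """Convert a count matrix to a log-odds PWM against the background."""
    pwm = []
    for row in pfm:
        total = sum(row) + 4 * pseudocount
        pwm.append([math.log2(((c + pseudocount) / total) / BACKGROUND)
                    for c in row])
    return pwm

def scan(sequence, pwm):
    """Score every window of the sequence; return (offset, score) pairs."""
    width = len(pwm)
    return [(i, sum(pwm[j][BASES.index(sequence[i + j])] for j in range(width)))
            for i in range(len(sequence) - width + 1)]

pwm = pwm_log_odds(PFM)
for pos, score in scan("TTACGTACGA", pwm):
    print(f"offset {pos}: log-odds score {score:+.2f}")
```

High-scoring windows are candidate recognition sites; as the next paragraph notes, a high PWM score alone does not decide which of several related factors actually occupies a site *in vivo*.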
A question that remains is precisely how other contributors to specificity, such as transcription factor cooperativity, cell-type specific expression, variant or 'weak' recognition sites, and chromatin state, together distinguish between the correct target sites of related transcription factors that have virtually identical position weight matrices (PWMs).

While the relationship between transcription factors and their binding profiles is well conserved, independent data from various speakers showed yet again that the locations of *bona fide* regulatory elements are not always conserved in an alignment between orthologous regions. This plasticity of transcription factor recognition sites between functionally conserved regulatory regions still poses a challenge for their computational prediction. Pouya Kheradpour (Massachusetts Institute of Technology, Cambridge, USA) presented a pragmatic solution by allowing for movement of a predicted site in an alignment, which for many motifs resulted in increased recovery of conserved sites (sensitivity) at a given specificity. Furthermore, many nonconserved sites are located in transposable elements that are generally not under selection and are usually masked before sequence analysis. For example, Guillaume Bourque (Genome Institute of Singapore, Singapore) found that 43% of nonconserved p53-binding sites are repeat-associated. Ting Wang (University of California, Santa Cruz, USA) identified a similar proportion of p53-binding sites in human endogenous retrovirus long terminal repeats, and Stamatoyannopoulos noted that around 10% of his DHSs map to transposable elements. Mobile elements thus provide an additional substrate for the evolution of species-specific gene regulation.

# What's next in transcriptional regulation?

The key challenge will be to combine the two topics highlighted in this report, namely determination of the specific binding sites for multiple transcription factors and the genome-scale characterization of chromatin states, and to link these with spatial and temporal differences in gene expression. Advances in measuring cell-type specific gene expression were shown by Bob Waterston (University of Washington, Seattle, USA), who is using automated image-processing tools to analyze three-dimensional movies of fluorescent-marker-tagged transcription factors in *C. elegans* embryos. Comparing massive numbers of images, his group can make direct quantitative comparisons of expression patterns of different transcription factors "cell-by-cell, minute-by-minute". On the same topic, Philip Benfey (Duke University, Durham, USA) has leveraged a compendium of gene-expression data at cell-type specific resolution for an entire organ. His group performed microarray experiments on diverse cell lineages across the radial and longitudinal axes of the *Arabidopsis* root. A complementary set of experiments on six different cell types showed that specific cell types respond uniquely to high-salt or low-iron stress conditions in terms of which genes are up- or down-regulated.

Robert Kingston (Harvard Medical School, Boston, USA) is developing technologies for locus-specific chromatin isolation to get the complete list of players that bind *in vivo* to a regulatory locus. He presented a convincing proof of principle by isolating 95% of known telomere interactors and identifying new biologically relevant ones. A more classical way of determining the input of multiple transcription factors to a specific locus is by genetic screens.
Results of a high-throughput assay were presented by Pinay Kainth (University of Toronto, Canada), who tested the input contributions of all nonessential yeast transcription factors and their potential regulators on 27 cell-cycle-specific promoters using quantitative fluorescence measurements. Although genetic perturbations that alter a promoter's output are not limited to the transcription factors that physically bind to the promoter, such data can approximate direct interactions, especially when combined with PWM-based motif predictions.

Once the transcription-factor-specific regulatory sites, chromatin accessibility, and long-range interactions are determined for a given cell state, one must still determine the *cis*-regulatory logic and the rate of transcription initiation that it produces. This is still a difficult problem addressed by only a few groups, including that of Jason Gertz (Washington University School of Medicine, St Louis, USA), who reported the use of libraries of synthetic regulatory regions to examine putative roles of combinations of *cis*-elements even before they have been discovered in real enhancers. This approach provides a possible solution to the sparse sampling of sets of *in vivo* validated regulatory regions that produce a similar output.

This high-quality meeting of regulatory biology researchers indicates that we are taking important steps toward the construction of a powerful toolkit to identify and model *in vivo* regulatory interactions and networks. The strong proofs of principle demonstrated at this meeting, together with increased access to massively parallel sequencing platforms, anticipate an era in which systems geneticists will collaborate to perform gene-regulation experiments in unprecedented detail and scale to characterize their pet 'regulome', and niche biologists will apply these technologies to address specific hypotheses about development, health and disease.

author: Gregory A Petsko
date: 2010
institute: 1Rosenstiel Basic Medical Sciences Research Center, Brandeis University, Waltham, MA 02454-9110, USA
title: A Faustian bargain

# An open letter to George M Philip, President of the State University of New York at Albany

Dear President Philip,

Probably the last thing you need at this moment is someone else from outside your university complaining about your decision. If you want to argue that I can't really understand all aspects of the situation, never having been associated with SUNY Albany, I wouldn't disagree. But I cannot let something like this go by without weighing in. I hope, when I'm through, you will at least understand why.

Just 30 days ago, on October 1st, you announced that the departments of French, Italian, Classics, Russian and Theater Arts were being eliminated.
You gave several reasons for your decision, including that 'there are comparatively fewer students enrolled in these degree programs.' Of course, your decision was also, perhaps chiefly, a cost-cutting measure - in fact, you stated that this decision might not have been necessary had the state legislature passed a bill that would have allowed your university to set its own tuition rates. Finally, you asserted that the humanities were a drain on the institution financially, as opposed to the sciences, which bring in money in the form of grants and contracts.\n\nLet's examine these and your other reasons in detail, because I think if one does, it becomes clear that the facts on which they are based have some important aspects that are not covered in your statement. First, the matter of enrollment. I'm sure that relatively few students take classes in these subjects nowadays, just as you say. There wouldn't have been many in my day, either, if universities hadn't required students to take a distribution of courses in many different parts of the academy: humanities, social sciences, the fine arts, the physical and natural sciences, and to attain minimal proficiency in at least one foreign language. You see, the reason that humanities classes have low enrollment is not because students these days are clamoring for more relevant courses; it's because administrators like you, and spineless faculty, have stopped setting distribution requirements and started allowing students to choose their own academic programs - something I feel is a complete abrogation of the duty of university faculty as teachers and mentors. You could fix the enrollment problem tomorrow by instituting a mandatory core curriculum that included a wide range of courses.\n\nYoung people haven't, for the most part, yet attained the wisdom to have that kind of freedom without making poor decisions. In fact, without wisdom, it's hard for most people. That idea is thrashed out better than anywhere else, I think, in Dostoyevsky's parable of the Grand Inquisitor, which is told in Chapter Five of his great novel, *The Brothers Karamazov*. In the parable, Christ comes back to earth in Seville at the time of the Spanish Inquisition. He performs several miracles but is arrested by Inquisition leaders and sentenced to be burned at the stake. The Grand Inquisitor visits Him in his cell to tell Him that the Church no longer needs Him. The main portion of the text is the Inquisitor explaining why. The Inquisitor says that Jesus rejected the three temptations of Satan in the desert in favor of freedom, but he believes that Jesus has misjudged human nature. The Inquisitor says that the vast majority of humanity cannot handle freedom. In giving humans the freedom to choose, Christ has doomed humanity to a life of suffering.\n\nThat single chapter in a much longer book is one of the great works of modern literature. You would find a lot in it to think about. I'm sure your Russian faculty would love to talk with you about it - if only you had a Russian department, which now, of course, you don't.\n\nThen there's the question of whether the state legislature's inaction gave you no other choice. I'm sure the budgetary problems you have to deal with are serious. They certainly are at Brandeis University, where I work. And we, too, faced critical strategic decisions because our income was no longer enough to meet our expenses. 
But we eschewed your draconian - and authoritarian - solution, and a team of faculty, with input from all parts of the university, came up with a plan to do more with fewer resources. I'm not saying that all the specifics of our solution would fit your institution, but the process sure would have. You did call a town meeting, but it was to discuss your plan, not let the university craft its own. And you called that meeting for Friday afternoon on October 1st, when few of your students or faculty would be around to attend. In your defense, you called the timing 'unfortunate', but pleaded that there was a 'limited availability of appropriate large venue options.' I find that rather surprising. If the President of Brandeis needed a lecture hall on short notice, he would get one. I guess you don't have much clout at your university.\n\nIt seems to me that the way you went about it couldn't have been more likely to alienate just about everybody on campus. In your position, I would have done everything possible to avoid that. I wouldn't want to end up in the 9th Bolgia (ditch of stone) of the 8th Circle of the Inferno, where the great 14th century Italian poet Dante Alighieri put the sowers of discord. There, as they struggle in that pit for all eternity, a demon continually hacks their limbs apart, just as in life they divided others.\n\nThe *Inferno* is the first book of Dante's *Divine Comedy*, one of the great works of the human imagination. There's so much to learn from it about human weakness and folly. The faculty in your Italian department would be delighted to introduce you to its many wonders - if only you had an Italian department, which now, of course, you don't.\n\nAnd do you really think even those faculty and administrators who may applaud your tough-minded stance (partly, I'm sure, in relief that they didn't get the axe themselves) are still going to be on your side in the future? I'm reminded of the fable by Aesop of the Travelers and the Bear: two men were walking together through the woods, when a bear rushed out at them. One of the travelers happened to be in front, and he grabbed the branch of a tree, climbed up, and hid himself in the leaves. The other, being too far behind, threw himself flat down on the ground, with his face in the dust. The bear came up to him, put his muzzle close to the man's ear, and sniffed and sniffed. But at last with a growl the bear slouched off, for bears will not touch dead meat. Then the fellow in the tree came down to his companion, and, laughing, said 'What was it that the bear whispered to you?' 'He told me,' said the other man, 'Never to trust a friend who deserts you in a pinch.'\n\nI first learned that fable, and its valuable lesson for life, in a freshman classics course. Aesop is credited with literally hundreds of fables, most of which are equally enjoyable - and enlightening. Your classics faculty would gladly tell you about them, if only you had a Classics department, which now, of course, you don't.\n\nAs for the argument that the humanities don't pay their own way, well, I guess that's true, but it seems to me that there's a fallacy in assuming that a university should be run like a business. I'm not saying it shouldn't be managed prudently, but the notion that every part of it needs to be self-supporting is simply at variance with what a university is all about. You seem to value entrepreneurial programs and practical subjects that might generate intellectual property more than you do 'old-fashioned' courses of study. 
But universities aren't just about discovering and capitalizing on new knowledge; they are also about preserving knowledge from being lost over time, and that requires a financial investment. There is good reason for it: what seems to be archaic today can become vital in the future. I'll give you two examples of that. The first is the science of virology, which in the 1970s was dying out because people felt that infectious diseases were no longer a serious health problem in the developed world and other subjects, such as molecular biology, were much sexier. Then, in the early 1990s, a little problem called AIDS became the world's number 1 health concern. The virus that causes AIDS was first isolated and characterized at the National Institutes of Health in the USA and the Institut Pasteur in France, because these were among the few institutions that still had thriving virology programs. My second example you will probably be more familiar with. Middle Eastern Studies, including the study of foreign languages such as Arabic and Persian, was hardly a hot subject on most campuses in the 1990s. Then came September 11, 2001. Suddenly we realized that we needed a lot more people who understood something about that part of the world, especially its Muslim culture. Those universities that had preserved their Middle Eastern Studies departments, even in the face of declining enrollment, suddenly became very important places. Those that hadn't - well, I'm sure you get the picture.

I know one of your arguments is that not every place should try to do everything. Let other institutions have great programs in classics or theater arts, you say; we will focus on preparing students for jobs in the real world. Well, I hope I've just shown you that the real world is pretty fickle about what it wants. The best way for people to be prepared for the inevitable shock of change is to be as broadly educated as possible, because today's backwater is often tomorrow's hot field. And interdisciplinary research, which is all the rage these days, is only possible if people aren't too narrowly trained. If none of that convinces you, then I'm willing to let you turn your institution into a place that focuses on the practical, but only if you stop calling it a university and yourself the President of one. You see, the word 'university' derives from the Latin 'universitas', meaning 'the whole'. You can't be a university without having a thriving humanities program. You will need to call SUNY Albany a trade school, or perhaps a vocational college, but not a university. Not anymore.

I utterly refuse to believe that you had no alternative. It's your job as President to find ways of solving problems that do not require the amputation of healthy limbs. Voltaire said that no problem can withstand the assault of sustained thinking. Voltaire, whose real name was François-Marie Arouet, had a lot of pithy, witty and brilliant things to say (my favorite is 'God is a comedian playing to an audience that is afraid to laugh'). Much of what he wrote would be very useful to you. I'm sure the faculty in your French department would be happy to introduce you to his writings, if only you had a French department, which now, of course, you don't.

I guess I shouldn't be surprised that you have trouble understanding the importance of maintaining programs in unglamorous or even seemingly 'dead' subjects.
From your biography, you don't actually have a PhD or other high degree, and have never really taught or done research at a university. Perhaps my own background will interest you. I started out as a classics major. I'm now Professor of Biochemistry and Chemistry. Of all the courses I took in college and graduate school, the ones that have benefited me the most in my career as a scientist are the courses in classics, art history, sociology, and English literature. These courses didn't just give me a much better appreciation for my own culture; they taught me how to think, to analyze, and to write clearly. None of my science courses did any of that.

One of the things I do now is write a monthly column on science and society. I've done it for over 10 years, and I'm pleased to say some people seem to like it. If I've been fortunate enough to come up with a few insightful observations, I can assure you they are entirely due to my background in the humanities and my love of the arts.

One of the things I've written about is the way genomics is changing the world we live in. Our ability to manipulate the human genome is going to pose some very difficult questions for humanity in the next few decades, including the question of just what it means to be human. That isn't a question for science alone; it's a question that must be answered with input from every sphere of human thought, including - especially including - the humanities and arts. Science unleavened by the human heart and the human spirit is sterile, cold, and self-absorbed. It's also unimaginative: some of my best ideas as a scientist have come from thinking and reading about things that have, superficially, nothing to do with science. If I'm right that what it means to be human is going to be one of the central issues of our time, then universities that are best equipped to deal with it, in all its many facets, will be the most important institutions of higher learning in the future. You've just ensured that yours won't be one of them.

Some of your defenders have asserted that this is all a brilliant ploy on your part - a master political move designed to shock the legislature and force them to give SUNY Albany enough resources to keep these departments open. That would be Machiavellian (another notable Italian writer, but then, you don't have any Italian faculty to tell you about him), certainly, but I doubt that you're that clever. If you were, you would have held that town meeting when the whole university could have been present, at a place where the press would be all over it. That's how you force the hand of a bunch of politicians. You proclaim your action on the steps of the state capitol. You don't try to sneak it through in the dead of night, when your institution has its back turned.

No, I think you were simply trying to balance your budget at the expense of what you believe to be weak, outdated and powerless departments. I think you will find, in time, that you made a Faustian bargain. Faust is the title character in a play by Johann Wolfgang von Goethe. It was written around 1800 but still attracts the largest audiences of any play in Germany whenever it's performed. Faust is the story of a scholar who makes a deal with the devil. The devil promises him anything he wants as long as he lives. In return, the devil will get - well, I'm sure you can guess how these sorts of deals usually go.
If only you had a Theater department, which now, of course, you don't, you could ask them to perform the play so you could see what happens. It's awfully relevant to your situation. You see, Goethe believed that it profits a man nothing to give up his soul for the whole world. That's the whole world, President Philip, not just a balanced budget. Although, I guess, to be fair, you haven't given up your soul. Just the soul of your institution.

Disrespectfully yours,

Gregory A Petsko

abstract: # Background
.
Of concern to health educators is the suggestion that college females practice diet and health behaviors that contradict the 2005 dietary guidelines for Americans. In this regard, there remain gaps in the research related to dieting among college females. Namely, do normal weight individuals diet differently from those who are overweight or obese, and are there dieting practices used by females that can be adapted to promote a healthy body weight? Since it is well recognized that females diet, this study seeks to determine the dieting practices used among normal, overweight, and obese college females (do they diet differently) and identify dieting practices that could be pursued to help these females more appropriately achieve and maintain a healthy body weight.
.
# Methods
.
A total of 185 female college students aged 18 to 24 years participated in this study. Height, weight, waist and hip circumferences, and skinfold thickness were measured to assess body composition. Surveys included a dieting practices questionnaire and a 30-day physical activity recall. Participants were classified according to body mass index (BMI) as normal weight (n = 113), overweight (n = 35), or obese (n = 21). Data were analyzed using JMP IN® software. Descriptive statistics included means, standard deviations, and frequency.
Subsequent data analysis involved Pearson *X*^2^ tests and one-way analysis of variance, with all pairs that differed significantly compared using the Tukey-Kramer honestly significant difference test.
.
# Results
.
Outcomes of this study indicate that the majority of participants (83%) used dieting for weight loss and believed they would weigh 2% to 6% more than their current weight if they did not diet; the normal weight, overweight, and obese groups perceived attractive weight to be 94%, 85%, and 74%, respectively, of current weight; 80% of participants reported using physical activity to control weight, although only 19% exercised at a level that would promote weight loss; only two of 15 dieting behaviors assessed differed in prevalence of use among groups, namely consciously eating less than you want (44% normal weight, 57% overweight, 81% obese) and using artificial sweeteners (31% normal weight and overweight, 5% obese); and the most prevalent explicitly maladaptive weight loss behavior was smoking cigarettes (used by 9% of participants), while the most unhealthy was skipping breakfast (32%).
.
# Conclusion
.
Collectively, results indicate that female college students, regardless of weight status, would benefit from open discussions with health educators regarding healthy and effective dieting practices to achieve/maintain a healthy body weight. The results warrant replication among high school, middle-aged, and older females.
author: Brenda M Malinauskas; Thomas D Raedeke; Victor G Aeby; Jean L Smith; Matthew B Dallas
date: 2006
institute: 1Department of Nutrition and Hospitality Management, East Carolina University, Greenville, North Carolina, USA; 2Department of Exercise and Sport Science, East Carolina University, Greenville, North Carolina, USA; 3Department of Health Education, East Carolina University, Greenville, North Carolina, USA
references:
title: Dieting practices, weight perceptions, and body composition: A comparison of normal weight, overweight, and obese college females

# Background

Tens of thousands of high school graduates will be heading out on their own this fall. Many will be confronting the rigors of higher education while adjusting to a new environment and new pressures. With all the changes, gaining weight is common. When you consider that more than 1.5 million students are entering U.S. colleges and universities each fall, by the following spring the weight gain that entering collegians encounter is epidemic. Gaining weight during freshman year is so common that the phenomenon has its own nickname, the "Freshman 15", and has been associated with the freshman experience for many years. Although unwanted weight gain can occur at any age, the transition from high school to college is a time of increased risk of weight gain among females.

Females, at a very young age, are concerned about body weight and place high importance on appearance, which is dramatically influenced by the media. A major messaging theme of teen magazine websites is the necessity to be beautiful \[1\]. Mooney et al \[2\] conducted interviews with home economics teachers and teen focus groups in Ireland and found that adolescent females are very conscious of their body image, are strongly influenced by high-profile celebrities, and that their primary motivations for wanting to be thin are to gain attention from males, approval from friends, and self-confidence. Unfortunately, the desire to be beautiful may result in unwanted outcomes.
For example, in a study by Monro et al \[3\], the authors reported an increase in appearance anxiety among female college students that resulted from viewing advertisements of idealized images. Wilson et al \[4\] reported that non-athlete college students reported more stress than their athlete counterparts in the areas of satisfaction with physical appearance, decisions about education, social conflicts regarding smoking, and financial burdens. Lastly, there are unique experiences of college females that may promote dieting, including fear of gaining weight, an increased sense of independence that may promote experimenting with dieting (food restriction, use of supplements and fad diets), and changes to daily schedule that affect eating and exercise habits ("pulling all-nighters" to study for exams or complete course projects).

Emergence of dieting among girls is most prevalent at 13 and 14 years of age, and dieting remains prevalent throughout adulthood \[5\]. Dieting has been reported among normal and underweight individuals, in addition to those who are overweight \[6\]. Despite the finding that dieting to control weight is oftentimes ineffective, dieting remains popular among females \[7\]. When compared to college males, college females are more likely to actively diet, place high importance on appearance and the benefits of maintaining an ideal weight, and engage in unsafe dieting \[8\].

Given our culture, which suffers from a pathological emphasis on weight as a measure of a woman's worth, and given the continuing epidemic in our society of disordered eating, there are significant problems stemming from the idea of the "Freshman 15". The first problem is the assumption that excessive weight gain is inevitable during the college years. Second, for the many females already struggling with disordered eating, the construct of the "Freshman 15" becomes a terrifying probability that exacerbates an already difficult problem. For example, Patton \[9\] reported that girls with a history of severe dieting were 18 times more likely to develop eating disorders than girls with no history of dieting, and girls with a history of moderate dieting were five times more likely to develop eating disorders than girls who did not diet. Third, even for the college females who may not have significant issues with food or weight, the idea of the "Freshman 15" can play a significant role in solidifying the oppressive idea that a female must be thin in order to qualify for success and happiness in our society. Such a mentality can contribute significantly to the development of disordered eating throughout college.

At the core of what makes the "Freshman 15" concept so problematic is the reduction of female students to a single dimension: body type. Given the myriad life issues that are relevant to women at the onset of the college years, to have a concept referring to the single superficial issue of body weight be so widespread and so influential is indicative of a larger cultural pathology. What is needed is a more aggressive message to the culture that contradicts this reductionistic message.

In this regard, there remain gaps in the research related to dieting among college females. Namely, do normal weight individuals diet differently from those who are overweight or obese, and are there dieting practices used by females that can be adapted to promote a healthy body weight?
Because it is well recognized that females diet, this study sought to determine the dieting practices used among normal weight, overweight, and obese college females (that is, whether they diet differently) and to identify dieting practices that could be pursued to help these females more appropriately achieve and maintain a healthy body weight.

Of concern to health educators is the suggestion that college females practice diet and health behaviors that contradict the 2005 Dietary Guidelines for Americans \[10\]. For example, physical exercise plays a role in weight management \[10,11\]. In an executive summary regarding treatment of overweight and obesity in adults, the National Heart, Lung, and Blood Institute, in cooperation with the National Institute of Diabetes and Digestive and Kidney Diseases, reports that an increase in physical activity is an important component of weight loss therapy and that sustained physical activity is helpful in the prevention of weight regain \[11\]. Importantly, the Dietary Guidelines recommend that adults engage in 60 to 90 minutes of daily moderate- to vigorous-intensity physical activity \[10\]. It is questionable whether college females adhere to the public health message of higher intensity and longer duration physical activity to promote a healthy body weight \[10,11\]. The relationship between smoking and weight concerns has also been explored. Potter et al \[12\], in a comprehensive review of smoking among adolescents, reported that dieting behaviors, disordered eating symptoms, and general weight concerns were related to smoking among females.

The purpose of this study was to determine differences in dieting practices, weight perceptions, and body composition of normal weight, overweight, and obese female college students. The research questions under investigation were to: first, determine body composition, weight perception, and physical activity differences among normal weight, overweight, and obese college females; second, identify perceived sources of pressure to be a certain weight among these groups; third, identify weight loss behaviors ever used to consciously lose or control weight among groups; and fourth, identify the most prevalent weight control products and commercial diets used by female college students.

# Methods

## Design and sample

The study used a quasi-experimental design. The convenience sample comprised students from five introductory nutrition courses, taught by two instructors, who were invited to participate during the spring semester of 2005. The total sample recruited included 230 females, which represented 51% of all females who were enrolled in introductory nutrition courses in spring 2005. Overall, 208 female participants completed the study, a 90% participation rate for females in the nutrition courses surveyed. The final sample size was 185, as 23 were excluded for the following reasons: age greater than 24 years (n = 14), BMI less than 18.5 kg/m^2^ (n = 7), and reported pregnancy (n = 2).

Course enrollment ranged from 49 to 100 students. Students were informed about the study by a scripted description highlighting the study protocol (read by the instructor at the beginning of a class period) and by an electronic course announcement posted on Blackboard. For the purpose of this study, Blackboard is defined as a web-based education platform that aids course delivery; students are accustomed to checking Blackboard frequently for course information. Extra credit was available to students who completed the study.
Males, females greater than 24 years of age, and females reporting current pregnancy were excluded from analysis.

## Data collection and research instruments

Data collection occurred on two occasions. During the first meeting, students completed a survey regarding dieting practices, weight perceptions, and level of physical activity participation. During the second meeting, body composition assessments were completed. The university-affiliated institutional review board approved the project (University and Medical Center Institutional Review Board number 04–0487), and the project was carried out in compliance with the Helsinki Declaration. Signed informed consent was obtained from all participants.

Prior to survey administration, one of two research assistants met with students during a class period, using a script to describe the study and provide instruction for completing the survey. The key points of the script were to provide uniform instruction for completing the study survey, encourage participants to answer all questions completely and truthfully, and remind participants of the body composition assessment schedule. The survey took approximately 15 minutes to complete.

The survey included a dieting practices questionnaire, which assessed demographics, dieting practices, weight perceptions, and perceived sources of pressure to be a certain weight, and a 30-day physical activity recall. The dieting practices questionnaire was developed by a Registered Dietitian and reviewed for content validity by two Registered Dietitians who practice in eating disorder treatment and nutrition counseling of college students. The questionnaire was adapted from one developed by Calderon et al \[13\] that investigated dieting practices among high school students. As a measure of physical activity, a 30-day self-report measure developed by Jackson et al \[14\] was used. To complete this measure, participants rate how active they have been over the past 30 days on a scale ranging from zero to seven. In general, scores of zero and one indicate that the person does not participate in regular activity. Participants who score two or three participate in some moderate activity, whereas those who score four or above participate in some vigorous exercise. Baumgartner et al \[15\] reported that values of four or above generally indicate that a person participates in physical activity on a regular basis. Regular activity participation, particularly vigorous exercise, often results in increased aerobic capacity. In support of the scale's validity, researchers have found that self-reported physical activity was a significant predictor of VO~2~ max \[15\].

To pilot test the survey, a small sample of 12 undergraduate female students completed the questionnaires. Minor syntax and formatting modifications were made to the dieting practices questionnaire based on their responses.

On the second occasion, participants reported to the lab for anthropometric measurements. Females with last names beginning A to L reported after completing the second exam in their respective nutrition course; females with last names beginning M to Z reported after the third. The lab was organized into two measurement stations that were separated by privacy screens. Weight and height were measured at the first station; circumference (waist and hip) and skinfold thickness (triceps, biceps, and thigh) measurements were obtained at the second station.
Four research assistants (two for each station) were trained by a single anthropometrist, and the same research assistant took all measurements at a given station on each participant. Weight was measured to the nearest 0.1 kg (Tanita body composition analyzer, Arlington, IL), height (Seca portable height stadiometer, Leicester, England) and circumferences (non-flexible tape measure) to the nearest 0.1 cm, and skinfolds to the nearest 0.1 mm (Harpenden skinfold caliper, Vital Signs model 68875, Country Technology, Inc., Gays Mills, WI), using American College of Sports Medicine \[16\] procedures. Shoes, and as much clothing as was socially acceptable to the participant, were removed before body composition measurements. Participants were classified according to body mass index (BMI); those with BMI 18.5 to 24.9 kg/m^2^ were classified as normal weight, 25 to 29.9 kg/m^2^ as overweight, and 30 kg/m^2^ or greater as obese \[17\].

## Data analysis

Analyses were performed using JMP IN® software \[18\]. Descriptive statistics included means, standard deviations, and frequencies. Subsequent data analysis involved Pearson *χ*^2^ tests and one-way analysis of variance, with pairwise comparisons of significantly different means made using the Tukey-Kramer honestly significant difference test. A significance level of .05 was used for all statistical analyses.
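To make this analysis pipeline concrete, the following sketch reproduces the steps named above (BMI-based group classification per reference \[17\], a Pearson *χ*^2^ test on a contingency table of "yes"/"no" responses, and one-way analysis of variance followed by Tukey-Kramer pairwise comparisons) in Python. This is purely illustrative: the study itself used JMP IN \[18\]; scipy and statsmodels are substituted here, and all counts and group parameters below are invented.

```python
# Illustrative sketch only: hypothetical data; the study itself used JMP IN.
import numpy as np
from scipy.stats import chi2_contingency, f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

def classify_bmi(weight_kg: float, height_m: float) -> str:
    """Classify weight status from BMI using the cut-offs in reference [17]."""
    bmi = weight_kg / height_m ** 2
    if bmi < 18.5:
        return "underweight"   # excluded from this study's analyses
    if bmi < 25.0:
        return "normal weight"
    if bmi < 30.0:
        return "overweight"
    return "obese"

# Pearson chi-square on a 2 x 3 table of "yes"/"no" counts for one dieting
# behavior across the three weight groups (counts are made up).
counts = np.array([[50, 20, 17],   # "yes": normal weight, overweight, obese
                   [63, 15, 4]])   # "no"
chi2, p, dof, _ = chi2_contingency(counts)
print(f"chi2({dof}) = {chi2:.1f}, p = {p:.2f}")

# One-way ANOVA on a continuous outcome (e.g., perceived healthy weight),
# followed by Tukey HSD for all pairwise group comparisons; with unequal
# group sizes, pairwise_tukeyhsd applies the Tukey-Kramer adjustment.
rng = np.random.default_rng(0)
normal = rng.normal(123, 10, 113)
over = rng.normal(138, 17, 35)
obese = rng.normal(164, 26, 21)
F, p_anova = f_oneway(normal, over, obese)
if p_anova < 0.05:
    values = np.concatenate([normal, over, obese])
    groups = ["normal"] * 113 + ["overweight"] * 35 + ["obese"] * 21
    print(pairwise_tukeyhsd(values, groups, alpha=0.05))
```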
# Results

Overall, 208 female participants completed the study, a 90% participation rate for females in the nutrition courses surveyed. However, 23 were excluded for the following reasons: age greater than 24 years (n = 14), BMI less than 18.5 kg/m^2^ (n = 7), and reported pregnancy (n = 2). BMI less than 18.5 kg/m^2^ is classified by the Centers for Disease Control and Prevention \[17\] as an underweight condition. The final sample size was 185. Mean age of participants was 19.7 years (SD = 1.4). A majority (78%) of participants were White, 18% were Black, and 4% were American Indian, Asian, or Hispanic. Participants were predominantly freshmen and sophomores (70%), as expected because they were recruited from introductory nutrition courses.

Regarding question one, body composition, weight perceptions, and physical activity of the normal weight, overweight, and obese groups are reported in Table 1. Despite significant differences among groups for weight, F(2, 166) = 171, p \< .01, BMI, F(2, 166) = 272, p \< .01, body fat, F(2, 166) = 96, p \< .01, and waist-to-hip ratio, F(2, 166) = 53, p \< .01, mean physical activity was similar among groups, F(2, 165) = 0.5, p = .61. Significant differences were found among groups for perceived healthy weight, F(2, 163) = 76, p \< .01, and attractive weight, F(2, 160) = 74, p \< .01, compared to current weight. Mean perceived healthy weight was 5% (normal weight), 13% (overweight), and 23% (obese) lower than current weight. Mean perceived attractive weight followed the same trend, and was 6%, 15%, and 26% lower than current weight for the normal weight, overweight, and obese groups, respectively. All groups perceived natural weight to be greater than current weight, meaning that if no attempt were made to control weight then weight would be 2% to 6% greater than current weight. The majority (80%) of participants reported using physical activity to control their weight. However, 32% did not participate in regularly programmed recreation, sport, or heavy physical activity, and only 19% of participants (21% of the normal weight, 21% of the overweight, and 5% of the obese group) reported spending over 3 hours per week in vigorous aerobic activity.

Mean body composition, weight perceptions, and physical activity among normal weight, overweight, and obese female college students.

| Variable | Normal weight (n = 113), M ± SD | Overweight (n = 35), M ± SD | Obese (n = 21), M ± SD |
|----|----|----|----|
| Body composition | | | |
| Weight (pounds) | 130^a^ ± 13 | 160^b^ ± 18 | 213^c^ ± 39 |
| BMI (kg/m^2^) | 21.9^a^ ± 1.8 | 27.3^b^ ± 1.3 | 35.3^c^ ± 5.7 |
| Body fat (%) | 28^a^ ± 4 | 33^b^ ± 4 | 38^c^ ± 3 |
| Waist-to-hip ratio | 0.72^a^ ± 0.01 | 0.76^b^ ± 0.05 | 0.83^c^ ± 0.07 |
| Weight perceptions | | | |
| Perceived healthy weight (pounds) | 123^a^ ± 10 | 138^b^ ± 17 | 164^c^ ± 26 |
| Perceived attractive weight (pounds) | 122^a^ ± 11 | 136^b^ ± 17 | 158^c^ ± 29 |
| Perceived natural weight (pounds) | 138^a^ ± 18 | 169^b^ ± 25 | 216^c^ ± 42 |
| Perceived healthy weight (% of current) | 95^a^ ± 7 | 87^b^ ± 6 | 77^c^ ± 6 |
| Perceived attractive weight (% of current) | 94^a^ ± 7 | 85^b^ ± 6 | 74^c^ ± 8 |
| Perceived natural weight (% of current) | 106 ± 9 | 106 ± 10 | 102 ± 2 |
| Physical activity | 4.1 ± 2.5 | 3.8 ± 2.5 | 3.5 ± 2.5 |

Note: Means in the same row that do not share superscripts differ at *p* \< .01 by the Tukey-Kramer honestly significant difference test. The higher the physical activity score, the greater the intensity and amount of reported exercise over the past 30 days, using a scoring system of 0 (do not participate regularly in planned exercise) to 7 (spend over 3 hours weekly in heavy exercise).

Concerning question two, 83% of participants reported ever consciously trying to lose or control their weight, including 80% of normal weight, 91% of overweight, and 86% of obese participants. Mean age when dieting was initiated was 15.7 years (SD = 2.3), which was similar among groups, F(2, 137) = 1, p = .30. Fifty-eight percent of participants reported pressure to be a certain weight; the primary sources were self (54%), media (37%), and friends (32%). Frequency distributions of "yes" responses to these sources were similar among groups (see Table 2).
Other sources of pressure included significant other (reported as a source of pressure by 13% of participants), colleagues at work (5%), other athletes (3%), coach (2%), and family members (2%).

Perceived sources of pressure to be a certain weight among normal weight, overweight, and obese female college students.

| Variable | Frequency "yes" response (%) | *χ*^2^ | p (weight status) |
|------------------|------------------------------|--------|-------------------|
| Self | | 2.5 | .29 |
| Normal weight^a^ | 50 | | |
| Overweight^b^ | 62 | | |
| Obese^c^ | 43 | | |
| Media | | 0.9 | .63 |
| Normal weight^a^ | 39 | | |
| Overweight^b^ | 34 | | |
| Obese^c^ | 29 | | |
| Friends | | 3.4 | .18 |
| Normal weight^a^ | 31 | | |
| Overweight^b^ | 37 | | |
| Obese^c^ | 14 | | |

Note: *χ*^2^ (2, *N* = 166).

^a^n = 113; ^b^n = 35; ^c^n = 21.

Referring to question three, 15 dieting behaviors were assessed to determine whether use differed among normal weight, overweight, and obese participants. Nine of the 15 behaviors had a large enough number of "yes" responses to analyze frequency distribution differences among groups; these are reported in Table 3. The five most common behaviors used by all participants were exercising (80%), eating or drinking low fat or fat free versions of foods/drinks (59%), consciously eating less than you want (51%), eating or drinking sugar free versions of foods/drinks (43%), and counting calories (40%). Eating less than you want and use of artificial sweeteners were the only behaviors for which frequency of use was significantly different among groups (see Table 3). The other behaviors assessed, which had a low reported frequency of use, included not eating foods with a high glycemic index (used by 4% of normal weight, 6% of overweight, and 0% of obese participants), smoking cigarettes (8%, 14%, and 5%), using laxatives after eating (2%, 6%, and 5%), vomiting after eating (4%, 6%, and 5%), skipping lunch (10%, 9%, and 10%), and skipping dinner (4%, 9%, and 0%).
Explicit maladaptive weight loss practices included smoking cigarettes (used by 9% of all participants), vomiting after eating (5%), and using laxatives after eating (3%).

Weight loss behaviors ever used to consciously lose or control weight among normal weight, overweight, and obese female college students.

| Variable | Frequency "yes" response (%) | *χ*^2^ | p (weight status) |
|----|----|----|----|
| Consciously eat less than you want | | 10.1 | \< .01 |
| Normal weight^a^ | 44 | | |
| Overweight^b^ | 57 | | |
| Obese^c^ | 81 | | |
| Count grams of fat | | 1.0 | .60 |
| Normal weight^a^ | 23 | | |
| Overweight^b^ | 31 | | |
| Obese^c^ | 24 | | |
| Count net carbs | | 2.3 | .32 |
| Normal weight^a^ | 18 | | |
| Overweight^b^ | 17 | | |
| Obese^c^ | 5 | | |
| Count calories | | 0.2 | .89 |
| Normal weight^a^ | 39 | | |
| Overweight^b^ | 43 | | |
| Obese^c^ | 43 | | |
| Eat or drink low fat or fat free versions of foods/drinks | | 2.9 | .24 |
| Normal weight^a^ | 57 | | |
| Overweight^b^ | 71 | | |
| Obese^c^ | 52 | | |
| Eat or drink sugar free versions of foods/drinks | | 2.2 | .33 |
| Normal weight^a^ | 43 | | |
| Overweight^b^ | 49 | | |
| Obese^c^ | 29 | | |
| Exercise | | 2.2 | .33 |
| Normal weight^a^ | 77 | | |
| Overweight^b^ | 89 | | |
| Obese^c^ | 81 | | |
| Use artificial sweeteners | | 6.3 | .04 |
| Normal weight^a^ | 31 | | |
| Overweight^b^ | 31 | | |
| Obese^c^ | 5 | | |
| Skip breakfast | | 4.9 | .09 |
| Normal weight^a^ | 27 | | |
| Overweight^b^ | 40 | | |
| Obese^c^ | 48 | | |

Note: *χ*^2^ (2, N = 166).

^a^n = 113; ^b^n = 35; ^c^n = 21.

In terms of question four, over-the-counter meal replacement drinks were used by 35% of participants, supplements (Hydroxycut, Stacker II, Xenadrine, CortiSlim, Trim Spa, Hot Rox) by 26%, meal replacement bars by 18%, dieter's tea, green tea, and green tea pills by 11%, and chromium picolinate by 3%. Physician-prescribed weight loss pills were used by 3% of participants. In examining commercial diets, the most popular were Atkins or South Beach (20%) and Weight Watchers® (11%). Subway, Sugar Busters, and The Zone diet combined had been used by 7% of participants.

# Discussion

The purpose of this study was to investigate dieting practices, weight perceptions, and body composition among normal weight, overweight, and obese college females. Findings from this study support the general belief that dieting by college females is a common weight management strategy, irrespective of weight status. Abraham and O'Dea \[19\] reported that females as young as 12 years of age had tried to lose weight, including 44% who dieted using food restriction and 78% who exercised to lose weight. Findings from the present study indicate that females continue to diet through college. Calderon et al \[13\] reported a greater prevalence of dieting among overweight (67% prevalence) versus at risk for overweight (25%) high school females, which contrasts with the findings of the present study. We found 83% of college females reported ever consciously trying to lose or control their weight, with similar prevalence among normal weight, overweight, and obese participants. The discrepancy between findings could be due to how dieting is defined by researchers and interpreted by study participants. Dieting strategies have become so mainstream in our society that one may not be aware, unless it is explicitly pointed out, of behaviors being used to consciously lose or control one's weight.
For example, someone may have been drinking diet sodas to control or lose weight for so many years that they do not recognize this behavior as a weight loss strategy unless it is pointed out to them. We classified dieting as any method of food restriction, use of supplements, or behaviors used to control one's weight. Although our questionnaire was adapted from the survey of Calderon and colleagues \[13\], we expanded upon behaviors and methods marketed for weight control that were popular at the time our study was conducted, including counting net carbs, eating/drinking sugar free versions of foods/drinks, avoiding foods with a high glycemic index, and using artificial sweeteners. Furthermore, a limitation of the study is its cross-sectional design, which does not allow us to establish whether a causal relationship exists between dieting and weight control. Longitudinal studies would be the best way to determine whether dieting frequency among females and their choice of dieting methods change throughout the lifespan, and whether a causal relationship exists between dieting and weight control.

We found that females perceive healthy and attractive weights to be lower than current weight, and that media influence contributes pressure to be a certain weight. Klesges et al \[8\] reported that college females are more likely than their male counterparts to engage in healthy and unhealthy physical activity and food restriction behaviors and to place higher importance on the appearance benefits of maintaining an ideal body weight. Of great concern are the findings of Bacon et al \[20\], which indicated a significant negative correlation between bone mineral content and the number of times on a weight loss diet among obese adult females. Additionally, Gingras et al \[21\] reported that adult female chronic dieters had low body satisfaction and suggested that it is important to address the body image dissatisfaction associated with chronic dieting to improve health, regardless of the dieter's body size. Clearly, chronic dieting can have physical and emotional ramifications.

Most importantly, many of the weight loss practices reported by participants of the current study could, if used appropriately, promote healthy weight loss and be used long term to maintain a healthy weight. For example, the majority (80%) of participants reported using physical activity to control their weight. However, 32% did not participate in regularly programmed recreation, sport, or heavy physical activity, which is similar to the Behavioral Risk Factor Surveillance System finding that 15% to 34% of adults reported no leisure-time physical activity \[22\]. Holcomb et al \[23\] reported that body mass index, body fat, and waist-to-hip ratio decreased significantly as physical activity increased among premenopausal females. The researchers suggested that the ability to increase daily physical activity to minimize fat accumulation could be a strong incentive to become more physically active. Strategies to improve nutrient intake and physical activity to promote health and weight control among females are warranted.

The current study makes an important contribution to the literature by narrowing the gap in research related to dieting among females. Namely, do normal weight individuals diet differently from those who are overweight or obese, and are there dieting practices used by females that can be adapted to promote a healthy body weight?
Outcomes of interest to the health educator include the following: first, dieting practices are prevalent among female college students, regardless of body weight; second, a number of the dieting practices used could promote effective weight loss and healthy weight maintenance if employed appropriately; third, although many dieters stated they used physical activity for weight loss, the majority who used this method were not meeting physical activity goals for weight loss; and fourth, the most prevalent explicit maladaptive weight loss behavior identified was smoking cigarettes and the most unhealthy dieting practice was skipping breakfast.

# Conclusion

These findings suggest that health educators should promote education and intervention strategies for females that encourage appropriate weight control practices and dispel unhealthy and ineffective weight loss myths (for example, that using laxatives or skipping breakfast are effective weight control methods). Implications from this study are numerous. Female college students, regardless of weight status, would benefit from open discussions with health educators to identify the healthy and unhealthy dieting practices they use. Throughout these discussions, females could identify healthy dieting practices that could be expanded upon to promote a healthy weight status, and recognize the health consequences associated with using unhealthy dieting practices. Because this study involved only female college students at a single state university, the implications discussed here require replication of the results in other populations. Beyond replication with other participants, researchers can consider how to effectively target specific dieting practices among females to encourage healthy eating and exercise and thereby promote and maintain a healthy weight. Future research should identify which dieting practices dieters are most content with and believe are most successful for short- and long-term weight management.

# Competing interests

The author(s) declare that they have no competing interests.

# Authors' contributions

BMM conceived of the study, participated in its design, performed the statistical analysis, and drafted the manuscript. TDR participated in the design of the study and drafted the manuscript. VGA drafted the manuscript. JLS and MBD participated in coordination and data collection and helped to draft the manuscript. All authors read and approved the final manuscript.

### Acknowledgements

This research was supported in part by undergraduate research assistantships, East Carolina University Honors Program. We thank Emily R. Bennett and Lauren B. Manning for their assistance with data acquisition.

author: Jacek R. Wiśniewski
date: 2012-10-26
institute: Department of Proteomics and Signal Transduction, Max-Planck-Institute for Biochemistry, Am Klopferspitz 18, D-82152 Martinsried, Germany; Email: ; Tel.: +49-89-85-78-2205; Fax: +49-89-85-78-2219
title: Proteomes: A New Proteomic Journal

In the early years of proteomics, mass spectrometry served only as a technique in protein chemistry, facilitating the characterization of purified proteins and the mapping of their posttranslational modifications (PTMs). A bit later, this technique almost completely replaced Edman degradation and amino acid analysis. The continuous development of mass spectrometry techniques created a huge analytical potential, allowing the study of nearly complete proteomes in single experiments. This evolution distanced proteomics from protein chemistry and placed it in a novel position. Its capability to identify and quantify thousands of proteins and their modifications in parallel, with minute sample requirements, is one of the most fascinating technological advances in biology today.

There are several major areas of proteomics, such as mapping proteomes, identifying posttranslational modifications and discovering interactions. Notably, proteomics has not only the capability to unveil protein composition but is also suitable for studying protein levels and their changes on a system-wide scale. Currently, we can map more than 10,000 proteins in a single cell type and estimate the concentration of each protein. Even though some estimates are rough, they place proteomics closer to biochemistry and are helpful in better understanding a variety of phenomena in physiology, growth control and signaling.

Identification of posttranslational modifications is one of the most attractive areas of proteomics. The expensive and laborious tasks that were necessary for mapping PTMs using classical biochemistry, including labeling with radioactive isotopes, purification of peptides from isolated proteins, and sequencing, have been supplanted by mass spectrometry-based proteomics. Thanks to these developments, analysis of PTMs has become very popular, in particular because large numbers of modification sites can be identified in a relatively simple way by analyzing enriched fractions or complete cell lysates. In-depth proteomic studies allow mapping of thousands of phosphorylation, acetylation, or glycosylation sites occurring in a single population of cells or in a tissue, and they position proteomics in a key role for studying modifications in a system-wide manner. Supplementing the identification with protein abundance and site-occupancy information will allow validation of the thermodynamic significance of individual modifications.

Analysis of clinical samples is another popular area of proteomics. Its landscape is shared between the identification of novel biomarkers and the discovery of potential drug targets. The major challenges in this area are the limited availability of quality samples and the abundance of housekeeping proteins, such as albumin in plasma or titin in muscle tissue. The presence of such proteins impedes identification of the less abundant molecules that are the focus of analyses.
Development of approaches to bypass these difficulties and to facilitate the discovery process is the aim of many researchers.

Although proteomic technologies have frequently been regarded as something 'under construction', they are now doubtless ready to provide novel insights into biological processes. We launch *Proteomes* with the certainty that we can attract high-quality manuscripts covering various fields of proteomics. We expect that *Proteomes* will publish articles in which proteomic technologies play an essential role in solving questions and providing new insights into biology, biochemistry, and molecular biology.
abstract: John Westbrook is remembered.
author: Christine Zardecki; Stephen K. Burley
date: 2021-11-01
references:
title: John D. Westbrook Jr (1957–2021)

John D. Westbrook Jr (1957–2021), Research Professor at Rutgers University and Data & Software Architect Lead for the RCSB PDB, passed away on 18 October 2021.

He was incredibly beloved and respected by his colleagues at Rutgers and worldwide, known for his dry wit and endless enthusiasm for thinking about all aspects of data and data management.

John had a long and highly successful career developing ontologies, tools, and infrastructure for data acquisition, validation, standardization, and mining in the structural biology and life science domains. His work established the PDBx/mmCIF data dictionary and format as the foundation of the modern Protein Data Bank (PDB) archive (wwPDB.org).

More than twenty-five years ago, while still a graduate student, John recognized the importance of a well-defined data model for ensuring high-quality and reliable structural information for data users. He was the principal architect of the mmCIF data representation for biological macromolecular data. Data are presented in either key-value or tabular form based on a simple, context-free grammar (without column width constraints). All relationships between common data items (*e.g.* atom and residue identifiers) are explicitly documented within the PDBx Exchange Dictionary (). The use of the PDBx/mmCIF format enables software applications to evaluate and validate referential integrity within any PDB entry. A key strength of the mmCIF technology is the extensibility afforded by its rich collection of software-accessible metadata.

The current PDBx/mmCIF dictionary contains more than 6200 definitions relating to experiments involved in macromolecular structure determination and descriptions of the structures themselves. The first implementation of this schema was used for the Nucleic Acid Database, a data resource of nucleic acid-containing X-ray crystallographic structures. Today, this dictionary underpins all data management of the PDB. Since 2014, it has served as the Master Format for the PDB archive.
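As a minimal, hypothetical illustration of the two presentation styles described above (key-value pairs and tabular `loop_` blocks), the sketch below embeds a tiny mmCIF-like fragment and naively extracts its top-level key-value items. The item names follow the real `_category.item` convention, but the values are invented, and the parser deliberately ignores quoting, multi-line values and loop contents.

```python
# A tiny, invented PDBx/mmCIF-style fragment: key-value items, then a loop_.
FRAGMENT = """\
data_example
_entry.id        EXAMPLE
_cell.length_a   61.50
_cell.length_b   61.50
loop_
_atom_site.id
_atom_site.type_symbol
_atom_site.Cartn_x
1 N 10.215
2 C 11.042
"""

def parse_key_values(text: str) -> dict:
    """Naively collect top-level key-value items (skips loop_ blocks)."""
    items, in_loop = {}, False
    for line in text.splitlines():
        if line.startswith("loop_"):
            in_loop = True            # subsequent _names are column headers
        elif line.startswith("data_"):
            in_loop = False           # a new data block resets the state
        elif line.startswith("_") and not in_loop:
            key, _, value = line.partition(" ")
            items[key] = value.strip()
    return items

print(parse_key_values(FRAGMENT))
# {'_entry.id': 'EXAMPLE', '_cell.length_a': '61.50', '_cell.length_b': '61.50'}
```

Because every value is addressed by an explicit `_category.item` name rather than by column position, software can validate entries against the dictionary's metadata, which is the referential-integrity property described above.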
The dictionary also forms the basis of the Chemical Component Dictionary (), which maintains and distributes small molecule chemical reference data in the PDB.

In 2011, the Worldwide Protein Data Bank (wwPDB) PDBx/mmCIF Working Group was established to enable the direct use of PDBx/mmCIF format files within major macromolecular crystallography software tools and to provide recommendations on the format extensions required for deposition of larger macromolecular structures to the PDB. This was a key step in the evolution of the PDB archive, which enabled studies of macromolecular machines, such as the ribosome, as single PDB structures (instead of split entries with atomic coordinates distributed among different entry files). In 2019, mandatory submission of PDBx/mmCIF format files for deposition was announced (Adams *et al.*, 2019).

To ensure the success of the PDBx/mmCIF dictionary and format, John worked with a wide range of community experts to extend the framework to encompass descriptions of macromolecular X-ray crystallographic experiments, 3D cryo-electron microscopy experiments, NMR spectroscopy experiments, protein and nucleic acid structural features, diffraction image data, and protein production and crystallization protocols. These efforts have recently focused on developing compatible data representations for X-ray free-electron laser (XFEL) methods and integrative or hybrid methods (I/HM). I/HM structures, currently stored in the prototype PDB-Dev archive (), presented new challenges for data exchange among rapidly evolving and heterogeneous experimental repositories. Proper management of I/HM structures in PDB-Dev also required extension of the PDBx/mmCIF data dictionary to include coarse-grained or multiscale models, which will be essential for studying macromolecular structures *in situ* using cryo-electron tomography and other bioimaging methods.

John contributed broadly to community data standards enabling interoperation and data integration within the biology and structural biology domains. His efforts included (i) describing the increasing molecular complexity of macromolecular structure data, (ii) representing new experimental methodologies, including I/HM techniques, and (iii) expanding the biological context required to facilitate broader integration with a spectrum of biomedical resources. John's work has been central to connecting crystallographic and related structural data for biological macromolecules to key resources across scientific disciplines. His efforts have been described in more than 120 peer-reviewed publications, one of which has been cited more than 21 000 times according to the Web of Science (Berman *et al.*, 2000). Eight of his most influential published papers have appeared in the *International Tables for Crystallography*.

John also gave yeoman service to the crystallographic community over many years and was recognized with the inaugural Biocuration Career Award from the International Society for Biocuration in 2016.

For the International Union of Crystallography, John served on the Commission for the Maintenance of the CIF Standard (COMCIFS), the Diffraction Data Deposition Working Group (DDDWG), and the Committee on Data (CommDat). He also served as a Co-editor for *Acta Crystallographica Section F*.

John was a long-standing member of the American Crystallographic Association and served on its Data, Standards & Computing Committee.
He also served on the Metadata Interest Group of the Research Data Alliance.

John is survived by his wife, Bonnie J. Wagner-Westbrook, Ed.D., his devoted mother-in-law, Joan N. Wagner of Clinton Twp., NJ, and many cousins, including Chandler Turner (of Portsmouth, VA), Ann (Turner) Heyes (of Tasmania, Australia), and Louise (Turner) Brown (of Oakland, CA).

# References
abstract: # Background

Therapeutic targets have been defined for diseases like diabetes, hypertension or rheumatoid arthritis, and adhering to them has improved outcomes. Such targets are just emerging for spondyloarthritis (SpA).

# Objective

To define the treatment target for SpA, including ankylosing spondylitis and psoriatic arthritis (PsA), and to develop recommendations for achieving the target, including a treat-to-target management strategy.

# Methods

Based on the results of a systematic literature review and expert opinion, a task force of expert physicians and patients developed recommendations which were broadly discussed and voted upon in a Delphi-like process. Level of evidence, grade and strength of the recommendations were derived by respective means. The commonalities between axial SpA, peripheral SpA and PsA were discussed in detail.

# Results

Although the literature review did not reveal trials comparing a treat-to-target approach with another or no strategy, it provided indirect evidence regarding an optimised approach to therapy that facilitated the development of recommendations. The group agreed on 5 overarching principles and 11 recommendations; 9 of these recommendations related commonly to the whole spectrum of SpA and PsA, and only 2 were designed separately for axial SpA, peripheral SpA and PsA. The main treatment target, which should be based on a shared decision with the patient, was defined as remission, with the alternative target of low disease activity. Follow-up examinations at regular intervals that depend on the patient's status should safeguard the evolution of disease activity towards the targeted goal. Additional recommendations relate to extra-articular and extramusculoskeletal aspects and other important factors, such as comorbidity. While the level of evidence was generally quite low, the mean strength of recommendation was 9–10 (10: maximum agreement) for all recommendations. A research agenda was formulated.

# Conclusions

The task force defined the treatment target as remission or, alternatively, low disease activity, being aware that the evidence base is not strong and needs to be expanded by future research.
These recommendations can inform the various stakeholders about expert opinion that aims at reaching optimal outcomes in SpA.
author: Josef S Smolen; Jürgen Braun; Maxime Dougados; Paul Emery; Oliver FitzGerald; Philip Helliwell; Arthur Kavanaugh; Tore K Kvien; Robert Landewé; Thomas Luger; Philip Mease; Ignazio Olivieri; John Reveille; Christopher Ritchlin; Martin Rudwaleit; Monika Schoels; Joachim Sieper; Martinus de Wit; Xenofon Baraliakos; Neil Betteridge; Ruben Burgos-Vargas; Eduardo Collantes-Estevez; Atul Deodhar; Dirk Elewaut; Laure Gossec; Merryn Jongkees; Mara Maccarone; Kurt Redlich; Filip van den Bosch; James Cheng-Chung Wei; Kevin Winthrop; Désirée van der Heijde[^1]Correspondence to Professor Josef S Smolen, Division of Rheumatology, Department of Medicine 3, Medical University of Vienna, Waehringer Guertel 18-20, Vienna A-1090, Austria
date: 2014-01
institute: 1Division of Rheumatology, Department of Medicine 3, Medical University of Vienna, Vienna, Austria; 22nd Department of Medicine, Hietzing Hospital Vienna, Vienna, Austria; 3Rheumazentrum Ruhrgebiet, Herne, Germany; 4Department of Rheumatology B, Cochin Hospital, René Descartes University, Paris, France; 5Division of Rheumatic and Musculoskeletal Disease, Leeds Institute of Molecular Medicine, University of Leeds, Chapel Allerton Hospital, Leeds, UK; 6Department of Rheumatology, St. Vincent's University Hospital, Dublin, Ireland; 7Division of Rheumatology, Allergy, Immunology, University of California, San Diego, California, USA; 8Department of Rheumatology, Diakonhjemmet Hospital, Oslo, Norway; 9Academic Medical Center, University of Amsterdam, Amsterdam, The Netherlands; 10Atrium Medical Center, Heerlen, The Netherlands; 11Clinic and Polyclinic of Dermatology, University of Münster, Münster, Germany; 12Swedish Medical Center and University of Washington, Seattle, Washington, USA; 13Rheumatology Department of Lucania, San Carlo Hospital of Potenza and Madonna delle Grazie Hospital of Matera, Potenza, Italy; 14Division of Rheumatology, University of Texas Health Science Center at Houston, Houston, Texas, USA; 15Allergy, Immunology and Rheumatology Division, The Center for Musculoskeletal Medicine, University of Rochester Medical Center, Rochester, New York, USA; 16Endokrinologikum Berlin, Berlin, Germany; 17Medical Department I, Rheumatology, Charité Campus Benjamin Franklin, Berlin, Germany; 18EULAR standing committee of People with Arthritis/Rheumatism in Europe (PARE), Zurich, Switzerland; 19Rheumatology Department, Faculty of Medicine, Hospital General de México, Universidad Nacional Autónoma de México, Mexico City, Mexico; 20Department of Rheumatology, Reina Sofia University Hospital of Córdoba/IMBIC, Córdoba, Spain; 21Division of Arthritis and Rheumatic Diseases, Oregon Health & Science University, Portland, USA; 22Laboratory for Molecular Immunology and Inflammation, Department of Rheumatology, Ghent University Hospital, Ghent, Belgium; 23Department of Rheumatology, Pitié Salpêtrière Hospital, Pierre et Marie Curie University, Paris, France; 24Division of Allergy, Immunology and Rheumatology, Institute of Medicine, Chung Shan Medical University Hospital, Taichung, Taiwan; 25Department of Public Health and Preventive Medicine, Oregon Health & Science University, Portland, Oregon, USA; 26Department of Rheumatology, Leiden University Medical Center, Leiden, The Netherlands
references:
title: Treating spondyloarthritis, including ankylosing spondylitis and psoriatic arthritis, to target: recommendations of an international task force
The approaches to the diagnosis, therapy and follow-up of patients with ankylosing spondylitis (AS) and psoriatic arthritis (PsA) have undergone a number of paradigmatic changes over the last decade. In particular, the understanding of the disease spectrum of spondyloarthritis (SpA) has recently changed remarkably. In addition to AS, defined by prevalent radiographic structural changes in the sacroiliac joints, non-radiographic axial SpA (axSpA) has been defined based on the absence of such changes but the presence of sacroiliitis (as documented by MRI) and/or human leukocyte antigen B27. The term axSpA, therefore, includes radiographic axSpA (AS) and non-radiographic axSpA. On this basis, new classification criteria have been established by the Assessment of SpondyloArthritis international Society (ASAS),1 novel therapies have proven efficacious,2–6 MRI has been increasingly established as an imaging tool in SpA1 7 8 and new indices to assess disease activity have been developed.9–14 The novel approach to classification has also differentiated the two predominant manifestations of SpA, axial and/or peripheral, and their potential parallel occurrence.15 The basis for the new classification lies in the sharing of characteristic features of SpA, such as sacroiliitis, spondylitis and enthesitis, as well as common genetic markers and a positive family history. Furthermore, extramusculoskeletal manifestations, such as psoriasis in PsA, a preceding gastrointestinal or urogenital infection in the case of reactive arthritis (ReA), and chronic inflammatory bowel diseases (IBD) like Crohn's disease and ulcerative colitis, play a role in the definition of a clinical syndrome as belonging to the concept of SpA. For the classification of patients with PsA, the Classification Criteria for Psoriatic Arthritis (CASPAR) are well established.16 Since the presence of psoriasis plays a role in both criteria sets, the ASAS and the CASPAR criteria, there is some overlap between the two. There is no international agreement on whether and how they can or should be differentiated. Finally, to account for therapeutic developments, management recommendations have recently been presented.17–20

Despite all these advances, a variety of challenges exist in the management of patients with SpA,21–24 not least because a clear therapeutic target and strategies to reach such a target have not yet been optimally defined.

In many areas of medicine, such as diabetes care or cardiology, clear therapeutic targets are available.25–30 More recently, a treatment target has also been advocated for rheumatoid arthritis (RA), namely remission or low disease activity,31 32 a recommendation based on insights from various clinical trials as revealed by systematic literature reviews (SLRs).33 34 Much less information on the value of defining therapeutic targets is currently available for AS or PsA. Therefore, a task force was formed to discuss and develop a consensus on recommendations aimed at defining a treatment target for, and thus at improving the management of, axial and peripheral SpA in clinical practice.

# Methods

The consensus finding consisted of a three-step process.
In a first step, the first and last authors invited leading experts, selected on the basis of their citation frequency in the field and previous contributions to similar activities, to form a steering committee. This steering committee, which included rheumatologists experienced in the care of patients with, and/or clinical research in, axial and/or peripheral SpA (several of them department chairs and thus in managerial functions), a dermatologist experienced in psoriasis, and patients diagnosed with one of these diseases and/or experienced in consensus finding processes, met in March 2011 in Vienna to discuss unmet needs in the therapeutic management of, and the potential of using treatment targets in, AS and PsA. To this end, the debate focused on axial and peripheral SpA separately in two breakout groups with a subsequent common assessment. In the course of these discussions there was unanimous agreement that defining therapeutic targets and an appropriate strategic treatment approach would be valuable, but that evidence for its validity might be lacking. It was therefore decided to perform an SLR, and the respective PICO (Patient, Intervention, Control, Outcome) questions and search terms were formulated, in line with European League Against Rheumatism (EULAR) and Appraisal of Guidelines for Research and Evaluation recommendations.35 36 In the course of defining the scope of this activity, the target populations were also specified, namely health professionals involved in the care of, and patients affected by, axial and/or peripheral SpA. In addition, social security officials, hospital managers and policy makers at national and international levels were considered potential stakeholders in this activity.

At a subsequent meeting in November 2011 (Düsseldorf) comprising an expanded task force with increased international participation, the SLR was presented. The additional invitations were a consequence of the individuals' contributions to the field and of deliberations among members of the steering committee. The literature search had revealed that no strategic trials addressing a target-oriented, steered therapy had been published at that time, although some indirect evidence on optimal therapeutic approaches was available to inform the next stages of the process.37 A major focus of discussion at this meeting, as already at the steering committee meeting, was the question of whether diseases like AS, PsA, ReA and IBD arthritis should be seen as one entity or as different diseases. The respective decision would have a bearing on the consensus finding process, since it would determine whether one, two or more documents were to be developed.
The initial deliberations tended toward separating the individual diseases for several reasons: (1) despite many commonalities, some important clinical manifestations are distinct between these conditions, and certain health professionals (such as dermatologists and gastroenterologists) may not be sufficiently aware of the more unifying concept of SpA or its relevance when dealing with these conditions; (2) further, the existing distinction between PsA and AS is well known to and accepted by patients, and changes in terminology may cause confusion regarding the understanding of their 'new' diagnosis; (3) to date, clinical trials have been performed almost entirely in individual subentities (AS, PsA) rather than SpA, and even most recently trials in a highly specific novel subset, non-radiographic axSpA, have been performed38; (4) the current drug approval process by regulatory agencies is also related to the individual diseases rather than SpA; and (5) there is some overlap between the different subgroups, but there are also major distinctions; for example, PsA with symmetric polyarthritis as the predominant feature would not fit well into either the axial or the peripheral SpA group. Therefore, the provisional choice was to develop at least two documents, one for axSpA and one for PsA. The discussions took place in separate breakout sessions devoted to these topics and in a plenary session. At the plenary session, certain items were reformulated and reordered and two provisional sets of recommendations were developed, with decisions made using a modified Delphi technique.32 The group then realised how similar the individual statements in each of the two documents were, but left further decisions to the next stage of the process.

With these two documents prepared, and having in mind that peripheral SpA (such as ReA) had not yet been dealt with in detail, an even larger committee met in April 2012 (Amsterdam); its membership comprised the initial task force, expanded with a more international scope to also include experts from Latin America and Asia alongside the previous participants from Europe and North America. Again, the scope and background of this activity were discussed and the provisional recommendations presented. The issue of disease definition and the need to develop one, two or three documents were addressed. The committee separated into three breakout groups discussing axSpA, peripheral SpA and PsA. In the course of the breakout discussions and the plenary session, the initial recommendations were amended and votes were then cast. Importantly, when looking at the individual items, the participants felt that most of them were very similar, and a broad decision was then taken to develop a single document comprising overarching principles and items common to SpA in general, but within that common document to develop a few individualised items for axSpA, PsA and peripheral SpA.
Each statement, which had been formulated as a draft for voting in the course of the breakout sessions and by the whole task force, was subjected to voting as 'yes' (agreement with the wording) or 'no' (disagreement). Statements supported by ≥75% of votes were immediately accepted, while those with ≤25% were rejected outright. Others were subjected to further discussion and subsequent voting, for which ≥67% support or, in an eventual third round, a majority of ≥50% was needed.

After the face-to-face meeting, the statements were distributed to the committee members by email for final comments. Only suggestions improving the clarity of wording or addressing redundancies were considered; any changes to the meaning were not accepted.

Finally, the group voted anonymously by email on the level of agreement, that is, the strength of recommendation, with each of the derived bullet points (in the form ultimately agreed upon by the qualified majority of participants) using a 10-point numerical rating scale (1=do not agree at all, 10=agree completely).
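The multi-round voting rule just described can be summarised in a short sketch. This is a minimal illustration: the thresholds are taken from the text, while the function name and the 'revote' outcome label are assumptions made here for clarity.

```python
# Decision rule for one ballot in the task force's Delphi-like process.
def vote_outcome(round_number: int, pct_yes: float) -> str:
    if round_number == 1:
        if pct_yes >= 75:          # >=75% support: accepted immediately
            return "accepted"
        if pct_yes <= 25:          # <=25% support: rejected outright
            return "rejected"
        return "revote"            # further discussion, then a second round
    if round_number == 2:
        return "accepted" if pct_yes >= 67 else "revote"
    # Third round: a simple majority decides (rejection below 50% is implied).
    return "accepted" if pct_yes >= 50 else "rejected"

assert vote_outcome(1, 80) == "accepted"
assert vote_outcome(1, 40) == "revote"
assert vote_outcome(2, 70) == "accepted"
assert vote_outcome(3, 55) == "accepted"
```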
# Results

## The evidence base

The SLR, which is published separately,37 revealed that, in contrast to findings in RA,33 no randomised controlled clinical trial has evaluated a targeted therapeutic approach in comparison with routine therapy. However, several publications had employed therapeutic targets and respective time requirements as endpoints or before escalating therapy, although this often concerned the placebo arm of a study that was allowed to escape to, or was escalated to, active treatment. These comprised 14 studies in AS and 7 studies of PsA that were found suitable to inform the task force. Nevertheless, given the lack of studies evaluating target-steered versus non-steered treatment, the level of evidence for the developed recommendations is low and mainly based on expert consensus.

## The consensus

The individual statements receiving a positive vote by the majority of the expert committee members comprise 5 overarching principles and 11 recommendations. The overarching principles and 9 of the statements are recommended for SpA in general, whereas the last 2 statements have been individualised for axSpA, peripheral SpA and PsA. The recommendations are shown in table 1. They are discussed in detail below, and this detailed description should be regarded as part and parcel of the recommendations.

Recommendations to treat all forms of Spondyloarthritis to target

| | | LoE | GoR | SoR |
|:---|:---|:---|:---|:---|
| **Overarching principles** | | | | |
| A. | The treatment target must be based on a shared decision between patient and rheumatologist | 5 | D | 9.7 ± 0.8 |
| B. | SpA and PsA are often complex systemic diseases; as needed, the management of musculoskeletal and extra-articular manifestations should be coordinated between the rheumatologist and other specialists (such as dermatologist, gastroenterologist, ophthalmologist) | 5 | D | 9.5 ± 0.92 |
| C. | The primary goal of treating the patient with SpA and/or PsA is to maximise long-term health related quality of life and social participation through control of signs and symptoms, prevention of structural damage, normalisation or preservation of function, avoidance of toxicities and minimisation of comorbidities | 5\* | D | 9.6 ± 0.67 |
| D. | Abrogation of inflammation is presumably important to achieve these goals | 5\* | D | 9.1 ± 1.04 |
| E. | Treatment to target by measuring disease activity and adjusting therapy accordingly contributes to the optimisation of short term and/or long term outcomes | 5\* | D | 9.2 ± 1.11 |
| **Recommendations** | | | | |
| Common items for all forms of SpA | | | | |
| 1\. | A major treatment target should be clinical remission/inactive disease of musculoskeletal involvement (arthritis, dactylitis, enthesitis, axial disease), taking extra-articular manifestations into consideration | 5\* | D | 9.5 ± 0.77 |
| 2\. | The treatment target should be individualised according to the current clinical manifestations of the disease | 5 | D | 9.3 ± 1.03 |
| 3\. | Clinical remission/inactive disease is defined as the absence of clinical and laboratory evidence of significant inflammatory disease activity | 5 | D | 9.0 ± 1.41 |
| 4\. | Low/minimal disease activity may be an alternative treatment target | 5\* | D | 9.4 ± 0.91 |
| 5\. | Disease activity should be measured on the basis of clinical signs and symptoms, and acute phase reactants | 5\* | D | 9.4 ± 1.14 |
| 6\. | The choice of the measure of disease activity and the level of the target value may be influenced by considerations of comorbidities, patient factors and drug-related risks | 5 | D | 9.4 ± 1.02 |
| 7\. | Once the target is achieved, it should ideally be maintained throughout the course of the disease | 5\* | D | 9.4 ± 0.76 |
| 8\. | The patient should be appropriately informed and involved in the discussions about the treatment target, and the risks and benefits of the strategy planned to reach this target | 5 | D | 9.8 ± 0.50 |
| 9\. | Structural changes, functional impairment, extra-articular manifestations, comorbidities and treatment risks should be considered when making clinical decisions, in addition to assessing measures of disease activity | 5 | D | 9.5 ± 0.81 |
| Specific items for individual types of Spondyloarthritis | | | | |
| *Axial Spondyloarthritis (including ankylosing spondylitis)* | | | | |
| 10\. | Validated composite measures of disease activity, such as the BASDAI plus acute phase reactants or the Ankylosing Spondylitis Disease Activity Score, with or without measures of function such as the BASFI, should be performed and documented regularly in routine clinical practice to guide treatment decisions; the frequency of the measurements depends on the level of disease activity | 5 | D | 9.3 ± 0.95 |
| 11\. | Other factors, such as axial inflammation on MRI, radiographic progression, peripheral musculoskeletal and extra-articular manifestations, as well as comorbidities, may also be considered when setting clinical targets | 5\* | D | 9.3 ± 0.80 |
| *Peripheral Spondyloarthritis* | | | | |
| 10\. | Quantified measures of disease activity, which reflect the individual peripheral musculoskeletal manifestations (arthritis, dactylitis, enthesitis), should be performed and documented regularly in routine clinical practice to guide treatment decisions; the frequency of the measurements depends on the level of disease activity | 5 | D | 9.3 ± 0.85 |
| 11\. | Other factors, such as spinal and extra-articular manifestations, imaging results, changes in function/quality of life, as well as comorbidities, may also be considered for decisions | 5 | D | 9.4 ± 0.78 |
| *Psoriatic arthritis* | | | | |
| 10\. | Validated measures of musculoskeletal disease activity (arthritis, dactylitis, enthesitis, axial disease) should be performed and documented regularly in routine clinical practice to guide treatment decisions; the frequency of the measurements depends on the level of disease activity; cutaneous manifestations should also be considered | 5 | D | 9.4 ± 0.78 |
| 11\. | Other factors, such as spinal and extra-articular manifestations, imaging results, changes in function/quality of life, as well as comorbidities, may also be considered for decisions | 5 | D | 9.3 ± 1.00 |
| Other factors, such as spinal and extra-articular manifestations, imaging results, changes in function/quality of life, as well as comorbidities may also be considered for decision | 5 | D | 9.3±1.00 |

An asterisk in the LoE column denotes that indirect evidence for this item is available from the literature search, although it was not sufficient for a higher grading of the evidence level.

BASDAI, Bath Ankylosing Spondylitis Disease Activity Index; BASFI, Bath Ankylosing Spondylitis Functional Index; GoR, grade of recommendation; LoE, level of evidence; PsA, psoriatic arthritis; SoR, strength of recommendation (level of agreement); SpA, spondyloarthritis.

## Overarching principles

In the Committee's view, a number of elements related to treating SpA are so representative of good clinical practice that they form a general framework for more specific recommendations. These were therefore termed overarching principles, and five such principles were developed and voted on.

*A. The treatment target must be based on a shared decision between patient and rheumatologist*.

Patient involvement in therapeutic decision-making has become a mandate in patient care, especially when dealing with chronic diseases. This is a general patient right; it has been shown to improve mutual understanding and outcome39–42 and is also increasingly recognised to be important in SpA.43–45 The committee was convinced that patients must be informed about the proposed treatment target, the therapeutic options to reach the target and the reasons for recommending the target, also in light of the risks related to treatment and to the disease itself; patients, on the other hand, should actively participate in this discussion. This aspect is subsequently reinforced in recommendation number 8, and these two items received the highest level of agreement among all bullet points. The principle also specifically mentions the rheumatologist, since it is the rheumatologist who should coordinate treatment of patients with SpA. Evidence regarding RA suggests that patient outcome is better when care is provided by a rheumatologist,46 and this might also be so for the musculoskeletal manifestations of PsA and SpA.

*B. SpA and PsA are often complex systemic diseases; as needed, the management of musculoskeletal and extra-articular manifestations should be coordinated between the rheumatologist and other specialists (such as dermatologist, gastroenterologist, ophthalmologist)*.

This item is intended to inform patients, healthcare professionals with less experience in the care of SpA and non-medical stakeholders that patients with SpA frequently suffer from extramusculoskeletal manifestations and are often in need of multidisciplinary care for optimal therapy. When multiorgan involvement is present, a harmonised approach among specialists is required, which should ideally be coordinated by the rheumatologist, especially if the musculoskeletal involvement causes major complaints.

*C. 
The primary goal of treating the patient with SpA and/or PsA is to maximise long-term health-related quality of life and social participation through control of signs and symptoms, prevention of structural damage, normalisation or preservation of function, avoidance of toxicities and minimisation of comorbidities*.

The significant burden of axSpA and PsA in terms of disability, loss of quality of life and work productivity has only recently been appreciated.47–52 This generally formulated item addresses the importance of controlling signs and symptoms such as pain, structural changes such as ankylosis,53 54 and comorbidities,55–57 and the importance of focusing on the totality of disease manifestations and complications when determining the proposal for a treatment target.

*D. Abrogation of inflammation is presumably important to achieve these goals*.

PsA and SpA are inflammatory diseases and inflammation leads to their signs and symptoms, functional impairment as well as structural changes.7 11 58–61 Therefore, stopping inflammation appears to be of key importance to optimise outcome. Indeed, in many patients non-steroidal anti-inflammatory drugs (NSAIDs) can lead to cessation of signs and symptoms, normalisation of physical function and potentially inhibition of structural damage in the spine.62 63 Interference with the proinflammatory cytokine tumour necrosis factor (TNF) suppresses inflammation effectively and can lead to disappearance of signs and symptoms and maximal improvement of physical function. Thus, the task force was convinced that disappearance of inflammation conveys the best outcome. However, current evidence indicates that TNF inhibition does not prevent progression of structural changes in AS.64 65 Moreover, it has not been determined if a state of remission leads to better long-term outcome of SpA and/or PsA than low disease activity. Thus, this item is somewhat more controversial than most other ones; therefore, the word 'presumably' was added. This point constitutes a backbone for some of the subsequent individual recommendations.

*E. Treatment to target by measuring disease activity and adjusting therapy accordingly contributes to the optimisation of short-term and/or long-term outcomes*.

The SLR has revealed that patients with AS who do not reach predefined, measurable treatment targets can achieve further improvement upon adaptation of their therapy. While for PsA this has not yet been established, the task force regarded the need to measure disease activity and to amend therapy in the face of persistently active disease as a general necessity and, therefore, as a principle.

The level of agreement with these five principles was very high, ranging between 9.1 (item D) and 9.7 (item A) on a scale of 10 (table 1), indicating that this large and quite heterogeneous task force had arrived at a quite unanimous view on the principal importance of certain approaches to treatment of SpA.

## Recommendations

As mentioned previously, after intensive deliberations the committee had decided to create just one document covering axial and peripheral SpA, including PsA. To this end, nine unified recommendations and two additional items dealing separately with PsA, axial SpA and peripheral SpA were developed in the course of the discussions. These recommendations applied the results of the SLR, but given the low available evidence in the literature, they were mostly based on expert opinion, albeit the consensual opinions of a large group of experts. 
The sequence of the recommendations follows a logical order, but also reflects the level of importance the committee attached to each individual bullet point.

### Common recommendations

1\. A major treatment target should be clinical remission/inactive disease of musculoskeletal involvement (arthritis, dactylitis, enthesitis, axial disease), taking extra-articular manifestations into consideration (level of evidence (LoE): 5, grade of recommendation (GoR): D).

Hitherto, no clinical trial has compared outcomes of PsA or axSpA for progression of structural changes or improvement of physical function or quality of life when remission rather than another state was targeted. Definitions for remission (which is called inactive disease in AS) or at least minimal disease activity (MDA) exist for PsA and AS,12 66 67 but in contrast to RA the long-term benefits of remission have not yet been sufficiently established. Also, no clear definition of remission for extra-articular musculoskeletal features, such as enthesitis or dactylitis, is currently available. Moreover, it is not sufficiently known at present how remission of musculoskeletal symptoms relates to remission of skin disease in PsA or bowel disease in IBD. Therefore, the formulation of this first bullet point reflects the general lack of data, by saying '*a* major' rather than '*the* major' treatment goal, or by expanding the term 'clinical remission' to the somewhat less stringent term 'inactive disease'. However, the lack of thorough evidence and the unwillingness of the group to arrive at a more determined and clear verbalisation of this bullet point must not be mistaken for uncertainty of the task force regarding the necessity of treating patients to become free of signs and symptoms of their peripheral joint or axial disease. On the contrary, the task force deemed this approach to be of utmost importance for short-term benefit and long-term outcome; therefore it was placed as the first recommendation. Importantly, the group restricted the terms 'remission'/'inactive disease' to the musculoskeletal manifestations of SpA and *not* the extramusculoskeletal abnormalities, although it clearly stated in the last part of this item that these must not be neglected in therapeutic decision making. After several rounds of discussion, 83% of the participants agreed with the formulation of this bullet point and the strength of recommendation amounted to 9.5±0.9.

2\. The treatment target should be individualised according to the current clinical manifestations of the disease (LoE: 5, GoR: D)

This item emphasises that every patient should be treated according to her or his current clinical manifestations and that, in light of their heterogeneity, each of these manifestations has to be accounted for when setting the therapeutic target. However, it also implies that at certain points in time the musculoskeletal symptoms may not be in the foreground, such as when extra-articular manifestations prevail and need appropriate attention. Again, no data exist in the literature to support or refute this recommendation, which was voted for by 87% of the participants and attained a strength of recommendation (SoR) of 9.3±1.0.

*3. Clinical remission/inactive disease is defined as the absence of clinical and laboratory evidence of significant inflammatory disease activity*.

This bullet point provides a definition for item 1 in order to clarify what is meant by remission. 
Remission of an inflammatory rheumatic disease ideally comprises the absence of its signs and symptoms, maximal improvement in physical function and a halt of structural changes. While there is compelling evidence that these three characteristics go along with each other in RA, this is not yet sufficiently known in AS and PsA. However, in PsA, progression of joint damage is correlated with swollen joint counts and dactylitis59 68 and, therefore, it may be assumed that clinical remission will also lead to a halt of structural progression. This is not quite clear in AS, since progression of syndesmophyte formation has been observed even when patients were in clinical remission on TNF inhibitors, and formation of syndesmophytes occurred without presence of MRI inflammation.8 64 65 69 On the other hand, elevated levels of acute phase reactants (APRs) are associated with progression of structural changes in AS.70 Further, physical function and quality of life are related to symptoms of these diseases.71 For all these reasons the task force defined remission as stated above. Definitions of remission or partial remission are available for AS, when using mere patient-reported outcomes67 or the more recently described Ankylosing Spondylitis Disease Activity Score (ASDAS).12 For PsA, remission criteria are not well established,66 especially given that in PsA most composite measures have been borrowed from RA while a composite measure specific for PsA has only recently been validated,10 but criteria for disease activity states have not yet been defined. Importantly, the term 'significant' was added deliberately, indicating that the presence of a minute extent of residual activity, such as a tender joint, a residual swollen but painless joint, or residual axial pain that does not appear to relate to inflammation, would still be compatible with remission. On the other hand, the committee wished to have a stringent definition of remission which would not allow significant residual disease activity, such as several swollen joints or significant back pain, even if dramatically improved by therapy, to be called remission. Also, this bullet point speaks of 'clinical remission', indicating that clinical rather than imaging measures should be used to define remission, at least currently. In this respect it should be noted that MRI shows evidence of inflammation in AS and that a negative MRI can be regarded as imaging remission; however, the relationship between clinical and imaging remission still needs to be elaborated. Statement 3, which like the previous ones is based on expert opinion, received approval by 83% of the committee members and a SoR of 9.0±1.4.

4\. Low disease activity/MDA may be an alternative treatment target (LoE: 5; GoR: D)

While remission (inactive disease) constitutes an ideal goal, in clinical practice, stringently defined as it was in bullet point number 3, it may be difficult to achieve in many patients, especially those with established/long-standing disease. Indeed, patients with axSpA with longer disease duration are less likely to attain partial remission than those with early disease.72 38 Thus, while remission is the ultimate and an ideal goal, low disease activity constitutes a useful alternative in the opinion of the task force, since it is assumed that physical function and quality of life may not be much worse than in remission and that progression of structural damage, while possibly not halted, would be minimal. 
Indeed, as included in the bullet point, low disease activity can also be regarded as 'MDA'. Thus, minor residual signs and symptoms may still exist, differentiating this state from inactive disease. Importantly, by stating that low disease activity is an alternative goal to remission, the committee also clearly implies that any other, higher state, even moderate disease activity, would not be acceptable and its presence should elicit therapeutic adaptations. More research will be needed to provide information on the optimal time point for achieving the treatment target. However, given that in clinical trials of AS maximal improvement was achieved between week 12 and week 24 for all outcome measures including ASAS partial remission4 73 74 and that similar observations have been made in PsA,2 75 76 a maximum of 6 months for reaching the treatment target of low disease activity or remission seems appropriate, but it is advisable to adapt therapy earlier if no significant reduction in disease activity is observed within 3 months. Recently, thresholds for disease activity states including inactive disease (equivalent to remission) have been defined for axSpA using the ASDAS,12 and the ASAS definition of partial remission is also available67; a measure of MDA has been developed for PsA which is beginning to be used in clinical trials.77 78 Additional research in this respect will be required, especially for PsA and peripheral SpA. This item was accepted as defined by 79% of the participants and received a SoR of 9.4±0.9.

*5. Disease activity should be measured on the basis of clinical signs and symptoms, and APRs (LoE: 5, GoR: D)*.

Traditionally, given that the spine is not as accessible as a peripheral joint to physical examination for signs of inflammation, disease activity in axSpA has been evaluated by employing the Bath Ankylosing Spondylitis Disease Activity Index (BASDAI), which comprises only patient-reported variables related to symptoms of the disease. However, axSpA is an inflammatory disease with involvement of various inflammatory cells and cytokines.79 80 Indeed, inhibition of TNF is one of the mainstays of treatment today, leading to relief of symptoms, and inhibition of prostaglandin synthesis can reduce clinical symptoms and retard progression of structural changes62 63; this appears to be particularly prominent in patients with elevated APRs, and the latter seem to be associated with progression of syndesmophyte formation.70 81 82 Consequently, disease activity assessment during follow-up and in the course of targeting a good outcome should comprise the assessment of clinical aspects of the disease as well as laboratory abnormalities, that is, APR measurement. This can be done separately, by looking at, for example, BASDAI and C-reactive protein (CRP), or by using a measure that comprises both aspects, such as the ASDAS.11 Along the same line, PsA, as an inflammatory and potentially destructive joint disease, should be followed using measures that relate to the joints and the serological inflammatory response; these are contained in RA-related composite measures of disease activity, such as the Disease Activity Score (DAS), DAS28 and the Simplified Disease Activity Index (SDAI), but also in the Disease Activity in PsA and the Psoriatic Arthritis Disease Activity Score.13 The current recommendation does not relate to systemic features of SpA, but solely to the musculoskeletal manifestations. 
For systemic features, other measures are needed and, according to the EULAR recommendations for treatment of PsA,18 this should be done in collaboration with the respective specialists. Recommendation 5 received approval by 97% of participants and the SoR amounted to 9.4±1.1.

6\. The choice of the measure of disease activity and the level of the target value may be influenced by considerations of comorbidities, patient factors and drug-related risks (LoE: 5; GoR: D)

Patients with chronic musculoskeletal diseases frequently also suffer from comorbidities that (1) may be related to the overall spectrum of the disorder (such as IBD, uveitis or psoriasis), (2) may occur as a consequence of chronic inflammation (such as cardiovascular disease), (3) may be related to therapy (such as gastric ulcer or infection), or (4) may simply occur concomitantly by chance. The presence of such comorbidities may alter the level of the treatment target, since the risk of aggravating the comorbid condition may outweigh the benefit conveyed by more intensive therapy to achieve the treatment target of the musculoskeletal manifestations. Further, the choice of follow-up measure may have to be changed under certain circumstances. For example, a concomitant disease which raises pain levels or APRs may influence the result of measuring disease activity. Likewise, when following patients on therapies that affect the APRs independently of clinical benefit, one may have to reconsider the choice of a measure that contains an APR. Therefore, this point focuses on the application (and sometimes restricted applicability) of particular disease activity measures. This recommendation was approved by 97% of the task force members and received a SoR of 9.4±1.0.

7\. Once the target is achieved, it should ideally be maintained throughout the course of the disease (LoE: 5; GoR: D)

Clearly, a patient's successfully targeted disease activity state should not be allowed to deteriorate during follow-up, since reactivation of disease may again lead to reduced quality of life and disability. There is evidence that on-demand NSAID therapy, in contrast to regular NSAID treatment, is associated with progression of radiographic changes in AS63 and that stopping TNF-blocker therapy will lead to reactivation of AS83 and PsA.84 While the present consensus statement is not designed to provide recommendations on therapies with particular agents but rather on treatment strategies, the task force nevertheless points to the importance of maintaining the targeted therapeutic state once achieved and, based on the available evidence, advises against stopping a successful therapy. However, it has not yet been studied sufficiently whether dose reduction or expansion of treatment intervals allows a good clinical state to be maintained. Approval was given by 90% of the task force's members and the SoR amounted to 9.4±0.8.

8\. 
The patient should be appropriately informed and involved in the discussions about the treatment target, and the risks and benefits of the strategy planned to reach this target (LoE: 5, GoR: D)

While this statement has already been partly covered in the overarching principles, it was felt important to also raise this point in the context of the actual recommendations, to bolster the importance of interaction between health professionals and their patients in all regards: setting and agreeing on the treatment target, discussing the strategies available to reach that target and the time it may take to attain it, and laying out the benefits and risks of the recommended treatment while considering the totality of clinical disease manifestations (including the extramusculoskeletal ones) and of comorbidities. This point also comprises the need to discuss the steps to be taken if the treatment target is not achieved, such as adjustment of therapy or a switch to a new therapy. In this respect, patient education programmes or booklets may provide additional helpful means. This item was approved by 81% of the participants and the SoR was 9.8±0.5.

*9. Structural changes, functional impairment, extra-articular manifestations, comorbidities and treatment risks should be considered when making clinical decisions, in addition to assessing measures of disease activity*.

This point is focused on the considerations involved in therapeutic decision making. Although the last part of this recommendation emphasises the importance of regular assessment of disease activity with appropriate measures (see items 5 and 6), the first part suggests taking into account the results of other investigations for treatment decisions, such as imaging (especially structural changes in PsA), physical function and extra-articular manifestations. The latter comprise enthesitis or dactylitis as well as extramusculoskeletal disease. This is of importance, since treatment approaches to PsA will differ in the presence of enthesitis compared with patients who do not suffer from entheseal affection.18 Moreover, organ disease, such as lung involvement, aortitis, intestinal or skin manifestations as well as uveitis, may require involvement of other specialists (see overarching principle B). In particular, uveitis can present across the spectrum of SpA and may reflect disease activity, whereas inflammatory bowel disease and psoriatic skin involvement must be considered in the respective disorders and do not strongly correlate with the degree or extent of musculoskeletal involvement. Risks and comorbidities are also reiterated here in the context of treatment decisions; previously they were mentioned with respect to the choice of measures of disease activity (see recommendation 6) and in relation to patient information (item 8). This recommendation achieved 100% agreement and a SoR of 9.5±0.8.

### Disease-specific recommendations

As indicated above, items 10 and 11 have been formulated specifically for axSpA, PsA and peripheral SpA to account for the differences between certain characteristics of the different spondyloarthritides.

#### Axial SpA (including AS)

*10. 
Validated composite measures of disease activity such as BASDAI plus APRs or ASDAS, with or without measures of function such as the Bath Ankylosing Spondylitis Functional Index (BASFI), should be performed and documented regularly in routine clinical practice to guide treatment decisions; the frequency of the measurements depends on the level of disease activity*.

This item is an expansion of recommendations 5 and 9. It mentions those disease activity measures which have been repeatedly validated and are already in use in contemporary clinical trials. In line with recommendation 5, when the BASDAI is employed, an APR, such as CRP or the erythrocyte sedimentation rate, should also be determined. The ASDAS already comprises such a measure among its components.12 In addition, this recommendation also suggests the use of a particular functional measure, but other validated measures can also be applied (therefore the term 'such as'). With highly active disease, follow-up examinations will have to be more frequent than with inactive disease/remission. Moreover, the recommendation requests documentation of the measured results. Among task force members, 88% approved this item and the SoR amounted to 9.3±1.0.

*11. Other factors, such as axial inflammation on MRI, radiographic progression, peripheral musculoskeletal and extra-articular manifestations, may also be considered when setting clinical targets*.

Again, this recommendation expands item 9, but specifically mentions MRI as a highly valuable imaging method for the potential follow-up of axSpA. Likewise, non-axial disease manifestations will not only influence the therapeutic approach but also have to be considered when setting treatment targets. Approval was voted for by 88% of the members and the SoR was 9.3±0.8.

#### Peripheral SpA

*10. Quantified measures of disease activity, which reflect the individual peripheral musculoskeletal manifestations (arthritis, dactylitis, enthesitis), should be performed and documented regularly in routine clinical practice to guide treatment decisions; the frequency of the measurements depends on the level of disease activity*.

This recommendation, while expanding item 5, is here specifically tailored to peripheral SpA, such as ReA, IBD arthritis or former 'undifferentiated' peripheral SpA. Measures of disease activity are available and have been validated for the arthritis component of ReA,9 and there exist measures for dactylitis and enthesitis which have not been primarily developed for peripheral SpA, but rather for PsA or AS.85–87 While they will have to be validated in peripheral SpA, they can be assumed useful for clinical practice until proven otherwise. This recommendation also calls for documentation of the measured results. Of the participants, 100% approved this item and the SoR score achieved was 9.3±0.9.

*11. Other factors such as spinal and extra-articular manifestations, imaging results, changes in function/quality of life, as well as comorbidities may also be considered for decision*.

This item reiterates and expands recommendation 9 and achieved 100% approval; the SoR was 9.4±0.8.

#### Psoriatic arthritis

*10. 
Validated measures of musculoskeletal disease activity (arthritis, dactylitis, enthesitis, axial disease) should be performed and documented regularly in routine clinical practice to guide treatment decisions; the frequency of the measurements depends on the level of disease activity; cutaneous manifestations should also be considered*.

Specific mention of skin disease tailors this recommendation to PsA, although the skin may also be involved in axial and peripheral SpA. Further, in addition to arthritis, dactylitis and enthesitis, axial disease assessment is specifically brought forward here. Finally, a reminder is provided that the results of the various measures (and also the treatment target) should be documented. Clearly, with highly active disease patients should be seen frequently, such as monthly to every 3 months, while with low disease activity or remission, follow-up examinations may be done only every 6–12 months. However, skin involvement also has to be taken into account. The voting achieved 92% approval and the SoR amounted to 9.4±0.8.

*11. Other factors such as spinal and extra-articular manifestations, imaging results, changes in function/quality of life, as well as comorbidities may also be considered for decision*.

As for the other disease entities, this last recommendation expands item 9 and also the preceding one by reiterating the importance of comorbidities, and of the axial and soft tissue manifestations of PsA, in the course of making treatment decisions. Approval was granted by 100% of the participants and the SoR amounted to 9.3±1.0.

In a final anonymous vote on whether the task force members felt influenced by the fact that support for this activity was provided by a company, the result was 0.4±1.3 (0 meaning no influence and 10 meaning heavy influence), indicating that the participants felt negligibly influenced.

## Research agenda

Since none of the recommendations is based on evidence, the research agenda has to comprise the search for evidence for all of them. However, beyond mere therapeutic aspects, insights into the relationships between individual musculoskeletal manifestations, damage and disability are still incomplete, especially for peripheral SpA, including PsA. Table 2 lists the research agenda as mentioned during the task force's meetings.

Research agenda

| Topics | Specific questions |
|:---|:---|
| Composite activity measures (mainly PsA and peripheral SpA) | Validation where needed, definition of disease activity states and response categories |
| Remission definition | Is it important that all clinical domains of axial SpA, peripheral SpA or PsA are in remission or is it sufficient to define some of them? |
| Treatment target | Is there a difference in long-term outcome when comparing remission with low disease activity? |
| Activity and damage | What is the progression of joint damage in different disease activity states in PsA? |
| Disease duration | Are there differences in responsiveness and thus differences in attaining certain targets with different disease duration in PsA? |
| Treatment to target | There is a need to design therapeutic trials that compare steered therapy aiming at remission or low disease activity with non-steered treatment (like TICORA)88 |
| Axial involvement in PsA | Do spinal and peripheral involvements respond similarly or differently? 
|
| Enthesitis, dactylitis | More data need to be obtained on the response of dactylitis or enthesitis to different therapies |
| Care by rheumatologist | Is care of axial SpA, peripheral SpA or PsA by a rheumatologist advantageous for outcomes when compared with care by non-rheumatologists? |
| Patient information | Is outcome different when patients are informed in a structured way when compared with more general means of information? |
| Maintenance of response | How can response be maintained? Can the dose of the therapy employed be reduced or the interval of applications be expanded and outcome maintained? |

PsA, psoriatic arthritis; SpA, spondyloarthritis.

# Discussion

Recommendations to treat axSpA and PsA have been developed over recent years.17–19 However, none of these addressed a clear therapeutic target and a strategy to reach this target. This has now been done in the present set of recommendations, and additional strategic aspects of treatment approaches are presented. Thus, the present consensus on treatment targets and general treatment approaches complements the published management recommendations,17–20 but a notable difference is the absence of suggestions or recommendations regarding a particular drug in any of the overarching principles or individual recommendations.

Treatment recommendations should usually be based on evidence. However, where evidence is missing, expert opinion has to come into play. The recommendations presented here are not based on hard evidence, because strategic therapeutic trials, in which therapy was consistently adapted to reach a prespecified treatment target and compared with a non-steered approach, as performed in RA,88 89 are currently not available for axSpA, peripheral SpA or PsA, and other pertinent literature is scarce. While an SLR has provided indirect evidence from clinical trials which targeted specific endpoints37 and thus supplied some information towards the work of the task force, the individual recommendations can only be regarded as expert opinion (consensus) and therefore call for more research in the field.

So why, then, were the recommendations developed now rather than waiting for more evidence? Because the definition of a treatment target and strategy is timely at present in light of two major aspects: (1) a field like that of SpA should not lag behind other disease areas, which already defined their targets years to decades ago, when at least indirect evidence for the benefit of attaining certain treatment targets is available; and (2) the tremendous therapeutic advances of the past decade have greatly improved the chances of achieving good outcomes and, therefore, setting stringent treatment milestones has become a reality which should not be concealed. Moreover, in the course of its initial discussions on this issue, the task force felt that with these advances and the concomitant formulation of a research agenda, investigations towards providing the respective evidence could be fostered and accelerated. Importantly, this view was shared among all task force members, who comprised patients and an international group of physicians with expertise in SpA.

At all three steps of this activity, which included initial discussions by the steering committee, formulation of recommendations by an expanded working group and development of treatment recommendations for all three entities, axSpA, peripheral SpA and PsA, unanimous agreement was attained. 
Moreover, all items achieved strong consensus in an anonymous voting process, with the lowest result being a mean of 9.0 on a scale of 1–10, indicating that the task force stood quite united behind the recommendations.

The complexity of the current endeavour resulted from the heterogeneity of the diseases covered. After long discussions and the intermediate development of more than one document, it was decided to produce a single set of recommendations for axSpA, peripheral SpA and PsA, in line with recent criteria for classification of SpA.15 Five overarching principles and nine recommendations were developed in common for all forms of SpA, including PsA. Only two recommendations were produced separately for axSpA, peripheral SpA and PsA, although their general scope was still very similar and the differences only very subtle. The overall activity was partly influenced by the treat-to-target recommendations for RA.32

Several of the recommendations stand out in their importance, while others can be seen as supportive or operational. A call for remission or inactive disease became item 1, because this was regarded as the foremost treatment target. Indeed, we can anticipate that reducing inflammation and disease activity to the minimum is optimal for the patients, at least for their quality of life. However, the members were aware that remission may not be achievable in all patients and, therefore, formulated an alternative treatment target, especially for patients with long-standing disease, namely low disease activity (recommendation 4). Importantly, this acknowledgement indicates that disease activity states other than remission or low disease activity constitute unacceptable clinical states, unless justified for other reasons, such as comorbidity (items 6, 9 and 11). Importantly, while validated measures of disease activity are available for PsA and ReA, disease activity states have not yet been sufficiently defined, in contrast to the situation in AS. Another complexity relates to the necessity of using measures that reflect the individual manifestations of a patient, which in some instances may involve assessment of peripheral joint disease, axial involvement, dactylitis and enthesitis. Identifying an individual treatment goal can in itself be seen as an important part of a treatment strategy when an intervention is initiated, and should be accompanied by a monitoring programme. It is also important that the agreed goal is documented in the records of the patient.

Patient involvement in defining the treatment target and selection of therapies based on their risks and benefits was deemed so important that it is stated in the first overarching principle and additionally in one of the recommendations.

As indicated above, given the small evidence base, the research agenda is of utmost importance. Research activities should focus on strategic therapeutic trials, and on addressing missing information, such as the definition of disease activity states in PsA.

The recommendations are summarised in a simplified form in an algorithm presented in figure 1. Like most types of recommendations, it will be necessary to revise the current document in due course, presumably in about 4–5 years, or earlier if significant evidence accumulates regarding the individual points of the recommendations. 
The task force hopes for an expansion of high-quality research activities that allow either confirmation or modification of its conclusions.

# References

[^1]: **Handling editor** Francis Berenbaum

abstract: Understanding basic neuronal mechanisms holds the hope for future treatment of brain disease. The 1st international conference on synapse, memory, drug addiction and pain was held in beautiful downtown Toronto, Canada, on August 21–23, 2006. Unlike other traditional conferences, this new meeting focused on three major aims: (1) to promote new and cutting-edge research in neuroscience; (2) to encourage international information exchange and scientific collaborations; and (3) to provide a platform for active scientists to discuss new findings. Up to 64 investigators presented their recent discoveries, from basic synaptic mechanisms to genes related to human brain disease. This meeting was in part sponsored by Molecular Pain, together with the University of Toronto (Faculty of Medicine, Department of Physiology as well as Center for the Study of Pain). Our goal for this meeting is to promote future active scientific collaborations and improve human health through fundamental basic neuroscience research. The second international meeting on Neurons and Brain Disease will be held in Toronto (August 29–31, 2007).
author: Guo-Qiang Bi; Vadim Bolshakov; Guojun Bu; Catherine M Cahill; Zhou-Feng Chen; Graham L Collingridge; Robin L Cooper; Jens R Coorssen; Alaa El-Husseini; Vasco Galhardo; Wen-Biao Gan; Jianguo Gu; Kazuhide Inoue; John Isaac; Koichi Iwata; Zhengping Jia; Bong-Kiun Kaang; Mikito Kawamata; Satoshi Kida; Eric Klann; Tatsuro Kohno; Min Li; Xiao-Jiang Li; John F MacDonald; Karim Nader; Peter V Nguyen; Uhtaek Oh; Ke Ren; John C Roder; Michael W Salter; Weihong Song; Shuzo Sugita; Shao-Jun Tang; Yuanxiang Tao; Yu Tian Wang; Newton Woo; Melanie A Woodin; Zhen Yan; Megumu Yoshimura; Ming Xu; Zao C Xu; Xia Zhang; Mei Zhen; Min Zhuo
date: 2006
institute: 1Department of Neurobiology, University of Pittsburgh, Pittsburgh, USA; 2Department of Psychiatry, Harvard University, Boston, USA; 3Department of Pediatrics, and Cell Biology and Physiology, Washington University in St. Louis, St. Louis, USA; 4Department of Pharmacology and Toxicology, Queen's University, Kingston, Canada; 5Department of Anesthesiology, Washington University in St. Louis, St. 
Louis, USA; 6Centre for Synaptic Plasticity, University of Bristol, Bristol, UK; 7Department of Biology, University of Kentucky, Lexington, USA; 8Department of Physiology and Biophysics, University of Calgary, Calgary, Canada; 9Department of Psychiatry, University of British Columbia, Vancouver, Canada; 10Institute for Molecular and Cell Biology, University of Porto, Porto, Portugal; 11Skirball Institute, New York University School of Medicine, New York, USA; 12Department of Oral and Maxillofacial Surgery, University of Florida, Gainesville, USA; 13Department of Pharmaceutical Health Care and Sciences, Kyushu University, Kyushu, Japan; 14NINDS, NIH, Bethesda, USA; 15Department of Physiology, Nihon University, Tokyo, Japan; 16Department of Physiology, University of Toronto, Toronto, Canada; 17Department of Biological Sciences, Seoul National University, Seoul, Korea; 18Department of Anesthesiology, Sapporo Medical University School of Medicine, Sapporo, Japan; 19Department of Agricultural Chemistry, Tokyo University of Agriculture, Tokyo, Japan; 20Department of Molecular Physiology and Biophysics, Baylor College of Medicine, Houston, USA; 21Division of Anesthesiology, Niigata University, Niigata, Japan; 22Department of Neuroscience and High Throughput Biology Center, Johns Hopkins University, Baltimore, USA; 23Department of Human Genetics, Emory University, Atlanta, USA; 24Department of Psychology, McGill University, Montreal, Canada; 25Department of Physiology, University of Alberta, Edmonton, Canada; 26Sensory Research Center, Seoul National University, Seoul, Korea; 27Department of Biomedical Sciences, University of Maryland, Baltimore, USA; 28Department of Medical Genetics and Microbiology, University of Toronto, Toronto, Canada; 29Department of Neurobiology and Behavior, University of California, Irvine, USA; 30Department of Anesthesiology and Critical Care Medicine, Johns Hopkins University, Baltimore, USA; 31Brain Research Center, University of British Columbia, Vancouver, Canada; 32NICHD, NIH, Bethesda, USA; 33Department of Cell and Systems Biology, University of Toronto, Toronto, Canada; 34Department of Physiology and Biophysics, State University of New York at Buffalo, Buffalo, USA; 35Department of Basic Medicine, Kyushu University, Kyushu, Japan; 36Department of Anesthesia and Critical Care, University of Chicago, Chicago, USA; 37Department of Anatomy and Cell Biology, Indiana University School of Medicine, Indianapolis, USA; 38Department of Psychiatry, University of Saskatchewan, Saskatoon, Canada
title: Recent advances in basic neurosciences and brain disease: from synapses to behavior

# Introduction

One key factor in promoting the progress of science is the exchange of scientific ideas and new discoveries through meetings. Scientific meetings provide a critical chance for investigators to communicate new ideas, discuss differing or conflicting results, and set up potential collaborations. The annual meeting of the Society for Neuroscience (SfN) has served the community well in this respect. However, with the increased membership and scale of the meeting, SfN meetings take place only in a few US cities in a rotating manner. Due to tight security controls after 9/11, many foreign investigators have failed to obtain visitor visas for the meeting in a timely fashion. 
Considering these factors, a small-scale neuroscience meeting in a more relaxed city should provide a better chance for investigators, in particular principal investigators (PIs), to communicate with each other directly, face-to-face. The 1st international conference on synapse, memory, drug addiction and pain was designed to meet this need. The major aims of this meeting were to provide an opportunity for setting up global scientific exchanges, to provide an active stage for PIs to report novel or unpublished data, to bring together neuroscientists working at different levels of organisms and systems, and to promote research findings from junior and mid-career investigators.

The meeting was organized by Dr. Min Zhuo from the University of Toronto, with help from Dr. Jianguo Gu from the University of Florida, and was in part sponsored by Molecular Pain, the University of Toronto (Faculty of Medicine, Department of Physiology as well as Center for the Study of Pain), and Olympus Inc.

There were five major themes for the meeting: synapse, synaptic plasticity, memory, pain, and brain disease. Unlike other meetings, each speaker was given 15 min to talk and 5 min for discussion. The time slot allowed for each speaker was tightly controlled by each chair (with a timer!), and few speakers went beyond the time permitted. The wine reception over the poster session provided a wonderful opportunity for further discussions. Taking advantage of the excellent location of the main campus in downtown Toronto, attendees also enjoyed the nice summer weather and wonderful food in Toronto. Niagara Falls, a famous tourist site, is about a 90-min drive from downtown Toronto.

Students and postdoctoral fellows also presented over 50 posters on various topics. Among them, three posters were selected as best poster presentations and were awarded \$250–500.

# Text

## List of major themes, chairs, speakers and titles

### Synapse

Chair: Min Li (USA)

• Jens R Coorssen (University of Calgary, Calgary, Canada)

The role of cholesterol in synaptic release

• Lin Mei (Medical College of Georgia, Augusta, USA)

Neuregulin regulation of neuronal activity

• Alaa El-Husseini (University of British Columbia, Vancouver, Canada)

Mechanisms that govern protein assembly at nascent neuronal contacts

• Wen-Biao Gan (New York University School of Medicine, New York, USA)

Dendritic Spine Stability and Its Modification by Experience

• Elise F Stanley (Toronto Western Research Institute, Toronto, Canada)

The presynaptic transmitter release site complex

Chair: Alaa El-Husseini (Canada)

• Mei Zhen (University of Toronto, Toronto, Canada)

SAD kinase regulates neuronal polarity and synapse formation

• Robin L Cooper (University of Kentucky, Lexington, USA)

Effects of the serotonergic system on physiology, development, learning and behavior of *Drosophila melanogaster*

• Min Li (Johns Hopkins University, Baltimore, USA)

Chemical regulation of membrane excitability

• Shuzo Sugita (University of Toronto, Toronto, Canada)

Molecular mechanism of GTP-dependent exocytosis

• Lu-Yang Wang (University of Toronto, Toronto, Canada)

Convergent pre- and post-synaptic adaptations for high-fidelity neurotransmission at the developing calyx of Held synapse

• Wei-Yang Lu (University of Toronto, Toronto, Canada)

Physical Interaction between Acetylcholinesterase and Neurexins

### Synaptic 
plasticity

Chairs: Graham L Collingridge (UK) and Yu Tian Wang (Canada)

• Eric Klann (Baylor College of Medicine, Houston, USA)

Translational Control during Hippocampal Synaptic Plasticity and Memory

• Peter V Nguyen (University of Alberta, Edmonton, Canada)

Beta-Adrenergic Receptors Recruit ERK and mTOR to Promote Translation-Dependent Synaptic Plasticity

• Shao-Jun Tang (University of California, Irvine, USA)

Regulation of Activity-Dependent Protein Synthesis in Dendrites

• Yu Tian Wang (University of British Columbia, Vancouver, Canada)

Synaptic plasticity in learning and memory

• Graham L Collingridge (University of Bristol, Bristol, UK)

Glutamate receptors and synaptic plasticity in the hippocampus

• Michael W Salter (University of Toronto, Toronto, Canada)

Ins and outs of SRC regulation of NMDA receptors and synaptic plasticity

• John F MacDonald (University of Toronto, Toronto, Canada)

Inhibitory Regulation of the Src Hub and LTP in CA1 Hippocampal Neurons

Chair: Michael W Salter (Canada)

• Vadim Bolshakov (Harvard University, Boston, USA)

Spatiotemporal asymmetry of associative synaptic plasticity in fear conditioning pathways

• Guo-Qiang Bi (University of Pittsburgh, Pittsburgh, USA)

Dynamics and plasticity of reverberatory activity in small neuronal circuits

• Melanie A Woodin (University of Toronto, Toronto, Canada)

Bidirectional spike-timing dependent plasticity of inhibitory transmission in the hippocampus

• John Isaac (NIH, Bethesda, USA)

Kainate receptors in novel forms of long-term synaptic plasticity

• Newton Woo (NIH, Bethesda, USA)

Regulation of Bi-directional Plasticity by BDNF

• Zhengping Jia (University of Toronto, Toronto, Canada)

Molecular regulation of spine properties and synaptic plasticity

### Pain

Chair: Megumu Yoshimura (Japan)

• Kazuhide Inoue (Kyushu University, Kyushu, Japan)

P2X4: mechanisms of overexpression in the neuropathic pain state

• Jianguo Gu (University of Florida, Gainesville, USA)

TRPM8 and cold allodynia

• Uhtaek Oh (Seoul National University, Seoul, Korea)

TRPV1 and its Role for Inflammatory Pain

• Vasco Galhardo (University of Porto, Porto, Portugal)

Impairment in prefrontal-based emotional decision-making in rat models of chronic pain

• Ke Ren (University of Maryland, Baltimore, USA)

Neuronal/glial cell interactions in CNS plasticity and persistent pain

Chair: Jianguo Gu (USA)

• Yves De Koninck (Laval University, Quebec City, Canada)

Plasticity of chloride homeostasis vs. plasticity of GABA/glycine; who wins?

• Megumu Yoshimura (Kyushu University, Kyushu, Japan)

Synaptic mechanisms of acupuncture in the spinal dorsal horn revealed by in vivo patch-clamp recordings

• Koichi Iwata (Nihon University, Tokyo, Japan)

Anterior cingulate cortex and pain: its morphological features and functional properties

• Min Zhuo (University of Toronto, Toronto, Canada)

Cortical potentiation and its roles in persistent pain and fear

• Vania A Apkarian (Northwestern University, Chicago, USA)

Chronic pain and emotional learning and memory

Chair: Uhtaek Oh (Korea)

• Zhou-Feng Chen (Washington University in St. Louis, St. 
Louis, USA)\n\nLiving without serotonin: a genetic approach to study the roles of the serotonergic system in opioid analgesia and tolerance\n\n\u2022 Catherine M Cahill (Queen's University, Kingston, Canada)\n\nTrafficking of Delta Opioid Receptors in Chronic Pain\n\n\u2022 Hiroshi Ueda (Nagasaki University, Nagasaki, Japan)\n\nMolecular mechanisms of neuropathic pain \u2013 lysophosphatidic acid as the initiator\n\n\u2022 Yuanxiang Tao (Johns Hopkins University, Baltimore, USA)\n\nAre the PDZ domains at excitatory synapses potential molecular targets for prevention and treatment of chronic pain?\n\n\u2022 Tatsuro Kohno (Niigata University, Niigata, Japan)\n\nDifferent actions of opioid and cannabinoid receptor agonists in neuropathic pain\n\n\u2022 Ze'ev Seltzer (University of Toronto, Toronto, Canada)\n\nPower and limitations of the comparative approach that uses animal models to identify human chronic pain genes\n\n\u2022 Mikito Kawamata (Sapporo Medical University School of Medicine, Sapporo, Japan)\n\nGenetic variation in response properties of spinal dorsal horn neurons and rostral ventromedial medulla neurons in different mouse strains\n\n### Brain disease\n\nChair: Xiao-Ming Xu (USA)\n\n\u2022 Guojun Bu (Washington University in St. Louis, St. Louis, USA)\n\nLDL Receptor Family and Alzheimer's disease\n\n\u2022 Satoshi Kida (Tokyo University of Agriculture, Tokyo, Japan)\n\nMechanism of interaction between reconsolidation and extinction of contextual fear memory\n\n\u2022 Weihong Song (University of British Columbia, Vancouver, Canada)\n\nHypoxia facilitates Alzheimer's disease pathogenesis\n\n\u2022 Zhen Yan (State University of New York at Buffalo, Buffalo, USA)\n\nInteractions between Acetylcholine, Amyloid and Ion Channels in Alzheimer's Disease\n\n\u2022 Jian Feng (State University of New York at Buffalo, Buffalo, USA)\n\nAchilles' Heel of Midbrain Dopaminergic Neurons: Vulnerabilities and Defense Strategies\n\n\u2022 Xiao-Jiang Li (Emory University, Atlanta, USA)\n\nSynaptic toxicity of Huntington disease protein\n\nChair: Xiao-Jiang Li (USA)\n\n\u2022 Fang Liu (University of Toronto, Toronto, Canada)\n\nRegulation of dopamine reuptake by the direct protein-protein interaction between the dopamine D2 receptor and the dopamine transporter\n\n\u2022 Danny G Winder (Vanderbilt University School of Medicine, Nashville, USA)\n\nSynaptic plasticity in the bed nucleus of the stria terminalis: roles in addiction and anxiety\n\n\u2022 Ming Xu (University of Chicago, Chicago, USA)\n\nMolecular Mechanisms of neuronal plasticity induced by drugs of abuse\n\n\u2022 Xia Zhang (University of Saskatchewan, Saskatoon, Canada)\n\nTAT-3L4F, a novel peptide for the treatment of drug addiction\n\n\u2022 Evelyn K Lambe (University of Toronto, Toronto, Canada)\n\nHypocretin and nicotine excite the same thalamocortical synapses in prefrontal cortex: correlation with improved attention in rat\n\n\u2022 Wan Qi (University of Toronto, Toronto, Canada)\n\nRegulation of NMDA and GABA-A receptors by the tumor suppressor PTEN\n\n### Memory\n\nChair: Karim Nader (Canada)\n\n\u2022 Paul W Frankland (University of Toronto, Toronto, Canada)\n\nFunctional integration of adult-born granule cells into spatial memory networks in the dentate gyrus\n\n\u2022 Mara Dierssen (Centre for Genomic Regulation, Barcelona, Spain)\n\nDendritic pathology and altered structural plasticity in Down syndrome: In the search of candidate genes\n\n\u2022 Sheena A Josselyn (University of Toronto, Toronto, Canada)\n\nNeuronal memory 
competition: The role of CREB

• Bong-Kiun Kaang (Seoul National University, Seoul, Korea)

Role of a novel nucleolar protein ApLLP in synaptic plasticity and memory in Aplysia

• Remi Quirion (Douglas Hospital Research Centre and INMHA, Montreal, Canada)

Novel genes possibly involved in learning and memory

Chair: Bong-Kiun Kaang (Korea)

• John C Roder (University of Toronto, Toronto, Canada)

Forward and reverse genetic screens in the mouse for mutants impaired in learning and memory

• Karim Nader (McGill University, Montreal, Canada)

Identifying the neural mechanisms by which boundary conditions inhibit reconsolidation from occurring

• Yukio Komatsu (Nagoya University, Nagoya, Japan)

Role of BDNF in the production of LTP at visual cortical inhibitory synapses

• Xiao-Ming Xu (University of Louisville School of Medicine, Louisville, USA)

Spinal cord injury repair: combinatorial strategies involving neuroprotection and axonal regeneration

• Zao C Xu (Indiana University School of Medicine, Indianapolis, USA)

Synaptic plasticity in pathological conditions

# Abstracts

• **Guo-Qiang Bi (Department of Neurobiology, University of Pittsburgh, Pittsburgh, USA) – Dynamics and plasticity of reverberatory activity in small neuronal circuits**

The concept of the cell assembly was proposed by Hebb to provide an elementary structure for thought processes and memory. The Hebbian cell assembly has two essential properties: 1. neuronal activity can reverberate in specific sequences within the assembly without sustained external drive; 2. synaptic modification resulting from the reverberatory activity further stabilizes the reverberation. Using whole-cell patch-clamp recording and simultaneous calcium imaging, we found that brief (e.g. 1-ms) stimulation of a few neurons in a small network of about 100 cultured hippocampal neurons could trigger reverberatory activity in the network lasting for seconds. Such reverberatory activity consists of repeating motifs of specific patterns of population activation in the network. Paired-pulse stimuli with inter-pulse intervals of ~200–400 ms are more effective in activating such oscillatory reverberation. Furthermore, repeated activation of reverberation with paired-pulse stimuli leads to long-term enhancement of subsequent activation by single stimuli. In addition, pairing a non-effective input (one that does not activate network reverberation) into one neuron with an effective input (one that activates reverberation) into another can convert the non-effective pathway into an effective one. Reverberatory circuits in vitro may serve as a prototype of the Hebbian cell assembly for studies of its dynamic properties and underlying cellular mechanisms. (Supported by NIMH and the Burroughs Wellcome Fund)

• **Vadim Bolshakov (Department of Psychiatry, Harvard University, Boston, USA) – Spatiotemporal asymmetry of associative synaptic plasticity in fear conditioning pathways**

Long-term potentiation (LTP) in afferent inputs to the amygdala serves an essential function in the acquisition of fear memory. The factors underlying the input specificity of synaptic modifications implicated in information transfer in fear conditioning pathways remain unknown. 
• **Guojun Bu (Department of Pediatrics, and Cell Biology and Physiology, Washington University in St. Louis, St. Louis, USA) – LDL Receptor Family and Alzheimer's Disease**

Amyloid-β peptide (Aβ) production and accumulation in the brain is a central event in the pathogenesis of Alzheimer's disease (AD). Recent studies have shown that apolipoprotein E (apoE) receptors, members of the low-density lipoprotein receptor (LDLR) family, modulate Aβ production as well as Aβ cellular uptake. Aβ is derived from proteolytic processing of amyloid precursor protein (APP), which interacts with several members of the LDLR family. Studies from our laboratory have focused on three members of the LDLR family: the LDLR-related protein (LRP), LRP1B, and the LDLR. Our *in vitro* cellular studies have shown that while LRP's rapid endocytosis facilitates APP endocytic trafficking and processing to Aβ, LRP1B's slow endocytosis inhibits these processes. In addition to modulating APP endocytic trafficking, LRP's rapid endocytosis also facilitates Aβ cellular uptake by binding to Aβ either directly or via LRP ligands such as apoE. Our *in vivo* studies using a transgenic approach have shown that overexpression of LRP in CNS neurons increases cell-associated Aβ, and this increase correlates with enhanced memory deficits in mice. We are currently investigating the cellular mechanisms by which LRP facilitates intraneuronal Aβ accumulation, a pathological event that directly contributes to the early cognitive deficits seen in AD. Our preliminary results indicate that apoE plays an important role in intraneuronal Aβ accumulation, likely by shuttling Aβ into neurons via LRP-mediated pathways. We hypothesize that, depending on the Aβ species (Aβ40 vs. Aβ42), its aggregation state (monomers vs. oligomers), and the apoE isoform present (apoE3 vs. apoE4), at least a portion of the Aβ internalized via an LRP-dependent pathway accumulates inside neurons. Molecular and cellular models underlying the mechanisms of LRP's involvement in AD will be presented and discussed.

• **Catherine M Cahill (Department of Pharmacology and Toxicology, Queen's University, Kingston, Canada) – Trafficking of Delta Opioid Receptors in Chronic Pain**

Neuropathic (NP) pain is defined as pain caused by a peripheral and/or central nervous system lesion, with associated sensory symptoms and signs, and is estimated to affect more than 1.5% of Americans. Despite its prevalence and adverse impact on functionality and quality of life, it remains a significant challenge for physicians because it is typically refractory to traditional analgesics. However, research increasingly suggests a therapeutic role for δOR agonists in treating chronic pain.
Our research aims to understand the changes in δOR expression and function, using both *in vivo* and *in vitro* techniques, in an animal model of NP pain. NP, but not sham-operated, rats developed cold and thermal hyperalgesia as well as tactile allodynia. Intrathecal administration of a selective δOR agonist significantly alleviated these nociceptive behaviours, and these effects were attenuated by a selective δOR antagonist. Real-time RT-PCR and western blotting experiments revealed no change in overall expression of δOR in the dorsal spinal cord; however, preliminary studies suggest that induction of NP pain may alter the subcellular localization of δORs, leading to enhanced analgesia.

• **Zhou-Feng Chen (Department of Anesthesiology, Washington University in St. Louis, St. Louis, USA) – Living without serotonin: a genetic approach to study the roles of the serotonergic system in opioid analgesia and tolerance**

Narcotics have long been used as an effective treatment for pain. The roles of the serotonergic (5-HT) system in opioid analgesia and tolerance, however, have been controversial. We have recently shown that the transcription factor Lmx1b is essential for the development of 5-HT neurons. In the absence of Lmx1b, all 5-HT neurons fail to develop in the raphe system. Because Lmx1b-null mice die around birth, we designed a strategy to delete Lmx1b in 5-HT neurons only. Lmx1b conditional knockout (CKO) mice lack all 5-HT neurons in the raphe system. Surprisingly, Lmx1b CKO mice survive to adulthood without motor deficits. To assess the roles of the 5-HT system in opioid analgesia, we examined the tail-flick responses of Lmx1b CKO mice injected with mu-, kappa- and delta-opioid receptor agonists. In addition, we examined the site of action of opioid receptor agonists by systemic, intrathecal and intracerebroventricular injections. These pharmacological studies revealed that the 5-HT system contributes differentially to opioid analgesia. Moreover, an examination of morphine analgesic tolerance in Lmx1b CKO mice indicated that morphine tolerance is independent of the 5-HT system. These results have important implications for our understanding of the mechanisms of action of the 5-HT system in opioid analgesia and tolerance.

• **Graham L Collingridge (Centre for Synaptic Plasticity, University of Bristol, Bristol, UK) – Kainate receptors: Functions and the discovery of novel antagonists.**

Less is known about the role of kainate receptors than about the other classes of ionotropic glutamate receptors in the CNS. However, recent studies, mainly employing the Lilly antagonist LY382884, have identified several functions: for example, at mossy fibre synapses in the hippocampus these receptors function as facilitatory autoreceptors and are involved in the induction of LTP. Kainate receptors also contribute to synaptic transmission at this synapse. Elsewhere, kainate receptors can function as inhibitory autoreceptors and can regulate GABA transmission.

Whilst LY382884 is a very useful antagonist, it has relatively narrow selectivity for GluR5-containing kainate receptors *versus* AMPA receptors. David Jane and his colleagues in Bristol have therefore developed a series of highly potent and specific GluR5 antagonists, the most potent of which is ACET.
This compound should be extremely useful for investigating the role of kainate receptors in physiological and pathological functions of the CNS, for example in neurodegeneration and neuropathic pain.

• **Robin L Cooper (Department of Biology, University of Kentucky, Lexington, USA) – Effects of the serotonergic system on physiology, development, learning and behavior of Drosophila melanogaster**

The serotonergic system in nervous tissue is known to play a vital role in development and behavior in simple to complex animal models. Using a simple model organism, Drosophila, the importance of serotonin (5-HT) circuitry in development and in acute actions can be addressed. There are only four 5-HT receptors in the Drosophila genome, of which 5-HT2dro is known to be essential in the embryonic stages of development. We have previously shown that a sensory-CNS-motor circuit in semi-intact preparations of 3rd instar larvae is physiologically sensitive to exogenous application of 5-HT. Now, using pharmacological manipulations and available receptor mutants for 5-HT2dro, we are studying the role of 5-HT in the development, behavior and physiology of 3rd instar larvae. Para-chlorophenylalanine (p-CPA) blocks the 5-HT biosynthesis pathway, and 3,4-methylenedioxymethamphetamine (MDMA, Ecstasy), a common drug of abuse in humans, is known to compel mammalian serotonergic neurons to release 5-HT. When larvae were fed these compounds from the 1st to the 3rd instar, growth slowed in a dose-dependent manner. The rates of body wall and mouth hook movements were reduced in p-CPA- and MDMA-fed larvae. HPLC results showed lower amounts of 5-HT in larval brains for p-CPA- but not MDMA-fed larvae. An increase in the sensitivity of the sensory-CNS-motor circuit to 5-HT in drug-fed larvae appears to be due to an upregulation of 5-HT receptors. The antisense line for the 5-HT2dro receptor also shows a delay in larval development. Preliminary data show impaired associative gustatory and olfactory learning in 3rd instar larvae with lower 5-HT or reduced expression of the 5-HT2dro receptor.

• **Jens R Coorssen (Department of Physiology and Biophysics, University of Calgary, Calgary, Canada) – The role of cholesterol in synaptic release**

Fast, Ca^2+^-triggered membrane merger defines regulated exocytosis. In native secretory vesicles, cholesterol (CHOL) functions in the fundamental fusion mechanism, and CHOL/sphingomyelin-enriched microdomains define the efficiency (Ca^2+^ sensitivity and kinetics) of fusion. The role of CHOL in the fusion mechanism is mimicked by structurally dissimilar lipidic membrane components having spontaneous negative curvature (*NC*) equal to that of CHOL, and correlates quantitatively with the *NC* each contributes to the membrane (e.g. α-tocopherol and dioleoylphosphatidylethanolamine). Unable to substitute for CHOL in rafts, these lipids do not rescue fusion efficiency. Lipids of spontaneous *NC* less than that of CHOL (e.g. dioleoylphosphatidic acid) do not support fusion. We have also identified comparable molecular dependencies and relationships at the synapse, suggesting a conserved role for CHOL and the *NC* it contributes. This quantitative relationship between *NC* and fusion appears most consistent with the stalk-pore model, demonstrating that *NC* itself is an essential component of the fundamental native fusion mechanism.
The data also suggest that different fusion sites, vesicles, or secretory cells can use other lipidic components, in addition to sterols, to provide optimal local *NC* and even to modulate the fusion process.

• **Alaa El-Husseini (Department of Psychiatry, University of British Columbia, Vancouver, Canada) – Elaboration of dendritic filopodia is not a rate-limiting step for production of stable axonal-dendritic contacts.**

Dendritic filopodia are thought to play an active role in synaptogenesis and to serve as precursors to spine synapses. However, this hypothesis is largely based on a temporal correlation between the onset of filopodia elaboration and synaptogenesis. We have previously demonstrated that the palmitoylated protein motifs of GAP-43 and paralemmin are sufficient to increase the number of filopodia and dendritic branches in neurons. Here we examined whether filopodia induced by these motifs, as well as those induced by cdc42, lead to the formation of stable synaptic contacts and the development of dendritic spines. Our analysis shows that expression of these filopodia-inducing motifs (FIMs) or the constitutively active form of cdc42 enhances filopodia motility but reduces the probability of forming a stable axon-dendrite contact. Conversely, expression of neuroligin-1, a synapse-inducing cell adhesion molecule, resulted in a decrease in filopodia motility, an increase in the number of stable axonal contacts, and the recruitment of synaptophysin-positive transport packets. Postsynaptic scaffolding proteins such as Shank-1, which induce the maturation of spine synapses, reduced filopodia number but increased the rate at which filopodia transformed into spines. By following individual dendrites over a 2-day period, we determined that relatively few sites with filopodia are replaced by spine synapses (\~3%). These results suggest that high levels of filopodia elaboration and motility are not necessarily a rate-limiting step for synapse formation, and that factors controlling filopodia dynamics may participate in synapse formation by rapidly stabilizing the initial contact between dendritic filopodia and axons.

• **Vasco Galhardo (Institute for Molecular and Cell Biology, University of Porto, Porto, Portugal) – Impairment in prefrontal-based emotional decision-making in rat models of chronic pain**

Chronic pain is known to cause several cognitive deficits in human subjects. Among these deficits is the inability to perform correctly in decision-making tasks that have a risk component, such as rewards of variable value. This cognitive impairment is known to occur after amygdalar or orbitofrontal lesions, where individuals are incapable of long-term planning and take high-risk decisions even if they lead to overall losses. It was recently shown that chronic pain patients also present this pattern of impaired decision-making (Apkarian et al., Pain 108:129, 2004). However, no studies in chronic pain animal models have addressed poor performance in frontal-based cognitive tasks. For this reason we developed a novel behavioural task based on repeated, simple reward-based decisions, and studied its performance by control, frontal-lesioned and chronic pain animals (n = 6 per group). The task consisted of consecutive trials in which a rat entered an operant chamber and had to choose between two levers to recover a food reward. After each trial, the animal was removed to a contiguous chamber where it waited for a sound signal to begin a new trial. During the 15 days of the training phase, both levers gave equal pseudo-random rewards: one food pellet in 8 of 10 presses, and no reward in the other two (low risk). In the probe trial, one of the levers was modified to give 3 food pellets, but only in 3 of 10 visits (high risk). The pattern of 120 consecutive choices was used to calculate the lever-choice index (low-risk choices minus high-risk choices, divided by the number of completed trials; see the sketch after this abstract). In the first 60 trials all the animals (controls, lesioned and monoarthritic) preferentially chose the large-reward lever, but control animals reversed their pattern of choice in the second half of the session. When analyzing the last 30 entries, controls had a choice index of +0.42 ± 0.17, while monoarthritic rats had -0.48 ± 0.14, neuropathic rats -0.53 ± 0.21 and frontal-lesioned animals -0.58 ± 0.31. We have shown for the first time that chronic pain induces complex changes in the cognitive neural processes that handle immediate decision-making in the rat (Support: FCT-POCI/55811/2004).
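As a concrete illustration, the lever-choice index defined above can be computed as in the minimal sketch below; the trial encoding is our own invention, not part of the study.

```python
def lever_choice_index(choices):
    """choices: sequence of 'low'/'high' lever picks, one per completed trial.
    Index = (n_low - n_high) / n_completed, so +1 means exclusively
    low-risk choices and -1 exclusively high-risk choices."""
    n_low = sum(1 for c in choices if c == "low")
    n_high = sum(1 for c in choices if c == "high")
    return (n_low - n_high) / len(choices)

# A control-like pattern over 30 entries (mostly low-risk choices)
# yields a positive index comparable to the reported +0.42:
print(lever_choice_index(["low"] * 21 + ["high"] * 9))  # 0.4
```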
• **Wen-Biao Gan (Skirball Institute, New York University School of Medicine, New York, USA) – Dendritic Spine Stability And Its Modification By Experience**

The nervous system requires not only synaptic plasticity for learning but also stability for long-term information storage. To study the degree of synaptic structural plasticity in intact animals, we developed a transcranial two-photon imaging technique to follow individual postsynaptic dendritic spines over time in transgenic mice over-expressing Yellow Fluorescent Protein. Using this technique, we found that in young adolescent mice (1 month old), 13–20% of spines were eliminated and 5–8% were formed over 2 weeks in visual, barrel, motor and frontal cortices, indicating a cortex-wide loss of spines during this developmental period. In adult mice (4–6 months old), 3–5% of spines were eliminated and formed over 2–4 weeks in various cortical regions. When imaged over 19 months, only 26% of adult spines were eliminated and 19% were formed in barrel cortex. Thus, after a concurrent reduction in the number of spines across diverse regions of the young adolescent cortex, spines become remarkably stable, and a majority of them can last throughout life.

To determine how spine dynamics are modified by experience, we examined the effect of long-term sensory deprivation via whisker trimming on dendritic spines in the barrel cortex. During young adolescence, when a substantial net loss of spines occurs, we found that whisker trimming reduces the rate of ongoing spine elimination more than that of spine formation. This effect of deprivation diminishes as animals mature but still persists in adulthood. In addition, restoring sensory experience following adolescent deprivation accelerates spine elimination but has no significant effect on spine formation. The rate of spine elimination also decreases after chronic blockade of NMDA receptors with the antagonist MK801 and accelerates after drug withdrawal.
These studies underscore the important role of sensory experience in spine elimination over the majority of an animal's life span, particularly during adolescence.

• **Jianguo Gu (Department of Oral and Maxillofacial Surgery, University of Florida, Gainesville, USA) – TRPM8 and cold allodynia**

Peripheral nerve injury often results in neuropathic pain manifested as both mechanical and thermal allodynia. Thermal allodynia in neuropathic pain conditions includes cold and heat allodynia. While TRPV1 has been found to be involved in heat allodynia, the molecular mechanisms of cold allodynia remain unclear. Recently, the transient receptor potential channel M8 (TRPM8) was found to be a cold- and menthol-sensing receptor expressed on a subpopulation of primary afferent fibers. Here we report the upregulation of TRPM8 expression on nociceptive-like afferent neurons in chronic constrictive nerve injury (CCI) rats that manifested cold allodynia. We found that not only was the number of TRPM8-expressing neurons increased, but the responsiveness of these afferent neurons to cold and menthol was also enhanced following CCI. These results suggest that TRPM8 upregulation is associated with cold allodynia and may be one of its underlying mechanisms.

• **Kazuhide Inoue (Department of Pharmaceutical Health Care and Sciences, Kyushu University, Kyushu, Japan) – P2X4: mechanisms of overexpression in the neuropathic pain state**

There is abundant evidence that extracellular ATP and other nucleotides have an important role in pain signaling both at the periphery and in the CNS. Recent findings suggest that endogenous ATP and its receptor system might be involved in neuropathic pain. Neuropathic pain is often a consequence of nerve injury through surgery, bone compression, diabetes or infection. This type of pain can be so severe that even light touch can be intensely painful; unfortunately, this state is generally resistant to currently available treatments. We recently reported that the expression of P2X4 receptors in spinal microglia is enhanced after peripheral nerve injury and that blocking P2X4 receptors pharmacologically, or suppressing them molecularly, reduces neuropathic pain behaviour (Nature 424:778–783, 2003). We also reported that brain-derived neurotrophic factor (BDNF), released from microglia upon P2X4 stimulation, causes a depolarizing shift in the anion reversal potential of lamina I neurons in rats with nerve injury (Nature 438:1017–1021, 2005), resulting in neuropathic pain. Understanding the key roles of these ATP receptors may lead to new strategies for controlling pain.

• **John Isaac (NINDS, NIH, Bethesda, USA) – Rapid, Activity-Dependent Plasticity in Timing Precision in Neonatal Barrel Cortex**

During development, neuronal networks acquire the ability to precisely time events. This is a critical developmental step, since precise timing is required for information processing and plasticity in the adult brain. Despite this, it is not known what process drives this maturation in timing. I will present recent work from my laboratory showing that long-term potentiation (LTP) induced at thalamocortical synapses in neonatal layer IV barrel cortex produces a rapid and dramatic improvement in input and output timing precision. LTP reduces the latency and variability of synaptically evoked action potentials and reduces co-incidence detection for synaptic input.
In contrast, LTP has only a small and variable effect on synaptic efficacy. This improvement in timing occurs during development, suggesting that the process operates in vivo in the developing barrel cortex. Thus, rather than increasing synaptic efficacy, the primary role of this form of neonatal LTP is to enable neurons to precisely time events.

• **Koichi Iwata (Department of Physiology, Nihon University, Tokyo, Japan) – Anterior cingulate cortex and pain: its morphological features and functional properties**

It is well known that the anterior cingulate cortex (ACC) has a variety of functions related to pain, including pain perception. Many ACC neurons respond to noxious and non-noxious stimulation of the body. Most of these neurons have large receptive fields and increase their firing frequency as stimulus intensity increases. ACC nociceptive neurons have very specific morphological features, such as a small soma, a large number of spines on the dendritic trees, and axon collaterals spreading over a wide area of the ACC. In a retrograde trans-synaptic tracing study, we found that ACC neurons receive predominantly A-delta afferent inputs. We also analyzed the responses of ACC nociceptive neurons in awake, behaving monkeys. A small number of ACC neurons modulated their activity during noxious heating of the facial skin. The neuronal activity was significantly higher when monkeys escaped from a noxious heat stimulus than when they detected a small change in temperature (T2) above a larger initial shift (T1). No relationship between firing frequency and detection latency for the T2 stimulus was observed. These findings suggest that ACC nociceptive neurons are involved in attention to pain and escape from pain, but not in the sensory-discriminative aspect of pain.

• **Zhengping Jia (Department of Physiology, University of Toronto, Toronto, Canada) – Molecular regulation of spine properties and synaptic plasticity**

The dendritic spine is the major postsynaptic site of excitatory synapses, and its changes are linked to synaptic plasticity, memory formation and various forms of mental and neurological disorders. However, the molecular mechanisms that govern spine development and regulation are poorly defined. We take genetic approaches in mice to identify and characterize the molecular signaling processes involved in the regulation of spine formation, spine morphology, and spine and synaptic plasticity. Specifically, we are interested in the signal transduction pathways stimulated by the Rho family of small GTPases, key mediators of actin dynamics in response to various external stimuli. Our objective is to define the in vivo function and synaptic regulation of Rho signaling in the context of spine properties, hippocampal long-term potentiation and fear memory formation. The specific roles and the underlying mechanisms of the various components required for normal Rho signaling will be discussed.

• **Bong-Kiun Kaang (Department of Biological Sciences, Seoul National University, Seoul, Korea) – Role of a novel nucleolar protein ApLLP in synaptic plasticity and memory in Aplysia**

In Aplysia, long-term synaptic plasticity is induced by serotonin (5-HT) or neural activity and requires gene expression. Here, we demonstrate that ApLLP, a novel nucleolar protein, is critically involved in both long-term facilitation (LTF) and behavioral sensitization.
Membrane depolarization induced ApLLP expression, which activated ApC/EBP expression through direct binding to CRE. LTF was produced by a single pulse of 5-HT 30 min after the membrane depolarization. This LTF was blocked when either ApLLP or ApC/EBP was blocked by specific antibodies. In contrast, ApLLP overexpression induced LTF in response to a single 5-HT treatment. Similarly, a siphon noxious stimulus (SNS) to intact Aplysia induced ApLLP and ApC/EBP expression, and a single tail shock 30 min after SNS transformed short-term sensitization of the siphon withdrawal reflex into long-term sensitization. These results suggest that ApLLP is an activity-dependent transcriptional activator that switches short-term facilitation to long-term facilitation.

• **Mikito Kawamata (Department of Anesthesiology, Sapporo Medical University School of Medicine, Sapporo, Japan) – Genetic variation in response properties of spinal dorsal horn neurons and rostral ventromedial medulla neurons in different mouse strains**

Although various methods of analgesia are currently used for persistent pain, such as inflammatory and neuropathic pain, optimal pain therapy has still not been established. This may be related, at least in part, to the variability of perceived pain among patients. Recent behavioral studies have shown that nociception in the mouse is heritable, which may reflect variable sensitivity to tissue injury-induced pain in humans (Mogil et al., 1996). Noxious information is transmitted through fine myelinated Aδ and unmyelinated C afferents from the periphery to the superficial dorsal horn (SDH), especially to the substantia gelatinosa (SG, lamina II of Rexed) (Light and Perl, 1979). This sensory information is modified and integrated in the SG and consequently regulates the output of projection neurons located in lamina I and laminae V-VI (Cervero and Iggo, 1980; Eckert et al., 2003). In addition, the descending inhibitory influences from supraspinal structures, including the rostral ventromedial medulla (RVM), on SDH neurons are known to be modified under certain pathological conditions (Basbaum, 1973; Dubuisson and Wall, 1980; Laird and Cervero, 1990; Sandkuhler et al., 1995; Wall et al., 1999).

Thus, nociceptive network circuits in the central nervous system, including the SDH and RVM, may play an important role in the differing pain sensitivities of individuals. Our hypothesis is that SDH and RVM neurons in different mouse strains show different response properties following tissue injury and different sensitivities to analgesics, depending on genetic background. To test this hypothesis, *in vivo* extracellular recordings and *in vivo* whole-cell patch-clamp recordings were made from SDH neurons located in the deep laminae (laminae V-VI) and from superficial SDH neurons located in lamina II, respectively, in different strains of mice (A/J, C57BL/6J, and CBA/J) before and after tissue injury induced by surgical incision and formalin injection, according to previously described methods (Furue et al., 1999; Kawamata et al., 2005).
In a separate study, single neuronal activity was isolated from different types of RVM neurons, namely ON cells, OFF cells and NEUTRAL cells, and the response properties of these neurons were determined before and after intraventricular injection of DAMGO or surgical injury.

The results showed that different mouse strains have different sensitivities to postoperative pain and formalin-induced pain, reflecting the different characteristics of SDH neurons in these strains following surgical incision and application of formalin. Responses of RVM neurons also differed among strains following surgical injury, and the strains showed different sensitivities to morphine. The results suggest that pain intensity and pain mechanisms depend, at least in part, on the genetic background of the individual. Furthermore, the mechanisms of pain seen in a clinical setting may thus differ among individuals depending on the response properties of their SDH and RVM neurons.

• **Satoshi Kida (Department of Agricultural Chemistry, Tokyo University of Agriculture, Tokyo, Japan) – Mechanism of interaction between reconsolidation and extinction of contextual fear memory**

Retrieval of conditioned fear memory initiates two potentially dissociable but opposite processes: reconsolidation and extinction. Reconsolidation acts to stabilize, whereas extinction tends to weaken, the expression of the original fear memory. To understand the mechanisms regulating memory stability after retrieval, we have investigated the relationship between reconsolidation and extinction using contextual fear conditioning, an associative learning between context (conditioned stimulus; CS) and fear (unconditioned stimulus; US). We first examined the effects of the duration of re-exposure to the CS on memory reconsolidation and extinction. Protein synthesis inhibition following short re-exposure (3 min) to the CS disrupted the contextual fear memory, indicating that short re-exposure induces memory reconsolidation. In contrast, protein synthesis inhibition following long re-exposure (30 min) blocked memory extinction. Importantly, in the extinction phase, the contextual fear memory remained intact even though protein synthesis was inhibited. These observations suggest an interaction between the memory reconsolidation and extinction phases; indeed, memory extinction seems to be associated with the regulation of fear memory stability after retrieval.

To further understand how the extinction phase interacts with the reconsolidation phase, we reasoned that if the two phases interact at the molecular level, molecules functioning in one phase should also function in the other. We therefore compared the molecular signatures of these processes using pharmacology and mouse genetics. Pharmacological experiments using antagonists of cannabinoid receptor 1 (CB1) and L-type voltage-gated calcium channels (LVGCCs), which play essential roles in memory extinction, indicated that both CB1 and LVGCCs are required for memory extinction but not for consolidation or reconsolidation. More importantly, double injection of anisomycin and an antagonist of either CB1 or LVGCCs prevented the disruption of the original memory by protein synthesis inhibition. These results suggest that CB1 and LVGCCs are required not only for memory extinction but also for the destabilization of reactivated memory.
We are now performing similar experiments using conditional CREB mutant mice.

In addition, to compare the brain regions associated with reconsolidation and extinction, we used immunocytochemistry to identify regions showing increased CREB activity in the reconsolidation and extinction phases. We observed increases in CREB phosphorylated at serine 133 in the amygdala and hippocampus following short re-exposure to the CS (which induces memory reconsolidation), and in the amygdala and prefrontal cortex following long re-exposure to the CS (which induces memory extinction). These observations suggest that acquisition of memory extinction prevents activation of the hippocampus, resulting in preservation of the contextual fear memory.

Taken together, our findings indicate an interaction between memory extinction and the regulation of memory stability at the molecular, anatomical and behavioral levels. Further understanding of the mechanisms of this interaction should clarify the significance of memory reconsolidation.

• **Eric Klann (Department of Molecular Physiology and Biophysics, Baylor College of Medicine, Houston, USA) – Translational Control During Hippocampal Synaptic Plasticity and Memory**

Altered gene expression is a hallmark of long-lasting synaptic plasticity and long-term memory. Regulation of local protein translation permits synapses to control synaptic efficacy independently of mRNA synthesis in the cell body. Recent studies, including several from this laboratory, have identified biochemical signaling cascades that couple neurotransmitter and neurotrophin receptors to the translation regulatory machinery in translation-dependent forms of synaptic plasticity and memory. In this presentation, these translation regulatory mechanisms and the signaling pathways that govern the expression of various forms of translation-dependent synaptic plasticity and memory will be discussed, along with the synaptic plasticity and memory deficits of genetically engineered mice that lack specific translation factors and translation regulatory proteins. These studies have revealed interesting links among the biochemical activities of translation factors, synaptic plasticity, and memory that are likely to be important for other forms of plasticity and behavior, such as those underlying pain and drug addiction.

• **Tatsuro Kohno (Division of Anesthesiology, Niigata University, Niigata, Japan) – Different actions of opioid and cannabinoid receptor agonists in neuropathic pain**

Peripheral nerve injury causes neuropathic pain, which is characterized by hyperalgesia and allodynia to mechanical and thermal stimuli. Neuropathic pain has traditionally been considered resistant to intrathecal opioids; however, the efficacy of opioids in treating neuropathic pain remains controversial. In contrast, increasing evidence indicates that cannabinoids are effective in alleviating neuropathic pain. We evaluated the effects of opioids and cannabinoids in two independent partial peripheral nerve injury models, the spared nerve injury (SNI) and spinal nerve ligation (SNL) models. In both the SNI and SNL rat models of peripheral neuropathic pain, the presynaptic inhibitory effect of the μ opioid receptor (MOR) agonist DAMGO on primary afferent-evoked excitatory postsynaptic currents (EPSCs) and miniature EPSCs in superficial dorsal horn neurons is substantially reduced, but only in those spinal cord segments innervated by injured primary afferents.
The two nerve injury models also reduce the postsynaptic potassium channel-opening action of DAMGO on lamina II spinal cord neurons, but again only in segments receiving injured afferent input. The inhibitory action of DAMGO on ERK (extracellular signal-regulated kinase) activation in dorsal horn neurons is likewise reduced in affected segments following nerve injury. MOR expression decreases substantially in injured dorsal root ganglion (DRG) neurons, while intact neighboring DRGs are unaffected. In contrast to the MOR agonist, the selective CB1 receptor agonist ACEA still suppressed C-fiber-induced ERK activation in dorsal horn neurons in injured spinal cord segments from SNL rats. These studies suggest that opioids may have reduced efficacy in patients whose pain is generated mainly by injured nociceptor discharge, although opioids may still suppress neuropathic pain by acting on intact primary afferents or via supraspinal mechanisms. Because the efficacy of cannabinoids in suppressing C-fiber-induced ERK expression is fully retained in the injured spinal segments after nerve ligation, our results support an undiminished potency of cannabinoids in attenuating neuropathic pain. Our data also suggest that opioids and cannabinoids may be subject to different regulatory mechanisms in neuropathic pain.

• **Min Li (Department of Neuroscience and High Throughput Biology Center, Johns Hopkins University, Baltimore, USA) – Chemical regulation of membrane excitability**

Biological phenomena – ranging from the neuronal action potential to rhythmic cardiac contraction, sensory transduction and hormone secretion – are ultimately controlled by one class of proteins: the ion channels. Changes in ion channel activity caused by genetic mutations or by drugs are causes of human disease and the basis of therapeutics. Potassium channels are critical to a variety of biological processes and represent a very large class of ion channel proteins permeable to potassium ions. Of the more than 400 ion channel genes in the human genome, at least 167 are annotated as potassium channels.

The regulation and biogenesis of potassium channels are important processes, essential to understanding their physiological roles. Recent evidence indicates that the cardiotoxicity of many human drugs directed at other targets is caused by inhibition of a subset of potassium channels through different mechanisms. These drugs are chemically stable and readily available; they therefore represent useful chemical probes for investigating potassium channel regulation at both the molecular and the cell-biological level. Using a combination of high-throughput chemical biology approaches and detailed biochemical and electrophysiological analyses, we have screened and identified a number of regulatory compounds with unique mechanisms of action in regulating potassium channels.

• **Xiao-Jiang Li (Department of Human Genetics, Emory University, Atlanta, USA) – Synaptic toxicity of Huntington disease protein**

Huntington's disease (HD) is characterized by the selective loss of striatal projection neurons. In early stages of HD, neurodegeneration preferentially occurs in the lateral globus pallidus (LGP) and substantia nigra (SN), two regions where the axons of striatal neurons terminate.
This unique neuronal structure, characterized by numerous neuronal processes that interact with each other at their terminals, may confer preferential vulnerability to expanded polyQ proteins. In HD mice that precisely and genetically mimic the expression of full-length mutant huntingtin (htt) in HD patients, we found that degraded N-terminal fragments of htt preferentially form aggregates in the striatal neurons that are most affected in HD. More importantly, neuropil aggregates form preferentially in the processes of striatal neurons. In HD transgenic mice that express N-terminal mutant htt, the progressive formation of these neuropil aggregates correlates with disease progression. We also observed degenerated axons in which htt aggregates were associated with dark, swollen organelles resembling degenerated mitochondria. These findings suggest that the early neuropathology of HD originates from axonal dysfunction and degeneration associated with htt neuropil aggregates.

• **John F MacDonald (Department of Physiology, University of Toronto, Toronto, Canada) – Inhibitory Regulation of the Src Hub and LTP in CA1 Hippocampal Neurons**

The induction of long-term potentiation (LTP) at CA1 synapses of the hippocampus requires an influx of Ca^2+^ via N-methyl-d-aspartate receptors (NMDARs). High-frequency stimulation depolarizes CA1 neurons, relieving the voltage-dependent block of NMDARs by Mg^2+^ and permitting the entry of Ca^2+^ that is critical for this induction. Thus, NMDARs serve as co-incident detectors of the LTP-inducing afferent input to CA1 neurons. Enhanced activation of the non-receptor tyrosine kinase Src is also required for this co-incidence function, and Src is the convergent target of a variety of G-protein-coupled receptors (GPCRs) of the Gαq family (e.g. LPA, muscarinic, mGluR5 and PACAP receptors). These GPCRs stimulate a Src-dependent upregulation of NMDARs via sequential activation of PKC and the non-receptor tyrosine kinase Pyk2, which is also required for the induction of LTP. Src therefore acts as a hub for the regulation of the induction of LTP at CA1 synapses.

Signaling pathways that inhibit Src, and thereby inhibit the induction of LTP, have not been extensively studied. We have previously shown that platelet-derived growth factor receptors (PDGFRβ) inhibit NMDARs in CA1 neurons by a PKA-dependent but Src-permissive mechanism. For example, in inside-out patches from cultured hippocampal neurons, PKA fails to inhibit NMDAR channel activity unless the activity is first enhanced with a Src-activator peptide. Furthermore, we show that in hippocampal slices PDGF-BB (the receptor ligand) inhibits the induction of LTP. The initial step in this pathway requires phosphorylation of tyrosine 1021 of the PDGFR, which forms an SH2 docking site for PLCγ. PLCγ interacts with another non-receptor kinase, Abelson kinase (Abl), which, among other activities, regulates PDGFR activity via biochemical feedback. In recordings from single isolated CA1 pyramidal neurons, we show that intracellular application of Abl kinase strongly inhibits currents evoked by applications of NMDA. This inhibition is reversibly blocked by extracellular application of the PDGFR antagonist Gleevec, demonstrating the dependence of this response on PDGFR activity.
How PDGFR, PLCγ and Abl kinase activity translates into inhibition of NMDARs is not fully understood and is currently under investigation.

• **Karim Nader (Department of Psychology, McGill University, Montreal, Canada) – Identifying the neural mechanisms by which boundary conditions inhibit reconsolidation from occurring.**

Although memory reconsolidation has been demonstrated in various learning tasks and animal models, suggesting it is a fundamental process, reports of boundary conditions imply that reconsolidation is not ubiquitous. These boundary conditions, however, remain poorly defined at the behavioral, systems and molecular levels. To ameliorate this situation, we characterized reconsolidation of strong memories across all three levels of analysis. At the behavioral level, we demonstrated that this boundary condition is transient: infusions of anisomycin into the lateral and basal amygdala of rats did not impair reconsolidation of overtrained auditory fear memories 2 or 7 days after training, but did so 30 or 60 days after training. At the systems level, we showed that the hippocampus imposes the boundary condition on the amygdala, as the overtrained memory underwent reconsolidation 2 days after training in animals with pre-training dorsal hippocampus lesions. At the molecular level, we demonstrated that the degree of expression of NR2B-containing NMDA receptors in the amygdala modulates reconsolidation of overtrained fear memories: these receptors, which we previously identified as essential for the transformation of a consolidated memory back to a labile state, were down-regulated 2, but not 60, days after overtraining. Furthermore, animals with pre-training hippocampus lesions, which did not exhibit the overtraining boundary condition two days after training, had normal levels of NR2B subunit expression at that time point. These findings make three conceptual advances in our understanding of reconsolidation: first, boundary conditions can be transient; second, boundary conditions can be imposed by other brain systems; and third, one mechanism mediating the manifestation of boundary conditions is down-regulation of the receptors that are critical for inducing reconsolidation.

• **Peter V Nguyen (Department of Physiology, University of Alberta, Edmonton, Canada) – Beta-Adrenergic Receptors Recruit ERK and mTOR to Promote Translation-Dependent Synaptic Plasticity**

A key question in neuroscience research is: how does activation of neuromodulatory receptors initiate protein synthesis during long-term synaptic plasticity? Activation of beta-adrenergic receptors can enhance long-term memory and modulate long-term synaptic plasticity in the mammalian hippocampus. Protein synthesis is required for the persistence of long-term potentiation (LTP) and for the consolidation of long-term memory. However, the intracellular signaling cascades that couple beta-adrenergic receptors to translation initiation and subsequent protein synthesis are unidentified. We used electrophysiological recordings in area CA1 of mouse hippocampal slices to investigate the recruitment of signaling cascades necessary for beta-adrenergic LTP. We found that maintenance of this LTP requires the extracellular signal-regulated kinase (ERK) and mammalian target of rapamycin (mTOR) pathways, but not cAMP-dependent protein kinase (PKA).
Consistent with these findings, treatment of hippocampal slices with isoproterenol, a beta-adrenergic agonist, increases phosphorylation of eukaryotic initiation factor 4E (eIF4E), the eIF4E kinase Mnk1, and the translation repressor 4E-BP2. These translational regulators can be phosphorylated in an ERK- and mTOR-dependent manner. Moreover, activation of beta-adrenergic receptors eliminates the deficits in late-LTP seen in transgenic mice that express reduced hippocampal PKA activity. Our results identify specific intracellular signaling pathways that link beta-adrenergic receptor activation at the membrane to translation initiation within the cytosol. More importantly, our data reveal a molecular mechanism for neuromodulatory control of protein synthesis during LTP, a process that is required for the formation of long-lasting memories. \[Funded by Alberta Heritage Fdn. for Med. Res. and CIHR, NIH, NIMH, and the Fragile X Research Fdn\].

• **Uhtaek Oh (Sensory Research Center, Seoul National University, Seoul, Korea) – TRPV1 and its Role in Inflammatory Pain**

Capsaicin (CAP) is the pungent ingredient in hot peppers and has a unique action on the pain sensory system: it causes pain when applied to the skin. The hyperalgesic action of CAP is mediated by the excitation of sensory neurons. CAP is known to activate ion channels that allow cation influx, thereby depolarizing sensory neurons. The CAP-activated ion channel and its properties were subsequently identified: the channel is ligand-gated and permeable to various cations. The gene encoding the CAP-sensitive current was cloned and dubbed VR1 (vanilloid receptor 1). The primary structure of VR1 shows that it belongs to the transient receptor potential (TRP) channel family, having six transmembrane domains with long cytosolic sequences at both the N- and C-termini. Under the recent nomenclature, VR1 is classified as TRPV1. Mice deficient in TRPV1 lack the thermal pain induced by inflammation; thus, TRPV1 is most likely involved in the mediation of inflammatory pain. In the present symposium, I introduce our research on TRPV1, most notably evidence for the involvement of TRPV1 in inflammatory pain signaling pathways.

The presence of the TRPV1 receptor and its apparent role in pain suggest the existence of endogenous activators, and endogenous activators of TRPV1 were therefore sought. In our previous report, the hyperalgesic neural response to inflammation, such as c-fos expression in the dorsal horn of the spinal cord, was blocked by capsazepine, a CAP receptor blocker, suggesting that an endogenous capsaicin-like substance is produced and causes hyperalgesia by opening capsaicin-activated channels. Because ligands bind from the intracellular side of the channel, the endogenous ligands are likely produced in the cell. We initially tested many intracellular messengers on the CAP channel to determine whether they activate it, and found that products of lipoxygenases (LOs) are capable of activating the channel. Interestingly, LO products are implicated in mediating inflammatory nociception, because various LO products are produced during inflammation and cause hyperalgesia when injected intradermally. In addition, LO products often function as intracellular messengers in neurons.
Among their actions, LO products act directly on K+ channels in Aplysia sensory neurons (Piomelli et al., 1987) and in mammalian cardiac muscle cells.

In the present seminar, we present evidence that LO products directly activate the CAP receptor in isolated membrane patches of sensory neurons. When applied to the bath of inside-out patches, 12-hydroperoxyeicosatetraenoic acid (12-HPETE) activated single-channel currents that were sensitive to capsazepine. The I-V curve of single-channel currents activated by 12-HPETE is outwardly rectifying and identical to that obtained with CAP, and the amplitudes of single-channel currents activated by 12-HPETE and CAP do not differ. These results indicate that the channel currents activated by 12-HPETE are identical to those activated by CAP. The channels activated by 12-HPETE are permeable to various cations. LO products also activate TRPV1, the cloned CAP receptor, expressed in HEK293 cells. LO products other than 12-HPETE also activated the CAP channels; among them, 12- and 15-HPETE, 5- and 15-(S)-hydroxyeicosatetraenoic acids, and leukotriene B4 possess the highest potency. Dose-response relationships reveal half-maximal concentrations for 12-HPETE, 15-HPETE, leukotriene B4, and 5-HETE of 8.0, 8.7, 9.2, and 11.7 μM, respectively (see the sketch following this abstract), much lower potencies than that of CAP. Anandamide, the endogenous ligand for cannabinoid receptors, also activates the channel, with a half-maximal dose of 11.7 μM. Because prostaglandins (PGs) are known to be related to pain, various PGs were applied to the CAP receptors; they failed, however, to activate the channel. Other saturated and unsaturated fatty acids were also tested, and all failed to activate the channels.

The results of our study indicate that CAP and various eicosanoids act on the capsaicin receptor, suggesting a structural similarity between CAP and eicosanoids. Thus, the structures of eicosanoids and CAP in the energy-minimized state were superimposed to compare their three-dimensional structures. Three-dimensional structures of 12-(S)-HPETE, 15-(S)-HPETE, 5-(S)-HETE, and LTB4 were compared with that of CAP. Interestingly, CAP in the energy-minimized state fits well to the S-shaped 12-HPETE. In particular, the phenolic hydroxide and amide moieties in CAP overlap precisely with the carboxylic acid and hydroperoxide moieties in 12-HPETE, respectively. The two key regions in CAP and 12-(S)-HPETE are known to have a dipolar character that allows hydrogen-bond interactions with the CAP receptor. In addition, the aliphatic chain region of 12-(S)-HPETE fits well with the alkyl chain of CAP. In contrast, 15-HPETE, 5-HETE and LTB4 shared less structural similarity with CAP.

Because LO products activate the channel, an obvious question is what stimulates the LO/TRPV1 pathway to cause pain. Although bradykinin (BK) is a powerful pain-causing inflammatory mediator, its mechanism of sensory neuron activation is not known. Because BK releases arachidonic acid, a key substrate for LO, in sensory neurons, we hypothesized that BK activates TRPV1 via the PLA2/LO pathway. To test this hypothesis, we performed electrophysiological experiments, Ca2+ imaging, and chemical analysis of LO products. We observed that BK-evoked whole-cell currents recorded from sensory neurons were significantly reduced by capsazepine (CZP), a capsaicin receptor antagonist.
In the skin nerve preparation, CZP, quinacrine (a PLA2 inhibitor) and NDGA (a LO inhibitor) reduced BK-induced excitation of sensory nerves. In addition, quinacrine, NDGA and CZP blocked BK-induced Ca2+ influx. To examine whether BK in fact causes sensory neurons to release LO lipid products, we used HPLC coupled with radioisotope detection. We confirmed that 12-HETE, an immediate downstream metabolite of 12-HPETE, was indeed released from sensory neurons after BK application.

In addition, we present evidence that histamine, another inflammatory mediator, also uses the PLA2/LO/TRPV1 pathway to excite sensory neurons. Application of histamine caused Ca2+ influx, which was blocked by co-application of capsazepine or by Ca2+-free conditions, and likewise by treatment with NDGA or quinacrine. These results suggest that histamine activates TRPV1 through stimulation of PLA2 and LO. Because histamine is a major pruritogenic (itch-causing) substance, identification of the histamine signaling pathway should be helpful in developing anti-pruritic substances to treat itch in atopic dermatitis patients.

This study demonstrates that bradykinin and histamine excite sensory nerve endings by activating TRPV1 via production of 12-LO metabolites of arachidonic acid by activated PLA2. This finding identifies a mechanism that might be targeted in the development of new therapeutic strategies for the treatment of inflammatory pain or itch.
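The half-maximal concentrations reported above can be read through the standard Hill equation for fractional channel activation. The sketch below is a minimal illustration that assumes a Hill coefficient of 1, which the abstract does not report.

```python
def hill_activation(conc_uM, ec50_uM, n_hill=1.0):
    """Fractional activation under the standard Hill equation:
    f = c^n / (c^n + EC50^n). The Hill coefficient is an assumed
    value, not one reported in the abstract."""
    return conc_uM ** n_hill / (conc_uM ** n_hill + ec50_uM ** n_hill)

# Half-maximal concentrations (uM) quoted for TRPV1 activation:
ec50 = {"12-HPETE": 8.0, "15-HPETE": 8.7, "LTB4": 9.2, "5-HETE": 11.7}
for lipid, k in ec50.items():
    # Predicted fractional activation at 10 uM of each lipid:
    print(lipid, round(hill_activation(10.0, k), 2))
```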
• **Ke Ren (Department of Biomedical Sciences, University of Maryland, Baltimore, USA) – Neuronal/glial cell interactions in CNS plasticity and persistent pain**

Nerve signals arising from sites of tissue injury lead to long-term changes in the central nervous system (CNS) referred to as central sensitization. Ample evidence indicates that central sensitization underlies mechanisms of persistent pain after injury. The emerging literature strongly implicates a role for neuronal/glial cell interactions in central sensitization and hyperalgesia. Through still unknown mechanisms, glia can be activated after injury and release chemical mediators, such as inflammatory cytokines, that modulate neuronal activity and synaptic strength. Such glia-cytokine-neuron interactions may be critical in the chronic pain process. We tested this hypothesis in a rat model of synaptic plasticity and persistent pain. Tissue injury was produced by injecting complete Freund's adjuvant (CFA), an inflammatory agent, into the masseter muscle of the Sprague-Dawley rat. We first examined whether masseter inflammation induced glial activation in the spinal trigeminal complex (STC), the initial relay site for trigeminal nociceptive information. The results showed that masseter inflammation induced a selective and time-dependent increase in glial fibrillary acidic protein (GFAP) levels, an indication of astroglial activation, in the STC. We next examined whether activation of glia by masseter inflammation is accompanied by an increase in inflammatory cytokine levels. Using western blot and immunohistochemistry, an increase in IL-1beta in the STC was observed after masseter inflammation. The increase in IL-1beta was seen as early as 30 min after inflammation and lasted for about a week. Interestingly, the CFA-induced IL-1beta selectively colocalizes with GFAP, but not with NeuN, a neuronal marker, or CD11b, a marker of activated microglia. These results suggest that activated astrocytes are the source of IL-1beta release in the STC after masseter inflammation. To demonstrate the association of inflammation-induced cytokine release with glial activation, we tested the effect of propentofylline, a non-selective modulator of glia, on changes in GFAP and IL-1beta levels after masseter inflammation. Western blots showed that propentofylline treatment blocked the increases in GFAP and IL-1beta after masseter CFA. We further showed that the increase in GFAP after masseter inflammation was blocked by local anesthesia of the injured site, suggesting its dependence on neuronal input. Interestingly, in a medullary slice preparation, substance P, a transmitter released from primary afferent terminals in the STC, induced an increase in GFAP and IL-1beta. These results are consistent with a role of neuronal signaling in triggering CNS glial activation. Finally, we tested the hypothesis that trigeminal glial activation and inflammatory cytokine release facilitate neuronal plasticity through interactions with neuronal glutamate receptors. We administered IL-1 receptor antagonist (IL-1ra) intrathecally via osmotic pumps at the level of the obex. The results showed that IL-1ra significantly attenuated behavioral hyperalgesia and blocked the increase in NMDA receptor phosphorylation after masseter inflammation. Our findings support a model of reciprocal neuron-glia interactions in the development of CNS plasticity and persistent pain. The model emphasizes activation of glia by injury-generated neuronal input, concomitant cytokine release, and post-translational regulation of NMDA receptor sensitivity through IL-1 receptor signaling. The outcome of these studies will help to identify novel targets and agents for the clinical management of persistent pain. (Supported by NIH grants DE11964, DE15374, DA10275)

• **John C Roder (Department of Medical Genetics and Microbiology, University of Toronto, Toronto, Canada) – Forward and reverse genetic screens in the mouse for mutants impaired in learning and memory**

Learning and memory in the mouse is a quantitative trait, and genes account for 71% of the variance between strains. Our goal here is to identify new genes that contribute to learning and memory. We employed a forward genetic screen using a chemical mutagen (ENU). A total of 2500 ENU mice were pre-screened for normal development. Of these, 10 showed deficits in context-dependent fear conditioning (\< 2 SD from the mean) but normal cue-dependent freezing. A smaller screen was done on 100 mice; two showed deficits in performance on the hidden platform but normal performance on the visible platform (control). All these presumed mutants showed low heritability and penetrance and could not be mapped to chromosomal positions. At this point we revised our screen.

A number of strains (n = 10) were compared in the water maze, and the one showing optimal performance (129S6/SvEvTac) was chosen for ENU mutagenesis. In addition, we changed the screen to a much more difficult task, one that relied on a different sensory modality (sound): trace conditioning.

Upon screening 450 mice in trace conditioning alone, one mouse was obtained that showed no freezing.
Upon screening 450 mice in trace conditioning alone, one was obtained that showed no freezing. This mutant showed robust inheritance and penetrance, and we are in the process of fine mapping, positional cloning and sequencing of the 'antifreeze' locus. Verification of candidate genes will be carried out by BAC rescue of the mutant phenotype with the wild-type gene or, alternatively, by recreating the mutant phenotype in wild-type mice by mutating the wild-type locus in ES cells.\n\nWe will carry out extensive neurobehavioural assays to determine if the learning and memory deficits are restricted to the hippocampus or are found in other brain regions as well (e.g. prefrontal cortex, cerebellum, nucleus accumbens, brain stem, amygdala, striatum). Learning and memory mutants will be tested for their ability to form cognitive spatial maps in the hippocampus in vivo. Neuroanatomical studies will assess whether developmental perturbations underlie these deficits. Gene expression and proteomic studies will identify where the gene is expressed and the biochemical pathway underlying its action. Modifier screens will be carried out to elucidate new genetic pathways. The mutant genes we identify will be models for human genetic diseases that involve impairments in learning and memory. In such cases, the mutant mice will provide test beds for pre-clinical tests of cognitive enhancers intended for patients. In addition, they will suggest new targets for drug development.\n\n\u2022 **Michael W Salter (Department of Physiology, University of Toronto, Toronto, Canada) \u2013 Ins and outs of SRC regulation of NMDA receptors and synaptic plasticity**\n\nRegulation of postsynaptic glutamate receptors is one of the principal mechanisms for producing alterations of synaptic efficacy in the CNS. A growing body of evidence indicates that at glutamatergic synapses NMDA receptors are upregulated by Src family tyrosine kinases, which are opposed by the action of tyrosine phosphatases, one of which has been identified as STEP. Src itself is expressed nearly ubiquitously in higher organisms, with the highest levels of expression found in the CNS. Src represents a point through which multiple signaling cascades from, for example, G-protein-coupled receptors, Eph receptors and integrins converge to upregulate NMDA receptor activity. The upregulation of NMDARs by activation of Src participates in the induction of long-term potentiation of synaptic transmission in the hippocampus and in the spinal cord dorsal horn. We have determined that Src is anchored within the NMDA receptor complex by the protein ND2. Recently, we have found that interfering with the ND2-Src interaction *in vivo* prevents behavioural pain hypersensitivity. Thus, multiple mechanisms control Src in the NMDA receptor complex, and disrupting Src-mediated enhancement of NMDA receptor function affects pathological plasticity in the CNS.\n\n\u2022 **Weihong Song (Department of Psychiatry, University of British Columbia, Vancouver, Canada) \u2013 Hypoxia facilitates Alzheimer's disease pathogenesis**\n\nThe molecular mechanism underlying the pathogenesis of the majority of sporadic Alzheimer's disease (AD) cases is unknown. A history of stroke was found to be associated with development of some AD cases, especially in the presence of vascular risk factors. Reduced cerebral perfusion is a common vascular component among AD risk factors, and hypoxia is a direct consequence of hypoperfusion. We identified a functional hypoxia responsive element (HRE) in the BACE1 promoter.
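For orientation, HREs typically contain the HIF-1 core consensus 5'-RCGTG-3' (R = A or G). A minimal sketch of scanning a promoter for this core; the sequence below is made up for illustration, not the actual BACE1 promoter:

```python
import re

# Scan a promoter sequence for the HIF-1 core consensus RCGTG.
# "promoter" is an invented example sequence.
promoter = "TTGCAGACGTGCCTTAAGCGTGGATCCACGTGA"

for m in re.finditer(r"[AG]CGTG", promoter):
    print(f"candidate HRE core at position {m.start()}: {m.group()}")
```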
Hypoxia increased APP CTF\u03b2 production by increasing BACE1 gene transcription and expression *in vitro* and *in vivo*. Under hypoxic conditions, APP23 mice (Swedish mutant APP transgenic mice) developed more neuritic plaques than normoxic mice, and we found that hypoxia exacerbated the memory impairment of APP23 mice. Our results demonstrate that hypoxia facilitates AD pathogenesis, and interventions that improve cerebral perfusion might benefit AD patients.\n\n\u2022 **Shuzo Sugita (Department of Physiology, University of Toronto, Toronto, Canada) \u2013 Molecular mechanism of GTP-dependent exocytosis**\n\nMany secretory cells utilize a GTP-dependent pathway to trigger exocytotic secretion. However, little is currently known about the mechanism by which this may occur. In the present study we attempted to identify the key signaling pathway that mediates GTP-dependent exocytosis. Incubation of permeabilized PC12 cells with soluble RalA GTPase strongly inhibited GTP-dependent exocytosis. A Ral-binding fragment from Sec5, a component of the exocyst complex, showed a similar inhibition. Point mutations in RalA (RalA^E38R^) and the Sec5 fragment (Sec5^T11A^) that abolish the RalA-Sec5 interaction also abolished the inhibition of GTP-dependent exocytosis. In contrast, RalA and the Sec5 fragment showed no inhibition of Ca^2+^-dependent exocytosis, whereas cleavage of a SNARE (soluble-*N*-ethylmaleimide-sensitive factor attachment protein receptor) protein by botulinum neurotoxin blocked both GTP- and Ca^2+^-dependent exocytosis. In stable RalA and RalB double-knockdown cells, GTP-dependent exocytosis was severely reduced and was restored upon reintroducing expression of RalA or RalB by transfection. However, Ca^2+^-dependent exocytosis remained unchanged in the double-knockdown cells. Our results indicate that GTP- and Ca^2+^-dependent exocytosis use different sensors and effectors for triggering exocytosis, while their final fusion steps are both SNARE-dependent. They also suggest that endogenous RalA and RalB function specifically as GTP sensors for GTP-dependent exocytosis.\n\n\u2022 **Shao-Jun Tang (Department of Neurobiology and Behavior, University of California, Irvine, USA) \u2013 Regulation of Activity-Dependent Protein Synthesis in Dendrites**\n\nProtein synthesis in dendrites is essential for long-lasting synaptic plasticity, but little is known about how synaptic activity is coupled to mRNA translation. Using hippocampal neuron cultures and slices, we have investigated the role of glutamate receptors and mTOR signaling in the control of dendritic protein synthesis. We find: 1) specific antagonists of NMDA, AMPA and metabotropic glutamate receptors abolish glutamate-induced dendritic protein synthesis, whereas agonists of NMDA and metabotropic but not AMPA glutamate receptors activate protein synthesis in dendrites; 2) inhibition of mTOR signaling, as well as of its upstream activators PI3K and AKT, blocks NMDA receptor-dependent dendritic protein synthesis and, conversely, activation of mTOR signaling induces dendritic protein synthesis; and 3) dendritic protein synthesis activated by tetanus-mediated LTP induction in hippocampal slices requires NMDA receptors and mTOR signaling.
These results suggest a critical role for the NMDA receptor-mTOR signaling pathway in regulating protein synthesis in the dendrites of hippocampal neurons.\n\n\u2022 **Yuanxiang Tao (Department of Anesthesiology and Critical Care Medicine, Johns Hopkins University, Baltimore, USA) \u2013 Are the PDZ domains at excitatory synapses potential molecular targets for prevention and treatment of chronic pain?**\n\nThe PDZ (Postsynaptic density 95, Discs large, and Zonula occludens-1) domains are ubiquitous protein interaction modules often found among multi-protein signaling complexes at excitatory synapses. In the mammalian central nervous system, C-terminal motifs of the N-methyl-d-aspartate (NMDA) receptor subunits NR2A and NR2B bind to the first and second PDZ domains of postsynaptic density (PSD)-95, PSD-93, and synaptic-associated protein (SAP)102, whereas the C-terminal motif of the \u03b1-amino-3-hydroxy-5-methyl-4-isoxazolepropionic acid (AMPA) receptor subunit GluR2 interacts with the PDZ domain of protein interacting with C-kinase 1 (PICK1) and glutamate receptor interacting protein (GRIP). These PDZ-containing proteins not only are involved in synaptic trafficking of NMDA receptors and AMPA receptors but also couple the receptors to intracellular proteins and signaling enzymes, such as neuronal nitric oxide synthase (nNOS) and protein kinase C (PKC). Recent preclinical research shows that PSD-93, PSD-95, and PICK1 are highly expressed in the dorsal horn of the spinal cord. Immunocytochemical studies demonstrate that their immunoreactivities occur at a higher density in the superficial laminae and at a lower density in other laminae of the spinal dorsal horn. Spinal PSD-93 or PSD-95 deletion prevents NMDA receptor-dependent chronic pain from spinal nerve injury or injection of complete Freund's adjuvant (CFA) without affecting nociceptive responsiveness to acute pain. In addition, disruption of the PDZ domain-mediated protein interaction between GluR2 and PICK1 in the spinal cord of rats, or knockout of PICK1 in mice, has recently been shown to produce antinociceptive effects in AMPA receptor-dependent chronic pain caused by peripheral nerve injury and CFA, with preservation of acute pain transmission. Further studies have demonstrated that PSD-93 or PSD-95 deletion may alter synaptic NMDA receptor expression and function in spinal cord neurons, which, in turn, may result in impaired NMDA receptor-dependent chronic pain. However, the underlying mechanism by which PICK1 deletion leads to antinociception in chronic pain states is unclear. Our preliminary work indicates that CFA-induced chronic pain might increase time-dependent PKC phosphorylation of GluR2 Ser880, disrupt the interaction of GluR2 with GRIP (but not with PICK1), and lead to PKC-dependent internalization of GluR2 (but not of GluR1) in spinal cord neurons. GluR2 internalization might facilitate Ca2+ permeability and increase AMPAR function and neuronal activity, which may contribute to spinal central sensitization associated with chronic pain states. PICK1 deletion might reduce PKC phosphorylation of GluR2 Ser880 by blocking the recruitment of PKC to synaptic GluR2 and decrease PKC-dependent internalization of GluR2 in spinal cord neurons, which, in turn, might result in blunted AMPA receptor-dependent central sensitization in chronic pain.
Therefore, PDZ domains at excitatory synapses are likely to be new molecular targets for the prevention and treatment of chronic pain.\n\n\u2022 **Yu Tian Wang (Brain Research Center, University of British Columbia, Vancouver, Canada) \u2013 Synaptic plasticity in learning and memory**\n\nSynaptic plasticity (i.e. a dynamic change in the strength of synaptic transmission between neurons), such as long-term potentiation (LTP) and depression (LTD) observed at the glutamatergic synapses of the CA1 region of the hippocampus, has long been proposed as a primary cellular mechanism for learning and memory. However, evidence for a definitive role of either LTP or LTD in learning and memory remains missing due to the lack of a specific inhibitor for LTP or LTD. Evidence accumulated in recent years strongly suggests that AMPA subtype glutamate receptors (AMPARs) are continuously cycling between the plasma membrane and intracellular compartments via vesicle-mediated plasma membrane insertion and endocytosis, and that facilitated AMPAR insertion and endocytosis at postsynaptic membranes contributes to the expression of LTP and LTD, respectively. Using a combination of recombinant receptor expression systems and hippocampal brain slice preparations, we were able to demonstrate that facilitated endocytosis of postsynaptic AMPARs during LTD is AMPAR GluR2 subunit-specific. These studies have led us to develop a GluR2-derived interference peptide that, when delivered into neurons in the brain, can specifically block the expression of LTD without affecting normal basal synaptic transmission in many regions of the brain. Using the membrane-permeant form of the GluR2 peptide as a specific inhibitor of LTD, we were able to probe the role of LTD in freely moving rats with unprecedented specificity, and thereby provide evidence for the involvement of LTD in a number of learning and memory-related behaviours. Our work not only provides the first evidence for a definitive role of LTD in learning and memory, but also demonstrates the utility of peptides that disrupt AMPAR trafficking, the final step in the expression of synaptic plasticity, as tools to examine the critical role of LTD and\/or LTP in specific aspects of learning and memory in conscious animals.\n\n\u2022 **Newton Woo (NICHD, NIH, Bethesda, USA) \u2013 Regulation of Bi-directional Plasticity by BDNF**\n\nInitially characterized for its role in neuronal survival and differentiation, brain-derived neurotrophic factor (BDNF) has emerged as a key regulator of synaptic plasticity, which is a persistent change in synaptic strength thought to underlie many cognitive functions. A salient characteristic of this ubiquitously expressed neurotrophin is that expression of BDNF is activity dependent, which has profound implications in development and neuronal plasticity. BDNF is synthesized as a precursor (proBDNF) that can undergo proteolytic cleavage to yield mature BDNF (mBDNF). Initially, the biological actions elicited by neurotrophins, including BDNF, were thought to arise only from the processed mature form. However, recent groundbreaking studies have demonstrated distinct biological roles for several proneurotrophins and their mature forms via distinct receptor\/signaling cascades. This highlights the importance of the conversion of pro- to mature protein as a key regulatory step in neurotrophin actions. However, whether this proteolytic cleavage plays a role in synaptic plasticity has not been previously addressed.
Here, I present evidence that such a conversion process from pro- to mature BDNF is important for one long-lasting form of synaptic plasticity, namely late-phase LTP (L-LTP). Application of strong theta-burst stimulation (TBS) induces L-LTP that is protein synthesis dependent. In BDNF +\/- mice, L-LTP is significantly impaired after TBS stimulation. However, L-LTP can be rescued in BDNF +\/- mice when hippocampal slices are preincubated with mBDNF but not proBDNF. Subsequent experiments identified that the tPA\/plasminogen system plays a critical role in both BDNF processing and L-LTP expression in the mouse hippocampus. To investigate the location of this conversion process, we performed several additional experiments using cleavage-specific antibodies. It was discovered that in cultured hippocampal neurons, low frequency stimulation triggered proBDNF secretion, whereas high frequency stimulation induced the expression of mBDNF. Strikingly, tPA secretion only occurred after high frequency stimulation. Moreover, surface staining of mBDNF was greatly enhanced upon depolarization. These results suggest that neuronal activity regulates the ratio of extracellular pro- to mature BDNF via tPA secretion. Finally, I present data that proBDNF facilitates hippocampal long-term depression (LTD). This facilitation of NMDAR LTD is dependent on the p75 neurotrophin receptor (p75^NTR^). Mice that lack p75^NTR^ exhibit a selective impairment in the NMDA-dependent form of LTD, but display normal expression of other hippocampal forms of synaptic plasticity. This selective deficit may be the result of a significant reduction in NR2B, an NMDA receptor subunit uniquely involved in LTD. Activation of p75^NTR^ by proBDNF enhanced hippocampal LTD. Our results challenge the classic view that processed neurotrophins are the only functional forms that elicit biological actions, and reveal an unexpected function of p75^NTR^ in regulating the expression of hippocampal synaptic depression. Taken together, these results suggest a universal \"Yin-Yang\" model in which pro- and mature BDNF play diametrically opposite roles in synaptic plasticity.\n\n\u2022 **Melanie A Woodin (Department of Cell and Systems Biology, University of Toronto, Toronto, Canada) \u2013 Bidirectional spike-timing dependent plasticity of inhibitory transmission in the hippocampus**\n\nThe mammalian hippocampus, owing to its crucial role in memory, has been the primary focus of research into synaptic plasticity. Most studies have examined plasticity at excitatory (glutamatergic) synapses, despite the fact that neuronal output is not determined by the level of excitatory transmission alone, but by the levels of coincident excitatory and inhibitory transmission. In this study, we examined spike-timing dependent plasticity of GABAA receptor-mediated inhibitory transmission in area CA1 of hippocampal slices from mature rats (6\u20138 weeks). Because the amplitude and reversal potential of GABAAR currents are largely determined by the intracellular chloride concentration, we first determined the GABAAR reversal potential under conditions of intact intracellular chloride using the pore-forming agent gramicidin. Surprisingly, we found that the GABAAR reversal potential was \\~12 mV hyperpolarized compared to the reversal potential in a previous study of STDP of GABAAR-mediated transmission in P12-19 slices, as well as to our own recordings from P12-19 slices.
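For context on the chloride measurements that follow, the GABAAR reversal potential is set largely by the transmembrane chloride gradient through the Nernst relation, E = (RT\/zF)ln([Cl]out\/[Cl]in) with z = -1. A minimal sketch with illustrative concentrations and temperature (assumed values, not the study's measurements):

```python
import math

# Nernst potential for chloride; all numbers below are illustrative.
R, F = 8.314, 96485.0        # J/(mol*K), C/mol
T = 273.15 + 32.0            # assumed recording temperature (32 C)
cl_out, cl_in = 130.0, 8.0   # extracellular / intracellular Cl- in mM

E_cl = (R * T / (-1 * F)) * math.log(cl_out / cl_in) * 1000.0  # in mV
print(f"E_Cl ~ {E_cl:.1f} mV")  # ~ -73 mV for these example values
```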
We then performed a series of whole-cell recordings to determine the intracellular chloride concentration necessary to reproduce the GABAAR reversal potential measured with gramicidin. This allowed us to employ long-term, stable whole-cell recording to investigate whether a spike-timing protocol could induce changes in the GABAAR reversal potential. Surprisingly, pairing of presynaptic stimulation with postsynaptic spiking led to bidirectional changes in the reversal potential, with the direction of change being dependent on the interval between pre- and postsynaptic stimulation. When the postsynaptic neuron was made to fire bursts of action potentials 5 ms after presynaptic stimulation (correlated), at 5 Hz for 90 seconds, a depolarization of the reversal potential was seen. However, when the interval was lengthened to 100 ms (uncorrelated), a hyperpolarization of the reversal potential was seen. Given the interplay between excitatory and inhibitory transmission, we suggest that this form of GABAergic plasticity may contribute to the enhancement of excitatory transmission under certain conditions.\n\n\u2022 **Zhen Yan (Department of Physiology and Biophysics, State University of New York at Buffalo, Buffalo, USA) \u2013 Interactions between Acetylcholine, Amyloid and Ion Channels in Alzheimer's Disease**\n\nIt has been well recognized that one prominent feature of Alzheimer's disease (AD) is the accumulation of \u03b2-amyloid (A\u03b2), a major component of senile plaques. Another fundamental feature of AD is the severe degeneration of basal forebrain (BF) cholinergic neurons and deficient cholinergic function in the prefrontal cortex (PFC), a brain region implicated in high-level cognitive processes. We have found that cholinergic inputs from the BF, by activating M1 muscarinic receptors in PFC pyramidal neurons, regulate the GABAA receptor channel, a key player in working memory, through a PKC\/Src-dependent mechanism. The M1 regulation of GABA transmission in the PFC is impaired in the APP transgenic model of AD, owing to A\u03b2 interference with M1 activation of PKC. On the other hand, glutamate inputs from the PFC, by activating Group III metabotropic glutamate receptors (mGluRIII) in BF neurons, suppress NMDAR currents through an actin-dependent mechanism. A\u03b2 selectively disrupts mGluRIII regulation of NMDAR channels in BF cholinergic neurons, which may be due to their sensitivity to A\u03b2-induced cytoskeletal disintegration. Thus, our results have provided a potential mechanism for the synaptic failure of PFC pyramidal neurons and the selective degeneration of BF cholinergic neurons at the early stage of AD.\n\n\u2022 **Megumu Yoshimura (Department of Basic Medicine, Kyushu University, Kyushu, Japan) \u2013 Synaptic mechanisms of acupuncture in the spinal dorsal horn revealed by in vivo patch-clamp recordings**\n\nAccording to the Chinese literature, more than 300 acupoints have been described, and stimulation of each point elicits a degree of analgesia in the corresponding area. Physiological studies have attempted to unveil the mechanisms of the analgesic action of acupuncture; however, clear results have not been obtained, because of the difficulty of assessing changes in nociceptive transmission in the CNS. One promising approach is in vivo patch-clamp recording from spinal dorsal horn neurons to see what happens during acupuncture.
Thus, we made in vivo patch-clamp recordings from substantia gelatinosa (SG) neurons that receive noxious inputs and observed changes in excitatory and inhibitory synaptic currents occurring spontaneously or evoked by stimulation of the skin in the receptive field. To enhance nociceptive inputs from the periphery, we used rats with CFA-induced inflammation of the right hind paw. In this chronic pain model, spontaneous EPSCs of higher amplitude were observed in the majority of SG neurons. Application of acupuncture to the contralateral ST36 point near the knee joint did not affect the spontaneous EPSCs. However, large-amplitude spontaneous IPSCs were elicited at frequencies of 2 to 10 Hz. Next, we tested cell firing in the SG evoked by stimulation of the skin with toothed forceps during acupuncture. The skin-evoked spike firing was effectively and reversibly inhibited by acupuncture. In our previous slice experiments, noradrenaline and serotonin increased large-amplitude spontaneous IPSCs. Other candidates possibly responsible for the depression of nociceptive inputs to the SG, such as dopamine, enkephalin, other opioids, substance P and CGRP, did not increase the frequency or amplitude of spontaneous IPSCs.\n\n\u2022 **Ming Xu (Department of Anesthesia and Critical Care, University of Chicago, Chicago, USA) \u2013 Molecular Mechanisms of neuronal plasticity induced by drugs of abuse**\n\nDrug addiction is a brain disease that is characterized by the compulsive seeking and taking of a drug despite known adverse consequences. Drug addiction is also long-lasting, with a high propensity to relapse. The brain dopaminergic system is a key neural substrate for mediating the actions of abused drugs. The development of drug addiction is thought to involve coordinated temporal and spatial actions of specific dopamine receptors, signaling molecules, and target molecules that drive synaptic reorganization in the brain. To dissect mechanisms underlying drug-induced neuroadaptations, we have made and analyzed D1 receptor mutant mice. We found that the D1 receptor mediates the locomotor-stimulating and rewarding effects of cocaine. The D1 receptor also mediates cocaine-induced changes in neuronal dendritic remodeling, ERK and CREB signaling, chromatin remodeling, and gene expression including c-*fos* and AP-1-regulated target genes. These results suggest that the D1 receptor is a major cell surface mediator for drug-induced behaviors and neuroadaptations, and that c-*fos*-regulated gene expression may contribute to the persistent nature of drug-induced behaviors. To investigate intracellular mechanisms of cocaine-induced persistent changes within D1 receptor-expressing neurons, we made D1 receptor neuron-specific c-*fos* mutant mice. We found that c-Fos contributes to the development and extinction of cocaine-induced conditioned place preference and behavioral sensitization, changes in dendritic reorganization, and regulation of immediate early genes and the expression of two classes of target genes that are involved in neurotransmission and neuronal connections. Notably, mutations of the D1 receptor gene and c-*fos* share several common consequences after repeated cocaine injections.
Together, these findings suggest that the dopamine D1 receptor and c-Fos form a key receptor-signaling system that contributes to persistent neuroadaptations to cocaine.\n\n\u2022 **Zao C Xu (Department of Anatomy and Cell Biology, Indiana University School of Medicine, Indianapolis, USA) \u2013 Synaptic plasticity in pathological conditions**\n\nSynaptic plasticity occurs during development and participates in physiological functions such as learning and memory in adulthood. It has also been shown in pathological conditions such as epilepsy. To investigate synaptic plasticity in neurodegenerative disorders and the underlying mechanisms, synaptic transmission and morphological changes were studied in neural grafts after excitotoxic lesion and in neurons after transient cerebral ischemia.\n\nFor the transplantation studies, striatal primordia were collected from embryonic day 16 embryos and implanted into the striatum of adult Sprague-Dawley rats two days after kainic acid lesion. Intracellular recording in vivo and anterograde tracing experiments were performed 2\u20136 months after the transplantation. For the ischemia studies, transient global ischemia was induced in adult Wistar rats, and electrophysiological recording and morphometric analysis of intracellularly stained neurons were performed at different intervals after ischemia.\n\nSpontaneous synaptic activities were greatly reduced in striatal grafts. Cortical or thalamic stimuli elicited monosynaptic excitatory postsynaptic potentials (EPSPs) from neurons in the graft. A late postsynaptic potential (L-PSP) was evoked from many graft neurons (17\/27) in addition to the initial EPSPs, and bursting action potentials were generated from the L-PSPs. Light and electron microscopic studies showed that the number of cortical and thalamic afferent fibers was significantly reduced in the grafts. Some of these fibers formed dense clusters of terminals making multiple synapses on individual spines and dendrites. L-PSPs could also be evoked from neurons in the striatum and hippocampus following cerebral ischemia. Furthermore, the initial EPSPs were potentiated in ischemia-vulnerable neurons (spiny neurons in the striatum and CA1 neurons in the hippocampus) but depressed or unchanged in ischemia-resistant neurons (large aspiny neurons in the striatum and CA3 neurons in the hippocampus) after ischemia. Quantitative analysis of 3-D reconstructed CA1 pyramidal neurons indicated that the total dendritic length of the apical dendrites was significantly increased at 24 h after ischemia but remained about the same in the basal dendrites. This increase was due to dendritic sprouting rather than dendritic extension, and occurred mainly in the middle segment of the apical dendrites.\n\nThese results demonstrate that synaptic plasticity changes also occur in acute neurodegenerative disorders. The plasticity changes in striatal grafts might be compensatory responses, whereas those in ischemic neurons might be associated with the selective neuronal damage after ischemic insults.
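As a side note on the morphometry above, total dendritic length from a 3-D reconstruction is simply the summed Euclidean distance between consecutive sample points along each traced branch. A minimal sketch with made-up coordinates (the branch names and points are hypothetical):

```python
import math

# Total dendritic length of a reconstructed neuron, as the sum of
# point-to-point distances along each branch. Data below are invented.
branches = {
    "apical_1": [(0, 0, 0), (3, 4, 0), (6, 8, 2)],
    "basal_1":  [(0, 0, 0), (-2, -3, 1)],
}

def branch_length(points):
    return sum(math.dist(a, b) for a, b in zip(points, points[1:]))

total = sum(branch_length(p) for p in branches.values())
print(f"total dendritic length = {total:.1f} um")
```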
\u2022 **Xia Zhang (Department of Psychiatry, University of Saskatchewan, Saskatoon, Canada) \u2013 Cannabinoid addiction and cannabinoid medicine**\n\nA. Suppression of cannabinoid rewarding effects and cannabinoid withdrawal syndrome, respectively, by the interfering peptide Tat-3L4F and lithium\n\nMarijuana is the most commonly used illicit drug in developed countries. The lifetime prevalence of marijuana dependence is the highest of any illicit drug in the USA, but there is no effective medication available for treating marijuana addiction. Our two recent studies suggest potential strategies for treating marijuana addiction in humans. In the first study we found a physical interaction of the enzyme PTEN with a region in the third intracellular loop (3L4F) of the 5-HT2C receptor (5-HT2cR) in cell cultures. PTEN limits agonist-induced phosphorylation of 5-HT2cR through its protein phosphatase activity. We then found the probable existence of PTEN:5-HT2cR complexes in putative dopaminergic neurons in the rat ventral tegmental area (VTA), a brain region in which virtually all abused drugs exert rewarding effects by activating dopamine neurons. We next synthesized the interfering peptide Tat-3L4F, which is able to disrupt PTEN coupling with 5-HT2cR. Tat-3L4F or the 5-HT2cR agonist Ro600175 suppressed the increased firing rate of VTA dopaminergic neurons induced by delta9-tetrahydrocannabinol (THC), the psychoactive ingredient of marijuana. Using behavioral tests, we observed that Tat-3L4F or Ro600175 blocks conditioned place preference for THC, and that Ro600175, but not Tat-3L4F, produces anxiogenic effects, penile erection, hypophagia and suppression of motor function. These results suggest a potential strategy for treating cannabinoid addiction with the Tat-3L4F peptide. In the second study we demonstrated that lithium treatment prevented the cannabinoid withdrawal syndrome (CWS) in rats; this effect was accompanied by expression of the cellular activation marker Fos in oxytocin-immunoreactive neurons and a significant increase in oxytocin mRNA expression in the hypothalamic paraventricular and supraoptic nuclei. Lithium also significantly increased blood oxytocin levels. We suggest that the effects of lithium against the CWS are mediated by oxytocinergic neuronal activation and subsequent release and action of oxytocin within the CNS. This hypothesis is supported by further findings that the effects of lithium against the CWS were antagonized by an oxytocin antagonist and mimicked by oxytocin. These results led us to conduct a small-scale, pilot clinical study showing positive therapeutic effects of lithium against the CWS in patients with pure cannabinoid addiction.\n\nB. Promotion of hippocampal neurogenesis and suppression of anxiety and depression by cannabinoids\n\nThe adult hippocampus contains neural stem\/progenitor cells (NS\/PCs) capable of generating new neurons, i.e., neurogenesis. Most drugs of abuse examined to date decrease adult hippocampal neurogenesis, but the effects of cannabinoids remain unknown. We show that both embryonic and adult rat hippocampal NS\/PCs are immunoreactive for CB1 cannabinoid receptors. We then found that both the synthetic cannabinoid HU210 and an endogenous cannabinoid promote proliferation, but not differentiation, of cultured embryonic hippocampal NS\/PCs, likely via sequential activation of CB1 receptors, Gi\/o proteins, and ERK signaling. Chronic, but not acute, HU210 treatment promoted adult hippocampal neurogenesis and exerted anxiolytic- and antidepressant-like effects.
X-irradiation of the hippocampus blocked both the neurogenic and the behavioral effects of chronic HU210 treatment, suggesting that chronic HU210 treatment produces anxiolytic- and antidepressant-like effects likely via promotion of hippocampal neurogenesis.\n\n\u2022 **Mei Zhen (Department of Medical Genetics and Microbiology, University of Toronto, Toronto, Canada) \u2013 SAD kinase regulates neuronal polarity and synapse formation**\n\nUsing *C. elegans* GABAergic neurons as a model system, we identified the Ser\/Thr kinase SAD-1 as a key player in establishing axon\/dendrite polarity and synaptic structures. In *C. elegans*, loss of SAD-1 function leads to the accumulation of synaptic vesicles in dendritic regions of neurites; furthermore, synaptic vesicles are loosely clustered at chemical synapses. Using genetic and biochemical approaches, we defined two separate genetic pathways through which the SAD-1 kinase functions: during early differentiation of neurons, SAD-1 physically interacts with the scaffolding protein Neurabin to restrict the axonal fate of developing neurites; after the establishment of axons and dendrites, SAD-1, restricted to the presynaptic region by several presynaptic channels, negatively controls the incorporation of active zone proteins at chemical synapses. We are further delineating the activators and downstream effectors of the SAD kinase.\n\n\u2022 **Min Zhuo (Department of Physiology, University of Toronto, Toronto, Canada) \u2013 Cortical potentiation and its roles in persistent pain and fear**\n\nNeuronal synapses in the central nervous system are plastic, and can undergo long-term changes throughout life. Studies of the molecular and cellular mechanisms of such changes not only provide important insight into how we learn and store new knowledge in our brains, but also reveal the mechanisms of pathological changes, such as pain and fear, occurring after a noxious stimulus. Using integrative approaches including genetic, pharmacological, electrophysiological and behavioral studies, we explore the synaptic mechanisms for LTP and LTD in the cingulate and prefrontal cortex of adult mice. We found that activation of postsynaptic NMDA receptors is required for the induction of synaptic LTP. The expression of cingulate LTP is likely mediated by postsynaptic AMPA receptors, while the presynaptic form of paired-pulse facilitation remained unchanged during synaptic potentiation. Activation of the calcium-calmodulin-stimulated adenylyl cyclase AC1 is required for the induction of LTP. As in the hippocampus, the NMDA receptor NR2A subtype is required for the induction of LTP; NMDA NR2B receptors, however, also contribute to synaptic potentiation. Genetic reduction of NR2B expression or pharmacological inhibition of NR2B receptors by selective antagonists reduced behavioral contextual fear memory. A possible basis for the contribution of the anterior cingulate cortex (ACC) to the formation of fear memory is its role in pain perception. Supporting this hypothesis, inhibition of NMDA NR2B receptors in the ACC inhibited behavioral sensitization to non-noxious stimuli.
Our results provide strong evidence that synaptic potentiation within the cingulate\/prefrontal cortex plays important roles in physiological and pathological responses to noxious sensory stimuli and injury, including emotional fear and persistent pain.\n\n# Competing interests\n\nThe author(s) declare that they have no competing interests.\n\n# Authors' contributions\n\nEach author provided an abstract for the 1st International Conference on Synapse, Memory, Drug Addiction and Pain, as indicated in the manuscript. MZ collected and organized the abstracts for publication. All authors read and approved the final manuscript.\n\n### Acknowledgements\n\nMZ is supported by grants from NINDS NS42722, the Canadian Institutes of Health Research, the EJLB-CIHR Michael Smith Chair in Neurosciences and Mental Health, and the Canada Research Chair.\n\nabstract: A report on the Second EMBL\/EMBO Symposium on Functional Genomics: 'Exploring the Edges of Omics', European Molecular Biology Laboratory (EMBL), Heidelberg, Germany, 16-19 October 2004.\nauthor: Gino Poulin; Julie Ahringer\ndate: 2005\ninstitute: 1The Gurdon Institute and Department of Genetics, University of Cambridge, Tennis Court Road, Cambridge CB2 1QR, UK\nreferences:\ntitle: Living on the edge\n\nEMBL's recent symposium on functional genomics showed how this new field has matured. Work from a broad range of model organisms provided new biological insights, and a plethora of improved high-throughput technologies promised more of these in the future. We focus here on biological networks - a major theme of the meeting.\n\n# Network motifs\n\nThe identification of patterns in biological data can uncover mechanisms through which processes are regulated. Uri Alon (Weizmann Institute of Science, Rehovot, Israel) presented evidence that in gene-regulatory networks, particular patterns of interconnections (network motifs) are enriched when compared to a randomized network. For example, the three-node feed-forward loop, in which transcription factor X regulates transcription factor Y and both jointly regulate gene Z, is a frequently used network motif; one example is in the L-arabinose utilization system in *Escherichia coli*. Studying this system *in vivo*, Alon found that the feed-forward loop is protected against fluctuation of external signals and allows rapid shutdown of transcription. The identification of network motifs is important, as they are thought to perform specific information-processing tasks.
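To make the feed-forward-loop logic concrete, here is a toy discrete-time sketch of one common wiring, a coherent FFL with AND logic at Z; the threshold and timescales are invented, and this is an illustration of the motif's behavior rather than the continuous model used in the actual study:

```python
# Coherent feed-forward loop, AND gate at Z: Z turns on only after Y has
# accumulated (filtering brief input pulses) and shuts off as soon as X
# disappears. All parameters are made up for illustration.
def simulate_ffl(x_signal, threshold=3):
    y, out = 0, []
    for x in x_signal:
        y = y + 1 if x else 0                           # Y builds up while X is on
        out.append(1 if (x and y >= threshold) else 0)  # Z requires X AND enough Y
    return out

print(simulate_ffl([1] * 6 + [0] * 3))  # delayed ON, immediate OFF
print(simulate_ffl([1, 1, 0]))          # a brief pulse never switches Z on
```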
Dissecting networks involved in complex contexts such as animal development is a monumental task. Norbert Perrimon (Harvard Medical School\/Howard Hughes Medical Institute, Boston, USA) reported how his group is starting to tackle network complexity by carrying out genome-scale loss-of-function analysis in *Drosophila* cells using RNA interference (RNAi). The strategy is to perform multiple RNAi screens in different defined contexts (different cell lines or different stimuli) using sensitive and reliable reporter assays. Perrimon focused on canonical signaling pathways such as those involving Jaks and Stats, Wingless (Wg) and Hedgehog. From these systematic screens it appears that there are important overlaps between the pathways, and that the signaling components forming these pathways are more numerous than expected. These findings were illustrated in a network topology map where, for example, 32 components are shared between the Wg and Hedgehog screens, but only two are shared among the Wg, Hedgehog and Jak-Stat screens (set overlaps of the kind sketched below). To try and organize the data, a phenoprint matrix (a color-coded matrix that visually links phenotypes to genes) is being built, which at the moment encompasses about 20 genome-wide screens and more than 7,500 genes. This impressive work showing unexpected connections challenges our current view of how signal information is transduced to form an appropriate response.
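As a minimal illustration of such overlap counts, screen hits can be treated as sets and intersected; the gene names below are placeholders for illustration, not the screens' actual hit lists:

```python
# Pairwise and three-way overlaps between RNAi screen hit lists.
# Hit lists are hypothetical stand-ins.
wg = {"arm", "dsh", "pygo", "ck1a", "gish"}
hh = {"smo", "ci", "ck1a", "gish", "pygo"}
jak_stat = {"hop", "stat92E", "ck1a"}

print("Wg & Hh:", wg & hh)                        # shared by two screens
print("Wg & Hh & Jak-Stat:", wg & hh & jak_stat)  # shared by all three
```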
An important role of biological networks is transcription regulation. Understanding how DNA-binding transcriptional regulators interpret the genome's regulatory code is essential. Richard Young (Whitehead and Broad Institutes, Massachusetts Institute of Technology, Cambridge, USA) reported the use of genome-wide location data (ChIP-chip) combined with phylogenetic conservation data to describe the promoter architecture and the global behavior of transcription factors in *Saccharomyces cerevisiae*. Four types of architecture were found: single regulators; repetitive binding motifs; multiple regulators; and co-occurring regulators. There are also four global behaviors: condition invariant (the transcription factor binds the same targets regardless of the environment tested); enabled (the transcription factor does not bind its target until enabled by the environment); expanded (the binding pattern is expanded by changes of environment); and altered (different targets depending on the environment). Of particular interest, it was estimated that 17% of DNA-binding factors are found on specific targets but wait for a signal before regulating transcription. This work will provide an excellent framework for modeling global gene expression in other eukaryotes.\n\n# Network hubs\n\nNetworks have particular nodes that are more highly connected than others; these nodes are called hubs. Marc Vidal (Dana-Farber Cancer Institute, Harvard Medical School, Boston, USA) described the use of large-scale yeast two-hybrid mapping to derive a protein-interaction network in which he found two types of hub, which behave differently. The first type is called the 'party' hub, and has numerous partners that interact with it simultaneously. The second type is the 'date' hub, which also has many potential partners, but where the interacting partners depend on location and time. The date hubs represent high-level connectors between structural or functional modules such as cellular organelles or particular pathways, whereas party hubs function inside these modules, at a lower level. In yeast, for example, calmodulin is a date hub that connects four different modules, while one of these modules, the endoplasmic reticulum, forms a party hub.\n\nStuart Kim (Stanford University, USA) has uncovered hubs through analyzing DNA microarray data for conserved gene co-regulation. These hubs, which he calls 'subunits' and 'integrators', also have different properties: subunit components are highly interconnected whereas integrator hubs have a central connection point with few connections between components. Also, subunit components are usually essential, whereas most integrator components are not, suggesting that these latter proteins may have partially redundant functions. He also presented evidence that newly evolved genes are not found in hubs. The uncovering of different properties for different types of hub is fundamental for further studies of biological networks.\n\n# The microRNA network\n\nMicroRNAs (miRNAs) regulate gene expression and are found in all metazoans studied so far. Three presentations addressed different aspects of miRNAs: identification of their targets; identification of novel miRNAs; and analysis of their biological functions. Steve Cohen (EMBL, Heidelberg, Germany) presented the results of systematic *in vivo* analysis of miRNA\/target pairing characteristics in *Drosophila*. It was determined that the 5' end of the miRNA is the most important in pairing and that a minimum of seven pairing nucleotides is required for silencing. Three types of target were also identified: canonical (perfect pairing), seed (fork-like) and compensatory (bubble-like). He estimated that half of the genes in the genome are regulated through miRNAs.
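A minimal sketch of what such seed pairing implies computationally: searching a 3'UTR for the reverse complement of the miRNA 5' seed, taken here as nucleotides 2-8 (a common convention, assumed rather than stated in the talk). The miRNA below is the let-7 sequence, used purely for illustration; the UTR is invented:

```python
# Find 7-nt seed matches for a miRNA in a 3'UTR.
COMP = {"A": "U", "U": "A", "G": "C", "C": "G"}

def seed_sites(mirna, utr, seed_len=7):
    seed = mirna[1:1 + seed_len]                       # nucleotides 2-8 of the 5' end
    target = "".join(COMP[n] for n in reversed(seed))  # reverse complement in the UTR
    return [i for i in range(len(utr) - seed_len + 1)
            if utr[i:i + seed_len] == target]

print(seed_sites("UGAGGUAGUAGGUUGUAUAGUU", "CCAUCUACCUCAGGAAUACCUCAAA"))
```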
How miRNAs work is still not entirely defined. Ronald Plasterk (Hubrecht Laboratory, Utrecht, The Netherlands) has used RNAi in *Caenorhabditis elegans* to identify genes required for miRNA function. His laboratory used a reporter gene (*lin-14::lacZ*) that is regulated by the miRNA *let-7*. In this system, when *let-7* becomes expressed, the level of the LacZ protein diminishes because of translational inhibition of the reporter gene. Using a candidate-based approach that relies on previous genome-wide RNAi screens, 508 genes were tested by RNAi for causing an absence of silencing; 25 new genes were found with this property, one of which is the gene encoding the small ubiquitin-like modifier protein SUMO.\n\nVictor Ambros's laboratory (Dartmouth Medical School, Hanover, USA) is studying the biological function of miRNAs by generating deletions of the miRNA genes in *C. elegans*. He described how, by studying miRNA loss-of-function phenotypes, miRNA activities have been grouped into four classes: coordinated (repression of multiple targets); collaborative (multiple miRNAs acting on common targets); redundant; and modulated. Redundancy within the *let-7* family was shown; double mutants between the two *let-7* family members *mir-48* and *mir-84* display a phenotype, but the single mutants do not.\n\nThe meeting was inspiring, presentations were of very high quality and participants were able to interact in a relaxed and comfortable atmosphere. The field of functional genomics has truly become 'functional' and we can look forward to hearing more at the next symposium.\n\n### Acknowledgements\n\nG.P. is supported by the Wellcome Trust and J.A. is a Wellcome Trust Senior Research Fellow (054523).\n\ndate: 2017-05\ntitle: WHO LAUNCHES GLOBAL EFFORT TO HALVE MEDICATION-RELATED ERRORS IN 5 YEARS\n\n**29 MARCH 2017 \\| GENEVA\/BONN -** WHO today launched a global initiative to reduce severe, avoidable medication-associated harm in all countries by 50% over the next 5 years.\n\nThe Global Patient Safety Challenge on Medication Safety aims to address the weaknesses in health systems that lead to medication errors and the severe harm that results. It lays out ways to improve the way medicines are prescribed, distributed and consumed, and to increase awareness among patients about the risks associated with the improper use of medication.\n\nMedication errors cause at least one death every day and injure approximately 1.3 million people annually in the United States of America alone. While low- and middle-income countries are estimated to have rates of medication-related adverse events similar to those of high-income countries, the impact is about twice as great in terms of the number of years of healthy life lost. Many countries lack good data, which will be gathered as part of the initiative.\n\nGlobally, the cost associated with medication errors has been estimated at US\\$ 42 billion annually, or almost 1% of total global health expenditure.\n\n\"We all expect to be helped, not harmed, when we take medication,\" said Dr Margaret Chan, WHO Director-General. \"Apart from the human cost, medication errors place an enormous and unnecessary strain on health budgets. Preventing errors saves money and saves lives.\"\n\nEvery person around the world will at some point in their life take medicines to prevent or treat illness. However, medicines do sometimes cause serious harm if taken incorrectly, monitored insufficiently or as the result of an error, accident or communication problem.\n\nBoth health workers and patients can make mistakes that result in severe harm, such as ordering, prescribing, dispensing, preparing, administering or consuming the wrong medication or the wrong dose at the wrong time. But all medication errors are potentially avoidable. Preventing errors and the harm that results requires putting systems and procedures in place to ensure the right patient receives the right medication at the right dose via the right route at the right time.\n\nMedication errors can be caused by health worker fatigue, overcrowding, staff shortages, poor training and the wrong information being given to patients, among other reasons.
Any one of these, or a combination of them, can affect the prescribing, dispensing, consumption and monitoring of medications, which can result in severe harm, disability and even death.\n\nMost harm arises from systems failures in the way care is organized and coordinated, especially when multiple health providers are involved in a patient's care. An organizational culture that routinely implements best practices and that avoids blame when mistakes are made is the best environment for safe care.\n\nThe Challenge calls on countries to take early priority action to address these key factors \u2013 medicines with a high risk of harm if used improperly, patients who take multiple medications for different diseases and conditions, and patients going through transitions of care \u2013 in order to reduce medication errors and harm to patients.\n\nThe actions planned in the Challenge will be focused on four areas: patients and the public; health care professionals; medicines as products; and systems and practices of medication. The Challenge aims to make improvements in each stage of the medication use process, including prescribing, dispensing, administering, monitoring and use. WHO aims to provide guidance and develop strategies, plans and tools to ensure that the medication process has the safety of patients at its core, in all health care facilities.\n\n\"Over the years, I have spoken to many people who have lost loved ones to medication-related errors,\" said Sir Liam Donaldson, WHO Envoy for Patient Safety. \"Their stories, their quiet dignity and their acceptance of situations that should never have arisen have moved me deeply. It is to the memories of all those who have died due to incidents of unsafe care that this Challenge should be dedicated.\"\n\nThis challenge is WHO's third global patient safety challenge, following the Clean Care is Safe Care challenge on hand hygiene in 2005 and the Safe Surgery Saves Lives challenge in 2008.\n\nAvailable from: \n\nabstract: Primary malignant melanoma of the duodenum is an unusual oncologic entity. Patients usually present with clinical symptoms similar to those of other, more common tumors at this site, and there are no specific radiological features either. Cases with little or no notable melanin pigment are very misleading, especially in small biopsies or frozen sections. Definite diagnosis depends on both careful histologic examination and the use of proper immunohistochemical stains. Moreover, a detailed history and thorough investigation are needed to exclude the preexistence or coexistence of a primary lesion elsewhere.
Herein we report the case of a 60-year-old male patient with primary malignant melanoma of the duodenum that was misdiagnosed as lymphoma or undifferentiated carcinoma at frozen-section consultation. The patient has achieved disease-free survival of more than 46\u2009months after surgery, without any evidence of recurrence.\n .\n # Virtual slides\n .\n The virtual slides for this article can be found here: .\nauthor: Hongxia Li; Qinhe Fan; Zhen Wang; Hai Xu; Xiao Li; Weiming Zhang; Zhihong Zhang\ndate: 2012\ninstitute: 1Department of Pathology, the First Affiliated Hospital of Nanjing Medical University, 300 Guangzhou Road, Nanjing 210029, P. R. China; 2Department of Radiology, the First Affiliated Hospital of Nanjing Medical University, 300 Guangzhou Road, Nanjing 210029, P. R. China\nreferences:\ntitle: Primary malignant melanoma of the duodenum without visible melanin pigment: a mimicker of lymphoma or carcinoma\n\n# Background\n\nMalignant melanoma, which originates from melanocytes, is not a common tumor, accounting for 1 to 3% of all malignancies \\[1\\]. However, it is the most common metastatic tumor of the gastrointestinal (GI) tract, especially the small intestine, and can present with fairly common constitutional symptoms \\[2\\]. Primary malignant melanoma originating in the small bowel, particularly in the duodenum, is extremely rare and very controversial. In addition, cases with little or no visible melanin pigment are very misleading, especially in small biopsy specimens or frozen sections. Identifying areas showing possible melanin pigment, together with immunostaining for specific melanoma markers such as HMB45, Melan-A and S-100 protein, is diagnostically important. We describe a case of primary malignant melanoma of the duodenum (PMMD) without any visible melanin pigment that was misdiagnosed as lymphoma or undifferentiated carcinoma at frozen-section consultation, to highlight its clinicopathological features and diagnostic pitfalls.\n\n# Case presentation\n\n## Clinical summary\n\nA 60-year-old Chinese male patient, suffering from persistent right abdominal pain and dark stools for one month and a recent episode of nausea and vomiting, was admitted to hospital on May 27, 2008. No history of fever, anorexia, hematemesis, radiating pain or weight loss was reported. Upper GI endoscopy revealed a malignant tumor in the descending part of the duodenum, and biopsy suggested poorly differentiated carcinoma; however, there were not enough tumor cells for further immunohistochemical staining. On physical examination there was no obvious black nevus or nodule, nor any swollen superficial lymph node. Routine hematological and biochemical studies showed no abnormalities. Serum tumor markers such as alpha fetoprotein (AFP), carcinoembryonic antigen (CEA), carbohydrate antigen 50 (CA-50) and carbohydrate antigen 19\u20139 (CA-19-9) were all in the normal range. Fecal occult blood (FOB) testing was positive. A GI barium meal revealed a filling defect in the descending duodenum without obvious mucosal destruction, and barium passed smoothly. Computed tomography of the abdomen disclosed a space-occupying lesion originating from the descending part of the duodenum with a stenotic lumen (Figure 1). An exploratory laparotomy was performed through a midline incision on June 2, 2008 and confirmed the presence of a solid tumor arising from the lateral part of the descending duodenum and invading the duodenal serosa.
An enlarged mesenteric lymph node and an enlarged peripancreatic lymph node were identified. Because the surgeons suspected lymphoma, only a palliative operation (tumor resection) was performed.\n\n## Pathologic findings\n\nIntraoperative consultation showed a tumor measuring 2.5\u2009cm\u2009\u00d7\u20091.5\u2009cm\u2009\u00d7\u20091\u2009cm, with the mesenteric and peripancreatic lymph nodes measuring 3\u2009cm and 2\u2009cm in diameter, respectively. The cut surfaces of the specimens were pliable and gray-red, without pigment. Frozen sections were prepared. Histologically, the lesion was located under the enteric mucosa, with diffusely infiltrating tumor cells but without obvious mucosal destruction (Figure 2A). The tumor cells were round, oval or polygonal and epithelioid, showing marked cytologic atypia with large eosinophilic nucleoli, abundant mitotic figures and moderate cytoplasm; however, there was no visible melanin pigment in the cytoplasm (Figure 2B). The mesenteric lymph node was positive for tumor cells (1\/1), whereas the peripancreatic lymph node was negative (0\/1). On the basis of these features, the consulting pathologists reported a malignant tumor, favoring lymphoma or undifferentiated carcinoma, and because the surgeons also thought it might be lymphoma, only the local tumor was resected.\n\nSubsequently, routine hematoxylin-eosin sections (formalin-fixed and paraffin-embedded) and immunostains were performed. The immunohistochemical results did not support a diagnosis of lymphoma or undifferentiated carcinoma: the tumor cells were strongly positive for the melanoma marker HMB45, Melan-A, S-100 protein (Figure 3A-C) and vimentin, whereas CD20, CD45RO, CD3, CD79\u03b1, myeloperoxidase, terminal deoxynucleotidyl transferase, pan cytokeratin (AE1\/AE3), keratin 5\/6, keratin 7, synaptophysin, chromogranin A, and sarcoma markers such as smooth muscle actin, desmin, CD117 and CD34 were all negative. A proliferative index of 60% was noted with Ki-67 immunostaining (Figure 3D). The diagnosis of malignant melanoma was thus confirmed.\n\nThorough postoperative systemic evaluations, including a detailed history, clinical examination, endoscopic assessment and radiologic imaging, were performed to rule out the presence of a primary cutaneous, anal or ocular lesion, or a melanoma at any other site; however, no primary site was found. Therefore, the resected tumor was determined to represent a PMMD with mesenteric lymph node metastasis. The patient refused chemotherapy and used traditional Chinese medicine intermittently to improve his immunity. Close follow-up showed that the patient is doing well, with no evidence of locoregional recurrence or distant metastasis for more than 46\u2009months after surgery.\n\n# Discussion\n\nMalignant melanoma is a relatively rare tumor, comprising 1-3% of all tumors, but it exhibits an unusual tendency to metastasize to the GI tract \\[3\\]. Although GI involvement is seen in autopsy series in up to 50 to 60% of patients, only 2 to 5% of patients are diagnosed with metastatic malignant melanoma of the GI tract while they are still alive \\[4\\]. This is because the symptoms of early disease are general and constitutional rather than specific; even when metastases do occur, they usually present with symptoms such as abdominal pain (62%), hemorrhage (50%), nausea and vomiting (26%), mass (22%), intestinal obstruction (18%) or intussusception (15%), and are frequently diagnosed in an emergency situation \\[5\\].
Metastasis to the GI tract is seen most frequently in the small intestine, followed by the colon, stomach, and rectum, but is rare in the esophagus. However, primary malignant melanoma originating in the small intestine, and particularly in the duodenum, is extremely rare, with only a few case reports \[1,6,7\].

There are different theories concerning the origin of primary malignant melanoma in the small bowel, though controversy still exists over whether primary malignant melanoma occurs in this area at all. Some propose that malignant melanoma may originate from neural crest cells. These multipotential cells migrate through the body and can reach the bowel via the umbilical-mesenteric canal, where they later differentiate into specialized cells \[8\]. Although not consistently confirmed, another theory suggests that malignant melanoma might develop from amine precursor uptake and decarboxylation (APUD) cells \[9\]. In addition, it is presumed that the small intestine normally contains melanoblasts, which would support the assumption that melanoma can develop primarily at this site.

Before making the diagnosis of primary malignant melanoma of the small bowel, a thorough systemic examination must be done to rule out the possibility of metastasis from another site; melanoma preferentially develops on the skin, retina, anal canal, or under the nail, and less frequently at other locations such as the esophagus, penis, or vagina. In addition, there should be no history of previous removal or spontaneous regression of any atypical melanocytic skin tumor \[10\]. The following diagnostic criteria have been proposed. According to Sachs *et al.* \[8\], primary malignant melanoma in the small bowel is diagnosed when there is: 1) biopsy-proven melanoma from the intestine at a single focus; 2) no evidence of disease in any other organs, including the skin, eye and lymph nodes outside the region of drainage, at the time of diagnosis; and 3) disease-free survival of at least 12 months after diagnosis. According to Blecker \[2\], it is diagnosed when there is a lack of concurrent or previous removal of a melanoma or melanocytic lesion from the skin, a lack of any other organ involvement, and in situ change in the overlying or adjacent GI epithelium. In this case, a single focus was located in the lateral part of the descending duodenum, and thorough postoperative investigation, including a detailed history, clinical examination, endoscopic assessment, and radiologic imaging, failed to reveal any evidence of a pre-existing primary tumor or any other metastatic lesion. Moreover, the patient has survived for more than 46 months with no evidence of locoregional recurrence or distant metastasis. Thus, we felt confident in supporting the diagnosis of PMMD.

Although primary malignant melanoma of the small intestine does exist, its incidence is extremely low. In addition, melanoma itself is a great mimicker of other neoplastic conditions \[11\]. Cases with obvious melanin pigment are easy to diagnose, but specimens with little or no visible pigment may create a major diagnostic challenge and must be differentiated from other, more common intestinal tumors such as lymphoma, poorly differentiated carcinoma, neuroendocrine tumor, leiomyosarcoma, and gastrointestinal stromal tumor (GIST). Cutting as many sections as possible and careful observation can help to find valuable diagnostic clues.
Furthermore, immunohistochemical staining and electron microscopy may play an important role in the differential diagnosis. In our patient, the initial frozen sections revealed relatively uniform tumor cells with marked cytologic atypia and abundant mitotic figures, so the pathologists favored lymphoma or poorly differentiated carcinoma, which are the most common malignancies of the duodenum. However, further immunohistochemical stains provided no evidence for these diagnoses. On closer observation, we found that the discohesive tumor cells had one or more large eosinophilic nucleoli, one of the main features of melanoma apart from visible melanin pigment, so malignant melanoma, a very rare malignancy at this site, had to be taken into consideration. Expression of specific markers such as HMB45, Melan-A and S-100 protein was required to confirm this conjecture.

The optimal treatment for malignant melanoma is extensive curative surgery, where possible, because other methods, including adjuvant radiotherapy, chemotherapy and immunotherapy, cannot offer a definite treatment outcome. The time of diagnosis and the presence of metastases are considered the major determinants of prognosis \[10,12\]. The median survival after curative resection of primary malignant melanoma of the small intestine is 49 months \[7\]; the longest reported survival is 21 years \[12\]. Our patient refused chemotherapy and used traditional Chinese medicine only intermittently; he has nevertheless achieved disease-free survival of more than 46 months without any evidence of recurrence. The precise mechanism of traditional Chinese medicine remains unclear and is generally regarded as complicated, sometimes even mysterious; one widely accepted opinion is that it can efficiently enhance patients' immunity. Melanoma, however, tends to behave aggressively, and widespread metastases may present as a late complication. Furthermore, the Ki-67 labeling index in this case was as high as 60% and the patient had one involved mesenteric lymph node. This highlights the need for close and long-term follow-up.

# Conclusion

In conclusion, PMMD is an extremely rare neoplastic lesion of the alimentary tract. There are no specific symptoms or radiologic findings, and cases with little or no visible melanin pigment are particularly misleading. Definite diagnosis depends not only on pathomorphology and immunohistochemical stains, but also on a detailed history and thorough investigation; the latter are essential to exclude the preexistence or coexistence of a primary lesion elsewhere.

# Consent

Written informed consent was obtained from the patient for publication of this Case Report and any accompanying images. A copy of the written consent is available for review by the Editor-in-Chief of this journal.

# Abbreviations

PMMD: Primary malignant melanoma of the duodenum; GI: Gastrointestinal.

# Competing interests

The authors declare that they have no competing interests.

# Authors' contributions

LH conceived and designed the case report and drafted the manuscript; ZZ designed the case report and contributed to the revisions and editing of the manuscript.
FQ participated in the histopathological evaluation and gave the final diagnosis; WZ participated in the histopathological evaluation and supplied the literature review; XH provided the relevant radiological details; LX and ZW participated in the immunohistochemical analysis. All authors have read and approved the final manuscript.

abstract: Depressingly, a robot scientist can generate functional genomic hypotheses and carry out experiments, increasing the working scientist's feelings of redundancy.
author: Gregory A Petsko
date: 2004
institute: 1Rosenstiel Basic Medical Sciences Research Center, Brandeis University, Waltham, MA 02454-9110, USA
title: Doctor Dunsel

"Dunsel ... is a term used by midshipmen at Starfleet Academy. It refers to a part which serves no useful purpose."

Mr Spock, in *Star Trek*, Episode 53: "The Ultimate Computer"

Paranoia began morphing into depression with the arrival of the 15 January 2004 issue of *Nature*. On page 247 was a paper by King *et al*. entitled 'Functional genomic hypothesis generation and experimentation by a robot scientist'. The paper describes an automated system that uses techniques from artificial intelligence to formulate hypotheses to explain observations. The system then devises experiments to test these hypotheses, and actually carries out the experiments using a simple laboratory robot. But that's not all. It then interprets the results so as to falsify any hypotheses not consistent with the data. Moreover, it can iterate this process, making it capable of developing and testing quite extensive models.

In the paper, the authors used this system to probe the genetic control of aromatic amino-acid biosynthesis in yeast, using various growth conditions and auxotrophic strains. The robot scientist took a series of systematic gene deletion strains and tried growing each in nutritional medium that lacked one of the intermediates in the pathway. If the deleted gene was required to make that intermediate, the strain would not grow and a component of the pathway would have been identified. The machine automatically examined the cultures to see how opaque they were, returned the results to the artificial intelligence package, and then received instructions for what experiments to perform to validate the hypotheses based on the results of the first round, and so on. The final result was the assembled pathway: the set of genes coding for the enzymes that control each step. The authors claim in the end that the automated system carried out the project just as efficiently - and more cost-effectively - than scientifically trained human volunteers.

*Nature*, perhaps feeling guilty about the hordes of scientists who might be losing sleep over the prospect of having to go out and actually work for a living, tried to soften the blow with an editorial comment called 'Don't fear the Robot Scientist' (page 181 of the same issue) that completely missed the point.
\"Contrary to first impressions,\" the commentator says cheerily, \"an automated system that designs its own experiments will benefit young molecular geneticists. At first glance, it seems to render obsolete the armies of postgrads and postdocs employed in the world's molecular-genetics laboratories.\"\n\nThat wasn't what was worrying me at all. Replacing my graduate students and postdocs with machines that would work around the clock and never pester me for more disk space on the computer or a new set of pipetmen; that would never complain about the temperature in the lab and never forget to clear up after themselves - that didn't sound so bad. It was the thought that it might eventually replace me that was frightening. After all, this thing didn't just carry out the experiments, it designed them and formulated hypotheses based on them. I thought I was supposed to do that.\n\n*Nature* continued, \"The team behind the Robot Scientist argues that such automation 'frees scientists to make the high-level creative leaps at which they excel'\". Well, the thing already plans, performs and interprets experiments. Just what leaps would those be, guys - designing the next generation of software for the robot? Still, I decided after an initial bad moment or ten, the robot was carrying out functional genomics. As we all know, genomics doesn't require real thought, just the semblance of it. Maybe I would have to surrender my genomics projects to some machine, but that only represented a part of my research effort. The rest of my work is structural biology, a branch of science of such technical sophistication and intellectual rigor that it could never be automated.\n\nThen the 10 February 2004 issue of *Proceedings of the National Academy of Sciences* arrived. On page 1537 - right after a paper of my own, to add insult to injury - was an article by James Holton and Tom Alber (who was once my graduate student, to add injury to insult) entitled 'Automated protein crystal structure determination using ELVES'. It describes an expert system that can fully automatically determine the crystal structure of a protein from the primary X-ray data. True, individual steps in this process had been automated for some time, and the ELVES system had already been used to carry out such steps or even groups of steps, but always under the user's direction. This was different: there was no human intervention at all. The system was able to solve the structure of a 12,000 molecular weight coiled-coil protein from crystallographic data sets in two different crystal forms following a single command that launched the program and directed it to the location of the data files. The entire process, including interpretation of the resulting electron density map and refinement of the atomic model to convergence, took 9.5 hours on a multi-processor computer for one of the crystal forms, and 165 hours - the thing must have stopped for coffee or something - for the other form. The authors concluded that \"high resolution structures with well-ordered metals can be determined automatically\". To be fair, the protein structure, being all helical, did not present any real challenges in the model-building stage, and the authors are commendably candid about the limitations of the method: \"ELVES is incapable of overcoming problems arising from poor data or inadequate phasing signal. 
Problems such as radiation damage, weak heavy atom signals, twinning, poor heavy atom models, low resolution, or crystal disorder that hinder crystallographic projects are not overcome by automation.\" Not yet, but just wait, I could hear them say *sotto voce*.\n\nSo, now I was about to be replaced as a crystallographer too. The year 2004 was sure turning out to be a terrific year. Well, strictly speaking I'm not paid just to do science anyway. Most of my salary comes from teaching undergraduates, and I consoled myself with the thought that I could always do more of that. Consoled myself, that is, until the arrival of last week's *Boston Globe* newspaper, with a story about a new effort at Massachusetts Institute of Technology (MIT) to revamp its undergraduate curriculum to take advantage of \"innovations in educational methods\". You know what that means - computer-based instruction. I could see it coming: once my lectures were all on the internet in interactive, self-test form, there would be no need for me to actually do any of the teaching myself anymore, or to be paid to do so - a fact I was sure would not be lost on any Brandeis administrator who might happen to read the article.\n\nFeeling now very much like a horse might have felt about the time Henry Ford began turning out Model Ts, I tried to find something - anything - that I could do that a machine couldn't. Suddenly, it came to me: writing papers and grants. I probably spend half my non-teaching time writing things, things with highly technical content that also have to be comprehensible to people in my field who aren't involved in the work I'm doing or am proposing to do. In fact, if I want to get a grant from a foundation or publish a paper in a high-profile, general journal like *Nature* or *Genome Biology*, I have to try to make this highly technical material comprehensible to people who aren't in my field at all. Automate that, if you can.\n\nWell, that may not be far off, actually. As Clive Thompson has pointed out (*The New York Times Magazine*, 14 December 2003), the music business is making strides towards doing something very like that. An artificial intelligence program called Hit Song Science from the Barcelona-based company Polyphonic HMI tries to determine whether a new song is going to be a hit. It uses a clustering algorithm to locate acoustic similarities between songs, similarities like common bits of rhythm, harmonies or keys. It compares these features of a new tune with all the Top 40 hits of the last 30 years; the closer the features of a new song are to a 'hit cluster', the more likely it is predicted, by the software, to be a hit. Thompson reports that the algorithm produces some strange groupings - the rock group U2 is similar to Beethoven, for example - yet it seems to work. A number of record companies are now using it to help pick which songs on a new album they will promote heavily. And, perhaps ominously, others are using it in the studio to tweak new songs as they are being recorded, changing various aspects of them to bring them closer to the hits in the nearest cluster. All well and good for the record companies, but it seems to me that this process is likely to take the spontaneity - and much of the novelty - right out of the music business. Hit songs tend to sound too much alike as it is, at least to this jaded listener; now they are going to be forced to sound even more alike. 
And clearly the same approach could be used, theoretically at least, to produce grants with a high probability of being funded, and scientific papers guaranteed to be accepted by top-rank journals. Hot Paper Science would cluster the titles, author names and affiliations, title words and key concepts that are shared by papers published in *Cell*, for example. One then only has to input one's own initial effort, 'The complete sequence of the gerbil genome' by Gregory A Petsko, *et al*., for example, and out would come 'Gerbil genome sequence: signal transduction pathways relevant to cancer, neurodegenerative diseases and apoptosis, with additional insights into systems biology and biodefense', plus a set of suggested coauthors that would help guarantee acceptance. The software would go on to write the paper, of course; submit it; and, if necessary, argue with the referees.

Well, that was it, I thought. Before long, even my writing functions would be taken over by machines. I was rapidly being made redundant, as they say in the UK - a twentieth-century equivalent of Captain Kirk in the *Star Trek* episode "The ultimate computer", his command capabilities handled more efficiently by a machine programmed to replace human beings in space exploration, his plaintive (and sexist) cry, "But there are some things men must do to remain men!" drowned out by the bootsteps of the relentless march of automation.

But then something happened to lift my gloom and restore my self-esteem. It was the arrival of an e-mail reminding me about the curriculum committee meeting scheduled for that afternoon. Of course! I wasn't useless after all. In fact, real human scientists are indispensable, and always will be. Computers may be better at solving crystal structures, and robots may be better at doing genome-enabled, hypothesis-driven experiments - may even be better at interpreting them - and eventually there will probably be software that writes better papers and grants, but we humans can still waste enormous amounts of time at interminable committee meetings. No machine will ever be stupid enough to do that.

author: Charles Darwin
date: 2010-10
institute: First published in *Mind*, *2*, 285–294. (reproduced from Christopher Green's Classics)
title: A biographical sketch of an infant

M. Taine's very interesting account of the mental development of an infant, translated in the last number of MIND (p. 252), has led me to look over a diary which I kept thirty-seven years ago with respect to one of my own infants. I had excellent opportunities for close observation, and wrote down at once whatever was observed. My chief object was expression, and my notes were used in my book on this subject; but as I attended to some other points, my observations may possibly possess some little interest in comparison with those by M.
Taine, and with others which hereafter no doubt will be made. I feel sure, from what I have seen with my own infants, that the period of development of the several faculties will be found to differ considerably in different infants.\n\nDuring the first seven days various reflex actions, namely sneezing, hickuping, yawning, stretching, and of course sucking and screaming, were well performed by my infant. On the seventh day, I touched the naked sole of his foot with a bit of paper, and he jerked it away, curling at the same time his toes, like a much older child when tickled. The perfection of these reflex movements shows that the extreme imperfection of the voluntary ones is not due to the state of the muscles or of the coordinating centres, but to that of the seat of the will. At this time, though so early, it seemed clear to me that a warm soft hand \\[p. 286\\] applied to his face excited a wish to suck. This must be considered as a reflex or an instinctive action, for it is impossible to believe that experience and association with the touch of his mother's breast could so soon have come into play. During the first fortnight he often started on hearing any sudden sound, and blinked his eyes. The same fact was observed with some of my other infants within the first fortnight. Once, when he was 66 days old, I happened to sneeze, and he started violently, frowned, looked frightened, and cried rather badly: for an hour afterwards he was in a state which would be called nervous in an older person, for every slight noise made him start. A few days before this same date, he first started at an object suddenly seen; but for a long time afterwards sounds made him start and wink his eyes much more frequently than did sight; thus when 114 days old, I shook a paste-board box with comfits in it near his face and he started, whilst the same box when empty or any other object shaken as near or much nearer to his face produced no effect. We may infer from these several facts that the winking of the eyes, which manifestly serves to protect them, had not been acquired through experience. Although so sensitive to sound in a general way, he was not able even when 124 days old easily to recognise whence a sound proceeded, so as to direct his eyes to the source.\n\nWith respect to vision, \u2013 his eyes were fixed on a candle as early as the 9th day, and up to the 45th day nothing else seemed thus to fix them; but on the 49th day his attention was attracted by a bright-coloured tassel, as was shown by his eyes becoming fixed and the movements of his arms ceasing. It was surprising how slowly he acquired the power of following with his eyes an object if swinging at all rapidly; for he could not do this well when seven and a half months old. At the age of 32 days he perceived his mother's bosom when three or four inches from it, as was shown by the protrusion of his lips and his eyes becoming fixed; but I much doubt whether this had any connection with vision; he certainly had not touched the bosom. Whether he was guided through smell or the sensation of warmth or through association with the position in which he was held, I do not at all know.\n\nThe movements of his limbs and body were for a long time vague and purposeless, and usually performed in a jerking manner; but there was one exception to this rule, namely, that from a very early period, certainly long before he was 40 days old, he could move his hands to his own mouth. 
When 77 days old, he took the sucking bottle (with which he was partly fed) in his right hand, whether he was held on the left or right arm of his nurse, and he would not take it in his left hand \\[p. 287\\] until a week later although I tried to make him do so; so that the right hand was a week in advance of the left. Yet this infant afterwards proved to be left-handed, the tendency being no doubt inherited \u2013 his grandfather, mother, and a brother having been or being left-handed. When between 80 and 90 days old, he drew all sorts of objects into his mouth, and in two or three weeks' time could do this with some skill; but he often first touched his nose with the object and then dragged it down into his mouth. After grasping my finger and drawing it down into his mouth, his own hand prevented him from sucking it; but on the 114th day, after acting in this manner, he slipped his own hand down so that he could get the end of my finger into his mouth. This action was repeated several times, and evidently was not a chance but a rational one. The intentional movements of the hands and arms were thus much in advance of those of the body and legs; though the purposeless movements of the latter were from a very early period usually alternate as in the act of walking. When four months old, he often looked intently at his own hands and other objects close to him, and in doing so the eyes were turned much inwards, so that he often squinted frightfully. In a fortnight after this time (*i.e.* 132 days old) I observed that if an object was brought as near to his face as his own hands were, he tried to seize it, but often failed; and he did not try to do so in regard to more distant objects. I think there can be little doubt that the convergence of his eyes gave him the clue and excited him to move his arms. Although this infant thus began to use his hands at an early period, he showed no special aptitude in this respect, for when he was 2 years and 4 months old, he held pencils, pens, and other objects far less neatly and efficiently than did his sister who was then only 14 months old, and who showed great inherent aptitude in handling anything.\n\n*Anger*. \u2013 It was difficult to decide at how early an age anger was felt; on his eighth day he frowned and wrinkled the skin round his eyes before a crying fit, but this may have been due to pain or distress, and not to anger. When about ten weeks old, he was given some rather cold milk and he kept a slight frown on his forehead all the time that he was sucking, so that he looked like a grown-up person made cross from being compelled to do something which he did not like. When nearly four months old, and perhaps much earlier, there could be no doubt, from the manner in which the blood gushed into his whole face and scalp, that he easily got into a violent passion. A small cause sufficed; thus, when a little over seven months old, he screamed with rage because a lemon slipped away and he could not seize it with his hands. When eleven months old, if \\[p. 288\\] a wrong plaything was given to him, he would push it away and beat it; I presume that the beating was an instinctive sign of anger, like the snapping of the jaws by a young crocodile just out of the egg, and not that he imagined he could hurt the plaything. When two years and three months old, he became a great adept at throwing books or sticks, &c., at anyone who offended him; and so it was with some of my other sons. 
On the other hand, I could never see a trace of such aptitude in my infant daughters; and this makes me think that a tendency to throw objects is inherited by boys.\n\n*Fear*. \u2013 This feeling is probably one of the earliest which is experienced by infants, as shown by their starting at any sudden sound when only a few weeks old, followed by crying. Before the present one was 4 1\/2 months old I had been accustomed to make close to him many strange and loud noises, which were all taken as excellent jokes, but at this period I one day made a loud snoring noise which I had never done before; he instantly looked grave and then burst out crying. Two or three days afterwards, I made through forgetfullness the same noise with the same result. About the same time (*viz*. on the 137th day) I approached with my back towards him and then stood motionless; he looked very grave and much surprised, and would soon have cried, had I not turned round; then his face instantly relaxed into a smile. It is well known how intensely older children suffer from vague and undefined fears, as from the dark, or in passing an obscure corner in a large hall, &c. I may give as an instance that I took the child in question, when 2 1\/4 years old, to the Zoological Gardens, and he enjoyed looking at all the animals which were like those that he knew, such as deer, antelopes &c., and all the birds, even the ostriches, but was much alarmed at the various larger animals in cages. He often said afterwards that he wished to go again, but not to see \"beasts in houses\"; and we could in no manner account for this fear. May we not suspect that the vague but very real fears of children, which are quite independent of experience, are the inherited effects of real dangers and abject superstitions during ancient savage times? It is quite conformable with what we know of the transmission of formerly well-developed characters, that they should appear at an early period of life, and afterwards disappear.\n\n*Pleasurable Sensations*. \u2013 It may be presumed that infants feel pleasure whilst sucking and the expression of their swimming eyes seems to show that this is the case. This infant smiled when 45 days, a second infant when 46 days old; and these were true smiles, indicative of pleasure, for their eyes brightened and eyelids slightly closed. The smiles arose chiefly when looking at their mother, and were therefore probably of mental origin; \\[p. 289\\] but this infant often smiled then, and for some time afterwards, from some inward pleasurable feeling, for nothing was happening which could have in any way excited or amused him. When 110 days old he was exceedingly amused by a pinafore being thrown over his face and then suddenly withdrawn; and so he was when I suddenly uncovered my own face and approached his. He then uttered a little noise which was an incipient laugh. Here surprise was the chief cause of the amusement, as is the case to a large extent with the wit of grown-up persons. I believe that for three or four weeks before the time when he was amused by a face being suddenly uncovered, he received a little pinch on his nose and cheeks as a good joke. I was at first surprised at humour being appreciated by an infant only a little above three months old, but we should remember how very early puppies and kittens begin to play. 
When four months old, he showed in an unmistakable manner that he liked to hear the pianoforte played; so that here apparently was the earliest sign of an \u00e6sthetic feeling, unless the attraction of bright colours, which was exhibited much earlier, may be so considered.\n\n*Affection*. \u2013 This probably arose very early in life, if we may judge by his smiling at those who had charge of him when under two months old; though I had no distinct evidence of his distinguishing and recognising anyone, until he was nearly four months old. When nearly five months old, he plainly showed his wish to go to his nurse. But he did not spontaneously exhibit affection by overt acts until a little above a year old, namely, by kissing several times his nurse who had been absent for a short time. With respect to the allied feeling of sympathy, this was clearly shown at 6 months and 11 days by his melancholy face, with the corners of his mouth well depressed, when his nurse pretended to cry. Jealousy was plainly exhibited when I fondled a large doll, and when I weighed his infant sister, he being then 15 1\/2 months old. Seeing how strong a feeling jealousy is in dogs, it would probably be exhibited by infants at an earlier age than that just specified, if they were tried in a fitting manner.\n\n*Association of Ideas, Reason, &c.*\u2013 The first action which exhibited, as far as I observed, a kind of practical reasoning, has already been noticed, namely, the slipping his hand down my finger so as to get the end of it into his mouth; and this happened on the 114th day. When four and a half months old, he repeatedly smiled at my image and his own in a mirror, and no doubt mistook them for real objects; but he showed sense in being evidently surprised at my voice coming from behind him. Like all infants he much enjoyed thus looking at himself, and in less than two months perfectly understood that it was \\[p. 290\\] an image; for if I made quite silently any odd grimace, he would suddenly turn round to look at me. He was, however, puzzled at the age of seven months, when being out of doors he saw me on the inside of a large plate-glass window, and seemed in doubt whether or not it was an image. Another of my infants, a little girl, when exactly a year old, was not nearly so acute, and seemed quite perplexed at the image of a person in a mirror approaching her from behind. The higher apes which I tried with a small looking-glass behaved differently; they placed their hands behind the glass, and in doing so showed their sense, but far from taking pleasure in looking at themselves they got angry and would look no more.\n\nWhen five months old, associated ideas arising independently of any instruction became fixed in his mind; thus as soon as his hat and cloak were put on, he was very cross if he was not immediately taken out of doors. When exactly seven months old, he made the great step of associating his nurse with her name, so that if I called it out he would look round for her. Another infant used to amuse himself by shaking his head laterally: we praised and imitated him, saying \"Shake your head\"; and when he was seven months old, he would sometimes do so on being told without any other guide. 
During the next four months the former infant associated many things and actions with words; thus when asked for a kiss he would protrude his lips and keep still, \u2013 would shake his head and say in a scolding voice \"Ah\" to the coal-box or a little spilt water, &c., which he had been taught to consider as dirty. I may add that when a few days under nine months old he associated his own name with his image in the looking-glass, and when called by name would turn towards the glass even when at some distance from it. When a few days over nine months, he learnt spontaneously that a hand or other object causing a shadow to fall on the wall in front of him was to be looked for behind. Whilst under a year old, it was sufficient to repeat two or three times at intervals any short sentence to fix firmly in his mind some associated idea. In the infant described by M. Taine (pp. 254\u2013256) the age at which ideas readily became associated seems to have been considerably later, unless indeed the earlier cases were overlooked. The facility with which associated ideas due to instruction and others spontaneously arising were acquired, seemed to me by far the most strongly marked of all the distinctions between the mind of an infant and that of the cleverest full-grown dog that I have ever known. What a contrast does the mind of an infant present to that of the pike, described by Professor M\u00f6bius,\\[1\\] who during three whole months dashed and \\[p. 291\\] stunned himself against a glass partition which separated him from some minnows; and when, after at last learning that he could not attack them with impunity, he was placed in the aquarium with these same minnows, then in a persistent and senseless manner he would not attack them!\n\nCuriosity, as M. Taine remarks, is displayed at an early age by infants, and is highly important in the development of their minds; but I made no special observation on this head. Imitation likewise comes into play. When our infant was only four months old I thought that he tried to imitate sounds; but I may have deceived myself, for I was not thoroughly convinced that he did so until he was ten months old. At the age of 11 1\/2 months he could readily imitate all sorts of actions, such as shaking his head and saying \"Ah\" to any dirty object, or by carefully and slowly putting his forefinger in the middle of the palm of his other hand, to the childish rhyme of \"Pat it and pat it and mark it with T\". It was amusing to behold his pleased expression after successfully performing any such accomplishment.\n\nI do not know whether it is worth mentioning, as showing something about the strength of memory in a young child, that this one when 3 years and 23 days old on being shown an engraving of his grandfather, whom he had not seen for exactly six months, instantly recognised him and mentioned a whole string of events which had occurred whilst visiting him, and which certainly had never been mentioned in the interval.\n\n*Moral Sense*. \u2013 The first sign of moral sense was noticed at the age of nearly 13 months: I said \"Doddy (his nickname) won't give poor papa a kiss, \u2013 naughty Doddy\". These words, without doubt, made him feel slightly uncomfortable; and at last when I had returned to my chair, he protruded his lips as a sign that he was ready to kiss me; and he then shook his hand in an angry manner until I came and received his kiss. 
Nearly the same little scene recurred in a few days, and the reconciliation seemed to give him so much satisfaction, that several times afterwards he pretended to be angry and slapped me, and then insisted on giving me a kiss. So that here we have a touch of the dramatic art, which is so strongly pronounced in most young children. About this time it became easy to work on his feelings and make him do whatever was wanted. When 2 years and 3 months old, he gave his last bit of gingerbread to his little sister, and then cried out with high self-approbation \"Oh kind Doddy, kind Doddy\". Two months later, he became extremely sensitive to ridicule, and was so suspicious that he often thought people who were laughing and talking together were laughing at him. A little later (2 years and 7 1\/2 months old) I met him \\[p. 292\\] coming out of the dining room with his eyes unnaturally bright, and an odd unnatural or affected manner, so that I went into the room to see who was there, and found that he had been taking pounded sugar, which he had been told not to do. As he had never been in any way punished, his odd manner certainly was not due to fear, and I suppose it was pleasurable excitement struggling with conscience. A fortnight afterwards, I met him coming out of the same room, and he was eyeing his pinafore which he had carefully rolled up; and again his manner was so odd that I determined to see what was within his pinafore, notwithstanding that he said there was nothing and repeatedly commanded me to \"go away,\" and I found it stained with pickle-juice; so that here was carefully planned deceit. As this child was educated solely by working on his good feelings, he soon became as truthful, open, and tender, as anyone could desire.\n\n*Unconsciousness, Shyness*. \u2013 No one can have attended to very young children without being struck at the unabashed manner in which they fixedly stare without blinking their eyes at a new face; an old person can look in this manner only at an animal or inanimate object. This, I believe, is the result of young children not thinking in the least about themselves, and therefore not being in the least shy, though they are sometimes afraid of strangers. I saw the first symptom of shyness in my child when nearly two years and three months old: this was shown towards myself, after an absence of ten days from home, chiefly by his eyes being kept slightly averted from mine; but he soon came and sat on my knee and kissed me, and all trace of shyness disappeared.\n\n*Means of Communication*. \u2013 The noise of crying or rather of squalling, as no tears are shed for a long time, is of course uttered in an instinctive manner, but serves to show that there is suffering. After a time the sound differs according to the cause, such as hunger or pain. This was noticed when this infant was eleven weeks old, and I believe at an earlier age in another infant. Moreover, he appeared soon to learn to begin crying voluntarily, or to wrinkle his face in the manner proper to the occasion, so as to show that he wanted something. When 46 days old, he first made little noises without any meaning to please himself, and these soon became varied. An incipient laugh was observed on the 113th day, but much earlier in another infant. At this date I thought, as already remarked, that he began to try to imitate sounds, as he certainly did at a considerably later period. When five and a half months old, he uttered an articulate sound \"da\" but without any meaning attached to it. 
When a little over a year old, he used gestures \\[p. 293\\] to explain his wishes; to give a simple instance, he picked up a bit of paper and giving it to me pointed to the fire, as he had often seen and liked to see paper burnt. At exactly the age of a year, he made the great step of inventing a word for food, namely *mum*, but what led him to it I did not discover. And now instead of beginning to cry when he was hungry, he used this word in a demonstrative manner or as a verb, implying \"Give me food\". This word therefore corresponds with *ham* as used by M. Taine's infant at the later age of 14 months. But he also used *mum* as a substantive of wide signification; thus he called sugar *shu-mum*, and a little later after he had learned the word \"black,\" he called liquorice *black-shu-mum*, \u2013 black-sugar-food.\n\nI was particularly struck with the fact that when asking for food by the word *mum* he gave to it (I will copy the words written down at the time) \"a most strongly marked interrogatory sound at the end\". He also gave to \"Ah,\" which he chiefly used at first when recognising any person or his own image in a mirror, an exclamatory sound, such as we employ when surprised. I remark in my notes that the use of these intonations seemed to have arisen instinctively, and I regret that more observations were not made on this subject. I record, however, in my notes that at a rather later period, when between 18 and 21 months old, he modulated his voice in refusing peremptorily to do anything by a defiant whine, so as to express \"That I won't\"; and again his humph of assent expressed \"Yes, to be sure\". M. Taine also insists strongly on the highly expressive tones of the sounds made by his infant before she had learnt to speak. The interrogatory sound which my child gave to the word *mum* when asking for food is especially curious; for if anyone will use a single word or a short sentence in this manner, he will find that the musical pitch of his voice rises considerably at the close. I did not then see that this fact bears on the view which I have elsewhere maintained that before man used articulate language, he uttered notes in a true musical scale as does the anthropoid ape Hylobates.\n\nFinally, the wants of an infant are at first made intelligible by instinctive cries, which after a time are modified in part unconsciously, and in part, as I believe, voluntarily as a means of communication, \u2013 by the unconscious expression of the features, \u2013 by gestures and in a marked manner by different intonations, \u2013 lastly by words of a general nature invented by himself, then of a more precise nature imitated from those which he hears; and these latter are acquired at a wonderfully quick rate. An infant understands to a certain extent, and as \\[p. 294\\] I believe at a very early period, the meaning or feelings of those who tend him, by the expression of their features. There can hardly be a doubt about this with respect to smiling; and it seemed to me that the infant whose biography I have here given understood a compassionate expression at a little over five months old. When 6 months and 11 days old he certainly showed sympathy with his nurse on her pretending to cry. When pleased after performing some new accomplishment, being then almost a year old, he evidently studied the expression of those around him. 
It was probably due to differences of expression and not merely of the form of the features that certain faces clearly pleased him much more than others, even at so early an age as a little over six months. Before he was a year old, he understood intonations and gestures, as well as several words and short sentences. He understood one word, namely, his nurse's name, exactly five months before he invented his first word *mum*; and this is what might have been expected, as we know that the lower animals easily learn to understand spoken words.

abstract: Technological development in telemedicine has made important progress, and telemedicine is assuming a supporting role in diagnostic processes. Among the rapidly evolving fields, telepathology is one of the most interesting. Until a few years ago, telepathology allowed us to observe histological or cytological slides at a distance and in real time through the Internet, using a motorized microscope (dynamic telepathology). Telepathology has now taken an important step forward: it is possible to digitize a slide completely and to store it. This allows observation of the whole surface of histological or cytological slides remotely with an ordinary PC, without human intervention (virtual slide). The two systems have distinct, complementary characteristics, so a "hybrid system" supporting both technologies turns out to be the best solution for a wide-ranging program. To put these considerations into practice, we report an organizational model applicable to a territory in which three hospitals operate.
An essential prerequisite for an efficient telepathology system is a structured data transmission network with a high guaranteed bandwidth, together with consolidated experience in the acquisition and management of digital images.
author: Roberto Mencarelli; Adriano Marcolongo; Alessio Gasparetto
date: 2008
institute: 1Department of Surgical Pathology, Azienda ULSS 18, Rovigo, Italy; 2General Manager, Azienda ULSS 18, Rovigo, Italy; 3Department of Information Technology – Azienda ULSS 18, Rovigo, Italy
references:
title: Organizational model for a telepathology system

# Introduction

The acquisition of a telepathology system designed for a district territory comprising three hospitals managed by two different health administrations (Adria and Rovigo) is part of a wider plan for the local development of telemedicine. The project was planned by the two administrations and fully financed by the Cariparo Foundation.

Implementing this telepathology system required innovation in the hospitals' technology infrastructure, in particular: physical and logical structures for data transport, hardware modernization, software development, and process reorganization.

This technological innovation process is essential for developing and offering citizens high-quality, reliable health services. In particular, a telepathology system needs high hardware and network performance for processing, managing and archiving digital slide images.

The main aims of introducing telepathology can be summarized as follows:

• Increasing the quality of service offered to patients, thanks to qualified second opinions;

• Sharing and discussing interesting pathological cases to increase professional ability;

• Creating a permanent and accessible scientific repository.

# Methods

We can distinguish three types of telepathology: static telepathology, dynamic telepathology, and virtual slide telepathology. In this paper, we analyze the last two to describe how a "hybrid telepathology system" can be implemented that takes advantage of the peculiarities of each. Dynamic telepathology allows exploration, in real time, of the whole slide surface thanks to a robotized, remotely controlled microscope; it also permits changing magnification and focus and digitizing the portion of interest. Its disadvantages are the necessary presence of a technician to position the desired slide in the microscope, and the discontinuous digitization in terms of area and magnification. In virtual slide telepathology, the slide is completely digitized and stored in a repository; this permits single- or multiple-user consultation, at any time and without human intervention. The virtual slide allows exploration of the whole slide surface at different magnifications. Many types of devices can be used for slide digitization, and the right option depends on the workload and the necessary scan rate \[1\]. Virtual slide telepathology has a negligible disadvantage \[2\]: it is not possible to focus areas that were not correctly acquired. During the scan process, the scanning device takes into account a finite number of points for the focusing procedure; each point is characterized by its own focal plane.
When the sample surface is irregular, the device uses proprietary algorithms to calculate an "in-focus surface" above the slide; this gives a good average focus but a locally less accurate one. This problem is directly correlated with the type of sample analyzed and grows with increasing surface irregularity of the histological or cytological sample. Another characteristic to take into account is the virtual slide file size, which depends principally on the following parameters: the resolution, the compression ratio, the compression algorithm and the color depth. Considering the resolution used for histological and cytological slide scans (0.5 μm/pixel – 20×), the maximum scan area (9 cm²), an adequate compression ratio (15:1) and an appropriate compression algorithm (JPEG2000), a file of about 500 MB can be obtained on average, even though the uncompressed file is approximately 7.5 GB \[3,4\] (see the worked estimate at the end of this section). This file size currently allows selected virtual slides to be stored for the creation of a scientific archive, but it does not permit complete archiving of all slides; with complete archiving, the storage device costs would be far too high. Certainly, in the future, the reduction of storage cost per byte and an international agreement on a storage standard will permit the complete archiving of all slides, eliminating the need to conserve the glass slides. This will bring many advantages in terms of clinical data availability and accessibility. Finally, the need for an adequate guaranteed data communication bandwidth for sharing virtual slides must be considered. This problem can be solved by using an Image Server and a specific image format (the image pyramid), which offers selective visualization of images in terms of resolution and region of interest, without the need to transfer the complete virtual slide from the Image Server to a local PC (see the tile-selection sketch at the end of this section). Complete focusing of the whole slide surface is therefore the real problem to solve, while the archiving and consultation of virtual slides are effectively handled by scalable, high-performance storage devices and dedicated Image Servers.

There are mainly two types of second opinion: real-time second opinions on frozen sections during surgery (intraoperative consultation), and second opinions on histological or cytological slides requiring complex interpretation. Each type has specific characteristics that relate directly to the previous discussion. In the first type, the analyses are conducted on thick samples or samples with irregular surfaces (this arises from the way the samples are obtained and the amount of liquid contained in the tissue to be analyzed); for this reason the automatic scanning device (virtual slide telepathology) cannot focus correctly on the whole slide surface, whereas with a motorized microscope (dynamic telepathology) it is possible to focus on every point of the sample surface. Dynamic telepathology therefore represents an important tool for second opinions during intraoperative consultation, especially given the presence on the territory of two hospitals with surgery departments but without a resident pathologist. Histological and cytological slides, by contrast, are prepared as thin sections; this eliminates the focusing trouble due to irregular surfaces described above, so virtual slide telepathology is favorable here because it allows the morphological picture to be viewed at any time, remotely, with an ordinary PC and without human intervention. This makes it possible to obtain a second opinion on complex cases from an expert of one's choice and to perform external quality control.
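As a worked check of the file-size figures quoted above, the following short script recomputes the estimate from the stated parameters. It is a back-of-the-envelope sketch, not part of the original article: the 24-bit RGB color depth is our assumption, which is why the raw size comes out somewhat above the ~7.5 GB quoted in the text (the exact figure depends on color depth and on how much of the maximum scan area is actually used).

```python
# Back-of-the-envelope check of the virtual-slide sizes quoted above.
# Assumptions (ours, not the article's): 24-bit RGB (3 bytes/pixel) and
# use of the full 9 cm^2 maximum scan area.

resolution_um = 0.5        # micrometers per pixel (20x scan)
area_cm2 = 9.0             # maximum scan area
bytes_per_pixel = 3        # assumed 24-bit RGB color depth
compression_ratio = 15.0   # JPEG2000 compression, as quoted

pixels = area_cm2 * 1e8 / resolution_um**2   # 1 cm^2 = 1e8 um^2 -> 3.6e9 px
raw_gb = pixels * bytes_per_pixel / 1e9      # ~10.8 GB uncompressed
compressed_mb = raw_gb * 1000 / compression_ratio

print(f"pixels: {pixels:.1e}, raw: {raw_gb:.1f} GB, "
      f"compressed: {compressed_mb:.0f} MB")  # ~720 MB; 7.5 GB / 15 = 500 MB
```

Either way, the order of magnitude is the same: a single fully scanned slide occupies hundreds of megabytes after compression, which is why complete archiving of every slide is not yet economical.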
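The selective access enabled by the image pyramid can be illustrated in the same spirit. The tile size, function name and parameters below are illustrative assumptions rather than any specific Image Server API; the point is only that a viewer fetches the few tiles covering the requested region at the requested resolution level, never the whole multi-gigapixel image.

```python
import math

TILE = 256  # assumed tile edge length in pixels

def tiles_for_view(x_um, y_um, w_um, h_um, base_um_per_px, zoom_out):
    """Tile columns/rows covering a viewport at one pyramid level.

    zoom_out is the downsampling factor of the level (1 = full resolution).
    """
    um_per_px = base_um_per_px * zoom_out  # resolution at this pyramid level
    col0 = int(x_um / um_per_px) // TILE   # leftmost tile column
    row0 = int(y_um / um_per_px) // TILE   # topmost tile row
    col1 = math.ceil((x_um + w_um) / um_per_px / TILE)
    row1 = math.ceil((y_um + h_um) / um_per_px / TILE)
    return range(col0, col1), range(row0, row1)

# A 2 mm x 1.5 mm field viewed at 4x downsampling of a 0.5 um/px scan:
cols, rows = tiles_for_view(10_000, 8_000, 2_000, 1_500, 0.5, 4)
print(len(cols) * len(rows), "tiles, instead of the whole gigapixel image")
```

Only these few tiles travel over the network for each pan or zoom, which is what makes remote consultation practical even on modest hospital links.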
Based on the previous considerations, the best choice for an effective telepathology system is a "hybrid system" composed of remotely controlled motorized microscopes and a scanner for slide digitization, in order to obtain the best characteristics of each system without their respective disadvantages. This choice has been applied to the health structures of the Rovigo province, in particular the Rovigo, Adria and Trecenta hospitals. For storage purposes, a NAS (Network Attached Storage) device has been used (50 TB capacity, 1 Gb/s transfer rate). The system described above is completely integrated with the CPOE (Computerized Physician Order Entry)-based hospital information system.

Thanks to complete slide digitization and the use of an Image Server with high computational performance, it will be possible to apply filters to the acquired images or to apply algorithms for calculating quantities of interest (e.g., cellular membrane distribution and continuity). Once adequately developed, tested and standardized, these techniques will be the basis for the introduction of Computer Aided Diagnosis (CAD).

# Conclusion

The implementation of a telepathology "hybrid system", adequately designed and integrated with the hospital information system, leads to an increase in the quality of service offered by the Pathological Anatomy department, principally thanks to: (a) qualified, real-time second opinions on frozen sections during surgery and histological-cytological consultation on slides requiring complex interpretation; (b) a continuous education process based on the sharing of interesting virtual slides and the creation of a permanent scientific archive.

The design and implementation of a telepathology system must be based on real operational demands, supported by a technological structure that can guarantee a reliable and efficient service, and planned using the hospital's experience in digital image acquisition and storage.

### Acknowledgements

This article has been published as part of *Diagnostic Pathology* Volume 3 Supplement 1, 2008: New trends in digital pathology: Proceedings of the 9th European Congress on Telepathology and 3rd International Congress on Virtual Microscopy.
The full contents of the supplement are available online at .

abstract: This is the third in a series of articles, invited by the editors of *Diabetes*, that describes the research programs and aims of organizations committed to funding and fostering diabetes-related research. The first piece, contributed by the Juvenile Diabetes Research Foundation, appeared in the January 2012 issue of *Diabetes*. The second piece, describing the American Diabetes Association's research program, appeared in the June 2012 issues of *Diabetes* and *Diabetes Care*.
author: Judith E. Fradkin; Griffin P. Rodgers
Corresponding author: Judith Fradkin, .
date: 2013-02
references:
title: Diabetes Research: A Perspective From the National Institute of Diabetes and Digestive and Kidney Diseases

The growing human and economic toll of diabetes has caused consternation worldwide. Not only is the number of people affected increasing at an alarming rate, but onset of the major forms of the disease occurs at ever younger ages. We now know that the reach of diabetes extends far beyond the classic acute metabolic and chronic vascular complications to increased risk of an ever-increasing array of conditions including Alzheimer disease, cancer, liver failure, bone fractures, depression, and hearing loss. In the U.S. one in three Medicare dollars is spent on care of people with diabetes, and the proportion of cardiovascular disease (CVD) risk attributable to diabetes is rising. While complications of diabetes may develop slowly over decades, antecedents of diabetes may lie in utero or early life. Thus the breadth of meaningful research extends across the life span, ranging from studies of how the in utero environment alters diabetes risk to improved understanding of the special needs of older patients with diabetes.

Since the National Institute of Diabetes and Digestive and Kidney Diseases (NIDDK) was established in 1950, we have seen huge progress in our ability to predict, classify, and treat diabetes and its complications as well as to prevent or delay type 2 diabetes. Landmark NIDDK-led clinical trials have demonstrated that glucose control can dramatically reduce diabetes complications, and lifestyle change producing modest weight loss, or the drug metformin, can substantially reduce development of type 2 diabetes. Were it not for this progress, the toll from rising rates of the major forms of diabetes would be much higher.

Despite a challenging fiscal climate, the National Institutes of Health (NIH) expends over \$1 billion on diabetes research annually. NIDDK accounts for about two-thirds of this total, and we are determined that these resources will be invested wisely and balanced among competing priorities.
We must address the most compelling practical questions about clinical management and prevention of diabetes and its complications while also uncovering and exploiting novel pathways that will provide new targets and approaches to combat the disorder. Last year, a new strategic plan for diabetes research (1) was issued under NIDDK's leadership with input from over 100 scientists and multiple federal agencies and components of NIH. The plan highlights progress and opportunities in 10 key areas as well as resource and infrastructure needs. Conquering diabetes will require a wide range of expertise and talent, from molecular and cell biology to the behavioral and social sciences. Developing and empowering this human capital, including by facilitating multidisciplinary collaborations and the application of new technologies to diabetes research, is critical to this endeavor. Here we will touch on highlights of our diabetes research priorities and initiatives, referring readers to the Diabetes Research Strategic Plan (1) for a more comprehensive analysis of advances and opportunities.

# ENHANCING THE DIABETES RESEARCH WORKFORCE

A well-trained and diverse scientific workforce is essential to our efforts to improve outcomes for people with or at risk for diabetes. To ensure a pipeline of new well-trained investigators in basic and clinical disciplines, NIDDK supports training grant, fellowship, and career award mechanisms that provide opportunities for investigators at all stages of the career trajectory. These are supplemented by programs targeted to specific needs, such as our medical student research program in diabetes, which allows medical students to conduct research under the direction of an established scientist at one of our seventeen NIDDK-funded Diabetes Research Centers; supplements to research and training grants to foster recruitment of underrepresented minority scientists to diabetes research; and institutional career development programs to attract pediatric endocrinologists to careers in childhood diabetes research. It is increasingly important to build multidisciplinary research teams and to train multidisciplinary researchers. We are establishing interdisciplinary training grants to promote diabetes research training for bioengineers as well as career development programs in diabetes research for behavioral scientists. We will continue to foster the application of new expertise, for example in computational science and bioinformatics, to diabetes research problems. To help new investigators transition to independence, we provide a less stringent pay line for early-career investigators and invite new investigators with NIDDK research or career development grants to participate in NIDDK workshops designed to help them succeed as independent scientists.

# FUNDAMENTAL RESEARCH

To uncover new approaches to prevention and therapy of diabetes and its devastating complications, NIDDK will continue to support a robust portfolio of investigator-initiated basic research. NIDDK has recently developed data on application and funding trends to help our research community understand application and funding dynamics over recent years. This information is available at . It shows that relative funding levels of most research categories have remained fairly stable since 2003, and demonstrates our continuing strong support of investigator-initiated research project grants (R01s) and of training and career development programs.
Information on resources to empower researchers, such as diabetes centers, human islets, mouse models, reagents, and databases, and on funding opportunities and staff to contact in specific program areas is available at .

# DIABETES PREVENTION

Diabetes prevention is a major public health challenge. For type 2 diabetes, the NIDDK-led Diabetes Prevention Program (DPP) demonstrated a dramatic effect of modest weight loss or the generic drug metformin in delaying or preventing type 2 diabetes (2). Ongoing studies are examining the durability of this risk reduction, the cost effectiveness of the interventions, and their impact in reducing diabetes complications. To facilitate translation of landmark clinical research into clinical practice and public health activities, NIDDK established a program to test practical, cost-effective approaches to delivering interventions proven efficacious in clinical trials, assessing their effectiveness in community and practice settings. One such NIDDK-supported study of a lifestyle change intervention delivered by YMCA fitness trainers (3) is already being rolled out nationwide by the YMCA with coverage from insurers such as United Health Group. In another promising approach, diabetes educators trained selected patients with well-controlled diabetes to serve as community health workers delivering a lifestyle intervention based on the DPP to community members with prediabetes (4). This NIDDK-funded research provides a basis for a new congressionally established National Diabetes Prevention Program at the Centers for Disease Control and Prevention (CDC) to foster delivery of evidence-based lifestyle change programs for people at high risk for type 2 diabetes. Given the sustained effort required to achieve and maintain lifestyle change, much additional research is needed to improve, disseminate, and evaluate type 2 diabetes prevention programs in the U.S. Approaches are also needed to reduce the development of risk factors for diabetes. Of particular importance are studies to reduce environmental exposures during pregnancy or childhood that may increase diabetes and obesity risk.

The incidence of type 1 diabetes is rising worldwide and the disease is occurring at younger ages, suggesting that an environmental trigger is responsible. Bold new programs aimed at preventing type 1 diabetes have been undertaken with support from the Special Statutory Funding Program for Type 1 Diabetes Research, which provides $150 million per year through 2013 for type 1 diabetes research. These special funds are in addition to the regular NIH appropriation. One program established under this initiative, The Environmental Determinants of Diabetes in the Young (TEDDY), has screened nearly half a million neonates to establish a cohort of over 8,000 at high genetic risk for type 1 diabetes. Participants will be followed from birth through 15 years of age to identify dietary, infectious, microbiome, or other environmental triggers of autoimmunity and type 1 diabetes and to study the interaction between environmental factors and specific genetic variations associated with disease risk. Identification of an infectious agent or dietary factor that triggers or protects against the disease would have immense implications for prevention through the development of a vaccine or dietary change.
Also with special program support, the Type 1 Diabetes TrialNet is identifying individuals recently diagnosed with type 1 diabetes or at high risk of developing the disease and testing interventions to prevent diabetes or to slow its progression. Selective immune modulation has been shown to preserve insulin secretion in newly diagnosed patients, and TrialNet is exploring the use of one such agent, teplizumab, to prevent type 1 diabetes in individuals at very high short-term risk of the disease. In the future, combination therapies aimed at modulating multiple steps of the toxic immune response and restoring immunoregulation may produce a clinically significant delay in onset and ultimately the prevention of type 1 diabetes.

# CLINICAL TRIALS TO INFORM DIABETES MANAGEMENT

While information on how to prevent and treat type 2 diabetes has grown rapidly, adequate data from rigorous clinical trials are not available to inform many routine decisions on care for patients with diabetes. Current guidelines are moving away from a "one size fits all" approach to incorporate factors such as diabetes duration and the presence of complications or other comorbidities. However, we lack the information needed to individualize therapy based on demographic, physiologic, or genetic variation. Improved understanding of the genetic, physiologic, and environmental factors that underlie diabetes mellitus is necessary for more individualized diabetes treatment.

Numerous drugs are approved for the treatment of type 2 diabetes, based largely on relatively short-term efficacy in glycemic reduction. However, it is not known whether particular drugs or drug combinations have more durable effects in the maintenance of glucose control. NIDDK is supporting a large comparative effectiveness trial to inform the choice of a second agent when metformin alone is inadequate for glycemic control. This multicenter randomized trial will provide information on the health benefits as well as the cost effectiveness of widely used treatments.

Small studies suggest it may be possible to preserve β-cell function during prediabetes and early in the course of type 2 diabetes. Major questions include the optimal timing of interventions, whether specific treatments have maximum benefit at different stages of the disease, and what patient characteristics influence the choice of initial therapy for individuals. A newly formed consortium will examine approaches to the initial treatment of type 2 diabetes that may reverse or slow the decline in β-cell function over time.

While major trials have established the importance of blood pressure and lipid control in reducing CVD in type 2 diabetes, much less is known about how cardiovascular risk factors should be managed in type 1 diabetes. When blood pressure and lipid lowering should begin, and what the optimal therapeutic targets are, remain to be established. Although type 1 diabetes increases the risk of CVD as much as 10-fold compared with an age-matched population, testing practical approaches to mitigate this risk is challenging because of the low incidence of CVD in the younger type 1 diabetes population. Such trials may require the development of the validated biomarkers discussed below.

Diabetes self-management training and promotion of effective self-care behaviors are vital to improving outcomes.
The choices patients make daily about diet, physical activity, adherence to medications, self-monitoring, foot and dental care, and medical follow-up for early detection of complications are critical for improving diabetes outcomes. Research has established effective counseling and education strategies, including motivational interviewing, patient empowerment, and social and peer support. However, research is needed to expand the reach of such approaches to more patients and providers. NIDDK encourages research on approaches such as group visits, telemedicine, and social media that may extend the impact of the limited workforce skilled in the provision of this care.

# DIABETES IN SPECIAL POPULATIONS

Diabetes spares no age, sex, racial, or ethnic group, yet each group faces special challenges. Intensive glycemic control in youth may afford lifelong protection from complications, yet infancy and adolescence pose unique challenges in attaining such control. Treatment priorities and the optimal glycemic, blood pressure, and cholesterol targets to prevent complications and maintain quality of life may differ for older adults or those with limited life expectancy. The *Eunice Kennedy Shriver* National Institute of Child Health and Human Development (NICHD)-led Hyperglycemia and Adverse Pregnancy Outcome (HAPO) study (5) showed that perinatal harm to mother and offspring occurs in pregnancy at lower levels of glycemia than previously appreciated. To identify gestational glycemic thresholds for longer-term effects on offspring and for the risk of type 2 diabetes in mothers, NIDDK will support a follow-up study of this important cohort. NIDDK has also recently launched a study to explore approaches for the prevention of gestational diabetes mellitus. It also remains to be established which treatments during pregnancy mitigate perinatal and/or long-term complications of gestational diabetes mellitus.

The biologic and environmental risk factors that contribute to underlying racial and ethnic disparities are poorly understood. The alarming emergence of type 2 diabetes in youth overwhelmingly occurs in minority populations. Ominous data on poor risk factor control portend very poor outcomes for this vulnerable population. To address this threat, NIDDK recently completed a major multicenter trial comparing three approaches to the treatment of type 2 diabetes in youth, as well as a trial, conducted in schools serving poor and minority children, of an intervention to prevent the development of risk factors for type 2 diabetes.

As better therapy for HIV infection, organ transplantation, cystic fibrosis, and other conditions improves survival, diabetes has emerged as an increasingly important complication. Critical questions must be addressed about optimal strategies to prevent and treat diabetes in these settings. For example, early diabetes diagnosis and treatment helped improve survival in people with cystic fibrosis-related diabetes (CFRD). An NIDDK-funded trial showed that, early in the course of CFRD before fasting hyperglycemia develops, chronic weight loss can be reversed by instituting insulin therapy but not by repaglinide (6). A current initiative will support research to understand the pathogenesis of CFRD, which affects half of adults with cystic fibrosis. NIDDK will also encourage research on treatment and prevention of diabetes in HIV patients.

Exceptional contributions to understanding diabetes in special populations have emerged from NIDDK's intramural research program.
Spanning decades and generations, longitudinal studies of the Pima Indians have uncovered physiologic, environmental, and genetic determinants of diabetes in the U.S. population with the highest rates of type 2 diabetes. These studies have foreshadowed findings in the broader population, such as the contribution of intrauterine factors to childhood obesity and type 2 diabetes and the prognosis for these youth. Collaborations between intramural and extramural researchers studying individuals with severe insulin resistance and lipodystrophy have yielded important physiologic information and the emergence of leptin replacement therapy for lipodystrophy.

# EPIDEMIOLOGY AND SURVEILLANCE IN DIABETES

Population health data are essential to inform prevention strategies for diabetes and its complications, to identify trends in the development of diabetes and diabetes complications in the general population and in subpopulations, and to measure gaps in the translation of proven therapies into practice. NIDDK has partnered with the CDC to support the SEARCH for Diabetes in Youth study. This multicenter study identifies cases of diabetes in people below 20 years of age in five geographically dispersed populations that encompass the ethnic diversity of the U.S.

SEARCH has found that 1 out of every 523 persons under 20 years of age has diabetes and that, annually, 15,000 children are diagnosed with type 1 diabetes and 3,700 with type 2 diabetes (7). SEARCH also provides data on trends in incidence and risk factor control in the pediatric diabetes population.

So that information on diabetes can be obtained at lower cost than if an independent study were initiated, NIDDK provides substantial support for the diabetes components of major CDC-led efforts, including the National Health and Nutrition Examination Survey and the National Health Interview Survey; partners with other components of NIH to enhance diabetes measures in ongoing studies; and offers support for investigator-initiated ancillary studies focused on diabetes. Of particular importance are collaborations with the National Heart, Lung, and Blood Institute (NHLBI) to address the increasing proportion of CVD attributable to diabetes and issues such as balancing the cardiometabolic risks and benefits associated with statin use.

A third edition of the NIDDK publication *Diabetes in America* is currently in preparation. *Diabetes in America* provides a compilation and assessment of epidemiologic, public health, and clinical data on diabetes and its complications in the U.S.

# RESEARCH TO PRACTICE TRANSLATION

There is a substantial gap between the results achieved in clinical trials and the outcomes in real-world settings. This is particularly true for the minority racial and ethnic groups and low socioeconomic status populations that suffer a disproportionate diabetes burden. Addressing this disparity is a major focus of our multipronged translation research program. Newly established Centers for Diabetes Translation Research will serve as a key component of our program to translate efficacious research findings into practice and the community, to improve the health of Americans with, or at risk for, diabetes. In addition, R34 planning grants and R18 translational clinical trial grants offer a targeted mechanism to test strategies for delivering evidence-based therapies that improve diabetes management or prevention.
These programs test innovative methods to improve clinical care and translate research findings into cost-effective and sustainable clinical treatment strategies, including community-based approaches to make preventive measures as widely accessible and practical as possible. NIDDK has also partnered with the CDC to support the Natural Experiments for Translation in Diabetes (NEXT-D) Study, a research network designed to test the effectiveness and sustainability of population-targeted diabetes prevention and control policies emerging from health care systems, business and community organizations, and health care legislation. It includes large-scale natural experiments or effectiveness studies and rigorously designed prospective studies of diabetes prevention and control interventions. The National Diabetes Education Program, jointly sponsored by NIH and the CDC, plays an important role in the dissemination and translation of NIDDK-supported clinical research.

# GENETICS OF TYPE 1 AND TYPE 2 DIABETES

Because knowledge of genetic risk factors has the potential for disease prediction, patient stratification, and insights into pathogenesis that can generate new approaches to prevention and treatment, NIDDK has expended considerable resources to apply state-of-the-art molecular and computational science to identify diabetes risk genes. Perhaps the most profound impact of these genetic discoveries has been in children with neonatal diabetes, who were often wrongly diagnosed with type 1 diabetes and treated with insulin. Now children with mutations in the genes encoding the SUR1 and Kir6.2 subunits of the potassium ion channel that regulates insulin secretion are treated more safely and effectively with sulfonylurea drugs rather than insulin (8). Genetic testing for type 1 diabetes risk has made possible the TEDDY study and the TrialNet prevention studies described above. The NIDDK-led international Type 1 Diabetes Genetics Consortium helped increase the number of identified risk genes and gene regions from 3 only a decade ago to over 50 today. The challenge now is to understand how this variation contributes to disease pathogenesis, opening up new therapeutic targets.

For type 2 diabetes, we are encouraged by the DPP finding that the relative risk reduction with lifestyle change was as great in those carrying the high-risk *TCF7L2* variant as in other participants, despite higher rates of progression to diabetes. DPP also provided important pharmacogenetic data showing that participants with a specific *KCNJ11* variant or alterations in metformin transport genes were less protected by metformin. These observations offer the possibility of individualizing therapy based on genotype (9). However, despite substantial progress in identifying genetic variation contributing to type 2 diabetes risk in populations of European origin, there is a critical lack of information about genetic variation contributing to type 2 diabetes in disproportionately affected minority populations. NIDDK has established a consortium of investigators to identify genetic variation contributing to type 2 diabetes in minorities.
With known risk genes explaining only a small fraction of the genetic risk for type 2 diabetes, the identification of epigenetic changes that may play a role in the transmission of diabetes risk across generations is assuming increasing importance.

# DIABETES COMPLICATIONS

Research has identified many pathways contributing to glucose-induced damage to endothelial cells, including elevated flux through the polyol and hexosamine pathways, accumulation of advanced glycation end products, activation of proinflammatory pathways, and activation of protein kinase C. Yet many questions remain about how these pathways may interact and converge to increase production of reactive oxygen species (ROS) in the mitochondria, and about how we can build on this knowledge to develop therapies that reduce ROS, reverse glycation, and lessen inflammation. Pathogenetic mechanisms, therapeutic targets, and biomarkers must be identified in specific tissues, including mechanisms of injury to specialized cells such as podocytes and pericytes. The relative importance of hyperglycemia and insulin resistance is of particular interest in understanding the link between diabetes and Alzheimer disease.

A finite period of glycemic control has profound and long-lasting effects on the development of complications, a phenomenon termed metabolic memory (10). It has been proposed that the interaction of epigenetic changes with other persistent effects of hyperglycemia, such as glycation and oxidation of long-lived macromolecules, may explain this finding. Understanding pathways that contribute to metabolic memory may yield therapies targeted at the underlying molecular mechanisms. It may also help us learn whether treatments that prevent the development of complications also prevent their progression.

The course and development of long-term complications cannot be explained solely by the extent and duration of exposure to hyperglycemia. Genetic variation may explain why some people develop complications despite good control while others with poor control are protected. This information could yield undiscovered disease pathways and therapeutic targets, improve disease prediction over currently available clinical markers, and identify individuals for whom intensive therapy would be more or less beneficial. However, we know relatively little about genetic variation that may contribute to or protect from complications, and this is an important area for investigation.

We are also elucidating mechanisms that impair tissue repair and regeneration in diabetes, including dysfunction of endothelial and other stem cell populations. Identification of the specific populations of stem or progenitor cells affected by diabetes, and of the extent to which damage is reversible, is vital for understanding how complications of diabetes might be reversed by stimulating formation of normal new vessels and regrowth of nerves. Differentiation of induced pluripotent stem cells also holds promise for the repair of damaged tissues.

While rates of blindness, kidney failure, amputation, and CVD have fallen substantially in those with diabetes, population-wide lowering of CVD has outpaced that in people with diabetes, and the share of CVD in the U.S. attributable to diabetes is rising.
To address this challenge, NIDDK and NHLBI have collaborated in a number of major multicenter trials to establish effective approaches to reducing CVD in those with type 2 diabetes, including Action for Health in Diabetes (Look AHEAD), Action to Control Cardiovascular Risk in Diabetes (ACCORD), and Bypass Angioplasty Revascularization Investigation in Type 2 Diabetes (BARI 2D). Results from ACCORD and other recent large clinical trials attempting to prevent CVD did not demonstrate a benefit of intensified, near-normal glucose control on clinical CVD events in people with moderate to long diabetes duration and moderate to high CVD risk. More information is needed on the impact of diabetes duration and preexisting tissue damage on the ability to respond to therapies. Beyond the practical management questions addressed in clinical trials, new approaches to uncouple diabetes and CVD must be based on a mechanistic understanding of the factors linking these conditions, including obesity, inflammation, insulin resistance, metabolic perturbations, altered coagulation, neuropathy, and nephropathy, and of how the pathophysiology of atherosclerosis differs between people with type 1 and type 2 diabetes.

# THE β-CELL

Impaired insulin production is key to all forms of diabetes. The extent to which it is possible to preserve and/or restore β-cell function early in the course of diabetes, and whether β-cell recovery is possible later in the disease, remain to be established. Both the nature and optimal timing of interventions to preserve β-cell function and their impact on clinical care and outcomes must be addressed. NIDDK has recently established the Restore Insulin SEcretion (RISE) consortium to explore approaches to restoring insulin secretory function early in type 2 diabetes. Investigators will study both pharmacologic interventions and bariatric surgery. The Type 1 Diabetes TrialNet studies approaches to slow β-cell loss early in the course of type 1 diabetes. Specific immunomodulatory therapies have been shown to preserve C-peptide, and additional strategies, including intense metabolic control with continuous glucose monitoring and pump therapy at onset of disease, are being investigated.

Mechanistic studies are necessary to understand why β-cells lose their ability to secrete insulin as well as the physiology underlying recovery and preservation of endogenous insulin secretion. Particularly encouraging is the finding of some residual C-peptide production in a substantial proportion of patients with long-established type 1 diabetes. Such individuals might be amenable to novel strategies under development to stimulate islet neogenesis, with the potential to improve β-cell mass and function. The intriguing observation of diabetes remission after some forms of bariatric surgery must be investigated to establish the durability of the effect and the characteristics of patients and procedures associated with remission. Elucidation of the mechanisms underlying improved β-cell function after bariatric surgery is being pursued in both human studies and animal models, potentially generating new approaches to restore β-cell function.
Moreover, the recent identification of platelet-derived growth factor as a factor involved in the β-cell replication that occurs in early life but is lost over time offers a potential pathway to β-cell regeneration (11).

Clinical studies of approaches to mitigate β-cell loss and/or restore function could be accomplished much more efficiently if more reliable biomarkers or methods to image β-cell mass or function were available. The development of such tools has been, and continues to be, a high priority of NIDDK. It may be facilitated by the identification of proteins expressed in β-cells and of β-cell surface markers and antibodies through the Beta Cell Biology Consortium (). This consortium is pursuing a multifaceted approach to correcting the loss of β-cell mass in diabetes, including cell reprogramming, regeneration, and replacement. It is also supporting research to develop mouse models in which the development and function of human islets can be studied. In addition, because of differences between murine and human islets, NIDDK has established a resource, the Integrated Islet Distribution Program (), to make human cadaveric islets available to the research community.

The Clinical Islet Transplantation Consortium, a joint effort of NIDDK and the National Institute of Allergy and Infectious Diseases (NIAID), is fostering development of islet transplantation as a cure for type 1 diabetic patients whose disease cannot be effectively managed with current methods of insulin administration or who have received a successful kidney transplant. Its ongoing trials aim to improve the methods of isolating and administering islets and to minimize the toxic effects of the immunosuppressive drugs required for transplantation. NIDDK also supports the collection, analysis, and communication of comprehensive and current data on all islet/β-cell transplants performed in North America, as well as at some European and Australian centers, through the Collaborative Islet Transplant Registry. This clinical islet transplantation research is entirely supported through the special appropriation for type 1 diabetes research.

# OBESITY

A trans-NIH Task Force cochaired by NIDDK, NHLBI, and NICHD coordinates NIH efforts to identify genetic, behavioral, and environmental causes of obesity, to understand how obesity leads to type 2 diabetes, CVD, and other serious health problems, and to build on basic and clinical research findings to develop and study innovative prevention and treatment strategies. One key goal is to understand how biologic, cognitive, behavioral, social, and physical environmental factors interact to influence the development of obesity. For example, how do diet, exercise, and other factors influence reprogramming of the neural circuits involved in regulating food intake and thermogenesis? Another goal is to identify the distinct strategies that may be needed for achieving weight loss, maintaining weight loss, and preventing weight gain. Understanding the responses to weight change that contribute to the very high rate of relapse to obesity may lead to effective strategies for maintenance of reduced body weight. These therapies might be quite different from those used to induce weight loss per se.

Childhood obesity has fueled the rise of type 2 diabetes in teens and young adults.
An NIDDK-led randomized trial testing an intervention to improve nutrition and physical activity in middle schools serving high-risk children demonstrated efficacy in secondary outcomes, reducing BMI *z*-score and other indices of adiposity. However, the primary outcome, the combined prevalence of overweight and obesity, decreased in both the intervention and control schools, perhaps because information about the participating children's health was sent to all families (12). Family-based interventions have been successful for childhood weight control, but strategies for translation and widespread implementation of such interventions in high-risk populations remain to be developed. The roles of technologies such as smartphones and social networking, and of community organizations, must be evaluated for their potential to support individualized and tailored delivery of interventions outside the clinical setting.

At the molecular level, there has been an explosion of knowledge about the mechanisms linking obesity to insulin resistance, and excitement about potential therapeutic manipulation of adipose tissues based on the understanding of white adipose tissue heterogeneity, the persistence of brown adipose tissue into adulthood, and the metabolic flexibility of adipocytes. Tools and techniques have enabled researchers not only to define adipose anatomy and morphology but also to examine its dynamic function. Yet key questions remain about the mechanisms that determine adipocyte number and size, govern adipose development and distribution, and link variation in body fat deposition to metabolic sequelae and to macrophage recruitment and activation.

Particularly challenging, but of utmost importance, are studies to understand the mechanisms by which obesity, hyperglycemia, or other metabolic factors in pregnant women may predispose their offspring to obesity or diabetes. Research is needed on how placental biology and the intrauterine environment shape neural circuits, adipose tissue, and islet development in the fetus. Studies relating differences in energy homeostasis and body composition to genetic variation, and defining the critical developmental periods for imprinting maladaptive metabolic changes, will contribute to understanding how metabolic fate may be programmed early in development.

The finding that Roux-en-Y gastric bypass not only causes profound weight loss but can also restore euglycemia through mechanisms that appear independent of weight loss makes identification of these mechanisms a very high priority, with the potential to uncover new therapeutic pathways to treat and possibly reverse type 2 diabetes. NIDDK is pursuing this through clinical research on the response to various bariatric surgical procedures and through murine studies, in which performing defined procedures in genetically altered mice enables examination of the roles of specific pathways in the metabolic outcomes of these surgeries. Understanding the hormonal and neural controllers of energy balance will be key to designing potential drug combinations that target multiple components of the regulatory system with additive or synergistic effects.

# DIABETES RESOURCES

NIDDK supports numerous resources to improve the quality and multidisciplinary nature of research on diabetes by providing shared access to specialized resources.
The NIDDK-supported Diabetes Research Centers, formerly known as Diabetes Endocrinology Research Centers and Diabetes Research and Training Centers, provide increased and cost-effective collaboration among multidisciplinary groups of investigators at institutions with an established, comprehensive research base in diabetes and related areas of endocrinology and metabolism. The National Mouse Metabolic Phenotyping Centers () provide a range of complex exams used to characterize mouse metabolism, hormones, energy balance, eating and exercise, organ function and morphology, physiology, and histology. NIDDK-supported research consortia, such as the Beta Cell Biology Consortium () and the Nuclear Receptor Signaling Atlas (), provide data and reagents to the broader scientific community. Other important diabetes resources supported by NIDDK include a type 1 diabetes mouse repository at The Jackson Laboratory () and the Islet Cell Resource Centers () that provide human islets for research.

Samples and data from the limited number of large diabetes studies with well-characterized phenotypes at baseline and longitudinal measurement of characteristics of interest are highly prized. To expand the usefulness of these major clinical studies by allowing the wider research community to access study materials beyond the end of the study, or after a limited proprietary interval for ongoing studies, NIDDK has established biosample, genetics, and data repositories. Recently NIDDK has gone beyond the repository concept to create a living biobank, which provides investigators with the opportunity to obtain "on-demand" biological samples from selected individuals. This effort builds on the unique population of individuals at risk for the development of type 1 diabetes ascertained and monitored through TrialNet. The unprecedented availability of such samples and data may allow immunologists to understand the early inciting events in type 1 diabetes pathogenesis.

# APPLYING NEW TOOLS AND TECHNOLOGIES TO DIABETES RESEARCH AND PATIENT CARE

Large clinical trials have established the long-term benefits of intensive blood glucose control in lowering the risk of diabetes complications. However, insulin therapy is burdensome and limited by hypoglycemia. NIDDK has devoted considerable resources to developing technologies for accurate and rapid detection of glucose levels and appropriate adjustment of insulin delivery, to create an artificial pancreas that simulates the functions of β-cells. Achieving this goal will require more accurate and robust glucose-sensing devices; more effective and rapidly acting insulin preparations; algorithms that align real-time glucose measurements with adjustment of insulin delivery; infusion devices that deliver insulin more effectively, conveniently, and physiologically; and fail-safe mechanisms to avoid hyper- or hypoglycemia. It will also be important to determine the benefit of combining insulin delivery with delivery of the counterbalancing hormone glucagon to reduce hypoglycemia, and how timely transmission and remote interpretation of patient data may contribute to safety and efficacy.
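To make the control problem concrete, the sketch below shows a toy proportional-derivative rule of the general kind such algorithms use to couple sensor glucose readings to insulin delivery. This is a minimal illustration, not any system under study: the set point, gains, delivery limits, and suspend thresholds are all hypothetical.

```python
# Toy closed-loop insulin dosing rule (illustrative only).
# All constants below are hypothetical, not from any clinical system.

TARGET = 110.0        # mg/dL, hypothetical glucose set point
KP, KD = 0.005, 0.05  # hypothetical proportional and derivative gains
BASAL = 0.02          # U/min, hypothetical basal insulin rate
MAX_RATE = 0.1        # U/min, fail-safe cap on total delivery

def insulin_rate(glucose_now, glucose_prev, dt_min=5.0):
    """Return an insulin infusion rate (U/min) from two sensor readings."""
    error = glucose_now - TARGET
    trend = (glucose_now - glucose_prev) / dt_min  # mg/dL per min
    rate = BASAL + KP * error + KD * trend
    # Fail-safe clamps: suspend delivery when glucose is low or falling
    # fast, and cap the rate when glucose is high and rising.
    if glucose_now < 70 or (glucose_now < 90 and trend < -1.0):
        return 0.0
    return max(0.0, min(rate, MAX_RATE))

print(insulin_rate(180, 160))  # rising hyperglycemia -> capped rate, 0.1
print(insulin_rate(65, 80))    # hypoglycemia -> delivery suspended, 0.0
```

Deployed systems use considerably more sophisticated model-predictive or fuzzy-logic controllers with insulin-on-board accounting; the sketch is meant only to show the shape of the sensing-to-delivery loop and its fail-safe clamps.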
Research is ongoing to assess the capacity of current artificial pancreas technology to improve overall metabolism, increase patient well-being, restore hypoglycemia awareness, and preserve existing pancreatic β-cell function, as well as to understand the factors affecting its use and acceptance in different age-groups.

Advances in sensors, processors, memory storage, wireless communication, and Web-based data transport, processing, and sharing have applications not only to new therapies but also to many facets of diabetes research in free-living populations. These range from instruments that measure energy intake and physical activity to the application of continuous glucose monitoring to explore questions about the impact on human health of glycemic excursions, which may not be captured by HbA~1c~ measurement. Such studies could address the question of whether and how hypoglycemia may contribute to CVD events. Studies of energy balance would benefit from tools to define the neural circuits and molecular mediators that regulate energy balance by sensing and responding to signals of energy status, and from tools to quantitate mitochondrial biogenesis and turnover and to assess mitochondrial function. To assess the progression of diabetes, methods that directly measure β-cell mass are particularly important, because current methods are all linked to β-cell function, which may have both reversible and irreversible components. Complications research could benefit from tools for the study of extracellular matrix proteins and their interactions with growth factors and circulating stem cells, and for the study of epigenetic change and of glycation and lipoxidation of proteins. Appropriate systems biology and computational tools are needed to facilitate the integration of sequencing, expression, proteome, and metabolome profiles to identify key biologic processes and their interactions. NIDDK seeks to foster development of paradigm-shifting technology and truly transformative tools through workshops, targeted funding opportunity announcements, and interdisciplinary research grants.

# DIABETES AS A GLOBAL HEALTH ISSUE

Diabetes is a universal problem. The impact of diabetes is rapidly growing among populations in developing and middle-income countries; without action, deaths and disability due to diabetes will continue to increase substantially. NIDDK promotes international collaboration between investigators in the U.S. and scientists in other countries to develop and test strategies to stem the epidemic of diabetes at home and globally. Many other countries have health care and medical records systems that are particularly useful for clinical research on diabetes. NIDDK has collaborated globally on type 1 diabetes research through networks such as TEDDY and TrialNet to expand access to research participants and gain insight from research collaborators. Genetic research on diabetes also knows no boundaries, and our research efforts have benefited from combined analysis of international cohorts. Global collaboration on type 2 diabetes is particularly relevant to understanding diabetes in immigrant and minority populations in the U.S. International collaborations offer unique opportunities to compare the effects of different environmental exposures and to understand why specific populations may be particularly vulnerable to diabetes.

# CONCLUSION

Daunting as the challenge of diabetes appears, research has tremendously improved outcomes for people with the disorder.
Were it not for declining rates of kidney disease, amputation, and CVD in people with diabetes, the burden associated with increased diabetes prevalence would be much greater. NIDDK recognizes the importance of collaboration with other components of NIH, other government agencies, and the diabetes voluntary organizations in realizing continued research progress. Our challenge is to stem the growing tide of diabetes through research ranging from understanding the fundamental processes underlying diabetes and its complications to studies of practical approaches to combat diabetes in medical and community settings.

## ACKNOWLEDGMENTS

No potential conflicts of interest relevant to this article were reported.

# REFERENCES

author: Eugene J. Barrett; Stephen Rattigan. Corresponding author: Eugene J. Barrett, .
date: 2012-11
references:
subtitle: Its Measurement and Role in Metabolic Regulation
title: Muscle Perfusion

# Muscle Perfusion: Its Measurement and Role in Metabolic Regulation

Methods for measuring muscle blood flow have been evolving over the past 120 years (1,2). Studies of hormonal regulation of muscle flow and metabolism began with the classical work by Andres et al. (3). Numerous diabetes investigators interested in muscle metabolism in vivo have estimated the net balance of glucose and other metabolites across a skeletal muscle bed or limb from the product of the arterial-venous concentration difference and the blood flow. In this review, drawing upon early studies, we will emphasize some of the principles and limitations of various techniques for measuring flow to estimate net exchange or rates of production or consumption of metabolites. Table 1 summarizes pertinent strengths and limitations of the most commonly used methods for estimating either muscle blood flow or perfusion. From later studies, we will deal more directly with the issue of how flow is hormonally regulated and the relationship between skeletal muscle flow regulation and metabolic regulation. That discussion will extend beyond flow alone as an important regulated variable, emphasizing instead perfusion, which encompasses both the rate and distribution of blood flow in a tissue. We will highlight some of the new methodologies that have helped clarify further the linkage between the regulation of skeletal muscle perfusion and metabolic function.

Table 1. Methods for measurement of limb and muscle blood flow.

# Limb balance measurements identify sites of insulin action and resistance

It is appropriate to begin this discussion with the development of the forearm balance technique by investigators at Johns Hopkins in the early 1950s.
These investigators put forward the hypothesis that blood flow to the forearm could be quantified with simple spectrophotometric methods by continuously infusing into the brachial artery a dye "tracer" (in this case Evans blue dye) that binds tightly and rapidly to serum proteins, and sampling from an ipsilateral antecubital vein (3). They pointed out several advantages of the forearm for such studies, including *1*) that skeletal muscle makes up the preponderance (~80%) of the tissue mass of the forearm; *2*) that the forearm's relatively small mass and slow blood flow allow infusion of very small amounts of dye, which minimizes the contribution of recirculating dye; and *3*) that the vascular anatomy of the forearm is well understood, and in >80% of individuals bifurcation of the brachial artery occurs below the antecubital crease, so that infusion of dye above the elbow should distribute to both the radial and ulnar vessels. Their measurements of flow corresponded well with the plethysmographic measurements that were available at that time. Plethysmography measures blood flow from the time-dependent increase in volume of a segment of a limb after venous outflow occlusion, using either a strain gauge or another detection device. The development and application of plethysmographic limb flow measurements have recently been excellently reviewed (4). As there is no gold standard for measuring flow in clinical studies, cross-validation between methods provides needed assurance.

In these dye dilution studies, the issue of dye mixing in the brachial artery was extensively examined (3), as adequate mixing is clearly required for accurate blood flow measurements. Despite the finding that dye streaming occurred at the infusion rate used, simultaneous sampling from multiple forearm veins showed that adequate mixing had occurred in most subjects. Interestingly, use of a jet injector to promote mixing of the infusate at the arterial injection site provoked downstream vasodilation (perhaps secondary to ATP or adenosine released by endothelium traumatized by the jet shear) and was abandoned. Traction on the arterial catheter also altered downstream arterial resistance and flow, underscoring that care must be taken with this method.

Combining this dye dilution method with arterial-venous (A-V) metabolite sampling allowed estimation of the substrate balance across the forearm (Fig. 1). These "limb balance" studies took advantage of the fact that the forearm receives only approximately one-fiftieth of the cardiac output (5). As a result, infusion of low doses of insulin (e.g., 0.05 mU/min/kg body wt) into the brachial artery provoked physiologically significant increases in the plasma insulin concentrations bathing the forearm musculature, but when diluted in the whole-body plasma pool the insulin had minimal or no effect on plasma glucose, potassium, or other metabolite concentrations. The same circumstance does not pertain for infusion of insulin into the femoral artery when leg balance measurements are made. The leg's greater mass and blood flow require higher rates of insulin infusion, and the insulin recirculates and affects plasma glucose and other metabolites.
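The arithmetic underlying these measurements is compact and worth making explicit. At steady state, constant-rate indicator infusion gives flow as the infusion rate divided by the venous-arterial indicator concentration difference, and multiplying that flow by the A-V difference of a metabolite gives the net limb balance. A minimal sketch with invented numbers (the infusion rate, dye concentrations, and glucose values below are hypothetical, chosen only to illustrate the calculation):

```python
# Indicator-dilution flow and limb substrate balance. Illustrative only:
# every numerical value here is invented, not data from the cited studies.

def plasma_flow_ml_min(infusion_mg_min, dye_venous_mg_ml, dye_arterial_mg_ml):
    """Steady-state constant infusion: flow = infusion rate / (Cv - Ca)."""
    return infusion_mg_min / (dye_venous_mg_ml - dye_arterial_mg_ml)

def net_balance_umol_min(flow_ml_min, art_umol_ml, ven_umol_ml):
    """Fick balance: positive values indicate net uptake by the limb."""
    return flow_ml_min * (art_umol_ml - ven_umol_ml)

# Hypothetical forearm study: 0.5 mg/min dye infusion; venous dye 0.02 mg/mL;
# recirculating arterial dye 0.005 mg/mL.
flow = plasma_flow_ml_min(0.5, 0.02, 0.005)    # ~33 mL/min
# Arterial glucose 5.0 umol/mL (5 mM), deep venous glucose 4.6 umol/mL.
uptake = net_balance_umol_min(flow, 5.0, 4.6)  # ~13 umol/min
print(f"flow ~ {flow:.0f} mL/min; glucose uptake ~ {uptake:.1f} umol/min")
```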
Using this forearm balance method a decade before they developed the insulin clamp (6), these investigators demonstrated that physiologic doses of insulin stimulated skeletal muscle glucose uptake under euglycemic conditions in humans (5,7) and that this action of insulin was impaired in obese adults (7).

Within a few years, other laboratories had begun to apply this limb balance method. In particular, the Cahill laboratory (8-10) at the Joslin Clinic and its diaspora (11-13) used limb balance measurements very effectively to study the actions of glucoregulatory hormones and exercise. Limb blood flow was measured by indocyanine green dye dilution or plethysmography. It is worth noting that while these studies frequently ascribed the observed substrate use or production to muscle, other tissues within the limb can contribute to the measurement, both by adding or removing metabolites and by contributing to the flow measurement. The former is perhaps less of an issue for the forearm, where the deep antecubital vein cannulated in a retrograde fashion drains predominantly skeletal muscle, whereas the femoral vein drains all muscle, adipose, and osseous tissues of the leg. However, to the extent that the A-V difference measurement selectively samples blood draining muscle (as it does in the forearm with retrograde deep forearm vein cannulation), multiplying the A-V difference for a metabolite by total forearm flow may not give an accurate balance if muscle and fat do not handle the metabolite in an identical fashion. It is important to recognize that when either forearm or leg metabolism is studied using the limb balance method, results are typically expressed as millimoles of substrate per minute per 100 mL of forearm or leg. As sex (14), age, and body habitus (15) influence the relative proportion of skeletal muscle versus adipose tissue in the limb, caution is needed in ascribing metabolite exchange across the forearm or leg to a particular tissue type. In this regard, it bears keeping in mind that estimated rates of blood flow to subcutaneous adipose tissue and resting skeletal muscle are comparable (~3-5 mL/min/100 g) (16) and that glucose uptake by subcutaneous adipose tissue contributes significantly to body glucose disposal (17,18). Along this line, leg blood flow (milliliters per kilogram of leg) is not different between men and women, despite the greater fat-free mass in males (19). These cautions and limitations pertain whether one is studying forearm or leg balances. With exercise, muscle blood flow can increase by >20-fold.

Using the leg balance method combined with the euglycemic insulin clamp, investigators demonstrated that leg glucose uptake could account for ~40% of body glucose disposal and suggested that muscle was principally responsible. Extrapolating from leg muscle mass to total body muscle, they concluded that during steady-state hyperinsulinemia muscle accounted for 80-90% of glucose disposal (20). The leg was also found to be a major site of insulin resistance in both obesity and diabetes. The combined measurement of bulk blood flow together with A-V substrate concentration differences proved to be a powerful tool for quantifying the metabolism of carbohydrates (21), fats (22), amino acids (23), and oxygen (24) by the limb tissues. Either dye dilution or plethysmographic methods were used.
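Plethysmographic flow, described above, reduces to a slope estimate: during a brief venous outflow occlusion, arterial inflow transiently equals the rate of limb volume increase, so flow per 100 mL of limb is the initial slope of the volume-time record. A minimal sketch with fabricated readings (the time points and volume changes below are invented for illustration):

```python
import numpy as np

# Venous-occlusion plethysmography: flow equals the early rate of limb
# volume increase after occlusion, per 100 mL of limb. Fabricated data.

t_s = np.array([0.0, 1.0, 2.0, 3.0, 4.0])          # s after venous occlusion
dv_pct = np.array([0.00, 0.05, 0.11, 0.16, 0.21])  # volume change, mL/100 mL

slope_pct_per_s, _ = np.polyfit(t_s, dv_pct, 1)    # initial slope
flow = slope_pct_per_s * 60.0                      # mL/min/100 mL of tissue
print(f"forearm blood flow ~ {flow:.1f} mL/min/100 mL")  # ~3.2, resting range
```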
Whereas only modest data are available in the literature comparing these two methods for measurement of bulk blood flow, they do appear to yield comparable values for bulk limb blood flow (25).

The introduction of simultaneous A-V measurement of radiolabeled substrates along with limb blood flow added yet another dimension to the limb balance technique. These methods were initially used to measure fatty acid uptake and oxidation by exercising muscle (26) and later to quantify both the transport and the net uptake of glucose (27), free fatty acid (FFA) (28), and amino acids, and subsequently to quantify rates of protein synthesis and degradation (29), of lipolysis and lipogenesis (22), and of oxidative and nonoxidative glucose disposal (30). Once again, the quantitative accuracy of these measurements hinges entirely on the measurement of blood flow. In aggregate, these studies demonstrated the great utility of the limb balance method for quantifying metabolic events in peripheral tissue and discovering how they are regulated in health and disease.

With improvements in ultrasound methodology, Doppler flow measurement has become a third widely used method for measuring bulk limb blood flow, and it has increasingly displaced the more invasive dye dilution technique. Doppler flow measurements have been observed to correlate well with plethysmographic measurements (31).

Muscle perfusion can be evaluated by various nuclear magnetic resonance (NMR) techniques. One of the most useful, because of its ability to determine both blood flow and flow distribution noninvasively, is arterial spin labeling (ASL). This technique can be used in humans and experimental animals (32). It has the potential to be directly compared with metabolism determined by other NMR techniques in the same region of tissue, although not simultaneously (32). The ASL technique has been shown to correlate reasonably well (*r*^2^ = 0.85) with leg blood flow measured by venous plethysmography in humans (33) and well (*r*^2^ = 0.95) with limb blood flow in a perfused rat hindlimb model (34). However, due to the expense of specialized NMR equipment and the substantial complexity of the postacquisition data processing required, this technique has not been widely used by physiologists or clinical researchers.

Muscle perfusion has also been measured by microdialysis, in which a semipermeable membrane is placed in the tissue and perfused with a dialysate containing markers such as ethanol or radiolabeled water. The exchange and recovery of these markers can be used to estimate total blood flow (35) and the distribution of nutritive and nonnutritive blood flow (36). Combined with other tracers, the microdialysis technique can also provide information on the interstitial concentrations of metabolic and vasoactive moieties such as glucose, glycerol, lactate, amino acids, insulin, nitric oxide (NO), and adenosine (37). The technique, however, has several limitations, including its invasive nature; poor spatial resolution, as it is difficult to determine the amount of tissue being sampled; and limited temporal resolution, because of the long sampling times (minutes) required to collect sufficient dialysate for measurement.

# Perfusion as a regulator of muscle metabolism

A conceptual shift in the consideration of the role of blood flow in muscle metabolism occurred with studies from the laboratory of Alain Baron.
He modified a method first used by Andersen and Saltin (38) for quantifying leg blood flow, which relied on the use of a thermodilution catheter introduced into the femoral vein (39). Thermodilution methods had previously been used extensively to measure blood flow in vascular beds other than skeletal muscle. While invasive, this methodology allowed very frequent measurements of blood flow during the course of prolonged metabolic balance studies. The design of the catheter allowed spraying of the chilled saline infusate to assure good mixing and nonlaminar flow upstream of the temperature-sensing probe. With this technique, Baron observed that insulin not only increased glucose uptake by leg muscle in healthy individuals but also simultaneously increased blood flow. Further increasing flow with cholinergic stimulation during hyperinsulinemia significantly augmented leg glucose uptake (40). He and collaborators subsequently demonstrated that insulin increased flow by stimulating NO production and that this flow increase was inhibited in type 2 and type 1 diabetes and obesity (39) and by experimentally induced insulin resistance (rev. in 41).

This work strongly suggested that insulin's action to promote the uptake of glucose, amino acids, and other substrates into muscle was aided by the concerted action of insulin on the vasculature to increase blood flow and, presumably, the delivery of insulin, glucose, and other metabolites to the muscle (41). The observation that this vascular action of insulin was blocked by inhibitors of NO synthase (NOS) (42-44) suggested that the endothelium was the target for insulin's vascular action. Subsequent studies in isolated endothelial cells demonstrated that insulin, acting via the phosphatidylinositol 3-kinase-Akt pathway (45,46), specifically enhanced the activity of NOS (Fig. 3).

While these observations of insulin's effect of increasing total limb blood flow were confirmed by a number of laboratories (47,48), concerns were raised and controversy ensued around both the reported need for prolonged, relatively high steady-state insulin concentrations to provoke insulin's vascular action relative to its effects on glucose disposal (49,50) and the lack of effect of pharmacologically increasing flow on limb glucose uptake in insulin-resistant subjects (51,52).

Studies using positron emission tomography (PET) methods by investigators in Turku, Finland, were particularly informative, as PET allowed simultaneous measurement of perfusion, using labeled water (H~2~^15^O), and of glucose uptake (^18^F-deoxyglucose) within voxels of muscle, avoiding the issue of tissue heterogeneity and of bulk flow distribution to multiple tissues in the limb (51,53).

Radiolabeled or fluorescent microspheres have also been used in a number of animal studies to examine the effect of insulin on blood flow (54-56). As the technique requires tissue removal, it has not been applicable to clinical studies. However, the results of published studies confirm that insulin can enhance skeletal muscle blood flow. Inasmuch as the microspheres used typically measure 15 ± 3 µm in diameter, it is not clear that they have access to the capillary bed; they are more likely retained within the terminal arterioles. Multiple measurements can be made in a single animal provided different isotopes or fluorescent labels are available whose emissions can be distinguished by differences in energy spectrum. Of interest is the study by Liang et al.
This work strongly suggested that insulin's action to promote the uptake of glucose, amino acids, and other substrates into muscle was aided by the concerted action of insulin on the vasculature to increase blood flow and presumably the delivery of insulin, glucose, and other metabolites to the muscle (41). The observation that this vascular action of insulin was blocked by inhibitors of NO synthase (NOS) (42–44) suggested that the endothelium was the target for insulin's vascular action. Subsequent studies in isolated endothelial cells demonstrated that insulin, acting via the phosphatidylinositol-3 kinase Akt pathway (45,46), specifically enhanced the activity of NOS (Fig. 3).

While these observations of insulin's effect of increasing total limb blood flow were confirmed by a number of laboratories (47,48), concerns were raised and controversy ensued around both the reported need for prolonged, relatively high steady-state insulin concentrations to provoke insulin's vascular action relative to its effects on glucose disposal (49,50) and the lack of effect of increasing flow pharmacologically on limb glucose uptake in insulin-resistant subjects (51,52).

Studies using positron emission tomography (PET) methods by investigators in Turku, Finland, were particularly informative, as PET allowed simultaneous measurement of perfusion, using labeled water (H~2~^15^O), and glucose uptake (^18^F-deoxyglucose) within voxels of muscle, avoiding the issue of tissue heterogeneity and bulk flow distribution to multiple tissues in the limb (51,53).

Radiolabeled or fluorescent microspheres have also been used in a number of animal studies to examine the effect of insulin on blood flow (54–56). As the technique requires tissue removal, it has not been applicable to clinical studies. However, results of published studies confirm that insulin can enhance skeletal muscle blood flow. Inasmuch as the microspheres used typically measure 15 ± 3 µm in diameter, it is not clear that they have access to the capillary bed; they are more likely retained within the terminal arterioles. Multiple measurements can be made in a single animal provided different isotopes or fluorescent labels are available whose emissions can be distinguished by differences in energy spectrum. Of interest is the study by Liang et al. (55), which included measurements of the effects of insulin on multiple tissues. Insulin-induced increases in blood flow during euglycemia were most apparent in skeletal and cardiac muscle.

Since the various laboratories examining the relationship between insulin's effect on limb blood flow and metabolism used different methods for measuring flow (e.g., strain gauge plethysmography, dye- or thermodilution, or PET), some of the discordant findings may have arisen from these measurement differences. This issue was never fully settled. However, additional opportunities became available with the development of techniques to measure the distribution of flow within a tissue rather than simply bulk flow to the tissue. The observation that the volume of muscle microvasculature perfused is regulated dates back to August Krogh (57). Changes in the volume of muscle microvasculature perfused could influence hormone and nutrient exchange within the muscle by altering the endothelial surface area exposed.

As shown in Fig. 2, the muscle microvasculature consists of third- and fourth-order arterioles, the capillary network, and small venules. Each terminal arteriole supplies multiple capillaries in an orderly arcade. At any given moment, only approximately one-third of the capillaries appear to be actively perfused in resting muscle (58). There is no conclusive evidence for the presence of smooth muscle "sphincters" at the capillary origin. Instead, it appears likely that many capillaries are, at least transiently, functionally unperfused because of *1*) the residual tone limiting flow to the terminal arteriole, *2*) the position of a particular capillary relative to the inflow into the arteriole, *3*) the interstitial pressure around the capillary, *4*) the intrinsic resistance of the capillary, and *5*) the residual pressure within the lumen of the draining venule. These factors in aggregate likely determine whether an individual capillary will be perfused at a particular time. Relaxation of the terminal arteriole, with consequent increases in precapillary pressure, will lead to perfusion of previously under- or unperfused capillaries. Time-variable perfusion of terminal arterioles, a process termed "vasomotion" or "flowmotion," appears to be driven by sympathetic nervous input, by intrinsic myogenic responses of the vascular smooth muscle, and by endothelial autoregulation, in which the terminal microvasculature conducts signals retrograde through gap junctions to feed arterioles when relaxation is needed to at least transiently restore perfusion (59). While intermittent perfusion of muscle microvasculature has been difficult to demonstrate in intravital microscopy studies of surgically exposed thin muscles (60), the use of noninvasive contrast ultrasound (see below) appears to have resolved this issue in larger muscle groups in both human and animal studies.

Just over a decade ago, our laboratories began collaborating to develop and apply two methodologies for assessing the volume of vasculature perfused within muscle. The first relied on measurement of the A-V concentration difference of intravenously infused 1-methyl-xanthine (1-MX). Xanthine oxidase in the endothelial cell converts 1-MX to 1-methylurate, and the extent of metabolism provides an index of the endothelial surface perfused.
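Both the classic limb balance and the 1-MX index rest on the Fick principle: net uptake of a substance is blood flow multiplied by its arteriovenous concentration difference, which is why every uptake estimate in this literature is only as good as the underlying flow measurement. A minimal sketch (the variable names and the worked numbers are ours, chosen only for illustration):

```python
def net_uptake(flow_ml_min, arterial_conc, venous_conc):
    """Fick principle: net uptake = flow x (A - V) concentration difference.
    Units: flow in mL/min, concentrations in umol/mL -> uptake in umol/min."""
    return flow_ml_min * (arterial_conc - venous_conc)

def extraction_ratio(arterial_conc, venous_conc):
    """Fraction of the delivered substance removed in a single pass."""
    return (arterial_conc - venous_conc) / arterial_conc

# Hypothetical example: leg blood flow 300 mL/min, arterial glucose
# 5.0 umol/mL, venous glucose 4.8 umol/mL.
print(round(net_uptake(300, 5.0, 4.8), 1))        # 60.0 umol/min
print(round(extraction_ratio(5.0, 4.8), 3))       # 0.04, i.e. 4% extracted
```

For 1-MX, the same arithmetic applied to the A-V difference yields the metabolized fraction, which is what serves as the index of the perfused endothelial surface.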
Insulin was found in rats to increase 1-MX metabolism (61), and this effect was inhibited by factors that acutely provoke metabolic insulin resistance, including α-methyl serotonin (62), tumor necrosis factor-α (63), elevated FFA concentrations (64), and NOS inhibition (65), as well as by chronic obesity (66). This method, however, was not found to be useful in humans because of the substantially lower extraction ratio for 1-MX across the skeletal muscle capillary bed.

We subsequently applied the technique of contrast-enhanced ultrasound (CEU) to the measurement of microvascular blood volume in both rodent and human studies. CEU was developed initially to image myocardial perfusion. The method relies on the insonation of intravenously infused, perfluorocarbon-filled lipid microbubbles, which enhance the video intensity of ultrasound images. These microbubbles are typically 1–4 µm in diameter, remain within the vasculature, and have rheologic properties similar to those of erythrocytes. Most importantly, the microbubbles oscillate or rupture (depending on the energy of the ultrasound signal) when exposed to the ultrasound beam. This behavior produces an intense reflected sound wave that enhances the image intensity.

Unlike Doppler, plethysmography, PET, or tracer dilution methods, CEU does not measure volume flow, i.e., milliliters per minute per 100 mL tissue. Therefore, CEU cannot be used to provide an estimate of blood flow that could be coupled with A-V difference measurements to obtain a limb balance. Rather, it provides a sensitive index of the volume of microvasculature (and therefore the microvascular surface area available for nutrient exchange) that is perfused. It is most useful for comparing acute responses to interventions such as increasing plasma insulin, exercise, and feeding. In this regard, it has proven quite useful for examining the action of insulin on the microvasculature. Similar to PET, CEU allows study of a region of interest within the muscle or adipose tissue. However, while PET signal intensity in muscle can be calibrated against activity in simultaneously sampled blood to yield a flow measurement, no such calibration is available with CEU. CEU nonetheless affords several advantages over PET scanning for clinical studies. It is less expensive, involves no radioisotopes, is portable, and its image analysis is substantially simpler. Perhaps most importantly, it allows separation of signals arising from larger conduit and feed arteries from signals arising from the microcirculation. This separation is based on capturing a "replenishment curve," i.e., a time series of images that is initiated by instantaneously destroying all microbubbles in the volume being studied and then following the microbubble replenishment and recovery of videointensity until it reaches a plateau. The plateau signal intensity is reached when all perfused vessels in the tissue have again filled with blood containing microbubbles. Since flow in conduit and feed arteries is rapid, these fill quickly, while the capillaries and venules refill slowly. The image intensity created by vessels that fill rapidly can straightforwardly be subtracted from that seen at the plateau, the difference being due to microvasculature containing microbubbles. The image analysis also provides the filling velocity of the replenishment curve, and flow is the product of flow velocity and volume perfused. However, both velocity and volume are measured in videointensity units, and their product is not readily converted to units of volume flow.
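The replenishment kinetics just described are commonly summarized with a single exponential, videointensity y(t) = A(1 − e^(−βt)), in which the plateau A indexes microvascular volume and the rate constant β indexes filling velocity, so that the product A·β indexes flow. A minimal curve-fitting sketch is shown below; the model form is the standard one for destruction–replenishment imaging, but the synthetic data and parameter values are ours, chosen only for illustration.

```python
import numpy as np
from scipy.optimize import curve_fit

def replenishment(t, plateau_a, beta):
    """Destruction-replenishment model: videointensity vs. time after the
    microbubble-destroying pulse. plateau_a ~ microvascular volume
    (videointensity units); beta ~ filling rate constant (1/s)."""
    return plateau_a * (1.0 - np.exp(-beta * t))

# Synthetic replenishment curve: one frame per second for 30 s, with noise.
rng = np.random.default_rng(0)
t = np.arange(1, 31, dtype=float)
y = replenishment(t, plateau_a=20.0, beta=0.35) + rng.normal(0, 0.5, t.size)

(a_hat, beta_hat), _ = curve_fit(replenishment, t, y, p0=(10.0, 0.1))
flow_index = a_hat * beta_hat  # videointensity units/s: an index, not mL/min
print(f"A = {a_hat:.1f} VI units, beta = {beta_hat:.2f} 1/s, "
      f"A*beta = {flow_index:.1f} VI units/s")
```

Because both fitted quantities carry videointensity units, A·β serves only as a within-subject index of flow, consistent with the caveat above.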
In a series of studies using either contrast ultrasound or the A-V concentration difference for 1-MX as an index of the endothelial surface perfused, we observed that insulin at physiologic concentrations enhanced the microvascular volume perfused within either human forearm (67) or rat hind leg muscle. This occurs within 15 min of the onset of hyperinsulinemia (68,69) at modest physiologic insulin concentrations (70) and is blocked by inhibition of NOS (65). Importantly, loss of insulin's ability to enhance microvascular volume, whether by blockade of NOS or as a result of endogenous insulin resistance secondary to obesity (71), diabetes (72), or experimental conditions (e.g., elevated FFA [73] or tumor necrosis factor-α concentrations), impedes skeletal muscle glucose disposal. This recruitment process is illustrated in Fig. 3.

The simplest explanation for this apparent relationship between the microvascular volume perfused within skeletal muscle and metabolic insulin resistance arises from the understanding that the delivery of glucose and insulin to skeletal muscle appears to be a limiting process for insulin action within skeletal muscle (6,74). Increasing the endothelial surface area, and thereby enhancing insulin delivery, could facilitate both the time of onset and the magnitude of the metabolic response to insulin. There may, in addition, be factors within the endothelium that relate to transendothelial insulin transport, are adversely affected by insulin resistance, and contribute further to the metabolic disarray.

There are two other potential techniques that could be used to measure the volume of vasculature perfused within skeletal muscle. The first is an adaptation of PET in which ^15^O-labeled carbon monoxide is inhaled and binds rapidly to hemoglobin; in this manner, the PET tracer becomes a vascular tracer. With this technique, investigators have observed that insulin increases the intensity of the PET signal, consistent with an increase in the volume occupied by vasculature within a given voxel. While statistically significant effects were seen, the magnitude of the effect is substantially less than that seen with CEU. This may relate to the fact that much of the PET signal appears to arise from larger blood vessels, and there is no straightforward way to subtract out this component. Additionally, in order to obtain sufficient signal, the PET image is collected over a substantially longer period of time, and this exposure exceeds the mean transit time for blood traversing skeletal muscle. As a result, flowmotion within the tissue slice may lead to capturing signal from vessels that are open only a portion of the time. Perhaps because of these difficulties, very little has been done with carbon monoxide–labeled tracers to examine microvascular perfusion within skeletal muscle.

Near-infrared spectroscopy provides yet another potential methodology. This technique relies on the deeper penetration of near-infrared electromagnetic radiation into tissue compared with visible or ultraviolet light. It has been used extensively in pulse oximetry, and it can measure the relative amounts of oxyhemoglobin and deoxyhemoglobin. When coupled with venous occlusion, it can be used to determine the total rate of inflow of oxygenated blood to a muscle and potentially the microvascular volume filled. To our knowledge, it has not been applied to studies of insulin- or low-intensity exercise–mediated capillary recruitment within muscle. There is concern regarding how much skin and subcutaneous adipose tissue contribute to the near-infrared spectroscopy signal, and this would be a particular issue in obese individuals and in females.
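The venous-occlusion variant mentioned above rests on a simple idea: with venous outflow transiently blocked, arterial inflow accumulates in the limb, so the initial rate of rise of the volume (or total-hemoglobin) signal equals the inflow. A sketch of the slope estimate follows; the variable names, fit window, and synthetic record are ours, chosen only to illustrate the principle.

```python
import numpy as np

def occlusion_inflow(time_s, volume_ml_per_100ml, fit_window_s=2.0):
    """Estimate arterial inflow from the early, quasi-linear portion of a
    limb-volume signal recorded after venous occlusion."""
    t = np.asarray(time_s)
    v = np.asarray(volume_ml_per_100ml)
    mask = t <= fit_window_s
    slope_per_s = np.polyfit(t[mask], v[mask], 1)[0]
    return slope_per_s * 60.0  # mL / 100 mL tissue / min

# Hypothetical record sampled at 10 Hz: the volume curve saturates as venous
# pressure rises; its initial slope corresponds to 3 mL/100 mL/min inflow.
t = np.arange(0, 20, 0.1)
v = 1.0 * (1 - np.exp(-t / 20.0))
print(round(occlusion_inflow(t, v), 1))  # just under 3.0, since the curve
# already bends slightly within the fit window
```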
# Parallel effects of insulin and contraction on microvascular recruitment

It has long been known that exercise acts synergistically with hyperinsulinemia to enhance glucose disposal. Multiple mechanisms within muscle account for this, including increased glycolytic activity, recruitment of GLUT4 transporters to the plasma membrane, and increased tissue perfusion. The impact of expanding microvascular volume by recruitment of previously unperfused or underperfused capillaries on the delivery of insulin to muscle tissue can be appreciated from studies examining the effect of microvascular recruitment induced by muscular contraction. We recently demonstrated that in the rat hind limb, even very modest contraction enhances the delivery of both insulin and albumin to the muscle interstitium (75). We were particularly intrigued by the parallel between low-dose insulin and very light exercise, both of which stimulate microvascular recruitment within muscle without affecting total blood flow to the tissue. In contrast, more intense exercise (75,76) and higher insulin concentrations increase both microvascular recruitment and total blood flow. This suggests that there is an orderly, staged vascular response to either stimulus, with the microvasculature being more sensitive than the larger resistance vessels that regulate total flow.

Both microvascular recruitment and enhanced blood flow could increase the delivery of insulin, glucose, and other metabolites to skeletal muscle, but there may be some advantage to a staged response to modest increases in insulin or exercise. Capillary recruitment alone would not require a simultaneous increase in cardiac output to prevent a fall in systemic arterial pressure. For insulin, this may be particularly helpful in the postprandial setting, when there is already a demand for increased blood flow to splanchnic tissues. However, when the stimulus is particularly strong (as occurs with high concentrations of insulin or intense exercise), other systemic vascular regulatory responses must be called into play to maintain vascular homeostasis.

Increases in microvascular volume secondary to microvascular recruitment are not restricted to skeletal muscle. With the use of contrast ultrasound, insulin has been observed to increase microvascular volume within cardiac muscle (77) and within subcutaneous adipose tissue of the leg (78). Using an entirely different approach, investigators in the Netherlands have demonstrated increased numbers of perfused capillaries within the nail fold of human skin, as captured by video microscopy (79). This method is unique in allowing direct quantitation of the number of perfused capillaries. Insulin can regulate this recruitment, but its impact is diminished in obesity and hypertension and by insulin resistance (80).

## Closing remarks

Accurate measurements of blood flow are critical for in vivo quantitation of skeletal muscle fuel metabolism and of how it is affected by insulin resistance, diabetes, or other metabolic disorders. Multiple methodologies are currently available for use by clinical and basic scientists performing studies of muscle metabolism (Table 1).
Plethysmography, Doppler ultrasound, and dye dilution measurements are widely available, inexpensive, portable, and not technically demanding, and they are consequently the most commonly used. Other methods, such as PET and magnetic resonance imaging, while more complex, expensive, and not portable, give more detailed information about blood flow within a specific region of interest in the tissue. This latter information is not available with the more readily performed methods. Doubtless, with improving technology these types of measurements will be made more often. Contrast ultrasound can provide unique information with regard to the perfusion of the microvasculature and a measurement of microvascular volume; however, CEU does not provide a measure of bulk flow. Using a variety of different techniques, investigators have clearly recognized the intimate relationship between the regulation of perfusion and metabolic regulation within muscle. In coming years, more will be learned about the chemical signals responsible for the coordination of these two functions.

## ACKNOWLEDGMENTS

This study was supported by National Institutes of Health grants DK-R01-057878 and R01 DK-073759 and American Diabetes Association grant BS 06.

No potential conflicts of interest relevant to this article were reported.

E.J.B. and S.R. reviewed literature and composed and edited the manuscript. E.J.B. is the guarantor of this work and, as such, had full access to all the data in the study and takes responsibility for the integrity of the data and the accuracy of the data analysis.

# REFERENCES

abstract: Until the pathophysiology/etiology of rheumatoid arthritis (RA) is better understood, treatment strategies must focus on disease management. Early diagnosis and treatment with disease-modifying antirheumatic drugs (DMARDs) are necessary to reduce early joint damage, functional loss, and mortality. Several clinical trials have now clearly shown that administering appropriate DMARDs early yields better therapeutic outcomes. However, RA is a heterogeneous disease in which responses to treatment vary considerably for any given patient. Thus, choosing which patients receive combination DMARDs, and which combinations, remains one of our major challenges in treating RA patients. In many well-controlled clinical trials methotrexate and other DMARDs, including the tumor necrosis factor-α inhibitors, have shown considerable efficacy in controlling the inflammatory process, but many patients continue to have active disease. Optimizing clinical response requires the use of a full spectrum of clinical agents with different therapeutic targets.
Newer therapies, such as rituximab, that specifically target B cells have emerged as viable treatment options for patients with RA.
author: Larry Moreland
date: 2005
institute: 1University of Alabama at Birmingham School of Medicine, Birmingham, Alabama, USA
references:
title: Unmet needs in rheumatoid arthritis

# Introduction

Current treatment guidelines suggest that early diagnosis and initial treatment with disease-modifying antirheumatic drugs (DMARDs) are necessary to limit early joint damage and functional loss and to reduce the mortality associated with rheumatoid arthritis (RA) [1]. The earlier use of methotrexate, alone and in combination with other DMARDs, is now the standard of care and has yielded better outcomes for patients with RA.

However, RA is a heterogeneous disease, and patient responses to standard treatments are variable. Most recent clinical trials of newer DMARDs, alone and in combination with methotrexate, have shown that an ACR50 response – a 50% reduction in the signs and symptoms of disease according to criteria established by the American College of Rheumatology (ACR) – was achieved in fewer than two-thirds of patients [2–5]. That leaves at least one-third of the most seriously affected patients with RA without an effective long-term treatment strategy. Until we are able to identify which patients will respond to which treatment, the availability of a variety of agents with different therapeutic targets offers the best opportunity to optimize clinical outcomes.

Rituximab, a chimeric anti-CD20 monoclonal antibody that has emerged as a potential treatment for RA via selective targeting of B lymphocytes, has been used extensively in the treatment of B cell malignancies. There is a growing body of evidence for the pathophysiologic role of B cells. As Silverman and Carson [6] described, B lymphocytes can present immune-complexed antigens to autoreactive T cells; express adhesion and other co-stimulatory molecules that promote T cell activation; synthesize chemokines that induce leukocyte infiltration; produce factors that initiate and sustain angiogenesis and granulation tissue formation; and release autoantibodies that are directly or indirectly destructive to tissues and maintain a memory response to autoantigens. Apart from B cells and T cells, populations of monocytes, macrophages, endothelial cells, and fibroblasts have been implicated in the ongoing inflammatory process [7]. The availability of a broader spectrum of agents with different targeting mechanisms will provide more effective treatment options for diverse patient populations.

# Overall picture of rheumatoid arthritis

RA affects almost 1% of the adult population worldwide [1]. Clinicians have reason for concern when managing a chronic, debilitating condition that requires aggressive, life-long treatment. In large cohort populations, patients with RA exhibit increased morbidity and mortality, compounded by a dramatic impact on quality of life. Approximately 80% of affected patients are disabled after 20 years [8], and life expectancy is reduced by an average of 3–18 years [9].

The management of RA has a marked impact in terms not only of the financial burden to the health care system but also of the financial burden to individual patients and their families.
It has been estimated that the disorder costs the average individual up to US$8500 annually [10], with time lost from work ranging from 2.7 to 30 days [11].

# Treatment advances over the past decade

During the past 10 years or so, advances in the treatment of RA have underscored the role of methotrexate as a major cornerstone of therapy. However, many randomized controlled trials have demonstrated that methotrexate in combination with another DMARD is more effective than methotrexate monotherapy for many patient populations [3–5,12].

In a 2002 study, Kremer and colleagues [12] tested the hypothesis that adding leflunomide to the regimen of patients taking methotrexate alone would strengthen the clinical response. The team assigned 263 patients with RA to leflunomide plus methotrexate or methotrexate alone. At 24 weeks, 46.2% (60 of 130) of patients receiving the leflunomide–methotrexate combination had achieved an ACR20 clinical response, as compared with 19.5% of the patients who had been maintained on a methotrexate–placebo regimen (*P* < 0.001). In addition, they reported that 26.2% of the leflunomide patients achieved an ACR50 response, as compared with 6.0% of the patients in the methotrexate–placebo arm (*P* < 0.001). This study was one of the first to show that the combination of these two DMARDs produced statistically significant and clinically meaningful improvement in patients with active RA.

In a 2-year, randomized, double-blind, placebo-controlled trial, O'Dell and colleagues [13] compared the effectiveness of methotrexate in combination with either hydroxychloroquine or sulfasalazine, as well as a combination of all three drugs. After 2 years, 55% of the 58 patients in the triple therapy arm achieved an ACR50 response; 40% of patients on hydroxychloroquine and methotrexate achieved an ACR50 response; and 29% of those on sulfasalazine and methotrexate did so. The difference between triple therapy and double therapy with sulfasalazine reached statistical significance (*P* = 0.005).

# Biologic response modifiers

Efforts to develop safer and more effective treatments for RA, based on an improved understanding of the role of inflammatory mediators, have been realized through the development of the biologic response modifiers. Some biologics have been approved for use in RA by the US Food and Drug Administration and the European Medicines Evaluation Agency, including etanercept (a soluble tumor necrosis factor [TNF]-α type II receptor–IgG~1~ fusion protein administered subcutaneously), infliximab (a chimeric [human and mouse] monoclonal antibody against TNF-α), and adalimumab (a human anti-TNF monoclonal antibody). These therapies have shown the ability to dramatically change disease outcomes in some RA patients.

Researchers have observed that interleukin (IL)-1, IL-6, and TNF-α are important mediators that initiate and maintain inflammation in RA, resulting in cellular infiltration and damage of the synovium and in the destruction of cartilage and bone [14]. TNF-α, a potent cytokine that exerts diverse stimulatory effects, is produced mainly by monocytes and macrophages, but also by B cells, T cells, and fibroblasts. Newly synthesized TNF-α is expressed on the cell membrane and subsequently released through the cleavage of its membrane-anchoring domain by a serine metalloproteinase.
Thus, inhibition of TNF-α secretion may represent a therapeutic target.

RA is believed to be initiated by CD4^+^ T cells, which amplify the immune response by stimulating other mononuclear cells, synovial fibroblasts, chondrocytes, and osteoclasts. Activated CD4^+^ T cells contribute to the stimulation of osteoclastogenesis and the activation of the metalloproteinases responsible for the degradation of connective tissue, resulting in joint damage.

Clearly, there are many possible therapeutic targets, but inhibition of cytokines appears to offer an especially efficient approach to suppressing inflammation and preventing joint damage [14]. There are four biological therapies currently approved for RA: three TNF-α inhibitors (infliximab, etanercept, and adalimumab) and one IL-1 inhibitor (anakinra).

Abatacept, a cytotoxic T lymphocyte-associated antigen 4–IgG~1~ (CTLA4–Ig) fusion protein known as a co-stimulation blocker, is administered as a 30 min infusion. Recent clinical trial data show that the combination of abatacept and methotrexate improves the signs and symptoms, physical functioning, and quality of life of patients with active RA [15].

## Etanercept

In the Trial of Etanercept and Methotrexate with Radiographic Patient Outcomes (TEMPO), Klareskog and coworkers [16] enrolled 685 patients with active RA in a double-blind, randomized trial to determine the safety and efficacy of treatment with etanercept or methotrexate alone or in combination for up to 52 weeks. Patients were randomly assigned to treatment with etanercept 25 mg subcutaneously twice a week, oral methotrexate monotherapy up to 20 mg every week, or a combination of the two agents.

The researchers found that the proportions of patients achieving ACR50 and ACR70 responses were consistently higher in the combination group than in either the etanercept or the methotrexate arm throughout the study (Fig. 1). At week 52, 69% of the patients in the combination protocol achieved an ACR50 response, as compared with 43% and 48% in the methotrexate and etanercept monotherapy groups, respectively (*P* < 0.0001). Moreover, after 1 year, 43% of the patients in the combination group had an ACR70 response, as compared with 19% and 24% of the methotrexate and etanercept monotherapy patients, respectively (*P* < 0.0001). Additionally, more than one-third of patients treated with the methotrexate–etanercept combination protocol were in remission at 52 weeks.

The number of patients reporting infection or adverse events (AEs) was similar in all groups. Overall, Klareskog and colleagues reported that the combination of etanercept and methotrexate was significantly better at reducing disease activity, improving functional disability, and retarding radiographic progression than methotrexate or etanercept alone.

A comparison of therapy with methotrexate and therapy with etanercept was also conducted by Bathon and colleagues [17]. They evaluated 632 patients with early RA who were given either twice-weekly subcutaneous etanercept (10 mg or 25 mg) or weekly oral methotrexate (mean 19 mg/week) for 12 months.

Compared with patients who received methotrexate, patients taking the 25 mg dose of etanercept improved more rapidly, with significantly more patients achieving ACR20, ACR50, and ACR70 responses during the first 6 months (*P* < 0.05).
The mean increase in the erosion score during the first 6 months was 0.30 in the group assigned to receive 25 mg etanercept and 0.68 in the methotrexate group (*P* = 0.001). The respective increases during the first 12 months were 0.47 and 1.03 (*P* = 0.002).

Among patients who received the 25 mg dose of etanercept, 72% had no increase in the erosion score, as compared with 60% of patients in the methotrexate group (*P* = 0.007). This group of patients also had fewer AEs (*P* = 0.02) and fewer infections (*P* = 0.006) than the group treated with methotrexate. Compared with oral methotrexate, subcutaneous etanercept acted more rapidly to decrease symptoms and slow joint damage in patients with early active RA.

Bathon and colleagues observed that the patients in the study were at risk for rapidly progressive joint damage. Without treatment, their disease was predicted to progress at an estimated rate of four to five points per year on the Sharp erosion subscale and four points per year on the Sharp joint-space narrowing subscale. The observed rates of progression of joint-space narrowing were low: both etanercept and methotrexate prevented joint-space narrowing. The overall rates of erosion were also low, equivalent to the occurrence of one new erosion, or the erosion of 20% of one joint, every year in the methotrexate group and every 2 years in the group assigned to receive 25 mg etanercept. The effects of this dose of etanercept were evident sooner than the effects of methotrexate, but the rates of change were similar in the two groups during the latter half of the study. Over a 1-year period, treatment with etanercept halted erosions in 72% of patients, whereas treatment with methotrexate halted erosions in 60% of patients [17].

## Adalimumab

In a pivotal study [18], the combination of adalimumab and methotrexate, particularly at the higher adalimumab dose of 40 mg every other week, yielded statistically significant improvement compared with methotrexate plus placebo. In that multicenter, 52-week, double-blind, placebo-controlled study, 619 patients with active RA who had an inadequate response to methotrexate alone were randomly assigned to receive adalimumab 40 mg subcutaneously every other week (*n* = 207), adalimumab 20 mg subcutaneously every week (*n* = 212), or placebo (*n* = 200), each with concomitant methotrexate. The primary efficacy end-points were radiographic progression at week 52 (total Sharp score by a modified method); clinical response at week 24, defined as improvement meeting at least the ACR20 criteria; and improvement in physical function at week 52, based on the disability index of the Health Assessment Questionnaire (HAQ). At week 52 there was statistically significantly less radiographic progression (Fig. 2), as measured by the change in total Sharp score, in the patients receiving adalimumab either 40 mg every other week (change [mean ± standard deviation] 0.1 ± 4.8) or 20 mg weekly (0.8 ± 4.9) than in the placebo group (2.7 ± 6.8; *P* < 0.001).

At week 52, ACR50 responses were achieved by 42% and 38% of patients taking adalimumab 40 mg every other week and 20 mg weekly, respectively, but by only 10% of patients taking placebo (*P* < 0.001).
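Response-rate contrasts such as these are typically assessed with a two-proportion test. The short sketch below re-derives the order of magnitude of such a *P* value from the arm sizes and response rates quoted above; it is illustrative only, as the published analyses used their own prespecified methods.

```python
from math import sqrt, erf

def two_proportion_z(successes_a, n_a, successes_b, n_b):
    """Two-sided two-proportion z-test with pooled standard error."""
    p_a, p_b = successes_a / n_a, successes_b / n_b
    pooled = (successes_a + successes_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    p_two_sided = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_two_sided

# ACR50 at week 52: ~42% of 207 on adalimumab 40 mg every other week
# versus ~10% of 200 on placebo (counts reconstructed from the percentages).
z, p = two_proportion_z(round(0.42 * 207), 207, round(0.10 * 200), 200)
print(f"z = {z:.1f}, two-sided P = {p:.2g}")  # z ~ 7.3, P far below 0.001
```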
In terms of physical function at week 52, patients on combination therapy with adalimumab and methotrexate experienced statistically significant improvement (mean change in HAQ score -0.59 and -0.61, respectively, versus -0.25; *P* < 0.001).

Adalimumab was generally well tolerated. Discontinuations occurred in 22% of adalimumab-treated patients and in 30% of placebo-treated patients. The rates of both serious and nonserious AEs were similar in the adalimumab and placebo groups. The proportion of patients reporting serious infections was higher among patients receiving adalimumab (3.8%) than among those receiving placebo (0.5%; *P* < 0.02) and was highest in the patients receiving 40 mg every other week.

The benefits of early treatment of RA with adalimumab, either alone or in combination with methotrexate, were supported more recently by the results of the PREMIER study [19]. In an analysis of almost 800 patients with a disease duration of less than 3 years, adalimumab 40 mg every other week plus methotrexate (escalated rapidly to 20 mg/week) produced statistically significant improvement in ACR50 clinical response and amelioration of disease progression compared with either adalimumab or methotrexate alone. Of patients in the combination therapy group, 61% and 46% achieved ACR50 and ACR70 responses, respectively, as compared with 46% and 28% in the methotrexate group (*P* < 0.001) – a statistically significant difference that was sustained for up to 2 years. A comparable advantage of combination therapy was observed relative to adalimumab alone. Moreover, at 2 years clinical remission (as indicated by a 28-joint Disease Activity Score [DAS28] < 2.6) was achieved by 50% of patients receiving combination therapy.

## Infliximab

Much of the recent knowledge regarding the safety and efficacy of infliximab in combination with methotrexate emerged from analysis of data from the Anti-Tumor Necrosis Factor Trial in Rheumatoid Arthritis with Concomitant Therapy (ATTRACT) [20,21], the pivotal study that led to approval of the TNF-α inhibitor infliximab in the USA and worldwide. ATTRACT demonstrated that the combination of methotrexate and infliximab, in particular with the highest doses given every 4–8 weeks, resulted in better clinical responses than methotrexate plus placebo.

In that study the investigators established that infliximab not only provided significant improvements in physical function and quality of life but also slowed or halted progressive joint damage and the signs and symptoms of RA in patients who previously had an incomplete response to methotrexate alone.

The study included 428 patients who were randomly assigned to receive methotrexate plus placebo or infliximab at a dose of 3 mg/kg or 10 mg/kg plus methotrexate for 54 weeks, with an additional year of follow-up. The protocol was later amended to allow continued treatment during the second year.

Of 259 patients who entered the second year of treatment, 216 continued to receive infliximab plus methotrexate through 102 weeks. Ninety-four of these 259 patients experienced a gap in therapy of greater than 8 weeks before continuing therapy.
Infusions were administered at baseline, week 2, and week 6, followed by treatment every 4 weeks or every 8 weeks – alternating with placebo infusions at the interim 4-week visits – at a dose of 3 mg/kg or 10 mg/kg for a total of 102 weeks, which included the gap in therapy.

The results of the study showed that the infliximab plus methotrexate regimens led to significantly greater improvement in HAQ scores (*P* = 0.006) and in the Short Form 36-Item Health Survey (SF-36) physical component summary scores (*P* = 0.011) than the methotrexate monotherapy regimen. There was also stability in the SF-36 mental component summary score among patients who received the infliximab plus methotrexate regimens. The median changes from baseline to week 102 in the total radiographic score were 4.25 for patients who received the methotrexate-only regimen and 0.50 for patients who received the infliximab plus methotrexate regimens. The proportion of patients achieving an ACR50 response at week 102 ranged from 20% to 21% in the infliximab plus methotrexate groups, as compared with 6% in the methotrexate-only group. These data emphasize that the combination of infliximab plus methotrexate conferred significant, clinically relevant improvement in physical function and quality of life, accompanied by inhibition of progressive joint damage and sustained improvement in the signs and symptoms of RA, among patients who previously had an incomplete response to methotrexate alone.

The ASPIRE (Active controlled Study of Patients receiving Infliximab for RA of Early onset) trial [5] randomly assigned patients with early RA to either methotrexate alone or methotrexate plus 3 mg/kg or 6 mg/kg infliximab at weeks 0, 2, and 6, and every 8 weeks thereafter through week 46. It revealed improvement in ACR scores in both combination treatment groups compared with the methotrexate arm (38.9% and 46.7% versus 26.4%, respectively; *P* < 0.001 for both comparisons), significantly less radiographic progression at 6 and 12 months, and improvement in physical function.

## Anakinra

Anakinra, the first IL-1 receptor antagonist, either alone or in combination with methotrexate, has emerged as an effective medication for patients with moderate-to-severe RA. In a study of 419 patients with active RA of duration greater than 6 months but less than 12 years [22], patients were randomly assigned to placebo or one of five doses of anakinra (0.04, 0.1, 0.4, 1.0 or 2.0 mg/kg) plus methotrexate. Those assigned to the five anakinra regimens exhibited statistically significant (*P* = 0.001), dose-dependent efficacy in ACR20 responses as compared with the placebo plus methotrexate group after 12 weeks. The ACR20 response rates in the anakinra 1.0 mg/kg (46%; *P* = 0.001) and 2.0 mg/kg (38%; *P* = 0.007) dose groups were significantly better than that in the placebo group (19%). ACR20 responses at 24 weeks were consistent with those at 12 weeks. Other researchers have reported similar 6-month ACR20 (and ACR50) responses, as well as improvements in individual components (i.e. HAQ, pain, C-reactive protein level, and erythrocyte sedimentation rate) [23].

## Abatacept

Abatacept (CTLA4–Ig), a cytotoxic T-lymphocyte-associated antigen 4–IgG~1~ fusion protein, is the first in a new class of drugs known as co-stimulation blockers that are being evaluated for the treatment of RA.
Abatacept selectively modulates the co-stimulatory signal required for full T cell activation. The agent binds to CD80 and CD86 on antigen-presenting cells, blocking the engagement of CD28 on T cells and thus preventing T cell activation. It therefore acts earlier in the inflammatory cascade than other biologic therapies, directly inhibiting the activation of T cells and the secondary activation of macrophages and B cells.

The efficacy of this novel therapy was tested by Kremer and colleagues [24], who randomly assigned patients with active RA despite methotrexate therapy to receive CTLA4–Ig 2 mg/kg (105 patients), CTLA4–Ig 10 mg/kg (115 patients), or placebo (119 patients) for 6 months. All patients also received methotrexate therapy during the study. Patients treated with 10 mg/kg CTLA4–Ig were more likely to have an ACR20 response than patients who received placebo (60% versus 35%; *P* < 0.001). Significantly higher rates of ACR50 and ACR70 responses were seen in both CTLA4–Ig groups than in the placebo group [24]. The group given 10 mg/kg CTLA4–Ig had clinically meaningful and statistically significant improvements in all eight subscales of the SF-36. CTLA4–Ig was well tolerated, with an overall safety profile similar to that of methotrexate [15].

Recently released phase III results from the Abatacept in Inadequate responders to Methotrexate (AIM) trial [24] found that 48.3% of patients achieved an ACR50 response after 1 year of therapy, compared with 18.2% of patients who continued on methotrexate alone (*P* < 0.001). At 6 months, 19.8% of patients on abatacept had achieved an ACR70 response, compared with 6.5% of patients on methotrexate. After 1 year, 28.8% of patients had reached an ACR70 response, compared with 6.1% of patients on methotrexate. Both differences were statistically significant (*P* < 0.001).

# Tumor necrosis factor-α: potential safety issues

Although researchers, scientists, and clinicians are enthusiastic in their support of early intervention with TNF-α inhibitors for patients with RA, safety issues remain an important consideration. Although infusion reactions and other AEs are infrequent, they may be very serious in some patients, particularly when complications associated with opportunistic infections occur. Patients must be followed very closely, and clinicians should work with primary care physicians to ensure that these issues are addressed first and foremost.

Rare and serious AEs include infections (bacterial, fungal, or tubercular), demyelination, infusion-related events, hematologic/lymphoproliferative disorders, drug-induced systemic lupus erythematosus/vasculitis, hepatotoxicity (infliximab), and potential congestive heart failure. The development of neutralizing antibodies can also be an issue in some patients and needs further exploration. Further studies are needed to determine whether some of these reported side effects are truly related to the TNF-α inhibitor or are a consequence of the disease itself and/or comorbid conditions and concomitant medications.

# Challenges

Typically, clinicians have reserved biologics for those patients with severe disease who have failed other therapies. However, the emerging body of evidence suggests that practitioners should be moving toward treating earlier disease with these biologic agents in an effort to prevent structural damage.
In addition, because of the costs associated with biologic therapy – often more than US$1000 per month [25] – and the potential risk for immune suppression, one of the key challenges that clinicians must address when considering TNF-α inhibitor therapy for active RA is determining which patients should receive which agents, and in what combination. Importantly, the medical community must endeavor to identify those patients who will respond to these agents over the long term and weigh the risks against the benefits. Is it possible to pick out which patients are most likely to have that long-term response? There are numerous challenges and opportunities, as well as many unmet needs, for patients with RA.

# Conclusion

RA is a heterogeneous disease; there is substantial evidence that some patients respond adequately to a single DMARD, whereas others require a combination regimen. Reliable predictors of response are needed to guide therapeutic decision making, along with a firm definition of therapeutic goals. Arresting ongoing, pre-existing damage and intervening earlier to prevent damage are of equal importance. In addition, over the long term, as research and treatment become more aggressive, efficacy, toxicity, and costs must be balanced within the therapeutic equation to enhance the quality of life of patients with RA (Table 1).

Therapy for rheumatoid arthritis: unmet needs

| Problem | Details |
|----|----|
| Heterogeneity of the disease | Some patients respond to DMARD monotherapy |
| | Not every patient responds to anti-TNF (mono or combination) therapy |
| | 'Remission' rates remain low |
| | What is the pathologic/biologic process in each individual patient? |
| Toxicities | Infections, immunogenicity, congestive heart failure, drug-induced vasculitis, etc. |
| Costs | Limit availability to many patients |

DMARD, disease-modifying antirheumatic drug; TNF, tumor necrosis factor.

The next generation of therapies for RA will provide considerable opportunities. These include next-generation TNF-α inhibitors, anticytokines (anti-IL-6 receptor, anti-IL-15, and anti-IL-1), angiogenesis inhibitors, antiadhesion molecules, anti-T-cell co-stimulatory blockers (e.g. abatacept), anti-B-cell therapies (i.e.
rituximab and belimumab), and many others.

# Abbreviations

ACR = American College of Rheumatology; AE = adverse event; CTLA4–Ig = cytotoxic T lymphocyte-associated antigen 4–IgG~1~; DMARD = disease-modifying antirheumatic drug; HAQ = Health Assessment Questionnaire; IL = interleukin; RA = rheumatoid arthritis; SF-36 = Short Form 36-Item Health Survey; TNF = tumor necrosis factor.

# Competing interests

LM has received research grant support, and consulting and speaking fees, from several companies in the past, including Abbott, Amgen, Centocor, Wyeth, Immunex, Bristol-Myers Squibb, Regeneron, Genentech, Merck, Pfizer, Boehringer Ingelheim, Roche and others.

abstract: A light-sensitive, externally powered microchip was surgically implanted subretinally near the macular region of volunteers blind from hereditary retinal dystrophy. The implant contains an array of 1500 active microphotodiodes ('chip'), each with its own amplifier and local stimulation electrode. At the implant's tip, another array of 16 wire-connected electrodes allows light-independent direct stimulation and testing of the neuron–electrode interface. Visual scenes are projected naturally through the eye's lens onto the chip under the transparent retina. The chip generates a corresponding pattern of 38 × 40 pixels, each releasing light-intensity-dependent electric stimulation pulses. Subsequently, three previously blind persons could locate bright objects on a dark table, two of whom could discern grating patterns. One of these patients was able to correctly describe and name objects like a fork or knife on a table, geometric patterns, different kinds of fruit and discern shades of grey with only 15 per cent contrast. Without a training period, the regained visual functions enabled him to localize and approach persons in a room freely and to read large letters as complete words after several years of blindness. These results demonstrate for the first time that subretinal micro-electrode arrays with 1500 photodiodes can create detailed meaningful visual perception in previously blind individuals.
author: Eberhart Zrenner; Karl Ulrich Bartz-Schmidt; Heval Benav; Dorothea Besch; Anna Bruckmann; Veit-Peter Gabel; Florian Gekeler; Udo Greppmaier; Alex Harscher; Steffen Kibbel; Johannes Koch; Akos Kusnyerik; Tobias Peters; Katarina Stingl; Helmut Sachs; Alfred Stett; Peter Szurman; Barbara Wilhelm; Robert Wilke (author for correspondence)
date: 2011-05-22
institute: 1Centre for Ophthalmology, University of Tübingen, Schleichstr. 12, 72076 Tübingen, Germany; 2Eye Clinic, University of Regensburg, Franz-Josef-Strauss-Allee 11, 93053 Regensburg, Germany; 3Retina Implant AG, Gerhard-Kindler-Str. 8, 72770 Reutlingen, Germany; 4Department of Ophthalmology, Semmelweis University, Tomo u. 25-29, 1083 Budapest, Hungary; 5Steinbeis Transfer Centre Eyetrial at the Centre for Ophthalmology, Schleichstr. 12-16, 72076 Tübingen, Germany; 6Klinikum Friedrichstadt, Friedrichstr.
41, 01067 Dresden, Germany; 7NMI Natural and Medical Sciences Institute at the University of Tübingen, Markwiesenstr. 55, 72770 Reutlingen, Germany
references:
title: Subretinal electronic chips allow blind patients to read letters and combine them to words

# Introduction

Retinitis pigmentosa (RP) and age-related macular degeneration are diseases that predominantly affect photoreceptors of the retina and cause progressive vision loss, leading eventually to blindness in over 15 million people worldwide [1]. Although blindness owing to photoreceptor degeneration presently remains incurable, inner retinal nerve cells may continue to function for many years despite neuronal remodelling [2]. While gene therapy and the application of neuroprotective factors may help maintain vision in the early stages of degeneration, the survival of the inner retina encouraged us [3] and others [4–11] to attempt a partial restoration of visual function using electric stimulation of the remaining retinal network.

Two fundamentally different approaches have been taken in this area: (i) implantation of electrode arrays which interface epiretinally with the retinal ganglion cells that form the retinal output pathway [6–7,11–13], and (ii) implantation of microchips under the transparent retina to substitute for the degenerated photoreceptors. The latter type of microchip senses light and generates stimulation signals simultaneously at many pixel locations, using microphotodiode arrays (MPDAs) [3,14]. While the first approach typically requires external image and data processing, because it bypasses retinal image analysis, the second seeks to replace the function of degenerated photoreceptors directly by translating the light of the image falling onto the retina, point by point, into small currents proportional to the light stimulus. Ours is the only approach in which the photodiode–amplifier–electrode set is contained within a single pixel of the MPDA, such that each electrode provides an electrical stimulus to the remaining neurons nearby, thereby reflecting the visual signal that would normally be received via the corresponding, degenerated photoreceptor.

On the basis of *in vitro* measurements [15] and animal studies [16], our consortium developed a subretinal electronic implant that carefully accounts for biocompatibility [17], biostability, surgical feasibility by means of a transchoroidal surgical technique [18], safe threshold stimulation and dynamic range of stimulation, and the limits of spatial resolution *in vitro* [19]. This report describes the results of a clinical pilot study, illustrating that subretinally implanted multi-electrode arrays restore sufficient visual function for object recognition and localization and for the performance of visual tasks essential in the daily lives of blind patients. The results of this pilot study provide strong evidence that the visual functions of patients blinded by a hereditary retinal dystrophy can, in principle, be restored to a degree sufficient for use in daily life.

# The subretinal implant

As shown in figure 1*a*, the tip of the implant consists of an MPDA with 1500 individual light-sensitive elements and a test field for direct stimulation (DS) with 4 × 4 electrodes for electrical, light-independent stimulation. Both are positioned on a thin polyimide foil (figure 1*b*, far left).
For details on the control unit that provides power and wireless control signals, see figure 2*a*,*d* and electronic supplementary material, chapter 1*c*.

## The microphotodiode array

Each of the 1500 MPDA elements acts independently of its neighbours; four magnified elements (72 × 72 µm each) are shown in figure 1*g*. Each element includes a light-sensitive photodiode (15 × 30 µm) that controls a differential amplifier (circuit shown as a sketch) whose output stage is coupled to a titanium nitride (TiN) electrode (50 × 50 µm), connected to the amplifier via the contact hole (for details see electronic supplementary material, chapter 1*b*). Essentially, an image is captured several times per second simultaneously by all photodiodes. Each element ('pixel') generates monophasic anodic voltage pulses at its electrode. Thus, pixelized repetitive stimulation is delivered simultaneously by all electrodes to adjacent groups of bipolar cells [15,19], the amount of current provided by each electrode depending on the brightness at each photodiode. Light levels ranging across approximately 2 log units are converted to charge pulses by each pixel with a sigmoidal relationship, and the sensitivity can be shifted manually by several log units (see electronic supplementary material, chapter 1, figure S1). The chip is estimated to cover a visual angle of approximately 11° by 11° (1° corresponds to approximately 288 µm on the retina). The distance between two MPDA electrodes corresponds to a visual angle of 15 min of arc.

## The 4 by 4 test field for direct stimulation (DS test field)

The DS test field consists of 4 × 4 quadruple TiN electrodes (100 × 100 µm^2^, 280 µm apart laterally and 396 µm diagonally) for light-independent electrical stimulation (see figure 1*c*). The DS test field was added for assessment of the electrode-interface characteristics and to study current injections and the efficacy of pulses with shapes and polarities other than those provided by the MPDA. Within its limited spatial range, simple patterns can be created with the DS test field as well (figure 1*d*,*e* and *f*).

The threshold voltage needed to elicit a percept was assessed in an up-and-down staircase procedure. Typical charge transfer of a single electrode at threshold was between 20 and 60 nC per pulse (for details, see electronic supplementary material, chapter 1*a*). The maximum charge density at the electrodes in the DS field was 600 µC cm^−2^. These values were well within commonly accepted safety limits and have been proven safe even for continuous retinal stimulation *ex vivo* [20].

Impedance values of single electrodes were typically 300 kΩ (at 1 kHz sinusoidal AC). Although regular impedance measurements in the patients were not conclusive, analysis of all available data showed that charge thresholds, but not voltage thresholds, decreased significantly during the first days after implantation. Thereafter, both charge and voltage thresholds showed a slight tendency towards increasing values over the remaining implantation period.
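The figures quoted above are mutually consistent, and it can be useful to check them explicitly: the 15 arcmin electrode spacing follows from the 72 µm pixel pitch and the 288 µm-per-degree retinal conversion, and the 600 µC cm^−2^ maximum charge density follows from a 60 nC pulse delivered over a 100 × 100 µm DS electrode. A short, purely arithmetical sketch (variable names are ours):

```python
# Sanity checks on the implant geometry and stimulation figures quoted above.
UM_PER_DEGREE = 288.0          # retinal conversion: 1 degree ~ 288 um

pixel_pitch_um = 72.0          # MPDA pixel (and electrode) spacing
pitch_arcmin = pixel_pitch_um / UM_PER_DEGREE * 60.0
print(f"electrode spacing: {pitch_arcmin:.0f} arcmin")        # 15 arcmin

chip_extent_deg = 40 * pixel_pitch_um / UM_PER_DEGREE
print(f"MPDA extent (40 pixels): {chip_extent_deg:.0f} deg")  # ~10 deg,
# of the same order as the ~11 x 11 deg quoted for the whole chip

charge_nc = 60.0                           # upper end of threshold charge
electrode_area_cm2 = (100e-4) * (100e-4)   # 100 x 100 um expressed in cm^2
density_uc_per_cm2 = (charge_nc * 1e-3) / electrode_area_cm2
print(f"charge density: {density_uc_per_cm2:.0f} uC/cm^2")    # 600 uC/cm^2
```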
# Patients

The patients (two males and one female, aged 40, 44 and 38 years, respectively) were blind owing to hereditary retinal degeneration (patients 1 and 2: RP; patient 3: choroideraemia) but had had good central vision previously. Disease onset was reported by patient 2 at age 16 and by patients 1 and 3 at age 6. They had lost their reading ability at least 5 years before implantation. Bright light stimulation mediated some limited light perception, without any recognition of shapes, in all three patients. They reported neither general diseases nor regular medication (for details see electronic supplementary material, chapter 2*c*).

# Methods

## Surgical procedure

The implant, protected by a long steel tube, was advanced through a retroauricular incision to the lateral orbital rim and guided inside the orbit to the surface of the eyeball ([21]; figure 2*a*,*b*,*e*). The silicone cable (figure 2*a*) was implanted subperiostally beneath the temporal muscle. The polyimide foil was then protected by a silicone tube and guided from the lateral orbital rim, where it was fixed, to the equator of the eye. Subsequently, pars plana vitrectomy was performed. A localized retinal detachment was created by saline injection in the upper temporal quadrant above the planned scleral and choroidal incision area. After preparation of a scleral flap, the implant was advanced *ab externo* transchoroidally along a guiding foil into the subretinal space until it reached the preoperatively defined position ([22]; see electronic supplementary material, chapter 2*d*). Although placing a chip directly under the fovea turned out not to pose a surgical problem, we abstained from placing the chip under the macula in the initial patients and requested positions closer and closer to the foveola as the surgical learning curve improved. Silicone oil was then injected into the vitreous cavity to support retinal reattachment. No serious adverse events were noted during the course of the study. For post-operative observations and considerations on surgical safety, see electronic supplementary material, chapter 2*f*.

## Psychophysical tests

Beginning 7 to 9 days after surgery, tests with solely electrical stimuli were performed with the DS test field. Thereafter, light-evoked visual functions mediated by the MPDA were assessed using four psychophysical tests concerning light detection, basic temporal resolution, object localization and movement detection using the 'basic light and motion' test (BaLM [23]), described in electronic supplementary material, chapter 2*g*.

If these were passed successfully, three further steps followed: tests for recognition of stripe patterns (BAGA [24]); localization and recognition of objects common to daily life; and visual acuity assessment (Landolt C rings presented in an up-and-down staircase procedure to estimate visual acuity in terms of maximum likelihood by means of the FrACT test [25]). If these tasks were completed successfully, more challenging tasks were set (figures 3*a* and 4*a*). Except for some optional tasks (indicated), well-established two- or four-alternative forced-choice methods (2AFC and 4AFC, respectively) were employed in order to test for statistical significance of a patient's performance.
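In a forced-choice design, significance can be read directly from the binomial distribution: under the null hypothesis the patient guesses with probability 1/4 (4AFC) or 1/2 (2AFC), and the P value is the probability of scoring at least the observed number correct by chance. A minimal sketch is given below, using run lengths of the kind reported in the results that follow; it is illustrative only, and the study's own statistics are in the electronic supplementary material.

```python
from math import comb

def afc_p_value(n_correct, n_trials, n_alternatives):
    """One-sided binomial P value for a forced-choice run: probability of
    scoring >= n_correct if the subject merely guesses on every trial."""
    p_chance = 1.0 / n_alternatives
    return sum(comb(n_trials, k) * p_chance**k * (1 - p_chance)**(n_trials - k)
               for k in range(n_correct, n_trials + 1))

print(afc_p_value(15, 16, 2))   # 15/16 correct in 2AFC -> ~2.6e-4
print(afc_p_value(16, 16, 4))   # 16/16 correct in 4AFC -> ~2.3e-10
```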
All tests were performed separately in two conditions: with 'Power ON' and with 'Power OFF' ('baseline performance').\n\nMaximum screen luminance was approximately 3200 cd m^\u22122^ (for white light); neutral density filters (Schott NG filters, 0.15\u20134 log U) served for attenuation (for details see electronic supplementary material, chapter 2).\n\n# Results\n\n## Electrical stimulation for pre-testing and learning via DS test field\n\nPulses of varying duration, polarity and shape were applied via the DS test field (16 electrodes, as shown in figure\u00a01*c*\u2013*f*) in a pre-testing routine. This procedure determined voltage thresholds for perception, accustomed the patients to electrically evoked visual impressions, and tested retinal excitability and spatial resolution. An overview of the results, including their statistical evaluation, is given in a table presented in electronic supplementary material, chapter 3*a*.\n\nAll patients detected single-electrode single-pulse stimulation (0.5\u20136 ms pulses, typically 20\u201360 nC per electrode). Patients 1 and 2 consistently reported these stimuli as whitish round dot-like percepts; patient 3 reported percepts as elongated, short whitish\/yellowish lines. Upon activation of four electrodes with a single pulse, all three patients correctly distinguished vertical lines from horizontal lines within seconds and spontaneously reported them as straight. Patients 1 and 2 distinguished multiple single dots upon simultaneous activation of several electrodes in a diagonal row and reported dark areas separating the dots. Patient 3 saw diagonal lines formed by four electrodes, but not the dark areas between the dots.\n\nSimple patterns were also presented with the DS array by pulsing electrodes sequentially (figure\u00a01*d*); each electrode was switched on for 3\u20136 ms at intervals of 208 ms. Patients 1 and 2 correctly reproduced these patterns after the first single presentation; patient 3 failed to do so. In a four-alternative forced-choice (4AFC) paradigm, patients 1 and 2 reliably differentiated four different positions of the opening of the letter 'U' (73% and 88% correct responses, respectively; see electronic supplementary material, chapter 5, movie 1). Furthermore, patient 1 correctly distinguished 'U' from 'I' and even squares from triangles when only a single activated electrode differed in position (16\/16 correct, figure\u00a01*e*,*f*). Patient 2 correctly distinguished four letters individually presented at random in 4AFC mode (e.g. C, I, L, O (36\/36); I, L, V, T (10\/12)) in repeated tests on different days (see electronic supplementary material, chapter 5, movie 2). He also distinguished sequential stimulation in clockwise versus anticlockwise direction (15 of 16 tests correct).\n\n## Light pattern perception with the microphotodiode array\n\nThe light-sensitive MPDA chip was operated at a sampling rate of 1 to 20 Hz with a pulse duration (PD) of 1\u20134 ms. The patient's head was comfortably positioned on a chin-rest (set up 1, figure\u00a03*a*), and refraction was corrected for the viewing distance of 60 cm. Chip settings were adjusted for a working range of 8\u2013800 cd m^\u22122^ white light or 1.2\u20134.3 cd m^\u22122^ red light (for details see electronic supplementary material, chapter 2*g*). All standardized testing was performed using a functional baseline control, i.e. 
performance was also tested with the chip switched off at random intervals unknown to patient and observer, as summarized in electronic supplementary material, table ST1.\n\n### Light perception and localization\n\nAll three patients were able to perceive light mediated by the chip. This was verified in task 1, using the BaLM test in set up 1 (figure\u00a03*a*):\n\n1. *BaLM flash test*: in task 1, the whole screen was illuminated briefly with one or two flashes (200 ms duration with 600 ms pause) after an auditory signal. All three patients passed this test for light detection (81.3%, 100% and 100% correct, respectively) and scored well above chance rate (*n* = 16; ON versus OFF: *p* = 0.00005, *t*-test).\n\n2. *BaLM localization test*: when testing the ability to localize large bright areas in the visual field (a small triangle in relation to a central fixation point in the BaLM test), only patient 2 (87.5%; *n* = 16) passed the test successfully.\n\n3. *BaLM movement test*: perception of movement was tested with a random dot pattern at an angular speed of 1.11\u00b0 s^\u22121^ moving in one of four directions (dot diameter 1.4 cm, average distance 1.5 cm (s.d. 0.26)); this was passed only by patient 2 (8 of 12, 4AFC, figure\u00a03*e*).\n\nIn task 2, spatial resolution was tested using grid patterns (figure\u00a03*b*). Bright lines of 0.6 cm width separated by 1.8 cm wide dark lines, as well as bright lines of 0.8 cm width separated by 2.4 cm wide dark lines, were presented at 63 cm distance. The orientation of these patterns was correctly recognized by patient 2. In terms of spatial frequency this corresponds to 0.46 cycles deg^\u22121^ (five of eight correct, 4AFC, *p* = 0.02) and 0.34 cycles deg^\u22121^ (four of four correct, 4AFC, *p* = 0.004), respectively (see electronic supplementary material, chapter 3, table ST1 and chapter 5, movie 3). Patient 3 succeeded at 0.22 cycles deg^\u22121^ (12 of 20, 4AFC, white light). Patient 1 had difficulty seeing the stripes, probably owing to her nystagmus, but distinguished horizontal from vertical lines projected onto her chip in a special set up using a fundus camera with comparable spatial arrangements and luminance.\n\nAs the spectral sensitivity of the chip is practically flat far into the infrared region, patients on several occasions reported high sensitivity to infrared light.\n\n### Landolt C ring\n\nIn task 3, single letters and Landolt C rings were presented on the screen in various sizes (figure\u00a03*c*). Patients 1 and 3 discerned neither the Landolt C rings nor the letters and were accordingly not presented with tests of higher difficulty in set up 1. Patient 2, the only one with the chip placed under the macula, was quite successful, and his visual performance is therefore described in greater detail below.\n\nOptimizing his implant settings resulted in an image recording time of 0.5 ms with a 7.5 Hz repetition frequency at a target luminance of 3.4 cd m^\u22122^ (red light), viewed with a correction of +7.0 dpt sph., \u22121.50 dpt cyl. at 121\u00b0. Landolt C rings (figure\u00a03*c*) were presented in an up-and-down staircase procedure (FrACT \\[25\\]; for details see electronic supplementary material, chapter 2). A maximum of 60 s was allowed for the patient to find each C ring on the screen in his small visual field; failure to respond in time counted as a mistake. 
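For orientation, the acuity figure reported next can be reproduced from the ring geometry alone: the minimum angle of resolution (MAR) is the visual angle subtended by the gap of the Landolt C, and logMAR is its base-10 logarithm. A minimal sketch (Python), using the gap size and viewing distance given below:

```python
import math

def logmar_from_gap(gap_mm: float, distance_cm: float) -> float:
    """logMAR from the Landolt C gap size and the viewing distance.
    MAR is the visual angle of the gap in minutes of arc."""
    gap_angle_rad = math.atan2(gap_mm / 10.0, distance_cm)
    mar_arcmin = math.degrees(gap_angle_rad) * 60.0
    return math.log10(mar_arcmin)

# 9 mm gap viewed at about 60 cm; prints logMAR = 1.71, which agrees
# with the reported 1.69 up to rounding of the viewing distance
print(f"logMAR = {logmar_from_gap(9.0, 60.0):.2f}")
```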
Maximum visual acuity was up to 20\/1000 (log MAR = 1.69)\u2014corresponding to a Landolt ring with 4.5 cm outer diameter and a gap of 9 mm, viewed at about 60 cm distance. In three other trials on different days he achieved log MAR values of 1.75, 1.94 and 1.86, respectively (see electronic supplementary material, chapter 5, movie 4).\n\nPatient 2 also reliably differentiated the letters L, I, T, Z on a screen (22 of 24, 4AFC, figure\u00a03*d*; 8.5 cm high, 1.7 cm line width, corresponding to a height of approximately 9\u00b0 of visual angle). He reported that, once he had found a letter, it appeared clearly in its natural form and was visible as a complete entity\u2014even during its first presentation.\n\n### Recognition of objects on a table\n\nIn the fourth task, the ability to perceive more naturalistic scenes was tested in a standardized set up at a dining table, assessed by an independent, professional mobility trainer (figure\u00a04*a*; for details see electronic supplementary material, chapter 2*g*). Patient 1 reliably localized a saucer, a square and a cup on the table; patient 3 correctly localized and differentiated a large plate from a saucer.\n\nPatient 2 localized, and moreover recognized and correctly differentiated, square, triangle, circle, rectangle and diamond shapes, which differed from each other only in shape but not in area (figure\u00a04*b*, five of five correct; see electronic supplementary material, chapter 5, movie 5). Furthermore, he could correctly localize and describe a spoon, a knife and a cup (see electronic supplementary material, chapter 5, movie 6), as well as a banana and an apple (see electronic supplementary material, chapter 5, movie 7). Unlike the other dining table set ups, this set up was entirely unknown to the patient, and he was forced to make sense of an unfamiliar scene.\n\n### Optional tasks with letters, clock, grey papers of varying shades\n\nThe fifth group of tasks was performed only in patients who had successfully passed the previous tasks. Patient 2 was able to distinguish between 16 different letters cut from white paper (5\u20138 cm high, font: Tahoma), placed on the black table (see figure\u00a04*c*; 22\/36 correct). The patient read words (LOVE, MOUSE, SUOMI, etc.) correctly (five of five), repeatedly and on several days. He noted spelling mistakes in his name MIIKKA (mentioning that one 'I' and one 'K' were missing) when he first saw this word (see electronic supplementary material, chapter 5, movie 8), i.e. he perceived both individual letters and continuous, meaningful words\u2014a prerequisite for reading.\n\nAs an additional task, a clock face was presented with two hands (6 \u00d7 1.5 cm for the hours, 12 \u00d7 1.5 cm for the minutes, figure\u00a04*d*). Patient 2 was asked to indicate clock times set to full quarter hours. The patient correctly recognized 11 of 12 possible settings. 
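The grey-card task reported next can be given a quantitative reading. The luminance range and the 15 per cent figure are from the text; the contrast metric is our assumption, since the paper does not state its definition. If the nine cards span 3 to 35 cd m−2 in equal steps and contrast is taken as Michelson contrast between neighbouring cards, the mid-range pairs come out near 15 per cent. A sketch (Python):

```python
# Nine cards, linearly scaled from 3 to 35 cd/m^2 (values from the text)
luminances = [3 + 4 * i for i in range(9)]          # 3, 7, ..., 35

# Michelson contrast between neighbouring cards (assumed definition)
contrasts = [(b - a) / (b + a)
             for a, b in zip(luminances, luminances[1:])]
print([f"{c:.0%}" for c in contrasts])
# ['40%', '22%', '15%', '12%', '10%', '8%', '7%', '6%']
```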
Patient 2 also distinguished seven out of nine contrast differences of 15 per cent among nine neighbouring cards (10 \u00d7 10 cm, presented in 2AFC mode, *p* = 0.07) with linearly scaled shades of grey varying from 3 to 35 cd m^\u22122^ (figure\u00a04*e*).\n\nAll patients showed distinct learning effects which, while they could not be quantified in this first pilot study, are reported as 'spontaneous observations' in electronic supplementary material, chapter 3*d*.\n\n### Pupillary reflexes\n\nPupillary constriction in response to light, as an objective measure of MPDA efficacy, was assessed by infrared pupillography (for methods and recordings, see electronic supplementary material, chapter 2*i*). The amplitude of pupillary constriction was clearly more pronounced when the chip was activated (see electronic supplementary material, chapter 2, figure S2). In all three patients the chip-on condition improved pupil reaction and was always accompanied by subjective light perception. An analysis of variance of the constriction amplitudes of all three patients was calculated with condition (chip on or chip off) and patient as factors (sum of squares 0.184, *F* = 6.48, *p* = 0.022).\n\n# Discussion\n\n## The general approaches to retinal prostheses\n\nA number of research groups have taken up the challenge of developing a retinal prosthesis. Rizzo *et al.* \\[4\\] and Weiland *et al.* \\[26\\] have reported on first trial stimulations of the retina with single epiretinal electrodes. Chow *et al*. \\[27\\] were the first to subretinally implant well-tolerated multiphotodiode arrays, intending to use the energy created by incident light for neuronal stimulation directly, without amplification. However, owing to insufficient energy from the small light sensors, these failed to restore vision. Second Sight (Medical Products Inc., Sylmar, CA) has a multicentre study running with the epiretinal ARGUS II device with 60 electrodes; some patients were reported to recognize large single letters by scanning them with rapid head movements \\[28\\]. Clinical studies with epiretinal electrode arrays were also performed by Koch *et al.* \\[29\\] and Richard *et al.* \\[30\\]. Other groups developed approaches with electrodes placed between sclera and choroid \\[8,10\\]. These groups argue that this 'suprachoroidal' approach may have the benefit of being less invasive, therefore bearing fewer risks in terms of surgical procedures. At this time, as only limited peer-reviewed information is available from ongoing clinical trials using subretinal, epiretinal and suprachoroidal approaches, it is too early to compare the final long-term outcome of the various designs. All have inherent theoretical advantages and disadvantages; basic differences and their consequences are pointed out in the following.\n\n*Epiretinal implants* seek to interact directly with the retinal output neurons; the image processing of the complex inner retinal network must be performed externally. The processing of camera-captured images can be more easily adjusted to account for individual electrode thresholds. However, the number of simultaneously addressed electrodes is limited by present technology. Several groups have developed externally powered, fully implantable epiretinal systems with arrays of up to 60 microelectrodes \\[7,28\u201332\\]. Although they have reported promising results, even for long-term use, the low number of electrodes limits visual performance to object localization and shape perception \\[33\\]. Yanai *et al*. 
\\[6\\] reported no difference in patient performance when a single pixel or multiple pixels were activated using a prototype of the ARGUS I implant. In *epiretinal* implants that use head-mounted cameras, eye movements are not correlated to the visually perceived scene. Such a mismatch of visual and proprioceptive information must render object localization difficult \\[34\\].\n\n*Subretinal approaches*, in contrast, replace in principle only the lost function of diseased photoreceptors; thus, the remaining network of the inner retina can be used for more natural processing of the image as it is forwarded, point-by-point, several times per second to inner retinal neurons. Although the surgical procedure may be more demanding, the number of pixels can be much higher, presently limited only by the size of an implant and the spatial spread of electrical stimulation. Fixation of the chip in the subretinal space is easier and, once positioned, the chip remains in place, tightly connected to the inner retina without the need for scleral tacks as used in epiretinal approaches. Moreover, our subretinal implant (Retina Implant AG, Reutlingen, Germany) is the only one so far in which the image receiver array moves exactly with the eye. This has practical implications, outlined below, as natural eye movements can be used to find and fixate a target. On the other hand, the duration of our study was limited owing to time constraints of a transdermal cable; other studies have reported longer implantation times \\[33\\]. Moreover, the range of variations in online image processing is small in devices that work quasi-autonomously under the retina.\n\n*Suprachoroidal implants*, although bearing lower surgical risks, are located further away from the target cells. This may result in high stimulation thresholds, increased power consumption and certainly loss of spatial resolution. While the surgery is easier and less invasive, the location between the highly light-absorbing sclera and choroid does not allow the implantation of a light-sensitive array that moves with the eye.\n\nIn the following sections, the results obtained in our subretinal study are discussed in more detail.\n\n## The spatial domain\n\nUsing simulated prosthetic vision, Perez *et al*. \\[35\\] have shown that the precision of normally sighted subjects in recognition tasks increased with pixel density up to 1000 pixels in a 10\u00b0 \u00d7 7\u00b0 visual field on the retina. Thus, at least several hundred electrodes should be employed to provide significant vision\u2014a daunting technical barrier \\[35\\]. The present study\u2014the first to successfully employ electronic arrays with such a large number of electrodes\u2014presents proof-of-concept that such devices can restore useful vision in blind human subjects, even though the ultimate goal of broad clinical application will take time to develop.\n\nThe size of the visual field (11\u00b0 \u00d7 11\u00b0) in our patients, although small, is sufficient for orientation and object localization, as is well established in patients with peripheral retinal dystrophies. Reading requires a field of 3 by 5 degrees according to Aulhorn \\[36\\].\n\nInter-individual variations in visual performance among the patients of this study can be assumed to result from their respective stages of degeneration \\[2\\], the duration of their blindness, and the retinal localization of the implant, although presently no convincing correlation can be established. 
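A back-of-the-envelope check relates the electrode geometry described earlier to the best grating acuity the array could support: with a centre-to-centre pitch of 72 µm and roughly 288 µm of retina per degree, the pitch subtends 15 min of arc, and resolving one grating cycle requires at least two sample points. A minimal sketch (Python; all figures are from the text):

```python
# Sampling-limited grating acuity of the MPDA
PITCH_UM = 72.0       # electrode centre-to-centre distance
UM_PER_DEG = 288.0    # approx. retinal distance per degree of visual angle

pitch_deg = PITCH_UM / UM_PER_DEG      # 0.25 deg, i.e. 15 min of arc
nyquist_cpd = 1.0 / (2.0 * pitch_deg)  # one cycle needs two pixels

print(f"pitch: {pitch_deg * 60:.0f} arcmin")            # 15 arcmin
print(f"sampling limit: {nyquist_cpd:.1f} cycles/deg")  # 2.0 cycles/deg
```

The best measured grating acuity (0.46 cycles per degree, in patient 2) therefore lies well below the array's theoretical sampling limit of about 2 cycles per degree, suggesting that current spread and retinal factors, rather than pixel pitch, limited resolution in practice.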
Clearly, spatial reorganization of the retina takes place; however, it is very slow, taking decades. As the inner retina is not dependent on choroidal perfusion, it also survives the complete loss of the choroid\u2014as seen in our patient with choroideraemia. This also explains why blockage of choroido-retinal transport by our implant does not affect survival of the inner retina.\n\nIn our study, precise localization of the microelectrode array under the fovea appeared important for the restoration of useful percepts via spatially ordered electrical stimulation. High spatial resolution and the ability to read are restricted in normal observers to the central retina (5\u00b0 \u00d7 3\u00b0), which is significantly over-represented in the visual cortex relative to more peripheral areas of the retina.\n\n## The temporal domain and the problem of image fading\n\nTemporal resolution was investigated over a range from 1 to 20 Hz. When continuous electrical stimuli were applied via the *DS array* at a fixed retinal location with a PD of 1\u20134 ms, patients' percepts faded after approximately 15 s at a 0.3 Hz repetition rate, after approximately 2 s at 2 Hz, and after approximately 0.5 s at 10 Hz. This is in close accordance with the observations of Perez *et al*. \\[37\\] with *epiretinal* ARGUS II devices that an image stabilized on the retina quickly disappears; restoring the image required moving it across the retina by means of rapid head shaking. Similarly, Jensen & Rizzo \\[38\\] observed in rabbit retina that the retinal response to a second or third electrical pulse rapidly decreases, as compared with the first pulse, with increasing repetition rates; apparently, inner retinal neurons suffer from a prolonged inhibition if stimulated electrically under conditions where the surrounding network under the electrode is being activated as a whole. By contrast, objects like grating patterns or letters can be perceived continuously with our light-sensitive *subretinal MPDA*. Patients see the image constantly, as a complete entity, without head movements\u2014even on the first day of stimulation. The source of this difference can be found in involuntary eye movements controlled by the superior colliculus. Even during strict fixation, our eyes continuously make slight movements (slow drifts and microsaccades of up to 50 min of arc at 1 to 3 Hz) that refresh the image by constantly changing the activated photoreceptor population \\[39\\]. Objects viewed by our patients\u2014with the chip moving in synchronization with natural eye movements\u2014dynamically activate a range of adjacent pixels on the chip, as eye movements and microsaccades continuously shift the 'electrical image' on the retina by about 1\u20133 pixels, thus preventing mechanisms of local adaptation and image fading. Details on the role and magnitude of microsaccades in relation to pixel size are outlined in electronic supplementary material, chapter 3*e* (figure S3).\n\n## The cellular 'interface'\n\n*In vitro* experiments have shown that subretinal stimulation, at least at threshold, preferentially stimulates bipolar cells \\[15,19\\]. This may be one reason for the correct retinotopic perceptions reported in this study, since local excitation of small groups of bipolar cells is recognized in the brain at the correct position in the visual field. 
By contrast, epiretinal stimulation of ganglion cell fibres may result in disparities between the stimulation location and the perceived visual field location, because the axons of retinal ganglion cells (RGCs) course across the retina on their way into the brain via the optic nerve. On the other hand, none of the different approaches has problems in principle with simultaneously addressing ON and OFF neurons (see electronic supplementary material, chapter 3*c*).\n\n## Learning and cognition\n\nWith the subretinal approach and its retinotopically correct spatial transmission, no long-term learning procedure was necessary to enable the patients to recognize shapes correctly. Even at the first trial with the DS test field or with the MPDA, patients were able to correctly perceive the complete entity of an object in the presented physical geometric form, the bright parts appearing whitish or yellowish, the dark parts grey or black; there were no reports of colour sensations, although in very rare and brief instances coloured tinges were noticed by patients.\n\nThe observation that patient 2 could readily name an object upon its first presentation to his visual field is of particular importance, and is in line with our observation of retinotopically correct perception from the DS experiments and from the other patients, who recognized a line and its direction clearly. This does not mean that the patients had undisturbed percepts. Patients reported some wobbling of the image, probably owing to a relatively low image capture frequency (5\u20137 Hz), to which they adapted quickly.\n\nAs expected, patient performance improved over time. While practising with the MPDA for 4 to 6 h daily, patients had to learn to control their eye position, because each object was presented within a relatively small field of vision (11\u00b0 \u00d7 11\u00b0). Patient 2 reported that the two lines of the letter L were initially moving slightly independently of each other, but that they appeared connected at the corner after approximately one week. Apparently, the binding of correlated motion cues can be regained quickly (see electronic supplementary material, chapter 3*d* and chapter 5, movie 9). When patients were asked to point to an object they had discovered, there was clear improvement of visuomotor abilities within a week.\n\n## Future concepts\n\n*Methodological and technical aspects*: our first approach was designed as a short-duration study of up to several weeks in only a few patients in order to achieve a proof-of-concept for a cable-bound version of a subretinal active implant. Our ongoing follow-up study is employing the next-generation system (Alpha IMS \\[40\\], produced by Retina Implant AG, Reutlingen, Germany), in which an encapsulated secondary coil for power and signal transmission is positioned subdermally behind the ear, with a primary coil clipped magnetically on top. We also anticipate that lateral processing in terms of mutual inhibition of pixels, as performed in centre-surround receptive field processing, will improve contrast vision and spatial resolution. Penetrating three-dimensional electrodes, as developed by various groups, may improve the contact to the bipolar cell layer but may be more damaging to the retina.\n\n# Conclusion\n\nThis study demonstrated that subretinal micro-electrode arrays can restore visual percepts in patients blind from hereditary retinal degenerations to such an extent that localization and recognition of objects can provide useful vision, up to reading letters. 
Despite all remaining biological and technical challenges, our results offer hope that restoration of vision in the blind with electronic retinal prostheses is a feasible way to help those who cannot profit from emerging gene therapy and\/or the application of neuroprotective agents. The advantage of our approach is that all parts of the device can be implanted invisibly in the body, that inner retinal processing can be used, and that a continuous, stable image with unmatched spatial resolution is perceived. Still, further development is necessary to provide long-term stability, improved contrast and spatial resolution, and increased field size through multiple chip implantation. Nevertheless, the present study provides proof-of-concept that electronic subretinal devices have the potential to improve visual function from a state of complete blindness to one of low vision that allows localization and recognition of objects up to reading capability.\n\n## Acknowledgements\n\nWe are very grateful to all who contributed to the 'SUBRET' project; for the names of contributors, funding organizations and disclosures of interest we refer to the electronic supplementary material.\n\n# References\n\ndate: 2018-06\ntitle: 9 out of 10 people worldwide breathe polluted air, but more countries are taking action\n\n2 May 2018 - Air pollution levels remain dangerously high in many parts of the world. New data from WHO shows that 9 out of 10 people breathe air containing high levels of pollutants. Updated estimations reveal an alarming death toll of 7 million people every year caused by ambient (outdoor) and household air pollution.\n\n\"Air pollution threatens us all, but the poorest and most marginalized people bear the brunt of the burden,\" says Dr Tedros Adhanom Ghebreyesus, Director-General of WHO. \"It is unacceptable that over 3 billion people \u2013 most of them women and children \u2013 are still breathing deadly smoke every day from using polluting stoves and fuels in their homes. 
If we don't take urgent action on air pollution, we will never come close to achieving sustainable development.\"\n\n7 million deaths every year\n\nWHO estimates that around 7 million people die every year from exposure to fine particles in polluted air that penetrate deep into the lungs and cardiovascular system, causing diseases including stroke, heart disease, lung cancer, chronic obstructive pulmonary disease and respiratory infections, including pneumonia.\n\nAmbient air pollution alone caused some 4.2 million deaths in 2016, while household air pollution from cooking with polluting fuels and technologies caused an estimated 3.8 million deaths in the same period.\n\nMore than 90% of air pollution-related deaths occur in low- and middle-income countries, mainly in Asia and Africa, followed by low- and middle-income countries of the Eastern Mediterranean region, Europe and the Americas.\n\nAround 3 billion people \u2013 more than 40% of the world's population \u2013 still do not have access to clean cooking fuels and technologies in their homes, the main source of household air pollution. WHO has been monitoring household air pollution for more than a decade and, while the rate of access to clean fuels and technologies is increasing everywhere, improvements are not even keeping pace with population growth in many parts of the world, particularly in sub-Saharan Africa.\n\nWHO recognizes that air pollution is a critical risk factor for noncommunicable diseases (NCDs), causing an estimated one-quarter (24%) of all adult deaths from heart disease, 25% from stroke, 43% from chronic obstructive pulmonary disease and 29% from lung cancer.\n\nMore countries taking action\n\nMore than 4300 cities in 108 countries are now included in WHO's ambient air quality database, making this the world's most comprehensive database on ambient air pollution. Since 2016, more than 1000 additional cities have been added to WHO's database, which shows that more countries are measuring and taking action to reduce air pollution than ever before. The database collects annual mean concentrations of particulate matter (PM10 and PM2.5). PM2.5 includes pollutants such as sulfate, nitrates and black carbon, which pose the greatest risks to human health. WHO air quality recommendations call for countries to reduce their air pollution to annual mean values of 20 \u00b5g\/m3 (for PM10) and 10 \u00b5g\/m3 (for PM2.5).\n\n\"Many of the world's megacities exceed WHO's guideline levels for air quality by more than 5 times, representing a major risk to people's health,\" says Dr Maria Neira, Director of the Department of Public Health, Social and Environmental Determinants of Health, at WHO. \"We are seeing an acceleration of political interest in this global public health challenge. The increase in cities recording air pollution data reflects a commitment to air quality assessment and monitoring. Most of this increase has occurred in high-income countries, but we hope to see a similar scale-up of monitoring efforts worldwide.\"\n\nWhile the latest data show ambient air pollution levels are still dangerously high in most parts of the world, they also show some positive progress. Countries are taking measures to tackle and reduce air pollution from particulate matter. For example, in just two years, India's Pradhan Mantri Ujjwala Yojana Scheme has provided some 37 million women living below the poverty line with free LPG connections to support them to switch to clean household energy use. 
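As a simple illustration of the guideline comparison above, the following sketch flags a city's annual mean concentration against the WHO recommendations of 20 µg/m3 for PM10 and 10 µg/m3 for PM2.5 (Python; the city names and values are hypothetical):

```python
# WHO annual-mean guideline values in ug/m3 (from the text above)
GUIDELINES = {"PM10": 20.0, "PM2.5": 10.0}

def exceedance_ratio(pollutant: str, annual_mean: float) -> float:
    """Ratio of a city's annual mean concentration to the WHO guideline."""
    return annual_mean / GUIDELINES[pollutant]

# Hypothetical annual means for two cities
for city, pollutant, value in [("City A", "PM2.5", 55.0),
                               ("City B", "PM10", 18.0)]:
    ratio = exceedance_ratio(pollutant, value)
    verdict = "exceeds" if ratio > 1 else "meets"
    print(f"{city}: {pollutant} {value} ug/m3 {verdict} the guideline "
          f"({ratio:.1f}x)")
```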
Mexico City has committed to cleaner vehicle standards, including a move to soot-free buses and a ban on private diesel cars by 2025.\n\nMajor sources of air pollution from particulate matter include the inefficient use of energy by households, industry, the agriculture and transport sectors, and coal-fired power plants. In some regions, sand and desert dust, waste burning and deforestation are additional sources of air pollution. Air quality can also be influenced by natural elements such as geographic, meteorological and seasonal factors.\n\nAir pollution does not recognize borders. Improving air quality demands sustained and coordinated government action at all levels. Countries need to work together on solutions for sustainable transport, more efficient and renewable energy production and use, and waste management. WHO works with many sectors including transport and energy, urban planning and rural development to support countries to tackle this problem.\n\nKey findings:\n\nWHO estimates that around 90% of people worldwide breathe polluted air. Over the past 6 years, ambient air pollution levels have remained high and approximately stable, with declining concentrations in some parts of Europe and in the Americas.\n\nThe highest ambient air pollution levels are in the Eastern Mediterranean Region and in South-East Asia, with annual mean levels often exceeding WHO limits by more than 5 times, followed by low- and middle-income cities in Africa and the Western Pacific.\n\nAfrica and some parts of the Western Pacific have a serious lack of air pollution data. For Africa, the database now contains PM measurements for more than twice as many cities as previous versions; however, data were identified for only 8 of 47 countries in the region.\n\nEurope has the highest number of places reporting data.\n\nIn general, ambient air pollution levels are lowest in high-income countries, particularly in Europe, the Americas and the Western Pacific. In cities of high-income countries in Europe, air pollution has been shown to lower average life expectancy by anywhere between 2 and 24 months, depending on pollution levels.\n\n\"Political leaders at all levels of government, including city mayors, are now starting to pay attention and take action,\" adds Dr Tedros. \"The good news is that we are seeing more and more governments increasing commitments to monitor and reduce air pollution as well as more global action from the health sector and other sectors like transport, housing and energy.\"\n\nThis year WHO will convene the first Global Conference on Air Pollution and Health (30 October \u2013 1 November 2018) to bring governments and partners together in a global effort to improve air quality and combat climate change. 
\n\nAvailable from: \n\nabstract: # Objective\n .\n To characterise the clinical features of patients admitted to hospital with coronavirus disease 2019 (covid-19) in the United Kingdom during the growth phase of the first wave of this outbreak who were enrolled in the International Severe Acute Respiratory and emerging Infections Consortium (ISARIC) World Health Organization (WHO) Clinical Characterisation Protocol UK (CCP-UK) study, and to explore risk factors associated with mortality in hospital.\n .\n # Design\n .\n Prospective observational cohort study with rapid data gathering and near real time analysis.\n .\n # Setting\n .\n 208 acute care hospitals in England, Wales, and Scotland between 6 February and 19 April 2020. A case report form developed by ISARIC and WHO was used to collect clinical data. A minimal follow-up time of two weeks (to 3 May 2020) allowed most patients to complete their hospital admission.\n .\n # Participants\n .\n 20\u2009133 hospital inpatients with covid-19.\n .\n # Main outcome measures\n .\n Admission to critical care (high dependency unit or intensive care unit) and mortality in hospital.\n .\n # Results\n .\n The median age of patients admitted to hospital with covid-19, or with a diagnosis of covid-19 made in hospital, was 73 years (interquartile range 58-82, range 0-104). More men were admitted than women (men 60%, n=12\u2009068; women 40%, n=8065). The median duration of symptoms before admission was 4 days (interquartile range 1-8). The commonest comorbidities were chronic cardiac disease (31%, 5469\/17\u2009702), uncomplicated diabetes (21%, 3650\/17\u2009599), non-asthmatic chronic pulmonary disease (18%, 3128\/17\u2009634), and chronic kidney disease (16%, 2830\/17\u2009506); 23% (4161\/18\u2009525) had no reported major comorbidity. Overall, 41% (8199\/20\u2009133) of patients were discharged alive, 26% (5165\/20\u2009133) died, and 34% (6769\/20\u2009133) continued to receive care at the reporting date. 17% (3001\/18\u2009183) required admission to high dependency or intensive care units; of these, 28% (826\/3001) were discharged alive, 32% (958\/3001) died, and 41% (1217\/3001) continued to receive care at the reporting date. Of those receiving mechanical ventilation, 17% (276\/1658) were discharged alive, 37% (618\/1658) died, and 46% (764\/1658) remained in hospital. Increasing age, male sex, and comorbidities including chronic cardiac disease, non-asthmatic chronic pulmonary disease, chronic kidney disease, liver disease and obesity were associated with higher mortality in hospital.\n .\n # Conclusions\n .\n ISARIC WHO CCP-UK is a large prospective cohort study of patients in hospital with covid-19. The study continues to enrol at the time of this report. 
In study participants, mortality was high; independent risk factors were increasing age, male sex, and chronic comorbidity, including obesity. This study has shown the importance of pandemic preparedness and the need to maintain readiness to launch research studies in response to outbreaks.\n .\n # Study registration\n .\n ISRCTN66726260.\nauthor: Annemarie B Docherty; Ewen M Harrison; Christopher A Green; Hayley E Hardwick; Riinu Pius; Lisa Norman; Karl A Holden; Jonathan M Read; Frank Dondelinger; Gail Carson; Laura Merson; James Lee; Daniel Plotkin; Louise Sigfrid; Sophie Halpin; Clare Jackson; Carrol Gamble; Peter W Horby; Jonathan S Nguyen-Van-Tam; Antonia Ho; Clark D Russell; Jake Dunning; Peter JM Openshaw; J Kenneth Baillie; Malcolm G Semple\nCorrespondence to: M G Semple (or [@TweedieChap](https:\/\/twitter.com\/tweediechap) on Twitter)\ndate: 2020\nreferences:\ntitle: Features of 20\u2009133 UK patients in hospital with covid-19 using the ISARIC WHO Clinical Characterisation Protocol: prospective observational cohort study\n\n# Introduction\n\nThe outbreak of disease caused by the novel severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) was declared a pandemic by the World Health Organization on 11 March 2020.1 The WHO situation report dated 30 April 2020 stated 3\u2009090\u2009445 people had confirmed coronavirus disease 2019 (covid-19) and 217\u2009769 people had died across the world.2\n\nIn the wake of the influenza A H1N1 pandemic (2009) and the emergence of Middle East respiratory syndrome coronavirus (2012), it was recognised that the effectiveness of a response to a future pandemic threat would critically depend on the speed and focus of that response. The United Kingdom set up and maintained a \"sleeping\" prepandemic suite of protocols, documents, and agreements in preparation for future outbreaks. The International Severe Acute Respiratory and emerging Infections Consortium (ISARIC) WHO Clinical Characterisation Protocol UK (CCP-UK) study was a core component of this portfolio.3 Further details about ISARIC WHO CCP-UK can be found at and in the online supplement.\n\nIn response to the emergence of SARS-CoV-2 and its pandemic potential, the ISARIC WHO CCP-UK study was activated on 17 January 2020, in time to enrol the first wave of patients with covid-19 admitted to hospitals in England and Wales. The first confirmed patient with covid-19 in the UK was reported on 31 January 2020.\n\nHospital admission rates for patients with covid-19 have been difficult to estimate because rates depend on the prevalence of community testing and admission criteria, which vary between countries. However, an estimated one in 10 to one in five adults have illnesses of sufficient severity to warrant hospital admission.4 Patients have mostly been admitted with severe acute respiratory infection or severe acute respiratory syndrome according to the previous WHO case definitions.5 6 The provision of intensive care also varies between countries. Studies first from China, and more recently from Europe and the United States, have found that rates of admission to intensive care range from 5% to 32%.7 8 Old age, chronic major comorbidity, and male sex have consistently been associated with increased mortality.9 10 11 12\n\nIn this first report of the ISARIC WHO CCP-UK study, we characterise the clinical features of patients admitted to hospital with covid-19 in England, Scotland, and Wales during the growth phase of the first wave of this outbreak, up to 19 April 2020. 
Future reports will include Northern Ireland. We describe all patient outcomes as known on 3 May 2020 and explore risk factors associated with mortality in hospital.\n\n# Methods\n\n## Study design and setting\n\nThe ISARIC WHO CCP-UK (National Institute for Health Research Clinical Research Network Central Portfolio Management System ID: 14152) study is an ongoing prospective cohort study in 208 acute care hospitals in England, Scotland, and Wales. The protocol (supplementary material 2), revision history, case report form (version 9.2; supplementary material 3), information leaflets, consent forms and details of the Independent Data and Material Access Committee are available online.21\n\n## Participants\n\nInclusion criteria were people of all ages who were admitted to one of 208 acute care hospitals in England, Scotland, and Wales with proven or high likelihood of infection with a pathogen of public health interest, defined as SARS-CoV-2 for this event by Public Health England. Reverse transcriptase polymerase chain reaction was the only mode of testing available during the period of study. The decision to test was at the discretion of the clinician attending the patient, and not defined by protocol. The enrolment criterion \"high likelihood of infection\" reflects that a preparedness protocol cannot assume that a diagnostic test will be available for an emergent pathogen. Site training emphasised that only patients who tested positive for covid-19 were eligible for enrolment.\n\nNational guidance was provided by Public Health England and other UK public health agencies that advised who to test based on clinical case definitions for possible covid-19 (online supplement). We also included patients who had been admitted for a separate condition but had tested positive for covid-19 during their hospital stay. We collected additional biological samples for research purposes when consent was given (please see the online supplement for details of consent procedures and biological samples). These samples are currently undergoing analysis, and we will present the results when they become available. Patients were enrolled only during their index admission. We used three tiers in the ISARIC WHO CCP-UK protocol. Patients in tier 0 had clinical information from their routine health records uploaded into the case report form. Consent was not required for collection of depersonalised routine healthcare data for research in England and Wales. A waiver for consent was given by the Public Benefit and Privacy Panel in Scotland. Tiers 1 and 2 of the protocol involved additional biological sampling for research purposes, for which consent by, or assent for, participants was obtained.\n\n## Data collection\n\nWe collected baseline demographic data on a paper case report form (version 9.2; supplementary material 2) that was developed by ISARIC and WHO for use in outbreak investigations. Data were uploaded from admission, and usually before hospital episodes were complete, to a REDCap database (Research Electronic Data Capture, Vanderbilt University, US, hosted by University of Oxford, UK). We aimed to record measures of illness severity and routine blood test results at a minimum of four time points: day of hospital admission (day 1), day 3, day 6, and day 9, plus the day of any admission to critical care. We recorded relevant treatments that patients received in hospital, level of care (ward based, high dependency unit, or intensive care unit), complications, and details of discharge or death while in hospital. 
Further information about these variables can be found in the online supplement.\n\n## Outcomes\n\nThe main outcomes were critical care admission (high dependency unit or intensive care unit) and mortality in hospital or palliative discharge. We chose a priori to restrict analysis of outcomes to patients who were admitted more than two weeks before data extraction (3 May 2020) to enable most patients to finish their hospital admission.\n\n## Bias\n\nResearch nurses relied on local covid-19 test reports to enrol patients. Capacity to enrol was limited by staff resources at times of high covid-19 activity. Otherwise, we are unable to comment on the potential selection bias of our cohort. We are in the process of linking to routine administrative healthcare data and will be able to make comparisons at that point.\n\n## Missing data\n\nThe nature of the study means that a large amount of data were missing, particularly during the later parts of the growth curve of the UK outbreak. Because this paper is mainly descriptive, we have not performed any imputation for missing data, and describe the data as they stand. To reduce the impact of missing data on outcome analyses, we restricted these analyses to patients who had been admitted for at least two weeks before data extraction.\n\n## Statistical analyses\n\nContinuous data are summarised as median (interquartile range) and categorical data as frequency (percentage). For univariate comparisons, the Mann-Whitney U test or Kruskal-Wallis test was used. We compared categorical data by using the \u03c7^2^ test.\n\nWe used several approaches to model survival. Discharge from hospital was considered an absorbing state, meaning that once discharged, patients were considered no longer at risk of death. Patients who were discharged were not censored but were held within the risk set, therefore accounting for the competing risk of discharge on death. We checked this by using a formal Fine and Gray competing risks model. Hierarchical Cox proportional hazards approaches included geographical region (clinical commissioning group or health board) as a random intercept. We used a parsimonious criterion-based model-building approach based on several principles: clinically relevant explanatory variables were identified a priori for exploration; population stratification was incorporated; interactions were checked at the first-order level; and final model selection was informed by log likelihood tests and the concordance statistic, with appropriate assumptions checked, including the distribution of residuals and the requirement for proportional hazards. We set statistical significance at 5%. All tests were two sided. We analysed data by using R (R Core Team version 3.6.3, Vienna, Austria), with packages including tidyverse, finalfit, survival, cmprsk, and coxme.\n\n## Patient and public involvement\n\nThis was an urgent public health research study in response to a Public Health Emergency of International Concern. Patients or the public were not involved in the design, conduct, or reporting of this rapid response research.\n\n# Results\n\nOn behalf of ISARIC WHO CCP-UK, 2468 research nurses, administrators, and medical students enrolled 20\u2009133 patients who were admitted with covid-19 to 208 hospitals in England, Scotland, and Wales between 6 February and 14:00 on 19 April 2020 (table 1 and fig E1). This figure represents 34% of the 59\u2009215 covid-19 admissions in these countries. 
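As a concrete illustration of the survival modelling described in Methods, the following sketch prepares data so that discharge acts as an absorbing state: patients discharged alive are kept in the risk set to an analysis horizon rather than censored at discharge. It uses Python with the lifelines package rather than the R packages used in the study, and all records, column names and proportions are synthetic:

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(1)
n = 400

# Synthetic admissions: observed in-hospital time, outcome and covariates
df = pd.DataFrame({
    "age_decade": rng.integers(4, 10, size=n),   # 40s to 90s
    "male": rng.integers(0, 2, size=n),
    "time_days": rng.integers(1, 28, size=n),
    "outcome": rng.choice(
        ["died_in_hospital", "discharged_alive", "still_in_hospital"],
        size=n, p=[0.26, 0.41, 0.33]),
})

HORIZON_DAYS = 90
# Discharge as an absorbing state: discharged patients stay in the risk
# set (event = 0) to the horizon instead of being censored at discharge,
# which accounts for discharge competing with in-hospital death
discharged = df["outcome"] == "discharged_alive"
df.loc[discharged, "time_days"] = HORIZON_DAYS
df["died"] = (df["outcome"] == "died_in_hospital").astype(int)

# Cox proportional hazards fit (the study's geographical random
# intercept is omitted here for simplicity)
cph = CoxPHFitter()
cph.fit(df[["time_days", "died", "age_decade", "male"]],
        duration_col="time_days", event_col="died")
print(cph.summary[["coef", "exp(coef)", "p"]])
```

This reproduces only the data-preparation idea; the study's Fine and Gray check and its regional random intercept would require additional tooling beyond this sketch.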
The median time from onset of symptoms of covid-19 in the community to presentation at hospital was 4 days (interquartile range 1-8; n=16\u2009221).\n\nBaseline characteristics of 20\u2009133 patients with coronavirus disease 2019 stratified by sex. Data are numbers (percentages) unless stated otherwise\n\n| Characteristics | Male | Female | All |\n|----|----|----|----|\n| Total No (%) | 12\u2009068 (59.9) | 8065 (40.1) | 20\u2009133 |\n| Age at admission (n=20\u2009133) | | | |\n| \u2003Median (interquartile range) | 72.0 (58.0-81.0) | 74.0 (58.0-84.0) | 72.9 (58.0-82.0) |\n| Age (n=20\u2009133) | | | |\n| \u2003\\<18 | 180 (1.5) | 130 (1.6) | 310 (1.5) |\n| \u200318-39 | 534 (4.4) | 533 (6.6) | 1067 (5.3) |\n| \u200340-50 | 888 (7.4) | 530 (6.6) | 1418 (7.0) |\n| \u200350-59 | 1728 (14.3) | 980 (12.2) | 2708 (13.5) |\n| \u200360-69 | 2115 (17.5) | 1181 (14.6) | 3296 (16.4) |\n| \u200370-79 | 2972 (24.6) | 1720 (21.3) | 4692 (23.3) |\n| \u2003\u226580 | 3651 (30.3) | 2991 (37.1) | 6642 (33.0) |\n| Any comorbidity (n=18\u2009525) | | | |\n| \u2003No | 2591 (23.4) | 1570 (21.1) | 4161 (22.5) |\n| \u2003Yes | 8492 (76.6) | 5872 (78.9) | 14\u2009364 (77.5) |\n| Chronic cardiac disease (n=17\u2009702) | | | |\n| \u2003No | 7086 (66.8) | 5147 (72.6) | 12\u2009233 (69.1) |\n| \u2003Yes | 3527 (33.2) | 1942 (27.4) | 5469 (30.9) |\n| Chronic pulmonary disease, not asthma (n=17\u2009634) | | | |\n| \u2003No | 8616 (81.7) | 5890 (83.1) | 14\u2009506 (82.3) |\n| \u2003Yes | 1931 (18.3) | 1197 (16.9) | 3128 (17.7) |\n| Asthma (n=17\u2009535) | | | |\n| \u2003No | 9274 (88.6) | 5721 (80.9) | 14\u2009995 (85.5) |\n| \u2003Yes | 1192 (11.4) | 1348 (19.1) | 2540 (14.5) |\n| Smoker (n=14\u2009184) | | | |\n| \u2003Never smoked | 5030 (58.8) | 3938 (69.9) | 8968 (63.2) |\n| \u2003Former smoker | 2972 (34.8) | 1392 (24.7) | 4364 (30.8) |\n| \u2003Yes | 549 (6.4) | 303 (5.4) | 852 (6.0) |\n| Chronic kidney disease (n=17\u2009506) | | | |\n| \u2003No | 8792 (84.0) | 5884 (83.5) | 14\u2009676 (83.8) |\n| \u2003Yes | 1671 (16.0) | 1159 (16.5) | 2830 (16.2) |\n| Diabetes without complications (n=17\u2009599) | | | |\n| \u2003No | 8254 (78.3) | 5695 (80.7) | 13\u2009949 (79.3) |\n| \u2003Yes | 2290 (21.7) | 1360 (19.3) | 3650 (20.7) |\n| Diabetes with complications (n=17\u2009516) | | | |\n| \u2003No | 9628 (91.8) | 6589 (93.8) | 16\u2009217 (92.6) |\n| \u2003Yes | 860 (8.2) | 439 (6.2) | 1299 (7.4) |\n| Obesity (n=16\u2009081) | | | |\n| \u2003No | 8725 (90.6) | 5671 (87.8) | 14\u2009396 (89.5) |\n| \u2003Yes | 900 (9.4) | 785 (12.2) | 1685 (10.5) |\n| Chronic neurological disorder (n=17\u2009382) | | | |\n| \u2003No | 9222 (88.6) | 6189 (88.7) | 15\u2009411 (88.7) |\n| \u2003Yes | 1181 (11.4) | 790 (11.3) | 1971 (11.3) |\n| Dementia (n=17\u2009459) | | | |\n| \u2003No | 9211 (88.2) | 5888 (83.9) | 15\u2009099 (86.5) |\n| \u2003Yes | 1232 (11.8) | 1128 (16.1) | 2360 (13.5) |\n| Malignancy (n=17\u2009354) | | | |\n| \u2003No | 9251 (89.2) | 6360 (91.0) | 15\u2009611 (90.0) |\n| \u2003Yes | 1117 (10.8) | 626 (9.0) | 1743 (10.0) |\n| Moderate or severe liver disease (n=17\u2009360) | | | |\n| \u2003No | 10\u2009181 (98.0) | 6869 (98.5) | 17\u2009050 (98.2) |\n| \u2003Yes | 204 (2.0) | 106 (1.5) | 310 (1.8) |\n| Mild liver disease (n=17\u2009331) | | | |\n| \u2003No | 10\u2009195 (98.3) | 6855 (98.5) | 17\u2009050 (98.4) |\n| \u2003Yes | 174 (1.7) | 107 (1.5) | 281 (1.6) |\n| Chronic haematological disease (n=17\u2009328) | | | |\n| \u2003No | 9951 (96.0) | 6684 (96.0) | 16\u2009635 (96.0) |\n| \u2003Yes | 415 (4.0) 
| 278 (4.0) | 693 (4.0) |\n| Rheumatological disorder (n=17\u2009289) | | | |\n| \u2003No | 9562 (92.4) | 6031 (86.9) | 15\u2009593 (90.2) |\n| \u2003Yes | 787 (7.6) | 909 (13.1) | 1696 (9.8) |\n| Malnutrition (n=16\u2009695) | | | |\n| \u2003No | 9768 (97.8) | 6531 (97.4) | 16\u2009299 (97.6) |\n| \u2003Yes | 222 (2.2) | 174 (2.6) | 396 (2.4) |\n| Previous immunosuppressant drug treatment (n=18\u2009009) | | | |\n| \u2003Yes | 876 (8.1) | 791 (11.0) | 1667 (9.3) |\n| \u2003No | 9339 (86.6) | 6032 (83.5) | 15\u2009371 (85.4) |\n| \u2003Not applicable | 573 (5.3) | 398 (5.5) | 971 (5.4) |\n| Previous anti-infective treatment (n=18\u2009017) | | | |\n| \u2003No | 1940 (18.0) | 1311 (18.2) | 3251 (18.0) |\n| \u2003Yes | 8285 (76.8) | 5520 (76.4) | 13\u2009805 (76.6) |\n| \u2003Not applicable | 569 (5.3) | 392 (5.4) | 961 (5.3) |\n| AIDS\/HIV (n=17\u2009251) | | | |\n| \u2003No | 10\u2009259 (99.5) | 6909 (99.6) | 17\u2009168 (99.5) |\n| \u2003Yes | 55 (0.5) | 28 (0.4) | 83 (0.5) |\n\n## Age and sex\n\nThe median age of patients was 73 years (interquartile range 58-82, range 0-104; fig 1); 310 patients (1.5%) were less than 18 years old and 194 (1.0%) were less than 5 years old. More men (59.9%, n=12\u2009068) than women (40.1%, n=8065) were admitted to hospital with covid-19. One hundred women (10%) of reproductive age (n=1033) were recorded as being pregnant.\n\n## Symptoms\n\nThe most common symptoms were cough (68.9%, 12\u2009896\/18\u2009730), fever (71.6%, 12\u2009499\/17\u2009452), and shortness of breath (71.2%, 12\u2009107\/16\u2009999; fig 2, top left panel), though these data reflect the case definition. Only 4.5% (855\/19\u2009178) of patients reported no symptoms on admission. We found a high degree of overlap between the three most common symptoms (fig 2, lower left panel).\n\nClusters of symptoms on admission were apparent (fig E2). The most common symptom cluster encompassed the respiratory system: cough, sputum, shortness of breath, and fever. We also observed three other clusters: one encompassing musculoskeletal symptoms (myalgia, joint pain, headache, and fatigue); a cluster of enteric symptoms (abdominal pain, vomiting, and diarrhoea); and, less commonly, a mucocutaneous cluster. Twenty-nine per cent (5384\/18\u2009605) of all patients complained of enteric symptoms on admission, mostly in association with respiratory symptoms; however, 4% of all patients described enteric symptoms alone.\n\n## Comorbidities\n\nFigure 2 (top right panel) and table 1 show major comorbidities recorded on admission. The most common major comorbidities were chronic cardiac disease (30.9%, 5469\/17\u2009702), diabetes without complications (20.7%, 3650\/17\u2009599), chronic pulmonary disease excluding asthma (17.7%, 3128\/17\u2009634), chronic kidney disease (16.2%, 2830\/17\u2009506), and asthma (14.5%, 2540\/17\u2009535). Of 18\u2009525 patients, 22.5% (4161) had no documented major comorbidity. There was little overlap between the three most common comorbidities (fig 2, lower right panel).\n\nSix per cent (852\/14\u2009184) of patients were current smokers, 30.8% (4364) were previous smokers, and 63.2% (8968) had never smoked. Figure E3 shows the pattern of major comorbidity stratified by age.\n\n## Level of care\n\nA high proportion of patients required admission to high dependency or intensive care units (17%, 3001\/18\u2009183; fig 3), and 55% (9244\/16\u2009849) received high-flow oxygen at some point during their admission. 
Sixteen per cent of patients (2670\/16\u2009805) were treated with non-invasive ventilation, while 10% (1658\/16\u2009866) received invasive ventilation.\n\n## Patient outcomes\n\nOverall, 41% (8199\/20\u2009133) of patients were discharged alive, 26% (5165\/20\u2009133) died, and 34% (6769\/20\u2009133) continued to receive care at the date of reporting (fig 4). The median age of patients who died in hospital from covid-19 in the study was 80 years, and only 11% (559\/4880) of these patients had no documented major comorbidity.\n\nFor patients who received only ward care, 47% (7203\/15\u2009297) were discharged alive, 26% (3954\/15\u2009297) died, and 27% (4140\/15\u2009297) remained in hospital at the date of reporting. As expected, outcomes were worse for those who needed higher levels of care.\n\nOf patients admitted to critical care (high dependency unit or intensive care unit), 28% (826\/3001) were discharged alive, 32% (958\/3001) died, and 41% (1217\/3001) continued to receive care at the date of reporting. Although the patients who received mechanical ventilation were younger than the overall cohort (61 years, interquartile range 52-69), only 17% (276\/1658) had been discharged alive by 19 April 2020, 37% (618\/1658) had died, and 46% (764\/1658) continued to receive care.\n\nLength of stay increased with age for patients discharged alive (fig E4). For patients who died, we found no association between age and time to death, with around 80% dying before day 14 of hospital admission.\n\n## Association of pre-existing patient characteristics and survival\n\nThe online supplement (table E4) describes univariable and multivariable associations with mortality. Figure 5 shows variables that remained significant in the multivariable model. Increasing age was a strong predictor of mortality in hospital after adjusting for major comorbidity (reference age \\<50 years): 50-59 years, hazard ratio 2.63 (95% confidence interval 2.06 to 3.35, P\\<0.001); 60-69 years, 4.99 (3.99 to 6.25, P\\<0.001); 70-79 years, 8.51 (6.85 to 10.57, P\\<0.001); \u226580 years, 11.09 (8.93 to 13.77, P\\<0.001). Female sex was associated with lower mortality (0.81, 0.75 to 0.86, P\\<0.001). Chronic cardiac disease, chronic non-asthmatic pulmonary disease, chronic kidney disease, obesity, chronic neurological disorder (such as stroke), dementia, malignancy, and liver disease were also associated with increased hospital mortality. An interactive infographic is available at . This information must not be used as a predictive tool in practice or to inform individual treatment decisions.\n\n# Discussion\n\n## Principal findings\n\nPatients with covid-19 usually presented with fever, cough, and shortness of breath, and met the WHO case definitions for severe acute respiratory infection or severe acute respiratory syndrome. The most common previous major comorbidities were chronic cardiac disease, diabetes, and chronic non-asthmatic pulmonary disease. Seventeen per cent of patients were admitted to critical care (high dependency unit or intensive care unit). Mortality in hospital was at least 26%, with 34% of the cohort still in hospital at the time of analysis; these proportions increased with escalating level of care. 
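As a consistency check on estimates like those reported above, the standard error and Wald statistic behind a hazard ratio can be recovered from its 95% confidence interval on the log scale. A short sketch (Python), applied to the age 50-59 estimate:

```python
import math

def wald_from_ci(hr: float, lower: float, upper: float):
    """Recover SE(log HR), z and a two-sided p-value from a hazard
    ratio and its 95% confidence interval (normal approximation)."""
    se = (math.log(upper) - math.log(lower)) / (2 * 1.96)
    z = math.log(hr) / se
    p = math.erfc(abs(z) / math.sqrt(2))  # two-sided normal tail
    return se, z, p

# Age 50-59 versus <50: HR 2.63 (95% CI 2.06 to 3.35), reported P<0.001
se, z, p = wald_from_ci(2.63, 2.06, 3.35)
print(f"SE(log HR) = {se:.3f}, z = {z:.1f}, p = {p:.1e}")  # p << 0.001
```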
Factors associated with mortality in hospital were increasing age, male sex, and major comorbidities (cardiac disease, non-asthmatic pulmonary disease, kidney disease, liver disease, malignancy, obesity, and dementia).

The data presented in this study describe patients admitted to hospital during the growth phase of the SARS-CoV-2 pandemic in the UK. The first 101 patients were enrolled in the early phase of the outbreak as part of a high consequence infectious disease containment strategy that ended on 10 March 2020. These patients and others who were identified through screening in hospital, or who contracted covid-19 after admission (hospital acquired infection), are included in the 855 patients who were admitted without covid-19 symptoms. The impact these patients have had on the overall cohort characteristics has diminished as numbers have increased, and we believe it is important to keep these patients in the study. Other patients in our cohort without covid-19 symptoms are those diagnosed with the disease at the discretion of the clinician looking after them while in hospital for other reasons.

The pattern of disease we describe broadly reflects the pattern reported globally.7 Patients in our study had a higher median age and higher rates of chronic obstructive pulmonary disease and asthma than patients in China8 and the US.11 12 The prevalence of obesity in our study (11%) was considerably lower than the overall UK prevalence (29%).13 This proportion could reflect the relatively elderly male population admitted to hospital and misclassification or under-reporting by admitting physicians. Our patients presented with a relatively short time interval between onset of symptoms and admission to hospital, which might also be a function of the older and vulnerable patient population.

The current case definition of cough and fever, if strictly applied, would miss 7% of our inpatients. A smaller proportion, 4% of patients, presented with enteric symptoms only. This figure could be an underestimate because these patients fall outside standard criteria for testing. This enteric presentation risks misclassification of patients, and assignment to non-covid-19 care areas, which could pose a nosocomial transmission risk. Severe SARS-CoV-2 infections are rare in people younger than 18 years, comprising only 1.5% of those admitted to hospital. Only 1.0% of those in our study were younger than 5 years. The J shaped age distribution is starkly different from the U shaped age distribution seen in seasonal influenza and the W shaped distribution observed in the 2009 influenza pandemic.14 The reason why SARS-CoV-2 has mostly spared children is not clear, but we speculate this could be because angiotensin converting enzyme 2 receptors are expressed differently in younger lungs.

An association between obesity, as recognised by clinical staff, and mortality in hospital after adjustment for other comorbidities, age, and sex has not been widely reported in other studies.
Obesity was recognised as a risk factor in the 2009 influenza A H1N1 pandemic, but not for the 2012 Middle East respiratory syndrome coronavirus.15 16

The proportion of pregnant women in our cohort was small (10%), similar to the estimated proportion of pregnant women in the community.15 Pregnancy was not associated with mortality, in apparent contrast to influenza.17

## Comparison with other studies

The proportion of patients admitted to critical care in our study was similar to that reported in Italy (17%)18 19 and New York (14.2%),11 12 but higher than that reported in China.8 At the time of enrolment, the Intensive Care Society had issued guidance to its members that there would be no rationing of critical care admission until all capacity in the country had been exhausted. As far as we are aware, critical care capacity was not exceeded in the UK during the period of the study. We do not believe that any equipment shortages existed during this period that might have prompted more aggressive futility discussions.

Mortality was high among patients in our cohort who received general ward care and were not admitted to critical care, which suggests that advance care planning occurred. We were unable to capture treatment limiting decisions about level of care. The high median age of patients who died in the cohort (80 years) could partly explain the high mortality rate. Mortality rates were extremely high for patients who received invasive mechanical ventilation in the intensive care unit compared with the 2009 influenza A H1N1 pandemic, for which mortality in intensive care was 31%.15 Our data were in line with the initial ICNARC (Intensive Care National Audit and Research Centre) audit reports, which represent intensive care units in England, Wales, Scotland, and Northern Ireland.20

Outcome analyses only included patients who were admitted before 19 April to allow most patients to complete their hospital admission. However, an inherent reporting bias exists because the sickest of patients, particularly those admitted to intensive care, have the longest hospital stays; mortality rates in hospital could therefore increase. These mortality rates were considerably higher than the 24% mortality rate in hospital seen in patients in intensive care units in Italy19 and the US.11 12 The lower rate in the US could in part be explained by differences in healthcare systems and the proportion of intensive care unit beds to hospital beds between the two countries. In Italy, a lower proportion of patients received mechanical ventilation, and most of their patients (72%) remained in hospital at the time of the analysis.19

The finding of independent associations of advancing age, male sex, chronic respiratory (non-asthmatic) disease, chronic cardiac disease, and chronic neurological disease with mortality in hospital is in line with early international reports.9 10 However, although age adjusted mortality rates are high in elderly patients, most of these patients were admitted to hospital with symptoms of covid-19 and would not have been in hospital otherwise. Greater severity in male patients was seen across all ages.

## Strengths and limitations of study

ISARIC WHO CCP-UK had stood ready for eight years to conduct large scale studies of pandemic outbreaks, enabling us to enrol 34% of all patients with covid-19 admitted to 208 acute care hospitals across England, Wales, and Scotland in the early phase of the pandemic.

Our study has some limitations.
We do not currently have data on the inpatients who were not enrolled, or on people managed in community settings, such as usual domestic residences and older people's care homes. We are unable to comment on community risk factors that drive hospital admission except by inference from expected representation at admission. We will be linking to routine administrative healthcare datasets, which will enable us to assess the presence of any selection bias.

A large amount of data were missing, and we suggest there are two main reasons for this. Firstly, enrolment occurred in the nonlinear growth phase of the outbreak, and outcomes for recent admissions have not been reported yet; these admissions account for 18% of the total number of patients enrolled. Secondly, the research network was dealing with unprecedented numbers of patients at a time when many staff were seconded to clinical practice or were themselves off sick. This study is ongoing, and further data are being added to case report forms.

We suggest it is possible that the sickest patients were enrolled in our study, and this could partly explain our high mortality rates in hospital. Some of the sickest patients in the study had the longest lengths of hospital stay, and we do not have outcome data for all of these patients yet.

## Conclusions and policy implications

This large and rapidly conducted study of patients admitted to hospital in England, Wales, and Scotland with covid-19 shows the importance of putting plans in place for the study of epidemic and pandemic threats, and the need to maintain these plans. Our study identifies sectors of the population that are at greatest risk of a poor outcome, and reports the use of healthcare resources. Most patients with covid-19 experience mild disease. However, in our cohort, of those who were admitted to hospital two weeks before data extraction, less than half had been discharged alive and a quarter had died. The remainder continued to receive care at the date of reporting. Seventeen per cent of patients admitted to hospital required critical care. Factors associated with mortality in hospital were increasing age, male sex, obesity, and major comorbidities.

ISARIC Coronavirus Clinical Characterisation Consortium21 investigators have submitted regular reports to the UK Government's New and Emerging Respiratory Virus Threats Advisory Group (NERVTAG)22 and the Scientific Advisory Group for Emergencies (SAGE).23 Patient level data have been shared and independently analysed by the Scientific Pandemic Influenza Group on Modelling (SPI-M)24 and other investigators. Aggregated data have been shared with WHO in the ISARIC covid-19 report.

Studies such as this cannot be developed, approved, and opened from the start of a pandemic in time to inform case management and public health policy. Our study has shown the importance of forward planning and investment in preparedness studies. Over the next few months we will issue reports in *The BMJ* on specific topics and analyses that are key to understanding the impact of covid-19, with a focus on improving patient outcomes.

### What is already known on this topic

1. Observational studies in China have reported risk factors associated with severe covid-19 that requires hospital admission

2. Studies describing the features and outcomes of patients with severe covid-19 who have been admitted to hospital in Europe are lacking

3. Older male adults and people with diabetes, hypertension, cardiovascular disease, or chronic respiratory disease are at greater risk of severe covid-19 that requires hospital admission and higher levels of care, and are at higher risk of death
### What this study adds

1. This rapid prospective investigation of patients with covid-19 admitted to hospital in England, Wales, and Scotland showed that obesity, chronic kidney disease, and liver disease were also associated with increased hospital mortality

2. Obesity is a major additional risk factor that was not highlighted in data from China

3. Severe covid-19 leads to a prolonged hospital stay and a high mortality rate; over a quarter of inpatients in this study had died at the time of reporting, and nearly a third remained in hospital

The study protocol is available at ; study registry . This work uses data provided by patients and collected by the NHS as part of their care and support #DataSavesLives. We are extremely grateful to the 2648 frontline NHS clinical and research staff and volunteer medical students who collected these data in challenging circumstances, and to the patients and their families for their generosity and individual contributions in these difficult times. We also acknowledge the support of Jeremy J Farrar, Nahoko Shindo, Devika Dixit, Nipunie Rajapakse, Piero Olliaro, Lyndsey Castle, Martha Buckley, Debbie Malden, Katherine Newell, Kwame O'Neill, Emmanuelle Denis, Claire Petersen, Scott Mullaney, Sue MacFarlane, Chris Jones, Nicole Maziere, Katie Bullock, Emily Cass, William Reynolds, Milton Ashworth, Ben Catterall, Louise Cooper, Terry Foster, Paul Matthew Ridley, Anthony Evans, Catherine Hartley, Chris Dunn, D Sales, Diane Latawiec, Erwan Trochu, Eve Wilcock, Innocent Gerald Asiimwe, Isabel Garcia-Dorival, J Eunice Zhang, Jack Pilgrim, Jane A Armstrong, Jordan J Clark, Jordan Thomas, Katharine King, Katie Neville, Alexandra Ahmed, Krishanthi S Subramaniam, Lauren Lett, Laurence McEvoy, Libby van Tonder, Lucia Alicia Livoti, Nahida S Miah, Rebecca K Shears, Rebecca Louise Jensen, Rebekah Penrice-Randal, Robyn Kiy, Samantha Leanne Barlow, Shadia Khandaker, Soeren Metelmann, Tessa Prince, Trevor R Jones, Benjamin Brennan, Agnieska Szemiel, Siddharth Bakshi, Daniella Lefteri, Maria Mancini, Julien Martinez, Angela Elliott, Joyce Mitchell, John McLauchlan, Aislynn Taggart, Oslem Dincarslan, Annette Lake, Claire Petersen, Scott Mullaney, and Graham Cooke.

ISARIC Coronavirus Clinical Characterisation Consortium (ISARIC4C): Consortium lead investigator: J Kenneth Baillie; chief investigator: Malcolm G Semple; co-lead investigator: Peter JM Openshaw; ISARIC clinical coordinator: Gail Carson; co-investigators: Beatrice Alex, Benjamin Bach, Wendy S Barclay, Debby Bogaert, Meera Chand, Graham S Cooke, Annemarie B Docherty, Jake Dunning, Ana da Silva Filipe, Tom Fletcher, Christopher A Green, Julian A Hiscox, Antonia Ying Wai Ho, Peter W Horby, Samreen Ijaz, Saye Khoo, Paul Klenerman, Andrew Law, Wei Shen Lim, Alexander J Mentzer, Laura Merson, Alison M Meynert, Mahdad Noursadeghi, Shona C Moore, Massimo Palmarini, William A Paxton, Georgios Pollakis, Nicholas Price, Andrew Rambaut, David L Robertson, Clark D Russell, Vanessa Sancho-Shimizu, Janet T Scott, Tom Solomon, Shiranee Sriskandan, David Stuart, Charlotte Summers, Richard S Tedder, Emma C Thomson, Ryan S Thwaites, Lance CW Turtle, Maria Zambon; project managers: Hayley E Hardwick, Chloe Donohue, Jane Ewins, Wilna Oosthuyzen, Fiona Griffiths; data
analysts: Lisa Norman, Riinu Pius, Tom M Drake, Cameron J Fairfield, Stephen Knight, Kenneth A Mclean, Derek Murphy, Catherine A Shaw; data and information system managers: Jo Dalton, Michelle Girvan, Egle Saviciute, Stephanie Roberts, Janet Harrison, Laura Marsh, Marie Connor; data integration and presentation: Gary Leeming, Andrew Law, Ross Hendry; material management: William Greenhalf, Victoria Shaw, Sarah McDonald; local principal investigators: Kayode Adeniji, Daniel Agranoff, Ken Agwuh, Dhiraj Ail, Ana Alegria, Brian Angus, Abdul Ashish, Dougal Atkinson, Shahedal Bari, Gavin Barlow, Stella Barnass, Nicholas Barrett, Christopher Bassford, David Baxter, Michael Beadsworth, Jolanta Bernatoniene, John Berridge, Nicola Best, Pieter Bothma, David Brealey, Robin Brittain-Long, Naomi Bulteel, Tom Burden, Andrew Burtenshaw, Vikki Caruth, David Chadwick, Duncan Chambler, Nigel Chee, Jenny Child, Srikanth Chukkambotla, Tom Clark, Paul Collini, Graham Cooke, Catherine Cosgrove, Jason Cupitt, Maria-Teresa Cutino-Moguel, Paul Dark, Chris Dawson, Samir Dervisevic, Phil Donnison, Sam Douthwaite, Ingrid DuRand, Ahilanadan Dushianthan, Tristan Dyer, Cariad Evans, Chi Eziefula, Chrisopher Fegan, Adam Finn, Duncan Fullerton, Sanjeev Garg, Sanjeev Garg, Atul Garg, Effrossyni Gkrania-Klotsas, Jo Godden, Arthur Goldsmith, Clive Graham, Elaine Hardy, Stuart Hartshorn, Daniel Harvey, Peter Havalda, Daniel B Hawcutt, Antonia Ho, Maria Hobrok, Luke Hodgson, Anita Holme, Anil Hormis, Michael Jacobs, Susan Jain, Paul Jennings, Agilan Kaliappan, Vidya Kasipandian, Stephen Kegg, Michael Kelsey, Jason Kendall, Caroline Kerrison, Ian Kerslake, Oliver Koch, Gouri Koduri, George Koshy, Shondipon Laha, Susan Larkin, Tamas Leiner, Patrick Lillie, James Limb, Vanessa Linnett, Jeff Little, Michael MacMahon, Emily MacNaughton, Ravish Mankregod, Huw Masson, Elijah Matovu, Katherine McCullough, Ruth McEwen, Manjula Meda, Gary Mills, Jane Minton, Mariyam Mirfenderesky, Kavya Mohandas, James Moon, Elinoor Moore, Patrick Morgan, Craig Morris, Katherine Mortimore, Samuel Moses, Mbiye Mpenge, Rohinton Mulla, Michael Murphy, Megan Nagel, Thapas Nagarajan, Mark Nelson, Igor Otahal, Mark Pais, Selva Panchatsharam, Hassan Paraiso, Brij Patel, Justin Pepperell, Mark Peters, Mandeep Phull, Stefania Pintus, Jagtur Singh Pooni, Frank Post, David Price, Rachel Prout, Nikolas Rae, Henrik Reschreiter, Tim Reynolds, Neil Richardson, Mark Roberts, Devender Roberts, Alistair Rose, Guy Rousseau, Brendan Ryan, Taranprit Saluja, Aarti Shah, Prad Shanmuga, Anil Sharma, Anna Shawcross, Jeremy Sizer, Richard Smith, Catherine Snelson, Nick Spittle, Nikki Staines, Tom Stambach, Richard Stewart, Pradeep Subudhi, Tamas Szakmany, Kate Tatham, Jo Thomas, Chris Thompson, Robert Thompson, Ascanio Tridente, Darell Tupper-Carey, Mary Twagira, Andrew Ustianowski, Nick Vallotton, Lisa Vincent-Smith, Shico Visuvanathan, Alan Vuylsteke, Sam Waddy, Rachel Wake, Andrew Walden, Tony Whitehouse, Paul Whittaker, Ashley Whittington, Meme Wijesinghe, Martin Williams, Lawrence Wilson, Sarah Wilson, Stephen Winchester, Martin Wiselka, Adam Wolverson, Daniel G Wooton, Andrew Workman, Bryan Yates, Peter Young.\n\nExtra material supplied by authors\n\nWeb appendix: Supplementary material 1\u2014online supplement\n\nWeb appendix: Supplementary material 2\u2014protocol\n\nWeb appendix: Supplementary material 3\u2014case report 
form","meta":{"dup_signals":{"dup_doc_count":166,"dup_dump_count":31,"dup_details":{"curated_sources":2,"2023-50":9,"2023-40":3,"2023-23":4,"2023-14":4,"2023-06":9,"2022-49":7,"2022-40":4,"2022-33":11,"2022-27":11,"2022-21":10,"2022-05":8,"2021-49":10,"2021-43":5,"2021-39":7,"2021-31":5,"2021-25":4,"2021-21":9,"2021-17":7,"2021-10":4,"2021-04":1,"2020-50":2,"2020-45":4,"2020-40":1,"2020-34":3,"2020-29":3,"2020-24":2,"2024-30":3,"2024-26":3,"2024-22":4,"2024-18":4,"2024-10":3}},"file":"PMC7243036"},"subset":"pubmed_central"} {"text":"abstract: 1. IVIG, combined with moderate-dose of corticosteroids, might improve patient outcomes.\n .\n 2. The use of corticosteroids might accelerate recovery from COVID-19.\n .\n 3. No controlled clinical trials exist on the use of corticosteroids for COVID-19.\n .\n 4. IL6 correlates with severity, criticality, viral load, and prognosis of COVID-19.\n .\n 5. Tocilizumab, an anti-IL6, can confer benefit in patients with COVID-19 and high IL6.\nauthor: Amene Saghazadeh; Nima Rezaei\u204eCorresponding author at: Children's Medical Center Hospital, Dr. Qarib St, Keshavarz Blvd, Tehran 14194, Iran.\ndate: 2020-05-08\nreferences:\ntitle: Towards treatment planning of COVID-19: Rationale and hypothesis for the use of multiple immunosuppressive agents: Anti-antibodies, immunoglobulins, and corticosteroids\n\n# Introduction\n\nSeveral generations have been exposed to COVID-19 under different conditions of life at varying locations around the world. As long as the COVID-19 continues to spread, its power of genome modification would probably be increased. What concerns us more is that the 2019-nCoV, by the process of modification of genome structure, might become more and more fitted to humans to profoundly affect those who have already escaped \u2013 children and young people without a pre-existing condition. No one can tell how much time it takes to reach a new level of perfection, and that the development of vaccines for active immunization is a long process.\n\nMoreover, most of the best available anti-viral agents are not helpful in the treatment of COVID-19. A randomized controlled trial of 199 hospitalized adults with confirmed COVID-19 demonstrated no actual difference between patients who received lopinavir-ritonavir and patients who received standard care alone in clinical improvement, death rate, and positive virus test rate at day 28 \\[1\\]. Here we review intermediate indicators of the pathogenesis of COVID-19 that have the potential of being considered for the treatment of COVID-19.\n\n# SARS-CoV2 cell entry is not complex\n\nFig. 1<\/a> is a schematic illustration of the minimum proteins required to mediate the SARS-CoV2 cell entry. The trimeric, transmembrane spike (S) glycoprotein of the virus, SARS-2-S, includes two main functional subunits: S1 and S2. The former, in turn, consists of the four core domains, S1~A~, S1~B~, S1~C~, and S1~D~, that contribute to the attachment to the surface receptor of target cells. Then, the latter coordinates the fusion of viral and cellular membranes. Receptor binding activates proteases that can carry out proteolytic cleavage of the S protein. In coronaviruses, cleavage occurs at two sites: the S1\/S2 junction and at the S2\u2032, a region close to the viral fusion peptide \\[2\\]. 
Proteolytic cleavage of the S protein causes conformational changes that are irreversible and profound enough to prime the S2 subunit for the fusion of viral and cellular membranes.

# Certain characteristics of SARS-CoV2

## SARS-CoV2 has acquired an S glycoprotein that has undergone extensive genetic variation and glycosylation

### Polybasic cleavage site

As evidenced by sequence analysis, there is a residue insertion formed of four amino acids (12 nucleotides) at the boundary between the S1 and S2 subunits of the SARS-CoV2 S. It defines a polybasic furin cleavage site, RRAR, in the human SARS-CoV2 that was absent in human SARS-CoV, bat SARS-like CoVs, and pangolin SARS-like CoV, though it might be present in other species [3]. After mutation of the residue insertion and furin cleavage site, S1/S2 cleavage of the SARS-CoV2 S no longer took place. However, SARS-CoV2 S-mediated entry into VeroE6 cells increased and remained high in BHK cells expressing human ACE2. Therefore, it seems that SARS-CoV2 transmissibility does not depend on the S1/S2 cleavage.

A polybasic cleavage site can explain why a virus is highly pathogenic for humans while being low-pathogenic for other species. For example, using reverse genetic tools, an avian paramyxovirus type 7 (APMV-7) was developed by mutating the fusion (F) protein cleavage site [4]. The constructed APMV-7 showed furin cleavage and increased replication and syncytium formation in cell cultures. However, chickens exposed to the virus did not become infected.

### Glycosylation

Glycosylation and its related products, i.e., glycans, introduce changes to the viral envelope that fit the virus for interaction with the host cell membrane [5]. Generally, glycans are oligosaccharides that form a dense decoration on the spike glycoprotein. In particular, these oligosaccharides have been shown to influence the folding and proteolytic processing of the S protein, thereby facilitating viral cell entry. Moreover, a virus with a glycosylated glycoprotein gains an extra means of escape from immune responses. Glycans are therefore a good target for vaccine design.

The two main types of glycans are N-linked and O-linked glycans. Both can be released from glycoproteins for analysis: N-glycans are released enzymatically, whereas O-glycans are released by chemical methods. N-glycans are linked to asparagine (Asn) residues within the sequon Asn-X-Ser/Thr (where X is any amino acid except proline) via an N-glycosidic bond, mostly through N-acetylglucosamine. O-glycans are attached to serine (Ser) and threonine (Thr) residues by the addition of N-acetylgalactosamine (GalNAc). For example, N-glycans exist in Hendra virus, SARS-CoV, influenza virus, hepatitis virus, HIV-1, and West Nile virus [6], and O-glycans occur in the Ebola virus.

The 2019-nCoV S protein includes 13 and 9 N-linked glycosylation sequons in the S1 and S2 subunits, respectively [3]. All of these were previously present in the SARS-CoV S glycoprotein, except for four N-linked glycosylation sequons in the S1.
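Since the argument above leans on counting glycosylation "sequons", it may help to make the pattern concrete: an N-linked sequon is simply the motif Asn-X-Ser/Thr with X not proline. A minimal, hypothetical Python sketch (toy sequence, not the actual spike protein):

```python
import re

# N-linked glycosylation sequon: Asn, then any residue except Pro,
# then Ser or Thr (N-X-S/T, X != P). The lookahead allows overlapping matches.
SEQON = re.compile(r"(?=(N[^P][ST]))")

def find_sequons(seq: str):
    """Return (1-based position, motif) for each candidate sequon."""
    return [(m.start() + 1, m.group(1)) for m in SEQON.finditer(seq)]

print(find_sequons("MKNETLPNGSAPNRT"))  # [(3, 'NET'), (8, 'NGS'), (13, 'NRT')]
```

Tallying such matches within the S1 and S2 regions of a spike sequence is essentially how counts like the 13 and 9 above are obtained, though real annotation pipelines apply further filters (e.g., structural accessibility of the site).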
Also, owing to the proline in the polybasic cleavage site, which makes the inserted sequence PRRA, three O-linked glycans are introduced at the 2019-nCoV residues S673, T678, and S686 [7].

### SARS-CoV2 receptor binding domain contains six amino acids providing favorable positions for binding to human ACE2

Compared with SARS-CoV S~B~, the human ACE2-binding motif of the SARS-CoV2 S~B~ showed a higher binding affinity for human ACE2, as indicated by a smaller equilibrium dissociation constant (2.9 nM vs 7.7 nM); since K~D~ is the ratio of free receptor and ligand to bound complex at equilibrium, a smaller K~D~ means tighter binding [3]. Structural analysis has demonstrated fourteen critical positions in the receptor-binding domain of the SARS-CoV S~B~: eight conserved positions (T402, Y436, Y440, N473, Y475, T486, G488, and Y491) and six semi-conserved substitutions (R426 to N448, Y442 to L464, L472 to F495, N479 to Q502, Y484 to Q507, and T487 to N510) with respect to the SARS-CoV2 S~B~.

### SARS-CoV2 receptor binding domain contains cyclic regions that can interact with cell-surface GRP78

Pep42 is a cyclic oligopeptide that, with its hydrophobic character, can selectively interact with cell surface glucose-regulated protein 78 (GRP78), a member of the 70 kDa heat shock protein family. GRP78, also known as BiP or HSPA5, can be translocated from the endoplasmic reticulum to the cell membrane under endoplasmic reticulum stress and helps to maintain cellular integrity under physiologic and pathological stress. It critically contributes to various functions ranging from protein folding, transportation, and degradation to cell-signaling, proliferation, survival, apoptosis, inflammation, and immunity. The expression of GRP78 decreases with age.

On the 2019-nCoV spike protein, 13 disulfide bonds correspond to 13 cyclic regions thought to be similar to the cyclic form of Pep42 [8]. Among these, four regions (I-IV) lie on the outer surface of a putative receptor-binding domain (RBD) on the viral spike, anchored at C361, C379, C391, and C480. These regions share sequence similarity with Pep42, ranging from 15.38% to 46.15%. However, only one of them, region IV (GRAVY = 0.08), is a hydrophobic region, like Pep42 (GRAVY = 1.1). (GRAVY, the grand average of hydropathy, is the mean Kyte-Doolittle hydropathy value of a sequence's residues; positive values indicate hydrophobicity.) Structural models estimate the energy contribution of region IV, as a part of region III, to GRP78 binding to be about −9.8 of −14.0 kcal/mol, and the docking platform proposes region IV as the best region for binding to GRP78. Region IV can be linked to the substrate-binding domain β (SBDB) of GRP78 through five H-bonds (via P479, N481, E484, and N487) and four hydrophobic interactions (via T478, E484, and F486).

# Passive immunization

There are two main ways to induce protection against infections: active and passive immunization. Active immunization occurs when the body's own immune system produces antibodies following exposure to a viral antigen. Passive immunization, by contrast, follows the transfer of pre-formed antibodies that act directly to neutralize viral infectivity.

## Serum therapy

### Convalescent plasma from patients recovered from COVID-19

#### Hypothesis: Plasma from patients recovered from COVID-19 contains antibodies against 2019-nCoV

A look at the history of viral outbreaks offers convalescent plasma as a remedy of last resort to avoid further fatalities.
The most recent examples include the influenza A H1N1 pandemic (H1N1pdm09) in 2009, the Western African Ebola virus epidemic in 2014, and the outbreak of MERS-CoV in 2015 [9]. Meta-analysis studies have shown a reduced risk of mortality in patients with SARS-CoV and influenza receiving convalescent plasma [10].

#### Rationale: Convalescent plasma from patients recovered from COVID-19 can treat patients with severe COVID-19

A pilot study [11] recently investigated the safety and effect of convalescent plasma with antibody titers higher than 1:640, combined with regular anti-viral agents and standard supportive care, on clinical outcomes of ten patients with severe COVID-19. The study showed clinical improvement of all ten patients, accompanied by an increase in lymphocyte count and a decrease in CRP. Following transfusion, all seven patients who were SARS-CoV2 RNA positive before transfusion of convalescent plasma turned SARS-CoV2 negative. There was no control group receiving convalescent plasma alone or standard therapy without convalescent plasma, so the main effect of convalescent plasma could not be evaluated.

### Serum from bats with SARS-like CoV

#### Hypothesis: Whole-genome sequencing and phylogenetic analysis shed light on the origin of the 2019-nCoV virus: it was probably introduced from bats to humans

Genome sequencing of a fecal bat sample, Rp3, detected a coronavirus isolate almost identical to the causative agent of the SARS outbreak of 2002-2003. Hence, it attained the name SARS-like coronavirus isolate Rp3 (SL-CoV Rp3) [12]. The 2019-nCoV was identified in the bronchoalveolar lavage fluid (BALF) of a patient with COVID-19. RNA sequencing revealed about 90% nucleotide similarity between the novel coronavirus and SARS-like coronaviruses previously related to bats [13], [14]. In particular, the S protein of the 2019-nCoV has a high sequence identity of 80-98% with the S protein of bat SARS-like CoVs, such as SARSr-CoV ZXC21 S, ZC45 S, and RaTG13 [3]. Moreover, in phylogenomic trees, the branches for 2019-nCoV are longer than those for the 2003 SARS-CoV and lie closer to the bat viruses.

#### Rationale: Bat serum is not able to efficiently neutralize SARS-CoV

A range of bats belonging to the genus *Rhinolophus* (horseshoe bats, family *Rhinolophidae*) produce antibodies against SARS-CoV [12]. Polymerase chain reaction (PCR) can confirm the presence of SARS-CoV nucleocapsid (N) and polymerase (P) gene sequences in fecal samples of individual bats that are seropositive for SARS-CoV [12]. There is a significant degree of resemblance, greater than 90%, in the nucleotide sequence of the viral genomes between SL-CoV Rp3 [12] and the Tor2 strain of SARS-CoV, which was isolated in Toronto [15]. The differences in the genome sequences of SARS-CoV in the two species occur mainly in the S gene, which encodes the S1 domain of the coronavirus spike protein and contains regions with high mutation rates [12]. Coronaviruses commonly possess five open reading frames (ORFs) encoding the replicase polyprotein (P), the spike (S), envelope (E), and membrane (M) glycoproteins, and the nucleocapsid (N) protein. The human SARS-CoV Tor2 and bat SL-CoV Rp3 strains remain more than 90% identical at the proteins P, E, M, and N.
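The identity percentages quoted here are, at bottom, the fraction of matching positions between aligned sequences. A minimal sketch with toy strings (hypothetical helper, not the actual viral proteins; alignment and gap handling are deliberately omitted):

```python
def percent_identity(a: str, b: str) -> float:
    """Percent identity of two pre-aligned, equal-length sequences."""
    if len(a) != len(b):
        raise ValueError("sequences must be aligned to equal length")
    matches = sum(x == y for x, y in zip(a, b))  # count matching positions
    return 100.0 * matches / len(a)

print(percent_identity("MFVFLVLLPL", "MFVFLVLLPV"))  # 90.0
```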
The protein S consists of two main domains: 1) the S1 domain mediates receptor binding, and 2) the S2 domain mediates the fusion of viral and host-cell membranes. In particular, the human SARS-CoV Tor2 strain shows a noticeable degree of difference in the S1 domain from the bat SL-CoV Rp3 strain. This diversity would suffice to produce functional differences between the species, and is an apparent reason why bat sera with high levels of cross-reactive antibodies did not act efficiently to neutralize SARS-CoV.

### Serum from convalescent SARS patients

#### Hypothesis: SARS-CoV and SARS-CoV2 are highly similar in structure and in the cell entry receptor and protease

SARS-CoV and SARS-CoV2 share the same cleavage junctions, almost the same sequence (96%) of their main protease, a high degree (76%) of similarity in the amino acid sequence of their S proteins, a similar S2′ cleavage site, a similar spectrum of cells they can enter, and most of the residues essential for binding ACE2 [16], [17], [18]. Also, both of them utilize the same domain, S1~B~, to interact with the ACE2 receptor. However, they differ in proteolytic processing to some degree. A study [16] of the human embryonic kidney (HEK) cell line 293T showed that a signal for the S2 subunit is present in cells inoculated with SARS-2-S, but not in cells inoculated with SARS-S.

Two main proteases for both SARS-S and SARS-2-S are the endosomal cysteine proteases cathepsin B and L (CatB/L) and the transmembrane serine protease 2 (TMPRSS2) [16]. In 293T cells lacking TMPRSS2, blocking CatB/L activity by raising the endosomal pH with ammonium chloride could significantly limit the entry of both SARS-S and SARS-2-S. In TMPRSS2-positive Caco-2 cells, the effect of ammonium chloride was smaller. A combination of camostat mesylate, a blocker of TMPRSS2, and E-64d, an inhibitor of CatB/L, yielded complete inhibition of SARS-2-S entry in TMPRSS2-positive Caco-2 cells. In both the human lung cancer cell line Calu-3 and primary human lung cells, camostat mesylate reduced the entry of both SARS-S and SARS-2-S, indicating that both partially require TMPRSS2 for lung infection.

#### Rationale: Serum from convalescent SARS patients is able to neutralize SARS-CoV2 efficiently

Antiserum that contains antibodies against human ACE2 could hinder the entry of both SARS-S and SARS-2-S pseudotypes while not affecting the entry of VSV-G and MERS-S pseudotypes. This supports the notion that SARS-S and SARS-2-S utilize the same primary entry receptor, ACE2, which differs from the primary receptors that VSV-G and MERS-S engage for cell entry (LDLR and DPP4, respectively).

Sera from three convalescent SARS patients reduced SARS-S entry and, to a lesser degree, SARS-2-S entry; the effect of patient serum was dose dependent [16].

### Serum from rabbits immunized with SARS

#### Hypothesis: Serum from rabbits immunized with SARS is more effective than serum from convalescent SARS patients

Paraoxonases (PON) are mammalian enzymes associated with anti-oxidant and anti-inflammatory effects [19].
Rabbit PON is more active and more stable than human PON under comparable conditions [20].

#### Rationale: Serum from rabbits immunized with the S1 subunit of SARS-S is able to neutralize SARS-CoV2 very efficiently

Sera from rabbits immunized with the S1 subunit of SARS-S could effectively reduce the entry of both SARS-S and SARS-2-S [16]. Compared with patient serum, rabbit serum showed higher efficiency in inhibiting SARS-2-S entry at the same concentration.

## Intravenous immunoglobulins (IVIG)

### Hypothesis: IVIG contains a large pool of human antibodies

IVIG is an immunomodulatory treatment currently useful for a variety of human diseases that share an idiopathic origin, ranging from autoimmune disorders to primary antibody deficiencies. IVIG has also shown promising results in cases of severe infections (such as sepsis, parvovirus B19 infection, West Nile virus encephalitis, HIV, Clostridium difficile, Mycobacterium avium, Mycobacterium tuberculosis, and Nocardia infections) and recurrent infections in primary antibody deficiencies [21].

Most patients develop antibodies against the NP and RBD of 2019-nCoV during the second week after infection onset [22]. Analysis of serum samples collected 14 or more days after symptom onset revealed IgG and IgM antibodies against NP in 94% and 88% of patients, and against RBD in 100% and 94%, among patients with COVID-19. Studies consistently show that increased immunoglobulin levels accompany the transition from the early to the late course of COVID-19. This raises the possibility that IVIG therapy might help to accelerate recovery from COVID-19.

### Rationale: IVIG might help to improve the outcome of patients with COVID-19

The study [23] included ten patients with COVID-19 who demonstrated worsening symptoms, e.g., decreased lymphocyte count and decreased PaO2/FIO2 ratio and oxygen saturation, following treatment with a short-term, moderate-dose corticosteroid (methylprednisolone 80 mg/d) plus immunoglobulin (10 g/d). After switching to double doses (160 mg/d methylprednisolone plus 20 g/d immunoglobulin), all of the patients improved in clinical, laboratory, and paraclinical outcomes.

Passive immunization protects against disease, and so it should be administered as early as possible once the patient is diagnosed. Studies show that the viral RNA of 2019-nCoV reaches its peak during the first week and then gradually decreases, and that IgG and IgM begin to rise from the 10th day, so that most patients have anti-viral antibodies by the 14th day.

# Recombinant type I IFN

## Hypothesis: IFNs play a role in anti-viral immunity

Type I IFNs play a primary role in the inhibition of viral replication through the coordination of anti-viral immune responses and modulation of inflammatory responses [24].

## Rationale: Recombinant type I IFN can effectively inhibit SARS-CoV2 replication

Within 24-48 h after infection, SARS-CoV2 replication reaches titers that cause a cytopathic effect in Vero E6 cells, which express ACE2 and thus are susceptible to SARS-CoV2 infection [25]. This is similar to the viral replication kinetics of SARS-CoV [25]. Pretreatment with recombinant type I IFN could effectively inhibit SARS-CoV2 replication at both 24 and 48 h after infection [25]. For SARS-CoV, such an effect was present at 24 h but absent at 48 h after infection [25].
The difference between SARS-CoV2 and SARS-CoV in response to type I IFN treatment may be due to differences in viral proteins, including NSP3, ORF3b, and ORF6 [25].

# Development of a human monoclonal antibody (mAb)

## Targeting the S protein

### Hypothesis: A mAb against the binding domain of the virus can inhibit SARS-CoV2 infection

Recently, hybridoma supernatants containing antibody repertoires from immunized transgenic mice that express the human immunoglobulin heavy and light chains and rat-origin immunoglobulin constant regions were used to detect antibodies that can cross-neutralize SARS-S and SARS-2-S [26]. One chimeric 47D11 H2L2 antibody displayed such cross-neutralizing activity, decreased syncytia formation induced by SARS-S and SARS2-S, and could protect VeroE6 cells against SARS-S and SARS2-S pseudotyped virus [26]. This may lie in its similar affinities for the same domain of the S1 subunit, S1~B~, of both SARS-S and SARS-2-S.

### Rationale: The chimeric 47D11 H2L2 might not be as effective in human lung cells as it is *in vitro*

47D11 carried a higher affinity for interacting with the S2 subunit of SARS-S than with that of SARS-2-S. Importantly, for both SARS-S and SARS-2-S, binding of the 47D11 antibody to its target, the S1~B~ domain, does not block the binding of S1~B~ and S2 to the ACE2 receptor [26]. By contrast, neutralizing antibodies that specifically target SARS-S could compete with S1~B~ and S2 for binding to ACE2.

## Targeting pro-inflammatory cytokines

### Hypothesis: A mAb against IL6 can attenuate hyperinflammation

Tocilizumab, also known as atlizumab, is a humanized anti-human IL6 receptor antibody approved by the FDA for several severe inflammatory and autoimmune diseases, such as cytokine release syndrome, rheumatoid arthritis, giant cell arteritis, polyarticular juvenile idiopathic arthritis, and systemic juvenile idiopathic arthritis. It is safe and effective for both adults and children two years of age and older.

### Rationale: Tocilizumab can treat lung injury in patients with critical and severe COVID-19

In the study [27], 21 patients with COVID-19 whose condition was severe or critical received one or two doses of tocilizumab plus standard therapy. Patients who had a mean IL6 level of more than 100 pg/ml before tocilizumab treatment showed improvement in clinical symptoms and peripheral oxygen saturation, and normalization of lymphocyte proportion and CRP levels. Also, lung lesion opacity was absorbed in 90% of patients. Neither serious adverse effects nor deaths occurred with tocilizumab treatment.

There are ongoing clinical trials of tocilizumab treatment in patients with moderate and severe COVID-19. Currently, the use of tocilizumab is recommended for patients with COVID-19 who have warning signs of hyperinflammation, as measured by IL6, ferritin, platelet counts, inflammatory markers, and the H score [28].

# Corticosteroids

## Hypothesis: Corticosteroids can modulate inflammation

Corticosteroids are commonly used for modulation of a variety of inflammatory conditions. In addition to a daily regimen, they can be used in the form of pulse therapy to treat flares of autoimmune diseases. However, caution in the use of corticosteroids is needed because of the potentially serious side effects associated with corticosteroid drugs and the fact that corticosteroids generally suppress the immune system.
The latter means that corticosteroids, while modulating hyperinflammation, also inhibit immune responses that are vital for host defense against the virus [29].

## Rationale: Corticosteroids might help accelerate recovery from COVID-19

The study [30] investigated the effect of the corticosteroids ciclesonide, cortisone, prednisolone, dexamethasone, and fluticasone on the replication of MERS-CoV. Among these compounds, only ciclesonide was capable of inhibiting viral replication. Ciclesonide also induced a significant inhibition of viral replication of other human coronaviruses, such as HCoV-229E and SARS-CoV, and of another positive-strand RNA virus, rubella virus, while not affecting the viral replication of negative-strand RNA viruses, e.g., influenza and respiratory syncytial viruses. For MERS-CoV, nonstructural protein 15 (NSP15) appeared to be the target of ciclesonide; an amino acid substitution in NSP15 conferred resistance of the mutated MERS-CoV to ciclesonide. Mometasone could help deal effectively with the mutated MERS-CoV. For SARS-CoV2, ciclesonide, mometasone, and lopinavir all inhibited viral replication to a similar degree. Interestingly, their effect was more noticeable than that of serine protease inhibitors, e.g., nafamostat and camostat, in Vero cells that express TMPRSS2. This indicates a tendency of SARS-CoV2 to enter these cells through the cathepsin/endosomal pathway rather than through the TMPRSS2/cell-surface pathway.

The study [31] included 46 patients with severe COVID-19, of whom 26 received methylprednisolone (1-2 mg/kg/d for 5-7 days) and 20 received standard therapy without methylprednisolone. The first group achieved faster improvement in clinical symptoms (fever and peripheral oxygen saturation) and lung lesions detected by CT imaging. However, two deaths occurred in the first group and one death in the second group. Moreover, the two groups did not differ in laboratory parameters, including WBC, lymphocyte count, monocyte count, and cytokines (IL-2, IL-4, IL-6, and IL-10), six days after treatment.

There is a report of a patient with COVID-19 treated with methylprednisolone from day 8 of the disease course; however, his condition worsened, and he developed respiratory failure and died on day 14 [32].

# Eggs for increasing ACE2 and copper

Egg ovotransferrin contains an angiotensin-converting enzyme (ACE) inhibitory peptide, known as IRW, that has been shown to decrease blood pressure in hypertensive rats [33]. Through the up-regulation of ACE2, E-cadherin, ABCB-1, and IRF-8, IRW can decrease RAS activity, hyperplasia, and vascular inflammation and aid differential regulation of leukocytes. On the other hand, by the down-regulation of pro-inflammatory molecules, e.g., ICAM-1 and VCAM-1, IRW can reduce the recruitment of leukocytes and vascular inflammation.

Copper might help the immune system to combat viral infections. A high content of copper exists in eggs and eggshells of domestic birds. In particular, the eggshell of the pigeon (4 ± 0.29 μg/g) and the egg of the quail (4.67 ± 1.08 μg/g) contain high concentrations of copper [34].

# Conclusion

The novel coronavirus, SARS-CoV2, can cause a potentially fatal disease, COVID-19, in humans.
The infection of human cells by SARS-CoV2 includes two sequential steps: attachment of the virus to the surface receptor of target cells and the fusion of viral and host membranes. The former requires at least a receptor-binding domain on the SARS-CoV2 spike protein that can interact with a cell surface receptor, for example, ACE2, expressed on human cells. The latter requires at least the host protease(s) to mediate proteolytic cleavage of the SARS-CoV2 spike protein into S1 and S2 subunits and consequently promote the fusion of viral and host membranes. Also, SARS-CoV2 possesses a polybasic cleavage site that can explain its high pathogenicity; N-glycans and O-glycans that densely decorate the SARS-CoV2 S protein; and cyclic regions that can interact with cell-surface GRP78. Essential elements that mediate SARS-CoV2 cell entry and specific characteristics that allow SARS-CoV2 to escape the immune system are potential targets for COVID-19 therapy.

The lack of specific treatments for COVID-19 and the very time-consuming process of vaccine development lead us back to traditional notions of immunization using passive transfer of humoral immunity. Passive immunization can be done using plasma therapy and IVIG therapy. Plasma from patients recovered from COVID-19 that contains antibodies against SARS-CoV2 has shown promising results in patients with severe COVID-19. Also, SARS-CoV and SARS-CoV2 are highly similar in structure and in the cell entry receptor and protease. Studies show that serum from convalescent SARS patients and serum from rabbits immunized with SARS are both able to neutralize SARS-CoV2 efficiently. However, serum from bats carrying the SARS-like coronavirus SL-CoV Rp3 could not exert such an effect. This is due to a noticeable degree of difference in the S1 domain between the bat SL-CoV Rp3 strain and SARS-CoV2. A short-term, moderate dose of IVIG combined with a moderate dose of corticosteroids might improve patient outcomes. Studies show that the viral RNA of SARS-CoV2 reaches its peak during the first week and then gradually decreases, and that IgG and IgM begin to rise from the 10th day, so that most patients have anti-viral antibodies by the 14th day. Passive immunization protects against disease, and so it should be administered as early as possible once the patient is diagnosed.

Evidence links COVID-19 to variable degrees of inflammation. Corticosteroids offer a potent anti-inflammatory option. However, their actions are not precisely targeted, and they might suppress anti-viral immune responses as well. Studies show that the use of corticosteroids might accelerate recovery from COVID-19. There are no controlled clinical trials that show whether the use of corticosteroids can reduce COVID-19-related death. Moreover, the pro-inflammatory cytokine IL6 is the best-documented cytokine in COVID-19, correlating with severity, criticality, viral load, and prognosis of patients with COVID-19.
Tocilizumab, a monoclonal antibody against the IL6 receptor, could confer clinical benefit in patients with high IL6 levels.

# References

author: Gunther Eysenbach
date: 1999
references:
title: Welcome to the Journal of Medical Internet Research

Welcome to the "Journal of Medical Internet Research" - JMIR, the first international scientific peer-reviewed journal on all aspects of research, information and communication in healthcare using Internet and Intranet-related technologies.

Why does the world need the JMIR? The Internet - and more specifically, the World-Wide-Web - has an impact on many areas of medicine - broadly we can divide them into "clinical information and telemedicine", "medical education and information exchange" and "consumer health informatics":

- First, Internet protocols are used for clinical information and communication. In the future, Internet technology will be the platform for many telemedical applications.

- Second, the Internet revolutionizes the gathering, access and dissemination of non-clinical information in medicine: Bibliographic and factual databases are now accessible world-wide via graphical user interfaces, epidemiological and public health information can be gathered using the Internet, and increasingly the Internet is used for interactive medical education applications.

- Third, the Internet plays an important role for consumer health education, health promotion and teleprevention. (As an aside, it should be emphasized that "health education" on the Internet goes beyond the traditional model of health education, where a medical professional teaches the patient: On the Internet, much "health education" is done "consumer-to-consumer" by means of patient self support groups organizing in cyberspace. These patient-to-patient interchanges are becoming an important part of healthcare and are redefining the traditional model of preventive medicine and health promotion).

All these aspects of "cybermedicine" have implications for consumer empowerment and evidence-based medicine: The Internet (or Intranets) enables health professionals to access clinical data just in time, it allows health professionals to access the evidence on the efficacy of available interventions, and finally, it empowers consumers to actively take part in the decision making process.

Clearly, the medical use of the Internet presents enormous opportunities and challenges. The need for research and rapid publication of the findings is obvious. Research in this area should go beyond mere development and provision of technical solutions; it should also address social and human factors, and evaluate the impact of the Internet on society, health care, and public health.

JMIR wishes to publish papers that help physicians and consumers to maximize the use of the Internet. We invite researchers to evaluate the effectiveness and efficiency of Internet communications in health care.
We encourage publishers of Internet health information to apply rigorous research methods (such as randomization of users) to evaluate different methods and determinants of communication effectiveness. We invite researchers to compare the effectiveness of communication and information on the Internet with other (traditional) methods of communication. We call for papers that describe and evaluate the effects of the Internet on the patient-physician relationship and the impact on public health. We invite papers that describe the use of the Internet for evidence-based medicine, for example work that demonstrates the development and dissemination of clinical guidelines using the Internet. We wish to receive papers on ethical and legal problems, as well as cross-border and cross-cultural issues, affecting medicine on the Internet, and papers describing possible solutions to the problem of equity of information access. We would like to receive systematic studies examining the quality of medical information available in various online venues. We encourage thought regarding methods of evaluation, quality assessment and improvement of Internet information. We would like to receive proposals for standards in the field of medical publishing on the Internet, including self-regulation issues, policies and guidelines to provide reliable healthcare information. We encourage researchers to experiment with online questionnaires and other data collection experiments such as medical surveys and psychological tests. We would like to publish innovative approaches to use the Internet for healthcare research, examples might include clinical studies, drug reaction reporting and surveillance systems. We would like to publish comments and papers on electronic medical publishing and the use of the Internet for traditional scholarly publishing. We welcome descriptions of websites with innovative content or form, but authors should always make attempts to evaluate the impact of their work, for example trying to determine basic information about user demographics and traffic, where appropriate.

As publishers of a journal *about* the Internet, we are also dedicated to using and experimenting with the Internet as a medium itself. Obviously, we are utilizing the Internet for communication with authors (which is done exclusively by email), and communication with external reviewers, but we also intend to experiment with some novel methods of peer-review. We further invite authors to experiment with innovative methods to communicate their findings, for example submitting HypER-papers (Hypertext Enriched Research Papers) [1] or by the inclusion of animated figures (animated gifs), audio and video into their documents, or by attaching original data which could be downloaded and possibly dynamically analyzed using Java applets.

New Internet standards and tools are developing at a breathtaking pace, and many Internet trends have a half-life of less than 6 months. We think that traditional paper journals are simply too slow for the fast-moving field of Internet-technologies. One of our editors reported that an article submitted to a leading medical informatics journal was only published 2 years later - obviously an unacceptable (and unnecessary) delay. Thus, we are trying to publish fast - usually our peer-review time is 1-2 weeks, and we publish all e-papers as soon as they have been accepted (though the printed version may be published later).
We will follow a dual publishing strategy - full articles will be published on the Internet, all abstracts and short important articles will be published as a printed version, mainly for archiving and indexing purposes. Our peer-review process will be rigorous and constructive, helping authors to improve their manuscripts and guaranteeing a high-quality journal. We eagerly look forward to your contributions.

Gunther Eysenbach MD

*Editor,*

*Journal of Medical Internet Research*

author: Arnold von Eckardstein; Jerzy-Roch Nofer (corresponding author: Arnold von Eckardstein)
date: 2013-08
references:
title: Frail HDLs and Stiff Arteries in Type 2 Diabetes in Juveniles

Lowering of LDL cholesterol plasma levels with statins reduces coronary heart disease (CHD) event rates by up to 50% (1), implying a residual cardiovascular risk of the same magnitude despite treatment. Moreover, statins increase the risk of type 2 diabetes mellitus (T2DM), especially in patients showing components of the metabolic syndrome (2). Hence, there is considerable need for novel therapeutic regimens improving CHD prevention without increasing the risk of T2DM.

HDLs are an interesting target for this objective. Most observational studies and meta-analyses thereof demonstrated the inverse relationship of HDL cholesterol (HDL-C) levels with CHD risk (3) as well as with T2DM and its vascular complications (4,5). HDL particles exert various potentially antiatherogenic (6-8) and antidiabetogenic activities (4). Atherosclerotic lesions were decreased or even reversed in animals by transgenic overexpression or application of exogenous apolipoprotein (apo) A-I, which constitutes the most abundant protein of HDL (6). Animal experiments also provided evidence that HDL improves the function and survival of pancreatic β-cells and glucose uptake into muscle, liver, and adipose tissue (4). In humans, artificially reconstituted HDL particles reduced coronary plaque volume (9,10) and improved glycemia (11). In contrast to these promising results, addition of fenofibrate, niacin, torcetrapib, or dalcetrapib to statins failed to reduce cardiovascular risk beyond that provided by statin treatment alone, despite increasing HDL-C (12-15). Moreover, alterations in HDL-C, whether associated with mutations in the human genome or provoked in genetic mouse models, did not consistently translate into opposite changes of cardiovascular risk and atherosclerotic plaque load, respectively (16,17).

Because of these controversial data, the suitability of HDL as a therapeutic target has been increasingly questioned.
However, it is important to emphasize that interventional trials and Mendelian randomization studies targeted HDL-C, which neither exerts nor reflects any of the potentially antiatherogenic activities of HDL (6). In a prototypic HDL particle, two to five molecules of apoA-I and ∼100 molecules of phosphatidylcholine form an amphipathic shell, in which several molecules of unesterified cholesterol are embedded and which surrounds a core of cholesterol esters (18). Molar differences in the content of these major protein and lipid constituents produce considerable heterogeneity of HDL in shape, size, density, and charge (Fig. 1). The macroheterogeneity of HDL is further compounded by quantitatively minor proteins, lipids, or microRNAs (19–21), many of which contribute to the potentially antiatherogenic and antidiabetogenic properties of HDL. Additional HDL microheterogeneity is a consequence of various inflammatory diseases, including T2DM or CHD, which lead to the loss or structural modification of typical HDL constituents or the acquisition of atypical HDL constituents (22). Several alterations of HDL structure and composition have been associated with the loss of potentially vasoprotective functions, such as stimulation of cholesterol efflux (7) and endothelium-dependent vasodilation (8), independently of plasma HDL-C levels. Importantly, the plasma concentrations of many microcomponents of HDL amount to only ≤1 µmol/L. Hence, they are three to four orders of magnitude lower than those of HDL-C (usually >1 mmol/L) and one to two orders of magnitude lower than those of apoA-I (50 µmol/L) or HDL particles (10–20 µmol/L). Accordingly, these microcomponents are nonrandomly distributed among HDL subclasses and are not recovered by measurements of HDL-C, apoA-I, or HDL subclasses.
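To make these scale differences concrete, the quoted concentrations can be compared directly; this is only a back-of-the-envelope check using the figures cited above:

```latex
% assumes amsmath; concentrations exactly as quoted in the text
\[
\frac{c_{\text{HDL-C}}}{c_{\text{micro}}} \geq \frac{1~\text{mmol/L}}{1~\mu\text{mol/L}} = 10^{3},
\qquad
\frac{c_{\text{apoA-I}}}{c_{\text{micro}}} \geq \frac{50~\mu\text{mol/L}}{1~\mu\text{mol/L}} = 50
\]
```

A microcomponent present at 0.1–1 µmol/L therefore sits three to four orders of magnitude below HDL-C and one to two below apoA-I, which is why bulk measurements of HDL-C or apoA-I cannot resolve it.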
Many laboratories worldwide are currently searching for functional biomarkers of HDL that are more closely related to atherosclerosis and cardiovascular outcomes than HDL-C. One promising approach in this direction has been undertaken by Gordon et al. (23), who investigated the association of HDL subclasses and their proteomes with the presence of T2DM and obesity in adolescents and with pulse wave velocity (PWV), a noninvasive measure of vascular stiffness and hence a surrogate of atherosclerosis. Among 12 HDL subfractions identified by the authors, large HDL particles showed the greatest differences between T2DM patients and nondiabetic controls. Compared with those in healthy controls, these subfractions in T2DM patients were deprived of several proteins, including apoA-I, apoA-II, apoE, apoM, and paraoxonase 1 (PON1). These results in humans closely correspond to recent findings by Kothapalli et al. (24) showing increased arterial stiffness in apoE-deficient mice and favorable effects on arterial elasticity exerted by apoE-containing HDL particles. In addition to changes in protein composition, the phospholipid content of large HDL subfractions showed a significant inverse correlation with PWV. In agreement with protective functionality, large HDL particles were enriched in the sphingosine-1-phosphate–binding lipocalin apoM (25) and the antioxidative enzyme PON1 (26) in nondiabetic subjects. In contrast, cholesterol concentrations in smaller HDL particles showed positive correlations with PWV, suggesting adverse effects on vascular health. Of note, HDL-C did not show any significant correlation with PWV.

The study has several strengths. By investigating young patients with diabetes and controls, Gordon et al. (23) minimized the effect of confounders complicating data interpretation in diabetic adults. By using gel filtration rather than ultracentrifugation, the authors eliminated artifacts arising from protein and lipid displacement by shear forces or high ion concentrations. It is noteworthy that the authors retrieved several proteins, including apoM and PON1, in large HDL particles that had previously been assigned to small HDL particles by ultracentrifugation (27). The risk of recording false-positive results with the comprehensive proteomic approach was limited by the stringent selection for statistical analysis of those proteins that had been identified in previous proteomic examinations of HDL.

General limitations of these explorative studies are the cross-sectional design and the small number of patients. As acknowledged by the authors, statistical association does not imply causality. In this respect, it will be interesting to test the effect of large HDL fractions on endothelial functions, which were previously found to be modulated by PON1 or apoM (8,25,26). The expansion to prospective studies will require methodological advancements permitting analyses of hundreds or even thousands of samples. To this end, a refined proteomic and lipidomic examination of large HDL particles might help to identify proteins or lipids that can be specifically targeted by high-throughput technologies such as immunoassays or single-reaction monitoring mass spectrometry. It should also be emphasized that the enzymatic assay used by the authors, which quantifies choline rather than phospholipids, neither discriminates between phosphatidylcholines, lysophosphatidylcholines, plasmalogens, and sphingomyelins nor records noncholine phospholipids such as phosphatidylethanolamines, phosphatidylserines, and sphingosine-1-phosphate. Another limitation of the study is that the mass spectrometry approach measured only relative concentrations. Interestingly, however, this semiquantitative approach unraveled reduced peptide signals in HDL from T2DM patients. It is possible that posttranslational protein modifications altered the mass of peptides and thereby prevented their recording at the assigned molecular mass. This explanation is congruent with previous findings showing enhanced glycation, nitration, chlorination, sulfoxidation, or carbamylation of HDL from diabetic subjects (22). Such modifications may offer new opportunities for use as biomarkers (7,8,22).

In conclusion, the study by Gordon et al. (23) provides new insights into the molecular heterogeneity of HDL and its association with T2DM and atherosclerosis. Apart from reproducing and extending these findings in larger observational studies, it will be important to resolve the structure, function, and metabolism of large HDL fractions. Further structure–function studies may help to select molecules or modifications within HDL that can be used as biomarkers for identification, treatment stratification, and monitoring of patients at increased risk for cardiovascular diseases or diabetes mellitus.

## ACKNOWLEDGMENTS

A.v.E. is supported by grants from the Swiss National Science Foundation (3100A0-116404/1, 3100A0-130836/1), the FP7 Programme of the European Commission (RESOLVE, 305707), and the Zurich Center of Integrative Human Physiology, and has received honoraria for lectures and advisory activities from Hoffmann-La Roche, Merck Sharpe & Dohme, and AstraZeneca.
J.-R.N. is supported by grants from the German Foundation for Pathobiochemistry and Molecular Diagnostics, the Italian Ministry for Education, University, and Research (IDEAS RBID08777T), Novartis Germany, Actelion Pharmaceuticals Germany, and Siemens Healthcare Diagnostics. No other potential conflicts of interest relevant to this article were reported.

# REFERENCES

abstract: # Background

Filarial nematodes, including *Brugia malayi*, the causative agent of lymphatic filariasis, undergo molting in both arthropod and mammalian hosts to complete their life cycles. An understanding of how these parasites cross developmental checkpoints may reveal potential targets for intervention. Pharmacological evidence suggests that ecdysteroids play a role in parasitic nematode molting and fertility, although their specific function remains unknown. In insects, ecdysone triggers molting through the activation of the ecdysone receptor: a heterodimer of EcR (ecdysone receptor) and USP (Ultraspiracle).

# Methods and Findings

We report the cloning and characterization of a *B. malayi* EcR homologue (*Bma-EcR*). Bma-EcR dimerizes with insect and nematode USP/RXRs and binds to DNA encoding a canonical ecdysone response element (EcRE). In support of the existence of an active ecdysone receptor in *Brugia* we also cloned a *Brugia rxr* (retinoid X receptor) homolog (*Bma-RXR*) and demonstrate, using a mammalian two-hybrid activation assay, that Bma-EcR and Bma-RXR interact to form an active heterodimer. The Bma-EcR ligand-binding domain (LBD) exhibits ligand-dependent transactivation via a GAL4 fusion protein combined with a chimeric RXR in mammalian cells treated with Ponasterone-A or a synthetic ecdysone agonist. Furthermore, we demonstrate specific up-regulation of reporter gene activity in transgenic *B. malayi* embryos transfected with a luciferase construct controlled by an EcRE engineered in a *B. malayi* promoter, in the presence of 20-hydroxy-ecdysone.

# Conclusions

Our study identifies and characterizes the two components (*Bma-EcR* and *Bma-RXR*) necessary for constituting a functional ecdysteroid receptor in *B. malayi*. Importantly, the ligand-binding domain of Bma-EcR is shown to be capable of responding to ecdysteroid ligands, and conversely, ecdysteroids can activate transcription of genes downstream of an EcRE in live *B. malayi* embryos. These results together confirm that an ecdysone signaling system operates in *B. malayi* and strongly suggest that Bma-EcR plays a central role in it. Furthermore, our study proposes that existing compounds targeting the insect ecdysone signaling pathway should be considered as potential pharmacological agents against filarial parasites.

author: George Tzertzinis; Ana L. Egaña; Subba Reddy Palli; Marc Robinson-Rechavi; Chris R. Gissendanner; Canhui Liu; Thomas R. Unnasch; Claude V. Maina\* E-mail: [^1][^2]
date: 2010-03
institute: 1 New England Biolabs, Ipswich, Massachusetts, United States of America; 2 Department of Entomology, University of Kentucky, Lexington, Kentucky, United States of America; 3 Department of Ecology and Evolution, University of Lausanne, Lausanne, Switzerland; 4 Global Health Infectious Disease Research Program, College of Public Health, University of South Florida, Tampa, Florida, United States of America; University of Pittsburgh, United States of America
references:
title: Molecular Evidence for a Functional Ecdysone Signaling System in *Brugia malayi*

# Introduction

Human filarial parasitic nematodes are responsible for two chronic, severely debilitating tropical diseases: lymphatic filariasis and onchocerciasis. Global efforts at treating both diseases and controlling the spread of infection have so far met with limited success. Also, the widespread use of the few available specific drugs for fighting these diseases raises the possibility of the development of drug resistance [1]. With 140 million cases of infection worldwide, and over a billion people at risk of contracting these debilitating diseases [2], the development of a wide range of therapeutic interventions and treatment options is urgent.

Filarial parasites spend portions of their life cycle in obligate mammalian and insect hosts. The completion of a successful life cycle requires the passage of the developing nematode through four molts, two in the mammalian host and two in the arthropod host. The transmission of the parasite from one host to the other initiates a rapid molt, indicating that the developmental cues that trigger molting are closely tied to the integration of the parasitic larva into a new host environment. Inhibition of molting would arrest the life cycle in either the mammalian or the insect host, preventing both pathology and/or the infective cycle. Thus, the study of the molting process in filarial nematodes could point to specific targets for drug development.

Molting in ecdysozoans [3] has been best characterized in insects. 20-hydroxyecdysone (20E) acts as the temporal signal to initiate molting, regulates embryogenesis, and coordinates tissue-specific morphogenetic changes in insects [4]–[6]. Ecdysone signaling is regulated by the activity of a heterodimeric receptor composed of two nuclear receptor proteins, EcR and USP, although the hormone-binding function resides only within EcR [7]–[10]. After ligand binding, EcR/USP activates a cascade of gene expression whose end result is the execution of molting [11].

Three alternatively spliced mRNA isoforms of *EcR* have been identified in *Drosophila* [12]. Mutations in these different *EcR* mRNA isoforms result in a range of phenotypes that includes lethality at the embryonic, larval and pupal stages, disruption of salivary gland degeneration [13], aberrant neuronal remodeling during metamorphosis [14], and changes in female fecundity and vitellogenesis [15].

EcR and USP, as well as a number of the proteins involved in the ecdysone-signaling cascade, are members of the nuclear receptor (NR) superfamily [10],[16]–[18]. NRs are characterized by significant amino acid sequence similarities in two key functional domains: the DNA-binding domain (DBD), which directs the sequence-specific DNA binding of the receptor, and the ligand-binding domain (LBD), which mediates dimerization, ligand binding and transcriptional activation [19]–[20]. Some nuclear receptors have been shown to interact with a number of small-molecule ligands such as metabolites and hormones, and these interactions are important for the regulation of their activity. Other NRs are considered orphan receptors and are either not ligand-regulated or their cognate ligands have yet to be identified [19]. Homologs of the insect NRs that function downstream of EcR and USP have also been identified in filarial parasites (21, 22; Egaña, Gissendanner and Maina, unpublished results) as well as in the free-living nematode *C. elegans* [23]–[26]. Surprisingly, however, homologs of EcR or RXR/USP are apparently absent from the exceptionally large *C. elegans* NR family [23].

In filarial nematodes the molecular triggers of molting remain largely unknown. As in insects, a possible candidate for a signal that controls molting in *B. malayi*, the causative agent of lymphatic filariasis, is the steroid hormone 20E. Both free and conjugated ecdysteroids have been identified in the larvae of several parasitic nematodes including *Dirofilaria immitis* and *Onchocerca volvulus* [27]–[29]. In addition, ecdysteroids have been shown to exert biological effects on several nematodes. For example, in *Nematospiroides dubius* [30] and *Ascaris suum* [31] molting can be stimulated *in vitro* by low concentrations of ecdysteroids. Also, molting of third-stage larvae of *D. immitis* can be stimulated with 20E and RH5849, an ecdysone agonist [32],[33]. The arrest at the pachytene stage of meiosis is abrogated when *D. immitis* ovaries are cultured *in vitro* with ecdysone, and *B. pahangi* adult females can be stimulated to release microfilaria when cultured *in vitro* with ecdysone [34].

There appears to be a physiological connection between the filarial parasite and its arthropod host that may involve ecdysteroid signaling. Uptake of microfilaria (L1) by a feeding female mosquito at the time of a bloodmeal coincides with an increase in the production of mosquito ecdysteroids that results in the initiation of mosquito oocyte maturation [35]. Concurrent with this increase in ecdysteroid concentration in the mosquito host, larvae initiate a molt transition from L1 to L2 and later from L2 to L3, the infectious stage of the parasite. These observations suggest a potential role for ecdysone in the regulation of molting and other developmental processes in filarial nematodes.

We previously identified an *rxr* homolog in the dog filarial parasitic nematode *D. immitis* and demonstrated its ability to dimerize with an insect EcR and function in Schneider S2 cells [36]. We extend this work here with the identification and characterization of *EcR* and *rxr* homologs from *B. malayi*. Bma-EcR and Bma-RXR share some of the biochemical properties of insect EcR and RXR and show differences that appear to be nematode specific.

# Methods

## Parasites, RNA isolation and reverse transcription

*Brugia malayi* adult males, females or L1 larvae (TRS Labs, Athens, GA) were frozen in liquid nitrogen and ground with a pestle and mortar. Total RNA was purified from the pulverized tissue using RNAwiz (Ambion).
RNA was quantified with a spectrophotometer and its quality assessed by gel electrophoresis. One µg of total RNA per isolation was reverse-transcribed using the ProtoScript first-strand cDNA synthesis kit (New England Biolabs) following the manufacturer's protocol.

## Cloning of *Bma-EcR*, *Ov-RXR* and *Bma-RXR*

The genomic library from *B. malayi* in the pBeloBAC vector, gridded on nylon filters (Filarial Genome Network, FGN), was screened using a cDNA fragment from a *D. immitis* EcR homolog (*Di-EcR*) (C. Shea, J. Richer and C. V. Maina, unpublished results) as a probe. Three positive BACs were identified and the individual corresponding bacterial clones were cultured. The inserts were confirmed to contain identical or overlapping sequences by restriction digestion analysis. One 11 kb *Xba*I fragment identified by Southern blot hybridization with the *Di-EcR* probe was subcloned into Litmus 28i and the insert was sequenced using the GPS®-1 Genome Priming System (NEB) as directed by the manufacturer. PCR primers were designed to amplify *Bma-EcR* using sequence from the identified exons [37]. The primers used to amplify the full ORF were: 5′-GGC GCT AGC ATG ACT ACA GCA ACA GTA ACA TAT CAT GAG TT-3′ (Nco-MMT-5); 5′-GGC CTC GAG CGA TTC TAT GGA TAG CCG GTT GAG GTT-3′ (Xho-GYP-3). To determine the expression pattern and identify alternate isoforms of *Bma-EcR*, adult female, male, L1, L2 and L3 cDNA libraries (FGN) were screened using the following primers: 5′-GGG TAA TTC CTA CCA ACA GCT-3′ (GNS); 5′-CAA GGG TCC AAT GAA TTC ACG AT-3′ (GPL), corresponding to a fragment of the LBD from amino acids GNSYQQ to REFIGPL. Additional sequence to extend the identified *Bma-EcR* isoforms was obtained by PCR, combining the latter two primers with the T3 and T7 promoter primers, as their sequence is present in the library vector.

An *O. volvulus* L3 cDNA library (FGN) was screened by PCR using the following primers: 5′-GAT CTT ATC TAT CTA TGC CGA GAA-3′; 5′-TAC TTT GAC ATT TGC GGT AAC GAC-3′, corresponding to the amino acid sequences DLIYLCRE and RYRKCQSM of the conserved DNA-binding domain of DiRXR-1, respectively. Additional *Ov-rxr* sequence was obtained by PCR using the same primers in combination with the T7 and T3 promoter primers. Candidate clones were identified by hybridization with a fragment of the DiRXR-1 sequence. An amplified fragment from this library contained sequence corresponding to the *Ov-rxr* A/B and C domains.
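As a side note, primer-based screens of this kind can be sanity-checked in silico by locating the primer pair on a candidate sequence. The sketch below is only an illustration, not part of the original protocol; the template is a synthetic placeholder built around the GNS/GPL primer pair quoted above.

```python
def revcomp(seq):
    """Reverse complement of an ACGT-only DNA sequence."""
    return seq.upper().translate(str.maketrans("ACGT", "TGCA"))[::-1]

def in_silico_pcr(template, fwd, rev):
    """Return the predicted amplicon for a primer pair on the plus strand
    of `template`, or None if the primers do not both match in PCR
    orientation (exact matches only, for simplicity)."""
    start = template.find(fwd)              # forward primer binding site
    end = template.find(revcomp(rev))       # reverse primer site, plus strand
    if start < 0 or end < 0 or end + len(rev) <= start:
        return None
    return template[start:end + len(rev)]

fwd = "GGGTAATTCCTACCAACAGCT"               # GNS primer from the text
rev = "CAAGGGTCCAATGAATTCACGAT"             # GPL primer from the text
template = fwd + "A" * 30 + revcomp(rev)    # hypothetical 74 nt template
print(len(in_silico_pcr(template, fwd, rev)))  # -> 74
```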
Using BLAST, the *Di-rxr-1* sequence [36] was used to screen the *B. malayi* genome sequence available from The Institute for Genomic Research (TIGR) parasites database. This analysis resulted in the identification of several exons encoding an 1189 bp fragment of open reading frame (ORF) that corresponded to a putative homolog of *rxr* in *Brugia* (*Bma-RXR*). Based on the genomic sequence we designed PCR primers and used them to amplify the expected *Bma-RXR* mRNA using a nested PCR approach. One µl of a reverse transcription reaction from female total RNA was used as the template for the first round of PCR, carried out with primers 5′-CGA TCT ATG CCC ATC AGA TTG-3′ (LCP) and 5′-CAC AAT GCA AGC TAA GAG ATC G-3′ (RSL) at 46°C annealing temperature. Six percent of the first-round PCR reaction was used as a template in a second round of PCR with primers 5′-CGA TTT AAC TCC AAA TGG AAG TCG-3′ (DLT) and 5′-AGC AAA GCG TTG AGT TTG TGT TGG-3′ (PTQ) at 47°C annealing temperature. Using the sequence obtained, primers were designed to extend the 5′ end of the coding sequence using a semi-nested PCR approach in combination with the 5′ splice leader SL1 primer. As above, two rounds of PCR were employed, the first round using the SL1 primer (5′-GGT TTA ATT ACC CAA GTT TGA G-3′) and primer 5′-GAT GCT CGA TCA CCG CAT ATT GCA CAA ATG-3′ (CAI) at 68°C annealing temperature, and the second round using the SL1 primer and primer 5′-TGG CAT ACA GTG TCA TAT TTG GTG TTG TGC-3′ (STT) at 66°C annealing temperature. The 3′ coding sequence was obtained by 3′ RACE using the First Choice RLM-RACE kit (Ambion) with *Bma-RXR* primers 5′-GGC TCT AAT GCT ACC ATC ATT TAA TGA A-3′ (ALM) and 5′-GAA GAT CAA GCT CGA TTA ATA AGA TTT GGA-3′ (EDQ), following the manufacturer's protocol. For each amplified fragment several clones were sequenced. The positions of the primers used are indicated by short arrows over the corresponding amino acid sequence in the alignment figures.

## Northern blot analyses

Ten µg of total RNA from adult male, adult female and microfilaria samples were used to carry out Northern blot analyses with the NorthernMax-Gly kit (Ambion). The *Bma-EcR* and *Bma-RXR* probes were 1 kb DNA fragments from the respective coding regions, labeled by random priming with the NEBlot kit (NEB) and ^32^P-dATP (NEN-Dupont). The sizes of the hybridizing RNA species were estimated using an RNA ladder run adjacent to the samples as a reference.

## Phylogenetic analyses

Predicted amino acid sequences of cloned cDNAs were aligned with all nuclear receptor sequences from SwissProt and GenBank, using Muscle [38] with default options. A complete phylogeny of the whole nuclear receptor superfamily was built with PhyML [39]. Subsequently, phylogenies of the relevant sub-families were constructed with 1000 bootstrap replicates. Well-aligned sites were selected with GBLOCKS [40], with relaxed options to allow a few gaps per column of the alignment. In each case PhyML was run with rate heterogeneity with 4 classes, the alpha parameter estimated from the data, and a BIONJ starting tree. Support for nodes was estimated by the Approximate Likelihood-Ratio Test (aLRT) [41].
## Protein-protein interaction by GST pull-down assays

A cDNA fragment of *Bma-EcR* encoding aa 152–465 (upstream of the C domain to the end of the predicted ORF) was amplified by PCR using primers 5′-AGC TTC CAT GGC AGC TGA AGA AGG TCA ATC TAA TGG CGA CAG TGA GT-3′ (536 to 557 of EF362469) and Xho-GYP-3 (see Cloning of *Bma-EcR*). The fragment was cloned in frame with GST in the vector pGEX-KG [42]. The fusion protein was produced in *E. coli* BL21 by induction at 30°C with 0.1 mM IPTG and purified on Glutathione Sepharose beads (Pharmacia) as directed by the manufacturer. Recombinant *Di-rxr-1* and *Aausp* (gift from A. Raikhel, University of California Riverside) in pcDNA-3 (Invitrogen) were transcribed and translated *in vitro* in rabbit reticulocyte lysates using the TNT T7 coupled transcription-translation system (Promega) in the presence of ^35^S-methionine (Amersham Biosciences), as recommended by the manufacturer. Glutathione resin beads loaded with 1 µg of GST:Bma-EcR fusion protein were incubated for 1 h at 4°C with 5 µl of rabbit reticulocyte lysate containing labeled proteins (Di-RXR-1 or AaUSP) in a total volume of 10 µl of binding buffer (20 mM Tris, 1 mM EDTA, 1 mM DTT, 10% glycerol, 150 mM sodium chloride, 0.5 mg/ml BSA, complete protease inhibitor cocktail (Sigma)). The beads were washed twice with binding buffer and three times with buffer without BSA, then incubated with 10 mM reduced glutathione to elute the proteins, and centrifuged. Supernatants were mixed with loading buffer and analyzed by SDS-PAGE. Signals were detected by autoradiography of the dried gels.

## DNA binding by electrophoretic mobility shift assays (EMSA)

The ecdysone response element (PAL-1) described by Hu et al. [43] was produced by annealing two synthetic oligonucleotides: 5′-TTG GAC AAG GTC AGT GAC CTC CTT GTT CT-3′ and its complement (with two overhanging Ts at each 3′ end). PAL-1 was labeled with ^32^P-dATP (NEN-Dupont) using Klenow polymerase (New England Biolabs) and purified by spin-column G50 chromatography (Amersham Biosciences). A cDNA fragment containing the complete coding region of *Bma-EcRA* was cloned into pcDNA-3 (Invitrogen) using the *Nhe*I and *Xho*I restriction sites. Two additional constructs containing *Bma-EcRB* and *C*, respectively, were also cloned using the same strategy. The three *Bma-EcR* isoforms were transcribed and translated *in vitro* in rabbit reticulocyte lysates using the TNT T7 coupled transcription-translation system (Promega) following the manufacturer's protocol. The translation yield of each construct was assessed by labeling a portion of the reaction with ^35^S-methionine and analyzing the products after gel electrophoresis and autoradiography. Binding reactions were performed at room temperature in 10 mM Tris-HCl pH 7.5, 50 mM NaCl, 10 mM MgCl~2~, 0.5 mM DTT, 0.025 mM EDTA, 4% glycerol, 0.2 µg/µl poly(dI)-poly(dC), 0.13 µg/µl BSA, 0.05% NP40, with 13 fmol/µl labeled PAL-1 and 3.5 µl of TNT reaction mixture containing the corresponding proteins (0.5 µl AaUSP and 1.5 or 2.5 µl Bma-EcR), in a 15 µl total volume for 20 min before loading on a 6% native TBE gel (Invitrogen). Signals were detected by autoradiography of the dried gels.
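The sequence bookkeeping behind the PAL-1 probe (two oligonucleotides whose 29 bp cores anneal, each carrying two extra Ts at its 3′ end) is easy to verify programmatically. This is a minimal sketch covering only the sequences, not the labeling chemistry:

```python
def revcomp(seq):
    """Reverse complement of an ACGT-only DNA sequence."""
    return seq.upper().translate(str.maketrans("ACGT", "TGCA"))[::-1]

CORE = "TTGGACAAGGTCAGTGACCTCCTTGTTCT"   # PAL-1 top strand from the text

top = CORE + "TT"               # synthetic oligo 1: core plus 3' TT overhang
bottom = revcomp(CORE) + "TT"   # synthetic oligo 2: complement plus 3' TT

# The 29 bp cores are perfectly complementary; only the TT tails are unpaired.
assert revcomp(bottom[:-2]) == CORE
print(top)
print(bottom)
```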
## Constructs and transactivation assays in mammalian cells

To construct GAL4:Bma-EcR and VP16:Bma-RXR, the DEF domains of *Bma-EcR* and *Bma-RXR* were PCR-amplified and cloned into the pM and pVP16 vectors (EcR residues 259–565; RXR residues 191 to 464), respectively (Clontech). The construct VP16:Lm-HsRXREF (Chimera 9) has been previously described [44]. pFRLUC, encoding firefly luciferase under the control of the GAL4 response element (Stratagene Cloning Systems), was used as a reporter.

Fifty thousand NIH 3T3 cells per well in 12-well plates were transfected with 0.25 µg of receptor and 1.0 µg of reporter constructs using 4 µl of SuperFect (Qiagen). After transfection, the cells were grown in medium containing ligands for 24–48 hours. A second reporter, Renilla luciferase (0.1 µg), expressed from a constitutive thymidine kinase promoter, was cotransfected into the cells and used for normalization. The cells were harvested and lysed, and reporter activity was measured in an aliquot of lysate. Luciferase activity was measured using the Dual-Luciferase™ reporter assay system from Promega Corporation (Madison, WI, USA). The results are reported as averages of normalized luciferase activity, and the error bars correspond to the standard deviation from multiple assays. The ligands used were: RG-102240, a synthetic, stable diacylhydrazine ecdysone agonist [N-(1,1-dimethylethyl)-N′-(2-ethyl-3-methoxybenzoyl)-3,5-dimethylbenzohydrazide], also known as GS-E or RSL1 (RheoGene, New England Biolabs), and Ponasterone-A (Invitrogen). The ligands were applied in DMSO at the indicated final concentrations, and the final concentration of DMSO was maintained at <0.1%.

## Constructs and transactivation assays in *Brugia malayi* embryos

In order to construct an ecdysteroid response reporter for *Brugia malayi*, the repeat domain of the *B. malayi* 12 kDa small ribosomal subunit gene promoter [45] (construct BmRPS12 (−641 to −1)/luc) was replaced (in both orientations) with the PAL-1 EcRE shown to be recognized by Bma-EcR *in vitro*. Previous studies have shown that the repeat acts as a transcriptional enhancer. Outward-facing primers flanking the repeat domain, containing synthetic *Spe*I sites at their 5′ ends, were used in an inverse PCR reaction employing BmRPS12 (−641 to −1) as a template [46]. The resulting amplicons were purified using the QiaQuick PCR cleanup kit (Qiagen), digested with *Spe*I, gel-purified, self-ligated and transformed into *E. coli*. The resulting construct was designated BmRPS12-rep. A double-stranded oligonucleotide consisting of five tandem repeats of the EcRE, ctag(GGACAAGGTCAGTGACCTCCTTGTTC)5×, with *Spe*I overhangs was then ligated into the *Spe*I site of BmRPS12-rep. The insertions in the forward and reverse orientations were designated BmRPS12-EcRE and BmRPS12-EcRE-rev, respectively.

Constructs were tested for promoter activity in transiently transfected *B. malayi* embryos essentially as previously described [47]. In brief, embryos were isolated from gravid female parasites and transfected with BmRPS12-EcRE (or BmRPS12-EcRE-rev) mixed with a constant amount of a transfection control consisting of the BmHSP70 promoter fragment driving the expression of Renilla luciferase (construct BmHSP70 (−659 to −1)/ren). Following a rest of five minutes, the transfected embryos were transferred to embryo culture medium (RPMI tissue culture medium containing 25 mM HEPES, 20% fetal calf serum, 20 mM glucose, 24 mM sodium bicarbonate, 2.5 mg/ml amphotericin B, 10 units/ml penicillin, 10 units/ml streptomycin and 40 mg/L gentamycin), supplemented with 1 µM 20-OH ecdysone dissolved in 50% ethanol or with solvent control. Transfected embryos were maintained in culture for 48 hours before being assayed for transgene activity. Firefly luciferase activity was normalized to Renilla luciferase activity in each sample to control for variations in transfection efficiency. Firefly/Renilla activity ratios for each sample were further normalized to the activity ratio from embryos transfected in parallel in each experiment with the parental construct BmRPS12-rep. This permitted comparisons of data collected in experiments carried out on different days.
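In code, the two-step normalization just described (firefly over Renilla within each sample, then rescaling to the parental BmRPS12-rep construct measured in the same experiment) reduces to a few lines of arithmetic. A minimal sketch, with invented luminometer readings standing in for real data:

```python
from statistics import mean

def normalized_activity(firefly, renilla, parental_ratios):
    """Firefly/Renilla ratio of one sample, rescaled to the mean
    firefly/Renilla ratio of the parental-construct transfections
    run in the same experiment."""
    return (firefly / renilla) / mean(parental_ratios)

# Hypothetical triplicate readings (arbitrary luminometer units).
parental = [900 / 450, 1100 / 500, 1000 / 480]   # BmRPS12-rep controls
print(normalized_activity(firefly=5200, renilla=410,
                          parental_ratios=parental))
```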
## Statistical analysis

Each construct was tested in two independent experiments, with each experiment containing triplicate transfections of each construct to be analyzed. The statistical significance of differences noted between the activity in the control and experimental transfections was determined using Dunnett's test, as previously described [47].
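For reference, Dunnett's test compares each treatment group against a shared control while keeping the family-wise error rate fixed. One modern way to run it (not the tooling used in the original study) is `scipy.stats.dunnett`, available since SciPy 1.11; the triplicate values below are invented placeholders:

```python
from scipy.stats import dunnett

# Hypothetical normalized luciferase activities, one list per construct.
control = [1.00, 0.95, 1.08]     # reference transfection
ecre_20e = [2.90, 3.10, 2.75]    # EcRE construct, 20-E treated
no_ecre = [1.02, 0.97, 1.05]     # construct lacking the EcRE, 20-E treated

res = dunnett(ecre_20e, no_ecre, control=control)
print(res.pvalue)                # one adjusted p-value per treatment group
```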
## Sequence accession numbers

The nucleotide sequences for Bma-EcR isoform A, Bma-EcR isoform C, Bma-RXR and Ovnhr-4 have been deposited in the GenBank database under accession numbers EF362469, EF362470, EF362471 and EF362472.

# Results

## Bma-EcR cloning and genomic structure

A candidate *EcR* homolog was first identified from *D. immitis* using degenerate PCR primers based on insect EcRs (C. Shea, J. Richer and C. V. Maina, unpublished results). Using sequences from the *D. immitis* *EcR* homolog, genomic libraries from *B. malayi* available from the Filarial Genome Network (FGN) were screened. A strongly hybridizing BAC was identified and sequenced. This BAC contained a gene that encodes a protein with strong similarities to the *EcR* branch of nuclear receptors (see below). We designated this gene *Bma-EcR* (to distinguish it from *Bombyx mori* EcR [37]). Using sequences corresponding to the predicted *Bma-EcR* exons, PCR primers were designed and used to screen larval and adult cDNA libraries. This library survey revealed *Bma-EcR* expression in the L1, L3 and L4 larval stages, as well as in adult males and females (data not shown). In the microfilaria (L1) library, using primers from the putative ligand-binding domain (LBD) encoding region, two alternatively spliced mRNA isoforms of *Bma-EcR* (isoforms A and C) were identified (Fig. 1A). *Bma-EcRA* is the isoform containing the longest ORF (597 aa) with an intact LBD. *Bma-EcRC* contains exon 6 with a 29-nucleotide deletion that results in a reading-frame shift, generating a premature stop codon and truncation of the LBD at helix 5 (Fig. 1). This is the result of an alternative splice site within exon 6. In addition to these confirmed isoforms, a splice-site consensus sequence was identified within exon 5 at the end of the DNA-binding domain (DBD) that, if used, would result in the omission of ten amino acids from the C-terminal extension of the DBD (indicated by an arrow in Figure 1A). This type of spliced mRNA isoform has been identified in *D. immitis* (C. Shea, J. Richer and C. V. Maina, unpublished results). We were unable to clone such an isoform (EcRB) from *B. malayi* by RT-PCR. However, we cannot exclude the possibility that *Bma-EcRB* is expressed in specific tissues or developmental stages not represented in the libraries or RNA used.

## Sequence and phylogenetic analysis of Bma-EcR

Bma-EcR shows the strongest similarity to the NR1H group of nuclear receptors, typified by the insect EcRs and the mammalian FXR and LXR receptors (Figs. 1B, 2). The strongest similarity is in the DBD, which contains the canonical C~4~ zinc-finger structure of nuclear receptors. This domain is 10 amino acids longer in Bma-EcR than in the homologous region of the other EcRs. However, as indicated above, exclusion of these 10 amino acids by alternative splicing of this site (isoform B) would result in a better alignment of Bma-EcR with the other EcRs (Fig. 1B). The LBD shows significant similarity in the regions encoding helices 3–10 [48]. An exception is found in the region of helices 11–12 (Fig. 1B). Helix 12 of insect EcRs contains the AF2 motif, responsible for ligand-dependent transcriptional activation. Although predictions of secondary structure of the *Bma-EcR* LBD protein sequence indicate helical folding of putative helices 3–10, no helical propensity is predicted in the region of helix 12 (data not shown). Immediately following the helix 12 region, a glutamine-rich helical segment is present. Glutamine-rich sequences are often associated with transcription activation domains [49]. These differences make Bma-EcR an unusual member of the receptor family that perhaps uses a different mechanism for ligand-dependent activation.

Global phylogenetic analysis (see Supporting Information Fig. S1) places *Bma-EcR* with the arthropod EcRs. The position of *Bma-EcR* is strongly supported (99% aLRT support) in a phylogenetic tree of the sub-family (Figures 2 and S2). The branch leading to *Bma-EcR* is long, indicating a relatively derived sequence, but not more derived than that of dipteran EcRs, for example. Separate analyses of the DBD and LBD produced similar tree topologies, especially concerning the position of *Bma-EcR*.

## Bma-EcR expression analysis

Northern blot analysis was used to establish the expression pattern of *Bma-EcR* in *Brugia* adult females, males, and L1 microfilaria. The fragment used as the probe encompassed the coding sequence common to both mRNA isoforms identified. A predominant species of approximately 3.75–4 kb was present in all RNA samples tested (Fig. 3), which was consistent with our detection by RT-PCR of the *Bma-EcR* isoforms in libraries from those same stages, and implies the existence of longer 5′ and/or 3′ untranslated regions than are present in our cloned cDNA species. Shorter minor RNA species are detectable, which may indicate the existence of additional isoforms (Fig. 3).

## Bma-EcR dimerization with RXR and USP

EcRs heterodimerize with Ultraspiracle (USP) proteins to form functional ecdysone receptors that bind ecdysteroid ligands and ecdysone response elements (EcREs) [35]. In order to test whether Bma-EcR heterodimerizes with a canonical insect USP or its filarial homologue Di-RXR-1 [36], an *in vitro* binding assay was carried out. *In vitro* translated ^35^S-labeled Di-RXR-1 or *Aedes aegypti* USP (AaUSP) were incubated with GST or GST:Bma-EcR fusion proteins immobilized on glutathione beads. Specific bands corresponding to full-length AaUSP and Di-RXR-1 were detected bound to GST:Bma-EcR (Fig. 4A). No binding to GST alone was detected with either protein bait. While the *in vitro* translation of AaUSP resulted in the production of a major protein species of the predicted full-length AaUSP (Fig. 4A, lane 1), *in vitro* translation of Di-RXR-1 produced multiple protein species (Fig. 4, lane 4), including one corresponding to full-length Di-RXR-1 (ca. 55 kD), which specifically bound to GST:Bma-EcR (Fig. 4A, lane 6).
These results indicate that the Bma-EcR protein, like EcR, is capable of heterodimerization with a USP protein *in vitro*.

## Bma-EcR DNA-binding properties

Having established that Bma-EcR can dimerize with USP/RXRs, we investigated the DNA-binding properties of the two isolated protein isoforms of *Bma-EcR* (forms A and C) on a palindromic ecdysone response element (PAL-1 EcRE), based on the *Drosophila hsp27* ecdysone response gene [43], using EMSA. In addition to the cloned *Bma-EcRA* and *Bma-EcRC* mRNA isoforms, a construct lacking the 10 amino acids downstream of the zinc-finger domain was engineered (putative Bma-EcRB). The three Bma-EcR isoforms and AaUSP (as the heterodimerization partner) were produced in rabbit reticulocyte lysates, and their relative amounts were estimated using ^35^S-met labeling and autoradiography (Fig. S3). An equal amount of AaUSP-containing reticulocyte lysate was incubated with increasing amounts of each Bma-EcR isoform preparation and ^32^P-labeled EcRE prior to analysis by native polyacrylamide gel electrophoresis. AaUSP produces a specific band with the EcRE, as has been shown before [50] (Fig. 4B, dot), which migrates faster than a nonspecific band produced by the reticulocyte lysate (Fig. 4B, asterisk). Both Bma-EcRA and -B produced an additional, slower-migrating band consistent with a heterodimer bound to the probe (Fig. 4B, lanes 2–5, arrow). In contrast, no additional band was detected with Bma-EcRC (Fig. 4B, lanes 6–7). This result is not unexpected, given that Bma-EcRC, which contains a premature stop codon and encodes a protein with a truncated LBD, lacks essential structural features for heterodimerization. Neither Bma-EcRA nor Bma-EcRB bound substantially to the EcRE in the absence of AaUSP (Fig. 4B, lanes 8–9). This *in vitro* analysis of Bma-EcR heterodimerization with AaUSP and binding to an EcRE suggests that Bma-EcR has DNA-binding properties similar to those of ecdysone receptors.

## Cloning of a *B. malayi* RXR homolog

The dimerization properties of Bma-EcR and the identification of *rxr* [36] and *EcR* homologs in the dog filarial parasite *D. immitis* pointed to the likelihood that an *rxr* homolog also exists in other filarial nematodes. Using degenerate PCR primers we were able to clone a fragment with high sequence similarity to Di-RXR-1 from *O. volvulus* cDNA (see Experimental Procedures; sequence deposited in GenBank). In *B. malayi*, however, although we searched the available genomic libraries for an RXR-type receptor using *Di-rxr-1* as a probe, no strongly hybridizing sequences were detected. While this work was in progress, genomic data from the *B. malayi* genome project became available, providing us with an alternative route to cloning the *B. malayi* RXR/USP [51]. Using the sequence information from the other filarial species as well as from the *Brugia malayi* genome project, we designed a combined RT-PCR and RACE approach (described in detail in Experimental Procedures) that allowed us to obtain clones of the *B. malayi* homolog of RXR, which we named *Bma-RXR*. The longest cDNA sequence identified for *Bma-RXR* is 1398 bp and encodes a 465 amino acid protein with strong similarity to *D. immitis* Di-RXR-1 (100% amino acid identity in the DBD and 83% in the LBD). The amino acid sequence similarity between the *B. malayi* and *D. immitis* RXRs deteriorates substantially in the last exon.
Interestingly, the last exon corresponds to the helix 12 region of the LBD, where the activation function AF-2 usually resides (Fig. 5). This LBD region is also highly dissimilar between the filarial nematode RXRs and their homologs in other, non-nematode species. Notably, the motif LIRVL, consistent with the RXR AF2, is found in Bma-RXR but not in Di-RXR-1.

## Phylogenetic analysis of Bma-RXR

Similarly to Bma-EcR, global phylogenetic analysis places Bma-RXR together with the USPs and RXRs (Supplementary material, Figs. S1 and S2). Using HNF4s as the outgroup, there is 100% aLRT (approximate Likelihood-Ratio Test) support for placing Bma-RXR in the USP/RXR sub-family (Fig. 6). Relationships among arthropod USP/RXRs and Bma-RXR are not well resolved (aLRT values under 50%), but Bma-RXR groups strongly with Di-RXR-1. The grouping of Bma-RXR among USP/RXRs remains the same whether or not the *Schistosoma mansoni* sequences are included in the tree (data not shown). The *Schistosoma* sequences are extremely divergent, to the extent of not being phylogenetically informative [52],[53], and branch at the base of the tree. While it is known that dipteran and lepidopteran USPs evolve especially fast [53], Di-RXR-1 and Bma-RXR appear to have evolved even faster. Separate phylogenies of the DBD and LBD (not shown) indicate that this is entirely due to a very derived LBD. This observation is consistent with the alignment. The DBD, on the other hand, has evolved slowly, like the DBDs of its homologs in other species.

## Bma-RXR expression

Expression of *Bma-RXR* was analyzed in adult females, males, and L1 larvae by Northern blot analysis (Fig. 7). A ∼5 kb RNA species was clearly detected in the female and male RNA samples. Low levels of the ∼5 kb *Bma-RXR* RNA species were also observed in the L1 RNA sample. Two additional *Bma-RXR* bands of approximately 3.75 kb and 3 kb were also detected in adult females. The presence of *Bma-RXR* mRNA in males and L1 larvae was in agreement with RT-PCR results (data not shown).

## Bma-EcR heterodimerization and ligand dependence *in vivo*

To further characterize the properties of Bma-EcR, we tested whether Bma-EcR, by virtue of its LBD, is capable of forming a dimer with Bma-RXR, its putative native partner, to constitute a functional receptor and transduce the hormonal signal of ecdysteroids in a cellular context. The assay we employed takes advantage of the fact that the LBD of nuclear receptors can function in a modular fashion fused to heterologous DNA-binding domains such as the GAL4 DBD [43],[44]. In order to test the ability of the Bma-EcR LBD to activate transcription of a reporter gene in response to a particular hormone ligand, NIH 3T3 cells were co-transfected with GAL4:Bma-EcR(LBD) in combination with RXR LBDs fused to VP16. In addition to Bma-RXR(LBD), we tested human HsRXR and Hs-LmRXR(LBD) (a chimeric human-locust LBD). The latter was selected because it shows no constitutive dimerization and high ligand-dependent activity when partnered with other ecdysone receptors [44]. The transfected cells were tested for transactivation in the absence or presence of either the ecdysteroid Ponasterone-A or the synthetic ecdysone agonist RSL1 by assaying luciferase activity.

Significant transactivation was detected when GAL4:Bma-EcR(LBD) was partnered with Bma-RXR(LBD) (Fig. 8A).
The addition of RSL1 (Fig. 8B) or Ponasterone-A (data not shown) had no further stimulatory effect on the detected activity. These data demonstrate that Bma-EcR and Bma-RXR are *bona fide* nuclear receptor partners and that, like their insect counterparts, they avidly dimerize in the absence of ligand. The ligands apparently cannot appreciably increase the heterodimer's ability to activate transcription above that conferred by the VP16 activation domain in this assay.

Significant ligand-dependent transcriptional activation of luciferase was detected, however, when GAL4:Bma-EcR(LBD) was partnered with the chimeric VP16:Hs-LmRXR and treated with either Ponasterone-A or RSL1 (Fig. 8C). This is likely the result of ligand-dependent dimerization of the two receptor LBD fusions and subsequent transactivation *via* the VP16 activation domain. This result clearly demonstrates the ability of the Bma-EcR LBD to transduce the action of the ecdysteroid Ponasterone-A and the ecdysteroid agonist RSL1 in the transfected cells.

The dimerization and transactivation studies presented here show that Bma-EcR is able to heterodimerize with Bma-RXR in a cellular context and is capable of triggering a transcriptional response in an ecdysteroid-specific manner. These observations, taken together with their expression profiles, suggest that Bma-EcR and Bma-RXR have the prerequisite functional properties to constitute a functional *Brugia malayi* ecdysone receptor.

## Ecdysone-dependent transcription in *B. malayi*: a reporter assay

The existence of homologs of both protein components of the ecdysone receptor in *B. malayi*, which possess functional dimerization and DNA-binding properties, and the earlier pharmacological observations by H. Rees [33],[34] suggest that ecdysone could function as a transcriptional regulatory ligand in *B. malayi*. To directly test this hypothesis, we employed a recently established transient transformation technique to explore whether ecdysteroids can activate transcription in *B. malayi* using a reporter assay. Recent studies have demonstrated that the 5′ UTR of the gene encoding the 12 kDa small subunit ribosomal protein of *B. malayi* (BmRPS12) is capable of acting as a promoter when used to drive the expression of a luciferase reporter gene in transiently transfected *B. malayi* embryos [45]. The BmRPS12 promoter contains 5¾ copies of an almost exact 44 nt repeat that acts as an enhancer element [45]. This promoter construct driving the expression of firefly luciferase (construct BmRPS12 (−641 to −1)/luc) was used to develop a reporter for *B. malayi* in which the enhancer repeat element was replaced with canonical ecdysone response elements (EcREs). We constructed the EcRE-BmRPS12-luciferase reporter (as described in Methods) using the PAL-1 element that Bma-EcR is capable of binding *in vitro* (Fig. 4B). This construct was tested for transcriptional activity in transfected *B. malayi* embryos, which were exposed to 20-OH ecdysone (20-E) or solvent alone before being assayed for luciferase reporter activity. As shown in Fig. 9A, ecdysteroid treatment resulted in a significant increase of reporter gene activity in cultures exposed to 20-E relative to control cultures (transfected in parallel with the same construct but exposed to solvent alone). This response to 20-E requires the presence of the EcRE sequence, since a construct lacking the EcRE did not exhibit any increase in luciferase activity in response to 20-E.
Similarly, the response was strictly dependent on hormone, as constructs containing the EcRE produced levels of activity that were not significantly different from those obtained with the construct lacking the EcRE in the absence of 20-E. Constructs containing the EcRE in both orientations were equally responsive to 20-E treatment, in keeping with previous studies demonstrating the symmetric nature of the binding of nuclear hormone receptors to their cognate response elements [43]. The response to 20-OH ecdysone was dose-dependent, reaching a plateau at 5 µM (Fig. 9B). These results provide molecular evidence for the function of ecdysone in transcriptional responses of *B. malayi* and reveal the functional operation of a corresponding signaling system.
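A dose-response of this shape is commonly summarized by fitting a Hill curve and reporting an EC50. The authors report only the plateau, not such a fit; the sketch below merely illustrates the procedure with scipy.optimize.curve_fit on invented data points, so the printed EC50 is purely hypothetical:

```python
import numpy as np
from scipy.optimize import curve_fit

def hill(c, bottom, top, ec50, n):
    """Four-parameter Hill equation: activity as a function of dose c."""
    return bottom + (top - bottom) / (1.0 + (ec50 / c) ** n)

conc = np.array([0.1, 0.5, 1.0, 2.5, 5.0, 10.0])      # 20-E, uM (invented)
activity = np.array([1.1, 1.6, 2.4, 3.2, 3.6, 3.65])  # fold over solvent

params, _ = curve_fit(hill, conc, activity, p0=[1.0, 3.7, 1.0, 1.5])
print("EC50 ~ %.2f uM" % params[2])
```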
# Discussion

Molting in ecdysozoans has been studied most extensively in insects. In insects, EcR and USP initiate the transduction of the molt-triggering signal [4],[9]. Molting progression is mediated by the expression and activation of a number of well-characterized genes, including additional nuclear receptors [16],[17],[54]. In contrast, in nematodes molting initiation and the molecular signaling responsible for its progression are only now starting to be understood. An RNAi screen in *C. elegans* for genes involved in molting has revealed a large number of "molting" genes, which encode proteins ranging from transcription factors and intercellular signaling molecules to proteases and protease inhibitors. However, no signal has been specifically identified as the putative molting trigger [55]. Expression profiles of *C. elegans* "ecdysone cascade" nuclear receptors during molting cycles parallel the expression of their homologs in insects [56], and *nhr-23*, *nhr-25*, *nhr-41*, and *nhr-85*, the *C. elegans* orthologs of *DHR3*, *Ftz-F1*, *DHR78*, and *E75*, respectively, have been shown to be important for proper molting and/or dauer larva formation [26],[56]–[58]. The fact that the *C. elegans* genome contains no identifiable homologs of *EcR* or *rxr* [23], and that no ecdysteroids have been identified in this nematode, has led to the suggestion that ecdysone itself is unlikely to be the molting hormone in this free-living nematode [55].

Our previous studies demonstrated the existence of an *rxr* homolog in the canine filarial nematode *D. immitis* [36]. The isolation of *Di-rxr-1* indicated that, in contrast to *C. elegans*, filarial nematodes might contain different sets of NRs. The isolation of homologs of *EcR* and *rxr* in *Brugia malayi* presented here demonstrates that filarial nematodes express both components of the ecdysone receptor, and these nuclear receptors show dimerization, DNA-binding, and hormone-binding characteristics similar to those of the canonical insect ecdysone receptors. Our phylogenetic analyses place the two receptors in the corresponding branches of the superfamily tree. They also indicate a rapid evolution of the LBDs. The LBDs of nematode RXRs are extremely divergent, on a scale similar to that of the *Schistosoma* RXR LBD. Subsequent to our identification of *EcR* and *rxr* homologs in *Brugia*, the sequencing of the genome was completed, identifying additional putative nuclear receptors in the ecdysone signaling cascade [51].

We cloned two *Bma-EcR* and one *Bma-rxr* mRNA isoforms. Northern blot analyses revealed *Bma-EcR* and *Bma-rxr* expression in adult males, females and L1s. In addition, RT-PCR analyses indicate that *Bma-EcR* is also present in the L1, L2 and L3 larval stages. Since females contain developing embryos, it is not possible to differentiate between embryonic and female-specific expression of these two nuclear receptors in *B. malayi*. In insects, *EcR* has been shown to be critical for both embryonic development and oogenesis [15],[59],[60], and in filarial nematodes ecdysone treatment releases meiotic arrest and stimulates microfilaria release [34]. Expression of *EcR* and *rxr* homologs in *B. malayi* females points to possible functions of the ecdysone receptor in nematode oogenesis and/or embryogenesis as well.

The expression pattern of *Bma-RXR* differs somewhat from that of the other filarial *rxr* identified to date, *Di-rxr-1*, which is expressed in males but not females [36]. In insects, the rxr homologue *Ultraspiracle* (USP) is considered the main functional partner of EcR, and as such its expression overlaps with that of *EcR* [5]. This also seems to be the case in *B. malayi*, where we observed that at least one isoform of *Bma-rxr* has an expression pattern overlapping with that of *Bma-EcR*. However, two other *Bma-rxr* isoforms appear to be expressed specifically in females.

The sequence differences between the *B. malayi* and *D. immitis* RXRs may mirror the differences in expression patterns of the two RXR homologues. Whether these differences in sequence and expression pattern correlate with differences in ligand interaction and/or function remains an open question.

Both Bma-EcRA and a putative isoform B are able to bind a canonical ecdysone response element (EcRE) when partnered with USP. The question of whether isoform B exists in *B. malayi* (as in *D. immitis*) remains unanswered. We have shown that such an isoform is biochemically active, being able to dimerize with an insect USP and bind an EcRE *in vitro*. Furthermore, isoform B is the most similar to the insect EcRs. Bma-EcRB contains a shorter (i.e. canonical) "T-box" region than Bma-EcRA (Fig. 1). The "T-box" region has been described as being able to modulate DNA binding to extended hormone response elements [61]. The presence of possible sequence variation in the "T-box" region of these two *Bma-EcR* variants could point to the possibility of isoform-specific differences in interactions with DNA target sequences.

Bma-EcRC contains a truncated LBD and is similar in organization to the estrogen-alpha variant Delta-5, which displays dominant-negative activity [62]. As we have shown, isoform C is unable to dimerize with a *bona fide* USP to bind the palindromic EcRE. These data suggest that Bma-EcRC may carry out a novel function that is independent of any interactions with an RXR partner. Establishing the role of *Bma-EcRC* is the aim of future investigations.

The sequence in the region of helices 11–12 in the LBDs of the *B. malayi* and *D. immitis* EcR and RXR homologues is strikingly divergent, both between the two species and relative to other EcRs and RXRs. The most prominent feature in Bma-EcR is the absence of conserved helix 12 residues. This difference raises the question of what constitutes a functional activation function corresponding to AF2 in these nematode members of the nuclear receptor family. Our transcriptional activation assay results clearly show that the two receptors can dimerize and that the LBD of Bma-EcR is capable of transducing an ecdysteroid signal in a cellular context.
Even though our analysis was carried out in a heterologous system, this type of assay has been shown to be highly informative for LBD-ligand interactions \\[44\\]. In this system, however, strong constitutive dimerization of receptor partners can obscure possible transcriptional effects of the ligand. Our results obtained with the chimeric RXR-LBD (which confers low constitutive dimerization) as a partner, indicate that the Bma-EcR LBD does show an ecdysteroid response. Evidence of hormone binding from these transactivation assays and the absence of a recognizable AF2 motif in Bma-EcR suggest that this receptor utilizes different features to achieve equivalent transcriptional functions than its insect counterparts.\n\nThe identification of the putative ecdysone receptor components presented here provides strong support to the long standing hypothesis that ecdysteroids play a role in filarial nematode embryogenesis and molting similar to their role in insects.\\[4\\],\\[32\\]. Ecdysteroids have been detected in a number of nematodes (reviewed by Barker and Rees, \\[32\\]). When *in vitro* cultivation of *Onchocerca volvulus* microfilaria was attained, it was observed that the addition of 20E to the culture media resulted in L1 larva progressing to the infective L3 stage \\[63\\]. This observation is consistent with the fact that after the bloodmeal, mosquitoes raise their ecdysteroid level, which correlates with the subsequent rapid molting of the ingested L1 larvae to the L2 stage. We attempted to directly demonstrate that ecdysone can act as a transcriptional trigger *in vivo* using a transient transformation reporter assay. Indeed, significant activity was observed in response to ecdysone . Our transgenic *Brugia* experiments confirm the *in vivo* functionality of both a consensus EcRE and 20-hydroxyecdysone in measurable transcriptional activity. Although we present no data to establish that the observed activation is mediated by the receptor(s) we have cloned, our results in conjunction with previous studies on this subject confirm that filarial nematodes in particular, contain and express the gene components of a functional ecdysone signaling system that is quite similar to that of other ecdysozoa. The role of this signaling system in filarial development will be the subject of further studies. Furthermore, the existence of a functional ecdysone signaling pathway in filarial nematodes does point to the possibility of using a novel approach for the development of drugs to fight filariasis based on testing of pre-existing compounds that specifically target the ecdysone pathway \\[64\\].\n\n# Supporting Information\n\nPhylogenetic tree of all Nuclear receptors. Maximum likelihood phylogenetic tree generated with all nuclear receptor sequences obtained from SwissProt and GenBank constructed as described in the Methods<\/a>. The positions of Bma-EcR and Bma-RXR reported here are indicated by arrows. The accession numbers and the statistical aLRT support for the branches are indicated.\n\n(0.23 MB TIF)\n\nClick here for additional data file.\n\nSub-trees containing EcRs and RXRs. Sub-trees from the phylogeny of Figure S1<\/a> containing all EcRs (left) or all RXRs (right). The accession numbers and the statistical aLRT support for the branches are indicated.\n\n(0.27 MB TIF)\n\nClick here for additional data file.\n\nIn vitro translated proteins used in Figure 4<\/a>. 
SDS-PAGE of ^35^S-labeled in vitro translated proteins used in Figure 4, showing the size and relative amounts of the three Bma-EcR isoforms and AaUSP. One µL of each in vitro translated protein was analyzed by autoradiography of the dried gel.

(0.13 MB TIF)

HsLmRXR-VP16 (LBD) chimera activates the reporter upon RSL-1 treatment only in the presence of a responsive EcR heterodimer partner. Transactivation assay with the chimeric VP16:Hs-LmRXR(LBD) used in Figure 8. The same construct was transfected along with a Gal4:CfEcR(LBD) fusion or alone in NIH-3T3 cells using the same experimental protocols as for Figure 8. Activation of the reporter is observed only upon induction with the ecdysone agonist RSL-1. CfEcR(LBD) encodes the LBD of the ecdysteroid receptor from *Choristoneura fumiferana* [44].

(0.07 MB DOC)

List of accession numbers for all EcR and RXR sequences used in the phylogenetic analyses shown in Figures 2 and 6.

(0.07 MB DOC)

We thank the filarial genome network for resources, Dr. A. Raikhel, UC Riverside, for generously providing AaUSP constructs, and Dr. C. Shea for reagents and helpful comments on the manuscript. We are also grateful to Dr. Donald Comb for his continuous support of this project.

# References

[^1]: **¤:** Current address: Department of Biology, University of Louisiana at Monroe, Monroe, Louisiana 71209, United States of America

[^2]: Conceived and designed the experiments: GT TRU CVM. Performed the experiments: GT ALE SRP CRG CL. Analyzed the data: GT ALE MRR CRG TRU CVM. Contributed reagents/materials/analysis tools: SRP TRU. Wrote the paper: GT ALE. Cloning of BmaEcR, OvRXR, construction of plasmids, and protein-DNA binding: GT. Cloning of BmaRXR, Northern blots: ALE. Mammalian cell 2-hybrid assays: SRP. Phylogenetic analysis of sequence data: MRR. Cloning of BmaRXR: CRG. Promoter for *Brugia* reporter assay and *Brugia* biolistic transformation assays: TRU. Cloning of nematode receptors, sequence analysis: CVM.

abstract: The Hypertext Atlas of Dermatopathology, the Atlas of Fetal and Neonatal Pathology and the Hypertext Atlas of Pathology (this one in Czech only) are available at . These atlases offer many clinical, macroscopic and microscopic images, together with short introductory texts. Most of the images are annotated, and arrows pointing to the important parts of the image can be activated.

The Virtual Microscope interface is used for access to histological images obtained at high resolution with an automated microscope and image stitching, possibly in several focusing planes. Parts of the image prepared in advance are downloaded on demand to save the memory of the user's computer.
The virtual microscope is programmed in JavaScript only and works in Firefox/Mozilla and MSIE browsers without the need to install any additional software.
author: Josef Feit; Luděk Matyska; Vladimír Ulman; Lukáš Hejtmánek; Hana Jedličková; Marta Ježová; Mojmír Moulis; Věra Feitová
date: 2008
institute: 1Institute of Pathology, Masaryk University, Brno, Czech Republic; 2Faculty of Informatics, Masaryk University, Brno, Czech Republic; 3Dept. of Dermatovenerology, St. Anna Hospital, Masaryk University, Brno, Czech Republic; 4Dept. of Radiology, St. Anna Hospital, Masaryk University, Brno, Czech Republic
references:
title: Virtual microscope interface to high resolution histological images

# Introduction

The Hypertext Atlas of Dermatopathology [1] has been available on the Internet since 2008. It contains 4840 clinical, macroscopic and histologic images. The Atlas of Fetal and Neonatal Pathology has recently become available as well, as has the Atlas of Pathology for pre-graduate students of medicine (in Czech only), and new atlases are under preparation (today with about 2300 images).

The atlases contain annotated images (arrows pointing to important parts of the images can be activated) and short introductory texts. Histological images are taken using a motorized microscope that acquires image parts, which are later stitched together. The image stitching is based on analysis of the overlapping parts of individual image tiles; the tiles are joined together using a gradient running along a randomly generated curve to obtain the best cosmetic results.

# Methods

## Hardware, image acquisition

A Leica DMLA microscope with a set of PlanApo lenses (HC Fluotar 5/0.15, HC PlanApo 10/0.30, 20/0.50, 40/0.70, 100/1.35 and a Plan 2/0.07 lens) equipped with the Nikon DMX-1200 digital camera is used to obtain image parts at a resolution of 1200 × 1020 pixels, 3 × 8 bit colour. A motorized stage (Merzhäuser) is automatically moved from one image to another. The system is controlled by Lucia DI (LIM, Prague). Home-made software is used to create the composed, very large pictures and to prepare the virtual microscope image stacks.

## Source texts of the atlas

The atlas source texts are in XML data format. Programs (written mostly in the Perl 5.8 programming language) are used to parse and check the document structure and to generate the final HTML files, which are uploaded to the server. The overall size of the atlases is about 110 GB of data.

## Image post-processing, virtual microscope

Each image part can be taken at one or more focusing levels. This image stack can be processed by a pan-focusing function, which selects the sharp areas of each image in the stack to create one image tile (as sketched below), overcoming the problem of image artefacts caused by uneven slides. This feature is especially useful if only slides of suboptimal quality are available (slides of rare cases, from old slide collections etc.). We usually use 3 levels; the distance between their focusing planes varies according to the objective used.

Alternatively, the whole image stacks are taken and saved. The resulting images are created by stitching the tiles from each plane separately. This approach allows the creation of multilevel images, which can be focused. We use it especially for images taken at high resolution with the 100× immersion lens, as in bone marrow biopsies.
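To make the two ideas above concrete — per-pixel focus selection from a stack, and the calculation of which fixed-size tiles cover a given viewport — the following fragment gives a minimal sketch in Python. It is an illustration only: the production tools are written in Perl and JavaScript, and the sharpness measure (absolute Laplacian response) as well as all function and variable names are our assumptions, not the actual implementation.

```python
import numpy as np
from scipy.ndimage import laplace

def pan_focus(stack):
    """Compose one sharp tile from a focal stack.

    stack: array of shape (planes, height, width), grayscale.
    For every pixel, the value is taken from the plane whose local
    contrast (absolute Laplacian response) is highest -- a common
    proxy for 'in focus'.
    """
    sharpness = np.abs([laplace(plane.astype(float)) for plane in stack])
    best = np.argmax(sharpness, axis=0)        # sharpest plane per pixel
    rows, cols = np.indices(best.shape)
    return stack[best, rows, cols]

def visible_tiles(vx, vy, vw, vh, tile=256):
    """Indices (column, row) of all tiles of side `tile` pixels that
    intersect a viewport with origin (vx, vy) and size (vw, vh) --
    i.e. the image parts the viewer must fetch for the current view."""
    c0, r0 = vx // tile, vy // tile
    c1, r1 = (vx + vw - 1) // tile, (vy + vh - 1) // tile
    return [(c, r) for r in range(r0, r1 + 1) for c in range(c0, c1 + 1)]
```

In the viewer, each (column, row) pair, combined with the magnification and focusing level, maps to one file in the structured directories described below.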
We usually use 5 or 7 focusing planes, sometimes more (up to 13).

After stitching, each image is digitally manipulated (colour correction, sharpening), archived and cut into pieces whose sides are multiples of 256 pixels. Larger pieces (512 pixels and more) are converted into 256-pixel parts. All these image parts are saved into structured directories.

The browser loads the proper parts of the image according to the current viewport, magnification and focusing level (compare the sketch above). It reacts to the user's actions (image dragging, magnification changes, changes in the size of the virtual microscope window, focusing) by catching events; it calculates the names of the corresponding new image tiles, which are loaded and added into the DOM of the image being displayed. Image parts that have moved out of the viewport are released from memory.

The individual parts can be stored on the server or locally on disc. No special server application is needed. The magnification can be changed, images can be dragged and images saved in stacks can be focused. Users can change the size of the virtual microscope window up to the full screen.

# Results

Our atlases are available at . Access requires registration, which is free. In January 2008 about 1450 users were registered. The combined size of all the atlases is over 110 GB of data.

The virtual microscope interface works reasonably smoothly. This approach does not require installation of any additional software (but JavaScript must be enabled in the browser). MSIE from version 4 onwards and the Mozilla family of Internet browsers are supported.

The interface consists of several windows: the text of the atlas, a window with the image at its basic (900 px) size with possible activation of arrows, a list of signs, and the virtual microscope window with magnification and focus controls (see Figure 1). Users can open several images at the same time for easy comparison.

# Discussion

Publishing teaching materials on the Internet has many advantages. In properly designed publication systems the complexity does not grow with the extent of the source texts and images. It is easy to publish a new version, to add new materials or to reflect comments and wishes of the students. The quality of the images is very high, usually much better than in printed textbooks, and their number is virtually unlimited. Moreover, the virtual microscope offers new experiences to the students (dragging, focusing, changing magnification), leading to a more active approach to learning [2]. Virtual slides can capture whole tissue specimens, not only selected areas ("negative" areas are important as well), and can be used in quizzes too. In our teaching labs we do not use microscopes any more. Digital slides do not wear out, can easily be replaced if a new, more instructive case becomes available, and can be accessed at any time from anywhere. The virtual microscope is suitable for preparing diagnostic seminars and reference collections. One slide is enough to prepare a case, so even small specimens can be used for a seminar without the danger of cutting through the tissue before a representative specimen is available for each student or participant.

# Conclusion

Our atlases are continuously upgraded and expanded. In addition to the above-mentioned Atlas of Dermatopathology and Atlas of Fetal and Neonatal Pathology, we are preparing new atlases (of muscle pathology and bone marrow biopsy).
In the future, sharing of our images will be possible as well, so that other teachers will be able to include links to images in our atlases, comment on them according to their taste and still have access to all the features of the virtual microscope.

### Acknowledgements

This work was supported by Project MediGRID (1ET202090537) of the Information Society program.

This article has been published as part of *Diagnostic Pathology* Volume 3 Supplement 1, 2008: New trends in digital pathology: Proceedings of the 9th European Congress on Telepathology and 3rd International Congress on Virtual Microscopy. The full contents of the supplement are available online at 

abstract: With increasing dietary cholesterol intake the liver switches from a mainly resilient to a predominantly inflammatory state, which is associated with early lesion formation.
author: Robert Kleemann; Lars Verschuren; Marjan J van Erk; Yuri Nikolsky; Nicole HP Cnubben; Elwin R Verheij; Age K Smilde; Henk FJ Hendriks; Susanne Zadelaar; Graham J Smith; Valery Kaznacheev; Tatiana Nikolskaya; Anton Melnikov; Eva Hurt-Camejo; Jan van der Greef; Ben van Ommen; Teake Kooistra
date: 2007
institute: 1Department of Vascular and Metabolic Diseases, TNO-Quality of Life, BioSciences, Gaubius Laboratory, Zernikedreef 9, 2333 CK Leiden, The Netherlands; 2Department of Vascular Surgery, Leiden University Medical Center, Albinusdreef 2, 2300 RC Leiden, The Netherlands; 3Department of Physiological Genomics, TNO-Quality of Life, BioSciences, Utrechtseweg 48, 3704 HE Zeist, The Netherlands; 4GeneGo Inc., Renaissance Drive, St Joseph, MI 49085, USA; 5Department of Analytical Research, TNO-Quality of Life, Quality and Safety, Utrechtseweg 48, 3704 HE Zeist, The Netherlands; 6AstraZeneca, CV&GI Research, Silk Road Business Park, Macclesfield, Cheshire SK10 2NA, UK; 7Vavilov Institute for General Genetics, Russian Academy of Science, Gubkin Street 3, 117809 Moscow, Russia; 8AstraZeneca CV&GI Research, 43183 Mölndal, Sweden
references:
title: Atherosclerosis and liver inflammation induced by increased dietary cholesterol intake: a combined transcriptomics and metabolomics analysis

# Background

Atherosclerosis is a multifactorial disease of the large arteries and the leading cause of morbidity and mortality in industrialized countries [1]. There is ample evidence that hypercholesterolemia (that is, elevated plasma levels of low-density lipoprotein (LDL) and very low-density lipoprotein (VLDL)) induced by genetic modification or enhanced intake of dietary lipids is a major causative factor in atherogenesis [2,3].
It is equally clear that from the very beginning of lesion formation, atherogenesis requires an inflammatory component, which is thought to drive the progression of the disease [4,5]. Indeed, some of the variation in the rate of lesion progression in different individuals may relate to variations in their basal inflammatory state [6,7]. However, while the inflammatory processes in the complex evolution of the lesion from the early fatty streak to a fibrous plaque are considered self-perpetuating phenomena, the initial trigger and origin of the inflammatory component in hypercholesterolemia remain enigmatic [6,8].

Recent observations by us and others suggest that the liver plays a key role in the inflammatory response evoked by dietary constituents (reviewed in [8,9]). For example, liver-derived inflammation markers such as C-reactive protein (CRP) and serum amyloid A (SAA) increase rapidly (within days) after consumption of an excess amount of dietary lipids [8,10], and thus precede the onset of early aortic lesion formation by far [8]. These findings suggest that nutritional cholesterol itself may contribute to the evolution of the inflammatory component of atherogenesis. We postulate that pro-atherogenic inflammatory factors originate at least partly from the liver. We also hypothesize that these factors come into play at high dietary cholesterol doses, because of the exponential rather than linear nature of the relationship between cholesterol intake (measured as cholesterol plasma levels) and atherosclerotic lesion size [11,12], as specified in more detail in Additional data file 1.

In this study we sought evidence for the hypothesis that inflammation and hypercholesterolemia are not separate factors, but closely related features of the same trigger, dietary cholesterol. In particular, using a variety of newly developed functional bioinformatics tools, we addressed the question of how the liver responds to increasing dietary cholesterol loads at the gene transcription level and analyzed how hepatic cholesterol metabolism is linked to the hepatic inflammatory response, including the underlying regulatory mechanisms. Notably, all analyses were performed at a very early stage of the atherogenic process (that is, after 10 weeks of cholesterol feeding) to limit potential feedback reactions from the vessel wall.

An established model for cholesterol-induced atherosclerosis, ApoE\*3Leiden transgenic (E3L) mice, allowed the application of experimental conditions that mimic the human situation: E3L mice display a lipoprotein profile similar to that of humans suffering from dysbetalipoproteinemia and develop atherosclerotic lesions that resemble human lesions with regard to morphology and cellular composition [13,14]. E3L mice were exposed to increasing doses of dietary cholesterol (as the only dietary variable modulated), and liver genome and metabolome datasets were analyzed in a unique context, that is, at the time point of first lesion development. Advanced (functional) bioinformatic analysis allowed us to merge metabolome and transcriptome datasets and to analyze pathways and biochemical processes comprehensively.
Recent advances in systems biology (for example, new biological process software for network building and data mining) have enabled us to discover significant relationships and to identify transcriptional master regulators that control gene expression changes and are ultimately responsible for effects at the process level.

# Results

## Effect of dietary cholesterol load on plasma lipids and early atherosclerosis

Treatment of female E3L mice with a cholesterol-free (Con), a low-cholesterol (LC) or a high-cholesterol (HC) diet resulted in total plasma cholesterol concentrations that stabilized at 5.9 ± 0.3 mM, 13.3 ± 1.9 mM and 17.9 ± 2.4 mM, respectively. The increase in plasma cholesterol in the LC and HC groups was confined to the pro-atherogenic lipoprotein particles VLDL and LDL (Figure 1a). High-density lipoprotein (HDL) levels and plasma triglyceride levels were comparable between the groups (Figure 1a and Table 1). The plasma levels of alanine aminotransferase (ALAT) and aspartate aminotransferase (ASAT), two markers of liver function, were comparable in the Con and LC groups and slightly elevated in the HC group.

Effects of dietary cholesterol on plasma lipids and inflammation markers

| | Con | LC | HC |
|---------------------------|------------|--------------|-----------------|
| Body weight (start) (g) | 20.3 ± 1.5 | 20.8 ± 1.5 | 20.6 ± 0.9 |
| Body weight gain (g) | 0.4 ± 0.7 | 0.7 ± 0.8 | 0.6 ± 0.5 |
| Food intake (g/day) | 2.6 ± 0.2 | 2.9 ± 0.3\* | 2.5 ± 0.2^†^ |
| Plasma cholesterol (mM) | 5.9 ± 0.3 | 13.3 ± 1.9\* | 17.9 ± 2.4\*^†^ |
| Plasma triglyceride (mM) | 1.7 ± 0.4 | 2.3 ± 0.3 | 2.1 ± 0.7 |
| Plasma E-selectin (μg/ml) | 44.3 ± 2.3 | 44.3 ± 6.3 | 55.1 ± 8.5\*^†^ |
| Plasma SAA (μg/ml) | 2.8 ± 0.6 | 4.7 ± 1.7 | 8.3 ± 2.7\*^†^ |
| Plasma ALAT (U/mL) | 48 ± 44 | 45 ± 22 | 75 ± 23 |
| Plasma ASAT (U/mL) | 260 ± 123 | 237 ± 57 | 569 ± 221\*^†^ |

Three groups of female E3L mice were fed either a cholesterol-free (Con) diet or the same diet supplemented with 0.25% (LC) or 1.0% (HC) w/w cholesterol. Listed are the average body weight at the start (t = 0) of the experimental period together with the body weight gain, the average daily food intake and the average plasma levels of cholesterol, triglycerides, E-selectin, serum amyloid A (SAA), alanine aminotransferase (ALAT) and aspartate aminotransferase (ASAT). All data are mean ± standard deviation. \**P* < 0.05 versus Con; ^†^*P* < 0.05 versus LC (ANOVA, least significant difference *post hoc* test).

After 10 weeks of dietary treatment, the animals were euthanized to score early atherosclerosis. Longitudinally opened aortas of the Con and LC groups were essentially lesion-free (*en face* oil red O staining), while aortas of the HC group already contained lesions (not shown). Consistent with the presence of atherosclerosis in the HC group, the vascular inflammation marker E-selectin was elevated only in this group (Table 1). Early onset of atherosclerosis was analyzed in more detail in the valve area of the aortic root (Figure 1b), a region in which lesions develop first [15]. The total cross-sectional lesion area under basal conditions was 1,900 ± 900 μm^2^ (Con group; Figure 1c).
Compared to the Con group, the lesion area was moderately increased in the LC group (4.2-fold; *P* < 0.05) and strongly increased in the HC group (19.5-fold; *P* < 0.05), confirming the exponential rather than linear relationship between total plasma cholesterol levels and lesion area.

Next, lesions in the aortic root were graded according to the classification of the American Heart Association. Under control conditions (Con), only about 10% of the aortic segments contained lesions, all of which were very mild type I lesions not identified by *en face* staining (Figure 1d). In the LC group, more (40%) aortic segments showed lesions, of which 38% were mild type I-III lesions and 2% were severe type IV lesions. In the HC group, 81% of the aortic segments displayed lesions, most of which were mild (76% type I-III lesions; 5% type IV). The predominance of mild-type lesions confirms an early stage of atherosclerotic disease in all groups. Notably, a positive association was observed between the cross-sectional lesion area and the plasma levels of SAA, an inflammation marker formed in the liver. SAA was significantly elevated in the HC group, pointing to a hepatic inflammatory response to cholesterol feeding (Table 1) that is associated with early atherosclerotic lesion formation.

## Analysis of the hepatic gene response to increasing doses of dietary cholesterol

To get insight into the complex traits underlying the (patho)physiological response of the liver to dietary cholesterol, whole-genome and metabolome measurements were made. Compared to the Con group, a relatively small number of genes (551) significantly changed with LC treatment (Figure 2). HC treatment modulated most (440 out of 551) of these genes and, additionally, affected 1,896 other genes. The individual gene expression profiles within a treatment group were very similar and formed clusters, as confirmed by hierarchical clustering analysis (not shown). Differences in gene expression between the treatment groups were validated and confirmed for a selected group of genes by RT-PCR (Additional data file 2).

Standard Gene Ontology (GO) biological process annotation allowed categorization of 52% of the differentially expressed genes based on their biological function (Table 2). LC treatment predominantly affected genes belonging to lipid and lipoprotein metabolism, protein metabolism, carbohydrate metabolism, energy metabolism and transport.
HC treatment affected the same GO groups and, additionally, genes relevant to immune and inflammatory responses, cell proliferation, apoptosis, cell adhesion and cytoskeleton integrity (Table 2 and Additional data file 3).

Overview of genes that are differentially expressed in response to cholesterol

| GO category | LC Up | LC Down | LC Total | HC Up | HC Down | HC Total |
|----|----|----|----|----|----|----|
| Lipid and lipoprotein metabolism (includes cholesterol and steroid metabolism) | 8 | 50 | 58 | 37 | 114 | 151 |
| Protein metabolism (includes protein folding and breakdown) | 34 | 14 | 48 | 143 | 98 | 241 |
| Other metabolism (includes carbohydrate metabolism) | 32 | 19 | 51 | 122 | 130 | 252 |
| Generation of precursor metabolites and energy | 10 | 15 | 25 | 24 | 47 | 71 |
| Transport | 31 | 15 | 46 | 125 | 77 | 202 |
| Immune and stress response/inflammation | 19 | 7 | 26 | 99 | 49 | 148 |
| Cell proliferation/apoptosis | 9 | 3 | 12 | 37 | 18 | 55 |
| Cell adhesion/cytoskeleton | 10 | 1 | 11 | 76 | 8 | 84 |

Differentially expressed genes of the LC and HC groups (ANOVA FDR < 0.05 and *t*-test compared to the Con group, *P* < 0.01) were analyzed according to standard GO biological process annotation and grouped into functional categories; up-, down-regulated and total gene counts are given for each group.

To refine the liver transcriptome data analysis and to define which biological processes are switched on/off with increasing dietary cholesterol loads, we performed gene enrichment analysis in four different functional ontologies: biological processes, canonical pathway maps, cellular processes and disease categories, using MetaCore™. This allowed us to analyze functionally related genes (for example, genes belonging to a specific biochemical process) as a whole. Table 3 summarizes the significantly changed biological processes for the LC and HC groups.
Four key ('master') process categories were affected by cholesterol feeding: lipid metabolism; carbohydrate and amino acid metabolism; transport; and immune and inflammatory responses.

Analysis of processes that are changed significantly upon treatment with dietary cholesterol

| Master process | Subprocess (child terms) | Number of genes measured | LC differentially expressed (%) | HC differentially expressed (%) |
|----|----|----|----|----|
| Lipid metabolism | | 264 | 8.7\* | 24.2\* |
| | Fatty acid metabolism, fatty acid beta-oxidation | 8 | 0.0 | 50.0\* |
| | Triacylglycerol metabolism | 7 | 0.0 | 57.1\* |
| | Cholesterol metabolism | 27 | 33.3\* | 33.3\* |
| | Cholesterol biosynthesis | 7 | 71.4\* | 57.1\* |
| | Lipoprotein metabolism | 18 | 16.7\* | 44.4\* |
| | Lipid biosynthesis | 105 | 11.4\* | 23.8\* |
| Immune response | | 297 | 3.0 | 12.1\* |
| | Antigen presentation, exogenous antigen | 10 | 10.0 | 70.0\* |
| | Antigen processing | 17 | 5.9 | 35.3\* |
| | Acute-phase response | 11 | 9.1 | 36.4\* |
| General metabolism | | 3,600 | 3.3 | 13.1\* |
| | Cellular polysaccharide metabolism | 19 | 5.3 | 26.3\* |
| | Polysaccharide biosynthesis | 9 | 0.0 | 33.3\* |
| | Cofactor metabolism | 116 | 5.2 | 21.6\* |
| | Regulation of translational initiation | 9 | 0.0 | 44.4\* |
| | Amino acid metabolism | 103 | 2.9 | 20.4\* |
| Transport | | 1,119 | 2.9 | 14.3\* |
| | Intracellular protein transport | 161 | 3.7 | 19.9\* |
| | Golgi vesicle transport | 16 | 6.3 | 37.5\* |
| | Mitochondrial transport | 11 | 18.2\* | 54.5\* |

Master processes and their subprocesses (child terms) are listed together with the number of genes measured (third column). Percentages reflect the fraction of genes differentially expressed (within a specific process or pathway) in the LC and HC groups compared to the Con group. Relevant biological processes were identified in GenMAPP by comparison of the set of differentially expressed genes (ANOVA; *P* < 0.01 and FDR < 0.05) with all genes present on the array. \*Biological processes with a Z-score >2 and a PermuteP < 0.05.

In the LC group, the most significant effects occurred within the master process of lipid metabolism. Important subprocesses (that is, processes in which more than 10% of process-related genes changed significantly) were lipid biosynthesis, lipoprotein metabolism, cholesterol metabolism and cholesterol biosynthesis (Table 3). The overall functional effect of LC treatment can be summarized as a substantial down-regulation of cholesterol and lipid metabolism. This adaptive response of the liver indicates metabolic liver resilience up to doses of 0.25% (w/w) cholesterol.

A further increase of dietary cholesterol (1% w/w; HC group) intensified the changes in gene expression seen with LC treatment, indicating further metabolic adaptation. For example, all individual genes of the cholesterol biosynthesis pathway were down-regulated to a greater extent by HC than by LC treatment (see pathway map in Additional data file 4a): the gene of the rate-limiting enzyme of this pathway, *HMG CoA reductase* (*HDMH*), was down-regulated 2.8-fold and 10.6-fold by LC and HC treatments, respectively.
Similarly, genes relevant for lipid and lipoprotein metabolism, *LDL receptor* (LC group, 1.3-fold down-regulated; HC group, 1.9-fold down-regulated) and *lipoprotein lipase* (LC group, 1.8-fold up-regulated; HC group, 5.5-fold up-regulated), were dose-dependently modulated.

Besides marked effects on 'lipid metabolism', HC treatment induced significant changes in the master processes 'general metabolism', 'transport' and 'immune and inflammatory response' (Table 3). In particular, HC treatment enhanced the subprocesses involved in translational initiation, Golgi vesicle transport, mitochondrial transport, antigen presentation, antigen processing and the acute phase response by affecting the expression of more than 35% of the genes in these subprocesses.

Significantly, HC but not LC dietary stress activated specific inflammatory pathways (that is, the platelet-derived growth factor (PDGF), interferon-γ (IFNγ), interleukin-1 (IL-1) and tumor necrosis factor-α (TNFα) signaling pathways; Figure 3). Activation of these inflammatory pathways with HC treatment leads to a significant up-regulation of MAP kinases, complement factors and acute phase proteins such as SAA. HC treatment significantly up-regulates, for example, all four *SAA* isotype genes (Figure 4a), which is consistent with the observed changes in SAA protein concentrations in plasma (compare Table 1).

More generally, HC treatment induced many genes whose products reportedly or putatively initiate or mediate inflammatory events (Additional data file 3), including genes encoding proteases, complement components, chemokines and their receptors, heat shock proteins, adhesion molecules and integrins, acute phase proteins, and inflammatory transcription factors, altogether indicating a profound reprogramming of the liver towards an inflammatory state not observed with LC treatment.

In separate experiments using female E3L mice, the hepatic inflammatory response to cholesterol feeding was analyzed in more detail, including its dose-dependency and time course. Plasma SAA served as a marker and readout of liver inflammation. Feeding of cholesterol at doses up to 0.50% w/w did not alter plasma SAA levels (Figure 4b). At higher cholesterol doses (≥0.75% w/w), plasma SAA levels increased markedly. A time course study with 1% w/w cholesterol (HC diet) showed that plasma SAA levels started to increase after 2 weeks and continued to increase over time (Figure 4c). Together, these refined analyses indicate that the liver is resilient up to a cholesterol dose of about 0.50% w/w (adaptive response) and that the inflammatory response evoked by higher cholesterol concentrations begins within two weeks of starting the HC diet.

Enrichment analysis with disease categories confirmed the activation of many signaling and effector pathways relevant for inflammation and immunity by HC, but not by LC, treatment.
The most affected (that is, activated at the gene expression level) disease categories with HC treatment were interrelated cardiovascular disorders and (auto)immune diseases, including cerebral and intracranial arterial diseases, cerebral amyloid angiopathy, hepatocellular carcinoma, and hepatitis (Additional data file 4b).

Altogether, this global analysis shows that the liver responds to a low load of dietary cholesterol mainly by adapting its metabolic program, whereas at a high cholesterol load the liver is much more extensively reprogrammed and, in addition to metabolic adaptations, expresses genes involved in inflammatory stress.

## Analysis of diet-dependent metabolic changes in liver and plasma

To verify whether the switch from metabolic adaptation (with LC treatment) to hepatic inflammatory stress (with HC treatment) is also reflected at the metabolite level, we performed a comprehensive HPLC/MS-based lipidome analysis (measuring in total about 300 identified di- and triglycerides, phosphatidylcholines, lysophosphatidylcholines and cholesterol esters) on liver homogenates of the Con, LC and HC groups, and on corresponding plasma samples.

The individual metabolite fingerprints within a treatment group were similar and formed clusters as assessed by principal component analysis (PCA; Figure 5). The clusters of the Con and LC groups overlapped partly, demonstrating that the Con and LC groups have a similar intrahepatic lipid pattern. This indicates that the metabolic adjustments at the gene level in the LC group were efficacious and enabled the liver to cope with moderate dietary stress. The HC cluster did not overlap with the cluster of the Con group, demonstrating that the switch to a proinflammatory liver gene expression profile is accompanied by the development of a new hepatic metabolic state, which differs significantly from the metabolic state at baseline (Con group).

## Identification of transcriptional regulators that control the hepatic response to cholesterol

To identify the transcription factors and underlying regulatory mechanisms that govern the hepatic response to LC and HC stress, we performed a combined analysis of the liver transcriptome and metabolome datasets. Functional networks allowed the identification of transcriptional key ('master') regulators relevant for liver resilience and liver inflammation.

The adaptation of hepatic lipid metabolism to LC stress was mainly controlled by retinoid X receptor (RXR), SP-1, peroxisome proliferator activated receptor-α (PPARα), sterol regulatory element binding protein (SREBP)1 and SREBP2 (networks not shown), which are established positive regulators of genes involved in cholesterol biosynthesis [16]. Combined analysis of the genome and metabolite datasets revealed that the intrahepatic level of eicosapentaenoic acid, a suppressor of SREBP1 [17], was increased, providing a molecular explanation for the observed down-regulation of genes involved in cholesterol biosynthesis (Additional data file 5).

A subsequent network analysis of HC-modulated genes allowed the identification of transcription factors that mediate the evolution of hepatic inflammation and are ultimately responsible for the effects at the process level.
HC-evoked changes require specific transcriptional master regulators, some of which are established in this context (nuclear factor kappa B (NF-κB), activator protein-1 (AP-1), CAAT/enhancer-binding protein (C/EBP)β, p53), and others that are new (CREB-binding protein (CBP), hepatocyte nuclear factor-4α (HNF4α), SP-1, signal transducer and activator of transcription-3/-5 (STAT-3/-5), Yin Yang-1 (YY1); Figure 6 and Additional data file 6).

Consistent with this, the identified transcription factors control the expression of genes encoding acute phase response proteins, complement factors, growth factors, proteases, chemokine receptors and factors stimulating cell adhesion, as confirmed by data mining. Most importantly, HC treatment induced genes whose products can act extracellularly (Additional data file 7) and reportedly possess pro-atherogenic properties. Examples include complement components (C1qb, C1qR, C3aR1, C9), chemoattractant factors (ccl6, ccl12, ccl19), chemoattractant receptors (CCR2, CCR5), cytokines inducing impaired endothelial barrier function (IFN-γ), adhesion regulators (integrin β2, integrin β5, CD164 antigen/sialomucin, junction adhesion molecule-2), growth factors (PDGF, vascular endothelial growth factor (VEGF)-C, transforming growth factor (TGF)-β), proteases involved in matrix remodeling during atherogenesis (cathepsins B, L, S and Z; matrix metalloprotease-12), and cardiovascular risk factors/inflammation markers (haptoglobin, orosomucoid 2, fibrinogen-like protein 2, α1-microglobulin). This upregulation of pro-atherogenic candidate genes in the HC group is consistent with the enhanced early atherosclerosis observed in this group.

Expansion of the lipid and inflammatory networks revealed that hepatic lipid metabolism is linked to the hepatic inflammatory response via specific transcriptional regulators that control both processes. Among these dual regulators were CBP, C/EBPs, PPARα and SP-1 (Table 4). The presence of molecular links between lipid metabolism and inflammation raises the possibility that specific intervention with an anti-inflammatory compound may, in turn, affect plasma cholesterol levels. In a first attempt to test this possibility, female E3L mice were fed a HC diet to increase plasma cholesterol levels (from 5.3 mM to 19.3 mM) and systemic inflammation (SAA from 1.7 μg/ml to 9.2 μg/ml). Then, animals were treated with the same HC diet supplemented with salicylate, an inhibitor of NF-κB signaling, or with vehicle.
While plasma cholesterol and SAA levels remained elevated in the vehicle-treated group, the salicylate-treated group displayed significantly lower plasma SAA levels (7.7 μg/ml; *P* < 0.05) and significantly reduced cholesterol levels (9.9 mM; *P* < 0.05), demonstrating that specific intervention in the inflammatory component does indeed affect plasma cholesterol.

Identified master regulators that control inflammatory reprogramming of the liver

| Transcription factor | Regulator of/node point for | Example of downstream effects |
|----|----|----|
| AP-1 (c-jun/c-fos) | Inflammation | Mmp-12, col1a1, hsp27 |
| CREB-binding protein (CBP) | Lipid, inflammation, immune response, cell proliferation | Very broadly acting coactivator (can bind to SREBPs) |
| C/EBPs | Lipid, inflammation, energy metabolism | Acute phase genes (for SAA, CRP, fibrinogen), hepatic gluconeogenesis and lipid homeostasis, energy metabolism (PEPCK, FAS), TGF-β signaling |
| Forkhead transcription factor FOXO1 | Lipid, inflammation/proliferation | Glycolysis, pentose phosphate shunt, and lipogenic and sterol synthetic pathways, LPL (via SHP) |
| NF-κB | Inflammation | SAA, CD83, CD86, CCR5, VEGF-C |
| PPARα/RXRα | Lipid, inflammation | LPL, ABCA1, macrophage activation, glucose homeostasis |
| p53 | Inflammation | HSP27, HSPA4, IFI16, IBP3, RBBP4 |
| SMAD3 | Inflammation | Proteases and growth factors (via TGF-β signaling) |
| SP-1 | Lipid, inflammation | ABCA1, ICAM-1, cellular matrix genes COL1A1, COL1A2 |
| SREBP-1/-2 | Lipid, inflammation | Sterol biosynthesis genes, LDLR, link to C/EBPα |
| STAT1/3/5 | Inflammation | Acute phase genes |
| YY1 | Inflammation/proliferation | Inflammatory response genes (SAA, vWF, CCR5), cellular matrix genes |

Biological networks were generated using MetaCore™ software and transcriptional master regulators were identified in significant gene networks (*P* < 0.05).

# Discussion

Development of atherosclerotic lesions requires a lipid component (hypercholesterolemia) and an inflammatory component. In the present study, we demonstrate that high-dose dietary cholesterol (HC diet) strongly induces early atherosclerotic lesion formation in a humanized model for atherosclerosis, E3L mice. This is not, or only slightly, the case with a cholesterol-free (Con) or low-dose cholesterol (LC) diet. Importantly, the Con, LC and HC diets dose-dependently increase plasma cholesterol levels, but only HC treatment induces a marked systemic inflammatory response, which precedes lesion formation and is related to liver inflammation. We employed newly developed (functional) systems biology technologies to unravel how increasing the dose of dietary cholesterol affects liver homeostasis and evokes hepatic inflammation. The following important findings were made. The liver absorbs escalating doses of dietary cholesterol primarily by adjusting the expression level of genes involved in lipid metabolism, as revealed by advanced gene expression analysis. This metabolic resilience is confirmed by analysis of metabolites in the liver. At high doses of dietary cholesterol, the liver also develops an inflammatory stress response, which is characterized by up-regulation of pro-atherogenic candidate genes and activation of (at least four distinct) inflammatory pathways. The evolution of hepatic inflammation involves specific transcriptional regulators, several of which have been newly identified in this study.
Interestingly, some of these transcription factors have a dual role and control both hepatic lipid metabolism and hepatic inflammation, indicating that the same regulatory mechanisms underlie, and thereby link, the two processes.

The present study delineates, to our knowledge for the first time, the genome-wide response of the liver to increasing doses of dietary cholesterol, with specific attention to inflammatory processes and in relation to early atherosclerotic lesion formation. The liver responds to moderate elevations in dietary cholesterol (LC diet) by adjusting major metabolic processes related to lipid metabolism. For example, the expression of genes involved in endogenous cholesterol synthesis (for example, *HMG-CoA reductase*) and cholesterol uptake from plasma (for example, *LDLR*) is diminished. At high loads of dietary cholesterol (HC diet), the liver strives for homeostasis by intensifying the changes in gene expression observed with the LC diet. Similar dose-dependent effects of dietary cholesterol have been reported by others [18], but the number of studies that assess dose-dependent effects of dietary cholesterol is relatively small and their analyses are restricted to a limited number of genes. Our genome-wide approach is comprehensive and demonstrates that metabolic processes as a whole are adjusted at the level of gene expression.

Importantly, the adjustment at the gene level is efficacious only up to a certain degree of cholesterol stress: while low loads of cholesterol are fully absorbed (consider the comparable intrahepatic lipidome fingerprints of the LC and Con groups), exposure to high loads of dietary cholesterol in the HC group significantly altered the liver lipidome, despite further intensified adjustment of gene expression. Our combined analysis of genes and functional readouts (lipid metabolites) clearly demonstrates that a dose of 1% w/w cholesterol, which is typically used to induce experimental dyslipidemia and atherosclerosis in mice [13,19,20], is an extreme condition, because the metabolic resilience of the liver is already overstretched.

Concomitant with the adjustment of metabolic genes to HC dietary stress, HC treatment also evokes a hepatic inflammatory response. The development of an inflammatory gene expression profile upon feeding of a cholesterol-containing diet has also been reported by others. For example, Tous *et al*. [21] showed that atherosclerosis-prone apoE-/- mice receiving a high fat/high cholesterol diet develop an impairment of liver histology consisting of fat accumulation, macrophage proliferation and inflammation, and that there is a chronological and quantitative relationship between liver impairment and the formation of atheromatous lesions. Vergnes *et al*. [22] showed that the cholesterol and cholate components of the atherogenic diet have distinct pro-atherogenic effects on gene expression and, particularly, that cholesterol is required for the induction of genes involved in acute inflammation in C57BL/6J mice. Recinos *et al*. [23] reported that liver gene expression in LDLR-/- mice is associated with diet and lesion development and demonstrated the induction of components of the alternative complement pathway. Zabalawi *et al*. [24] showed the induction of fatal inflammation in LDLR-/- and ApoAI-/-LDLR-/- mice fed dietary fat and cholesterol. However, the exact molecular inflammatory pathways switched on/off by dietary cholesterol have remained unknown.
While some of the above microarray studies have examined individual components of the inflammatory response to cholesterol, we set out to generate a holistic profile of the complex, interrelated nature of the liver's response to cholesterol. Advanced pathway analysis combined with functional network building enabled us to unravel four key inflammatory pathways (the IFNγ, TNFα, IL-1 and PDGF pathways) that play central roles in the evolution of cholesterol-induced inflammation in the liver. Further research is necessary to resolve the sequence of events over time (for example, which pathway is switched on first). Remarkably, these pathways are also critical for lesion development in the vessel wall, suggesting that the inflammatory response to cholesterol stress described herein for the liver may involve similar routes in other tissues and, as such, has more general significance.

Our results suggest that the hepatic inflammatory response may be causatively related to lesion initiation in the aorta, because pro-atherogenic candidate genes (that is, genes encoding candidate inflammatory components reportedly or putatively involved in early lesion formation) were found to be upregulated specifically in the HC group but not, or only slightly, in the LC group. The presence of a 'hepatic source' of inflammatory factors in HC stress may also explain the observed exponential (rather than linear) increase in lesion formation seen with increasing dietary cholesterol loads [11,12]. Consistent with this notion is the view that the inflammatory arm of atherogenesis is a principal driving force of lesion development.

An inflammatory reprogramming of the liver has also been observed in C57BL/6J mice treated with a 1.25% w/w cholesterol diet resulting in total plasma cholesterol concentrations of 3.6 mM [22], that is, a level comparable to the Con group in our study. Unlike that in E3L mice, the total plasma cholesterol in C57BL/6J mice is mainly confined to HDL, an anti-atherogenic, anti-inflammatory lipoprotein facilitating transport of cholesterol from the periphery back to the liver. The fact that mice with strongly different lipoprotein profiles (E3L, LDLR-/- and C57BL/6) show a similar hepatic inflammatory response to cholesterol feeding indicates that the observed inflammatory effect of dietary cholesterol is a general phenomenon and not restricted to the model of dysbetalipoproteinemia used herein. It also suggests that the influx of dietary cholesterol into the liver (via chylomicrons), rather than plasma cholesterol, is key to the inflammatory response of the liver. This supposition would also be in accord with the rapidity of the effect: in a time-resolved analysis of plasma SAA during atherogenesis, we report here a strong elevation of plasma SAA within two weeks of cholesterol feeding in female E3L mice. This is also in line with the inflammatory reprogramming of C57BL/6J mice within three weeks [22] and clearly demonstrates that the hepatic inflammatory response precedes the formation of atherosclerotic lesions, suggesting that dietary cholesterol can be an important trigger and a possible source of the inflammatory component of atherosclerotic disease. In the present study, the liver function markers ALAT and ASAT remained within the normal range stipulated by the function criteria for donor livers [25], indicating normal liver function under the experimental conditions applied in this study.
Our results do not exclude the possibility, however, that sterols may oxidize and become toxic, and that these oxidized sterols contribute, at least partly, to the inflammatory effects observed by us and others.

Inflammation may also arise from established risk factors other than high plasma cholesterol (for example, hypertension and diabetes/hyperglycemia). Dietary glucose can modulate the mRNA expression and serum concentrations of immune parameters, but these alterations rapidly normalize in normoglycemic subjects [26]. In the case of an impaired metabolic state, however, postprandial hyperglycemia increases the magnitude and duration of systemic inflammatory responses, which probably promotes the development of cardiovascular disease.

Our results show that the evolution of hepatic inflammation is controlled by specific transcriptional regulators, some of which are well known in the context of cholesterol-inducible inflammation (SREBPs, NF-κB, AP-1, C/EBPs), while others have been newly identified in the present study (CBP, HNF4α, SP-1, STAT-3/-5, YY1). Interestingly, some of these factors may also represent molecular links between lipid/cholesterol metabolism and inflammation. Supportive evidence for an interrelationship between liver metabolism and inflammation also comes from pharmacological intervention studies. On the one hand, cholesterol-lowering drugs reduce the general inflammatory status and the expression of liver-derived inflammation markers in E3L mice (compare the pleiotropic effects of statins) [7,14,27]. On the other hand, the anti-inflammatory IKKβ-inhibiting compound salicylate [28,29] reduces plasma cholesterol in the same mouse model (this paper), indicating that modulation of cholesterol levels via inflammation may be possible as well. A hypocholesterolemic effect of salicylate has also been reported in catfish [30], and salicylate was found to inhibit hepatic lipogenesis in isolated rat hepatocytes *in vitro* [31]. Prigge and Gebhard [32] showed that acetylsalicylate (aspirin), a classical inhibitor of COX1 and COX2 [29], induces biliary cholesterol secretion in the rat, an effect that may contribute to the cholesterol-lowering effect seen with compounds of the salicylate category: in diabetic human subjects, very high doses of aspirin (around 7 g/day) were associated with a 15% reduction of total plasma cholesterol and CRP [29].

# Conclusion

We demonstrate that dietary cholesterol is not only a lipid risk factor but also a trigger of hepatic inflammation and, as such, is also involved in the evolution of the inflammatory arm of atherosclerotic disease. A certain degree of genetic resilience and elasticity allows the liver to cope with moderate cholesterol stress, but high loads of cholesterol result in an inflammatory pro-atherogenic response (involving specific pathways and transcriptional regulators), which enhances early lesion formation. Our findings that cholesterol and inflammation are closely linked via specific transcriptional master regulators might lead to new strategies for future therapeutic intervention.

# Materials and methods

## Animals and diets

Female E3L mice were used at the age of 12 weeks for all experiments.
Animal experiments were approved by the Institutional Animal Care and Use Committee of The Netherlands Organization for Applied Scientific Research (TNO) and were in compliance with European Community specifications regarding the use of laboratory animals.

A group of E3L mice (*n* = 17) was treated with a cholesterol-free diet (diet T; Hope Farms, Woerden, The Netherlands) for 10 weeks (Con group). The major ingredients of diet T (all w/w) were cacao butter (15%), corn oil (1%), sucrose (40.5%), casein (20%), corn starch (10%) and cellulose (6%). Two other groups (*n* = 17 each) received the same diet but supplemented with either 0.25% w/w cholesterol (LC group) or 1.0% w/w cholesterol (HC group). After ten weeks of diet feeding, animals were euthanized under anesthesia to collect livers, hearts and aortas. Tissues were snap-frozen in liquid nitrogen and stored at -80°C until use.

To assess the effect of salicylate on plasma levels of inflammation markers and cholesterol, two groups of female E3L mice (*n* = 10; 12 weeks old) were treated with the HC diet for 3 weeks. Then, HC dietary treatment was either continued (vehicle control group) or the animals were fed HC supplemented with 0.12% w/w salicylate (equaling a dose of 145 mg/kg/day) for 8 weeks. Plasma samples were obtained by tail bleeding without fixation of the test animals to minimize stress.

## Analyses of plasma lipids and proteins

Total plasma cholesterol and triglyceride levels were measured after 4 h of fasting, using kits No.1489437 (Roche Diagnostics, Almere, The Netherlands) and No.337-B (Sigma, Aldrich Chemie BV, Zwijndrecht, The Netherlands) [33]. For lipoprotein profiles, pooled plasma was fractionated using an ÄKTA FPLC system (Pharmacia, Roosendaal, The Netherlands) [9]. The plasma levels of SAA were determined by ELISA as reported [14]. Plasma ALAT and ASAT levels were determined spectrophotometrically using a Reflotron system (Roche Diagnostics) [9].

For lipidomics analysis of liver homogenates and plasma samples, electrospray liquid chromatography mass spectrometry (LC-MS) analysis was applied [34]. Briefly, samples (5 μl) were incubated with 200 μl isopropanol and a mixture of internal standards (heptadecanoyl-lysophosphatidylcholine, di-lauroyl-phosphatidylcholine, heptadecanoyl-cholesterol and tri-heptadecanoyl-glycerol; Sigma, St Louis, MO, USA). After vortexing, the lipids were extracted and isolated by centrifugation (lipids in the isopropanol phase). Electrospray LC-MS lipid analysis was performed on a Thermo LTQ apparatus equipped with a Surveyor HPLC system (Thermo Electron, San Jose, CA, USA). The samples were measured in fully randomized sequences. Quality control samples, prepared from a single pool of E3L mouse reference tissue, were analyzed at regular intervals (bracketing 10 samples). The LC-MS raw data files were processed using software developed by TNO (IMPRESS) to generate comprehensive peak tables (m/z value, retention time and peak area). Data were then subjected to retention time alignment of peaks, internal standard correction of peak areas and quality control, resulting in a final lipidomics dataset.

The obtained lipidomics dataset was analyzed and visualized by PCA essentially as described [35]. Prior to analysis, the data were mean-centered and auto-scaled to ensure an equal contribution of all lipid measurements to the PCA model (a minimal sketch of this preprocessing chain is given below).
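As a minimal sketch of this preprocessing chain — internal-standard correction of peak areas, mean-centering, autoscaling and PCA via singular value decomposition — consider the following Python fragment. It is illustrative only; the function and variable names are our assumptions, and the actual processing was performed with the TNO software and essentially as described in [35].

```python
import numpy as np

def preprocess_and_pca(peak_areas, is_areas, n_components=2):
    """peak_areas: (samples, lipids) raw peak areas;
    is_areas: (samples,) peak area of the matching internal standard.
    Returns the PCA scores of the first `n_components` components."""
    # internal standard correction: express each peak relative to the IS
    x = peak_areas / is_areas[:, None]
    # mean-center and auto-scale (unit variance) each lipid so that all
    # lipid measurements contribute equally to the PCA model
    x = (x - x.mean(axis=0)) / x.std(axis=0, ddof=1)
    # PCA via SVD of the preprocessed data matrix
    u, s, _ = np.linalg.svd(x, full_matrices=False)
    return u[:, :n_components] * s[:n_components]
```

Plotting the first two score columns per animal gives the kind of per-group clustering described in the Results (Figure 5).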
## Analyses of atherosclerosis

Hearts were fixed and embedded in paraffin to prepare serial cross sections (5 μm thick) throughout the entire aortic valve area for (immuno)histological analysis. Cross sections were stained with hematoxylin-phloxine-saffron, and atherosclerosis was analyzed blindly in four cross sections of each specimen (at intervals of 30 μm) as reported [14,36]. QWin software (Leica) was used for morphometric computer-assisted analysis of lesion number, lesion area and lesion severity as described in detail elsewhere [7]. Significance of differences was calculated by a one-way analysis of variance (ANOVA) followed by a least significant difference *post hoc* analysis using SPSS 11.5 for Windows (SPSS, Chicago, IL, USA). The level of statistical significance was set at α < 0.05.

## Nucleic acid extraction and gene expression analysis

Total RNA was extracted from individual livers (*n* = 5 per group) using RNAzol (Campro Scientific, Veenendaal, The Netherlands) and glass beads according to the manufacturer's instructions. The integrity of each RNA sample obtained was examined by Agilent Lab-on-a-chip technology using the RNA 6000 Nano LabChip kit and a bioanalyzer 2100 (both Agilent Technologies, Amstelveen, The Netherlands). The quality control procedure is described in Additional data file 8. The One-Cycle Target Labeling and Control Reagent kit (Affymetrix #900493) and the protocols optimized by Affymetrix were used to prepare biotinylated cRNA (from 5 μg of total RNA) for microarray hybridization (*n* = 5 per group). The quality of intermediate products (that is, biotin-labeled cRNA and fragmented cRNA) was again controlled using the RNA 6000 Nano Lab-on-a-chip and bioanalyzer 2100. Microarray analysis was carried out using an Affymetrix technology platform and Affymetrix GeneChip^®^ mouse genome 430 2.0 arrays (45,037 probe sets; 34,000 well-characterized mouse genes). Briefly, fragmented cRNA was mixed with spiked controls, applied to Affymetrix Test chips, and good quality samples were then used to hybridize with murine GeneChip^®^ 430 2.0 arrays. The hybridization, probe array washing and staining, and washing procedures were executed as described in the Affymetrix protocols, and probe arrays were scanned with a Hewlett-Packard Gene Array Scanner (Leiden Genome Technology Center, Leiden, The Netherlands).

## Gene expression data analysis

Raw signal intensities were normalized using the GCRMA algorithm (Affylm package in R). Datasets are freely accessible online through ArrayExpress [37]. Normalized signal intensities below 10 were replaced by 10. Probe sets with an absent call in all arrays were removed before further analysis of the data. RT-PCR was performed essentially as described [27,38] to validate and confirm differences in gene expression between the treatment groups.

Statistical analysis was performed in BRB ArrayTools (Dr Richard Simon and Amy Peng Lam [39]). The Con, LC and HC groups were tested for differentially expressed genes using class comparisons with multiple testing correction by estimation of the false discovery rate (FDR). Differentially expressed genes were identified at a threshold for significance of α < 0.01 and an FDR < 5% (a schematic sketch of this selection step follows below).
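BRB ArrayTools performs this class comparison internally; as a rough approximation of the selection step (our assumption, not the actual BRB ArrayTools code), it can be sketched as a per-gene one-way ANOVA followed by Benjamini-Hochberg FDR control:

```python
import numpy as np
from scipy.stats import f_oneway

def select_degs(con, lc, hc, alpha=0.01, fdr=0.05):
    """con, lc, hc: (animals, genes) normalized expression matrices.
    Returns a boolean mask of genes with P < alpha and FDR < fdr."""
    # per-gene one-way ANOVA across the three diet groups
    _, p = f_oneway(con, lc, hc, axis=0)
    # Benjamini-Hochberg step-up: find the largest rank i with
    # p_(i) <= (i / m) * fdr and accept all genes up to that rank
    m = len(p)
    order = np.argsort(p)
    adjusted = p[order] * m / np.arange(1, m + 1)
    passed = np.zeros(m, dtype=bool)
    below = np.nonzero(adjusted <= fdr)[0]
    if below.size:
        passed[order[:below[-1] + 1]] = True
    return passed & (p < alpha)
```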
For biological interpretation of the differentially expressed genes, the software tools GenMAPP and MetaCore\u2122 (GeneGo Inc., St Joseph, MI, USA) were used. Enrichment of biological processes (GO annotation) was analyzed in GenMAPP; biological processes with a Z-score \\>2 and PermuteP \\< 0.05 were considered significantly changed. In MetaCore\u2122, enrichment analysis \\[40\\] of four independent ontologies was performed. In addition to biological process gene ontology, data were also analyzed in canonical pathway maps, GeneGo cellular processes and disease categories.\n\nDistribution by canonical pathway maps reveals the most significant signaling and\/or metabolic pathways. Experimental data are visualized as red\/blue thermometers pointing up\/down, signifying up\/down-regulation of the map objects. Distribution by GeneGo processes provides the most significant functional process categories enriched with experimental data. GeneGo processes represent comprehensive pre-built process-specific networks, in which all objects are interconnected by experimentally validated interactions. The up- and down-regulated genes are visualized as red or blue circles, respectively. The disease categories represent sets of genes associated with certain diseases. Gene enrichment analysis shows the relative enrichment of the up- and down-regulated genes with the genes from different disease categories. As in the case of process enrichment, this procedure is based on the *p*-value distribution.\n\nThe biological networks were assembled from manually curated protein-protein, protein-DNA and protein-ligand (metabolite) interactions, which are accumulated in the MetaCore\u2122 database. Each edge or link on the network is based on small-experiment data referenced in the corresponding literature. The legend for MetaCore\u2122 networks from the MetaCore\u2122 guideline is provided in Additional data file 6d. For the generation of functional networks, transcriptome and metabolome datasets were merged, allowing combined analysis. Networks were generated using the shortest path (SP) algorithm, which links the nodes from experimental datasets by the shortest directed graphs, allowing up to two additional steps using interactions and nodes from the MetaCore\u2122 database. To present most of the relevant network data in the same figure, we used the add\/expand function and the Merge Networks feature. The resulting networks provide links based on the known interaction data not only between the nodes from the query dataset(s), but also between the nodes that regulate the given genes or metabolites. Network nodes with available experimental data are distinguished with red or blue circles, representing up- or down-regulation, respectively.
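As a toy illustration of this shortest-path idea (directed paths of at most three edges, that is up to two intermediate database-only nodes, linking measured entities), the sketch below uses networkx with entirely hypothetical node names; it is not the MetaCore\u2122 implementation.

```python
import networkx as nx

# Hypothetical curated interaction network (directed edges)
G = nx.DiGraph([
    ("TF_A", "gene_1"), ("TF_A", "kinase_X"), ("kinase_X", "gene_2"),
    ("gene_1", "metab_M"), ("gene_2", "TF_B"), ("TF_B", "metab_M"),
])
measured = ["gene_1", "gene_2", "metab_M"]   # nodes with experimental data

# Link measured nodes by shortest directed paths, allowing up to
# two additional (database-only) nodes per path, i.e. <= 3 edges
network_edges = set()
for src in measured:
    for dst in measured:
        if src == dst:
            continue
        try:
            path = nx.shortest_path(G, src, dst)
        except nx.NetworkXNoPath:
            continue
        if len(path) - 1 <= 3:
            network_edges.update(zip(path, path[1:]))
print(sorted(network_edges))
```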
# Abbreviations\n\nALAT, alanine aminotransferase; AP, activator protein; ASAT, aspartate aminotransferase; CBP, CREB-binding protein; C\/EBP, CAAT\/enhancer-binding protein; Con, cholesterol-free; CRP, C-reactive protein; E3L, ApoE\\*3Leiden transgenic; FDR, false discovery rate; GO, Gene Ontology; HC, high-cholesterol; HDL, high-density lipoprotein; HNF, hepatocyte nuclear factor; IFN, interferon; IL, interleukin; LC, low-cholesterol; LC-MS, liquid chromatography mass spectrometry; LDL, low-density lipoprotein; NF-\u03baB, nuclear factor kappa B; PCA, principal component analysis; PDGF, platelet-derived growth factor; PPAR, peroxisome proliferator activated receptor; RXR, retinoid X receptor; SAA, serum amyloid A; SREBP, sterol regulatory element binding protein; STAT, signal transducer and activator of transcription; TGF, transforming growth factor; TNF, tumor necrosis factor; VLDL, very low-density lipoprotein; YY, Yin Yang.\n\n# Authors' contributions\n\nRK provided the conceptual background to the analysis, interpreted the results and wrote the manuscript. LV did the *in vivo* atherosclerosis studies, performed the assays and interpreted the data. MvE performed the computational analysis including biological processes and assisted with manuscript writing. YN coordinated the software development and the multidimensional analysis of biological processes using (pathway) networks. NC supervised the work and coordinated the lipidomics and genomics analyses. EV developed the lipidomics methodology and performed the lipidomics measurements. AS coordinated the multivariate statistical analysis and drafted the manuscript. HH helped with data interpretation and evaluated the manuscript. SZ performed animal experiments and quantified plasma inflammation markers. GS participated in designing the experiment and manuscript writing. VK developed the tools for multidimensional data analyses and performed computations. TN assisted with the preparation of the figures of networks and pathways. AM assisted in data interpretation and bioinformatic techniques for gene ontology analyses. EH participated in designing the study and manuscript preparation. JG coordinated the development of the metabolomics technologies and critically evaluated the manuscript. BO led the bioinformatic analyses, developed the concepts for integrated data analysis and drafted the manuscript. TK initiated the study, interpreted the data and helped with manuscript writing. All authors read and approved the final manuscript.\n\n# Additional data files\n\nThe following additional data are available with the online version of this paper. Additional data file 1 shows the exponential positive correlation between atherosclerotic lesion area and total plasma cholesterol in female E3L mice. Additional data file 2 shows the validation and confirmation of Affymetrix microarray gene expression data by RT-PCR analysis. Additional data file 3 is a table of the genes (including GenBank identification number and the gene symbol) that are differentially expressed with increasing doses of dietary cholesterol. Additional data file 4 shows the canonical pathway analysis for cholesterol metabolism and analysis of the gene expression data based on GO annotation with disease categories (MetaCore\u2122 software, GeneGO).
Additional data file 5 shows the comprehensive network analysis (functional OMICs analysis) by merging gene expression datasets with the metabolite datasets using MetaCore\u2122 network software. Additional data file 6 shows the biological networks of differentially expressed genes in the HC group allowing the identification of transcriptional master regulators. Additional data file 7 is a table listing cholesterol-induced factors with reported extracellular function. Additional data file 8 describes the quality control analysis steps for RNA samples prior to hybridization on Affymetrix microarrays using Agilent Lab-on-a-chip technology.\n\n# Acknowledgements\n\nThis study was supported by the Dutch Organization for Scientific Research (NWO; grant VENI 016.036.061 to RK), the Dutch Heart Foundation (NHS; grant 2002B102 to LV), and the TNO research program NISB (to RK, ME, NC, EV, AS, JG and TK). We are grateful to Maren White for helpful discussions and critical reading of the manuscript. We thank AstraZeneca for supporting this study. We thank Ally Perlina, Wilbert Heijne, Robert-Jan Lamers, Annie Jie and Karin Toet for excellent bioinformatic and analytical help.
The authors gratefully acknowledge grant support from The European Nutrigenomics Organisation (NuGO, CT-2004-505944; Focus Team 'Metabolic Stress and Disease' to RK, TK and SZ).\n\nauthor: Etzel Carde\u00f1a [^1][^2][^3]\ndate: 2014-01-27\ninstitute: Department of Psychology, Lund University, Lund, Sweden\nreferences:\ntitle: A call for an open, informed study of all aspects of consciousness\n\nScience thrives when there is an open, informed discussion of all evidence, and recognition that scientific knowledge is provisional and subject to revision. This attitude is in stark contrast with reaching conclusions based solely on a previous set of beliefs or on the assertions of authority figures. Indeed, the search for knowledge wherever it may lead inspired a group of notable scientists and philosophers to found in 1882 the Society for Psychical Research in London. Its purpose was "to investigate that large body of debatable phenomena\u2026 without prejudice or prepossession of any kind, and in the same spirit of exact and unimpassioned inquiry which has enabled Science to solve so many problems." Some of the areas in consciousness they investigated such as psychological dissociation, hypnosis, and preconscious cognition are now well integrated into mainstream science. That has not been the case with research on phenomena such as purported telepathy or precognition, which some scientists (a clear minority according to the surveys conducted) dismiss *a priori* as pseudoscience or illegitimate. Contrary to the negative impression given by some critics, we would like to stress the following:\n\n1. Research on parapsychological phenomena (psi) is being carried out in various accredited universities and research centers throughout the world by academics in different disciplines trained in the scientific method (e.g., circa 80 Ph.D.s have been awarded in psi-related topics in the UK in recent years). This research has continued for over a century despite the taboo against investigating the topic, almost complete lack of funding, and professional and personal attacks (Carde\u00f1a, 2011). The Parapsychological Association has been an affiliate of the AAAS since 1969, and more than 20 Nobel prizewinners and many other eminent scientists have supported the study of psi or even conducted research themselves (Carde\u00f1a, 2013).\n\n2. Despite a negative attitude by some editors and reviewers, results supporting the validity of psi phenomena continue to be published in peer-reviewed, academic journals in relevant fields, from psychology to neuroscience to physics, e.g., (Storm et al., 2010; Bem, 2011; Hameroff, 2012; Radin et al., 2012).\n\n3. 
Increased experimental controls have not eliminated or even decreased significant support for the existence of psi phenomena, as suggested by various recent meta-analyses (Sherwood and Roe, 2003; Schmidt et al., 2004; B\u00f6sch et al., 2006; Radin et al., 2006; Storm et al., 2010, 2012, 2013; Tressoldi, 2011; Mossbridge et al., 2012; Schmidt, 2012).\n\n4. These meta-analyses and other studies (Blackmore, 1980) suggest that data supportive of psi phenomena cannot reasonably be accounted for by chance or by a "file drawer" effect. Indeed, contrary to most disciplines, parapsychology journals have for decades encouraged publication of null results and of papers critical of a psi explanation (Wiseman et al., 1996; Sch\u00f6nwetter et al., 2011). A psi trial registry has been established to improve research practice.\n\n5. The effect sizes reported in most meta-analyses are relatively small and the phenomena cannot be produced on demand, but this also characterizes various phenomena found in other disciplines that focus on complex human behavior and performance such as psychology and medicine (Utts, 1991; Richard and Bond, 2003).\n\n6. Although more conclusive explanations for psi phenomena await further theoretical and research developments, they do not *prima facie* violate known laws of nature given modern theories in physics that transcend classical restrictions of time and space, combined with growing evidence for quantum effects in biological systems (Sheehan, 2011; Lambert et al., 2013).\n\nWith respect to the proposal that "exceptional claims require exceptional evidence," the original intention of the phrase is typically misunderstood (Truzzi, 1978). Even in its inaccurate interpretation, what counts as an "exceptional claim" is far from clear. For instance, many phenomena now accepted in science such as the existence of meteorites, the germ theory of disease, or, more recently, adult neurogenesis, were originally considered so exceptional that evidence for their existence was ignored or dismissed by contemporaneous scientists. It is also far from clear what would count as "exceptional evidence" or who would set that threshold. Dismissing empirical observations *a priori*, based solely on biases or theoretical assumptions, underlies a distrust of the ability of the scientific process to discuss and evaluate evidence on its own merits. The undersigned differ in the extent to which we are convinced that the case for psi phenomena has already been made, but not in our view of science as a non-dogmatic, open, critical but respectful process that requires thorough consideration of all evidence as well as skepticism toward both the assumptions we already hold and those that challenge them.\n\nDaryl Bem, Professor Emeritus of Psychology, Cornell University, USA\n\nEtzel Carde\u00f1a, Thorsen Professor of Psychology, Lund University, Sweden\n\nBernard Carr, Professor in Mathematics and Astronomy, University of London, UK\n\nC. Robert Cloninger, Renard Professor of Psychiatry, Genetics, and Psychology, Washington University in St. Louis, USA\n\nRobert G. Jahn, Past Dean of Engineering, Princeton University, USA\n\nBrian Josephson, Emeritus Professor of Physics, University of Cambridge, UK (Nobel prizewinner in physics, 1973)\n\nMenas C. 
Kafatos, Fletcher Jones Endowed Professor of Computational Physics, Chapman University, USA\n\nIrving Kirsch, Professor of Psychology, University of Plymouth, Lecturer in Medicine, Harvard Medical School, USA, UK\n\nMark Leary, Professor of Psychology and Neuroscience, Duke University, USA\n\nDean Radin, Chief Scientist, Institute of Noetic Sciences, Adjunct Faculty in Psychology, Sonoma State University, USA\n\nRobert Rosenthal, Distinguished Professor, University of California, Riverside, Edgar Pierce Professor Emeritus, Harvard University, USA\n\nLothar Sch\u00e4fer, Distinguished Professor Emeritus of Physical Chemistry, University of Arkansas, USA\n\nRaymond Tallis, Emeritus Professor of Geriatric Medicine, University of Manchester, UK\n\nCharles T. Tart, Professor in Psychology Emeritus, University of California, Davis, USA\n\nSimon Thorpe, Director of Research CNRS (Brain and Cognition), University of Toulouse, France\n\nPatrizio Tressoldi, Researcher in Psychology, Universit\u00e0 degli Studi di Padova, Italy\n\nJessica Utts, Professor and Chair of Statistics, University of California, Irvine, USA\n\nMax Velmans, Professor Emeritus in Psychology, Goldsmiths, University of London, UK\n\nCaroline Watt, Senior Lecturer in Psychology, Edinburgh University, UK\n\nPhil Zimbardo, Professor in Psychology Emeritus, Stanford University, USA\n\nAnd\u2026\n\nP. Baseilhac, Researcher in Theoretical Physics, University of Tours, France\n\nEberhard Bauer, Dept. Head, Institute of Border Areas of Psychology and Mental Hygiene, Freiburg, Germany\n\nJulie Beischel, Adjunct Faculty in Psychology and Integrated Inquiry, Saybrook University, USA\n\nHans Bengtsson, Professor of Psychology, Lund University, Sweden\n\nMichael Bloch, Associate Professor of Psychology, University of San Francisco, USA\n\nStephen Braude, Professor of Philosophy Emeritus, University of Maryland Baltimore County, USA\n\nRichard Broughton, Senior Lecturer, School of Social Sciences, University of Northampton, UK\n\nAntonio Capafons, Professor of Psychology, University of Valencia, Spain\n\nJames C. Carpenter, Adjunct Professor of Psychiatry, University of North Carolina, Chapel Hill, USA\n\nAllan Leslie Combs, Doshi Professor of Consciousness Studies, California Institute of Integral Studies, USA\n\nDeborah Delanoy, Emeritus Professor of Psychology, University of Northampton, UK\n\nArnaud Delorme, Professor of Neuroscience, Paul Sabatier University, France\n\nVilfredo De Pascalis, Professor of General Psychology, "La Sapienza" University of Rome, Italy\n\nKurt Dressler, Professor in Molecular Spectroscopy Emeritus, Eidg. Techn. Hochschule Z\u00fcrich, Switzerland\n\nHoyt Edge, Hugh H. and Jeannette G. McKean Professor of Philosophy, Rollins College, USA\n\nSuitbert Ertel, Emeritus Professor of Psychology, University of G\u00f6ttingen, Germany\n\nFranco Fabbro, Professor in Child Neuropsychiatry, University of Udine, Italy\n\nEnrico Facco, Professor of Anesthesia and Intensive Care, University of Padua, Italy\n\nWolfgang Fach, Researcher, Institute of Border Areas of Psychology and Mental Hygiene, Freiburg, Germany\n\nHarris L. 
Friedman, Former Research Professor of Psychology, University of Florida, USA\n\nAlan Gauld, Former Reader in Psychology, University of Nottingham, UK\n\nAntoon Geels, Professor in the Psychology of Religion Emeritus, Lund University, Sweden\n\nBruce Greyson, Carlson Professor of Psychiatry and Neurobehavioral Sciences, University of Virginia, Charlottesville, USA\n\nErlendur Haraldsson, Professor Emeritus of Psychology, University of Iceland, Iceland\n\nRichard Conn Henry, Academy Professor (Physics and Astronomy), The Johns Hopkins University, USA\n\nDavid J. Hufford, University Professor Emeritus, Penn State College of Medicine, USA\n\nOscar Iborra, Researcher, Department of Experimental Psychology, Granada University, Spain\n\nHarvey Irwin, Former Associate Professor, University of New England, Australia\n\nGraham Jamieson, Lecturer in Human Neuropsychology, University of New England, Australia\n\nErick Janssen, Adjunct Professor, Department of Psychology, Indiana University, USA\n\nPer Johnsson, Head, Department of Psychology, Lund University, Sweden\n\nEdward F. Kelly, Research Professor in the Department of Psychiatry and Neurobehavioral Sciences, University of Virginia, Charlottesville, USA\n\nEmily Williams Kelly, Research Assistant Professor in the Department of Psychiatry and Neurobehavioral Sciences, University of Virginia, Charlottesville, USA\n\nHideyuki Kokubo, Researcher, Institute for Informatics of Consciousness, Meiji University, Japan\n\nJeffrey J. Kripal, J. Newton Rayzor Professor of Religious Studies, Rice University, USA\n\nStanley Krippner, Professor of Psychology and Integrated Inquiry, Saybrook University, USA\n\nDavid Luke, Senior Lecturer, Department of Psychology and Counselling, University of Greenwich, UK\n\nFatima Regina Machado, Researcher, Universidade de S\u00e3o Paulo, Brazil\n\nMarkus Maier, Professor in Psychology, University of Munich, Germany\n\nGerhard Mayer, Researcher, Institute of Border Areas of Psychology and Mental Hygiene, Freiburg, Germany\n\nAntonia Mills, Professor of First Nations Studies, University of Northern British Columbia, Canada\n\nGarret Moddel, Professor in Electrical, Computer, & Energy Engineering, University of Colorado, Boulder, USA\n\nAlexander Moreira-Almeida, Professor of Psychiatry, Universidade Federal de Juiz de Fora, Brazil\n\nAndrew Moskowitz, Professor in Psychology and Behavioral Sciences, Aarhus University, Denmark\n\nJulia Mossbridge, Fellow in Psychology, Northwestern University, USA\n\nJudi Neal, Professor Emeritus of Management, University of New Haven, USA\n\nRoger Nelson, Retired Research Staff, Princeton University, USA\n\nFotini Pallikari, Professor of Physics, University of Athens, Greece\n\nAlejandro Parra, Researcher in Psychology, Universidad Abierta Interamericana, Argentina\n\nJos\u00e9 Miguel P\u00e9rez Navarro, Lecturer in Education, International University of La Rioja, Spain\n\nGerald H. Pollack, Professor in Bioengineering, University of Washington, Seattle, USA\n\nJohn Poynton, Professor Emeritus in Biology, University of KwaZulu-Natal, South Africa\n\nDavid Presti, Senior Lecturer, Neurobiology and Cognitive Science, University of California, Berkeley, USA\n\nThomas Rabeyron, Lecturer in Clinical Psychology, Nantes University, France\n\nInmaculada Ramos Lerate, Researcher in Physics, Alba Synchrotron Light Source, Barcelona, Spain\n\nChris Roe, Professor of Psychology, University of Northampton, UK\n\nStefan Schmidt, Professor, Europa Universit\u00e4t Viadrina, Germany\n\nGary E. 
Schwartz, Professor of Psychology, Medicine, Neurology, Psychiatry, and Surgery, University of Arizona, USA\n\nDaniel P. Sheehan, Professor of Physics, University of San Diego, USA\n\nSimon Sherwood, Senior Lecturer in Psychology, University of Greenwich, UK\n\nChristine Simmonds-Moore, Assistant Professor of Psychology, University of West Georgia, USA\n\nM\u00e1rio Sim\u00f5es, Professor in Psychiatry, University of Lisbon, Portugal\n\nHuston Smith, Professor of Philosophy Emeritus, Syracuse University, USA\n\nJerry Solfvin, Associate Professor in Indic Studies, University of Massachusetts, Dartmouth, USA\n\nLance Storm, Visiting Research Fellow, University of Adelaide, Australia\n\nJeffrey Allan Sugar, Assistant Professor of Clinical Psychiatry, University of Southern California, Los Angeles, USA\n\nNeil Theise, Professor of Pathology and Medicine, The Icahn School of Medicine at Mount Sinai, USA\n\nJim Tucker, Bonner-Lowry Associate Professor of Psychiatry and Neurobehavioral Sciences, University of Virginia, USA\n\nYulia Ustinova, Associate Professor in History, Ben-Gurion University of the Negev, Israel\n\nWalter von Lucadou, Senior Lecturer at the Furtwangen Technical University, Germany\n\nMaurits van den Noort, Senior Researcher, Free University of Brussels, Belgium\n\nDavid Vernon, Senior Lecturer in Psychology, Canterbury Christ Church University, UK\n\nHarald Walach, Professor, Europa Universit\u00e4t Viadrina, Germany\n\nHelmut Wautischer, Senior Lecturer in Philosophy, Sonoma State University, USA\n\nDonald West, Emeritus Professor of Clinical Criminology, University of Cambridge, UK\n\nN.C. Wickramasinghe, Professor in Astrobiology, Cardiff University, UK\n\nFred Alan Wolf, formerly Professor in physics at San Diego State University, the Universities of Paris, London, and the Hebrew University of Jerusalem\n\nRobin Wooffitt, Professor of Sociology, University of York, UK\n\nWellington Zangari, Professor in Psychology, University of Sao Paulo, Brazil\n\nAldo Zucco, Professor, Dipartimento di Psicologia Generale, Universit\u00e0 di Padova, Italy\n\nAlthough for practical reasons only one author is listed, the call is a collective creation of the co-signatories.\n\n# References\n\n[^1]: This article was submitted to the journal Frontiers in Human Neuroscience.\n\n[^2]: Edited by: Christian Agrillo, University of Padova, Italy\n\n[^3]: Reviewed by: Imants Baruss, King's University College at The University of Western Ontario, Canada\n\nabstract: This is a report on a 37-patient continuation study of the open-ended, Omega-3 Fatty Acid (O-3FA)
add-on study. Subjects consisted of the original 19 patients, along with 18 new patients recruited and followed in the same fashion as the first nineteen. Subjects carried a DSM-IV-TR diagnosis of Bipolar Disorder and were visiting a Mood Disorder Clinic regularly throughout the length of the study. At each visit, patients' clinical status was monitored using the Clinical Monitoring Form. Subjects reported on the frequency and severity of irritability experienced during the preceding ten days; frequency was measured by way of percentage of days in which subjects experienced irritability, while severity of that irritability was rated on a Likert scale of 1\u20134 (if present). The irritability component of the Young Mania Rating Scale (YMRS) was also recorded quarterly and consistently for 13 of the 39 patients. Patients had persistent irritability despite ongoing pharmacologic treatment and psychotherapy.\n\n Omega-3 Fatty Acid intake helped with the irritability component of bipolar disorder in patients presenting with significant irritability. Low dose (1 to 2 grams per day), add-on O-3FA may also help with the irritability component of different clinical conditions, such as schizophrenia, borderline personality disorder and other psychiatric conditions with a common presenting sign of irritability.\nauthor: Kemal Sagduyu; Mehmet E Dokucu; Bruce A Eddy; Gerald Craigen; Claudia F Baldassano; Ay\u015feg\u00fcl Y\u0131ld\u0131z\ndate: 2005\ninstitute: 1University of Missouri \u2013 Kansas City, Missouri, 8801 West 148^th^ Terrace, Overland Park, KS 66221, USA; 2Washington University, School of Medicine, Department of Psychiatry, Campus Box: 8134, 660 South Euclid Avenue, St. Louis, Missouri, 63110, USA; 3Department of Psychiatry, School of Medicine, University of Missouri-Kansas City, Resource Development Institute, 601 Walnut Street, Kansas City, MO 64106, USA; 4Mood Disorders Psychopharmacology Unit, University Health Network, Toronto Western Hospital, 399 Bathurst Street, ECW-3D-010, Toronto, Ontario M5T 2S8, Canada; 5Mood and Anxiety Disorders Clinic, Department of Psychiatry, University of Pennsylvania, 3535 Market Street, 2nd floor, Philadelphia, PA 19104, USA; 6Dokuz Eyl\u00fcl Medical School, Department of Psychiatry, \u0130zmir, Turkey\nreferences:\ntitle: Omega-3 fatty acids decreased irritability of patients with bipolar disorder in an add-on, open label study\n\n# Introduction\n\nAccording to the United States National Institute of Mental Health (NIMH), Bipolar Disorder (BPD), also known as manic-depressive illness, is a serious medical illness that causes shifts in a person's mood, energy, and ability to function. Different from the normal ups and downs that everyone goes through, the symptoms of bipolar disorder are severe. Bipolar disorder is a complex, chronic condition associated with considerable morbidity and mortality, including a high rate of suicide. Bipolar disorder causes dramatic mood swings from overly "high" and\/or irritable to sad and hopeless, and then back again, often with periods of normal mood in between. Severe changes in energy and behavior go along with these changes in mood. The periods of highs and lows are called episodes of mania and depression. Most people with bipolar disorder can achieve substantial stabilization of their mood swings and related symptoms over time with proper treatment. 
A strategy that combines medication and psychosocial treatment is optimal for managing the disorder over time.\n\n# Background\n\nOmega-3 fatty acids (O-3FA) may have a beneficial effect on irritable mood. Low O-3FA levels in red blood cell membranes of depressed patients hint that O-3FA may be helpful in treating mood disorders \\[1\\]. A recent article has given an excellent review of O-3FA and studies showing their effectiveness in depression, bipolar disorder and aggression \\[2\\]. In this article, two published studies are discussed that have reported on similar therapeutic effects of O-3FA \\[2\\]. One placebo-controlled study of 20 patients revealed that ethyl ester of eicosapentaenoic acid (E-EPA) was effective in stabilizing the moods of depressed patients \\[3\\]. Another report, a double-blind, placebo-controlled study (N = 22\/19), measured the effect of the O-3FA docosahexaenoic acid (DHA) on the aggressive tendencies of college students. The O-3FA DHA group (1.5\u20131.8 g O-3FA DHA\/day) did not display any increase in aggressive tendencies when external stressors peaked, while the placebo group displayed a significant increase in their aggressive tendencies under similar circumstances \\[4\\]. In a recent study, 25% of 111 patients with bipolar-I disorder who met criteria for a DSM-IV major depressive episode also experienced substantial irritability in the absence of associated symptoms of mania. These findings suggest that abnormal irritability is not limited to mania or mixed states \\[5\\]. However, recent studies caution that at an average daily dose of 6 grams, omega-3 fatty acids, as a single agent, may not be as effective as an antidepressant \\[6-9\\].\n\nO-3FA may also help with the irritability component of different clinical conditions, such as depression, mania, schizophrenia, borderline personality disorder and other psychiatric conditions with a common presenting sign of irritability. Numerous other conditions have an irritability component, including Borderline Personality Disorder, Alzheimer's disease and Premenstrual Dysphoric Disorder, to name a few \\[10-12\\]. There is one report suggesting a beneficial effect of Omega-3 Fatty Acid treatment for Borderline Personality Disorder. This double-blind, placebo-controlled pilot study specifically showed that EPA may influence both aggression and depression \\[12\\]. Although attention-deficit\/hyperactivity disorder (ADHD) also has an irritability component, recent publications bring doubt to the O-3FA connection in ADHD \\[13,14\\].\n\nA recent, open-ended, O-3FA add-on study has shown a beneficial effect of O-3FA on irritability in 19 patients with mood disorders \\[15\\]. These patients had already been receiving different combinations of pharmacotherapy and talk therapy. Despite their treatment, the irritability component of their illness was still causing social, occupational and other life disturbances. Hence, they were chosen for the O-3FA add-on component of the study. In the nineteen-patient phase of the study, bipolar patients of every subtype, ages 18 to 65 years, with significant irritability were studied. All patients received a systematic assessment battery at entry and were treated by a psychiatrist, trained to deliver care and measure outcomes in patients with bipolar disorder, consistent with expert recommendations. At every follow-up visit, the treating psychiatrist completed a standardized assessment and assigned a clinical status based on DSM-IV criteria. 
Patients had independent evaluations at regular intervals throughout the study and remained under the care of the same treating psychiatrist while receiving variable medications and talk therapy, depending on their need \\[15\\]. In the 19-patient study, a paired sample t-test revealed a large decrease in the percent of days irritable after O-3FA was administered. Before treatment, the mean irritability percentage was 81.05 (SD = 23.31) and after treatment the mean irritability percentage dropped to 30.00 (SD = 36.67). Despite the small number of patients in the study (n = 19), the difference between means was statistically significant (t(18) = 4.512, p \\< .001). Using a paired sample t-test, a significant difference was also found between the highest irritability score (mean = 2.79; SD = 0.92) and the last recorded irritability (mean = 0.79; SD = 0.85) while taking O-3FA (t(18) = 8.270; p \\< .001) \\[15\\].\n\n# Methods\n\nThis is a report on a 37-patient continuation phase of the open-ended, O-3FA add-on study. Subjects consisted of the original 19 patients, in addition to the 18 new patients recruited and followed in the same fashion as the first nineteen \\[15\\]. Subjects carried a DSM-IV-TR \\[16\\] diagnosis of Bipolar Disorder and were visiting a Mood Disorder Clinic regularly throughout the length of the study. At each visit, patients' clinical status was monitored using the Clinical Monitoring Form \\[17\\]. Subjects reported on the frequency and severity of irritability experienced during the preceding ten days; frequency was measured by way of percentage of days in which subjects experienced irritability, while severity of that irritability was rated on a Likert scale of 1\u20134 (if present). The irritability component of the Young Mania Rating Scale (YMRS) \\[18\\] was also recorded quarterly and consistently for 13 of the 39 patients. The patients were asked about general dietary omega-3 intake before the fish oil was added on, and basic nutritional guidance was given to subjects at the clinic. Patients in general were not heavy consumers of fish or fish products.\n\n## Dosage\n\nStarting dose and last maintenance dose were available for 37 subjects (Table 1). Subjects self-medicated, and therefore, the last maintenance dose of O-3FA was chosen by each subject. The mean starting dose was 1824.32 mg (SD 1075.07), and the mean for the last maintenance dose was considerably higher at 2878.38 mg (SD 2011.79). The increase was statistically significant using a paired sample t-test (t = -3.44, 36 df, p = .001).\n\n# Statistical Results\n\n## Percentage of Days Irritable\n\nThe initial mean was 63.51 (SD 34.17), indicating that on average, subjects were irritable for about six of the previous ten days. The mean for the last recorded percentage was less than half of the initial score: 30.27 (SD 34.03). The decrease was found to be statistically significant using a paired sample t-test (t = 4.36, 36 df, p \\< .001). The difference between the distributions was examined using the non-parametric sign test. The number of negative differences (25) significantly exceeded positive differences (7); there were five ties, and the pre\/post distributions were significantly different (p \\< .003).
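These paired comparisons use standard procedures; the sketch below replays a paired sample t-test, a sign test and the exploratory severity-by-irritability composite on hypothetical before/after values (the arrays and means are illustrative, not the study data).

```python
import numpy as np
from scipy import stats

# Hypothetical percent-of-days-irritable scores for 10 subjects
before = np.array([80, 60, 90, 50, 70, 100, 40, 60, 80, 55])
after  = np.array([30, 20, 70, 10, 40,  50, 40, 10, 30, 25])

# Paired sample t-test on the pre/post means
t, p = stats.ttest_rel(before, after)
print(f"paired t = {t:.2f}, p = {p:.4f}")

# Sign test: count decreases vs. increases, ignore ties,
# and test against a 50/50 split with an exact binomial test
diff = after - before
neg, pos = int(np.sum(diff < 0)), int(np.sum(diff > 0))
sign_p = stats.binomtest(neg, n=neg + pos, p=0.5).pvalue
print(f"sign test: {neg} decreases, {pos} increases, p = {sign_p:.4f}")

# Exploratory composite: severity (1-4) x percent of days irritable
severity_before, severity_after = 2.8, 1.0   # hypothetical mean severities
print("composite change:", severity_before * before.mean(),
      "->", severity_after * after.mean())
```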
## YMRS Irritability Sub-score\n\nThirty-four subjects had initial and last recorded YMRS irritability sub-scores. As with the above means, there was a sizable decrease. The initial mean score was 3.18 (SD 1.09), and the mean for the last recorded score was 1.68 (SD 1.89). The decrease was found to be statistically significant using a paired sample t-test (t = 4.21, 33 df, p \\< .001).\n\n## YMRS Total Score\n\nStarting and last recorded YMRS scores were available for 34 subjects. The mean starting score was 10.71 (SD 6.77), and the mean for the last recorded score was 4.85 (SD 5.63). The decrease was found to be statistically significant using a paired sample t-test (t = 4.14, 33 df, p \\< .001).\n\n## Severity\n\nThirty-six subjects had initial and last recorded severity scores on the ADE. Again, a decrease was found. The initial mean score was 2.14 (SD 1.22). The mean for the last recorded score was 0.94 (SD 0.92). This decrease was found to be statistically significant using a paired sample t-test (t = 5.23, 35 df, p \\< .001).\n\n## Composite: Severity and Irritability\n\nAs an exploratory measure, a composite score was created by multiplying the ADE severity score, which has a maximum of 4 points, by the percentage of the ten days prior to measurement during which the patient was rated as irritable. The initial mean on this composite was 159.72. As with other measures, there was wide variation: SD = 122.92. The mean for this measure on the last recorded scores was about one-fourth of the initial score: 43.89 (SD 64.38). The decrease was found to be statistically significant using a paired sample t-test (t = 5.00, 35 df, p \\< .001).\n\n## Last Recorded Maintenance Dose and Percentage of Irritability After\n\nBecause of apparent wide variation on these two measures and a concern that outliers may have affected some results, the last recorded irritability scores were plotted against the maintenance dose. This revealed a rather bimodal pattern, in which relatively lower irritability measures (\u2264 50%) clustered in the quadrant with lower dosage levels (\u2264 4,000 mg).\n\n## Duration and YMRS Total\n\nIn response to a similar observation regarding wide variation in the last recorded values (84 days to 5.5 years), the values were also plotted. A clearly bimodal pattern appeared in which 11 subjects (about one-third of study participants) clustered in the quadrant representing short duration (\\<500 days) and higher YMRS totals (\\>7). The remaining two-thirds of subjects clustered in the quadrant representing short duration and lower YMRS totals (\\<6).\n\n## Subject Weight\n\nThe mean start weight was 176.97 lbs (SD 43.13), and the mean for the last weight recorded was slightly higher at 178.59 lbs (SD 43.24). The increase was not statistically significant.\n\n# Follow-up Subjects\n\nFollow-up information, recorded after the collection of the "last" scores for most of the above variables, was available for 13 of the 37 subjects. Final YMRS total or scale scores were not available for this sub-group.\n\n## Omega 3 Duration\n\nThe final date recorded for the duration of O-3FA treatment was derived from an O-3FA start date and a "final" date recorded for O-3FA. The time period ranged from 84 days to 1995 days (5.46 years). The mean duration of O-3FA treatment for this group was 439.62 days (SD = 487.46).\n\n## Dosage\n\nFor these subjects, the mean starting dose was 1807.69 mg (SD 990.34), and the mean for the last maintenance dose was higher at 2615.38 mg (SD 1894.66). The increase was not significant.\n\n## Percentage of Days Irritable\n\nThe initial mean was 82.31 (SD 20.88). The mean for the last recorded percentage was dramatically lower: 25.38 (SD 32.04). The decrease was found to be statistically significant using a paired sample t-test (t = 6.52, 12 df, p \\< .001). 
The difference between the distributions was examined using a sign test. The number of negative differences (12) significantly exceeded positive differences (0); there was one tie, and the pre\/post distributions were significantly different (p \\< .001).\n\n## Severity\n\nThe initial mean score for the 13 subjects with final scores was 2.69 (SD 0.95). The mean for the final score was 0.77 (SD 0.83). This decrease was found to be statistically significant using a paired sample t-test (t = 6.22, 12 df, p \\< .001).\n\n## Composite: Severity and Irritability\n\nAn exploratory composite score, described above, was also created for the subjects with final scores. For these subjects, the initial mean was higher than that of the total group, 223.08. Again, there was wide variation: SD = 104.19. The mean for this measure on the last recorded scores was much lower than the initial score: 33.08 (SD 39.87). The decrease was found to be statistically significant using a paired sample t-test (t = 6.70, 12 df, p \\< .001).\n\n## Weight\n\nFor these 13 subjects, the mean start weight was 166.23 lbs (SD 35.68), and the mean for the final weight recorded was also slightly higher at 168.23 lbs (SD 33.62). As with the previous finding regarding weight, the increase was not statistically significant.\n\n# Results\n\nOmega-3 Fatty Acids added onto the existing treatment helped with the irritability component in a significant percentage of patients suffering from bipolar disorder with persistent irritability.\n\n# Discussion\n\nAs seen from the standard deviations of several of the variables discussed here, measures ranged widely. This creates difficulty in using descriptive data, such as means, to adequately portray subject attributes and performance. Using data reduction techniques or grouping subjects according to high and low scores on various attributes may be one way to increase the descriptiveness, which would be possible and more reasonable with a larger pool of subjects.\n\nA potential limitation or interpretive consideration merits discussion. For many of the variables discussed above, noticeable differences in measures were observed between the "starting" versus "last recorded" group (n = 37) and the "starting" versus "final" measures group (n = 13). Given these differences and the smaller number of subjects in the second set of comparisons, "starting" versus "final" comparisons should be interpreted with caution until differences inherent in this "final" subgroup (n = 13) are more clearly understood. This is clearly seen in the results of sign tests, in which the apparent magnitude of the "final" effects is pronounced.\n\nStatistically significant within-subjects differences were found in several independent variables. This is especially notable given the small number of subjects. The preliminary findings suggest that a rigorously designed study tailored especially to the examination of the effects of O-3FA is warranted.\n\nThe majority of data were collected within an ongoing "best-practice, outpatient bipolar disorder study" that involved medications and talk therapy which we have not reported or discussed herein. Results must, therefore, be interpreted with caution.\n\nThere are several mechanisms through which O-3FA are theorized to help with mood, irritability, aggression, etc. Suggested theories of mechanism converge on the theory of nerve cell membrane stabilization. 
A recent study has come closest to showing physical proof of the effectiveness of O-3FA through indirect demonstration of greater membrane fluidity, as detected by reductions in T2 (transverse relaxation) values in MRI scans \\[19\\]. The overlapping beneficial effects of antipsychotics, antidepressants, anticonvulsants, O-3FA, and nonpsychoactive cannabinoids, as they relate to pain, stroke, schizophrenia, psychoneuroimmunology, Alzheimer's disease, and stress, may be due to their common effects at protein kinases, thus affecting the structure and function of the cell membrane and the cell \\[20\\]. These changes should help the cell operate within an optimal level of excitation, which may be related to emerging evidence that these therapeutic agents have neuroprotective value \\[20\\]. A recent randomized placebo-controlled double-blind intervention study suggests an adaptogenic role for O-3FA in stress \\[21\\].\n\nWe would like to discuss briefly the issue of daily dosing of O-3FA for nutritional and medicinal purposes. Recent studies caution that at an average daily dose of 6 grams, omega-3 fatty acids, as a single agent, may not be as effective as an antidepressant \\[6-9\\]. However, these studies may have given too high a dose of O-3FA (above 6 grams daily), possibly beyond the therapeutic window of effectiveness for O-3FA. Our scatter plots indicate that the optimum effective dose for irritability is 1\u20132 grams of EPA plus DHA per day, which would be the dosing we suggest. A recent exploratory dose study of O-3FA for schizophrenic patients showed that patients treated with 2 g\/day EPA had lower symptom scores and needed less medication. In this study, there was also a positive relationship between improvement on rating scales and a rise in red blood cell arachidonic acid concentration \\[22\\].\n\nThe United States (US) accounts for more than 51% of the 430.3 billion dollars spent on pharmaceutical products worldwide each year \\[23\\]. The world healthcare community first needs access to low-cost, nontoxic, non-expert-dependent interventions to ensure basic health outcomes. Food may represent the most cost-effective means of promoting public health \\[23\\]. The American Heart Association recommends consumption of two servings of fish per week for persons with no history of coronary heart disease and at least one serving of fish daily for those with known coronary heart disease \\[24\\]. Approximately 1 g per day of EPA plus DHA is recommended for cardioprotection \\[24\\]. Higher dosages of omega-3 fatty acids are required to reduce elevated triglyceride levels (2 to 4 g per day) and to reduce morning stiffness and the number of tender joints in patients with rheumatoid arthritis (at least 3 g per day) \\[24\\].\n\nWe conclude that it is beneficial in many ways to establish a regular intake of 1\u20132 g per day of EPA plus DHA, similar to the daily intake of vitamins with minerals. Dietary interventions to remedy omega-3 deficiency are necessary \\[23\\]. It is time for more aggressive funding for research into medicinal foods, such as omega-3 fatty acids \\[23\\].\n\n## Figures and Tables\n\nTable 1 summarizes the initial, last recorded and final Omega 3 dosages (mg) under the three conditions. 
Figures for the Initial Dose include two subjects (n = 39) for whom no corresponding follow-up data were available.\n\n| | Initial | Last Recorded | Final^T1^ |\n|----|----|----|----|\n| n | 39 | 37 | 13 |\n| Mean | 1833.33 | 2878.38 | 2615.38 |\n| Mode | 1000^T2,\\ T3^ | 1000 | 2000 |\n| Median | 2000 | 2000 | 2000 |\n| SD | 1071.91 | 2011.79 | 1894.66 |\n\nT1 = Final group results (n = 13) are discussed below.\n\nT2 = Multiple modes exist. The smallest value is shown.\n\nT3 = One gram (1,000 milligrams) of fish oil contains about 180 milligrams of eicosapentaenoic acid (EPA) and 120 milligrams of docosahexaenoic acid (DHA), for a total of 300 milligrams of omega-3s per clear capsule.\n\nabstract: Vocal learning is relatively common in birds but less so in mammals. Sexual selection and individual or group recognition have been identified as major forces in its evolution. While important in the development of vocal displays, vocal learning also allows signal copying in social interactions. Such copying can function in addressing or labelling selected conspecifics. Most examples of addressing in non-humans come from bird song, where matching occurs in an aggressive context. However, in other animals, addressing with learned signals is very much an affiliative signal. We studied the function of vocal copying in a mammal that shows vocal learning as well as complex cognitive and social behaviour, the bottlenose dolphin (*Tursiops truncatus*). Copying occurred almost exclusively between close associates such as mother\u2013calf pairs and male alliances during separation and was not followed by aggression. All copies were clearly recognizable as such because copiers consistently modified some acoustic parameters of a signal when copying it. We found no evidence for the use of copying in aggression or deception. This use of vocal copying is similar to its use in human language, where the maintenance of social bonds appears to be more important than the immediate defence of resources.\nauthor: Stephanie L. King; Laela S. Sayigh; Randall S. Wells; Wendi Fellner; Vincent M. Janik\ndate: 2013-04-22\ninstitute: 1Sea Mammal Research Unit, School of Biology, University of St Andrews, St Andrews, Fife KY16 8LB, UK; 2Biology Department, Woods Hole Oceanographic Institution, Woods Hole, MA 02543, USA; 3Chicago Zoological Society, c\/o Mote Marine Laboratory, 1600 Ken Thompson Parkway, Sarasota, FL 34236, USA; 4The Seas, Epcot, Walt Disney World Resort, 2016 Avenue of the Stars, EC Trl. 
W-251, Lake Buena Vista, FL 32830, USA\nreferences:\ntitle: Vocal copying of individually distinctive signature whistles in bottlenose dolphins\n\n# Introduction\n\nVocal production learning enables animals to copy novel sounds in their environment or to develop their own distinctive calls, avoiding overlap with those heard before \\[1\\]. Most commonly, vocal learning leads to convergence in sound parameters between individuals. A good example of this can be found in bird song dialects \\[2\\] or in the development of group-specific contact calls \\[3\u20137\\]. The exchange of such shared calls between individuals can be aggressive or affiliative in nature. While contact calls are known to be affiliative \\[7\\], song type matching in songbirds tends to have an aggressive connotation \\[8\\]. Song sparrows, for example, use song type matching when defending their territory against an unknown male, but avoid it when interacting with known neighbours with whom they use more subtle repertoire matching \\[9,10\\]. Repertoire matching, i.e. the use of a shared song type while avoiding a reply with the same song type, may allow the addressing of a neighbour in a more affiliative or neutral way.\n\nIn most instances, these interactions occur with calls that are shared by more than one individual. In the case of contact calls, the common call belongs either to a group or a pair of animals. In bird song, animals have individual repertoires where each song type is shared with other individuals, but the overall composition of the repertoire may be unique. Production rates for each shared call or song type are usually similar across the individuals that share it. Individual call or song types survive in populations as cultural traditions that can outlive the animals that produce them at any one time \\[11\\].\n\nThe signature whistle of the bottlenose dolphin stands out from these examples in that it seems to be more individually specific. Bottlenose dolphins produce a large variety of narrow-band frequency-modulated whistles and pulsed sounds for communication \\[12\\]. As part of their repertoire, each individual also develops an individually distinctive signature whistle \\[13,14\\], which develops under the influence of vocal learning \\[15\u201317\\]. Individuals listen to their acoustic environment early in life and then develop their own novel frequency modulation pattern or contour for their signature whistle \\[15\\]. The result is a novel and unique modulation pattern that identifies the individual even in the absence of general voice cues \\[18\\]. Interindividual variation in signature whistles is much larger than that found in recognition signals of other species \\[19\\].\n\nBottlenose dolphins live in fluid fission\u2013fusion societies with animals forming a variety of different social relationships \\[20\\]. This social organization, coupled with restrictions in underwater vision and olfaction, has led to natural selection favouring designed individual signature whistles \\[12,14\\] instead of relying on the by-product distinctiveness of voice features \\[19\\]. The signature whistle tends to be the most commonly used whistle in each individual's repertoire, accounting for around 50 per cent of all whistles produced by animals in the wild \\[21\\]. Bottlenose dolphins are, however, able to learn new sounds throughout their lives \\[22\\], and conspecifics occasionally imitate the signature whistles of others \\[23\\]. 
Thus, one animal's signature whistle can form a minor part of another animal's vocal repertoire as a result of copying \\[17,23,24\\]. Signature whistle copying is, however, rare \\[23,25\u201327\\], albeit significantly more common than expected by chance \\[25\\]. As such, each signature whistle forms a major part of only one animal's repertoire, allowing it to be a label for that particular individual when copied.\n\nNevertheless, the function of copying events remains unclear. It has been argued that copying of signature whistle types is equivalent to addressing other individuals. Such addressing can be affiliative or aggressive. Unlike songbirds, delphinids are not territorial and do not sing. Instead, they use their acoustic signals in the context of social interactions and group cohesion \\[12\\]. Bottlenose dolphins have low rates of aggression towards close associates and higher ones towards social competitors, for example among male alliances \\[20\\]. Investigating who is copying whom can therefore give us information on the signal value of copying. In addition to affiliative and aggressive functions, a third hypothesis for whistle copying is that it is used as a deceptive form of signalling \\[28\\]. For example, deceptive signature whistle copying by male dolphins could allow them to gain access to females guarded by other males or to avoid directed aggression from a male alliance \\[29\\]. It appears that copies are sufficiently rare to allow for such a use without jeopardizing the reliability of signature whistles as identity signals.\n\nTo investigate these three hypotheses, the occurrence of signature whistle copying was studied in captive and briefly captured and subsequently released wild bottlenose dolphins. We hypothesized that if signature whistle copying is affiliative it should only occur between close associates. Alternatively, copying in an aggressive context should be more common between animals that are less closely associated. Furthermore, copies used in a deceptive way should ideally not be recognizable as copies, whereas in affiliative or aggressive contexts, they could be recognizable as such. We also investigated the temporal aspects of whistle copying given the importance of signal type matching in other species.\n\n# Material and methods\n\n## Social and acoustic data from the wild\n\nData were collected from wild bottlenose dolphins around Sarasota Bay, FL, USA between 1984 and 2009. The amount of time animals are sighted together can be used to give a measure of their association. The half-weight ratio coefficient of association (CoA) \\[30\\] is defined as CoA = 2*N~ab~*\/(2*N~ab~* + *N~a~* + *N~b~*), in which *N~ab~* is the number of times individuals A and B have been seen together, *N~a~* is the number of times individual A has been seen without B, and *N~b~* is the number of times individual B has been seen without A. CoAs were calculated for all study animals from data gained during regular, systematic photographic identification surveys of dolphins. CoAs given for each pair of animals caught together are from the year the recordings were taken. 
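As a worked example of this index (with hypothetical sighting counts): a pair seen together 40 times, with A seen 10 times without B and B seen 20 times without A, gives CoA = 2(40)\/(80 + 10 + 20) ≈ 0.73, and a pair never sighted apart gives 1.0. A minimal sketch in Python:

```python
def half_weight_coa(n_ab: int, n_a: int, n_b: int) -> float:
    """Half-weight coefficient of association.

    n_ab -- sightings of A and B together
    n_a  -- sightings of A without B
    n_b  -- sightings of B without A
    """
    return 2 * n_ab / (2 * n_ab + n_a + n_b)

# Hypothetical sighting counts
print(half_weight_coa(40, 10, 20))   # 80 / 110 = 0.727...
print(half_weight_coa(40, 0, 0))     # never seen apart -> 1.0
```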
The signature whistle of an individual is the most common whistle type emitted in such isolation conditions \\[14\\]. The Sarasota Dolphin Research Programme has now accumulated a catalogue of whistles from over 250 individual dolphins from the resident community in Sarasota Bay since 1975 \\[14\\], many of which were recorded in multiple capture\u2013release sessions. We compared all whistles produced by an individual with the signature whistles of all others in the same capture set in order to identify copying events. Ages of animals were known from long-term observations \\[32\\] or from analysing growth rings in teeth \\[33\\].\n\nThe vocalizations of each individual were recorded via a suction cup hydrophone, allowing the identification of the caller for each recorded call. Either custom-built or SSQ94 hydrophones were used (High Tech Inc.). Between 1984 and 2004, the acoustic recordings were taken with either Marantz PMD-430 or Sony TC-D5M stereo-cassette recorders (frequency response of recording system: 0.02\u201318 kHz \u00b1 5 dB) or Panasonic AG-6400 or AG-7400 video-cassette recorders (frequency response of recording system: 0.02\u201325 kHz \u00b1 3 dB). For recordings taken from 2005 onwards, a Sound Devices 744T digital recorder was used (sampled at 96 kHz, 24-bit, frequency response of recording system: 0.02\u201348 kHz \u00b1 1 dB).\n\nIn the first step of the analysis, one observer visually compared spectrograms across 205 h and 23 min of acoustic recordings of temporarily caught and released wild bottlenose dolphins in order to identify copying events within each capture set. The total recording time inspected in this way was 110 h and 55 min for pairs of animals caught together with low association levels (CoA \\< 0.5) and 94 h and 28 min for pairs of animals caught together with high association levels (CoA \\> 0.5). The second step involved a detailed analysis of 32 h and 12 min (table 1) of recordings where vocal copying had been found. These contained a total of 10 219 whistles, which is the dataset on which this in-depth analysis is based.\n\nPairs of animals involved in signature whistle copying events, with the animal producing copies in bold; where two values appear in one cell, the first refers to the first-listed animal and the second to the second-listed animal. The mean similarity values are given for each animal's signature whistle when compared with the vocal copy. The copier's own signature whistles had low similarity scores with the copy, while the signature whistles of the copied animals had high similarity scores with the copies (see the electronic supplementary material, figure S1).\n\n| pair | sex | relationship | CoA | age | recording time (min) | no. of vocal copies | average similarity values |\n|:---|:---|:---|----|----|----|----|----|\n| 1. **Calvin**, Ranier | MM | associates | 1^a^ | 15, 28 | 70, 70 | 13, \u2014 | 1.5, 4.5 |\n| 2a. **FB26**, 2b. **FB48** | MM | alliance partners | 0.8 | 31, 29 | 93, 101 | 38, 5 | 1.0\/3.2^b^, 1.0\/3.5^b^ |\n| 3. **FB114**, FB20 | MM | associates | 0.07 | 16, 15 | 51, 95 | 4, \u2014 | 2.4, 3.3 |\n| 4. **FB90**, FB122 | FM | mother, calf | 0.98 | 25, 4 | 92, 92 | 17, \u2014 | 1.3, 3.3 |\n| 5. **FB65**, FB67 | FF | calf, mother | 0.67 | 6, 21 | 70, 70 | 1, \u2014 | 1.2, 3.6 |\n| 6. **FB228**, FB65 | MF | calf, mother | 0.95 | 5, 21 | 106, 106 | 8, \u2014 | 1.1, 3.5 |\n| 7. **FB5**, FB55 | FF | mother, calf | 1.0 | 29, 3 | 85, 85 | 17, \u2014 | 1.3, 3.3 |\n| 8a. **FB35**, 8b. **FB93** | FF | mother, calf | 0.9 | 32, 3 | 92, 92 | 2, 4 | 1.7\/3.7^b^, 2.5\/3.2^b^ |\n| 9. **FB71**, FB95 | FF | mother, calf | 1.0 | 28, 1 | 97, 97 | 13, \u2014 | 1.0, 3.3 |\n| 10. **FB5**, FB155 | FF | mother, calf | 0.56 | 29, 2 | 79, 79 | 40, \u2014 | 1.0, 3.5 |\n| 11. **FB9**, FB177 | FF | mother, calf | 0.9 | 20 | 105, 105 | 9, \u2014 | 1.2, 3.4 |\n\n^a^These animals were permanent residents in a captive facility.\n\n^b^Where both animals copied one another, the average similarity value for that animal's own signature whistle with the copy it produced of the other animal's signature whistle is given first (low number), followed by the average similarity value for that animal's own signature whistle with the copy produced by the other animal in the pair (larger number).\n\n## Social and acoustic data from captivity\n\nTo investigate the social context of copying, four captive adult males were recorded at The Seas Aquarium, Lake Buena Vista, FL, USA, during May\u2013June 2009. One male, Ranier, was estimated to be 28 years old and was collected at approximately 3 years of age in the northern Gulf of Mexico. The other males were Calvin (15 years old), Khyber (18 years old) and Malabar (8 years old), who were all captive born. All four animals had been together for 3.5 years at the start of the study; Ranier and Calvin had been together for 6 years. Vocalizations of these dolphins were recorded with two HTI-96 MIN hydrophones (frequency response: 0.002\u201330 kHz \u00b1 1 dB) and two CRT hydrophones (C54 series; frequency response: 0.016\u201344 kHz \u00b1 3 dB) onto a Toshiba Satellite Pro laptop using a four-channel Avisoft UltraSoundGate 416 recording device (sampled at 50 kHz, 8 bit).\n\nA total recording time of 16 h for the four males was analysed. The length of recording time when copying between pairs could be identified (as determined by their positions in the pool system) was as follows: 16 h (100%) for Ranier and Calvin, Ranier and Malabar, Khyber and Calvin, and Khyber and Malabar; 14 h (87.3%) for Ranier and Khyber; and 2 h (12.7%) for Calvin and Malabar. The caller was identified using passive acoustic localization \\[23\\]. The social association of male pairs at The Seas was evaluated by measuring synchrony in their swimming patterns \\[34\\]. A focal animal instantaneous sampling method was used with an observation period of 7.5 min and a 15 s interval. At each 15 s interval, the focal animal's synchrony status was assessed relative to each other animal in the group. Observations took place 5 days per week between 08.00 and 18.00, and each animal served as the focal animal once each day in an order determined by a balanced, randomly ordered schedule. Observations were made between January 2009 and June 2009 when all four dolphins were together in the same pool.\n\n## Identifying copying events\n\nInitially, one observer (S.L.K.) compared all whistles in a given captured or captive group with each other, and identified all occurrences where the same whistle type was being produced by more than one animal by inspecting spectrograms (fast Fourier transform (FFT) length 512, overlap 100%, Blackman\u2013Harris window) in Adobe Audition v. 2.0 (Adobe Systems). Five naive human observers, blind to context and animal identity, were then used to rate the similarity of each copy of a signature whistle to the original signature whistle (the whistle as produced by its owner) and to the copier's own signature whistle. Visual classification was used as it is more reliable than computer-based methods in dolphin whistle classification \\[14,35\\] and is frequently used in animal communication studies \\[2,36\\].
The five observers were given the extracted contours (frequency modulation pattern) of the whistles as plots of frequency versus time and were asked to rate whistle similarity using a five-point similarity index ranging from 1 (dissimilar) to 5 (similar). Only copied whistles that reached a mean similarity score of more than 3 with the original signature whistle and less than 3 with the copier's own signature whistle were deemed copies and included in the analysis. A value of 3 indicates relatively high similarity, as shown in previous studies \\[25,29,37\\].\n\n## Acoustic analysis\n\nThe whistle contours of every copy as well as of randomly chosen exemplars of signature whistles of both interacting individuals were extracted using a supervised contour extraction programme \\[38\\], with a time resolution of 5 ms. From the contours, the following parameters were measured: start frequency, end frequency, minimum frequency, maximum frequency, frequency range, duration and mean frequency. One further parameter, number of loops, was read directly from the spectrogram where applicable. A loop was defined as a repeated modulation pattern within a signature whistle; successive loops could be separated by stereotyped, discrete periods of silence. These periods of silence were taken to be 250 ms or less, which is the maximum inter-loop interval found in this population \\[39\\].\n\n## Statistical analysis\n\nAll statistical procedures were conducted in R (R project for statistical computing; GNU project). Acoustic parameters were analysed by first testing for normality using the Lilliefors (Kolmogorov\u2013Smirnov) test. Depending upon the outcome, either the Mann\u2013Whitney test or Welch's *t*-test was used to compare the parameters of the copies with those of the original signature whistles and of the copier's own signature whistle. A sampling statistic was then created by multiplying these test statistics together, yielding a combined test statistic across all parameters. This allowed comparisons of overall difference between two whistle types. A permutation test was used to shuffle the acoustic parameter measurements of the copies with those of the original signature whistles within each pair of animals. This was carried out to test whether the combined acoustic parameter statistic was significantly different from a random distribution. Ten thousand permutations were performed to calculate the distribution of the test statistic under the null hypothesis (random distribution), and the observed test statistic was then compared with this random distribution. A two-tailed test was used with a Bonferroni-adjusted significance level of *p* \\< 0.002. In addition, all parameters were used in a non-metric multi-dimensional scaling analysis, which gave a STRESS value of 0.04, indicating a good fit.\n\nA permutation test was also used to test whether signal copying only occurred between affiliated pairs of animals. This involved shuffling the CoAs of the pairs of animals that produced vocal copies (*n* = 11) with those that did not (*n* = 191). Many of the individuals who copied were also in pairs with other animals where copying was not present. The sampling statistic of interest was the mean CoA for the pairs involved in signal copying. Ten thousand permutations were performed to calculate the distribution of the test statistic under the null hypothesis that the CoAs of copiers were randomly distributed.
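As a schematic illustration of this shuffling procedure, the Python sketch below pools the pair CoAs, randomly relabels a "copier" group of the same size on each of 10 000 shuffles, and recomputes the group's mean CoA. The copier-pair CoAs are taken from table 1; the 191 non-copier CoAs are simulated stand-ins, not study data, so the resulting p-value is illustrative only.

```python
import random

random.seed(1)
# Copier-pair CoAs from table 1 (pairs 1-11).
copier_coas = [1.0, 0.8, 0.07, 0.98, 0.67, 0.95, 1.0, 0.9, 1.0, 0.56, 0.9]
# Hypothetical non-copier pair CoAs (simulated stand-ins).
noncopier_coas = [random.random() for _ in range(191)]

observed = sum(copier_coas) / len(copier_coas)  # sampling statistic: mean CoA
pooled = copier_coas + noncopier_coas

null_means = []
for _ in range(10_000):
    random.shuffle(pooled)                       # relabel pairs at random
    relabelled = pooled[:len(copier_coas)]       # new "copier" group
    null_means.append(sum(relabelled) / len(relabelled))

# One-sided here for simplicity: proportion of shuffles whose mean CoA is
# at least as large as the observed mean.
p = sum(m >= observed for m in null_means) / len(null_means)
print("observed mean CoA = %.2f, permutation p = %.4f" % (observed, p))
```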
The observed test statistic was then compared with the random distribution.\n\nPermutation tests were also performed on the timing of copies after the original signature whistle. The times of copies (*n* = 108) were shuffled with the times of the copier's own signature whistles given in response to the copied signature whistles (*n* = 1651). The random distribution was calculated from 10 000 permutations under the null hypothesis that there was no difference between the timing of copies of signature whistles after the occurrence of the template whistle and the timing of the copier's own signature whistle after the occurrence of the template whistle. The observed test statistic (mean time between original signature whistle and copy) was compared with the random distribution.\n\n# Results\n\n## Who copies whom?\n\nIn total, 85 different capture\u2013release events of wild dolphins were analysed, comprising 121 individuals in different group compositions. Of these individuals, 48 were sampled on more than one occasion (range: 2\u20137). Of the 85 capture\u2013release events analysed, 11 consisted of single male\u2013male pairs, 31 consisted of single mother\u2013calf pairs and the remaining 43 consisted of groups of different compositions. These compositions included two or more adults of the same or both sexes, mother\u2013calf pairs with other adults and groups of mother\u2013calf pairs.\n\nAs in previous studies \\[14,40\\], each bottlenose dolphin almost exclusively used its own, individually distinctive signature whistle during capture\u2013release events. Whistle rates were generally high at these events, with a mean of 5.3 whistles per minute per individual. In 10 of 85 different capture\u2013release sets, however, individuals were found occasionally copying the signature whistle of another animal in the set (mean rate in sets with copying: 0.18 copies per minute per individual). This occurred in 10 of 179 pairs of animals recorded from 1988 through 2004, consisting of two of the 11 male\u2013male pairings and eight of the 31 mother\u2013calf pairs. In some instances, both members of a pair copied one another (figure 1 and table 1; electronic supplementary material, figure S1). The total number of individuals who produced vocal copies was therefore 12. The five human judges who viewed frequency contour plots to quantify similarity of the copies with both the originals and the copier's own signature whistles showed statistically significant agreement (*\u03ba* = 0.42, *z* = 29.9, *p* \\< 0.0001) \\[41\\]. Similarity values for all copies are given in table 1.\n\nThe results of a permutation test clearly showed that signature whistle copying occurred between closely affiliated pairs of animals (*p* = 0.0006). The mean half-weight coefficient of association (CoA; which can range from 0 to 1) for the 10 pairs of animals that copied was 0.8, whereas the mean CoA for non-copiers was 0.4 (figure 2). Interestingly, there were also three instances of copying of non-signature whistles between two adult wild females with a low level of association (see the electronic supplementary material, figure S2). These animals also produced their own signature whistles but no signature whistle copies.\n\nIn recordings of four aquarium-housed males (forming six possible pairs) at The Seas, one pair also engaged in signature whistle copying. These two individuals showed high levels of synchronous behaviour (23% of 285 min of observation time) in the pool.
Synchrony is a sign of social bonding in male bottlenose dolphins \\[34\\]. One exchange of signature whistle copying between these males was 30 s in duration: both males emitted the signature whistle of one of them in an interactive sequence consisting of 13 and 11 renditions, respectively (see the electronic supplementary material, figure S3). Copying in these individuals was not accompanied by aggressive behaviour (total observation time 16 h with 13 copies produced). The synchrony of the other male pairs was generally lower (7\u201313% of the observation time). One other pair, however, had a high level of synchrony (26%) but did not engage in whistle copying. Thus, copying does not necessarily occur in bonded males.\n\n## How accurate are vocal copies?\n\nFrequency parameter measurements of copies produced by 11 animals (one captive and 10 wild animals; two wild copiers were excluded owing to small sample sizes) revealed consistent differences between signature whistle copies and the original, copied signature whistle (table 2 and figure 3). While the overall frequency modulation pattern of the copied whistle showed high similarity to the original (figure 1), copiers introduced consistent variation in single acoustic parameters such as the start or end frequency (see the electronic supplementary material, table S1). In these parameters, copies were often closer to other whistle contours than to the copied signature whistle (figure 3). Individuals varied in the parameters modified; on average 4.4 parameters (range: 1\u20136) differed significantly between the copy and the original signature whistle. Copies most frequently differed from the original (for 10 of 11 copiers) in mean frequency and maximum frequency (see the electronic supplementary material, table S1). Over half of the copiers also produced copies that differed significantly from the original signature whistle in end frequency (six of 11 copiers) and frequency range (seven of 11 copiers). The copies were equally likely to be higher or lower in frequency than the original. In addition to frequency parameters, one adult male, FB26, altered the number of loops in a multi-looped whistle in his copies of the signature whistle of his alliance partner, adult male FB48. Although FB48 varied his number of loops (range: 3\u20136), FB26 almost always produced a three-looped copy. The number of loops in FB26's copies and FB48's originals differed significantly (Mann\u2013Whitney: *W* = 152.5, *N*~1~ = 38, *N*~2~ = 35, *p* \\< 0.0001). All of the signature whistle copies also differed significantly from the copiers' own signature whistles in some parameters (mean number of parameters different = 3.54; range: 1\u20137), whereas other parameters of a copy resembled those of the copier's own signature whistle (mean = 2; range: 0\u20135).\n\nTest statistics for all acoustic parameter measurements combined for each copy and original signature whistle comparison. Shown are the sampling statistic of actual combined parameter measurements (observed), and the mean test statistic of combined parameter measurements under the null hypothesis based on 10 000 permutations (expected).
Differences between acoustic parameter measurements of vocal copies and original signature whistles are significant at a level of *p* \\< 0.002.\n\n| | observed test statistic | expected test statistic | *p* |\n|:---|:---|----|----|\n| Ranier versus copy of Ranier | \u22127.52 | \u22120.002 | 0.002 |\n| FB48 versus copy of FB48 | 0.19 | \u22120.007 | 0.12 |\n| FB26 versus copy of FB26 | 559 | 0.025 | \\<0.0001 |\n| FB20 versus copy of FB20 | 166 | 0.43 | 0.0031 |\n| FB122 versus copy of FB122 | 0.27 | 0.003 | 0.1 |\n| FB65 versus copy of FB65 | 1004 | 0.03 | \\<0.0001 |\n| FB55 versus copy of FB55 | 24 000 | 0.016 | \\<0.0001 |\n| FB35 versus copy of FB35 | 125 | \u22120.01 | \\<0.0001 |\n| FB95 versus copy of FB95 | \u22121439 | \u22120.01 | \\<0.0001 |\n| FB155 versus copy of FB155 | 3 071 589 | 1.85^a^ | \\<0.0001 |\n| FB177 versus copy of FB177 | \u22122646 | \u22120.0003 | \\<0.0001 |\n\n## Vocal matching\n\nTo further investigate whether copies were emitted in response to the identified model (referred to as the original signature whistle), we examined whether they were temporally correlated and thus occurred in vocal matching interactions. Vocal matching can be described as a receiver responding to a signal by changing some features of its own vocal behaviour in order to imitate the preceding signal. Bottlenose dolphins had very high vocalization rates during these capture\u2013release events, so it was difficult to judge whether whistles were produced in response to those of other animals. An investigation into the timing of signature whistle copies, however, revealed that the mean time between an original signature whistle and its copy was significantly less than the mean time between an original signature whistle and a copier's own signature whistle (0.94 versus 2.55 s; permutation, *p* \\< 0.0001). In the long-term captive males, vocal rates were lower, and the matching pattern was clearer: almost all copying events occurred within 1 s after the emission of the original signature whistle by its owner, indicating that copies were directed towards the owner of the original signature whistle.\n\n# Discussion\n\nWe conducted a large-scale analysis of the occurrence of vocal copying in wild bottlenose dolphins that were briefly caught, sampled and released. This dataset offered a unique opportunity to study the vocal interactions between individuals whose vocal repertoires \\[14,40\\] and association patterns had been well documented over decades in the wild \\[32,42\\]. In line with previous studies \\[23,25,26\\], we found whistle copying to be rare. This is consistent with the idea that signature whistles are used to indicate identity, because such a system would not be sustainable with high copying rates. While a copy could be recognizable as such if it occurred only in specific contexts, aquatic animals usually receive acoustic signals with only limited contextual information. Frequent copying of signature whistles would therefore render the identity information of the whistle unreliable. The rare copying of signature whistles may, however, be particularly suited to addressing close associates \\[23\u201325\\].\n\nWe found that copying occurred primarily in matching interactions between animals with high CoAs outside aggressive contexts, demonstrating that it is an affiliative signal. All pairs of animals that produced signature whistle copies were close associates, with only one pair having a low CoA for the year prior to recording.
However, these two males were each other's closest male associate in the 4-year period prior to the recording. Many of the copiers were mother\u2013calf pairs, with both mothers and calves likely to copy one another. While most female calves' signature whistles are distinct from their mothers', males sometimes do sound like their mothers \\[37\\]. The signature whistles of the male calves in this study, however, did not resemble those of their mothers (see figure 1 and electronic supplementary material, figure S1). Signature whistles of male alliance partners also tend to become more alike over time \\[43\\]. In this study, however, males continued producing their own, non-identical, signature whistles as well as copying the finer details of each other's preferred whistle type. Thus, age, sex and relatedness were not significant factors for the results presented here.\n\nWe found no evidence for a deceptive function of signature whistle copies. In animals that are capable of vocal learning, variations can be introduced into a copied signal, allowing encoding of additional information. Bottlenose dolphins produced accurate copies of the frequency modulation pattern of a whistle (figure 1), but introduced fine-scale differences in some acoustic parameters (table 2 and figure 3). As a result, signature whistle copies were clearly recognizable as such. Copies may even carry identity information of the copier, as some individuals maintained some frequency parameters of their own signature whistles in their copies (see the electronic supplementary material, table S1). While these variations may appear subtle, they were generally outside the acoustic variations used by the signature whistle owner itself. Dolphins are clearly capable of detecting such differences in the fundamental frequency as well as the upper harmonics \\[44,45\\]. Hence, these copies cannot function in a deceptive manner. Only animals that are familiar with the whistle of the owner would, however, be able to recognize copies. In encounters with unknown animals, a high rate of copying would still lead to confusion, arguing for low rates of copying overall. In fact, wild bottlenose dolphins do not copy signature whistles when encountering other groups of dolphins at sea \\[46\\].\n\nThree lines of evidence suggest that active selection may have resulted in the variation found in signature whistle copies. First, bottlenose dolphins are capable of producing almost perfect copies of model sounds \\[22\\], suggesting that the variation is not due to limits on copying performance. Second, in experimental copying studies, bottlenose dolphins sometimes alter parameters of copies from one session to the next, and subsequently only produce copies with these novel parameter values \\[47\\]. Third, it has been shown that some dolphins introduce novel components such as sidebands to whistle copies, while they are perfectly capable of producing whistles without sidebands at these frequencies \\[24\\]. Thus, it is unlikely that variations introduced to copies are merely errors or reflect limitations in copying performance.\n\nA role of vocal learning in the development of signals used in group cohesion and the maintenance of social bonds can be found in a number of social species \\[3\u20137,48,49\\]. The bottlenose dolphin signature whistle stands out in that it is invented by its main producer and can only be shared by animals that have had experience with the inventor.
Besides humans, bottlenose dolphins appear to be the clearest other example of affiliative copying of such individually specific learned signals, although some parrot species do use vocal learning to develop labels for social companions \\[50\u201352\\] and therefore deserve further investigation in this context. Further studies are also needed to elucidate whether copying such signals is different from sharing learned contact calls or adjusting acoustic parameters in communal displays as found in other birds and primates. Bottlenose dolphins can be trained to use vocal copies of novel, arbitrary sounds to refer to objects \\[22\\]. It is not yet known whether they use learned signals in this way in their own communication system. However, bottlenose dolphins have been found to copy signature whistles of animals that are not present in their group \\[27\\]. It is possible that signature whistle copying represents a rare case of referential communication with learned signals in a communication system other than human language \\[12\\]. Future studies should look closely at the exact context, flexibility and role of copying in a wider selection of species to assess its significance as a potential stepping stone towards referential communication.\n\n## Acknowledgements\n\nWe thank Ana Catarina Alves, Aasta Eik-Nes, Thomas G\u00f6tz, Teresa Gridley, Mike Lonergan, Silvana Neves, Cornelia Oedekoven, Peter McGregor and Peter Tyack for help and advice during this study. Mike Beecher and three anonymous reviewers provided helpful comments on the manuscript. We thank Walt Disney World Animals, Science and Environment and The Seas, Epcot, Walt Disney World Resorts, Lake Buena Vista, FL, USA for letting us work with their animals, especially Heidi Harley, Andy Stamper, Patrick Berry and Jane Davis. Fieldwork in Sarasota Bay would not have been possible without the efforts of a large team of biologists, veterinarians and dolphin handlers, including the staff of the Sarasota Dolphin Research Programme, Blair Irvine, Michael Scott, Jay Sweeney and a team of experienced volunteers. Ages of some of the Sarasota Bay dolphins were provided by Aleta Hohn. This work was supported by a BBSRC Doctoral Training Grant, Dolphin Quest, the Chicago Zoological Society, the National Oceanic and Atmospheric Administration (NOAA) Fisheries Service, Disney's Animals, Science and Environment, Dolphin Biology Research Institute, Mote Marine Laboratory, Harbor Branch Oceanographic Institute, and a Royal Society University Research Fellowship and a Fellowship of the Wissenschaftskolleg zu Berlin to V.M.J. Work was conducted under NOAA Fisheries Service Scientific Research permit nos 417, 655, 945, 522-1569 and 522-1785 (to R.S.W.), IACUC approval through Mote Marine Laboratory, and approval of Disney's Animal Care and Welfare Committee.
All data are archived and accessible at the Sea Mammal Research Unit, University of St Andrews.\n\n# References\n\nauthor: Elizabeth R. Seaquist; Dianne Figlewicz Lattemann; Roger A. Dixon\nCorresponding author: Elizabeth R. Seaquist.\ndate: 2012-12\ntitle: American Diabetes Association Research Symposium: Diabetes and the Brain\n\nFrom 28\u201330 October 2011, more than 100 people assembled in Alexandria, Virginia, to participate in an American Diabetes Association\u2013sponsored research symposium entitled \"Diabetes and the Brain.\" The objective of the symposium was to discuss the role of the brain in normal and abnormal metabolism and to consider the impact of diabetes on cerebral structure and function. Symposium participants were particularly interested in understanding how abnormalities in brain metabolism could affect the development of diabetes and obesity and how these diseases could, in turn, affect learning and memory. The growing epidemic of diabetes brought urgency to the meeting because of the ever-increasing number of people placed at risk for the cerebral complications of the disease. In this report, we present meeting highlights in five related areas as follows: *1*) metabolism, blood flow, and epigenetics; *2*) glucose sensing and hypoglycemia counterregulation; *3*) insulin resistance and action in the brain; *4*) neurocognition and imaging; and *5*) energy homeostasis.\n\n# METABOLISM, BLOOD FLOW, AND EPIGENETICS\n\nThe symposium opened with an outstanding review of cerebral metabolism by Gerald Dienel, PhD, from the University of Arkansas. He emphasized that glucose is the major fuel for the adult brain and that it has multifunctional roles in brain function. Metabolism of glucose via the glycolytic, pentose phosphate shunt, and tricarboxylic acid cycle pathways provides ATP for energy, NADPH for defense against oxidative stress, and carbon for biosynthesis of amino acids and sugars used for synthesis of glycoproteins and glycolipids and for de novo synthesis of tricarboxylic acid cycle-derived neurotransmitters glutamate and \u03b3-aminobutyric acid (GABA) (Fig. 1). Glycogen is stored predominantly in astrocytes, and the CO~2~ fixation reaction required for glutamate biosynthesis occurs in astrocytes. Local rates of blood flow and glucose utilization are closely linked to local activities of brain cells, and metabolic rates increase with increased functional activity and decline when cellular signaling is reduced. Energy metabolism and neurotransmission are closely interrelated because glutamate, GABA, acetylcholine, and glycine are synthesized via glucose metabolic pathways, and catecholamine synthesis and degradation rely on oxygen.
Metabolic brain imaging and magnetic resonance spectroscopic studies take advantage of the obligatory use of glucose as the predominant brain fuel, and labeled glucose or labeled glucose analogs are used to assay rates of specific reactions or pathways in living brain. Different labeled precursors are used to measure major pathway fluxes in neurons (glucose) and astrocytes (acetate) and to follow the trafficking of metabolites between these two major cell types. In experimental diabetes, changes in the metabolic rate of glucose in the brain are small, but increased brain glucose levels increase production of sorbitol, elevate oxidative stress, impair gap junctional communication among astrocytes, and cause other abnormalities. Although cognitive capability and other brain functions are affected by diabetes, its impact on the brain does not appear to be as severe as it is on peripheral organs.\n\nEric Newman, PhD, from the University of Minnesota, discussed the regulation of cerebral blood flow in the retina and used it as a model for regulation of brain blood flow. He emphasized that neuronal activity in the brain leads to localized increases in blood flow, a process termed functional hyperemia. Recent work suggests that activity-dependent increases in blood flow are mediated largely by a feedforward mechanism, whereby transmitter release from active neurons results in release of vasodilatory agents, such as arachidonic acid and its metabolites, either from activated glial cells or from other neurons. He went on to say that the regulation of central nervous system (CNS) blood flow is disrupted in diabetic patients. In diabetes, these changes have been best characterized in the retina, where decreases in light-evoked vasodilation are seen even before overt retinopathy is observed. Loss of the functional hyperemia response may result in hypoxia and may contribute to development of diabetic retinopathy.\n\nDr. Newman presented results from a recently completed study in his laboratory, where he found that light-evoked vasodilation was substantially reduced in rats that had had diabetes for 7 months. This loss of vasodilation reproduced the disruption of functional hyperemia observed in patients with diabetes. Inducible nitric oxide synthase (iNOS) was upregulated in these animals. He suggested that this response could be responsible for the reduction of functional hyperemia in diabetes, particularly because inhibiting iNOS with aminoguanidine reversed its loss. Dr. Newman concluded that targeting iNOS or its downstream signaling pathway with selective inhibitors to restore functional hyperemia ultimately might be an effective therapy for treating diabetic retinopathy.\n\nGiulio Maria Pasinetti, MD, PhD, from Mount Sinai School of Medicine, focused on epigenetics (DNA methylation) in mouse models of diet-induced obesity. In the Tg2576 mouse, a high-fat diet for up to 5 months resulted in increased soluble \u03b2-amyloid in hippocampus and cortex (key CNS sites for learning and cognition) and impaired spatial memory behavioral performance. Using a model of the metabolic syndrome (increased body weight; glucose intolerance; and elevated blood pressure, plasma insulin, and cholesterol) developed in the wild-type C57BL\/6J mouse, his group observed impairment in hippocampal electrophysiological measures that may be linked to impaired memory consolidation. Dr.
Pasinetti summarized progress in identification of genes that are targets for DNA epigenetic modification and altered expression in the CNS of the mouse model of metabolic syndrome. These include genes for mitochondrial energy metabolism that are downregulated by diet-induced obesity secondary to hypermethylation of specific promoter sequences. A caloric restriction mimetic, Combi-phenol, protects against or normalizes several sequelae of the metabolic syndrome in this model, including brain mitochondrial dysfunction, cognitive behavioral performance, and CNS network connectivity evaluated with diffusion tensor imaging. Combi-phenol is being studied in phase I and II clinical trials for safety, measurement of biomarkers, and memory assessments as a potential therapeutic or therapeutic adjunct for Alzheimer disease (AD).\n\n# GLUCOSE SENSING AND HYPOGLYCEMIA COUNTERREGULATION\n\nGlucose-sensing mechanisms in the brain were the topic of the presentation by Vanessa Routh, PhD, from the New Jersey Medical School. She explained that the ventromedial hypothalamus (VMH) contains both glucose-excited and glucose-inhibited neurons that respond to physiologically relevant changes in glucose. Glucose-excited neurons increase their firing rate as glucose increases, through a mechanism that involves closure of the K~ATP~ channel. The phenotype of these neurons remains uncertain, but some believe they may be producing pro-opiomelanocortin (POMC). Glucose-inhibited neurons reduce firing in response to an increase in glucose. This action appears to be mediated by AMP kinase and neuronal NOS and couples to a chloride channel. Glucose-inhibited neurons are distributed throughout the VMH, including approximately half of the neuropeptide Y (NPY)\u2013producing neuronal population. The ability of the glucose-inhibited neurons to sense and respond to glucose parallels the ability of the brain to sense and respond to fasting and hypoglycemia in normal rodents. Moreover, the response of glucose-inhibited and glucose-excited neurons to decreased glucose can be blunted by satiety signals such as leptin or insulin and can be increased by the orexigenic peptide ghrelin. On the basis of these results, Dr. Routh proposed the hypothesis that glucose-sensing neurons are important for sensing decreased glucose and initiating compensatory responses under conditions of energy deficit, such as fasting or hypoglycemia. However, during normal energy balance, it is important that glucose-sensing neurons do not respond to small glucose decreases associated with meal-to-meal fluctuations in glucose because such a response could lead to inappropriate signals of overall energy deficit. As a result, satiety hormones mask and orexigenic agents enhance the responsiveness of glucose-sensing neurons to reductions in glucose.\n\nStephanie Amiel, MD, from King's College London School of Medicine, presented a lecture entitled \"Neural Circuits Involved in Hypoglycemia Counterregulation.\" She reviewed the physiology underlying the counterregulatory response to hypoglycemia and then presented neuroimaging studies in humans that have helped visualize the global brain response to hypoglycemia. Many cerebral regions have been shown to participate in the detection of hypoglycemia and coordination of the counterregulatory response, including the hypothalamus, the thalamus, the hippocampus, and higher brain centers.
With a new understanding of the parts of the brain involved in the counterregulatory response to hypoglycemia, she proposed that new therapeutic avenues to prevent and treat hypoglycemia may be possible.\n\nHow these different brain regions may detect and respond to hypoglycemia was discussed by E.R.S., MD, from the University of Minnesota, in her lecture entitled \"Mechanisms Underlying the Development of Hypoglycemia-Associated Autonomic Failure.\" She began her presentation by emphasizing that hypoglycemia-associated autonomic failure (HAAF) is a clinically significant problem that occurs in patients with type 1 and advanced type 2 diabetes. Recurrent hypoglycemia leads to HAAF because it shifts the glucose threshold that elicits the adrenomedullary response (i.e., lower and lower blood glucose levels are required to elicit a response) and reduces the magnitude of the hormonal counterregulatory response, while having no effect on the glucose level associated with neuroglycopenic symptoms. HAAF is reversible by avoiding hypoglycemia.\n\nThe mechanisms responsible for the development of HAAF remain uncertain, but at least three have been proposed. One potential mechanism is that hypoglycemia induces a change in brain glucose metabolism in such a way as to prevent regulatory centers from sensing a change in blood glucose concentration during systemic hypoglycemia. A second potential mechanism is that the brain may increase the utilization of alternative fuels such as lactate and glycogen during hypoglycemia and therefore may not experience energy deficit in the setting of systemic hypoglycemia. A third proposes that the downstream GABA and opioid signaling that links detection of hypoglycemia to the counterregulatory response is altered in the setting of recurrent hypoglycemia.\n\n# INSULIN RESISTANCE AND INSULIN ACTION IN THE BRAIN\n\nC. Ronald Kahn, MD, from the Joslin Clinic and the Harvard Medical School, discussed diabetes, insulin action, and the brain. One approach to identifying pathways through which insulin alters brain metabolism has been creation of mice in which the insulin receptor is knocked out only in neural tissue. These mice show increased food intake, increased adipose mass, hyperleptinemia, and insulin resistance; however, behaviorally, they are the same as wild-type mice at a young age. As they age, they begin to behave abnormally in open field tests (interpreted as a sign of anxiety), in tail suspension tests (interpreted as a sign of depression), and in forced swimming tests (interpreted as a loss of motivation). How the loss of insulin action causes these behavior changes remains unknown, but preliminary studies show that these mice have decreased phosphorylation of glycogen synthase kinase 3 and increased phosphorylation of \u03c4, two proteins known to be important in the pathogenesis of AD. Dr. Kahn's group recently performed a gene expression analysis on the hypothalamus from streptozotocin-treated C57BL\/6 mice (a model for insulin-deficient diabetes) and found that genes involved in cholesterol biosynthesis are suppressed relative to controls. In addition, rates of cholesterol synthesis and the synaptosomal cholesterol content are decreased in the brains of these diabetic mice, as are levels of sterol regulatory element\u2013binding protein 2 and SREBP cleavage-activating protein (SCAP) mRNA and protein, two important regulators of cholesterol synthesis.\n\nMark W.J.
Strachan, MD, from the Western General Hospital, Edinburgh, Scotland, also discussed insulin action in the brain in his lecture entitled \"Emerging Data on the Relationship Between Insulin Resistance and Neurovascular Disorders.\" Insulin receptors are found in high concentrations within the limbic system of the brain. In vitro, insulin affects neuronal excitability and synaptic plasticity. Direct relationships have been shown between insulin action and the pathogenesis of AD. The \u03b2-amyloid oligomers, which accumulate in the brain in AD, are intimately linked with neuronal dysfunction and cause a rapid and substantial loss of cell surface insulin receptors in hippocampal neuronal cultures. The development of insulin resistance in the brain can then set up a vicious cycle because the resultant activation of glycogen synthase kinase 3 promotes hyperphosphorylation of \u03c4 protein and altered processing of amyloid precursor protein, which increases \u03b2-amyloid oligomer production. Moreover, insulin resistance (through attendant hyperinsulinemia) may reduce the ability of insulin-degrading enzyme in the brain to break down \u03b2-amyloid oligomers. Proving the existence of these mechanisms and pathways in humans has been challenging, but insulin concentrations in the cerebrospinal fluid do appear lower in people with severe AD. In addition, administration of intravenous insulin (with concurrent glucose to maintain euglycemia) has been shown to enhance cognitive function. Direct administration of insulin via the intranasal route has a favorable impact on cognition and may provide new opportunities for treatment. Therapies designed to improve cerebral insulin action have produced mixed results. Administration of rosiglitazone was not of cognitive benefit to people with AD, perhaps because it does not cross the blood\u2013brain barrier, whereas an exercise intervention did lead to improvement in insulin sensitivity and cognitive function in older adults.\n\nSuzanne Craft, PhD, from the University of Washington, presented a programmatic analysis of a specific mechanism that may link type 2 diabetes with pathological brain changes, cognitive impairment, and even AD-related dementia. A key target (if not culprit) is insulin secretion and action. Dysregulation or other abnormalities in brain insulin play a role in normal aging-related cognitive decline, exacerbated cognitive decline with type 2 diabetes, and (when severe) mild cognitive impairment and AD. The two forms of insulin dysregulation that increase the risk of neurocognitive dysfunction, insulin resistance and peripheral hyperinsulinemia, also were discussed by other speakers in this symposium. Dr. Craft detailed several mechanisms through which insulin resistance and diabetes may increase the risk of cognitive impairment and AD. In addition to reduced brain insulin uptake, these specific mechanisms include: *1*) impaired glucose and lipid metabolism; *2*) disrupted regulation of \u03b2-amyloid clearance (i.e., a hallmark of AD neuropathology); *3*) increased inflammation; and *4*) compromised vascular function (e.g., microvascular lesions). Notably, insulin receptors are distributed in several critical brain regions (e.g., hippocampus, frontal lobe), and insulin supports several functions crucial to cognition (e.g., increasing neurotransmitter levels and glucose utilization).
The fact that numerous specific mechanisms can independently and perhaps interactively interrupt signaling in any of these regions creates a formidable challenge for clinical research.\n\nIf insulin resistance is a mechanism through which diabetes may compromise neurocognitive functioning, then there may be preventable or modifiable factors that lead to insulin dysregulation as well as therapeutics that may improve insulin resistance. Regarding the former, Dr. Craft reported that epidemiological studies suggest that the expanding prevalence of conditions related to insulin resistance in Western societies may be attributable to decreased levels of physical activity and increased dietary intake of calories and saturated fats. Arguably, healthier lifestyles may be protective factors operating in several specific mechanisms through which insulin resistance can lead to neurocognitive dysfunction. Regarding the latter, Dr. Craft described her own research on the therapeutic strategy of administering intranasal insulin in an effort to improve neural insulin signaling. In her project known as the Study of Nasal Insulin to Fight Forgetfulness (SNIFF), she reported promising outcomes, including improved memory performance and better performance of activities of daily living.\n\n# NEUROCOGNITION AND IMAGING\n\nChristopher M. Ryan, PhD, from the University of Pittsburgh, presented a historically informative tour of the major issues and strategies for assessing cognitive function in diabetic patients. Although the field has benefited from a surge of interest in recent years, it has come to recognize that improvements can be implemented across research laboratories in crucial aspects such as: *1*) establishing diagnostic standards and procedures for evaluating equivalences and systematic variations in clinical groups; *2*) reducing inconsistencies in the cognitive testing batteries deployed (e.g., standardization, theoretical breadth, measurement purity, manifest vs. latent variable representations, and validity issues); *3*) improving research sensitivity to special population characteristics that affect diabetes\u2013cognition relationships (e.g., children, older adults, multiple impairments); and *4*) addressing the importance of longitudinal follow-up for understanding the trajectories of cognitive change. Dr. Ryan, an early and continuing contributor to the field, focused on identifying a framework for representing some of the major approaches to selecting cognitive measures for research in diabetes and reviewing a large number of frequently used cognitive batteries.\n\nRegarding the framework, Dr. Ryan described several related approaches. One approach emphasizes identifying relatively nonspecific cognitive markers of diabetes-related dysfunction. Neurocognitive speed has proven to be the most sensitive marker across multiple studies, even with tasks that range widely in process purity. In a complementary approach, researchers use cognitive and neuropsychological test batteries to identify neurocognitive or clinical phenotypes (e.g., mild cognitive impairment) associated with diabetes. Results using this approach have improved as the batteries become more contemporary, comprehensive, and multidimensional. However, Dr.
Ryan cautioned that it is crucial to map the expected cognitive effects to the brain changes (e.g., vascular) on the basis of theoretical and clinical data and to use batteries that are age-appropriate and change-sensitive to evaluate these relationships and performances at different phases of the life span.\n\nBrian M. Frier, MD, from the Royal Infirmary of Edinburgh, Scotland, shifted the focus from extant approaches and available test batteries to the correlative topic of observed cognitive effects of diabetes and their potential mechanisms. He began by noting that the \"unsystematic\" nature of early research in the area was likely attributable to cross-study variations and within-study limitations in sample sizes, cognitive batteries, and measurement of relevant covariates. Accordingly, Dr. Frier went beyond the typical organizing scheme of diabetes type. Rather than summarizing similar and dissimilar cognitive effects of type 1 and type 2 diabetes, he began by presenting the main observed (or to-be-tested) risk factors for cognitive decrements as associated with each. Both sets of risk factors include a genetic predisposition (although this remains an understudied aspect for both). For type 1 diabetes, the other risk factors include recurrent hypoglycemia, chronic hyperglycemia, long duration, and early age of onset, as well as a group of diabetes complications. Not surprisingly, for type 2 diabetes the list is much longer and more diverse (Fig. 2). It includes vascular issues (microvascular and macrovascular disease, hypertension), acute hyperglycemia, recurrent hypoglycemia, hyperinsulinemia, dyslipidemia, amyloid deposition in the brain, and inflammatory mediators, as well as drug therapy issues and psychosocial factors such as depression. The implicit message is that diabetes per se may affect neurocognitive performance in a variety of ways but does so not just through the separate prisms of diabetes type. Performance is also influenced within type by a diverse set of barely overlapping but likely interacting factors.\n\nFor both disease types, there is consistent evidence that neurocognitive speed is affected, particularly straightforward indicators of mean performance rates that usually show slower average speeds for diabetic patients relative to controls. In addition, some downstream indicators of basic cognitive resources (e.g., executive function) and products (e.g., episodic memory) that may require speed for efficient performance are often implicated in research with both types of diabetes. According to Dr. Frier's review, several of the promising risk factors for cognitive decrements in type 1 diabetes have not yet been confirmed empirically. One confirmed factor among children and adolescents is early disease onset (up to age 7), a factor that is also linked to progress and participation in schooling and social development. Regarding research on cognitive effects of type 2 diabetes, a key challenge is to differentiate neurocognitive changes that characterize normal aging from those exacerbated by diabetes or associated with progressive neurodegenerative decline. Researchers are increasingly considering the roles of modulating factors. In addition to age, sex, and education, these include the specific risk factors in Fig. 2, numerous comorbidities (e.g., obesity, vascular dysfunction), treatment and adherence (e.g., diet, oral agents), and exposure to risk-enhancing substances (alcohol, smoking, cholesterol). In summary, Dr.
Frier's recommendations for future research included examining multiple influences on longer-term progressive or accelerated declines in cognitive functioning. We await further results from prospective studies that feature larger sample sizes, broader coverage of key cognitive functions, and markers of important risk and mediator factors, all measured across extended bands of adulthood.\n\nThe presentation by Geert Jan Biessels, MD, PhD, from the University Medical Centre Utrecht, addressed etiology, risk factors, and biomarkers of diabetes-related cognitive dysfunction with an emphasis on novel opportunities provided by longitudinal research methods. A first important point is that cognitive dysfunction in diabetes should be characterized not just in terms of implicated aspects of cognition (e.g., episodic memory or neurocognitive speed) or even the magnitude of the concurrent deficit. Rather, because most observed decrements are modest in magnitude, dysfunction should also be defined by characteristics of change, including direction, slope, trajectory, and outcome. Whereas some diabetic patients' neurocognitive performance may decline only slowly over long periods of adulthood, others may experience trajectories that lead beyond normal decline to frank cognitive impairment and even dementia. Dr. Biessels reported that sustained, modest decrement profiles are seen in patients of any age with either type of diabetes (1 or 2), whereas those who experience more rapid neurocognitive decline are more likely to be patients with type 2 diabetes who eventually develop overt dementia.\n\nDr. Biessels' second major point was that the two types of cognitive dysfunction in diabetes may differ not only in terms of temporal and outcome characteristics but also in terms of associated biomarkers, risk factors, and etiological influences. In addition, Dr. Biessels noted that these dynamic and interactive processes can be directly examined with multiple assessments and appropriate statistical analyses. He described one promising longitudinal study that may yield data to answer many of the questions raised in his presentation. It features abundant risk factor data on large samples of type 2 diabetic patients from the Kaiser Permanente Northern California Diabetes Registry. Finally, a summary point from his biomarker and etiology discussion merits special attention. Dr. Biessels noted an emerging theory implying that the two diabetes\u2013cognition trajectories may differ in part in the extent to which an additional \"hit\" of substantial brain pathology, beyond diabetes and its immediate complications, is absorbed by some patients. For example, a mild stroke or increased rates of amyloid deposition may accelerate the transition of some type 2 diabetic patients from normal cognition to impairment or dementia.\n\nGail Musen, PhD, from the Joslin Diabetes Center, highlighted the value of using neuroimaging techniques to shed light on neural correlates of cognitive dysfunction in diabetes. A principal rationale is that diabetes-related changes in the brain may be detectable with recent technology early in diabetes-related degenerative processes and that they may also precede and be differentially associated with specific diabetes-related cognitive manifestations and decremental patterns. Dr. Musen commented on the recent progress and future promise in this area, attributable in part to new and more widely available neuroimaging techniques.
For example, early diabetes\u2013brain research on white matter hyperintensities (often observed with magnetic resonance imaging) revealed mixed patterns across studies. Using the more sensitive, flexible, and objective diffusion tensor imaging, researchers can examine and compare a number of white matter abnormalities and connectivity patterns. These lines of investigation were not previously available because of limitations in imaging technology. In addition, voxel-based morphometry studies of cortical thickness (including both gray and white matter density) are being used to compare diabetic patients and healthy controls. Dr. Musen reported new research from her team on one promising avenue that may eventually link diabetes more closely with cognitive impairment and dementia risk. In normal adults, the default mode network is typically deactivated during goal-directed cognitive activity, with altered activation and metabolism patterns in AD patients. She described one study in which similar disruptions occurred in type 1 diabetic patients, as well as an effort to investigate similar issues in type 2 diabetes (a well-known risk factor for AD). Dr. Musen concluded her presentation with several forward-looking recommendations, including identification of neural mechanisms associated with potential cognitive dysfunctions in diabetes, pursuing this goal with larger-scale studies (in terms of both sample sizes and assessment waves), and collection of repeated assessments of target (neuroimaging, cognitive performance) and covariate (diabetes-related biomarkers) measures.\n\n# ENERGY HOMEOSTASIS\n\nD.F.L., PhD, from the University of Washington, presented translational research demonstrating the effects of insulin and leptin on reward behavior and dopamine function in the rat. Her work has focused predominantly on the rewarding value of sucrose, based on findings that sucrose drives motivation for intake of mixtures of sugar, fat, and flavor. She reviewed the CNS anatomy that mediates reward behavior and energy homeostasis and made three key points. First, these sets of circuitries are substantially interconnected. Second, specific sites within both sets of circuitries play a role in mediating different aspects of reward behavior. Third, insulin and leptin can decrease reward behavior through actions on both reward and energy homeostatic circuitries. The neurotransmitter dopamine plays a central role in reward and motivation. D.F.L. reviewed the ability of insulin to increase dopamine reuptake, which curtails dopamine signaling. There is evidence that this effect is attributable to increased cell-surface cycling of the dopamine reuptake transporter, and that this cycling is dependent on activation of phosphatidylinositol 3 kinase mechanisms. Additionally, it has been shown that leptin can decrease dopamine neuronal activity. Together, both hormones act to dampen dopamine signaling in normal animals. D.F.L. finished by reporting that exposure to a moderate-fat diet results in increased motivation for sucrose and resistance to insulin and leptin effects in rats that are metabolically normal and preobese.
She concluded with examples from current human imaging studies suggesting that the effects of insulin and insulin resistance on reward circuitry in rodents are relevant for humans.

Tony Lam, PhD, from the University of Michigan, reviewed work from his laboratory in a lecture entitled "Gut–Brain Signaling." He has shown that the influx of fatty acids into the duodenum leads to the release of cholecystokinin, subsequent activation of vagal afferents, and transmission of the signal to the nucleus of the solitary tract. As a result, the gut-brain-liver axis is activated to reduce hepatic glucose production. In addition, metabolism of lipids and glucose in the hypothalamus activates protein kinase C and K~ATP~ channels that regulate hepatic glucose production. Diabetes and obesity alter these nutrient-sensing pathways in both the gut and the brain, which leads to abnormalities in glucose homeostasis.

Joel Elmquist, DVM, PhD, from the University of Texas Southwestern Medical Center, reviewed the anatomical, synaptic, and cellular functions of the mediobasal hypothalamic circuitry in the coordinated regulation of multiple components of energy homeostasis. This circuitry has now been characterized in terms of reciprocal regulatory interactions within the locus, as well as the receipt and transmission of multiple signals from other CNS sites and from the periphery. Research has focused on the roles of (orexigenic) NPY/agouti-related protein (AGRP) and (anorexigenic) POMC neurons. Dr. Elmquist provided a historical perspective on the field that laid the groundwork for the utility of genetically modified mouse models to determine specific neuronal and functional aspects of leptin activity. Specific ablation of leptin receptors in POMC neurons, which produce melanocortin neuropeptides, results in modest obesity with decreased energy expenditure and locomotor activity but no effect on food intake, along with mild glucose intolerance and insulin resistance. Specific reinstatement of functional leptin receptors in these POMC neurons normalizes peripheral glucose homeostasis and glucagon release. Interestingly, a model in which both insulin and leptin receptors were ablated in POMC neurons did not result in strong effects on body weight but showed profound glucose dyshomeostasis and reduced fertility, suggesting coordinated action of insulin and leptin signaling in POMC neurons that regulate reproductive and hepatic function. However, leptin receptor effects are distributed throughout a network; for example, the leptin receptors expressed on steroidogenic factor-1 neurons of the ventromedial hypothalamus (VMH) appear to regulate energy expenditure. The melanocortin receptor subtype 4 (MCR4) mediates many of the energy homeostatic effects of melanocortin, and new evidence suggests that MCR4 activity in multiple central and peripheral nervous system sites is important for regulation of energy homeostasis. Thus, restoring MCR4 expression in SIM1 neurons of the paraventricular nucleus of the hypothalamus and the amygdala corrects the hyperphagia of MCR4-null mice. MCR4 receptors expressed in cholinergic neurons of the sympathetic nervous system increase energy expenditure and regulate hepatic glucose production, and MCR4 receptors expressed in cholinergic neurons of the parasympathetic nervous system may regulate endocrine pancreatic function.

Tamas Horvath, DVM, PhD, from Yale University, presented his work examining direct synaptic interactions between POMC and other local neurons.
The AGRP/NPY neuron releases the inhibitory neurotransmitter GABA. In addition to a projection to the paraventricular nucleus of the hypothalamus, the AGRP/NPY neuron sends a GABAergic projection to local POMC neurons in the arcuate nucleus. This local circuit is a direct target for leptin action as well as for the orexigenic hormone/neuropeptide ghrelin. Recent work from Horvath's group has demonstrated ghrelin stimulation of UCP2 expression. Studies in a UCP2 knockout model have shown an impairment of ghrelin-stimulated electrical activity and mitochondriogenesis in NPY/AGRP neurons. Ghrelin effects have also been observed in other CNS sites, e.g., the hippocampus and the dopaminergic neurons of the substantia nigra pars compacta. Taken together, the group's recent findings clearly delineate a key role of the UCP2 protein in many aspects of CNS function and its modulation by nutritional and metabolic status.

The theme of hypothalamic/brain/periphery networking was further emphasized in the presentation by Joseph Bass, MD, PhD, from Northwestern University, which focused on circadian regulation of energy homeostasis. Dr. Bass reviewed the discovery of clock proteins in the 1990s, which set the stage for research on circadian function and its regulation in the subsequent years. In mammals, including humans, the CNS "master clock" has been identified as the suprachiasmatic nucleus (SCN), and circadian rhythms are encoded by the sequential and reciprocal expression of the transcription activators CLOCK/BMAL1, which induce expression of the transcription repressors PER1–3/CRY1–2, which in turn inhibit CLOCK/BMAL1 activity. The cycle of expression takes approximately 24 h and is entrained by light. The SCN has connections with key homeostatic nuclei in the hypothalamus, including the arcuate nucleus, paraventricular nucleus, VMH, lateral hypothalamus, and dorsal medial hypothalamus (DMH). Thus, there is direct input on behavior, activity, metabolism, and the hypothalamic-pituitary-adrenal axis, which in turn is a key diurnal regulator of activity and metabolism. Importantly, it is now appreciated that "clock proteins" are expressed in nearly all cells. Peripheral tissue clocks are entrained to food availability, adrenal glucocorticoids, and temperature, all of which can reset peripheral clocks. Although research is only beginning to uncover the complex crosstalk between stimuli and peripheral clocks, molecules that reflect metabolic status, such as NAD and the nutrient sensor AMP kinase, relay information to the clocks, and the connectivity is bidirectional. A current line of investigation is examining the coupling of metabolic processes and CNS clock function. A high-fat diet in an animal model has been shown to increase the period length of the daily activity rhythm and to phase-shift metabolic gene expression in liver and adipose tissue as well as in the hypothalamus. The CNS locus of this effect (the SCN per se, or SCN projections) remains to be determined, but the finding clearly demonstrates metabolite/CNS clock interaction. Finally, sleep and activity abnormalities such as narcolepsy, together with genetic evidence from humans with altered sleep patterns, suggest that pursuing the links between clock activity and metabolic disease will reveal the importance of clock/environment coordination for human metabolic health.
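The transcription–translation feedback loop described above is, at its core, a delayed negative feedback circuit, and its self-sustained rhythm can be illustrated with a toy model. The sketch below is a generic Goodwin-type oscillator in Python; it is not a model from any presentation at the workshop, and all rate constants and the Hill coefficient are illustrative assumptions chosen only to make the loop oscillate.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Goodwin-type negative feedback: activator-driven transcript -> protein ->
# nuclear repressor that shuts down its own transcription, mirroring the
# CLOCK/BMAL1 -> PER/CRY -> CLOCK/BMAL1 logic. All units are arbitrary.
def clock(t, y, n=10.0):
    mrna, protein, repressor = y
    d_mrna = 1.0 / (1.0 + repressor**n) - 0.2 * mrna   # repressible transcription
    d_protein = 0.5 * mrna - 0.2 * protein             # translation and turnover
    d_repressor = 0.5 * protein - 0.2 * repressor      # maturation and turnover
    return [d_mrna, d_protein, d_repressor]

sol = solve_ivp(clock, (0.0, 300.0), [0.1, 0.1, 0.1], dense_output=True, max_step=0.5)
t = np.linspace(150.0, 300.0, 3000)                    # discard the transient
m = sol.sol(t)[0]
peaks = t[1:-1][(m[1:-1] > m[:-2]) & (m[1:-1] > m[2:])]  # local maxima of the transcript
print(f"sustained oscillation with period ~{np.diff(peaks).mean():.1f} time units")
```

In models of this kind, oscillations persist only when the feedback is sufficiently steep and delayed, which is one way to picture how metabolic inputs that alter clock protein turnover could lengthen or shorten the period.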
# CONCLUSIONS

The workshop brought together scientists and clinicians working in disparate areas such as metabolism, psychology, imaging, and neuroscience who were all united by a desire to understand how diabetes affects cerebral structure and function. In the opening lecture, Dr. Gerald Dienel reported anecdotally that early in his career it was unusual for basic neuroscientists to consider diabetes as an influence on cognitive functioning, much less a risk factor for neurodegenerative disease. By the end of the workshop, new collaborations were formed that will support future multidisciplinary research in this area. Ideally, this work should be based on the following principles: both "neuro" and "cognitive" phenomena must be clinically relevant, theoretically selected, domain-specific, and carefully measured; research should emphasize, whenever possible, continuous (rather than categorical) assessments of phenomena related to diagnosis, performance, structure, modulating factors, and clinical outcomes; investigation should attend to a broad range of potential modulators, including proximal factors (e.g., insulin resistance), somatic or distal factors (e.g., functional biomarkers), comorbidities (e.g., vascular disruptions, inflammation), therapy–social factors (e.g., treatment availability and adherence, depression), and genetic susceptibilities (e.g., genetic polymorphisms related to diabetes, AD, obesity); work should continue to examine the complex systems through which endocrine and neural systems regulate energy metabolism; and, whenever possible, scientists should study these relationships dynamically, using longitudinal follow-up assessments and appropriate change-sensitive statistical analyses.

## ACKNOWLEDGMENTS

No potential conflicts of interest relevant to this article were reported.

E.R.S., D.F.L., and R.A.D. contributed to the drafting and revising of the manuscript.
The authors gratefully acknowledge the organizational support of Shirley Ash (American Diabetes Association).

abstract: # Objective
To examine whether business improvement districts (BID) contributed to greater than expected declines in the incidence of violent crimes in affected neighbourhoods.
# Method
A Bayesian hierarchical model was used to assess the changes in the incidence of violent crimes between 1994 and 2005 and the implementation of 30 BID in Los Angeles neighbourhoods.
# Results
The implementation of BID was associated with a 12% reduction in the incidence of robbery (95% posterior probability interval −2 to 24) and an 8% reduction in the total incidence of violent crimes (95% posterior probability interval −5 to 21). The strength of the effect of BID on robbery crimes varied by location.
# Conclusion
These findings indicate that the implementation of BID can reduce the incidence of violent crimes likely to result in injury to individuals. The findings also indicate that the establishment of a BID by itself is not a panacea, and highlight the importance of targeting BID efforts to crime prevention interventions that reduce violence exposure associated with criminal behaviours.
author: John MacDonald; Daniela Golinelli; Robert J Stokes; Ricky Bluthenthal. **Correspondence to** Dr John M MacDonald, Department of Criminology, University of Pennsylvania, McNeil Building, Suite 483, 3718 Locust Walk, Philadelphia, PA 19104-6286, USA
date: 2010-06-29
institute: 1University of Pennsylvania, Philadelphia, Pennsylvania, USA; 2RAND Corporation, Santa Monica, California, USA; 3Drexel University, Philadelphia, Pennsylvania, USA
references:
title: The effect of business improvement districts on the incidence of violent crimes

# The effect of business improvement districts on the incidence of violent crimes

Research indicates the importance of community-level attributes for explaining the incidence of interpersonal violence and crime in neighbourhoods, but there are few examples of effective community-level violence prevention interventions.1–3 Several studies suggest that implementation of the community economic development model of the business improvement district (BID) reduces crime in affected neighbourhoods.4 5 The BID model relies on special assessments levied on commercial properties located within designated business areas to augment services typically provided by public agencies, including sanitation, public safety, place marketing and planning efforts.6 Although managed and operated by private sector non-profit organisations, the majority of BID are public entities, chartered and regulated by local governments.7 The services delivered through BID assessment schemes, however, do not typically replace current public services.
BID services are typically directed towards sanitation and security of common public space areas such as sidewalks (and not interior spaces), analogous to the common area service arrangements seen in home owners' associations.8 BID often focus their budgets on providing private security to their business locales and surrounding neighbourhoods, as a basic level of enhancement to publicly funded police services.9

One of the more rigorous evaluations of BID, by Brooks,5 indicated that their adoption in areas of Los Angeles was associated with a significant drop in the number of serious crimes reported to the police between 1990 and 2002, controlling for time stable differences between areas and in comparison with neighbourhoods that proposed BID but did not end up adopting them. Given that the adoption of a BID in Los Angeles requires extensive support from business and property owners (eg, at least 15% of the business owners or 50% of the property owners must sign supporting petitions) and a laborious process of legal and legislative oversight, the simple proposed adoption of a BID may not provide a strong comparison group.10 11 The actual process of BID adoption is, by itself, a signal of commitment from business merchants and landowners to promote economic development through various community change activities. Even after taking into account time stable area differences in the average volume of crime, poverty rates and other neighbourhood features, it is difficult to determine whether establishing a BID is independent of other facets of community change that may presage drops in crime. A detailed analysis of budget data and observations of neighbourhoods in Los Angeles where BID are situated, for example, showed that their spending priorities were correlated with observable indicators of neighbourhood physical decay and surrounding economic conditions.12 Therefore, using the BID area before BID implementation may be a more appropriate comparison group.

We rely on a pre–post intervention design to assess the effects of BID on the incidence of violent crimes. We use the year of implementation to reflect the exposure to the BID intervention and examine the pre–post changes in the incidence of violent crimes in affected neighbourhoods, controlling for overall time trends.

# Methods

## Design

We examined BID effects by modelling the pre–post changes in the incidence or rate of violent crimes from 1994 to 2005 for all neighbourhood areas exposed to BID. Between 1996 and 2003, a total of 30 separate BID were implemented in Los Angeles. The unit of analysis is any of the 30 neighbourhood areas that eventually adopted a BID in Los Angeles during the period of observation. Table 1 reports the number of BID areas that became fully operational in any given year. We consider a BID fully operational if its implementation occurred for the entire calendar year. The formation of BID in Los Angeles requires a formalised and uniform planning and adoption stage that is structured by law. Details are provided elsewhere.12 For all the areas that eventually adopted a BID in Los Angeles, there are at least 2 years' worth of data during which no BID was operational (pre) and, similarly, at least 2 years of data during which all the BID were fully operational (post). We make use of this type of interrupted time series to estimate the average BID effect on the rate of violent crimes.

BID by year of observation in Los Angeles
| Year | No of BID started | BID area |
|------|-------------------|----------|
| 1994 | – | |
| 1995 | – | |
| 1996 | 2 | Wilshire Centre; Fashion District |
| 1997 | 2 | Hollywood Entertainment I; San Pedro |
| 1998 | 6 | Los Feliz Village; Larchmont Village; Downtown Centre; Figueroa Corridor; Century Corridor; Greater Lincoln Heights |
| 1999 | 11 | Granada Hills; Canoga Park; Van Nuys Boulevard Auto Row; Tarzana; Studio City; Hollywood Media; Westwood Village; Historic Core (Downtown); Toy District; Downtown Industrial; Jefferson Park |
| 2000 | 2 | Chatsworth; Sherman Oaks |
| 2001 | 4 | Encino; Los Angeles Chinatown; Wilmington; Lincoln Heights Industrial |
| 2002 | 2 | Northridge; Highland Park |
| 2003 | 1 | Reseda |
| 2004 | 0 | – |
| 2005 | 0 | – |

Source: Los Angeles City Clerk's Office.

BID, business improvement district.

## Data

The data consist of the yearly counts of officially recorded violent felony crimes (homicide, rape, robbery and aggravated assault) by the Los Angeles Police Department that were aggregated from police reporting districts to the corresponding BID areas. If a reporting district is present in a BID, this district is counted as receiving the BID intervention. A total of 179 reporting districts are present in the 30 BID neighbourhood areas. We focus primarily on the counts of robbery, because this crime is less susceptible to underreporting by the police, is more likely than other crimes to occur in public settings between strangers, and just over 30% of people who are robbed experience an injury.13–15 Nearly one out of five robberies nationally results in a serious injury that requires medical treatment, such as a gunshot wound, knife/stab/slash wound, broken bones or teeth, internal injury and/or loss of consciousness.16 17 The estimated social costs of an average robbery are high, with an average cost of US$39 287 (in 2005 dollars) if one includes monetary costs associated with medical and emergency services, lost productivity, mental health and general quality of life.18 By focusing our analysis primarily on violent crimes, and in particular robbery, we offer a closer look at the effects of the resources that BID expend on crime prevention efforts that attempt to improve the social control of public space areas through environmental design modifications and spending on private security. We do not examine property crime offences because they are less likely to be reported to the police, many occur in private settings (eg, larceny/theft) and they are not 'street crimes' directly subject to BID efforts to enhance social control in public spaces.

## Time trends

Table 2 presents a summary of the average frequency of robbery and all violent crimes for areas exposed to BID compared with the non-BID areas that make up the rest of Los Angeles. The simple linear trend of these data indicates that BID areas experienced greater, on average, yearly reductions in the incidence of robbery and violence than non-BID areas. For example, the average yearly reduction in robbery was 1.9 in BID areas, compared with 1.2 in non-BID areas. The log of the average robbery counts indicates a 7% reduction in BID areas compared with a 5.7% reduction in non-BID areas.
During the 12-year period, however, the average yearly count of reported violent offences dropped by 58% for Los Angeles as a whole, suggesting that it is important to take into account this secular trend in assessing the effects of BID on violent crimes.

Average crimes, by year in BID and non-BID reporting districts

| Year | Non-BID areas (n=893) | | BID areas (n=179) | |
|--------------|-----------------------|------------|-------------------|------------|
| | Robbery | Violent | Robbery | Violent |
| | M (median) | M (median) | M (median) | M (median) |
| 1994 | 25.88 (17) | 64.27 (42) | 41.60 (35) | 86.53 (66) |
| 1995 | 24.01 (16) | 60.89 (41) | 41.56 (35) | 89.75 (74) |
| 1996 | 21.06 (13) | 54.70 (35) | 34.30 (29) | 77.35 (61) |
| 1997 | 16.85 (11) | 49.14 (33) | 29.37 (24) | 70.52 (57) |
| 1998 | 13.06 (8) | 42.99 (28) | 22.34 (17) | 59.77 (51) |
| 1999 | 11.82 (7) | 40.58 (26) | 19.98 (16) | 56.46 (46) |
| 2000 | 12.90 (8) | 43.63 (26) | 21.66 (18) | 60.26 (51) |
| 2001 | 15.43 (10) | 49.38 (31) | 26.20 (21) | 68.74 (55) |
| 2002 | 14.05 (8) | 44.56 (26) | 24.83 (20) | 62.94 (54) |
| 2003 | 13.60 (8) | 42.24 (26) | 23.92 (19) | 60.21 (51) |
| 2004 | 11.81 (7) | 36.65 (22) | 19.43 (15) | 49.12 (44) |
| 2005 | 11.49 (7) | 26.49 (15) | 18.06 (14) | 37.91 (34) |
| Average | 16.00 (9) | 46.29 (28) | 26.94 (10) | 64.96 (52) |
| Linear trend | −1.2 | −2.5 | −1.9 | −3.6 |

Each column presents the average (M) and median (50th percentile) values for the number of incidents in business improvement district (BID) and non-BID areas.

## Statistical model

We used a Bayesian hierarchical model to assess the pre–post effects of BID adoption in areas that were exposed to BID. We estimate the BID effect in each individual area and the average BID effect across all areas. We model the number of reported robberies and violent crimes with a Poisson distribution with mean (*λ~it~*) for each of the 30 BID areas (*i*) over a 12-year (*t*) time period. We include a random effect parameter for areas to scale the time trend for the volume of crimes in each BID area, and to account for time invariant differences across the 30 BID areas. We account for the overall 12-year crime time trend in Los Angeles with a natural cubic spline.19 As the yearly incidence of violent crimes for Los Angeles as a whole was declining over the 12-year study period, it is important to account for the secular trend in the model so as not to overestimate the BID effect. The population at risk of violent crimes in each BID area is unknown, representing a mix of residents and non-residents (eg, shoppers). We bypass the problem of having to estimate the population at risk of violent crimes by comparing every BID area with itself. Under the assumption that the population at risk does not change over time, we propose a model that has as the main parameter of interest for area *i* the ratio (*K~i~*) of the crime rate if a BID was adopted divided by the crime rate that the trends predict would have happened if the BID had not been adopted. If *K~i~* is less than 1, this indicates that the presence of a BID in area (*i*) is associated with a reduction in the incidence of violent crimes for that area. We use a Bayesian hierarchical model to assess individual effects across each BID area (*i*) and to approximate the average overall BID effect (*μ~K~*) across all 30 areas.20 Full technical details on the model and specification of priors are available elsewhere.12
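To make the specification concrete, the sketch below implements a simplified version of this hierarchical Poisson model in Python with PyMC on synthetic data. It is a minimal illustration, not the authors' published code: the citywide trend is reduced to a single log-linear covariate instead of a natural cubic spline, and all priors, seeds and variable names (`counts`, `post`, `area_idx` and so on) are our own assumptions.

```python
import numpy as np
import pymc as pm

# Synthetic stand-in data: 30 BID areas observed over 12 years (1994-2005).
rng = np.random.default_rng(1)
n_areas, n_years = 30, 12
area_idx = np.repeat(np.arange(n_areas), n_years)
year_idx = np.tile(np.arange(n_years), n_areas)
post = (year_idx >= rng.integers(2, 10, n_areas)[area_idx]).astype(float)  # BID operational?
trend = -0.05 * np.arange(n_years)  # stand-in for the declining citywide secular trend
true_lam = np.exp(rng.normal(3.5, 0.5, n_areas)[area_idx] + trend[year_idx] - 0.13 * post)
counts = rng.poisson(true_lam)

with pm.Model() as bid_model:
    alpha = pm.Normal("alpha", 0.0, 2.0, shape=n_areas)           # area baselines (random effects)
    mu_logK = pm.Normal("mu_logK", 0.0, 0.5)                      # overall log BID effect
    sigma_logK = pm.HalfNormal("sigma_logK", 0.5)
    logK = pm.Normal("logK", mu_logK, sigma_logK, shape=n_areas)  # area effects, partially pooled
    beta = pm.Normal("beta", 0.0, 1.0)                            # loading on the secular trend

    log_lam = alpha[area_idx] + beta * trend[year_idx] + logK[area_idx] * post
    pm.Poisson("obs", mu=pm.math.exp(log_lam), observed=counts)
    idata = pm.sample(1000, tune=1000, target_accept=0.9, progressbar=False)

# K_i = exp(logK_i), so the posterior probability of an overall BID effect,
# P(mu_K < 1), is the posterior mass of mu_logK below zero.
mu_draws = idata.posterior["mu_logK"].values.ravel()
print("P(overall BID effect):", (mu_draws < 0).mean())
print("estimated % reduction:", (1 - np.exp(mu_draws).mean()) * 100)
```

With *K~i~* = exp(logK~i~), posterior draws of the overall mean below zero on the log scale correspond to *μ~K~* < 1, matching the posterior probability summaries reported below.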
The yearly violent crime trends for a sample of 12 of the 30 BID areas are displayed in figure 1, together with the timing of each of the BID interventions (denoted by a vertical line). Figure 1 provides a visual sense of the fact that the Bayesian hierarchical model is estimating an interrupted time series for each unique BID area and then pooling the effect across all areas. The Poisson model implies the counterfactual that the rate of violent crimes in an area after the BID becomes fully operational is proportional to what the rate would have been in that area had the BID not been implemented.

# Results

Table 3 reports the overall BID effect for the incidence of robbery and violent crimes in terms of the percentage reductions (posterior mean) associated with the adoption of BID and the 95% posterior probability intervals. Table 3 also reports the posterior probability of observing an overall BID effect on reducing the incidence of robbery and violent crimes, P(*μ~K~* < 1), where *μ~K~* represents the overall effect across the 30 BID areas. We find that the posterior probability of a BID effect is 0.96. In other words, there is strong evidence that BID reduced the robbery rate. The estimated percentage reduction, obtained as one minus the posterior mean (*μ~K~*), indicates a 12% average reduction (95% posterior probability interval −2 to 24) in the incidence of robbery associated with the implementation of BID. For the total incidence of violence, BID effects are in the same direction, but the statistical evidence is not as strong (P(*μ~K~* < 1) = 0.91), indicating an 8% average reduction (95% posterior probability interval −5 to 21) in the total incidence of violence associated with the adoption of BID.

Estimated percentage reduction in reported violent crimes from BID

| Outcomes | Posterior mean* | 95% Posterior probability interval | Posterior probability of BID effect (P(*μ~K~* < 1)) |
|----|----|----|----|
| Robbery | 12 | (−2, 24) | 0.96 |
| Violent | 8 | (−5, 21) | 0.91 |

*Posterior mean reflects the percentage reduction as calculated by (1 − *μ~K~*) × 100.

BID, business improvement district.

Given that the observed probability of an overall BID effect was strongest for robbery, table 4 reports the BID area-specific effects for official reports of robbery in terms of percentage reductions in robberies (1 − *K~i~*). The individual BID area results for robbery show that for 14 of the 30 areas the posterior probability of observing a BID effect is 0.90 or higher. In terms of effect sizes for these 14 areas, the reduction in the robbery rate ranges from a high of 27% in the Century Corridor BID to a low of 14% in the Downtown Industrial BID. For another two BID areas, the observed posterior probability is more than 0.80, which still provides evidence for the presence of a BID effect in the expected direction. The BID effects appear to be most pronounced for the incidence of robbery, which one would expect to be affected by environmental features of the neighbourhoods, such as the hiring of private security officers, which are the focus of BID efforts to control public space areas.
Overall, there seems to be evidence in the data that the BID in Los Angeles had an effect in reducing the incidence of reported robberies.

Area-specific estimates of BID effects on robbery

| BID name | Posterior mean (1 − *K~i~*) | BID effect (P(*K~i~* < 1)) | 95% Posterior CI |
|----|----|----|----|
| **Granada Hills** | 18 | 0.93 | −6, 37 |
| Chatsworth | 5 | 0.65 | −20, 25 |
| **Northridge** | 18 | 0.94 | −5, 36 |
| **Reseda** | 15 | 0.90 | −9, 33 |
| Canoga Park | 3 | 0.60 | −24, 25 |
| **Van Nuys** | 26 | 0.99 | 7, 41 |
| Tarzana | −10 | 0.25 | −44, 16 |
| Encino | 11 | 0.76 | −22, 35 |
| Sherman Oaks | 10 | 0.76 | −18, 31 |
| Studio City | 9 | 0.76 | −20, 31 |
| **Los Feliz Village** | 21 | 0.98 | 1, 39 |
| Highland Park | 11 | 0.83 | −14, 30 |
| Hollywood Entertainment | 9 | 0.80 | −16, 28 |
| **Hollywood Media** | 15 | 0.95 | −5, 32 |
| **Larchmont Village** | 34 | 0.99 | 5, 53 |
| Wilshire Centre | 4 | 0.63 | −25, 26 |
| **Los Angeles Chinatown** | 21 | 0.98 | 0, 38 |
| **Westwood Village** | 21 | 0.97 | −1, 39 |
| Downtown Centre | 7 | 0.74 | −17, 25 |
| Historic Core | 1 | 0.55 | −21, 21 |
| Toy District | 8 | 0.77 | −16, 27 |
| Fashion District | −24 | 0.05 | −63, 5 |
| **Downtown Industrial** | 14 | 0.90 | −8, 31 |
| **Figueroa Corridor** | 20 | 0.96 | −2, 36 |
| **Jefferson Park** | 17 | 0.95 | −4, 33 |
| **Century Corridor** | 27 | 1.00 | 8, 43 |
| Wilmington | −7 | 0.28 | −34, 14 |
| San Pedro | 8 | 0.75 | −18, 29 |
| Lincoln Heights | 11 | 0.77 | −20, 34 |
| **Greater Lincoln Heights** | 25 | 1.00 | 6, 41 |

*K~i~* = ratio of robbery crimes (post-business improvement district (BID)/pre-BID). Bold indicates a BID with a probability of a BID effect of ≥0.90.

Given that the BID effects appear to be most pronounced for robbery, this raises the question of how BID spending compares with the social costs saved by reducing robberies. Applying our estimated 12% annual reduction to the average incidence of robberies (*M* = 160.7) associated with BID implementation, and multiplying by the estimated social cost (US$39 287 in 2005 dollars)18 of an average robbery, yields a marginal cost saving of approximately US$757 611 (in 2005 dollars) annually. Given that the average annual budget of the 30 BID in Los Angeles was approximately US$736 670 (in 2005 dollars),12 this suggests that a sizeable social cost–benefit of BID implementation can be attributed to the reductions in robbery alone.
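This back-of-the-envelope comparison is easy to reproduce; the snippet below uses only the quantities quoted in the text (a sketch for illustration, not part of the original analysis).

```python
mean_robberies = 160.7   # average annual robbery incidence per BID area (from the text)
reduction = 0.12         # estimated average BID effect on robbery
social_cost = 39_287     # social cost of an average robbery, 2005 US$ (ref 18)
avg_budget = 736_670     # average annual BID budget, 2005 US$ (ref 12)

saving = reduction * mean_robberies * social_cost
print(f"annual social-cost saving: US${saving:,.0f}")        # ~US$757,611
print(f"saving-to-budget ratio: {saving / avg_budget:.2f}")  # ~1.03
```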
## Additional tests

We also conducted several additional tests on our measures of violent crime and the methods for approximating BID effects. Homicides, for example, are the most accurately reported violent crimes, but we did not discuss the yearly trends in homicide because the counts are so low. The average number of homicides per year in neighbourhoods associated with BID is less than one, and the median (50th percentile) is zero. The point estimates from replicating our model for homicide counts vary widely, and the probability of detecting a BID effect is low (P(*μ~K~* < 1) = 0.43), suggesting that BID have no appreciable effect on homicide. In particular, the homicide model indicates a 5% increase associated with BID adoption but with a large CI (95% posterior probability interval −50 to 29), resulting from low counts and imprecision in our estimate. A combined estimate of the count of robbery and homicide together was statistically identical to that of robbery alone, suggesting that the BID adoption effect observed is driven by the rate of robberies.

We also constructed alternative model specifications in line with previous work that Brooks5 used in an analysis of the effects of BID on reported serious crimes in Los Angeles during earlier years. We compared the estimated effect of BID implementation on robberies and all reported violent crimes using all police reporting districts in Los Angeles as the unit of analysis, including those that do not intersect BID areas. We then included a dummy variable denoting the timing of BID, adjacent neighbourhoods to BID as control variables, and fixed-effect terms (dummy variables) for each individual reporting district, year, and their interactions (reporting district × year). Our results from these specifications were sensitive to the parameterisation of the outcomes. If we relied on ordinary least squares regression, we found a statistically significant BID effect in reducing the mean incidence of robberies (b = −4.63; p < 0.001) and all violent crimes (b = −7.33; p < 0.001) by approximately 16% and 11%, respectively. However, if we relied on a Poisson regression model, the results were substantially lower and were only marginally significant for robberies (b = −0.02; p = 0.07) and total violence (b = −0.01; p = 0.09), reducing the respective incidence by 3% and 2%. We think this sensitivity test provides further justification for our use of a simpler model that estimates only the BID effects for those areas that eventually adopted BID, rather than assigning BID effects to the entire city of Los Angeles.
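For readers who want to replicate this kind of sensitivity check, the sketch below fits both specifications with statsmodels on a toy panel. The data frame and column names are invented for illustration, and the full set of district × year interaction terms in the published specification is omitted for brevity; only the contrast between the linear and Poisson parameterisations is shown.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Toy panel: 50 reporting districts x 12 years; names and values are illustrative.
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "district": np.repeat(np.arange(50), 12),
    "year": np.tile(np.arange(1994, 2006), 50),
})
df["bid"] = ((df["district"] < 10) & (df["year"] >= 1999)).astype(int)
df["robberies"] = rng.poisson(np.exp(3.0 - 0.1 * df["bid"]))

# Linear (OLS) and Poisson fixed-effects versions of the same specification.
ols_fit = smf.ols("robberies ~ bid + C(district) + C(year)", data=df).fit()
pois_fit = smf.poisson("robberies ~ bid + C(district) + C(year)", data=df).fit(disp=0)

# The OLS coefficient is in crime counts, whereas the Poisson coefficient is on
# the log scale, so exp(b) - 1 approximates the proportional change due to BID.
print(f"OLS: b = {ols_fit.params['bid']:.2f}")
print(f"Poisson: {100 * np.expm1(pois_fit.params['bid']):.1f}% change")
```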
## Limitations

The Bayesian hierarchical model provides an estimate of the effect of BID on the incidence of robbery and violent crimes in areas that were exposed to BID. Like all models, this approach has several limitations. First, the model assumes that the population at risk of violence does not change once a BID starts. It is, however, possible that the establishment of a BID could change the population at risk of violent crimes in a number of ways. If, for example, there is a substantial increase in the number of shoppers or new residents because a BID was implemented, then even a substantial decline in the incidence of violence would be offset by an increase in the denominator for the population at risk. Such an increase in the population at risk of violence would lead one to conclude that the adoption of the BID did not have an effect in reducing violence. Assuming that the population at risk coincides with the residential population in an area would be incorrect, as it is almost certain that successfully implemented BID attract a larger number of people for commerce to their areas.

Second, the model assumes that the level to which violent crimes are reported to the police does not change systematically with the adoption of a BID. If, however, the adoption of a BID implies an increase in local merchants' and residents' willingness to report crimes to the police and an increased response from the police to combat crime, then violent crime reports may actually increase as a function of the implementation of a BID. If this were the case, the adopted model would suggest that adopting a BID has the effect of increasing the incidence of violent crimes. Given that the findings suggest an overall effect of BID on reducing the robbery rate and marginal effects for all violent crimes, we have some confidence in these results.

In addition, if one assumes that BID areas have unique features in terms of the businesses that operate and the communities that encourage their establishment, constructing a group of comparison areas that are matched to the BID areas with respect to certain demographic features of area residents would represent a less conservative test of the effects of BID. We think that the areas that will eventually adopt a BID are the best comparison group for those areas that have already adopted a BID, as there are clearly features of BID areas that are unique in their ability to get a majority of landowners and merchants interested in their adoption. At the same time, our analysis offers no prescription for the various mechanisms by which BID impact robbery rates. BID adopt a variety of tactics, such as mobilising the police, hiring private security officers, street cleaning and environmental redesign, to increase a sense of cleanliness and safety in BID areas. Unfortunately, the tactics adopted by each BID area are complex and not easy to approximate in a statistical model.

# Discussion and conclusions

The results from this study suggest that BID reduce the rate of robbery crimes in affected Los Angeles neighbourhoods. The overall effect of BID on robberies is consistent with the efforts that many of these Los Angeles BID expend on improving the physical appearance of their areas to make them more attractive to commercial business and less attractive to potential offenders (eg, painting over graffiti, increased street lighting, closed-circuit television (CCTV) cameras). The size of the BID effect on robberies varies across the 30 BID areas and appears to indicate a greater than expected reduction in robberies in those located in neighbourhoods that have undergone significant patterns of economic development or invested heavily in crime prevention. For example, BID area-specific effects were apparent in Jefferson Park and Figueroa Corridor, which are situated close to the University of Southern California, in areas of notable gentrification and economic development. Hollywood Media and Larchmont also exhibited BID-area-specific effects on the incidence of robberies and are situated in neighbourhoods undergoing gentrification. BID effects are present in Century Corridor, Figueroa Corridor and Hollywood Media, all of which invest heavily in crime prevention through hiring private security officers and other activities.10–12 Los Angeles BID spend a considerable share of their resources hiring private security or public ambassadors who focus on keeping streetscapes clean and safe, thereby increasing the level of social control in public spaces. Approximately 13 of the 30 BID in Los Angeles spend more than US$200 000 a year (2005 dollars) on such 'clean' and 'safe' efforts. These strategies are closely linked to research and theory on crime prevention through environmental design to reduce opportunities for crime and violence and, in particular, robbery.21–23

Given the limited budgets and staff of many BID, it is no surprise that the mere presence of a BID is not uniformly associated with a reduced incidence of violence. Some BID spend as much as half their annual budgets on crime prevention and environmental redesign or beautification efforts.
BID crime prevention activities may also garner additional resources from the police, as the police now have an active 'partner' in a community. Other established Los Angeles BID have relatively small budgets and focus their efforts primarily on place promotion in an effort to foster improved commercial activity for their constituent businesses.12 While the protocol for establishing a BID in Los Angeles is uniform and codified into law,12 the dosage of tactics to improve neighbourhood environments varies between BID areas. Relying on a conservative estimate of pre–post changes in reported violent crimes for only those areas that adopted BID, we found significant overall effects on robbery, with some BID areas exhibiting greater effects than others. We cannot say whether spending on private security or economic development efforts caused these reductions or is merely correlated with them. This study relied on observational data, which limits our ability to infer whether the correlations observed are causally related. We attempted to remove the potential selection effects of establishing a BID by estimating pre–post effects on reported violent crimes for only those areas that eventually adopted a BID. In the absence of an experimental design in which BID are randomly assigned to neighbourhoods, we do not know whether BID activities actually caused the declines in robbery rates.

The efforts spent by BID in Los Angeles on economic development activities and social control efforts that focus on crime prevention, beautification and advocating for more public safety and sanitation services in many blighted sections of Los Angeles12 are associated with a reduction in the incidence of robberies. This information can assist in designing and testing the feasibility of BID as a community-level violence prevention intervention.

###### What is already known on this subject

- The incidence of violence is associated with neighbourhood environments.

- BID focus on public safety.

- Community economic development models may help reduce crime.

###### What this study adds

- The first systematic look at the effects of BID on violent crimes.

- Rigorous methodology for assessing the effects of BID on violent crimes.

# References

abstract: Pollination improves the yield of most crop species and contributes to one-third of global crop production, but comprehensive benefits including crop quality are still unknown. Hence, pollination is underestimated by international policies, which is particularly alarming in times of agricultural intensification and diminishing pollination services.
In this study, exclusion experiments with strawberries showed bee pollination to improve fruit quality, quantity and market value compared with wind and self-pollination. Bee-pollinated fruits were heavier, had fewer malformations and reached higher commercial grades. They had increased redness and reduced sugar–acid ratios and were firmer, thus improving the commercially important shelf life. Longer shelf life reduced fruit loss by at least 11%. This accounts for 0.32 billion US$ of the 1.44 billion US$ provided by bee pollination to the total value of 2.90 billion US$ made from strawberry sales in the European Union in 2009. The fruit quality and yield effects are driven by the pollination-mediated production of hormonal growth regulators, which occur in several pollination-dependent crops. Thus, our comprehensive findings should be transferable to a wide range of crops and demonstrate bee pollination to be a hitherto underestimated but vital and economically important determinant of fruit quality.
author: Björn K. Klatt; Andrea Holzschuh; Catrin Westphal; Yann Clough; Inga Smit; Elke Pawelzik; Teja Tscharntke; e-mail:
date: 2014-01-22
institute: 1Agroecology, Department of Crop Sciences, University of Göttingen, Grisebachstrasse 6, 37077 Göttingen, Germany; 2Centre for Environmental and Climate Research, University of Lund, Sölvegatan 37, 22362 Lund, Sweden; 3Department of Animal Ecology and Tropical Biology (Zoology III), Biocenter, Am Hubland, University of Würzburg, 97074 Würzburg, Germany; 4Quality of Plant Products, Department of Crop Sciences, Carl-Sprengel-Weg 1, 37075 Göttingen, Germany
references:
title: Bee pollination improves crop quality, shelf life and commercial value

# Bee pollination improves crop quality, shelf life and commercial value

# Introduction

Agricultural production forms one of the most important economic sectors [1]. The quantity of most crop species is increased by pollination [2,3], which is a highly important, but also seriously endangered [4], ecosystem service. More than 75% of the 115 leading crop species worldwide are dependent on or at least benefit from animal pollination, whereas wind and self-pollination are sufficient for only 28 crop species [2]. Thereby, animal pollination contributes to an estimated 35% of global crop production [2]. It is mostly pollination-dependent crops such as fruits that contribute to a healthy human diet by providing particularly high amounts of essential nutrients such as vitamins, antioxidants and fibre [5,6]. Berries especially have been found to benefit human health and are increasingly used for therapies against chronic diseases and even cancer [5]. First attempts at sustaining pollination and other ecosystem services have been aligned in a strategic plan of the Convention on Biological Diversity in Nagoya in 2010. However, recent decisions, such as the new Common Agricultural Policy of the European Union (EU), still endanger ecosystem services by promoting high-intensity agricultural management. Thus, the value of pollination and other ecosystem services is still underestimated or even disregarded in national and international policies. In this study, we expand our knowledge of the underestimated benefits of bee pollination by experimentally quantifying its impacts on crop quantity, quality, shelf life and market value. This should contribute to a better understanding of its monetary and social importance, thereby enhancing a sustainable implementation in future policies.
We used strawberries (*Fragaria × ananassa* DUCH.), a crop whose worldwide cultivation is on the increase [1], as a model system.

Strawberry plants flower in several successive flowering periods within a season, with flowers becoming smaller over time [7]. Varieties are self-compatible in most cases, and stigmas become receptive before the anthers of the same flower release pollen, so that allogamy is favoured. Bee pollination improves strawberry weight and shape. Effects depend on varieties [8,9], presumably owing to differences in pollinator attraction [10] and their dependence on cross-pollination [7]. Recent findings about metabolic processes in strawberries support the idea that pollination may also impact shelf life [11–14]. Owing to high fruit sensitivity to fungal infections and mechanical injuries, strawberry fruits have a short shelf life [12]. More than 90% of fruits can become non-marketable after only 4 days in storage [15]. Several studies have addressed the potential elongation of the shelf life of strawberries with modified storage procedures [15–19], which highlights how economically important this problem is. Crop features allowing longer storage, and thereby reducing post-harvest losses in supermarkets and households, are of major interest worldwide [20], but have so far not been analysed in terms of pollination. Shelf life and pathogenic susceptibility of strawberries are mostly related to fruit firmness [15], but surface colour and sugar–acid ratios are also involved [15–19]. Fruit colour further determines consumers' perception and influences their purchasing behaviour [19], but has never been related to animal pollination. In addition, only few studies report a relation of pollination to firmness [21–23] and sugar contents [22,24–27] of fruits. Hence, comprehensive economic gains of bee pollination are largely unknown and, in particular, the potential effect on commercially important parameters of overall fruit quality has not yet been explored.

We set up a field experiment with nine commercially important strawberry varieties and assessed the influence of self-, wind and bee pollination on strawberry fruits using exclusion treatments. We expected that: (i) bee-pollinated fruits would have higher numbers of fertilized achenes, the true 'nut' fruits of the strawberry, owing to higher pollination success compared with wind and self-pollination; (ii) bee pollination would therefore lead to fruits with higher commercial value compared with wind- and self-pollinated fruits, owing to fewer malformations improving commercial grades and to higher fruit weight; as well as (iii) higher firmness and longer shelf life; and (iv) bee-pollinated fruits should have a more intense red colour and lower sugar–acid ratios, thus improving the post-harvest quality of strawberries.

# Methods

## Experimental set-up

In 2008, we planted nine commercially important strawberry varieties of *Fragaria × ananassa* DUCH. (Darselect, Elsanta, Florence, Honeoye, Korona, Lambada, Salsa, Symphony, Yamaska) on an experimental field. The field was subdivided into 12 plots, and nine rows per plot were planted with 18 plants of a single variety per row. All varieties were present in all plots. The sequence of the rows within the plots was randomized. The field was surrounded by two further rows of strawberries to weaken edge effects. Five honeybee hives (*Apis mellifera* L.)
and approximately 300 trap nests, dominated by *Osmia bicornis* L., had been established for several years close to the field to ensure stable pollination services. Experiments were conducted in 2009, in the first yield year, using exclusion treatments on two plants per variety and plot. Following the consecutive flowering periods of strawberries [7], all buds of a plant were covered with Osmolux bags (Pantek, Montesson, France) to allow only self-pollination (self-pollination treatment), covered with gauze bags (mesh width 0.25 mm) to allow self- and wind pollination (wind pollination treatment), or remained uncovered to allow additional insect pollination (bee pollination treatment). Osmolux bags are semipermeable to water and steam, so that microclimate differences between bagged and unbagged flowers were kept to a minimum. Gauze bags do not create an atmosphere closed off from the outside and thus have no influence on microclimate. Bags were removed directly after fruit set, when petals began to wither and fall off the flower and the first approach of a fruit was visible, about 7 days after flower opening. At least 50 fruits per variety and treatment were harvested at maturity. All analyses except the titratable acid content were conducted on the day of harvesting to avoid influences on post-harvest quality owing to water loss and metabolic processes.

We collected insect pollinators under favourable weather conditions (*T* > 17°C; low cloud cover; wind speed less than 4 m s^−1^) in 2010, using standardized transect walks. Four strawberry varieties (Honeoye, Elsanta, Korona, Lambada) were selected based on their flowering time, so that all other varieties were flowering at the same time as at least one of these four varieties. Thus, pollinators were collected across the entire flowering season of the commercial strawberry field. Each transect consisted of one row of strawberries per plot of each of the selected varieties. On each of 4 days, four different plots of the experimental field were randomly selected and insects pollinating strawberry flowers were collected using sweep nets on each of the four selected varieties. Each transect was visited for 10 min. Pollinators were identified to species level, and data were pooled across all varieties.

## Commercial value

### Weight and commercial grades

We calculated the commercial value of each fruit based on fruit weight (BA2001 S, Sartorius, Göttingen, Germany) and the market value of strawberry fruits, which is based on the availability of fruits on the market and on commercial grades. Fruits were sorted into commercial grades according to aberrations in shape (deformations), colour (areas with yellow or green colour) and size (fruit diameter), following the official trade guidelines [28]. B.K.K. was trained by experienced strawberry growers and colleagues in how to apply the EU trade guidelines. Fruits with no or only slight deformations, with minimal areas of yellow or green colour that did not affect their general appearance, and with a minimum diameter of 18 mm were sorted into grade extra/one. Fruits showing distinct deformations and larger areas of yellow or green colour, but with a minimum diameter of 18 mm, were classified as grade two. Non-marketable fruits had strong deformations, large areas of yellow or green colour, or a diameter smaller than 18 mm. Aberrations in colour usually occurred in combination with fruit deformations and were thus not treated separately.
Following the above-mentioned Commission regulation, grades extra and one can be treated separately but are combined in practice. We calculated the proportions of fruits in each commercial grade and pollination treatment across all varieties (figure 2*b*) and also separately for each variety (see the electronic supplementary material, S2).

We obtained the commercial value of each fruit by multiplying its weight by its market value per gram [29]. The latter was assessed based on harvest time and commercial grades. Harvest time influences the market value of strawberry fruits owing to the availability of fruits on the market: the more fruits that are available, the lower the market value. Fruits that are sorted into lower commercial grades likewise have lower market values. Finally, we extrapolated commercial value to 1000 fruits for a better relationship to market situations.

### Firmness and shelf life

We bisected fruits and measured firmness at the centre of each half according to Sanz *et al.* [17], with the following modifications: we fitted the texture analyser (TxT2, Stable Micro System, Surrey, UK) with a 5 mm diameter probe and a 25 kg compression cell, and used a maximum penetration of 4 mm.

## Post-harvest quality

Colourimetric analysis was applied according to Caner *et al.* [19] at two opposite sides of the centre of each fruit in the Lab colour space, using a portable colourimeter (CR-310 Chromameter, Konica Minolta, Badhoevedorp, The Netherlands). The total soluble solids are strongly correlated with the total sugar content of a solution and were measured using a handheld refractometer (HRH30, Krüss, Hamburg, Germany). Measurements for each fruit were conducted twice and repeated when the values differed by more than 0.2 Brix. Fruits were freeze-dried (Epsilon 2-40, Christ, Osterode, Germany), and all samples from the same plant were pooled and milled. To account for an average water content of 82%, which was determined on a sample of 250 fruits, 0.18 g of each freeze-dried sample was diluted in 20 ml distilled water and titrated according to Caner *et al.* [19]; at 18% dry matter, 0.18 g of freeze-dried powder corresponds to 1 g of fresh fruit.

## Pollination success

We used at least eight fruits from each variety and treatment to analyse the number of fertilized achenes per fruit, which represents pollination success. Each fruit was blended in 100 ml distilled water for two minutes (Speedy Pro GVA 1, Krups, Offenbach, Germany). Fertilized achenes are heavier than water and sink to the bottom, whereas aborted achenes are lighter and accumulate at the water surface. Fertilized achenes were counted (Contador, Pfeuffer, Kitzingen, Germany) after drying for 48 h at 85°C.

## Statistical analysis

In the case of repeated measurements per fruit, we calculated mean values for fruit characteristics. We fitted linear mixed-effects models with random effects allowing treatment slopes and intercepts to vary among varieties [30], using R [31]. To account for spatial and temporal error structure and for unbalance in the data, the random part was completed by two further terms: the interaction of plot, variety and plant, whereas flowering period was included as a crossed random effect. Response variables were commercial value per fruit, fruit weight, firmness, surface colour values (red colour, brightness, yellow colour) and number of fertilized achenes.
In the models with sugar–acid ratio as the response variable, only plot and variety were used to complete the random part, because sugar–acid ratios were calculated based on arithmetic means per plant.

Bee, wind and self-pollination treatments were used as fixed-effect levels. To test whether pollination treatments differed and whether there was a main effect of all pollination treatments across all varieties, a model with unpooled treatment levels (full model), models with successively pooled treatment levels and a model without treatment as a fixed effect were compared [30] using the second-order Akaike information criterion (AICc) and relative likelihood [32]. This allowed us to test whether all treatments, specific treatment levels only, or none of the treatments had an effect on the response variables. The latter case was taken to indicate that treatment effects were specific to the variety, without a shared common effect between varieties. Residuals were inspected for constant variance, and transformations were used to account for non-normality and heterogeneity where necessary. Main effect values and parameter estimates were extracted from the model and used for plotting after back transformation.

Pearson's chi-squared analysis was used to calculate differences between pollination treatments in the number of fruits in each commercial grade. Differences are shown as proportions for better illustration (figure 1*b*).
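The model-selection step is easy to reproduce from first principles. The sketch below, in Python for illustration (the analysis itself was done in R [31]), computes AICc and normalized relative likelihoods (Akaike weights) for a set of candidate fits; the log-likelihoods and parameter counts are placeholders, not values from this study.

```python
import numpy as np

def aicc(loglik: float, k: int, n: int) -> float:
    """Second-order Akaike information criterion (AICc)."""
    return -2.0 * loglik + 2.0 * k + 2.0 * k * (k + 1.0) / (n - k - 1.0)

# Placeholder (log-likelihood, number of parameters) for the full model, the
# three models with pooled treatment levels, and the model without treatment.
fits = {
    "none pooled": (-512.3, 8),
    "bee = wind":  (-515.1, 7),
    "wind = self": (-512.9, 7),
    "bee = self":  (-514.6, 7),
    "sans":        (-516.0, 6),
}
n = 1895  # e.g., the number of fruits in the weight analysis

scores = {name: aicc(ll, k, n) for name, (ll, k) in fits.items()}
deltas = {name: s - min(scores.values()) for name, s in scores.items()}
rel = {name: np.exp(-0.5 * d) for name, d in deltas.items()}  # relative likelihoods
total = sum(rel.values())
for name in fits:
    print(f"{name:12s} dAICc = {deltas[name]:6.3f}  weight = {rel[name] / total:.3f}")
```

The 'likelihood' columns in table 2 appear to be normalized in this way, since they sum to approximately one within each fruit parameter.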
On average, bee pollination increased the commercial value per fruit by 38.6% compared with wind pollination and by 54.3% compared with self-pollination. Fruits resulting from wind pollination had a 25.5% higher market value than self-pollinated fruits. Pollination treatments were stronger than differences between varieties and thus had a main effect across all varieties (see table 2 for AICc and likelihood values). Our results suggest that, altogether, bee pollination contributed 1.12 billion US\$ to the total of 2.90 billion US\$ made from the commercial sale of 1.5 million tonnes of strawberries in the EU in 2009 \[1\], so far without consideration of the monetary value provided by enhanced shelf life (see below). Price and marketability of strawberries depend on commercial grades of fruit quality (shape, size and colour) \[28\]. Malformations, in particular, are a common problem affecting strawberry price and marketability \[33\]. Our experiment showed that pollination treatments significantly differed in the number of fruits in each commercial grade (χ^2^~4~ = 60.504; *p* \< 0.001, *n* = 1895). Bee pollination reduced malformations and thus enhanced marketability in all varieties except the variety Symphony (figure 1*b*; see the electronic supplementary material, S2 for variety values). The highest proportion of bee-pollinated fruits was assigned to the best grade extra/one, whereas non-marketable fruits formed the smallest fraction. By contrast, wind and self-pollination led to high proportions of non-marketable fruits. When compared with wind and self-pollination, bee pollination improved not only fruit shape but also fruit weight (figure 1*c*). Bee-pollinated fruits were on average 11.0% heavier than wind-pollinated and 30.3% heavier than self-pollinated fruits. Again, pollination treatments were stronger than differences between varieties and thus had a main effect across all varieties (see table 2 for AICc and likelihood values).

Delta AICc values and likelihood resulting from model comparisons. (AICc = 0 indicates the model with the highest explanatory power. Lower delta AICc and higher likelihood indicate better explanatory power of a model. Likelihood was calculated for models with delta AICc of less than seven \[32\]. The best explaining models are highlighted in italics. Sample sizes are given in brackets after the fruit parameters.
None, no treatment level pooled; sans, model without fixed effect.)

| fruit parameter | pooled levels | | | | |
|:---|:---|:---|:---|:---|:---|
| | none | bee = wind | wind = self | bee = self | sans |
| commercial value (*n* = 1892) | | | | | |
| AICc | *0* | 4.512 | 0.173 | 3.527 | 2.501 |
| likelihood | *0.403* | 0.042 | 0.370 | 0.069 | 0.115 |
| fruit weight (*n* = 1895) | | | | | |
| AICc | *0* | 4.162 | 3.507 | 4.872 | 3.137 |
| likelihood | *0.627* | 0.078 | 0.109 | 0.055 | 0.131 |
| shelf life (*n* = 1268) | | | | | |
| AICc | *0* | 0.347 | 1.791 | 7.218 | 5.273 |
| likelihood | *0.431* | 0.362 | 0.174 | — | 0.031 |
| red colour (*n* = 1279) | | | | | |
| AICc | 1.428 | 1.608 | *0* | 2.021 | 0.323 |
| likelihood | 0.155 | 0.142 | *0.317* | 0.115 | 0.270 |
| sugar–acid ratio (*n* = 345) | | | | | |
| AICc | 2.128 | 3.244 | *0* | 1.247 | 1.147 |
| likelihood | 0.131 | 0.075 | *0.378* | 0.203 | 0.213 |
| pollination success (*n* = 356) | | | | | |
| AICc | *0* | 4.267 | 9.192 | 8.704 | 7.290 |
| likelihood | *0.894* | 0.106 | — | — | — |

### Shelf life

Bee pollination strongly affected the shelf life of strawberries by improving their firmness (figure 2*a*). The firmness values of each treatment and variety were related to shelf life, measured as the number of days until 50% of fruits had been lost owing to surface and fungal decay (see the electronic supplementary material, S3). The higher firmness resulting from bee pollination potentially extended the shelf life of strawberry fruits by about 12 h compared with wind pollination, and by more than 26 h compared with self-pollination. After 4 days in storage, only 29.4% of the wind-pollinated fruits and none of the self-pollinated fruits were still marketable, whereas 40.4% of the bee-pollinated fruits remained in a marketable condition. Thus, bee pollination accounted for a decrease of at least 11.0 percentage points in fruit losses during storage. These findings suggest that the value for bee pollination calculated in §3a(i) has to be increased to accommodate this impact on the shelf life of strawberries. Hence, pollination benefits on the shelf life of strawberries potentially added another 0.32 billion US\$ to the commercial value of strawberry pollination (without shelf-life effects: 1.12 billion US\$). In total, bee pollination contributed 1.44 billion US\$ to the total of 2.90 billion US\$ made from the commercialization of 1.5 million tonnes of strawberries in the EU in 2009 \[1\]. Pollination treatments had a main effect on shelf life across all varieties (see table 2 for AICc and likelihood values). Varieties producing fruits with high firmness benefitted most from bee pollination.

## Post-harvest quality

In most varieties, bee-pollinated fruits had a more intense red colour than fruits resulting from wind and self-pollination (figure 2*b*). In the varieties Lambada and Symphony, however, fruits from the self-pollination treatment showed the most intense red colour. The bee pollination treatment differed from the two other pollination treatments across all varieties, whereas strong variety differences impeded detection of a difference between the wind and self-pollination treatments (see table 2 for AICc and likelihood values).
The brightness of bee- and wind-pollinated fruits was similar and highly correlated with yellow colour intensity (see the electronic supplementary material, S4 and S5). Neither colour property differed between bee and wind pollination, but self-pollinated fruits were darker and had a less intense red colour. Thus, bee pollination resulted in bright fruits with a more intense red colour than wind pollination, whereas self-pollinated fruits were darker and less red (figure 2*b* and the electronic supplementary material, S4).

Senescence of strawberries is related not only to losses in firmness and colour changes, but also to increasing sugar–acid ratios. Bee-pollinated fruits generally had a lower sugar–acid ratio than wind- and self-pollinated fruits across all varieties (figure 2*c*), although fruits of the varieties Elsanta and Symphony had a higher sugar–acid ratio with bee pollination. The difference between wind and self-pollination remained variety-dependent (see table 2 for AICc and likelihood values), whereas the sugar–acid ratio of fruits resulting from bee pollination differed from both other treatments across all varieties.

## Pollination success

Pollination success was assessed as the number of fertilized achenes per fruit under each pollination treatment. Bee pollination was much more efficient than wind and self-pollination, resulting in a higher number of fertilized achenes per fruit across all varieties (figure 3; see table 2 for AICc and likelihood values). Bee pollination on average increased the number of fertilized achenes by about 26.8% compared with wind pollination and by about 61.7% compared with self-pollination. Wind-pollinated fruits had a 47.7% higher number of fertilized achenes than fruits resulting from self-pollination. This supports the interpretation that the fruit-quality differences we observed are genuine effects of bee pollination.

# Discussion

We found that bee pollination, which was mainly provided by solitary wild bees, plays a key role in several aspects of the quantity and quality of strawberry fruits. Bee-pollinated fruits showed fewer malformations, greater fruit weight and longer shelf life than fruits resulting from wind and self-pollination, giving them a higher commercial value as well as improved post-harvest quality in the form of a more intense red colour and lower sugar–acid ratios.

The mechanism behind the benefits of strawberry pollination by bees is based on the fertilization of the true 'nut' fruits of the strawberry, the achenes \[11–14\]. During their visits, bees distribute pollen homogeneously over the receptacles, increasing the number of fertilized achenes per fruit \[34\]. While unfertilized achenes resulting from insufficient pollination have no physiological functionality, fertilized achenes produce the plant hormone auxin \[35\], which mediates the accumulation of gibberellic acid \[14\]. Together, these plant hormones induce fruit growth by promoting cell division and cell enlargement, thereby enhancing the weight of strawberry fruits \[12\]. This further improves fruit quality and thereby commercial grades \[12\] by preventing malformations, which are caused by areas of unfertilized and thus physiologically inactive achenes \[33\].

How can pollination induce a longer shelf life in strawberries?
The shelf life of strawberries and other fruits depends mostly on their firmness \[15,33\], which is also functionally based on fertilized achenes \[33\] and thus dependent on successful pollination. Auxin and gibberellic acid delay fruit softening, and thereby enhance firmness and shelf life, by limiting the expression of several fruit-softening proteins, the so-called expansins \[11\]. Higher levels of both plant hormones also increase the post-harvest quality of strawberries. Although auxin alone reduces the accumulation of anthocyanins \[11\], high levels of auxin and gibberellic acid in conjunction can increase anthocyanin accumulation \[12\]. In contrast to firmness and colour changes, the sugar–acid ratios of strawberries are not directly affected by auxin and gibberellic acid \[12\]. However, the higher firmness of fruits is associated with more stable cell walls, which might reduce respiration and thereby limit the metabolic processes that alter sugar and acid contents during storage \[19\]. Indirect positive effects of pollination are therefore probable.

Plant hormones that can influence the quality of fruits and vegetables are known to occur not only in strawberries, but also in several other crops \[36\] that require animal pollination \[7\]. Crops such as coffee \[37\] and blueberry \[38\] benefit from animal pollination in terms of fruit set and fruit size, and it has been shown elsewhere that fruit shape can benefit from increased animal pollination \[7\]. This indicates that our findings may be transferable to a wide variety of crops and that animal pollination may contribute substantially to crop quality. To date, however, few studies have examined effects of pollination other than those on crop yield and fruit set. It has been shown that the sugar content of loquats \[24,26\], vine cactus \[25\] and oriental melon \[22\], as well as the firmness of oriental melon \[22\] and cucumber \[23\], can be increased by animal pollination. Results for tomato are mixed: Al-Attal *et al.* \[21\] showed that pollination increased the firmness of tomatoes in greenhouses, whereas pollination had no effect on the firmness of cherry tomatoes under field conditions \[27\]. Oilseed rape is another important crop whose quality benefits from insect pollination, through a higher oil content and lower chlorophyll content \[39\]. These results support the assumption of a general impact of pollination on multiple aspects of crop quality. To our knowledge, however, findings as comprehensive as ours about the benefits of pollination for crop quality, yield and commercial value, mechanistically linked to previously reported physiological processes, have not been reported before.

Our results showed strawberries to be almost exclusively visited by bees, with solitary wild bees being most abundant. This contrasts with earlier findings, in which honeybees were the most common pollinators of strawberries and other crops \[7\], and further shows that wild bee pollination can be important for crop production when wild bees are abundant close to crop fields. Wild bee pollinators have already been shown to be effective crop pollinators \[40\], including for strawberries \[41\].
Additional experiments are required to assess the current abundance of wild bee pollinators, and thus their importance for strawberry production, on commercial strawberry fields under conventional management conditions.

In our study, we used an innovative approach to calculating the commercial value of pollination by considering not only overall yield \[2,3\] but also crop quality in terms of trade classes, shelf life and changing market values. Shelf life is a major factor determining the commercial value of pollination. Globally, between one-third and one-half of all fruits and vegetables are lost to mechanical damage and deterioration during handling, transport and storage directly after harvest, or are wasted at retailer and consumer levels \[42\]. This illustrates the commercial and social importance of crop shelf life and the far-reaching impact of pollination deficits.

Of course, our calculations may still underestimate the commercial value of bee pollination, as they do not consider pollination benefits related to colour, sugar–acid ratio and other taste components.

# Conclusion

In conclusion, our results showed that crop pollination is of higher economic importance than hitherto thought. Plant hormones, the production of which is mediated by pollination, occur in several other pollination-dependent fruits and vegetables \[36\]. This highlights the major importance of animal pollination for crop quality well beyond strawberries. Quality improvements of crops can greatly affect marketability and contribute to reducing food loss and waste. In the industrialized countries, 30–50% of all crops are thrown away at retail and consumer levels \[20,42\]. Under the current scenario of rapid human population increase and growing global food demand \[43\], achieving high quality and quantity of crops is a pressing issue. Our study suggests that comprehensive analyses of the benefits of pollination for animal-dependent crops, which comprise 70% of all major crop species \[2\], may clearly increase estimates of the economic value of this ecosystem service. Pollination appears to be economically much more important than previously recognized and needs better support through adequate agricultural management and policy.

## Acknowledgements

We thank K. M. Krewenka, J. Fründ, C. Scherber, M. von Fragstein, G. Everwand, B. Scheid, K. Klatt and H. P.-G. Klatt for comments on the manuscript and N. Blüthgen, D. Kleijn and L. Garibaldi for reviewing the manuscript prior to submission. We thank the members, students and technicians of the agroecology group for their field assistance.

## Funding statement

This work has been supported by the German Research Foundation (DFG) and the SAPES research environment.
Data can be accessed on request from the Agroecology group at Göttingen University (see for contact details) and 3TU.Datacentrum (see ).

# References

abstract: # Introduction
National Health Service hospitals and government agencies are increasingly using mortality rates to monitor the quality of inpatient care. Mortality and Morbidity (M&M) meetings, established to review deaths as part of professional learning, have the potential to provide hospital boards with the assurance that patients are not dying as a consequence of unsafe clinical practices. This paper examines whether and how these meetings can contribute to the governance of patient safety.
# Methods
To understand the arrangement and role of M&M meetings in an English hospital, non-participant observations of meetings (n=9) and semistructured interviews with meeting chairs (n=19) were carried out. Following this, a structured mortality review process was codesigned and introduced into three clinical specialties over 12 months. A qualitative approach of observations (n=30) and interviews (n=40) was used to examine the impact on meetings and on frontline clinicians, managers and board members.
# Findings
The initial study of M&M meetings showed considerable variation in the way deaths were reviewed and a lack of integration of these meetings into the hospital's governance framework. The introduction of the standardised mortality review process strengthened these processes. Clinicians supported its inclusion into M&M meetings, and managers and board members saw that a standardised trust-wide process offered greater levels of assurance.
# Conclusion
M&M meetings already exist in many healthcare organisations and provide a governance resource that is underutilised.
They can improve accountability of mortality data and support quality improvement without compromising professional learning, especially when facilitated by a standardised mortality review process.
author: Juliet Higginson; Rhiannon Walters; Naomi Fulop
**Correspondence to** Professor Naomi Fulop, Department of Applied Health Research, University College London, 1-19 Torrington Place, London WC1E 7HB, UK;
date: 2012-05-03
institute: NIHR King's Patient Safety and Service Quality Research Centre, King's College London, London, UK
references:
title: Mortality and morbidity meetings: an untapped resource for improving the governance of patient safety?

# Introduction

In recent years, there has been increasing international interest in using mortality rates to monitor the quality of hospital care.1 2 Concern about patient safety and scrutiny of mortality rates intensified in the UK with the extensive coverage of investigations into National Health Service (NHS) hospital failures and the Dr Foster report with its patient safety rating for NHS trusts.3 4 As a consequence, boards of healthcare organisations now require assurance that the care they provide is safe and that patients are not dying through failure of their services. Many trusts include mortality rates in their performance scorecards or dashboards and actively engage with national patient safety improvement initiatives, such as Patient Safety First and the Safer Patient Initiative, to reduce mortality rates.5 6

A forum that has traditionally reviewed in-hospital deaths is the long-standing Mortality and Morbidity (M&M) meeting, established by surgeons to further professional education. In regularly reviewing deaths and complications, these meetings have the potential to provide accountability and the improvement measures required for patient safety, as well as professional learning. How effective they are in fulfilling these additional roles remains unexplored.

In many countries, M&M meetings are embedded within the medical curriculum for doctors in training.7 Junior doctors present cases to other doctors for reflection on diagnostic or treatment decision-making, and in return they receive clinicopathological wisdom and learn presentation skills. In the past, the brief discussions between clinicians about the causes of death were thought to constitute effective peer review and an adequate means of changing practice.8 9 Little attention was paid to analysing the causes of deaths for quality improvement.10 11 Studies have shown that for M&M meetings to facilitate improvement and be more than a forum for peer review, they need to be structured and systematic in reviewing and discussing deaths, directing discussions towards improving system and process variations.12–14 Studies recommended that, to support this, junior doctors' training should include more focus on systematic process change and less on medical error in M&M meetings.15–17

Historically, M&M meetings have been led and attended only by the medical profession and have remained autonomous, with knowledge not being available or shared with other professions or across the wider hospital governance framework.
This 'silo' working has led to a lack of organisational learning and accountability.18 Increasingly, hospitals are beginning to integrate M&M meetings into their governance processes by making them mandatory and more accountable for reviewing deaths and taking corrective action should adverse events arise.19–21 To support this, the US Agency for Healthcare Research and Quality has produced web-based guidance for case analysis.22

Traditionally, adverse outcomes discussed at M&M meetings have been attributed to individual competence in treating patients rather than to the system or process failures involved in the care.11 23 Although both contribute to errors, the focus on individuals has led clinicians to fear embarrassment and loss of reputation, making them reluctant to speak openly about errors at meetings.24 25 This defensive behaviour is thought to be counterproductive to eliminating adverse events and assuring safe care.26 27

In light of this evidence, we wanted to see whether and how M&M meetings in an English teaching hospital could facilitate quality improvement, be accountable and provide assurance within the organisation's governance processes. Using a structured mortality review process as a facilitating mechanism, we wanted to assess what impact this would have on the original focus of the meetings and on professional learning; to explore how hospital staff viewed the changes; and to evaluate the potential that a different format of M&M meeting could offer the organisation.

# Methods

## Setting

The participating organisation was an English NHS teaching hospital offering specialist tertiary services in addition to general and surgical care. The study was carried out in close collaboration with the hospital to inform and support other strategies for reducing mortality rates. The single case study allowed us to study the organisation both horizontally (ie, across divisions) and vertically (up and down the managerial and professional hierarchies), an approach that supports analytic rather than statistical generalisation.28

## Initial assessment

During 2009, we studied the characteristics and processes of all the hospital M&M meetings and their position within the governance framework, using non-participant observations at meetings (n=9) and semistructured interviews with meeting chairs and governance managers (n=19). The results showed considerable variation in meeting structure, format and case presentations, confirming the findings of previous studies. We found that the responsibility for managing these meetings was devolved to clinical groups and that, as a consequence, they had developed independently and in individual ways. There was no formal reporting structure from these meetings into the wider hospital governance to inform or assure the board that deaths were not occurring as a result of unsafe care.

Following this, we codeveloped a standardised mortality review form and database with hospital staff, based on recommendations in the literature and drawing on the method of examining deaths used by the National Confidential Enquiry into Patient Outcome and Death and the Scottish Audit of Surgical Mortality.29–32 The review form focused on whether the death was avoidable; on issues arising from the care of the patient; and on whether these could have contributed to the patient's death. It also recorded where actions were necessary to address any problems; whether an adverse incident report was needed; and who was going to take any actions forward (Mortality review form: Appendix 1, web only).
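To make the shape of such a record concrete, the following is a minimal sketch in R of the kind of fields the paper describes, and of how an electronic store of completed reviews supports the committee reporting discussed later. The field names and example data are illustrative assumptions, not the trust's actual form.

```r
# Illustrative structure for standardised mortality review records
# (field names and data are hypothetical, not the trust's actual form).
reviews <- data.frame(
  case_id              = c("A1", "A2", "A3"),
  death_avoidable      = c(FALSE, TRUE, FALSE),
  care_issue           = c(NA, "delayed specialist referral", NA),
  contributed_to_death = c(NA, TRUE, NA),
  incident_report      = c(FALSE, TRUE, FALSE),
  action               = c(NA, "joint referral guideline", NA),
  action_owner         = c(NA, "clinical director", NA),
  stringsAsFactors     = FALSE
)

# An electronic record like this makes reporting straightforward, e.g.
# counts of avoidable deaths and of recurring care issues over a period:
sum(reviews$death_avoidable)
table(na.omit(reviews$care_issue))
```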
We developed and piloted the review form and database, which were then introduced into three clinical specialties selected purposively for their patient cohorts. This process was carried out from January to December 2010, during which time we studied whether and how this standardised mortality review process supported M&M meetings in contributing to wider governance processes.

## Participants

Five care groups across two divisions agreed to participate in the study. Three care groups agreed to adopt the standardised mortality review process, two from a general medical division and one from a specialist division. One group in each division agreed to act as a study control, not implementing the review process but participating in the evaluation. The two clinical divisions were selected because they regularly experienced high numbers of deaths, albeit for contrasting reasons. Patients admitted to the general medical division had a range of complex medical problems and multiple comorbidities, were often frail and elderly and had numerous possible causes of death. In contrast, patients admitted to the second division, which offered specialist tertiary services, had a narrower range of specialty-specific causes of death, such as acute or end-stage organ failure.

## Data collection

We used qualitative methods (interviews and observations) to understand whether and how the standardised mortality review process was used, and whether and how it supported M&M meetings in contributing to governance.

Forty semistructured interviews were carried out by the researcher (JH), with frontline staff (n=32) and at senior executive level (n=8) (Interview schedule: Appendix 2, web only). Participants were recruited with the aim of acquiring a broad sample across professional and occupational groups (Sampling matrix: Appendix 3, web only). Interviews were carried out with chairs of M&M meetings (n=5) as well as a range of consultants (n=6), junior doctors (n=6) and nursing staff (n=7) who attended the meetings, and some who were not invited, to capture a wide range of experiences and views. Managers and senior clinicians (n=8), and senior executives and board members (n=8), were selected because of their governance roles.

Interviewees were provided with study details when they were invited to participate and were assured of anonymity and confidentiality. Consent to participate and to be recorded was obtained at the start of the interview. Interviews lasted approximately 45 min. Frontline staff interviewees were asked to identify the role and format of M&M meetings; the importance of quality improvement in making care safer; their views about the introduction of the review form and process; and the governance of M&M meeting outcomes and accountability. Senior executives were asked what patient safety data they currently received; the role that M&M meetings played in providing those data; and how they perceived the potential of a standardised process for reviewing deaths in the hospital governance of patient safety.

In addition to the interviews, we used non-participant observations at M&M (n=26) and governance (n=4) meetings to provide background and context to the interviewees' comments.
Consent for these was obtained prior to commencing the data collection.

## Data analysis

Observations were organised in Microsoft Excel. Data from interviews were dual coded (JH and RW), generating inductive and deductive themes, which were agreed with a third researcher (NF) who read a sample of transcripts. Data were organised and analysed using NVivo 9 software.

# Findings

Two of the three specialties implemented the review process throughout the study. The remaining specialty helped with the development of the process but refrained from testing the final version. We present our findings under three main headings: how M&M meetings contributed to the governance of patient safety within the hospital; how M&M meetings provided a resource for learning and accountability; and what impact the standardised review process had on both activities.

# The contribution of M&M meetings to the governance of mortality data

During the study, changes occurred in the governance of mortality data at both trust and divisional levels of the organisation, as shown in figure 1. The left hand side of the figure outlines the prestudy governance processes for mortality data within the general medical division and hospital, showing no upward reporting from divisions to the hospital board and only informal cascading of information down to frontline staff. The right hand side shows the changes to governance introduced during the course of the study.

A monthly high-level trust safety committee was established to monitor externally published risk adjusted mortality rates, investigate outliers identified by the national regulator (the Care Quality Commission) and receive 6-monthly divisional reports from M&M meetings. It provided quarterly reports and assurance to a quality subcommittee of the board. M&M meetings within each division began reporting quarterly to the divisional risk and governance committee, using outcome measures produced from the standardised review process. Improvement measures determined by the M&M meetings were communicated to frontline staff through newly established ward-based specialty governance and clinical management meetings. This strengthening of divisional and hospital governance arrangements may have been shaped in part by participation in, and feedback from, the study and by a greater awareness of the potential of M&M meetings to contribute to governance.

All interviewees agreed that senior executives and board members should be accountable for the safety of patient care, especially when 'things went wrong', and that reporting outcomes from M&M meetings to a trust committee was necessary. Illustrative interview quotes are shown in box 1.

###### The contribution of Mortality and Morbidity meetings to the governance of mortality data

'Obviously, it's very important for the trust to know what's going on mortality-wise across the entire trust. And actually I don't think you can have meetings that are about such a significant area of care without that going to a higher group of people. It's all very well to say yes well we've got mortality meetings in all of the divisions and they're all going fine, but how does anybody know? … Because at the end of the day they're going to be the ones who are answering the phone when the press sort of ring up and say ah we hear that your standardised mortality ratio is through the roof!'
(Consultant).

'I think that information from these meetings can provide assurance that we don't have excess deaths. It helps us to pre-empt problems perhaps before we are alerted. By looking at morbidity cases we can also try to avoid problems before they lead to a fatality.' (Chair, Safety Monitoring Committee).

'I think that it's really important to … know that we have learned from the experience, and so how you capture that and reassure management that we won't make the same mistake again is really important….' (Board member).

M&M participants viewed reporting to this Safety Monitoring Committee as acceptable provided that the environment was non-judgemental and understanding of the case mix of their patients. Some clinicians and managers appreciated the committee's focus on data, while others wished for an opportunity to share learning from other specialties, escalate unresolved issues and receive support for change. This was contrary to the views of two senior executives, who thought the role of the committee was to monitor mortality rates and that it was the divisions' responsibility to address quality and safety issues.

# M&M meetings as a resource for learning, improvement and accountability

Observations of M&M meetings during January to December 2010 showed considerable variation in the structure, organisation and process of reviewing deaths, as shown in tables 1 and 2. Responsibility for the meetings was devolved to the specialties, which had led them to become non-standard and autonomous.

Summary of the structure and format of Mortality and Morbidity meetings

| Meeting characteristics | General medical division | | | Specialist division | |
|----|----|----|----|----|----|
| | Test group 1 | Test group 2 | Control 1 | Test group 3 | Control 2 |
| Frequency | Weekly | Monthly | Monthly | Monthly and then weekly | Not formalised |
| Timing | Working hours | Lunchtime | Lunchtime | Lunchtime | Working hours |
| Length | 20–30 min | 1 h | 1 h | 3 h then 1 h | 1 h |
| Venue | Departmental seminar room | Lecture theatre in medical school | Hospital committee room | Departmental seminar room | Departmental seminar room |
| Reason for organisation | Part of a management meeting | Part of a grand round programme | Capacity and semiformal setting | Local and easily accessible | Part of the professional development programme |
| Attendees | Consultants, a junior doctor representative, clinical director, senior nurses, managers | Consultants, junior doctors, senior nurse, governance managers | Consultants, junior doctors | Consultants, junior doctors, clinical director, nurses | Consultants, junior doctors, clinical director, nurses, pharmacists and dietician |
| Number of attendees | 12 | 15–35 | 18–20 | 24–41 | 25 |

Summary of the review processes carried out by Mortality and Morbidity (M&M) meetings

| | General medical division | | | Specialist division | |
|----|----|----|----|----|----|
| | Test group 1 | Test group 2 | Control 1 | Test group 3 | Control 2 |
| Source of mortality data for meetings | Departmental database | Inpatient episode data | Inpatient discharge data | Inpatient episode data | Departmental database |
| How meetings informed | M&M coordinators extracted cases from database | Hospital intelligence unit notified M&M coordinators | Software programme designed by M&M co-chair | Hospital intelligence unit notified M&M coordinators | M&M coordinators extracted cases from database |
| Numbers of cases reviewed | 1–6 | 10–42 | 8–15 | 2–7 | 5–6 |
| Criteria for selection | All deaths | All deaths | All deaths | All deaths | 'Interesting' cases |
| Review process | Standardised mortality review | Standardised mortality review | Departmental structured process | No formalised structure | No formalised structure |
| Meeting records | Standardised mortality review and minutes | Standardised mortality process | Excel spreadsheet | Minutes | Minutes |
| Issues identified and recorded | Yes | Yes | Yes | Yes | Yes |
| Actions specified | Yes | Yes | Yes | Yes | Yes |
| Actions assigned | Yes | Yes | No | No | No |
| Actions followed up at next meeting | No | No | Yes | No | No |
| Record circulated | Yes (minutes) | No | Yes | Available for viewing but not circulated | Available at next meeting |

## Meeting frequency

Clinical staff found weekly meetings helpful, as the cases were still fresh in their minds and the small number allowed time for in-depth discussion. However, the chair of a meeting that met monthly suggested that weekly meetings would be less 'special' and would lose their impact.

## Venue

To encourage attendance, meetings were held at lunchtime or during working hours as part of the department's education programme. One specialty held its M&M review as part of a larger management meeting that considered adverse incidents, complaints and other 'bad bits' of the service. Incorporating mortality reviews had the benefit of integrating and addressing all risk issues together, especially as the meeting had 'all the right people round the table' to achieve changes. It reduced the number of meetings that clinicians with managerial responsibilities had to attend, but had the disadvantage of not being open to all clinical staff.

The various ways in which M&M meetings reviewed deaths are summarised in table 2. One control group M&M meeting had already identified a need for a structured approach to its meetings and had developed its own system for categorising deaths. However, this system did not identify the specific failures in care or the action needed to address them.

## Case presentations

Generally, junior doctors were tasked with reviewing deaths prior to meetings and presenting a summary of each case at the meetings. In one instance, 40 cases were presented in an hour, which one clinician described as 'mind-numbingly boring'. There was no time for in-depth discussion, making the review a tick-box ceremony in which errors were liable to be 'swept under the table'. As peer review was also important to clinicians, many felt that all deaths should be presented to a wide audience. Achieving a balance between having all deaths scrutinised by more than one person, and selecting and discussing avoidable deaths in depth, appeared to be challenging.

## Participation and culture

Both doctors and nurses reported that it was important to know which cases were going to be discussed at meetings. They wanted to familiarise themselves with the relevant case details to 'defend' any course of action that they had, or had not, taken and, for nurses especially, to give them the confidence to participate. All interviewees stressed that M&M meetings should be blame-free to facilitate improvement and accountability, although some were not sure that this was true of their meetings.
One control group had made a deliberate decision to develop a safe and non-critical environment before considering any other aspect of the meeting.

## Accountability

All the participants saw learning and improvement in care as the purpose of M&M meetings and an essential part of clinical activity. Many saw them as having an additional governance role, as shown in box 2. There were no reports that the educational and learning role of the M&M meeting was compromised by a greater focus on accountability.

###### Mortality and Morbidity (M&M) meetings as a resource for learning and accountability

'I think the potential for them (M&M meetings) is absolutely massive. I think if we have a culture of openness and discussing these cases that are very difficult in a very frank robust way, and junior doctors and hopefully other staff come to that and see that discussion, take (learning) away with them, then I think that's very powerful. And I think that if we're able to look at trends and patterns, then you can also use that to influence policy or practice within the division and within our wards and within our daily work.' (Consultant).

'In terms of reporting, I think … the Trust should know exactly what is happening just as we should as clinicians' (Chair, M&M meeting).

'Everything should be transparent; what you are doing well and what can be improved. If there are failings in the system it should be readily available for board members.' (Junior doctor).

'I think the big driver (for M&M meetings) would be about improving patient outcomes…and linking it to the governance agenda, patient safety and patient experience and that would be how I, and ward managers, should be engaged in the process.' (Nurse).

'I guess if I look back I kind of wonder how on earth we thought we had assurance of any description before we started having robust M&Ms. I guess the reality is we didn't question too hard and had we questioned we'd have said no we didn't have any assurance. So they have a very important role.' (Senior manager).

# The impact of the standardised mortality review process

## Improved structure to meetings

Case reviewers liked how the standardised process directed the line of questioning and made it easier to extract the relevant information from case notes. Some thought it was helpful as a teaching aid for junior doctors. It highlighted and captured areas of concern and helped to focus and structure the meeting: it 'helps stop the rambling'. Many thought this was what was needed in their meetings, as described in box 3.

###### The impact of the standardised process on Mortality and Morbidity meetings

'I think they've become much more structured, and again that's been partly helped by the fact that we have a form that actually provides a framework. I think without that things become very woolly; people can kind of get bogged down in looking at the micro aspects of morbidity and mortality, and not necessarily the bigger picture.' (Manager).

## Improved case review

The review process brought standardisation to the examination of deaths and provided a framework that could be applied to patient deaths in different circumstances. It also provided an assurance that all deaths were reviewed in the same way, a fact that board members viewed positively. It gave the meetings more significance and made them 'official', reported one person, 'not just a paper exercise'.
To maintain transparency, clinicians suggested that the review form should be completed openly in the meeting.

## Improved records

The electronic record of the standardised mortality review facilitated easier reporting and the identification of recurring themes over time. Clinicians spoke of its capacity to formalise 'organisational memory': those issues that arose infrequently and were remembered only by long-standing members of staff. Actions needed, and to whom they were assigned, were more formally recorded, which improved the likelihood of corrective measures being undertaken. Equally important was the paper trail it provided of how problems had been addressed and practice had changed.

## Improved reporting and governance

Chairs and managers of the participating groups were especially positive about the support the process offered in producing performance reports for the trust Safety Monitoring Committee. M&M meeting outcomes (the number of avoidable deaths, contributory factors and actions taken to address any quality issues) were included in their twice-yearly report and were later developed into a standard reporting template by the committee.

One of the intervention groups in the general medical division uncovered a continuing problem with care through the more structured mortality review and addressed it through the improved governance structure. This is outlined in box 4.

###### An example of a change in practice in the division with improved governance

At a monthly departmental Mortality and Morbidity (M&M) meeting, all 22 deaths from the previous month were presented. These cases had been reviewed by two junior doctors in consultation with the M&M meeting chair. The junior doctors presented a summary of each case on a single PowerPoint slide, describing clinical details and causes of death. In four of the 22 cases, where the patients had not been expected to die, the meeting chair led a closer examination and discussion. In one of these cases, an acutely ill patient admitted to a medical ward had not received appropriate timely specialist input despite a referral having been made. The M&M meeting agreed that this omission constituted poor clinical care which potentially contributed to the death. The lack of response to the referral was attributed to 'process issues and communication'. The case was also reviewed in the corresponding specialist M&M meeting.

### Action

1. Specialty consultant invited to the next M&M meeting for an open discussion.

2. Referral guidelines to be produced jointly by the two departments.

### Problem repeated

1. Failure to get prompt specialist input on a further occasion.

### Action

1. Chair raised the issue at a quarterly divisional M&M meeting: the clinical director advised specialist divisions of the need for compliance with the new referral guidance.

2. Chair reported the problem in his 6-monthly feedback to the trust Safety Monitoring Committee for hospital-wide compliance with the referral guidance.

## Standardised process

Many clinicians, and especially board members, saw the benefit of having a standardised process across the hospital to provide reassurance that all deaths were being reviewed in the same way. The majority of those interviewed wanted to see the review process extended to all hospital M&M meetings.
Some participants of the control group M&M meetings, who had no experience of using the process, felt that if it was rigorously completed, the data would provide 'clear performance metrics which could be hugely valuable to the organisation'.

## Concerns

There were a few less positive comments from staff who had not experienced the review process first-hand. The specialty that decided not to continue testing the study review form felt it would not advance their own process and might limit the openness of the discussion, thus jeopardising the 'fragile culture' of their meeting.

Some meeting participants expressed concern that completing the review might become the focus of the meeting rather than an aid to direct discussion. Extra administrative support might also be needed. Others suggested that specialties might wish to add specialty-specific questions to the core template. Overall, the concerns were outweighed by support for the process.

# Discussion

M&M meetings are an established part of medical education and have existed for decades as forums for doctors to review and present cases for biomedical exploration. Their potential for quality improvement depends on them being more structured and reviewing deaths in a more systematic way. Changes to the curriculum of junior doctors' professional education have done much to promote this.24 33 34 Studies have also shown how specialty or departmental M&M meetings have improved quality by adopting structured reviews.14 23 35 Our findings build on these and show how M&M meetings can be integrated across the whole hospital and contribute to accountability for deaths and the governance of patient safety. Meetings that adopt a standardised and systematic review process can focus on systems and process failures; provide a record of meeting outcomes and of the follow-up actions taken to address failures; and facilitate the reporting of meeting outcomes and assurance to the board.

We had anticipated that there might be greater resistance from clinicians, who would want to maintain their autonomy and not want to codify and share sensitive data.36 37 Our findings show considerable support from them for both the standardised review process and the wider governance role of M&M meetings. There may be several reasons for a greater acceptance than expected.
Health professionals are expected to take responsibility for the governance of their own clinical practice and are increasingly aware that hospitals are being monitored for the quality of their care.38 Many are being co-opted into managerial roles, such as clinical directors, medical directors and chairs of governance committees, and are accountable for the governance of quality and safety data.36 39 This additional managerial responsibility encourages clinicians to value systems that make the provision of governance data easier.

The standardised review process was generally accepted because it met clinicians' requirement to capture the tacit nature of case context and complexity.18 26 Unlike many checklists, it was adapted to their particular contextual needs, which gave them ownership.40 Close collaboration during its development also helped to increase engagement, so it was seen less as a top-down imposition.41 42 Health professionals welcomed the focus on systems and process failures rather than individual competence, while senior executives and board members appreciated the standardisation and safety monitoring that the process supported.43 Suggestions, based on our findings, for an M&M meeting review process that facilitates both learning and assurance are listed in box 5.

###### Suggestions for using Mortality and Morbidity (M&M) meetings for governance of patient safety

- Invite all types of health professionals to M&M meetings and notify them in advance which cases are to be reviewed and discussed.

- Employ a short, standardised review process that highlights avoidable deaths and contributory factors, allowing for staff involvement in the design and flexibility for specialty-specific questions as necessary.

- Allow adequate time for case discussions, using the review questions as an aid.

- Encourage a focus on systems and process variations, not individual competence.

- Carry out the review openly, summarising actions at the end of the meeting and tracking them at the following one.

- Record meeting outcomes electronically to facilitate audit and performance data.

- Integrate M&M meetings into the wider governance structure and monitor meeting outcomes for shared learning and assurance.

Although this is a single case study, personal communications with medical directors of four other NHS trusts championing patient safety helped place our findings in context. As in our case study, responsibility for M&M meetings in these hospitals had previously been devolved to clinicians, and the meetings provided no assurance that deaths were being reviewed in a systematic and rigorous way. Outcomes from meetings were neither standardised nor integrated into the hospital governance. In the smaller hospitals, there was an increased focus on reviewing and following up deaths, but this was carried out by individual senior clinicians. The larger hospitals, however, were beginning to see the need for a more effective way of reviewing larger numbers of deaths and the important role that M&M meetings could play, and were introducing standard trust-wide review processes.

# Conclusion

M&M meetings have the potential to contribute to the governance of patient safety. They exist in many healthcare organisations and are a governance resource that is generally underutilised.
They can improve the accountability of mortality data and support quality improvement without compromising professional learning, especially when facilitated by a standardised mortality review process.44

The research team would like to thank all the hospital staff who contributed to the research, and members of the PSSQ team for their helpful comments on earlier drafts of the paper.

# References

abstract: **Trisha Greenhalgh and colleagues** argue that, although evidence based medicine has had many benefits, it has also had some negative unintended consequences. They offer a preliminary agenda for the movement's renaissance, refocusing on providing useable evidence that can be combined with context and professional expertise so that individual patients get optimal treatment
author: Trisha Greenhalgh; Jeremy Howick; Neal Maskrey
Correspondence to: T Greenhalgh
date: 2014
references:
title: Evidence based medicine: a movement in crisis?

It is more than 20 years since the evidence based medicine working group announced a "new paradigm" for teaching and practising clinical medicine.1 Tradition, anecdote, and theoretical reasoning from basic sciences would be replaced by evidence from high quality randomised controlled trials and observational studies, in combination with clinical expertise and the needs and wishes of patients.

Evidence based medicine quickly became an energetic intellectual community committed to making clinical practice more scientific and empirically grounded and thereby achieving safer, more consistent, and more cost effective care.2 Achievements included establishing the Cochrane Collaboration to collate and summarise evidence from clinical trials;3 setting methodological and publication standards for primary and secondary research;4 building national and international infrastructures for developing and updating clinical practice guidelines;5 developing resources and courses for teaching critical appraisal;6 and building the knowledge base for implementation and knowledge translation.7

From the outset, critics were concerned that the emphasis on experimental evidence could devalue basic sciences and the tacit knowledge that accumulates with clinical experience; they also questioned whether findings from average results in clinical studies could inform decisions about real patients, who seldom fit the textbook description of disease and differ from those included in research trials.8 But others argued that evidence based medicine, if practised knowledgably and compassionately, could accommodate basic scientific principles, the subtleties of clinical judgment, and the
patient's clinical and personal idiosyncrasies.1

Two decades of enthusiasm and funding have produced numerous successes for evidence based medicine. An early example was the British Thoracic Society's 1990 asthma guidelines, developed through consensus but based on a combination of randomised trials and observational studies.9 Subsequently, the use of personal care plans and stepwise prescription of inhaled steroids for asthma increased,10 and morbidity and mortality fell.11 More recently, uptake of the UK National Institute for Health and Care Excellence guidelines for prevention of venous thromboembolism after surgery has produced significant reductions in thromboembolic complications.12

Despite these and many other successes, wide variation in implementing evidence based practice remains a problem. For example, the incidence of arthroscopic washout of the knee joint, whose benefits are unproved except when there is a known loose body, varies from 3 to 48 per 100 000 in England.13 More fundamentally, many who support evidence based medicine in principle have argued that the movement is now facing a serious crisis (box 1).14 15 Below we set out the problems and suggest some solutions.

# Box 1: Crisis in evidence based medicine?

- The evidence based "quality mark" has been misappropriated by vested interests

- The volume of evidence, especially clinical guidelines, has become unmanageable

- Statistically significant benefits may be marginal in clinical practice

- Inflexible rules and technology driven prompts may produce care that is management driven rather than patient centred

- Evidence based guidelines often map poorly to complex multimorbidity

# Distortion of the evidence based brand

The first problem is that the evidence based "quality mark" has been misappropriated and distorted by vested interests. In particular, the drug and medical devices industries increasingly set the research agenda.
They define what counts as disease (for example, female sexual arousal disorder, treatable with sildenafil,16 and male baldness, treatable with finasteride17) and predisease "risk states" (such as low bone density, treatable with alendronate).18 They also decide which tests and treatments will be compared in empirical studies and choose (often surrogate) outcome measures for establishing "efficacy."19

Furthermore, by overpowering trials to ensure that small differences will be statistically significant, setting inclusion criteria to select those most likely to respond to treatment, manipulating the dose of both intervention and control drugs, using surrogate endpoints, and selectively publishing positive studies, industry may manage to publish its outputs as "unbiased" studies in leading peer reviewed journals.20 Use of these kinds of tactic in studies of psychiatric drugs sponsored by their respective manufacturers enabled them to show that drug A outperformed drug B, which outperformed drug C, which in turn outperformed drug A.21 One review of industry sponsored trials of antidepressants showed that 37 of 38 with positive findings, but only 14 of 36 with negative findings, were published.22

Evidence based medicine's quality checklists and risk of bias tools may be unable to detect the increasingly subtle biases in industry sponsored studies.23 Some so called evidence based policies (such as dementia case finding for the over 75s and universal health checks for the over 40s in the UK) seem to be based largely on political conviction.24 25 Critics have condemned the role of the drug industry in influencing the policy makers who introduced them.26

# Too much evidence

The second aspect of evidence based medicine's crisis (and yet, ironically, also a measure of its success) is the sheer volume of evidence available. In particular, the number of clinical guidelines is now both unmanageable and unfathomable. One 2005 audit of a 24 hour medical take in an acute hospital, for example, included 18 patients with 44 diagnoses and identified 3679 pages of national guidelines (an estimated 122 hours of reading) relevant to their immediate care.27

# Marginal gains and a shift from disease to risk

Evidence based medicine is, increasingly, a science of marginal gains, since the low hanging fruit (interventions that promise big improvements) for many conditions were picked long ago. After the early big gains of highly active antiretroviral therapy for HIV28 and triple therapy for *Helicobacter pylori* positive peptic ulcer,29 contemporary research questions focus on the marginal gains of whether these drug combinations should be given in series or in parallel and how to increase the proportion of patients who take their complex medication regimen as directed.30 31

Large trials designed to achieve marginal gains in a near saturated therapeutic field typically overestimate potential benefits (because trial samples are unrepresentative and, if the trial is overpowered, effects may be statistically but not clinically significant) and underestimate harms (because adverse events tend to be underdetected or underreported). The 74 year old who is put on a high dose statin because the clinician applies a fragment of a guideline uncritically and who, as a result, develops muscle pains that interfere with her hobbies and ability to exercise, is a good example of the evidence based tail wagging the clinical dog. In such scenarios, the focus of clinical care shifts insidiously from the patient (this 74 year old woman) to the population subgroup (women aged 70 to 75) and from ends (what is the goal of investigation or treatment in this patient?) to means (how can we ensure that everyone in a defined denominator population is taking statins?).
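The gap between statistical and clinical significance in an overpowered trial is easy to demonstrate numerically. The following is a minimal sketch in R; the numbers are illustrative assumptions, not data from any trial cited here.

```r
# Illustration with hypothetical numbers: a 1 mmHg reduction in systolic
# blood pressure is clinically trivial, yet an overpowered trial makes it
# look impressive on the p value alone.
set.seed(1)
control   <- rnorm(20000, mean = 140, sd = 15)   # usual care
treatment <- rnorm(20000, mean = 139, sd = 15)   # 1 mmHg "benefit"
t.test(treatment, control)$p.value   # far below 0.001: "highly significant"

# Roughly 4700 patients per arm already suffice to detect this tiny
# difference with 90% power, so 20 000 per arm is heavily overpowered:
power.t.test(delta = 1, sd = 15, power = 0.9)$n
```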
In such scenarios, the focus of clinical care shifts insidiously from the patient (this 74 year old woman) to the population subgroup (women aged 70 to 75) and from ends (what is the goal of investigation or treatment in this patient?) to means (how can we ensure that everyone in a defined denominator population is taking statins?).\n\nAs the examples above show, evidence based medicine has drifted in recent years from investigating and managing established disease to detecting and intervening in non-diseases. Risk assessment using \"evidence based\" scores and algorithms (for heart disease, diabetes, cancer, and osteoporosis, for example) now occurs on an industrial scale, with scant attention to the opportunity costs or unintended human and financial consequences.26\n\n# Overemphasis on following algorithmic rules\n\nWell intentioned efforts to automate use of evidence through computerised decision support systems, structured templates, and point of care prompts can crowd out the local, individualised, and patient initiated elements of the clinical consultation.8 For example, when a clinician is following a template driven diabetes check-up, serious non-diabetes related symptoms that the patient mentions in passing may not be documented or acted on.32 Inexperienced clinicians may (partly through fear of litigation) engage mechanically and defensively with decision support technologies, stifling the development of a more nuanced clinical expertise that embraces accumulated practical experience, tolerance of uncertainty, and the ability to apply practical and ethical judgment in a unique case.33\n\nTemplates and point of care prompts also contribute to the creeping managerialism and politicisation of clinical practice.8 As Harrison and Checkland observe: \"As the language of EBM becomes ever more embedded in medical practice, and as bureaucratic rules become the accepted way to implement 'the best' evidence, its requirements for evidence are quietly attenuated in favour of an emphasis on rules.\"34\n\nFor example, the Quality and Outcomes Framework (QOF) in UK general practice is incentivised by financial \"quality points\" and administered largely by non-clinical staff who generate these points by recalling patients for structured reviews and checks. QOF has been associated with significant improvements in blood pressure control, especially in deprived populations.35 But its downside is an audit driven, technocratic exercise in which few patients are offered personalised shared decision making with a senior clinician before having the recommended tests and treatments, and in which clinical consultations are continually interrupted by pop-up point of care prompts.32 36\n\n# Poor fit for multimorbidity\n\nFinally, as the population ages and the prevalence of chronic degenerative diseases increases, the patient with a single condition that maps unproblematically to a single evidence based guideline is becoming a rarity. Even when primary studies were designed to include participants with multiple conditions, applying their findings to patients with particular comorbidities remains problematic.
Multimorbidity (a single condition only in name) affects every person differently and seems to defy efforts to produce or apply objective scores, metrics, interventions, or guidelines.37 Increasingly, the evidence based management of one disease or risk state may cause or exacerbate another\u2014most commonly through the perils of polypharmacy in the older patient.38\n\n# Return to real evidence based medicine\n\nTo address the above concerns, we believe it is time to launch a campaign for real evidence based medicine (box 2).\n\n## Box 2: What is real evidence based medicine and how do we achieve it?\n\n### Real evidence based medicine:\n\n- Makes the ethical care of the patient its top priority\n\n- Demands individualised evidence in a format that clinicians and patients can understand\n\n- Is characterised by expert judgment rather than mechanical rule following\n\n- Shares decisions with patients through meaningful conversations\n\n- Builds on a strong clinician-patient relationship and the human aspects of care\n\n- Applies these principles at community level for evidence based public health\n\n### Actions to deliver real evidence based medicine\n\n- Patients must demand better evidence, better presented, better explained, and applied in a more personalised way\n\n- Clinical training must go beyond searching and critical appraisal to hone expert judgment and shared decision making skills\n\n- Producers of evidence summaries, clinical guidelines, and decision support tools must take account of who will use them, for what purposes, and under what constraints\n\n- Publishers must demand that studies meet usability standards as well as methodological ones\n\n- Policy makers must resist the instrumental generation and use of \"evidence\" by vested interests\n\n- Independent funders must increasingly shape the production, synthesis, and dissemination of high quality clinical and public health evidence\n\n- The research agenda must become broader and more interdisciplinary, embracing the experience of illness, the psychology of evidence interpretation, the negotiation and sharing of evidence by clinicians and patients, and how to prevent harm from overdiagnosis\n\n## Individualised for the patient\n\nReal evidence based medicine has the care of individual patients as its top priority, asking, \"what is the best course of action for this patient, in these circumstances, at this point in their illness or condition?\"39 It consciously and reflexively refuses to let process (doing tests, prescribing medicines) dominate outcomes (the agreed goal of management in an individual case). It engages with an ethical and existential agenda (how should we live? when should we accept death?) and with that goal in mind, carefully distinguishes between whether to investigate, treat, or screen and how to do so.40\n\nTo support such an approach, evidence must be individualised for the patient. This requires that research findings be expressed in ways that most people will understand (such as the number needed to treat, number needed to harm, and number needed to screen41) and that practitioners, together with their patients, are free to make appropriate care decisions that may not match what \"best (average) evidence\" seems to suggest.\n\nImportantly, real shared decision making is not the same as taking the patient through a series of if-then decision options. 
Rather, it involves finding out what matters to the patient\u2014what is at stake for them\u2014and making judicious use of professional knowledge and status (to what extent, and in what ways, does this person want to be \"empowered\"?) and introducing research evidence in a way that informs a dialogue about what best to do, how, and why. This is a simple concept but by no means easy to deliver. Tools that contain quantitative estimates of risk and benefit are needed, but they must be designed to support conversations, not climb probability trees.\n\n## Judgment not rules\n\nReal evidence based medicine is not bound by rules. The Dreyfus brothers have described five levels of learning, beginning with the novice who learns the basic rules and applies them mechanically with no attention to context.42 The next two stages involve increasing depth of knowledge and sensitivity to context when applying rules. In the fourth and fifth stages, rule following gives way to expert judgments, characterised by rapid, intuitive reasoning informed by imagination, common sense, and judiciously selected research evidence and other rules.\n\nIn clinical diagnosis, for example, the novice clinician works methodically and slowly through a long and standardised history, exhaustive physical examination, and (often numerous) diagnostic tests.43 The expert, in contrast, makes a rapid initial differential diagnosis through intuition, then uses a more selective history, examination, and set of tests to rule in or rule out particular possibilities. To equate \"quality\" in clinical care with strict adherence to guidelines or protocols, however robust these rules may be, is to overlook the evidence on the more sophisticated process of advanced expertise.\n\n## Aligned with professional, relationship based care\n\nReal evidence based medicine builds (ideally) on a strong interpersonal relationship between patient and clinician. It values continuity of care and empathetic listening, especially for people who are seriously and incurably sick.44 Research evidence may still be key to making the right decision\u2014but it does not determine that decision. Clinicians may provide information, but they are also trained to make ethical and technical judgments, and they hold a socially recognised role to care, comfort, and bear witness to suffering.45 The challenges of self management in severe chronic illness, for example, are not merely about making treatment choices but about the practical and emotional work of implementing those choices.46 As serious illness is lived, evidence based guidelines may become irrelevant, absurd, or even harmful (most obviously, in terminal illness).\n\n## Public health dimension\n\nAlthough we have focused on individual clinical care, there is also an important evidence base relating to population level interventions aimed at improving public health (such as pricing and labelling of consumables, fluoridation of water, and sex education). These are often complex, multifaceted programmes with important ethical and practical dimensions, but the same principles apply as in clinical care. Success of interventions depends on local feasibility, acceptability, and fit with context\u2014and hence on informed, shared decision making with and by local communities, using summaries and visualisations of population level metrics.47\n\n# Delivering real evidence based medicine\n\nTo deliver real evidence based medicine, the movement's stakeholders must be proactive and persistent.
Patients (for whose care the movement exists) must demand better evidence, better presented, better explained, and applied in a more personalised way with sensitivity to context and individual goals.48 There are already some models of good practice here. In arthritis, for example, patient advocacy groups that emphasise the importance of experiential evidence and patient centred strategies have existed for over 30 years and have influenced the choice of outcome measures used in comparative effectiveness studies.49 Patient input has refocused several NICE guidelines (for example, on psoriasis).50\n\nThird sector advisory and advocacy groups such as the UK's Consumer Association ([www.which.co.uk](http:\/\/www.which.co.uk\/)), Picker Institute ([www.pickereurope.org](http:\/\/www.pickereurope.org\/)), and Sense About Science ([www.senseaboutscience.org](http:\/\/www.senseaboutscience.org\/)) have a crucial role in educating citizens and contributing to public debate about the use and abuse of evidence. The James Lind Alliance ([www.lindalliance.org](http:\/\/www.lindalliance.org\/)) brings patients, carers, and clinicians together to prioritise research questions. Such groups must remain, as far as possible, independent of vested interests and aware of the distorting influence of tied funding.\n\n## Training must be reoriented from rule following\n\nCritical appraisal skills\u2014including basic numeracy, electronic database searching, and the ability systematically to ask questions of a research study\u2014are prerequisites for competence in evidence based medicine.6 But clinicians need to be able to apply them to real case examples.51\n\nToo often, teaching resources use schematic, fictionalised vignettes in which the sick patient is reduced to narrative \"factoids\" that can populate a decision tree or a score sheet in an objective structured clinical examination. Rather than focus on these tidy textbook cases, once they have learnt some basic rules and gained some experience, students should be encouraged to try intuitive reasoning in the clinic and at the bedside, and then use formal evidence based methods to check, explain, and communicate diagnoses and decisions.43 They must also be taught how to share both evidence and uncertainty with patients using appropriate decision aids52 and adapt their approach to individual needs, circumstances, and preferences.39\n\nLikewise, there is a strong argument for extending the continuing medical education curriculum beyond \"evidence updates.\" Peer observation and review, reflective case discussion in small groups (with input from patients who want to articulate their experiences, choices, and priorities) and ongoing conversations with fellow professionals can help hone and maintain the ability to manage the challenges of applying evidence based medicine in the real world.53 The linking together of educational theory, cognitive psychology, information mastery, and implementation science into a coherent approach that supports front line decision making with patients54 is rarely taught in practice.\n\n## Evidence must be usable as well as robust\n\nAnother precondition for real evidence based medicine is that those who produce and summarise research evidence must attend more closely to the needs of those who might use it. 
Lengthy and expensive reviews that are \"methodologically robust\" but unusable in practice often fail to inform, inspire, or influence.55 A recent systematic review of diabetes risk scores revealed that the authors of most studies were primarily concerned with the intellectual concept of improving the predictive value of the score but had given little or no thought to how their score might be used, by whom, or for what\u2014nor what the implications would be for real people who would be designated \"at risk\" by the score.56\n\nEvidence users include clinicians and patients of varying statistical literacy, many of whom have limited time or inclination for the small print.41 Different approaches such as brief, plain language summaries for the non-expert (as offered by NICE), visualisations,57 infographics,52 option grids,58 and other decision aids59 should be routinely offered and widely used. Yet currently, only a fraction of the available evidence is presented in usable form, and few clinicians are aware that such usable shared decision aids exist.\n\n## Publishers must raise the bar\n\nThis raises an imperative for publishing standards. Just as journal editors shifted the expression of probability from potentially misleading P values to more meaningful confidence intervals by requiring them in publication standards,60 so they should now raise the bar for authors to improve the usability of evidence, and especially to require that research findings are presented in a way that informs individualised conversations.\n\nGiven that real evidence based medicine is as much about when to ignore or over-ride guidelines as how to follow them, those who write guidelines should flag up the need for judgment and informed, shared decision making. The American College of Cardiology recently published new cholesterol guidelines;61 *JAMA* followed with a pragmatic, patient focused article on how to apply this guideline and when to consider ignoring it, including an online visualisation tool to support conversations with patients.62 As the authors commented, \"the target for performance measures is not the percentage of patients who . . . are prescribed statins, but the proportion of eligible patients who participate in shared decision making about statin use.\"62 Their approach deserves to be emulated widely.\n\n## Research must transcend conflicts of interest\n\nTo support real evidence based medicine, and in particular to reassure policy makers, clinicians, and the public that research and the guidance derived from it can be trusted,63 the infrastructure for research and guideline development must show the highest standards of probity. Independent funding of national bodies for medical research is crucial.\n\n## Broader, more imaginative research is needed\n\nThe research agenda for real evidence based medicine is much broader than critical appraisal and draws on a wider range of underpinning disciplines. For example, it should include the study of the patient's experience of illness and the real life clinical encounter for different conditions and in different circumstances. 
The field would be enriched, for example, by qualitative research to elucidate the logic of care\u2014that is, the numerous elements of good illness management that are complementary to the application of research evidence.64\n\nWe need to gain a better understanding (perhaps beginning with a synthesis of the cognitive psychology literature) of how clinicians and patients find, interpret, and evaluate evidence from research studies, and how (and if) these processes feed into clinical communication, exploration of diagnostic options, and shared decision making.54 Deeper study is also needed into the less algorithmic components of clinical method such as intuition and heuristic reasoning, and how evidence may be incorporated into such reasoning.43\n\nIn relation to producing usable evidence, we need to identify how to balance gold standard systematic reviews with pragmatic, rapid reviews that gain in timeliness and accessibility what they lose in depth and detail.65 In the same vein, we need research on how and in what circumstances to trade detail for brevity in developing guidelines. We need to develop decision aids that support clinicians and patients to clarify the goals of care, raise and answer questions about the quality and completeness of evidence, and understand and contextualise estimates of benefit and harm. We also need to improve both the usefulness and ease of use of these and other evidence based tools (models, scores, algorithms, and so on) including the intellectual, social, and temporal demands they make on users and the resource implications for the healthcare organisation and system.\n\nIn the educational field, it is time we extended the evidence base for integrated curriculums that promote reflection and case discussion alongside the application of evidence.66 Discussions on how to interpret and apply evidence to real cases, and the sharing of collective knowledge and expertise in the form of \"mindlines\" among clinicians53 or within illness communities67 may provide useful data sources for such studies. It is by studying these more sophisticated forms of knowing that we are likely to determine how best to produce expert clinicians and expert patients, and to prevent the harms that arise from overdiagnosis, overtreatment, and overscreening.33\n\nIn relation to effectiveness, we need greater attention to postmarketing research in day to day hospital and primary care settings to confirm that subsequent experience replicates the results of licensing trials. This will allow gold standard tests and their cut-off points for ruling out diagnoses and treatments to be revised to minimise overdiagnosis or underdiagnosis.43\n\nFinally, in relation to the collective effort to prevent the misappropriation of the evidence based quality mark, a key research priority remains the study of hidden biases in sponsored research\u2014for example, by refining the statistical techniques for challenging findings that appear too good to be true.\n\n# Conclusion\n\nMuch progress has been made and lives have been saved through the systematic collation, synthesis, and application of high quality empirical evidence. However, evidence based medicine has not resolved the problems it set out to address (especially evidence biases and the hidden hand of vested interests), which have become subtler and harder to detect.
Furthermore, contemporary healthcare's complex economic, political, technological and commercial context has tended to steer the evidence based agenda towards populations, statistics, risk, and spurious certainty. Despite lip service to shared decision making, patients can be left confused and even tyrannised when their clinical management is inappropriately driven by algorithmic protocols, top-down directives and population targets.\n\nSuch problems have led some to argue for the rejection of evidence based medicine as a failed model. Instead we argue for a return to the movement's founding principles\u2014to individualise evidence and share decisions through meaningful conversations in the context of a humanistic and professional clinician-patient relationship (box 2). To deliver this agenda, evidence based medicine's many stakeholders\u2014patients, clinicians, educators, producers and publishers of evidence, policy makers, research funders, and researchers from a range of academic disciplines\u2014must work together. Many of the ideas in this paper are not new, and a number of cross sector campaigns with similar goals have already begun (box 3). We hope that our call for a campaign for real evidence based medicine will open up debate and invite readers to contribute (for example, by posting rapid responses on bmj.com).\n\n## Box 3: Campaigns aligned with real evidence based medicine\n\n1. *Too much medicine*\u2014A rapidly growing movement, led jointly by clinicians, academics and patients, aims to reduce harm from overdiagnosis, overscreening, and overtreatment.26 33 The second of what will hopefully be an annual \"preventing overdiagnosis\" conference will be held in Oxford in September 2014 ([www.preventingoverdiagnosis.net](http:\/\/www.preventingoverdiagnosis.net\/))\n\n2. *All trials* ([www.alltrials.net](http:\/\/www.alltrials.net\/))\u2014An international initiative to ensure that all clinical trials are registered at inception and no findings are withheld from publication\n\n3. *Reducing waste and increasing value in medical research* ([www.thelancet.com\/series\/research](http:\/\/www.thelancet.com\/series\/research))\u2014A recent *Lancet* series highlighting the waste and loss of value caused by research that addresses the wrong questions, uses inappropriate study designs, is weighed down by bureaucracy, or is so badly or inaccessibly reported that practitioners and policymakers simply cannot apply it\n\n4. *Improving publishing standards* ([www.icmje.org\/urm_main.html](http:\/\/www.icmje.org\/urm_main.html))\u2014A campaign by the International Committee of Medical Journal Editors to improve the quality and transparency of medical publishing by discouraging ghost-writing and raising the standards for declarations of conflicts of interest\n\n5. 
*Integrated medical education*\u2014A campaign to strengthen the integration of the different components of the curriculum by developing bedside clinical skills, understanding and applying research evidence, and reflecting and deliberating about complex cases68 69\n\nCite this as: *BMJ* 2014;348:g3725\n\nabstract: On April 25th 1953, three publications in *Nature* forever changed the face of the life sciences in reporting the structure of DNA. Sixty years later, Raymond Gosling shares his memories of the race to the double helix.\nauthor: Naomi Attar\ndate: 2013\ninstitute: 1Genome Biology, BioMed Central, 236 Gray's Inn Road, London WC1X 8HB, UK\nreferences:\ntitle: Raymond Gosling: the man who crystallized genes\n\n*It has not escaped our attention that the specific pairing we have postulated immediately suggests a possible copying mechanism for the genetic material*.\n\nJames D Watson and Francis HC Crick \[1\]\n\nBy including this statement in their April 25th 1953 *Nature* article describing a model for the structure of DNA, Watson and Crick made one of the great understatements in history. In that moment, the seeds for the double helix's infamy - alongside the names 'Watson' and 'Crick' - were sown. Lesser known outside scientific circles is that this article did not include one iota of experimental data: Watson and Crick, who were based at the University of Cambridge's Cavendish laboratory, contributed deductive reasoning alone to the double helix model, albeit reasoning of an undoubtedly Nobel Prize-worthy standard. Instead, as has now been described many times, the model relied on X-ray diffraction data obtained by others, at King's College, and these data did not reach Watson and Crick by entirely wholesome means \[2\]. To add to the insult, Watson and Crick's report of the double helix did not fully credit the work of King's as being essential to the construction of their model, although the King's team did enjoy co-publication of their data alongside the double helix article, in the form of two articles in the same issue of *Nature* \[3,4\]. One of these articles described the X-ray diffraction work performed by senior researcher Rosalind E Franklin, together with PhD student Raymond G Gosling, and contained the highest quality diffraction patterns yet achieved for DNA \[3\].
It was these data that had proved invaluable in Watson and Crick's quest for the double helix.\n\nEarlier still, before Franklin arrived at King's, Gosling had achieved a major breakthrough in the search for DNA's structure when he became the first person to crystallize genes, under the guidance of Maurice Wilkins, who was the lead author of the other King's article to accompany Watson and Crick's model \[4\].\n\nWatson published his controversial memoir of the discovery, aptly named 'The Double Helix', in the 1960s \[5\], and in doing so propelled the story to worldwide fame, establishing DNA's structure as an icon of science in the popular imagination. However, events were relayed in Watson's book very much from his own point of view and at times, it has been argued, even verged on the fictitious.\n\nAside from Watson, Ray Gosling is the only surviving member of the select group of seven scientists to feature as an author on one of the three *Nature* articles. Gosling and his wife, Mary, were kind enough to welcome *Genome Biology* into their home, where he shared with us his perspective of the events of 60 years ago.\n\nElsewhere, *Genome Biology* has marked the anniversary by canvassing our Editorial Board for their opinions on the most important advances in the field since 1953 \[6\].\n\n# The accidental biophysicist\n\nSomething curious happened in scientific research in the mid-20th Century. Biology had been the neglected sibling of chemistry and, especially, physics, which had until then monopolized the glitz and glamor of scientific inquiry. Perhaps it was the sense that many of the big questions in physics had now been tackled, or a philosophical shift brought about by the experience of the dark forces of war and fascism in the 1930s and 40s, or perhaps it was just a simple matter of improved methodology; whatever the cause, history shows that many physicists and chemists began to become excited by biology at this time, and to turn their attention to addressing biological questions. Nobel Prize-winning physicist Erwin Schr\u00f6dinger is often credited with kick-starting this trend in his book 'What is life?' \[7\], which is said to have inspired even Crick and Watson themselves in their quest for the double helix.\n\nSo was Prof (Emeritus) Raymond Gosling DSc FKC (Figure 1) similarly inspired when he opted to pursue a PhD in biophysics, in work that would culminate in establishing the molecular structure of DNA? Not a bit of it! Not only was Gosling unaware of Schr\u00f6dinger's work until \"much later\", he had not in fact originally wanted to become a scientist at all: \"I wanted to do Medicine, but Father said we couldn't afford Medicine because it would take *x* years to qualify and so on. And so the next best thing to doing medicine I thought was to do a fundamental subject like physics. I was very attracted by the thought that the Scots always refer to physics as 'natural philosophy'.\"\n\nAnd so a career in physics beckoned. But, having graduated from University College London in 1949, Gosling found himself limited in his options. Edward Andrade was the head of the Physics Department, and if Gosling had stayed on at University College, he would have been bound by Andrade's interests in viscosity and this \"didn't appeal.\" Instead, he was tempted by developments across town at King's College, rivals-in-chief to his own *alma mater* (Box 1), where John Randall (Box 2) had recently taken the Wheatstone Chair of Physics.
Instead of being deterred by the rivalry, Gosling impishly thought to himself that it would be \"rather fun\" to join Randall's laboratory.\n\n# John Randall: the unsung hero of the double helix\n\nIf there's one message that Raymond Gosling would like you to take away from this article, it is that the role of John Randall in the pursuit of the double helix cannot be overstated, and that Randall has lamentably not been adequately credited in most tellings of the story. Gosling feels so strongly on this subject that he recently wrote to *The Times* to emphasize the point, prompted by an article that had recognized Rosalind Franklin's contribution but that had omitted Randall's.\n\nRandall's motto at the Medical Research Council (MRC) Biophysics Unit that he headed at King's College was 'to bring the logi of physics to the graphi of biology'. This interdisciplinary philosophy was at the time decidedly modern, and his switch from the questions of physics to those of biology was also, according to Gosling, \"ahead of the curve\". Randall's modernity extended to his respect for female scientists, many of whom he recruited at King's, as exemplified by his long-standing working relationship with Honor Bell. This facet of Randall's modern outlook was to prove very important with his later recruitment of Rosalind Franklin (Box 2).\n\n\"The people in the University and a few other establishments tended to make fun of Randall's approach, his claim that you would need to have different disciplines working together, and they called it 'Randall's Circus'. Now, that's what attracted me in the first place! I heard about this strange bald-headed little man with a Napoleonic complex who was running the circus in biophysics, and it sounded wonderful to me!\"\n\nSo was it the rebellion that Gosling was attracted to? \"Yeah, absolutely!\"\n\nRandall's most important legacy was his firm belief that DNA must be the agent of genetic inheritance, a concept not widely recognized at the time (despite good evidence to support it \\[8\\]), and his consequent determination to discover the structure of what he considered to be the genetic material. This was Randall's circus, and he was the ringmaster - and the ringmaster masterfully guided his troupe in its quest.\n\n# Maurice Wilkins stays awake\n\nAfter spending a year simultaneously studying zoology at Birkbeck and working as a medical physicist at Middlesex Hospital, prompted by Randall's insistence that he first learn some biology, Gosling joined the MRC Unit as a PhD student. At the outset, he worked alongside Maurice Wilkins (Box 2), another physicist-cum-biologist, and a veteran of the Manhattan Project racked with feelings of guilt. Wilkins and Gosling used ram's sperm as a source of DNA, following Randall's idea that \"ram's sperm have very flat heads - unlike human sperm, which is like a rugby football\", the benefit of which was that \"the long chains must lie in the plane of a ram's sperm head.\" But Gosling's attempts to obtain X-ray diffraction patterns from these sperm did not meet with much success.\n\nIt would take a series of what Gosling describes as \"serendipitous\" events to bring about a change in their fortunes (\"most of my life has been beset or encouraged by serendipitous acts,\" he says). There happened to be one man who could produce a sample of DNA of far superior quality to that produced by any other laboratory, and that man was Swiss biochemist Rudolf Signer. 
By one or two strokes of luck, Gosling managed to obtain a fair quantity of this DNA: \"Signer gave a lecture at the Royal Society on this method that he'd developed to separate out the DNA from the nuclear protein and so produce high molecular weight pure DNA. Signer asked at the end of the lecture if anybody would like some of this material, and he had a specimen tube full of this freeze-dried material. Only two people put their hand up. I'm glad to say that Maurice was awake enough to put his hand up!\"\n\nWas that often not the case? \"It was often not the case! He rushed down to the front and got half of all there was, which turned out to be very necessary.\"\n\nInitially, Gosling only wanted to use Signer's DNA as a control to determine which of six methods he had devised for making ram's sperm lie flat was most successful. Wilkins had \"a steady hand enough to pull fibers of 5 to 10 \u03bcm. Maurice pulled by wrapping them round a paperclip and then sort of - very scientific! - pushing it open to make them taut. I managed to get him to produce at least 35, so this is a 35 fiber specimen. And little blobs of LePage's cement, priced 6d in Woolworths down The Strand, pulling the fibers together. Very scientific!\"\n\nThe act of pulling the fiber orientated the molecules along the fiber axis, akin to the orientation that flattening ram's sperm was aiming for. Gosling took the fibers from the Signer sample to the basement of the Chemistry Department, which housed a Raymax X-ray tube. \"The first thing I produced was even fuzzier than my ram's sperm! Randall was most amused, and he was delighted to be able to point out that I'd missed a trick because the material that was in my DNA was largely carbon, nitrogen and oxygen, which was just the same as the atoms in the air inside the camera.\" The result was a diffuse back-scattering of X-rays, which fogged the film, and so Gosling was instructed to displace all the air with hydrogen. And this is where the next piece of serendipity steps in.\n\n# Serendipity, my dear Watson!\n\n\"This Raymax tube was already in a frightening place, it was three floors below the ground level, which was the level of the Thames, about 50 yards away, because it was the basement of the Chemistry Department. And it was lead-lined so that the X-rays should be shut in because there were various lecture theatres nearby. So having realized that I needed to keep a watchful eye on the amount of hydrogen I was filling the room up with - so that I wasn't going to blow myself up and repeat the Hindenburg Disaster - I bubbled the hydrogen through water to help judge when the camera had been swept clean of air. It just so happened that this produced enough water vapor in the camera to be taken up by the fibers and produce crystallites. It turns out that freeze-dried DNA from Signer's preparation would form micro-crystallites in a humidity of 92%, and that was by serendipity alone that I just hit that value.\"\n\nGosling is in no doubt that this was \"the most exciting thing that's happened to me before or since!\" He can still remember the moment clearly: \"standing in the dark room outside this lead-lined room, and looking at the developer, and up through the developer tank swam this beautiful spotted photograph, you are familiar with them now I'm sure. It took 90-something hours to take the photograph, again, pot luck. But it really was the most wonderful thing.
And I knew at the time that what I'd just done was to produce a crystalline state in these fibers, and if then the DNA was the gene material, I must be the first person ever to make genes crystallize.\"\n\nDid Gosling realize at that moment that it would from then onwards just be a matter of time, that the structure of DNA was now in his grasp? \"Yes. Yes, that was why I could truly say it was my 'Eureka!' moment. I went back down the tunnels over to the Physics Department, where Wilkins used to spend his life, so he was still there. Wilkins realized even more certainly than I did that we had just crystallized genes. As with Randall, he was convinced that the DNA was the genetic material, and now he was convinced that the DNA could be made to crystallize. I can still remember vividly the excitement of showing this thing to Wilkins and drinking his sherry by the glass... by the gulpful.\"\n\nInterestingly, Gosling's account diverges somewhat from that given by Wilkins in his Nobel lecture (Box 3). Another - understandable - omission from Wilkins' lecture is a somewhat unorthodox, and nevertheless essential, contribution he made to Gosling's first X-ray images of crystalline DNA (Figure 2). Gosling had \"sealed the conventional camera onto its base and the lid and so forth with vacuum wax and stuff that you used in order to keep air out. This was to keep the hydrogen in, of course. The collimator was made of heavy brass and although I could seal it to the outside of the camera, there was no way I could think of to really prevent the gas coming out of the collimator tube. To my great surprise, when I was showing him how far I had got, this rather shy Assistant Director of the MRC Unit said: 'Try this.' And he pulled out from his pocket a packet of Durex.\" By a quirky twist of irony, the introduction of condoms to the story occurs after Gosling had ceased working on sperm, having switched to Signer's DNA, which was derived from calf thymus.\n\nDoes the 'Assistant Director' refer to Maurice Wilkins? \"Yes. But he was painfully shy.\"\n\nBut he was such a dedicated scientist that he was willing to risk the embarrassment... \"To risk everything, yes, that's right! It was extraordinary, really.\"\n\nThe condom did the trick, and Gosling was able to produce some high quality X-ray images. This is where Jim Watson first enters the picture: \"there was a conference in Naples that was on the structure of biologically active molecules. Randall was invited to talk and said that he couldn't, so he sent Wilkins, and Wilkins showed our beautiful picture and said: 'Look boys!' Or rather more... I mean his lectures were as dry as dust, so in a dusty sort of way, he made it clear that they had crystallized the genetic material. Now, Watson was in the audience, and I was told by somebody else who was there that Watson up until then had been doing his usual trick of pretending to read the newspaper while everybody gave of their best results and so forth. And he actually... when the picture came up on the screen in Wilkins' lecture, he actually put his paper down. And so he was convinced then and there, that if the material could be crystallized, then the structure could be found and it was just a short step from one to the other.\"\n\nWatson, having seen the fruits of Gosling's serendipity, asked Wilkins if he could join the MRC Unit, but was refused \"because Wilkins was afraid of him.
He's quite scary, old Jim, on full flight.\" Instead, he approached Bragg (Box 2) at Cambridge's Cavendish laboratory, where - by another stroke of serendipity (Box 4) - he ended up allocated a desk next to Francis Crick. And the rest, as they say, is history...\n\n# Rosalind Franklin: a friction engineered by Randall?\n\nMeanwhile, Randall was not convinced that Wilkins and Gosling would ever learn enough crystallography \"to be able to solve this spotty diagram.\" For this reason, Rosalind Franklin was recruited to the project; Gosling believes that her appointment is another key pillar of Randall's contribution to the double helix story, as her experimental talent proved invaluable.\n\nFamously, Franklin and Wilkins enjoyed - or, rather, did not enjoy - a very fractious relationship. Gosling sees it as a \"pure personality clash\", as well as a very unfortunate misunderstanding, which only came to light many years later. \"The whole trouble was that there was a meeting in Randall's office, where Rosalind turned up and Alex Stokes and myself were invited along to meet her, and Wilkins was off somewhere else. In his autobiography, I think he says that he was away in Wales with a new girlfriend. But that was the key to what followed, he wasn't there.\" At this meeting, Gosling was assigned to work under Franklin, having previously been working with Wilkins.\n\n\"It was a very curious thing. Randall actually wrote to Rosalind saying that she would be asked to direct the X-ray crystallographic work on the Signer DNA material, and I didn't know that he'd done that.\" As with Gosling, Wilkins himself was not made aware of this letter until many years later.\n\nGosling believes that the misunderstanding was \"deliberate\" on Randall's part, rather than an unfortunate oversight. When he eventually discovered the truth of the matter, Gosling was \"really shocked\" because \"it was against all Randall's principles as I understood them. Up until then, everybody would freely discuss their work and interact with Wilkins probably more often than interacting with Randall directly.\"\n\nPerhaps most peculiar in the whole episode is that Randall and Wilkins had known each other for many years, and had been close colleagues at several institutions. So what was Randall's motivation? \"He definitely subscribed to the divide and rule principle, as lots of people did. He thought it would make them competitive and improve their work.\"\n\nAnother school of thought might be that Randall did not think Wilkins up to the job of the matter at hand, but did not want to confront him directly about these concerns. However, when this theory is put to him, Gosling is unsure. But what is for certain is that Franklin had very much been told by Randall - \"this very dynamic Head of Department\" - that she was a post-graduate fellow and it was her research, not Wilkins'.\n\n\"Wilkins came back to the lab after a few days and, you know, said, 'How are you?' and 'What are you doing' sort of thing, and got her back up. It was very unfortunate.\" Gosling, in the dark as much as everyone else, joined others in attributing this response to a Bolshie streak in Franklin's character, and also had sympathy for Wilkins, having previously worked closely with him. Now that he was Franklin's student, Gosling was very much caught in the middle: \"It was terrible, terrible. 
I spent my life going from one to the other, giving messages, trying to play the peacemaker.\"\n\nEventually, the dispute between Wilkins and Franklin resulted in Randall suggesting to Franklin that she had better leave, even though the DNA work was not yet complete - although he was good enough to \"fix her up with his old pal \\[JD\\] Bernal\" at Birkbeck.\n\nLater on, Wilkins was \"beset by worry that he had been responsible for not integrating Rosalind into the group\" and it very much lingered with him, to the extent that, for the rest of his life, he would frequently ask Gosling whether he had been unkind to her.\n\n# The wrong model\n\nAll Hollywood script writers worth their salt know that a successful plot is built around a boy meeting a girl, losing the girl and then winning the girl back. Change a few nouns, and you pretty much have the story of the double helix: Crick and Watson 'discover' the structure, lose the 'discovery' when it turns out to be wrong, and then win back the discovery by coming up with a better model. Where they erred was in rushing to triumphalism with the first, incorrect, model: \"we suddenly received a call, in '51 I think it was, from Crick - from Maurice. Crick had got in touch with Maurice to say that he hoped he didn't mind, but they had built - him and Jim had built - a model of DNA as a double helix, following the results that we had deduced in structure 'B'. And would we like to go to Cambridge to see it?\"\n\nAccording to Gosling, it was clear to him then that the King's data, such as it was at the time, had fed into Watson and Crick's model. Nevertheless, Gosling, Franklin and Wilkins, together with their King's colleagues Bill Seeds and Geoffrey Brown, took the Liverpool Street train \"with a heavy heart\" to Cambridge. But, upon arrival, it was immediately apparent that Watson and Crick had made some elementary mistakes, in both senses of the word.\n\n\"We arrived in the lab to be shown the model and to the absolute relief of Rosalind and myself - I don't know about Wilkins, what he thought at the time, because I was dealing with my own thoughts and not observing other people - the boys had built a model with the phosphate linkages going up the middle of the thing, which gave it, of course, rigidity, and so you could hang all the nucleotides and things off the ends of the ionic chain. That must be wrong, because we knew that the water went into that phosphate-oxygen group, and there was an ionic linkage there between the sodium - it was the sodium salt of DNA - and the phosphate group, and you got eight molecules of water going in, quite a lot of water that would go in and come out very easily, as we had shown. So it meant that whatever the structure was, those phosphate groups had to be on the outside. And so we were delighted, and Bragg was embarrassed because it wasn't done to actually work on another man's problem.\"\n\nWatson and Crick's recklessness, in playing their cards so early, was pounced upon by Franklin, who \"tore the model apart point by point.\" As Gosling notes with some amusement, Crick was later to comment that Franklin's demolition of the model was the only time he ever saw Jim Watson at a loss for words. \"And I can believe it!\"\n\nSo did Franklin deliver her criticism with obvious relish, or did she play it straight? \"Oh, no, with obvious relish! 
She reminded me very much of a particular lady in the University of St Andrews Physics Department that I worked in when I left King's, in which she'd turn up at seminars by new PhD students or the like and she would tear their suggestions apart. 'You're wrong, and you're wrong for the following reasons, one, two, three, four...'\"\n\nGosling had, like Franklin, realized instantly that the model was wrong but did not join her in skewering its inadequacies. Why not? \"I left it to her. I didn't need to discuss it at all, I mean she was...she was on top of her form! My word, no.\"\n\nPerhaps the famously negative portrayal of Franklin in Jim Watson's book 'The Double Helix' was payback for this moment? \"Yes. Oh, I'd never thought of it, but yes, that's true. The humiliation. He must have felt - that's the word - he must have felt humiliated. Who the hell is this woman telling me... Yes, you can see it more clearly looking back, can't you?\"\n\nRather than focus on Jim Watson's humiliation, however, Gosling was at the time \"just happy that it meant that Rosalind and I could go back to the Strand and just get on with doing the mathematics. And it was ours to take as long as we liked.\"\n\nWatson and Crick's misplaced haste really did seem to have handed the game to King's. When Wilkins reported what had happened to Randall, he was \"furious and stormed off to Cambridge to see Sir Lawrence, and Lawrence was apologetic and actually forbade the lads from doing any more work on DNA, and that it was a King's problem and that Crick had plenty to do on hemoglobin and that he should concentrate on getting his PhD.\"\n\n# The American competition\n\nWith the ban imposed upon Watson and Crick, DNA would have remained a King's problem, were it not for Caltech's Linus Pauling (Box 2) and, moreover, the inexplicable indiscretion of Pauling's son Peter. Not content with having discovered pretty much everything else going in chemistry and molecular biology, Pauling had turned his attention to the structure of DNA. A physicist by training, Pauling had got under the skin of atoms and molecules with great success to describe the nature of the chemical bond and the secondary structure of proteins, and much else besides. So he seemed like a good bet in the race to discover the structure of DNA.\n\nPauling's son Peter was at the time based in Cambridge, and somehow managed to leak the news of his father's interest in the question to Jim Watson. The prospect of losing to Pauling, who had already beaten him to the alpha-helix and beta-sheet, was too much for Bragg, and he allowed Watson and Crick to start work on DNA again.\n\nIt so happens that the first papers Pauling wrote on the subject actually contained the same mistake made by Watson and Crick, in that the phosphates were on the inside. Further, his proposed structure was a triple, rather than a double, helix \\[7,8\\]. It was very wide of the mark, but the threat of Pauling's intellectual prowess was not to be underestimated.\n\nDoes Gosling believe that Bragg's reaction was justified? After all, a draft manuscript written by Franklin shows that she had already drawn many correct conclusions about DNA's structure, including its double helical nature. Would Pauling have arrived at the model before King's with Cambridge out of the race? \"That's the problem. That's the \\$64,000 question I've been asked so many times. 'How long would it have taken you?' 
And I don't know.\"\n\n# King's loses the race\n\nFranklin's methodological approach was not that of someone in a race to the prize, but instead favored slow, steady progress. Franklin's skills as a chemist had borne fruit in determining that there were two crystalline forms of DNA, dependent on the humidity; the respective forms at lower and higher humidity were christened by Gosling and Franklin 'A' and 'B'. It was accepted that 'B' would be the *in vivo* form, due to its formation in humid conditions, and Franklin had taken an exceptionally high quality X-ray diffraction pattern of this form ('photo 51', see Figure 3), which proved invaluable to Watson and Crick. From the King's data, it was clear that 'B' was helical, but this could not be said with certainty for 'A' (Figure 4).\n\nFranklin had set herself the task of deducing the structure of 'A' from first principles, using Patterson functions, and she was having quite some success in doing so. \"We were the first people ever to do a cylindrical Patterson. But now nobody does it, because the computer would do it for you in a twink. And so you would probably go straight to an atomic density map, rather than vectors.\"\n\nBut why the focus on 'A', given that this was known not to be the biologically relevant form? \"Well, that's hindsight, isn't it. Her answer would have been, 'we've got over 100 diffraction spots, so we can do all the mathematics of the Patterson function.'\" Such data were not available for 'B', and Franklin's preference for sticking to first principles, and aversion to playing about with models and \"guesswork\", was the deciding factor.\n\nAnother obstacle for Gosling was his ignorance of space groups. \"Alex Stokes (Box 2) had taught me enough crystallography that I had been able, before Rosalind was even appointed, to index the spotty picture that I had produced, and find the unit cell and therefore the symmetry that the molecule must exhibit. I got the space group right, I got all the major indices right. But what I didn't realize was that the C2 space group meant that there must be a dyad axis perpendicular to the fiber axis. The density values, which determine how many strings of the molecule are per unit cell, could in our case be two or four, and we were getting an answer for the density, which was a bit difficult to measure, between two and three. Now Crick realized immediately from my unit cell data that there must be a dyad axis and therefore, if this double diamond was showing a helical pattern, it meant there was a double helix.\"\n\nThe reason Crick knew all about this space group was that he was studying, under Bragg, the structure of hemoglobin, which just so happens to share the same space group. So another bit of serendipity? \"Absolutely.\"\n\nThe slow pace at King's was no match for the fury of Watson and Crick's efforts, and so it came to pass that Gosling, Franklin and company were called up to Cambridge a second time...\n\n# That 'eureka!' moment\n\n\"We went up, saw the structure, we came back to King's and looked at our Pattersons, and every section of our Pattersons we looked at screamed at you, 'double helix!' And it was just there! - once you knew what to look for. It was amazing.\"\n\nWhen Gosling saw this second structure - the double helix we are now so familiar with - for the first time, was it as obvious to him that this model was right as it had been that the first model was wrong? \"Absolutely. Absolutely, because it was so elegant.
And it also explained the diffraction pattern as being such a clear double helix because the phosphate groups were on the outside, the sodium then was ionically bonded to the phosphorus, and the eight molecules of water went into the same group. So you got this enormous scattering power, due to the electronic number of oxygen, nitrogen and carbon.\"\n\nIt wasn't just the phosphates on the outside that made the model so convincing, but also the \"stuff in the middle.\" For one thing, it made a lot of sense that the identities of the nucleotides could be changed without impacting on the overall structure. For another, the nucleotides' \"stair rods looked identical to the X-rays because of the way they were bonded.\" More importantly, Watson and Crick took Chargaff's (Box 2) finding that \"no matter which DNA he studied, there was always a 1:1 ratio of adenine to thymine, and guanine to cytosine, and that meant that these were identical\" and incorporated it into the model in the form of complementary base pairing. With this seemingly simple rule, suddenly \"the solution to DNA's reproductivity was so simple.\"\n\nChargaff's ratio was already in the public domain, before Watson and Crick's model was unveiled, but \"no one had made the connection\" that this meant the DNA structure would rely on complementary base pairing, nor that this \"was necessary to explain how the DNA replicated.\"\n\nSo, given that the double helix and all its features were already present in Franklin and Gosling's data, but had somehow been opaque to them, perhaps it was a good thing that Watson and Crick worked on their model, even if by surreptitious means, and the injustice really was more the lack of attribution? Gosling agrees. \"I think so. We would have, I think, got there eventually. But Crick himself has said, and Wilkins has agreed with him, that if they hadn't built the model, we would have got there at King's, but not in one fell swoop, that it would have come out in dribs and drabs about the various aspects.\"\n\nAnd, as a consequence, history would have been deprived of a singularly exciting moment? \"That's right. That 'eureka!' moment, as Jim himself admits to, when you know that that specific pairing is there, and it's in the literature! And then, I can quite sympathize with him in a way, such a very strange feeling. Because I think it was Bernard Shaw who said that very few people are lucky enough to have an original thought in their whole lives. But there I was presented with mine, the crystallization of DNA, as he was, on a plate. Dong! Light bulb!\"\n\nGosling readily offers his admiration for Crick and Watson's achievement: \"they'd not only put together a model showing that the DNA was in the form of a double helix, but that it had this dyad axis and that this meant that these stair rods of the nucleotides had to be specifically paired - now that was worth a Nobel Prize.\"\n\nSimilarly, Franklin reacted to this second model with a level of grace to equal the schadenfreude with which she had destroyed the first. \"If you look at the BBC 'Secret of Life' documentary, which is absolutely brilliant - it really is a wonderful thing - they have a shot there, which is of course made up, but it has Rosalind Franklin by herself looking at the model in Crick's lab. And Bragg comes in and says something like, 'Do you have any regrets...' - or something like that - '...Miss Franklin.' And Miss Franklin apparently said, 'No, we all stand on each other's shoulders.' And that stuck in my mind.
Whether she said it or not, I don't know. But that was her attitude that she took when she and I were discussing it.\"\n\nTo her credit, Franklin \"absolutely\" put science before ego. However, at the time, she was not aware of the extent to which Watson and Crick had based their model upon her data. Does Gosling believe she came to know about this before her death in 1958? \"Yes. Oh, she did know about that.\"\n\n# Obscurity and infamy\n\nPerhaps surprisingly, given the iconic nature of the double helix today, the 1953 discovery did not initially have much impact, as Gosling saw it. An illustration of this might be the difficulty he had finding work following his PhD - there were no opportunities for him to stay at King's, which he would have gladly done.\n\n\"I talked about DNA for about a year after I wrote my PhD because I was convinced that it would only take a year... that the structure of DNA now being known, the ability to control carcinogenic activity within any tissue you like would be available in two or three years' time. And when two or three years came and went, and nobody had shown that you could... 'Alright, so a double helix, so what?'\"\n\nDue to Franklin's enforced move to Birkbeck, Gosling ended up writing his thesis away from King's, under her guidance. This annoyed Randall somewhat, as he was \"of the old school, who... in those days, nobody but the Professor had PhD research students.\" Randall's response to the situation was to appoint himself as Gosling's internal *viva* (thesis defense) examiner, and he roped in Bernal as the external examiner. Oddly, when you consider that the content of his thesis amounted to a not insignificant contribution to the discovery of DNA's structure, the prospect of his *viva* with Randall \"scared the pants off\" Gosling. In part, he explains, this was because his studentship had begun by \"trying to establish what the problem was\", due to the \"amorphous\" state of biophysics, rather than starting off with a \"ready-made problem\", as his friends in Ingold's Chemistry Department at University College had done.\n\nGiven that his studentship had been under Randall's direction, the idea that Randall would then find fault with Gosling's rationale in the *viva* is hard to resolve, outside of a Kafka novel perhaps. \"Yes! Very Kafkaesque, yes.\"\n\nBut, still, he surely cannot have been too afraid of defending a thesis describing such successful work? \"Well, I can tell you, and I'll tell anybody, that the big difference between writing a paper - especially if you have co-authors - is you can always, when the discussion gets too heated, say you didn't write that bit, it was him! But with a thesis, you're aware that as you are writing each word, it can be attributed to you. And that's an odd feeling, especially when you know who your examiners are.\"\n\nGosling's fear did not end with the anticipation of his *viva*, but progressed into the examination itself. \"Randall and Bernal had obviously had a very good lunch. I went in there at 2 o'clock, and at half past five, they were still going on talking about the origin of life. I was hardly asked a question. The two of them were at it hammer-and-tongs. I was terrified. I was just terrified, because I was convinced that Bernal was a genius - one of only three I have met in my life, together with Crick and Haldane (Box 5).\"\n\nHaving become disillusioned about the failure of the double helix to make much of an impression, Gosling lost interest and drifted away from the field. 
He had in any case wanted to move closer to his original dream, medicine, and did indeed go on to lead a successful career as a medical physicist (Box 6).\n\nBut an unexpected development suddenly cast the double helix in the spotlight, and spawned a legend that solidified its place in popular culture. Jim Watson wrote a book. In Gosling's eyes, \"that's what did it\" for the double helix. And that's where Gosling's praise for Watson's book - 'The Double Helix' \\[5\\] - ends.\n\n\"That book is a novel. A very successful novel, but it is a novel. Wilkins and Crick wrote to Harvard saying they should not publish this book, and they didn't.\" But someone else did.\n\nAnother unhappy reader of 'The Double Helix' was Max Perutz. While Watson and Crick obtained some of the King's data through Crick's friendship with Maurice Wilkins (\"Innocently. Because he'd always discussed his work, as Crick had with him, from their undergraduate days.\"), other data came from a progress report submitted to the MRC, which Watson had obtained from Perutz.\n\n\"I actually had a letter that I can no longer find from Perutz saying that he wanted me to be assured that he did not rush down the corridor waving the MRC report containing our contribution about the size and shape of the molecule. It's true they had actually got it from him. But the way Watson tells it, it's sort of Archimedes getting out of the bathtub again, you know.\"\n\nKey for Crick and Wilkins was their dismay at the portrayal of Rosalind Franklin, which amounted to what you might describe as an *ad hominem* character assassination, and a wildly inaccurate one at that. Worse still, Franklin had died of ovarian cancer several years earlier, and so was not able to defend herself. Was Gosling also angry about how she had been portrayed? \"Yes. Very much so.\"\n\nBut he is gratified that many people rose to Rosalind's defense (\"her part in it now is if anything overplayed!\"), and in particular that Brenda Maddox's \"first class\" biography of Franklin was able to set the record straight \\[2\\]. Unwittingly, Watson is responsible for the widespread admiration with which Franklin is viewed today, which is to a large degree a reaction to his book.\n\nHow did Gosling feel about his own portrayal in 'The Double Helix'? Was he concerned that his role had been underplayed? \"No, not really. But I felt that Watson was so busy criticizing poor Rosalind that he didn't mention and give credit to the work of Alex Stokes and Wilkins and myself. And Randall.\"\n\nDespite his strong reservations about 'The Double Helix', Gosling acknowledges that Watson has much to be complimented for. \"He is very precocious, he is one year younger than me, which annoys me - ha! - and he is without doubt a very lively mind and his ability to spot that specific pairing was worth the Nobel Prize.\"\n\n# Reconnecting with DNA\n\nAs mentioned previously, Gosling's subsequent career deviated from biology (Box 6), and he didn't follow the progress of molecular biology during this time (\"not at all!\"). This scientific realignment proved problematic on one occasion during his tenure at the University of the West Indies, when he was asked to deliver a lecture to the Trinidad campus on what his discovery was leading to. \"I hadn't the foggiest! So I had to bone up, and that was incredible.\"\n\nAsked jocularly whether he contacted Francis Crick for assistance, Gosling replies that, actually, he did! \"Well, yeah, I did have a conversation with Francis. And it was... 
the whole thing was fascinating, I mean to look into this new world.\"\n\nWhen Gosling hears of developments that have come from his work at King's, does he feel connected to it? \"No, I feel detached, I really do - it's gone way ahead of where I was. But I realize that I don't go to the literature enough to say that I have kept up.\"\n\nThe 50th anniversary of the double helix reignited the world's interest in Ray Gosling, and suddenly he was invited to events alongside Nobel laureates and other giants of the molecular biology world - most of whom he had never met before, nor even been aware of their work. Gosling was particularly taken with Alec Jeffreys (\"a very nice chap\") and Paul Nurse (\"a very witty fellow indeed\"), and was \"very flattered\" when Nurse invited him to contribute a brief memoir of the double helix to a time capsule for The Francis Crick Institute.\n\nHave any DNA-based developments caused him concern, in the same vein as the conflicted feelings felt by Wilkins and some other alumni of the Manhattan Project? \"No. But I do very much feel that I can't stress enough the critical stage we're at. Up until now evolution has occurred spontaneously, set in motion in all sorts of ways. And what is happening now is that our species are the first ever to have their hands on the levers controlling evolution. If you are a pessimist, you would say the glass is half empty, and you would think it's rather a sinister situation. So you have to take an optimistic point of view, at least I think you do.\"\n\nWork on transgenic animals has made a strong impression on Gosling (he highlights the example of so-called spidergoats), as has the notion of an embryo with three genetic parents, in which maternal chromosomes and mitochondrial DNA originate from different individuals.\n\nReturning to his favorite theme of Randall, Gosling recalls how in later years the ringmaster of the double helix discovery focused his work on cilia. \"Randall again was ahead of his time, wasn't he? In concentrating on the cilia. Isn't that fascinating? How long's the old boy been dead? '84, gosh. He would have been so pleased, wouldn't he...\"\n\n# Box 1. 'The best rag I ever came across'\n\nTo illustrate the level of treachery that his move from University College to King's College might have been perceived as, Gosling relates a practical joke that took place in the 1930s: \"The engineers at University College got into the front reception hall in King's and cut a hole in the belly of Reggie the Lion, who's their mascot, and filled it with rotten vegetables that they'd got from Covent Garden, and sealed it up and painted it and put it back where it was, which was on a special plinth above the entrance doors. And it was weeks while the beadles looked for the source of the smell, and that was I think the best rag that I ever came across.\"\n\n# Box 2. Watson and Crick's key accomplices - willing and unwitting - in the quest for the double helix\n\n## John Randall\n\nA physicist by background, Randall was \"ahead of the curve\" in setting his sights on biological questions. He established the MRC Biophysics Unit at King's, where his modern approach translated to interdisciplinary research with an unusually large, for the time, quotient of female scientists. Randall was \"autocratic\" and had a \"Napoleonic complex\", and the Unit was very much his \"circus\". 
But Gosling has immense admiration for Randall, and believes that his drive and vision were the magic ingredient that led to the double helix discovery, rendering him the great \"unsung hero\" of the story.\n\n## Maurice Wilkins\n\nA long-standing colleague of Randall at a number of institutions, Wilkins became his right-hand man once recruited to the Unit, where he served as Assistant Director. A dedicated scientist, the \"painfully shy\" Wilkins guided Gosling's early work at King's. Wilkins had first attempted to study DNA using ultraviolet light techniques, but without much success. One of the two King's articles co-published with Watson and Crick's work in *Nature*'s April 25th 1953 edition described Wilkins' work, together with Alex Stokes and Herbert Wilson, on the structure of DNA. This publication focused on how to interpret X-ray diffraction patterns and on showing that DNA from various species adopts the same structure.\n\nFor his contributions to the discovery of the double helix, Wilkins shared the Nobel Prize in Physiology or Medicine for 1962 with Watson and Crick (Box 3).\n\n## Rosalind Franklin\n\nRosalind Franklin was recruited to King's by Randall, and Gosling worked under her there; she was key to obtaining the quality of X-ray diffraction pattern necessary to determine the structure of DNA. She had come to Randall's attention for her work on the properties of coal as part of the war effort. At the time, it was \"very unusual\" for a woman to have a senior research role at King's, although not unusual in Randall's laboratory (see text), and many outside Randall's circle were taken aback by it.\n\nFranklin was \"a very good experimental scientist, as you had to be if you were a woman in those days.\" She fell out with Maurice Wilkins from the beginning of her time at King's, largely due to a misunderstanding engineered by Randall (see text), and then earned Jim Watson's enmity when she proved a caustic foe in the race for the double helix. Franklin viewed her study of the tobacco mosaic virus, performed at Birkbeck under Bernal after her DNA research, as her life's greatest work.\n\nFranklin, like Gosling, was a Londoner, although Franklin hailed from the affluent neighborhood of Notting Hill, as the daughter of a merchant banker, whereas Gosling had more modest origins in the suburbs.\n\nFranklin died of ovarian cancer in 1958 at the age of 37, nearly five years to the day after the publication of the double helix. It is thought that Franklin's work with X-rays might have contributed to her death. Several years later, Jim Watson included a very negative account of Franklin in his book, 'The Double Helix' \\[5\\], but was met with much opprobrium for doing so. A more recent biography by Brenda Maddox has done much to raise the profile of Franklin and to emphasize the importance of her contribution to the double helix discovery \\[2\\].\n\nAs Nobel Prizes are not awarded posthumously, Franklin was not eligible to be included in the 1962 Nobel Prize shared by Wilkins, Watson and Crick.\n\n## Alex Stokes\n\nAlex Stokes developed mathematical methods for interpreting X-ray diffraction patterns while at King's MRC Unit; without this know-how, Gosling would not have been able to make sense of his DNA data. Stokes was a co-author, with Wilkins and Herbert Wilson, of one of the three articles on DNA structure co-published in *Nature* on April 25th 1953.\n\n## Rudolf Signer\n\nAs with Randall, Gosling considers Signer to be an \"unsung hero\" of the double helix story. 
Signer was a Swiss biochemist, based at the University of Bern, who could produce a DNA sample of far superior quality to any other available at the time. Maurice Wilkins obtained such a sample, derived from calf thymus, after Signer generously offered it to all takers at a London lecture.\n\n## Lawrence Bragg\n\nLawrence Bragg was awarded the Nobel Prize in Physics, together with his father William Bragg, at the tender age of 25, making him the youngest ever recipient of this prize. Bragg's Nobel was in recognition of the methodology of studying crystal structures by X-ray diffraction, which the Braggs had developed at the University of Leeds. It was also at Leeds that William Astbury obtained early X-ray diffraction patterns of (non-crystalline) DNA.\n\nBy the 1950s, Bragg was serving as the Head of the Cavendish Laboratory at Cambridge, where both Crick and Watson worked under him. Also based at Bragg's Cavendish for a time was Peter Pauling, son of Linus, who leaked information of his father's interest in the question of DNA's structure. A fourth key player to work under Bragg at the Cavendish was the chemist Jerry Donohue, who set Watson and Crick right about errors they were making in the likely chemical form of DNA's bases.\n\n## Linus Pauling\n\nPauling is most notable for his work on the nature of the chemical bond and on the secondary structure of proteins, and was one of the most prolific scientists of the 20th Century, in terms of major advances credited to his name.\n\nPauling began work on the structure of DNA while King's were working on the problem, and Crick and Watson used the specter of defeat at the hands of Pauling - who had beaten Bragg to the alpha-helix and beta-sheet - to goad Bragg into allowing them to restart work on DNA's structure. As an American based in California, Pauling also added an element of transatlantic competition to the race, although Cambridge-based Jim Watson was of course himself also American.\n\nPauling published a model for DNA's structure in February 1953. However, the model did not come close to the true structure, with the most obvious mistakes being the number of strands - he had proposed a triple, rather than a double, helix - and the location of the phosphates on the interior of the helix.\n\nPauling is the only person in history to have won two individual Nobel Prizes - these were for Chemistry (1954) and Peace (1962), the latter as a result of his political activism, some aspects of which had famously brought him unwelcome attention from the United States government, in the form of travel restrictions. In part for this reason, Gosling was never able to meet Pauling in person.\n\n## Erwin Chargaff\n\nChargaff had discovered the 1:1 ratio that existed between the complementary base pairs in DNA, but had not made the leap from that discovery to the rules of base pairing. Chargaff personally communicated his discovery - dubbed 'Chargaff's rules' - to Watson and Crick, and was \"as cross as two sticks\" not to be included in Watson's book.\n\nOriginally from the present-day Ukraine, and having spent time at various European research institutions, Chargaff relocated to New York to escape the rise of Nazism. It was here, at Columbia University, that he determined 'Chargaff's rules'.\n\n# Box 3. 
Serendipity or sagacity?\n\nGosling claims here and elsewhere \\[11\\] that he was the first person to crystallize DNA purely by a stroke of luck: he had used water to moisten the hydrogen and, serendipitously, the humidity absorbed by the DNA fibers from this water resulted in crystallization (see text).\n\nIn his Nobel lecture \\[12\\], however, Gosling's colleague Maurice Wilkins gives a different version of events: 'One reason for this success was that we kept the fibres moist. We remembered that, to obtain detailed X-ray patterns from proteins, \\[JD\\] Bernal had kept protein crystals in their mother liquor. It seemed likely that the configuration of all kinds of water-soluble biological macromolecules would depend on their aqueous environment.'\n\n# Box 4. Watson and Crick: the perfect pairing\n\nGosling relates how, having been refused entry to King's, Jim Watson \"was nonetheless convinced that he had to learn some basic crystallography, and then he'd be able to find the structure of DNA.\" He approached Sir Lawrence Bragg and asked to join his Cavendish Laboratory at the University of Cambridge, and was accepted (\"you didn't turn down a pair of willing hands which came self-funded.\")\n\nBecause of the way space happened to be allocated among his workers at that time, Bragg gave Watson a spare desk next to Francis Crick (\"a thoroughly nice man\"), who was \"already getting Sir Lawrence rather mad because - as Bragg was heard to say several times - the man never stopped talking. And this is true, the man was incredibly wired for looking at exciting new developments.\" Crick had just come across the argument about whether DNA or protein was the genetic material, and had come to the conclusion it was the DNA. \"And here is this pop-eyed chap from America turning up, who is saying that is exactly what it is and the people in King's have got it, and we should build models - that's the way to go.\"\n\nAs with the base pairs they discovered, Gosling considers Watson and Crick to be the perfect complement to one another: Crick's \"genius\" and Watson's \"persistence\".\n\nGosling notes the contrast between the experimentalists at King's and Watson and Crick, who \"never did an experiment in their lives, it was all deductive powers of reasoning.\" Nevertheless, Gosling believes that those powers were very much worthy of the Nobel Prize in Physiology or Medicine awarded to them, alongside Maurice Wilkins, in 1962.\n\n# Box 5. An encounter with JBS Haldane\n\nGosling believes he has met three geniuses in his lifetime: Francis Crick, JD Bernal and JBS Haldane. \"Those three, I was lucky to meet. I mean, I don't think there are very many scientists who have the privilege.\"\n\nHe first met Haldane when an undergraduate at University College, but a later encounter was much more memorable.\n\n\"JBS Haldane came to a conversazione at the Royal Institution, and it was time for the eaties to be dished out, and so I was the one who got the short straw and had to stay by the model in case anybody came to look at it. And there it was, this lovely double helix. And this Haldane shuffled up and started to roll a cigarette, and apparently he used the cheapest possible tobacco and Rizla paper, and made his own, like he was a student. Nowadays, you would have accused him of making a reefer, a joint. But he rolled it, took a few puffs and said, 'Well, now you will have to find an untw... tw... ...twiddlase.' Because he had an awful stammer, and that I remember vividly. 
I stood there awestruck, thinking about an untwiddlase! I mean, this was before the concept of telomeres. But he was on it. Incredible man.\"\n\n# Box 6. Gosling after DNA\n\nAfter the initial double helix data had been published, Gosling and Franklin finished up their research into the structure of DNA with another article in *Nature* \\[13\\]. Gosling continued to work in crystallography for a few years, focusing on the structure of nucleotides, but spent most of his career as a medical physicist, developing devices for the study and diagnosis of atherosclerosis.\n\nThe idea to perform this work originated from a discussion he had while based at the University of the West Indies. \"Again, a somewhat serendipitous situation developed, in that I got to know very well indeed the senior lecturer in morbid anatomy. He wanted an explanation as to why the fatty plaques should develop in arteries where the blood was moving fastest, that this was an anomalous situation, and all of the work that was being done on atherosclerosis - on atherogenesis, if you like - at that time was being done by biochemists. And he said, surely, isn't there a big hole here for someone who is a biophysicist to look at the characteristics of the pulsatile flow, and that must play a part in the formation of these plaques.\"\n\nGosling began the project while on a sabbatical back in Randall's laboratory at King's and, while he was there, Randall persuaded him to change tack. \"One day in my little lab, Randall appeared. And he had this terrifying habit of doing just that. I mean, he walked around with hush puppies and you couldn't hear him coming, and suddenly you were in an empty lab and then the next thing you know, he's at your elbow. Terribly dangerous! But what he said was illuminating. He'd come down to find me because he wanted me to stop building glass tubes with lumps inside, because if I did, and if I was successful in replicating the conditions, all that would happen is that some bloody physiologist would come along and pooh-pooh the whole thing because it was not like the *in vivo* state, and that what I ought to do is to build an animal model and observe it directly in a living situation. And then he disappeared.\"\n\nGosling took Randall's advice and, when he returned to Jamaica, switched to working on cockerels, which turned out to be the best animal model available. Later, he continued his work on atherosclerosis at Guy's Hospital, London, where he developed ultrasound devices for the analysis and diagnosis of atheromatous plaques. This phase of his career included an important discovery.\n\n\"As the lumps developed, so it changed the elasticity of the artery wall. As you know, you get hardening of the arteries as they build up with atheromatous plaques, but we were able to show that, before that happens, they get three times more distensible, which was an unlooked-for, unknown thing - counterintuitive.\"\n\nGosling found the direct impact he could see his work making in medical physics more satisfying than fundamental biology, and he also preferred the steadier rate of progress to the manic ups and downs of his years as a crystallographer. His time at King's, therefore, was a preamble to a rewarding and successful career that led Gosling in a very different direction to his fellow actors in the double helix story.\n\nNow retired, Gosling has seen his contribution to science recognized in the form of election as a Fellow of King's College and the award of a DSc from the University of the West Indies. 
Inexplicably, Gosling has never received recognition in the Queen's honors system from the British government, although he was invited to meet Prime Minister Tony Blair at Downing Street in honor of the double helix's 50th anniversary.\n\nabstract: A report of the Biochemical Society\/Wellcome Trust meeting 'Protein Evolution - Sequences, Structures and Systems', Hinxton, UK, 26-27 January 2009.\nauthor: John W Pinney; Michael PH Stumpf\ndate: 2009\ninstitute: Centre for Bioinformatics, Division of Molecular Biosciences, Imperial College London, Wolfson Building, London SW7 2AZ, UK\ntitle: Evolving proteins at Darwin's bicentenary\n\nThe effects of natural selection are ultimately mediated through protein function. The traditional view that selection on proteins is primarily due to the effects of mutations on protein structure has, however, in recent years been replaced by a much richer picture. This modern perspective was in evidence at a recent meeting on protein evolution in Hinxton, UK. Here we report some of the highlights.\n\nUnsurprisingly, Charles Darwin featured a lot at the meeting. Evolutionary arguments are all-pervasive in the biomedical and life sciences and this is particularly true for the analysis of proteins and their role in cell and molecular biology. From initial investigations of individual proteins in the 1940s and 1950s, which were motivated by even earlier work on blood groups, we can now routinely collect information from a large number of sequenced genomes to help us understand the evolution of proteins in terms of their sequences, structures and functions, and their roles as parts of biological systems.\n\n# Comparative evolution\n\nThe primacy of comparative, and thus evolutionary, arguments in the analysis of proteins and their structure was emphasized by Tom Blundell (University of Cambridge, UK), who reviewed almost 40 years of structural bioinformatics. He noted that in the early studies of insulin structure, the common ancestry of all life on Earth meant that lessons learned in the context of one species were transferable to other species. This in turn meant that sequence data could be linked to structure more directly through comparative arguments than would have been possible using biophysical or biochemical arguments. Despite vast increases in computational power and experimental resolution, this continues to be the case to the present day.\n\nThe explosion in available whole-genome data has provided us with a much richer understanding of genomic aspects of protein evolution. This was highlighted by Chris Ponting (University of Oxford, UK), who contrasted the distributions of proteins and protein family members in the human and mouse genomes. 
Such a comparison reveals high levels of sequence duplication - probably in line with what might be expected, given recent findings of copy-number variation - and suggests a scenario where ancient single-copy genes are only rarely gained or lost. Members of larger gene families, however, have experienced much more frequent gene duplication and loss; this may reflect the role of such gene families in adaptive evolution, as seen in the rapid evolution of the androgen-binding proteins in mouse.\n\nThe theme of adaptation was elaborated on by Bengt Mannervik (Uppsala University, Sweden), who focused on the evolution of enzymes, a class of proteins with perhaps uniquely well-characterized functionality. Here, he argued, the relative trade-off between substrate specificity and enzymatic activity has given rise to a quasi-species-like evolutionary scenario: abundant protein polymorphisms underlie a complex population of functional enzymatic variants. Such diversity in the metabolic functions available within the population may presumably help to buffer changes in the environment encountered during evolution.\n\nAraxi Urrutia (University of Bath, UK) addressed predominantly the link between gene and protein expression and evolutionary conservation and adaptation. As she pointed out, there is clear emerging evidence that highly expressed genes in humans share certain characteristics such as short intron lengths and higher codon-usage bias and favor less metabolically expensive amino acids. This affects the rate at which protein-coding genes evolve in a manner independent of protein structure. Moreover, this level of selection also appears to depend on the genomic context, as patterns of expression of neighboring genes are statistically correlated.\n\n# Insights from structure\n\nAlso fundamental to protein activity is post-translational modification, notably phosphorylation. This is a field of enormous biomedical importance, as kinase and phosphatase activities crucially regulate signaling and metabolic processes. The structural work of Louise Johnson (University of Oxford, UK) and colleagues bridges 'classical' structural biology and systems biology, and she discussed the structural factors underlying the regulation of kinases and phosphorylation. These comprehensive analyses are now also beginning to reveal how biochemical compounds can affect kinase regulation in a manner that may become clinically exploitable.\n\nKeeping to the structural theme, Christine Orengo (University College London, UK) discussed the phenomenal insights that have been gained recently into the evolution of protein domain superfamilies and the ensuing effects that this can have on protein structure, active sites, and ultimately, function. For example, the analysis clearly reveals common structural cores that are shared across the members of the same superfamily but may be modified in individual members. Orengo documented how such differences in the HUP superdomain family lead to differences in the participation of paralogs in protein complexes and biological processes following duplication.\n\nAlex Bateman (Wellcome Trust Sanger Institute, UK) further elaborated on the evolution of families of protein domains. Such a domain-centric point of view adds a valuable and useful perspective. 
Yet even at the level of shuffling these protein building blocks, the picture becomes more detailed as the available evolutionary resolution increases: for example, the frequency of changes in domain architecture is seen to approximately double following a gene duplication event as compared with a speciation event.\n\n# Protein evolution *in vitro* and *in vivo*\n\nUsing extensive and genome-wide data from yeast and humans, Laurence Hurst (University of Bath, UK) demonstrated the substantial role of non-structural selection pressures, such as those imposed by transcription and translation, on the evolutionary dynamics of proteins.\n\nTaking these into account results in a much richer picture of protein evolution, with the contribution of splicing-related constraints being particularly pronounced in mammals. Surprisingly, perhaps, these constraints show the same relative importance for protein evolution as aspects of gene expression do, as discussed by Urrutia. This is in stark contrast to the traditional amino-acid-centered view of protein evolution.\n\nUsing analogies with mountaineering, Dan Tawfik (Weizmann Institute, Rehovot, Israel) covered the exciting opportunities afforded by experimental studies of protein evolution. Evolution has sometimes been viewed previously as an observational and mathematical discipline rather than one characterized by experimental work. Tawfik showed how it is possible to explore evolutionary trajectories through the space of possible protein folds or functions in far more detail than had previously been thought possible. One of the exciting possibilities emerging from this work is that we will be able to study the interplay between neutral evolution and the various factors influencing selection. There is already good direct experimental detail from these laboratory studies that demonstrate the link between the rate of protein evolution and 'functional promiscuity' and conformational variability.\n\nOne of us (MPHS) described the phage-shock stress response in *Escherichia coli* as an example in which the loss and gain of proteins across bacterial species can only be understood in the context of mechanistic models of the system itself. Loss of individual genes can compromise the functionality of the stress response, which can only be tolerated under certain ecological conditions. As a result, it appears that either the complete set of proteins contributing to the stress response is maintained in bacterial genomes, or all are lost together. This all-or-nothing scenario is probably inextricably linked to the ecological niches inhabited by the bacteria.\n\nDavid Robertson (University of Manchester, UK) discussed how patterns of gene duplication and diversification have shaped the global structure of protein-protein interaction networks, as well as many of their detailed features. In contrast to previous work, this detailed analysis of the protein-interaction network in *Saccharomyces cerevisiae* clearly shows that the coevolution of interacting proteins cannot simply be explained by observed protein-protein interactions. What emerges from this and related studies is that many of the high-level models of network evolution proposed only a few years ago are too simplistic for dealing with such highly contingent and complex processes. 
Robertson concluded with a discussion of the evolutionary history of human disease genes, which also highlights the importance of historical levels of gene duplication, and reinforces the need for nuanced assessment of the different factors affecting protein evolution.\n\nMike Tyers (University of Edinburgh, UK) described an exciting new experimental study mapping the physical protein-protein interactions of kinases. The experimental determination of these frequently weak protein interactions poses many challenges, requiring considerable reworking of existing platforms for proteomics, but the information produced is expected to be of great value to systems biologists. Preliminary results already suggest that the wealth of material expected from this survey will aid our understanding of the molecular mechanisms involved in these processes.\n\nTwo hundred years after the birth of Charles Darwin, we understand a great deal about the processes of evolution and how they have shaped the diversity of life on Earth. The application of the simple idea of \"descent with modification\" to proteins, their structures, expression patterns, interactions and ultimately their emergent functions continues to produce fundamental insights into how biological systems evolve. But the picture emerging from this unprecedented access to molecular data at all levels of cellular organization is much more nuanced than we would have thought possible only a few years ago.\n\nabstract: # OBJECTIVE\n .\n To review the evidence about the impact of hypoglycemia on patients with diabetes that has become available since the past reviews of this subject by the American Diabetes Association and The Endocrine Society and to provide guidance about how this new information should be incorporated into clinical practice.\n .\n # PARTICIPANTS\n .\n Five members of the American Diabetes Association and five members of The Endocrine Society with expertise in different aspects of hypoglycemia were invited by the Chair, who is a member of both, to participate in a planning conference call and a 2-day meeting that was also attended by staff from both organizations. Subsequent communications took place via e-mail and phone calls. The writing group consisted of those invitees who participated in the writing of the manuscript. The workgroup meeting was supported by educational grants to the American Diabetes Association from Lilly USA, LLC and Novo Nordisk and sponsorship to the American Diabetes Association from Sanofi. The sponsors had no input into the development of or content of the report.\n .\n # EVIDENCE\n .\n The writing group considered data from recent clinical trials and other studies to update the prior workgroup report. Unpublished data were not used. 
Expert opinion was used to develop some conclusions.\n .\n # CONSENSUS PROCESS\n .\n Consensus was achieved by group discussion during conference calls and face-to-face meetings, as well as by iterative revisions of the written document. The document was reviewed and approved by the American Diabetes Association's Professional Practice Committee in October 2012 and approved by the Executive Committee of the Board of Directors in November 2012 and was reviewed and approved by The Endocrine Society's Clinical Affairs Core Committee in October 2012 and by Council in November 2012.\n .\n # CONCLUSIONS\n .\n The workgroup reconfirmed the previous definitions of hypoglycemia in diabetes, reviewed the implications of hypoglycemia on both short- and long-term outcomes, considered the implications of hypoglycemia on treatment outcomes, presented strategies to prevent hypoglycemia, and identified knowledge gaps that should be addressed by future research. In addition, tools for patients to report hypoglycemia at each visit and for clinicians to document counseling are provided.\nauthor: Elizabeth R. Seaquist; John Anderson; Belinda Childs; Philip Cryer; Samuel Dagogo-Jack; Lisa Fish; Simon R. Heller; Henry Rodriguez; James Rosenzweig; Robert Vigersky\nCorresponding author: Elizabeth R. Seaquist\ndate: 2013-05\nreferences:\ntitle: Hypoglycemia and Diabetes: A Report of a Workgroup of the American Diabetes Association and The Endocrine Society\n\nIn 2005, the American Diabetes Association Workgroup on Hypoglycemia released a report entitled \"Defining and Reporting Hypoglycemia in Diabetes\" (1). In that report, recommendations were primarily made to advise the U.S. Food and Drug Administration (FDA) on how hypoglycemia should be used as an end point in studies of new treatments for diabetes. In 2009, The Endocrine Society released a clinical practice guideline entitled \"Evaluation and Management of Adult Hypoglycemic Disorders,\" which summarized how clinicians should manage hypoglycemia in patients with diabetes (2). Since then, new evidence has become available that links hypoglycemia with adverse outcomes in older patients with type 2 diabetes (3\u20136) and in children with type 1 diabetes (7,8). To provide guidance about how this new information should be incorporated into clinical practice, the American Diabetes Association and The Endocrine Society assembled a new Workgroup on Hypoglycemia in April 2012 to address the following questions:\n\n1. How should hypoglycemia in diabetes be defined and reported?\n\n2. What are the implications of hypoglycemia on both short- and long-term outcomes in people with diabetes?\n\n3. What are the implications of hypoglycemia on treatment targets for patients with diabetes?\n\n4. What strategies are known to prevent hypoglycemia, and what are the clinical recommendations for those at risk for hypoglycemia?\n\n5. What are the current knowledge gaps in our understanding of hypoglycemia, and what research is necessary to fill these gaps?\n\n# How should hypoglycemia in diabetes be defined and reported?\n\nHypoglycemia puts patients at risk for injury and death. Consequently the workgroup defines iatrogenic hypoglycemia in patients with diabetes as all episodes of an abnormally low plasma glucose concentration that expose the individual to potential harm. 
A single threshold value for plasma glucose concentration that defines hypoglycemia in diabetes cannot be assigned because glycemic thresholds for symptoms of hypoglycemia (among other responses) shift to lower plasma glucose concentrations after recent antecedent hypoglycemia (9\u201312) and to higher plasma glucose concentrations in patients with poorly controlled diabetes and infrequent hypoglycemia (13).\n\nNonetheless, an alert value can be defined that draws the attention of both patients and caregivers to the potential harm associated with hypoglycemia. The workgroup (1) suggests that patients at risk for hypoglycemia (i.e., those treated with a sulfonylurea, glinide, or insulin) should be alert to the possibility of developing hypoglycemia at a self-monitored plasma glucose\u2014or continuous glucose monitoring subcutaneous glucose\u2014concentration of \u226470 mg\/dL (\u22643.9 mmol\/L). This alert value is data driven and pragmatic (14). Given the limited accuracy of the monitoring devices, it approximates the lower limit of the normal postabsorptive plasma glucose concentration (15), the glycemic thresholds for activation of glucose counterregulatory systems in nondiabetic individuals (15), and the upper limit of plasma glucose level reported to reduce counterregulatory responses to subsequent hypoglycemia (11). Because it is higher than the glycemic threshold for symptoms in both nondiabetic individuals and those with well-controlled diabetes (9,13,14), it generally allows time to prevent a clinical hypoglycemic episode and provides some margin for the limited accuracy of monitoring devices at low-glucose levels. People with diabetes need not always self-treat at an estimated glucose concentration of \u226470 mg\/dL (\u22643.9 mmol\/L). Options other than carbohydrate ingestion include repeating the test in the short term, changing behavior (e.g., avoiding driving or elective exercise until the glucose level is higher), and adjusting the treatment regimen. Although this alert value has been debated (9,13,14), a plasma concentration of \u226470 mg\/dL (\u22643.9 mmol\/L) can be used as a cut-off value in the classification of hypoglycemia in diabetes.\n\nConsistent with past recommendations (1), the workgroup suggests the following classification of hypoglycemia in diabetes:\n\n### 1) Severe hypoglycemia.\n\nSevere hypoglycemia is an event requiring assistance of another person to actively administer carbohydrates, glucagon, or take other corrective actions. 
Plasma glucose concentrations may not be available during an event, but neurological recovery following the return of plasma glucose to normal is considered sufficient evidence that the event was induced by a low plasma glucose concentration.\n\n### 2) Documented symptomatic hypoglycemia.\n\nDocumented symptomatic hypoglycemia is an event during which typical symptoms of hypoglycemia are accompanied by a measured plasma glucose concentration \u226470 mg\/dL (\u22643.9 mmol\/L).\n\n### 3) Asymptomatic hypoglycemia.\n\nAsymptomatic hypoglycemia is an event not accompanied by typical symptoms of hypoglycemia but with a measured plasma glucose concentration \u226470 mg\/dL (\u22643.9 mmol\/L).\n\n### 4) Probable symptomatic hypoglycemia.\n\nProbable symptomatic hypoglycemia is an event during which symptoms typical of hypoglycemia are not accompanied by a plasma glucose determination but that was presumably caused by a plasma glucose concentration \u226470 mg\/dL (\u22643.9 mmol\/L).\n\n### 5) Pseudo-hypoglycemia.\n\nPseudo-hypoglycemia is an event during which the person with diabetes reports any of the typical symptoms of hypoglycemia with a measured plasma glucose concentration \\>70 mg\/dL (\\>3.9 mmol\/L) but approaching that level.\n\n## The challenge of measuring glucose accurately\n\nCurrently, two technologies are available to measure glucose in outpatients: capillary measurement with point-of-care (POC) glucose meters (self-monitored blood glucose \\[SMBG\\]) and interstitial measurement with continuous glucose monitors (CGMs), both retrospective and real time. The International Organization for Standardization (ISO) and FDA standards require that POC meters' analytical accuracy be within 20% of the actual value in 95% of samples with glucose levels \u226575 mg\/dL and \u00b115 mg\/dL for samples with glucose \\<75 mg\/dL. Despite this relatively large permissible variation, Freckmann et al. (16) found that only 15 of 27 meters on the market in Europe several years ago met the current analytical standards of \u00b115 mg\/dL in the hypoglycemia range, 2 of 27 met \u00b110 mg\/dL, and none were capable of measuring \u00b15 mg\/dL.\n\nThe need for accurate meters in the \\<75 mg\/dL range is essential in insulin-treated patients, whether they are outpatients or inpatients, but it is less important in those outpatients who are on medications that rarely cause hypoglycemia. In critical care units, where the accuracy of POC meters is particularly crucial, their performance may be compromised by medications (vasopressors, acetaminophen), treatments (oxygen), and clinical states (hypotension, anemia) (17). Karon et al. (18) translated these measurement errors into potential insulin-dosing errors using simulation modeling and found that if there were a total measurement error of 20%, 1- and 2-step errors in insulin dose would occur 45% and 6% of the time, respectively, in a tight glycemic control protocol. Such imprecision may affect the safe implementation of insulin infusion protocols in critical care units and may account in part for the high hypoglycemia rates in most trials of inpatient intensive glycemic control.\n\nRetrospective and real-time CGMs represent an evolving technology that has made considerable progress in overall (point + rate) accuracy. However, the accuracy of CGMs in the hypoglycemic range is poor as demonstrated by error grid analysis (19,20). 
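Stepping back to the dosing-error modeling of Karon et al. (18) above: their point can be made concrete with a small simulation. The sketch below is a minimal illustration under stated assumptions, not a reconstruction of their model - the sliding-scale bands, the uniform multiplicative error model, and the glucose distribution are all hypothetical choices introduced here, so the percentages it prints will not reproduce the published 45% and 6% figures.

```python
import random

# Hypothetical sliding-scale bands (glucose range in mg/dL -> insulin dose step).
# These bands are illustrative assumptions, not the protocol studied by Karon et al.
BANDS = [(0, 150, 0), (150, 200, 1), (200, 250, 2), (250, 300, 3), (300, float("inf"), 4)]

def dose_step(glucose_mg_dl):
    """Return the dose step the sliding scale assigns to a glucose reading."""
    for lo, hi, step in BANDS:
        if lo <= glucose_mg_dl < hi:
            return step
    raise ValueError("glucose must be non-negative")

def simulate(total_error_pct=20.0, n=100_000, seed=1):
    """Estimate how often measurement error alone shifts the dose by >=1 or >=2 steps.

    Assumptions: measurement error is a uniform multiplicative error within
    +/- total_error_pct of the true value, and true glucose is drawn uniformly
    from a plausible tight-glycemic-control range.
    """
    rng = random.Random(seed)
    one_step = two_step = 0
    for _ in range(n):
        true_glucose = rng.uniform(100.0, 350.0)
        measured = true_glucose * (1.0 + rng.uniform(-total_error_pct, total_error_pct) / 100.0)
        diff = abs(dose_step(measured) - dose_step(true_glucose))
        one_step += diff >= 1
        two_step += diff >= 2
    return one_step / n, two_step / n

if __name__ == "__main__":
    p1, p2 = simulate()
    print(f"dose off by >=1 step: {p1:.1%}; off by >=2 steps: {p2:.1%}")
```

The same scaffold can be re-pointed at other error models - for example, an additive ±15 mg/dL error below 75 mg/dL, mirroring the ISO limit discussed above - by swapping the line that computes `measured`.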
With existing real-time CGMs, accuracy can be achieved in only 60\u201373% of samples in the range of 40\u201380 mg\/dL (21,22). Because the accuracy of CGMs, like POC meters, is negatively affected by multiple factors in hospitalized patients and they are calibrated with POC meters affected by those same factors, CGMs are not recommended for glycemic management in hospitalized patients at this time (17).\n\n# What are the implications of hypoglycemia on both short- and long-term outcomes in people with diabetes?\n\nIatrogenic hypoglycemia is more frequent in patients with profound endogenous insulin deficiency\u2014type 1 diabetes and advanced type 2 diabetes\u2014and its incidence increases with the duration of diabetes (23). It is caused by treatment with a sulfonylurea, glinide, or insulin and occurs about two to three times more frequently in type 1 diabetes than in type 2 diabetes (23,24). Event rates for severe hypoglycemia for patients with type 1 diabetes range from 115 (24) to 320 (23) per 100 patient-years. Severe hypoglycemia in patients with type 2 diabetes has been shown to occur at rates of 35 (24) to 70 (23) per 100 patient-years. However, because type 2 diabetes is much more prevalent than type 1 diabetes, most episodes of hypoglycemia, including severe hypoglycemia, occur in people with type 2 diabetes (25).\n\nThere is no doubt that hypoglycemia can be fatal (26). In addition to case reports of hypoglycemic deaths in patients with type 1 and type 2 diabetes, four recent reports of mortality rates in series of patients indicate that 4% (27), 6% (28), 7% (29), and 10% (30) of deaths of patients with type 1 diabetes were caused by hypoglycemia. A temporal relationship between extremely low subcutaneous glucose concentrations and death in a patient with type 1 diabetes who was wearing a CGM device and was found dead in bed has been reported (31). Although profound and prolonged hypoglycemia can cause brain death, most episodes of fatal hypoglycemia are probably the result of other mechanisms, such as ventricular arrhythmias (26). In this section, we will consider the effects of hypoglycemia on the development of hypoglycemia unawareness and how iatrogenic hypoglycemia may affect outcomes in specific patient groups.\n\n## Hypoglycemia unawareness and hypoglycemia-associated autonomic failure\n\nAcute hypoglycemia in patients with diabetes can lead to confusion, loss of consciousness, seizures, and even death, but how a particular patient responds to a drop in glucose appears to depend on how frequently that patient experiences hypoglycemia. Recurrent hypoglycemia has been shown to reduce the glucose level that precipitates the counterregulatory response necessary to restore euglycemia during a subsequent episode of hypoglycemia (10\u201312). As a result, patients with frequent hypoglycemia do not experience the symptoms from the adrenergic response to a fall in glucose until the blood glucose reaches lower and lower levels. For some individuals, the level that triggers the response is below the glucose level associated with neuroglycopenia. The first sign of hypoglycemia in these patients is confusion, and they often must rely on the assistance of others to recognize and treat low blood glucose. Such individuals are said to have developed hypoglycemia unawareness. 
Defective glucose counterregulation (the result of loss of a decrease in insulin production and an increase in glucagon release along with an attenuated increase in epinephrine) and hypoglycemia unawareness (the result of an attenuated increase in sympathoadrenal activity) are the components of hypoglycemia-associated autonomic failure (HAAF) in patients with diabetes. HAAF is a form of functional sympathoadrenal failure that is most often caused by recent antecedent iatrogenic hypoglycemia (25) and is at least partly reversible by scrupulous avoidance of hypoglycemia (32\u201334). Indeed, HAAF has been shown to be maintained by recurrent iatrogenic hypoglycemia (33,34). The development of HAAF is associated with a 25-fold (35) or greater (36) increased risk of severe hypoglycemia during intensive glycemic therapy. It is important to distinguish HAAF from classical autonomic neuropathy, which may occur as one form of diabetic neuropathy. Impaired sympathoadrenal activation is generally confined to the response to hypoglycemia, and autonomic activities in organs such as the heart, gastrointestinal tract, and bladder appear to be unaffected.\n\nClinically, HAAF can be viewed as both adaptive and maladaptive. On the one hand, patients with hypoglycemia unawareness and type 1 diabetes appear to perform better on tests of cognitive function during hypoglycemia than do patients who are able to detect hypoglycemia normally (37). In addition, the time necessary for full cognitive recovery after restoration of euglycemia appears to be faster in patients who have hypoglycemia unawareness than in patients with normal detection of hypoglycemia (37). The HAAF habituation of the sympathoadrenal response to recurrent hypoglycemic stress in humans (38) may be analogous to the phenomenon of habituation of the hypothalamic-pituitary-adrenocortical response to recurrent restraint stress in rats (39). Rats subjected to recurrent moderate hypoglycemia had less brain cell death (40) and less mortality (41) during or following marked hypoglycemia than those not subjected to recurrent hypoglycemia.\n\nOn the other hand, HAAF is clearly maladaptive since defective glucose counterregulation and hypoglycemia unawareness substantially increase the risk of severe hypoglycemia with its morbidity and potential mortality (26). A particularly low plasma glucose concentration might trigger a robust, potentially fatal sympathoadrenal discharge. Life-threatening episodes of hypoglycemia need not be frequent to be devastating.\n\n## Impact of hypoglycemia on children with diabetes\n\nHypoglycemia is a common problem in children with type 1 diabetes because of the challenges presented by insulin dosing, variable eating patterns, erratic activity, and the limited ability of small children to detect hypoglycemia. The infant, young child, and even the adolescent typically exhibit unpredictable feeding\u2014not eating all the anticipated food at a meal and snacking unpredictably between meals\u2014and have prolonged periods of fasting overnight that increase the risk of hypoglycemia. Selecting the correct prandial dose of insulin is therefore often difficult. Very low insulin requirements for basal and mealtime dosing in the infant and young child frequently require use of miniscule basal rates in pump therapy and one-half unit dosing increments with injections. Management rarely requires the use of diluted insulin, e.g., 10 units per mL. 
Infants and toddlers may not recognize the symptoms of hypoglycemia and lack the ability to effectively communicate their distress. Caregivers must be particularly aware that changes in behavior such as a loss of temper may be a sign of hypoglycemia.\n\nPuberty is associated with insulin resistance, while at the same time the normal developmental stages of adolescence may lead to inattention to diabetes and increased risk for hypoglycemia. As children grow, they often have widely fluctuating levels of activity during the day, which puts them at risk for hypoglycemia. Minimizing the impact of hypoglycemia on children with diabetes requires the education and engagement of parents, patients, and other caregivers in the management of the disease (42,43).\n\nThe youngest patients are most vulnerable to the adverse consequences of hypoglycemia. Ongoing maturation of the central nervous system puts these children at greater risk for cognitive deficits as a consequence of hypoglycemia (44). Recent studies have examined the impact of hypoglycemia on cognitive function and cerebral structure in children and found that those who experience this complication before the age of 5 years seem to be more affected than those who do not have hypoglycemia until later (7). The long-term impact of hypoglycemia on cognition before the age of 5 years is unknown.\n\n## Impact of hypoglycemia on adults with type 1 diabetes\n\nLandmark data on the impact of hypoglycemia on adults with type 1 diabetes come from the Diabetes Control and Complications Trial (DCCT) and its follow-up study, where cognition has been systematically measured over time. In this cohort, performance on a comprehensive battery of neurocognitive tests at 18 years of follow-up was the same in participants with and without a history of severe hypoglycemia (28). Despite such reassuring findings, recent investigation with advanced imaging techniques has demonstrated that adults with type 1 diabetes appear to call upon a greater volume of the brain to perform a working memory task during hypoglycemia (45). These findings suggest that adults with type 1 diabetes must recruit more regions to preserve cognitive function during hypoglycemia than adults without the disease. More work will be necessary to understand the significance of these observations on the long-term cognitive ability of adults with type 1 diabetes.\n\n## Impact of hypoglycemia on patients with type 2 diabetes\n\nThere is growing evidence that patients with type 2 diabetes might be particularly vulnerable to adverse events associated with hypoglycemia. Over the last decade, three large trials examined the effect of glucose lowering on cardiovascular events in patients with type 2 diabetes: ACCORD (Action to Control Cardiovascular Risk in Diabetes), ADVANCE (Action in Diabetes and Vascular Disease: Preterax and Diamicron MR Controlled Evaluation), and VADT (Veterans Affairs Diabetes Trial). Between them, a total of 24,000 patients with high cardiovascular risk were randomly assigned to either intensive glycemic control or standard therapy (3\u20135). In each, subjects who were randomly assigned to the intensive arm experienced more episodes of hypoglycemia than did those who were randomly assigned to the standard treatment arm. In the ACCORD trial, subjects who were randomly assigned to the intensive arm also experienced a 20% increase in mortality, and the glycemic control study was stopped early due to this finding. 
A relationship between mortality and randomization to intensive glucose control was not observed in ADVANCE or VADT, although VADT was underpowered to explore this relationship. A number of explanations have been offered for the findings of ACCORD, including chance, greater weight gain, and specific medication effects, but perhaps the most convincing candidate was hypoglycemia, which occurred three times more often in the intensive arm of ACCORD (4).

In the opinion of the blinded adjudication committee assigned to investigate mortality in ACCORD, hypoglycemia was judged to have a definite role in only one death, a probable role in three deaths, and a possible role in 38 deaths (46), which represents a role in less than 10% of the deaths recorded in the study population while the glycemic intervention was active. The investigators thus suggest that hypoglycemia at the time of death was probably not responsible for the increased mortality rate in the intensive arm of ACCORD. Since glycemia was not measured at the time of death in any of the ACCORD subjects, we may never know. However, the potentially lethal mechanisms that might be provoked by hypoglycemia could cause mortality downstream of the hypoglycemic event, increasing the difficulty in establishing cause and effect.

All three trials clearly demonstrated that an episode of severe hypoglycemia was associated with an increased risk of subsequent mortality. In ACCORD, those who had one or more severe hypoglycemic episodes had higher rates of death than those without such episodes across both study arms (hazard ratio 1.41 \[95% CI 1.03–1.93\]) (46). One-third of all deaths were due to cardiovascular disease, and hypoglycemia was associated with higher cardiovascular mortality. In VADT, a recent severe hypoglycemic event was the strongest independent predictor of death at 90 days (3). In ADVANCE, where rates of hypoglycemia were low, a similar pattern was found (47). Of course, in post hoc analyses a causal relationship cannot be established with certainty. It is possible that the association between hypoglycemia and death may be merely an indicator of vulnerability to death from any cause.

The relationship between hypoglycemia and subsequent cognitive function in patients with type 2 diabetes has also been investigated. In a large population study, hypoglycemic episodes that required hospitalization or a visit to the emergency department between 1980 and 2002 were associated with approximately double the risk of incident dementia after 2003 (6). However, since the study population did not undergo detailed tests of cognitive function prior to 2003, it is possible that those with incident dementia actually had mild cognitive dysfunction prior to experiencing the episode(s) of severe hypoglycemia. The possibility that mild cognitive dysfunction might increase the risk of experiencing severe hypoglycemia has been supported by analyses from the ACCORD study (48). In the ACCORD MIND (Memory IN Diabetes) study, in which cognitive function was assessed longitudinally, no difference was noted in the rate at which cognitive performance declined over time in subjects randomly assigned to the intensive versus the standard glucose arms, despite the fact that the former experienced three times as much hypoglycemia (49).
Future investigation will need to address this question because the existing data are somewhat contradictory.

## Impact of hypoglycemia on the elderly

Patients in the older age-groups are especially vulnerable to hypoglycemia. Epidemiological studies show that hypoglycemia is the most frequent metabolic complication experienced by older adults in the U.S. (50). Although severe hypoglycemia is common in older individuals with both type 1 and type 2 diabetes, patients with type 2 diabetes tend to have longer hospital stays and greater medical costs. The most significant predictors of this condition are advanced age, recent hospitalization, and polypharmacy, as shown in a study of Tennessee Medicare patients (51). Age-related declines in renal function and hepatic enzyme activity may interfere with the metabolism of sulfonylureas and insulin, thereby potentiating their hypoglycemic effects. The vulnerability of the elderly to severe hypoglycemia may be partially related to a progressive age-related decrease in β-adrenergic receptor function (52). Age-related impairment in counterregulatory hormone responses has been described in elderly patients with diabetes, especially with respect to glucagon and growth hormone (53). Symptoms of neuroglycopenia are also more prevalent in the elderly (54). With the prolonged duration of type 2 diabetes often seen in the elderly patient, the glucagon response to hypoglycemia is virtually absent (55). The intensification of glycemic control in the elderly patient is associated with a reduction in the plasma glucose thresholds for epinephrine release and for the appearance of hypoglycemic symptoms (56). As a result, changes in the level of glycemic control have a marked impact on the risk of developing hypoglycemia in the elderly.

Older adults with diabetes have a disproportionately high number of clinical complications and comorbidities, all of which can be exacerbated by and sometimes contribute to episodes of hypoglycemia. Older adults with diabetes are at much higher risk for the geriatric syndrome, which includes falls, incontinence, frailty, cognitive impairment, and depressive symptoms (57). The cognitive and executive dysfunction associated with the geriatric syndrome interferes with the patient's ability to perform self-care activities appropriately and follow the treatment regimen (58).

To minimize the risk of hypoglycemia in the elderly, careful education regarding the symptoms and treatment of hypoglycemia, with regular reinforcement, is extremely important because of the recognized gaps in the knowledge base of these individuals (59). In addition, it is important to assess the elderly for functional status as part of the overall clinical assessment in order to properly apply individualized glycemic control goals. Arbitrary short-acting insulin sliding scales, which are used much too often in long-term care facilities (60), should be avoided, and glyburide should be discontinued in favor of shorter-acting insulin secretagogues or medications that do not cause hypoglycemia. The recently published 2012 Beers Criteria list of potentially inappropriate medications for older adults specifically identifies insulin sliding scales and glyburide as treatment modalities that should be avoided (61). Complex regimens requiring multiple decision points should be simplified, especially for patients with decreased functional status.
In addition, caregivers and staff in long-term care facilities need to be educated on the causes and risks of hypoglycemia and the proper surveillance and treatment of this condition.

## Impact of hypoglycemia on hospitalized patients

Persons with diabetes are three times more likely to be hospitalized than those without diabetes, and approximately 25% of hospitalized patients (including people without a history of diabetes) have hyperglycemia (62–65). Inpatient hyperglycemia has been associated with prolonged hospital length of stay and with numerous adverse outcomes including mortality (64,66–68). The understandable zeal to minimize the adverse consequences of inpatient hyperglycemia, together with the demonstration that intensive glycemic control improved outcomes in surgical intensive care unit (ICU) patients (69), led to widespread adoption of aggressive glucose management among ICU patients. However, subsequent studies showed that such aggressive lowering of glycemia in the ICU is not uniformly beneficial, markedly increases the risk of severe hypoglycemia, and may be associated with increased mortality (70–74).

The true incidence and prevalence of hypoglycemia among hospitalized patients with diabetes are not known precisely. In a retrospective study of 31,970 patients admitted to the general wards of an academic medical center in 2007, a total of 3,349 patients (10.5%) had at least one episode of hypoglycemia (≤70 mg/dL) (75). In another review of 5,365 inpatients admitted to ICUs, 102 (1.9%) had at least one episode of severe hypoglycemia (\<40 mg/dL) (76). The risk factors for inpatient hypoglycemia include older age, presence of comorbidities, diabetes, increasing number of antidiabetic agents, tight glycemic control, septic shock, renal insufficiency, mechanical ventilation, and severity of illness (75,76). With regard to impact, a retrospective analysis of 4,368 admissions involving 2,582 diabetic patients admitted to the general ward indicated that severe hypoglycemia (≤50 mg/dL) was associated with increased length of stay and greater odds of inpatient death and death within 1 year of hospital discharge (77).

## Impact of hypoglycemia during pregnancy

Maintaining blood glucose control in pregnancy as close as possible to that of healthy pregnant women is important in minimizing the negative effects on the mother and the fetus (78). This is true for women with pregestational type 1 or type 2 diabetes, as well as for those with gestational diabetes mellitus. Normal blood glucose levels during pregnancy are 20% lower than in nonpregnant women (79), making the definition and detection of hypoglycemia more challenging. For women with type 1 diabetes, severe hypoglycemia occurs 3–5 times more frequently in the first trimester and at a lower rate in the third trimester when compared with the incidence in the year preceding the pregnancy (80). Risk factors for severe hypoglycemia in pregnancy include a history of severe hypoglycemia in the preceding year, impaired hypoglycemia awareness, long duration of diabetes, low HbA~1c~ in early pregnancy, fluctuating plasma glucose levels, and excessive use of supplementary insulin between meals. Surprisingly, nausea and vomiting during pregnancy did not appear to add significant risk.
When pregnant and nonpregnant women are compared with CGM, mild hypoglycemia (defined by the authors as blood glucose \<60 mg/dL) is more common in all pregnant women, but equally so regardless of whether or not they have diabetes, either pregestational or gestational (81). Hypoglycemia is generally without risk for the fetus as long as the mother avoids injury during the episode. For women with preexisting diabetes, insulin requirements rise throughout the pregnancy and then drop precipitously at the time of delivery of the placenta, requiring an abrupt reduction in insulin dosing to avoid postdelivery hypoglycemia. Breastfeeding may also be a risk factor for hypoglycemia in women with insulin-treated diabetes (82).

## Impact of hypoglycemia on quality of life and activities of daily living

Hypoglycemia and the fear of hypoglycemia have a significant impact on quality-of-life measures in patients with both type 1 and type 2 diabetes (83). Nocturnal hypoglycemia in particular may impact one's sense of well-being on the following day because of its impact on sleep quantity and quality (84). Patients with recurrent hypoglycemia have been found to have chronic mood disorders including depression and anxiety (85,86), although it is hard to establish cause and effect between hypoglycemia and mood changes. Interpersonal relationships may suffer as a result of hypoglycemia in patients with diabetes. In-depth interviews of a small group of otherwise healthy young adults with type 1 diabetes revealed the presence of interpersonal conflict including fears of dependency and loss of control. These adults also reported difficulty talking about issues related to hypoglycemia with significant others (87). This difficulty may carry over to their work life, where hypoglycemia has been linked to reduced productivity (88). Hypoglycemia also impairs one's ability to drive a car (89–91), and many jurisdictions require documentation that severe hypoglycemia is not occurring before persons with diabetes are permitted to have a license to operate a motor vehicle (92). However, impaired awareness of hypoglycemia has not consistently been associated with an increased risk of car collisions (92–95).

# What are the implications of hypoglycemia for treatment targets for patients with diabetes?

The glycemic target established for any given patient should depend on the patient's age, life expectancy, comorbidities, preferences, and an assessment of how hypoglycemia might impact his or her life. This patient-centered approach requires that clinicians spend time developing an individualized treatment plan with each patient. For very young children, the risks of severe hypoglycemia on brain development may require a strategy that attempts to avoid hypoglycemia at all costs. For healthy adults with diabetes, a reasonable glycemic goal might be the lowest HbA~1c~ that does not cause severe hypoglycemia, preserves awareness of hypoglycemia, and results in an acceptable number of documented episodes of symptomatic hypoglycemia. With current therapies, a strategy that completely avoids hypoglycemia may not be possible in patients with type 1 diabetes who strive to minimize their risks of developing the microvascular complications of the disease. However, glycemic goals might reasonably be relaxed in patients with long-standing type 1 diabetes and advanced complications or in those who are free of complications but have a limited life expectancy because of another disease process.
In such patients, the glycemic goal could be to achieve glucose levels sufficiently low to prevent symptoms of hyperglycemia.

For patients with type 2 diabetes, the risk of hypoglycemia depends on the medications used (96). Early in the course of the disease, most patients are treated with lifestyle changes and metformin, neither of which causes hypoglycemia. Therefore, an HbA~1c~ of \<7% is appropriate for many patients with recent-onset type 2 diabetes. As the disease progresses, it is likely that medications that increase the risk of hypoglycemia will be added. This, plus the presence of complications or comorbidities that limit life expectancy, means that glycemic goals may need to be less aggressive. While the benefits of achieving an HbA~1c~ of \<7% may continue to be advocated for patients with type 2 diabetes at risk for microvascular complications and with sufficient life expectancy, less aggressive targets may be appropriate in those with known cardiovascular disease, extensive comorbidities, or limited life expectancy.

Older individuals with gait imbalance and frailty may experience a life-changing injury if they fall during a hypoglycemic episode, so avoiding hypoglycemia is paramount in such patients. Patients with cognitive dysfunction may have difficulty adhering to a complicated treatment strategy designed to achieve a low HbA~1c~ (48). Such patients will benefit from a simplification of the treatment strategy with a goal to prevent hypoglycemia as much as possible. Furthermore, the benefits of aggressive glycemic therapy in those affected are unclear.

# What strategies are known to prevent hypoglycemia, and what are the clinical recommendations for those at risk for hypoglycemia?

Recurrent hypoglycemia increases the risk of severe hypoglycemia and the development of hypoglycemia unawareness and HAAF. Effective approaches known to decrease the risk of iatrogenic hypoglycemia include patient education, dietary and exercise modifications, medication adjustment, careful glucose monitoring by the patient, and conscientious surveillance by the clinician.

## Patient education

There is limited research related to the influence of self-management education on the incidence or prevention of hypoglycemia. However, there is clear evidence that diabetes education improves patient outcomes (97–99). As part of the educational plan, the individual with diabetes and his or her domestic companions need to recognize the symptoms of hypoglycemia and be able to treat a hypoglycemic episode properly with oral carbohydrates or glucagon. Hypoglycemia, including its risk factors and remediation, should be discussed routinely with patients receiving treatment with insulin or sulfonylurea/glinide drugs, especially those with a history of recurrent hypoglycemia or impaired awareness of hypoglycemia. In addition, patients must understand how their medications work so they can minimize the risk of hypoglycemia. Care should be taken to educate patients on the typical pharmacokinetics of these medications. When evaluating a patient's report of hypoglycemia, it is important to adopt interviewing approaches that guide the patient to a correct identification of the precipitating factors of the episodes of hypoglycemia. Such a heuristic review of likely factors (skipped or inadequate meal, unusual exertion, alcohol ingestion, insulin dosage mishaps, etc.)
in the period prior to the event can deepen the patient's appreciation of the behavioral factors that predispose to hypoglycemia.

There is convincing evidence that formal training programs that teach patients to replace insulin "physiologically" by giving background and mealtime/correction doses of insulin can reduce the risk of severe hypoglycemia. The Insulin Treatment and Training programs developed by Mühlhauser and Berger (100) have reported improved glycemic control comparable with that achieved in the DCCT while reducing the rates of severe hypoglycemia (101,102). These programs have been successfully delivered in other settings (103,104) with comparable reductions in hypoglycemic risk (105). Patients with frequent hypoglycemia may also benefit from enrollment in a blood glucose awareness training program. In such a program, patients and their relatives are trained to recognize subtle cues and early neuroglycopenic indicators of evolving hypoglycemia and respond to them before the occurrence of disabling hypoglycemia (106,107).

## Dietary intervention

Patients with diabetes need to recognize which foods contain carbohydrates and understand how the carbohydrates in their diet affect blood glucose. To avoid hypoglycemia, patients on long-acting secretagogues and fixed insulin regimens must be encouraged to follow a predictable meal plan. Patients on more flexible insulin regimens must know that prandial insulin injections should be coupled to meal times. Dissociated meal and insulin injection patterns lead to wide fluctuations in plasma glucose levels. Patients on any hypoglycemia-inducing medication should also be instructed to carry carbohydrates with them at all times to treat hypoglycemia.

The best bedtime snack to prevent overnight hypoglycemia in patients with type 1 diabetes has been investigated without clear consensus (108–112). These conflicting reports suggest that the administration of bedtime snacks may need to be individualized and be part of a comprehensive strategy (balanced diet, patient education, optimized drug regimens, and physical activity counseling) for the prevention of nocturnal hypoglycemia.

## Exercise management

Physical activity increases glucose utilization, which increases the risk of hypoglycemia. The risk factors for exertional hypoglycemia include prolonged exercise duration, unaccustomed exercise intensity, and inadequate energy supply in relation to ambient insulinemia (113,114). Postexertional hypoglycemia can be prevented or minimized by careful glucose monitoring before and after exercise and by taking appropriate preemptive actions. Preexercise snacks should be ingested if blood glucose values indicate falling glucose levels. Patients with diabetes should carry readily absorbable carbohydrates when embarking on exercise, including sporadic house or yard work. Because of the kinetics of rapid-acting and intermediate-acting insulin, it may be prudent to empirically adjust insulin doses on the days of planned exercise, especially in patients with well-controlled diabetes and a history of exercise-related hypoglycemia.

## Medication adjustment

Hypoglycemic episodes that are not readily explained by conventional factors (skipped or irregular meals, unaccustomed exercise, alcohol ingestion, etc.) may be due to excessive doses of drugs used to treat diabetes. A thorough review of blood glucose patterns may suggest vulnerable periods of the day that mandate adjustments to the current antidiabetes regimen.
Such adjustments may include substitution of rapid-acting insulin (lispro, aspart, glulisine) for regular insulin, or basal insulin glargine or detemir for NPH, to decrease the risk of hypoglycemia. Continuous subcutaneous insulin infusion offers great flexibility for adjusting the doses and administration pattern of insulin to counteract iatrogenic hypoglycemia (115). For patients with type 2 diabetes, sulfonylureas are the oral agents that pose the greatest risk for iatrogenic hypoglycemia, and substitution with other classes of oral agents or even glucagon-like peptide 1 analogs should be considered in the event of troublesome hypoglycemia (96). Interestingly, successful transplantation of whole pancreata or isolated pancreatic islet cells in patients with type 1 diabetes (116–118) results in marked improvements in glycemic control and near abolition of iatrogenic hypoglycemia.

Patients who develop hypoglycemia unawareness do so because of frequent and recurrent hypoglycemia. To avoid such frequent hypoglycemia, adjustments in the treatment regimen that scrupulously avoid hypoglycemia are necessary (Table 1). In published studies, this has required frequent (almost daily) contact between clinician and patient, and adjustments to caloric intake and insulin regimen based on blood glucose values (10,119,120). With this approach, restoration of autonomic symptoms of hypoglycemia occurred within 2 weeks, and complete reversal of hypoglycemia unawareness was achieved by 3 months. In some but not all reports, the recovery of symptoms was accompanied by improvement in epinephrine secretion (32,33,120,121). The return of hypoglycemic symptom awareness was associated with a modest increase (~0.5%) in HbA~1c~ values (33), but others have reported no loss of glycemic control (32,34).

Table 1. Approach to restore recognition of hypoglycemia in patients with HAAF.

## Glucose monitoring

Glucose monitoring is essential in managing patients at risk for hypoglycemia. Patients treated with insulin, sulfonylureas, or glinides should check their blood glucose whenever they develop the symptoms of hypoglycemia, in order to confirm that they must ingest carbohydrates to treat the symptoms and to collect information that can be used by the clinician to adjust the therapeutic regimen to avoid future hypoglycemia. Patients on basal-bolus insulin therapy should check their blood glucose before each meal and figure this value into the calculation of the dose of rapid-acting insulin to take at that time. Such care in dosing will likely reduce the risk of hypoglycemia.

Recent technological developments have provided patients with new tools for glucose monitoring. Real-time CGM, by virtue of its ability to display the direction and rate of change, provides helpful information to the wearer, leading to proactive measures to avoid hypoglycemia, e.g., when to think about having a snack or suspending insulin delivery on a pump. The CGM's audible and/or vibratory alarms may be particularly helpful in avoiding severe hypoglycemia at night and restoring hypoglycemic awareness.
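The threshold alarms described above (and the low-glucose suspend feature discussed below) reduce to simple comparisons against preset glucose values. The following is a minimal, purely illustrative sketch of such logic; all function names, parameter names, and thresholds are hypothetical and do not describe any actual device's algorithm.

```python
# Minimal, illustrative sketch of CGM low-glucose alarm / pump-suspend logic.
# All names and thresholds are hypothetical; real devices use far more
# sophisticated predictive algorithms and safety constraints.

LOW_ALARM_MG_DL = 70      # low-glucose alarm threshold (the study below used 108)
SUSPEND_MG_DL = 60        # low-glucose suspend threshold
MAX_SUSPEND_MIN = 120     # suspend insulin delivery for up to 2 h

def evaluate_reading(glucose_mg_dl: float, trend_mg_dl_per_min: float,
                     suspended_min: float) -> tuple[bool, bool]:
    """Return (sound_alarm, suspend_insulin) for one interstitial reading."""
    # Alarm on the absolute threshold or on a projected low within 20 minutes.
    projected = glucose_mg_dl + 20 * trend_mg_dl_per_min
    sound_alarm = min(glucose_mg_dl, projected) <= LOW_ALARM_MG_DL
    # Suspend basal delivery at the suspend threshold, capped at 2 h.
    suspend_insulin = (glucose_mg_dl <= SUSPEND_MG_DL
                       and suspended_min < MAX_SUSPEND_MIN)
    return sound_alarm, suspend_insulin

# A falling reading of 75 mg/dL at -1 mg/dL/min triggers a predictive alarm.
print(evaluate_reading(75, -1.0, 0))  # (True, False)
```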
With the low-glucose alarms set at 108 mg/dL, 4 weeks of real-time CGM use restored the epinephrine response and improved adrenergic symptoms during a hyperinsulinemic hypoglycemic clamp in a small group of adolescents with type 1 diabetes and hypoglycemia unawareness (122).

The artificial pancreas, which couples a CGM to an insulin pump through sophisticated predictive algorithms, holds out the promise of completely eliminating hypoglycemia. Several internationally collaborative groups are working on various approaches to the artificial pancreas. The first step in this direction is the low-glucose suspend pump that is available in Europe and currently in clinical trials in the U.S. This device shuts off insulin delivery for up to 2 h once the interstitial glucose concentration reaches a preset threshold and reduces the duration of nocturnal hypoglycemia (123).

## Clinical surveillance

Clinicians and educators must assess the risk of hypoglycemia at every visit with patients treated with insulin and insulin secretagogues. An efficient way to begin this assessment might be to have the patient complete the questionnaire shown in Table 2 while in the waiting room. Review of the completed questionnaire will help the clinician learn how often the patient is experiencing symptomatic and asymptomatic hypoglycemia, ensure the patient is aware of how to appropriately treat hypoglycemia, and remind both parties of the risks associated with driving while hypoglycemic. To ensure that hypoglycemia has been adequately addressed during a visit, providers may want to use the Hypoglycemia Provider Checklist (Table 3).

Table 2. Hypoglycemia Patient Questionnaire.

Table 3. Hypoglycemia Provider Checklist.

A careful review of the glucose log collected by the patient should also be done at each visit. The date, approximate time, and circumstances surrounding recent episodes of hypoglycemia should be noted, together with information regarding the awareness of the warning symptoms of hypoglycemia. A reliable history of impaired autonomic responses (tremulousness, sweating, palpitations, and hunger) during hypoglycemia may be the most practical approach to making the diagnosis of hypoglycemia unawareness. If symptoms are absent or if frequent episodes of recurrent hypoglycemia occur within hours to days of each other, it is likely that the patient has HAAF. Other historical clues, such as experiencing more than one episode of severe hypoglycemia that required the assistance of another person over the preceding year, or a family's report that they are recognizing more frequent episodes of hypoglycemia, may also suggest that the patient has developed hypoglycemia unawareness. A self-reported history of impaired or absent perception of autonomic symptoms during hypoglycemia correlates strongly with laboratory confirmation of hypoglycemia unawareness (33,121,124,125).

# What are the current knowledge gaps in our understanding of hypoglycemia, and what research is necessary to fill these gaps?

Since the publication of the previous report from the Workgroup on Hypoglycemia in 2005 (1), much has been learned about the impact of hypoglycemia on patient outcomes. However, hypoglycemia continues to cause considerable morbidity and even mortality in patients with diabetes.
If patients are to benefit from the reduction in microvascular complications that follows from achieving near-normal levels of glycemia, additional research will be necessary to prevent them from experiencing hypoglycemia and HAAF. First, new surveillance methods that provide consistent ways of reporting hypoglycemia must be developed so that the impact of any intervention to prevent and treat hypoglycemia can be fully assessed. Greater attention must be focused on understanding which patients are most at risk for hypoglycemia and on developing new educational strategies that effectively reduce the number of episodes experienced by at-risk patients. New therapies that do not cause hypoglycemia, including an artificial pancreas, need to be developed for both type 1 and type 2 diabetes. The technologies used to monitor blood glucose must become more accurate, more reliable, easier to use, and less expensive. The mechanisms that render patients unable to increase glucagon secretion in response to hypoglycemia and that are responsible for the development of HAAF must be identified so that strategies can be developed to ensure that patients always experience early warning signs of impending neuroglycopenia. The impact of hypoglycemia on short-term outcomes such as mortality and long-term outcomes such as cognitive dysfunction needs to be better defined, and the mechanisms for these associations need to be understood. Focused research in these priority areas will address our knowledge gaps about hypoglycemia and ultimately reduce the impact of iatrogenic hypoglycemia on patients with diabetes.

The workgroup meeting was supported by educational grants to the American Diabetes Association from Lilly USA, LLC and Novo Nordisk and sponsorship to the American Diabetes Association from Sanofi. The sponsors had no input into the development of or content of the report.
No other potential conflicts of interest relevant to this article were reported.

The workgroup members thank Stephanie Kutler and Meredith Dyer of The Endocrine Society and Sue Kirkman, MD, of the American Diabetes Association for staff support.

# References

author: David S Pisetsky
date: 2013
institute: 1Duke University Medical Center, 151G, Durham VA Medical Center, Durham, NC 27705, USA
references:
title: The Choosing Wisely initiative: Does it have your back?

# The Choosing Wisely initiative: Does it have your back?

As a subspecialty, rheumatology demands knowledge, intuition, and judgment to go along with a broad appreciation of the nuances and mysteries of internal medicine. Creativity is also part of the practice, since many conditions in the purview of rheumatology lack established therapies that have received US Food and Drug Administration approval or testing in a controlled trial. Therefore, rheumatologists must create treatment plans, often on the fly (or by the seat of their pants), producing a succession of n-of-1 trials in their practices. As if their current attributes were not enough, rheumatologists, like other providers, must now have wisdom and practice not only smartly but wisely. Such is the direction of Choosing Wisely.

For those of you who are not familiar with Choosing Wisely, this initiative of the American Board of Internal Medicine Foundation is a very timely and important effort to help reduce the cost of health care by encouraging providers to adopt more judicious and evidence-based ways to diagnose and treat illness. To meet this goal, professional medical organizations and societies are identifying situations in which cost reduction can be achieved by avoiding the overuse of low-yield (but expensive) diagnostic tests or the prescription of therapies of marginal or questionable efficacy.

Although the goal of Choosing Wisely is highly laudable, many of the items targeted for reduction or elimination at this time are not that controversial or radical, nor will they require a major change in the practice patterns of most providers. In common parlance, these items are low-hanging fruit. As a first step in cost containment, the Choosing Wisely initiative has stayed away (probably wisely) from the big drivers of costs in the US, like the price of drugs, fragmentation of care, and lack of adequate preventative services.
Certainly, in the US, the problem of uninsured patients continues to be vexing and could require the wisdom of Solomon as state governments go their various ways in response to Obamacare.

By its nature, the Choosing Wisely initiative is public and political as well as medical, designed to show the citizenry, including the government and payers, that physicians are willing to consider cost and partner with patients to constrain spending. This spending seems forever to increase to levels that everyone calls unsustainable, although, so far, the body politic has not screamed 'Enough is enough' and meant it. For its part, the American College of Rheumatology followed a rigorous and thoughtful approach to designate practices for the Choosing Wisely campaign (Table 1). Because the recommendations came from a Delphi process of organization members, the choices are informed and reasonable. Having never ordered Lyme serologies or used magnetic resonance imaging to stage rheumatoid arthritis, I may have already demonstrated my wisdom.

Table 1. The American College of Rheumatology's top 5 list for choosing wisely:

1. Do not test anti-nuclear antibody (ANA) subserologies without a positive ANA and clinical suspicion of immune-mediated disease.
2. Do not test for Lyme disease as a cause of musculoskeletal symptoms without an exposure history and appropriate examination findings.
3. Do not perform magnetic resonance imaging of the peripheral joints to routinely monitor inflammatory arthritis.
4. Do not prescribe biologic agents for rheumatoid arthritis before a trial of methotrexate (or other conventional non-biologic disease-modifying anti-rheumatic drug).
5. Do not routinely repeat dual x-ray absorptiometry scans more often than once every 2 years.

Reproduced with permission from John Wiley & Sons \[8\].

In the case of musculoskeletal disease, however, other groups have weighed in. The American College of Physicians says, 'Don't obtain imaging studies in patients with non-specific low back pain', whereas the American Academy of Family Physicians says, 'Don't do imaging for low back pain within the first six weeks, unless red flags are present'. The recommendations on back pain may not be easy to follow, because in the real world, patients with back pain can present with confusing signs and symptoms and often cannot date the onset of their problems. Furthermore, red flags are fortunately rare.

As I have learned many times, I do not know what nonspecific back pain is or how to differentiate it from the pain of a spondyloarthropathy, which, after all, can be pretty 'non-specific'. I am not alone in my uncertainty. I know a very experienced and savvy rheumatologist who missed the diagnosis of ankylosing spondylitis in himself, thinking that the nagging ache in the lower spine resulted from the pounding of too much tennis and jogging. He is now on a tumor necrosis factor blocker and doing smashingly, wondering how he had missed the diagnosis for over five years.

Our unit had its discussion of Choosing Wisely in a dreary conference room with three large bookcases filled with outdated editions of major texts. Our unit also includes clinical immunology and allergy, and one of our faculty members questioned the American Academy of Allergy, Asthma and Immunology recommendation about the use of spirometry in diagnosing and managing asthma.
Similarly, I raised concerns about logistical issues in waiting until a fluorescent anti-nuclear antibody test is positive before ordering specific anti-nuclear antibodies. (Does that mean another clinic visit, which can be associated with a hefty facility fee?) Nevertheless, we all got on board and pledged to Choose Wisely in the future.

Although I support the Choosing Wisely initiative, I wish that the recommendations did not have such an Old Testament quality and embody, like the Ten Commandments, so many *don'ts*. I did find the words 'refrain' and 'avoid' while searching the lists on the Choosing Wisely website, but a few *do's* would have been nice. Also, I think that more recommendations on specific treatments to use and not eschew would have been very helpful given the enormous range of drugs now on the market.

As many rheumatologists know, back pain presents a conundrum not only because diagnosis is difficult but because treatment is difficult. For many patients, acute and chronic back pain can be severe, debilitating, and disabling. Often, such pain will not respond adequately to analgesics such as acetaminophen or a non-steroidal anti-inflammatory agent or to measures such as heat or massage. Opioids then become an option, but a decision to prescribe opioids may become problematic in the framework of Choosing Wisely, which suggests limiting diagnostic evaluations for back pain, at least initially.

Because of the many societal as well as medical issues associated with controlled substances, physicians like to pursue at least some diagnostic testing prior to prescribing opioids, especially in individuals at high risk for drug side effects (for example, older individuals who could fall) or for complications of dependency and addiction. Recognizing the huge problem of opioid abuse in the population and the pressures on providers to prescribe these agents, the Federation of State Medical Boards has issued recommendations on prescribing controlled substances for pain; the recommendations emphasize the importance of a thorough patient evaluation in determining the need for a controlled substance. I wonder whether a history and physical exam are enough to make this judgment in the absence of some kind of imaging of the back.

For Choosing Wisely to be successful, the participation of patients is critical, since they must partner with providers to accept a different kind of medical care at a time of cost containment and evolving expectations. Patients with pain, however, represent a special group because of the burden of their symptoms and the pressing need for relief. Unless patients feel satisfied, it is not unusual for them to try a variety of interventions in quick succession, going from an orthopedist to a rheumatologist to a chiropractor to an acupuncturist. Indeed, studies suggest that 1 in 4 patients with pain will seek a new provider at least three times because they feel that their care is not optimal, meaning that their pain has not been adequately relieved.

In such a potentially conflicted setting, the patient's perception of the provider is critical. In my experience, patients appreciate and accept treatment recommendations when time is spent on the visit, complaints are taken seriously, and, yes, diagnostic testing is performed. Sometimes imaging is performed for no other reason than to allay patient worry and reduce fear and anxiety about the meaning of a surge of excruciating pain.
While the yield on x-rays may be low, an imaging study may be important to signal a commitment to find the diagnosis and develop a treatment plan that is based on the best evidence. Not infrequently, this plan will consider the risks and benefits of opioids if prescribed and, importantly, address psychological and life-style issues that may lead to abuse.

At present, many of the items in Choosing Wisely relate to diagnosis, and imaging is at the top of the list because of its expense. Clearly, there are many ways to reduce cost, whether by limiting the number of tests or reducing the cost of each. To me, both options should be on the table as this initiative goes forward. Certainly, choices should be the subject of intensive research that can fall under the rubric of comparative effectiveness. Given the number of skilled investigators, registries, and outcome assessment tools available for innovative real-world studies, recommendations on choices can be based not only on wisdom but on science.

Choosing Wisely is an excellent idea and the first attempts are in the right direction. My hope is that, on the next round, we confront some real choices - hard ones - and figure out what wisdom really is when it comes to the practice of medicine.

# Competing interests

The author declares that they have no competing interests.

abstract: U87MG is a commonly studied grade IV glioma cell line that has been analyzed in at least 1,700 publications over four decades. In order to comprehensively characterize the genome of this cell line and to serve as a model of broad cancer genome sequencing, we have generated greater than 30× genomic sequence coverage using a novel 50-base mate-paired strategy with a 1.4kb mean insert library. A total of 1,014,984,286 mate-end and 120,691,623 single-end two-base encoded reads were generated from five slides. All data were aligned using a custom designed tool called BFAST, allowing optimal color space read alignment and accurate identification of DNA variants. The aligned sequence reads and mate-pair information identified 35 interchromosomal translocation events, 1,315 structural variations (\>100 bp), 191,743 small (\<21 bp) insertions and deletions (indels), and 2,384,470 single nucleotide variations (SNVs). Among these observations, the known homozygous mutation in *PTEN* was robustly identified, and genes involved in cell adhesion were overrepresented in the mutated gene list. Data were compared to 219,187 heterozygous single nucleotide polymorphisms assayed by Illumina 1M Duo genotyping array to assess accuracy: 93.83% of all SNPs were reliably detected at filtering thresholds that yield greater than 99.99% sequence accuracy. Protein coding sequences were disrupted predominantly in this cancer cell line due to small indels, large deletions, and translocations.
In total, 512 genes were homozygously mutated, including 154 by SNVs, 178 by small indels, 145 by large microdeletions, and 35 by interchromosomal translocations, revealing a highly mutated cell line genome. Of the small homozygously mutated variants, 8 SNVs and 99 indels were novel events not present in dbSNP. These data demonstrate that routine generation of broad cancer genome sequence is possible outside of genome centers. The sequence analysis of U87MG provides an unparalleled level of mutational resolution compared to any cell line to date.
author: Michael James Clark; Nils Homer; Brian D. O'Connor; Zugen Chen; Ascia Eskin; Hane Lee; Barry Merriman; Stanley F. Nelson
date: 2010-01
institute: 1Department of Human Genetics, University of California Los Angeles, Los Angeles, California, United States of America; 2Department of Computer Science, University of California Los Angeles, Los Angeles, California, United States of America; University of Washington, United States of America
references:
title: U87MG Decoded: The Genomic Sequence of a Cytogenetically Aberrant Human Cancer Cell Line

# Introduction

Grade IV glioma, also called glioblastoma multiforme (GBM), is the most common primary malignant brain tumor, with about 16,000 new diagnoses each year in the United States. While the number of cases is relatively small, comprising only 1.35% of primary malignant cancers in the US \[1\], GBMs have a one-year survival rate of only 29.6%, making GBM one of the most deadly types of cancer \[2\]. Recent clinical studies demonstrate improved survival with a combination of radiation and Temozolomide chemotherapy, but median survival time for GBM patients who receive therapy is only 15 months \[3\]. Due to its highly aggressive nature and poor therapeutic options, understanding the genetic etiology of GBM is of great interest; GBM has therefore been selected as one of the three initial cancer types to be thoroughly studied in the TCGA program \[4\].

To that end, numerous cell line models of GBM have been established and used in vast numbers of studies over the years. It is well recognized that cell line models of human disorders, especially cancers, are an important resource. While these cell lines are the basis of substantial biological insight, experiments are currently performed in the absence of genome-wide mutational status, as no cell line that models a human disease has yet had its genome fully sequenced. Here, we have sequenced the genome of U87MG, a long established cell line derived from a human grade IV glioma and used in over 1,700 publications \[5\]. A wide range of biological information is known about this cell line. The U87MG cell line is known to have a highly aberrant genomic structure based on karyotyping, SKY \[6\], and FISH \[7\]. However, these methods neither provide the resolution required to visualize the precise breakpoint of a translocation event, nor are they generally capable of identifying genomic microdeletions (deletions on the order of a megabase or less in size) in a whole genome survey of structural variation. SNP genotyping microarrays can be used to detect regions of structural variation in the forms of loss of heterozygosity (LOH) and copy number (CN) based on probe intensity, but do not reveal chromosomal joins. To assess the genomic stability of U87MG, the genome was genotyped by Illumina Human 1M-Duo BeadChip microarray.
Although our U87MG line has been cultured independently for several years, its regions of LOH and CN state matched exactly the data retrieved from the Sanger COSMIC database for U87MG \[8\], which had been assayed on an Affymetrix Genome-Wide Human SNP Array 6.0. This suggests that although U87MG bears a large number of large-scale chromosomal aberrations, it has been relatively stable for years and is not rapidly changing. It also suggests that prior work on U87MG may be reinterpreted based on the whole genome sequence data presented here.

The first draft of the consensus sequence of the human genome was reported in 2001 \[9\],\[10\]. The first individual human diploid sequence was determined using capillary-based Sanger sequencing \[11\]. Since then, a few additional diploid human genomes have been published, utilizing a variety of massively parallel sequencing techniques and achieving varying degrees of coverage, variant discovery, and quality, typically costing well over $200,000 and several machine months of operation \[12\]–\[16\]. For the sequencing of U87MG, we utilized ABI SOLiD technology, which uses a ligation-based assay with two-base color-encoded oligonucleotides and has been demonstrated to allow highly accurate single nucleotide variant (SNV) and insertion/deletion (indel) detection \[17\]. Additionally, long mate-paired genomic libraries with a mean insert size of 1–2kb allowed higher clone coverage of the genome, which improved our ability to identify genomic structural variations such as interchromosomal translocations and large deletions. While longer insert sizes would improve resolution of some structural variants, during genomic shearing the highest density of large fragments occurs at 1.5kb, allowing a sufficiently complex library to be generated from only 10 micrograms of genomic DNA while still being well powered to identify structural variations. Here, we demonstrate that aligning the two-base color-encoded data with BFAST software and decoding during alignment allows for highly sensitive detection of indels, which have in the past been difficult to detect by short read massively parallel sequencing.

For cancer sequencing, it is important to assess not only SNVs but also indels, structural variations, and translocations, and it is preferable to extract this information from a common assay platform. A major characteristic of the U87MG cell line that differentiates it from the samples used in other whole genome sequencing projects published thus far is its highly aberrant genomic structure. Due to its heavily rearranged state, we thoroughly and accurately assessed each of these major classes of mutations and demonstrated that small indels, large microdeletions, and interchromosomal translocations are actually the major categories of mutations that affect known genes in this cancer cell line. These analyses provide a model for other genome sequencing projects outside major genome centers of how to both thoroughly sequence and assess the mutational state of whole genomes.

# Results

## Data Production

From ten micrograms of input genomic DNA, we performed two and a half full sequencing runs on the ABI SOLiD Sequencing System, for a total of five full slides of data \[17\]. Utilizing the ABI long mate-pair protocol, we produced 1,014,984,286 raw 50bp mate-paired reads (101.5Gb).
In some cases the bead was recognized by the imaging software for only one read, thereby producing an additional 120,691,623 single-end reads (6.0Gb). In aggregate, we generated a total of 107.5Gb of raw data (Table 1).

###### Table 1. Genome sequencing summary.

| | |
|:---|:--:|
| **Sequencing Libraries** | 1 |
| **SOLiD Runs (Slides)** | 2.5 (5) |
| **Strategy** | 2×50 |
| **Mate-paired reads passing quality filter (total bases)** | 1,014,984,286 (101.5Gb) |
| **Single-end reads passing quality filter (total bases)** | 120,691,623 (6.0Gb) |
| **Mate-paired reads uniquely aligned by BFAST (bases)** | 390,604,184 (39.06Gb) |
| **Unpaired reads uniquely aligned by BFAST (bases)** | 266,635,829 (13.33Gb) |
| **Single-end reads uniquely aligned by BFAST (bases)** | 62,336,824 (3.12Gb) |
| **Total bases uniquely aligned by BFAST** | 55.51Gb |

We also performed an exon capture approach designed to sequence the exons of 5,253 genes (10.7Mb) annotated in the Wellcome Trust Sanger Institute Catalogue of Somatic Mutations in Cancer (COSMIC) V38 \[8\], Cancer Gene Census, Cancer Genome Project Planned Studies, and The Cancer Genome Atlas (TCGA) \[4\] GBM gene list, using a custom-created Agilent array. This approach used the Illumina GAII sequencing system \[18\] to sequence captured DNA fragments using a paired end sequencing protocol. This resulted in 9,948,782 raw 76bp paired end reads (1.51Gb) and a mean base coverage of 29.5×. These reads were used to calculate concordance rates with the larger whole genome sequence dataset.

The Blat-like Fast Accurate Search Tool (BFAST) \[19\] version 0.5.3 was used to align 107.5Gb of raw color space reads to the color space conversion of the human genome assembly hg18 from the University of California, Santa Cruz (based on the March 2006 NCBI build 36.1). Duplicate reads, typically arising from the same initial PCR fragment during genomic library construction, were inevitable and accounted for 16.4% of the total aligned data. These were removed using the alignment filtering utility in the DNAA package. A total of 390,604,184 paired end reads (39.06Gb), 266,635,829 unpaired reads (13.33Gb), and 62,336,824 single end reads (3.12Gb) were successfully mapped to a unique location in the reference genome with high confidence, for a total of 55.51Gb of aligned sequence (Table 1). For the exon capture dataset, we uniquely aligned 8,142,874 paired end reads (1.2Gb) and 1,097,000 unpaired reads (83Mb), for a total of 1.32Gb of raw aligned sequence (Table 2). Using the ABI SOLiD reads, we identified small insertions and deletions (indels), single nucleotide variants (SNVs), and structural variants such as large-scale microdeletions and translocation events.
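Because SOLiD reads are reported as two-base color calls rather than bases, alignment and decoding must handle color space explicitly (BFAST aligns natively in color space, as noted above). The following is a minimal sketch of the standard SOLiD dinucleotide-to-color encoding, written for illustration only and not taken from the BFAST codebase.

```python
# Minimal sketch of the standard SOLiD two-base (color space) encoding.
# Each color is the XOR of 2-bit base codes, so decoding a color read
# requires a known leading base; aligning natively in color space (as
# BFAST does) avoids propagating a single miscalled color through a read.

BITS = {"A": 0, "C": 1, "G": 2, "T": 3}
BASES = {v: k for k, v in BITS.items()}

def encode_colors(seq: str) -> list[int]:
    """Encode a base sequence into its color-space representation."""
    return [BITS[a] ^ BITS[b] for a, b in zip(seq, seq[1:])]

def decode_colors(first_base: str, colors: list[int]) -> str:
    """Decode colors back to bases, given the known first base."""
    bases = [first_base]
    for c in colors:
        bases.append(BASES[BITS[bases[-1]] ^ c])
    return "".join(bases)

read = "ATGGCA"
colors = encode_colors(read)             # [3, 1, 0, 3, 1]
assert decode_colors("A", colors) == read
```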
The exon capture Solexa reads were used to validate SNVs identified in the SOLiD sequencing.

###### Table 2. Exon capture sequencing summary.

| | |
|:---|:--:|
| **Sequencing Libraries** | 1 |
| **Illumina Runs (Lanes)** | 1/8 (1) |
| **Strategy** | 2×76 |
| **Number of bases targeted** | 10,752,923 |
| **Mate-paired reads passing quality filter (total bases)** | 9,948,782 (1.51Gb) |
| **Mate-paired reads uniquely aligned by BFAST (bases)** | 8,142,874 (1.2Gb) |
| **Unpaired reads uniquely aligned by BFAST (bases)** | 1,097,000 (83Mb) |
| **Total bases uniquely aligned by BFAST** | 1.32Gb |
| **Total targeted bases sequenced** | 317,017,503 |
| **Mean coverage within targeted bases** | 29.5× |

The overall pattern of base sequence coverage from the shotgun reads changes across the genome and, as expected, is highly concordant with the copy number state as determined by Illumina 1M Duo and Affymetrix 6.0 SNP analysis (Figure 1). Regions of two normal copies, such as chromosome 3, showed even base sequence coverage across their entire length (12.4 reads/base, excluding centromeric and telomeric regions, which are not represented accurately in hg18). Meanwhile, regions with a one-copy state according to the SNP chip, such as the distal q-arm of chr11 and the distal p-arm of chr6, show about half the base sequence coverage (7.2 reads/base) of a predicted two-copy region. Likewise, predicted three-copy state regions, such as the distal q-arm of chr13, show about 1.5 times the base sequence coverage of a predicted two-copy region. A complete deletion spanning the region on chromosome 9 that includes the CDKN2A gene is also seen in both the SNP chip data and the ABI SOLiD base sequence coverage. These data show at a very large scale that sequence placement is generally correct and support the copy number state calls from the array-based data.

## Variant Discovery

Single nucleotide variants (SNVs) and small insertions and deletions ranging from 1 to 20 bases (indels) were identified from the alignment data using the MAQ consensus model \[20\] as implemented in the SAMtools software suite \[21\]. SAMtools produced variant calls, zygosity predictions, and a Phred-scaled probability that the consensus is identical to the reference. To improve the reliability of our variant calls, variants were required to have a Phred score of at least 10 and further needed to be observed greater than or equal to 4 but fewer than 60 times, and at least once on each strand.

In total, we identified 2,384,470 SNVs meeting our filtering criteria. Of these, 2,140,848 (89.8%) were identified as exact matches to entries in dbSNP 129 \[22\]. Exact matches had both the variant and observed alleles in the dbSNP entry, allowing for the discovery of novel alleles at known SNP locations. In total, 243,622 SNVs (10.2%) were identified as novel events not previously recorded in dbSNP 129. This rate of novel variant discovery is consistent with other normal whole human genome sequences of European ancestry relative to dbSNP \[12\]. These SNVs were further characterized based on zygosity predictions from the MAQ consensus model, separating SNVs into homozygous or heterozygous categories (Table 3).
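The site-level filters described above (consensus quality of Phred 10 or greater, between 4 and 60 observations, and at least one observation on each strand) amount to a simple predicate over per-site summaries. A minimal sketch follows; the record fields are hypothetical stand-ins for parsed SAMtools output.

```python
# Minimal sketch of the variant filtering criteria described above.
# The VariantCall fields are hypothetical stand-ins for parsed SAMtools output.
from dataclasses import dataclass

@dataclass
class VariantCall:
    phred: int     # Phred-scaled consensus quality
    fwd_obs: int   # variant observations on the forward strand
    rev_obs: int   # variant observations on the reverse strand

def passes_filters(v: VariantCall) -> bool:
    """Phred >= 10; observed >= 4 and < 60 times; seen on both strands."""
    total = v.fwd_obs + v.rev_obs
    return (v.phred >= 10 and 4 <= total < 60
            and v.fwd_obs >= 1 and v.rev_obs >= 1)

print(passes_filters(VariantCall(phred=35, fwd_obs=7, rev_obs=5)))   # True
print(passes_filters(VariantCall(phred=35, fwd_obs=12, rev_obs=0)))  # False
```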
The observed diversity value for SNVs (θ~SNV~, number of heterozygous SNVs/number of base pairs) across autosomal chromosomes was 4.4×10^−4^, which is generally consistent with the normal human genome variation rate.

###### Table 3. SNV filtering and quantification.

| SNV Classification | Total | In dbSNP 129 | Not in dbSNP 129 |
|:---|:--:|:--:|:--:|
| **Variants meeting filter criteria** | 2,384,470 | 2,140,848 | 243,622 |
| **Synonymous or not coding region** | 2,375,812 | 2,133,226 | 242,586 |
| **Coding region & non-synonymous** | 8,658 | 7,622 | 1,036 |
| **Splice site mutations** | 151 | 132 | 19 |
| **Heterozygous splice site mutations** | 62 | 47 | 15 |
| **Homozygous splice site mutations** | 89 | 85 | 4 |
| **Premature stop** | 134 | 93 | 41 |
| **Heterozygous premature stop** | 82 | 48 | 34 |
| **Homozygous premature stop** | 52 | 45 | 7 |
| **Non-synonymous** | 8,538 | 7,518 | 1,020 |
| **Heterozygous non-synonymous** | 4,005 | 3,134 | 871 |
| **Homozygous non-synonymous** | 4,533 | 4,384 | 149 |

For small (\<21bp) insertions and deletions, 191,743 events were detected, with 116,964 not previously documented in dbSNP 129. The same criteria as used for SNVs were used for determining if an indel was novel, and indels were further classified as homozygous or heterozygous using the SAMtools variant caller (Table 4). The observed diversity value (θ~indel~, number of heterozygous indels/number of base pairs) across autosomal chromosomes was 0.38×10^−4^.

###### Table 4. Indel filtering and quantification.

| Indel Classification | Total | In dbSNP 129 | Not in dbSNP 129 |
|:---|:--:|:--:|:--:|
| **Variants meeting filter criteria** | 191,743 | 74,779 | 116,964 |
| **Synonymous or not coding region** | 191,359 | 74,643 | 116,716 |
| **Coding region & non-synonymous** | 384 | 136 | 248 |
| **Splice site mutations** | 84 | 34 | 50 |
| **Heterozygous splice site mutations** | 20 | 7 | 13 |
| **Homozygous splice site mutations** | 64 | 27 | 37 |
| **Heterozygous premature stop** | 91 | 15 | 76 |
| **Homozygous premature stop** | 94 | 40 | 54 |
| **Heterozygous non-synonymous** | 168 | 45 | 123 |
| **Homozygous non-synonymous** | 193 | 86 | 107 |
| **Heterozygous frameshift** | 141 | 33 | 108 |
| **Homozygous frameshift** | 179 | 80 | 99 |
| **Heterozygous in-frame indels** | 26 | 11 | 15 |
| **Homozygous in-frame indels** | 14 | 5 | 9 |

A subset of 38 variants meeting genome-wide filtering criteria, including a 20-base deletion, was tested by PCR and Sanger sequencing, with 34 being validated. In summary, 85.2% of SNVs (23/27) and 100% of small insertions (3/3), deletions (4/4), translocations (3/3), and microdeletions (1/1) were validated in this manner (Table S1). While this is a small sample, it demonstrates an overall low false positive rate.

## Indel Size Distribution

The size distribution of indels identified in U87MG is generally consistent with previous studies of coding and non-coding indel sizes in non-cancer samples \[23\]–\[25\]. Small deletion sizes ranged from 1 to 20 bases, and their distribution approximates a power law distribution, in concordance with previous findings \[23\] (Figure 2A).
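Size-spectrum summaries like these reduce to tabulating event lengths and counting frame-preserving (multiple-of-3) events, as discussed further below. A minimal sketch, with an illustrative size list rather than the actual U87MG call set:

```python
# Minimal sketch of the indel-size summaries used in this section.
# The size list is illustrative, not the actual U87MG call set.
from collections import Counter

def summarize_indels(sizes: list[int]) -> tuple[Counter, float]:
    """Tabulate the size spectrum and the frame-preserving fraction."""
    spectrum = Counter(sizes)
    in_frame = sum(1 for n in sizes if n % 3 == 0) / len(sizes)
    return spectrum, in_frame

deletion_sizes = [1, 1, 1, 1, 2, 2, 3, 3, 4, 4, 4, 6, 9, 20]
spectrum, in_frame = summarize_indels(deletion_sizes)
print(spectrum.most_common(3))  # [(1, 4), (4, 3), (2, 2)]
print(f"{in_frame:.1%}")        # 28.6% of these events preserve the frame
```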
There is a small deviation from this power law: an excess of 4-base indels in U87MG's non-coding regions (Figure 2A, red bars) \[11\],\[26\].

A similar trend is seen with insertions in non-coding sequence, with a maximum observed insertion size of 17 bases (Figure 2B, red bars). The maximum insertion size observed is less than the maximum deletion size because it is easier to align reads spanning long deletions, which need only match the flanking sequence, than reads spanning long insertions, which must contain the entire inserted sequence within the read. Some small insertions and deletions are likely to be larger than the upper limits of 17 and 20 bases actually observed, but the 50-base read length limits the power to align such reads directly.

In coding regions, there is a bias towards events that are multiples of 3 bases in length, which maintain the reading frame despite the variant alleles, suggesting that many of these are polymorphisms (Figure 2A, deletions; Figure 2B, insertions; blue bars). In non-coding regions, only 10.8% of indels are a multiple of 3 bases in size, while in coding regions, 27.0% are 3, 6, 9, 12 or 15 bases in size. This trend is expected based on past observations of non-cancer samples \[11\],\[26\].

## Nucleotide Substitution Frequencies

Observed SNV base substitution patterns were consistent with common mutational phenomena both in coding sequences and genome wide. As expected, the predominant nucleotide substitution seen in SNVs is a transition, changing purine for purine (A\<-\>G) or pyrimidine for pyrimidine (C\<-\>T). Previous studies have observed that two out of every three SNPs are transitions as opposed to transversions \[27\]; we observed that 67.4% of our SNVs were transitions, while 32.6% were transversions, a 2.07∶1 ratio (Figure 3). However, in coding regions, there appears to be an increase in C-\>T/G-\>A transitions and a decrease in T-\>C/A-\>G transitions, whereas genome-wide these transitions were approximately equivalent.

## Estimation of Genomic Coverage

To assess the coverage depth of the U87MG genome sequence, we followed Ley et al. \[13\] and required detection of both alleles at most positions in the genome. We utilized the Illumina 1M-Duo BeadChip to find reliably sequenced positions in the genome, with the understanding that this may bias towards the more unique regions of the genome. In order to best use the SNP genotyping array data, we included only those regions that are diploid based on a normal frequency of heterozygous calls and on copy number assessment. This effectively permitted us to use the heterozygous calls to assess the accuracy of the short read data for variant calling (Figure 1). Only SNPs both observed to be heterozygous and called 'high quality' by the Illumina genotyping chip were used, which provided a total of 219,187 high quality heterozygous SNPs for comparison. Of these, 99.71% were sequenced at least once. After applying the variant detection filtering criteria (see Materials and Methods) and assessing concordance between the sequence calls and genotyping array calls, 93.71% of the genome was sequenced at sufficient depth to call both alleles of the diploid genome. This is roughly equivalent to the likelihood of sufficient sampling of the whole genome when repeats and segmental duplications are excluded.

Notably, a variant allele was observed at every position called heterozygous by the SNP chip, while a reference allele was observed at 201,414 (97.94%) of them. In other words, at the remaining 2.06% of heterozygous positions only the variant allele was detected, so the SNV detection algorithm would miscall those sites as homozygous variant. A minimal version of this concordance check is sketched below.
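The check itself reduces to asking, for each array-heterozygous position, whether the filtered sequence data contain both alleles. A minimal sketch, with hypothetical inputs standing in for the BeadChip calls and the pileup observations:

```python
# Fraction of array-heterozygous sites at which sequencing saw both alleles.
def diploid_concordance(array_het_sites, observed_alleles):
    """array_het_sites: {position: (allele_a, allele_b)} from the SNP chip.
    observed_alleles: {position: set of alleles seen in filtered reads}."""
    both_seen = sum(
        1 for pos, (a, b) in array_het_sites.items()
        if {a, b} <= observed_alleles.get(pos, set())
    )
    return both_seen / len(array_het_sites)

array_het = {1001: ("A", "G"), 2054: ("C", "T"), 3300: ("T", "G")}
seen = {1001: {"A", "G"}, 2054: {"T"}, 3300: {"T", "G"}}
print(diploid_concordance(array_het, seen))  # 0.67: both alleles at 2 of 3 sites
```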
Filtering for quality causes a bias toward identifying SNVs at sites that have higher coverage. That said, after SNV quality filtering, diploid coverage of the cytogenetically normal portions of the genome was 10.85× for each allele, which is clearly adequate for calling over 90% of the base variant positions on each allele at high accuracy.

Because the positions of the genome included on SNP arrays are not a random sampling of the genome, we also assessed mapping coverage genome-wide. Of all bases in the haploid reference genome, 78.9% were covered by at least one reliably placed read. Of that portion of the genome, 91.9% of all bases were effectively sequenced, based on passing the variant calling filters (Phred ≥10, ≥4× coverage, \<60× coverage). Thus, a total of 72.5% of the whole genome was sequenced, including repeats and duplicated regions, which is typical of short sequence shotgun approaches.

## Exon Capture Cross-Validation of Sequence Variants

10.9Mb of genomic sequence, consisting of the amino-acid-encoding exons of 5,235 genes, was targeted and sequenced to a mean coverage of 30× using the Illumina GAII sequencer. Given the greater variability of coverage in the capture data, only a subset of these bases (8.5Mb) was evaluable to determine the false positive variant detection rate from the complete genomic sequence data. This region contained 1,621 SNPs present in dbSNP 129. Within the 8.5Mb of common and well-covered sequence in the genomic sequence data and the capture sequence, there were 1,780 SNVs called from the genomic sequence. The same non-reference allele was concordantly observed at 1,631 positions within the capture data. At 149 positions, the non-reference allele was not observed in the capture data, but the reference allele was detected. However, the mean coverage at these 149 positions was significantly lower than that of the other 1,631 positions (p = 0.0003), suggesting that the non-reference allele was not adequately covered and is undercalled in the capture data. Moreover, of the 1,621 dbSNPs in the region, the capture adequately covered only 1,515. In these data there was a bias for the pull-down data to under-observe the non-reference allele (Figure S1). At the 106 dbSNP positions detected only in the ABI whole genome sequence dataset, the calls all matched the alternate allele reported in dbSNP. In theory, if these were errors, then the non-reference base calls should be randomly distributed among the three alternate base calls. Thus, no discrepancies are reliably identified within the dbSNP overlap when a variant was called in the ABI genomic sequence data.

There were a total of 100 novel SNVs detected in the ABI genomic sequence dataset that were also very well evaluated in the Illumina pull-down data, with at least 20 high quality Illumina reads, such that the ABI sequence could be well validated. Of these 100 discovered variants in the genomic sequence dataset, 2 were not observed in the Illumina pull-down sequencing dataset. Thus, over the entire 8.5Mb interval there are 2 unconfirmed variants, for an estimated false positive error rate of roughly 2.4×10^−7^ per base. Alternatively viewed, there were 100 novel SNVs, with a 2% error rate at those novel positions. Thus, the de novo false discovery rate may be as high as 2%; the arithmetic behind these estimates is sketched below.
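The estimates follow directly from the counts reported above; the short computation below reproduces them.

```python
# Worked arithmetic for the false-positive estimates in the text.
unconfirmed = 2              # novel SNVs not seen in the capture data
well_evaluated_novel = 100   # novel SNVs deeply covered in the capture data
interval_bases = 8.5e6       # shared, well-covered interval (8.5Mb)
total_novel_snvs = 243_622   # novel SNVs genome-wide

print(f"per-base error rate: {unconfirmed / interval_bases:.1e}")    # 2.4e-07
fdr = unconfirmed / well_evaluated_novel                              # 0.02
print(f"expected false novel SNVs: {round(fdr * total_novel_snvs)}")  # 4872
```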
Extrapolating to the whole set of 243,622 novel SNVs, we expect up to 4,872 false-positive SNVs. These observations are roughly concordant with a sampling of 37 novel SNVs (not in dbSNP) from the whole genome set selected for testing by Sanger sequencing, of which 34 (92%) were validated.

## Individual Genome Comparison

There are now several publicly available complete genomes sequenced on next generation platforms. We compared the SNVs discovered in U87MG to two of these published genomes: the James D. Watson genome \[12\] and the first Asian genome (YanHuang) \[14\]. Further, we simultaneously compared each of these to dbSNP version 129 \[22\]. Compared with dbSNP, 10.2% of U87MG SNVs, 9.5% of Watson SNVs, and 12.0% of YanHuang SNVs were not present within dbSNP (Figure 4). As U87MG was derived from a patient of Caucasian ancestry, as confirmed by genotyping, it is unsurprising to see a higher overlap with dbSNP for U87MG than for YanHuang. Between the three genomes themselves, 44.7% of U87MG SNVs overlapped with Watson SNVs, while 60.0% were in common with YanHuang SNVs. Only 8.5% of dbSNP SNVs were shared between Watson and U87MG, while 11.3% of them were shared between YanHuang and U87MG. Thus, there is not a substantially higher number of SNVs in the U87MG cancer genome relative to normal genomes.

## Structural Variation Identification

We utilized the predictable insert distance of mate-paired sequence fragments to directly observe structural variations in U87MG. Our target insert size of 1.5kb gave an approximately normal distribution of paired-end insert lengths ranging from 1kb to 2kb, with a median around 1.25kb and a mean around 1.45kb in the actual sequence data (Figure S2). We identified 1,314 large structural variations, including 35 interchromosomal events, 599 complete homozygous deletions (including a large region on chromosome 9 containing CDKN2A/B, which is commonly homozygously deleted in brain cancer), 361 heterozygous deletion events, and 319 other intrachromosomal events (Table 5). The 599 complete microdeletions summed to approximately 5.76Mb of total sequence, while the 361 heterozygous microdeletions summed to 5.36Mb. Most of the microdeletions were under 2kb in total size. Because of the high sequence coverage and mate-pair strategy, each event was supported by an average of 138 mate-pair reads. Mispairing of the mate pairs did occur occasionally due to molecular chimerism in the library fabrication process, but such reads occur at a low frequency (\<1/40 of the reads). Thus, the true rearrangement/deletion events were highly distinct from noise in well-mapped sequences. Interchromosomal events included translocations and large insertion/deletion events where one part of a chromosome was inserted into a different chromosome, sometimes replacing a segment of DNA. Altogether, these structural variations show a highly complex rearrangement of genomic material in this cancer cell line (Figure 5). All identified structural variants are summarized in Table S2.
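The first step of this analysis is simply flagging uniquely mapped pairs whose positions are inconsistent with the expected 1kb–2kb insert envelope. A minimal sketch, using the thresholds described in Materials and Methods (the argument layout is hypothetical):

```python
# Flag mate pairs suggesting a structural variant: ends on different
# chromosomes, or an apparent insert size outside the expected envelope.
EXPECTED_MIN, EXPECTED_MAX = 1_000, 2_000  # from the insert size distribution

def is_aberrant(chrom1: str, pos1: int, chrom2: str, pos2: int) -> bool:
    if chrom1 != chrom2:
        return True  # candidate interchromosomal event
    return not (EXPECTED_MIN <= abs(pos2 - pos1) <= EXPECTED_MAX)

print(is_aberrant("chr2", 100_000, "chr16", 5_000))         # True: translocation
print(is_aberrant("chr9", 21_900_000, "chr9", 21_975_000))  # True: deletion-sized gap
print(is_aberrant("chr3", 50_000, "chr3", 51_400))          # False: normal pair
```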
We note as well that even when breakpoints lie within genome-wide common repeats, there can be sufficient mapping information to reliably identify the translocation breakpoint (Figure S3).

10.1371/journal.pgen.1000832.t005

###### Structural variations detected.

| Type | \# of events | \# that span genes (%) | \# of affected genes |
|:---|:--:|:--:|:--:|
| **Complete deletion** | 599 | 95 (15.9%) | 145 |
| **Heterozygous deletion** | 361 | 58 (16.0%) | 91 |
| **Interchromosomal translocation** | 35 | 32 (91.4%) | 35 |
| **Other intrachromosomal events** | 319 | 146 (45.8%) | 166 |

The thirty-five interchromosomal events often coincided with positions of copy number change based on the average base coverage (Figure 5). Figure 6 shows two interchromosomal events between chromosomes 2 and 16. The events on chromosome 16 are less than 1kb apart, while those on chromosome 2 are about 160kb apart. Based on the average base coverage, there appears to be a loss of genomic material between the event boundaries on their respective chromosomes, shifting from two copies to one. Although we are unable to determine the origin of such an event, it appears that there was an interchromosomal translocation between chromosomes 2 and 16 with a loss of the DNA between the identified regions on each chromosome.

A subset of 3 translocations was confirmed by PCR amplification of the breakpoint-spanning region followed by dideoxy Sanger sequencing (Table S1). Each confirmed the predicted breakpoint to within 100 nucleotides of the correct position. In a subset of cases, unmapped short read fragments could be identified from the shotgun short read data that span the breakpoint and are concordant at base resolution with Sanger sequencing of the PCR-amplified product spanning the breakpoint.

## Genes Affected by Mutations in Coding Sequence

The SNVs and indels identified in U87MG were assessed for their potential to affect protein-coding sequence. We considered variants predicted to be homozygous and to affect the coding sequence of a gene through a frameshift, early termination, intron splice site, or start/stop codon loss mutation as causing a complete loss of that protein. We chose to focus on homozygous null mutations for two major reasons. First, this is an interesting set of genes that we can predict from the whole genome data to be non-functional within this commonly used cell line. Although heterozygous mutations can certainly affect gene products in multiple ways, it is difficult to assess their effect from genomic data alone. Second, by cross-referencing such null mutations with known regions of common mutation in gliomas we can pick out specific candidates that are of interest to the glioma community.

Of the 2,384,470 SNVs and 191,743 small indels in U87MG, a total of 332 genes are predicted to carry loss-of-function, homozygous mutations as a consequence of small variants (Table S3). Of these, 225 genes contained variants matching alleles annotated in dbSNP (version 129), while 107 contained novel variants not observed in dbSNP.

We further divided these homozygous mutant genes by variant type. Of genes mutated by SNVs, 146 contained variants present in dbSNP, while only 8 were knocked out by variants not in dbSNP.
The ratio of known SNPs causing loss-of-function mutations to total known SNPs (146/2,140,848 = 6.82×10^−5^) was about twice the ratio of novel SNVs causing loss-of-function mutations to total novel SNVs (8/243,622 = 3.28×10^−5^; p = 0.04); novel SNVs thus show no enrichment for loss-of-function events. This indicates that many of the possible de novo point mutations may indeed be rare inherited variants made homozygous by chromosomal loss of the normal allele.

In contrast to the trend in SNVs, small indels that homozygously mutated genes were more often novel. There were 79 genes predicted to be homozygously mutated by indel variants reported in dbSNP, while 99 were predicted to be mutated by novel indels. Despite this trend, however, there was not a significant enrichment of deleterious indels among the novel indels (99/116,964 = 8.46×10^−4^) compared to the known indels (79/74,779 = 1.06×10^−3^; p = 0.08). This suggests that the difference in the ratios of novel versus documented SNVs (8 vs. 146) and indels (99 vs. 79) is the result of compositional bias in dbSNP 129, which contains a far greater number of SNPs than indels.

We also assessed whether the structural variants in U87MG were likely to affect genes. Two different criteria were used to determine whether translocations and microdeletions impacted a coding region, either of which is predicted to produce an aberrant or nonfunctional protein. Using the UCSC known gene database, we identified 35 genes affected by interchromosomal translocations, 145 by complete deletions, 91 by heterozygous deletions and 166 by other intrachromosomal events (Table 5).

Interchromosomal translocation events were significantly enriched at positions where they would affect genes, with 32 out of 35 events (91.4%) occurring within 1kb of a gene (p\<0.0001), while only 44.1% of the reference genome is within 1kb of a known gene. In contrast, intrachromosomal events did not display this enrichment, with 145/319 (45.5%) falling within 1kb of a gene (p = 0.67). However, we ran a set of simulations to assess whether microdeletions were enriched for overlapping exons, because we noted that 585 of our 599 complete microdeletions were less than 10kb in length, with a mean size of 1.8kb. We ran 100,000 simulations randomly placing 600 microdeletions of 2kb each and determined how many times a microdeletion spanned an exon. In this way, we demonstrated that complete (homozygous) microdeletions under 10kb in size spanned exons slightly more often than expected by chance, with a simulated p-value of 0.046; a sketch of this simulation appears below. A similar assessment of microdeletions greater than 10kb in size did not find evidence of enrichment. These findings suggest that small microdeletions may preferentially occur within genes as opposed to being randomly distributed across the genome, but the signal is not strong from the available data. Genes affected by structural variations are summarized in Table S4.
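A hedged re-creation of that placement simulation follows. The exon coordinates, genome length and trial count are stand-ins; a real run would use the UCSC exon annotation, 100,000 trials, and an interval index rather than a linear scan.

```python
import random

GENOME_LEN = 3_000_000_000  # approximate haploid genome length
DEL_LEN, N_DELS = 2_000, 600

def spans_exon(start, exons):
    """exons: list of (start, end) pairs; linear scan for illustration."""
    end = start + DEL_LEN
    return any(s < end and start < e for s, e in exons)

def simulated_pvalue(exons, observed_hits, n_trials=1_000, seed=0):
    """Empirical P(random placement yields >= observed_hits exon-spanning deletions)."""
    rng = random.Random(seed)
    extreme = 0
    for _ in range(n_trials):
        hits = sum(spans_exon(rng.randrange(GENOME_LEN - DEL_LEN), exons)
                   for _ in range(N_DELS))
        if hits >= observed_hits:
            extreme += 1
    return extreme / n_trials

toy_exons = [(10_000, 12_000), (5_000_000, 5_001_000)]
print(simulated_pvalue(toy_exons, observed_hits=1))
```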
## Annotation of Relevant Mutated Genes

The annotation tool DAVID was used to further examine the biological significance of the list of likely knockout mutations (including genes affected by SNVs, indels, microdeletions and translocation events) using the EASE analysis module. After gene ontology (GO) analysis, 18 GO terms were nominally enriched in the mutated gene list with a p-value ≤ 0.01 (Table S5). These GO enrichments include cell adhesion (GO:0007155 and GO:0022610), membrane (GO:0044425), and protein kinase regulator activity (GO:0019887).

The list of genes was also compared to the list of cancer-associated genes maintained by the Cancer Gene Census project. For SNVs and small indels, eight were observed in the census list, but this is not unexpected given the large number of mutations found in this cell line (p = 0.21). Two CGC genes were affected by complete microdeletions (CDKN2A and MLLT3), and one gene each was affected by heterozygous microdeletions (IL21R) and interchromosomal translocations (SET). These included genes previously annotated as mutated in instances of T cell prolymphocytic leukemia (TCRA and MLLT3), glioma (PTEN), endometrial cancer (PTEN), anaplastic large-cell lymphoma (CLTCL1), prostate cancer (ETV1 & PTEN), Ewing sarcoma (FLI1 and ETV1), desmoplastic small round cell tumor (FLI1), acute lymphocytic leukemia (FLI1 and MLLT3), clear cell sarcoma (FLI1), sarcoma (FLI1), myoepithelioma (FLI1), follicular thyroid cancer (PAX8), non-Hodgkin lymphoma (IL21R), acute myelogenous leukemia (SET), fibromyxoid sarcoma (CREB3L2), melanoma (XPC), and multiple other tumor types (PTEN and CDKN2A).

We also explored the overlap between genes mutated in GBMs according to the Cancer Genome Atlas (TCGA) and those we predicted to be homozygously loss-of-function mutated in U87MG (Table S5). Seven genes mutated in U87MG by SNVs or indels were also found mutated within the TCGA sample (PTEN, LTF, KCNJ16, ABCA13, FLI1, MLL4, DSP). This overlap is not statistically significant (p = 0.16). Ten additional genes overlapped, including two genes mutated by interchromosomal translocations (CNTFR, ELAVL2), three by intrachromosomal translocations (ANXA8, LRRC4C, ALDH1A3), and five by homozygous microdeletions (CDKN2A, CDKN2C, MTAP, IFNA21, TMBIM4).

Finally, in order to place the homozygous mutations of U87MG in the context of GBM mutational patterns as a whole, the Genomic Identification of Significant Targets in Cancer (GISTIC) method \[28\] was applied to 293 glioblastoma samples with genome-wide copy number information available from the TCGA. This yielded a list of significant, commonly deleted regions present across glioblastomas as a group and highlighted genes commonly mutated in GBMs. These data indicate that all or parts of chromosomes 1, 6, 9, 10, 13, 14, 15, and 22 are commonly deleted within GBMs as a group. In total, these regions comprise 915,306,764 bases, covering roughly 30 percent of the genome. To highlight genes homozygously mutated in U87MG that lie within the regions of common loss, we cross-referenced these lists and found that 62/332 (19%) are within the GISTIC-defined regions. This does not suggest a significant overlap of homozygously mutated genes in U87MG with commonly deleted regions, but those mutated genes that do overlap may be of increased relevance to cancer. Two of the 62 genes are also in the Cancer Gene Census: PTEN and TCRA. We propose that a subset of the genes mutated in U87MG within these commonly deleted regions may be the specific targets of mutation and should be assessed in larger sample sets (Table S5 and Figure S4).

# Discussion

Reported individual human genome sequencing projects using massively parallel shotgun sequencing with alignment to the human reference genome clearly indicate the practicality of individual whole genome sequencing.
However, the monetary cost of data generation, data analysis issues, and the time it takes to perform the experiments have remained substantial limitations to general application in many laboratories. Here we demonstrate enormous improvements in the throughput of data generation. Using a mate-pair strategy and only ten micrograms of input genomic DNA, we generated sufficient numbers of short sequence reads in approximately 5 weeks of machine operation with a total reagent cost of under \$30,000. We believe this makes U87MG the least expensive published genome sequenced to date, signaling that routine generation of whole genomes is feasible in individual laboratories. Further, the two-base encoding strategy employed within the ABI SOLiD system is a powerful approach for comprehensive analysis of genome sequences and, in concert with the BFAST alignment software, is able to identify SNVs, indels, structural variants, and translocations.

Of particular interest in whole-genome resequencing studies such as this one is how much raw data must be produced to sequence both alleles using a shotgun strategy. Here, 107.5Gb of raw data was generated. Of this, 55.51Gb was mapped to unique positions in the reference genome. In effect, this results in a mean base coverage of 10.85× per allele within non-repetitive regions of the genome. Repetitive regions are of course undermapped, as their unique locations are more difficult to determine. This level of oversampling is adequate for high stringency variant calling (error rate less than 5×10^−6^) at 93.71% of heterozygous SNP positions. There may be some biases in library generation resulting in bases that are not successfully covered even if they are relatively unique, but solutions may be found in performing multiple sequencing runs with varied library designs, as suggested in other studies \[17\].

With rapid advances in the generation of massively parallel shotgun short reads, one of the major computational problems faced is the rapid and sensitive alignment of the more than 1 billion paired-end reads needed to resequence an individual genome. We demonstrate a practical solution using BFAST, which was able to perform fully gapped local alignment on the two-base encoded data to maximize variant calling in less than 4 days on a 20-node, 8-core computer cluster.

Comparing U87MG SNVs with the SNVs of the James Watson \[12\] and YanHuang \[14\] genome projects reveals differences in SNV detection between the three projects. Because both were derived from Caucasian individuals, U87MG and the Watson genome are expected to share more SNVs than U87MG and YanHuang. However, when we compared SNVs between U87MG and these two genomes, more SNVs were actually shared between U87MG and YanHuang. Meanwhile, the YanHuang project called significantly more SNVs in total than both our U87MG sequencing project and the James Watson project. These results stress that utilizing different sequencing platforms (U87MG: ABI SOLiD; James Watson: Roche 454; YanHuang: Illumina Solexa), alignment tools (U87MG: BFAST; James Watson: BLAT; YanHuang: SOAP) and analytical approaches results in finding different quantities of SNVs. The higher genomic coverage of our U87MG sequence relative to James Watson, and the increased sensitivity of BFAST relative to BLAT and SOAP, give confidence that the variants found here are highly robust.
This is particularly important when sequencing a cancer genome because of the interest in finding novel cancer mutations as opposed to common polymorphisms.

The genomic sequence demonstrates global differences in variant type across the coding and non-coding portions of the human genome. By increasing the sensitivity of indel detection, we revealed that small indels have mutated genes at a higher per-event rate than SNVs: a larger proportion of the identified indels are predicted to cause a protein-coding change compared to SNVs (178/191,743 indels vs. 154/2,384,470 SNVs).

In U87MG, there is a relative increase in 4-base indels genome-wide, which has been observed in other normal genomes \[23\]–\[25\] (Figure 2, red bars). However, indels found in coding regions exhibit a bias toward events that are multiples of 3 bases in length (Figure 2, blue bars), presumably selected to maintain the reading frame. Thus, many of these events are likely to be polymorphisms and not disease-related genomic mutations \[25\]. Similarly, the nucleotide substitution frequencies demonstrate a bias in coding regions compared to non-coding regions. Two-thirds of the substitutions were transitions genome-wide, as expected \[27\], but there was an enrichment of CG-\>TA transitions in coding regions (Figure 3). It is well established that the most common source of point mutations and SNPs in primates is deamination of methyl-cytosine (meC), causing a transition to thymine (T) \[16\],\[29\], and there is circumstantial evidence of that in U87MG's genome as well.

The resolution of genome-wide chromosomal rearrangements is substantially improved by the mate-pair strategy, coupled with sensitive and independent alignment of the short 50-base reads (Figure 5). Based on published SKY data, we anticipated 7 interchromosomal breakpoints \[6\]. However, whole-genome mate-paired sequence data revealed the precise chromosomal joins of 35 interchromosomal events, which account for previously observed chromosomal abnormalities in U87MG but at finer resolution (Figure 5, Figure 6, Figure 7). The translocation events were enriched in genic regions, with 32/35 (91.4%) occurring within 1kb of genes. A weaker, but still noticeable, enrichment over genes occurs with microdeletions as well, which are generally missed by other experimental techniques such as DNA microarrays. Thus, within the overall mutational landscape of this cancer cell line, translocations and structural variants preferentially occurred over genes, supporting a model where cancer mutations occur via structural instability rather than novel point mutations.

Delving into the functional effects of the mutations in U87MG through gene ontology analysis and cross-referencing of the literature, we found a large number of known and predicted cancer mutations present in the cell line. There is always a concern when dealing with a cancer cell line that mutations will be more related to its status as a cell line than to the cancer it was derived from. While this remains a concern, the large number of predicted and known cancer genes present in U87MG suggests that other genes mutated in this line have relevance to cancer as well.
Using GISTIC to find regions commonly deleted in glioma samples, we highlight 60 genes, not included within the Cancer Gene Census list, that are mutated in U87MG and located in regions commonly deleted in GBMs, as potential candidate mutational targets in GBMs (Table S5).

Cancer cell lines are commonly used as laboratory resources to study basic molecular and cellular biology. It is clearly preferable to have complete genomic sequence for these valuable resources. U87MG is the most commonly studied brain cancer cell line and is highly cytogenetically aberrant. While this made the sequencing and mutational analysis more challenging, it serves as a model for future cultured cell line genomic sequencing. Through custom analyses, we found that the mutational landscape of the U87MG genome is vastly more complicated than we would have expected based on the variants discovered in previously published genomes. It is our hope that the increased genomic resolution presented here will help researchers and clinicians working with this brain cancer cell line to design more effective experiments and draw more meaningful conclusions.

# Materials and Methods

## Data Sources

The NCBI reference genome (build 36.1, hg18, March 2006), genome annotations, and dbSNP version 129 were downloaded from the UCSC genome database. A local mirror of the UCSC genome database (hg18) was used for the subsequent analysis of variants using the included gene models and annotations. The Watson genome variants and bulk data files were downloaded from Cold Spring Harbor Laboratory. The YanHuang variants and bulk data files were downloaded from the Beijing Genomics Institute at Shenzhen.

## Sample Preparation

U87MG cells were ordered from ATCC (HTB-14) and cultured in a standard way. Genomic DNA was isolated from cultured U87MG cells using Qiagen Gentra Puregene reagents. DNA was stored at −20°C until library generation.

## ABI Sequencing

Long-Mate-Paired Library Construction: The U87MG genomic DNA 2×50bp long mate-paired library construction was carried out using the reagents and protocol provided by Applied Biosystems (SOLiD 3 System Library Preparation Guide). A similar protocol was reported previously \[17\]. Briefly, 45μg of genomic DNA was fragmented by HydroShear (Digilab Genomic Solutions Inc) to 1.0–2.5kb. The fragmented DNA was repaired with the End-It DNA End-Repair Kit (Epicentre). Subsequently, the LMP CAP adaptor was ligated to the ends. DNA fragments between 1.2–1.7kb were selected on a 1.0% agarose gel to avoid concatemers and circularized with a biotinylated internal adaptor. Non-circularized DNA fragments were eliminated by Plasmid-Safe ATP-Dependent DNase (Epicentre), and 3μg of circularized DNA was recovered after purification. Original DNA nicks at the LMP CAP oligo/genomic insert border were translated about 100bp into the target genomic DNA by nick translation using E. coli DNA polymerase I. Fragments containing the target genomic DNA and adaptors were cleaved from the circularized DNA by single-strand-specific S1 nuclease. P1 and P2 adaptors were ligated to the fragments, and the ligated mixture was used to create two separate libraries with 10 cycles of PCR amplification.
Finally, 250–300bp fragments, with an average of around 90bp of target genomic DNA on each end, were selected by excision from a PAGE gel to generate the mate-paired sequencing libraries and used as the emulsion PCR template. Templated Beads Preparation: The templated beads preparation was performed using the reagents and protocol from the manufacturer (Applied Biosystems SOLiD 3 Templated Beads Preparation Guide). SOLiD 3 Sequencing: The 2×50bp mate-paired sequencing was performed exactly according to the Applied Biosystems SOLiD 3 System Instrument Operation Guide, using reagents from Applied Biosystems.

## Exon Pull-Down Capture Sequencing with Illumina GAII

We used an array pull-down capture strategy established in our lab \[30\]. An Agilent custom array for capturing 5,253 "cancer-related" genes was designed through the Agilent e-array system ([www.agilent.com](http://www.agilent.com)). Only the amino-acid-encoding regions were targeted, with 60mer oligos spaced 20–30bp center-to-center. The probes were randomly distributed across two separate 244K arrays. The library for cancer gene capture sequencing was generated following the standard Illumina paired-end library preparation protocol. 5μg of genomic DNA was used as the starting material, and 250–300bp fragments were size-selected during the gel-extraction step. In the last step, 18 cycles of PCR were performed in multiple tubes to yield 4μg of product, which was mixed with 50μg of Human Cot-1 DNA (Invitrogen), 52μl of Agilent 10× Blocking Agent, 260μl of Agilent 2× Hybridization Buffer and a 10× molar concentration of unpurified Illumina paired-end primer pairs custom made according to the sequences provided by Illumina (Oligonucleotide sequences, 2008, Illumina, Inc: available on request from Illumina). The mix was then diluted with elution buffer to a final volume of 520μl and incubated at 95°C for 3 min and 37°C for 30 min. 490μl of the hybridization mix was added to the array and hybridized in the Agilent hybridization oven (Robbins Scientific) for 65 hrs at 65°C, 20rpm. After hybridization, the array was washed according to the Agilent wash procedure A protocol. The second wash was extended to 5 minutes to increase the wash stringency. After washing, the array was stripped by incubating it in the Agilent hybridization oven at 95°C for 10 min, 20rpm, with 1.09× Titanium Taq PCR Buffer (Clontech). After the incubation and collection of the solution, 4 tubes of PCR were prepared, each containing 96μl of the collected solution, 1μl of dNTPs (10mM each), 1μl of Titanium Taq (Clontech) and 1μl of each Solexa primer. PCR was performed under the following conditions: 30 sec at 95°C; (10 sec at 95°C, 30 sec at 65°C, 30 sec at 72°C) × 18 cycles; 5 min at 72°C; hold at 4°C. The amplified product was purified using the QIAquick PCR Purification Kit and eluted in 30μl of EB. After confirming the size of the amplicon on a 2% agarose gel and measuring the concentration, the amplicon was diluted to 10nM, the working concentration for cluster generation. The Illumina flowcell was prepared according to the manufacturer's protocol, and the Genome Analyzer was run using standard manufacturer's recommended protocols.
The image data produced were converted to intensity files and processed through the Firecrest and Bustard algorithms (version 1.3.2) provided by Illumina to call the individual sequence reads.

## ABI SOLiD Sequence Alignment and Consensus Base Calling

We used the Blat-like Fast Accurate Search Tool version 0.5.3 (BFAST) \[19\] to perform sequence alignment of the two-base encoded reads from the ABI SOLiD to the NCBI human reference genome (build 36.1). Utilizing the local alignment algorithm included in BFAST \[31\], we were able to simultaneously decode the short reads while searching for color errors (encoding errors), base changes, insertions, and deletions.

We found candidate alignment locations (CALs) for each end independently. We utilized ten indexes to be robust to up to six color errors, equating to a 12% per-read error rate:

1111111111111111111111

111110100111110011111111111

10111111011001100011111000111111

1111111100101111000001100011111011

111111110001111110011111111

11111011010011000011000110011111111

1111111111110011101111111

111011000011111111001111011111

1110110001011010011100101111101111

111111001000110001011100110001100011111

We also set parameters to use only informative keys when looking up reads in each index (BFAST parameter -K 8), and to ignore reads with too many CALs aggregated across all indexes (BFAST parameter -M 384). If reads mapped to more than 384 locations, they were categorized as 'unmapped'. We then performed local alignment for each of the returned CALs, simultaneously decoding the read from color space while searching for color errors (encoding errors), base changes, insertions, and deletions \[31\]. We chose the "best scoring" alignment, accepting an alignment only if it was at least the equivalent edit distance of two color errors away from the next best alignment. For reference, this is roughly equivalent to a mapping quality of 20 or better in the MAQ program output. We removed duplicate reads using the alignment filtering utility found in the DNAA package. For single-end reads and mate-paired reads where only one end mapped, we removed duplicates based on reads having identical start positions. For mate-paired reads, we removed duplicates where both ends had the same start positions.

## Illumina Genome Analyzer Sequence Alignment

Illumina-generated sequence was aligned to the NCBI human reference genome (build 36.1) using BFAST with the following parameters applied. Each end of the fragment library was mapped independently to identify CALs, utilizing ten indexes to be robust to errors and variants in the short (typically 36bp) reads:

1111111111111111111111

1111101110111010100101011011111

1011110101101001011000011010001111111

10111001101001100100111101010001011111

11111011011101111011111111

111111100101001000101111101110111

11110101110010100010101101010111111

111101101011011001100000101101001011101

1111011010001000110101100101100110100111

1111010010110110101110010110111011

We also set parameters to use only informative keys when looking up reads in each index (BFAST parameter -K 8), and to ignore reads with too many CALs aggregated across all indexes (BFAST parameter -M 1280). We then performed a standard local alignment for each CAL. Reads were declared mapped if a single unique best-scoring alignment was identified within the genome. The spaced-seed masking idea behind these index strings is illustrated below.
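Each index string above is a spaced-seed mask: '1' positions contribute to the lookup key and '0' positions are wildcards, so mismatches falling on masked-out positions do not change the key. The sketch below illustrates the idea in base space for simplicity (SOLiD reads are actually two-base color-encoded); it is not BFAST's implementation.

```python
MASK = "1111111100101111000001100011111011"  # one of the SOLiD index masks

def seed_key(read: str, offset: int, mask: str = MASK) -> str:
    """Extract the spaced-seed key for `read` at `offset`."""
    window = read[offset:offset + len(mask)]
    if len(window) < len(mask):
        raise ValueError("read too short for this mask at this offset")
    return "".join(base for base, keep in zip(window, mask) if keep == "1")

read = "TACGGATTACAGGCATGAGCCACCGCACCCGGCCTTTT"
print(seed_key(read, 0))
# A mismatch at a '0' position in the mask leaves the key unchanged:
mutated = read[:8] + "G" + read[9:]  # read position 8 is masked out
print(seed_key(mutated, 0) == seed_key(read, 0))  # True
```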
Duplicate reads were filtered out in the same manner as for the ABI SOLiD data.

## Single Nucleotide Variant and Small Insertion and Deletion Detection

To find SNVs, including SNPs and small indels, we used the MAQ consensus-calling model \[20\] as implemented in SAMtools \[21\]. We used a value of 0.0000007 for the prior on a difference between two haplotypes (-r parameter). This was chosen based on ROC analysis of a test dataset (data not shown).

## Structural Variation Detection

Structural variations were detected using custom algorithms designed to comprehensively search for groups of mate-pair reads with aberrant paired-end insert size distributions that consistently identify a unique structural variant in the genome. We utilized the "dtranslocations" utility in the DNAA package for the primary structural variation candidate search. The utility first selected all pairs for which each end was uniquely mapped to a single location in the human genome and for which the mate-pair reads were not positioned within the expected size range relative to the consensus genome. We then filtered out false positives that were not consistent with a chromosomal difference on an allele. Briefly, the genome was divided into 500-base bins stepped 100 bases apart from their start positions. Each bin was then paired with other bins on the basis of containing similar 'mismapped' mate-pair reads. Aberrant mate-paired reads were defined as reads mapping less than 1000 or more than 2000 bases apart within the reference genome, a range selected based on the insert size distribution calculated from the aggregate dataset (Figure S2). These were then rank-ordered based on the number of mate pairs meeting the criteria, and the destination bin with the most reads within it was paired with a given source bin to create a 'binset'. Binsets containing fewer than 4 reads were filtered out, removing 98.3% of the candidates on the grounds of having too little supporting evidence. The resulting list of filtered binsets was then scanned for clusters of binsets. Binset clusters are groups of binsets where the source bins occur within 2000 bases of each other and the destination bins occur within 2000 bases of each other. Redundant binsets were combined, and binset clusters containing too few binsets (fewer than 9 binsets spanning at least 1000 bases) or too many binsets (more than 29 binsets spanning at most 3000 bases; more is impossible given our insert size distribution) were removed as artifacts. The resulting binset clusters represent the reads immediately flanking structural breakpoint events. This detection process is currently being automated as Breakway, but was done using custom scripts at the time of analysis.

The structural variations were then separated into interchromosomal and intrachromosomal events. Intrachromosomal events of less than 1Mb were assessed for deletion status by averaging base coverage within the bounds of the event and comparing it to base coverage 200kb outside the event on both sides. Those with an average interior base coverage less than 25% of the average exterior base coverage were classified as "complete" deletions. Those with an average interior base coverage between 25% and 75% of the average exterior base coverage were classified as "heterozygous deletions" (deletions of at least one copy of the region, but with at least one copy remaining). This classification is sketched below.
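A minimal version of the coverage-ratio classification follows, assuming the interior and exterior coverage means have already been computed; the handling of the exact 25% and 75% boundaries is a guess.

```python
def classify_deletion(interior_mean: float, exterior_mean: float) -> str:
    """Classify a candidate intrachromosomal event by coverage ratio."""
    if exterior_mean <= 0:
        return "unclassifiable"
    ratio = interior_mean / exterior_mean
    if ratio < 0.25:
        return "complete deletion"
    if ratio < 0.75:
        return "heterozygous deletion"
    return "no deletion"

print(classify_deletion(1.1, 12.4))   # complete deletion (~9% of exterior)
print(classify_deletion(6.0, 12.4))   # heterozygous deletion (~48%)
print(classify_deletion(11.8, 12.4))  # no deletion (~95%)
```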
## Genes Affected by Mutations in Coding Sequence

Variant calls from the SAMtools pileup tool were first loaded into a SeqWare QueryEngine database and subsequently filtered to produce BED files. These filtering criteria required that a variant be seen at least 4 times and at most 60 times, with at least one observation on each strand. For SNVs we further required that SNVs be called only from reads lacking indels; the last 5 bases of each read were also ignored. This reduced the likelihood that spurious mismappings were used to predict SNVs and eliminated the lowest quality bases from consideration. For small indels (\<21bp) we enforced a slightly different filter, requiring that any read supporting an indel contain only one contiguous indel; such reads were not considered if the indel occurred at either the beginning or the end of the read (a sketch of this read-level check appears below). These criteria, like the SNV criteria, were used to reduce the likelihood of using mismapped or locally misaligned reads in the variant calling algorithm. The elimination of reads with indels at the beginning or end of the read was intended to remove potential alignment artifacts caused by ambiguous gap introduction due to a lack of information at the read ends to guide proper alignment. Together, these filtering criteria reduced the likelihood that sequencing errors were identified as SNV or indel variants. We used scripts available in the BFAST toolset and the SeqWare Pipeline to filter and annotate the variant calls. Variants passing these filters were further annotated by their overlap with dbSNP version 129. Variants were required to share the same genomic position as a dbSNP entry, along with matching the allele present in the database, to be considered overlapping. Mapping to dbSNP allowed us to separate known SNPs from de novo variants.

Filtered SNV and indel variants were then analyzed for their effect on the gene-model-annotated genome. This analysis used scripts from the SeqWare Pipeline project and gene models downloaded from the UCSC hg18 human genome annotation database. Six different gene model sets from hg18 were considered: UCSC genes (knownGene), RefSeq genes (refGene), Consensus Coding Sequence genes (ccdsGene), Mammalian Gene Collection genes (mgcGenes), Vertebrate Genome Annotation genes (vegaGene), and Ensembl genes (ensGene). Each variant was evaluated for overlap with genes from each of the 6 gene model sets. If overlap was detected, the variant was examined and tagged with one or more of the following terms depending on the nature of the event: "utr-mutation", "coding-nonsynonymous", "coding-synonymous", "abnormal-ref-gene-model-lacking-stop-codon", "abnormal-ref-gene-model-lacking-start-codon", "frameshift", "early-termination", "inframe-indel", "intron-splice-site-mutation", "stop-codon-loss", and/or "start-codon-loss". The variant was also tagged with the gene symbol and other accessions to facilitate lookups. This information was loaded into a SeqWare QueryEngine database to allow for querying and filtering of the variants as needed.
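Returning to the read-level indel-support check described at the start of this section, a minimal sketch follows; the simplified operation list stands in for a parsed alignment record (for example, a CIGAR string).

```python
def supports_indel(ops):
    """ops: list of (op, length) pairs, op in {'M', 'I', 'D'}.

    A read supports an indel call only if it contains exactly one
    contiguous indel and that indel is flanked by aligned bases.
    """
    indel_positions = [i for i, (op, _) in enumerate(ops) if op in ("I", "D")]
    if len(indel_positions) != 1:
        return False             # zero indels, or more than one
    i = indel_positions[0]
    return 0 < i < len(ops) - 1  # not at the start or end of the read

print(supports_indel([("M", 20), ("D", 4), ("M", 26)]))   # True
print(supports_indel([("I", 2), ("M", 48)]))              # False: at read start
print(supports_indel([("M", 10), ("I", 1), ("M", 5),
                      ("D", 2), ("M", 30)]))              # False: two indels
```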
Genes affected by structural variations were assessed in two ways, depending on the structural variation type. For interchromosomal translocation events, a gene was considered "affected" when either end of the event fell in a genic region (including the entire coding region plus 1kb up- or downstream of the gene's coding region). The same criteria were used for all intrachromosomal translocation events. For events that were classified as complete or heterozygous deletions, a gene was considered affected if all or part of a coding exon was deleted.

## Annotation of Relevant Mutated Genes

Homozygous SNVs, small indels, large deletions, and translocation events predicted to change coding sequence were tallied. This became a reference list of variants with serious homozygous mutations that likely completely disrupted, or "knocked out", the normal function or synthesis of the target protein.

For the SNVs and small indels, a "knockout" variant was defined as a homozygous call by the SAMtools variant caller where the variant was predicted by the SeqWare Pipeline scripts to change coding sequence with one or more of the following annotations: "early-termination", "frameshift", "intron-splice-site-mutation", "start-codon-loss", and/or "stop-codon-loss". An "early-termination" event represented a stop codon introduced upstream of the annotated stop codon. A "frameshift" represented an indel that shifted the reading frame of the gene, typically resulting in early termination and nonsense coding sequence. An "intron-splice-site-mutation" referred to a mutation in the two consensus splice-site intronic bases flanking exons (GT at the 5′ splice site and AG at the 3′ splice site). Finally, "stop-codon-loss" and "start-codon-loss" simply refer to variants that interrupt the stop or start codons. We chose not to include "coding-nonsynonymous" and "inframe-indel" annotations in this list of knockout variants because, while these mutations are potentially serious, they are not guaranteed to result in an unexpressed or non-functional protein. However, homozygous frameshift, early termination, splice site, and stop/start codon loss mutations are very likely to interrupt a gene's expression and translation to functional protein.

As described above, large microdeletions that removed all or part of an exon and interchromosomal translocation events that fell within 1kb of a gene's coding region also caused genes to be classified as mutated.

Once suspect knockout variants were identified, a mapping process was used to translate one or more variants to a gene symbol. This mapping allowed us to condense multiple variants affecting multiple gene models to a more abbreviated list of gene symbols likely to be affected by these knockout mutations. The mapping from variants to gene symbols used variants identified with gene models from the refGene and knownGene tables in the UCSC hg18 database and mapped these variants to gene symbols using queries against the name field of the knownGene table and the alias field of the kgAlias table. The UCSC table browser was used to accomplish these queries and map the knownGene identifiers to gene symbols via the kgXref table. A similar approach was used for homozygous large-scale microdeletions and translocation events.

## DAVID/EASE Analysis

The list of knockout genes was uploaded to the Database for Annotation, Visualization, and Integrated Discovery (DAVID, version 2008) to identify enriched Gene Ontology (GO) terms \[32\]–\[33\]. Overlaps with GO terms from the biological process, cellular component, and molecular function ontologies were considered. The default parameters were used, and a p-value cutoff of ≤ 0.01 was considered significant.
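The EASE score underlying this analysis is a conservative variant of Fisher's exact test; a plain one-sided hypergeometric version of the same idea is sketched below, with invented counts for illustration.

```python
from scipy.stats import hypergeom

def go_term_pvalue(n_genome: int, n_term: int, n_list: int, n_hits: int) -> float:
    """P(X >= n_hits) when drawing n_list genes from n_genome, of which
    n_term carry the GO term (one-sided enrichment test)."""
    return float(hypergeom.sf(n_hits - 1, n_genome, n_term, n_list))

# e.g. 332 mutated genes from a ~20,000-gene background, a term annotating
# 400 genes, 18 of them appearing in the mutated list:
print(go_term_pvalue(20_000, 400, 332, 18))  # small p => nominal enrichment
```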
## Cancer Gene Census

The Cancer Gene Census genes were compared with those identified as knockouts in U87MG. The Cancer Gene Census project is an ongoing effort to catalog genes with mutations that have been implicated in cancer \[34\]. It is a highly curated list that includes annotations for each gene, including tumor types, classes of mutations, and other genetic properties. We used the gene symbol list from the September 30^th^, 2009 complete working list, which includes 412 gene symbols.

## TCGA

The overlap between mutations in the Cancer Genome Atlas (TCGA) and those identified as knockouts in U87MG was analyzed. TCGA is an ongoing effort to understand the molecular basis of cancer through large-scale copy number analysis, expression profiling, genome sequencing, and methylation studies, among other techniques \[4\]. It provides information on mutations found by Sanger sequencing of many patient samples. For glioblastoma, this includes sequence aberrations detected in 158 patient samples across 1,177 genes.

## Genomic Identification of Significant Targets in Cancer

The Genomic Identification of Significant Targets in Cancer (GISTIC) method was used to find significant areas of deletion in 293 samples from the TCGA \[28\]. The GISTIC technique was designed to identify and analyze chromosomal aberrations across a set of cancer samples, based on the amplitude of the aberrations as well as their frequency across samples. This approach produced a series of commonly deleted regions across the set of TCGA GBMs. To calculate the areas of deletion, we used 293 Affymetrix SNP 6.0 samples segmented using the GLAD SNP analysis module \[35\]. Default parameters of GISTIC were used. GISTIC produces peak limits, wide peak limits, and broader region limits. These commonly deleted broader regions were then scanned for predicted knockout genes in U87MG.

## Indel Size Distribution and Nucleotide Substitution Frequencies

The distribution of small indel sizes was examined for both deletions and insertions. Indels classified as affecting coding sequence by the SeqWare Pipeline (see above) were compared to those outside coding regions. Raw counts were collected, recalculated as percentages of the total, and compared directly.

Similarly, nucleotide substitution frequencies were examined for SNVs from U87MG, both genome-wide and in coding regions only. Once binned appropriately, the SNV nucleotide substitutions were counted, tallied in a table, and graphed as percentages of the total.

## Individual Genome Comparison

Variants from the Watson and YanHuang genomes were downloaded from each respective project's website. These files contained variant calls for each genome along with annotations describing each variant as novel or occurring in dbSNP. The Watson genome only contained SNV calls, so our comparison was limited to SNVs. The YanHuang genome also contained calls indicating heterozygous or homozygous state; however, a variant was considered to match between genomes regardless of zygosity state. We compared the overlap of the U87MG genome, dbSNP and each of these genomes in turn.
SNVs from U87MG that were considered for comparison had to meet our criteria: variants had to be observed at least 4 times and at most 60 times, at least once per strand, and with a minimum Phred score of 10. SNVs in the three-way comparison were said to match if the position and allele matched between the genomes. If a variant matched between U87MG and another genome and either call was annotated in dbSNP, then both were considered to be in dbSNP. If neither carried a dbSNP annotation, the variant was considered novel. A similar process was carried out for variants unique to each genome. The results were recorded as Venn diagrams showing the overlap between dbSNP, U87MG, and the Watson or YanHuang genome.

## Illumina SNP Chip

Genomic DNA from U87MG was submitted to the Southern California Genotyping Consortium to be run on the Illumina Human 1M-Duo BeadChip, which consists of 1,199,187 probes scattered across the human genome. The Illumina BeadStudio program was used to analyze the resulting intensity data. Loss of heterozygosity (LOH) was determined by analyzing the B-allele frequency reported by the BeadStudio program. Normal two-copy regions of the genome are represented by long stretches of probes with B-allele frequencies of 0, 0.5 or 1. Regions of LOH, on the other hand, deviate from this pattern significantly. Copy number was determined from probe intensity.

## Sanger Sequencing Validation

Primers for validation were designed using the Primer3 tool, targeting regions immediately flanking each event predicted by our whole genome sequence analysis. Polymerase chain reaction was performed following standard protocols using Finnzymes Phusion Hot-Start High Fidelity polymerase. Products were run on a 2% agarose gel, and product purity and size were assessed by staining with ethidium bromide. Sanger sequencing was performed at the UCLA Genotyping and Sequencing core facility using an ABI 3730 Capillary DNA Analyzer. Sequence trace files were analyzed using Geospiza FinchTV. Validation status and PCR primers are listed in Table S1.

## Data Deposition and Availability

Intensities, quality scores, and color space sequence for the U87MG SOLiD genomic sequence were uploaded to the Sequence Read Archive under the accession SRA009912.1/Sequence of U87 Glioblastoma Cell-line. Intensities, quality scores, and nucleotide space sequence for the U87MG exon capture Illumina sequence were also uploaded to the Sequence Read Archive under the same accession. For both datasets, alignment files have been uploaded to the Sequence Read Archive as additional analysis results.

Variant calls for both datasets are available via a SeqWare QueryEngine web service. This tool allows querying of the variants using a variety of search criteria, including coverage, mutational consequence, gene symbol, and others. SeqWare QueryEngine produces results in both BED and WIG formats, making it compatible with the majority of genome browsers, such as the UCSC genome and table browsers. Variant data will be uploaded to SRA as metadata along with the raw sequences. For the whole genome SOLiD alignment, small indels (\<21bp), SNVs, large deletions, and translocation events can be queried. For the exon capture Illumina alignment, small indels and SNVs can be queried.

## Software Availability

Most software used for this project is open-source and freely available. We created two software projects that were instrumental in the analysis of the U87MG data: BFAST and SeqWare.
The color- and nucleotide-space alignment tool BFAST is freely available for download, and many of our alignment filtering utilities, as well as the primary step in structural variation detection, can be found in the DNAA package. The SeqWare software project was used throughout the analysis of variant calls. We used the SeqWare LIMS tool for sample tracking, the SeqWare Pipeline analysis programs for annotating variants with dbSNP status and mutational consequence predictions, and SeqWare QueryEngine to store and query variant calls and annotations. This software and its documentation are freely available for download.

# Supporting Information

Figure S1. Concordance between Solexa capture data and SOLiD whole genome data. The left plot displays the SNP call concordance of each experiment (Solexa capture data in blue, SOLiD whole genome data in red) with the Illumina 1M BeadChip microarray for the 8.5Mb of sequence pulled down in the capture experiment. The right plot displays concordance of the non-reference (mutant) allele calls with the array data for those regions.

(0.43 MB TIF)

Figure S2. Paired end insert size distribution. Empirical paired-end insert size distribution for reads where both ends aligned, with duplicates removed.

(0.41 MB TIF)

Figure S3. Alignment is robust against genome-wide repeat elements. Reads spanning a complete microdeletion on chromosome 2 (bases 201855000–201858000) are shown in dark blue in this Circos plot \[35\], with the normal reads in the surrounding region in light blue. The green plot shows base coverage at each position. The outermost track shows the structure of a gene, CASP8, overlapping this region (large boxes, exons; lines, introns). The track containing black and red boxes shows genome-wide repeat elements (black, LINE; red, SINE). Note the high density of reads even over conserved LINE elements. Some SINE elements do show a drop in alignments, but these do not prevent the identification of structural variation-spanning reads.

(0.21 MB TIF)

Figure S4. Commonly deleted regions in GBM according to GISTIC. This deletion plot shows significant regions of deletion in 293 GBM samples from the TCGA. The top of the plot shows the G-score and the bottom shows the q-values. The G-score reflects the frequency and amplitude of the deletion. Q-values less than 0.25 were considered significant. Genes mutated in U87MG via SNVs or indels that overlap broad regions of deletion are considered likely cancer targets. The regions include all or part of chromosomes 1, 6, 9, 10, 13, 14, 15, and 22.

(0.43 MB TIF)

Table S1. PCR and dideoxy sequencing validation. A list of the variants that were validated by PCR and dideoxy sequencing, including the primers used, variant location, and validation status.

(0.03 MB XLS)

Table S2. Structural variants in U87MG. All structural variants, listed as the regions immediately flanking the genomic breakpoint.

(0.18 MB XLS)

Table S3. Genes knocked out by SNVs/indels. List of all genes predicted to be knocked out by SNVs and indels in U87MG.

(0.20 MB XLS)

Table S4. Genes affected by structural variants. List of all genes predicted to be affected by structural variants in U87MG.

(0.48 MB XLS)

Table S5. Annotation of mutated genes.
Lists of genes predicted to be mutated in U87MG, annotated by various cancer-related gene databases.

(0.17 MB XLS)

Click here for additional data file.

We would like to acknowledge Bret Harry and Jordan Mendler for computational support and for maintaining our computer cluster and pipeline.

# References

[^1]: Conceived and designed the experiments: MJC NH BDO ZC HL BM SFN. Performed the experiments: MJC BDO ZC HL. Analyzed the data: MJC NH BDO AE HL BM SFN. Contributed reagents/materials/analysis tools: MJC NH BDO ZC HL BM SFN. Wrote the paper: MJC NH BDO ZC AE HL BM SFN.

author: Liza Gross
date: 2006-09
title: Evolution of Neonatal Imitation

Humans do it. Chimps do it. Why shouldn't monkeys do it, too? Mimicry exists throughout the animal kingdom, but imitation with a purpose—matching one's behavior to others' as a form of social learning—has been seen only in great apes. (Mockingbirds can imitate an impressive number of other birds' songs, but they can't mimic you sticking out your tongue like a chimp can.) This matching behavior likely helps individuals conform to social norms and perform actions in the proper context. It's generally believed that monkeys do not imitate in this way. However, the discovery that rhesus monkeys have "mirror neurons"—neurons that fire both when monkeys watch another animal perform an action and when they perform the same action—suggests they possess the common neural framework for perception and action that is associated with imitation.

Most studies exploring the early signs of matching behavior have focused on humans. A landmark 1977 study by Andrew Meltzoff and Keith Moore showed that 12- to 21-day-old infants could imitate adults who pursed their lips, stuck out their tongue, opened their mouth, and extended their fingers. They later found similar results in newborns, demonstrating that imitation is innate, not learned. A handful of studies on newborn chimps found a similar capacity for imitating human facial gestures. In a new study, Pier Ferrari, Stephen Suomi, and colleagues explored the possibility that imitation evolved earlier in the primate tree by studying neonatal imitation in rhesus monkeys, which split from the human lineage about 25 million years ago. They found that rhesus infants can indeed imitate a subset of human facial gestures—gestures the monkeys use to communicate. The first investigation of neonatal imitation outside the great ape lineage, their study suggests that the trait is not unique to great apes after all.

Ferrari et al. tested 21 baby rhesus monkeys' response to various experimental conditions at different ages (one, three, seven, and 14 days old).
Infants were held in front of a researcher who began with a passive expression (the baseline condition) and then made one of several gestures, including tongue protrusion, mouth opening, lip smacking, and hand opening.

Day-old infants rarely displayed mouth opening behavior, but smacked their lips frequently. When experimenters performed the mouth opening gesture, infants responded with increased lip smacking but did not increase any other behavior. None of the other stimuli produced significant responses. But by day 3, matched behaviors emerged: infants stuck out their tongues far more often in response to researchers' tongue protrusions compared with control conditions, and smacked their lips far more often while watching researchers smacking theirs. (Watch an infant imitating mouth opening at DOI: [10.1371/journal.pbio.0040302.sv001](10.1371/journal.pbio.0040302.sv001).) By day 7, the monkeys tended to decrease lip smacking when humans performed the gesture, and by two weeks, all imitative behavior stopped.

Infant rhesus monkeys, these results suggest, have a narrow imitation window that opens three days after birth, when they can reproduce human tongue protrusion and lip smacking. This imitation period is much longer in humans (two to three months) and chimps (about two months). It's possible that rhesus babies show more varied and prolonged imitative behavior in response to mom or other monkeys than to human experimenters, who may not provide the most relevant biological cues. But this narrow window does comport with the developmental schedule of rhesus monkeys, which is much shorter than that of humans and chimps.

Many questions remain about the neural mechanisms of neonatal imitation. The researchers argue that their results support a resonance mechanism linked to mirror neurons, which have recently been found to fire while monkeys observe others' lip smacking and tongue protrusion. In this model, observing human mouth gestures directly activates mirror neurons in the monkeys' brain, ultimately leading to a replication of the gesture.

Human babies can imitate an adult's facial gesture a day after seeing it, which may help them identify individuals. For rhesus monkeys, lip smacking (which often alternates with tongue protrusion) accompanies grooming sessions and signals affiliation—an important social cue for a species that is often described as "despotic and nepotistic." Picking up these social gestures early in life may well facilitate the animal's early social relations (primarily with the mother) and assimilation into the social fabric of the group, providing a mechanism for distinguishing friend from foe. It will be interesting to test the extent of imitation in monkeys with more complex social dynamics.
While the social life of rhesus monkeys may not demand the more sophisticated repertoire of behaviors seen in great apes, they seem to be hard-wired for imitation just like apes.

abstract: We investigated the association of glycemia and 43 genetic risk variants for hyperglycemia/type 2 diabetes with amino acid levels in the population-based Metabolic Syndrome in Men (METSIM) Study, including 9,369 nondiabetic or newly diagnosed type 2 diabetic Finnish men. Plasma levels of eight amino acids were measured with proton nuclear magnetic resonance spectroscopy. Increasing fasting and 2-h plasma glucose levels were associated with increasing levels of several amino acids and decreasing levels of histidine and glutamine. Alanine, leucine, isoleucine, tyrosine, and glutamine predicted incident type 2 diabetes in a 4.7-year follow-up of the METSIM Study, and their effects were largely mediated by insulin resistance (except for glutamine). We also found significant correlations between insulin sensitivity (Matsuda insulin sensitivity index) and mRNA expression of genes regulating amino acid degradation in 200 subcutaneous adipose tissue samples. Only 1 of 43 risk single nucleotide polymorphisms for type 2 diabetes or hyperglycemia, the glucose-increasing major C allele of rs780094 of *GCKR*, was significantly associated with decreased levels of alanine and isoleucine and elevated levels of glutamine. In conclusion, the levels of branched-chain and aromatic amino acids and alanine increased and the levels of glutamine and histidine decreased with increasing glycemia, reflecting, at least in part, insulin resistance. Only one single nucleotide polymorphism regulating hyperglycemia was significantly associated with amino acid levels.
author: Alena Stančáková; Mete Civelek; Niyas K. Saleem; Pasi Soininen; Antti J. Kangas; Henna Cederberg; Jussi Paananen; Jussi Pihlajamäki; Lori L. Bonnycastle; Mario A. Morken; Michael Boehnke; Päivi Pajukanta; Aldons J. Lusis; Francis S. Collins; Johanna Kuusisto; Mika Ala-Korpela; Markku Laakso
Corresponding author: Markku Laakso, .
date: 2012-07
references:
title: Hyperglycemia and a Common Variant of *GCKR* Are Associated With the Levels of Eight Amino Acids in 9,369 Finnish Men

Insulin regulates carbohydrate, lipid, protein, and amino acid metabolism (1). Insulin inhibits proteolysis and the associated release of amino acids and stimulates amino acid uptake and protein synthesis in skeletal muscle (2,3). Selected amino acids, however, enhance insulin secretion (4,5) or modulate insulin sensitivity (6–10), the two main mechanisms in the regulation of glucose homeostasis.

A recent study reported that three branched-chain amino acids (BCAAs), valine, leucine and isoleucine, and two aromatic amino acids, phenylalanine and tyrosine, predicted type 2 diabetes (11).
The risk of diabetes was fivefold higher in individuals in the top quartile of a combination of three amino acids (isoleucine, phenylalanine, and tyrosine) compared with individuals in the lowest quartile. Although some small studies have reported that the levels of amino acids differ between individuals with normal and abnormal glucose tolerance (12,13), previous studies have not investigated the levels of amino acids across the entire range of glucose tolerance.

Amino acids modulate insulin action on glucose transport (9,14–16) and gluconeogenesis (17). High levels of BCAAs, especially leucine, have been shown to associate with insulin resistance (16,18,19) or insulin-resistant states, including diabetes (6,12,13). BCAAs have been shown to downregulate insulin action on glucose uptake by inhibiting critical steps in the postreceptor insulin signaling cascade (10), although other studies have concluded that leucine and isoleucine stimulate glucose uptake (7,8,20). Gluconeogenic amino acids (mainly alanine and glutamine) can enhance hepatic glucose production and thus lead to hyperglycemia (21,22). Finally, amino acids such as arginine, glutamine, leucine, and phenylalanine directly stimulate insulin secretion (4).

Type 2 diabetes is a complex metabolic disease with a significant genetic component, and >40 gene loci associated with the risk of type 2 diabetes or hyperglycemia have been identified (23–26). The mechanisms by which these loci contribute to the risk of diabetes are only partially known. There are no previous studies on the association of these gene variants with the levels of amino acids.

The aims of our study were *1*) to investigate the relationship between the levels of amino acids and fasting and 2-h glucose across the entire range of glucose tolerance, *2*) to investigate the role of insulin sensitivity and insulin secretion in this relationship, *3*) to investigate the relationship between insulin sensitivity and adipose tissue mRNA expression of genes implicated in the catabolism of amino acids, and *4*) to investigate whether any of 43 risk single nucleotide polymorphisms (SNPs) for type 2 diabetes or hyperglycemia affect serum amino acid levels.

# RESEARCH DESIGN AND METHODS

## Subjects and clinical measurements.

The study included 9,369 nondiabetic or newly diagnosed type 2 diabetic men from the population-based Metabolic Syndrome in Men (METSIM) Study (mean ± SD age, 57 ± 7 years; BMI, 27.0 ± 4.0 kg/m^2^). The study design has been described in detail elsewhere (27).

Glucose tolerance was evaluated according to the American Diabetes Association criteria (28) (a small sketch of this classification follows below). A total of 3,026 subjects (32.3%) had normal glucose tolerance (NGT), 4,327 (46.2%) had isolated impaired fasting glucose (IFG), 312 (3.3%) had isolated impaired glucose tolerance (IGT), 1,058 (11.3%) had IFG and IGT, and 646 (6.9%) had newly diagnosed type 2 diabetes. Additional analyses were performed in 1,775 nondiabetic subjects re-examined during an ongoing follow-up METSIM study, of whom 375 maintained NGT, 1,249 remained nondiabetic, and 151 developed new type 2 diabetes during the mean follow-up of 4.7 ± 1.0 years. None of the 9,369 subjects was receiving antidiabetic treatment. BMI was calculated as weight (kg) divided by height (m) squared.
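The classification just described follows ADA criteria cited by reference (28). The Python sketch below encodes the conventional ADA fasting and 2-h glucose cutoffs; those exact values are an assumption here, since the text cites them rather than restating them.

```python
def classify_glucose_tolerance(fpg, two_h_pg):
    """Classify glucose tolerance from FPG and 2hPG in mmol/L.
    Cutoffs are the conventional ADA values (assumed, not quoted in the text):
    FPG >= 7.0 or 2hPG >= 11.1 -> diabetes; FPG 5.6-6.9 -> IFG; 2hPG 7.8-11.0 -> IGT."""
    if fpg >= 7.0 or two_h_pg >= 11.1:
        return "newly diagnosed type 2 diabetes"
    ifg = 5.6 <= fpg < 7.0
    igt = 7.8 <= two_h_pg < 11.1
    if ifg and igt:
        return "IFG and IGT"
    if ifg:
        return "isolated IFG"
    if igt:
        return "isolated IGT"
    return "NGT"

# Example: a subject with FPG 6.1 and 2hPG 8.4 mmol/L falls into "IFG and IGT".
print(classify_glucose_tolerance(6.1, 8.4))
```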
The study was approved by the ethics committee of the University of Kuopio and Kuopio University Hospital and conducted in accordance with the Helsinki Declaration.

## Amino acid measurements.

A high-throughput serum nuclear magnetic resonance (NMR) platform operating at 500 MHz was used for amino acid quantification (29). Fasting serum samples collected at the baseline study were stored at −80°C and thawed overnight in a refrigerator before sample preparation. Aliquots of each sample (300 μL) were mixed with sodium phosphate buffer (300 μL). A proton NMR spectrum was acquired in which most spectral signals from the macromolecules and lipoprotein lipids were suppressed to enhance detection of the amino acid signals. The eight amino acids (alanine, phenylalanine, valine, leucine, isoleucine, tyrosine, histidine, glutamine) were quantified in standardized concentration units. Details of the NMR experimentation and amino acid quantification have been described previously (29,30).

## Insulin sensitivity and insulin secretion indices.

Results of oral glucose tolerance testing (OGTT) were used to calculate the Matsuda index of insulin sensitivity (ISI) as 10,000/√(fasting insulin × fasting glucose × mean insulin during OGTT × mean glucose during OGTT) (31). An index of early-phase insulin secretion during an OGTT, insulin area under the curve (InsAUC)~0–30~/glucose (Glu)AUC~0–30~, was calculated as (fasting insulin + 30-min insulin)/(fasting glucose + 30-min glucose) (pmol/mmol) (27).

## Genotyping.

Genotyping of 43 SNPs (29 SNPs associated with risk for type 2 diabetes and 14 associated with increased fasting or 2-h glucose in an OGTT) (23–26) was performed using the Applied Biosystems TaqMan Allelic Discrimination Assay at the University of Eastern Finland or the Sequenom iPlex Gold SBE assay at the National Human Genome Research Institute at the National Institutes of Health. The TaqMan genotyping call rate was 100%, and the discordance rate was 0% among the 4.5% of DNA samples genotyped in duplicate. The Sequenom iPlex call rate was 90.2–96.9%, and the discordance rate was 0% among the 4.2% of DNA samples genotyped in duplicate. All SNPs were in Hardy-Weinberg equilibrium at the significance level corrected for multiple testing by the Bonferroni method (*P* < 0.0012). Descriptive data for individual SNPs are shown in [Supplementary Table 1](http://diabetes.diabetesjournals.org/lookup/suppl/doi:10.2337/db11-1378/-/DC1).

## Gene expression analysis.

Total RNA was isolated from 200 subcutaneous fat biopsy samples of METSIM participants using the Qiagen miRNeasy kit according to the manufacturer's instructions. RNA integrity number values were assessed with the Agilent Bioanalyzer 2100. High-quality samples (RNA integrity number >7.0) were used for transcriptional profiling with the Illumina Human HT-12 v3 Expression BeadChip. Genome Studio software (2010.v3) was used for obtaining fluorescent intensities. The HT-12 BeadChip contains 48,804 expression and 786 control probes. Expression data from 19,306 probes were removed because of *1*) failure of the probe to align to a genomic or transcriptomic location; *2*) alignment of the probe to multiple genomic or transcriptomic locations; or *3*) presence of SNPs in the probe sequence that may affect hybridization efficiency, using the methodology developed by Barbosa-Morais et al. (32).
The remaining 29,497 probes were processed using nonparametric background correction, followed by quantile normalization with control and expression probes, using the *neqc* function in the *limma* package (R v2.13.0) (33). The 16,223 probes with detection *P* values <0.01 in any of the 200 samples were used for further analysis. Gene expression data have been deposited in the Gene Expression Omnibus (GEO) with the accession number GSE32512.

## Statistical analysis.

Statistical analyses were conducted using SPSS 17 software (SPSS, Chicago, IL). All amino acids, BMI, Matsuda ISI, and the early-phase insulin secretion index (InsAUC~0–30~/GluAUC~0–30~) were log-transformed to correct for their skewed distribution. Amino acids were compared across the fasting and 2-h glucose categories using the general linear model adjusted for age and BMI, or additionally for Matsuda ISI or InsAUC~0–30~/GluAUC~0–30~ indices. *P* < 0.003 (corrected for 16 tests by the Bonferroni method) was considered statistically significant. Associations between amino acid levels and indices of insulin sensitivity and insulin secretion were evaluated with Pearson correlation coefficients. The association of amino acid levels with newly developed type 2 diabetes was tested with logistic regression adjusted for confounding factors. Correlations between gene expression levels and phenotypes were calculated using the Pearson correlation coefficient. We used the Benjamini–Hochberg false discovery rate (FDR) method (34) to correct for multiple comparisons and considered an FDR-adjusted *P*~FDR~ < 0.05 statistically significant.

For genetic association analysis, unstandardized effect sizes (B [SE]) per copy of the minor allele were estimated by linear regression analysis adjusted for age and BMI, using untransformed dependent variables, and percentages of B from the mean were calculated. *P* values were calculated using logarithmically transformed variables when appropriate. A *P* < 1.45 × 10^−4^, adjusted for multiple comparisons by the Bonferroni method, was considered statistically significant given a total of 344 tests performed (8 traits × 43 SNPs). We had ≥80% power to detect changes in the mean trait value from 1.4 to 5.9% per copy of the minor allele at the significance level of 0.05, depending on the minor allele frequency ([Supplementary Fig. 1](http://diabetes.diabetesjournals.org/lookup/suppl/doi:10.2337/db11-1378/-/DC1)). Hardy-Weinberg equilibrium was evaluated by the χ^2^ test.

# RESULTS

## Hyperglycemia and the levels of eight amino acids.

We generated categories of fasting (FPG) and 2-h (2hPG) plasma glucose (in 0.5 and 1.0 mmol/L steps, respectively) to investigate the relationship between amino acid levels and glycemia in participants with normoglycemia, IFG, IGT, and type 2 diabetes. Categories with FPG <5.0 mmol/L and 2hPG <5.0 mmol/L were set as the reference categories.
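Before turning to the results, the glucose categories just defined and the two OGTT-derived indices from the Methods can be made concrete with a short sketch. The Python fragment below is illustrative only (function names and units are as written in the formulas above; it is not the SPSS analysis the authors ran).

```python
from math import sqrt, floor

def matsuda_isi(fasting_glu, fasting_ins, mean_glu_ogtt, mean_ins_ogtt):
    """Matsuda ISI, as defined in the Methods:
    10,000 / sqrt(fasting insulin x fasting glucose x mean OGTT insulin x mean OGTT glucose)."""
    return 10_000 / sqrt(fasting_ins * fasting_glu * mean_ins_ogtt * mean_glu_ogtt)

def early_secretion_index(glu0, glu30, ins0, ins30):
    """InsAUC(0-30)/GluAUC(0-30) = (fasting + 30-min insulin) / (fasting + 30-min glucose)."""
    return (ins0 + ins30) / (glu0 + glu30)

def fpg_category(fpg, step=0.5, ref_upper=5.0):
    """Bin fasting glucose (mmol/L) into 0.5 mmol/L categories; values
    below 5.0 form the reference category. 2-h glucose was binned
    analogously in 1.0 mmol/L steps."""
    if fpg < ref_upper:
        return "<5.0"
    lower = ref_upper + step * floor((fpg - ref_upper) / step)
    return f"{lower:.1f}-{lower + step:.1f}"

# Example: FPG of 6.3 mmol/L falls into the "6.0-6.5" category.
print(fpg_category(6.3))
```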
Across the FPG categories, we observed a significant (*P* < 0.003) increase in isoleucine (+38% in the highest glucose category versus the reference category, *P* = 6.7 × 10^−10^ adjusted for age and BMI), tyrosine (+21%, *P* = 1.8 × 10^−15^), alanine (+17%, *P* = 4.3 × 10^−42^), phenylalanine (+15%, *P* = 1.1 × 10^−9^), and leucine (+19%, *P* = 4.2 × 10^−4^) levels, and a significant decrease in histidine (−9%, *P* = 3.3 × 10^−4^) and glutamine (−22%, *P* = 1.9 × 10^−48^; Fig. 1A). Similar trends were seen across the 2hPG categories, with a significant increase in isoleucine (+38%, *P* = 2.7 × 10^−58^), tyrosine (+30%, *P* = 1.1 × 10^−10^), alanine (+14%, *P* = 2.3 × 10^−46^), phenylalanine (+14%, *P* = 1.1 × 10^−13^), and leucine (+15%, *P* = 1.0 × 10^−18^) levels, and a significant decrease in histidine (−5%, *P* = 9.1 × 10^−6^) and glutamine (−12%, *P* = 1.4 × 10^−31^) levels with higher 2hPG (Fig. 1B). These effects were more significant in obese (BMI ≥27 kg/m^2^) than in nonobese (BMI <27 kg/m^2^) participants, although the trends were similar in both groups ([Supplementary Fig. 2](http://diabetes.diabetesjournals.org/lookup/suppl/doi:10.2337/db11-1378/-/DC1)). Overall, most amino acids increased, whereas glutamine and histidine decreased, with higher FPG and 2hPG.

Table 1 shows that all amino acids that were increased in hyperglycemia correlated negatively with the Matsuda ISI (*r* ≤ −0.3, *P* ≤ 1 × 10^−154^) and positively (*r* ≥ 0.2, *P* ≤ 3 × 10^−85^) with the InsAUC~0–30~/GluAUC~0–30~ index of early-phase insulin secretion. For glutamine, which was decreased in hyperglycemia, the correlations were weaker and in the opposite direction (*r* = 0.17 for Matsuda ISI and −0.12 for InsAUC~0–30~/GluAUC~0–30~). Correlations of amino acid levels (except for histidine) with InsAUC~0–30~/GluAUC~0–30~ were weaker than correlations with Matsuda ISI and were largely attenuated after the adjustment of InsAUC~0–30~/GluAUC~0–30~ for Matsuda ISI. Furthermore, additional adjustment for Matsuda ISI attenuated or abolished most of the associations between glucose categories and amino acid levels (with the exception of valine and histidine), whereas adjustment for InsAUC~0–30~/GluAUC~0–30~ attenuated only the associations for histidine ([Supplementary Table 2](http://diabetes.diabetesjournals.org/lookup/suppl/doi:10.2337/db11-1378/-/DC1)). Thus, insulin sensitivity seemed to at least partly explain the relationship between glucose and amino acid levels.

Table 1. Pearson correlations between the levels of eight amino acids and indices of insulin sensitivity (Matsuda ISI) and early-phase insulin secretion (InsAUC~0–30~/GluAUC~0–30~) in nondiabetic METSIM participants.

## Association of eight amino acid levels with risk of type 2 diabetes.

We investigated the relationship between amino acid levels and the development of type 2 diabetes during a 4.7-year follow-up, including 526 re-examined METSIM participants (375 with NGT at baseline and follow-up examinations, and 151 with type 2 diabetes).
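As stated in the statistical analysis subsection, these diabetes associations were tested with logistic regression adjusted for confounders. The sketch below is a minimal, hypothetical rendering of that adjustment scheme in Python with statsmodels, using simulated data and invented column names rather than METSIM data.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated stand-in data; column names are invented for illustration.
rng = np.random.default_rng(1)
df = pd.DataFrame({
    "incident_t2d": rng.binomial(1, 0.2, 500),
    "alanine":      rng.lognormal(5.0, 0.2, 500),
    "age":          rng.normal(57, 7, 500),
    "bmi":          rng.lognormal(3.3, 0.14, 500),
})

# Logistic regression of incident diabetes on a log-transformed amino acid,
# adjusted for age and (log) BMI, mirroring the adjustment scheme described above.
model = smf.logit("incident_t2d ~ np.log(alanine) + age + np.log(bmi)", data=df).fit(disp=0)
print(model.summary().tables[1])

# Further adjustment for insulin sensitivity would add a log Matsuda ISI term:
# "incident_t2d ~ np.log(alanine) + age + np.log(bmi) + np.log(matsuda_isi)"
```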
Of the eight amino acids measured at baseline, high levels of alanine (*P* = 6.7 × 10^−5^), leucine (*P* = 0.005), isoleucine (*P* = 3.3 × 10^−5^), tyrosine (*P* = 0.001), and phenylalanine (*P* = 0.048) were significantly associated with incident diabetes (adjusted for age and BMI; Table 2). Additional adjustment for Matsuda ISI alone, or for Matsuda ISI and InsAUC~0–30~/GluAUC~0–30~, but not for InsAUC~0–30~/GluAUC~0–30~ alone, abolished the statistical significances, suggesting that the association of these amino acids with incident diabetes was mostly explained by insulin resistance. Further adjustment for fasting hyperglycemia did not essentially change the association, but adjustment for 2-h glucose made the association for alanine statistically significant (*P* = 0.027). An elevated baseline glutamine level was significantly associated with a decreased risk of newly developed type 2 diabetes (*P* = 4.1 × 10^−6^), and this association persisted after adjustment for Matsuda ISI (*P* = 1.5 × 10^−4^) or InsAUC~0–30~/GluAUC~0–30~ (*P* = 6.7 × 10^−6^) or both (*P* = 0.048). Additional adjustment for fasting glucose abolished this association (*P* = 0.051). The levels of glutamine were significantly lower at baseline in the 440 participants who developed abnormal glucose tolerance (IGT and/or IFG, diabetes) at follow-up, compared with the 375 participants who remained normoglycemic (155.6 vs. 159.6; *P* = 0.019, adjusted for age and BMI; [Supplementary Table 3](http://diabetes.diabetesjournals.org/lookup/suppl/doi:10.2337/db11-1378/-/DC1)). We also performed statistical analyses in a larger sample, including all 1,624 re-examined participants who did not develop diabetes during the follow-up and the 151 participants who developed type 2 diabetes. The results remained essentially similar, although the *P* values were less significant.

Table 2. Association between the baseline levels of eight amino acids and newly developed type 2 diabetes during the follow-up of the METSIM study participants (logistic regression adjusted for age, BMI, and additional covariates).

## Gene expression of genes involved in amino acid metabolism in relation to insulin sensitivity.

Analysis of microarray data from subcutaneous adipose tissue samples of 200 METSIM participants showed that Matsuda ISI correlated significantly with mRNA levels of several genes involved in the metabolism of alanine, including a key enzyme, alanine aminotransferase (*r* = 0.46, *P*~FDR~ = 9.7 × 10^−9^); glutamine, including a key enzyme, glutamine synthetase (*r* = 0.44, *P*~FDR~ = 4.0 × 10^−8^); BCAAs, including the key enzymes of BCAA degradation, branched-chain amino acid transaminase 2 (*r* = 0.36, *P*~FDR~ = 2.5 × 10^−5^) and branched-chain α-keto acid dehydrogenase (BCKDH) A and B (*r* = 0.45 and 0.35, *P*~FDR~ = 1.3 × 10^−8^ and 3.0 × 10^−5^); and phenylalanine, tyrosine, and histidine ([Supplementary Table 4](http://diabetes.diabetesjournals.org/lookup/suppl/doi:10.2337/db11-1378/-/DC1)). The most consistent results were observed for BCAA degradation, as enzyme mRNAs correlated positively with Matsuda ISI at almost all steps of the metabolic pathway ([Supplementary Fig. 3](http://diabetes.diabetesjournals.org/lookup/suppl/doi:10.2337/db11-1378/-/DC1)).
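The probe-by-probe correlation and FDR procedure described in the Methods can be sketched as follows. The data here are simulated, and the code illustrates the Pearson-plus-Benjamini–Hochberg workflow rather than the authors' actual pipeline.

```python
import numpy as np
from scipy.stats import pearsonr
from statsmodels.stats.multitest import multipletests

rng = np.random.default_rng(0)
n_samples, n_probes = 200, 1000           # 200 adipose samples; probe count is illustrative
expression = rng.normal(size=(n_samples, n_probes))
log_matsuda = rng.normal(size=n_samples)  # stand-in for log-transformed Matsuda ISI

# Pearson correlation of each probe's expression with insulin sensitivity,
# then Benjamini-Hochberg FDR adjustment, as described in the Methods.
pvals = np.array([pearsonr(expression[:, j], log_matsuda)[1] for j in range(n_probes)])
reject, p_fdr, _, _ = multipletests(pvals, alpha=0.05, method="fdr_bh")

print(f"{reject.sum()} probes with P_FDR < 0.05")
```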
Figure 2 shows that the mRNA levels of three key enzymes in BCAA degradation also correlated positively with Matsuda ISI and negatively with FPG and 2hPG levels. The associations with glucose, however, disappeared when controlled for Matsuda ISI. These results indicate that the association between insulin sensitivity and amino acid catabolism (especially BCAA catabolism) also exists at the mRNA level and add further evidence that insulin sensitivity likely contributes to the higher BCAA levels seen in hyperglycemia.

## Risk variants for type 2 diabetes and/or hyperglycemia and the levels of eight amino acids.

Of the 43 SNPs investigated, only *GCKR* rs780094 was significantly associated with the levels of several amino acids after Bonferroni correction for multiple testing (*P* < 1.45 × 10^−4^) (Fig. 3 and [Supplementary Table 1](http://diabetes.diabetesjournals.org/lookup/suppl/doi:10.2337/db11-1378/-/DC1)). The glucose-increasing C (major) allele was associated with lower levels of alanine (effect size −1.9% per C allele, *P* = 1.6 × 10^−11^) and isoleucine (−2.3%, *P* = 3.1 × 10^−6^), and higher levels of glutamine (+1.2%, *P* = 1.0 × 10^−6^, adjusted for age and BMI). The C allele also had nominally significant effects on leucine (−1.2%, *P* = 0.001), tyrosine (+1.1%, *P* = 0.001), and histidine (+0.8%, *P* = 0.003). These associations did not change significantly after additional adjustment for FPG, 2hPG, or Matsuda ISI (data not shown), traits known to be modulated by SNPs of *GCKR*. Furthermore, the association of amino acids with newly developed type 2 diabetes was not affected by further adjustment for rs780094.

A number of nominally significant associations of other SNPs with amino acids were found; the top ranking was rs8042680 (*PRC1*). The type 2 diabetes risk allele (A) was nominally associated with higher levels of leucine (+1.2%, *P* = 0.002), isoleucine (+1.7%, *P* = 0.001), and valine (+1.2%, *P* = 0.001; [Supplementary Table 1](http://diabetes.diabetesjournals.org/lookup/suppl/doi:10.2337/db11-1378/-/DC1)). The most consistent nominally significant associations were seen for rs7578326 (*IRS1*). The nonrisk allele (G) was associated with lower leucine (−0.8%, *P* = 0.019), isoleucine (−1.5%, *P* = 0.003), tyrosine (−0.7%, *P* = 0.037), valine (−0.7%, *P* = 0.047), and histidine (−0.5%, *P* = 0.039) levels.

# DISCUSSION

This is the first large, population-based study aiming to investigate the relationship between hyperglycemia and 43 risk SNPs for type 2 diabetes/hyperglycemia and amino acid levels. We observed that with increasing FPG and/or 2hPG, the levels of alanine, valine, leucine, isoleucine, phenylalanine, and tyrosine increased, whereas the levels of histidine and glutamine decreased. Significant correlations between insulin sensitivity and mRNA expression of genes regulating amino acid metabolism were found. Only 1 SNP (rs780094 in *GCKR*) of the 43 risk SNPs for type 2 diabetes or hyperglycemia was significantly associated with the levels of several amino acids.

## Hyperglycemia and levels of eight amino acids.

We demonstrated that the levels of alanine, phenylalanine, valine, leucine, isoleucine, and tyrosine increased and the levels of histidine and glutamine decreased in hyperglycemia, indicating that the changes in amino acid levels closely parallel the changes in FPG and 2hPG levels.
Similar trends were also observed in normoglycemia, suggesting that even mild elevations of glucose levels result in changes in amino acid levels. The largest and most significant increase across the FPG and 2hPG categories was observed for isoleucine (increased up to 38 and 30%, respectively, compared with the reference category) and alanine (increased up to 17 and 14%), and the largest decrease was seen for glutamine (by 22 and 12%, respectively). We also showed that obesity did not have a major effect on these associations, although the effects of hyperglycemia on the levels of amino acids tended to be more pronounced in obese individuals.

The levels of all amino acids correlated inversely (except for glutamine, which correlated positively) with insulin sensitivity (Matsuda ISI). The strongest correlation was found for isoleucine (*r* = −0.49), a BCAA previously shown to be a potent activator of glucose uptake in skeletal muscle (7,8). Insulin resistance is likely to be an important mechanism explaining these associations, because adjustment for Matsuda ISI attenuated or abolished most of the associations between glucose (especially 2hPG) and amino acid levels. Further evidence for the role of insulin resistance in mediating the associations of amino acids with hyperglycemia comes from our 4.7-year prospective follow-up of the METSIM cohort. We demonstrated that alanine, leucine, isoleucine, tyrosine, and glutamine were associated with incident type 2 diabetes. Adjustment for insulin sensitivity abolished the significant associations, with the exception of glutamine, indicating that insulin resistance is likely to play an important role in the risk of type 2 diabetes induced by amino acids. Insulin secretion or hyperglycemia did not significantly modify these associations. Our study not only supports the recent observations of a relationship between amino acid levels and the risk of diabetes (11) but also offers a possible underlying mechanism to explain this relationship.

Our study found that the levels of BCAAs (particularly isoleucine) were strongly correlated with mRNA levels of genes involved in BCAA catabolism. BCAAs are thought to be early indicators of insulin resistance (6,10,18,35). High levels of BCAAs promote insulin resistance by interfering with the insulin-signaling pathway (10) or by directly inhibiting muscle glucose transport and/or phosphorylation in skeletal muscle (9), whereas insulin resistance in turn may contribute to high levels of amino acids through increased release of BCAAs (and other amino acids) due to the impaired ability of insulin to suppress proteolysis in skeletal muscle (36). BCAAs are oxidized in skeletal muscle and adipose tissue (37) to form alanine and glutamine (38). Oxidation of BCAAs was increased in individuals with type 2 diabetes (21). In animal models of type 2 diabetes, BCAA catabolism is downregulated due to low activity of the BCKDH complex (39), the key enzyme of BCAA oxidation. In our study, adipose tissue mRNA expression of *BCKDHA* and *BCKDHB* (encoding the α and β polypeptides of BCKDH E1), as well as of *BCAT2* (branched-chain amino acid transaminase 2, mitochondrial; the first enzyme in BCAA catabolism) and other genes involved in BCAA catabolism, correlated positively with Matsuda ISI and negatively with glucose levels. This suggests that an insulin resistance-related decrease in the degradation of BCAAs could be one of the mechanisms leading to the elevation of BCAA levels in hyperglycemia in humans.
Other possible mechanisms for the elevation of BCAAs in hyperglycemia could be decreased uptake and increased release of BCAAs from skeletal muscle due to increased protein catabolism in insulin resistance.

Alanine is a nonessential amino acid synthesized from pyruvate and amino acids (mainly BCAAs), primarily in skeletal muscle and gut, and used for gluconeogenesis in the liver. Alanine is the main precursor for gluconeogenesis in the liver (38,40) and a stimulator of glucagon secretion (41). Therefore, high plasma levels of alanine may contribute especially to fasting hyperglycemia by enhancing gluconeogenesis. Our study shows that elevated levels of alanine in hyperglycemia, both in obese and nonobese individuals, could be mediated by insulin resistance, which is supported by a significant positive correlation between insulin sensitivity and the mRNA level of alanine aminotransferase, the key enzyme in alanine metabolism.

Glutamine is a nonessential gluconeogenic amino acid synthesized from glucose or amino acids or released by proteolysis from skeletal muscle (42). Unlike alanine, glutamine is primarily taken up by the gut and kidneys, where it is processed to glucose via the gluconeogenesis pathway. Glucagon, but not insulin sensitivity, seems to play a role in the uptake of glutamine by the gut (43). Differential regulation of glutamine metabolism, compared with other amino acids, could possibly explain the opposite associations between glutamine and glucose metabolism. We found that plasma levels of glutamine decreased with elevated levels of FPG and 2hPG and that a low level of glutamine was significantly associated with future diabetes, independently of obesity and insulin resistance. Baseline glutamine levels were already decreased in normoglycemic participants who developed abnormal glucose tolerance at follow-up compared with those who remained normoglycemic. Our results are in agreement with previous studies showing that patients with type 2 diabetes (12) and lean insulin-resistant normoglycemic offspring of patients with type 2 diabetes have lower levels of glutamine than normoglycemic individuals (44).

## Risk variants for type 2 diabetes and/or hyperglycemia and the levels of eight amino acids.

A common variant in *GCKR* was the only SNP among the 43 loci associated with the levels of several amino acids. *GCKR* encodes the glucokinase regulatory protein, which inhibits the effects of glucokinase (GCK) on glycogen synthesis and glycolysis in the liver (45). SNPs at the *GCKR* locus have been associated with fasting glycemia (24,46), risk of type 2 diabetes (46,47), insulin resistance (46–48), and elevated hepatic glucose uptake (49). In our study, the glucose-increasing major C allele of the intronic SNP rs780094 was significantly and unexpectedly associated with decreased levels of alanine and isoleucine and elevated levels of glutamine. The mechanisms explaining the relationship of rs780094 of *GCKR* with amino acids are unknown, but the association with low levels of alanine could be a consequence of reduced glycolysis induced by *GCKR*, resulting in decreased production of pyruvate and its conversion to alanine. On the basis of our study, it remains unclear whether the effects of rs780094 of *GCKR* on amino acid metabolism are primary or secondary.
Although in our study the C allele of rs780094 was associated with both FPG and 2hPG, as well as with Matsuda ISI, the association of *GCKR* with the levels of amino acids was independent of these variables (data not shown). The *GCKR* variant did not affect the association of amino acids with newly developed type 2 diabetes, which agrees with its modest effect on amino acid levels.

Although several type 2 diabetes/hyperglycemia risk SNPs showed nominally significant associations with the levels of selected amino acids, the most consistent results were found for rs7578326 near *IRS1*. The major A allele, previously linked to type 2 diabetes risk in a genome-wide association study (26), was nominally associated with elevated levels of several amino acids (valine, leucine, isoleucine, tyrosine, histidine). *IRS1* encodes insulin receptor substrate 1, an important component of the insulin-signaling pathway, and its gene variant affects insulin resistance and adiposity (50). The association of *IRS1* with amino acid levels, if confirmed, could contribute to evidence for a causal relationship between insulin resistance and amino acid metabolism. The type 2 diabetes risk allele of rs8042680 in *PRC1*, encoding protein regulator of cytokinesis 1, was associated specifically, but only nominally, with higher levels of all three BCAAs. *PRC1* is not known to have a function in amino acid metabolism. These results suggest that the relationship between the levels of glucose and amino acids could be determined, at least in part, by genes regulating hyperglycemia or the risk of diabetes.

This study has limitations. Only Finnish men were included, and therefore we do not know whether our results are applicable to women or to different ethnic or racial groups. We had only modest statistical power to demonstrate statistically significant associations of gene variants with amino acids.

In conclusion, our large, population-based study shows that the levels of branched-chain and aromatic amino acids and alanine increase, whereas glutamine and histidine decrease, with increasing FPG and/or 2hPG levels. These associations seemed to be mediated, at least in part, by insulin resistance, as supported by correlations of the expression of genes involved in amino acid metabolism with the Matsuda ISI. Among the 43 loci associated with risk for hyperglycemia or type 2 diabetes, only *GCKR* rs780094 was significantly associated with several amino acids, especially with the levels of alanine.

## ACKNOWLEDGMENTS

This work was supported by the Academy of Finland (A.S., M.L., SALVE program to M.A.-K.), the Finnish Diabetes Research Foundation (M.L.), the Finnish Cardiovascular Research Foundation (M.L., M.A.-K.), the Jenny and Antti Wihuri Foundation (A.J.K.), an EVO grant from the Kuopio University Hospital (5263), DK062370 to M.B., 1Z01 HG000024 to F.S.C., and National Institutes of Health Grant HL-095056 to P.P. and Grant HL-28481 to P.P. and A.J.L.

No potential conflicts of interest relevant to this article were reported.

A.S. wrote the manuscript and researched the data. M.C., N.K.S., P.P., and A.J.L. performed the mRNA experiments, analyzed the data, and reviewed and edited the manuscript. P.S. conceived, designed, and performed the NMR experiments, analyzed the data, and reviewed and edited the manuscript. A.J.K. analyzed the NMR data, contributed analysis tools, and reviewed and edited the manuscript. H.C., J.Pa., and J.Pi. researched the data and reviewed and edited the manuscript.
L.L.B., M.A.M., and F.S.C. designed and performed genotyping and reviewed and edited the manuscript. M.B. contributed analysis tools and reviewed and edited the manuscript. J.K. designed the study and reviewed the manuscript. M.A.-K. conceived and designed the NMR experiments, analyzed the data, and reviewed and edited the manuscript. M.L. designed the study, contributed to discussion, and reviewed and edited the manuscript. M.L. is the guarantor of this work and, as such, had full access to all of the data in the study and takes responsibility for the integrity of the data and the accuracy of the data analysis.

# REFERENCES

abstract: A report on the 13th International Congress on Yeasts (ICY), held at Madison, Wisconsin, USA, 26-30 August 2012.
author: Daniela Delneri
date: 2012
institute: 1Faculty of Life Sciences, University of Manchester, Oxford Road, Manchester M13 9PL, UK
title: The yeast factory

Almost 400 scientists attended the 13th International Congress on Yeasts (ICY), where advances in yeast physiology, genomics, biotechnology, ecology and taxonomy were reported. In addition, the meeting celebrated the achievements accomplished with yeast in the past two decades, with the keynote speaker, Steve Oliver (University of Cambridge, UK), giving an overview of the scientific findings in yeast genomics, starting from the pioneering sequencing of *Saccharomyces cerevisiae* chromosome III in 1992 to the recent translational studies using yeast as a model to predict copy number variation phenotypes in human cancer cell lines. The plenary speaker Bernard Dujon (Institut Pasteur, France) also superbly summarized the history of comparative genomics with yeasts, the achievements of the Génolevures consortium (), and the evolutionary insights gained in the hemiascomycetes lineage.

Here, I highlight some contributions from three key themes of the meeting, including genomic diversity, comparative studies and evolution; biogeography; and taxonomy.

# Genomic diversity, comparative analysis and evolution

*S. cerevisiae* yeasts form diversified groups, usually defined by the source from which they were isolated. Justin Fay (Washington University, USA) analyzed a highly differentiated group of *S. cerevisiae* isolated from grapes to assess whether the level of phenotypic variation between natural and domesticated strains could be a consequence of limited gene flow between sympatric wild and vineyard strains.
Chardonnay must (pressed grapes) was fermented with both oak tree isolates and wine strains of yeast, and differences in the aroma and flavor of the resulting wine were assessed by 51 human volunteers. Not only was the oak tree wine the least palatable, but genome comparative analysis also showed that the wild isolates were exchanging genes with vineyard strains, potentially contaminating wine production with off-flavors. Fay concluded there is no clear link between the genetic diversification of vineyard yeasts and domestication.

Understanding the molecular basis of phenotypic differences among yeast strains may lead to the engineering of new strains of biotechnological relevance. Amparo Querol (IATA-CSIC, Valencia, Spain) presented her studies on differential expression profiles during wine fermentation of 29 yeast strains (14 fermentative yeasts and 15 wild isolates). The detection of differences in transcript levels between homologous genes from natural and wine strains can help the selection of properties that are desirable in enology (fermentation at low temperature, high production of glycerol and lower ethanol yield).

Comparative genomics studies have also explored the evolution and reconstruction of yeast carbon metabolism. Jure Piskur (Lund University, Sweden) investigated the origin of the Crabtree effect, which enables some species of yeast to produce ethanol, rather than biomass, at high sugar concentrations, by analyzing 40 different yeasts in the Saccharomycetes class, covering 200 million years of history. Interestingly, he found that species that originated before the whole genome duplication, such as *Lachancea* species, were 'Crabtree positive', producing as much ethanol as biomass. He concluded that the ability to produce ethanol from sugar is independent of the whole genome duplication, and originated about 120 million years ago, when the first modern flowering plants and fruits appeared on our planet.

Dawn Thompson (Broad Institute, USA) compared transcriptome data for 15 fungal species grown in a variety of conditions in order to study how regulation evolves and how network rewiring can lead to phenotypic adaptation. Growth rate, expression profiles, metabolites and nucleosome organization profiles were analyzed and compared to reconstruct molecular networks of ancestral and extant species.

Interspecies hybridization is a source of phenotypic diversity for the colonization of new environments. Eladio Barrio (University of Valencia, Spain) analyzed the genomes of hybrids between *S. cerevisiae* and *Saccharomyces kudriavzevii*, focusing on mitochondrial inheritance. There was a tendency for the hybrids to lose the portion of the genome derived from *S. kudriavzevii*, but also to retain the mitochondria from this species. Phylogenetic reconstruction of mitochondria based on the cytochrome-c oxidase subunit II (*COX2*) sequence showed that mitochondrial DNA often recombines in a region between *COX2* and the *ORF1* sequence (a large ORF related to the group I introns), and that introgression from *S. paradoxus* is present.

Serge Casaregola (CIRM-Levures, INRA, France) carried out a population genomics study of two species of the CTG clade (yeasts that translate CTG as serine instead of leucine), *Millerozyma farinosa* and *Debaryomyces hansenii*, and showed that several cryptic species, which could not be distinguished using ribosomal DNA analysis, were present. New hybrids, such as those between *M. farinosa* and *Millerozyma miso*, were also discovered. These hybrids seemed to readily undergo loss of heterozygosity during sporulation. Casaregola concluded that the diversity among yeasts of the CTG clade is wider than previously thought, with the coexistence of many cryptic species, haploids, heterozygous diploids and hybrids.

Yeast has long been used as a model to study the phenotypic effects of copy number variation and mitochondrial diseases. Maitreya Dunham (University of Washington, USA) analyzed the origins and consequences of copy number variation in yeast strains grown in a chemostat under different nutritional limitations. By introducing a collection of barcoded plasmids containing a copy of every gene into the yeast cells, she was able to study the effect of duplications. She identified a set of genes whose copy number increase was beneficial to the cell in all the conditions tested, and others whose benefit was environment-dependent. Dunham also reported a new inverted tandem repeat structure that is inconsistent with the currently accepted mechanisms of DNA repair. She proposed a new model for origin-dependent inverted repeat amplification, suggesting that such tandem architecture can be caused by an error in the DNA replication fork.

Monique Bolotin-Fukuhara (Université Paris Sud, France) described the generation of a collection of yeast strains with mutations in mitochondrial transfer RNAs that mimicked human pathological diseases. She showed that the severity of the human phenotype was recapitulated in yeast, and that the nature of the mitochondrial sequence neighboring the engineered mutations affected the severity of the phenotype.

# Biogeography

Significant advances in the ecology and natural distribution of *Saccharomyces* *sensu stricto* species have been made recently. José Paulo Sampaio (Universidade Nova de Lisboa, Portugal) explained how the belief that *Saccharomyces* was a purely domesticated organism, living mainly in human-made environments such as fermenting grapes, hampered the detection of truly ecological species for several years. Improvements in the isolation methodologies from natural substrates are gradually changing this situation, allowing the identification of several new yeast isolates. In fact, yeasts are widely distributed on oaks and on species of the closely related *Nothofagus* genus, found in Patagonia, the Philippines, South Australia and New Zealand. The cryotolerant species *Saccharomyces arboriculus*, *Saccharomyces eubayanus* and *Saccharomyces uvarum* have all been isolated from *Nothofagus* species. Sampaio's study of *S. uvarum* biogeography has shown the existence of three different populations, found in Europe, South America and Australasia, respectively. Interestingly, the Australasian population is highly divergent, and hybrid crosses of Australasian strains with those from the South American populations have as little as 36% spore viability. Although *Saccharomyces paradoxus* and *S. cerevisiae* seem to have a global distribution, other lineages such as *S. kudriavzevii* and *Saccharomyces mikatae* were isolated from restricted geographical areas.

Christopher Hittinger (University of Wisconsin, USA) reported on the discovery and the genome analysis of *S. eubayanus* (isolated from *Nothofagus* sp. in Patagonia), the missing parental contributor to the lager-brewing yeast *Saccharomyces pastorianus*. A total of 200 *S. eubayanus* and *S. uvarum* strains were isolated in Argentina and were sequenced to determine the genetic diversity and population structure of this species. Interestingly, Hittinger pointed out that both *S. eubayanus* and *S. uvarum* have retained some parts of the RNA interference machinery in their genomes, such as the *Dicer 1* (*DCR1*) gene, which are lost in the other *Saccharomyces sensu stricto* species. Hittinger also mentioned the *Saccharomyces sensu stricto* consortium (SSS), which includes improved and assembled genome sequences for these yeasts ().

# Taxonomy

The theme of taxonomy was an important part of the meeting, given the rapid and increasing number of newly discovered yeast species. In his plenary lecture, Teun Boekhout (CBS-KNAW, Utrecht, The Netherlands) called for a revision of the phylogenetic classification not only of the basidiomycetous yeasts, but also of the highly polyphyletic *Candida* genus, in which the majority of species are harmless and have no relation to the human pathogen *Candida albicans*. The systematics of yeast species taxonomy should be carried out using refined multigene phylogenies or whole genome analysis, because the morphological and physiological criteria are often misleading, conflicting with the gene tree. It follows that there is a need for whole genome sequencing projects at the species level, building on the existing genomic characterization of several strains belonging to the *S. cerevisiae* and *S. paradoxus* species (). Boekhout also discussed his large-scale effort to determine unusual phenotypes in less explored non-conventional yeasts that may be of industrial relevance (the EU Cornucopia project).

Vincent Robert (CBS-KNAW, Utrecht, the Netherlands) gave an update on the DNA barcoding of all 9,000 strains in the CBS collection by sequencing the D1/D2 and ITS regions of the ribosomal DNA. This barcode method is useful for revising identifications and selecting strains for further exploration. Moreover, MALDI-TOF (matrix-assisted laser desorption/ionization time-of-flight mass spectrometry) techniques, based on differential peptide fingerprinting patterns, are being developed and applied for strain classification and quality control analysis of the CBS collection.

The recent discovery of the new species *S. eubayanus*, one of the original parents of *S. pastorianus*, necessitated a revision of the nomenclature and phylogeny of the *sensu stricto* species, with the old *Saccharomyces bayanus* var. *uvarum* becoming a recognized species called *S. uvarum*, and the old *S. bayanus* var. *bayanus* remaining *S. bayanus*, a hybrid genome derived from different strains and species belonging to *S. uvarum*, *S. eubayanus* and *S. cerevisiae*. Huu-Vang Nguyen (INRA, AgroParisTech, Thiverval-Grignon, France) showed that a small number of specific markers (*NTS2*, *MAL31*, *MEL1*, *MTY1*) are sufficient to discriminate between pure lines of *S. uvarum* and *S. eubayanus* and the hybrids *S. bayanus* and *S. cerevisiae*. By PCR and restriction fragment length polymorphism analysis, he was able to reclassify the *S. bayanus* strains CBS 424, CBS 425 and CBS 3008, showing that they were inbred lines of *S. uvarum* and *S. eubayanus*, and that they did not contain the *S. cerevisiae* portion of the genome usually present in *S. bayanus* strains.
This method is a quick and reliable approach to differentiating between different hybrids and species belonging to the *Saccharomyces* *sensu stricto* group.

In conclusion, the ICY meeting showed that there is an exciting future for yeast research, with an ever-expanding repertoire of species and strains to study and a set of genomics tools with which to do so.

abstract: Recent advances in shotgun sequencing and computational methods for genome assembly have advanced the field of metagenomics, the culture-independent cloning and analysis of microbial DNA extracted directly from an environmental sample, to provide glimpses into the life of uncultured microorganisms.
author: Patrick D Schloss; Jo Handelsman
date: 2005
institute: 1Department of Plant Pathology, University of Wisconsin, Madison, WI 53706, USA
references:
title: Metagenomics for studying unculturable microorganisms: cutting the Gordian knot

The estimate that fewer than 1% of the prokaryotes in most environments can be cultivated in isolation [1] has produced a quandary: what is the significance of the field of modern microbial genomics if it is limited to culturable organisms? Until recently, this limitation meant that the genomes of most microbial life could not be dissected, because more than half of the known bacterial phyla contain no cultured representatives, and the archaeal kingdoms are likewise dominated by uncultured members. The problem can be likened to the Gordian knot of Greek legend, which was impossible to unravel. The knot, which was constructed with interwoven strands with no ends exposed, served as a source of great pride for the citizens of Gordium, where it was displayed. It was Alexander the Great who finally cut the massive knot and called the act his greatest victory. One strategy to expose the rest of the microbial world to the eye of the microbiologist - analogous to attempting to untie the knot - is to coax more bacteria into pure culture. The alternative approach - which could cut through it as Alexander the Great did - is metagenomics.

Metagenomics is the culture-independent analysis of a mixture of microbial genomes (termed the metagenome) using an approach based either on expression or on sequencing [2,3]. Recent studies in the Sargasso Sea [4], acid mine drainage [5], soil [6], and sunken whale skeletons [6] have used the shotgun-sequencing approach to sample the genomic content of these varied environments. In each study, environmental samples were obtained and the microbial DNA was extracted directly from the sample, sheared, cloned into *Escherichia coli*, and random clones were sequenced.
# A simple oceanic community

The most extensive metagenomic sequencing effort has been the attempt by Venter *et al.* [4] to sequence the prokaryotic genomes in the water of the Sargasso Sea, a well-characterized region of the Atlantic near Bermuda that has unusually low nutrient levels; this study has already spawned numerous other meta-analyses (for example, [8-11]). Among one billion nucleotides of sequenced DNA, Venter *et al.* [4] identified more than 1.2 million open reading frames (ORFs), including 782 that had significant similarity to rhodopsin-like proteins. This was a surprise because the rhodopsins were previously thought to be present in only a small group of organisms, and the Sargasso Sea study broadened the spectrum of species known to have them. One intriguing problem in metagenomics is that most ORFs cannot be assigned to gene families of known function [2]. In the Sargasso Sea sequences, for instance, 69% of the ORFs had no known function [4]. This analysis points to a major limitation in annotating sequences from uncultured microorganisms: if no relative of the organism being sequenced has ever been sequenced, then the likelihood of matching each of the newly identified genes to genes of known function is low. The choice of database used for comparison determines the answer, as demonstrated by the identification by Venter *et al.* [4] of 16S rRNA sequences from the Sargasso Sea by querying a database containing only 16S rRNA gene sequences from genome sequences of Bacteria and Archaea. As they limited the comparative database to cultured microorganisms, it was not surprising that they did not identify any 16S rRNA gene fragments from phyla with no cultured representatives. A further limitation of this study was presented by DeLong [12], who pointed out that the two genomes that Venter *et al.* [4] were able to complete were probably contaminants in the sea-water sample. Obvious examples of assembly error (for example, contigs containing bacterial 5S and 23S rRNA genes adjacent to an archaeal 16S rRNA gene) suggest an insidious assembly problem throughout the sequence collection [12]. Perhaps the next stage of the project will profit from the mistakes of this 'pilot' sequencing attempt [4].

# An even simpler biofilm community

Although the nutrient-limited Sargasso Sea was selected for metagenomics because it was thought to contain a simple community [4], the community was not simple enough to allow assembly of most of the sequence reads into contigs. Tyson *et al.* [5] selected a far simpler community, that of a biofilm found in the very acidic waste water from an iron mine (termed acid mine drainage), which contains three bacterial and three archaeal lineages. By grouping the assembled contigs into 'bins' according to their GC content and the number of reads per contig, they were able to assign each bin to an organism. The near-complete genome sequences (ten-fold coverage) of *Ferroplasma* type II and *Leptospirillum* group II members enabled Tyson *et al.* [5] to model conceptually the metabolic processes that each genome contributes to the broader community.
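
The binning step lends itself to an equally compact sketch. The two signals below (GC content and a crude read-depth proxy) follow the idea just described, but the thresholds and bin labels are invented for illustration rather than taken from the study.

```python
# Toy contig binning on GC content and read depth, echoing the approach
# described above; the cutoffs and organism labels are hypothetical.

def gc_content(seq):
    seq = seq.upper()
    return (seq.count("G") + seq.count("C")) / len(seq)

def assign_bin(seq, reads_mapped):
    depth = reads_mapped / (len(seq) / 1000.0)  # reads per kb of contig
    if depth < 5:
        return "unassigned (coverage too low)"
    return ("high-GC dominant organism" if gc_content(seq) > 0.5
            else "low-GC dominant organism")

print(assign_bin("GCGGCTATGCCGTA" * 100, reads_mapped=40))
# high-GC dominant organism
```
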
This thorough sequencing and metabolic analysis provided the starting point for a 'proteogenomic' analysis. Protein was extracted from biofilms found in the acid mine drainage and digested with trypsin [13]. Applying shotgun mass spectrometry to the fragmented proteins, Ram *et al.* [13] obtained the sequence of part of the proteome. By combining the proteome and metagenome sequences [5], they linked one or more peptide sequences to approximately 49% of the ORFs from the five dominant genomes [13]. The most powerful outcome of this analysis was the identification, from the *Leptospirillum* group II sequences, of a novel acid-stable iron-oxidizing c-type cytochrome with an absorption maximum at 579 nm (Cyt~579~). Cyt~579~ is the primary iron-oxidizing enzyme in the microbial community and mediates the rate-limiting step in acid production. In this relatively simple community, the proteogenomic approach enabled Ram *et al.* [13] to quantify protein production from each ORF, validate the DNA-derived metabolic model, and identify a process that potentially acts as a keystone for the whole ecosystem.

# First metagenomic analyses of complex microbial communities

A fundamental challenge in understanding microbial communities is to chronicle genetic conservation across time and location and to delineate the smallest complement of genes conserved in genomes across different communities [4]. Tringe *et al.* [6] tackled this problem by sequencing microbial communities sampled both from soil from a Minnesota farm and from three deep-sea communities living on sunken whale skeletons ('whale fall'), and comparing them with the Sargasso Sea sequence collection. ORFs from each metagenomic sequence were assigned to clusters of orthologous genes (COGs [14]), operons, pathways in the Kyoto Encyclopedia of Genes and Genomes (KEGG) [15], and COG functional categories [14].

The relative enrichment found using each of the four annotation methods between the Sargasso Sea, deep-sea whale fall, and Minnesota farm soil was then determined, resulting, in essence, in an *in silico* subtractive hybridization. The over-representation of rhodopsin ORFs in the Sargasso Sea and of ORFs encoding cellobiose phosphorylase in the Minnesota farm soil makes biological sense, because marine microorganisms are more likely to use light-driven energy-transduction systems and soil microorganisms are more likely to encounter plant-derived oligosaccharides such as cellobiose. The large number of ORFs of no known function that were over-represented in each community may indicate as-yet unknown functional systems. Generating copious sequence information from a community is intrinsically valuable, but this comparative analysis [6] is a worthy example of how metagenomics may move beyond descriptive, annotation-based analyses toward meaningful inference about ecological phenomena.
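
The spirit of that comparison can be captured in a few lines. The sketch below computes a simple over-representation ratio for annotation categories in one community relative to another; the category names echo the examples in the text, but the counts and the pseudocount smoothing are illustrative assumptions, not the statistics used by Tringe *et al.*

```python
# 'In silico subtractive hybridization' reduced to a frequency ratio:
# how over-represented is each annotation category in community A
# relative to community B? All counts below are invented.

def enrichment(counts_a, counts_b, pseudo=1):
    total_a = sum(counts_a.values())
    total_b = sum(counts_b.values())
    cats = set(counts_a) | set(counts_b)
    return {c: ((counts_a.get(c, 0) + pseudo) / total_a) /
               ((counts_b.get(c, 0) + pseudo) / total_b)
            for c in cats}

sargasso = {"rhodopsin-like": 120, "cellobiose phosphorylase": 2}
farm_soil = {"rhodopsin-like": 3, "cellobiose phosphorylase": 90}
for cat, ratio in sorted(enrichment(sargasso, farm_soil).items()):
    print(f"{cat}: {ratio:.2f}x in sea relative to soil")
# cellobiose phosphorylase: ~0.03x; rhodopsin-like: ~23x
```
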
# Dealing with complexity and contamination

Application of molecular biology methods to cultured organisms has led to striking insights into the life of microbes in mono-species culture. But genomics has failed to elucidate the functions of microbial communities, where most microorganisms on Earth spend most of their time and which provide the platform from which microorganisms shape plant, animal, environmental and human health. Metagenomics, coupled with gene arrays, proteomics, expression-based analyses, and microscopy, will give insights into problems, such as genome evolution and the membership of particular niches, whose study is currently hindered by our inability to grow most microorganisms in pure culture [16]. To realize the full potential of metagenomics, however, a number of obstacles need to be overcome. Perhaps the most significant of these is the microbial complexity of most communities. The successful analysis of the acid mine drainage community was predicated on its simplicity. In contrast, the Minnesota farm soil probably contains more than 5,000 species and 10^4^-10^5^ strains, making it inevitable that the more than 150,000 sequence reads could not be assembled into contigs [6] (Table 1). It is likely that 2-5 gigabase-pairs of sequence are necessary to obtain eight-fold coverage of the dominant species in the community, suggesting that inventive approaches are needed to enrich DNA sequences from less abundant organisms or from members that are unique to a community [3].
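
A back-of-envelope calculation shows where an estimate of that size comes from. The genome size and abundance figures below are assumptions chosen for illustration, not values reported for the Minnesota soil community.

```python
# Rough sequencing-effort arithmetic: if a dominant organism contributes
# only a small fraction of community DNA, the total shotgun effort must
# be scaled up by the inverse of that fraction to reach a target coverage.

genome_size_bp = 4e6        # assumed genome of one dominant member
relative_abundance = 0.01   # assumed share of total community DNA
target_coverage = 8         # the eight-fold coverage cited above

total_bp_needed = target_coverage * genome_size_bp / relative_abundance
print(f"{total_bp_needed / 1e9:.1f} Gbp")  # 3.2 Gbp, within the 2-5 Gbp range
```
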
Another focus for improvement in metagenomics is the use of robust sampling and DNA-extraction procedures. Methodology that guards against contamination such as that revealed in the Sargasso Sea samples is essential. Making metagenomic studies ecologically meaningful will require sampling strategies that account for spatial and temporal variability, thereby enabling comparisons between communities [17]. These comparisons will also require standardized and aggressive methods for extracting DNA. It is unfortunate that all of the large metagenomic sequencing projects used chemical extraction methods to obtain DNA, whereas the technique of 'bead beating', which applies high shear forces to cells, is more effective than chemical lysis methods at breaking tough cells (for example, [18]). The studies that used chemical lysis methods therefore include DNA from only a subset of the organisms that can be accessed by modern methods.

This is an exciting time for metagenomics, as many projects are underway to sequence the metagenomes of biologically interesting environments. The US Joint Genome Institute (JGI) has essentially sequenced the metagenomes of the microbial communities associated with two extinct ancient cave bears, which contained less than 2 and 6% cave bear DNA, respectively [19]. The JGI is also currently sequencing metagenomic DNA for more than ten studies through its scientific Community Sequencing Program [20], and the J. Craig Venter Foundation is sequencing the metagenomes of samples taken along a path intended to simulate the voyage of Darwin's ship *The Beagle*, as well as samples of New York City's air [21]. A future prospect is completing the human genome by sequencing the metagenome of the 10^12^ microbial cells that are associated with the human body [22]. Each of these studies will unearth secrets unique to the environment being examined, and comparison of the results of these studies will provide a meta-understanding of the recurrent and unique themes in community structure and function.

## Figures and Tables

Table 1. Summary of metagenomic sequencing projects

| Community | Estimated species richness | Thousands of sequence reads | Total DNA sequenced (Mbp) | Sequence reads in contigs (%) |
|----|----|----|----|----|
| Acid mine drainage | 6 | 100 | 76 | 85 |
| Deep-sea whale fall, sample 1 | 150 | 38 | 25 | 43 |
| Deep-sea whale fall, sample 2 | 50 | 38 | 25 | 32 |
| Deep-sea whale fall, sample 3 | 20 | 40 | 25 | 47 |
| Sargasso Sea, samples 1-4 | 300 per sample | 1,662 | 1,361 | 61 |
| Sargasso Sea, samples 5-7 | 300 per sample | 325 | 265 | <1 |
| Minnesota farm soil | >3,000 | 150 | 100 | <1 |

abstract: We need to familiarize ourselves with the facts of evolution, so that we can mount a spirited defense against creationism and the forces of ignorance.
author: Gregory A Petsko
date: 2008
institute: 1Rosenstiel Basic Medical Sciences Research Center, Brandeis University, Waltham, MA 02454-9110, USA.
title: It is alive

They're at it again. Armed with another new idea from the Discovery Institute, that bastion of ignorance, right-wing political ideology, and pseudo-scientific claptrap, the creationist movement has mounted yet another assault on science. This time it comes in two flavors: propaganda and legislation.

The propaganda is in the form of a poorly written, badly acted movie produced by Ben Stein, an attorney and entertainment figure who once served as a speechwriter for US Presidents Gerald Ford and Richard Nixon. As if working for Nixon didn't do enough to demonstrate his faulty judgment, he has become an ardent critic of evolution and an advocate for 'intelligent design', which is creationism poorly disguised as 'science'. He co-wrote and stars in the film *Expelled: No Intelligence Allowed*, which attempts to link evolution to the eugenics movement in Nazi Germany and to the Holocaust, and portrays advocates of intelligent design as champions of academic freedom and victims of discrimination by the scientific community. The famous evolutionary biologist and atheist Richard Dawkins has a spirited attack on the film on his website, and there's also a lively critique from the National Center for Science Education.

Fortunately, the film is sinking faster than the *Lusitania*.
As far as I can discover, it has grossed less than US$8 million in ticket sales to date, far less than its cost, and is playing to virtually empty houses in the few theaters that are still showing it. Whether this is because people recognize its ideas as rubbish or because it is simply a bad movie, I don't know. So we can probably ignore it, as it so richly deserves. But the legislative attack is much more serious.

On 11 June 2008, the Louisiana House of Representatives voted 94:3 in favor of a bill that would promote 'critical thinking' by students on topics such as evolution, the origins of life, global warming, and human cloning. The Louisiana Senate already passed a similar bill, Senate Bill 733, by a vote of 35:0, but an amendment adopted by the House, which would allow the state Board of Elementary and Secondary Education to prohibit supplemental materials it deems inappropriate, means that the Senate must pass the bill again. If they do, and this seems a certainty, then the bill will be sent to Louisiana governor Bobby Jindal, at 36 the youngest governor in the United States and the first Indian-American to serve as the head of a state government. A former Hindu who converted to Catholicism in high school, Jindal attended Oxford University on a Rhodes Scholarship. Jindal was a biology major at Brown University, so he should understand the science at stake here, but he opposes stem-cell research and has publicly supported the teaching of 'intelligent design' in public schools. He has not stated whether or not he will sign Bill 733. A fascinating subtext to this story is that Jindal is reportedly under consideration by Republican presidential nominee John McCain as a possible vice-presidential nominee.

The bill is cleverly worded: it states in section 1C that it "shall not be construed to promote any religious doctrine, promote discrimination for or against a particular set of religious beliefs, or promote discrimination for or against religion or nonreligion." In an interview with the conservative newspaper *The Washington Times* (12 June 2008), Jason Stern, vice-president of the Louisiana Family Forum, a Christian right-wing lobby group, insisted "It's not about a certain viewpoint. It's allowing [teachers] to teach the controversy."

Let me say this as clearly as possible, so there can be no mistake about what I mean: there is no controversy. Just because a few misguided so-called scientists question the validity of the concept of evolution doesn't mean there is a controversy. There are still some people who believe the Earth is flat (there's even a 'Flat Earth Society'), but that doesn't mean that a grade-school science teacher should teach his or her students that the Earth might be flat. The fact that some people believe nonsense does not give that nonsense scientific validity. A challenge to existing scientific principles must be based on evidence, not on belief, and there isn't a shred of evidence to support either creationism or intelligent design. Those ideas belong in a religion or philosophy class, not in a science class.

By the way, speaking of religion class, if we accept the creationists' own rationale for this bill, then shouldn't right-wing fundamentalist Christian schools be forced to 'teach the controversy' about religion? It's a much more controversial subject than science. Shouldn't their students be forced to consider the possibility that there is no God, or that the Muslim faith, or the Hindu faith, or the Jewish faith might be the true one?
Or that there are so many different translations and versions of the Bible that there is no way of knowing which one is the 'word of God'? You can see how quickly their argument breaks down.

What about the academic freedom argument? If someone wants to teach creationism in a science class, shouldn't they have the right to do so? Certainly - if they want to get fired. Because if they do that they deserve to get fired. It has nothing to do with academic freedom; it's about basic competence. Consider, for example, a science teacher who taught that the Sun revolves around the Earth. Even the intelligent-design advocates would probably have to admit that such a science teacher was incompetent and ought to be dismissed. That teacher might counter with a claim that his or her academic freedom was being infringed, but no court would uphold it, any more than a court would uphold a similar claim from a history teacher who taught that the Allies lost World War 2 or that Napoleon Bonaparte was emperor of Japan. Science, and history, may welcome speculation, but the speculation must be based on facts, and when it isn't, then it doesn't belong in that subject. Any 'science' teacher who teaches that the Earth might have been created about 6,000 years ago and that all the material evidence that it's billions of years old is controversial is simply incompetent. If the state of Louisiana wants its children taught by such people then they deserve the kind of workforce and citizenry they are going to get.

It's worth pointing out that in 1987, in the case of Edwards *versus* Aguillard, the US Supreme Court ruled as unconstitutional the idea of equal time for "creation science" and evolution in biology classes. That precedent will almost certainly be used as the basis for a constitutional challenge to the Louisiana law if it passes. Also, in the state of Pennsylvania, the 'Kitzmiller *versus* Dover' case in 2005 put to rest the idea of intelligent design as an alternative to evolution being taught in biology classes - the judge there, in a brilliantly reasoned opinion, demonstrated that intelligent design was just creationism by another name. Although not a Supreme Court case, this decision was strong enough to cause creation science advocates to switch tactics to arguments about academic freedom, the focus of the current legislation at issue in Louisiana.

Lest you think this is merely some Bible Belt aberration, let me assure you that the creationists are marshalling this argument in other states as well. In Michigan, Senate Bill 1361, introduced in the Michigan Senate on 3 June 2008, and referred to the Senate Committee on Education, is yet another 'academic freedom' bill aimed squarely at the teaching of evolution.
Identical to Michigan House Bill 6027, which is still in the House Committee on Education, Senate Bill 1361 would, if enacted, require state and local administrators "to create an environment within public elementary and secondary schools that encourages pupils to explore scientific questions, learn about scientific evidence, develop critical thinking skills, and respond appropriately and respectfully to differences of opinion about controversial issues" and "to assist teachers to find more effective ways to present the science curriculum in instances where that curriculum addresses scientific controversies" by allowing them "to help pupils understand, analyze, critique, and review in an objective manner the scientific strengths and scientific weaknesses of existing scientific theories pertinent to the course being taught." And in Texas (why is it not a shock that the state that gave us George W Bush would show up here), the Texas State Board of Education is again considering mandating a science curriculum that teaches the "strengths and weaknesses" of evolution. On 7 June 2008, the Houston Chronicle wrote that "strengths and weaknesses" language is "a 'teach the controversy' approach, whereby religion is propounded under the guise of scientific inquiry". The editorial went on to say: "What students really need is to be able to study science from materials that have not been hijacked by creationists whose personal agenda includes muddying the science curriculum. Creationism is not a 'system of science'."

As scientists, we need to protest with our feet and our wallets. I am about to become the president of the American Society for Biochemistry and Molecular Biology, a scientific society with about 12,000 members. Our 2009 annual meeting is scheduled to take place in New Orleans. If Bill 733 becomes law in Louisiana, it will be too late to move the meeting to another state. But we need to see to it that no future meeting of our society will take place in Louisiana as long as that law stands, nor should we hold it in any other state (are you listening, Michigan and Texas?) that passes a similar law. I call upon the presidents of the American Chemical Society, the American Association of Immunologists, the Society for Neuroscience, and all the other scientific societies in the US and around the world to join me in this action and make clear to the state legislators in Louisiana, the governor of the state, and the mayor and business bureau of New Orleans that this will be the consequence. You can do the same. Governor Jindal can be reached through his website and Ray Nagin, mayor of New Orleans, can be reached through the Mayor's office.

In its ability to rise again just when we think we've got it licked, creationism is like Frankenstein's monster. "Come see, villagers! It is alive!" We'll never be rid of it by being silent and doing nothing, so one important thing is to force governments that ally themselves with this monster to pay for their folly by denying them our business. In addition, we must all arm ourselves with the one weapon we have that, in the end, the monster cannot overcome: the truth. All of us need to familiarize ourselves with the facts of evolution so that we can mount a spirited defense against the forces of ignorance and the charlatans who would exploit human insecurity and need for certainty.

Carl Sagan memorably called science "a candle in the dark". Well, the darkness is always around us, closer than you think sometimes. Yes, it is alive.
Creationism's alive because some of our fellow men and women keep it alive. In the dark.

abstract: In the face of the multiple controversies surrounding the DSM process in general and the development of DSM-5 in particular, we have organized a discussion around what we consider six essential questions in further work on the DSM. The six questions involve: 1) the nature of a mental disorder; 2) the definition of mental disorder; 3) the issue of whether, in the current state of psychiatric science, DSM-5 should assume a cautious, conservative posture or an assertive, transformative posture; 4) the role of pragmatic considerations in the construction of DSM-5; 5) the issue of utility of the DSM - whether DSM-III and IV have been designed more for clinicians or researchers, and how this conflict should be dealt with in the new manual; and 6) the possibility and advisability, given all the problems with DSM-III and IV, of designing a different diagnostic system. Part I of this article will take up the first two questions. With the first question, invited commentators express a range of opinion regarding the nature of psychiatric disorders, loosely divided into a realist position that the diagnostic categories represent real diseases that we can accurately name and know with our perceptual abilities; a middle, nominalist position that psychiatric disorders do exist in the real world but that our diagnostic categories are constructs that may or may not accurately represent the disorders out there; and finally a purely constructivist position that the diagnostic categories are simply constructs with no evidence of psychiatric disorders in the real world. The second question again offers a range of opinion as to how we should define a mental or psychiatric disorder, including the possibility that we should not try to formulate a definition.
The general introduction, as well as the introductions and conclusions for the specific questions, are written by James Phillips, and the responses to commentaries are written by Allen Frances.
author: James Phillips; Allen Frances; Michael A Cerullo; John Chardavoyne; Hannah S Decker; Michael B First; Nassir Ghaemi; Gary Greenberg; Andrew C Hinderliter; Warren A Kinghorn; Steven G LoBello; Elliott B Martin; Aaron L Mishara; Joel Paris; Joseph M Pierre; Ronald W Pies; Harold A Pincus; Douglas Porter; Claire Pouncey; Michael A Schwartz; Thomas Szasz; Jerome C Wakefield; G Scott Waterman; Owen Whooley; Peter Zachar
date: 2012
institute: 1Department of Psychiatry, Yale School of Medicine, 300 George St., Suite 901, New Haven, CT 06511, USA; 2Department of Psychiatry and Behavioral Sciences, Duke University Medical Center, 508 Fulton St., Durham, NC 27710, USA; 3Department of Psychiatry and Behavioral Neuroscience, University of Cincinnati College of Medicine, 260 Stetson Street, Suite 3200, Cincinnati, OH 45219, USA; 4Department of History, University of Houston, 524 Agnes Arnold, Houston, 77204, USA; 5Department of Psychiatry, Columbia University College of Physicians and Surgeons, Division of Clinical Phenomenology, New York State Psychiatric Institute, 1051 Riverside Drive, New York, NY 10032, USA; 6Department of Psychiatry, Tufts Medical Center, 800 Washington Street, Boston, MA 02111, USA; 7Human Relations Counseling Service, 400 Bayonet Street Suite #202, New London, CT 06320, USA; 8Department of Linguistics, University of Illinois, Urbana-Champaign 4080 Foreign Languages Building, 707 S Mathews Ave, Urbana, IL 61801, USA; 9Duke Divinity School, Box 90968, Durham, NC 27708, USA; 10Department of Psychology, Auburn University Montgomery, 7061 Senators Drive, Montgomery, AL 36117, USA; 11Department of Clinical Psychology, The Chicago School of Professional Psychology, 325 North Wells Street, Chicago IL, 60654, USA; 12Institute of Community and Family Psychiatry, SMBD-Jewish General Hospital, Department of Psychiatry, McGill University, 4333 cote Ste.
Catherine, Montreal H3T1E4 Quebec, Canada; 13Department of Psychiatry and Biobehavioral Sciences, David Geffen School of Medicine at UCLA, 760 Westwood Plaza, Los Angeles, CA 90095, USA; 14VA West Los Angeles Healthcare Center, 11301 Wilshire Blvd, Los Angeles, CA 90073, USA; 15Department of Psychiatry, SUNY Upstate Medical University, 750 East Adams St., #343CWB, Syracuse, NY 13210, USA; 16Irving Institute for Clinical and Translational Research, Columbia University Medical Center, 630 West 168th Street, New York, NY 10032, USA; 17New York Presbyterian Hospital, 1051 Riverside Drive, Unit 09, New York, NY 10032, USA; 18Rand Corporation, 1776 Main St, Santa Monica, California 90401, USA; 19Central City Behavioral Health Center, 2221 Philip Street, New Orleans, LA 70113, USA; 20Center for Bioethics, University of Pennsylvania, 3401 Market Street, Suite 320, Philadelphia, PA 19104, USA; 21Department of Psychiatry, Texas A&M HSC College of Medicine, 4110 Guadalupe Street, Austin, Texas 78751, USA; 22Silver School of Social Work, New York University, 1 Washington Square North, New York, NY 10003, USA; 23Department of Psychiatry, NYU Langone Medical Center, 550 First Ave, New York, NY 10016, USA; 24Department of Psychiatry, University of Vermont College of Medicine, 89 Beaumont Avenue, Given Courtyard N104, Burlington, Vermont 05405, USA; 25Institute for Health, Health Care Policy, and Aging Research, Rutgers, the State University of New Jersey, 112 Paterson St., New Brunswick, NJ 08901, USA
references:
title: The six most essential questions in psychiatric diagnosis: a pluralogue part 1: conceptual and definitional issues in psychiatric diagnosis

# General Introduction

This article has its own history, which is worth recounting to provide the context of its composition.

As reviewed by Regier and colleagues [1], DSM-5 had been in the planning stage since 1999, with a publication date initially planned for 2010 (now rescheduled to 2013). The early work was published in 2002 as a volume of six white papers, *A Research Agenda for DSM-V* [2]. In 2006 David Kupfer was appointed Chairman, and Darrel Regier Vice-Chairman, of the DSM-5 Task Force. Other members of the Task Force were appointed in 2007, and members of the various Work Groups in 2008.

From the beginning of the planning process the architects of DSM-5 recognized a number of problems with DSM-III and DSM-IV that warranted attention in the new manual. These problems are now well known and have received much discussion, but I will quote the summary provided by Regier and colleagues:

Over the past 30 years, there has been a continuous testing of multiple hypotheses that are inherent in the *Diagnostic and Statistical Manual of Mental Disorders*, from the third edition (DSM-III) to the fourth (DSM-IV)... The expectation of Robins and Guze was that each clinical syndrome described in the Feighner criteria, RDC, and DSM-III would ultimately be validated by its separation from other disorders, common clinical course, genetic aggregation in families, and further differentiation by future laboratory tests - which would now include anatomical and functional imaging, molecular genetics, pathophysiological variations, and neuropsychological testing. To the original validators Kendler added differential response to treatment, which could include both pharmacological and psychotherapeutic interventions...
However, as these criteria have been tested in multiple epidemiological, clinical, and genetic studies through slightly revised DSM-III-R and DSM-IV editions, the lack of clear separation of these syndromes became apparent from the high levels of comorbidity that were reported... In addition, treatment response became less specific as selective serotonin reuptake inhibitors were found to be effective for a wide range of anxiety, mood, and eating disorders and atypical antipsychotics received indications for schizophrenia, bipolar disorder, and treatment-resistant major depression. More recently, it was found that a majority of patients with entry diagnoses of major depression in the Sequenced Treatment Alternatives to Relieve Depression (STAR*D) study had significant anxiety symptoms, and this subgroup had a more severe clinical course and was less responsive to available treatments... Likewise, we have come to understand that we are unlikely to find single gene underpinnings for most mental disorders, which are more likely to have polygenetic vulnerabilities interacting with epigenetic factors (that switch genes on and off) and environmental exposures to produce disorders. [[2], pp. 645-646]

As the work of the DSM-5 Task Force and Work Groups moved forward, a controversy developed that involved Robert Spitzer and Allen Frances, Chairmen respectively of the DSM-III and DSM-IV Task Forces. The controversy began with Spitzer's Letter to the Editor, "DSM-V: Open and Transparent," on July 18, 2008 in *Psychiatric Times* [3], detailing his unsuccessful effort to obtain minutes of the DSM-5 Task Force meetings. In the ensuing months Allen Frances joined him in an exchange with members of the Task Force. In a series of articles and blog postings in *Psychiatric Times*, Frances (at times with Spitzer) carried out a sustained critique of the DSM-5 work in which he focused both on issues of transparency and on issues of process and content [4-16]. The latter involved the Task Force and Work Group efforts to address the problems of DSM-IV with changes that, in Frances' opinion, were premature and not backed by current scientific evidence. These changes included new diagnoses such as mixed anxiety-depression, an expanded list of addictive disorders, the addition of subthreshold conditions such as Psychosis Risk Syndrome, and overly inclusive criteria sets - all destined, in Frances' judgment, to expand the population of the mentally ill, with the inevitable consequence of increasing the number of false positive diagnoses and the attendant consequence of exposing individuals unnecessarily to potent psychotropic medications. The changes also included extensive dimensional measures to be used with minimal scientific foundation.

Frances pointed out that the NIMH was embarked on a major effort to upgrade the scientific foundation of psychiatric disorders (described below by Michael First), and that pending the results of that research effort in the coming years, we should for now mostly stick with the existing descriptive, categorical system, in full awareness of all its limitations.
In brief, he has argued, we are not ready for the "paradigm shift" hoped for in the 2002 *A Research Agenda*.

We should note that as the DSM-5 Work Groups were being developed, the Task Force rejected a proposal in 2008 to add a Conceptual Issues Work Group [17] - well before Spitzer and Frances began their online critiques.

In the course of this debate over DSM-5 I proposed to Allen in early 2010 that we use the pages of the Bulletin of the Association for the Advancement of Philosophy and Psychiatry (of which I am Editor) to expand and bring more voices into the discussion. This led to two issues of the Bulletin in 2010 devoted to conceptual issues in DSM-5 [18,19]. (Vol 17, No 1 of the AAPP Bulletin will be referred to as Bulletin 1, and Vol 17, No 2 will be referred to as Bulletin 2. Both are available online.) Interest in this topic is reflected in the fact that the second Bulletin issue, with commentaries on Frances' extended response in the first issue, and his responses to the commentaries, reached over 70,000 words.

Also in 2010, as Frances continued his critique through blog postings in *Psychiatric Times*, John Sadler and I began a series of regular, DSM-5 conceptual issues blogs in the same journal [20-33].

With the success of the Bulletin symposium, we approached the editor of PEHM, James Giordano, about using the pages of PEHM to continue the DSM-5 discussion under a different format, and with the goal of reaching a broader audience. The new format would be a series of "essential questions" for DSM-5, commentaries by a series of individuals (some of them commentators from the Bulletin issues, others making a first appearance in this article), and responses to the commentaries by Frances. Such is the origin of this article. (The general introduction, individual introductions, and conclusion are written by this author (JP), the responses by Allen Frances.)

For this exercise we have distilled the wide-ranging discussions from the Bulletin issues into six questions, listed below with the format in which they were presented to commentators. (As explained below, the umpire metaphor in Question 1 is taken from Frances' discussion in Bulletin 1.)

1) How to Choose Among the Five Umpires of Epistemology?

Are DSM diagnoses more like constructs or more like diseases? We would like to have the positions of each of the five epistemological umpires stated as clearly as possible.

Umpire 1) There are balls and there are strikes and I call them as they are.

Umpire 2) There are balls and there are strikes and I call them as I see them.

Umpire 3) There are no balls and there are no strikes until I call them.

Umpire 4) There are balls and there are strikes and I call them as I use them.

Umpire 5) Don't call them at all because the game is not fair.

Could you please state the position of the umpire which you endorse?

2) What is a Mental Disorder?

It has been difficult to reach agreement on a definition of mental disorder. Could you comment on this problem, or offer what you think is an adequate definition of the concept, mental disorder?

3) What are the Benefits and Risks of Conservatism?

Given the state of the science of psychiatric disorders, should we design DSM-5 in a conservative manner, with minimal change, or do the state of psychiatric science and the problems in DSM-IV dictate major change?

4) Is Pragmatism Practical?

What roles do science and pragmatism play in the construction of DSM-5?
Does our science allow us to make major decisions on a scientific basis? What role do pragmatic considerations play, both when the science is strong and when the science is weak?

5) How Compatible are All the Purposes of DSM?

Is there a conflict over utility in the DSMs? The authors of DSM-III, DSM-IV, and DSM-5 intend the manuals to be useful for both clinicians and researchers. Is there a conflict between what is useful for clinicians and what is useful for researchers? Which group is served better by DSM-III and DSM-IV, and by the prospective changes in DSM-5?

6) Is DSM the Only Way to do Diagnosis?

Given the problems in DSM-III, DSM-IV, and (likely) in DSM-5, would you argue for an alternative, more rational diagnostic system than the DSM? Could you describe it? Would your alternative system simply replace the DSM or restructure it in a major way?

As will become apparent in what follows, these six questions are in multiple ways interrelated, and for that reason a response to one of the questions is often relevant to another of the questions. This is, for instance, quite obvious with Questions 1 and 2. What you think a mental disorder is will affect how you define the notion of mental disorder. Question 4 quickly enters this discussion. Should pragmatic, in addition to purely scientific, considerations enter into your effort to describe and define mental illness? Under Question 1, for instance, Harold Pincus offers a "pragmatic" response that could easily be placed under Question 4.

And now let's bring in Question 3 - whether to take a conservative or activist attitude toward changes in DSM-5. Don't forget that threading its way through all of these questions is the dissatisfaction and disappointment with the scientific status of DSM-III and IV. That troubled status clearly played a role in the epistemological (and ontological) discussion in Question 1, the definitional issue of Question 2, and the pragmatic aspect of Question 4. It is emblematic of the complexity of these discussions that the same troubled state of the current nosology will lead Scott Waterman in an activist direction in Question 3 and Michael Cerullo in a conservative direction.

The final two questions take us in somewhat other directions, but both are related to the discussions that precede them. Question 5, about utility, raises major issues concerning how the manual is actually used, and for whom it is really designed - again, questions related to those of scientific status, definition, pragmatic considerations, and finally attitudes toward change. With this question it's hard to find anyone wanting to defend the premise of DSM-III and IV (and apparently DSM-5) that the manuals are equally useful for clinicians and researchers.

Finally with Question 6 we have an ultimate question - whether the current state of the DSMs warrants a total overhaul. With Ronald Pies we have an individually imagined overhaul; with Joel Paris we have a commentary on DSM-5's effort at revision, and with Michael First's presentation of the NIMH Research Domain Criteria project (RDoC), we have NIMH's response - that the diagnostic manuals of the future may not resemble the DSMs as we know them.

We should not expect from this or any other publication final answers to the questions of psychiatric classification. The questions are too large, and our expectations have to be more modest.
What we know is that the goals of DSM-III and IV have not been achieved and that we are left with more immediate questions as to how to proceed with the current revision, DSM-5. Responses to these questions are understandably mixed. What we hope from this article is to keep the discussion going, and perhaps to move it forward a bit.

Finally, because of the total size of this exercise, "The Six Most Essential Questions In Psychiatric Diagnosis: A Pluralogue" will be published in four parts: each of the first three covering two questions and the final part a general conclusion. Thus this article, Part 1, covers the first two questions.

# Question #1: How do we Choose Among the Five Umpires of Epistemology?

*Are DSM diagnoses more like constructs or more like diseases? We would like to have the positions of each of the five epistemological umpires stated as clearly as possible*.

*Umpire 1) There are balls and there are strikes and I call them as they are*.

*Umpire 2) There are balls and there are strikes and I call them as I see them*.

*Umpire 3) There are no balls and there are no strikes until I call them*.

*Umpire 4) There are balls and there are strikes and I call them as I use them*.

*Umpire 5) Don't call them at all because the game is not fair*.

*Could you please state the position of the umpire which you endorse?*

## Introduction

Question #1 involves both ontological and epistemological issues: what are psychiatric disorders, and how do we know them? Framing these questions with the metaphor of umpires and balls and strikes comes from Allen Frances's response to commentaries in Bulletin 1, "DSM in Philosophyland: Curiouser and Curiouser." That response offered the positions of three umpires: the realist first umpire, the nominalist second umpire, and the constructionist third umpire. The author sided with Umpire 2, espousing a nominalist stance to the effect that he knows that there is real psychopathology out there but has no guarantee that his diagnostic constructs sort it out correctly. He wrote: "This brings us to me a (call'um as I see'um) second umpire. In preparing DSM-IV, I had no grand illusions of seeing reality straight on or of reconstructing it whole cloth from my own pet theories. I just wanted to get the job done - produce a useful document that would make the fewest possible mistakes, and create the fewest problems for patients" (Bulletin 1, p. 22).

For this article we have added two more umpires: a pragmatist fourth umpire and a fifth umpire who rejects the entire exercise. We were motivated to add these umpires by the fact that some of the responses required them.

Further, we recognize that in asking respondents to choose one position and defend it, we have made an unreasonable demand. Why should an individual not say, I'm a combination of these two umpires, or, I'm a lot of this umpire and a little of that, or finally, I'm a first umpire if we're talking about Huntington's disease, but a second umpire if we're talking about schizoaffective disorder. So, quite understandably, in some of our responses we witness the same problem we have with our diagnoses: comorbidity - in this case epistemological (or ontological) comorbidity rather than diagnostic comorbidity.

In this debate over the nature of psychiatric disorders we experience a tension among the umpires that reflects the status of nosologic science.
On the one hand our patients suffer greatly from psychiatric symptoms, and it seems wildly foolish to theorize away their suffering. On the other hand our efforts to organize and classify their suffering can seem arbitrary and confusing. We organize or categorize a symptom cluster and give it a diagnostic name, and it overlaps with another cluster. Or a patient simply has symptoms of both. We start off with the expectation that there will be a match-up between therapeutic agent and diagnostic cluster, and we discover that, at the extreme, most of our pharmacologic agents seem to treat most of our disorders. Finally, we somehow want to resolve this confusion by getting at the underpinnings of the identified disorders, and we discover that the genetics and neuroscience don't support our groupings.

In view of this confusion it's not surprising that opinion divides itself in various ways. Focus on the real suffering out there, along with a conviction that the diagnostic clusters reflect distinct, real conditions, and you end up as a first umpire. Focus on that suffering with uncertainty about the isomorphism between label and disorder, and you become a second umpire. Switch your focus onto the arbitrariness of the labeling, and you end up questioning whether there is anything but the labeling and become a third umpire. Or switch away from the issues of these umpires onto the effects of one label versus another, and you are now a fourth umpire. Finally, decide that it's all nonsense, and you are our fifth umpire.

## Commentary: A Game for Every Kind of Umpire (Almost)

Peter Zachar, Ph.D. and Steven G. LoBello, Ph.D.

Auburn University Montgomery, Department of Psychology.

One might think that a philosophical pragmatist should identify with either the pragmatist or the nominalist position in Allen Frances's clever analogy, but that isn't the case. From a pragmatist perspective, philosophical -isms such as realism, pragmatism, nominalism, and constructionism are conceptual distinctions that we make for certain purposes. The question is what information or response options are gained from making these distinctions that would not be gained were other distinctions made.

For example, let's take the pragmatist's view that *I call balls and strikes as I use them*. If taken too literally this is a recipe for a shallow utilitarianism. One of the ethical principles of umpires is to try to make the game as fair as possible - so every batter and pitcher should face the same strike zone (for that umpire). An umpire should attempt to call the pitches as they are (to the best of his ability), and not widen the zone for batters he favors and narrow it for those he does not. Also, in most games, a degree of unreliability in deciding what counts as a ball or strike may not matter, but it can matter a lot in big games. Presumably every psychiatric patient should be treated like a big game, but with 15-minute medication management sessions that is not likely the case. So a kind of realist attitude is important for keeping the game fair. This is true of psychiatric nosology as well. We should always attempt to classify the world as it is, not how we want it to be. A pragmatist would not deny the spirit of this ethic.

Most pragmatists would point out that the purpose of the strike zone is to assure that the batter has a chance to hit the ball well enough to get on base. He cannot do so if the pitch is too high, in the dirt, or wide of the plate. This makes the strike zone a practical kind.
There are also practical constraints on the strike zone's location that create a kind of objectivity - but beyond that there is no gold standard. Furthermore, it is not true that every pitch that goes through the zone on the way to the catcher is a strike. For example, spit balls have such unpredictable trajectories that batters have very little chance of hitting them, and they are therefore illegal whether or not they are in the zone. Psychiatry lacks fixed gold standards as well, and the social implications of giving a diagnosis that is contrary to the purpose of diagnosing can also affect whether something is considered to be an official disorder (e.g., pedophilia).

What of the nominalists who say *I call balls and strikes as I see them*? Perhaps a better way to think about nominalists is that they deny both that the criteria for balls and strikes were created by the Platonic baseball gods and that competent umpires can recognize what is naturally a ball and naturally a strike. Cousins to the pragmatist, the nominalists say that what exist are particular pitches, and we tend to group them into the ball category or the strike category for various and sundry reasons. Very different pitches like fast balls, curve balls and sliders can all be strikes. These groupings can also be altered. For example, up until the 1920s the spit ball was a legitimate pitch (as homosexuality was once considered a legitimate psychiatric disorder).

So nominalists and pragmatists are uncomfortable when realists start talking about fixed world structures and natural kinds. There are, however, *kinds* - fastballs, curve balls, etc. With the realists, the pragmatists and nominalists recognize the value of understanding the causal mechanisms that produce these kinds (e.g., Vaseline helps you throw good spit balls), but individual pitches can be grouped in a plurality of ways.

The constructionist position is the easiest to defend in this example because baseball is a social construction, and like other social constructions such as the U.S. Government and currency, baseball is a real thing. So what information do we gain from the constructionist analysis? Rather than saying *There are no balls and strikes until "I" call them*, it is more accurate to say that social construction is a historical and community activity. Baseball proper did not exist in 1800, and a pretty good story can be told about the social and economic factors that helped shape the game we have today. A similar narrative could be developed for psychiatry; for example, there is a good story to be told about how degeneration theory in the 19^th^ century and pharmaceutical marketing practices in the 20^th^ century both shaped the classification system. Social constructionists would also point out that something like the introduction of the designated hitter was not a deductive consequence of the rules of the game. Its legitimacy has to be understood with respect to the baseball community and its chosen authorities. Something similar is true of the scientific community and its designated authorities, including the process by which the DSM and the ICD are developed. The pragmatists consider this useful information.

Finally we come to the Szaszian. It is a category mistake to lump a political and ethical position such as *I refuse to play because the game is not fair* with realism, pragmatism, nominalism and constructionism. Anti-psychiatry is better considered a behavioral option available to a disillusioned realist.
In terms of baseball, the claim would be that in the rest of sports, things like field goals and holes-in-one are objectively fixed, but there is so much variation between umpires in terms of the strike zone that any rational person would see that the so-called objectivity of the game is a myth. Other like-minded critics would point out that there seems to be statistical evidence that the strike zone gets wider when the count is full - which keeps the game exciting. It is also economically convenient for the sport as a whole if pitchers are allowed some leeway when close to throwing perfect games and batters allowed leeway when close to breaking hitting records. Field goals and holes-in-one do not work like that, say the critics, yet baseball wants its consumers to think it is like those other sports. Perhaps the best argument against the Szaszian view is to point out that if they studied football and golf more closely, they might see that things are not always as objective over there as they assume. Baseball should not be evaluated with respect to an idealized image of other sports, just as psychiatry should not be evaluated with respect to an idealized image of other medical specialties.

## Commentary: Mental Disorders, Like Diseases, Are Constructs. So What?

Claire Pouncey, M.D., Ph.D.

Philadelphia, PA.

The literature on the philosophy of psychiatric nosology often conflates questions of ontology - i.e., whether mental disorders exist as abstract entities - with questions of epistemology - i.e., how we can know anything about them if they do. To ask whether mental disorders are (actual) diseases or (mere) constructs confuses these two types of questions about mental disorders, as I will use the first three umpire positions to illustrate. This error is prevalent in academic discussions about psychiatric nosology.

Ontologic commitments are basic metaphysical commitments about what exists in the world. Most of us, by virtue of the fact that we operate in our physical and social worlds as we do, are committed to the existence of intersubjectively appreciable mid-level objects, such as plants, buildings, bodies of water, and other persons, to name just a few. That is, we are realists *about* (and realism is always local to a particular question) mid-level objects, as evidenced by our behaviors.

It is easier to be skeptical (a.k.a. antirealist) about invisible, microscopic, macroscopic, and abstract objects. Most of us are ontologically committed to the existence of oxygen, given what we know about basic physiology and the chemistry of our natural environment, although it is microscopic in its elemental form and undetectable by the senses in its macroscopic form. Our commitments to microscopic entities such as muons, macroscopic entities such as red giants, intangible phenomena such as global warming, or second-order (categorical) entities such as phyla may be much weaker, and more prone to debate.

Mental disorders generate ontological skepticism on several levels. First, they are abstract entities that cannot be directly appreciated with the human senses, even indirectly, as we might with macro- or microscopic objects. Second, they are not clearly natural processes whose detection is untarnished by human interpretation, or the imposition of values. Third, it is unclear whether mental disorders should be conceived as abstractions that exist in the world apart from the individual persons who experience them, and thus instantiate them.
Together, these reasons to doubt the ontic status of mental disorders become quite persuasive.

Setting ontological antirealism aside, we can ask epistemological questions separately: if we assume that mental disorders do exist as abstract entities, how do we go about studying them, and on what basis can we possibly gain genuine knowledge about them? Even if we collectively agree that, for example, a particular person at a given time were experiencing a major depressive episode, on what grounds can we know that 'major depressive disorder' exists as an abstract entity? On what grounds can we infer that the broader class 'mood disorders', or 'mental disorders' as the most general class, exist as further abstractions? Epistemic realists may be realists about *Hector's* depression, about the existence of an abstract entity that is major depressive disorder, or about the existence of mental disorders in the world generally. They may not be realists about all three. Similarly, epistemic antirealists may doubt one or more of these commitments.

Umpire #1 is both an ontological realist and an epistemological realist about balls and strikes in baseball. Balls and strikes are real things (events) that exist (happen) in the world, and Umpire 1 has the means and ability to detect them in accurate and unbiased ways: "There are balls and there are strikes and I call them as they are." This tends to be the position attributed to psychiatry. Psychiatry's rhetoric, if not the actual commitments of all practitioners, says both that mental disorders are abstract entities that exist in the world and manifest in individual persons, and that these processes can be intersubjectively appreciated and elucidated as they truly are. Let's call this the Strong Realist position.

Such confidence is not exhibited by Umpire #2, who shares the ontological realism of Umpire #1, but not the epistemological realism. In tempering his epistemological position to "I call them as I see them," Umpire #2 maintains that balls and strikes exist apart from his perception of them, but softens his position to recognize that he may not always perceive them as they exist in the world. That is, Umpire #2 is *ontologically* committed to the existence of balls and strikes, but does not assume that he always has epistemic access to that reality. Let's call this the Strong Realist/Weak Constructivist position.

Umpire #3 is an ontological and an epistemological antirealist about balls and strikes: no balls or strikes exist in the world regardless of who thinks they might. In calling them, the umpire constructs the truth. This is not necessarily to say that all his calls are unfounded fictions, but rather it is to say that although the umpire describes his perceptions as accurately as he can, there is no ultimate, underlying reality to which those perceptions could be compared, even in the absence of epistemic limitations. Let's call this the Strong Constructivist position.

Psychiatry's strongest critics tend to make strong constructivist arguments: mental disorders do not exist, so any diagnosis, treatment intervention, or research finding is exempt from ultimate confirmation or refutation. In their strongest form, calling mental disorders 'constructs' *is* meant to communicate that they are mere fictions, completely unfounded medical lore. However, note that on the Strong Realist/Weak Constructivist view this is not the case.
Calling a mental disorder a 'construct' does not imply invention so much as it serves as a reminder that our epistemic access to the reality of things is always limited. On this view, every abstract entity is a construct, and constructs can be legitimate objects of scientific investigation. Often, there is broad agreement about the nature of scientific constructs, such as phyla, subatomic particles, or diseases, even if the construct is construed as a working hypothesis, or a category of disparate entities that does not lend itself to simple definition or characterization. On this view, mental disorders are like diseases: they are a heterogeneous class of abstract entities that have uncertain ontic status apart from the persons who instantiate them. In formalizing its nosology, psychiatry is trying to call them as we see them.

## Commentary: Why Umpires Don't Matter

Nassir Ghaemi, M.D.

Tufts University Department of Psychiatry.

Nietzsche said truth is a mobile army of metaphors. If you get your metaphor wrong, you'll miss the truth. I think this is the case with the umpire metaphor that seems to be the central concept underlying the thinking of my interlocutor. I think it is simply wrong-headed. It sets up psychiatry and science and knowledge as a game, where the rules can be changed, and where there may be no truth. If you are a postmodernist extremist, this may make sense. But if you accept that there are truths in the world (such as that if you take very high doses of lithium, you will get toxicity), then it makes no sense.

A mistaken metaphor has no response except to say that it is mistaken.

Before offering a better metaphor, let me say that I accept the realist position: that diseases exist independent of me and you, and that they are expressed in psychiatric symptoms like the chronic delusions of schizophrenia or the mood states of manic-depression. To prove this fact, I suggest three approaches. One, suggested by Paul McHugh, is to actually see people who have these symptoms - the old kick-the-table test of realism. The second is to debate the merits of the positions pro and con; I won't do so here, but I think others have done so in reasonably persuasive ways, such as Roth and Kroll in *The Reality of Mental Illness*. The third is to apply the pragmatic test, and see the consequences of one position or the other. I accept the realist view of at least some psychiatric diseases, but I would add that if one does not, he or she should think of the consequences. I don't see how one can reject the reality of psychiatric disease and still practice psychiatry, especially with the use of harmful drugs.

The metaphor I offer below brings out those stark choices, as well as providing further rationale for the reality of at least some psychiatric diseases, based on how matters have gone in other examples of similar problems in the history of science and medicine.

Here then is a better metaphor for understanding psychiatric nosology, one that I heard from Kenneth Kendler and which I am expanding here. In a presentation on "epistemic iteration," building on work in the history of science, Kendler described how we can understand any scientific process as involving an approximation of reality through successive stages of knowledge. The main alternative to this process is a "random walk," in which there is no trend toward any goal in the process of scientific research. Kendler points out that epistemic iteration won't work if there are no real psychiatric illnesses.
If these are all, completely and purely, nothing but social constructions, figments of our cultural imaginations, then there is no point to scientific research at all. (I would add: to be honest doctors, we should thereby stop killing patients with our toxic drugs - since all drugs are toxic - stop taking their money to buy our large houses, and retire.) The random walk model is a dead end for any ethical practice of medicine, because if there is no truth to the matter, then we should not claim to have any special knowledge about the truth.

If there is a reality to any psychiatric illness, then epistemic iteration makes sense, and indeed it has been the process by which much scientific knowledge has been obtained in the past. Take temperature. A long process evolved before we arrived at the expansion of mercury as a good way to measure temperature. There was a reality: there is such a thing as hot and cold temperature. How we measured that reality varied over time, and we gradually arrived at a very good way of measuring it. Temperature is not the same thing as mercury expansion: our truth here is not some kind of mystical absolute knowledge. But it is a true knowledge.

A similar rationale may apply to psychiatric diseases. We may, over time, approximate what they are, with our tools of knowledge, if we try to do so in a successive and honest manner, seeking to really know the truth, rather than presuming it does not exist.

The better metaphor, then, which captures the epistemic iteration versus random walk alternatives, would be to think of a surface, and a spot on that surface, which we can label X, representing the true place we want our disease definition to reach (see figure). If we were God, we would know that X is the right way to describe the disease. Let A be our current knowledge. How do we get from A to X? One way is to go from A to B, from B to C, from C to D, in a zigzag pattern, as our research takes us in different directions, but gradually and successively closer to X. This is epistemic iteration.

The random walk pattern would involve the same starting point A, and multiple movements to B, C, and D, but with no endpoint, because no X would exist (see figure). In this process, movement is random; there is no reality pulling scientific research toward it, like gravity pulling objects closer; there is no end, and no truth. If this is the nature of things, then our profession has to admit to everyone everywhere that this is what we are doing. We should then give up any claims to specific knowledge and stop treating - and harming - people.

The history of medicine and the history of science give many examples of both approaches. So the question really is an ontological one: do mental illnesses exist as realities in the external world, as biological diseases independent of our social constructs and personal beliefs? The umpire metaphor assumes, but does not answer, that question. The epistemic iteration metaphor shows how the answer to that question faces us with two opposed choices about how we understand science and psychiatry. If psychiatry is like the rest of medicine, if there are some psychiatric diseases that are independent biological realities just as there are some medical diseases, then the epistemic iteration metaphor would seem valid in some cases, and the umpire metaphor, useless as it is, should be discarded.
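The contrast between the two processes can be made concrete with a small simulation - offered here purely as an illustrative sketch, not as part of Kendler's or Ghaemi's argument; the coordinates, pull strength, and noise level are arbitrary assumptions. An "iterating" process whose noisy steps are each tugged toward a real target X homes in on it, while a pure random walk, with no X to pull it, simply wanders:

```python
# A minimal toy sketch (hypothetical parameters; not from the original commentary).
import math
import random

def step(point, target=None, pull=0.3, noise=1.0):
    """One research 'step': random scatter plus, optionally, a pull toward a true target X."""
    x, y = point
    dx, dy = random.gauss(0, noise), random.gauss(0, noise)
    if target is not None:
        tx, ty = target
        dx += pull * (tx - x)   # epistemic iteration: reality tugs each step toward X
        dy += pull * (ty - y)
    return (x + dx, y + dy)

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

X = (10.0, 10.0)                   # the true disease definition, known (in the metaphor) only to God
iterating = walking = (0.0, 0.0)   # A: our current knowledge, same start for both processes

for i in range(1, 31):
    iterating = step(iterating, target=X)  # zigzags, but successively closer to X
    walking = step(walking)                # random walk: no X exists to pull it anywhere
    if i % 10 == 0:
        print(f"step {i:2d}: iteration {dist(iterating, X):5.2f} from X; "
              f"random walk {dist(walking, X):5.2f} from X")
```

Setting the pull to zero makes the two processes identical, which is the point of the metaphor: without a real X there is nothing to distinguish scientific progress from drift.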
Figure 1.

## Commentary: The Three Umpires of Metaphysics

Michael Cerullo, M.D.

University of Cincinnati Department of Psychiatry.

The debate about the nature of the external world has been a central question of metaphysics since the first pre-Socratic philosophers. Most working scientists and philosophers today would be classified as modern realists who believe there is an independent objective external reality. Within the realist camp there is further debate about just how much we can know about absolute reality. Immanuel Kant termed the underlying reality of the world "the thing in itself" (*das Ding an sich*) and believed we could never truly know this ultimate reality \[34\]. Opposed to the realists are the anti-realists, who hold that there is no independent objective reality separate from our own subjective experience. Allen Frances' umpire analogy is a good way to frame the major positions in this debate \[2\] (pp. 21-25). Frances' first umpire, who believes there are balls and strikes and calls them as they are, is a modern realist. Umpire two is a Kantian realist who believes there are balls and strikes but can only call them as she sees them. Umpire three is an anti-realist who believes there are no balls and strikes until he calls them.

These days it is hard to seriously defend an anti-realist position in science. Neuroscientists contend that all behavior, from depression to extroversion to a dislike of tomatoes, depends on and is explainable by the workings of the brain. On the other hand, there is still a real debate as to whether subatomic particles are the final bedrock of reality or a mere appearance of a deeper reality (strings? more particles all the way down?). However, this latter Kantian uncertainty doesn't seem to have much relevance to the debate about the brain. After all, it doesn't seem to make any difference in our understanding of neurons whether their atoms are ultimately made of strings or point particles.

Outside of metaphysics there is another parallel to the umpire analogy in epistemology. Within epistemology there is a subfield interested in the taxonomy of illness. The two major groups in this debate are the naturalists and the normativists \[35,36\]. Naturalists believe disease can be defined objectively as a breakdown in normal biology. The naturalist position corresponds to the first umpire. Normativists believe our definitions of disease are subjective and culturally driven, and thus identify with the third umpire. The second umpire seems to mix elements of both epistemological positions.

My own sympathies lie with modern realism when it comes to behavior and a combination of normativist and naturalist positions when defining disease. Although there is a physical explanation for all behavior (hence my realist position), not everything in the universe is physical. Definitions of disease require value judgments, and while each value judgment surely has a physical explanation in the brain, nothing physical can decide which judgment is correct. Even in areas of medicine outside psychiatry there is often a strong normativist element in how diseases are defined. Many diseases, such as hypertension or hypercholesterolemia, require making arbitrary cutoff points in laboratory values. Deciding these cutoff points requires making hard decisions about public health and considering the risk\/benefit ratio of any decision.
There is clearly a strong normativist element in these definitions, yet that does not make them bad or incorrect descriptions. Many psychiatric diseases also have a similar logic. While everyone has some sad mood or anxiety, there are obvious extremes which are justifiably labeled as mood or anxiety disorders. Once again there may be certain arbitrary cutoff points when deciding how much sadness or anxiety is too much, but that does not invalidate these definitions any more than it would invalidate the "physical" illnesses listed above. This being said, there are also many diseases that are much better defined from a more naturalist perspective. For example, in psychiatry schizophrenia seems to be better defined from the naturalist perspective, along with other physical diseases like Parkinson's disease or dementia. It seems easier to define these diseases using the naturalist ideal of disease as a breakdown in "typical" human biology.

The lesson in these debates is that psychiatrists (and the public) should recognize that all definitions of disease have normativist and naturalist elements, even in a world described by scientific realism. None of Frances' umpires fits with my combined metaphysical and epistemological positions. Therefore I suggest a different umpire, one who believes in an objective physical world that we can access to determine exactly what balls and strikes are. Yet it is the umpire and players who first must choose the rules of the game, some of which may always seem arbitrary, but the majority of which are constrained by the physics of balls and bats and the semantic and historical notions of games and baseball.

## Commentary

Jerome C. Wakefield, Ph.D., D.S.W.

Silver School of Social Work and Department of Psychiatry, New York University.

Regarding the Umpires: First, to avoid confusion, one has to distinguish the role of Umpire calls within the rules of baseball from the call as an attempt to state what happened. The Umpire calls them as he\/she sees them, with the goal of getting it right - and understands that the way it looks can be misleading. But, whether correct or incorrect, the Umpire's call "stands" despite any later evidence that emerges to the contrary, and to that extent the call constitutes\/constructs the game's reality. Diagnosis, too, has dual aspects - a game in which one plays by the rules to justify reimbursement, and a hypothesis about what is going on in the patient. I focus on the hypothesis-testing aspects of both Umpire calls and the DSM.

In attempting to make a call that reflects the truth, Umpires 1 and 3 embrace intellectual doctrines designed to deal with their epistemic anxieties - Umpire 1 can't stand uncertainty, and Umpire 3 can't stand the arrogance that comes from Umpire 1's certainty. Ironically, Umpire 1 and Umpire 3 fall into the same fallacy, that of collapsing ontology and epistemology into one. Umpire 1 naively sees his\/her judgment as a direct impression of reality without epistemic mediation, and thus epistemological uncertainty is avoided. Umpire 3 sees his\/her judgment as creating or constituting "reality" from his\/her perspective, so again epistemological uncertainty is avoided.
On the other hand, Umpire 2, while closest to the correct approach, describes his\/her reality and his\/her perception in a rather disconnected way.

So, I vote for Umpire 1.5 (humble realism): There are balls and there are strikes (plus some ontologically fuzzy cases), and based on how I see them and any other available evidence, I call them as I believe they are; and because the evidence in these cases is usually a pretty good indicator of reality, calling them as I see them usually equals calling them as they are. But I can be wrong! The truth does not necessarily correspond to my call, and fresh evidence can always be brought to bear to help get closer to the truth.

Common sense offers the best guide here. Recently, Tigers' pitcher Armando Galarraga was one pitch away from achieving baseball immortality with a perfect game, an extremely rare event. In a close call at first base, Umpire Jim Joyce called the runner safe, destroying Galarraga's chance. But, as everyone saw from the instant replay, in fact the runner, Jason Donald, was out. Jim Joyce said to the press: "I just cost that kid a perfect game... I thought (Donald) beat the throw. I was convinced he beat the throw, until I saw the replay... It was the biggest call of my career and I kicked the (expletive) out of it." He then went to Galarraga and explained what he saw, and made it clear that he was wrong ("'Imperfect' Umpire Apologizes" by Steve Adubato, Ph.D., *Star-Ledger*). Fortunately for the lessons we and our kids take away from baseball, Joyce was not Umpire 1 or 2 or 3, but humble realist Umpire 1.5, who understood the possibility of error inherent in the attempt for mind to represent reality.

As to the other part of the question, the dichotomy between constructivism and realism is a false one. Our diagnostic categories are constructs (as are all concepts) intended in the long run to refer to underlying diseases\/disorders. Current DSM diagnoses are constructs that are starting points for a recursive process aimed at getting at disorders. We somewhat misleadingly refer to them now as "disorders," although frequently we acknowledge that one of these categories likely encompasses many disorders. Close attention to the way we revise our views and the grounds on which our judgments are made suggests that the individuation of disorders ultimately depends on the individuation of dysfunctions (see the answer to question 6).

## Commentary

Joseph Pierre, M.D.

UCLA Department of Psychiatry.

Consider the brief history of Pluto as a planet, as told in the recently published book *How I Killed Pluto and Why It Had It Coming* \[37\]. A few thousand years ago, during the era of Greek geocentrism, the Earth was considered to be the center of the universe, while the sun and moon were regarded as two of the seven planets that orbited around it. Later, in the 16^th^ century, as Copernicus' mathematical models of heliocentrism were embraced, the Earth and the sun traded categories at the expense of the moon. The subsequent discoveries of Uranus in 1781, Neptune in 1846, and Pluto in 1930 resulted in the total of nine planets that most of us learned about in elementary school. However, in 2006, Pluto was officially downgraded from classification as a planet, in part because of the discovery in 2005 of a larger mass of rock and ice called "Xena" orbiting not that far away.
Now our children will be taught that there are only eight planets, and will perhaps eventually learn that there are also heavenly bodies called "dwarf planets," among them Pluto and Eris (the new, official name for "Xena").

To anyone who really relies on taxonomy in their daily work, it inevitably becomes apparent that such efforts at classification never seem to do a perfect job of "carving nature at its joints." This is especially true of scientifically based taxonomies - they change based on the evolution of underlying definitions; new categories and sub-categories emerge while previous entities are re-categorized in order to accommodate new data; and challenges to classification at border-zones linger on. Although this kind of change sometimes causes the general public to regard science with skepticism, it is this very adaptability in the face of new data that is the strength of science and the feature that most distinguishes it from dogma.

The belief that this dynamic process is both acceptable and necessary for the Diagnostic and Statistical Manual of Mental Disorders (DSM) would seem to place me in the category of Allen Frances' "Umpire \#2," where I suspect the vast majority of clinicians reside. Still, since I have just suggested that reality often defies simple classification, allow me to state my position more clearly. I believe that psychiatric disorders do exist and that they are brain-mediated diseases (leaving aside for the moment the challenge of defining "disease") with genetic, biologic, and environmental etiologies and influences. The disorders (not diseases) cataloged in the DSM represent our best attempts at achieving consensus definitions of these conditions, seriously limited as we are by diagnosis that is based almost exclusively on describing manifest symptoms. Because of this limitation, it is unavoidable that psychiatric diagnosis is overly simplistic, just as many medical diagnoses would still be if not for technology-driven discoveries about pathophysiology. As such, DSM diagnoses are constructs, and DSM-IV's chief utility is as a "good enough rough guide for clinical work" \[38\].

As an imperfect work in progress, the DSM-IV contains diagnostic constructs of variable validity. In the tradition of Umpire \#1, I believe that many of the disorders in the DSM do a good job of describing the essential symptomatic features of what are probably "real diseases" (e.g. obsessive-compulsive disorder). However, I can also acknowledge the concerns of Umpire \#3, including that some DSM disorders may tread dangerously close to pathological labeling of socially unacceptable behaviors (e.g. paraphilias) \[39\], while others might be better understood as "culture-bound syndromes" (e.g. anorexia) \[40\].

## Commentary

Gary Greenberg, Ph.D.

New London, CT.

"There are no balls or strikes until I call them" is not the postmodern fantasia that it sounds, nor is it a throwback to the idealism that Samuel Johnson refuted so thoroughly by kicking Bishop Berkeley in the knee. Or, to put it another way, it is neither the death knell of psychiatry nor a straw man for psychiatrists to use to refute their critics.

What it is, really, is just plain common sense. To question diagnosis is not to question the existence of suffering, or of the mind that gives us the experience of suffering, or of the value of sorting it into categories. It is merely to point out that before we can do that sorting, we have to posit those categories.
Where do they come from? Are there really diseases in nature?

Consider this question. What is the difference, from nature's point of view, between the snapping of a branch of an old oak tree and the snapping of a femur of an old man? We rightly recoil from the suggestion that there is no difference, and yet to assume that there is *in nature* a difference is to assume that nature cares about us enough to provide us with categories of broken hips. There is ample evidence, most stunningly Darwinian theory, that this is not true. Nature is indifferent. Unlike Major League Baseball, nature doesn't provide the rules by which the world can be divided into balls and strikes.

If there is a difference between the hip and the branch, it is surely to be found in the difference between the man and the tree, which is that the man is capable of caring about his femur, as are the people who love him. The only reason to distinguish one break from the other is to create a category - *intracapsular transcervical fracture, Stage II*, let's say. Naming the suffering, we bring it into the human realm. (It is not a coincidence that the authors of Genesis tell us that the first task given to Adam and Eve in Eden was to name the creatures of the earth; naming is how we put our stamp on the world.) By inventing categories like this one, we give ourselves a way to get hold of the suffering, which in medicine means, among other things, a way to talk to other professionals about it, a way to determine treatment options, and a way to provide a prognosis to the patient and family. What we don't do is discover that nature intends hips to break in certain ways, that there exist in nature intracapsular transcervical fractures and trochanteric fractures, any more than nature provides a branch with different ways to snap off a tree.

This much is uncontroversial, largely because whether you buy the argument or not, you are still going to treat the problem more or less the same way. The difference between fracture as a man-made and as a natural category is trivial, unless you're in a philosophical argument. But when it comes to psychiatry, something changes. To call a snapped femur an illness is to make only the broadest assumptions about human nature - that it is in our nature to walk and to be out of pain. To call fear *generalized anxiety disorder*, or sadness accompanied by anhedonia, disturbances in sleep and appetite, and fatigue *depression*, requires us to make much tighter, and more decisive, assumptions about who we are, about how we are supposed to feel, about what life is for. How much anxiety is a creature cognizant of its inevitable death supposed to feel? How sad should we be about the human condition? How do you know that?

To create these categories is to take a position on the most basic, and unanswerable, questions we face: what is the good life, and what makes it good? It's the epitome of hubris to claim that you have determined scientifically how to answer those questions, and yet to insist that you have found mental illnesses in nature is to do exactly that. But that's not to say that you can't determine scientifically the patterns of psychic suffering as they are discerned by people who spend a lot of time observing and interacting with sufferers. The people who detect and name those patterns cannot help but organize what they observe according to their lived experience. The categories they invent then allow them to call those diseases into being.
They don't make the categories up out of thin air, but neither do they find them under microscopes, or under rocks for that matter. That's what it means to say that the diseases don't exist until the doctors say they do. Which doesn't mean the diseases don't exist at all, just that they are human creations, and, at their best, fashioned out of love.

If psychiatry were to officially recognize this fundamental uncertainty, then it would become a much more honest profession - and, to my way of thinking, a more noble one. For it would not be able to lose sight of the basic mystery of who we are and how we are supposed to live.

## Commentary

Harold A Pincus, M.D.

Columbia University Department of Psychiatry.

The fourth umpire has a very pragmatic perspective and understands that a classification of diagnostic categories is used for many different purposes by many different groups and individuals. Umpire 4 also understands that these various "user groups" approach their tasks with varying empirical, philosophical and historical backgrounds, and that, with this proliferation of users and backgrounds, there needs to be a balance between (to mix metaphors) letting "a thousand flowers bloom" - creating a Tower of Babel with little ability to communicate effectively among these groups - and a single approach that cannot be tailored to particular needs. From this perspective, there is a recognition that the world has changed and the management of information has become the pre-eminent task of a classification system, overshadowing (but also enhancing) the clinical, research and educational goals of a classification. As such, the ICD\/DSM should serve a critical translation function to anchor communications among the multiple user groups that apply psychiatric classification in their day-to-day functions.

This information management goal intersects with multiple user groups in terms of:

- health policy

- clinical decision making

- quality measurement

- epidemiology

- educational certification\/accreditation

- multiple areas of research from genetics to psychopharmacology to cognitive science, etc.

The way this would work is that the ICD\/DSM classification would remain relatively stable, serving as a kind of "Rosetta Stone" to facilitate communication among the various user groups. Each individual user "tribe" (or individual scientist) would be free to identify various alternative classifications. However, all journals or other public reporting mechanisms would require that any clinical population also be described in the ICD\/DSM classification in addition to whatever tribal classification is used (e.g., of the patients meeting tribal criteria for "Syndrome XYZ", 70% met ICD\/DSM criteria for GAD, 40% for OCD, and 30% for Anxiety Disorder, NOS). Changes in future (descriptive) classifications should be infrequent and guided by a highly conservative process that would only incorporate changes with strong evidence that they:

1\. Enhance overall communication among the "tribes"

2\. Enhance clinical decision-making

3\. Enhance patient outcomes

However, the ICD\/DSM would have a section describing the relationships among the various tribal concepts that could be updated on a more frequent basis.

Note that this approach gives up the ideal of (or even a focus on) validity per se.
Maintaining effective communication (most notably, effective use, reliability and understandability) and clinical utility \[41\] (either the more limited improvement of clinical and organizational decision-making processes or the ideal of outcomes improvement) become the principal goals of the classification. In other words, while a psychiatric classification must be useful for a variety of purposes, it cannot be expected to be simultaneously at the forefront of, for example, neurobiology and genetics, psychoanalysis, and the education of mental health counselors, primary care providers and psychologists.

Meanwhile, multiple groups can continue their work on epistemic iteration using genetic approaches, others can develop ways to better measure quality or costs of care, and yet others can study dimensional ratings of personality. However, each tribal group would need to be able to communicate across the commons using the "Rosetta Stone". Thus, we would not be wobbling toward the asymptote of true validity but, instead, very slowly and continually rising toward the goal of better outcomes for patients.

## Commentary

Thomas Szasz, M.D.

SUNY Upstate Medical University.

I thank Dr. James Phillips for inviting me to comment on this debate. I am pleased but hesitant to accept, lest by engaging in a discussion of the DSM (the American Psychiatric Association's *Diagnostic and Statistical Manual of Mental Disorders*) I legitimize the conceptual validity of "mental disorders" as medical diseases, and of psychiatry as a medical specialty.

Psychiatrists and others who engage in this and similar discussions accept psychiatry as a science and medical discipline, the American Psychiatric Association (APA) as a medical-scientific organization, and the DSM as a list of "disorders," a weasel word for "diagnoses" and "diseases," which are different phenomena, not merely different words for the same phenomenon.

In law, the APA is a legitimating organization and the DSM a legitimating document. In practice, it is the APA and the DSM that provide medical, legal and ethical justification for physicians to diagnose and treat, judges to incarcerate and excuse, insurance companies to pay, and a myriad of other social exchanges to be transacted. Implicitly, if not explicitly, the debaters' task is to improve the "accuracy" of the DSM as a "diagnostic instrument" and increase its power as a document of legitimation.

Long ago, having become convinced of the fictitious character of mental disorders, the immorality of psychiatric coercions and excuses, and the frequent injuriousness of psychiatric treatments, I set myself a very different task: namely, to delegitimize the legitimating authorities and agencies and their vast powers, enforced by psychiatrists and other mental health professionals, mental health laws, mental health courts, and mental health sentences.

In *Psychiatry: The Science of Lies*, I cite the warning of John Selden, the celebrated seventeenth-century English jurist and scholar: "The reason of a thing is not to be inquired after, till you are sure the thing itself be so.
We commonly are at, *what's the reason for it?* before we are sure of the thing." In psychiatry it is usually impossible to be sure of "what the thing itself really is," because "the thing itself" is prejudged by social convention couched in ordinary language and then translated into pseudo-medical jargon.

Seventy-five years ago, in my teens, I suspected that mental illness was a bogus entity and kept my mouth shut. Twenty-five years later, more secure in my identity, I said so in print. Fifty years later, in the tenth decade of my life, I am pleased to read Dr. Allen Frances candidly acknowledging: "Alas, I have read dozens of definitions of mental disorder (and helped to write one) and I can't say that any have the slightest value whatever. Historically, conditions have become mental disorders by accretion and practical necessity, not because they met some independent set of operationalized definitional criteria. Indeed, the concept of mental disorder is so amorphous, protean, and heterogeneous that it inherently defies definition. This is a hole at the center of psychiatric classification." This is as good as saying, "Mental illness, there ain't no such thing," while still remaining loyal to one's profession.

The fallacy intrinsic to the concept of mental illness - call it mistake, mendacity, metaphor, myth, oxymoron, or what you will - constitutes a vastly larger "problem" than the phrase "a hole at the center of psychiatric classification" suggests. The "hole" - "mental illness" as medical problem - affects medicine, law, education, economics, politics, psychiatry, the mental health professions, everyday language - indeed the very fabric of contemporary Western, especially American, society. The concept of "psychiatric diagnosis," enshrined in the DSM and treated by the discussants as a "problem," is challenging because it is also a solution, albeit a false one.

Medicalization, epitomized by psychiatry, is the foundation stone of our modern, secular-statist ideology, manifested by the Therapeutic State. The DSM, though patently absurd, has become an utterly indispensable legal-social tool.

Ideologies - supported by common consent, church, state, and tradition - are social facts\/"truths." As such, they are virtually impervious to criticism and possess very long lives. The DSM is here to stay, and so is the intellectual and moral morass in which psychiatry has entwined itself and the modern mind.

## Commentary: On Inviting the Gorilla to the Epistemological Party

Elliott Martin, M.D.

Yale University Department of Psychiatry.

What makes the epistemological umpire analogy so enticing is its capacity for adaptation, the fact that the strike zone must be different for every batter. If I call 'em as I see 'em, then of course what is a ball thrown to one batter may be a strike thrown to another. As applied to the broadly descriptive nosology of DSM IV there is hardly an argument to be made against this. But let's add a missing piece to the scenario. Let's cast the eight-hundred-pound gorilla in the analogy, the insurers, as 'the owner'. More specifically, let's call the beast 'the hometown owner'.
And then let's say the umpire's salary is paid by the owner.

When the game was still played on rural fields, before the advent of electronic pitch-tracking devices, before the price of every pitch was calculated, before the global media contracts, the strike zone was a sacred space, the tiny, arbitrary, marked-off piece of ether in whose intimacy the entire game was decided. Before the 'owners' blew the entire field up to stadium size, the game was about conceptualization and process; before psychiatry was snatched up by the insurers, the pathologies were sought in subjectivity over objectivity. Artfulness existed alongside science. What, after all, did psychiatrists care for nosology before the rise of private insurance over the past several decades? Disordered thinking, as opposed to ordered thinking, was just that. Slapping a name on it did little to change the fact. One man's depression is another man's 'blues', and what does the patient care for the label?

'Carving nature' does require a measure of reliability, true, but the only conversations I have had in which I have coughed up the full DSM criteria have been those over the telephone, most often in the emergency room, with insurance reviewers 'objectively' determining, from up to thousands of miles away, whether a particular patient warrants two days or three days in which to be cured. And at that, for the benefit and safety of my patients, my strike zone widens tremendously after five minutes, and my diagnoses tend to reduce to the very non-DSM, if at times heavily punctuated, 'imminently suicidal!' or 'imminently homicidal!'. The arguments tend to end there, and it is apparent that what is missing in the epistemological umpire analogy is the hard baseball rule against arguing balls and strikes.

As a former academic, however, I simply have to believe that there is an inherent value in the pursuit of knowledge for knowledge's sake, that all sciences, veiled or not, are interwoven, regardless of the current paradigms, and that the loss of even one is somehow crippling to the others. But 'the owners', despite the fact that they stand oblivious, willfully or not, to the devastation they create, can no longer be ignored in these arguments. Whatever the historical mechanisms, the pursuit of knowledge has come up hard against the pursuit of profit in these last few decades. I contend that the process of classification, if not created by the owners, has at least been manipulated by them ever since. As students of the human mind, we are not informed by arbitrary classification of disorders of the mind; it informs the gorilla. Describing 'normalcy' and 'variants thereof' only serves to destroy further an already hobbled subjectivity. Nosology destroys narrative, and where formerly our patients were more appropriately likened to novels, they have now become, for the ease of illiterate overlords, more like newspapers.

As the noted Assyriologist Jean Bottero put it in defense of his own limited field, "Yes, the university of sciences is useless; for profit, yes, philosophy is useless, anthropology is useless, archaeology, philology, and history are useless, oriental studies and Assyriology are useless, entirely useless. That is why we hold them in such high esteem!" \[\[42\], p. 25\] Psychiatry finds itself in a unique position among the 'useless' sciences. Like the umpire offered a bribe by the owner, if the field chooses utterly to subserve profit it likely stands to gain tremendously.
If the field chooses to uphold an ideal of humanism in the face of gorilla-ism, then we will likely be faced with the same fate as philosophy, anthropology, archaeology, philology, and history. In which case let us all call 'em as we see 'em, keep the paperwork tidy, and at the very least be ever mindful of the watchful gaze of the gorilla.

## Allen Frances responds: There Is A Time And Place For Every Umpire

None of the five umpires is completely right all of the time. And none is totally wrong all of the time. Each has a season and appropriate time at the plate.

Forty years ago, Umpires 1, 3, and 5 were in competitive ascendance. The nascent school of biological psychiatry was a confident Umpire 1 - convinced that mental disorders would soon yield their secrets and be as fully understood as physical illnesses. In fact, there was a heated controversy over whether the new diagnostic manual (DSM III) then being prepared was a catalog of 'disorders' or of their much preferred term 'diseases'.

In sharp contrast, the competing models that dominated psychiatry forty years ago were very much like the skeptical Umpires 3 and 5 - in their different ways, all were nihilistic about the value or reality of psychiatric diagnosis. Psychoanalysis dealt with highly inferential concepts impossible to reduce to reliable diagnosis. Family, group, and community psychiatry went so far as to deny that the individual patient was a proper or very relevant unit for diagnostic assessment, preferring models that diagnosed the system at larger aggregates of interpersonal affiliation. When Szasz, then as now, decried the 'myth of mental illness', there was little coherent opposition outside the group of the smugly confident pioneers of biological psychiatry (who would soon be hoisted by their own petard).

The years have not been kind to Umpires 1, 3, and 5. Each still stakes some small claim to attention, but Umpire 2 now clearly rules and welcomes the collaboration of his close cousin, the ever-practical Umpire 4.

Why the revolution in epistemological sentiment? Biological psychiatry helped spark a wondrous neuroscience revolution that is perhaps the most thrilling focus of twenty-first-century biological science. But the findings have revealed a remarkably complex brain unwilling to yield any simple answers. There is thus far almost no translation from the glory of basic science discovery to the hard slog of understanding the etiology and pathogenesis of the 'mental disorders'. These no longer seem at all reducible to simple diseases, but rather are better understood as no more than currently convenient constructs or heuristics that allow us to communicate with one another as we conduct our clinical, research, educational, forensic, and administrative work.

Most hard-core biological psychiatrists have lost heart in the naïve faith of Umpire 1 that he can define simple models of illness. Those who were hunting (and reporting) the gene or genes for schizophrenia, bipolar disorder, and other disorders have been forced repeatedly to retract and eat humble pie.
Initial findings never achieved replication, for what became the obvious reason that there is no 'disease' of schizophrenia - that instead schizophrenia is better understood as just a construct (albeit a very useful one) with hundreds of different 'causes'.

Meanwhile, the diagnostic nihilism of Umpires 3 and 5 also became less relevant when DSM III proved that psychiatric diagnosis could be a reliable and useful tool of communication.

Umpire 2 now rules. Mental disorders are no more and no less than constructs. And Umpire 4 is quick to point out that they are very useful constructs. The current dominance of Umpires 2 and 4 is temporary, and certainly not complete. In some very gradual and piecemeal way, the future holds hope for an increased role for Umpire 1. As we slowly discover the biology of mental disorders, small subunits will cohere around a common pathogenesis and declare themselves as diseases. This is beginning to happen for the dementias of the Alzheimer's type. But it will always be necessary to retain the corrective voices of the skeptical Umpires 3 and 5 - to remind us just how little we know and how feeble are our tools for knowing.

### Reply to Drs Zachar and Lobello

Thank you for your contribution, which I received after writing my own. You have stated my position with much greater clarity and erudition than I could muster.

### Reply to Dr Pouncey

Thank you for your clarification of the Umpire metaphor. Your analysis nicely demonstrates the similarities and the differences in the positions of Umpires 1 and 2 - both accept the possibility of an independent reality, but differ sharply in their estimation of our current ability to apprehend it.

### Reply to Dr Ghaemi

Dr Ghaemi sets up a false and totally unnecessary dichotomy between his true-believer version of realism and what he calls "taking a random walk". It is possible, indeed necessary, to take a very modest position regarding the current state of certitude of psychiatric knowledge on the causes of psychopathology without assuming that we know nothing, or are walking totally blind, or that our constructs have no current heuristic value. Umpire 2's honest admission that he can do no better than call them as he sees them does not deny the possibility of real strikes and real balls - it just states the very constrained limits of our apprehension. I have no problem at all with the metaphor of epistemic iteration - it is obviously the route of all science. But let's realize how early in the path we are and how uncertain is its best direction.

### Reply to Dr Cerullo

How comforting to be a first umpire. I admire the magisterial confidence of Dr Cerullo's statement, "Most working scientists and philosophers would be classified as modern realists who believe there is an independent objective external reality". I wish I could feel so firmly planted in a "real" world and possess such naïve faith in mankind's capacity to apprehend its contours. Alas, as I read it, the enormous expansion of human knowledge during the last hundred years is enough to make Umpire 1's head spin with confusion. The more we learn, the more we discover just how much we don't (and perhaps can't) know. Einstein gave us a four-dimensional world that even physicists have trouble visualizing. Then the string theorists made it exponentially more complicated by expanding the dimensions into double figures and introducing conceptions of reality that may or may not ever be testable.
The quantum theorists describe a "spooky" (Einstein's term) and inherently uncertain world that lends itself to extremely accurate large-n prediction, but totally defies our intuitive understanding of the specific mechanics. It also turns out that we are pathetically limited in our sensory capacities, even when they are extended with our most powerful sensing instruments. Evolution allows us to detect only 4% of our universe, the rest of energy and matter being "dark" to us. Indeed, there may be a vast multiplicity of multiverses out there, and we may never know them. So I don't see human beings as having great status as judges of reality - we are like mice describing the proverbial elephant, having available only fallible and very temporary constructs.

To get back to our umpires, the connections between brain functioning and psychiatric problems are definitely real, but they are so complex and heterogeneous as to defy any simple "realist" faith that we are close to seeing them straight on, much less solving them.

### Reply to Dr Wakefield

Drs Wakefield and Pouncey have made many of the same important points. Dr Wakefield's "humble realism" (associated with an honest and flexible willingness to admit fallibility and the possibility of error) works for a great baseball umpire and is not a bad model for a psychiatric diagnostician. The difference between Umpire 2 and Umpire 1.5 depends on how close you think our field is to understanding the reality of psychopathology. I am even more humble than Dr Wakefield and will stick with Umpire 2.

### Reply to Dr Pierre

I agree.

### Reply to Dr Greenberg

In defending Umpire 3, Dr Greenberg assumes a grandly neutral view of man's place in the world and makes clear how limited are our abilities in naming and classifying its manifestations. Greenberg rightly suggests that the distinction between a broken branch and a broken femur may be extremely meaningful to the patient and his doctor, but is really trivial in the grand scheme of an indifferent nature. He might equally have pointed out that from a bacterium's perspective, pneumonia is not a disease - it is just an opportunity for a good feed. Diseases, according to Greenberg's argument, are no more than human constructs made up de novo by us as inherently self-interested third umpires.

From Greenberg's lofty perch, mankind's attempts to label do seem pathetically self-referential and solipsistic, extremely limited in their apprehension of reality (even assuming that there is a graspable reality ready to be apprehended).
But it seems to me that his level of philosophic detachment works only in the exalted theoretical realms and, contrary to his statement, fails badly to do justice to the needs and opportunities of our everyday, "common sense" world.

Greenberg and I do agree completely on several points: 1) if mother nature had the gift of speaking our language and the motivation to do so, she would probably indicate that she couldn't care less about our names and that she doesn't feel particularly well described by them; 2) our categories are no more than tentative approximations and are subject to distortion by personal whims, cultural values local to time and place, ignorance, and the profit motive; and 3) psychiatry's names should be used with special caution because they lack strong external validators, carry great social valence, and describe very fuzzy territorial boundaries.

Where my Umpire 2 position differs from Greenberg's Umpire 3 is in our relative estimations of how closely our names and constructs can ever come to approximating an underlying reality. My Umpire 2 position is skeptical about Umpire 1's current ability "to call them as they are" and advises modesty in the face of the brain's seemingly inexhaustible complexity. But I remain hopeful that there is a reality and that, at least at the human level, it will eventually become more or less knowable. We may never fully figure out the origin and fate of the universe or the loopy weirdness of the quantum world. But the odds are that decades (or centuries) of scientific advance will gradually elucidate the hundreds (or thousands) of different pathways responsible for what we now crudely call "schizophrenia".

Greenberg is more skeptical than I about the progress of science and is, at heart, a Platonic idealist who finds life cheapened by excessive brain materialism. He sees psychiatric disorders as no more than human constructs - metaphors, some of which are useful, some harmful. His Umpire 3 does not believe the glory and pain of human existence can or should be completely reduced to the level of chemical reactions or neuronal misconnections. This is a fair view for poets and philosophers (and Greenberg is both), but I see a ghost in his machine and dispute that allowing it in makes "common sense".

### Reply to Dr Pincus

Thank you for inventing the fourth umpire. Dr Pincus is the most practical of men and he has created a handy metaphor for describing the ultimate goal of any DSM - to be useful to its users. There is only one problem with the fourth umpire's position - but it is a big one. There is no external check on his discretion, no scientific or value system that guides what is useful. Everything depends on the skill and goodwill of the umpire. In the wrong hands pragmatism can have dreadful consequences - commissars who treat political dissent as mental illness or judges who psychiatrically commit run-of-the-mill rapists to keep them off the streets. But to ignore the practical consequences of psychiatric decisions leads to its own set of abuses - most recently diagnostic inflation and excessive treatment.

### Reply to Dr Szasz

I have enormous respect for the intellectual reach and depth of Dr Szasz' critique of psychiatric diagnosis and for the moral power of his lifelong efforts to prevent its misuse.
He skillfully undercut the pretensions of the Umpire 1 position at a time when its biological proponents were at their triumphalist peak, loudly trumpeting that they were close to finding the gene for schizophrenia and to elucidating its brain lesions. He anticipated and exposed the naivete of these overly ambitious and misleading claims. He has fought the good fight to protect the rights, dignity, and personal responsibility of those deemed to be "mentally ill". My argument with Dr Szasz is that he goes too far and draws bright lines where there are shades of gray. Surely, he is right that schizophrenia is no "disease", but that does not mean it is a "myth". Surely, he is right that psychiatric diagnosis can be misused and misunderstood, but that doesn't mean it is useless or can be dispensed with. Dr Szasz is correct in defining the many problems with psychiatric diagnosis, but he doesn't have alternative solutions. There is a baby in there with the bath water he is so eager to discard.

### Reply to Dr Martin

I agree that we can't always assume the Umpires are acting only from the purest and most disinterested of motives. Games can be fixed for financial gain, and psychiatry operates in a real world of large drug company, insurance, and publishing profits. My experience has been that the actual framers of DSM IV and of DSM 5 have not been shills for industry - but heavy drug marketing has led to much over-diagnosis using DSM IV, and the risks are greatly heightened because of the new diagnoses being suggested for DSM 5. Dr Martin's comment makes clear that we must be aware that the diagnosis of a given patient can be distorted by real-world economic factors, and we must be ever vigilant to protect the integrity of the process.

# Question \#2: What is a Mental Disorder?

*It has been difficult to reach agreement on a definition of mental disorder. Could you comment on this problem, or offer what you think is an adequate definition of the concept, mental disorder?*

## Introduction

On the face of it this is a strange question. As treating clinicians we surely should be able to offer a definition of what it is we treat. As researchers we surely should be able to define the object of our research. And finally, as philosophers writing about mental illness, surely we should be able to provide a definition of the object of our investigation. So why is it so difficult to accomplish these tasks? Allen Frances has puzzled over this question, and as he indicates below, it leads him into Humpty Dumpty's world of "shifting, ambiguous, and idiosyncratic word usages."

Failure to achieve a consensually accepted definition leads in two directions: give up or keep trying. The first approach is represented by Warren Kinghorn, who argues in his commentary that we won't achieve the desired definition and don't need it anyway - any more than other specialists do for their work - and thus should abandon the effort to try yet again to get it right in DSM-5.

The opposite approach is represented in different ways: by Jerome Wakefield, on the one hand, and Stein and colleagues, on the other. In his contribution to this article Wakefield presents the evolution-based harmful dysfunction (HD) definition of mental illness for which he is justly well known.
In this contribution he argues that the varied positions of figures like Allen Frances and Kenneth Kendler depend implicitly on the HD understanding and definition of mental disorder.

Stein and colleagues (not represented in this article) \[43\] take another approach in trying to improve the DSM-IV definition: operationalizing it, and then going to work on the operationalized definition. They tweak some of the DSM-IV (definitional) criteria as well as adding further criteria, e.g., acknowledging the normative, value-laden aspect of many diagnoses. In their effort to improve the DSM-IV definition, they address many of the complaints lodged against DSM-IV (co-morbidity, poor separation between diagnoses, poor separation from normality, etc.). In trying to be comprehensive, they include issues of clinical utility, scientific accuracy through validators, and pragmatic concern for patient outcome. They do not, however, deal with the issue discussed in Question 5, namely, that these can be conflicting agendas, and that at times we effectively prioritize one over the other.

Life never being as neat as one would like it, Allen Frances and Joseph Pierre defeat my attempt at a simple dichotomy and occupy a middle ground between the alternatives of giving up on a definition or trying to improve it. Each argues for attempting a definition but assures us that it will be a messy undertaking.

And that conclusion reminds me that in this search for an adequate definition of mental disorder it would be useful to invoke Wittgenstein's notion of family resemblances. In discussing the essence of language or language games, Wittgenstein writes:

Instead of producing something common to all that we call language, I am saying that these phenomena have no one thing in common which makes us use the same word for all, but that they are *related* to one another in many ways. And it is because of this relationship, or these relationships, that we call them all "language."...

I can think of no better expression to characterize these similarities than "family resemblances"; for the various resemblances between members of a family: build, features, colour of eyes, gait, temperament, etc. etc. overlap and criss-cross in the same way. - And I shall say: 'games' form a family. \[\[44\], pp. 31-32\]

The Wittgensteinian approach would represent the middle ground of Frances and Pierre. In technical DSM terminology, diagnoses don't all share the same properties or validators; they resemble one another because they share some. A diagnosis might have a place in the nosology because of "historical accretion," but might lose it because it doesn't meet the standards of other current validators. Examples are the paraphilias and conduct disturbances, both of which may lose their admission tickets because of excessive normative valence and inadequate internal distress. Borderline Personality Disorder (now Borderline Type) is being retained presumably because descriptively it covers a lot of the symptomatic morass otherwise not covered; but with its known problems - heterogeneous presentation, excessive comorbidity, lack of genetic or pathophysiologic foundation - it will probably collapse and be carved up eventually for lack of real validators, i.e., not enough validators in common with other members of the DSM family.
And finally, to underline Pierre's point, distinguishing a validator from a value is an ambiguous enterprise.

I readily acknowledge that the family resemblance model will not satisfy those in search of a tighter definition of what constitutes a psychiatric disorder. Its main virtue is that it reflects how we actually "define" - and, in the absence of a tighter definition, will continue to "define" - psychiatric disorders.

## Commentary

Jerome C. Wakefield, Ph.D., D.S.W.

Silver School of Social Work and Department of Psychiatry, New York University.

Al Frances's powerful writings on psychiatric diagnosis offer vigorous arguments about what should and should not be diagnosed as psychiatrically disordered (not only regarding DSM-5 overreaching; consider his adding the clinical significance criterion to most diagnostic criteria sets in DSM-IV). Yet Frances vehemently denies that there is any coherent concept underlying our judgments of what is and is not a disorder. This may save him from a troublesome additional debate, but, as observed by some commentators, it undercuts the coherence, let alone the force, of his critique of the false-positive implications of DSM-5 proposals.

Despite his disavowals, Frances's arguments derive their enormous power from an implicit reliance on common intuitions about the concept of disorder as failure of biologically designed human nature. Sometimes this implicit appeal emerges explicitly, as in Frances's \[45\] explanation of why he rejects DSM-5's proposed approach to behavioral addiction: "The fundamental problem is that repetitive (even if costly) pleasure seeking is a ubiquitous part of human nature.... The evolution of our brains was strongly influenced by the fact that, until recently, most people did not get to live very long. Our hard brain wiring was built for short term survival and propagating DNA - not for the longer term planning that would be desirable now that we have much lengthened lifespans.... This type of hard wiring was clearly a winner in the evolutionary struggle when life was "nasty, brutish, and short". But it gets us into constant trouble in a world where pleasure temptations are everywhere and their long term negative consequences should count for more than our brains are wired to appreciate."

Notice that Ken Kendler et al. \[46\], on the extreme opposite side of the DSM-5 debate, implicitly appeal to the same biological-design criterion when explaining why fearful distress in reaction to real danger is not a disorder: "An individual experiencing a panic attack after just barely escaping a fatal climbing accident would not be considered psychiatrically disordered because the mechanism for panic attacks probably evolved to prepare us for such situations of real danger" (p. 771).

So, what is the concept of disorder to which Frances and Kendler implicitly appeal? The DSM's definition of disorder says that a disorder exists only when symptoms are caused by a dysfunction in the individual and lead to certain forms of harm, such as distress or impairment. Observing that the concept of "dysfunction" was left unelaborated and that distress or disability are not the only harms that would warrant diagnosis, I proposed what I labeled the "harmful dysfunction" (HD) analysis of the concept of disorder \[47-52\].

The harmful dysfunction analysis maintains that the concept of disorder has two components, a factual component and a value component. To be a disorder, a condition must satisfy both components.
The value or \"harm\" component refers to negative, undesirable, or harmful conditions, and it is satisfied by most symptomatic conditions. Obviously, who gets to make the judgment that a condition is harmful and on what grounds (especially in a pluralistic society) is a complex issue. But the basic point is that no condition, even if a clear biological malfunction, is a disorder if it is not considered in some sense harmful to the individual or society. This is the basis for the \"clinical significance\" requirement.\n\nThe factual component requires that the condition must involve a failure of some mental mechanism to perform one of its natural, biologically designed functions. This is highly inferential and speculative and fuzzy at this stage of knowledge of mental processes, but it is the conceptual target at which we aim nonetheless. Indeed, although both the notions of dysfunction and harm are fuzzy concepts, as long as they determine a range of clear cases on either side of the disorder\/non-disorder boundary, they can provide a cogent and useful conceptual structure. Other useful categorical distinctions - such as between night and day, or child and adult - also have fuzzy boundaries, and pragmatic considerations determine specifically where the dividing line is drawn (see Question 1).\n\nToday, we understand that human nature -- specifically, species-typical biological design -- is due to evolution through natural selection. So, dysfunction in the sense relevant to judgments of medical disorder consists of failure of internal mechanisms to perform evolved functions. The \"dysfunction\" component of the analysis means that, as far as the legitimate application of the concept of disorder goes, disorder cannot be manufactured from personal or social values and used as a cover for \"treatment\" in service of social control. The \"dysfunction\" requirement places a limit on what can be legitimately said to be a disorder, and explains why many negative conditions are false positives.\n\nHow can we test this account of the generally shared meaning of our distinction between disorder and non-disorder? One way is to see if it is consistent with some of our shared intuitions. Consider some simple examples of conditions NOT considered disorders. Neither illiteracy nor an immigrant's inability to speak the local language is considered a disorder, yet both are terribly impairing, potentially distressing, and disadvantageous mental conditions (whereas dyslexia and aphasia are disorders). Being a \"night person\" rather than a \"morning person\" in a 9-to-5-structured culture is potentially disadvantageous, but considered a normal variation. Fertility when pregnancy is unwanted, pregnancy when children are unwanted, and pain during childbirth are all undesirable and potentially harmful conditions commonly treated by physicians but not considered disorders. Neither debilitating fatigue after exertion nor sleep -- probably the single most massively impairing human condition of all, rendering virtually everyone semi-paralyzed and periodically hallucinating for one-third of their lifespans -- is seen as a disorder. Nor is delinquent behavior by rambunctious teenagers or grief after a loved one's death.\n\nThe source of these classificatory judgments cannot be reduced to personal or social values. Many of these non-disordered conditions are personally and\/or socially undesirable. There is some additional element operating here to explain these judgments.
The common element is that we consider all these conditions to be part of the way human beings are designed to function, however problematic they may be in our current environment.\n\nMoreover, our intuitions are that entire cultures can be incorrect about their disorder judgments. Victorians' deeply held values and beliefs led them to classify masturbation and female clitoral orgasm as disorders, some ante-bellum Southern physicians considered slaves who ran away from their masters to be psychiatrically disordered (\"drapetomania\"), and Soviet psychiatrists treated political dissidents as disordered. We believe that these diagnostic judgments, although consistent with the respective cultures' values, were incorrect -- not correct \"for them\" and incorrect for us, but just plain wrong. The factual component of \"disorder\" explains how this can be so.\n\nNow, consider the types of conditions to be found in the DSM. At this point in the history of psychiatry, we are about where Hippocrates was in formulating diagnostic categories. We know virtually nothing about the underlying natures of mental mechanisms, but from circumstantial evidence and indirect inferences, we infer what conditions are likely disorders. The superordinate categories in the DSM - e.g., psychotic (thought) disorders, anxiety disorders, mood (sadness\/elation emotion) disorders, sexual disorders, sleep disorders, and so on - correspond to human systems about which we feel we can pretty reliably infer that they are biologically designed to operate in certain ways, and we can recognize in a range of cases when something has gone wrong - albeit with large fuzzy boundaries and large ranges of uncertain cases given our ignorance.\n\nAnother way of assessing the HD analysis is by whether it accomplishes certain important goals motivating such an analysis. At a minimum, an analysis of the concept of mental disorder should do four things:\n\n\\(1\\) Explain widely shared classificatory judgments about whether or not problematic conditions are disorders.\n\n\\(2\\) Explain why mental disorders are disorders in the same sense as physical disorders, thus why psychiatry is part of medicine.\n\n\\(3\\) Explain the distinction between control of socially undesirable mental conditions versus treatment of mental disorder.\n\n\\(4\\) Offer a fruitful way of thinking about research.\n\nThe first three goals are addressed above. As to research, the HD analysis explains the primary goal as seeking to understand mental mechanisms and their functions, and ultimately to identify and distinguish specific mental dysfunctions, yielding an \"etiological\" classification.\n\nFrances's end run around the concept of disorder is accomplished partly by his insisting on a scientific-sounding \"cost-benefit analysis\" of each DSM-5 proposal. This is a useful rubric for raising concerns and is of course an improvement over wanton, unreflective pathologization. In drawing boundaries in some fuzzy domains, the concept of disorder offers little guidance, and perhaps cost-benefit analysis is all we can do. However, generally reframing the process of deciding whether to pathologize a condition as a cost-benefit analysis, if taken seriously, is not only intellectually unsupportable but dangerous, for the same reason that one would not want cost-benefit analysis to determine whether an accused was judged guilty in a trial - namely, one loses any protection against social control aspirations.
After all, for all we know a careful cost-benefit analysis might show that from a social perspective \"treating\" the Soviet dissidents for psychosis, slaves who ran away for \"drapetomania,\" and women for \"pathological\" clitoral orgasms was justifiable in the respective social circumstances -- but it was an incorrect use of diagnosis nonetheless, because the individuals so labeled (in many cases) did not actually suffer from disorders. Just as the health professions spring from the concept of disorder, restraint of efforts to use the health professions for social control also springs from this crucial concept. To deny the existence and importance of the concept of mental disorder in a discussion of the validity of diagnostic criteria in psychiatry is sort of like a teacher denying there is knowledge or ignorance before entering the classroom or a judge denying there is criminal guilt or innocence before rendering a verdict.\n\nI believe that, despite his misstep on the concept of disorder, Frances is entirely correct in his relentless focus on false positives as the Achilles heel of the DSM-5 effort. The reason why this is so is not simply a matter of preventing repetitions of the \"false epidemics\" of the past due to criteria revisions. The deeper point is that the very aspirations of the editors of DSM-5 for a paradigm shift towards etiologically based diagnosis - a shift for which we are not yet ready - depend in the long run on distinguishing etiologies. That means distinguishing different dysfunctions as well as distinguishing normal non-dysfunctions from dysfunctions that underlie similar symptom presentations.\n\nBecause we are not yet ready to distinguish dysfunctions, DSM-5 must remain theory neutral. However, a serious conceptual-validity review could have allowed us to do considerably better in distinguishing dysfunction from likely non-dysfunction for many categories. Instead, the DSM-5 work groups are expanding diagnoses so that more individuals coming into consultation can be labeled with a disorder, without careful conceptual analysis. Given this focus, the DSM-5 is likely to take psychiatry further away from the editors' stated goal. The DSM-5 could have made progress towards etiologically grounded categories by eliminating some of the false positives that afflict the manual's operational criteria sets, placing them under separate V Code categories. With false positives swamping many categories, psychiatric science will continue to flounder, unable to distinguish dysfunction etiologies because its criteria cannot in many cases even distinguish internal dysfunction, where something has gone wrong, from intense but normal biologically designed reactions to events.\n\n## Commentary: Definitions of \"Mental Disorder\": Elusion and Illusion\n\nWarren Kinghorn, M.D.\n\nDuke University Department of Psychiatry.\n\nThe definitions of \"mental disorder\" that have appeared in each edition of the *DSM* since *DSM-III* (hereafter referred to as the \"*DSM* definitions\") are both carefully crafted and widely ignored. Pioneered by Spitzer and Endicott as a means for demarcating (true) mental disorders from non-pathological conditions in the wake of controversy over the status of homosexuality in *DSM-II* \\[53\\], the *DSM* definitions have from the beginning, and despite copious qualification and caveat, proved controversial and largely irrelevant to the practical use of the *DSM*.
Specific criticisms of the *DSM* definitions, such as their circular use of \"clinically significant,\" are by now commonplace \\[54,55\\]. The *DSM-5* Task Force has proposed an updated definition that seeks to improve on prior definitions and that emphasizes the pragmatic and method-driven nature of psychiatric nosology \\[43\\]. The *DSM-5* proposal is worthy of sustained critical consideration, which I do not offer here. In this brief account, rather, I wish to offer three reasons why future editions of the *DSM* should not include a definition of \"mental disorder.\" In each case, I argue, the *DSM* definitions *seem* to contribute to a certain good, but this is illusion.\n\nFirst, the *DSM* definitions *seem* only to describe the terrain of what is already considered \"mental disorder,\" innocuously supplying rough logical boundaries to psychiatric praxis without limiting or shaping it. But this is an illusion. Although, as Sadler \\[54\\] points out, the *DSM* definitions have not in practice influenced the way that new diagnoses are incorporated into the *DSM*, they do provide a regulative language for how one speaks about the mental disorders that are already there. Questions, for example, about whether major depressive disorder most properly inheres in an individual or in a group (or even in a society; \\[56\\]) run counter to the methodological individualism of the *DSM*, enshrined in its definitions of mental disorder, and are therefore difficult to ask without bringing the entire *DSM* project into question. This should not be the case; it is precisely by excluding useful questions that the *DSM* renders itself an obstacle to nosological advance rather than a catalyst to it.\n\nSecond, the *DSM* definitions of mental disorder *seem* to demarcate a safe conceptual territory within human life and experience within which the medical model can properly rule. The unspoken but nonetheless persuasive model seems to run something like: *Q: Why should a psychiatric technology (medication, ECT, manual-driven psychotherapy, etc.) be deployed within this situation?* A: Because it's a mental disorder, and one uses psychiatric treatments for mental disorders. *Q: But how do we know that this is a mental disorder?* A: Well, it's in the *DSM*, and besides, it fits into the conceptual space which the *DSM* defines as \"mental disorder.\" But this, too, is illusory and deceptive. The deployment of psychiatric technology is not justified because a particular condition is demarcated as \"mental disorder\" but because, after all goods are weighed and all options considered, the use of technology is prudentially indicated. Whether the condition is classified as \"mental disorder\" has little to do with this particular question. Although the *DSM* definitions importantly exclude certain situations (e.g., primary social deviance) from the medical model, it would be better for the *DSM* simply to stipulate these ethical commitments rather than to embed them within the definition of \"mental disorder.\" Moreover, the fact that a certain condition satisfies the *DSM* definitions does not serve as *prima facie* justification for the deployment of technology for that condition, and the *DSM* should not collude in any contrary assumption.\n\nFinally, the *DSM* definitions *seem* to focus diverse mental health professionals on a common moral project.
Clinicians may disagree about etiology and treatment, that is, but can at least join together in stamping out \"mental disorder\" as described in the generously broad *DSM* definitions. But this also is illusion, proved empirically to be so by the failure of every *DSM* definition to achieve widespread consensus, and destined to be proved again in the inevitable failure of the *DSM-5* definition. The reason for this is not that the definitions are poorly crafted (quite the opposite) but that such consensus, within the contemporary mental health landscape, is not a conceptual possibility. For example, with regard to the *DSM-5* proposal - the best definition to date - particular clinicians are certain to reject not only nosological individualism but also the foundational assumptions behind \"underlying psychobiological dysfunction,\" the exclusion of expectable responses to common stressors and losses, and the distinction between \"behavioral\" and \"psychological.\" Furthermore, even if consensus on a formal definition were attainable, it would accomplish little, since agreement about \"dysfunction\" and \"impairment\" and \"deviance\" can be no stronger than correlative agreement about proper human functioning in a particular situation (such that, e.g., what one clinician judges as impairment, another judges as unreasonable expectation). There is, unfortunately, no more agreement about proper human function than about the (closely related) nature of \"mental health.\"\n\nCrafting a definition of mental disorder is a useful thought-experiment for psychiatric nosologists, and debating such definitions is great fun for the philosophically minded. But in a document as influential and generally accessible as the *DSM*, such definitions elude and mislead more than they illumine. They should therefore be honorably retired.\n\n## Commentary\n\nJoseph Pierre, M.D.\n\nUCLA Department of Psychiatry.\n\nDeveloping an ironclad definition of mental illness (or for that matter the more general notion of disease) is indeed a daunting task \\[43,57-59\\]. Most attempts at a medical-model definition are based on some variation on the concept of \"something that has gone wrong biologically with an individual that results in distress or functional impairment,\" but problems with this approach quickly emerge. First, we have no firm explanations for what's biologically wrong in mental disorders \\[57\\] - no \"underlying psychobiological dysfunctions\" \\[43\\] have yet been elucidated (and as the joke goes, in the few cases where such \"lesions\" have been identified, they transform from psychiatric disorders into neurologic diseases). Second, the concepts of \"wrongness\" and \"dysfunction\" are unavoidably value-laden \\[57,60\\], while \"distress\" and \"suffering\" are subjective and relativistic \\[43,61\\]. Third, distinguishing between mental illness and \"problems of living,\" \"expectable responses to common stressors\" (or extraordinary stressors), and \"social deviance or conflicts with society\" is at best challenging \\[43\\], given the inextricable, reciprocal relationship between individuals and their environments\/cultures \\[58\\].
Therefore, in the absence of lab tests to detect biological lesions, psychiatric diagnosis inevitably rests upon some kind of \"judgment call\" on the part of a clinician and the inescapable conclusion that \"no definition perfectly specifies precise boundaries for the concept of 'mental disorder'\" \\[43\\].\n\nAs suggested earlier, clinicians do not in general fret over what does or does not constitute a disease. If, for example, a patient's arm is broken in a car accident, a doctor doesn't lose sleep pondering whether this represents \"broken bone disorder\" or simply an expected response to an environmental stressor - the bone is set and the arm is casted. For psychiatrists, ever since Freud's development of \"The Talking Cure,\" the business of psychiatry has increasingly shifted from asylum care of patients with psychosis to outpatient treatment of the \"worried well\" \\[62\\]. Likewise, we now live in a society that regards the attainment of happiness as a worthwhile goal in life, if not an entitlement \\[63\\]. Therefore, mental disorder or not, clinicians working in \"mental health\" see it as their calling to try to improve the lives of whoever walks through their office door seeking help.\n\nBut clinical decisions are not the only decisions that hinge upon diagnosis. The underappreciated challenge to defining a mental disorder in the real world stems from the many different questions that are ultimately being asked of diagnosis:\n\nShould \"disorder X\" be treated?\n\nWhat is the best way to treat \"disorder X\"?\n\nShould screening for \"disorder X\" be implemented in the community in the interest of treatment and prevention?\n\nShould special school services be offered for children with \"disorder X\"?\n\nShould insurance companies reimburse for care based on \"disorder X\"?\n\nShould research funding be granted to study \"disorder X\"?\n\nShould a patient population based on \"disorder X\" be selected for etiologic research?\n\nShould someone with \"disorder X\" who has committed a crime be sentenced to prison or to involuntary psychiatric treatment?\n\nThe far-reaching implications of these questions render the distinction between mental illness and normality far more significant than when considered in a clinical vacuum, especially in the era of rationed healthcare services, insurance reimbursement, and competitive research funding. For the clinician and patient, erring on the side of assigning, or receiving, a diagnosis of a mental disorder has become incentivized, but future economic and policy decisions may necessitate a more conservative threshold for defining mental illness \\[61\\]. Therefore, the many questions asked of diagnosis cannot be answered by any single definition of mental illness, or by simply referring to the DSM.
Instead, and in particular as DSM-5 embraces the concept of diagnostic spectra in which the borders between pathology and normality are stretched, wide-ranging consideration and context-specific analyses by clinicians, patients and their families, researchers, DSM architects, and policy-makers will be a vital, ongoing process that shapes the fate of modern psychiatry.\n\n## Commentary\n\nJohn Chardavoyne, M.D.\n\nYale University Department of Psychiatry.\n\nHow should American psychiatry define \"mental disorder\"? The various definitions emphasize a varied array of dimensions, including the biological basis of psychiatric illness; the behavioral manifestations; the severity of impairment; the level of distress; and the differentiation between pathology and normalcy. In this commentary I will provide examples of these different aspects, suggest reasons for the confusion, and propose the beginning of a more integrated approach.\n\nFirst, let's review elements of a few of the definitions that have been proposed. The DSM-IV definition emphasizes explicitly that it characterizes disorders, not people; that there are biological, psychological, or behavioral dysfunctions that result in a psychological or behavioral syndrome; that this creates impairment or distress; that it does not result from discord between the individual and society; and that these problems are not culturally sanctioned. The proposed changes for DSM-5 include that there is psychobiological dysfunction that results in a psychological or behavioral syndrome; that there is evidence of distress or impairment in functioning; that the response is not expectable and not culturally sanctioned; that it does not result from discord between the individual and society; and that there is diagnostic validity and utility. According to the National Alliance on Mental Illness, mental illness is defined with an emphasis on the medical model. The International Classification of Diseases-10 definition highlights the symptoms.\n\nThese definitions reflect the difficulties in adequately describing the nature of a \"mental disorder.\" They reflect the uncertainty about how biology results in psychiatric manifestations and vice versa, as well as difficulties in the following: how to differentiate pathology from normalcy, how to categorize the syndromes, and where to locate the source of dysfunction. Along these lines, a chief concern is the emphasis on overt behaviors to classify mental disorders. Although patients report subjective experiences and distress, overt behaviors have been used as the major indicators of disorder. The difficulty of quantifying the first-person subjective experience, compared with third-person objective observations (overt behaviors), manifests here. The Psychodynamic Diagnostic Manual reintroduces the subjective experience into assessment \\[64\\].\n\nThis raises the question of the direction that psychiatry wants to go. Do psychiatrists want to be able to treat an individual with manic symptoms and also someone with intimacy problems? Presumably the former would have more behavioral markers than the latter. However, how can one quantify the suffering of one versus the other? Simply because there are not as many behavioral manifestations, does that mean that the person is not suffering sufficiently to warrant treatment by a psychiatrist?
Equally important, does that suggest that insurance companies should continue to determine reimbursement for treatment based on behaviors, without considering level of suffering and other factors that contribute to psychological dysfunction, like thoughts, feelings, and relationships?\n\nAnother element to consider when defining \"mental disorder\" is the fact that, foremost, there is a person who has the signs and symptoms of a mental disorder. As referenced earlier, DSM-IV states that the definition focuses on the disorder and not the person. This poses various problems, because psychiatrists treat people. This fact can be lost in this age of fifteen-minute medication checks and insurance pressures. The essential aspect of the help is the relationship, so acknowledgment of the patient as a person with an illness should be added to the definition of mental disorder. \"... When therapists apply manualized treatments to selected symptom clusters without addressing the complex person who experiences the symptoms and without attending to the therapeutic relationship that supports the treatment, therapeutic results are short-lived and rates of remission are high\" \\[64\\]. Admittedly, what complicates this issue is our lack of understanding of how brain processes (ion fluxes, neurotransmission, etc.) result in consciousness, intentionality, thoughts, and the subjective experience of emotions, and how these psychological states affect brain processes. Along this vein, perhaps the use of \"mental\" should be reconsidered because it implies, intentionally or not, the separation of mind and brain. Perhaps \"neuro-mental\" could be considered? The definition possibly could begin like this: \"An individual (with hopes, dreams, disappointments, and feelings like his treaters) is considered to have a neuro-mental disorder when....\"\n\nThe definition of a mental disorder will evolve over time as our knowledge advances and cultures evolve. It may be worthwhile to recognize that a definition will be merely a synthesis of the various dialectical poles and will require repeated adjustments over time. If we're willing to change our understanding of the definition, then perhaps we'll be willing to better understand our patients.\n\n## Commentary: The Difficulties of Defining a Mental Disorder in DSM-III\n\nHannah S. Decker, Ph.D.\n\nUniversity of Houston.\n\nI am commenting on this question as a historian of psychiatry who is writing a book on the making of DSM-III.\n\nThe first two DSMs, in 1952 and 1968, eschewed a definition of mental disorder, befitting their modest origins in nomenclatures that served primarily \"the \\[statistical\\] needs and case loads of public mental hospitals\" \\[\\[65\\], p. vi\\]. But the editor of the third edition (1980), Robert L. Spitzer, had more ambitious goals for the manual, resulting in a volume that was over three times the size of DSM-II and had dozens of new diagnoses. It was Spitzer's plan to include in DSM-III a definition of \"mental illness\" as a subset of \"medical illness.\" Circumstances forced him to abandon this type of definition, but there is a definition of \"mental disorder\" in DSM-III, prefaced by the familiar caveat, \"there is no satisfactory definition that specifies precise boundaries for the concept 'mental disorder'\" \\[\\[66\\], p. 5\\].\n\nSpitzer wanted to establish that, without any doubt, psychiatry was a part of medicine. He had initially thought seriously about mental disorders even before he was appointed the head of the Task Force.
In 1973, he had brokered the removal of the diagnosis of homosexuality as a mental disorder from DSM-II, and the controversy surrounding the event sensitized him to the subject of what constituted a mental disorder. He soon found impediments to his goal of establishing definitions of medical and mental disorder in DSM-III. Still, at every turn he persevered, because he envisioned the issuance of the new diagnostic manual as having intellectual goals far larger than diagnostic classification alone. Spitzer wanted DSM-III to play a role in combating the anti-psychiatry movement of the 1960s and early '70s and to refute critics such as Thomas Szasz, who said mental illness was a myth.\n\nI would like to spell out briefly the obstacles that lay in the path of an agreed-upon definition of a mental disorder. At the annual meeting of the American Psychiatric Association in May 1976, Spitzer and Jean Endicott, a close colleague on the DSM-III Task Force, put forth their definitions of medical and mental disorders. The reaction was quite negative. As Spitzer later reported: \"Some questioned the need and wisdom of having any definition. Many argued that the definition proposed was too restrictive, and if officially adopted, would have the potential for limiting the appropriate activities of our profession... they also felt that it was out of keeping with trends in medicine that emphasize the continuity of health and illness\" \\[\\[67\\], p. 16\\]. (This continues to be an important question in current debates over what diagnoses should be in DSM-5. Allen Frances, in particular, has argued against pathologizing what he sees as aspects of normality, \"everyday incapacity,\" in his words.)\n\nThen, Spitzer encountered strenuous opposition from psychologists to the notion that mental disorders were medical disorders. This was a turf issue, with the psychologists fearing that they would lose the right to treat mental disorders if they were defined as medical. In June 1976, a conference was held in St. Louis on \"Critically Examining DSM-III in Midstream.\" Dr. Maurice Lorr, representing the American Psychological Association, \"expressed the view that mental disorders (as medical disorders) should be limited to those conditions for which a biological etiology or pathophysiology could be demonstrated.\" In addition, just two months earlier, a former president of the American Psychological Association had been quite blunt in expressing his view that DSM-III was \"turning every human problem into a disease, in anticipation of the shower of health plan gold that is over the horizon\" \\[\\[67\\], p. 36\\].\n\nIn spite of these disagreements, Spitzer, as was his wont, did not surrender easily. He returned the next year to bolster his arguments. This was at the yearly meeting of the American Psychopathological Association, an organization of preeminent American psychiatrists dedicated to research on human behavior. In 1977 it devoted its annual conference to \"Critical Issues in Psychiatric Diagnosis.\" Spitzer and Endicott not only presented retooled definitions of both medical and mental disorders, but Spitzer, as an editor of the 1978 published proceedings of the conference, now took the opportunity to remind his readers of the blows psychiatry had endured in the 1960s and early '70s: \"The very concept of psychiatric illness has been under considerable attack in recent years. This attack has largely depended upon studies derived from the social sciences.
Some have taken the stand that what are called mental illnesses are simply those particular groups of behaviors that certain societies have considered deviant and reprehensible.\" Spitzer believed that this rejection of the legitimacy of psychiatry was partly owed to the fact that \"no generally agreed upon definition of mental illness has been propounded that is not open to the criticisms of cultural relativism\" \\[\\[68\\], p. 5\\].\n\nIn addition to his conviction that DSM-III, with its new diagnostic criteria, would bring diagnostic reliability to psychiatry, Spitzer conceived of the new DSM as a weapon that could repel psychiatry's cultural challengers. The new manual would thus have a potential of historical proportions. Nevertheless, although Spitzer labored mightily to develop \"mental illness\" as a subset of \"medical illness,\" he was ultimately forced to bow both to the opinions of his psychiatric colleagues, who had philosophical and practical objections to his definition, and to the demands of the psychologists that mental illnesses be labeled \"mental disorders.\" The upshot was that mental disorders did not get to be defined as medical disorders.\n\nThe attempts of Robert Spitzer--a psychiatrist of considerable accomplishment in many areas of the profession--to establish a definition of a mental disorder illustrate the complexities of arriving at one that is intellectually satisfying, clinically useful, and practically acceptable. Nevertheless, he did include a definition of mental disorder in DSM-III under the category of \"Basic Concepts\" \\[\\[66\\], pp. 5-6\\]. DSM-IV \\[\\[69\\], pp. xxi-xxii\\], with some changes, essentially preserved Spitzer's definition, which also forms the basis of the definition planned for DSM-5 \\[70\\] (see also Stein et al. \\[43\\]). The individuals addressing this latest revision include the habitual warning: \"No definition perfectly specifies precise boundaries for the concept of either 'medical disorder' or 'mental\/psychiatric disorder'\" \\[70\\].\n\n## Allen Frances responds: Mental Disorder Defies Definition\n\nHumpty Dumpty: \"When I use a word, it means just what I choose it to mean\"\n\nWhen it comes to defining the term \"mental disorder\" or figuring out which conditions qualify, we enter Humpty's world of shifting, ambiguous, and idiosyncratic word usages. This is a fundamental weakness of the whole field of mental health.\n\nMany crucial problems would be much less problematic if only it were possible to frame an operational definition of mental disorder that really worked. Nosologists could use it to guide decisions on which aspects of human distress and malfunction should be considered psychiatric -- and which should not. Clinicians could use it when deciding whether to diagnose and treat a patient on the border with normality. A meaningful definition would clear up the great confusion in the legal system, where matters of great consequence often rest on whether a mental disorder is present or absent.\n\nAlas, I have read dozens of definitions of mental disorder (and helped to write one) and I can't say that any have the slightest value whatever. Historically, conditions have become mental disorders by accretion and practical necessity, not because they met some independent set of operationalized definitional criteria. Indeed, the concept of mental disorder is so amorphous, protean, and heterogeneous that it inherently defies definition.
This is a hole at the center of psychiatric classification.\n\nAnd the specific mental disorders certainly constitute a hodgepodge. Some describe short-term states, others lifelong personality. Some reflect inner misery, others bad behavior. Some represent problems rarely or never seen in normals, others are just slight accentuations of the everyday. Some reflect too little control, others too much. Some are quite intrinsic to the individual, others are defined against varying and changing cultural mores and stressors. Some begin in infancy, others in old age. Some affect primarily thought, others emotions, yet others behaviors, others interpersonal relations, and there are complex combinations of all of these. Some seem more biological, others more psychological or social.\n\nIf there is a common theme it is distress and disability, but these are very imprecise and nonspecific markers on which to hang a definition. Ironically, the one definition of mental disorder that does have great and abiding practical meaning is never given formal status because it is tautological and potentially highly self-serving. It would go something like \"Mental disorder is what clinicians treat and researchers research and educators teach and insurance companies pay for.\" In effect, this is historically how the individual mental disorders made their way into the system.\n\nThe definition of mental disorder has been elastic and follows practice rather than guides it. The greater the number of mental health clinicians, the greater the number of life conditions that work their way into becoming disorders. There were only six disorders listed in the initial census of mental patients in the mid-nineteenth century; now there are close to three hundred. Society also has a seemingly insatiable capacity (even hunger) to accept and endorse newly defined mental disorders that help to define and explain away its emerging concerns.\n\nAs a result, psychiatry is subject to recurring diagnostic fads. Were DSM-5 to have its way, we would have a wholesale medicalization of everyday incapacity (mild memory loss with aging); distress (grief, mixed anxiety depression); defects in self-control (binge eating); eccentricity (psychotic risk); irresponsibility (hypersexuality); and even criminality (rape, statutory rape). Remarkably, none of these newly proposed diagnoses even remotely passes the standard loose definition of \"what clinicians treat\". None of these \"mental disorders\" has an established treatment with proven efficacy. Each is so early in development as to be no more than \"what researchers research\" - a concoction of highly specialized research interests.\n\nWe must accept that our diagnostic classification is the result of historical accretion and accident without any real underlying system or scientific necessity. The rules for entry have varied over time and have rarely been very rigorous. Our mental disorders are no more than fallible social constructs.\n\nDespite all these limitations, the definitions of mental disorders contained in the DSMs are necessary and do achieve great practical utility. The DSM provides a common language for clinicians, a tool for researchers, and a bridge across the clinical\/research interface. It offers a textbook of information for educators and students. It contains the coding system for statistical, insurance, and administrative purposes. DSM diagnoses also often play an important role in both civil and criminal legal proceedings.
The DSM system is imperfect, but indispensable.\n\nIt is undoubtedly a failing on my part, but I find myself unable to take much interest in efforts to define mental disorder. My too practical temperament prefers to spend my too limited time on earth attending to concrete and soluble problems and studiously avoids the abstract and the insoluble. Defining mental disorder in a useful way clearly lies above my intellectual pay grade.\n\nThis is not to say that the question is uninteresting or unimportant. Would that there were a workable definition of mental disorder. We could then comfortably decide which of the proposed mental disorders need be included in the DSM, and which aspects of human suffering and deviance are best left out. We could also come to a ready judgment about each individual potential 'patient' -- who best qualifies for diagnosis and treatment, and who is best left to his own devices.\n\nAlas, however, the sheep and the goats refuse to declare themselves in any convenient and discernible way. The definitions of mental disorder offered here make perfect sense in the abstract, but provide no guidance on how to make concrete decisions. They do not tell us, for example, whether mixed anxiety depression or binge eating or the early forgetting of advanced years are disorders or facts of life. They do not guide us in diagnosing the many people who populate the fuzzy boundary between mental disorder and normality.\n\nSeeing no practical consequence, I have no opinion on the fine points of definition -- since these seem to be of only academic interest. Mental disorder is (like 'disease' and 'obscenity' or 'love') something you hope you can spot when you see it, but by implicit rules that inherently are poorly defined and ever shifting.\n\n### Reply to Dr Wakefield\n\nIf anyone in the world could usefully define mental disorder, it would be Jerry Wakefield. He has tried long, hard, skillfully, even brilliantly, and has come up with a definition that works extremely well on paper. His \"harmful dysfunction\" and evolutionary perspective provide the best possible abstract definition of mental disorder. The problem is that Dr Wakefield's definition is not operational in a way that provides guidance on the two questions that most count: 1) Is this proposed new diagnosis a mental disorder that should be included in the official nomenclature? 2) Does this person have sufficient psychiatric problems to warrant a diagnosis of mental disorder? Unfortunately, neither question lends itself to his definitional solution. As Dr Wakefield himself points out, \"both the notions of dysfunction and harm are fuzzy concepts\" that are only useful to \"determine a range of clear cases on either side of the disorder\/non-disorder boundary.\" We are left to settle the crucial and frequently tough fuzzy-boundary questions in what remains a necessarily unsatisfactory, ad hoc, and often idiosyncratic manner.\n\nDr Wakefield seems to accept the necessity of my less abstract \"cost\/benefit analysis\" approach for reducing the reckless DSM-5 diagnostic exuberance at the fuzzy boundary with normality. But he goes on rightly to criticize its susceptibility to misuse in the service of social control or economic manipulation. These issues are taken up in more detail in question \\#4 on the limitations of pragmatics in framing the diagnostic system.\n\nDr Wakefield and I agree completely on the most important question facing psychiatry today -- the risk of false positives and excessive treatment.
Diagnostic inflation has been a huge problem in the way DSM-IV has been used. It will be greatly amplified by the many new high-prevalence diagnoses being suggested for DSM-5. Unfortunately, no available definition of mental disorder has been able to withstand the pressure of lowered diagnostic standards and drug company advertising.\n\n### Response to Dr Kinghorn\n\nI agree that the DSMs gain little from attempting to provide an abstract definition of mental disorder and that having a useless definition may be worse than offering no definition at all. Also, let's remember that we are not alone in being definitionally challenged -- there is really no good operational definition in medicine for \"disease\" or \"illness.\"\n\n### Response to Dr Pierre\n\nI agree completely with Dr Pierre's eloquent critique of the definitions of mental disorder and fully endorse his concern that such consequential decisions rest on such a fragile tissue. It will be important to get the widest input on all the crucial questions raised by Dr Pierre. We can't rely on any definition of mental disorder, nor can we trust the wisdom of experts on that disorder or of any single professional association. The decisions about what constitutes a mental disorder require the same care for safety that the FDA exercises before allowing the introduction of a new drug.\n\n### Reply to Dr Chardavoyne\n\nI understand Dr Chardavoyne's regret that DSM may seem to lose the person in its effort to define the disorder. I just don't see a solution within the diagnostic system -- which necessarily has to focus on symptom similarities, rather than the particularities and idiosyncrasies that make each of us who we are. Holding fast to the person is the crucial task of every clinician, but it is not something the DSM can help with.\n\n### Reply to Dr Decker\n\nHannah Decker does us a great service by recalling and recording attempts to define mental disorder. I am afraid, however, that this is a situation in which knowing a problematic past is insufficient to avoid repeating it. There will always be a strong desire to define 'mental disorder' because it is so important in setting the boundary with normality. But all efforts at universal definition will fail because the concept is so inherently fuzzy and situation-bound. The only consolation is that 'medical illness' is equally vague and hard to define.\n\n# Conclusion\n\nThe two questions covered in this article form a natural pair. How you define mental disorder (Question 2) will certainly depend on what you think mental disorders are and how we are able to know about them (Question 1). I will briefly summarize the discussion developed in these questions and save a larger review for the general conclusion.\n\nAs indicated above in the General Introduction, the startling failure of research to validate the DSM categories of DSM-III and DSM-IV has led to a conceptual crisis in our nosology: what exactly is the status of DSM diagnoses? Do they identify real diseases, or are they merely convenient (and at times arbitrary) ways of grouping psychiatric symptoms? These are the issues dealt with in Question 1, framed in the umpire metaphor introduced by Allen Frances in his \"DSM in Philosophyland\" piece published in Bulletin 1 and commented on at length in Bulletin 2. The commentaries in this article roughly follow the positions of the five imagined umpires, although, as explained above, most of us will not restrict ourselves to a purist version of one of the umpires.
Indeed, Frances himself, while stating a clear point of view, acknowledges that each umpire position captures a bit of the truth.\n\nThe first two commentaries address Question 1 in a broad way, commenting on the process of deciding about the merits of the various positions. Peter Zachar and Stephen Lobello are scholars who take their baseball seriously and weave the metaphor into a complex analysis in which their pragmatic (practical kinds) perspective subsumes all the umpires, including the pragmatic fourth umpire. In her commentary Claire Pouncey doesn't quite assume a position but provides a clear presentation of the differences among the first three umpires. She begins with the clarification that the umpire question involves both ontology and epistemology: what is there, and can I know it? Umpire 1 is a Strong Realist - both an ontological realist and an epistemological realist. Umpire 2 is a Strong Realist\/Weak Constructivist - an ontological realist and an epistemological less-than-realist. Umpire 3 is a Strong Constructivist - an ontological anti-realist and an epistemological anti-realist.\n\nIn his commentary Nassir Ghaemi offers a spirited defense of a realist, first-umpire position, challenging those who don't accept the reality of mental illnesses as to what they're doing treating patients. I am calling him a first umpire, but he rejects the umpire metaphor, offering in its stead Kenneth Kendler's notion of \"epistemic iteration.\"\n\nThe next three commentators assume some variation on the \"nominalist\" second umpire position. Michael Cerullo invokes the naturalist\/normativist debate, a distinction that echoes Jerome Wakefield's harmful dysfunction notion of psychiatric disorders. Cerullo argues that all diseases, including psychiatric disorders, have naturalist and normativist aspects, although some lean more toward the naturalist dimension and others toward the normativist dimension.\n\nIn his contribution Jerome Wakefield follows with a thorough presentation of his well-known harmful dysfunction understanding of mental disorders. For purposes of the umpire discussion he locates his HD umpire in a humble realist 1.5 position - nominalism with a tilt toward realism.\n\nFinally, Joseph Pierre invokes the fate of the planet Pluto to point to the reality of things studied by science and reminds us of the biological reality of mental disorders; but, acknowledging the uncertainties of our knowledge, he assumes the second umpire position. Like the others in the second umpire group, he notes that some psychiatric disorders make more claim on a first umpire stance than others.\n\nGary Greenberg boldly assumes the third umpire position, even invoking Samuel Johnson's kick in a face-off with Ghaemi's use of the kick to defend the first umpire. Greenberg argues that the human interest is so powerful in determining what counts as disease and what does not that honesty drives us to the constructivist stance.\n\nIn his commentary Harold Pincus elaborates the very diverse ways in which concepts of mental disorders are used by an assortment of user groups, leading him to emphasize the fourth umpire, pragmatist, position toward psychiatric conditions.
He argues cogently that validity as we now know it will not be a meaningful concept in the future.\n\nFinally, with his usual energy and without any indication of retreat, Thomas Szasz comfortably assumes the position of fifth umpire and reviews the stance toward psychiatric disorders he familiarized us with fifty years ago.\n\nAnd still finally, in a reflection that probably belongs best with the fifth umpire, Elliott Martin argues that the insurance industry has so co-opted the nosology that we might consider it the only umpire in the game.\n\nIn his response to the commentaries on Question 1, Allen Frances begins by noting that \"\\[n\\]one of the five umpires is completely right all of the time. And none is totally wrong all of the time. Each has a season and appropriate time at the plate.\" He then proceeds to a historical perspective, noting that in the heyday of biological psychiatry forty years ago, Umpires 1, 3, and 5 were ascendant. On the one hand, the biological psychiatrists were confident that the realist position of Umpire 1 would prevail. And on the other hand, they were challenged by a broad range of skeptics occupying the positions of Umpires 3 and 5. In Frances' account that has all changed. Chastened by the failures of biological psychiatry to produce, but convinced of the reality of psychiatric illness, we as a majority have gravitated toward the position of Umpire 2 - there is certainly psychiatric illness, but the categories of DSM-III and IV may not have carved those infamous joints correctly. Frances offers a guarded defense of the categories, nonetheless, arguing that, until further science has settled the issue of what are valid categories, the current ones serve a useful function of organizing the clinical phenomena that we confront in our work. The pragmatic Umpire 4 thus has a say in our current efforts to diagnose. \"Mental disorders are no more and no less than constructs. And Umpire 4 is quick to point out that they are very useful constructs.\" Frances ends on an optimistic note that with more scientific clarity in the future, we can anticipate that Umpire 1 will gradually assume prominence over Umpire 2.\n\nWith the second question, definition, Frances can certainly claim more experience than most of us because of the time he put in grappling with this question in DSM-IV. His dissatisfaction with his own work product and his skepticism about ever getting it right are certainly revealing - and consistent with his response to the first question. He points to the heterogeneity of what gets called a mental disorder, as well as to the unavoidable fact that many have been admitted to the club through the distinctly unscientific process of historical accretion. All that said, he argues that the DSM categories serve a very important role in facilitating communication among mental health professionals and are thus necessary, however imperfect and imprecise. He concludes on a note of flagging interest in settling this question.\n\nThe commentators have stretched themselves to the imaginable extremes in tackling this question. The majority, along with Frances, view the DSM as a very motley assortment of behaviors and states of mind, and they see the DSM definition as trying to accommodate what we in fact treat, as opposed to leading us to decide what we *should* treat.
The exception is Jerome Wakefield, who has argued persuasively for some time - and rehearses the main features of his argument here - that we can provide a clear definition of mental disorder with the notion of harmful dysfunction. This is a definition that covers both the scientific and normative aspects of mental disorder, and that purports to guide us rather than follow us in our practice.\n\nThe commentaries at the other end of the spectrum start with Warren Kinghorn, who argues that since the DSM definition accomplishes nothing - not even what it minimally claims to accomplish, namely organizing the terrain and establishing common goals of practice - we should acknowledge that we don't use it, don't need it, and should just retire it.\n\nJoseph Pierre offers another argument for the impossibility of developing an adequate definition - poor science, value intrusion, ever-broadening parameters of practice - and also reminds us that general medicine does quite well without an official definition. In spite of his cogent argument for the failed project of definition, he stops short of Kinghorn's recommendation and, without mustering Frances' defense, appears to favor keeping the definition, however inadequate. That is presumably because, unlike Kinghorn, he feels that an official definition can have significant consequences for the field of psychiatry.\n\nJohn Chardavoyne makes a plea for escaping the inadequate science by reorienting the definition away from the disease and back to the person. In doing this he retains a definition but stands somewhat outside the debate that has engaged Frances and the other commentators.\n\nIn a final commentary Hannah Decker takes a look back and reviews Robert Spitzer's struggles to develop a definition of mental illness for DSM-III - a commentary that prompts Allen Frances to remark that this is a situation in which even a thorough examination of the past may not improve our performance in the present.\n\n# Competing interests\n\nMF is an external consultant to the NIMH Research Domain Criteria (RDoC) Project. NG has research grants from Pfizer and Sunovion, and is a research consultant for Sunovion. MS is a consultant for AstraZeneca, Merck, Novartis, and Sunovion. Other authors report no competing interests.\n\n# Authors' contributions\n\nJP (Phillips) wrote the General Introduction and Conclusion, as well as the introductions to the individual questions. AF wrote the Responses to Commentaries. MC, JC, HD, MF, NG, GG, AH, WK, SL, EM, AM, JP (Paris), RP, HP, DP, CP, MS, TS, JW, SW, OW, PZ wrote the commentaries.
All authors read and approved the final manuscript.\n\ndate: 2009-04-10\ntitle: Findings of scientific misconduct.\n\n# Findings of Scientific Misconduct\n\nNotice Number: NOT-OD-09-082\n\n**Key Dates**\nRelease Date: April 9, 2009\n\n**Issued by**\nDepartment of Health and Human Services\n\nNotice is hereby given that the Office of Research Integrity (ORI) and the Assistant Secretary for Health have taken final action in the following case:\n\nRobert B. Fogel, M.D., Harvard Medical School and Brigham and Women's Hospital: Based on information that the Respondent volunteered to his former mentor on November 7, 2006, and detailed in a written admission on September 19, 2007, and ORI's review of Joint Inquiry and Investigation reports by Harvard Medical School (HMS) and the Brigham and Women's Hospital (BWH), the U.S. Public Health Service (PHS) found that Dr. Robert B. Fogel, former Assistant Professor of Medicine and Associate Physician at HMS, and former Co-Director of the Fellowship in Sleep Medicine at BWH, engaged in scientific misconduct in research supported by National Heart, Lung, and Blood Institute (NHLBI), National Institutes of Health (NIH), awards P50 HL60292, R01 HL48531, K23 HL04400, and F32 HL10246, and National Center for Research Resources (NCRR), NIH, award M01 RR02635.\n\nPHS found that Respondent engaged in scientific misconduct by falsifying and fabricating baseline data from a study of sleep apnea in severely obese patients published in the following paper: Fogel, R.B., Malhotra, A., Dalagiorgou, G., Robinson, M.K., Jakab, M., Kikinis, R., Pittman, S.D., and White, D.P. \"Anatomic and physiologic predictors of apnea severity in morbidly obese subjects.\" Sleep 26:150-155, 2003 (hereafter referred to as the \"Sleep paper\"); and in a preliminary abstract reporting on this work.\n\nSpecifically, PHS found that for the data reported in the Sleep paper, the Respondent:\n\n- Changed\/falsified roughly half of the physiologic data;\n- Fabricated roughly 20% of the anatomic data that were supposedly obtained from Computed Tomography (CT) images;\n- Changed\/falsified 50 to 80 percent of the other anatomic data;\n- Changed\/falsified roughly 40 to 50 percent of the sleep data so that those data would better conform to his hypothesis.\n\nRespondent also published some of the falsified and fabricated data in an abstract in Sleep 24, Abstract Supplement A7, 2001.\n\nDr.
Dr. Fogel has entered into a Voluntary Settlement Agreement in which he has voluntarily agreed, for a period of three (3) years, beginning on March 16, 2009:

(1) To exclude himself from serving in any advisory capacity to PHS, including but not limited to service on any PHS advisory committee, board, and/or peer review committee, or as a consultant;

(2) That any institution that submits an application for PHS support for a research project on which the Respondent's participation is proposed, or that uses the Respondent in any capacity on PHS-supported research, or that submits a report of PHS-funded research in which the Respondent is involved, must concurrently submit a plan for supervision of the Respondent's duties to the funding agency for approval; the supervisory plan must be designed to ensure the scientific integrity of the Respondent's research contribution; a copy of the supervisory plan must also be submitted to ORI by the institution; the Respondent agrees that he will not participate in any PHS-supported research until such a supervisory plan is submitted to ORI; and

(3) To ensure that any institution employing him submits, in conjunction with each application for PHS funds or report, manuscript, or abstract of PHS-funded research in which the Respondent is involved, a certification that the data provided by the Respondent are based on actual experiments or are otherwise legitimately derived and that the data, procedures, and methodology are accurately reported in the application or report. The Respondent must ensure that the institution sends the certification to ORI.

FOR FURTHER INFORMATION CONTACT:
Director
Division of Investigative Oversight
Office of Research Integrity
1101 Wootton Parkway, Suite 750
Rockville, MD 20852
(240) 453-8800.

author: Govindarajan Rajagopalan; Yogish C. Kudva; Chella S. David
Corresponding author: Chella S. David
date: 2012-07
references:
title: Is HOT a Cool Treatment for Type 1 Diabetes?

Type 1 diabetes (T1D) is an organ-specific autoimmune disease characterized by immune-mediated destruction of the insulin-producing β-cells of the islets of Langerhans that results in life-long insulin dependence (1). Given the immunological nature of the disease, numerous antigen-specific as well as nonantigen-specific tolerance induction or immune deviation strategies have been developed as treatments for T1D. Although successful in experimental models, results of studies to translate these strategies to humans have been discouraging (2–5). This has prompted researchers to explore safer, broader, and more effective immunotherapeutic approaches to prevent, treat, and/or revert T1D.
In this issue of *Diabetes*, Faleo et al. (6) show compelling data regarding the use of hyperbaric oxygen therapy (HOT) in autoimmune diabetes using nonobese diabetic (NOD) mice. This model spontaneously develops T1D, a feature that closely mimics human disease. Faleo et al. (6) show that HOT significantly protects from T1D when initiated early in the disease course, but not after its onset, suggesting that this approach could be useful in high-risk individuals.

According to the Undersea and Hyperbaric Medical Society, HOT is a process wherein patients breathe 100% oxygen (atmospheric air contains ~21% oxygen) while inside a treatment chamber at a pressure higher than at sea level (usually 2.5 times greater pressure). In 1662, Henshaw, a British clergyman, first used pressurized atmospheric air to treat certain ailments. Since that time, HOT has been used to treat various conditions such as decompression sickness, arterial gas embolism, and carbon monoxide poisoning (with or without cyanide poisoning), and to facilitate wound healing (7). It should be noted that inhalation of 100% oxygen at 1 normal atmosphere of pressure or exposing isolated parts of the body to 100% oxygen does not constitute HOT.

Faleo et al. (6) exposed prediabetic NOD mice to pressurized 100% oxygen (HOT-100%), 21% oxygen (HOT-21%), or oxygen-depleted air supplemented with pure oxygen (HOT-12%) for 60 min every day for varying periods of time. T1D risk was assessed using different models. In the cyclophosphamide-accelerated model, more than 75% of control, HOT-12%, and HOT-21% mice developed hyperglycemia, compared with less than 50% of HOT-100% mice. The severity of insulitis was also reduced in the HOT-100% group. Even though HOT was not administered on the day of cyclophosphamide therapy, the possibility that HOT could have influenced the metabolism of cyclophosphamide (8), and thereby indirectly influenced the incidence of accelerated hyperglycemia, was not addressed by the authors.

In the spontaneous model, when HOT was initiated at 4 weeks of age, 85% of untreated control NOD mice developed hyperglycemia by 35 weeks compared with 65% of mice in the HOT-100% group. Although this finding was statistically significant, a considerable percentage—65%—of this inbred, homogeneous mouse strain still developed hyperglycemia with HOT-100%, an observation that raises questions about the likely effectiveness of HOT among more heterogeneous human T1D populations. Data from a key experiment—the effect of HOT-100% on the incidence of hyperglycemia when administered close to onset of diabetes in NOD mice (13 weeks of age)—were not reported. In this group, HOT-100% attenuated insulitis and significantly delayed the onset of hyperglycemia when administered along with the glucagon-like peptide (GLP)-1 analog exenatide (EXN) delivered by mini-osmotic pumps. Inclusion of a group receiving HOT-100% and vehicle or a scrambled peptide, instead of EXN, would have strengthened the inference that HOT-100% is more effective when combined with EXN. Overall, the findings in this report suggest that only prolonged exposure to HOT-100% initiated at a very young age reduced T1D risk in this mouse model.

The mechanisms by which HOT provided its beneficial effects in T1D, or even in other models of inflammation, are not fully understood. HOT can suppress inflammation by modulating the expression of integrins (9) and probably other adhesion molecules.
This would interfere with homing of inflammatory cells to islets, thereby reducing insulitis. In support of this hypothesis, HOT-100%–treated mice had reduced insulitis. Furthermore, expression of CD62L, a lymphocyte homing marker, was altered in the spontaneous, but not cyclophosphamide-accelerated, T1D model.

HOT could modulate inflammation through hypoxia-inducible factor (HIF)-1 (10). Hypoxia increases HIF expression, whereas hyperoxia represses HIF. HIF-1 can induce interleukin (IL)-17 and inhibit Foxp3^+^ regulatory T-cell (Treg) development (11). Therefore, by reducing the expression of HIF-1 (12,13), HOT could inhibit the Th17 pathway and promote Treg development, thereby protecting from T1D. Although HIF was not investigated in this report, increased Treg in HOT-100% mice supports this theory. However, it remains to be elucidated whether HOT-100% mediates its effects directly through increasing Treg numbers and functions and/or through suppressing IL-17. HIF is also induced rapidly in transplanted islets and promotes islet apoptosis (14,15). Therefore, repression of HIF by HOT may promote β-cell survival and play a beneficial role in diabetes. Reduced apoptosis and increased proliferation of β-cells in HOT-100% mice support this theory. Paradoxically, HOT can also increase HIF expression (16), and HIF can have anti-inflammatory activities in T1D (17). Therefore, the interplay between HOT and HIF in the T1D setting remains to be determined and is a topic ripe for further investigation.

HOT is also known to increase levels of reactive oxygen species (ROS) as well as reactive nitrogen species (RNS) (18). Both ROS and RNS participate in innumerable physiological and pathological processes (19). Of relevance, ROS are known to damage β-cells and participate in inflammatory processes (20), which could be counterproductive in T1D patients receiving HOT. Therefore, the roles of ROS and RNS in HOT need to be thoroughly investigated. Furthermore, this study showed that HOT-100% suppressed the responses of T cells to a potent mitogenic stimulation (anti-CD3) and elevated the levels of IL-10. Induction of such generalized immune suppression by HOT could be a potential drawback in humans, particularly in children, because it might predispose to infections and diminish the ability to clear infections and to mount adequate protective immunity after immunizations.

In conclusion, this descriptive work by Faleo et al. (6) addresses the potential benefits of HOT in T1D. Molecular oxygen is a fundamental component of various biochemical processes, and its use as a therapeutic agent could have diverse effects (21), both positive and negative. A detailed investigation is warranted to understand its mechanisms of action as well as possible side effects (Fig. 1). In T1D, HOT is ineffective once the autoimmunity has progressed to a prediabetic stage, making its utility in patients with existing T1D questionable. When started early in the disease course, HOT reduced the incidence of T1D only by ~20% in NOD mice. However, in humans, this could be a significant number.
Therefore, given the established safety and limited side effects of HOT in humans, its translation to a pilot human clinical trial, either alone or with other immunomodulatory agents, at least in high-risk individuals (those with a first-degree relative with T1D or who express susceptible HLA alleles, etc.), is a possibility.

## ACKNOWLEDGMENTS

No potential conflicts of interest relevant to this article were reported.

# REFERENCES

date: 2004
title: Editorial

Following an overwhelming vote by the US House of Representatives urging the National Institutes of Health (NIH) to develop an Open Access strategy, the NIH has recently invited comment on its plans to enhance access to the research that it funds. Under the proposed scheme, NIH-funded researchers would have to provide electronic copies of the final accepted versions of each of their manuscripts, for archiving along with any supplementary information in PubMed Central. Six months after publication of the research in question - or sooner if the publisher agrees - the provisional copies would be made publicly available at no charge to readers.

*Journal of Biology* heartily supports the NIH proposal, which brings us one step closer to the immediate availability of all peer-reviewed research free of charge. Indeed, as Open Access pioneers, BioMed Central and *Journal of Biology* already provide PubMed Central with final full-text and PDF versions of all published research articles immediately, and we encourage all publishers to follow suit. In an ideal future, the electronic version of each research article would be the final and definitive form - easily archived, centrally searchable and available at the click of a mouse to all who would read it, be they scientists or members of the public. Of course, print would still play an important role, but printed articles will no longer constitute the historic record of the work. And moving away from the printed article in favor of its online incarnation makes sense for other reasons too: electronically, researchers can display all relevant data instead of an edited subset, and moving images and other web-only formats can be easily integrated. It will no longer be possible to represent a complete research article accurately on the printed page.

In keeping with this shift in emphasis, *Journal of Biology* is now presenting its printed articles in a new light. Since its inception, the journal has eschewed the rigidity of producing regular issues, instead producing collections of articles grouped by their focus on a key piece of research, not necessarily by publication date. Each printed collection serves to draw readers' attention to the definitive online content, which is freely available on the journal's website or in central archives such as PubMed Central.
In recognition of the non-traditional way that *Journal of Biology* has always conceptually clustered its articles, the cover of the bound printed articles now sports a bold new design - do please look out for it in print and on the journal's website.

Since its launch in June 2002, nearly a million copies of *Journal of Biology* have been distributed in print to life scientists worldwide, entirely free of charge. But the journal and its publisher, BioMed Central, are committed to building a sustainable Open Access business model. This should primarily ensure that research articles are immediately and permanently available online without charge, as well as being deposited into permanent repositories - both of which provide far more efficient ways of disseminating, retrieving, and searching for scientific information. For this reason, the journal has now changed its approach to print distribution. Some readers will continue receiving complimentary copies as before, either with *The Scientist* or individually, but for others who wish to receive articles in print, a modest annual subscription charge of $50 (£30/€40) will be levied. These funds will help to maintain the quality of the reprints and to cover mailing costs. The journal also offers special print rates to librarians and institutions, so please encourage your library or head of institute to sign up. Of course, all of the content of *Journal of Biology* will continue to be available free of charge online.

*Journal of Biology* urges other funding bodies and policy makers to follow the lead of the NIH, the UK Parliament, the Howard Hughes Medical Institute, The Wellcome Trust and the signatories of the Berlin Declaration, and to encourage the researchers they fund to publish their results in a way that promotes public availability of scientific information. At the same time, it should be recognized that we still have a long way to go before every research article is free for anyone to read online on the day it is published. You can play a part in achieving this goal by submitting your next important article to *Journal of Biology*.

abstract: Decreased collateral vessel formation in diabetic peripheral limbs is characterized by abnormalities of the angiogenic response to ischemia. Hyperglycemia is known to activate protein kinase C (PKC), affecting the expression and activity of growth factors such as vascular endothelial growth factor (VEGF) and platelet-derived growth factor (PDGF). The current study investigates the role of PKCδ in diabetes-induced poor collateral vessel formation and inhibition of angiogenic factor expression and action. Ischemic adductor muscles of diabetic *Prkcd*^+/+^ mice exhibited reduced blood reperfusion, vascular density, and number of small vessels compared with nondiabetic *Prkcd*^+/+^ mice.
By contrast, diabetic *Prkcd*^−/−^ mice showed significantly increased blood flow, capillary density, and number of capillaries. Although expression of various PKC isoforms was unchanged, activation of PKCδ was increased in diabetic *Prkcd*^+/+^ mice. VEGF and PDGF mRNA and protein expression were decreased in the muscles of diabetic *Prkcd*^+/+^ mice and were normalized in diabetic *Prkcd*^−/−^ mice. Furthermore, phosphorylation of VEGF receptor 2 (VEGFR2) and PDGF receptor-β (PDGFR-β) was blunted in diabetic *Prkcd*^+/+^ mice but elevated in diabetic *Prkcd*^−/−^ mice. The inhibition of VEGFR2 and PDGFR-β activity was associated with increased SHP-1 expression. In conclusion, our data have uncovered the mechanisms by which PKCδ activation induces poor collateral vessel formation, offering potential novel targets to regulate angiogenesis therapeutically in diabetic patients.
author: Farah Lizotte; Martin Paré; Benoit Denhez; Michael Leitges; Andréanne Guay; Pedro Geraldes
Corresponding author: Pedro Geraldes
date: 2013-08
references:
title: PKCδ Impaired Vessel Formation and Angiogenic Factor Expression in Diabetic Ischemic Limbs

The main long-term complications of diabetes are vascular diseases, which are in turn the main causes of morbidity and mortality in diabetic patients (1). Diabetic vascular complications affect several important organs, including the retina, kidney, and arteries (2,3). Peripheral vascular diseases are the major risk factor for nontraumatic lower limb amputation in patients with diabetes (4), characterized by collateral vessel development insufficient to support the loss of blood flow through occluded arteries in the ischemic limbs (5). Multiple abnormalities in the angiogenic response to ischemia have been documented in the diabetic state and depend on complex interactions of multiple growth factors and vascular cells.

Experiments to improve angiogenesis and vascular cell survival by local infusion of vascular endothelial growth factor (VEGF) or angiopoietin, or by increasing their expression, have also been reported in nondiabetic animal models (6,7). Moreover, animal studies have used platelet-derived growth factor (PDGF) to improve collateral vessel formation and vascular healing in the diabetic state (8). Clinical trials using recombinant growth factors have noted transient improvement of myocardial and distal leg circulation (9–11). However, these favorable vascular effects appeared to produce limited clinical benefits (12). Local administration of growth factors, such as VEGF by gene therapy in the setting of diabetes, does not appear to have the beneficial long-term effects seen in the absence of diabetes or to improve quality of life (13,14). One potential problem with normalizing VEGF or PDGF action alone is that a variety of growth factors may be needed to establish and maintain the capillary bed.

Various studies have clearly identified that the expression of growth factors, such as VEGF, PDGF, and stromal-derived factor-1 (SDF-1), is critically important in the formation of collateral vessels in response to ischemia (15–17). Previous studies suggested that hyperglycemia attenuates VEGF production and levels in myocardial tissue and in animal models of wound repair (5,18). Furthermore, decreased VEGF and PDGF expression in the peripheral limbs and nerves of diabetic animals has been reported (19–21).
Although the underlying mechanism of the reduction of VEGF and PDGF expression in diabetes is not clear, it is well known that the major inducers of VEGF and PDGF (i.e., hypoxia and oxidants) can both play a role in diabetes. We and other researchers have reported that variation in PDGF signaling, rather than expression, is linked to morphological abnormalities in the retina and in collateral capillary formation in an ischemic limb model of diabetic animals (22,23). Clearly, poor collateral vessel formation during diabetes-induced ischemia is attributable to the lack of production and/or action of critical growth factors such as VEGF and PDGF. Therefore, further studies of the basic mechanisms of hyperglycemia-induced activation of toxic metabolites, such as activation of protein kinase C (PKC), are needed to identify how these proteins contribute to growth factor deregulation.

PKC, a large family of serine/threonine kinases, is involved in the pathophysiology of vascular complications. When activated, PKC phosphorylates specific serine or threonine residues on target proteins that vary depending on cell type. PKC has multiple isoforms that function in a wide variety of biological systems (24). PKC activation increases endothelial permeability and decreases blood flow and the production of, and response to, angiogenic growth factors, contributing to the loss of capillary pericytes, retinal permeability, ischemia, and neovascularization (25–29).

Previous data have demonstrated that high glucose levels in smooth muscle cells activate PKCα, -β, -δ, and -ε but not the atypical PKCζ (30,31). In general, high levels of glucose-induced PKC activation cause vascular dysfunction by altering the expression of growth factors such as VEGF, PDGF, transforming growth factor-β, and others (32–34). PKCδ has been proposed to participate in smooth muscle cell apoptosis, and deletion of this PKC isoform led to increased arteriosclerosis (35). Moreover, we previously demonstrated that diabetes-induced PKCδ activation generates PDGF unresponsiveness, causing pericyte apoptosis, acellular capillaries, and diabetic retinopathy (23). We therefore hypothesized that PKCδ activation could be involved in the proangiogenic factor inhibition that triggers poor collateral vessel formation in diabetes.

# RESEARCH DESIGN AND METHODS

## Reagents and antibodies.

Primary antibodies for immunoblotting were obtained from commercial sources, including actin (horseradish peroxidase [HRP]; I-19), SHP-1 (C19), VEGF (147), PKCα (C-20), PKCβ (C-18), PKCε (C-15), and nitric oxide synthase (NOS) 3 (C-20) antibodies from Santa Cruz Biotechnology Inc.; phospho (p)-tyrosine, p-PKCδ (Thr 505), PKCδ, p-VEGF receptor 2 (VEGFR2; Y1175), VEGFR2, p-PDGF receptor-β (PDGFR-β; Tyr 1009), and PDGFR-β antibodies from Cell Signaling; anti-α smooth muscle actin from Abcam; polyclonal antibody against protein-tyrosine phosphatase 1B (PTP1B) and CD31 from BD Bioscience; SHP-2 and SHP-1 antibodies from Millipore; and rabbit and mouse peroxidase-conjugated secondary antibody from GE Healthcare Bio-Sciences.
All other reagents used, including EDTA, leupeptin, phenylmethylsulfonyl fluoride, aprotinin, and Na~3~VO~4~, were purchased from Sigma-Aldrich (St. Louis, MO) unless otherwise stated.

## Animal and experimental design.

C57BL/6J mice (6 weeks old) were purchased from The Jackson Laboratory (Bar Harbor, ME) and bred in our animal facility. *Prkcd*^−/−^ mice, described previously and provided by Dr. Michael Leitges (35), were generated by the insertion of a LacZ/neo cassette into the first transcribed exon of the PKCδ gene. This insertion abolished the transcription of PKCδ, leading to a null allele. *Prkcd*^−/−^ mice with a mixed background of 129SV and C57BL/6J strains were crossbred for 10 generations (F12) with the wild-type C57BL/6J background from The Jackson Laboratory. Animals were rendered diabetic for a 2-month period by intraperitoneal streptozotocin injection (50 mg/kg in 0.05 mol/L citrate buffer, pH 4.5; Sigma) on 5 consecutive days after an overnight fast; control mice were injected with citrate buffer. Blood glucose was measured by Glucometer (Contour, Bayer Inc.). Throughout the study period, animals were provided with free access to water and standard rodent chow (Harlan Teklad, Madison, WI). All experiments were conducted in accordance with the Canadian Council on Animal Care and University of Sherbrooke guidelines.

## Hind limb ischemia model.

We assessed blood flow in nondiabetic and 2-month diabetic *Prkcd*^+/+^ and *Prkcd*^−/−^ mice. Animals were anesthetized, and the entire lower extremity of each mouse was shaved. A small incision was made along the thigh all the way to the inguinal ligament, extending superiorly toward the mouse abdomen. The femoral artery was isolated from the femoral nerve and vein and ligated distal to the origin of the arteria profunda femoris. The incision was closed by interrupted 5-0 sutures (Syneture).

## Laser Doppler perfusion imaging and physical examination.

Hind limb blood flow was measured using PIM3 laser Doppler perfusion imaging (Perimed Inc.). Consecutive perfusion measurements were obtained by scanning the region of interest (hind limb and foot) of anesthetized animals. Measurements were performed before and after artery ligation and on postoperative days 7, 14, 21, and 28. To account for variables that affect blood flow temporally, the results at any given time were expressed as the ratio of simultaneously obtained perfusion measurements of the right (ligated) to the left (nonligated) limb. Tissue necrosis was scored to assess mice that had to be killed during the course of the experiment because of necrosis/loss of toes.

## Histopathology and TUNEL assay.

Right and left adductor muscles from *Prkcd*^+/+^ and *Prkcd*^−/−^ mice were harvested for pathological examination. Sections were fixed in 4% paraformaldehyde (Sigma-Aldrich) for 18 h and then transferred to 90% ethanol for light microscopy and immunohistochemistry. Paraformaldehyde-fixed tissue was embedded in paraffin, and 6-µm sections were stained with hematoxylin and eosin (Sigma). Apoptotic cells were detected using the TACS 2 TdT-Fluor in situ apoptosis detection kit (Trevigen, Gaithersburg, MD) according to the manufacturer's instructions.

## Immunofluorescence.

Adductor muscles were blocked with 10% goat serum for 1 h and exposed in sequence to primary antibodies (CD31 and α-smooth muscle actin, 1:100) overnight, followed by incubation with secondary antibodies Alexa-647–conjugated anti-rabbit IgG and Alexa-594–conjugated anti-mouse IgG (1:500; Jackson ImmunoResearch Laboratories).
Confocal images were captured on a Zeiss LSM 410 microscope; images of one experiment were taken at the same time under identical settings and handled similarly in Adobe Photoshop across all images.

## Immunoblot analysis.

Adductor muscles were lysed in Laemmli buffer (50 mmol/L Tris [pH 6.8], 2% SDS, and 10% glycerol) containing protease inhibitors (1 mmol/L phenylmethylsulfonyl fluoride, 2 μg/mL aprotinin, 10 μg/mL leupeptin, 1 mmol/L Na~3~VO~4~; Sigma). Protein amount was measured with a BCA kit (Bio-Rad). The lysates (10–20 µg protein) were separated by SDS-PAGE, transferred to polyvinylidene fluoride membrane, and blocked with 5% skim milk. Antigens were detected using anti-rabbit HRP-conjugated antibody and the ECL system (Pierce Thermo Fisher, Piscataway, NJ). Protein content quantification was performed using computer-assisted densitometry with ImageJ software (National Institutes of Health).

## Real-time PCR analysis.

Real-time PCR was performed to evaluate mRNA expression of PKCα, PKCβ, PKCδ, PKCε, VEGF, PDGF, KDR/Flk-1, PDGFR-β, endothelial NOS (eNOS), SDF-1, FGF2, SHP-1, SHP-2, and PTP1B in nonischemic and ischemic limbs. Total RNA was extracted from adductor muscles with TRI-REAGENT and the RNeasy mini kit (Qiagen, Valencia, CA), as described by the manufacturers. The RNA was treated with DNase I (Invitrogen) to remove any genomic DNA contamination. Approximately 1 μg RNA was used to generate cDNA using SuperScript III reverse transcriptase and random hexamers (Invitrogen) at 50°C for 60 min. PCR primers and probes are listed in [Supplementary Table 1](http://diabetes.diabetesjournals.org/lookup/suppl/doi:10.2337/db12-1432/-/DC1). Glyceraldehyde-3-phosphate dehydrogenase and 18S ribosomal RNA expression were used for normalization. PCR products were gel purified, subcloned using a QIAquick PCR Purification kit (Qiagen), and sequenced in both directions to confirm identity.

## Nuclear extract and nonradioactive transcription factor assay.

Adductor muscles were lysed and nuclear-specific proteins isolated using the NucBuster Protein Extraction Kit (Novagen, Madison, WI) according to the manufacturer's instructions. Detection of hypoxia-inducible factor-1α (HIF-1α) in the nucleus was quantified using a nonradioactive transcription factor assay kit (Cayman, Ann Arbor, MI). Briefly, nuclear protein (20 µg) was incubated for 24 h in a 96-well plate containing an immobilized double-stranded DNA consensus sequence of the HIF-1α response element. HIF-1α contained in the nuclear extract bound specifically to the HIF-1α response element. Wells were washed five times and incubated for 1 h with a specific primary antibody directed against HIF-1α. Wells were washed five times and exposed to an HRP-conjugated secondary antibody for 1 h. Wells were then washed five times, and developing agent was added to provide a sensitive colorimetric readout at 450 nm (Infinite M200; Tecan Group Ltd., Männedorf, Switzerland) to quantify nuclear HIF-1α levels.

## Statistical analyses.

The data are shown as mean ± SD for each group. Statistical analysis was performed by unpaired *t* test or by one-way ANOVA, followed by the Tukey test correction for multiple comparisons. All results were considered statistically significant at *P* < 0.05.
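The comparisons described here are standard two-group and multigroup tests, so they can be reproduced with common statistical libraries. Below is a minimal sketch in Python using SciPy and statsmodels; the group names, sample sizes, and simulated values are illustrative placeholders (not the study's data), shown only to make the unpaired *t* test and the ANOVA-plus-Tukey workflow concrete.

```python
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(0)

# Illustrative stand-ins for a measured endpoint (e.g., a perfusion ratio);
# four hypothetical groups: nondiabetic/diabetic x Prkcd+/+ / Prkcd-/-.
groups = {
    "nd_wt": rng.normal(1.0, 0.1, 8),
    "db_wt": rng.normal(0.6, 0.1, 8),
    "nd_ko": rng.normal(1.0, 0.1, 8),
    "db_ko": rng.normal(0.9, 0.1, 8),
}

# Unpaired t test for a single two-group comparison.
t, p = stats.ttest_ind(groups["nd_wt"], groups["db_wt"])
print(f"t = {t:.2f}, P = {p:.4f}")

# One-way ANOVA across all groups, then Tukey HSD for pairwise comparisons.
f, p_anova = stats.f_oneway(*groups.values())
values = np.concatenate(list(groups.values()))
labels = np.repeat(list(groups.keys()), [len(v) for v in groups.values()])
print(f"ANOVA: F = {f:.2f}, P = {p_anova:.4f}")
print(pairwise_tukeyhsd(values, labels, alpha=0.05))  # threshold P < 0.05
```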
# RESULTS

## Deletion of PKCδ improved reperfusion and vascular response to ischemia in diabetic limbs.

Nondiabetic and diabetic male *Prkcd*^−/−^ mice and control littermates (*Prkcd*^+/+^) underwent unilateral right femoral artery ligation. Body weight and fasting glucose levels were measured when mice were killed ([Supplementary Table 2](http://diabetes.diabetesjournals.org/lookup/suppl/doi:10.2337/db12-1432/-/DC1)). Blood flow reperfusion was assessed using the PIM 3 laser Doppler imaging system (Fig. 1A). Diabetic *Prkcd*^+/+^ mice exhibited reduced blood reperfusion of the ischemic limb compared with nondiabetic *Prkcd*^+/+^ mice (*P* = 0.0046). In contrast, reperfusion of blood flow in diabetic *Prkcd*^−/−^ mice was significantly improved (*P* = 0.0003) compared with diabetic *Prkcd*^+/+^ mice and similar to that of nondiabetic *Prkcd*^+/+^ and *Prkcd*^−/−^ mice 28 days after the ligation (Fig. 1B). Because diabetic patients are at high risk of lower limb amputation, we assessed limb necrosis and apoptosis. Impaired reperfusion in ischemic limbs of diabetic *Prkcd*^+/+^ mice was associated with elevated tissue necrosis, amputation (Fig. 1C), and apoptosis (Fig. 2) compared with nondiabetic counterparts.

One main effect of hypoxia is to induce angiogenesis and to promote new capillary formation. To test whether activation of PKCδ is responsible for poor collateral vessel formation in diabetes, we measured capillary density and capillary diameter in the ischemic adductor muscles. Figure 3 shows that the adductor muscles of diabetic *Prkcd*^+/+^ mice displayed a 31% reduction in vascular density compared with nondiabetic *Prkcd*^+/+^ mice. The decline in capillary density was accompanied by a 50% reduction in the number of vessels with a diameter of 50 µm or less. Interestingly, diabetic *Prkcd*^−/−^ mice showed a significant increase in capillary density and in the number of vessels with a diameter of less than 50 µm compared with diabetic *Prkcd*^+/+^ mice (Fig. 3D).

## PKCδ is activated in the diabetic ischemic limb.

Hyperglycemia is known to activate multiple PKC isoforms, preferentially the β and δ isoforms, in vascular cells. Expression of various isoforms of PKC was assessed by quantitative PCR in muscle tissues (Fig. 4). Compared with nondiabetic *Prkcd*^+/+^ mice, mRNA expression of PKCβ and -δ was modestly increased in adductor muscles of diabetic *Prkcd*^+/+^ mice and unchanged in *Prkcd*^−/−^ mice (Fig. 4B and D). There was no significant difference in the mRNA expression of PKCα and -ε (Fig. 4A and C). Although diabetic *Prkcd*^+/+^ mice did not exhibit higher protein expression of the PKCα, -β2, -ε, or -δ isoforms, adductor muscles of diabetic *Prkcd*^+/+^ mice showed a significant and marked increase in PKCδ phosphorylation (Thr 505; *P* = 0.0040), a marker of PKCδ activation, 28 days after unilateral femoral artery ligation compared with nondiabetic *Prkcd*^+/+^ mice (Fig. 5).
## Inhibition of PKCδ promotes proangiogenic growth factor expression and activation.

To explain how the absence of PKCδ improved reperfusion in diabetic ischemic limbs, we performed a wide analysis of the gene and protein expression of angiogenesis-related factors and their receptors. Quantitative gene expression analyses by real-time PCR indicated that VEGF-A, PDGF-B, and PDGFR-β mRNA expression was significantly decreased in the adductor muscles of diabetic mice by 46, 30, and 63%, respectively, compared with nondiabetic *Prkcd*^+/+^ mice (Fig. 6A, C, and D). The reduction of these genes in diabetic *Prkcd*^+/+^ mice was not observed in diabetic *Prkcd*^−/−^ mice. Moreover, mRNA expression of VEGFR2 (KDR/Flk-1), PDGF-B, and PDGFR-β was significantly upregulated in diabetic *Prkcd*^−/−^ compared with diabetic *Prkcd*^+/+^ mice (Fig. 6B–D). These results suggest that impaired PDGF and VEGF expression caused by PKCδ activation might be a contributing factor for poor collateral vessel formation in diabetes. Expression of other angiogenic factors, such as SDF-1, FGF-2, and eNOS, as well as the transcriptional factor activity of HIF-1α, was unchanged within all groups of mice (Fig. 6E–H and [Supplementary Fig. 1](http://diabetes.diabetesjournals.org/lookup/suppl/doi:10.2337/db12-1432/-/DC1)). In contrast to 4 weeks after femoral artery ligation, at 2 weeks the transcriptional factor activity and mRNA levels of HIF-1α were significantly decreased in diabetic *Prkcd*^+/+^ mice compared with nondiabetic *Prkcd*^+/+^ and diabetic *Prkcd*^−/−^ mice ([Supplementary Figs. 2 and 3](http://diabetes.diabetesjournals.org/lookup/suppl/doi:10.2337/db12-1432/-/DC1)).

## VEGFR2 and PDGFR-β activation is decreased in diabetic ischemic muscles.

To further investigate the mechanisms of the impaired angiogenic response to restore blood flow in diabetes, the expression, activation, and signaling pathways of VEGF-A and PDGF-B and their respective receptors (VEGFR2 and PDGFR-β) were examined. Protein expression of PDGF-B was significantly decreased in diabetic versus nondiabetic adductor muscles of wild-type animals. In contrast, VEGF-A and PDGF-B protein expression was elevated in the ischemic limb of the diabetic PKCδ-null mice (Fig. 7A and B). Phosphorylation of VEGFR2 and PDGFR-β was inhibited in ischemic adductor muscles of diabetic mice compared with nondiabetic *Prkcd*^+/+^ mice. However, activation of Src was elevated in adductor muscles of diabetic *Prkcd*^+/+^ mice compared with nondiabetic *Prkcd*^+/+^ and *Prkcd*^−/−^ mice (Fig. 7B). Interestingly, tyrosine phosphorylation of VEGFR2 and PDGFR-β, as well as PLCγ1, Akt, and ERK phosphorylation, was greatly enhanced in *Prkcd*^−/−^ mice compared with diabetic *Prkcd*^+/+^ mice (Fig. 7A and B). We did not observe any changes in eNOS protein expression among experimental groups (Fig. 7A).

## Expression of SHP-1 caused VEGFR2 and PDGFR-β inactivation.

We have previously shown that activation of PKCδ leads to increased expression of SHP-1, which inhibits the PDGF signaling pathway and promotes retinal pericyte apoptosis in diabetic animals.
To determine whether SHP-1 is implicated in PKCδ-induced VEGFR2 and PDGFR-β dephosphorylation in diabetic ischemic adductor muscles, we measured SHP-1 expression by quantitative PCR and immunoblot analysis. Figure 8A and B indicates that mRNA expression of SHP-1, but not SHP-2 or PTP1B, is elevated in diabetic *Prkcd*^+/+^ mice, whereas SHP-1 is clearly downregulated in *Prkcd*^−/−^ mice. We confirmed through immunoblot analysis that SHP-1 protein expression was elevated 2.3-fold in ischemic adductor muscles of diabetic *Prkcd*^+/+^ mice compared with nondiabetic *Prkcd*^+/+^ mice. The increased expression of SHP-1 was not observed in diabetic *Prkcd*^−/−^ mice (Fig. 8C). No change was detected in the protein expression of SHP-2 and PTP1B within all groups of mice (Fig. 8D).

# DISCUSSION

Diabetes is associated with the progression of vascular complications, such as peripheral arterial disease, and is a major risk factor for lower limb amputations (4). In the current study, we have demonstrated that activation of PKCδ diminishes the expression of VEGF and PDGF, two critical proangiogenic factors, contributing to poor capillary formation and blood flow reperfusion of the ischemic limbs. In addition to the reduced expression of VEGF and PDGF, phosphorylation of the VEGF and PDGF receptors was abrogated in diabetic ischemic muscles compared with nondiabetic ischemic muscles. The inhibition of growth factor receptor phosphorylation was associated with the upregulation of SHP-1 expression, which has been reported to deactivate tyrosine kinase receptors such as the VEGF and PDGF receptors. Overall, deletion of PKCδ prevents the reduction of VEGF and PDGF expression and re-establishes KDR/Flk-1 and PDGFR-β phosphorylation, favoring new capillary formation and blood flow reperfusion.

Wound healing is a complex, well-orchestrated, and dynamic process that involves a coordinated and precise interaction of various cell types and mediators. Given the fundamental contribution of VEGF and PDGF to the angiogenic process, the mechanism by which activation of the PKCδ isoform prevents growth factor expression and signaling may provide a better understanding of how diabetes reduces collateral vessel formation in the ischemic limb. In this study, we demonstrated that PKCδ is activated in diabetic ischemic muscles and reduces blood flow reperfusion, contributing to tissue necrosis, amputation, and apoptosis. Previous studies have reported that PKCδ is involved in vascular cell apoptosis. PKCδ activates p38 mitogen-activated protein kinase, p53, and caspase-3 cleavage to favor endothelial (36) and smooth muscle cell apoptosis (37,38). Therefore, deletion of PKCδ may enhance vascular cell migration and proliferation, two significant steps in the formation of new blood vessels.

Total expression of the PKC isoforms in ischemic muscles was only slightly affected by diabetes, probably because mRNA and protein analyses were performed 28 days after femoral artery ligation. However, phosphorylation of PKCδ on threonine 505, a phosphorylation site within the activation loop, clearly suggests that PKCδ is activated in the muscles of diabetic ischemic limbs compared with nondiabetic muscles. Previous data showed that the inhibition of PKCδ, using an isozyme-specific peptide, improved the number of microvessels and cerebral blood flow after acute focal ischemia in normotensive rats (39).
Our data demonstrate that deletion of PKCδ restores blood flow perfusion in diabetic ischemic muscles by increasing the number of capillaries and reducing tissue apoptosis.

The reduction of VEGF and PDGF receptor expression and of their downstream signaling pathways is associated with an impaired angiogenic process in diabetic foot ulcers and ischemic diseases. Our results indicate that diabetes-induced PKCδ activation decreases VEGF, PDGF, KDR/Flk-1, and PDGFR-β mRNA expression in the ischemic limb, which is completely restored in PKCδ-null mice. Interestingly, the impaired angiogenic response in ischemic arterial diseases of type 1 and type 2 diabetes is associated with VEGF inhibition in endothelial cells and monocytes (13,40). It is possible that the ablation of PKCδ may also affect VEGF signaling in monocytes, which may contribute to vessel formation abnormalities. However, this assumption will need further investigation.

HIF-1α is a master regulator of angiogenic factors in response to tissue hypoxia. A previous study showed that HIF-1α gene transfer increased recovery of limb perfusion, increasing eNOS activation and vessel density (41). In our study, however, the increase in the expression of VEGF in muscles of PKCδ-deficient mice may not have been entirely due to upregulation of HIF-1α. Because protein extraction was performed 4 weeks after the femoral artery ligation, it is possible that the expression of HIF-1α had returned to basal levels. This hypothesis is supported by results obtained 2 weeks after the surgery. Our data demonstrated that HIF-1α transcriptional factor activity and mRNA expression were increased in nondiabetic and diabetic PKCδ-null mice only 2 weeks after surgery ([Supplementary Figs. 2 and 3](http://diabetes.diabetesjournals.org/lookup/suppl/doi:10.2337/db12-1432/-/DC1)). Besides VEGF and PDGF expression, our data suggest that PKCδ activation disrupts VEGF and PDGF signaling, whereas in PKCδ-deficient mice, the activity of VEGFR2, PDGFR-β, PLCγ1, Akt, and ERK is enhanced. Surprisingly, Src phosphorylation was increased in the ischemic muscles of diabetic wild-type mice even though PDGFR-β activity was reduced. However, a previous study reported that reactive oxygen species (ROS) production induced Src phosphorylation (42). Because ROS are massively produced in ischemic and hyperglycemic conditions, it is probable that ROS production is responsible for the Src phosphorylation seen in diabetic wild-type mice.

There is strong evidence that progenitor cell recruitment and homing, which are guided by SDF-1, participate in angiogenesis and wound repair (43). Although the number of progenitor cells is reduced in diabetic mice, inadequate progenitor cell mobilization has been proposed as one potential mechanism of impaired angiogenesis (44). However, we did not observe any change in SDF-1 expression in PKCδ-null mice, suggesting that mobilization and local trafficking of progenitor cells to the ischemic site were not affected by the PKCδ isoform.

Despite advances in revascularization techniques, limb salvage and pain relief cannot be achieved in many diabetic patients with diffuse peripheral vascular disease. VEGF-mediated gene therapy has shown promising results as an innovative method in the treatment of severe cardiovascular diseases.
However, a randomized study of gene therapy failed to meet the primary objective of significant amputation reduction (45). During the 10-year follow-up period, no significant differences were detected in the number of amputations or causes of death with the use of transient VEGF-A–mediated gene therapy. One reason for this lack of improvement is perhaps that neovascularization requires the interaction of multiple growth factors that can promote, in a synergistic manner, new and mature blood vessels. Enhancing the responsiveness of diabetic vascular cells to proangiogenic factors may offer a potential new approach to treat peripheral arterial diseases. Protein tyrosine phosphatases are a group of proteins critical in abating cell responses to growth factors by inhibiting tyrosine kinase phosphorylation. Our results demonstrated that SHP-1 expression was increased in diabetic ischemic muscles and was responsible for VEGF and PDGF receptor dephosphorylation.

Although not significant, a slight rise in SHP-2 (18%) and PTP1B (37%) expression was observed in diabetic PKCδ-null mice. Previous studies have shown that PDGF activation enhanced SHP-2 and PTP1B activity (46,47), which may explain our results. We have reported that activation of PKCδ induces the expression of SHP-1 in cultured pericytes exposed to high glucose concentrations and inhibits the PDGF signaling pathway, contributing to pericyte apoptosis (23). Other studies have also shown that SHP-1 is a negative regulator of VEGF signal transduction and inhibits endothelial cell proliferation (48,49). Interestingly, silencing SHP-1 increased phosphorylation of KDR/Flk-1 and markedly enhanced capillary density in a nondiabetic hind limb ischemia model (50). However, our current study does not provide a direct link between SHP-1 expression and reduced angiogenesis, which will require further investigation. Nevertheless, our findings have identified PKCδ, and potentially SHP-1, as therapeutic targets for the treatment of diabetic peripheral arterial disease and cardiovascular complications.

In summary, we have provided evidence that PKCδ is activated by diabetes in ischemic muscles and induces SHP-1 expression, contributing to VEGF and PDGF unresponsiveness and a poor angiogenic response. Although various therapies are partly successful in restoring blood flow to the affected tissues, there is no effective strategy to specifically produce new functional vessels to relieve diabetic ischemic stress. Our data enhance our understanding of the mechanisms underlying poor collateral vessel formation induced by PKC activation and may offer potential novel targets to regulate angiogenesis therapeutically in patients with diabetes.

## ACKNOWLEDGMENTS

This study was supported by grants from the Canadian Diabetes Association, Fonds de Recherche du Québec–Santé, and Diabète Québec to P.G. and was performed at the Centre de Recherche Clinique Étienne-Le Bel, a research center funded by the Fonds de Recherche du Québec–Santé. P.G. is currently the recipient of a Scholarship Award from the Canadian Diabetes Association and the Canada Research Chair in Vascular Complications of Diabetes.

No potential conflicts of interest relevant to this article were reported.

F.L., M.P., B.D., and P.G. performed experiments and analyzed the data. M.L. provided the *Prkcd*-deficient mice. A.G. performed animal care and researched data.
F.L. and P.G. wrote the manuscript. P.G. is the guarantor of this work and, as such, had full access to all the data in the study and takes responsibility for the integrity of the data and the accuracy of the data analysis.

The authors gratefully acknowledge Marie-Élaine Clavet (Montreal Heart Institute) for her assistance with histochemistry techniques.

# REFERENCES

abstract: We present the first ever global account of the production, use, and end-of-life fate of all plastics ever made by humankind.
author: Roland Geyer; Jenna R. Jambeck; Kara Lavender Law
Corresponding author: Kara Lavender Law
date: 2017-07
references:
title: Production, use, and fate of all plastics ever made

# INTRODUCTION

A world without plastics, or synthetic organic polymers, seems unimaginable today, yet their large-scale production and use only dates back to ~1950. Although the first synthetic plastics, such as Bakelite, appeared in the early 20th century, widespread use of plastics outside of the military did not occur until after World War II. The ensuing rapid growth in plastics production is extraordinary, surpassing most other man-made materials. Notable exceptions are materials that are used extensively in the construction sector, such as steel and cement (*1*, *2*).

Instead, plastics' largest market is packaging, an application whose growth was accelerated by a global shift from reusable to single-use containers. As a result, the share of plastics in municipal solid waste (by mass) increased from less than 1% in 1960 to more than 10% by 2005 in middle- and high-income countries (*3*). At the same time, global solid waste generation, which is strongly correlated with gross national income per capita, has grown steadily over the past five decades (*4*, *5*).

The vast majority of monomers used to make plastics, such as ethylene and propylene, are derived from fossil hydrocarbons. None of the commonly used plastics are biodegradable. As a result, they accumulate, rather than decompose, in landfills or the natural environment (*6*). The only way to permanently eliminate plastic waste is by destructive thermal treatment, such as combustion or pyrolysis. Thus, near-permanent contamination of the natural environment with plastic waste is a growing concern. Plastic debris has been found in all major ocean basins (*6*), with an estimated 4 to 12 million metric tons (Mt) of plastic waste generated on land entering the marine environment in 2010 alone (*3*). Contamination of freshwater systems and terrestrial habitats is also increasingly reported (*7*–*9*), as is environmental contamination with synthetic fibers (*9*, *10*).
Plastic waste is now so ubiquitous in the environment that it has been suggested as a geological indicator of the proposed Anthropocene era (*11*).

We present the first global analysis of all mass-produced plastics ever made by developing and combining global data on production, use, and end-of-life fate of polymer resins, synthetic fibers, and additives into a comprehensive material flow model. The analysis includes thermoplastics, thermosets, polyurethanes (PURs), elastomers, coatings, and sealants but focuses on the most prevalent resins and fibers: high-density polyethylene (PE), low-density and linear low-density PE, polypropylene (PP), polystyrene (PS), polyvinylchloride (PVC), polyethylene terephthalate (PET), and PUR resins; and polyester, polyamide, and acrylic (PP&A) fibers. The pure polymer is mixed with additives to enhance the properties of the material.

# RESULTS AND DISCUSSION

Global production of resins and fibers increased from 2 Mt in 1950 to 380 Mt in 2015, a compound annual growth rate (CAGR) of 8.4% (table S1), roughly 2.5 times the CAGR of the global gross domestic product during that period (*12*, *13*). The total amount of resins and fibers manufactured from 1950 through 2015 is 7800 Mt. Half of this—3900 Mt—was produced in just the past 13 years. Today, China alone accounts for 28% of global resin and 68% of global PP&A fiber production (*13*–*15*). Bio-based or biodegradable plastics currently have a global production capacity of only 4 Mt and are excluded from this analysis (*16*).

We compiled production statistics for resins, fibers, and additives from a variety of industry sources and synthesized them according to type and consuming sector (table S2 and figs. S1 and S2) (*12*–*24*). Data on fiber and additives production are not readily available and have typically been omitted until now. On average, we find that nonfiber plastics contain 93% polymer resin and 7% additives by mass. When including additives in the calculation, the amount of nonfiber plastics (henceforth defined as resins plus additives) manufactured since 1950 increases to 7300 Mt. PP&A fibers add another 1000 Mt. Plasticizers, fillers, and flame retardants account for about three quarters of all additives (table S3). The largest groups in total nonfiber plastics production are PE (36%), PP (21%), and PVC (12%), followed by PET, PUR, and PS (<10% each). Polyester, most of which is PET, accounts for 70% of all PP&A fiber production. Together, these seven groups account for 92% of all plastics ever made. Approximately 42% of all nonfiber plastics have been used for packaging, which is predominantly composed of PE, PP, and PET. The building and construction sector, which has used 69% of all PVC, is the next largest consuming sector, using 19% of all nonfiber plastics (table S2).

We combined plastic production data with product lifetime distributions for eight different industrial use sectors, or product categories, to model how long plastics are in use before they reach the end of their useful lifetimes and are discarded (*22*, *25*–*29*). We assumed log-normal distributions with means ranging from less than 1 year, for packaging, to decades, for building and construction (Fig. 1). This is a commonly used modeling approach to estimating waste generation for specific materials (*22*, *25*, *26*).
A more direct way to measure plastic waste generation is to combine solid waste generation data with waste characterization information, as in the study of Jambeck *et al.* (*3*). However, for many countries, these data are not available in the detail and quality required for the present analysis.

We estimate that in 2015, 407 Mt of primary plastics (plastics manufactured from virgin materials) entered the use phase, whereas 302 Mt left it. Thus, in 2015, 105 Mt were added to the in-use stock. For comparison, we estimate that plastic waste generation in 2010 was 274 Mt, which is nearly equal to the independently derived estimate of 275 Mt by Jambeck *et al.* (*3*). The different product lifetimes lead to a substantial shift in industrial use sector and polymer type between plastics entering and leaving use in any given year (tables S4 and S5 and figs. S1 to S4). Most of the packaging plastics leave use the same year they are produced, whereas construction plastics leaving use were produced decades earlier, when production quantities were much lower. For example, in 2015, 42% of primary nonfiber plastics produced (146 Mt) entered use as packaging and 19% (65 Mt) as construction, whereas nonfiber plastic waste leaving use was 54% packaging (141 Mt) and only 5% construction (12 Mt). Similarly, in 2015, PVC accounted for 11% of nonfiber plastics production (38 Mt) and only 6% of nonfiber plastic waste generation (16 Mt).

By the end of 2015, all plastic waste ever generated from primary plastics had reached 5800 Mt, 700 Mt of which were PP&A fibers. There are essentially three different fates for plastic waste. First, it can be recycled or reprocessed into a secondary material (*22*, *26*). Recycling delays, rather than avoids, final disposal. It reduces future plastic waste generation only if it displaces primary plastic production (*30*); however, because of its counterfactual nature, this displacement is extremely difficult to establish (*31*). Furthermore, contamination and the mixing of polymer types generate secondary plastics of limited or low technical and economic value. Second, plastics can be destroyed thermally. Although there are emerging technologies, such as pyrolysis, which extracts fuel from plastic waste, to date, virtually all thermal destruction has been by incineration, with or without energy recovery. The environmental and health impacts of waste incinerators strongly depend on emission control technology, as well as incinerator design and operation. Finally, plastics can be discarded and either contained in a managed system, such as sanitary landfills, or left uncontained in open dumps or in the natural environment.

We estimate that 2500 Mt of plastics—or 30% of all plastics ever produced—are currently in use. Between 1950 and 2015, cumulative waste generation of primary and secondary (recycled) plastic waste amounted to 6300 Mt. Of this, approximately 800 Mt (12%) of plastics have been incinerated and 600 Mt (9%) have been recycled, only 10% of which have been recycled more than once. Around 4900 Mt—60% of all plastics ever produced—were discarded and are accumulating in landfills or in the natural environment (Fig. 2). Of this, 600 Mt were PP&A fibers. None of the mass-produced plastics biodegrade in any meaningful way; however, sunlight weakens the materials, causing fragmentation into particles known to reach millimeters or micrometers in size (*32*).
Research into the environmental impacts of these "microplastics" in marine and freshwater environments has accelerated in recent years (*33*), but little is known about the impacts of plastic waste in land-based ecosystems.

Before 1980, plastic recycling and incineration were negligible. Since then, only nonfiber plastics have been subject to significant recycling efforts. The following results apply to nonfiber plastic only: Global recycling and incineration rates have slowly increased to account for 18 and 24%, respectively, of nonfiber plastic waste generated in 2014 (figs. S5 and S6). On the basis of limited available data, the highest recycling rates in 2014 were in Europe (30%) and China (25%), whereas in the United States, plastic recycling has remained steady at 9% since 2012 (*12*, *13*, *34*–*36*). In Europe and China, incineration rates have increased over time to reach 40 and 30%, respectively, in 2014 (*13*, *35*). However, in the United States, nonfiber plastics incineration peaked at 21% in 1995 before decreasing to 16% in 2014 as recycling rates increased, with discard rates remaining constant at 75% during that time period (*34*). Waste management information for 52 other countries suggests that in 2014, the rest of the world had recycling and incineration rates similar to those of the United States (*37*). To date, end-of-life textiles (fiber products) do not experience significant recycling rates and are thus incinerated or discarded together with other solid waste.

Primary plastics production has followed a robust time trend throughout its entire history. If production continues on this curve, humankind will have produced 26,000 Mt of resins, 6000 Mt of PP&A fibers, and 2000 Mt of additives by the end of 2050. Assuming consistent use patterns and projecting current global waste management trends to 2050 (fig. S7), 9000 Mt of plastic waste will have been recycled, 12,000 Mt incinerated, and 12,000 Mt discarded in landfills or the natural environment (Fig. 3).

Any material flow analysis of this kind requires multiple assumptions or simplifications, which are listed in Materials and Methods, and is subject to considerable uncertainty; as such, all cumulative results are rounded to the nearest 100 Mt. The largest sources of uncertainty are the lifetime distributions of the product categories and the plastic incineration and recycling rates outside of Europe and the United States. Increasing/decreasing the mean lifetimes of all product categories by 1 SD changes the cumulative primary plastic waste generation (for 1950 to 2015) from 5900 to 4600/6200 Mt or by −4/+5%. Increasing/decreasing current global incineration and recycling rates by 5%, and adjusting the time trends accordingly, changes the cumulative discarded plastic waste from 4900 (for 1950 to 2015) to 4500/5200 Mt or by −8/+6%.

The growth of plastics production in the past 65 years has substantially outpaced any other manufactured material. The same properties that make plastics so versatile in innumerable applications (durability and resistance to degradation) make these materials difficult or impossible for nature to assimilate. Thus, without a well-designed and tailor-made management strategy for end-of-life plastics, humans are conducting a singular uncontrolled experiment on a global scale, in which billions of metric tons of material will accumulate across all major terrestrial and aquatic ecosystems on the planet.
The relative advantages and disadvantages of dematerialization, substitution, reuse, material recycling, waste-to-energy, and conversion technologies must be carefully considered to design the best solutions to the environmental challenges posed by the enormous and sustained global growth in plastics production and use.

# MATERIALS AND METHODS

## Plastic production

The starting point of the plastic production model is global annual pure polymer (resin) production data from 1950 to 2015, published by the Plastics Europe Market Research Group, and global annual fiber production data from 1970 to 2015 published by The Fiber Year and Tecnon OrbiChem (table S1). The resin data closely follow a second-order polynomial time trend, which generated a fit of *R*^2^ = 0.9968. The fiber data closely follow a third-order polynomial time trend, which generated a fit of *R*^2^ = 0.9934. Global breakdowns of total production by polymer type and industrial use sector were derived from annual market and polymer data for North America, Europe, China, and India (table S2) (*12*, *13*, *19*–*24*). U.S. and European data are available for 2002 to 2014. Polymer type and industrial use sector breakdowns of polymer production are similar across countries and regions.

Global additives production data, which are not publicly available, were acquired from market research companies and cross-checked for consistency (table S3) (*17*, *18*). Additives data are available for 2000 to 2014. Polymer type and industrial use sector breakdowns of polymer production and the additives-to-polymer fraction were both stable over the time period for which data are available and were thus assumed constant throughout the modeling period of 1950–2015. Any errors in the early decades were mitigated by the lower production rates in those years. Additives data were organized by additive type and industrial use sector and integrated with the polymer data. $P_i(t)$ denotes the amount of primary plastics (that is, polymers plus additives) produced in year *t* and used in sector *i* (fig. S1).

## Plastic waste generation and fate

Plastics use was characterized by discretized log-normal distributions, $\text{LTD}_i(j)$, which denote the fraction of plastics in industrial use sector *i* used for *j* years (Fig. 1). Mean values and SDs were gathered from published literature (table S4) (*22*, *25*–*29*). Product lifetimes may vary significantly across economies and also across demographic groups, which is why distributions were used and sensitivity analysis was conducted with regard to mean product lifetimes. The total amount of primary plastic waste generated in year *t* was calculated as $\text{PW}(t) = \sum_{i=1}^{8}\sum_{j=1}^{65} P_i(t-j)\cdot\text{LTD}_i(j)$ (figs. S3 and S4). Secondary plastic waste generated in year *t* was calculated as the fraction of total plastic waste that was recycled *k* years ago, $\text{SW}(t) = [\text{PW}(t-k) + \text{SW}(t-k)]\cdot\text{RR}(t-k)$, where *k* is the average use time of secondary plastics and $\text{RR}(t-k)$ is the global recycling rate in year $t-k$. Amounts of plastic waste discarded and incinerated were calculated as $\text{DW}(t) = [\text{PW}(t) + \text{SW}(t)]\cdot\text{DR}(t)$ and $\text{IW}(t) = [\text{PW}(t) + \text{SW}(t)]\cdot\text{IR}(t)$, with $\text{DR}(t)$ and $\text{IR}(t)$ being the global discard and incineration rates in year *t* (fig. S5).
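A minimal sketch of the mass-balance recursion defined by these equations, with all inputs left as toy arrays; the lifetime fractions LTD could come from a discretization like the one sketched earlier. Variable names mirror the symbols above, and nothing here reproduces the study's actual data.

```python
import numpy as np

def waste_flows(P, LTD, RR, IR, k=1):
    """Toy implementation of the mass-balance equations:
    PW(t) = sum_i sum_j P_i(t-j) * LTD_i(j)
    SW(t) = [PW(t-k) + SW(t-k)] * RR(t-k)
    DW(t) = [PW(t) + SW(t)] * DR(t),  IW(t) = [PW(t) + SW(t)] * IR(t)
    P: (sectors, years) primary production; LTD: (sectors, max_age);
    RR, IR: (years,) recycling/incineration rates; DR = 1 - RR - IR."""
    n_sectors, n_years = P.shape
    max_age = LTD.shape[1]
    PW = np.zeros(n_years); SW = np.zeros(n_years)
    DW = np.zeros(n_years); IW = np.zeros(n_years)
    for t in range(n_years):
        # Primary waste: production of earlier years reaching end of life.
        for j in range(1, min(t, max_age) + 1):
            PW[t] += P[:, t - j] @ LTD[:, j - 1]
        # Secondary waste: material recycled k years ago returning as waste.
        if t >= k:
            SW[t] = (PW[t - k] + SW[t - k]) * RR[t - k]
        total = PW[t] + SW[t]
        DW[t] = total * (1.0 - RR[t] - IR[t])   # discarded
        IW[t] = total * IR[t]                   # incinerated
    return PW, SW, DW, IW

# Tiny example: 2 sectors, 5 years of production, 3-year maximum lifetime.
P = np.array([[10, 12, 14, 16, 18],
              [ 5,  5,  5,  5,  5]], dtype=float)
LTD = np.array([[0.7, 0.2, 0.1],     # short-lived sector (packaging-like)
                [0.1, 0.3, 0.6]])    # long-lived sector
RR = np.full(5, 0.1); IR = np.full(5, 0.2)
PW, SW, DW, IW = waste_flows(P, LTD, RR, IR)
```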
Cumulative values at time *T* were calculated as the sum over all $T-1950$ years of plastics mass production. Examples are cumulative primary production $\text{CP}_i(T) = \sum_{t=1950}^{T} P_i(t)$ and cumulative primary plastic waste generation $\text{CPW}(T) = \sum_{t=1950}^{T}\text{PW}(t)$ (Fig. 3).

## Recycling, incineration, and discard rates

Time series for resin (that is, nonfiber) recycling, incineration, and discard rates were collected separately for four world regions: the United States, the EU-28 plus Norway and Switzerland, China, and the rest of the world. Detailed and comprehensive solid waste management data for the United States were published by the U.S. Environmental Protection Agency dating back to 1960 (table S7) (*34*). European data were from several reports by PlasticsEurope, which collectively cover 1996 to 2014 (*12*, *13*, *38*). Chinese data were synthesized and reconciled from the English version of the China Statistical Yearbook, translations of Chinese publications and government reports, and additional waste management literature (*35*, *36*, *39*–*41*). Waste management for the rest of the world was based on World Bank data (*37*). Time series for global recycling, incineration, and discard rates (fig. S5) were derived by adding the rates of the four regions weighted by their relative contribution to global plastic waste generation. In many world regions, waste management data were sparse and of poor quality. For this reason, sensitivity analysis with regard to waste management rates was conducted.

The resulting global nonfiber recycling rate increased at a constant 0.7 percentage points per annum (p.a.) between 1990 and 2014. If this linear trend is assumed to continue, the global recycling rate would reach 44% in 2050. The global nonfiber incineration rate has grown more unevenly but, on average, increased 0.7 percentage points p.a. between 1980 and 2014. Assuming an annual increase of 0.7 percentage points between 2014 and 2050 yielded a global incineration rate of 50% by 2050. With those two assumptions, the global discard rate would decrease from 58% in 2014 to 6% in 2050 (fig. S7). The dashed lines in Fig. 3 are based on those assumptions; they are therefore simply forward projections of historical global trends and should not be mistaken for a prediction or forecast. There is currently no significant recycling of synthetic fibers. It was thus assumed that end-of-life textiles are incinerated and discarded together with all other municipal solid waste.

# Supplementary Material

###### http://advances.sciencemag.org/cgi/content/full/3/7/e1700782/DC1

## Acknowledgments

We thank C.-J. Simon and R. Nayaran for providing polymer production data and G.L. Mahoney for designing Fig. 2. We especially thank I. Creelman and R. Song for data collection and model implementation, E. Smith for assistance in compiling the U.S. Environmental Protection Agency and China solid waste management data, S. Wang for researching China plastic waste management and translation of articles, J. Chamberlain for researching additives, and I. MacAdam-Somer for researching synthetic fibers. This work benefited from helpful discussions with T. Siegler and C. Rochman and comments from two anonymous reviewers.
**Funding:** This work was conducted within the Marine Debris Working Group at the National Center for Ecological Analysis and Synthesis, University of California, Santa Barbara, with support from Ocean Conservancy. R.G. was supported by the NSF Chemical, Bioengineering, Environmental and Transport Systems grant #1335478. **Author contributions:** R.G. led the research design, data collection, model development, calculations, interpretation of results, and writing of the manuscript; J.R.J. contributed to research design, data collection, model development, and interpretation of results; K.L.L. contributed to research design, analysis and interpretation of results, and writing of the manuscript. **Competing interests:** The authors declare that they have no competing interests. **Data and materials availability:** All data needed to evaluate the conclusions in the paper are present in the paper and/or the Supplementary Materials. Additional data related to this paper may be requested from the authors.

### SUPPLEMENTARY MATERIALS

Supplementary material for this article is available at 

fig. S1. Global primary plastics production (in million metric tons) according to industrial use sector from 1950 to 2015.

fig. S2. Global primary plastics production (in million metric tons) according to polymer type from 1950 to 2015.

fig. S3. Global primary plastics waste generation (in million metric tons) according to industrial use sector from 1950 to 2015.

fig. S4. Global primary plastics waste generation (in million metric tons) according to polymer type from 1950 to 2015.

fig. S5. Estimated percentage of global (nonfiber) plastic waste recycled, incinerated, and discarded from 1950 to 2014 [(*12*, *13*, *34*–*42*) and table S7].

fig. S6. Annual global primary and secondary plastic waste generation TW (*t*), recycling RW (*t*), incineration IW (*t*), and discard DW (*t*) (in million metric tons) from 1950 to 2014.

fig. S7. Projection of global trends in recycling, incineration, and discard of plastic waste from 1980 to 2014 (to the left of vertical black line) to 2050 (to the right of vertical black line).

table S1. Annual global polymer resin and fiber production in million metric tons (*12*–*15*).

table S2. Share of total polymer resin production according to polymer type and industrial use sector calculated from data for Europe, the United States, China, and India covering the period 2002–2014 (*12*, *13*, *19*–*24*).

table S3. Share of additive type in global plastics production from data covering the period 2000–2014 (*17*, *18*).

table S4. Baseline mean values and SDs used to generate log-normal product lifetime distributions for the eight industrial use sectors used in this study (*22*, *25*–*29*).

table S5. Global primary plastics production and primary waste generation (in million metric tons) in 2015 according to industrial use sector.

table S6. Global primary plastics production and primary waste generation (in million metric tons) in 2015 according to polymer type/additive.

table S7. Additional data sources for U.S. plastics recycling and incineration.

table S8.
Complete list of data sources.

# REFERENCES AND NOTES

date: 2018-04
title: Individualized, supportive care key to positive childbirth experience, says WHO

**15 FEBRUARY 2018 | GENEVA -** WHO has issued new recommendations to establish global care standards for healthy pregnant women and reduce unnecessary medical interventions.

Worldwide, an estimated 140 million births take place every year. Most of these occur without complications for women and their babies. Yet, over the past 20 years, practitioners have increased the use of interventions that were previously only used to avoid risks or treat complications, such as oxytocin infusion to speed up labour or caesarean sections.

"We want women to give birth in a safe environment with skilled birth attendants in well-equipped facilities. However, the increasing medicalization of normal childbirth processes is undermining a woman's own capability to give birth and negatively impacting her birth experience," says Dr Princess Nothemba Simelela, WHO Assistant Director-General for Family, Women, Children and Adolescents.

"If labour is progressing normally, and the woman and her baby are in good condition, they do not need to receive additional interventions to accelerate labour," she says.

Childbirth is a normal physiological process that can be accomplished without complications for the majority of women and babies. However, studies show a substantial proportion of healthy pregnant women undergo at least one clinical intervention during labour and birth. They are also often subjected to needless and potentially harmful routine interventions.

The new WHO guideline includes 56 evidence-based recommendations on what care is needed throughout labour and immediately after for the woman and her baby. These include having a companion of choice during labour and childbirth; ensuring respectful care and good communication between women and health providers; maintaining privacy and confidentiality; and allowing women to make decisions about their pain management, labour and birth positions and natural urge to push, among others.

**Every labour is unique and progresses at different rates**

The new WHO guideline recognizes that every labour and childbirth is unique and that the duration of the active first stage of labour varies from one woman to another. In a first labour, it usually does not extend beyond 12 hours. In subsequent labours it usually does not extend beyond 10 hours.

To reduce unnecessary medical interventions, the WHO guideline states that the previous benchmark for a cervical dilation rate of 1 cm/hr during the active first stage of labour (as assessed by a partograph, or chart used to document the course of a normal labour) may be unrealistic for some women and is inaccurate in identifying women at risk of adverse birth outcomes.
The guideline emphasizes that a slower cervical dilation rate alone should not be a routine indication for intervention to accelerate labour or expedite birth.

"Many women want a natural birth and prefer to rely on their bodies to give birth to their baby without the aid of medical intervention," says Ian Askew, WHO Director, Department of Reproductive Health and Research. "Even when a medical intervention is wanted or needed, the inclusion of women in making decisions about the care they receive is important to ensure that they meet their goal of a positive childbirth experience."

**High quality care for all women**

Unnecessary labour interventions are widespread in low-, middle- and high-income settings, often putting a strain on already scarce resources in some countries and further widening the equity gap.

As more women give birth in health facilities with skilled health professionals and timely referrals, they deserve better quality of care. About 830 women die from pregnancy- or childbirth-related complications around the world every day; the majority could be prevented with high-quality care in pregnancy and during childbirth.

Disrespectful and non-dignified care is prevalent in many health facilities, violating human rights and preventing women from accessing care services during childbirth. In many parts of the world, the health provider controls the birthing process, which further exposes healthy pregnant women to unnecessary medical interventions that interfere with the natural childbirth process.

Achieving the best possible physical, emotional, and psychological outcomes for the woman and her baby requires a model of care in which health systems empower all women to access care that focuses on the mother and child.

Health professionals should advise healthy pregnant women that the duration of labour varies greatly from one woman to another. While most women want a natural labour and birth, they also acknowledge that birth can be an unpredictable and risky event and that close monitoring and sometimes medical interventions may be necessary. Even when interventions are needed or wanted, women usually wish to retain a sense of personal achievement and control by being involved in decision making, and by rooming in with their baby after childbirth.

The data also highlight the need for rapid scale-up of research. There have been some encouraging signs in funding available for investment in research for a cure for dementia in recent years, but much more needs to be done. The number of articles in peer-reviewed journals on dementia in 2016 was close to 7000. This compares with more than 15 000 for diabetes, and more than 99 000 for cancer during the same year. Research is needed not only to find a cure for dementia, but also in the areas of prevention, risk reduction, diagnosis, treatment and care.

The Observatory will provide a knowledge bank where health and social care authorities, medical professionals, researchers and civil society organizations will be able to find country and regional dementia profiles, global reports, policy guidance, guidelines and toolkits on dementia prevention and care.

**Dementia**

Dementia is an umbrella term for several diseases that are mostly progressive, affecting memory, other cognitive abilities and behaviour and interfering significantly with a person's ability to maintain the activities of daily living. Women are more often affected than men.
Alzheimer's disease is the most common type of dementia and accounts for 60–70% of cases. The other common types are vascular dementia and mixed forms.

Available from: 

abstract: Using a genome-scale approach to study transcription levels in a human CD8^+^ T-cell clone, a recent study has suggested that the repertoire of molecules on the surface of T cells is close to being completely characterized.
author: Joerg Ermann; Chan D Chung; C Garrison Fathman
date: 2004
institute: 1Division of Immunology and Rheumatology, Department of Medicine, Stanford University School of Medicine, Stanford, CA 94305, USA
references:
title: Scratching the (T cell) surface

Over the last few years technological advances have made it possible to study, in parallel, the expression of thousands of genes in cells, tissues or organisms. While this genome-scale approach to gene-expression analysis has been touted by some as the new 'golden era' of hypothesis-unlimited, discovery-driven research \[1\], it is apparent that our ability to make sense of the vast accumulations of data does not always keep up with our ability to generate them. Many groups have used technologies such as microarrays or serial analysis of gene expression (SAGE) to analyze gene expression in cells of the immune system. A recent article by Evans *et al*. in *Immunity* \[2\], attempting to expand beyond the simple description of differential gene-expression patterns, arrives at a bold conclusion. The article's title poses an intriguing question (*The T-cell surface - how well do we know it?*), and the conclusion is that we know it quite well.

Not surprisingly, molecules on the surface of T cells are of great interest to immunologists. Much information has been generated about T-cell-specific surface molecules and their function since the first monoclonal antibodies against leukocyte surface markers were made in the late 1970s. Attempts to compare the specificities of different monoclonal antibodies and to identify their targets led to the development of the cluster of differentiation (CD) nomenclature \[3\]. Most of the CD antigens turned out to be proteins (a few CD antibodies detect carbohydrate modifications) and the respective genes have been cloned in both humans and mice. Over time the CD system thus transformed into a classification for leukocyte surface molecules, rather than antibodies. Currently, there are 247 assigned CDs \[4\].
Almost 200 additional molecules are being considered for CD status during the current Eighth International Workshop on Human Leukocyte Differentiation Antigens, which will culminate in the HLDA8 Conference in Adelaide, Australia in December 2004 \[5\].

The molecules on the surface of T cells belong to very diverse structural and functional classes, and include components of the immunoreceptors on T, B and NK cells, adhesion molecules, and cytokine and chemokine receptors. Some CDs have proven to be useful markers of subpopulations of cells with strikingly different functions. Most T cells express CD3 and an αβ T-cell receptor (TCR) paired with either the CD4 or the CD8 molecule as co-receptor. CD4^+^ αβ T cells recognize antigens presented by antigen-presenting cells in the context of major histocompatibility complex (MHC) class II molecules. Their main effector function is the production of cytokines and the facilitation of immune responses. A subclass of CD4^+^ T cells that has recently gained attention comprises regulatory/suppressor T cells, which are characterized by the constitutive expression of CD25, the α-chain of the interleukin 2 (IL-2) receptor. These cells are thought to negatively regulate immune responses and to prevent uncontrolled autoimmunity. CD8^+^ αβ T cells recognize antigens in the context of MHC class I molecules, which are expressed on most somatic cells. Upon activation, CD8^+^ T cells develop into cytolytic T lymphocytes (CTLs) ready to kill cells infected by intracellular pathogens, such as viruses, or to eradicate tumor cells.

In their article on the T-cell surface, Evans *et al*. \[2\] used SAGE to study gene expression in a human CD8^+^ CTL clone. The SAGE library that was generated (referred to as the CTL library) contained 71,174 SAGE tags representing 20,204 distinct sequences. This number was estimated to cover all of the transcripts whose expression level was at or above 0.008% of the transcriptome (that is, having at least 22 copies per cell). The library included 111 genes with, or being considered for, CD status. Several pairwise comparisons with unrelated SAGE libraries were then performed.

The central analysis of the paper starts with a comparison of the CTL library with a library derived from cerebellum; 1,098 transcripts were significantly more abundant in the CTL library. Among these, about a quarter of the transcripts with known function coded for proteins involved in protein/mRNA synthesis, a result that was thought to reflect the proliferating versus non-proliferating character of the two cell/tissue types (CTL versus cerebellum). Interestingly, the set of genes that was highly differentially expressed in the CTLs was enriched for surface markers, signaling molecules and soluble mediators. In an attempt to find a core set of CTL-specific genes, additional comparisons were performed between the CTL library and SAGE libraries from ovary epithelium (as a type of proliferating cell) and a panel of tumor libraries. This resulted in a shortened list of 387 CTL-specific transcripts.

Notably, at all stages of comparison 42-45% of the transcripts lacked an assigned function. Of the known genes in the final list of 387 specific transcripts in the CTL library, 27% were cell-surface molecules, including TCR components, CD2, CD5, and CD8. Evans *et al*. \[2\] then asked how many of the unknown CTL-specific transcripts encoded surface molecules.
Sequences representing UniGene clusters \[6\] were analyzed for signatures of surface molecules by domain analysis, looking for transmembrane regions or other domains characteristic of leukocyte surface molecules, and by BLAST searches for related genes with known function. Surprisingly, only 2 of the 97 (2%) UniGene clusters analyzed showed some potential for encoding novel surface molecules. The authors therefore concluded that "the cell-type-specific composition of the resting CD8^+^ T-cell surface is now largely defined."

How complete and cell-type-specific is this list of 387 genes? The CTL SAGE library of 20,204 transcripts represents the transcriptome of the CTL clone. A detailed discussion of the limitations of the SAGE method is beyond the scope of this article; suffice it to say that functionally important genes may be missing from the library due to low mRNA expression levels, chance, or because they lack the target sequence for the tagging enzyme used in the SAGE protocol \[7\]. In order to get to the final set of 387 CTL-specific genes, Evans *et al*. \[2\] eliminated all transcripts that were present at comparable levels in unrelated libraries. This powerful approach seems to have validated itself by the fact that TCR components and other principal T-cell markers were present in the shortlist of 387 genes. The method used is not unbiased, however. The choice of libraries (in this case cerebellum, ovary epithelium, and a panel of tumor cell lines that were not specified further), data quality, and the algorithms used for comparison can be expected to have an enormous impact on the results obtained. It is easy to see how functionally important genes might get lost because they are expressed at sufficiently high levels in one of the cell or tissue types used for comparison. A list of cell-type-specific genes derived by this method of successive *in silico* subtraction defines a cell-type-specific gene-expression pattern against the transcriptional background of the cell or tissue types used for comparison. It is not a list of all genes relevant for cell-type-specific function.

The major finding of the Evans *et al*. study \[2\] is that among the 387 CTL-specific transcripts, 27% of the known genes encoded cell-surface molecules, whereas only 2% of the unknown genes showed some potential in that regard. The implication is that the catalog of CTL surface molecules is close to being complete. While it is not unreasonable to assume that the concerted efforts over the last two decades to characterize surface molecules on leukocytes have led to a situation where most CTL-specific surface molecules are known \[8\], some questions remain. Is this finding unique to CTLs (or to leukocytes in general)? What would be the result of a similar analysis in, for instance, ovary epithelium? Were there unknown cell-surface molecules in the CTL SAGE library? If so, at what point of the stepwise subtraction process did these transcripts get eliminated? It has been noted that leukocytes share many surface molecules with neuronal cells and epithelial cells \[8\], the very cells used for subtraction by Evans *et al*. \[2\]. An alternative experimental approach to analyzing the incidence of unknown cell-surface molecules might be to generate SAGE libraries from microsomal and free-ribosomal mRNA pools generated through equilibrium density centrifugation.
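The "missing due to low mRNA expression levels [or] chance" caveat raised above can be made quantitative with a simple sampling argument: a transcript at fractional abundance f escapes a library of N tags with probability (1 − f)^N. The sketch below uses the library size quoted above; the assumed mRNA content per cell (275,000 molecules) is chosen so that 22 copies per cell corresponds to the 0.008% abundance threshold, and is an illustrative figure rather than a measured one.

```python
# Probability that a transcript escapes a SAGE library by chance alone.
N_TAGS = 71_174          # tags sequenced in the CTL library (from the text)

def p_missed(copies_per_cell, transcripts_per_cell=275_000):
    # f is the transcript's fractional abundance in the transcriptome;
    # transcripts_per_cell is an assumed typical total mRNA content.
    f = copies_per_cell / transcripts_per_cell
    return (1.0 - f) ** N_TAGS

for copies in (1, 5, 22):
    print(copies, round(p_missed(copies), 3))
# A 22-copy transcript (~0.008% abundance) is almost never missed (p ~ 0.003),
# but a single-copy transcript escapes the library roughly 77% of the time.
```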
Such density-based fractionation has been demonstrated to discriminate secretory and cell-surface molecules from nonsecretory proteins quite efficiently \[9,10\]. One would expect a significantly lower percentage of unknown transcripts in the secretory/surface-molecule fraction.

Only the CTL SAGE library was actually generated by Evans *et al*. \[2\]. The other libraries used for comparison were derived from publicly available databases. Open access to primary gene-expression data is essential, not only for enabling researchers to reproduce published analyses, but also to allow for novel experimental approaches that incorporate relevant data generated by others. Important information can be gained by comparing genome-wide expression data across large numbers of samples. In a recent, extreme example, 3,283 DNA microarrays from human, *Drosophila*, *Caenorhabditis elegans* and yeast were used to define evolutionarily conserved genetic modules of co-expressed genes \[11\]. SAGE data have been publicly available on SAGEMap \[12,13\] for a number of years. Microarray data are far more complex, but a standard for the annotation of microarray data (Minimum Information About a Microarray Experiment; MIAME) \[14\] and a platform-independent data exchange format (Microarray Gene Expression Markup Language; MAGE-ML) \[15\] have been developed. Furthermore, public repositories for microarray data such as ArrayExpress \[16,17\] and the Gene Expression Omnibus (GEO) \[18,19\] are now available.

SAGE has the advantage over current microarray technology of measuring absolute transcript abundance. Nevertheless, there are some limitations as to what can be said about the T-cell surface by studying mRNA levels. First, for a number of surface molecules, such as CD45, a variety of functionally important splice variants have been described \[20\] that cannot be distinguished by the 3' SAGE tag. Second, mRNA levels correlate poorly with protein abundance \[21\]. Third, posttranslational protein modifications can be functionally relevant; for example, glycosylation of CD8 has been demonstrated to affect thymocyte selection by influencing activation thresholds \[22\]. Fourth, T-cell activation involves re-localization of surface molecules leading to the formation of the immunological synapse, a supramolecular cluster at the contact zone between the antigen-presenting cell and the T cell \[23\]. These early events precede changes in gene expression.

Finally, it seems important to note that the T-cell surface is an abstraction. T cells comprise quite different subsets of cells at variable activation states. As pointed out by Evans *et al*. \[2\], the finding that most of the molecules on the T-cell surface appear to be known applies strictly only to a resting CD8^+^ T-cell clone *in vitro*. 'The T-cell surface - how well do we know it?' is an important question on our way into the post-genomic era of immunology. But even with complete lists of the genes expressed in certain T-cell subpopulations, much more needs to be learned about the regulation and complex interactions of the proteins they encode.
We are just scratching the (T cell) surface.

abstract: Chronic consumption of diets high in resistant starch (RS) leads to reduced fat cell size compared to diets high in digestible starch (DS) in rats and increases total and meal fat oxidation in humans. The aim of the present study was to examine the rate of lipogenesis in key lipogenic organs following a high RS or DS meal. Following an overnight fast, male Wistar rats ingested a meal with an RS content of 2% or 30% of total carbohydrate and were then administered an i.p. bolus of 50 μCi ^3^H~2~O either immediately or 1 hour post-meal. One hour following tracer administration, rats were sacrificed, a blood sample collected, and the liver, white adipose tissue (WAT), and gastrocnemius muscle excised and frozen until assayed for total ^3^H-lipid and ^3^H-glycogen content. Plasma triglyceride and NEFA concentrations and ^3^H-glycogen content did not differ between groups. In all tissues, except the liver, there was a trend for the rate of lipogenesis to be higher in the DS group than the RS group, which reached significance only in WAT at 1 h (p < 0.01). On a whole body level, this attenuation of fat deposition in WAT in response to a RS diet could be significant for the prevention of weight gain in the long-term.
author: Janine A Higgins; Marc A Brown; Leonard H Storlien
date: 2006
institute: 1University of Colorado Health Sciences Center, Denver, Colorado 80262, USA; 2Metabolic Research Centre, Faculty of Health & Behavioural Sciences, University of Wollongong, NSW 2522, Australia; 3AstraZeneca International, Mölndal, Sweden
references:
title: Consumption of resistant starch decreases postprandial lipogenesis in white adipose tissue of the rat

# Findings

It has been reported that chronic resistant starch (RS) feeding in rats causes a decrease in adipocyte cell size, a decrease in fatty acid synthase expression, and reduced whole-body weight gain relative to digestible starch (DS) feeding \[1,2\]. Additionally, in healthy adults, a single RS meal caused a substantial elevation in total and meal fat oxidation compared to a DS meal \[3\]. These data suggest that RS intake may influence postprandial lipid metabolism. The aim of the present study was to examine the rate of lipogenesis in key lipogenic organs acutely following a RS or DS meal.

Male Wistar rats (*Rattus norvegicus*) were obtained from the Animal Resource Center (Murdoch, Western Australia) and were housed in groups of three at the University of Wollongong Animal House. The rats were maintained at 22°C on a 12-h light/dark cycle (light cycle from 0700–1900 h), with free access to a standard laboratory chow (Young Stock Feed, Young, Australia) and water.
The study was conducted according to the National Health and Medical Research Council (NH&MRC; Australia) code of practice for the care and use of animals for scientific purposes. Test diets were prepared as previously described \[5\]. As a percentage of total energy, all diets contained 67% carbohydrate (57% starch; 10% sucrose), 22% protein, and 11% fat. All diets were identical in composition except for the percentage of RS and DS included in the starch component. For the RS diet, the starch used was a natural high-amylose starch, Hi-Maize 957™ (National Starch and Chemical Co), which is 60% amylose/40% amylopectin, versus waxy cornstarch, which is 0% amylose/100% amylopectin, for the DS diet. Diets were presented to the animals in an unprocessed, unpelleted form so the starches were not subjected to cooking or extrusion. The dietary fiber level of the starches was determined using the Association of Official Analytical Chemists (AOAC) enzymatic-gravimetric method. Total dietary fiber (dry solids) was lower (approximately 1%) for the DS starch than for the RS starch, which contained 19.3% fiber.

Two weeks prior to a meal test, rats were presented with the test diet ad libitum overnight to familiarize them with this diet and prevent neophobia on the test day. Twice over the next week, rats were starved for 5 h, then placed in testing cages and presented with a meal of similar size to that which they would receive during the meal test (approximately 0.4 g of diet). Rats had no exposure to test diet or disruption of normal eating habits for seven days prior to the meal test. For all tests, rats were starved overnight (12–15 hours). The following morning, rats were placed in 25 × 25-cm wire testing cages and allowed 30 min to acclimate before testing commenced. The rat was then presented with a test meal of 1.0 g carbohydrate/kg body weight (1.49 g diet/kg body weight). Rats were allowed 15 minutes to consume all food presented to them. Rats were injected with 50 μCi ^3^H~2~O either immediately following food consumption (1 h groups) or 1 h post-ingestion (2 h groups). One hour after ^3^H~2~O administration, rats were sacrificed by i.p. injection of 250 mg/kg pentobarbitone sodium. 400 μl of blood was collected and the gastrocnemius muscle, liver, epididymal white adipose tissue (WAT), and interscapular brown adipose tissue (BAT) were excised and immediately freeze-clamped. All tissues were stored at −80°C until assayed for lipid and glycogen content. Plasma was pipetted into fresh tubes and stored at −20°C for subsequent analysis of triglyceride and non-esterified fatty acid (NEFA) concentrations.

All plasma samples were assayed in duplicate. Plasma triglycerides were measured using a colorimetric assay kit supplied by Boehringer Mannheim (Germany), according to the manufacturer's instructions. Plasma NEFAs were estimated using a NEFA C kit (Wako Pure Chemicals Inc., Japan). All colorimetric assays were analyzed using a BioRad (model 550) microplate reader (Hercules, CA).

Estimation of total lipogenesis and glycogenesis was conducted as previously described \[4\]. Briefly, 100–200 mg of tissue was weighed out and completely digested in 1 M KOH by heating at 70°C for 40 min. Half of this tissue digest was spotted onto filter paper, air dried, and glycogen precipitated using ethanol. The remainder of the tissue digest was added to a chloroform/methanol mixture, agitated, and the lipid extracted using three hexane washes.
Pooled hexane layers from each tissue were dried under nitrogen, suspended in scintillant, and counted. Note that glycogen measurements were not performed on adipose tissue samples due to their very low total glycogen content.

Results are expressed as means ± SEM using Statistics for Windows (Analytical Software, Tallahassee, FL), unless otherwise indicated. Statistical differences between diet groups were determined using an ANOVA with repeated measures and a protected least significant difference (PLSD) test for comparison of means.

There was no difference in body weight between the groups used (330 ± 18 g and 328 ± 20 g for the DS and RS groups, respectively). All animals ate at least 87% of the diet presented to them. There was no difference in total test meal consumption between groups. Animals consumed 93% (0.376 g), 95% (0.384 g), 92% (0.372 g), and 96% (0.388 g) of food presented for the RS 1 h, DS 1 h, RS 2 h, and DS 2 h groups, respectively.

Plasma triglyceride and NEFA concentrations did not differ between groups (Figure 1). Triglyceride concentrations at baseline, 1 h, and 2 h were 0.39 ± 0.04, 0.63 ± 0.11, and 0.54 ± 0.11 mmol/L for the DS group vs 0.38 ± 0.05, 0.65 ± 0.07, and 0.67 ± 0.09 mmol/L for the RS group. NEFA data showed similar trends.

In all tissues, except the liver, there was a trend for the rate of lipogenesis to be higher in the DS group than the RS group. This trend reached significance only in WAT at 1 h (p < 0.01; Figure 2). There was no significant difference in the rate of glycogenesis in the liver or gastrocnemius in response to a DS or RS meal at any time point investigated (data not shown).

The amount of carbohydrate fed per meal in this study is comparable to that used in oral glucose tolerance tests by other investigators (e.g. \[4\]). The average meal size received during this study was 0.38 g. The small size of this meal facilitated relatively uniform and complete consumption of the meal by all animals. In addition, a small meal size reflects the nibbling nature of a rat's natural eating pattern and is therefore physiologically appropriate for use in a rodent model.

Over the 2 h postprandial period, there was approximately a 30% reduction in the amount of lipogenesis, exclusively in white adipose tissue, in response to a RS diet relative to a DS diet. This number is reflective of data in humans which show a 20–30% increase in postprandial fat oxidation following a RS meal \[3\]. As rats are 'nibblers', they will conceivably consume a meal of the size fed in this study many times over the course of a day. Therefore, the observed 30% reduction in lipid synthesis in response to a RS diet could reduce adipose tissue mass in the long-term.

Further studies need to be conducted to elucidate the mechanism/s behind the observed decrease in lipogenesis in WAT in response to RS consumption. As no difference in circulating plasma triglyceride or NEFA concentrations was observed, it is possible that differences in circulating insulin concentration play a role in this effect. We have previously shown that RS consumption, at the level fed in the present study, decreases circulating plasma insulin without changing plasma glucose concentration in rats \[6\].
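As an aside on the statistical workflow described above, the sketch below runs the omnibus test followed by a Fisher (protected) LSD comparison on synthetic group data; the numbers are invented for illustration and are not the study's measurements. With only two groups the LSD step coincides with a pooled t-test; the protection matters when three or more groups are compared.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Synthetic 3H-lipid incorporation values for WAT at 1 h (illustrative only).
ds = rng.normal(loc=100, scale=15, size=8)   # digestible-starch group
rs = rng.normal(loc=70, scale=15, size=8)    # resistant-starch group

F, p = stats.f_oneway(ds, rs)                # omnibus ANOVA
# Fisher's protected LSD: pairwise t-test with the pooled error variance,
# performed only because the omnibus test is significant.
if p < 0.05:
    n1, n2 = len(ds), len(rs)
    mse = (np.var(ds, ddof=1) * (n1 - 1)
           + np.var(rs, ddof=1) * (n2 - 1)) / (n1 + n2 - 2)
    t = (ds.mean() - rs.mean()) / np.sqrt(mse * (1 / n1 + 1 / n2))
    p_lsd = 2 * stats.t.sf(abs(t), df=n1 + n2 - 2)
    print(f"F={F:.2f}, omnibus p={p:.4f}, LSD p={p_lsd:.4f}")
```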
It could be argued that this attenuation in insulin concentration due to RS consumption is affecting WAT lipogenic enzyme activity, as it has previously been shown that these enzymes are more insulin responsive in WAT than in liver, which could explain the tissue specificity of the observed response.

Kabir et al \[2\] have previously reported that long-term feeding of a high RS diet reduces fat cell size relative to a high DS diet. Data presented here indicate that a high RS meal reduces total fat synthesis in white adipose tissue following a meal relative to a high DS meal. This decrease in lipid synthesis could indeed contribute to control of fat cell size in RS-fed rats in the long-term. Decreased lipogenesis selectively in WAT and a reduction in adipocyte cell size in response to RS feeding have broad implications for the treatment and prevention of obesity. Furthermore, increasing the RS intake of a typical Western diet would be easy to achieve through the consumption of natural foods with a high RS content and/or the consumption of commercially produced foods which are fortified with high RS starches, such as breads, pasta, and cereals. RS can even be easily incorporated into foods such as candy and pizza dough, making increased consumption an achievable possibility.

# Abbreviations

RS, resistant starch

DS, digestible starch

WAT, white adipose tissue

BAT, brown adipose tissue

NEFA, non-esterified fatty acid

# Competing interests

The author(s) declare that they have no competing interests.

# Authors' contributions

JH conceived of the study design and was responsible for overall study coordination, conducting rat meal tests/sacrifice, data analysis, and manuscript preparation. MB was responsible for conducting rat meal tests/sacrifice, statistical analyses, and contributed to manuscript preparation. LS conducted rat tissue extractions and contributed to the study design and manuscript preparation.

### Acknowledgements

Starches were donated by National Starch and Chemical Company, Australia. Vitamin and mineral mixes were a gift from Millmaster Feeds, Enfield, Australia.

abstract: # OBJECTIVE
To forecast the number of U.S. individuals aged <20 years with type 1 diabetes mellitus (T1DM) or type 2 diabetes mellitus (T2DM) through 2050, accounting for changing demography and diabetes incidence.
# RESEARCH DESIGN AND METHODS
We used a Markov modeling framework to generate yearly forecasts of the number of individuals in each of three states (diabetes, no diabetes, and death). We used 2001 prevalence and 2002 incidence of T1DM and T2DM from the SEARCH for Diabetes in Youth study and U.S. Census Bureau population demographic projections.
Two scenarios were considered for T1DM and T2DM incidence: *1*) constant incidence over time; *2*) for T1DM, yearly percentage increases of 3.5, 2.2, 1.8, and 2.1% for age-groups 0–4 years, 5–9 years, 10–14 years, and 15–19 years, respectively, and for T2DM, a yearly 2.3% increase across all ages.
# RESULTS
Under scenario 1, the projected number of youth with T1DM rises from 166,018 to 203,382 and with T2DM from 20,203 to 30,111, respectively, in 2010 and 2050. Under scenario 2, the number of youth with T1DM nearly triples from 179,388 in 2010 to 587,488 in 2050 (prevalence 2.13/1,000 and 5.20/1,000 \[+144% increase\]), with the greatest increase in youth of minority racial/ethnic groups. The number of youth with T2DM almost quadruples from 22,820 in 2010 to 84,131 in 2050; prevalence increases from 0.27/1,000 to 0.75/1,000 (+178% increase).
# CONCLUSIONS
A linear increase in diabetes incidence could result in a substantial increase in the number of youth with T1DM and T2DM over the next 40 years, especially those of minority race/ethnicity.
author: Giuseppina Imperatore; James P. Boyle; Theodore J. Thompson; Doug Case; Dana Dabelea; Richard F. Hamman; Jean M. Lawrence; Angela D. Liese; Lenna L. Liu; Elizabeth J. Mayer-Davis; Beatriz L. Rodriguez; Debra Standiford; Corresponding author: Giuseppina Imperatore. Received 6 April 2012 and accepted 4 September 2012.
date: 2012-12
references:
subtitle: Dynamic modeling of incidence, mortality, and population growth
title: Projections of Type 1 and Type 2 Diabetes Burden in the U.S. Population Aged <20 Years Through 2050

Diabetes is one of the most common and costly chronic pediatric diseases (1). The SEARCH for Diabetes in Youth study (SEARCH) estimated that in 2001 about 154,000 individuals in the U.S. aged <20 years were living with diabetes and that each year approximately 15,000 youth aged <20 years are being diagnosed with type 1 diabetes mellitus (T1DM) and 3,700 with type 2 diabetes mellitus (T2DM) (2,3). Assessing the future burden of diabetes in youth by diabetes type is crucial for implementing public health primary and secondary prevention programs and planning health care delivery services.

A number of studies have estimated the burden of diagnosed diabetes through 2050 in the U.S. (4,5). A limitation is that they were not able to separate the contribution of T1DM from that of T2DM to the projected diabetes burden. Although the majority of adults with diabetes have T2DM, the majority of youth with diabetes currently have T1DM. On the other hand, T2DM may be becoming more common in adolescents, especially among minority youth (2,3).

There is substantial variation in the incidence of T1DM and T2DM across the major racial/ethnic groups in the U.S. The incidence of T1DM is highest among non-Hispanic whites (NHWs) and lowest in American Indians (3). In contrast, T2DM disproportionally affects individuals from all racial/ethnic minority groups (3). Therefore, changes in the race/ethnicity distribution of the U.S. population will substantially impact the absolute number of individuals living with T1DM or T2DM.
This makes the need for diabetes type–specific projections even more compelling.

To overcome the limitations of previous studies and provide contemporary estimates of the national type-specific burden of diabetes in youth, we constructed a system of dynamic equations that incorporates diabetes prevalence and incidence, as well as birth, migration, and mortality estimates. These equations model the future burden of diabetes in U.S. youth aged <20 years through 2050. In addition, we perform sensitivity analyses to assess the impact of increases in incidence and/or changes in the risk of mortality separately for T1DM and T2DM.

# RESEARCH DESIGN AND METHODS

## Data sources

The data sources for this study include the U.S. Census Bureau (6) and the SEARCH study (2,3,7–11). Census data include estimates of the 2001 U.S. population by ages (0, 1, 2,…,19 years), race/ethnicity (NHW, non-Hispanic black \[NHB\], Hispanic, Asian and Pacific Islander \[API\], American Indian/Alaska Native \[AIAN\]), and sex, as well as projection estimates of births, deaths, and net migration by the same dimensions for the years 2002–2050. For each of these components of population change (fertility, mortality, and net migration), the census applied three different assumptions to forecast the future population size. We used the series using the middle assumption for each of these components, also designated as the middle series. SEARCH data include diabetes prevalence in 2001 and incidence from 2002 to 2007 collected from geographically defined populations in Ohio, Colorado, South Carolina, and Washington, as well as Indian Health Service beneficiaries from four American Indian populations, and enrollees in managed health care plans in California and Hawaii. SEARCH is a multicenter study that began in 2001 and is conducting population-based ascertainment of youth aged <20 years with clinically diagnosed, nongestational diabetes. Institutional review board(s) for each site approved the study protocol. A detailed description of the SEARCH study has been published elsewhere (2,3).

## Statistical analysis

### 2001 prevalence estimation.

We used Poisson regression to estimate T1DM and T2DM prevalence as a function of age (0,1,…,19 years), race/ethnicity (NHW, NHB, Hispanic, API, AIAN), and sex. The Bayesian information criterion was used to select the best fitting models (12). The Bayesian information criterion selects a model from a collection of possible nonnested models by maximizing the likelihood but with a penalty for higher dimensional models. Posterior predictive checking was used to assess model consistency with the data (13). We used the deviance function as a test measure and calculated the "Bayesian *P* value." Bayesian *P* values do not work like the more common frequentist *P* values; values between 0.1 and 0.9 indicate good model fit (14). Models were fit using Bayesian methods in WinBUGS (15). The final models included a cubic spline for age (knots at 0, 9, and 19 years), race/ethnicity, and sex. Posterior predictive checking yielded Bayesian *P* values equal to 0.39 and 0.22, indicating good model fit for T1DM and T2DM, respectively.
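A minimal sketch of the kind of Poisson rate model described above, fit to synthetic data: a quadratic in age stands in for the paper's cubic spline (knots at 0, 9, and 19 years), and a frequentist GLM fit stands in for the WinBUGS Bayesian fit. All data values are invented for illustration.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
age = np.repeat(np.arange(20), 50)                    # ages 0..19, 50 strata each
py = rng.uniform(1e4, 5e4, size=age.size)             # person-years at risk
true_rate = 1e-4 * np.exp(0.4 * age - 0.02 * age**2)  # incidence peaking near age 10
cases = rng.poisson(true_rate * py)                   # observed case counts

# Poisson GLM with person-years as exposure; the quadratic age terms are a
# simplified stand-in for the cubic age spline used in the actual analysis.
X = sm.add_constant(np.column_stack([age, age**2]))
fit = sm.GLM(cases, X, family=sm.families.Poisson(), exposure=py).fit()
print(fit.params)   # intercept and age coefficients on the log-rate scale
```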
Our estimates of the 2001 T1DM and T2DM prevalence were the means of the posterior distributions obtained from fitting the final models.

### 2002 incidence rate estimation.

From a population size of 30,549,412 person-years in 2002 through 2007, SEARCH identified 6,164 incident cases of T1DM and 1,534 incident cases of T2DM. We used Poisson regression to estimate T1DM and T2DM incidence as a function of age (0,1,…,19 years), race/ethnicity (NHW, NHB, Hispanic, API, AIAN), sex, and calendar year. We were primarily interested in incidence estimates for the year 2002. Including 6 years of data with random effects by year improved the year 2002 estimates by "borrowing strength" from the other years. The final models included a cubic spline for age (knots at 0, 9, and 19 years), race, sex, age by sex interactions, and random effects of calendar year. Posterior predictive checking yielded Bayesian *P* values equal to 0.29 and 0.12, indicating good model fit for T1DM and T2DM, respectively. Our estimates of the 2002 T1DM and T2DM incidence rates were the means of the posterior distributions obtained from fitting the final models.

### Projection model.

We constructed dynamic models consisting of systems of difference equations similar to models described previously (16,17). In these models, the U.S. population aged 0 to 19 years is modeled at 1-year intervals starting at year 2001 and ending at year 2050. Specifically, we defined numbers of individuals in various disease states (diabetes, no diabetes, and death). The mathematical details are presented in [Supplementary Appendix 1](http://care.diabetesjournals.org/lookup/suppl/doi:10.2337/dc12-0669/-/DC1).

### Sensitivity analyses.

We conducted sensitivity analyses by varying both the relative risks of death in youth with diabetes versus nondiabetic youth and the incidence rate projections for T1DM and T2DM. Two scenarios were considered for relative risks of death: *1*) the relative risks of death for youth in the age-group <20 years with T1DM or T2DM are equal to one, i.e., death rates by age, race/ethnicity, and sex for persons with and without diabetes are equal to the census projected death rates; *2*) death rates of youth with diabetes are higher than those without diabetes, with a relative risk equal to 1.5. Two scenarios for T1DM incidence rates were considered: *1*) constant incidence over time at 2002 levels (baseline scenario); *2*) yearly percentage increases of 3.5, 2.2, 1.8, and 2.1% by age-groups 0–4 years, 5–9 years, 10–14 years, and 15–19 years (as seen in a previous study from Colorado \[18\]), respectively, were applied to all 10 race-sex combinations. Correspondingly, two scenarios for T2DM incidence rates were considered: *1*) constant incidence over time at 2002 levels (baseline scenario); *2*) a yearly 2.3% increase applied uniformly to all ages (0–19 years) for each of the 10 race-sex combinations. This increase represents the overall increase of T1DM registered in the Colorado youth population (18).

# RESULTS

Our estimates of the 2002 T1DM and T2DM incidence rates by age and race/ethnicity from fitting the final models are presented in Fig. 1. The incidence of T1DM peaks around the age of 10 years and is highest among NHWs, followed by NHBs, Hispanics, APIs, and AIANs (Fig. 1*A*). The incidence of T2DM peaks around 14 years of age. AIAN youth have the highest incidence rate, followed by NHBs, Hispanics, APIs, and NHWs (Fig. 1*B*).
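The full system of difference equations is given in Supplementary Appendix 1 and is not reproduced in the text. The sketch below shows the general shape of such a three-state (no diabetes, diabetes, death) yearly update for a single illustrative stratum; aging, births, and migration, which the actual model takes from census projections, are deliberately omitted, and all rates are invented for illustration.

```python
import numpy as np

def project_stratum(pop, prev, inc0, inc_growth, mort, years=40):
    """Toy three-state yearly update for one age-race-sex stratum.
    inc0: starting incidence (per person per year); inc_growth: scenario-2
    style yearly proportional drift; mort: background death probability."""
    healthy = pop * (1 - prev)
    diabetic = pop * prev
    out = []
    for y in range(years):
        inc = inc0 * (1 + inc_growth) ** y        # drifting incidence
        new_cases = healthy * inc
        healthy = (healthy - new_cases) * (1 - mort)
        diabetic = (diabetic + new_cases) * (1 - mort)  # RR of death = 1
        out.append(diabetic)
    return np.array(out)

# Illustrative stratum: 100,000 youth, prevalence 0.27/1,000 (T2DM, 2010),
# incidence 10/100,000/yr growing 2.3% per year, mortality 0.02%/yr.
traj = project_stratum(1e5, 0.27e-3, 10e-5, 0.023, 2e-4)
print(round(float(traj[-1])))   # diabetic count after 40 years, toy stratum
```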
### Sensitivity analyses.

We conducted sensitivity analyses by varying both the relative risk of death in youth with diabetes versus nondiabetic youth and the incidence rate projections for T1DM and T2DM. Two scenarios were considered for the relative risk of death: *1*) the relative risks of death for youth aged \<20 years with T1DM or T2DM are equal to one, i.e., death rates by age, race/ethnicity, and sex for persons with and without diabetes are equal to the census-projected death rates; *2*) death rates of youth with diabetes are higher than those of youth without diabetes, with a relative risk equal to 1.5. Two scenarios for T1DM incidence rates were considered: *1*) constant incidence over time at 2002 levels (baseline scenario); *2*) yearly percentage increases of 3.5, 2.2, 1.8, and 2.1% for age-groups 0–4, 5–9, 10–14, and 15–19 years, respectively (as seen in a previous study from Colorado \[18\]), applied to all 10 race-sex combinations. Correspondingly, two scenarios for T2DM incidence rates were considered: *1*) constant incidence over time at 2002 levels (baseline scenario); *2*) a yearly 2.3% increase applied uniformly to all ages (0–19 years) for each of the 10 race-sex combinations. This increase represents the overall increase of T1DM registered in the Colorado youth population (18).

# RESULTS

Our estimates of the 2002 T1DM and T2DM incidence rates by age and race/ethnicity from fitting the final models are presented in Fig. 1. The incidence of T1DM peaks around the age of 10 years and is highest among NHWs, followed by NHBs, Hispanics, APIs, and AIANs (Fig. 1A). The incidence of T2DM peaks around 14 years of age. AIAN youth have the highest incidence rate, followed by NHBs, Hispanics, APIs, and NHWs (Fig. 1B).

We used these incidence rates in the projection model for each year from 2002 through 2050 in the baseline scenario, in which we assume constant incidence rates over time.

## Baseline scenario

Table 1 shows the projected number of youth with T1DM and T2DM by race/ethnicity for selected years. The model forecasts that the number of youth with T1DM will increase by 23%, from 166,018 in 2010 to 203,385 in 2050. This increase is driven primarily by minority youth, reflecting the absolute growth in their numbers projected by the U.S. Census. In 2010, NHWs represented 71% of all youth with T1DM, but by 2050 this proportion will decrease to 55%. Over the 40-year period, the number of Hispanic youth with T1DM is projected to increase 2.5-fold. The overall prevalence of T1DM remains largely unchanged, from 1.97/1,000 in 2010 to 1.80/1,000 in 2050, a decrease of 9%, reflecting the lower incidence of T1DM among youth from minority groups compared with NHW youth.

Table 1. Projections of the number of individuals aged \<20 years with T1DM or T2DM and prevalence estimates\* for selected years, by race/ethnicity under the baseline scenario†

The model estimated that 20,203 U.S. youth had T2DM in 2010. Youth of minority groups represented the majority of T2DM cases, while 25% were NHW. The number of youth with T2DM is projected to increase to 30,111 by 2050. As for T1DM, this increase is driven largely by growth in the population of minority youth. By 2050, Hispanics are estimated to represent 50% of U.S. youth aged \<20 years with T2DM. Between 2010 and 2050, the overall prevalence of T2DM may increase by 13%, from 0.24/1,000 to 0.27/1,000.

## Increased incidence scenario

Table 2 presents the results of the sensitivity analyses in which the incidence rates of T1DM and T2DM increase while the relative risk of death is equal to 1.0. Under this scenario, the model projects that the number of youth with T1DM will increase 3.3-fold, from 179,388 in 2010 to 587,488 in 2050. The number of youth with T1DM will increase 6.6-fold in Hispanics, 5.4-fold in APIs, 4.4-fold in AIANs, 3.0-fold in NHBs, and 2.5-fold in NHWs. The prevalence will increase by 144%, from 2.13/1,000 to 5.20/1,000, with the highest estimate still among NHW youth (7.04/1,000).

Table 2. Projections of the number of individuals aged \<20 years with T1DM or T2DM and prevalence estimates\* for selected years, by race/ethnicity under the increased incidence rate scenario†

Under the scenario of an annual 2.3% increase in the incidence of T2DM, the model indicates that there were 22,820 U.S. youth aged \<20 years with T2DM in 2010. By 2050, our model predicts that this number will almost quadruple, to 84,131, with Hispanics representing 50% and NHBs 27% of all youth with T2DM. On the basis of our projections, the prevalence will rise from 0.27/1,000 in 2010 to 0.75/1,000 in 2050, an increase of 178%. The prevalence will be highest in NHBs (1.63/1,000) and lowest in NHWs (0.28/1,000).
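A quick back-of-the-envelope check shows how the scenario's yearly increases compound over the 2010–2050 horizon; a 2.3% annual increase alone multiplies incidence roughly 2.5-fold over 40 years. The projected case counts do not scale by these factors alone, because population growth, cohort turnover, and the accumulation of incidence across ages also enter the model.

```python
# Compound growth of the scenario incidence rates over the 40-year horizon.
years = 2050 - 2010
scenarios = [("T1DM, ages 0-4", 3.5), ("T1DM, ages 5-9", 2.2),
             ("T1DM, ages 10-14", 1.8), ("T1DM, ages 15-19", 2.1),
             ("T2DM, all ages", 2.3)]
for label, annual_pct in scenarios:
    factor = (1 + annual_pct / 100) ** years
    print(f"{label}: incidence multiplied by {factor:.2f} between 2010 and 2050")
```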
For both T1DM and T2DM, the differences in the numbers of youth with diabetes from the baseline scenario are due to the increasing incidence rates over baseline.

Under both the baseline and the increasing incidence scenarios, setting the relative risk of death at 1.5 for youth with either T1DM or T2DM produced virtually identical results (data not shown).

# CONCLUSIONS

We have estimated the future burden of diabetes in youth by type in the major race/ethnic groups in the U.S., using the most recent population-based estimates of diabetes incidence and prevalence and taking into account demographic changes over time. Our model projected that over the next 40 years, at the current incidence rates, the number of youth with T1DM and T2DM may increase by 23% and 49%, respectively. However, if the incidence of T1DM or T2DM increases, there may be more than a threefold increase in the number of youth with T1DM and about a fourfold increase in the number of youth with T2DM, especially among minority youth.

Very little is known about effective prevention of T1DM, and more research is needed. However, T2DM can be prevented in high-risk adults. Additional research is needed to examine the most effective methods for T2DM prevention in youth and should address strategies applicable to obesity prevention and control, as well as strategies for youth at high risk for T2DM. The projected increase in the prevalence of T2DM should serve as a call to action so that by 2050 the actual number of affected youth will fall markedly short of our projections.

Because of the early age of onset and longer diabetes duration, children and adolescents are at risk for developing diabetes-related complications at a younger age. As these youth age, this profoundly affects their productivity, quality of life, and life expectancy and increases health care costs. Even in childhood, the medical expenditures of youth with diabetes are approximately 6.2 times those of youth without diabetes (19). The health care system and society as a whole will need to plan and prepare for the delivery of quality health care to meet the needs of the growing number of youth with diabetes. This may need to include the training of additional health care professionals to treat and manage children and adolescents with T1DM and T2DM.

Strengths of the current study include the use of contemporary population-based estimates of the prevalence and incidence of T1DM and T2DM from the SEARCH study for the major race/ethnic groups in the U.S. This enabled us to quantify the race/ethnicity–specific future diabetes burden. Prevalence and incidence estimates were based on physician diagnosis of T1DM or T2DM, and case definitions met consistent eligibility criteria (2,3). Moreover, physician diagnosis of diabetes type was in good agreement with the etiologic biochemical and clinical characteristics of the two major types of diabetes (20).

The projections have some limitations. First, the recent estimated increase in the incidence of T1DM is based on limited data, available from only one U.S. study, conducted in Colorado (18). The Colorado study found a slightly lower annual increase in T1DM incidence among youth than a large registry-based study conducted in 17 European countries (EURODIAB; overall yearly average increase 2.3% in Colorado vs. 3.9% in Europe) (18,21).
However, the pattern of the increase in Colorado was similar to that observed in Europe, with children younger than 5 years old experiencing the greatest relative increase. If the actual rate of increase in the U.S. is more similar to that observed in Europe, then our projections may underestimate the future burden of T1DM in the U.S. However, it should be noted that EURODIAB included only children aged 0–15 years and that the greatest relative increase in the incidence rate was observed in countries with low baseline incidence. Second, we applied the same rate of increase across all race/ethnic groups. The Colorado study population included only NHWs and Hispanics, and the overall rate of increase in Hispanics was slightly lower than that in NHWs (1.6 vs. 2.7% per year, respectively). The U.S. Census projections indicate that the proportion of the youth population of NHW race/ethnicity will diminish from 62% in 2001 to 41% in 2050 (22). Because of this demographic shift and the possibility that youth of race/ethnicities other than NHW may experience a smaller increase in incidence, the number of youth with T1DM could be lower than that estimated by our study under the increasing incidence scenario. Third, we assumed constant increases in T1DM incidence over time and did not account for the possible effect of yet-to-be-identified primary prevention strategies that may influence our predicted number of youth with T1DM. Given our current knowledge, the increased incidence scenario should therefore be interpreted with caution. However, we would like to point out that recent findings from Europe indicated a constant linear trend over a 15-year period (21) or, starting in the early 1990s, an even steeper increase (23).

Finally, because of the lack of population-based estimates of T2DM incidence trends in youth aged \<20 years, we used a yearly increase of 2.3% in our increasing incidence scenario, based on the overall increase of T1DM in the Colorado study (18). Among Pima Indian youth aged 5–14 years, a population at very high risk of developing T2DM (24), the incidence of T2DM increased almost sixfold between 1965 and 2003 (25). In Finnish adolescents and young adults (aged 15–39 years), the incidence of T2DM increased on average by 4.3% per year during the 10-year period from 1992 to 2002, while that of T1DM increased by 3.9% per year (26). Obesity is a major risk factor for the development of T2DM. Since the 1980s, obesity prevalence among U.S. children and adolescents has tripled; however, recent national data indicate that during the last decade obesity prevalence may be leveling off at 17% (27). If obesity remains stable for the next 40 years, it is plausible that the current T2DM incidence rate will remain steady. However, even under this scenario, the number of youth with T2DM may increase by 49%. On the other hand, implementation of interventions for the prevention of childhood obesity at the individual or population level may result in decreasing T2DM incidence over time (28,29).

In both scenarios in our study, increasing the relative risk of death to 1.5 did not affect our estimates. This might be partially explained by the very low number of diabetes-related deaths in this age-group (1.15 per million youths) (30).

Our projections suggest a shift in the proportional distribution of racial/ethnic groups among youth with T1DM. By 2050, about half of youth with T1DM will belong to minority race/ethnic groups.
This change may influence potential trends in clinical presentation, treatment patterns, and quality of care. Minority youth are more likely to be overweight or obese (27), which may lead to misdiagnosis as T2DM. Among SEARCH study participants, minority youth with T1DM were significantly more likely to have poor glucose control (glycated hemoglobin \>9%) than NHW youth (31). Minority youth with T1DM are also more likely to live in households with low income and parental education (7–11). This in turn may affect their access to and quality of health care (32,33). Because of the changing demographics of the youth population with T1DM, health care policies and delivery systems need to ensure that less advantaged youth receive appropriate care.

Our projections paint a serious picture of the future national diabetes burden in youth. Even if incidence remains at 2002 levels, the future numbers of youth with diabetes are projected to increase because of the population growth projected by the U.S. Census, resulting in increased health care needs and costs. Future planning should include strategies for implementing childhood obesity prevention programs and primary prevention programs for youth at risk for developing T2DM. Likewise, to prevent future human suffering and health care costs, effective interventions for the prevention of diabetes-related complications should be available to all youth with diabetes (34). At the same time, it is crucial to continuously monitor diabetes trends at the population level, as well as complications and quality of care among youth.

# Supplementary Material

###### Supplementary Data

###### News Release

###### Slide Set

## Acknowledgments

SEARCH for Diabetes in Youth is funded by the Centers for Disease Control and Prevention (PA 00097, DP-05-069, and DP-10-001) and supported by the National Institutes of Health, National Institute of Diabetes and Digestive and Kidney Diseases.

J.M.L. is employed by Kaiser Permanente Southern California. No other potential conflicts of interest relevant to this article were reported.

G.I. conceived the study, participated in study design and coordination, and wrote the first draft of the manuscript. J.P.B. developed and programmed the multistate dynamic models, participated in study design and coordination, and helped draft the manuscript. T.J.T. developed and programmed the incidence projection model, participated in study design and coordination, and helped draft the manuscript. D.C. participated in study design and coordination, assembled the SEARCH prevalence and incidence data for analysis, and commented on drafts of the manuscript. D.D., R.F.H., J.M.L., A.D.L., L.L.L., E.J.M.-D., and D.S. reviewed and edited the manuscript and contributed to discussion. B.L.R. reviewed the manuscript and contributed to discussion. G.I. is the guarantor of this work and, as such, had full access to all the data in the study and takes responsibility for the integrity of the data and the accuracy of the data analysis.
Parts of this study were presented in poster form at the 71st Scientific Sessions of the American Diabetes Association, San Diego, California, 24–28 June 2011.

The SEARCH for Diabetes in Youth study is indebted to the many youth and their families, and their health care providers, whose participation made this study possible.

# References

date: 2017
references:
title: Non-O blood groups associated with higher risk of heart attack

# Introduction

Having a non-O blood group is associated with a higher risk of heart attack, according to research presented recently at Heart Failure 2017 and the 4th World Congress on Acute Heart Failure.

Lead author Tessa Kole, a Master's degree student at the University Medical Centre Groningen, the Netherlands, said: 'It has been suggested that people with non-O blood groups (A, B, AB) are at higher risk for heart attacks and overall cardiovascular mortality, but this suggestion comes from case–control studies, which have a low level of evidence. If this was confirmed, it could have important implications for personalised medicine.'

The current study was a meta-analysis of prospective studies reporting on O and non-O blood groups, and incident cardiovascular events, including myocardial infarction (heart attack), coronary artery disease, ischaemic heart disease, heart failure, cardiovascular events and cardiovascular mortality.

The study included 1 362 569 subjects from 11 prospective cohorts, described in nine articles. There were a total of 23 154 cardiovascular events. The researchers analysed the association between blood group and all coronary events, combined cardiovascular events and fatal coronary events.

The analysis of all coronary events included 771 113 people with a non-O blood group and 519 743 people with an O blood group, of whom 11 437 (1.5%) and 7 220 (1.4%) suffered a coronary event, respectively. The odds ratio (OR) for all coronary events was significantly higher in carriers of a non-O blood group, at 1.09 \[95% confidence interval (CI) 1.06–1.13\].

The analysis of combined cardiovascular events included 708 276 people with a non-O blood group and 476 868 people with an O blood group, of whom 17 449 (2.5%) and 10 916 (2.3%) had an event, respectively. The OR for combined cardiovascular events was significantly higher in non-O blood group carriers, at 1.09 (95% CI 1.06–1.11).
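The crude odds ratio implied by these pooled counts can be checked directly, as in the sketch below; note that this naive pooled estimate (about 1.07) does not exactly reproduce the reported OR of 1.09, which comes from a meta-analysis that weights study-level estimates rather than pooling raw counts.

```python
# Crude (pooled-count) odds ratio and 95% CI for all coronary events;
# a sanity check, not the study's meta-analytic model.
from math import exp, log, sqrt

a, b = 11_437, 771_113 - 11_437   # non-O group: events, non-events
c, d = 7_220, 519_743 - 7_220     # O group: events, non-events

odds_ratio = (a / b) / (c / d)
se_log_or = sqrt(1 / a + 1 / b + 1 / c + 1 / d)   # Woolf's method
lo = exp(log(odds_ratio) - 1.96 * se_log_or)
hi = exp(log(odds_ratio) + 1.96 * se_log_or)
print(f"OR = {odds_ratio:.3f} (95% CI {lo:.3f}-{hi:.3f})")
```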
The analysis of fatal coronary events did not show a significant difference between people with O and non-O blood groups.

'We demonstrate that having a non-O blood group is associated with a 9% increased risk of coronary events and a 9% increased risk of cardiovascular events, especially myocardial infarction', said Ms Kole.

The mechanisms that might explain this risk are under study. The higher risk for cardiovascular events in non-O blood group carriers may be due to their greater concentrations of von Willebrand factor, a blood-clotting protein that has been associated with thrombotic events. Further, non-O blood group carriers, specifically those with an A blood group, are known to have higher cholesterol. And galectin-3, which is linked to inflammation and worse outcomes in heart failure patients, is also higher in those with a non-O blood group.

Ms Kole said: 'More research is needed to identify the cause of the apparent increased cardiovascular risk in people with a non-O blood group. Obtaining more information about risk in each non-O blood group (A, B and AB) might provide further explanations of the causes.'

She concluded: 'In future, blood group should be considered in risk assessment for cardiovascular prevention, together with cholesterol, age, sex and systolic blood pressure. It could be that people with an A blood group should have a lower treatment threshold for dyslipidaemia or hypertension, for example. We need further studies to validate if the excess cardiovascular risk in non-O blood group carriers may be amenable to treatment.'

# References

abstract: This study describes and validates a new method for metagenomic biomarker discovery by way of class comparison, tests of biological consistency and effect size estimation. This addresses the challenge of finding organisms, genes, or pathways that consistently explain the differences between two or more microbial communities, which is a central problem to the study of metagenomics.
We extensively validate our method on several microbiomes, and a convenient online interface for the method is provided online.
author: Nicola Segata; Jacques Izard; Levi Waldron; Dirk Gevers; Larisa Miropolsky; Wendy S Garrett; Curtis Huttenhower
date: 2011
institute: 1Department of Biostatistics, 677 Huntington Avenue, Harvard School of Public Health, Boston, MA 02115, USA; 2Department of Molecular Genetics, 245 First Street, The Forsyth Institute, Cambridge, MA 02142, USA; 3Department of Oral Medicine, Infection and Immunity, 188 Longwood Ave, Harvard School of Dental Medicine, Boston, MA 02115, USA; 4Microbial Sequencing Center, 7 Cambridge Center, The Broad Institute of MIT and Harvard, Cambridge, MA 02142, USA; 5Department of Immunology and Infectious Diseases, 665 Huntington Avenue, Harvard School of Public Health, Boston, MA 02115, USA; 6Department of Medicine, 75 Francis Street, Harvard Medical School, Boston, MA 02115, USA; 7Department of Medical Oncology, 44 Binney Street, Dana-Farber Cancer Institute, MA 02215, USA
references:
title: Metagenomic biomarker discovery and explanation

# Background

Biomarker discovery has proven to be one of the most broadly applicable and successful means of translating molecular and genomic data into clinical practice. Comparisons between healthy and diseased tissues have highlighted the importance of tasks such as class discovery (detecting novel subtypes of a disease) and class prediction (determining the subtype of a new sample) \[1-4\], and recent metagenomic assays have shown that human microbial communities can be used as biomarkers for host factors such as lifestyle \[5-7\] and disease \[7-10\]. As sequencing technology continues to develop and microbial biomarkers become increasingly easy to detect, the comparison of microbial communities is opening up clinical diagnostic and microbiological applications \[11,12\].

The human microbiome, consisting of the total microbial complement associated with human hosts, is an important emerging area for metagenomic biomarker discovery \[13,14\]. Changes in microbial abundances in the gut, oral cavity, and skin have been associated with disease states ranging from obesity \[15-17\] to psoriasis \[18\]. More generally, the metagenomic study of microbial communities is an effective approach for identifying the microorganisms or microbial metabolic characteristics of any uncultured sample \[19,20\]. Analyses of metagenomic data typically seek to identify the specific organisms, clades, operational taxonomic units, or pathways whose relative abundances differ between two or more groups of samples, and several features of microbial communities have been proposed as potential biomarkers for various disease states. For example, single pathogenic organisms can signal disease if present in a community \[21,22\], and increases and decreases in community complexity have been observed in bacterial vaginosis \[23\] and Crohn's disease \[8\]. Each of these different types of microbial biomarkers is correlated with disease phenotypes, but few bioinformatic methods exist to explain the class comparisons afforded by metagenomic data.

Identifying the most biologically informative features differentiating two or more phenotypes can be challenging in any genomics dataset, and this is particularly true for metagenomic biomarkers.
Robust statistical tools are needed to ensure the reproducibility of conclusions drawn from metagenomic data, which is crucial for clinical application of the biological findings. Related challenges are associated with high-dimensional data regardless of the data type or experimental platform; the number of potential biomarkers, for example, is typically much higher than the number of samples \\[24-26\\]. Metagenomic analyses additionally present their own specific issues, including sequencing errors, chimeric reads \\[27,28\\], and complex underlying biology; many microbial communities have been found to show remarkably high inter-subject variability. For example, large differences are detected even among the gut microbiomes of twins \\[29\\], and both human microbiomes and environmental communities are thought to be characterized by the presence of a long tail of rare organisms \\[30-32\\]. Moreover, simply identifying potential biomarkers without elucidating their biological consistency and roles is only a precursor to understanding the underlying mechanisms of microbe-microbe or host-microbe interactions \\[33\\]. In many cases, it is necessary to explain not just how two biological samples differ, but why. This problem is referred to as class comparison: how can the differences between phenotypes such as tumor subtype or disease state be explained in terms of consistent biological pathways or molecular mechanisms?\n\nA number of methods have been proposed for class discovery or comparison in metagenomic data. MEGAN \\[34\\] is a metagenomic analysis tool with recent additions for phylogenetic comparisons \\[35\\] and statistical analyses \\[36\\]. MEGAN, however, can only compare single pairs of metagenomes, as is also the case with STAMP \\[37\\], which does introduce a concept of 'biological relevance' in the form of confidence intervals. UniFrac \\[38\\] compares sets of metagenomes at a strictly taxonomic level using phylogenetic distance, while MG-RAST \\[39\\], ShotgunFunctionalizeR \\[40\\], mothur \\[41\\], and METAREP \\[42\\] all process metagenomic data using standard statistical tests (mainly *t*-tests with some modifications). Most methods for community analysis from an ecological perspective rely on unsupervised cluster analyses based on principal component analysis \\[43\\] or principal coordinate analysis \\[44\\]. These can successfully detect groups of related samples, but they fail to include prior knowledge of phenotypes or environmental conditions associated with the groups, and they generally do not identify the biological features responsible for group relationships. Metastats \\[45\\] is the only current method that explicitly couples statistical analysis (to assess whether metagenomes differ) with biomarker discovery (to detect features characterizing the differences) based on repeated *t* statistics and Fisher's tests on random permutations. However, none of these methods, even those offering nuanced analyses of metagenomic data, provide biological class explanations to establish statistical significance, biological consistency, and effect size estimation of predicted biomarkers.\n\nIn this work, we present the linear discriminant analysis (LDA) effect size (LEfSe) method to support high-dimensional class comparisons with a particular focus on metagenomic analyses. 
LEfSe determines the features (organisms, clades, operational taxonomic units, genes, or functions) most likely to explain differences between classes by coupling standard tests for statistical significance with additional tests encoding biological consistency and effect relevance. Class comparison methods typically predict biomarkers consisting of features that violate a null hypothesis of no difference between classes; we additionally detect the subset of features with abundance patterns compatible with an algorithmically encoded biological hypothesis and estimate the sizes of the significant variations. In particular, effect size provides an estimate of the magnitude of the observed phenomenon due to each characterizing feature; it is thus a valuable tool for ranking the relevance of different biological aspects and for directing further investigations and analyses. The introduction of prior biological knowledge into the method helps constrain the analysis and thus address the challenges traditionally associated with high-dimensional data mining. LEfSe thus aims to support biologists by suggesting biomarkers that explain most of the effect differentiating the phenotypes of interest (two or more) in comparative and hypothesis-driven biomarker discovery investigations. The visualization of the discovered biomarkers on taxonomic trees provides an effective means of summarizing the results in a biologically meaningful way, as this both statistically and visually captures the hierarchical relationships inherent in 16S-based taxonomies/phylogenies or in ontologies of pathways and biomolecular functions.

We validated this approach using data from human microbiomes, a mouse model of ulcerative colitis, and environmental samples, in each case predicting groups of organisms or operational taxonomic units that concisely differentiate the classes being compared. We further evaluated LEfSe using synthetic data, observing that it achieves a substantially better false positive rate compared to standard statistical tests, at the price of a moderately increased false negative rate (which can be adjusted as needed by the user). An implementation of LEfSe including a convenient graphical interface incorporated in the Galaxy framework \[46,47\] is provided online at \[48\].

# Results and discussion

LEfSe is an algorithm for high-dimensional biomarker discovery and explanation that identifies genomic features (genes, pathways, or taxa) characterizing the differences between two or more biological conditions (or classes) (Figure 1). It emphasizes statistical significance, biological consistency and effect relevance, allowing researchers to identify differentially abundant features that are also consistent with biologically meaningful categories (subclasses; see Materials and methods). LEfSe first robustly identifies features that are statistically different among biological classes. It then performs additional tests to assess whether these differences are consistent with respect to expected biological behavior; for example, given some known population structure within a set of input samples, is a feature more abundant in all population subclasses or in just one?
Specifically, we first use the non-parametric factorial Kruskal-Wallis (KW) sum-rank test \[49\] to detect features with significant differential abundance with respect to the class of interest; biological consistency is subsequently investigated using a set of pairwise tests among subclasses using the (unpaired) Wilcoxon rank-sum test \[50,51\]. As a last step, LEfSe uses LDA \[52\] to estimate the effect size of each differentially abundant feature and, if desired by the investigator, to perform dimension reduction.

We have specifically designed LEfSe for biomarker discovery in metagenomic data. We thus summarize our results here from applying the tool to 16S rRNA gene and whole genome shotgun datasets to detect bacterial organisms and functional characteristics differentially abundant between two or more microbial environments. These include body sites within human microbiomes (mucosal surfaces and aerobic/anaerobic environments), adult and infant microbiomes, inflammatory bowel disease status in a mouse model, bacterial and viral environmental communities, and synthetic data for quantitative computational evaluation.

## Taxa characterizing body sites within the human microbiome

Microbial community organization at multiple human body sites is an area of active current research, since both low- and high-throughput methods have shown both differences and overlaps among the microbiota of multiple body sites \[53,54\]. We examined these differences in the 16S-based phylometagenomic dataset from 24 individuals enrolled in the Human Microbiome Project \[13,55\]. A minimum of 5,000 16S rRNA gene sequences were obtained for 301 samples from 24 healthy subjects (12 male, 12 female) covering 18 body sites, including 6 main body site categories: the oral cavity (9 sub-sites sampled), the vagina (3 sub-sites sampled), the skin (2 sub-sites sampled), the retroauricular crease (2 sub-sites sampled), the nasal cavity (1 sample) and the gut (1 sample). We validated LEfSe by contrasting mucosal versus non-mucosal body site classes and by comparing three levels of aerobic environments (anaerobic, microaerobic, and aerobic). In both cases, the sub-sites within each class of body site were used as a biological subclass.

## Mucosal surfaces are colonized by diverse bacteria; non-mucosal microbiomes are strongly enriched for Actinobacteria

Our first analysis focused on differences in microbiota composition between mucosal and non-mucosal body sites. The oral cavity, gut, and vaginal sites were classified as sources of mucosal communities and the anterior fossa (skin), nasal cavity, and retroauricular crease as non-mucosal. Mucosal environments differ greatly from the other body sites, characterized primarily by interaction with the human immune system, oxidative challenge, and hydration \[56\].

LEfSe provides three main outputs (Figure 2), describing the effect sizes of differences observed among mucosal/non-mucosal communities, the phylogenetic distribution of these differences based on the Ribosomal Database Project (RDP) bacterial taxonomy \[57\], and the raw data driving these effects.
LEfSe detected 15 bacterial clades showing statistically significant and biologically consistent differences in non-mucosal body sites (Figure 2a).

The most differentially abundant bacterial taxa in non-mucosal body sites belong to phyla with prevalent aerobic members: Actinobacteria, Firmicutes, and Proteobacteria, including environmental organisms from the Betaproteobacteria and Gammaproteobacteria clades. Non-mucosal overrepresented genera include *Propionibacterium*, *Staphylococcus* (found exclusively in non-mucosal samples), *Corynebacterium*, and *Pseudomonas*. Plastids from plant organisms (chloroplasts) are also notably represented; the distribution of their associated taxa varies, with some limited to non-mucosal surfaces (environmental exposure and potentially cosmetic products) and others to the digestive tract (ingested food). No clades are consistently present in all mucosal body sites, demonstrating the β-diversity of these communities (that is, differences among their population structure), but many taxa within Actinobacteria, Bacillales, and several other clades are relatively abundant at all non-mucosal sites. The within-subject β-diversity at all phylogenetic levels is highlighted in Additional file 1, quantifying the extent to which distances among different mucosal body sites are larger than the equivalent distances among non-mucosal sites. This leads to a lack of taxa common to all mucosal body sites, and therefore no taxa are determined by LEfSe to be characteristic of the mucosa as a whole.

The Actinomycetales are usually the most abundant phylogenetic unit (order level) in non-mucosal communities, with percentages higher than 90% in several skin samples, at most 20% in the great majority of the oral mucosal samples, and substantially lower percentages in the vagina and gut (Figure 2c). From a quantitative viewpoint, the taxonomic order Actinomycetales makes up essentially all of the detected members of the phylum Actinobacteria, except in the vaginal site, which showed a substantial Bifidobacteriales presence. Bifidobacteriales themselves are not detected as differentially abundant between mucosal and non-mucosal body sites, since they are a feature only of the vaginal samples and not of all mucosal body sites. The contrast of many clades' abundance versus distribution is striking; for example, the genera *Alloscardovia*, *Parascardovia* and *Scardovia* are present in all body sites at very low abundances, while *Gardnerella* is overrepresented only in vaginal samples, with over three orders of magnitude difference in abundance. A similar commonality of distribution was found for the Bacillales at an even lower abundance. At the genus level, *Propionibacterium*, *Staphylococcus*, *Corynebacterium* and *Pseudomonas* are differentiated by both distribution and abundance. The *Staphylococcus* genus in particular is detected by LEfSe with a very high LDA score (more than five orders of magnitude), reflecting marked abundance in non-mucosal sites (mean 10%, 18% and 21% in the skin, retroauricular crease and anterior nares body sites, respectively) and consistently low abundance in mucosal sites (mean less than 0.001%).

## Classes with multiple levels: distinct aerobic, anaerobic, and microaerobic communities in the human microbiome

The roles of anaerobic metabolism in the commensal human microbiota have not yet been fully investigated due to the difficulty of studying these communities in culture.
We thus further investigated the aerobicity characteristics of human microbial communities at a high level by grouping body sites into three classes with distinct levels of available molecular oxygen. The high-O~2~ exposure class includes body sites directly and permanently exposed to oxygen: skin, anterior nares and retroauricular crease. The mid-O~2~ exposure class includes the oral and vaginal body sites, which can be directly, but not permanently, atmospherically exposed, and the low-O~2~ exposure class (the gut) is mainly anaerobic. The body sites included in the three classes may have distinguishing features other than oxygen exposure, and, in general, such confounding factors can cause features unrelated to aerobiosis to be detected as biomarkers. However, the LEfSe biological consistency step ensures that the detected biomarkers are characteristic of all the subclasses of a given class and with respect to all subclasses of the other classes. For example, the high abundance of a bacterial clade in the mouth due to an oral-specific niche is not detected as a biomarker unless the same niche is also present in the vaginal samples (the other body site in the mid-O~2~ class) and not present in any high-O~2~ or low-O~2~ single body sites. LEfSe will thus detect biomarkers more confidently connected with aerobiosis than traditional methods that do not incorporate subclass information. Moreover, LEfSe is specifically able to analyze ordinal classes with multiple levels, and in agreement with established microbiology, we observed specific microbial clades ubiquitous within and characteristic of each of these three environments, detailed as follows (Figure 2d).

LEfSe allows ordinal classes with more than two levels to be analyzed at two levels of stringency. The first requires significant taxa to differ between every pair of class values (that is, aerobicity in this example; see Materials and methods); the discovered biomarkers must accurately distinguish all individual classes (high-, mid-, and low-O~2~). In this example (Figure 2d; strict version), we detected 13 clades with LDA scores above 2, showing three distinct abundance levels. Alternatively, LEfSe can determine significant taxa differing in at least one (and possibly multiple) class value(s) (non-strict version); in other words, biomarkers that distinguish at least one individual class. Using this method (Figure 2e), we find 60 clades with LDA scores of at least 2.

Using either approach, each oxygen level is broadly characterized by a specific clade. The overall abundances of the Actinobacteria phylum are higher in body sites directly exposed to molecular oxygen, with several members of the Actinomycetales order colonizing the skin. Actinomycetales includes the *Propionibacterium* genus, which is highly abundant on the skin, low in moderate-O~2~ environments, and absent from the gut. The Lactobacillales (primarily Bacilli) are specific to moderate O~2~ exposure levels, with conversely lower abundance in the high-O~2~ exposure class, and are again absent from the gut. The Bacteroidaceae (particularly *Bacteroides*) are ubiquitous in anaerobic samples; interestingly, however, members of this family are more abundant under high oxygen availability (particularly in skin and retroauricular crease) than under medium oxygen availability, showing the niche diversity within the phylogenetic branching.
This is in agreement with observations that the microenvironment of many microbial consortia shows extreme biogeographical variation with respect to nutrients, metabolites, and oxygen availability \[58,59\].

## Bifidobacteria and additional clades are underrepresented in a mouse model of ulcerative colitis

Rodent models provide a uniquely accurate and tractable means of studying the gut microbiota, including the molecular and cellular mechanisms driving chronic intestinal inflammation \[60-63\]. In particular, mouse models of inflammatory bowel disease \[63\] facilitate a mechanistic evaluation of the contribution of the gut microbiota to the initiation and perpetuation of chronic intestinal inflammation, as occurs in human Crohn's disease and ulcerative colitis \[64\]. One host molecular mechanism known to maintain the balance between immune regulation and the commensal microflora is T-bet, a transcription factor expressed in many immune cell subsets. Its loss in the absence of an adaptive immune system results in a highly penetrant and aggressive form of ulcerative colitis \[65\] that is specifically dependent on and transmissible through the gut flora. We thus sought to investigate the characteristics of the fecal microbiota in a mouse model of spontaneous colitis that occurs in a colony of Balb/c *T-bet*^-/-^ × *Rag2*^-/-^ mice using 16S rRNA gene metagenomic data \[66,67\].

LEfSe was applied to the microbiota data of 20 *T-bet*^-/-^ × *Rag2*^-/-^ (case) and 10 *Rag2*^-/-^ (control) mice (dataset provided in Additional file 10), finding 19 differentially abundant taxonomic clades (α = 0.01) with an LDA score higher than 2.0 (Figure 3). These differentially abundant clades were consonant with both our prior 16S rRNA-based sequence analysis using complete linkage hierarchical clustering and quantitative real-time PCR-based experiments performed on the same fecal DNA samples \[67\]. More specifically, the marked loss of Bifidobacteriaceae and *Bifidobacterium* associated with *T-bet*^-/-^ × *Rag2*^-/-^ mice that we observed here may explain the positive responsiveness of this colitis to a *Bifidobacterium animalis* subsp. *lactis* fermented milk product, validated with low-throughput approaches \[67\].

At the family level, the *Rag2*^-/-^ enrichment of Bifidobacteriaceae, Porphyromonadaceae, and Staphylococcaceae and the *T-bet*^-/-^ × *Rag2*^-/-^ enrichment of Lachnospiraceae confirm our reports in \[68\] using culture-based and quantitative real-time PCR techniques. LEfSe's LDA score more informatively reorders these taxa relative to the *P*-values found for these families in our previous work, highlighting the Bifidobacteria and, interestingly, several clades within the Clostridia. These include the *Rag2*^-/-^-specific *Roseburia* and *Papillibacter* genera belonging to *T-bet*^-/-^ × *Rag2*^-/-^-specific families (Lachnospiraceae and Ruminococcaceae). The significant presence of *Metascardovia* (Bifidobacteriaceae) in *Rag2*^-/-^ mice is also interesting, as it may have a role similar to that of *Bifidobacterium* and because *Metascardovia* has been previously observed primarily in the oral cavity \[68\].
This analysis both highlights the agreement of LEfSe's effect size estimation with low-throughput confirmations and suggests additional clades to be further investigated experimentally.

## A comparison with current metagenomic analysis tools using viral and microbial pathways from environmental data

We applied LEfSe to the environmental data of \[69\], a dataset with the goal of characterizing the functional roles of viromes (that is, viral metagenomes) versus microbiomes (that is, bacterial metagenomes). This task was used in \[45\] to characterize the Metastats algorithm on the same raw data. Among the 29 high-level functional roles (including unclassified roles) in the subsystem hierarchy of the SEED \[70\] and NMPDR \[71\] frameworks, LEfSe identifies only the 'Nucleosides and nucleotides' subsystem to be strictly differentially abundant among all environmental subclasses, specifically with higher levels in viromes than microbiomes. This is an accurate characterization of exactly the protein function most commonly encoded in viral genomes, whereas bacterial genomes of course encode a wide range of less specifically enriched functionality. When LEfSe is relaxed to detect significant variations consistent for at least one, rather than all, environmental subclasses, we additionally determine the 'Respiration' subsystem to be significantly enriched in microbiomes with respect to viromes, likely reflecting the uniformly aerobic bacterial metabolism captured by these data.

In addition to the Nucleosides and nucleotides and Respiration subsystems, Metastats \[45\] reports five other high-level functional roles as differentially abundant (*P* = 0.001). However, when taking the subclass structure into account across the sampled environments, these additional differences show much less consistent variation. This is demonstrated in Figure 4, which reports histograms of raw data for these cases and the different results of LEfSe, Metastats and the KW test alone. Moreover, since the subsystem framework is hierarchical (three levels), LEfSe's results include a cladogram showing the significant differences on each level (see Figure 4 for a two-level cladogram, and Additional file 2 for a three-level cladogram).

Considering all three levels of SEED functional specificity, LEfSe reports 59 subsystems to be more abundant in microbial metagenomes and only 7 that are more abundant in viral metagenomes (Additional file 3). Bacterial genomes encode a much greater quantity and diversity of biomolecular functionality than most viral genomes, and these differences are thus to be expected. However, they also highlight a consideration specific to most metagenomic (and, more generally, ecological) analyses, which typically analyze relative abundances. A few very common subsystems in viromes (that is, Nucleosides and nucleotides) will force the relative abundance of all other subsystems to decrease, resulting in apparent under-abundance. The subsystems detected to be virus-specific may thus show this trend in part due to the normalization of abundances in each sample.
This issue is specific to neither LEfSe nor Metastats, however, and must be taken into account during interpretation of any relative abundance data, metagenomic or otherwise \[72\].

## Functional activity within the infant and adult microbiota indicates post-weaning microbial specialization

Just as LEfSe can determine whether organisms or pathways are differentially abundant among several metagenomic samples, it can also focus on individual enzymes or orthologous groups. Kurokawa *et al*. \[73\] analyzed 13 gut metagenomes from nine adults and four unweaned infants in terms of the functions of orthologous gene families. They originally did this by comparing the COGs \[74,75\] found in each metagenome to a reference database; later, White *et al*. \[45\] applied the Metastats algorithm to directly detect differences between infant and adult microbiomes. Using significance α values of 0.01 due to the low cardinality of the classes (in particular the infant class), LEfSe detected 366 COGs to be enriched in either adult or infant metagenomes, 17 of which have an LDA score higher than 3 (Additional file 4).

Among the 17 COG profiles with LEfSe scores higher than 3, 11 are also detected by Metastats. The six COGs not detected by Metastats (Additional file 5) are Outer membrane protein (COG1538) and Na^+^-driven multidrug efflux pump (COG0534), enriched in adults, and Transposase and inactivated derivatives (COG2801, COG2963), Transcriptional regulator/sugar kinase (COG1940) and Transcriptional regulator (COG1309), enriched in infants. All six COGs possess abundance profiles that are completely non-overlapping between infant and adult individuals (apart from COG1538, for which the lowest level in adults is slightly lower than the highest in infants) and are thus nominally quite discriminative. On the other hand, among the 192 COGs found by Metastats, 9 are not detected by LEfSe even at the lowest LDA score threshold (Additional file 6). All possess overlapping abundance values between the infant and adult classes (at least two, and often more, of the highest samples in the less abundant class overlap the putatively more abundant class). This lack of discriminatory power precludes LEfSe from highlighting these differences as significant between adults and infants, particularly given the low number of infant samples.

Intriguingly, LEfSe's distinct list of functional activities in the core infant and adult microbiomes is suggestive of 'generalist' microbial activity during early life and specialization over time \[76\]. In fact, inspecting the five differentially abundant COGs with the highest effect sizes for each class, we find for infants very high-level functional groups related to broad transcriptional regulation (COG1609, COG1940, COG1309 and COG3711). In adults, all five represent more specialized orthologous groups, including COG1629 (Outer membrane receptor proteins, mostly Fe transport), COG1595 (DNA-directed RNA polymerase specialized sigma subunit, sigma24 homolog), and COG4771 (Outer membrane receptor for ferrienterochelin and colicins). Since the number of differentially abundant COGs is very high (366), this observation was only highlighted at the top of the candidate biomarker list because of LEfSe's effect size quantification, which allows the most characteristic differences among classes to emerge.
For the same reason, we can easily confirm that sugar metabolism plays a crucial role in the infant gut and iron metabolism in adults, as already stated in \[45,73\]; the COGs with the highest LDA scores indeed possess sugar- and glucose-related functional activities for infants and iron-related functionality for adults.

## LEfSe achieves a very low false positive rate in synthetic data

We further investigated the ability of LEfSe to detect biomarkers using synthetic high-dimensional data (see Materials and methods for the description of the dataset) in comparison with the KW test alone (a non-parametric adaptation of analysis of variance (ANOVA)) and with Metastats \[45\]. The LDA effect size step of LEfSe is not considered here for simplicity, and the artificial data are detailed in Figure 5.

Theoretically, the settings of the first two experiments (Figure 5a,b) exactly match the application conditions for the KW test. The false positive rate (mean 2.5%, regardless of the distance between feature means and of the standard deviation of the normal distribution) is in fact consistent with the α value of 0.05, given that the negative features are half of the total. LEfSe behaved qualitatively very similarly to KW, but with a considerably lower false positive rate (less than 0.5% in the great majority of cases, against a mean value of 2.5%) and a higher false negative rate. In biology, false positives are often perceived as more damaging than false negatives \[77-79\], largely because it is undesirable to invest in expensive experimental follow-up of false positives, whereas in high-throughput settings a few true positives outweigh the false negatives that are left uninvestigated. With this motivation for minimizing false positives, we conclude that LEfSe performs at least as well as KW when no meaningful subclass structure is available. On the other hand, when subclasses can be identified within the classes and some of them do not agree with the trend among classes, LEfSe performs qualitatively and quantitatively much better than KW (Figure 5c). The false positive rates are in fact always substantially lower than with KW, whereas the false negatives are higher only for very noisy features. Metastats \[45\] achieves results very similar to KW (Additional file 7), with the same disadvantages relative to LEfSe.

# Conclusions

Gaining insight into the structure, organization, and function of microbial communities has been proposed as one of the major research challenges of the current decade \[80\], and it will be enabled by both experimental and computational metagenomic analyses. To this end, we have developed the LEfSe algorithm for comparative metagenomic studies, permitting the characterization of microbial taxa specific to an experimental or environmental condition, the detection of pathways and biological mechanisms over- or under-represented in different communities, and the identification of metagenomic biomarkers in mammalian microbiomes. LEfSe is shown here to be effective in detecting differentially abundant features in the human microbiome (characteristically mucosal or aerobic taxa) and in a mouse model of colitis.
A comparison with existing statistical methods and state-of-the-art metagenomic analyses of environmental, infant gut microbiome, and synthetic data shows that LEfSe consistently provides lower false positive rates and can effectively aid in explaining the biology underlying differences in microbial communities.\n\nThese findings demonstrate that a concept of class explanation including both statistical and biological significance is highly beneficial in tackling the statistical challenges associated with high-dimensional biomarker discovery \\[28,81,82\\]. Specifically, LEfSe determines features potentially able to explain the differences among conditions rather than the features that simply possess uneven distributions among classes. This is distinct from most current statistical approaches \\[45\\] and akin to the incorporation of biological prior knowledge that has proven highly successful in recent genome-wide association studies \\[83-85\\]. Moreover, particularly in (often noisy) metagenomic datasets, effect size can serve as an orthogonal measure to complement ranking biomarkers based on *P*-values alone. Differences between classes can be very statistically significant (low *P*-value) but so small that they are unlikely to be biologically responsible for phenotypic differences. On the other hand, a biomarker with a relatively large *P*-value (for example, 0.01) may correspond to a large effect size, with statistical significance diminished by technical noise. LEfSe investigates both aspects computationally by testing both the consistency and the effect size of differences in feature abundance among classes with respect to the structure of the problem. This is performed subsequently to standard statistical significance tests and is integrated in LEfSe by assessing biologically meaningful groups of samples among subclasses within each condition. This coupling of statistical approaches with biological consistency and effect size estimation alleviates possible artifacts or statistical inhomogeneity known to be common in metagenomic data, for example, extreme variability among subjects or the presence of a long tail of rare organisms \\[32,86\\]. Similarly, while multiple hypothesis corrected statistical significance speaks to the potential reproducibility of a result, estimation of effect size in high-dimensional settings is crucial for addressing biological consistency and interpretability.\n\nThe biology highlighted by these investigations speaks to the potential of metagenomics for both microbial ecology and translational applications. For example, certain bacterial clades are frequently detected as biomarkers even in diverse environments, suggesting that some species can adapt in surprisingly condition-specific manners. *Staphylococcus* and the Bacillales, for example, are discriminative for mucosal tissues, aerobic conditions, and murine colitis, whereas no Proteobacteria consistently characterize any of these conditions, even though they always represent a substantial portion of the communities. These observations may reflect extensive microenvironmental heterogeneity and the coexistence of generalist and specialist bacteria \\[87-89\\].\n\nIn addition to these insights into microbiology, metagenomic biomarkers, including the abundances of specific organisms, abundances of entire clades, or the presence\/absence of specific organisms, can serve to describe host phenotypes, lifestyle, diet, and disease as well \\[5-10\\]. 
If the depletion of *Bifidobacterium* species in ulcerative colitis proves to occur early in human disease etiology, this and comparable shifts in the microbiota have potential applications in the detection of human disorders \[90,91\], especially as shifts in some bacterial consortia can be detected easily and inexpensively. Oral microbial biomarkers, for example, can be easily acquired and analyzed with microarray chips targeted for bacterial profiling \[92\]. These appear particularly promising for clinical applications \[11\], as the microbial communities in the saliva seem to represent one potential proxy for other human microbiota \[93\]. Other important clinical applications of metagenomic analyses include probiotic treatments \[94,95\] and microbiome transplantation \[96-99\] for gastrointestinal diseases.

LEfSe, the computational approach to biomarker class comparisons detailed here, thus contributes to the understanding of microbial communities and guides biologists in detecting novel metagenomic biomarkers. The algorithm's effectiveness on real and synthetic data has been highlighted by several experiments in which we successfully characterized both host-associated microbiota and environmental microbiomes in multiple contexts. To support ongoing metagenomic analyses, we have implemented LEfSe as a user-friendly web application that can provide both raw data and publication-ready graphical results, including reports of detected microbial variation on taxonomic trees for visual and biological summarization. LEfSe is freely available online in the Galaxy workflow framework \[46,47\] at the following link \[48\].

# Materials and methods

The LEfSe algorithm is introduced in overview in the Results section, and Figure 6 illustrates in detail the format of the input (a matrix with *n* rows and *m* columns) and the three steps performed by the computational tool: the KW rank-sum test \[49\] on classes, the pairwise Wilcoxon test \[50,51\] between subclasses of different classes, and the LDA \[52\] on the relevant features.

Each of the *n* features is represented with a positive-valued vector containing its abundances in the *m* samples, and each sample is associated with values describing its class and, optionally, subclass and/or originating subject. The factorial KW rank-sum test is applied to each feature with respect to the class factor; the subclass and subject information are used as stratifying subgroups when present. Features that, according to the KW rank-sum test, do not violate the null hypothesis of identical value distribution among classes (default threshold α = 0.05) are not further analyzed. The pairwise Wilcoxon test is applied to retained features belonging to subclasses of different classes. For each feature, the pairwise Wilcoxon test is not satisfied if at least one comparison between subclasses has a *P*-value higher than the chosen α or if the sign of variation is not equal among all comparisons. For example, if a feature appears in samples from two classes with three subclasses each, all nine comparisons between subclasses in different classes must violate the null hypothesis, and all signs of the differences between medians must be consistent. The features that pass the pairwise Wilcoxon test are considered successful biomarkers. An LDA model is finally built with the class as dependent variable and the remaining feature values, subclass, and subject values as independent variables.
The fitted LDA model is then used to estimate the effect size of each retained biomarker, obtained by averaging the difference between class means computed on the unmodified feature values with the difference between class means along the first linear discriminant axis; this weights a feature's variability and its discriminatory power equally. The LDA score for each biomarker is the base-10 logarithm of this value after scaling into the interval \[1,10^6^\], and it is the ranking induced by these scores, rather than their absolute values, that expresses biomarker relevance. For robustness, the LDA step is additionally supported by bootstrapping (30 iterations by default) with subsequent averaging.

LEfSe's first two steps employ non-parametric tests because of the nature of metagenomic data. Relative abundances will, in most cases, violate the main assumption of typical parametric tests (a normally distributed population in each class), whereas non-parametric tests, being distribution-free, are much more robust to the underlying distribution of the data. The only assumption of the Wilcoxon and KW tests is that the distributions in each class are identically shaped, with possible differences in the medians. For example, a bimodal or multimodal abundance distribution of an organism violates the assumptions of parametric tests but not those of non-parametric tests, unless the number of peaks in the distribution (or, more generally, the shape of the distribution) also changes among classes. LDA is used for effect size estimation because our experiments determined it to estimate biological consistency more accurately than approaches such as differences in group means/medians or support vector machines (SVMs) \[100\]. A comparison between LDA and SVM approaches for effect size estimation on the murine model of ulcerative colitis (for which low-throughput biological validations of biomarkers are available in \[67\]) is reported in our supplemental material (Additional files 8 and 9) and shows the advantage of LDA in upranking features of potential biological interest. Theoretically, this is motivated by LDA's ability to find the axis of highest variance, whereas SVMs focus on the features' combined predictive power rather than on single-feature relevance. Since we are performing class comparison rather than class prediction, it is worth noting that the effect size estimation accuracy of an algorithm is not directly connected with its predictive ability (for example, SVM approaches are generally considered more accurate than LDA for prediction).

## Multiclass strategies

Comparisons with more than two classes require special strategies for applying the Wilcoxon and LDA steps, whereas the factorial KW test is already appropriate for this setting. Our multiclass strategy for the Wilcoxon test depends on the problem-specific strategy chosen by the user to define features differentially distributed among the *n* classes. In the most stringent strategy, we require that the abundance profiles of a feature are statistically significantly distinct among all *n* classes. This strategy, called 'strict', is implemented by requiring that all Wilcoxon tests between classes are significant. A more permissive strategy, called 'non-strict', considers a feature a biomarker if at least one class is significantly different from all the others; it therefore needs to satisfy only a subset of the Wilcoxon tests. A sketch contrasting the two strategies follows this paragraph.
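Here is a hypothetical helper (names and structure are ours, not LEfSe's) contrasting the two strategies, assuming `sig[i][j]` is True when the feature under test differs significantly between classes `i` and `j`:

```python
# Hypothetical illustration of the 'strict' vs 'non-strict' multiclass
# strategies; sig is a symmetric boolean matrix of pairwise significance.
def is_biomarker(sig, strategy="strict"):
    n = len(sig)
    if strategy == "strict":
        # every pair of classes must differ significantly
        return all(sig[i][j] for i in range(n) for j in range(n) if i != j)
    # 'non-strict': at least one class differs from all of the others
    return any(all(sig[i][j] for j in range(n) if j != i) for i in range(n))
```

With three classes, for example, 'strict' requires all three pairwise tests to be significant, whereas 'non-strict' is satisfied if any one class differs from the other two.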
Regardless of the strategy chosen, the LDA step always reports the highest score detected among all pairwise class comparisons.

## Subclass structure variants encoding different biological hypotheses

Different interpretations of the biomarker class comparison problem are implemented in LEfSe by modifying the requirements for pairwise Wilcoxon comparisons among subclasses. If classes contain subclasses that represent distinct strata, we test only comparisons within each identical subclass (Figure 4). For example, to assess the effect of a treatment on two sub-types of the same disease, we compare pre- and post-treatment levels within each subclass and require that the trend observed at the class level is significant independently for both subclasses. To implement this variant, LEfSe performs the Wilcoxon step comparing only subclasses with the same name. Alternatively, subclasses may represent covariates within which feature levels may vary but for which the problem does not dictate explicit stratification (Figure 2). In both settings, we explicitly require all the pairwise comparisons to reject the null hypothesis for a biomarker to be detected; thus, no multiple testing corrections are needed.

## Subclasses containing few samples

When few samples are available, non-parametric tests like the Wilcoxon have reduced power to detect differences. This can affect LEfSe when subclasses are very small, preventing the overall test from even rejecting the null hypothesis. For this reason, small subclasses should be avoided when possible, for example, by excluding them from the problem or by grouping together all subclasses with small cardinalities. For cases in which removing or grouping subclasses is not possible or would disrupt the biological consistency of the analysis, LEfSe substitutes the Wilcoxon test with a test of whether the subclass medians differ with the expected sign. The user can choose the subclass cardinality threshold at which this median comparison is substituted for the Wilcoxon test.

## Parameter settings

Except where stated otherwise in Results, all experiments in this study were run with LEfSe's α parameter for pairwise tests set to 0.05 for both the class and subclass tests, and with the threshold on the logarithmic LDA score set to 2.0. The stringency of these parameters is easily tunable (also through the web interface) and allows the user to detect biomarkers with lower *P*-values and/or higher effect sizes in order, for example, to prioritize additional biological experiments and validations. All LDA scores are determined by bootstrapping over 30 cycles, each sampling two-thirds of the data with replacement, with the influence of the LDA coefficients on the LDA score capped at three orders of magnitude.

## Data description

Except where stated otherwise, taxonomic abundances for 16S samples were generated from filtered sequence reads using the RDP classifier \[101\], with assignments below 80% confidence rebinned to 'uncertain'. For all the datasets described below, the final input to LEfSe is a matrix of relative abundances obtained from the read counts with per-sample normalization to sum to one (see the sketch below). Witten-Bell smoothing \[102\] was used to accommodate rare types but, owing to LEfSe's non-parametric approach, this has minimal effect on the discovered biomarkers and on the LDA score.
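A minimal sketch of this per-sample normalization (ours, for illustration; the Witten-Bell smoothing step is omitted for brevity):

```python
# Convert a (features x samples) matrix of read counts into relative
# abundances, normalizing each sample (column) to sum to one.
import numpy as np

def to_relative_abundance(counts):
    counts = np.asarray(counts, dtype=float)
    return counts / counts.sum(axis=0, keepdims=True)
```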
Because the algorithm, as is standard for non-parametric approaches, makes no specific assumptions about the statistical distribution of the data or the noise model, our biomarker discovery method also avoids most effects of sequence quality issues, as long as any sequencing biases are homogeneous among the different conditions.

## Human microbiome data

The 16S rRNA-based phylometagenomic dataset of the normal (healthy) human microbiome was made available through the Human Microbiome Project \[13\], and consists of 454 FLX Titanium sequences spanning the V3 to V5 variable regions obtained for 301 samples from 24 healthy subjects (12 male, 12 female) enrolled at a single clinical site in Houston, TX. These samples cover 18 different body sites grouped into 6 main body site categories: the oral cavity (9 sites), the gut (1 site), the vagina (3 sites), the retroauricular crease (2 sites), the nasal cavity (1 site) and the skin (2 sites). Detailed protocols used for enrollment, sampling, DNA extraction, 16S amplification and sequencing are available on the Human Microbiome Project Data Analysis and Coordination Center website \[103\], and are also described elsewhere \[55,56\]. In brief, genomic DNA was isolated using the Mo Bio PowerSoil kit \[104\] and subjected to 16S amplification using primers designed to incorporate the FLX Titanium adapters and a sample barcode sequence, allowing directional sequencing covering variable regions V5 to partial V3 (primers: 357F 5'-CCTACGGGAGGCAGCAG-3' and 926R 5'-CCGTCAATTCMTTTRAGT-3'). The resulting sequences were processed using a data curation pipeline implemented in mothur \[41\], which reduces the sequencing error rate to less than 0.06%, as validated on a mock community. To pass the initial quality control step, one mismatch to the sample barcode and two mismatches to the PCR amplification primers were allowed. Sequences with an ambiguous base call or a homopolymer longer than eight nucleotides were removed from subsequent analyses, as suggested previously \[105\]. Based on the supplied quality scores, sequences were trimmed at the first base call with a score below 20. All sequences were aligned using a NAST-based sequence aligner to a custom reference based on the SILVA alignment \[106,107\]. Sequences that were shorter than 200 bp, or that did not align to the anticipated region of the reference alignment, were removed from further analysis. Chimeric sequences were identified using the mothur implementation of the ChimeraSlayer algorithm \[108\]. Unique reads were classified with the MSU RDP classifier v2.2 \[58\] using the taxonomy proposed by \[109\], maintained at the RDP (RDP 10 database, version 6). The 16S rRNA reads are available in the Sequence Read Archive at \[110\].

## *T-bet*^-/-^ × *Rag2*^-/-^ and *Rag2*^-/-^ mouse data

*T-bet*^-/-^ × *Rag2*^-/-^ and *Rag2*^-/-^ mice, their husbandry, and their chow have been described in \[67\]. The animal studies and experiments were approved by, and carried out according to, Harvard University's Standing Committee on Animals as well as National Institutes of Health guidelines. Collection, processing, and extraction of DNA from fecal samples were performed as described in \[67\]. The V5 and V6 regions of the 16S rRNA gene were targeted for amplification and multiplex pyrosequencing with error-correcting barcodes.
Sequencing was performed using a Roche FLX Genome Sequencer at DNAVision (Charleroi, Belgium), and the data were preprocessed to remove sequences with low quality scores, leaving 7,579 ± 2,379 high-quality 16S reads per sample with a mean read length of 278 bp.

## Viral and microbial environmental data

We retrieved the 80 available metagenomes (42 viromes, 38 microbiomes) from the online supplemental material of \[69\]. We identified three environments containing at least seven samples and grouped them into coral, hyper-saline, and marine subclasses; a fourth subclass, 'other', groups all environments with few samples.

## Infant and adult microbiome data

The COG profiles of the nine adult and four unweaned infant microbiomes were obtained from the supplemental material of \[73\] and used unmodified in this study.

## Synthetic datasets

We built three collections of artificial datasets in order to compare LEfSe with KW and Metastats. All datasets have 1,000 features and 100 samples belonging evenly to two classes, with values sampled from Gaussian distributions. The samples in the two classes are further organized into four subclasses (two per class) of equal cardinality. Of the 1,000 features, 500 have different means across classes and should thus be detected as biomarkers (positive features); the other 500 are evenly distributed among classes, or among at least one subclass in both classes, and should not be detected as discriminative (negative features). The methods are evaluated by assessing the false positive rate (the fraction of negative features erroneously detected as biomarkers) and the false negative rate (the fraction of positive features that go undetected, that is, one minus the sensitivity). The three collections of datasets (shown graphically in Figure 5) differ in the distribution of values in the subclasses and in the mean/standard deviation of the normal distributions. (a) The subclasses in the same class have the same parameters (so the subclass organization is uninformative). Negative features all have μ = 10,000 and σ = 100, whereas for positive features one class has μ = 10,000 - t (σ = 100) and the other μ = 10,000 + t (σ = 100), where t is a parameter ranging from 1 to 150. The performance of all methods is assessed at regular steps of the t parameter (a generative sketch of this collection is given after the implementation note below). (b) Datasets in this collection are defined in the same way as in collection (a), but with t = 1,000 for all datasets and σ ranging from 1,000 to 10,000. (c) The negative features in the third collection have a different subclass structure. In particular, the second subclass of the first class has the same mean as the first subclass of the second class. The other two subclasses have different means (μ = 10,000 - t and μ = 10,000 + t, t = 1,000), but the feature is not considered differential because the difference is not consistent between subclasses. The positive features are defined in the same way as in collection (b).

## Implementation and availability of the method

LEfSe is implemented in Python and makes use of R statistical functions in the coin \[111\] and MASS \[112\] libraries through the rpy2 library \[113\], and of the matplotlib \[114\] library for graphical output.
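As promised above, here is a generative sketch of synthetic collection (a); this is our illustration under the stated parameters, not the authors' generator, and all names are ours:

```python
# Hedged sketch generating collection (a) of the synthetic benchmark:
# 1,000 Gaussian features x 100 samples in two equal classes; the first
# 500 features shift their class means by -t/+t, the other 500 do not.
import numpy as np

def make_collection_a(t, n_features=1000, n_samples=100,
                      mu=10_000, sigma=100, seed=0):
    rng = np.random.default_rng(seed)
    half = n_samples // 2
    labels = np.array([0] * half + [1] * half)   # class membership
    X = np.empty((n_features, n_samples))
    for i in range(n_features):
        if i < n_features // 2:   # positive feature: class means differ
            X[i, :half] = rng.normal(mu - t, sigma, half)
            X[i, half:] = rng.normal(mu + t, sigma, half)
        else:                     # negative feature: identical distribution
            X[i] = rng.normal(mu, sigma, n_samples)
    return X, labels
```

Sweeping `t` over its range (1 to 150) reproduces the setting under which the error rates of the methods were assessed.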
Within the Galaxy framework \[46,47\], LEfSe is provided with a graphical interface that allows the user to select parameters (the three primary stringency parameters, the multiclass setting, and other computational, statistical, and graphical preferences), to pipeline data between modules in a workflow framework, to generate publication-quality graphical outputs, and to combine these results with other statistical and metagenomic analyses. LEfSe is available at \[48\].

# Abbreviations

bp: base pair; KW: Kruskal-Wallis; LDA: linear discriminant analysis; LEfSe: linear discriminant analysis effect size; PCR: polymerase chain reaction; RDP: Ribosomal Database Project; SVM: support vector machine.

# Authors' contributions

NS and CH conceived the study; NS and LM implemented the methodology; NS, JI, LW, DG, WG, and CH analyzed the results; NS, JI, LW, DG, WG, and CH wrote the manuscript. All authors read and approved the manuscript in its final form.

# Supplementary Material

###### Additional file 1

**Supplementary Figure S6**. Histogram of within-subject β-diversity (community dissimilarity) between different mucosal (red) and non-mucosal (green) body sites.

###### Additional file 2

**Supplementary Figure S1**. Cladogram representing the differences between viromes and microbiomes on the subsystem framework.

###### Additional file 3

**Supplementary Figure S2**. Histogram of LDA logarithmic scores of biomarkers found by LEfSe comparing microbiomes and viromes within the subsystem framework.

###### Additional file 4

**Supplementary Figure S3**. Histogram of LDA logarithmic scores of COG biomarkers found by LEfSe comparing adult and infant microbiomes.

###### Additional file 5

**Supplementary Figure S4**. Functional features (COGs) that are discriminative for the comparison between adult and infant microbiomes according to LEfSe but are not detected by Metastats among the discriminant features with an LDA score higher than 3. If we consider all the discriminant features without a threshold on the LDA score, LEfSe identifies 366 COGs in total, 185 of which are not discriminant for Metastats.

###### Additional file 6

**Supplementary Figure S5**. Functional features (COGs) that are discriminative for the comparison between adult and infant microbiomes according to Metastats but not detected by LEfSe. Even where the medians and variances suggest the differences to be discriminative, there are always some microbiomes (at least two) that overlap between classes. This is due to the stringent α-value (0.01) set for the KW test in LEfSe and to the fact that we use non-parametric statistics (unlike Metastats). Notice, however, that even with a low α-value LEfSe detects many more biomarkers than Metastats (366 versus 192).

###### Additional file 7

**Supplementary Figure S9**. Comparison between LEfSe and Metastats using the synthetic data described in Figure 5 and in the Materials and methods. LEfSe was applied as detailed in the paper; for Metastats we used the default settings (that is, α = 0.05 and N~permutations~ = 1,000) and, as for LEfSe and KW, we disabled the per-sample normalization because the features are independent. **(a,b)** Metastats has a higher false positive rate (average 5%) than LEfSe (average below 0.5%) and a lower false negative rate.
**(c)** When the subclass information is meaningful (see Figure 5 for a representation of the dataset), LEfSe performs substantially better than Metastats in terms of both false positives and false negatives. Overall, on these synthetic data, Metastats achieves results very similar to those of KW (Figure 5), and neither can make use of additional information about the within-class structure; both therefore achieve poor results compared with LEfSe when such information is available.

###### Additional file 8

**Supplementary Figure S7**. SVM-based effect size estimation for the biomarkers found for the *Rag2*^-/-^ versus *T-bet*^-/-^ × *Rag2*^-/-^ comparison reported in Figure 3 of the manuscript. The LDA-based approach for assessing effect size (Figure 3) is closer to the biological follow-up experiments and is more visually consistent. The superiority of LDA over SVM approaches for effect size estimation is theoretically connected with LDA's ability to find the axis with the highest variance, whereas SVMs evaluate the combined predictive power of the features rather than single-feature relevance. It is worth noting that the effect size estimation accuracy of an algorithm is not directly connected with its predictive ability (SVM approaches are usually considered more accurate than LDA for prediction).

###### Additional file 9

**Supplementary Figure S8**. Comparison between the feature with the highest SVM-based effect size (*Papillibacter*, on the left), the feature with the highest LDA-based effect size (*Bifidobacterium*, in the center), and the Actinobacteria phylum (on the right). On visual analysis, *Bifidobacterium* shows a larger effect size, which is also evident from the ratios between class means, suggesting that LDA is a better option for effect size estimation than SVM approaches. As detailed in the manuscript, the relevance of *Bifidobacterium* has been experimentally validated. Moreover, the large difference between the score given by the SVM approach to Actinobacteria and those given to *Bifidobacterium* and *Papillibacter* is not consistent.

###### Additional file 10

***T-bet*^-/-^ × *Rag2*^-/-^ - *Rag2*^-/-^ dataset**. Input LEfSe file for the analysis of the ulcerative colitis phenotype in mice.

## Acknowledgements

We would like to thank the entire Human Microbiome Project consortium, including the four sequencing centers (the Broad Institute, Washington University, Baylor College of Medicine, and the J Craig Venter Institute), associated investigators from many additional institutions, and the NIH Office of the Director Roadmap Initiative.
This work was supported in part by grant DE017106 from the National Institute of Dental and Craniofacial Research (JI), by NIH grant AI078942 and the Burroughs Wellcome Fund (WSG), and by NIH grant 1R01HG005969 (CH).

date: 2017-12
title: Global response to malaria at crossroads

**29 NOVEMBER 2017 | GENEVA -** After unprecedented global success in malaria control, progress has stalled, according to the World malaria report 2017. There were an estimated 5 million more malaria cases in 2016 than in 2015. Malaria deaths stood at around 445 000, a similar number to the previous year.

"In recent years, we have made major gains in the fight against malaria," said Dr Tedros Adhanom Ghebreyesus, Director-General of WHO. "We are now at a turning point. Without urgent action, we risk going backwards, and missing the global malaria targets for 2020 and beyond."

The WHO Global Technical Strategy for Malaria calls for reductions of at least 40% in malaria case incidence and mortality rates by the year 2020. According to WHO's latest malaria report, the world is not on track to reach these critical milestones.

A major problem is insufficient funding at both domestic and international levels, resulting in major gaps in coverage of insecticide-treated nets, medicines, and other life-saving tools.

**Funding shortage**

An estimated US$ 2.7 billion was invested in malaria control and elimination efforts globally in 2016. That is well below the US$ 6.5 billion annual investment required by 2020 to meet the 2030 targets of the WHO global malaria strategy.

In 2016, governments of endemic countries provided US$ 800 million, representing 31% of total funding. The United States of America was the largest international funder of malaria control programmes in 2016, providing US$ 1 billion (38% of all malaria funding), followed by other major donors, including the United Kingdom of Great Britain and Northern Ireland, France, Germany and Japan.

**The global figures**

The report shows that, in 2016, there were an estimated 216 million cases of malaria in 91 countries, up from 211 million cases in 2015. The estimated global tally of malaria deaths reached 445 000 in 2016, compared with 446 000 the previous year.

While the rate of new malaria cases had fallen overall, the trend has levelled off since 2014 and has even reversed in some regions. Malaria mortality rates have followed a similar pattern.

The African Region continues to bear an estimated 90% of all malaria cases and deaths worldwide.
Fifteen countries – all but one in sub-Saharan Africa – carry 80% of the global malaria burden.

"Clearly, if we are to get the global malaria response back on track, supporting the most heavily affected countries in the African Region must be the primary focus," said Dr Tedros.

**Controlling malaria**

In most malaria-affected countries, sleeping under an insecticide-treated bednet (ITN) is the most common and most effective way to prevent infection. In 2016, an estimated 54% of people at risk of malaria in sub-Saharan Africa slept under an ITN compared to 30% in 2010. However, the rate of increase in ITN coverage has slowed since 2014, the report finds.

Spraying the inside walls of homes with insecticides is another effective way to prevent malaria. The report reveals a steep drop in the number of people protected from malaria by this method – from an estimated 180 million in 2010 to 100 million in 2016 – with the largest reductions seen in the African Region.

The African Region has seen a major increase in diagnostic testing in the public health sector: from 36% of suspected cases in 2010 to 87% in 2016. A majority of patients (70%) who sought treatment for malaria in the public health sector received artemisinin-based combination therapies (ACTs) – the most effective antimalarial medicines.

However, in many areas, access to the public health system remains low. National-level surveys in the African Region show that only about one third (34%) of children with a fever are taken to a medical provider in the public health sector.

**Tackling malaria in complex settings**

The report also outlines additional challenges in the global malaria response, including the risks posed by conflict and crises in malaria endemic zones. WHO is currently supporting malaria responses in Nigeria, South Sudan, Venezuela (Bolivarian Republic of) and Yemen, where ongoing humanitarian crises pose serious health risks. In Nigeria's Borno State, for example, WHO supported the launch of a mass antimalarial drug administration campaign this year that reached an estimated 1.2 million children aged under 5 years in targeted areas. Early results point to a reduction in malaria cases and deaths in this state.

**A wake-up call**

"We are at a crossroads in the response to malaria," said Dr Pedro Alonso, Director of the Global Malaria Programme, commenting on the findings of this year's report. "We hope this report serves as a wake-up call for the global health community. Meeting the global malaria targets will only be possible through greater investment and expanded coverage of core tools that prevent, diagnose and treat malaria.
Robust financing for the research and development of new tools is equally critical."

Available from: 

date: 2018
references:
title: An egg a day could significantly reduce CVD risk

People who consume an egg a day could significantly reduce their risk of cardiovascular disease (CVD) compared with eating no eggs, suggests a study carried out in China. CVD is the leading cause of death and disability worldwide, including in China, mostly owing to ischaemic heart disease and stroke (including both haemorrhagic and ischaemic stroke).

In contrast to most Western countries, where ischaemic heart disease is the leading cause of premature death, in China stroke is the leading cause, followed by heart disease. Although ischaemic stroke accounts for the majority of strokes, the proportion of haemorrhagic stroke in China is still higher than that in high-income countries.

Eggs are a prominent source of dietary cholesterol, but they also contain high-quality protein, many vitamins and bioactive components such as phospholipids and carotenoids.

Previous studies of the associations between egg consumption and health have been inconsistent, and most found no significant association between egg consumption and coronary heart disease or stroke. A team of researchers from China and the UK, led by Prof Liming Li and Dr Canqing Yu from the School of Public Health, Peking University Health Science Centre, therefore set out to examine the associations between egg consumption and cardiovascular disease, ischaemic heart disease, major coronary events, haemorrhagic stroke and ischaemic stroke.

They used data from the China Kadoorie Biobank (CKB) study, an ongoing prospective study of around half a million (512 891) adults aged 30 to 79 years from 10 different geographical areas in China. The participants were recruited between 2004 and 2008 and were asked about the frequency of their egg consumption. They were followed up to determine their morbidity and mortality.

For the new study, the researchers focused on 416 213 participants who were free of prior cancer, CVD and diabetes. From that group, at a median follow-up of 8.9 years, a total of 83 977 cases of CVD and 9 985 CVD deaths were documented, as well as 5 103 major coronary events. At the start of the study period, 13.1% of participants reported daily consumption of eggs (usual amount 0.76 eggs/day) and 9.1% reported never or very rare consumption of eggs (usual amount 0.29 eggs/day).

Analysis of the results showed that, compared with people not consuming eggs, daily egg consumption was associated with a lower risk of CVD overall.
In particular, daily egg consumers (up to one egg/day) had a 26% lower risk of haemorrhagic stroke, a 28% lower risk of haemorrhagic stroke death and an 18% lower risk of CVD death.

In addition, a 12% reduction in the risk of ischaemic heart disease was observed for people consuming eggs daily (estimated amount 5.32 eggs/week), compared with the 'never/rarely' consumption category (2.03 eggs/week).

This was an observational study, so no firm conclusions can be drawn about cause and effect, but the authors noted that their study had a large sample size and took into account established and potential risk factors for CVD.

The authors concluded: 'The present study finds that there is an association between a moderate level of egg consumption (up to 1 egg/day) and a lower cardiac event rate. Our findings contribute scientific evidence to the dietary guidelines with regard to egg consumption for the healthy Chinese adult.'

abstract: The plant chloroplast originated, through endosymbiosis, from a cyanobacterium, but the genomic legacy of cyanobacterial ancestry extends far beyond the chloroplast itself, and persists in organisms that have lost chloroplasts completely.
author: John A Raven; John F Allen
date: 2003
institute: 1Division of Environmental and Applied Biology, University of Dundee, Dundee DD1 4HN, UK; 2Department of Plant Biochemistry, Center for Chemistry and Chemical Engineering, Box 124, Lund University, SE-221 00 Lund, Sweden; Correspondence: John A Raven. E-mail: email@example.com
references:
title: Genomics and chloroplast evolution: what did cyanobacteria do for plants?

# The endosymbiont hypothesis is mainstream

Chloroplasts, the sites of photosynthesis within plant cells, comprise a prominent and well-known class of plastids, subcellular organelles with diverse, specialist functions in plant and algal cells. Mereschkowsky \[1,2\] is widely recognized as having written the first clear exposition of the hypothesis that plastids are derived from endosymbiotic cyanobacteria, then known as blue-green algae. Initially greeted with skepticism or even derision, Mereschkowsky's 1905 hypothesis gained support from electron microscopical and biochemical studies which showed that plastids contain DNA, RNA and ribosomes, supplying a structural and biochemical basis for non-Mendelian, cytoplasmic inheritance of plastid-related characters \[3\].
Subsequent molecular genetic studies have demonstrated the ubiquity of plastid genomes and confirmed that their replication, transcription and translation closely resemble those of (eu)bacteria.

Molecular phylogenetic studies now make it abundantly clear that the closest bacterial homologs of plastids are indeed cyanobacteria \[4\], supporting earlier conclusions from the comparative biochemistry of photosynthesis. Only cyanobacteria and chloroplasts have two photosystems and split water, to make oxygen, as a source of reducing power. But it has long been clear that many of the proteins needed for plastid functions, including photosynthesis, are now encoded in the nuclear genome and arrived there during evolution by the wholesale uptake of cyanobacteria, including their genomes, followed by gene transfer into the nucleus \[5\]. Recent advances in genomics have greatly enhanced our understanding of the evolution of plastids, allowing us to address specific questions such as which genes were moved or retained, and why. It also becomes possible to see clearly the algal ancestry of cells that have vestigial and otherwise unrecognizable plastids, and even to discern the unmistakable genomic footprint of plastids long lost from organisms one might never imagine to have descended from plants.

Molecular genetic studies of plastid genomes show that they encode only 60-200 proteins, while perhaps as many as 5,000 nuclear-coded gene products are targeted to plastids \[6\]. From complete sequences it is known that each cyanobacterial genome codes for at least 1,500 proteins, and cyanobacterial genomes are therefore at least an order of magnitude larger than plastid genomes. It is perhaps surprising that the size of the proteome of a free-living cyanobacterium is not greatly different from that of a subcellular organelle. Genomic studies have been very important in showing the evolutionary fate of the cyanobacterial genes that originated from the endosymbiotic pre-plastids. The genes in pre-plastids were either retained, lost, or transferred to the nucleus. The process of transfer of genes to the nucleus would have involved duplication of each plastid gene, with a nuclear copy of the gene becoming able to produce a functional product in the cytosol or, with appropriate targeting sequences, in other compartments.

# The fates of endosymbiont genes

An important recent analysis by Martin *et al.* \[6\] has put limits on the number of genes in the nucleus of *Arabidopsis thaliana* that derive from the plastid ancestor. Previous analyses of limited portions of the *A. thaliana* nuclear genome suggested that 800-2,000 genes from the plastid ancestor were transferred to the nucleus. The analysis by Martin *et al.* \[6\] was based on comparison of the whole nuclear genome of *A. thaliana* with the whole genomes of three cyanobacteria (*Nostoc punctiforme*, *Prochlorococcus marinus* and *Synechocystis* sp. PCC 6803), 16 other prokaryotes, and *Saccharomyces cerevisiae* (yeast). The analysis was restricted to the 9,368 *A. thaliana* gene products that are sufficiently conserved for the comparison of primary sequences. Of these, the greatest number of similarities were detected with the yeast nuclear genome; these common genes were presumably inherited by *Arabidopsis* from the host cell that acquired the plastid(s) \[6\]. The second most numerous class of genes in the *Arabidopsis* nuclear genome are those directly homologous to cyanobacterial genes.
A decreasing number of similarities is found in Gram-positive bacteria, non-proteobacterial Gram-negative bacteria, proteobacteria, and least of all in archaebacteria \[6\]. Extrapolating the data from the 9,368 conserved proteins to the total of 24,990 non-redundant nuclear genes of *Arabidopsis* gives a total of some 4,500 genes, or 18% of the nuclear genes, that came from the cyanobacterial ancestor of the plastids. More than half of these are not targeted back to the plastids but to other cell compartments (including the secretory pathway) \[6\]. The protein products of many nuclear genes that were not acquired from the plastid ancestor are now targeted to the plastid. The genes within the nuclear genomes that originated from the plastid ancestor cover all of the functional categories defined by The *Arabidopsis* Genome Initiative \[7\].

The cyanobacterial ancestor of the plastids was, relative to the three cyanobacteria with completed genome sequences that were examined by Martin *et al.* \[6\], closer to *N. punctiforme* than to *P. marinus* or *Synechocystis* sp. Although three genomes is not a large sample size, it is of interest that *N. punctiforme* is a diazotroph, so the plastid ancestor could also have been a nitrogen-fixer. Were early plastids perhaps also able to fix atmospheric nitrogen?

# Losing chloroplasts but keeping cyanobacterial genes

The work of Brinkman *et al.* \[8\] re-examines the processes that have led to the high proportion of proteins of a bacterial human pathogen, *Chlamydia*, that are similar to those of plants. This similarity was formerly attributed to horizontal gene transfer from plants, or plant-like host organisms, to the bacterium. Brinkman *et al.* \[8\] point out that such gene transfer is unlikely since all extant Chlamydiaceae are obligate intracellular parasites of animals. Instead, the analysis by Brinkman *et al.* \[8\] shows that the majority of the plant-like genes in *Chlamydia* are, in plant cells, targeted to the chloroplast. But the conclusion that this targeting of proteins to chloroplasts is necessarily a function of their origin from a plastid ancestor is not always sound. Furthermore, Martin *et al.* \[6\] did not find much similarity between *Chlamydia* and *Arabidopsis* (see Figure 1 in \[6\]). Clearly, further investigation is needed.

Figure 1 illustrates the various endosymbiotic events described here. Amongst eukaryotes, the apicomplexan parasitic pathogens *Toxoplasma* and *Plasmodium* have curious cytoplasmic organelles bounded by three membranes, namely 'apicoplasts', which genome sequencing has established as *bona fide* plastids complete with a characteristic inverted repeat within the plastid genome \[9\]. The presence of three membranes, as is found around the chloroplasts of dinoflagellates and euglenoids, betrays an ancestry from a secondary symbiosis, as does the presence of four membranes surrounding the plastids of, for example, photosynthetic heterokonts (a diverse group, some of which are algae) such as diatoms and brown algae. The function of the apicoplast is not clearly understood, but one suggestion is that it is indispensable for the synthesis of iron-sulfur proteins. The function of the residual plastid genome is even less clear, and it provides a test case for any theory for the function of organellar genes.
Although *Plasmodium* has a plastid genome that some think is on the way out, trypanosomes, which are also non-photosynthetic, have no plastid or plastid genome at all, but are now clearly seen to be former euglenoids because of the remaining genes for a variety of plant-like enzymes, including sedoheptulose-1,7-bisphosphatase (otherwise found only in the Benson-Calvin cycle) \[10,11\].

# *Arabidopsis* is not the only plant

The article by Martin *et al.* \[6\] uses chloroplast genomics to infer plastid phylogeny, as well as gene loss and gene transfer, for 16 sequenced plastid genomes. An important conclusion from this analysis is that two secondary endosymbiotic events involving a red alga are needed to explain the occurrence of plastids in cryptophytes (algae with phycobilin pigments in the thylakoid lumen rather than in particles on the thylakoid membrane, as in cyanobacteria and red algae; an example is *Guillardia*) and heterokonts (the diatom *Odontella*). This contrasts with the arguments of Cavalier-Smith (recently set out in \[12\]) for a single endosymbiotic event, based on evidence such as the replacement of the glyceraldehyde-3-phosphate dehydrogenase gene derived from the red algal plastid with one of host origin in both cases.

Another recent article \[13\] deals with genome-based phylogenies of plastids; 19 complete chloroplast genomes are studied using a new computational method, reaching conclusions broadly similar to those of Martin and co-workers \[6\]. This work also allows novel functional assignments for a number of chloroplast open reading frames. The functional implications of chloroplast genomics, with special reference to experimental opportunities and 'directional genetics' in *Arabidopsis thaliana*, have recently been reviewed by Leister \[14\].

An important question relating to the evolution of plastid genomes in higher plants is the timing of the changes in the plastid genome in the streptophyte clade (made up of charophytes, a group of green algae, plus embryophytes, or higher plants), which evolved more than 500 million years ago. From the unicellular flagellate *Mesostigma*, which is either a basal chlorophyte or lies at the split between Chlorophyta and Streptophyta, to the embryophytes, of which the liverwort *Marchantia* is the most basal to have been sequenced, the changes are gene losses (including transfers to the nucleus), scrambling of gene order, and intron insertion \[15\].

An important contribution to bridging the evolutionary gap between *Mesostigma* and *Marchantia* is the work of Turmel *et al.* \[15\] on a member of the charophytes *sensu stricto* (that is, excluding *Mesostigma*), *Chaetosphaeridium globosum*. Before the work of Turmel *et al.* \[15\], only fragmentary data addressed the issue of gene content and organization in the charophytes *sensu stricto*. The complete plastid genome sequence of *Chaetosphaeridium globosum* \[15\] shows that most of the embryophyte characteristics were present in this charophyte alga, so that the major changes had occurred between the branch to *Mesostigma* and that to *Chaetosphaeridium*. The common features shared by the plastid DNA of *Chaetosphaeridium* and of embryophytes include the gene content, the intron composition, and the gene order.
Thus, the *Chaetosphaeridium* chloroplast genome has 124 genes (compared with 136 in *Mesostigma* and 110-120 in embryophytes), one Group I intron (there are none in *Mesostigma* and one in embryophytes), 16 *cis*-spliced Group II introns (none in *Mesostigma* and 18-19 in embryophytes) and one *trans*-spliced Group II intron (none in *Mesostigma*, one in embryophytes). Genome size (118-155 kilobases) is relatively constant among *Mesostigma*, *Chaetosphaeridium* and higher plant plastids. By contrast, the mitochondrial genome of *Chaetosphaeridium* is closely similar to that of *Mesostigma* in terms of size (57 kb and 42 kb, respectively), gene content and, perhaps, intron content. *Chaetosphaeridium* has a much smaller mitochondrial genome than the obese mitochondrial genomes of *Marchantia* (187 kb) or *Arabidopsis* (367 kb), which also carry many more *cis*-spliced Group II introns (18-25 rather than two). The apparently different tempo of evolution in mitochondria and plastids of the charophytes deserves further investigation. An important question in the functional genomics of the plastid is what determines which genes essential for plastid function are retained in the plastid genome. Higher plant plastid genomes have slightly fewer genes than the plastids of the charophytes *sensu lato* (that is, the charophytes *sensu stricto* plus *Mesostigma*).

# Cells within cells

One requirement of the endosymbiont hypothesis is whole-scale gene transfer from the chloroplast to the nucleus. Long thought to be either impossible or, at best, highly problematical, its difficulties are often thought to relate to the failure of some genes to move at all. Gene transfer from chloroplast to nucleus is now estimated to occur naturally in tobacco at a frequency of one transposition in 16,000 pollen grains \[16\]. In natural populations and over evolutionary time, this frequency represents a massive informational onslaught and highlights the urgency of the question of why chloroplasts have genomes at all. There must be some crucial, over-riding selective advantage in retaining certain genes in chloroplasts but not others. Evidence is now accruing for the ten-year-old proposal that gene expression in the chloroplast is regulated by the function of a core of chloroplast gene products in photosynthesis and electron transport \[17,18\].

It is clear that genomics, in the sense of whole-genome analyses, is making very important contributions to our understanding of the evolution of plastids, and is complementing, and to a significant extent supplanting, 'single gene' phylogenies. Genomics is revolutionizing our understanding of the changes involved in the primary endosymbiosis that produced the plastids of red, green and glaucophyte algae, and in the subsequent genetic changes in green (charophycean) plastids with the evolution of higher plants. Genomics is also indispensable for understanding how red and green algae yielded the plastids derived from secondary endosymbiosis.

The endosymbiont hypothesis took a long time to graduate from wild and untestable speculation to an accepted view of plastid origins and evolution. In contrast, comparative genomics has quickly elevated the kinship of chloroplasts and cyanobacteria to a keystone of our understanding of the most abundant of cells, the primary producers on which life now depends, not to mention some vicious and enterprising pathogens whose exploits are a global burden to human health. The title of this article asks what the cyanobacteria have done for plants.
\"What have they not done?\" is a question perhaps more easily addressed.\n\n### Acknowledgements\n\nWe thank W. Martin for comments and NERC (JAR) and NFR (JFA) for related research grants.\n\n## Figures and Tables","meta":{"dup_signals":{"dup_doc_count":139,"dup_dump_count":49,"dup_details":{"curated_sources":4,"2023-23":1,"2022-21":1,"2019-30":1,"2019-18":1,"2019-09":1,"2019-04":1,"2018-51":1,"2018-47":1,"2018-43":1,"2018-39":1,"2018-34":2,"2018-17":1,"2017-51":1,"2017-39":1,"2017-34":1,"2017-26":1,"2017-17":1,"2017-09":10,"2017-04":1,"2016-44":1,"2016-40":1,"2016-36":11,"2016-30":9,"2016-22":1,"2016-18":1,"2016-07":4,"2015-48":3,"2015-40":1,"2015-35":2,"2015-32":4,"2015-27":3,"2015-22":4,"2015-14":2,"2014-52":3,"2014-49":4,"2014-42":8,"2014-41":4,"2014-35":6,"2014-23":7,"2014-15":3,"2023-40":1,"2017-13":1,"2015-18":4,"2015-11":3,"2015-06":4,"2014-10":3,"2013-48":4,"2013-20":3,"2024-18":1}},"file":"PMC153454"},"subset":"pubmed_central"} {"text":"author: Melissa Hawkins\ndate: 2020-08-13\ntitle: Quarantine Bubbles \u2013 When Done Right \u2013 Limit Coronavirus Risk and Help Fight Loneliness\n\nAfter three months of lockdowns, many people in the U.S. and around the world are turning to quarantine bubbles, pandemic pods or quaranteams in an effort to balance the risks of the pandemic with the emotional and social needs of life.\n\nI am an epidemiologist and a mother of four, three of whom are teenagers in the throes of their risk-taking years. As the country grapples with how to navigate new risks in the world, my kids and I are doing the same.\n\nWhen done carefully, the research shows that quarantine bubbles can effectively limit the risk of contracting SARS-CoV-2 while allowing people to have much needed social interactions with their friends and family.\n\nQuaranteams are founded on the idea that people can interact freely within a group, but that group stays isolated from other people as much as possible.\n\n# Reduce risk if you can't eliminate it\n\nA quaranteam is a small group of people who form their own social circle to quarantine together \u2013 and a perfect example of a harm reduction strategy.\n\nHarm reduction is a pragmatic public health concept that explicitly acknowledges that all risk cannot be eliminated, so it encourages the reduction of risk. Harm reduction approaches also take into consideration the intersection of biological, psychological and social factors that influence both health and behavior.\n\nFor example, abstinence-only education doesn't work all that well. Safe-sex education, on the other hand, seeks to limit risk, not eliminate it, and is better at reducing teen pregnancy and sexually transmitted infection.\n\nQuarantine bubbles are a way to limit the risk of getting or transmitting SARS-CoV-2 while expanding social interaction.\n\n# Mental health matters too\n\nStaying indoors, avoiding all contact with friends or family and having food and groceries delivered would be the best way to limit your risk of catching SARS-CoV-2. But the risks of the pandemic extend beyond the harm from infection. Health encompasses mental as well as physical well-being.\n\nThe negative mental health impacts of the pandemic are already starting to become evident. A recent survey of U.S. adults found that 13.6% reported symptoms of serious psychological distress, up from 3.9% in 2018. A quarter of people 18 to 29 years old reported serious psychological distress, the highest levels of all ages groups. 
Many people are experiencing anxiety and depression due to the pandemic, or were already living with these challenges. Loneliness certainly doesn't help.

Loneliness and social isolation increase the risk for depression and anxiety, and can also increase the risk for serious physical diseases like coronary heart disease, stroke and premature death.

Quaranteams, therefore, are not simply a convenient idea because they let people see their friends and family. Isolation poses serious health risks – both physical and mental – that social bubbles can help alleviate while improving social well-being and quality of life.

Managing a virus is all about managing human interactions, and quarantine bubbles work to insulate groups from risk.

# Social network theory shows that quaranteams work

Social relationships enhance well-being and mental health, but they also act as a vehicle for infection transmission. As people around the world emerge from lockdowns, this is the conundrum: how do we increase social interaction while limiting the risk of spread?

A recent study used social network theory – how information spreads among groups of people – and infectious disease models to see if quaranteams would work in this pandemic.

To do that, the researchers built computer models of social interactions to measure how the virus spread. They modeled typical behavior; typical behavior with only half the number of interactions; and three different social distancing approaches, each also with half the number of interactions.

The first social distancing scenario grouped people by characteristics – people would only see people of a similar age, for example. The second scenario grouped people by local communities and limited inter-community interaction. The last scenario limited interactions to small social groups of mixed characteristics from various locations – i.e. quarantine bubbles. These bubbles could have people of all ages and from various neighborhoods, but those people would only interact with each other.

All of the social distancing measures reduced the severity of the pandemic and were also better than simply reducing interactions at random, but the quaranteam approach was the most effective at flattening the curve. Compared to no social distancing, quarantine bubbles would delay the peak of infections by 37%, decrease the height of the peak by 60% and result in 30% fewer infected individuals overall.

Other countries are starting to incorporate quaranteams in their prevention guidelines now that infection rates are low and contact tracing programs are in place. England is the latest country to announce quaranteam guidance with its support bubble policy.

New Zealand implemented a quarantine bubble strategy in early May and it seems to have worked. Additionally, a recent survey of 2,500 adults in England and New Zealand found a high degree of support for the policies and a high degree of motivation to comply.

People in a quarantine bubble need to agree on how much risk is acceptable and establish a set of rules.

# How to build a quarantine bubble

To make an effective quaranteam, here's what you need to do.

First, everyone must agree to follow the rules and be honest and open about their actions. Individual behavior can put the whole team at risk, and the foundation of a quaranteam is trust.
Teams should also talk in advance about what to do if someone breaks the rules or is exposed to an infected person. If someone starts to show symptoms, everyone should agree to self-isolate for 14 days.

Second, everyone must decide how much risk is acceptable and establish rules that reflect this decision. For example, some people might feel OK about having a close family member visit but others may not. Our family has agreed that we only visit with friends outside, not inside, and that everyone must wear masks at all times.

Finally, people need to actually follow the rules, comply with physical distancing outside of the quaranteam and be forthcoming if they think they may have been exposed.

Additionally, communication should be ongoing and dynamic. The realities of the pandemic are changing at a rapid pace, and what may be OK one day might be too risky for some the next.

# The risks of joining a quaranteam

Any increase in social contact is inherently more risky right now. There are two important ideas in particular that a person should consider when thinking about how much risk they're willing to take.

You must be smart and honest when determining how much risk you're willing to take and who is affected.

The first is asymptomatic spread. Current data suggest that at any given time, anywhere between 20% and 45% of people infected with SARS-CoV-2 are asymptomatic or pre-symptomatic and able to transmit the virus to others. The best way to know if someone is infected is to get tested, so some people might consider requiring testing before agreeing to join a quaranteam.

The second thing to consider is that the consequences of getting sick are not the same for everyone. If you or someone you live with has another health condition – like asthma, diabetes, a heart condition or a compromised immune system – the assessment of risk and reward from a quaranteam should change. The consequences of a high-risk person developing COVID-19 are much more serious.

One of the greatest difficulties facing scientists and the public alike is the uncertainty about this virus and what lies ahead. But some things are known. If individuals are informed and sincere in their quaranteam efforts and follow the regular guidance of social distancing, mask wearing and enthusiastic hand-washing, quaranteams can offer a robust and structured middle-ground approach to managing risk while experiencing the joy and benefits of friends and family. These are things we could all benefit from these days, and for now, quaranteams may be the best step forward as we emerge from this pandemic together.

This article is republished from The Conversation under a Creative Commons license. Read the original article.

abstract: # Background

Conventional systematic review techniques have limitations when the aim of a review is to construct a critical analysis of a complex body of literature.
This article offers a reflexive account of an attempt to conduct an interpretive review of the literature on access to healthcare by vulnerable groups in the UK.

# Methods

This project involved the development and use of the method of Critical Interpretive Synthesis (CIS). This approach is sensitised to the processes of conventional systematic review methodology and draws on recent advances in methods for interpretive synthesis.

# Results

Many analyses of equity of access have rested on measures of utilisation of health services, but these are problematic both methodologically and conceptually. A more useful means of understanding access is offered by the synthetic construct of candidacy. Candidacy describes how people's eligibility for healthcare is determined between themselves and health services. It is a continually negotiated property of individuals, subject to multiple influences arising both from people and their social contexts and from macro-level influences on allocation of resources and configuration of services. Health services are continually constituting and seeking to define the appropriate objects of medical attention and intervention, while at the same time people are engaged in constituting and defining what they understand to be the appropriate objects of medical attention and intervention. Access represents a dynamic interplay between these simultaneous, iterative and mutually reinforcing processes. By attending to how vulnerabilities arise in relation to candidacy, the phenomenon of access can be better understood, and more appropriate recommendations made for policy, practice and future research.

# Discussion

By innovating with existing methods for interpretive synthesis, it was possible to produce not only new methods for conducting what we have termed critical interpretive synthesis, but also a new theoretical conceptualisation of access to healthcare. This theoretical account of access is distinct from models already extant in the literature, and is the result of combining diverse constructs and evidence into a coherent whole. Both the method and the model should be evaluated in other contexts.
author: Mary Dixon-Woods; Debbie Cavers; Shona Agarwal; Ellen Annandale; Antony Arthur; Janet Harvey; Ron Hsu; Savita Katbamna; Richard Olsen; Lucy Smith; Richard Riley; Alex J Sutton
date: 2006
institute: 1Department of Health Sciences, University of Leicester, 22–28 Princess Road West, Leicester LE1 6TP, UK; 2Division of Oncology/General Practice, University of Edinburgh, Edinburgh Centre for Neuro-Oncology, Western General Hospital, Crewe Road South, Edinburgh EH4 2XU, UK; 3Department of Health Sciences, University of Leicester, Leicester General Hospital, Leicester LE5 4PW, UK; 4Department of Sociology, University of Leicester, Leicester LE1 7RH, UK; 5School of Nursing, University of Nottingham, Queens Medical Centre, Nottingham NG7 2HA, UK; 6Centre for Research in Social Policy, Loughborough University, Leicestershire LE11 3TU, UK; 7Nuffield Research Unit, Department of Health Sciences, University of Leicester, 22–28 Princess Road West, Leicester LE1 6TP, UK
references:
title: Conducting a critical interpretive synthesis of the literature on access to healthcare by vulnerable groups

# Background

Like many areas of healthcare practice and policy, the literature on access to healthcare is large, diverse, and complex.
It includes empirical work using both qualitative and quantitative methods; editorial comment and theoretical work; case studies; evaluative, epidemiological, trial, descriptive, sociological, psychological, management, and economics papers, as well as policy documents and political statements. "Access" itself has not been consistently defined or operationalised across the field. There are substantial adjunct literatures, including those on quality in healthcare, priority-setting, and patient satisfaction. A review of the area would be of most benefit if it were to produce a "mid-range" theoretical account of the evidence and existing theory that is neither so abstract that it lacks empirical applicability nor so specific that its explanatory scope is limited.

In this paper, we suggest that conventional systematic review methodology is ill-suited to the challenges that conducting such a review would pose, and describe the development of a new form of review which we term "Critical Interpretive Synthesis" (CIS). This approach is sensitised to the range of issues involved in conducting reviews that conventional systematic review methodology has identified, but draws on a distinctive tradition of qualitative inquiry, including recent interpretive approaches to review \[1\]. We suggest that using CIS to synthesise a diverse body of evidence enables the generation of theory with strong explanatory power. We illustrate this briefly using an example based on synthesis of the literature on access to healthcare in the UK by socio-economically disadvantaged people.

## Aggregative and interpretive reviews

Conventional systematic review developed as a specific methodology for searching for, appraising, and synthesising findings of primary studies \[2\]. It offers a way of systematising, rationalising, and making more explicit the processes of review, and has demonstrated considerable benefits in synthesising certain forms of evidence where the aim is to *test* theories, perhaps especially about "what works". It is more limited when the aim, as here, is to include many different forms of evidence with the aim of *generating* theory \[3\]. Conventional systematic review methods are thus better suited to the production of *aggregative* rather than *interpretive* syntheses.

This distinction between aggregative and interpretive syntheses, noted by Noblit and Hare in their ground-breaking book on meta-ethnography \[1\], allows a useful (though necessarily crude) categorisation of two principal approaches to conducting reviews \[4\]. Aggregative reviews are concerned with assembling and pooling data, may use techniques such as meta-analysis, and require a basic comparability between phenomena so that the data can be *aggregated* for analysis. Their defining characteristics are a focus on *summarising data*, and an assumption that the concepts (or variables) under which those data are to be summarised are largely secure and well specified. Key concepts are defined at an early stage in the review and form the categories under which the data from empirical studies are to be summarised.

*Interpretive* reviews, by contrast, see the essential tasks of synthesis as involving both induction and interpretation. Their primary concern is with the development of concepts and theories that integrate those concepts. An interpretive review will therefore avoid specifying concepts in advance of the synthesis.
The interpretive analysis that yields the synthesis is conceptual in process and output. The product of the synthesis is not aggregations of data, but theory grounded in the studies included in the review. Although there is a tendency at present to conduct interpretive synthesis only of qualitative studies, it should in principle be possible and indeed desirable to conduct interpretive syntheses of all forms of evidence, since theory-building need not be based only on one form of evidence. Indeed, Glaser and Strauss \[5\], in their seminal text, included an (often forgotten) chapter on the use of quantitative data for theory-building.

Recent years have seen the emergence of a range of methods that draw on a more interpretive tradition, but these also have limitations when attempting a synthesis of a large and complex body of evidence. In general, the use to date of interpretive approaches to synthesis has been confined to the synthesis of qualitative research only \[6-8\]. Meta-ethnography, an approach in which there has been recent significant activity and innovation, has similarly been used solely to synthesise qualitative studies, and has typically been used only with small samples \[9-11\]. Few approaches have attempted to apply an interpretive approach to the whole corpus of evidence (regardless of study type) included in a review, and few have treated the literature they examine as itself an object of scrutiny, for example by questioning the ways in which the literature constructs its problematics, the nature of the assumptions on which the literature draws, or what has influenced proposed solutions.

In this paper we offer a reflexive account of our attempt to conduct an interpretive synthesis of all types of evidence relevant to access to National Health Service (NHS) healthcare in the UK by potentially vulnerable groups. These groups had been defined at the outset by the funders of the project (the UK Department of Health Service Delivery and Organisation R&D Programme) as children, older people, members of minority ethnicities, men/women, and socio-economically disadvantaged people. We explain in particular our development of Critical Interpretive Synthesis as a method for conducting this review.

# Methods

## Formulating the review question

Conventional systematic review methodology \[12,13\] emphasises the need for review questions to be precisely formulated. A tightly focused research question allows the parameters of the review to be identified and the study selection criteria to be defined in advance, and in turn limits the amount of evidence required to address the review question.

This strategy is successful where the phenomenon of interest, the populations, interventions, and outcomes are all well specified – i.e. if the aim of the review is *aggregative*. For our project, it was neither possible nor desirable to specify in advance the precise review question, *a priori* definitions, or categories under which the data could be summarised, since one of its aims was to allow the definition of the phenomenon of access to emerge from our analysis of the literature \[14\]. This is not to say that we did not have a review question, only that it was not a specific hypothesis. Instead it was, as Greenhalgh and colleagues \[15\] describe, tentative, fuzzy and contested at the outset of the project.
It did include a focus on equity and on how access, particularly for potentially vulnerable groups, can best be understood in the NHS, a health care system that is, unlike most in the world, free at the point of use.

The approach we used to further specify the review question was highly iterative, modifying the question in response to search results and findings from retrieved items. It treated, as Eakin and Mykhalovskiy \[16\] suggest, the question as a compass rather than an anchor, and as something that would not finally be settled until the end of the review. In the process of refining the question, we benefited from the multidisciplinary nature of our review team: this allowed a range of perspectives to be incorporated into the process, something that was also helpful and important in other elements of the review.

## Searching the literature

A defining characteristic of conventional systematic review methodology is its use of explicit searching strategies, and its requirement that reviewers be able to give a clear account of how they searched for relevant evidence, such that the search methods can be reproduced \[2\]. Searching normally involves a range of strategies, but relies heavily on electronic bibliographic databases.

We piloted the use of a highly structured search strategy using protocol-driven searches across a range of electronic databases but, like Greenhalgh and Peacock \[17\], found this unsatisfactory. In particular, it risked missing relevant materials by failing to pick up papers that, while not ostensibly about "access", were nonetheless important to the aim of the review. We then developed a more organic process that fitted better with the emergent and exploratory nature of the review questions. This combined a number of strategies, including searching of electronic databases; searching websites; reference chaining; and contacts with experts. Crucially, we also used expertise within the team to identify relevant literature from adjacent fields not immediately or obviously relevant to the question of "access".

However, searching generated thousands of potentially relevant items – at one stage over 100,000 records. A literature of this size would clearly be unmanageable, and well exceed the capacity of the review team. We therefore redefined the aim of the searching phase. Rather than aiming for comprehensive identification and inclusion of all relevant literature, as would be required under conventional systematic review methodology, we saw the purpose of the searching phase as identifying potentially relevant papers to provide a sampling frame. Our sampling frame eventually totalled approximately 1,200 records.

## Sampling

Conventional systematic review methodology limits the number of papers to be included in a review by having tightly specified inclusion criteria for papers. Effectively, this strategy constructs the field to be known as having specific boundaries, defined as research that has specifically addressed the review question, used particular study designs and fulfilled the procedural requirements for the proper execution of these. Interpretive reviews might construct the field to be known rather differently, seeing the boundaries as more diffuse and ill-defined, as potentially overlapping with other fields, and as shifting as the review progresses. Nonetheless, there is a need to limit the number of papers to be included in an interpretive synthesis, not least for practical reasons, including the time available. Sampling is also warranted theoretically, in that the focus in interpretive synthesis is on the development of concepts and theory rather than on exhaustive summary of all data. A number of authors \[18-20\] suggest drawing on the sampling techniques of primary qualitative research, including principles of theoretical sampling and theoretical saturation, when conducting a synthesis of qualitative literature.
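The search-and-sample cycle described above can be pictured as a simple loop in which the sampling frame is queried, sampled, and re-queried as the emerging analysis raises new theoretical questions. The sketch below is purely illustrative: the record structure, topic keywords and stopping behaviour are our own assumptions, not part of the published method.

```python
import random

# A record in the sampling frame: id, title, and topic keywords assigned at
# screening. All data structures here are illustrative assumptions.
frame = [
    {"id": 1, "title": "GP consultation rates and deprivation", "topics": {"utilisation", "deprivation"}},
    {"id": 2, "title": "Missed appointments in outpatient clinics", "topics": {"appointments", "deprivation"}},
    {"id": 3, "title": "Lay beliefs about heart disease", "topics": {"lay epidemiology"}},
    # ... roughly 1,200 records in the real sampling frame
]

def purposive_sample(frame, topic, k=20):
    """Initial purposive selection: papers clearly about the phenomenon."""
    relevant = [r for r in frame if topic in r["topics"]]
    return random.sample(relevant, min(k, len(relevant)))

def theoretical_sample(frame, emerging_constructs, already_read):
    """Later theoretical sampling: papers that can test or elaborate the
    emerging constructs, wherever in the frame they sit."""
    return [r for r in frame
            if r["id"] not in already_read
            and r["topics"] & emerging_constructs]

read = {r["id"] for r in purposive_sample(frame, "utilisation")}
constructs = {"appointments"}  # e.g. questions about service organisation raised by analysis
for record in theoretical_sample(frame, constructs, read):
    read.add(record["id"])  # read, analyse, and let the constructs evolve
```

The point is not automation – judgements about relevance remained interpretive – but that sampling and analysis proceed as a loop rather than a sequence.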
For purposes of our synthesis, we used purposive sampling initially to select papers that were clearly concerned with aspects of access to healthcare, partly informed by an earlier scoping study \[21\], and later used theoretical sampling to add, test and elaborate the emerging analysis. Sampling therefore involved a constant dialectic process conducted concurrently with theory generation.

## Determination of quality

Conventional systematic review methodology uses assessment of study quality in a number of ways. First, as indicated above, studies included in a review may be limited to particular study designs, often using a "hierarchy of evidence" approach that sees some designs (e.g. randomized controlled trials) as being more robust than others (e.g. case-control studies). Second, it is usual to devise broad inclusion criteria – for example adequate randomisation for RCTs – and to exclude studies that fail to meet these. Third, an appraisal of included studies, perhaps using a structured quality checklist, may be undertaken to allow sensitivity analyses aimed at assessing the effects of weaker papers.

Using this approach when confronted with a complex literature, including qualitative research, poses several challenges. No hierarchy of study designs exists for qualitative research. How or whether to appraise papers for inclusion in interpretive reviews has received a great deal of attention, but there is little sign of an emergent consensus \[22\]. Some argue that formal appraisals of quality may not be necessary, and some argue that there is a risk of discounting important studies for the sake of "surface mistakes" \[23\]. Others propose that weak papers should be excluded from the review altogether, and several published syntheses of qualitative research have indeed used quality criteria to make decisions about excluding papers \[10,24\].

We aimed to prioritise papers that appeared to be relevant, rather than particular study types or papers that met particular methodological standards. We might therefore be said to be prioritising "signal" (likely relevance) over "noise" (the inverse of methodological quality) \[25\]. We felt it important, for purposes of an interpretive review, that a low threshold be applied to maximise the inclusion and contribution of a wide variety of papers at the level of *concepts*. We therefore took a two-pronged approach to quality. First, we decided that only papers that were deemed to be fatally flawed would be excluded. Second, once in the review, the synthesis itself crucially involved judgements and interpretations of credibility and contribution, as we discuss later.

To identify fatally flawed papers, we used the criteria in Table 1, adapted from those proposed (at the time of our review) by the National Health Service (NHS) National Electronic Library for Health for the evaluation of qualitative research, to inform judgements on the quality of the papers. These criteria were used for assessing *all* empirical papers (but not those classified as 'reviews') regardless of study type.
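Operationally, this two-pronged approach amounts to recording, for each paper, a relevance judgement and a fatal-flaw judgement against the prompts listed in Table 1 below. A minimal sketch of such a screening record follows; the naming, record structure and exclusion rule are our own illustrative assumptions, not the project's actual tooling.

```python
from dataclasses import dataclass, field

# The five appraisal prompts of Table 1, abbreviated as keys (our labels).
PROMPTS = ["aims_stated", "design_appropriate", "process_accounted",
           "data_support_claims", "analysis_explicated"]

@dataclass
class ScreeningRecord:
    paper_id: str
    relevance: str                      # "high" | "medium" | "low"
    prompt_judgements: dict = field(default_factory=dict)

    def fatally_flawed(self) -> bool:
        # Illustrative rule: excluded only if every prompt fails; the actual
        # judgement in the review was holistic, not mechanical.
        return len(self.prompt_judgements) == len(PROMPTS) \
            and all(v is False for v in self.prompt_judgements.values())

record = ScreeningRecord("paper-001", relevance="high",
                         prompt_judgements={p: True for p in PROMPTS})
assert not record.fatally_flawed()  # high relevance, no fatal flaws: include
```

Deferring the richer judgements of credibility and contribution to the synthesis stage, as described next, is what distinguishes this from a conventional quality gate.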
The final judgement about inclusion in the review rested on an assessment of relevance as well as on an assessment of the quality of the individual papers. Decisions about relevance and quality were recorded, and a small sample of decisions about relevance and quality was reviewed. In the event, very few papers – approximately 20 – were excluded on grounds of being "fatally flawed", because even weak papers were often judged to have potentially high relevance. The value of deferring judgements of credibility and contribution until the synthesis became increasingly evident.

Table 1. Appraisal prompts for informing judgements about quality of papers
▪ Are the aims and objectives of the research clearly stated?
▪ Is the research design clearly specified and appropriate for the aims and objectives of the research?
▪ Do the researchers provide a clear account of the process by which their findings were produced?
▪ Do the researchers display enough data to support their interpretations and conclusions?
▪ Is the method of analysis appropriate and adequately explicated?

Most fundamentally, as the review progressed, we became increasingly convinced that the assumption that all studies deemed to have satisfactorily fulfilled criteria of execution and reporting can contribute equally to a synthesis is flawed. As we discuss further below, one of the distinctive characteristics of a *critical* interpretive synthesis is its emphasis not only on summary of data reported in the literature but also on a more fundamental critique, which may involve questioning taken-for-granted assumptions.

## Data extraction

A data-extraction pro-forma was initially devised to assist in systematically identifying characteristics of research participants, methods of data collection, methods of data analysis and major findings of each paper. For both qualitative and quantitative papers, this involved extracting the titles of the categories and sub-categories using the terms used in the paper itself and a summary of the relevant material. Practically, however, it proved impossible to conduct this form of data extraction on all documents included in the review, including very large documents. We therefore summarised some documents more informally, for example using a highlighter pen. More generally, the value of formal data extraction for purposes of this type of study will require further evaluation.

## Conducting an interpretive synthesis

We had intended, at the outset of this project, to use meta-ethnography, a method for interpretive synthesis where there is currently an active programme of methodological research \[9-11\], as our approach to synthesis. However, this had previously only been used to synthesise qualitative studies. Our experiences of working with a large sample of papers using multiple methods led us to refine and respecify some of the concepts and techniques of meta-ethnography in order to enable synthesis of a very large and methodologically diverse literature. Eventually we had made so many amendments and additions to the original methodology that we felt it was more appropriate, helpful and informative to deem it a new methodology with its own title and processes. It is this approach which we term critical interpretive synthesis (CIS). It is important to emphasise, however, that CIS is an approach to review and is not solely a method for synthesis.

Meta-ethnography, as originally proposed \[1\], involves three major strategies:

1\. **Reciprocal translational analysis** (RTA). The key metaphors, themes, or concepts in each study report are identified. An attempt is then made to translate the concepts into each other. Judgements about the ability of the concept of one study to capture concepts from others are based on attributes of the themes themselves, and the concept that is "most adequate" is chosen.

2\. **Refutational synthesis.** Contradictions between the study reports are characterised, and an attempt made to explain them.

3\. **Lines-of-argument synthesis** (LOA) involves building a general interpretation grounded in the findings of the separate studies. The themes or categories that are most powerful in representing the entire dataset are identified by constant comparisons between individual accounts.

### Reciprocal translational analysis

Reciprocal translational analysis involves translating findings of one paper into another by systematically comparing findings from each study, using techniques such as maps \[9\].
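As a schematic illustration of what such a translation involves (the studies, concepts and adequacy rule below are invented for the example, not drawn from the review):

```python
# Each study contributes its own vocabulary of concepts; RTA asks which
# concept can "capture" the others. Everything here is illustrative.
study_concepts = {
    "study_A": ["fear of blame", "symptom normalisation"],
    "study_B": ["not wanting to bother the doctor"],
    "study_C": ["downgrading warning signs"],
}

# A translation map records the reviewer's judgement that one concept
# expresses what the others describe.
translations = {
    "not wanting to bother the doctor": "downgrading warning signs",
    "symptom normalisation": "downgrading warning signs",
    "fear of blame": "downgrading warning signs",
}

def most_adequate(translations):
    """The 'most adequate' concept is the one into which the most other
    concepts translate - in real RTA a judgement, not a computation."""
    targets = list(translations.values())
    return max(set(targets), key=targets.count)

print(most_adequate(translations))  # -> "downgrading warning signs"
```

With a small, well-bounded set of qualitative papers such a map can be held stable; the difficulty described next is what happens when the set of papers itself keeps shifting.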
We encountered considerable methodological and practical problems in trying to apply RTA across a large set of papers, in part because of the kinds of iterations we were conducting in refining the sample. These meant that there were difficulties in identifying a stable "set" of papers on which an RTA could be conducted. RTA appears to be most suitable for a well-defined, relatively small (fewer than 50) and complete set of papers, because substitution or deletion of papers causes problems with both identifying index concepts and showing which concepts from other papers translate into these. A further problem is that, when confronted with a very large and diverse literature such as ours, RTA tends to provide only a summary in terms that have already been used in the literature. Although this may be a useful strategy as a stage on the way to a more interpretive synthesis, its value may be more limited than is the case for smaller samples of qualitative study reports where its benefits have been more evident.

Before our review, RTA had previously only been used for synthesising interpretive research, not a large and diverse body of literature, so this may be one reason why it was unsuccessful for our purposes. It is important to distinguish between the doubtful value of RTA in our synthesis (particularly because of the size and diversity of the literature), and the doubtful use of RTA in general. The diversity of the literature would also have prevented us from undertaking an aggregative synthesis using meta-analysis, but this clearly could not be read as a criticism of meta-analysis itself, but of its limitations when applied to a diverse literature.

### Lines of argument synthesis

Recent work \[9-11\] has innovated in the methodology of lines-of-argument (LOA) synthesis originally proposed by Noblit and Hare by building on Schutz's \[26\] notions of "orders" of constructs. Schutz used the idea of "first order construct" to refer to the everyday understandings of ordinary people and "second order construct" to refer to the constructs of the social sciences. The explanations and theories used by authors in primary study reports could therefore be seen as second order interpretations. This recent work uses LOA synthesis to develop what are referred to as "third order" interpretations, which build on the explanations and interpretations of the constituent studies, and are simultaneously consistent with the original results while extending beyond them. Our experiences have led us to respecify some of this approach.

We suggest that the appropriate way of conceptualising the output of an LOA synthesis is as a *synthesising argument*. This argument integrates evidence from across the studies in the review into a coherent theoretical framework comprising a network of constructs and the relationships between them. Its function is to provide more insightful, formalised, and generalisable ways of understanding a phenomenon. A synthesising argument can be generated through detailed analysis of the evidence included in a review, analogous to the analysis undertaken in primary qualitative research. It may require the generation of what we call *synthetic constructs*, which are the result of a transformation of the underlying evidence into a new conceptual form. Synthetic constructs are grounded in the evidence, but result from an interpretation of the whole of that evidence, and allow the possibility of several disparate aspects of a phenomenon being unified in a more useful and explanatory way.
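Since a synthesising argument is, structurally, a network of constructs and relationships, it can be pictured as a small typed graph. The sketch below encodes a fragment of the candidacy argument developed later in this paper; the representation itself (not the constructs) is our illustrative addition.

```python
# Nodes are constructs, tagged by provenance: synthetic constructs generated
# in the synthesis, or second order constructs taken from the literature.
nodes = {
    "candidacy":    "synthetic",
    "permeability": "synthetic",
    "adjudication": "synthetic",
    "help-seeking": "second_order",  # already present in the literature
}

# Edges are the relationships the argument asserts between constructs.
edges = [
    ("help-seeking", "identifies", "candidacy"),
    ("permeability", "conditions", "candidacy"),
    ("adjudication", "progresses_or_inhibits", "candidacy"),
]

# A synthesising argument may mix both kinds of construct:
assert {nodes[a] for a, _, b in edges if b == "candidacy"} == {"synthetic", "second_order"}
```

Nothing in the method requires such a formalisation; it simply makes visible the claim that the argument links synthetic and second order constructs, which the next paragraph elaborates.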
What we have called a "synthetic construct" might also be seen as a "third order construct". We suggest that the term "synthetic construct" is a more useful term because it is more explicit, and also because we emphasise that a *synthesising argument* need not consist solely of synthetic constructs. Instead, synthesising arguments may explicitly link not only synthetic constructs, but also second order constructs already reported in the literature. In effect, therefore, our approach does not make this precise distinction between second and third order constructs.

### Refutational syntheses

We further suggest that what Noblit and Hare \[1\] call "refutational syntheses" are best conducted as part of the analysis that produces the synthesising argument. Few published meta-ethnographies have in fact reported a separate refutational synthesis. It is, we suggest, more productive instead to adopt a critical and reflexive approach to the literature, including consideration of contradictions and flaws in evidence and theory.

An important element of producing a synthesising argument is the need, when conducting the analysis, to consider and reflect on the credibility of the evidence, to make critical judgements about how it contributes to the development of the synthesising argument, and to root the synthesising argument appropriately in critique of existing evidence. Clearly, credibility depends on the quality of the research, its currency, and the robustness of its theoretical base. But more generally, a critical interpretive synthesis is critical in the broader sense of *critique* rather than this more limited sense of critical appraisal, in which each study is judged against the standards of its type. Critique may involve identification of the research traditions or meta-narratives that have guided particular fields of research \[27\] as well as critical analysis of particular forms of discourse. Its aim is therefore to treat the literature as warranting critical scrutiny in its own right.

## Conducting the analysis

Our analysis of the evidence, in order to produce a synthesising argument, was similar to that undertaken in primary qualitative research. We began with detailed inspection of the papers, gradually identifying recurring themes and developing a critique. We then generated themes that helped to explain the phenomena being described in the literature, constantly comparing the theoretical structures we were developing against the data in the papers, and attempting to specify the categories of our analysis and the relationships between them. To facilitate the process of identifying patterns, themes, and categories across the large volumes of text-based data in our study, we used QSR N5 software. However, it is important to note that, as with any qualitative analysis, full transparency is not possible because of the creative, interpretive processes involved.
Nonetheless, the large multidisciplinary team involved in the review, and the continual dialogue made necessary by this, helped to introduce "checks and balances" that guarded against framing of the analysis according to a single perspective.

A key feature of this process that distinguishes it from some other current approaches to interpretive synthesis (and indeed from much primary qualitative research) was its aim of being *critical*: its questioning of the ways in which the literature had constructed the problematics of access, the nature of the assumptions on which it drew, and what has influenced its choice of proposed solutions. Our critique of the literature was thus dynamic, recursive and reflexive, and, rather than being a stage in which individual papers are excluded or weighted, it formed a key part of the synthesis, informing the sampling and selection of material and playing a key role in theory generation.

## Findings: access to healthcare by socio-economically disadvantaged people

Our critical interpretive synthesis of the literature on access to healthcare by socio-economically disadvantaged people in the UK included 119 papers. Early analytic categories were tentative and contingent, but gradually became firmed up and more highly specified as our analysis continued. Our synthesis involved a critique of the tendency to use measures of utilisation as a means of assessing the extent to which access to healthcare is equitable. It further involved the generation of a synthesising argument that has the synthetic construct of *candidacy* at its core. For space reasons, we can report here only a brief illustrative summary.

## Critique of utilisation as a measure of access

Much of the evidence on whether access to healthcare in the UK is equitable has relied on measuring utilisation of health services. This approach measures the units of healthcare (consultations, procedures, etc) that people have actually consumed. The literature suggests that different groups have identifiable patterns of use of services, but the significance of these is often difficult to interpret. General practice (GP) consultation rates among socio-economically disadvantaged people have generally been found to be higher \[28,29\], though some recent work has suggested that social class variables are generally insignificant in explaining health service use \[30\]. Studies that have attempted to adjust for need, usually on the basis of estimates of morbidity, have generally suggested that the apparent excess of GP consultation can be explained by higher need \[31\].

Our critique of the literature suggests that utilisation is a generally unhelpful measure of equity of access. Not only do the logistical and practical problems of conducting utilisation studies pose substantial threats to validity and reliability, these studies are problematic for other reasons. They rely on a largely untested set of normative (i.e. ideas about how the world ought to be) and somewhat questionable assumptions about the "correct" level of utilisation, and on difficult-to-measure (or conceptualise) estimates of "need". They often invoke normative assumptions about need relative to some apparently privileged though often ill-defined reference group (such as "affluent" people), and therefore risk failing to identify problems in access for that reference group. Misleadingly reassuring results may be produced that indicate that "need" and use or receipt are proportionate.
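To see how the choice of reference group does the work in such analyses, consider a deliberately artificial calculation (all numbers invented for illustration):

```python
# Consultations per person-year and a morbidity-based "need" score,
# both invented for illustration only.
groups = {
    "deprived": {"use": 5.2, "need": 1.4},
    "affluent": {"use": 4.0, "need": 1.0},  # the implicit reference group
}

def need_adjusted_ratio(group, reference):
    """Use relative to the reference group, scaled by relative 'need'."""
    rel_use = groups[group]["use"] / groups[reference]["use"]
    rel_need = groups[group]["need"] / groups[reference]["need"]
    return rel_use / rel_need

print(round(need_adjusted_ratio("deprived", "affluent"), 2))  # -> 0.93
```

A ratio near 1 looks reassuring, yet the result is entirely conditional on the need estimate and on treating the reference group's utilisation as "correct"; if the reference group itself under-uses services, the equity problem disappears from view by construction.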
We argue that utilisation, or, more appropriately, *receipt* of healthcare is the outcome of many different complex processes, which all need to be recognised if access is to be properly understood.

Our analysis suggested that a focus instead on *candidacy*, a synthetic construct that we generated during the course of our analysis, would demonstrate the vulnerabilities associated with socio-economic disadvantage, emphasise the highly dynamic, multi-dimensional and contingent character of access, and allow a more insightful interpretation of the evidence on receipt of healthcare.

## Candidacy

Our synthesising argument around access to healthcare by socio-economically disadvantaged people is organised around a set of central concepts and, in particular, the core synthetic category of "candidacy". Candidacy functions as a synthetic construct because it is the product of the transformation of the evidence into a new conceptual form. It is distinct from earlier uses of the term "candidacy", including its use in the lay epidemiology of heart disease \[32\].

We have defined candidacy as follows: candidacy describes the ways in which people's eligibility for medical attention and intervention is jointly negotiated between individuals and health services. Our synthesising argument runs as follows: candidacy is a dynamic and contingent process, constantly being defined and redefined through interactions between individuals and professionals, including how "cases" are constructed. Accomplishing access to healthcare requires considerable work on the part of users, and the amount, difficulty, and complexity of that work may operate as barriers to receipt of care. The social patterning of perceptions of health and health services, and a lack of alignment between the priorities and competencies of disadvantaged people and the organisation of health services, conspire to create vulnerabilities. Candidacy is managed in the context of operating conditions that are influenced by individuals, the setting and environment in which care takes place, situated activity, the dynamics of face-to-face activity, and aspects of self (such as gender), the typifications staff use in categorising people and diseases, availability of economic and other resources such as time, local pressures, and policy imperatives.

## Identification of candidacy

How people recognise their symptoms as needing medical attention or intervention is clearly key to understanding how they assert a claim to candidacy. Our analysis suggests that people in more deprived circumstances are likely to manage health and to recognise candidacy as a series of crises. There is significant evidence of lower use of preventive services among more deprived groups \[33,34\], as well as evidence of higher use of accident and emergency facilities, emergency admissions and out-of-hours use \[35-38\]. Among more deprived groups, there is a tendency to seek help in response to specific events that are seen as warranting candidacy. "Warning signs" may be downgraded in importance by socio-economically disadvantaged populations because of a lack of a positive conceptualisation of health \[39,40\], the normalisation of symptoms within deprived communities \[41-43\], and fear of being "blamed" by health professionals \[44\].

## Navigation

Using services requires considerable work on the part of people.
First, people must be aware of the services on offer, and there has been persistent concern that more deprived people may lack awareness of some services \[45,46\]. Second, using health services requires the mobilisation of a range of practical resources that may be variably available in the population. A key practical resource that impacts on the ability to seek care for the socio-economically disadvantaged, for example, is transport \[44,47,48\]. Other practical constraints that may impact on the ability of disadvantaged groups to negotiate health services include more rigid patterns of working life \[47\]. Goddard and Smith \[49\] summarise evidence suggesting that those from more deprived social groups face financial costs of attending health services which, though not sufficient to dissuade them from using services when they are ill (i.e. in response to a specific "event"), act as a barrier to attending "optional" services related to health promotion and health prevention.

## The permeability of services

Patterns of use of health services reflect issues in the organisation of services as much as they reflect a tendency to manage health as a series of crises on the part of disadvantaged people. We generated the synthetic construct of "permeability" to refer to the ease with which people can use services. Porous services require few qualifications of candidacy to use them, and may require the mobilisation of relatively fewer resources. Such services might include Accident and Emergency departments. Services that are less permeable demand qualifications (such as a referral), and also demand a higher degree of cultural alignment between themselves and their users, particularly in respect of the extent to which people feel comfortable with the organisational values of the service. Such services might include out-patient clinics in hospitals.

Services that are less permeable tend to have high levels of default by socio-economically disadvantaged people \[50-53\]. Appointments systems, for example, are a threat to permeability for socio-economically disadvantaged people because they require resources and competencies (including stable addresses, being able to read, and being able to present in particular places at particular times) \[33,50,54\]. In addition, the extent to which people feel alienated from the cultural values of health services and their satisfaction with services have important implications for which services they choose to use \[41,55\].

## Appearances at health services

*Appearing* at health services involves people in asserting a claim to candidacy for medical attention or intervention. Whatever the nature of the claim, making it clearly involves work that requires a set of competencies, including the ability to formulate and articulate the issue for which help is being sought, and the ability to present credibly. More deprived people are at risk in these situations: they may be less used to or less able to provide coherent abstracted explanations of need, and may feel intimidated by their social distance from health professionals. Sword \[56\] points out that people with low incomes may feel alienated by the power relations that often characterise encounters with professionals.
Dixon et al \[57\] and, in the US, Cooper and Roter \[58\] suggest that middle class people may be more adept at using their "voice" to demand better and more extensive services: they may be more articulate, more confident, and more persistent, while people from lower class backgrounds are less verbally active. Somerset et al \[59\] report that in making referral decisions, patients' social status and their ability to articulate verbally act as background (and unexpressed) influences that affect the likelihood of referral.

## Adjudications

Once a patient has asserted their candidacy by presenting to health services, the professional judgements made about that candidacy strongly influence subsequent access to attention and interventions. We generated the synthetic construct of "adjudication" to refer to the judgements and decisions made by professionals which allow or inhibit continued progression of candidacy. May et al's \[60\] analysis suggests doctors' practices are often exercised through a repertoire of routine judgements about the possibilities presented by individual patients and the routinely available means of solving these. These typifications are, we suggest, strongly influenced by local conditions, including the operating conditions in which practitioners work and sensitivity to resource constraints. Candidacy of socially disadvantaged people appears to be at risk of being judged to be less eligible, at least for some types of interventions, although the evidence that this happens is not particularly strong.

Our analysis suggests that it is likely that professionals' perceptions of patients who are likely to "do well" as a result of interventions may disadvantage people in more deprived circumstances. As Hughes and Griffiths \[61\] identify, clinical decisions may rest on often implicit social criteria about which patients "ought" to receive care. People in disadvantaged groups are more likely to smoke, to be overweight and to have co-morbidities, and professional perceptions of the cultural and health capital required to *convert* a unit of health provision into a given unit of health gain may function as barriers to healthcare \[34\]. In addition, perceptions of social "deservingness" may play a role \[61,62\]. Goddard and Smith \[49\] summarise evidence suggesting that, independent of the severity of the disease, some GPs are more likely to refer the economically active and those with dependants. Clearly, there is potential for socially disadvantaged people to be disfavoured in such decisions.

## Offers and resistance

Much of the work on utilisation of healthcare explicitly or implicitly assumes that non-utilisation is a direct reflection of non-offer. However, this type of normative analysis fails to acknowledge that people may choose to refuse offers. There is some evidence of patterns of resistance to offers. Referral implies that a GP has identified particular features of candidacy and is seeking to match those to a service that deals with that form of candidacy, but patients can resist being referred \[42,63\] and can resist offers of medication \[64,65\].

## Operating conditions and the local production of candidacy

A small body of recent research has identified what might be called local influences on the production of candidacy, and in our analysis these are hugely important.
These are the contingent and locally specific influences on interactions between practitioners and patients, which may be emergent over time through repeated encounters. Crucial to the local production of candidacy is the perceived or actual availability and suitability of resources to address that candidacy \[60,63\].

# Discussion

Demands from health policy-makers and managers for syntheses of evidence that are useful, rigorous and relevant are fuelling interest in the development of methods that can allow the integration of diverse types of evidence \[66\]. With the diversity of techniques for evidence synthesis now beginning to appear, those using existing, 'new' or evolving techniques need to produce critical reflexive accounts of their experiences of using the methods \[3\]. Our experience of conducting a review of access to healthcare, where there is a large, amorphous and complex body of literature, and a need to assemble the findings into a form that is useful in informing policy and that is empirically and theoretically grounded \[67\], has led us to propose a new method – Critical Interpretive Synthesis – which is sensitised to the kinds of processes involved in conventional systematic review while drawing on a distinctively qualitative tradition of inquiry.

Conventional systematic review methodology is well-suited to aggregative syntheses, where what is required is a summary of the findings of the literature under a set of categories which are largely pre-specified, secure, and well-defined. It has been important in drawing attention to the weaknesses of informal reviews, including perceived failures in their procedural specification and the possibility that the (thus) undisciplined reviewer might be chaotic or negligent in identifying the relevant evidence, or might construct idiosyncratic theories and marshal the evidence in support of these. It has thus revealed some of the pitfalls of informal literature review. Conventional systematic review methodology has demonstrated considerable benefits in synthesising certain forms of evidence where the aim is to *test* theories (in the form of hypotheses), perhaps especially about "what works". However, this approach is limited when the aim, confronted with a complex body of evidence, is to *generate* theory \[15,27\].

Current methods for conducting an interpretive synthesis of the literature (such as meta-ethnography) are also limited, in part because application of many interpretive methods for synthesis has remained confined to studies reporting qualitative research. Realist synthesis \[68\], which does include diverse forms of evidence, is oriented towards theory evaluation, in particular by focusing on theories of change. Methods for including qualitative and quantitative evidence in systematic reviews developed by the EPPI Centre at the Institute of Education, London, have involved refinements and extensions of conventional systematic review methodology \[6-8\], and have limited their application of interpretive techniques to synthesis of qualitative evidence.

More generally, many current approaches fail to be sufficiently *critical*, in the sense of offering a critique. There is rarely an attempt to reconceptualise the phenomenon of interest, to provide a more sweeping critique of the ways in which the literature in the area has chosen to represent it, or to question the epistemological and normative assumptions of the literature.
With notable exceptions such as the recent approach of meta-narrative analysis \[15\], critique of papers in current approaches to review tends to be limited to appraisal of the methodological specificities of the individual papers.

Conducting an interpretive review of the literature on access to healthcare by vulnerable groups in the UK therefore required methodological innovation that would be alert to the issues raised by systematic review methodology but also move beyond both its limitations and those of other current interpretive methods. The methods for review that we developed in this project (Table 2) built on conventional systematic review methodology in their sensitivity to the need for attentiveness to a range of methodological processes. Crucially, in doing so, we drew explicitly on traditions of qualitative research inquiry, and in particular on the principles of grounded theory \[5\].

Table 2. Key processes in critical interpretive synthesis
▪ A review question should be formulated at the outset, but should remain open to modification. Precise definitions of many constructs may be deferred until late in the review and may be a product of the review itself.
▪ Searching, sampling, critique and analysis proceed hand in hand, and should be seen as dynamic and mutually informative processes.
▪ Searching initially should use a broadly defined strategy, including purposive selection of material likely or known to be relevant.
▪ The analysis should be aimed towards the development of a synthesising argument: a critically informed integration of evidence from across the studies in the review. The synthesising argument takes the form of a coherent theoretical framework comprising a network of constructs and the relationships between them. The synthesising argument links synthetic constructs (new constructs generated through synthesis) and existing constructs in the literature.
▪ There is a need for constant reflexivity to inform the emerging theoretical notions, as these guide the other processes.
▪ Ongoing selection of potentially relevant literature should be informed by the emerging theoretical framework. Literatures not directly or obviously relevant to the question under review may be accessed as part of this process.
▪ CIS encourages an ongoing critical orientation to the material to be included in the review. Some limited formal appraisal of methodological quality of individual papers is likely to be appropriate. Generally the aim will be to maximise relevance and theoretical contribution of the included papers.
▪ Formal data extraction procedures may be helpful, particularly at the outset of the review, but are unlikely to be an essential feature of the approach.
▪ CIS does not aim to offer a series of pre-specified procedures for the conduct of review. It explicitly acknowledges the "authorial voice"; that some aspects of its production of the account of the evidence will not be visible or auditable; and that its account may not be strictly reproducible. Its aim is to offer a theoretically sound and useful account that is demonstrably grounded in the evidence.
▪ CIS demands constant reflexivity on the part of authors of reviews. Authors are charged with making conscientious and thorough searches, with making fair and appropriate selections of materials, with seeking disconfirming evidence and other challenges to the emergent theory, and with ensuring that the theory they generate is, while critically informed, plausible given the available evidence.

In addition to its explicit orientation towards theory generation, perhaps what most distinguishes CIS from conventional systematic review methods is its rejection of a "stage" approach to review. Processes of question formulation, searching, selection, data extraction, critique and synthesis are characterised as iterative, interactive, dynamic and recursive rather than as fixed procedures to be accomplished in a pre-defined sequence. CIS recognises the need for flexibility in the conduct of review, and future work would need to assess how far formal methods of critical appraisal and data extraction will be essential elements of the method. Our experience suggests that while attention to scientific quality is required, more generally the emphasis should be on critique rather than critical appraisal, and an ongoing critical orientation to the material examined and to emerging theoretical ideas. Formal data extraction may also be an unnecessarily constraining and burdensome process.

CIS emphasises the need for theoretical categories to be generated from the available evidence and for those categories to be submitted to rigorous scrutiny as the review progresses. Further, it emphasises a need for constant reflexivity to inform the emerging theoretical notions, which in turn guide the sampling of articles. Although CIS demands attention to flaws in study design, execution and reporting in our judgements of the quality of individual papers, its critical approach goes beyond standard approaches. Thus, in our review, some methodologically weak papers were important in terms of their theoretical contribution, or in terms of demonstrating the breadth of evidence considered in the construction of particular categories, or in terms of providing a more comprehensive summary of the evidence, while a single strong paper might be pivotal in the development of the synthesis. Hughes and Griffiths' paper on micro-rationing of healthcare \[61\], for example, was a key paper in helping to generate the construct of candidacy that later came to unify the themes of our analysis. The critical interpretation in our analysis focused on how a synthesising argument could be fashioned from the available evidence, given the quality of the evidence and the kinds of critiques that could be offered of the theory and assumptions that lay behind particular approaches. In treating the literature as an object of scrutiny in its own right, CIS problematises the literature in ways that are quite distinct from most current approaches to literature reviewing.

## Access to healthcare

The CIS approach we adopted deferred final definition of the phenomenon of access and the appropriate ways of conceptualising it until our analysis was complete. Our critique of the current literature focused on the inadequacies of studies of utilisation as a guide to explaining inequities in health care.
The conceptual model of access that we developed emphasises candidacy as the core organising construct, and recasts access as highly dynamic and contingent, and subject to constant negotiation.

In this conceptual model of access to healthcare, health services are continually constituting and seeking to define the appropriate objects of medical attention and intervention, while at the same time people are engaged in constituting and defining what they understand to be the appropriate objects of medical attention and intervention. Candidacy describes how people's eligibility for healthcare is determined between themselves and health services. Candidacy is a continually negotiated property of individuals, subject to multiple influences arising both from people and their social contexts and from macro-level influences on allocation of resources and configuration of services. "Access" represents a dynamic interplay between these simultaneous, iterative and mutually reinforcing processes. By attending to how vulnerabilities arise in relation to candidacy, the phenomenon of access can be much better understood, and more appropriate recommendations made for policy, practice and future research. Although our review focused on the UK, we suggest that the construct of candidacy is transferable, and has useful explanatory value in other contexts.

In addition to the core construct of candidacy, our analysis required the production of a number of other linked synthetic constructs – constructs generated through an attempt to summarise and integrate diverse concepts and data – including "adjudications" and "offers". It was also possible to link existing "second order" constructs, for example relating to help-seeking as the identification of candidacy by patients, into the synthesising argument, and to make these work as synthesising constructs. We feel that this approach allows maximum benefit to be gained from previous analyses as well as the new synthesis.

## Reflections on the method

Clearly, questions can be raised about the validity and credibility of the CIS analysis we have presented here. Conventional systematic review methodology sets great store by the reproducibility of its protocols and findings. It would certainly have been possible to produce an account of the evidence that was more reproducible. For example, we could have used the evidence to produce a thematic summary that stuck largely to the terms and concepts used in the evidence itself. However, we felt it important that we produced an interpretation of the evidence that could produce new insights and fresh ways of understanding the phenomenon of access, and that the "critical voice" of our interpretation was maintained throughout the analysis. Simply to have produced a thematic summary of what the literature was saying would have run the risk of accepting that the accounts offered in the evidence-base were the only valid way of understanding the phenomenon of access to healthcare by vulnerable groups. We therefore make no claim to reproducibility, but wish to address some possible concerns. First, it could be argued that a different team using the same set of papers would have produced a different theoretical model. However, the same would be true for qualitative researchers working with primary qualitative data, who accept that other possible interpretations might be given to, say, the same set of transcripts.
Clearly, the production of a synthesising argument, as an interpretive process, produces one privileged reading of the evidence, and, as the product of an authorial voice, it cannot be defended as an inherently reproducible process or product. We would suggest, however, that our analysis can be defended on the grounds that it is demonstrably grounded in the evidence; that it is plausible; that it offers insights that are consistent with the available evidence; and that it can generate testable hypotheses and empirically valuable questions for future research.

Second, subjecting a question to continual review and refinement, as we did, may make it more difficult for those conducting critical interpretive reviews to demonstrate, as required by conventional systematic review methodology, the "transparency", comprehensiveness, and reproducibility of search strategies. This dilemma between the "answerable" question and the "meaningful" question has received little attention, but it underpins key tensions between the two ends of the academic/pragmatic systematic review spectrum. On balance, faced with a large and amorphous body of evidence in an area such as access to healthcare, and given the aims of an interpretive synthesis, we feel that our decision not to limit the focus of the review at the outset, and our subsequent sampling strategies, were well justified. Our decision not to commit to a particular view of what access might be and how it should be assessed at the outset of the project was critical to our subsequent development of a more satisfactory understanding of access.

Third, it could be argued that we have synthesised too small a sample of the available papers, or that the processes used to select the papers are not transparent. We recognise that we have analysed and synthesised only a fraction of all relevant papers in the area of access to healthcare by vulnerable groups. However, a common strategy in conventional systematic review is to limit the study types to be included; this strategy might also result in only a proportion of the potentially relevant literature being synthesised. While we have described our methods for sampling as purposive, it is possible that another team using the same approach could have come up with a different sample, because, particularly in the later stages of our review, our sampling was highly intuitive and guided by the emerging theory.

The final version of the conceptual model of access to healthcare that we eventually developed did not emerge until quite late in the review process, and much of the later sampling was directed at testing and purposively challenging the theory as we began to develop it. Again, such forms of searching and sampling do not lend themselves easily to reproducibility or indeed auditability. Testing whether the interpretations change in response to different findings will be an important focus for future research, which will also need to evaluate whether apparently disconfirming evidence is the result of methodological flaws or poses a genuine challenge to theory.

# Conclusion

Conducting interpretive reviews in challenging areas where there is a large body of diverse evidence demands an approach that can draw on the strengths of conventional systematic review methodology and on the recent advances in methods for interpretive synthesis. We have termed the approach we developed to this review "critical interpretive synthesis".
We believe that this methodology offers the potential for insight, vividness, illumination, and reconceptualisation of research questions, particularly in challenging areas such as access to healthcare, and look forward to further evaluations of its application.

# Competing interests

The author(s) declare that they have no competing interests.

# Authors' contributions

MDW designed the project, led and supervised its execution, and drafted the manuscript. EA, AA, JH, RH, SK, RO, LS, RR and AJS participated in the design of the study. All authors engaged in searching, screening, sampling, data extraction, and critical appraisal/critique activities, and contributed to the thematic analysis. DC and SA managed the searching, maintained the databases and coded material using N6 software. All authors contributed to the draft of the manuscript.

# Pre-publication history

The pre-publication history for this paper can be accessed here:

### Acknowledgements

This project was funded by the NHS Service Delivery and Organisation (SDO) Research Programme.

abstract: # OBJECTIVE

To examine in obese young adults the influence of ethnicity and subcutaneous adipose tissue (SAT) inflammation on hepatic fat fraction (HFF), visceral adipose tissue (VAT) deposition, insulin sensitivity (SI), β-cell function, and SAT gene expression.

# RESEARCH DESIGN AND METHODS

SAT biopsies were obtained from 36 obese young adults (20 Hispanics, 16 African Americans) to measure crown-like structures (CLS), reflecting SAT inflammation. SAT, VAT, and HFF were measured by magnetic resonance imaging, and SI and β-cell function (disposition index \[DI\]) were measured by intravenous glucose tolerance test. SAT gene expression was assessed using Illumina microarrays.

# RESULTS

Participants with CLS in SAT (*n* = 16) were similar to those without CLS in terms of ethnicity, sex, and total body fat. Individuals with CLS had greater VAT (3.7 ± 1.3 vs. 2.6 ± 1.6 L; *P* = 0.04), HFF (9.9 ± 7.3 vs. 5.8 ± 4.4%; *P* = 0.03), tumor necrosis factor-α (20.8 ± 4.8 vs. 16.2 ± 5.8 pg/mL; *P* = 0.01), fasting insulin (20.9 ± 10.6 vs. 9.7 ± 6.6 mU/mL; *P* \< 0.001) and glucose (94.4 ± 9.3 vs. 86.8 ± 5.3 mg/dL; *P* = 0.005), and lower DI (1,559 ± 984 vs. 2,024 ± 829 ×10^−4^ min^−1^; *P* = 0.03).
Individuals with CLS in SAT exhibited upregulation of matrix metalloproteinase-9 and monocyte antigen CD14 genes, as well as several other genes belonging to the nuclear factor-κB (NF-κB) stress pathway.

# CONCLUSIONS

Adipose tissue inflammation was equally distributed between sexes and ethnicities. It was associated with partitioning of fat toward VAT and the liver and altered β-cell function, independent of total adiposity. Several genes belonging to the NF-κB stress pathway were upregulated, suggesting stimulation of proinflammatory mediators.

author: Kim-Anne Lê; Swapna Mahurkar; Tanya L. Alderete; Rebecca E. Hasson; Tanja C. Adam; Joon Sung Kim; Elizabeth Beale; Chen Xie; Andrew S. Greenberg; Hooman Allayee; Michael I. Goran. Corresponding author: Michael I. Goran.
date: 2011-11
references:
title: Subcutaneous Adipose Tissue Macrophage Infiltration Is Associated With Hepatic and Visceral Fat Deposition, Hyperinsulinemia, and Stimulation of NF-κB Stress Pathway

Adipose tissue inflammation is now recognized as an important mediating link that may help explain the relationship between obesity and several metabolic abnormalities, including insulin resistance (1,2), liver fat accumulation (1,2), and vascular dysfunction (3). This association, however, is not consistent across obese individuals. For example, despite a similar degree of obesity, some obese individuals develop insulin resistance, type 2 diabetes, and nonalcoholic fatty liver disease, whereas others remain protected. This has led to the description of metabolically healthy obese patients (4) who display low hepatic fat content and high insulin sensitivity (SI) together with a favorable inflammatory profile (5). One factor that may explain differences in metabolic risk between individuals with the same degree of body fat is adipose tissue inflammation.

Adipose tissue has long been considered an inactive tissue whose presumed primary role was to store excess energy as triglycerides (TG). It is now widely accepted that adipose tissue also acts as an endocrine organ, secreting various adipokines and cytokines, and plays a role in the regulation of metabolic pathways (6). In obese individuals, excessive storage of free fatty acids (FFA) as TGs may lead to subcutaneous adipose tissue (SAT) dysfunction, resulting in impaired TG storage and possibly diversion of FFA to other tissues, such as the liver or the visceral compartment (7). Such a condition has been associated with high adipose tissue inflammation, characterized by higher secretion of proinflammatory cytokines and macrophage recruitment. Previous research has demonstrated that in adipose tissue from obese mice and humans, such macrophages aggregate around dead adipocytes, forming characteristic ring patterns referred to as crown-like structures (CLS) (8). Furthermore, the macrophages within CLS have been shown to be proinflammatory, and their presence is associated with insulin resistance (9,10).

Hispanics are more prone to an ectopic fat pattern, such as visceral and liver fat accumulation (11), when compared with African Americans; this may be driven partly by impaired SAT storage function and associated with adipose macrophage infiltration (12).
Conversely, African Americans are similarly prone to obesity but appear protected against visceral and hepatic fat accumulation.\n\nThe purpose of this study, therefore, was to investigate the effect of adipose tissue inflammation on visceral and hepatic fat deposition, SI, and adipose tissue gene expression in two different ethnic groups. We hypothesized that individual differences in adipose tissue inflammation, reflected by the presence of CLS, may explain metabolic abnormalities of obesity, such as hepatic fat deposition and insulin resistance. We therefore recruited participants of Hispanic and African American ethnicity, who are obese and at high risk for type 2 diabetes but show very distinct fat repartition patterns, and investigated whether histological and gene expression differences in adipose tissue contribute to poor metabolic outcomes, such as higher liver fat and insulin resistance.\n\n# RESEARCH DESIGN AND METHODS\n\n## Study participants.\n\nThis cross-sectional analysis includes 36 obese (BMI \u226530 kg\/m^2^) African American (7 men, 9 women) or Hispanic participants (9 men, 11 women) aged 18\u201325 years. Participants were excluded if they had taken medications known to affect body composition, been diagnosed with any major illness since birth, or had any diagnostic criteria for diabetes. Written informed consent and assent were received from all participants. This study was approved by the Institutional Review Board of the Keck School of Medicine, University of Southern California.\n\n## Fat quantification.\n\nWhole-body fat was measured by dual-energy X-ray absorptiometry using a Hologic QDR 4500 W (Hologic, Bedford, MA). Abdominal magnetic resonance imaging data were obtained by the Dixon method, with a sensitive three-point chemical-shift fat-water separation method using a 1.5 Tesla Siemens Symphony Maestro whole-body scanner (Siemens AG, Erlangen, Germany) with Numaris 4 software. A two-dimensional multislice breath-hold protocol previously reported by Hussain et al. (13) was adopted to obtain 19 axial images across the abdomen from the dome of the liver to the L2-L3 vertebrae. The slice thickness was 10 mm with no interslice gaps. The fat-only dataset was used in the subsequent quantification of SAT volume and visceral adipose tissue (VAT) volume, whereas the fat fraction dataset was used to assess percent hepatic fat content (hepatic fat fraction \\[HFF\\]). A commercially available image segmentation and quantification software (SliceOmatic; Tomovision, Inc.) was used. SAT and VAT volumes were computed across all 19 image slices in each participant. HFF was computed as the mean fat fraction of all imaging slices within which the liver was present.\n\n## SAT collection.\n\nAbdominal SAT biopsies were performed lateral to the umbilicus in the skin crease below the abdominal pannus in the lower abdomen using standard sterile techniques. The region was locally anesthetized with 2 mL of 2% lidocaine. Through a 0.5-cm skin incision, SAT was collected with a disposable 0.5-cm diameter biopsy punch using three passes. All tissue samples were stored in formalin or promptly frozen in liquid nitrogen (LN2) and stored at \u221280\u00b0C.\n\n## Adipose tissue immunohistochemistry.\n\nInfiltration of macrophage cell populations into adipose tissue was characterized using cell-specific stains against CD68, an established cell surface marker for macrophages (predilute antibodies from DakoCytomation Corporation). 
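Before detailing the staining protocol further, a brief aside on the MRI-derived measures described above: the slice-based depot volumes and HFF reduce to simple aggregation. The following Python sketch, with hypothetical per-slice values, illustrates the arithmetic only; the segmentation itself (SliceOmatic) is not reproduced, and all numeric inputs below are invented for illustration.

```python
# Hypothetical sketch of the slice-based aggregation described above:
# depot volume = sum of per-slice segmented areas x slice thickness,
# and HFF = mean fat fraction over the liver-containing slices.
import numpy as np

SLICE_THICKNESS_CM = 1.0  # 10-mm slices, no interslice gaps

def depot_volume_l(slice_areas_cm2):
    """Depot volume in liters from the 19 per-slice segmented areas (cm^2)."""
    return float(np.sum(slice_areas_cm2) * SLICE_THICKNESS_CM / 1000.0)

def hepatic_fat_fraction(liver_slice_fat_fractions):
    """HFF (%) as the mean fat fraction of all slices containing liver."""
    return float(np.mean(liver_slice_fat_fractions))

# Invented example: 19 equal VAT areas and 8 liver-slice fat fractions (%)
vat_areas = np.full(19, 170.0)
print(depot_volume_l(vat_areas))  # 3.23 L, in the range reported in Table 1
print(hepatic_fat_fraction([9.1, 10.4, 8.7, 9.9, 10.2, 9.5, 8.8, 9.6]))
```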
In brief, 5-µm-thick adipose tissue sections were fixed and loaded onto a Biogenex I-6000 machine for incubation with primary antibodies. Multilink biotinylated secondary antibody was then allowed to react for 30 min at room temperature. Slides were then washed with PBS, placed in diaminobenzidine solution, microscopically examined for a positive reaction, and counterstained with hematoxylin. All samples were evaluated in a blinded fashion by the pathologist for the presence (+) or absence (−) of macrophage CLS, indicating the presence of dead adipocytes (8). Tissue sections were observed with a Nikon Eclipse E800 light microscope using a ×20 objective, and digital images were captured with a DXM 1200 camera. CLS density (CLS per 400 adipocytes) was determined using a drawing tablet and the Nikon Lucia IMAGE version 4.61 morphometric program. Four representative tissue sections in each sample were observed, and participants were dichotomously categorized as CLS+ if distinct adipose tissue macrophage clusters were present in any examined high-power field or CLS− if clusters were completely absent in all histological fields for a given participant. To further characterize macrophage type, we assessed CD11c immunoreactivity in six CLS+ (four Hispanics, two African Americans) and four CLS− participants (three Hispanics, one African American) using a mouse monoclonal antibody against human CD11c, a specific cell surface marker for type 1 macrophages (Novacastra Mouse Monoclonal Antibody CD11c \[Clone 5D11\]).

## Adipose tissue microarray analysis.

Total RNA was isolated from adipose tissue biopsies using RNeasy Mini kits (Qiagen, Valencia, CA). Biotinylated RNA for hybridization with the Illumina arrays was amplified using the TotalPrep RNA Amplification Kit. Global gene expression was determined for each sample using an Illumina (San Diego, CA) HumanHT-12 v3 expression bead chip through a service provided by the Southern California Genotyping Consortium. Quality control was performed and fulfilled the criteria for array hybridization suggested by the Tumor Analysis Best Practices Working Group (14). Data analysis was performed using GenomeStudio and Partek Inc. (St. Louis, MO). Background-corrected and quantile-normalized signal intensity values were exported to Partek. Array data have been submitted to the Gene Expression Omnibus (accession: GSE23506). A fold change of 1.3 was used as a criterion for inclusion of genes in functional annotation and pathway analysis. Our study was 70% powered to detect a fold change of 1.3.

## RT-PCR.

Genes of interest were validated using quantitative RT-PCR. Reverse transcription was performed with 0.5 µg of total RNA and random hexamer primers (Applied Biosystems, Foster City, CA). Inventoried and validated TaqMan probes were used. RT-PCR amplification was performed using an ABI HT7900 instrument (Applied Biosystems). All values are expressed as the average relative expression normalized to the GUS-B endogenous control.

## SI and intravenous glucose tolerance test.

An insulin-modified frequently sampled intravenous glucose tolerance test (IVGTT) (15) was performed after an overnight fast. Upon arrival, a catheter was inserted into the antecubital vein of each arm. At time 0, glucose (25% dextrose, 0.3 g/kg body wt) was administered intravenously.
Insulin (0.02 units/kg body wt, Humulin R \[regular insulin for human injection\]; Eli Lilly, Indianapolis, IN) was injected intravenously at 20 min. Blood samples for glucose and insulin were collected at time points −15, −5, 2, 4, 8, 19, 22, 30, 40, 50, 70, 100, and 180 min, and for FFA, TG, cholesterol, and cytokines at time −15. Glucose and insulin values obtained from the IVGTT were entered into the MINMOD Millennium 2003 computer program (version 5.16, Bergman, USC) to determine SI, glucose effectiveness, acute insulin response (AIR), and disposition index (DI) (15). Homeostasis model assessment-insulin resistance (HOMA-IR) was calculated as fasting insulin \[mU/mL\] × fasting glucose \[mmol/L\] / 22.5, and HOMA–β-cell as 20 × fasting insulin \[mU/mL\] / (fasting glucose \[mmol/L\] − 3.5).

## Blood analysis.

Blood samples from all time points taken during the IVGTT were centrifuged immediately for 10 min at 2,500 revolutions per minute at 8–10°C and frozen at −70°C until analysis. Glucose was assayed in duplicate on a Yellow Springs Instrument 2700 Analyzer (Yellow Springs, OH) using the glucose oxidase method. Insulin was assayed in duplicate using a specific human insulin ELISA kit from Linco (St. Charles, MO), and FFA were quantified using a colorimetric kit (NEFA-HR(2); Wako Diagnostics, Richmond, VA). TGs and total, LDL, and HDL cholesterol were measured using the Kodak Ektachem DT slide assay. Circulating inflammatory mediators including plasminogen activator inhibitor 1, monocyte chemoattractant protein 1 (MCP-1), interleukin-8 (IL-8), tumor necrosis factor-α (TNF-α), and hepatocyte growth factor were measured in batch using multiplex Luminex assays (Linco Research). High-sensitivity C-reactive protein was measured chemically using the ADVIA 1800 Chemistry System (Siemens Healthcare Diagnostics, Deerfield, IL).

## Statistical methods.

All data are means ± SD, unless otherwise specified. Statistical analyses were performed using STATA 11.0 (Stata Corporation, College Station, TX). *P* values \< 0.05 were considered statistically significant. Values for high-sensitivity C-reactive protein were log-transformed to reach a normal distribution. Unadjusted comparisons of parameters between adipose inflammatory status groups were done using Student *t* tests. Adjustments of comparisons for sex, ethnicity, total fat, and VAT were performed using ANCOVA when appropriate. Because of the large SDs of HOMA–β-cell, SI, AIR, and DI, these parameters were tested using the nonparametric Wilcoxon signed-rank test. χ^2^ tests were performed to assess the effect of ethnicity and sex on inflammatory status. Correlation analyses were done using Pearson correlation tests. Differentially expressed genes were determined using one-way ANOVA and exported to Ingenuity pathway analysis 8.6 (Ingenuity Systems, Redwood City, CA).

# RESULTS

## Clinical and histological data.

A total of 36 obese participants with an average BMI of 35.6 ± 3.9 kg/m^2^ completed the study (mean age 21.2 ± 2.3 years; 55% Hispanics, 55% women). We first carried out histological analyses of adipose biopsies to determine the presence of aggregated macrophages as CLS. Mean section area was 14.0 ± 5.1 mm^2^; 16 participants showed presence of CLS (CLS+), whereas there were no signs of CLS in the remaining 20 participants (CLS−).
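As a concrete rendering of the two fasting indices defined in the statistical methods above, the following Python sketch applies them to the CLS+ group means reported in Table 1 below. It is illustrative only (the study's analyses were run in STATA), and the mg/dL-to-mmol/L conversion factor is a standard assumption rather than a value stated in the text.

```python
# Illustrative only: the two fasting indices defined above, applied to the
# CLS+ group means from Table 1. 18.016 mg/dL per mmol/L is the standard
# glucose unit conversion (an assumption, not stated in the text).

MGDL_PER_MMOLL = 18.016

def homa_ir(fasting_insulin_mu_ml, fasting_glucose_mg_dl):
    """HOMA-IR = fasting insulin [mU/mL] x fasting glucose [mmol/L] / 22.5."""
    return fasting_insulin_mu_ml * (fasting_glucose_mg_dl / MGDL_PER_MMOLL) / 22.5

def homa_beta(fasting_insulin_mu_ml, fasting_glucose_mg_dl):
    """HOMA-beta-cell = 20 x fasting insulin / (fasting glucose [mmol/L] - 3.5)."""
    return 20 * fasting_insulin_mu_ml / (fasting_glucose_mg_dl / MGDL_PER_MMOLL - 3.5)

# CLS+ group means from Table 1: insulin 20.9 mU/mL, glucose 94.4 mg/dL.
# Indices computed from group means will not exactly match Table 1, which
# reports the mean of the per-participant indices instead.
print(f"HOMA-IR   ~ {homa_ir(20.9, 94.4):.1f}")    # ~4.9 (Table 1: 4.4)
print(f"HOMA-beta ~ {homa_beta(20.9, 94.4):.0f}")  # ~240 (Table 1: 399)
```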
In the CLS+ participants, the mean number of CLS per 400 adipocytes was 12.9 ± 17.9, with a median of 5.4 and an interquartile range of 2.4–14.2. The characteristics of the participants, stratified by CLS status, are shown in Table 1. Of note, CLS+ participants were equally distributed among men and women as well as Hispanics and African Americans (χ^2^ tests: *P* \> 0.05), even after adjusting for total fat and VAT. This suggests that major differences with respect to the presence of CLS are not driven by sex or ethnicity in our study population. In the whole group, CLS+ individuals had increased VAT (3.7 ± 1.3 vs. 2.6 ± 1.6 L; *P* = 0.04), HFF (9.9 ± 7.3 vs. 5.8 ± 4.4%; *P* = 0.03), fasting TNF-α (20.8 ± 4.8 vs. 16.2 ± 5.8 pg/mL; *P* = 0.01), and insulin concentrations (20.8 ± 2.6 vs. 9.7 ± 6.5 mU/mL; *P* = 0.0007), independent of sex, ethnicity, total fat, and visceral fat volume (Fig. 1). Markers of insulin resistance, including fasting glucose, fasting insulin, HOMA-IR, and HOMA-β, were also significantly higher in the CLS+ group, compared with the CLS−, and these comparisons remained significant after adjusting for covariates. DI, reflecting β-cell function, was significantly lower in the CLS+ group (1,559 ± 984 vs. 2,024 ± 829 ×10^−4^ min^−1^; *P* = 0.03), whereas SI, glucose effectiveness, and AIR were not significantly different (Table 1). HDL cholesterol concentrations tended to be lower in CLS+ participants (*P* = 0.09), but there were no differences in fasting TGs and total and LDL cholesterol concentrations. In a subset of participants, we performed further immunohistochemical studies to examine the presence of CD11c+ immunoreactivity, to detect the presence of dendritic cells. Of the six participants with CLS+, four showed positive CD11c immunoreactivity staining, whereas it was completely absent in all four CLS− participants (Fig. 2).
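The equal distribution of CLS+ status across sex and ethnicity reported above can be re-checked directly from the group counts in Table 1 below. This SciPy sketch is ours, not the authors' STATA analysis; it simply reproduces the flavor of the χ^2^ tests described in the statistical methods.

```python
# Re-checking the chi-squared tests on CLS+/CLS- distribution using the
# group counts from Table 1 below (illustrative, not the study's code).
from scipy.stats import chi2_contingency

tables = {
    "sex":       [[8, 12],    # CLS-: men, women
                  [8, 8]],    # CLS+: men, women
    "ethnicity": [[10, 10],   # CLS-: Hispanic, African American
                  [10, 6]],   # CLS+: Hispanic, African American
}

for name, counts in tables.items():
    chi2, p, dof, expected = chi2_contingency(counts)  # Yates correction for 2x2
    print(f"{name}: chi2 = {chi2:.2f}, P = {p:.2f}")   # both P > 0.05
```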
CD11c+ cells are a subclass of macrophages, called dendritic cells, which have been demonstrated to be proinflammatory and linked to systemic insulin resistance (10,16).

Anthropometric, body composition, SI, and plasma parameters stratified by adipose tissue inflammatory status

| | CLS− | CLS+ | *P* value |
|----|----|----|----|
| *n* | 20 | 16 | |
| Number of CLS (per 400 adipocytes) | 0 ± 0 | 12.9 ± 17.9 | 0.002 |
| Age (years) | 21.6 ± 2.2 | 21.1 ± 2.4 | NS |
| Sex (men/women) | 8/12 | 8/8 | NS |
| Ethnicity (Hispanic/African American) | 10/10 | 10/6 | NS |
| Body composition | | | |
|  BMI (kg/m^2^) | 35.1 ± 3.3 | 36.3 ± 4.1 | NS |
|  Total fat (%) | 37.7 ± 7.4 | 37.3 ± 6.3 | NS |
|  SAT (L) | 16.0 ± 4.9 | 17.3 ± 3.9 | NS |
|  VAT (L) | 2.6 ± 1.6 | 3.7 ± 1.3 | 0.04 |
|  HFF (%) | 5.8 ± 4.4 | 9.9 ± 7.3 | 0.03 |
| SI | | | |
|  Fasting glucose (mg/dL) | 86.8 ± 5.3 | 94.4 ± 9.3 | 0.005 |
|  Fasting insulin (mU/mL) | 9.7 ± 6.6 | 20.9 ± 10.6 | \<0.001 |
|  HOMA-IR | 2.0 ± 1.4 | 4.4 ± 2.2 | \<0.001 |
|  HOMA–β-cell | 191.4 ± 125.4 | 399 ± 301 | 0.01 |
|  Acute insulin response (mU/mL × 10 min) | 1,320 ± 912 | 1,116 ± 796 | NS |
|  SI (×10^−4^ min^−1^/mU/mL) | 1.9 ± 0.7 | 1.3 ± 0.9 | NS |
|  DI (×10^−4^ min^−1^) | 2,024 ± 829 | 1,559 ± 984 | 0.03 |
| Lipid metabolism | | | |
|  FFAs (mmol/L) | 0.87 ± 0.19 | 0.83 ± 0.15 | NS |
|  TGs (mg/dL) | 113 ± 59 | 102 ± 48 | NS |
|  Cholesterol (mg/dL) | | | |
|   Total | 155 ± 31 | 158 ± 35 | NS |
|   LDL | 81 ± 26 | 94 ± 39 | NS |
|   HDL | 50 ± 11 | 43 ± 13 | 0.09 |
| Inflammation | | | |
|  MCP-1 (pg/mL) | 349 ± 182 | 379 ± 97 | NS |
|  IL-8 (pg/mL) | 9.7 ± 3.3 | 11.4 ± 3.2 | NS |
|  TNF-α (pg/mL) | 16.2 ± 5.8 | 20.8 ± 4.8 | 0.01 |
|  HGF (ng/mL) | 2.1 ± 0.8 | 2.1 ± 0.8 | NS |
|  PAI-1 (ng/mL) | 104 ± 44 | 121 ± 43 | NS |
|  hs-CRP (mg/L) | 7.7 ± 16.2 | 6.7 ± 8.3 | NS |
|  Leptin (ng/mL) | 45 ± 22 | 51 ± 28 | NS |
|  Adiponectin (mg/mL) | 17 ± 6 | 14 ± 5 | NS |

Data are means ± SD. HGF, hepatocyte growth factor; hs-CRP, high-sensitivity C-reactive protein; NS, not significant; PAI-1, plasminogen activator inhibitor 1.

We then investigated whether presence of CLS translated into the same phenotypes between Hispanics and African Americans. When analyses were stratified by ethnicity, presence of CLS was specifically associated with higher VAT (2.8 ± 0.9 vs. 1.6 ± 0.9 L; *P* = 0.02) and glucose (100.1 ± 8.6 vs. 84.4 ± 5.6 mg/dL; *P* \< 0.001) in African Americans. In Hispanics, CLS was associated with higher TNF-α (22.6 ± 3.8 vs. 16.5 ± 5.8 pg/mL; *P* = 0.01) and a trend for lower SI (1.6 ± 0.4 vs. 2.1 ± 0.2 \[×10^−4^ min^−1^/mU/mL\]; *P* = 0.06).

## Adipose tissue gene expression analysis.

In the same participants, we subsequently assessed gene expression in SAT biopsies.
Of the 23,000 known annotated genes analyzed on the Illumina Human HT-12 chip, 375 genes (∼2%) were differentially expressed between the CLS+ and CLS− groups based on a detection *P* value \< 0.05. Table 2 shows the top 15 differentially up- and downregulated genes in CLS+ compared with CLS− individuals. Based on Gene Ontology descriptions, molecules involved in inflammatory disease, such as matrix metalloproteinase-9 (*MMP9*; fold change: +4.8; *P* = 0.0004), interferon γ-inducible protein 30 (fold change: +2.2; *P* = 0.003), and IL-1 receptor antagonist (fold change: +2.0; *P* = 0.03), were among the most upregulated genes in CLS+ subjects compared with CLS− individuals. Genes involved in the response to inflammation were also upregulated, including lipopolysaccharide binding protein (fold change: +2.0; *P* = 0.004); TNF receptor superfamily, member 11b (fold change: +1.4; *P* = 0.01); *MCP-1* (fold change: +1.6; *P* = 0.02); and monocyte antigen *CD14* (fold change: +1.5; *P* = 0.01). By comparison, insulin receptor substrate (*IRS*)-*1* (fold change: −1.6; *P* = 0.02) and *IRS-2* (fold change: −1.5; *P* = 0.01), which play central roles in the insulin signaling cascade, were downregulated in CLS+ individuals. Importantly, the differential expression of these inflammatory and insulin signaling genes was observed in both Hispanics and African Americans. On the basis of their relevance to inflammatory pathways, we selected the following genes for RT-PCR validation: *CD14*, *MMP9*, suppressor of cytokine signaling 3, and *IRS-1.* In agreement with our hypothesis and the microarray results, CLS+ individuals showed upregulation of *CD14* (1.7 ± 0.9 vs. 0.9 ± 0.3; *P* = 0.02) and *MMP9* (1.3 ± 1.1 vs. 0.3 ± 0.2; *P* = 0.01) compared with CLS− individuals. Suppressor of cytokine signaling 3 was upregulated (1.5 ± 1.9 vs. 0.8 ± 0.5; *P* = 0.2) and *IRS1* was downregulated (1.0 ± 0.6 vs. 1.4 ± 1.2; *P* = 0.3) in CLS+ individuals, but these results did not reach significance.
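The selection rule behind these lists, a *P* value \< 0.05 together with an absolute fold change of at least 1.3, is simple to express in code. The sketch below applies it to a small, partly hypothetical results table (gene symbols and values taken from Table 2, with ACTB added as a made-up non-significant example); the study itself used GenomeStudio and Partek, not this code.

```python
# Sketch of the gene-selection rule described above: P < 0.05 and
# |fold change| >= 1.3, followed by a Table 2-style ranking.
import pandas as pd

results = pd.DataFrame({
    "gene":        ["MMP9", "SPP1", "IRS1", "KRT15", "ACTB"],
    "fold_change": [4.8, 3.2, -1.6, -2.2, 1.05],   # ACTB values are invented
    "p_value":     [0.0004, 0.01, 0.02, 0.04, 0.80],
})

selected = results[(results["p_value"] < 0.05)
                   & (results["fold_change"].abs() >= 1.3)]

# Top genes up- and downregulated in CLS+ within the selected set
top_up = selected.sort_values("fold_change", ascending=False).head(15)
top_down = selected.sort_values("fold_change").head(15)
print(top_up)    # MMP9, SPP1, ...
print(top_down)  # KRT15, IRS1, ...
```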
Figure 3 shows a schematic representation of the differentially expressed molecules and their functions.

Top 15 up- and downregulated genes in CLS+ compared with CLS− individuals

| Upregulated in CLS+ individuals | | | | Downregulated in CLS+ individuals | | | |
|----|----|----|----|----|----|----|----|
| Unigene official gene symbol | Gene name | *P* value | Fold change | Unigene official gene symbol | Gene name | *P* value | Fold change |
| *MMP9* | matrix metallopeptidase 9 | 0.0004 | +4.8 | *KRT15* | keratin 15 | 0.04 | −2.2 |
| *SPP1* | secreted phosphoprotein 1 | 0.01 | +3.2 | *CIDEA* | cell death-inducing *DFFA*-like effector a | 0.04 | −1.9 |
| *SLC2A5* | solute carrier family 2 (facilitated glucose/fructose transporter), member 5 | 0.0001 | +2.7 | *COBL* | cordon-bleu homolog (mouse) | 0.01 | −1.9 |
| *PLA2G7* | platelet-activating factor acetylhydrolase, plasma | 0.005 | +2.6 | *COL6A6* | collagen, type VI, α-6 | 0.004 | −1.8 |
| *IFI30* | interferon, γ-inducible protein 30 | 0.003 | +2.2 | *CEACAM6* | carcinoembryonic antigen-related cell adhesion molecule 6 | 0.04 | −1.8 |
| *PLA2G2A* | phospholipase A2, group IIA | 0.003 | +2.1 | *S100P* | S100 calcium binding protein P | 0.02 | −1.6 |
| *LBP* | lipopolysaccharide binding protein | 0.004 | +2.1 | *GSDMB* | gasdermin B | 0.02 | −1.6 |
| *ITGAX* | integrin, α-X | 0.006 | +2.0 | *CISH* | cytokine inducible SH2-containing protein | 0.02 | −1.6 |
| *IL1RN* | IL-1 receptor antagonist | 0.03 | +2.0 | *ALDH3B2* | aldehyde dehydrogenase 3 family, member B2 | 0.04 | −1.6 |
| *CLIC6* | chloride intracellular channel 6 | 0.03 | +2.0 | *ADH1A* | alcohol dehydrogenase 1A (class I), α-polypeptide | 0.006 | −1.6 |
| *CHI3L2* | chitinase 3-like 2 | 0.001 | +2.0 | *IRS1* | insulin receptor substrate 1 | 0.02 | −1.6 |
| *HMOX1* | heme oxygenase 1 | 0.0001 | +2.0 | *FHOD3* | formin homology 2 domain containing 3 | 0.02 | −1.6 |
| *ACP5* | acid phosphatase 5 | 0.007 | +2.0 | *IRS2* | insulin receptor substrate 2 | 0.01 | −1.5 |
| *PRND* | prion protein 2 | 0.005 | +1.9 | *RAP1GAP* | RAP1 GTPase activating protein | 0.03 | −1.5 |
| *HP* | haptoglobin | 0.009 | +1.8 | *ADH1B* | alcohol dehydrogenase 1B (class I), β-polypeptide | 0.02 | −1.5 |

Positive fold change, upregulated in individuals with CLS (CLS+); negative fold change, downregulated in individuals with CLS (CLS+).

To gain further insight into the role of these 375 differentially expressed genes, we performed a pathway analysis using Ingenuity Pathway Analysis Systems and identified 31 significantly differentially regulated pathways (some of which are listed in Table 3). The first few differentially regulated pathways were related to liver disease or injury, including: *1*) liver X receptor/retinoid X receptor activation pathways, which play a major role in hepatic lipid synthesis; *2*) hepatic cholestasis, a metabolic disease resulting from abnormal bile flow; *3*) bile acid biosynthesis; *4*) xenobiotic detoxification by cytochrome P450 enzymes; and *5*) hepatic fibrosis and hepatic stellate cell activation.
Of note, molecules found in these various pathways included *MMP9*; lipopolysaccharide binding protein; *CD14*; IL-1 receptor antagonist; TNF receptor superfamily, member 11b; and *MCP1*, which all belong to the nuclear factor-κB (NF-κB) signaling pathway.

Differentially regulated pathways between CLS+ and CLS− individuals

| Canonical pathway | No. of genes | *P* value | Symbol |
|----|----|----|----|
| Liver X receptor/retinoid X receptor activation | 9 | 4.20E-05 | CD14, CCL2, TNFRSF11B, APOC1, MSR1, MMP9, NGFR, IL1RN, LBP |
| Hepatic cholestasis | 12 | 6.69E-05 | CD14, CYP27A1, ABCC3, SLCO3A1, TNFRSF11B, IRAK1, ESR1, NGFR, RARA, IL1RN, LBP, HSD3B7 |
| Bile acid biosynthesis | 7 | 9.26E-05 | CYP27A1, ALDH1A3, ADHFE1, DHRS9, ADH1A, ADH1B, HSD3B7 |
| Metabolism of xenobiotics by cytochrome P450 | 10 | 1.17E-04 | AKR1C2, ALDH1A3, CYP2F1, ALDH3B2 (includes EG:222), ADHFE1, CYP2C9, CYP2S1, DHRS9, ADH1A, ADH1B |
| Atherosclerosis signaling | 10 | 1.4E-04 | PDGFA, CXCR4, CCL2, ITGB2, ALOX5, MSR1, MMP9, PLA2G2A, TNFRSF12A, IL1RN |
| Hepatic fibrosis/hepatic stellate cell activation | 11 | 2.7E-04 | CD14, PDGFA, TIMP1, CCL2, EDNRA, TNFRSF11B, AGTR1, MMP9, NGFR, EGFR, LBP |
| IL-10 signaling | 7 | 6.6E-04 | CD14, FCGR2B, CCR1, IL1RN, LBP, FCGR2A, HMOX1 |
| Fatty acid metabolism | 9 | 9.4E-04 | ALDH1A3, CYP2F1, ADHFE1, AUH, CYP2C9, CYP2S1, DHRS9, ADH1A, ADH1B |
| Complement system | 5 | 1.2E-03 | C5AR1, C1QC, C2, C3AR1, CFB |
| Fcγ receptor-mediated phagocytosis in macrophages and monocytes | 8 | 1.71E-03 | LYN, FCGR1A, PLD3, FGR, SYK, FCGR2A, WAS, HMOX1 |
| Lipopolysaccharide/IL-1-mediated inhibition of retinoid X receptor function | 12 | 2.1E-03 | CD14, ABCC3, IL4I1, TNFRSF11B, ALDH1A3, APOC1, IRAK1, ALDH3B2 (includes EG:222), CYP2C9, NGFR, RARA, LBP |
| Acute phase response signaling | 11 | 2.8E-03 | TNFRSF11B, IRAK1, SERPINA3, C2, NGFR, HAMP, IL1RN, HP, LBP, HMOX1, CFB |
| Dendritic cell maturation | 10 | 2.9E-03 | FCGR2B, CD86, TNFRSF11B, TYROBP, FCGR1A, NGFR, IL1RN, TREM2, FCGR2A, FCGR1B |
| IL-9 signaling | 4 | 6.8E-03 | IRS1, BCL3, IRS2, CISH |
| PPAR signaling | 5 | 1.5E-02 | PDGFA, TNFRSF11B, CITED2, NGFR, IL1RN, PPARGC1A |
| IL-8 signaling | 9 | 2.01E-02 | ITGB2, ITGAX, IRAK1, MMP9, PLD3, GNAI1, ANGPT2, EGFR, HMOX1 |
| IL-6 signaling | 6 | 2.07E-02 | CD14, TNFRSF11B, NGFR, IL1RN, LBP, TNFAIP6 |
| Role of macrophages, fibroblasts, and endothelial cells in rheumatoid arthritis | 13 | 3.6E-02 | PDGFA, F2RL1, TNFRSF11B, FCGR1A, CFB, C5AR1, CCL2, FZD2, IRAK1, NGFR, HP, IL1RN, TNFSF13B |
| Type 2 diabetes signaling | 6 | 3.8E-02 | IRS1, TNFRSF11B, IRS2, PKM2, NGFR, SMPD1 |

PPAR, peroxisome proliferator–activated receptor.

# DISCUSSION

Obesity is associated with inflammation, which may play an important role in fatty liver disease and insulin resistance (1,2,17,18). In this study, we showed that in a group of obese Hispanics and African Americans, 44% of the participants had adipose tissue inflammation, which was associated with higher amounts of VAT and liver fat, hyperinsulinemia, and reduced β-cell function. At the molecular level, individuals with CLS showed upregulation of several genes belonging to the NF-κB stress pathway.

In obese individuals, inflammatory processes are thought to originate from the excessive accumulation of fat in the adipose tissue, which translates into recruitment of macrophages around dead adipocytes that form ring patterns known as CLS (12).
Consistent with a role for CLS in adipose tissue inflammation, we demonstrated the presence of CD11c+ dendritic cells, a subclass of proinflammatory macrophages linked to systemic insulin resistance, only in the adipose tissue of subjects with CLS (10,16). Moreover, we demonstrated that in obese Hispanics and African Americans, VAT, liver fat, and circulating TNF-α were significantly higher in individuals with CLS in fat biopsies, independent of ethnicity, sex, total body fat, and SAT. Such accumulation of fat in the visceral compartment and the liver markedly increases the risk for type 2 diabetes, and these metabolic abnormalities have been recognized as independent features of the metabolic syndrome (19,20). Indeed, inflammation is also closely associated with insulin resistance, both in the liver and the adipose tissue (21–23). This may be because of decreased insulin signaling by inflammatory mediators such as IL-1 and TNF-α (24). Consistent with these previous observations, we show that individuals with CLS (CLS+) had increased fasting glucose and insulin, as well as decreased DI, reflecting altered glucose homeostasis and β-cell function.

We further investigated the effect of inflammation separately in Hispanics and African Americans. The prevalence of individuals with inflammation was equally distributed among ethnicities, suggesting that there is no ethnic predisposition to adipose tissue inflammation. Previous reports have shown that Hispanics are more prone to accumulation of lipids in the VAT and the liver (11). Given the tight link between ectopic fat and inflammation, we therefore expected Hispanics to have a higher degree of inflammation. However, our results show that inflammation may occur in Hispanics regardless of VAT amount. Moreover, presence of CLS was associated with lower SI and increased plasma TNF-α concentrations. In contrast, African Americans usually display lower amounts of VAT; in this ethnic group, we found that VAT was significantly increased only in the presence of adipose tissue inflammation. Although we cannot establish any causal relationship at this point, these ethnic discrepancies suggest that in Hispanics, the presence of inflammation is not a major player in the accumulation of these adipose tissue depots but is instead associated with the development of systemic inflammation and insulin resistance. By contrast, in African Americans, who are usually protected against VAT accumulation, presence of adipose tissue inflammation may reflect a generalized increase in adipocyte depot mass and activation of inflammatory pathways. This suggests that presence of adipose tissue inflammation may be linked to distinct metabolic outcomes, depending on ethnicity.

Based on these clinical observations, we investigated how different gene expression patterns between CLS+ and CLS− individuals could provide a functional link between adipose tissue inflammation, hepatic fat accumulation, and insulin resistance. As a result, several molecules related to the NF-κB pathway were upregulated in CLS+ individuals. The NF-κB pathway mediates important stress responses and activates several proinflammatory cascades (25). Classically, its activation requires the binding of proinflammatory stimuli such as TNF-α, IL-1, or bacterial lipopolysaccharide to their appropriate membrane receptors, which subsequently trigger the activation cascade (26).
Activation of NF-κB results in enhanced transcription of IL-6, IL-8, and TNF-α, which trigger stress and inflammatory pathways and further stimulate the NF-κB pathway (21). NF-κB also leads to increased MCP-1, which locally recruits macrophages, and MMP9, which plays a role in inflammation-mediated tissue remodeling, such as adipocyte size expansion (27) and fibrosis in the liver (28). Notably, all of these molecules were upregulated in CLS+ individuals in the current study, reflecting a higher degree of inflammation and macrophage activation (Fig. 3). In mononuclear cells from obese individuals, the NF-κB pathway is also activated, contributing to higher blood concentrations of proinflammatory mediators such as IL-6, TNF-α, and MMP9 (29). This suggests that CLS-associated macrophages may have similar roles in adipose tissue, thus contributing to local release of proinflammatory cytokines (27). Using pathway analysis, we also observed that the most differentially regulated pathways were related to liver functions/diseases, such as the liver X receptor/retinoid X receptor activation, hepatic cholestasis, and hepatic fibrosis pathways, with activation of several genes related to inflammatory processes, and more specifically to the NF-κB pathway. These pathways play pivotal roles in liver injury and may lead to increased hepatic lipid synthesis (30–32).

Our present results show that CLS may be found in some but not all obese individuals, independently of their ethnic background or sex, and that their presence is associated with increased VAT, hepatic fat content, and insulin resistance (33). Although our results are based on associations, when evaluated together with the existing literature, the findings collectively suggest that with increasing obesity, adipocytes enlarge until reaching a threshold (33). This may promote adipocyte death, macrophage aggregation, and CLS formation (8). The ensuing inflammatory activity may trigger the NF-κB stress pathway and additional remodeling mechanisms, characterized by increased MMP9 protease expression (34,35). Excess TG accumulation in adipose tissue may subsequently spill over to the systemic circulation and in turn accumulate in ectopic tissues, such as VAT and the liver (7,36), which may both contribute to decreased SI. The fact that in Hispanics high amounts of VAT are not necessarily associated with adipose tissue inflammation suggests that such relationships may be more subtle and possibly depend on other environmental and/or genetic factors.

One limitation of our study is the absence of nutritional data. High-fat and high-fructose diets both induce insulin resistance and inflammation (37). It therefore remains possible that differences in dietary intake contribute to the inflammatory status of adipose tissue. Whether nutritional status affects adipose tissue inflammation and the presence of CLS over the long run remains to be investigated. Moreover, this study was limited to African Americans and Hispanics, and data on obesity duration were not collected. It therefore remains to be determined how the onset and duration of obesity affect adipose tissue inflammation in these populations. Finally, we did not separate adipocytes from other cell types.
Further studies using cell sorting will be required to discriminate the expression profile between cell types within the adipose tissue.

In conclusion, this study demonstrates that macrophage infiltration in SAT from obese individuals is equally distributed between sexes and ethnicities. Presence of inflammation is associated with higher VAT and hepatic fat content, as well as higher fasting glucose and insulin and reduced β-cell function, independent of total fat. These phenotypes may be attributed to upregulation of major gene pathways involved in proinflammatory cascades, such as the NF-κB stress pathway. Further intervention studies will be required to assess the time course of such metabolic alterations and establish causal relationships between adipose tissue inflammation, hepatic fat accumulation, and development of insulin resistance.

## ACKNOWLEDGMENTS

This work was supported by grants from the American Diabetes Association; National Institutes of Health Grant R01-DK-082574 to A.S.G.; the Robert C. and Veronica Atkins Foundation Grant to A.S.G. and M.I.G.; and the U.S. Department of Agriculture, Agricultural Research Service, under agreement No. 58-1950-7-70 to A.S.G. K.-A.L. is supported by a grant from the Swiss National Science Foundation (PBLA33-122719).

No potential conflicts of interest relevant to this article were reported.

K.-A.L. designed the study, contributed to data and sample collection, analyzed data, wrote the manuscript, and reviewed and edited the manuscript. S.M. analyzed data and reviewed and edited the manuscript. T.L.A., R.E.H., and T.C.A. contributed to data and sample collection and reviewed and edited the manuscript. J.S.K. reviewed and edited the manuscript. E.B., C.X., and A.S.G. contributed to data and sample collection and reviewed and edited the manuscript. H.A. designed the study and reviewed and edited the manuscript. M.I.G. designed the study, analyzed data, and reviewed and edited the manuscript.

# REFERENCES

abstract:

β-Cell function improves in patients with type 2 diabetes in response to an oral glucose stimulus after Roux-en-Y gastric bypass (RYGB) surgery. This has been linked to the exaggerated secretion of glucagon-like peptide 1 (GLP-1), but causality has not been established. The aim of this study was to investigate the role of GLP-1 in improving β-cell function and glucose tolerance and regulating glucagon release after RYGB using exendin(9-39) (Ex-9), a GLP-1 receptor (GLP-1R)–specific antagonist. Nine patients with type 2 diabetes were examined before and 1 week and 3 months after surgery. Each visit consisted of two experimental days, allowing a meal test with randomized infusion of saline or Ex-9.
After RYGB, glucose tolerance improved, β-cell glucose sensitivity (β-GS) doubled, the GLP-1 response greatly increased, and glucagon secretion was augmented. GLP-1R blockade did not affect β-cell function or meal-induced glucagon release before the operation but did impair glucose tolerance. After RYGB, β-GS decreased to preoperative levels, glucagon secretion increased, and glucose tolerance was impaired by Ex-9 infusion. Thus, the exaggerated effect of GLP-1 after RYGB is of major importance for the improvement in β-cell function, control of glucagon release, and glucose tolerance in patients with type 2 diabetes.

author: Nils B. Jørgensen; Carsten Dirksen; Kirstine N. Bojsen-Møller; Siv H. Jacobsen; Dorte Worm; Dorte L. Hansen; Viggo B. Kristiansen; Lars Naver; Sten Madsbad; Jens J. Holst. Corresponding author: Nils B. Jørgensen.
date: 2013-09
references:
title: Exaggerated Glucagon-Like Peptide 1 Response Is Important for Improved β-Cell Function and Glucose Tolerance After Roux-en-Y Gastric Bypass in Patients With Type 2 Diabetes

Hyperglycemia in patients with type 2 diabetes is resolved shortly after Roux-en-Y gastric bypass (RYGB), suggesting that mechanisms independent of weight loss contribute to the improvement in glycemic control (1–4).

Within 1 month and as early as 5 days after RYGB, β-cell function in response to a meal improves in subjects with type 2 diabetes, and this is accompanied by an increased postprandial glucagon-like peptide (GLP)-1 response (3,5,6). In contrast, after intravenous infusion of glucose, which does not elicit the incretin effect, an improvement in β-cell function is absent (5,7,8). Therefore, it could be speculated that the early improvements in β-cell function after RYGB are due to the enhanced GLP-1 secretion related to eating a meal, but causality has not been established (9).

In patients with type 2 diabetes, energy restriction per se is known to result in improved hepatic insulin sensitivity and decreased hepatic glucose production and, as a result, lowered fasting plasma glucose concentrations (10–12). Similar metabolic changes are seen after RYGB, when energy intake is limited (13,14), and this has led to the proposal that caloric restriction with a subsequent reduction in glucotoxicity, rather than an increased effect of GLP-1, is responsible for the improved β-cell function (14,15).

The aim of this study was to investigate the role of GLP-1 in the improved β-cell function and glucose tolerance seen after RYGB in subjects with type 2 diabetes. This was accomplished by pharmacologically blocking the GLP-1 receptor (GLP-1R) during a liquid meal tolerance test before and after surgery using exendin(9-39) (Ex-9; Bachem AG, Bubendorf, Switzerland), a specific GLP-1R antagonist (16).

Previous studies have documented increased meal-related glucagon secretion after RYGB despite improvements in insulin secretion and sensitivity and exaggerated GLP-1 release (3,17,18). This observation is surprising given the glucagonostatic properties of GLP-1 and insulin (19,20).
Therefore, a further aim of this study was to evaluate the interaction between GLP-1 and glucagon release after RYGB in both the fasting and postprandial states.

# RESEARCH DESIGN AND METHODS

Patients with type 2 diabetes were recruited from the Hvidovre Hospital's bariatric surgery program (Hvidovre, Denmark), met the criteria for bariatric surgery (age \>25 years and BMI \>35 kg/m^2^), and had accomplished a mandatory preoperative, diet-induced loss of 8% of total body wt before inclusion. Patients were excluded if they had uncontrolled hypothyroidism, had been taking antithyroid medication or anorectic agents within 3 months before the experiments, or had a fasting C-peptide level \<700 pmol/L. To confirm the diagnosis of type 2 diabetes, an oral glucose tolerance test (OGTT) was performed ≤1 month before the first experiment.

The study was approved by the Municipal Ethical Committee of Copenhagen (reg. nr. H-A-2008-080-31742), was in accordance with the Declaration of Helsinki II, and was registered with [clinicaltrials.gov](http://clinicaltrials.gov) (NCT01579981) and the Danish Data Protection Agency. Written informed consent was obtained from all patients before entering the study. Incretin-based therapies were put on hold for at least 14 days and all other antidiabetic medications for at least 3 days before the first preoperative experiment. Insulin analogs were replaced with NPH insulin at least 2 weeks before the first experiment. RYGB was performed as previously described (18). Patients were examined at 3 visits: before, 1 week after, and 3 months after RYGB. Visits consisted of 2 days where the patients were examined during a liquid meal tolerance test with a concurrent patient-blinded, primed, continuous infusion of Ex-9 or isotonic saline in random order. On each study day, patients met at 0800 h after a 10-h overnight fast. Patients were weighed (Tanita Corp., Tokyo, Japan), a catheter was inserted into the antecubital vein of each arm (one for blood sampling and one for infusion), and three fasting blood samples were drawn (−40 to −30 min). A primed continuous infusion of either saline or Ex-9 was initiated at time −30 min using a precision infusion pump (P2000; IVAC Medical Systems, Hampshire, U.K.). Saline was infused at a rate corresponding to the Ex-9 infusion volumes. After infusion was started, participants maintained a fasting state for 30 min to allow the drug to reach target tissues and drug concentrations to stabilize before the meal. Three further baseline samples were drawn just before ingestion of the meal (−10 to 0 min). At time 0 min, a liquid meal (Fresubin Energy Drink, 200 mL, 300 kcal, carbohydrate \[50% of energy\], protein \[15% of energy\], fat \[35% of energy\]; Fresenius Kabi Deutschland, Bad Homburg, Germany) was provided. To estimate gastric emptying, 1 g of paracetamol (Pamol; Nycomed Danmark, Roskilde, Denmark) was crushed to a powder and added to the meal. To avoid dumping after surgery and to obtain a comparable stimulus before and after RYGB, meal ingestion was supervised to ensure even distribution of meal intake over a 30-min period.

Blood was sampled before and frequently following the meal for a total of 4 h (−40, −35, −30, −10, −5, 0, 30, 45, 60, 90, 120, 180, and 240 min). During each test, patients were placed sitting in a reclined position in a hospital bed, and no strenuous activity was allowed.

## Ex-9.

Ex-9 was purchased from Bachem AG.
Ex-9 from the same lot was used for all experiments. The peptide was dissolved in sterilized water containing 1% human albumin (Plasma Product Unit PSU; Novo Nordisk A/S, Bagsvaerd, Denmark) and subjected to sterile filtration. The dissolved peptide was dispensed in appropriate volumes and stored frozen (−20°C) under sterile conditions until the day of the experiment. The peptide was demonstrated to be 99.5% pure by high-performance liquid chromatography. The Ex-9 infusion rate of 900 pmol/kg body wt/min was chosen because this had previously been reported to block the GLP-1R by 95% (21,22), and the resulting steady-state concentration of Ex-9 was predicted to be 391 nmol/L using previously published pharmacokinetic parameters (23). Likewise, the bolus size was calculated to be 43,000 pmol/kg body wt.

## Sample collection and laboratory analyses.

Blood was collected into clot activator tubes for insulin and C-peptide analysis and into prechilled EDTA tubes for analysis of GLP-1, glucose-dependent insulinotropic polypeptide (GIP), glucagon, glucose, paracetamol, and Ex-9.

Clot activator tubes were left to coagulate for 30 min, whereas EDTA tubes were placed on ice until centrifuged at 4°C. Plasma glucose was measured immediately using the glucose oxidase technique (YSI model 2300 STAT Plus; YSI, Yellow Springs, OH). Samples of GLP-1, GIP, glucagon, and Ex-9 were stored at −20°C. All other samples were frozen and stored at −80°C until batch analysis.

Serum insulin and C-peptide concentrations were determined by AutoDELFIA fluoroimmunoassay (Wallac OY, Turku, Finland). Plasma samples were assayed for total GLP-1 immunoreactivity, as described previously (24). Total GIP was measured using a C-terminally directed antiserum 867 with characteristics similar to the previously employed R65 (25,26). Glucagon was measured with the LINCO assay (Millipore, Billerica, MA) because it does not cross-react with Ex-9 (27). Ex-9 was measured using antibody 3145 raised in rabbits immunized with exendin-4, which shows 100% cross-reactivity with Ex-9 but \<0.01% cross-reactivity with GLP-1, glucagon, or GIP (27). Paracetamol was measured using a colorimetric assay (Roche Diagnostics GmbH, Mannheim, Germany), and HbA~1c~ was measured using high-performance liquid chromatography with a cation exchange column (Tosoh Bioscience, Tokyo, Japan).

## Calculations and statistical analyses.

Fasting glucose and hormone values were calculated as the mean of the time points from the interval −40 to −30 min. Total area under the curve (AUC) was calculated using the trapezoidal model. Baseline values were the mean of the time points from the interval −10 to 0 min. Incremental AUC (I-AUC) was calculated as AUC above baseline.

Insulin resistance was calculated as homeostasis model assessment of insulin resistance (HOMA-IR) using the following equation: Insulin~fasting~ \[pmol/L\] × Glu~fasting~ \[mmol/L\] / (22.5 × 6.945), where Glu is glucose.

Prehepatic insulin secretion rates (ISRs) were calculated by deconvolution of peripheral C-peptide concentrations and application of population-based parameters for C-peptide kinetics using the ISEC software (28,29). ISR is expressed as picomoles × kilograms^−1^ × minutes^−1^. The relationship between glucose concentrations and ISR during the meal, the β-cell glucose sensitivity (β-GS), was characterized as previously described (3).
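Several of the derived quantities defined above reduce to a few lines of code. The sketch below shows the trapezoidal AUC and baseline-subtracted I-AUC, the pmol-based HOMA-IR, a deliberately crude slope proxy for β-GS (the actual model of ref. 3 is more elaborate and is not reproduced here), and the steady-state logic behind the predicted Ex-9 concentration. The clearance value is back-calculated from the quoted 391 nmol/L and is an assumption, not a reported parameter.

```python
# Illustrative re-implementations of the derived quantities above
# (not the study's code; ISEC deconvolution is not reproduced).
import numpy as np

def total_auc(t_min, y):
    """Total AUC by the trapezoidal rule; units are [y] x min."""
    t, y = np.asarray(t_min, float), np.asarray(y, float)
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(t)))

def incremental_auc(t_min, y, baseline):
    """I-AUC = AUC above the premeal baseline (mean of the -10 to 0 min samples)."""
    return total_auc(t_min, np.asarray(y, float) - baseline)

def homa_ir(fasting_insulin_pmol_l, fasting_glucose_mmol_l):
    """HOMA-IR per the equation above; 6.945 converts insulin from pmol/L to mU/L."""
    return fasting_insulin_pmol_l * fasting_glucose_mmol_l / (22.5 * 6.945)

def beta_gs_proxy(glucose_mmol_l, isr_pmol_kg_min):
    """Crude beta-GS stand-in: least-squares slope of ISR on glucose during
    the meal. The published method (ref. 3) fits a fuller dose-response model."""
    return float(np.polyfit(glucose_mmol_l, isr_pmol_kg_min, 1)[0])

def ex9_steady_state_nmol_l(rate_pmol_kg_min, clearance_ml_kg_min):
    """One-compartment steady state: Css = infusion rate / clearance.
    pmol/kg/min divided by mL/kg/min gives pmol/mL, i.e., nmol/L."""
    return rate_pmol_kg_min / clearance_ml_kg_min

# A clearance of ~2.3 mL/kg/min (back-calculated from the quoted prediction,
# not a parameter stated in the text) reproduces the expected value:
print(ex9_steady_state_nmol_l(900, 2.3))  # ~391 nmol/L
```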
The time to peak paracetamol concentration was used as a marker of gastric emptying.

Data are expressed as means ± SE. Statistical analyses were carried out using the Wilcoxon matched pairs signed rank test or the Kruskal-Wallis rank sum test, as appropriate. *P* values \<0.05 were considered significant. The analyses were carried out using the R 2.11.1 statistical software package.

# RESULTS

## Patient characteristics.

Eleven patients with type 2 diabetes were recruited for the study. Two patients were excluded from data analysis: one because intravenous cannulation could not be performed on the second experimental day and the other because RYGB was abandoned because of massive intestinal adhesions resulting from earlier gastrointestinal surgery and peritonitis.

Thus, a total of nine patients with type 2 diabetes (age 50 ± 3 years; three females and six males; BMI 39.2 ± 2.4 kg/m^2^; diabetes duration 5.7 ± 1.3 years; HbA~1c~ 6.5 ± 0.3% \[48 ± 4 mmol/mol\]; 2-h OGTT plasma glucose 14.4 ± 1.0 mmol/L) were examined 9 ± 2 days before and 7 ± 1 days (1 week) and 96 ± 2 days (3 months) after RYGB. Preoperatively, one patient was treated with diet alone and eight were treated with metformin either as monotherapy (*n* = 4) or in combination with glimepiride (1), vildagliptin (1), liraglutide (1), or liraglutide and insulin glargine (1). None of the patients received any antidiabetic medication after RYGB. Further comorbidities were hypertension (6), hypercholesterolemia (2), hypothyroidism (*n* = 1, well controlled, with thyroid-stimulating hormone within the normal range), and myocardial infarction 4 years earlier (1). None had records or symptoms of gastroparesis, neuropathy, retinopathy, or nephropathy.

The patients entered the bariatric surgery program 337 ± 40 days before RYGB with a BMI of 42.9 ± 2.8 kg/m^2^. On the day of the OGTT (30 ± 9 days before RYGB), BMI had decreased to 39.5 ± 2.5 kg/m^2^ (92 ± 0.9% of initial BMI), which was not different from that measured on the first experimental day. One week after RYGB, patients had lost 3.9 ± 0.4% (*P* \< 0.01) of preoperative BMI, and after 3 months they had lost 13.5 ± 1.1% (*P* \< 0.01). HbA~1c~ decreased to 5.7 ± 0.2% (39 ± 2 mmol/mol; *P* \< 0.01) 3 months after surgery.

## Ex-9 concentrations.

Mean Ex-9 concentrations from time 0 to 240 min were similar before (404 ± 52 nmol/L) and after RYGB (1 week: 444 ± 65 nmol/L; 3 months: 405 ± 51 nmol/L; *P* = 0.7) and were stable throughout the experiment (Fig. 1).

## Gastric emptying.

Paracetamol profiles are shown in Fig. 1. Before the operation, peak paracetamol concentrations were reached 70 ± 10 min after the start of the meal, but after RYGB they peaked at the first postprandial sample point (30 min) (1 week: 33 ± 2 min \[*P* \< 0.01\]; 3 months: 32 ± 2 min \[*P* \< 0.01\]). GLP-1R blockade had a minor, nonsignificant effect on time to peak paracetamol concentration before the operation (before Ex-9: 60 ± 10 min \[*P* = 0.49\]) but no detectable effect after RYGB (Ex-9 at 1 week: 30 ± 0 min; at 3 months: 30 ± 0 min).

## Fasting glucose and hormone concentrations.

Data regarding fasting glucose and hormone concentrations are listed in Table 1.
Following RYGB, fasting glucose, insulin, and C-peptide concentrations decreased, whereas fasting GLP-1, GIP, and glucagon concentrations did not substantially change.

Fasting glucose and hormone concentrations (−40, −35, and −30 min) before infusion of saline or Ex-9

![](3044tbl1)

Changes in glucose and hormone concentrations 30 min after start of infusion, that is, before the meal, are reported in Table 2. Ex-9 infusion caused glucose concentrations to increase significantly at all study times, whereas ISR decreased the most before and the least 3 months after the operation. Fasting GLP-1 and GIP concentrations were not affected by Ex-9 infusion, but glucagon concentrations increased significantly compared with the changes observed during saline infusion before and 3 months after RYGB.

Changes in fasting glucose and hormone concentrations after a primed infusion of saline or Ex-9 but before the meal

![](3044tbl2)

## Insulin resistance.

Insulin resistance decreased by ∼50% after RYGB (HOMA-IR before RYGB: 5.72 ± 0.74; 1 week after: 2.71 ± 0.41 \[*P* = 0.004\]; 3 months after: 2.18 ± 0.30 \[*P* = 0.004\]). HOMA-IR did not differ between the days of saline and Ex-9 infusion at any of the three visits (data not shown).

## Postprandial glucose concentrations and insulin secretion.

Data for postprandial glucose concentrations and insulin secretion are reported in Table 3, and glucose and ISR profiles are shown in Fig. 2. Following RYGB, 2-h postprandial glucose concentrations were significantly decreased but peak glucose concentrations were unchanged. Blocking the GLP-1R with Ex-9 caused 2-h and peak glucose concentrations to significantly increase compared with saline at all visits.

Postprandial glucose and hormone responses with saline or Ex-9 infusion

![](3044tbl3)

After the operation, insulin secretion was significantly increased in response to the meal as evidenced by both increased peak ISR and total meal-induced insulin secretion (I-AUC ISR). Ex-9 infusion resulted in significant decreases in peak ISR and I-AUC ISR 1 week and 3 months after RYGB but not before the operation.

## β-Cell glucose sensitivity.

Data for β-GS are shown in Fig. 3. In response to the liquid meal, β-GS increased as a consequence of RYGB. One week after the operation, β-GS had almost doubled (*P* = 0.008), and it remained elevated at 3 months (*P* = 0.027). When the GLP-1R was blocked with Ex-9 before the operation, a nonsignificant decrease in β-GS (*P* = 0.07) was observed. After RYGB, Ex-9 infusion reduced β-GS to preoperative levels both at 1 week and 3 months after the operation, and these decreases were highly significant (both *P* = 0.004 vs. saline).

## Postprandial GLP-1, GIP, and glucagon concentrations.

After RYGB, I-AUC GLP-1 was increased eightfold, reflecting a greatly elevated meal-induced GLP-1 response (Fig. 4*A*). Ex-9 infusion resulted in a further augmentation of GLP-1 release both before and after RYGB. The GIP response (Fig. 4*B*) was not affected by RYGB or Ex-9 infusion.

Meal-induced glucagon secretion (I-AUC glucagon) (Fig. 4*C* and Table 3) increased after RYGB.
Before the operation, I-AUC glucagon was not affected by GLP-1R blockade (*P* = 0.3), but after RYGB, Ex-9 infusion resulted in a marked and significant increase in glucagon secretion upon stimulation by the meal compared with the corresponding day of saline infusion.\n\n# DISCUSSION\n\nOne week and 3 months after RYGB, glucose tolerance and \u03b2-GS are improved and the meal-related GLP-1 response and glucagon release are increased in subjects with type 2 diabetes, as we have previously demonstrated (3). The key finding in the current study is that pharmacological blockade of the GLP-1R after RYGB reverses the improvements in \u03b2-GS and glucose tolerance and increases postprandial glucagon release. These results strongly suggest that the increased endogenous GLP-1 secretion is of great importance for the acute effects of RYGB on \u03b2-cell function and for glucose tolerance in patients with type 2 diabetes. Furthermore, we demonstrate that glucagon secretion is inhibited by GLP-1 after RYGB.\n\nBefore RYGB, GLP-1R inhibition impaired glucose tolerance because of an increment in fasting and postprandial glucose concentrations. Fasting insulin secretion decreased but meal-induced secretion was unaltered, and \u03b2-GS was not significantly reduced when Ex-9 was infused. Fasting glucagon concentrations increased during GLP-1R blockade, but the meal-induced glucagon response was unaltered. The effects of Ex-9 infusion on insulin and glucagon secretion during fasting were similar to those reported in two recent studies of patients with type 2 diabetes (30,31), but contrary to our results, these studies found a pronounced effect of GLP-1R inhibition on insulin and glucagon secretion when endogenous GLP-1 secretion was stimulated. However, both studies used the hyperglycemic clamp technique to keep plasma glucose concentrations fixed, whereas our study allowed plasma glucose concentrations to vary; as a consequence, they were higher on the day of Ex-9 infusion. This greater glucose stimulus is likely to counteract the effects of GLP-1R antagonism on postprandial insulin and glucagon secretion in our study and may explain these seemingly contradictory results.\n\nOne week after RYGB, plasma glucose concentrations had improved in both the fasting and postprandial states. After the operation, insulin secretion increased, \u03b2-GS doubled, the GLP-1 response was greatly increased, and meal-induced glucagon release was elevated. These results confirm our previous report and demonstrate that RYGB improves \u03b2-cell function soon after surgery (3). GLP-1R blockade caused glucose tolerance to deteriorate and \u03b2-GS to decrease to preoperative levels. At the same time, postprandial GLP-1 and glucagon responses increased.\n\nThree months after the operation, glucose tolerance was further improved with enhanced insulin and GLP-1 secretion compared with before surgery, and overall glucose control was improved, as evidenced by a decreased HbA~1c~, despite patients not receiving antidiabetic medication. GLP-1R antagonism produced results similar to those observed 1 week after the operation, again returning \u03b2-GS to preoperative levels and augmenting the meal-related glucagon response, resulting in an impairment of glucose tolerance.\n\nBefore the operation, the weight of patients was stable, but after RYGB they experienced rapid weight loss, amounting to \u223c4% within a week. 
It has been suggested that the acute effects of RYGB on β-cell function in patients with type 2 diabetes are strictly related to reduced energy intake (14,15), although reports on the effects of such acute energy restriction are conflicting (12,32). However, while energy restriction has little or no effect on GLP-1 release, postprandial GLP-1 secretion is greatly increased after RYGB (3,5,14,33–35). Previous reports have shown that the incretin effect after RYGB is normalized in patients with type 2 diabetes and that this cannot be explained simply by weight loss (36,37). Furthermore, in patients with type 2 diabetes, β-cell function in response to a meal or an oral glucose load has consistently been reported to improve within the first month after RYGB, but not after gastric restrictive surgery, where patients undergo the same postoperative dietary regimen, or when β-cell function is evaluated by intravenous glucose infusion (3,5–7,33,38,39). The present data indicate that the exaggerated GLP-1 response in patients with type 2 diabetes is the main explanation for the improvement in β-cell function after RYGB in response to a meal. Reduced glucotoxicity may play a role in the improved β-cell function after surgery, but it seems to be of minor importance compared with the actions of GLP-1 (40,41).

Of note, HOMA-IR was not different between saline and Ex-9 infusion at any visit. Therefore, the changes we observed with GLP-1R blockade are unlikely to be a consequence of adaptation of β-cells to changes in insulin resistance (42).

It has been suggested that the large GLP-1 response after RYGB is triggered by the accelerated delivery of nutrients to the enteroendocrine cells of the distal ileum (43,44). We recently reported that gastric emptying of liquids, measured using a scintigraphic method, is greatly accelerated in patients who have received RYGB, and this was associated with an increased release of GLP-1 (45). In agreement with this finding, paracetamol absorption in this study was accelerated after RYGB, supporting such a mechanism of action. GLP-1R blockade increased postprandial GLP-1 release by 60–100% before and after RYGB, a finding that may be related to inhibition of GLP-1 receptors on the enteroendocrine cells that normally feed back negatively on GLP-1 secretion (46). Although GLP-1 inhibits gastric emptying (19), paracetamol absorption kinetics were not significantly changed with Ex-9 infusion, making accelerated gastric emptying an unlikely mechanism by which GLP-1R blockade increased GLP-1 secretion in this study. It must be emphasized that estimation of gastric emptying using this method is limited by the blood sampling frequency, especially after RYGB.

Previous studies have indicated that GLP-1 tonically suppresses fasting glucagon concentrations in patients with type 2 diabetes, and we were able to reproduce that finding here (31). The glucagonostatic properties of GLP-1 and the exaggerated response after RYGB would predict reduced meal-induced glucagon release (19); however, the glucagon response increases after the operation. Eliminating the effect of endogenous GLP-1 with Ex-9 further increased glucagon secretion, implying that meal-induced glucagon release continues to be under the inhibitory control of GLP-1. The mechanism underlying the increased meal-induced glucagon response remains to be elucidated.

Salehi et al. (47) reported that Ex-9 infusion resulted in a more pronounced decrease of insulin secretion when a meal was ingested during a hyperglycemic clamp in nondiabetic patients who had received RYGB compared with normal controls. Strengths of the current study include the prospective design, the inclusion of patients with type 2 diabetes, and the use of a physiological meal test, allowing us to demonstrate the effects of endogenous GLP-1 signaling before and after RYGB and to evaluate both β-cell function and glucose tolerance. Therefore, our results expand the observations of Salehi et al.

We used a primed infusion of Ex-9 at a rate of 900 pmol × kg^−1^ × min^−1^. This dose of Ex-9, which resulted in stable plasma concentrations of ~400 nmol/L on all experimental days, has previously been reported to result in 95% inhibition of the GLP-1R even in the presence of very high GLP-1 concentrations, such as those expected in the portal vascular bed after RYGB upon stimulation by a meal (21,22). GLP-1R blockade did not influence GIP secretion in this study, and it has previously been shown that Ex-9 infusion does not interfere with GIP signaling on the β-cells (48). In our own studies of the effect of GIP in COS-7 cells transfected with the human GIP receptor, there was no effect of Ex-9 at nanomolar to micromolar concentrations (M. Rosenkilde, unpublished observations). Collectively, this shows that Ex-9 specifically inhibits the actions of GLP-1 with respect to the incretin effect.

A limitation of this study is the small sample size of nine patients, which makes the detection of more subtle changes less likely. Despite this, we were able to demonstrate a highly significant effect of GLP-1R inhibition on insulin secretion and β-cell function after RYGB, underscoring the strength of the association.

In light of the marked inhibition of β-cell function, glucose tolerance was only moderately affected during Ex-9 infusion after the operation. This could reflect an increase in the relative importance of non-insulin-mediated glucose uptake for glucose tolerance (49–51), but the study was not designed to evaluate this.

In addition, we did not account for hepatic glucose production in this study, which could already be decreased 1 week after the operation, perhaps as a result of caloric restriction (9). Further studies are needed to investigate this early aspect of glucose metabolism after RYGB.

In vitro studies of the murine GLP-1R suggest that it is constitutively active and inhibited by Ex-9 even in the absence of GLP-1; i.e., Ex-9 is an inverse agonist (52). It is questionable whether this is a property of the human GLP-1R in vivo (53), but if so, our results could be influenced by an increased GLP-1R sensitivity to the inverse agonism of Ex-9 after surgery. However, the effects of Ex-9 on basal insulin and glucagon secretion were greatest before the operation and decreased with time from surgery, which makes it highly unlikely that Ex-9 would more potently inhibit GLP-1R signaling after RYGB.
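
As a rough, hedged consistency check on the degree of receptor blockade quoted above, one can invert the simple one-site competitive-occupancy relation. Neither the Ki nor the model below comes from the study; they are illustrative values chosen only to be consistent with the ~95% inhibition at ~400 nmol/L Ex-9 that it cites.

```python
# Minimal occupancy sketch, assuming simple one-site competitive binding.
# The Ki is not from the study; it is back-calculated from the figures above.

def fractional_occupancy(inhibitor_nM, ki_nM):
    """Fraction of receptors bound by antagonist at equilibrium (no agonist)."""
    return inhibitor_nM / (inhibitor_nM + ki_nM)

ex9_plasma_nM = 400.0                              # steady-state Ex-9 level reported above
implied_ki = ex9_plasma_nM * (1 - 0.95) / 0.95     # Ki consistent with 95% blockade (~21 nM)
print(f"implied Ki ~ {implied_ki:.0f} nM")
print(f"occupancy at 400 nM: {fractional_occupancy(ex9_plasma_nM, implied_ki):.0%}")
```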

We conclude that an increased effect of GLP-1 as a consequence of hypersecretion is the main explanation for the improved β-cell function after RYGB in response to a meal in patients with type 2 diabetes. Furthermore, the effects of GLP-1 are important in controlling glucagon release and glucose tolerance in these patients after the operation.

## ACKNOWLEDGMENTS

This work was carried out as a part of the program of the UNIK: Food, Fitness & Pharma for Health and Disease (see [www.foodfitnesspharma.ku.dk](http://www.foodfitnesspharma.ku.dk)). The UNIK project is supported by the Danish Ministry of Science, Technology and Innovation. Further support was received from the Danish Diabetes Association, The Novo Nordisk Foundation, and The Strategic Research Council for the Capital Area and the Danish Research Agency (Ministry of Science, Technology and Innovation). The purchase of Ex-9 was made possible through a donation from the Desirée & Niels Ydes Foundation.

No potential conflicts of interest relevant to this article were reported.

N.B.J. planned and conducted experiments, researched data, and wrote the manuscript. C.D. conducted experiments, contributed to discussion, and reviewed the manuscript. K.N.B.-M., S.H.J., S.M., and J.J.H. contributed to discussion and reviewed the manuscript. D.W., D.L.H., V.B.K., and L.N. reviewed the manuscript. S.M. and J.J.H. planned experiments. N.B.J. is the guarantor of this work and, as such, had full access to all the data in the study and takes responsibility for the integrity of the data and the accuracy of the data analysis.

Parts of this study were presented in oral form at the 48th Annual Meeting of the European Association for the Study of Diabetes, Berlin, Germany, 30 September–1 October 2012, and in poster form at the 72nd Scientific Sessions of the American Diabetes Association, Philadelphia, Pennsylvania, 8–12 June 2012.

This work would not have been possible without the technical assistance of Alis Sloth Andersen and Dorthe Baunbjerg Nielsen (Department of Endocrinology, Hvidovre Hospital) and Lene Brus Albæk (Department of Biomedical Sciences, Faculty of Health Sciences, University of Copenhagen).

# REFERENCES

author: Nigel Irwin; Victor Alan Gault
Corresponding author: Victor Alan Gault.
date: 2013-09
references:
title: Unraveling the Mechanisms Underlying Olanzapine-Induced Insulin Resistance

Atypical antipsychotics (AAPs) are widely prescribed agents for treatment of schizophrenia and other related psychiatric disorders. Although AAPs were a major development in psychopharmacology, so-called second-generation agents such as olanzapine have exhibited unexpected and unfavorable metabolic side effects. These side effects include weight gain, glucose intolerance, and insulin resistance, all of which increase the likelihood of developing diabetes and cardiovascular disease (1).
Nonetheless, how these adverse metabolic effects arise following AAP treatment remains unclear. A key question is whether AAP-associated metabolic impairments stem from the psychiatric illness itself or are merely secondary to weight gain.

In this issue, Teff et al. (2) examine the direct effects of second-generation AAPs on insulin resistance and postprandial gut hormone profiles following a mixed meal. The antipsychotic drugs used in these experiments – olanzapine and aripiprazole – were administered to healthy volunteers for 9 consecutive days to exclude potential confounding issues associated with psychiatric disease or potential weight gain. In the clinic, olanzapine therapy tends to result in weight gain and metabolic dysregulation (3), whereas aripiprazole is considered metabolically neutral (4). Thus, Teff et al. logically hypothesized that detrimental effects on meal-related metabolism would be limited to olanzapine. Consistent with this hypothesis, the authors showed that aripiprazole had no effect on body weight, blood pressure, and circulating levels of key blood parameters over the 9-day treatment period. Aripiprazole also did not result in significant changes in postprandial metabolism following a mixed meal. In contrast, olanzapine increased fasting plasma insulin by day 9. This corresponded with postprandial hyperinsulinemia, suggesting that olanzapine had detrimental effects on insulin sensitivity. Indeed, hyperinsulinemic-euglycemic clamps supported this theory. However, aripiprazole also induced insulin resistance, despite it generally being considered to lack metabolic effects.

A unique aspect of the study by Teff et al. (2) was participant ingestion of a mixed meal that elicited both incretin responses (5) and neurally mediated insulin release (6), both of which could be critical for the metabolic effects of centrally acting drugs. Surprisingly, olanzapine administration increased postprandial glucagon-like peptide-1 (GLP-1) secretion and circulating glucagon levels. These responses were unexpected for two reasons. First, GLP-1 is believed to improve insulin sensitivity and reduce weight gain. Second, it directly inhibits glucagon release (7). The authors speculate that other unknown factors, such as glucose-dependent insulinotropic polypeptide (GIP) secretion or cholinergic vagally mediated actions, may mediate the altered postprandial metabolic profile following olanzapine administration. In this regard, it has recently been shown that the biologic action of GIP can be potentiated by xenin-25, a peptide cosecreted with GIP from intestinal K cells, which is thought to indirectly activate muscarinic receptors (8). Olanzapine is a well-characterized muscarinic receptor antagonist (9). Thus, interactions between GIP and other vagal inputs could be of importance for the postprandial metabolic effects of AAPs (Fig. 1).

The new data are interesting but need to be corroborated in studies with larger sample sizes to increase power, and also to be verified in psychiatric patients. A previous study has shown no effect of olanzapine on gut hormone secretion (10). The direct effect of AAPs on GIP secretion and action certainly needs to be considered. Although weight gain did not affect data interpretation, Teff et al. (2) clearly acknowledge that adiposity could be a confounding factor. In addition, the nutrient composition of the mixed meal could have important implications for GIP and GLP-1 secretion.
Both incretin hormones are strongly released in response to carbohydrates, but GIP is released much more powerfully than GLP-1 in response to fat (11).

Taken together, the data illustrate that olanzapine can induce insulin resistance and postprandial hormonal dysregulation independently of weight gain. Although the regulatory mechanisms involved remain to be fully elucidated, the well-characterized weight gain following prolonged olanzapine administration would likely exacerbate these effects. Thus, interventions to inhibit weight gain in patients receiving AAP therapy may be only partially effective in preventing metabolic disease. The report by Teff et al. (2) should stimulate continued efforts aimed at resolving the direct detrimental metabolic effects of AAPs, which may ultimately lead to improved treatment options for these patients.

## ACKNOWLEDGMENTS

No potential conflicts of interest relevant to this article were reported.

# REFERENCES

date: 2017-03
title: EARLY CANCER DIAGNOSIS SAVES LIVES, CUTS TREATMENT COSTS

**3 FEBRUARY 2017 | GENEVA –** New guidance from WHO, launched ahead of World Cancer Day (4 February), aims to improve the chances of survival for people living with cancer by ensuring that health services can focus on diagnosing and treating the disease earlier.

New WHO figures released this week indicate that each year 8.8 million people die from cancer, mostly in low- and middle-income countries. One problem is that many cancer cases are diagnosed too late. Even in countries with optimal health systems and services, many cancer cases are diagnosed at an advanced stage, when they are harder to treat successfully.

"Diagnosing cancer in late stages, and the inability to provide treatment, condemns many people to unnecessary suffering and early death," says Dr Etienne Krug, Director of WHO's Department for the Management of Noncommunicable Diseases, Disability, Violence and Injury Prevention.

"By taking the steps to implement WHO's new guidance, healthcare planners can improve early diagnosis of cancer and ensure prompt treatment, especially for breast, cervical, and colorectal cancers. This will result in more people surviving cancer.
It will also be less expensive to treat and cure cancer patients."

All countries can take steps to improve early diagnosis of cancer, according to WHO's new Guide to cancer early diagnosis.

**The three steps to early diagnosis are:**

1. Improve public awareness of different cancer symptoms and encourage people to seek care when these arise.
2. Invest in strengthening and equipping health services and training health workers so they can conduct accurate and timely diagnostics.
3. Ensure people living with cancer can access safe and effective treatment, including pain relief, without incurring prohibitive personal or financial hardship.

Challenges are clearly greater in low- and middle-income countries, which have less capacity to provide access to effective diagnostic services, including imaging, laboratory tests, and pathology – all key to helping detect cancers and plan treatment. Countries also currently differ in their capacity to refer cancer patients to the appropriate level of care.

WHO encourages these countries to prioritize basic, high-impact and low-cost cancer diagnosis and treatment services. The Organization also recommends reducing the need for people to pay for care out of their own pockets, which prevents many from seeking help in the first place.

Detecting cancer early also greatly reduces cancer's financial impact: not only is the cost of treatment much lower in cancer's early stages, but people can also continue to work and support their families if they can access effective treatment in time. In 2010, the total annual economic cost of cancer through healthcare expenditure and loss of productivity was estimated at US$ 1.16 trillion.

Strategies to improve early diagnosis can be readily built into health systems at a low cost. In turn, effective early diagnosis can help detect cancer in patients at an earlier stage, enabling treatment that is generally more effective, less complex, and less expensive. For example, studies in high-income countries have shown that treatment for cancer patients diagnosed early is 2 to 4 times less expensive than treatment for people diagnosed at more advanced stages.

Dr Oleg Chestnov, WHO Assistant Director-General for Noncommunicable Diseases and Mental Health, notes: "Accelerated government action to strengthen cancer early diagnosis is key to meet global health and development goals, including the Sustainable Development Goals (SDGs)."

SDG 3 aims to ensure healthy lives and promote well-being for all at all ages. Countries agreed to a target of reducing premature deaths from cancers and other noncommunicable diseases (NCDs) by one third by 2030. They also agreed to achieve universal health coverage, including financial risk protection, access to quality essential health-care services, and access to safe, effective, quality and affordable essential medicines and vaccines for all. At the same time, efforts to meet other SDG targets, such as improving environmental health and reducing social inequalities, can also help reduce the cancer burden.

Cancer is now responsible for almost 1 in 6 deaths globally. More than 14 million people develop cancer every year, and this figure is projected to rise to over 21 million by 2030.
Progress on strengthening early cancer diagnosis and providing basic treatment for all can help countries meet national targets tied to the SDGs.

Available from: 

abstract: A report on the 44th Annual *Drosophila* Research Conference, Chicago, USA, 5-9 March 2003.
author: Stephen Richards
date: 2003
institute: 1Human Genome Sequencing Center, Department of Human and Molecular Genetics, Baylor College of Medicine, Houston, TX 77006, USA. E-mail: email@example.com
title: How is the *Drosophila* research community making use of the genome sequence?

*Drosophila* researchers have capitalized on the genome sequence in the three years since it was released. From identifying paralogs and orthologs to microarray analysis of specific cell types, it is clear that the genome sequence is changing approaches to *Drosophila* research. In this review of the 2003 *Drosophila* research conference, I have focused on talks illustrating the various ways in which the genome sequence is being used.

The Larry Sandler Award is presented annually to the graduate student with the best thesis that uses *Drosophila* as a model system. This year's recipient, Sinisa Urban (University of Cambridge, UK), gave perhaps the best presentation of the meeting. His thesis focused on how the cell controls the release of the Spitz epidermal growth factor (EGF) signal. In *Drosophila*, a single EGF receptor is used repeatedly in over 60 different contexts; control of how the signal is released is therefore critical. Spitz is a transmembrane protein, which must be cleaved by a protease to release the signaling portion for secretion. Firstly, Urban presented biochemical data showing that cleavage of the Spitz protein by Rhomboid occurs in the Golgi apparatus and is followed by glycosylation and secretion of the Spitz signaling portion. Secondly, using biochemical and mutagenic approaches, he demonstrated that Rhomboid is a serine protease: it contains the residues necessary for serine protease catalysis, and is inhibited by known serine protease inhibitors. He also identified a family of seven Rhomboid-like proteins in *Drosophila* and additional Rhomboid-like proteins in species as diverse as the Gram-negative bacteria *Pseudomonas aeruginosa* and *Providencia stuartii.* Finally, he discussed the cleavage site in the Spitz protein: a seven-amino-acid sequence, ASIASGA in the single-letter amino-acid code, in a transmembrane region of the Spitz protein. This motif is also present in the TGFα and Delta signaling proteins, suggesting they may also be substrates for Rhomboid cleavage.
The ASIASGA motif seems to have two functions that allow it to be cleaved by Rhomboid: it produces a kink in the transmembrane α helix, and it forms a hydrophilic pocket at the top of the helix, allowing water, which is necessary for protease activity, to enter the cleavage site.

Michelle Markstein (University of California, Berkeley, USA) used a computational method (Fly Enhancer) to search the genome sequence for clusters of binding sites for the Dorsal transcription factor. Looking for clusters of binding sequences appears to improve the sensitivity of such methods and has allowed the identification of approximately one third of the genes estimated to be directly affected by Dorsal. Besides known targets such as *zen*, *sog* and *brinker*, she found novel targets, including *Phm*, *Ady* and CG12443; these were confirmed by embryonic *in situ* hybridization and expression of *lacZ* under the control of the putative enhancer. Interestingly, it seems that clusters of different enhancer binding sequences may be more diagnostic for the identification of *cis*-control regions than clusters of a single binding site (a toy sketch of this kind of window-based cluster search is given below).

A number of groups described research using microarray analysis. Ulrike Gaul (Rockefeller University, New York, USA) presented an analysis of glial cell transcription. Glial cells labeled with green fluorescent protein under the control of the *repo* promoter were chemically dissociated from embryos and sorted by fluorescence-activated cell sorting (FACS). Gene expression in glial and non-glial cell fractions was assessed using an Affymetrix gene array, and 255 strongly expressed genes were identified. CG11Q10 is expressed only in midline and longitudinal glia; reduction of the transcript level by RNA interference (RNAi) prevents midline glial cells from separating axon tracts in embryonic commissures. Other examples of new genes found in this screen include molecules affecting axon guidance, cell migration and shape, and axon wrapping. The combination of microarray analysis and RNAi provides a new paradigm for rapid screening.

Amir Orian (Fred Hutchinson Cancer Center, Seattle, USA) has investigated the binding sites of the Myc-Max-Mad (MMM) transcription factor complex. Fusions of *Dam* methylase to these proteins were introduced into transgenic flies, then genomic DNA was digested with a methylation-sensitive restriction enzyme and the fragments were analyzed on a microarray. Interestingly, methylation of genes encoding synaptic-vesicle and mitochondrial proteins was observed, suggesting that the MMM complex may exert previously unknown influences on these processes.

Greg Gibson (North Carolina State University, Raleigh, USA) used long-oligonucleotide arrays to study the inheritance of gene expression. Gene expression was measured in seven strains of *D. melanogaster* and all F1 progeny of crosses between those strains. His data show that approximately 10% of genes are differentially expressed between any two of the strains studied, and that 20% of genes are expressed differently in the F1 compared with the parental strains. It is possible to divide these differences into several classes: some are expected, such as additive, dominant and recessive patterns of inheritance of the expression level; in other cases, the level of gene expression in the F1 is significantly greater or less than can be explained by additive expression of both parental strains.
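
To make the enhancer-cluster idea concrete, here is the minimal, illustrative Python sketch promised above: a window-based scan that reports regions containing several motif matches. The Dorsal-like consensus, window size and threshold are invented for illustration and are not the actual Fly Enhancer parameters.

```python
import re

# Toy cluster scan: report regions containing >= MIN_SITES motif matches
# within a WINDOW-bp span. All parameters below are hypothetical.
MOTIF = re.compile(r"GGG[AT]{4,5}CCC")   # crude Dorsal-like consensus (illustrative)
WINDOW = 400                              # bp
MIN_SITES = 3

def find_clusters(seq):
    hits = [m.start() for m in MOTIF.finditer(seq)]
    clusters = []
    for i, start in enumerate(hits):
        # collect hits falling within WINDOW bp downstream of this one
        in_window = [h for h in hits[i:] if h - start <= WINDOW]
        if len(in_window) >= MIN_SITES:
            clusters.append((start, in_window[-1]))
    return clusters

seq = ("ACGT" * 50 + "GGGATATCCC" + "ACGT" * 20 + "GGGTTTTCCC"
       + "A" * 30 + "GGGATTACCC" + "ACGT" * 50)
print(find_clusters(seq))   # one cluster spanning the three synthetic sites
```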

Many researchers are making use of the expanding *Drosophila* gene collection. Mark Stapleton (Lawrence Berkeley National Laboratory, Berkeley, USA) identified RNA-editing substrates by comparing the high-quality cDNA and genomic sequences. He found 27 adenosine deaminase substrates, the majority of which are ion-channel transcripts. Pavel Tomancak (University of California, Berkeley, USA) presented a comparison of *D. melanogaster* and *D. pseudoobscura* embryonic expression patterns for a number of genes. The vast majority of the 176 genes investigated showed identical expression patterns in the two species. But two genes with different expression patterns were identified. Expression of the midline fasciclin transcript has shifted from the neuroectoderm in *D. melanogaster* to the mesoderm in *D. pseudoobscura*. *Ecdysone-inducible gene E2* (described in a poster presented by Amy Beaton, University of California, Berkeley, USA) is expressed in the anterior of early embryos and in the developing foregut by stage 11 in *D. melanogaster*, but in *D. pseudoobscura* it is expressed in the posterior of early embryos and in the developing hindgut by stage 11.

Laura Lee (Massachusetts Institute of Technology, Cambridge, USA) identified seven novel substrates for Pan gu, a protein kinase required early in the cell cycle during embryogenesis. Her biochemical screen made use of coupled transcription-translation of cDNA clones from the *Drosophila* gene collection to produce [^35^S]-labeled proteins in a 384-well format. Pools of 24 proteins were then screened in a variety of binding, degradation and enzymatic assays. Examples include screens for Disheveled-binding proteins and microtubule-binding proteins, and a Pan gu kinase assay based on band shifts on electrophoretic gels.

One talk highlighted the imprecise art of gene prediction. Marc Hild (University of Heidelberg, Germany) presented a microarray constructed using a less stringent gene-prediction program and a possible 21,396 putative ORFs. Expression data acquired using this array suggest that there are 3,000 more *Drosophila* genes than were predicted in the Release 3 version of the genome. Some of these sequences produce phenotypes in S2 tissue culture cells when inhibited by RNAi. Once the data are made public and analyzed in detail, many of these 'novel' genes will no doubt be found to have exons that overlap those of previous predictions. Other differences may be 'philosophical': for example, should a gene prediction be considered if it has an open reading frame of less than 100 amino acids? It is clear that biological evidence is required to positively identify a gene.

The *D. pseudoobscura* sequence, available from the Drosophila Genome Project, may be the surest way to identify the meaningful sequences of the *D. melanogaster* genome. Richard Gibbs (Baylor College of Medicine, Houston, USA) presented the initial release of the *D. pseudoobscura* sequence. A tBLASTn comparison of the two *Drosophila* genomes identified putative orthologs in *D. pseudoobscura* for 95% of *D. melanogaster* genes (a toy reciprocal-best-hit sketch follows below). Alignment of the two genomic sequences identified both large features, such as chromosomal inversions, and small ones, such as conserved non-coding regions. A comparative genomic approach using both sequences will improve gene prediction and allow the identification of *cis*-regulatory sequences for the majority of *Drosophila* genes.
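
Ortholog calls of this kind are often made by reciprocal best hits between the two gene sets. The sketch below, promised above, parses two hypothetical tabular BLAST outputs (standard 12-column "-outfmt 6" layout, bitscore in the last column) and reports reciprocal best hits; the file names and approach are illustrative assumptions, not the pipeline the genome project actually used.

```python
import csv

def best_hits(path):
    """Best-scoring subject per query from a 12-column tabular BLAST file."""
    best = {}
    with open(path) as fh:
        for row in csv.reader(fh, delimiter="\t"):
            query, subject, bitscore = row[0], row[1], float(row[11])
            if query not in best or bitscore > best[query][1]:
                best[query] = (subject, bitscore)
    return {q: s for q, (s, _) in best.items()}

def reciprocal_best_hits(mel_vs_pse, pse_vs_mel):
    fwd = best_hits(mel_vs_pse)   # D. melanogaster query -> D. pseudoobscura hit
    rev = best_hits(pse_vs_mel)   # the reverse search
    return {q: s for q, s in fwd.items() if rev.get(s) == q}

# Hypothetical file names, for illustration only
orthologs = reciprocal_best_hits("mel_vs_pse.tsv", "pse_vs_mel.tsv")
print(f"{len(orthologs)} putative one-to-one orthologs")
```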

We can hope that the sequence of *D. pseudoobscura* will be as informative to *Drosophila* research as that of *D. melanogaster*, and many presentations on 'the other *Drosophila*' can be expected at future annual conferences.

abstract: A report on the 13th International Conference on Systems Biology, held in Toronto, Ontario, Canada, 19-23 August 2012.
author: Adam P Rosebrock; Amy A Caudy
date: 2012
institute: 1Donnelly Centre for Cellular and Biomolecular Research, University of Toronto, 160 College St., Toronto, Ontario, Canada, M5S3E1; 2Department of Molecular Genetics, University of Toronto, 160 College St., Toronto, Ontario, Canada, M5S3E1
title: The future of deciphering personal genomes? The flies (and yeast and worms) still have it

# Introduction

Sydney Brenner said in his 2002 Nobel lecture that we are 'drowning in a sea of data, starving for knowledge'. Drafts of the human genome sequence were barely a year old at the time, and even optimistic projections placed the $1,000 genome beyond 2040. Just a decade after Brenner's lecture, technological revolutions in sequencing and protein analysis have changed the pace of genome biology. The reference genome has given way to individual patient sequences. Sydney's sea of data is becoming ever deeper.

New tools are necessary to extract biological meaning from these data. The diverse group of scientists gathered in systems biology is at the forefront of devising new ways to net knowledge out of a data sea that widens and deepens every day. The flood of data produced by modern 'omics technologies demands new methods that connect genotype to phenotypic consequence. With the $1,000 genome now predicted by the end of 2012, we are faced with the challenge of interpreting raw sequence into health-relevant information.

Here, we cover some of the highlights of an inherently diverse meeting, focusing on the role of systems biology as relevant to genomic medicine (abstracts are freely available online).

# Biological systems are dynamic; systems biologists are catching up

Twenty-three pairs of chromosomes within a single cell give rise to hundreds of tissues with diverse functions, morphologies and biochemical activities. The humble budding yeast is born, has a fruitful replicative life on a variety of foods, and eventually succumbs to senescence. Life consists of a series of changing processes that cannot be explained by a snapshot view; models that accurately represent biological processes must be built from data that capture these dynamics.

Advances in mass spectrometry have enabled examination of rare and modified proteins; modified proteins are turning out to be not rare at all.
Kirti Sharma in Matthias Mann's lab (Max Planck Institute of Biochemistry, Martinsried, Germany) claims that 75% of the proteins in cancer cells are present as phosphorylated forms, and that this is likely to be a vast underestimate, with current methods capturing only one-tenth of the phosphopeptides in the cell.

A major focus of systems biology is understanding the interaction between biomolecules. Genetic approaches, including systematic identification of synthetic lethal gene deletions (championed by Charley Boone at the University of Toronto, ON, Canada), are being supplemented by measurement of physical interactions. These efforts are focused on reducing the massive disconnect between genome sequence and physical function.

Many key proteins in the cell are interaction hubs and form complexes with many different partners. Interactions are frequently mutually exclusive or separated in space, time, or by cellular state. Anne-Claude Gingras (Samuel Lunenfeld Research Institute, Toronto, ON, Canada) and Andrew Emili (University of Toronto, Toronto, ON, Canada) are working to move from a simple static representation of protein complexes to an understanding of these dynamic interactions. Gingras is interrogating protein interactions by affinity purification of different components of a single complex. Emili has taken a global approach by analyzing the entire proteome following multiple orthogonal separation methods and identifying complexes by common separation patterns. Michael Washburn (Stowers Institute, Kansas City, MO, USA) has been using mass spectrometry to address questions once reserved for structural biology. Washburn has used affinity purification and mass spectrometry to measure protein interactions in partially dissociated complexes. Interactions that survive are likely to occur between directly contacting molecules.

Weak interactions are common and essential in biological systems. Mike Tyers (IRIC, Université de Montréal, QC, Canada) reminded listeners that without weak interactions, we would be stones. Tyers' work demonstrates that multiple weak sites provide a mechanism for buffering, redundancy and cooperative kinetic behavior. Disruption of individual weakly functioning residues can frequently be tolerated, confounding current mutation-counting sequencing efforts.

# Model organisms drive understanding of genotype to phenotype

Studies in model organisms can answer questions that have proved difficult to study in man. Missing heritability is one of the biggest problems in human genome-wide association studies. Joshua Bloom (Princeton University, Princeton, NJ, USA) mapped dozens of quantitative trait loci in yeast. His group was able to account for more than 90% of the heritability of these traits (a toy version of the underlying heritability bookkeeping is sketched below), suggesting that the failure to map quantitative traits in human populations results from the diversity of alleles and the influence of environment rather than a problem with the approach of mapping.

Enrolling more patients is a common way to increase the power of genome-wide association studies. Two talks at the meeting presented a systems-biology alternative. Chad Myers (University of Minnesota, Minneapolis, MN, USA) and Manuel Mattheisen (Brigham and Women's Hospital, Boston, MA, USA) demonstrated approaches that group genes prior to testing for significant variants. Both researchers used functional groupings of genes based largely on data from model systems. Model organisms continue to be a critical scaffold for understanding biology.
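
As a hedged illustration of the heritability bookkeeping behind claims like Bloom's (promised above), the sketch below estimates broad-sense heritability from replicate measurements of genetically identical segregants. The simulated data and the simple variance partitioning are invented for illustration and are far simpler than the published analysis.

```python
import numpy as np

# Toy broad-sense heritability: H2 = genetic variance / total variance, with
# environmental variance estimated from replicates of identical strains.
rng = np.random.default_rng(0)
n_strains, n_reps = 100, 4
genetic_effects = rng.normal(0.0, 1.0, n_strains)               # per-strain effect
trait = genetic_effects[:, None] + rng.normal(0.0, 0.5, (n_strains, n_reps))

var_env = trait.var(axis=1, ddof=1).mean()   # within-strain (environmental) variance
var_total = trait.var(ddof=1)                # variance across all measurements
h2_broad = (var_total - var_env) / var_total
print(f"broad-sense H2 ~ {h2_broad:.2f}")    # ~0.8 with these simulated values
```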

Ben Lehner (ICREA, Barcelona, Spain) highlighted how chance controls life. *Caenorhabditis elegans* harboring null alleles of Tbx-9, a transcription factor regulating development, have a fifty-fifty chance of death or a completely normal life. Lehner's group has identified fluctuations in gene expression within this isogenic population that modulate this life-and-death coin-toss. In our world of Brownian motion and stochastic gene expression, deterministic sequence-to-phenotype analysis will remain the stuff of science fiction.

# 'Personal genomes' are of more value to the community than the patient

Geneticist Sewall Wright once said that natural variation is the worst kind of experiment to study. Wright, the father of genetic drift and the fitness landscape, had a prescient understanding of our predicament. Mike Snyder (Stanford University, Stanford, CA, USA) gave an update on the Snyderome, a longitudinal study of his transcriptome, proteome, metabolome and other physiological parameters in what may well be the most expensive case study in history. Next-generation sequencing has become Snyder's 'Ouija board'. His risk allele for diabetes may have manifested itself in an episode of elevated blood glucose, which Snyder (possibly) corrected by a wholesale elimination of sweets. Fortunately for Snyder, none of his other disease risk alleles have been borne out or have required such radical life changes. Personal genome sequencing is here, but the interpretation is difficult and, as Snyder advises, not for worriers.

According to Chris Sander (Memorial Sloan Kettering Cancer Center, New York, NY, USA), genome medicine is sequence-rich and phenotype-poor. Sander remarked that the erudite molecular profiling of the Cancer Genome Atlas and related efforts frequently lacks accompanying data on phenotype, cautioning that careful and deep phenotyping is as important as sequencing itself. George Church (Harvard University, Cambridge, MA, USA) highlighted efforts coordinated through the Personal Genome Project to provide such rich phenotypic information. This dataset of genomes coupled with phenotype will be useful in understanding the scope of variation, but Church cautioned that the diagnostic value of genomes is limited. Like a stethoscope, sequencing is just one source of health information.

Identification of causative alleles is particularly critical in cancer; cells that appear in relapse are often from a different or earlier lineage than cells in the original tumor. John Dick (Princess Margaret Hospital, Toronto, ON, Canada) described work to identify causative mutations in genomes using xenografts. As in relapsing patients, many of the tumors that grew in mice are genetically different from the primary tumor. Dick's work questions the value of a 'tumor genome' derived from bulk cells, emphasizing the need to identify rare cells that can form new tumors. Personalized medicine may well become cell-, not patient-, specific.

Dave Hill (CCSB, Boston, MA, USA) reminded attendees that many proteins are multifunctional. Patient-specific mutations will affect some or many of a diverse array of protein interaction partners. To move from sequence to diagnosis and prognosis, one needs to understand these complex interactions.
Hill cautioned that early successes with Mendelian traits will not translate into rapid understanding of complex genetic interactions.

The most remarkable personal genome, though, was that of only half a human. Thijn Brummelkamp (Netherlands Cancer Institute, Amsterdam, The Netherlands) presented a series of functional genetic screens using a haploid somatic human cell line. Brummelkamp used traditional insertional mutagenesis to identify essential human genes, bringing classic tools of model organism research to human genetic analysis.

# In addition to new tools, systems biology requires new ways of training tomorrow's scientists

Radical advances in technology have driven systems biology in the past decade. George Church highlighted the phenomenal growth of sequencing technology, noting that 44 commercial sequencing technologies are now available or soon will be. New sequencing technologies will be cheaper, faster and more accurate, and will permit new types of analyses.

The diverse skills required by systems biology demand development of a new kind of scientist. David Botstein (Princeton University, Princeton, NJ, USA) accepted an award for excellence in quantitative education for his creation of a new teaching paradigm. Botstein's view is that students languish in prerequisite general courses while thirsting for access to current research and educational specialization. Contrary to the status quo, Botstein envisions research and teaching as organically connected tasks. This integrated training approach is necessary to bridge the significant cultural gap between biologists, mathematicians and computer scientists. Students should be taught current scientific practice, not scientific history. If the acceleration in systems biology continues, merely keeping up with the present will be a monumental task.

abstract: Detailed comprehensive molecular analysis using families and multiple matched tissues is essential to determine whether imprinted genes have a functional role in humans.
See research article: 
author: Gudrun Moore; Rebecca Oakey
date: 2011
institute: 1Institute of Child Health, University College London, 30 Guilford Street, London WC1N 1EH, UK; 2Department of Medical and Molecular Genetics, King's College London, 8th Floor Tower Wing, London SE1 9RT, UK
references:
title: The role of imprinted genes in humans

# Research highlight

Imprinted or parent-of-origin-dependent gene expression has over the past 25 years developed into an exciting and dynamic research field. Its functional or even evolutionary importance is considered most relevant in mammals and in flowering plants [1].
In mammals, the link between imprinting and the existence of the placenta, and the differences between the two parental sexes in terms of resources and evolutionary drive, have been the focus of much debate. One fundamental question remains: has parent-of-origin gene expression evolved and been maintained because of the different needs of the mother and father in producing viable, strong offspring? The mother needs to survive the pregnancy, but the father's drive is focused on the offspring being the fittest. Much of the functional relevance of the research in the imprinting field, particularly with its application to the human, has grown out of this 'resources for fittest' debate. A study in this issue of *Genome Biology* [2] starts to analyze more thoroughly which genes are truly imprinted in humans using genome-wide assessment.

Imprinting in the mouse is well understood. It was discovered separately by the Surani [3] and McGrath [4] groups in the early 1980s, who found that gynogenetic embryos (which contain only maternal genomes) developed differently *in utero*, and with emphasis on different tissues, from androgenotes (which contain only paternal genomes). Interestingly, the androgenotes had a more developed placenta and the gynogenotes a better developed embryo. Links were soon made between imprinted gene models in the mouse and human diseases: imprinted genes were implicated in many fetal growth syndromes, and they were shown to regulate maternal-fetal interactions, postnatal feeding behaviors and neurological development. Disturbance of the apparently rigorous mono-allelic imprinted gene expression was also linked to cancer, and alterations in imprinting methylation patterns or expression in peripheral blood leukocytes were considered as biomarkers for cancer [5].

The study by Morcos *et al.* [2] extends this human analysis further. Here the authors [2] make a genome-wide assessment of imprinted expression in paired sets of samples of adult human tissue, comparing lymphoblastoid cell lines with primary fibroblasts. These two cell types are both relatively easy to obtain from humans with ethical approval. Using families, they could track parental-allele-specific expression (a toy allele-counting sketch is given below), and using paired tissue samples, they could study tissue-specific variation between lymphoblastoid cells and fibroblasts. To truly confirm whether a gene is imprinted, differential methylation, tissue-specific expression and parental allele origin must all be tracked in the same family. Observing differentially methylated patterns in isolation, however, does not always totally reflect monoallelic expression [6]. These all-inclusive experiments can be done relatively easily in mouse but are ethically impossible to replicate in humans. These authors [2] have achieved the best compromise by using matched tissues and by studying families. Their results are both interesting and intriguing.

Previous careful comparative analysis between the imprinted genes in mouse and humans showed that roughly half of the mouse imprinted genes either are not, or never have been, imprinted in humans. Of the about 140 imprinted genes identified so far in the mouse, only 60 are imprinted in humans, and several are specific to humans. In addition, some have different tissue-specific expression profiles; for example, growth factor receptor binding protein 10 (*GRB10*) in humans is imprinted only in invasive trophoblasts (maternally expressed) and brain (paternally expressed) [7], whereas in mouse it is maternally expressed in most embryonic tissues and predominantly paternally expressed in brain [8].
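
To illustrate what "tracking parental-allele-specific expression" means computationally, here is the minimal sketch promised above: it tests whether expression over a heterozygous SNP deviates from the 50:50 biallelic expectation. The read counts, threshold and simple binomial model are invented for illustration and are not the pipeline used in the study.

```python
from scipy.stats import binomtest

def allelic_bias(maternal_reads, paternal_reads, alpha=0.01):
    """Test RNA-seq reads over a heterozygous SNP against 50:50 expression.

    A strong, significant skew toward one parental allele is the kind of
    signal that flags a candidate imprinted gene (toy model only).
    """
    n = maternal_reads + paternal_reads
    result = binomtest(maternal_reads, n, p=0.5)
    maternal_fraction = maternal_reads / n
    return maternal_fraction, result.pvalue, result.pvalue < alpha

# Hypothetical counts: 96 reads from the maternal allele, 4 from the paternal
frac, p, skewed = allelic_bias(96, 4)
print(f"maternal fraction {frac:.2f}, P = {p:.2e}, candidate: {skewed}")
```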

If the regulation of gene dosage is so important, why is there not greater conservation of imprinted expression? Or perhaps the genes still imprinted in humans have been selected and/or maintained for important reasons. Epigenetics provides the mechanisms through which imprinting influences gene expression. These mechanisms affect the processes of cell differentiation and embryonic growth, although they are as yet not completely understood. When epigenetic mechanisms go awry, transcriptional activity may be perturbed, resulting in disorders and syndromes. This underscores the rationale for studies such as these [2], particularly on a genome-wide scale, for identifying imprinted genes and classifying their conservation across mammalian species.

In the study by Morcos *et al.* [2], of the 44 informative imprinted genes from the literature that were analyzed, 19 were validated as imprinted using this rigorous assessment. More importantly, only 1 in 13 candidate imprinted genes was confirmed. This demonstrates again that only about half of mouse imprinted genes are truly imprinted in humans in the adult tissues assayed, and only 10% of candidates can be verified.

One caveat of this approach stems from the fact that human embryonic tissues are extremely difficult to access; thus the authors [2] used lymphoblastoid cells and fibroblasts instead, which has some limitations. It is known that imprinting is important in the developing embryo and fetus and typically occurs in a tissue-specific manner. So the use of transformed lymphoblastoid cells as a human tissue resource does not necessarily reflect the *in situ* state. It could be argued that the true role of imprinted genes is in fetal development but, even so, analysis of fetal tissues and placenta has also revealed much lower numbers of imprinted genes in humans than in mouse [9]. In humans there are fewer imprinted genes, and these may be the ones that are most relevant for the 'resources for fittest' needs that are most important in human fetal growth.

This study [2], plus other work on human tissues in this dynamic field, is helping to clarify the number of imprinted genes in humans and lead towards an understanding of the role of imprinting in humans. There remains no doubt that gene dosage control in the developmental period is exquisitely sensitive and needs accurate control mechanisms.
The future focus in humans needs to be on careful dissection of the function of those genes that are confirmed to be imprinted, using methods similar to those in this study [2].

date: 2014-01-10
title: Findings of research misconduct.

# Findings of Research Misconduct

**Notice Number:** NOT-OD-14-037

**Key Dates**
**Release Date:** January 10, 2014

**Related Announcements**
None

**Issued by**
Department of Health and Human Services ([DHHS](http://www.hhs.gov/))

**Purpose**

Notice is hereby given that the Office of Research Integrity (ORI) has taken final action in the following case:

Dong-Pyou Han, Ph.D., Iowa State University of Science and Technology: Based on the report of an inquiry conducted by the Iowa State University of Science and Technology (ISU), a detailed admission by the Respondent, and additional analysis conducted by ORI, ORI and ISU found that Dr. Dong-Pyou Han, former Research Assistant Professor, Department of Biomedical Sciences, ISU, engaged in research misconduct in research supported by National Institute of Allergy and Infectious Diseases (NIAID), National Institutes of Health (NIH), grants P01 AI074286, R33 AI076083, and U19 AI091031.

ORI and ISU found that the Respondent falsified results in research to develop a vaccine against human immunodeficiency virus-1 (HIV-1) by intentionally spiking samples of rabbit sera with antibodies to provide the desired results. The falsification made it appear that rabbits immunized with the gp41-54 moiety of the HIV gp41 glycoprotein induced antibodies capable of neutralizing a broad range of HIV-1 strains, when the original sera were weakly or non-reactive in neutralization assays. Falsified neutralization assay results were widely reported in laboratory meetings, at seven (7) national and international symposia between 2010 and 2012, and in grant applications and progress reports P01 AI074286-03, -04, -05, and -06; R33 AI076083-04; U19 AI091031-01 and -03; and R01 AI090921-01. Specifically:

a. Respondent falsified research materials when he provided collaborators with sera for neutralization assays from (i) rabbits immunized with peptides from HIV gp41-54Q (and related antigens HR1-54Q, gp41-54Q-OG, gp41-54Q-GHC, gp41-54Q-Cys and Cys-gp41-54Q) to assay HIV neutralizing activity, when Respondent had spiked the samples with human IgG known to contain broadly neutralizing antibodies to HIV-1; and (ii) rabbits immunized with HIV gp41-54Q to assay HIV neutralizing activity, when Respondent had spiked the samples with sera from rabbits immunized with HIV-1 gp120 that neutralized HIV.

b.
Respondent falsified data files for neutralization assays, and provided false data to his laboratory colleagues, to make it appear that rabbits immunized with gp41-54Q and recombinant Lactobacillus expressing gp41-64 (LAB gp41-64) produced broadly reactive neutralizing antibodies, by changing the numbers to show that samples with little or no neutralizing activity had high activity.

Dr. Han has entered into a Voluntary Exclusion Agreement and has voluntarily agreed, for a period of three (3) years beginning on November 25, 2013:

(1) To exclude himself from any contracting or subcontracting with any agency of the United States Government and from eligibility or involvement in nonprocurement programs of the United States Government referred to as "covered transactions" pursuant to HHS' Implementation (2 CFR Part 376 et seq.) of OMB Guidelines to Agencies on Governmentwide Debarment and Suspension, 2 CFR Part 180 (collectively the "Debarment Regulations"); and

(2) To exclude himself voluntarily from serving in any advisory capacity to the U.S. Public Health Service (PHS) including, but not limited to, service on any PHS advisory committee, board, and/or peer review committee, or as a consultant.

**Inquiries**

Please direct all inquiries to:

David E. Wright, Ph.D.
Director
Office of Research Integrity
1101 Wootton Parkway, Suite 750
Rockville, MD 20852
Telephone: 240-453-8800

abstract: For the first time in history, psychiatrists during the Nazi era sought to systematically exterminate their patients. However, little has been published from this dark period analyzing what may be learned for clinical and research psychiatry. At each stage in the murderous process lay a series of unethical and heinous practices, with many psychiatrists demonstrating a profound commitment to the atrocities, playing central, pivotal roles critical to the success of Nazi policy. Several misconceptions led to this misconduct, including allowing philosophical constructs to define clinical practice, focusing exclusively on preventative medicine, allowing political pressures to influence practice, blurring the roles of clinicians and researchers, and falsely believing that good science and good ethics always co-exist. Psychiatry during this period provides a most horrifying example of how science may be perverted by external forces.
It thus becomes crucial to include the Nazi era psychiatry experience in ethics training as an example of proper practice gone awry.
author: Rael D Strous
date: 2007
institute: 1Department of Psychiatry, Beer Yaakov Mental Health Center, Sackler Faculty of Medicine, Tel Aviv University, Tel Aviv, Israel
references:
title: Psychiatry during the Nazi era: ethical lessons for the modern professional

# Background

During the Nazi era, for the first time in history, psychiatrists sought to systematically exterminate their patients. It has been acknowledged that the medical profession was profoundly involved in crimes against humanity during this period, with various publications describing this malevolent period of medical history. It is less known, however, that psychiatrists were among the worst transgressors. At each stage of the descent of the profession into the depths of criminal and genocidal clinical practice lay a series of unethical decisions and immoral professional judgments. Furthermore, very little has been published on lessons that may be learned from this dark period in the history of psychiatry and on ethical principles that may be extrapolated for the future practice of clinical and research psychiatry and for inclusion in educational programs. This paper reviews the role of psychiatrists in the Nazi era and analyzes the underlying misconceptions that led to the aberrant behavior. Finally, some recommendations for inclusion of the study of this period in ethics training are presented [26].

# Role of psychiatrists in Nazi atrocities

The professional status of psychiatrists did not place any obstacle to their participation in Nazi crimes, and many demonstrated a profound commitment to the atrocities. Psychiatrists were instrumental in instituting a system of identifying, notifying, transporting, and killing hundreds of thousands of mentally ill and "racially and cognitively compromised" individuals in settings ranging from centralized psychiatric hospitals to prisons and death camps. Their role was *central* and *critical* to the success of Nazi policy, plans, and principles. Psychiatrists, along with many other physicians, facilitated the resolution of many of the regime's ideological and practical challenges, rather than taking a passive or even active stance of resistance [1]. Psychiatrists played a prominent and central role in two categories of the crimes against humanity, namely sterilization and euthanasia [2]. It was psychiatrists (many of whom were senior professors in academia) who sat on planning committees for both processes and who provided the theoretical backing for what transpired. It was psychiatrists who reported their patients to the authorities and coordinated their transfer from all over Germany to gas chambers situated on the premises of the six psychiatric institutions: Brandenburg, Grafeneck, Hartheim, Sonnenstein, Bernburg, and Hadamar [2,3]. It was psychiatrists who coordinated the "channeling" of patients on arrival into specially modified rooms where gassing took place. It was psychiatrists who saw to the killing of the patients (initially using carbon monoxide and later, starvation and injection). Finally, it was psychiatrists who faked causes of death on certificates sent to these patients' next of kin. It has been estimated that over 200,000 individuals with mental disorders of all subtypes were put to death in this manner [4-7].
Much of this process took place before the plan to annihilate the Jews, Gypsies and homosexuals of Europe was set in motion. Hitler never gave the order to kill patients with mental illness. He only permitted it, in a letter written in October 1939 and backdated to September 1, 1939 [2,6]. Psychiatrists were therefore never ordered to facilitate the process or carry out the murder of the mentally ill; rather, they were empowered to do so. Activity by psychiatrists and psychiatric institutions thus constituted the connection between euthanasia and the larger scale annihilation of Jews and other "undesirables" such as homosexuals in what came to be known as the Holocaust. Parenthetically, only one physician ever came to command an extermination camp. His name was Dr Irmfried Eberl, a psychiatrist, who established Treblinka based on his experience as the Brandenburg Psychiatry Facility medical superintendent. He managed the camp for six months until he was fired for inefficiency in disposing of the thousands of bodies he succeeded in accumulating [2].

# Attitude of mainstream psychiatry to Nazi psychiatry practice following the war

While it would be expected that the involvement of psychiatrists in such a profound manner would be well-known in the field, this is not the case. Little has been published on the subject in mainstream psychiatry journals and even less is part of the formal education process for medical students and psychiatry residents. Several reasons may be proposed for this. First, it remains an embarrassment for the field that so many senior members – professors, department heads and internationally known figures – were so intimately involved. Second, many of those involved continued to practice and conduct research long after the war and were protected by colleagues. Third, and arguably most important, what psychiatrists did was based upon a paradigm shift in how patients and mental illness were viewed. Psychiatrists' activities rested largely on value judgments about how they "read" the community and on the principles of neo-Darwinism, with the subsequent embrace of racial hygiene. In the absence of firm and unbending timeless ethical underpinnings to the practice of psychiatry, many felt that what they were doing was correct from a moral and scientific standpoint; therefore, they were not the demons and "paradigms of evil" that we perceive them to be. Their actions were a colossal misjudgment based on what today we may term "pseudoscience", but which at the time was deemed correct by many. Although actions based on "scientific theories" of mental illness in the past have led to patient deaths – one example being Henry Cotton and his belief that mental illness results from focal infection or chronic sepsis [8] – the extent and scale of the German psychiatrists' actions during the Nazi era remain unprecedented. These rationalizations based on faulty scientific theory and unethical medical practice were difficult to accept, and therefore the nature and extent of these activities remained on the backbenches of the academic literature until more recently, when these issues have begun to be faced in an era of openness and transparency.

# Common assumptions leading to gross ethical misconduct

In addition to resting on poor science, the atrocities of the German psychiatric establishment were based upon several fundamental errors of ethical, professional, and scientific conduct.
While many may simply brush off any deeper consideration of the issues with the stance that "they were just evil", such an approach only deepens the risk that such events will be repeated. The truth could not be more different: perversion of ethical medical practice due to theoretical misjudgment and fundamental error in approach to the patient is what led to these atrocities of catastrophic proportions. So where did they go wrong? Several misconceptions lay at the source:

## 1) Medical ethics is ethnic, cultural, and time sensitive

The theory behind such a proposition is that much of medical ethics is time and culture bound [9]. Therefore what may be unethical now may not necessarily have been unethical then. This approach instills a relativistic attitude toward the atrocities, minimizing the severity of the injustice and gross professional negligence so inherent in what transpired. Certain aspects of medical ethics transcend time and culture. Except under very specific and precise circumstances, such as when there is a serious and immediate risk to others, a physician should always respect autonomy, beneficence, and patients' confidentiality and dignity. Although it may be suggested that there is a major leap between disregard of these time-honored factors and the genocidal euthanasia program, this is how it all began; it may even define the central thread of the atrocities against the mentally ill. While the form of ethical medical practice may depend on resources and cultural nuances (Tarasoff etc.), the basis for ethical behavior should remain constant, irrespective of time and place. Thus, while some maintain that for one generation a practice may be considered unethical but not for another [9], this is a misconception, as certain practices and concepts do not change with context. It is therefore never appropriate to kill one's patients *en masse* based on diagnosis and economic and racial-hygiene considerations for the community at large.

## 2) Philosophical constructs and ideas should define clinical practice

During the period of the Nazi regime, psychiatry supported compulsory sterilization and euthanasia of the physically and mentally ill, and subsequently, the killing of "inferior" races. Psychiatrists did this by applying scientifically invalid conclusions from evolutionary biology [10]. Aside from the fact that these philosophical constructs and scientific paradigms of evolutionary theory were flawed, they were also immoral and contravened basic tenets of medical ethics and clinical practice. Much of this approach was based on theories of neo-Darwinism. Furthermore, ever since Francis Galton in 1865 first published the idea of eugenics (a term rooted in the Greek "good in birth" or "noble in heredity"), individuals with mental illness had been targeted by eugenics programs, with psychiatrists intimately involved in the theoretical debate. The eugenics movement was not limited to Germany, and proponents of eugenics were prominent in several other countries, most notably Britain and the USA [11]. Interestingly, during the period in which euthanasia of the mentally ill was taking place in Germany, a fascinating debate transpired between two prominent American academics and was published in the *American Journal of Psychiatry* in 1942. Foster Kennedy, professor of neurology at Cornell University in New York, argued that all children with proven mental retardation ("feeblemindedness") over the age of five should be put to death.
Leo Kanner, however, maintained that such individuals might still serve a purpose to society – garbage collection, postmen, etc. – as well as give meaning to their parents by virtue of having to care for them. Astoundingly, no one emphasized the unethical nature of putting individuals with disability to death. Instead, the editorial, published anonymously, appeared to side with Kennedy, and advised help for the parents in coming to terms with such a reality for their children and for the need for "enabling legislation" in order to facilitate the process legally (apparently in contrast to that of the German experience) [12]. The Nazi experience, which took much of the concept to fruition, was an extreme perversion of this movement, which existed already (at least at the conceptual level) in the minds of many psychiatrists supporting the idea.

## 3) Preventative medicine is more important than curative medicine

In the interests of preserving the future quality and purity of the Aryan race, racial hygiene became the battle cry of the German nation, with Nazi medicine attempting to prevent the proliferation of illness. Within this context, it became the role of physicians in general, and psychiatrists in particular, to define who should be eliminated in order to best preserve the German nation's uniqueness and "higher-being". Thus in place of managing mental illness with the available tools (which were minimal) or investing resources in research for more appropriate treatment, it became important for physicians and psychiatrists to prevent such forms of illness or defects through euthanasia [7]. A particular focus was placed on psychiatric patients in the racial-hygiene program because they were perceived as weakening the "master-race" with no known cure. Therefore these "lives not worth living" were deemed useless and dangerous to German society and, in order to prevent their dissemination, the process of eliminating them in the context of the sterilization and euthanasia program came about. This effort to prevent illness, while a noble concept, should never be instituted at the expense of (and to the complete exclusion of) treating illness, as the disastrous Nazi program proved [7]. Even if one accepts their reasoning – and in this case they were wrong – selective sterilization of the mentally ill would never significantly reduce the frequency of mental illness, based on the Hardy-Weinberg law of preservation of rare recessive genes in a population of phenotypically normal carriers [10]: because rare recessive alleles persist overwhelmingly in unaffected heterozygous carriers, removing affected individuals from the breeding population barely changes the allele frequency from one generation to the next. The Nazis embraced an exclusionary biological and racial determinism that removed any reparative function from clinical psychiatry. What remained was prevention of mental illness. Psychiatrists lived up to the challenge.

## 4) Psychiatrists have a particular role in channeling societal issues and public discussion

Many psychiatrists maintain that they have a greater inherent responsibility than other medical professionals to be involved in community affairs. This is because psychiatry by nature advocates a holistic approach to the patient, which often includes taking into account societal factors and contemporary ideology. Thus while the unique role of psychiatry in the genocide may be overstated, since other areas of medicine were also involved, psychiatrists fitted in particularly well. The dangers inherent in such involvement, while not obvious, are, however, prominent when important boundaries become blurred.
Clinical practice and political machinations need to be kept separate. Many psychiatrists during the Nazi era were state-controlled, and this further facilitated their conforming to the program. The rights of individuals cannot be totally ignored in the interests of society. The dangers become particularly acute in psychiatry compared to other subspecialties in medicine, since it may be suggested that the field of psychiatry is often used in order to remove undesirables from society and place them in asylums. It may be argued that labeling of mental disease and its classification is a means of controlling members of the community who do not comply with accepted norms; therefore their freedom should be taken away and replaced with hospitalization. However, while at times there may be a fine line separating mental health and illness, it becomes very clear that the extent to which Nazi psychiatry allowed the political and community atmosphere to influence and govern clinical practice was grossly unethical, murderous, and unacceptable in the extreme.

## 5) Political and economic pressures may influence clinical practice

The management of patients must be dictated primarily by the patient's best interests and not by virtue of any ideology that may be prevalent at the time in society. This may include economic "ideological" considerations. Thus while pressures may exist "encouraging" the physician to make decisions one way or another based on the prevailing mood or tendency of the community at any time or place, this should be resisted and medical management should continue, unaffected by external considerations. The patient has to receive individual management and not be treated according to what is in vogue at the time. Psychiatrists should be wary of political and economic pressures that impinge upon medical decisions and health service provision. Nazism was supposed to be "applied biology" [13]. Science in general and psychiatry in particular need to be independent from contemporary sociological and political contexts as well as protected from political abuse, even when embraced by the medical establishment. It has been proposed that the primary downfall of Nazi medicine was the failure of physicians to challenge the substantive core of Nazi values: "Too many physicians were willing to go with the political flow; too many were unwilling to resist, to 'deviate' from 'commonly accepted' practices" [14]. Sound medical practice should be protected from the movement of political forces.

## 6) Psychiatrists/scientists have a responsibility to "enhance" mankind

Much of the early involvement by psychiatric clinicians and researchers in the process of "racial purification" arose from a genuine desire to improve mankind and not necessarily from the perspective of racist genocide. While no direct parallel can be drawn, today many continue in a sincere scientific effort towards the "enhancement" of man through molecular biology and genetic engineering [15]. Appropriate dialogue is required in order to ensure that the desire for "improving man", creating a "better human", does not come at the expense of the individual patient.

## 7) The interests of science take priority over the interests of the individual patient

Clinical management and research participation may appear to be equivalent, but they are not. A clear distinction must be made between the two, and the patient must be aware of this.
Research is critically important for the future of good medical practice and is fundamental to the philosophy of medical ethics in psychiatry, reflected in the long-term striving for excellence in clinical management. However, it should always be made clear to the patient that participation is voluntary and that more conventional treatment regimens exist and are available if preferred. Particular issues such as scientific validity, favorable risk-benefit ratio, voluntarism, and decisional capacity, while important in all aspects of clinical practice, become of acute importance with respect to individuals with mental illness [16,17]. The Nazi experience, which completely disregarded such factors in the interests of "science" and racial hygiene, is a prime example of the dangers inherent when such factors are not respected. Ethical commitment to research safeguards needs to be reflected in appropriate standards, guaranteeing appropriate study participation [16]. Refusal to participate in a study should likewise never interfere with the doctor-patient relationship, and in the case of a patient agreeing to participate in research, it remains the duty of the physician to protect the health of the individual.

## 8) High-quality science and high-quality ethics always co-exist

It has become easy for those in the West to dismiss the depths of unethical medical practice of the Nazi physicians by categorizing it as bad science. This is easier to accept than the possibility that even within the context of good science, ethical behavior by physicians may go astray. In fact, the Nazi era in Germany was a time of remarkable scientific advances in several areas including cancer research and treatment, biochemistry, and quantum mechanics, to name a few. In addition, the Nazis were pioneers of jet-propelled air flight, guided missiles, electronic computers, electron microscopes, and atomic fission [14]. Thus, scientific advancement does not necessarily go hand in hand with ethical advancement. It would be incorrect to brush off the ethical challenges that true scientific advancement in medicine may present, since the connection with true ethical practice is not necessarily a natural one.

# Ethics in the training of future psychiatrists

The theory of "medical ethics" has become a requirement in residency training programs in several countries around the world [18]. This has become a particularly pressing issue considering the need to understand principles of research ethics and the roles of psychiatrists as investigators and researchers. The teaching of medical ethics during residency is particularly well-timed because professional identity and ethical practices are in their formative stages. Such ethics training is important in order to define the critical role that the four principles of autonomy, beneficence, non-maleficence, and justice play in clinical practice [19]. Several reports have been published that extol the virtues of medical ethics training in psychiatry residency training programs [20].

However, while the importance of such training programs is well recognized, so, too, was the importance of medical ethics acknowledged in Germany in the 1930s. In fact, Germany possessed one of the most advanced and sophisticated codes of medical ethics in the world, in force from 1931. Some have even suggested that in certain aspects it was stricter than the subsequent Nuremberg Code or Declaration of Helsinki [14,21].
Doctors in general, and psychiatrists in particular, who were involved in the euthanasia program were not morally blind or devoid of the power of moral reflection. To claim otherwise would be to render the guilty parties not responsible for their actions. However, such codes did not help: when it came to bringing the ideology and plans of the wider society to fruition, psychiatrists cooperated fully. They even took enthusiastic initiative in the process, allowing societal politics and ideas to interfere with clinical practice. Furthermore, although a broader consideration of potential abuse and malpractice in other totalitarian regimes would further strengthen the importance of the subject, a focus on Nazi psychiatric practice in particular clearly brings to the fore a most apt and recent example of how such interference can go awry. The example of Nazi psychiatry is a prime illustration of how ethics training without a focus on history is useless: even existing policy can be disregarded in the most grotesque fashion.

Furthermore, Cowley [19] has argued that although many medical schools have now given medical ethics a secure place in the curriculum, they err in treating it like a scientific body of knowledge. Ethics is a unique subject precisely because of its widespread relevance in all areas of life, and any teaching has to start from this shared understanding and from the familiar ethical concepts of ordinary language. Ethical jargon obscures the essential integration of ethics with the personal and "drives a wedge between ethical concepts and ethical conduct". This may have accounted for some of the unethical conduct of "Hitler's psychiatrists" in their disregard of basic principles despite the existence of strong ethical policy. "Ethical mantras" have little value when they exist away from a context of the mature understanding and self-reflection that need to precede good ethical judgment and professionalism [19].

The question remains: why did so many psychiatrists willingly participate in the process of mass murder of the mentally ill? Perhaps some light may be shed on this issue by consideration of similar behavior as reported by Browning [22] in the history of an "ordinary" and unremarkable battalion of the Order Police that participated in mass shootings and deportations. Browning describes how these ordinary men were not coerced to kill, but rather participated in a willing fashion due to peer pressure, government sanction and following of orders, and in order to advance their own interests (careerism). This is in contrast to Goldhagen [23], who suggested that the average German citizen either participated in or ignored genocidal actions during the Nazi era due to ingrained anti-Semitism, which was an intrinsic part of German society and had built up over centuries.

Yet another approach proposes that psychiatrists during the Nazi era were at particularly high risk for moral and ethical breaches because of how society and they themselves defined their role and power. Inherent in their work lay the risk of dehumanizing the patients with whom they had daily involvement, individuals at the extremes of human behavior. Moreover, psychiatry by nature incorporates contemporary ideology in its approach to the individual and society, and psychiatrists during that period were in essence state-controlled. All of these factors may have led to their tendency to objectify patients [reviewed in [1]].
Thus, these psychiatrists were primed to become involved in furthering Nazi ideology.

These differing approaches and considerations need to be conveyed to students of psychiatry, emphasizing that merely explaining the actions of psychiatrists and other physicians during this period by saying "they were evil" is misleading and reductionistic [24]. Danger exists in such an approach, since it would preclude consideration of one's own risk for involvement in such a process.

Despite the wealth of ethics literature and the requirement for medical ethics in training programs, the experience of Nazi German psychiatry receives minimal mention, if any, in contemporary medical student and resident ethics training courses. This is a serious oversight, since well-developed ethical principles did not stop the trespassing of political ideology into clinical practice and research in the 1930s. The result was equally devastating to the patients and to the practice of the profession. Every psychiatry resident needs to know about this. A knowledge and appreciation of Nazi psychiatry practices should become an important component of an integrated program of psychiatry ethics, with a focus on the human aspects of the psychiatric clinician and researcher, and a warning against the influences of political and community ideology impinging upon professional practice.

The content of ethics training for medical students and residents will remain a creative exercise for educators. Such a program should include an informative historical review of this period, including information on the unethical medical activities that transpired, case studies of psychiatrists at all levels of involvement, a consideration of the various ethical frameworks for psychiatric care that were violated, as well as a consideration of why psychiatrists came to be so inextricably involved. Such a course would pave the way for a discussion about creating an optimal framework of ethics for psychiatric practice. Several approaches may be considered, including the recent one by Bloch and Green [25], who proposed a model based on a "complementarity of principlism" (pragmatic focus on respect for autonomy, beneficence, non-maleficence and justice) and care ethics (highlighting character traits pertinent to caring for vulnerable psychiatric patients).

Thus, in addition to the usual formal medical ethics training in which the importance of autonomy, beneficence, confidentiality, and professionalism is emphasized, an in-depth appreciation of the actions of German psychiatrists during the Nazi years should be imparted. Since medical students and residents today are learning in an ethical environment that is unprecedented in its complexity [20], it becomes crucial to include the Nazi era medical experience as an example of proper practice gone awry, despite its being in the interest of science, and despite receiving the support of many of the foremost world leaders of the profession. The responsibility of psychiatrists to act as moral agents in the interests of their patients is thus of paramount importance.

The professional burden of the memory of what transpired during the Nazi era at the hands of members of the psychiatry profession is great. Those who were inextricably involved were colleagues, and this requires us to grapple with the intrinsic guilt of the profession, and to take responsibility to fix fundamental flaws in how we view patients and their management.
A dark side to medicine exists: psychiatry, academia, and science played a key role in the establishment of National Socialism and all that ensued. The experience of psychiatry during the Nazi era provides an example of how science can be perverted by politics and therefore can become vulnerable to misuse and abuse. An exclusive focus on the monstrous aspects of Nazi medicine enables us to dismiss such events as aberrant and deviant, with a subsequent failure to internalize the inherent and very real dangers of the perversion of science and clinical management by outside political influences. Psychiatry cannot afford to turn a blind eye to such a past.

# Competing interests

The author declares that he has no competing interests.

abstract: # Background

Alternate day modified fasting (ADMF) is an effective strategy for weight loss in obese adults.

# Objective

The objective of this study was to examine the dietary and physical activity adaptations that occur during short-term ADMF, and to determine how these modulations affect rate of weight loss.

# Methods

Sixteen obese subjects (12 women/4 men) completed a 10-week trial consisting of 3 phases: 1) 2-week control phase, 2) 4-week ADMF controlled feeding phase, and 3) 4-week ADMF self-selected feeding phase.

# Results

Body weight decreased (*P* < 0.001) by 5.6 ± 1.0 kg post-treatment. Energy intake on the fast day was 26 ± 3% of baseline needs (501 ± 28 kcal/d). No hyperphagic response occurred on the feed day (95 ± 6% of baseline needs consumed, 1801 ± 226 kcal/d). Daily energy restriction (37 ± 7%) was correlated to rate of weight loss (*r* = 0.42, *P* = 0.01). Dietary fat intake decreased (36% to 33% of kcal, *P* < 0.05) with dietary counseling, and was related to rate of weight loss (*r* = 0.38, *P* = 0.03). Hunger on the fast day decreased (*P* < 0.05) by week 2, and remained low.
Habitual physical activity was maintained throughout the study (fast day: 6416 ± 851 steps/d; feed day: 6569 ± 910 steps/d).

# Conclusion

These findings indicate that obese subjects quickly adapt to ADMF, and that changes in energy/macronutrient intake, hunger, and maintenance of physical activity play a role in influencing rate of weight loss by ADMF.
author: Monica C Klempel; Surabhi Bhutani; Marian Fitzgibbon; Sally Freels; Krista A Varady
date: 2010
institute: 1Department of Kinesiology and Nutrition, University of Illinois at Chicago, Chicago, IL, USA; 2Department of Medicine, University of Illinois at Chicago, Chicago, IL, USA; 3Department of Biostatistics, University of Illinois at Chicago, Chicago, IL, USA
references:
title: Dietary and physical activity adaptations to alternate day modified fasting: implications for optimal weight loss

# Introduction

Rates of obesity have dramatically increased over the past three decades. At present, 34% of adults in the United States are obese (body mass index (BMI) > 30 kg/m^2^) [1]. According to the National Heart, Lung, and Blood Institute (NHLBI) Obesity Guidelines [2], dietary interventions should be implemented as the first line of treatment to help obese individuals lose weight. The most common diet therapy prescribed by practitioners is daily calorie restriction (CR). CR involves decreasing energy intake by 15 to 40% of baseline needs every day. Evidence from short-term CR trials (8 to 24 weeks) demonstrates that CR is an effective means of decreasing body weight by 5 to 10% from baseline in obese patients [3-6].

Although CR is the most frequent diet strategy implemented to facilitate weight loss [7], many obese patients find it difficult to adhere to CR since food intake must be limited *every day* [8-10]. Alternate day modified fasting (ADMF) was created as an alternative to CR to improve compliance with dietary restriction regimens [11]. ADMF includes a "feed day" where food is consumed ad libitum over a 24-h period, alternated with a "fast day", where food intake is partially reduced for 24 h. ADMF only requires an individual to restrict food intake on *every other day*, and as such, greatly increases adherence to these protocols. To date, four ADMF human trials have been performed [12-15], two of which were weight loss studies [13,14]. In the first trial by Johnson et al. [13], 8 weeks of modified ADMF, which allowed for 20% of energy needs to be consumed on the fast day, decreased body weight by 8% from baseline in overweight adults. In the second study conducted by our group [14], 8 weeks of modified ADMF (i.e. 25% energy intake on the fast day, alternated with an ad libitum feed day) resulted in a 6% weight loss in obese individuals. Although these preliminary findings suggest that ADMF may be an effective weight loss strategy [13,14,16], what have yet to be examined are the dietary and physical activity adaptations that contributed to this pronounced weight loss by ADMF. Key questions that remain unresolved include: Are obese subjects able to dramatically change their meal pattern and limit their energy intake to 25% of needs on the fast day? If this is the case, what degree of hyperphagia occurs on the feed day in response to this lack of food on the fast day, and how does this affect net energy restriction and rate of weight loss? Moreover, how long does it take for obese subjects to become habituated to ADMF (i.e.
no longer feel hungry on the fast day)? Furthermore, what changes in habitual physical activity occur during ADMF, and how do these changes affect rate of weight loss?\n\nAccordingly, the objective of the present study was to examine the dietary and physical activity adaptations that occur during short-term ADMF, and to determine how these modulations affect rate of weight loss.\n\n# Methods\n\n## Subject selection\n\nThis study was approved by the Office for the Protection of Research Subjects at the University of Illinois, Chicago, and all volunteers gave their written informed consent prior to participation in the trial. As reported previously \\[14\\], participants were recruited by means of advertisements placed in community centers in the Chicago metropolitan area. Inclusion and exclusion criteria were assessed by an in-person interview. Participants meeting the following criteria were included in the study: age 35 to 65 y; body mass index between 30 and 39.9 kg\/m^2^; weight stable for 3 months prior to the beginning of the study (i.e. less than 5 kg weight loss or weight gain); non-diabetic; no history of cardiovascular disease; lightly active (i.e. \\< 3 h\/week of light intensity exercise at 2.5 to 4.0 metabolic equivalents (METs) for 3 months prior to the study); not participating in an exercise class; non-smoker; and not taking weight loss, lipid or glucose lowering medications. Peri-menopausal women were excluded from the study, and post-menopausal women (absence of menses for more than 2 y) were required to maintain their current hormone replacement therapy regimen for the duration of the study.\n\n## Experimental design\n\nObese participants were enrolled in the study as a single cohort. Subjects participated in a 10-week trial consisting of three consecutive dietary intervention phases: (1) 2-week pre-loss control phase, (2) 4-week weight loss\/ADMF controlled feeding phase, and (3) 4-week weight loss\/ADMF self-selected feeding phase. During *Phase 1*, each subject maintained their usual eating and exercise habits in order to maintain a stable body weight. During *Phase 2*, subjects participated in a 4-week controlled-feeding ADMF period. All subjects consumed 25% of their baseline energy needs on the fast day (24 h), and then ate ad libitum on each alternating feed day (24 h). Individual baseline energy requirement was determined by the Mifflin equation \\[17\\]. Subjects were provided with a calorie-restricted meal on each fast day, and ate ad libitum at home on the feed day. The provided fast day meal was formulated for each subject using Nutritionist Pro Software (version 4.3, Axxya Systems, Stafford, TX). All diets were prepared in the metabolic kitchen at the Human Nutrition Research Unit (HNRU) at the University of Illinois, Chicago, and were provided as a 3-day rotating menu consisting of typical American foods. Meals provided on fast days during the controlled feeding phase are displayed in Table 1<\/a>. Each feed\/fast day began at midnight, and all fast day meals were consumed between 12.00 pm and 2.00 pm to ensure that each subject was undergoing the same duration of fasting. During *Phase 3*, all subjects participated in a self-selected feeding ADMF period in conjunction with weekly dietary counseling. This phase was put in place to determine if subjects could maintain the ADMF regimen on their own at home. 
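The fast-day prescription in both ADMF phases is anchored to 25% of baseline energy needs, with baseline needs estimated from the Mifflin equation as noted above. A minimal sketch of that calculation, assuming the published Mifflin-St Jeor coefficients and an illustrative light-activity multiplier of 1.2 (the helper functions and example inputs are hypothetical, not the study's actual software):

```python
# Illustrative sketch of a 25%-of-needs fast-day prescription.
# The coefficients below are the published Mifflin-St Jeor equation;
# the activity factor of 1.2 (lightly active) is an assumption for this example.

def mifflin_st_jeor_ree(weight_kg: float, height_cm: float, age_y: float, sex: str) -> float:
    """Resting energy expenditure (kcal/d) by the Mifflin-St Jeor equation."""
    ree = 10 * weight_kg + 6.25 * height_cm - 5 * age_y
    return ree + (5 if sex == "male" else -161)

def fast_day_kcal(weight_kg: float, height_cm: float, age_y: float, sex: str,
                  activity_factor: float = 1.2) -> float:
    """Fast-day prescription: 25% of estimated baseline energy needs."""
    baseline = mifflin_st_jeor_ree(weight_kg, height_cm, age_y, sex) * activity_factor
    return 0.25 * baseline

# Example: a lightly active 46-year-old woman, 96 kg, 165 cm (hypothetical subject)
print(round(fast_day_kcal(96, 165, 46, "female")))  # -> 480 kcal
```

With inputs typical of this cohort, such a prescription lands near the ~500 kcal fast-day meals described in this section.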
During this phase, subjects still consumed 25% of their baseline energy needs on the fast day (between 12.00 pm and 2.00 pm), and ate ad libitum on the feed day. No food was provided to the subjects during this phase. Instead, a Registered Dietician met with each subject each week (for approximately 30 min per session) to develop individualized fast day meal plans. These plans included menus, portion sizes, and food lists that were consistent with each subject's food preferences and prescribed calorie levels for the fast day. Subjects were also instructed how to make healthy food choices on the ad libitum feed days, by choosing low fat meat and dairy options, and increasing fruit and vegetable intake.\n\nMeal components of provided fast day meals during controlled feeding phase\n\n| Foods | Fast day 1 | Fast day 2 | Fast day 3 |\n|-----------------|--------------------|------------------|-------------------|\n| Entree | Chicken fettuccini | Vegetarian pizza | Chicken enchilada |\n| Fruit\/vegetable | Carrot sticks | Apple | Orange |\n| Snack | Cookie | Peanuts | Crackers |\n\n## Weight loss assessment\n\nBody weight was measured weekly to the nearest 0.25 kg in the fasted state, without shoes, and in light clothing using a balance beam scale (HealthOMeter, Sunbeam Products, Boca Raton, FL).\n\n## Reported food intake on feed days\n\nEach participant completed a 3-day food record on 2 feed days during the week, and on 1 feed day during the weekend, at each week of the 10-week trial. Thus, a total of 30 feed day food records were collected for each subject. At baseline, the Research Dietician provided 15 min of instruction to each participant on how to complete the food records. These instructions included verbal information and detailed reference guides on how to estimate portion sizes and record food items in sufficient detail to obtain an accurate estimate of dietary intake. Subjects were instructed to record food items, in as much detail as possible, in the blank food diary provided. Any mixed foods were broken down to individual food items to be recorded one per line. Participants were not required to weigh foods but were asked to measure the volume of foods consumed with household measures (i.e. measuring cups and measuring spoons). When a commercial product was consumed, subjects were asked to indicate the weight of the product to assess portion size. Food records were collected at the weigh-in each week, and were reviewed by the Dietician for accuracy and completeness. All dietary information from the food records was entered into the food analysis program, Nutritionist Pro (Axxya Systems) by a single trained operator to alleviate inter-investigator bias. The program was used to calculate the total daily intake of energy, fat, protein, carbohydrate, cholesterol, and fiber.\n\n## Reported food intake on fast days\n\nDuring the ADMF controlled feeding phase, subjects were asked to report any additional food item consumed that was not included in the provided meal. During the ADMF self-selected feeding phase, each participant was asked to record his or her food intake on each fast day. At the beginning of this phase, the Research Dietician went over the food record instructions once again with each subject. Fast day food records were collected at the weigh-in each week, and all records were reviewed for accuracy and completeness by the Dietician. 
Dietary information from the fast day food records was analyzed by a single trained operator using Nutritionist Pro (Axxya Systems).\n\n## Hunger, satisfaction with diet, and fullness assessment\n\nSubjects completed a validated visual analog scale (VAS) on each fast day, in the evening, approximately 5 min before going to bed (reported bedtime ranged from 8.20 pm to 1.40 am) \\[18\\]. In brief, the VAS consisted of 100-mm lines, and subjects were asked to make a vertical mark across the line corresponding to their feelings from 0 (not at all) to 100 (extremely) for hunger, satisfaction with diet, or fullness. The VAS was collected at the weigh-in each week and reviewed for completeness. Quantification was performed by measuring the distance from the left end of the line to the vertical mark.\n\n## Physical activity assessment\n\nHabitual, free-living physical activity was assessed by a pedometer (Digiwalker SW-200, Yamax Corporation, Tokyo, Japan SW). Subjects wore the pedometer each day throughout the 10-week trial. The pedometer was worn attached to the participant's waistband during waking hours (except while bathing or swimming), and reset to zero each morning. Number of daily steps were recorded in a pedometer log provided, and the log was collected by study personnel at the weigh-in each week. No subjects were enrolled in an exercise class, and all participants were asked to refrain from joining any exercise programs during the course of the study. In this way, any changes in physical activity during the study could be estimated by the use of the pedometer.\n\n## Statistics\n\nResults are presented as means \u00b1 standard error of the mean (SEM). Tests for normality were included in the model. One-factor repeated measures analysis of variance was performed to determine an overall *P* value over time. The main variables tested included body weight, energy intake, nutrient intake, hunger, satisfaction and fullness. The Bonferroni correction was used to assess significance. Relations between continuous variables (i.e. body weight, energy intake, nutrient intake, hunger, satisfaction and fullness) were assessed by simple regression analyses as appropriate. Data were analyzed by using SPSS software (version 18.0 for Mac OS X; SPSS Inc., Chicago, IL).\n\n# Results\n\n## Subject characteristics at baseline\n\nOf the 52 participants screened, 20 were deemed eligible to participate in the study, and 16 (4 men\/12 women) completed the entire 10-week trial. Subjects who completed the study were middle age (46 \u00b1 3 y, 35-65 y), obese (BMI 34 \u00b1 1 kg\/m^2^, 30.2-39.9 kg\/m^2^), sedentary (2.4 \u00b1 0.3 h\/week of physical activity), and borderline hypercholesterolemic (LDL cholesterol level 106 \u00b1 10 mg\/dl). Eight participants were African-American, 2 were Caucasian, and 6 were Hispanic.\n\n## Changes in body weight in response to ADMF\n\nDuring the control phase, body weight remained stable (week 1: 96.4 \u00b1 5.3 kg, week 2: 96.5 \u00b1 5.2 kg). At the end of the ADMF controlled feeding phase (week 6), body weight decreased (*P* \\< 0.001) to 93.8 \u00b1 5.0 kg (feed day measurement) and 93.7 \u00b1 5.0 kg (fast day measurement). By the end of the ADMF self-selected feeding phase (week 10), body weight was further reduced (*P* \\< 0.001) to 92.8 \u00b1 4.8 kg (feed day measurement) and 90.8 \u00b1 5.0 kg (fast day measurement). 
Thus, a total weight loss of 5.6 \u00b1 1.0 kg (-0.7 \u00b1 1.0 kg per week) was attained after 8 weeks of ADMF.\n\n## Degree of energy restriction achieved with ADMF and relation to body weight changes\n\nEnergy intake and percent energy restriction were determined from food record data collected on feed and fast days. Mean completion rate of feed and fast day food records was 83 \u00b1 5%, and 86 \u00b1 4%, respectively. Energy intake on feed and fast days during each week of the trial is displayed in Figure 1A<\/a>. During the control phase, mean energy intake was 1937 \u00b1 180 kcal. Mean feed day energy intake (1801 \u00b1 226 kcal) at each week of the trial was similar to that of the control phase, and did not differ between ADMF controlled-feeding and self-selected feeding phases. Mean energy intake on the fast day (501 \u00b1 28 kcal, 26 \u00b1 3% of baseline needs consumed) was lower (*P* \\< 0.001) than that of the feed day at each week of the trial. The ratio of energy consumed on the fast day versus the feed day during the controlled feeding phase (0.28 \u00b1 0.03) did not differ from that of the self-selected feeding phase (0.30 \u00b1 0.05). Percent energy restriction is reported in Figure 1B<\/a>. Over the course of the trial, percent daily energy restriction remained high and stable (37 \u00b1 7%), and did not differ between the ADMF controlled-feeding and self-selected feeding phases. Degree of energy restriction achieved by ADMF was correlated to rate of weight loss (*r* = 0.42, *P* = 0.01) and absolute post-treatment weight loss (*r* = 0.48, *P* = 0.008).\n\n## Hyperphagic response\n\nHyperphagia on the feed day in response to the lack of food on the fast day is reported in Figure 2<\/a>. We hypothesized that the participants would increase their energy intake on the feed day by approximately 125% of their baseline needs. However, no such hyperphagic response was observed, as mean feed day energy intake (1801 \u00b1 226 kcal) was similar to calculated requirements (1896 \u00b1 160 kcal) at each week of the trial. Thus, on average, subjects were only consuming 95 \u00b1 6% of their calculated energy needs on the feed day.\n\n## Changes in nutrient intake during ADMF and relation to body weight changes\n\nThe nutrient composition of feed and fast day meals during each phase of the trial is displayed in Table 2<\/a>. During the control phase, subjects were consuming a high fat (\\> 35% of kcal), high saturated fat (\\> 7% of kcal), high cholesterol (\\> 200 mg\/d), and low fiber diet (\\< 25 g\/d), as per the National Cholesterol Education Program (NCEP) dietary guidelines \\[19\\]. During the ADMF controlled feeding phase, the nutrient composition of feed day diet was similar to that of the control phase (i.e. high total fat, high saturated fat, high cholesterol and low fiber). During the ADMF self-selected feeding phase, total fat (33 \u00b1 4% kcal) and saturated fat (7 \u00b1 1% kcal) intake on the feed day decreased (*P* \\< 0.05), relative to the control phase. Dietary cholesterol, however, was still above the recommended daily allowance (223 \u00b1 27 mg\/d), and dietary fiber (15 \u00b1 1 g) was still below the recommended intake level on the feed day. 
Decrease in total fat intake was related to rate of weight loss (*r* = 0.38, *P* = 0.03).\n\nNutrient composition of feed day and fast day meals during each phase of the trial^1^\n\n| | Pre-loss control phase^2^ | Weight loss\/ADMF controlled feeding phase | | Weight loss\/ADMF self-selected feeding phase | |\n|----|----|----|----|----|----|\n| | | **Feed day^2^** | **Fast day^3^** | **Feed day^2^** | **Fast day^2^** |\n| | | | | | |\n| Energy (kcal) | 1937 \u00b1 180 | 1792 \u00b1 228 | 413 \u00b1 20 | 1645 \u00b1 187 | 588 \u00b1 46 |\n| Protein (% kcal) | 18 \u00b1 1 | 18 \u00b1 1 | 23 \u00b1 1 | 19 \u00b1 1 | 20 \u00b1 1 |\n| Carbohydrate (% kcal) | 46 \u00b1 3 | 47 \u00b1 3 | 52 \u00b1 0 | 46 \u00b1 2 | 51 \u00b1 3 |\n| Total fat (% kcal) | 36 \u00b1 5^a^ | 36 \u00b1 6^a^ | 25 \u00b1 1^b^ | 33 \u00b1 4^b^ | 29 \u00b1 1^b^ |\n| \u2003Saturated fat (% kcal) | 11 \u00b1 1^a^ | 10 \u00b1 1^a^ | 6 \u00b1 1^b^ | 7 \u00b1 1^b^ | 9 \u00b1 1^a^ |\n| \u2003Monounsaturated fat (% kcal) | 11 \u00b1 1 | 12 \u00b1 1 | 11 \u00b1 1 | 13 \u00b1 1 | 8 \u00b1 1 |\n| \u2003Polyunsaturated fat (% kcal) | 10 \u00b1 2 | 11 \u00b1 1 | 8 \u00b1 1 | 10 \u00b1 1 | 9 \u00b1 1 |\n| \u2003Trans fat (% kcal) | 4 \u00b1 1^a^ | 3 \u00b1 1^a^ | 0^b^ | 3 \u00b1 1^a^ | 3 \u00b1 1^a^ |\n| Cholesterol (mg) | 249 \u00b1 46^a^ | 239 \u00b1 24^a^ | 68 \u00b1 3^b^ | 223 \u00b1 27^a^ | 73 \u00b1 9^b^ |\n| Cholesterol (mg\/kcal) | 0.13 \u00b1 0 | 0.13 \u00b1 0 | 0.17 \u00b1 0 | 0.14 \u00b1 0 | 0.12 \u00b1 0 |\n| Fiber (g) | 16 \u00b1 2^a^ | 12 \u00b1 2^a^ | 10 \u00b1 1^b^ | 15 \u00b1 1^a^ | 7 \u00b1 1^b^ |\n| Fiber (g\/kcal) | 0.008 \u00b1 0 | 0.008 \u00b1 0 | 0.02 \u00b1 0 | 0.009 \u00b1 0 | 0.01 \u00b1 0 |\n\n^1^ Values reported as mean \u00b1 SEM. Values in the same row with different superscript letters are significantly different, *P* \\<0.05 (One-factor ANOVA with Bonferroni analysis).\n\n^2^ Food intake self-reported each week using 3-d food record.\n\n^3^ Food was provided on the fast day during the controlled feeding phase.\n\n## Hunger, satisfaction with diet, and fullness\n\nChanges in hunger, satisfaction, and fullness during the trial are displayed in Figure 3<\/a>. During the first week of ADMF, hunger scores were elevated. However, after two weeks of ADMF, hunger scores decreased (*P* \\< 0.05) and remained low throughout the rest of the trial. Satisfaction with the ADMF diet was low during the first 4 weeks of the intervention, but gradually increased (*P* \\< 0.05) during the last 4 weeks of the study. Fullness scores remained low during the entire 8-week ADMF intervention.\n\n## Changes in physical activity habits\n\nAll subjects wore a pedometer each day throughout the entire trial to assess changes in physical activity habits. On average, subjects were very compliant with pedometer use, and steps were recorded on 87 \u00b1 4% of study days. We hypothesized that subjects would feel less energetic on the fast days, and would therefore take less steps\/d on fast days than feed days. Interestingly, no difference was noted when fast day values (6416 \u00b1 851 steps\/d) were compared to feed day values (6569 \u00b1 910 steps\/d) (Figure 4<\/a>). Moreover, physical activity remained constant throughout the 10-week study, as steps\/d taken during the control phase was similar to that of the ADMF phases.\n\n# Discussion\n\nPreliminary reports indicate that ADMF may be an effective strategy to help obese individuals lose weight \\[13,14\\]. 
However, the dietary and physical activity adaptations that contributed to this pronounced weight loss by ADMF were not tested previously. We show here, for the first time, that weight loss by ADMF occurred due to a change in meal pattern, i.e. obese subjects limited their energy intake to 25% of needs on the fast day with no hyperphagic response on the feed day. This change in meal pattern helped these subjects to achieve a marked degree of energy restriction (37% net daily), which was related to the pronounced weight loss attained (5.6 kg in 8 weeks). This study is also the first to demonstrate that subjects become habituated to the ADMF diet (i.e. feel very little hunger on the fast day) after approximately 2 weeks, and that physical activity habits are not affected by fasting on alternate days.

A key objective of the present study was to examine the degree of energy restriction achieved by ADMF and to investigate how this relates to rate of weight loss. In order to measure energy intake and percent energy restriction, we asked obese participants to complete food records on feed and fast days throughout the trial. Results from the food record analysis reveal that obese subjects were able to consistently limit their energy intake to approximately 25% of needs (500 kcal) on the fast day. Our data also show that the ratio of energy consumed on the fast day versus the feed day did not differ between phases. However, it should be noted that there was a trend towards consuming less energy on the feed days, and more energy on the fast days, over the course of the trial. It will therefore be of interest in long-term ADMF studies to examine whether restriction gradually diminishes on the fast day after several months on the diet. The degree of hyperphagia that occurred on the feed day in response to the lack of food on the fast day was also assessed. Our data indicate that no hyperphagic response took place, as subjects only consumed approximately 95% of their calculated energy needs on each feed day throughout the trial. These findings therefore suggest that obese subjects are able to drastically change their meal pattern in a way that conforms to the ADMF protocol. Nevertheless, there are several limitations to these data that must be discussed. First and foremost, it is well known that obese subjects underreport energy intake by 20 to 40% when completing food records [20,21]. The extent to which these subjects underreported energy intake became apparent when we tried to relate reported energy intake to the weight loss achieved. From the food record data, we calculated that, on average, subjects were restricted by 37% of calculated needs every day. If the subjects were indeed restricted by this amount, this would have resulted in a rate of weight loss of 1.2 kg/week. In actuality, the rate of weight loss was 0.7 kg/week. This disparity between reported intake and weight loss can be observed when examining the limited amount of weight lost during the 4-week self-selected feeding phase (body weight reduction from 93.8 kg to 92.8 kg on the feed day, equivalent to 1 kg of weight loss). The incongruity between self-reported energy intake and rate of weight loss therefore suggests that subjects were underreporting energy intake. In view of this, it will be important for future ADMF trials to assess energy intake and energy restriction by more robust methods, such as the doubly labeled water technique [22,23].
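As a back-of-envelope check on the net restriction figure discussed above, the same calculation can be run on the group means reported in the Results. This is a sketch only, since the study computed restriction per subject from individual energy needs (giving 37 ± 7%):

```python
# Net daily energy restriction under ADMF, from the reported group means.
# Values are the paper's means; the per-subject calculation gives 37 +/- 7%.

feed_kcal = 1801      # mean reported feed-day intake (kcal)
fast_kcal = 501       # mean reported fast-day intake (kcal)
baseline_kcal = 1896  # mean calculated daily energy needs (kcal)

mean_daily_intake = (feed_kcal + fast_kcal) / 2          # 1151 kcal/d
restriction = 1 - mean_daily_intake / baseline_kcal
print(f"net daily restriction ~ {restriction:.0%}")      # ~ 39%
```

The small gap between this group-mean estimate and the per-subject 37% simply reflects between-subject variation in energy needs.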
It should also be mentioned that assessing body weight changes by ADMF is difficult as weight measurements are drastically different from feed to fast day. This discrepancy in body weight is most likely due to the additional weight of food present in the gastrointestinal tract, and not changes in fat mass from day to day. As a potential solution, future trials of ADMF should average body weight measurements taken from consecutive feed and fast days to attain a more accurate assessment of weight.\n\nIn addition to energy intake, we also examined changes in dietary macronutrient composition throughout the course of the trial. We hypothesized that during the ADMF controlled feeding phase (weeks 3-6), when dietary counseling was not provided, subjects would instinctively choose higher fat\/more energy dense foods on the feed day to make up for the lack of energy consumed on the fast day. Interestingly, fat intake did not increase from the baseline period (36% of kcal) to the ADMF-controlled feeding period (36% of kcal). These preliminary data suggest that subjects are not likely to consume higher fat diets on the feed day when partaking in an ADMF regimen. We also hypothesized that the dietary counseling provided during the self-selected feeding phase (weeks 7-10), would help subjects decrease total fat, saturated fat, and cholesterol intake, while increasing fiber intake. Results reveal that counseling assisted these individuals in lowering their total fat and saturated fat intakes to levels that conform with NCEP dietary recommendations \\[19\\], and that these changes in fat intake were related to rate of weight loss. On the other hand, dietary counseling appeared to have no effect on cholesterol or fiber intake. This lack of effect of dietary counseling on the intakes of these nutrients has been reported previously \\[24\\]. It should also be noted that fiber intake on the fast day was particularly low (7-10 g\/d). Since quantity of food consumed on the fast day is limited, it would be difficult for individuals to meet fiber requirements \\[25\\]. As such, it is recommended that future trials in the ADMF field provide a fiber supplement on the fast day to help individuals meet recommendations \\[19,26\\].\n\nChanges in perceived hunger, satisfaction with diet, and fullness were also evaluated on each fast day throughout the trial. This study is the first to show that obese subjects become habituated with ADMF after approximately 2 weeks of diet (i.e. feel very little hunger on the fast day). Our data also demonstrate that subjects become more satisfied with ADMF after approximately 4 weeks of diet. Feelings of fullness, however, remained low across the course of the trial suggesting that subjects never felt \"full\" at any point while undergoing 8-weeks of ADMF. These findings may have important implications for long-term adherence to ADMF by obese men and women \\[27-29\\]. More specifically, since hunger virtually diminishes, and since satisfaction with diet considerably increases within a short amount of time (2-4 weeks), it is likely that obese participants would be able to follow the diet for longer periods of time. It is important to note, however, that the subjects only completed the VAS scales pre-bedtime. Thus, the data only reflects their feelings immediately before going to bed, and is not indicative of their feelings of hunger and satisfaction throughout the day. 
Future trials in this area should administer these VAS scales throughout the day to obtain a more complete data set for these variables. It should also be noted that hunger spiked at week 8. We speculate that this may have occurred because this study week corresponded to Memorial Day weekend, and subjects may have felt hungrier while attending food-related celebrations. Moreover, trials examining the ability of obese subjects to comply with ADMF for longer durations (i.e. 24 to 52 weeks), and, in consequence, to lose larger amounts of weight, will be an important focus of future research in this field.

The effects of ADMF on habitual physical activity were also assessed by having the subjects wear a pedometer on every day of the study. We hypothesized that subjects would feel less energetic on the fast days, and would therefore be less physically active (i.e. take fewer steps/d) on fast days than feed days. Surprisingly, physical activity level did not differ between feed and fast days. Moreover, there was no difference in activity level when steps/d taken during the ADMF phase were compared to steps/d taken during the control phase. Similar results have also been reported in normal weight individuals undergoing ADMF [12]. These data suggest that obese individuals are able to maintain their level of habitual physical activity despite decreases in energy intake on the fast day. This maintenance of physical activity while undergoing ADMF would thus allow obese individuals to lose weight consistently on feed and fast days, as energy expenditure would stay constant.

A key limitation of this study is that there was no true control group. Having a control arm run parallel to the treatment (ADMF) arm would have strengthened the study by allowing us to: 1) compare changes in the ADMF group to those of a non-restricted control group at each time point, and 2) identify events (such as holidays) that may have resulted in deviations from the prescribed diet. Future studies aiming to test similar objectives should employ a control group where possible.

In summary, these findings indicate that obese subjects quickly adapt to ADMF, and that changes in energy/macronutrient intake, hunger level, and maintenance of physical activity play a role in influencing rate of weight loss by ADMF. These preliminary data offer promise for the implementation of ADMF as a long-term weight loss strategy in obese populations.

# Competing interests

The authors declare that they have no competing interests.

# Authors' contributions

MCK performed all the diet analyses and assisted with trial coordination and manuscript preparation. SB coordinated the human clinical trial. MF and SF assisted with data analysis and manuscript preparation. KAV designed the study and wrote the manuscript. All authors read and approved the final manuscript.

## Acknowledgements

We are grateful for the help of Kathryn Tomaszewski and Kristin Hoody during the analysis phase of the trial.
Funding source: University of Illinois at Chicago, Departmental funding.

abstract: The 'action' in genome-level evolution lies not in the large gene-containing segments that are conserved among related species, but in the breakpoint regions between these segments. Two recent papers in *BMC Genomics* detail the pattern of repetitive elements associated with breakpoints and the epigenetic conditions under which breakage occurs.
author: David Sankoff
date: 2009
institute: 1Department of Mathematics and Statistics, University of Ottawa, 585 King Edward Avenue, Ottawa K1N 6N5, Canada
references:
title: The where and wherefore of evolutionary breakpoints

For many years, dating back to well before the genomics era, there have been numerous observations and hypotheses of associations between the presence or absence of breakpoints of chromosomal evolution and prominent features of the genomic landscape: telomeres, centromeres, recombination hotspots, gene deserts or gene-rich regions, isochores, cytogenetically fragile sites, oncological rearrangements, segmental duplications, transposons and other repetitive elements. Two recent papers in *BMC Genomics* take somewhat different tacks on this subject. Longo *et al.* \[1\] capitalize on new sequencing resources for the tammar wallaby, *Macropus eugenii*, to substantiate the links between the rapid and complex patterns of evolution of centromeric sequence and recurrent rearrangement activity in marsupials, and to discover one evolutionary breakpoint region in humans that has repetitive-element similarity to corresponding regions in marsupials. Lemaitre *et al.* \[2\] combine a high-resolution breakpoint localization procedure with specialized data that they have calculated or obtained on DNase sensitivity, GC content, hypomethylation and replication origins \[3\] to dispel some of the most widespread folklore in the field. They show that propensity to breakage is not favored in gene deserts but, on the contrary, is closely related to transcriptional activity and DNA accessibility in a region, a conclusion that lends a decidedly epigenetic flavor to our understanding of rearrangement.

# The ephemeral breakpoint

A breakpoint or breakpoint region is not a tangible physical entity in a genome; it is an analytical construct arising only in the comparison of two genomes and, as such, exists or not, and has one set of characteristics or another, depending on the assumptions and methodology of this comparison. When we can identify two contiguous chromosomal segments in one genome, each of which seems orthologous to a different segment in another genome, and these latter segments are not contiguous, we can say that there is a breakpoint. When one of the segments is small (according to a threshold of anywhere from 10^2^ to 10^6^ base pairs), we might wish to consider the two breakpoints delimiting the segment as reflecting a single breakpoint. If the two segments are actually contiguous in the second genome but one is inverted compared with its orientation in the first genome, we might want to count the breakpoint or not. Normally, the DNA alignment of the two genomes will not be such that the breakpoint can be pinpointed as separating two specific adjacent base pairs; rather, there will be a more or less lengthy region in the middle of the segment on the first genome that does not align well to either of the two segments of the second genome or their flanking sequences. Instead of a break 'point', we have a break 'region' with its own particular characteristics \[4\].
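This definition translates directly into a computation on synteny blocks. The sketch below counts breakpoints between two single-chromosome genomes encoded as ordered lists of signed block identifiers (sign = orientation). It is a toy at block-level resolution: it applies no size threshold for collapsing nearby breakpoints, and it takes one of the judgment calls mentioned above by counting the flanks of an inverted-but-contiguous segment as breakpoints.

```python
def count_breakpoints(genome_a, genome_b):
    """Count adjacencies of genome_a that are absent from genome_b.
    Genomes are lists of signed synteny-block identifiers."""
    # Collect genome_b's adjacencies in both reading directions.
    adjacencies = set()
    for x, y in zip(genome_b, genome_b[1:]):
        adjacencies.add((x, y))
        adjacencies.add((-y, -x))
    # Every genome_a adjacency missing from genome_b is a breakpoint.
    return sum((x, y) not in adjacencies
               for x, y in zip(genome_a, genome_a[1:]))

a = [1, 2, 3, 4, 5]
b = [1, -3, -2, 4, 5]            # genome b carries an inversion of blocks 2..3
print(count_breakpoints(a, b))   # 2: the breakpoints flanking the inversion
```

As expected from the text, a single inversion leaves two breakpoints, one at each end of the inverted segment.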
To complete the deconstruction of the breakpoint terminology, we can naïvely imagine the free ends of two or more double-stranded breaks in DNA molecules flailing around inside the nucleus until they are repaired (incorrectly), resulting in a rearrangement within a chromosome or involving two chromosomes. This does indeed happen as a result of radiation, toxic or mechanical stress or, as is clearly demonstrated by Lemaitre *et al.* \[2\], following normal cellular activity that requires regions of open chromatin. It should be emphasized, however, that especially where breakpoints are associated with repetitive elements, rearrangements do not derive from any actual DNA breakage, but from nonhomologous recombination caused by faulty alignment of repetitive elements during meiosis.

The Longo *et al.* article \[1\] contains a carefully executed and controlled analysis of the distribution of different kinds of repetitive elements in selected segments from three kinds of genomic region in the tammar wallaby: centromeric regions, breakpoint regions (actually three locations in one breakpoint region) and euchromatic regions not containing a breakpoint. They showed a dramatic enrichment in the breakpoint region of sequence characteristic of endogenous retroviruses (ERVs) and LINE1 transposable elements, and a deficiency of SINE and CR1 transposable element sequences, when compared with the euchromatic regions, with the centromeric regions falling between the two other patterns. In addition, in a human genomic region containing parts homologous to the marsupial breakpoint region and parts homologous to one of the euchromatin selections, the pattern of repetitive elements makes a transition from ERVs and LINE1s to SINEs. This is suggestive of an association between neocentromeric tendencies, regional instabilities around evolutionary breakpoints and the incorporation of specific kinds of repetitive elements. While the authors' \[1\] longstanding interest in marsupial evolution and the role of centromeres in genomic rearrangements, as well as the availability of new sequence resources on the tammar wallaby, certainly provide sufficient motivation for the study of repetitive elements in this context, given the very different patterns known for human and primate pericentromeric evolution it will now be important to generalize this work to genomes for which sequencing is essentially complete and to undertake a more comprehensive survey of repetitive elements in regions of each kind.

# Reuse and recurrence

The term 'breakpoint reuse' is used in the rearrangements literature to cover two rather different concepts. In its original algorithmic use \[5\], it denoted the excess of the number of rearrangements necessary to transform one genome into another over half the number of breakpoints induced by the comparison of the two genomes (given that inversions and reciprocal translocations normally create two breakpoints each). This was accounted for by assuming that some breakpoints (without specifying which ones) were used more than once in the transformation. Soon afterwards, its most frequent meaning became the recurrence of the same breakpoint in two lineages but not in their common ancestor with respect to an outgroup lineage \[6\]. Despite the attractiveness of these concepts to many authors (such as Longo *et al.* \[1\]), neither breakpoint reuse nor breakpoint recurrence is solidly established as a major evolutionary phenomenon, in contrast to well-known disease-causing somatic cell rearrangements. The original concept of reuse, which did not pertain to particular breakpoints but only to their aggregate, has rarely if ever been systematically and quantitatively documented at the level of all the individual breakpoints induced by a pair of genomes. Indeed, the algorithmic results suggesting breakpoint reuse are not only wildly variable depending on how telomeric breakpoints are weighted \[7\], but are in any case predictable artifacts of highly constrained models of evolution through rearrangement \[8\] (models that permit no deletion of chromosome segments, no chromosome or chromosomal arm duplication, no segmental duplication, no transpositions, no jumping translocations and no deletion of paralogous syntenic blocks or interleaving deletions of duplicated blocks), and of the levels of resolution used in defining synteny blocks and breakpoint regions \[9,10\]. In the breakpoint definition above, if two breakpoints are collapsed when the small segment between them is below the threshold size (a common practice), this mistakenly shows up as an increase in breakpoint reuse. As for the phylogenetic recurrence of breakpoints, the major source in this field \[6\] actually shows that 80% of the breakpoints in their mammalian phylogeny are not recurrent, and that almost all of the remaining ones affect the syntenically unstable rodent lineage. The tiny proportion of apparently recurrent breakpoints in the rest of the phylogeny would be hard to distinguish from coincidence, given the resolution of the synteny block construction.
The connection between the 'fragile sites' of traditional cytogenetics and evolutionary breakpoints is exceedingly weak \[11\] and, indeed, statistically insignificant except through a heuristically contrived categorization of the data. The same may be said for the oft-cited attempt \[6\] to associate cancer breakpoints with evolutionary breakpoints by selectively comparing only two of the reported frequency categories of neoplastic breakpoints.

# Accident and selection

An evolutionary breakpoint is the product not only of some meiotic accident at a site predisposed to breakage or nonhomologous recombination. It is also a configuration that has managed to do all of the following: make it through steps of abnormal chromosome alignment and segregation to the gamete stage; participate in creating a viable heterokaryotypic zygote that eventually develops into reproductive maturity; endure generations of likely negative selection; and emerge through genetic drift as a homokaryotypic feature of some presumably small bottleneck population.
Predisposition to breakage at the cellular level is just the first step on the road to fixation, and phenotypic selection operating at the meiotic, embryonic, adult and population levels has a more important role. Somatic cells presumably have many of the same predispositions to physical breakage, although not, of course, to nonhomologous recombination, but cancer cells do not have to survive meiosis or life outside the affected individual, and that may be a large part of the reason why the repertoire and quantitative distribution of rearrangements in tumor genomes are very different from those in evolution \[12\].

Genetic deduction appealing to selection-based arguments at the gene-expression level, together with indirect and anecdotal evidence, has recently prompted speculation about prohibition of rearrangement breakage in short intergenic regions in mammals \[13\]. These claims, however, have effectively been demolished by Lemaitre *et al.* \[2\], who measured directly and systematically, at a high level of resolution, the connections between both high rate of breakage and short intergenic distances and four strong correlates of transcriptional activity: GC content, proximity to origins of replication (as inferred from 'N-domains' \[3\]), hypomethylation (based on CpG ratios) and DNase sensitivity. This innovative and convincing work, to which the authors added support ranging from the classic Bernardi theory of isochores \[14\] to the more recent mammalian replicon model, overturns the conventional genetic wisdom and reopens evolutionary questions about mechanisms promoting neutral variation at the karyotypic level. It adds a weighty contribution to the accumulating body of results, such as those on the gibbon *Nomascus leucogenys leucogenys* \[15\] and those previously produced by the O'Neills-Graves collaboration on marsupials, cited in the Longo *et al.* article \[1\], on the epigenetic conditioning of evolutionary chromosome rearrangement.

### Acknowledgements

This work was supported in part by grants from the Natural Sciences and Engineering Research Council of Canada (NSERC). DS holds the Canada Research Chair in Mathematical Genomics.

abstract: The authors describe the development, implementation, and evaluation of their innovative social marketing campaign.
author: Katherine Ahrens; Charlotte K Kent; Jorge A Montoya; Harlan Rotblatt; Jacque McCright; Peter Kerndt; Jeffrey D Klausner\* To whom correspondence should be addressed. E-mail:
date: 2006-12
references:
title: Healthy Penis: San Francisco's Social Marketing Campaign to Increase Syphilis Testing among Gay and Bisexual Men

# The Problem: Sharp Increase in Syphilis among Men Who Have Sex with Men

San Francisco experienced a sharp rise in early syphilis between 1999 and 2002, with the number of cases rising from 44 to 494 per year (Figure 1).
Rates continued to rise through 2004. Since 1999, most syphilis cases have been among men who identified as gay or bisexual (88%), were white (60%), and were infected with HIV (61%) (Figure 1). In June 2002, the San Francisco Department of Public Health (SFDPH), STD Prevention and Control Services launched a social marketing campaign, called Healthy Penis, designed to increase syphilis testing and awareness among gay and bisexual men.

Social marketing, like marketing in the private sector, is a research-driven approach to behavior change \[1\]. Unlike commercial marketing, however, it is truly consumer-centered and designed to increase the health of the consumer \[2\]. The success of any social marketing campaign can be attributed to the net effect of its five main components: branding, segmentation, price, placement, and promotion \[1–3\]. The first component, *branding*, focuses on the health behavior message so that the desired behavior, or product, appeals to the needs and values of the consumer through functional and emotional attributes.

The second component is concerned with the *segmentation* of the intended audience—developing a campaign message that emphasizes the target population's values, attitudes, and beliefs and/or capitalizes on their current stage of behavior change. Messages that are customized in this manner will ensure that the health behavior (product) is appealing and applicable to each subgroup.

The third component is *price*. In social marketing, price is the social, psychological, or physical cost the consumer associates with performing the health behavior. The fourth component is product *placement*. In public health, this involves delivering the resources that make the desired health behavior possible at a time when it will most likely be sought out. The fifth component is the *promotion* of the health behavior through communication media like print, television, radio, outdoor advertising, or face-to-face techniques that are determined by the information-consumption habits of the consumer. Notable successful social marketing campaigns include North Carolina's "Click It or Ticket" campaign aimed at increasing seatbelt use, Florida's "truth" anti-smoking campaign, and the United States National WIC (Women, Infants and Children) Breastfeeding Promotion project \[4\].

The impact of a social marketing campaign is evaluated based on the measurement of change in the targeted behavior (i.e., increased syphilis testing in the case of Healthy Penis) and the extent to which this change is associated with people who received the campaign messages. To that end, we initially evaluated the Healthy Penis campaign in 2002–2003 and found a higher rate of syphilis testing among those aware of the campaign than among those unaware of the campaign \[5\]. In addition, syphilis knowledge was higher among those aware, a secondary goal of the campaign.

In this article we describe the development, implementation, and evaluations of the Healthy Penis campaign. We present a simple method to evaluate the immediate and long-term effectiveness of a public health media campaign designed to change behavior.
We include our most recent evaluation, conducted in 2004–2005, which assessed the long-term effectiveness of the campaign \[6\] and determined whether respondents aware of the campaign 2.5 years after it began displayed the same syphilis knowledge and testing behavior as those initially aware of the campaign in the first evaluation.

# Healthy Penis Campaign Development

The Healthy Penis campaign was launched during the annual San Francisco Lesbian Gay Bisexual Transgender Parade in late June 2002 and continued through December 2005. A San Francisco–based social marketing firm, Better World Advertising, created the campaign in collaboration with the SFDPH to target the gay and bisexual community. This campaign was part of a broad, multifaceted effort begun in 1999 by the SFDPH in response to the syphilis epidemic \[7\].

The campaign was developed in collaboration with the Los Angeles–based Stop the Sores syphilis prevention campaign, which allowed us to lower our start-up costs to \$75,000 in 2002. The campaign was continued through the end of 2005 at an additional cost of \$295,000 \[7\]. Three-quarters of the campaign's first-year funds were spent on campaign development, with the remaining quarter spent on displaying campaign materials \[8\].

Syphilis is often initially asymptomatic, and testing is readily available in San Francisco for easy diagnosis. Based on these characteristics and input from the community partners group, the SFDPH determined that the primary campaign health behavior message should be "get tested," with the main objective of changing community norms around syphilis testing. Secondary objectives were to increase awareness of the syphilis epidemic in gay and bisexual men and to increase knowledge about syphilis (symptoms, routes of transmission, the link between syphilis and HIV transmission, and so on). The complete methodology for the selection of campaign concepts and themes has been described in a previous manuscript \[5\].

The Healthy Penis campaign was promoted in neighborhoods where the greatest concentration of gay or bisexual men lived and where there were businesses that catered to this population. The campaign incorporated the use of humorous cartoon strips featuring characters like Healthy Penis and Phil the Sore to: (1) promote syphilis testing, (2) publicize the rise of syphilis among gay and bisexual men, (3) provide information on syphilis transmission, symptoms, and prevention, and (4) delineate the connection between syphilis and HIV (Figure 2). These cartoon strips were initially published semi-monthly in a popular gay Bay Area publication. After publication, poster-size reproductions were posted on the streets; in bars and commercial sex venues; on bus shelters and bus advertising; on palm cards; and in banner advertisements on one of the most popular Internet sites for meeting sex partners among gay men.

Topics addressed in the campaign cartoons initially emphasized all four areas described above and then, after 1.5 years, focused primarily on the message "get tested." In addition to messages contained within the cartoon strip, brief text boxes were displayed at the bottom of the cartoon images describing modes of syphilis transmission (skin-to-skin contact; oral, anal, and vaginal sex), symptoms (painless sore and/or rash), and explaining that syphilis is curable. These text boxes were displayed on cartoons from June 2002 to October 2003 with decreasing frequency.
# Evaluation of the Campaign: What Impact Has It Had?

## Survey methodology.

To evaluate the effectiveness of the campaign, we conducted two waves of surveys using the same instrument: one six months after the campaign began (December 2002–February 2003; Evaluation I), and a second 2.5 years after the campaign began (September 2004–March 2005; Evaluation II). For each survey, men were intercepted at coffee shops, bars, markets, laundromats, sex clubs, a clean-and-sober community center, on sidewalks, and in other venues located in campaign-targeted neighborhoods.

Respondents were asked about basic demographic information; unaided (e.g., spontaneous mention) and aided (e.g., prompted response) awareness of the Healthy Penis campaign; perceived key messages of the campaign; syphilis knowledge (via open-ended questions); sexual practices in the past month; HIV status; and how many times they had been tested for syphilis in the past six months.

Comparisons were made among those aware of the campaign between the first and second samples of respondents. Recent history of syphilis testing was compared with campaign awareness level for each evaluation separately. Analyses included Fisher's exact and Cochran-Armitage tests, and logistic regression.
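As an aside for readers who want to reproduce this style of analysis: the Cochran-Armitage test for a linear trend in proportions across ordered exposure groups is straightforward to compute by hand. The sketch below is a minimal pure-Python implementation. The group sizes are hypothetical, chosen only so the proportions match Evaluation I's reported 26%, 40%, and 55% (the actual denominators live in the study data), and the sign of z simply depends on how the awareness scores are ordered.

```python
from math import sqrt

def cochran_armitage_z(successes, totals, scores):
    """Z statistic for a linear trend in proportions across ordered
    groups (the score test behind the Cochran-Armitage trend test)."""
    n_total = sum(totals)
    p_bar = sum(successes) / n_total
    u = sum(s * (r - n * p_bar)
            for s, r, n in zip(scores, successes, totals))
    var_u = p_bar * (1 - p_bar) * (
        sum(n * s * s for s, n in zip(scores, totals))
        - sum(n * s for s, n in zip(scores, totals)) ** 2 / n_total
    )
    return u / sqrt(var_u)

# Hypothetical counts for awareness levels: none / aided / unaided.
tested = [13, 28, 44]   # reported a syphilis test in the past 6 months
totals = [50, 70, 80]   # gives 26%, 40%, 55% -- Evaluation I proportions
print(cochran_armitage_z(tested, totals, scores=[0, 1, 2]))  # ~3.3
```

With these invented denominators the statistic comes out near |z| = 3.3, the same order as the Evaluation I result reported below; a large |z| rejects the null hypothesis of no trend across awareness levels.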
## Survey population.

Two hundred and forty-four interviews were conducted with San Francisco residents and included in the first evaluation; 150 interviews were included in the second evaluation. For each evaluation, all respondents were men who have sex with men (MSM) and most were white and HIV negative. Our respondents were similar to the MSM population in San Francisco. More detailed characteristics of the two samples are described in Table 1. There were no significant demographic differences between Evaluation I and II participants.

###### Demographic Characteristics of Respondents from First and Second Evaluations of the Healthy Penis Campaign

## Campaign awareness.

Campaign awareness was high, with 80% (*n* = 194/244) and 85% (*n* = 127/150) of respondents aware of the campaign in Evaluations I and II, respectively. Unaided and aided awareness was similar between the two evaluations; 33% versus 41% spontaneously mentioned the Healthy Penis campaign (unaided awareness) when asked to recall recent advertisements or public events about sexual health issues, and an additional 47% versus 44% recognized the campaign (aided awareness) when shown a campaign image (Evaluations I and II, respectively). There was no difference in overall campaign awareness, unaided awareness, or aided awareness between the two evaluations (*p* = 0.23, *p* = 0.13, *p* = 0.60, respectively). Among respondents aware of the campaign, campaign exposure was greater in the first evaluation; Evaluation I respondents reported a median of six exposures (mean, 18) over the previous three months, while Evaluation II respondents reported a median of five (mean, six) in the previous three months (Wilcoxon two-sample test, *p* \< 0.0001). Perceptions of key messages among respondents who were aware of the campaign did not change markedly between Evaluations I and II, with the most common message identified being "get tested" (53% versus 55%, respectively; *p* = 0.73).

## Syphilis testing in the last 6 months.

Both evaluations found a positive association between campaign awareness and recent syphilis testing—the primary objective of the social marketing campaign. An increasing proportion of respondents reported syphilis testing in the previous six months by campaign awareness level (none, aided awareness, and unaided awareness): Evaluation I: 26%, 40%, 55% (Cochran-Armitage trend test z = −3.303, *p* = 0.001); Evaluation II: 35%, 42%, 60% (Cochran-Armitage trend test z = −2.304, *p* = 0.02) (Figure 4). After controlling for potential confounders (age, HIV status, casual sex partners in the last month) in a multivariable logistic regression model, each increase in campaign awareness level during Evaluation I was associated with a 90% increase in the likelihood of having tested for syphilis in the past six months (odds ratio \[OR\] 1.9 \[95% confidence interval (CI), 1.3–2.9\]). Other variables significantly related to syphilis testing in Evaluation I included being HIV infected (OR 4.9 \[95% CI, 2.3–10.5\]) and having casual sex partners in the last month (OR 2.0 \[95% CI, 1.1–3.9\]). The same multivariable model applied to Evaluation II respondents found each increase in campaign awareness level to be associated with a 76% increase in the likelihood of syphilis testing (OR 1.76 \[95% CI, 1.01–3.1\]), and no other factors were significant in the model, although HIV-infected respondents showed an increased likelihood of syphilis testing (OR 2.1 \[95% CI, 0.9–5.2\], *p* = 0.10). In both evaluations, a higher proportion of HIV-positive respondents reported syphilis testing than did HIV-negative respondents (Evaluation I: 71% versus 34%, *p* \< 0.05; Evaluation II: 63% versus 44%, *p* = 0.07).

Overall, reported syphilis testing in the previous six months did not change from the first evaluation to the second evaluation (42% versus 49%, *p* = 0.25).

## Syphilis knowledge.

There were few significant differences between evaluations among respondents aware of the campaign in terms of knowledge of syphilis, including knowledge of symptoms; groups most affected by syphilis; how to find out if you have syphilis; health consequences of untreated syphilis; and the relationship between syphilis and HIV acquisition and transmission (Figure 3).

## Syphilis incidence.

In 2005, the incidence of early syphilis was lower than in the previous three years, with decreases in cases among gay/bisexual men accounting for the drop \[9\]. Although our campaign evaluations were cross-sectional and causality cannot be determined, it does appear that the Healthy Penis campaign, along with other SFDPH syphilis elimination efforts \[7\], may have contributed to this decrease in syphilis incidence.

# Conclusions from Evaluations

MSM who were aware of the Healthy Penis campaign were more likely than those unaware to have recently tested for syphilis and to have greater knowledge about syphilis. This effect was sustained for almost three years.

# Next Steps for Healthy Penis

After the initial evaluation of the Healthy Penis campaign, elements of it have been used in Philadelphia, Seattle, Palm Springs, and Santa Clara County, California ([www.healthypenis.org](http://www.healthypenis.org)).
Thus the campaign can be translated to other jurisdictions. Because much of the material has already been developed, other jurisdictions may adapt and use the campaign at a lower cost than was originally incurred in San Francisco, although they will need to conduct evaluations to ensure that the campaign is effective for their populations. In addition, the principles used in the development, implementation, and evaluation of Healthy Penis are universally applicable to any social marketing campaign.

Based on our experience with the Healthy Penis campaign, we believe that spending more start-up resources on campaign development rather than on campaign placement \[8\] was probably what led to our high level of campaign awareness, demonstrating that our campaign resonated with our target population. We also learned that partnering with another jurisdiction (Los Angeles) lowered start-up costs, and that creating a Community Partners Group helped us tailor our campaign messages to be most effective. Most importantly, we learned that careful, well-planned evaluations are critical in assessing the impact of a campaign; here, they supported continuing the campaign and implementing it elsewhere. Our evaluations strongly suggest that the Healthy Penis social marketing campaign was effective in increasing syphilis testing and in raising syphilis awareness and knowledge in the San Francisco gay and bisexual community. This effect might have contributed to the decreased syphilis incidence in 2005.

The authors would like to thank Les Pappas from Better World Advertising, the STD Community Advisory Group for Syphilis Elimination, and the SFDPH employees who administered the evaluation surveys.

**Author contributions.** CKK, JAM, HR, JM, PK, and JDK conceived and designed the experiments. KA, JAM, and HR analyzed the data. KA, CKK, JAM, HR, JM, PK, and JDK wrote the paper. JM worked closely with Better World Advertising in the planning and development of the campaign in 1999.

# References

### Abbreviations

CI

: confidence interval

MSM

: men who have sex with men

OR

: odds ratio

SFDPH

: San Francisco Department of Public Health

abstract: Until the advent of modern neuroscience, free will used to be a theological and a metaphysical concept, debated with little reference to brain function. Today, with ever increasing understanding of neurons, circuits and cognition, this concept has become outdated and any metaphysical account of free will is rightfully rejected. The consequence is not, however, that we become mindless automata responding predictably to external stimuli. On the contrary, accumulating evidence also from brains much smaller than ours points towards a general organization of brain function that incorporates flexible decision-making on the basis of complex computations negotiating internal and external processing.
The adaptive value of such an organization consists of being unpredictable for competitors, prey or predators, as well as being able to explore hidden resources that deterministic automata would never find. At the same time, this organization allows all animals to respond efficiently with tried-and-tested behaviours to predictable and reliable stimuli. As has been the case so many times in the history of neuroscience, invertebrate model systems are spearheading these research efforts. This comparatively recent evidence indicates that one common ability of most if not all brains is to choose among different behavioural options even in the absence of differences in the environment and to perform genuinely novel acts. Therefore, it seems a reasonable effort for any neurobiologist to join and support a rather illustrious list of scholars who are trying to wrest the term 'free will' from its metaphysical ancestry. The goal is to arrive at a scientific concept of free will, starting from these recently discovered processes, with a strong emphasis on the neurobiological mechanisms underlying them.
author: Björn Brembs\*
date: 2011-03-22
institute: Freie Universität Berlin, Institute for Biology – Neurobiology, Königin-Luise-Strasse 28/30, 14195 Berlin, Germany
references:
title: Towards a scientific concept of free will as a biological trait: spontaneous actions and decision-making in invertebrates

# Introduction: the rejection of the metaphysical concept of free will

What could possibly get a neurobiologist with no formal training in philosophy beyond a few introductory lectures to publicly voice his opinion on free will? Even worse, why use empirical, neurobiological evidence, mainly from invertebrates, to make the case? Surely the lowly worm, snail or fly cannot even be close to something as philosophical as free will? The main reason is this neurobiologist's opinion that free will is a biological trait and not a metaphysical entity. 'Free will is a biological property, not a gift or a mystery' \[1\]. Today, neurobiology has accumulated sufficient evidence that we can move on from speculating about the existence of free will towards plausible models of how brains have implemented it. On the surface, this statement seems to contradict public statements from many other neurobiologists who fervently deny free will. In fact, it appears that if neurobiologists feel compelled to write about free will, they do so only to declare that it is an illusion \[2–5\]. Of course, all of these neurobiologists are correct in that free will as a metaphysical entity indeed most probably is an illusion. Colloquial and historical use of the term 'free will' has been inextricably linked with one variant or another of dualism. There have been so many thorough recounts of the free will debate that I will only reference some, which can serve to introduce the concepts used here \[6–9\]. Psychologists and neurobiologists have rightfully pointed out for decades now that there is no empirical support for any form of dualism. The interactionism proposed by Popper & Eccles was probably one of the last prominent accounts of dualism \[10\]. Since then, these and related positions have largely fallen into irrelevance.
Today, the metaphysical concept of free will is largely devoid of any support, empirical or intellectual.

# The rejection of determinism

That said, it is an all too common misconception that the failure of dualism as a valid hypothesis automatically entails that brains are deterministic and that all our actions are direct consequences of gene–environment interactions, maybe with some random stochasticity added in here and there for good measure \[2\]. It is tempting to speculate that most, if not all, scholars declaring free will an illusion share this concept. However, our world is not deterministic, not even the macroscopic world. Quantum mechanics provides objective chance as a trace element of reality. Physicists are keenly aware that Heisenberg's uncertainty principle describes a property of our world rather than a failure of scientists to measure it accurately; in one very clear illustration of this awareness, Stephen Hawking postulated that black holes emit the radiation named after him \[11\], a phenomenon based on the well-known formation of virtual particle–antiparticle pairs in the vacuum of space. The process thought to underlie Hawking radiation has recently been observed in a laboratory analogue of the event horizon \[12,13\]. On the 'mesoscopic' scale, fullerenes have famously shown interference in a double-slit experiment \[14\]. Quantum effects have repeatedly been observed directly on the nano-scale \[15,16\], and superconductivity (e.g. \[17\]) and Bose–Einstein condensates (e.g. \[18\]) are well-known phenomena. Quantum events such as radioactive decay or uncertainty in the photoelectric effect are used to create random-number generators for cryptography that cannot be broken. Thus, quantum effects are being observed on the macroscopic scale as well. Therefore, determinism can be rejected with at least as much empirical evidence and intellectual rigor as the metaphysical account of free will. 'The universe has an irreducibly random character. If it is a clockwork, its cogs, springs, and levers are not Swiss-made; they do not follow a predetermined path. Physical indeterminism rules in the world of the very small as well as in the world of the very large' \[9\].

# Behavioural variability as an adaptive trait

If dualism is not an option and determinism is equally untenable, what other options are we left with? Some scholars have resorted to quantum uncertainty in the brain as the solution, providing the necessary discontinuity in the causal chain of events. This is not unrealistic, as there is evidence that biological organisms can evolve to take advantage of quantum effects. For instance, plants use quantum coherence when harvesting light in their photosynthetic complexes \[19–22\]. Until now, however, it has proved difficult to find direct empirical evidence in support of analogous phenomena in brains \[9\]. Moreover, and more importantly, the pure chance of quantum indeterminism alone is not what anyone would call 'freedom'. 'For surely my actions should be caused because I want them to happen for one or more reasons rather than that they happen by chance' \[9\]. This is precisely where the biological mechanisms underlying the generation of behavioural variability can provide a viable concept of free will.

Biologists need not resort to quantum mechanics to understand that deterministic behaviour can never be evolutionarily stable.
Evolution is a competitive business, and predictability is one thing that will make sure a competitor is soon out of business. There are many illuminating examples of selection pressures favouring unpredictability, but three recently published reports dealing with one of the most repeatable and hence best-studied classes of behaviours are especially telling. These examples concern escape behaviours.

One of the most well-studied escape behaviours is the so-called C-start response in fishes. The response is called C-start because fishes that perceive sudden pressure changes on one side of their body bend into a C-shape away from the perceived stimulus to escape in the opposite direction. This response is mediated by one of the largest neurons in vertebrate nervous systems, the Mauthner cell (e.g. \[23\]). Recently, Kenneth Catania and colleagues described the hunting technique of tentacled snakes (*Erpeton tentaculatus*) \[24,25\]. The snakes hunt for fishes by cunningly eliciting a C-start response in the potential prey animal with a more caudal part of their body, prompting the fish to C-start exactly into the mouth of the snake.

Some of the most important predators of earthworms are moles. When moles dig through the ground, they produce a very distinctive sound. Earthworms have evolved to respond to this sound by crawling to the surface, where the moles will not follow them. Kenneth Catania recently reported that the technique of 'worm-grunting', employed to catch earthworms as fish bait, exploits this response. The worm grunters use a combination of wooden poles and metal rods to generate the sound and then collect the worms from the surface \[26\].

In the third example, another very well-studied escape response is exploited by birds. Under most circumstances, the highly sophisticated jump response of dipteran flies is perfectly sufficient to catapult the animals out of harm's way (e.g. \[27\]). However, painted redstarts (*Myioborus pictus*) are ground-hunting birds that flush out dipterans by eliciting their jump response with dedicated display behaviours. Once the otherwise well-camouflaged flies have jumped, they are highly visible against the bright sky and can be caught by the birds \[28,29\].

It is not a huge leap to generalize these insights from escape responses to other behaviours. Predictability can never be an evolutionarily stable strategy. Instead, animals need to balance the effectiveness and efficiency of their behaviours with just enough variability to spare them from being predictable. Efficient responses are controlled by the environment and are thus vulnerable. Conversely, endogenously controlled variability reduces efficiency but increases vital unpredictability. Thus, in order to survive, every animal has to solve this dilemma. It is no coincidence that ecologists are very familiar with a prominent, analogous situation, the exploration/exploitation dilemma (originally formulated by March \[30\]): every animal, every species, continuously faces the choice between staying to efficiently exploit a well-known but finite resource and leaving to find a new, undiscovered, potentially much richer, but uncertain resource. Efficiency (or optimality) always has to be traded off against flexibility in evolution, on many, if not all, levels of organization.

A great invertebrate example of the sort of Protean behaviour \[31,32\] selected for by these trade-offs is yet another escape behaviour, that of cockroaches.
The cerci of these insects have evolved to detect minute air movements. Once perceived, these air movements trigger an escape response in the cockroach away from the side where the movement was detected. However, the angle the animal takes with respect to the air movement cannot be predicted precisely, because this component of the response is highly variable \[33\]. Therefore, in contrast to the three examples above, it is impossible for a predator to predict the trajectory of the escaping animal.

# Brains are in control of variability

Competitive success and evolutionary fitness of all ambulatory organisms depend critically on intact behavioural variability as an adaptive function \[34\]. Behavioural variability is an adaptive trait and not 'noise'. Not only are biologists aware of the fitness advantages provided by unpredictable behaviour; philosophers, too, realized the adaptive advantages of behavioural variability and its potential to serve as a model for a scientific account of free will as long as 25 years ago (e.g. \[6\]). The ultimate causes of behavioural variability are thus well understood. The proximate causes, however, are much less studied. One of the few known properties is that the level of variability can itself vary. Faced with novel situations, humans and most animals spontaneously increase their behavioural variability \[35–38\]. Animals even vary their behaviour when a more stereotyped behaviour would be more efficient \[39\].

These observations suggest that there must be mechanisms by which brains control the variability they inject into their motor output. Some components of these mechanisms have been studied. For instance, tethered flies can be trained to reduce the range of the variability in their turning manoeuvres \[40\]. For example, one such stationary flying fly may be trained to cease generating left-turning manoeuvres by heating the fly (with an infrared heat beam) whenever it initiates such actions and by not heating it during right-turning manoeuvres. Before such training, it would generate left- and right-turning manoeuvres in equal measure. Protein kinase C activity is required for such a reduction \[41\]. Interestingly, analogous to the exploration–exploitation dilemma mentioned above, the mechanism by which the animals learn to decrease their behavioural variability ('self-learning') interacts with the learning mechanism by which the animals learn about external stimuli ('world learning'). Part of this interaction balances self- and world learning such that self-learning (i.e. the endogenous reduction in behavioural variability) is slowed down whenever the world-learning mechanism is engaged; a toy formalization of this balance follows below. This part of the interaction is mediated by a subpopulation of neurons in a part of the insect brain called the mushroom bodies \[42,43\]. This population of neurons ensures that animals preferentially learn from their environment and reduce their endogenous behavioural variability only when there are good reasons for doing so. Such an organization may underlie the need for practice in order to reduce our behavioural variability when learning new skills, e.g. the basketball free-throw or the golf swing. The parallel to the exploration–exploitation dilemma lies in the balance between the endogenous and exogenous processing these interactions bestow upon the animal: learning about the world first allows the animal to keep its behaviour flexible in case the environment changes, while at the same time being able to solve the experimental task efficiently. If, however, it turns out that the environment does not change, then—and only then—is the circuitry controlling the behaviour modified, to alter the behaviour-generating process itself more permanently and thereby maximize efficiency by reducing the endogenous variability.
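The following sketch is one hypothetical way to express that balance as a pair of update rules. The variable names, learning rates and the multiplicative suppression factor are all invented for illustration; they are not taken from the fly literature and make no claim about the actual mushroom-body mechanism.

```python
def update_memories(world_w, self_w, reward, world_predictor_present,
                    lr_world=0.5, lr_self=0.3, suppression=0.2):
    """Toy model: a world-learning weight tracks an external predictor,
    while a self-learning weight modifies the behaviour-generating
    process itself -- but only sluggishly while world learning is engaged."""
    if world_predictor_present:
        world_w += lr_world * (reward - world_w)              # learn about the world
        self_w += suppression * lr_self * (reward - self_w)   # self-learning slowed
    else:
        self_w += lr_self * (reward - self_w)                 # full-rate self-learning
    return world_w, self_w

w, s = 0.0, 0.0
for _ in range(20):  # an environment offering a reliable external predictor
    w, s = update_memories(w, s, reward=1.0, world_predictor_present=True)
print(round(w, 2), round(s, 2))  # world weight saturates first (~1.0 vs ~0.71)
```

The design choice mirrors the prose: as long as an external predictor is available, the flexible world memory absorbs most of the learning, and the behaviour-generating machinery is only committed once the environment has proved stable.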
Animals other than insects also learn to control their variability using feedback from the environment, such that levels of behavioural variability—from highly predictable to random-like—are directly influenced by reinforcement. For instance, consummatory feeding behaviour of the marine snail *Aplysia* is highly variable \[44,45\]. Recent evidence suggests that the seemingly rhythmic cycling of biting, swallowing and rejection movements of the animal's radula (a tongue-like organ) varies in order to be able to adapt to varying food sources \[46\]. In fact, much like the flies that reduce their variability when trained to avoid heat in the self-learning paradigm explained above, *Aplysia* can be trained to reduce the variability in their feeding behaviour and generate rhythmic, stereotyped movements \[47–52\]. It also takes practice for snails to become efficient and predictable. The default state is to behave variably and unpredictably.

The mechanisms to control behavioural variability are in place in humans as well. For instance, depression and autism are characterized by abnormally stereotypic behaviours and a concomitant lack of behavioural variability. Patients suffering from such psychopathologies can learn to vary their behaviours when reinforced for doing so \[53,54\]. Also, the interactions between world- and self-learning seem to be present in vertebrates: extended training often leads to so-called habit formation, repetitive responses controlled by environmental stimuli (e.g. \[55,56\]). It is intriguing that recent fMRI studies have discovered a so-called default-mode network in humans, the fluctuations in which can explain a large degree of an individual's behavioural variability \[57\], and that abnormalities in this default network are associated with most psychiatric disorders \[58–60\].

# What are the neural mechanisms generating behavioural variability?

It thus appears that behavioural variability is a highly adaptive trait, under constant control of the brain balancing the need for variability with the need for efficiency. How do brains generate and control behavioural variability in this balance? These studies have only just begun. As was the case in much of neuroscience's history, be it ion channels, genes involved in learning and memory, electrical synapses or neurotransmitters, invertebrate model systems are leading the way in the study of the neural mechanisms underlying behavioural variability as well.

Two recent reports concerned another highly reproducible (and therefore well-studied) behaviour, optomotor responses. Tethered flies respond to a moving grating in front of them with characteristic head movements in the same direction as the moving grating, aimed at stabilizing the image on the retina.
By recording from motion-sensitive neurons in fly optic lobes, the authors found that the variability in these neurons did not suffice to explain the variability in the head movements \[61,62\]. Presumably, downstream neurons in the central brain inject additional variability, not present in the sensory input, which is reflected in the behaviour.

A corresponding conclusion can be drawn from two earlier studies, which independently found that the temporal structure of the variability in spontaneous turning manoeuvres, both in tethered and in free-flying fruitflies, could not be explained by random system noise \[63,64\]. Instead, a nonlinear signature was found, suggesting that fly brains operate at criticality, meaning that they are mathematically unstable, which, in turn, implies an evolved mechanism rendering brains highly susceptible to the smallest differences in initial conditions and amplifying them exponentially \[63\]. Put differently, fly brains have evolved to generate unpredictable turning manoeuvres. The default state of flies, too, is to behave variably. Ongoing studies are trying to localize the brain circuits giving rise to this nonlinear signature.

Results from studies in walking flies indicate that at least some component of variability in walking activity is under the control of a circuit in the so-called ellipsoid body, deep in the central brain \[65\]. The authors tested the temporal structure in spontaneous bouts of activity in flies walking back and forth individually in small tubes and found that the power law in their data disappeared if a subset of neurons in the ellipsoid body was experimentally silenced. Analogous experiments have recently been taken up independently by another group and the results are currently being evaluated \[66\]. The neurons of the ellipsoid body of the fly also exhibit spontaneous activity in live imaging experiments \[67\], suggesting that a default-mode network might exist in insects as well.

Even what is often presented to students as 'the simplest behaviour', the spinal stretch reflex in vertebrates, contains adaptive variability. Via the cortico-spinal tract, the motor cortex injects variability into this reflex arc, making it variable enough for operant self-learning \[68–72\]. Jonathan Wolpaw and colleagues can train mice, rats, monkeys and humans to produce reflex magnitudes either larger or smaller than a previously determined baseline precisely because many of the deviations from this baseline are not noise but variability deliberately injected into the reflex. Thus, while invertebrates lead the way in the biological study of behavioural variability, the principles discovered there can be found in vertebrates as well.

One of the common observations of behavioural variability in all animals seems to be that it is not entirely random, yet remains unpredictable. The principle thought to underlie this observation is nonlinearity. Nonlinear systems are characterized by sensitive dependence on initial conditions. This means that such systems can amplify tiny disturbances, such that the states of two initially almost identical nonlinear systems diverge exponentially from each other (a minimal numerical illustration follows below). Because of this nonlinearity, it does not matter (and it is currently unknown) whether the 'tiny disturbances' are objectively random, as in quantum randomness, or whether they can be attributed to system or thermal noise. What can be said is that principled, quantum randomness is always some part of the phenomenon, whether it is necessary or not, simply because quantum fluctuations do occur. Other than that its contribution must be non-zero, there are currently insufficient data to quantify the role of such quantum randomness. In effect, such nonlinearity may be imagined as an amplification system in the brain that can either increase or decrease the variability in behaviour by exploiting small, random fluctuations as a source for generating large-scale variability. A general account of such amplification effects had already been formulated as early as the 1930s \[73\]. Interestingly, a neuronal amplification process was recently observed directly in the barrel cortex of rodents, opening up the intriguing perspective of a physiological mechanism dedicated to generating neural (and by consequence behavioural) variability \[74\].
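As a toy demonstration of such sensitive dependence, unrelated to any specific fly circuit, consider two runs of the logistic map in its chaotic regime whose starting points differ by one part in a million:

```python
# Two logistic-map trajectories, x and y, from near-identical starts.
r = 3.9                       # parameter value in the chaotic regime
x, y = 0.500000, 0.500001     # initial conditions differing by 1e-6
for step in range(1, 31):
    x, y = r * x * (1 - x), r * y * (1 - y)
    if step % 10 == 0:
        print(f"step {step:2d}: |x - y| = {abs(x - y):.6f}")
# The gap grows from ~1e-6 towards order 1 within a few dozen steps:
# the tiniest perturbation is amplified into macroscopic divergence.
```

In such a regime, knowing the rule exactly does not make the trajectory predictable in practice, which is the sense in which the turning-behaviour data resist both the 'pure noise' and the 'clockwork' readings.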
# Determinism versus indeterminism is a false dichotomy

Together with Hume, most would probably subscribe to the notion that 'tis impossible to admit of any medium betwixt chance and an absolute necessity' \[75\]. For example, Steven Pinker (1997, p. 54) concurs that 'A random event does not fit the concept of free will any more than a lawful one does, and could not serve as the long-sought locus of moral responsibility' \[76\]. However, to consider chance and lawfulness as the two mutually exclusive sides of our reality is only one way to look at the issue. The unstable nonlinearity, which makes brains exquisitely sensitive to small perturbations, may be the behavioural correlate of amplification mechanisms such as those described for the barrel cortex \[74\]. This nonlinear signature eliminates the two alternatives that would both run counter to free will, namely complete (or quantum) randomness and pure, Laplacian determinism. These represent opposite and extreme endpoints in discussions of brain functioning, which hamper the scientific discussion of free will. Instead, much like evolution itself, a scientific concept of free will comes to lie between chance and necessity, with mechanisms incorporating both randomness and lawfulness. The Humean dichotomy of chance and necessity is invalid for complex processes such as evolution or brain functioning. Such phenomena incorporate multiple components that are both lawful and indeterminate. This breakdown of the determinism/indeterminism dichotomy has long been appreciated in evolution, and it is surprising to observe the lack of such an appreciation with regard to brain function among some thinkers of today (e.g. \[2\]). Stochasticity is not a nuisance, or a side effect of our reality. Evolution has shaped our brains to implement 'stochasticity' in a controlled way, injecting variability 'at will'. Without such an implementation, we would not exist.

A scientific concept of free will cannot be a qualitative concept. The question is no longer 'do we have free will?'; the questions are now 'how much free will do we have?' and 'how much does this or that animal have?'. Free will becomes a quantitative trait.

# Initiating activity: actions versus responses

Another concept that springs automatically from acknowledging behavioural variability as an adaptive trait is the concept of actions.
In contrast to responses, actions are behaviours for which it is either impossible to find an eliciting stimulus or for which the latency and/or magnitude of the behaviour vary so widely that the term 'response' becomes useless.

A long history of experiments on flies provides accumulating evidence that the behaviour of these animals is much more variable than it would need to be, given the variability in the neurons mediating the stimulus-response chain (reviewed in \[77\]). For instance, in the study of the temporal dynamics of turning behaviours in tethered flies referenced above \[63\], one experimental situation recorded fly behaviour under constant stimulus conditions, i.e. nothing in the exquisitely controlled environment of the animals changed while the turning movements were recorded. Yet the flies kept producing turning movements throughout the experiment as if there had been stimuli in their environment. Indeed, the temporal structure in these movements was qualitatively the same as when there were stimuli to be perceived. This observation is only one of many demonstrating the endogenous character of behavioural variability. Even though there was nothing in the environment prompting the animals to change their behaviour, they kept initiating turning manoeuvres in all directions. Clearly, each of these manoeuvres was a self-initiated, spontaneous action and not a response to some triggering, external stimulus.

In fact, such self-initiated actions are a necessary prerequisite for the kind of self-learning described above \[41–43\]. At the start of the experiment, the fly cannot know that it is its own turning manoeuvres that cause the switch from cold to hot and vice versa. To find out, the fly has to activate the behavioural modules it has available in this restrained situation and register whether one of them might have an influence on the punishing heat beam. There is no appropriate sensory stimulus from outside to elicit the respective behaviour. The fly must have a way to initiate its behaviours itself, in order to correlate these actions with the changes in the environment. Clearly, the brain is built such that, under certain circumstances, the items of the behavioural repertoire can be released independently of sensory stimuli.

The fly cannot know the solutions to most real-life problems. Beyond behaving unpredictably to evade predators or outcompete a competitor, all animals must explore, must try out different solutions to unforeseen problems. Without behaving variably, without acting rather than passively responding, there can be no success in evolution. Those individuals who have found the best balance between flexible actions and efficient responses are the ones who have succeeded in evolution. It is this potential to behave variably, to initiate actions independently of the stimulus situation, which provides animals with choices.

# Freedom of choice

The neurobiological basis of decision-making can also be studied very well in invertebrate models. For instance, isolated leech nervous systems choose either a swimming motor programme or a crawling motor programme in response to an invariant electrical stimulus \[78–80\]. Every time the stimulus is applied, a set of neurons in the leech ganglia goes through a so far poorly understood process of decision-making to arrive either at a swimming or at a crawling behaviour.
The stimulus situation could not be more perfectly controlled than in an isolated nervous system, excluding any possible spurious stimuli reaching sensory receptors unnoticed by the experimenter. In fact, even hypothetical 'internal stimuli', generated somehow by the animal, must in this case come from the nervous system itself, rendering the concept of 'stimulus' in this respect rather useless. Yet, under these 'carefully controlled experimental circumstances, the animal behaves as it damned well pleases' (Harvard Law of Animal Behaviour) [34].

Seymour Benzer, one of the founders of neurogenetics, captured this phenomenon in the description of his first phototaxis experiments in 1967: '… if you put flies at one end of a tube and a light at the other end, the flies will run to the light. But I noticed that not every fly will run every time. If you separate the ones that ran or did not run and test them again, you find, again, the same percentage will run. But an individual fly will make its own decision'. (cited from Brown & Haglund (1994) *J. NIH Res.* **6**, 66-73). Less than 10 years later, Quinn *et al.* separated flies, conditioned to avoid one of two odours, into those that did avoid the odour and those that did not. In a subsequent second test, they found that both the avoiders and the non-avoiders separated along the same percentages as in the first test, prompting the authors to conclude: 'This result suggests that the expression of learning is probabilistic in every fly' [81]. Training shifted the initial 50-50 decision of the flies away from the punished odour, but the flies still made the decisions themselves—only with a different probability than before training. Most recently, in the experiments described above, the tethered flies without any feedback made spontaneous decisions to turn one way or another [63]. These are only three examples from more than 40 years of research in which many behavioural manifestations of decision-making in the fly brain have been observed. Flies can control not only heat but also odour intensity with their yaw torque [40]. They can control the angular velocity of a panorama surrounding them not only by yaw torque but also by forward thrust, body posture or abdomen bending [82]. In ambiguous sensory situations, they actively switch between different perceptual hypotheses; they modify their expectations about the consequences of their actions by learning; and they can actively shift their focus of attention, restricting their behavioural responses to parts of the visual field [83,84]. These latest studies prompted further research into the process of the endogenous direction of selective attention in flies [85-89]. Martin Heisenberg realized early on [90] that such active processes entail the sort of fundamental freedom required for a modern concept of free will, and he continues to advocate this insight prominently today [91].

John Searle has described free will as the belief 'that we could often have done otherwise than we in fact did' [92]. Taylor & Dennett cite the maxim 'I could have done otherwise' [93]. Clearly, leeches and flies could and can behave differently in identical environments. While some argue that unpredictable (or random) choice does not qualify for their definition of free will [2], it is precisely this freedom from the chains of causality that most scholars see as a crucial prerequisite for free will.
Importantly, this freedom is a necessary but not a sufficient component of free will. In order for this freedom to have any bearing on moral responsibility and culpability in humans, more than mere randomness is required. Surely, no one would hold a person responsible for any harm done by random convulsions during an epileptic seizure. Probably because of such considerations, two-stage models of free will were proposed many decades ago, first by James [94], and later also by Henri Poincaré, Arthur Holly Compton, Karl Popper, Henry Margenau, Daniel Dennett, Robert Kane, John Martin Fischer, Alfred Mele, Stephen Kosslyn, Bob Doyle and Martin Heisenberg (cited, reviewed and discussed in [7]), as well as by Koch [9]: one stage generates behavioural options and the other decides which of those actions will be initiated. Put simply, the first stage is 'free' and the second stage is 'willed'. This implies that not all chance events in the brain must manifest themselves immediately in behaviour. Some may be eliminated by deterministic 'selection' processes before they can exert any effects. Analogous to mutation and selection in evolution, the biological process underlying free will can be conceptualized as a creative, spontaneous, indeterministic process followed by an adequately determined process that selects from the options generated by the first. Freedom arises from the creative and indeterministic generation of alternative possibilities, which present themselves to the will for evaluation and selection. The will is adequately determined by our reasons, desires and motives—by our character—but it is not *pre*-determined. John Locke (1689, p. 148) had already separated 'free' from 'will' by attributing freedom to the agent and not to the will: 'I think the question is not proper, whether the will be free, but whether a man be free' [95]. Despite the long tradition of two-stage models of free will, only now are the first tangible pieces of scientific evidence being published. For instance, the independent discovery of nonlinear mechanisms in brains from different phyla is compatible with such two-stage models [63,74]. Essentially, the existence of neural circuits implementing a two-stage model of free will 'would mean that you can know everything about an organism's genes and environment yet still be unable to anticipate its caprices' [96]. Importantly, this inability is not due to inevitable stochasticity beyond control; it is due to dedicated brain processes that have evolved to generate unpredictable, spontaneous actions in the face of pursuit-evasion contests, competition and problem-solving.
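The generate-then-select logic of these two-stage models can be caricatured in a few lines of code. This is purely an illustrative sketch, not a claim about neural implementation; the option generator, the preference function and all parameter values are hypothetical.

```python
import random

def generate_options(n=5, sigma=1.0):
    """Stage 1 ('free'): indeterministic generation of candidate actions."""
    return [random.gauss(0.0, sigma) for _ in range(n)]

def select_action(options, preference):
    """Stage 2 ('willed'): adequately determined selection among the
    candidates, according to the agent's reasons, desires and motives."""
    return max(options, key=preference)

# A hypothetical 'character': this agent prefers actions near a target value.
preference = lambda action: -abs(action - 0.8)

action = select_action(generate_options(), preference)
```

Even with the preference function (the 'character') fully known, the chosen action cannot be predicted in advance, because the menu of options is generated stochastically; the selection itself, however, is entirely lawful.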
# Consciousness and freedom

It thus is no coincidence that we all feel that we possess a certain degree of freedom of choice. It makes sense that depriving humans of such freedom is frequently used as punishment, and that the deprived invariably perceive this loss of freedom as undesirable. This experience of freedom is an important characteristic of what it is like to be human. It stems in part from our ability to behave variably. Voltaire expressed this intuition in saying 'Liberty then is only and can be only the power to do what one will'. The concept that we can decide to behave differently even under identical circumstances underlies not only our justice systems. Electoral systems, our educational systems, parenting and basically all other social systems also presuppose behavioural variability and at least a certain degree of freedom of choice. Games and sports would be predictable and boring without our ability to keep changing our behaviour in otherwise identical settings.

The data reviewed above make clear that the special property of our brain that provides us with this freedom is surely independent of consciousness. Consciousness is not a necessary prerequisite for a scientific concept of free will. Clearly, a prisoner is regarded as un-free irrespective of whether he is aware of it or not. John Austin [97] provides another instructive example: 'Consider the case where I miss a very short putt and kick myself because I could have holed it'. We sometimes have to work extremely hard to constrain our behavioural variability in order to behave as predictably as possible. Sports commentators often use 'like a machine' to describe very efficient athletes. Like practice, conscious effort can control our freedom to a certain degree. Compare, for instance, a line that you quickly drew on a piece of paper with a line drawn with the conscious effort of making it as straight as possible. However, the neural principle underlying the process that generates the variability is beyond total conscious control, requiring us to use rulers for perfectly straight lines. Therefore, the famous experiments of Benjamin Libet and others since then [2,4,5,98-100] only serve to cement the rejection of the metaphysical concept of free will and are not relevant to the concept proposed here. Conscious reflection, meditation or discussion may help with difficult decisions, but even this is not necessarily the case. The degree to which our conscious efforts can affect our decisions is therefore central to any discussion about the degree of responsibility our freedom entails, but not to the freedom itself.
# The self and agency

In contrast to consciousness, an important part of a scientific concept of free will is the concept of 'self'. It is important to realize that the organism generates an action itself, spontaneously. In chemistry, spontaneous reactions occur when there is a chemical imbalance; the system is said to be far from thermodynamic equilibrium. Biological organisms are constantly held far from equilibrium; they are considered open thermodynamic systems. However, in contrast to physical or chemical open systems, some of the spontaneous actions initiated by biological organisms help keep the organism away from equilibrium. Every action that promotes survival or acquires energy sustains the energy flow through the open system, prompting Georg Litsche to define biological organisms as a separate class of open systems (i.e. 'subjects'; [101]). Because of this constant supply of energy, it should not be surprising to scientists that actions can be initiated spontaneously and need not be released by external stimuli. In controlled situations where there cannot be sufficient causes outside the organism to make it release the particular action, the brain initiates behaviour from within, potentially using a two-stage process as described above. The boy ceases to play and jumps up. This sort of impulsivity is a characteristic of children that every parent can attest to. We do not describe the boy's action by saying 'some hidden stimuli made him jump'—he jumped of his own accord. The jump has all the qualities of a beginning. The inference of agency in ourselves, others and even inanimate objects is a central component of how we think. Assigning agency requires a concept of self. How does a brain know what is self?

One striking characteristic of actions is that an animal normally does not respond to the sensory stimuli caused by its own actions. The best examples are that it is difficult to tickle oneself, and that we do not perceive the motion stimuli caused by our own eye saccades or the darkness caused by our eye blinks. The basic distinction between *self*-induced (re-afferent) and externally generated (ex-afferent) sensory stimuli was formalized by von Holst & Mittelstaedt [102]. The two physiologists studied hoverflies walking on a platform surrounded by a cylinder with black and white vertical stripes. As long as the cylinder was not rotated, the animals behaved as if they were oblivious to the stripes. However, as soon as the cylinder started to rotate around the flies, the animals began to turn in register with the moving stripes, in an attempt to stabilize their orientation with respect to the panorama. Clearly, when the animals turned themselves, their eyes perceived the same motion stimuli as when the cylinder was rotated. The two scientists concluded that the animals detect which of these otherwise very similar motion signals are self-generated and which are not, and dubbed this the 'principle of reafference'. To test the possibility that the flies simply blocked all visual input during self-initiated locomotion, the experimenters glued the animals' heads on rotated by 180°, such that the positions of the left and right eyes were exchanged and the proboscis pointed upwards. Whenever these 'inverted' animals started walking in the stationary striped cylinder, they ran in constant, uncontrollable circles, showing that they did perceive the *relative* motion of the surround. From this experiment, von Holst and Mittelstaedt concluded that self-generated turning comes with the expectation of a visual motion signal in the opposite direction, which is perceived but normally does not elicit a response. If a visual motion signal is not caused by the animal itself, on the other hand, it most probably requires compensatory action, as this motion was not intended and hence not expected. The principle of reafference is the mechanism by which we realize which portion of the incoming sensory stream is under our own control and which is not. This is how we distinguish between those sensory stimuli that are consequences of our own actions and those that are not. Distinguishing self from 'world' is the prerequisite for the evolution of separate learning mechanisms for self- and world-learning, respectively [43], which is the central principle of how brains balance actions and responses. The self/world distinction is thus the second important function of behavioural variability, besides making the organism harder to predict: by using the sensory feedback from our actions, we are constantly updating our model of how the environment responds to what we do. Animals and humans constantly ask: what happens if I do this? The experience of willing to do something and then successfully doing it is absolutely central to developing a sense of self and a sense that we are in control (rather than being controlled).
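The logic of the reafference principle can be captured in a toy linear model. This sketch is purely illustrative; the gain, the sign conventions and the numbers are hypothetical simplifications, not the original formalism:

```python
def optomotor_response(perceived_motion, motor_command, gain=1.0):
    """Principle of reafference, caricatured: an efference copy of the motor
    command predicts the expected reafferent motion; only the unpredicted
    remainder (the exafference) drives a compensatory turn."""
    expected = -gain * motor_command           # a right turn predicts leftward image motion
    exafference = perceived_motion - expected
    return -exafference                        # compensatory turning opposes unexpected motion

# Normal fly: turning right (+1) produces leftward image motion (-1) -> no response.
print(optomotor_response(perceived_motion=-1.0, motor_command=+1.0))   # 0.0

# Inverted head: the same turn now yields apparent rightward motion (+1), so the
# prediction never cancels the input and the 'compensation' feeds back on itself,
# in line with the endless circling of the inverted animals.
print(optomotor_response(perceived_motion=+1.0, motor_command=+1.0))   # -2.0
```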
Thus, in order to understand actions, it is necessary to introduce the term self. The concept of self necessarily follows from the insight that animals and humans initiate behaviour by themselves. It would make no sense to assign a behaviour to an organism if any behavioural activity could, in principle, be traced back through a chain of causation to the origin of the universe. An animal or human being is the agent causing a behaviour, as long as no sufficient causes for this activity come from outside the organism. Agency is assigned to entities that initiate actions themselves. Agency is crucial for moral responsibility. Behaviour can have good or bad consequences. It is the agent for whom the consequences matter most and who can be held responsible for them.

# Why still use the term free will today?

By providing empirical data from invertebrate model systems that support a materialistic model of free will, I hope at least to start a thought process: abandoning the metaphysical concept of free will does not automatically entail that we are slaves of our genes and our environment, forced always to choose the same option when faced with the same situation. In fact, I am confident I have argued successfully that we would not exist if our brains were not able to make a different choice even in the face of identical circumstances and history. In this article, I suggest re-defining the familiar free will in scientific terms, rather than giving it up merely because of the historical baggage its connotations carry with them. One may argue that 'volition' would be a more suitable term, less fraught with baggage. However, the current connotations of volition as 'willpower', or as the forceful, conscious decision to behave against certain motivations, render it less useful and less general a term than free will. Finally, there may be a societal value in retaining free will as a valid concept, since encouraging a belief in determinism increases cheating [103]. I agree with the criticism that retention of the term may not be ideal but, in the absence of more suitable terms, free will remains the best option.

I no longer agree that ' ''free will'' is (like ''life'' and ''love'') one of those culturally useful notions that become meaningless when we try to make them ''scientific'' ' [96]. The scientific understanding of common concepts enriches our lives; it does not impoverish them, as some have argued [100]. This is why scientists have tried, and will continue to try, to understand these concepts scientifically, or at least to see where and how far such attempts will lead. It is not uncommon in science to use common terms and later realize that the familiar, intuitive understanding of those terms may not be all that accurate. Initially, we thought atoms were indivisible; today we do not know how far we can divide matter. Initially, we thought species were groups of organisms that could be distinguished from each other by anatomical traits; today, biologists use a wide variety of species definitions. Initially, we thought free will was a metaphysical entity; today, I am joining a growing list of colleagues in suggesting that it is a quantitative, biological trait, a natural product of physical laws and biological evolution, a function of brains—maybe their most important one.

## Acknowledgements

Concepts and ideas in several sections of this article have been adapted from a to-be-published presentation by Martin Heisenberg. I am very grateful for his sharing this presentation with me.
I am also indebted to Christopher Harris, Bob Doyle, Matt Leifer, Sandeep Gautam, Andrew Lang, Julien Colomb and two anonymous referees for helpful comments on an earlier version of the manuscript.

# References

abstract: Protocol design complexity has increased substantially during the past decade and this in turn has adversely impacted drug development economics and performance. This article reviews the results of two major Tufts Center for the Study of Drug Development studies quantifying the direct cost of conducting less essential and unnecessary protocol procedures and of implementing amendments to protocol designs. Indirect costs including personnel time, work load and cycle time delays associated with complex protocol designs are also discussed. The author concludes with an overview of steps that research sponsors are taking to improve protocol design feasibility.
author: Kenneth Getz
date: 2014-05-12
institute: Center for the Study of Drug Development, School of Medicine, Tufts University, 75 Kneeland Street, Suite 1100, Boston, MA 02111, USA; E-Mail: ; Tel.: +1-617-636-3487; Fax: +1-617-636-2425
references:
title: Improving Protocol Design Feasibility to Drive Drug Development Economics and Performance

# 1. Introduction

Research and Development leaders widely agree that among the many factors that drive the high and rising cost of drug development, one of the most significant is protocol design [1]. As the blueprint articulating project strategy and directing project execution performed by both internal and external personnel, protocol design is uniquely positioned to fundamentally and directly impact—positively or negatively—drug development efficiency and economics.

Protocol design has historically been the domain of clinical research scientists. Under a resource-rich R&D operating environment, the regulatory approval of safe and effective medical products was the primary test for assessing study design quality. In the current resource-constrained operating environment, a primary test of quality study design seeks to exclude any protocol that substantially wastes scarce resources, gathers extraneous data, and exposes study subjects to unnecessary risk.

Making matters worse, in the current environment, a rising number of stakeholders influence protocol design practice and quality.
Not only are scientists, regulatory agencies, health authorities, operating managers and key opinion leaders informing study design decision-making, but patients and patient organizations, investigative site personnel, health care providers, policymakers and payers are also increasingly playing a role.

Interviews conducted among drug development executives suggest a number of considerations and pressures that protocol design decision-making must accommodate: New scientific understanding about chronic disease mechanisms and how to measure their progression and economic impact requires collecting more clinical data. Crowded classes of experimental therapies and the ongoing movement to develop stratified medicines compel research sponsors to collect more genetic and biomarker data.

Clinical scientists and statisticians often add procedures to gather more contextual data to aid in their interpretation of the findings and to guide development decisions. Clinical teams are now collecting biomarker data, genetic material, data on economic and therapeutic outcomes, and companion diagnostic data.

Clinical teams routinely add procedures guided by the belief that the marginal cost of doing so, relative to the entire clinical study budget, is small when the risk of not doing so is high [1,2]. Additional clinical data is collected as a precautionary measure in the event that a study fails to meet its primary and key secondary objectives. Data from these procedures may prove valuable in *post-hoc* analyses that reveal new and useful information about the progression of disease, its treatment and new directions for future development activity. Clinical teams add procedures for fear that they may neglect to collect data requested by regulatory agencies and health authorities, purchasers and payers. Failure to collect requested data elements could potentially delay regulatory submission, product launch and product adoption. And medical writers and protocol authors often allow outdated and unnecessary procedures to slip into new study designs because they are routinely included in legacy protocol-authoring templates and operating policies.

Collectively, these factors have all contributed to the dramatic increase in protocol design complexity during the past decade. Research conducted by the Tufts Center for the Study of Drug Development (Tufts CSDD, Boston, MA, USA) documents this alarming trend (see Table 1) and characterizes rising levels of scientific complexity (e.g., number of endpoints; number of procedures performed; number of study volunteer eligibility criteria) and operating complexity (e.g., number of countries where clinical trials are conducted; number of investigative sites activated and monitored; number of patients screened and enrolled).

In 2012, to demonstrate safety and efficacy, a typical phase III protocol had 170 procedures on average performed on each study volunteer during the course of 11 visits across an average 230-day time span. Ten years ago, the typical phase III protocol had an average of 106 procedures, nine visits and an average 187-day time span. For the typical phase III protocol conducted in 2012, study volunteers came from an average of 34 countries and 196 research centers, up from 11 countries and 124 research centers ten years ago.
And in 2012, to qualify to participate in a typical phase III protocol, each volunteer had to meet 50 eligibility criteria, up from an average of 31 inclusion and exclusion criteria ten years ago [1].

ijerph-11-05069-t001_Table 1

Comparing Scientific and Logistical Complexity of a Typical Phase III Protocol Across Two Time Periods.

| Design Characteristics (All Values are Means) | 2002 | 2012 |
|----|----|----|
| Total number of endpoints | 7 | 13 |
| Total number of procedures | 106 | 167 |
| Total number of eligibility criteria | 31 | 50 |
| Total number of countries | 11 | 34 |
| Total number of investigative sites | 124 | 196 |
| Total number of patients randomized | 729 | 597 |

In an effort to rein in rising drug development costs, a growing number of pharmaceutical and biotechnology companies are looking to identify protocol design practices that can be modified. This article reviews two major Tufts CSDD studies that have quantified the direct cost impact of protocol design practices. It also summarizes growing knowledge of the impact of protocol design complexity on indirect costs. At the conclusion of the article, steps that drug developers are taking to simplify protocol design and lower overall study costs are discussed.

# 2. Direct Costs Associated with Protocol Design Practices

## 2.1. Protocol Procedures

In a study completed in 2012, Tufts CSDD demonstrated that the cost of adding multiple individual protocol procedures, in the aggregate, is substantial. Fifteen mid-sized and large pharmaceutical and biotechnology companies participated in the study. Each company provided data on their phase II and III protocols targeting diseases across multiple therapeutic areas and executed by investigative sites dispersed globally since 2009. To minimize unusual and atypical designs, pediatric, medical device, orphan drug and extension studies were excluded from the sampling frame. In all, 116 unique phase II and III protocols having at least one procedure tied to a primary endpoint were analyzed [2].

Participating companies classified each protocol procedure according to the objective and endpoint it supported as defined by the clinical study report (CSR) and the study's specific statistical analysis plan (SAP) along the following lines:

1. "Core" procedures—supported primary and/or secondary study objectives or primary or key secondary and safety endpoints.

2. "Required" procedures—supported screening requirements and compliance-related activity including drug dispensing, informed consent form review, and study drug return.

3. "Standard" procedures—are commonly performed during initial and routine study participant visits including medical history, height and weight measurement, adverse event assessment, and concomitant medication review.

4. "Non-Core" procedures—supported supplemental secondary, tertiary and exploratory endpoints, and safety and efficacy procedures not associated with a study endpoint or objective.

Participating companies classified 25,103 individual phase II and III protocol procedures. Direct cost data for 16,607 procedures was analyzed using Medidata Solutions' PICAS® database. Overall, half of the total procedures per protocol were classified as "Core" to the study. Wide variability in the incidence of procedures supporting endpoint classifications was observed across therapeutic areas. Protocols targeting endocrine and central nervous system (CNS) disorders contained a higher relative average number of procedures supporting supplementary, tertiary, and exploratory ("Non-Core") endpoints. Oncology protocols had the highest relative proportion of procedures supporting "Core" endpoints and objectives, and the lowest relative proportion supporting "Non-Core" endpoints.

The distribution of direct costs was similar to that of categorized procedures: overall, 47.9% of the total study budget on average was spent on the direct costs to administer "Core" procedures in phase II and phase III protocols.
The direct cost to administer "Required" (regulatory compliance) and "Standard" procedures for phase III protocols was 22.7% and 12.0% of the study budget, respectively.

For phase III protocols alone, approximately half of the total direct cost (46.7%) was spent to administer "Core" procedures; 18.6% on average was spent to administer "Non-Core" procedures; 22.7% to administer procedures supporting screening requirements and regulatory compliance; and 12% supported "Standard" procedures (see Table 2).

ijerph-11-05069-t002_Table 2

Distribution of Procedures and Direct Costs per Procedure by End Point Classification.

| Endpoint Type | Phase II Procedures | Phase III Procedures | Phase II Procedure Costs | Phase III Procedure Costs |
|----|----|----|----|----|
| Core | 54.4% | 47.7% | 55.2% | 46.7% |
| Required | 8.0% | 10.0% | 16.3% | 22.7% |
| Standard | 19.7% | 17.6% | 15.4% | 12.0% |
| Non-Core | 17.9% | 24.7% | 13.1% | 18.6% |
| Total | 100% | 100% | 100% | 100% |

One out of five (22.1%) phase II and III protocol procedures, on average, supported tertiary and exploratory objectives and endpoints. The proportion of procedures collecting non-core data in phase III studies alone was even higher (24.7%). Non-core procedures consumed on average 19% of total direct costs for each phase III study, and 13% of total direct costs for each phase II study [2].

The total cost to the pharmaceutical industry each year to perform procedures supporting "Non-Core" objectives and endpoints for all FDA-regulated phase II and III protocols is an estimated $4-$6 billion USD. This estimate is very conservative, as it excludes all indirect costs for the personnel and infrastructure required to capture, monitor, clean, analyze, manage and store extraneous protocol data, and it does not include any estimate of the unnecessary risk to which patients may be exposed.
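As a rough illustration of how such an industry-wide figure is assembled, the sketch below multiplies per-study non-core spending by a count of active studies. The non-core cost shares come from Table 2; the per-study budgets and study counts are hypothetical placeholders, since the article does not reproduce the actual inputs:

```python
# Hypothetical back-of-the-envelope sketch of the industry-wide estimate.
noncore_share = {"phase2": 0.131, "phase3": 0.186}          # from Table 2
assumed_direct_budget = {"phase2": 10e6, "phase3": 25e6}    # USD, hypothetical
assumed_active_studies = {"phase2": 1500, "phase3": 800}    # hypothetical counts

total = sum(noncore_share[p] * assumed_direct_budget[p] * assumed_active_studies[p]
            for p in noncore_share)
print(f"${total / 1e9:.1f}B per year")   # ~$5.7B with these placeholders, inside the $4-$6B range
```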
## 2.2. Protocol Amendments

A second Tufts CSDD study showed that the incidence of protocol amendments rises with an increase in protocol design complexity. Amendments that are implemented after protocols have been approved are commonplace, but they are widely viewed as a nuisance, resulting in unplanned study delays and additional costs. Although clinical research professionals view protocol amendments as major problems, it is perhaps more accurate to view amendments as attempts to address underlying protocol design problems and external factors impacting design strategies.

In 2010, Tufts CSDD convened a group of seventeen mid-sized and large pharmaceutical and biotechnology companies, each of which provided protocol amendment data from 3,410 protocols approved between January 2006 and December 2008. The protocols were representative of multiple therapeutic areas. Protocols approved within the most recent 12 months were excluded from the study because these had not had enough time to accumulate amendments [3].

Amendments were defined as any change to a protocol requiring internal approval followed by approval from the Institutional Review Board (IRB), Ethical Review Board (ERB) or regulatory authority. Detailed data on 3,596 amendments containing 19,345 total protocol modifications was analyzed. Companies participating in the study assigned specific changes made per amendment, yielding a total of 6,855 changes classified. Companies also provided the top causes for amendments and rated each amendment in terms of whether it was "Completely Avoidable", "Somewhat Avoidable", "Somewhat Unavoidable" or "Completely Unavoidable".

Nearly all protocols required at least one amendment. Completed protocols across all phases had an average of 2.3 amendments, though later-stage phase II and III protocols averaged 2.7 and 3.5 amendments, respectively. Each amendment required an average of 6.9 changes to the protocol. Therapeutic areas with the highest incidence of amendments and changes per amendment included cardiovascular and GI protocols (see Table 3).

ijerph-11-05069-t003_Table 3

Mean Number of Amendments and Changes per Amendment per Protocol by Phase.

| Research Phase | Number of Amendments | Protocol Changes per Amendment |
|----------------|----------------------|--------------------------------|
| Phase I | 1.9 | 5.6 |
| Phase II | 2.7 | 6.8 |
| Phase III | 3.5 | 8.5 |
| Phase IIIb/IV | 2.6 | 8.3 |
| All phases | 2.3 | 6.9 |

Of the 6,855 changes categorized, 16% were modifications made to the description and eligibility criteria of the patient population under investigation; 12% were adjustments made in the number and types of safety assessment procedures; and 10% were edits and revisions made to the general information contained in the protocol (e.g., protocol title and study staff contact information).

Nearly 40% of all amendments occurred before the first study volunteer received his or her first dose in the clinical trial. This was most pronounced in phase I studies, where 52% of amendments occurred prior to beginning patient enrollment. In phase II, III, and IIIb/IV studies, 37%, 30%, and 38% of amendments occurred before first patient first dose, respectively (see Figure 1).

The most common cause of amendments was the availability of new safety information (20%), followed by requests from regulatory agencies to amend the study (19%) and changes in the study strategy (18%). Protocol design flaws and difficulties recruiting study volunteers were also top cited causes, at 11% and 9% of categorized amendments, respectively.

Two-thirds (63%) of amendments had causes that sponsor companies considered unavoidable, including amendments that were the result of new safety information, new regulatory requests, or changes in the standard of care or study objectives. More than one-third of protocol amendments (37%) were considered partially or completely avoidable.

Based on data provided by participating companies, the direct cost to implement a single protocol amendment (*i.e.*, to reach the first patient enrolled under the revised protocol) is approximately $500,000 (USD) in unplanned expense, and each amendment adds 61 days to the project timeline. This figure undercounts the full economic impact, as it does not include the cost of internal staff time dedicated to implementing each amendment, costs or fees associated with protocol language translation, or costs associated with resubmission to the local authority.

Tufts CSDD estimates that the total cost for sponsors to implement "avoidable" protocol amendments in 2014 was approximately $2 billion (USD). This estimate is based on the incidence of amendments by phase for all active global FDA-regulated clinical trials as reported by the agency; the direct cost to implement each amendment; and the proportion of amendments that is avoidable.
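The arithmetic behind that estimate can be sketched as follows. The per-amendment cost, the avoidable fraction and the mean amendment count come from the text and Table 3 above; the count of active protocols is a hypothetical placeholder, since the FDA figures are not reproduced in this article:

```python
direct_cost_per_amendment = 500_000   # USD, from the text above
avoidable_fraction = 0.37             # from the text above
amendments_per_protocol = 2.3         # mean across phases (Table 3)

def avoidable_amendment_cost(active_protocols):
    amendments = active_protocols * amendments_per_protocol
    return amendments * avoidable_fraction * direct_cost_per_amendment

# With a hypothetical ~4,700 active protocols, the total lands near $2B:
print(f"${avoidable_amendment_cost(4_700) / 1e9:.1f}B")   # ~$2.0B
```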
This estimate does not include the billions of dollars that could be realized annually through aggregate time savings and earlier commercialization if each avoidable amendment were eliminated [3].

To date, Tufts CSDD research on the direct cost impact of protocol design complexity has focused on non-core procedures. Tufts CSDD is currently conducting research on the direct cost of Core, Required and Standard procedures.

## 2.3. Indirect Costs

Tufts CSDD research is only beginning to quantify the indirect costs associated with protocol design complexity. These costs are no doubt far higher than the estimated direct costs, and include increasing cycle times and delays, as well as full- and part-time staffing costs associated with project management and oversight, protocol administration, and the gathering, monitoring, collecting, cleaning, analyzing, maintaining and storing of clinical data.

## 2.4. Cycle Time

Research published in the peer-reviewed and trade literature shows a very clear relationship between study design complexity and performance: study designs that include a relatively large number of eligibility requirements and of unique procedures conducted frequently have lower study volunteer recruitment and retention rates, take longer, and generate lower-quality clinical data than designs without such features [4].

Tufts CSDD research has demonstrated that the average overall duration of clinical trials—from "Protocol Ready" to the last patient completing her last visit—is 74% longer for complex clinical trials. And whereas three out of four volunteers are randomized following screening and two-thirds of volunteers complete simpler protocols, only 59% are randomized and less than half (48%) complete complex protocols [4].

Other studies in the literature corroborate these findings and provide insights into the causes of cycle time delays [5,6,7,8,9]. Clark found, for example, that the collection of excessive and unnecessary clinical data is driving longer study durations. The author warned that data collection and regulatory agency submission delays may ultimately harm regulatory approval rates [5].

Ross and colleagues conducted a comprehensive analysis of peer-reviewed academic studies and found that health professionals were less likely to refer, and patients less likely to participate in, more complex clinical trials [6]. Madsen showed that patients are significantly less likely to sign the informed consent form when facing a more demanding protocol design [7].

In a study conducted by Boericke and Gwinn, the higher the number of study eligibility criteria, the more frequent and longer were the delays in completing clinical studies [8]. Andersen and colleagues showed that volunteer drop-out rates are much higher among patients participating in more complex clinical trials. The authors cautioned that when volunteers terminate their participation early and are lost to follow-up, the validity of the study results may be compromised [9].

## 2.5. Investigative Site Work Burden

Rising protocol design complexity also places additional execution burden on study staff. Work burden to administer protocol procedures was assessed using an approach developed by Tufts CSDD in 2008 and adapted from Medicare's Relative Value Unit (RVU) methodology [4].
The RVU scale was created by the Federal government in 1982 to determine reimbursement payment levels based on physicians' relative costs instead of prevailing charges for medical procedures. Tufts CSDD mapped medical procedure RVUs to clinical trial procedures to derive Work Effort Units (WEU) per protocol procedure. For procedures that had not already been assigned a Medicare RVU, a panel of 10 physicians at the Tufts University School of Medicine was convened to estimate the time spent per protocol procedure. "Investigative Site Work Burden" is the product of the WEUs per procedure and the frequency with which the procedures were conducted over the course of the protocol.

Total investigative site work burden to administer phase II protocol procedures grew the fastest—73%—during the ten-year period 2002 to 2012, outpacing growth in the total number of procedures performed per protocol. The work effort required of site personnel to administer procedures supporting phase I, III and IV protocols increased 48%, 56% and 57%, respectively, during the same ten-year period (see Figure 2). Protocols targeting diseases associated with immunomodulation, oncology, CNS disorders and cardiovascular disease required the highest work effort from investigative site personnel to administer.
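To make the work-burden calculation just described concrete, here is a minimal sketch. The procedure names and all numeric values are hypothetical placeholders, not figures from the Tufts CSDD study; only the formula (burden = WEU per procedure times frequency, summed over the protocol) comes from the text:

```python
# (procedure, hypothetical WEU per administration, times performed per protocol)
protocol_procedures = [
    ("blood draw",    0.2, 11),
    ("12-lead ECG",   0.3,  4),
    ("MRI scan",      1.5,  2),
    ("physical exam", 0.5, 11),
]

# Investigative Site Work Burden: sum of WEU x frequency over all procedures.
site_work_burden = sum(weu * freq for _, weu, freq in protocol_procedures)
print(site_work_burden)   # 11.9 work effort units for this hypothetical protocol
```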
## 2.6. Study Monitoring, Data Management and Logistics Costs

Tufts CSDD studies have only just started to develop methodologies to measure the incremental indirect costs of study monitoring activity, data management and study logistics associated with complex protocol designs. In a preliminary analysis of regulatory submissions from nine major pharmaceutical companies, Tufts CSDD found that the average NDA in 1997 was nearly 70,000 pages in length and required 2 gigabytes of memory to store electronically. In 2012, the average NDA was almost 3.5 million pages and required 100 gigabytes of digital memory [10]. According to research from Medidata Solutions, a typical phase III study collected nearly 1 million clinical data points in 2012, up from an estimated half that level 10 years earlier [1].

Studies in the literature describe the impact of collecting too much data. Friedman and colleagues report that the high volume of data collected in today's protocols distracts research scientists, compromises the data analysis process and ultimately harms data quality [11]. As more data are collected during protocol administration, error rates increase, according to Nahm *et al*. [12]. Barrett found that more procedures per protocol were associated with a higher incidence of unused data in NDA (New Drug Application) submissions [13]. Abrams *et al.* found that unused clinical data compromise the data analysis process [14].

Preliminary results from a 2014 Tufts CSDD study suggest that study monitors perform 100% source data verification on clinical data gathered from all non-core procedures, and that one-third of all queries per case report form are associated with these less essential procedures.

As mentioned earlier, Tufts CSDD research indicates that the number of countries and investigative sites where clinical trials are conducted has grown considerably. As clinical trials have become more globally dispersed, research sponsors have taken on numerous logistical complexities, including delivering clinical supplies and collecting lab data from more remote locations; interacting with multiple regulatory agencies and health authorities; and monitoring and overseeing sites in remote locations where infrastructure is more variable.

## 2.7. Steps to Optimize Study Design

During the past several years, a growing number of organizations have taken the initial steps of diagnosing their protocol design practices and comparing them with industry benchmarks. Sponsor organizations have referred to published data in peer-reviewed papers to compare internal study design practices with benchmarks. Commercially available software and consulting services are also available to assist sponsor companies in assessing the problem and implementing new practices to streamline and simplify study design [1].

In addition to diagnosing the problem, sponsor organizations are looking for ways to ensure that protocol feasibility assessment is conducted more effectively. Whereas scientific objectives solely dictated study design elements in the past, in an environment where resources and capital are far more limited, operating objectives now carry substantially more influence over design decisions. As such, the primary objective in optimizing protocol design is to perform great science that can be feasibly and cost-effectively executed.

Many sponsor organizations now solicit feedback from principal investigators, study coordinators and patients to identify areas where study design feasibility can be improved prior to final approval of the protocol. Some of these feedback mechanisms are conducted as in-person meetings and focus groups, while others are conducted online using social and digital media communities [15].

Protocol authoring practices received a valuable new reference resource in 2013. A group of international study design stakeholders issued a collection of checklists and guidelines designed to ensure that the purposes of protocol procedures are transparent and tied to core objectives and endpoints. The SPIRIT checklist was developed with input from 115 multi-disciplinary contributors, including medical writers, journal editors, regulatory agencies, ethics committees, and clinical research and health care professionals.

The SPIRIT 2013 checklist and guidelines call for simple, minimalist designs that are clearly tied to core endpoints and objectives as defined by the clinical study report. Formatting conventions for various protocol elements (e.g., table of contents, glossary of terms, abbreviated terms) are also provided. The SPIRIT checklist has been pilot tested and can be downloaded for free [16].

A 2013 Tufts CSDD study documented the creation of internal governance committees charged with challenging clinical teams to improve protocol feasibility [1]. Pharmaceutical and biotechnology companies began establishing these committees 24 months ago in an effort to adopt a more systematic and long-term approach to optimizing study design. Committees are positioned within their respective organizations as objective governance and assessment mechanisms, offering guidance and input into the existing protocol review process without requiring organizations to alter legacy study design practices and procedures.

The committees raise clinical team awareness of the impact that design decisions have on study budgets and on study execution feasibility.
Committees typically provide input into the study design just prior to final protocol approval, and they routinely offer insight into how protocol designs can be streamlined and better "fit to purpose". In the long term, internal facilitation committees may assist organizations in fundamentally improving their study design practices.

Nearly all of the facilitation committees are composed of cross-functional representatives who volunteer their time. Committee representatives come from a variety of functions, including clinical development; clinical operations; statistics; data management; medical writing; clinical pharmacology; regulatory affairs; safety and pharmacovigilance; and finance and/or procurement. Although some committees have more expansive goals than others, all committees are charged with simplifying and streamlining protocols by reducing unnecessary procedures and the number of amendments. To meet these goals, committees assess the direct cost to perform non-core procedures, as well as core procedures that may be conducted more frequently than necessary.

Adaptive trial designs represent an optimization opportunity that has received limited attention from research sponsors to date. Adaptive trial designs are preplanned, typically through the use of trial simulations and scenario planning, whereby one or more specified clinical trial design elements are modified and adjusted—while the trial is underway—based on an analysis of interim data.

Tufts CSDD estimates that approximately one out of five (20%) late-stage clinical trials are using a simple adaptive design approach. A much lower percentage—5%—are using more sophisticated adaptations (adaptive dose-range studies and seamless phase II-III studies). Sponsor companies report that they expect the adoption of adaptive trial designs in earlier, exploratory-phase clinical trials to increase significantly over the next several years [17].

Study terminations due to safety and/or efficacy futility are the most common simple adaptive design and the most widely used at this time. Sponsor companies have found that early terminations due to futility are relatively easy to implement, and a growing number of companies view studies employing sample-size re-estimation adaptations similarly.

Although the concept of adaptive trial designs has been widely discussed for more than ten years, adoption has been slow for a variety of reasons. Internal organizational resistance appears to be the primary factor limiting more widespread adoption [17]. Based on interviews with company executives, regulatory agency receptivity to the use of adaptive trial designs does not appear to be as significant a barrier to adoption, though agency clarity with regard to its position on the use of adaptive designs appears to be lacking.

Clinical teams and operating functions perceive enrollment and logistical factors—specifically delays and disruptions in trial execution, patient participation and distribution of clinical supplies—as major barriers to adoption. Sponsors are also concerned about introducing bias following interim analyses; the lack of adaptive trial design experience among both internal development teams and external contract research organizations; gaps in the infrastructure and technology needed to implement more sophisticated adaptive designs; and the limited capacity of independent data monitoring committees.

Adaptive trial designs hold promise in optimizing study design.
Early study terminations due to futility and sample-size re-estimation could save up to a hundred million dollars (USD) in direct and indirect costs annually per pharmaceutical company, depending on clinical trial scope, on when the trial is actually terminated, and on the sponsor's overall implementation of this adaptive approach across the development portfolio. Perhaps the greatest impact from adaptive trial designs will come from improvements in late-stage success rates. Even modest improvements in success rates for new molecular entities (NMEs) and new biologic entities (NBEs) represent billions of dollars in increased revenue potential for research sponsors.

In the immediate term, adaptive trial designs are offering cross-functional teams direct insights into study design through scenario planning and trial simulation prior to finalizing the protocol. Rigorous upfront planning—similar to optimization practices for traditional study designs—is forcing organizations to challenge protocol feasibility before placing the protocol in the clinic.

# 3. Conclusions

Protocol design holds the key to fundamentally and sustainably transforming drug development performance, cost and success. Research conducted by Tufts CSDD and others contributes to our growing understanding of specific opportunities that can be implemented to optimize protocol design. There is no question that pharmaceutical and biotechnology companies will adopt and implement new approaches to test and modify study design feasibility with the goal of best executing excellent science. More flexible and adaptive trial designs are expected to play a growing role in helping to optimize study design by compelling sponsor companies to perform more rigorous upfront planning and simulation and to implement preplanned adaptations that may lower fixed operating costs and ultimately improve program success rates.

## Acknowledgments

The author wishes to thank his colleagues at the Tufts Center for the Study of Drug Development and participating companies for their support of the Center's research studies, and Medidata Solutions for providing an unrestricted study grant to quantify the direct cost of procedures by endpoint classification.

## Conflicts of Interest

The author declares no conflict of interest.

# References

author: James E Ferrell
date: 2009
institute: Department of Chemical and Systems Biology, Stanford University School of Medicine, Stanford, CA 94305-5174, USA
title: Q&A: Systems biology

# What is systems biology?

Systems biology is the study of complex gene networks, protein networks, metabolic networks and so on. The goal is to understand the design principles of living systems.

# How complex are the systems that systems biologists study?

That depends. Some people focus on networks at the 'omics' scale: whole genomes, proteomes, or metabolomes.
These systems can be represented by graphs with thousands of nodes and edges (see Figure 1). Others focus on small subcircuits of the network; say, a circuit composed of a few proteins that functions as an amplifier, a switch or a logic gate. Typically, the graphs of these systems possess fewer than a dozen (or so) nodes. Both the large-scale and small-scale approaches have been fruitful.

# Why is systems biology important?

Stas Shvartsman at Princeton tells a story that provides a good answer to this question. He likens biology's current status to that of planetary astronomy in the pre-Keplerian era. For millennia people had watched planets wander through the nighttime sky. They named them, gave them symbols, and charted their complicated comings and goings. This era of descriptive planetary astronomy culminated in Tycho Brahe's careful quantitative studies of planetary motion at the end of the 16th century. At this point planetary motion had been described but not understood.

Then came Johannes Kepler, who came up with simple theories (elliptical heliocentric orbits; equal areas in equal times) that empirically accounted for Brahe's data. Fifty years later, Newton's law of universal gravitation provided a further abstraction and simplification, with Kepler's laws following as simple consequences. At that point one could argue that the motions of the planets were understood.

Systems biology begins with complex biological phenomena and aims to provide a simpler and more abstract framework that explains why these events occur the way they do. Systems biology can be carried out in a 'Keplerian' fashion - look for correlations and empirical relationships that account for data - but the ultimate hope is to arrive at a 'Newtonian' understanding of the simple principles that give rise to the complicated behaviors of complex biological systems.

Note that Kepler postulated other less-enduring mathematical models of planetary dynamics. His *Mysterium Cosmographicum* showed that if you nest spheres and Platonic polyhedra in the right order (sphere-octahedron-sphere-icosahedron-sphere-dodecahedron-sphere-tetrahedron-sphere-cube-sphere), the sizes of the spheres correspond to the relative sizes of the first six planets' orbits. This simple, abstract way of accounting for empirical data was probably just a happy coincidence. Happy coincidences are a potential danger in systems biology as well.

# Is systems biology the antithesis of reductionism?

In a limited sense, yes. Some 'emergent properties', as discussed below, disappear when you reduce a system to its individual components.

However, systems biology stands to gain a lot from reductionism, and in this sense systems biology is anything but the antithesis of reductionism. Just as you can build up to an understanding of complex digital circuits by studying individual electronic components, then modular logic gates, and then higher-order combinations of gates, one may well be able to achieve an understanding of complex biological systems by studying proteins and genes, then motifs (see below), and then higher-order combinations of motifs.

# What are emergent properties?

Systems of two proteins or genes can do things that individual proteins/genes cannot. Systems of ten proteins or genes can do things that systems of two proteins/genes cannot.
Those things that become possible once a system reaches some level of complexity are termed emergent properties.

# Can you give a concrete example of an emergent property?

Three proteins connected in a simple negative-feedback loop (A → B → C -| A) can function as an oscillator; two proteins (A → B -| A) cannot. Two proteins connected in a simple negative-feedback loop can convert constant inputs into pulsatile outputs; a one-protein loop (A -| A) cannot. So pulse generation emerges at the level of a two-protein system and oscillations emerge at the level of a three-protein system.

# In systems biology there is a lot of talk about nodes and edges. What is a node? An edge?

Biological networks are often depicted graphically: for example, you could draw a circle for protein A, a circle for protein B, and a line between them if A regulates B or vice versa. The circles are the nodes in the graph of the A/B system. Nodes can represent genes, proteins, protein complexes, individual states of a protein, and so on.

A line connecting two nodes is an edge. The edge can be directed: for example, if A regulates B, we draw an arrow - a directed edge - from A to B, whereas if B regulates A we draw an arrow from B to A. Or the edge can be undirected; for example, it can represent a physical interaction between A and B.

# Staying with graphs, what's a motif?

As defined by Uri Alon, a motif is a statistically over-represented subgraph of a graphical representation of a network. Motifs include things like negative feedback loops, positive feedback loops, and feed-forward systems.

# Isn't positive feedback the same thing as feed-forward regulation?

No. They are completely different. In a positive-feedback system, A activates B and B turns around to activate A. A transitory stimulus that activates A could lock the system into a self-perpetuating state where both A and B are active. In this way, the positive-feedback loop can act like a toggle switch or a flip-flop. A positive-feedback loop behaves much like a double-negative feedback loop, where A and B mutually inhibit each other. That system can act like a toggle switch too, except that it toggles between A-on/B-off and A-off/B-on states, rather than between A-off/B-off and A-on/B-on states. Good examples of this type of system include the famous lambda phage lysis/lysogeny toggle switch, and the CDK1/Cdc25/Wee1 mitotic trigger.

In a feed-forward system, A impinges upon C directly, but A also regulates B, which regulates C. A feed-forward system can be either 'coherent' or 'incoherent', depending upon whether the route through B does the same thing to C as the direct route does. There is no feedback - A affects C, but C does not affect A - and the system cannot function as a toggle switch. A good example of feed-forward regulation is the activation of the protein kinase Akt by the lipid second messenger PIP3 (PIP3 binds Akt, which promotes Akt activation, and PIP3 also stimulates the kinase PDK1, which phosphorylates Akt and further contributes to Akt activation). Since both routes contribute to Akt activation, this is an example of coherent feed-forward regulation. Uri Alon's classic analysis of motifs in *Escherichia coli* gene regulation identified numerous coherent feed-forward circuits in that system.
# In high school I hated physics and math, but I loved biology. Should I go into systems biology?

No.

# What kind of physics and math is most useful for understanding biological systems?

Some level of comfort in doing simple algebra and calculus is a must. Beyond that, probably the most useful math is nonlinear dynamics. The Strogatz textbook mentioned below is a great introduction to nonlinear dynamics.

# Do I need to understand differential equations?

Systems biologists often model biological processes with ordinary differential equations (ODEs), but the fact is that almost none of these models can be solved exactly. (The one that can be solved exactly describes exponential approach to a steady state, and it is something every biologist should work out at some point in his or her training.) Most often, systems biologists solve their ODEs numerically, often with canned software packages like Matlab or Mathematica.

Ideally, a model should not only reproduce known biology and predict unknown biology; it should also be 'robust' in important respects.

# What is robustness, and why is it important to systems biologists?

Robustness is the imperviousness of some performance characteristic of a system in the face of some sort of insult – such as stochastic fluctuations, environmental insults, or deletion of nodes from the system. For example, the period of the circadian oscillator is robust with respect to changes in the temperature of the environment. Robustness can be quantitatively defined as the inverse of sensitivity, which itself can be defined in a few ways – often sensitivity is taken to be:

$$\frac{d\ln Response}{d\ln Perturbation}$$

so that robustness becomes

$$\frac{d\ln Perturbation}{d\ln Response}$$

For example, if a 10% change in a rate constant shifts an oscillator's period by only 1%, the sensitivity is roughly 0.1 and the robustness roughly 10.

Robustness is important to systems biologists because of the attractiveness of the idea that a biological system must function reliably in the face of myriad uncertainties. Maybe robustness, more than efficiency or speed, is what evolution must optimize to create successful biological systems.

Modeling can provide some insight into the robustness of particular networks and circuits. Just as a biological system must be robust with respect to insults the system is likely to encounter, a successful model should also be robust with respect to parameter choice. If a model 'works', but only for a precisely chosen set of parameters, the system it depicts may be too finicky to be biologically useful, or to have been 'found' in evolution.

# What other types of models are useful in systems biology?

ODE models assume that each dynamical species in the model – each protein, protein complex, RNA, or whatever – is present in large numbers. This is sometimes true in biological systems. For example, regulatory proteins are often present at concentrations of 10 to 1,000 nM. For a four-picoliter eukaryotic cell, this corresponds to 24,000 to 2,400,000 molecules per cell. This is probably large enough to warrant ODE modeling. However, genes and some mRNAs are present at levels of only one or two molecules per cell. At such low numbers, each individual transcription or mRNA degradation event becomes a big deal, and the appropriate type of modeling is stochastic modeling, as in the sketch below.
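As a concrete illustration (ours, not from the original article), here is a minimal Gillespie-style stochastic simulation of a single gene whose mRNA is transcribed at a constant rate and degraded in a first-order fashion. The rate constants are illustrative assumptions, chosen so that the mean copy number is low (k_tx/k_deg = 4 molecules) – exactly the regime where stochastic modeling matters.

```python
# Minimal Gillespie (stochastic simulation algorithm) sketch for a
# low-copy-number species: mRNA born at rate k_tx, degraded at rate k_deg * m.
import random

def simulate_mrna(k_tx=0.2, k_deg=0.05, t_end=500.0, m0=0, seed=1):
    random.seed(seed)
    t, m = 0.0, m0
    trajectory = [(t, m)]
    while True:
        birth = k_tx              # propensity of a transcription event
        death = k_deg * m         # propensity of a degradation event
        total = birth + death
        t += random.expovariate(total)   # exponential waiting time
        if t > t_end:
            break
        # Pick which reaction fired, weighted by its propensity.
        m = m + 1 if random.random() < birth / total else m - 1
        trajectory.append((t, m))
    return trajectory

traj = simulate_mrna()
print("final copy number:", traj[-1][1])   # fluctuates around k_tx/k_deg = 4
```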
Sometimes systems are too complicated, or have too many unknown parameters, to warrant ODE modeling. In these cases, Boolean models and probabilistic Bayesian models can be particularly useful.

Sometimes it is important to see how dynamical behaviors propagate through space, in which case either partial differential equation (PDE) models or stochastic reaction/diffusion models may be just the ticket.

# Where can I go for more information?

## Review articles

Hartwell LH, Hopfield JJ, Leibler S, Murray AW: **From molecular to modular biology**. *Nature* 1999, **402(Suppl):**C47–C52.

Kirschner M: **The meaning of systems biology**. *Cell* 2005, **121:**503–504.

Kitano H: **Systems biology: a brief overview**. *Science* 2002, **295:**1662–1664.

## Textbooks

Alon U: *An Introduction to Systems Biology: Design Principles of Biological Circuits*. Boca Raton, FL: Chapman & Hall/CRC; 2006.

Heinrich R, Schuster S: *The Regulation of Cellular Systems*. Berlin: Springer; 1996.

Klipp E, Herwig R, Kowald A, Wierling C, Lehrach H: *Systems Biology in Practice: Concepts, Implementation and Application*. Weinheim, Germany: Wiley-VCH; 2005.

Palsson B: *Systems Biology: Properties of Reconstructed Networks*. Cambridge: Cambridge University Press; 2006.

Strogatz SH: *Nonlinear Dynamics and Chaos: With Applications to Physics, Biology, Chemistry and Engineering*. Boulder, CO: Westview Press; 2001.

abstract: # Background

Chondrosarcoma (Chs) is the third most frequent primary malignant tumour of bone and can be primary or secondary, the latter resulting mainly from the malignant transformation of a benign pre-existing tumour.

# Methods

All the cases diagnosed as Chs (primary tumours, recurrences and/or metastases and xenotransplanted Chs) from the files of our Department were collected. Only cases with paraffin blocks available were selected (total: 32 cases). Six Tissue Microarrays (TMAs) were performed and all the cases and biopsies were distributed into the following groups: a) only paraffin block available from primary and/or metastatic tumours (3 TMAs), b) paraffin block available from primary and/or metastatic tumours as well as from the corresponding nude mice xenotransplant (2 TMAs), c) only paraffin block available from xenotransplanted Chs (1 TMA). A reclassification of all the cases was performed; in addition, conventional hematoxylin-eosin as well as immunohistochemistry staining (S100, SOX-9, Ki-67, BCL-2, p53, p16, CK, CD99, Survivin and Caveolin) was analyzed in all the TMAs.

# Results

The distribution of the cases according to the histopathological pattern and the location of tumours was as follows: fourteen Grade I Chs (all primaries), two primary Grade II Chs, ten Grade III Chs (all primaries), five dedifferentiated Chs (four primaries and one primary with metastasis), and two Chs from cell cultures (Ch grade III).
One recurrent extraskeletal myxoid Chs was included as a control in the TMA. Although there was heterogeneity in the immunohistochemistry results of the different material analyzed, S100, SOX-9, Caveolin and Survivin were the most expressed. The number of passages in xenotransplants fluctuated between 1 and 13. Curiously, in Grade I Chs, the implanted tumours hardly grew, and the number of passages did not exceed one.

# Conclusion

The study of Chs by means of TMA techniques is very important because it will improve the assessment of the different antibodies applied in immunohistochemical assays. Xenotransplanted tumours in TMAs improve knowledge concerning the variability in the morphological pattern shown by these tumours during their evolution in nudes.

author: Isidro Machado; Francisco Giner; Empar Mayordomo; Carmen Carda; Samuel Navarro; Antonio Llombart-Bosch
date: 2008
institute: 1Department of Pathology, University of Valencia, Valencia, Spain
references:
title: Tissue microarray analysis in chondrosarcomas: light microscopy, immunohistochemistry and xenograft study

# Introduction

The development of Tissue Microarray (TMA) technology offers the advantage of screening large tissue cohorts for biomarker expression and of examining serial sections obtained from the same tumour specimen and xenograft tumours \[1-8\]. TMAs are especially suitable for the reproduction of morphological and immunohistochemical (IHC) results in different laboratories \[2,4,5\]. Chondrosarcoma (Chs) is the third most frequent primary malignant tumour of bone, exceeded only by myeloma and osteosarcoma. It is characterised by the production of cartilage and can be primary or secondary, the latter resulting mainly from the malignant transformation of benign pre-existing tumours \[9,10\]. Several clinical and histopathological subtypes are recognized, and the final diagnosis involves a large group of physicians including pathologists, radiologists and surgeons \[10-12\]. Conventional Chs can be classified into two groups according to their location in bone: central Chs, located inside the medullary cavity, and peripheral Chs, located on the surface of the bone \[9\]. The histopathology is quite similar in both locations and is classified in three grades: Grade I, II and III. Grade I Chs are characterized by cells with small, densely staining nuclei and a chondroid or myxoid background; in Grade II Chs the nuclei are of moderate size with increased cellularity and a low mitotic rate; Grade III Chs reveal large nuclei in areas of increased cellularity, with a moderate mitotic rate and scant chondroid or myxoid matrix \[9\]. Mesenchymal Ch, dedifferentiated Ch and clear cell Ch are described as infrequent variants of Chs, and their clinical presentation, radiological findings and histopathological features differ from the conventional forms \[9-12\]. The differential diagnosis between low-grade Chs and benign chondral conditions is quite difficult, but usually the clinical data, radiography and morphological picture define a definitive diagnosis \[9,10\].

Cytogenetic studies show heterogeneity related to karyotypic complexity; nevertheless, alterations in the p16 tumour suppressor gene and p53 are related to Ch progression \[9,10,13,14\].

Xenograft models in bone tumours are of great importance because nude tumours constitute an easy source of material for tumour characterization using histopathological, IHC, electron microscopy, cytogenetic and molecular biology criteria \[6,7,15\].
Usually the histology of the tumours is preserved, and they can also be followed over subsequent generations through various tumour passages \[7,15\]. Xenograft models of Chs are established infrequently, and models followed for successive generations are even more sporadic. Few publications exist on TMA studies in Chs, and the combination with a xenograft study is even more unusual. Therefore, the aims of this study are, firstly, the analysis of heterogeneity in chondrosarcomas by means of TMA technology using histopathological and IHC criteria and, secondly, to analyze their successive xenografts after xenotransplantation into nude mice followed for several passages, in order to describe the histopathological and IHC pattern variations occurring in the original tumour and its successive xenotransplants.

# Materials and methods

## Sample sources

All the cases diagnosed as Chs (primary tumours, recurrences and/or metastases and xenotransplanted Chs) from the files of our Department were collected. Only cases with paraffin blocks available were selected. All the clinical data of the patients were reviewed (total: 32 cases). A reclassification of all the cases according to the new criteria for Chs diagnosis (WHO) was performed.

## Assembly of TMAs

Six TMAs were performed and all the cases and biopsies were distributed into the following groups: a) only paraffin block available from primary and/or metastatic tumours (3 TMAs), b) paraffin block available from primary and/or metastatic tumours as well as from the corresponding nude mice xenotransplant (2 TMAs), c) only paraffin block available from xenotransplanted Chs (1 TMA). A hematoxylin and eosin (H/E) stained section from each primary tumour and xenograft was prepared, and areas of representative non-necrotic neoplasm were circled on the coverslip. The TMAs were assembled using a manual tissue arrayer (Beecher Instruments, Sun Prairie, WI). Normally, two cores (1 mm in thickness) of each biopsy were taken; nevertheless, more than 2 cores were made if the biopsy revealed a different pattern. All TMAs included two cores of normal kidney or liver as control tissues. Following TMA construction, an H/E stained section of the TMA recipient block was prepared and reviewed to confirm the presence of intact neoplasm. Several sections of 5 μm were prepared in order to perform H/E staining as well as the different IHC stainings.

## Immunohistochemical analysis

IHC analysis was performed using anti-CD99 antibody (clone 12E7, DakoCytomation) at a 1:50 dilution, anti-S100 polyclonal antibody (DakoCytomation) at a 1:200 dilution, anti-SOX-9 polyclonal antibody (Santa Cruz Biotechnology, Santa Cruz, CA) at a 1:100 dilution, anti-Survivin polyclonal antibody (Santa Cruz Biotechnology, Santa Cruz, CA) at a 1:50 dilution, anti-p16 antibody (clone F12, Santa Cruz Biotechnology, Santa Cruz, CA) at a 1:100 dilution, anti-p53 antibody (clone DO7, Novocastra) at a 1:50 dilution, pan-CK (AE1/AE3) antibody (DakoCytomation) at a 1:50 dilution, anti-Ki-67 antibody (MIB-1, DakoCytomation) at a 1:50 dilution, anti-Caveolin (CAV) polyclonal antibody (Santa Cruz Biotechnology, Santa Cruz, CA) at a 1:200 dilution, and anti-Bcl-2 antibody (clone 124, Novocastra) at a 1:50 dilution. Antigen retrieval was performed by pressure-cooker boiling for 3 minutes in 10 mmol/L citrate buffer (pH 6.0). The LSAB method (DakoCytomation) was used, followed by revelation with 3,3'-diaminobenzidine.
Cytoplasmic and/or membrane staining was considered positive for the CD99, S100, CK, CAV and Bcl-2 antibodies; nuclear staining was considered positive for the SOX-9, Survivin, p53, p16 and Ki-67 antibodies. Sections were examined and immunoreactivity was defined as follows: negative, fewer than 5% of tumour cells stained; poorly positive (+), 5% to 10% of tumour cells stained; moderately positive (++), 10% to 50% of tumour cells stained; and strongly positive (+++), more than 50% of the tumour cells stained. All sections were evaluated independently by 3 pathologists (IM, SN and ALLB). The agreement of staining intensity scoring by all was recorded, and in cases of disagreement, intensity and score were determined by consensus.

## Xenotransplant

Male nude mice were purchased from IFFA-CREDO (Lyon, France), kept under specific pathogen-free conditions throughout the experiment, and provided with vinyl isolators plus sterilized food, water, cages and bedding. The specimens for xenotransplant were obtained at surgery (OT) and placed in a culture medium (RPMI 1640) plus antibiotic at 37 °C until transplantation, usually 6 hours after surgery. Fragments of non-necrotic tumour, about 3 to 5 mm in size, were transplanted into the subcutaneous tissue in the backs of two nude mice. The new tumour transfers were made by following the same procedure as in the initial xenotransplant and always under highly sterile conditions. Material from the different passages was obtained in order to perform all the TMAs. Additional material was obtained for electron microscopy, culture, and frozen sections.

# Results

Grade I and Grade III Chs were the most frequent histopathological patterns; the distribution of the cases was as follows: fourteen Grade I Chs (all primaries), two primary Grade II Chs, ten Grade III Chs (all primaries) (Figure 1), five dedifferentiated Chs (four primaries and one primary with metastasis) and two Chs from cell cultures (Ch grade III). One recurrent extraskeletal myxoid Chs was included as a control in the TMA. Most of the tumours were located in bone and/or soft tissue. The results of the IHC study are given in Table 1. Although there was heterogeneity in the IHC results of the different material analyzed, S100, SOX-9, Caveolin and Survivin were the most expressed (Figure 1).

Immunohistochemical profile. N/A = not assessable

| **Antibodies** | **N/A** | **Negative (-)** | **Poorly positive (+)** | **Moderately positive (++)** | **Strongly positive (+++)** |
|----|----|----|----|----|----|
| **S100** | 3 | 0 | 0 | 1 | 28 |
| **SOX-9** | 1 | 2 | 7 | 5 | 17 |
| **Survivin** | 2 | 2 | 4 | 3 | 21 |
| **Caveolin** | 1 | 1 | 6 | 12 | 12 |
| **CD99** | 2 | 8 | 12 | 5 | 5 |
| **p16** | 5 | 10 | 8 | 3 | 6 |
| **p53** | 2 | 16 | 6 | 6 | 2 |
| **Ki-67** | 4 | 7 | 5 | 9 | 7 |
| **Bcl-2** | 3 | 16 | 7 | 3 | 3 |
| **CK** | 1 | 28 | 2 | 1 | 0 |
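The four-tier scoring rule defined in the Methods maps directly onto a simple threshold function. The sketch below is ours, for illustration only; the function name and the handling of the boundary values at exactly 5%, 10% and 50% are assumptions, since the text does not state on which side of each cut-off they fall.

```python
def ihc_score(percent_stained: float) -> str:
    """Map % of stained tumour cells to the four-tier score used in Table 1.

    Thresholds follow the Methods: <5% negative, 5-10% poorly positive,
    10-50% moderately positive, >50% strongly positive. Boundary handling
    (<= versus <) is our assumption.
    """
    if percent_stained < 5:
        return "-"      # negative
    if percent_stained <= 10:
        return "+"      # poorly positive
    if percent_stained <= 50:
        return "++"     # moderately positive
    return "+++"        # strongly positive

assert ihc_score(3) == "-" and ihc_score(60) == "+++"
```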
Concerning xenografts, the number of passages in xenotransplants fluctuated between 1 and 13. Curiously, in low-grade (Grade I) Chs, the implanted tumours hardly grew, and the number of passages did not exceed one. Nevertheless, in Grade II or Grade III Chs the number of passages was greater. An example of the evolution of a dedifferentiated Chs in nudes is described in Table 2 and Figures 2, 3, 4, 5 and 6.

Model of xenograft in Ch: evolution in nude mice

| **Material** | **De-differentiation** | **S100** | **SOX-9** | **Ki-67** | **p53** |
|----|----|----|----|----|----|
| **Original Tumour** | + | ++ | +++ | + | + |
| **Nu385-P0** | ++ | ++ | ++ | ++ | ++ |
| **Nu385-P1** | +++ | + | + | ++ | +++ |
| **Nu385-P2** | +++ | + | + | +++ | +++ |
| **Nu385-P3** | +++ Osteoid | + | + | +++ | +++ |

# Discussion

TMA-based morphological and IHC evaluation of Chs primary tumours and xenografts proved to be a high-throughput means of demonstrating the phenotypic variability of original and xenografted Chs. Grading Chs has proven prognostic value; nevertheless, the differential diagnosis between low-grade Chs and benign chondroid lesions is quite difficult and requires an integration of clinical, radiological and histopathological information \[9-12\]. Grade I Ch was the most frequent in this cohort, as reported in the literature, and most of the tumours appeared in bone and soft tissue \[16\]. At the histopathological level, the distinction between Grade I, II, III and dedifferentiated Chs was relatively easy using the updated criteria described \[9,10\]. S100 immunoreactivity in chondroid foci and the transcription factor SOX-9 (a regulator of chondrogenesis) have been reported as the most sensitive and confirmatory markers in Chs diagnosis; both were the most expressed in this study. SOX-9 staining differentiates Chs from various small round cell tumours such as small cell osteosarcomas, non-Hodgkin lymphomas and Ewing/PNET family tumours \[17\]. Recently, Bcl-2 and parathyroid hormone-like hormone (PTHLH) have been described as two markers that support Ch diagnosis and differentiate Chs from benign conditions such as osteochondromas \[18\], although in the present study Bcl-2 expression was infrequent in the original tumours and in their corresponding xenografts. Curiously, CAV and Survivin were overexpressed in Chs; therefore apoptosis and cell interactions could be related to tumour chondrogenesis. Further studies with larger series could provide new insight into the pathogenesis of Chs.

Xenograft materials are widely used by researchers, although in the case of Chs few studies have been published using TMA technology \[19\]. As many Chs reveal different patterns in the same biopsy, it is not surprising that the xenograft tumour showed different histopathological pictures in some cases compared with the original tumour. Usually, Grade I Chs showed no variation in histology and IHC between the primary tumours and their xenografts. Intriguingly, in Grade I Chs the implanted tumours hardly grew, and the number of passages did not exceed one, whereas in Grade III and dedifferentiated Chs, as well as in Chs from culture, the number of successive passages was higher, and light microscopy as well as IHC study revealed differences between the primary tumours and their xenografts. Tumour cells in Chs culture could acquire newer genetic and/or epigenetic alterations that increase the aggressive phenotype; in addition, these cells in culture grow quite fast and replace stromal and necrotic cells from the original tumour.
Original tumours (Grade III or dedifferentiated Ch) developed morphological transformation to a poorly differentiated subtype in some of the successive xenotransplanted tumours; in addition, differences in the IHC expression of chondrogenic differentiation markers (S100, SOX-9), cell-cycle regulators (p53, p16) and the proliferation marker Ki-67 were detected. The dedifferentiated Ch xenograft model displayed a morphological transition over subsequent passages in nude mice: initially the tumour showed a biphasic pattern characterized by areas with typical chondroblastic differentiation surrounded by areas with marked dedifferentiation. Over the successive passages, the tumour acquired more dedifferentiation, and the last tumour passages revealed osteoid material similar to osteosarcoma. The approximate period between passages was three months. In addition, S100 and SOX-9 showed relatively decreased expression, and Ki-67, p53 and p16 revealed increased expression, in the successive xenografts. Some of these changes may represent newer genetic and/or epigenetic alterations developed in more aggressive subclones, or could be induced by the establishment of the xenograft, as occurs in tumour progression \[6,7\]. TMAs of xenografted tumours offer an invaluable tool for performing further research in order to study newer markers of cell differentiation, activation, genetics and cell signalling in Chs. Despite the increasing use of TMAs, limitations remain, particularly in the case of tumours, due to intratumoral heterogeneity of protein expression \[1,5\]. Therefore, in order to ensure TMA representativeness, the most representative areas of sections of the original donor block should be selected, as well as increasing the number of cores collected and the size of the single cores \[5\]. Nowadays, TMAs are becoming an indispensable tool in the study of cancer progression and provide newer insights concerning the biology of several tumours.

# Conclusion

The analysis in TMAs of Chs xenografts followed for successive generations provides new information concerning the biology and morphology of these tumours. The histopathology remained similar to the original tumour in most of the cases but occasionally, as in the model displayed, the tumours acquired more dedifferentiation after several transfers. IHC studies using TMA techniques are useful for the assessment of the antibodies related to Chs.

### Acknowledgements

This work was supported by grants PI06/1576 and RD06-0020/0102 from the Instituto Carlos III de Madrid, Spain, and Contract no. 018814 (EuroBoNet) from the 6th FP of the EC. We would like to acknowledge Elisa Alonso for her collaboration and technical assistance with the pathology.

This article has been published as part of *Diagnostic Pathology* Volume 3 Supplement 1, 2008: New trends in digital pathology: Proceedings of the 9th European Congress on Telepathology and 3rd International Congress on Virtual Microscopy.
The full contents of the supplement are available online at 

author: Jelle T. Prins (University of Groningen, Groningen, Netherlands; to whom correspondence should be addressed)
date: 2010-04-22
institute: 1University of Groningen, Groningen, Netherlands
title: PhD Thesis: burnout among Dutch medical residents

# Introduction

Medical residents (postgraduate trainees in a medical specialty) fulfil an important role in the Dutch health care system. They take their share of the responsibility for efficient patient care in hospitals, mental health care centres, rehabilitation centres and other medical institutions. Medical residents see the period of postgraduate training as a phase in which finding a balance between training, work and private life plays an important part. Not much was known about the extent to which the stress felt by residents causes them to develop symptoms of burnout. It was also unknown which factors determine the actual development of burnout. The research which was carried out therefore revolved around the incidence of burnout among medical residents in the Netherlands. It also looked into the potentially risk-heightening or risk-lowering effects of a number of demographic and work-related factors. The aims of the studies were the following:

- to determine the prevalence of burnout among medical residents;

- to study the effect of individual and work-related factors on burnout;

- to examine the relationship between burnout and the quality of care.

In order to answer the research questions, two cross-sectional studies were carried out. The first study involved medical residents at University Medical Centre Groningen; the second all 5245 medical residents in the Netherlands.

# First Study

For the first study on the prevalence of burnout, medical residents at University Medical Centre Groningen in the Netherlands were approached. The Utrecht Burnout Scale (UBOS/MBI-HSS) was used. This self-assessment form was sent to 292 medical residents. The response rate was 54%. The results show that 13% of the respondents claimed to experience moderate to severe burnout. The highest percentage of burnout was found among residents in psychiatry. Looking into the effects on burnout of the emotional, informative and appreciative support residents experienced from supervisors, fellow residents, nurses and patients showed that medical residents appeared to be less satisfied with the perceived emotional support from supervisors than with the support they received from colleagues and nurses. A significant relationship was established between dissatisfaction with emotional and appreciative support from supervisors and the emotional exhaustion felt by residents.
A link was also established between dissatisfaction with emotional support from supervisors and increased feelings of depersonalization. Medical residents also reported on the degree of reciprocity (the balance between pain and gain in relationships) that they experienced in their working relationships with, for example, their supervisors. Only 13% of the residents experienced over-benefit in their relationships with their supervisors, 41% claimed under-benefit and 46% reported a good balance between give and take in the relationship (reciprocity). Medical residents who experienced under-benefit in the relationship with their supervisors reported significantly more emotional exhaustion and depersonalization than residents who experienced reciprocity in the relationship. Contrary to what we assumed, there did not appear to be any significant link between the number of years in training and the perceived reciprocity in relationships.

# National study

For the national study, all medical residents registered with the Medische Registratie Commissie (Medical Registration Committee) in 2005 were approached (N=5245); 41% responded. Of the respondents, 20.6% were classified as having burnout, and 14.6% and 6% of these had moderate and severe symptoms respectively. Eleven per cent of the respondents appeared to be highly engaged; 23.2% of the residents scored above the cut-off point on vigour, 36.4% on dedication, and 27.8% on absorption. Medical residents with a partner and/or children scored significantly lower on depersonalization than residents who did not have partners and/or children. The percentage of residents with symptoms of burnout was lowest in the group of residents in general surgery, followed by residents in obstetrics & gynaecology and in the supportive specialties (such as radiology and pathology). Residents in general surgery were much more engaged and vigorous than residents in other surgical specialties, internal medicine, other medical specialties, supportive specialties and psychiatry. It also appeared that residents in general surgery were more dedicated and more absorbed in their work than residents in the supportive specialties.

In the national study, 94% of the residents admitted to having made one or more errors which had no negative consequences for their patients, 71% claimed to have performed procedures that they were not actually competent to carry out, and 56% admitted to having made one or more errors that had negative consequences for the patient **during their training so far**. Seventy-five per cent of the respondents felt that the quality of the treatment they had given was inadequate. Male residents reported more errors than female residents. The group of residents in general surgery reported the highest numbers of errors in procedures compared with residents in other specialties. Medical residents in internal medicine also reported more errors than their colleagues in a number of other specialties. Medical residents in psychiatry reported the highest number of errors relating to time problems. Medical residents with symptoms of burnout appeared to report significantly more errors than residents who did not satisfy the criteria for burnout. Highly engaged residents reported fewer errors than their less engaged colleagues.

# Conclusions

The two empirical studies included in the dissertation revealed that burnout among medical residents is no exception.
The last and most comprehensive study made clear that nationwide 20.6% of medical residents experienced burnout. The Central Bureau for Statistics (CBS) in the Netherlands has reported that 8–11% of the Dutch labour force is burned out. When the CBS criteria for assessing burnout were applied to the residents in the national study, the burnout percentage jumped to 41% – a prevalence that is four times higher than that in the national labour force. Although interesting from a research point of view, these percentages are troublesome from a health care perspective, especially since burnout was found to influence not only personal distress but also the quality of delivered care.

The conclusion that many residents experience an unbalanced relationship with their supervisor, which affects their well-being in a negative way, is worrisome. A relationship with a supervisor is usually seen as a resource, which should have positive connotations with well-being. Medical residents experience the relationship with their supervisors not only as one in which they under-benefit, but they are also dissatisfied with the social support they receive from their supervisors. These factors contribute directly to the development of burnout in medical residents.

Medical residents, like other young 'high potentials', are believed to excel in different areas at the same time: work, training and private life. It could be that in the present culture medical residents are overburdened. Balancing work, training and private life can lead to unintended effects for medical residents. One might question whether the responsibility for keeping in balance during residency should rest on medical residents only. In the new competency-driven training programmes, medical residents have to be able to organize a balance between patient care and self-development. However, it can be questioned whether today's residents are being taught the art of balancing during their training.

# The author

Prins J.T.: Burnout among Dutch medical residents, University of Groningen, June 17th, 2009 (see Figures 1 and 2); promotor: prof. dr. H.B.M. van de Wiel; copromotor: dr. J.E.H.M. Hoekstra-Weebers.
143 pages.

author: M Khoury; V Escriou; A Galy; R Yao; C Largeau; D Scherman; C Jorgensen; F Apparailly
date: 2007
institute: 1Inserm, U 844, INM, Hôpital Saint Eloi, Montpellier, France; 2Université Montpellier 1, UFR de Médecine, Montpellier, France; 3Inserm, U 640, Paris, France; 4CNRS, UMR8151, Paris, France; 5Université Paris Descartes, Faculté de Pharmacie, Paris, France; 6Ecole Nationale Supérieure de Chimie de Paris, Paris, France; 7Inserm, U 790, Genethon, Evry, France; 8Université Paris-Sud 11, Orsay, France; 9CHU Lapeyronie, service Immuno-Rhumatologie, Montpellier, France
title: Combined anti-inflammatory tritherapy using a novel small interfering RNA lipoplex successfully prevents and cures mice of arthritis

# Background

TNFα is a key cytokine in rheumatoid arthritis (RA) physiopathology. We recently demonstrated that a new cationic liposome formulation allowed intravenous delivery of a small interfering RNA (siRNA) targeting TNFα, efficiently restoring the immunological balance in an experimental model of RA. Since 30% of patients do not respond to anti-TNF biotherapies, however, there is a need to develop alternative therapeutic approaches.

# Objective

The strong association of other proinflammatory cytokines with the pathogenesis of RA prompted us to investigate which cytokines other than TNFα could be targeted for therapeutic benefit using RNA interference.

# Methods

Two siRNA sequences were designed for the IL-1β, IL-6 and IL-18 proinflammatory cytokines, and their efficacy and specificity were validated *in vitro* on J774.1 mouse macrophage cells, measuring both mRNA and protein levels following a lipopolysaccharide challenge. For *in vivo* administration, siRNAs were formulated as lipoplexes with the RPR209120/DOPE liposome and a carrier DNA, and were injected intravenously in DBA/1 mice with collagen-induced arthritis. The clinical course of the disease was assessed by paw thickness over time, and radiological and histological scores were obtained at euthanasia. The cytokine profiles were measured by ELISA in sera and knee-conditioned media. The immunological balance was assessed using anti-type II collagen assays. The distribution of the siRNAs was evaluated by fluorometry in GFP transgenic mice over time after anti-GFP siRNA lipoplex injections.

# Results

The designed siRNA sequences silenced 70–75% of the lipopolysaccharide-induced IL-1β, IL-6 and IL-18 mRNA expression in macrophages compared with a control siRNA. Each siRNA affected the targeted cytokine specifically, without modifying other proinflammatory cytokine mRNAs. In the collagen-induced arthritis model, weekly injections of siRNA lipoplexes significantly reduced the incidence and severity of arthritis, abrogating joint swelling and destruction of cartilage and bone, in both preventive and curative settings.
The most striking therapeutic effect was observed when combining the three siRNAs targeting IL-1β/IL-6/IL-18 at once. Such tritherapy was associated with downregulation of both the inflammatory and autoimmune components of the disease, and overall parameters were improved compared with the TNFα siRNA lipoplex-based treatment. The siRNA formulation was widely distributed, delivering the siRNA to several organs, with strong efficacy in the liver and spleen.

# Conclusion

Tritherapy targeting IL-1β/IL-6/IL-18 seems highly effective in reducing all pathological features of RA, including inflammation, joint destruction and the Th1 response. These data show that cytokines other than TNFα can be targeted to improve symptoms of RA, and reveal novel potential drug development targets. The systemic administration of anticytokine siRNA cocktails as lipoplexes could represent a novel and promising anti-inflammatory alternative therapy in RA.

author: Philip E. Bourne; Virginia Barbour[^1]
date: 2011-06
institute: 1Skaggs School of Pharmacy and Pharmaceutical Science, University of California San Diego, La Jolla, California, United States of America; 2Public Library of Science, Cambridge, United Kingdom
title: Ten Simple Rules for Building and Maintaining a Scientific Reputation

> *While we cannot articulate exactly what defines the less quantitative side of a scientific reputation, we might be able to seed a discussion. We invite you to crowd source a better description and path to achieving such a reputation by using the comments feature associated with this article. Consider yourself challenged to contribute.*

At a recent Public Library of Science (PLoS) journal editors' meeting, we were having a discussion about the work of the Committee on Publication Ethics (COPE), a forum for editors to discuss research and publication misconduct. Part of the discussion centered on the impact such cases have on the scientific reputation of those involved. We began musing: What on earth is a scientific reputation anyway? Not coming up with a satisfactory answer, we turned to a source of endless brainpower – students and other editors. Having posed the question to a group of graduate students, PLoS, and other editors, we got almost as many different answers as people asked, albeit with some common themes.
They all mentioned the explicit elements of a reputation that relate to measurables such as the number of publications, h-index, overall number of citations, etc., but they also alluded to a variety of different, qualitative factors that somehow add up to the overall sense of reputation that one scientist has for another.

What these students and editors identified en masse is one important side of a scientific reputation that is defined by data; but they also identified a much more nebulous side that, while ill-defined, is a vital element to nurture during one's career. A side defined to include such terms as fair play, integrity, honesty, and caring. It is building and maintaining this kind of less tangible reputation that forms the basis for these Ten Simple Rules. You might be wondering, how can you define rules for developing and maintaining something you cannot well describe in the first place? We do not have a good answer, but we would say a reputation plays on that human characteristic of not appreciating the value of something until you do not have it any more.

A scientific reputation is not immediate; it is acquired over a lifetime and is akin to compound interest – the more you have, the more you can acquire. It is also very easy to lose, and once gone, nearly impossible to recover. Why is this so? The scientific grapevine is extensive and constantly in use. Happenings go viral on social networks now, but science has had a professional and social network for centuries – a network of people who meet each other fairly regularly and, like everyone else, like to gossip. So whether it is a relatively new medium or a centuries-old medium, good and bad happenings travel quickly to a broad audience. Given this pervasiveness, here are some rules, some intuitive, for how to build and maintain a scientific reputation.

# Rule 1: Think Before You Act

Science is full of occasions whereupon you get upset – a perceived poor review of a paper, a criticism of your work during a seminar, etc. It is so easy to immediately respond in a dismissive or impolite way, particularly in e-mail or some other impersonal online medium. Don't. Think it through, sleep on it, and get back to the offending party (but not a broader audience, as it is so easy to do nowadays with, for example, an e-mail cc) the next day with a professional and thoughtful response, whatever the circumstances. In other words, always take the high road, whatever the temptation. It will pay off over time, particularly in an era when every word you commit to a digital form is instantly conveyed, permanently archived somewhere, and can be retrieved at any time.

# Rule 2: Do Not Ignore Criticism

Whether, in your eyes, criticism is deserved or not, do not ignore it, but respond with the knowledge of Rule 1. Failure to respond to criticism is perceived either as an acknowledgement of that criticism or as a lack of respect for the critic. Neither is good.

# Rule 3: Do Not Ignore People

It is all too easy to respond to people in a way that is proportional to their perceived value to you. Students in particular can be subject to poor treatment. One day a number of those students will likely have some influence over your career. Think about that when responding (or not responding). As hard as it is, try to personally respond to mail and telephone calls from students and others, whether it is a question about your work or a request for a job.
Even if for no other reason, you give that person a sense of worth just by responding. Ignoring people can take other, more serious forms, for example leaving deserving people off as paper authors. Whether perceived or real, this can appear as though you are trying to raise your contribution to the paper at the expense of others – definitely not good for your reputation.

# Rule 4: Diligently Check Everything You Publish and Take Publishing Seriously

Science does not progress in certainties – that is one of its joys, but also what makes it such a hard profession. Though you cannot guarantee that everything you publish will, in 50 years' time, be shown to be correct, you can ensure that you did the work to the accepted standards of the time and that, whether you were the most junior or senior author, you diligently checked it (and checked it again…) before you submitted it for publication. As a first author you may well be the only one who appreciates the accuracy of the work being undertaken, but all authors have a responsibility for the paper. So, however small or big your contribution, always be upfront with your co-authors as to the quality and accuracy of the data you have generated. When you come to be a senior author, it is so easy to take a draft manuscript at face value and madly publish it and move on. Both actions can come back to haunt you and lead to a perception of sloppy work, or worse, deception. As first author, this mainly lets down your other authors and has a subtle impact on your growing reputation. As the senior author of an error-prone study, it can have a more direct and long-lasting impact on your reputation. In short, take publication seriously. Never accept or give undeserved authorship, and in addition never leave anyone out who should be an author, however lowly. Authorship is not a gift – it must be earned, and being a guest or gift author trivializes the importance of authorship. Never agree to be an author on a ghostwritten paper. At best these papers have undeclared conflicts of interest; at worst, potential malpractice.

# Rule 5: Always Declare Conflicts of Interest

Everyone has conflicts of interest, whether they are financial, professional, or personal. It is impossible for anyone to judge for themselves how their own conflict will be perceived. Problems occur when conflicts are hidden or mismanaged. Thus, when embarking on a new scientific endeavor, ranging from such tasks as being a grant reviewer, or a member of a scientific advisory board, or a reviewer of a paper, carefully evaluate what others will perceive you will gain from the process. Imagine how your actions would be perceived if read on the front page of a daily newspaper. For example, we often agree to review a paper because we imagine we will learn from the experience. That is fine. Where it crosses the line is when it could be perceived by someone that you are competing with the person whose work you are reviewing and have more to gain than just general knowledge from reviewing the work. There is a gray area here, of course, so better to turn down a review if not sure. Failure to properly handle conflicts will eventually impact your reputation.

# Rule 6: Do Your Share for the Community

There is often unspoken criticism of scientists who appear to take more than they give back.
For example: those who rarely review papers but are always the first to ask when the review of their own paper will be complete; scientists who are avid users of public data but are very slow to put their own data into the public domain; scientists who attend meetings but refuse to get involved in organizing them; and so on. Eventually people notice, and your reputation is negatively impacted.

# Rule 7: Do Not Commit to Tasks You Cannot Complete

It tends to be the same scientists, over and over, who fail to deliver in a timely way. Over an extended period this becomes widely known and can be perceived negatively. It is human nature for high achievers to take on too much, but for the sake of your reputation, learn how to say no.

# Rule 8: Do Not Write Poor Reviews of Grants and Papers

Being known as a good reviewer or editor is more than just perception. Be polite, timely, constructive, and considerate and, ideally, sign your review. But also be honest – the most valued reviewers are those who are not afraid to provide honest feedback, even to the most established authors. Editors of journals rapidly develop a sense of who does a good job and who does not. Likewise for program officers and grant reviews. Such perceptions will impact your reputation in subtle ways. The short-term gain of a poor review may be fewer papers or grants sent to you to review, but in the longer term, being a trusted reviewer will reflect your perceived knowledge of the field. Although the impact of a review is small relative to writing a good paper in the field yourself, it all adds up towards your overall reputation.

# Rule 9: Do Not Write References for People Who Do Not Deserve It

It is difficult to turn down writing a reference for someone who asks for one, even if you are not inclined to be their advocate; yet this is what you should do. The alternative is to write a reference that (a) does not put them in a good light, or (b) over-exalts their virtues. The former will lead to resentment; the latter can impact your reputation, as once this person is hired and comes up short, the hirer may question aspects of your own abilities or motives.

# Rule 10: Never Plagiarize or Doctor Your Data

This goes without saying, yet it needs to be said because it happens, and it is happening more frequently. The electronic age has given us tools for handling data, images, and words that were unimaginable even 20 years ago, and students and postdocs are especially adept at using these tools. However, the fundamental principle of the integrity of data, images, and text remains the same as it was 100 years ago. If you fiddle with any of these elements beyond what is explicitly stated as acceptable (many journals have guidelines for images, for example), you will be guilty of data manipulation, image manipulation, or plagiarism, respectively. And what is more, you will likely be found out. The tools for finding all these unacceptable practices are now sophisticated and are being applied widely. Sometimes changes are made in good faith – consider changing the contrast on a digital image to highlight your point – but one always needs to think how such a change will be perceived and, in fact, whether it might, even worse, give the average reader a false sense of the quality of that data.
Unfortunately, even if a change was made in good faith, if any of these practices are found out, or even raised as a suspicion, the impact on one's career can be catastrophic.

In summary, there are a number of dos and don'ts for establishing a good reputation – whatever that might be. Do not hesitate in giving us your thoughts on what it means to be a reputable scientist.

[^1]: Philip E. Bourne is Editor-in-Chief of *PLoS Computational Biology*. Virginia Barbour is Chief Editor of *PLoS Medicine* and Secretary of COPE.

abstract: # Background

Abel and Trevors have delineated three aspects of sequence complexity – Random Sequence Complexity (RSC), Ordered Sequence Complexity (OSC) and Functional Sequence Complexity (FSC) – observed in biosequences such as proteins. In this paper, we provide a method to measure functional sequence complexity.

# Methods and Results

We have extended Shannon uncertainty by incorporating the data variable with a functionality variable. The resulting measured unit, which we call the Functional bit (Fit), is calculated from the sequence data jointly with the defined functionality variable. To demonstrate the relevance to functional bioinformatics, a method to measure functional sequence complexity was developed and applied to 35 protein families. Considerations were made in determining how the measure can be used to correlate functionality when relating to the whole molecule and the sub-molecule. In the experiment, we show that when the proposed measure is applied to the aligned protein sequences of ubiquitin, 6 of the 7 highest-value sites correlate with the binding domain.

# Conclusion

For future extensions, measures of functional bioinformatics may provide a means to evaluate potential evolving pathways from effects such as mutations, as well as to analyze the internal structural and functional relationships within the 3-D structure of proteins.

author: Kirk K Durston; David KY Chiu; David L Abel; Jack T Trevors
date: 2007
institute: 1Department of Biophysics, University of Guelph, Guelph, ON, N1G 2W1, Canada; 2Department of Computing and Information Science, University of Guelph, Guelph, ON, N1G 2W1, Canada; 3Program Director, The Gene Emergence Project, The Origin-of-Life Foundation, Inc., 113 Hedgewood Drive, Greenbelt, MD 20770-1610, USA; 4Department of Environmental Biology, University of Guelph, Guelph, ON, N1G 2W1, Canada
references:
title: Measuring the functional sequence complexity of proteins

# Background

There has been increasing recognition that genes deal with information processing. They have been referred to as "subroutines within a much larger operating system". For this reason, approaches previously reserved for computer science are now increasingly being applied to computational biology \[1\].
If genes can be thought of as information-processing subroutines, then proteins can be analyzed in terms of the products of information interacting with the laws of physics. It may be possible to advance our knowledge of proteins, such as their structures and functions, by examining the patterns of functional information when studying a protein family.

Our proposed method is based on mathematical and computational concepts (e.g., measures). We show here that, at least in some cases in sequence analysis, the proposed measure is useful in analyzing protein families with interpretable experimental results.

Abel and Trevors have delineated three qualitative aspects of linear digital sequence complexity \[2,3\]: Random Sequence Complexity (RSC), Ordered Sequence Complexity (OSC) and Functional Sequence Complexity (FSC). RSC corresponds to stochastic ensembles with minimal physicochemical bias and little or no tendency toward functional free-energy binding. OSC is usually patterned either by the natural regularities described by physical laws or by statistically weighted means. For example, a physico-chemical self-ordering tendency creates redundant patterns such as highly-patterned polysaccharides and the polyadenosines adsorbed onto montmorillonite \[4\]. Repeating motifs, with or without biofunction, result in observed OSC in nucleic acid sequences. The redundancy in OSC can, in principle, be compressed by an algorithm shorter than the sequence itself. As Abel and Trevors have pointed out, neither RSC nor OSC, nor any combination of the two, is sufficient to describe the functional complexity observed in living organisms, for neither includes the additional dimension of functionality, which is essential for life \[5\]. FSC includes the dimension of functionality \[2,3\]. Szostak \[6\] argued that neither Shannon's original measure of uncertainty \[7\] nor the measure of algorithmic complexity \[8\] is sufficient. Shannon's classical information theory does not consider the meaning, or function, of a message. Algorithmic complexity fails to account for the observation that 'different molecular structures may be functionally equivalent'. For this reason, Szostak suggested that a new measure of information – functional information – is required \[6\]. Chiu, Wong, and Cheung also discussed the insufficiency of Shannon uncertainty \[9,10\] when applied to measuring the outcomes of variables. The distinctions between RSC, OSC and FSC are necessary and useful in describing the biosequences of living organisms.

Consider two main uses for the proposed method for measuring FSC, which incorporates functionality: 1) comparative analysis of biosequence subgroups when an explicit time lag is known; and 2) typicality analysis between biosequence subgroups when there is no explicit time lag. In the first case, such as over an evolutionary time scale, an increase or decrease in FSC between an earlier gene or protein and a later gene or protein can be measured, evaluating possible degradations and/or functional effects due to various changes such as insertions, deletions, mutations and rearrangements. In the second case, a large set of aligned sequences representing a protein family can be subdivided according to phylogenetic relationships derived from the typicality of species groupings, and the FSC for each subgroup measured.
This is important when evaluating the emergence or evolution of viral or microbial strains with novel functions, as in the comparisons of the Chlamydia family genomes \[11\]. An analysis may reveal the extent of the difference in FSC between one functional group and the other, as well as the modular interactions of the internal relationship structure of the sequences \[12\].

The ability to measure FSC would be a significant advance in the ability to identify, analyze, compare, and predict the metabolic utility of biopolymeric sequences. Mutational drift, emerging pathogenic viral and microbial species/strains, generated mutations, acquired heritable diseases and mutagenic effects could all be evaluated quantitatively. Furthermore, *in vitro* experiments using SELEX \[13-15\] to study transitions in possible early ribozyme family growth could then be evaluated in a quantitative, as well as qualitative and intuitive, fashion. Evolutionary changes, both actual and theoretical, can also be evaluated using FSC.

It is known that the variability of data can be measured using Shannon uncertainty \[16\]. However, Shannon's original formulation, when applied to biological sequences, does not express variations related to biological functionality such as metabolic utility. Shannon uncertainty can, however, be extended to measure *the joint variable* (*X*, *F*), where *X* represents the variability of the data and *F* the functionality. This explicitly incorporates into the measure empirical knowledge of metabolic function, which is usually important for evaluating sequence complexity. This measure of both the observed data and a conceptual variable of function jointly can be called *Functional Uncertainty* (*H*~*f*~) \[17\], and is defined by the equation:

$$H(X_f(t)) = -\sum P(X_f(t)) \log P(X_f(t))$$

where *X*~f~ denotes the conditional variable of the given sequence data (*X*) on the described biological function *f*, which is an outcome of the variable (*F*). For example, a set of 2,442 aligned sequences of proteins belonging to the ubiquitin protein family (used in the experiment later) can be assumed to satisfy the same specified function *f*, where *f* might represent the known 3-D structure of the ubiquitin protein family, or some other function common to ubiquitin. The entire set of aligned sequences that satisfies that function, therefore, constitutes the outcomes of *X*~f~. Here, functionality relates to the whole protein family, which can be inputted from a database. The advantage of using *H*(*X*~f~(*t*)) is that changes in the functionality characteristics can be incorporated and analyzed. Furthermore, the data can be a single monomer, a biosequence, or an entire set of aligned sequences all having the same common function. The significance of the statistical variations can then be evaluated if necessary \[8\]. The state variable *t*, representing time or a sequence of ordered events, can be fixed, discrete, or continuous. Discrete changes may be represented as discrete time states.

Functional bioinformatics is emerging as an important area of research \[18-20\]. Even though the term 'biological function' has been freely used in specific experimentation, there is no generally consistent usage of the term. According to Karp \[21\], biological functionality can refer to biochemically specified reactions, cellular responses, and structural properties of proteins and nucleic acids.
Functional bioinformatics is emerging as an important area of research \\[18-20\\]. Even though the term 'biological function' has been freely used for specific experimentation, there is no generally consistent usage of the term. According to Karp \\[21\\], biological functionality can refer to specified biochemical reactions, cellular responses, and structural properties of proteins and nucleic acids. It can be defined or specified at the global level (i.e., the entire organism), locally at the sub-molecular level, or with respect to the whole molecule. Hence confusion exists in interpreting its meaning. Karp \\[21\\] recognized that biological function is a complex concept. Citing Webster, he refers to the 'specially fitted' action or 'normal and specific contribution' of a part to the economy of the whole. Function can be related to cellular components (e.g., macromolecules, proteins or small molecules) that interact and catalyze biochemical transformations. In specific applications, it can be a local function of an enzyme, such as the substrate that is acted on, or the ligands that activate or inhibit the enzyme. In more systemic, integrated settings, it may refer to pathways, single or multiple, in a hierarchical, nested scope \\[22,23\\]. In general, it is a challenge 'to define a single best set of biologically acceptable rules for performing this decomposition' \\[21\\].\n\nIn our approach, we leave the specific defined meaning of functionality as an input to the application, in reference to the whole sequence family. It may represent a particular domain, the whole protein structure, or any specified function with respect to the cell. Mathematically, it is defined precisely as an outcome of a discrete-valued variable, denoted *F* = {*f*}. The set of outcomes can be thought of as specified biological states. They are presumed non-overlapping, but can be extended to be fuzzy elements. For the calculation of FSC to be meaningful, the measure should be statistically significant in practice \\[7,10\\], that is, much larger than zero for the sequences in question. When sequences are chosen that are unrelated to the function to be analyzed, or are simply arbitrarily ordered or randomly generated, the measure of FSC will be small and statistically not significant. For example, if many sequences that do not share the same function *f* are mistakenly included within an aligned set representing some particular function, we should expect the measured FSC of that set to be degraded, possibly even to a very small value. However, when the specified functionality is chosen meaningfully (even in part), FSC can be interpreted.\n\nConsider further that when a biosequence is mutated, the mutating sequence can be compared at two different time states, going from *t*~i~ to *t*~j~. For example, *t*~i~ could represent an ancestral gene and *t*~j~ a current mutant allele. Different sequences sharing the same function *f* (as outcomes of the variables denoted respectively as *X*~f~, *Y*~f~) can also be compared at the same time *t*. Within a sequence, any change of a monomer represents a single step change that may or may not affect the overall function. Sequence reversals, gene splits, lateral transfers, and multiple point mutations can also be quantified between the two states *t*~i~ and *t*~j~. The limits of the change in functional uncertainty between the two states can then be evaluated at *t* = *t*~i~ and *t* = *t*~j~.\n\nThe change in functional uncertainty (denoted \u0394*H*~f~) between two states can be defined as\n\n\u0394*H*(*X*~f~(*t*~i~), *X*~g~(*t*~j~)) = *H*(*X*~g~(*t*~j~)) - *H*(*X*~f~(*t*~i~))    (2)\n\nwhere *X*~f~(*t*~i~) and *X*~g~(*t*~j~) can be applied to the same sequence at two different times or to two different sequences at the same time.
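Eqn. (2) then follows directly from the per-site computation. This short sketch (again ours, with hypothetical alignment columns for the two states) reuses the `functional_uncertainty` helper defined above.\n\n```python\n# Delta-H (Eqn. (2)): uncertainty of the later state minus the earlier one.\n# Hypothetical snapshots of one site: near-fixed at t_i, more variable at t_j.\nsite_ti = [\"L\"] * 7 + [\"I\"]\nsite_tj = [\"L\", \"L\", \"I\", \"I\", \"V\", \"V\", \"M\", \"A\"]\n\ndelta_h = functional_uncertainty(site_tj) - functional_uncertainty(site_ti)\nprint(round(delta_h, 3))  # 1.706: functional uncertainty has increased\n```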
\u0394*H*~f~ *can then quantify the change in functional uncertainty between two biopolymeric states with regard to biological functionality*. Unrelated biomolecules with the same function, or the same sequence evolving a new or additional function through genetic drift, can be compared and analyzed. A measure of \u0394*H*~f~ can increase, decrease, or remain unchanged.\n\nBiological function is mostly, though not entirely, determined by the organism's genetic instructions \\[24-26\\]. The function could theoretically arise stochastically through mutational changes coupled with selection pressure, or through human experimenter involvement \\[13-15\\]. A time limit can be set in some situations to evaluate what changes to *X*~f~(*t*~i~) might be possible within that limit. For example, an estimate of the evolutionary limits projected over the next 10 years could be computed in this approach for any particular strain of HIV. The specifics of the function (as an outcome of the function variable) can remain constant, or can be permitted to vary within a range of efficiency. The limit may be determined by what is permitted metabolically; there is often a minimum level of catalytic efficiency required by the organism for a given function.\n\nThe *ground state g* (an outcome of *F*) of a system is the state of presumed highest uncertainty (not necessarily equally probable) permitted by the constraints of the physical system, when no specified biological function is required or present. Certain physical systems may constrain the number of options in the ground state so that not all possible sequences are equally probable \\[27\\]. An example of a highly constrained ground state resulting in a highly ordered sequence occurs when the phosphorimidazolide of adenosine is added daily to a decameric primer bound to montmorillonite clay, producing a perfectly ordered, 50-mer sequence of polyadenosine \\[3\\]. In this case, the ground state permits only one single possible sequence. Since the ground state represents the state of presumed highest uncertainty permitted by the physical constraints of the system, the set of functional options, if there are any, will be a subset of the permitted options, assuming the constraints of the physical system remain constant. If the ground state permits only one sequence, then there is no possibility of change in the functional uncertainty of the system.\n\nThe *null state*, a possible outcome of *F* denoted as \u00f8, is defined here as a special case of the ground state of highest uncertainty in which the physical system imposes *no constraints at all*, *resulting in the equi-probability of all possible sequences or options*. Such sequencing has been called \"dynamically inert, dynamically decoupled, or dynamically incoherent\" \\[28,29\\]. For example, the null state of a 300 amino acid protein family can be represented by a completely random 300 amino acid sequence where functional constraints have been loosened such that any of the 20 amino acids will suffice at any of the 300 sites. From Eqn. (1), the functional uncertainty of the null state is represented as\n\n*H*(*X*~\u00f8~(*t*~i~)) = -\u2211 *P*(*X*~\u00f8~(*t*~i~)) log *P*(*X*~\u00f8~(*t*~i~))    (3)\n\nwhere *X*~\u00f8~(*t*~i~) is the conditional variable for all possible equiprobable sequences.
Let the number of all possible sequences be denoted by *W*, the length of each sequence by *N*, and the number of possible options at each site by *m*; then *W* = *m*^*N*^. For example, for a protein of length *N* = 257, and assuming that the number of possible options at each site is *m* = 20, *W* = 20^257^. Since, for the null state, we require that there are no constraints and that all possible sequences are equally probable, *P*(*X*~\u00f8~(*t*~i~)) = 1\/*W* and\n\n*H*(*X*~\u00f8~(*t*~i~)) = -\u2211 (1\/*W*) log (1\/*W*) = log *W*.    (4)\n\nThe change in functional uncertainty from the null state is, therefore,\n\n\u0394*H*(*X*~\u00f8~(*t*~i~), *X*~f~(*t*~j~)) = log *W* - *H*(*X*~f~(*t*~i~)).    (5)\n\nPhysical constraints increase order and move the ground state away from the null state, restricting freedom of selection and reducing the number of possible functional sequences, as mentioned earlier. The genetic code, for example, makes the synthesis and use of certain amino acids more probable than others, which could influence the ground state for proteins. However, although amino acids may naturally form a nonrandom sequence when polymerized in a dilute solution of amino acids \\[30\\], the data indicate that actual dipeptide frequencies and single nucleotide frequencies in proteins are closer to random than to ordered \\[31\\]. For this reason, the ground state for biosequences can be approximated by the null state. The value of the measured FSC of protein motifs can then be calculated by relating the joint (*X*, *F*) pattern to a stochastic ensemble; in the case of biopolymers this is the null state, which includes any random string from the sequence space.\n\n## A. Functional uncertainty as a measure of FSC\n\nThe measure of Functional Sequence Complexity, denoted \u03b6, is defined as the change in functional uncertainty from the ground state *H*(*X*~g~(*t*~i~)) to the functional state *H*(*X*~f~(*t*~i~)), or\n\n\u03b6 = \u0394*H*(*X*~g~(*t*~i~), *X*~f~(*t*~j~)).    (6)\n\nThe resulting unit of measure is defined on the joint data and functionality variable, and we call it *Fits* (*F*unctional *bits*). The unit Fit thus defined is related to the intuitive concept of *functional* information, including genetic instruction, and thus provides an important distinction between functional information and Shannon information \\[6,32\\].\n\nEqn. (6) describes a measure of the functional information of the whole molecule, that is, with respect to the functionality of the protein considered. The functionality of the protein can be known and is consistent across the whole protein family, given as input from the database. However, the functionality of a sub-sequence or of particular sites of a molecule can be substantially different \\[12\\]. The functionality of a sub-molecule, though clearly extremely important, has to be identified and discovered; the problem of estimating functionality, and where it is expressed at the sub-molecular level, is currently an active area of research in our group.\n\nTo avoid the complication of considering functionality at the sub-molecular level, we crudely assume that each site in a molecule, when calculated to have a high measure of FSC, correlates with the functionality of the whole molecule. The measure of FSC of the whole molecule is then the total sum of the measured FSC for each site in the aligned sequences.
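As a numerical illustration of Eqns. (4)-(6) under the null-state assumption *m* = 20, the following sketch (ours; the function names are illustrative) computes log *W* for a family of aligned length *N*, and the resulting \u03b6 once the summed functional uncertainty of the functional state is known; the numbers echo the 121-residue Ribosomal S12 family reported in Table 1 below.\n\n```python\nfrom math import log2\n\ndef null_state_bits(n_sites, m=20):\n    \"\"\"Eqn. (4): H(X_null) = log2(W) = N * log2(m) bits, for W = m**N\n    equiprobable sequences.\"\"\"\n    return n_sites * log2(m)\n\ndef fits(n_sites, h_functional, m=20):\n    \"\"\"Eqns. (5) and (6) with the null state as ground state:\n    zeta = log2(W) - H(Xf), in Fits.\"\"\"\n    return null_state_bits(n_sites, m) - h_functional\n\n# Ribosomal S12 (Table 1): null state = 121 * log2(20), about 523 bits; a\n# summed functional uncertainty of about 164 bits then gives zeta = 359 Fits.\nprint(round(null_state_bits(121)))  # 523\nprint(round(fits(121, 164.0)))      # 359\n```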
Since there are usually only 20 different amino acids possible per site in a protein, Eqn. (6) can be used to calculate a maximum Fit value per amino acid site of log (20) = 4.32 Fits\/site. We use the formula log (20) - *H*(*X*~f~) to calculate the functional information at a site specified by the variable *X*~f~, such that *X*~f~ corresponds to the aligned amino acids of each sequence with the same molecular function *f*. The measured FSC for the whole protein is then calculated as the summation over all aligned sites. The number of Fits quantifies the degree of algorithmic challenge, in terms of probability, in achieving the needed metabolic function. For example, given that the Ribosomal S12 protein family has a measured value of 359 Fits (Table 1), we can use the equations presented thus far to predict that there are about 10^49^ different 121-residue sequences that could fall into the Ribosomal S12 family of proteins, resulting in an evolutionary search target of approximately 10^-106^ percent of 121-residue sequence space. In general, the higher the Fit value, the more functional information is required to encode the particular function, and the rarer that function is in sequence space. A high Fit value for an individual site within a protein indicates a site that requires a high degree of functional information; high Fit values may also point to key structural or binding sites within the overall 3-D structure. Since the functional uncertainty, as defined by Eqn. (1), is proportional to the -log of the probability, the cost of a linear increase in FSC is an exponential decrease in probability.\n\nFor the current approach, both equi-probability of monomer availability\/reactivity and independence of selection at each site within the strand can be assumed as a starting point, using the null state as our ground state. For the functional state, however, an *a posteriori* probability estimate must be made from the given ensemble of aligned sequences. Although there are a variety of methods to estimate *P*(*X*~f~(*t*)), the method we use here, as an approximation, is as follows. First, a set of aligned sequences with the same presumed function, produced by methods such as CLUSTAL, is downloaded from Pfam. Since real sequence data are used, the effect of the genetic code on amino acid frequency is already incorporated into the outcome. Let the total number of sequences in the set with the specified function be denoted by *M*. The data set can be represented by the N-tuple *X* = (*X*~1~, ..., *X*~N~), where *N* denotes the aligned sequence length, as mentioned earlier. The total number of occurrences, denoted by *d*, of a specific amino acid \"aa\" at a given site is computed. An estimate of the probability that the given amino acid occurs at site *X*~*i*~, denoted by *P*(*X*~*i*~ = \"aa\"), is then made by dividing the number of occurrences *d* by *M*, or\n\n*P*(*X*~i~ = \"aa\") = *d*\/*M*.    (7)\n\nFor example, if in a set of 2,134 aligned sequences we observe that proline occurs 351 times at the third site, then *P*(\"proline\") = 351\/2,134. Note that *P*(\"proline\") is a conditional probability for that site variable, conditioned on the presumed function *f*. This is calculated for each amino acid at every site. The functional uncertainty of the amino acids at a given site is then computed using Eqn. (1) with the estimated probabilities for each amino acid observed.
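Putting Eqns. (1), (4) and (7) together, here is a minimal per-site sketch over a toy alignment (our construction; gaps and non-standard residues are deliberately ignored, and the `site_fits` and `protein_fsc` names are ours):\n\n```python\nfrom collections import Counter\nfrom math import log2\n\ndef site_fits(column, m=20):\n    \"\"\"Per-site Fit value: log2(20) minus the site's functional uncertainty,\n    with P(Xi = \"aa\") = d \/ M estimated by counting (Eqn. (7)).\"\"\"\n    total = len(column)\n    h_site = -sum((d \/ total) * log2(d \/ total) for d in Counter(column).values())\n    return log2(m) - h_site\n\ndef protein_fsc(alignment):\n    \"\"\"Whole-protein FSC: the sum of per-site Fit values over all aligned sites.\"\"\"\n    return sum(site_fits(col) for col in zip(*alignment))\n\n# Toy alignment of M = 4 sequences and N = 3 sites.\naln = [\"MKV\", \"MKI\", \"MRV\", \"MKV\"]\nprint(round(protein_fsc(aln), 2))  # 11.34 Fits\n```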
The Fit value for that site is then obtained by subtracting the functional uncertainty of that site from the null-state value, in this case log (20) from Eqn. (4). The individual Fit values for each site can be tabulated and analyzed, and their sum over all sites can be used as an estimate of the overall FSC value for the entire protein, which can then be compared with other proteins.\n\n## B. Measuring changes in FSC\n\nIn principle, some proteins may change from a non-functional state to a functional state gradually as their sequences change. Furthermore, iso-enzymes may in some cases catalyze the same reaction yet have different sequences. Certain enzymes may also demonstrate variation in both their sequence and their function. Finally, a single mutation in a functional sequence can sometimes render the sequence non-functional relative to the original function; an example of this effect has been observed in experiments with the Ultrabithorax (Ubx) protein \\[17,33\\].\n\nFrom Eqn. (6), the FSC of a biosequence can be measured as it changes with time, as shown in Figure 1. When measuring evolutionary change in terms of FSC, it is necessary to account for a change in function due to insertions, deletions, substitutions and shuffling. Evolutionary change that involves a change from a non-functional state that is not the null state, or from an existing function *f*~a~ to a modified function *f*~b~ that differs in efficiency or in kind, is given by\n\n\u03b6~E~ = \u0394*H*(*X*~fa~(*t*~i~), *X*~fb~(*t*~j~)).    (8)\n\nThe sequences corresponding to *X*~fa~ (with initial function *f*~a~) have two components relative to those of *X*~fb~ (with resulting mutated function *f*~b~). The *static component* is the portion of the subsequence that must remain within the permitted sequence variation of the original biosequence with function *f*~a~ while, at the same time, enabling the new function *f*~b~. The *mutating component* is the portion of *X*~fa~ that must change to achieve the new function *f*~b~, where the new function is to be understood either as a new level of efficiency for the existing function or as a novel function different from *f*~a~. This is a convenient simplification, assuming that the two components are separable according to the aligned sites; we are currently also studying scenarios in which the two components may be mixed, possibly at different times. The mutating component can be assumed to be in the null state relative to the resulting sequences of *X*~fb~. Since the mutating component is the only part that must change, we can ignore the static component *provided we include the probability of it remaining static* during the mutational changes of the mutating component.\n\nIn practice, the sequence space for possible novel functional states may not be known. However, by considering particular proteins, estimated mutation rates, population size, and time, an estimated value for the probability can be chosen and substituted into the relevant components of Eqn. (9) to limit search areas around known, observed biosequences, such as protein structural domains, to see what other possible states within that range might have some selective advantage. In this way, possible evolutionary paths for the formation of certain protein families might be reconstructed.
For example, using this method it might be possible to predict future viral strains within certain limits.\n\nIntuitively, the greater the reduction in FSC a mutation produces, the more likely the mutation is deleterious to the given function. This can be evaluated by introducing known mutations individually into a set of aligned, wild-type sequences and measuring the change in FSC; the results could then be ranked. Operating under the hypothesis that the mutations producing the greatest decrease in FSC are the most likely to be deleterious, experimental investigation of particular genes and mutations could be prioritized according to how negatively they affect FSC.\n\n# Results and Discussion\n\nFor the 35 protein families analyzed, a measure of FSC in Fits was computed for each site from the aligned sequence data on Pfam. The results for the families, as well as for an array of randomly (uniformly) generated sequences and an ordered 50-mer polyadenosine sequence, are shown in Table 1. They reveal significant aspects of FSC, described below.\n\nFSC of Selected Proteins\n\n| **Protein family** | **length (aa)** | **Number of Sequences** | **Null State (Bits)** | **FSC (Fits)** | **FSC Density (Fits\/aa)** |\n|----|----|----|----|----|----|\n| Ankyrin | 33 | 1,171 | 143 | 46 | 1.4 |\n| HTH 8 | 41 | 1,610 | 177 | 76 | 1.9 |\n| HTH 7 | 45 | 503 | 194 | 83 | 1.8 |\n| HTH 5 | 47 | 1,317 | 203 | 80 | 1.7 |\n| HTH 11 | 53 | 663 | 229 | 80 | 1.5 |\n| HTH 3 | 55 | 3,319 | 238 | 80 | 1.5 |\n| Insulin | 65 | 419 | 281 | 156 | 2.4 |\n| Ubiquitin | 65 | 2,442 | 281 | 174 | 2.7 |\n| Kringle domain | 75 | 601 | 324 | 173 | 2.3 |\n| Phage Integr N-dom | 80 | 785 | 346 | 123 | 1.5 |\n| VPR | 82 | 2,372 | 359 | 308 | 3.7 |\n| RVP | 95 | 51 | 411 | 172 | 1.8 |\n| Acyl-CoA dh N-dom | 103 | 1,684 | 445 | 174 | 1.7 |\n| MMR HSR1 | 119 | 792 | 514 | 179 | 1.5 |\n| Ribosomal S12 | 121 | 603 | 523 | 359 | 3.0 |\n| FtsH | 133 | 456 | 575 | 216 | 1.6 |\n| Ribosomal S7 | 149 | 535 | 644 | 359 | 2.4 |\n| P53 DNA domain | 157 | 156 | 679 | 525 | 3.3 |\n| Vif | 190 | 1,982 | 821 | 675 | 3.6 |\n| SRP54 | 196 | 835 | 847 | 445 | 2.3 |\n| Ribosomal S2 | 197 | 605 | 851 | 462 | 2.4 |\n| Viral helicase1 | 229 | 904 | 990 | 335 | 1.5 |\n| Beta-lactamase | 239 | 1,785 | 1,033 | 336 | 1.4 |\n| RecA | 240 | 1,553 | 1,037 | 832 | 3.5 |\n| Bac luciferase | 272 | 1,900 | 1,176 | 357 | 1.3 |\n| tRNA-synt 1b | 280 | 865 | 1,210 | 438 | 1.6 |\n| SecY | 342 | 469 | 1,478 | 688 | 2.0 |\n| EPSP Synthase | 372 | 1,001 | 1,608 | 688 | 1.9 |\n| FTHFS | 390 | 658 | 1,686 | 1,144 | 2.9 |\n| DctM | 407 | 682 | 1,759 | 724 | 1.8 |\n| Corona S2 | 445 | 836 | 1,923 | 1,285 | 2.9 |\n| Flu PB2 | 608 | 1,692 | 2,628 | 2,416 | 4.0 |\n| Usher | 724 | 316 | 3,129 | 1,296 | 1.8 |\n| Paramyx RNA Pol | 887 | 389 | 3,834 | 1,886 | 2.1 |\n| ACR Tran | 949 | 1,141 | 4,102 | 1,650 | 1.7 |\n| Random sequences | 1,000 | 500 | 4,321 | 0 | 0 |\n| 50-mer polyadenosine | 50 | 1 | 0 | 0 | 0 |\n\n**Results for 35 protein families.** Shown above are the 35 protein families analyzed, their aligned sequence length, the number of sequences analyzed for each family, the Shannon uncertainty of the null state *H*~\u00f8~ (Eqn. 4) for each protein, the FSC value \u03b6 in Fits for each protein, and the average Fit value\/site (FSC\/length). For comparison, the results for a set of uniformly random amino acid sequences (RSC) are shown in the second-from-last row, and those for a highly ordered, 50-mer polyadenosine sequence (OSC) in the last row.
All values, except for the OSC example, which was calculated from the constrained ground state required to produce OSC, were computed from the null state. The Fit values obtained can be interpreted as the change in functional uncertainty required to specify any functional sequence that falls into the given family being analyzed.\n\nFirst, as observed in Table 1, although we might expect larger proteins to have a higher FSC, that is not always the case. For example, 342-residue SecY has an FSC of 688 Fits, but the smaller 240-residue RecA actually has a larger FSC of 832 Fits. The Fit density (Fits\/amino acid) is therefore lower in SecY than in RecA, indicating that RecA is likely more functionally complex than SecY. The results for the array of random sequences and for the 50-mer polyadenosine sequence formed on montmorillonite show that \u0394*H*~f~ distinguishes FSC from RSC and OSC. The results for the array of random sequences, an example of RSC, are shown in the second-from-last row of Table 1 and indicate that random sequences tend to have an FSC of approximately 0 Fits. The results for the highly ordered 50-mer polyadenosine, an example of OSC, are shown in the last row of Table 1 and likewise indicate an FSC of approximately 0 Fits. This is consistent with Abel and Trevors' prediction that neither OSC nor RSC can contain the functional sequence complexity observed in biosequences.\n\nA plot of FSC versus protein length for the 35 selected protein families is shown in Figure 2. The x-intercept was found to be approximately 23 amino acids. For the 35 protein families analyzed, there were no points corresponding to any protein family in the lower right area of the plot. The right-hand plot in Figure 2 shows the average number of Fits\/site for the 35 protein families analyzed; for our small sample, we found no points between 0 and 1.3 Fits\/site.\n\nTo demonstrate some ways in which our approach can be applied to proteins, ubiquitin was chosen. A sample plot of Fits\/site and amino acid conservation for each site is shown in Figure 3 for ubiquitin. Data for the first 5 sites in the aligned set were not available from Pfam. The conservation value was obtained by subtracting the number of different amino acid options observed at a given site from the total number of possible options, in this case 20. If all 20 amino acids were observed at a site, the conservation value was 0; the maximum value of 19 was obtained if only 1 amino acid was observed at the site. From Figure 3, it can be observed that a high conservation value usually corresponded to a high measured FSC value. The measurement is also affected by the number of amino acids observed, which can differ from site to site. For example, site 37 shows 20 observed amino acids, but still has a relatively high value of 2.9 Fits. Conservation of a site reflects the degree of variation, which is affected by both the number of observed amino acids and their frequencies of observation in the alignment \\[12\\]. For example, if a site is dominated by only a few amino acids, even though all amino acid types are observed, its measured FSC can still be high.
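The conservation value just described reduces to counting distinct residues at a site. A short sketch (our construction, reusing the column representation of the earlier examples):\n\n```python\ndef conservation_value(column, m=20):\n    \"\"\"Total possible options (20) minus the number of distinct amino acids\n    observed at the site: 0 if all 20 occur, 19 if the site is invariant.\"\"\"\n    return m - len(set(column))\n\ninvariant_site = [\"G\"] * 30\nmixed_site = list(\"ACDEFGHIKLMNPQRSTVWY\") + [\"A\"] * 80  # all 20 seen, A dominant\nprint(conservation_value(invariant_site))  # 19\nprint(conservation_value(mixed_site))      # 0\n```\n\nAs with site 37 above, `mixed_site` contains every amino acid yet is dominated by one of them, so a conservation value of 0 can still coincide with a substantial per-site Fit value (about 2.8 Fits for this toy column, close to the 2.9 Fits quoted for site 37).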
From the plot in Figure 3, one can observe which sites have higher measured FSC values. An arbitrary lower cutoff of 3.32 Fits\/site was chosen to indicate high-FSC sites, presumed to be significantly associated with the functionality of the whole molecule; a more rigorous statistical analysis could also be used, as could other measures based on dependency \\[12\\]. All sites exceeding this cutoff were examined. A total of seven sites had values between 3.32 Fits\/site and the maximum of 4.32 Fits\/site. These sites were located on a 3-D model of ubiquitin (1AAR.pdb), as shown in Figure 4. Of the 7 sites, 6 were located on the binding domain containing the binding site Lys-48 \\[34\\]. The site with the maximum Fit value was site 47, immediately next to the binding site. Surprisingly, the binding site itself was poorly conserved, with an amino acid conservation value of only 1, albeit a relatively high value of 2.91 Fits. Five of the remaining 6 sites were clustered in the area of the binding site, with bonds between Leu-50 and Leu-43, as well as between Gly-41, Lys-27, and Asp-52, as shown in Figure 5. Since these sites had the highest FSC values in ubiquitin, we infer that they play a critical role either in binding or in the structure of the binding-site domain for this protein. The fact that, of the top seven sites, one was immediately adjacent to the binding site and five others were located on the structure supporting the binding site lends support to our hypothesis that high Fit values can be used to locate key functional components of a protein family.\n\nIn this paper, we have presented an important advance in the measurement of the FSC of biopolymers. It was assumed that aligned sequences from the same Pfam family could be assigned the same functionality label. Even though the same functionality may not be applicable to individual sites, site independence and significance were assumed, and the measured FSC of each site was summed. Further extension of the method should, however, be considered \\[12,35\\]. For example, if dependency of joint occurrences is detected between the outcomes of two variables *X*~3~ and *X*~4~ in the aligned sequences, then the N-tuple representation of the sequences could be transformed into a new R-tuple *Y*~R~ in which the outcomes of *X*~3~ and *X*~4~ are represented as the outcome of a single variable *Y*~3~, as shown in Figure 6. An outcome of the two variables *X*~3~ and *X*~4~ corresponds to a hypercell in *Y*~R~. A more accurate estimate of FSC could then be calculated. We are currently considering this more general scenario.\n\nThe measurement in Fits of the FSC provides significant information about how specific each monomer in the sequence must be to provide the needed\/normal biofunction. The functional information measures the degree of challenge involved in searching the sequence space for a sequence capable of performing the function. In addition, Fits can be summed for every sequence required to achieve a complete functional biochemical pathway and integrated cellular metabolism, including regulatory proteins. In principle, it may be possible to estimate an FSC value for an entire prokaryotic cell whose genome has been sequenced and whose translated proteins are all known. The simpler genomes of viruses may be an excellent starting point for this kind of analysis; further analysis of the FSC values could then reveal important information about the processes of an entire organism such as a virus.
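Selecting candidate functional sites with the cutoff used above for ubiquitin is then a simple filter. In this sketch the per-site values are hypothetical stand-ins (only the 2.9-range values echo numbers quoted above); in practice they would come from `site_fits` in the earlier sketch.\n\n```python\n# Rank sites and keep those at or above the 3.32 Fits\/site cutoff.\n# fit_by_site would be built by applying site_fits() to each alignment column.\nfit_by_site = {37: 2.9, 41: 3.6, 43: 3.5, 47: 4.1, 48: 2.91, 50: 3.4}\n\nCUTOFF = 3.32\nhigh_fsc_sites = sorted(\n    (site for site, f in fit_by_site.items() if f >= CUTOFF),\n    key=lambda s: -fit_by_site[s],\n)\nprint(high_fsc_sites)  # [47, 41, 43, 50]: candidates for key functional roles\n```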
# Conclusion\n\nA mathematical measure of functional information, in units of Fits, of the functional sequence complexity observed in protein-family biosequences has been designed and evaluated. This measure has been applied to diverse protein families to obtain estimates of their FSC. The Fit values we calculated ranged from 0, which describes no functional sequence complexity, to as high as approximately 2,400 Fits (2,416 for Flu PB2) for the most functionally complex family analyzed. The method successfully distinguishes FSC from both OSC and RSC, thus distinguishing biological function from order and from randomness.\n\n# Methods\n\nThe following is a brief summary of the methods used (additional file 1); a more detailed description is available as an online supplement. Eqn. (6) was applied to 35 protein families or protein-domain families to estimate the value of the FSC for any protein included within each family. A program was written in Python to analyze the two-dimensional array of aligned sequences for a protein family and is available online (additional files 2, 3, 4, 5, 6, 7, 8, and 9). The data for the arrays were obtained from the Pfam database \\[36\\]. Eqn. (7) was used to estimate the probability of each amino acid at each site.\n\n# Competing interests\n\nThe author(s) declare that they have no competing interests.\n\n# Authors' contributions\n\nKD helped develop the formulation and measure of FSC, wrote the software, carried out the analysis and drafted the manuscript. DC developed, together with KD, the basic formula of functional entropy and evaluated its different mathematical forms. DC also provided suggestions on the experimental design and its interpretation. DLA's contributions included defining\/delineating the three subsets of sequence complexity and their relevance to biopolymeric information, contributing to the first draft of the paper, critiquing KD's quantification methodology, contributing references, and coining the term \"Fits\" for \"functional bits\" as the unit of measure of Functional Sequence Complexity (FSC). JT participated in the design and coordination of the study. All authors read and approved the final manuscript.\n\n# Supplementary Material\n\n###### Additional File 1\n\nMethods. Additional details of the methods used in this project\n\nClick here for file\n\n###### Additional File 2\n\nMain Program. A copy of the code for the main program used in this paper\n\nClick here for file\n\n###### Additional File 3\n\nAminoFreq. A required module for the main program\n\nClick here for file\n\n###### Additional File 4\n\nColTot. A required module for the main program\n\nClick here for file\n\n###### Additional File 5\n\nConvert. A required module for the main program\n\nClick here for file\n\n###### Additional File 6\n\nDistEnt. A required module for the main program\n\nClick here for file\n\n###### Additional File 7\n\nFormArray. A required module for the main program\n\nClick here for file\n\n###### Additional File 8\n\nStripName. A required module for the main program\n\nClick here for file\n\n###### Additional File 9\n\nP53DNADom.
A sample data set for the reader\n\nClick here for file\n\n### Acknowledgements\n\nThe research was supported by the Natural Sciences and Engineering Research Council (NSERC) of Canada through a Discovery Grant, and by a Korean Research Foundation Grant (KRF-2004-042-C00020) to one of the authors.\n\nauthor: Neil Hall\ndate: 2012\ninstitute: 1Centre for Genomic Research, University of Liverpool, Crown St, Liverpool L69 7ZB, UK\ntitle: Why science and synchronized swimming should not be Olympic sports\n\nThe brief, for my intermittent comment column for *Genome Biology*, was to \"give a UK perspective\" while \"keeping it interesting for an international audience\". That's a tough brief. There is little reason for people to take an interest in events taking place on a rainy, windswept island, famous mostly for bad food, greedy bankers and reality shows. Recently, though, because of the London 2012 Olympics, the focus of the world's attention has briefly flitted in our direction. So, while basking in our reflected Olympic glory, I will start with sport.\n\nIn the UK we can be rightly proud of the fact that (as the head of the International Olympic Committee pointed out) we as a nation have done much to codify many Olympic sports. But this probably points to a national failing: we love rules, we love measuring performance and hence we love inventing sports. That's why we invented cricket; it has loads of obscure rules and loads of complex performance statistics. A game that can take 5 days and has to stop for rain or bad light (in England!) was not invented for the drama or spectacle. Our obsession with performance metrics is not just limited to sport; the government has taken to measuring and publishing the performance of everything from schools and hospitals to police forces and train operators. They have even tried to quantify our national happiness.\n\nSportsmen and -women can obsess about measuring their performance and gauging it against their past performance and the performance of others, but researchers are quite different. In academia we don't like other people judging what we do (or even defining what we do), and we tend not to like metrics designed to measure our performance. But many years ago, the UK government decided to ignore these protestations from our ivory towers and created a mechanism for measuring research quality called the Research Excellence Framework (or REF for short).
REF is the reason that some readers may have noticed UK-based collaborators acting increasingly strangely, perhaps looking stressed and distant, obsessing about impact factors and questioning the value of everything they do against impenetrable metrics.\n\n# REF 101\n\nFor the benefit of non-UK-based readers, I will need to give some background. In the UK we have only recently taken to crucifying our youth with a lifetime of debt to fund their education. Hence, there are still large sums of money coming from central government to universities. Each year, \u00a31 billion is paid directly to institutions based on research quality. The sum of money each university receives is decided by the REF, and here, in brief, are the rules.\n\n## 1. Publish rarely but well\n\nThe REF occurs every 7 years, although it changes its name more often than The Artist Formerly Known As Prince: it used to be called the Research Assessment Exercise, and before that the Research Selectivity Exercise. The nuances of the process have evolved, but a key metric in each assessment is the quality of the research outputs (papers). Each academic can submit just four papers published in the last 7 years to be judged for their 'excellence in originality, significance and rigor', against these murky definitions:\n\n\u2022 4 star - Quality that is world leading\n\n\u2022 3 star - Quality that is internationally excellent\n\n\u2022 2 star - Quality that is recognized internationally\n\n\u2022 1 star - Quality that is recognized nationally\n\nTo me, these read like color descriptions on paint tins (linen white, antique white, foam white, cloud white, and so on), and if you are trying to work out what the difference is between 'world leading' and 'internationally excellent', you are not alone. Most UK academics are currently kept awake by such thoughts. This is probably a futile exercise, as the star ratings are assigned by REF panels, each with about 20 members, and each member will have to read hundreds of papers, many of which are outside their area of expertise. It is hard to imagine that this review process is not guided by journal impact factors or by the esteem in which the submitting academic is held by the panel. However, these ratings are important, as the stars for all the papers submitted by each department will be added up to evaluate its research quality.\n\nMost of you will also be thinking that four papers in 7 years is not a lot... and it isn't. So, the reason your UK collaborators are not publishing the material they presented at conferences last year is that they are hoping to roll it, with more data, into a more prestigious journal later.\n\n## 2. Hire for the short term\n\nInexplicably, although the process is designed to measure the performance of universities, the star ratings for the papers go to the investigator, regardless of where they were working at the time of publication. This means that you can buy in researchers with a good set of publications simply to use them in your REF audit. This has created what amounts to a 'transfer deadline' and a highly skewed academic job market in the UK, in which universities hire on a 7-yearly cycle and increases in professorial salaries have outstripped those for junior faculty. This is reminiscent of sports stars: a few individuals are paid extremely well, but very little cash goes to the grass roots.\n\nI should acknowledge that I have been a beneficiary of this system.
When I realized that if I lived in the USA any longer my son would soon complete his metamorphosis into Bart Simpson, I started to look for work in the UK. It turned out that this was just before the cyclical REF (then called the RAE). And, as I had been working as a project leader at genome centers during a period when *Nature* would essentially publish raw electropherograms (or, as I call it, 'the good old days'), I was nailed-on 'Four Star'. Therefore, my prospective employer could hire me on a healthy professorial salary and provide me with a good start-up package in the knowledge that the government would refund the cost in return for my papers.\n\nAs a scientist, I was delighted; as a taxpayer, I should have been outraged.\n\n## 3. Spend, spend, spend\n\nAt this point I know some of you think I am making this up, but the other metric that decides how much money you receive is how much money you have spent. This does sound like a futile positive feedback loop that could only have been invented by a Lehman Brothers investment banker, but it genuinely is a measure used to score the 'Research Environment', which is a key metric in the REF. According to how this metric is measured, a department with a large research spend is considered a better environment than one with a lower spend. As I run a genomics lab, I am obviously thrilled with this metric, because I can certainly generate a good research environment as judged by the REF, but some people might argue that a great research environment is instead one that produces good papers without spending huge sums of money.\n\n# Measuring the unmeasurable\n\nI would like to think that anyone reading this would conclude that the REF is a shockingly bad way to judge research performance. And it's fair to say I have paraphrased the rules (there is a 106-page document on just the panel working methods for those wanting more detail). I have also highlighted the easy targets for ridicule. I can totally understand that governments who fund research would like to measure performance, but I question their ability to do so, and I would challenge anyone to come up with a better system.\n\nAthletics is sport in its purest form and has basic measurements (height, length and speed) that can be used to define who is the best. Usain Bolt has a single, very simple KPI (that's a Key Performance Indicator, for those not on university management committees). But there are other 'sports', such as gymnastics, diving, dressage and synchronized swimming, that are much less clear cut. I personally think that anything that uses a judging panel to score artistic interpretation should not be a sport. Now imagine what would happen if, instead of judging panels, the synchronized swimmers scored each other anonymously. I expect that things could get quite divisive.\n\nLike many others, I really enjoyed my 4-yearly dose of synchronized swimming during the Olympics. It was dramatic, entertaining and thrilling, but making it a sport is missing the point. You may as well make ballet and stand-up comedy Olympic sports as well. Unlike a 100 m sprint, the measurement of performance is far too subjective. At some point one has to accept that some things can have intrinsic value but cannot be objectively measured.\n\nScience quality is difficult to measure, yet we all know when we see something amazing or dramatic, whether it is a Higgs boson or a Neanderthal genome, and choosing which of these is best is pointless.
In science, as with ballet, comedy or synchronized swimming, there should be no winners, because you can't measure who won.\n\nabstract: A report from the Keystone Symposium on Molecular and Cellular Biology, 'Deregulation of transcription in cancer: controlling cell fate decisions', Killarney, Ireland, 21-26 July 2009.\nauthor: Aisling M Redmond; Jason S Carroll\ndate: 2009\ninstitute: 1Endocrine Oncology Research Group, Department of Surgery, Royal College of Surgeons in Ireland, St Stephens Green, Dublin 2, Ireland; 2Cancer Research UK, Cambridge Research Institute, Robinson Way, Cambridge CB2 0RE, UK\ntitle: Defining and targeting transcription factors in cancer\n\nThis Keystone meeting focused on mechanisms of transcriptional regulation, the effects of transcriptional deregulation and its consequences for cancer, and the possibilities for inhibiting transcription factors as potential therapeutic targets. Greg Verdine (Harvard University, Cambridge, USA) suggested that, until now, only 20% of the genome has been targetable with drugs (druggable). Transcription factors have traditionally been considered too difficult to target, but with increased understanding of transcription factor biology and with technological advances, targeting them is becoming a realistic option.\n\nThe role of polycomb complexes in regulating transcriptional activity was an important theme of the meeting. Kristian Helin (University of Copenhagen, Denmark) summarized the role of polycomb repressor complexes (involving EZH2) in controlling senescence, in particular their role in the regulation of the p16^INK4A^ and p14^ARF^ tumor suppressor proteins. Competitive binding between polycomb repressor complexes and the histone demethylase JMJD3 determines transcriptional activity, suggesting that JMJD3 can act as a tumor suppressor. Maarten Van Lohuizen (Netherlands Cancer Institute, Amsterdam, The Netherlands) presented work on BMI1, a member of the polycomb repressor complex PRC1, which is crucial in the maintenance of adult stem cells. Loss of BMI1 in mouse models causes a dramatic reduction in proliferation in the mammary gland. Levels of BMI1 and EZH2, and interactions between them, influence tumor progression. Qiang Yu (Genome Institute of Singapore, Singapore) focused on the possibility of targeting EZH2 in cancer. His group's compound, 3-deazaneplanocin A (DZNep), an *S*-adenosylhomocysteine hydrolase inhibitor, depletes EZH2 in breast cancer cells.
In colorectal cancer, DZNep reactivates the tumor suppressor microRNA *miR449*, leading to cell cycle arrest. He suggested that DZNep, in combination with 5-azacytidine, may be an effective cancer treatment.\n\nSeveral talks used contemporary genomics technologies to define transcription factor binding sites, regulatory regions and changes in chromatin structure, all of which can contribute to altered cellular growth. Manel Esteller (Catalan Institute of Oncology, Barcelona, Spain) has assessed global DNA methylation in normal and cancer cells and found significant changes in DNA methylation at promoters. He discussed the DNA epigenome project, an ambitious effort to study the methylation state of 10,000 promoters in tumors from more than 1,000 patients. Bing Ren (University of California, San Diego, USA) discussed global DNA methylation and histone modification data generated by chromatin immunoprecipitation microarrays (ChIP-chip) or ChIP-sequencing (ChIP-seq). In human embryonic stem cells that were induced to differentiate, he showed that 30,000 putative enhancers exist in pre- and post-differentiation states, but only 8,000 of these were common between the two states. Within the regions shown to have altered chromatin structure following differentiation, he could show enrichments of motifs for various transcription factors. Both positive and inverse correlations were found when the histone maps were combined with DNA methylation data. Importantly, he could show that in specific cell types non-CG methylation could occur, and this was usually depleted from promoters of actively transcribed genes.\n\nTwo talks from members of the Genome Institute of Singapore highlighted new data on estrogen receptor (ER) transcription and chromatin dynamics. Yijun Ruan presented data on a novel technique called 'whole genome chromatin interaction analysis using paired-end ditagging' (ChIA-PET), which is a global method for identifying chromatin loops that form during transcription. By applying this to estrogen-induced gene transcription in breast cancer cells, his group found many hundreds of estrogen-induced intrachromosomal chromatin loops that form over distances as great as 1 Mb, representing *cis*-regulatory components that physically interact. As a follow-up to this presentation, Ed Liu presented data from recent genome-wide mapping of ER binding sites. He showed that only small subsets of predicted motifs are actual binding sites *in vivo* and that the ER binding sites occur in gene-rich areas. Within the list of ER binding sites, the strongest (that is, those most enriched by ChIP-seq) were more likely to contain responsive motifs and to be adjacent to genes that are differentially regulated by estrogen. This suggests that there is a hierarchy of ER binding sites, in which the strongest sites are more likely to be functional, possibly as a result of superior ER-DNA interactions.\n\nMaintaining the theme of nuclear receptor transcriptional activity, Ralf Kittler (University of Chicago, USA) has engineered bacterial artificial chromosomes of more than 20 different nuclear receptor (NR) genes to include an enhanced green fluorescent protein tag, and these have subsequently been used for genome-wide mapping using ChIP-chip.
By correlating the binding profiles of all NRs, they found profile similarities and common binding sites between ER and retinoic acid receptor \u03b1; they hypothesized that this was an antagonistic interaction.\n\nArul Chinnaiyan (University of Michigan, Ann Arbor, USA) presented data characterizing gene fusions in prostate cancer, including his group's original discovery of fusions between the *TMPRSS2* gene (which encodes a transmembrane serine protease) and the *ERG* ETS-family oncogene in prostate cancers. He showed the results from large-scale genomic screens, which have resulted in more than 100 validated gene fusions. Interestingly, all of the fusions contain an ETS factor as the transcriptionally active partner and the other half of the fusion is usually an androgen-regulated gene target. He showed that fusions of various kinds are found in more than half of all prostate cancers and that their specificity to cancer and not normal tissue has led to a urine-based test for detecting fusion genes that may function as a prognostic indicator of prostate cancer. One of the major issues that researchers in the field face is defining which fusions are driving factors and which ones are passenger events.\n\nOne of the key themes of the meeting was the ability to target transcription factors using drugs. Traditionally, transcription factors were generally considered too difficult to target, and kinase pathways or cell surface proteins have instead been popular therapeutic targets. However, transcription factors are the downstream effectors of many pathways and this, coupled with technological advances, has made them attractive and realistic drug targets. Greg Verdine presented data on 'stapled peptides'. Normally, peptides used to block or inhibit transcription factors are easily degraded or have poor solubility. Verdine's approach provides a stabilizing backbone (or 'staple') to the short peptides or proteins, thereby generating proteins that are stable and maintain correct conformation. His group has successfully developed stapled peptides that target Bcl-2, Bax and BID (proapoptotic proteins containing only the BH-3 motif). One of their inhibitors is about to enter clinical trials, and the approach provides a realistic option for targeting a specific transcription factor in cancer.\n\nJohn Rossi (Beckman Research Institute of the City of Hope, Duarte, USA) discussed his group's RNA interference method as a potential therapeutic approach. He suggested that the delivery of small interfering RNAs (siRNAs) to target cells has improved considerably, but one of the major problems is producing sufficient quantities of any specific siRNA. He also showed that they can generate siRNAs that target two different transcripts simultaneously, allowing increased effectiveness from a single siRNA. Lyubomir Vassilev (Roche Pharmaceuticals, Nutley, USA) showed data on Nutlin, an inhibitor of the interaction between the p53 tumor suppressor and its regulator Mdm2. This inhibitor binds selectively to the pocket of Mdm2, resulting in increased p53 levels. Nutlin was an effective inhibitor in cell lines and in mice with various types of cancer and was bioavailable when administered orally. These data confirm that protein-protein interactions between transcription factors and regulatory proteins can be blocked successfully with chemical inhibitors. 
Sandra Dunn (University of British Columbia, Vancouver, Canada) provided compelling evidence for an important role for the transcription factor YB-1, which is expressed in 40% of breast cancers but not in normal breast tissue. YB-1 was shown to correlate with poor prognosis in patients and in mouse models, and YB-1 transgenic mice readily developed tumors. YB-1 activity is dependent on phosphorylation, and Dunn's group has shown that the ribosomal S6 kinase (RSK) is involved. They are currently testing the effectiveness of peptides that target YB-1.\n\nRen\u00e9 Bernards (Netherlands Cancer Institute) stressed the need to stratify breast cancer patients, firstly to restrict chemotherapeutic drug administration to those who will benefit from it, and secondly to allow informed decisions regarding the choice of drug for each patient. As stated by Joe Nevins (Duke University Medical Center, Durham, USA), by finding and focusing on the specific subset of patients likely to benefit from an individual therapy, a potential failed drug can become a blockbuster drug. Nevins presented gene expression profiles resulting from exposure to the major cancer drugs; from these, his group could generate signatures that represent a likely response, or lack of response, to individual therapies. They could use this approach to predict which patients would benefit from particular drug regimens, and they are currently using these genomic screening tools in trials. Similarly, Bernards discussed gene expression signatures that predict outcome in women with breast cancer. He also showed data from a 159-gene signature of activated phosphatidylinositol 3-kinase, which was used to predict outcome in colon cancers. Bernards also showed data from a short hairpin RNA (shRNA) library screen to find genes involved in resistance to trastuzumab (known as Herceptin) in the BT474 breast cancer cell line model. By simultaneously screening 24,000 human shRNAs against 8,000 genes, his group could identify genes required for trastuzumab effectiveness; they identified and validated the phosphatase PTEN as an essential component of the trastuzumab response.\n\nClearly, the single-gene or single-protein approach is rapidly becoming redundant. The use of screens allows researchers to simultaneously assess all genes, identify thousands of regulatory sites, test a multitude of compounds and combine these different screens in multifactorial ways.
By distilling this information we can progress more rapidly towards personalized treatments.\n\ndate: 2010-04-16\ntitle: Findings of research misconduct.\n\n# Findings of Research Misconduct\n\nNotice Number: NOT-OD-10-085\n\n**Key Dates**\n\nRelease Date: April 16, 2010\n\n**Issued by**\n\nDepartment of Health and Human Services\n\nNotice is hereby given that the Office of Research Integrity (ORI) and the Assistant Secretary for Health have taken final action in the following case:\n\nEmily M. Horvath, Indiana University: Based on the Respondent's own admissions in sworn testimony and as set forth below, Indiana University (IU) and the U.S. Public Health Service (PHS) found that Ms. Emily M. Horvath, former graduate student, IU, engaged in research misconduct in research supported by National Center for Complementary and Alternative Medicine (NCCAM), National Institutes of Health (NIH), grant R01 AT001846 and Predoctoral Fellowship Award F31 AT003977-01, and National Institute of Diabetes and Digestive and Kidney Diseases (NIDDK), NIH, grant R01 DK082773-01.\n\nSpecifically, the Respondent admitted to falsifying the original research data when entering values into computer programs for statistical analysis, with the goal of reducing the magnitude of errors within groups and thereby gaining greater statistical power. The Respondent, IU, and ORI agree that the figures identified below in specific grant applications and published papers are false and that these falsifications rise to the level of research misconduct:\n\nRespondent admitted to falsifying Figures 6B, 18, 22, 23B, and 24 in NCCAM, NIH, grant application R01 AT001846-06, \"Chromium Enhanced Insulin & GLUT4 Action via Lipid Rafts,\" Jeffery S. Elmendorf, P.I. (07\/01\/04-05\/31\/20) (application was withdrawn in May 2009).\n\nRespondent admitted to falsifying Figures 6B, 8, 9D, 16D, and 21 in NIDDK, NIH, grant application R01 DK082773-01, \"Mechanisms of Membrane-Based Insulin Resistance & Therapeutic Reversal Strategies,\" Jeffrey S. Elmendorf, P.I. (3\/15\/09-01\/31\/13).\n\nRespondent admitted to falsifying Figures 2C, 5, 6D, and 11 in the publication: Horvath, E.M., Tackett, L., McCarthy, A.M., Raman, P., Brozinick, J.T., & Elmendorf, J.S. \"Antidiabetogenic Effects of Chromium Mitigate Hyperinsulinemia-induced Cellular Insulin Resistance via Correction of Plasma Membrane Cholesterol Imbalance.\" Molecular Endocrinology 22:937-950, 2008.\n\nRespondent admitted to falsifying Figure 2C in the publication: Bhonagiri, P., Pattar, G.R., Horvath, E.M., Habegger, K.M., McCarthy, A.M., Elmendorf, J.S. \"Hexosamine biosynthesis pathway flux contributes to insulin resistance via altering membrane PIP2 and cortical F-actin.\"
Endocrinology 150(4):1636-1645, 2009.

Respondent also admitted to falsifying Figures 2C, 5, 6D, 11, 13C, 15A, 16A, 17A, 18, 19C, and 20A, which are included in her thesis, "Cholesterol-dependent mechanism(s) of insulin-sensitizing therapeutics." The Ph.D. was awarded to the Respondent on December 31, 2008. Respondent was supported by Predoctoral Fellowship Award F31 AT003977 from 09/30/2006 to 09/29/2009.

Ms. Horvath has entered into a Voluntary Settlement Agreement in which she has voluntarily agreed, for a period of three (3) years, beginning on March 22, 2010:

(1) To exclude herself from serving in any advisory capacity to PHS, including but not limited to service on any PHS advisory committee, board, and/or peer review committee, or as a consultant;

(2) That any institution that submits an application for PHS support for a research project on which the Respondent's participation is proposed, or that uses her in any capacity on PHS-supported research, or that submits a report of PHS-funded research in which she is involved, must concurrently submit a plan for supervision of her duties to the funding agency for approval; the supervisory plan must be designed to ensure the scientific integrity of her research contribution; the Respondent agreed that she will not participate in any PHS-supported research until such a supervisory plan is submitted to ORI;

(3) That any institution employing her submits, in conjunction with each application for PHS funds or report, manuscript, or abstract of PHS-funded research in which the Respondent is involved, a certification that the data provided by the Respondent are based on actual experiments or are otherwise legitimately derived and that the data, procedures, analyses, and methodology are accurately reported in the application, report, manuscript, or abstract; the Respondent must ensure that the institution sends a copy of the certification to ORI; and

(4) That she will write letters, approved by ORI, to the editors of the journals that published the papers cited above, stating what she falsified or fabricated and providing corrections if she has not already done so. These letters should state that her falsifications/fabrications were the underlying reason for the retractions/corrections.

Inquiries

FOR FURTHER INFORMATION CONTACT:
Director
Division of Investigative Oversight
Office of Research Integrity
1101 Wootton Parkway, Suite 750
Rockville, MD 20852
(240) 453-8800

abstract:
# Background

Many clinicians depend solely on journal abstracts to guide clinical decisions.

# Objectives

This study aims to determine if there are differences in the accuracy of responses to simulated cases between resident physicians provided with an abstract only and those with full-text articles.
It also attempts to describe their information-seeking behaviour.

# Methods

Seventy-seven resident physicians from four specialty departments of a tertiary care hospital completed a paper-based questionnaire with clinical simulation cases and were then randomly assigned to two intervention groups: access to abstracts only, and access to both abstracts and full text. While having access to medical literature, they completed an online version of the same questionnaire.

# Findings

The average improvement across departments was not significantly different between the abstracts-only group and the full-text group (p=0.44), but when accounting for an interaction between intervention and department, the effect was significant (p=0.049), with improvement greater with full text in the surgery department. Overall, the accuracy of responses was greater after the provision of either abstracts only or full text (p<0.0001). Although some residents indicated that 'accumulated knowledge' was sufficient to respond to the patient management questions, in most instances (83% of cases) they still sought medical literature.

# Conclusions

Our findings support earlier reports that doctors will use evidence when it is conveniently available, and that current evidence improves clinical decisions. The accuracy of decisions improved after the provision of evidence. Clinical decisions guided by full-text articles were more accurate than those guided by abstracts alone, but the results seem to be driven by a significant difference in one department.

author: Alvin Marcelo; Alex Gavino; Iris Thiele Isip-Tan; Leilanie Apostol-Nicodemus; Faith Joan Mesa-Gaerlan; Paul Nimrod Firaza; John Francis Faustorilla; Fiona M Callaghan; Paul Fontelo
Correspondence to: **Dr Alex Gavino**, Office of High Performance Computing and Communications, Lister Hill National Center for Biomedical Communications, National Library of Medicine, 8600 Rockville Pike, Bethesda, MD 20894, USA
date: 2013-04
institute: 1National Telehealth Center, University of the Philippines, Manila, Philippines; 2Department of Surgery, Philippine General Hospital, Manila, Philippines; 3National Library of Medicine, National Institutes of Health, Bethesda, Maryland, USA; 4Department of Medicine, Philippine General Hospital, Manila, Philippines; 5Department of Family and Community Medicine, Philippine General Hospital, Manila, Philippines; 6Department of Emergency Medicine, Philippine General Hospital, Manila, Philippines
references:
title: A comparison of the accuracy of clinical decisions based on full-text articles and on journal abstracts alone: a study among residents in a tertiary care hospital

# Introduction

## Background

Broad estimates show that a specialist would need almost two million pieces of data to practice good medicine.1 To keep updated and apply evidence-based medicine (EBM) in practice, a physician must critically appraise full-text articles to guide their clinical decision making.2 However, owing to limited access to full-text articles, inadequate critical appraisal skills or lack of time to read the entire article, many clinicians depend solely on journal abstracts to answer clinical questions.3-8 Journal abstracts may have become the de facto resource for health professionals wanting to practice EBM because they are easy to read and easily accessible anywhere.2 5 7 9

Although abstracts are commonly used for clinical decisions, caution is warranted because they may not completely reflect the entire
article.7 Studies by Pitkin *et al*10 11 and Peacock *et al*12 identified abstracts that contained data which were different from, or missing in, the full text. High-impact-factor journals had abstracts that failed to mention harm despite its being discussed in the main article.13 A study by Berwanger *et al*8 found that abstracts of randomised controlled trials from major journals were reported with suboptimal quality. Moreover, abstracts are also subject to authors' biases, which may mislead readers.14

Efforts have been made to improve the quality and accuracy of journal abstracts, since they are often the most commonly read part of an article, if not the only part read.10 11 15 In 1987, the Ad Hoc Working Group for Critical Appraisal of the Medical Literature introduced a seven-heading format (Objectives, Design, Setting, Patients, Interventions, Measurements and Conclusion) for structured abstracts.16 Variations in structured abstracts include the eight-heading format proposed by Haynes *et al*,17 IMRAD18-20 (Introduction, Methods, Results and Discussion) and, more recently, *BMJ*'s PICO format21 (Patient, Intervention, Comparison and Outcome). Structured abstracts tend to be longer than traditional ones, but they also tend to have better content, readability, recall and retrieval.14 18 22-25 Aside from structuring, 'quality criteria' and guidelines have been developed to assist authors in preparing abstracts.26 27

Most of the research on journal abstracts focuses on their quality compared with the full text,10 11 13 22 23 or on their structure.14 20 24 Given the tendency of physicians to use abstracts for evidence, there is a need to evaluate their reliability in clinical decision making. A study by Barry *et al*5 looked at the effect of abstract format on physicians' management decisions. However, we were unable to find studies that compare clinical decisions between clinicians with access to abstracts only and those with access to full text.

The primary objective of this study was to determine whether there is a significant difference in the accuracy of the clinical decisions made on simulated cases by residents with access to full-text articles and those with access to abstracts only. The specific objectives were: (1) to compare the effect of access to abstracts only or full-text articles on the clinical decision making of residents; (2) to determine whether providing either the abstract or the full-text article increased the accuracy of clinical decisions and (3) to characterise the information-seeking behaviour and use of information resources by residents of four departments in a tertiary care hospital.

# Methods

## Ethics review

The research protocol was submitted for technical review to the Research Grants Administration Office of the University of the Philippines Manila and for ethical evaluation to the Institutional Review Board, both of which approved the study.

## Prestudy clinical case development

A physician consultant from each of four clinical departments (Surgery, Internal Medicine, Emergency Medicine and Family and Community Medicine) prepared five simulated clinical cases of varying complexity and the corresponding clinical questions to assess the residents' management decisions. They searched PubMed for at least three recent (from 2007 onwards) journal articles that were deemed relevant for each case.
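The paper does not give the consultants' actual search strategies, but the kind of date-restricted PubMed search described above can be reproduced programmatically through the NCBI E-utilities `esearch` endpoint. A minimal sketch in Python; the search term is a made-up example, not one used in the study:

```python
import requests

# Hypothetical, illustrative query; the consultants' real search terms and
# retrieval workflow are not reported in the paper.
ESEARCH = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"
params = {
    "db": "pubmed",
    "term": "community acquired pneumonia AND guideline[pt]",  # made-up example
    "datetype": "pdat",   # restrict on publication date
    "mindate": "2007",    # "recent (from 2007 onwards)"
    "maxdate": "2012",
    "retmax": "3",        # at least three articles per case
    "retmode": "json",
}
reply = requests.get(ESEARCH, params=params, timeout=30).json()
pmids = reply["esearchresult"]["idlist"]
print(pmids)  # PubMed IDs whose abstracts/full text would then be reviewed
```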
'Gold standard' answers to the clinical questions were based on the journal articles and other relevant information (applicability and appropriateness to local conditions, available resources and practice environments). A paper-based questionnaire was used for the preintervention assessment, while an online version was used during the intervention phase to allow access to journal abstracts or full-text articles.

## Study participants and setting

Seventy-seven resident physicians from the four clinical departments (above) at the Philippine General Hospital participated in the study. The Philippine General Hospital is a 1500-bed tertiary care, state-owned referral center and the teaching hospital of the University of the Philippines College of Medicine, College of Nursing, College of Dentistry and allied colleges. It is the largest government hospital, with a yearly load of 600 000 mostly indigent patients. Fourteen clinical departments offer residency and fellowship training.

## Study design

During the prestudy briefing, the residents were informed that they were to answer questions related to the case simulations and that they could access reference articles if needed during the online phase of the study. Written consent was obtained, and paper-based case simulations were given to each resident to replicate the hospital scenario of paper patient records. After reading the case simulations, they were asked to respond to five clinical questions and to indicate whether they considered a literature search necessary to answer the questions or whether accumulated knowledge28 was adequate. Accumulated knowledge was defined in this study as the residents' personal knowledge base accumulated through years of formal education, training, research of the medical literature and clinical experience. Immediately after the preintervention phase, the residents were randomly assigned to one of two groups, 'full-text' access or 'abstracts-only' access, stratified by department (a sketch of one such stratified randomization follows below). The same clinical cases and questions from the preintervention phase were presented to the residents using the online version of the questionnaire to simulate real-time access to medical literature. A 20-min time limit was allotted for each question in both the paper-based and online questionnaires. The journal material provided, whether abstracts only or full text, depended on the assigned group. If a resident assigned to the abstracts-only group clicked on the link to the full text, a prompt saying 'Sorry, full-text is unavailable' appeared. Although access to either journal abstracts or full-text articles in the online version was available to all residents, they had the option of not using any resource at all. The residents' actions regarding the use or non-use of medical literature were recorded, and mouse clicks related to requests for abstracts or full text were logged on the server. The accuracy of a response was a measure of the correctness of the resident's answer when compared with the 'gold standard' answers provided by the consultants. The same consultants who prepared the clinical cases and questions evaluated the accuracy of the residents' answers. A correct response was scored '1' and an incorrect response '0'. Incomplete responses were rated as inaccurate and scored '0'.
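The paper does not spell out the randomization mechanism, so the following is only one plausible reading of "randomly assigned ... stratified by department": shuffle each department's roster and split it between the two arms. With the reported department sizes (20, 20, 20, 17), this reproduces the 38/39 arm sizes seen in the results.

```python
import random

random.seed(2013)  # arbitrary seed for a reproducible illustration

# Department rosters; the resident IDs are hypothetical placeholders.
departments = {
    "Surgery": [f"S{i:02d}" for i in range(1, 21)],
    "Internal Medicine": [f"IM{i:02d}" for i in range(1, 21)],
    "Emergency Medicine": [f"EM{i:02d}" for i in range(1, 21)],
    "Family and Community Medicine": [f"FCM{i:02d}" for i in range(1, 18)],
}

assignment = {}
for dept, residents in departments.items():
    pool = residents[:]      # copy so the roster itself is untouched
    random.shuffle(pool)
    half = len(pool) // 2    # an odd stratum gives one arm an extra resident
    for rid in pool[:half]:
        assignment[rid] = "abstracts-only"
    for rid in pool[half:]:
        assignment[rid] = "full-text"

print(sum(v == "abstracts-only" for v in assignment.values()),  # 38
      sum(v == "full-text" for v in assignment.values()))       # 39
```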
Resident responses were anonymised in both the paper and online versions.

## Data analysis

To account for the repeated-measures nature of the data (each physician answered multiple questions), we fit mixed effects logistic regression models with department, intervention, and the interaction between department and intervention as independent variables, and accuracy of the response as the dependent variable. Unless otherwise stated, all results are based on this model. Resident year level was also considered as a predictor in the model but was not found to be significant and was dropped. We also fit a model with an interaction between intervention and department. For univariate analysis, we used nonparametric Wilcoxon two-sample tests and Fisher exact tests, as appropriate.29 All analyses were performed using R Statistical Software.30

# Results

## Participant profile

Seventy-seven residents from the departments of Surgery (n=20), Internal Medicine (n=20), Emergency Medicine (n=20) and Family and Community Medicine (n=17) participated in this study. Table 1 describes the study participants by department.

Characteristics of participating resident trainees by department

| Characteristics | Surgery, n=20 | Internal medicine, n=20 | Emergency medicine, n=20 | Family and community medicine, n=17 |
|:---|:---|:---|:---|:---|
| Mean age, years (SD*) | 29.2 (2.7) | 28.1 (1.6) | 30.8 (3.3) | 30.5 (2.4) |
| Gender, n (%) | | | | |
| Female | 5 (25) | 3 (15) | 6 (30) | 13 (76) |
| Male | 15 (75) | 17 (85) | 14 (70) | 4 (24) |
| Years in residency training, n (%) | | | | |
| 1 | 6 (30) | 7 (35) | 7 (35) | 11 (65) |
| 2 | 2 (10) | 6 (30) | 7 (35) | 1 (6) |
| >3 | 12 (60) | 7 (35) | 6 (30) | 5 (29) |

*Standard deviation.

## Comparing the effect of abstract-only and full-text access on the accuracy of clinical decision making

The first objective was to answer the question: is there a significant difference in the accuracy of responses between residents in the abstract-only group and those in the full-text group? Overall, there was no significant difference between the interventions (p = 0.44). Post-hoc power of the experiment to detect an overall difference between the interventions was low, varying from approximately 44% to 58%,31 depending on the level of correlation between the answers within each department. In a model fit to include an interaction between intervention and department, the interaction was significant, suggesting that intervention effects differed by department (p = 0.03). In that model, access to full text was significantly better than access to abstracts only (p = 0.049).

We then compared the effect of the interventions within departments in order to investigate which departments responded differently with respect to the effect of the interventions. We found no significant difference between the interventions for the Internal Medicine, Emergency Medicine and Family Medicine departments (p=0.73, 0.13 and 0.37, respectively), but there was a difference between the interventions for the Surgery department (p=0.02). The OR for each department is given in table 2. In Surgery, the full-text group had 3.6 times the odds of giving a correct answer on a case simulation compared with the abstract-only group. Note that the CI for Surgery does not include 1.0, which indicates a significant difference.
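The departmental ORs reported in table 2 below come from the mixed effects model, so they cannot be recovered from a simple contingency table; still, the arithmetic behind an odds ratio and its Wald-type 95% CI is easy to show. A sketch with hypothetical counts, not the study's data:

```python
import math
from scipy.stats import fisher_exact

# Hypothetical 2x2 table: rows = intervention arm, columns = correct/incorrect.
a, b = 43, 7    # full-text arm: correct, incorrect
c, d = 31, 19   # abstracts-only arm: correct, incorrect

or_hat = (a * d) / (b * c)              # sample odds ratio
se = math.sqrt(1/a + 1/b + 1/c + 1/d)   # SE of log(OR), Woolf's method
lo, hi = (math.exp(math.log(or_hat) + z * se) for z in (-1.96, 1.96))
print(f"OR {or_hat:.1f} (95% CI {lo:.1f} to {hi:.1f})")

# An exact test on the same table (Fisher exact was one of the paper's
# univariate tests); prints the odds ratio and p value.
print(fisher_exact([[a, b], [c, d]]))
```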
There were no differences found between the interventions for any other department. Power to detect a difference between the interventions within a specific department was low (approximately 17%) because of the reduced sample size in each group (n=10). We also investigated whether resident year was a significant predictor of clinical decision accuracy, but it was not significant in any model.

Estimates of the odds of an accurate response after the full-text intervention compared with the abstract-only intervention

| Department | OR (95% CI) |
|:---|:---|
| Surgery | 3.6 (1.3 to 10.3) |
| Internal medicine | 0.8 (0.3 to 2.2) |
| Emergency medicine | 0.5 (0.2 to 1.3) |
| Family medicine | 1.6 (0.6 to 4.7) |

## Accuracy of clinical decisions before and after access to literature search

We calculated the mean percentage of accurate responses to the simulated clinical questions before and after each intervention. Overall, mean accuracy increased from 42% to 68% for the abstract-only intervention, and from 48% to 75% for the full-text intervention. The differences between the scores before and after the two interventions were significant (p<0.0001). Table 3 shows the comparison of these percentages by department and the tests of significance.

Comparison of the average percentage of accurate responses before and after the interventions, and tests of significance (SDs in parentheses, percentage points)

| Department | N | Accurate responses before intervention, % (SD) | Accurate responses after intervention, % (SD) | Difference before/after (95% CI)* | Significance for each department and overall (p value)† | Does the amount of improvement differ by department? (p value)† |
|:---|---|:---|:---|:---|:---|:---|
| **Abstract-only (intervention 1)** | | | | | | |
| Surgery | 10 | 48 (27) | 62 (15) | 14 (0 to 40) | 0.16 | |
| Internal medicine | 10 | 36 (18) | 66 (16) | 30 (20 to 60) | 0.003 | 0.06 |
| Emergency medicine | 10 | 54 (21) | 72 (14) | 18 (0 to 40) | 0.06 | |
| Family medicine | 8 | 25 (14) | 75 (14) | 50 (40 to 60) | <0.0001 | |
| Overall | 38 | 42 (23) | 68 (15) | 27 (20 to 40) | <0.0001 | |
| **Full-text (intervention 2)** | | | | | | |
| Surgery | 10 | 60 (13) | 86 (16) | 26 (20 to 40) | 0.003 | |
| Internal medicine | 10 | 46 (16) | 68 (10) | 22 (0 to 40) | 0.03 | 0.0001 |
| Emergency medicine | 10 | 64 (25) | 64 (23) | 0 (−20 to 20) | 1.0 | |
| Family medicine | 9 | 20 (14) | 82 (21) | 62 (40 to 80) | <0.0001 | |
| Overall | 39 | 48 (24) | 75 (20) | 27 (20 to 40) | <0.0001 | |

*95% CI based on the Wilcoxon test.

†Based on values from the logistic regression mixed model.

When given full-text articles, the departments of Surgery, Internal Medicine and Family Medicine showed significant improvements (p=0.003, 0.03 and <0.0001, respectively), while there was no change for the department of Emergency Medicine (p = 1.0). The differences among the departments were significant (p < 0.0001) for the full-text intervention group. This suggests that full text was more effective in Surgery, Internal Medicine and Family Medicine, but not in the Emergency Medicine department. However, the sample size was small (n=10 or less) at this level.
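The low within-department power quoted above can be illustrated with a crude two-proportion approximation. This ignores the repeated-measures correlation that the authors' calculation accounted for, so it will not match their 17% figure exactly, and the proportions below are hypothetical:

```python
from statsmodels.stats.power import NormalIndPower
from statsmodels.stats.proportion import proportion_effectsize

# Hypothetical accuracies of 75% vs 68%, ten residents per arm in a department.
h = proportion_effectsize(0.75, 0.68)   # Cohen's h for two proportions
power = NormalIndPower().solve_power(effect_size=h, nobs1=10, alpha=0.05,
                                     ratio=1.0, alternative="two-sided")
print(f"approximate power: {power:.0%}")  # small n pushes power far below 80%
```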
The effect of the abstract-only intervention seems to have been in a similar direction for all the departments, and no significant difference in effects across departments was detected.

## Information-seeking trends of the residents

The majority of the residents (86%) indicated that the articles provided in the online version were adequate to answer their questions, and 77% indicated that they had actually read the articles. When asked whether they used abstracts only or full-text articles to answer clinical questions in actual practice, 53 of the 77 residents (69%) indicated that they relied on abstracts most of the time, while only 24 (31%) said they would read the full-text article.

Residents were asked whether or not they felt they needed extra information to answer a question correctly, and we recorded whether they clicked on the links for the abstract or the full text. We wanted to answer the question: does a perceived need for more information correlate with how often the physicians actually accessed the links for abstracts or full text? In the 157 cases where the resident indicated that no additional information was required, there were nevertheless 131 (83%) instances where literature was actually accessed (95% CI 77% to 88%). In contrast, of the 228 cases where residents indicated that they needed additional information, there were only 12 (5%) cases where they did not actually access literature (95% CI 3% to 9%). Table 4 summarises whether residents said they wanted additional information and whether they actually accessed literature.

Residents' perceived need for additional information and actual access of literature

| Perceived need for more information? | Literature not accessed | Literature accessed |
|:---|:---|:---|
| No | 26 (17%) | 131 (83%) |
| Yes | 12 (5%) | 216 (95%) |

Row percentages are displayed.
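The paper does not state which interval method produced the 131/157 bounds quoted above (83%; 95% CI 77% to 88%); a Wilson score interval, one common choice for a binomial proportion, closely reproduces them:

```python
from statsmodels.stats.proportion import proportion_confint

accessed, total = 131, 157  # literature accessed despite "no extra info needed"
lo, hi = proportion_confint(accessed, total, alpha=0.05, method="wilson")
print(f"{accessed/total:.0%} (95% CI {lo:.0%} to {hi:.0%})")  # 83% (77% to 88%)
```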
# Discussion

The main question we wanted to address in this study was whether there is a significant difference in the clinical decisions between residents who have access to abstracts only and those with access to full-text articles. Overall, our results demonstrate no difference in the accuracy of responses between residents provided with full-text articles and those with abstracts only (p=0.4415). When we considered the clustering of physicians by department, we found a difference between the two interventions (p=0.0494), but further analysis showed that this difference was observed only in the department of Surgery (p=0.016). The effects of abstracts only and full text were not significantly different for the Internal Medicine, Emergency Medicine and Family Medicine departments. However, the study had low power to detect differences between the interventions within a department.

Our study provides preliminary but useful information on the use of journal abstracts in evidence-based practice. We believe this to be the first report involving physicians that attempts to evaluate how abstracts measure up to full-text articles in guiding clinical decisions. This finding offers support for using 'consensus abstracts' (concurring and corroborating abstracts from independently conducted randomised clinical studies, and from meta-analyses and systematic reviews, that form the basis of clinical evidence) as a possible alternative when access to full text is limited or otherwise not feasible.2 However, clinicians who want to practice EBM will also find many online summaries, reviews and preappraised resources, either free (TRIP Database, ACP Journal Club, Cochrane Library, etc) or by subscription (UpToDate, 5-Minute Clinical Consult, etc); EBM websites link to these, and many are available as applications for mobile devices such as the iPhone and Android devices. Our observations set the stage for further research on the role of abstracts in evidence-based practice. Future studies may include randomised controlled trials of real-time clinical decision making encountered at the bedside.

EBM encourages the use of timely and relevant information to complement the clinical acumen of clinicians.32 We found that the average improvement in the accuracy of responses across all the departments when either abstracts or full-text articles were provided was significant (p<0.0001 for both interventions). This finding supports previous research regarding the role of medical literature in improving clinical decisions.33-36 However, when individual departments were considered, there was a significant difference between the departments in the full-text intervention group (p=0.0001). This difference appears to be due to the fact that there was no change in the accuracy of responses of Emergency Medicine residents, in contrast to the increased scores of the other residents when full text was provided. This may mean that full-text articles were beneficial to Surgery, Family Medicine and Internal Medicine residents but did not benefit Emergency Medicine residents. A possible explanation is that the Emergency Medicine department is fast paced, and residents may not have the time to read the full-text article. This interpretation is consistent with the data for the abstract-only group, where we found no significant difference between the departments in how the intervention improved the accuracy of the residents' responses.

Our study also demonstrated some trends in information seeking and the use of evidence by residents when presented with clinical questions. We observed that although residents indicated that accumulated knowledge was sufficient to answer the questions, in most instances (83.4%) they still accessed the medical literature provided. This observation supports earlier studies showing that health professionals will use evidence from the literature when it is easily accessible at the time the question arises.37

More than two-thirds of the residents (68.8%) who participated in this study claimed that they commonly used abstracts in seeking answers to their clinical dilemmas. Other studies have reported similar observations. A study by Haynes *et al*3 found that two-thirds of clinical decisions were influenced by literature even if the full text was not read.
Moreover, internists reported that for 63% of the articles they come across, only the abstracts were read.4 These proportions may be even higher among physicians in low- and middle-income countries because of more limited availability of full-text articles.

## Limitations

The small sample of residents from a tertiary government hospital in the Philippines limits the generalisability of the study to the larger medical community. Simulated clinical cases were used as surrogates for the actual clinical encounters that a resident may face. The clinical questions were specific to each discipline and not necessarily comparable to each other. The residents answered only five questions, which reduced the variation in the study. A 'learning effect' was considered as an explanation for the higher scores during the intervention phase but was deemed unlikely because of the short interval: the residents took the online version immediately after the preintervention session. Furthermore, this study does not address whether access to full text would have more impact than access to the abstract in a complex case, one in which the details of a treatment or outcome, or their magnitude or significance, might affect practice. It also does not address the impact on standard, routine or long-term practice. Finally, although there was reasonable power to detect a difference between the interventions overall, there was low power to detect differences within a department. It is possible that there were differences between the interventions in each department, but our study did not have a large enough sample to investigate the effect of the intervention at the department level.

# Conclusion

In this study, we demonstrated that clinical decisions made by residents improved when evidence, either abstracts or full-text articles, was provided. The study also indicates that some clinical questions may be simple enough to be answered quickly using accumulated knowledge, although accumulated knowledge was enhanced by the use of appropriate medical information: the residents, in spite of initially stating that accumulated knowledge was adequate to answer the clinical questions, accessed evidence anyway. This confirms previous findings that easy availability of evidence encourages the practice of evidence-based medicine. When clustered by department, clinical decisions guided by full-text articles were more accurate than those guided by abstracts alone, but this difference can be attributed largely to a significant difference in Surgery. The difference may be smaller, or absent, in the other three departments, but the analysis is not conclusive because of the limited power of this study.
Without departmental clustering, the findings suggest that the two interventions may not differ significantly.

author: Alexandre B. Hardy; Emma M. Allister; Michael B. Wheeler
Corresponding author: Michael B. Wheeler.
date: 2013-08
references:
title: Response to Comment on: Allister et al. UCP2 Regulates the Glucagon Response to Fasting and Starvation. Diabetes 2013;62:1623–1633

We thank Dr. Gylfe (1) for his interest in our work showing a role for uncoupling protein 2 (UCP2) in regulating α-cell glucagon secretion and for suggesting that this is an interesting and potentially important finding. We agree that the role and mechanism of glucose sensing in α-cells is still highly controversial and that two opposing models are promoted in the literature. Our data suggest that the presence of UCP2 in α-cells may play a role in glucose sensing, because the absence of UCP2 impaired the release of glucagon, and this was accompanied by reduced mitochondrial membrane potential hyperpolarization, increased reactive oxygen species production, a more depolarized plasma membrane potential, and lower intracellular calcium levels. Although the objective of this study was not to define the mechanisms of glucose-regulated glucagon secretion in control cells, but rather to investigate the impact of UCP2 on α-cell function, we have attempted to fit our data into the published models. The first model was suggested by Dr. Gylfe and is based on the role of glucose as an activator of Ca^2+^ sequestration in the endoplasmic reticulum, which inhibits glucagon secretion (2,3). This model also suggests a depolarizing effect of low glucose concentrations on the α-cell plasma membrane potential (2,4,5). The second model (by Rorsman and colleagues [6]) describes regulation of glucagon secretion by an ATP-sensitive potassium channel–dependent pathway. This model predicts that glucose metabolism increases intracellular ATP, closing ATP-sensitive potassium channels. Channel closure depolarizes the α-cell membrane potential to a level that inactivates Na^+^ and Ca^2+^ ion channels, thereby reducing glucagon secretion (7). In our study, we present data that fit both models. The glucose-induced changes in membrane potential recorded in isolated dispersed α-cells fit the first model: we show that low glucose concentrations caused depolarization and increased intracellular calcium levels along with enhanced secretion.
However, α-cells from the α-cell–specific UCP2 knockout mouse were more depolarized under both high and low glucose concentrations and secreted less glucagon, which fits the model of Rorsman and colleagues. In addition, our data show that low-dose diazoxide (1 µmol/L), which should hyperpolarize the membrane, increased glucagon secretion under high glucose conditions in control α-cells and could restore glucagon secretion to control levels in the absence of UCP2. Again, these data are in line with the second model of glucagon secretion and perhaps point to the α-cell being secretory within a narrow range of plasma membrane potentials. There may be differences in the data based on the use of dispersed cells versus whole islets, which were used for the electrophysiological and secretion experiments, respectively. Complex factors such as the release of paracrine molecules can regulate glucagon secretion and may play a role in the whole-islet studies. In addition, it cannot be ignored that the elevated reactive oxygen species levels (even under low glucose conditions) in the α-cell–specific UCP2 knockout mouse islets could potentially affect secretion via a channel-independent mechanism, and this area deserves more investigation. However, as pointed out by Dr. Gylfe, in the context of normal α-cells, low glucose in our hands caused depolarization.

## ACKNOWLEDGMENTS

No potential conflicts of interest relevant to this article were reported.

abstract: Meta-analysis is a popular way to try to get the most out of different studies linking genes and disease. But there are cons as well as pros.
author: Gregory A Petsko
date: 2008
institute: 1Rosenstiel Basic Medical Sciences Research Center, Brandeis University, Waltham, MA 02454-9110, USA
title: Meta-morphosis

There's an old chemistry joke, based on the *ortho, meta* and *para* nomenclature for substituents attached to a benzene ring, that goes like this:

OK, I know it's not very funny, but how many funny chemistry jokes are there? Anyway, someone reminded me of this joke the other day, and while I was busy not laughing I was also thinking about something else. Because these days, for most people in the life sciences, and I bet nearly everybody in genome biology, the prefix *meta* doesn't conjure up images of di-substituted benzene. It calls to mind the meta-analysis of data.

Meta-analysis - sometimes contracted to metanalysis - is one of those Sudoku-like fads that seem to pop up overnight and sweep the entire country in about as little time. 'Metanalysis' doesn't mean the same thing to scientists as it does to other academics. In linguistics, metanalysis is the act of breaking down a word or phrase into segments or meanings not original to it.
The term was coined by the linguist Otto Jespersen, and comes from the Greek for 'a change of breakdown'. Here's an example, courtesy of Wikipedia: in the phrase "God rest ye merry gentlemen", merry was originally a complement with rest (as in "God rest ye merry, gentlemen" - note the position of the comma - that is, "[may God] give you gentlemen a pleasant repose"). But now, by a process of metanalysis, merry is frequently construed as an ordinary adjective modifying gentlemen ("God rest ye, merry gentlemen") - and in all probability it can be relexicalized with the current sense of merry, that is, cheerful, jolly, though that is harder to be certain of. (The expression "to rest merry" and the like was once generally current, by the way.) Incidentally, I love this sort of thing, but it isn't what 'meta-analysis' means in biomedical research.

Meta-analysis for us is a technique whereby all data from all available studies of something are combined, often regardless of the relative quality of the data. The method is used by researchers to get the maximum amount of statistical information from a set of studies that might not have large enough individual sample sizes, or whose results may be of marginal statistical significance.

In a typical meta-analysis, the results of, say, four different clinical trials of a drug, or the data from five independent studies of the association of a genetic variation with a particular disease, are merged into a single statistical sample. The sample is then analyzed for the same correlation, or lack thereof.

I don't know where the practice came from, but it's of relatively recent origin - the late 1970s or so. Certainly when I was a student there wasn't an entire subfield devoted to pooling studies and reinterpreting the results. But the medical literature, and the literature of human genetics, is awash with it now.

Imagine the excitement of the person who thought this up. "Omigod! I can take other people's results, throw them together, and come up with a conclusion that in some cases will horrify them, and I can get it published in good journals - sometimes with quite high visibility if the data concern an important human disease or a dietary substance that's popular - and I don't have to do any of the really hard work! I don't have to actually do the studies. I don't have to find the patients, or collect the questionnaires, or analyze the genomes, or any of that difficult, real science. They'll do it for me, and then I can use their data for my own benefit. This is like making money with other people's money! This is how the first banker must have felt!!"

I am not a professional statistician, but I have taught statistics, and as a protein crystallographer I have to be familiar with most of the rudimentary forms of statistical analysis of data. And everything I know about the subject suggests to me that meta-analysis could lead to all kinds of trouble. One of the things I've learned is that you don't improve bad data by combining them with good data. In fact, exactly the opposite occurs: the bad data degrade the better. "When in doubt, leave it out" is what I usually tell my students - occasionally modified to "when in doubt, weight it down".
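For readers who have not seen it done, "weighting" has a standard concrete form: fixed-effect meta-analysis weights each study's estimate by the inverse of its variance, so precision (driven mostly by sample size) counts, while methodological quality does not enter at all. A minimal sketch with hypothetical log odds ratios:

```python
import math

# Hypothetical per-study effect estimates: (log odds ratio, standard error).
studies = [(0.40, 0.15), (0.10, 0.30), (-0.05, 0.25)]

weights = [1 / se**2 for _, se in studies]                 # precision weights
pooled = sum(w * est for (est, _), w in zip(studies, weights)) / sum(weights)
se_pooled = math.sqrt(1 / sum(weights))

print(f"pooled log OR = {pooled:.2f} +/- {1.96 * se_pooled:.2f} (95% CI)")
print(f"pooled OR     = {math.exp(pooled):.2f}")
```

Nothing in these weights knows whether a study was well designed; a sloppy study with many subjects can outvote a careful small one, which is exactly the worry the next paragraph turns on.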
But most meta-analyses don't apply relative weights to the studies they combine (it's pretty hard to see where they would get the weights anyway), so studies of widely varying quality are sometimes pooled as though they were equally reliable.

And even if we allow that the people who do this may get better statistical precision this way, owing to the enlargement of the sample size, I don't think statistical precision is the issue here. Adding more data may reduce the random errors, but I believe that in most biomedical and genomics studies - especially the latter - the real importance may lie in the systematic errors or, more precisely (pun intended), the systematic differences.

Meta-analysts justify combining data from different studies in part because doing the same experiment in different ways has long been a way to avoid problems caused by non-random or systematic errors. And it is true that using multiple studies that involve different questions or different experimental techniques, with different patient populations and other variations, may allow global trends to emerge from underneath spikes of systematic differences. But there is a danger there. The first meta-analysis I ever encountered, some years ago, concerned a gene I was interested in. It had been reported that a particular polymorphism in this gene was associated with a reduction in risk for certain diseases in the Han Chinese population. Several other studies, including some with much smaller sample sizes, had looked for a disease association in other ethnic groups but had failed to find any. The meta-analysis pooled all of these studies and concluded, rather definitively, that the particular variant did not confer a change in risk for any of the diseases in question. And their overall statistics certainly bore that out.

There was just one problem: I had read the original Han Chinese study rather carefully, and the work was well done, and the statistical correlation with disease risk was absolutely significant. But if you only read the meta-analysis - and I've seen it referred to repeatedly since, and even quoted from at meetings - you wouldn't know that.

Because in the age of genomics, it's not about the general population anymore. Personalized medicine is coming, and our studies of haplotypes and other natural variations make it clear that, even if we can't quite get down to the level of the individual yet in all studies, the genetic (and environmental) background of the population being evaluated can make a huge difference. Ethnic and geographical differences aren't systematic errors to be averaged out; they're essential components of the ways genes (or drugs, or nutrients) interact with the human body. The right question to ask about the study I referred to is not whether the polymorphism alters disease risk in the general population, but rather why it does so in the Han Chinese population. What are the particular combinations of other genetic and environmental factors that cause this variant to become associated with a set of diseases? That's where the really interesting science, and medicine, lies. Average different studies together willy-nilly and you run the risk of averaging out precisely those variations that provide the clues we are looking for as to how human health really works.

Now you may think that this column is all about trashing meta-analysis and, to be honest, it started out that way.
But then I had a discussion with David Altshuler, a human geneticist at Harvard Medical School and the Broad Institute at MIT, that made me rethink what was about to become a blanket condemnation. He pointed out to me another goal of meta-analysis in current human genetic studies. The key issues, as he put it, are, first, the different statistical thresholds needed in discovery science with a low prior (as compared with hypothesis testing in a well-established field) and, second, the ability in human genetics to use the rest of the genome as independent tests to assess the matching of cases and controls.

Here's how he explained the first issue: in genetic mapping, where the initial goal is to determine which genes might actually play a role in human disease risk, there is a very low *a priori* probability of any variant being associated with any risk (on the order of one in a million), so one can't say that a *p* value of 0.01 or even 0.0001 is 'absolutely significant'. He's right: most findings of that magnitude turn out to be entirely irreproducible, and are likely to be false positives. But finding consistent results by meta-analysis increases the evidence that the null hypothesis is wrong - and the confidence that there is a relationship between the variant and risk. Without that confidence, he asserts, the field was awash with wishful thinking about lots of candidate genes that reductionist biologists had found in cells and in animal models, and wanted to believe were 'genetically validated in people' - but that turned out to be noise.

If you look into it, you will find that there is a long history in genetic mapping of setting the right threshold based on the number of tests. In linkage mapping, the LOD score that indicated linkage between a gene variant and a trait was 3.0 - not *p* < 0.05 (LOD stands for logarithm to the base 10 of odds; a LOD score of 3.0 means the likelihood of observing the given pedigree if the two loci are not linked is less than 1 in 1000; the definition is spelled out in the note below). By contrast, in genome-wide human genetic studies, a *p* of less than or equal to 0.0000001 is typically required for proof of association of a gene with a trait. And while biologists often like to say that for 'candidate genes' like the one I mentioned above the threshold is more like *p* < 0.05, that threshold has a history of not always supporting reproducible discoveries. It is in gathering enough data to really get the confidence level to where it needs to be that meta-analysis has proved valuable in many cases.

The second issue is the quality of the data that go into a meta-analysis, and the ability to compare and align them. In most studies, Altshuler agrees with me that you can't know whether you are washing out good data with bad. But he makes a strong case, with which I am forced to agree, that in the current wave of genome-wide association studies, the studies often use the same phenotype definition, or the same microarray protocols, and so the data may well be comparable (clearly, it is important to check that this is the case if you want to evaluate a meta-analysis critically). When done properly, information from the rest of the genome can be used to assess the properties of the data, the matching of cases and controls, and so on, which, as he puts it, "can result in valid combination of lots of good data, rather than sloppy mashing up of lots of bad data".
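A note on the linkage threshold mentioned above. For a recombination fraction θ, the LOD score compares the likelihood of the observed pedigree under linkage with its likelihood under free recombination (θ = 1/2):

$$\mathrm{LOD}(\theta) \;=\; \log_{10}\frac{L(\text{pedigree} \mid \theta)}{L(\text{pedigree} \mid \theta = \tfrac{1}{2})}$$

A LOD of 3.0 therefore corresponds to odds of $10^{3}$:1 in favor of linkage, which is the "1 in 1000" gloss given above; the genome-wide threshold of *p* ≤ 0.0000001 plays the analogous multiple-testing role in association studies.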
Finally, sometimes you actually may want to test the hypothesis across ethnicity or age or other variables. In those instances, meta-analysis allows you to put the data together in a valid way - a way that is probably better than just reading the papers and trying to compare them.

Dr Altshuler went on to make a very valuable point, which I think belongs in this column: he feels that there is currently a big and unfortunate divide between cell and molecular biologists and people using human genetics to find disease genes. The tone of the first part of my column just reinforced his concern, I'm sorry to say. For disease research, it's certainly true that knowing up front that a gene influences the disease in humans is invaluable, and I think he's right that meta-analysis can be a powerful tool for getting that correct. (As he asked me: "Would you want to do functional work on a gene that turns out to be a false positive?")

A final point: meta-analysis is currently being used in two ways in human genetics, and I may have given the false impression that there is only one. The first is when there is a pre-existing hypothesis that needs to be tested. A good example comes from Altshuler's own work (Altshuler D, et al.: *Nat Genet* 2000, **26**:76-80). The data the authors had collected gave a positive result, but others had claimed the opposite. The studies were comparable enough that meta-analysis was able to show that the data were actually consistent and reinforced one another. In such cases meta-analysis serves as a reality check, and helps avoid possible bias in the selection of data that might support one hypothesis or another.

But careful meta-analysis can also, in some cases, be hypothesis generating, because enough studies might, as I mentioned above, allow previously undetected signals to rise above the noise. These signals can then be tested experimentally and, of course, need to be.

Dr Altshuler closed our discussion with a nice comment: "Meta-analysis," he pointed out, "is just a method. Garbage in, garbage out. But in genetic mapping of common complex diseases, where it is clear there are many different variants contributing, and where studies are expensive, combining data to learn the most is well justified and valid."

It is true that it's often the original studies that matter, that data need to be examined carefully, and that a study ideally must be accepted or rejected on its own merits. People are fond of saying that the devil is in the details. But God is in the details too. In the age of genomics, the details are ultimately what matter: the complex interplay of individual gene and genetic background, diet, environment, perhaps even state of mind, is what determines whether we are prone to this disease or that, react poorly or well to this drug or that, age well or badly. Meta-analysis can hide those details, but used properly, it can also be revealing. I guess you could say that writing this column has led to a meta-morphosis in the way I think about this subject, and that I have to back away from my meta-phor about making money with other people's money. In the right hands, meta-analysis can be a valuable tool - which is a pity, because it's so much more fun to write a column that completely trashes something.
Meta-physically speaking, of course.

author: T Pap; A Cinski; A Drynda; A Wille; A Baier; S Gay; I Meinecke
date: 2005
institute: 1Division of Molecular Medicine of Musculoskeletal Tissue, University Hospital, Münster, Germany; 2Division of Experimental Rheumatology, University Hospital Magdeburg, Germany; 3Center of Experimental Rheumatology, Department of Rheumatology, University Hospital, Zurich, Switzerland; 4Department of Traumatology, Hand and Reconstructive Surgery, University Hospital, Münster, Germany
title: Regulating apoptosis in fibroblasts

Apoptosis constitutes a highly selective way of eliminating aged and injured cells and is a key mechanism for the balanced growth and regeneration of tissues. In addition to genotoxic stress and the withdrawal of growth factors, apoptosis can be induced by death receptors. Such receptors trigger apoptosis through an approximately 90 amino acid death domain. Prominent and best-characterized members of the death domain receptor family are Fas (CD-95/Apo-1) and the p55 TNF receptor (TNFRI). Alterations in receptor-mediated apoptosis may result in changes of tissue homeostasis and have been found in a variety of malignancies as well as in the inflamed and hyperplastic synovium in rheumatoid arthritis. Accumulating evidence suggests that the activation of rheumatoid arthritis synovial fibroblasts (RASF) is associated with alterations in apoptosis, especially at sites of invasion into cartilage and bone. Specifically, it has been demonstrated that RASF are less susceptible to Fas-induced cell death than osteoarthritis synovial fibroblasts but show higher expression levels of Fas.

Recent studies have shown that both cytokine-dependent and cytokine-independent mechanisms contribute to the resistance of RASF to Fas-induced apoptosis. Thus, tumor necrosis factor alpha (TNF-α), a major inflammatory cytokine in the rheumatoid synovium, fails to induce apoptosis in RASF but reduces the susceptibility of these cells to Fas-mediated cell death through the induction of the transcription factor NF-κB. Providing a link between altered apoptosis and cartilage destruction, we have shown that overexpression of TIMP-3 through gene transfer not only reduces the invasiveness of RASF but also modulates the apoptosis-inhibiting effects of TNF-α. RASF overexpressing TIMP-3 are strongly sensitized to Fas/CD95-mediated cell death by TNF-α, and gene transfer of TIMP-3 inhibits the TNF-α-induced activation of NF-κB. While these effects of TNF-α on apoptosis can be found in different fibroblasts, several anti-apoptotic molecules have been demonstrated to be upregulated specifically in RASF.

In this context, we have investigated the role of the small ubiquitin-like modifier SUMO-1 in the regulation of apoptosis.
We found that increased levels of SUMO-1 in RASF, but not in osteoarthritis synovial fibroblasts, were associated with a reduced susceptibility of RASF to Fas-induced apoptosis. Using small interfering (si)RNA to knock down the expression of SUMO-1, as well as retroviral gene transfer of SUMO-1, we could establish a functional relationship between the expression of SUMO-1 and the resistance of RASF to apoptosis. Moreover, gene transfer of the nuclear SUMO protease SENP1 demonstrated that, rather than interacting directly with the Fas-associated death domain, SUMO-1 inhibits apoptosis by recruiting pro-apoptotic molecules such as DAXX into nuclear PML bodies, where they cannot exert their pro-apoptotic effects.

Taken together, there is growing evidence that in activated fibroblasts, such as RASF, there is a close association between anti-apoptotic and destructive pathways. In addition to cytokine-dependent inflammatory mechanisms, the intrinsic upregulation of anti-apoptotic molecules contributes to the resistance of RASF to apoptosis. Post-translational modification of nuclear proteins by SUMO-1 appears to constitute an important mechanism of apoptosis regulation. Therefore, the inhibition of molecules that confer the resistance of RASF to apoptosis constitutes a most interesting therapeutic target.

abstract: The primary associations of the HLA class II genes, *HLA-DRB1* and *HLA-DQB1*, and the class I genes, *HLA-A* and *HLA-B*, with type 1 diabetes (T1D) are well established. However, the role of polymorphism at the *HLA-DRB3*, *HLA-DRB4*, and *HLA-DRB5* loci remains unclear. In two separate studies, one of 500 case subjects and 500 control subjects and one of 366 DRB1\*03:01–positive samples from selected multiplex T1D families, we used Roche 454 sequencing with Conexio Genomics ASSIGN ATF 454 HLA genotyping software to analyze sequence variation at these three *HLA-DRB* loci. Association analyses were performed on the two-locus HLA-DRB haplotypes (*DRB1*-*DRB3*, -*DRB4*, or -*DRB5*). Three common *HLA-DRB3* alleles (\*01:01, \*02:02, \*03:01) were observed. DRB1\*03:01 haplotypes carrying DRB3\*02:02 conferred a higher T1D risk than DRB1\*03:01 haplotypes carrying DRB3\*01:01: among DRB1\*03:01/\*03:01 homozygotes, the odds ratio (OR) was 3.4 (95% CI 1.46–8.09) for those with two DRB3\*01:01 alleles, compared with 25.5 (3.43–189.2) for those carrying one or two DRB3\*02:02 alleles (*P* = 0.033). For DRB1\*03:01/\*04:01 heterozygotes, however, the *HLA-DRB3* allele did not significantly modify the T1D risk of the DRB1\*03:01 haplotype (OR 7.7 for \*02:02; 6.8 for \*01:01).
These observations were confirmed by sequence analysis of *HLA-DRB3* exon 2 in a targeted replication study of 281 informative T1D family members and 86 affected family-based association control (AFBAC) haplotypes. The frequency of DRB3\\*02:02 was 42.9% in the DRB1\\*03:01\/\\*03:01 patients and 27.6% in the DRB1\\*03:01\/\\*04 (*P* = 0.005) compared with 22.6% in AFBAC DRB1\\*03:01 chromosomes (*P* = 0.001). Analysis of T1D-associated alleles at other HLA loci (*HLA-A, HLA-B*, and *HLA-DPB1*) on DRB1\\*03:01 haplotypes suggests that DRB3\\*02:02 on the DRB1\\*03:01 haplotype can contribute to T1D risk.\nauthor: Henry A. Erlich; Ana Maria Valdes; Shana L. McDevitt; Birgitte B. Simen; Lisbeth A. Blake; Kim R. McGowan; John A. Todd; Stephen S. Rich; Janelle A. Noble; Corresponding author: Henry A. Erlich, .\ndate: 2013-07\nreferences:\ntitle: Next Generation Sequencing Reveals the Association of DRB3\\*02:02 With Type 1 Diabetes\n\nThe Type 1 Diabetes Genetics Consortium (T1DGC) (1) is an international collaboration that has collected thousands of multiplex family samples, as well as case and control samples, and has carried out linkage and association analysis for genome-wide single nucleotide polymorphisms (SNPs), candidate gene SNPs, and major histocompatibility complex (MHC) SNPs, as well as for alleles and haplotypes at the highly polymorphic HLA class I and class II genes (2\u20137). More than 40 different loci have been associated with T1D (3); however, linkage and association analyses have identified the HLA region as the major genetic determinant of T1D risk. The most strongly T1D-associated MHC markers are defined by the HLA-DRB1, -DQA1, and -DQB1 haplotypes (4), although alleles at other HLA loci, notably *HLA-A* and *HLA-B*, as well as *HLA DPB1* (5\u201311) and other non-HLA loci across the genome, also contribute to T1D risk (3).\n\nAlthough the association of *HLA-DRB1* alleles with T1D is well established, the role of polymorphism at the *HLA-DRB3*, *HLA-DRB4*, and *HLA-DRB5* loci has been studied less frequently, partly due to technical issues in genotyping. Most high-resolution genotyping strategies for *HLA-DRB1* depend on DRB1-specific PCR primers to minimize confounding signals from coamplified secondary DRB loci. All copies of chromosome 6 have a DRB1 locus, and most, but not all, have a functional second DRB locus. This second DRB locus is DRB3 for DRB1\\*03, \\*11, \\*12, \\*13, and \\*14 haplotypes, DRB4 for DRB1\\*04, \\*07, and \\*09 haplotypes, and DRB5 for DRB1\\*15 and \\*16 haplotypes. DRB1\\*01, DRB1\\*08, and DRB1\\*10 haplotypes do not have a second DRB locus.\n\nThe clonal sequencing property of next generation sequencing systems, such as the Roche 454 GS FLX and GS Junior Systems, allows the use of generic DRB primers to coamplify and sequence exon 2 from DRB1, DRB3, DRB4, and DRB5 loci (12\u201314). We have used the Roche 454 amplicon sequencing system with \"fusion primers\" containing the 454 adaptor (A and B) sequences and 10 base multiplex ID tags (MIDs) to amplify and sequence exon 2 from different individuals (13,14). 
The genotyping software consolidates these clonal sequence readings, sorts them to individual DRB loci and to individual samples, and compares them to the IMGT\/HLA Sequence Database to assign specific genotypes at the *HLA-DRB1*, *HLA-DRB3*, *HLA-DRB4*, and *HLA-DRB5* loci.\n\nTo assess the potential role of these secondary HLA-DRB loci in T1D, association analyses were carried out at the two DRB locus haplotype levels, focusing on the DRB1\\*03:01 haplotypes bearing DRB3\\*01:01 or DRB3\\*02:02. The role of *HLA-DRB3* polymorphism in T1D risk was initially evaluated in a case\/control study and then examined in a targeted replication study of informative patients from multiplex HLA-genotyped families, allowing analysis of HLA haplotypes and the potential role of T1D-associated alleles at other HLA loci.\n\n# RESEARCH DESIGN AND METHODS\n\n## Subjects.\n\nThe initial set of DNA samples from 500 case and 500 control subjects was provided by the JDRF\/Wellcome Trust Diabetes and Inflammation Laboratory (case subjects) and the British 1958 Birth Cohort (control subjects) from a study described previously (15).\n\nTo test the hypothesis generated in the case\/control study, DNA samples from the T1DGC and the Human Biological Data Interchange (HBDI) multiplex family collections were used. Samples from 280 T1D-affected family members were selected if they had a DRB1\\*03 haplotype. Samples were selected based on the known T1D risk DRB1 genotypes, DRB1\\*03:01\/DRB1\\*03:01 (*n* = 54 T1D subjects; 105 independent DRB1\\*03:01 haplotypes), DRB1\\*03:01\/DRB1\\*04:01 (*n* = 154 DRB1\\*03:01 haplotypes), and DRB1\\*03:01\/DRB1\\*04:04 (*n* = 73 DRB1\\*03:01 haplotypes), with the aim of testing the hypothesis that DRB3\\*02:02 and \\*01:01 alleles may confer differential T1D risk. Parent samples with a DRB1\\*03:01 allele known not to be transmitted to an affected T1D patient were also selected as control subjects from the T1DGC and the HBDI collections (affected family-based association controls \\[AFBACs\\], *n* = 115 haplotypes) (16). These AFBAC control haplotypes were used to determine population frequencies of DRB3\\*02:02 and DRB3\\*01:01 alleles on DRB1\\*03:01 haplotypes.\n\n## HLA genotyping using next generation sequencing (Roche 454).\n\nHLA sequence data were generated on the Roche 454 GS FLX and GS Junior Systems and analyzed using Conexio Genomics HLA ASSIGN ATF genotyping software to interpret the sequence files as HLA genotypes (13,14). Amplicons were generated from genomic DNA using DRB generic exon 2 454 fusion primers. The 454 fusion primers consist of a locus-specific sequence on the 3\u2032 end, a 10-bp MID tag, and an \"A\" or \"B\" 454-specific primer sequence on the 5\u2032 end. The MID tag serves as a sample barcode recognized by Conexio ASSIGN ATF genotyping software.\n\nAmplicons were purified with AMPure beads (Becton Dickinson, Franklin Lakes, NJ), quantified using the Quant-iT PicoGreen dsDNA reagent (Life Technologies, Foster City, CA), and mixed with capture beads after dilution. Individual DRB exon 2 amplicon molecules captured by these beads were amplified in an emulsion PCR amplification and DNA-containing beads subsequently analyzed by pyrosequencing to obtain sequence readings originating from a single molecule (12\u201314). Sequencing on the GS FLX and GS Junior System was performed, as described (13,14). 
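For readers unfamiliar with this consolidation step, a minimal sketch of the barcode-and-locus binning logic follows. The MID tags, the locus-diagnostic motifs, and the read are invented placeholders; the actual ASSIGN ATF software matches full reads against IMGT\/HLA allele sequences rather than short motifs.

```python
# Toy sketch: bin clonal 454 reads to samples by their 10-bp MID barcode
# and to a DRB locus by a short diagnostic exon 2 motif. All sequences
# here are hypothetical placeholders.

MIDS = {"ACGAGTGCGT": "patient_01", "ACGCTCGACA": "patient_02"}
LOCUS_MOTIF = {"GAGTACTCTA": "DRB1", "GAGTACGTTC": "DRB3", "GAGTTCCTGA": "DRB4"}

def sort_read(read):
    """Return (sample, locus) for one read, or None if the MID is unknown."""
    sample = MIDS.get(read[:10])        # barcode = first 10 bases
    if sample is None:
        return None                     # unknown MID: discard read
    motif = read[10:20]                 # toy locus-diagnostic window
    return sample, LOCUS_MOTIF.get(motif, "unassigned")

print(sort_read("ACGAGTGCGT" + "GAGTACGTTC" + "ACCGGATCTG"))
# -> ('patient_01', 'DRB3')
```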
HLA genotypes were assigned to samples using Conexio ASSIGN ATF genotyping software, as described (13,14).

## Statistical analysis.

DRB3 allele frequencies on DR3 haplotypes in DR3 homozygotes, DR3\/DR4 heterozygotes, and AFBACs were compared using a Fisher exact test when the overall sample size was less than 50 and a Pearson \u03c7^2^ statistic otherwise.

# RESULTS

## Case\/control study.

A total of 500 case and 500 control DNA samples from the Wellcome Trust\/JDRF Diabetes Inflammation Laboratory collection were amplified with 454 DRB fusion primers containing 32 MID tags and sequenced in two GS FLX System runs using PicoTiterPlates fitted with 16-region gaskets. The long read lengths of \>400 bp spanned the amplicon in both directions and allowed setting phase (haplotyping) for the polymorphisms within exon 2. This provided, in most cases, unambiguous genotype assignments for *HLA-DRB1*, *HLA-DRB3*, and *HLA-DRB5*. For *HLA-DRB4*, however, several different genotypes were consistent with the observed sequence reads (ambiguity string). (The *HLA-DRB4* exon 2 sequence reads were identical for the DRB4\*01:01, DRB4\*01:03, and DRB4\*01:06 alleles.) This DRB4 exon 2 sequence was present on all DRB1\*04, \*07, and \*08 haplotypes; thus, the role of DRB4 alleles for association with T1D or for defining specific linkage disequilibrium (LD) patterns could not be evaluated.

For *HLA-DRB3* and *HLA-DRB5*, the allele assignments were unambiguous and, as expected, a pattern of very strong LD between *HLA-DRB1* and the secondary DRB locus was observed. The T1D association analyses were performed on the two-locus haplotypes (Table 1). In these data, DRB1\*15:01 was linked exclusively to DRB5\*01:01, and although the numbers were small (*n* = 6), DRB1\*16:01 was linked exclusively to DRB5\*02:02. Three common DRB3 alleles (\*01:01, \*02:02, \*03:01) were observed. Of those haplotypes carrying a DRB3 locus, most DRB1 alleles were linked to a unique DRB3 allele in both case and control subjects (Table 1). The DRB1\*03:01 and the DRB1\*13:01 haplotypes, however, were linked to one of two different DRB3 alleles (\*01:01 or \*02:02).

DRB1-DRB3 and DRB1-DRB5 haplotype frequencies in T1D case and control subjects

![](2618tbl1)

Thus, the role of DRB3\*01:01 versus DRB3\*02:02 on DRB1\*03:01 haplotypes could be evaluated. DRB1\*03:01 haplotypes carrying DRB3\*02:02 had a nominally higher risk for T1D; this difference in T1D risk was observed in the DRB1\*03:01\/\*03:01 homozygotes: those with two DRB3\*01:01 alleles had an odds ratio \[OR\] of 3.4 (95% CI 1.46\u20138.09), compared with 25.5 (3.43\u2013189.2) for those with one or two DRB3\*02:02 alleles (*P* = 0.033; Table 2). For DRB1\*03:01\/\*04:01 heterozygotes, however, there was no difference in the T1D risk between DRB1\*03:01 haplotypes distinguished by the DRB3 allele (OR 7.7 vs. 6.8; *P* = 0.29; Table 2).
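Odds ratios and intervals of this kind follow from 2x2 case/control counts via the standard log-OR Wald interval. The sketch below uses hypothetical counts, chosen only so the point estimate lands near the OR of 3.4 reported for the DR3/DR3 stratum; the published CI reflects the actual counts.

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """OR and Wald 95% CI for a 2x2 table:
    a = exposed cases, b = exposed controls,
    c = unexposed cases, d = unexposed controls."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Hypothetical counts for illustration only.
print(odds_ratio_ci(12, 4, 30, 34))  # OR = 3.4 plus its Wald 95% CI
```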
That the apparent difference in risk for DRB1\*03:01 haplotypes bearing DRB3\*02:02 versus DRB3\*01:01 is not evident in DRB1\*03:01\/DRB1\*04 heterozygotes may reflect the very high risk associated with this genotype and attributed, in part, to the *trans*-complementing DQ-\u03b1 (\*05:01) or DQ-\u03b2 (\*03:02) heterodimer (4).

Association of DRB1\*03:01\/03:01 and DRB1\*03:01\/04:01 genotypes with T1D stratified by DRB3 genotype

![](2618tbl2)

Although many samples were sequenced in this case\/control study, the number of informative haplotypes (DRB1\*03:01) and genotypes was limited; therefore, the statistical power of the association with DRB3\*02:02 was very modest. Access to the T1DGC collection of HLA-genotyped families allowed selective targeting of informative genotypes to directly address the hypothesis of an effect of DRB3\*02:02 in DRB1\*03:01\/\*03:01 homozygotes and replicate the results of the case\/control study.

## Targeted replication study of informative samples from the T1DGC family collection.

To further evaluate the putative role of *HLA-DRB3* polymorphism on DRB1\*03:01 haplotype risk, a panel of DRB1\*03:01\/\*03:01 homozygotes (*n* = 54 T1D family members, 105 nonshared chromosomes) and DRB1\*03:01\/DRB1\*04-DQB1\*03:02 (*n* = 227) heterozygous T1D members were selected from the T1DGC families. Of the DRB1\*03:01\/DRB1\*04 subjects, 154 were \*04:01 and 73 were \*04:04. The distribution of DRB3 alleles in these T1D subjects is reported in Table 3. The frequency of the DRB3\*02:02 allele in the DRB1\*03:01\/\*03:01 T1D members was significantly greater than in those with DRB1\*03:01\/\*04-DQB1\*03:02 (42.9 vs 27.6%, *P* \< 0.005), consistent with the observations in the previous case\/control study (Table 2). The distribution of DRB3 alleles on control DRB1\*03:01 chromosomes was evaluated by examining the nontransmitted AFBAC (16) DRB1\*03:01 chromosomes from heterozygous parents. The frequency of DRB3\*02:02 in those chromosomes not transmitted to a T1D-affected child was 22.6%, significantly less than the 42.9% (*P* = 0.001) observed in the DRB1\*03:01\/DRB1\*03:01 homozygous patients but not significantly different from the frequency in DRB1\*03:01\/DRB1\*04:01 heterozygous T1D patients (27.3%).

Comparison of distribution of DRB3 alleles on DR3 haplotypes in control subjects (AFBAC), DRB1\*03:01 homozygous patients, and DRB1\*03:01\/DRB1\*04 heterozygous patients

![](2618tbl3)

## Does the DRB3\*02:02 allele modify risk or is it simply a marker for a high-risk DRB1\*03:01 haplotype?

On the basis of these two studies, the DRB3\*02:02 allele appears to mark a higher-risk DRB1\*03:01 haplotype than the DRB1\*03:01-DRB3\*01:01 haplotype, and the effect of this high-risk haplotype can be discerned primarily in the DRB1\*03:01\/\*03:01 homozygotes. T1D risk heterogeneity in DRB1\*03:01 haplotypes has been previously reported with HLA-A\*30:02, HLA-B\*18:01, and DPB1\*02:02 alleles distinguishing the higher-risk from the lower-risk DRB1\*03:01 haplotypes (17). To investigate whether the DRB3\*02:02 allele might contribute to T1D risk or simply mark a high-risk DRB1\*03:01 haplotype in which HLA class I (or other) alleles modify risk, we examined the distribution of these and other T1D-associated alleles at other HLA loci on these two (DRB3\*01:01 and DRB3\*02:02) DRB1\*03:01 haplotypes.
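The frequency contrasts above reduce to tests on 2x2 haplotype counts, following the decision rule stated in the statistical analysis section. The sketch below assumes scipy is available; the counts are back-calculated approximately from the quoted percentages and haplotype totals (45 of 105 is about 42.9%, 26 of 115 about 22.6%) and should be read as illustrative.

```python
# Sketch of the paper's stated rule: Fisher's exact test when the overall
# sample size is below 50, Pearson chi-square otherwise.
from scipy.stats import chi2_contingency, fisher_exact

def compare_groups(table):
    n = sum(sum(row) for row in table)
    if n < 50:
        _, p = fisher_exact(table)
        return "Fisher exact", p
    chi2, p, _, _ = chi2_contingency(table, correction=False)
    return "Pearson chi-square", p

# rows: DR3/DR3 patients, AFBAC controls; cols: DRB3*02:02, DRB3*01:01
# (approximate counts reconstructed from the quoted frequencies)
print(compare_groups([[45, 60], [26, 89]]))  # p close to the quoted 0.001
```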
The increase of DRB3\\*02:02 on DRB1\\*03:01 haplotypes in DRB1\\*03:01\/\\*03:01 case subjects versus DRB1\\*03:01\/DRB1\\*04 case subjects and AFBAC control subjects could reflect LD with alleles at other high-risk loci. The analysis of the case subjects selected from T1DGC families with informative HLA-DRB1 genotypes included parents; thus, haplotypes across the HLA region could be determined.\n\nEight locus haplotypes were phased and assigned based on familial inheritance. In the absence of parental genotyping data for the DRB3 locus, only a fraction of these could be assigned phase unambiguously with regard to the remaining eight loci. In total, 327 DRB1\\*03:01 haplotypes were used for this analysis of extended HLA haplotypes. The DRB1\\*03:01 haplotype counts in the three groups (AFBAC, DRB1\\*03:01\/DRB1\\*04, DRB1\\*03:01\/DRB1\\*03:01) for selected DPB1, HLA-B, -C, and -A alleles are reported in Table 4<\/a> and [Supplementary Table 1](http:\/\/diabetes.diabetesjournals.org\/lookup\/suppl\/doi:10.2337\/db12-1387\/-\/DC1). These alleles were selected based on previously published reports of T1D association. Table 4<\/a> and [Supplementary Table 1](http:\/\/diabetes.diabetesjournals.org\/lookup\/suppl\/doi:10.2337\/db12-1387\/-\/DC1) compare the distribution of DRB3\\*01:01 and \\*02:02 alleles on DRB1\\*03:01 haplotypes bearing various high-risk alleles at other HLA loci in DRB1\\*03:01\/\\*03:01 versus DRB1\\*03:01\/\\*04 patients.\n\nDistribution of DRB3\\*02:02 and \\*01:01 alleles on the extended DRB1\\*03:01-DPB1\\*03:01 haplotype\n\n![](2618tbl4)\n\nWe note that the alleles A\\*01:01, B\\*08:01, C\\*07:02, and DPB1\\*01:01 are in very strong LD with DRB3\\*01:01. These alleles on this extended haplotype mark a \"lower-risk\" DRB1\\*03:01 haplotype. In addition, the LD between alleles on the extended DRB1\\*03:01 haplotype, A\\*30:02-B\\*18:01-DRB1\\*03:01-DRB3\\*02:02-DPB1\\*02:02 is sufficiently strong that it does not permit evaluation of the independent contribution of DRB3\\*02:02. One haplotype, the DRB1\\*03:01-DPB1\\*03:01, however, is informative; DRB1\\*03:01 haplotypes carrying both DPB1\\*03:01 and DRB3\\*02:02 are significantly increased (*P* = 0.0006) in DRB1\\*03:01\/\\*03:01 versus DRB1\\*03:01\/\\*04 case subjects compared with those who carry DRB3\\*01:01 (Table 4<\/a>). This effect cannot be explained by the presence of A\\*30:02 or B\\*18:01, which are also markers of high-risk DRB1\\*03:01 haplotypes on these 16 DPB1\\*03:01-DRB3\\*02:02 haplotypes. Only 4 of these 16 haplotypes carry A\\*30:02, and of the 9 DPB1\\*03:01-DRB3\\*02:02 haplotypes that also carry B\\*18:01, 5 are found in DR3\/DR4 and 4 in DR3\/DR3 case subjects. These observations, taken together, suggest that the DRB3\\*02:02 allele may contribute to T1D risk rather than simply marking a high-risk DRB1\\*03:01 haplotype.\n\nDRB1\\*13:01 is the only other DRB1 haplotype that can carry DRB3\\*01:01 or DRB3\\*02:02 (Table 1<\/a>). We note that the OR of the DRB1\\*13:01-DRB3\\*02:02 haplotype is nominally higher than the DRB1\\*13:01-DRB3\\*01:01 haplotype (0.29 vs. 0.15; Table 1<\/a>). However, the CIs overlap, indicating that much larger sample sizes will be necessary to evaluate the risk of these two haplotypes.\n\n# DISCUSSION\n\nLong-read next generation clonal sequencing allows genotyping the secondary HLA-DRB loci (*HLA-DRB3*, *HLA-DRB4*, and *HLA-DRB5*) as well as *HLA-DRB1* by using DRB generic 454 fusion primers for exon 2. 
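The "very strong LD" statements in this analysis can be quantified from phased two-locus haplotype counts. A minimal sketch follows, with invented counts standing in for a nearly fixed pair such as B\*18:01 and DRB3\*02:02 on DRB1\*03:01 chromosomes.

```python
# D, D' and r^2 for two biallelic loci from phased haplotype counts.
# Counts are hypothetical.

def ld_stats(n_ab, n_aB, n_Ab, n_AB):
    """Lower-case letters denote the allele of interest at each locus."""
    n = n_ab + n_aB + n_Ab + n_AB
    p_a = (n_ab + n_aB) / n          # frequency of allele a at locus 1
    p_b = (n_ab + n_Ab) / n          # frequency of allele b at locus 2
    d = n_ab / n - p_a * p_b
    if d >= 0:
        d_max = min(p_a * (1 - p_b), (1 - p_a) * p_b)
    else:
        d_max = min(p_a * p_b, (1 - p_a) * (1 - p_b))
    r2 = d * d / (p_a * (1 - p_a) * p_b * (1 - p_b))
    return d, d / d_max, r2

print(ld_stats(40, 1, 1, 258))  # near-complete LD: D' ~ 0.97, r^2 ~ 0.94
```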
Using the Roche 454 GS FLX and GS Junior Systems, we have used this capability to examine the role of the secondary HLA-DRB loci in T1D susceptibility. Most *HLA-DRB1* alleles are linked exclusively to a specific allele at the secondary DRB locus (Table 1<\/a>); however, DRB1\\*03:01 and DRB1\\*13:01 can carry DRB3\\*01:01 or \\*02:02. DRB1\\*03:01 haplotypes carrying DRB3\\*02:02 appear to confer higher risk than those carrying DRB3\\*01:01. The difference in T1D risk is observed in DRB1\\*03:01\/\\*03:01 homozygotes but not in DRB1\\*03:01\/DRB1\\*04-DQB1\\*03:02 heterozygotes. The very high risk for T1D associated with this heterozygous genotype has been attributed to the *trans*-complementing DQ heterodimer formed by the DQ-\u03b1 chain encoded by the DQA1\\*05:01 allele on the DRB1\\*03:01 haplotype and the DQ-\u03b2 chain encoded by the DQB1\\*03:02 on the DRB1\\*04-DQB1\\*03:02 haplotype (4). One possible explanation of the different effect of *HLA-DRB3* in this heterozygote and in the DRB1\\*03:01\/DRB1\\*03:01 homozygote is that the putative risk conferred by the *trans-*complementing DQ heterodimers in the heterozygote is sufficiently large so that the risk difference between the \"higher-risk\" and \"lower-risk\" DRB1\\*03:01 haplotypes has a minimal effect on the overall T1D risk for the heterozygote. A recent study on the effect of other MHC markers (*HLA-B*, *HLA-A*, *HLA-DPB1*, and *TNF-\u03b1*) on DRB1\\*03:01 haplotype risk reported a similar observation\u2014these markers were associated with differential risk in DRB1\\*03:01\/DRB1\\*03:01 homozygotes but not in the DRB1\\*03:01\/DRB1\\*04-DQB1\\*03:02 heterozygotes (17).\n\nDoes DRB3\\*02:02 play a role in T1D risk or does it only *mark* the higher-risk DRB1\\*03:01 haplotype? To address this question, eight locus DRB1\\*03:01 haplotypes were phased and assigned based on familial inheritance, and the distribution of the DRB3\\*01:01 and \\*02:02 alleles on extended haplotypes bearing high-risk alleles at other HLA loci was compared (Table 4<\/a> and [Supplementary Table 1](http:\/\/diabetes.diabetesjournals.org\/lookup\/suppl\/doi:10.2337\/db12-1387\/-\/DC1)). The linkage disequilibrium between A\\*30:02, B\\*18:01, DPB1\\*02:02, and DRB3\\*02:02 was so great (virtually 100%) that the effect of these alleles on this high-risk DRB1\\*03:01 haplotype could not be distinguished. Some DRB1\\*03:01-DRB3\\*02:02 haplotypes, however, *do not* carry these alleles, associated with the high-risk DRB1\\*03:01 haplotype, but still demonstrate a significant increase in DRB1\\*03:01\/\\*03:01 case subjects versus DRB1\\*03:01\/\\*04. In particular, DRB1\\*03:01 haplotypes bearing DPB1\\*03:01 can carry DRB3\\*01:01 or DRB3\\*02:02; those that carry DRB3\\*02:02 are dramatically increased among the DRB1\\*03:01\/\\*03:01 case subjects (*P* = 0.0006). These observations are consistent with the hypothesis that DRB3\\*02:02 is not simply a marker of high-risk haplotypes but, in fact, increases the risk of DRB1\\*03:01 haplotypes, particularly in DRB1\\*03:01\/\\*03:01 homozygotes.\n\nThe amino acid sequence differences between DRB3\\*01:01 and DRB3\\*02:02 are substantial (10 differences). Consequently, DRB3\\*02:02 is likely to have a different repertoire of peptide binding and presentation and may thus confer greater T1D risk than DRB3\\*01:01 in certain genotype contexts (i.e., DRB1\\*03:01\/\\*03:01 homozygotes) through altered peptide binding and T-cell repertoire. 
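The "10 differences" figure is simply a pairwise count over the aligned protein sequences. A trivial sketch is given below; the two strings are shortened, made-up stand-ins for the real DRB3\*01:01 and \*02:02 exon 2 products.

```python
def aa_differences(seq1, seq2):
    """Count positions that differ between two aligned protein sequences."""
    assert len(seq1) == len(seq2), "sequences must be aligned"
    return sum(a != b for a, b in zip(seq1, seq2))

drb3_0101 = "GDTRPRFLEYSTSECHFFNG"   # hypothetical placeholder
drb3_0202 = "GDTRPRFLELRKSECHFFNG"   # hypothetical placeholder
print(aa_differences(drb3_0101, drb3_0202))  # 3 differences in this toy pair
```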
The crystallographic structures of the DRB3\*01:01 (DR52a) and DRB3\*03:01 (DR52c) molecules have been reported recently (18,19). The P9 pocket of the DRB3\*02:02 (DR52b) differs substantially from that of DRB3\*01:01 (DR52a); in particular, it contains Tyr-37, Ala-38, Asp-57, and Tyr-60, making the pocket more accommodating to smaller, polar, or charged peptide residues. A structural model of the DRB3\*02:02 molecule with bound peptide, with the differences highlighted, is shown in [Supplementary Fig. 1](http:\/\/diabetes.diabetesjournals.org\/lookup\/suppl\/doi:10.2337\/db12-1387\/-\/DC1).

The results presented here support the conclusion that the T1D risk of a given HLA haplotype is determined by specific combinations of alleles at a variety of HLA loci with genotype-dependent effects and support a role for the DRB3 locus in T1D susceptibility conferred by the DRB1\*03:01 haplotype.

# Supplementary Material

###### Supplementary Data

## ACKNOWLEDGMENTS

This research used resources provided by the T1DGC, a collaborative clinical study sponsored by the National Institute of Diabetes and Digestive and Kidney Diseases, National Institute of Allergy and Infectious Diseases, National Human Genome Research Institute, National Institute of Child Health and Human Development, and JDRF, and was supported by U01 DK-062418. This work also was supported in part by National Institutes of Health Grant DK-61722 (J.A.N.).

No potential conflicts of interest relevant to this article were reported.

H.A.E. designed the initial study, designed the second study with the support of the T1DGC Steering Committee, and drafted the manuscript. A.M.V. provided statistical analyses and edited the manuscript. S.L.M. contributed to the manuscript and, together with L.A.B. and K.R.M., to generating sequence data. B.B.S. contributed to generating sequence data and edited the manuscript. J.A.T. provided samples for the first study and edited the manuscript. S.S.R. edited the manuscript. J.A.N. designed the second study with the support of the T1DGC Steering Committee and edited the manuscript. H.A.E. is the guarantor of this work and, as such, had full access to all the data in the study and takes responsibility for the integrity of the data and the accuracy of the data analysis.

The authors are grateful for the Wellcome Trust and JDRF support of J.A.T., and also to Neil Walker and Helen Stevens, of the JDRF\/Wellcome Trust Diabetes and Inflammation Laboratory, for providing DNA samples and HLA typing information.

# REFERENCES

abstract: The \"Spanish\" influenza pandemic of 1918\u20131919, which caused \u224850 million deaths worldwide, remains an ominous warning to public health.
Many questions about its origins, its unusual epidemiologic features, and the basis of its pathogenicity remain unanswered. The public health implications of the pandemic therefore remain in doubt even as we now grapple with the feared emergence of a pandemic caused by H5N1 or another virus. However, new information about the 1918 virus is emerging, for example, sequencing of the entire genome from archival autopsy tissues. But the viral genome alone is unlikely to provide answers to some critical questions. Understanding the 1918 pandemic and its implications for future pandemics requires careful experimentation and in-depth historical analysis.
author: Jeffery K. Taubenberger; David M. Morens
institute: Jeffery K. Taubenberger, Department of Molecular Pathology, Armed Forces Institute of Pathology, 1413 Research Blvd, Bldg 101, Rm 1057, Rockville, MD 20850-3125, USA; fax. 301-295-9507; email: 
date: 2006-01
references:
title: 1918 Influenza: the Mother of All Pandemics

> \"Curiouser and curiouser!\" cried Alice
>
> Lewis Carroll, Alice's Adventures in Wonderland, 1865

An estimated one third of the world's population (or \u2248500 million persons) were infected and had clinically apparent illnesses (*1*,*2*) during the 1918\u20131919 influenza pandemic. The disease was exceptionally severe. Case-fatality rates were \>2.5%, compared to \<0.1% in other influenza pandemics (*3*,*4*). Total deaths were estimated at \u224850 million (*5*\u2013*7*) and were arguably as high as 100 million (*7*).

The impact of this pandemic was not limited to 1918\u20131919. All influenza A pandemics since that time, and indeed almost all cases of influenza A worldwide (excepting human infections from avian viruses such as H5N1 and H7N7), have been caused by descendants of the 1918 virus, including \"drifted\" H1N1 viruses and reassorted H2N2 and H3N2 viruses. The latter are composed of key genes from the 1918 virus, updated by subsequently incorporated avian influenza genes that code for novel surface proteins, making the 1918 virus indeed the \"mother\" of all pandemics.

In 1918, the cause of human influenza and its links to avian and swine influenza were unknown. Despite clinical and epidemiologic similarities to influenza pandemics of 1889, 1847, and even earlier, many questioned whether such an explosively fatal disease could be influenza at all. That question did not begin to be resolved until the 1930s, when closely related influenza viruses (now known to be H1N1 viruses) were isolated, first from pigs and shortly thereafter from humans. Seroepidemiologic studies soon linked both of these viruses to the 1918 pandemic (*8*). Subsequent research indicates that descendants of the 1918 virus still persist enzootically in pigs. They probably also circulated continuously in humans, undergoing gradual antigenic drift and causing annual epidemics, until the 1950s. With the appearance of a new H2N2 pandemic strain in 1957 (\"Asian flu\"), the direct H1N1 viral descendants of the 1918 pandemic strain disappeared from human circulation entirely, although the related lineage persisted enzootically in pigs. But in 1977, human H1N1 viruses suddenly \"reemerged\" from a laboratory freezer (*9*).
They continue to circulate endemically and epidemically.

Thus in 2006, 2 major descendant lineages of the 1918 H1N1 virus, as well as 2 additional reassortant lineages, persist naturally: a human epidemic\/endemic H1N1 lineage, a porcine enzootic H1N1 lineage (so-called classic swine flu), and the reassorted human H3N2 virus lineage, which, like the human H1N1 virus, has led to a porcine H3N2 lineage. None of these viral descendants, however, approaches the pathogenicity of the 1918 parent virus. Apparently, the porcine H1N1 and H3N2 lineages uncommonly infect humans, and the human H1N1 and H3N2 lineages have both been associated with substantially lower rates of illness and death than the virus of 1918. In fact, current H1N1 death rates are even lower than those for H3N2 lineage strains (prevalent from 1968 until the present). H1N1 viruses descended from the 1918 strain, as well as H3N2 viruses, have now been cocirculating worldwide for 29 years and show little evidence of imminent extinction.

# Trying To Understand What Happened

By the early 1990s, 75 years of research had failed to answer a most basic question about the 1918 pandemic: why was it so fatal? No virus from 1918 had been isolated, but all of its apparent descendants caused substantially milder human disease. Moreover, examination of mortality data from the 1920s suggests that within a few years after 1918, influenza epidemics had settled into a pattern of annual epidemicity associated with strain drifting and substantially lowered death rates. Did some critical viral genetic event produce a 1918 virus of remarkable pathogenicity and then another critical genetic event occur soon after the 1918 pandemic to produce an attenuated H1N1 virus?

In 1995, a scientific team identified archival influenza autopsy materials collected in the autumn of 1918 and began the slow process of sequencing small viral RNA fragments to determine the genomic structure of the causative influenza virus (*10*). These efforts have now determined the complete genomic sequence of 1 virus and partial sequences from 4 others. The primary data from the above studies (*11*\u2013*17*) and a number of reviews covering different aspects of the 1918 pandemic have recently been published (*18*\u2013*20*) and confirm that the 1918 virus is the likely ancestor of all 4 of the human and swine H1N1 and H3N2 lineages, as well as the \"extinct\" H2N2 lineage. No known mutations correlated with high pathogenicity in other human or animal influenza viruses have been found in the 1918 genome, but ongoing studies to map virulence factors are yielding interesting results. The 1918 sequence data, however, leave unanswered questions about the origin of the virus (*19*) and about the epidemiology of the pandemic.

# When and Where Did the 1918 Influenza Pandemic Arise?

Before and after 1918, most influenza pandemics developed in Asia and spread from there to the rest of the world. Confounding definite assignment of a geographic point of origin, the 1918 pandemic spread more or less simultaneously in 3 distinct waves during an \u224812-month period in 1918\u20131919, in Europe, Asia, and North America (the first wave was best described in the United States in March 1918).
Historical and epidemiologic data are inadequate to identify the geographic origin of the virus (*21*), and recent phylogenetic analysis of the 1918 viral genome does not place the virus in any geographic context (*19*).\n\nAlthough in 1918 influenza was not a nationally reportable disease and diagnostic criteria for influenza and pneumonia were vague, death rates from influenza and pneumonia in the United States had risen sharply in 1915 and 1916 because of a major respiratory disease epidemic beginning in December 1915 (*22*). Death rates then dipped slightly in 1917. The first pandemic influenza wave appeared in the spring of 1918, followed in rapid succession by much more fatal second and third waves in the fall and winter of 1918\u20131919, respectively (Figure 1<\/a>). Is it possible that a poorly-adapted H1N1 virus was already beginning to spread in 1915, causing some serious illnesses but not yet sufficiently fit to initiate a pandemic? Data consistent with this possibility were reported at the time from European military camps (*23*), but a counter argument is that if a strain with a new hemagglutinin (HA) was causing enough illness to affect the US national death rates from pneumonia and influenza, it should have caused a pandemic sooner, and when it eventually did, in 1918, many people should have been immune or at least partially immunoprotected. \"Herald\" events in 1915, 1916, and possibly even in early 1918, if they occurred, would be difficult to identify.\n\nThe 1918 influenza pandemic had another unique feature, the simultaneous (or nearly simultaneous) infection of humans and swine. The virus of the 1918 pandemic likely expressed an antigenically novel subtype to which most humans and swine were immunologically naive in 1918 (*12**,**20*). Recently published sequence and phylogenetic analyses suggest that the genes encoding the HA and neuraminidase (NA) surface proteins of the 1918 virus were derived from an avianlike influenza virus shortly before the start of the pandemic and that the precursor virus had not circulated widely in humans or swine in the few decades before (*12**,**15**,**24*). More recent analyses of the other gene segments of the virus also support this conclusion. Regression analyses of human and swine influenza sequences obtained from 1930 to the present place the initial circulation of the 1918 precursor virus in humans at approximately 1915\u20131918 (*20*). Thus, the precursor was probably not circulating widely in humans until shortly before 1918, nor did it appear to have jumped directly from any species of bird studied to date (*19*). In summary, its origin remains puzzling.\n\n# Were the 3 Waves in 1918\u20131919 Caused by the Same Virus? If So, How and Why?\n\nHistorical records since the 16th century suggest that new influenza pandemics may appear at any time of year, not necessarily in the familiar annual winter patterns of interpandemic years, presumably because newly shifted influenza viruses behave differently when they find a universal or highly susceptible human population. Thereafter, confronted by the selection pressures of population immunity, these pandemic viruses begin to drift genetically and eventually settle into a pattern of annual epidemic recurrences caused by the drifted virus variants.\n\nIn the 1918\u20131919 pandemic, a first or spring wave began in March 1918 and spread unevenly through the United States, Europe, and possibly Asia over the next 6 months (Figure 1<\/a>). 
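The regression analyses mentioned above embody a simple molecular-clock idea: root-to-tip genetic distance grows roughly linearly with sampling year, so extrapolating the fitted line back to zero distance estimates when the lineage began circulating. The points below are invented, chosen only so the extrapolation lands in the 1915-1918 window quoted above; they are not the study's data.

```python
# Molecular-clock sketch: ordinary least squares of genetic distance on
# sampling year, extrapolated to zero distance. Points are hypothetical.

def least_squares(points):
    """Slope and intercept of y = a*x + b fitted by least squares."""
    n = len(points)
    mx = sum(x for x, _ in points) / n
    my = sum(y for _, y in points) / n
    sxy = sum((x - mx) * (y - my) for x, y in points)
    sxx = sum((x - mx) ** 2 for x, _ in points)
    a = sxy / sxx
    return a, my - a * mx

pts = [(1930, 0.042), (1947, 0.095), (1957, 0.128), (1977, 0.190)]
slope, intercept = least_squares(pts)
print("inferred origin year:", -intercept / slope)  # about 1916.7 here
```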
Illness rates were high, but death rates in most locales were not appreciably above normal. A second or fall wave spread globally from September to November 1918 and was highly fatal. In many nations, a third wave occurred in early 1919 (*21*). Clinical similarities led contemporary observers to conclude initially that they were observing the same disease in the successive waves. The milder forms of illness in all 3 waves were identical and typical of influenza seen in the 1889 pandemic and in prior interpandemic years. In retrospect, even the rapid progressions from uncomplicated influenza infections to fatal pneumonia, a hallmark of the 1918\u20131919 fall and winter waves, had been noted in the relatively few severe spring wave cases. The differences between the waves thus seemed to be primarily in the much higher frequency of complicated, severe, and fatal cases in the last 2 waves.\n\nBut 3 extensive pandemic waves of influenza within 1 year, occurring in rapid succession, with only the briefest of quiescent intervals between them, was unprecedented. The occurrence, and to some extent the severity, of recurrent annual outbreaks, are driven by viral antigenic drift, with an antigenic variant virus emerging to become dominant approximately every 2 to 3 years. Without such drift, circulating human influenza viruses would presumably disappear once herd immunity had reached a critical threshold at which further virus spread was sufficiently limited. The timing and spacing of influenza epidemics in interpandemic years have been subjects of speculation for decades. Factors believed to be responsible include partial herd immunity limiting virus spread in all but the most favorable circumstances, which include lower environmental temperatures and human nasal temperatures (beneficial to thermolabile viruses such as influenza), optimal humidity, increased crowding indoors, and imperfect ventilation due to closed windows and suboptimal airflow.\n\nHowever, such factors cannot explain the 3 pandemic waves of 1918\u20131919, which occurred in the spring-summer, summer-fall, and winter (of the Northern Hemisphere), respectively. The first 2 waves occurred at a time of year normally unfavorable to influenza virus spread. The second wave caused simultaneous outbreaks in the Northern and Southern Hemispheres from September to November. Furthermore, the interwave periods were so brief as to be almost undetectable in some locales. Reconciling epidemiologically the steep drop in cases in the first and second waves with the sharp rises in cases of the second and third waves is difficult. Assuming even transient postinfection immunity, how could susceptible persons be too few to sustain transmission at 1 point and yet enough to start a new explosive pandemic wave a few weeks later? Could the virus have mutated profoundly and almost simultaneously around the world, in the short periods between the successive waves? Acquiring viral drift sufficient to produce new influenza strains capable of escaping population immunity is believed to take years of global circulation, not weeks of local circulation. And having occurred, such mutated viruses normally take months to spread around the world.\n\nAt the beginning of other \"off season\" influenza pandemics, successive distinct waves within a year have not been reported. 
The 1889 pandemic, for example, began in the late spring of 1889 and took several months to spread throughout the world, peaking in northern Europe and the United States late in 1889 or early in 1890. The second recurrence peaked in late spring 1891 (more than a year after the first pandemic appearance) and the third in early 1892 (*21*). As was true for the 1918 pandemic, the second 1891 recurrence produced the most deaths. The 3 recurrences in 1889\u20131892, however, were spread over \>3 years, in contrast to 1918\u20131919, when the sequential waves seen in individual countries were typically compressed into \u22488\u20139 months.

What gave the 1918 virus the unprecedented ability to generate rapidly successive pandemic waves is unclear. Because the only 1918 pandemic virus samples we have yet identified are from second-wave patients (*16*), nothing can yet be said about whether the first (spring) wave, or for that matter, the third wave, represented circulation of the same virus or variants of it. Data from 1918 suggest that persons infected in the second wave may have been protected from influenza in the third wave. But the few data bearing on protection during the second and third waves after infection in the first wave are inconclusive and do little to resolve the question of whether the first wave was caused by the same virus or whether major genetic evolutionary events were occurring even as the pandemic exploded and progressed. Only influenza RNA\u2013positive human samples from before 1918, and from all 3 waves, can answer this question.

# What Was the Animal Host Origin of the Pandemic Virus?

Viral sequence data now suggest that the entire 1918 virus was novel to humans in, or shortly before, 1918, and that it thus was not a reassortant virus produced from old existing strains that acquired 1 or more new genes, such as those causing the 1957 and 1968 pandemics. On the contrary, the 1918 virus appears to be an avianlike influenza virus derived in toto from an unknown source (*17*,*19*), as its 8 genome segments are substantially different from contemporary avian influenza genes. Influenza virus gene sequences from a number of fixed specimens of wild birds collected circa 1918 show little difference from avian viruses isolated today, indicating that avian viruses likely undergo little antigenic change in their natural hosts even over long periods (*24*,*25*).

For example, the 1918 nucleoprotein (NP) gene sequence is similar to that of viruses found in wild birds at the amino acid level but very divergent at the nucleotide level, which suggests considerable evolutionary distance between the sources of the 1918 NP and of currently sequenced NP genes in wild bird strains (*13*,*19*). One way of looking at the evolutionary distance of genes is to compare ratios of synonymous to nonsynonymous nucleotide substitutions. A synonymous substitution represents a silent change, a nucleotide change in a codon that does not result in an amino acid replacement. A nonsynonymous substitution is a nucleotide change in a codon that results in an amino acid replacement. Generally, a viral gene subjected to immunologic drift pressure or adapting to a new host exhibits a greater percentage of nonsynonymous mutations, while a virus under little selective pressure accumulates mainly synonymous changes.
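To make the distinction concrete, the sketch below classifies codon changes between two aligned coding sequences. The codon table is abbreviated to the codons used here, and both sequences are invented; a real analysis would use the full standard table and count substitution opportunities as well.

```python
# Classify nucleotide substitutions as synonymous or nonsynonymous.
CODON = {"GGA": "Gly", "GGG": "Gly", "AAA": "Lys", "AGA": "Arg"}

def classify(seq1, seq2):
    syn = nonsyn = 0
    for i in range(0, len(seq1), 3):
        c1, c2 = seq1[i:i + 3], seq2[i:i + 3]
        if c1 == c2:
            continue
        if CODON[c1] == CODON[c2]:
            syn += 1          # silent change
        else:
            nonsyn += 1       # amino acid replacement
    return syn, nonsyn

# GGA->GGG is silent (both Gly); AAA->AGA replaces Lys with Arg.
print(classify("GGAAAA", "GGGAGA"))  # (1, 1)
```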
Since little or no selection pressure is exerted on synonymous changes, they are thought to reflect evolutionary distance.\n\nBecause the 1918 gene segments have more synonymous changes from known sequences of wild bird strains than expected, they are unlikely to have emerged directly from an avian influenza virus similar to those that have been sequenced so far. This is especially apparent when one examines the differences at 4-fold degenerate codons, the subset of synonymous changes in which, at the third codon position, any of the 4 possible nucleotides can be substituted without changing the resulting amino acid. At the same time, the 1918 sequences have too few amino acid differences from those of wild-bird strains to have spent many years adapting only in a human or swine intermediate host. One possible explanation is that these unusual gene segments were acquired from a reservoir of influenza virus that has not yet been identified or sampled. All of these findings beg the question: where did the 1918 virus come from?\n\nIn contrast to the genetic makeup of the 1918 pandemic virus, the novel gene segments of the reassorted 1957 and 1968 pandemic viruses all originated in Eurasian avian viruses (*26*); both human viruses arose by the same mechanism\u2014reassortment of a Eurasian wild waterfowl strain with the previously circulating human H1N1 strain. Proving the hypothesis that the virus responsible for the 1918 pandemic had a markedly different origin requires samples of human influenza strains circulating before 1918 and samples of influenza strains in the wild that more closely resemble the 1918 sequences.\n\n# What Was the Biological Basis for 1918 Pandemic Virus Pathogenicity?\n\nSequence analysis alone does not offer clues to the pathogenicity of the 1918 virus. A series of experiments are under way to model virulence in vitro and in animal models by using viral constructs containing 1918 genes produced by reverse genetics.\n\nInfluenza virus infection requires binding of the HA protein to sialic acid receptors on host cell surface. The HA receptor-binding site configuration is different for those influenza viruses adapted to infect birds and those adapted to infect humans. Influenza virus strains adapted to birds preferentially bind sialic acid receptors with \u03b1 (2\u20133) linked sugars (*27**\u2013**29*). Human-adapted influenza viruses are thought to preferentially bind receptors with \u03b1 (2\u20136) linkages. The switch from this avian receptor configuration requires of the virus only 1 amino acid change (*30*), and the HAs of all 5 sequenced 1918 viruses have this change, which suggests that it could be a critical step in human host adaptation. A second change that greatly augments virus binding to the human receptor may also occur, but only 3 of 5 1918 HA sequences have it (*16*).\n\nThis means that at least 2 H1N1 receptor-binding variants cocirculated in 1918: 1 with high-affinity binding to the human receptor and 1 with mixed-affinity binding to both avian and human receptors. No geographic or chronologic indication exists to suggest that one of these variants was the precursor of the other, nor are there consistent differences between the case histories or histopathologic features of the 5 patients infected with them. 
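The article does not name the residues involved; in the wider literature on 1918 H1 hemagglutinin the two changes are usually given as positions 190 and 225 (H3 numbering), and the toy classifier below hard-codes that as an assumption, purely to illustrate how the two cocirculating variants differ.

```python
# Toy classifier for the two 1918 receptor-binding variants discussed above.
# Residue positions (190 and 225) and the avian/human consensus residues are
# assumptions drawn from outside this article, hard-coded for illustration.

def receptor_preference(res190, res225):
    if res190 == "D" and res225 == "D":
        return "human alpha-2,6 (high affinity)"
    if res190 == "D":
        return "mixed avian/human affinity"
    return "avian alpha-2,3"

print(receptor_preference("D", "D"))  # human-type binding
print(receptor_preference("D", "G"))  # mixed-affinity variant
print(receptor_preference("E", "G"))  # avian consensus
```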
Whether the viruses were equally transmissible in 1918, whether they had identical patterns of replication in the respiratory tree, and whether one or both also circulated in the first and third pandemic waves, are unknown.\n\nIn a series of in vivo experiments, recombinant influenza viruses containing between 1 and 5 gene segments of the 1918 virus have been produced. Those constructs bearing the 1918 HA and NA are all highly pathogenic in mice (*31*). Furthermore, expression microarray analysis performed on whole lung tissue of mice infected with the 1918 HA\/NA recombinant showed increased upregulation of genes involved in apoptosis, tissue injury, and oxidative damage (*32*). These findings are unexpected because the viruses with the 1918 genes had not been adapted to mice; control experiments in which mice were infected with modern human viruses showed little disease and limited viral replication. The lungs of animals infected with the 1918 HA\/NA construct showed bronchial and alveolar epithelial necrosis and a marked inflammatory infiltrate, which suggests that the 1918 HA (and possibly the NA) contain virulence factors for mice. The viral genotypic basis of this pathogenicity is not yet mapped. Whether pathogenicity in mice effectively models pathogenicity in humans is unclear. The potential role of the other 1918 proteins, singularly and in combination, is also unknown. Experiments to map further the genetic basis of virulence of the 1918 virus in various animal models are planned. These experiments may help define the viral component to the unusual pathogenicity of the 1918 virus but cannot address whether specific host factors in 1918 accounted for unique influenza mortality patterns.\n\n# Why Did the 1918 Virus Kill So Many Healthy Young Adults?\n\nThe curve of influenza deaths by age at death has historically, for at least 150 years, been U-shaped (Figure 2<\/a>), exhibiting mortality peaks in the very young and the very old, with a comparatively low frequency of deaths at all ages in between. In contrast, age-specific death rates in the 1918 pandemic exhibited a distinct pattern that has not been documented before or since: a \"W-shaped\" curve, similar to the familiar U-shaped curve but with the addition of a third (middle) distinct peak of deaths in young adults \u224820\u201340 years of age. Influenza and pneumonia death rates for those 15\u201334 years of age in 1918\u20131919, for example, were \\>20 times higher than in previous years (*35*). Overall, nearly half of the influenza-related deaths in the 1918 pandemic were in young adults 20\u201340 years of age, a phenomenon unique to that pandemic year. The 1918 pandemic is also unique among influenza pandemics in that absolute risk of influenza death was higher in those \\<65 years of age than in those \\>65; persons \\<65 years of age accounted for \\>99% of all excess influenza-related deaths in 1918\u20131919. In comparison, the \\<65-year age group accounted for 36% of all excess influenza-related deaths in the 1957 H2N2 pandemic and 48% in the 1968 H3N2 pandemic (*33*).\n\nA sharper perspective emerges when 1918 age-specific influenza morbidity rates (*21*) are used to adjust the W-shaped mortality curve (Figure 3<\/a>, panels, A, B, and C \\[35,37\\]). Persons \\<35 years of age in 1918 had a disproportionately high influenza incidence (Figure 3<\/a>, panel A). 
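The adjustment described here is simple arithmetic: dividing age-specific death rates by age-specific clinical attack rates yields age-specific case fatality. The sketch below uses invented rates, chosen only to show how a young-adult case-fatality peak can survive the adjustment; it does not reproduce the 1918 data.

```python
# Hypothetical age-specific rates, per 1,000 population and as attack
# fractions; case-fatality = deaths / cases.

death_per_1000 = {"0-4": 10.0, "5-14": 2.0, "15-34": 8.0,
                  "35-64": 3.0, "65+": 9.0}
attack_rate    = {"0-4": 0.35, "5-14": 0.45, "15-34": 0.38,
                  "35-64": 0.22, "65+": 0.10}

for age, deaths in death_per_1000.items():
    cases_per_1000 = attack_rate[age] * 1000
    cfr = deaths / cases_per_1000
    print(f"{age:>6}: case-fatality {100 * cfr:.2f}%")
```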
But even after adjusting age-specific deaths by age-specific clinical attack rates (Figure 3<\/a>, panel B), a W-shaped curve with a case-fatality peak in young adults remains and is significantly different from U-shaped age-specific case-fatality curves typically seen in other influenza years, e.g., 1928\u20131929 (Figure 3<\/a>, panel C). Also, in 1918 those 5 to 14 years of age accounted for a disproportionate number of influenza cases, but had a much lower death rate from influenza and pneumonia than other age groups. To explain this pattern, we must look beyond properties of the virus to host and environmental factors, possibly including immunopathology (e.g., antibody-dependent infection enhancement associated with prior virus exposures \\[38\\]) and exposure to risk cofactors such as coinfecting agents, medications, and environmental agents.\n\nOne theory that may partially explain these findings is that the 1918 virus had an intrinsically high virulence, tempered only in those patients who had been born before 1889, e.g., because of exposure to a then-circulating virus capable of providing partial immunoprotection against the 1918 virus strain only in persons old enough (\\>35 years) to have been infected during that prior era (*35*). But this theory would present an additional paradox: an obscure precursor virus that left no detectable trace today would have had to have appeared and disappeared before 1889 and then reappeared more than 3 decades later.\n\nEpidemiologic data on rates of clinical influenza by age, collected between 1900 and 1918, provide good evidence for the emergence of an antigenically novel influenza virus in 1918 (*21*). Jordan showed that from 1900 to 1917, the 5- to 15-year age group accounted for 11% of total influenza cases, while the \\>65-year age group accounted for 6% of influenza cases. But in 1918, cases in the 5- to 15-year-old group jumped to 25% of influenza cases (compatible with exposure to an antigenically novel virus strain), while the \\>65 age group only accounted for 0.6% of the influenza cases, findings consistent with previously acquired protective immunity caused by an identical or closely related viral protein to which older persons had once been exposed. Mortality data are in accord. In 1918, persons \\>75 years had lower influenza and pneumonia case-fatality rates than they had during the prepandemic period of 1911\u20131917. At the other end of the age spectrum (Figure 2<\/a>), a high proportion of deaths in infancy and early childhood in 1918 mimics the age pattern, if not the mortality rate, of other influenza pandemics.\n\n# Could a 1918-like Pandemic Appear Again? If So, What Could We Do About It?\n\nIn its disease course and pathologic features, the 1918 pandemic was different in degree, but not in kind, from previous and subsequent pandemics. Despite the extraordinary number of global deaths, most influenza cases in 1918 (\\>95% in most locales in industrialized nations) were mild and essentially indistinguishable from influenza cases today. Furthermore, laboratory experiments with recombinant influenza viruses containing genes from the 1918 virus suggest that the 1918 and 1918-like viruses would be as sensitive as other typical virus strains to the Food and Drug Administration\u2013approved antiinfluenza drugs rimantadine and oseltamivir.\n\nHowever, some characteristics of the 1918 pandemic appear unique: most notably, death rates were 5\u201320 times higher than expected. 
Clinically and pathologically, these high death rates appear to be the result of several factors, including a higher proportion of severe and complicated infections of the respiratory tract, rather than involvement of organ systems outside the normal range of the influenza virus. Also, the deaths were concentrated in an unusually young age group. Finally, in 1918, 3 separate recurrences of influenza followed each other with unusual rapidity, resulting in 3 explosive pandemic waves within a year's time (Figure 1<\/a>). Each of these unique characteristics may reflect genetic features of the 1918 virus, but understanding them will also require examination of host and environmental factors.\n\nUntil we can ascertain which of these factors gave rise to the mortality patterns observed and learn more about the formation of the pandemic, predictions are only educated guesses. We can only conclude that since it happened once, analogous conditions could lead to an equally devastating pandemic.\n\nLike the 1918 virus, H5N1 is an avian virus (*39*), though a distantly related one. The evolutionary path that led to pandemic emergence in 1918 is entirely unknown, but it appears to be different in many respects from the current situation with H5N1. There are no historical data, either in 1918 or in any other pandemic, for establishing that a pandemic \"precursor\" virus caused a highly pathogenic outbreak in domestic poultry, and no highly pathogenic avian influenza (HPAI) virus, including H5N1 and a number of others, has ever been known to cause a major human epidemic, let alone a pandemic. While data bearing on influenza virus human cell adaptation (e.g., receptor binding) are beginning to be understood at the molecular level, the basis for viral adaptation to efficient human-to-human spread, the chief prerequisite for pandemic emergence, is unknown for any influenza virus. The 1918 virus acquired this trait, but we do not know how, and we currently have no way of knowing whether H5N1 viruses are now in a parallel process of acquiring human-to-human transmissibility. Despite an explosion of data on the 1918 virus during the past decade, we are not much closer to understanding pandemic emergence in 2006 than we were in understanding the risk of H1N1 \"swine flu\" emergence in 1976.\n\nEven with modern antiviral and antibacterial drugs, vaccines, and prevention knowledge, the return of a pandemic virus equivalent in pathogenicity to the virus of 1918 would likely kill \\>100 million people worldwide. A pandemic virus with the (alleged) pathogenic potential of some recent H5N1 outbreaks could cause substantially more deaths.\n\nWhether because of viral, host or environmental factors, the 1918 virus causing the first or 'spring' wave was not associated with the exceptional pathogenicity of the second (fall) and third (winter) waves. Identification of an influenza RNA-positive case from the first wave could point to a genetic basis for virulence by allowing differences in viral sequences to be highlighted. Identification of pre-1918 human influenza RNA samples would help us understand the timing of emergence of the 1918 virus. Surveillance and genomic sequencing of large numbers of animal influenza viruses will help us understand the genetic basis of host adaptation and the extent of the natural reservoir of influenza viruses. 
Understanding influenza pandemics in general requires understanding the 1918 pandemic in all its historical, epidemiologic, and biologic aspects.

Dr Taubenberger is chair of the Department of Molecular Pathology at the Armed Forces Institute of Pathology, Rockville, Maryland. His research interests include the molecular pathophysiology and evolution of influenza viruses.

Dr Morens is an epidemiologist with a long-standing interest in emerging infectious diseases, virology, tropical medicine, and medical history. Since 1999, he has worked at the National Institute of Allergy and Infectious Diseases.

# References

abstract: # Objective
To evaluate the long-term safety (up to 3\u2005years) of treatment with pegloticase in patients with refractory chronic gout.
# Methods
This open-label extension (OLE) study was conducted at 46 sites in the USA, Canada and Mexico. Patients completing either of two replicate randomised placebo-controlled 6-month trials received pegloticase 8\u2005mg every 2\u2005weeks (biweekly) or every 4\u2005weeks (monthly). Safety was evaluated as the primary outcome, with special interest in gout flares and infusion-related reactions (IRs). Secondary outcomes included urate-lowering and clinical efficacy.
# Results
Patients (n=149) received a mean\u00b1SD of 28\u00b118 pegloticase infusions and were followed for a mean of 25\u00b111\u2005months. Gout flares and IRs were the most frequently reported adverse events; these were least common in patients with a sustained urate-lowering response to treatment and those receiving biweekly treatment. In 10 of the 11 patients with a serious IR, the event occurred when uric acid exceeded 6\u2005mg\/dl. Plasma and serum uric acid levels remained \<6\u2005mg\/dl in most randomised controlled trial (RCT)-defined pegloticase responders throughout the OLE study and were accompanied by sustained and progressive improvements in tophus resolution and flare incidence.
# Conclusions
The safety profile of long-term pegloticase treatment was consistent with that observed during 6\u2005months of RCT treatment; no new safety signals were identified.
Improvements in clinical status, in the form of the flare and tophus reductions initiated during RCT pegloticase treatment, were sustained or advanced during up to 2.5\u2005years of additional treatment in patients maintaining goal range urate-lowering responses.
author: Michael A Becker; Herbert S B Baraf; Robert A Yood; Aileen Dillon; Janitzia V\u00e1zquez-Mellado; Faith D Ottery; Dinesh Khanna; John S Sundy. Correspondence to Dr Michael A Becker, 237 East Delaware Pl, Chicago, IL 60611-1713, USA
date: 2013-09
institute: 1Rheumatology Section, The University of Chicago, Chicago, Illinois, USA; 2Center for Rheumatology and Bone Research, Wheaton, Maryland, USA; 3Reliant Medical Group, Worcester, Massachusetts, USA; 4Rheumatology Section, Kaiser Permanente Medical Center, San Francisco, California, USA; 5Department of Rheumatology, Hospital General de Mexico, Mexico City, Mexico; 6Savient Pharmaceuticals, Inc., East Brunswick, New Jersey, USA; 7Department of Medicine, University of Michigan, Ann Arbor, Michigan, USA; 8Duke Clinical Research Unit, Duke University Medical Center, Durham, North Carolina, USA
references:
title: Long-term safety of pegloticase in chronic gout refractory to conventional treatment

# Introduction

Urate-lowering therapy is the mainstay of chronic gout management and aims at achieving and maintaining sub-saturating serum uric acid (SUA) concentrations, most often recommended as \u22640.36\u2005mmol\/l (\u22646\u2005mg\/dl).1 2 Long-term maintenance of goal range SUA is associated with depletion of urate crystals in synovial fluid3 4 and reductions in tophus size and flare frequency.5\u20137 Because the rates of achieving these improvements are related to the degree of urate-lowering,4 6 8 the optimal therapeutic target may be substantially lower than 6\u2005mg\/dl in patients with severe manifestations of chronic gout,5 prompting one recommendation for a target SUA of \<5\u2005mg\/dl.9

Up to 3% of the estimated eight million patients with gout in the USA10 fail urate-lowering management with standard-of-care therapies (oral xanthine oxidase inhibitors) because of drug intolerance\/contraindication or treatment refractoriness.11\u201313 Such patients are at risk of progression to increasing flare recurrences, gouty arthropathy, destructive and deforming tophi and chronic pain, frequently accompanied by impaired physical function and poor health-related quality of life. Until recently these patients had few or no effective urate-lowering options to prevent or reverse gout progression. Pegloticase, a mammalian recombinant uricase conjugated to monomethoxypoly(ethylene glycol) (mPEG), was developed as an enzymatic alternative for the treatment of patients with gout refractory to conventional oral therapies and was approved in the USA in 2010.14\u201316 When administered intravenously, pegloticase reduces the urate concentration in the intravascular space to below the limit of solubility (6.8\u2005mg\/dl).
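The two unit systems quoted above are related through the molar mass of uric acid, which is about 168.1 g/mol, so the 0.36 mmol/l and 6 mg/dl goals coincide. A one-line conversion:

```python
# mg/dl -> mmol/l for uric acid (molar mass ~168.1 g/mol):
# mg/dl * 10 gives mg/l, and dividing by g/mol gives mmol/l.

URIC_ACID_G_PER_MOL = 168.1

def mg_dl_to_mmol_l(mg_dl: float) -> float:
    return mg_dl * 10 / URIC_ACID_G_PER_MOL

print(round(mg_dl_to_mmol_l(6.0), 3))   # 0.357, the treatment goal
print(round(mg_dl_to_mmol_l(6.8), 3))   # 0.405, the solubility limit
```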
The resulting reduction in extracellular soluble urate concentration is hypothesised to favour dissolution of deposited urate crystals, resulting in progressive normalisation of body urate pools and improvements in clinical signs and symptoms of gout.17 18

The tolerability and efficacy of pegloticase treatment in patients with refractory chronic gout were demonstrated in replicate 6-month randomised double-blind placebo-controlled trials (RCTs).17 Pegloticase administered every 2 weeks (biweekly) or every 4 weeks (monthly) produced treatment responses (plasma uric acid (PUA) <6 mg/dl for ≥80% of the time during months 3 and 6) in 42% and 35% of patients, respectively, compared with 0% for patients receiving placebo. PUA normalised within 24 h of the first pegloticase infusion in all patients, but in non-responders the urate-lowering response was lost over time. Infusion-related reactions (IRs) were the most common reason for discontinuations in the RCTs (10% of biweekly treated patients and 13% of monthly treated patients).17 This report focuses on the long-term open-label extension (OLE) of the RCTs, which provided an additional 2.5 years of safety data.

# Methods

## Study design and patients

The OLE (NCT01356498) enrolled patients at 46 centres in the USA, Canada and Mexico who had completed either of the two RCTs (NCT01356498, NCT00325195).17 The OLE was conducted from December 2006 to July 2009; compliance and permission information is provided in the online supplementary material. Protocol amendments extended the OLE from 12 months to a maximum of 30 months. Figure 1 summarises the OLE study design, inclusion criteria (exclusion criteria are presented in the online supplementary material), end points and evaluations. Delays in treatment between the randomised trials and the OLE resulted from administrative requirements, such as time for Institutional Review Board approvals and site implementation. Except where specifically indicated, patients were categorised for analysis purposes according to the first dosing regimen administered during the OLE study.

Details of safety evaluations and prophylaxis regimens for IRs and flares are provided in the online supplementary material. As IRs were one of the events of interest, physical examinations and monitoring of vital signs were performed at the time of all IRs, and medical intervention was provided as appropriate. For IRs occurring during infusions, the infusion could be discontinued, slowed by half, or interrupted and later restarted at a slower rate.

Immunogenicity was assessed from serum collected every 12 weeks using a validated ELISA to detect IgG and IgM pegloticase antibodies. Antibody titres were categorised as low (≤1:2430) or high (>1:2430), consistent with the RCTs.17 Tophus complete response was defined as 100% reduction in the measured area of at least one tophus of baseline diameter ≥5 mm without growth of any other baseline tophus or appearance of any new tophus.

## Statistical analysis

Safety and efficacy end points were evaluated using descriptive statistics in the intent-to-treat population, which included all patients who received at least one dose of pegloticase during the OLE study and had follow-up data. Demographic and baseline clinical values are described in the online supplementary material.
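The responder definition used throughout this report (PUA below 6 mg/dl for at least 80% of the time during months 3 and 6) packs several conditions into one phrase, so a small illustration may help. The following is a minimal sketch, not the trial's statistical code: it assumes serial PUA values are available as (study day, mg/dl) pairs, approximates "80% of the time" by the fraction of measurements below target, and uses illustrative day windows to stand in for months 3 and 6.

```python
def is_rct_responder(pua_series, threshold=6.0, required_fraction=0.80):
    """Classify an RCT urate-lowering responder: PUA < 6 mg/dl for
    >= 80% of the measurements taken during months 3 and 6.

    pua_series: iterable of (study_day, pua_mg_dl) pairs.
    The day windows below are illustrative stand-ins for
    "months 3 and 6", not values taken from the protocol.
    """
    windows = [(57, 84), (141, 168)]  # hypothetical month-3 and month-6 spans
    in_window = [pua for day, pua in pua_series
                 if any(lo <= day <= hi for lo, hi in windows)]
    if not in_window:
        return False  # no evaluable measurements
    below_target = sum(1 for pua in in_window if pua < threshold)
    return below_target / len(in_window) >= required_fraction
```

For example, a patient with evaluable values of 4.8, 5.2, 5.9, 5.5 and 6.4 mg/dl would have 4 of 5 measurements (80%) below target and would be classified as a responder under this rule.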
# Results

## Patient disposition

A total of 225 patients were enrolled in the two RCTs and 212 patients were included in the primary efficacy analysis. Of these, 157 patients (74%) completed the RCTs and 151 (96% of completers) entered the OLE study, including 57 (97%) patients from the biweekly pegloticase group, 55 (93%) from the monthly pegloticase group and all 39 patients from the placebo group (figure 2). All patients received pegloticase in the OLE study, except for two patients who chose observation only after receiving monthly pegloticase in the RCTs. For patients treated with pegloticase in the RCTs, a higher proportion of those defined as responders than those defined as non-responders completed the OLE study, as did a higher proportion of patients allocated to placebo in the RCTs who then received biweekly pegloticase compared with those who received monthly pegloticase (figure 2). The most common reasons for discontinuing treatment during the OLE study were adverse events (AEs) in 18% (27/149) of patients and loss of urate-lowering response in 9% (13/149).

## Patients

The OLE study population had a mean age of 56.8 years (range 30–89) and 79% were men. Most patients were white (69%), African-American (13%) or Hispanic/Latino (11%). Risk factors were common; the most frequent were cardiovascular (94% of patients had at least one cardiovascular risk factor, most commonly hypertension, 72%), obesity (body mass index (BMI) ≥30 kg/m^2^, 65%) and dyslipidaemia (52%). Chronic kidney disease (creatinine clearance <1.0 ml/s, equivalent to <60 ml/min) was present in 26% of patients. Less commonly occurring risk factors and concomitant medications are described in the online supplementary material.

## Exposure to pegloticase

Patients received a mean of 28 pegloticase infusions (median 26; range 1–59) during the OLE study, with RCT responders receiving more infusions than non-responders (mean 35 vs 26). When exposure to pegloticase was pooled from the RCT and OLE studies, patients received a mean of 35 infusions (range 1–70). Thirty of 67 patients (45%) who started on monthly pegloticase switched to biweekly treatment at some point during the OLE study; only 12% (10/82) switched from biweekly to monthly treatment. Overall, patients remained in the OLE study for a mean of 25 months (median 29 months; range 0–37 months, including a mandated end-of-study observation period (no treatment) for a maximum of 6 months).

## Safety

### Adverse events (AEs)

Nearly all patients (98%) had at least one AE during the OLE study (table 1). The overall incidence of AEs did not differ between responders and non-responders from the RCTs, or for patients initially on placebo who started pegloticase biweekly versus monthly. Both gout flares and IRs occurred at lower rates with the biweekly regimen than with the monthly regimen during the OLE study (gout flares: 2.7 vs 4.7 per patient per year; IRs: 1.3 vs 2.1 per patient per year).
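The flare and IR rates just quoted are exposure-adjusted (events per patient per year). As a minimal sketch of that arithmetic, with illustrative numbers rather than study data:

```python
def events_per_patient_year(event_counts, followup_years):
    """Exposure-adjusted rate: total events divided by total
    patient-years of follow-up across the cohort."""
    total_events = sum(event_counts)
    total_years = sum(followup_years)
    if total_years == 0:
        raise ValueError("no follow-up time accrued")
    return total_events / total_years

# Three hypothetical patients followed for 2.0, 1.5 and 0.5 years
# with 6, 2 and 1 gout flares respectively:
rate = events_per_patient_year([6, 2, 1], [2.0, 1.5, 0.5])
print(f"{rate:.2f} flares per patient-year")  # 2.25
```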
Table 1 Summary of AEs during the OLE study

| AEs in the OLE study | All treated patients (N=149), n (%) |
|:---|----|
| Subjects with any AE | 146 (98) |
| Subjects with any serious AE | 51 (34) |
| Subjects with serious AEs considered related to pegloticase | 13 (9) |
| Discontinuations due to AE | 11 (7) |
| Most common AEs (incidence >10%) | |
| Gout flare | 106 (71) |
| Infusion-related reaction | 65 (44) |
| Arthralgia | 29 (20) |
| Upper respiratory tract infection | 27 (18) |
| Pain in extremity | 26 (17) |
| Back pain | 25 (17) |
| Diarrhoea | 22 (15) |
| Peripheral oedema | 21 (14) |
| Urinary tract infection | 20 (13) |
| Nausea | 17 (11) |
| Headache | 16 (11) |
| Fatigue | 15 (10) |
| Sinusitis | 15 (10) |
| Nasopharyngitis | 15 (10) |

AE, adverse event; OLE, open-label extension.

The incidence of gout flares during the OLE study was lowest for responders who had received biweekly pegloticase in the RCTs and highest for RCT non-responders to monthly pegloticase and patients treated with placebo in the RCTs. The incidence of IRs in the OLE study was lowest for RCT responders regardless of pegloticase schedule and highest for RCT non-responders to monthly pegloticase and RCT placebo patients starting monthly pegloticase. Eleven patients (7%) discontinued pegloticase due to AEs, including seven patients with IRs; six of these seven patients were RCT pegloticase non-responders or RCT placebo-treated patients. Among all patients with AEs judged as treatment-related by the investigators (66%, 99/149), only gout flares and IRs were reported in >5% of patients.

Most AEs were investigator-rated as moderate in intensity (53% of patients). Overall, 36% (54/149) of patients had severe AEs, of which 17% were treatment-related, most commonly IRs (7%) or gout flares (7%). Of note, no RCT pegloticase responder had a severe treatment-related IR or a severe gout flare. The highest rates of severe IRs and flares (26% for both; 6/23) were reported in patients treated with biweekly pegloticase during the OLE study who had received placebo in the RCTs. Assessments of haematology, clinical chemistry and urinalysis identified no significant change from baseline (except for UA) in any of the subgroups defined by response to pegloticase or pegloticase administration schedule.

Twenty-four adjudicated cardiovascular events in 21 patients were identified in the OLE study by the independent expert committee. These events occurred in responders and non-responders with no apparent relationship to the duration of treatment or time since the last pegloticase infusion. Eight of these events occurred in patients who were being observed and had not received pegloticase for >30 days.

### Serious AEs and deaths

Approximately one-third of patients (34%, 51/149) experienced a total of 106 serious AEs during the OLE study. Thirteen patients who reported serious AEs discontinued treatment. Among the 13 serious AEs considered possibly related to study drug, there were 11 IRs (one serious AE of IR was judged unlikely to be related to the study drug), one skin necrosis (severe) and one nephrolithiasis (moderate).
Among the 11 serious AEs of IR, all but one occurred when the UA values exceeded 6 mg/dl. Four deaths occurred during the OLE study, all of which were judged by the investigator as unlikely to be related to the study drug (see online supplementary material).

### IRs

During the RCTs, IRs were the second most common AE and were reported in 26% of patients receiving biweekly pegloticase and 42% of patients receiving monthly pegloticase. In the OLE, IRs were reported in 44% (60/149) of patients. The rate of IRs was lower in RCT responders (17%, 10/60) than in non-responders (52%, 26/50) and highest among patients who received placebo in the RCT (62%, 24/39). IRs were rated mild in 27% (16/60) of patients, moderate in 55% (33/60) and severe in 18% (11/60). IRs rated as severe in the OLE occurred in four patients, none of whom sustained goal-range urate lowering, and in seven patients who received placebo in the RCT. IRs were the reason cited for withdrawal from the study for 6% (9/149) of patients.

Except for three patients with IRs manifesting in the 2 h period after infusion, all IRs occurred during infusion. The most common signs and symptoms associated with IRs were musculoskeletal pain/discomfort, flushing, erythema, nausea/vomiting, dyspnoea, sweating, headache, blood pressure changes, urticaria and pruritus. All IRs resolved with supportive measures, and no patient required intubation, mechanical ventilatory support, pressors or hospitalisation. Three patients were found to have anaphylaxis based on a retrospective analysis. Symptoms included red, itching or swollen eyes, throat irritation, musculoskeletal symptoms, skin flushing, hypotension, dizziness and vomiting.

### Immunogenicity

A total of 31% (52/169) of patients treated with pegloticase had high pegloticase antibody titres (>1:2430) during the RCTs. High-titre antibodies were identified in 53% (67/127) of patients at the week 13 assessment of the OLE; this proportion had increased by the end of the study (antibody titres >1:2430 in 60%; 90/149). Only one patient had evidence of in vitro neutralising antibodies against uricase activity; this was measurable at only one time point. Consistent with findings from the RCTs,17 low titres (≤1:2430) of anti-pegloticase antibodies were less likely to be associated with loss of SUA response (see online supplementary material for supporting data).

## Efficacy

### Serum uric acid response

The concordance of PUA and SUA (collected for all patients at the same time points) for values above and below the target of 6 mg/dl was 95% in samples collected during the OLE study. The efficacy data reported here focus on SUA because of its ready accessibility in clinical settings. The overall mean±SD SUA at baseline was 10.1±1.4 mg/dl. Most responders to biweekly and monthly pegloticase in the RCTs maintained SUA <6 mg/dl throughout the OLE study (figure 3A). Among patients randomised to placebo in the RCTs, biweekly pegloticase administration in the OLE study produced greater reductions in mean SUA and a greater proportion of patients maintaining SUA <6 mg/dl than monthly pegloticase (figure 3B).
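The 95% concordance reported above is a simple agreement proportion: each paired PUA/SUA sample is classified as above or below the 6 mg/dl target, and concordance is the fraction of pairs classified the same way. A minimal sketch, assuming paired measurements in two equal-length lists (the values below are illustrative, not study data):

```python
def concordance_at_threshold(pua_values, sua_values, threshold=6.0):
    """Fraction of paired samples that fall on the same side of the
    threshold in both assays (above/below agreement)."""
    if len(pua_values) != len(sua_values) or not pua_values:
        raise ValueError("need equal-length, non-empty series")
    agree = sum((p < threshold) == (s < threshold)
                for p, s in zip(pua_values, sua_values))
    return agree / len(pua_values)

# 4 of these 5 illustrative pairs agree, giving 0.8:
print(concordance_at_threshold([5.1, 7.2, 4.0, 6.5, 5.9],
                               [5.4, 6.8, 4.2, 5.8, 5.5]))
```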
### Tophus response

Tophus burden continued to decrease during ongoing pegloticase therapy. At the end of the RCTs, a complete response in at least one tophus was reported for 40% (21/52) of patients with tophi treated with biweekly pegloticase, 21% (11/52) of those treated with monthly pegloticase and 7% (2/27) of those treated with placebo. By OLE week 13, a complete response in at least one tophus was reported for 45% (36/80) of patients with tophi at RCT inception. At the final OLE visit (week 125), 60% (56/94) of patients with evaluable tophi had a complete response. Patients with a sustained urate-lowering response to treatment were more likely to experience complete tophus resolution: a complete response was seen in 87% (32/37) of patients in the responder group and in 31% (11/36) of those designated non-responders on the basis of SUA measurements in the RCTs.

As prespecified in the protocol, each patient could have up to five measurable (and two additional) target or index tophi. Responses based on the total number of tophi were assessed to complement information on the proportion of patients with tophus complete response. After 1 year of pegloticase treatment in the OLE study, 61% (185/302) of all target tophi had completely resolved. Among patients who qualified as responders in the RCTs, a complete response of 70% (102/145) of all target tophi was achieved after 1 year of open-label treatment.

### Gout flares

Gout flares occurred in 71% of patients (106/149) during the OLE study; the mean number of flares per patient per 3-month period was 0.5 over the duration of the trial. Flare rates were highest during the first 3-month period, in which 52% of patients (78/149) flared (1.1 flares per patient). Flare rates diminished with continued treatment in the OLE among patients who were urate-lowering responders during the RCTs compared with patients who were RCT non-responders (figure 4A,B). For example, in the cohort of patients sustaining goal urate-lowering on the biweekly pegloticase OLE dose schedule, flares diminished substantially, occurring in 26% (9/35) of patients during months 1–3 of the OLE study, 9% (3/32) during months 10–12 and 3% (1/29) during months 22–24.

# Discussion

Pegloticase is a PEGylated mammalian recombinant uricase that was developed to control hyperuricaemia and its clinical manifestations in patients with refractory chronic gout and no other urate-lowering options. In the present OLE of two RCTs,17 long-term treatment with pegloticase was safe and generally well tolerated, especially in patients who had experienced sustained goal-range urate-lowering responses during blinded treatment. In addition to maintaining markedly sub-saturating urate levels, these patients showed progressive clinical benefit in gout flares and tophus reduction during pegloticase treatment for an average of 2 years in the OLE study.

Target-range SUA was achieved for 55% of all patients at week 25 of the OLE study. This is consistent with findings from the RCTs, in which 42% of patients treated with biweekly pegloticase showed sustained urate lowering for 6 months.17 Conversely, patients losing urate-lowering efficacy did so within the first few months of treatment in the RCTs, and urate levels for this cohort remained above target for the duration of the OLE study. About one-third of patients who did not sustain SUA <6 mg/dl in the RCTs retained clinical benefit during the OLE study, with at least one tophus remaining resolved.
However, this finding could be explained by an initial response followed by a protracted period needed for sufficient reaccumulation of crystal deposits to become clinically detectable.

A post hoc analysis of the RCTs revealed relationships between loss of urate-lowering efficacy, risk of IRs and development of high-titre pegloticase antibodies.17 Because urate levels were blinded during the RCTs, the relationship between loss of urate-lowering efficacy and the risk of pegloticase infusion reactions was not appreciated until several months after initiation of the OLE study. As a result, some patients who were RCT pegloticase non-responders continued receiving pegloticase during the OLE study, providing additional information with regard to these relationships. Among patients entering the OLE study, 71 had at least one IR during the randomised trials or the OLE study; 85% of these patients (60/71) experienced their first IR when SUA was >6 mg/dl. Overall, these data support the view that the great majority of IRs can be avoided if patients discontinue treatment when, with routine preinfusion monitoring of SUA, levels in excess of 6 mg/dl indicate a sustained loss of pegloticase urate-lowering efficacy.

There are limitations to this study. First, the open-label study design carries both well-documented value and some bias as a result of being uncontrolled.19 Second, the RCT protocol called for stratification of patients based on treatment response and dosing schedule. Although this provided the opportunity to follow safety and efficacy outcomes during the OLE study for patients with specific treatment histories, it also resulted in six distinct cohorts, each containing small numbers of patients. Nevertheless, the analyses of OLE study data indicate that patients who achieve treatment success throughout the first 6 months of treatment are likely to benefit from sustained long-term urate-lowering by pegloticase, with favourable clinical responses. Conversely, patients not sustaining urate-lowering treatment responses to pegloticase infusions should not be expected to have treatment benefits extending substantially beyond the period of urate-lowering initially achieved, and they incur an increased risk of IRs. We recommend discontinuation of pegloticase therapy in such patients.

A final limitation is that patients were identified by their initial allocation to biweekly or monthly treatment in the RCTs but were allowed to change dosing frequency at two time points upon entering the OLE study. A higher proportion of patients switched from monthly to biweekly treatment than vice versa, however, indicating that most patients were receiving the US FDA-approved biweekly pegloticase regimen.

In summary, the safety profile of pegloticase in the OLE study was consistent with that observed in the RCTs,17 with no evidence of new safety concerns related to long-term exposure. Efficacy findings further demonstrated that clinical improvements were durable and probably progressive during long-term therapy in patients with persistent goal-range urate-lowering responses to pegloticase.

# Supplementary Material

###### Web supplement

The authors thank all the investigators and patients who participated in the pegloticase trials. Members of the Cardiovascular Event Adjudication Committee were William B White (chair), Glen E Cooke and Philip Gorelick, who were compensated for their participation.
The authors wish to acknowledge writing and editorial support from the fm*P* group of Fallon Medica LLC, funded by Savient Pharmaceuticals, in the preparation of this manuscript.

# References

[^1]: **Handling editor** Tore K Kvien

date: 2020-05
title: GLOBAL LEADERS UNITE TO ENSURE EVERYONE EVERYWHERE CAN ACCESS NEW VACCINES, TESTS AND TREATMENTS FOR COVID-19: Unprecedented gathering of heads of government, institutions and industry cements commitment to accelerate development and delivery for all populations

**24 April 2020** - Heads of state and global health leaders today made an unprecedented commitment to work together to accelerate the development and production of new vaccines, tests and treatments for COVID-19 and assure equitable access worldwide.

The COVID-19 pandemic has already affected more than 2.4 million people, killing over 160,000. It is taking a huge toll on families, societies, health systems and economies around the world, and for as long as this virus threatens any country, the entire world is at risk.

There is an urgent need, therefore, while following existing measures to keep people physically distanced and to test and track all contacts of people who test positive, for innovative COVID-19 vaccines, diagnostics and treatments.

"We will only halt COVID-19 through solidarity," said Dr Tedros Adhanom Ghebreyesus, WHO Director-General. "Countries, health partners, manufacturers, and the private sector must act together and ensure that the fruits of science and research can benefit everybody."

Work has already started. Since January, WHO has been working with researchers from hundreds of institutions to develop and test vaccines, standardize assays and regulatory approaches on innovative trial designs, and define criteria to prioritize vaccine candidates. The Organization has prequalified diagnostics that are being used all over the world, and more are in the pipeline. And it is coordinating a global trial to assess the safety and efficacy of four therapeutics against COVID-19.

The challenge is to speed up and harmonize processes to ensure that once products are deemed safe and effective, they can be brought to the billions of people in the world who need them. Past experience, in the early days of HIV treatment, for example, and in the deployment of vaccines against the H1N1 outbreak in 2009, shows that even when tools are available, they have not been equally available to all.

So today leaders came together at a virtual event, co-hosted by the World Health Organization, the President of France, the President of the European Commission, and the Bill & Melinda Gates Foundation.
The event was joined by the UN Secretary-General, the AU Commission Chairperson, the G20 President, heads of state of France, South Africa, Germany, Vietnam, Costa Rica, Italy, Rwanda, Norway, Spain, Malaysia and the UK (represented by the First Secretary of State).

Health leaders from the Coalition for Epidemic Preparedness Innovations (CEPI), Gavi, the Vaccine Alliance, the Global Fund, UNITAID, the Wellcome Trust, the International Red Cross and Red Crescent Movement (IFRC), the International Federation of Pharmaceutical Manufacturers (IFPMA), the Developing Countries Vaccine Manufacturers' Network (DCVMN) and the International Generic and Biosimilar Medicines Association (IGBA) committed to come together, guided by a common vision of a planet protected from human suffering and the devastating social and economic consequences of COVID-19, to launch this groundbreaking collaboration. They were joined by two Special Envoys: Ngozi Okonjo-Iweala, Gavi Board Chair, and Sir Andrew Witty, former CEO of GlaxoSmithKline.

They pledged to work towards equitable global access based on an unprecedented level of partnership. They agreed to create a strong unified voice, to build on past experience and to be accountable to the world, to communities and to one another.

"Our shared commitment is to ensure all people have access to all the tools to prevent, detect, treat and defeat COVID-19," said Dr Tedros. "No country and no organization can do this alone. The Access to COVID-19 Tools Accelerator brings together the combined power of several organizations to work with speed and scale."

Health leaders called on the global community and political leaders to support this landmark collaboration, and for donors to provide the necessary resources to accelerate achievement of its objectives, capitalizing on the opportunity provided by a forthcoming pledging initiative that starts on 4 May 2020. This initiative, spearheaded by the European Union, aims to mobilize the significant resources needed to accelerate the work towards protecting the world from COVID-19.

Available from: 

author: J Yazdany; C Feldman; J Liu; MM Ward; MA Fischer; KH Costenbader
date: 2012
institute: 1University of California, San Francisco, CA, USA; 2Brigham and Women's Hospital, Boston, MA, USA; 3National Institute of Arthritis and Musculoskeletal and Skin Diseases, Bethesda, MD, USA
title: Usual source of care and geographic region are largest predictors of healthcare quality for incident lupus nephritis in US Medicaid recipients

# Background

Little is known about the quality of healthcare delivered to patients with lupus nephritis in the United States, and the major determinants of quality remain unknown.
We aimed to examine the sociodemographic, geographic, and healthcare system factors associated with performance on a healthcare quality measure in a nationwide cohort of Medicaid recipients with incident lupus nephritis.

# Methods

We used US Medicaid analytic extract (MAX) data from 2000 to 2004, containing person-level files on Medicaid eligibility, utilization and payments. We identified patients meeting a validated administrative data definition of incident lupus nephritis and used this group as the denominator population for the quality metric (QM). The QM numerator assessed receipt of induction therapy with glucocorticoids and another immunosuppressant (azathioprine, mycophenolate mofetil, mycophenolic acid, cyclophosphamide, cyclosporine A, or tacrolimus) within 90 days of lupus nephritis onset. Patients with end-stage renal disease were excluded. We used multivariable logistic regression models to examine sociodemographic (age, sex, race/ethnicity), geographic (US region), and healthcare (health professional shortage areas, HPSAs, from the Area Resource File) predictors of higher performance. We also examined the restrictiveness of Medicaid benefits in each state, defined by less generous medication coverage policies (mandatory generic substitution, requirements for prior authorization, and drug co-payments), and whether the patient's usual source of care was the emergency department or the ambulatory setting (>50% of visits).

# Results

A total of 974 Medicaid recipients met the definition of incident lupus nephritis. The mean age was 39 years (SD 12), 93% were female, and the largest racial/ethnic group was African American (48%), followed by White (27%), Hispanic (13%) and Asian (6%). Individuals were geographically dispersed (20% Midwest, 22% Northeast, 34% South, 24% West), and 95% resided in partial or complete HPSAs. One hundred and sixty-four individuals resided in states with more restrictive Medicaid benefits. At 90 days, only 16% of patients met all numerator components of the QM; 45% of individuals received only steroids (mean prednisone dose 28 mg/day), and 3% received an immunosuppressant alone. Among those treated with an immunosuppressant, 31% received azathioprine, 47% received mycophenolate, 14% received cyclophosphamide, and 11% received a calcineurin inhibitor. For 20% (n = 192) of patients, the usual source of care was the emergency setting. In multivariable logistic regression models, younger individuals were more likely to receive optimal treatment (OR for 18 to 34 years vs. 51 to 64 years = 3.5, CI = 1.6 to 7.6), while those living in the South and Midwest were less likely (OR = 0.49, CI = 0.24 to 0.67 and OR = 0.30, CI = 0.15 to 0.61, respectively). Those whose usual source of care was the emergency department were less likely to receive optimal treatment (OR = 0.47, CI = 0.28 to 0.81). In this adjusted analysis, we did not find significant associations of race/ethnicity, HPSA or Medicaid restrictiveness with QM performance.

# Conclusion

Most US Medicaid recipients with incident lupus nephritis in our study did not receive timely induction therapy, and many were treated with high-dose steroids alone. We found significant geographic variation in performance, with the South and Midwest having lower performance than other regions. A substantial number of Medicaid patients with lupus nephritis used the emergency department as their usual source of care, and QM performance was lower in this setting. These data suggest a need for targeted quality improvement interventions, including increasing access to appropriate ambulatory care for Medicaid recipients with lupus nephritis.
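To make the quality-metric definition from the Methods concrete, here is a minimal sketch of the numerator logic (a glucocorticoid plus one of the named immunosuppressants within 90 days of nephritis onset). The data layout, drug labels and function name are illustrative, not the study's actual claims-processing code, and the mapping of raw claims codes to drug classes is omitted.

```python
from datetime import date

# Second-agent immunosuppressants named in the Methods.
IMMUNOSUPPRESSANTS = {
    "azathioprine", "mycophenolate mofetil", "mycophenolic acid",
    "cyclophosphamide", "cyclosporine A", "tacrolimus",
}

def meets_qm_numerator(onset, fills):
    """True if the claims record shows a glucocorticoid AND a listed
    immunosuppressant filled within 90 days of lupus nephritis onset.

    onset: datetime.date of nephritis onset.
    fills: iterable of (fill_date, drug_name) pairs.
    """
    window = [drug for fill_date, drug in fills
              if 0 <= (fill_date - onset).days <= 90]
    has_steroid = "glucocorticoid" in window
    has_second_agent = any(drug in IMMUNOSUPPRESSANTS for drug in window)
    return has_steroid and has_second_agent

# Illustrative patient: steroid on day 9, mycophenolate on day 29.
fills = [(date(2002, 1, 10), "glucocorticoid"),
         (date(2002, 1, 30), "mycophenolate mofetil")]
print(meets_qm_numerator(date(2002, 1, 1), fills))  # True
```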
author: Linda S. Birnbaum[^1]
date: 2012-08-01
references:
title: NIEHS's New Strategic Plan

I am pleased to present "our" new strategic plan for the National Institute of Environmental Health Sciences (NIEHS 2012). I say that it's "our" plan because the entire document reflects tremendous thought, discussion, and sharing of ideas by hundreds of scientists and community stakeholders, and it speaks to the entire field of environmental health research. This plan is about what we will strive to accomplish together as we devote ourselves to research that, in my opinion, has the greatest chance for preventing disease and for improving health throughout the world.

As reflected in the new strategic plan (NIEHS 2012), the NIEHS has a fresh vision, not because our values have changed but because our research has been so successful—and many of you have made a huge contribution to that progress.

New technologies and increasing knowledge bring exciting new opportunities each and every year. The NIEHS's new strategic plan builds upon the accomplishments and vision that came before.

The NIEHS has come a long way in making environmental health research responsive to the needs and concerns of the American people—to make environmental health part of the public health debate. This continues to be a source of motivation and purpose for NIEHS staff and our research partners. Environmental justice is an everlasting core value for NIEHS research.

In the past few years, we at the NIEHS have made some important progress in exposure science, supporting new technologies for sensor devices and bioinformatics. We have been embracing new science such as epigenetics and exposure phenotyping, focusing on interdisciplinary research and translational research. In addition, our clinical research unit is now up and running.

As the NIEHS moves forward, our overall goal is to make the institute, including the National Toxicology Program (NTP), the foremost trusted source of environmental health knowledge, leading the field in innovation and the application of research to solve health problems.

Our new vision statement (NIEHS 2012) captures our collective dreams and aspirations, and reflects our strong commitment to making a real difference:

> The vision of the National Institute of Environmental Health Sciences is to provide global leadership for innovative research that improves public health by preventing disease and disability from our environment.

What this means in practical terms is that we are pursuing some of the "big influences" that have been understudied, all of which interact with traditional environmental exposures: the microbiome, for example, and inflammation pathways, immunological pathways, nutrition, and epigenetic processes.
We also want to lead the process of defining the "exposome," which is the totality of exposure encountered by humans.

We have elevated the NTP to the divisional level within the NIEHS. And we will continue to integrate our toxicology research with our excellent basic and translational science programs, not because I am a toxicologist but because the NTP is a problem-solving program—a truly translational component of the NIEHS. For example, the NTP is part of our consortium on bisphenol A, contributing to the scientific deliberations, right along with our intramural scientists and our grantees.

The NTP is leading the Tox21 initiative along with the National Institutes of Health's National Center for Comparative Genomics, the U.S. Environmental Protection Agency, and the Food and Drug Administration. This high-throughput testing program shows great promise not only as a new and faster method but also for moving toxicology into a predictive science.

The NTP is moving beyond the traditional approaches of testing one chemical at a time and is taking on the significant challenge of evaluating mixtures. We are also looking at the effects of exposures throughout the life span, expanding our research and testing to include prenatal exposures and how they may link to adult disease. It is clear that there are multiple windows of susceptibility and that exposures early in life may have long-lasting consequences for both health and disease.

Finally, the antiquated idea that the dose makes the poison is overly simplistic. The newest research clearly shows that biology is affected by low doses of chemicals, often within the range of general population exposure, and that these biological changes can be harmful, especially during periods of development. Therefore, low-dose research must go hand in hand with our life-span approach.

The NIEHS's job doesn't stop with the publication of scientific results. We also have an obligation to help translate the nation's research investment into public health intervention, new policy, and preventive clinical practice.

To be successful, we at the NIEHS need to conduct and support the best science, whether it is led by an individual researcher or a multidisciplinary team, and this means working together with all of our partners. We thank everyone who joined us in our strategic planning process. We appreciate your commitment and support.

# Reference

[^1]: Linda S. Birnbaum, director of the NIEHS and the NTP, oversees a budget that funds multidisciplinary biomedical research programs and prevention and intervention efforts that encompass training, education, technology transfer, and community outreach. She recently received an honorary Doctor of Science from the University of Rochester, the distinguished alumna award from the University of Illinois, and was elected to the Institute of Medicine. She is the author of more than 900 peer-reviewed publications, book chapters, abstracts, and reports. Birnbaum received her M.S. and Ph.D. in microbiology from the University of Illinois, Urbana. A board-certified toxicologist, she has served as a federal scientist for more than 32 years, 19 with the U.S.
EPA Office of Research and Development, preceded by 10 years at the NIEHS as a senior staff fellow, a principal investigator, a research microbiologist, and a group leader for the institute's Chemical Disposition Group.

abstract: # Background

We have previously shown that the irregular lifestyles of young Japanese female students are significantly related to their desire to be thinner. In the present study, we examined the nutritional knowledge and food habits of Chinese university students and compared them with those of other Asian populations.

# Methods

A self-reported questionnaire was administered to 540 students ranging in age from 19 to 24 years. Medical students from Beijing University (135 men and 150 women) in Northern China and Kunming Medical College in southern China (95 men and 160 women) participated in this study. Parametric variables were analyzed using Student's *t*-test, and Chi-square analyses were conducted for non-parametric variables.

# Results

Our results showed that 80.5% of students had a normal BMI and 16.6% of students were underweight, with the prevalence of BMI>30 obesity being very low in this study sample. Young Chinese female students had a greater desire to be thinner (62.0%) than males (47.4%). Habits involving regular eating patterns and vegetable intake were reported and represent practices that ought to be encouraged.

# Conclusions

The university and college arenas represent the final opportunity for the health and nutritional education of a large number of students from the educator's perspective. Our findings suggest the need for strategies designed to improve competence in the area of nutrition.

author: Ruka Sakamaki; Kenji Toyama; Rie Amamoto; Chuan-Jun Liu; Naotaka Shinfuku
date: 2005
institute: 1International Center for Medical Research, Kobe University Graduate School of Medicine, Kobe, 650-0017, Japan; 2Seinan Jo Gakuin University Faculty of Health and Welfare, Department of Nutritional Sciences, Kitakyusyu, 803-0835, Japan; 3Department of Plastic Surgery, Kobe University Graduate School of Medicine, Kobe, 650-0017, Japan
references:
title: Nutritional knowledge, food habits and health attitude of Chinese university students – a cross-sectional study

# Background

The increasing problem of obesity has been observed in many lower-income countries during recent decades. China has adopted an open-market policy and experienced explosive economic growth, which has led to less food scarcity at the national level and to a remarkable transition in the structure of the Chinese diet [1]. The composition of the Chinese diet has been shifting towards a diet higher in fat and meat and lower in carbohydrates and fiber [2]. Additionally, decreased levels of physical activity and leisure are linked to increases in the prevalence of overweight, obesity and diet-related non-communicable diseases [3].

In previous reports, we examined the eating habits and dietary knowledge of female students in Japan.
Our results showed that an irregular lifestyle was significantly related to indefinite (nonspecific) complaints, and that the majority of students desired to be thinner even though the prevalence of overweight was very low in that sample [4]. Universities and colleges are potentially important targets for the promotion of healthy lifestyles in the adult population. However, little is known concerning the body mass index (BMI) distribution and the nutritional and health-related behavior of Chinese university students. The purpose of this study was to obtain a preliminary understanding of the BMI distribution of Chinese university students and to determine their nutritional knowledge and body-shape perceptions.

# Material and Methods

This study was carried out between February 2001 and April 2002. Medical students from Beijing University (135 men and 150 women) in Northern China and Kunming Medical College in southern China (95 men and 160 women) participated in this study. A sample of 540 students aged 19–24 years was administered a self-reported questionnaire. The questionnaire consisted of 21 questions: 19 regarding eating, drinking and smoking habits, and 2 related to dieting (trying to lose weight). Self-reported height and weight were used to calculate BMI (kg/m^2^). The questionnaire was designed by the authors and based on a national dietary survey conducted by the Health and Labor Ministry of Japan. Some of the authors also traveled to China to investigate the dietary life of Chinese people to facilitate questionnaire design. The questionnaire was first written in Japanese and then translated into Chinese utilizing fluent bilingual linguistic services. The translated Chinese version was back-translated to ensure the original meaning was not lost. Informed consent was obtained from all participants of this study according to the Declaration of Helsinki. The statistical software package SPSS 10.0 was used for the analysis of data [5]. Parametric variables were analyzed using Student's *t*-test, and Chi-square analyses were conducted for non-parametric variables. All analyses were two-tailed, and a p value of less than 0.05 was considered statistically significant.

# Results

## Characteristics of the sample and BMI categories

The response rate was 96% (512/540). The characteristics of the subjects are shown in Table 1. A total of 212 men and 300 women, with a mean age of 20.4 ± 1.9 years, participated in this study. The average height was 165.8 ± 7.8 cm, and the average weight was 56.9 ± 9.2 kg. Mean BMI was 20.6 ± 2.2. To analyze the distribution of BMI and health-related behavior, BMI was categorized into 4 groups according to the mean BMI ± 1 standard deviation (SD) (Figure 1). The average BMI for male students was 21.4 ± 2.5, with the largest proportions of students in the categories 18.9≤BMI<21.4 (37.7%) and 21.4≤BMI<23.9 (32.5%). The average BMI for female students was 20.0 ± 1.8, with the categories 18.2≤BMI<20.0 (37.5%) and 20.0≤BMI<21.8 (31.4%) containing the largest proportions. According to WHO BMI classifications [6], 97.1% of students were classified into the underweight or normal weight categories; 2.5% (13/512) of students were overweight (BMI>25) and 0.4% (2/512) were obese (BMI>30). The distribution of BMI values around the sample mean showed few extreme values.
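As a companion to the BMI grouping just described, the sketch below shows the computation: BMI from self-reported height and weight, then assignment to one of the four bands bounded by the mean ± 1 SD. The function names are illustrative; the mean and SD passed in are the male values quoted above.

```python
def bmi(weight_kg, height_cm):
    """BMI = weight [kg] / height [m]^2."""
    height_m = height_cm / 100.0
    return weight_kg / (height_m ** 2)

def bmi_band(value, mean, sd):
    """Assign a BMI value to one of four bands bounded by
    mean - 1 SD, mean, and mean + 1 SD."""
    if value < mean - sd:
        return f"BMI<{mean - sd:.1f}"
    if value < mean:
        return f"{mean - sd:.1f}<=BMI<{mean:.1f}"
    if value < mean + sd:
        return f"{mean:.1f}<=BMI<{mean + sd:.1f}"
    return f"BMI>={mean + sd:.1f}"

# Average male student in Table 1 (63.7 kg, 172.3 cm): BMI ~21.5,
# which falls in the 21.4<=BMI<23.9 band.
print(bmi_band(bmi(63.7, 172.3), mean=21.4, sd=2.5))
```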
## Eating habit

Lifestyle practices were compared by gender (see additional file 1). The majority of students (83.6%) reported taking meals regularly, with 79.0% eating meals 3 times per day; there were no gender differences. However, a significant gender difference was found in the responses relating to breakfast intake, with 66.8% of males and 82.3% of females reporting eating breakfast regularly (p < 0.0006). The frequency of snacking was significantly higher in females (31.1%) than in males (11.5%; p < 0.0001). The present sample demonstrated high consumption of vegetables and fruits: 47.9% of students reported the consumption of colored vegetables such as spinach and carrots, and 32.5% of subjects reported eating fruit daily. Female students tended to eat more fruit than males (p < 0.0001). In addition, female students tended to eat with friends and family more frequently than males (p < 0.01). Few subjects smoked or drank alcohol. When eating out, female students were more likely than males to consider the calorie content of the menu (data not shown). Although 85.6% of students were aware of the concept of nutritionally balanced food, only a small number of students (7%) applied this concept when selecting food from a menu. Moreover, only 51% of students showed a desire to learn about healthy diets.

## Body image and health consciousness

When subjects were asked about their history of dieting, 22.7% of respondents reported that they had dieted (see additional file 2). The proportion of female students with dieting experience (29.8%) was more than twice that of male subjects (12.7%; p < 0.0006). In total, 56% of the students selected 'thin or slim is beautiful': 47.4% of male and 62.0% of female students. Female students had a significantly greater desire to be thinner than males (p < 0.001). More than half of the respondents reported a desire to adopt healthier dietary habits. Moreover, when asked about their degree of consciousness pertaining to health and diet, 45.2% of male students and 48.3% of female students wished to learn about health and diet. Among female subjects, those with BMI<18.2 showed the strongest consciousness of health and diet (p < 0.03).

# Discussion

This study aimed to determine the health and nutritional knowledge and the dietary behavior of university students in China. We recorded the distribution of BMI among Chinese students and found a low prevalence of obesity, a finding that is consistent with a study of Japanese female students (prevalence of overweight, BMI≥25, was 5.8%; obesity, BMI>30, was 0%) [4]. In the United States, 35% of college students are reported to be overweight or obese (BMI≥25) [7]. According to the WHO definition of obesity, BMI>30 is the cut-off point [6]. This definition is based on research in Caucasian populations. Asian populations are reported to have a higher body fat percentage at a lower BMI compared with Caucasians [8], and the WHO expert consultation reported that BMI in Asian populations is related to disease at a lower level [9]. In order to compare obesity prevalence between ethnic groups, BMI cut-off points for Asians need to be established through well-constructed and standardized body-composition studies.
It is notable that in China the prevalence of overweight increased from 1991 to 1997, rising from 6.4% to 7.7% [10]. The proportion of energy derived from fat of both vegetable and animal sources increased each year, and a recent study revealed that energy derived from dietary fat accounted for more than 30% of total energy intake [11]. Changes in dietary composition, which accompany socioeconomic growth, may accelerate the rise in the prevalence of obesity in China.

The results of our study show that the majority of students regularly eat three times per day, and almost 80% of students eat vegetables and fruit twice per day. These eating habits ought to be encouraged. The traditional Chinese diet contains plenty of vegetables and is rice-based. The present study found that a high proportion of Chinese students eat breakfast daily. In contrast, a dietary survey of young Japanese subjects revealed a low rate of individuals engaged in regular eating patterns [12]. Skipping breakfast has been associated with lower nutritional status and with the risk of cardiovascular diseases [13], and it has also been reported that less adequate breakfast habits may contribute to the appearance and further development of obesity [14]. Therefore, the importance of regular eating patterns cannot be overemphasized in nutritional education.

Our results showed that body-figure perception differed significantly between female and male students. A number of researchers have investigated the relationship between body image and gender role. Women tend to desire a thinner figure, express more anxiety about becoming fat, and are more likely to diet than men [15,16]. In contrast, men have reported a desire for a heavier physique and muscularity [17]. In recent years, eating disorders have been increasing dramatically among young women. The results of our study did not confirm this suggestion to the level of statistical significance; however, it is worth pointing out that 65.0% of female students with BMI<20, which is at or below the lower end of the normal weight range, indicated a desire to be thin. Dissatisfaction with body figure and eating disorders are closely related [18-20], and being young, female, and dieting are identified risk factors that have been reliably linked to the development of eating disorders [21]. It is possible that some of the students who were preoccupied with a thin body may develop eating disturbances. Thus, the promotion of healthy weight-management practices should be considered when developing health education programs.

# Conclusions

In conclusion, our findings reveal that the majority of students were classified into the normal BMI group, with the prevalence of BMI>30 obesity being very low in this study sample. Young female students had a greater desire to be thinner than male students. Habits involving regular eating patterns and vegetable intake were found and represent practices that ought to be encouraged. The meal and snack patterns of Chinese students were very similar to the traditional eating pattern, although diets are changing rapidly in China and other low-income countries. The university and college arenas represent the final opportunity for nutritional education of a large number of students from the educator's perspective.
Our findings suggest the need for strategies designed to improve competence in the area of nutrition, especially with respect to information relating to sources of nutrients and healthy weight management. Furthermore, public demand for health and nutritional information should be taken into consideration when implementing strategies aimed at improving the nutritional well-being of individuals.

# Authors' contributions

R.S. carried out questionnaire design, manuscript drafting and overall coordination of the study. K.T. was involved in drafting and revision of the article. R.A. contributed to data entry and analysis. C.J.L. contributed to the questionnaire design, data collection and language translations. N.S. contributed to final approval of the manuscript.

# Supplementary Material

###### Additional File 1

Table 2, containing the results of questions related to lifestyle practices with special reference to food habits. Meal patterns and the consumption of fruits and vegetables, fried foods and alcohol were assessed in male and female students. Chi-square analyses were employed to compare behavioral differences by gender. Evaluations of statistical significance were made at p < 0.05.

###### Additional File 2

Table 3, containing the results on body-shape perception and health consciousness of male and female students. Male and female respondents were each categorized into 4 groups according to the mean BMI ± 1 standard deviation (SD). Comparisons between BMI groups were made using Chi-square analysis. Evaluations of statistical significance were made at p < 0.05.

### Acknowledgements

The authors express their appreciation for the invaluable partnership and support of Dr Wang of Beijing University, Dr Zhao of Kunming Medical College and all the study participants at both institutes. We also thank Dr Shigeki Minakami for valuable comments on the manuscript.

## Figures and Tables

Table 1 Characteristics of Participants

| **Variable** | **Total** (n = 512) | **Male** (n = 212) | **Female** (n = 300) |
|---|---|---|---|
| **Age (y)** | 20.4 ± 1.9 | 20.3 ± 1.7 | 20.4 ± 2.0 |
| **Weight (kg)** | 56.9 ± 9.2 | 63.7 ± 8.8 | 52.1 ± 5.9 |
| **Height (cm)** | 165.8 ± 7.8 | 172.3 ± 5.5 | 161.2 ± 5.6 |
| **BMI (kg/m^2^)** | 20.6 ± 2.2 | 21.4 ± 2.5 | 20.0 ± 1.8 |

Figure 1. Gender distribution of BMI in the sampled population. BMI is based on self-reported height and weight.
BMI = weight [kg] / height [m]^2^.

date: 2016-12
title: OVER 1 MILLION TREATED WITH HIGHLY EFFECTIVE HEPATITIS C MEDICINES

27 OCTOBER 2016 | GENEVA - Over one million people in low- and middle-income countries have been treated with a revolutionary new cure for hepatitis C since its introduction two years ago.

When Direct Acting Antivirals (DAAs) were first approved for hepatitis C treatment in 2013, there were widespread fears that their high price would put them out of reach for the more than 80 million people with chronic hepatitis C infections worldwide.

The new medicines have a cure rate of over 95%, fewer side effects than previously available therapies, and can completely cure the disease within three months. But at an initial estimated price of some US$85 000 they were unaffordable even in high-income countries.

Countries show hepatitis C treatment is achievable

Thanks to a series of access strategies supported by the World Health Organization (WHO) and other partners, a range of low- and middle-income countries - including Argentina, Brazil, Egypt, Georgia, Indonesia, Morocco, Nigeria, Pakistan, Philippines, Romania, Rwanda, Thailand and Ukraine - are beginning to succeed in getting drugs to people who need them. Strategies include competition from generic medicines through licensing agreements, local production and price negotiations.

"Maximizing access to lifesaving hepatitis C treatment is a priority for WHO," says Dr Gottfried Hirnschall, Director of WHO's Department of HIV and Global Hepatitis Programme. "It is encouraging to see countries starting to make important progress. However, access still remains beyond the reach for most people."

A new WHO report, Global Report on Access to Hepatitis C Treatment: Focus on Overcoming Barriers, released today shows how political will, civil society advocacy and pricing negotiations are helping address hepatitis C, a disease which kills almost 700 000 people annually and places a heavy burden on health systems' capacities and resources.

"Licensing agreements and local production in some countries have gone a long way to make these treatments more affordable," says Dr Suzanne Hill, WHO Director for Essential Medicines and Health Products. For example, the price of a three-month treatment in Egypt dropped from US$900 in 2014 to less than US$200 in 2016.

"But there are still huge differences between what countries are paying. Some middle-income countries, which bear the largest burden of hepatitis C, are still paying very high prices.
WHO is working on new pricing models for these, and other expensive medicines, in order to increase access to all essential medicines in all countries," says Dr Hill.

80% of people in need still face challenges

Among middle-income countries, the price for a three-month treatment of sofosbuvir and daclatasvir varies greatly. Costs range from US$9 400 in Brazil to US$79 900 in Romania.

High costs have led to treatment rationing in some countries, including in the European Union, where price agreements have not accounted for the full cost of treating the whole affected population.

"Today's report on access, prices, patents and registration of hepatitis C medicines will help create the much needed market transparency which should support country efforts to increase access to DAAs," said Dr Hirnschall. "We hope countries will update their hepatitis treatment guidelines, work to remove barriers to access, and make these medicines available promptly for everyone in need."

In May 2016, at the World Health Assembly, 194 countries adopted the first-ever Global Health Sector Strategy on Viral Hepatitis, agreeing to eliminate hepatitis as a public health threat by 2030. The strategy includes a target to treat 80% of people in need by this date.

WHO issued guidelines recommending the use of DAAs in 2014 and 2016 and included DAAs on its Essential Medicines List, which is compiled to address the priority healthcare needs of populations and to make needed essential medicines available at all times in adequate amounts, at a price the health system and community can afford.

Available from: 

abstract: Hypoglycemia stimulates counterregulatory hormone release to restore euglycemia. This protective response is diminished by recurrent hypoglycemia, limiting the benefits of intensive insulin treatment in patients with diabetes. We previously reported that EphA5 receptor-ephrinA5 interactions within the ventromedial hypothalamus (VMH) influence counterregulatory hormone responses during acute hypoglycemia in nondiabetic rats. In this study, we examined whether recurrent hypoglycemia alters the capacity of the ephrinA5 ligand to activate VMH EphA5 receptors, and if so, whether these changes could contribute to the pathogenesis of defective glucose counterregulation in response to a standard hypoglycemic stimulus. The expression of ephrinA5, but not of EphA5 receptors, within the VMH was reduced by antecedent recurrent hypoglycemia. In addition, the number of synaptic connections was increased and astroglial synaptic coverage was reduced. Activation of VMH EphA5 receptors via targeted microinjection of ephrinA5-Fc before a hyperinsulinemic hypoglycemic clamp study caused a reduction in the glucose infusion rate in nondiabetic rats exposed to recurrent hypoglycemia.
The increase in the counterregulatory response to insulin-induced hypoglycemia was associated with a 150% increase in glucagon release (*P* < 0.001). These data suggest that changes in ephrinA5/EphA5 interactions and synaptic plasticity within the VMH, a key glucose-sensing region in the brain, may contribute to the impairment in glucagon secretion and counterregulatory responses caused by recurrent hypoglycemia.
author: Barbara Szepietowska; Tamas L. Horvath; Robert S. Sherwin
Corresponding author: Robert S. Sherwin.
date: 2014-03
references:
title: Role of Synaptic Plasticity and EphA5-EphrinA5 Interaction Within the Ventromedial Hypothalamus in Response to Recurrent Hypoglycemia

Frequent episodes of acute hypoglycemia represent the principal obstacle to achieving optimal glycemic control during insulin treatment in patients with type 1 diabetes and long-standing type 2 diabetes (1,2). This problem is further magnified by the loss of an appropriate counterregulatory response to hypoglycemia that results as a consequence of frequent episodes of iatrogenic insulin-induced hypoglycemia (3,4). The molecular mechanisms underlying this phenomenon remain uncertain but are likely to involve the key brain glucose-sensing region, the ventromedial hypothalamus (VMH) (5,6).

We have previously reported that local stimulation of VMH EphA5 receptors by microinjection of ephrinA5-Fc or ephrinA5 overexpression increased, whereas knockdown of VMH ephrinA5 reduced, counterregulatory responses to hypoglycemia. Furthermore, overexpression of VMH ephrinA5 transiently increased local glutamate concentrations, whereas ephrinA5 knockdown produced profound suppression of VMH interstitial fluid glutamine concentrations in the basal state and during hypoglycemia. These data suggest that the activation of VMH EphA5 receptors by ephrinA5 may act in concert with β-cell Eph receptor forward signaling to restore glucose homeostasis during acute hypoglycemia via alterations in glutamate/glutamine cycling (7,8).

Within the central nervous system, Eph receptors and their ligands, the ephrins, play a key role in cell-to-cell communication as well as in synaptic structure and function. Eph receptors function as transmembrane receptor tyrosine kinases and are divided by sequence similarity and ligand affinity into an A and a B subclass. Their ligands, the ephrins, are also divided into an A and a B subclass: the A subclass is tethered to the cell membrane by a glycosylphosphatidylinositol anchor, and members of the B subclass have a transmembrane domain and a short cytoplasmic region. For the most part, A-type receptors bind to most or all A-type ligands, and B-type receptors bind to most or all B-type ligands (9). As for many other receptor tyrosine kinases, ligand binding induces "forward signaling," mostly through phosphotyrosine-mediated pathways. However, ephrins can also signal into their host cell—referred to as "reverse signaling" (10). Eph receptors and their ephrin ligands are present in the adult brain and are specifically enriched in glutamate excitatory synapses (11). Moreover, Eph receptor tyrosine kinases are mainly expressed in synaptic terminals, where they influence synaptic plasticity via binding to ephrins found on astrocytic processes that surround the synapse or on neuronal synapses (12,13).

Several observations suggest that changes in hypothalamic synaptic plasticity may play a significant role in the regulation of energy balance.
For example, peripheral signals, such as leptin, ghrelin, and estrogen, induce synaptic adaptations that serve as dynamic regulators of neuronal activity in the arcuate nucleus, the hypothalamic center for feeding control (14,15). Whether hypoglycemia per se induces local changes in the VMH affecting neuronal synapses and/or the surrounding glial cells is unknown, but such alterations could potentially modulate neurotransmission within the VMH, thereby altering brain glucose sensing. This study tests whether recurrent hypoglycemia alters EphA5 receptor-ephrinA5 interactions within the VMH, which might contribute to diminished activation of counterregulatory responses to acute hypoglycemia.

# Research Design and Methods

## Animals

Male Sprague-Dawley rats (Charles River, Wilmington, MA) weighing 300–350 g were individually housed in the Yale Animal Resource Center in temperature- (22–23°C) and humidity-controlled rooms. Animals were fed rat chow (Prolab 3000; Agway, Syracuse, NY) and water ad libitum and were acclimatized to a 12-h light-dark cycle. The Yale University Institutional Animal Care and Use Committee approved the experimental protocols. Different sets of animals were used for the clamp and electron microscopy (EM) experiments described below.

## Vascular and Stereotaxic Surgery

Animals were anesthetized ∼7–10 days before study and underwent aseptic surgery in which vascular catheters were implanted into the left carotid artery for blood sampling and into the right jugular vein for infusion, as previously described (8). These catheters were tunneled subcutaneously and exteriorized at the back of the neck between the scapulae. Wound staples were used to close the incision. For stereotaxic surgery, animals were placed into a stereotaxic frame (David Kopf Instruments, Tujunga, CA), and stainless steel guide cannulas (Plastics One, Roanoke, VA) were inserted into the brain for microinjection and secured in place with screws and dental acrylic (coordinates from bregma: anteroposterior −2.6 mm, mediolateral ±0.8 mm, and dorsoventral −8.0 mm).

## Recurrent Hypoglycemia Protocol

Rats received intraperitoneal injections of insulin (10 units/kg) for 3 consecutive days. After each injection, food was withheld to allow plasma glucose to fall into the hypoglycemic range (30–50 mg/dL); throughout, animals were monitored with tail vein glucose measurements every 30 min using the AlphaTRAK rodent glucometer (Abbott Animal Health, Chicago, IL) to ensure sustained hypoglycemia and to avoid a glucose reduction sufficient to cause seizure activity. At the end of this period, the rats were given free access to food again. Control rats received an injection of 0.9% saline under the same conditions.

## Microinjection of EphrinA5-Fc or Control-Fc

On the morning of the study, awake, overnight-fasted rats were connected to infusion pumps ∼90 min before the start of the experiment and then left undisturbed to recover from handling stress. After the recovery period, 22-gauge microinjection needles (Plastics One), designed to extend 1 mm beyond the tip of the guide cannula, were inserted bilaterally through the guide cannula into each VMH region.
Rats then received a microinjection of recombinant human ephrinA5-Fc or recombinant human IgG1-Fc (control-Fc protein; both R&D Systems, Minneapolis, MN; catalog number 374-EA-200) at a concentration of 0.3 µg/µL dissolved in artificial extracellular fluid, delivered at a rate of 0.1 µL/min over 60 min (dose: 1.8 µg for each side). After the microinjection, needles were left in place for 30 min before being removed. Immediately thereafter, a hyperinsulinemic-hypoglycemic clamp study was performed. These compounds have been previously administered into the central nervous system in in vivo studies (16) and in vitro in brain slices (17), without adverse effects. In addition, ephrinA5-Fc has been shown to specifically bind EphA5 receptors (18).

## Hyperinsulinemic-Hypoglycemic Clamp

A primed continuous infusion of 20 mU · kg^−1^ · min^−1^ insulin (Humulin R; Eli Lilly, Indianapolis, IN) was given, and a variable infusion of 20% dextrose was adjusted at 5- to 10-min intervals, based on glucose measurements (Analox Instruments, Lunenburg, MA), to maintain plasma glucose at 50 mg/dL from 30 to 90 min (19). Additional blood was drawn at baseline and at 30, 60, and 90 min for measurement of insulin, glucagon, epinephrine, and norepinephrine. Rats were killed at study termination, and the probe position was confirmed histologically.
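The variable dextrose infusion is, in essence, a feedback loop: plasma glucose is read every 5 to 10 min and the infusion rate is nudged toward the 50 mg/dL target. Below is a minimal sketch of that logic in Python; the proportional gain, starting rate, and glucose readings are illustrative assumptions, not values from this protocol.

```python
# Minimal sketch of the variable dextrose-infusion adjustment used during a
# hyperinsulinemic-hypoglycemic clamp. The 50 mg/dL target comes from the text;
# the gain and the example values below are assumptions for illustration.

TARGET_GLUCOSE = 50.0   # mg/dL, clamp target from 30 to 90 min
GAIN = 0.05             # (mL/h) change per (mg/dL) of error -- assumed

def adjust_infusion(current_rate_ml_h, measured_glucose_mg_dl):
    """Return an updated 20% dextrose infusion rate after one glucose reading."""
    error = TARGET_GLUCOSE - measured_glucose_mg_dl
    # Glucose above target -> negative error -> rate decreases; below target ->
    # rate increases. The rate is clamped to be non-negative.
    return max(0.0, current_rate_ml_h + GAIN * error)

rate = 1.0  # mL/h, arbitrary starting rate
for reading in [72, 61, 55, 49, 47, 52]:  # mg/dL, made-up readings
    rate = adjust_infusion(rate, reading)
    print(f"glucose {reading} mg/dL -> infusion {rate:.2f} mL/h")
```

In practice the adjustment is made by the experimenter rather than a controller; the sketch only makes the direction and magnitude of each correction explicit.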
## Hormone and Neurotransmitter Analyses

Plasma catecholamines were analyzed by high-performance liquid chromatography using electrochemical detection, and plasma insulin and glucagon concentrations were determined by radioimmunoassay (Linco, St. Charles, MO).

## Immunoblot Analysis

Frozen tissue micropunches from the VMH and control regions were homogenized in buffer containing 1% NP40, 150 mmol/L NaCl, 50 mmol/L Tris (pH 7.4), 1 mmol/L Na~3~VO~4~, 1 mmol/L phenylmethylsulfonyl fluoride, and protease inhibitor (Roche Diagnostics) using a plastic pestle and ultrasonicator. Protein content was assessed with the Bradford protein assay. Protein samples were fractionated under reducing conditions on SDS-9% PAGE (Bio-Rad). After electrophoresis, proteins were electroblotted onto nitrocellulose membranes, blocked with 5% nonfat dry milk in PBS, probed with primary antibodies against α-tubulin (Cell Signaling, cat. 2125S), ephrinA5 (R&D Systems, cat. AF3743), or the EphA5 receptor (Sigma-Aldrich, cat. P8651), and incubated with the appropriate peroxidase-conjugated secondary reagent (horseradish peroxidase–linked protein A, 1:2000; Sigma-Aldrich). The immunoblots were developed using an enhanced chemiluminescence detection system (Amersham Biosciences).

## Electron Microscopy

Briefly, animals were perfused with paraformaldehyde fixative, and ultrathin brain sections were cut on a Leica ultramicrotome, collected on Formvar-coated single-slot grids, and analyzed with a Tecnai 12 Biotwin EM (FEI). Glial coverage of the cell membrane of randomly selected VMH cells was measured using ImageJ software. EM photographs (original magnification ×11,500) were used first to measure the perimeter of each VMH neuron analyzed and then to determine the amount of membrane covered by glia (in nanometers). Results are reported as glia coverage/perimeter of the VMH neurons.

Characteristics of synaptic contacts were defined as previously described (20). They were collected from serial sections of the cell membrane of randomly selected VMH neurons. Synapse characterization was performed at original magnification ×20,000, and quantitative measurements were performed at original magnification ×11,500. Results are reported as synapse number/perimeter of the VMH neurons (21).
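Both EM endpoints reduce to simple per-neuron ratios. The following sketch spells out the arithmetic; the field names and measurements are hypothetical.

```python
# Sketch of the two EM morphometry ratios described above: glial coverage of the
# soma membrane (glia-covered membrane length / perimeter) and synapse density
# (synapse count / perimeter). All values below are made up for illustration.

neurons = [
    # perimeter and glia-covered membrane in nm, traced at x11,500 magnification
    {"perimeter_nm": 52_000, "glia_covered_nm": 31_000, "synapse_count": 4},
    {"perimeter_nm": 47_500, "glia_covered_nm": 22_800, "synapse_count": 6},
]

for i, n in enumerate(neurons, start=1):
    glia_coverage = n["glia_covered_nm"] / n["perimeter_nm"]            # fraction
    synapse_density = n["synapse_count"] / (n["perimeter_nm"] / 1_000)  # per um
    print(f"neuron {i}: glia coverage {glia_coverage:.2f}, "
          f"{synapse_density:.2f} synapses/um of membrane")
```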
## Statistics

Data are expressed as means ± SEM. Analysis was performed by one-way ANOVA or the Student *t* test, as appropriate, or by two-way ANOVA for repeated measures followed by post hoc analysis, using GraphPad Prism 4.0 software (GraphPad Software, Inc., San Diego, CA). *P* < 0.05 was considered statistically significant.

# Results

## Recurrent Hypoglycemia and EphrinA5 Expression

As shown in Fig. 1, rats exposed to recurrent hypoglycemia for 3 days exhibited a 25% reduction in ephrinA5 expression in the VMH (Fig. 1A and B). In contrast, no significant change in the expression of the EphA5 receptor in the VMH was detected (Fig. 1C and D).

## Stimulation of VMH EphA5 Receptors in Rats Exposed to Recurrent Hypoglycemia

To assess the biological consequences of the reduction in ephrinA5 expression, we microinjected the Eph receptor agonist ephrinA5-Fc into the VMH before conducting a hyperinsulinemic-hypoglycemic clamp study in rats previously exposed to three episodes of insulin-induced hypoglycemia. A schematic representation of the experimental protocol is presented in Fig. 2A. Body weight and plasma levels of glucose, insulin, glucagon, epinephrine, and norepinephrine were indistinguishable at baseline and immediately after completion of the VMH microinjection of ephrinA5-Fc or control-Fc (Table 1). Subsequently, during the hypoglycemic clamp study, plasma glucose (Fig. 2B) and insulin (Fig. 2C) were indistinguishable between the two groups. EphrinA5-Fc delivery, however, significantly reduced within 15 min the glucose infusion rate required to maintain hypoglycemia (Fig. 2D). This was accompanied by a rapid 150% (*P* < 0.001) increase in glucagon release (Fig. 2E). As was observed in our previous study (8) in rats not exposed to antecedent hypoglycemia, neither plasma epinephrine (Fig. 2F) nor norepinephrine (Fig. 2G) responses to hypoglycemia were significantly altered by VMH delivery of ephrinA5-Fc compared with the control-Fc microinjection.

Table 1. Characteristics of the rats with recurrent hypoglycemia at baseline and after microinjection of ephrinA5-Fc in the VMH.

## Effect of Recurrent Hypoglycemia on Glial Ensheathment and Synaptic Input Organization

Next, we assessed whether recurrent hypoglycemia affected VMH synaptic organization and glial ensheathment in rats exposed for 3 days to recurrent hypoglycemia compared with controls, a model we have previously shown suppresses hypoglycemic counterregulation (22). Figure 3A and B compares representative EMs of glial ensheathment of randomly selected VMH neuron perikarya in control and recurrent hypoglycemic rats. The rats exposed to recurrent hypoglycemia exhibited reduced glial coverage of neurons (*P* < 0.001) (Fig. 3C) and, as a result, more total synaptic contacts on randomly selected VMH neurons (*P* < 0.05) (Fig. 3D).

# Discussion

The current study demonstrates that exposure of nondiabetic rats to recurrent hypoglycemia for 3 consecutive days diminishes ephrinA5 expression within the VMH, in association with reduced glial coverage of VMH neurons and synaptic remodeling. In addition, targeted VMH delivery of the EphA5 receptor ligand ephrinA5-Fc enhances glucose counterregulation and glucagon release in rats exposed to recurrent hypoglycemia. These findings are consistent with the possibility that recurrent hypoglycemia diminishes EphA5 receptor forward signaling in the VMH, which in turn reduces the magnitude of glucagon secretion.

Signaling via the EphA/ephrinA receptor system has been reported to regulate neuron-astrocyte interactions that cause rapid changes in synaptic structural and functional plasticity (17,23). It has been proposed that the loss of ephrinA alters astrocytic-neuronal contacts (24), whereas application of ephrinA3-Fc or endogenous ephrin induces rapid growth of astrocyte processes and of new filopodia (17). In addition, activation of EphA4 by ephrinA3 has been shown to induce spine retraction (25). The current finding that recurrent hypoglycemia decreases ephrinA5 expression and produces diminished glial coverage and more synaptic connections within the VMH raises the question of a possible relationship. The fact that bypassing ephrinA5 using a targeted VMH ephrinA5-Fc microinjection can increase counterregulatory responses in animals exposed to recurrent hypoglycemia is in keeping with the hypothesis that reduced VMH ephrinA5 expression might induce alterations in VMH synaptic plasticity that in turn contribute to the development of disordered glucose counterregulation. However, a direct link between the observed changes in ephrinA5 and the rearrangements in glial coverage and synaptic connections remains to be established in future studies.

Previous studies have reported rapid changes in synaptic network connectivity and glial morphology in the VMH in response to alterations in energy substrate bioavailability (15,26–30). Acute hypoglycemia has been shown to alter synaptophysin expression, findings consistent with a rapid alteration in synaptic morphology (31,32), whereas insulin-deficient diabetic rats display a decrease in the number of hypothalamic astrocytes as a consequence of increased death and decreased proliferation (33). Given that diabetes and recurrent glucose deprivation are both accompanied by impaired counterregulation (22), these observations are consistent with the hypothesis that synaptic connectivity and the function of glia in the VMH play a significant role in supporting the neurotransmission required for proper counterregulatory responses to acute hypoglycemia.

It is noteworthy that the principal effect of VMH microinjection of the ephrinA5 ligand on hypoglycemia-induced counterregulatory hormone release was on glucagon, whereas recurrent hypoglycemia normally leads to a suppression of both glucagon and epinephrine levels in nondiabetic animals. These findings are consistent with previous studies in mice showing that knockdown of VGLUT2 (vesicular glutamate transporter) selectively in SF1 VMH neurons predominately inhibited the secretion of glucagon in response to acute hypoglycemia (34). It should be noted, however, that alterations in EphA receptor/ephrinA signaling appear to influence epinephrine responses as well.
Using a targeted gene expression manipulation approach to chronically alter VMH ephrinA5 expression, we observed that overexpression of ephrinA5 in the VMH stimulates, whereas targeted VMH knockdown of ephrinA5 inhibits, epinephrine as well as glucagon responses to acute hypoglycemia. These effects also appear to be mediated by alterations in glutamate-glutamine cycling (8). In this study, we did not include a control group not subjected to recurrent hypoglycemia, and thus the extent to which acute delivery of the EphA receptor agonist restored the glucagon response to hypoglycemia in rats exposed to recurrent hypoglycemia cannot be determined. However, results from previous studies from our laboratory using a similar hypoglycemic clamp protocol indicate that the effect of the EphA receptor agonist on glucagon responses appears to be at least as great as the suppressive effect of recurrent hypoglycemia on glucagon levels (8).

Given that the EphA5/ephrinA5 system is mainly localized to glutamatergic synapses (12,25), it is intriguing to speculate that the stimulation of glucagon produced by the EphA5 receptor agonist in the current study is mediated by augmented VMH glutamatergic neurotransmission. The maintenance of VMH glutamate neurotransmission during acute hypoglycemia is supported by the transport of glutamate into astrocytes, resulting in the production of glutamine for delivery to neurons and glutamate-glutamine cycle activation (35). Previous studies have shown that astrocyte synaptic coverage is linked to glutamate clearance and the activation of metabotropic glutamate receptors (36), and this has been proposed to alter synaptic and astroglial organization and, in turn, neurotransmission (37,38). Thus, the reduced glial coverage of VMH neurons observed in the current study in rats exposed to recurrent hypoglycemia may have produced a deficit in the astroglial support of glutamate neurotransmission during hypoglycemia. Interestingly, this was associated with more neuronal synaptic contacts in the VMH (Fig. 3D), which appeared to be in large part symmetric and thus potentially γ-aminobutyric acid (GABA) inhibitory in nature (21). An increase in VMH GABA tone has been shown to be an important contributor to the development of impaired glucose counterregulation in response to recurrent hypoglycemia (39).

Taken together, our data demonstrate that recurrent hypoglycemia alters neuron-glia plasticity in VMH nuclei and diminishes ephrinA5 ligand expression within the VMH. It is thus possible that decreased ephrin-induced activation of Eph receptors in VMH glutamate neurons may contribute to the impairment in glucose counterregulation in response to recurrent antecedent hypoglycemia.
## Article Information

**Acknowledgments.** The authors are grateful to Wanling Zhu, from R.S.S.'s laboratory, for performing animal surgeries; to Aida Groszman, Maria Batsu, Codruta Todeasa, and Ralph J. Jacob, from the Yale Center for Clinical Investigation Core Laboratory of the Yale School of Medicine, and Jan Czyzyk, from the University of Rochester Medical Center School of Medicine and Dentistry, for assisting with the protein expression experiments; and to Erzsebet Borok, from the Yale Program in Integrative Cell Signaling and Neurobiology of Metabolism, for excellent technical support and assistance.

**Funding.** This work was supported by research grants from the National Institute of Diabetes and Digestive and Kidney Diseases (NIDDK) (20495), the Juvenile Diabetes Research Foundation, and the NIDDK-supported Diabetes Endocrinology Research Center.

**Duality of Interest.** No potential conflicts of interest relevant to this article were reported.

**Author Contributions.** B.S. designed the study, performed animal surgery and studies, analyzed data, and wrote the manuscript. T.L.H. designed the EM experiments. R.S.S. designed the study, reviewed data, and revised the manuscript. B.S. and R.S.S. are the guarantors of this work and, as such, had full access to all the data in the study and take responsibility for the integrity of the data and the accuracy of the data analysis.

**Prior Presentation.** This study was presented at the 70th Scientific Sessions of the American Diabetes Association, Orlando, FL, 25–29 June 2010.

date: 2017-08
title: WHO urges action against HIV drug resistance threat

20 JULY 2017 | GENEVA – WHO alerts countries to the increasing trend of resistance to HIV drugs detailed in a report based on national surveys conducted in several countries. The Organization warns that this growing threat could undermine global progress in treating and preventing HIV infection if early and effective action is not taken.

The WHO HIV drug resistance report 2017 shows that in 6 of the 11 countries surveyed in Africa, Asia and Latin America, over 10% of people starting antiretroviral therapy had a strain of HIV that was resistant to some of the most widely used HIV medicines. Once the threshold of 10% has been reached, WHO recommends that those countries urgently review their HIV treatment programmes.

# HIV drug resistance report 2017

"Antimicrobial drug resistance is a growing challenge to global health and sustainable development," said Dr Tedros Adhanom Ghebreyesus, WHO Director-General. "We need to proactively address the rising levels of resistance to HIV drugs if we are to achieve the global target of ending AIDS by 2030."

HIV drug resistance develops when people do not adhere to a prescribed treatment plan, often because they do not have consistent access to quality HIV treatment and care. Individuals with HIV drug resistance will start to fail therapy and may also transmit drug-resistant viruses to others.
The level of HIV in their blood will increase, unless they change to a different treatment regimen, which could be more expensive – and, in many countries, still harder to obtain.

Of the 36.7 million people living with HIV worldwide, 19.5 million people were accessing antiretroviral therapy in 2016. The majority of these people are doing well, with treatment proving highly effective in suppressing the HIV virus. But a growing number are experiencing the consequences of drug resistance.

WHO is therefore issuing new guidelines to help countries address HIV drug resistance. These recommend that countries monitor the quality of their treatment programmes and take action as soon as treatment failure is detected.

"We need to ensure that people who start treatment can stay on effective treatment, to prevent the emergence of HIV drug resistance," said Dr Gottfried Hirnschall, Director of WHO's HIV Department and Global Hepatitis Programme. "When levels of HIV drug resistance become high, we recommend that countries shift to an alternative first-line therapy for those who are starting treatment."

Increasing HIV drug resistance trends could lead to more infections and deaths. Mathematical modelling shows that an additional 135 000 deaths and 105 000 new infections could follow in the next five years if no action is taken, and HIV treatment costs could increase by an additional US$ 650 million during this time.

Tackling HIV drug resistance will require the active involvement of a broad range of partners. A new five-year Global Action Plan calls on all countries and partners to join efforts to prevent, monitor and respond to HIV drug resistance and to protect the ongoing progress towards the Sustainable Development Goal of ending the AIDS epidemic by 2030. In addition, WHO has developed new tools to help countries monitor HIV drug resistance, improve the quality of treatment programmes and transition to new HIV treatments, if needed.

The WHO HIV drug resistance report 2017 was co-authored by the Global Fund to Fight AIDS, Tuberculosis and Malaria, and the Centers for Disease Control and Prevention, USA.

"This new report shows a worrying picture of increasing levels of HIV drug resistance and, if unchecked, it will be a major risk to program impact," said Dr Marijke Wijnroks, Interim Executive Director of the Global Fund. "We strongly recommend implementing WHO recommendations for early warning indicators and HIV drug resistance surveys in every national plan for antiretroviral therapy, and to consider funding them through Global Fund grants or reprogramming."

Dr Shannon Hader, Director of CDC's Division of Global HIV and Tuberculosis, US Centers for Disease Control and Prevention, added: "The new report pulls together key HIV drug resistance survey findings from across the globe that, taken together with other national-level data, confirm we must be forward-thinking in our efforts to combat resistance: scaling up viral load testing, improving the quality of treatment programs, and transitioning to new drugs like dolutegravir."

Dr Hader continued: "Overall high rates of viral suppression across three recent national Population-based HIV Impact Assessments showed that present first-line regimens remain largely effective. However, special attention to populations at risk for higher resistance, such as pediatrics, adolescents, pregnant women and key populations, will be critical to target more urgent interventions.
We call on the global community for continued vigilance and responsiveness."

Available from: 

abstract: Decades of research in rodent models has shown that early postnatal overnutrition induces excess adiposity and other components of metabolic syndrome that persist into adulthood. The specific biologic mechanisms explaining the persistence of these effects, however, remain unknown. On postnatal day 1 (P1), mice were fostered in control (C) or small litters (SL). SL mice had increased body weight and adiposity at weaning (P21), which persisted to adulthood (P180). Detailed metabolic studies indicated that female adult SL mice have decreased physical activity and energy expenditure but not increased food intake. Genome-scale DNA methylation profiling identified extensive changes in hypothalamic DNA methylation during the suckling period, suggesting that it is a critical period for developmental epigenetics in the mouse hypothalamus. Indeed, SL mice exhibited subtle and sex-specific changes in hypothalamic DNA methylation that persisted from early life to adulthood, providing a potential mechanistic basis for the sustained physiological effects. Expression profiling in adult hypothalamus likewise provided evidence of widespread sex-specific alterations in gene expression. Together, our data indicate that early postnatal overnutrition leads to a reduction in spontaneous physical activity and energy expenditure in females and suggest that early postnatal life is a critical period during which nutrition can affect hypothalamic developmental epigenetics.
author: Ge Li; John J. Kohorst; Wenjuan Zhang; Eleonora Laritsky; Govindarajan Kunde-Ramamoorthy; Maria S. Baker; Marta L. Fiorotto; Robert A. Waterland
Corresponding author: Robert A. Waterland.
date: 2013-08
references:
title: Early Postnatal Nutrition Determines Adult Physical Activity and Energy Expenditure in Female Mice

Environmental influences on the development of body weight regulatory mechanisms may be an important factor in the worldwide obesity epidemic (1,2). Evidence in humans indicates that overnutrition during early postnatal life can permanently alter body weight regulation, increasing susceptibility to obesity throughout life (3,4). Accordingly, various animal models have been developed to explore the effects of infant overnutrition on lifelong obesity risk. Artificial feeding of rodent pups by intragastric cannula provides clear evidence for sustained effects of early postnatal overnutrition (5) but requires raising newborn rodents in isolation, which itself has long-term consequences. Overfeeding dams during lactation, with a high-fat diet, for example, could indirectly overnourish pups.
Indeed, two recent rodent studies (6,7) report that the obesogenic effect of maternal high-fat diet occurs specifically during the suckling period. Pups from high-fat-fed dams are not consistently heavier at weaning, however (8,9), indicating that maternal overnutrition does not reliably induce early postnatal overnutrition.

In the rodent small litter model of early postnatal overnutrition (10), offspring from several litters born on the same day are randomized and fostered to either normal-size (control [C]) or small (SL) litters. Suckling in SL is naturalistic, easy to implement, and consistently induces early postnatal overnutrition, providing an apt model in which to study potential long-term effects of infantile overnutrition by excessive formula feeding (11). The early postnatal exposure induces elevated body weight and adiposity that persist to adulthood (10–12), with concomitant increases in plasma insulin (11,13) and leptin concentrations (13) and impaired glucose tolerance (11,13,14). It remains unresolved, however, whether the sustained increase in adiposity of adult SL rodents results from increased energy intake or decreased energy expenditure (13,15). Moreover, the fundamental mechanisms by which the metabolic effects of SL exposure persist to adulthood are unknown.

Environmental influences on developmental epigenetics (16,17) provide a likely mechanism. Epigenetic mechanisms regulate mitotically heritable alterations in gene expression potential that are not caused by changes in DNA sequence (18) and are known to play key roles in brain development (19). DNA methylation, the most stable epigenetic modification (20), is a likely mechanism to explain effects that persist for a lifetime (2). Given its central role in regulating food intake and energy expenditure (21), the hypothalamus is an obvious tissue in which to explore a potential epigenetic basis for induced alterations in body weight regulation. We therefore set out to determine *1*) whether the persistently elevated adiposity of SL mice is caused by increased food intake or decreased energy expenditure, and *2*) whether early postnatal overnutrition causes persistent changes in hypothalamic epigenetic regulation that may perpetuate altered body weight regulation.

# RESEARCH DESIGN AND METHODS

For the litter size studies, virgin FVB/NJ females (The Jackson Laboratory) were mated with FVB/NJ males at age 8 weeks. In each batch, 14–15 mating pairs were set up on the same day; four independent batches of mice were studied over the course of 2 years. On postnatal day 1 (P1), pups from all litters born on the same day (P0) were weighed, sexed, and pooled randomly. Only pups from a birth litter size of 6–12 were included. Foster dams received either four (SL) or nine (C) pups. There were two females and two males in each SL litter and four to five of each sex in each C litter. Litter assignment was performed systematically to balance body weight at P1. At P21, offspring from both groups were weaned onto a fixed-formula, soy protein–free diet (2020X; Harlan Teklad); females were housed two to five per cage, and males were housed individually. Body composition, food intake, energy expenditure, and physical activity were measured at P21–P25 and approximately P180. The P21 vs.
P0 methylation-specific amplification and microarray hybridization (MSAM) comparisons used female C57BL/6J mice, and the pyrosequencing validation studies were performed in C57BL/6J and FVB/NJ mice of both sexes (The Jackson Laboratory). Sex was confirmed by PCR amplification of *Sry*. All applicable institutional and governmental regulations concerning the ethical use of animals were followed during this research. The protocol was approved by the Institutional Animal Care and Use Committee of Baylor College of Medicine. All mice were housed in a temperature-controlled facility (22°C), provided free access to food and water, and maintained on a 12-h light cycle.

## Body composition.

Body composition was determined by quantitative magnetic resonance (EchoMRI-100; Echo Medical Systems) according to the manufacturer's instructions.

## Food intake, energy expenditure, and physical activity.

Prior to metabolic study, mice were acclimatized to single housing in Comprehensive Laboratory Animal Monitoring System (CLAMS) cages (Columbus Instruments, Columbus, OH) for 3 days. Mice were subsequently transferred to calorimetry cages for 4 days, during which food intake, energy expenditure (by indirect calorimetry), and physical activity were monitored in real time (22). Only data from the last 3 days (6:00 a.m. to 6:00 a.m.) were used. Upon weaning (P21), mice were housed in a normal cage for 1 day before being introduced to the CLAMS cages. Hence, the metabolic measurements did not commence until P25.

## Genome-scale DNA methylation profiling.

MSAM was performed as previously described (23,24), using a starting quantity of 0.5 µg genomic DNA. In both MSAM experiments, two independent cohybridizations (biological replicates) were performed on a custom 2×105k array (Agilent Technologies). The array includes 90,694 probes covering 86% (23,742) of the 27,675 potentially informative *Sma*I/*Xma*I intervals between 200 bp and 2 kb in the mouse genome (average 3.8 probes per interval) (25). Fold change and *P* value were calculated as the mean and median, respectively, of all probes in each interval; "hits" were called for fold change >2 or <0.5 and *P* < 0.0001 in both cohybridizations. Gene Ontology analysis was performed using the Gene Ontology enRIchment anaLysis and visuaLizAtion tool (GOrilla) (26), in each case using the potentially informative genes on the array as the background set. *Sma*I/*Xma*I cut sites were annotated by RefGene according to the NCBI36/mm8 mouse genome (Feb 2006 release), following the schema in [Supplementary Fig. 2](http://diabetes.diabetesjournals.org/lookup/suppl/doi:10.2337/db12-1306/-/DC1). Relevant details of the MSAM experiments and the hybridization data are available in the GEO database: P21 vs. P0, [www.ncbi.nlm.nih.gov/geo/query/acc.cgi?acc=GSE32475](http://www.ncbi.nlm.nih.gov/geo/query/acc.cgi?acc=GSE32475); and SL vs. C, P180, [www.ncbi.nlm.nih.gov/geo/query/acc.cgi?acc=GSE32477](http://www.ncbi.nlm.nih.gov/geo/query/acc.cgi?acc=GSE32477).
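The hit-calling rule just described (fold change >2 or <0.5 with *P* < 0.0001, required in both independent cohybridizations) can be expressed compactly. The sketch below assumes a per-interval table with hypothetical column names and toy values.

```python
import pandas as pd

# Sketch of the MSAM hit-calling rule: an interval is a "hit" only if fold
# change > 2 or < 0.5 AND P < 0.0001 in BOTH cohybridizations. Column names
# and values are assumptions for illustration.

df = pd.DataFrame({
    "interval": ["chr1:100", "chr2:200", "chr3:300"],
    "fc_rep1":  [2.6, 1.4, 0.31],    # mean probe fold change, replicate 1
    "p_rep1":   [2e-5, 1e-5, 5e-5],  # median probe P value, replicate 1
    "fc_rep2":  [2.2, 2.5, 0.44],
    "p_rep2":   [8e-5, 3e-4, 1e-6],
})

def is_hit(fc, p):
    return ((fc > 2) | (fc < 0.5)) & (p < 1e-4)

hits = df[is_hit(df.fc_rep1, df.p_rep1) & is_hit(df.fc_rep2, df.p_rep2)]
print(hits.interval.tolist())  # -> ['chr1:100', 'chr3:300']
```

Requiring concordance across both replicates is what makes the criteria stringent: an interval passing in only one cohybridization (the second row above) is discarded.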
## Quantitative analysis of DNA methylation.

Site-specific analysis of CpG methylation was performed by bisulfite pyrosequencing (23,24). For validation of MSAM hits, pyrosequencing assays were designed to cover both informative *Sma*I/*Xma*I sites when possible. A hit was considered validated if either assay showed a substantial methylation difference in the same direction as in MSAM. Sensitivity and linearity of each pyrosequencing assay were confirmed by running methylation standards (27). Pyrosequencing primers and annotations of the CpG sites examined are listed in [Supplementary Tables 3 and 4](http://diabetes.diabetesjournals.org/lookup/suppl/doi:10.2337/db12-1306/-/DC1), respectively.

## Gene expression profiling and analysis.

Total hypothalamic RNA was isolated with the RNeasy kit (Qiagen). RNA quality control, cRNA preparation, labeling, and microarray hybridization were conducted as previously described (28). cRNA was hybridized onto Illumina MouseWG6 v2 Expression BeadChips (Illumina) following the manufacturer's protocol. Signal intensities extracted from Illumina GenomeStudio software were preprocessed using LUMI (29) and the R statistical package, including probes with a detection *P* value ≤0.05 in at least half of the samples. These 20,791 probe signals were then quantile normalized. To test for effects of group, sex, and group × sex, we performed two-way ANOVA using Genomics Suite Software, version 6.6 β (Partek). Contrasts were applied to all groups to identify differentially expressed transcripts, using an α level of 0.05 after Benjamini-Hochberg false discovery rate adjustment. Network analyses were performed using IPA (Ingenuity Systems). Relevant details of the expression microarray experiment and the hybridization data are available in the GEO database: [www.ncbi.nlm.nih.gov/geo/query/acc.cgi?acc=GSE40616](http://www.ncbi.nlm.nih.gov/geo/query/acc.cgi?acc=GSE40616).

## Quantitative analysis of gene expression.

Expression levels of specific genes were determined by quantitative PCR. Total hypothalamic RNA was isolated with the RNeasy kit (Qiagen), and cDNA was synthesized using M-MLV Reverse Transcriptase (Promega) with random primers (Life Technologies). Gene expression levels were determined with either TaqMan (Life Technologies) or SYBR Green (Life Technologies) chemistry, using the 2^−ΔΔCt^ method (assay details are provided in [Supplementary Table 6](http://diabetes.diabetesjournals.org/lookup/suppl/doi:10.2337/db12-1306/-/DC1)). In all cases, *ActB* was used as an endogenous control.
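For reference, the 2^−ΔΔCt^ calculation normalizes the target gene's Ct to *ActB* within each sample and then compares the sample with a reference group. A minimal sketch with hypothetical Ct values:

```python
# Sketch of the 2^-ddCt relative-quantification method named above, with ActB
# as the endogenous control. All Ct values are hypothetical.

def ddct_fold_change(ct_target_sample, ct_actb_sample, ct_target_ref, ct_actb_ref):
    """Relative expression of a target gene in a sample vs. a reference group."""
    dct_sample = ct_target_sample - ct_actb_sample  # normalize to ActB
    dct_ref = ct_target_ref - ct_actb_ref
    ddct = dct_sample - dct_ref
    return 2.0 ** (-ddct)

# Example: the target crosses threshold one cycle earlier (relative to ActB)
# in the sample than in the reference, i.e., roughly twofold higher expression.
print(ddct_fold_change(24.0, 18.0, 25.0, 18.0))  # -> 2.0
```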
## General statistical methods.

Enrichment of *Sma*I/*Xma*I cut sites relative to genomic regions was analyzed by χ^2^ test. Group differences in body weight at P1 were analyzed by two-tailed *t* test. Body composition data at P25 and P180 were analyzed by ANOVA (SAS Proc Mixed). Body weight data at P21, P60, P120, and P180 were analyzed by repeated-measures ANOVA (SAS Proc Mixed, compound symmetry covariance structure) with age in the repeated effect. For the CLAMS studies, hourly measurements of food intake, energy expenditure, and activity counts for three consecutive days were averaged into one 24-h record for each mouse. Repeated-measures ANOVA used the full power of these time-series data while recognizing the nonindependence of the 24 multiple measures within each mouse. Analyses of food intake, energy expenditure, and physical activity were performed both with and without lean mass and fat mass included as independent variables, to adjust for group differences in body size and composition (30). Group differences in DNA methylation by pyrosequencing were analyzed by repeated-measures ANOVA, with CpG site as the repeated effect ([Supplementary Table 2](http://diabetes.diabetesjournals.org/lookup/suppl/doi:10.2337/db12-1306/-/DC1)). Loci that showed significant group effects on methylation but no significant group × age interaction were considered persistently altered by SL suckling. Requiring the same group difference, in the same direction, at both ages (in independent sets of mice) affords substantial protection against type 1 error; these analyses were therefore not otherwise adjusted for multiple testing. Akaike information criterion (AIC) model selection by adjusted *R*^2^ (SAS Proc Reg) was performed based on individual average dark-period energy expenditure and physical activity data.

# RESULTS

## Early postnatal overnutrition reduces adult energy expenditure in females.

We used the SL mouse model (Fig. 1A) to study persistent effects of overnutrition during the suckling period. We studied four independent batches (groups of litters cross-fostered at one time) over 2 years, including offspring from 24 C and 26 SL litters in total. Consistent with previous studies, SL mice were heavier at P21 and remained so into adulthood (*P* < 0.0001 in both females and males) (Fig. 1B). Although the increase in adult body weight was modest, effects on body composition were substantial. Both male and female SL adults had 50% higher fat mass and percent body fat compared with C mice (*P* < 0.005 in all comparisons) (Fig. 1C). There were no group differences in lean mass. Clearly, suckling in a small litter induces persistent changes in regulatory mechanisms that affect adult body composition.

To determine whether these changes involve alterations in food intake and/or energy expenditure, we used metabolic cages to simultaneously monitor food intake, energy expenditure, and voluntary physical activity. In an attempt to identify persistent metabolic differences that might explain the sustained group differences in adiposity, we performed the metabolic measurements shortly after weaning (P25) and in adulthood (P180). (Again, these data represent four batches of mice studied over the course of 2 years.) After appropriate least squares normalization for lean mass and fat mass (30), food intake of SL mice tended, surprisingly, to be slightly lower than that of C mice at both P25 and P180 (Fig. 2A), but these differences were not statistically significant. Energy expenditure (normalized for lean mass and fat mass [30]) was nearly identical between SL and C mice at P25 (Fig. 2B). At P180, however, energy expenditure of SL females was significantly lower than that of C females (*P* = 0.002); this group difference was significant during both the light and dark periods. Resting metabolic rate was estimated as the lowest average energy expenditure within 1 h for each mouse. After least squares normalization for lean mass and fat mass, female mice showed no group differences in resting metabolic rate at either age. Resting metabolic rate of SL males, however, was higher at P25 (*P* = 0.02) and lower at P180 (*P* = 0.03) relative to C males. There were no group differences in respiratory exchange ratio. Group differences in voluntary physical activity, again normalized for lean mass and fat mass, were consistent with those in energy expenditure: none were found at P25, but SL females were significantly less active than C females at P180, specifically during the dark period (group × light interaction *P* = 0.0009) (Fig. 2C). It is noteworthy that adult SL females were less physically active even after including body weight and body composition in the model; hence, their lower activity was not caused by their excess adiposity. (For comparison, unnormalized data on food intake, energy expenditure, and physical activity are shown in [Supplementary Fig. 1](http://diabetes.diabetesjournals.org/lookup/suppl/doi:10.2337/db12-1306/-/DC1).) Including physical activity in the model for energy expenditure of P180 females substantially reduced the significance of the SL effect (from *P* = 0.002 to *P* = 0.01), suggesting that physical activity explains much of the group difference in energy expenditure. Together, these data indicate that the persistent alterations in energy balance of female SL mice are due not to excess food intake but, rather, to reduced energy expenditure. The sex specificity of this effect may be related to the male-specific decline of physical activity with age ([Supplementary Fig. 1C](http://diabetes.diabetesjournals.org/lookup/suppl/doi:10.2337/db12-1306/-/DC1)).
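The "least squares normalization" used throughout these comparisons amounts to testing the group effect with lean mass and fat mass as covariates. The paper fits these models in SAS Proc Mixed; the sketch below shows an analogous, simplified (non-repeated-measures) OLS version in Python on simulated data, so every variable name and value here is an assumption.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated data: test the group (SL vs. C) effect on energy expenditure while
# adjusting for lean mass and fat mass as covariates.
rng = np.random.default_rng(0)
n = 40
df = pd.DataFrame({
    "group": ["C"] * (n // 2) + ["SL"] * (n // 2),
    "lean_g": rng.normal(22, 2, n),
    "fat_g": np.r_[rng.normal(3.0, 0.8, n // 2), rng.normal(4.5, 0.8, n // 2)],
})
# Expenditure driven by lean mass, with a small built-in deficit in SL mice.
df["ee_kcal_h"] = (0.02 * df.lean_g + 0.005 * df.fat_g
                   - 0.03 * (df.group == "SL") + rng.normal(0, 0.02, n))

model = smf.ols("ee_kcal_h ~ group + lean_g + fat_g", data=df).fit()
print(model.params["group[T.SL]"], model.pvalues["group[T.SL]"])
```

The key design point is that raw (unnormalized) expenditure would largely reflect the SL mice's greater fat mass; the covariate-adjusted group coefficient isolates the difference that body composition cannot explain.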
## Extensive epigenetic development occurs in the early postnatal hypothalamus.

Ontogenic periods when epigenetic mechanisms are being established or undergoing maturation constitute critical periods during which environmental influences can cause persistent changes in epigenetic regulation (31,32). To determine whether the suckling period might be a critical period for developmental epigenetics in the hypothalamus, we tested for changes in hypothalamic DNA methylation. We used MSAM, which is based on sequential digestion of genomic DNA with the methylation-sensitive and -insensitive isoschizomers *Sma*I and *Xma*I (23,33). Two independent P21 vs. P0 MSAM cohybridizations (incorporating a dye swap) were performed.

Using stringent criteria validated in our previous studies (24,33), 868 *Sma*I/*Xma*I intervals changed methylation from P0 to P21. Only 31 intervals lost methylation (Fig. 3A), and the genomic distribution of the associated *Sma*I/*Xma*I cut sites was not different from that on the array (Fig. 3B and [Supplementary Fig. 2](http://diabetes.diabetesjournals.org/lookup/suppl/doi:10.2337/db12-1306/-/DC1)). Methylation increased at 837 intervals (Fig. 3A); the associated cut sites were significantly underrepresented at promoters (*P* < 0.0001) and overrepresented in introns (*P* < 0.0001) (Fig. 3B).
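The under- and overrepresentation statements are χ^2^ goodness-of-fit comparisons of observed cut-site counts against the array-wide distribution (the methods' "enrichment of *Sma*I/*Xma*I cut sites relative to genomic regions"). A sketch with hypothetical counts and background proportions:

```python
from scipy.stats import chisquare

# Sketch of the chi-square enrichment test: compare the genomic distribution of
# cut sites linked to methylation gains against the distribution of all
# potentially informative sites on the array. Counts and background proportions
# below are hypothetical (only their total, 837, matches the text).

observed = [60, 520, 180, 77]            # promoter, intron, exon, intergenic
background = [0.15, 0.45, 0.20, 0.20]    # array-wide proportions (assumed)
total = sum(observed)
expected = [p * total for p in background]

stat, p = chisquare(observed, f_exp=expected)
print(f"chi2 = {stat:.1f}, P = {p:.2e}")
# With these toy numbers, promoters fall below expectation (60 vs. ~126) and
# introns above it (520 vs. ~377), mirroring the direction reported above.
```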
In a larger number of P0 and P21 mice, we used bisulfite pyrosequencing (23,33) to measure P0–P21 changes in hypothalamic CpG methylation at 10 intervals identified by MSAM; all 10 (100%) validated (7 are shown in Fig. 4). Additionally, since the P21 vs. P0 MSAM studies were performed in C57BL/6J mice, we confirmed (at a subset of loci) that these methylation changes also occur in both sexes of FVB/NJ mice (the strain used for the litter size studies) ([Supplementary Fig. 3](http://diabetes.diabetesjournals.org/lookup/suppl/doi:10.2337/db12-1306/-/DC1)).

The functional significance of the developmental changes in DNA methylation was evaluated by gene ontology analysis. Relative to all potentially informative genes on the array, no enriched ontologies were found for the few genes associated with intervals that lost methylation. Genes that gained methylation, however, were significantly enriched for 20 biologic process categories (Fig. 3C); of these, 17 are explicitly related to development, including neurodevelopmental processes such as axon guidance and neuron differentiation. Postnatal development of the mouse hypothalamus clearly involves functionally important epigenetic changes. This may be a critical period during which environment can influence these processes, with long-term consequences.

## Early postnatal overnutrition causes persistent and sex-specific alterations in hypothalamic DNA methylation and gene expression.

We therefore examined DNA methylation differences between SL and C hypothalami. Intrigued by the large group differences in energy expenditure and physical activity in P180 females, we used MSAM to compare hypothalamic DNA methylation of SL and C females at P180. Two independent SL vs. C cohybridizations were performed, with each hypothalamic DNA sample pooled from five females drawn from different foster litters. The results, however, provided no evidence of persistent group differences in hypothalamic DNA methylation.

Reasoning that DNA methylation changes in SL mice might be too subtle to detect by MSAM, we used bisulfite pyrosequencing to examine a panel of candidate genes. Since genomic regions undergoing methylation change from P0 to P21 are most likely to show persistent effects of overnutrition during this period (2), most of the genomic regions that we selected were those identified in our P21 vs. P0 MSAM experiments. In addition to 10 of those already validated (Fig. 4), we examined six hits near genes previously reported to change expression in hypothalamus from P0 to P21 (34) and two showing interindividual variation in DNA methylation. Additionally, promoters of a few genes critical to hypothalamic function and development (*Agrp*, *Fto*, and *Pomc*) were included. In total, 24 loci (named according to the nearest gene) were selected ([Supplementary Table 1](http://diabetes.diabetesjournals.org/lookup/suppl/doi:10.2337/db12-1306/-/DC1)).

Of these, 15 showed no DNA methylation differences between SL and C hypothalamus at P25 ([Supplementary Fig. 4](http://diabetes.diabetesjournals.org/lookup/suppl/doi:10.2337/db12-1306/-/DC1) and [Supplementary Table 2](http://diabetes.diabetesjournals.org/lookup/suppl/doi:10.2337/db12-1306/-/DC1)) and were not examined at P180. The remaining nine loci were examined at both P25 and P180 ([Supplementary Fig. 5](http://diabetes.diabetesjournals.org/lookup/suppl/doi:10.2337/db12-1306/-/DC1) and [Supplementary Table 2](http://diabetes.diabetesjournals.org/lookup/suppl/doi:10.2337/db12-1306/-/DC1)). Considering this an exploratory data analysis, we set an α level of 0.1 for main effects and 0.2 for interactions. The initial model including both sexes showed significant main effects for age but not group (SL vs. C); interestingly, however, four loci showed significant group × sex interactions ([Supplementary Table 2](http://diabetes.diabetesjournals.org/lookup/suppl/doi:10.2337/db12-1306/-/DC1)). We therefore performed sex-specific analyses. In females, *AK145544*, *Aqp4*, and *Nolz1*, and in males *Gadd45b*, showed main effects of group that did not differ by age (no group × age interaction). Plots of average site-specific methylation at these loci (Fig. 5) illustrate subtle but persistent differences in DNA methylation.
Since most of these changes were found in females, we used quantitative real-time RT-PCR to measure gene expression of *AK145544*, *Aqp4*, and *Nolz1* in the hypothalamus of P180 females but found no significant group differences. To test whether differences in methylation and expression at these loci could explain individual variation in adult physical activity or energy expenditure, we applied the AIC (35) to identify the best model for each. In addition to methylation and expression of the three genes, SL group membership, lean mass, and fat mass were included as potential explanatory variables. Physical activity was not significantly predicted by any model. Remarkably, however, the best model for energy expenditure ([Supplementary Fig. 6](http://diabetes.diabetesjournals.org/lookup/suppl/doi:10.2337/db12-1306/-/DC1)) included expression of all three genes and methylation at *Nolz1* and *Aqp4*, and it predicted 55% of the variation in P180 energy expenditure in females (*P* = 0.025). Hence, group differences in DNA methylation and gene expression at these loci, though subtle, may explain some of the observed alterations in energy balance.
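Information-criterion model selection of this kind amounts to fitting a model for every subset of candidate predictors and keeping the best score. A sketch of that loop follows; the data frame and column names are hypothetical, and OLS scored by AIC stands in for the SAS adjusted-*R*^2^ procedure used in the paper.

```python
from itertools import combinations
import statsmodels.formula.api as smf

# Sketch of exhaustive model selection over candidate predictors: fit an OLS
# model for every nonempty subset and keep the one with the lowest AIC.
# `df` is assumed to hold one row per mouse with the listed columns.

def best_model(df, response, candidates):
    best = None
    for k in range(1, len(candidates) + 1):
        for subset in combinations(candidates, k):
            fit = smf.ols(f"{response} ~ " + " + ".join(subset), data=df).fit()
            if best is None or fit.aic < best.aic:
                best = fit
    return best

# Hypothetical usage:
# fit = best_model(df, "energy_expenditure",
#                  ["aqp4_meth", "nolz1_meth", "aqp4_expr", "nolz1_expr",
#                   "ak145544_expr", "group_sl", "lean_mass", "fat_mass"])
# print(fit.model.formula, fit.aic)
```

With eight candidates there are only 255 subsets, so exhaustive search is cheap; for larger predictor sets a stepwise or regularized approach would be preferred.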
As a complementary approach to identify genes with persistent alterations in epigenetic regulation, we profiled gene expression in P180 hypothalamus of SL and C mice. Three males and three females were studied in each group (12 arrays total). The results showed strong effects of sex (Fig. 6A), with 342 transcripts showing significant differences (false discovery rate <0.05). Although none of the group or group × sex effects survived multiple testing correction, there was an enrichment of low *P* value probes in the group × sex analysis (Fig. 6A), suggesting subtle sex-specific effects. We therefore performed gene ontology analysis on the 37 and 732 transcripts with at least a 20% group difference (*P* < 0.05) in females and males, respectively. (The reference list comprised the 14,628 transcripts significantly expressed in hypothalamus.) No enriched gene ontology terms were found in females. In males, however, for both the 381 genes upregulated and the 351 downregulated in SL hypothalamus, the foremost biological process related to the formation of neuronal projections. Examination of the gene ontology terms associated with the genes comprising these enrichments (Fig. 6B and [Supplementary Table 5](http://diabetes.diabetesjournals.org/lookup/suppl/doi:10.2337/db12-1306/-/DC1)) suggests a subtle shift in expression profile that may favor neuronal remodeling in the hypothalamus of adult SL males. Additionally, in an analysis of gene networks associated with the expression changes in male hypothalamus ([Supplementary Fig. 7](http://diabetes.diabetesjournals.org/lookup/suppl/doi:10.2337/db12-1306/-/DC1)), two of the top three networks were related to cellular development and nervous system development. These networks are centered on Atn1 and dynein, respectively (both regulators of neurodegeneration), again supporting potential alterations of neuronal remodeling in SL males.

# DISCUSSION

Here we showed that early postnatal overnutrition, known to permanently increase body weight and adiposity, also reduces voluntary physical activity and energy expenditure in adult females. These physiological changes were associated with persistent alterations in hypothalamic DNA methylation at specific loci. Overall, these findings provide support for the hypothesis that early postnatal overnutrition causes subtle but widespread changes in hypothalamic epigenetic regulation that persist to influence adult energy balance.

Our study addresses a key outstanding question: whether the persistently altered energy balance of SL rodents is due to increased food intake or decreased energy expenditure. Previous studies reported increased food intake (13,36) and energy expenditure (37) in adult SL rodents. Those conclusions, however, were based on nonnormalized data, disregarding the altered weight and body composition of SL rodents. Here, we used least squares means to appropriately adjust expenditure and intake data for body weight and composition (30). Compared with C mice of the same weight and body composition, adult SL mice were not hyperphagic (Fig. 2A). Their energy expenditure, however, again compared with C mice of the same weight and body composition, was lower (Fig. 2B), significantly so in females. Hence, our data provide strong evidence that reduced energy expenditure, not increased food intake, explains the increased adiposity of female adult SL mice.

In addition to food intake and energy expenditure, however, there are other determinants of energy balance, such as nutrient absorption, which were not measured in this study. Also, it is possible that group differences in central regulation of food intake might have been unmasked had the mice been provided a highly palatable diet (38). Other than physical activity, we did not measure additional determinants of energy expenditure, such as brown adipose tissue activity. These shortcomings may explain why the excess adiposity of male SL mice occurred without measurable differences in physical activity or energy expenditure. (Notably, a recent study found age-associated alterations in the thermogenic capacity of brown adipose tissue in male SL rats [39], consistent with our finding that resting metabolic rate is increased at P25 but decreased at P180 in male SL mice.) In females, the reduced energy expenditure was largely explained by reduced physical activity (Fig. 2C). In an earlier report in rats, prenatal undernutrition likewise caused persistent reductions in locomotor activity, most prominently in females (40). Given the worldwide trends of decreasing physical activity (41), it is crucial to determine whether, in humans as in rodents, nutrition during early life modulates voluntary physical activity for a lifetime.

Despite its importance in the central regulation of food intake and energy expenditure (21), our understanding of the molecular mechanisms driving functional development of the hypothalamus remains limited. Mouse hypothalamic development continues into early postnatal life, a critical period for the formation of leptin-sensitive neuroanatomic projections that function in energy homeostasis (42) and for major alterations of hypothalamic gene expression (34). Here, we have shown for the first time that during this same period widespread changes in DNA methylation—mostly increases—are underway. The association of these methylation increases with genes involved in neural development (Fig. 3C) suggests a process of postnatal epigenetic maturation.
In addition to food intake and energy expenditure, however, there are other determinants of energy balance, such as nutrient absorption, that were not measured in this study. Also, it is possible that group differences in central regulation of food intake may have been unmasked if mice had been provided a highly palatable diet (38). Other than physical activity, we did not measure additional determinants of energy expenditure, such as brown adipose tissue activity. These shortcomings may explain why the excess adiposity of male SL mice occurred without measurable differences in physical activity or energy expenditure. (Notably, a recent study found age-associated alterations in the thermogenic capacity of brown adipose tissue in male SL rats [39], consistent with our finding that resting metabolic rate is increased at P21 but decreased at P180 in male SL mice.) In females, the reduced energy expenditure was largely explained by reduced physical activity (Fig. 2C). In an earlier report in rats, prenatal undernutrition likewise caused persistent reductions in locomotor activity, most prominently in females (40). Given the worldwide trends of decreasing physical activity (41), it is crucial to determine whether, in humans as in rodents, nutrition during early life modulates voluntary physical activity for a lifetime.

Despite its importance in central regulation of food intake and energy expenditure (21), our understanding of the molecular mechanisms driving functional development of the hypothalamus remains limited. Mouse hypothalamic development continues into early postnatal life, a critical period for the formation of leptin-sensitive neuroanatomic projections that function in energy homeostasis (42) and for major alterations of hypothalamic gene expression (34). Here, we have shown for the first time that during this same period widespread changes in DNA methylation, mostly increases, are underway. The association of these methylation increases with genes involved in neural development (Fig. 3C) suggests a process of postnatal epigenetic maturation.

Because projections from the arcuate nucleus of the hypothalamus to other brain regions form prenatally in primates but postnatally in rodents (43), it is often proposed that hypothalamic development during the suckling period in the mouse is comparable with that in a third-trimester human. It is currently unknown, however, whether and when the epigenetic maturation we have documented in the postnatal mouse occurs in humans. Moreover, our finding that postnatal overnutrition leads to a decrease in physical activity in female mice raises the question of whether the mouse is a good model for physical activity in humans. Although we currently have only a rudimentary understanding of the neurobiological regulation of spontaneous physical activity (44), the hypothalamus and other brain regions are known to be involved, as are several highly conserved neuropeptides including cholecystokinin, corticotrophin-releasing hormone, leptin, and orexins.

It was recently reported that SL rats have alterations in hypothalamic DNA methylation (45,46). Those studies, however, assessed DNA methylation only at P21. Our data therefore provide the first evidence that early postnatal overnutrition induces persistent epigenetic changes in the hypothalamus. Additionally, unlike previous studies on related models (45–48), rather than focus on single CpG sites we performed integrated analysis of all CpG sites represented in each assay because *1*) regional changes in DNA methylation are more likely to affect gene expression and *2*) concordant changes at multiple adjacent sites are less likely to occur by chance. Notably, contrary to the previous report of increased DNA methylation (at 2 of 20 CpG sites measured) at the *Pomc* promoter in the hypothalamus of P21 SL rats (45), our methylation assay spanning five nearby CpG sites found no SL vs. C differences at P25 ([Supplementary Fig. 4](http://diabetes.diabetesjournals.org/lookup/suppl/doi:10.2337/db12-1306/-/DC1)).

We developed the strategy of examining genomic regions undergoing DNA methylation changes from P0 to P21 based on the conjecture that these changes may be susceptible to environmental influences. Indeed, 4 of 21 regions undergoing P0–P21 DNA methylation change showed evidence of persistent methylation differences between SL and C mice, supporting the utility of this approach. Hence, the ~900 loci that we report to undergo postnatal methylation changes may provide useful candidate regions for future studies of environmental influences on hypothalamic developmental epigenetics.

With the potential exception of *AK145544*, all four genes with persistent changes in DNA methylation in SL hypothalamus play important roles in neural development or function (49–51). At each of these genes, the methylation change in SL mice was modest (Fig. 5); the cumulative effect of such changes at hundreds or thousands of genes, however, could be considerable. This interpretation is supported by the AIC model selection ([Supplementary Fig. 6](http://diabetes.diabetesjournals.org/lookup/suppl/doi:10.2337/db12-1306/-/DC1)), which included, as significant predictors of adult energy expenditure, expression of three and methylation at two of the genes that we identified, in most cases with *F* values comparable with that of lean body mass.
The results of transcriptional profiling in P180 hypothalamus mirrored our DNA methylation analyses in detecting subtle, widespread, and sex-specific alterations in gene expression. Analyzing the corpus of genes with potentially altered expression in male SL hypothalamus identified highly significant gene ontology enrichments pertaining to regulation of neuronal projections (Fig. 6). The adult rodent hypothalamus maintains significant synaptic plasticity (52); our data suggest that early postnatal overnutrition in males may persistently augment this capability. Adult mice that become obese owing to a high-fat diet, conversely, appear to have reduced hypothalamic neurogenesis (53).

All the potential explanatory effects we found (changes in energy expenditure, physical activity, DNA methylation, and gene expression) were sex specific. The long-term consequences of early life exposures have long been recognized to differ by sex (16). Our findings of sexual dimorphism in the epigenetic responses to the early postnatal environment suggest that nutrition may interact with the epigenetic mechanisms regulating hormone-dependent sexualization of the neonatal hypothalamus (54). In fact, the sex differences found here might provide an answer as to why the lower physical activity in SL females arose only in adulthood. In male mice, physical activity declined with age in both groups, but in females this decline was seen only in SL mice ([Supplementary Fig. 1*C*](http://diabetes.diabetesjournals.org/lookup/suppl/doi:10.2337/db12-1306/-/DC1)). Our results may suggest, therefore, that postnatal overnutrition leads to masculinization of the CNS pathways that regulate age-related changes in physical activity.

Encouraged by earlier studies that gained insights into hypothalamic developmental epigenetics (55), we too studied DNA methylation in whole hypothalamus. The interpretation of our data is therefore complicated by the heterogeneity of the hypothalamus at both the regional and the cellular level. The hypothalamus is composed of distinct regions, or "nuclei," with specialized functions, gene expression patterns (21), and epigenetic regulation (56). Additionally, the nervous system includes diverse cell types; the simplest classification distinguishes neurons and glia, which are epigenetically distinct (57,58). To improve our understanding of how early postnatal overnutrition causes persistent changes in regulation of body weight and body composition, it will be advantageous to characterize epigenetic effects within specific hypothalamic nuclei and specific cell types. For example, based on our current data we cannot exclude the possibility that the persistent alterations in DNA methylation that we identified represent a shift in the proportion of hypothalamic cell types rather than induced alterations in epigenetic regulation within specific cell types.
Moreover, since early postnatal life is a critical period for not only epigenetic but also neuroanatomic development (42), studying these processes in an integrated fashion will likely be necessary to gain a clear understanding of how early postnatal nutrition affects the establishment of hypothalamic body weight regulation.

## ACKNOWLEDGMENTS

This work was supported by grants from the National Institutes of Health/National Institute of Diabetes and Digestive and Kidney Diseases (1R01DK081557) and the U.S. Department of Agriculture (USDA) (CRIS 6250-51000-055) (to R.A.W.). Body composition and CLAMS studies were performed in the Mouse Metabolic Research Unit at the USDA/Agricultural Research Service (ARS) Children's Nutrition Research Center, which is supported by funds from the USDA ARS.

No potential conflicts of interest relevant to this article were reported.

G.L. and J.J.K. performed experiments and wrote the manuscript. W.Z. and E.L. performed experiments. G.K.-R. performed bioinformatic analyses. M.S.B. performed experiments. M.L.F. provided critical guidance on experimental procedures and edited the manuscript. R.A.W. designed the study and wrote the manuscript. R.A.W. is the guarantor of this work and, as such, had full access to all the data in the study and takes responsibility for the integrity of the data and the accuracy of the data analysis.

The authors thank Adam Gillum (USDA/ARS Children's Nutritional Research Center [CNRC]) for assistance with the figures and Firoz Vohra (USDA/ARS CNRC) for assistance with the CLAMS studies.

# REFERENCES

[^1]: G.L. and J.J.K. contributed equally to this study.

abstract: Cardiovascular autonomic neuropathy (CAN) is associated with increased mortality in diabetes. Since CAN often develops in parallel with diabetic nephropathy, which acts as a confounder, we aimed to investigate the isolated impact of CAN on cardiovascular disease in normoalbuminuric patients. Fifty-six normoalbuminuric, type 1 diabetic patients were divided into 26 with (+) and 30 without (−) CAN according to tests of their autonomic nerve function. Coronary artery plaque burden and coronary artery calcium score (CACS) were evaluated using computed tomography. Left ventricular function was evaluated using echocardiography. Blood pressure and electrocardiography were recorded over 24 h to evaluate the nocturnal drop in blood pressure (dipping) and pulse pressure. In patients +CAN compared with −CAN, the CACS was higher, and only patients +CAN had a CACS >400. A trend toward a higher prevalence of coronary plaques and flow-limiting stenosis in patients +CAN was nonsignificant.
In patients +CAN, left ventricular function was decreased in both diastole and systole, nondipping was more prevalent, and pulse pressure was increased compared with −CAN. In multivariable analysis, CAN was independently associated with increased CACS, subclinical left ventricular dysfunction, and increased pulse pressure. In conclusion, CAN in normoalbuminuric type 1 diabetic patients is associated with distinct signs of subclinical cardiovascular disease.
author: Ulrik Madvig Mogensen; Tonny Jensen; Lars Køber; Henning Kelbæk; Anne Sophie Mathiesen; Ulrik Dixen; Peter Rossing; Jannik Hilsted; Klaus Fuglsang Kofoed. Corresponding author: Ulrik Madvig Mogensen.
date: 2012-07
references:
title: Cardiovascular Autonomic Neuropathy and Subclinical Cardiovascular Disease in Normoalbuminuric Type 1 Diabetic Patients

Cardiovascular autonomic neuropathy (CAN) is a severe complication of diabetes associated with increased morbidity and mortality (1,2). CAN results from damage to the autonomic nerve fibers supplying the heart, and an early manifestation is a decrease in heart rate variation (HRV) during deep breathing (3). CAN is present in 25% of type 1 diabetic patients but is often overlooked (1,3).

The increased mortality associated with CAN appears to be partly explained by coexistence with other long-term complications of diabetes (4), and since nephropathy in particular generally develops in parallel with CAN, the relative contribution of CAN to mortality has been difficult to identify (5). However, some studies have found CAN to be associated with sudden cardiac death (6) and to be an independent predictor of mortality (7,8). Different mechanisms through which CAN may promote mortality have been suggested, including silent myocardial ischemia, silent myocardial infarction, impaired respiratory response to hypoxia, intraoperative cardiovascular lability, and fatal arrhythmia due to QT prolongation (2,5,9–11). However, the mechanism remains unclear (12,13).

A noninvasive evaluation of coronary artery stenosis can be performed with high diagnostic accuracy using multislice computed tomography (MSCT) coronary angiography (14). MSCT also allows assessment of coronary artery calcification, which can be quantified as a coronary artery calcium score (CACS). Coronary artery calcium deposit has been shown to be a strong predictor of future cardiovascular events (15–17). Other markers of increased cardiovascular risk have recently been introduced: left ventricular dysfunction evaluated with tissue Doppler imaging (TDI) (18,19), a reduction in the nighttime blood pressure drop (20,21), and increased arterial pulse pressure (22) are all independently associated with increased cardiovascular risk.

The aim of this study was to investigate the potential association between CAN and subclinical coronary atherosclerosis in addition to other prognostic markers of cardiovascular disease. To avoid the confounding effect of diabetic nephropathy, only patients with normal albumin excretion rates (normoalbuminuria) were included.

# RESEARCH DESIGN AND METHODS

Patients with long-lasting type 1 diabetes, persistent normoalbuminuria, and no history or symptoms of cardiac disease (23) were recruited from the outpatient clinic cohort of type 1 diabetic patients at Steno Diabetes Center and the Diabetes Unit, Rigshospitalet.
Inclusion criteria were type 1 diabetes according to American Diabetes Association criteria of at least 10 years' duration, age between 18 and 75 years, and HbA~1C~ <10%. Exclusion criteria were albuminuria (urinary albumin-to-creatinine ratio >30 mg/g; elevated *S*-creatinine >120 μmol/L), untreated hypertension (>140/85 mmHg), and electrocardiographic signs or clinical symptoms of heart disease.

In the study population registry of 3,000 type 1 diabetic patients, autonomic testing had previously been performed in 350 patients owing to clinical suspicion of autonomic neuropathy, and 123 of these patients were eligible according to the inclusion/exclusion criteria. CAN testing was repeated in all patients included in the current study.

From this group, informed consent to participate in the study was obtained from 60 randomly selected patients during their visit to the outpatient clinic. Patients were divided into two groups according to the outcome of four autonomic function tests: HRV during deep breathing, Valsalva ratio, lying-to-standing test, and blood pressure response to standing up. CAN was defined as two or more abnormal tests (24), and age-normative values were used to define abnormality (25,26) (a schematic version of this rule is sketched below). Of the 60 included patients, 26 had CAN, 30 were without CAN, and 4 had inconclusive tests.

All patients underwent MSCT and transthoracic echocardiography (TTE). In order to decrease heart rate for optimized image quality, all patients were given 5 mg ivabradine orally on the evening before and on the morning of cardiac imaging. Ivabradine was selected to reduce heart rate because β-adrenoceptor blockade was considered relatively contraindicated in this population. MSCT and TTE were performed on the same day, and 24-h blood pressure and Holter monitoring were performed simultaneously a few days later. Information on retinopathy was retrieved from the records of the patients, as retinopathy status is registered at least once a year. Vibratory perception threshold (V) was evaluated using biothesiometry (Bio-Medical Instrument Company, Newbury, OH) as an assessment of distal neuropathy. Blood and urine samples were collected and analyzed with standardized clinical chemistry methods, including measurements of N-terminal pro-brain natriuretic peptide (NT-proBNP), high-sensitivity C-reactive protein, and cystatin C, in addition to routine blood tests to exclude signs of other diseases or neuropathies.

Coronary artery disease evaluated with MSCT was the primary outcome measure. Other parameters were included as confounding factors in the analysis.

All measurements and analyses were performed with the investigator blinded to the CAN status of the patients. The study was conducted in accordance with the Declaration of Helsinki II and approved by the local ethics committee (protocol number H-4-2009-091). All patients gave written informed consent.
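As a schematic version of the classification rule above, the sketch below scores the four-test battery that is detailed in the next section. The numeric cutoffs are placeholders only; the study used age-normative values (25,26) that are not reproduced in the text.

```python
# Hypothetical scoring of the four autonomic function tests; CAN is
# diagnosed when two or more tests are abnormal. Cutoffs are placeholders,
# not the age-normative values used in the study.
from dataclasses import dataclass


@dataclass
class AutonomicTests:
    deep_breathing_hrv: float    # mean max-min heart rate difference (bpm)
    lying_to_standing: float     # 30:15 R-R ratio after standing up
    valsalva_ratio: float        # mean ratio over three maneuvers
    orthostatic_sbp_drop: float  # supine-to-standing SBP fall (mmHg)


def n_abnormal(t: AutonomicTests) -> int:
    """Count abnormal tests against illustrative (not normative) cutoffs."""
    return sum([
        t.deep_breathing_hrv < 10.0,     # blunted respiratory HRV
        t.lying_to_standing < 1.04,      # blunted 30:15 response
        t.valsalva_ratio < 1.20,         # blunted Valsalva response
        t.orthostatic_sbp_drop >= 30.0,  # orthostatic hypotension
    ])


def has_can(t: AutonomicTests) -> bool:
    """Two or more abnormal tests define CAN."""
    return n_abnormal(t) >= 2


print(has_can(AutonomicTests(6.0, 1.01, 1.15, 35.0)))  # True
print(has_can(AutonomicTests(18.0, 1.10, 1.45, 5.0)))  # False
```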
## Cardiovascular autonomic neuropathy tests.

Age-normative values were used to define abnormality in all tests (25,26). HRV during deep breathing was assessed in previously trained patients who were asked to breathe deeply at a rate of six breaths per minute while being monitored on a (50 mm/s) 12-lead electrocardiogram. The maximum and minimum heart rates during each breathing cycle were measured, and the mean difference over six cycles was calculated.

The lying-to-standing heart rate ratio was determined after at least 5 min of rest in the supine position, and HRV was determined by calculating the maximal-to-minimal heart rate ratio: the longest R-R interval, measured around the 30th beat after standing up, to the shortest R-R interval, measured around the 15th beat after standing up.

The Valsalva test consisted of forced exhalation into a mouthpiece at a pressure of 40 mmHg for 15 s, and the ratio of the maximum R-R interval after the maneuver to the minimum R-R interval during the maneuver was calculated. The test was performed three times, and the mean value of the ratios was used.

Orthostatic hypotension was defined as a decrease in systolic blood pressure (SBP) of 30 mmHg when changing from the supine to the upright position. Measurements were taken every minute for at least 3 min.

## MSCT.

All examinations were performed using a Toshiba Aquilion ONE 320 volume scanner (Toshiba Medical Systems, Tokyo, Japan) according to the recommendations of the vendor and analyzed on dedicated software (Vitrea 2; Vital Images).

First, a prospective low-dose calcium scan without contrast enhancement was performed, followed by coronary angiography using a prospective protocol. For the calcium score, slices of 3 mm were acquired using prospective electrocardiogram-gated axial scanning (27). We infused 80–100 mL of intravenous contrast agent (Visipaque 320; GE Healthcare) (according to body weight) at a flow rate of 5 mL/s, followed by a saline chaser (50 mL). Image acquisition was initiated automatically at a density threshold of 180 Hounsfield units (HU) in the descending aorta. The scan parameters were 320 × 0.5 mm detector collimation and 100–120 kV tube voltage depending on BMI (threshold 30 kg/m^2^). Rotation time was between 350 and 375 ms depending on the heart rate. Owing to the relatively high heart rates (25 patients with heart rate >65 bpm), the scanner (sureCardio software) frequently required two gantry rotations. The median radiation dose for the noncontrast and angiography scans was 1.2 mSv (range 0.9–2.6) and 4.3 mSv (1.9–15.3), respectively. For the coronary computed tomography angiography, 0.5-mm slices were obtained. The images were reconstructed at 0.5-mm slices with an increment of 0.25 mm, thereby giving true isotropic voxels of 0.5 mm (28).

## Coronary calcium scoring.

Coronary calcium deposit was identified as a dense area in the coronary artery exceeding the threshold of 130 HU. Areas were automatically registered by the software and were manually assigned to the corresponding coronary arteries. The total mass of the coronary calcium deposit and the Agatston score were determined for each patient. CACS was compared with reference values on an individual level according to age and sex (29,30).
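As a sketch of the Agatston-type scoring just described, the following applies the conventional per-lesion density weighting to lesions already segmented at the 130-HU threshold. The representation of each lesion by its plaque area and peak attenuation, and the example values, are assumptions for illustration.

```python
# Simplified Agatston-style calcium scoring: each calcified lesion on a
# 3-mm slice is summarized by its area (mm^2) and peak attenuation (HU);
# lesions below the 130-HU threshold are assumed already excluded.
def density_weight(peak_hu: float) -> int:
    """Standard Agatston density factor for the lesion's peak attenuation."""
    if peak_hu >= 400:
        return 4
    if peak_hu >= 300:
        return 3
    if peak_hu >= 200:
        return 2
    if peak_hu >= 130:
        return 1
    return 0  # below threshold: contributes nothing


def agatston_score(lesions: list[tuple[float, float]]) -> float:
    """Sum of area x density weight over all (area_mm2, peak_hu) lesions."""
    return sum(area * density_weight(hu) for area, hu in lesions)


# Hypothetical patient: three lesions in the left anterior descending artery.
print(agatston_score([(5.2, 180.0), (12.0, 320.0), (3.1, 450.0)]))
# 5.2*1 + 12.0*3 + 3.1*4 = 53.6
```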
## Coronary stenosis and plaque composition.

Based on the computed tomography coronary angiography recordings, the presence of coronary atherosclerosis was evaluated by visual inspection of all coronary arteries. The degree of coronary stenosis was classified according to previously published guidelines (31): absence of plaque; minimal, plaque with <25% stenosis; mild, 25–49% stenosis; moderate, 50–69% stenosis; severe, 70–99% stenosis; and occluded.

## Echocardiography.

Each patient underwent examination with a Philips iE33 with an S5 transducer, with two-dimensional and TDI recordings obtained in the left lateral decubitus position using standard parasternal short-axis and apical four-chamber, two-chamber, and long-axis views. Data were analyzed with commercially available software (Xcelera; Philips).

Evaluation of the left ventricular ejection fraction was performed by the modified Simpson's biplane method. Left ventricular mass index was calculated as the anatomic mass divided by the body surface area of the patient.

Pulsed-wave Doppler at the apical position was used to record mitral inflow between the tips of the mitral leaflets. Peak velocities of early (E) and atrial (A) transmitral flow and the deceleration time of the early transmitral flow were measured, and the E-to-A ratio was calculated. All valves were examined to exclude significant valvular disease.

With TDI, peak systolic (s′), early diastolic (e′), and late diastolic (a′) velocities were measured in all six mitral annular positions. Ratios of E to e′, e′ to a′, and e′ to (a′ × s′) (eas index) were calculated as measures of left ventricular filling pressures, diastolic performance, and combined systolic and diastolic performance (19).

## Ambulatory 24-h blood pressure recording.

Measurements were performed on the nondominant arm with a properly calibrated Blood Pressure Monitor System 90217 from Space Laboratories (Washington, DC). Blood pressure measurements were validated with an automatic oscillometric device. The examination was not considered valid if >30% of the measurements were missing. The SBP, diastolic blood pressure (DBP), and heart rate were measured automatically every 20 min during daytime (between 0600 and 2200 h) and once every hour during nighttime (between 2200 and 0600 h) for 24 consecutive hours. Participants were instructed to go about their normal daily activities; however, they were advised not to rest in bed during the day or exercise heavily and to abstain from caffeinated beverages and tobacco.

Analyses were performed with dedicated software (Spacelabs ABP 92506 report management system). Blood pressure dipping was defined as an average reduction in SBP and DBP of ≥10% from day to night (32).

## Ambulatory 24-h electrocardiography.

Ambulatory electrocardiography recordings were obtained with a 12-lead Rozinn RZ153+. The recordings were analyzed with dedicated software and reviewed by highly trained observers. Time domain analysis was performed in order to verify the difference in autonomic function in patients with CAN (26,33). Measures included the SD of R-R intervals during a 22-h period, the SD of the average R-R interval in all 5-min recordings, and the mean of the SDs of all filtered R-R intervals for all 5-min segments over 22 h.
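The sketch below illustrates how the main 24-h indices used in this study can be derived from the recordings: the fixed day/night windows given above, the ≥10% SBP and DBP dipping rule, pulse pressure as the SBP − DBP difference, and the SD of R-R intervals as the principal time domain measure. All sample data are invented, and the simplified handling (equal weighting of samples and a single SD over the whole period) is an assumption.

```python
# Derived indices from 24-h monitoring (hypothetical data): nocturnal
# dipping status, mean pulse pressure, and the SD of R-R intervals (SDNN).
import numpy as np

rng = np.random.default_rng(2)

# Sampling times: every 20 min during 0600-2200 h, hourly overnight.
hours = np.r_[np.arange(6, 22, 1 / 3), np.arange(22, 30) % 24]
day = (hours >= 6) & (hours < 22)  # fixed daytime window

sbp = np.where(day, rng.normal(130, 8, hours.size),
               rng.normal(115, 8, hours.size))
dbp = np.where(day, rng.normal(80, 6, hours.size),
               rng.normal(70, 6, hours.size))

# Nocturnal dip as percent fall from the daytime mean; a "dipper" must
# show a >=10% drop in both SBP and DBP.
sbp_dip = 100 * (sbp[day].mean() - sbp[~day].mean()) / sbp[day].mean()
dbp_dip = 100 * (dbp[day].mean() - dbp[~day].mean()) / dbp[day].mean()
is_dipper = sbp_dip >= 10 and dbp_dip >= 10

pulse_pressure = (sbp - dbp).mean()  # mean 24-h pulse pressure (mmHg)

# R-R intervals (ms) over the analyzed period; SDNN is their SD.
rr_ms = rng.normal(850, 60, 100_000)
sdnn = rr_ms.std(ddof=1)

print(f"dipper={is_dipper} (SBP dip {sbp_dip:.1f}%, DBP dip {dbp_dip:.1f}%)")
print(f"pulse pressure {pulse_pressure:.0f} mmHg, SDNN {sdnn:.0f} ms")
```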
## Statistics.

All analyses were performed with SAS 9.2 (SAS Institute, Cary, NC). The χ^2^ or Fisher exact test was used for dichotomous variables and the Wilcoxon rank-sum test for continuous variables. Dichotomous variables are listed as percentages. Data for continuous variables are presented as median (range).

A multivariable logistic regression model was created with CAN status as the dependent variable, including sex, age, diabetes duration, HbA~1C~, total cholesterol, and smoking as variables considered important to coronary artery disease; the corresponding *P* values for the independent association of CAN with a given variable are presented in all tables. Univariate linear regression models were created to identify independent predictors of increased calcium score, and these variables were included stepwise in a multiple logistic regression model.

To further compare groups of similar age, we performed a sensitivity analysis on 22 patients in each group. Excluded from this analysis were the four oldest patients with CAN (+CAN) and the eight youngest without CAN (−CAN), to obtain 22 matched pairs.

# RESULTS

## Patient characteristics.

Patients +CAN had a higher mean age and longer diabetes duration compared with patients −CAN (Table 1). The proportions of all other cardiovascular risk factors and recorded diabetes complications were similar in the two groups.

Table 1. Clinical characteristics and laboratory results.

Whereas +CAN tended to be associated with higher levels of high-sensitivity C-reactive protein than −CAN (median 1.1 mg/L [range 0.2–13] vs. 0.7 mg/L [0.4–14.4], respectively, *P* = 0.06), levels of NT-proBNP and cystatin C were similar in +CAN and −CAN patients (10.3 pmol/L [6.1–28.9] vs. 18.6 pmol/L [7.8–56.5], *P* = 0.18, and 0.64 mg/L [0.5–0.72] vs. 0.67 mg/L [0.45–1.2], *P* = 0.16, respectively). The peripheral vibration threshold was significantly higher in patients +CAN compared with that in patients −CAN (23 V [8–50] vs. 17 V [6–35], *P* < 0.01).

## MSCT.

The mean CACS was significantly higher in +CAN than in −CAN patients (Table 2). Categories of increasing CACS according to CAN status are illustrated in Fig. 1A. The proportion of patients having a CACS ≥400 was significantly higher in patients +CAN than in those −CAN. Whereas nine patients +CAN were found in categories of CACS >400, the highest CACS in −CAN was 312. Furthermore, a CACS <10 was less common in patients +CAN than in patients −CAN.

Table 2. Multislice computed tomography calcium scoring.

With use of the CACS of each individual patient to find the corresponding percentile in a background population according to age and sex, the median percentile was significantly higher in patients +CAN compared with that in patients −CAN (median 92 [range 0–99] vs. 39 [0–98], respectively, *P* = 0.0205). Similarly, the proportion of patients with a CACS above the 95th percentile according to age and sex was significantly higher in patients +CAN compared with −CAN (13 [72%] vs. 5 [17%], *P* = 0.0077).

Coronary computed tomography angiography revealed focal coronary plaques in 38 (69%) patients. Obstructive lesions were found in 9 (16%) patients and occlusions in 1 (2%) patient.

Patients +CAN had a higher median number of plaque lesions than −CAN patients (three vs. one, respectively, *P* = 0.039) and a higher proportion of patients with more than seven plaque lesions (Fig. 1B) in univariate analysis. However, the prevalence of obstructive stenosis was not significantly higher (Fig. 1C).
Absence of an elevated CACS did not exclude obstructive stenosis, as three patients had coronary atherosclerosis with a CACS of zero. Two of these patients had minimal plaques (<25%), but one patient (+CAN) had a 70–99% noncalcified stenosis.

## Echocardiography.

All patients had a normal left ventricular ejection fraction. CAN status had no significant impact on left ventricular systolic function as measured by conventional echocardiography, but TDI measures of longitudinal systolic function (s′) were significantly lower in +CAN than in −CAN (Table 3).

Table 3. Echocardiography findings.

TDI measures of left ventricular diastolic function indicated higher filling pressures (E-to-e′ ratio) and impaired diastolic performance according to both e′ and the e′-to-a′ ratio in +CAN compared with −CAN, whereas the eas index was not significantly different between the two groups (Table 3).

## Ambulatory blood pressure measurements.

Patients +CAN had higher SBP and lower DBP but mean arterial blood pressure similar to that in −CAN patients. Accordingly, pulse pressure was significantly higher in patients +CAN compared with that in −CAN patients (Table 4).

Table 4. Twenty-four-hour blood pressure and Holter measurements.

The nocturnal drop in blood pressure was smaller in +CAN patients compared with that in −CAN patients with respect to both SBP and DBP. The number of patients with abnormal dipping was significantly higher among +CAN patients compared with −CAN patients.

## Ambulatory electrocardiography.

The mean 24-h heart rate was independent of CAN status, but the maximal increase and decrease in heart rate were lower in +CAN compared with −CAN patients (Table 4).

The total number of ventricular extrasystoles (VES) was significantly higher in +CAN compared with −CAN patients. The majority of the VES were isolated, and the overall prevalence of consecutive VES was low and without difference between the groups. The number of patients having >30 VES/h, which has been defined as frequent (32), was not significantly higher in +CAN than in −CAN patients.

In the time domain analysis of HRV, patients +CAN, compared with −CAN, had a significantly lower SD of R-R intervals during a 22-h period (median 99 [range 47–185] vs. 152 [99–208], *P* = 0.0005), as well as a lower SD of the average R-R interval in all 5-min recordings and a lower mean of the SDs of all filtered R-R intervals for all 5-min segments over 22 h (data not shown).

## Statistical analysis.

Variables independently associated with increased CACS in univariate analysis were DBP, pulse pressure, and measures of diastolic function (E-to-e′ ratio, interventricular septum diameter [IVSD], E, A, and left atrial volume). When all significant variables were included stepwise in a multiple logistic regression model, CAN remained an independent predictor of CACS (*P* = 0.0009), together with age (*P* = 0.0136), diabetes duration (*P* = 0.0010), and daytime DBP (*P* = 0.0269). Including BMI, pulse pressure, and/or urine albumin excretion in the model did not change the significance of any of the presented results.

## Sensitivity analysis.

A sensitivity analysis of 22 matched patient pairs (+CAN and −CAN) is presented in Table 5.
In this analysis, there was no significant difference in age, sex, diabetes duration, HbA~1C~, or other cardiovascular risk factors between +CAN and −CAN patients. In this subset of patients, results regarding CACS, echocardiography, ambulatory blood pressure, and Holter monitoring did not differ from those of the entire study population (Table 5).

Table 5. Findings of the analysis of 22 patients with and without CAN of comparable mean age and diabetes duration.

## Coexistence of CAN and markers of increased cardiovascular risk.

A higher proportion of patients +CAN had markers associated with increased cardiovascular mortality, including a CACS ≥400 (hazard ratio 2.99–3.24 for mortality [15]), abnormal dipping (2.16 for mortality [21]), and a pulse pressure ≥62 mmHg (1.8 for mortality [34]), along with a trend toward a higher proportion of patients with coronary artery stenosis ≥50% (41 for a composite end point of death, nonfatal myocardial infarction, and revascularization [35]) (Fig. 2).

# DISCUSSION

The current study demonstrates an association between the presence of CAN and several distinct signs of subclinical cardiovascular disease in type 1 diabetic patients with a normal urinary albumin excretion rate. Patients with CAN were characterized by increased coronary calcium deposit, impaired left ventricular function, increased arterial pulse pressure, a higher prevalence of nondipping, and increased ventricular ectopia compared with patients without CAN.

Long-term type 1 diabetes carries an excess cardiovascular mortality (36), and nephropathy is a major contributing factor (37,38). Indeed, even in asymptomatic patients, a greater atherosclerotic plaque burden was demonstrated by magnetic resonance imaging in diabetic patients with albuminuria compared with patients with a normal albumin excretion rate (39).

CAN has also been associated with high cardiovascular mortality (2). However, since nephropathy and CAN generally develop in parallel in type 1 diabetic patients, the relative contribution of CAN to cardiovascular disease has been difficult to identify (5). In an attempt to overcome this important confounder, the present CAN population was identified in a subset of type 1 diabetic patients characterized by a long duration of diabetes yet an absence of nephropathy (i.e., a normal albumin excretion rate).

CACS is a powerful predictor of clinical coronary artery disease (17). An association between CACS and reduced cardiac autonomic function has recently been reported in both type 1 (40–42) and type 2 (43) diabetes. Colhoun et al. (40) observed an association between CACS and CAN independent of age and triglycerides but not independent of BMI and SBP. By contrast, Rodrigues et al. (42) found an association between CAN and progression of CACS independent of all confounders tested, including inflammatory markers. In the current study, we extended the evaluation of the coronary arteries with a combination of CACS and computed tomography angiography and included measurements of several other potential confounders, including left ventricular function and 24-h blood pressure.

CACS was markedly higher in patients +CAN, with a median CACS of 196 compared with a median CACS of 5 in patients −CAN. With use of MESA data as reference (29,30), 72% of patients +CAN had a CACS similar to that of subjects in the upper 95th percentile according to age and sex, whereas this was the case for only 17% of patients −CAN.
Likewise, the estimated arterial age for a person with a CACS of 5 is 52 years (95% CI 48–56 years) (30,44), in agreement with the median age of patients −CAN. By contrast, a CACS of 196 corresponds to an estimated arterial age of 77 years (75–80), which is almost 20 years more than the actual median age of patients +CAN.

Despite the higher CACS among +CAN patients, computed tomography angiography did not reveal significant differences in the prevalence of obstructive stenoses. It could be speculated whether the increased calcification without an increased luminal coronary plaque burden is due to a different localization of the calcified area relative to the intima and media. Surgical sympathectomy leads to arterial media calcification, a condition frequently found in the lower extremities of patients with diabetic neuropathy (45), and it has been suggested that arterial media calcification could be the result of autonomic denervation (40,46). Sympathetic denervation may cause dedifferentiation of vascular smooth muscle cells, and these alterations are associated with extracellular matrix production and migration to the intima, changes that are also seen in atherosclerosis (40,47). Another suggested mechanism by which CAN could result in increased calcification is through loss of neuropeptides, involving biochemical pathways similar to those observed in studies of calcification processes in bone metabolism (46).

Data from the current study could be interpreted as supportive of the concept of increased media sclerosis. However, the relationship between CACS and luminal stenoses is known to be only modest (48), and a nonsignificant difference in the prevalence of stenoses does not necessarily imply that the calcification is differently located in the arterial wall. Furthermore, the existence of media sclerosis has been questioned (49).

Regardless of the location of the calcification, the difference in CACS could be caused by other confounding factors. Many cardiovascular parameters are interrelated, and associations could merely reflect that these conditions share common risk factors.

The presence of CAN was associated with decreased systolic and diastolic function of the left ventricle. We found a decrease in s′ of approximately 1 cm/s in patients +CAN compared with patients −CAN. A similar decrease in s′ in a study of the background population corresponded to a hazard ratio of 1.35 for 5-year mortality (19), suggesting that the observed left ventricular dysfunction might have prognostic importance. Several echocardiographic measures of diastolic function were univariately associated with CACS. Blood pressure variables are other factors associated with CAN, CACS, and left ventricular function. However, CAN remained independently associated with CACS even when these variables were included in multivariable models.

An association between CAN and nondipping has been attributed to impaired vagal activity during nighttime in patients with CAN (50). In the current study, nondipping was more prevalent in patients with CAN, but it was not an isolated phenomenon in these patients, since 37% of patients without CAN were also nondippers. Likewise, a recent study did not find CAN to be the main causal factor for nondipping in type 1 diabetes (51).
Nonetheless, blood pressure levels at night were significantly higher in +CAN compared with −CAN patients, suggesting a relationship between CAN and nocturnal blood pressure regulation.

CAN was also found to be associated with increased pulse pressure. In addition to being associated with increased cardiovascular risk, arterial pulse pressure is considered an indirect marker of arterial stiffness (22). With use of other methods, arterial stiffness has been shown to contribute to left ventricular diastolic dysfunction (52) and has been suggested as a link between CAN and cardiovascular disease (53).

We did not observe a significant difference in the mean corrected QT interval or in the prevalence of QTc prolongation. Very few studies have reported on ventricular ectopia associated with CAN. We found no signs of pathological arrhythmias, but CAN was associated with an increased number of isolated premature ventricular beats.

Though interrelationships are difficult to exclude, CAN remained independently associated with CACS even when differences in cardiovascular parameters were adjusted for in multivariable analysis and when sensitivity analysis was applied, and CAN was found to be associated with markers of increased cardiovascular risk (Fig. 2).

This study had limitations. First, owing to stringent inclusion criteria, and since autonomic dysfunction is a rare isolated complication in long-term diabetes (10), it was possible to match patients for age, sex, and diabetes duration in only a limited number of patients, and several of the variables measured are age dependent. However, both multivariable logistic regression models and sensitivity analyses of smaller groups of patients with no differences in demographics or cardiovascular risk factors were performed. In these analyses, all reported findings remained similar and significant. Second, investigations were carried out only on diabetic patients; reference values for the different measurements from a nondiabetic control group were therefore not available. Third, we did not have information on HRV for all patients in our registry. Many patients had HRV measured in previous studies, but some had HRV measured owing to clinical suspicion of CAN. We cannot exclude having captured a particular subgroup of patients free of long-term diabetes complications and not representative of the most common phenotype of diabetic patients, and our findings can be extrapolated to the entire diabetic population only to a limited extent.

In the analysis of ambulatory blood pressure measurements, a fixed-time method was used rather than diary time. This could potentially have biased the results if differences in the sleep patterns between the two groups exist.

In the comparison between CACS and individual reference values from the MESA study, diabetes duration was a parameter that could not be accounted for. The longer diabetes duration in +CAN patients must be kept in mind when evaluating these data.

Confounding effects from differences in antihypertensive treatment and use of diuretics cannot be excluded, even though these differences were nonsignificant in the demographic comparisons.

In conclusion, CAN was associated with several distinct signs of subclinical cardiovascular disease in type 1 diabetic patients with normoalbuminuria.
These included increased coronary calcium deposit, subtle impairment of left ventricular systolic and diastolic function, increased arterial pulse pressure, a higher prevalence of nondipping, and marginally increased ventricular ectopia; CAN was independently associated with CACS even with adjustment for confounding factors.

## ACKNOWLEDGMENTS

This study was supported by a grant from the Arvid Nilssons Foundation. The MSCT Cardiac Research Unit, Rigshospitalet, is supported by a grant from the A.P. Møller og Hustru Chastine McKinney Møllers Fond til almene Formaal.

U.M.M. was supported by a scholarship from Novo Nordisk. No other potential conflicts of interest relevant to this article were reported.

U.M.M. researched data and wrote the manuscript. T.J. included patients, contributed to discussion, and reviewed the manuscript. L.K. researched data and reviewed the manuscript. H.K. reviewed the manuscript. A.S.M. screened patients. U.D. and P.R. reviewed the manuscript. J.H. contributed to discussion and reviewed the manuscript. K.F.K. researched data and reviewed the manuscript. U.M.M. is the guarantor of this work and, as such, had full access to all the data in the study and takes responsibility for the integrity of the data and the accuracy of the data analysis.

Parts of this study were presented in abstract form at the European Society of Cardiology Congress 2011, Paris, France, 27–31 August 2011, and at the 47th European Association for the Study of Diabetes Annual Meeting, Lisbon, Portugal, 12–16 September 2011.

The authors thank Tina Bock-Pedersen, MSCT Cardiac Research Unit, Rigshospitalet, for excellent technical assistance.

# REFERENCES

abstract: Several aspects of *Journal of Biology* seem to have caught readers' attention. Some of the questions asked have arisen sufficiently often to be worth addressing here.
author: Martin Raff; Theodora Bloom; Peter Newmark
date: 2002
institute: 1Editor-in-Chief, *Journal of Biology*; 2Editor, *Journal of Biology*; 3Editorial Director, BioMed Central
title: Editorial

Several aspects of *Journal of Biology* seem to have caught readers' attention since issue 1 appeared this summer. Some of the questions asked have arisen sufficiently often to be worth addressing here. In summary, the journal differs from other top-tier journals in four main ways. First, and most importantly, no fee will ever be charged to readers of the research articles, and the authors retain full copyright, so that the articles can be freely read and distributed by anyone, from the day of publication onwards, in perpetuity.
This is the 'open access' policy of all of the journals published by BioMed Central, which is currently the only publisher that is wholly committed to the principles of open-access publishing.

Why is the immediate free use and distribution of the entire article so important? Not only is it possible and desirable, but it benefits both scientists and science; restrictions on use and distribution serve publishers, not scientists or readers. Open access also allows full archiving and retrieval. Extensive efforts are being made to create public archives of the scientific literature, containing complete copies of all scientific papers. (PubMed Central is one example of this, and all research articles published in all BioMed Central journals are deposited there in full.) In time, these freely accessible archives will greatly facilitate the practice and dissemination of science, and will lead to exciting and sophisticated ways of using full-text information, much as GenBank has done for DNA sequences. The current restrictions on access, use, and distribution put in place by most journals - even those that offer copyright to authors while in fact denying them permission to distribute their article - will seriously impede the development and comprehensiveness of central archives.

*Journal of Biology* differs in a second way from some of the best-known high-profile journals with which it aims to compete. Its reviewing process is designed to be as fast, fair, and constructive as possible. Decisions are made jointly by a scientist as editor-in-chief and a professional editor. No submitted article is rejected without advice from a relevant scientist, and fashion is not a consideration. And at least one of three peer-reviewers for each article is chosen from a list supplied by the authors.

The third difference comes from our commitment to maximize the impact of the research we publish. Each research article is accompanied by commissioned commentaries. We aim for the most effective presentation of data, both online and in print, and the print issue is distributed free of charge to over 80,000 life scientists.

Finally, we do not wait for a threshold number of papers of sufficient quality before publishing an issue. Instead, an issue appears whenever a research article of suitable caliber is ready for publication. For this reason, the first two issues of *Journal of Biology* each contain only one keynote research article and its associated commentaries.

To date, *Journal of Biology* has declined tens of articles for every one it has published. This is because we are committed to publishing only the most significant research. The challenge is to convince scientists with an important story to tell to try *Journal of Biology* instead of their usual preferred journal. We have met enthusiasm for open-access publishing, and for *Journal of Biology*, across the board, from students to Nobel laureates, and at all levels in between. The merits of open access, the wide dissemination of each article, and the usefulness of the associated commentaries, result in each paper published in *Journal of Biology* having unparalleled impact. Why not make 2003 the year you discover the benefits of publishing in *Journal of Biology* for yourself?

**Editor's note:** Martin Raff has recently joined the Scientific Advisory Board of Curis, the company responsible for the research article in this issue.
He was not involved in the refereeing of this article or in the decision to publish it.

abstract: A major goal in diabetes research is to find ways to enhance the mass and function of insulin-secreting β-cells in the endocrine pancreas to prevent and/or delay the onset of overt diabetes, or even reverse it. In this *Perspectives in Diabetes* article, we highlight the contrast between the relatively large body of information that is available in regard to the signaling pathways, proteins, and mechanisms that together provide a road map for efforts to regenerate β-cells in rodents and the scant information on human β-cells. To reverse the state of ignorance regarding human β-cell signaling, we suggest a series of questions for consideration by the scientific community to construct a human β-cell proliferation road map. The hope is that the knowledge from the new studies will allow the community to move faster toward developing therapeutic approaches to enhance human β-cell mass, with the long-term goal of preventing and/or curing type 1 and type 2 diabetes.
author: Rohit N. Kulkarni; Ernesto Bernal-Mizrachi; Adolfo Garcia-Ocana; Andrew F. Stewart. Corresponding authors: Rohit N. Kulkarni and Andrew F. Stewart.
date: 2012-09
references:
subtitle: Driving in the Dark Without a Road Map
title: Human β-Cell Proliferation and Intracellular Signaling

# The Challenge of Inducing Human β-Cell Replication

A principal goal of the National Institutes of Health/National Institute of Diabetes and Digestive and Kidney Diseases, the Juvenile Diabetes Research Foundation, the American Diabetes Association, and their European and Asian equivalents is to develop viable therapeutic approaches to induce adult human β-cell replication/expansion for regeneration therapies in patients with type 1 diabetes and/or type 2 diabetes mellitus (T2DM). Unfortunately, all observers of adult human β-cell replication find that it is very limited (~0.2% of β-cells/24 h) and poorly responsive or unresponsive to the many mitogens, growth factors, and nutrients that have been shown to induce expansion in rodent models. For example, glucagon-like peptide-1 (GLP-1) and its analog exendin-4, hepatocyte growth factor (HGF), lactogens, insulin, IGF-I, and many other molecules have been shown to increase rat and mouse β-cell proliferation and mass expansion but have shown limited effects in human β-cells.

# Species- and Age-Related Failure of β-Cell Replicative Capacity

Some of this refractoriness to proliferation in both rodents and humans is age related. For example, whereas β-cell replication is easy to induce using exendin-4 or partial pancreatectomy in young mice, it is markedly attenuated in older mice (1).
Similarly, whereas β-cell replication has been difficult to induce in adult human β-cells, reports of human embryonic and neonatal β-cell proliferation do indicate that proliferation can occur in juvenile human β-cells (2,3). But even in these studies, proliferation is very limited (~2–4% in embryonic human β-cells by Ki-67 staining) compared with other fetal and adult tissues (e.g., spleen, bone marrow, gastrointestinal crypts, basal keratinocytes), where proliferation is often 10-fold higher.

In contrast, there are also clear species differences, because rodent β-cell models can display relatively large proliferative responses (e.g., 10–15%), whereas this type of proliferation is never seen in human β-cells under physiological conditions, even in embryonic life. Perhaps more germane to human diabetes, whatever the underlying reasons, these issues are a major hurdle to driving therapeutic human β-cell expansion, because the major source of human β-cells is adult cadaveric donors, and the major therapeutic target for expansion of endogenous human β-cells, at least initially, will be adults with type 1 diabetes mellitus. Thus, there is an urgent need to understand why adult β-cells are refractory to replication, and, at the end of this *Perspectives in Diabetes* article, we suggest a series of questions to be addressed by the scientific community to reverse this state of ignorance.

# The Rodent β-Cell Receptor–Nutrient–Signaling–Cell-Cycle Pathway Road Map

In rodent β-cells, we have an impressive and expanding intracellular signaling road map, or "wiring diagram," that reveals how proliferation normally occurs, how it intersects with the downstream cell-cycle machinery, and how it can be manipulated for therapeutic purposes. Space prevents a detailed discussion of every pathway, but several oversimplified examples are shown in Figs. 1–3 and described briefly below.

# Signaling Pathways

## Insulin receptor substrate/phosphatidylinositol-3 kinase/Akt signaling.

Insulin and IGF-I constitute two primary members of the growth factor family whose receptors are expressed ubiquitously and mediate the growth and metabolic effects of the hormones in virtually all mammalian tissues. Insulin and IGF-I classically bind to their own receptors but can also cross-react and activate common downstream proteins. Receptor activation transmits signals by phosphorylating insulin receptor substrates (IRS), including the four IRS proteins as well as Shc, Gab-1, focal adhesion kinase, Cbl, and others, leading to activation of phosphatidylinositol-3 kinase (PI3K)/Akt (see below). Several recent reviews provide an excellent resource for interested readers (4,5). Mouse β-cells express both the insulin and IGF-I receptors and most components of their signaling pathways. Recent studies of insulin receptor (IR) signaling in β-cells have provided cumulative evidence for an autocrine role of insulin on its own receptor. Two early mouse models that provided direct genetic evidence for a role for insulin/IGF-I signaling in the regulation of β-cell biology are the β-cell-specific knockout of the IR (βIRKO) (6) and the global knockout of IRS-2 (7). Both βIRKO and IRS2KO mice failed to maintain their β-cell mass and manifested a phenotype most resembling human T2DM.
Following these two studies, multiple laboratories have reported the creation and characterization of transgenic and KO models, complemented by in vitro and ex vivo approaches, indicating the significance of proteins in the insulin/IGF-I cascade for the regulation of β-cells (5).

Contrary to traditional thought, we and others have used genetic approaches to demonstrate directly that the insulin/IGF-I signaling pathway is not critical for the early development of β-cells (5,8). However, whereas both βIRKO and β-cell-specific IGF-I receptor KO mice exhibit impaired glucose tolerance and secretory defects, only βIRKO mice show an age-dependent decrease in β-cell mass and an increased susceptibility to develop overt diabetes, suggesting a dominant role for insulin signaling in the regulation of adult β-cell mass (9). It has been observed for several decades that mouse models of diabetes and obesity exhibit a remarkable ability to compensate for the increase in insulin demand in response to insulin resistance. One such model, the liver-specific IR KO mouse, develops severe insulin resistance and glucose intolerance, but the mice do not become overtly diabetic owing, in part, to an ~30-fold increase in β-cell mass as a compensatory mechanism to counter the ambient insulin resistance (9,10).

The IRSs and Akt, important downstream signaling molecules in the IR/IGF-I receptor signaling pathway, have been reported to play a dominant role in β-cell growth. Indeed, global KO of IRS-1 in mice leads to postnatal growth retardation and hyperplastic and dysfunctional islets, but the mice do not develop overt diabetes, owing to β-cell compensation (11–13). In contrast, IRS-2 global KOs develop mild growth retardation and, depending on their genetic background, either mild glucose intolerance or β-cell hypoplasia and overt diabetes (7,14). β-cell-specific deletion of IRS-2 also leads to mild diabetes (15). Importantly, IRS-2 mediates the effects of the incretin hormone GLP-1 in promoting survival and/or proliferation of rodent β-cells (16). Thus, IRS-2 appears to be a positive regulator of β-cell compensation, whereas IRS-1 predominantly regulates insulin secretion.

A similar scenario is observed in the context of the Akt isoforms. Thus, global KOs for Akt2 develop overt diabetes largely due to insulin resistance in peripheral tissues and β-cell failure, despite islet hyperplasia and hyperinsulinemia (17,18). Transgenic mice expressing a kinase-dead mutant of Akt1 under control of the rat insulin I promoter showed increased susceptibility to diabetes following fat feeding (19). Islet hyperplasia, β-cell hypertrophy, and hyperinsulinemia are observed in β-cell-specific transgenic mice expressing constitutively active Akt1 (20,21). Akt regulates proliferation by modulating multiple downstream targets, including glycogen synthase kinase-3 (GSK3) (see below), FoxO1, and the tuberous sclerosis proteins (TSC)/mammalian target of rapamycin (mTOR), among others. Recent experiments have also linked the cyclin/cyclin-dependent kinase (CDK) 4 complex to Akt in β-cell proliferation, showing that Akt1 upregulates cyclins D1 and D2 and p21Cip1 levels (but not p27Kip1) and CDK4 activity (22). Collectively, these in vivo data suggest a dual role for the Akts in the regulation of β-cell mass.
In addition to regulating proliferation, class Ia PI3K also modulates β-cell function by multiple mechanisms (23).

## GSK3 and liver kinase B1 signaling.

GSK3 is a ubiquitously expressed serine/threonine protein kinase originally identified as a regulator of glycogen metabolism. It is now well established that GSK3 acts as a downstream regulatory switch for numerous signaling pathways and is involved in cell-cycle regulation and cell proliferation. There are two mammalian GSK3 isoforms encoded by distinct genes: GSK3α and GSK3β. Activated Akt phosphorylates and inactivates GSK3. GSK3 phosphorylation was observed in mice overexpressing a constitutively active form of Akt1 in β-cells (22), and it correlated with an increase in cyclin D1 levels in islets, suggesting that GSK3 is an important negative regulator of β-cell cycle progression in mice. Indeed, a decrease in GSK3β expression can correct diabetes in mouse models of insulin resistance (24,25). More recently, the GSK3β/β-catenin/T-cell factor pathway has been shown to act in concert with cAMP-responsive element–binding protein (CREB) to mediate the downregulation of cyclin D2 expression by phosphatase and tensin homolog (PTEN), suggesting a convergence of the PI3K/PTEN and Wnt pathways. Furthermore, phosphorylation and activation of TSC2 by GSK3β inhibits mTOR signaling (26). However, whether this occurs in β-cells is unknown.

Another kinase that physically associates in vivo with GSK3β is the tumor suppressor liver kinase B1 (LKB1). Loss of LKB1 in adult β-cells increases β-cell proliferation, size, and mass and enhances glucose tolerance in mice (27,28), suggesting that this kinase is a regulator of β-cell growth in vivo in rodents. LKB1 is also known to associate with protein kinase Cζ (PKCζ), as described below. Whether there is any interaction among these three different kinases in the β-cell is currently unclear.

## TSC, mTOR, S6 kinase, and 4E-BP1 signaling.

mTOR is essential for cell growth and proliferation and is part of two mTOR complexes (mTORC), mTORC1 and 2 \[recently reviewed (29)\]. mTORC1 activity is negatively regulated by TSC1 and TSC2 and positively regulated by the small G protein Rheb. TSC2 phosphorylation and inactivation by Akt and extracellular signal–related kinase, among other kinases, releases the inhibition of Rheb, leading to activation of mTOR (29). In contrast, phosphorylation and activation of TSC2 by several other kinases inhibits mTOR signaling (29). mTORC1 constitutes the rapamycin-sensitive arm of mTOR signaling and contains at least three proteins: raptor, mLST8/GβL, and PRAS40 (29). This pathway integrates signals from growth factors and nutrients and controls growth (cell size), proliferation (cell number), and metabolism by directly modulating the 4E-BPs and S6 kinases (S6K) and by indirectly attenuating Akt signaling via an mTORC1/S6K-mediated negative-feedback loop on IRS signaling. S6K phosphorylates downstream substrates, such as ribosomal S6 protein and eukaryotic translation initiation factor 4B, to promote mRNA translation and synthesis of ribosomes. Phosphorylation of the 4E-BPs triggers their release from eukaryotic translation initiation factor 4E and initiates cap-dependent translation.
Thus, mTORC1 substrates regulate cell growth, proliferation, and mRNA translation initiation and progression, thereby controlling the rate of protein synthesis.

As relates to β-cells, evidence from mouse genetic models demonstrates that mice with conditional deletion of TSC2 (leading to mTORC1 activation) in β-cells exhibit increases in β-cell mass, proliferation, and cell size (30). Separate studies demonstrated that conditional deletion of TSC2 or TSC1 in β-cells causes a similar phenotype, but these mice developed diabetes and β-cell failure after 40 weeks (31,32). Further confirmation of the effect of mTORC1 activation to increase β-cell mass comes from mice overexpressing Rheb in β-cells (33). However, proliferation in β-cells with mTORC1 activation has not been demonstrated in all studies; thus, precisely how mTORC1, acting upon the 4E-BPs and S6K, modulates β-cell mass, proliferation, and function is unclear. Evidence for a role of S6K in β-cell proliferation was demonstrated in mice with activation of Akt signaling in an S6K1-deficient background (34). However, S6K1 gain of function by transgenic overexpression failed to induce proliferation, in part due to negative feedback on IRS-1 and -2 signaling (35). The importance of the 4E-BPs in the regulation of β-cell proliferation is hard to decipher at present because of alterations in insulin sensitivity in global KO models.

Further evidence for a role of mTORC1 in β-cell proliferation comes from studies with its inhibitor, rapamycin. These studies demonstrate that: *1*) rapamycin treatment blocks β-cell expansion, cell size, and proliferation induced by activation of Akt in β-cells (36), with the inhibition of β-cell proliferation by rapamycin resulting from decreases in cyclins D2 and D3 and in cyclin-dependent kinase (cdk) 4 activity; *2*) rapamycin treatment results in reduced proliferation in β-cells in pregnant mice and causes antiproliferative effects on transplanted rat β-cells in vivo (37,38); and *3*) rapamycin also attenuates β-cell expansion in a model of insulin resistance and β-cell regeneration (39,40) and reduces proliferation of rodent islets in vitro (41). These studies support the concept that mTORC1 plays an important role in the regulation of β-cell mass and cell cycle and can mediate adaptation of β-cells to insulin resistance.

The mTORC2 complex, containing rictor, mSin1, and protor, is insensitive to rapamycin. This complex phosphorylates Akt on Ser473, suggesting that this pathway could be indirectly linked to proliferation. However, the only evidence for a role for mTORC2 in β-cells comes from mice with conditional deletion of rictor in β-cells (42). These mice exhibit mild hyperglycemia resulting, in part, from decreased β-cell proliferation, mass, and insulin secretion.

## Other pathways: PKCζ signaling.

PKCζ is a member of the atypical PKC subfamily activated by PI3K/3′-PI-dependent protein kinase-1 (PDK-1). Multiple studies during the 1990s showed that PKCζ is a critical kinase for mitogenic signal transduction in a variety of cell types, including fibroblasts, brown adipocytes, endothelial cells, and oocytes (43). More recently, the importance of PKCζ for β-cell replication has also been elucidated.
We and others have shown that growth factors such as GLP-1, parathyroid hormone–related protein, and HGF, and nutrients such as glucose, increase PKCζ phosphorylation and activity in insulinoma cells and rodent islets (44–46). Interestingly, activation of PKCζ using a constitutively active form of this kinase (CA-PKCζ) markedly increases β-cell proliferation in insulinoma and primary mouse β-cells in vitro, suggesting that PKCζ activation could be involved in growth factor–induced rodent β-cell proliferation (45,46). Indeed, small interfering RNA–based downregulation of PKCζ or the use of a dominant-negative form of PKCζ completely abolished growth factor–induced β-cell proliferation (45,46). Conversely, transgenic mice with expression of CA-PKCζ in β-cells display increased β-cell proliferation, size, and mass with a concomitant enhancement in insulin secretion and improved glucose tolerance (47). Downstream, activation of mTOR and upregulation of the D- and A-cyclins are essential for PKCζ-mediated mitogenic effects in rodent β-cells (47). Collectively, these results indicate that PKCζ is a critical kinase for mitogenic signal transduction in rodent β-cells in vitro and in vivo.

PKCζ activation in rodent β-cells also leads to phosphorylation and inactivation of GSK3β (47), suggesting that PKCζ-mediated β-cell mitogenic effects might also be favored by GSK3β inactivation. GSK3β is one of the kinases known to phosphorylate the D-cyclins and to regulate their degradation (48). Because PKCζ activation results in increased GSK3β phosphorylation/inactivation and increased expression of the D-cyclins, it is possible that PKCζ increases the accumulation of D-cyclins in β-cells through inactivation of GSK3β.

## Lactogen and Janus kinase–signal transducer and activator of transcription signaling.

In rodent β-cells, pregnancy-associated lactogens bind to the prolactin receptor, which then activates the Janus kinase JAK2, resulting in phosphorylation/activation of the transcription factor signal transducer and activator of transcription 5 (STAT5) and its nuclear translocation. Phospho-STAT5 dimers bind to the promoter of the cyclin D2 gene and upregulate transcription of cyclin D2 mRNA, with resultant induction of β-cell proliferation. This pathway was demonstrated to exist in rodent β-cells (49). More recently, two reports have added depth and definition to this pathway. First, Kim and colleagues (50) demonstrated that lactogenic signaling during pregnancy upregulates the transcription factor Bcl6, with resulting repression of menin, a component of the epigenetic histone methylase trithorax complex. Through the trithorax complex, menin acts as a transcriptional activator of the CDK inhibitors p18INK4 and p27Kip1, and repression of menin is therefore permissive for β-cell replication in pregnancy. Most recently, the German group (Kim et al.) (51) demonstrated that lactogenic signaling during pregnancy is also associated with a dramatic, 1,000-fold increase in serotonin production by β-cells, a reflection of the induction of β-cell tryptophan hydroxylases by lactogens. Unanswered questions still remain, such as: which signaling pathways does the serotonin receptor 5HT2b activate in the β-cell?
How exactly do these downstream signaling molecules affect the cell-cycle symphony?

## Additional growth factor signaling pathways.

In this brief *Perspectives in Diabetes*, only a few rodent β-cell signaling pathways can be described. It should be clear, however, that multiple other signaling pathways linking growth factors to rodent β-cell replication exist. Examples include: a glucose–Glut2–ChREBP–cMyc pathway (52); a glucose–AMP-activated protein kinase–LKB1–calcineurin–nuclear factor of activated T-cells pathway (27,28,53); an epidermal growth factor (EGF) ligand family (EGF, betacellulin, heparin-binding EGF, amphiregulin, and trefoil factor family 3)–EGF receptor–ras/raf–mitogen-activated protein kinase (MAPK)–cdk–cyclin pathway (54,55); a cAMP–cAMP-dependent protein kinase–CREB–cAMP response element modulator–cyclin A pathway (56,57); parathyroid hormone–related protein induction of proliferation (5); a leptin–IRS-2 pathway (58); a transforming growth factor-β/activin family–SMAD pathway (59); sex hormone signaling pathways (60); cannabinoid receptors (CB1) that cross-talk with insulin receptors and IRS-2 (61); calpains (62); and, recently, a platelet-derived growth factor (PDGF) receptor linked to a MAPK signaling pathway, driving the polycomb repressive complex family member Ezh2, with repression of p16INK4, thereby permitting activation of downstream cell-cycle molecules such as the pRb family pocket proteins and E2Fs (63).

# Cell-Cycle Control of Proliferation in the Rodent β-Cell

This is an important and rapidly moving area. It is complex and well beyond the scope of a brief review, but detailed reviews are available (64–66). Briefly, downstream of every developmental program and nutrient- or growth factor–driven signaling pathway that is able to drive rodent β-cell proliferation, intracellular signaling pathways converge on the molecules that control the G~1~/S checkpoint. This checkpoint is shown schematically in Fig. 2, and examples of interactions with upstream signaling pathways are shown in Fig. 3. Briefly, there are a number of cell cycle–activating molecules: the D-cyclins and their cognate cdks, and the E/A-cyclins and their cognate cdks, cdk1 and -2. Overexpression of all of these in various combinations is able to drive rodent β-cell proliferation, either when delivered adenovirally (67) or transgenically, and in some cases (but not all), disruption of their genes leads to β-cell hypoplasia, hypoinsulinemia, and diabetes (64–66). Thus, these cell-cycle activators are key targets of upstream pathways, and they have been shown to be regulated by these signaling pathways, growth factors (as described above), and glucose and glucokinase activators (68).

These cell-cycle activators are balanced by a series of cell-cycle inhibitors, which include the so-called "pocket proteins" (pRb, p107, and p130), the INK4 family (p15, p16, p18, and p19), the CIP/KIP family (p21, p27, and p57), menin, and p53. The details of their actions and interactions with the cyclins and cdks are too complex for a brief discussion, but detailed reviews are available.
The key points in the current context are that they, too, are essential regulators of β-cell cycle progression (e.g., p21 and p27 are regulated by mitogens, and p16 and p18 by free fatty acids), and genetic disruption of the genes encoding some of these molecules leads to β-cell hyperplasia and hyperfunction (69,70).

In this context, it is important to emphasize that these G~1~/S molecules have short half-lives and frequently, if not most often, are regulated at the level of protein stability. This is important because changes seen at the mRNA level frequently correlate poorly with those observed at the protein level.

Thus, we know an enormous amount about cell-cycle control in the rodent β-cell and how key signaling pathways regulate these molecules.

# Cell-Cycle Control in the Human β-Cell

As in the rodent β-cell, the G~1~/S molecules controlling human β-cell proliferation have also been studied, and much is now known. For example, we have a relatively complete understanding of which of the ∼30 G~1~/S molecules are present in the human islet, and they can be arranged in a working wiring diagram or road map, which looks very similar to the rodent version in Fig. 2 (71). In addition, we know that adenoviral overexpression of certain cell-cycle regulatory molecules (e.g., cMyc, the cdks, or the cyclins D or E) in the human β-cell activates proliferation robustly. For example, with adenoviral overexpression of cell-cycle molecules, as many as 10–15% of adult human β-cells are able to replicate, as assessed using Ki-67 immunohistochemistry or bromodeoxyuridine incorporation (71–74).

Although the human β-cell G~1~/S road map is similar to that in the rodent β-cell, there are two principal differences, indicated by the black numbers in Fig. 2. First, cdk6 is absent in rodent β-cells but abundant in human β-cells (71). This is relevant because genetic loss of cdk4 in mice leads to β-cell hypoplasia and diabetes, presumably because there is no cdk6 to compensate for this loss. Second, although all three D-cyclins are present in the mouse β-cell, genetic loss of cyclin D2 (but not cyclins D1 or D3) in mice leads to β-cell hypoplasia and diabetes, indicating that this is the key D-cyclin in the rodent β-cell (75,76). In contrast, human islets contain little or no cyclin D2 (71), suggesting, paradoxically, either that cyclin D2 is irrelevant in the human β-cell or that it is absolutely essential, with its paucity being precisely the reason that human β-cells do not replicate.

## Failure to signal from cell surface to cell-cycle machinery.

Most importantly in the current context, because adult human β-cells do not replicate in response to the long list of growth factors, nutrients, and maneuvers that induce rodent β-cells to replicate, but clearly can replicate when cyclins and cdks are overexpressed (71–73), it is clear that failure or inadequacy of the cell-cycle machinery in human β-cells is not the reason they do not replicate; the machinery is there and waiting to be activated. Thus, the likely missing link in adult human β-cell replication is not failure to express key cell-cycle molecules, but rather a failure to activate them in response to what would appear to be appropriate upstream signals.
More specifically, the blockade(s) would appear to lie somewhere among the upstream growth factors, their receptors, and the signaling pathways that should, but do not, alter these cell-cycle molecules.

# There is No Human β-Cell Receptor–Nutrient–Signaling–Cell-Cycle Pathway Road map

If human β-cells possess the requisite G~1~/S molecules to drive β-cell replication, and their upregulation can drive human β-cell replication, it is then axiomatic that signaling events that should be reaching the cell-cycle machinery are not. It therefore would be particularly useful to have a human β-cell signaling road map that describes these intracellular signaling pathways, with their upstream ligands and receptors and downstream cell-cycle targets. Unfortunately, in contrast to the rich and complex road map for growth factor and nutrient induction of rodent β-cell replication, no such road map exists for the human β-cell: we remain very much in the dark regarding receptors, nutrient transporters/sensors, their downstream signaling pathways, and their putative connections to the ultimate downstream cell-cycle regulatory molecules. Simply said, the human β-cell signaling road map is almost a blank slate, a tabula rasa, as shown in the light gray lines and text in the upper portion of Fig. 4, which should be contrasted to their counterparts in Fig. 1. The following sections briefly summarize the little that we do know.

## IRS-PI3K-Akt signaling.

Human islet cells express insulin and IGF-I receptors and components of these signaling pathways (5,77). In functional terms, several studies support the concept of a direct insulin action in the β-cell that promotes a positive-feedback effect in vivo in healthy humans (78) and that is blunted in patients with T2DM (79,80). Ex vivo examination of human islets and pancreas sections from T2DM patients (77,81) supports a role for the insulin-signaling network in the regulation of cell-cycle proteins. Although an increase in β-cell volume has been reported in humans with obesity (82), it is unclear whether this occurs as a compensatory response to insulin resistance, as occurs in animal models, or is genetically predetermined. Additional studies that directly address this possibility are warranted.

## mTORC1 and regulation of human β-cell proliferation.

The expression of some mTOR signaling components has been demonstrated in human islets (83), exemplified by pathway induction by glucose, amino acids, and insulin (84). Most of our understanding of mTOR signaling in human islets is derived from studies using rapamycin derivatives as immunosuppressants or as treatment for patients with insulinomas and other neuroendocrine tumors. Rapamycin treatment has been shown to inhibit human β-cell proliferation in vitro (41). Moreover, *TSC2* and *PTEN* expression is decreased in insulinomas and pancreatic endocrine tumors, suggesting that activation of mTORC1 and Akt signaling plays a role in the proliferative changes observed in these tumors (85). However, insulinomas are rare in patients with tuberous sclerosis. In contrast, mTORC1 inhibition by everolimus (a rapamycin analog) improves glycemic control in patients with insulinoma and prolongs progression-free survival in patients with pancreatic neuroendocrine tumors (86,87).
The extent to which reduced β-cell mass or inhibition of insulin secretion contributes to the responses to mTORC1 inhibition is unclear, but, based on the current evidence, it is likely that inhibition of β-cell proliferation plays a major role.

## PKCζ and GSK3 signaling in the human β-cell.

Pharmacological inhibition of GSK3β has been shown to enhance (approximately fourfold) glucose-mediated human β-cell proliferation (although from very low to low basal rates) (41), and adenoviral transduction of human islets with CA-PKCζ in vitro enhances (approximately fourfold) human β-cell proliferation (45). However, whether PKCζ-induced human β-cell proliferation is mediated or potentiated by GSK3β inactivation is unknown.

In summary, it is important to emphasize that at present, although many examples exist in rodent β-cells, we do not have a single example of a complete pathway in the human β-cell that links a growth factor or nutrient, via a cell-surface receptor and a complete signaling pathway, to the downstream cell-cycle machinery, translating into proliferation. In this regard, the recent report of a PDGF–PDGF receptor–MAPK–Ezh2–p16 pathway (63) is exciting and comes the closest to filling this void.

# Why We Need a Human β-Cell Mitogenic Signaling Pathway Road map

Comparing Figs. 1 and 4 makes it clear that informed efforts to drive therapeutic human β-cell proliferation are severely hampered by the essentially complete lack of a human β-cell road map. Further, the fact that direct cell-cycle activation by cyclin/cdk overexpression can markedly activate human β-cell proliferation places at least one key obstacle to human β-cell proliferation within these upstream signaling pathways, about which we know so little. As noted below, it would be informative to investigate the upstream signaling pathways and identify proteins that link the cell-surface receptors with key molecules in the cell-cycle machinery in human β-cells.

Examples of key questions about which we have no answers are: *1*) do the gray molecules and lines in Fig. 4 even exist in adult human β-cells?; *2*) are there key signaling pathways in human islets that we have completely overlooked? \[e.g., erythropoietin, insulin, serotonin, and osteocalcin have all been proposed as physiologic drivers of rodent β-cell replication (51,88,89), but have not been comprehensively tested in human β-cells\]; *3*) do adult human β-cells require engagement of receptors or nutrients distinct from those on rodent β-cells?; *4*) what are the intracellular signaling networks and cell-cycle targets regulated by the kinases in Fig. 4 that *are* present in human β-cells?;
*5*) are these kinases activated by growth factors and nutrients, and, if not, what is preventing them from being activated?; *6*) can coordinate activation of combinations of pathways synergize to enhance proliferation (e.g., if overexpression of PKCζ and inhibition of GSK3β can increase human β-cell replication from 0.2 to 1.0%, would coordinate activation of multiple pathways further increase replication?); *7*) are the cell-cycle inhibitors that are so abundant in human β-cells restraining proliferation, and how are they regulated?; *8*) are key cell-cycle regulatory elements epigenetically regulated (in this regard, for example, the epigenetic methylases and demethylases menin, Ezh2, and Bmi1 have all been shown to participate in regulating rodent β-cell replication)?; *9*) in this era of high-throughput screening, which are the optimal small-molecule targets in the human β-cell for driving proliferation?; and *10*) why and how do β-cells in young humans proliferate at a higher rate compared with those from older individuals?

The answers to these critical questions are simply that we know very little: we have no road map from which to hypothesize, model, and integrate data from multiple systems. We believe that obtaining the information required to create such a road map is essential. Without this information, we are driving at night without headlights, without a road map, and without global positioning system navigation. It is time to develop such a high-content signaling road map that connects the cell surface to the cell-cycle machinery in the human β-cell.

## ACKNOWLEDGMENTS

This work was supported by the National Institute of Diabetes and Digestive and Kidney Diseases (NIDDK)/National Institutes of Health (NIH) grants R01-DK-67536 and R21-RR-24905 and Juvenile Diabetes Research Foundation (JDRF) Grant 17-2011-644 (to R.N.K.); NIDDK/NIH grants R01-DK-73716 and R01-DK-84236, JDRF Grant 46-2010-758, and Career Development Award 7-06-CD-02 from the American Diabetes Association (ADA) (to E.-B.M.); NIDDK/NIH grants R01-DK-77096 and R01-DK-67351 and Research Award 1-10-BS-59 from the ADA (to A.G.O.); and NIDDK and the Beta Cell Biology Consortium through NIH grants U01-DK-089538 and R01-DK-55023, and JDRF grants 1-2008-39 and 34-2008-630 (to A.F.S.).

No potential conflicts of interest relevant to this article were reported.

R.N.K., E.-B.M., A.G.O., and A.F.S. wrote the manuscript, reviewed and edited the manuscript, and contributed to the discussion. R.N.K., E.-B.M., A.G.O., and A.F.S.
are the guarantors of this work and, as such, had full access to all the data in the study and take responsibility for the integrity of the data and the accuracy of the data analysis.

The authors thank the innumerable authors who have contributed to the field of β-cell intracellular signaling and apologize for not being able to discuss and/or cite all of the important contributions because of editorial limitations on the text and numbers of references allowed.

abstract: The individual 'validation' experiments typically included in papers reporting genome-scale studies often do not reflect the overall merits of the work.
author: Timothy R Hughes
date: 2009
institute: 1Banting and Best Department of Medical Research, Department of Molecular Genetics, and Centre for Cellular and Biomolecular Research, University of Toronto, Toronto, ON M5S 3E1, Canada
references:
title: 'Validation' in genome-scale research

# 'Validation' in genome-scale research

Following the advent of genome sequencing, the past decade has seen an explosion in genome-scale research projects. Major goals of this type of work include gaining an overview of how biological systems work, generation of useful reagents and reference datasets, and demonstration of the efficacy of new techniques. The typical structure of these studies, and of the resulting manuscripts, is similar to that of a traditional genetic screen. The major steps often include development of reagents and/or an assay, systematic implementation of the assay, and analysis and interpretation of the resulting data. The analyses are usually centered on identifying patterns or groups in the data, which can lead to predictions regarding previously unknown or unanticipated properties of individual genes or proteins.

So that the work is not purely descriptive – anathema in the molecular biology literature – there is frequently some follow-up or 'validation', for example, application of independent assays to confirm the initial data, an illustration of how the results obtained apply to some specific cellular process, or the testing of some predicted gene functions. As the first few display items are often schematics, example data, clustering diagrams, networks, tables of *P*-values and the like, these validation experiments usually appear *circa* Figure 5 or 6 in a longer-format paper. This format is sufficiently predominant that my colleague Charlie Boone refers to it as "applying the formula".
I have successfully used the formula myself for many papers.

My motivation for writing this opinion piece is that, in my own experience, as both an author and a reviewer, the focal point of the review process – and of the editorial decision – seems too often to rest on the quality of the validation, which is usually not what the papers are really about. While it is customary for authors to complain about the review process in general (and for reviewers to complain about the papers they review), as a reader of such papers and a user of the datasets, I do think there are several legitimate reasons why our preoccupation with validation in genomic studies deserves reconsideration.

First, single-gene experiments are a poor demonstration that a large-scale assay is accurate. To show that an assay is consistent with previous results requires testing a sufficiently large collection of gold-standard examples to be able to assess standard measures such as sensitivity, false-positive rate and false-discovery rate. A decade ago, there were many fewer tools and resources available; for example, Gene Ontology (GO) did not exist before the year 2000 \[1\], and many of the data analysis techniques now in common use were unfamiliar to most biologists. Proving that one could make accurate predictions actually required doing the laboratory analyses. But today, many tools are in place to make the same arguments by cross-validation, which produces all of the standard statistics. It is also (gradually) becoming less fashionable for molecular biologists to be statistical Luddites.

Second, and similarly, single-gene experiments, or illustrations relating to a specific process, do not describe the general utility of a dataset. Many studies have shown (even if they did not emphasize) that specific data types and reagents are more valuable for the study of some things than others. Validation experiments tend to focus on the low-hanging fruit, for instance, functional categories that seem to be yielding the best examples, and the largest numbers. To minimize the ire of my colleagues, I will give an example from my own work. Our first efforts at systematically predicting yeast gene functions from gene-expression data \[2\] resulted in more predictions relating to RNA processing than to any other category, and Northern blots are something even my lab can do, so these were the ones we tested. Although we would like to think that the success at validating predictions from other processes will also be as high as our cross-validation predicted, laboratory validation of predictions from only one category does not show that. Moreover, if one is engaged in high-throughput data collection, it is possible to perform a large number of validations, and show only those that work. It is also possible to choose the validation experiments from other screens already in progress, or already done, or even from other labs. I suspect this practice may be widespread.

A third issue is that focus on the validation is often at the expense of a thorough evaluation of the key points of the remainder of the paper. I may be further ruffling the fur of my colleagues here, but I think it is fair to say that a hallmark of the functional genomics/systems biology/network analysis literature is an emphasis on artwork and *P*-values, and perhaps not enough consideration of questions such as the positive predictive value of the large-scale data.

David Botstein has described certain findings as "significant, but not important" – if one is making millions of measurements, an astronomically significant statistical relationship can be obtained between two variables that barely correlate, and an overlap of only one or a few percent in a Venn diagram can be very significant by the widely used hypergeometric test. A good yarn seems to distract us from a thorough assessment of whether statistical significance equates to biological significance, and even whether the main dataset actually contains everything that is claimed.
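To make the 'significant, but not important' distinction concrete, here is a minimal sketch, with hypothetical numbers and assuming only that numpy and scipy are available; it reflects no particular published dataset.

```python
# Minimal sketch: statistical significance vs biological importance at
# genome scale. All numbers are hypothetical.
import numpy as np
from scipy.stats import hypergeom, pearsonr

rng = np.random.default_rng(0)

# 1) Two variables that barely correlate (true r ~ 0.01), measured a
#    million times, yield a vanishingly small P value.
x = rng.standard_normal(1_000_000)
y = 0.01 * x + rng.standard_normal(1_000_000)
r, p = pearsonr(x, y)
print(f"r = {r:.3f}, P = {p:.1e}")  # r is tiny; P, driven by n, is typically infinitesimal

# 2) An overlap of 30 genes between two 500-gene lists drawn from a
#    20,000-gene genome (~6% of either list, vs ~12.5 genes expected by
#    chance) is highly significant by the hypergeometric test.
M, n, N, k = 20_000, 500, 500, 30
p_overlap = hypergeom.sf(k - 1, M, n, N)  # P(overlap >= k)
print(f"hypergeometric P = {p_overlap:.1e}")  # on the order of 1e-5
```

Both results are 'real' in the statistical sense, which is exactly why significance alone is a poor proxy for importance.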
I'm writing for an issue of *Journal of Biology* that is about how to make the peer review process easier, but I do believe that papers in our field would be better if referees were allowed and expected (and given time) to look at the primary data, have a copy of the software, use the same annotation indices, and so on, and see whether they can verify the claims and be confident in conclusions that are reached from computational analyses. Even simple reality checks such as comparing replicates (when there are some) are often ignored by both authors and reviewers. I bring this up because one of the major frustrations expressed by a group of around 30 participants at the Computational and Statistical Genomics workshop I attended at the Banff International Research Station last June was the difficulty of reproducing computational analyses in the functional genomics literature. Often, the trail from the primary data to the published dataset is untraceable, let alone the downstream analyses.

Fourth, and finally, the individual 'validation' experiments may not garner much attention, unless they are mentioned in the title, or have appropriate keywords in the abstract. They are rarely as useful as they would be in a paper in which they were explored in more depth and in which the individual hypothesis-driven experiments could be summarized. For instance, a paper we published in *Journal of Biology* in 2004 \[3\] described an atlas of gene expression in 55 mouse tissues and cell types. Using SVM (Support Vector Machine) cross-validation scores, we found that, for many GO annotation categories, it was possible to predict which genes were in the category, to a degree that is orders of magnitude better than random guessing, although usually still far from perfect. The most interesting aspect of the study to me was the observation that there is a quantitative relationship between gene expression and gene function; not that this was completely unexpected, but it is nice to have experimental evidence to support the generality of one's assumptions. The SVM scores were used mainly to prove the general point, and whether any individual predictions were correct was not the key finding – we knew ahead of time (from the cross-validation results) that most of the individual predictions would not be correct; this is the nature of the business. Nonetheless, final acceptance of the manuscript hinged on our being able to show that the predictions are accurate, so at the request of reviewers and editors, we showed that Pwp1 is involved in rRNA biogenesis, as predicted. According to Google Scholar, this paper now has 139 citations, and my perusal of all of them suggests that neither Pwp1 nor ribosome biogenesis is the topic of any of the citing papers. The vast majority of citations are bioinformatics analyses, reviews, and other genomics and proteomics papers, many of them concerning tissue-specific gene expression. Thus, the initial impact appears primarily to have been the proof-of-principle demonstration of the relationship between gene function and gene expression across organs and cell types, and the microarray data themselves. It is the use of genome-scale data and cross-validation that proves the point, not the individual follow-up experiments.
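In outline, that kind of cross-validation argument can be sketched in a few lines. The sketch below uses synthetic stand-in data and assumes scikit-learn is available; it is meant only to show the shape of the analysis, not to reproduce the published one.

```python
# Sketch of a cross-validation argument for genome-scale function
# prediction. Data are synthetic stand-ins, not a real expression atlas.
import numpy as np
from sklearn.model_selection import cross_val_predict
from sklearn.svm import SVC

rng = np.random.default_rng(1)
n_genes, n_tissues = 4_000, 55
X = rng.standard_normal((n_genes, n_tissues))  # expression profiles
y = rng.random(n_genes) < 0.03                 # GO category membership (~3%)
X[y] += 0.35                                   # members share a weak signature

clf = SVC(kernel="linear", class_weight="balanced")
pred = cross_val_predict(clf, X, y, cv=5).astype(bool)

tp = int(np.sum(pred & y))
fp = int(np.sum(pred & ~y))
fn = int(np.sum(~pred & y))
print(f"sensitivity = {tp / max(tp + fn, 1):.2f}")
print(f"FDR         = {fp / max(tp + fp, 1):.2f}")
# Enrichment well above the 3% base rate supports the genome-scale claim
# even though many individual positive calls may still be wrong.
```

The design point is that the cross-validated error estimates, not any single follow-up experiment, carry the statistical weight of the claim.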
A small survey of my colleagues suggests that many such examples would be found in a more extensive analysis of the literature in functional genomics and systems biology.

For instance, Jason Moffat explained that in the reviews of his 2006 *Cell* paper describing the RNAi Consortium lentivirus collection \[4\], which already contained a screen for alteration of the mitotic index in cultured cells, a major objection was that more work was needed to validate the reagents by demonstrating that the screen would also work in primary cell cultures – which may be true, but so far, even the mitotic index screen seems to have served primarily as an example of what one can do with the collection. The paper has clearly had a major impact: it has 161 citations according to Google Scholar, the vast majority of which relate to use of the RNAi reagents, not any of the individual findings in this paper.

To conclude, I would propose that, as authors, reviewers and editors, we should re-evaluate our notion of what parts of genome-scale studies really are interesting to a general audience, and consider carefully which parts of papers prove the points that are being made. It is, of course, important that papers are interesting to read, have some level of independent validation, and a clear connection to biology. But it seems likely that pioneering reagent and data collections, technological advances, and studies proving or refuting common perceptions will continue to be influential and of general interest, judging by citation rates. As erroneous data or poorly founded conclusions could have a proportionally detrimental influence, we should be making an effort to scrutinize more deeply what is really in the primary data, rather than waiting to work with it once it is published. Conversely, the individual 'validation' studies that occupy the nethermost figures, although contributing some human interest, may be a poor investment of resources, making papers unnecessarily long, delaying the entry of valuable reagents and datasets into the public domain, and possibly distracting from the main message of the manuscript.

abstract: Through its adoption of the biomedical model of disease, which promotes medical individualism, and its reliance on an individual-based anthropology, mainstream bioethics has predominantly focused on respect for autonomy in the clinical setting and respect for persons in the research setting, emphasizing self-determination and freedom of choice.
However, the emphasis on the individual has often led to a moral vacuum, an exaggeration of human agency, and a thin (liberal?) conception of justice. Applied to resource-poor countries and to communities within developed countries, autonomy-based bioethics fails to address the root causes of diseases and public health crises with which individuals or communities are confronted. A sociological explanation of disease causation is needed to broaden principles of biomedical ethics and to provide a renewed understanding of disease, freedom, medical practice, the patient-physician relationship, risk and benefit of research and treatment, research priorities, and health policy.
author: Jacquineau Azétsop; Stuart Rennie
date: 2010
institute: 1Faculté de Médecine Teilhard de Chardin, Complexe Médical le Bon Samaritain, N'djaména, BP 456, Chad; 2Department of Philosophy, University of Cape Town, Cape Town, Private Bag X3, Rondebosch 7701, South Africa; 3Department of Social Medicine, University of North Carolina School of Medicine Chapel Hill, 333 S Columbia Street MacNider Hall, Room 348, CB 7240, Chapel Hill, NC 27599-7240, USA
references:
title: Principlism, medical individualism, and health promotion in resource-poor countries: can autonomy-based bioethics promote social justice and population health?

# Introduction

Respect for autonomy or respect for persons has tended to be the leading principle of biomedical ethics or research ethics, respectively. This principle historically has its roots in the liberal moral and political tradition of the Enlightenment in Western Europe. Within this tradition, the ethical justification of actions or practices strongly depends on the free decisions of individuals, i.e. an action or practice can only be ethically justified when undertaken without any coercive influence and entered into by free and informed agreement. While there have always been disagreements on the details, all theories of autonomy agree on two essential conditions: the first is liberty, specifying independence from controlling influences; the second is agency, referring to the capacity for intentional action \[1\]. Used in clinical ethics, autonomy functions primarily to examine decision-making in health care and serves to identify actions that are protected by the rules of informed consent, informed refusal, truth telling, and confidentiality \[1\]. Autonomy-based approaches are strongly expressed in Tom Beauchamp and James Childress' classic text *Principles of Biomedical Ethics* for clinical bioethics and, for research ethics, the influential *Belmont Report* \[1,2\].

Many criticisms of autonomy-based bioethics have appeared over the past thirty years from a number of different angles, such as feminism, casuistry, disability rights, multiculturalism, cultural studies, and ethnography. In this article, we take a different approach by exploring what we will call the 'medical individualism' that autonomy-based bioethics largely assumes, and by raising questions about the relevance and impact of autonomy-based bioethics in developing countries (and communities within developed countries), especially in light of initiatives to 'build capacity' in research sites and to ensure access to healthcare in resource-poor settings. This paper argues that the medical individualism underlying autonomy-based bioethics renders the latter incapable of addressing some of the most pressing bioethical issues in resource-poor settings, which have to do with social justice.
The first section of this paper considers some of the limitations of principlism. The second section examines the inability of this approach to address social justice concerns in resource-poor countries. Finally, the third section attempts to offer an alternative approach by exploring the contribution of the sociological model of disease causation to research ethics, health justice and health policy.

# A brief anatomy of autonomy-based bioethics

One of the major defenders of the centrality of autonomy in bioethics, the British medical ethicist and pediatrician Raanan Gillon, argues that respect for autonomy should hold a primary place among the four principles of biomedical ethics \[3\]. Other proponents of autonomy, Beauchamp and Childress, define autonomy as a form of personal liberty of action where the individual determines his or her own course of action in accordance with a plan chosen by himself or herself \[1\]. In application to clinical medicine, respect for autonomy dictates that patients with decision-making ability have a right to voice their medical treatment preferences, and physicians have the concomitant duty to respect those preferences \[4\]. Like Beauchamp and Childress, Gillon embraces a Millian understanding of autonomy, understanding it as deliberated self-rule: the ability and tendency to think for oneself, to make decisions for oneself about the way one wishes to lead one's life based on that thinking, and then to enact those decisions, is what makes morality, of any sort, possible \[3\]. Given its supreme ethical importance, autonomy is not merely a value to be respected, but a virtue or trait that ought to be actively developed, nurtured and promoted.

According to Gillon, other ethical principles (beneficence, non-maleficence, and justice) presuppose (and can be reduced to) respect for autonomy. Beneficence and non-maleficence toward autonomous moral agents presuppose respect for the autonomy of these agents, even when they choose to refuse medical interventions which are life-saving. Gillon also takes an autonomy-centered approach to justice, arguing that responding to people's needs justly requires respect for those people's autonomous views, including autonomous rejection of offers to meet their needs, and, more importantly, that providing for people's needs requires resources, including other people's resources \[3\]. To conclude his praise for autonomy, Gillon writes that respect for autonomy contingently builds in a *prima facie* moral requirement to respect both individual and cultural moral variability \[3\]. While it is true that not all autonomy-based approaches in bioethics take the explicit and extreme form expressed by Gillon, autonomy continues to be treated implicitly as a primary value in many controversial clinical and research debates, from end-of-life issues (such as the Terri Schiavo case) to questions of exploitation of research subjects in international health research. When ethical principles conflict, it is often thought that the conflict can be resolved in an ideally impartial way by asking, for example, what the patient wants (or would have wanted) or whether the research subject really understood and freely consented to the procedures described in the research protocol. In this way, the multifarious values involved in the practice of medicine and biomedical research tend to be reduced to the principle of respect for persons, itself narrowly understood as respect for autonomy.
Furthermore, the preeminence of autonomy as an ethical value within bioethics is deeply related to the increasing commoditization of medicine in developed countries. For the more that medical practices are justified by reference to patient choice, the more that patients will be viewed as 'clients' and health care professionals perceived as 'service providers'. This model of patient as 'client', which is prevalent in the United States of America and some parts of the western world, assumes affluence and power: the (literate) patient has to be capable of understanding and rationally weighing his/her options (possibly even in disagreement with the physician) and be in a position to pay in exchange for services chosen.

## Autonomy, exaggeration of human agency, and ethical pluralism

An autonomy-based ethics places the responsibility for medical decision-making largely in the hands of the patient. This raises the descriptive question of whether this conception accurately depicts how clinical decisions are actually made, as well as the normative question of whether such a conception of responsibility should (or should not) function as a universal ideal. In regard to the descriptive issue, patients in resource-poor settings are often not concerned with their ability to determine and shape the course of care. Their arrival at the local health center is the outcome of a long family discussion that led to the collection of money. Sometimes, the patient arrives at the dispensary only when the disease has reached a critical stage, because the cost of care is too high. The primary expectation of both patient and family is to get the medicine or undergo the medical procedure they need and go back to their workplace. Spending time at the hospital means loss of earnings for them and their families, or the diminishment of financial resources. When people can barely afford the cost of care or satisfy the nutritional requirements for a good recovery, the ethics of the medical encounter should be understood differently and expressed in different terms than patient choice. Instead of developing a highly organized medical bureaucracy that cares for the enforcement of patients' rights and protects medical professionals from accusations of malpractice, it would be more helpful to develop new sets of values that guide medical practice and promote patient participation in the healing relationship. The framing of these values may encourage and foster a non-confrontational relationship between health professionals and patients in the clinical setting, and include in the bioethics agenda the social challenges that influence health. The role of bioethics will then consist in identifying social values and laws that may guide clinical work, restore the social dimension of medicine, connect the macro-determinants of health to medical practice and health system delivery, avoid the fragmentation of healthcare, and advocate for good health policies.

The challenge facing bioethics in resource-poor settings is not then to mislead people with unrealistic promises of autonomy that very few people can indeed achieve, but to articulate moral principles and societal values that are oriented around the promotion of equitable access to care and which broaden the goals of medicine and public health. The goals of medicine cannot be confined to the alleviation of suffering within the clinical setting.
Medicine needs to be concerned with the determinants of good and bad health outside the clinical context in order to contribute to evidence-based clinical and public health interventions and education. The major bioethical questions prevalent in resource-poor countries do not essentially revolve around the provision of informed consent at the individual level, but rather around the burning social questions of access to care, commodification and quality of medical care, the relationship between income disparities and health inequities, the impact of poverty and underdevelopment on population health, priorities in biomedical research, and the impacts of gender discrimination on women's health \[5,6\]. Once the focus is shifted away from the individualistic 'patient as client' paradigm, the social problems connected with the domination of medicine by market forces become apparent. If the goal of medicine is to restore health functioning, bioethics should avoid adopting a conception of autonomy that can be used to justify the domination of healthcare delivery by market forces alone and that (wittingly or unwittingly) legitimizes health care systems that exclude the needy sick because the latter are unable to pay (or co-pay) for services or afford hefty medical insurance premiums. Even those bioethicists who promote market-driven medicine based on a libertarian anthropology \[7,8\] ought to carefully articulate alternative ethical values for health care and biomedical research, if they are not to be lured into a 'self-defeating' conception of medicine. As an example of the latter tendency, Robert Sade considers medicine a market commodity and understands medical practice as a set of skills that physicians are entitled to sell on the marketplace to make as much money as possible. Even the cries of the destitute sick or the government's regulatory function cannot restrict the physician's appetite for greater financial reward. Sade's anthropology and approach to medicine are based on the assumption that individuals have the right to select the values that they deem necessary to sustain their own lives. They are also entitled to exercise their judgment to take the best course of action to achieve chosen values. Finally, they have the right to dispose of those values, once gained, in any way one chooses, without coercion by other men \[7\]. Similarly, Tristram Engelhardt protects human freedom to the point of ignoring the fact that the concern that we have for each other makes life in society possible. For him, as long as freedom functions as a side constraint, and as long as the moral community is based on respect for freedom and not force, individual persons will have the possibility of holding entitlements \[8\]. Engelhardt's suggestion is paradoxical because, in trying to protect the freedom of individuals to use their resources to access health care and other goods, he does not ensure that those with few resources have the freedom to obtain health care. Realistically, a genuine affirmation of autonomy cannot result in action informed or motivated by the desire to avoid being a responsible member of one's moral community \[9\]. Here, responsibility means that one should not exploit others by using autonomy as a warrant for market-driven medicine or profit-seeking attitudes. Once medicine is understood as a commoditized product like any other, those who cannot afford services are merely unfortunate consumers.
In this way, a strong emphasis on autonomy can contribute to a culture in which healing and health promotion are no longer at the center of clinical practice and biomedical research.

One can hardly refute the fact that complex social and economic forces have placed patient autonomy at the center of medical ethics, and thereby undermined the age-old ethic of physician beneficence \[10\]. This change is sustained by waning trust in the traditional patient-physician relationship. With the control of medicine by the forces of the market, patients have become consumers of a market commodity called medical care. As a result of this change, the clinical relationship between the patient and physician begins to be seen as a contract and not as a covenant of care, as it was in the past. Autonomy-based bioethics has a tendency to distort the relationship between individuals and the world. On the one hand, it exaggerates the power and range of individual agency; on the other, it underestimates the impact of society, culture and environment, both on individual decision-making and on health. If persons are regarded as atomistic, certain defensive notions of individualistic rights-based autonomy prevail. If a relational construction of personal identity is employed instead, then respect for autonomy becomes part of a wider morality of relationship and care \[1\]. 'Atomistic autonomy' is divisive and lacks social rootedness, while relational autonomy brings about trust and communality. The second version of autonomy, which reveals our true self in society, presents the possibility of placing trust and partnership at the center of the patient-physician relationship. With such an understanding of personhood, bioethics can better balance its concerns over choices and actions with those of relationship and responsibility. A more plausible philosophical anthropology would conceive individuals as entangled in the world, both capable of acting on it and subject to being affected by it.

Reflection on the notion of disease, both infectious and chronic, can contribute to a more plausible philosophical anthropology for bioethics. Infectious diseases question our understanding of autonomous agency in two important ways. First, as both a victim and a vector, a patient cannot be simply seen as a rational agent who has the final ethical word on his own decisions. Both vulnerability to infection and the threat of transmission to others should shape our understanding of patient agency. Second, the concept of choice that shapes our conception of agency in bioethics can no longer be understood in isolation from society. The risk of acquiring and transmitting infectious diseases reflects the patient's interconnectedness with others and the biological environment, an interconnectedness which is always there even when infectious disease is not present \[11\]. Although the values and desires of the patient obviously need to be considered, the ideal of the autonomous agent will remain a fiction unless the social context of the patient's vulnerability is also considered. For other reasons, chronic disease also challenges our understanding of autonomy, especially when the patient finds it hard to manage his or her chronic condition. Family or friends stand as important resources for decision-making and long-term daily care for chronic diseases. We should then recognize that the family and community, which may play an important role in patient care, are part of the resources needed by the patient to exercise agency \[12\].
More and more, it is becoming obvious that the promotion of patients' agency requires serious consideration of patients' best interests in a broader way. Against the backdrop of contemporary institutional medicine, family solidarity is more important than ever to help maintain patients' dignity and agency throughout stressful times [13]. Excluding family and relatives from the sphere of decision-making on account of respect for individual autonomy does not necessarily serve patients' best interests. Furthermore, primary care, because of its focus on the treatment and prevention of chronic and infectious diseases, is the domain of medicine that goes beyond techno-medical solutions to consider patients as persons with their own stories, relationships, and social environments. Consequently, primary care should rely essentially on socially-grounded values rather than on desocialized principles [14].

Family and social relationships are important in the context of clinical medicine. However, this is not to dismiss the importance of individual freedom; we simply reject strong claims that lack any social rootedness. It would be unsound, and socially untrue, to endorse autonomy radically to the detriment of an ethic of responsibility and socially-based care, because the two are mutually interdependent, and a complete account of medicine's moral axis requires that they be integrated. This reorientation is crucial for reasserting the ethos of clinical medicine, whose fundamental mandate remains the care of others [10].

## Autonomy ethics and the 'moral vacuum'

For Immanuel Kant, respect for persons never refers to the freedom to be left alone. Kant's understanding of respect for autonomy provides the ground for the categorical imperative, which he formulated in five different ways. The third formulation, "…act so that you treat humanity whether in your own person or in that of another, always as an end and never as means only" [15], cannot be reduced to the respect for autonomy often found in the bioethics literature. The view of autonomy commonly found among individuals and in some of the bioethics literature in North America or Western culture is more in tune with John Stuart Mill's formulation of liberty: do not intrude on the freedom of any person by an invasion foreign to his or her own wishes and values. When Kant talks about autonomy, he does not imply that one should act according to one's own desires, unconstrained by a balanced consideration of one's situation as a being-among-others [9]. Instead, he refers to the dignity of humans who are capable of making universal law for themselves and others. Hence, autonomy, rightly construed, results in action informed and motivated by the desire to be a responsible member of one's moral community (the ground of one's being-among-others) [9]. Kantian autonomy is tied to the moral agent's search for truth and respectable conduct. The autonomous subject does not act in accordance with his or her primary inclinations. Kantian autonomy applies to actions performed when the will is freed from any selfish determination. When humans treat each other as ends and never merely as means, there arises a systematic union of rational beings under common objective laws.
Physician and patient, each with their own needs, desires and capabilities, must find those principles that allow them to coalesce into a helping alliance to achieve a common goal.

Contemporary readings often accept a Millian version of autonomy that is associated with self-seeking attitudes. This approach to respect for autonomy refers to the capacity to act on needs, wants, or wishes, a capacity shared by many creatures. Since the person's action is informed by instrumental reasoning, this constricts the scope of reason, so that it is subject to whatever desire or disposition one happens to endorse at the time one acts [9]. Focusing essentially on individual choices sets up a false and pernicious opposition between persons and the community to which they belong. It is reasonable, on both conceptual and empirical grounds, to suppose that individuals acquire their values through engagement with a concrete moral tradition, rather than through a private and self-directed process. Instead of providing ethical decision-making with an objective and rational process, the obsession with individual autonomy tends to create what McCormick calls a 'moral vacuum', i.e. the disappearance of the network of shared and established goods and values that make the choices of individuals right or wrong, moral or immoral [16].

## Balancing autonomy and community in ethical decision-making

The influence of social, cultural and environmental factors on moral decision-making can hardly be ignored. We have to take these factors into account in order to fully appreciate the moral dilemmas and health challenges in settings and traditions where individualism does not prevail. Writing from their Jewish background, Barth-Rogers and Jotkowitz note that within the Jewish tradition the idea of unlimited human autonomy is not a defining value; Judaism deems the intrinsic human value of each individual's life to take precedence over patient autonomy [12]. Similarly, the Confucian culture of East Asia understands the person not only as a rational, autonomous being but also as a relational and altruistic entity whose self-actualization involves participating in and promoting the welfare of fellow persons [4]. In the same line of thought, African traditions present a view of the human person that is essentially relational; it is within the social network that the individual lives and acts as a free person. The Jewish, Confucian, and African cultures convey an understanding of the human person and society which is different from the individualism operative in some cultures.

This is where the shortcomings of Gillon's autonomy-centered conception of bioethics become most obvious. Gillon does not reject the view that particular cultures should be respected; instead, he theorizes that the *prima facie* nature of autonomy requires that both individual and cultural moral variability be respected [3]. But this sense of respect for culture does not adequately reflect the social rootedness of the human person. Despite making 'concessions' to culture, Gillon continues to view societal relationships, determinants and influences as peripheral to human reason and, because of the danger of ethical relativism, as something to be transcended by a universal ethic. Hence, the four principles (with autonomy as supreme among them) are held to account for all our moral worries and to apply straightforwardly to all situations and contexts [17].
Gillon contends that any other moral principle or value can be explained by one or some combination of the four principles. In fact, however, Gillon's quest for a universal discourse is nothing more than the promotion of one approach to ethics among others, one which reflects specific cultural assumptions concerning individual choice and future-oriented action that are associated with class position and social opportunities and foreign to the lived reality of the poor, the marginalized, and people of color in a multicultural society like the United States [18]. Any attempt to universalize an ethnic particularity fails the test of respect for pluralism in bioethics and in our ever-globalizing world.

In resource-poor countries, where medical paternalism prevails on account of patient beneficence and shared responsibility for health promotion [19], the need to create conditions that improve, for example, patient-physician communication in ways that favor patient agency must be acknowledged. Very often, the physician does not even tell the patient what is going on with his or her health. However, the one-sided view of the human person which prevails in autonomy-based bioethics should not be adopted as a model to correct paternalism; a more fruitful alternative would be a combination of a community- and tradition-oriented view with autonomy, one that conceives decision-making as guided by important human values such as partnership, trust and solidarity, in addition to autonomy. This view would acknowledge the embedded and relational nature of human choices, behavior, ways of expressing emotions and feelings, patterns of thinking, and conceptions of disease and healing.

# Autonomy, biomedical individualism, and social justice

Some criticisms of autonomy-centered bioethics have been purely conceptual. Others have emerged from reflections on its limitations in dealing with collective macro-problems, including the social, sanitary and environmental problems that mark everyday life in poor countries. Autonomy-based bioethics fails to engage the lived worlds of diversely constituted and situated social groups, particularly those that are marginalized [18]. Similarly, in clinical medicine, broad issues such as the common good, distributive justice and the spirituality of the patient are ignored for the sake of the primacy of secular business concerns. To guide clinical practice, laws have been developed to reduce the risk of malpractice and protect patients. However, the emphasis placed on the principle of autonomy has led to excessive control of clinical practice by judicial institutions. Consequently, this obsession with the law has led to the elimination of a wide range of moral concerns from public consideration [16]. To emphasize this point, McCormick criticizes clinical ethics for being preoccupied with cost control that focuses narrowly on matters of financial efficiency, thus exiling the more basic ethical questions (the ends of medicine, the meaning of life, death, illness and health) [16]. Furthermore, any public health intervention that adopts the biomedical model fails to address the wider social injustices that are responsible for health-related vulnerability and risk.

## Autonomy ethics and medical individualism

The biomedical model is premised on individualism, because it adopts an abstract view of the body and mind of the individual person borrowed from a liberal model of economy and politics [20]. In this model, individuals choose health behaviors.
Thus, poor health is seen as largely due to exposure to health risks that individuals have decided not to avoid. This approach to health risks disregards the role of social structures in shaping the array of risk factors that individuals are supposed to avoid [18], and fails to explain how social inequalities can be embodied in poor health outcomes [21]. Thus, autonomy-focused bioethics, rather than presenting an objective perspective, deprives itself of the theoretical tools needed to address adequately the non-pathological causes of ill-health. Similarly, in research settings, much effort is often invested in securing the informed consent of individual participants while the broader issues of justice in the places where research takes place are ignored [22]. Consequently, the absolutization of autonomy, with its unreal and distorted picture of the person, helps explain why so much bioethical writing is concerned with procedures that protect choice rather than with more substantive issues, with consent itself rather than with what is consented to [16]. This tendency to render invisible the social causes of poor health (and the broader ethical problems related to health improvement) can even be seen among those working in public health, to the extent that they subscribe to the biomedical model [20].

## Biomedical model and the social gradient in health

Health differentials between individuals cannot be explained simply by their health behaviors or lifestyles; they are also shaped by social position and economic status, the social networks to which people belong, and the levels of education that provide them with the means to avoid health risks, deal with adversity, and access life-protecting information. The social gradient in health persists even when well-designed public health interventions are implemented. Even where such interventions reduce health risks and mortality, they do not eliminate the social gradient, because individuals in the lower socioeconomic groups take less advantage of health interventions than those who are better off.

When we compare health statistics between poor and rich within countries or between countries, the differentials are striking. HIV/AIDS statistics provide striking examples of the impact of socioeconomic status on risk differentials and chances of survival between groups within countries and between countries. Even in developed countries, the geography of HIV/AIDS challenges us to investigate the social causes of its distribution. Risk and survival differentials prompt us to consider a view that places political-economic critiques of global resource distribution, and criticism based on the higher and qualitatively different disease burdens in poor countries, within a common framework of international and internal socio-economic structure [23]. At the local level, income inequality in poor countries affects health and can be an indicator of life expectancy [24,25]. Poverty affects individuals' ability to access goods that are instrumental to well-being. At the country level, poverty limits a government's ability to fund social programs and provide people with basic social goods such as safe drinking water, electricity, good public health coverage, healthcare institutions, schools, social services, and economic opportunities.
These structural causes are persistent, and they include access to the basic resources that can be used to avoid all sorts of health risks or to reduce the negative outcomes of diseases when they occur [26].

Most public health interventions focus on individual risk factors and behavior. To lessen vulnerability and risk, health professionals will need to address income differences between individuals and population groups. Otherwise, they will only address the symptoms, and not the root causes, of poor health. As public health practitioners and other health professionals 'resocialize' their conceptions of health and disease, bioethicists should join and inform their efforts. A sociological approach to disease can increase the social relevance of bioethics because it provides an acute perception of disease etiology and pathology that includes the social and material conditions in which people live.

# Sociological model and autonomy-based bioethics

To underscore the difference between Western and non-Western conceptions of illness, Bowman writes that most non-Western cultures tend to perceive illness in a much broader and far less tangible manner: illness is often viewed as being linked to social, spiritual, and environmental determinants [27]. The sociological model of disease explanation shares important connections with many non-Western cultures in which disease representation and explanation are understood not primarily in biomedical terms but in social ones. Autonomy-based bioethics is premised on the view that disease is located in the individual. The focus on the individual person often reduces the scope of justice in clinical medicine and health research to the equal treatment of the individuals involved and a fair distribution of available resources and burdens regardless of people's social status, age, race, gender or religion. In the clinic, for example, justice requires that patients whose circumstances are the same deserve the same level and quality of care.

Conversely, the sociological model perceives disease as an integrated social-physiological process which includes the person's relation to the environment. In addition to its bio-physiological dimension, a disease is a relational phenomenon; as a subjective and socially-constructed reality, a disease develops out of the omnipresence of symptoms and bodily feelings in everyday life. The sociological model allows us to develop a socially-relevant approach to health justice, a new set of principles that may guide research, and an approach to health policy based on the features of the site where research is done. Thus, this model points to two reminders of our embeddedness in the world that are relevant to bioethics: first, biological embeddedness, exemplified by infectious disease; and second, social embeddedness, particularly (but not exclusively) in contexts where people are obviously dependent on one another and traditional behavior and customs are strong.

## Contribution of medical sociology: Sociological model and social justice

The current formulation of ethical principles as applied to medical research in poor countries is inadequate to capture some crucial implications of that research, since it ignores the roots of the health crises with which these countries are confronted [28].
Analyzing the health crises in African countries in the late 1980s, the Cameroonian sociologist Jean-Marc Ela argues that disease and malnutrition never exist by themselves; rather, they come from a system characterized by violence, by a pattern of impoverishment of the majority, and by the monopoly by a minority of the means to live with dignity [29]. Health interventions should not merely address the symptoms of a disease-producing society, but also its structures. Social structures not only shape the distribution of disease across populations; they also determine societal and individual responses to suffering. When the major determinants of health are far from being addressed by a conceptual framework that prioritizes individual problems and morality, there is a need to call the relevance of that framework into question. The high rates of infectious diseases in poor countries are linked to poor living conditions and structural problems. These primary sources of exposure and vulnerability to health hazards must be considered in any attempt to develop bioethical standards for research or any bioethical agenda. The poverty that permeates all spheres of society should be studied, because poverty never exists in isolation from societal influences but rather is integrally a product of the inner workings of each society's political economy. Minimizing the contribution of poverty to the production of disease and disability in poor countries makes suffering invisible and limits our understanding of the etiology of disease.

Medical sociology scrutinizes patterns of disease and the pathways through which social inequalities are embodied in individual vulnerabilities and major epidemics. Thus, the model of disease causation that comes from sociological investigations challenges us to move beyond the clinic or the research site to broaden the scope of justice. Similarly, the prevalence of infectious diseases in resource-poor countries challenges the way justice is understood in research settings. If we consider the patient as a potential victim and vector, we need to shift our gaze from the healthcare that might be most desirable for the individual patient to broader social concerns and to the worldwide distribution of care that might enable all to achieve opportunities over a reasonable life span [11]. The extension of care to all not only serves individual needs for care; more importantly, it addresses infectious diseases as a threat to population health. Opting out of an intervention of this kind would simply mean that the individual remains a threat to the entire population [3].

The sociological explanation of disease incorporates a distinctive view of etiology, prevention, pathology, treatment, and justice. This approach to disease explanation tacitly promotes a conception of responsibility for infection or disease causation which is not only individual.
This approach questions the use of individualism as a methodology and framework for analyzing disease occurrence, and thus criticizes the one-sidedness of the anthropology that sustains the biomedical model.

## Sociological model and justice in current biomedical research

Documents such as the *Declaration of Helsinki* issued by the World Medical Association and the *International ethical guidelines for biomedical research involving human subjects* (CIOMS), as well as the work of the Nuffield Council on Bioethics in 2002 and that of the National Bioethics Advisory Commission (NBAC) in 2001, all take material poverty as the main reason for developing bioethical standards that apply to medical research conducted in poor countries. Surprisingly, the bioethics standards they promote hardly reflect the physical, social, and cultural environment of poor countries. This is another important area for revision [28].

Given the substantial differences in individual exposure to health risks and in the availability of health-protective resources, as well as the differences in disease burden, mortality and morbidity at the population level, it is clear that illness in poor countries can be better understood from a 'social causation of illness' perspective. The principles of respect for persons, beneficence, and justice that shape the *Belmont Report* are all built on the biomedical model. The principle of respect for persons reinforces individual agency and protection in the research setting by ensuring that participants are properly informed about the research or the course of care that will be taken to restore normal functioning. The principle of beneficence extends this by insisting that research protocols should maximize potential benefits and minimize harm. Finally, the principle of justice ensures that those with diminished autonomy are protected and that participants share in the benefits of the research. Agency, benefit, participation, risk, and vulnerability are all understood from the standpoint of individually-focused disease management, whether in the clinical setting or at the research site. To be of broader global significance, the ethical principles of biomedical research should be responsive to the context of poverty and social inequities, since these structural factors can lead to increased vulnerability and exploitation. For example, the inability of poor people to satisfy their basic needs can lead to increased participation in clinical trials, motivated at least in part by financial incentives, without a true understanding of risk and benefit. Thus, even if these people 'consent' to participation in a trial, is that decision truly autonomous? It is clear, then, that 'research protections' cannot be ensured solely through the use of the consent form and the provision of information to the subject. A formal provision of consent by the research subject can simply mask the misery that inhibits his or her ability to consent freely.

Similarly, what counts as 'benefits' can be tied to different levels of poverty and disease burden in different resource-poor countries. The ethical principles and guidelines that oversee biomedical research can be defined in terms of the public good rather than merely as an improvement in individual health status, because public good and social policy transcend the framework of individual-based ethics [28].
In resource-poor countries, death rates are high and infectious diseases contribute significantly to the burden of disease, whereas in richer countries cardiovascular disease and cancer are the leading causes of mortality. This difference in exposure, health risk, mortality, and morbidity between poor and rich countries challenges us to develop a new approach to the concept of benefit in biomedical research. We need to think of 'benefits' as accruing to the whole community in which research takes place, and not just to individual research subjects. The availability of and access to modern health services is therefore a substantial issue in evaluating the impact of biomedical research benefits in poor countries, since the outcomes of health initiatives are largely determined by structural arrangements that transcend the benefits to research subjects. These arrangements are based upon national and international patterns of control over society's resources.

Current ethical guidelines remain inadequate because they do not address the international context of exploitation within which research is done. People's health status cannot be separated from the capitalist system of resource distribution and exchange, which favors rich countries and high socioeconomic groups and reinforces the impoverishment of poor ones. The economic exploitation that prevails in the capitalist system shapes the global and local distribution of resources and diseases, as well as the health risks and vulnerability of those who live on the margins of the global market. The concepts of 'benefit' and 'justice' have been inadequately extended to biomedical research in poor countries because the possibility of exploiting the underprivileged there is more complex than an exploitative relationship with vulnerable populations in developed countries, where at least the rule of law and the respect due to every citizen have already been institutionalized. Furthermore, the number of research studies conducted in poor countries is increasing because regulatory measures are often less strict; this situation may facilitate the exploitation of the poor, disregard for basic ethical standards, and an unrestrained pursuit of benefit.

Bioethics scholarship that draws on the sociological model considers local as well as global issues of social inequality, because this model is premised on the intimate connection between social inequality and health inequality. The distribution of illness is likely to reflect the geography of inequality. A social approach to bioethics emphasizes distributive justice and benefits at both the population and the individual level. Three important principles flow from this analysis. The first can be called the principle of public benefits (a community-based approach to benefits); it is a context-based principle which derives from the factors that contribute to ill-health and vulnerability to preventable diseases in poor countries. It states that risks, benefits, and equity can no longer be defined solely in terms of individual health, but must also be understood in relation to international, national and local contexts [23]. Such a principle challenges the individualistic understanding of benefits in places where exploitation and inequality are at the center of research. Consequently, a community-based understanding of benefits calls for a large-scale distribution of the benefits of research as an important requirement of justice.
This principle is relevant to political and socioeconomic critiques of the ethics of carrying out research in poor countries, given well-established patterns of exploitation and oppression of the underprivileged. Reliance on the sociological model brings out the fact that the health conditions under study originate in socioeconomic conditions that must themselves be addressed if research is to have an impact on the health status of research participants [28]. Thus, the notion of population- or community-based benefits is related to that of health as a public good, which is, in turn, linked to the global capitalist system that significantly contributes to the health conditions found in poor countries.

The second principle, the principle of social justice, is rooted in a broad approach to justice that places poor health at the center of public and research policy and seeks to correct systemic injustices. This principle is related to the principle of public benefits, since it states that the distribution of benefits should take into account the poverty of local healthcare systems and people's disempowerment as a function of social structures [23]. Here, the challenge is that the distribution of benefits should address the root causes of poor health and not only its symptoms. The third principle underscores the need to build local capacity: building capacity to promote healthcare sustainability, and improving human capital, will have a lasting effect on people's health and reduce the burden of preventable diseases. For example, research on AIDS vaccines often uses existing facilities, or new ones built by funding agencies, to conduct research or administer the trial vaccine. Building capacity may involve researchers and funding agencies improving the training of local medical professionals and reinforcing existing facilities to reduce the burden of disease; and, if a new medical facility has been built for the research study, local communities can continue to use it even after the research project comes to an end.

To avoid exploiting the underprivileged and reinforcing an existing system of oppression, the distribution of benefits should be determined by the context within which diseases occur, the state of the healthcare system, and the available resources. Therefore, research institutions and their financial sponsors are morally obligated to contribute to the development of a healthcare system and to the improvement of human resources that can benefit the whole population. Carrying out research in impoverished parts of the world where people have endured systemic marginalization is not ethical if our understanding of benefit does not address the root causes of poor health. Thus, it is no longer enough merely to avoid doing harm; addressing the health challenges that prevail at the research site is consistent with a broader view of justice [28].

## Sociological model, bioethics, and health policy

An autonomy-centered ethics places the burden of prevention and access to healthcare on the moral agent. In doing so, it frames disease within a model that limits political intervention in the health domain strictly to biomedical solutions or behavior change. This leads to the perpetuation of the social *status quo* within which the risks of poor health are greater, and lends legitimacy to the social forces that increase health risks.
This failure to promote social justice contrasts with John Lynch's understanding of public health intervention. Lynch believes that elements of the social fabric should shape the conception, framework, and implementation of public health interventions. Discussing the influence of socioeconomic status on behavioral and psychosocial risk factors for cardiovascular disease, he argues that the public health community should consider the potential of a broad array of social, educational, and economic policies as effective public health interventions to reduce the unequal distribution of risk factors and the unequal burden of disease [30]. Similarly, bioethicists need to study the health-promoting effects of structural interventions to determine which ones are ethically acceptable and justified. Such a move requires bioethicists to look at broad issues of social equity and to advocate a shift in public policymaking.

In a population-based study of 2,674 middle-aged Finnish men, examining the associations between socioeconomic status measures (education, income, and occupation) reflecting different stages of the lifecourse and health behaviors and psychosocial characteristics in adulthood, Lynch et al. conclude that understanding that adult health behavior and psychosocial health orientations are associated with socioeconomic conditions throughout the lifecourse implies that efforts to reduce socioeconomic inequalities in health must recognize that economic policy is public health policy [31]. The sociological model on which Lynch's understanding of public health intervention is built challenges us to advocate a shift in the policymaking mindset, because health is not a sphere of justice separate from other aspects of human life. Since disease is a social process, a policy vision that focuses on the individual and individual risk factors fails to promote social justice and to address the structural elements that create conditions favorable to the production of disease. Hence, we need to move from healthcare policy to health policy, or rather to a healthcare policy that is responsive to the facts explaining why (certain) people with (certain) diseases from (certain) communities require medical care. Health policy should embrace healthcare policy but also include considerations of welfare, occupational, economic development, employment, and educational policies.

# Conclusion

Sociologists and social epidemiologists challenge bioethicists, especially those working in developing countries, to be socially and culturally relevant. The sociological theory of disease explanation starts with a concrete analysis of the social setting within which illness occurs or research is carried out. Since societal factors shape patterns of mortality and morbidity, the principles of biomedical and research ethics need to be framed within the context of the social inequalities that shape vulnerability to illness. Aligning bioethics with the perspectives, concerns and information of the fields of public health, health policy and medical sociology could vastly improve its global significance.
Thus, bioethicists should be challenged to develop a philosophical anthropology that goes beyond radical affirmations of individuality to acknowledge both the communal and the individual dimensions of the human person.

# Competing interests

The authors declare that they have no competing interests.

# Authors' contributions

JA originated the article, did the research and wrote a first rough draft of the manuscript. SR read the first version and contributed editorial and critical suggestions. After the first peer review, JA made substantial revisions to the earlier draft in close collaboration with SR. Both authors have read and approved the final version of the manuscript.

# Authors' information

Jacquineau Azétsop obtained his PhD in theological and social ethics at Boston College, Massachusetts, USA, and a Master of Public Health (MPH) at the Johns Hopkins Bloomberg School of Public Health in Baltimore, USA. Currently, he is lecturer in health policy and bioethics at the Faculté de Médecine Teilhard de Chardin in N'Djaména, Chad.

Stuart Rennie (PhD, Philosophy) is currently Lecturer in the Department of Philosophy at the University of Cape Town and Research Assistant Professor in the Department of Social Medicine at the University of North Carolina (USA). He is co-Principal Investigator of UNC's Fogarty International Center bioethics capacity-building project in the Democratic Republic of Congo, and ethics consultant for CDC/Global AIDS Projects in the DR Congo and Madagascar. He is co-chair of UNC's Behavioral Institutional Review Board (IRB) and is also currently lead author of research ethics guidelines for the HIV Prevention Trials Network (HPTN). Dr. Rennie has published on research ethics and bioethics topics in PLoS Medicine, Science, the Hastings Center Report, Developing World Bioethics and the Journal of Medical Ethics, as well as writing for his own Global Bioethics Blog.

## Acknowledgements

In thinking about writing this paper we have benefitted tremendously from conversations with and comments from John Paris, the Walsh Professor of Bioethics at Boston College. The authors would also like to thank the anonymous reviewers for their extensive and challenging comments; they have contributed to the strength of this paper.

abstract: Semiconductor nanoparticles (quantum dots) are promising fluorescent markers, but very little is known about the interaction of quantum dots with biological molecules. In this study, the interaction of CdTe quantum dots coated with thioglycolic acid (TGA) with bovine serum albumin was investigated. Steady-state spectroscopy, atomic force microscopy, electron microscopy and dynamic light scattering methods were used.
The effect of bovine serum albumin on the stability and spectral properties of quantum dots in aqueous media was explored. CdTe–TGA quantum dots in aqueous solution proved to be unstable and precipitated. Interaction with bovine serum albumin significantly enhanced the stability and photoluminescence quantum yield of the quantum dots and prevented the quantum dots from aggregating.
author: Vilius Poderys; Marija Matulionyte; Algirdas Selskis; Ricardas Rotomskis
date: 2011
institute: 1Laboratory of Biomedical Physics, Vilnius University Institute of Oncology, Vilnius, Lithuania; 2Biophotonics Laboratory, Quantum Electronics Department, Physics Faculty, Vilnius University, Vilnius, Lithuania; 3Department of Material Structure, Institute of Chemistry, Vilnius, Lithuania
references:
title: Interaction of Water-Soluble CdTe Quantum Dots with Bovine Serum Albumin

# Introduction

Since fluorescent semiconductor nanoparticles (quantum dots) were first synthesized, they have been widely explored due to their possible applications in many fields, including medicine. A tunable emission wavelength, broad absorption and sharp emission spectra, high quantum yield (QY), resistance to chemical degradation and photobleaching, and versatility in surface modification make quantum dots very promising fluorescent markers [1].

Quantum dots can be used for live-cell labeling ex vivo, for the detection and imaging of cancer cells ex vivo [2], as specific markers for labeling healthy and diseased tissues [3], for labeling healthy and cancerous cells in vivo [4], and for the treatment of cancer using photodynamic therapy [5]. Despite all these unique photophysical properties, some problems must be solved before quantum dots can be successfully applied in medicine. Quantum dots are usually water-insoluble and made of materials that are toxic to biological objects (Cd, Se). To make them suitable for medical applications, the surface of quantum dots has to be modified to render them water-soluble and resistant to biological media. After injection into living organisms, quantum dots are exposed to various biomolecules (ions, proteins, blood cells, etc.). This can lead to degradation of the quantum dot coating or of the quantum dot itself; in that case, toxic Cd^2+^ ions are released and can cause cell damage or even cell death.

Much research has been done to better understand quantum dot synthesis [6], growth [7] and modification [1]. Recently, the interaction of quantum dots with biomolecules has attracted much interest and is being studied with various methods, such as atomic force microscopy, gel electrophoresis, dynamic light scattering, size-exclusion high-performance liquid chromatography, circular dichroism spectroscopy and fluorescence correlation spectroscopy [7-11]. It has been shown that interaction with biological molecules can enhance the optical properties and stability of quantum dots [12-14] or, conversely, lead to their degradation [15]. Serum albumin is one of the most studied proteins.
It is the most abundant protein in blood plasma and plays a key role in the transport of a large number of metabolites, endogenous ligands, fatty acids, bilirubin, hormones, anesthetics and other commonly used drugs.

In this study, we investigated the effect of the interaction between bovine serum albumin (BSA) and water-soluble CdTe quantum dots in aqueous solutions using microscopy and spectroscopy methods.

# Materials and Methods

Quantum dot solutions were prepared by dissolving CdTe quantum dots coated with thioglycolic acid (λ~PL~ = 550 ± 5 nm, PlasmaChem GmbH, Germany) in deionized water (pH ≈ 6) or saline (0.9% NaCl solution, pH ≈ 5.6). Experiments with protein were performed by adding a small amount of concentrated bovine serum albumin (BSA, fraction V, M = 69,000 g/mol, Sigma, Germany) solution in saline to the quantum dot solution.

Spectral measurements were performed immediately after preparation of the solutions. Absorbance spectra were measured with a Varian Cary Win UV absorption spectrometer (Varian Inc., Australia). Photoluminescence spectra were measured with Varian Cary Eclipse (Varian Inc., Australia) and PerkinElmer LS 50B (PerkinElmer, USA) fluorimeters. The photoluminescence excitation wavelength was 405 nm; excitation slits were 5 nm, and emission slits were 5 and 4 nm for the Varian Cary Eclipse and the PerkinElmer LS 50B, respectively. Measurements were taken in 1-cm path length quartz cells (Hellma, Germany). Samples for atomic force microscopy measurements were prepared by casting a drop (40 μl) of solution onto freshly cleaved V-1 grade muscovite mica (SPI Supplies, USA) spinning at 1,000 rpm. An atomic force microscope (AFM), diInnova (Veeco Instruments Inc., USA), was used to take three-dimensional (3-D) images of the quantum dots. Measurements were performed in tapping mode in air, using RTESP7 cantilevers (Veeco Instruments Inc., USA). Samples for scanning transmission electron microscopy (STEM) measurements were prepared by casting a drop of solution on a TEM grid and drying it in ambient air. STEM images were obtained with a HITACHI SU8000 microscope (Hitachi High-Technologies Corporation, Japan). A Malvern Zetasizer Nano S (Malvern Instruments Ltd., England) was used to determine particle size distributions in the investigated solutions.

# Results

Normalized photoluminescence and absorption spectra of BSA and of CdTe quantum dots coated with thioglycolic acid are presented in Figure 1. BSA has an absorption band in the UV region at 280 nm, and its fluorescence band peaks at 338 nm. CdTe–TGA quantum dots absorb light over a wide spectral region, with an excitonic absorption band at 508 nm; the photoluminescence band of the quantum dot solution peaks at 550 nm. Titration of freshly prepared quantum dot solution with BSA showed that the addition of protein increases the photoluminescence intensity of the quantum dots (simultaneously, a slight (~4 nm) bathochromic shift of the excitonic absorption band is observed). This effect was observed until a BSA concentration of 10^-5^ mol/l was reached; a further increase in BSA concentration induced a slight decrease in photoluminescence intensity (Figure 2, curve A). A steady decrease in photoluminescence intensity was observed when the CdTe quantum dot solution was instead titrated with saline (Figure 2, curve B).
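The saline titration serves as a dilution control: normalizing curve A point-by-point against curve B isolates the BSA-specific effect reported as curve C. A minimal Python sketch of this correction is given below; the intensity values and volumes are illustrative placeholders, not the measured data.

```python
import numpy as np

# Hypothetical titration data: PL intensity of the QD solution titrated
# with BSA (curve A) and with plain saline (curve B, dilution control),
# both recorded at the same added volumes. Values are illustrative only.
added_volume_ul = np.array([0, 10, 20, 40, 80, 160])
pl_with_bsa = np.array([100.0, 108.0, 115.0, 120.0, 118.0, 114.0])  # curve A
pl_saline_only = np.array([100.0, 98.5, 97.0, 94.2, 89.0, 80.5])    # curve B

# Dilution-corrected interaction effect (curve C): each point of curve A
# divided by the matching dilution-control point, rescaled to the initial
# intensity. A value above 100 means BSA itself enhanced the PL.
pl_corrected = pl_with_bsa / pl_saline_only * pl_saline_only[0]

for v, c in zip(added_volume_ul, pl_corrected):
    print(f"{v:4d} ul added -> corrected PL = {c:6.1f} % of initial")
```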
The constant decrease along curve B was caused by the decreasing concentration of quantum dots (a dilution effect). Curve C (Figure 2) shows the change in CdTe quantum dot photoluminescence intensity caused by the CdTe–BSA interaction, with the dilution effect eliminated. The largest increase in photoluminescence intensity (120% of the initial value) was observed at a BSA:quantum dot ratio of 1.75:1.

The dynamics of the quantum dots' photoluminescence properties (photoluminescence intensity and band peak position) in solutions with and without BSA are presented in Figure 3. The photoluminescence intensity of the CdTe–TGA quantum dot solution (c = 6 × 10^-6^ mol/l) without bovine serum albumin increased for the first 144 h (Figure 3, curve A), while the photoluminescence band maximum position and width stayed intact. After 144 h, the photoluminescence intensity started to decrease, and the band began to narrow and shift to longer wavelengths. Simultaneously, the absorption slightly decreased (Figure 3, curve B). The decrease in photoluminescence intensity and the bathochromic shift of the photoluminescence band indicate aggregation of the quantum dots. After 9 days, a precipitate of large aggregates appeared in the quantum dot solution.

A sudden increase in photoluminescence intensity (by 27%) was observed after protein was added to the CdTe quantum dot solution in saline (Figure 3a). The photoluminescence intensity continued to increase for approximately 40 h. It then started decreasing, but the decrease was quite slow and on longer time scales became negligible (even after 6 months, no precipitate was observed). The photoluminescence band width and maximum position remained constant, and the absorption intensity slightly increased. This indicates that the core of the quantum dot remained intact.

Investigation of quantum dot size with the atomic force microscope (AFM) and the scanning transmission electron microscope (STEM) showed that in solution without protein the quantum dots aggregate (Figure 4a–d). An AFM image of quantum dots deposited from a solution that had been kept for 40 min is presented in Figure 4a. Many small round structures were present on the surface, ~2.5 nm in height and ~25 nm in width. The shape of colloidal quantum dots should be close to spherical (width approximately equal to height). The height of these structures is approximately equal to that of a single quantum dot, but the width is much larger. This can be explained by the AFM imaging artifact known as 'tip imaging' (tip convolution); it is also possible that these small structures are not single quantum dots but a few quantum dots attached to each other. An AFM image of quantum dots deposited from a solution kept for 5 h shows larger structures (Figure 4b). The heights and widths of these structures varied over a broader range: some small structures (height ~2.5 nm, width ~20 nm) could be seen, but larger structures (up to 9 nm in height and up to 70 nm in width) were also present. An image of a sample prepared from a solution kept for 24 h (Figure 4c) showed that the sizes of the structures had increased even further (height up to 13 nm, width up to 150 nm). In STEM images obtained 2 days after solution preparation (Figure 4d), structures of various sizes, much larger than single quantum dots, were seen.
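As a consistency check (not part of the original analysis), the ~2.5 nm single-dot height measured by AFM can be compared with the diameter predicted from the 508 nm excitonic absorption band. The sketch below assumes the widely cited empirical CdTe sizing polynomial of Yu et al. (Chem. Mater. 2003); if that assumption holds, the predicted diameter agrees with both the AFM height and the ~3 nm single dots later resolved by STEM.

```python
def cdte_diameter_nm(first_exciton_peak_nm: float) -> float:
    """Empirical CdTe sizing curve (assumed from Yu et al., Chem. Mater. 2003).

    Maps the wavelength of the first excitonic absorption maximum (nm)
    to an estimated particle diameter (nm).
    """
    x = first_exciton_peak_nm
    return (9.8127e-7 * x**3) - (1.7147e-3 * x**2) + (1.0064 * x) - 194.84

# Excitonic absorption band of the CdTe-TGA dots reported in the text.
print(f"Estimated diameter: {cdte_diameter_nm(508):.2f} nm")  # ~2.5 nm
```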
These observations show that CdTe–TGA quantum dots dissolved in aqueous solution are not stable; they aggregate and form large clusters.

An AFM image (Figure 4e) of a sample prepared from the CdTe quantum dot solution in saline with BSA (kept for 2 months) showed no large structures that could form a precipitate, but plenty of round structures 9–20 nm in height and 40–60 nm in width. The height of these structures (9–20 nm) is greater than that of a single quantum dot (~2.5 nm). BSA is a heart-shaped molecule with approximate dimensions of 8 nm × 8 nm × 3 nm [14]. The structures observed in the AFM image were somewhat larger than BSA molecules and could be CdTe quantum dots coated with BSA. In the STEM image, only small structures (single quantum dots) ~3 nm in diameter are seen; in larger assemblies, the quantum dots are separated from one another by ~3 nm (Figure 4f). Interaction of quantum dots with BSA could lead to the formation of an additional coating layer that prevents the quantum dots from aggregating. This additional layer is not visible in the STEM image because BSA consists of light atoms that give little STEM contrast.

Particle size distributions in the BSA solution, the CdTe–TGA quantum dot solution and the CdTe–TGA quantum dot solution with BSA are presented in Figure 5 (solutions kept for 1 week). The average diameter of particles in the BSA solution is 8.7 nm, which agrees well with the dimensions of the BSA molecule reported in the literature [16]. The particles present in the CdTe quantum dot solution are larger than 50 nm in diameter, much larger than a single quantum dot (approximately 2–3 nm); this shows that the quantum dots formed aggregates and confirms the results obtained with AFM and STEM. The particle size distribution in the CdTe–TGA solution with BSA shows a slightly larger average particle size (diameter ~12.5 nm) than in the BSA solution (diameter ~8.7 nm). This shows that CdTe–TGA quantum dots interact with BSA and form a quantum dot–protein complex approximately 12.5 nm in size.

# Discussion

Our proposed model explaining the spectral dynamics of CdTe–TGA quantum dots in aqueous solution with and without BSA is presented in Figure 6.

The dynamics of the photoluminescence properties of the investigated solutions (presented in Figure 3) show two phases: growth of the photoluminescence followed by its decrease. In the first phase, the photoluminescence of the quantum dots increased in both investigated solutions (with and without protein). Despite the large increase in photoluminescence, changes in the absorption spectrum were very small. During this phase, the photoluminescence band peak position and width remained constant. These changes indicate that the core of the quantum dot remains intact: core degradation would cause a blue shift of the photoluminescence band, while aggregation of quantum dots would cause a red shift. The change in photoluminescence intensity indicates that the properties of the quantum dot coating (or the coating itself) are changing: the molecules coating the core are rearranging, being replaced by other molecules, or being washed out. Theoretically, an increase in quantum dot photoluminescence intensity is explained by a decrease in the number or rates of non-radiative transitions.
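This relationship can be stated compactly. Writing $k_r$ for the radiative decay rate and $k_{nr}$ for the total non-radiative rate (notation introduced here for illustration; it does not appear in the original text), the photoluminescence quantum yield is

$$\Phi_{PL} = \frac{k_r}{k_r + k_{nr}},$$

so anything that lowers $k_{nr}$, such as passivation of surface trap states, raises $\Phi_{PL}$ without shifting the band position, consistent with the first-phase behavior described here.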
A decrease in the number of defects on the quantum dot surface would cause this effect [17]. Another process that can change the photoluminescence intensity of quantum dots is aggregation, which decreases the photoluminescence quantum yield. Slow dissolution (monomerization) of the quantum dot powder (aggregates) could therefore also cause an increase in photoluminescence intensity, because single quantum dots have a higher photoluminescence quantum yield than the aggregated form. However, a more detailed investigation of the absorption spectrum dynamics during the first day after solution preparation contradicts this explanation. The absorption of quantum dots dissolved in deionized water decreases during the first day, a decrease that can be explained by aggregation of the quantum dots. Aggregation leads to a decrease in absorption intensity and to a red shift, broadening, and intensity decrease of the photoluminescence band. But in the first phase, the width and wavelength of the photoluminescence band do not change, whereas the photoluminescence intensity increases. These changes are therefore caused not by aggregation but by changes in the quantum dot coating. CdTe–TGA quantum dots are fluorescent nanoparticles composed of a CdTe core and a TGA coating; rearrangement of the coating can reduce the number of defects on the quantum dot surface and increase the photoluminescence quantum yield. The sudden increase in the photoluminescence band intensity after adding BSA to the solution shows that the interaction of quantum dots with BSA strongly increases the photoluminescence quantum yield. Photoluminescence decay measurements presented in the literature [18] confirm this result: the photoluminescence decay of quantum dots with BSA is tri-exponential, while the decay of quantum dots alone is described by four exponents. This shows that the addition of protein eliminates one excitation relaxation path. Photoluminescence lifetime analysis shows that the fastest relaxation component (τ~1~ = 3.4 ns) disappears [18]. The fastest relaxation component is caused by defects of the quantum dots [19]; its elimination leads to an increase in the photoluminescence quantum yield. Thus, the increase in photoluminescence intensity in the first phase is caused by the rearrangement of TGA molecules (Figure 6IA, IB).

In the second phase, the photoluminescence of the quantum dots starts to decrease. TGA molecules are not covalently bound to the CdTe core (they are attached to it by coordinating bonds [20]) and probably wash out slowly (Figure 6IIA, IIIB). This process increases the number of defects on the quantum dot surface and leads to a decrease in the photoluminescence quantum yield. AFM and STEM images (Figure 4a–d) show that quantum dots in aqueous media aggregate. The TGA coating is what makes CdTe quantum dots water-soluble; washing out of the coating decreases their water solubility, increases the aggregation rate (Figure 6IIIA) and leads to the formation of a precipitate (Figure 6IVA). In the second phase, the effects of aggregation (a decrease in photoluminescence intensity and a red shift of the photoluminescence band) are seen in the quantum dot solution without protein (Figure 3a).

The second phase is different for the quantum dot solution with protein: the photoluminescence decreases slowly and after some time stabilizes, and the position of the photoluminescence band does not change during this phase.
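The lifetime analysis cited above rests on fitting the measured decay with a sum of exponentials. The sketch below is illustrative only: the data are synthetic, and the component count and starting values are assumptions, not the published fit.

```python
import numpy as np
from scipy.optimize import curve_fit

def multi_exp(t, *params):
    """Sum of exponential decays: params = (a1, tau1, a2, tau2, ...)."""
    amps, taus = params[0::2], params[1::2]
    return sum(a * np.exp(-t / tau) for a, tau in zip(amps, taus))

# Synthetic decay resembling a tri-exponential response (times in ns).
t = np.linspace(0, 100, 500)
rng = np.random.default_rng(0)
signal = multi_exp(t, 0.5, 3.4, 0.3, 15.0, 0.2, 45.0)
noisy = signal + rng.normal(0, 0.005, t.size)

# Fit three components; p0 holds assumed (amplitude, tau) starting guesses.
p0 = [0.4, 2.0, 0.3, 10.0, 0.2, 40.0]
popt, _ = curve_fit(multi_exp, t, noisy, p0=p0)
for i in range(0, len(popt), 2):
    print(f"component {i//2 + 1}: amplitude {popt[i]:.2f}, tau {popt[i+1]:.1f} ns")
```

Comparing fits with three versus four components (for example, by residuals or an information criterion) is how the disappearance of the fastest component on BSA binding would be established.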
This constancy shows that quantum dots in the presence of the protein do not aggregate: the protein prevents both the degradation of the quantum dot coating and the aggregation of the quantum dots.

# Conclusions

This study showed that water-soluble CdTe–TGA quantum dots in aqueous solutions are not stable. Spectroscopic and atomic force microscopy measurements showed that the quantum dots aggregate in solution, and a precipitate was observed 9 days after solution preparation. BSA interacts with CdTe–TGA quantum dots, prevents them from aggregating, increases their photoluminescence quantum yield and makes them stable. This effect is achieved by the formation of a new quantum dot coating layer.

## Acknowledgements

This work was supported by the project "Multifunctional nanoparticles for specific non-invasive early diagnostics and treatment of cancer" (No. 2004-LT0036-IP-1NOR).

abstract: The complete sequence of the 1,267,782 bp genome of *Wolbachia pipientis w*Mel, an obligate intracellular bacterium of Drosophila melanogaster, has been determined. *Wolbachia*, which are found in a variety of invertebrate species, are of great interest due to their diverse interactions with different hosts, which range from many forms of reproductive parasitism to mutualistic symbioses. Analysis of the *w*Mel genome, in particular phylogenomic comparisons with other intracellular bacteria, has revealed many insights into the biology and evolution of *w*Mel and of *Wolbachia* in general. For example, the *w*Mel genome is unique among sequenced obligate intracellular species in being both highly streamlined and rich in repetitive DNA and mobile DNA elements. This observation, coupled with multiple evolutionary reconstructions, suggests that natural selection is somewhat inefficient in *w*Mel, most likely owing to the occurrence of repeated population bottlenecks. Genome analysis predicts many metabolic differences from the closely related *Rickettsia* species, including the presence of intact glycolysis and purine synthesis, which may compensate for an inability to obtain ATP directly from its host, as *Rickettsia* can. Other discoveries include the apparent inability of *w*Mel to synthesize lipopolysaccharide and the presence of the most genes encoding proteins with ankyrin repeat domains of any prokaryotic genome yet sequenced. Despite the ability of *w*Mel to infect the germline of its host, we find no evidence for either recent lateral gene transfer between *w*Mel and D. melanogaster or older transfers between *Wolbachia* and any host. Evolutionary analysis further supports the hypothesis that mitochondria share a common ancestor with the α-Proteobacteria, but shows little support for the grouping of mitochondria with species in the order Rickettsiales. With the availability of the complete genomes of both species and excellent genetic tools for the host, the *w*Mel–D.
author: Martin Wu; Ling V Sun; Jessica Vamathevan; Markus Riegler; Robert Deboy; Jeremy C Brownlie; Elizabeth A McGraw; William Martin; Christian Esser; Nahal Ahmadinejad; Christian Wiegand; Ramana Madupu; Maureen J Beanan; Lauren M Brinkac; Sean C Daugherty; A. Scott Durkin; James F Kolonay; William C Nelson; Yasmin Mohamoud; Perris Lee; Kristi Berry; M. Brook Young; Teresa Utterback; Janice Weidman; William C Nierman; Ian T Paulsen; Karen E Nelson; Hervé Tettelin; Scott L O'Neill; Jonathan A Eisen
date: 2004-03
institute: **1**The Institute for Genomic Research, Rockville, Maryland, United States of America; **2**Department of Epidemiology and Public Health, Yale University School of Medicine, New Haven, Connecticut, United States of America; **3**Department of Zoology and Entomology, School of Life Sciences, The University of Queensland, St Lucia, Queensland, Australia; **4**Institut für Botanik III, Heinrich-Heine Universität, Düsseldorf, Germany
references:
title: Phylogenomics of the Reproductive Parasite *Wolbachia pipientis w*Mel: A Streamlined Genome Overrun by Mobile Genetic Elements

# Introduction

*Wolbachia* are intracellular gram-negative bacteria that are found in association with a variety of invertebrate species, including insects, mites, spiders, terrestrial crustaceans, and nematodes. *Wolbachia* are transovarially transmitted from females to their offspring and are extremely widespread, having been found to infect 20%–75% of invertebrate species sampled (Jeyaprakash and Hoy 2000; Werren and Windsor 2000). *Wolbachia* are members of the Rickettsiales order of the α-subdivision of the Proteobacteria phylum and belong to the Anaplasmataceae family, with members of the genera *Anaplasma*, *Ehrlichia*, *Cowdria*, and *Neorickettsia* (Dumler et al. 2001). Six major clades (A–F) of *Wolbachia* have been identified to date (Lo et al. 2002): A, B, E, and F have been reported from insects, arachnids, and crustaceans; C and D from filarial nematodes.

*Wolbachia*–host interactions are complex and range from mutualistic to pathogenic, depending on the combination of host and *Wolbachia* involved. Most striking are the various forms of "reproductive parasitism" that serve to alter host reproduction in order to enhance the transmission of this maternally inherited agent. These include parthenogenesis (infected females reproducing in the absence of mating to produce infected female offspring), feminization (infected males being converted into functional phenotypic females), male-killing (infected male embryos being selectively killed), and cytoplasmic incompatibility (in its simplest form, the developmental arrest of offspring of uninfected females when mated to infected males) (O'Neill et al. 1997a).

*Wolbachia* have been hypothesized to play a role in host speciation through the reproductive isolation they generate in infected hosts (Werren 1998). They also provide an intriguing array of evolutionary solutions to the genetic conflict that arises from their uniparental inheritance. These solutions represent alternatives to classical mutualism and are often of more benefit to the symbiont than the host that is infected (Werren and O'Neill 1997).
From an applied perspective, it has been proposed that *Wolbachia* could be utilized to either suppress pest insect populations or sweep desirable traits into pest populations (e.g., the inability to transmit disease-causing pathogens) (Sinkins and O'Neill 2000). Moreover, they may provide a new approach to the control of human and animal filariasis. Since the nematode worms that cause filariasis have an obligate symbiosis with mutualistic *Wolbachia*, treatment of filariasis with simple antibiotics that target *Wolbachia* has been shown to eliminate microfilaria production as well as ultimately killing the adult worm (Taylor et al. 2000; Taylor and Hoerauf 2001).

Despite their common occurrence and major effects on host biology, little is currently known about the molecular mechanisms that mediate the interactions between *Wolbachia* and their invertebrate hosts. This is partly due to the difficulty of working with an obligate intracellular organism that is difficult to culture and hard to obtain in quantity. Here we report the completion and analysis of the genome sequence of Wolbachia pipientis *w*Mel, a strain from the A supergroup that naturally infects Drosophila melanogaster (Zhou et al. 1998).

# Results/Discussion

## Genome Properties

The *w*Mel genome was determined to be a single circular molecule of 1,267,782 bp with a G+C content of 35.2%. This assembly is very similar to the genetic and physical map of the closely related strain *w*MelPop (Sun et al. 2003). The genome does not exhibit the GC skew pattern typical of some prokaryotic genomes (Figure 1), which show two major shifts, one near the origin and one near the terminus of replication. Therefore, identification of a putative origin of replication and the assignment of basepair 1 were based on the location of the *dnaA* gene. Major features of the genome and of the annotation are summarized in Table 1 and Figure 1.

###### *w*Mel Genome Features

## Repetitive and Mobile DNA

The most striking feature of the *w*Mel genome is the presence of very large amounts of repetitive DNA and DNA corresponding to mobile genetic elements, which is unique for an intracellular species. In total, 714 repeats of greater than 50 bp in length, which can be divided into 158 distinct families (Table S1), were identified (a simplified sketch of this kind of exact-repeat scan is given below). Most of the repeats are present in only two copies in the genome, although 39 are present in three or more copies, with the most abundant repeat being found in 89 copies. We focused our analysis on the 138 repeats of greater than 200 bp (Table 2). These were divided into 19 families based upon sequence similarity to each other, and were found to make up 14.2% of the *w*Mel genome. Of these repeat families, 15 correspond to likely mobile elements, including seven types of insertion sequence (IS) elements, four likely retrotransposons, and four families without detectable similarity to known elements but with many hallmarks of mobile elements (flanked by inverted repeats, present in multiple copies) (Table 2). One of these new elements (repeat family 8) is present in 45 copies in the genome. It is likely that many of these elements are not able to autonomously transpose, since many of the transposase genes are apparently inactivated by mutations or the insertion of other transposons (Table S2).
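The repeat census above was generated with RepeatFinder, which builds on the REPuter maximal-repeat machinery (see Materials and Methods). As a simplified stand-in, this sketch finds left-maximal exact repeat pairs of at least 50 bp by k-mer seeding and rightward extension; the toy genome and the restriction to exact matches are assumptions for illustration.

```python
# Simplified exact-repeat scan (k-mer seeding plus rightward extension,
# reporting only left-maximal pairs so each repeat is counted once). The toy
# genome is an assumption; RepeatFinder/REPuter additionally handle degenerate
# repeats, reverse complements and family merging.
import random
from collections import defaultdict

def exact_repeats(genome, minlen=50):
    """Yield (pos1, pos2, length) for left-maximal exact repeat pairs."""
    k = minlen
    seeds = defaultdict(list)
    for i in range(len(genome) - k + 1):
        seeds[genome[i:i + k]].append(i)
    for positions in seeds.values():
        for n, a in enumerate(positions):
            for b in positions[n + 1:]:
                if a > 0 and b > 0 and genome[a - 1] == genome[b - 1]:
                    continue  # not left-maximal: an earlier seed covers this pair
                length = k
                while b + length < len(genome) and genome[a + length] == genome[b + length]:
                    length += 1
                yield a, b, length

random.seed(0)
block = "".join(random.choices("ACGT", k=60))          # a 60 bp repeat unit
genome = block + "".join(random.choices("ACGT", k=200)) + block
for a, b, length in exact_repeats(genome):
    print(f"repeat of {length} bp at positions {a} and {b}")
```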
However, some elements are apparently still active, since there are transposons inserted into at least nine genes (Table S2), and the copy number of some repeats appears to be variable between *Wolbachia* strains (M. Riegler et al., personal communication). Thus, many of these repetitive elements may be useful markers for strain discrimination. In addition, the mobile elements likely contribute to generating the diversity of phenotypically distinct *Wolbachia* strains (e.g., mod^−^ strains [McGraw et al. 2001]) by altering or disrupting gene function (Table S2).

###### *w*Mel DNA Repeats of Greater than 200 bp

Three prophage elements are present in the genome. One is a small pyocin-like element made up of nine genes (WD00565–WD00575). The other two are closely related to and exhibit extensive gene order conservation with the WO phage described from *Wolbachia* sp. *w*Kue (Masui et al. 2001) (Figure 2). Thus, we have named them *w*Mel WO-A and WO-B, based upon their location in the genome. *w*Mel WO-B has undergone a major rearrangement and translocation, suggesting it is inactive. Phylogenetic analysis indicates that *w*Mel WO-B is more closely related to the *w*Kue WO than to *w*Mel WO-A (Figure S1). Thus, *w*Mel WO-A likely represents either a separate insertion event in the *Wolbachia* lineage or a duplication that occurred prior to the separation of the *w*Mel and *w*Kue lineages. Phylogenetic analysis also confirms the proposed mosaic nature of the WO phage (Masui et al. 2001), with one block being closely related to lambdoid phage and another to P2 phage (data not shown).

## Genome Structure: Rearrangements, Duplications, and Deletions

The irregular pattern of GC skew in *w*Mel is likely due in part to intragenomic rearrangements associated with the many DNA repeat elements. Comparison with a large contig from a *Wolbachia* species that infects Brugia malayi is consistent with this (Ware et al. 2002) (Figure 3). While only translocations are seen in this plot, genetic comparisons reveal that inversions also occur between strains (Sun et al. 2003), which is consistent with previous studies of prokaryotic genomes that have found that the most common large-scale rearrangements are inversions that are symmetric around the origin of DNA replication (Eisen et al. 2000). The occurrence of frequent rearrangement events during *Wolbachia* evolution is supported by the absence of any large-scale conserved gene order with *Rickettsia* genomes. The rearrangements in *Wolbachia* likely correspond with the introduction and massive expansion of the repeat element families, which could serve as sites for intragenomic recombination, as has been shown to occur for some other bacterial species (Parkhill et al. 2003). The rearrangements in *w*Mel may have fitness consequences, since several classes of genes often found in clusters are generally scattered throughout the *w*Mel genome (e.g., ABC transporter subunits, Sec secretion genes, rRNA genes, F-type ATPase genes).

Although the common ancestor of *Wolbachia* and *Rickettsia* likely already had a reduced, streamlined genome, *w*Mel has lost additional genes since that time (Table S3). Many of these recent losses are of genes involved in cell envelope biogenesis in other species, including most of the machinery for producing lipopolysaccharide (LPS) components and the alanine racemase that supplies D-alanine for cell wall synthesis.
In addition, some other genes that may have once been involved in this process are present in the genome but defective (e.g., mannose-1-phosphate guanylyltransferase, which is split into two coding sequences [CDSs], WD1224 and WD1227, by an IS5 element) and are likely in the process of being eliminated. The loss of cell envelope biogenesis genes has also occurred during the evolution of the *Buchnera* endosymbionts of aphids (Shigenobu et al. 2000; Moran and Mira 2001). Thus, *w*Mel and *Buchnera* have lost some of the same genes separately during their reductive evolution. Such convergence means that attempts to use gene content to infer evolutionary relatedness need to be interpreted with caution. In addition, since *Anaplasma* and *Ehrlichia* also apparently lack genes for LPS production (Lin and Rikihisa 2003), it is likely that the common ancestor of *Wolbachia*, *Ehrlichia*, and *Anaplasma* was unable to synthesize LPS. Thus, the reports that *Wolbachia*-derived LPS-like compounds are involved in the immunopathology of filarial nematode disease in mammals (Taylor 2002) indicate either that these *Wolbachia* have acquired genes for LPS synthesis or that the reported LPS-like compounds are not homologous to LPS.

Despite evident genome reduction in *w*Mel, and in contrast to most small-genomed intracellular species, gene duplication appears to have continued: over 50 gene families have apparently expanded in the *w*Mel lineage relative to all other species (Table S4). Many of the pairs of duplicated genes are encoded next to each other in the genome, suggesting that they arose by tandem duplication events and may simply reflect transient duplications in evolution (deletion is common when there are tandem arrays of genes). Many others are components of mobile genetic elements, indicating that these elements have expanded significantly after entering the *Wolbachia* evolutionary lineage. Other duplications that could contribute to the unique biological properties of *w*Mel include that of the mismatch repair gene *mutL* (see below) and those of many hypothetical and conserved hypothetical proteins.

One duplication of particular interest is that of *wsp*, which is a standard gene for strain identification and phylogenetic reconstruction in *Wolbachia* (Zhou et al. 1998). In addition to the previously described *wsp* (WD0159), *w*Mel encodes two *wsp* paralogs (WD0009 and WD0489), which we designate as *wspB* and *wspC*, respectively. While these paralogs are highly divergent from *wsp* (protein identities of 19.7% and 23.5%, respectively) and do not amplify using the standard *wsp* PCR primers (Braig et al. 1998; Zhou et al. 1998), their presence could lead to some confusion in the classification and identification of *Wolbachia* strains. This has apparently occurred in one study of *Wolbachia* strain *w*KueYO, for which the reported *wsp* gene (GenBank AB045235) is actually an ortholog of *wspB* (99.8% sequence identity and located at the end of the *virB* operon [Masui et al. 2000]) and not an ortholog of the *wsp* gene.
Considering that the *wsp* gene has been extremely informative for discriminating between strains of *Wolbachia*, we designed PCR primers to the *w*Mel *wspB* gene to amplify and then sequence the orthologs from the related *w*Ri and *w*AlbB *Wolbachia* strains from Drosophila simulans and Aedes albopictus, respectively, as well as from the *Wolbachia* strain that infects the filarial nematode Dirofilaria immitis, to determine the potential utility of this locus for strain discrimination. A comparison of genetic distances between the *wsp* and *wspB* genes for these different taxa indicates that overall the *wspB* gene appears to be evolving at a faster rate than *wsp* and, as such, may be a useful additional marker for discriminating between closely related *Wolbachia* strains (Table S5).

## Inefficiency of Selection in *w*Mel

The fraction of the genome that is repetitive DNA and the fraction that corresponds to mobile genetic elements are among the highest for any prokaryotic genome. This is particularly striking compared to the genomes of other obligate intracellular species, such as *Buchnera*, *Rickettsia*, *Chlamydia*, and *Wigglesworthia*, which all have very low levels of repetitive DNA and mobile elements. The recently sequenced genome of the intracellular pathogen Coxiella burnetii (Seshadri et al. 2003) is both streamlined and moderately repetitive, although it carries much less repetitive DNA than *w*Mel. The paucity of repetitive DNA in these and other intracellular species is thought to be due to a combination of lack of exposure to other species, thereby limiting the introduction of mobile elements, and genome streamlining (Mira et al. 2001; Moran and Mira 2001; Frank et al. 2002). We examined the *w*Mel genome to try to understand the origin of the repetitive and mobile DNA and to explain why such repetitive/mobile DNA is present in *w*Mel but not in other streamlined intracellular species.

We propose that the mobile DNA in *w*Mel was acquired some time after the separation of the *Wolbachia* and *Rickettsia* lineages but before the radiation of the *Wolbachia* group. The acquisition of these elements after the separation of the *Wolbachia* and *Rickettsia* lineages is suggested by the fact that most do not have any obvious homologous sequences in the genomes of other α-Proteobacteria, including the closely related *Rickettsia* spp. Additional evidence for some acquisition of foreign DNA after the *Wolbachia*–*Rickettsia* split comes from phylogenetic analysis of those genes present in *w*Mel but not in the two sequenced rickettsial genomes (see Table S3; unpublished data). The acquisition prior to the radiation of *Wolbachia* is suggested by two lines of evidence. First, many of the elements are found in the genome of the distantly related *Wolbachia* of the nematode B. malayi (see Figure 3; unpublished data). In addition, genome analysis reveals that these elements do not have significantly anomalous nucleotide composition or codon usage compared to the rest of the genome. In fact, there are only four regions of the genome with significantly anomalous composition, comprising in total only approximately 17 kbp of DNA (Table 3; a sliding-window sketch of this kind of compositional screen is given below).
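Screens for compositionally anomalous regions such as those in Table 3 usually compare windowed base composition against the genome-wide background. A minimal sliding-window chi-square sketch follows; the window size, step and cutoff (chi-square with 3 degrees of freedom at p = 0.001), as well as the simulated GC-rich island, are illustrative assumptions, not the settings behind Table 3.

```python
# Sliding-window compositional screen; the cutoff is the chi-square critical
# value for 3 degrees of freedom at p = 0.001. Window, step, cutoff and the
# simulated GC-rich island are illustrative assumptions.
import random
from collections import Counter

def anomalous_windows(genome, window=2000, step=500, cutoff=16.27):
    """Yield (start, chi2) for windows deviating from genome-wide base frequencies."""
    background = Counter(genome)
    freq = {b: background[b] / len(genome) for b in "ACGT"}
    for start in range(0, len(genome) - window + 1, step):
        observed = Counter(genome[start:start + window])
        chi2 = sum((observed[b] - window * freq[b]) ** 2 / (window * freq[b]) for b in "ACGT")
        if chi2 > cutoff:
            yield start, chi2

random.seed(4)
bg = "".join(random.choices("ACGT", k=20000))
genome = bg[:10000] + "".join(random.choices("GC", k=2000)) + bg[12000:]
flagged = list(anomalous_windows(genome))
print(f"{len(flagged)} windows flagged; first: {flagged[0]}")
```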
The lack of anomalous composition suggests either that any foreign DNA in *w*Mel was acquired long enough ago to allow it to "ameliorate" and become compositionally similar to endogenous *Wolbachia* DNA (Lawrence and Ochman 1997, 1998) or that any foreign DNA that is present was acquired from organisms with similar composition to endogenous *w*Mel genes. Owing to their potential effects on genome evolution (insertional mutagenesis, catalyzing genome rearrangements), we propose that the acquisition and maintenance of these repetitive and mobile elements by *w*Mel have played a key role in shaping the evolution of *Wolbachia*.

###### Regions of Anomalous Nucleotide Composition in the *w*Mel Genome

It is likely that much of the mobile/repetitive DNA was introduced via phage, given that three prophage elements are present; experimental studies have shown active phage in some *Wolbachia* (Masui et al. 2001), and *Wolbachia* superinfections occur in many hosts (e.g., Jamnongluk et al. 2002), which would allow phage to move between strains. Whatever the mechanism of introduction, the persistence of the repetitive elements in *w*Mel in the face of apparently strong pressures for streamlining is intriguing. One explanation is that *w*Mel may be getting a steady infusion of mobile elements from other *Wolbachia* strains to counteract the elimination of elements by selection for genome streamlining. This would explain the absence of anomalous nucleotide composition of the elements. However, we believe that a major contributing factor to the presence of all the repetitive/mobile DNA in *w*Mel is that *w*Mel, and possibly *Wolbachia* in general, suffer a general inefficiency of natural selection relative to other species. This inefficiency would limit the ability to eliminate repetitive DNA. A general inefficiency of natural selection (especially purifying selection) has been suggested previously for intracellular bacteria, based in part on observations that these bacteria have higher evolutionary rates than free-living bacteria (e.g., Moran 1996). We also find a higher evolutionary rate for *w*Mel than that of the closely related intracellular *Rickettsia*, which themselves have higher rates than free-living α-Proteobacteria (Figure 4). Additionally, codon bias in *w*Mel appears to be driven more by mutation or drift than by selection (Figure S2; the ENc-versus-GC3 diagnostic behind this plot is sketched below), as has been reported for *Buchnera* species and was suggested to be due to inefficient purifying selection (Wernegreen and Moran 1999). Such inefficiencies of natural selection are generally due to an increase in the relative contribution of genetic drift and mutation as compared to natural selection (Eiglmeier et al. 2001; Lawrence 2001; Parkhill et al. 2001). Below we discuss different possible explanations for the inefficiency of selection in *w*Mel, especially in comparison to other intracellular bacteria.

Low rates of recombination, such as occur in centromeres and the human Y chromosome, can lead to inefficient selection because of the linkage among genes. This has been suggested to be occurring in *Buchnera* species because these species do not encode homologs of RecA, the key protein in homologous recombination in most species (Shigenobu et al. 2000). The absence of recombination in *Buchnera* is supported by the lack of genome rearrangements in their recent evolution (Tamas et al. 2002). Additionally, there is apparently little or no gene flow into *Buchnera* strains.
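Returning to the codon-bias point above: the ENc-versus-GC3 diagnostic of Figure S2 compares each gene's effective number of codons against Wright's null expectation, ENc = 2 + s + 29/(s^2 + (1 − s)^2), where s is the GC content at third codon positions. A minimal sketch of the null curve follows; the toy coding sequence is an assumption, and computing the observed ENc (which needs full synonymous-family bookkeeping) is omitted.

```python
# GC3 and Wright's expected ENc under mutation pressure alone. The toy coding
# sequence is an assumption; the observed ENc calculation is omitted.
def gc3(cds):
    """GC fraction at third codon positions of an in-frame CDS."""
    thirds = cds[2::3]
    return sum(base in "GC" for base in thirds) / len(thirds)

def expected_enc(s):
    """Wright's null curve: ENc expected if only GC3 (s) drives codon bias."""
    return 2 + s + 29.0 / (s ** 2 + (1 - s) ** 2)

cds = "ATGGCTGCCGCAGGGTAA"  # toy in-frame sequence
s = gc3(cds)
print(f"GC3 = {s:.2f}; expected ENc under mutation alone = {expected_enc(s):.1f}")
```

Genes sitting well below the curve are candidates for translational selection; *w*Mel genes hugging the curve is what points to mutation and drift dominating its codon usage.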
In contrast to *Buchnera*, *w*Mel encodes the necessary machinery for recombination, including RecA (Table S6), and has experienced both extensive intragenomic homologous recombination and the introduction of foreign DNA. Therefore, the unusual genome features of *w*Mel are unlikely to be due to low levels of recombination.

Another possible explanation for inefficient selection is a high mutation rate. It has been suggested that the higher evolutionary rates in intracellular bacteria are the result of high mutation rates that are in turn due to the loss of genes for DNA repair processes (e.g., Itoh et al. 2002). This is likely not the case in *w*Mel, since its genome encodes proteins corresponding to a broad suite of DNA repair pathways, including mismatch repair, nucleotide excision repair, base excision repair, and homologous recombination (Table S6). The only noteworthy DNA repair gene absent from *w*Mel and present in the more slowly evolving *Rickettsia* is *mfd*, which is involved in targeting DNA repair to the transcribed strand of actively transcribing genes in other species (Selby et al. 1991). However, this absence is unlikely to contribute significantly to the increased evolutionary rate in *w*Mel, since defects in *mfd* do not lead to large increases in mutation rates in other species (Witkin 1994). The presence of mismatch repair genes (homologs of *mutS* and *mutL*) in *w*Mel is particularly relevant, since this pathway is one of the key steps in regulating mutation rates in other species. In fact, *w*Mel is the first bacterial species found to have two *mutL* homologs. Overall, examination of the predicted DNA repair capabilities of bacteria (Eisen and Hanawalt 1999) suggests that the connection between evolutionary rates in intracellular species and the loss of DNA repair processes is spurious. While many intracellular species have lost DNA repair genes in their recent evolution, different species have lost different genes, and some, such as *w*Mel and *Buchnera* spp., have kept the genes that likely regulate mutation rates. In addition, some free-living species without high evolutionary rates have lost some of the same pathways lost in intracellular species, while many free-living species have lost key pathways, resulting in high mutation rates (e.g., Helicobacter pylori has apparently lost mismatch repair [Eisen 1997, 1998b; Bjorkholm et al. 2001]). Given that intracellular species tend to have small genomes and have lost genes from every type of biological process, it is not surprising that many of them have lost DNA repair genes as well.

We believe that the most likely explanations for the inefficiency of selection in *w*Mel involve population-size-related factors, such as genetic drift and the occurrence of population bottlenecks. Such factors have also been shown to likely explain the high evolutionary rates in other intracellular species (Moran 1996; Moran and Mira 2001; van Ham et al. 2003). *Wolbachia* likely experience frequent population bottlenecks, both during transovarial transmission (Boyle et al. 1993) and during cytoplasmic incompatibility-mediated sweeps through host populations (a toy simulation of this effect follows below).
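The bottleneck argument can be made concrete with a toy Wright–Fisher simulation: a mildly deleterious allele (standing in for a mobile-element insertion) is reliably purged at large population size but often drifts to fixation when recurrent bottlenecks keep the effective size small. All parameters here are illustrative assumptions, not estimates for *Wolbachia*.

```python
# Toy Wright-Fisher model: fixation probability of a mildly deleterious
# allele at different population sizes. The selection coefficient, starting
# frequency, population sizes and trial count are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)

def fixation_probability(N, s, p0=0.05, trials=500):
    """Fraction of replicate populations in which the deleterious allele fixes."""
    fixed = 0
    for _ in range(trials):
        p = p0
        while 0 < p < 1:
            w = p * (1 - s) / (p * (1 - s) + (1 - p))  # frequency after selection
            p = rng.binomial(N, w) / N                 # drift: resample N offspring
        fixed += p == 1
    return fixed / trials

for N in (10, 100, 1000):
    print(f"N = {N:4d}: P(fix deleterious allele, s = 0.01) ~ {fixation_probability(N, 0.01):.3f}")
```

At N = 10, drift overwhelms a 1% fitness cost and the insertion fixes at close to the neutral rate; at N = 1000 it is almost always purged, which is the qualitative behavior the bottleneck hypothesis relies on.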
The extent of these transmission bottlenecks may be greater than in other intracellular bacteria, which would explain why *w*Mel has both more repetitive and mobile DNA than other such species and a higher evolutionary rate than even the related *Rickettsia* spp. Additional genome sequences from other *Wolbachia* will reveal whether this is a feature of all *Wolbachia* or only certain strains.

## Mitochondrial Evolution

There is a general consensus in the evolutionary biology literature that the mitochondria evolved from bacteria in the α-subgroup of the Proteobacteria phylum (e.g., Lang et al. 1999). Analysis of complete mitochondrial and bacterial genomes has very strongly supported this hypothesis (Andersson et al. 1998, 2003; Muller and Martin 1999; Ogata et al. 2001). However, the exact position of the mitochondria within the α-Proteobacteria is still debated. Many studies have placed them in or near the Rickettsiales order (Viale and Arakaki 1994; Gupta 1995; Sicheritz-Ponten et al. 1998; Lang et al. 1999; Bazinet and Rollins 2003). Some studies have further suggested that mitochondria are a sister taxon to the *Rickettsia* genus within the Rickettsiaceae family, and thus more closely related to *Rickettsia* spp. than to species in the Anaplasmataceae family such as *Wolbachia* (Karlin and Brocchieri 2000; Emelyanov 2001a, 2001b, 2003a, 2003b).

In our analysis of complete genomes, including that of *w*Mel, the first non-*Rickettsia* member of the Rickettsiales order to have its genome completed, we find support for a grouping of *Wolbachia* and *Rickettsia* to the exclusion of the mitochondria, but not for placing the mitochondria within the Rickettsiales order (Figure 5A and 5B; Table S7; Table S8). Specifically, phylogenetic trees of a concatenated alignment of 32 proteins show strong support with all methods (see Table S7) for common branching of (i) mitochondria, (ii) *Rickettsia* with *Wolbachia*, (iii) the free-living α-Proteobacteria, and (iv) mitochondria within the α-Proteobacteria. Since amino acid content bias was very severe in these datasets, protein LogDet analyses, which can correct for the bias (sketched below), were also performed. In LogDet analyses of the concatenated protein alignment, both including and excluding highly biased positions, mitochondria usually branched basal to the *Wolbachia*–*Rickettsia* clade, but never specifically with *Rickettsia* (see Table S7). In addition, in phylogenetic studies of individual genes, there was no consistent phylogenetic position of mitochondrial proteins with any particular species or group within the α-Proteobacteria (see Table S8), although support for a specific branch uniting the two *Rickettsia* species with *Wolbachia* was quite strong. Eight of the proteins from mitochondrial genomes (YejW, SecY, Rps8, Rps2, Rps10, RpoA, Rpl15, Rpl32) do not even branch within the α-Proteobacteria, although these genes almost certainly were encoded in the ancestral mitochondrial genome (Lang et al. 1997).

This analysis of mitochondrial and α-Proteobacterial genes reinforces the view that ancient protein phylogenies are inherently prone to error, most likely because current models of phylogenetic inference do not accurately reflect the true evolutionary processes underlying the differences observed in contemporary amino acid sequences (Penny et al. 2001).
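The LogDet (paralinear) distance invoked above corrects for exactly the kind of compositional bias that plagued these datasets. A minimal sketch for 4-state nucleotide data follows; the paper's analyses used the analogous 20-state protein version, and the toy sequences are assumptions chosen to keep the divergence matrix nonsingular.

```python
# LogDet (paralinear) distance for two aligned, gap-free nucleotide sequences.
# The protein LogDet used in the paper is the same construction over 20
# states; the toy sequences are assumptions.
import numpy as np

def logdet_distance(seq1, seq2, alphabet="ACGT"):
    k = len(alphabet)
    index = {c: i for i, c in enumerate(alphabet)}
    F = np.zeros((k, k))
    for a, b in zip(seq1, seq2):
        F[index[a], index[b]] += 1
    F /= F.sum()                              # joint divergence matrix
    fx, fy = F.sum(axis=1), F.sum(axis=0)     # marginal base frequencies
    return -(1.0 / k) * (np.log(np.linalg.det(F))
                         - 0.5 * (np.log(fx).sum() + np.log(fy).sum()))

s1 = "ACGTACGTACGTAAGTACGTACCTACGAACGT"
s2 = "ACGTACCTACGTAAGTACGAACCTACGTACGT"
print(f"LogDet distance: {logdet_distance(s1, s2):.4f}")
```

Because the marginal frequencies of each sequence enter the formula separately, two taxa that have independently drifted toward similar base (or amino acid) compositions are not artifactually drawn together, which is why LogDet was the appropriate check here.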
These conflicting results regarding the precise position of mitochondria within the α-Proteobacteria can be seen in the high amount of networking in the Neighbor-Net graph of the analyses of the concatenated alignment shown in Figure 5. An important complication in studies of mitochondrial evolution lies in identifying "α-Proteobacterial" genes for comparison (Martin 1999). For example, in our analyses, proteins from *Magnetococcus* branched with other α-Proteobacterial homologs in only 17 of the 49 proteins studied, and in five cases they assumed a position basal to α-, β-, and γ-Proteobacterial homologs.

## Host–Symbiont Gene Transfers

Many genes that were once encoded in mitochondrial genomes have been transferred into the host nuclear genomes. Searching for such genes has been complicated by the fact that many of the transfer events happened early in eukaryotic evolution and that there are frequently extreme amino acid and nucleotide composition biases in mitochondrial genomes (see above). We used the *w*Mel genome to search for additional possible mitochondrion-derived genes in eukaryotic nuclear genomes. Specifically, we constructed phylogenetic trees for *w*Mel genes that are not in either *Rickettsia* genome. Five new eukaryotic genes of possible mitochondrial origin were identified: three genes involved in de novo nucleotide biosynthesis (*purD*, *purM*, *pyrD*) and two conserved hypothetical proteins (WD1005, WD0724). The α-Proteobacterial origin of these genes suggests that at least some of the genes of the de novo nucleotide synthesis pathway in eukaryotes might have been laterally acquired from bacteria via the mitochondria. The presence of such genes in other Proteobacteria suggests that their absence from *Rickettsia* is due to gene loss (Gray et al. 2001). This finding supports the need for additional α-Proteobacterial genomes to identify mitochondrion-derived genes in eukaryotes.

While organelle-to-nuclear gene transfers are generally accepted, there is a great deal of controversy over whether other gene transfers have occurred from bacteria into animals. In particular, claims of transfer from bacteria into the human genome (Lander et al. 2001) were later shown to be false (Roelofs and Van Haastert 2001; Salzberg et al. 2001; Stanhope et al. 2001). *Wolbachia* are excellent candidates for such transfer events since they live inside the germ cells, which would allow lateral transfers to the host to be transmitted to subsequent host generations. Consistent with this, a recent study has shown some evidence for the presence of *Wolbachia*-like genes in a beetle genome (Kondo et al. 2002). The symbiosis between *w*Mel and D. melanogaster provides an ideal case to search for such transfers, since we have the complete genomes of both the host and the symbiont. Using BLASTN searches and MUMmer alignments, we did not find any examples of highly similar stretches of DNA shared between the two species (a toy version of such a screen is sketched below). In addition, protein-level searches and phylogenetic trees did not identify any specific relationships between *w*Mel and D. melanogaster for any genes. Thus, at least for this host–symbiont association, we do not find any likely cases of recent gene exchange with genes being maintained in both host and symbiont. In addition, in our phylogenetic analyses, we did not find any examples of *w*Mel proteins branching specifically with proteins from any invertebrate to the exclusion of other eukaryotes.
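As promised above, here is a toy version of the transfer screen: a crude stand-in for BLASTN/MUMmer that flags stretches shared verbatim between two genomes by intersecting long k-mers. The value of k and the simulated sequences are assumptions; a real screen would use alignment scores and significance thresholds.

```python
# Crude stand-in for the BLASTN/MUMmer lateral-transfer screen: flag
# identical stretches via long shared k-mers. k and the simulated sequences
# are assumptions.
import random

def shared_kmers(genome_a, genome_b, k=30):
    kmers_a = {genome_a[i:i + k] for i in range(len(genome_a) - k + 1)}
    return [i for i in range(len(genome_b) - k + 1) if genome_b[i:i + k] in kmers_a]

random.seed(2)
symbiont = "".join(random.choices("ACGT", k=5000))
host = "".join(random.choices("ACGT", k=5000))
print("shared 30-mers, no transfer:", len(shared_kmers(symbiont, host)))
host_plus = host + symbiont[1000:1200]          # splice in a 200 bp "transfer"
print("after simulated transfer:", len(shared_kmers(symbiont, host_plus)))  # 200 - 30 + 1 = 171
```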
Therefore, at least for the genes in *w*Mel, we do not find evidence for transfer of *Wolbachia* genes into any invertebrate genome.

## Metabolism and Transport

*w*Mel is predicted to have very limited capabilities for membrane transport, for substrate utilization, and for the biosynthesis of metabolic intermediates (Figure S3), similar to what has been seen in other intracellular symbionts and pathogens (Paulsen et al. 2000). Almost all of the identifiable uptake systems for organic nutrients in *w*Mel are for amino acids, including predicted transporters for proline, aspartate/glutamate, and alanine. This pattern of transporters, coupled with the presence of pathways for the metabolism of the amino acids cysteine, glutamate, glutamine, proline, serine, and threonine, suggests that *w*Mel may obtain much of its energy from amino acids. These amino acids could also serve as material for the production of other amino acids. In contrast, carbohydrate metabolism in *w*Mel appears to be limited. The only pathways that appear to be complete are the tricarboxylic acid cycle, the nonoxidative pentose phosphate pathway, and glycolysis starting with fructose-1,6-bisphosphate. The limited carbohydrate metabolism is consistent with the presence of only one sugar phosphate transporter. *w*Mel can also apparently transport a range of inorganic ions, although two of these systems, for potassium uptake and sodium ion/proton exchange, are frameshifted. In the latter case, two other sodium ion/proton exchangers may be able to compensate for this defect.

Many of the predicted metabolic properties of *w*Mel, such as the focus on amino acid transport and the presence of limited carbohydrate metabolism, are similar to those found in *Rickettsia*. A major difference from the *Rickettsia* spp. is the absence of the ADP–ATP exchanger protein in *w*Mel. In *Rickettsia* this protein is used to import ATP from the host, thus allowing these species to be direct energy scavengers (Andersson et al. 1998). This likely explains the presence of glycolysis in *w*Mel but not *Rickettsia*. An inability to obtain ATP from its host also helps explain the presence of pathways for the synthesis of the purines AMP, IMP, XMP, and GMP in *w*Mel but not *Rickettsia*. Other pathways present in *w*Mel but not *Rickettsia* include threonine degradation (described above), riboflavin biosynthesis, pyrimidine metabolism (i.e., from PRPP to UMP), and chelated iron uptake (using a single ABC transporter). The two *Rickettsia* species have a relatively large complement of predicted transporters for osmoprotectants, such as proline and glycine betaine, whereas *w*Mel possesses only two of these systems.

## Regulatory Responses

The *w*Mel genome is predicted to encode few proteins for regulatory responses. Three genes encoding two-component system subunits are present: two sensor histidine kinases (WD1216 and WD1284) and one response regulator (WD0221). Only six strong candidates for transcription regulators were identified: a homolog of arginine repressors (WD0453), two members of the TenA family of transcription activator proteins (WD0139 and WD0140), a homolog of *ctrA*, a transcription regulator for two-component systems in other α-Proteobacteria (WD0732), and two σ factors (RpoH/WD1064 and RpoD/WD1298).
There are also seven members of one paralogous family of proteins that are distantly related to phage repressors (see above), although if they have any role in transcription, it is likely only for phage genes. Such a limited repertoire of regulatory systems has also been reported in other endosymbionts and has been explained by the apparently highly predictable and stable environment in which these species live (Andersson et al. 1998; Read et al. 2000; Shigenobu et al. 2000; Moran and Mira 2001; Akman et al. 2002; Seshadri et al. 2003).

## Host–Symbiont Interactions

The mechanisms by which *Wolbachia* infect host cells and by which they cause the diverse phenotypic effects on host reproduction and fitness are poorly understood, and the *w*Mel genome helps identify potential contributing factors. A complete Type IV secretion system, portions of which have been reported in earlier studies, is present. The complete genome sequence shows that in addition to the five *vir* genes previously described from Wolbachia wKueYO (Masui et al. 2001), an additional four are present in *w*Mel. Of the nine *w*Mel *vir* ORFs, eight are arranged into two separate operons (a sketch of how candidate operons are predicted from gene coordinates is given below). Similar to the single operon identified in *w*Tai and *w*KueYO, the *w*Mel *virB8*, *virB9*, *virB10*, *virB11*, and *virD4* CDSs are adjacent to *wspB*, forming a 7 kb operon (WD0004–WD0009). The second operon contains *virB3*, *virB4*, and *virB6* as well as four additional non-*vir* CDSs, including three putative membrane-spanning proteins, that form part of a 15.7 kb operon (WD0859–WD0853). Examination of the Rickettsia conorii genome shows a similar organization (Figure 6A). The observed conserved gene order for these genes between these two genomes suggests that the putative membrane-spanning proteins could form a novel and possibly integral part of a functioning Type IV secretion system within these bacteria. Moreover, reverse transcription (RT)-PCRs have confirmed that *wspB* and WD0853–WD0856 are each expressed as part of the two *vir* operons, further indicating that these additional encoded proteins are novel components of the *Wolbachia* Type IV secretion system (Figure 6B).

In addition to the two major *vir* clusters, a paralog of *virB8* (WD0817) is also present in the *w*Mel genome. WD0817 is quite divergent from *virB8* and, as such, does not appear to have resulted from a recent gene duplication event. RT-PCR experiments have failed to show expression of this CDS in *w*Mel-infected *Drosophila* (data not shown). PCR primers were designed to all CDSs of the *w*Mel Type IV secretion system and used to successfully amplify orthologs from the divergent *Wolbachia* strains *w*Ri and *w*AlbB (data not shown). We were able to detect orthologs to all of the *w*Mel Type IV secretion system components as well as most of the adjacent non-*vir* CDSs, suggesting that this system is conserved across a range of A- and B-group *Wolbachia*. An increasing body of evidence has highlighted the importance of Type IV secretion systems for the successful infection, invasion, and persistence of intracellular bacteria within their hosts (Christie 2001; Sexton and Vogel 2002).
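Candidate operons like *virB8*–*wspB* are typically proposed by grouping adjacent, same-strand CDSs separated by short intergenic gaps, with RT-PCR then providing the experimental confirmation described above. A minimal sketch follows; the 50 bp gap threshold and the gene coordinates (loosely echoing the *virB8*..*wspB* cluster) are assumptions, not the annotated *w*Mel positions.

```python
# Group adjacent, same-strand CDSs into candidate operons by intergenic
# distance. The gap threshold and toy coordinates are assumptions.
def candidate_operons(genes, max_gap=50):
    """genes: list of (name, start, end, strand) tuples sorted by start."""
    operons, current = [], [genes[0]]
    for gene in genes[1:]:
        prev = current[-1]
        if gene[3] == prev[3] and gene[1] - prev[2] <= max_gap:
            current.append(gene)       # same strand, short gap: extend operon
        else:
            operons.append(current)
            current = [gene]
    operons.append(current)
    return operons

genes = [  # hypothetical coordinates, not the annotated wMel positions
    ("virB8", 1000, 1700, "+"), ("virB9", 1720, 2500, "+"),
    ("virB10", 2510, 3700, "+"), ("virB11", 3720, 4700, "+"),
    ("virD4", 4710, 6400, "+"), ("wspB", 6420, 7100, "+"),
    ("unrelated", 7600, 8300, "-"),
]
for operon in candidate_operons(genes):
    print(" -> ".join(g[0] for g in operon))
```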
It is likely that the Type IV system in *Wolbachia* plays a role in the establishment and maintenance of infection and possibly in the generation of reproductive phenotypes.

Genes involved in pathogenicity in bacteria have frequently been found to be associated with regions of anomalous nucleotide composition, possibly owing to transfer from other species or insertion into the genome from plasmids or phage. In the four such regions in *w*Mel (see above; see Table 3), some additional candidates for pathogenicity-related activities are present, including a putative penicillin-binding protein (WD0719), genes predicted to be involved in cell wall synthesis (WD0095–WD0098, including D-alanine-D-alanine ligase, a putative FtsQ, and D-alanyl-D-alanine carboxypeptidase) and a multidrug resistance protein (WD0099). In addition, we have identified a cluster of genes in one of the phage regions that may also have some role in host–symbiont interactions. This cluster (WD0611–WD0621) is embedded within the WO-B phage region of the genome (see Figure 2) and contains many genes that encode proteins with putative roles in the synthesis and degradation of surface polysaccharides, including a UDP-glucose 6-dehydrogenase (WD0620). Since this cluster appears to be normal in terms of both composition and phylogeny relative to other genes in the genome (i.e., the genes in this region have normal *w*Mel nucleotide composition and branch in phylogenetic trees with genes from other α-Proteobacteria), it is not likely to have been acquired from other species. However, it is possible that these genes can be transferred among *Wolbachia* strains via the phage, which in turn could lead to some variation in host–symbiont interactions between *Wolbachia* strains.

Of particular interest for host-interaction functions are the large number of genes that encode proteins containing ankyrin repeats (Table 4). Ankyrin repeats, a tandem motif of around 33 amino acids, are found mainly in eukaryotic proteins, where they are known to mediate protein–protein interactions (Caturegli et al. 2000). While they have been found in bacteria before, they are usually present in only a few copies per species. *w*Mel has 23 ankyrin repeat-containing genes (a toy repeat-detection sketch is given below), the most currently described for a prokaryote, with C. burnetii being next with 13. This is particularly striking given *w*Mel's relatively small genome size. The functions of the ankyrin repeat-containing proteins in *w*Mel are difficult to predict, since most have no sequence similarity outside the ankyrin domains to any proteins of known function. Many lines of evidence suggest that the *w*Mel ankyrin domain proteins are involved in regulating the host cell cycle or cell division or in interacting with the host cytoskeleton: (i) many ankyrin-containing proteins in eukaryotes are thought to be involved in linking membrane proteins to the cytoskeleton (Hryniewicz-Jankowska et al. 2002); (ii) an ankyrin-repeat protein of Ehrlichia phagocytophila binds condensed chromatin of host cells and may be involved in host cell-cycle regulation (Caturegli et al. 2000); (iii) some of the proteins that modify the activity of cell-cycle-regulating proteins in D. melanogaster contain ankyrin repeats (Elfring et al. 1997); and (iv) the *Wolbachia* strain that infects the wasp Nasonia vitripennis induces cytoplasmic incompatibility, likely by interacting with these same cell-cycle proteins (Tram and Sullivan 2002).
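Ankyrin-repeat proteins such as the 23 in *w*Mel are, in practice, annotated with profile hidden Markov models (for example, Pfam's Ank model scanned with HMMER). The following deliberately crude sketch only scores 33-residue windows against a rough consensus string, to show the idea of detecting a tandem repeat; the consensus, threshold and toy protein are all assumptions and no substitute for a profile HMM.

```python
# Deliberately crude ankyrin-repeat window scan. Real annotation uses profile
# HMMs; the consensus ('x' = wildcard), threshold and toy protein are
# assumptions chosen only to illustrate tandem-repeat detection.
ANK_CONSENSUS = "GxTPLHxAAxxGHxxVVxLLLxxGAdxNAxDxx"  # rough 33-residue consensus

def ank_windows(protein, consensus=ANK_CONSENSUS, min_matches=12):
    """Return (offset, score) for windows resembling the consensus."""
    cons = consensus.upper()
    hits = []
    for i in range(len(protein) - len(cons) + 1):
        window = protein[i:i + len(cons)]
        score = sum(w == c for w, c in zip(window, cons) if c != "X")
        if score >= min_matches:
            hits.append((i, score))
    return hits

repeat = "GKTPLHLAARNGHLEVVKLLLEAGADVNAKDKF"   # hypothetical 33-aa repeat unit
protein = "MSE" + repeat + repeat + "GQRLS"
print(ank_windows(protein))                    # two tandem hits, 33 residues apart
```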
Of the ankyrin-containing proteins in *w*Mel, those worth exploring in more detail include the several that are predicted to be surface targeted or secreted (Table 4) and thus could be targeted to the host nucleus. It is also possible that some of the other ankyrin-containing proteins are secreted via the Type IV secretion system in a targeting-signal-independent pathway. We call particular attention to three of the ankyrin-containing proteins (WD0285, WD0636, and WD0637), which are among the very few genes, other than those encoding components of the translation apparatus, that have significantly biased codon usage relative to what is expected based on GC content, suggesting that they may be highly expressed.

###### Ankyrin-Domain-Containing Proteins Encoded by the *w*Mel Genome

## Conclusions

Analysis of the *w*Mel genome reveals that it is unique among sequenced genomes of intracellular organisms in that it is both streamlined and massively infected with mobile genetic elements. The persistence of these elements in the genome for apparently long periods of time suggests that *w*Mel is inefficient at getting rid of them, likely a result of experiencing severe population bottlenecks during every cycle of transovarial transmission as well as during sweeps through host populations. Integration of evolutionary reconstructions and genome analysis (phylogenomics) has provided insights into the biology of *Wolbachia*, helped identify genes that likely play roles in the unusual effects *Wolbachia* have on their hosts, and revealed many new details about the evolution of *Wolbachia* and mitochondria. Perhaps most importantly, future studies of *Wolbachia* will benefit both from this genome sequence and from the ability to study host–symbiont interactions in a host (D. melanogaster) well suited for experimental studies.

# Materials and Methods

### Purification/source of DNA

*w*Mel DNA was obtained from *D. melanogaster yw*^67c23^ flies that naturally carry the *w*Mel infection. *w*Mel was purified from young adult flies on pulsed-field gels as described previously (Sun et al. 2001). Plugs were digested with the restriction enzyme AscI (GG^CGCGCC), which cuts the bacterial chromosome twice (Sun et al. 2001), aiding the entry of the DNA into agarose gels. After electrophoresis, the resulting two bands were recovered from the gel and stored in 0.5 M EDTA (pH 8.0). DNA was extracted from the gel slices by first washing in TE (Tris–HCl and EDTA) buffer six times for 30 min each to dilute the EDTA, followed by two 1-h washes in β-agarase buffer (New England Biolabs, Beverly, Massachusetts, United States). The buffer was then removed and the blocks were melted at 70°C for 7 min. The molten agarose was cooled to 40°C and then incubated in β-agarase (1 U/100 μl of molten agarose) for 1 h. The digest was cooled to 4°C for 1 h and then centrifuged at 4,100 × *g*~max~ for 30 min at 4°C to remove undigested agarose. The supernatant was concentrated on a Centricon YM-100 microconcentrator (Millipore, Bedford, Massachusetts, United States) after prerinsing with 70% ethanol followed by TE buffer and, after concentration, rinsed with TE. The retentate was incubated with proteinase K at 56°C for 2 h and then stored at 4°C.
*w*Mel DNA for gap closure was prepared from approximately 1,000 *Drosophila* adults using the Holmes–Bonner urea/phenol:chloroform protocol (Holmes and Bonner 1973) to prepare total fly DNA.

### Library construction/sequencing/closure

The complete genome sequence was determined using the whole-genome shotgun method (Venter et al. 1996). For the random shotgun-sequencing phase, libraries of average size 1.5–2.0 kb and 4.0–8.0 kb were used. After assembly using the TIGR Assembler (Sutton et al. 1995), there were 78 contigs greater than 5,000 bp, 186 contigs greater than 3,000 bp, and 373 contigs greater than 1,500 bp. This number of contigs was unusually high for a 1.27 Mb genome. An initial screen using BLASTN searches against the nonredundant database in GenBank and the Berkeley *Drosophila* Genome Project site () showed that 3,912 of the 10,642 contigs were likely contaminants from the *Drosophila* genome. To aid in closure, the assemblies were rerun with all sequences of likely host origin excluded. Closure, which was made very difficult by the presence of a large amount of repetitive DNA (see below), was done using a mix of primer walking, generation and sequencing of transposon-tagged libraries of large insert clones, and multiplex PCR (Tettelin et al. 1999). The final sequence showed little evidence for polymorphism within the population of *Wolbachia* DNA. In addition, to obtain sequence across the AscI cut sites, PCR was performed on undigested DNA. It is important to point out that the reason significant host contamination does not significantly affect symbiont genome assembly is that most of the *Drosophila* contigs were small, owing to the approximately 100-fold difference in genome sizes between the host (approximately 180 Mb) and *w*Mel (1.2 Mb).

Since it has been suggested that *Wolbachia* and their hosts may undergo lateral gene transfer events (Kondo et al. 2002), genome assemblies were rerun using all of the shotgun and closure reads, without excluding any sequences that appeared to be of host origin. Only five assemblies were found to match both the D. melanogaster genome and the *w*Mel assembly. Primers were designed to match these assemblies and PCR was attempted from total DNA of *w*Mel-infected D. melanogaster. In each case, PCR was unsuccessful, and we therefore presume that these assemblies are the result of chimeric cloning artifacts. The complete sequence has been given GenBank accession ID AE017196 and is available at .

### Repeats

Repeats were identified using RepeatFinder (Volfovsky et al. 2001), which makes use of the REPuter algorithm (Kurtz and Schleiermacher 1999) to find maximal-length repeats. Some manual curation and BLASTN and BLASTX searches were used to divide repeat families into different classes.

### Annotation

Identification of putative protein-encoding genes and annotation of the genome were done as described previously (Eisen et al. 2002). An initial set of ORFs likely to encode proteins (CDSs) was identified with GLIMMER (Salzberg et al. 1998); a bare-bones version of this kind of ORF scan is sketched below. Putative proteins encoded by the CDSs were examined to identify frameshifts or premature stop codons compared to other species. The sequence traces for each were reexamined and, for some, new sequences were generated. Those for which the frameshifts or premature stops were of high quality were annotated as "authentic" mutations.
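As flagged above, gene calling was done with GLIMMER's interpolated Markov models; the six-frame ORF scan below shows only the skeleton that such tools refine. Requiring an ATG start and the length cutoffs are simplifying assumptions.

```python
# Bare-bones six-frame ORF scan (GLIMMER layers interpolated Markov models on
# top of this kind of skeleton). ATG-only starts and the length cutoff are
# simplifying assumptions.
import random

COMPLEMENT = str.maketrans("ACGT", "TGCA")
STOPS = {"TAA", "TAG", "TGA"}

def orfs(genome, minlen=300):
    """Yield (strand, frame, start, end) for ATG..stop ORFs >= minlen bases."""
    for strand, seq in (("+", genome), ("-", genome.translate(COMPLEMENT)[::-1])):
        for frame in range(3):
            start = None
            for i in range(frame, len(seq) - 2, 3):
                codon = seq[i:i + 3]
                if start is None and codon == "ATG":
                    start = i
                elif start is not None and codon in STOPS:
                    if i + 3 - start >= minlen:
                        yield strand, frame, start, i + 3
                    start = None

random.seed(3)
toy = "".join(random.choices("ACGT", k=3000))
print(sum(1 for _ in orfs(toy, minlen=150)), "ORFs of >= 150 bp in a random 3 kb sequence")
```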
Functional assignment, identification of membrane-spanning domains, determination of paralogous gene families, and identification of regions of unusual nucleotide composition were performed as described previously (Tettelin et al. 2001). Phylogenomic analysis (Eisen 1998a; Eisen and Fraser 2003) was used to aid in functional predictions. Alignments and phylogenetic trees were generated as described (Salzberg et al. 2001).

### Comparative genomics

All putative *w*Mel proteins were searched using BLASTP against the predicted proteomes of published complete organismal genomes and a set of complete plastid, mitochondrial, plasmid, and viral genomes. The results of these searches were used (i) to analyze the phylogenetic profile (Pellegrini et al. 1999; Eisen and Wu 2002), (ii) to identify putative lineage-specific duplications (those proteins with a top *E*-value score to another protein from *w*Mel), and (iii) to determine the presence of homologs in different species. Orthologs between the *w*Mel genome and those of the two *Rickettsia* species were identified by requiring mutual best-hit relationships among all possible pairwise BLASTP comparisons, with some manual correction (this rule is sketched below). Those genes present in both *Rickettsia* genomes as well as in other bacterial species, but not in *w*Mel, were considered to have been lost on the *w*Mel branch (see Table S3). Genes present in only one or two of the three species were considered candidates for gene loss or lateral transfer and were also used to identify possible biological differences between these species (see Table S3). For the *w*Mel genes not in the *Rickettsia* genomes, proteins were searched with BLASTP against the TIGR NRAA database. Protein sequences of their homologs were aligned with CLUSTALW and manually curated. Neighbor-joining trees were constructed using the PHYLIP package.

### Phylogenetic analysis of mitochondrial proteins

For phylogenetic analysis, the set of all 38 proteins encoded in both the Marchantia polymorpha and Reclinomonas americana (Lang et al. 1997) mitochondrial genomes was collected. Acanthamoeba castellanii was excluded due to high divergence and extremely long evolutionary branches. Six genes were excluded from further analysis because they were too poorly conserved for alignment and phylogenetic analysis (*nad7*, *rps10*, *sdh3*, *sdh4*, *tatC*, and *yejV*), leaving 32 genes for investigation: *atp6*, *atp9*, *atpA*, *cob*, *cox1*, *cox2*, *cox3*, *nad1*, *nad2*, *nad3*, *nad4*, *nad4L*, *nad5*, *nad6*, *nad9*, *rpl16*, *rpl2*, *rpl5*, *rpl6*, *rps1*, *rps11*, *rps12*, *rps13*, *rps14*, *rps19*, *rps2*, *rps3*, *rps4*, *rps7*, *rps8*, *yejR*, and *yejU*. Using FASTA with the mitochondrial proteins as queries, homologs were identified from the genomes of seven α-Proteobacteria: two intracellular symbionts (*W. pipientis w*Mel and Rickettsia prowazekii) and five free-living forms (Sinorhizobium meliloti, Agrobacterium tumefaciens, Brucella melitensis, Mesorhizobium loti, and *Rhodopseudomonas* sp.). Escherichia coli and Neisseria meningitidis were used as outgroups. Caulobacter crescentus was excluded from analysis because homologs of some of the 32 genes were not found in the current annotation. In the event that more than one homolog was identified per genome, the one with the greatest sequence identity to the mitochondrial query was retrieved. Proteins were aligned using CLUSTALW (Thompson et al. 1994) and concatenated.
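The mutual (reciprocal) best-hit rule referenced in the comparative genomics subsection above reduces to a few lines once the BLASTP best hits in each direction have been tabulated. A minimal sketch; the toy hit tables and identifiers are assumptions.

```python
# Reciprocal best-hit ortholog calling from per-direction best-hit tables.
# The toy tables and identifiers are assumptions.
def reciprocal_best_hits(best_a_to_b, best_b_to_a):
    """Return sorted (a, b) pairs that are each other's best hit."""
    return sorted((a, b) for a, b in best_a_to_b.items()
                  if best_b_to_a.get(b) == a)

best_wmel_to_rp = {"WD0001": "RP001", "WD0002": "RP002", "WD0003": "RP005"}
best_rp_to_wmel = {"RP001": "WD0001", "RP002": "WD0009", "RP005": "WD0003"}
print(reciprocal_best_hits(best_wmel_to_rp, best_rp_to_wmel))
# [('WD0001', 'RP001'), ('WD0003', 'RP005')] -- WD0002/RP002 fails the mutual test
```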
To reduce the influence of poorly aligned regions, all sites that contained a gap at any position were excluded from analysis, leaving 6,776 positions per genome. The data contained extreme amino acid bias: all sequences failed the χ^2^ test at *p* = 0.95 for deviation from the amino acid frequency distribution assumed under either the JTT or mtREV24 models, as determined with PUZZLE (Strimmer and von Haeseler 1996). When the data were iteratively purged of highly variable sites using the method described (Hansmann and Martin 2000), the amino acid composition gradually came into better agreement with the amino acid frequency distribution assumed by the model. The longest dataset in which all sequences passed the χ^2^ test at *p* = 0.95 consisted of the 3,100 least polymorphic sites. PROTML (Adachi and Hasegawa 1996) analyses of the 3,100-site data using the JTT model detected mitochondria as sisters of the five free-living α-Proteobacteria with low (72%) support, whereas PUZZLE, using the same data, detected mitochondria as sisters of the two intracellular symbionts, also with low (85%) support. This suggested the presence of conflicting signal in the less-biased subset of the data. Therefore, protein log determinants (LogDet) were used to infer distances from the 6,776-site data, since the method can correct for amino acid bias (Lockhart et al. 1994), and Neighbor-Net (Bryant and Moulton 2003) was used to display the resulting matrix, because it can detect and display conflicting signal. The result (see Figure 5A) shows both signals. In no analysis was a sister relationship between *Rickettsia* and mitochondria detected.

For analyses of individual genes, the 63 proteins encoded in the *Reclinomonas* mitochondrial genome were compared with FASTA to the proteins from 49 sequenced eubacterial genomes, which included the α-Proteobacteria shown in Figure 5, R. conorii, and *Magnetococcus* MC1, one of the more divergent α-Proteobacteria. Of those proteins, 50 had sufficiently well-conserved homologs to perform phylogenetic analyses. Homologs were aligned and subjected to phylogenetic analysis with PROTML (Adachi and Hasegawa 1996).

### Analysis of *wspB* sequences

To compare *wspB* sequences from different *Wolbachia* strains, PCR was done on total DNA extracted from the following sources: *w*Ri was obtained from infected adult D. simulans, Riverside strain; *w*AlbB was obtained from the infected Aa23 cell line (O'Neill et al. 1997b); and the *D. immitis Wolbachia* was extracted from adult worm tissue. DNA extraction and PCR were done as previously described (Zhou et al. 1998) with *wspB*-specific primers (*wspB*-F, 5′-TTTGCAAGTGAAACAGAAGG and *wspB*-R, 5′-GCTTTGCTGGCAAAATGG). PCR products were cloned into the pGEM-T vector (Promega, Madison, Wisconsin, United States) as previously described (Zhou et al. 1998) and sequenced (GenBank accession numbers AJ580921–AJ580923). These sequences were compared to previously sequenced *wsp* genes for the same *Wolbachia* strains (GenBank accession numbers AF020070, AF020059, and AJ252062). The four partial *wsp* sequences were aligned using CLUSTALV (Higgins et al. 1992) based on the amino acid translation of each gene, and similarly for the *wspB* sequences.
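The genetic distances reported next are Kimura two-parameter (K2P) values, which correct separately for transitions and transversions: d = −(1/2) ln(1 − 2P − Q) − (1/4) ln(1 − 2Q), with P and Q the transition and transversion fractions. A minimal sketch, assuming two pre-aligned, gap-free sequences (the toy sequences are assumptions):

```python
# Kimura two-parameter distance for two aligned, gap-free sequences.
# The toy sequences are assumptions.
import math

PURINES = {"A", "G"}

def k2p_distance(seq1, seq2):
    diffs = [(a, b) for a, b in zip(seq1, seq2) if a != b]
    n = len(seq1)
    # a difference is a transition when both bases are purines or both pyrimidines
    P = sum((a in PURINES) == (b in PURINES) for a, b in diffs) / n
    Q = (len(diffs) / n) - P                    # everything else is a transversion
    return -0.5 * math.log(1 - 2 * P - Q) - 0.25 * math.log(1 - 2 * Q)

s1 = "ACGTACGTACGTACGTACGT"
s2 = "GCGTACGAACGTATGTACGC"
print(f"K2P distance: {k2p_distance(s1, s2):.4f}")  # P = 0.15, Q = 0.05 here
```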
Genetic distances were calculated using the Kimura two-parameter method and are reported in Table S5.

### Type IV secretion system

To determine whether the *vir*-like CDSs, as well as adjacent ORFs, were actively expressed within *w*Mel as two polycistronic operons, RT-PCR was used. Total RNA was isolated from infected *D. melanogaster yw*^67c23^ adults using Trizol reagent (Invitrogen, Carlsbad, California, United States) and cDNA was synthesized using SuperScript III RT (Invitrogen) with primers *wspB*R, WD0817R, WD0853R, and WD0852R. RNA isolation and RT were done according to the manufacturer's protocols, with the exception that the suggested initial incubation of RNA template and primers at 65°C for 5 min and the final heat denaturation of the RT enzyme at 70°C for 15 min were not done. PCR was done using r*Taq* (Takara, Kyoto, Japan), and several primer sets were used to amplify regions spanning adjacent CDSs for most of the two operons. For operon *virB3*-WD0853, the following primers were used: (*virB3*-*virB4*)F, (*virB3*-*virB4*)R, (*virB6*-WD0856)F, (*virB6*-WD0856)R, (WD0856-WD0855)F, (WD0856-WD0855)R, (WD0854-WD0853)F, (WD0854-WD0853)R. For operon *virB8*-*wspB*, the following primers were used: (*virB8*-*virB9*)F, (*virB8*-*virB9*)R, (*virB9*-*virB11*)F, (*virB9*-*virB11*)R, (*virB11*-*virD4*)F, (*virB11*-*virD4*)R, (*virD4*-*wspB*)F, and (*virD4*-*wspB*)R. The coexpression of *virB4* and *virB6*, as well as of WD0855 and WD0854, was confirmed within the putative *virB3*-WD0853 operon using nested PCR with the following primers: (*virB4*-*virB6*)F1, (*virB4*-*virB6*)R1, (*virB4*-*virB6*)F2, (*virB4*-*virB6*)R2, (WD0855-WD0854)F1, (WD0855-WD0854)R1, (WD0855-WD0854)F2, and (WD0855-WD0854)R2. All ORFs within the putative *virB8*-*wspB* operon were shown to be coexpressed and are thus considered to be a genuine operon. All products were amplified only from RT-positive reactions (see Figure 6). Primer sequences are given in Table S9.

# Supporting Information

###### Phage Trees

Phylogenetic tree showing the relationship between the WO-A and WO-B phage from *w*Mel and the reported phage from *w*Kue and *w*Tai. The tree was generated from a CLUSTALW multiple sequence alignment (Thompson et al. 1994) using the PROTDIST and NEIGHBOR programs of PHYLIP (Felsenstein 1989).

(60 KB PDF)

###### Plot of the Effective Number of Codons against GC Content at the Third Codon Position (GC3)

Proteins with fewer than 100 residues are excluded from this analysis because their effective number of codons (ENc) values are unreliable. The curve shows the expected ENc values if codon usage bias is caused by GC variation alone. Colors: yellow, hypothetical; purple, mobile element; blue, others. Most of the variation in codon bias can be traced to variation in GC, indicating that mutational forces dominate *w*Mel codon usage. Multivariate analysis of codon usage was performed using the CODONW package (available from ).

(289 KB PDF)

###### Predicted Metabolism and Transport in *w*Mel

Overview of the predicted metabolism (energy production and organic compounds) and transport in *w*Mel. Transporters are grouped by predicted substrate specificity: inorganic cations (green), inorganic anions (pink), carbohydrates (yellow), and amino acids/peptides/amines/purines and pyrimidines (red).
###### Predicted Metabolism and Transport in *w*Mel

Overview of the predicted metabolism (energy production and organic compounds) and transport in *w*Mel. Transporters are grouped by predicted substrate specificity: inorganic cations (green), inorganic anions (pink), carbohydrates (yellow), and amino acids/peptides/amines/purines and pyrimidines (red). Transporters in the drug-efflux family (labeled as "drugs") and those of unknown specificity are colored black. Arrows indicate the direction of transport. Energy-coupling mechanisms are also shown: solutes transported by channel proteins (double-headed arrow); secondary transporters (two-arrowed lines, indicating both the solute and the coupling ion); ATP-driven transporters (ATP hydrolysis reaction); unknown energy-coupling mechanism (single arrow). Transporter predictions are based upon a phylogenetic classification of transporter proteins (Paulsen et al. 1998).

(167 KB PDF)

###### Repeats of Greater Than 50 bp in the *w*Mel Genome (with Coordinates)

(649 KB DOC)

###### Inactivated Genes in the *w*Mel Genome

(147 KB DOC)

###### Ortholog Comparison with *Rickettsia* spp.

(718 KB XLS)

###### Putative Lineage-Specific Gene Duplications in *w*Mel

(116 KB DOC)

###### Genetic Distances as Calculated for Alignments of *wsp* and *wspB* Gene Sequences from the Same *Wolbachia* Strains

(24 KB DOC)

###### Putative DNA Repair and Recombination Genes in the *w*Mel Genome

(26 KB DOC)

###### Phylogenetic Results for Concatenated Data of 32 Mitochondrial Proteins

(34 KB DOC)

###### Individual Phylogenetic Results for *Reclinomonas* Mitochondrial DNA-Encoded Proteins

(117 KB DOC)

###### PCR Primers

(47 KB DOC)

## Accession Numbers

The complete sequence for *w*Mel has been deposited in GenBank under accession number AE017196 and is available through the TIGR Comprehensive Microbial Resource.

The GenBank accession numbers for the other sequences discussed in this paper are AF020059 (*Wolbachia* sp. *w*AlbB outer surface protein precursor *wsp* gene), AF020070 (*Wolbachia* sp. *w*Ri outer surface protein precursor *wsp* gene), AJ252062 (*Wolbachia* endosymbiont of *D. immitis* gene for surface protein), AJ580921 (*Wolbachia* endosymbiont of *D. immitis* partial *wspB* gene for *Wolbachia* surface protein B), AJ580922 (*Wolbachia* endosymbiont of *A. albopictus* partial *wspB* gene for *Wolbachia* surface protein B), and AJ580923 (*Wolbachia* endosymbiont of *D. simulans* partial *wspB* gene for *Wolbachia* surface protein B).
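For convenience, the genome record can be retrieved programmatically. The sketch below uses Biopython's Entrez interface; the e-mail address is a placeholder you must replace, and network access to NCBI is assumed.

```python
from Bio import Entrez, SeqIO

Entrez.email = "you@example.org"  # placeholder: NCBI requires a real address

# Fetch the complete wMel GenBank record (accession AE017196).
handle = Entrez.efetch(db="nucleotide", id="AE017196",
                       rettype="gbwithparts", retmode="text")
record = SeqIO.read(handle, "genbank")
handle.close()

print(record.id, len(record.seq), "bp;",
      sum(1 for f in record.features if f.type == "CDS"), "CDS features")
```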
We acknowledge Barton Slatko, Jeremy Foster, New England Biolabs, and Mark Blaxter for helping to inspire this project; Rekha Seshadri for help in examining pathogenicity factors and for reading the manuscript; Derek Fouts for examination of group II introns; Susan Lo, Michael Heaney, Vadim Sapiro, and Billy Lee for IT support; Maria-Ines Benito, Naomi Ward, Michael Eisen, Howard Ochman, and Vincent Daubin for helpful discussions; Steven Salzberg and Mihai Pop for help in comparing *w*Mel with the *D. melanogaster* genome; Elodie Ghedin for access to the *B. malayi* *Wolbachia* sequence data; Maria Ermolaeva for assistance with the analysis of operons; Dan Haft for designing protein family hidden Markov models for annotation; Owen White for general bioinformatics support; four anonymous reviewers for very helpful comments and suggestions; and Claire M. Fraser for continuing support of TIGR's scientific research. This project was supported by grant U01-AI47409-01 to Scott O'Neill and Jonathan A. Eisen from the National Institute of Allergy and Infectious Diseases.

# References

## Abbreviations

CDS

: coding sequence

ENc

: effective number of codons

IS

: insertion sequence

LPS

: lipopolysaccharide

RT

: reverse transcription

TIGR

: The Institute for Genomic Research

author: Naomi Attar
date: 2012
institute: 1Genome Biology, BioMed Central, 236 Gray's Inn Road, London WC1X 8HB, UK
title: The Research Works Act: a comment

It's been a tale of two 'opens'; recent events in US political life suggest a reconciliation with the concept of open marriage but a relationship with open access to scientific research that may be headed for splitsville.

In a rare feat of bipartisanship during a Congress famed for partisan pettiness, the Research Works Act (RWA) has received cross-party sponsorship in the House and is now being heard in Committee. The Act, in its own words, seeks to prevent Federal agencies from requiring that authors 'assent to network dissemination of a private-sector research work'. Translation? It will be illegal for Federally funded research grants to have open access strings attached. This includes a bar on policies insisting on deposition in public repositories; in fact, it will be illegal to 'cause', 'permit' or 'authorize' such a deposition.

RWA has not attracted the same volume of headlines as the equally controversial SOPA (Stop Online Piracy Act), another recent bill pertaining to internet freedoms. But although the opponents of RWA cannot rival the political and global reach of the likes of Google and Wikipedia, a spirited, academia-led campaign against RWA is underway on the blogosphere and Twitter. As with many grassroots protests in the social media era, the RWA rebutters have mixed a sense of genuine outrage with mischievous and irreverent humor.

One notable line of campaigning has been to instigate a boycott of publishers deemed to be supportive of RWA. The boycott includes refusing to peer review manuscripts, in addition to submitting manuscripts only to publishers opposed or indifferent to RWA. Particular ire has been reserved for publishing houses thought to be proactive in their RWA support, either through lobbying or in terms of financial backing provided to Washington politicos. As one company singled out for opprobrium is a competitor of this journal's publisher, there is an undoubted temptation to indulge in a little schadenfreude and jump on the boycott bandwagon. However, while researchers are of course at liberty to submit to and peer review for whichever journals they choose (and on whatever grounds they choose), it seems that the focus on publishers is misplaced.
Companies should be expected to make representations, by legal means, on behalf of what they perceive to be their best interests (admittedly, a boycott may influence these perceptions); instead, the war for open access must be waged against the Congressmen, accountable to the People, who are driving RWA forward.

Looking into the US from the other side of the Atlantic, Federal funding for scientific research can only be viewed through a green mist of envy. The contribution of these funds to global science is remarkable (just take a look at a list of Nobel Prize winners, and note how many institutional affiliations are in the US), and is a record of which US taxpayers should be justly proud. This contribution is not paralleled by any other government. Furthermore, efforts by funding agencies to make the benefits of research as widely available as possible - including to those outside the US - are laudable. It seems to be a truism that maximizing access to the results of scientific endeavor is in the best interests of further scientific progress, and so offers the best value to the taxpayer. So why would the representatives of the very same taxpayer seek to restrict access to this research, and by the same measure subsidize the publishing industry with money diverted away from scientists? It defies belief to imagine that these Congressmen are arguing for their constituents to pay exorbitant prices simply to read articles that they themselves have paid for with their tax dollars.

So what is the defense? Supporters of RWA pitch it as a battle for the free market; in the words of the Association of American Publishers, its motivation is the 'freedom from regulatory interference for [the] private sector'. Of course, this is quite the opposite of what RWA actually represents, which is *additional* government regulation contrary to the spirit of the free market. In fact, market forces scare traditional publishing models, because left to their own devices they will arrive at the most efficient use of capital, which is undoubtedly, for the funding agencies, open access publishing. Given that Federal funding ultimately pays for both access to publications and publishing costs, the best-value option is an open access model. This is because the cost of publication should not vary with the level of access; only the size of the audience able to access the material varies. To prevent Federal agencies from pursuing what is therefore a no-brainer option, RWA is designed to skew the market; it leaves the decision of which model to publish manuscripts under to individual authors, thereby creating a disconnect between the source of capital and the choice of how it is spent. No freely operating market would tolerate those paying for the product (the taxpayers) being barred from access to its benefits.

One component of the defense put forward for RWA is particularly provocative, and warrants more detailed attention. Here's the Association of American Publishers again: RWA will protect 'millions of dollars invested by publishers in... operational funding of independent peer review by specialized experts.' Publishers would do well to remember that academics offer a peer review service free of charge, and so to focus on peer review as a publisher-added value, even though there are undoubtedly publisher-incurred costs to this process, is unnecessarily antagonistic.

Congress must judge RWA in the true spirit of the free market, the taxpayer and the great tradition of American science.
Open access publishing is an economic inevitability. It's time to get on board.

# Competing interests

As *Genome Biology* publishes research under an exclusively open access model, the journal is unashamed to oppose any legislation aiming to restrict the fruits of scientific research to a limited audience.

# Your views

*Genome Biology* would be interested to hear any dissenting views on this Editorial, or indeed any positive feedback. Tell us your own perspective either by posting a comment on this article or by including our @GenomeBiology handle in a tweet.

abstract: DNA microarrays (representing approximately 30,000 human genes) were used to analyze gene expression in six different human eye compartments, revealing candidate genes for diseases affecting the cornea, lens and retina.
author: Jennifer J Diehn; Maximilian Diehn; Michael F Marmor; Patrick O Brown
date: 2005
institute: 1Department of Ophthalmology, Stanford University School of Medicine, Stanford, CA 94305, USA; 2Department of Biochemistry, Stanford University School of Medicine, Stanford, CA 94305, USA; 3Howard Hughes Medical Institute, Stanford University School of Medicine, Stanford, CA 94305, USA; 4Department of Ophthalmology, University of California, San Francisco, San Francisco, CA 94143, USA
references:
title: Differential gene expression in anatomical compartments of the human eye

# Background

The human eye is composed of multiple substructures of diverse form, function, and even embryologic origin that work in concert to provide us with our sense of sight. Identifying the global patterns of gene expression that specify the distinctive characteristics of each of the various compartments of the eye is an important step towards understanding how these complex normal tissues function, and how dysfunction leads to disease. The Human Genome sequence [1,2] provides a basis for examining gene expression on a genomic scale, and cDNA microarrays provide an efficient method for analyzing the expression of thousands of genes in parallel. Previous studies have used microarrays to investigate gene expression within normal eye tissues, including cornea [3] and retina [4], as well as within pathological tissues such as glaucomatous optic nerve heads [5], uveal melanomas [6], and aging retina [7].

Analysis of gene expression in the eye has been notoriously difficult because of the technical obstacles associated with extracting sufficient quantities of high quality RNA from the tissues. This is especially true for the lens and cornea, which have relatively few RNA-producing cells when compared to a highly cellular tissue such as retina. Furthermore, pigmented ocular tissues contain melanin, which often co-purifies with RNA and inhibits subsequent enzymatic reactions [8].
Any delay between the patient's death and the harvesting of ocular tissues can also compromise RNA quality and yield. To date, many experiments examining the gene expression profile of particular eye compartments have relied on pooled samples or cell culture in order to obtain adequate amounts of RNA. In contrast to these studies, the experiments described in this paper were performed using a linear amplification procedure [9], which made it possible to examine individual specimens using DNA microarrays, thereby eliminating the potentially confounding effects of pooling multiple donor samples or of culturing cells, which can elicit dramatic changes in gene expression depending on the cell culture media [10]. We chose an *in vitro* transcription-based, linear amplification approach because it has previously been shown to reproducibly generate microarray gene expression results that are extremely similar to data generated using unamplified RNA [9,11,12]. Additionally, the amplification process has been shown to selectively and reproducibly 'over-amplify' some low-copy-number transcripts, resulting in a larger fraction of the expressed genome that can be reliably measured on DNA microarrays. Importantly, by analyzing individual donor samples on arrays, we can detect variation in the eye compartments of different donors, which will be critical for future studies that examine how gene expression varies between individuals at baseline and also in disease states.

A major goal of this study was to discover how the various eye compartments differ from one another on a molecular level by identifying clusters of differentially expressed genes, or 'gene signatures', characteristic of each eye compartment. We also wanted to investigate how gene expression varies between geographical regions of the retina. Because certain retinal diseases such as retinitis pigmentosa (RP) and age-related macular degeneration (ARMD) preferentially affect a specific retinal region, identification of genes that are differentially expressed in the macula versus the peripheral retina may provide valuable clues to the molecular mechanisms underlying these diseases. Recent work using serial analysis of gene expression (SAGE), a method that involves sequencing thousands of transcripts from a given RNA sample, identified several genes that were significantly enriched in either the macula or the periphery [13]. Our cDNA microarray studies confirmed some of these genes, but also significantly added to the catalog of macula-enriched genes. Lastly, because many ophthalmologic diseases preferentially affect a particular eye compartment, our study demonstrates that gene signatures can be combined with gene linkage studies in order to identify candidate disease genes.

# Results

To explore relationships among the different eye compartments and among genes expressed in these compartments, we performed hierarchical cluster analysis of both genes and samples [14] using genes that met our selection criteria (see Materials and methods). The display generated through hierarchical clustering analysis is shown in Figure 1a. In this display, relatively high expression levels are indicated by a red color, and relatively low expression levels are represented by a green color; each column represents data from a single tissue sample, and each row represents the series of measurements for a single gene. Tissue samples with similar gene expression patterns are clustered adjacent to one another, and genes with similar expression patterns are clustered together.
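A minimal sketch of this kind of two-way clustering, using SciPy with average linkage and a Pearson-correlation distance on a hypothetical expression matrix (genes × samples), is shown below. The original analysis used the Eisen et al. implementation, so this illustrates the approach rather than reproducing the authors' code.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, leaves_list
from scipy.spatial.distance import pdist

rng = np.random.default_rng(0)
expr = rng.normal(size=(200, 12))  # hypothetical log-ratio matrix: 200 genes x 12 samples

# Eisen-style two-way clustering: 1 - Pearson correlation as the distance,
# average linkage, applied independently to rows (genes) and columns (samples).
gene_order = leaves_list(linkage(pdist(expr, metric="correlation"), method="average"))
sample_order = leaves_list(linkage(pdist(expr.T, metric="correlation"), method="average"))

clustered = expr[np.ix_(gene_order, sample_order)]
print(clustered.shape)  # reordered matrix, ready for a red/green heat map
```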
In our experiments, samples of the same eye compartment from different donors clustered in discrete groups (for example, cornea with cornea, retina with retina), the only exception being an intermingling of the ciliary body and iris specimens (Figure 1a). The lack of a clear distinction between the expression patterns of the ciliary body and iris may be due both to their shared embryological origin and to their close anatomical approximation, resulting in sub-optimal separation during dissection. The division between the retinal samples and all other samples was the most striking. Furthermore, there was a distinct grouping of the various macula specimens, which formed a tightly clustered subgroup among the retinal samples. The expression patterns of the optic nerve samples were most similar to those of the three brain specimens.

Each anatomical compartment of the eye expressed a distinct set of genes that were not expressed, or were expressed at much lower levels, in the other eye compartments (Figure 1b). The repertoire of genes specifically expressed in the retina was especially large and diverse (3,727 genes), but we also found a surprisingly large number of transcripts (1,777 genes) expressed predominantly in the lens. To explore the connections between these compartment-enriched genes and phenotypic features of the compartments in which they were expressed, we considered each group of compartment-enriched genes in detail.

## Corneal signature

The cornea is a multi-layered structure consisting of an epithelium of stratified squamous cells, a thick stroma of layered collagen fibrils, and an underlying endothelial layer. To provide an effective physical barrier to the outside world, the corneal epithelial cells bind to one another and to the underlying connective tissue through a series of linked structures known collectively as the 'adhesion complex'. As shown in Figure 2a, many genes enriched in the corneal signature encoded proteins that stabilize epithelial sheets and promote cell-cell adhesion, including keratins (KRT5, KRT6B, KRT13, KRT15, KRT16, KRT17, KRT19), laminins (LAMB3, LAMC2), and desmosomal components (DSG1, DSC3, BPAG1).

Other genes highly expressed in the corneal signature encoded proteins that help maintain the shape, transparency, or integrity of the cornea, which serves as the primary refractive element of the eye. Some of these genes encoded proteins specifically expressed by either squamous epithelial cells or fibroblasts, reflecting the histological composition of corneal tissue. For example, the signature included numerous genes that encode collagens (COL5A2, COL6A3, COL12A1, COL17A1), along with the gene for lysyl oxidase (LOX), an enzyme that promotes collagen cross-linking. The gene encoding keratocan (KERA), a proteoglycan involved in maintaining corneal shape in mouse knock-out studies [15] and linked to abnormal corneal morphology (keratoconus and cornea plana) in humans, was selectively expressed in corneal tissue, as were the genes encoding lumican (LUM), a keratan sulfate-containing proteoglycan that has been shown to be important for corneal transparency in mice [16], and aquaporin 3 (AQP3), which encodes a water/small-solute-transporting molecule.
Immunolabeling studies performed on corneas with pseudophakic bullous keratopathy demonstrated increased AQP3 in the superficial epithelial cells, suggesting that AQP3 may be associated with the increased fluid accumulation that underlies the loss of corneal transparency seen in pseudophakic bullous keratopathy [17]. Modulating genes or proteins involved in corneal shape and transparency could potentially lead to non-invasive treatments for some corneal diseases, which are often remediable only through corneal transplantation.

An intriguing subset of genes in the corneal signature has been studied in tumor metastasis models, because these genes encode proteins that regulate cell-cell or cell-matrix interactions (TWIST, MMP10, SERPINB5, THBS1, CEACAM1, C4.4A). For example, TWIST encodes a transcription factor shown to promote metastasis in a murine breast tumor model through the loss of cadherin-mediated cell-cell adhesion [18]. Another corneal signature gene encodes matrix metalloproteinase 10 (MMP10), a protein capable of degrading extracellular matrix components. Overexpression of MMP10 in transfected lymphoma cells has been shown to stimulate invasive activity *in vitro* and to promote thymic lymphoma growth in an *in vivo* murine model [19]. Various matrix metalloproteinases have been examined for their roles in corneal wound healing (reviewed in [20]), including MMP10, which was identified in migrating epithelial cells in cultured human corneal tissue that had been experimentally wounded [21]; this suggests that corneal wound healing may mimic some aspects of tumor biology. Certainly, in both wound healing and cancer, cells undergo rapid proliferation, invade and remodel the extracellular matrix, and migrate to other areas.

Recent microarray investigations identified a gene expression signature related to a wound response in the expression profiles of several common carcinomas, and the presence of this wound-healing gene signature predicted an increased risk of metastasis and death in breast, lung, and gastric carcinomas [22,23]. Further research into corneal wound healing may also provide a model for better understanding the pathophysiology underlying tumor metastasis, because the cornea is exceptionally efficient among human tissues at degrading and remodeling its extracellular matrix, allowing it to heal superficial wounds within hours.

## Ciliary body/iris signature

The ciliary body and iris are components of the eye's highly pigmented and vascular layer known as the uveal tract. As might be expected, genes related to pigmentation were a feature of the distinctive expression pattern of these tissues (Figure 2b). These genes encoded enzymes involved in melanogenesis, including tyrosinase (TYR), tyrosinase-related protein 1 (TYRP1), and dopachrome tautomerase (DCT), as well as melanosomal matrix proteins such as SILV and MLANA. Several of the ciliary body/iris signature genes were noteworthy in that their mutation can lead to albinism or hypopigmentation phenotypes, including OA1 (ocular albinism type 1), TYR and TYRP1 (oculocutaneous albinism 1A and 3, respectively), and MLPH (Griscelli syndrome).
Investigation of the numerous uncharacterized genes with expression patterns similar to those of pigmentation genes may expand our knowledge of the pigmentation process in the eye and of the molecular mechanisms behind hypopigmentation syndromes.

The ciliary body is also responsible for aqueous humor formation and lens accommodation, while the contiguous iris filters light entering the eye by constricting and dilating the muscles around the pupillary opening. Histologically, the ciliary body consists predominantly of smooth muscle, but it also contains striated muscle (reviewed in [24]). Previous work has demonstrated that contractility of both the ciliary body and the trabecular meshwork is critical in modulating aqueous humor outflow (reviewed in [25]), one of the key determinants of intraocular pressure, along with aqueous humor production and episcleral venous pressure. Muscle-related proteins encoded by genes in the ciliary body/iris cluster included smooth muscle actin (ACTG2) and actin cross-linking proteins such as filamin (FLNC), tropomyosin (TPM2), and tensin (TNS). Other iris/ciliary body signature genes have known roles in myosin phosphorylation (PPP1R12B), sarcolemmal calcium homeostasis (CASQ2), and ATP availability (CKMT2), all of which may contribute to ciliary body/trabecular meshwork contractility.

Both ciliary body and trabecular meshwork contractility, as well as aqueous humor production, have been linked to changes in membrane potential, and membrane channels have been studied extensively in the ciliary body [25-27]. Of note, transcripts encoding an inward-rectifying potassium channel (KCNJ8) not previously identified in the ciliary body were highly enriched in the ciliary body/iris signature and may warrant further study. The signature also included the gene for the adrenergic receptor α2A (ADRA2A), a regulator of aqueous humor production and outflow, and the molecular target of the ocular hypotensive agent brimonidine. Identification of other genes that facilitate aqueous production and outflow may provide additional molecular targets for future glaucoma therapeutics aimed at lowering intraocular pressure, the only modifiable risk factor for the development and progression of glaucoma.

## Immune system genes expressed within anterior segment tissues

Genes related to immune defense mechanisms were prominent among the large set of genes selectively expressed in both the ciliary body/iris and corneal tissues. These included genes encoding proteins involved in intracellular antigen processing and transport for eventual surface presentation to immune cells (PSMB8, TAP1); antigen presentation proteins, including HLA class I molecules (HLA-A, HLA-C, HLA-F, and HLA-G) and HLA class II molecules (HLA-DRB1, DRB4, DRB5, DPA1, and DPB1); cytokines involved in the recruitment of monocytes (SCYA3, SCYA4, CD14); and cytokine receptors (IL1R2, IL4R, and IL6R). Several anterior segment-enriched genes encoded proteins with intrinsic antibiotic activity, including defensin (DEFB1) and lysozyme (LYZ), which may protect epithelial surfaces from microbial colonization.

Genes encoding components of the complement cascade, a major arm of the innate immune system, were a particularly prominent feature of the anterior segment signature.
Most of the early classical pathway complement genes, including C1 components (C1S, C1QA, C1QG, C1R), C2, and C4b, as well as a component of the late complement cascade (C7), were selectively expressed in both the corneal and ciliary body/iris tissues. In addition, the gene encoding complement factor B (BF), which triggers the alternative complement pathway, was highly expressed in these tissues.

To prevent the destructive reactions that could ensue from the daily bombardment of the eye with potentially antigenic stimuli, regulatory mechanisms must counteract the multitude of pro-inflammatory mediators found in the eye. A study by Sohn *et al*. [28] that examined a number of complement and complement-regulating components in rat eyes suggested that the complement system is continuously active at a low level in the normal eye and is kept in check by regulatory proteins. Indeed, we found that the anterior segment selectively expressed many critical negative regulators of the immune system, especially of the complement cascade. These included SERPING1 and DAF, two genes that encode proteins that limit the production of early complement components, and CD59, which encodes a protein that inhibits the assembly of complement subunits into the membrane attack complex.

The presence of complement activation products in the human eye during infection or inflammation has been previously described [29]. Studies have suggested that the complement pathway contributes to the pathophysiology of uveitis, an inflammatory disease of the uveal tract that is often idiopathic in etiology [30]. In support of this theory, Bardenstein *et al*. [31] showed that blocking the complement regulator CD59 in the rat eye precipitated massive inflammation in the anterior eye, including intense conjunctival inflammation and iritis. Our evidence that complement pathway components and regulators are highly expressed in anterior segment tissues provides further impetus for investigating their links to ocular disease.

A caution to bear in mind in interpreting these results is that all of our ocular specimens were obtained post-mortem. The expression of the inflammatory genes could therefore reflect, at least in part, changes in the eye that occur after death. Future studies examining gene expression in fresh tissue samples obtained at surgery, such as peripheral iridectomy specimens, should help to further address this issue.

## Lens signature

The distinctive features of the lens are its transparency, precisely crafted shape, and deformability, all of which are critical for proper light refraction. Elucidating the molecular mechanisms that maintain or disrupt lens transparency is fundamental to preventing cataract, the leading cause of blindness worldwide. Our studies showed that lens gene expression is very distinct from that of the other eye compartments (Figure 2c), perhaps reflecting the extraordinary specialization of the lens as an isolated, avascular structure within the eye. We found more than a thousand genes selectively expressed in the lens; clearly, diverse RNA populations are still present in the adult lens, even though its population of active epithelial cells is outnumbered by the mature fiber cells that have lost their organelles, including nuclei.

Genes encoding the subunits of crystallins, the predominant structural proteins in the lens, were prominent in the lens signature, including subunits of crystallin alpha (CRYAA), beta (CRYBA1, CRYBA4), and gamma (CRYGA, CRYGC).
Work by Horwitz and colleagues [32,33] on alpha-crystallins, which are structurally similar to small heat shock proteins, showed that these crystallins may preserve lens transparency by serving as molecular chaperones that protect other lens proteins from irreversible denaturation and aggregation. Of the other heat shock proteins highly enriched in the lens signature (HSPA6, HSPA8, HSPB1), HSPB1 may be of particular interest because it is a protein with an alpha-crystallin domain that may have a role in lens differentiation [34]. The lens signature also included genes encoding subunits of the proteasome complex (PSMA6, PSMA7, PSMB6, PSMB7, PSMB9, PSMD13), a multicatalytic proteinase structure that is responsible for degrading intracellular proteins. Previous studies have demonstrated the significance of the proteasome pathway in removing oxidatively damaged proteins within the lens [35].

Besides the crystallin genes, other genes encoding previously described structural components of the lens, including lens intrinsic membrane protein (LIM2), beaded filament structural protein (BFSP2), spectrin (SPTBN2), and actin-binding protein (ABLIM), were included in the lens signature. More interestingly, the signature also contained cytoskeletal genes, such as those encoding erythrocyte membrane protein bands 4.9 and 4.1 (EPB49, EPB41L1 and EPB41L4), that are characteristically expressed in erythrocytes, another cell whose highly stereotyped shape is critical to its function. Previous studies have shown that protein 4.1 helps stabilize the spectrin-actin cytoskeleton, which is present in both erythrocytes and lenticular tissue [36]. Further investigations comparing erythroid and lens cells may reveal other similarities between their cytoskeletons, both of which define a distinctive and stereotyped cell shape that must endure substantial mechanical stress.

Another notable feature of the lens signature was the enrichment of genes encoding proteins involved in endocytosis, including clathrin (CLTCL1, PICALM) and caveolin (CAV1). Currently, intercellular transport within the lens is thought to occur predominantly by diffusion through gap junctions, but several investigators have proposed that the uptake of nutrients must be supplemented by mechanisms other than gap junctions, because of the paucity of gap junctions identified in microscopy studies and the confirmed presence of clathrin-coated vesicles in freeze-fracture studies [37,38].

Oxidative stress mediated by free radical production has been associated with cataract formation (reviewed in [39]). We therefore looked for genes involved in scavenging free radicals in the lens signature. Two of these genes encode enzymes, glutathione synthetase (GSS) and glutathione reductase (GSR), that facilitate the production of glutathione, a potent anti-oxidant and essential cofactor for redox enzymes. Superoxide dismutase (SOD1) and anti-oxidant protein 2 (AOP2), two proteins responsible for reducing free oxygen radicals and hydrogen peroxide species, respectively, were also selectively expressed in lens tissue. Drugs or environmental agents that modulate the expression or activity of these proteins could have a significant impact on cataract progression or prevention.

## Optic nerve signature

The gene expression pattern in the optic nerve was overall quite similar to that seen in brain tissue (Figure 2d), very likely reflecting the preponderance of glial cells present in both tissues.
Both signatures included a number of genes (MBP, MOBP, MAG, OLIG1, and OLIG2) previously found in glial cells, several of which have been linked to neurological diseases. For example, myelin-associated oligodendrocyte basic protein (MOBP) is implicated as an antigenic stimulus for multiple sclerosis, a disease that can also present with optic neuritis (reviewed in [40]). Interestingly, the optic nerves of MOBP knock-out mice lacked the radial component of myelination [41]. In another study, transgenic mice with T-cell receptors specific to myelin-associated glycoprotein (MAG) spontaneously presented with optic neuritis [42]. The majority of the genes in the brain and optic nerve signatures encoded proteins of unknown function; our results, showing that these genes may have specialized roles in these tissues, may be a step toward discovering the biological roles of these uncharacterized proteins.

## Retina signature

The retina, a complex tissue composed of neuronal and glial elements, is essentially an extension of the central nervous system, and the genes found in the retina signature appear to reflect its distinctive histology and embryology (Figure 3a). For example, the signature included the receptors for known retinal neurotransmitters, including gamma-aminobutyric acid (GABRA1, GABRG2, GABRB3), glutamate (GRIA1, GRIN2D), glycine (GLRB), and dopamine (DRD2). Retinal neurotransmitters are packaged into small vesicles in the pre-synaptic regions of photoreceptors. Many retinal signature genes encoded proteins associated with synaptic vesicle docking and fusion (SNAP25, VAMP2, SYP, SNPH), as well as with vesicle exocytosis and neurotransmitter release (SYN2, SYT4). One of the retinal signature genes with a role in synaptic transmission, human retinal gene 4 (HRG4/UNC119), has been linked to late-onset cone-rod dystrophy in humans and to marked synaptic degeneration in a transgenic mouse model [43].

The retina protects the integrity of its neuronal layers by regulating its extracellular environment through a blood-retina barrier consisting of vessel tight junctions and cell basement membranes. The exchange of nutrients and metabolites across these barriers likely requires diverse, specialized transporters. Indeed, over 30 different genes encoding small-molecule transporters were found within the retina signature, including carriers of glucose (SLC2A1, SLC2A3), glutamate (SLC1A7), and other amino acids (SLC7A5, SLC38A1, SLC6A6). Of note, severe retinal degeneration was observed in mice mutated in *SLC6A6*, a gene encoding a transporter of the amino acid taurine [44]. Several genes encoding ABC transporters (ABCA3, ABCA4, ABCA5, ABCA7), which use ATP energy to transport various molecules across cell membranes, were contained in the retinal signature. The most notable of these, ABCA4, is involved in vitamin A transport in photoreceptor cells; mutations in the gene encoding ABCA4 can result in a spectrum of retinopathies, including retinitis pigmentosa, Stargardt's disease, cone-rod dystrophy, and ARMD.

The retinal signature was also enriched in transcripts encoding vitamin and mineral transporters.
The inclusion of a vitamin C transporter (SLC23A1) and a zinc transporter (SLC39A3) within the signature was of particular interest in light of the Age-Related Eye Disease Study Research Group report demonstrating that supplementation with zinc and anti-oxidants, including vitamin C, lowered the probability of developing neovascular ARMD in some high-risk patient subgroups [45]. The presence of transferrin (TF), an iron transport molecule, and its receptor (TFRC) in the retina signature may also be noteworthy, because a higher accumulation of iron has been observed in some ARMD-affected maculas [46].

Somewhat unexpectedly, the retina signature contained the gene encoding thyrotropin-releasing hormone (TRH) and numerous thyroid hormone receptor-related genes (THRA, TRIP8, TRIP15, TRAP100). TRH expression was previously observed in the retinal amacrine cells of amphibians [47]. Previous work has demonstrated the importance of thyroid hormone in the developing rat retina [48], and thyroid hormone receptors are required for green cone photoreceptor development in rodents [49]. Further studies of these genes may uncover additional roles of thyroid hormone and its receptors in the human retina.

The retina is ultimately responsible for executing the visual cycle, the process by which a photon signal is translated into an electrical impulse. This complex cycle is initiated when photoreceptor pigments activate G-proteins. G-proteins in turn activate phosphodiesterases to break down cyclic GMP (cGMP) to GMP, thereby influencing cell polarization via the downstream modulation of ion channel efflux. The retina signature incorporated many genes encoding known visual cycle elements, including the photopigment rhodopsin (RHO), G-proteins from rods and cones (GNAT1, GNAT2, GNB5), subunits of rod and cone phosphodiesterases (PDE6A, PDE6B, PDE6G, PDE6H), and cGMP-sensitive channels (CNGB1, CNGA1). Genes responsible for visual cycle recovery, such as the arrestins (SAG, ARR3), were also present. Intriguingly, transcripts encoding other G-proteins (GNB1, GNAZ) and several phosphodiesterases (PDE8B, PDE7A, PDE4A) with no established roles in the visual cycle were enriched in the retinal signature. Additionally, the signature contained CDS1, which, though it has no clear function in humans, is homologous to the phototransduction gene CDS that has been linked to light-induced retinal degeneration in *Drosophila* mutants [50]. Perhaps further in-depth study of the many uncharacterized genes in the retinal signature will reveal roles in phototransduction for these genes, which may expand our current concept of the visual cycle pathway.

## Macula signature

We used the statistical analysis of microarrays (SAM) algorithm to select genes whose expression differed significantly between the central and peripheral retinal tissues (Figure 3b). The large set of genes that we identified as selectively expressed in macula tissues included a subset of genes involved in lipid biosynthesis. The majority of these genes are regulated by sterol regulatory element-binding protein (SREBP), a transcription factor that has emerged as a master regulator of cholesterol and fatty acid metabolic pathways [51]. Previous studies by Fliesler *et al*. [52] have provided evidence for rapid *de novo* synthesis of cholesterol in the rat retina *in vivo*, and our findings strongly suggested that the human retina also contains the enzymes needed for cholesterol biogenesis.
Transcripts encoding enzymes that catalyze multiple steps in fatty acid and cholesterol synthesis were enriched in the macula, including stearoyl-CoA desaturase (SCD), mevalonate decarboxylase (MVD), hydroxy-3-methylglutaryl-coenzyme A synthase 1 (HMGCS1), and HMG-coenzyme A reductase (HMGCR), the rate-limiting enzyme in cholesterol synthesis and the target of the 'statin' class of drugs for patients with dyslipidemia. Other macula signature genes encoded enzymes that act later in cholesterol biosynthesis, such as lanosterol synthase (LSS) and squalene epoxidase (SQLE). In addition, the macula-enriched cluster included the gene for the low-density lipoprotein receptor (LDLR), known for its role in binding low-density lipoprotein (LDL), the major cholesterol-carrying lipoprotein of plasma. LDL receptors and LDL-like receptors have been previously identified in retinal pigment epithelium and retinal Müller cells [53,54], but their function in cholesterol transport within the retina has been minimally explored.

The genes represented in the macula cluster at least partially reflect cell types present at a higher density in the macula than in the peripheral retina, such as ganglion cells and photoreceptors. For example, a substantial number of genes in the macula signature have previously been characterized in ganglion cells (THY1, POU4F1, L1CAML1, NRN1). Interestingly, cholesterol is involved in the physiology of both retinal ganglion cells and photoreceptors. Cholesterol has been identified in rod outer segments in a wide variety of animal species (reviewed in [55]), as well as in oil droplets isolated from chicken cone photoreceptors [56]. *In vitro*, cholesterol has the capacity to modulate phototransduction in rods by altering the rod outer segment membrane structure [57], as well as by directly binding to rhodopsin itself [58]. Histological studies of retinas from patients with abetalipoproteinemia and familial hypobetalipoproteinemia (serum LDL-cholesterol levels <5% of normal) demonstrated a profound absence of photoreceptors throughout most of the posterior retina [59,60]. In addition, patients with Smith-Lemli-Opitz syndrome, a disease of abnormal cholesterol metabolism caused by a defect in 7-dehydrocholesterol reductase (DHCR7), another enzyme encoded by a gene selectively expressed in macula tissues, exhibited slower activation and recovery kinetics of their rod photoreceptors [61].

*In vitro* studies by Mauch *et al*. [62] have demonstrated that retinal ganglion cells require cholesterol in order to form mature, functioning synapses. The retinal ganglion cells in their experiments produced enough cholesterol to survive and grow, but effective synaptogenesis demanded additional cholesterol supplied by glial cells. Other work, by Hayashi *et al*. [63], showed that exposure to lipoproteins containing cholesterol and apolipoprotein E stimulated retinal ganglion cell axons to extend, and that this effect was mediated by receptors of the LDL receptor family present on distal axons. Studying the role of cholesterol in synaptogenesis may lead to insights useful in the development of protective or restorative therapeutics for neurodegenerative disease, as well as for ocular diseases that affect ganglion cells.

In view of epidemiological studies that have suggested connections among atherosclerosis, serum cholesterol levels, and ARMD [64-66], the enrichment of cholesterol biosynthesis genes within the macula warrants further investigation.
The presence of cholesterol in drusen, the extracellular deposits of ARMD, has been confirmed [67,68], although the origin of this cholesterol remains unclear. Dysregulation of lipid metabolism and transport, at the local and/or systemic level, may contribute to macular diseases such as ARMD. Studies have associated statin use with a decreased rate of ARMD [69,70], but randomized, prospective studies have yet to be completed.

## Identifying candidate disease genes

One direct application of the gene expression patterns we have defined is the identification of candidate genes for genetic diseases that differentially affect the various eye compartments. This strategy relies on the hypothesis that if mutations in a gene cause physiological aberrations specifically in a particular tissue, the gene is more likely to be selectively expressed in that tissue. We therefore used the literature, RetNet [71], and the Online Mendelian Inheritance in Man (OMIM) [72] databases to collate lists of genetic diseases affecting the lens, cornea, and retina, along with the genetic intervals to which the disease loci have been mapped. Next, we identified genes that were relatively selectively expressed in each of the three compartments. Briefly, we standardized the Cy5 intensity data for each array and calculated the average intensity for every gene across all samples from each compartment. We then empirically identified an intensity cut-off that resulted in the selection of greater than 85% of the genes included in the retinal compartment signature from Figure 1, while also including highly expressed genes expressed in more than one compartment. Using this cut-off, we identified separate compartment gene lists for the three compartments and then identified the subset of these genes located in the appropriate cytogenetic intervals for each compartment-specific disease (see Additional data files 4, 5 and 6, and Materials and methods).

To assess the potential of this approach, we analyzed the subset of diseases for which candidate intervals were listed in our sources but for which the causative gene is now known. The density of affected-tissue-expressed genes located in the candidate intervals was similar to that for the diseases of unknown cause, and this subset therefore served as a reliable positive control. The disease gene for a remarkable 50% to 70% of the diseases of known genetic cause was selectively expressed in the cognate compartment (Table 1). We tested the statistical significance of this result by comparing the number of disease genes identified by the compartment gene expression lists with the aggregate list of all genes detectably expressed in any of the samples shown in Figure 1. We found that for all three groups of diseases, the compartment signatures were significantly enriched for candidate disease genes (lens, p < 0.002; cornea, p < 0.005; retina, p < 0.0004, by the hypergeometric distribution). By focusing on the genes expressed within the compartment displaying the disease phenotype, we could enrich for potential candidate genes by an average of 2- to 2.5-fold.
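The enrichment p-values quoted above come from a one-sided hypergeometric test, which can be computed in a few lines with SciPy. The sketch below uses the retina row of Table 1 (42 disease genes on the arrays, 23 recovered); the gene-universe and compartment-set sizes are assumptions for illustration only, not the study's actual tallies.

```python
from scipy.stats import hypergeom

def enrichment_pvalue(total_genes, disease_genes, compartment_size, overlap):
    """Probability of seeing >= `overlap` known disease genes when drawing a
    compartment set of `compartment_size` genes from `total_genes`, of which
    `disease_genes` are disease-associated (one-sided hypergeometric test)."""
    return hypergeom.sf(overlap - 1, total_genes, disease_genes, compartment_size)

# 42 disease genes, 23 recovered (retina row of Table 1); universe and
# compartment-set sizes below are placeholders.
print(enrichment_pvalue(total_genes=30000, disease_genes=42,
                        compartment_size=5000, overlap=23))
```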
As an example of this approach, we more closely examined Retinitis Pigmentosa 29 (RP 29), an autosomal recessive form of RP that was mapped to chromosomal region 4q32-q34 in a consanguineous Pakistani family [73]. At least two genes within this interval (*WDR17*, *GPM6A*), and one gene near the interval (*CCN3*), were previously examined by sequencing and excluded as candidates [74]. In our data, only one gene, *KIAA1712*, was both located within the mapped interval and selectively expressed in our retinal samples. Little is currently known about this gene, except that it appears in expressed sequence tag (EST) and SAGE libraries from several tissues, including brain. Our analysis suggests that *KIAA1712* is a strong candidate gene for RP 29 and deserves further study. We expect our candidate gene lists to be highly enriched for the causative genes of a large fraction of the diseases we analyzed, and they should thus prove useful in accelerating the identification of genes important in various aspects of ocular pathology.

# Discussion

Our microarray studies identified distinct molecular signatures for each compartment of the human eye. As we predicted, many of the genes differentially expressed in each tissue could be related to the histology and embryology of the cognate structure in the eye; more usefully, each signature uncovered numerous genes whose expression or function in the eye had not been previously characterized and whose expression patterns now provide new clues to their roles. Through a comparative analysis of gene expression among eye compartments, we can also gain insight into the pathophysiology of diseases that afflict specific eye tissues. Furthermore, our data may help anticipate or explain drug effects and side-effects when the molecular targets of the drugs are preferentially expressed in particular ocular tissues.

The extensive set of genes selectively expressed in the macula demonstrates that there is significant regional variation in gene expression programs in the human retina. The macula-enriched expression pattern may provide clues to the pathogenesis of retinal diseases that preferentially affect the macula, such as ARMD. Because no ophthalmologic clinical data accompanied the autopsy globe samples used in our experiments, and because of our limited sample sizes, we were unable to correlate our gene expression data with clinical exam findings or disease course. The techniques used in these experiments did, however, allow us to examine tissues from individual donors rather than requiring us to rely on either pooled tissue samples or cultured cells. Thus, our results show that future experiments examining individual diseased samples will be possible.

By analyzing our global gene expression data together with previous genetic mapping data, we were able to greatly refine the sets of candidate genes for many corneal, lenticular, and retinal diseases whose genetic basis is still undefined. When we used a control set of diseases with known causative genes, the candidate gene lists we generated included 50% to 70% of the causative genes for this control set. One explanation for why we did not identify all the causative genes for the control disease set is that some causative genes did not meet our intensity threshold and thus were not included in the compartment expression lists. Furthermore, we could not have identified causative genes that are expressed only in the diseased state (but not in normal tissues), because we limited our microarray analyses to tissues with no known ocular pathology. Other reasons why our approach may have missed causative genes include expression of causative genes only at certain points in development and not in adult tissues, technical problems with the array element(s) representing these genes, and possible loss of transcripts in the RNA isolation or amplification process.
Future investigation of these potential problems, and comparison of our candidate gene lists with genome-scale gene expression data from diseased tissues, will further refine the approach presented here.

Finally, our studies were designed to provide an open resource for all investigators interested in ocular physiology and disease. The tissue signature data, the diseases, genetic intervals, and candidate genes for all the diseases we examined, and the complete set of data from our studies are freely available without restriction from the Authors' Web Supplement accompanying this manuscript [75].

# Materials and methods

## Tissue specimens

Eight whole globes (G1 to G8) were harvested from autopsy donors (age range 30 to 85 years) within 24 h of death, and the tissues were immediately stored at 4°C in RNAlater (Ambion, Austin, TX, USA). Four of the globes were from female donors (G3, G6 to G8) and four were from male donors (G1, G2, G4, G5). Globes 4 and 5 were harvested as a set from a single donor, as were globes 6 and 7. No ophthalmologic clinical records were available for any of the globes at the time of harvest. Seven of the globes (G1 to G7) were dissected into the following components: cornea, lens, iris, ciliary body, retina, and optic nerve; only retinal tissue was available from G8. The maculas and the peripheral retinal tissues were further dissected from several of the retinal samples. The macula was defined as the visible xanthophyll-containing tissue temporal to the optic nerve, which encompassed an area of approximately 4 mm^2^. For comparison purposes, three post-mortem brain specimens were analyzed.

## RNA extraction and amplification

Specimens were disrupted in TRIZOL (Gibco, Carlsbad, CA, USA) solution using a tissue homogenizer. Samples were processed according to the manufacturer's protocol until the aqueous supernatant was retrieved. The supernatant was mixed with 1 volume of 70% ethanol, applied to an RNeasy column (Qiagen, Valencia, CA, USA), and purified according to the manufacturer's protocol. RNA quality and quantity were assessed by gel electrophoresis and spectrophotometer measurements. Total RNA was amplified using a single-round, linear amplification method [9] (also see Additional data files 1 and 2). Tissue samples that yielded inadequate amounts of RNA were excluded from further analysis. A reference mixture of mRNAs derived from 10 different cell lines (Universal Human Reference RNA, Stratagene, La Jolla, CA, USA) was used in all experiments as an internal standard for comparative two-color fluorescence hybridization.

## Microarray procedures

Human cDNA microarray construction and hybridization were performed as previously described [76]. The microarrays contained 43,198 elements, representing approximately 30,000 genes (estimated by UniGene clusters), and were manufactured by the Stanford Functional Genomics Facility [77]. In each analysis, amplified RNA from an eye tissue sample was labeled with Cy5, and amplified reference RNA was labeled with Cy3. The two labeled samples were combined, and the mixture was hybridized to a microarray. Arrays were scanned using a GenePix 4000B scanner (Axon Instruments Inc., Sunnyvale, CA, USA). The array images were processed using GenePix Pro 3.0, and the resulting data were indexed in the Stanford Microarray Database and normalized using its default total-intensity normalization algorithm (more detailed methods are available in Additional data file 3). Searchable figures and all raw microarray data can be found at [75]. The complete microarray dataset is also accessible through the Gene Expression Omnibus [78] (accession number GSE3023).
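Total-intensity normalization of a two-color array can be sketched as follows. This is a generic illustration (the Stanford Microarray Database's exact algorithm may differ): the background-subtracted Cy5 channel is scaled so that both channels have equal summed intensity before log ratios are taken.

```python
import numpy as np

def total_intensity_normalize(cy5, cy3, cy5_bg, cy3_bg):
    """Scale background-subtracted Cy5 so its total matches Cy3's total,
    then return per-spot log2 ratios."""
    r = np.clip(cy5 - cy5_bg, 1, None)   # background-subtracted, floored at 1
    g = np.clip(cy3 - cy3_bg, 1, None)
    r_scaled = r * (g.sum() / r.sum())   # equalize total channel intensity
    return np.log2(r_scaled / g)

# Hypothetical intensities for a four-spot array:
ratios = total_intensity_normalize(np.array([500., 80., 1200., 300.]),
                                   np.array([400., 90., 900., 350.]),
                                   50., 40.)
print(ratios)
```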
## Bioinformatic analyses

For the data shown in Figures 1 and 2, only elements for which at least 50% of the measurements across all samples had a fluorescence intensity in either channel of at least 3.25-fold over background were included. The logarithm of the ratio of background-subtracted Cy5 fluorescence to background-subtracted Cy3 fluorescence was calculated. Values for each array and each gene were then median-centered, and only cDNA array elements for which at least two measurements differed by more than 2.5-fold from the median were included in subsequent analyses. For the data in Figure 3, we employed the Statistical Analysis of Microarrays (SAM) package [79]. Only elements for which the intensity-to-background ratio was at least 3.25 in at least 35% of the retina samples were considered. Only genes whose expression significantly differed between the macula and peripheral retina (false discovery rate <0.05 with 500 permutations) were selected. Finally, to focus on genes with the largest absolute difference in expression between the two regions, we selected genes whose expression differed by at least four-fold from the median in at least two samples.

## Candidate disease gene analysis

To identify the gene sets expressed in each compartment, background-subtracted Cy5 intensities from each microarray were standardized to an array median of 1,500, and genes exhibiting an average intensity of at least 2,500 in a compartment were identified (see Additional data file 4). This threshold was chosen empirically because it resulted in greater than 85% of the retinal signature from Figure 1 being included in the retina set, while less than 5% of these genes were contained in any of the other compartment sets. Genetic diseases affecting the lens, cornea, or retina were collated from the Online Mendelian Inheritance in Man database [72] and the Retinal Information Network [71], along with the genetic intervals to which they have been mapped (see Additional data file 5). Using Perl scripts, we mapped every sequence on our arrays to the human genome using data from the UCSC genome browser [80]. Genes in the corresponding compartment expression set that were located in the genetic interval associated with each compartment-specific disease were identified. For the benchmark analysis of diseases associated with known genes, we also identified all genes in the human genome that fell into the genetic interval associated with each disease. The compartment expression sets and our lists of candidate genes for the 147 diseases we analyzed can be found in Additional data file 6.
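The selection logic of this section can be summarized in a few lines. The sketch below uses hypothetical data structures rather than the authors' Perl scripts: it standardizes each array to a median of 1,500, applies the 2,500 average-intensity cut-off, and intersects the resulting compartment set with the genes in a disease's mapped interval.

```python
import numpy as np

def compartment_gene_set(intensity, gene_ids, threshold=2500.0, target_median=1500.0):
    """intensity: genes x arrays matrix of background-subtracted Cy5 values for
    one compartment. Returns the gene IDs passing the average-intensity cut-off."""
    scaled = intensity * (target_median / np.median(intensity, axis=0))  # per-array standardization
    keep = scaled.mean(axis=1) >= threshold
    return {g for g, k in zip(gene_ids, keep) if k}

def candidates(compartment_set, interval_genes):
    """Intersect a compartment expression set with the genes in a mapped interval."""
    return compartment_set & set(interval_genes)

# Hypothetical five-gene, two-array example:
expr = np.array([[9000., 8500.],
                 [300.,  400.],
                 [5000., 5200.],
                 [1500., 1600.],
                 [800.,  700.]])
genes = ["RHO", "GENE_B", "KIAA1712", "GENE_D", "GENE_E"]
retina_set = compartment_gene_set(expr, genes)
print(candidates(retina_set, ["KIAA1712", "GENE_Y"]))  # {'KIAA1712'}
```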
# Additional data files

The following additional data are available with the online version of this paper. Additional data file 1 contains the step-by-step amplification protocol used in this work. Additional data file 2 is a table detailing RNA isolation and amplification yields. Additional data file 3 contains more detailed supplemental materials and methods. Additional data file 4 contains the compartment gene lists used in the disease gene analysis. Additional data file 5 contains the list of diseases for each compartment with their mapped genetic intervals. Additional data file 6 contains the results of the disease gene analysis, including the list of candidate genes for each disease.

### Acknowledgements

We wish to thank members of the Brown laboratory for helpful advice and discussions, M van de Rijn and Stanford pathology for help with tissue acquisition, and T Hernandez-Boussard for computational assistance. This work was supported by the Howard Hughes Medical Institute, NCI grant CA77097, the Stanford Medical Scholars Program (J.D.), and by NIGMS training grant GM07365 (M.D.). P.O.B. is an investigator of the HHMI.

## Figures and Tables

Compartment gene sets are enriched for candidate genes of ocular diseases

| Compartment | Disease-associated genes on array | Disease-associated genes expressed in affected compartment | Percentage of known disease-associated genes identified | Average fold enrichment compared to total number of genes in interval | *P*-value |
|----|----|----|----|----|----|
| Lens | 15 | 8 | 53 | 2.4 | 0.002 |
| Cornea | 13 | 9 | 69 | 2.0 | 0.005 |
| Retina | 42 | 23 | 55 | 2.3 | 0.0004 |

Arrays were standardized to the same median intensity and genes exhibiting minimum intensities of 2,500 in any compartment were identified. Genetic diseases affecting the lens, cornea, or retina were collated from the RetNet \[71\] and Online Mendelian Inheritance in Man \[72\] databases, along with their cytogenetic map positions.
The table indicates the number of cloned disease genes on the arrays, the number contained in a given compartment gene set, the percentage of known disease genes included in the signatures, the average fold enrichment compared to the total number of genes in each cytogenetic interval, and the statistical significance of this enrichment (using the hypergeometric distribution).

---

abstract: Connections have been revealed between very different human diseases using phenotype associations in other species
author: Bolan Linghu; Charles DeLisi
date: 2010
institute: 1Translational Sciences Department, Novartis Institutes for BioMedical Research, Cambridge, MA 02139, USA; 2Departments of Biomedical Engineering and Physics, Program in Bioinformatics and Systems Biology, Boston University, Boston, MA 02215, USA
references:
title: Phenotypic connections in surprising places

# The human disease landscape

The past few years have witnessed a growing number of well documented connections between and among human disease phenotypes, whose relationship would not have been obvious within the current disease classification framework. The evidence stems from a variety of sources, spanning clinical epidemiology, computational genomics and various model systems \[1-5\]. The implications are potentially so fundamental to disease etiology, drug development and the general diagnostic paradigm that there have been calls for an NIH Roadmap (large trans-institute transformational grants) focused on delineating the human disease landscape - a quantitative bipartite correlation map relating disease phenotypes and their genetic structures. A new study of phenotype correlations \[6\] now takes this idea further.

One of the earliest comprehensive studies of human phenotype correlations was by Rzhetsky *et al*. \[5\], who established a correlation network between 161 diseases and disorders using evidence of comorbidities obtained from some 1.5 million patient records. Among other results, they found suggestive evidence for genetic relations between autism, which manifests in childhood, and several late-onset diseases, including bipolar disorder and schizophrenia.

A different approach was taken by Goh *et al*. \[1\], who constructed a human disease network by linking genetic disorders that are known to share causative genes. The network was based on an analysis of the Online Mendelian Inheritance in Man (OMIM) database, the most comprehensive compendium of well established associations between human disorders and their associated genes \[7\]. Among their findings was a large subnet formed by 516 of the 1,284 disorders studied, clearly showing many-to-many relationships between phenotypes and genes.
For example, *KRAS* (encoding a small GTPase), *BRCA1* and *BRCA2* (both encoding tumor suppressors) are all involved in breast cancer, but *KRAS* is also implicated in pancreatic cancer, whereas *BRCA1* and *BRCA2* are associated with papillary carcinoma and prostate cancer, respectively \[7\]. The results of these and other studies support the concept that genes underlying a disorder tend to be functionally related; for example, they could be part of a particular protein complex, a particular pathway or process, or a particular set of coexpressed genes. A disorder can then be viewed as a phenotype that emerges from dysfunction of one or more components of a functionally coherent gene module. An important aspect of modularization is that any gene in the module that was not previously identified with the phenotype is a candidate for association with it \[8\]. Moreover, a particular module can, to varying extents, underlie related phenotypes, opening up the possibility of linking phenotypes not only on the basis of shared genes, but also on the basis of functionally related genes that are not shared.

Inferring disease connectivity through functional linkage starts with a network of genes whose functions are correlated; that is, each pair of nodes (genes or proteins) is connected by one or more sources of evidence supporting its functional coherence, such as physical interaction, correlated expression, adjacency in the same metabolic pathway or genetic interaction \[2-4\]. Connections can then be inferred between the diseases whose associated genes are linked in the gene network. For instance, we and our colleagues \[3\] constructed a network for the human genome by integration of diverse types of evidence using a Bayesian model, and annotated it with all OMIM disease genes for diseases known to be associated with five or more genes. Genes that are most tightly linked to those known to be associated with a given disease are immediate candidates for association with that disease. Furthermore, connections between disease pairs can be quantitatively identified on the basis of the magnitude of the functional linkage between their disease-associated genes. Thus, two diseases can be linked even if they do not share disease-associated genes. The associations found ranged from phenotypically disparate disease pairs, such as multiple sclerosis with malaria, to phenotypically similar pairs, such as muscular dystrophy with myopathy. Such results suggest that the current disease classification may be much less informative than is commonly believed.

Recognition of molecular connections between disease phenotypes provides immediate insight into the molecular mechanisms underlying different diseases and can therefore generate novel hypotheses for therapeutic strategies. This is especially valuable if one disease is well studied and the other is not; it might also be valuable if viable drug targets have been found for one of the diseases but not the other. This prospect of drug repositioning could accelerate the introduction of therapies by years \[9\].
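To make the shared-gene construction concrete, a minimal sketch follows. It is illustrative only - not the actual procedure of \[1\] or \[3\] - and the toy input abbreviates the *KRAS*/*BRCA* relationships described above; all names and the function itself are ours.

```python
from itertools import combinations
from collections import defaultdict

def disease_network(disease_genes):
    """Link two disorders whenever they share at least one causative gene.

    `disease_genes` is a hypothetical dict mapping each disorder to the set
    of genes associated with it (e.g., parsed from OMIM)."""
    edges = defaultdict(set)
    for d1, d2 in combinations(disease_genes, 2):
        shared = disease_genes[d1] & disease_genes[d2]
        if shared:
            edges[frozenset((d1, d2))] = shared  # edge labeled by shared genes
    return edges

# Toy example echoing the text above: breast cancer links to both other
# cancers, which remain unlinked to each other (no shared gene).
net = disease_network({
    "breast cancer": {"KRAS", "BRCA1", "BRCA2"},
    "pancreatic cancer": {"KRAS"},
    "prostate cancer": {"BRCA2"},
})
```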
# Human disease models and phenotypic connections in surprising places

Recent work from Edward Marcotte's laboratory by McGary *et al*. \[6\] takes these concepts, and indeed the entire field of phenomics, to an entirely new level by developing, establishing and exploring a quantitative method to find non-obvious phenotypic connections, not just within a species, but across species. Such cross-species phenotypic connections stem from the evolutionary conservation of the underlying associated gene modules \[8,10\]. The importance of the work reported by McGary *et al*. \[6\] can be glimpsed by recalling that whereas OMIM, which is the most extensive database of well established human gene-phenotype associations, contains approximately 5,400 unique associations \[7\], the numbers for some model organisms are 1 to 2 orders of magnitude higher \[6\]. Some of these relations - such as between obesity and its implicated genes in mice - have obvious equivalences in humans, and when they do, the laboratory model can serve as a useful surrogate to study the disease or disorder. Other phenotypic pairs are entirely non-obvious, such as the fact that retinal cancer in humans and ectopic vulvae in the nematode can both be caused by disruption of the human retinoblastoma 1 (*RB1*) gene or its nematode ortholog \[6\]. Because such non-obvious relations can occur with surprising frequency, a method that can rapidly and reliably link human genes to non-obviously related phenotypes in model systems offers the prospect of radically accelerating the rate at which we can explore disease landscapes in both humans and model organisms.

McGary *et al*. \[6\] provide more than a glimpse at the possibilities. They begin by defining 'phenologs' as cross-species mutant phenotypes that share a significant number of orthologous genes. The statistical significance of phenologs, whose emergence results from disruption of orthologous genes, can be estimated as the probability that the observed number of orthologs common to the two phenotypes would be found by chance, with a correction for multiple hypotheses. In this way, using approximately 300 human diseases and over 6,000 phenotypes in model organisms, including mouse, worm, yeast and *Arabidopsis*, they identify 4,390 significant phenologs. As one of the positive controls, the authors note that the 3,755 mouse-human phenologs identified contain many of the known disease models - including cataracts, deafness and retinal disease - all at *P*-values well below 10^-8^. Given that phenologs map gene-phenotype associations across the phyla, an association known in one species can be used to find non-established relations in another. Cross-validations presented by McGary *et al*. \[6\] show that phenologs can predict genes associated with about a third to a half of tested human diseases.

The work \[6\] offers tantalizing evidence for several counterintuitive mammalian disease models, including reduced growth rate of yeast deletion strains in medium enriched with the cholesterol-lowering drug lovastatin as a model for abnormal angiogenesis in mice, and negative gravitropism defects in *Arabidopsis* as a model for human Waardenburg syndrome (which causes deafness with defects in neural-crest-derived tissues). Furthermore, using the yeast model, they demonstrate that *SOX13* (encoding a transcription factor related to the sex-determining gene *SRY*) is a new gene that regulates angiogenesis, and using the *Arabidopsis* model they show that *SEC23IP* (encoding a protein that interacts with SEC23, a component of the COPII complex that controls endoplasmic reticulum-to-Golgi trafficking) is probably a new Waardenburg syndrome gene.
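Returning to the significance test described above, a hypergeometric formulation is a natural sketch (this parametrization is ours, consistent with the verbal description rather than quoted from \[6\]): let *N* be the number of orthologous gene pairs shared between the two species, let the two phenotypes be associated with *n*~1~ and *n*~2~ of them, and let *k* be the observed overlap. The probability of an overlap at least as large arising by chance is

$$P(\text{overlap} \geq k) \;=\; \sum_{j=k}^{\min(n_1,\,n_2)} \frac{\binom{n_1}{j}\binom{N-n_1}{n_2-j}}{\binom{N}{n_2}},$$

with the resulting *P*-values then corrected for the number of phenotype pairs tested.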
Notwithstanding the fact that many functionally coherent gene modules are conserved across different species \[10\], such demonstrations - especially the identification of phenologs that predate kingdom divergence - seem to mark one of those uncommon occasions in science in which intuition built on years of experience fails completely.

# Additional connections

As staggering as these results \[6\] are, it seems possible that many phenologs have been missed, because the method is confined to connections based only on phenotype pairs sharing known orthologous genes. Gene-phenotype association data may, however, be far from complete; if so, many orthologous phenotypes will be missed. One possible way to increase the discovery rate would be to consider the functional relatedness of the genes associated with each phenotype. As described earlier, gene-gene functional relatedness has been successfully used to identify phenotypic connections within the same species \[2-4\], and it is possible that the same principle can be applied to identify phenotype connections between different species.

A deeper and far more important connection relates to the impact of McGary *et al*. \[6\] on efforts to develop a more complete picture of the topology of the human phenome. It is evident that the way phenomics looks today, the amount of information and the incredible interconnectivities, could not have been imagined even 3 years ago. It seems likely that the methods developed and demonstrated by McGary *et al*. \[6\], and their inevitable extensions, will add unprecedented knowledge and quantitative detail to the interrelated landscapes of mutant phenotypes for humans and other species, and this will offer many more surprising correlations between and among human diseases. As our picture of the human disease landscape continues to take shape, perhaps the one thing that we should not be surprised about will be the need to fundamentally rethink the current disease classification and its associated diagnostic paradigm.

---

author: Mary R. Loeken (corresponding author)
date: 2012-07
references:
title: A New Role for Pancreatic Insulin in the Male Reproductive Axis

The recent discovery that insulin is expressed by the rat testis (1) raises questions about whether and how locally produced insulin regulates testicular function. It also raises the question whether pancreatic insulin regulates testicular function, and whether fertility might be impaired as a consequence of insulin deficiency in type 1 diabetes. As reported in this issue of *Diabetes*, Schoeller et al. (2) have found that pancreatic insulin regulates the male hypothalamic-pituitary-gonadal axis and is essential for fertility.
In contrast, insulin produced in the testes may not be essential for testicular function. The article by Schoeller et al. provides an important advance toward understanding why fertility may be diminished in men with type 1 diabetes.

It has only recently been recognized that diabetes can reduce sperm quality and that the female partners of diabetic men have lower pregnancy rates (3,4). It is not well understood whether hyperglycemia or abnormal insulin signaling is responsible, and what stage(s) of spermatogenesis is affected. Increased nuclear and mitochondrial DNA damage in sperm samples from diabetic men suggests that hyperglycemia-induced oxidative stress may be responsible (3,5). However, it is also possible that the adverse effects of diabetes may be due to abnormal insulin signaling in the testis, systemic effects of insulin, or both, and may be separate from insulin's effects on blood glucose levels.

Schoeller et al. (2) have taken advantage of the Akita mouse model of insulin-deficient diabetes to cleverly sort out these possibilities. The dominant Akita phenotype is caused by a mutation in the *Ins2* allele. The resulting misfolded protein product causes endoplasmic reticulum (ER) stress, leading to pancreatic β-cell death (6). Akita mice develop hyperglycemia, although the age at onset and severity of hyperglycemia depend on the sex, strain background, and whether the mice are heterozygous or homozygous for the mutant *Ins2* allele (M.R.L., personal observations; Schoeller et al. (2)). Notably, whereas primates carry only one insulin gene, rodents carry two functional insulin genes, *Ins1* and *Ins2*. *Ins1* arose from a duplication of the ancestral *Ins2* gene approximately 20 million years ago (7); therefore, the mouse *Ins2* gene is orthologous to the human insulin gene. Thus, if *Ins2* is expressed in the mouse testis, it is very likely that the human testis expresses insulin as well. The beauty of using the Akita model is that it has the potential to distinguish between the roles of testicular and pancreatic insulin in male reproductive function. This is not possible with other models, such as streptozotocin-induced diabetes or the NOD mouse, in which testicular insulin production would not be interrupted.

The Moley laboratory previously observed that sperm from homozygous Akita males fertilizes fewer oocytes and that the resulting blastocysts are developmentally impaired (8). In this study (2), they used males that were either heterozygous or homozygous for the Akita allele; heterozygous Akita males develop hyperglycemia (>300 mg/dL) by 5 weeks of age, whereas homozygous Akita males develop hyperglycemia by 3 weeks of age (i.e., prior to puberty) and die by 8-12 weeks of age unless treated with insulin. Fertility diminishes by 4-6 months of age in heterozygous Akita males, whereas homozygous Akita males are completely infertile. RT-PCR demonstrated that, unlike the pancreas, in which both *Ins1* and *Ins2* are transcribed, only *Ins2* was transcribed in the testis. Immunolocalization showed that insulin was detected predominantly in Sertoli cells. However, there was no associated ER stress.
Thus, while the resulting Ins2 protein may be hypomorphic or nonfunctional in the testis, infertility in homozygous Akita males does not appear to be due to ER stress-induced apoptosis of testicular cells.

Although treatment of the homozygous Akita males with exogenous insulin using subcutaneous insulin implants restored spermatogenesis and fertility, this was not due to restoration of insulin within the testes, because insulin did not cross the blood-testis barrier. There could be effects on Leydig cells, which contain insulin receptors and are located outside the blood-testis barrier (9). Therefore, they would presumably not be responsive to insulin produced by Sertoli cells. However, because circulating levels of luteinizing hormone and testosterone, which were significantly reduced in Akita homozygotes, were restored by insulin treatment, the primary effects of exogenous insulin appear to be on the hypothalamic-pituitary axis.

This study by Schoeller et al. (2) is of clinical relevance to men with type 1 diabetes because it demonstrates that pancreatic insulin is crucial for the male reproductive axis. The infertility in homozygous Akita males appears to be due to insulin deficiency, not hyperglycemia. This is because heterozygous Akita males become as severely hyperglycemic as the homozygotes at only a slightly older age, and they are fertile at least until they are 4-6 months of age. On the other hand, it is possible that severe hyperglycemia before puberty interferes with the function of the hypothalamic-pituitary-testicular axis, whereas hyperglycemia occurring during or after puberty only interferes with the function of the reproductive axis after several months of chronic exposure. However, the ability to restore fertility in homozygous Akita males with exogenous insulin suggests that if prepubescent hyperglycemia interferes with the function of the reproductive axis, it is not irreversible.

This article (2) opens the door to future studies that aim to understand how insulin regulates the male reproductive axis, and whether insulin is regulating the pituitary, the hypothalamus, or higher central nervous system nuclei. Knockout of the insulin receptor gene in the central nervous system impairs luteinizing hormone production and spermatogenesis (10). There was no significant difference in follicle-stimulating hormone levels between wild-type, heterozygous, and homozygous Akita males, suggesting that the effects of insulin are on the hypothalamus or on the responsiveness of the pituitary gonadotropes to hypothalamic gonadotropin-releasing hormone.

It should be noted that insulin has long been recognized to play a role in the ovary. Insulin is detectable in ovarian follicular fluid and synergizes with gonadotropins for oogenesis, ovulation, and luteinization (11,12). Because the ovarian follicle is permeable to circulating hormones, it has been thought that follicular fluid insulin is derived from the pancreas. However, in light of this current study, whether insulin (and *Ins2* in particular) is expressed by ovarian cells ought to be examined. Although this study did not find an essential role for testicular insulin, additional experimentation, perhaps with testis-specific *Ins1* or *Ins2* knockout or transgenic strains, is necessary to further understand the function of testis-derived insulin.
It will also be important to study the effects of insulin and insulin deficiency that are conserved, as well as those that are distinct, between the hypothalamic-pituitary-gonadal axes of females and males.

## ACKNOWLEDGMENTS

M.R.L. is supported by grants from the National Institutes of Health (RO1 DK52865 and RO1 DK58300).

No potential conflicts of interest relevant to this article were reported.

---

author: JH Anolik; A Palanichamy; J Bauer; J Barnard; J Biear; R Dedrick; I Sanz; J Liesveld; E Baechler
date: 2012
institute: 1University of Rochester Medical Center, Rochester, NY, USA; 2University of Minnesota, Minneapolis, MN, USA; 3XDx, Brisbane, CA, USA; 4Emory University, Atlanta, GA, USA
title: B cells at the adaptive-innate immune system interface in SLE

# Background

Accumulating data indicate that inappropriate activation of type I interferon (IFN) plays a key role in the pathogenesis of systemic lupus erythematosus (SLE). Given that IFN can influence B-cell lymphopoiesis in murine bone marrow (BM), we explored the hypothesis that IFN activation in SLE BM has direct effects on B-cell development. Additionally, the impact of B cells on pDC production of IFN was examined.

# Methods

Peripheral blood mononuclear cells (PBMC) and bone marrow mononuclear cells (BMMC) were isolated from 28 patients fulfilling ACR criteria for SLE and from 20 healthy controls. RNA isolates were analyzed for the expression of three to five IFN-regulated genes (IFIT1, IRF7, G1P2, CEB1, C1orf29) by quantitative PCR, and the IFN-regulated chemokines CXCL10 (IP-10), CCL2 (MCP-1), and CCL19 (MIP-3β) were measured in serum and BM supernatant. B-cell subsets were delineated in single-cell suspensions of PBMC and BMMC by multi-parameter flow cytometry. BM and PB pDCs were stimulated with TLR9 (CpG ODN 2006 or 2216) or TLR7 (R848) agonists, and intracellular IFN and TNF were measured by flow cytometry.

# Results

The majority of SLE patients had an IFN signature in the BM (57%), which was even more pronounced than in the paired PB. Notably, the early B-cell compartment (consisting of pro B cells, pre B cells, immature B cells and early transitional T1 and T2 B cells) in SLE BM with an IFN signature was associated with a reduction in the fraction of pro/pre B cells, suggesting an inhibition of early B-cell development. However, at the transitional B-cell stage this inhibition was reversed, with enhanced selection of B cells into the transitional compartment. The composition of the mature B-cell compartment in IFN-activated SLE BM was notable for an expansion of CD27^+^, IgD^-^ switch memory B cells.
SLE patients with a BM IFN signature were enriched for a high number of autoantibody specificities (*P* = 0.003 compared with IFN low), and the degree of IFN activation in the BM correlated with peripheral lymphopenia (*P* = 0.019) and disease activity (*P* = 0.05). In order to understand the etiology of IFN activation in SLE BM, we examined the production of IFN by pDCs. CpG induced IFN production in pDCs in a dose-dependent fashion. Notably, a higher proportion of BM pDCs produced IFN compared with paired PB. Moreover, pDCs produced 59% more IFN in the presence of B cells.

# Conclusion

This is the first demonstration of an IFN signature in SLE BM. These results suggest that the BM is an important but previously unrecognized target organ in SLE, with critical implications for B-cell ontogeny and selection. We postulate that in the setting of IFN activation the stringency of negative selection of autoreactive B cells in the BM may be reduced. Circulating immune complexes and apoptotic fragments in SLE BM may serve as ligands for Toll-like receptors on pDCs, contributing to aberrant IFN production; in turn, B cells may be critical regulators of pDC function.

## Acknowledgements

Work supported by grants from the NIH (R01 AI077674 and P01 AI078907) and the Rochester Autoimmunity Center of Excellence.

---

abstract:

# Background

The main goal of whole transcriptome analysis is to correctly identify all transcripts expressed within a specific cell/tissue - at a particular stage and condition - to determine their structures and to measure their abundances. RNA-seq data promise to allow identification and quantification of the transcriptome at an unprecedented level of resolution, accuracy and low cost. Several computational methods have been proposed to achieve such purposes. However, it is still not clear which promises are already met and which challenges are still open and require further methodological developments.

# Results

We carried out a simulation study to assess the performance of five widely used tools: CEM, Cufflinks, iReckon, RSEM, and SLIDE. All of them were used with default parameters. In particular, we considered the effect of three different scenarios: the availability of complete annotation, incomplete annotation, and no annotation at all. Moreover, comparisons were carried out using the methods in three different modes of action. In the first mode, the methods were forced to deal only with those isoforms that are present in the annotation; in the second mode, they were allowed to detect novel isoforms using the annotation as a guide; in the third mode, they operated in a fully data driven way (although with the support of the alignment on the reference genome). In the latter mode, precision and recall are quite poor. On the contrary, results are better with the support of the annotation, even though it is not complete.
Finally, the abundance estimation error often shows a very skewed distribution. The performance strongly depends on the true abundance of the isoforms. Lowly (and sometimes also moderately) expressed isoforms are poorly detected and estimated. In particular, lowly expressed isoforms are identified mainly if they are provided in the original annotation as potential isoforms.

# Conclusions

Both detection and quantification of all isoforms from RNA-seq data are still hard problems, affected by many factors. Overall, performance changes markedly with the mode of action and the type of available annotation. Methods supplied with complete or partial annotation are able to detect most of the expressed isoforms, even though the number of false positives is often high. Fully data driven approaches require more attention, at least for complex eukaryotic genomes. Improvements are desirable, especially for isoform quantification and for the detection of isoforms with low abundance.

author: Claudia Angelini; Daniela De Canditiis; Italia De Feis
date: 2014
institute: 1Istituto per le Applicazioni del Calcolo, CNR, Naples, Italy; 2Istituto per le Applicazioni del Calcolo, CNR, Rome, Italy
references:
title: Computational approaches for isoform detection and estimation: good and bad news

# Background

Gene transcription represents a key step in the biology of living organisms. Several recent studies, including \[1,2\], have shown that, at least in eukaryotes, a large fraction of the genome is transcribed and almost all the genes (more than 90% of human genes) undergo alternative splicing. The discovery of the pervasive nature of eukaryotic transcription, its unexpected level of complexity - particularly in humans - and its accurate quantification are helping to provide deep insight into the biological pathways and molecular mechanisms that regulate disease predisposition and progression \[3\].

The main goal of whole transcriptome analysis is to identify, measure, characterize and catalogue all transcripts expressed within a specific cell/tissue - at a particular stage and condition - in particular to determine the precise structure of genes and transcripts, the correct splicing patterns, their abundances, and to quantify the differential expression in both physiological and pathological conditions.

Thanks to the pioneering works of \[4-6\], which showed, among others, the potential of high-throughput mRNA sequencing (RNA-seq), and to the development of efficient computational tools to analyse such data \[7-9\], RNA-seq has quickly become one of the preferred and most widely used approaches for discovering new genes and transcripts and for measuring transcript abundance from a single experiment (see \[10,11\] for reviews).
To date, RNA-seq experiments have been successfully used in a wide spectrum of studies, offering tremendous benefits compared with previous approaches, such as microarrays, while also creating many challenges from both the experimental and the data analysis perspective \[12\].

In particular, to fully benefit from RNA-seq data, the following (strongly connected) computational challenges must be faced:

**i)** transcriptome reconstruction or isoform identification;

**ii)** gene and isoform detection (on/off);

**iii)** gene and isoform quantification (expression level in terms of either FPKM or read-count);

**iv)** gene and isoform differential expression.

Points i)-iii) are aimed at providing a full characterization of the transcriptome of a given sample, with ii) and iii) often combined into a simultaneous step, where some parsimonious strategies are employed to deal with the high number of candidate isoforms. Point iv) is carried out to compare samples across different physiological and pathological conditions. To face these challenges, several computational methods have been proposed \[13,14\] and open-source software packages are available. However, despite the connection among the previous points, most of the available computational methods attempt to face each point independently. Therefore, sophisticated pipelines are built in order to provide a comprehensive answer (see the Tuxedo pipeline \[15\] as a remarkable example). Still, the choice of the best method for a specific dataset, the best parameter tuning and the expected performance are not clear to a beginner user. In particular, methods often require several additional parameters that are not easy to understand and choose. Assessing the best combination is very difficult and time consuming. In most cases the choice is made in a subjective way, partially driven by prior knowledge of the structure of the genome under analysis and by some heuristic considerations, rather than by using an objective and general approach. Therefore, most users are often confused and tend to use default values.

Recently, a few independent studies have compared the performance of computational methods for detecting differential expression under a wide range of settings; see for example \[16-18\]. As for points i)-iii), limited comparisons were carried out within the same papers that describe the proposed methods \[19-23\]. However, to the best of our knowledge, no independent comparison was available until the recent study of \[24\], conducted almost simultaneously to our study.

The goals of the present paper are to illustrate the results of a detailed comparison of five widely used tools, namely *CEM* \[23\], *Cufflinks* \[20\], *iReckon* \[22\], *RSEM* \[19\] and *SLIDE* \[21\], to provide a discussion about expected results, and to assess which promises are already met and which challenges are still open and require further methodological developments. Even though, at least for data driven approaches, our conclusions are similar to those achieved by \[24\], the way we carried out our analysis and compared the methods is different. Therefore, this study can be viewed as a complement to \[24\]. Specifically, we assessed the particular improvements that may be obtained by using annotation.

In the following section, we briefly review some of the most widely used tools for isoform reconstruction and quantification in organisms for which a reference genome is available.
When the reference genome is not available (or the user does not want to use it), such methods cannot be applied and more computationally demanding assembly strategies have to be adopted instead; see for example \[25,26\] or, more generally, \[24\]. Subsequently, in the Methods section we describe the approach used to build the comparisons and provide the rationale about the compared methods, their parameters and modes of usage. Comparisons are mainly carried out for simulated paired-end (PE) reads with different throughput and read lengths, since they represent the state of the art of most current experiments. However, for completeness we also implemented a limited simulation study using single-end (SE) reads at the same depth. All methods were mostly used with their default parameters, without attempting any internal parameter optimization to improve their performance, mimicking the expected usage by a non-expert scientist in the analysis of RNA-seq data.

Methods were compared under different experimental scenarios, assuming the availability of complete annotation, incomplete annotation and absence of annotation. Moreover, whenever possible, such scenarios were combined with three different modes of action that account for different strategies in the considered algorithms. In the first mode, the inference is limited only to those isoforms that are present in the annotation. In the second mode, the annotation is used as a guide to identify other possible transcripts. In the third mode, all inference is fully data driven. We observe that the case of complete annotation (combined with inference limited to handling only the transcripts contained therein) represents an ideal case, which allows us to evaluate the performance of each method in detecting the presence of isoforms and in quantifying their expression when everything else is known. This situation is rarely met given our current knowledge of biology. However, it can be considered as a limit case since, in the near future, it will be possible to work with almost complete annotations, thanks to the output of large international projects such as ENCODE (Encyclopedia of DNA Elements) \[27\], at least for widely studied organisms such as the human one. The case of incomplete annotation (combined with an inference driven by the provided annotation) illustrates the realistic case in current studies, where previous projects have disclosed most information. However, data emerging from the literature show that such information represents only partial knowledge. Finally, the fully data driven approach is necessary when studying newly sequenced organisms for which no previous information is available (or when the user does not want to use the annotation explicitly). All comparisons in \[24\], except iReckon and SLIDE, were carried out fully data driven.
In that case the aim was to benchmark data driven approaches in recovering the gene annotation, without taking into account whether the retrieved isoforms were present in the sample or not. Finally, the Conclusions section summarizes all our evaluations.

## An overview of computational methods for isoform identification and quantification

The classical pipeline for isoform detection and estimation consists of the following three logical steps. First, the reads are aligned to the reference genome. Subsequently, candidate isoforms are either identified or directly provided by the user through an annotation file. Finally, the presence and the abundance of each isoform are (either independently or simultaneously) estimated. We refer to \[13,14\] for detailed reviews of the existing algorithms and software. Alternatively, it is also possible to use methods, such as \[26\], that assemble reads into longer fragments constituting the transcriptome, and then use methods for quantifying the abundance of the inferred transcripts. Assembly methods are based on local alignment and graph theory and are similar in spirit to the methods used to assemble genomes. Such methods are potentially very interesting for detecting de-novo isoforms. However, the comparison of such approaches with alignment-based algorithms is outside the scope of the current work.

RNA-seq alignment can be performed by a series of dedicated tools such as \[28-32\], which allow mapping both of reads that align to the reference genome without large gaps (i.e., exon-body reads) and of reads with large gaps in terms of genomic coordinates that span exon-exon junctions (i.e., splice-junction reads). Since the aim of this paper is to compare isoform estimation/detection procedures, we chose TopHat2 \[29\] (version 2.0.7) for the alignment step, and we refer to \[18,31,33,34\] for comparisons of different algorithms. The choice of TopHat2 is motivated by the fact that the analysed tools suggest it, or its previous version \[28\], as aligner. Nevertheless, in general these methods only require the user to provide an alignment file. Therefore, any of the existing RNA-seq mappers can be used. The ability of an aligner to properly map the junction reads is important, since false negative junctions may prevent the reconstruction of some isoforms, while false positive junctions can lead to false isoform identification. We also note that some methods, for example \[35\], align reads to the transcriptome to better map the (known) splice junctions. Others, such as \[29\], implement hybrid approaches using both the transcriptome and the genome.

Once the read alignment has been performed, the inference can be carried out at different biological levels. Quantification of multiple isoforms is more complicated than that of single events (i.e., exons, junctions or genes), since different isoforms of the same gene (or isoforms that insist on the same genomic locus) share a great part of their sequences through common exons and junctions. Moreover, identification and quantification problems are affected by both positional and sequence-content biases present in RNA-seq data and by several other - still not fully understood - sources of experimental bias. The differences among the methods mostly depend on the way they model the reads and the way they account for the different sources of bias.

In principle, RNA-seq data (i.e., observed coverage and splice junctions) can be modeled as a linear combination of isoforms.
Therefore, the problem can be seen as a deconvolution problem \[36,37\] with expression levels as weights and isoforms as convolution kernels. Under such a formalism, the isoform expression can be estimated either by using the "maximum likelihood principle" or by using similar statistical optimizations. Unfortunately, the design matrix that describes the isoform structures is unknown (or at least not completely known) and potentially very large. Therefore, the problem can be treated as a two-step procedure where, first, a set of candidate isoforms is identified, and then the inference is made on such a set. The isoform identification step is crucial, since the rest of the inference is carried out on the basis of this result. On the other hand, it is also possible to perform the two steps simultaneously, see \[38\]. Moreover, because of the large number of candidate isoforms, the problem becomes ill-posed. Therefore, penalties have to be used to encourage sparse solutions and avoid over-fitting the data.

For instance, the well-known Lasso-type penalty was used in \[21,39\]. This penalty is sub-optimal, since it does not take into account that all abundances are non-negative and that their sum is constrained. The penalty in \[22\] is one attempt to explicitly use such constraints and reinforce the sparsity. In other cases, see for example \[20\], sparseness is achieved using post-filtering steps to reduce the number of candidate isoforms.

It should be noted that several methods proposed for studying isoforms \[36,40,41\] do not perform the identification step explicitly. Indeed, they require the user to provide such a-priori knowledge. This is usually done either in terms of an annotation file (in .GTF or .BED format) that can be downloaded from some database (e.g., \[42\]), or as a preliminary result of some tool for transcript reconstruction, such as in \[43\]. In this context, the inference is often limited to the easier problem of quantifying only those isoforms that are contained in the annotation, rather than identifying novel isoform structures. Despite the availability of several methods that allow both isoform reconstruction and quantification, we consider it useful to include those approaches since, once the annotation becomes complete, they will be back in competition. Moreover, when provided with a list of candidate isoforms obtained from some assembly procedure, such methods claim to return accurate quantifications. An example of this idea is given by RSEM \[19\], which is now used as the quantification step combined with Trinity \[26\].

More generally, for methods performing the identification step, isoform reconstruction can be carried out by using two other philosophical approaches. In the first one, the algorithm is driven by an annotation (which represents the available information at the state of the art). In the second case, all isoforms are reconstructed ab initio (or fully data driven), mainly using graph theory. These models must be used in combination with some (heuristic) approaches in order to make the graph optimization feasible, due to the large number of potential transcripts coming from a splicing graph. Moreover, it often occurs that the same method can use different rationales according to the way it is used, see \[20,23,43\].
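As a toy illustration of the linear-model view sketched above - not the implementation of any of the tools compared below - the following snippet estimates non-negative isoform abundances from a hypothetical, known design matrix. Since an exact L1 penalty on non-negative weights adds a linear term that plain least squares cannot encode, a ridge-like augmentation is used here as a stand-in shrinkage; the real tools use Lasso-type penalties or EM algorithms instead.

```python
import numpy as np
from scipy.optimize import nnls

def estimate_abundances(X, y, lam=1.0):
    """X[i, j]: expected contribution of isoform j to observed segment or
    junction count y[i]; returns the non-negative mixture weights."""
    n_iso = X.shape[1]
    # Augment the system so that large abundances are penalized (ridge-like
    # shrinkage standing in for the sparsity penalty discussed above).
    X_aug = np.vstack([X, np.sqrt(lam) * np.eye(n_iso)])
    y_aug = np.concatenate([y, np.zeros(n_iso)])
    theta, _ = nnls(X_aug, y_aug)  # non-negative least squares fit
    return theta
```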
# Methods

In order to evaluate and compare the performance of the proposed methodologies, we used simulated data for which the true isoform structures and abundances are known.

A few RNA-seq simulators have been proposed in the last years (BEERS Simulator \[31\], RSEM Read Simulator \[35\], RNASeqReadSimulator \[44\]). In this work we used Flux Simulator \[45\] (available at ), which is a tool able to model most of the experimental steps. Indeed, it takes into account reverse transcription, fragmentation, adapter ligation, PCR amplification, gel segregation and sequencing.

We ran Flux Simulator with the default file of parameters suggested for *H. sapiens* (see Section 5.2 of the user manual at ), where we only changed the number of molecules (NB_MOLECULES), the number of reads (READ_NUMBER) and the read length (READ_LENGTH) to achieve the desired sparsity. We used the error model for reads of length 76 bp provided with the software, as suggested in the user manual, because the simulator scales the error profile to any chosen read length not explicitly supported by the model.

As output, Flux Simulator returns a .pro file containing, for each transcript, the number of simulated reads originating from it and its length in bp. Therefore, for each transcript the "true" abundance was evaluated in terms of FPKM (Fragments Per Kilobase of transcript per Million of mapped fragments), as sketched below. Transcripts not originating any simulated read were considered as not expressed.
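The conversion from the .pro counts to FPKM is the standard one; a minimal sketch (function and argument names are ours):

```python
def fpkm(fragments, transcript_length_bp, total_mapped_fragments):
    """Fragments Per Kilobase of transcript per Million mapped fragments,
    computed from the per-transcript counts and lengths in the .pro file."""
    kilobases = transcript_length_bp / 1e3
    millions = total_mapped_fragments / 1e6
    return fragments / kilobases / millions

# For example, 500 fragments on a 2 kb transcript in a library of
# 10M mapped fragments give fpkm(500, 2000, 1e7) == 25.0.
```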
## Simulation scheme

The human genome (Hg19, UCSC) was considered as the reference, and the release 69 annotation file was downloaded from the Ensembl database \[46\]. For simplicity, we took into account only transcripts coding for proteins (i.e., 142692 potential transcripts in total).

Two experimental set-ups were simulated using Flux Simulator: **Set-up 1**, in which all transcripts from chromosome 1 were considered as the Complete Annotation (CA), i.e., CA contains 13123 transcripts; **Set-up 2**, in which we considered a subsample of 85615 transcripts uniformly sampled from the list of all protein-coding transcripts as CA (i.e., CA contains 85615 transcripts). These scenarios were used to investigate the capability of the compared methodologies to deal with "small" and "large" genomes.

For **Set-up 1**, Flux Simulator generated a large dataset of (strand-specific) PE reads of 100 bp per side and a set of 3726 transcripts with positive FPKM; for **Set-up 2**, it generated a dataset of (strand-specific) PE reads of 75 bp per side and a set of 17032 transcripts with positive FPKM. The Fastq files of reads underwent a filtering process to remove those pairs that had one of the two sides smaller than 100 bp in **Set-up 1**, or one of the two sides smaller than 75 bp in **Set-up 2**, leading to 31177152 PE fragments for **Set-up 1** and 74365564 PE fragments for **Set-up 2**.

To investigate the depth effect, in **Set-up 1** the simulated fragments were sub-sampled to obtain six subsets of cardinality 20M, 10M, 5M, 1M, 0.5M and 0.25M, where M stands for 10^6^ reads. To study the read-length effect, for each of the six subsets the reads were trimmed to obtain analogous sets of PE fragments of length 75 bp and 50 bp per side. Finally, to account for the library-type effect, for each of the six sets of PE reads of 100 bp per side, only the left-mate reads were retained, to obtain analogous sets of SE reads of length 100 bp. Analogously, in **Set-up 2** the dataset of PE reads was sub-sampled to obtain a subset of cardinality 60M, from which a second set was generated by trimming the reads at 50 bp.

Summarizing, overall eighteen PE datasets and six SE datasets were obtained under **Set-up 1**, and two datasets under **Set-up 2**. Set-up 2 was also analyzed at depths of 40M and 20M (both at 75 bp and 50 bp); however, such results are not shown here for the sake of brevity.

To investigate the ability of the different methods to identify transcripts at different abundance levels, isoforms were divided into high, medium and low expression classes, where (analogously to iReckon) the low class is given by the isoforms whose true expression belongs to the lower 5% of the FPKM distribution, the high class by isoforms with expression larger than 74% of the FPKM distribution, and the remaining ones represent the medium class (a sketch of this split is given at the end of this subsection).

For the sake of completeness, we also generated an Incomplete Annotation (IA). IA was obtained from the corresponding CA by selecting 70% of the annotated transcripts (i.e., 9186 in **Set-up 1** and 59930 in **Set-up 2**). In **Set-up 1**, IA was aimed at mimicking a normal condition where most of the non-annotated isoforms are present at low abundance in the RNA sample. Therefore, IA contains about 70% of the non-expressed transcripts and the remaining 30% of expressed transcripts. In particular, the highly expressed transcripts represent 93% of the true high class, the moderately expressed transcripts represent 64% of the true medium class, and the lowly expressed transcripts represent only 30% of the true low class. On the contrary, in **Set-up 2**, IA was obtained by randomly sampling 70% of the isoforms from the corresponding CA, regardless of their expression, to mimic the situation where tissue-specific conditions or pathologies can alter the normal expression profile and produce novel transcripts present at any expression level.
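The expression-class split referenced above can be sketched as follows; the boundary handling (inclusive versus exclusive cut-offs) is our choice, not stated in the original description.

```python
import numpy as np

def expression_classes(true_fpkm):
    """Split transcripts into low/medium/high classes from their true FPKM.

    `true_fpkm` is assumed to be the array of positive FPKM values of the
    simulated transcripts; cut-offs follow the 5% / 74% quantiles above."""
    low_cut, high_cut = np.percentile(true_fpkm, [5, 74])
    return ["low" if x <= low_cut
            else "high" if x > high_cut
            else "medium"
            for x in true_fpkm]
```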
## Read alignment

The PE reads of each dataset were aligned to the human reference genome using TopHat2 \[29\] with the option **--library-type** fr-secondstrand turned on, to benefit from the strand information of the simulated reads.

In particular, TopHat2 can be used with the option **-G** turned on (i.e., by adding -G annotation.gtf to the command line). In this case, first, TopHat2 extracts the transcript sequences and uses Bowtie2 to align reads to this virtual transcriptome. Then, only the reads that do not fully map to the transcriptome are mapped to the reference genome, where potential novel splice sites are identified. The reads mapped on the transcriptome are converted to genomic mappings (spliced as needed) and merged with the novel mappings and junctions in the final TopHat2 output. By contrast, if the option is turned off (i.e., -G is not used), TopHat2 aligns the reads directly to the genome and searches for junctions with a data-dependent approach \[28\]. It is clear that providing the annotation file allows TopHat2 to better map the splice junctions and cover the entire exon body, and to reduce the number of FP junctions; see \[29\] for a more detailed discussion.

For the scope of our analysis, when the option -G was turned on, we ran the TopHat2 alignment on each set of PE reads twice: the first time we provided CA, the second time we provided IA. We also ran TopHat2 with the option -G turned off. For each set of SE reads in **Set-up 1**, we repeated the alignment with TopHat2 using the same scheme.

## Modes of action

In our study, the problem of isoform detection and quantification is analyzed under different modes of action:

**Mode 1)** The method assumes that annotation is available, and the algorithm is forced to quantify only those isoforms in the given annotation. Isoforms that are not present in the annotation are set to zero.

**Mode 2)** The method assumes that annotation is available, but it could be incomplete. Therefore, the algorithm uses the provided annotation as a guide in order to find potentially new isoforms. After all potential isoforms have been identified, their expressions are quantified.

**Mode 3)** The method assumes that no information is available. Therefore, all potential isoforms are computed from the data. Then, their expressions are quantified.

It is important to note that the modes of action can be combined with the different types of available annotation. In particular, under Modes 1 and 2 we can further distinguish the cases in which the available annotation is CA or IA. The case of Mode 1 with CA represents an ideal situation, where everything else is known. Such a scenario is rarely met given our current knowledge of biology, but it can be considered as a limit case for studies in the near future. The case of Mode 2 with IA represents a realistic scenario in current studies. Indeed, it is true that previous projects have disclosed most information, but data emerging from the literature show that such information represents only partial knowledge. Therefore, IA in Mode 2 represents a more realistic situation. For the sake of completeness, we observe that using the methods under Mode 1 with IA will not allow recovery of the transcripts that are not given. Analogously, all novel transcripts detected by any method in Mode 2 with CA will be false positives. In both cases, we aim to evaluate how such drawbacks can affect the estimation of the other isoforms. Finally, Mode 3 is considered to illustrate the expected results that one can obtain when studying newly sequenced organisms for which no previous information is available (or when the user does not want to use it) and all inference has to be carried out from the experimental data. Moreover, comparing our simulation scheme with the analyses carried out in \[24\], we observe that their results correspond to Mode 3 without annotation, except for iReckon and SLIDE, which were used similarly to our Mode 2.

Figure 1 illustrates the simulation pipeline built for the two experimental set-ups.

## Compared algorithms

In this paper, we assess the performance of five different methods: CEM, Cufflinks, iReckon, RSEM and SLIDE. All of them were used mostly with their default values and with the modes of action illustrated in Table 1. We observe that Cufflinks and CEM can perform all modes of action, while the other methods support only some of them. All methods were compared in **Set-up 1** for PE reads. All methods, except iReckon, were compared in **Set-up 1** for SE reads.
All methods, except SLIDE, were compared in **Set-up 2**.

Software used in the comparison

| **Name** | **Version** | **Modes of action** | **Link** | **Publication** |
|:---|:--:|:--:|:--:|:--:|
| TopHat2 | 2.07 | CA/IA, no annotation | | \[28,29\] |
| RSEM | 1.2.3 | 1 | | \[19,35\] |
| Cufflinks | 2.0.2 | 1,2,3 | | \[20\] |
| SLIDE | May 7th, 2012 | 1,2 | | \[21\] |
| CEM | 0.9.1 | 1,2,3 | | \[23\] |
| iReckon | 1.0.7 | 2 | | \[22\] |

Table 1 shows detailed information about the software used during the study.

### CEM

CEM \[23\] is a recent command-line program written in C++ and Python, developed by the authors of IsoLASSO \[39\], of which it constitutes a significant improvement. Its logic is very similar to that of Cufflinks. Indeed, the only required argument is the sam/bam alignment file. In this case, it executes Mode 3. The assembly problem is solved via a *connectivity graph*, which is more general than the *overlap graph* implemented in Cufflinks. By using the optional parameter **-x**, the user can specify the annotation file (in BED format) and execute Mode 1 or Mode 2. If **--forceref** is turned on (i.e., -x annotation.bed --forceref), CEM will run in Mode 1. If the option **--forceref** is turned off (i.e., -x annotation.bed), the existing gene annotation will be incorporated into the estimation process as a guide, from which CEM assembles new isoforms. Regardless of the mode of action, the estimation of transcript abundance is carried out by minimizing a lasso-penalized squared data-fit loss, where the data fit models the coverage in each segment as a Poisson distribution whose intensity is proportional to the mixture of abundances of the isoforms that insist on the same segment. With respect to this point, the main difference between CEM and its parent, IsoLASSO, consists in the algorithm used to perform the minimization: CEM uses the Expectation-Maximization (EM) algorithm instead of quadratic programming. As a consequence, CEM is by far more efficient than IsoLASSO, and overall one of the most efficient algorithms in terms of computational cost. No explicit parameter is available for strand specificity. CEM supports both SE and PE datasets.

### Cufflinks

Cufflinks \[20\] is a popular software developed by the authors of TopHat and Bowtie, and is part of the Tuxedo pipeline \[15\]. It is a command-line tool, written in C++, where the only required argument is the sam/bam alignment file. In this case, it executes Mode 3. In particular, Cufflinks reduces the comparative assembly problem to a maximum matching problem in bipartite graphs and solves it by using the so-called *overlap graph* approach. On the contrary, when using the optional parameters **-G** (i.e., -G annotation.gtf) or **-g** (i.e., -g annotation.gtf), the user can execute Mode 1 or Mode 2, respectively. Cufflinks was also used with the option **-u** turned on, which enables an initial estimation procedure to assign more accurately those reads that map to multiple locations in the genome. Given the set of newly identified or annotated transcripts, for all modes of action the transcript abundance is estimated via a Maximum Likelihood approach, where the probability of observing each fragment is modeled as a linear function of the abundances of the transcripts that can originate the fragment. Because of linearity, the likelihood function has a unique maximum value, which Cufflinks finds via a numerical optimization algorithm.
Finally, we also observe that when the reads are aligned by TopHat in strand-specific mode, Cufflinks automatically treats the data as strand-specific (otherwise the library type has to be specified explicitly by the user). Cufflinks supports both SE and PE datasets.

### iReckon

iReckon is a Java program that implements the method presented in \[22\]. The inputs of iReckon are the aligned reads, the genome, the annotation and the raw reads themselves. The genome and the annotation have to be provided in an in-house format obtained from the Fasta and BED formats after conversion with Savant \[47\]. The original reads and the genome files are necessary because, after the construction of an enlarged transcriptome (i.e., all possible isoforms, including pre-mRNA and isoforms with retained introns) from the mapped reads, the estimation is performed by re-aligning the reads (using BWA 0.6.2 \[48\]) to the sequences of all possible isoforms. In principle, iReckon can execute in both Mode 1 and Mode 2. Nevertheless, we used only Mode 2 (iReckon's default approach), owing to a potential bug that appeared when executing version 1.0.7 with the option **-novel 0**, which should force the method to quantify only the transcripts in the annotation.

The main advantage of iReckon is that it directly models (and, to date, it is the only method with this feature) multiple biological and technical phenomena, including novel isoforms, intron retention, unspliced pre-mRNA, PCR amplification biases and multi-mapped reads. iReckon uses the EM algorithm with a new non-linear regularization penalty to accurately estimate the abundances of known and novel isoforms; the EM formulation is natural here because abundances behave like frequencies, being non-negative and summing to a normalization constant.

For large datasets and genomes, iReckon requires considerable memory and execution time; most of the running time is due to the re-alignment step. No additional parameter is available for strand specificity. iReckon supports only PE datasets.

### RSEM

RSEM \[19\] is a software package that implements the method originally presented in \[35\]. It is a command-line program, written mainly in C++, with contributions in Perl and R. In contrast to the previous methods, RSEM can only estimate isoform abundances given an annotation file, i.e., it works only under Mode 1. As input data, RSEM requires either the genome sequence together with the annotation (in Fasta and GTF formats) or the transcript sequences directly, plus the read file. As a first step, the transcript reference is built (function **rsem-prepare-reference**) and the reads are aligned directly to the transcript sequences using Bowtie; then abundances are estimated via an EM algorithm based on a generative statistical model that handles read-mapping uncertainty (function **rsem-calculate-expression**). In particular, it uses an iterative process to fractionally assign reads to each transcript, considering the probabilities of the reads being derived from each transcript and taking into account positional biases created by RNA-seq library preparation protocols. A further point of interest is that, although RSEM can quantify only the transcripts contained in the annotation file, if that file includes potential novel transcripts or ab initio assembled transcripts, these can be quantified as well. In this context, RSEM comes bundled with the Trinity software \[26\], where it is used to quantify the newly assembled transcripts. We used RSEM with the **--strand-specific** option activated. RSEM supports both SE and PE datasets.
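A minimal sketch of this two-step RSEM workflow is given below; the reference name, sample name and read files are hypothetical.

```python
import subprocess

# Step 1: build the transcript reference from genome + annotation.
subprocess.run(["rsem-prepare-reference", "--gtf", "CA.gtf",
                "genome.fa", "rsem_ref"], check=True)

# Step 2: align the reads with Bowtie and run the EM estimation.
subprocess.run(["rsem-calculate-expression", "--paired-end",
                "--strand-specific", "reads_1.fastq", "reads_2.fastq",
                "rsem_ref", "sample1"], check=True)
```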
### SLIDE

SLIDE (Sparse Linear modeling for Isoform Discovery and abundance Estimation) is a software package mainly written in Python (with some R scripts) that implements the statistical method described in \[21\]. It can be used either in Mode 1 (**--mode estimation**) or in Mode 2 (**--mode discovery**). In particular, in Mode 2, SLIDE defines all the possible 2^*n*^ − 1 transcripts obtained by enumerating the *n* sub-exons of each gene.

In contrast to the other methods, SLIDE does not work on the empirical read coverage; instead, it associates with each PE fragment four genomic locations, corresponding to the start and end positions of its 5' and 3' reads, and converts these positions into four sub-exon indexes. Then, it computes the fraction of pairs whose genomic locations span a given bin (i.e., a given combination of four sub-exons), for all sub-exon combinations. Subsequently, it uses a linear model to approximate the observed bin proportions in terms of the isoform proportions, which are the parameters to be estimated. SLIDE estimates isoform abundance through a non-negative least squares solution of the linear model. The design matrix models the conditional probability of sampling a PE read in a bin given that it comes from an isoform. A modified lasso-type approach is used to limit the number of non-null isoforms as well as to favor longer isoforms.

Finally, we observe that SLIDE was mainly designed for relatively small genomes, and its code has not been optimized. For this reason, we evaluated its performance only in the small-scale comparison (i.e., **Set-up 1**). In this case, we considered both the higher-confidence output, denoted "less", and the larger one, denoted "more", whose size is similar to that of the original annotation. No explicit parameter is available for strand specificity. SLIDE supports both SE and PE datasets.

## Novel isoform matching

All methods used in Mode 1 (i.e., Cufflinks -G, RSEM, CEM --forceref and SLIDE --mode estimation) directly estimate the FPKM of each isoform given in the annotation (CA or IA); therefore, the association between the estimated value and the true value is straightforward. On the contrary, methods used in Mode 2 or Mode 3 can discover (new) isoforms; therefore, their output needs to be further processed in order to properly associate the inferred isoforms with the true ones. For this purpose, Cuffcompare v2.1.1 (which is part of the Cufflinks suite) was used to associate the output of each considered method (usually a gtf file) with the true CA. In particular, a 'true' isoform name was associated with an assembled isoform whenever a complete match of the intron chain was observed (i.e., class code '=' in the .tmap file; see the user manual of \[20\] for details). This level of transcript matching is quite stringent and could be relaxed, since in some cases we noticed that the match in Modes 2 and 3 was achieved at a lower level of stringency (for example, assembled transcripts were contained in the true ones, or other class codes were returned by Cuffcompare). However, the choice of a stringent match does not change the conclusions, only the actual values of the performance indexes. Assembled transcripts that did not match an annotated one were classified as novel. In a few cases, several assembled isoforms were associated with the same annotated isoform; the estimated FPKM of the isoform was then evaluated as the sum of all the estimated FPKMs. Similarly to \[24\], for iReckon, transcripts with intron retention or unspliced events were not considered.
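A minimal sketch of this post-processing step is given below; the .tmap column names are assumed to follow the Cuffcompare documentation.

```python
from collections import defaultdict

def matched_fpkm(tmap_path):
    """Sum the FPKMs of assembled isoforms fully matching (class code
    '=', i.e., identical intron chain) the same annotated isoform."""
    totals = defaultdict(float)
    with open(tmap_path) as fh:
        header = fh.readline().rstrip("\n").split("\t")
        ref, code, fpkm = (header.index(c)
                           for c in ("ref_id", "class_code", "FPKM"))
        for line in fh:
            fields = line.rstrip("\n").split("\t")
            if fields[code] == "=":
                totals[fields[ref]] += float(fields[fpkm])
    return dict(totals)
```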
## Measures of performance

In order to measure the performance of the considered methods, we first evaluated their capability in isoform detection, in terms of true positives (TP), false positives (FP) and false negatives (FN), and then their accuracy in isoform estimation, in terms of estimation error.

For isoform detection, the following indicators were computed:

● **Recall** (aimed at measuring the fraction of truly expressed isoforms that is retrieved), defined as

$$\text{recall} = \frac{\textit{TP}}{\textit{TP} + \textit{FN}} = \frac{\left| {\left\{ {\overset{\hat{}}{\textit{FPKM}} > 0} \right\} \cap \left\{ \textit{FPKM} > 0 \right\}} \right|}{\left| \left\{ \textit{FPKM} > 0 \right\} \right|}.$$

● **Precision** (aimed at measuring the fraction of predicted expressed isoforms that are truly expressed), defined as

$$\text{precision} = \frac{\textit{TP}}{\textit{TP} + \textit{FP}} = \frac{\left| {\left\{ {\overset{\hat{}}{\textit{FPKM}} > 0} \right\} \cap \left\{ \textit{FPKM} > 0 \right\}} \right|}{\left| \left\{ {\overset{\hat{}}{\textit{FPKM}} > 0} \right\} \right|},$$

where $\overset{\hat{}}{\textit{FPKM}}$ represents the estimated abundance of the isoform, *FPKM* is the true abundance and \|*S*\| stands for the cardinality of set *S*. If *FPKM* > 0 the isoform is truly expressed, while if *FPKM* = 0 the isoform is not expressed; analogously for the estimated values. Recall is thus a measure of completeness and precision a measure of accuracy. Recall was also evaluated within abundance classes (low, medium and high), defined as described in Section Simulation scheme.

As a global measure of performance we also considered the following:

● **F-Measure**, defined as

$$F = \frac{2 \ast \text{precision} \ast \text{recall}}{\text{precision} + \text{recall}}.$$

For evaluating the accuracy in abundance estimation, we distinguished three cases and considered the following:

● **Estimation Error** (aimed at quantifying the FPKM retrieval accuracy), defined as

$$\textit{error} = \begin{cases}
E_{1} = \dfrac{\overset{\hat{}}{\textit{FPKM}} - \textit{FPKM}}{\textit{FPKM}} & \text{if}\quad \textit{FPKM} > 0 \;\text{ and }\; \overset{\hat{}}{\textit{FPKM}} > 0, \\
E_{2} = \overset{\hat{}}{\textit{FPKM}} & \text{if}\quad \textit{FPKM} = 0 \;\text{ and }\; \overset{\hat{}}{\textit{FPKM}} > 0, \\
E_{3} = \textit{FPKM} & \text{if}\quad \textit{FPKM} > 0 \;\text{ and }\; \overset{\hat{}}{\textit{FPKM}} = 0,
\end{cases}$$

where *E*~1~ quantifies the (relative) accuracy in estimating the expressed isoforms that the method is able to identify, *E*~2~ quantifies the abundances assigned to FP isoforms and *E*~3~ quantifies the loss of expression for FN isoforms.
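The following minimal sketch, assuming two aligned vectors of true and estimated FPKM values (one entry per isoform, with zeros for unexpressed or undetected isoforms), shows how these indexes can be computed.

```python
import numpy as np

def detection_metrics(fpkm_true, fpkm_est):
    t = np.asarray(fpkm_true, dtype=float) > 0
    e = np.asarray(fpkm_est, dtype=float) > 0
    tp = np.sum(t & e)
    recall = tp / np.sum(t)
    precision = tp / np.sum(e)
    f = 2 * precision * recall / (precision + recall)
    return recall, precision, f

def error_indexes(fpkm_true, fpkm_est):
    t = np.asarray(fpkm_true, dtype=float)
    e = np.asarray(fpkm_est, dtype=float)
    both = (t > 0) & (e > 0)
    e1 = (e[both] - t[both]) / t[both]   # relative error on TP isoforms
    e2 = e[(t == 0) & (e > 0)]           # abundance assigned to FP isoforms
    e3 = t[(t > 0) & (e == 0)]           # expression lost on FN isoforms
    return e1, e2, e3
```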
The last, but not least, important aspect to be considered is the computational cost. Since the algorithms are implemented in different languages and can be run on different computational architectures, which may or may not benefit from parallelism, we believe that any precise quantification of the computational cost would not be fair. Therefore, this point is discussed only qualitatively in Section Results and discussions.

# Results and discussions

In the following, we first compare the methods in terms of their capability in isoform detection, and then in terms of their accuracy in isoform estimation. We stress that the goal of the comparison is not to produce a ranking of the considered methods, but to underline globally positive aspects, common weaknesses and open problems that might otherwise lead to over-optimistic conclusions about the performance of current methodology.

## Isoform detection

Here, we illustrate the results in terms of recall, precision and F-measure, considering the following effects: type of alignment, mode of action, type of annotation, type of library, abundance level, read length and sequencing depth. In order to investigate these effects, the same figures have to be inspected several times, evaluating a different aspect each time. To facilitate the comparison, we first describe the general structure of the figures and then focus on specific comparisons.

Figures 2, 3, 4 and 5 illustrate the results for precision and recall obtained in **Set-up 1** for the 100 bp-PE, 75 bp-PE, 50 bp-PE and 100 bp-SE libraries, respectively. For each of these cases, recall is further broken down with respect to the abundance level of the true isoforms, and the results are reported in Figures 6, 7, 8 and 9 in the same order. Finally, the F-measure is illustrated in Additional file 1: Figure S1, Additional file 2: Figure S2, Additional file 3: Figure S3 and Additional file 4: Figure S4, in the same order for each of the four cases.

In Figures 2, 3, 4 and 5 the results are depicted in four panels: A (upper left), B (upper right), C (bottom left) and D (bottom right). Panels A and B refer to the annotation driven alignment; panels C and D to the data driven alignment. Precision is depicted in panels A and C, recall in panels B and D. Within each panel, the plot is divided into six sub-blocks according to the mode of action and the type of annotation. In particular, the left blocks (Mode 1) contain RSEM, Cufflinks with the -G option turned on, CEM with the --forceref option turned on, and SLIDE with --mode estimation. The central blocks (Mode 2) contain iReckon, Cufflinks with the -g option turned on, CEM with the --forceref option turned off, and SLIDE with --mode discovery. Finally, the right blocks (Mode 3) contain only Cufflinks and CEM with all default options. Results obtained using CA/IA are depicted in the first/second row of each panel, respectively. In **Set-up 1**, different bars for the same method and mode of action correspond to the different sequencing depths (0.25M, 0.5M, 1M, 5M, 10M and 20M reads, respectively). Dashed horizontal lines are added to facilitate comparisons among different cases. Since RSEM does not depend on the alignment strategy, for comparative purposes panels A and C report the same precision, and panels B and D the same recall, for the same type of annotation. Analogously, within each panel, methods in Mode 3 show the same precision and recall in the upper and bottom rows, since they do not depend on the provided annotation when the alignment is data driven.

Figures 6, 7, 8 and 9 have a similar organisation. In particular, CA is used in panels A and C, IA in panels B and D.
Within each panel, the plot is divided into nine sub-blocks. Along the horizontal direction, the organization is analogous to that of Figures 2, 3, 4 and 5; in the vertical direction, recall is reported for the low, medium and high abundance classes, respectively.

Additional file 1: Figure S1, Additional file 2: Figure S2, Additional file 3: Figure S3 and Additional file 4: Figure S4 are also organized in four panels and illustrate the F-measure with respect to the sequencing depth. Panels A and B refer to the annotation driven alignment; panels C and D to the data driven alignment. CA is used in panels A and C, IA in panels B and D. To better distinguish the modes of action, results in Mode 1 are shown as continuous lines, those in Mode 2 as dashed lines and those in Mode 3 as dotted lines.

For the sake of completeness, we also provide Additional file 5: Figure S5, Additional file 6: Figure S6, Additional file 7: Figure S7 and Additional file 8: Figure S8, showing the numbers of TP and FP obtained under **Set-up 1** for the most extreme conditions, PE vs SE and 20M vs 0.25M. The figures are again organized in four panels, where panels A and B refer to the annotation driven alignment, panels C and D to the data driven alignment, and CA is used in panels A and C, IA in panels B and D. Within each panel, the bars show the numbers of TP (depicted in coral) and FP (depicted in aquamarine). A horizontal dashed line representing the number of truly expressed isoforms is added; the difference between the number of TP and this line therefore represents the number of FN.

To provide better insight into the capability of methods in Mode 2 (with IA) to identify isoforms that are not provided in the annotation, Figures 10 and 11 show the number of TP (with annotation driven and data driven alignment, respectively). In these figures, the TP are divided into those already present in IA (denoted IA) and those not contained in IA (denoted No IA); the latter are further divided according to the true expression level. Finally, Figure 12 illustrates the performance of Cufflinks and CEM (Mode 1) when a suitable threshold is applied to set isoforms with very low estimated FPKM to zero. The figure compares precision, recall and F-measure with the corresponding indexes observed for the same methods in Mode 2.

Additional file 9: Figure S9, Additional file 10: Figure S10, Additional file 11: Figure S11, Additional file 12: Figure S12, Additional file 13: Figure S13 and Additional file 14: Figure S14 show the results for **Set-up 2**. In particular, Additional file 9: Figure S9, Additional file 10: Figure S10, Additional file 11: Figure S11 and Additional file 12: Figure S12 are devoted to precision and recall; Additional file 13: Figure S13 is the analogue of Figure 12, and Additional file 14: Figure S14 is the analogue of Additional file 5: Figure S5, Additional file 6: Figure S6, Additional file 7: Figure S7 and Additional file 8: Figure S8.

All methods were evaluated under both set-ups, except SLIDE, which owing to its high computational cost was evaluated only under **Set-up 1**, and iReckon, which was evaluated only on PE reads, since it does not support SE reads.
In Figures 1, 2, 3, 4, 5, 6, 7, 8 and 9, Additional file 1: Figure S1, Additional file 2: Figure S2, Additional file 3: Figure S3 and Additional file 4: Figure S4, and Additional file 9: Figure S9, Additional file 10: Figure S10, Additional file 11: Figure S11 and Additional file 12: Figure S12, Cufflinks is coloured red, CEM blue, SLIDE orange (when present), RSEM green and iReckon brown (when present).

RSEM shows relatively good performance when CA is provided, but its inference is limited to the annotated transcripts (or to a list of potential transcripts that the user has to provide). On the other hand, SLIDE shows worse performance than the others (in particular in Mode 2), suggesting that it may be better suited to genomes with lower complexity in terms of isoform structure.

Overall, Set-ups 1 and 2 show qualitatively similar results. The best F-measure was about 0.78 in **Set-up 1** and 0.75 in **Set-up 2**; in both cases, the best performance was achieved by RSEM in the CA case.

Looking at Figures 2, 3, 4 and 5, precision rarely exceeds 0.6–0.7 and is often below these values, meaning that all methods produce a rather large number of FP isoforms (remarkably, even when the true CA is provided and the methods are forced to work in Mode 1). This drawback is also confirmed by inspection of Additional file 5: Figure S5, Additional file 6: Figure S6, Additional file 7: Figure S7 and Additional file 8: Figure S8, where we often observe a high number of FP (compared to TP) for the same set-up. These poor results may be due to an insufficiently strong penalty term (or an insufficiently strong post-filtering step), which should set non-significant isoforms to zero.

The best precision is achieved by Cufflinks in Mode 2 with CA (at the price of lower power). Surprisingly, Cufflinks (Mode 1) shows worse precision. However, we noticed that most of the isoforms detected as present by Cufflinks (Mode 1) were estimated with extremely low expression. Therefore, if we filter out these isoforms, the precision in Mode 1 and Mode 2 becomes comparable (see Figure 12, left panel), as expected. For example, by setting to zero all isoforms with estimated $\overset{\hat{}}{\textit{FPKM}} < 10^{- 5}$, the Cufflinks (Mode 1) precision increases to 0.59 without significantly reducing the recall (analogous behaviour is observed with 10^−1^ as threshold). Indeed, with such a threshold on the expressed isoforms, the F-measure of Cufflinks (Mode 1) becomes better than that of Cufflinks (Mode 2), as expected. The tendency to produce a high number of FP with very low expression levels was also observed for CEM (Mode 1), but with a much less pronounced effect (see Figure 12, right panel). The same effect occurs for the other conditions of depth, read length and library type (data not shown).
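A minimal sketch of this post-filtering step is given below; it can be combined with the detection_metrics() function sketched earlier, and the threshold value follows the example above.

```python
import numpy as np

def apply_threshold(fpkm_est, threshold=1e-5):
    """Set isoforms with very low estimated FPKM to zero before the
    detection indexes are recomputed."""
    filtered = np.asarray(fpkm_est, dtype=float).copy()
    filtered[filtered < threshold] = 0.0
    return filtered
```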
Overall, recall is above 0.80 when methods are used in Mode 1 with CA (and above 0.6 with IA), while recall does not exceed 0.3 in Mode 3. Recalls in Mode 1 are mostly satisfactory (except for SLIDE). The actual recall values observed for Modes 2 and 3 depend on the stringency of the match between newly identified isoforms and existing ones. However, the low recall values in Mode 3 show that the performance of the methods in recovering the precise isoform structure is still not satisfactory.

Figures 6, 7 and 8 for PE and Figure 9 for SE datasets illustrate how well the methods are able to detect highly expressed isoforms (remarkably, recall is mostly higher than 0.50 even for methods in Mode 3). At the same time, they show that the major problem arises in identifying lowly expressed isoforms (recall rarely exceeds 0.2 in either Mode 2 or Mode 3). In particular, we observe that lowly expressed isoforms are identified mainly in Mode 1 with CA. Methods in Mode 2 with IA are able to identify lowly expressed isoforms mainly if these were already present in IA; on the contrary, for moderately and highly expressed isoforms they are capable of detecting isoforms not provided in IA (see Figures 10 and 11). The figures show good results for Cufflinks and iReckon for the medium and high expression classes, at high depth and regardless of the alignment type.

The performance clearly drops when the depth decreases; in that case, it becomes almost impossible to recover isoforms that were not provided in the annotation from the beginning.

As mentioned above, **Set-up 2** leads to analogous overall considerations; see Additional file 9: Figure S9, Additional file 10: Figure S10, Additional file 11: Figure S11, Additional file 12: Figure S12, Additional file 13: Figure S13 and Additional file 14: Figure S14.

### Effect of alignment

In order to evaluate the effect of the alignment, we compared the results obtained by the same method and mode of action across the different alignment strategies; i.e., we investigated the effect on precision by comparing (in Figures 2, 3, 4 and 5) panel A versus panel C, and on recall by comparing panel B versus panel D, for **Set-up 1**. To evaluate the global effect on the F-measure, it is sufficient to carry out the analogous comparison in Additional file 1: Figure S1, Additional file 2: Figure S2, Additional file 3: Figure S3 and Additional file 4: Figure S4. Clearly, RSEM is not affected by the alignment, since it maps the reads directly to the transcriptome; therefore, its results differ only with respect to CA and IA within panels A and B.

Overall, the comparisons show no appreciable difference in precision for Mode 1, a slight difference in Mode 2, where precision increases with the quality of the mapping, and more remarkable improvements in Mode 3. Analogous results are observed for recall. Moreover, the differences in both precision and recall decrease as the sequencing depth increases. As a global measure, we observe that the F-measure for 20M 100 bp-PE is about 0.35 and 0.27, respectively, for Cufflinks and CEM in Mode 3 with reads aligned without any annotation; it becomes about 0.40 and 0.34, respectively, when the reads are aligned providing either IA or CA (see Additional file 1: Figure S1). To explain this effect, we inspected the alignment files and observed that the main differences lie in the number of mapped junctions. For example, the 20M 100 bp-PE dataset identified 14500 junctions when CA was provided and 14479 junctions without annotation, a negligible loss due to the alignment; however, with 0.25M 100 bp-PE the number of mapped junctions was 9173 with CA and dropped to 8047 without annotation. Analogously, for 20M 50 bp-PE, 14290 junctions were detected when CA was provided and only 12643 without annotation; with 0.25M 50 bp-PE, the number of mapped junctions was 8241 with CA and dropped to 5990 without annotation. The improved performance observed in Mode 3 when annotation is provided during alignment can thus be explained by the fact that data driven methods benefit greatly from the presence of informative junctions, even though the current performance cannot be considered satisfactory overall.
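Junction counts such as those above can be extracted directly from the aligner output; a minimal sketch follows, assuming the TopHat2 convention of one junction per line (after a track header) in the junctions.bed file of each output directory (directory names follow the earlier alignment sketch).

```python
def count_junctions(bed_path):
    """Count junction records in a TopHat2 junctions.bed file,
    skipping the leading 'track' header line."""
    with open(bed_path) as fh:
        return sum(1 for line in fh if not line.startswith("track"))

print("with CA:", count_junctions("th2_CA/junctions.bed"))
print("no annotation:", count_junctions("th2_none/junctions.bed"))
```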
The same conclusions apply to **Set-up 2**, comparing Additional file 9: Figure S9 and Additional file 10: Figure S10. In particular, in Mode 3 the F-measure increases from 0.18 for both Cufflinks and CEM in the case of data driven alignment up to 0.24 and 0.31, respectively, in the case of CA-based alignment (data not shown).

### Effect of modes of action

In order to evaluate the effect of the modes of action, the results have to be compared across the horizontal blocks of each panel of each figure; see Figures 2, 3, 4, 5, 6, 7, 8 and 9 for **Set-up 1** and Additional file 9: Figure S9, Additional file 10: Figure S10, Additional file 11: Figure S11 and Additional file 12: Figure S12 for **Set-up 2**. Moreover, the continuous lines have to be compared with the dashed and dotted lines in Additional file 1: Figure S1, Additional file 2: Figure S2, Additional file 3: Figure S3 and Additional file 4: Figure S4.

As expected, all methods in Mode 1 have better global performance than those in Mode 2, and perform significantly better than the methods in Mode 3; the latter remain a major challenge. Moreover, the performance of methods in Mode 1 is much less affected by the depth and the read length, although it still benefits from PE reads when the coverage is not sufficiently high.

As previously mentioned, Figures 2, 3, 4 and 5 show that the precision cannot be considered fully satisfactory and that the number of FP is high across all modes of action. In particular, Cufflinks (Mode 1) still shows poor precision. However, in this case we have already noted that most of the FP isoforms were estimated with a very low expression value, and filtering out the low $\overset{\hat{}}{\textit{FPKM}}$ values restores the performance (see Figure 12). From Figures 2, 3, 4 and 5, we observe that all methods in Mode 1 using CA have very good recall, independently of the alignment option. Methods in Mode 2 reach good or at least sufficient recall, except for SLIDE, which shows the worst behaviour; in particular, it seems that SLIDE (Mode 2) does not take advantage of the annotation and behaves similarly to the methods in Mode 3.

To fully understand the effect of the mode of action, it is important to inspect Figures 6, 7, 8 and 9. Here, one can immediately observe that, while highly expressed isoforms are detectable in all modes of action (in particular at high sequencing depth), the scenario changes dramatically for moderately and lowly expressed ones.
The latter are recovered only in Mode 1 with CA and with sufficient sequencing depth.

The same conclusions apply to **Set-up 2**, comparing Additional file 9: Figure S9, Additional file 10: Figure S10, Additional file 11: Figure S11 and Additional file 12: Figure S12.

It follows that, while the promise of reconstructing the whole transcriptome from RNA-seq data alone, without any annotation, is intriguing, researchers have to be very careful about trusting the results obtained in Mode 3 (with current experimental protocols and computational methodologies analogous to those considered in the present work). Consequently, for complex genomes, they should consider complementing the results with other sources of experimental or computational evidence.

### Effect of the annotation

In order to evaluate the effect of the annotation, precision and recall in Figures 2, 3, 4 and 5 have to be compared between the two horizontal blocks in each panel A, B, C and D of the same figure (where the upper row refers to CA and the lower row to IA), limiting the attention to Modes 1 and 2. In Figures 6, 7, 8 and 9 and in Additional file 1: Figure S1, Additional file 2: Figure S2, Additional file 3: Figure S3 and Additional file 4: Figure S4, panels A have to be compared with the corresponding panels B, and panels C with the corresponding panels D.

This effect is very important considering that CA is usually unknown in a real experiment. Therefore, the results obtained with CA represent a sort of optimal performance, while the results obtained under IA can be seen as a measure of what one can achieve with the current biological knowledge.

As expected, moving from CA to IA, regardless of the type of alignment, the performance degrades, with a more evident loss in Mode 1 than in Mode 2. The difference observed between the two modes of action allows us to illustrate the discovery power of methods in Mode 2. To this end, Figures 10 and 11 show the number of transcripts that were not present in IA but were still recovered by methods in Mode 2 driven by IA. As mentioned before, this "recovery" effect is mainly concentrated on isoforms with high or moderate expression.

In general, the best F-measure observed in Mode 1 with CA is about 0.8 for the highest depth at 100 bp-PE, and it drops to about 0.6 when IA is used; in Mode 2, the best F-measure is above 0.6 with CA and remains close to 0.6 when IA is used (compare panels A and B of Additional file 1: Figure S1). Obviously, this difference depends on the closeness between CA and IA.

Inspecting Figures 2, 3, 4 and 5, it is possible to observe that the loss of performance occurs in terms of both precision and recall. While the loss in the capability to detect true isoforms is expected with IA, the loss in precision is more subtle. It can be explained by the fact that the methods tend to explain all the reads by assigning them to some isoform; therefore, other isoforms are used to accommodate the lack of fit, and isoforms with few reads (and low expression) are often created in the absence of a sufficiently strong penalty term or post-filtering procedure.

The same issues apply to **Set-up 2**, comparing Additional file 9: Figure S9, Additional file 10: Figure S10, Additional file 11: Figure S11 and Additional file 12: Figure S12, where IA was randomly generated from CA.
Therefore, the main conclusion does not depend strongly on the precise content of IA. In conclusion, we stress that the main limitation of methods in Mode 2 is their inability to recover lowly expressed isoforms unless these are already annotated in IA. By contrast, such methods are able to detect moderately and highly expressed isoforms even in the absence of their annotation.

### Effect of library type

In order to evaluate the effect of the library type, we have to compare the PE results with the SE results. This comparison was carried out in **Set-up 1**, for the 100 bp read length and all depths. Therefore, for precision and recall we have to compare the corresponding panels of Figures 2 and 6 with those of Figures 5 and 9, respectively; for the F-measure we have to compare Additional file 1: Figure S1 with Additional file 4: Figure S4; and for the numbers of TP and FP we have to compare Additional file 5: Figure S5 with Additional file 6: Figure S6 for the highest depth (20M) and Additional file 7: Figure S7 with Additional file 8: Figure S8 for the lowest depth (0.25M).

We observe that PE reads show better indexes than SE reads at the same depth (at the price of a higher experimental cost, not evaluated here). However, the differences are almost negligible in Mode 1 (in particular at high coverage), and they are relatively small for methods in Mode 2 when CA is provided. The gap increases in Mode 2 when IA is provided: in this case, Figures 10 and 11 show that PE reads allow more of the isoforms absent from IA to be correctly detected. The gain is small at high depth and becomes more evident at low depth. The advantage of PE reads is also evident in Mode 3. To better appreciate the differences, we observe that 20M 100 bp-SE allows 13918 junctions to be mapped when CA is provided and 13564 in the absence of annotation, while 0.25M 100 bp-SE allows 8086 junctions to be mapped with CA and 6851 in the absence of annotation.

In conclusion, the main advantage of PE over SE reads is the better capability to recover novel isoforms not provided in the annotation. On the other hand, we also observe that 100 bp SE reads are quite long; with shorter SE reads, the advantages of PE would be more pronounced.

### Effect of isoform abundance

In order to evaluate the effect of isoform abundance, we have to inspect Figures 6, 7, 8 and 9, which provide deeper insight into the recall illustrated in Figures 2, 3, 4 and 5. The index is now expanded into three rows, depending on the expression level of the corresponding true isoforms. From the figures, we can see that the isoform detection capability strongly depends on the expression level. In general, highly expressed isoforms are easily detected by methods in Mode 1, while methods in Mode 2 and Mode 3 show a lower (but still acceptable) detection capability. Moderately and lowly expressed isoforms are detected well, or at least at an acceptable rate, in Mode 1; however, they are not well identified in Mode 2 and are often completely lost in Mode 3.
Integrating these observations with Figures 10 and 11, we see that lowly expressed isoforms are mostly detected only if they are provided in the annotation.

Additional file 11: Figure S11 and Additional file 12: Figure S12 illustrate the recall with respect to the isoform abundance class in **Set-up 2**, leading to analogous conclusions.

### Effect of sequencing depth

In order to evaluate the effect of the sequencing depth, we have to compare the different bars of the same colour within each block and panel of Figures 2, 3, 4, 5, 6, 7, 8 and 9, and the behaviour of each coloured line in the F-measure plots reported in Additional file 1: Figure S1, Additional file 2: Figure S2, Additional file 3: Figure S3 and Additional file 4: Figure S4.

In most cases, the performance worsens as the sequencing depth decreases, but the loss is less evident than one might expect. In particular, it is almost negligible for methods in Mode 1 with CA and more evident for methods in Mode 2 or 3. The gap increases for data driven alignment and in the absence of CA. Indeed, as the depth increases we observed lower precision and simultaneously higher recall. The loss in precision can be explained by the large number of FP isoforms, often with low expression values. More generally, we noticed that once a minimum depth is reached (for **Set-up 1**, this level is estimated at about 1M for PE reads), further increases in depth only trade off precision against recall, without affecting the overall global performance. Conversely, below this saturation level the global performance drops.

Similarly, comparing Additional file 5: Figure S5 and Additional file 6: Figure S6 with Additional file 7: Figure S7 and Additional file 8: Figure S8, it is possible to see the benefit, in terms of the total number of correctly identified isoforms, of increasing the depth from the extreme condition of 0.25M to 20M.

### Effect of read length

In order to evaluate the effect of the read length, we have to compare precision and recall in Figure 2 (100 bp-PE) with Figures 3 and 4 (75 bp-PE and 50 bp-PE, respectively), and Additional file 1: Figure S1 with Additional file 2: Figure S2 and Additional file 3: Figure S3 in terms of F-measure. We found, as expected, that long reads are preferable to short ones: with shorter reads we observed an overall loss of performance in terms of both recall and precision. We quantified the loss in F-measure at about 5% for methods in Mode 1 (the best performance, achieved by RSEM with CA, is about 0.78 for 100 bp-PE and becomes 0.75 for 75 bp-PE and 0.73 for 50 bp-PE). A more significant loss was observed when executing the methods in Modes 2 and 3, especially at low depth.

We also observe that, in our experimental design, short reads are obtained by trimming the long ones. Therefore, short reads generated in this way have a slightly better quality than reads of the same length generated directly from the error profile.
As a consequence, we expect that the real difference between short and long reads could be slightly larger than the one we have reported.

Additional file 9: Figure S9 and Additional file 10: Figure S10 provide analogous insights for **Set-up 2**.

## Isoform estimation

Here, we illustrate the results for the estimation error indexes *E*~1~, *E*~2~ and *E*~3~. Additional file 15: Table S1, Additional file 16: Table S2, Additional file 17: Table S3, Additional file 18: Table S4, Additional file 19: Table S5, Additional file 20: Table S6, Additional file 21: Table S7, Additional file 22: Table S8, Additional file 23: Table S9, Additional file 24: Table S10, Additional file 25: Table S11, Additional file 26: Table S12, Additional file 27: Table S13, Additional file 28: Table S14, Additional file 29: Table S15 and Additional file 30: Table S16 show the error statistics (median, 3rd quartile and maximum value) for these indexes in **Set-up 1**. In particular, the tables in Additional file 15: Table S1 to Additional file 22: Table S8 illustrate the results for PE, and those in Additional file 23: Table S9 to Additional file 30: Table S16 for SE. Within each group, the two most extreme depths are shown (i.e., 20M and 0.25M). Furthermore, the tables provide results both for annotation driven alignment (CA and IA) and for data driven alignment. Additional file 31: Table S17, Additional file 32: Table S18, Additional file 33: Table S19 and Additional file 34: Table S20 refer to **Set-up 2**. In particular, Additional file 31: Table S17 and Additional file 32: Table S18 describe the results for 60M 75 bp-PE (annotation driven alignment); the analogous cases for data driven alignment are shown in Additional file 33: Table S19 and Additional file 34: Table S20. In order to assess the performance of the methods in correctly estimating the isoform abundances, attention has to be focused mainly on the qualitative aspects of the error distributions rather than on the actual values shown in the tables.

Overall, both set-ups show a similar behaviour for all error types. All methods produce errors that have a strongly asymmetric, skewed distribution with a long right tail; this means that they fail to estimate a significant fraction of isoforms (see the results for the 3rd quartile and the maximum observed value). Moreover, within the same table, the same index also differs (sometimes by orders of magnitude) across methods and modes of action. Differences are observed not only in the most extreme value, but also in the median and the 3rd quartile; this means that the methods may provide very different results on a large fraction of isoforms. In brief, all tables indicate that the problem of obtaining reliable estimates is still open.
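The statistics reported in the tables can be obtained directly from the error vectors sketched earlier; a minimal sketch follows (whether signed or absolute values of E1 are summarized is our assumption, since the text reports only the statistic names).

```python
import numpy as np

def error_summary(errs):
    """Median, 3rd quartile and maximum of one error index, as
    reported in the supplementary tables (signed values assumed)."""
    errs = np.asarray(errs, dtype=float)
    return {"median": float(np.median(errs)),
            "3rd quartile": float(np.percentile(errs, 75)),
            "max": float(errs.max())}
```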
The loss of performance is not completely surprising, since estimation is carried out after isoform detection, without explicitly considering the uncertainty due to the identification step. This becomes clear when we compare the same method under different modes of action: each method uses an analogous statistical procedure for estimating the abundances, but the inference is carried out on different sets of isoforms and can therefore produce different estimates. To mitigate this drawback, confidence intervals for the estimates could be more reliable than point estimates.

In more detail, looking at the 3rd quartile in Additional file 15: Table S1 (i.e., 20M 100 bp-PE, CA driven alignment), the best result for *E*~1~ lies between 0.27 and 0.29 (CEM and Cufflinks in Mode 3, RSEM), for *E*~2~ it is about 0.005 (Cufflinks, Mode 1), and for *E*~3~ it lies between 0.22 and 0.25 (RSEM, CEM in Mode 1).

The low 3rd quartiles observed for *E*~3~ and *E*~2~ confirm that the problems arise mainly with lowly expressed isoforms, which are either not detected or not set to zero by filtering or penalization procedures. By contrast, the large extreme values observed for *E*~2~ and *E*~3~ indicate that the corresponding methods can get some estimates completely wrong (by several orders of magnitude). However, such failures are limited to a few isoforms when the corresponding 3rd quartile is low, and become more numerous when the right tail of the distribution is heavier.

The (relative) error *E*~1~ requires more attention: its 3rd quartile is at least about 0.27–0.29 in Additional file 15: Table S1 (with a median of about 0.07–0.15). This means that, even when the presence of an isoform is correctly detected, in more than 50% of the cases the estimation error is about 10% of the true value or more. Larger values of the median or the 3rd quartile indicate worse performance.

We stress that Additional file 15: Table S1 illustrates (in principle) the most favourable condition. Comparing it with Additional file 17: Table S3 (i.e., 0.25M 100 bp-PE, CA driven alignment), it is possible to evaluate the effect of the depth when the annotation is CA. Whereas precision and recall for methods in Mode 1 were not significantly affected, the error indexes were. In particular, the largest differences are observed for *E*~2~ and *E*~3~. In general, at low depth the quality of the estimates is quite poor. Analogous considerations hold for SE instead of PE; see Additional file 23: Table S9 and Additional file 25: Table S11 for the 20M SE and 0.25M SE cases, respectively.

The influence of the annotation can be deduced by comparing Additional file 15: Table S1 with Additional file 16: Table S2 (obtained using IA). The analogous comparison can be carried out for all pairs of corresponding tables (CA vs IA). In these cases the differences are larger, owing to the fact that the methods are not able to identify all isoforms (in particular, those not provided in IA).

Additional file 31: Table S17, Additional file 32: Table S18, Additional file 33: Table S19 and Additional file 34: Table S20 lead to similar considerations for **Set-up 2**, with more skewed error distributions.

## Considerations about the computational cost

Since the methods are implemented in different languages and were run in different environments, we found that a technical comparison of execution times would not be fair. Therefore, we briefly report only qualitative considerations.
RSEM and Cufflinks (in all modes of action) are sufficiently fast; CEM (in all modes of action) is the fastest. By contrast, iReckon is quite slow compared with the others and also produces very large temporary files. The current implementation of SLIDE is too slow to complete **Set-up 2** in a reasonable amount of time and was therefore considered only in **Set-up 1**.

# Conclusions

Our results show that the algorithms have good or acceptable performance in detecting the presence of isoforms (recall) when the annotation is provided, even if incomplete (Mode 1 and Mode 2, with both CA and IA). On the contrary, the purely data driven methods (Mode 3) are still not satisfactory, even when the reads are carefully aligned using the annotation during the mapping phase. The results obtained in Mode 3 are in agreement with those observed in \[24\].

The performance of all methods is strongly influenced by the expression level of the isoforms. Highly and moderately expressed isoforms are identified with good accuracy. On the contrary, lowly expressed isoforms are still problematic, even when the exact annotation is provided. In particular, we observed that lowly expressed isoforms that are not present in the annotation are not recovered by methods in Mode 2. Conversely, Mode 2 approaches are able to recover part of the moderately expressed and most of the highly expressed transcripts even when these are not provided in the annotation.

Results improve with increasing sequencing depth and read length. However, for the depth there is a saturation limit (with current computational methodologies) beyond which optimal reconstruction cannot be achieved even at high coverage. In fact, when the coverage increases, the number of TP increases as expected, but the number of FP increases as well. In particular, we noticed that some methods tend to identify several (novel and annotated) isoforms as present at very low expression levels. In most cases such isoforms are FP; as a consequence, precision is often not satisfactory.

Realistic estimation of isoform abundance is also very problematic. Similarly to \[24\], our results show that the estimation error is very skewed. The error distribution up to the third quartile is acceptable, at least for E2 (connected to FP) and E3 (connected to FN), but it has a very long right tail. On the contrary, the distribution of the (relative) error E1 shows that more than 50% of the isoforms are estimated with more than 10% error. In this respect, we should consider that estimating isoform abundances is no easier than identifying the isoforms: the estimates are obtained for the set of isoforms previously identified, but the methods usually do not take this uncertainty into account, so the performance can be much worse. We therefore conclude that even the estimation of correctly identified isoforms is still challenging.

As observed in \[24\], the complexity of higher eukaryotic genomes, such as the human one, imposes severe limitations on the performance of all quantification and estimation methods, and these are likely to remain limiting factors in the analysis of current-generation RNA-seq experiments. Such limitations can be partially overcome by providing existing annotations but, more generally, they require further research and new techniques from both the methodological and the experimental points of view.

Finally, it should be noted that all the methods considered here and in \[24\] work with a single RNA-seq sample.
Recent works \[49,50\] propose multiple-sample approaches to achieve more precise identification and estimation of isoform expression. The availability of such approaches, whose performance has still to be validated, suggests that future studies should investigate a larger variety of (homogeneous) samples at a lower depth per sample in order to obtain more confident transcript predictions; see \[50\].

# Competing interests

The authors declare that they have no competing interests.

# Authors' contributions

All authors participated in conceiving the analysis, setting up the code for the comparisons and writing the manuscript. All authors read and approved the submitted manuscript.

# Supplementary Material

###### Additional file 1

**Figure S1.** F-measure in Set-up 1 for 100 bp-PE. Panels **A** (upper left) and **B** (upper right) depict the F-measure versus the sequencing depth for each compared method when the alignment is annotation driven, using CA and IA, respectively. Panels **C** (bottom left) and **D** (bottom right) are analogous to panels **A** and **B**, when the alignment is data driven. The figure refers to Set-up 1 and 100 bp-PE. Within each panel, methods in Mode 1 are depicted with continuous lines, methods in Mode 2 with dashed lines and methods in Mode 3 with dotted lines. When the alignment is annotation driven, the same annotation provided during the alignment was used for Modes 1 and 2.

Click here for file

###### Additional file 2

**Figure S2.** F-measure in Set-up 1 for 75 bp-PE. Analogous to Additional file 1: Figure S1, but for Set-up 1 and 75 bp-PE.

Click here for file

###### Additional file 3

**Figure S3.** F-measure in Set-up 1 for 50 bp-PE. Analogous to Additional file 1: Figure S1, but for Set-up 1 and 50 bp-PE.

Click here for file

###### Additional file 4

**Figure S4.** F-measure in Set-up 1 for 100 bp-SE. Analogous to Additional file 1: Figure S1, but for Set-up 1 and 100 bp-SE.

Click here for file

###### Additional file 5

**Figure S5.** True Positives and False Positives in Set-up 1 for 20M 100 bp-PE. Panels **A** (upper left) and **B** (upper right) depict the TP (coral) and FP (aquamarine) bars for the compared methods when the alignment is annotation driven (CA and IA, respectively). Panels **C** (bottom left) and **D** (bottom right) are analogous to panels **A** and **B**, when the alignment is data driven. The figure refers to Set-up 1 and 20M 100 bp-PE. The true number of expressed transcripts (i.e., 3726) is added as a dashed horizontal line to each panel; the difference between the TP and the horizontal line represents the FN.

Click here for file

###### Additional file 6

**Figure S6.** True Positives and False Positives in Set-up 1 for 20M 100 bp-SE. Analogous to Additional file 5: Figure S5, but for Set-up 1 and 20M 100 bp-SE.

Click here for file

###### Additional file 7

**Figure S7.** True Positives and False Positives in Set-up 1 for 0.25M 100 bp-PE. Analogous to Additional file 5: Figure S5, but for Set-up 1 and 0.25M 100 bp-PE.

Click here for file

###### Additional file 8

**Figure S8.** True Positives and False Positives in Set-up 1 for 0.25M 100 bp-SE. Analogous to Additional file 5: Figure S5, but for Set-up 1 and 0.25M 100 bp-SE.

Click here for file

###### Additional file 9

**Figure S9.** Precision and Recall bar-plot in Set-up 2 for 75 bp-PE.
Analogous to Figure 2, but for Set-up 2 and 60M 75 bp-PE.

Click here for file

###### Additional file 10

**Figure S10.** Precision and Recall bar-plot in Set-up 2 for 50 bp-PE. Analogous to Figure 2, but for Set-up 2 and 60M 50 bp-PE.

Click here for file

###### Additional file 11

**Figure S11.** Recall bar-plot versus isoform abundance in Set-up 2 for 60M 75 bp-PE. Analogous to Figure 5, but for Set-up 2 and 60M 75 bp-PE.

Click here for file

###### Additional file 12

**Figure S12.** Recall bar-plot versus isoform abundance in Set-up 2 for 60M 50 bp-PE. Analogous to Figure 5, but for Set-up 2 and 60M 50 bp-PE.

Click here for file

###### Additional file 13

**Figure S13.** Precision, Recall and F-measure when introducing thresholds (Set-up 2). Analogous to Figure 12, but for Set-up 2, 60M 75 bp-PE and the alignment with CA.

Click here for file

###### Additional file 14

**Figure S14.** True Positives and False Positives in Set-up 2 for 60M 75 bp-PE. Panels **A** (upper left) and **B** (upper right) depict the TP (coral) and FP (aquamarine) bars for the compared methods when the alignment is annotation driven (CA and IA, respectively). Panels **C** (bottom left) and **D** (bottom right) are analogous to panels **A** and **B**, when the alignment is data driven. The figure refers to Set-up 2 and 60M 75 bp-PE. The true number of expressed transcripts (i.e., 17032) is added as a dashed horizontal line to each panel; the difference between the TP and the horizontal line represents the FN.

Click here for file

###### Additional file 15

**Table S1.** E1, E2 and E3 in Set-up 1 with annotation driven alignment (CA) for 20M 100 bp-PE. Median, 3rd quartile and maximum value observed for the error indexes E1, E2 and E3 in Set-up 1, 20M PE reads of 100 bp. Tables are divided into blocks: the left block is for methods used in Mode 1, the middle block for methods used in Mode 2 and the right block for methods used in Mode 3. The upper rows refer to E1, the middle ones to E2 and the bottom ones to E3. CA was provided during the alignment.

Click here for file

###### Additional file 16

**Table S2.** E1, E2 and E3 in Set-up 1 with annotation driven alignment (IA) for 20M 100 bp-PE. Analogous to Additional file 15: Table S1, but with IA provided during the alignment.

Click here for file

###### Additional file 17

**Table S3.** E1, E2 and E3 in Set-up 1 with annotation driven alignment (CA) for 0.25M 100 bp-PE. Analogous to Additional file 15: Table S1, but for 0.25M PE reads.

Click here for file

###### Additional file 18

**Table S4.** E1, E2 and E3 in Set-up 1 with annotation driven alignment (IA) for 0.25M 100 bp-PE. Analogous to Additional file 15: Table S1, but for 0.25M PE reads and with IA provided during the alignment.

Click here for file

###### Additional file 19

**Table S5.** E1, E2 and E3 in Set-up 1, with data driven alignment and CA for 20M 100 bp-PE. Analogous to Additional file 15: Table S1, but with data driven alignment. CA was provided in Modes 1 and 2.

Click here for file

###### Additional file 20

**Table S6.** E1, E2 and E3 in Set-up 1, with data driven alignment and IA for 20M 100 bp-PE. Analogous to Additional file 15: Table S1, but with data driven alignment. IA was provided in Modes 1 and 2.

Click here for file

###### Additional file 21

**Table S7.** E1, E2 and E3 in Set-up 1, with data driven alignment and CA for 0.25M 100 bp-PE.
Analogous to Additional file 19: Table S5, but for 0.25M PE reads.

Click here for file

###### Additional file 22

**Table S8.** E1, E2 and E3 in Set-up 1, with data driven alignment and IA for 0.25M 100 bp-PE. Analogous to Additional file 20: Table S6, but for 0.25M PE reads.

Click here for file

###### Additional file 23

**Table S9.** E1, E2 and E3 in Set-up 1 with annotation driven alignment (CA) for 20M 100 bp-SE. Analogous to Additional file 15: Table S1, but for 20M SE reads.

Click here for file

###### Additional file 24

**Table S10.** E1, E2 and E3 in Set-up 1 with annotation driven alignment (IA) for 20M 100 bp-SE. Analogous to Additional file 16: Table S2, but for 20M SE reads.

Click here for file

###### Additional file 25

**Table S11.** E1, E2 and E3 in Set-up 1 with annotation driven alignment (CA) for 0.25M 100 bp-SE. Analogous to Additional file 17: Table S3, but for 0.25M SE reads.

Click here for file

###### Additional file 26

**Table S12.** E1, E2 and E3 in Set-up 1 with annotation driven alignment (IA) for 0.25M 100 bp-SE. Analogous to Additional file 18: Table S4, but for 0.25M SE reads.

Click here for file

###### Additional file 27

**Table S13.** E1, E2 and E3 in Set-up 1, with data driven alignment and CA for 20M 100 bp-SE. Analogous to Additional file 19: Table S5, but for 20M SE reads.

Click here for file

###### Additional file 28

**Table S14.** E1, E2 and E3 in Set-up 1, with data driven alignment and IA for 20M 100 bp-SE. Analogous to Additional file 20: Table S6, but for 20M SE reads.

Click here for file

###### Additional file 29

**Table S15.** E1, E2 and E3 in Set-up 1, with data driven alignment and CA for 0.25M 100 bp-SE. Analogous to Additional file 21: Table S7, but for 0.25M SE reads.

Click here for file

###### Additional file 30

**Table S16.** E1, E2 and E3 in Set-up 1, with data driven alignment and IA for 0.25M 100 bp-SE. Analogous to Additional file 22: Table S8, but for 0.25M SE reads.

Click here for file

###### Additional file 31

**Table S17.** Statistics for E1, E2 and E3 in Set-up 2 with annotation driven alignment (CA) for 60M 75 bp-PE. Analogous to Additional file 15: Table S1, but for Set-up 2, 60M 75 bp-PE.

Click here for file

###### Additional file 32

**Table S18.** Statistics for E1, E2 and E3 in Set-up 2 with annotation driven alignment (IA) for 60M 75 bp-PE. Analogous to Additional file 16: Table S2, but for Set-up 2, 60M 75 bp-PE.

Click here for file

###### Additional file 33

**Table S19.** Statistics for E1, E2 and E3 in Set-up 2 with data driven alignment and CA for 60M 75 bp-PE. Analogous to Additional file 19: Table S5, but for Set-up 2, 60M 75 bp-PE.

Click here for file

###### Additional file 34

**Table S20.** Statistics for E1, E2 and E3 in Set-up 2 with data driven alignment and IA for 60M 75 bp-PE. Analogous to Additional file 20: Table S6, but for Set-up 2, 60M 75 bp-PE.

Click here for file

## Acknowledgments

This research was supported by the "Italian Flagship Project Epigenomic" of the Italian Ministry of Education, University and Research and the National Research Council ().
The authors would like to thank the anonymous reviewers for their valuable comments and suggestions to improve the quality of the paper, and Francesco Russo for reading the manuscript.

abstract: This paper distinguishes between two different scales of medium range order, MRO, in non-crystalline SiO~2~: (1) the first is ~0.4 to 0.5 nm and is obtained from the position of the first sharp diffraction peak, FSDP, in the X-ray diffraction structure factor, S(*Q*), and (2) the second is ~1 nm and is calculated from the FSDP full-width-at-half-maximum, FWHM. Many-electron calculations yield Si–O third- and O–O fourth-nearest-neighbor bonding distances in the same 0.4–0.5 nm MRO regime. These derive from the availability of empty Si dπ orbitals for back-donation from occupied O pπ orbitals, yielding narrow symmetry-determined distributions of third-neighbor Si–O and fourth-neighbor O–O distances. These are segments of six-member rings contributing to connected six-member rings with a ~1 nm length scale within the MRO regime. The unique properties of non-crystalline SiO~2~ are explained by the encapsulation of six-member ring clusters by five- and seven-member rings on average in a compliant hard-soft nano-scaled inhomogeneous network. This network structure minimizes macroscopic strain, reducing intrinsic bonding defects as well as defect precursors. This inhomogeneous CRN is enabling for applications ranging from thermally grown ~1.5 nm SiO~2~ layers for Si field-effect transistor devices to optical components with centimeter dimensions. There are qualitatively similar length scales in nano-crystalline HfO~2~ and phase-separated Hf silicates, based on the primitive unit cell rather than a ring structure. Hf oxide dielectrics have recently been used as replacement dielectrics for a new generation of Si and Si/Ge devices, heralding a transition into nano-scale circuits and systems on a Si chip.
author: Gerald Lucovsky; James C Phillips
date: 2010
institute: 1Department of Physics, North Carolina State University, Raleigh, NC, 27695-8202, USA; 2Department of Physics and Astronomy, Rutgers University, Piscataway, NJ, 08854, USA
references:
title: Nano-regime Length Scales Extracted from the First Sharp Diffraction Peak in Non-crystalline SiO~2~ and Related Materials: Device Applications

# Introduction

There have been many models proposed for the unique properties of non-crystalline SiO~2~. These are based on the concept of the continuous random network, CRN, structure as first proposed by Zachariasen \[1,2\]. CRN models assume the short range order, SRO, of SiO~2~ is comprised of fourfold coordinated Si in tetrahedral environments connected through corner-shared, twofold coordinated O atoms, each bridging two Si atoms in a bent geometry. The random character of the network has generally been attributed to a wide distribution of Si–O–Si bond angles, 150 ± 30° as determined by X-ray diffraction \[3\], as well as a random distribution of dihedral angles.
These combine to give a distribution of ring geometries that defines a compliant and strain-free CRN structure \[2\].

More recently, a semi-empirical bond constraint theory (SE-BCT) was proposed by one of the authors (JCP) to correlate the ease of glass formation in SiO~2~ and chalcogenide glasses with local bonding constraints associated with two-body bond-stretching and three-body bond-bending forces \[4,5\]. The criterion for ease of glass formation was a mean-field relation equating the average number of stretching and bending constraints/atom with the network dimensionality of three. When applied to SiO~2~, satisfaction of this criterion was met by assuming a broken bond-bending constraint for the bridging O-atoms. This was inferred from the large bond angle distribution of the Si–O–Si bonding group, and the weak bond-bending force constant. The same local mean-field approach for the ease of glass formation has been applied with good success to other non-crystalline network glasses in the Ge–Se and As–Se alloy systems.

SE-BCT makes no connection with medium range order, MRO, which is a priori deemed to be important for other properties. As such, SE-BCT cannot identify the MRO bonding that has been associated with the FSDP \[6,7\]. Based on these references, the position and width of the FSDP identify two different MRO length scales. It will be demonstrated in this paper that these length scales provide a basis for explaining some of the unique nano-scale related properties of non-crystalline SiO~2~ that are enabling for device applications.

The FSDP in the structure factor, S(*Q*), has been determined from X-ray and neutron diffraction studies of oxide, silicate, germanate, borate and chalcogenide glasses \[7\]. There is a consensus that the position and width of this feature derive from MRO \[6-8\]. This is defined as order extending beyond the nearest- and next-nearest neighbor distances extracted from diffraction studies, and displayed in radial distribution function plots \[2\]. There has been much speculation and empirical modeling addressing the microscopic nature of the bonding arrangements in the MRO regime responsible for the FSDP, including rings of bonded atoms \[9\], distances between layer-like ordering \[10\], and/or void clustering \[11\]. First-principles molecular dynamics calculations have been applied to the FSDP \[7,12\]. One of these papers ruled out models based on layer-like nano-structures and nano-scale voids as the MRO responsible for the FSDP \[12\]. References \[7\] and \[12\] did not offer alternative explanations for the FSDP based on a microscopic understanding of the relationship between atomic pair correlations in the MRO regime and constraints imposed by fundamental electronic structure at the atomic and molecular levels.

Moss and Price in \[7\], building on the 1974 deNeufville et al. \[6\] observation and interpretation of the FSDP, proposed that the position of this feature, *Q*~1~(Å^−1^), "can be related, via an approximate reciprocal relation, to a distance *R* in real space by the expression *R* = 2π/*Q*". It is important to note that the MRO-scale bonding structures previously proposed in \[9\] and \[10\], and ruled out in \[12\], could not explain why a large number of oxide and chalcogenide glasses exhibit FSDPs in a relatively narrow regime of *Q*-values, ~1 to 1.6 Å^−1^.
Nor could they account for the systematic differences among these *Q*-values, ~1.5 for oxides, and 1.0–1.25 for chalcogenides. It has been shown in \[13\] that this is a result of a scaling relationship between the position of the FSDP in *Q*-space and the nearest neighbor bond length.

Based on \[7,8,13,14\], the position, *Q*~1~(Å^−1^), and full-width at half-maximum, FWHM, Δ*Q*~1~(Å^−1^), of the FSDP have been used to identify a second length scale within the MRO regime for a representative set of oxide and chalcogenide glasses \[7,11\]. The first length scale has been designated as a correlation length, *R* = 2π/*Q*~1~(Å^−1^), and is determined from the *Q*-space position of the spectral peak as suggested in \[1\], and the second has been designated as a coherence length, *L* = 2π/Δ*Q*~1~(Å^−1^), and is determined from the FWHM \[7\]. This interpretation of the FSDP position and line-shape is consistent with the interpretation of diffraction peaks or local maxima in S(*Q*) for non-crystalline and crystalline solids \[2\]. These are interpreted as inter-atomic distances, or equivalently atomic pair correlations, that are repeated throughout a significant volume of the sample within the X-ray beam, but not in a periodic manner characteristic of long range crystalline order. Like other diffraction features, e.g., the width of the Si–Si pair correlation in SiO~2~ as determined in \[3\], the FSDP width is also associated with a characteristic real space distance, e.g., that of a MRO-scale cluster of atoms.

In referring to the FSDP, Moss and Price in \[7\] noted that "such a diffraction feature thus represents the build up of correlation whose basic period is well beyond the first few neighbor distances"; this basic period is within the MRO domain. It was also pointed out by them that "In fact, the width of this feature can be used to estimate a correlation range over which the period in question survives", or persists.

Returning to the paper published in 1974 by deNeufville, Moss and Ovshinsky: this article addressed photo-darkening in As~2~(S, Se)~3~ in a way that anticipated the quantitative definitions for *R* and *L* in subsequent publications \[7\]. This is of historical interest since FSDPs were observed for the first time in each compound and/or alloy studied, and these were associated with a real space distance of ~5.5 Å in the MRO regime. It was also noted that the width of this feature in reciprocal (*Q*) space identified a larger scale of order over which these MRO-regime correlations persisted; this has subsequently been defined as the coherence length, *L*.

# Experimental Results for SiO~2~

The position and width of the FSDP in glassy SiO~2~ have received considerable attention, and are well-characterized \[7,8,14\]. Based on these references and others as well, *Q*~1~(Å^−1^) is equal to 1.52 ± 0.03 Å^−1^, and Δ*Q*~1~(Å^−1^) to 0.66 ± 0.03 Å^−1^. The calculated values of the correlation length, *R* = 2π/*Q*~1~(Å^−1^), and the coherence length, *L* = 2π/Δ*Q*~1~(Å^−1^), are, respectively, *R* = 4.13 ± 0.08 Å, and *L* = 9.5 ± 0.5 Å.
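The two length scales follow directly from the reciprocal relations quoted above; the short sketch below (a minimal numerical check, not from the original paper) reproduces the SiO~2~ values.

```python
# Correlation and coherence lengths from the FSDP parameters of glassy SiO2.
import math

Q1, dQ1 = 1.52, 0.66        # FSDP position and FWHM, in inverse angstroms
R = 2 * math.pi / Q1        # correlation length, R = 2*pi/Q1
L = 2 * math.pi / dQ1       # coherence length,  L = 2*pi/dQ1
print(f"R = {R:.2f} A")     # -> R = 4.13 A
print(f"L = {L:.2f} A")     # -> L = 9.52 A, i.e., ~9.5 +/- 0.5 A
```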
*R* gives rise to features in the RDF in a regime associated with rings of bonded atoms; these are a universal aspect of the CRN description of non-crystalline oxides and chalcogenides which include twofold coordinated atoms \[2\].

In a continuous random network, CRN, such as SiO~2~, the primitive ring size is defined by the number of Si atoms connected through bridging O atoms to form the smallest high symmetry ring structure. This primitive ring is the non-crystalline analog of the primitive unit cell (PUC) in crystalline solids, and this provides an important connection between the properties of non-crystalline and nano-crystalline thin films.

It was first demonstrated in the Bell and Dean model \[15,16\], and later by computer generated modeling \[17-19\] and molecular dynamics simulations as well \[8\], that the ring size distribution for SiO~2~ is dominated by six-member rings with six silicon and six oxygen atoms.

The contributions to the partial structure factor, S~*ij*~^*N*^(*Q*), associated with Si–O, O–O and Si–Si pair correlation distances have been determined using classical molecular dynamics simulations as addressed in \[8\]. Combined with RDFs from the Bell and Dean model \[16\], and computer modeling \[17,18,20\], these studies identify inter-atomic pair correlations in the regime of 4–5 Å that contribute to the position of the FSDP. Figure 3 of \[16\] is a pair distribution histogram that indicates (1) a Si–O pair correlation, or third nearest neighbor distance, of 4.1 ± 0.5 Å, and (2) an O–O pair correlation, or fourth nearest neighbor distance, of 4.5 ± 0.3 Å. These features are evident in the computed and experimental radial distribution function plots for X-ray diffraction in Fig. 4, and neutron diffraction in Fig. 5, also of \[16\]. As indicated in Fig. 1 of this paper, the 4.1 Å feature is assigned to Si–O third nearest-neighbor distances, and the 4.5 Å feature is assigned to fourth nearest-neighbor O–O distances. Figure 1 is a schematic representation of the local cluster that has been used to determine the Si–O–Si bond angle using many-electron ab initio quantum chemistry calculations in \[18\].

The importance of Si atom d-state symmetries in calculations of the electronic structure of non-crystalline SiO~2~ was recognized in \[18\], published in 2002. These symmetries, coupled with the O 2pπ states, play a significant role in narrowing the two pair distribution distances identified above. The cluster displayed in Fig. 1 is large enough to include the correlation length, *R*, in the MRO regime. The calculations of \[18\] demonstrated that Si d-state basis Gaussian functions, when included in a many-electron, ab initio calculation, play a determinant role in generating a stable minimum for a Si–O–Si bond angle, Θ, that is smaller than the ionic bonding value of 180°. In addition, these values of Θ, and the bond angle distribution, ΔΘ, (1) were different from what had been determined by the X-ray diffraction studies of Mozzi and Warren in \[3\], but (2) were in excellent agreement with more recent studies that employed a larger range of *k* or *Q* \[19\].
The values obtained by Mozzi and Warren \[3\] are Θ ~ 144° and ΔΘ (FWHM) ~ 30°, whereas the studies in \[19\] obtained values of Θ ~ 148° and ΔΘ (FWHM) ~ 13–15° that were essentially the same as those calculated in \[16\]. The Bell and Dean model of \[15\] in Fig. 2 gave a Si–O–Si bond angle of 152°, and a bond angle distribution with a FWHM ~ 15°. Of particular significance is the significantly narrower Si–O–Si bond angle distribution of the calculations in \[18\] and of the X-ray diffraction studies of \[19\]. The bond angles and bond angle distributions of \[16,18,19\] have important implications for the existence of high symmetry six-member Si–O rings and their importance as the primitive ring structure in both α-quartz and β-quartz, as well as non-crystalline SiO~2~.

The identification of the specific MRO regime features obtained from S(*Q*) relies heavily on the pair correlation functions derived from the Bell and Dean model \[16\], as well as from the computer modeling of the Gaskell group \[21\] and of Tadros et al. \[17,20\]. Combined with \[16\], the Si dπ–O 2pπ–Si dπ symmetry-determined overlap, and the charge transfer from occupied O π-states into otherwise empty Si dπ states, play the determinant role in forcing the narrowness of this MRO length scale feature. Stated differently, pairs of Si atoms connected through an intervening O atom, as in Fig. 1, are strongly correlated by the local symmetries forced on these Si dπ-states. This correlation reflects the even symmetry of the respective Si d-states, and the odd symmetry of the O p-states. In contrast, the coherence length, *L*, as determined from the FWHM of the FSDP, cannot be assigned to a specific inter-atomic repeat distance identified in any of the models addressed above, but instead is an average cluster dimension, in the spirit of the definitions in \[6\] and \[7\].

The coherence length, *L*, in SiO~2~, as computed from the FWHM of the FSDP, is 9.5 ± 0.5 Å, and this identifies the cluster associated with this length scale. Based on a simple extension of the schematic diagram in Fig. 1, this cluster includes a coupling of at least two, and no more than three, symmetric six-member primitive rings. If this cluster were extended well beyond two to three rings in all directions, it would eventually generate the crystal structure of α-quartz. The helical aspect of this structure gives rise to a right- or left-handed optical rotary property of α-quartz \[22\]. The helical structure of α-quartz has its parentage in trigonal Se, which is comprised of right- or left-handed helical chains with three Se atoms per turn of the helix. The two-atom helix analog is the cinnabar phase of HgS with six atoms/turn, three Hg and three S. α-quartz is the three-atom analog with nine atoms/turn, three Si and six O \[22\]. Returning to non-crystalline SiO~2~, the coupling of two to three six-member rings is consistent with the relative fraction of six-member rings, ~50%, in the Bell and Dean \[16\] construction as well as other estimates of the ring fraction.

Moreover, this two to three ring clustered structure is an example of the MRO structures addressed in \[7\].
With respect to the FSDP, Moss and Price noted that "such a diffraction feature thus represents the build up of correlation whose basic period is well beyond the first few neighbor distances"; it is therefore in the MRO regime. They also pointed out that: "In fact, the width of this feature (the FSDP) can be used to estimate a correlation range over which the period in question survives". The incoherent coupling associated with the less symmetric five- and seven-member rings then determines the correlation, or coherence, range over which this period survives.

# Revisiting the CRN in Context of Correlation and Coherence Length Determinations

The pair correlation assignments made for *R* and *L* are consistent with the global concept of a CRN, but the length scales for correlation, *R*, and coherence, *L*, are quantitatively different from what was proposed originally in \[1\], and discussed at length in \[3\]. Each of these envisioned the CRN randomness to be associated with the relative widths of bond lengths and bond angles, as in Fig. 2 of Bell and Dean \[16\]. Based on this model, the Si–O pair correlation has a width <0.05 Å, and the Si–O–Si bond angle displays a 30° width, corresponding to a Si–Si pair correlation width at least two to three times larger. In these conventional descriptions of the CRN, any dihedral angle correlations, or four-atom correlations, are removed by the bond-angle widths.

The identification of the MRO length scales, *R* and *L*, also has important implications for the use of semi-empirical bond constraint theory (SE-BCT) for identifying and/or describing ideal glass formers. This theory is a mean-field theory based on average properties that are determined by constraints restricted to SRO bonding arrangements \[4,5,23\]. The identification and interpretation of the two MRO length scales discussed above indicates that this emphasis on SRO is not sufficient for identifying the important nano-scale properties of SiO~2~. Indeed, MRO is deemed crucial for establishing the unique and technologically important character of non-crystalline SiO~2~ over a dimensional scale extending from 1 to 2 nm thick gate dielectrics to centimeter dimensions for high-quality optically homogeneous components, e.g., lenses.

The FSDP has been observed and studied in other non-crystalline oxide glasses, e.g., B~2~O~3~ and GeO~2~, as well as chalcogenide glasses including sulfides, GeS~2~ and As~2~S~3~, and selenides, GeSe~2~, As~2~Se~3~ and SiSe~2~ \[6,7\]. The values of *R* and *L* have been calculated, and display anion (O, S and Se) and cation coordination specific behaviors. For example, the values of the correlation length *R*, and the coherence length *L*, have been obtained from the position, and FWHM, of the S(*Q*) FSDP peak for (a) SiO~2~: *R* = 4.1 ± 0.2 Å, and *L* = 9.5 ± 0.5 Å; (b) B~2~O~3~: *R* = 4.0 ± 0.2 Å, and *L* = 11 ± 1 Å; and (c) GeSe~2~: *R* = 6.3 ± 0.3 Å, and *L* = 24 ± 4 Å.

It has been noted previously elsewhere \[7,13\] that quantitative differences between the positions of the FSDPs in SiO~2~ and GeSe~2~ can be correlated directly with differences between the respective (1) Si–O and Ge–Se bond-lengths, 1.65 and 2.39 Å, and (2) Si–Si and Ge–Ge next neighbor features as determined by the respective Si–O–Si and Ge–Se–Ge bond angles, ~148° and ~105°.
This was addressed in \[1\] and \[24\], where it was shown that the products of the nearest neighbor bond length, *r*~1~(Å), and the position of the FSDP, *Q*~1~(Å^−1^), are approximately the same, ~2.5 ± 0.4, for the oxide and chalcogenide glasses \[1,24\]. Based on this scaling, the value of *R* for GeSe~2~ (*x* = 0.33) is estimated to be 6.2 ± 0.2 Å, compared with the averaged experimental value of *R* = 6.30 ± 0.07 Å.

These values of *Q*~1~(Å^−1^) show interesting correlations with the nature of the CRNs. For the three oxide glasses in Table 1, *Q*~1~(Å^−1^) ~ 1.55 ± 0.03, and is independent of the network coordination, i.e., 3–2 for B~2~O~3~ and 4–2 for SiO~2~ and GeO~2~. In contrast, the value of *Q*~1~(Å^−1^) decreases to ~1.05 for the 4–2 selenides, and then increases to ~1.25 for the 3–2 chalcogenides. This indicates a shorter correlation length in the 3–2 alloys than in the 4–2 selenides, presumed to be associated with repulsions between the lone pairs on As and either the Se or S atoms of the particular 3–2 chalcogenide alloy.

Comparisons and scaling for *R* \[1\]

| Glass | *Q*~1~(Å^−1^) | *R*(Å) | *r*~1~(Å) | *r*~1~*Q*~1~ |
|:-----------|---------------|--------|-----------|--------------|
| SiO~2~ | 1.55 | 4.1 | 1.61 | 2.48 |
| GeO~2~ | 1.55 | 4.1 | 1.74 | 2.70 |
| B~2~O~3~ | 1.57 | 4.0 | 1.36 | 2.14 |
| SiSe~2~ | 1.02 | 6.2 | 2.30 | 2.35 |
| GeSe~2~ | 1.00 | 6.3 | 2.37 | 2.37 |
| GeS~2~ | 1.04 | 6.0 | 2.22 | 2.30 |
| As~2~S~3~ | 1.27 | 4.9 | 2.28 | 2.90 |
| As~2~Se~3~ | 1.26 | 5.0 | 2.44 | 3.07 |

*r*~1~ = bond length

It is significant to note that the scaling relationship based on SRO breaks down for the coherence length *L* for GeSe~2~. The scaled value for *L* is estimated to be 15 Å, compared with the higher average experimental value of *L* = 24 ± 4 Å \[11,25\]. The comparisons based on scaling are consistent with *R* being determined by the extension of a local pair correlation determined by the ring structures in the SiO~2~ and GeSe~2~ CRNs. The microscopic basis for *L* in SiO~2~, and in B~2~O~3~ as well, is determined by characteristic inter-ring bonding arrangements with a cluster size related to the coupling of two, or two to three, rings, respectively. These determine the period of the cluster repetition, and the encapsulation of these more symmetric rings by less symmetric rings of bonded atoms, i.e., five- and seven-member rings in SiO~2~. The inter-ring coupling in SiO~2~ is a direct result of the softness of the Si–O–Si bond-bending force constant in SiO~2~ \[4,5\]. For the case of the GeSe~2~ CRN, because of the smaller Ge–Se–Ge bond angle and the repulsive effects between the Se lone pair electrons and the bonding electrons localized in the more covalent Ge–Se bonds, the coherence length is not attributed to rings of bonded atoms, but rather to a hard-soft cluster mixture. The hard-soft structure in Ge–Se alloys is determined by compositionally dependent constraints imposed by local bonding, e.g., locally rigid groups with Ge atoms separated by one bridging Se atom, Ge–Se–Ge, and locally compliant groups associated with two bridging Se atoms, Ge–Se–Se–Ge \[23\].
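The *r*~1~*Q*~1~ scaling in Table 1 is easy to verify numerically. The following sketch is illustrative only, with values copied from the table above; it recomputes the products and the scaled estimate of *R* for GeSe~2~ from the SiO~2~ entry.

```python
# Check of the r1*Q1 scaling (Table 1) and the scaled estimate of R for GeSe2.
import math

glasses = {              # name: (Q1 in A^-1, r1 = bond length in A)
    "SiO2":  (1.55, 1.61),
    "GeO2":  (1.55, 1.74),
    "B2O3":  (1.57, 1.36),
    "GeSe2": (1.00, 2.37),
}
for name, (Q1, r1) in glasses.items():
    # R = 2*pi/Q1; the product r1*Q1 clusters near ~2.5 +/- 0.4
    print(f"{name:6s} R = {2 * math.pi / Q1:.1f} A, r1*Q1 = {r1 * Q1:.2f}")

# Scaling the SiO2 correlation length by the bond-length ratio gives ~6 A,
# consistent with the ~6.2 +/- 0.2 A estimate quoted above (measured: 6.3 A).
R_scaled = (2 * math.pi / 1.55) * (2.37 / 1.61)
print(f"scaled R(GeSe2) ~ {R_scaled:.1f} A")
```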
Similar considerations apply to the period of the hard component of a hard-soft structure, which has been proposed as the driving force for glass formation and for the associated low densities of defects and defect precursors, which are associated with broken and strained bonds, respectively. The criterion in SiO~2~ and B~2~O~3~ is determined by a nano-structure that includes a multiplicity of different ring sizes, whereas in chalcogenide glasses the criterion is a volume percolation threshold, consistent with locally rigid and locally compliant groups being phase-separated into hard-soft mixtures \[26\]. The same considerations apply in As-chalcogenides, and for the compound As~2~Se~3~ and GeSe~2~ compositions that include small discrete molecules that add compliance to the otherwise locally rigid CRNs that include As–Se–As and Ge–Se–Ge bonding, respectively \[23\].

The conclusion is that SE-BCT, even with local modifications for symmetry-associated broken bending constraints, and additional constraints due to lone pair and terminal atom repulsions \[23\], has limited value in accounting for the elimination of macroscopic strain for technology applications. This property depends on MRO, as embodied in hard-soft mixtures and/or in percolation of short-range-order groups that exceed a volume percolation threshold \[23,27\].

# Nano-crystalline and Nano-crystalline/Non-crystalline Alloys

Extension of the MRO concepts of the previous sections from CRNs to nano-crystalline and nano-crystalline/non-crystalline composites of technological importance is addressed in this section. One way to formulate this issue is to determine the conditions that promote hard-soft mixtures in materials that are (1) chemically homogeneous, but inhomogeneous on a nano-meter length scale, or (2) both chemically inhomogeneous and phase-separated. The first of these is addressed in homogeneous HfO~2~ thin films, and the second in phase-separated Hf silicates, as well as other phase-separated materials in which SiO~2~ is a chemical constituent \[28\].

## Nano-grain HfO~2~ Films

The nano-grain morphology of deposited and subsequently high temperature, >700°C, annealed HfO~2~ thin films is typically a mixture of monoclinic (m-) and tetragonal (t-) grains, differentiated by Hf 5d features in combination with O 2pπ states that comprise local symmetry adapted linear combinations (SALCs) of atomic states into molecular orbitals (MOs) \[28,29\]. These MOs are essentially one-electron states, in contrast to occupied Hf states that must be treated in a many-electron theory \[30\]. Of particular importance are the π-bonded MOs that contribute to the lowest conduction band features in O K edge XAS spectra \[28,29\]. Figure 2 indicates differences in these band edge features for nano-grain t-HfO~2~ and m-HfO~2~ thin films in which the grain morphology has been controlled by interfacial bonding. The t-HfO~2~ films display a single asymmetric band edge feature, whereas m-HfO~2~ films display two band edge features. Figure 3 is for films with a mixed t-/m- nano-grain morphology, and a thickness that is increased from 2 to 3 nm, and then to 4 nm.
Based on features in these spectra, and on 2nd derivative spectra as well, the 2 nm film displays neither a t- nor a m-nano-grain morphology, while the thicker films display a doublet structure indicative of a mixed nano-grain morphology.

The band edge 5d E~g~ splittings in Figs. 2 and 3 indicate a cooperative Jahn–Teller (J–T) distortion \[28\]. The theoretical model in \[31\] indicates that an electronic unit cell comprised of seven PUCs, each ~0.5 to 0.55 nm, is necessary for a cooperative J–T effect, and this requires nano-grain dimensions of ~3 to 3.5 nm. This indicates a dimensional constraint in the 2 nm thick film: this film is simply too thin to support a high concentration of randomly oriented nano-grains with an electronic unit cell large enough to support a J–T distortion. These 2 nm films are generally characterized as X-ray amorphous. As-deposited 3 and 4 nm thick films also display no J–T distortions, but when subjected to the same 900°C anneal as the 2 nm thick film, the dimensional constraint is relaxed and J–T distortions are stabilized and observed in O K edge XAS.

These differences in nano-scale morphology identify several scales of MRO for HfO~2~, as well as for other TM d^0^ oxides, TiO~2~ and ZrO~2~. The first is the PUC, ~0.5 to 0.55 nm, and the second and third are for coupling of unit cells. The first coupling is manifest in 1.5–2.0 nm grains that are analogues of the SiO~2~ clusters comprised of 2–3 symmetric six-member rings. The second length scale is 3–3.5 nm and is sufficient to promote J–T distortions, which persist in thicker annealed films and in bulk crystals as well. The PUC of HfO~2~ then plays the same role as the symmetric, or regular, six-member ring of non-crystalline SiO~2~ and of crystalline α-quartz.

Differences in nano-grain order have a profound effect on intrinsic bonding defects in HfO~2~. In films thicker than 3 nm they contribute to high densities of vacancy defects (~10^12^ cm^−2^, or equivalently 10^18^ cm^−3^), clustered on the internal grain boundaries of nano-grains large enough to display J–T term splittings \[28\]. These are indicated in Fig. 4.

Nano-grain HfO~2~ in the MRO size regime of 1.5–2 nm can also be formed in phase-separated Hf silicate, (HfO~2~)~*x*~(SiO~2~)~1−*x*~, alloys in two narrow compositional regimes: 0.15 < *x* < 0.3, and 0.75 < *x* < 0.85. For the lower *x*-regime, the phase separation of an as-deposited homogeneous silicate yields a compliant hard-soft structure. This is comprised of X-ray amorphous nano-grains with <3 nm dimensions that are encapsulated by non-crystalline SiO~2~. For the higher *x*-regime, the phase-separated silicates include X-ray amorphous nano-grains <3 nm in size, whose growth is frustrated by a random incorporation of 2 nm clusters of compliant non-crystalline SiO~2~. The concentration of these 2 nm clusters exceeds a volume percolation threshold, accounting for the frustration of larger nano-grain growth \[27\].

Each of these phase-separated silicate regimes exhibits low densities of defects and defect precursors.
However, these diphasic silicates have not been studied with respect to radiation stressing, so it would be ill-advised and inappropriate to call them SiO~2~-look-alikes, a label that has been attached to the homogeneous Hf Si oxynitride alloys in the next sub-section based on radiation stressing \[32\].

## Homogeneous Hf Si Oxynitride Alloys

There is a unique composition, (HfO~2~)~0.3~(SiO~2~)~0.3~(Si~3~N~4~)~0.4~ (concentrations ± 0.025), hereafter HfSiON~334~, which is stable to annealing temperatures >1,000°C, and whose electrical response after X-ray and γ-ray stressing is essentially the same as that of SiO~2~ \[32\]. This similarity is with respect to (1) the linear dependence on dosing, (2) the sign of the fixed charge, always positive, and (3) the magnitude of the defect generation. The unique properties are attributed to fourfold coordinated Hf substituting onto 16.7% of the possible fourfold coordinated Si bonding sites (per formula unit, Hf provides 0.3 of the 1.8 fourfold coordinated cation sites: 0.3 Hf, 0.3 Si from SiO~2~, and 1.2 Si from Si~3~N~4~; 0.3/1.8 = 16.7%). This concentration is at the percolation threshold for connectivity of compliant local bonding arrangements \[27\]. Larger concentrations of Si~3~N~4~ for the same or different combinations of HfO~2~ and SiO~2~ bonding lead to chemical phase separation with loss of bonded N, and therefore qualitatively different thin films.

## Other Diphasic Materials with 20% SiO~2~

There are at least two other diphasic materials with a dimensionally stabilized symmetric nano-crystalline phase, and a 20% compliant non-crystalline phase of 2 nm SiO~2~ clusters. These include a 20% mixture of non-crystalline SiO~2~ with (1) nano-crystalline zincblende-structured ZnS grains, or (2) a fine nano-grain ceramic as in Corning cookware \[33\]. In each of these materials, TEM imaging indicates that the 20% SiO~2~ is distributed uniformly in compliant clusters with an average size of ~2–3 nm. These encapsulated nano-clusters reduce macroscopic strain but, equally important, suppress the formation of more asymmetric crystal structures, e.g., wurtzite ZnS, which would lead to anisotropic optical properties and make these films unusable as protective layers in optical memory stacks for digital video disks (DVD) for information storage and retrieval. In the second application, the SiO~2~ makes these ceramics macroscopically strain free, and capable of being moved from the "oven to the refrigerator" without cracking \[33\].

## (Si~3~N~4~)~*x*~(SiO~2~)~1−*x*~ Gate Dielectrics

Si oxynitride pseudo-binary alloys, (Si~3~N~4~)~*x*~(SiO~2~)~1−*x*~, emerged in the late 1990s as replacement dielectrics \[34\]. These alloys have been used with both small and high concentrations of Si~3~N~4~, with different objectives: at low concentration levels, <5% Si~3~N~4~, for blocking boron transport from B-doped poly-Si gate electrodes \[24\], and at significantly higher concentrations, ~50 to 60% Si~3~N~4~, as required for a significant increase in the dielectric constant from ~3.9 to ~5.4–5.8 \[35\].

The mid-gap interface state density, *D*~it~, and the flat-band voltage, *V*~fb~, were obtained from a conventional C–V analysis of metal–oxide–semiconductor capacitors on p-type Si substrates with ~10^17^ cm^−3^ doping, *p*-MOSCAPs, with Al gate metal layers and a post-metal anneal in forming gas. Both *D*~it~ and *V*~fb~ display qualitatively similar behavior as a function of *x* for both as-deposited and Si-dielectric layers annealed at 900°C in Ar for 1 min \[34\].
The annealed dielectrics are processed at temperatures that validate comparisons with *p*-MOSCAPs with thermally grown SiO~2~ and similarly processed Al gates. *D*~it~ decreases from ~10^11^ cm^−2^ eV^−1^ for Si~3~N~4~ (*x* = 1), to ~10^10^ cm^−2^ eV^−1^ for *x* ~ 0.7, a value comparable to state-of-the-art SiO~2~ MOSCAPs. The value of *D*~it~ is relatively constant, 1.1 ± 0.2 × 10^10^ cm^−2^ eV^−1^, for values of *x* from 0.65 to 0.0 (SiO~2~). In a complementary manner, *V*~fb~ increases from −1.3 V for Si~3~N~4~ (*x* = 1), to −0.9 V at *x* ~ 0.7, and then remains relatively constant, −0.8 ± 0.1 V, for values of *x* from 0.65 to 0.0 (SiO~2~). The values of *D*~it~ and *V*~fb~ are comparable to those for thermally grown SiO~2~, and therefore have been the basis for the use of these Si oxynitrides in commercial devices \[34\].

The electrical measurements are consistent with significant decreases in macroscopic strain for Si oxynitride alloys with SiO~2~ concentrations exceeding about 35%, i.e., *x* = 0.65. This suggests a hard-soft mechanism in this regime similar to that in Hf silicates. At SiO~2~ concentrations <35%, i.e., *x* > 0.65, the roles of the hard and soft components are assumed to be reversed. However, strain reduction over such an extensive composition regime suggests a more complicated nano-scale structure that has a mixed hard-soft character over a significant composition region. The proposed mixed phase is comprised of equal concentrations of Si~3~N~4~ encapsulating SiO~2~ at high Si~3~N~4~ concentrations, and an inverted hard-soft character with SiO~2~ encapsulating Si~3~N~4~ at lower Si~3~N~4~ concentrations. If this is indeed the case, it represents a rather interesting example of a double percolation process \[26,36\].

# Summary and Conclusions

The principal conclusions are summarized in the bulleted format below.

1\. The spectral position of the FSDP for glasses, and its FWHM, are associated with real-space distances in the MRO regime, as obtained from the structure factor S(*Q*) derived from X-ray or neutron diffraction. The first length scale has been designated as a correlation length, *R* = 2π/*Q*~1~(Å^−1^), and the second length scale has been designated as a coherence length, *L* = 2π/Δ*Q*~1~(Å^−1^), where *Q*~1~(Å^−1^) and Δ*Q*~1~(Å^−1^) are, respectively, the position and FWHM of the FSDP in S(*Q*).

2\. The values of the correlation length *R*, and the coherence length *L*, obtained in this way are: (a) SiO~2~: *R* = 4.1 ± 0.2 Å, and *L* = 9.5 ± 0.5 Å; (b) B~2~O~3~: *R* = 4.0 ± 0.2 Å, and *L* = 11 ± 1 Å; and (c) GeSe~2~: *R* = 6.3 ± 0.3 Å, and *L* = 24 ± 4 Å.

3\. Based on molecular dynamics calculations and modeling, the values of *R* correspond to third-neighbor Si–O distances, and are associated with segments of six-member rings in SiO~2~. The larger value of *R* in GeSe~2~ is consistent with scaling based on Ge–Se bond lengths and therefore has a similar origin.

4\. Based on molecular dynamics calculations and modeling, the coherence length features are not a direct result of inter-atomic pair correlations. This is supported by the analysis of X-ray diffraction data as well, where the coherence length is determined by the width of the FSDP rather than by an additional peak in S(*Q*).
5\. The ring clusters contributing to the coherence length for SiO~2~ are comprised of two, or at most three, symmetric six-member rings, stabilized by back-donation of electrons from occupied 2pπ states on O atoms to empty dπ orbitals on the Si atoms. These rings are encapsulated by more compliant structures with lower symmetry, irregular five- and seven-member rings, to form a compliant hard-soft system.

6\. The coherence length in Ge~*x*~Se~1−*x*~ alloys is different in the Se-rich and Ge-rich composition regimes, and is significantly larger in each of these regimes than at the compound composition, GeSe~2~, which they bracket. It is determined in each alloy regime, and at the compound composition, by minimization of macroscopic strain through a chemical bonding self-organization in which site percolation dominates. There is a compliant alloy regime which extends from *x* = 0.2 to 0.26 in which locally compliant bonding arrangements, Ge–Se–Se–Ge, completely encapsulate a more rigid cluster comprised of locally rigid Ge–Se–Ge bonding. For compositions greater than *x* = 0.26 and extending to *x* = 0.4, macroscopic compliance results from a diphasic mixture which includes small molecules with Ge–Se and Ge–Ge bonding.

7\. The hard-soft mix in non-crystalline SiO~2~, with a length scale of at most 1 nm, establishes the unique properties of gate dielectrics >1–1.5 nm thick, and of glasses with cm dimensions as well.

8\. There is an analogy between the properties of nano-crystalline HfO~2~, and phase-separated HfO~2~–SiO~2~ silicate alloys, ZnS–SiO~2~ alloys and ceramic–SiO~2~ alloys, that establishes their unique properties in device applications as diverse as gate dielectrics for aggressively scaled devices, protective layers in stacks for rewritable optical information storage, and temperature compliance in ceramic cookware.

9\. *p*-MOSCAPs with Si oxynitride pseudo-binary alloy, (Si~3~N~4~)~*x*~(SiO~2~)~1−*x*~, gate dielectrics display defect densities for interface trapping, *D*~it~, and for the fixed positive charge that determines the flat-band voltage, *V*~fb~, comparable to those of thermally grown SiO~2~ dielectrics for a range of concentrations extending from ~70% Si~3~N~4~ (*x* = 0.7) to SiO~2~ (*x* = 0). The electrical measurements are consistent with significant decreases in macroscopic strain, suggesting a hard-soft mechanism in this regime similar to that in Hf silicates. However, strain reduction over such an extensive composition regime suggests a more complicated nano-scale structure that has a mixed hard-soft character over a significant composition region. The proposed mixed phase is comprised of equal concentrations of Si~3~N~4~ encapsulating SiO~2~ at high Si~3~N~4~ concentrations, and an inverted hard-soft character with SiO~2~ encapsulating Si~3~N~4~ at lower Si~3~N~4~ concentrations. If this is indeed the case, it represents a rather interesting example of a double percolation process.

10\. The properties of the films and bulk materials identified above are underpinned by the real-space correlation and coherence lengths, *R* and *L*, obtained from analysis of the SiO~2~ structure factor derived from X-ray or neutron diffraction. The real-space interpretation relies on the application of many-electron theory to the structural, optical and defect properties of non-crystalline SiO~2~.

## Acknowledgments

One of the authors (G. L.)
acknowledges support from the AFOSR, SRC, DTRA and NSF.

### Open Access

This article is distributed under the terms of the Creative Commons Attribution Noncommercial License which permits any noncommercial use, distribution, and reproduction in any medium, provided the original author(s) and source are credited.

author:
date: 2011-01
references:
title: Standards of Medical Care in Diabetes—2011

# CONTENTS

1. CLASSIFICATION AND DIAGNOSIS OF DIABETES
   1. Classification of diabetes
   2. Diagnosis of diabetes
   3. Categories of increased risk for diabetes (prediabetes)
2. TESTING FOR DIABETES IN ASYMPTOMATIC PATIENTS
   1. Testing for type 2 diabetes and risk of future diabetes in adults
   2. Testing for type 2 diabetes in children
   3. Screening for type 1 diabetes
3. DETECTION AND DIAGNOSIS OF GESTATIONAL DIABETES MELLITUS
4. PREVENTION/DELAY OF TYPE 2 DIABETES
5. DIABETES CARE
   1. Initial evaluation
   2. Management
   3. Glycemic control
      1. Assessment of glycemic control
         1. Glucose monitoring
         2. A1C
      2. Glycemic goals in adults
   4. Pharmacologic and overall approaches to treatment
      1. Therapy for type 1 diabetes
      2. Therapy for type 2 diabetes
   5. Diabetes self-management education
   6. Medical nutrition therapy
   7. Physical activity
   8. Psychosocial assessment and care
   9. When treatment goals are not met
   10. Hypoglycemia
   11. Intercurrent illness
   12. Bariatric surgery
   13. Immunization
6. PREVENTION AND MANAGEMENT OF DIABETES COMPLICATIONS
   1. Cardiovascular disease
      1. Hypertension/blood pressure control
      2. Dyslipidemia/lipid management
      3. Antiplatelet agents
      4. Smoking cessation
      5. Coronary heart disease screening and treatment
   2. Nephropathy screening and treatment
   3. Retinopathy screening and treatment
   4. Neuropathy screening and treatment
   5. Foot care
7. DIABETES CARE IN SPECIFIC POPULATIONS
   1. Children and adolescents
      1. Type 1 diabetes
         1. Glycemic control
         2. Screening and management of chronic complications in children and adolescents with type 1 diabetes
            1. Nephropathy
            2. Hypertension
            3. Dyslipidemia
            4. Retinopathy
            5. Celiac disease
            6. Hypothyroidism
         3. Self-management
         4. School and day care
         5. Transition from pediatric to adult care
      2. Type 2 diabetes
      3. Monogenic diabetes syndromes
   2. Preconception care
   3. Older adults
   4. Cystic fibrosis–related diabetes
8. DIABETES CARE IN SPECIFIC SETTINGS
   1. Diabetes care in the hospital
      1. Glycemic targets in hospitalized patients
      2. Anti-hyperglycemic agents in hospitalized patients
      3. Preventing hypoglycemia
      4. Diabetes care providers in the hospital
      5. Self-management in the hospital
      6. Diabetes self-management education in the hospital
      7. Medical nutrition therapy in the hospital
      8. Bedside blood glucose monitoring
      9. Discharge planning
9. STRATEGIES FOR IMPROVING DIABETES CARE

Diabetes is a chronic illness that requires continuing medical care and ongoing patient self-management education and support to prevent acute complications and to reduce the risk of long-term complications. Diabetes care is complex and requires that many issues, beyond glycemic control, be addressed. A large body of evidence exists that supports a range of interventions to improve diabetes outcomes.

These standards of care are intended to provide clinicians, patients, researchers, payors, and other interested individuals with the components of diabetes care, general treatment goals, and tools to evaluate the quality of care. While individual preferences, comorbidities, and other patient factors may require modification of goals, targets that are desirable for most patients with diabetes are provided. These standards are not intended to preclude clinical judgment or more extensive evaluation and management of the patient by other specialists as needed. For more detailed information about management of diabetes, refer to references 1–3.

The recommendations included are screening, diagnostic, and therapeutic actions that are known or believed to favorably affect health outcomes of patients with diabetes. A grading system (Table 1), developed by the American Diabetes Association (ADA) and modeled after existing methods, was utilized to clarify and codify the evidence that forms the basis for the recommendations. The level of evidence that supports each recommendation is listed after each recommendation using the letters A, B, C, or E.

ADA evidence grading system for clinical practice recommendations
**A**: Clear evidence from well-conducted, generalizable, randomized controlled trials that are adequately powered, including:

- Evidence from a well-conducted multicenter trial
- Evidence from a meta-analysis that incorporated quality ratings in the analysis

Compelling nonexperimental evidence, i.e., "all or none" rule developed by the Center for Evidence Based Medicine at Oxford

Supportive evidence from well-conducted randomized controlled trials that are adequately powered, including:

- Evidence from a well-conducted trial at one or more institutions
- Evidence from a meta-analysis that incorporated quality ratings in the analysis

**B**: Supportive evidence from well-conducted cohort studies, including:

- Evidence from a well-conducted prospective cohort study or registry
- Evidence from a well-conducted meta-analysis of cohort studies

Supportive evidence from a well-conducted case-control study

**C**: Supportive evidence from poorly controlled or uncontrolled studies, including:

- Evidence from randomized clinical trials with one or more major or three or more minor methodological flaws that could invalidate the results
- Evidence from observational studies with high potential for bias (such as case series with comparison to historical controls)
- Evidence from case series or case reports

Conflicting evidence with the weight of evidence supporting the recommendation

**E**: Expert consensus or clinical experience

These standards of care are revised annually by the ADA's multidisciplinary Professional Practice Committee, incorporating new evidence. Members of the Professional Practice Committee and their disclosed conflicts of interest are listed on page S97. Subsequently, as with all Position Statements, the standards of care are reviewed and approved by the Executive Committee of ADA's Board of Directors.

# I. CLASSIFICATION AND DIAGNOSIS OF DIABETES

## A. Classification of diabetes

The classification of diabetes includes four clinical classes:

- Type 1 diabetes (results from β-cell destruction, usually leading to absolute insulin deficiency)

- Type 2 diabetes (results from a progressive insulin secretory defect on the background of insulin resistance)

- Other specific types of diabetes due to other causes, e.g., genetic defects in β-cell function, genetic defects in insulin action, diseases of the exocrine pancreas (such as cystic fibrosis), and drug- or chemical-induced (such as in the treatment of HIV/AIDS or after organ transplantation)

- Gestational diabetes mellitus (GDM) (diabetes diagnosed during pregnancy that is not clearly overt diabetes)

Some patients cannot be clearly classified as having type 1 or type 2 diabetes. Clinical presentation and disease progression vary considerably in both types of diabetes. Occasionally, patients who otherwise have type 2 diabetes may present with ketoacidosis. Similarly, patients with type 1 diabetes may have a late onset and slow (but relentless) progression of disease despite having features of autoimmune disease. Such difficulties in diagnosis may occur in children, adolescents, and adults. The true diagnosis may become more obvious over time.

## B. Diagnosis of diabetes

For decades, the diagnosis of diabetes was based on plasma glucose criteria, either the fasting plasma glucose (FPG) or the 2-h value in the 75-g oral glucose tolerance test (OGTT) (4).

In 2009, an International Expert Committee that included representatives of the ADA, the International Diabetes Federation (IDF), and the European Association for the Study of Diabetes (EASD) recommended the use of the A1C test to diagnose diabetes, with a threshold of ≥6.5% (5), and ADA adopted this criterion in 2010 (4). The diagnostic test should be performed using a method that is certified by the National Glycohemoglobin Standardization Program (NGSP) and standardized or traceable to the Diabetes Control and Complications Trial (DCCT) reference assay. Point-of-care A1C assays are not sufficiently accurate at this time to use for diagnostic purposes.

Epidemiologic datasets show a similar relationship between A1C and risk of retinopathy as has been shown for the corresponding FPG and 2-h plasma glucose thresholds. The A1C has several advantages over the FPG and OGTT, including greater convenience, since fasting is not required; evidence to suggest greater preanalytical stability; and fewer day-to-day perturbations during periods of stress and illness. These advantages must be balanced by greater cost, the limited availability of A1C testing in certain regions of the developing world, and the incomplete correlation between A1C and average glucose in certain individuals. In addition, A1C levels can vary with patients' ethnicity (6) as well as with certain anemias and hemoglobinopathies.
For patients with an abnormal hemoglobin but normal red cell turnover, such as sickle cell trait, an A1C assay without interference from abnormal hemoglobins should be used (an updated list is available at [www.ngsp.org/interf.asp](http://www.ngsp.org/interf.asp)). For conditions with abnormal red cell turnover, such as pregnancy, recent blood loss or transfusion, or some anemias, the diagnosis of diabetes must employ glucose criteria exclusively.

The established glucose criteria for the diagnosis of diabetes (FPG and 2-h PG) remain valid as well (Table 2). Just as there is less than 100% concordance between the FPG and 2-h PG tests, there is not perfect concordance between A1C and either glucose-based test. Analyses of National Health and Nutrition Examination Survey (NHANES) data indicate that, assuming universal screening of the undiagnosed, the A1C cut point of ≥6.5% identifies one-third fewer cases of undiagnosed diabetes than a fasting glucose cut point of ≥126 mg/dl (7.0 mmol/l) (7). However, in practice, a large portion of the diabetic population remains unaware of their condition. Thus, the lower sensitivity of A1C at the designated cut point may well be offset by the test's greater practicality, and wider application of a more convenient test (A1C) may actually increase the number of diagnoses made.

Criteria for the diagnosis of diabetes

| |
|----|
| A1C ≥6.5%. The test should be performed in a laboratory using a method that is NGSP certified and standardized to the DCCT assay.\* |
| or |
| FPG ≥126 mg/dl (7.0 mmol/l). Fasting is defined as no caloric intake for at least 8 h.\* |
| or |
| 2-h plasma glucose ≥200 mg/dl (11.1 mmol/l) during an OGTT. The test should be performed as described by the World Health Organization, using a glucose load containing the equivalent of 75 g anhydrous glucose dissolved in water.\* |
| or |
| In a patient with classic symptoms of hyperglycemia or hyperglycemic crisis, a random plasma glucose ≥200 mg/dl (11.1 mmol/l) |

\*In the absence of unequivocal hyperglycemia, result should be confirmed by repeat testing.

As with most diagnostic tests, a test result diagnostic of diabetes should be repeated to rule out laboratory error, unless the diagnosis is clear on clinical grounds, such as a patient with a hyperglycemic crisis or classic symptoms of hyperglycemia and a random plasma glucose ≥200 mg/dl. It is preferable that the same test be repeated for confirmation, since there will be a greater likelihood of concurrence in this case. For example, if the A1C is 7.0% and a repeat result is 6.8%, the diagnosis of diabetes is confirmed. However, if two different tests (such as A1C and FPG) are both above the diagnostic thresholds, the diagnosis of diabetes is also confirmed.

On the other hand, if two different tests are available in an individual and the results are discordant, the test whose result is above the diagnostic cut point should be repeated, and the diagnosis is made on the basis of the confirmed test. That is, if a patient meets the diabetes criterion of the A1C (two results ≥6.5%) but not the FPG (<126 mg/dl or 7.0 mmol/l), or vice versa, that person should be considered to have diabetes.

Since there is preanalytic and analytic variability of all the tests, it is also possible that when a test whose result was above the diagnostic threshold is repeated, the second value will be below the diagnostic cut point. This is least likely for A1C, somewhat more likely for FPG, and most likely for the 2-h PG. Barring a laboratory error, such patients are likely to have test results near the margins of the threshold for a diagnosis. The healthcare professional might opt to follow the patient closely and repeat the testing in 3–6 months.

The current diagnostic criteria for diabetes are summarized in Table 2.

## C. Categories of increased risk for diabetes (prediabetes)

In 1997 and 2003, the Expert Committee on Diagnosis and Classification of Diabetes Mellitus (8,9) recognized an intermediate group of individuals whose glucose levels, although not meeting criteria for diabetes, are nevertheless too high to be considered normal. These persons were defined as having impaired fasting glucose (IFG) (FPG levels 100–125 mg/dl \[5.6–6.9 mmol/l\]) or impaired glucose tolerance (IGT) (2-h PG values in the OGTT of 140–199 mg/dl \[7.8–11.0 mmol/l\]). It should be noted that the World Health Organization (WHO) and a number of other diabetes organizations define the cutoff for IFG at 110 mg/dl (6.1 mmol/l).

Individuals with IFG and/or IGT have been referred to as having prediabetes, indicating the relatively high risk for the future development of diabetes. IFG and IGT should not be viewed as clinical entities in their own right but rather risk factors for diabetes as well as cardiovascular disease (CVD). IFG and IGT are associated with obesity (especially abdominal or visceral obesity), dyslipidemia with high triglycerides and/or low HDL cholesterol, and hypertension.

As is the case with the glucose measures, several prospective studies that used A1C to predict the progression to diabetes demonstrated a strong, continuous association between A1C and subsequent diabetes. In a systematic review of 44,203 individuals from 16 cohort studies with a follow-up interval averaging 5.6 years (range 2.8–12 years), those with an A1C between 5.5 and 6.0% had a substantially increased risk of diabetes, with 5-year incidences ranging from 9 to 25%. An A1C range of 6.0–6.5% had a 5-year risk of developing diabetes between 25 and 50%, and a relative risk 20 times higher compared with an A1C of 5.0% (10). In a community-based study of black and white adults without diabetes, baseline A1C was a stronger predictor of subsequent diabetes and cardiovascular events than fasting glucose (11). Other analyses suggest that an A1C of 5.7% is associated with diabetes risk similar to that of the high-risk participants in the Diabetes Prevention Program (DPP).

Hence, it is reasonable to consider an A1C range of 5.7–6.4% as identifying individuals with high risk for future diabetes, a state that may be referred to as prediabetes (4). As is the case for individuals found to have IFG and IGT, individuals with an A1C of 5.7–6.4% should be informed of their increased risk for diabetes as well as CVD and counseled about effective strategies to lower their risks (see Section IV, PREVENTION/DELAY OF TYPE 2 DIABETES). As with glucose measurements, the continuum of risk is curvilinear: as A1C rises, the risk of diabetes rises disproportionately (10).
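The numeric cut points in Tables 2 and 3 amount to a simple decision rule. The sketch below is illustrative only and is not part of the ADA standards; the function name and interface are hypothetical, and it deliberately omits the symptomatic random-glucose criterion and the repeat-testing confirmation described above.

```python
# Illustrative sketch of the numeric cut points in Tables 2 and 3 only;
# not a clinical tool. Confirmation by repeat testing and the symptomatic
# random-glucose criterion are intentionally left out.
def classify_glycemia(a1c=None, fpg=None, two_h_pg=None):
    """A1C in %, FPG and 2-h OGTT plasma glucose in mg/dl."""
    tests = [
        (a1c, 6.5, 5.7),        # (value, diabetes cut, increased-risk cut)
        (fpg, 126.0, 100.0),
        (two_h_pg, 200.0, 140.0),
    ]
    if any(v is not None and v >= cut for v, cut, _ in tests):
        return "diabetes (confirm by repeat testing)"
    if any(v is not None and v >= pre for v, _, pre in tests):
        return "increased risk (prediabetes)"
    return "normal"

print(classify_glycemia(a1c=6.1, fpg=118))  # -> increased risk (prediabetes)
```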
Accordingly, interventions should be most intensive and follow-up particularly vigilant for those with A1Cs above 6.0%, who should be considered to be at very high risk.

Table 3 summarizes the categories of increased risk for diabetes.

Categories of increased risk for diabetes (prediabetes)\*

| |
|----|
| FPG 100–125 mg/dl (5.6–6.9 mmol/l): IFG |
| or |
| 2-h plasma glucose in the 75-g OGTT 140–199 mg/dl (7.8–11.0 mmol/l): IGT |
| or |
| A1C 5.7–6.4% |

\*For all three tests, risk is continuous, extending below the lower limit of the range and becoming disproportionately greater at higher ends of the range.

# II. TESTING FOR DIABETES IN ASYMPTOMATIC PATIENTS

## Recommendations

- Testing to detect type 2 diabetes and assess risk for future diabetes in asymptomatic people should be considered in adults of any age who are overweight or obese (BMI ≥25 kg/m^2^) and who have one or more additional risk factors for diabetes (Table 4). In those without these risk factors, testing should begin at age 45 years. (B)

- If tests are normal, repeat testing carried out at least at 3-year intervals is reasonable. (E)

- To test for diabetes or to assess risk of future diabetes, A1C, FPG, or 2-h 75-g OGTT is appropriate. (B)

- In those identified with increased risk for future diabetes, identify and, if appropriate, treat other CVD risk factors. (B)

For many illnesses, there is a major distinction between screening and diagnostic testing. However, for diabetes, the same tests would be used for "screening" as for diagnosis. Diabetes may be identified anywhere along a spectrum of clinical scenarios ranging from a seemingly low-risk individual who happens to have glucose testing, to a higher-risk individual whom the provider tests because of high suspicion of diabetes, to the symptomatic patient. The discussion herein is primarily framed as testing for diabetes in those without symptoms. Testing for diabetes will also detect individuals at increased future risk for diabetes, herein referred to as having prediabetes.

Criteria for testing for diabetes in asymptomatic adult individuals
1. Testing should be considered in all adults who are overweight (BMI ≥25 kg/m^2^\*) and have additional risk factors: