sentences
sequence
labels
sequence
[ "The stilbenoids are naturally occurring stilbene derivatives. Examples include resveratrol and its cousin, pterostilbene. The stilbestrols, which are structurally but not synthetically related to (E)-stilbene, exhibit estrogenic activity. Members of this group include diethylstilbestrol, fosfestrol, and dienestrol. Some such derivative are produced by condensation of coenzyme A derivatives of cinnamic acid or 4-hydroxycinnamic acid and the malonic acid.", "Many syntheses have been developed. One popular route entails reduction of benzoin using zinc amalgam.\n:CH&ndash;CH(OH)&ndash;C(=O)&ndash;CH trans-CH&ndash;CH=CH&ndash;CH\nBoth isomers of stilbene can be produced by decarboxylation of α-phenylcinnamic acid, trans-stilbene being produced from the of the acid.\nRichard F. Heck and Tsutomu Mizoroki independently reported the synthesis of trans-stilbene by coupling of iodobenzene and styrene using a palladium(II) catalyst, in what is now known as the Mizoroki-Heck reaction. The Mizoroki approach produced the higher yield.\nStilbene undergoes reactions typical of alkenes. Trans-stilbene undergoes epoxidation with peroxymonophosphoric acid, HPO, producing a 74% yield of trans-stilbene oxide in dioxane. The epoxide product formed is a racemic mixture of the two enantiomers of 1,2-diphenyloxirane. The achiral meso compound (1R,2S)-1,2-diphenyloxirane arises from cis-stilbene, though peroxide epoxidations of the cis-isomer produce both cis- and trans-epoxide products. For example, using tert-butyl hydroperoxide, oxidation of cis-stilbene produces 0.8% cis-stilbene oxide, 13.5% trans-stilbene oxide, and 6.1% benzaldehyde. Enantiopure stilbene oxide has been prepared by Nobel laureate Karl Barry Sharpless.\nStilbene can be cleanly oxidised to benzaldehyde by ozonolysis or Lemieux–Johnson oxidation, and stronger oxidants such as acidified potassium permanganate will produce benzoic acid. Vicinal diols can be produced via the Upjohn dihydroxylation or enantioselectively using Sharpless asymmetric dihydroxylation with enantiomeric excesses as high as 100%.\nBromination of trans-stilbene produces predominantly meso-1,2-dibromo-1,2-diphenylethane (sometimes called meso-stilbene dibromide), in line with a mechanism involving a cyclic bromonium ion intermediate of a typical electrophilic bromine addition reaction; cis-stilbene yields a racemic mixture of the two enantiomers of 1,2-dibromo-1,2-diphenylethane in a non-polar solvent such as carbon tetrachloride, but the extent of production of the meso compound increases with solvent polarity, with a yield of 90% in nitromethane. The formation of small quantities of the two enantiomers of stilbene dibromide from the trans-isomer suggests that the bromonium ion intermediate exists in chemical equilibrium with a carbocation intermediate PhCHBr&ndash;C(H)Ph with a vacant p orbital vulnerable to nucleophilic attack from either face. The addition of bromide or tribromide salts restores much of the stereospecificity even in solvents with a dielectric constant above 35.\nUpon UV irradiation it converts to cis-stilbene, a classic example of a photochemical reaction involving trans-cis isomerization, and can undergo further reaction to form phenanthrene.", "(E)-Stilbene itself is of little value, but it is a precursor to other derivatives used as dyes, optical brighteners, phosphors, and scintillators. 
Stilbene is one of the gain media used in dye lasers.\nDisodium 4,4′-dinitrostilbene-2,2′-disulfonate is prepared by the sulfonation of 4-nitrotoluene to form 4-nitrotoluene-2-sulfonic acid, which can then be oxidatively coupled using sodium hypochlorite to form the (E)-stilbene derivative in a process originally developed by Arthur George Green and André Wahl in the late nineteenth century. Improvements to the process with higher yields have been developed, using air oxidation in liquid ammonia. The product is useful as its reaction with aniline derivatives results in the formation of azo dyes. Commercially important dyes derived from this compound include Direct Red 76, Direct Brown 78, and Direct Orange 40.", "(E)-Stilbene, commonly known as trans-stilbene, is an organic compound represented by the condensed structural formula C₆H₅CH=CHC₆H₅. Classified as a diarylethene, it features a central ethylene moiety with one phenyl group substituent on each end of the carbon–carbon double bond. It has (E) stereochemistry, meaning that the phenyl groups are located on opposite sides of the double bond, the opposite of its geometric isomer, cis-stilbene. Trans-stilbene occurs as a white crystalline solid at room temperature and is highly soluble in organic solvents. It can be converted to cis-stilbene photochemically, and further reacted to produce phenanthrene.\nStilbene was discovered in 1843 by the French chemist Auguste Laurent. The name \"stilbene\" is derived from the Greek word στίλβω (stilbo), which means \"I shine\", on account of the lustrous appearance of the compound.", "Stilbene exists as two possible stereoisomers. One is trans-1,2-diphenylethylene, called (E)-stilbene or trans-stilbene. The second is cis-1,2-diphenylethylene, called (Z)-stilbene or cis-stilbene, and is sterically hindered and less stable because the steric interactions force the aromatic rings out-of-plane and prevent conjugation. Cis-stilbene is a liquid at room temperature (melting point: ), while trans-stilbene is a crystalline solid which does not melt until around , illustrating that the two isomers have significantly different physical properties.", "Stilbene exists as two possible isomers known as (E)-stilbene and (Z)-stilbene. (Z)-Stilbene is sterically hindered and less stable because the steric interactions force the aromatic rings 43° out-of-plane and prevent conjugation. (Z)-Stilbene has a melting point of , while (E)-stilbene melts around , illustrating that the two compounds are quite different.", "Many stilbene derivatives (stilbenoids) are present naturally in plants. Examples include resveratrol and its cousin, pterostilbene.", "* Stilbene will typically have the chemistry of a diarylethene, a conjugated alkene.\n* Stilbene can undergo photoisomerization under the influence of UV light.\n* Stilbene can undergo stilbene photocyclization, an intramolecular reaction.\n* (Z)-Stilbene can undergo electrocyclic reactions.", "* Stilbene is used in the manufacture of dyes and optical brighteners, and also as a phosphor and a scintillator.\n* Stilbene is one of the gain media used in dye lasers.", "(Z)-Stilbene is a diarylethene, that is, a hydrocarbon consisting of a cis ethene double bond substituted with a phenyl group on both carbon atoms of the double bond. 
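The E/Z distinction drawn in these passages can be made concrete with a short illustrative sketch (not part of the source text). The SMILES strings below are standard encodings of the two isomers; everything else (variable names, printed messages) is an assumption made for the example, and the snippet only shows how RDKit, if available, reports the stereochemistry of the central double bond.

```python
# Illustrative only: encode the two stilbene isomers and report their C=C stereo flags.
from rdkit import Chem

isomers = {
    "trans-stilbene (E)": "c1ccccc1/C=C/c1ccccc1",
    "cis-stilbene (Z)": "c1ccccc1/C=C\\c1ccccc1",
}

for name, smiles in isomers.items():
    mol = Chem.MolFromSmiles(smiles)
    Chem.AssignStereochemistry(mol, cleanIt=True, force=True)
    for bond in mol.GetBonds():
        stereo = bond.GetStereo()
        if stereo != Chem.BondStereo.STEREONONE:
            # Expected: STEREOE for the trans isomer, STEREOZ for the cis isomer.
            print(f"{name}: central double bond is {stereo}")
```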
The name stilbene was derived from the Greek word στίλβω (stilbo), which means \"shining\".", "21st Century Medicine (21CM) is a California cryobiological research company which has as its primary focus the development of perfusates and protocols for viable long-term cryopreservation of human organs, tissues and cells at temperatures below −100 °C through the use of vitrification. 21CM was founded in 1993.\nIn 2004 21CM received a $900,000 grant from the U.S. National Institutes of Health (NIH) to study a preservation solution developed by the University of Rochester in New York for extending simple cold storage time of human hearts removed for transplant.\nAt the July 2005 annual conference of the Society for Cryobiology, 21st Century Medicine announced the vitrification of a rabbit kidney to −135 °C with their vitrification mixture. Upon rewarming, the kidney was successfully transplanted into a rabbit, the rabbit being euthanized on the 48th day for histological follow-up.\nOn February 9, 2016, 21st Century Medicine won the Small Mammal Brain Preservation Prize. On March 13, 2018, they won the Large Mammal Brain Preservation Prize.", "Extrusion-based printing is a very common technique within the field of 3D printing which entails extruding, or forcing, a continuous stream of melted solid material or viscous liquid through a sort of orifice, often a nozzle or syringe. In extrusion-based bioprinting, there are four main types of extrusion. These are pneumatic-driven, piston-driven, screw-driven and eccentric screw-driven (also known as progressing cavity pump). Each extrusion method has its own advantages and disadvantages. Pneumatic extrusion uses pressurized air to force liquid bioink through a depositing agent. Air filters are commonly used to sterilize the air before it is used, to ensure the air pushing the bioink is not contaminated. Piston-driven extrusion utilizes a piston connected to a guide screw. The linear motion of the piston squeezes material out of the nozzle. Screw-driven extrusion uses an auger screw to extrude material using rotational motion. Screw-driven devices allow for the use of higher-viscosity materials and provide more volumetric control. Eccentric screw-driven systems allow for a much more precise deposition of low- to high-viscosity materials due to the self-sealing chambers in the extruder. Once printed, many materials require a crosslinking step to achieve the desired mechanical properties for the construct, which can be achieved for example with the treatment of chemical agents or photo-crosslinkers.\nDirect extrusion is one of the most common extrusion-based bioprinting techniques, wherein pressurized force directs the bioink to flow out of the nozzle and print the scaffold directly, without any casting step. The bioink itself for this approach can be a blend of polymer hydrogels, naturally derived materials such as collagen, and live cells suspended in the solution. In this manner, scaffolds can be cultured post-print without the need for further treatment for cellular seeding. Some work on direct printing techniques focuses on the use of coaxial nozzle assemblies, or coaxial extrusion. The coaxial nozzle setup enables the simultaneous extrusion of multiple material bioinks, capable of making multi-layered scaffolds in a single extrusion step. 
Layered extrusion via these techniques has proved desirable for developing tubular structures because of the radial variability in material characteristics it can offer, as the coaxial nozzle provides an inner and an outer tube for bioink flow. Indirect extrusion techniques for bioprinting instead require the printing of a base material of cell-laden hydrogels that, unlike in direct extrusion, contains a sacrificial hydrogel which can be readily removed post-printing through thermal or chemical extraction. The remaining resin solidifies and becomes the desired 3D-printed construct.", "Three dimensional (3D) bioprinting is the utilization of 3D printing–like techniques to combine cells, growth factors, bio-inks, and biomaterials to fabricate functional structures that were traditionally used for tissue engineering applications but in recent times have seen increased interest in other applications such as biosensing and environmental remediation. Generally, 3D bioprinting utilizes a layer-by-layer method to deposit materials known as bio-inks to create tissue-like structures that are later used in various medical and tissue engineering fields. 3D bioprinting covers a broad range of bioprinting techniques and biomaterials. Currently, bioprinting can be used to print tissue and organ models to help research drugs and potential treatments. Nonetheless, translation of bioprinted living cellular constructs into clinical application is met with several issues due to the complexity and cell number necessary to create functional organs. However, innovations span from bioprinting of extracellular matrix to mixing cells with hydrogels deposited layer by layer to produce the desired tissue. In addition, 3D bioprinting has begun to incorporate the printing of scaffolds which can be used to regenerate joints and ligaments. Apart from these, 3D bioprinting has recently been used in environmental remediation applications, including the fabrication of functional biofilms that host functional microorganisms that can facilitate pollutant removal.", "Pre-bioprinting is the process of creating a model that the printer will later fabricate and choosing the materials that will be used. One of the first steps is to obtain a biopsy of the organ, to sample cells. Common technologies used for bioprinting are computed tomography (CT) and magnetic resonance imaging (MRI). To print with a layer-by-layer approach, tomographic reconstruction is done on the images. The resulting 2D images are then sent to the printer to be fabricated. Once the image is created, certain cells are isolated and multiplied. These cells are then mixed with a special liquefied material that provides oxygen and other nutrients to keep them alive. This aggregation of cells does not require a scaffold, and is needed for placement into tubular-like tissue fusion processes such as extrusion.", "In the second step, the liquid mixtures of cells, matrix, and nutrients known as bioinks are placed in a printer cartridge and deposited using the patients' medical scans. When a bioprinted pre-tissue is transferred to an incubator, this cell-based pre-tissue matures into a tissue.\n3D bioprinting for fabricating biological constructs typically involves dispensing cells onto a biocompatible scaffold using a successive layer-by-layer approach to generate tissue-like three-dimensional structures. 
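As a purely illustrative aside (not taken from the source, and not any specific printer's API), the layer-by-layer idea can be sketched in a few lines of Python: one layer outline is repeated at increasing heights, which is roughly what an extrusion printer's motion planner does before material- and cell-specific parameters are layered on top. All names and numbers below are hypothetical.

```python
# Toy sketch of layer-by-layer path generation for an extrusion-style printer.
# All names and numbers are hypothetical and for illustration only.

def rectangle_outline(width_mm, depth_mm, z_mm):
    """Corner points of one closed rectangular layer outline at height z."""
    return [(0.0, 0.0, z_mm), (width_mm, 0.0, z_mm),
            (width_mm, depth_mm, z_mm), (0.0, depth_mm, z_mm),
            (0.0, 0.0, z_mm)]

def layered_scaffold(width_mm=10.0, depth_mm=10.0, layer_height_mm=0.2, n_layers=5):
    """Stack identical layer outlines to form a simple 3D construct."""
    return [rectangle_outline(width_mm, depth_mm, i * layer_height_mm)
            for i in range(n_layers)]

if __name__ == "__main__":
    for i, layer in enumerate(layered_scaffold()):
        print(f"layer {i}: {layer}")
```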
Artificial organs such as livers and kidneys made by 3D bioprinting have been shown to lack crucial elements that affect the body, such as working blood vessels, tubules for collecting urine, and the growth of billions of cells required for these organs. Without these components, the body has no way to get essential nutrients and oxygen deep within the organs' interiors. Given that every tissue in the body is naturally composed of different cell types, many technologies for printing these cells vary in their ability to ensure stability and viability of the cells during the manufacturing process. Some of the methods that are used for 3D bioprinting of cells are photolithography, magnetic 3D bioprinting, stereolithography, and direct cell extrusion.", "The post-bioprinting process is necessary to create a stable structure from the biological material. If this process is not well-maintained, the mechanical integrity and function of the 3D printed object are at risk. To maintain the object, both mechanical and chemical stimulations are needed. These stimulations send signals to the cells to control the remodeling and growth of tissues. In addition, in recent developments, bioreactor technologies have allowed the rapid maturation of tissues, vascularization of tissues and the ability to survive transplants.\nBioreactors work by either providing convective nutrient transport, creating microgravity environments, changing the pressure to cause solution to flow through the cells, or adding compression for dynamic or static loading. Each type of bioreactor is ideal for different types of tissue; for example, compression bioreactors are ideal for cartilage tissue.", "The first approach of bioprinting is called biomimicry. The main goal of this approach is to create fabricated structures that are identical to the natural structures found in the tissues and organs of the human body. Biomimicry requires duplication of the shape, framework, and the microenvironment of the organs and tissues. The application of biomimicry in bioprinting involves creating both identical cellular and extracellular parts of organs. For this approach to be successful, the tissues must be replicated on a micro scale. Therefore, it is necessary to understand the microenvironment, the nature of the biological forces in this microenvironment, the precise organization of functional and supporting cell types, solubility factors, and the composition of extracellular matrix.", "The second approach of bioprinting is autonomous self-assembly. This approach relies on the physical process of embryonic organ development as a model to replicate the tissues of interest. When cells are in their early development, they create their own extracellular matrix building block, the proper cell signaling, and independent arrangement and patterning to provide the required biological functions and micro-architecture. Autonomous self-assembly demands specific information about the developmental techniques of the tissues and organs of the embryo. There is a \"scaffold-free\" model that uses self-assembling spheroids that are subjected to fusion and cell arrangement to resemble evolving tissues. Autonomous self-assembly depends on the cell as the fundamental driver of histogenesis, guiding the building blocks, structural and functional properties of these tissues. 
It demands a deeper understanding of how embryonic tissue mechanisms develop, as well as of the microenvironment in which they are surrounded, in order to create the bioprinted tissues.", "The third approach of bioprinting is a combination of both the biomimicry and self-assembly approaches, called mini tissues. Organs and tissues are built from very small functional components. The mini-tissue approach takes these small pieces and arranges them into a larger framework.", "Akin to ordinary ink printers, bioprinters have three major components to them. These are the hardware used, the type of bio-ink, and the material it is printed on (biomaterials). Bio-ink is a material made from living cells that behaves much like a liquid, allowing people to print it in order to create the desired shape. To make bio-ink, scientists create a slurry of cells that can be loaded into a cartridge and inserted into a specially designed printer, along with another cartridge containing a gel known as \"bio-paper\". In bioprinting, there are three major types of printers that have been used. These are inkjet, laser-assisted, and extrusion printers. Inkjet printers are mainly used in bioprinting for fast and large-scale products. One type of inkjet printer, called a drop-on-demand inkjet printer, prints materials in exact amounts, minimizing cost and waste. Printers that utilize lasers provide high-resolution printing; however, these printers are often expensive. Extrusion printers print cells layer by layer, just like 3D printing, to create 3D constructs. In addition to just cells, extrusion printers may also use hydrogels infused with cells.", "Researchers in the field have developed approaches to produce living organs that are constructed with the appropriate biological and mechanical properties. 3D bioprinting is based on three main approaches: biomimicry, autonomous self-assembly and mini-tissue building blocks.", "Bioprinting can also be used for cultured meat. In 2021, a steak-like cultured meat, composed of three types of bovine cell fibers, was produced. The Wagyu-like beef has a structure similar to the original meat. This technology provides an alternative to natural meat harvesting methods if the livestock industry is plagued by disease. In addition, it provides a possible solution to reducing the environmental impact of the livestock industry.", "Another form of bioprinting involves an inkjet printer, which is primarily utilized in biomedical settings. This method prints detailed proteins and nucleic acids. Hydrogels are commonly selected as the bioink. Cells can be printed onto a selected surface medium to proliferate and ultimately differentiate. A drawback of this printing method is the tendency of bioinks such as hydrogels to clog the printing nozzle, due to their high viscosity. Ideal inkjet bioprinting involves using a low polymer viscosity (ideally below 10 centipoise), low cell density (<10 million cells/mL), and low structural heights.", "There are several other bioprinting techniques which are less commonly used. Droplet-based bioprinting is a technique in which the bioink blend of cells and/or hydrogels is placed in droplets in precise positions. Most common among these approaches are thermal and piezoelectric drop-on-demand techniques. This method of bioprinting is often used experimentally with lung and ovarian cancer models. Thermal technologies use short-duration signals to heat the bioink, inducing the formation of small bubbles which are ejected. 
In piezoelectric bioprinting, a short-duration current is applied to a piezoelectric actuator, which induces a mechanical vibration capable of ejecting a small globule of bioink through the nozzle. A significant aspect of the study of droplet-based approaches to bioprinting is accounting for the mechanical and thermal stress that cells within the bioink experience near the nozzle tip as they are extruded.", "Bioinks are essential components of the bioprinting process. They are composed of living cells and enzymatic supplements to nurture an environment that supports the biological needs of the printed tissue. The environment created by the bioink allows the cells to attach, grow, and differentiate into their adult form. Cell-encapsulating hydrogels are utilized in extrusion-based bioprinting methods, while gelatin methacryloyl (GelMA) and acellular bioinks are most often used in tissue engineering techniques that require cross-linkage and precise structural integrity. It is essential for bioinks to help replicate the extracellular matrix environment that the cell would naturally occur in.", "3D bioprinting can be used to reconstruct tissue from various regions of the body. The precursor to the adoption of 3D printing in healthcare was a series of trials conducted by researchers at Boston Children's Hospital. The team built replacement urinary bladders by hand for seven patients by constructing scaffolds, then layering the scaffolds with cells from the patients and allowing them to grow. The trials were a success as the patients remained in good health 7 years after implantation, which led a research fellow named Anthony Atala, MD, to search for ways to automate the process. Patients with end-stage bladder disease can now be treated by using bio-engineered bladder tissues to rebuild the damaged organ. This technology can also potentially be applied to bone, skin, cartilage and muscle tissue. One long-term goal of 3D bioprinting technology is to reconstruct an entire organ and so minimize the problem of the lack of organs for transplantation; however, there has been little success in bioprinting of fully functional organs, e.g. liver, skin, meniscus or pancreas. Unlike implantable stents, organs have complex shapes and are significantly harder to bioprint. A bioprinted heart, for example, must not only meet structural requirements, but also vascularization, mechanical load, and electrical signal propagation requirements. In 2022, the first success of a clinical trial for a 3D bioprinted transplant that is made from the patient's own cells, an external ear to treat microtia, was reported.\n3D bioprinting contributes to significant advances in the medical field of tissue engineering by allowing for research to be done on innovative materials called biomaterials. Some of the most notable bioengineered substances are usually stronger than the average bodily materials, including soft tissue and bone. These constituents can act as future substitutes, even improvements, for the original body materials. In addition, the Defense Threat Reduction Agency aims to print mini organs such as hearts, livers, and lungs for the potential to test new drugs more accurately and perhaps eliminate the need for testing in animals.", "Laser-based bioprinting can be split into two major classes: those based on cell transfer technologies or photo-polymerization. In cell transfer laser printing, a laser stimulates the connection between energy-absorbing material (e.g. gold, titanium, etc.) 
and the bioink. This donor layer vaporizes under the laser's irradiation, forming a bubble in the bioink layer, which is deposited as a jet. Photo-polymerization techniques instead use photoinitiated reactions to solidify the ink, moving the beam path of a laser to induce the formation of a desired construct. Certain laser frequencies paired with photopolymerization reactions can be carried out without damaging cells in the material.", "Bioremediation utilizes microorganisms or, in recent times, materials of biological origin, such as enzymes, biocomposites, biopolymers, or nanoparticles, to biochemically degrade contaminants into harmless substances, making it an environmentally friendly and cost-effective alternative. 3D bioprinting facilitates the fabrication of functional structures utilizing these materials that enhance bioremediation processes, leading to significant interest in the application of 3D bioprinted constructs to improving bioremediation.", "The bioprinting of biofilms utilizes the same methods as other bioprinting. Oftentimes, the biofilm begins with an extrusion of a polysaccharide to provide structure for biofilm growth. An example of one of these polysaccharides is alginate. The alginate structure can have microbes embedded within the structure. Hydrogels can also be used to assist in the formation of functional biofilms. Biofilms are difficult to analyze in a laboratory setting due to the complex structure and the time it takes for a functional biofilm to form. 3D bioprinting biofilms allows us to skip certain processes and makes it easier to analyze functional biofilms. The thickness of the printed biofilm will change its functionality due to nutrient and oxygen diffusion; thicker 3D-printed biofilms, for example, will naturally select for anaerobes.\nBiofilms are capable of remediation in the natural environment, which suggests there is potential for the use of 3D bioprinted biofilms in environmental remediation. Microbes are able to degrade a large range of chemicals and metals, and providing a structure in which these microbes can flourish, such as a biofilm, is beneficial. Artificial biofilms protect the microbes from the dangers of the environment while promoting signaling and overall microbial interactions. 3D bioprinting allows functional microorganisms to be placed in structures that provide mechanical stability and protect them from environmental conditions. The larger contact area provided by 3D printed structures compared to normal environmental structures provides more efficient removal of pollutants.", "In this form of printing, plastic residues are melted down and individually layered in sections to create a desired shape. Nylon and PVA are examples of biomaterials utilized in this method. This technique is most often used to design prototypes for prosthetics and cartilage construction.", "Bioprinting also has possible future uses in assisting in wastewater treatment and in corrosion control. When humans come in contact with environmental biofilms, it is possible for infections and long-term health hazards to occur. Antibiotic penetration and expansion within a biofilm is an area of research which can benefit from bioprinting techniques, to further explore the effect of environmental biofilms on human health. Biofilm printing requires further research due to limited published data and complex protocols.", "4,4′-Diamino-2,2′-stilbenedisulfonic acid is the organic compound with the formula (H₂NC₆H₃SO₃H)₂(CH)₂. 
It is a white, water-soluble solid. Structurally, it is a derivative of trans-stilbene, containing amino and sulfonic acid functional groups on each of the two phenyl rings. \nThe compound is a popular optical brightener for use in laundry detergents. \nIt is produced by reduction of 4,4′-dinitro-2,2′-stilbenedisulfonic acid with iron powder.", "Reddi is the founder of the International Conference on Bone Morphogenetic Proteins (BMPs). He organized the first conference at the Johns Hopkins University School of Medicine in 1994. The conference is held every two years, rotating between the United States and an international venue.", "Hari Reddi received his PhD from the University of Delhi in reproductive endocrinology under the mentorship of M.R.N. Prasad. Reddi did postdoctoral work with [http://www.uchospitals.edu/news/2004/20040602-williams-ashman.html Howard Guy Williams-Ashman] at the Johns Hopkins University School of Medicine. Reddi was also a student of Charles Brenton Huggins, the winner of the 1966 Nobel Prize with Peyton Rous for the endocrine regulation of cancer.", "Professor Reddi discovered that bone induction is a sequential multistep cascade involving chemotaxis, mitosis, and differentiation. Early studies in his laboratory at the University of Chicago and the National Institutes of Health unraveled the sequence of events involved in bone matrix-induced bone morphogenesis. Using a battery of in vitro and in vivo bioassays for bone formation, a systematic study was undertaken in his laboratory to isolate and purify putative bone morphogenetic proteins. Reddi and colleagues were the first to identify BMPs as pleiotropic regulators, acting in a concentration-dependent manner. They first demonstrated that BMPs bind the extracellular matrix, are present at the apical ectodermal ridge in the developing limb bud, are chemotactic for human monocytes, and have neurotropic potential. His laboratory pioneered the use of BMPs in regenerative orthopedics and dentistry.", "A. Hari Reddi (born October 20, 1942) is a Distinguished Professor and holder of the Lawrence J. Ellison Endowed Chair in Musculoskeletal Molecular Biology at the University of California, Davis. He was previously the Virginia M. and William A. Percy Chair and Professor in Orthopaedic Surgery, Professor of Biological Chemistry, and Professor of Oncology at the Johns Hopkins University School of Medicine. Professor Reddi's research played an indispensable role in the identification, isolation and purification of bone morphogenetic proteins (BMPs) that are involved in bone formation and repair. The molecular mechanism of bone induction studied by Professor Reddi led to the conceptual advance in tissue engineering that morphogens in the form of metabologens bound to an insoluble extracellular matrix scaffolding act in collaboration to stimulate stem cells to form cartilage and bone. The Reddi laboratory has also made important discoveries unraveling the role of the extracellular matrix in bone and cartilage tissue regeneration and repair.", "Most of the solid blanket materials that surround the fusion chamber in conventional designs are replaced by a fluorine lithium beryllium (FLiBe) molten salt that can easily be circulated/replaced, reducing maintenance costs.\nThe liquid blanket provides neutron moderation and shielding, heat removal, and a tritium breeding ratio ≥ 1.1. 
The large temperature range over which FLiBe is liquid permits blanket operation at with single-phase fluid cooling and a Brayton cycle.", "To achieve a near tenfold increase in fusion power density, the design makes use of REBCO superconducting tape for its toroidal field coils. This material enables higher magnetic field strength to contain heated plasma in a smaller volume. In theory, fusion power density is proportional to the fourth power of the magnetic field strength. The most probable candidate material is yttrium barium copper oxide, with a design temperature of , allowing various coolants (e.g. liquid hydrogen, liquid neon, or helium gas) instead of the much more complicated liquid helium refrigeration chosen by ITER. The official SPARC brochure displays a YBCO cable section that is commercially available and that should allow fields up to 30 T.\nARC is planned to be a 270 MWe tokamak reactor with a major radius of , a minor radius of , and an on-axis magnetic field of .\nThe design point has a fusion energy gain factor Q ≈ 13.6 (the plasma produces 13 times more fusion energy than is required to heat it), yet is fully non-inductive, with a bootstrap fraction of ~63%.\nThe design is enabled by the ~23 T peak field on coil. External current drive is provided by two inboard RF launchers using of lower hybrid and of ion cyclotron fast wave power. The resulting current drive provides a steady-state core plasma far from disruptive limits.", "The ARC design incorporates major departures from traditional tokamaks, while retaining conventional D–T (deuterium - tritium) fuel.", "The project was announced in 2014. The name and design were inspired by the fictional arc reactor built by Tony Stark, who attended MIT in the comic books.\nThe concept was born as \"a project undertaken by a group of MIT students in a fusion design course. The ARC design was intended to show the capabilities of the new magnet technology by developing a point design for a plant producing as much fusion power as ITER at the smallest possible size. The result was a machine about half the linear dimension of ITER, running at 9 tesla and producing more than 500 megawatt (MW) of fusion power. The students also looked at technologies that would allow such a device to operate in steady state and produce more than of electricity.\"", "The ARC fusion reactor (affordable, robust, compact) is a design for a compact fusion reactor developed by the Massachusetts Institute of Technology (MIT) Plasma Science and Fusion Center (PSFC). ARC aims to achieve an engineering breakeven of three (to produce three times the electricity required to operate the machine). The key technical innovation is to use high-temperature superconducting magnets in place of ITER's low-temperature superconducting magnets. The proposed device would be about half the diameter of the ITER reactor and cheaper to build.\nThe ARC has a conventional advanced tokamak layout. ARC uses rare-earth barium copper oxide (REBCO) high-temperature superconductor magnets in place of copper wiring or conventional low-temperature superconductors. These magnets can be run at much higher field strengths, 23 T, roughly doubling the magnetic field on the plasma axis. The confinement time for a particle in plasma varies with the square of the linear size, and power density varies with the fourth power of the magnetic field, so doubling the magnetic field offers the performance of a machine 4 times larger. 
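Stated compactly (a hedged restatement of the scaling quoted above, with τ_E the energy confinement time, L the linear machine size, and B the on-axis magnetic field; the proportionalities are those given in the text, and the arithmetic consequence is the only addition):

```latex
% Scaling relations quoted above, restated in LaTeX
\tau_E \;\propto\; L^{2},
\qquad
\frac{P_{\text{fusion}}}{V} \;\propto\; B^{4}
\quad\Longrightarrow\quad
\text{doubling } B \text{ at fixed size raises the achievable power density by } 2^{4} = 16\times .
```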
The smaller size reduces construction costs, although this is offset to some degree by the expense of the REBCO magnets.\nThe use of REBCO may allow the magnet windings to be flexible when the machine is not operational. This would allow them to be \"folded open\" to allow access to the interior of the machine. This would greatly lower maintenance costs, eliminating the need to perform maintenance through small access ports using remote manipulators. If realized, this could improve the reactor's capacity factor, an important metric in power generation costs.\nThe first machine planned to come from the project is a scaled-down demonstrator named SPARC (as Soon as Possible ARC). It is to be built by Commonwealth Fusion Systems, with backing led by Eni, Breakthrough Energy Ventures, Khosla Ventures, Temasek, and Equinor.", "The design includes a removable vacuum vessel (the solid component that separates the plasma and the surrounding vacuum from the liquid blanket). It does not require dismantling the entire device. That makes it well-suited for evaluating design changes.", "India's top court ruled that authorities must regulate the sale of acid. The Supreme Court's ruling on 16 July 2013 came after an incident in which four sisters suffered severe burns after being attacked with acid by two men on a motorbike. Acid that is designed to clean rusted tools and is often used in the attacks can be bought over the counter. But the judges said the buyer of such acids should in future have to provide a photo identity card to any retailer when they make a purchase. The retailers must register the name and address of the buyer. In 2013, section 326 A of the Indian Penal Code was enacted by the Indian Parliament to ensure enhanced punishment for acid throwing.", "Recent studies on acid attacks in Cambodia found the victims were almost equally likely to be men or women (48.4% men, 51.6% women). As with India, rates of acid attacks in Cambodia have generally increased in the past decades, with a high rate of 40 cases reported for 2000 that started the increasing trend. According to the Cambodian Acid Survivors Charity, 216 acid attacks were reported from 1985 to 2009, with 236 reported victims. Jealousy and hate are the biggest motivators for acid attacks in Cambodia, as 28% of attacks reported those emotions as the cause. Such assaults were not only perpetrated by men – some reports suggest attacks by women on other women occur more frequently than those by men. Such incidents usually occur between a man's wife and his mistress in a bid to attain power and socioeconomic security.\nA particularly high-profile case of this nature was the attack on Cambodian teenager Tat Marina in 1999, allegedly carried out by the jealous wife of a government official (the incident prompted a rash of copycat crimes that year, raising the number from seven in 1998 to 40 in 1999). One-third of the victims are bystanders. In Cambodia, there is only one support center that aims to help acid attack survivors. There they can receive medical and legal support.", "Many countries have begun pushing for legislation addressing acid attacks, and a few have recently employed new laws against this crime. Under the Qisas law of Pakistan, the perpetrator may suffer the same fate as the victim, and may be punished by having drops of acid placed in their eyes. This law is not binding and is rarely enforced, according to a report in The New York Times. 
In Pakistan, the Lower House of Parliament unanimously passed the Acid Control and Acid Crime Prevention Bill on 10 May 2011. According to the bill, individuals held responsible for acid attacks face harsh fines and life in prison as punishment. However, the country with the most specific, effective legislation against acid attacks is Bangladesh, and such legal action has resulted in a steady 20–30% decrease in acid violence for the past few years. In 2013, India introduced an amendment to the Indian Penal Code through the Criminal Law (Amendment) Act, 2013, making acid attacks a specific offence with a punishment of imprisonment of not less than 10 years, which can extend to life imprisonment, and with a fine.", "Acid has been used in metallurgy and for etching since ancient times. The rhetorical and theatrical term \"La Vitrioleuse\" was coined in France after a \"wave of vitriolage\" occurred, according to the popular press, in which, in 1879, 16 cases of vitriol attacks were widely reported as crimes of passion perpetrated predominantly by women against other women. Much was made of the idea that women, no matter how few, had employed such violent means to an end. On 17 October 1915, acid was fatally thrown on Prince Leopold Clement of Saxe-Coburg and Gotha, heir to the House of Koháry, by his distraught mistress, Camilla Rybicka, who then killed herself. Sensationalizing such incidents made for lucrative newspaper sales. Similarly, multiple acid attacks were reported in the UK in the nineteenth century and the first half of the twentieth century. Again, these were seen as a crime carried out by women, although in practice perpetrators were as likely to be male as female.\nThe use of acid as a weapon began to rise in many developing nations, specifically those in South Asia. The first recorded acid attacks in South Asia occurred in Bangladesh in 1967, India in 1982, and Cambodia in 1993. Since then, research has witnessed an increase in the quantity and severity of acid attacks in the region. However, this can be traced to significant underreporting in the 1980s and 1990s, along with a general lack of research on this phenomenon during that period.\nResearch shows acid attacks increasing in many developing nations, with the exception of Bangladesh, which has observed a decrease in incidence in the past few years.", "Acid attacks are common in Vietnam. Most victims are female. While the issue in other Asian countries like Cambodia, India and Pakistan is constantly monitored by domestic and transnational organizations, the situation in Vietnam is rather off the radar. Official statistics on acid attacks in the country are hard to come by. Most of Vietnam's acid attack victims spend their lives isolated and ignored and also blamed for their condition. Kevin Hawkins, an American lawyer working for the Vietnam-based law firm VILAF, notes the alarming prevalence of using acid in attacks, mostly for revenge and particularly in relation to failed romantic relationships or pursuits. The current Vietnamese penal code stipulates that those who use acid to attack their victims face a charge of “intentionally injuring others,” rather than “murder,” which thus fails to discourage potential offenders.", "Victor Riesel was a broadcast journalist, specializing in labor issues, who was attacked while leaving Lindy's restaurant in midtown Manhattan in the early morning of 5 April 1956. Riesel was left blind as a result. 
The attack was motivated by Riesel's reporting on the influence of organized crime on certain corrupt labor unions.\nIn 1959, American attorney Burt Pugach hired a man to throw lye (an alkaline rather than acid substance, but with similar corrosive effects) in the face of his ex-girlfriend Linda Riss. Riss suffered blindness and permanent scarring. Pugach served 14 years in prison for the incident.\nGabrielle White, a 22-year-old single mother living in Detroit, was attacked on 26 August 2006 by a stranger. She was left with third- and fourth-degree burns on her face, throat, and arms, leaving her blind and without one ear. She also miscarried her unborn child. A 25-year-old nursing student at Merritt College was the victim of an acid attack.\nEsperanza Medina walked out of her Logan Square apartment in Chicago, Illinois, on a July morning in 2008, heading to her job as a social worker. Three teenagers poured cups of battery acid on the head of Medina, a 48-year-old mother of four.\nOn 30 August 2010, Bethany Storro, 28, of Vancouver, Washington, made national headlines after she claimed a stranger, whom she described as an African American woman, approached her on a walk and threw a cup of acid in her face, resulting in serious burns. Two weeks later, Storro admitted that she herself had lied about the attack and had, in fact, poured the acid on herself. She attributed her actions to untreated body dysmorphic disorder and pleaded guilty to lying to police, a misdemeanor. She was also charged with three counts of second-degree theft in regard to donations she had received to help aid her in her recovery, but these charges were dropped after she repaid the money. It was reported in February 2013 that she spent one year in a mental health facility and had written a book, Facing the Truth.\nIn 2017, a 17-year-old girl was permanently scarred by an acid attack in Dallas. In November 2019, a man in Milwaukee was attacked and sustained multiple burns.\nIn April 2021, a student at Hofstra University suffered severe injuries to her face, arms, and throat from an acid attack carried out with battery acid. The assailant remains at large.", "The UK at times has had one of the highest rates of acid attacks per capita in the world, though recent studies suggest that this is due to gang-related violence and possession offences, rather than traditional attacks found in lower middle-income countries, according to Acid Survivors Trust International (ASTI). NHS hospital figures record 144 assaults in 2011–2012 involving corrosive substances, which can include petrol, bleach and kerosene. Six years earlier, 56 such episodes were noted. The official records for 2017–2018 show 150 patients in the UK admitted to hospital for \"Assault by corrosive substance\". In 2016, the Metropolitan Police in London recorded 454 attacks involving corrosive fluids in the city, with 261 in the previous year, indicating a rise of 36%. A rise of 30% was also recorded in the UK as a whole. Between 2005–2006 and 2011–2012 the number of assaults involving acid throwing and other corrosive substances tripled in England, official records show. According to London's Metropolitan Police, 2017 was the worst year for acid attacks in London, with 465 attacks recorded, up from 395 the previous year and 255 in 2015. Acid attacks in London continued to rise in 2017. In July 2017, the BBC's George Mann reported that police statistics showed that: \"Assaults involving corrosive substances have more than doubled in England since 2012. 
The vast majority of cases were in London.\" According to Time magazine, motives included organized crime, revenge, and domestic violence. According to Newham police, there is no trend of using acid in hate crimes.\nAccording to data from London's Metropolitan Police, a demographic breakdown of known suspects in London attacks for the period 2002–2016 showed White Europeans comprising 32% of suspects, Black Caribbeans 38%, and Asians 6%. Victims for the same period were 45% White Europeans, 25% Black Caribbeans and 19% Asian. Of the total population, whites constitute 60%, blacks 13%, and Asians 18% as per the 2011 census of London. Known suspects were overwhelmingly male: 77% of known suspects were male and just 2% female. Four out of five victims in 2016 were male. In January 2018, CNN reported that acid attacks in London increased six-fold between 2012 and 2017 and that 71% of attackers and 72% of victims were male.\nOn 3 October 2017, the UK government announced that sales of acids to under 18s would be banned.\nMark van Dongen chose to undergo euthanasia months after he was attacked by his ex-girlfriend Berlinah Wallace during the early hours of 23 September 2015. He was left paralysed, scarred, had his lower left leg amputated and lost the sight in his left eye, as well as most of the sight in his right eye, following the incident. Wallace was found guilty of \"throwing a corrosive substance with intent\" and received a life sentence with a minimum term of 12 years.\nIn April 2017, a man named Arthur Collins, the ex-boyfriend of Ferne McCann, threw acid across terrified clubbers inside a nightclub in east London, forcing a mass evacuation of 600 partygoers into the street. 22 people were injured in the attack. Collins was sentenced to 20 years for the attack. Another similar attack was the 2017 Beckton acid attack. Katie Piper was also attacked in 2008 with acid by her ex-boyfriend Daniel Lynch and an accomplice, Stefan Sylvestre.\nIn April 2019, a teenage girl, 13, and a woman, 63, were attacked by a man driving a white car, who poured sulphuric acid on them in Thornton Heath, South London.\nThe UK has subsequently banned possession of concentrated sulphuric acid without a licence, and incidents of acid attacks have dropped substantially.\nOn 31 January 2024, nine people, including three police officers, were hospitalised after Abdul Shakoor Ezedi threw an alkaline solution on a car in Clapham, south west London.", "In 2017, a Chinese Irish woman was targeted in an attack in Blackrock, Dublin, causing facial scars and eye damage. Another foreign woman was suspected of ordering the attack.\nIn 2018, Lithuanian criminals threw acid at a Garda (police officer).\nIn April 2019 in Waterford, three teenagers were attacked by two others, who threw acid at them in a premeditated attack. All three victims suffered severe skin burns in the incident, and one, Tega Agberhiere, suffered severe injuries to his face and body and his eyesight was damaged. Nevertheless, the perpetrators merely got cautions.\nOn 13 June 2020, a man was attacked with acid in Garryowen, Limerick.\nIn December 2020, a woman threw acid at three women in a takeaway in Tallaght.", "On 31 July 2018, Kateryna Handziuk, an anti-corruption activist and political advisor from the southern Ukrainian city of Kherson, was attacked with sulfuric acid outside her home by an unknown attacker. She died of her injuries on 3 November 2018. 
She was 33 years old.", "Though comprehensive statistics on acid attacks in South America are sparse, a recent study investigating acid assault in Bogota, Colombia, provides some insight for this region. According to the article, the first identified survivor of acid violence in Bogota was attacked in 1998. Since then, reported cases have been increasing with time. The study also cited the Colombian Forensics Institute, which reported that 56 women complained of aggression by acid in 2010, 46 in 2011, and 16 during the first trimester of 2012. The average age of survivors was about 23 years old, but ranged from 13 to 41 years.\nThe study reported a male-to-female victim ratio of 1:30 for acid assault in Bogota, Colombia, although recent reports show the ratio is closer to 1:1. Reasons behind these attacks usually stemmed from poor interpersonal relationships and domestic intolerance toward women. Moreover, female victims usually came from low socioeconomic classes and had low education. The authors state that the prevalence of acid attacks in other areas of South America remains unknown due to significant underreporting.\nOn 27 March 2014, a woman named Natalia Ponce de León was assaulted by Jonathan Vega, who threw a liter of sulphuric acid on her face and body. Vega, a former neighbor, was reported to have been \"obsessed\" with Ponce de León and had been making death threats against her after she turned down his proposal for a relationship. 24% of her body was severely burned as a result of the attack. Ponce de León has undergone 15 reconstruction surgeries on her face and body since the attack.\nThree years before the attack took place, Colombia reported one of the highest rates of acid attacks per capita in the world. However, there was not an effective law in place until Ponce de León's campaign took off in the months after her attack. The new law, which is named after her, defines acid attacks as a specific crime and increases maximum sentences to 50 years in prison for convicted offenders. The law also seeks to provide victims with better state medical care including reconstructive surgery and psychological therapy. Ponce de León expressed hope that the new law would act as a deterrent against future attacks.", "On 17 January 2013, Russian ballet dancer Sergei Filin was attacked with acid by an unknown assailant, who cornered him outside of his home in Moscow. He suffered third-degree burns to his face and neck. While it was initially reported that he was in danger of losing his eyesight, his physicians stated on 21 January 2013 that he would retain eyesight in one eye. Three men, including dancer Dmitrichenko, were subsequently sentenced to 4–10 years in prison each for orchestrating and executing the crime.", "In Pakistan, attacks are typically the work of husbands against their wives who have \"dishonored them\". Statistics compiled by the Human Rights Commission of Pakistan (HRCP) show that 46 acid attacks occurred in Pakistan during 2004, decreasing to only 33 reported acid assaults in 2007. According to a New York Times article, in 2011 there were 150 acid attacks in Pakistan, up from 65 in 2010. However, estimates by Human Rights Watch and the HRCP in 2006 cite the number of acid attack victims to be as high as 40–70 per year. Motivations behind acid assaults range from marriage proposal rejections to religious fundamentalism. 
Acid attacks dropped by half in 2019.\nAcid attacks in Pakistan came to international attention after the release of a documentary by Sharmeen Obaid-Chinoy called Saving Face (2012). According to Shahnaz Bukhari, the majority of these attacks occur in the summer when acid is used extensively to soak certain seeds to induce germination. Various reasons have been given for such attacks, such as a woman dressing inappropriately or rejecting a proposal of marriage.\nThe first known instance of an acid attack occurred in East Pakistan (present-day Bangladesh) in 1967. According to the Acid Survivors Foundation, up to 150 attacks occur every year. The foundation reports that the attacks are often the result of an escalation of domestic abuse, and the majority of victims are female.\nIn 2019, the Acid Survivors Foundation Pakistan (ASFP) said that the reported cases of acid attacks on women have dropped by around 50 per cent compared to the last five years.", "Drug cartels such as Los Zetas are known to use acid on civilians. For example, in the 2011 San Fernando massacre, Los Zetas members took away children from their mothers, and shot the rest of the civilians in a bus. The women were taken to a warehouse where many other women were held captive. Inside a dark room, the women were reportedly raped and beaten. Screams of the women and of the children being put in acid were also heard.", "In 1983, acid attacks were reported to be carried out by Mujama al-Islamiya against men and women who spoke out against the Mujama in the Islamic University of Gaza. Additional attacks by Mujama al-Islamiya were reported through 1986. During the First Intifada, Hamas and other Islamist factions conducted an organized intimidation of women to dress \"modestly\" or wear the hijab. Circulars were distributed specifying proper modest dress and behavior. Women who did not conform to these expectations, or to \"morality expectations\" of secular factions, were vulnerable to attacks which included pouring acid on their bodies, rock pelting, threats, and even rape. B'Tselem has also documented additional acid attacks, specifically attacks involving women in a collaboration context.\nIn 2006–07, as part of a wider campaign to enforce Islamist moral conduct, the al-Qaida-affiliated \"Suyuf al-Haq\" (Swords of Righteousness) claimed to have thrown acid on the faces of \"immodestly\" dressed women in Gaza as well as engaging in intimidation via threats. Following the 2014 Israel–Gaza conflict, Amnesty International claimed that Hamas used acid during interrogations as a torture technique. Hamas denies this claim. In 2016, during a teachers' strike, unknown assailants hurled acid in the face of a striking Palestinian teacher in Hebron.\nThere have also been recorded incidents of acid use against Israelis. In December 2014, a Palestinian hurled acid (concentrated vinegar, which contains a high percentage of acetic acid and can cause burns) into a car containing a Jewish family of six and a hitchhiker at a checkpoint between Beitar Illit and Husan in the West Bank, causing serious face injuries to the father and lightly injuring other occupants, including children. In September 2008, a Palestinian woman carried out two separate acid attacks against soldiers at Huwwara checkpoint, blinding a soldier in one eye.\nMoshe Hirsch was the leader of the anti-Zionist Neturei Karta group in Jerusalem. Hirsch had one glass eye due to an injury sustained when someone threw acid in his face. 
According to his cousin, journalist Abraham Rabinovich, the incident had no link with Hirsch's political activities but was connected to a real estate dispute.", "The most notable effect of an acid attack is the lifelong bodily disfigurement. According to the Acid Survivors Foundation in Bangladesh, there is a high survival rate amongst victims of acid attacks. Consequently, the victim is faced with physical challenges, which require long-term surgical treatment, as well as psychological challenges, which require in-depth intervention from psychologists and counselors at each stage of physical recovery. These far-reaching effects on their lives impact their psychological, social, and economic viability in communities.", "The Mong Kok acid attacks were incidents in 2008, 2009, and 2010 where plastic bottles filled with corrosive liquid (drain cleaner) were thrown onto shoppers on Sai Yeung Choi Street South, Hong Kong, a pedestrian street and popular shopping area. A reward, originally HK$100,000, for information about the perpetrator or perpetrators, was raised to HK$300,000 following the second incident, and cameras were to be installed in the area following the December incident. The third incident occurred the day the cameras were turned on. The fifth incident happened after Hong Kong government announced its new strategies against the incident. 130 people were injured in these attacks.", "In South Asia, acid attacks have been used as a form of revenge for refusal of sexual advances, proposals of marriage and demands for dowry. Scholars Taru Bahl and M.H. Syed say that land and property disputes are another leading cause.", "An acid attack, also called acid throwing, vitriol attack, or vitriolage, is a form of violent assault involving the act of throwing acid or a similarly corrosive substance onto the body of another \"with the intention to disfigure, maim, torture, or kill\". Perpetrators of these attacks throw corrosive liquids at their victims, usually at their faces, burning them, and damaging skin tissue, often exposing and sometimes dissolving the bones. Acid attacks can lead to permanent, partial, or complete blindness.\nThe most common types of acid used in these attacks are sulfuric and nitric acid. Hydrochloric acid is sometimes used but is much less damaging. Aqueous solutions of strongly alkaline materials, such as caustic soda (sodium hydroxide) or ammonia, are used as well, particularly in areas where strong acids are controlled substances.\nThe long-term consequences of these attacks may include blindness, as well as eye burns, with severe permanent scarring of the face and body, along with far-reaching social, psychological, and economic difficulties.\nToday, acid attacks are reported in many parts of the world, though more commonly in developing countries. Between 1999 and 2013, a total of 3,512 Bangladeshi people were attacked with acid, with the rate of cases declining by 15–20% every year since 2002 based on strict legislation against perpetrators and regulation of acid sales. In India, acid attacks are at an all-time high and increasing every year, with 250–300 reported incidents every year, while the \"actual number could exceed 1,000, according to Acid Survivors' Trust International\".\nAlthough acid attacks occur all over the world, this type of violence is most common in South Asia. 
Statistics from Acid Survivors Trust International (ASTI) suggest that 80% of victims worldwide are women.", "According to Afshin Molavi, in the early years of the revolution and following the mandating of the covering of hair by women in Iran, some women were threatened with acid attacks by Islamic vigilantes for failing to wear hijab.\nRecently, acid assault in Iran has been met with increased sanctions. The Sharia code of qisas, or equivalence justice, requires a convicted perpetrator of acid violence to pay a fine and allows for the perpetrator to be blinded with acid in both eyes. Under Iranian law, victims or their families can ask a court's permission to enact \"qisas\", either by taking the perpetrator's life in murder cases or by inflicting an equivalent injury on his or her body. One victim, Ameneh Bahrami, obtained a sentence in 2008 that her attacker be blinded. However, as of 31 July 2011, she pardoned her attacker, thereby absolving Majid Movahedi of his crime and halting the retributive justice of Qisas.\nIn October 2014, a series of acid attacks on women occurred in the city of Isfahan, resulting in demonstrations and arrests of journalists who had covered the attacks. The attacks were thought by many Iranians to be the work of conservative Islamist vigilantes, but the Iranian government denies this.", "Research has prompted many solutions to the increasing incidence of acid attacks in the world. Bangladesh, whose rates of attack have been decreasing, is a model for many countries, which follow Bangladesh's lead in many legislative reforms. However, several reports highlighted the need for an increased legal role for NGOs to offer rehabilitation support to acid survivors. Additionally, nearly all research stressed the need for stricter regulation of acid sales to combat this social issue.", "Acid attacks in India, like those in Bangladesh, have a gendered aspect to them: analyses of news reports revealed that at least 72% of reported attacks included at least one female victim. However, unlike Bangladesh, India's incidence rate of chemical assault has been increasing in the past decade, with a high of 27 reported cases in 2010. Altogether, from January 2002 to October 2010, 153 cases of acid assault were reported in Indian print media, while 174 judicial cases were reported for the year 2000.\nThe motivation behind acid attacks in India mirrors that in Bangladesh: a study of Indian news reports from January 2002 to October 2010 found that victims' rejection of sex or marriage proposals motivated attacks in 35% of the 110 news stories providing a motive for the attack. Acid attacks have also been reported against religious minorities or Muslim women as a form of retaliation or qisas. Notable cases are Sonali Mukherjee in 2003 and Laxmi Agarwal in 2005, whose experience and campaign for restrictions on acid sales were portrayed in the Bollywood film Chhapaak.\nPolice in India have also been accused of using acid on individuals, particularly on their eyes, causing blindness to the victims. A well-known case is the Bhagalpur blindings, where police blinded 31 individuals under trial (or convicted criminals, according to some versions) by pouring acid into their eyes. The incident was widely discussed, debated and acutely criticized by several human rights organizations. The Bhagalpur blinding case made criminal jurisprudence history by becoming the first in which the Indian Supreme Court ordered compensation for violation of basic human rights.", "Acid assault survivors face many mental health issues upon recovery. 
One study showed that when compared to published Western norms for psychological well-being, non-Caucasian acid attack victims reported higher levels of anxiety, depression, and scored higher on the Derriford appearance scale, which measures psychological distress due to one's concern for their appearance. Additionally, female victims reported lowered self-esteem according to the Rosenberg scale and increased self-consciousness, both in general and in the social sphere.", "In addition to medical and psychological effects, many social implications exist for acid survivors, especially women. For example, such attacks usually leave victims handicapped in some way, rendering them dependent on either their spouse or family for everyday activities, such as eating and running errands. These dependencies are increased by the fact that many acid survivors are not able to find suitable work, due to impaired vision and physical handicap. This negatively impacts their economic viability, causing hardships on the families/spouses that care for them. As a result, divorce rates are high, with abandonment by husbands found in 25% of acid assault cases in Uganda (compared to only 3% of wives abandoning their disfigured husbands). Moreover, acid survivors who are single when attacked almost certainly become ostracized from society, effectively ruining marriage prospects. Some media outlets overwhelmingly avoid reporting acid attack violence, or the description of the attack is laconic or often implies that the act was inevitable or even justified.", "When acids contact the skin, response time can affect the severity of burns. If washed away with water or neutralized promptly, burns can be minimized or avoided entirely. However, areas unprotected by skin, such as the cornea of the eye or the lips, may be burned immediately on contact.\nMany victims are attacked in an area without immediate access to water, or unable to see due to being blinded or forced to keep their eyes closed to prevent additional burns to the eye. Treatment for burn victims remains inadequate in many developing nations where incidence is high. Medical underfunding has resulted in very few burn centers available for victims in countries such as Uganda, Bangladesh, and Cambodia., Uganda has one specialized burn center in the entire nation, opening in 2003; Cambodia has only one burn facility for victims, and scholars estimate that only 30% of the Bangladeshi community has access to health care.\nIn addition to inadequate medical capabilities, many acid assault victims fail to report to the police due to a lack of trust in the force, a sense of hopelessness due to the attackers' impunity, and fear of retribution by the assailant.\nThese problems are exacerbated by a lack of knowledge of how to treat burns: some victims have applied oil to the acid, rather than rinsing thoroughly and completely with water for 30 minutes or longer to neutralize the acid. Such home remedies only serve to increase the severity of damage, as they do not counteract the acidity.", "The intention of the attacker is often to cause shame and pain rather than to kill the victim. In Britain, such attacks, particularly those against men, are believed to be underreported, and as a result many of them do not show up in official statistics. 
Some of the most common motivations of perpetrators include:\n* Personal conflict regarding intimate relationships and sexual rejection \n* Sexual-related jealousy and lust\n* Revenge for refusal of sexual advances, proposals of marriage, and demands for dowry\n* Gang violence and rivalry\n* Conflicts over land ownership, farm animals, housing, and property\nAcid attacks often occur as revenge against a woman who rejects a proposal of marriage or a sexual advance. Gender inequality and women's position in the society, in relation to men, plays a significant role in these types of attacks.\nAttacks against individuals based on their religious beliefs or social or political activities also occur. These attacks may be targeted against a specific individual, due to their activities, or may be perpetrated against random persons merely because they are part of a social group or community. In Europe, Konstantina Kouneva, a former member of the European Parliament, had acid thrown on her in 2008, in what was described as \"the most severe assault on a trade unionist in Greece for 50 years.\" Female students have had acid thrown in their faces as a punishment for attending school. Acid attacks due to religious conflicts have been also reported. Both males and females have been victims of acid attacks for refusing to convert to another religion.\nConflicts regarding property issues, land disputes, and inheritance have also been reported as motivations of acid attacks. Acid attacks related to conflicts between criminal gangs occur in many places, including the UK, Greece, and Indonesia.", "According to researchers and activists, countries typically associated with acid assault include Bangladesh, India, Nepal, Cambodia, Vietnam, Laos, United Kingdom, Kenya, South Africa, Uganda, Pakistan, and Afghanistan. Acid attacks have been reported however in countries around the world, including:\n* Afghanistan\n* Australia\n* Bangladesh\n* Belgium\n* Bulgaria\n* Cambodia\n* China\n** Hong Kong S.A.R.\n* Colombia\n* France\n* Gabon\n* Germany\n* India\n* Indonesia\n* Iran\n* Ireland\n* Israel\n* Italy\n* Jamaica\n* Kenya\n* Laos\n* Mexico\n* Myanmar\n* Nepal\n* Nigeria\n* Philippines\n* Pakistan\n* Russia \n* Sri Lanka\n* Sweden\n* South Africa \n* Taiwan\n* Tanzania\n* Thailand\n* Uganda\n* United Kingdom\n* United States\n* Vietnam\nAdditionally, anecdotal evidence for acid attacks exists in other regions of the world such as South America, Central and North Africa, the Middle East, and Central Asia. However, South Asian countries maintain the highest incidence of acid attacks.\nPolice in the United Kingdom have noted that many victims are afraid to come forward to report attacks, meaning the true scale of the problem may be unknown.", "The medical effects of acid attacks are extensive. As a majority of acid attacks are aimed at the face, several articles thoroughly reviewed the medical implications for these victims. The severity of the damage depends on the concentration of the acid and the time before the acid is thoroughly washed off with water or neutralized with a neutralizing agent. The acid can rapidly eat away skin, the layer of fat beneath the skin, and in some cases even the underlying bone. Eyelids and lips may be completely destroyed and the nose and ears severely damaged. 
Though not exhaustive, findings of the Acid Survivors Foundation Uganda included:\n* The skull is partly destroyed/deformed and hair lost.\n* Ear cartilage is usually partly or totally destroyed; deafness may occur.\n* Eyelids may be burned off or deformed, leaving the eyes extremely dry and prone to blindness. Acid directly in the eye also damages sight, sometimes causing blindness in both eyes.\n* The nose can become shrunken and deformed; the nostrils may close off completely due to destroyed cartilage.\n* The mouth becomes shrunken and narrow, and it may lose its full range of motion. Sometimes, the lips may be partly or totally destroyed, exposing the teeth. Eating and speaking can become difficult.\n* Scars can run down from the chin to the neck area, shrinking the chin and severely limiting range of motion in the neck.\n* Inhalation of acid vapors usually creates respiratory problems, exacerbated by restricted airway passages (the nostrils and throat) and damage to the esophagus in acid patients.\nIn addition to these above-mentioned medical effects, acid attack victims face the possibility of sepsis, kidney failure, skin depigmentation, and even death.\nA 2015 attack that involved throwing sulfuric acid on a man's face and body while he lay in bed caused him, among other serious injuries, to become paralyzed from the neck down.", "In 2002, Bangladesh introduced the death penalty for acid attacks and laws strictly controlling the sale, use, storage, and international trade of acids. The acids are used in traditional trades – the carving of marble nameplates, conch bangle making, goldsmithing, tanning, and other industries – which have largely failed to comply with the legislation. Salma Ali of the Bangladesh National Women Lawyers' Association derided these laws as ineffective. The names of these laws are the Acid Crime Control Act (ACCA) and the Acid Control Act (ACA), respectively.\nThe ACCA directly addresses the criminal aspect of acid attacks, and allows for the death penalty or a level of punishment corresponding to the area of the body affected. If the attack results in a loss of hearing or sight or damages the victim's face, breasts, or sex organs, then the perpetrator faces either the death penalty or a life sentence. If any other part of the body is maimed, then the criminal faces 7–14 years of imprisonment in addition to a fine of US$700. Additionally, throwing or attempting to throw acid without causing any physical or mental harm is punishable by this law and could result in a prison term of 3–7 years along with a US$700 fine. Furthermore, conspirators who aid in such attacks assume the same liability as those actually committing the crime.\nThe ACA regulates the sale, usage, and storage of acid in Bangladesh through the creation of the National Acid Control Council (NACC). The law requires that the NACC implement policies regarding the trade, misuse, and disposal of acid, while also undertaking initiatives that raise awareness about the dangers of acid and improve victim treatment and rehabilitation. The ACA calls for district-level committees responsible for enacting local measures that enforce and further regulate acid use in towns and cities.", "An accurate estimate of the gender ratio of victims and perpetrators is difficult to establish because many acid attacks are not reported or recorded by authorities. 
For example, a 2010 study in The Lancet reported that there are \"no reliable statistics\" on the prevalence of acid attacks in Pakistan.\nA 2007 literature review analyzed 24 studies in 13 countries over the past 40 years, covering 771 cases. According to the London-based charity Acid Survivors Trust International, 80% of acid attacks are on women, and the number of acid assaults is grossly underestimated. In some regions, assaults perpetrated on female victims by males are often driven by the mentality \"If I can't have you, no one shall.\"\nIn Bangladesh, throwing acid has been labeled a \"gender crime\", as there is a dominance of female victims who are assaulted by males for refusing to marry or refusing sexual advances. In Jamaica, a common cause is women throwing acid on other women in fights over male partners. In the UK, the majority of victims are men, and many of these attacks are related to gang violence.\nIn India, a female victim was attacked with a knife twice, but no criminal charges were filed against the suspect. The victim was only given police aid after being hospitalized following an acid attack, raising questions of police apathy in dealing with cases of harassment.\nAnother factor that puts victims at increased risk of an acid assault is their socioeconomic status, as those living in poverty are more likely to be attacked. The three nations with the most noted incidence of acid attacks – Bangladesh, India, and Cambodia – were ranked 75th, 101st, and 104th, respectively, out of 136 countries on the Global Gender Gap Index, a scale that measures equality of opportunity between men and women in nations.", "Under the Qisas (eye-for-an-eye) law of Pakistan, the perpetrator could suffer the same fate as the victim, if the victim or the victim's guardian so chooses. The perpetrator may be punished by having drops of acid placed in their eyes.\nSection 336B of the Pakistan Penal Code states: \"Whoever causes hurt by corrosive substance shall be punished with imprisonment for life or imprisonment of either description which shall not be less than fourteen years and a minimum fine of one million rupees.\" Additionally, section 299 defines Qisas and states: \"Qisas means punishment by causing similar hurt at the same part of the body of the convict as he has caused to the victim or by causing his death if he has committed qatl-i-amd (intentional murder) in exercise of the right of the victim or a Wali (the guardian of the victim).\"", "After a spate of attacks in London in 2017, the Home Office said it would consider changes in laws and measures regarding sales of acid, as well as changes in prosecution and sentencing guidelines. As of 2017, it is unlawful to carry acid with the intent to cause harm. Attacks are prosecuted as acts of actual bodily harm and grievous bodily harm. Three quarters of police investigations do not end in prosecution, either because the attacker could not be found, or because the victim is unwilling to press charges. According to ASTI, of the 2,078 acid attack crimes recorded for the years 2011–2016 in the UK, only 414 resulted in charges being brought. Most acid attack crimes happened in London, where over 1,200 cases were recorded over the past five years. From 2011 to 2016 there were 1,464 crimes involving acid or a corrosive substance. 
Northumbria recorded the second-highest number with 109 recorded attacks, Cambridgeshire had 69 attacks, Hertfordshire 67, Greater Manchester 57 and Humberside 52.\nThe Offensive Weapons Act 2019 made provisions for crimes related to acid attacks, including bringing in greater regulation of the sale of corrosive products and making it an offence to carry a corrosive substance in a public place without good reason.", "* A fake acid attack between rivals for a husband appears in Cecil B. DeMille's film Why Change Your Wife? (1920).\n* In \"The Adventure of the Illustrious Client\" by Sir Arthur Conan Doyle, the villainous Baron Adelbert Gruner has oil of vitriol thrown in his face by a wronged former mistress, disfiguring him. She is prosecuted for this but given the minimum sentence due to extenuating circumstances. \n* DC Comics supervillain Two-Face's origin stories feature half his face disfigured with acid.\n* In the 2002 series of He-Man and the Masters of the Universe, Skeletor owes his namesake skeletal face to an acid attack.\n* Saving Face – a 2012 documentary film by Sharmeen Obaid Chinoy and Daniel Junge that follows Pakistani/British plastic surgeon Dr. Mohammad Jawad to his native Pakistan to aid women who were victims of acid attacks, and examines the Pakistani parliament's efforts to ban the practice of acid burning. The film won the 2012 Academy Award for Best Documentary Short.\n* In Emmerdale, one of the characters, Ross Barton, is a victim of an acid attack (as depicted in a 2018 episode). The actor who portrayed Ross Barton has said that it was his idea that the character should be a victim of an acid attack, as he wanted to create an awareness campaign about this problem.\n* Surkh Chandni – a 2019 Pakistani television series directed by Shahid Shafaat that follows the story of a girl who survived an acid attack and the harshness of society she has to face thereafter.\n* Dirty God – a 2019 English film starring Vicky Knight as an acid attack victim seeking justice and healing. Knight is a real burn victim, although from a domestic fire rather than an acid attack.\n* Infinite Jest – a 1996 novel featuring a scene in which Joelle Van Dyne's mother tries to throw acid in her husband's face after he confesses his love for their daughter, Joelle, but instead misses and hits her.\n* Uyare – a 2019 Indian Malayalam-language film that focuses on an aspiring pilot who is the victim of an acid attack and on how the situation around her changes.\n* Chhapaak – a 2020 Indian Hindi-language film based on the life of Laxmi Agarwal, an acid attack survivor.\n* In Bergen – a 2022 biopic about Turkish singer Bergen – the acid attack that left her blind in one eye is depicted.\n* In Coronation Street in 2023, two characters, Daisy Midgeley and Ryan Connor, are victims of an acid attack when Daisy's stalker, Justin, attacks her with acid. Ryan jumps in between Daisy and Justin and receives more severe burns to his face, while Daisy only receives moderate burns on her body.\n* In Top Boy in 2019, a minor character is depicted being wrestled to the ground and doused with a bottle of unknown acid on his face in a gang attack orchestrated by a main character, Jamie. The victim is then shown at the end of the season wearing an eyepatch, with burns on his face. He is also seen in the following season, still with an eyepatch, implying that the acid attack had permanently blinded him in one eye.\n* Jyoti... 
Umeedon Se Sajee – a 2023 Indian television series that tells the story of a hardworking and aspiring woman named Jyoti, whose life is turned upside down when she becomes the victim of an acid attack.", "Vitriolage is the deliberate splashing of a person or object with acid, also known as vitriol, in order to deface or kill. A female who engages in such an act is known as a vitrioleuse. There are instances of this act throughout history and in modern times, often in places where honor killings are also common.", "Attacks or threats against women who fail to wear the hijab, dress \"modestly\", or otherwise challenge traditional norms have been reported in Afghanistan. In November 2008, extremists subjected girls to acid attacks for attending school.", "A high incidence of acid assaults has been reported in some African countries, including Nigeria, Uganda, and South Africa. Unlike occurrences in South Asia, acid attacks in these countries show less gender discrimination. In Uganda, 57% of acid assault victims were female and 43% were male. A study focusing on chemical burns in Nigeria revealed a reversal in findings: 60% of the acid attack patients were male while 40% were female. In both nations, younger individuals were more likely to suffer an acid attack: the average age in the Nigeria study was 20.6 years, while the Ugandan analysis showed that 59% of survivors were 19–34 years of age.\nMotivation for acid assault in these African countries is similar to that in Cambodia. Relationship conflicts caused 35% of acid attacks in Uganda in 1985–2011, followed by property conflicts at 8% and business conflicts at 5%. Disaggregated data was not available in the Nigeria study, but it reported that 71% of acid assaults resulted from an argument with either a jilted lover, a family member, or a business partner. As with the other nations, researchers believe these statistics to be under-representative of the actual scope and magnitude of acid attacks in African nations.\nIn August 2013, two Jewish women volunteer teachers – Katie Gee and Kirstie Trup from the UK – were injured in an acid attack by men on a moped near Stone Town in Tanzania.\nA few cases have also occurred in Ethiopia and Nigeria.", "According to the Acid Survivors Foundation in Bangladesh, the country has reported 3,000 acid attack victims since 1999, peaking at 262 victims in 2002. Rates have been steadily decreasing by 15% to 20% since 2002, with the number of acid attack victims reported in Bangladesh at 91 as recently as 2011. Acid attacks in Bangladesh show the most gender discrimination, with one study citing a male-to-female victim ratio of 0.15:1 and another reporting that 82% of acid attack survivors in Bangladesh are women. Younger women are especially prone to attack, with a recent study reporting that 60% of acid assault survivors are between the ages of 10 and 19. According to Mridula Bandyopadhyay and Mahmuda Rahman Khan, it is a form of violence primarily targeted at women. They describe it as a relatively recent form of violence, with the earliest record in Bangladesh from 1983.\nIn societies like Bangladesh's, where women are typically treated as property and denied agency, acid attacks are often perpetrated by men who become enraged after women rebuff their requests for a relationship or marriage. One study showed that refusal of marriage proposals accounted for 55% of acid assaults, with abuse from a husband or other family member (18%), property disputes (11%) and refusal of sexual advances (2%) as other leading causes. 
Additionally, the use of acid attacks in dowry arguments has been reported in Bangladesh, with 15% of cases studied by the Acid Survivors Foundation citing dowry disputes as the motive. The chemical agents most commonly used to commit these attacks are hydrochloric acid and sulfuric acid.", "Many non-governmental organizations (NGOs) have been formed in the areas with the highest occurrence of acid attacks to combat such attacks. Bangladesh has its Acid Survivors Foundation, which offers acid victims legal, medical, counseling, and monetary assistance in rebuilding their lives. Similar institutions exist in Uganda, which has its own Acid Survivors Foundation, and in Cambodia which uses the help of Cambodian Acid Survivors Charity. NGOs provide rehabilitation services for survivors while acting as advocates for social reform, hoping to increase support and awareness for acid assault.\nIn Bangladesh, the Acid Survivors Foundation, Nairpokkho, Action Aid, and the Bangladesh Rural Advancement Committee's Community Empowerment & Strengthening Local Institutions Programme assist survivors. The Depilex Smileagain Foundation and The Acid Survivors Foundation in Pakistan operates in Islamabad, offering medical, psychological and rehabilitation support. The Acid Survivors Foundation in Uganda operates in Kampala and provides counseling and rehabilitation treatment to victims, as well as their families. The LICADHO, the Association of the Blind in Cambodia, and the Cambodian Acid Survivors Charity assist survivors of acid attacks. The Acid Survivors Foundation India operates from different centres with national headquarters at Kolkata and chapters at Delhi and Mumbai.\nAcid Survivors Trust International (UK registered charity no. 1079290) provides specialist support to its sister organizations in Africa and Asia. Acid Survivors Trust International is the only international organisation whose sole purpose is to end acid violence. The organisation was founded in 2002 and now works with a network of six Acid Survivors Foundations in Bangladesh, Cambodia, India, Nepal, Pakistan and Uganda that it has helped to form. Acid Survivors Trust International has helped to provide medical expertise and training to partners, raised valuable funds to support survivors of acid attacks and helped change laws. A key role for ASTI is to raise awareness of acid violence to an international audience so that increased pressure can be applied to governments to introduce stricter controls on the sale and purchase of acid.\nIndian acid attack survivor Shirin Juwaley founded the Palash Foundation to help other survivors with psychosocial rehabilitation. She also spearheads research into social norms of beauty and speaks publicly as an advocate for the empowerment of all victims of disfigurement and discrimination. In 2011, the principal of an Indian college refused to have Juwaley speak at her school for fear that Juwaley's story of being attacked by her husband would make students \"become scared of marriage\".", "A positive correlation has been observed between acid attacks and ease of acid purchase. Sulfuric, nitric, and hydrochloric acid are most commonly used and are all cheap and readily available in many instances. For example, often acid throwers can purchase a liter of concentrated sulfuric acid at motorbike mechanic shops for about 40 U.S. cents. Nitric acid costs around $1.50 per liter and is available for purchase at gold or jewelry shops, as polishers generally use it to purify gold and metals. 
Hydrochloric acid is also used for polishing jewelry, as well as for making soy sauce, cosmetics, and traditional medicine/amphetamine drugs.\nDue to such ease of access, many organizations call for stricter regulation of the acid economy. Specific actions include required licenses for all acid traders, a ban on concentrated acid in certain areas, and an enhanced system of monitoring for acid sales, such as the need to document all transactions involving acid. However, some scholars have warned that such stringent regulation may result in black-market trading of acid, which law enforcement must keep in mind.", "It is sometimes stated that \"the conjugate of a weak acid is a strong base\". Such a statement is incorrect. For example, acetic acid is a weak acid which has Ka = 1.75 × 10⁻⁵. Its conjugate base is the acetate ion with Kb = 10⁻¹⁴/Ka = 5.7 × 10⁻¹⁰ (from the relationship Ka × Kb = 10⁻¹⁴), which certainly does not correspond to a strong base. The conjugate of a weak acid is often a weak base and vice versa.", "The strength of an acid varies from solvent to solvent. An acid which is strong in water may be weak in a less basic solvent, and an acid which is weak in water may be strong in a more basic solvent. According to Brønsted–Lowry acid–base theory, the solvent S can accept a proton.\nFor example, hydrochloric acid is a weak acid in solution in pure acetic acid, CH₃COOH, which is more acidic than water.\nThe extent of ionization of the hydrohalic acids decreases in the order HI > HBr > HCl. Acetic acid is said to be a differentiating solvent for the three acids, while water is not.\nAn important example of a solvent which is more basic than water is dimethyl sulfoxide, DMSO, (CH₃)₂SO. A compound which is a weak acid in water may become a strong acid in DMSO. Acetic acid is an example of such a substance. An extensive bibliography of pKa values in solution in DMSO and other solvents can be found at [http://tera.chem.ut.ee/~ivo/HA_UT/ Acidity–Basicity Data in Nonaqueous Solvents].\nSuperacids are strong acids even in solvents of low dielectric constant. Examples of superacids are fluoroantimonic acid and magic acid. Some superacids can be crystallised. They can also quantitatively stabilize carbocations.\nLewis acids reacting with Lewis bases in the gas phase and in non-aqueous solvents have been classified in the ECW model, and it has been shown that there is no one order of acid strengths. The relative acceptor strength of Lewis acids toward a series of bases, versus other Lewis acids, can be illustrated by C-B plots. It has been shown that to define the order of Lewis acid strength at least two properties must be considered. For the qualitative HSAB theory the two properties are hardness and strength, while for the quantitative ECW model the two properties are electrostatic and covalent.", "In organic carboxylic acids, an electronegative substituent can pull electron density out of an acidic bond through the inductive effect, resulting in a smaller pKa value. The effect decreases the further the electronegative element is from the carboxylate group, as illustrated by the following series of halogenated butanoic acids.", "The usual measure of the strength of an acid is its acid dissociation constant (Ka), which can be determined experimentally by titration methods. Stronger acids have a larger Ka and a smaller logarithmic constant (pKa = −log Ka) than weaker acids. The stronger an acid is, the more easily it loses a proton, H⁺. 
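As a quick arithmetic illustration of the quantities just introduced, the short Python sketch below recomputes the acetate figures quoted above from Ka and the ionic product of water; it is a minimal sketch, assuming the standard value Kw = 1.0 × 10⁻¹⁴ at 25 °C.

```python
import math

# Conjugate acid/base arithmetic for the acetic acid / acetate pair quoted above.
# Assumes the usual ionic product of water at 25 degrees C, Kw = 1.0e-14.
Kw = 1.0e-14
Ka_acetic = 1.75e-5            # acid dissociation constant of acetic acid

Kb_acetate = Kw / Ka_acetic    # ~5.7e-10: a weak base, not a strong one
pKa = -math.log10(Ka_acetic)   # ~4.76
pKb = -math.log10(Kb_acetate)  # ~9.24; note pKa + pKb = 14

print(f"Kb(acetate) = {Kb_acetate:.2e}, pKa = {pKa:.2f}, pKb = {pKb:.2f}")
```

Running it confirms that the conjugate base of this weak acid is itself a weak base (pKb ≈ 9.2), not a strong one.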
Two key factors that contribute to the ease of deprotonation are the polarity of the H–A bond and the size of atom A, which determine the strength of the H–A bond. Acid strengths also depend on the stability of the conjugate base.\nWhile the pKa value measures the tendency of an acidic solute to transfer a proton to a standard solvent (most commonly water or DMSO), the tendency of an acidic solvent to transfer a proton to a reference solute (most commonly a weak aniline base) is measured by its Hammett acidity function, the H₀ value. Although these two concepts of acid strength often amount to the same general tendency of a substance to donate a proton, the pKa and H₀ values are measures of distinct properties and may occasionally diverge. For instance, hydrogen fluoride, whether dissolved in water (pKa = 3.2) or DMSO (pKa = 15), has pKa values indicating that it undergoes incomplete dissociation in these solvents, making it a weak acid. However, as the rigorously dried, neat acidic medium, hydrogen fluoride has an H₀ value of −15, making it a more strongly protonating medium than 100% sulfuric acid and thus, by definition, a superacid. (To prevent ambiguity, in the rest of this article, \"strong acid\" will, unless otherwise stated, refer to an acid that is strong as measured by its pKa value (pKa < −1.74). This usage is consistent with the common parlance of most practicing chemists.)\nWhen the acidic medium in question is a dilute aqueous solution, the H₀ is approximately equal to the pH value, which is the negative logarithm of the concentration of aqueous H⁺ in solution. The pH of a simple solution of an acid in water is determined by both pKa and the acid concentration. For weak acid solutions, it depends on the degree of dissociation, which may be determined by an equilibrium calculation. For concentrated solutions of acids, especially strong acids for which pH < 0, the H₀ value is a better measure of acidity than the pH.", "The experimental determination of a pKa value is commonly performed by means of a titration. A typical procedure would be as follows. A quantity of strong acid is added to a solution containing the acid or a salt of the acid, to the point where the compound is fully protonated. The solution is then titrated with a strong base until only the deprotonated species, A⁻, remains in solution. At each point in the titration the pH is measured using a glass electrode and a pH meter. The equilibrium constant is found by fitting calculated pH values to the observed values, using the method of least squares.", "Acid strength is the tendency of an acid, symbolised by the chemical formula HA, to dissociate into a proton, H⁺, and an anion, A⁻. The dissociation of a strong acid in solution is effectively complete, except in its most concentrated solutions.\nExamples of strong acids are hydrochloric acid (HCl), perchloric acid (HClO₄), nitric acid (HNO₃) and sulfuric acid (H₂SO₄).\nA weak acid is only partially dissociated, with both the undissociated acid and its dissociation products being present, in solution, in equilibrium with each other.\nAcetic acid (CH₃COOH) is an example of a weak acid. The strength of a weak acid is quantified by its acid dissociation constant, the pKa value.\nThe strength of a weak organic acid may depend on substituent effects. The strength of an inorganic acid is dependent on the oxidation state of the atom to which the proton may be attached. Acid strength is solvent-dependent. 
For example, hydrogen chloride is a strong acid in aqueous solution, but is a weak acid when dissolved in glacial acetic acid.", "A strong acid is an acid that dissociates according to the reaction\n:HA + S ⇌ SH⁺ + A⁻\nwhere S represents a solvent molecule, such as a molecule of water or dimethyl sulfoxide (DMSO), to such an extent that the concentration of the undissociated species is too low to be measured. For practical purposes a strong acid can be said to be completely dissociated. An example of a strong acid is hydrochloric acid.\n:HCl → H⁺ + Cl⁻ (in aqueous solution)\nAny acid with a pKa value which is less than about −2 is classed as a strong acid. This results from the very high buffer capacity of solutions with a pH value of 1 or less and is known as the leveling effect.\nThe following are strong acids in aqueous and dimethyl sulfoxide solution. Their pKa values cannot be measured experimentally; the quoted estimates are average values from as many as 8 different theoretical calculations.\nAlso, in water:\n* Nitric acid (pKa = −1.6)\n* Sulfuric acid (first dissociation only, pKa ≈ −3)\nThe following can be used as protonators in organic chemistry:\n* Fluoroantimonic acid\n* Magic acid\n* Carborane superacid\n* Fluorosulfuric acid (pKa = −6.4)\nSulfonic acids, such as p-toluenesulfonic acid (tosylic acid), are a class of strong organic oxyacids. Some sulfonic acids can be isolated as solids. Polystyrene functionalized into polystyrene sulfonate is an example of a substance that is a solid strong acid.", "A weak acid is a substance that partially dissociates when it is dissolved in a solvent. In solution there is an equilibrium between the acid, HA, and the products of dissociation:\n:HA ⇌ H⁺ + A⁻\nThe solvent (e.g. water) is omitted from this expression when its concentration is effectively unchanged by the process of acid dissociation. The strength of a weak acid can be quantified in terms of a dissociation constant, Ka, defined as follows, where [X] signifies the concentration of a chemical moiety, X:\n:Ka = [H⁺][A⁻] / [HA]\nWhen a numerical value of Ka is known it can be used to determine the extent of dissociation in a solution with a given concentration of the acid, TA, by applying the law of conservation of mass:\n:TA = [HA] + [A⁻]\nwhere TA is the value of the analytical concentration of the acid. When all the quantities in this equation are treated as numbers, ionic charges are not shown and this becomes a quadratic equation in the value of the hydrogen ion concentration, [H⁺]:\n:[H⁺]² + Ka[H⁺] − Ka·TA = 0\nThis equation shows that the pH of a solution of a weak acid depends on both its Ka value and its concentration. Typical examples of weak acids include acetic acid and phosphorous acid. An acid such as oxalic acid (HOOC-COOH) is said to be dibasic because it can lose two protons and react with two molecules of a simple base. Phosphoric acid (H₃PO₄) is tribasic.\nFor a more rigorous treatment of acid strength see acid dissociation constant. This includes acids such as the dibasic acid succinic acid, for which the simple method of calculating the pH of a solution, shown above, cannot be used.", "In a set of oxoacids of an element, pKa values decrease as the oxidation state of the element increases. The oxoacids of chlorine illustrate this trend.", "An acidity function is a measure of the acidity of a medium or solvent system, usually expressed in terms of its ability to donate protons to (or accept protons from) a solute (Brønsted acidity). The pH scale is by far the most commonly used acidity function, and is ideal for dilute aqueous solutions. 
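For such dilute aqueous solutions of a weak acid, the quadratic given above can be solved directly for the hydrogen-ion concentration and hence the pH. The following is a minimal Python sketch, assuming water autoionization can be neglected and reusing the acetic acid Ka quoted earlier.

```python
import math

def weak_acid_pH(Ka: float, TA: float) -> float:
    """pH of a weak acid from the quadratic [H+]^2 + Ka*[H+] - Ka*TA = 0.

    Ka: acid dissociation constant; TA: analytical (total) acid concentration in mol/L.
    Water autoionization is neglected, so the result is only meaningful for solutions
    that are not extremely dilute or extremely weakly acidic.
    """
    h = (-Ka + math.sqrt(Ka * Ka + 4.0 * Ka * TA)) / 2.0  # positive root of the quadratic
    return -math.log10(h)

# Example: 0.10 mol/L acetic acid (Ka = 1.75e-5) gives pH of about 2.88
print(round(weak_acid_pH(1.75e-5, 0.10), 2))
```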
Other acidity functions have been proposed for different environments, most notably the Hammett acidity function, H₀, for superacidic media and its modified version H₋ for superbasic media. The term acidity function is also used for measurements made on basic systems, and the term basicity function is uncommon.\nHammett-type acidity functions are defined in terms of a buffered medium containing a weak base B and its conjugate acid BH⁺:\n:H₀ = pK(BH⁺) + log([B]/[BH⁺])\nwhere pK(BH⁺) is the dissociation constant of BH⁺. They were originally measured by using nitroanilines as weak bases or acid-base indicators and by measuring the concentrations of the protonated and unprotonated forms with UV-visible spectroscopy. Other spectroscopic methods, such as NMR, may also be used. The function H₋ is defined similarly for strong bases:\n:H₋ = pK(BH) + log([B⁻]/[BH])\nHere BH is a weak acid used as an acid-base indicator, and B⁻ is its conjugate base.", "In dilute aqueous solution, the predominant acid species is the hydrated hydrogen ion H₃O⁺ (or more accurately [H(OH₂)ₙ]⁺). In this case H₀ and H₋ are equivalent to pH values determined by the buffer equation or Henderson–Hasselbalch equation.\nHowever, an H₀ value of &minus;21 (a 25% solution of SbF₅ in HSO₃F) does not imply a hydrogen ion concentration of 10²¹ mol/dm³: such a \"solution\" would have a density more than a hundred times greater than a neutron star. Rather, H₀ = &minus;21 implies that the reactivity (protonating power) of the solvated hydrogen ions is 10²¹ times greater than the reactivity of the hydrated hydrogen ions in an aqueous solution of pH 0. The actual reactive species are different in the two cases, but both can be considered to be sources of H⁺, i.e. Brønsted acids. The H⁺ ion never exists on its own in a condensed phase, as it is always solvated to a certain extent. The high negative value of H₀ in SbF₅/HSO₃F mixtures indicates that the solvation of the hydrogen ion is much weaker in this solvent system than in water. Another way of expressing the same phenomenon is to say that SbF₅·FSO₃H is a much stronger proton donor than H₃O⁺.", "In phosphors and scintillators, the activator is the element added as a dopant to the crystal of the material to create the desired type of inhomogeneity.\nIn luminescence, only a small fraction of atoms, called emission centers or luminescence centers, emit light. In inorganic phosphors, these inhomogeneities in the crystal structure are usually created by the addition of a trace amount of dopants, impurities called activators. (In rare cases dislocations or other crystal defects can play the role of the impurity.) The wavelength emitted by the emission center is dependent on the atom itself, its electronic configuration, and on the surrounding crystal structure.\nThe activators prolong the emission time (afterglow). In turn, other materials (such as nickel) can be used to quench the afterglow and shorten the decay part of the phosphor emission characteristics.\nThe electronic configuration of the activator depends on its oxidation state and is crucial for the light emission. Oxidation of the activator is one of the common mechanisms of phosphor degradation. The distribution of the activator in the crystal is also of high importance. Diffusion of the ions can cause depletion of the activators from the crystal, with a resulting loss of efficiency. This is another mechanism of phosphor degradation.\nThe scintillation process in inorganic materials is due to the electronic band structure found in the crystals. 
An incoming particle can excite an electron from the valence band to either the conduction band or the exciton band (located just below the conduction band and separated from the valence band by an energy gap). This leaves an associated hole behind, in the valence band. Impurities create electronic levels in the forbidden gap. The excitons are loosely bound electron-hole pairs which wander through the crystal lattice until they are captured as a whole by impurity centers. The latter then rapidly de-excite by emitting scintillation light (fast component). In case of inorganic scintillators, the activator impurities are typically chosen so that the emitted light is in the visible range or near-UV where photomultipliers are effective. The holes associated with electrons in the conduction band are independent from the latter. Those holes and electrons are captured successively by impurity centers exciting certain metastable states not accessible to the excitons. The delayed de-excitation of those metastable impurity states, slowed by reliance on the low-probability forbidden mechanism, again results in light emission (slow component).\nThe activator is the main factor determining the phosphor emission wavelength. The nature of the host crystal can however to some degree influence the wavelength as well.\nMore activators can be used simultaneously.\nCommon examples of activators are:\n* Copper, added in concentration of 5 ppm to copper-activated zinc sulfide, used in glow in the dark materials and green CRT phosphors; long afterglow\n* Silver, added to zinc sulfide to produce a phosphor/scintillator used in radium dials, spinthariscopes, and as a common blue phosphor in color CRTs, and to zinc sulfide-cadmium sulfide used as a phosphor in black-and-white CRTs (where the ratio determines the blue/yellow balance of the resulting white); short afterglow\n* Europium(II), added to strontium aluminate, used in high-performance glow in the dark materials, very long afterglow; with other host materials it is frequently used as the red emitter in color CRTs and fluorescent lights.\n* Cerium, added to yttrium aluminium garnet used in white light emitting diodes, excited by blue light and emitting yellow\n* Thallium, used in sodium iodide and caesium iodide scintillator crystals for detection of gamma radiation and for gamma spectroscopy\nA newly discovered activator is Samarium(II), added to calcium fluoride. Sm(II) is one of the few materials reported which offers efficient scintillation in the red region of the spectrum, particularly when cooled by dry ice.", "Adduct purification is a technique for preparing extremely pure simple organometallic compounds, which are generally unstable and hard to handle, by purifying a stable adduct with a Lewis acid and then obtaining the desired product from the pure adduct by thermal decomposition.\nEpichem Limited is the licensee of the major patents in this field, and uses the trademark EpiPure to refer to adduct-purified materials; Professor Anthony Jones at Liverpool University is the initiator of the field and author of many of the important papers.\nThe choice of Lewis acid and of reaction medium is important; the desired organometallics are almost always air- and water-sensitive. 
Initial work was done in ether, but this led to oxygen impurities, and so more recent work involves tertiary amines or nitrogen-substituted crown ethers.", "α-Mannan degradation: Mannan, which can be found in the cell wall of yeast, has a particular chemical structure and has constituted a food source since humans began eating fermented foods several thousand years ago. To determine whether the intake of yeast mannans through fermented foods has promoted specific adaptations of the human gut microbiota, an international team of researchers studied the ability of Bacteroides thetaiotaomicron to specifically degrade yeast mannans.\nMannan-oligosaccharides are able to alter the composition of the microbiota present in the bowel, producing an increase in the growth of benign bacteria and therefore an increase in resistance to infection by pathogens.\nB. thetaiotaomicron is a bacterium that has been shown to bind polysaccharides by means of a receptor system located on the outer membrane, before introducing the polysaccharides into the periplasm for their degradation to monosaccharides. These bacteria use α-mannose as a carbon source. Transcriptional studies have identified three different PULs (Polysaccharide Utilization Loci) which are activated by α-mannan from Saccharomyces cerevisiae, Schizosaccharomyces pombe and the yeast pathogen Candida albicans. To demonstrate the specificity of these PULs, the researchers engineered different B. thetaiotaomicron strains and showed that mutants lacking MAN-PUL1, MAN-PUL3 or PUL2 are unable to grow in vitro with yeast mannan as the sole carbon source.\nIn order to assess whether the ability to degrade yeast mannan is a general feature of the microbiota or a specific adaptation of B. thetaiotaomicron, the authors analysed the growth profiles of 29 species of Bacteroidota from the human bowel. The analysis revealed that only nine are able to metabolize S. cerevisiae α-mannan, while 33 of 34 strains of B. thetaiotaomicron are able to grow on this glycan. These results show that B. thetaiotaomicron, along with some phylogenetically related species, dominates the metabolism of yeast α-mannan within the phylum Bacteroidota of the microbial flora.", "Allison Hubel is an American mechanical engineer and cryobiologist who applies her expertise in heat transfer to study the cryopreservation of biological tissue. She is a professor of mechanical engineering at the University of Minnesota, where she directs the Biopreservation Core Resource and the Technological Leadership Institute, and is president-elect of the Society for Cryobiology.", "Hubel majored in mechanical engineering at Iowa State University, graduating in 1983. She continued her studies at the Massachusetts Institute of Technology (MIT), where she earned a master's degree in 1989 and completed her Ph.D. in the same year.\nShe worked as a research fellow at Massachusetts General Hospital from 1989 to 1990, and as an instructor at MIT from 1990 to 1993, before moving to the University of Minnesota in 1993 as a research associate in the Department of Laboratory Medicine and Pathology. In 1996 she became an assistant professor in that department, and in 2002 she moved to the Department of Mechanical Engineering as an associate professor. 
She was promoted to full professor in 2009, and became director of the Biopreservation Core Resource in 2010.\nWith two of her students, she founded a spinoff company, BlueCube Bio (later renamed Evia Bio), to commercialize their technology for preserving cells in cell therapy. She continues to serve as chief scientific officer for Evia Bio.\nShe became president-elect of the Society for Cryobiology for the 2022–2023 term, and will become president in the subsequent term.", "Hubel was elected as an ASME Fellow in 2008, and a Fellow of the American Institute for Medical and Biological Engineering in 2012. She was named a Cryofellow of the Society for Cryobiology in 2021.", "The abundance of total alpha elements in stars is usually expressed in terms of logarithms, with astronomers customarily using a square bracket notation:\n:[α/Fe] = log₁₀(Nα/NFe)star − log₁₀(Nα/NFe)Sun\nwhere Nα is the number of alpha elements per unit volume, and NFe is the number of iron nuclei per unit volume. It is for the purpose of calculating the number Nα that the question of which elements are to be considered \"alpha elements\" becomes contentious. Theoretical galactic evolution models predict that early in the universe there were more alpha elements relative to iron.", "The alpha process generally occurs in large quantities only if the star is sufficiently massive (several solar masses or more); these stars contract as they age, increasing core temperature and density to high enough levels to enable the alpha process. Requirements increase with atomic mass, especially in later stages – sometimes referred to as silicon burning – and the later stages thus most commonly occur in supernovae. Type II supernovae mainly synthesize oxygen and the alpha-elements (Ne, Mg, Si, S, Ar, Ca, and Ti), while Type Ia supernovae mainly produce elements of the iron peak (Ti, V, Cr, Mn, Fe, Co, and Ni). Sufficiently massive stars can synthesize elements up to and including the iron peak solely from the hydrogen and helium that initially comprise the star.\nTypically, the first stage of the alpha process (or alpha-capture) follows from the helium-burning stage of the star once helium becomes depleted; at this point, free carbon-12 nuclei capture helium to produce oxygen-16. This process continues after the core finishes the helium-burning phase, as a shell around the core will continue burning helium and convecting into the core. The second stage (neon burning) starts as helium is freed by the photodisintegration of one neon-20 atom, allowing another to continue up the alpha ladder. Silicon burning is later initiated through the photodisintegration of silicon-28 in a similar fashion; after this point, the iron peak discussed previously is reached. The supernova shock wave produced by stellar collapse provides ideal conditions for these processes to briefly occur. \nDuring this terminal heating involving photodisintegration and rearrangement, nuclear particles are converted to their most stable forms during the supernova and subsequent ejection through, in part, alpha processes. Starting at titanium-44 and above, all the product elements are radioactive and will therefore decay into a more stable isotope – e.g. titanium-44 is formed and decays into calcium-44.", "The alpha process, also known as alpha capture or the alpha ladder, is one of two classes of nuclear fusion reactions by which stars convert helium into heavier elements. The other class is a cycle of reactions called the triple-alpha process, which consumes only helium, and produces carbon. 
The alpha process most commonly occurs in massive stars and during supernovae.\nBoth processes are preceded by hydrogen fusion, which produces the helium that fuels both the triple-alpha process and the alpha ladder processes. After the triple-alpha process has produced enough carbon, the alpha-ladder begins and fusion reactions of increasingly heavy elements take place, in the order listed below. Each step only consumes the product of the previous reaction and helium. The later-stage reactions which are able to begin in any particular star do so while the prior-stage reactions are still under way in outer layers of the star.\nThe energy produced by each reaction, E, is mainly in the form of gamma rays (γ), with a small amount taken by the byproduct element as added momentum.\nIt is a common misconception that the above sequence ends at nickel-56 (or iron-56, which is a decay product of nickel-56) because it is the most tightly bound nuclide – i.e., the nuclide with the highest nuclear binding energy per nucleon – and production of heavier nuclei would consume energy (be endothermic) instead of releasing it (exothermic). Nickel-62 is actually the most tightly bound nuclide in terms of binding energy (though iron-56 has a lower energy or mass per nucleon). The reaction of nickel-56 with a further alpha particle is actually exothermic, and indeed adding alphas continues to be exothermic beyond this point, but nonetheless the sequence does effectively end at iron. The sequence stops before producing zinc-60 because conditions in stellar interiors cause the competition between photodisintegration and the alpha process to favor photodisintegration around iron. This leads to more nickel-56 being produced than zinc-60.\nAll these reactions have a very low rate at the temperatures and densities in stars and therefore do not contribute significant energy to a star's total output. They occur even less easily with elements heavier than neon due to the increasing Coulomb barrier.", "One of the distinctive features of altermagnets is a specifically spin-split band structure, which was first experimentally observed in work published in 2024. The altermagnetic band structure breaks time-reversal symmetry, E(k, s) ≠ E(−k, −s) (E is energy, k the wavevector and s the spin), as in ferromagnets; however, unlike in ferromagnets, it does not generate a net magnetization. The altermagnetic spin polarisation alternates in wavevector space and forms characteristic 2, 4, or 6 spin-degenerate nodes, which correspond to d-, g-, or i-wave order parameters, respectively. \nA d-wave altermagnet can be regarded as the magnetic counterpart of a d-wave superconductor.\nThe altermagnetic spin polarization in the band structure (energy–wavevector diagram) is collinear and does not break inversion symmetry. The altermagnetic spin splitting is an even function of the wavevector, i.e. E(k, s) = E(−k, s). It is thus also distinct from the noncollinear Rashba or Dresselhaus spin textures, which break inversion symmetry in noncentrosymmetric nonmagnetic or antiferromagnetic materials due to spin-orbit coupling. Unconventional time-reversal symmetry breaking, giant ~1 eV spin splitting and the anomalous Hall effect were first theoretically predicted and experimentally confirmed in RuO₂.", "Altermagnets exhibit an unusual combination of ferromagnetic and antiferromagnetic properties that, remarkably, more closely resemble those of ferromagnets. Hallmarks of altermagnetic materials such as the anomalous Hall effect have been observed before (but this effect occurs also in other magnetically compensated systems such as non-collinear antiferromagnets). 
Altermagnets also exhibit unique properties such as anomalous Hall currents and spin currents that can change sign as the crystal rotates.", "Direct experimental evidence of altermagnetic band structure in semiconducting MnTe and metallic RuO₂ was first published in 2024. Many more materials are predicted to be altermagnets – ranging from insulators, semiconductors, and metals to superconductors. Altermagnetism has been predicted in 3D and 2D materials containing both light and heavy elements, and can be found in nonrelativistic as well as relativistic band structures.", "In altermagnetic materials, atoms form a regular pattern with alternating spin and spatial orientation at adjacent magnetic sites in the crystal.\nIn altermagnets, atoms with opposite magnetic moments are coupled by a crystal rotation or mirror symmetry. The spatial orientation of the magnetic atoms may originate from the surrounding cages of non-magnetic atoms. The opposite-spin sublattices in altermagnetic manganese telluride (MnTe) are related by a spin rotation combined with a six-fold crystal rotation and a half-unit-cell translation. In altermagnetic ruthenium dioxide (RuO₂), the opposite-spin sublattices are related by a four-fold crystal rotation.", "In condensed matter physics, altermagnetism is a type of persistent magnetic state in ideal crystals. Altermagnetic structures are collinear and crystal-symmetry compensated, resulting in zero net magnetisation. Unlike in an ordinary collinear antiferromagnet, another magnetic state with zero net magnetization, the electronic bands in an altermagnet are not Kramers degenerate, but instead depend on the wavevector in a spin-dependent way. Related to this feature, key experimental observations were published in 2024. It has been speculated that altermagnetism may have applications in the field of spintronics.", "Amasa Stone Bishop (1921 – May 21, 1997) was an American nuclear physicist specializing in fusion physics. He received his B.S. in physics from the California Institute of Technology in 1943. From 1943 to 1946 he was a member of the staff of the Radiation Laboratory at the Massachusetts Institute of Technology, where he was involved with radar research and development. He then became a staff member of the University of California at Berkeley from 1946 to 1950. Specializing in high-energy particle work, he earned his Ph.D. in physics in 1950.\nAfter attaining his Ph.D., Bishop spent three years in Switzerland, acting as a research associate at the Federal Institute of Technology in Zürich, and later at the University of Zürich. In 1953 he joined the research division of the Atomic Energy Commission (AEC) in Washington and became the director of the American program to develop controlled fusion, also known as Project Sherwood. He was later presented with the AEC Outstanding Service Award for his work. After leaving this position in 1956, he published a book on behalf of the AEC discussing the various attempts at harnessing fusion under Project Sherwood. The book, \"Project Sherwood: The U.S. Program in Controlled Fusion\", was published in 1958.\nAfter 1956 Bishop also served as the AEC's European scientific representative, based in Paris. He was also an assistant delegate to the European atomic energy agency, Euratom, in Brussels. Later he spent several years in Princeton, New Jersey, and was in charge of the fusion program in Washington.\nIn 1970 Bishop joined the United Nations in Europe as director of environment of the United Nations Economic Commission for Europe. 
In this position, he worked with scientists and diplomats to create solutions for various environmental problems. He left this position to retire in 1980. Amasa died on May 21, 1997, of pneumonia related to Alzheimer's disease at the Clinique de Genolier in Genolier, Switzerland.\nBishop was the great-grandson of the industrialist Amasa Stone.", "Asperomagnetism is the equivalent of ferromagnetism for a disordered system with random magnetic moments. It is defined by short range correlations of locked magnetic moments within small noncrystalline regions, with average long range correlations. Asperomagnets possess a permanent net magnetic moment.\nExamples of asperomagnets are amorphous YFe and DyNi.", "In physics, amorphous magnet refers to a magnet made from amorphous solids. Below a certain temperature, these magnets present permanent magnetic phases produced by randomly located magnetic moments. Three common types of amorphous magnetic phases are asperomagnetism, speromagnetism and sperimagnetism, which correspond to ferromagnetism, antiferromagnetism and ferrimagnetism, respectively, of crystalline solids. Spin glass models can present these amorphous types of magnetism. Due to random frustration, amorphous magnets possess many nearly degenerate ground states.\nThe terms for the amorphous magnetic phases were coined by Michael Coey in the 1970s. The Greek root spero/speri means to scatter.", "Speromagnetism is the equivalent of antiferromagnetism for a disordered system with random magnetic moments. It is defined by short range correlations of locked magnetic moments within small noncrystalline regions, without average long range correlations. Speromagnets do not have a net magnetic moment.\nAn example of a solid presenting speromagnetism is amorphous YFe, whose spin structure can be detected using Mössbauer spectroscopy.", "Sperimagnetism is the equivalent of ferrimagnetism for a disordered system with two or more species of magnetic moments, with at least one species locked in random magnetic moments. Sperimagnets possess a permanent net magnetic moment. When all species are the same, this phase is equivalent to asperomagnetism.", "Both the formation and degradation of amylopectin are important to the metabolic processes of organisms. Amylopectin is one of the two dominant components of starch, and starch is a successful storage molecule for energy. Because of this, it is synthesized and broken down in most plants and cyanobacteria. In fact, amylopectin seems to rival glycogen, the energy storage molecule in animals, because it is able to store more glucose units and therefore more energy.\nThe synthesis of amylopectin depends on the combined efforts of four different enzymes. These four enzymes are:\n# ADP glucose pyrophosphorylase (AGPase)\n# soluble starch synthase (SS)\n# starch branching enzyme (BE)\n# starch debranching enzyme (DBE)\nAmylopectin is synthesized by the linkage of α(1→4) glycosidic bonds. The extensive branching of amylopectin (α(1→6) glycosidic bonds) is initiated by BE, and this is what differentiates amylose from amylopectin. DBE is also needed during this synthesis process to regulate the distribution of these branches.\nThe breakdown of amylopectin has been studied in the context of the breakdown of starch in animals and humans. Starch is mostly composed of amylopectin and amylose, but amylopectin has been shown to degrade more easily. The reason is most likely that amylopectin is highly branched and these branches are more available to digestive enzymes. 
In contrast, amylose tends to form helices and contain hydrogen bonding.\nThe breakdown of starch is dependent on three enzymes, among others:\n# alpha, beta amylases\n# phosphorylases\n# starch debranching enzyme (DBE)\nThere are enzymes that are involved in the synthesis and degradation of amylopectin that have isoforms that display different relationships with proteins and other enzymes. For example, there are many versions of SS (Starch Synthase). Even the third isoform (SS-III) has two different versions. It is believed that SS-I and SS-II both have a role in elongating the chains of amylopectin branches. SS-IV is also thought to be responsible for the leaf-like structure of starch granule clusters.", "Amylopectin is a water-insoluble polysaccharide and highly branched polymer of α-glucose units found in plants. It is one of the two components of starch, the other being amylose.\nPlants store starch within specialized organelles called amyloplasts. To generate energy, the plant hydrolyzes the starch, releasing the glucose subunits. Humans and other animals that eat plant foods also use amylase, an enzyme that assists in breaking down amylopectin, to initiate the hydrolysis of starch.\nStarch is made of about 70–80% amylopectin by weight, though it varies depending on the source. For example, it ranges from lower percent content in long-grain rice, amylomaize, and russet potatoes to 100% in glutinous rice, waxy potato starch, and waxy corn. Amylopectin is highly branched, being formed of 2,000 to 200,000 glucose units. Its inner chains are formed of 20–24 glucose subunits.\nDissolved amylopectin starch has a lower tendency of retrogradation (a partial recrystallization after cooking—a part of the staling process) during storage and cooling. For this main reason, the waxy starches are used in different applications mainly as a thickening agent or stabilizer.", "Starch and amylopectin are often used in adhesive formulas, and are increasingly examined for further use in construction", "Amylopectin is a key component in the crystallization of starch’s final configuration, accounting for 70-80% of the final mass. Composed of α-glucose, it is formed in plants as a primary measure of energy storage in tandem with this structural metric.\nAmylopectin bears a straight/linear chain along with a number of side chains which may be branched further. Glucose units are linked in a linear way with α(1→4) Glycosidic bonds. Branching usually occurs at intervals of 25 residues. At the places of origin of a side chain, the branching that takes place bears an α(1→6) glycosidic bond, resulting in a soluble molecule that can be quickly degraded as it has many end points onto which enzymes can attach. Wolform and Thompson (1956) have also reported α(1→3)linkages in case of Amylopectin. Amylopectin contains a larger number of Glucose units (2000 to 200,000) as compared to Amylose containing 200 to 1000 α-Glucose units. In contrast, amylose contains very few α(1→6) bonds, or even none at all. This causes amylose to be hydrolyzed more slowly, but also creates higher density and insolubility.\nAmylopectin is divided into A and B helical chains of α-glucose. A chains are chains that carry no other chains, resulting in an eventual terminus, whereas B chains are chains that do carry other chains, perpetuating the amylopectin polymer. 
The ratio between these is usually between 0.8 and 1.4.\nThe formation of chain structures has a direct impact on the overall strength of the polymeric whole; the longer a chain is, the more it affects starch's morphology. The packing of chains, characterized by the inter-block chain length (IB-CL), has also been correlated with a direct positive impact on the gelatinization temperature of starch granules. In tandem, the IB-CL increases as the length of the B chains increases, meaning that longer individual B chains produce longer blocks between connections with other chains. Finally, in general, the more densely packed the resulting molecule of amylopectin, the higher the strength of the starch gel as a whole unit.\nStarch utilizes this density-strength correlation of amylopectin to form dense, strong bricks as a basis for the final starch configuration. Amylopectin in starch is formed into helices that compose hexagonal structures, which are subsequently differentiated into A (cereal) and B (high-amylose; tubular) type starch. Structurally, A is more compact, while B is looser, hence the higher concentration of amylose.", "The categorization of amylopectin began with the first observation of starch in 1716 by Antonie van Leeuwenhoek, where he differentiated starch into two fundamental structural components.\nThe terms amylose and amylopectin were not coined until 1906, by the French researchers Maquenne and Roux in the course of an examination of starch, where they explained variations in the properties of starches according to the mixture of these related substances and variable saccharification by malt extract. Since then and through the 1940s, research focused on various methods of separation, such as fractional precipitation or enzymatic methods. This gave rise to the Meyer definition of amylose and "reserv[ing] the name amylopectin to carbohydrates that are branched molecule, degraded by b-amylase only to the stage of residual dextrin". Meyer also proposed the tree-like structure model for amylopectin.\nThe currently accepted structural model was proposed in 1972, based on the cluster organization of double helical structures. Other models have been proposed since, such as the Bertoft BB model, or building block and backbone model, in 2012. This model claims short chains are the structural building blocks and long chains the backbone that carries the building blocks, and that the different lengths of chain are separated by their position and direction of elongation.", "Amylopectin is the most common carbohydrate in the human diet and is contained in many staple foods. The major sources of amylopectin in dietary starch worldwide are the cereals such as rice, wheat, and maize, and the root vegetables potatoes and cassava. Upon cooking, amylopectin in the starch is transformed into readily accessible glucose chains with very different nutritional and functional properties. During cooking with high heat, sugars released from amylopectin can react with amino acids via the Maillard reaction, forming advanced glycation end-products (AGEs), contributing aromas, flavors and texture to foods.\nThe amylose/amylopectin ratio, molecular weight and molecular fine structure influence the physicochemical properties as well as energy release of different types of starches, which affects the number of calories people consume from food. 
Amylopectin is also sometimes used as a workout supplement due to its caloric density and a correlation with muscle protein synthesis.\nIndustrially, amylopectin is used as a stabilizer and thickener, for example in the form of corn starch. Amylopectin has also been widely used for the development of edible coating films because of its abundance, cost-effectiveness, and excellent film-forming abilities. Amylopectin-based films have good optical, organoleptic and gas barrier properties; however, they have poor mechanical properties. Many attempts have been made to overcome these limitations, such as the addition of co-biopolymers or other secondary additives to improve the mechanical and tensile properties of the films. Properties of the amylopectin-based films can be influenced by many factors, including the type of starch, temperature and time during film formation, plasticizers, co-biopolymers, and storage conditions.", "Amylopectin-based fibers have been fabricated mainly by blending native or modified starches with polymers, plasticizers, cross-linkers, or other additives. Most amylopectin-based fibers are fabricated by electro-wet-spinning; however, the method has been demonstrated to be suitable only for starches with an amylopectin content below 65% and is sensitive to the amylopectin content of the starch. Electrospinning allows amylopectin to coagulate and form a filament. Fibrous starches produce a denser material, which can optimize the mechanical properties of starch. Fibers in biomaterials can be used for bone tissue engineering as a suitable environment for bone tissue repair and regeneration. Natural bone is a complex composite material composed of an extracellular matrix of mineralized fibers containing living cells and bioactive molecules. Consequently, the use of fibers in biomaterial-based scaffolds offers a wide variety of opportunities to replicate the functional performance of bone. In the last decade, fiber-based techniques such as weaving, knitting, braiding, as well as electrospinning and direct writing, have emerged as promising platforms for making 3D tissue constructs.", "Amylopectin has seen a rise in use in biomedical applications due to its physiological properties, ease of availability, and low cost. Specifically, amylopectin has very advantageous biochemical properties owing to its nature as a natural polysaccharide, which gives it a high degree of biocompatibility with cells and molecules within the body. Amylopectin is also readily biodegradable because of its extensive crosslinking through α(1→6) glycosidic bonds. These bonds are easily broken down by the body, which can reduce molecular weight, expose certain regions, and allow certain bonds to interact with clinical factors. Various physical, chemical, and enzymatic methods of modification have also been researched for amylopectin. These generally allow for enhanced and controllable properties that can be tailored to the intended field of use. Clinically, amylopectin's main role is as a component of starch: the function and structure of amylopectin depend on its integration with amylose and other bound molecules. Separating these molecules to isolate amylopectin is quite difficult for researchers to perform.", "Drug delivery refers to technology used to present a drug to a pre-determined region of the body for drug emission and absorption. Principles relating to route of administration, metabolism, site of specific targeting, and toxicity are most important within this field. 
Drugs administered orally (through the mouth) are usually encapsulated in some structure in order to protect the drug from immune and biological responses. These structures aim to keep the drug intact until it reaches its site of action and to release it at the correct dosage when exposed to a specific marker. Corn and potato starch are often used for this as they contain 60-80% amylopectin. They are mostly used in solid preparations: powders, granules, capsules, and tablets. As a natural polysaccharide, starch is compatible with anatomical structures and molecules, which helps prevent adverse immune responses, a major concern in drug delivery. The biodegradability of starch allows it to keep the drug intact until it reaches its site of action, letting the drug avoid low-pH environments such as the digestive system. Native starch can also be modified in physical, chemical, and enzymatic ways to improve its mechanical or biochemical properties. Within drug delivery, physical modifications include treatment under mechanical forces, heat, or pressure. Chemical modifications attempt to alter the molecular structure, which can include breaking or adding bonds. Treating starch with enzymes can allow for increased water solubility.", "Tissue engineering aims to generate functional constructs which could replace or improve damaged or infected tissues or whole organs. Many of these constructs lead to infected tissue around the implant area, and coating these materials in amylopectin reduces this infectious reaction. Amylopectin is mainly used as a coating around these constructs because it prevents subsequent immune reactions. Since amylopectin is derived directly from a natural polysaccharide, it integrates well with tissues and cells. However, the mechanical properties of amylopectin are not optimal due to its high level of crosslinking. This can be avoided by the formation of amylopectin fibers or by forming a nanocomposite with another, more rigid polymer.", "Nanoscience and nanotechnology have been emerging as technologies for the development of various hybrid and composite materials for biomedical applications. When nanomaterials are used for the development of composites in biology, they are called bionanocomposites. Bionanocomposites have been used in tissue engineering to replace, support, or regenerate cells, organs, or parts of the human body so that they can function as normal.\nAmylopectin-based bionanocomposites are another important class of bionanomaterials, which are biodegradable and have higher mechanical properties, optical transparency, thermal stability, and barrier properties than thermoplastic starch. In conjunction with other nanomaterials such as cellulose nanocrystals, nano-ZnO, nanoclay and biodegradable synthetic polymers, starch is one of the most popular materials for the preparation of bionanocomposites for various biomedical applications such as controlled drug release, scaffolds for tissue engineering, and cements for bone regeneration. Amylopectin is usually combined with a synthetic polymer with higher elastic modulus and yield strength. This allows the starch to withstand the higher fluid flow and mechanical forces prevalent in bone, cardiac, and endothelial tissue.", "Historically, there is a long-established use of starch in sizing applications for textiles. 
As a component of starch, amylopectin is responsible for the retrogradation or crystalline reordering of the starch, which adds rigidity.\nThis stiffening effect is used for several textile industry processes, such as printing and pressing, to maintain the shape of a fabric over time. Amylopectin is also used as a sizing agent for yarns, to reinforce and protects the fibers from abrasion and breakage during weaving.", "Anisotropic energy is energy that is directionally specific. The word anisotropy means \"directionally dependent\", hence the definition. The most common form of anisotropic energy is magnetocrystalline anisotropy, which is commonly studied in ferromagnets. In ferromagnets, there are islands or domains of atoms that are all coordinated in a certain direction; this spontaneous positioning is often called the \"easy\" direction, indicating that this is the lowest energy state for these atoms. In order to study magnetocrystalline anisotropy, energy (usually in the form of an electric current) is applied to the domain, which causes the crystals to deflect from the \"easy\" to \"hard\" positions. The energy required to do this is defined as the anisotropic energy. The easy and hard alignments and their relative energies are due to the interaction between spin magnetic moment of each atom and the crystal lattice of the compound being studied.", "An anode ray ion source typically is an anode coated with the halide salt of an alkali or alkaline earth metal. Application of a sufficiently high electrical potential creates alkali or alkaline earth ions and their emission is most brightly visible at the anode.", "Goldstein used a gas-discharge tube which had a perforated cathode. When an electrical potential of several thousand volts is applied between the cathode and anode, faint luminous \"rays\" are seen extending from the holes in the back of the cathode. These rays are beams of particles moving in a direction opposite to the \"cathode rays\", which are streams of electrons which move toward the anode. Goldstein called these positive rays Kanalstrahlen, \"channel rays\", or \"canal rays\", because these rays passed through the holes or channels in the cathode.\nThe process by which anode rays are formed in a gas-discharge anode ray tube is as follows. When the high voltage is applied to the tube, its electric field accelerates the small number of ions (electrically charged atoms) always present in the gas, created by natural processes such as radioactivity. These collide with atoms of the gas, knocking electrons off them and creating more positive ions. These ions and electrons in turn strike more atoms, creating more positive ions in a chain reaction. The positive ions are all attracted to the negative cathode, and some pass through the holes in the cathode. These are the anode rays.\nBy the time they reach the cathode, the ions have been accelerated to a sufficient speed such that when they collide with other atoms or molecules in the gas they excite the species to a higher energy level. In returning to their former energy levels these atoms or molecules release the energy that they had gained. That energy gets emitted as light. This light-producing process, called fluorescence, causes a glow in the region behind the cathode.", "An anode ray (also positive ray or canal ray) is a beam of positive ions that is created by certain types of gas-discharge tubes. They were first observed in Crookes tubes during experiments by the German scientist Eugen Goldstein, in 1886. 
Later work on anode rays by Wilhelm Wien and J. J. Thomson led to the development of mass spectrometry.", "When no external field is applied, the antiferromagnetic structure corresponds to a vanishing total magnetization. In an external magnetic field, a kind of ferrimagnetic behavior may be displayed in the antiferromagnetic phase, with the absolute value of one of the sublattice magnetizations differing from that of the other sublattice, resulting in a nonzero net magnetization. Although the net magnetization should be zero at a temperature of absolute zero, the effect of spin canting often causes a small net magnetization to develop, as seen for example in hematite.\nThe magnetic susceptibility of an antiferromagnetic material typically shows a maximum at the Néel temperature. In contrast, at the transition between the ferromagnetic to the paramagnetic phases the susceptibility will diverge. In the antiferromagnetic case, a divergence is observed in the staggered susceptibility.\nVarious microscopic (exchange) interactions between the magnetic moments or spins may lead to antiferromagnetic structures. In the simplest case, one may consider an Ising model on a bipartite lattice, e.g. the simple cubic lattice, with couplings between spins at nearest neighbor sites. Depending on the sign of that interaction, ferromagnetic or antiferromagnetic order will result. Geometrical frustration or competing ferro- and antiferromagnetic interactions may lead to different and, perhaps, more complicated magnetic structures.\nThe relationship between magnetization and the magnetizing field is non-linear like in ferromagnetic materials. This fact is due to the contribution of the hysteresis loop, which for ferromagnetic materials involves a residual magnetization.", "Antiferromagnetic structures were first shown through neutron diffraction of transition metal oxides such as nickel, iron, and manganese oxides. The experiments, performed by Clifford Shull, gave the first results showing that magnetic dipoles could be oriented in an antiferromagnetic structure.\nAntiferromagnetic materials occur commonly among transition metal compounds, especially oxides. Examples include hematite, metals such as chromium, alloys such as iron manganese (FeMn), and oxides such as nickel oxide (NiO). There are also numerous examples among high nuclearity metal clusters. Organic molecules can also exhibit antiferromagnetic coupling under rare circumstances, as seen in radicals such as 5-dehydro-m-xylylene.\nAntiferromagnets can couple to ferromagnets, for instance, through a mechanism known as exchange bias, in which the ferromagnetic film is either grown upon the antiferromagnet or annealed in an aligning magnetic field, causing the surface atoms of the ferromagnet to align with the surface atoms of the antiferromagnet. This provides the ability to \"pin\" the orientation of a ferromagnetic film, which provides one of the main uses in so-called spin valves, which are the basis of magnetic sensors including modern hard disk drive read heads. The temperature at or above which an antiferromagnetic layer loses its ability to \"pin\" the magnetization direction of an adjacent ferromagnetic layer is called the blocking temperature of that layer and is usually lower than the Néel temperature.", "Unlike ferromagnetism, anti-ferromagnetic interactions can lead to multiple optimal states (ground states—states of minimal energy). 
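As a minimal, textbook-style illustration of the exchange models referred to above (standard notation, not tied to any particular material mentioned here), the nearest-neighbour Ising Hamiltonian on a lattice of spins s_i = ±1 can be written

\[ H = J \sum_{\langle i,j \rangle} s_i s_j , \]

where the sum runs over nearest-neighbour pairs. With this sign convention, J < 0 favours parallel (ferromagnetic) alignment while J > 0 favours antiparallel (antiferromagnetic) alignment; on a bipartite lattice such as the simple cubic lattice every bond can be satisfied simultaneously, whereas on non-bipartite geometries such as a triangle it cannot, which is the frustration discussed in the following passage.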
In one dimension, the anti-ferromagnetic ground state is an alternating series of spins: up, down, up, down, etc. Yet in two dimensions, multiple ground states can occur.\nConsider an equilateral triangle with three spins, one on each vertex. If each spin can take on only two values (up or down), there are 2³ = 8 possible states of the system, six of which are ground states. The two situations which are not ground states are when all three spins are up or all down. In any of the other six states, there will be two favorable interactions and one unfavorable one. This illustrates frustration: the inability of the system to find a single ground state (a brute-force enumeration of these eight states is sketched after this passage). This type of magnetic behavior has been found in minerals that have a crystal stacking structure such as a kagome lattice or hexagonal lattice.", "Synthetic antiferromagnets (often abbreviated by SAF) are artificial antiferromagnets consisting of two or more thin ferromagnetic layers separated by a nonmagnetic layer. Dipole coupling of the ferromagnetic layers results in antiparallel alignment of the magnetization of the ferromagnets.\nAntiferromagnetism plays a crucial role in giant magnetoresistance, as discovered in 1988 by the Nobel Prize winners Albert Fert and Peter Grünberg (awarded in 2007) using synthetic antiferromagnets.\nThere are also examples of disordered materials (such as iron phosphate glasses) that become antiferromagnetic below their Néel temperature. These disordered networks frustrate the antiparallelism of adjacent spins; i.e. it is not possible to construct a network where each spin is surrounded by opposite neighbour spins. It can only be determined that the average correlation of neighbour spins is antiferromagnetic. This type of magnetism is sometimes called speromagnetism.", "In materials that exhibit antiferromagnetism, the magnetic moments of atoms or molecules, usually related to the spins of electrons, align in a regular pattern with neighboring spins (on different sublattices) pointing in opposite directions. This is, like ferromagnetism and ferrimagnetism, a manifestation of ordered magnetism. The phenomenon of antiferromagnetism was first introduced by Lev Landau in 1933.\nGenerally, antiferromagnetic order may exist at sufficiently low temperatures, but vanishes at and above the Néel temperature – named after Louis Néel, who first identified this type of magnetic ordering in the West. Above the Néel temperature, the material is typically paramagnetic.", "One recent, successful business endeavor has been the introduction of AFPs into ice cream and yogurt products. This ingredient, labelled ice-structuring protein, has been approved by the Food and Drug Administration. The proteins are isolated from fish and replicated, on a larger scale, in genetically modified yeast.\nThere is concern from organizations opposed to genetically modified organisms (GMOs), who believe that antifreeze proteins may cause inflammation. Dietary intake of AFPs is likely already substantial in most northerly and temperate regions. Given the known historic consumption of AFPs, it is safe to conclude their functional properties do not impart any toxicologic or allergenic effects in humans.\nAs well, the transgenic process used to produce ice-structuring proteins is widely used in society. Insulin and rennet are produced using this technology. 
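Returning to the frustrated antiferromagnetic triangle discussed above, the state counting (2³ = 8 configurations, 6 of them degenerate ground states) can be checked with a brute-force enumeration; this is a self-contained illustrative sketch, not code from any referenced source:

```python
from itertools import product

# Antiferromagnetic Ising triangle: E = J * sum of s_i * s_j over the three bonds,
# with J > 0 so that antiparallel neighbours are energetically favorable.
J = 1.0
bonds = [(0, 1), (1, 2), (0, 2)]

def energy(spins):
    return J * sum(spins[i] * spins[j] for i, j in bonds)

states = list(product((+1, -1), repeat=3))   # all 2**3 = 8 configurations
e_min = min(energy(s) for s in states)
ground_states = [s for s in states if energy(s) == e_min]

print(len(states))         # 8
print(len(ground_states))  # 6 -- each ground state satisfies two bonds and frustrates one
```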
The process does not impact the product; it merely makes production more efficient and prevents the death of fish that would otherwise be killed to extract the protein.\nCurrently, Unilever incorporates AFPs into some of its American products, including some Popsicle ice pops and a new line of Breyers Light Double Churned ice cream bars. In ice cream, AFPs allow the production of very creamy, dense, reduced-fat ice cream with fewer additives. They control ice crystal growth brought on by thawing on the loading dock or kitchen table, which reduces texture quality.\nIn November 2009, the Proceedings of the National Academy of Sciences published the discovery of a molecule in an Alaskan beetle that behaves like AFPs, but is composed of saccharides and fatty acids.\nA 2010 study demonstrated the stability of superheated water ice crystals in an AFP solution, showing that while the proteins can inhibit freezing, they can also inhibit melting.\nIn 2021, EPFL and Warwick scientists found an artificial imitation of antifreeze proteins.", "The remarkable diversity and distribution of AFPs suggest the different types evolved recently in response to sea level glaciation occurring 1–2 million years ago in the Northern hemisphere and 10–30 million years ago in Antarctica. Data collected from deep-sea ocean drilling have revealed that the Antarctic Circumpolar Current formed over 30 million years ago. The cooling of the Antarctic imposed by this current caused a mass extinction of teleost species that were unable to withstand freezing temperatures. Notothenioid species with the antifreeze glycoprotein were able to survive the glaciation event and diversify into new niches.\nThis independent development of similar adaptations is referred to as convergent evolution. Evidence for convergent evolution in Northern cod (Gadidae) and Notothenioids is supported by the findings of different spacer sequences and different organization of introns and exons as well as unmatching AFGP tripeptide sequences, which emerged from duplications of short ancestral sequences which were differently permuted (for the same tripeptide) by each group. These groups diverged approximately 7–15 million years ago. Shortly after (5–15 mya), the AFGP gene evolved from an ancestral pancreatic trypsinogen gene in Notothenioids. AFGP and trypsinogen genes split via a sequence divergence, an adaptation which occurred alongside the cooling and eventual freezing of the Antarctic Ocean. The evolution of the AFGP gene in Northern cod occurred more recently (~3.2 mya) and emerged from a noncoding sequence via tandem duplications in a Thr-Ala-Ala unit. Antarctic notothenioid fish and Arctic cod, Boreogadus saida, are part of two distinct orders and have very similar antifreeze glycoproteins. Although the two fish orders have similar antifreeze proteins, cod species contain arginine in their AFGPs, while Antarctic notothenioids do not. The role of arginine as an enhancer has been investigated in the Dendroides canadensis antifreeze protein (DAFP-1) by observing the effect of a chemical modification using 1,2-cyclohexanedione. Previous research has found various enhancers of this beetle's antifreeze protein, including a thaumatin-like protein and polycarboxylates. Modifications of DAFP-1 with the arginine-specific reagent resulted in partial or complete loss of thermal hysteresis in DAFP-1, indicating that arginine plays a crucial role in enhancing its ability. 
Different enhancer molecules of DAFP-1 have distinct thermal hysteresis activity. Amornwittawat et al. 2008 found that the number of carboxylate groups in a molecule influences the enhancing ability of DAFP-1. Optimum activity in TH is correlated with a high concentration of enhancer molecules. Li et al. 1998 investigated the effects of pH and solutes on thermal hysteresis in antifreeze proteins from Dendroides canadensis. TH activity of DAFP-4 was not affected by pH unless there was a low solute concentration (pH 1), in which case TH decreased. The effect of five solutes, succinate, citrate, malate, malonate, and acetate, on TH activity was reported. Among the five solutes, citrate was shown to have the greatest enhancing effect.\nThis is an example of a proto-ORF model, a rare occurrence where new genes pre-exist as a formed open reading frame before the existence of the regulatory element needed to activate them.\nIn fishes, horizontal gene transfer is responsible for the presence of Type II AFP proteins in some groups without a recently shared phylogeny. In herring and smelt, up to 98% of introns for this gene are shared; the method of transfer is assumed to occur during mating via sperm cells exposed to foreign DNA. The direction of transfer is known to be from herring to smelt, as herring have 8 times the copies of the AFP gene as smelt (1), and the segments of the gene in smelt house transposable elements which are otherwise characteristic of and common in herring but not found in other fishes.\nThere are two reasons why many types of AFPs are able to carry out the same function despite their diversity:\n# Although ice is uniformly composed of water molecules, it has many different surfaces exposed for binding. Different types of AFPs may interact with different surfaces.\n# Although the five types of AFPs differ in their primary structure of amino acids, when each folds into a functioning protein they may share similarities in their three-dimensional or tertiary structure that facilitates the same interactions with ice.\nAntifreeze glycoprotein activity has been observed across several ray-finned species including eelpouts, sculpins, and cod species. Fish species that possess the antifreeze glycoprotein express different levels of protein activity. Polar cod (Boreogadus saida) exhibit similar protein activity and properties to the Antarctic species, T. borchgrevinki. Both species have higher protein activity than saffron cod (Eleginus gracilis). Ice antifreeze proteins have been reported in diatom species to help decrease the freezing point of the organism's proteins. Bayer-Giraldi et al. 2010 found 30 species from distinct taxa with homologues of ice antifreeze proteins. The diversity is consistent with previous research that has observed the presence of these genes in crustaceans, insects, bacteria, and fungi. Horizontal gene transfer is responsible for the presence of ice antifreeze proteins in two sea diatom species, F. cylindrus and F. curta.", "Many microorganisms living in sea ice possess AFPs that belong to a single family. The diatoms Fragilariopsis cylindrus and F. curta play a key role in polar sea ice communities, dominating the assemblages of both the platelet layer and pack ice. AFPs are widespread in these species, and the presence of AFP genes as a multigene family indicates the importance of this group for the genus Fragilariopsis. AFPs identified in F. 
cylindrus belong to an AFP family which is represented in different taxa and can be found in other organisms related to sea ice (Colwellia spp., Navicula glaciei, Chaetoceros neogracile and Stephos longipes and Leucosporidium antarcticum) and Antarctic inland ice bacteria (Flavobacteriaceae), as well as in cold-tolerant fungi (Typhula ishikariensis, Lentinula edodes and Flammulina populicola).\nSeveral structures for sea ice AFPs have been solved. This family of proteins fold into a beta helix that form a flat ice-binding surface. Unlike the other AFPs, there is not a singular sequence motif for the ice-binding site.\nAFP found from the metagenome of the ciliate Euplotes focardii and psychrophilic bacteria has an efficient ice re-crystallization inhibition ability. 1 μM of Euplotes focardii consortium ice-binding protein (EfcIBP) is enough for the total inhibition of ice re-crystallization in –7.4 °C temperature. This ice-recrystallization inhibition ability helps bacteria to tolerate ice rather than preventing the formation of ice. EfcIBP produces also thermal hysteresis gap, but this ability is not as efficient as the ice-recrystallization inhibition ability. EfcIBP helps to protect both purified proteins and whole bacterial cells in freezing temperatures. Green fluorescent protein is functional after several cycles of freezing and melting when incubated with EfcIBP. Escherichia coli survives longer periods in 0 °C temperature when the efcIBP gene was inserted to E. coli genome. EfcIBP has a typical AFP structure consisting of multiple beta-sheets and an alpha-helix. Also, all the ice-binding polar residues are at the same site of the protein.", "According to the structure and function study on the antifreeze protein from Pseudopleuronectes americanus, the antifreeze mechanism of the type-I AFP molecule was shown to be due to the binding to an ice nucleation structure in a zipper-like fashion through hydrogen bonding of the hydroxyl groups of its four Thr residues to the oxygens along the direction in ice lattice, subsequently stopping or retarding the growth of ice pyramidal planes so as to depress the freeze point.\nThe above mechanism can be used to elucidate the structure-function relationship of other antifreeze proteins with the following two common features:\n# recurrence of a Thr residue (or any other polar amino acid residue whose side-chain can form a hydrogen bond with water) in an 11-amino-acid period along the sequence concerned, and\n# a high percentage of an Ala residue component therein.", "Species containing AFPs may be classified as\nFreeze avoidant: These species are able to prevent their body fluids from freezing altogether. Generally, the AFP function may be overcome at extremely cold temperatures, leading to rapid ice growth and death.\nFreeze tolerant: These species are able to survive body fluid freezing. Some freeze tolerant species are thought to use AFPs as cryoprotectants to prevent the damage of freezing, but not freezing altogether. The exact mechanism is still unknown. However, it is thought AFPs may inhibit recrystallization and stabilize cell membranes to prevent damage by ice. They may work in conjunction with ice nucleating proteins (INPs) to control the rate of ice propagation following freezing.", "Numerous fields would be able to benefit from the protection of tissue damage by freezing. 
Businesses are currently investigating the use of these proteins in:\n* Increasing freeze tolerance of crop plants and extending the harvest season in cooler climates\n* Improving farm fish production in cooler climates\n* Lengthening shelf life of frozen foods\n* Improving cryosurgery\n* Enhancing preservation of tissues for transplant or transfusion in medicine\n* Therapy for hypothermia\n* Human Cryopreservation (Cryonics)\nUnilever has obtained UK, US, EU, Mexico, China, Philippines, Australia and New Zealand approval to use a genetically modified yeast to produce antifreeze proteins from fish for use in ice cream production. They are labeled \"ISP\" or ice structuring protein on the label, instead of AFP or antifreeze protein.", "In the 1950s, Norwegian scientist Scholander set out to explain how Arctic fish can survive in water colder than the freezing point of their blood. His experiments led him to believe there was “antifreeze” in the blood of Arctic fish. Then in the late 1960s, animal biologist Arthur DeVries was able to isolate the antifreeze protein through his investigation of Antarctic fish. These proteins were later called antifreeze glycoproteins (AFGPs) or antifreeze glycopeptides to distinguish them from newly discovered nonglycoprotein biological antifreeze agents (AFPs). DeVries worked with Robert Feeney (1970) to characterize the chemical and physical properties of antifreeze proteins. In 1992, Griffith et al. documented their discovery of AFP in winter rye leaves. Around the same time, Urrutia, Duman and Knight (1992) documented thermal hysteresis protein in angiosperms. The next year, Duman and Olsen noted AFPs had also been discovered in over 23 species of angiosperms, including ones eaten by humans. They reported their presence in fungi and bacteria as well.", "Normally, ice crystals grown in solution only exhibit the basal (0001) and prism faces (1010), and appear as round and flat discs. However, it appears the presence of AFPs exposes other faces. It now appears the ice surface 2021 is the preferred binding surface, at least for AFP type I. Through studies on type I AFP, ice and AFP were initially thought to interact through hydrogen bonding (Raymond and DeVries, 1977). However, when parts of the protein thought to facilitate this hydrogen bonding were mutated, the hypothesized decrease in antifreeze activity was not observed. Recent data suggest hydrophobic interactions could be the main contributor. It is difficult to discern the exact mechanism of binding because of the complex water-ice interface. Currently, attempts to uncover the precise mechanism are being made through use of molecular modelling programs (molecular dynamics or the Monte Carlo method).", "The classification of AFPs became more complicated when antifreeze proteins from plants were discovered. Plant AFPs are rather different from the other AFPs in the following aspects:\n#They have much weaker thermal hysteresis activity when compared to other AFPs.\n#Their physiological function is likely in inhibiting the recrystallization of ice rather than in preventing ice formation.\n#Most of them are evolved pathogenesis-related proteins, sometimes retaining antifungal properties.", "Antifreeze glycoproteins or AFGPs are found in Antarctic notothenioids and northern cod. They are 2.6-3.3 kD. AFGPs evolved separately in notothenioids and northern cod. 
In notothenioids, the AFGP gene arose from an ancestral trypsinogen-like serine protease gene.\n*Type I AFP is found in winter flounder, longhorn sculpin and shorthorn sculpin. It is the best documented AFP because it was the first to have its three-dimensional structure determined. Type I AFP consists of a single, long, amphipathic alpha helix, about 3.3-4.5 kD in size. There are three faces to the 3D structure: the hydrophobic, hydrophilic, and Thr-Asx face.\n**Type I-hyp AFP (where hyp stands for hyperactive) is found in several righteye flounders. It is approximately 32 kD (two 17 kD dimeric molecules). The protein was isolated from the blood plasma of winter flounder. It is considerably better at depressing freezing temperature than most fish AFPs. The ability is partially derived from its many repeats of the Type I ice-binding site.\n*Type II AFPs are found in sea raven, smelt and herring. They are cysteine-rich globular proteins containing five disulfide bonds. Type II AFPs likely evolved from calcium-dependent (c-type) lectins. Sea ravens, smelt, and herring are quite divergent lineages of teleost. If the AFP gene were present in the most recent common ancestor of these lineages, it is peculiar that the gene is scattered throughout those lineages, present in some orders and absent in others. It has been suggested that lateral gene transfer could be attributed to this discrepancy, such that the smelt acquired the type II AFP gene from the herring.\n*Type III AFPs are found in Antarctic eelpout. They exhibit similar overall hydrophobicity at ice-binding surfaces to type I AFPs. They are approximately 6 kD in size. Type III AFPs likely evolved from a sialic acid synthase (SAS) gene present in Antarctic eelpout. Through a gene duplication event, this gene—which has been shown to exhibit some ice-binding activity of its own—evolved into an effective AFP gene by loss of the N-terminal part.\n*Type IV AFPs are found in longhorn sculpins. They are alpha-helical proteins rich in glutamate and glutamine. This protein is approximately 12 kDa in size and consists of a 4-helix bundle. Its only posttranslational modification is a pyroglutamate residue, a cyclized glutamine residue at its N-terminus.", "AFPs are thought to inhibit ice growth by an adsorption–inhibition mechanism. They adsorb to nonbasal planes of ice, inhibiting thermodynamically-favored ice growth. The presence of a flat, rigid surface in some AFPs seems to facilitate its interaction with ice via Van der Waals force surface complementarity.", "Recent attempts have been made to relabel antifreeze proteins as ice structuring proteins to more accurately represent their function and to dispose of any assumed negative relation between AFPs and automotive antifreeze, ethylene glycol. These two things are completely separate entities, and show loose similarity only in their function.", "AFPs create a difference between the melting point and freezing point (the bursting temperature of the AFP-bound ice crystal) known as thermal hysteresis. The addition of AFPs at the interface between solid ice and liquid water inhibits the thermodynamically favored growth of the ice crystal. Ice growth is kinetically inhibited by the AFPs covering the water-accessible surfaces of ice.\nThermal hysteresis is easily measured in the lab with a nanolitre osmometer. Organisms differ in their values of thermal hysteresis. The maximum level of thermal hysteresis shown by fish AFP is approximately −3.5 °C (Sheikh Mahatabuddin et al., SciRep). 
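In the notation commonly used for such measurements (this is a standard definition, not a value or formula quoted from the studies cited above), thermal hysteresis activity is the gap

\[ \mathrm{TH} = T_m - T_f , \]

where T_m is the equilibrium melting point of a seed ice crystal in the solution and T_f is the nonequilibrium freezing (burst) temperature at which the AFP-covered crystal suddenly resumes growth. A fish AFP with TH of roughly 3.5 °C therefore holds off ice growth down to about 3.5 °C below the melting point.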
In contrast, aquatic organisms are exposed only to −1 to −2 °C below freezing. During the extreme winter months, the spruce budworm resists freezing at temperatures approaching −30 °C.\nThe rate of cooling can influence the thermal hysteresis value of AFPs. Rapid cooling can substantially decrease the nonequilibrium freezing point, and hence the thermal hysteresis value. Consequently, organisms cannot necessarily adapt to their subzero environment if the temperature drops abruptly.", "Unlike the widely used automotive antifreeze, ethylene glycol, AFPs do not lower freezing point in proportion to concentration. Rather, they work in a noncolligative manner. This phenomenon allows them to act as an antifreeze at concentrations 1/300th to 1/500th of those of other dissolved solutes. Their low concentration minimizes their effect on osmotic pressure. The unusual properties of AFPs are attributed to their selective affinity for specific crystalline ice forms and the resulting blockade of the ice-nucleation process.", "Antifreeze proteins (AFPs) or ice structuring proteins refer to a class of polypeptides produced by certain animals, plants, fungi and bacteria that permit their survival in temperatures below the freezing point of water. AFPs bind to small ice crystals to inhibit the growth and recrystallization of ice that would otherwise be fatal. There is also increasing evidence that AFPs interact with mammalian cell membranes to protect them from cold damage. This work suggests the involvement of AFPs in cold acclimatization.", "There are a number of AFPs found in insects, including those from Dendroides, Tenebrio and Rhagium beetles, spruce budworm and pale beauty moths, and midges (same order as flies). Insect AFPs share certain similarities, with most having higher activity (i.e. greater thermal hysteresis value, termed hyperactive) and a repetitive structure with a flat ice-binding surface. Those from the closely related Tenebrio and Dendroides beetles are homologous and each 12–13 amino-acid repeat is stabilized by an internal disulfide bond. Isoforms have between 6 and 10 of these repeats that form a coil, or beta-solenoid. One side of the solenoid has a flat ice-binding surface that consists of a double row of threonine residues. Other beetles (genus Rhagium) have longer repeats without internal disulfide bonds that form a compressed beta-solenoid (beta sandwich) with four rows of threonine residus, and this AFP is structurally similar to that modelled for the non-homologous AFP from the pale beauty moth. In contrast, the AFP from the spruce budworm moth is a solenoid that superficially resembles the Tenebrio protein, with a similar ice-binding surface, but it has a triangular cross-section, with longer repeats that lack the internal disulfide bonds. The AFP from midges is structurally similar to those from Tenebrio and Dendroides, but the disulfide-braced beta-solenoid is formed from shorter 10 amino-acids repeats, and instead of threonine, the ice-binding surface consists of a single row of tyrosine residues. Springtails (Collembola) are not insects, but like insects, they are arthropods with six legs. A species found in Canada, which is often called a \"snow flea\", produces hyperactive AFPs. Although they are also repetitive and have a flat ice-binding surface, the similarity ends there. Around 50% of the residues are glycine (Gly), with repeats of Gly-Gly- X or Gly-X-X, where X is any amino acid. Each 3-amino-acid repeat forms one turn of a polyproline type II helix. 
The helices then fold together, to form a bundle that is two helices thick, with an ice-binding face dominated by small hydrophobic residues like alanine, rather than threonine. Other insects, such as an Alaskan beetle, produce hyperactive antifreezes that are even less similar, as they are polymers of sugars (xylomannan) rather than polymers of amino acids (proteins). Taken together, this suggests that most of the AFPs and antifreezes arose after the lineages that gave rise to these various insects diverged. The similarities they do share are the result of convergent evolution.", "Aqueous biphasic systems (ABS) or aqueous two-phase systems (ATPS) are clean alternatives for traditional organic-water solvent extraction systems.\nABS are formed when either two polymers, one polymer and one kosmotropic salt, or two salts (one chaotropic salt and the other a kosmotropic salt) are mixed at appropriate concentrations or at a particular temperature. The two phases are mostly composed of water and non volatile components, thus eliminating volatile organic compounds. They have been used for many years in biotechnological applications as non-denaturing and benign separation media. Recently, it has been found that ATPS can be used for separations of metal ions like mercury and cobalt, carbon nanotubes, environmental remediation, metallurgical applications and as a reaction media.", "In 1896, Beijerinck first noted an incompatibility in solutions of agar, a water-soluble polymer, with soluble starch or gelatine. Upon mixing, they separated into two immiscible phases.\nSubsequent investigation led to the determination of many other aqueous biphasic systems, of which the polyethylene glycol (PEG) - dextran system is the most extensively studied. Other systems that form aqueous biphases are: PEG - sodium carbonate or PEG and phosphates, citrates or sulfates. Aqueous biphasic systems are used during downstream processing mainly in biotechnological and chemical industries.", "It is a common observation that when oil and water are poured into the same container, they separate into two phases or layers, because they are immiscible. In general, aqueous (or water-based) solutions, being polar, are immiscible with non-polar organic solvents (cooking oil, chloroform, toluene, hexane etc.) and form a two-phase system. However, in an ABS, both immiscible components are water-based.\nThe formation of the distinct phases is affected by the pH, temperature and ionic strength of the two components, and separation occurs when the amount of a polymer present exceeds a certain limiting concentration (which is determined by the above factors).", "The \"upper phase\" is formed by the more hydrophobic polyethylene glycol (PEG), which is of lower density than the \"lower phase,\" consisting of the more hydrophilic and denser dextran solution.\nAlthough PEG is inherently denser than water, it occupies the upper layer. This is believed to be due to its solvent ordering properties, which excludes excess water, creating a low density water environment. The degree of polymerization of PEG also affects the phase separation and the partitioning of molecules during extraction.", "ABS is an excellent method to employ for the extraction of proteins/enzymes and other labile biomolecules from crude cell extracts or other mixtures. 
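How strongly a given biomolecule is enriched in one of the two aqueous phases is conventionally quantified by its partition coefficient (a standard definition, not a value taken from the text):

\[ K = \frac{C_{\text{top}}}{C_{\text{bottom}}} , \]

where C_top and C_bottom are the equilibrium concentrations of the target molecule in the upper (for example, PEG-rich) and lower (for example, dextran- or salt-rich) phases; K > 1 indicates enrichment in the upper phase, and the factors discussed below (temperature, degree of polymerisation, added ions, affinity ligands) are chosen to push K strongly in one direction.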
Most often, this technique is employed in enzyme technology during industrial or laboratory production of enzymes.\n* They provide mild conditions that do not harm or denature unstable/labile biomolecules\n* The interfacial stress (at the interface between the two layers) is far lower (400-fold less) than in water-organic solvent systems used for solvent extraction, causing less damage to the molecule to be extracted\n* The polymer layer stabilizes the extracted protein molecules, favouring a higher concentration of the desired protein in one of the layers, resulting in an effective extraction\n* Specialised systems may be developed (by varying factors such as temperature, degree of polymerisation, presence of certain ions, etc.) to favour the enrichment of a specific compound, or class of compounds, into one of the two phases. They are sometimes used simultaneously with ion-exchange resins for better extraction\n* Separation of the phases and the partitioning of the compounds occurs rapidly. This allows the extraction of the desired molecule before endogenous proteases can degrade it.\n* These systems are amenable to scale-up, from laboratory-sized set-ups to those that can handle the requirements of industrial production. They may be employed in continuous protein-extraction processes.\nSpecificity may be further increased by tagging ligands specific to the desired enzyme onto the polymer. This results in preferential binding of the enzyme to the polymer, increasing the effectiveness of the extraction.\nOne major disadvantage, however, is the cost of the materials involved, namely the high-purity dextrans employed for the purpose. However, other low-cost alternatives such as less refined dextrans, hydroxypropyl starch derivatives and high-salt solutions are also available.", "Besides experimental study, it is important to have a good thermodynamic model to describe and predict liquid-liquid equilibrium conditions in engineering and design. Phase equilibrium data are usually used to obtain global and reliable parameters for such thermodynamic models. As polymer, electrolyte and water are all present in polymer/salt systems, all the different types of interactions should be taken into account. Up to now, several models have been used, such as NRTL, Chen-NRTL, Wilson, UNIQUAC, NRTL-NRF and UNIFAC-NRF. It has been shown that, in all cases, the mentioned models were successful in reproducing tie-line data of polymer/salt aqueous two-phase systems. In most of the previous works, excess Gibbs energy functions have been used for modeling.", "In condensed matter physics, an Arrott plot is a plot of the square of the magnetization of a substance against the ratio of the applied magnetic field to the magnetization, at one (or several) fixed temperature(s). Arrott plots are an easy way of determining the presence of ferromagnetic order in a material. They are named after the American physicist Anthony Arrott, who introduced them as a technique for studying magnetism in 1957.", "Giving the critical exponents explicitly in the equation of state, Arrott and Noakes proposed:\n(H/M)^(1/γ) = (T − T_c)/T_1 + (M/M_1)^(1/β),\nwhere T_1 and M_1 are free parameters. In these modified Arrott plots, the data are plotted as M^(1/β) versus (H/M)^(1/γ). In the case of classical Landau theory, β = 1/2 and γ = 1, and this equation reduces to the linear M² versus H/M plot. 
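As a purely illustrative sketch of how such a plot is constructed (the magnetization data and material parameters below are invented for demonstration, not taken from any source above), one can plot M² against H/M for several isotherms and look for the isotherm that extrapolates through the origin:

```python
import numpy as np
import matplotlib.pyplot as plt

# Hypothetical mean-field magnetization data M(H, T) generated from the
# equation of state H = 2*a*(T - Tc)*M + 4*b*M**3, solved numerically for M.
a, b, Tc = 1.0, 1.0, 100.0           # invented material constants
H = np.linspace(0.5, 5.0, 20)        # applied fields (arbitrary units)

def magnetization(h, T):
    # largest real root of the cubic equation of state (field-aligned branch)
    roots = np.roots([4 * b, 0.0, 2 * a * (T - Tc), -h])
    return max(r.real for r in roots if abs(r.imag) < 1e-6)

for T in (95.0, 100.0, 105.0):
    M = np.array([magnetization(h, T) for h in H])
    plt.plot(H / M, M**2, marker="o", label=f"T = {T}")

plt.xlabel("H / M")
plt.ylabel("M$^2$")
plt.legend()
plt.show()   # the T = Tc isotherm is the straight line passing through the origin
```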
However, the equation also allows for other values of β and γ, since real ferromagnets often do not have critical exponents exactly consistent with simple mean-field ferromagnetism.\nThe use of the correct critical exponents for a given system can help give straight lines on the Arrott plot, but not in cases such as low magnetic fields and amorphous materials. While mean field theory is a more reasonable model for ferromagnets at higher magnetic fields, the presence of more than one magnetic domain in real magnets means that, especially at low magnetic fields, the experimentally measured macroscopic magnetic field (which is an average over the whole sample) will not be a reasonable way to determine the local magnetic field (which is felt by a single atom). Therefore, magnetization data taken at low magnetic fields should be ignored for the purposes of Arrott plots.", "According to the Landau theory applied to the mean field picture for magnetism, the free energy of a ferromagnetic material close to a phase transition can be written as:\nF(M) = −HM + a(T − T_c)M² + bM⁴,\nwhere M, the magnetization, is the order parameter, H is the applied magnetic field, T_c is the critical temperature, and a and b are material constants.\nClose to the phase transition, minimizing this free energy with respect to M (setting dF/dM = −H + 2a(T − T_c)M + 4bM³ = 0) gives a relation for the magnetization order parameter:\nM² = (1/(4b))(H/M) − (aT_c/(2b))t,\nwhere t = (T − T_c)/T_c is a dimensionless measure of the temperature.\nThus in a graph plotting M² vs. H/M for various temperatures, the line without an intercept corresponds to the dependence at the critical temperature. Thus, along with providing evidence for the existence of a ferromagnetic phase, the Arrott plot can also be used to determine the critical temperature for the phase transition.", "Magnetic phase transitions can be either first order or second order. The nature of the transition can be inferred from the Arrott plot based on the slope of the magnetic isotherms. If the lines all have positive slope, the phase transition is second order, whereas if there are negative-slope lines, the phase transition is first order. This condition is known as the Banerjee criterion.\nThe Banerjee criterion is not always accurate for evaluating inhomogeneous ferromagnets, since the slopes can all be positive even when the transition is first-order.", "Barometric light is a name for the light that is emitted by a mercury-filled barometer tube when the tube is shaken. The discovery of this phenomenon in 1675 revealed the possibility of electric lighting.", "The earliest barometers were simply glass tubes that were closed at one end and filled with mercury. The tube was then inverted and its open end was submerged in a cup of mercury. The mercury then drained out of the tube until the pressure of the mercury in the tube — as measured at the surface of the mercury in the cup — equaled the atmosphere's pressure on the same surface.\nIn order to produce barometric light, the glass tube must be very clean and the mercury must be pure. If the barometer is then shaken, a band of light will appear on the glass at the meniscus of the mercury whenever the mercury moves downward.\nWhen mercury contacts glass, the mercury transfers electrons to the glass. 
Whenever the mercury pulls free of the glass, these electrons are released from the glass into the surroundings, where they collide with gas molecules, causing the gas to glow — just as the collision of electrons and neon atoms causes a neon lamp to glow.", "Barometric light was first observed in 1675 by the French astronomer Jean Picard: \"Towards the year 1676, Monsieur Picard was transporting his barometer from the Observatory to Port Saint Michel during the night, [when] he noticed a light in a part of the tube where the mercury was moving; this phenomenon having surprised him, he immediately reported it to the sçavans, … \" The Swiss mathematician Johann Bernoulli studied the phenomenon while teaching at Groningen, the Netherlands, and in 1700 he demonstrated the phenomenon to the French Academy. After learning of the phenomenon from Bernoulli, the Englishman Francis Hauksbee investigated the subject extensively. Hauksbee showed that a complete vacuum was not essential to the phenomenon, for the same glow was apparent when mercury was shaken with air only partially rarefied, and that even without using the barometric tube, bulbs containing low-pressure gases could be made to glow via externally applied static electricity. The phenomenon was also studied by contemporaries of Hauksbee, including the Frenchman Pierre Polinière and a French mathematician, Gabriel-Philippe de la Hire, and subsequently by many others.", "BMT is the first company registered in the European Feed Materials Register for the production and sale of laboratory-grown meat for pet food, specifically cat and dog food. BMT claims to be the only entity in the world that can produce and sell this product for the pet food market. By 2024, BMT plans to make several metric tons per day of laboratory-grown meat meant for pet food.", "BMT is developing a technology to produce cultured meat by propagating animal cells without using fetal bovine serum, ideally with growth factors from its own production. BMT claims that its final technology will allow its operators to produce and offer the product at prices affordable to consumers.\nIn March 2023, the company said that the first cultured meat product launched on the market may not be for human consumption but rather for pet food. However, BMT states that the creation of meat meant for human consumption is one of its goals.", "Bene Meat Technologies a.s. (BMT) is a Czech biotechnology start-up focused on research and development of technology for the production of cultivated meat on an industrial scale. It cooperates with scientific institutions and companies in the Czech Republic and abroad. The company has its laboratories on the first floor of the Cube building in Vokovice, Prague.", "Bene Meat Technologies a.s. was founded in 2020 by Mgr. Roman Kříž, who is the project leader. The main biologist of the scientific team is Jiří Janoušek, and one of the external scientists involved in the ongoing research is the immunologist Prof. RNDr. Jan Černý, Ph.D. In 2022, the BMT research team consisted of 70 scientists.", "Binary acids or hydracids are certain molecular compounds in which hydrogen is combined with a second nonmetallic element.\nExamples:\n*HF\n*H₂S\n*HCl\n*HBr\n*HI\n*HAt\nTheir strengths depend on the solvation of the initial acid, the H-X bond energy, the electron affinity of X, and the solvation energy of X. Observed trends in acidity correlate with bond energies: the weaker the H-X bond, the stronger the acid. 
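The correlation can be sketched with a few approximate, textbook-style numbers; the bond enthalpies and pKa values below are illustrative assumptions, not figures from the source.

```python
# Weaker H-X bond -> stronger binary acid (lower pKa).  Values are
# approximate textbook figures used only to illustrate the trend;
# HF is the familiar outlier that remains a weak acid in water.
hx_acids = {
    # acid: (H-X bond enthalpy in kJ/mol, approximate aqueous pKa)
    "HF":  (565,  3.2),
    "HCl": (431, -6.3),
    "HBr": (366, -8.7),
    "HI":  (299, -9.3),
}

for acid, (bond_enthalpy, pka) in sorted(hx_acids.items(),
                                         key=lambda kv: -kv[1][0]):
    print(f"{acid}: D(H-X) ≈ {bond_enthalpy} kJ/mol, pKa ≈ {pka}")
# Printed from the strongest bond (HF, weakest acid of the series) to the
# weakest bond (HI, strongest acid), matching the correlation stated above.
```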
For example, there is a weak bond between hydrogen and iodine in hydroiodic acid, making it a very strong acid.\nBinary acids are one of two classes of acids, the second being the oxyacids, which consist of a hydrogen, oxygen, and some other element.\nThe names of binary acids begin with hydro- followed by the name of the other element modified to end with -ic. \nSome texts contrast two types of acids. 1. binary acids or hydracids and 2. oxyacids that contain oxygen.", "The most promising results come from recellularized rat hearts. After only 8 days of maturation, the heart models were stimulated with an electrical signal to provide pacing. The heart models showed a unified contraction with a force equivalent to ~2% of a normal rat heart or ~25% of that of a 16-week-old human heart.\nAlthough far from use in a clinical setting, there have been great advances in the field of bioartificial heart generation. The use of decellularization and recellularization processes, has led to the production of a three dimensional matrix that promotes cellular growth; the repopulation of the matrix containing appropriate cell composition; and the bioengineering of organs demonstrating functionality (limited) and responsiveness to stimuli. This area shows immense promise and with future research may redefine treatment of end stage heart failure.", "The preferred method to remove all cellular components from a heart is perfusion decellularization. This technique involves perfusing the heart with detergents such as SDS and Triton X-100 dissolved in distilled water.\nThe remaining ECM is composed of structural elements such as collagen, laminin, elastin and fibronectin. The ECM scaffold promotes proper cellular proliferation and differentiation, vascular development, as well as providing mechanical support for cellular growth. Because minimal DNA material remains after the decellularization process, the engineered organ is biocompatible with the transplant recipient, regardless of species. Unlike traditional transplant options, recellularized hearts are less immunogenic and have a decreased risk of rejection.\nOnce the decellularized heart has been sterilized to remove any pathogens, the recellularization process can occur. Multipotent cardiovascular progenitors are then added to the decellularized heart and with additional exogenous growth factors, are stimulated to differentiate into cardiomyocytes, smooth muscle cells and endothelial cells.", "Heart failure is one of the leading causes of death. In 2013, an estimate of 17.3 million deaths per year out of the 54 million total deaths was caused by cardiovascular diseases, meaning that 31.5% of the world's total death was caused by this. Often, the only viable treatment for end-stage heart failure is organ transplantation. Currently organ supply is insufficient to meet the demand, which presents a large limitation in an end-stage treatment plan. A theoretical alternative to traditional transplantation processes is the engineering of personalized bioartificial hearts. Researchers have had many successful advances in the engineering of cardiovascular tissue and have looked towards using decellularized and recellularized cadaveric hearts in order to create a functional organ. 
Decellularization-recellularization involves using a cadaveric heart, removing the cellular contents while maintaining the protein matrix (decellularization), and subsequently facilitating growth of appropriate cardiovascular tissue inside the remaining matrix (recellularization).\nOver the past years, researchers have identified populations of cardiac stem cells that reside in the adult human heart. This discovery sparked the idea of regenerating the heart cells by taking the stem cells inside the heart and reprogramming them into cardiac tissues. The key properties of these stem cells are self-renewal, clonogenicity, and the ability to differentiate into cardiomyocytes, endothelial cells and vascular smooth muscle cells. These stem cells can become myocytes, which stabilize the topography of the intercellular components and help control the size and shape of the heart, as well as vascular cells, which serve as a cell reservoir for the turnover and the maintenance of the mesenchymal tissues. However, in vivo studies have demonstrated that the regenerative ability of implanted cardiac stem cells lies in the associated macrophage-mediated immune response and concomitant fibroblast-mediated wound healing and not in their functionality, since these effects were observed for both live and dead stem cells.", "A bioartificial heart is an engineered heart that contains the extracellular structure of a decellularized heart and cellular components from a different source. Such hearts are of particular interest for therapy as well as research into heart disease. The first bioartificial hearts were created in 2008 using cadaveric rat hearts. In 2014, human-sized bioartificial pig hearts were constructed. Bioartificial hearts have not yet been developed for clinical use, although the recellularization of porcine hearts with human cells opens the door to xenotransplantation.", "The journal is abstracted and indexed in:\nAccording to the Journal Citation Reports, the journal has a 2020 impact factor of 3.715.", "Biomedical Materials is a peer-reviewed medical journal that covers research on tissue engineering and regenerative medicine. The editors-in-chief are Myron Spector (Harvard Medical School and VA Boston Healthcare System) and Joyce Wong (Boston University).", "A Woods lamp is useful in diagnosing conditions such as tuberous sclerosis and erythrasma (caused by Corynebacterium minutissimum, see above). Additionally, detection of porphyria cutanea tarda can sometimes be made when urine turns pink upon illumination with a Woods lamp. Woods lamps have also been used to differentiate hypopigmentation from depigmentation such as with vitiligo. A vitiligo patient's skin will appear yellow-green or blue under the Wood's lamp. Its use in detecting melanoma has been reported.", "A blacklight, also called a UV-A light, Wood's lamp, or ultraviolet light, is a lamp that emits long-wave (UV-A) ultraviolet light and very little visible light. One type of lamp has a violet filter material, either on the bulb or in a separate glass filter in the lamp housing, which blocks most visible light and allows through UV, so the lamp has a dim violet glow when operating. Blacklight lamps which have this filter have a lighting industry designation that includes the letters \"BLB\". This stands for \"blacklight blue\". A second type of lamp produces ultraviolet but does not have the filter material, so it produces more visible light and has a blue color when operating. 
These tubes are made for use in \"bug zapper\" insect traps, and are identified by the industry designation \"BL\". This stands for \"blacklight\".\nBlacklight sources may be specially designed fluorescent lamps, mercury-vapor lamps, light-emitting diodes (LEDs), lasers, or incandescent lamps. In medicine, forensics, and some other scientific fields, such a light source is referred to as a Woods lamp, named after Robert Williams Wood, who invented the original Woods glass UV filters.\nAlthough many other types of lamp emit ultraviolet light with visible light, black lights are essential when UV-A light without visible light is needed, particularly in observing fluorescence, the colored glow that many substances emit when exposed to UV. Black lights are employed for decorative and artistic lighting effects, diagnostic and therapeutic uses in medicine, the detection of substances tagged with fluorescent dyes, rock-hunting, scorpion-hunting, the detection of counterfeit money, the curing of plastic resins, attracting insects and the detection of refrigerant leaks affecting refrigerators and air conditioning systems. Strong sources of long-wave ultraviolet light are used in tanning beds.", "One of the innovations for night and all-weather flying used by the US, UK, Japan and Germany during World War II was the use of UV interior lighting to illuminate the instrument panel, giving a safer alternative to the radium-painted instrument faces and pointers, and an intensity that could be varied easily and without visible illumination that would give away an aircraft's position. This went so far as to include the printing of charts that were marked in UV-fluorescent inks, and the provision of UV-visible pencils and slide rules such as the E6B.\nThey may also be used to test for LSD, which fluoresces under black light while common substitutes such as 25I-NBOMe do not.\nStrong sources of long-wave ultraviolet light are used in tanning beds.", "Although black lights produce light in the UV range, their spectrum is mostly confined to the longwave UVA region, that is, UV radiation nearest in wavelength to visible light, with low frequency and therefore relatively low energy. While low, there is still some power of a conventional black light in the UVB range. UVA is the safest of the three spectra of UV light, although high exposure to UVA has been linked to the development of skin cancer in humans. The relatively low energy of UVA light does not cause sunburn. UVA is capable of causing damage to collagen fibers, however, so it does have the potential to accelerate skin aging and cause wrinkles. UVA can also destroy vitamin A in the skin.\nUVA light has been shown to cause DNA damage, but not directly, like UVB and UVC. Due to its longer wavelength, it is absorbed less and reaches deeper into skin layers, where it produces reactive chemical intermediates such as hydroxyl and oxygen radicals, which in turn can damage DNA and result in a risk of melanoma. 
The weak output of black lights, however, is not considered sufficient to cause DNA damage or cellular mutations in the way that direct summer sunlight can, although there are reports that overexposure to the type of UV radiation used for creating artificial suntans on sunbeds can cause DNA damage, photoaging (damage to the skin from prolonged exposure to sunlight), toughening of the skin, suppression of the immune system, cataract formation and skin cancer.\nUV-A can have negative effects on eyes in both the short-term and long-term.", "Ultraviolet radiation is invisible to the human eye, but illuminating certain materials with UV radiation causes the emission of visible light, causing these substances to glow with various colors. This is called fluorescence, and has many practical uses. Black lights are required to observe fluorescence, since other types of ultraviolet lamps emit visible light which drowns out the dim fluorescent glow.", "A Woods lamp is a diagnostic tool used in dermatology by which ultraviolet light is shone (at a wavelength of approximately 365 nanometers) onto the skin of the patient; a technician then observes any subsequent fluorescence. For example, porphyrins—associated with some skin diseases—will fluoresce pink. Though the technique for producing a source of ultraviolet light was devised by Robert Williams Wood in 1903 using \"Woods glass\", it was in 1925 that the technique was used in dermatology by Margarot and Deveze for the detection of fungal infection of hair. It has many uses, both in distinguishing fluorescent conditions from other conditions and in locating the precise boundaries of the condition.", "It is also helpful in diagnosing:\n* Fungal infections. Some forms of tinea, such as Trichophyton tonsurans, do not fluoresce.\n* Bacterial infections\n**Corynebacterium minutissimum is coral red\n**Pseudomonas is yellow-green\n* Cutibacterium acnes, a bacterium involved in acne causation, exhibits an orange glow under a Wood's lamp.", "A Woods lamp may be used to rapidly assess whether an individual is suffering from ethylene glycol poisoning as a consequence of antifreeze ingestion. Manufacturers of ethylene glycol-containing antifreezes commonly add fluorescein, which causes the patients urine to fluoresce under Wood's lamp.", "A black light may also be formed by simply using a UV filter coating such as Wood's glass on the envelope of a common incandescent bulb. This was the method that was used to create the very first black light sources. Although incandescent black light bulbs are a cheaper alternative to fluorescent tubes, they are exceptionally inefficient at producing UV light since most of the light emitted by the filament is visible light which must be blocked. Due to its black body spectrum, an incandescent light radiates less than 0.1% of its energy as UV light. Incandescent UV bulbs, due to the necessary absorption of the visible light, become very hot during use. This heat is, in fact, encouraged in such bulbs, since a hotter filament increases the proportion of UVA in the black-body radiation emitted. This high running-temperature drastically reduces the life of the lamp, however, from a typical 1,000 hours to around 100 hours.", "UV light can be used to harden particular glues, resins and inks by causing a photochemical reaction inside those substances. This process of hardening is called ‘curing’. UV curing is adaptable to printing, coating, decorating, stereolithography, and in the assembly of a variety of products and materials. 
In comparison to other technologies, curing with UV energy may be considered a low-temperature process, a high-speed process, and is a solventless process, as cure occurs via direct polymerization rather than by evaporation. Originally introduced in the 1960s, this technology has streamlined and increased automation in many industries in the manufacturing sector. A primary advantage of curing with ultraviolet light is the speed at which a material can be processed. Speeding up the curing or drying step in a process can reduce flaws and errors by decreasing time that an ink or coating spends wet. This can increase the quality of a finished item, and potentially allow for greater consistency. Another benefit to decreasing manufacturing time is that less space needs to be devoted to storing items which can not be used until the drying step is finished.\nBecause UV energy has unique interactions with many different materials, UV curing allows for the creation of products with characteristics not achievable via other means. This has led to UV curing becoming fundamental in many fields of manufacturing and technology, where changes in strength, hardness, durability, chemical resistance, and many other properties are required.", "Fluorescent black light tubes are typically made in the same fashion as normal fluorescent tubes except that a phosphor that emits UVA light instead of visible white light is used on the inside of the tube. The type most commonly used for black lights, designated blacklight blue or \"BLB\" by the industry, has a dark blue filter coating on the tube, which filters out most visible light, so that fluorescence effects can be observed. These tubes have a dim violet glow when operating. They should not be confused with \"blacklight\" or \"BL\" tubes, which have no filter coating, and have a brighter blue color. These are made for use in \"bug zapper\" insect traps where the emission of visible light does not interfere with the performance of the product. The phosphor typically used for a near 368 to 371 nanometer emission peak is either europium-doped strontium fluoroborate (:) or europium-doped strontium borate (:) while the phosphor used to produce a peak around 350 to 353 nanometres is lead-doped barium silicate (:). \"Blacklight blue\" lamps peak at 365 nm.\nManufacturers use different numbering systems for black light tubes. Philips uses one system which is becoming outdated (2010), while the (German) Osram system is becoming dominant outside North America. The following table lists the tubes generating blue, UVA and UVB, in order of decreasing wavelength of the most intense peak. Approximate phosphor compositions, major manufacturer's type numbers and some uses are given as an overview of the types available. \"Peak\" position is approximated to the nearest 10 nm. \"Width\" is the measure between points on the shoulders of the peak that represent 50% intensity.", "UV-A presents a potential hazard when eyes and skin are exposed, especially to high power sources. According to the World Health Organization, UV-A is responsible for the initial tanning of skin and it contributes to skin ageing and wrinkling. UV-A may also contribute to the progression of skin cancers. Additionally, UV-A can have negative effects on eyes in both the short-term and long-term.", "Bili light. 
A type of phototherapy that uses blue light with a range of 420–470 nm, used to treat neonatal jaundice.", "Blacklights are a common tool for rock-hunting and identification of minerals by their fluorescence. The most common minerals and rocks that glow under UV light are fluorite, calcite, aragonite, opal, apatite, chalcedony, corundum (ruby and sapphire), scheelite, selenite, smithsonite, sphalerite, sodalite. The first person to observe fluorescence in minerals was George Stokes in 1852. He noted the ability of fluorite to produce a blue glow when illuminated with ultraviolet light and called this phenomenon “fluorescence” after the mineral fluorite. Lamps used to visualise seams of fluorite and other fluorescent minerals are commonly used in mines but they tend to be on an industrial scale. The lamps need to be short wavelength to be useful for this purpose and of scientific grade. UVP range of hand held UV lamps are ideal for this purpose and are used by Geologists to identify the best sources of fluorite in mines or potential new mines. Some transparent selenite crystals exhibit an “hourglass” pattern under UV light that is not visible in natural light. These crystals are also phosphorescent. Limestone, marble, and travertine can glow because of calcite presence. Granite, syenite, and granitic pegmatite rocks can also glow.", "It is also used to illuminate pictures painted with fluorescent colors, particularly on black velvet, which intensifies the illusion of self-illumination. The use of such materials, often in the form of tiles viewed in a sensory room under UV light, is common in the United Kingdom for the education of students with profound and multiple learning difficulties. Such fluorescence from certain textile fibers, especially those bearing optical brightener residues, can also be used for recreational effect, as seen, for example, in the opening credits of the James Bond film A View to a Kill. Black light puppetry is also performed in a black light theater.", "Black light is commonly used to authenticate oil paintings, antiques and banknotes. Black lights can be used to differentiate real currency from counterfeit notes because, in many countries, legal banknotes have fluorescent symbols on them that only show under a black light. In addition, the paper used for printing money does not contain any of the brightening agents which cause commercially available papers to fluoresce under black light. Both of these features make illegal notes easier to detect and more difficult to successfully counterfeit. The same security features can be applied to identification cards such as passports or driver's licenses.\nOther security applications include the use of pens containing a fluorescent ink, generally with a soft tip, that can be used to \"invisibly\" mark items. If the objects that are so marked are subsequently stolen, a black light can be used to search for these security markings. 
At some amusement parks, nightclubs and at other, day-long (or night-long) events, a fluorescent mark is rubber stamped onto the wrist of a guest who can then exercise the option of leaving and being able to return again without paying another admission fee.", "Fluorescent materials are also very widely used in numerous applications in molecular biology, often as \"tags\" which bind themselves to a substance of interest (for example, DNA), so allowing their visualization.\nThousands of moth and insect collectors all over the world use various types of black lights to attract moth and insect specimens for photography and collecting. It is one of the preferred light sources for attracting insects and moths at night. Black light can also be used to see animal excreta such as urine and vomit that is not always visible to the naked eye.", "Black light is used extensively in non-destructive testing. Fluorescing fluids are applied to metal structures and illuminated with a black light which allows cracks and other weaknesses in the material to be easily detected.\nIn addition, if a leak is suspected in a refrigerator or an air conditioning system, a UV tracer dye can be injected into the system along with the compressor lubricant oil and refrigerant mixture. The system is then run in order to circulate the dye across the piping and components and then the system is examined with a blacklight lamp. Any evidence of fluorescent dye then pinpoints the leaking part which needs replacement.", "Ultraviolet light can be generated by some light-emitting diodes, but wavelengths shorter than 380 nm are uncommon, and the emission peaks are broad, so only the very lowest energy UV photons are emitted, within predominant not visible light.", "Another class of UV fluorescent bulb is designed for use in \"bug zapper\" flying insect traps. Insects are attracted to the UV light, which they are able to see, and are then electrocuted by the device. These bulbs use the same UV-A emitting phosphor blend as the filtered blacklight, but since they do not need to suppress visible light output, they do not use a purple filter material in the bulb. Plain glass blocks out less of the visible mercury emission spectrum, making them appear light blue-violet to the naked eye. These lamps are referred to by the designation \"blacklight\" or \"BL\" in some North American lighting catalogs. These types are not suitable for applications which require the low visible light output of \"BLB\" tubes lamps.", "High power mercury vapor black light lamps are made in power ratings of 100 to 1,000 watts. These do not use phosphors, but rely on the intensified and slightly broadened 350&ndash;375 nm spectral line of mercury from high pressure discharge at between , depending upon the specific type. These lamps use envelopes of Woods glass or similar optical filter coatings to block out all the visible light and also the short wavelength (UVC) lines of mercury at 184.4 and 253.7 nm, which are harmful to the eyes and skin. A few other spectral lines, falling within the pass band of the Woods glass between 300 and 400 nm, contribute to the output.\nThese lamps are used mainly for theatrical purposes and concert displays. They are more efficient UVA producers per unit of power consumption than fluorescent tubes.", "A blastoid is an embryoid, a stem cell-based embryo model which, morphologically and transcriptionally resembles the early, pre-implantation, mammalian conceptus, called the blastocyst. 
The first blastoids were created by the Nicolas Rivron laboratory by combining mouse embryonic stem cells and mouse trophoblast stem cells. Upon in vitro development, blastoids generate analogs of the primitive endoderm cells, thus comprising analogs of the three founding cell types of the conceptus (epiblast, trophoblast and primitive endoderm), and recapitulate aspects of implantation on being introduced into the uterus of a compatible female. Mouse blastoids have not shown the capacity to support the development of a foetus and are thus generally not considered as an embryo but rather as a model. As compared to other stem cell-based embryo models (e.g., Gastruloids), blastoids model the preimplantation stage and the integrated development of the conceptus including the embryo proper and the two extraembryonic tissues (trophectoderm and primitive endoderm). The blastoid is a model system for the study of mammalian development and disease. It might be useful for the identification of therapeutic targets and preclinical modelling.", "The tritium breeding blanket (also known as a fusion blanket, lithium blanket or simply blanket), is a key part of many proposed fusion reactor designs. It serves several purposes; primarily it is to produce (or \"breed\") further tritium fuel for the nuclear fusion reaction, which owing to the scarcity of tritium would not be available in sufficient quantities, through the reaction of neutrons with lithium in the blanket. The blanket may also act as a cooling mechanism, absorbing the energy from the neutrons produced by the reaction between deuterium and tritium (\"D-T\"), and further serves as shielding, preventing the high-energy neutrons from escaping to the area outside the reactor and protecting the more radiation-susceptible portions, such as ohmic or superconducting magnets, from damage.\nOf these three duties, it is only the breeding portion that cannot be replaced by other means. For instance, a large quantity of water makes an excellent cooling system and neutron shield, as in the case of a conventional nuclear reactor. However, tritium is not a naturally occurring resource, and thus is difficult to obtain in sufficient quantity to run a reactor through other means, so if commercial fusion using the D-T cycle is to be achieved, successful breeding of the tritium in commercial quantities is a requirement.\nITER runs a major effort in blanket design and will test a number of potential solutions. Concepts for the breeder blanket include helium-cooled lithium lead (HCLL), helium-cooled pebble bed (HCPB), and water-cooled lithium lead (WCLL) methods. Six different tritium breeding systems, known as Test Blanket Modules (TBM) wil be tested in ITER.\nSome breeding blanket designs are based on lithium containing ceramics, with a focus on lithium titanate and lithium orthosilicate. These materials, mostly in a pebble form, are used to produce and extract tritium and helium; must withstand high mechanical and thermal loads; and should not become excessively radioactive upon completion of their useful service life.\nTo date no large-scale breeding system has been attempted, and it is an open question whether such a system is possible to create.\nA fast breeder reactor uses a blanket of uranium or thorium.", "Brian G. Wowk is a Canadian medical physicist and cryobiologist known for the discovery and development of synthetic molecules that mimic the activity of natural antifreeze proteins in cryopreservation applications, sometimes called \"ice blockers\". 
As a senior scientist at 21st Century Medicine, Inc., he was a co-developer with Greg Fahy of key technologies enabling cryopreservation of large and complex tissues, including the first successful vitrification and transplantation of a mammalian organ (kidney). Wowk is also known for early theoretical work on future applications of molecular nanotechnology, especially cryonics, nanomedicine, and optics. In the early 1990s he wrote that nanotechnology would revolutionize optics, making possible virtual reality display systems optically indistinguishable from real scenery as in the fictitious Holodeck of Star Trek. These systems were described by Wowk in the chapter \"Phased Array Optics\" in the 1996 anthology Nanotechnology: Molecular Speculations on Global Abundance, and highlighted in the September 1998 Technology Watch section of Popular Mechanics magazine.", "*1. [https://web.archive.org/web/20060118094514/http://mitpress.mit.edu/catalog/item/default.asp?ttype=2&tid=5039 Nanotechnology: Molecular Speculations on Global Abundance]\n*2. [https://www.amazon.com/gp/product/354067215X Functional MRI]", "He obtained his undergraduate and graduate degrees from the University of Manitoba in Winnipeg, Canada. Dr. Wowk obtained his PhD in physics in 1997. His graduate studies included work in online portal imaging for radiotherapy at the Manitoba Cancer Treatment and Research Foundation (now Cancer Care Manitoba), and work on artifact reduction for functional magnetic resonance imaging at the National Research Council of Canada. His work in the latter field is cited by several textbooks, including\nFunctional MRI, which includes an image he obtained of magnetic field changes inside the human body caused by respiration.", "Polyketones, thermoplastic polymers, are formed by the copolymerisation of carbon monoxide and one or more alkenes (typically ethylene with propylene). The process utilises a palladium(II) catalyst with a bidentate ligand like 2,2′-bipyridine or 1,10-phenanthroline (phen) with a non-coordinating BARF counterion, such as [(phen)Pd(CH₃)(CO)]BAr′₄. The preparation of the catalyst involves the reaction of a dimethyl palladium complex with Brookhart's acid in acetonitrile with loss of methane, and the catalytic species is formed by uptake of carbon monoxide to displace acetonitrile.\n:[(Et₂O)₂H]BAr′₄ + [(phen)Pd(CH₃)₂] + MeCN → [(phen)Pd(CH₃)(MeCN)]BAr′₄ + 2 Et₂O + CH₄\n:[(phen)Pd(CH₃)(MeCN)]BAr′₄ + CO → [(phen)Pd(CH₃)(CO)]BAr′₄ + MeCN\nThe mechanism involves migratory insertion whereby the polymer chain is bound to the catalytic centre and grows by the sequential insertion of carbon monoxide and the alkene between the palladium atom and the existing chain. Defects occur when insertions do not alternate &ndash; that is, a carbon monoxide insertion follows a carbon monoxide insertion or an alkene insertion follows an alkene insertion. This catalyst produces a very low rate of defects due to the difference in Gibbs energy of activation of each insertion &ndash; the energy barrier to inserting an alkene immediately following an alkene insertion is ~12 kJ mol⁻¹ higher than the barrier to carbon monoxide insertion. 
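To put the ~12 kJ mol⁻¹ figure in perspective, a rough transition-state-theory estimate of the relative insertion rates can be sketched as follows; the temperatures are assumptions chosen only for illustration.

```python
import math

# Relative rate of the "wrong" alkene-after-alkene insertion versus the
# competing CO insertion, estimated from the quoted difference in Gibbs
# energies of activation via k_alkene / k_CO ~ exp(-ΔΔG‡ / (R*T)).
R = 8.314                # gas constant, J mol^-1 K^-1
delta_delta_G = 12_000   # J mol^-1, the ~12 kJ/mol barrier difference above

for T in (298.0, 350.0):     # room temperature and a warmer process temperature
    ratio = math.exp(-delta_delta_G / (R * T))
    print(f"T = {T:.0f} K: alkene/CO insertion rate ratio ≈ {ratio:.3f}")
# Roughly 0.8% at 298 K and 1.6% at 350 K, which is why the defect rate in
# the resulting polyketone chain is low.
```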
Use of monodentate phosphine ligands also leads to undesirable side-products, but bidentate phosphine ligands like 1,3-bis(diphenylphosphino)propane have been used industrially.", "Traditional weakly coordinating anions, such as perchlorate, tetrafluoroborate, and hexafluorophosphate, will nonetheless coordinate to very electrophilic cations, making these counterions unsuitable for some complexes. The highly reactive species [Cp₂Zr(CH₃)]⁺, for example, has been reported to abstract F⁻ from PF₆⁻. Starting in the 1980s, new types of weakly coordinating anions began to be developed. BAr′₄⁻ anions are used as counterions for highly electrophilic, cationic transition metal species, as they are very weakly coordinating and unreactive towards electrophilic attack. One common method of generating these cationic species is via protonolysis of a dialkyl complex or an olefin complex. For example, an electrophilic palladium catalyst, [(2,2′-bipyridine)Pd(CH₃)(CH₃CN)][BAr′₄], is prepared by protonating the dimethyl complex with Brookhart's acid. This electrophilic, cationic palladium species is used for the polymerization of olefins with carbon monoxide to polyketones in aprotic solvents.", "The acid crystallizes as a white, hygroscopic crystalline solid. NMR and elemental analysis showed that the crystal contains two equivalents of diethyl ether. In solution, the compound slowly degrades to m-C₆H₄(CF₃)₂ and BAr′₃.\n[H(OEt₂)₂][B(C₆F₅)₄] is a related compound with a slightly different weakly coordinating anion; it was first reported in 2000. An X-ray crystal structure of that compound was obtained, showing the acidic proton coordinated by both ethereal oxygen centers, although the crystal was not good enough to determine whether the proton is located symmetrically or unsymmetrically between the two.", "Brookhart's acid is the salt of the diethyl ether oxonium ion and tetrakis[3,5-bis(trifluoromethyl)phenyl]borate (BAr′₄⁻). It is a colorless solid, used as a strong acid. The compound was first reported by Volpe, Grant, and Brookhart in 1992.", "This compound is prepared by treatment of NaBAr′₄ in diethyl ether (Et₂O) with hydrogen chloride:\n: NaBAr′₄ + HCl + 2 Et₂O → [H(OEt₂)₂][BAr′₄] + NaCl\nNaBAr′₄ is soluble in diethyl ether, whereas sodium chloride is not. Precipitation of sodium chloride thus drives the formation of the oxonium acid compound, which is isolable as a solid.", "Thermonuclear weapons, also known as hydrogen bombs, are nuclear weapons that use energy released by a burning plasma's fusion reactions to produce part of their explosive yield. This is in contrast to pure-fission weapons, which produce all of their yield from a neutronic nuclear fission reaction. The first thermonuclear explosion, and thus the first man-made burning plasma, was the Ivy Mike test carried out by the United States in 1952. All high-yield nuclear weapons today are thermonuclear weapons.", "Multiple tokamaks are currently under construction with the goal of becoming the first magnetically confined burning plasma experiment.\nITER, being built near Cadarache in France, has the stated goal of allowing fusion scientists and engineers to investigate the physics, engineering, and technologies associated with a self-heating plasma. Issues to be explored include understanding and controlling a strongly coupled, self-organized plasma; management of heat and particles that reach plasma-facing surfaces; demonstration of fuel breeding technology; and the physics of energetic particles. 
These issues are relevant to ITER's broader goal of using self-heating plasma reactions to become the first fusion energy device that produces more power than it consumes, a major step toward commercial fusion power production. To reach fusion-relevant temperatures, the ITER tokamak will heat plasmas using three methods: ohmic heating (running electric current through the plasma), neutral particle beam injection, and high-frequency electromagnetic radiation.\nSPARC, being built in Devens in the United States, plans to verify the technology and physics required to build a power plant based on the ARC fusion power plant concept. SPARC is designed to achieve this with margin in excess of breakeven and may be capable of achieving up to 140 MW of fusion power for 10-second bursts despite its relatively compact size. SPARC's high-temperature superconductor magnet is intended to create much stronger magnetic fields, allowing it to be much smaller than similar tokamaks.", "In plasma physics, a burning plasma is one in which most of the heating comes from fusion reactions involving thermal plasma ions. The Sun and similar stars are a burning plasma, and in 2020 the National Ignition Facility achieved burning plasma. A closely related concept is that of an ignited plasma, in which all of the heating comes from fusion reactions.", "The NIF burning plasma, despite not occurring in an energy context, has been characterised as a major milestone in the race towards nuclear fusion power, with the perception that it could bring with it a better planet. The first controlled burning plasma has been characterized as a critical juncture on the same level as the Trinity Test, with enormous implications for fusion for energy (fusion power), including the weaponization of fusion power, mainly for electricity for directed-energy weapons, as well as fusion for peacebuilding – one of the main tasks of ITER.", "In the Sun and other similar stars, those fusion reactions involve hydrogen ions. The high temperatures needed to sustain fusion reactions are maintained by a self-heating process in which energy from the fusion reaction heats the thermal plasma ions via particle collisions. A plasma enters what scientists call the burning plasma regime when the self-heating power exceeds any external heating.\nThe Sun is a burning plasma that has reached fusion ignition, meaning the Sun's plasma temperature is maintained solely by energy released from fusion. The Sun has been burning hydrogen for 4.5 billion years and is about halfway through its life cycle.", "It was announced in 2022 that a burning plasma had been achieved at the National Ignition Facility, a large laser-based inertial confinement fusion research device, located at the Lawrence Livermore National Laboratory in Livermore, California. The burning plasma created was sustained for approximately 100 trillionths of a second, and the process consumed more energy than it created by a factor of approximately ten. NIF achieved ignition on December 5, 2022, net energy release from a burning plasma fusion reaction.", "Candoluminescence is the light given off by certain materials at elevated temperatures (usually when exposed to a flame) that has an intensity at some wavelengths which can, through chemical action in flames, be higher than the blackbody emission expected from incandescence at the same temperature. 
The phenomenon is notable in certain transition-metal and rare-earth oxide materials (ceramics) such as zinc oxide, cerium(IV) oxide and thorium dioxide.", "The existence of the candoluminescence phenomenon and the underlying mechanism have been the subject of extensive research and debate since the first reports of it in the 1800s. The topic was of particular interest before the introduction of electric lighting, when most artificial light was produced by fuel combustion. The main alternative explanation for candoluminescence is that it is simply \"selective\" thermal emission in which the material has a very high emissivity in the visible spectrum and a very weak emissivity in the part of the spectrum where the blackbody thermal emission would be highest; in such a system, the emitting material will tend to retain a higher temperature because of the lack of invisible radiative cooling. In this scenario, observations of candoluminescence would simply have been underestimating the temperature of the emitting species. Several authors in the 1950s came to the view that candoluminescence was simply an instance of selective thermal emission, and one of the most prominent researchers in the field, V. A. Sokolov, once advocated eliminating the term from the literature in his noted 1952 review article, only to revise his view several years later. The modern scientific consensus is that candoluminescence does occur, that it is not always simply due to selective thermal emission, but the mechanisms vary depending on the materials involved and the method of heating, particularly the type of flame and the position of the material relative to the flame.", "Early in the 20th century, there was vigorous debate over whether candoluminescence is required to explain the behavior of Welsbach gas mantles or limelight. One counterargument was that since thorium oxide (for example) has much lower emissivity in the near infrared region than the shorter wavelength parts of the visible spectrum, it should not be strongly cooled by infrared radiation, and thus a thorium-oxide mantle can get closer to the flame temperature than can a blackbody material. The higher temperature would then lead to higher emission levels in the visible portion of the spectrum, without invoking candoluminescence as an explanation. \nAnother argument was that the oxides in the mantle might be actively absorbing the combustion products and thus being selectively raised to combustion-product temperatures. Some more recent authors seem to have concluded that neither Welsbach mantles nor limelight involve candoluminescence (e.g. Mason), but Ivey, in an extensive review of 254 sources, concluded that catalysis of free-radical recombination does enhance the emission of Welsbach mantles, such that they are candoluminescent.", "When the fuel in a flame combusts, the energy released by the combustion process is deposited in combustion products, usually molecular fragments called free radicals. The combustion products are excited to a very high temperature called the adiabatic flame temperature (that is, the temperature before any heat has been transferred away from the combustion products). This temperature is usually much higher than the temperature of the air in the flame or which an object inserted into the flame can reach. When the combustion products lose this energy by radiative emission, the radiation can thus be more intense than that of a lower-temperature blackbody inserted into the flame. 
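A rough sense of the scale of that comparison can be sketched with Planck's law, comparing an assumed adiabatic flame temperature with an assumed temperature for a solid body sitting in the flame; both temperatures are illustrative choices, not values from the source.

```python
import math

# Planck spectral radiance B(λ, T) = (2*h*c**2 / λ**5) / (exp(h*c/(λ*k*T)) - 1),
# compared at one visible wavelength for two temperatures.
h, c, k = 6.626e-34, 2.998e8, 1.381e-23   # SI constants

def planck_radiance(wavelength_m, temperature_k):
    x = h * c / (wavelength_m * k * temperature_k)
    return (2 * h * c**2 / wavelength_m**5) / math.expm1(x)

wavelength = 550e-9        # green light, middle of the visible band
T_combustion = 2200.0      # assumed adiabatic flame temperature, K
T_solid = 1400.0           # assumed temperature of a solid body in the flame, K

ratio = planck_radiance(wavelength, T_combustion) / planck_radiance(wavelength, T_solid)
print(f"Visible radiance ratio at 550 nm: about {ratio:.0f}x")
# Emission that carries the combustion-product energy directly, without
# thermalizing with the cooler solid, can therefore be far brighter in the
# visible than thermal emission from the solid itself.
```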
The exact emission process involved varies with the material, the type of fuels and oxidizers, and the type of flame, though in many cases it is well established that the free radicals undergo radiative recombination. This energetic light emitted directly from the combustion products may be observed directly (as with a blue gas flame), depending on the wavelength, or it may then cause fluorescence in the candoluminescent material. Some free-radical recombinations emit ultraviolet light, which is only observable through fluorescence. \nOne important candoluminescence mechanism is that the candoluminescent material catalyzes the recombination, enhancing the intensity of the emission. Extremely narrow-wavelength emission by the combustion products is often an important feature in this process, because it reduces the rate at which the free radicals lose heat to radiation at invisible or non-fluorescence-exciting wavelengths. In other cases, the excited combustion products are thought to directly transfer their energy to luminescent species in the solid material. In any case, the key feature of candoluminescence is that the combustion products lose their energy to radiation without becoming thermalized with the environment, which allows the effective temperature of their radiation to be much higher than that of thermal emission from materials in thermal equilibrium with the environment.", "Monosaccharides are the major fuel source for metabolism, being used both as an energy source (glucose being the most important in nature as it is the product of photosynthesis in plants) and in biosynthesis. When monosaccharides are not immediately needed, they are often converted to more space-efficient (i.e., less water-soluble) forms, often polysaccharides. In many animals, including humans, this storage form is glycogen, especially in liver and muscle cells. In plants, starch is used for the same purpose. The most abundant carbohydrate, cellulose, is a structural component of the cell wall of plants and many forms of algae. Ribose is a component of RNA. Deoxyribose is a component of DNA. Lyxose is a component of lyxoflavin found in the human heart. Ribulose and xylulose occur in the pentose phosphate pathway. Galactose, a component of milk sugar lactose, is found in galactolipids in plant cell membranes and in glycoproteins in many tissues. Mannose occurs in human metabolism, especially in the glycosylation of certain proteins. Fructose, or fruit sugar, is found in many plants and humans, it is metabolized in the liver, absorbed directly into the intestines during digestion, and found in semen. Trehalose, a major sugar of insects, is rapidly hydrolyzed into two glucose molecules to support continuous flight.", "The aldehyde or ketone group of a straight-chain monosaccharide will react reversibly with a hydroxyl group on a different carbon atom to form a hemiacetal or hemiketal, forming a heterocyclic ring with an oxygen bridge between two carbon atoms. Rings with five and six atoms are called furanose and pyranose forms, respectively, and exist in equilibrium with the straight-chain form.\nDuring the conversion from straight-chain form to the cyclic form, the carbon atom containing the carbonyl oxygen, called the anomeric carbon, becomes a stereogenic center with two possible configurations: The oxygen atom may take a position either above or below the plane of the ring. The resulting possible pair of stereoisomers is called anomers. 
In the α anomer, the -OH substituent on the anomeric carbon rests on the opposite side (trans) of the ring from the CHOH side branch. The alternative form, in which the CHOH substituent and the anomeric hydroxyl are on the same side (cis) of the plane of the ring, is called the β anomer.", "Carbohydrates are polyhydroxy aldehydes, ketones, alcohols, acids, their simple derivatives and their polymers having linkages of the acetal type. They may be classified according to their degree of polymerization, and may be divided initially into three principal groups, namely sugars, oligosaccharides and polysaccharides.", "Two joined monosaccharides are called a disaccharide, the simplest kind of polysaccharide. Examples include sucrose and lactose. They are composed of two monosaccharide units bound together by a covalent bond known as a glycosidic linkage formed via a dehydration reaction, resulting in the loss of a hydrogen atom from one monosaccharide and a hydroxyl group from the other. The formula of unmodified disaccharides is CHO. Although there are numerous kinds of disaccharides, a handful of disaccharides are particularly notable.\nSucrose, pictured to the right, is the most abundant disaccharide, and the main form in which carbohydrates are transported in plants. It is composed of one D-glucose molecule and one D-fructose molecule. The systematic name for sucrose, O-α-D-glucopyranosyl-(1→2)-D-fructofuranoside, indicates four things:\n* Its monosaccharides: glucose and fructose\n* Their ring types: glucose is a pyranose and fructose is a furanose\n* How they are linked together: the oxygen on carbon number 1 (C1) of α-D-glucose is linked to the C2 of D-fructose.\n* The -oside suffix indicates that the anomeric carbon of both monosaccharides participates in the glycosidic bond.\nLactose, a disaccharide composed of one D-galactose molecule and one D-glucose molecule, occurs naturally in mammalian milk. The systematic name for lactose is O-β-D-galactopyranosyl-(1→4)-D-glucopyranose. Other notable disaccharides include maltose (two D-glucoses linked α-1,4) and cellobiose (two D-glucoses linked β-1,4). Disaccharides can be classified into two types: reducing and non-reducing disaccharides. If the functional group is present in bonding with another sugar unit, it is called a reducing disaccharide or biose.", "Monosaccharides are the simplest carbohydrates in that they cannot be hydrolyzed to smaller carbohydrates. They are aldehydes or ketones with two or more hydroxyl groups. The general chemical formula of an unmodified monosaccharide is (C•HO), literally a \"carbon hydrate\". Monosaccharides are important fuel molecules as well as building blocks for nucleic acids. The smallest monosaccharides, for which n=3, are dihydroxyacetone and D- and L-glyceraldehydes.", "Carbohydrate metabolism is the series of biochemical processes responsible for the formation, breakdown and interconversion of carbohydrates in living organisms.\nThe most important carbohydrate is glucose, a simple sugar (monosaccharide) that is metabolized by nearly all known organisms. Glucose and other carbohydrates are part of a wide variety of metabolic pathways across species: plants synthesize carbohydrates from carbon dioxide and water by photosynthesis storing the absorbed energy internally, often in the form of starch or lipids. Plant components are consumed by animals and fungi, and used as fuel for cellular respiration. 
Oxidation of one gram of carbohydrate yields approximately 16 kJ (4 kcal) of energy, while the oxidation of one gram of lipids yields about 38 kJ (9 kcal). The human body stores between 300 and 500 g of carbohydrates depending on body weight, with the skeletal muscle contributing to a large portion of the storage. Energy obtained from metabolism (e.g., oxidation of glucose) is usually stored temporarily within cells in the form of ATP. Organisms capable of anaerobic and aerobic respiration metabolize glucose and oxygen (aerobic) to release energy, with carbon dioxide and water as byproducts.", "Monosaccharides are classified according to three different characteristics: the placement of the carbonyl group, the number of carbon atoms, and the chiral handedness. If the carbonyl group is an aldehyde, the monosaccharide is an aldose; if the carbonyl group is a ketone, the monosaccharide is a ketose. Monosaccharides with three carbon atoms are called trioses, those with four are called tetroses, five are called pentoses, six are hexoses, and so on. These two systems of classification are often combined. For example, glucose is an aldohexose (a six-carbon aldehyde), ribose is an aldopentose (a five-carbon aldehyde), and fructose is a ketohexose (a six-carbon ketone).\nEach carbon atom bearing a hydroxyl group (-OH), with the exception of the first and last carbons, is asymmetric, making it a stereocenter with two possible configurations (R or S). Because of this asymmetry, a number of isomers may exist for any given monosaccharide formula. Using the Le Bel-van't Hoff rule, the aldohexose D-glucose, for example, has the formula (C·H₂O)₆; four of its six carbon atoms are stereogenic, making D-glucose one of 2⁴ = 16 possible stereoisomers. In the case of glyceraldehyde, an aldotriose, there is one pair of possible stereoisomers, which are enantiomers and epimers. 1,3-Dihydroxyacetone, the ketose corresponding to the aldose glyceraldehyde, is a symmetric molecule with no stereocenters. The assignment of D or L is made according to the orientation of the asymmetric carbon furthest from the carbonyl group: in a standard Fischer projection, if the hydroxyl group is on the right the molecule is a D sugar, otherwise it is an L sugar. The \"D-\" and \"L-\" prefixes should not be confused with \"d-\" or \"l-\", which indicate the direction that the sugar rotates plane-polarized light. This usage of \"d-\" and \"l-\" is no longer followed in carbohydrate chemistry.", "Carbohydrate consumed in food yields 3.87 kilocalories of energy per gram for simple sugars, and 3.57 to 4.12 kilocalories per gram for complex carbohydrate in most other foods. Relatively high levels of carbohydrate are associated with processed foods or refined foods made from plants, including sweets, cookies and candy, table sugar, honey, soft drinks, breads and crackers, jams and fruit products, pastas and breakfast cereals. Lower amounts of digestible carbohydrate are usually associated with unrefined foods as these foods have more fiber, including beans, tubers, rice, and unrefined fruit. Animal-based foods generally have the lowest carbohydrate levels, although milk does contain a high proportion of lactose.\nOrganisms typically cannot metabolize all types of carbohydrate to yield energy. Glucose is a nearly universal and accessible source of energy. Many organisms also have the ability to metabolize other monosaccharides and disaccharides but glucose is often metabolized first. 
In Escherichia coli, for example, the lac operon will express enzymes for the digestion of lactose when it is present, but if both lactose and glucose are present the lac operon is repressed, resulting in the glucose being used first (see: Diauxie). Polysaccharides are also common sources of energy. Many organisms can easily break down starches into glucose; most organisms, however, cannot metabolize cellulose or other polysaccharides like chitin and arabinoxylans. These carbohydrate types can be metabolized by some bacteria and protists. Ruminants and termites, for example, use microorganisms to process cellulose. Even though these complex carbohydrates are not very digestible, they represent an important dietary element for humans, called dietary fiber. Fiber enhances digestion, among other benefits.\nThe Institute of Medicine recommends that American and Canadian adults get between 45 and 65% of dietary energy from whole-grain carbohydrates. The Food and Agriculture Organization and World Health Organization jointly recommend that national dietary guidelines set a goal of 55–75% of total energy from carbohydrates, but only 10% directly from sugars (their term for simple carbohydrates). A 2017 Cochrane Systematic Review concluded that there was insufficient evidence to support the claim that whole grain diets can affect cardiovascular disease.", "Catabolism is the metabolic reaction which cells undergo to break down larger molecules, extracting energy. There are two major metabolic pathways of monosaccharide catabolism: glycolysis and the citric acid cycle.\nIn glycolysis, oligo- and polysaccharides are cleaved first to smaller monosaccharides by enzymes called glycoside hydrolases. The monosaccharide units can then enter into monosaccharide catabolism. A 2 ATP investment is required in the early steps of glycolysis to phosphorylate Glucose to Glucose 6-Phosphate (G6P) and Fructose 6-Phosphate (F6P) to Fructose 1,6-biphosphate (FBP), thereby pushing the reaction forward irreversibly. In some cases, as with humans, not all carbohydrate types are usable as the digestive and metabolic enzymes necessary are not present.", "Low-carbohydrate diets may miss the health advantages – such as increased intake of dietary fiber – afforded by high-quality carbohydrates found in legumes and pulses, whole grains, fruits, and vegetables. A \"meta-analysis, of moderate quality,\" included as adverse effects of the diet halitosis, headache and constipation.\nCarbohydrate-restricted diets can be as effective as low-fat diets in helping achieve weight loss over the short term when overall calorie intake is reduced. An Endocrine Society scientific statement said that \"when calorie intake is held constant [...] body-fat accumulation does not appear to be affected by even very pronounced changes in the amount of fat vs carbohydrate in the diet.\" In the long term, effective weight loss or maintenance depends on calorie restriction, not the ratio of macronutrients in a diet. The reasoning of diet advocates that carbohydrates cause undue fat accumulation by increasing blood insulin levels, and that low-carbohydrate diets have a \"metabolic advantage\", is not supported by clinical evidence. 
Further, it is not clear how low-carbohydrate dieting affects cardiovascular health, although two reviews showed that carbohydrate restriction may improve lipid markers of cardiovascular disease risk.\nCarbohydrate-restricted diets are no more effective than a conventional healthy diet in preventing the onset of type 2 diabetes, but for people with type 2 diabetes, they are a viable option for losing weight or helping with glycemic control. There is limited evidence to support routine use of low-carbohydrate dieting in managing type 1 diabetes. The American Diabetes Association recommends that people with diabetes should adopt a generally healthy diet, rather than a diet focused on carbohydrate or other macronutrients.\nAn extreme form of low-carbohydrate diet – the ketogenic diet – is established as a medical diet for treating epilepsy. Through celebrity endorsement during the early 21st century, it became a fad diet as a means of weight loss, but with risks of undesirable side effects, such as low energy levels and increased hunger, insomnia, nausea, and gastrointestinal discomfort. The British Dietetic Association named it one of the \"top 5 worst celeb diets to avoid in 2018\".", "Most dietary carbohydrates contain glucose, either as their only building block (as in the polysaccharides starch and glycogen), or together with another monosaccharide (as in the hetero-polysaccharides sucrose and lactose). Unbound glucose is one of the main ingredients of honey. Glucose is extremely abundant and has been isolated from a variety of natural sources across the world, including male cones of the coniferous tree Wollemia nobilis in Rome, the roots of Ilex asprella plants in China, and straws from rice in California.\n The carbohydrate value is calculated in the USDA database and does not always correspond to the sum of the sugars, the starch, and the \"dietary fiber\".", "Formerly the name \"carbohydrate\" was used in chemistry for any compound with the formula C (HO). Following this definition, some chemists considered formaldehyde (CHO) to be the simplest carbohydrate, while others claimed that title for glycolaldehyde. Today, the term is generally understood in the biochemistry sense, which excludes compounds with only one or two carbons and includes many biological carbohydrates which deviate from this formula. For example, while the above representative formulas would seem to capture the commonly known carbohydrates, ubiquitous and abundant carbohydrates often deviate from this. For example, carbohydrates often display chemical groups such as: N-acetyl (e.g. chitin), sulfate (e.g. glycosaminoglycans), carboxylic acid and deoxy modifications (e.g. fucose and sialic acid).\nNatural saccharides are generally built of simple carbohydrates called monosaccharides with general formula (CHO) where n is three or more. A typical monosaccharide has the structure H–(CHOH)(C=O)–(CHOH)–H, that is, an aldehyde or ketone with many hydroxyl groups added, usually one on each carbon atom that is not part of the aldehyde or ketone functional group. Examples of monosaccharides are glucose, fructose, and glyceraldehydes. However, some biological substances commonly called \"monosaccharides\" do not conform to this formula (e.g. uronic acids and deoxy-sugars such as fucose) and there are many chemicals that do conform to this formula but are not considered to be monosaccharides (e.g. 
formaldehyde CHO and inositol (CHO)).\nThe open-chain form of a monosaccharide often coexists with a closed ring form where the aldehyde/ketone carbonyl group carbon (C=O) and hydroxyl group (–OH) react forming a hemiacetal with a new C–O–C bridge.\nMonosaccharides can be linked together into what are called polysaccharides (or oligosaccharides) in a large variety of ways. Many carbohydrates contain one or more modified monosaccharide units that have had one or more groups replaced or removed. For example, deoxyribose, a component of DNA, is a modified version of ribose; chitin is composed of repeating units of N-acetyl glucosamine, a nitrogen-containing form of glucose.", "Carbohydrate chemistry is a large and economically important branch of organic chemistry. Some of the main organic reactions that involve carbohydrates are:\n* Amadori rearrangement\n* Carbohydrate acetalisation\n* Carbohydrate digestion\n* Cyanohydrin reaction\n* Koenigs–Knorr reaction\n* Lobry de Bruyn–Van Ekenstein transformation\n* Nef reaction\n* Wohl degradation", "A carbohydrate () is a biomolecule consisting of carbon (C), hydrogen (H) and oxygen (O) atoms, usually with a hydrogen–oxygen atom ratio of 2:1 (as in water) and thus with the empirical formula (where m may or may not be different from n), which does not mean the H has covalent bonds with O (for example with , H has a covalent bond with C but not with O). However, not all carbohydrates conform to this precise stoichiometric definition (e.g., uronic acids, deoxy-sugars such as fucose), nor are all chemicals that do conform to this definition automatically classified as carbohydrates (e.g. formaldehyde and acetic acid).\nThe term is most common in biochemistry, where it is a synonym of saccharide (), a group that includes sugars, starch, and cellulose. The saccharides are divided into four chemical groups: monosaccharides, disaccharides, oligosaccharides, and polysaccharides. Monosaccharides and disaccharides, the smallest (lower molecular weight) carbohydrates, are commonly referred to as sugars. While the scientific nomenclature of carbohydrates is complex, the names of the monosaccharides and disaccharides very often end in the suffix -ose, which was originally taken from the word glucose (), and is used for almost all sugars, e.g. fructose (fruit sugar), sucrose (cane or beet sugar), ribose, lactose (milk sugar), etc.\nCarbohydrates perform numerous roles in living organisms. Polysaccharides serve as an energy store (e.g. starch and glycogen) and as structural components (e.g. cellulose in plants and chitin in arthropods). The 5-carbon monosaccharide ribose is an important component of coenzymes (e.g. ATP, FAD and NAD) and the backbone of the genetic molecule known as RNA. The related deoxyribose is a component of DNA. Saccharides and their derivatives include many other important biomolecules that play key roles in the immune system, fertilization, preventing pathogenesis, blood clotting, and development.\nCarbohydrates are central to nutrition and are found in a wide variety of natural and processed foods. Starch is a polysaccharide and is abundant in cereals (wheat, maize, rice), potatoes, and processed food based on cereal flour, such as bread, pizza or pasta. Sugars appear in human diet mainly as table sugar (sucrose, extracted from sugarcane or sugar beets), lactose (abundant in milk), glucose and fructose, both of which occur naturally in honey, many fruits, and some vegetables. 
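The historical "hydrate of carbon" definition discussed above, the empirical formula Cm(H2O)n, is easy to test mechanically: a formula fits it exactly when it contains twice as many hydrogen as oxygen atoms. The Python sketch below performs only that classical test and deliberately ignores the biochemical exceptions (deoxy sugars, uronic acids, and the like) noted in the text.

```python
# Check the classical "hydrate of carbon" pattern Cm(H2O)n, i.e. an H:O ratio of 2:1.
# This is only the historical formula test; many true carbohydrates fail it.

def fits_hydrate_of_carbon(c: int, h: int, o: int) -> bool:
    """True if the formula can be written as Cm(H2O)n with m, n >= 1."""
    return c >= 1 and o >= 1 and h == 2 * o

print(fits_hydrate_of_carbon(6, 12, 6))    # glucose C6H12O6 -> True
print(fits_hydrate_of_carbon(12, 22, 11))  # sucrose C12H22O11 -> True
print(fits_hydrate_of_carbon(5, 10, 4))    # deoxyribose C5H10O4 -> False
print(fits_hydrate_of_carbon(2, 4, 2))     # acetic acid C2H4O2 -> True, yet not a carbohydrate
```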
Table sugar, milk, or honey are often added to drinks and many prepared foods such as jam, biscuits and cakes.\nCellulose, a polysaccharide found in the cell walls of all plants, is one of the main components of insoluble dietary fiber. Although it is not digestible by humans, cellulose and insoluble dietary fiber generally help maintain a healthy digestive system by facilitating bowel movements. Other polysaccharides contained in dietary fiber include resistant starch and inulin, which feed some bacteria in the microbiota of the large intestine, and are metabolized by these bacteria to yield short-chain fatty acids.", "In scientific literature, the term \"carbohydrate\" has many synonyms, like \"sugar\" (in the broad sense), \"saccharide\", \"ose\", \"glucide\", \"hydrate of carbon\" or \"polyhydroxy compounds with aldehyde or ketone\". Some of these terms, especially \"carbohydrate\" and \"sugar\", are also used with other meanings.\nIn food science and in many informal contexts, the term \"carbohydrate\" often means any food that is particularly rich in the complex carbohydrate starch (such as cereals, bread and pasta) or simple carbohydrates, such as sugar (found in candy, jams, and desserts). This informality is sometimes confusing since it confounds chemical structure and digestibility in humans.\nOften in lists of nutritional information, such as the USDA National Nutrient Database, the term \"carbohydrate\" (or \"carbohydrate by difference\") is used for everything other than water, protein, fat, ash, and ethanol. This includes chemical compounds such as acetic or lactic acid, which are not normally considered carbohydrates. It also includes dietary fiber which is a carbohydrate but which does not contribute food energy in humans, even though it is often included in the calculation of total food energy just as though it did (i.e., as if it were a digestible and absorbable carbohydrate such as a sugar).\nIn the strict sense, \"sugar\" is applied for sweet, soluble carbohydrates, many of which are used in human food.", "The history of the discovery regarding carbohydrates dates back around 10,000 years ago in Papua New Guinea during the cultivation of Sugarcane during the Neolithic agricultural revolution . The term \"carbohydrate\" was first proposed by German chemist Carl Schmidt (chemist) in 1844. In 1856, glycogen, a form of carbohydrate storage in animal livers, was discovered by French physiologist Claude Bernard.", "Nutritionists often refer to carbohydrates as either simple or complex. However, the exact distinction between these groups can be ambiguous. The term complex carbohydrate was first used in the U.S. Senate Select Committee on Nutrition and Human Needs publication Dietary Goals for the United States (1977) where it was intended to distinguish sugars from other carbohydrates (which were perceived to be nutritionally superior). However, the report put \"fruit, vegetables and whole-grains\" in the complex carbohydrate column, despite the fact that these may contain sugars as well as polysaccharides. This confusion persists as today some nutritionists use the term complex carbohydrate to refer to any sort of digestible saccharide present in a whole food, where fiber, vitamins and minerals are also found (as opposed to processed carbohydrates, which provide energy but few other nutrients). 
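The "carbohydrate by difference" convention described above for nutrient databases can be written out explicitly: whatever mass per 100 g of food is not accounted for by water, protein, fat, ash, or alcohol is reported as carbohydrate. In the sketch below the example composition is made up for illustration and is not taken from the USDA database.

```python
# "Carbohydrate by difference": everything per 100 g that is not water,
# protein, fat, ash or alcohol is reported as carbohydrate.

def carbohydrate_by_difference(water_g, protein_g, fat_g, ash_g, alcohol_g=0.0):
    """Carbohydrate (g per 100 g of food) estimated by difference."""
    return 100.0 - (water_g + protein_g + fat_g + ash_g + alcohol_g)

# Hypothetical food: 70 g water, 5 g protein, 3 g fat, 1 g ash per 100 g.
print(carbohydrate_by_difference(70.0, 5.0, 3.0, 1.0))  # 21.0 g, including any fiber and organic acids
```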
The standard usage, however, is to classify carbohydrates chemically: simple if they are sugars (monosaccharides and disaccharides) and complex if they are polysaccharides (or oligosaccharides).\nIn any case, the simple vs. complex chemical distinction has little value for determining the nutritional quality of carbohydrates. Some simple carbohydrates (e.g. fructose) raise blood glucose rapidly, while some complex carbohydrates (starches), raise blood sugar slowly. The speed of digestion is determined by a variety of factors including which other nutrients are consumed with the carbohydrate, how the food is prepared, individual differences in metabolism, and the chemistry of the carbohydrate. Carbohydrates are sometimes divided into \"available carbohydrates\", which are absorbed in the small intestine and \"unavailable carbohydrates\", which pass to the large intestine, where they are subject to fermentation by the gastrointestinal microbiota.\nThe USDAs Dietary Guidelines for Americans 2010' call for moderate- to high-carbohydrate consumption from a balanced diet that includes six one-ounce servings of grain foods each day, at least half from whole grain sources and the rest are from enriched.\nThe glycemic index (GI) and glycemic load concepts have been developed to characterize food behavior during human digestion. They rank carbohydrate-rich foods based on the rapidity and magnitude of their effect on blood glucose levels. Glycemic index is a measure of how quickly food glucose is absorbed, while glycemic load is a measure of the total absorbable glucose in foods. The insulin index is a similar, more recent classification method that ranks foods based on their effects on blood insulin levels, which are caused by glucose (or starch) and some amino acids in food.", "Carbohydrate Structure Database (CSDB) is a free curated database and service platform in glycoinformatics, launched in 2005 by a group of Russian scientists from [http://zioc.ru/?lang=en N.D. Zelinsky Institute of Organic Chemistry], Russian Academy of Sciences. CSDB stores published structural, taxonomical, bibliographic and NMR-spectroscopic data on natural carbohydrates and carbohydrate-related molecules.", "The main data stored in CSDB are carbohydrate structures of bacterial, fungal, and plant origin. Each structure is assigned to an organism and is provided with the link(s) to the corresponding scientific publication(s), in which it was described. 
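Glycemic load, introduced above as a measure of the total absorbable glucose in a food, is commonly calculated by multiplying the glycemic index by the grams of available carbohydrate in a serving and dividing by 100. The sketch below uses that common convention; the example GI and carbohydrate values are placeholders rather than measured data.

```python
# Glycemic load (GL) for a serving, using the common convention
# GL = GI * available carbohydrate (g) / 100.

def glycemic_load(glycemic_index: float, available_carb_g: float) -> float:
    return glycemic_index * available_carb_g / 100.0

# Hypothetical serving: GI of 70 with 30 g of available carbohydrate.
print(glycemic_load(70, 30))  # 21.0
```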
Apart from structural data, CSDB also stores NMR spectra, information on methods used to decipher a particular structure, and some other data.\nCSDB provides access to several carbohydrate-related research tools:\n* Simulation of 1D and 2D NMR spectra of carbohydrates ([http://csdb.glycoscience.ru/database/index.html?help=nmr GODDESS: glycan-oriented database-driven empirical spectrum simulation]).\n* Automated NMR-based structure elucidation ([http://csdb.glycoscience.ru/database/index.html?help=nmr#grass GRASS: generation, ranking and assignment of saccharide structures]).\n* Statistical analysis of structural feature distribution in glycomes of living organisms\n* Generation of optimized atomic coordinates for an arbitrary saccharide and subdatabase of conformation maps.\n* Taxon clustering based on similarities of glycomes (carbohydrate-based tree of life)\n* Glycosyltransferase subdatabase ([http://csdb.glycoscience.ru/gt.html GT-explorer])", "Until 2015, [http://csdb.glycoscience.ru/bacterial/index.html Bacterial Carbohydrate Structure Database] (BCSDB) and [http://csdb.glycoscience.ru/plant_fungal/index.html Plant&Fungal Carbohydrate Structure Database] (PFCSDB) databases existed in parallel. In 2015, they were joined into the single [http://csdb.glycoscience.ru/database/index.html Carbohydrate Structure Database] (CSDB). The development and maintenance of CSDB have been funded by [http://www.istc.int/en/ International Science and Technology Center] (2005-2007), [http://grants.extech.ru Russian Federation President grant program] (2005-2006), [http://www.rfbr.ru/rffi/eng Russian Foundation for Basic Research] (2005-2007,2012-2014,2015-2017,2018-2020), [https://www.dkfz.de/ Deutsches Krebsforschungszentrum] (short-term in 2006-2010), and [https://www.rscf.ru/en/ Russian Science Foundation] (2018-2020).", "The main sources of CSDB data are:\n* Scientific publications indexed in the dedicated citation databases, including [https://www.ncbi.nlm.nih.gov/pubmed/ NCBI Pubmed] and [http://webofknowledge.com/ Thomson Reuters Web Of Science] (approx. 18000 records).\n* CCSD (Carbbank ) database (approx. 3000 records).\nThe data are selected and added to CSDB manually by browsing original scientific publications. The data originating from other databases are subject to error-correction and approval procedures.\nAs of 2017, the coverage on bacteria and archaea is ca. 80% of carbohydrate structures published in scientific literature The time lag between the publication of relative data and their deposition into CSDB is about 18 months. Plants are covered up to 1997, and fungi up to 2012.\nCSDB does not cover data from the animalia domain, except unicellular metazoa. There is a number of dedicated databases on animal carbohydrates, e.g. [http://www.unicarbkb.org/ UniCarbKB] or [http://glycosciences.de GLYCOSCIENCES.de] .\nCSDB is reported as one of the biggest projects in glycoinformatics. It is employed in structural studies of natural carbohydrates and in glyco-profiling.\nThe content of CSDB has been used as a data source in other glycoinformatics projects.", "CSDB is cross-linked to other glycomics databases, such as [http://www.monosaccharidedb.org MonosaccharideDB], [http://glycosciences.de Glycosciences.DE] , [https://www.ncbi.nlm.nih.gov/pubmed/ NCBI Pubmed], [https://www.ncbi.nlm.nih.gov/taxonomy NCBI Taxonomy], [https://www.ncbi.nlm.nih.gov/nlmcatalog NLM catalog], [https://www.who.int/classifications/icd/en/ International Classification of Diseases 11], etc. 
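To make the data model described above concrete (a carbohydrate structure assigned to an organism, linked to the publications in which it was described, optionally with assigned NMR spectra and elucidation methods), here is a minimal Python sketch. The class and field names are hypothetical illustrations and do not correspond to CSDB's actual schema or API.

```python
# Minimal illustration of the kind of record CSDB is described as storing:
# a structure, the organism it was found in, linked publications, and NMR data.
# Field names are hypothetical, not CSDB's real schema.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class CSDBRecord:
    structure: str                           # e.g. a structure written in a linear notation
    organism: str                            # source taxon (bacterium, fungus or plant)
    publications: List[str]                  # references to the describing papers
    nmr_spectra: Optional[List[str]] = None  # assigned spectra, if deposited
    methods: List[str] = field(default_factory=list)  # structure-elucidation methods used

record = CSDBRecord(
    structure="aDGlcp(1-4)bDGalp",       # placeholder structure string
    organism="Escherichia coli O157",    # placeholder taxon
    publications=["doi:10.0000/example"],
)
print(record.organism)
```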
Besides a native notation, CSDB Linear, structures are presented in multiple carbohydrate notations (SNFG, SweetDB, GlycoCT, [http://www.wurcs-wg.org WURCS], [http://glycam.org GLYCAM], etc.). CSDB is exportable as a Resource Description Framework (RDF) feed according to the [https://bioportal.bioontology.org/ontologies/GLYCORDF GlycoRDF] ontology.", "* Molecular structures of glycans, glycopolymers and glycoconjugates: primary structure, aglycon information, polymerization degree and class of molecule. Structural scope includes molecules composed of residues (monosaccharides, alditols, amino acids, fatty acids etc.) linked by glycosidic, ester, amidic, ketal, phospho- or sulpho-diester bonds, in which at least one residue is a monosaccharide or its derivative.\n* Bibliography associated with structures: imprint data, keywords, abstracts, IDs in bibliographic databases\n* Biological context of structures: associated taxon, strain, serogroup, host organism, disease information. The covered domains are: prokaryotes, plants, fungi and selected pathogenic unicellular metazoa. The database contains only glycans originating from these domains or obtained by chemical modification of such glycans.\n* Assigned NMR spectra and experimental conditions.\n* Glycosyltransferases associated with taxons: gene and enzyme identifiers, full structures, donor and substrates, methods used to prove enzymatic activity, trustworthiness level.\n* References to other databases\n* Other data collected from original publications\n* Conformation maps of disaccharides derived from molecular dynamics simulations.", "Carnobacterium pleistocenium is a recently discovered bacterium from the arctic part of Alaska. It was found in permafrost, seemingly frozen there for 32,000 years. Melting the ice, however, brought these extremophiles back to life. This is the first case of an organism \"coming back to life\" from ancient ice. These bacterial cells were discovered in a tunnel dug by the Army Corps of Engineers in the 1960s to allow scientists to study the permafrost in preparation for the construction of the Trans-Alaska pipeline system.\nThe discovery of this bacterium is of particular interest for NASA, for it may be possible for such life to exist in the permafrost of Mars or on the surface of Europa. It is also of interest for scientists investigating the potential for cryogenically freezing life forms to reduce the transportation costs (in terms of life support systems) that would be associated with long-duration space travel.", "Although direct bandgap semiconductors such as GaAs or GaN are most easily examined by these techniques, indirect semiconductors such as silicon also emit weak cathodoluminescence, and can be examined as well. In particular, the luminescence of dislocated silicon is different from intrinsic silicon, and can be used to map defects in integrated circuits.\nRecently, cathodoluminescence performed in electron microscopes is also being used to study surface plasmon resonances in metallic nanoparticles. Surface plasmons in metal nanoparticles can absorb and emit light, though the process is different from that in semiconductors. Similarly, cathodoluminescence has been exploited as a probe to map the local density of states of planar dielectric photonic crystals and nanostructured photonic materials.", "In scanning electron microscopes a focused beam of electrons impinges on a sample and induces it to emit light that is collected by an optical system, such as an elliptical mirror. 
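Because the text notes that CSDB is exportable as an RDF feed following the GlycoRDF ontology, a hedged sketch of how such a feed could be inspected with the rdflib library is shown below. The local file name and the assumption of RDF/XML serialization are illustrative; the real export format and endpoints should be checked against the CSDB documentation.

```python
# Sketch: load a locally saved RDF export (GlycoRDF ontology) and count triples.
# "csdb_export.rdf" is a hypothetical local file name, and RDF/XML is an assumed
# serialization; the actual CSDB export format should be verified before use.
from rdflib import Graph

g = Graph()
g.parse("csdb_export.rdf", format="xml")   # parse the downloaded RDF feed
print(f"{len(g)} triples loaded")

# List a few distinct predicates to see which GlycoRDF properties are present.
predicates = {p for _, p, _ in g}
for p in sorted(predicates)[:10]:
    print(p)
```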
From there, a fiber optic will transfer the light out of the microscope where it is separated into its component wavelengths by a monochromator and is then detected with a photomultiplier tube. By scanning the microscope's beam in an X-Y pattern and measuring the light emitted with the beam at each point, a map of the optical activity of the specimen can be obtained (cathodoluminescence imaging). Instead, by measuring the wavelength dependence for a fixed point or a certain area, the spectral characteristics can be recorded (cathodoluminescence spectroscopy). Furthermore, if the photomultiplier tube is replaced with a CCD camera, an entire spectrum can be measured at each point of a map (hyperspectral imaging). Moreover, the optical properties of an object can be correlated to structural properties observed with the electron microscope.\nThe primary advantages to the electron microscope based technique is its spatial resolution. In a scanning electron microscope, the attainable resolution is on the order of a few ten nanometers, while in a (scanning) transmission electron microscope (TEM), nanometer-sized features can be resolved. Additionally, it is possible to perform nanosecond- to picosecond-level time-resolved measurements if the electron beam can be \"chopped\" into nano- or pico-second pulses by a beam-blanker or with a pulsed electron source. These advanced techniques are useful for examining low-dimensional semiconductor structures, such a quantum wells or quantum dots.\nWhile an electron microscope with a cathodoluminescence detector provides high magnification, an optical cathodoluminescence microscope benefits from its ability to show actual visible color features directly through the eyepiece. More recently developed systems try to combine both an optical and an electron microscope to take advantage of both these techniques.", "In geology, mineralogy, materials science and semiconductor engineering, a scanning electron microscope (SEM) fitted with a cathodoluminescence detector, or an optical cathodoluminescence microscope, may be used to examine internal structures of semiconductors, rocks, ceramics, glass, etc. in order to get information on the composition, growth and quality of the material.", "Luminescence in a semiconductor results when an electron in the conduction band recombines with a hole in the valence band. The difference energy (band gap) of this transition can be emitted in form of a photon. The energy (color) of the photon, and the probability that a photon and not a phonon will be emitted, depends on the material, its purity, and the presence of defects. First, the electron has to be excited from the valence band into the conduction band. In cathodoluminescence, this occurs as the result of an impinging high energy electron beam onto a semiconductor. However, these primary electrons carry far too much energy to directly excite electrons. Instead, the inelastic scattering of the primary electrons in the crystal leads to the emission of secondary electrons, Auger electrons and X-rays, which in turn can scatter as well. Such a cascade of scattering events leads to up to 10 secondary electrons per incident electron. These secondary electrons can excite valence electrons into the conduction band when they have a kinetic energy about three times the band gap energy of the material . From there the electron recombines with a hole in the valence band and creates a photon. The excess energy is transferred to phonons and thus heats the lattice. 
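The scanning and hyperspectral-imaging procedure described above maps naturally onto a simple array layout: one emission spectrum stored per (x, y) beam position. The NumPy sketch below mocks up that data cube with random counts in place of real detector output; the scan size, wavelength range, and values are purely illustrative.

```python
# Sketch of a hyperspectral cathodoluminescence map: the beam is scanned over an
# X-Y grid and a full emission spectrum is stored at every pixel (x, y, wavelength).
import numpy as np

nx, ny, n_wavelengths = 64, 64, 512           # illustrative scan and spectrometer sizes
wavelengths_nm = np.linspace(350.0, 850.0, n_wavelengths)

# Stand-in for detector readout: random counts instead of a real CCD spectrum.
rng = np.random.default_rng(0)
cube = rng.poisson(lam=50.0, size=(nx, ny, n_wavelengths)).astype(float)

# Panchromatic CL image: integrate each pixel's spectrum over wavelength.
panchromatic = cube.sum(axis=2)

# Monochromatic CL map near 550 nm: pick the nearest wavelength channel.
idx_550 = int(np.argmin(np.abs(wavelengths_nm - 550.0)))
mono_550 = cube[:, :, idx_550]

# Spectrum from a single point of interest on the specimen.
point_spectrum = cube[10, 20, :]
print(panchromatic.shape, mono_550.shape, point_spectrum.shape)
```

Replacing the random generator with real detector readout would give the panchromatic image, the monochromatic map, and the single-point spectrum described in the text.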
One of the advantages of excitation with an electron beam is that the band gap energy of materials that are investigated is not limited by the energy of the incident light as in the case of photoluminescence. Therefore, in cathodoluminescence, the \"semiconductor\" examined can, in fact, be almost any non-metallic material. In terms of band structure, classical semiconductors, insulators, ceramics, gemstones, minerals, and glasses can be treated the same way.", "A cathodoluminescence (CL) microscope combines a regular (light optical) microscope with a cathode-ray tube. It is designed to image the luminescence characteristics of polished thin sections of solids irradiated by an electron beam.\nUsing a cathodoluminescence microscope, structures within crystals or fabrics can be made visible which cannot be seen in normal light conditions. Thus, for example, valuable information on the growth of minerals can be obtained. CL-microscopy is used in geology, mineralogy and materials science for the investigation of rocks, minerals, volcanic ash, glass, ceramic, concrete, fly ash, etc.\nCL color and intensity are dependent on the characteristics of the sample and on the working conditions of the electron gun. Here, acceleration voltage and beam current of the electron beam are of major importance. Today, two types of CL microscopes are in use. One is working with a \"cold cathode\" generating an electron beam by a corona discharge tube, the other one produces a beam using a \"hot cathode\". Cold-cathode CL microscopes are the simplest and most economical type. Unlike other electron bombardment techniques like electron microscopy, cold cathodoluminescence microscopy provides positive ions along with the electrons which neutralize surface charge buildup and eliminate the need for conductive coatings to be applied to the specimens. The \"hot cathode\" type generates an electron beam by an electron gun with tungsten filament. The advantage of a hot cathode is the precisely controllable high beam intensity allowing to stimulate the emission of light even on weakly luminescing materials (e.g. quartz – see picture). To prevent charging of the sample, the surface must be coated with a conductive layer of gold or carbon. This is usually done by a sputter deposition device or a carbon coater.", "Cathodoluminescence is an optical and electromagnetic phenomenon in which electrons impacting on a luminescent material such as a phosphor, cause the emission of photons which may have wavelengths in the visible spectrum. A familiar example is the generation of light by an electron beam scanning the phosphor-coated inner surface of the screen of a television that uses a cathode ray tube. Cathodoluminescence is the inverse of the photoelectric effect, in which electron emission is induced by irradiation with photons.", "The Catskill-Delaware Water Ultraviolet Disinfection Facility is a ultraviolet (UV) water disinfection plant built in Westchester County, New York to disinfect water for the New York City water supply system. The compound is the largest ultraviolet germicidal irradiation plant in the world.\nThe UV facility treats water delivered by two of the citys aqueduct systems, the Catskill Aqueduct and the Delaware Aqueduct, via the Kensico Reservoir. (The citys third supply system, the New Croton Aqueduct, has a separate treatment plant.)\nThe plant has 56 energy-efficient UV reactors, and cost the city $1.6 billion. 
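As a small worked example of the band-gap-to-photon relationship underlying the cathodoluminescence discussion above, the band-edge emission wavelength is roughly lambda = hc/E_g, i.e. about 1240 eV·nm divided by the gap in electronvolts. The band-gap figures below (roughly 1.4 eV for GaAs and 3.4 eV for GaN) are approximate room-temperature values used only for illustration.

```python
# Approximate emission wavelength for band-edge recombination: lambda = h*c / E_gap.
HC_EV_NM = 1239.84  # Planck constant times speed of light, in eV*nm

def emission_wavelength_nm(band_gap_ev: float) -> float:
    return HC_EV_NM / band_gap_ev

for name, gap_ev in [("GaAs", 1.42), ("GaN", 3.4), ("wide-gap insulator", 5.5)]:
    print(f"{name}: E_g = {gap_ev} eV -> ~{emission_wavelength_nm(gap_ev):.0f} nm")
# GaAs ~873 nm (near-infrared), GaN ~365 nm (near-UV), wide-gap insulator ~225 nm.
```

This is why, as noted above, electron-beam excitation can probe wide-gap insulators whose emission would be inaccessible to photoluminescence with lower-energy incident light.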
Mayor Michael Bloomberg created research groups between 2004-2006 to decide the best and most cost-effective ways to modernize the citys water filtration process, as a secondary stage following the existing chlorination and fluoridation facilities. The UV technology effectively controls microorganisms such as giardia and cryptosporidium' which are resistant to chlorine treatment. The city staff determined that the cheapest alternatives to a UV system would cost over $3 billion. In response to this finding, Bloomberg decided to set up a public competitive contract auction. Ontario based Trojan Technologies won the contract.\nThe facility treats of water per day. The new facility was originally set to be in operation by the end of 2012. The facility opened on October 8, 2013.", "The potential of using cell microencapsulation in successful clinical applications can be realized only if several requirements encountered during the development process are optimized such as the use of an appropriate biocompatible polymer to form the mechanically and chemically stable semi-permeable matrix, production of uniformly sized microcapsules, use of an appropriate immune-compatible polycations cross-linked to the encapsulation polymer to stabilized the capsules, selection of a suitable cell type depending on the situation.", "* A Rheometer is a machine used to test\n** shear rate\n** shear strength\n** consistency coefficient\n** flow behavior index\n* Viscometer - shear strength testing", "It is essential that the microcapsules have adequate membrane strength (mechanical stability) to endure physical and osmotic stress such as during the exchange of nutrients and waste products. The microcapsules should be strong enough and should not rupture on implantation as this could lead to an immune rejection of the encapsulated cells. For instance, in the case of xenotransplantation, a tighter more stable membrane would be required in comparison to allotransplantation. Also, while investigating the potential of using APA microcapsules loaded with bile salt hydrolase (BSH) overproducing active Lactobacillus plantarum 80 cells, in a simulated gastro intestinal tract model for oral delivery applications, the mechanical integrity and shape of the microcapsules was evaluated. It was shown that APA microcapsules could potentially be used in the oral delivery of live bacterial cells. However, further research proved that the GCAC microcapsules possess a higher mechanical stability as compared to APA microcapsules for oral delivery applications. Martoni et al. were experimenting with bacteria-filled capsules that would be taken by mouth to reduce serum cholesterol. \nThe capsules were pumped through a series of vessels simulating the human GI tract to determine how well the capsules would survive in the body. Extensive research into the mechanical properties of the biomaterial to be used for cell microencapsulation is necessary to determine the durability of the microcapsules during production and especially for in vivo applications where a sustained release of the therapeutic product over long durations is required.\nvan der Wijngaart et al. grafted a solid, but permeable, shell around the cells to provide increased mechanical strength.\nSodium Citrate is used for degradation of alginate beads after encapsulation of cells. In order to determine viability of the cells or for further experimentation. 
Concentrations of approximately 25mM are used to dissolve the alginate spheres and the solution is spun down using a centrifuge so the sodium citrate can be removed and the cells can be collected.", "A fundamental criterion that must be established while developing any device with a semi-permeable membrane is to adjust the permeability of the device in terms of entry and exit of molecules. It is essential that the cell microcapsule is designed with uniform thickness and should have a control over both the rate of molecules entering the capsule necessary for cell viability and the rate of therapeutic products and waste material exiting the capsule membrane. Immunoprotection of the loaded cell is the key issue that must be kept in mind while working on the permeability of the encapsulation membrane as not only immune cells but also antibodies and cytokines should be prevented entry into the microcapsule which in fact depends on the pore size of the biomembrane.\nIt has been shown that since different cell types have different metabolic requirements, thus depending on the cell type encapsulated in the membrane the permeability of the membrane has to be optimized. Several groups have been dedicated towards the study of membrane permeability of cell microcapsules and although the role of permeability of certain essential elements like oxygen has been demonstrated, the permeability requirements of each cell type are yet to be determined.\nSodium Citrate is used for degradation of alginate beads after encapsulation of cells. In order to determine viability of the cells or for further experimentation. Concentrations of approximately 25mM are used to dissolve the alginate spheres and the solution is spun down using a centrifuge so the sodium citrate can be removed and the cells can be collected.", "The use of an ideal high quality biomaterial with the inherent properties of biocompatibility is the most crucial factor that governs the long term efficiency of this technology. An ideal biomaterial for cell encapsulation should be one that is totally biocompatible, does not trigger an immune response in the host and does not interfere with cell homeostasis so as to ensure high cell viability. However, one major limitation has been the inability to reproduce the different biomaterials and the requirements to obtain a better understanding of the chemistry and biofunctionality of the biomaterials and the microencapsulation system. Several studies demonstrate that surface modification of these cell containing microparticles allows control over the growth and cellular differentiation. of the encapsulated cells.\nOne study proposed the use of zeta potential which measures the electric charge of the microcapsule as a means to predict the interfacial reaction between microcapsule and the surrounding tissue and in turn the biocompatibility of the delivery system.", "Agarose is a polysaccharide derived from seaweed used for nanoencapsulation of cells and the cell/agarose suspension can be modified to form microbeads by reducing the temperature during preparation. However, one drawback with the microbeads so obtained is the possibility of cellular protrusion through the polymeric matrix wall after formation of the capsules.", "Chitosan is a polysaccharide composed of randomly distributed β-(1-4)-linked D-glucosamine (deacetylated unit) and N-acetyl-D-glucosamine (acetylated unit). 
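To put a number on the roughly 25 mM sodium citrate de-gelling step described above, the mass of salt needed is simply concentration times volume times molar mass. The sketch assumes trisodium citrate dihydrate at about 294 g/mol; a different citrate salt would require a different molar mass.

```python
# Grams of salt needed for a sodium citrate de-gelling solution of a given molarity.
# Assumes trisodium citrate dihydrate, molar mass ~294.1 g/mol.
MOLAR_MASS_NA3_CITRATE_2H2O = 294.10  # g/mol

def grams_needed(concentration_mol_per_l: float, volume_l: float) -> float:
    return concentration_mol_per_l * volume_l * MOLAR_MASS_NA3_CITRATE_2H2O

print(round(grams_needed(0.025, 1.0), 2))  # ~7.35 g per litre for a 25 mM solution
```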
It is derived from the N-deacetylation of chitin and has been used for several applications such as drug delivery, space-filling implants and in wound dressings. However, one drawback of this polymer is its weak mechanical properties and is thus often combined with other polymers such collagen to form a polymer with stronger mechanical properties for cell encapsulation applications.", "In 1933 Vincenzo Bisceglie made the first attempt to encapsulate cells in polymer membranes. He demonstrated that tumor cells in a polymer structure transplanted into pig abdominal cavity remained viable for a long period without being rejected by the immune system.\nThirty years later in 1964, the idea of encapsulating cells within ultra thin polymer membrane microcapsules so as to provide immunoprotection to the cells was then proposed by Thomas Chang who introduced the term \"artificial cells\" to define this concept of bioencapsulation. He suggested that these artificial cells produced by a drop method not only protected the encapsulated cells from immunorejection but also provided a high surface-to-volume relationship enabling good mass transfer of oxygen and nutrients.\nTwenty years later, this approach was successfully put into practice in small animal models when alginate-polylysine-alginate (APA) microcapsules immobilizing xenograft islet cells were developed. The study demonstrated that when these microencapsulated islets were implanted into diabetic rats, the cells remained viable and controlled glucose levels for several weeks.\nHuman trials utilising encapsulated cells were performed in 1998. Encapsulated cells expressing a cytochrome P450 enzyme to locally activate an anti-tumour prodrug were used in a trial for advanced, non-resectable pancreatic cancer. Approximately a doubling of survival time compared to historic controls was demonstrated.", "Collagen, a major protein component of the ECM, provides support to tissues like skin, cartilage, bones, blood vessels and ligaments and is thus considered a model scaffold or matrix for tissue engineering due to its properties of biocompatibility, biodegradability and ability to promote cell binding. This ability allows chitosan to control distribution of cells inside the polymeric system. Thus, Type-I collagen obtained from animal tissues is now successfully being used commercially as tissue engineered biomaterial for multiple applications. Collagen has also been used in nerve repair and bladder engineering. Immunogenicity has limited the applications of collagen. Gelatin has been considered as an alternative for that reason.", "Gelatin is prepared from the denaturation of collagen and many desirable properties such as biodegradability, biocompatibility, non-immunogenity in physiological \nenvironments, and easy processability make this polymer a good choice for tissue engineering applications. It is used in engineering tissues for the skin, bone and cartilage and is used commercially for skin replacements.", "Several groups have extensively studied several natural and synthetic polymers with the goal of developing the most suitable biomaterial for cell microencapsulation. Extensive work has been done using alginates which are regarded as the most suitable biomaterials for cell microencapsulation due to their abundance, excellent biocompatibility and biodegradability properties. 
Alginate is a natural polymer which can be extracted from seaweed and bacteria with numerous compositions based on the isolation source.\nAlginate is not free from all criticism. Some researchers believe that alginates with high-M content could produce an inflammatory response and an abnormal cell growth while some have demonstrated that alginate with high-G content lead to an even higher cell overgrowth and inflammatory reaction in vivo as compared to intermediate-G alginates.\nEven ultrapure alginates may contain endotoxins, and polyphenols which could compromise the biocompatibility of the resultant cell microcapsules. It has been shown that even though purification processes successfully lower endotoxin and polyphenol content in the processed alginate, it is difficult to lower the protein content and the purification processes could in turn modify the properties of the biomaterial. Thus it is essential that an effective purification process is designed so as to remove all the contaminants from alginate before it can be successfully used in clinical applications.", "Questions could arise as to why the technique of encapsulation of cells is even required when therapeutic products could just be injected at the site. An important reason for this is that the encapsulated cells would provide a source of sustained continuous release of therapeutic products for longer durations at the site of implantation. Another advantage of cell microencapsulation technology is that it allows the loading of non-human and genetically modified cells into the polymer matrix when the availability of donor cells is limited. Microencapsulation is a valuable technique for local, regional and oral delivery of therapeutic products as it can be implanted into numerous tissue types and organs. For prolonged drug delivery to the treatment site, implantation of these drug loaded artificial cells would be more cost effective in comparison to direct drug delivery. Moreover, the prospect of implanting artificial cells with similar chemical composition in several patients irrespective of their leukocyte antigen could again allow reduction in costs.", "Researchers have also been able to develop alginate microcapsules with an altered form of alginate with enhanced biocompatibility and higher resistance to osmotic swelling. \nAnother approach to increasing the biocompatibility of the membrane biomaterial is through surface modification of the capsules using peptide and protein molecules which in turn controls the proliferation and rate of differentiation of the encapsulated cells. One group that has been working extensively on coupling the amino acid sequence Arg-Gly-Asp (RGD) to alginate hydrogels demonstrated that the cell behavior can be controlled by the RGD density coupled on the alginate gels. Alginate microparticles loaded with myoblast cells and functionalized with RGD allowed control over the growth and differentiation of the loaded cells. \nAnother vital factor that controls the use of cell microcapsules in clinical applications is the development of a suitable immune-compatible polycation to coat the otherwise highly porous alginate beads and thus impart stability and immune protection to the system. Poly-L-lysine is the most commonly used polycation but its low biocompatibility restricts the successful clinical use of these PLL formulated microcapsules which attract inflammatory cells thus inducing necrosis of the loaded cells. 
Studies have also shown that alginate-PLL-alginate (APA) microcapsules demonstrate low mechanical stability and short term durability. Thus several research groups have been looking for alternatives to PLL and have demonstrated promising results with poly-L-ornithine and poly(methylene-co-guanidine) hydrochloride by fabricating durable microcapsules with high and controlled mechanical strength for cell encapsulation.\nSeveral groups have also investigated the use of chitosan which is a naturally derived polycation as a potential replacement for PLL to fabricate alginate-chitosan (AC) microcapsules for cell delivery applications. However, studies have also shown that the stability of this AC membrane is again limited and one group demonstrated that modification of this alginate-chitosan microcapsules with genipin, a naturally occurring iridoid glucosid from gardenia fruits, to form genipin cross-linked alginate-chitosan (GCAC) microcapsules could augment stability of the cell loaded microcapsules.", "Cell encapsulation is a possible solution to graft rejection in tissue engineering applications. Cell microencapsulation technology involves immobilization of cells within a polymeric semi-permeable membrane. It permits the bidirectional diffusion of molecules such as the influx of oxygen, nutrients, growth factors etc. essential for cell metabolism and the outward diffusion of waste products and therapeutic proteins. At the same time, the semi-permeable nature of the membrane prevents immune cells and antibodies from destroying the encapsulated cells, regarding them as foreign invaders.\nCell encapsulation could reduce the need for long-term use of immunosuppressive drugs after an organ transplant to control side effects.", "Eletrospraying is used to create alginate spheres by pumping an alginate solution through a needle. A source of high voltage usually provided by a clamp attached to the needle is used to generate an electric potential with the alginate falling from the needle tip into a solution that contains a ground. Calcium chloride is used as cross linking solution in which the generated capsules drop into where they harden after approximately 30 minutes. Beads are formed from the needle due to charge and surface tension.\n* Size dependency of the beads\n** height alterations of device from needle to calcium chloride solution\n** voltage alterations of clamp on the needle\n** alginate concentration alterations", "Many other medical conditions have been targeted with encapsulation therapies, especially those involving a deficiency in some biologically derived protein. One of the most successful approaches is an external device that acts similarly to a dialysis machine, only with a reservoir of pig hepatocytes surrounding the semipermeable portion of the blood-infused tubing. This apparatus can remove toxins from the blood of patients suffering severe liver failure. Other applications that are still in development include cells that produce ciliary-derived neurotrophic factor for the treatment of ALS and Huntingtons disease, glial-derived neurotrophic factor for Parkinsons disease, erythropoietin for anemia, and HGH for dwarfism. In addition, monogenic diseases such as haemophilia, Gaucher's disease and some mucopolysaccharide disorders could also potentially be targeted by encapsulated cells expressing the protein that is otherwise lacking in the patient.", "The use of monoclonal antibodies for therapy is now widespread for treatment of cancers and inflammatory diseases. 
Using cellulose sulphate technology, scientists have successfully encapsulated antibody producing hybridoma cells and demonstrated subsequent release of the therapeutic antibody from the capsules. The capsules containing the hybridoma cells were used in pre-clinical studies to deliver neutralising antibodies to the mouse retrovirus FrCasE, successfully preventing disease.", "The cell type chosen for this technique depends on the desired application of the cell microcapsules. The cells put into the capsules can be from the patient (autologous cells), from another donor (allogeneic cells) or from other species (xenogeneic cells). The use of autologous cells in microencapsulation therapy is limited by the availability of these cells and even though xenogeneic cells are easily accessible, danger of possible transmission of viruses, especially porcine endogenous retrovirus to the patient restricts their clinical application, and after much debate several groups have concluded that studies should involve the use of allogeneic instead of xenogeneic cells. Depending on the application, the cells can be genetically altered to express any required protein. However, enough research has to be carried out to validate the safety and stability of the expressed gene before these types of cells can be used.\nThis technology has not received approval for clinical trial because of the high immunogenicity of cells loaded in the capsules. They secrete cytokines and produce a severe inflammatory reaction at the implantation site around the capsules, in turn leading to a decrease in viability of the encapsulated cells. One promising approach being studied is the administration of anti-inflammatory drugs to reduce the immune response produced due to administration of the cell loaded microcapsules. Another approach which is now the focus of extensive research is the use of stem cells such as mesenchymal stem cells for long term cell microencapsulation and cell therapy applications in hopes of reducing the immune response in the patient after implantation. Another issue which compromises long term viability of the microencapsulated cells is the use of fast proliferating cell lines which eventually fill up the entire system and lead to decrease in the diffusion efficiency across the semi-permeable membrane of the capsule. A solution to this could be in the use of cell types such as myoblasts which do not proliferate after the microencapsulation procedure.", "Numerous studies have been dedicated towards the development of effective methods to enable cardiac tissue regeneration in patients after ischemic heart disease. An emerging approach to answer the problems related to ischemic tissue repair is through the use of stem cell-based therapy. However, the actual mechanism due to which this stem cell-based therapy has generative effects on cardiac function is still under investigation. Even though numerous methods have been studied for cell administration, the efficiency of the number of cells retained in the beating heart after implantation is still very low. 
A promising approach to overcome this problem is through the use of cell microencapsulation therapy which has shown to enable a higher cell retention as compared to the injection of free stem cells into the heart.\nAnother strategy to improve the impact of cell based encapsulation technique towards cardiac regenerative applications is through the use of genetically modified stem cells capable of secreting angiogenic factors such as vascular endothelial growth factor (VEGF) which stimulate neovascularization and restore perfusion in the damaged ischemic heart. An example of this is shown in the study by Zang et al. where genetically modified xenogeneic CHO cells expressing VEGF were encapsulated in alginate-polylysine-alginate microcapsules and implanted into rat myocardium. It was observed that the encapsulation protected the cells from an immunoresponse for three weeks and also led to an improvement in the cardiac tissue post-infarction due to increased angiogenesis.", "The use of cell encapsulated microcapsules towards the treatment of several forms of cancer has shown great potential. One approach undertaken by researchers is through the implantation of microcapsules containing genetically modified cytokine secreting cells. An example of this was demonstrated by Cirone et al. when genetically modified IL-2 cytokine secreting non-autologous mouse myoblasts implanted into mice showed a delay in the tumor growth with an increased rate of survival of the animals. However, the efficiency of this treatment was brief due to an immune response towards the implanted microcapsules. \nAnother approach to cancer suppression is through the use of angiogenesis inhibitors to prevent the release of growth factors which lead to the spread of tumors. The effect of implanting microcapsules loaded with xenogenic cells genetically modified to secrete endostatin, an antiangiogenic drug which causes apoptosis in tumor cells, has been extensively studied. However, this method of local delivery of microcapsules was not feasible in the treatment of patients with many tumors or in metastasis cases and has led to recent studies involving systemic implantation of the capsules.\nIn 1998, a murine model of pancreatic cancer was used to study the effect of implanting genetically modified cytochrome P450 expressing feline epithelial cells encapsulated in cellulose sulfate polymers for the treatment of solid tumors. The approach demonstrated for the first time the application of enzyme expressing cells to activate chemotherapeutic agents. On the basis of these results, an encapsulated cell therapy product, NovaCaps, was tested in a phase I and II clinical trial for the treatment of pancreatic cancer in patients and has recently been designated by the European medicines agency (EMEA) as an orphan drug in Europe. A further phase I/II clinical trial using the same product confirmed the results of the first trial, demonstrating an approximate doubling of survival time in patients with stage IV pancreatic cancer. In all of these trials using cellulose sulphate, in addition to the clear anti-tumour effects, the capsules were well tolerated and there were no adverse reactions seen such as immune response to the capsules, demonstrating the biocompatible nature of the cellulose sulphate capsules. In one patient the capsules were in place for almost 2 years with no side effects.\nThese studies show the promising potential application of cell microcapsules towards the treatment of cancers. 
However, solutions to issues such as immune response leading to inflammation of the surrounding tissue at the site of capsule implantation have to be researched in detail before more clinical trials are possible.", "The use of the best biomaterial depending on the application is crucial in the development of drug delivery systems and tissue engineering. The polymer alginate is very commonly used due to its early discovery, easy availability and low cost but other materials such as cellulose sulphate, collagen, chitosan, gelatin and agarose have also been employed.", "Probiotics are increasingly being used in numerous dairy products such as ice cream, milk powders, yoghurts, frozen dairy desserts and cheese due to their important health benefits. But, low viability of probiotic bacteria in the food still remains a major hurdle. The pH, dissolved oxygen content, titratable acidity, storage temperature, species and strains of associative fermented dairy product organisms and concentration of lactic and acetic acids are some of the factors that greatly affect the probiotic viability in the product. As set by Food and Agriculture Organization (FAO) of the United Nations and the World Health Organization (WHO), the standard in order to be considered a health food with probiotic addition, the product should contain per gram at least 10-10 cfu of viable probiotic bacteria. It is necessary that the bacterial cells remain stable and healthy in the manufactured product, are sufficiently viable while moving through the upper digestive tract and are able to provide positive effects upon reaching the intestine of the host.\nCell microencapsulation technology has successfully been applied in the food industry for the encapsulation of live probiotic bacteria cells to increase viability of the bacteria during processing of dairy products and for targeted delivery to the gastrointestinal tract.\nApart from dairy products, microencapsulated probiotics have also been used in non-dairy products, such as [https://web.archive.org/web/20101123210115/http://integratedhealth.com/hpdspec/therasweet.htm TheresweetTM] which is a sweetener. It can be used as a convenient vehicle for delivery of encapsulated Lactobacillus to the intestine although it is not itself a dairy product.", "The diameter of the microcapsules is an important factor that influences both the immune response towards the cell microcapsules as well as the mass transport across the capsule membrane. Studies show that the cellular response to smaller capsules is much lesser as compared to larger capsules and in general the diameter of the cell loaded microcapsules should be between 350-450 µm so as to enable effective diffusion across the semi-permeable membrane.", "Cellulose sulphate is derived from cotton and, once processed appropriately, can be used as a biocompatible base in which to suspend cells. When the poly-anionic cellulose sulphate solution is immersed in a second, poly-cationic solution (e.g. pDADMAC), a semi-permeable membrane is formed around the suspended cells as a result of gelation between the two poly-ions. Both mammalian cell lines and bacterial cells remain viable and continue to replicate within the capsule membrane in order to fill-out the capsule. As such, in contrast to some other encapsulation materials, the capsules can be used to grow cells and act as such like a mini-bioreactor. 
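The influence of capsule diameter on mass transport noted above can be made concrete with simple sphere geometry: the surface-to-volume ratio of a spherical capsule is 6/d, so halving the diameter doubles the membrane area available per unit of encapsulated volume. The sketch below evaluates that ratio, and the capsule volume, at the 350-450 µm range quoted in the text.

```python
# Surface-to-volume ratio of a spherical microcapsule: S/V = 6 / d.
# Smaller capsules expose more membrane area per unit of encapsulated volume.
import math

def surface_to_volume(diameter_um: float) -> float:
    """S/V in 1/um for a sphere of the given diameter in micrometres."""
    return 6.0 / diameter_um

def capsule_volume_nl(diameter_um: float) -> float:
    """Capsule volume in nanolitres (1 nL = 1e6 um^3)."""
    r = diameter_um / 2.0
    return (4.0 / 3.0) * math.pi * r**3 / 1e6

for d in (350.0, 450.0):
    print(f"d = {d} um: S/V = {surface_to_volume(d):.4f} 1/um, "
          f"volume = {capsule_volume_nl(d):.1f} nL")
```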
The biocompatible nature of the material has been demonstrated by observation during studies using the cell-filled capsules themselves for implantation as well as isolated capsule material. Capsules formed from cellulose sulphate have been successfully used, showing safety and efficacy, in clinical and pre-clinical trials in both humans and animals, primarily as anti-cancer treatments, but also exploring possible uses for gene therapy or antibody therapies. Using cellulose sulphate it has been possible to manufacture encapsulated cells as a pharmaceutical product at large scale and fulfilling Good Manufacturing Process (cGMP) standards. This was achieved by the company [https://austrianova.com/ Austrianova] in 2007.", "Droplet-based microfluidics can be used to generate microparticles with repeatable size.\n* manipulation of alginate solution to allow microcapsules to be created", "The potential of using bioartificial pancreas, for treatment of diabetes mellitus, based on encapsulating islet cells within a semi permeable membrane is extensively being studied by scientists. These devices could eliminate the need for of immunosuppressive drugs in addition to finally solving the problem of shortage of organ donors. The use of microencapsulation would protect the islet cells from immune rejection as well as allow the use of animal cells or genetically modified insulin-producing cells. It is hoped that development of these islet encapsulated microcapsules could prevent the need for the insulin injections needed several times a day by type 1 diabetic patients. The Edmonton protocol involves implantation of human islets extracted from cadaveric donors and has shown improvements towards the treatment of type 1 diabetics who are prone to hypoglycemic unawareness. However, the two major hurdles faced in this technique are the limited availability of donor organs and with the need for immunosuppresents to prevent an immune response in the patient's body.\nSeveral studies have been dedicated towards the development of bioartificial pancreas involving the immobilization of islets of Langerhans inside polymeric capsules. The first attempt towards this aim was demonstrated in 1980 by Lim et al. where xenograft islet cells were encapsulated inside alginate polylysine microcapsules and showed significant in vivo results for several weeks. It is envisaged that the implantation of these encapsulated cells would help to overcome the use of immunosuppressive drugs and also allow the use of xenograft cells thus obviating the problem of donor shortage.\nThe polymers used for islet microencapsulation are alginate, chitosan, polyethylene glycol (PEG), agarose, sodium cellulose sulfate and water-insoluble polyacrylates with alginate and PEG being commonly used polymers. \nWith successful in vitro studies being performed using this technique, significant work in clinical trials using microencapsulated human islets is being carried out. In 2003, the use of alginate/PLO microcapsules containing islet cells for pilot phase-1 clinical trials was permitted to be carried out at the University of Perugia by the Italian Ministry of Health. In another study, the potential of clinical application of PEGylation and low doses of the immunosuppressant cyclosporine A were evaluated. The trial which began in 2005 by Novocell, now forms the phase I/II of clinical trials involving implantation of islet allografts into the subcutaneous site. 
However, there have also been controversial studies involving human clinical trials, in which Living Cell Technologies Ltd demonstrated the survival of functional xenogeneic cells transplanted without immunosuppressive medication for 9.5 years; the trial nevertheless received harsh criticism from the International Xenotransplantation Association as being risky and premature.\nEven though clinical trials are under way, several major issues such as biocompatibility and immunoprotection still need to be overcome.\nPotential alternatives to encapsulating isolated islets (of either allo- or xenogeneic origin) are also being explored. Using sodium cellulose sulphate technology from [https://austrianova.com/ Austrianova Singapore], an islet cell line was encapsulated and it was demonstrated that the cells remain viable and release insulin in response to glucose. In pre-clinical studies, implanted, encapsulated cells were able to restore blood glucose levels in diabetic rats over a period of 6 months.", "The Cells Alive System (CAS) is a line of commercial freezers manufactured by ABI Corporation, Ltd. of Chiba, Japan, claimed to preserve food with greater freshness than ordinary freezing by using electromagnetic fields and mechanical vibrations to limit the ice crystal formation that destroys food texture. The freezers are also claimed to increase tissue survival without the tissue's water being replaced by cryogenically compatible fluids; whether they have any effect is unclear. The freezers have attracted attention among organ banking and transplantation surgeons, as well as the food processing industry.", "Chloroauric acid is an inorganic compound with the chemical formula HAuCl4. It forms hydrates HAuCl4·nH2O. Both the trihydrate and tetrahydrate are known. Both are orange-yellow solids consisting of the planar [AuCl4]− anion. Often chloroauric acid is handled as a solution, such as those obtained by dissolution of gold in aqua regia. These solutions can be converted to other gold complexes or reduced to metallic gold or gold nanoparticles.", "The tetrahydrate crystallizes as an oxonium salt of the tetrachloroaurate anion together with two water molecules. The oxidation state of gold in the acid and in the anion is +3. The salts of tetrachloroauric(III) acid are tetrachloroaurates(III), containing [AuCl4]− (tetrachloroaurate(III)) anions, which have square planar molecular geometry. The Au–Cl distances are around 2.28 Å. Other d⁸ complexes adopt similar structures, e.g. the tetrachloroplatinate(II) anion of potassium tetrachloroplatinate.", "Solid chloroauric acid is a hydrophilic (ionic) protic solute. It is soluble in water and other oxygen-containing solvents, such as alcohols, esters, ethers, and ketones. For example, in dry dibutyl ether or diethylene glycol, the solubility exceeds 1 M. Saturated solutions in the organic solvents are often liquid solvates of specific stoichiometry. Chloroauric acid is a strong monoprotic acid.\nWhen heated in air, the solid melts in its water of crystallization, quickly darkens and becomes dark brown.", "Since the [AuCl4]− anion is prone to hydrolyze, chloroauric acid converts to gold(III) hydroxide upon treatment with an alkali metal base. The related thallium salt is poorly soluble in all nonreacting solvents. Salts of quaternary ammonium cations are known, as are other complex salts.\nPartial reduction of chloroauric acid gives oxonium dichloridoaurate(1−). Reduction may also yield other gold(I) complexes, especially with organic ligands.
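The solubility figure quoted above, more than 1 M in some oxygen-containing solvents, is easier to appreciate in mass terms. The snippet below simply sums standard atomic masses to obtain the formula masses of anhydrous HAuCl4 and its trihydrate and converts a nominal 1 M concentration to grams per litre; the atomic masses are general reference values, not data from the source text.

```python
# Convert the ">1 M" solubility of chloroauric acid into grams per litre.
# Standard atomic masses in g/mol (general reference values).
ATOMIC_MASS = {"H": 1.008, "O": 15.999, "Cl": 35.45, "Au": 196.97}

def formula_mass(composition):
    """Formula mass (g/mol) for a composition given as {element: count}."""
    return sum(ATOMIC_MASS[el] * n for el, n in composition.items())

haucl4 = formula_mass({"H": 1, "Au": 1, "Cl": 4})           # HAuCl4
trihydrate = haucl4 + 3 * formula_mass({"H": 2, "O": 1})    # HAuCl4 . 3 H2O

print(f"HAuCl4: {haucl4:.1f} g/mol -> a 1 M solution holds ~{haucl4:.0f} g/L")
print(f"HAuCl4.3H2O: {trihydrate:.1f} g/mol, of which "
      f"{100 * ATOMIC_MASS['Au'] / trihydrate:.0f}% by mass is gold")
```

A nominally 1 M extract therefore carries well over 300 g of the acid per litre, roughly half of which by mass is gold, which is one reason such extracts are attractive in gold recovery and refining.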
Often the ligand serves as the reducing agent, as illustrated with thiourea, SC(NH2)2.\nChloroauric acid is the precursor to gold nanoparticles, for example by precipitation onto mineral supports. Heating the acid in a stream of chlorine gives gold(III) chloride (AuCl3). Gold nanostructures can be made from chloroauric acid in a two-phase redox reaction whereby metallic clusters are amassed through the simultaneous attachment of self-assembled thiol monolayers on the growing nuclei. The gold salt is transferred from aqueous solution to toluene using tetraoctylammonium bromide, where it is then reduced with aqueous sodium borohydride in the presence of a thiol.", "Chloroauric acid is produced by dissolving gold in aqua regia (a mixture of concentrated nitric and hydrochloric acids) followed by careful evaporation of the solution.\nUnder some conditions, oxygen can be used as an oxidant. For higher efficiency, these processes are conducted in autoclaves, which allows greater control of temperature and pressure. Alternatively, a solution of the acid can be produced by electrolysis of gold metal in hydrochloric acid.\nTo prevent the deposition of gold on the cathode, the electrolysis is carried out in a cell equipped with a membrane. This method is used for refining gold. Some gold remains in solution in the form of a dissolved chloro complex.", "Chloroauric acid is the precursor used in the purification of gold by electrolysis.\nLiquid–liquid extraction of chloroauric acid is used for the recovery, concentration, purification, and analytical determination of gold. Of great importance is the extraction of the acid from hydrochloric acid medium by oxygen-containing extractants, such as alcohols, ketones, ethers and esters. The concentration of gold(III) in the extracts may exceed 1 mol/L. Frequently used extractants for this purpose are dibutyl glycol, methyl isobutyl ketone, tributyl phosphate and dichlorodiethyl ether (chlorex).\nIn histology, chloroauric acid is known as \"brown gold chloride\", and its sodium salt (sodium tetrachloroaurate(III)) as \"gold chloride\", \"sodium gold chloride\" or \"yellow gold chloride\". The sodium salt is used in a process called \"toning\" to improve the optical definition of tissue sections stained with silver.", "Chloroauric acid is a strong eye, skin, and mucous membrane irritant. Prolonged skin contact with chloroauric acid may result in tissue destruction. Concentrated chloroauric acid is corrosive to skin and must, therefore, be handled with appropriate care, since it can cause skin burns, permanent eye damage, and irritation to mucous membranes. Gloves are worn when handling the compound.", "It can be formulated as follows: take a d-dimensional lattice, and a set of spins $\vec{s}_i$ of unit length,\neach one placed on a lattice node.\nThe model is defined through the following Hamiltonian:\n:$\mathcal{H} = -\sum_{i,j} J_{ij}\, \vec{s}_i \cdot \vec{s}_j$\nwith $J_{ij}$ a coupling between spins.", "Independently of the range of the interaction, at low enough temperature the magnetization is positive.\nConjecturally, in each of the low temperature extremal states the truncated correlations decay algebraically.", "* In the case of long-range interaction, , the thermodynamic limit is well defined if ; the magnetization remains zero if ; but the magnetization is positive at low enough temperature if (infrared bounds).\n* Polyakov has conjectured that, as opposed to the classical XY model, there is no dipole phase for any ; i.e.
at non-zero temperature the correlations cluster exponentially fast.", "* In the case of long-range interaction, , the thermodynamic limit is well defined if ; the magnetization remains zero if ; but the magnetization is positive, at low enough temperature, if (infrared bounds).\n* As in any nearest-neighbor n-vector model with free boundary conditions, if the external field is zero, there exists a simple exact solution.", "* The general mathematical formalism used to describe and solve the Heisenberg model and certain generalizations is developed in the article on the Potts model.\n* In the continuum limit the Heisenberg model (2) gives the following equation of motion:\n:$\dfrac{\partial \vec{S}}{\partial t} = \vec{S} \times \dfrac{\partial^{2} \vec{S}}{\partial x^{2}}$\nThis equation is called the continuous classical Heisenberg ferromagnet equation or, for short, the Heisenberg model, and is integrable in the sense of soliton theory. It admits several integrable and nonintegrable generalizations, such as the Landau–Lifshitz equation and the Ishimori equation.", "The Classical Heisenberg model, developed by Werner Heisenberg, is the n = 3 case of the n-vector model, one of the models used in statistical physics to model ferromagnetism and other phenomena.", "Conventional deuteron fusion is a two-step process, in which an unstable high-energy intermediary is formed:\n:²H + ²H → ⁴He* + 24 MeV\nExperiments have shown only three decay pathways for this excited-state nucleus, with the branching ratio showing the probability that any given intermediate follows a particular pathway. The products formed via these decay pathways are:\n:⁴He* → n + ³He + 3.3 MeV (ratio=50%)\n:⁴He* → p + ³H + 4.0 MeV (ratio=50%)\n:⁴He* → ⁴He + γ + 24 MeV (ratio≈10⁻⁶)\nOnly about one in a million of the intermediaries take the third pathway, making its products very rare compared to the other paths. This result is consistent with the predictions of the Bohr model. If 1 watt (6.242 × 10¹⁸ eV/s) were produced from ~2.2575 × 10 deuteron fusions per second, with the known branching ratios, the resulting neutrons and tritium (³H) would be easily measured. Some researchers reported detecting ⁴He but without the expected neutron or tritium production; such a result would require branching ratios strongly favouring the third pathway, with the actual rates of the first two pathways lower by at least five orders of magnitude than observations from other experiments, directly contradicting both theoretically predicted and observed branching probabilities. Those reports of ⁴He production did not include detection of gamma rays, which would require the third pathway to have been changed somehow so that gamma rays are no longer emitted.\nThe known rate of the decay process together with the inter-atomic spacing in a metallic crystal makes heat transfer of the 24 MeV excess energy into the host metal lattice prior to the intermediary's decay inexplicable by conventional understandings of momentum and energy transfer, and even then there would be measurable levels of radiation. Also, experiments indicate that the branching ratios of deuterium fusion remain constant at different energies. In general, pressure and chemical environment cause only small changes to fusion ratios.
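The argument above, that 1 W of conventional D–D fusion would yield easily measured neutron and tritium fluxes, is a short piece of arithmetic. The sketch below reproduces it under stated assumptions: the 50%/50%/10⁻⁶ branching ratios quoted above and an assumed average energy release of about 3.65 MeV per fusion (the mean of the 3.3 and 4.0 MeV branches), so the resulting fusion rate is an order-of-magnitude figure rather than the exact value cited in the text.

```python
# Order-of-magnitude check: event rates expected from 1 W of conventional D-D fusion,
# using the branching ratios quoted in the text. The ~3.65 MeV average energy per
# fusion (mean of the 3.3 and 4.0 MeV branches) is an assumption made for illustration.

EV_PER_JOULE = 6.242e18          # 1 J = 6.242e18 eV
MEAN_ENERGY_EV = 3.65e6          # assumed average energy release per fusion (eV)

fusions_per_second = EV_PER_JOULE / MEAN_ENERGY_EV      # fusions needed for 1 W
branching = {"n + He-3": 0.5, "p + H-3 (tritium)": 0.5, "He-4 + gamma": 1e-6}

for products, ratio in branching.items():
    print(f"{products:>20}: ~{fusions_per_second * ratio:.1e} events per second at 1 W")
```

On these assumptions roughly 10¹² neutrons and 10¹² tritons per second would accompany one watt of fusion power, which is why the absence of such radiation features prominently in the criticisms discussed below.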
An early explanation invoked the Oppenheimer–Phillips process at low energies, but its magnitude was too small to explain the altered ratios.", "Criticism of cold fusion claims generally takes one of two forms: either pointing out the theoretical implausibility that fusion reactions have occurred in electrolysis setups, or criticizing the excess heat measurements as being spurious, erroneous, or due to poor methodology or controls. There are several reasons why known fusion reactions are an unlikely explanation for the excess heat and associated cold fusion claims.", "Researchers in the field do not agree on a theory for cold fusion. One proposal considers that hydrogen and its isotopes can be absorbed in certain solids, including palladium hydride, at high densities. This creates a high partial pressure, reducing the average separation of hydrogen isotopes. However, the reduction in separation falls short, by a factor of ten, of what would be needed to create the fusion rates claimed in the original experiment. It was also proposed that a higher density of hydrogen inside the palladium and a lower potential barrier could raise the possibility of fusion at lower temperatures than expected from a simple application of Coulomb's law. Electron screening of the positive hydrogen nuclei by the negative electrons in the palladium lattice was suggested to the 2004 DOE commission, but the panel found the theoretical explanations unconvincing and inconsistent with current physics theories.", "Known instances of nuclear reactions, aside from producing energy, also produce nucleons and particles on readily observable ballistic trajectories. In support of their claim that nuclear reactions took place in their electrolytic cells, Fleischmann and Pons reported a neutron flux of 4,000 neutrons per second, as well as detection of tritium. The classical branching ratio for previously known fusion reactions that produce tritium would predict, with 1 watt of power, the production of about 10¹² neutrons per second, levels that would have been fatal to the researchers. In 2009, Mosier-Boss et al. reported what they called the first scientific report of highly energetic neutrons, using CR-39 plastic radiation detectors, but the claims cannot be validated without a quantitative analysis of neutrons.\nSeveral medium and heavy elements, such as calcium, titanium, chromium, manganese, iron, cobalt, copper and zinc, have been reported as detected by several researchers, such as Tadahiko Mizuno and George Miley. The report presented to the United States Department of Energy (DOE) in 2004 indicated that deuterium-loaded foils could be used to detect fusion reaction products and, although the reviewers found the evidence presented to them inconclusive, they indicated that those experiments did not use state-of-the-art techniques.\nIn response to doubts about the lack of nuclear products, cold fusion researchers have tried to capture and measure nuclear products correlated with excess heat. Considerable attention has been given to measuring ⁴He production. However, the reported levels are very near to background, so contamination by trace amounts of helium normally present in the air cannot be ruled out.
In the report presented to the DOE in 2004, the reviewers' opinion was divided on the evidence for ⁴He, with the most negative reviews concluding that although the amounts detected were above background levels, they were very close to them and therefore could be caused by contamination from air.\nOne of the main criticisms of cold fusion was that deuteron-deuteron fusion into helium was expected to result in the production of gamma rays, which were observed neither in the original experiment nor in subsequent cold fusion experiments. Cold fusion researchers have since claimed to find X-rays, helium, neutrons and nuclear transmutations. Some researchers also claim to have found them using only light water and nickel cathodes. The 2004 DOE panel expressed concerns about the poor quality of the theoretical framework cold fusion proponents presented to account for the lack of gamma rays.", "An excess heat observation is based on an energy balance. Various sources of energy input and output are continuously measured. Under normal conditions, the energy input can be matched to the energy output to within experimental error. In experiments such as those run by Fleischmann and Pons, an electrolysis cell operating steadily at one temperature transitions to operating at a higher temperature with no increase in applied current. If the higher temperatures were real, and not an experimental artifact, the energy balance would show an unaccounted-for term. In the Fleischmann and Pons experiments, the rate of inferred excess heat generation was in the range of 10–20% of total input, though this could not be reliably replicated by most researchers. Researcher Nathan Lewis discovered that the excess heat in Fleischmann and Pons's original paper was not measured, but estimated from measurements that didn't have any excess heat.\nUnable to produce excess heat or neutrons, and with positive experiments being plagued by errors and giving disparate results, most researchers declared that heat production was not a real effect and ceased working on the experiments. In 1993, after their original report, Fleischmann reported \"heat-after-death\" experiments, in which excess heat was measured after the electric current supplied to the electrolytic cell was turned off. This type of report has also become part of subsequent cold fusion claims.", "A cold fusion experiment usually includes:\n* a metal, such as palladium or nickel, in bulk, thin films or powder; and\n* deuterium, hydrogen, or both, in the form of water, gas or plasma.\nElectrolysis cells can be either open cell or closed cell. In open cell systems, the electrolysis products, which are gaseous, are allowed to leave the cell. In closed cell experiments, the products are captured, for example by catalytically recombining the products in a separate part of the experimental system. These experiments generally strive for a steady-state condition, with the electrolyte being replaced periodically. There are also \"heat-after-death\" experiments, where the evolution of heat is monitored after the electric current is turned off.\nThe most basic setup of a cold fusion cell consists of two electrodes submerged in a solution containing palladium and heavy water. The electrodes are then connected to a power source to transmit electricity from one electrode to the other through the solution.
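The energy balance described above can be made concrete for an open cell, where the evolved gases leave the calorimeter and carry chemical energy with them. The sketch below is a minimal illustration, not a reconstruction of any particular experiment: the thermoneutral voltage for heavy-water electrolysis (taken here as roughly 1.54 V) and the input numbers are assumed values, and "excess power" simply denotes whatever heat the electrical input cannot account for.

```python
# Schematic open-cell electrolysis energy balance.
# In an open cell the gases leave, carrying chemical energy I * V_thermoneutral with them,
# so the power expected to appear as heat is I * (V_cell - V_thermoneutral).
# V_TN ~ 1.54 V for heavy water is an assumed illustrative value, as are the inputs below.

V_THERMONEUTRAL_D2O = 1.54   # volts (approximate, assumed for illustration)

def expected_heat_w(current_a, cell_voltage_v):
    """Electrical power expected to appear as heat inside an open electrolysis cell."""
    return current_a * (cell_voltage_v - V_THERMONEUTRAL_D2O)

def excess_power_w(current_a, cell_voltage_v, measured_heat_w):
    """Positive values mean more heat was measured than the electrical input can explain."""
    return measured_heat_w - expected_heat_w(current_a, cell_voltage_v)

# Hypothetical run: 0.5 A at 4.0 V with 1.48 W of heat inferred from the calorimeter.
print(f"expected heat:   {expected_heat_w(0.5, 4.0):.2f} W")
print(f"apparent excess: {excess_power_w(0.5, 4.0, 1.48):.2f} W")
```

If some of the evolved deuterium and oxygen recombine inside the cell, or if the calorimeter calibration drifts, the expected-heat term is underestimated and a spurious excess appears, which is exactly the class of error critics of the measurements describe.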
Even when anomalous heat is reported, it can take weeks for it to begin to appear—this is known as the \"loading time,\" the time required to saturate the palladium electrode with hydrogen (see \"Loading ratio\" section).\nThe Fleischmann and Pons early findings regarding helium, neutron radiation and tritium were never replicated satisfactorily, and its levels were too low for the claimed heat production and inconsistent with each other. Neutron radiation has been reported in cold fusion experiments at very low levels using different kinds of detectors, but levels were too low, close to background, and found too infrequently to provide useful information about possible nuclear processes.", "Since the Fleischmann and Pons announcement, the Italian national agency for new technologies, energy and sustainable economic development (ENEA) has funded Franco Scaramuzzi's research into whether excess heat can be measured from metals loaded with deuterium gas. Such research is distributed across ENEA departments, CNR laboratories, INFN, universities and industrial laboratories in Italy, where the group continues to try to achieve reliable reproducibility (i.e. getting the phenomenon to happen in every cell, and inside a certain frame of time). In 2006–2007, the ENEA started a research program which claimed to have found excess power of up to 500 percent, and in 2009, ENEA hosted the 15th cold fusion conference.", "Between 1992 and 1997, Japans Ministry of International Trade and Industry sponsored a \"New Hydrogen Energy (NHE)\" program of US$20 million to research cold fusion. Announcing the end of the program in 1997, the director and one-time proponent of cold fusion research Hideo Ikegami stated \"We couldnt achieve what was first claimed in terms of cold fusion. (...) We can't find any reason to propose more money for the coming year or for the future.\" In 1999 the Japan C-F Research Society was established to promote the independent research into cold fusion that continued in Japan. The society holds annual meetings. Perhaps the most famous Japanese cold fusion researcher was Yoshiaki Arata, from Osaka University, who claimed in a demonstration to produce excess heat when deuterium gas was introduced into a cell containing a mixture of palladium and zirconium oxide, a claim supported by fellow Japanese researcher Akira Kitamura of Kobe University and Michael McKubre at SRI.", "Because nuclei are all positively charged, they strongly repel one another. Normally, in the absence of a catalyst such as a muon, very high kinetic energies are required to overcome this charged repulsion. Extrapolating from known fusion rates, the rate for uncatalyzed fusion at room-temperature energy would be 50 orders of magnitude lower than needed to account for the reported excess heat. In muon-catalyzed fusion there are more fusions because the presence of the muon causes deuterium nuclei to be 207 times closer than in ordinary deuterium gas. But deuterium nuclei inside a palladium lattice are further apart than in deuterium gas, and there should be fewer fusion reactions, not more.\nPaneth and Peters in the 1920s already knew that palladium can absorb up to 900 times its own volume of hydrogen gas, storing it at several thousands of times the atmospheric pressure. This led them to believe that they could increase the nuclear fusion rate by simply loading palladium rods with hydrogen gas. 
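The classic statement above, that palladium absorbs up to 900 times its own volume of hydrogen, can be related to the deuterium-to-palladium loading ratios discussed later. The sketch below assumes the "900 volumes" refer to hydrogen gas at standard conditions (an interpretation, not stated in the text) and uses standard values for the density and molar mass of palladium and the molar volume of an ideal gas.

```python
# Relate "900 times its own volume of hydrogen" to an H/Pd atomic loading ratio.
# Assumptions: the 900 volumes are H2 gas at standard temperature and pressure
# (molar volume ~22,414 cm^3/mol); Pd density 12.02 g/cm^3, molar mass 106.42 g/mol.

PD_DENSITY_G_CM3 = 12.02
PD_MOLAR_MASS = 106.42
H2_MOLAR_VOLUME_CM3 = 22_414

pd_volume_cm3 = 1.0
mol_pd = pd_volume_cm3 * PD_DENSITY_G_CM3 / PD_MOLAR_MASS
mol_h_atoms = 2 * (900 * pd_volume_cm3) / H2_MOLAR_VOLUME_CM3   # each H2 gives two H atoms

print(f"H/Pd atomic ratio at 900 volumes absorbed: ~{mol_h_atoms / mol_pd:.2f}")
```

On these assumptions the 900-volumes figure corresponds to roughly 0.7 hydrogen atoms per palladium atom, which gives a sense of why reaching the 1:1 loading ratio mentioned below is experimentally demanding.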
Tandberg then tried the same experiment but used electrolysis to make palladium absorb more deuterium and force the deuterium further together inside the rods, thus anticipating the main elements of Fleischmann and Pons' experiment. They all hoped that pairs of hydrogen nuclei would fuse together to form helium, which at the time was needed in Germany to fill zeppelins, but no evidence of helium or of increased fusion rate was ever found.\nThis was also the belief of geologist Palmer, who convinced Steven Jones that the helium-3 occurring naturally in Earth perhaps came from fusion involving hydrogen isotopes inside catalysts like nickel and palladium. This led their team in 1986 to independently make the same experimental setup as Fleischmann and Pons (a palladium cathode submerged in heavy water, absorbing deuterium via electrolysis). Fleischmann and Pons had much the same belief, but they calculated the pressure to be of 10 atmospheres, when cold fusion experiments achieve a loading ratio of only one to one, which has only between 10,000 and 20,000 atmospheres. John R. Huizenga says they had misinterpreted the Nernst equation, leading them to believe that there was enough pressure to bring deuterons so close to each other that there would be spontaneous fusions.", "Martin Fleischmann of the University of Southampton and Stanley Pons of the University of Utah hypothesized that the high compression ratio and mobility of deuterium that could be achieved within palladium metal using electrolysis might result in nuclear fusion. To investigate, they conducted electrolysis experiments using a palladium cathode and heavy water within a calorimeter, an insulated vessel designed to measure process heat. Current was applied continuously for many weeks, with the heavy water being renewed at intervals. Some deuterium was thought to be accumulating within the cathode, but most was allowed to bubble out of the cell, joining oxygen produced at the anode. For most of the time, the power input to the cell was equal to the calculated power leaving the cell within measurement accuracy, and the cell temperature was stable at around 30 °C. But then, at some point (in some of the experiments), the temperature rose suddenly to about 50 °C without changes in the input power. These high temperature phases would last for two days or more and would repeat several times in any given experiment once they had occurred. The calculated power leaving the cell was significantly higher than the input power during these high temperature phases. Eventually the high temperature phases would no longer occur within a particular cell.\nIn 1988, Fleischmann and Pons applied to the United States Department of Energy for funding towards a larger series of experiments. Up to this point they had been funding their experiments using a small device built with $100,000 out-of-pocket. The grant proposal was turned over for peer review, and one of the reviewers was Steven Jones of Brigham Young University. Jones had worked for some time on muon-catalyzed fusion, a known method of inducing nuclear fusion without high temperatures, and had written an article on the topic entitled \"Cold nuclear fusion\" that had been published in Scientific American in July 1987. Fleischmann and Pons and co-workers met with Jones and co-workers on occasion in Utah to share research and techniques. 
During this time, Fleischmann and Pons described their experiments as generating considerable \"excess energy\", in the sense that it could not be explained by chemical reactions alone. They felt that such a discovery could bear significant commercial value and would be entitled to patent protection. Jones, however, was measuring neutron flux, which was not of commercial interest. To avoid future problems, the teams appeared to agree to publish their results simultaneously, though their accounts of their 6 March meeting differ.", "In August 2003, the U.S. Secretary of Energy, Spencer Abraham, ordered the DOE to organize a second review of the field. This was thanks to an April 2003 letter sent by MITs Peter L. Hagelstein, and the publication of many new papers, including the Italian ENEA and other researchers in the 2003 International Cold Fusion Conference, and a two-volume book by U.S. SPAWAR in 2002. Cold fusion researchers were asked to present a review document of all the evidence since the 1989 review. The report was released in 2004. The reviewers were \"split approximately evenly\" on whether the experiments had produced energy in the form of heat, but \"most reviewers, even those who accepted the evidence for excess power production, stated that the effects are not repeatable, the magnitude of the effect has not increased in over a decade of work, and that many of the reported experiments were not well documented'\". In summary, reviewers found that cold fusion evidence was still not convincing 15 years later, and they did not recommend a federal research program. They only recommended that agencies consider funding individual well-thought studies in specific areas where research \"could be helpful in resolving some of the controversies in the field\". They summarized its conclusions thus:\nCold fusion researchers placed a \"rosier spin\" on the report, noting that they were finally being treated like normal scientists, and that the report had increased interest in the field and caused \"a huge upswing in interest in funding cold fusion research\". However, in a 2009 BBC article on an American Chemical Society's meeting on cold fusion, particle physicist Frank Close was quoted stating that the problems that plagued the original cold fusion announcement were still happening: results from studies are still not being independently verified and inexplicable phenomena encountered are being labelled as \"cold fusion\" even if they are not, in order to attract the attention of journalists.\nIn February 2012, millionaire Sidney Kimmel, convinced that cold fusion was worth investing in by a 19 April 2009 interview with physicist Robert Duncan on the US news show 60 Minutes, made a grant of $5.5 million to the University of Missouri to establish the Sidney Kimmel Institute for Nuclear Renaissance (SKINR). The grant was intended to support research into the interactions of hydrogen with palladium, nickel or platinum under extreme conditions. In March 2013 Graham K. Hubler, a nuclear physicist who worked for the Naval Research Laboratory for 40 years, was named director. One of the SKINR projects is to replicate a 1991 experiment in which a professor associated with the project, Mark Prelas, says bursts of millions of neutrons a second were recorded, which was stopped because \"his research account had been frozen\". 
He claims that the new experiment has already seen \"neutron emissions at similar levels to the 1991 observation\".\nIn May 2016, the United States House Committee on Armed Services, in its report on the 2017 National Defense Authorization Act, directed the Secretary of Defense to \"provide a briefing on the military utility of recent U.S. industrial base LENR advancements to the House Committee on Armed Services by September 22, 2016\".", "United States Navy researchers at the Space and Naval Warfare Systems Center (SPAWAR) in San Diego have been studying cold fusion since 1989. In 2002 they released a two-volume report, \"Thermal and nuclear aspects of the Pd/DO system\", with a plea for funding. This and other published papers prompted a 2004 Department of Energy (DOE) review.", "A 1991 review by a cold fusion proponent had calculated \"about 600 scientists\" were still conducting research. After 1991, cold fusion research only continued in relative obscurity, conducted by groups that had increasing difficulty securing public funding and keeping programs open. These small but committed groups of cold fusion researchers have continued to conduct experiments using Fleischmann and Pons electrolysis setups in spite of the rejection by the mainstream community. The Boston Globe estimated in 2004 that there were only 100 to 200 researchers working in the field, most suffering damage to their reputation and career. Since the main controversy over Pons and Fleischmann had ended, cold fusion research has been funded by private and small governmental scientific investment funds in the United States, Italy, Japan, and India. For example, it was reported in Nature, in May, 2019, that Google had spent approximately $10 million on cold fusion research. A group of scientists at well-known research labs (e.g., MIT, Lawrence Berkeley National Lab, and others) worked for several years to establish experimental protocols and measurement techniques in an effort to re-evaluate cold fusion to a high standard of scientific rigor. Their reported conclusion: no cold fusion.\nIn 2021, following Natures' 2019 publication of anomalous findings that might only be explained by some localized fusion, scientists at the Naval Surface Warfare Center, Indian Head Division announced that they had assembled a group of scientists from the Navy, Army and National Institute of Standards and Technology to undertake a new, coordinated study. With few exceptions, researchers have had difficulty publishing in mainstream journals. The remaining researchers often term their field Low Energy Nuclear Reactions (LENR), Chemically Assisted Nuclear Reactions (CANR), Lattice Assisted Nuclear Reactions (LANR), Condensed Matter Nuclear Science (CMNS) or Lattice Enabled Nuclear Reactions; one of the reasons being to avoid the negative connotations associated with \"cold fusion\". The new names avoid making bold implications, like implying that fusion is actually occurring.\nThe researchers who continue their investigations acknowledge that the flaws in the original announcement are the main cause of the subject's marginalization, and they complain of a chronic lack of funding and no possibilities of getting their work published in the highest impact journals. University researchers are often unwilling to investigate cold fusion because they would be ridiculed by their colleagues and their professional careers would be at risk. 
In 1994, David Goodstein, a professor of physics at Caltech, advocated increased attention from mainstream researchers and described cold fusion as:", "In the 1990s, India stopped its research in cold fusion at the Bhabha Atomic Research Centre because of the lack of consensus among mainstream scientists and the US denunciation of the research. Yet, in 2008, the National Institute of Advanced Studies recommended that the Indian government revive this research. Projects were commenced at Chennais Indian Institute of Technology, the Bhabha Atomic Research Centre and the Indira Gandhi Centre for Atomic Research. However, there is still skepticism among scientists and, for all practical purposes, research has stalled since the 1990s. A special section in the Indian multidisciplinary journal Current Science' published 33 cold fusion papers in 2015 by major cold fusion researchers including several Indian researchers.", "Cold fusion setups utilize an input power source (to ostensibly provide activation energy), a platinum group electrode, a deuterium or hydrogen source, a calorimeter, and, at times, detectors to look for byproducts such as helium or neutrons. Critics have variously taken issue with each of these aspects and have asserted that there has not yet been a consistent reproduction of claimed cold fusion results in either energy output or byproducts. Some cold fusion researchers who claim that they can consistently measure an excess heat effect have argued that the apparent lack of reproducibility might be attributable to a lack of quality control in the electrode metal or the amount of hydrogen or deuterium loaded in the system. Critics have further taken issue with what they describe as mistakes or errors of interpretation that cold fusion researchers have made in calorimetry analyses and energy budgets.", "In mid-March 1989, both research teams were ready to publish their findings, and Fleischmann and Jones had agreed to meet at an airport on 24 March to send their papers to Nature via FedEx. Fleischmann and Pons, however, pressured by the University of Utah, which wanted to establish priority on the discovery, broke their apparent agreement, disclosing their work at a press conference on 23 March (they claimed in the press release that it would be published in Nature but instead submitted their paper to the Journal of Electroanalytical Chemistry). Jones, upset, faxed in his paper to Nature after the press conference.\nFleischmann and Pons' announcement drew wide media attention. But the 1986 discovery of high-temperature superconductivity had made the scientific community more open to revelations of unexpected scientific results that could have huge economic repercussions and that could be replicated reliably even if they had not been predicted by established theories. Many scientists were also reminded of the Mössbauer effect, a process involving nuclear transitions in a solid. 
Its discovery 30 years earlier had also been unexpected, though it was quickly replicated and explained within the existing physics framework.\nThe announcement of a new purported clean source of energy came at a crucial time: adults still remembered the 1973 oil crisis and the problems caused by oil dependence, anthropogenic global warming was starting to become notorious, the anti-nuclear movement was labeling nuclear power plants as dangerous and getting them closed, people had in mind the consequences of strip mining, acid rain, the greenhouse effect and the Exxon Valdez oil spill, which happened the day after the announcement. In the press conference, Chase N. Peterson, Fleischmann and Pons, backed by the solidity of their scientific credentials, repeatedly assured the journalists that cold fusion would solve environmental problems, and would provide a limitless inexhaustible source of clean energy, using only seawater as fuel. They said the results had been confirmed dozens of times and they had no doubts about them. In the accompanying press release Fleischmann was quoted saying: \"What we have done is to open the door of a new research area, our indications are that the discovery will be relatively easy to make into a usable technology for generating heat and power, but continued work is needed, first, to further understand the science and secondly, to determine its value to energy economics.\"", "Cold fusion researchers (McKubre since 1994, ENEA in 2011) have speculated that a cell that is loaded with a deuterium/palladium ratio lower than 100% (or 1:1) will not produce excess heat. Since most of the negative replications from 1989 to 1990 did not report their ratios, this has been proposed as an explanation for failed reproducibility. This loading ratio is hard to obtain, and some batches of palladium never reach it because the pressure causes cracks in the palladium, allowing the deuterium to escape. Fleischmann and Pons never disclosed the deuterium/palladium ratio achieved in their cells; there are no longer any batches of the palladium used by Fleischmann and Pons (because the supplier now uses a different manufacturing process), and researchers still have problems finding batches of palladium that achieve heat production reliably.", "Cold fusion is a hypothesized type of nuclear reaction that would occur at, or near, room temperature. It would contrast starkly with the \"hot\" fusion that is known to take place naturally within stars and artificially in hydrogen bombs and prototype fusion reactors under immense pressure and at temperatures of millions of degrees, and be distinguished from muon-catalyzed fusion. There is currently no accepted theoretical model that would allow cold fusion to occur.\nIn 1989, two electrochemists, Martin Fleischmann and Stanley Pons, reported that their apparatus had produced anomalous heat (\"excess heat\") of a magnitude they asserted would defy explanation except in terms of nuclear processes. They further reported measuring small amounts of nuclear reaction byproducts, including neutrons and tritium. The small tabletop experiment involved electrolysis of heavy water on the surface of a palladium (Pd) electrode. The reported results received wide media attention and raised hopes of a cheap and abundant source of energy.\nMany scientists tried to replicate the experiment with the few details available. 
Hopes faded with the large number of negative replications, the withdrawal of many reported positive replications, the discovery of flaws and sources of experimental error in the original experiment, and finally the discovery that Fleischmann and Pons had not actually detected nuclear reaction byproducts. By late 1989, most scientists considered cold fusion claims dead, and cold fusion subsequently gained a reputation as pathological science. In 1989 the United States Department of Energy (DOE) concluded that the reported results of excess heat did not present convincing evidence of a useful source of energy and decided against allocating funding specifically for cold fusion. A second DOE review in 2004, which looked at new research, reached similar conclusions and did not result in DOE funding of cold fusion. Presently, since articles about cold fusion are rarely published in peer-reviewed mainstream scientific journals, they do not attract the level of scrutiny expected for mainstream scientific publications.\nNevertheless, some interest in cold fusion has continued through the decades—for example, a Google-funded failed replication attempt was published in a 2019 issue of Nature. A small community of researchers continues to investigate it, often under the alternative designations low-energy nuclear reactions (LENR) or condensed matter nuclear science (CMNS).", "In 1989, after Fleischmann and Pons had made their claims, many research groups tried to reproduce the Fleischmann-Pons experiment, without success. A few other research groups, however, reported successful reproductions of cold fusion during this time. In July 1989, an Indian group from the Bhabha Atomic Research Centre (P. K. Iyengar and M. Srinivasan) and in October 1989, John Bockris' group from Texas A&M University reported on the creation of tritium. In December 1990, professor Richard Oriani of the University of Minnesota reported excess heat.\nGroups that did report successes found that some of their cells were producing the effect, while other cells that were built exactly the same and used the same materials were not producing the effect. Researchers that continued to work on the topic have claimed that over the years many successful replications have been made, but still have problems getting reliable replications. Reproducibility is one of the main principles of the scientific method, and its lack led most physicists to believe that the few positive reports could be attributed to experimental error. The DOE 2004 report said among its conclusions and recommendations:", "The ability of palladium to absorb hydrogen was recognized as early as the nineteenth century by Thomas Graham. In the late 1920s, two Austrian-born scientists, Friedrich Paneth and Kurt Peters, originally reported the transformation of hydrogen into helium by nuclear catalysis when hydrogen was absorbed by finely divided palladium at room temperature. However, the authors later retracted that report, saying that the helium they measured was due to background from the air.\nIn 1927, Swedish scientist John Tandberg reported that he had fused hydrogen into helium in an electrolytic cell with palladium electrodes. On the basis of his work, he applied for a Swedish patent for \"a method to produce helium and useful reaction energy\". Due to Paneth and Peterss retraction and his inability to explain the physical process, his patent application was denied. After deuterium was discovered in 1932, Tandberg continued his experiments with heavy water. 
The final experiments made by Tandberg with heavy water were similar to the original experiment by Fleischmann and Pons. Fleischmann and Pons were not aware of Tandbergs work.\nThe term \"cold fusion\" was used as early as 1956 in an article in The New York Times about Luis Alvarez's work on muon-catalyzed fusion. Paul Palmer and then Steven Jones of Brigham Young University used the term \"cold fusion\" in 1986 in an investigation of \"geo-fusion\", the possible existence of fusion involving hydrogen isotopes in a planetary core. In his original paper on this subject with Clinton Van Siclen, submitted in 1985, Jones had coined the term \"piezonuclear fusion\".", "The most famous cold fusion claims were made by Stanley Pons and Martin Fleischmann in 1989. After a brief period of interest by the wider scientific community, their reports were called into question by nuclear physicists. Pons and Fleischmann never retracted their claims, but moved their research program from the US to France after the controversy erupted.", "Although the experimental protocol had not been published, physicists in several countries attempted, and failed, to replicate the excess heat phenomenon. The first paper submitted to Nature reproducing excess heat, although it passed peer review, was rejected because most similar experiments were negative and there were no theories that could explain a positive result; this paper was later accepted for publication by the journal Fusion Technology. Nathan Lewis, professor of chemistry at the California Institute of Technology, led one of the most ambitious validation efforts, trying many variations on the experiment without success, while CERN physicist Douglas R. O. Morrison said that \"essentially all\" attempts in Western Europe had failed. Even those reporting success had difficulty reproducing Fleischmann and Pons' results. On 10 April 1989, a group at Texas A&M University published results of excess heat and later that day a group at the Georgia Institute of Technology announced neutron production—the strongest replication announced up to that point due to the detection of neutrons and the reputation of the lab. On 12 April Pons was acclaimed at an ACS meeting. But Georgia Tech retracted their announcement on 13 April, explaining that their neutron detectors gave false positives when exposed to heat. Another attempt at independent replication, headed by Robert Huggins at Stanford University, which also reported early success with a light water control, became the only scientific support for cold fusion in 26 April US Congress hearings. But when he finally presented his results he reported an excess heat of only one degree Celsius, a result that could be explained by chemical differences between heavy and light water in the presence of lithium. He had not tried to measure any radiation and his research was derided by scientists who saw it later. For the next six weeks, competing claims, counterclaims, and suggested explanations kept what was referred to as \"cold fusion\" or \"fusion confusion\" in the news.\nIn April 1989, Fleischmann and Pons published a \"preliminary note\" in the Journal of Electroanalytical Chemistry. This paper notably showed a gamma peak without its corresponding Compton edge, which indicated they had made a mistake in claiming evidence of fusion byproducts. Fleischmann and Pons replied to this critique, but the only thing left clear was that no gamma ray had been registered and that Fleischmann refused to recognize any mistakes in the data. 
A much longer paper published a year later went into details of calorimetry but did not include any nuclear measurements.\nNevertheless, Fleischmann and Pons and a number of other researchers who found positive results remained convinced of their findings. The University of Utah asked Congress to provide $25 million to pursue the research, and Pons was scheduled to meet with representatives of President Bush in early May.\nOn 30 April 1989, cold fusion was declared dead by The New York Times. The Times called it a circus the same day, and the Boston Herald attacked cold fusion the following day.\nOn 1 May 1989, the American Physical Society held a session on cold fusion in Baltimore, including many reports of experiments that failed to produce evidence of cold fusion. At the end of the session, eight of the nine leading speakers stated that they considered the initial Fleischmann and Pons claim dead, with the ninth, Johann Rafelski, abstaining. Steven E. Koonin of Caltech called the Utah report a result of \"the incompetence and delusion of Pons and Fleischmann,\" which was met with a standing ovation. Douglas R. O. Morrison, a physicist representing CERN, was the first to call the episode an example of pathological science.\nOn 4 May, due to all this new criticism, the meetings with various representatives from Washington were cancelled.\nFrom 8 May, only the A&M tritium results kept cold fusion afloat.\nIn July and November 1989, Nature published papers critical of cold fusion claims. Negative results were also published in several other scientific journals including Science, Physical Review Letters, and Physical Review C (nuclear physics).\nIn August 1989, in spite of this trend, the state of Utah invested $4.5 million to create the National Cold Fusion Institute.\nThe United States Department of Energy organized a special panel to review cold fusion theory and research. The panel issued its report in November 1989, concluding that results as of that date did not present convincing evidence that useful sources of energy would result from the phenomena attributed to cold fusion. The panel noted the large number of failures to replicate excess heat and the greater inconsistency of reports of nuclear reaction byproducts expected by established conjecture. Nuclear fusion of the type postulated would be inconsistent with current understanding and, if verified, would require established conjecture, perhaps even theory itself, to be extended in an unexpected way. The panel was against special funding for cold fusion research, but supported modest funding of \"focused experiments within the general funding system\". Cold fusion supporters continued to argue that the evidence for excess heat was strong, and in September 1990 the National Cold Fusion Institute listed 92 groups of researchers from 10 countries that had reported corroborating evidence of excess heat, but they refused to provide any evidence of their own arguing that it could endanger their patents. However, no further DOE nor NSF funding resulted from the panel's recommendation. By this point, however, academic consensus had moved decidedly toward labeling cold fusion as a kind of \"pathological science\".\nIn March 1990, Michael H. Salamon, a physicist from the University of Utah, and nine co-authors reported negative results. University faculty were then \"stunned\" when a lawyer representing Pons and Fleischmann demanded the Salamon paper be retracted under threat of a lawsuit. 
The lawyer later apologized; Fleischmann defended the threat as a legitimate reaction to alleged bias displayed by cold-fusion critics.\nIn early May 1990, one of the two A&M researchers, Kevin Wolf, acknowledged the possibility of spiking, but said that the most likely explanation was tritium contamination in the palladium electrodes or simply contamination due to sloppy work. In June 1990 an article in Science by science writer Gary Taubes destroyed the public credibility of the A&M tritium results when it accused its group leader John Bockris and one of his graduate students of spiking the cells with tritium. In October 1990 Wolf finally said that the results were explained by tritium contamination in the rods. An A&M cold fusion review panel found that the tritium evidence was not convincing and that, while they couldn't rule out spiking, contamination and measurements problems were more likely explanations, and Bockris never got support from his faculty to resume his research.\nOn 30 June 1991, the National Cold Fusion Institute closed after it ran out of funds; it found no excess heat, and its reports of tritium production were met with indifference.\nOn 1 January 1991, Pons left the University of Utah and went to Europe. In 1992, Pons and Fleischmann resumed research with Toyota Motor Corporation's IMRA lab in France. Fleischmann left for England in 1995, and the contract with Pons was not renewed in 1998 after spending $40 million with no tangible results. The IMRA laboratory stopped cold fusion research in 1998 after spending £12 million. Pons has made no public declarations since, and only Fleischmann continued giving talks and publishing papers.\nMostly in the 1990s, several books were published that were critical of cold fusion research methods and the conduct of cold fusion researchers. Over the years, several books have appeared that defended them. Around 1998, the University of Utah had already dropped its research after spending over $1 million, and in the summer of 1997, Japan cut off research and closed its own lab after spending $20 million.", "* [http://dspace.mit.edu/handle/1721.1/50230 MIT Open Access Articles].\n* (manuscript).\n* In the foreword by the president of ENEA the belief is expressed that the cold fusion phenomenon is proved.", "Nuclear fusion is normally understood to occur at temperatures in the tens of millions of degrees. This is called \"thermonuclear fusion\". Since the 1920s, there has been speculation that nuclear fusion might be possible at much lower temperatures by catalytically fusing hydrogen absorbed in a metal catalyst. In 1989, a claim by Stanley Pons and Martin Fleischmann (then one of the world's leading electrochemists) that such cold fusion had been observed caused a brief media sensation before the majority of scientists criticized their claim as incorrect after many found they could not replicate the excess heat. Since the initial announcement, cold fusion research has continued by a small community of researchers who believe that such reactions happen and hope to gain wider recognition for their experimental evidence.", "Although details have not surfaced, it appears that the University of Utah forced the 23 March 1989 Fleischmann and Pons announcement to establish priority over the discovery and its patents before the joint publication with Jones. The Massachusetts Institute of Technology (MIT) announced on 12 April 1989 that it had applied for its own patents based on theoretical work of one of its researchers, Peter L. 
Hagelstein, who had been sending papers to journals from 5 to 12 April. An MIT graduate student applied for a patent but was reportedly rejected by the USPTO in part by the citation of the \"negative\" MIT Plasma Fusion Center's cold fusion experiment of 1989. On 2 December 1993 the University of Utah licensed all its cold fusion patents to ENECO, a new company created to profit from cold fusion discoveries, and in March 1998 it said that it would no longer defend its patents.\nThe U.S. Patent and Trademark Office (USPTO) now rejects patents claiming cold fusion. Esther Kepplinger, the deputy commissioner of patents in 2004, said that this was done using the same argument as with perpetual motion machines: that they do not work. Patent applications are required to show that the invention is \"useful\", and this utility is dependent on the inventions ability to function. In general USPTO rejections on the sole grounds of the inventions being \"inoperative\" are rare, since such rejections need to demonstrate \"proof of total incapacity\", and cases where those rejections are upheld in a Federal Court are even rarer: nevertheless, in 2000, a rejection of a cold fusion patent was appealed in a Federal Court and it was upheld, in part on the grounds that the inventor was unable to establish the utility of the invention.\nA U.S. patent might still be granted when given a different name to disassociate it from cold fusion, though this strategy has had little success in the US: the same claims that need to be patented can identify it with cold fusion, and most of these patents cannot avoid mentioning Fleischmann and Pons' research due to legal constraints, thus alerting the patent reviewer that it is a cold-fusion-related patent. David Voss said in 1999 that some patents that closely resemble cold fusion processes, and that use materials used in cold fusion, have been granted by the USPTO. The inventor of three such patents had his applications initially rejected when they were reviewed by experts in nuclear science; but then he rewrote the patents to focus more on the electrochemical parts so they would be reviewed instead by experts in electrochemistry, who approved them. When asked about the resemblance to cold fusion, the patent holder said that it used nuclear processes involving \"new nuclear physics\" unrelated to cold fusion. Melvin Miles was granted in 2004 a patent for a cold fusion device, and in 2007 he described his efforts to remove all instances of \"cold fusion\" from the patent description to avoid having it rejected outright.\nAt least one patent related to cold fusion has been granted by the European Patent Office.\nA patent only legally prevents others from using or benefiting from one's invention. However, the general public perceives a patent as a stamp of approval, and a holder of three cold fusion patents said the patents were very valuable and had helped in getting investments.", "Cold fusion researchers were for many years unable to get papers accepted at scientific meetings, prompting the creation of their own conferences. The International Conference on Cold Fusion (ICCF) was first held in 1990 and has met every 12 to 18 months since. Attendees at some of the early conferences were described as offering no criticism to papers and presentations for fear of giving ammunition to external critics, thus allowing the proliferation of crackpots and hampering the conduct of serious science. 
Critics and skeptics stopped attending these conferences, with the notable exception of Douglas Morrison, who died in 2001. With the founding in 2004 of the International Society for Condensed Matter Nuclear Science (ISCMNS), the conference was renamed the International Conference on Condensed Matter Nuclear Science—for reasons that are detailed in the subsequent research section above—but reverted to the old name in 2008. Cold fusion research is often referenced by proponents as \"low-energy nuclear reactions\", or LENR, but according to sociologist Bart Simon the \"cold fusion\" label continues to serve a social function in creating a collective identity for the field.\nSince 2006, the American Physical Society (APS) has included cold fusion sessions at their semiannual meetings, clarifying that this does not imply a softening of skepticism. Since 2007, the American Chemical Society (ACS) meetings also include \"invited symposium(s)\" on cold fusion. An ACS program chair, Gopal Coimbatore, said that without a proper forum the matter would never be discussed and, \"with the world facing an energy crisis, it is worth exploring all possibilities.\"\nOn 22–25 March 2009, the American Chemical Society meeting included a four-day symposium in conjunction with the 20th anniversary of the announcement of cold fusion. Researchers working at the U.S. Navys Space and Naval Warfare Systems Center (SPAWAR) reported detection of energetic neutrons using a heavy water electrolysis setup and a CR-39 detector, a result previously published in Naturwissenschaften'. The authors claim that these neutrons are indicative of nuclear reactions. Without quantitative analysis of the number, energy, and timing of the neutrons and exclusion of other potential sources, this interpretation is unlikely to find acceptance by the wider scientific community.", "The ISI identified cold fusion as the scientific topic with the largest number of published papers in 1989, of all scientific disciplines. The Nobel Laureate Julian Schwinger declared himself a supporter of cold fusion in the fall of 1989, after much of the response to the initial reports had turned negative. He tried to publish his theoretical paper \"Cold Fusion: A Hypothesis\" in Physical Review Letters, but the peer reviewers rejected it so harshly that he felt deeply insulted, and he resigned from the American Physical Society (publisher of PRL) in protest.\nThe number of papers sharply declined after 1990 because of two simultaneous phenomena: first, scientists abandoned the field; second, journal editors declined to review new papers. Consequently, cold fusion fell off the ISI charts. Researchers who got negative results turned their backs on the field; those who continued to publish were simply ignored. A 1993 paper in Physics Letters A was the last paper published by Fleischmann, and \"one of the last reports [by Fleischmann] to be formally challenged on technical grounds by a cold fusion skeptic.\"\nThe Journal of Fusion Technology (FT) established a permanent feature in 1990 for cold fusion papers, publishing over a dozen papers per year and giving a mainstream outlet for cold fusion researchers. When editor-in-chief George H. Miley retired in 2001, the journal stopped accepting new cold fusion papers. This has been cited as an example of the importance of sympathetic influential individuals to the publication of cold fusion papers in certain journals.\nThe decline of publications in cold fusion has been described as a \"failed information epidemic\". 
The sudden surge of supporters until roughly 50% of scientists support the theory, followed by a decline until only a very small number of supporters remain, has been described as a characteristic of pathological science. The lack of a shared set of unifying concepts and techniques has prevented the creation of a dense network of collaboration in the field; researchers pursue their own efforts in disparate directions, making the transition to \"normal\" science more difficult.\nCold fusion reports continued to be published in a few journals like Journal of Electroanalytical Chemistry and Il Nuovo Cimento. Some papers also appeared in Journal of Physical Chemistry, Physics Letters A, International Journal of Hydrogen Energy, and a number of Japanese and Russian journals of physics, chemistry, and engineering. Since 2005, Naturwissenschaften has published cold fusion papers; in 2009, the journal named a cold fusion researcher to its editorial board. In 2015 the Indian multidisciplinary journal Current Science published a special section devoted entirely to cold-fusion-related papers.\nIn the 1990s, the groups that continued to research cold fusion and their supporters established (non-peer-reviewed) periodicals such as Fusion Facts, Cold Fusion Magazine, Infinite Energy Magazine and New Energy Times to cover developments in cold fusion and other fringe claims in energy production that were ignored in other venues. The internet has also become a major means of communication and self-publication for CF researchers.", "The calculation of excess heat in electrochemical cells involves certain assumptions. Errors in these assumptions have been offered as non-nuclear explanations for excess heat.\nOne assumption made by Fleischmann and Pons is that the efficiency of electrolysis is nearly 100%, meaning nearly all the electricity applied to the cell resulted in electrolysis of water, with negligible resistive heating and substantially all the electrolysis product leaving the cell unchanged. This assumption gives the amount of energy expended converting liquid D₂O into gaseous D₂ and O₂. The efficiency of electrolysis is less than one if hydrogen and oxygen recombine to a significant extent within the calorimeter. Several researchers have described potential mechanisms by which this process could occur and thereby account for excess heat in electrolysis experiments.\nAnother assumption is that heat loss from the calorimeter maintains the same relationship with measured temperature as found when calibrating the calorimeter. This assumption ceases to be accurate if the temperature distribution within the cell becomes significantly altered from the condition under which calibration measurements were made. This can happen, for example, if fluid circulation within the cell becomes significantly altered. Recombination of hydrogen and oxygen within the calorimeter would also alter the heat distribution and invalidate the calibration.", "Some research groups initially reported that they had replicated the Fleischmann and Pons results but later retracted their reports and offered an alternative explanation for their original positive results. A group at Georgia Tech found problems with their neutron detector, and Texas A&M discovered bad wiring in their thermometers. 
These retractions, combined with negative results from some famous laboratories, led most scientists to conclude, as early as 1989, that no positive result should be attributed to cold fusion.", "The 1990 Michael Winner film Bullseye!, starring Michael Caine and Roger Moore, referenced the Fleischmann and Pons experiment. The film – a comedy – concerned conmen trying to steal scientists' purported findings. However, the film had a poor reception, described as \"appallingly unfunny\".\nIn Undead Science, sociologist Bart Simon gives some examples of cold fusion in popular culture, saying that some scientists use cold fusion as a synonym for outrageous claims made with no supporting proof, and courses of ethics in science give it as an example of pathological science. It has appeared as a joke in Murphy Brown and The Simpsons. Its name was adopted for a software product (Adobe ColdFusion) and a brand of protein bars (Cold Fusion Foods). It has also appeared in advertising as a synonym for impossible science, for example in a 1995 advertisement for Pepsi Max.\nThe plot of The Saint, a 1997 action-adventure film, parallels the story of Fleischmann and Pons, although with a different ending. In Undead Science, Simon posits that the film might have affected the public perception of cold fusion, pushing it further into the science fiction realm.\nSimilarly, the tenth episode of the 2000 science fiction TV drama Life Force (\"Paradise Island\") is also based around cold fusion, specifically the efforts of eccentric scientist Hepzibah McKinley (Amanda Walker), who is convinced she has perfected it based on her father's incomplete research into the subject. The episode explores its potential benefits and viability within the ongoing post-apocalyptic global warming scenario of the series.\nIn the 2023 video game Atomic Heart, cold fusion is responsible for nearly all of the technological advances.", "Cold hardening is the physiological and biochemical process by which an organism prepares for cold weather.", "Plants in temperate and polar regions adapt to winter and sub-zero temperatures by relocating nutrients from leaves and shoots to storage organs. Freezing temperatures induce dehydrative stress on plants, as water absorption in the root and water transport in the plant decrease. Water in and between cells in the plant freezes and expands, causing tissue damage. Cold hardening is a process in which a plant undergoes physiological changes to avoid, or mitigate, cellular injuries caused by sub-zero temperatures. Non-acclimatized individuals can survive −5 °C, while an acclimatized individual of the same species can survive −30 °C. Plants that originated in the tropics, like tomato or maize, do not go through cold hardening and are unable to survive freezing temperatures. The plant starts the adaptation by exposure to cold, but not yet freezing, temperatures. The process can be divided into three steps. First the plant perceives low temperature, then converts the signal to activate or repress expression of appropriate genes. Finally, it uses these genes to combat the stress that sub-zero temperatures impose on its living cells. Many of the genes and responses to low temperature stress are shared with other abiotic stresses, like drought or salinity. \n When temperature drops, the membrane fluidity, RNA and DNA stability, and enzyme activity change. These, in turn, affect transcription, translation, intermediate metabolism, and photosynthesis, leading to an energy imbalance. 
This energy imbalance is thought to be one of the ways the plant detects low temperature. Experiments on Arabidopsis show that the plant detects the change in temperature, rather than the absolute temperature. The rate of temperature drop is directly connected to the magnitude of calcium influx, from the space between cells, into the cell. Calcium channels in the cell membrane detect the temperature drop and promote expression of low-temperature-responsive genes in alfalfa and Arabidopsis. The response to the change in calcium elevation depends on the cell type and stress history. Shoot tissue will respond more than root cells, and a cell that is already adapted to cold stress will respond more than one that has not been through cold hardening before. Light does not control the onset of cold hardening directly, but the shortening of daylight associated with fall starts production of reactive oxygen species and excitation of photosystem II, which influence low-temperature signal transduction mechanisms. Plants with compromised perception of day length have compromised cold acclimation.\nCold increases cell membrane permeability and makes the cell shrink, as water is drawn out when ice is formed in the extracellular matrix between cells. To retain the surface area of the cell membrane so it will be able to regain its former volume when temperature rises again, the plant forms more and stronger Hechtian strands. These are tubelike structures that connect the protoplast with the cell wall. When the intracellular water freezes, the cell will expand, and without cold hardening the cell would rupture. To protect the cell membrane from expansion-induced damage, the plant cell changes the proportions of almost all lipids in the cell membrane, and increases the amount of total soluble protein and other cryoprotecting molecules, like sugar and proline.\nChilling injury occurs at 0–10 degrees Celsius, as a result of membrane damage, metabolic changes, and toxic buildup. Symptoms include wilting, water soaking, necrosis, chlorosis, ion leakage, and decreased growth. Freezing injury may occur at temperatures below 0 degrees Celsius. Symptoms of extracellular freezing include structural damage, dehydration, and necrosis. If intracellular freezing occurs, it will lead to death. Freezing injury is a result of lost permeability, plasmolysis, and post-thaw cell bursting.\nWhen spring comes, or during a mild spell in winter, plants de-harden, and if the temperature stays warm for long enough, their growth resumes.", "Cold hardening has also been observed in insects such as the fruit fly and diamondback moth. The insects use rapid cold hardening to protect against cold shock during overwintering periods. Overwintering insects stay awake and active through the winter while non-overwintering insects migrate or die. Rapid cold hardening can be triggered during short periods of undesirable temperatures, such as a sudden cold shock in the environmental temperature, as well as during the usual cold months. The buildup of cryoprotective compounds is the reason that insects can experience cold hardening. Glycerol is a cryoprotective substance found within those insects capable of overwintering. Testing has shown that glycerol requires interactions with other cell components within the insect in order to decrease the body's permeability to the cold. When an insect is exposed to these cold temperatures, glycerol rapidly accumulates. Glycerol is known as a non-ionic kosmotrope, forming powerful hydrogen bonds with water molecules. 
The hydrogen bonds in the glycerol compound compete with the weaker bonds between the water molecules, disrupting the formation of ice. This chemistry between glycerol and water has long been exploited in antifreeze, and the same principle is at work in cold hardening. Proteins also play a large role among the cryoprotective compounds that increase the ability to survive the cold hardening process and environmental change. Glycogen phosphorylase (GlyP) is a key protein found in experiments to increase in comparison with a control group not undergoing cold hardening. Once warmer temperatures return, the process reverses, and the increased glycerol along with the other cryoprotective compounds and proteins declines again. There is a rapid cold hardening capacity found within certain insects which suggests that not all insects can survive a long period of overwintering. Non-diapausing insects can sustain brief temperature shocks but often have a limit to what they can handle before the body can no longer produce enough cryoprotective components.\nBesides being beneficial to insects' survival during cold temperatures, the cold hardening process also helps improve the organism's performance. Rapid cold hardening (RCH) is one of the fastest cold temperature responses recorded. This process allows an insect to adapt almost instantly to a severe weather change without compromising function. Drosophila melanogaster (the common fruit fly) is frequently used in cold hardening experiments. A documented example of RCH enhancing an organism's performance comes from courtship and mating in the fruit fly. Experiments have shown that fruit flies mated more frequently once RCH had commenced, compared with a control group not experiencing RCH. Most insects experiencing extended cold periods are observed to modify the membrane lipids within the body. Desaturation of fatty acids is the most commonly seen modification to the membrane. When the fruit fly was observed under this stressful climate, the survival rate increased in comparison with flies that had not undergone cold hardening.\nIn addition to testing on the common fruit fly, Plutella xylostella (the diamondback moth) has also been widely studied for its significance in cold hardening. While this insect also shows an increase in glycerol and similar cryoprotective compounds, it additionally shows an increase in polyols. These compounds are specifically associated with tolerance of cold hardening. Polyols are found in both freeze-susceptible and freeze-tolerant insects. Polyols act as a barrier within the insect body, preventing intracellular freezing by restricting the extracellular freezing likely to happen in overwintering periods. During the larval stage of the diamondback moth, the significance of glycerol was tested again: injecting the larvae with added glycerol improved survival, showing that glycerol is a major factor in survival rate during cold hardening. Cold tolerance is directly proportional to the buildup of glycerol during cold hardening.\nCold hardening of insects improves the survival rate of the species and improves function. Once the environmental temperature begins to warm up above freezing, the cold hardening process is reversed and the glycerol and cryoprotective compounds decrease within the body. 
This also reverts the function of the insect to its pre-cold-hardening activity.", "The collective–amoeboid transition (CAT) is a process by which collective multicellular groups dissociate into amoeboid single cells following the down-regulation of integrins. CATs contrast with epithelial–mesenchymal transitions (EMTs), which occur following a loss of E-cadherin. Like EMTs, CATs are involved in the invasion of tumor cells into surrounding tissues, with amoeboid movement more likely to occur in soft extracellular matrix (ECM) and mesenchymal movement in stiff ECM. Although once differentiated, cells typically do not change their migration mode, EMTs and CATs are highly plastic, with cells capable of interconverting between them depending on intracellular regulatory signals and the surrounding ECM.\nCATs are the least common transition type in invading tumor cells, although they are noted in melanoma explants.", "Colliding beam fusion (CBF), or colliding beam fusion reactor (CBFR), is a class of fusion power concepts that are based on two or more intersecting beams of fusion fuel ions that are independently accelerated to fusion energies using a variety of particle accelerator designs or other means. One of the beams may be replaced by a static target, in which case the approach is termed accelerator based fusion or beam-target fusion, but the physics is the same as colliding beams.\nCBFRs face several problems that have limited their ability to be seriously considered as candidates for fusion power. When two ions collide, they are more likely to scatter than to fuse. Magnetic confinement fusion reactors overcome this problem by using a bulk plasma and confining it for some time so that the ions have many thousands of chances to collide. Two beams colliding give ions little time to interact before the beams fly apart. This limits how much fusion power a beam-beam machine can make.\nCBFR offers more efficient ways to provide the activation energy for fusion, by directly accelerating individual particles rather than heating a bulk fuel. The CBFR reactants are naturally non-thermal, which gives them advantages; in particular, they can directly carry enough energy to overcome the Coulomb barrier of aneutronic fusion fuels. Several designs have sought to address the shortcomings of earlier CBFRs, including Migma, MARBLE, MIX, and other beam-based concepts. These attempt to overcome the fundamental challenges of CBFR by applying radio waves, bunching beams together, increasing recirculation, or applying some quantum effects. None of these approaches have succeeded yet.", "Given the extremely low interaction cross-sections, the number of particles required in the reaction area is enormous, well beyond any existing technology. But this assumes that the particles in question only get one pass through the system. If the particles that missed collisions can be recycled so that their energy is retained and they have multiple chances to collide, the energy imbalance can be reduced.\nOne such solution would be to place the reaction area of a two-beam system between the poles of a powerful magnet. The field will cause the electrically charged particles to bend around into circular paths and come back into the reaction area again. 
However, such systems naturally defocus the particles, so this will not lead them back to their original trajectories accurately enough to produce the densities desired.\nA better solution is to use a dedicated storage ring which includes focusing systems to maintain the beam accuracy. However, these only accept particles in a relatively narrow selection of original trajectories. If two particles approach closely and scatter off at an angle, they will no longer recycle into the storage area. It is easy to show that the loss rate from such scatterings is far greater than the fusion rate.\nMany attempts have been made to address this scattering problem.", "The energy needed to overcome the Coulomb barrier, about 100 keV for D-T fuel, corresponds to millions of degrees, but is within the energy range that can be provided by even the smallest particle accelerators. For instance, the very first cyclotron, built in 1932, was capable of producing 4.8 MeV in a device that fit on a tabletop.\nThe original earthbound fusion reactions were created by such a device at the Cavendish Laboratory at Cambridge University. In 1934, Mark Oliphant, Paul Harteck and Ernest Rutherford used a new type of power supply to power a device not unlike an electron gun to shoot deuterium nuclei into a metal foil infused with deuterium, lithium or other light elements. This apparatus allowed them to study the nuclear cross section of the various reactions, and it was their work that produced the 100 keV figure.\nThe chance that any given deuteron will hit one of the deuterium atoms in the metal foil is vanishingly small. The experiment only succeeded because it ran for extended periods, and the rare reactions that did occur were so powerful that they could not be missed. But as the basis of a system for power production it simply wouldn't work; the vast majority of the accelerated deuterons go right through the foil without undergoing a collision, and all the energy put into accelerating them is lost. The small number of reactions that do occur give off far less energy than what is fed into the accelerator.\nA somewhat related concept was explored by Stanislaw Ulam and Jim Tuck at Los Alamos shortly after World War II. In this system, deuterium was infused into metal as in the Cavendish experiments, but then formed into a cone and inserted into shaped charge warheads. Two such warheads were aimed at each other and fired, forming rapidly moving jets of deuterated metal that collided. These experiments were carried out in 1946 but failed to turn up any evidence of fusion reactions.", "Things can be somewhat improved by using two accelerators firing at each other instead of a single accelerator and a non-moving target. In this case, the second fuel, boron in the example above, is already ionized, so the \"ionization drag\" seen by the protons entering the solid block is eliminated.\nIn this case, however, the concept of a characteristic interaction length has no meaning as there is no solid target. Instead, for these types of systems, the typical measure is the beam luminosity, L, a term that relates the reaction cross-section σ to the rate of events. The term is normally defined as:\nL = (1/σ) (dN/dt)\nFor this discussion, we will re-arrange it to extract the collisional frequency:\ndN/dt = L σ\nEach of these collisions will produce 8.7 MeV, so multiplying the collision frequency by this energy gives the power. 
To generate N collisions one requires luminosity L, and generating L requires power, so one can calculate the amount of power needed to produce a given L through:\nL = P / (σ E)\nIf we set P to 1 MW, equivalent to a small wind turbine, this requires an L of about 10⁴² cm⁻² s⁻¹. For comparison, the world record for luminosity, set by the Large Hadron Collider in 2017, was 2.06 × 10³⁴ cm⁻² s⁻¹, more than seven orders of magnitude too low.", "To illustrate the difficulty of building a beam-target fusion system, we will consider one promising fusion fuel, the proton-boron cycle, or p-B11.\nBoron can be formed into highly purified solid blocks, and protons are easily produced by ionizing hydrogen gas. The protons can be accelerated and fired into the boron block, and the reactions will cause several alpha particles to be released. These can be collected in an electrostatic system to directly produce electricity without having to use a Rankine cycle or a similar heat-driven system. As the reactions create no neutrons directly, they have many practical advantages for safety as well.\nThe chance of a collision is maximized when the protons have an energy of about 675 keV. When they fuse, the alphas carry away a total of 8.7 MeV. Some of that energy, 0.675 MeV, must be recycled into the accelerator to produce new protons to continue the process, and the generation and acceleration process is unlikely to be much more than 50% efficient. This still leaves ample net energy to close the cycle. However, this assumes every proton causes a fusion event, which does not occur. Considering the probability of a reaction, the resultant cycle is:\nE_net = P_fuse · 8.7 MeV − 0.675 MeV / η\nwhere P_fuse is the probability that any given proton (and the boron nucleus it strikes) will undergo a reaction, and η is the efficiency of generating and accelerating the protons. Re-arranging, we can show that breaking even requires:\nP_fuse ≥ 0.675 MeV / (η · 8.7 MeV) ≈ 0.16 for η = 0.5\nThat means that to break even, the system needs at least roughly one proton in six to undergo fusion. To ensure that a proton has a chance to collide with a boron, it must travel past many boron atoms. The average distance travelled before a fusion reaction is:\nd = 1 / (n σ)\nwhere σ is the nuclear cross section between a proton and boron, n is the number density of boron atoms (set by the density ρ of the block), and d is the average distance the proton travels through the boron before undergoing a fusion reaction. For p-B11, σ is 0.9 × 10⁻²⁴ cm², ρ is 2.535 g/cm³, and thus d ~ 8 cm. However, travelling through the block causes the proton to ionize the boron atoms it passes, which slows the proton. At 0.675 MeV, this process slows the proton to sub-keV energies within a small fraction of a millimetre, many orders of magnitude less than what is required.", "Fusion takes place when atoms come into close proximity and the nuclear force pulls their nuclei together to form a single larger nucleus. Counteracting this process is the positive charge of the nuclei, which repel each other due to the electrostatic force. For fusion to occur, the nuclei must have enough energy to overcome this Coulomb barrier. The barrier is lower for atoms with less positive charge: those with the fewest protons. The nuclear force increases with more nucleons: the total number of protons and neutrons. This means that a combination of deuterium and tritium has the lowest Coulomb barrier, at about 100 keV (see requirements for fusion).\nWhen the fuel is heated to high energies the electrons dissociate from the nuclei, which are left as individual ions and electrons mixed in a gas-like plasma. Particles in a gas are distributed across a wide range of energies in a spectrum known as the Maxwell–Boltzmann distribution. 
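The beam-target and beam-beam estimates above can be checked with a short numerical sketch. It relies only on the figures quoted in the passage (a cross-section of about 0.9 × 10⁻²⁴ cm² near 675 keV, a boron density of 2.535 g/cm³, 8.7 MeV per reaction, a 1 MW target power and the LHC's 2017 luminosity record); the script and its variable names are illustrative, not drawn from any published analysis.

```python
# Back-of-envelope check of the p-B11 figures quoted above. All physical
# numbers are taken from the passage; the script itself is only a sketch.

MEV_TO_J = 1.602e-13               # joules per MeV

# --- beam-target: mean distance a proton travels before fusing ---
sigma = 0.9e-24                    # p-B11 cross section near 675 keV, cm^2
rho = 2.535                        # boron density, g/cm^3
A = 11.0                           # boron-11 molar mass, g/mol
N_A = 6.022e23                     # Avogadro's number, 1/mol

n = rho * N_A / A                  # boron number density, atoms/cm^3
d = 1.0 / (n * sigma)              # mean free path for fusion, cm
print(f"mean distance before a fusion reaction: {d:.1f} cm")      # ~8 cm

# --- beam-beam: luminosity needed for 1 MW of p-B11 fusion power ---
E_fusion = 8.7 * MEV_TO_J          # energy released per reaction, J
P = 1.0e6                          # target fusion power, W
L_needed = P / (sigma * E_fusion)  # from P = L * sigma * E
print(f"required luminosity: {L_needed:.1e} cm^-2 s^-1")          # ~1e42

L_lhc_2017 = 2.06e34               # LHC record luminosity (2017), cm^-2 s^-1
print(f"shortfall vs LHC record: {L_needed / L_lhc_2017:.1e}x")   # >1e7
```

Run as-is, it reproduces the roughly 8 cm mean free path and the ~10⁴² cm⁻² s⁻¹ luminosity requirement quoted above.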
At any given temperature the majority of the particles are at lower energies, with a \"long tail\" containing smaller numbers of particles at much higher energies. So while 100 keV represents a temperature of over one billion degrees, to produce fusion events the fuel does not need to be heated to this temperature as a whole: some reactions will occur even at lower temperatures due to the small number of high-energy particles in the mix.\nAs the fusion reactions give off large amounts of energy, and some of that energy will be deposited back in the fuel, these reactions heat the fuel. There is a critical temperature at which the rate of reactions, and thus the energy deposited, balances losses to the environment. At this point the reaction becomes self-sustaining, a point known as ignition. For D-T fuel, that temperature is between 50 and 100 million degrees. The overall rate of fusion and net energy release is dependent on the combination of temperature, density and energy confinement time, known as the fusion triple product.\nTwo primary approaches have developed to attack the fusion power problem. In the inertial confinement approach, the fuel is quickly squeezed to extremely high densities, which also increases the internal temperature through the adiabatic process. There is no attempt to maintain these conditions for any period of time; the fuel explodes outward as soon as the force is released. The confinement time is on the order of microseconds, so the temperatures and density must be very high for any appreciable amount of the fuel to undergo fusion. This approach has been successful in producing fusion reactions, but to date, the devices that can provide the compression, typically lasers, require far more energy than the reactions produce.\nThe more widely studied approach is magnetic confinement. Since the plasma is electrically charged, it will follow magnetic lines of force and a suitable arrangement of fields can keep the fuel away from the container walls. The fuel is then heated over an extended period until some of the fuel in the tail starts undergoing fusion. At the temperatures and densities that are possible using magnets the fusion process is fairly slow, so this approach requires long confinement times on the order of tens of seconds, or minutes. Confining a gas at millions of degrees for this sort of time scale has proven difficult, although modern experimental machines are approaching the conditions needed for net power production, or \"breakeven\".", "The classic example of an IEC device is a fusor. A typical fusor has two spherical metal cages, one inside another, in a vacuum. A high voltage is placed between the two cages and fuel gas is injected. The fuel ionizes and is accelerated toward the inner cage. Ions that miss the inner cage can fuse together.\nFusors are not considered part of the CBFR family, because they do not traditionally use beams.\nThere are many problems with the fusor as a fusion power reactor. One is that the electrical grids are charged to the point where there is a strong mechanical force pulling them together, which limits how small the grid materials can be. This results in a minimum rate of collisions between the ions and the grids, removing energy from the system. Additionally, these collisions spall off metal into the fuel, which causes it to rapidly lose energy through radiation. 
It may be that the smallest possible grid material is still large enough that collisions with the ions will remove energy from the system faster than fusion adds it. Beyond that, there are several loss mechanisms that suggest X-ray radiation from such a system will likewise remove energy faster than fusion can supply it.", "In 2017, the University of Maryland simulated an N-body beam system to determine if recirculating ion-beams could reach fusion conditions. Models showed that the concept was fundamentally limited because it could not reach the densities needed for fusion power.", "A similar concept is being attempted by TAE Technologies, formerly Tri-Alpha Energy (TAE), based largely on the ideas of Norman Rostoker, a professor at the University of California, Irvine. Early publications from the early 1990s show devices using conventional intersecting storage rings and refocussing arrangements, but later documents from 1996 on use a very different system firing fuel ions into a field-reversed configuration (FRC).\nThe FRC is a self-stable arrangement of plasma whose geometry looks like a mix of a vortex ring and a thick-walled tube. The magnetic fields keep the particles trapped between the tube walls, circulating rapidly. TAE intends to first produce a stable FRC, and then use accelerators to fire additional fuel ions into it so they become trapped. The ions make up for any radiative losses from the FRC, and inject more magnetic helicity into the FRC to keep its shape. The ions from the accelerators collide to produce fusion.\nWhen the concept was first revealed, it garnered several negative reviews in the journals. These issues were explained away and the construction of several small experimental devices followed. The best-reported performance of the system remains many orders of magnitude away from breakeven. In early 2019, it was announced that the system would instead be developed using conventional D-T fuels and the company changed its name to TAE.", "An attempt to avoid the grid-collision problems was made by Robert Bussard in his Polywell design. This uses cusp magnetic field arrangements to produce \"virtual electrodes\" consisting of trapped electrons. The result is to produce an accelerating field similar to one produced by the grid wires in the fusor, but with no wires. Collisions with the electrons in the virtual electrodes are possible, but unlike the fusor, these cause no losses via spalled-off metal ions.\nThe polywell's biggest flaw is its inability to hold a plasma negative for any significant amount of time. In practice, any significant amount of negative charge vanishes quickly. Further, analysis by Todd Rider in 1995 suggests that any system that has non-equilibrium plasmas will suffer rapid losses of energy via bremsstrahlung. Bremsstrahlung occurs when a charged particle is rapidly accelerated, causing it to radiate x-rays, and thereby lose energy. In the case of IEC devices, including both the fusor and polywell, the collisions between recently accelerated ions entering the reaction area and low-energy ions and electrons form a lower limit on bremsstrahlung that appears to be far higher than any possible rate of fusion.", "The Migma device is perhaps the first significant attempt to solve the recirculation problem. It used a storage system that was, in effect, an infinite number of storage rings arranged at different locations and angles. 
This was achieved not by added components or hardware, but via careful arrangement of the magnetic fields within a wide but flat cylindrical vacuum chamber. Only ions undergoing very high angle scattering events would be lost, and calculations suggested that the rate of these events was low enough that any given ion would pass through the reaction area many times before scattering out, enough to sustain a positive energy output.\nSeveral Migma devices were built and showed some promise, but the effort did not progress beyond moderately sized devices. Several theoretical concerns were raised based on space charge limit considerations, which suggested that increasing the density of the fuel to useful levels would require enormous magnets to confine it. During funding rounds the system became mired in an acrimonious debate with the various energy agencies and further development ended in the 1980s.", "Copurification procedures, such as co-immunoprecipitation, are commonly used to analyze interactions between proteins. Copurification is one method used to map the interactome of living organisms.", "Copurification in a chemical or biochemical context is the physical separation by chromatography or other purification technique of two or more substances of interest from other contaminating substances. For substances to co-purify usually implies that these substances attract each other to form a non-covalent complex, such as a protein complex.\nHowever, when fractionating mixtures, especially mixtures containing large numbers of components (for example a cell lysate), it is possible by chance that some components may copurify even though they don't form complexes. In this context the term copurification is sometimes used to denote when two biochemical activities or some other properties are isolated together after purification but it is not certain whether the sample has been purified to homogeneity (i.e., contains only one molecular species or one molecular complex). Hence these activities or properties are likely, but not guaranteed, to reside on the same molecule or in the same molecular complex.", "The Coulomb barrier, named after Coulomb's law, which is in turn named after physicist Charles-Augustin de Coulomb, is the energy barrier due to electrostatic interaction that two nuclei need to overcome so they can get close enough to undergo a nuclear reaction.", "This energy barrier is given by the electric potential energy:\nU = q₁q₂ / (4πε₀r)\nwhere\n:ε₀ is the permittivity of free space;\n:q₁, q₂ are the charges of the interacting particles;\n:r is the interaction radius.\nA positive value of U is due to a repulsive force, so interacting particles are at higher energy levels as they get closer. A negative potential energy indicates a bound state (due to an attractive force).\nThe Coulomb barrier increases with the atomic numbers (i.e. the number of protons) of the colliding nuclei:\nU = Z₁Z₂e² / (4πε₀r)\nwhere e is the elementary charge, and Z₁, Z₂ the corresponding atomic numbers.\nTo overcome this barrier, nuclei have to collide at high velocities, so their kinetic energies drive them close enough for the strong interaction to take place and bind them together.\nAccording to the kinetic theory of gases, the temperature of a gas is just a measure of the average kinetic energy of the particles in that gas. For classical ideal gases the velocity distribution of the gas particles is given by the Maxwell–Boltzmann distribution. 
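As a rough illustration of the barrier formula above, the sketch below evaluates U for light nuclei. The interaction radius used (about 3 fm, roughly where the nuclear force takes over) is an assumed, illustrative value rather than a figure given in the text.

```python
from math import pi

# Illustrative evaluation of the Coulomb barrier formula above,
# U = Z1*Z2*e^2 / (4*pi*eps0*r), for light nuclei.
# The interaction radius r (~3 fm) is an assumed illustrative value.

e = 1.602e-19      # elementary charge, C
eps0 = 8.854e-12   # permittivity of free space, F/m
r = 3.0e-15        # assumed interaction radius, m (~3 fm)

def coulomb_barrier_keV(Z1, Z2, radius=r):
    """Electrostatic potential energy of two nuclei at separation `radius`, in keV."""
    U_joule = Z1 * Z2 * e**2 / (4 * pi * eps0 * radius)
    return U_joule / e / 1e3   # J -> eV -> keV

print(f"D-T   (Z1=Z2=1):   ~{coulomb_barrier_keV(1, 1):.0f} keV")
print(f"p-B11 (Z1=1, Z2=5): ~{coulomb_barrier_keV(1, 5):.0f} keV")
# ~480 keV and ~2400 keV respectively at 3 fm. The ~100 keV figure quoted
# for D-T is lower because tunnelling lets reactions occur well below the
# full barrier height.
```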
From this distribution, the fraction of particles with a velocity high enough to overcome the Coulomb barrier can be determined.\nIn practice, temperatures needed to overcome the Coulomb barrier turned out to be smaller than expected due to quantum mechanical tunnelling, as established by Gamow. The consideration of barrier penetration through tunnelling and the speed distribution gives rise to a limited range of conditions where fusion can take place, known as the Gamow window.\nThe absence of a Coulomb barrier enabled the discovery of the neutron by James Chadwick in 1932.", "Counterflow centrifugal elutriation (CCE) is a liquid clarification technique. This method enables scientists to separate cells of different sizes. Since cell size is correlated with cell cycle stage, this method also allows the separation of cells at different stages of the cell cycle.", "The key concept is that larger cells tend to stay within the elutriation chamber while smaller cells are washed away with the flowing buffer solution (they have different sedimentation properties within the buffer), and cells at different cell cycle stages have different sedimentation properties.\nThe basic principle of separating the cells inside the CCE is the balance between the centrifugal force and the counterflow drag force. When the cells enter the elutriation chamber, all the cells will stay at the outer edge of the chamber due to the centrifugal force. Then, when the flow rate of the buffer solution increases, the solution tends to push the cells towards the middle of the chamber. When the counterflow drag force outweighs the centrifugal force, particles will be driven by the net force and leave the chamber. Smaller particles are able to leave the chamber at lower flow rates. In contrast, larger particles will stay within the elutriation chamber. Therefore, the buffer flow rate can be used to control size sorting within the elutriation chamber.", "During the separation, the cells only need to be suspended in a buffer solution and passed through the centrifuge; the whole process does not involve any chemical (e.g. staining) or physical (e.g. attachment of antibodies, lysis of the cell membrane) treatment of the cells, so the cells remain unchanged before and after the separation. Because of this, the collected cells can be used for further experiments or further separation by other techniques. Finally, the CCE relies on the centrifugal force and the counterflow drag force to separate the cells, so the speed of separation is fast. In summary:\n* Minimum effect on the cells\n* High recovery viability\n* Separated cells can be used further\n* Rapid", "As mentioned above, the CCE separates cells based on their sedimentation properties rather than specific features (e.g. surface proteins, cell shape). It cannot separate different types of cells which have similar sedimentation properties. This means that prior purification needs to be done for samples containing mixed cell types. The CCE is also limited to cells which are able to be individually suspended in the buffer solution. 
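A minimal sketch of the force balance described above, assuming Stokes drag on spherical cells; the rotor speed, chamber radius, densities, viscosity and cell sizes below are illustrative assumptions, not values from the text.

```python
import math

# Sketch of the elutriation force balance: a cell stays in the chamber while
# its sedimentation velocity in the centrifugal field exceeds the counterflow
# velocity, and is washed out once the flow is increased past that point.
# Stokes drag is assumed; all numerical values are illustrative.

omega = 2 * math.pi * 2000 / 60   # rotor speed, rad/s (2000 rpm, assumed)
R = 0.08                          # radial position of the chamber, m (assumed)
rho_cell = 1070.0                 # cell density, kg/m^3 (assumed)
rho_buf = 1005.0                  # buffer density, kg/m^3 (assumed)
mu = 1.0e-3                       # buffer viscosity, Pa*s (assumed, ~water)

def critical_counterflow(radius_m):
    """Counterflow speed (m/s) at which Stokes drag balances the centrifugal
    force on a spherical cell of the given radius."""
    return 2 * radius_m**2 * (rho_cell - rho_buf) * omega**2 * R / (9 * mu)

for r_um in (4, 6, 8, 10):            # cell radii in micrometres
    v = critical_counterflow(r_um * 1e-6)
    print(f"cell radius {r_um:2d} um -> washed out above ~{v*1e3:.2f} mm/s")
# The critical flow speed grows as radius^2, so stepping the buffer flow rate
# upward releases progressively larger cells, which is the basis of the sort.
```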
Cells which always attach to something cannot be separated by the CCE.", "* Cryogenics, the study of the production and behaviour of materials at very low temperatures and the study of producing extremely low temperatures\n* Cryoelectronics, the study of superconductivity under cryogenic conditions and its applications\n* Cryosphere, those portions of Earth's surface where water ice naturally occurs\n* Cryotron, a switch that uses superconductivity\n* Cryovolcano, a theoretical type of volcano that erupts volatiles instead of molten rock", "* Cryobiology, the branch of biology that studies the effects of low temperatures on living things\n* Cryonics, the low-temperature preservation of people who cannot be sustained by contemporary medicine\n* Cryoprecipitate, a blood-derived protein product used to treat some bleeding disorders\n* Cryotherapy, medical treatment using cold\n** Cryoablation, tissue removal using cold\n** Cryosurgery, surgery using cold\n* Cryo-electron microscopy (cryoEM), a technique that fires beams of electrons at proteins that have been frozen in solution, to deduce the biomolecules’ structure", "Cryo- is from the Ancient Greek κρύος (krúos, “ice, icy cold, chill, frost”). Uses of the prefix Cryo- include:", "Cryo-adsorption is a method used for hydrogen storage where gaseous hydrogen at cryogenic temperatures (150–60 K) is physically adsorbed on porous material, mostly activated carbon. The achievable storage density is between that of liquid-hydrogen (LH₂) storage systems and compressed-hydrogen (CGH₂) storage systems.", "Cryobiology is the study of living organisms, organs, biological tissues or biological cells at low temperatures. This knowledge is practically applied in three fields: cryonics, cryopreservation and cryosurgery. Please see cryobiology for more information.", "Cryobiology is a bimonthly peer-reviewed scientific journal covering cryobiology. It was established in 1964 and is published by Elsevier on behalf of the Society for Cryobiology, of which it is the official journal. The editor-in-chief is D.M. Rawson (University of Bedfordshire). According to the Journal Citation Reports, the journal has a 2017 impact factor of 2.050.", "Immunological effects resulting from the cryoablation of tumors were first observed in the 1960s. Beginning in the 1960s, Tanaka treated metastatic breast cancer patients with cryotherapy and reported cryoimmunological reactions resulting from the treatment. In the 1970s, a systemic immunological response from local cryoablation of prostate cancer was also clinically observed. In the 1980s, Tanaka, of Japan, continued to advance the clinical practice of cryoimmunology with combination treatments including cryochemotherapy and cryoimmunotherapy. In 1997, Russian scientists reported the efficacy of cryoimmunotherapy in inhibiting metastases in advanced cancer. In the 2000s, hospitals and medical clinics throughout China adopted cryoablation treatment for cancer on a large scale, and cryoimmunotherapy treatments have since become available to cancer patients in numerous facilities there. In the 2010s, American researchers and medical professionals started to explore cryoimmunotherapy for the systemic treatment of cancer.", "Cryoablation of a tumor induces necrosis of tumor cells. 
The immunotherapeutic effect of cryoablation of a tumor is the result of the release of intracellular tumor antigens from within the necrotized tumor cells. The released tumor antigens help activate anti-tumor T cells, which destroy remaining malignant cells. Thus, cryoablation of a tumor elicits a systemic anti-tumor immunologic response.\nThe immunostimulation resulting from cryoablation alone may not be sufficient to induce sustained, systemic regression of metastases, but it can be enhanced synergistically by combining cryoablation with immunotherapy treatment and vaccine adjuvants.\nVarious adjuvant immunotherapy and chemotherapy treatments can be combined with cryoablation to sustain a systemic anti-tumor response with regression of metastases, including:\n* Injection of immunomodulating drugs (e.g. therapeutic antibodies) and vaccine adjuvants (saponins) directly into the cryoablated, necrotized tumor lysate, immediately after cryoablation\n* Administration of autologous immune enhancement therapy, including dendritic cell therapy and CIK cell therapy", "Cryoimmunotherapy, also referred to as cryoimmunology, is an oncological treatment for various cancers that combines cryoablation of a tumor with immunotherapy treatment. In-vivo cryoablation of a tumor, alone, can induce an immunostimulatory, systemic anti-tumor response, resulting in a cancer vaccine—the abscopal effect. Thus, cryoablation of tumors is a way of achieving an autologous, in-vivo tumor lysate vaccine and treating metastatic disease. However, cryoablation alone may produce an insufficient immune response, depending on various factors, such as a high freeze rate. Combining cryotherapy with immunotherapy enhances the immunostimulating response and has synergistic effects for cancer treatment.\nAlthough cryoablation and immunotherapy have been used in oncological clinical practice for over 100 years, and can treat metastatic disease with curative intent, the combination has been largely ignored in modern practice. Only recently has cryoimmunotherapy been revisited as a treatment for various stages of disease.", "Each nerve is composed of a bundle of axons. Each axon is surrounded by the endoneurium connective tissue layer. These axons are bundled into fascicles surrounded by the perineurium connective tissue layer. Multiple fascicles are then surrounded by the epineurium, which is the outermost connective tissue layer of the nerve. The axons of myelinated nerves have a myelin sheath made up of Schwann cells that coat the axon.", "A similar procedure that uses radiofrequency energy for back pain appears to have short-term benefit, but it is unclear if it has a long-term effect.", "Cryoneurolysis, also referred to as cryoanalgesia, is a medical procedure that temporarily blocks nerve conduction along peripheral nerve pathways. The procedure, which inserts a small probe to freeze the target nerve, can facilitate complete regeneration of the structure and function of the affected nerve. Cryoneurolysis has been used to treat a variety of painful conditions.", "Classification of nerve damage was well defined by Sir Herbert Seddon and Sunderland in a system that remains in use. The classification distinguishes the forms (neurapraxia, axonotmesis and neurotmesis) and degrees of nerve injury that occur as a result of exposure to various temperatures.\nCryoneurolysis treatments that use nitrous oxide (boiling point of −88.5 °C) as the coolant fall in the range of an axonotmesis injury, or 2nd degree injury, according to the Sunderland classification system. 
Treatments of the nerve in this temperature range are reversible. Nerves treated in this temperature range experience a disruption of the axon, with Wallerian degeneration occurring distal to the site of injury. The axon and myelin sheath are affected, but all of the connective tissues (endoneurium, perineurium, and epineurium) remain intact. Following Wallerian degeneration, the axon regenerates along the original nerve path at a rate of approximately 1–2 mm per day.\nCryoneurolysis differs from cryoablation in that cryoablation treatments utilize liquid nitrogen (boiling point of −195.8 °C) as the coolant, and therefore fall into the range of a neurotmesis injury, or 3rd degree injury, according to the Sunderland classification. Treatments of the nerve in this temperature range are irreversible. Nerves treated in this temperature range experience a disruption of both the axon and the endoneurium connective tissue layer.", "The use of cold for pain relief and as an anti-inflammatory has been known since the time of Hippocrates (460–377 BC). Since then there have been numerous accounts of ice used for pain relief, including from the Ancient Egyptians and Avicenna of Persia (982–1070 AD). In 1812 Napoleon's Surgeon General noted that half-frozen soldiers from the Moscow battle were able to tolerate amputations with reduced pain, and in 1851 ice and salt mixtures were promoted by Arnott for the treatment of nerve pain. Campbell White, in 1899, was the first to use refrigerants medically, and Allington, in 1950, was the first to use liquid nitrogen for medical treatments. In 1961, Cooper et al. created an early cryoprobe that reached −190 °C using liquid nitrogen. Shortly thereafter, in 1967, an ophthalmic surgeon named Amoils used carbon dioxide and nitrous oxide to create a cryoprobe that reached −70 °C.", "Cryoneurolysis is performed with a cryoprobe, which is composed of a hollow cannula that contains a smaller inner lumen. The pressurized coolant (nitrous oxide, carbon dioxide or liquid nitrogen) travels down the lumen and expands at the end of the lumen into the tip of the hollow cannula. No coolant exits the cryoprobe. The expansion of the pressurized liquid causes the surrounding area to cool (known as the Joule–Thomson effect), and the phase change of the liquid to gas also causes the surrounding area to cool. This causes a visible iceball to form and the tissue surrounding the end of the cryoprobe to freeze. The gas form of the coolant then travels up the length of the cryoprobe and is safely expelled. The tissue surrounding the end of the cryoprobe can reach as low as −88.5 °C with nitrous oxide as the coolant, and as low as −195.8 °C with liquid nitrogen. Temperatures below −100 °C are damaging to nerves.\nThe Cryo-S Painless cryoanalgesia device is a later generation of an apparatus that has been used by practitioners in the field since 1992. The working medium for the Cryo-S Painless is carbon dioxide (−78 °C) or nitrous oxide (−89 °C). The device is controlled by a microprocessor, and all parameters are displayed and monitored on an LCD screen. Probe mode selection, cleaning and freezing can be performed using a footswitch or touch screen, which allows the site of the procedure to be kept under sterile conditions. Electronic communication (a chip system) between the connected probe and the device allows recognition of optimal operating parameters and auto-configuration to the cryoprobe's characteristics. 
Pressure and gas flow are set automatically; no manual adjustment is necessary. Cryoprobe temperature, cylinder pressure, gas flow inside the cryoprobe and procedure time are displayed during freezing. The device also provides built-in voice communication and built-in neurostimulation (sensory and motor).", "The Endocare PerCryo Percutaneous Cryoablation device utilizes argon as a coolant and can be used with four different single-cryoprobe configurations, with a diameter of either 1.7 mm (~16 gauge) or 2.4 mm (~13 gauge).\nThe Myoscience Iovera is a handheld device that uses nitrous oxide as a coolant and can be used with a three-probe configuration with a probe diameter of 0.4 mm (~27 gauge).", "Cryoprotectants operate by increasing the solute concentration in cells. However, in order to be biologically viable they must easily penetrate cells and must not be toxic to them.", "Mixtures of cryoprotectants have less toxicity and are more effective than single-agent cryoprotectants. A mixture of formamide with DMSO (dimethyl sulfoxide), propylene glycol, and a colloid was for many years the most effective of all artificially created cryoprotectants. Cryoprotectant mixtures have been used for vitrification (i.e. solidification without crystalline ice formation). Vitrification has important applications in preserving embryos, biological tissues and organs for transplant. Vitrification is also used in cryonics, in an effort to eliminate freezing damage.", "A cryoprotectant is a substance used to protect biological tissue from freezing damage (i.e. that due to ice formation). Arctic and Antarctic insects, fish and amphibians create cryoprotectants (antifreeze compounds and antifreeze proteins) in their bodies to minimize freezing damage during cold winter periods. Cryoprotectants are also used to preserve living materials in the study of biology and to preserve food products.\nFor years, glycerol has been used in cryobiology as a cryoprotectant for blood cells and bull sperm, allowing storage in liquid nitrogen at temperatures around −196 °C. However, glycerol cannot be used to protect whole organs from damage. Instead, many biotechnology companies are researching the development of other cryoprotectants more suitable for such uses. A successful discovery may eventually make possible the bulk cryogenic storage (or \"banking\") of transplantable human and xenobiotic organs. A substantial step in that direction has already occurred. Twenty-First Century Medicine has vitrified a rabbit kidney to −135 °C with their proprietary vitrification cocktail. Upon rewarming, the kidney was successfully transplanted into a rabbit, with complete functionality and viability, able to sustain the rabbit indefinitely as the sole functioning kidney.", "Some cryoprotectants function by lowering the glass transition temperature of a solution or of a material. In this way, the cryoprotectant prevents actual freezing, and the solution maintains some flexibility in a glassy phase. Many cryoprotectants also function by forming hydrogen bonds with biological molecules as water molecules are displaced. Hydrogen bonding in aqueous solutions is important for proper protein and DNA function. Thus, as the cryoprotectant replaces the water molecules, the biological material retains its native physiological structure and function, although it is no longer immersed in an aqueous environment. 
This preservation strategy is most often utilized in anhydrobiosis.", "Cold-adapted arctic frogs, such as wood frogs, and some other ectotherms in polar and subpolar regions naturally produce glucose, but southern brown tree frogs and Arctic salamanders create glycerol in their livers to reduce ice formation.\nWhen glucose is used as a cryoprotectant by arctic frogs, massive amounts of glucose are released at low temperature, and a special form of insulin allows this extra glucose to enter the cells. When the frog rewarms during spring, the extra glucose must be rapidly removed from circulation and stored.", "Conventional cryoprotectants are glycols (alcohols containing at least two hydroxyl groups), such as ethylene glycol, propylene glycol and glycerol. Ethylene glycol is commonly used as automobile antifreeze, while propylene glycol has been used to reduce ice formation in ice cream. Dimethyl sulfoxide (DMSO) is also regarded as a conventional cryoprotectant. Glycerol and DMSO have been used for decades by cryobiologists to reduce ice formation in sperm, oocytes, and embryos that are cold-preserved in liquid nitrogen. Cryoconservation of animal genetic resources is a practice that uses conventional cryoprotectants to store genetic material with the intention of future revival. Trehalose is a non-reducing sugar produced by yeasts and insects in copious amounts. Its use as a cryoprotectant in commercial systems has been patented widely.", "Cryoprotectants are also used to preserve foods. These compounds are typically sugars that are inexpensive and do not pose any toxicity concerns. For example, many (raw) frozen chicken products contain a sucrose and sodium phosphates solution in water.", "* DMSO\n* Ethylene glycol\n* Glycerol\n* 2-Methyl-2,4-pentanediol (MPD)\n* Propylene glycol\n* Sucrose\n* Trehalose\n* Heavy water [7]", "Insects most often use sugars or polyols as cryoprotectants. One species that uses cryoprotectants is Polistes exclamans (a wasp). In this species, the different levels of cryoprotectant can be used to distinguish between morphologies.", "The term cryostasis was introduced to name a reversible preservation technology for living biological objects which is based on using clathrate-forming gaseous substances under increased hydrostatic pressure and hypothermic temperatures.\nLiving tissues cooled below the freezing point of water are damaged by the dehydration of the cells as ice is formed between the cells. The mechanism of freezing damage in living biological tissues has been elucidated by Renfret.\nThe vapor pressure of the ice is lower than the vapor pressure of the water in the solution within the surrounding cells, and as heat is removed at the freezing point of the solutions, the ice crystals grow between the cells, extracting water from them. As the ice crystals grow, the volume of the cells shrinks, and the cells are crushed between the ice crystals. Additionally, as the cells shrink, the solutes inside the cells are concentrated in the remaining water, increasing the intracellular ionic strength and interfering with the organization of the proteins and other organized intracellular structures. Eventually, the solute concentration inside the cells reaches the eutectic and freezes. The final state of frozen tissues is pure ice in the former extracellular spaces, and inside the cell membranes a mixture of concentrated cellular components in ice and bound water. 
In general, this process is not reversible to the point of restoring the tissues to life.\nCryostasis utilizes clathrate-forming gases that penetrate and saturate the biological tissues, causing clathrate hydrate formation (under specific pressure–temperature conditions) inside the cells and in the extracellular matrix. Clathrate hydrates are a class of solids in which gas molecules occupy \"cages\" made up of hydrogen-bonded water molecules. These \"cages\" are unstable when empty, collapsing into a conventional ice crystal structure, but they are stabilised by the inclusion of the gas molecule within them. Most low molecular weight gases (including CH₄, H₂S, Ar, Kr, and Xe) will form a hydrate under some pressure–temperature conditions.\nClathrate formation prevents dehydration of the biological tissues, which would otherwise cause irreversible inactivation of intracellular enzymes.", "Cryosurgery is also used to treat internal and external tumors as well as tumors in the bone. To treat internal tumors, a hollow instrument called a cryoprobe is used, which is placed in contact with the tumor. Liquid nitrogen or argon gas is passed through the cryoprobe. Ultrasound or MRI is used to guide the cryoprobe and monitor the freezing of the cells. This helps in limiting damage to adjacent healthy tissues. A ball of ice crystals forms around the probe, which results in the freezing of nearby cells. When it is required to deliver gas to various parts of the tumor, more than one probe is used. After cryosurgery, the frozen tissue is either naturally absorbed by the body in the case of internal tumors, or it dissolves and forms a scab for external tumors.", "A common method of freezing lesions is by using liquid nitrogen as the cryogen. The liquid nitrogen may be applied to lesions using a variety of methods, such as dipping a cotton- or synthetic-tipped applicator in liquid nitrogen and then directly applying the cryogen onto the lesion. The liquid nitrogen can also be sprayed onto the lesion using a spray canister. The spray canister may utilize a variety of nozzles for different spray patterns. A cryoprobe, which is a metal applicator that has been cooled using liquid nitrogen, can also be directly applied onto lesions.", "Carbon dioxide is also available as a spray and is used to treat a variety of benign spots. Less frequently, doctors use carbon dioxide \"snow\" formed into a cylinder or mixed with acetone to form a slush that is applied directly to the treated tissue.", "Cryosurgery is a minimally invasive procedure, and is often preferred to other types of surgery because of its safety, ease of use, minimal pain and scarring, as well as low cost; however, as with any medical treatment, there are risks involved, primarily that of damage to nearby healthy tissue. Damage to nerve tissue is of particular concern but is rare.\nCryosurgery cannot be used on lesions that would subsequently require biopsy, as the technique destroys tissue and precludes the use of histopathology.\nMore common complications of cryosurgery include blistering and edema, which are transient. Cryosurgery may cause complications due to damage of underlying structures. Destruction of the basement membrane may cause scarring, and destruction of hair follicles can cause alopecia, or hair loss. Occasionally, hypopigmentation may occur in the area of skin treated with cryosurgery; however, this complication is usually transient and often resolves as melanocytes migrate and repigment the area over several months. 
Bleeding can also occur, which can be delayed or immediate, due to damage of underlying arteries and arterioles. Tendon rupture and cartillage necrosis can occur, particularly if cryosurgery is done over bony prominences. These complications can be avoided or minimized if freeze times of less than 30 seconds are used during cryosurgery.\nPatients undergoing cryosurgery usually experience redness and minor-to-moderate localized pain, which most of the time can be alleviated sufficiently by oral administration of mild analgesics such as ibuprofen, codeine or acetaminophen (paracetamol). Blisters may form as a result of cryosurgery, but these usually scab over and peel away within a few days.", "Warts, moles, skin tags, solar keratoses, molluscum, Mortons neuroma and small skin cancers are candidates for cryosurgical treatment. Several internal disorders are also treated with cryosurgery, including liver cancer, prostate cancer, lung cancer, oral cancers, cervical disorders and, more commonly in the past, hemorrhoids. Soft tissue conditions such as plantar fasciitis (joggers heel) and fibroma (benign excrescence of connective tissue) can be treated with cryosurgery.\nCryosurgery works by taking advantage of the destructive force of freezing temperatures on cells. When their temperature sinks beyond a certain level ice crystals begin forming inside the cells and, because of their lower density, eventually tear apart those cells. Further harm to malignant growth will result once the blood vessels supplying the affected tissue begin to freeze.\nCryosurgery is used to treat a variety of benign skin lesions including:\n* Acne\n* Dermatofibroma\n* Hemangioma\n* Keloid (hypertrophic scar)\n* Molluscum contagiosum\n* Myxoid cyst\n* Pyogenic granuloma\n* Seborrheic keratoses\n* Skin tags\n* Warts (including anogenital warts)\nCryosurgery may also be used to treat low risk skin cancers such as basal cell carcinoma and squamous cell carcinoma but a biopsy should be obtained first to confirm the diagnosis, determine the depth of invasion and characterize other high risk histologic features.", "A mixture of dimethyl ether and propane is used in some \"freeze spray\" preparations such as Dr. Scholl's Freeze Away. The mixture is stored in an aerosol spray type container at room temperature and drops to when dispensed. The mixture is often dispensed into a straw with a cotton-tipped swab. Similar products may use tetrafluoroethane or other substances.", "Cryosurgery (with cryo from the Ancient Greek ) is the use of extreme cold in surgery to destroy abnormal or diseased tissue; thus, it is the surgical application of cryoablation.\nCryosurgery has been historically used to treat a number of diseases and disorders, especially a variety of benign and malignant skin conditions.", "Recent advances in technology have allowed for the use of argon gas to drive ice formation using a principle known as the Joule-Thomson effect. This gives physicians excellent control of the ice and minimizes complications using ultra-thin 17 gauge cryoneedles.", ";Cryosurgical systems\nA number of medical supply companies have developed cryogen delivery systems for cryosurgery. Most are based on the use of liquid nitrogen, although some employ the use of proprietary mixtures of gases that combine to form the cryogen.", "In many materials, the Curie–Weiss law fails to describe the susceptibility in the immediate vicinity of the Curie point, since it is based on a mean-field approximation. 
Instead, there is a critical behavior of the form χ ∝ 1/(T − T_C)^γ, with the critical exponent γ. However, at temperatures T ≫ T_C the expression of the Curie–Weiss law still holds true, but with T_C replaced by a temperature θ that is somewhat higher than the actual Curie temperature. Some authors call θ the Weiss constant to distinguish it from the temperature of the actual Curie point.", "In magnetism, the Curie–Weiss law describes the magnetic susceptibility χ of a ferromagnet in the paramagnetic region above the Curie temperature:\nχ = C / (T − T_C)\nwhere C is a material-specific Curie constant, T is the absolute temperature, and T_C is the Curie temperature, the latter two measured in kelvin. The law predicts a singularity in the susceptibility at T = T_C. Below this temperature, the ferromagnet has a spontaneous magnetization. The name is given after Pierre Curie and Pierre Weiss.", "According to the Bohr–van Leeuwen theorem, when statistical mechanics and classical mechanics are applied consistently, the thermal average of the magnetization is always zero. Magnetism cannot be explained without quantum mechanics; that is, it cannot be explained without taking into account that matter consists of atoms. Some semi-classical approaches, using a simple atomic model, are listed next, as they are easy to understand and relate to even though they are not perfectly correct.\nThe magnetic moment of a free atom is due to the orbital angular momentum and spin of its electrons and nucleus. When the atoms are such that their shells are completely filled, they do not have any net magnetic dipole moment in the absence of an external magnetic field. When a field is present, it distorts the trajectories (a classical concept) of the electrons so that the applied field is opposed, as predicted by Lenz's law. In other words, the net magnetic dipole induced by the external field is in the opposite direction, and such materials are repelled by it. These are called diamagnetic materials.\nSometimes an atom has a net magnetic dipole moment even in the absence of an external magnetic field. The contributions of the individual electrons and nucleus to the total angular momentum do not cancel each other. This happens when the shells of the atoms are not fully filled (Hund's rule). A collection of such atoms, however, may not have any net magnetic moment, as the dipoles are not aligned. An external magnetic field may serve to align them to some extent and develop a net magnetic moment per volume. Such alignment is temperature dependent, as thermal agitation acts to disorient the dipoles. Such materials are called paramagnetic.\nIn some materials, the atoms (with net magnetic dipole moments) can interact with each other to align themselves even in the absence of any external magnetic field when the thermal agitation is low enough. The alignment can be parallel (ferromagnetism) or anti-parallel. In the anti-parallel case, the dipole moments may or may not cancel each other (antiferromagnetism, ferrimagnetism).", "We take a very simple situation in which each atom can be approximated as a two-state system. The thermal energy is so low that the atom is in the ground state. In this ground state, the atom is assumed to have no net orbital angular momentum but only one unpaired electron, giving it a spin of one half. In the presence of an external magnetic field, the ground state will split into two states having an energy difference proportional to the applied field. 
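As a numerical aside to the Curie–Weiss passages above, the following minimal Python sketch evaluates χ = C/(T − T_C) against the simple Curie law χ = C/T; the Curie constant and Curie temperature used here are illustrative placeholders, not data from the text.

```python
# Sketch: Curie-Weiss susceptibility chi = C / (T - Tc) in the paramagnetic
# region T > Tc, compared with the plain Curie law chi = C / T.
# C and Tc are illustrative placeholder values, not measured material data.

def curie_weiss_susceptibility(temperature, curie_constant, curie_temperature):
    """Curie-Weiss law; only meaningful above the Curie temperature."""
    if temperature <= curie_temperature:
        raise ValueError("Curie-Weiss form applies only for T > Tc")
    return curie_constant / (temperature - curie_temperature)

def curie_susceptibility(temperature, curie_constant):
    """Curie law for an ideal, non-interacting paramagnet (Tc = 0)."""
    return curie_constant / temperature

if __name__ == "__main__":
    C, Tc = 1.0, 1043.0  # Tc of the order of iron's (~1043 K), for illustration
    for T in (1100.0, 1200.0, 1500.0, 2000.0):
        print(f"T = {T:6.0f} K   chi_CW = {curie_weiss_susceptibility(T, C, Tc):.4e}"
              f"   chi_Curie = {curie_susceptibility(T, C):.4e}")
    # chi_CW diverges as T approaches Tc from above (the predicted singularity),
    # while far above Tc the two expressions converge.
```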
The spin of the unpaired electron is parallel to the field in the higher energy state and anti-parallel in the lower one.\nA density matrix, , is a matrix that describes a quantum system in a mixed state, a statistical ensemble of several quantum states (here several similar 2-state atoms). This should be contrasted with a single state vector that describes a quantum system in a pure state. The expectation value of a measurement, , over the ensemble is . In terms of a complete set of states, , one can write\nVon Neumann's equation tells us how the density matrix evolves with time.\nIn equilibrium,\none has , and the allowed density matrices are \nThe canonical ensemble has \nwhere\nFor the 2-state system, we can write\nHere is the gyromagnetic ratio.\nHence , and\nFrom which", "So far, we have assumed that the atoms do not interact with each other. Even though this is a reasonable assumption in the case of diamagnetic and paramagnetic substances, this assumption fails in the case of ferromagnetism, where the spins of the atom try to align with each other to the extent permitted by the thermal agitation. In this case, we have to consider the Hamiltonian of the ensemble of the atom. Such a Hamiltonian will contain all the terms described above for individual atoms and terms corresponding to the interaction among the pairs of the atom. Ising model is one of the simplest approximations of such pairwise interaction.\nHere the two atoms of a pair are at . Their interaction is determined by their distance vector . In order to simplify the calculation, it is often assumed that interaction happens between neighboring atoms only and is a constant. The effect of such interaction is often approximated as a mean field and, in our case, the Weiss field.", "A magnetic moment which is present even in the absence of the external magnetic field is called spontaneous magnetization. Materials with this property are known as ferromagnets, such as iron, nickel, and magnetite. However, when these materials are heated up, at a certain temperature they lose their spontaneous magnetization, and become paramagnetic. This threshold temperature below which a material is ferromagnetic is called the Curie temperature and is different for each material.\nThe Curie–Weiss law describes the changes in a materials magnetic susceptibility, , near its Curie temperature. The magnetic susceptibility is the ratio between the materials magnetization and the applied magnetic field.", "The Curie–Weiss law is an adapted version of Curie's law, which for a paramagnetic material may be written in SI units as follows, assuming :\nHere μ is the permeability of free space; M the magnetization (magnetic moment per unit volume), is the magnetic field, and C the material-specific Curie constant:\nwhere is Boltzmann's constant, the number of magnetic atoms (or molecules) per unit volume, the Landé g-factor, the Bohr magneton, the angular momentum quantum number.\nFor the Curie-Weiss Law the total magnetic field is where is the Weiss molecular field constant and then\nwhich can be rearranged to get\nwhich is the Curie-Weiss Law\nwhere the Curie temperature is", "In the presence of a uniform external magnetic field along the z-direction, the Hamiltonian of the atom changes by\nwhere are positive real numbers which are independent of which atom we are looking at but depend on the mass and the charge of the electron. corresponds to individual electrons of the atom.\nWe apply second order perturbation theory to this situation. 
This is justified by the fact that even for highest presently attainable field strengths, the shifts in the energy level due to is quite small w.r.t. atomic excitation energies. Degeneracy of the original Hamiltonian is handled by choosing a basis which diagonalizes in the degenerate subspaces. Let be such a basis for the state of the atom (rather the electrons in the atom). Let be the change in energy in . So we get\nIn our case we can ignore and higher order terms. We get\nIn case of diamagnetic material, the first two terms are absent as they don't have any angular momentum in their ground state. In case of paramagnetic material all the three terms contribute.", "Researchers are able to take the tissue from a donor or cadaver, lyse and kill the cells within the tissue without damaging the extracellular components, and finish with a product that is the natural ECM scaffold that has the same physical and biochemical functions of the natural tissue. After acquiring the ECM scaffold, scientists can recellularize the tissue with potent stem or progenitor cells that will differentiate into the original type of tissue. By removing the cells from a donor tissue, the immunogenic antibodies from the donor will be removed. The progenitor cells can be taken from the host, therefore they will not have an adverse response to the tissue. This process of decellularizing tissues and organs is still being developed, but the exact process of taking a tissue from a donor and removing all the cellular components is considered to be the decellularization process. The steps to go from a decellularized ECM scaffold to a functional organ is under the umbrella of recellularization. Because of the diverse applications of tissue in the human body, decellularization techniques have to be tailored to the specific tissue being exercised on. The researched methods of decellularization include physical, chemical, and enzymatic treatments. Though some methods are more commonly used, the exact combination of treatments is variable based on the tissue’s origin and what it is needed for.\nAs far as introducing the different liquidized chemicals and enzymes to an organ or tissue, perfusion and immersion decellularization techniques have been used. Perfusion decellularization is applicable when an extensive vasculature system is present in the organ or tissue. It is crucial for the ECM scaffold to be decellularized at all levels, and evenly throughout the structure. Because of this requirement, vascularized tissues can have chemicals and enzymes perfused through the present arteries, veins, and capillaries. Under this mechanism and proper physiological conditions, treatments can diffuse equally to all of the cells within the organ. The treatments can be removed through the veins at the end of the process. Cardiac and pulmonary decellularization often uses this process of decellularization to introduce the treatments because of their heavily vascularized networks. Immersion decellularization is accomplished through the submersion of a tissue in chemical and enzymatic treatments. This process is more easily accomplished than perfusion, but is limited to thin tissues with a limited vascular system.", "The most common physical methods used to lyse, kill, and remove cells from the matrix of a tissue through the use of temperature, force and pressure, and electrical disruption. Temperature methods are often used in a rapid freeze-thaw mechanism. 
By quickly freezing a tissue, microscopic ice crystals form around the plasma membrane and the cell is lysed. After lysing the cells, the tissue can be further exposed to liquidized chemicals that degrade and wash out the undesirable components. Temperature methods conserve the physical structure of the ECM scaffold, but are best handled by thick, strong tissues.\nDirect force of pressure to a tissue will guarantee disruption of the ECM structure, so pressure is commonly used. Pressure decellularization involves the controlled use of hydrostatic pressure applied to a tissue or organ. This is done best at high temperatures to avoid unmonitored ice crystal formation that could damage the scaffold. Electrical disruption of the plasma membrane is another option to lyse the cells housed in a tissue or organ. By exposing a tissue to electrical pulses, micropores are formed at the plasma membrane. The cells eventually turn to death after their homeostatic electrical balance is ruined through the applied stimulus. This electrical process is documented as Non-thermal irreversible electroporation (NTIRE) and is limited to small tissues and the limited possibilities of inducing an electric current in vivo.", "The proper combination of chemicals is selected for decellularization depending on the thickness, extracellular matrix composition, and intended use of the tissue or organ. For example, enzymes would not be used on a collagenous tissue because they disrupt the connective tissue fibers. However, when collagen is not present in a high concentration or needed in the tissue, enzymes can be a viable option for decellularization. The chemicals used to kill and remove the cells include acids, alkaline treatments, ionic detergents, non-ionic detergents, and zwitterionic detergents.\nThe ionic detergent, sodium dodecyl sulfate (SDS), is commonly used because of its high efficacy for lysing cells without significant damage to the ECM. Detergents act effectively to lyse the cell membrane and expose the contents to further degradation. After SDS lyses the cell membrane, endonucleases and exonucleases degrade the genetic contents, while other components of the cell are solubilized and washed out of the matrix. SDS is commonly used even though it has a tendency to slightly disrupt the ECM structure. Alkaline and acid treatments can be effective companions with an SDS treatment due to their ability to degrade nucleic acids and solubilize cytoplasmic inclusions.\nThe most well known non-ionic detergent is Triton X-100, which is popular because of its ability to disrupt lipid-lipid and lipid-protein interactions. Triton X-100 does not disrupt protein-protein interactions, which is beneficial to keeping the ECM intact. EDTA is a chelating agent that binds calcium, which is a necessary component for proteins to interact with one another. By making calcium unavailable, EDTA prevents the integral proteins between cells from binding to one another. EDTA is often used with trypsin, an enzyme that acts as a protease to cleave the already existing bonds between integral proteins of neighboring cells within a tissue. Together, the EDTA-Trypsin combination make a good team for decellularizing tissues.", "Enzymes used in decellularization treatments are used to break the bonds and interactions between nucleic acids, interacting cells through neighboring proteins, and other cellular components. Lipases, thermolysin, galactosidase, nucleases, and trypsin have all been used in the removal of cells. 
After a cell is lysed with a detergent, acid, physical pressure, etc., endonucleases and exonucleases can begin the degradation of the genetic material. Endonucleases cleave DNA and RNA in the middle of sequences. Benzoase, an endonuclease, produces multiple small nuclear fragments that can be further degraded and removed from the ECM scaffold. Exonucleases act at the end of DNA sequences to cleave the phosphodiester bonds and further degrade the nucleic acid sequences.\nEnzymes such as trypsin act as proteases that cleave the interactions between proteins. Although trypsin can have adverse effects of collagen and elastin fibers of the ECM, using it in a time-sensitive manner controls any potential damage it could cause on the extracellular fibers. Dispase is used to prevent undesired aggregation of cells, which is beneficial in promoting their separating from the ECM scaffold. Experimentation has shown dispase to be most effective on the surface of a thin tissue, such as a lung in pulmonary tissue regeneration. To successfully remove deep cells of a tissue with dispase, mechanical agitation is often included in the process.\nCollagenase is only used when the ECM scaffold product does not require an intact collagen structure. Lipases are commonly used when decellularized skin grafts are needed. Lipase acids function in decellularizing dermal tissues through delipidation and cleaving the interactions between heavily lipidized cells. The enzyme, α-galactosidase is a relevant treatment when removing the Gal epitope antigen from cell surfaces.", "A natural ECM scaffold provides the necessary physical and biochemical environment to facilitate the growth and specialization of potent progenitor and stem cells. Acellular matrices have been isolated in vitro and in vivo in a number of different tissues and organs. Decellularized ECM can be used to prepare bio-ink for 3D bioprinting. The most applicable success from decellularized tissues has come from symmetrical tissues that have less specialization, such as bone and dermal grafts; however, research and success are ongoing at the organ level.\nAcellular dermal matrices have been successful in a number of different applications. For example, skin grafts are used in cosmetic surgery and burn care. The decellularized skin graft provides mechanical support to the damaged area while supporting the development of host-derived connective tissue. Cardiac tissue has clinical success in developing human valves from natural ECM matrices. A technique known as the Ross procedure uses an acellular heart valve to replace a defective valve, allowing native cells to repopulate a newly functioning valve. Decellularized allografts have been critical in bone grafts that function in bone reconstruction and replacing of deformed bones in patients.\nThe limits to myocardial tissue engineering come from the ability to immediately perfuse and seed and implemented heart into a patient. Though the ECM scaffold maintains the protein and growth factors of the natural tissue, the molecular level specialization has not yet been harnessed by researchers using decellularized heart scaffolds. Better success at using a whole organ from decellularization techniques has been found in pulmonary research. Scientists have been able to regenerate whole lungs in vitro from rat lungs using perfusion-decellularization. By seeding the matrix with fetal rat lung cells, a functioning lung was produced. 
The in vitro-produced lung was successfully implemented into a rat, which attests to the possibilities of translating an in vitro produced organ into a patient.\nOther success for decellularization has been found in small intestinal submucosa (SIS), renal, hepatic, and pancreatic engineering. Because it is a thin material, the SIS matrix can be decellularized through immersing the tissue in chemical and enzymatic treatments. Renal tissue engineering is still developing, but cadaveric kidney matrices have been able to support development of potent fetal kidney cells. Pancreatic engineering is a testament to the molecular specificity of organs. Scientists have not yet been able to produce an entirely functioning pancreas, but they have had success in producing an organ that functions at specific segments. For example, diabetes in rats was shown to decrease by seeding a pancreatic matrix at specific sites. The future applications of decellularized tissue matrix is still being discovered and is considered one of the most hopeful areas in regenerative research.", "Decellularization (also spelled decellularisation in British English) is the process used in biomedical engineering to isolate the extracellular matrix (ECM) of a tissue from its inhabiting cells, leaving an ECM scaffold of the original tissue, which can be used in artificial organ and tissue regeneration. Organ and tissue transplantation treat a variety of medical problems, ranging from end organ failure to cosmetic surgery. One of the greatest limitations to organ transplantation derives from organ rejection caused by antibodies of the transplant recipient reacting to donor antigens on cell surfaces within the donor organ. Because of unfavorable immune responses, transplant patients suffer a lifetime taking immunosuppressing medication. Stephen F. Badylak pioneered the process of decellularization at the McGowan Institute for Regenerative Medicine at the University of Pittsburgh. This process creates a natural biomaterial to act as a scaffold for cell growth, differentiation and tissue development. By recellularizing an ECM scaffold with a patient’s own cells, the adverse immune response is eliminated. Nowadays, commercially available ECM scaffolds are available for a wide variety of tissue engineering. Using peracetic acid to decellularize ECM scaffolds have been found to be false and only disinfects the tissue.\nWith a wide variety of decellularization-inducing treatments available, combinations of physical, chemical, and enzymatic treatments are carefully monitored to ensure that the ECM scaffold maintains the structural and chemical integrity of the original tissue. Scientists can use the acquired ECM scaffold to reproduce a functional organ by introducing progenitor cells, or adult stem cells (ASCs), and allowing them to differentiate within the scaffold to develop into the desired tissue. The produced organ or tissue can be transplanted into a patient. In contrast to cell surface antibodies, the biochemical components of the ECM are conserved between hosts, so the risk of a hostile immune response is minimized. Proper conservation of ECM fibers, growth factors, and other proteins is imperative to the progenitor cells differentiating into the proper adult cells. The success of decellularization varies based on the components and density of the applied tissue and its origin. 
The applications to the decellularizing method of producing a biomaterial scaffold for tissue regeneration are present in cardiac, dermal, pulmonary, renal, and other types of tissues. Complete organ reconstruction is still in the early levels of development.", "DEHPA is used in the solvent extraction of uranium salts from solutions containing the sulfate, chloride, or perchlorate anions. This extraction is known as the “Dapex procedure” (dialkyl phosphoric extraction). Reminiscent of the behaviours of carboxylic acids, DEHPA generally exists as a hydrogen-bonded dimer in the non-polar organic solvents. For practical applications, the solvent, often called a diluent, is typically kerosene. A complex is formed from two equivalents of the conjugate base of DEHPA and one uranyl ion. Complexes of the formula (UO)[(OP(OR)] also form, and at high concentrations of uranium, polymeric complexes may form.\nThe extractability of Fe is similar to that of uranium, so it must be reduced to Fe before the extraction.", "The extractive capabilities of DEHPA can be increased through synergistic effects by the addition of other organophosphorus compounds. Tributyl phosphate is often used, as well as dibutyl-, diamyl-, and dihexylphosphonates. The synergistic effects are thought to occur by the addition of the trialkylphosphate to the uranyl-DEHPA complex by hydrogen bonding. The synergistic additive may also react with the DEHPA, competing with the uranyl extraction, resulting in a decrease in extraction efficiency past a concentration specific to the compound.", "DEHPA can be used to extract lanthanides (rare earths) from aqeuous solutions, it is commonly used in the lanthanide sector as an extraction agent. In general the distribution ratio of the lanthanides increase as their atomic number increases due to the lanthanide contraction. It is possible by bringing a mixture of lanthanides in a counter current mixer settler bank into contact with a suitable concentration of nitric acid to selectively strip (back extract) some of the lanthanides while leaving the others still in the DEHPA based organic layer. In this way selective stripping of the lanthanides can be used to make a separation of a mixture of the lanthanides into mixtures containing fewer lanthanides. Under ideal conditions this can be used to obtain a single lanthanide from a mixture of many lanthanides.\nIt is common to use DEHPA in an aliphatic kerosene which is best considered to be a mixture of long chain alkanes and cycloalkanes. When used in an aromatic hydrocarbon diluent the lanthanide distribution ratios are lower. It has been shown that it is possible to use a second generation biodiesel which was made by the hydrotreatment of vegetable oil. It has been reported that Neste's HVO100 is a suitable diluent for DEHPA when calcium, lanthanum and neodymium are extracted from aqueous nitric acid", "Alternative organophosphorus compounds include trioctylphosphine oxide and bis(2,4,4-trimethyl pentyl)phosphinic acid. Secondary, tertiary, and quaternary amines have also been used for some uranium extractions. Compared to phosphate extractants, amines are more selective for uranium, extract the uranium faster, and are easily stripped with a wider variety of reagents. However, the phosphates are more tolerant of solids in the feed solution and show faster phase separation.", "Di(2-ethylhexyl)phosphoric acid (DEHPA or HDEHP) is an organophosphorus compound with the formula (CHO)POH. 
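As a hedged illustration of the solvent-extraction arithmetic behind Dapex-style separations with DEHPA described above, the sketch below computes the fraction of metal transferred to the organic phase from an assumed distribution ratio and phase-volume ratio; the numbers and function names are hypothetical, not values reported in the text.

```python
def fraction_extracted(distribution_ratio, org_to_aq_volume_ratio):
    """Fraction of metal moved to the organic phase in one equilibrium contact,
    given D = [M]_org / [M]_aq and the organic/aqueous volume ratio."""
    x = distribution_ratio * org_to_aq_volume_ratio
    return x / (1.0 + x)

def fraction_extracted_n_contacts(distribution_ratio, org_to_aq_volume_ratio, n_contacts):
    """Cumulative fraction extracted after n successive contacts with fresh solvent."""
    remaining = (1.0 / (1.0 + distribution_ratio * org_to_aq_volume_ratio)) ** n_contacts
    return 1.0 - remaining

if __name__ == "__main__":
    D = 20.0      # hypothetical distribution ratio for the target metal
    ratio = 1.0   # equal organic and aqueous phase volumes
    print(f"one contact   : {fraction_extracted(D, ratio):.3%} extracted")
    print(f"three contacts: {fraction_extracted_n_contacts(D, ratio, 3):.4%} extracted")
    # A species with a much lower D (e.g. a lanthanide earlier in the series)
    # would remain largely in the aqueous phase, which is the basis of the
    # selective stripping described above.
```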
The colorless liquid is a diester of phosphoric acid and 2-ethylhexanol. It is used in the solvent extraction of uranium, vanadium and the rare-earth metals.", "The uranium is then stripped from the DEHPA/kerosene solution with hydrochloric acid, hydrofluoric acid, or carbonate solutions. Sodium carbonate solutions effectively strip uranium from the organic layer, but the sodium salt of DEHPA is somewhat soluble in water, which can lead to loss of the extractant.", "DEHPA is prepared through the reaction of phosphorus pentoxide and 2-ethylhexanol:\n:4 CHOH + PO → 2 [(CHO)PO(OH)]O\n:[(CHO)PO(OH)]O + CHOH → (CHO)PO(OH) + (CHO)PO(OH)\nThese reactions produce a mixture of mono-, di-, and trisubstituted phosphates, from which DEHPA can be isolated based on solubility.", "Dilution is the process of decreasing the concentration of a solute in a solution, usually simply by mixing with more solvent, such as adding more water to the solution. To dilute a solution means to add more solvent without the addition of more solute. The resulting solution is thoroughly mixed to ensure that all parts of the solution are identical. \nThe same direct relationship applies to gases and vapors diluted in air, for example, although thorough mixing of gases and vapors may not be as easily accomplished.\nFor example, if there are 10 grams of salt (the solute) dissolved in 1 litre of water (the solvent), this solution has a certain salt concentration (molarity). If one adds 1 litre of water to this solution, the salt concentration is reduced. The diluted solution still contains 10 grams of salt (0.171 moles of NaCl).\nMathematically this relationship can be shown by the equation:\nc1V1 = c2V2\nwhere\n*c1 = initial concentration or molarity\n*V1 = initial volume\n*c2 = final concentration or molarity\n*V2 = final volume", "The basic room purge equation can be used only for purge scenarios. In a scenario where a liquid continuously evaporates from a container in a ventilated room, a differential equation has to be used:\nV dC/dt = G − Q′C\nwhere the ventilation rate has been adjusted by a mixing factor K, so that Q′ = Q/K, and\n*C = concentration of a gas\n*G = generation rate\n*V = room volume\n*Q′ = adjusted ventilation rate of the volume", "The dilution in welding terms is defined as the weight of the base metal melted divided by the total weight of the weld metal. For example, if we have a dilution of 0.40, the fraction of the weld metal that came from the consumable electrode is 0.60.", "The basic room purge equation is used in industrial hygiene. It determines the time required to reduce a known vapor concentration existing in a closed space to a lower vapor concentration. The equation can only be applied when the purged volume of vapor or gas is replaced with \"clean\" air or gas. For example, the equation can be used to calculate the time required at a certain ventilation rate to reduce a high carbon monoxide concentration in a room.\nSometimes the equation is also written as:\nD_t = (V / Q) × ln(C_initial / C_final), where\n*D_t = time required; the unit of time used is the same as is used for Q\n*V = air or gas volume of the closed space or room in cubic feet, cubic metres or litres\n*Q = ventilation rate into or out of the room in cubic feet per minute, cubic metres per hour or litres per second\n*C_initial = initial concentration of a vapor inside the room measured in ppm\n*C_final = final reduced concentration of the vapor inside the room in ppm", "DPN is a direct write technique so it can be used for top-down and bottom-up lithography applications. 
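The dilution relation c1V1 = c2V2 and the room-purge expressions in the passages above can be tied together in a short worked sketch; the purge numbers below are illustrative, while the dilution numbers repeat the 10 g NaCl example from the text.

```python
import math

def diluted_concentration(c_initial, v_initial, v_solvent_added):
    """Simple dilution: c1*V1 = c2*V2, with V2 = V1 + added solvent volume."""
    return c_initial * v_initial / (v_initial + v_solvent_added)

def purge_time(volume, ventilation_rate, c_initial, c_final, mixing_factor=1.0):
    """Basic room purge time: D_t = (V / Q') * ln(C_initial / C_final),
    where Q' = Q / K is the ventilation rate adjusted by a mixing factor K."""
    q_adjusted = ventilation_rate / mixing_factor
    return (volume / q_adjusted) * math.log(c_initial / c_final)

if __name__ == "__main__":
    # Dilution example from the text: 0.171 mol NaCl in 1 L, then add 1 L of water.
    print(f"diluted molarity: {diluted_concentration(0.171, 1.0, 1.0):.4f} mol/L")

    # Purge example (illustrative numbers): a 100 m^3 room ventilated at 20 m^3/h,
    # reducing carbon monoxide from 500 ppm to 25 ppm with a mixing factor of 3.
    hours = purge_time(100.0, 20.0, 500.0, 25.0, mixing_factor=3.0)
    print(f"purge time: {hours:.1f} h")
```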
In top-down work, the tips are used to deliver an etch resist to a surface, which is followed by a standard etching process. In bottom-up applications, the material of interest is delivered directly to the surface via the tips.", "Dip pen nanolithography (DPN) is a scanning probe lithography technique where an atomic force microscope (AFM) tip is used to create patterns directly on a range of substances with a variety of inks. A common example of this technique is exemplified by the use of alkane thiolates to imprint onto a gold surface. This technique allows surface patterning on scales of under 100 nanometers. DPN is the nanotechnology analog of the dip pen (also called the quill pen), where the tip of an atomic force microscope cantilever acts as a \"pen\", which is coated with a chemical compound or mixture acting as an \"ink\", and put in contact with a substrate, the \"paper\".\nDPN enables direct deposition of nanoscale materials onto a substrate in a flexible manner. Recent advances have demonstrated massively parallel patterning using two-dimensional arrays of 55,000 tips. \nApplications of this technology currently range through chemistry, materials science, and the life sciences, and include such work as ultra high density biological nanoarrays, and additive photomask repair.", "The uncontrollable transfer of a molecular \"ink\" from a coated AFM tip to a substrate was first reported by Jaschke and Butt in 1995, but they erroneously concluded that alkanethiols could not be transferred to gold substrates to form stable nanostructures. A research group at Northwestern University, US led by Chad Mirkin independently studied the process and determined that under the appropriate conditions, molecules could be transferred to a wide variety of surfaces to create stable chemically-adsorbed monolayers in a high resolution lithographic process they termed \"DPN\". Mirkin and his coworkers hold the patents on this process, and the patterning technique has expanded to include liquid \"inks\". It is important to note that \"liquid inks\" are governed by a very different deposition mechanism when compared to \"molecular inks\".", "Molecular inks are typically composed of small molecules that are coated onto a DPN tip and are delivered to the surface through a water meniscus. In order to coat the tips, one can either vapor coat the tip or dip the tips into a dilute solution containing the molecular ink. If one dip-coats the tips, the solvent must be removed prior to deposition. The deposition rate of a molecular ink is dependent on the diffusion rate of the molecule, which is different for each molecule. The size of the feature is controlled by the tip/surface dwell-time (ranging from milliseconds to seconds) and the size of the water meniscus, which is determined by the humidity conditions (assuming the tip's radius of curvature is much smaller than the meniscus). \n*Water meniscus mediated (exceptions do exist)\n*Nanoscale feature resolution (50 nm to 2000 nm)\n*No multiplexed depositions\n*Each molecular ink is limited to its corresponding substrate", "In order to define a good DPN application, it is important to understand what DPN can do that other techniques cannot. Direct-write techniques, like contact printing, can pattern multiple biological materials but it cannot create features with subcellular resolution. 
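A rough, hedged sketch of the dwell-time dependence described for molecular inks above: if ink transfer through the meniscus delivers a roughly constant areal deposition rate, the dot area grows about linearly with tip/surface dwell time and the radius as its square root. The rate constant here is made up for illustration; real DPN transport depends on the ink, substrate, and humidity.

```python
import math

def estimated_dot_radius_nm(dwell_time_s, areal_rate_nm2_per_s):
    """Estimate the radius of a deposited monolayer dot assuming the patterned
    area grows linearly with dwell time:  pi * r^2 ~ rate * t."""
    return math.sqrt(areal_rate_nm2_per_s * dwell_time_s / math.pi)

if __name__ == "__main__":
    rate = 1.0e4  # nm^2 per second, hypothetical effective deposition rate
    for dwell in (0.01, 0.1, 1.0, 10.0):  # seconds, spanning ms-to-s dwell times
        r = estimated_dot_radius_nm(dwell, rate)
        print(f"dwell {dwell:5.2f} s  ->  radius ~ {r:6.1f} nm (diameter ~ {2*r:6.1f} nm)")
    # Longer dwell times and larger menisci (higher humidity) give larger features,
    # consistent with the 50 nm - 2000 nm resolution range quoted above.
```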
Many high-resolution lithography methods can pattern at sub-micrometre resolution, but these require high-cost equipment that were not designed for biomolecule deposition and cell culture. Microcontact printing can print biomolecules at ambient conditions, but it cannot pattern multiple materials with nanoscale registry.", "Liquid inks can be any material that is liquid at deposition conditions. The liquid deposition properties are determined by the interactions between the liquid and the tip, the liquid and the surface, and the viscosity of the liquid itself. These interactions limit the minimum feature size of the liquid ink to about 1 micrometre, depending on the contact angle of the liquid. Higher viscosities offer greater control over feature size and are desirable. Unlike molecular inks, it is possible to perform multiplexed depositions using a carrier liquid. For example, using a viscous buffer, it is possible to directly deposit multiple proteins simultaneously.\n*1–10 micrometre feature resolution\n*Multiplexed depositions\n*Less restrictive ink/surface requirements\n*Direct deposition of high viscosity materials", "* Directed Placement – Directly print various materials onto existing nano and microstructures with nanoscale registry\n* Direct Write – Maskless creation of arbitrary patterns with feature resolutions from as small as 50 nm and as large as 10 micrometres\n* Biocompatible – Subcellular to nanoscale resolution at ambient deposition conditions\n* Scalable – Force independent, allowing for parallel depositions", "A two dimensional array of (PDMS) deformable transparent pyramid shaped tips are coated with an opaque layer of metal. The metal is then removed from the very tip of the pyramid, leaving an aperture for light to pass through. The array is then scanned across a surface and light is directed to the base of each pyramid via a micromirror array, which funnels the light toward the tip. Depending on the distance between the tips and the surface, light interacts with the surface in a near-field or far-field fashion, allowing sub-diffraction scale features (100 nm features with 400 nm light) or larger features to be fabricated.", "DPN evolved directly from AFM so it is not a surprise that people often assume that any commercial AFM can perform DPN experiments. In fact, DPN does not require an AFM, and an AFM does not necessarily have real DPN capabilities. There is an excellent analogy with scanning electron microscopy (SEM) and electron beam (E-beam) lithography. E-beam evolved directly from SEM technology and both use a focused electron beam, but it is not possible to perform modern E-beam lithography experiments on a SEM that lacks the proper lithography hardware and software components.\nIt is also important to consider one of the unique characteristics of DPN, namely its force independence. With virtually all ink/substrate combinations, the same feature size will be patterned no matter how hard the tip is pressing down against the surface. As long as robust SiN tips are used, there is no need for complicated feedback electronics, no need for lasers, no need for quad photo-diodes, and no need for an AFM.", "The criticism most often directed at DPN is the patterning speed. The reason for this has more to do with how it is compared to other techniques rather than any inherent weaknesses. 
For example, the soft lithography method, microcontact printing (μCP), is the current standard for low cost, bench-top micro and nanoscale patterning, so it is easy to understand why DPN is compared directly to microcontact printing. The problem is that the comparisons are usually based upon applications that are strongly suited to μCP, instead of comparing them to some neutral application. μCP has the ability to pattern one material over a large area in a single stamping step, just as photolithography can pattern over a large area in a single exposure. Of course DPN is slow when it is compared to the strength of another technique. DPN is a maskless direct write technique that can be used to create multiple patterns of varying size, shape, and feature resolution, all on a single substrate. No one would try to apply microcontact printing to such a project because then it would never be worth the time and money required to fabricate each master stamp for each new pattern. Even if they did, microcontact printing would not be capable of aligning multiple materials from multiple stamps with nanoscale registry. The best way to understand this misconception is to think about the different ways to apply photolithography and e-beam lithography. No one would try to use e-beam to solve a photolithography problem and then claim e-beam to be \"too slow\". Directly compared to photolithography's large area patterning capabilities, e-beam lithography is slow and yet, e-beam instruments can be found in every lab and nanofab in the world. The reason for this is because e-beam has unique capabilities that cannot be matched by photolithography, just as DPN has unique capabilities that cannot be matched by microcontact printing.", "A heated probe tip version of Dip Pen Lithography has also been demonstrated, thermal Dip Pen Lithography (tDPL), to deposit nanoparticles. Semiconductor, magnetic, metallic, or optically active nanoparticles can be written to a substrate via this method. The particles are suspended in a Poly(methyl methacrylate) (PMMA) or equivalent polymer matrix, and heated by the probe tip until they begin to flow. The probe tip acts as a nano-pen, and can pattern nanoparticles into a programmed structure. Depending on the size of the nanoparticles, resolutions of 78–400 nm were attained. An O plasma etch can be used to remove the PMMA matrix, and in the case of Iron Oxide nanoparticles, further reduce the resolution of lines to 10 nm. Advantages unique to tDPL are that it is a maskless additive process that can achieve very narrow resolutions, it can also easily write many types of nanoparticles without requiring special solution preparation techniques. However there are limitations to this method. The nanoparticles must be smaller than the radius of gyration of the polymer, in the case of PMMA this is about 6 nm. Additionally, as nanoparticles increase in size viscosity increases, slowing the process. For a pure polymer deposition speeds of 200 μm/s are achievable. 
Adding nanoparticles reduces speeds to 2 μm/s, but is still faster than regular Dip Pen Lithography.", "DPN is emerging as a powerful research tool for manipulating cells at subcellular resolution\n* Stem cell differentiation\n* Subcellular drug delivery\n* Cell sorting\n* Surface gradients\n* Subcellular ECM protein patterns\n* Cell adhesion", "*Protein, peptide, and DNA patterning\n*Hydrogels\n*Sol gels\n*Conductive inks\n*Lipids\n*Silanes (liquid phase) written to glass or silicon", "The following are some examples of how DPN is being applied to potential products. \n# Biosensor Functionalization – Directly place multiple capture domains on a single biosensor device\n# Nanoscale Sensor Fabrication – Small, high-value sensors that can detect multiple targets\n# Nanoscale Protein Chips – High-density protein arrays with increased sensitivity", "Direct energy conversion (DEC) or simply direct conversion converts a charged particle's kinetic energy into a voltage. It is a scheme for power extraction from nuclear fusion.", "In the middle of the 1960s direct energy conversion was proposed as a method for capturing the energy from the exhaust gas in a fusion reactor. This would generate a direct current of electricity. Richard F. Post at the Lawrence Livermore National Laboratory was an early proponent of the idea. Post reasoned that capturing the energy would require five steps: (1) Ordering the charged particles into linear beam. (2) Separation of positives and negatives. (3) Separating the ions into groups, by their energy. (4) Gathering these ions as they touch collectors. (5) Using these collectors as the positive side in a circuit. Post argued that the efficiency was theoretically determined by the number of collectors.", "The Venetian blind design is a type of electrostatic direct collector. The Venetian Blind design name comes from the visual similarity of the ribbons to venetian window blinds. Designs in the early 1970s by William Barr and Ralph Moir used repeating metal ribbons at a specified angle as the ion collector plates. These metal ribbon-like surfaces are more transparent to ions going forward than to ions going backward. Ions pass through surfaces of successively increasing potential until they turn and start back, along a parabolic trajectory. They then see opaque surfaces and are caught. Thus ions are sorted by energy with high-energy ions being caught on high-potential electrodes.\nWilliam Barr and Ralph Moir then ran a group which did a series of direct energy conversion experiments through the late 1970s and early 1980s. The first experiments used beams of positives and negatives as fuel, and demonstrated energy capture at a peak efficiency of 65 percent and a minimum efficiency of 50 percent. The following experiments involved a true plasma direct converter that was tested on the Tandem Mirror Experiment (TMX), an operating magnetic mirror fusion reactor. In the experiment, the plasma moved along diverging field lines, spreading it out and converting it into a forward moving beam with a Debye length of a few centimeters. Suppressor grids then reflect the electrons, and collector anodes recovered the ion energy by\nslowing them down and collecting them at high-potential plates. This machine demonstrated an energy capture efficiency of 48 percent. 
However, Marshall Rosenbluth argued that keeping the plasma's neutral charge over the very short Debye length distance would be very challenging in practice, though he said that this problem would not occur in every version of this technology.\nThe Venetian Blind converter can operate with 100 to 150 keV D-T plasma, with an efficiency of about 60% under conditions compatible with economics, and an upper technical conversion efficiency up to 70% ignoring economic limitations.", "A second type of electrostatic converter initially proposed by Post, then developed by Barr and Moir, is the Periodic Electrostatic Focusing concept. Like the Venetian Blind concept, it is also a direct collector, but the collector plates are disposed in many stages along the longitudinal axis of an electrostatic focusing channel. As each ion is decelerated along the channel toward zero energy, the particle becomes \"over-focused\" and is deflected sideways from the beam, then collected. The Periodic Electrostatic Focusing converter typically operates with a 600 keV D-T plasma (as low as 400 keV and up to 800 keV) with efficiency of about 60% under conditions compatible with economics, and an upper technical conversion efficiency up to 90% ignoring economic limitations.", "From the 1960s through the 1970s, methods have been developed to extract electrical energy directly from a hot gas (a plasma) in motion within a channel fitted with electromagnets (producing a transverse magnetic field), and electrodes (connected to load resistors). Charge carriers (free electrons and ions) incoming with the flow are then separated by the Lorentz force and an electric potential difference can be retrieved from pairs of connected electrodes. Shock tubes used as pulsed MHD generators were for example able to produce several megawatts of electricity in channels the size of a beverage can.", "Original direct converters were designed to extract the energy carried by 100 to 800 keV ions produced by D-T fusion reactions. Those electrostatic converters are not suitable for higher energy product ions above 1 MeV generated by other fusion fuels like the D-He or the p-B aneutronic fusion reactions.\nA much shorter device than the Traveling-Wave Direct Energy Converter has been proposed in 1997 and patented by Tri Alpha Energy, Inc. as an Inverse Cyclotron Converter (ICC).\nThe ICC is able to decelerate the incoming ions based on experiments made in 1950 by Felix Bloch and Carson D. Jeffries, in order to extract their kinetic energy. The converter operates at 5 MHz and requires a magnetic field of only 0.6 tesla. The linear motion of fusion product ions is converted to circular motion by a magnetic cusp. Energy is collected from the charged particles as they spiral past quadrupole electrodes. More classical electrostatic collectors would also be used for particles with energy less than 1 MeV. The Inverse Cyclotron Converter has a maximum projected efficiency of 90%.", "A significant amount of the energy released by fusion reactions is composed of electromagnetic radiation, essentially X-rays due to Bremsstrahlung. 
Those X-rays can not be converted into electric power with the various electrostatic and magnetic direct energy converters listed above, and their energy is lost.\nWhereas more classical thermal conversion has been considered with the use of a radiation/boiler/energy exchanger where the X-ray energy is absorbed by a working fluid at temperatures of several thousand degrees, more recent research done by companies developing nuclear aneutronic fusion reactors, like Lawrenceville Plasma Physics (LPP) with the Dense Plasma Focus, and Tri Alpha Energy, Inc. with the Colliding Beam Fusion Reactor (CBFR), plan to harness the photoelectric and Auger effects to recover energy carried by X-rays and other high-energy photons. Those photoelectric converters are composed of X-ray absorber and electron collector sheets nested concentrically in an onion-like array. Indeed, since X-rays can go through far greater thickness of material than electrons can, many layers are needed to absorb most of the X-rays. LPP announces an overall efficiency of 81% for the photoelectric conversion scheme.", "In the early 2000s, research was undertaken by Sandia National Laboratories, Los Alamos National Laboratory, The University of Florida, Texas A&M University and General Atomics to use direct conversion to extract energy from fission reactions, essentially, attempting to extract energy from the linear motion of charged particles coming off a fission reaction.", "In addition to converters using electrodes, pure inductive magnetic converters have also been proposed by Lev Artsimovich in 1963, then Alan Frederic Haught and his team from United Aircraft Research Laboratories in 1970, and Ralph Moir in 1977.\nThe magnetic compression-expansion direct energy converter is analogous to the internal combustion engine. As the hot plasma expands against a magnetic field, in a manner similar to hot gases expanding against a piston, part of the energy of the internal plasma is inductively converted to an electromagnetic coil, as an EMF (voltage) in the conductor.\nThis scheme is best used with pulsed devices, because the converter then works like a \"magnetic four-stroke engine\":\n# Compression: A column of plasma is compressed by a magnetic field that acts like a piston.\n# Thermonuclear burn: The compression heats the plasma to the thermonuclear ignition temperature.\n# Expansion/Power: The expansion of fusion reaction products (charged particles) increases the plasma pressure and pushes the magnetic field outward. A voltage is induced and collected in the electromagnetic coil.\n# Exhaust/Refuel: After expansion, the partially burned fuel is flushed out, and new fuel in the form of gas is introduced and ionized; and the cycle starts again.\nIn 1973, a team from Los Alamos and Argonne laboratories stated that the thermodynamic efficiency of the magnetic direct conversion cycle from alpha-particle energy to work is 62%.", "In 1992, a Japan–U.S. joint-team proposed a novel direct energy conversion system for 14.7 MeV protons produced by D-He fusion reactions, whose energy is too high for electrostatic converters.\nThe conversion is based on a Traveling-Wave Direct Energy Converter (TWDEC). A gyrotron converter first guides fusion product ions as a beam into a 10-meter long microwave cavity filled with a 10-tesla magnetic field, where 155 MHz microwaves are generated and converted to a high voltage DC output through rectennas.\nThe Field-Reversed Configuration reactor ARTEMIS in this study was designed with an efficiency of 75%. 
The traveling-wave direct converter has a maximum projected efficiency of 90%.", "This method consists of selecting the cell type of interest, usually with antibiotic resistance. For this purpose, the source material cells are modified to contain an antibiotic resistance cassette under a target cell type-specific promoter. Only cells committed to the lineage of interest survive the selection.", "Cell differentiation involves a transition from a proliferative mode toward a differentiation mode. Directed differentiation consists of mimicking developmental (embryonic) decisions in vitro, using stem cells as the source material. For this purpose, pluripotent stem cells (PSCs) are cultured in controlled conditions involving specific substrates or extracellular matrices that promote cell adhesion and differentiation, and defined culture media compositions. A limited number of signaling factors that control cell differentiation, such as growth factors or small molecules, is applied sequentially or in a combinatorial manner, at varying dosages and exposure times. Proper differentiation of the cell type of interest is verified by analyzing cell type-specific markers, gene expression profiles, and functional assays.", "Directed differentiation provides a potentially unlimited and manipulable source of cells and tissues.\nSome applications are impaired by the immature phenotype of the pluripotent stem cell (PSC)-derived cell type, which limits the physiological and functional studies possible.\nSeveral application domains have emerged:", "This method consists of exposing the cells to specific signaling pathway modulators and manipulating cell culture conditions (environmental or exogenous) to mimic the natural sequence of developmental decisions and produce a given cell type or tissue. A drawback of this approach is the need for a good understanding of how the cell type of interest is formed.", "This method, also known as transdifferentiation or direct conversion, consists of overexpressing one or several factors, usually transcription factors, introduced into the cells. The starting material can be either pluripotent stem cells (PSCs) or a differentiated cell type such as fibroblasts. The principle was first demonstrated in 1987 with the myogenic factor MyoD.\nA drawback of this approach is the introduction of foreign nucleic acid into the cells and the forced expression of transcription factors whose effects are not fully understood.", "The potentially unlimited source of cells and tissues may have direct applications for tissue engineering, cell replacement and transplantation following acute injuries, and reconstructive surgery. These applications are limited to the cell types that can be differentiated efficiently and safely from human PSCs with proper organogenesis. Decellularized organs are also being used as tissue scaffolds for organogenesis. Source material can be normal healthy cells from another donor (heterologous transplantation) or genetically corrected cells from the same patient (autologous).\nConcerns about patient safety have been raised due to the possibility of contaminating undifferentiated cells. The first clinical trial using hESC-derived cells was in 2011. The first clinical trial using hiPSC-derived cells started in 2014 in Japan.", "PSC-derived cells from patients are used in vitro to recreate specific pathologies. The specific cell type affected in the pathology forms the basis of the model. 
For example, motoneurons are used to study spinal muscular atrophy (SMA) and cardiomyocytes are used to study arrhythmia. This can allow for a better understanding of the pathogenesis and the development of new treatments through drug discovery. Immature PSC-derived cell types can be matured in vitro by various strategies, such as in vitro ageing, to model age-related disease in vitro.\nMajor diseases being modeled with PSC-derived cells include amyotrophic lateral sclerosis (ALS), Alzheimer's disease (AD), Parkinson's disease (PD), fragile X syndrome (FXS), Huntington's disease (HD), Down syndrome, spinal muscular atrophy (SMA), muscular dystrophies, cystic fibrosis, long QT syndrome, and type I diabetes.", "Cell types differentiated from pluripotent stem cells (PSCs) are being evaluated as preclinical in vitro models of human diseases. Human cell types in a dish provide an alternative to traditional preclinical assays using animal models, human immortalized cells, or primary cultures from biopsies, all of which have their limitations. Clinically relevant cell types, i.e. the cell types affected in disease, are a major focus of research; these include hepatocytes, Langerhans islet beta-cells, cardiomyocytes and neurons. Drug screens are performed on miniaturized cell cultures in multiwell plates or on a chip.", "Directed differentiation is primarily applied to pluripotent stem cells (PSCs) of mammalian origin, in particular mouse and human cells, for biomedical research applications. Since the discovery of embryonic stem (ES) cells (1981) and induced pluripotent stem (iPS) cells (2006), source material has been potentially unlimited.\nHistorically, embryonic carcinoma (EC) cells have also been used. Fibroblasts or other differentiated cell types have been used for direct reprogramming strategies.", "* co-culture with stromal cells or feeder cells, and on specific culture substrates: support cells and matrices provide developmental-like environmental signals.\n* 3D cell aggregate formation, termed embryoid bodies (EBs): the aggregates aim at mimicking early embryonic development and instructing cell differentiation.\n* culture in the presence of fetal bovine serum, and removal of pluripotency factors.", "During differentiation, pluripotent cells make a number of developmental decisions to generate first the three germ layers (ectoderm, mesoderm and endoderm) of the embryo and intermediate progenitors, followed by subsequent decisions or checkpoints, giving rise to all the body's mature tissues. The differentiation process can be modeled as a sequence of binary decisions based on probabilistic or stochastic models. Developmental biology and embryology provide the basic knowledge of cell type differentiation through mutation analysis, lineage tracing, embryo micro-manipulation and gene expression studies. Cell differentiation and tissue organogenesis involve a limited set of developmental signaling pathways. It is thus possible to direct cell fate by controlling cell decisions through extracellular signaling, mimicking developmental signals.", "Directed differentiation is a bioengineering methodology at the interface of stem cell biology, developmental biology and tissue engineering. It essentially harnesses the potential of stem cells by constraining their differentiation in vitro toward a specific cell type or tissue of interest. Stem cells are by definition pluripotent, able to differentiate into several cell types such as neurons, cardiomyocytes, hepatocytes, etc. 
Efficient directed differentiation requires a detailed understanding of lineage and cell fate decisions, often provided by developmental biology.", "For basic science, notably developmental biology and cell biology, PSC-derived cells allow fundamental questions to be studied in vitro at the molecular and cellular levels, questions that would otherwise be extremely difficult or impossible to study in vivo for technical and ethical reasons, such as human embryonic development. In particular, differentiating cells are amenable to quantitative and qualitative studies.\nMore complex processes can also be studied in vitro, and the formation of organoids, including cerebral organoids, optic cups and kidney organoids, has been described.", "Northwestern University researchers announced a solution to a primary problem of DSSCs, that of difficulties in using and containing the liquid electrolyte and the consequent relatively short useful life of the device. This is achieved through the use of nanotechnology and the conversion of the liquid electrolyte to a solid. The current efficiency is about half that of silicon cells, but the cells are lightweight and potentially of much lower cost to produce.", "DSSCs degrade when exposed to light. In 2014 air infiltration of the commonly-used amorphous Spiro-MeOTAD hole-transport layer was identified as the primary cause of the degradation, rather than oxidation. The damage could be avoided by the addition of an appropriate barrier.\nThe barrier layer may include UV stabilizers and/or UV absorbing luminescent chromophores (which emit at longer wavelengths that may be reabsorbed by the dye) and antioxidants to protect and improve the efficiency of the cell.", "Several important measures are used to characterize solar cells. The most obvious is the total amount of electrical power produced for a given amount of solar power shining on the cell. Expressed as a percentage, this is known as the solar conversion efficiency. Electrical power is the product of current and voltage, so the maximum values for these measurements are important as well, J_sc and V_oc respectively. Finally, in order to understand the underlying physics, the \"quantum efficiency\" is used to compare the chance that one photon (of a particular energy) will create one electron.\nIn quantum efficiency terms, DSSCs are extremely efficient. Due to their \"depth\" in the nanostructure there is a very high chance that a photon will be absorbed, and the dyes are very effective at converting them to electrons. Most of the small losses that do exist in DSSCs are due to conduction losses in the TiO2 and the clear electrode, or optical losses in the front electrode. The overall quantum efficiency for green light is about 90%, with the \"lost\" 10% being largely accounted for by the optical losses in the top electrode. The quantum efficiency of traditional designs varies, depending on their thickness, but is about the same as that of the DSSC.\nIn theory, the maximum voltage generated by such a cell is simply the difference between the (quasi-)Fermi level of the TiO2 and the redox potential of the electrolyte, about 0.7 V under solar illumination conditions (V_oc). That is, if an illuminated DSSC is connected to a voltmeter in an \"open circuit\", it would read about 0.7 V. In terms of voltage, DSSCs offer slightly higher V_oc than silicon, about 0.7 V compared to 0.6 V.
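Combining the open-circuit voltage quoted above with the photocurrent and peak efficiency figures quoted below, the overall power conversion efficiency follows from the product of photocurrent, photovoltage and fill factor. The fill factor FF is not given in the text, so the value of 0.75 used here is only an indicative assumption:
:η = P_out / P_in = (J_sc × V_oc × FF) / P_in
:η ≈ (20 mA/cm2 × 0.7 V × 0.75) / (100 mW/cm2) ≈ 10.5%
which is consistent with the roughly 11% peak efficiency cited below for current DSSCs.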
The difference in V_oc is fairly small, so real-world differences are dominated by current production, J_sc.\nAlthough the dye is highly efficient at converting absorbed photons into free electrons in the TiO2, only photons absorbed by the dye ultimately produce current. The rate of photon absorption depends upon the absorption spectrum of the sensitized TiO2 layer and upon the solar flux spectrum. The overlap between these two spectra determines the maximum possible photocurrent. Typically used dye molecules generally have poorer absorption in the red part of the spectrum compared to silicon, which means that fewer of the photons in sunlight are usable for current generation. These factors limit the current generated by a DSSC: for comparison, a traditional silicon-based solar cell offers about 35 mA/cm2, whereas current DSSCs offer about 20 mA/cm2.\nOverall peak power conversion efficiency for current DSSCs is about 11%. The current record for prototypes is 15%.", "A group of researchers at the École Polytechnique Fédérale de Lausanne (EPFL) has reportedly increased the thermostability of DSC by using an amphiphilic ruthenium sensitizer in conjunction with a quasi-solid-state gel electrolyte. The stability of the device matches that of a conventional inorganic silicon-based solar cell. The cell sustained heating for 1,000 h at 80 °C.\nThe group had previously prepared a ruthenium amphiphilic dye Z-907 (cis-Ru(H2dcbpy)(dnbpy)(NCS)2, where the ligand H2dcbpy is 4,4′-dicarboxylic acid-2,2′-bipyridine and dnbpy is 4,4′-dinonyl-2,2′-bipyridine) to increase dye tolerance to water in the electrolytes. In addition, the group also prepared a quasi-solid-state gel electrolyte with a 3-methoxypropionitrile (MPN)-based liquid electrolyte that was solidified by a photochemically stable fluorine polymer, polyvinylidenefluoride-co-hexafluoropropylene (PVDF-HFP).\nThe use of the amphiphilic Z-907 dye in conjunction with the polymer gel electrolyte in DSC achieved an energy conversion efficiency of 6.1%. More importantly, the device was stable under thermal stress and soaking with light. The high conversion efficiency of the cell was sustained after heating for 1,000 h at 80 °C, maintaining 94% of its initial value. After accelerated testing in a solar simulator for 1,000 h of light-soaking at 55 °C (100 mW/cm2) the efficiency had decreased by less than 5% for cells covered with an ultraviolet absorbing polymer film. These results are well within the limits required of traditional inorganic silicon solar cells.\nThe enhanced performance may arise from a decrease in solvent permeation across the sealant due to the application of the polymer gel electrolyte. The polymer gel electrolyte is quasi-solid at room temperature, and becomes a viscous liquid (viscosity: 4.34 mPa·s) at 80 °C compared with the traditional liquid electrolyte (viscosity: 0.91 mPa·s).
The much improved stabilities of the device under both thermal stress and soaking with light have never before been seen in DSCs, and they match the durability criteria applied to solar cells for outdoor use, which makes these devices viable for practical application.", "The first successful solid-hybrid dye-sensitized solar cells were reported.\nTo improve electron transport in these solar cells, while maintaining the high surface area needed for dye adsorption, two researchers have designed alternate semiconductor morphologies, such as arrays of nanowires and a combination of nanowires and nanoparticles, to provide a direct path to the electrode via the semiconductor conduction band. Such structures may provide a means to improve the quantum efficiency of DSSCs in the red region of the spectrum, where their performance is currently limited.\nIn August 2006, to prove the chemical and thermal robustness of the 1-ethyl-3-methylimidazolium tetracyanoborate solar cell, the researchers subjected the devices to heating at 80 °C in the dark for 1000 hours, followed by light soaking at 60 °C for 1000 hours. After dark heating and light soaking, 90% of the initial photovoltaic efficiency was maintained – the first time such excellent thermal stability has been observed for a liquid electrolyte that exhibits such a high conversion efficiency. In contrast to silicon solar cells, whose performance declines with increasing temperature, the dye-sensitized solar-cell devices were only negligibly influenced when the operating temperature was increased from ambient to 60 °C.", "Wayne Campbell at Massey University, New Zealand, has experimented with a wide variety of organic dyes based on porphyrin. In nature, porphyrin is the basic building block of hemoglobin in animals and of the structurally related chlorophyll in plants. He reports efficiency on the order of 5.6% using these low-cost dyes.", "A group of researchers at Georgia Tech made dye-sensitized solar cells with a higher effective surface area by wrapping the cells around a quartz optical fiber. The researchers removed the cladding from optical fibers, grew zinc oxide nanowires along the surface, treated them with dye molecules, and surrounded the fibers with an electrolyte and a metal film that carries electrons off the fiber. The cells are six times more efficient than a zinc oxide cell with the same surface area. Photons bounce inside the fiber as they travel, so there are more chances to interact with the solar cell and produce more current. These devices only collect light at the tips, but future fiber cells could be made to absorb light along the entire length of the fiber, which would require a coating that is conductive as well as transparent. Max Shtein of the University of Michigan said a sun-tracking system would not be necessary for such cells, which would also work on cloudy days when light is diffuse.", "DSSCs are currently the most efficient third-generation (2005 Basic Research Solar Energy Utilization 16) solar technology available. Other thin-film technologies are typically between 5% and 13%, and traditional low-cost commercial silicon panels operate between 14% and 17%. This makes DSSCs attractive as a replacement for existing technologies in \"low density\" applications like rooftop solar collectors, where the mechanical robustness and light weight of the glass-less collector is a major advantage.
They may not be as attractive for large-scale deployments where higher-cost higher-efficiency cells are more viable, but even small increases in the DSSC conversion efficiency might make them suitable for some of these roles as well.\nThere is another area where DSSCs are particularly attractive. The process of injecting an electron directly into the TiO is qualitatively different from that occurring in a traditional cell, where the electron is \"promoted\" within the original crystal. In theory, given low rates of production, the high-energy electron in the silicon could re-combine with its own hole, giving off a photon (or other form of energy) which does not result in current being generated. Although this particular case may not be common, it is fairly easy for an electron generated by another atom to combine with a hole left behind in a previous photoexcitation.\nIn comparison, the injection process used in the DSSC does not introduce a hole in the TiO, only an extra electron. Although it is energetically possible for the electron to recombine back into the dye, the rate at which this occurs is quite slow compared to the rate that the dye regains an electron from the surrounding electrolyte. Recombination directly from the TiO to species in the electrolyte is also possible although, again, for optimized devices this reaction is rather slow. On the contrary, electron transfer from the platinum coated electrode to species in the electrolyte is necessarily very fast.\nAs a result of these favorable \"differential kinetics\", DSSCs work even in low-light conditions. DSSCs are therefore able to work under cloudy skies and non-direct sunlight, whereas traditional designs would suffer a \"cutout\" at some lower limit of illumination, when charge carrier mobility is low and recombination becomes a major issue. The cutoff is so low they are even being proposed for indoor use, collecting energy for small devices from the lights in the house.\nA practical advantage which DSSCs share with most thin-film technologies, is that the cell's mechanical robustness indirectly leads to higher efficiencies at higher temperatures. In any semiconductor, increasing temperature will promote some electrons into the conduction band \"mechanically\". The fragility of traditional silicon cells requires them to be protected from the elements, typically by encasing them in a glass box similar to a greenhouse, with a metal backing for strength. Such systems suffer noticeable decreases in efficiency as the cells heat up internally. DSSCs are normally built with only a thin layer of conductive plastic on the front layer, allowing them to radiate away heat much easier, and therefore operate at lower internal temperatures.", "Researchers at the École Polytechnique Fédérale de Lausanne and at the Université du Québec à Montréal claim to have overcome two of the DSC's major issues:\n* \"New molecules\" have been created for the electrolyte, resulting in a liquid or gel that is transparent and non-corrosive, which can increase the photovoltage and improve the cell's output and stability.\n* At the cathode, platinum was replaced by cobalt sulfide, which is far less expensive, more efficient, more stable and easier to produce in the laboratory.", "The major disadvantage to the DSSC design is the use of the liquid electrolyte, which has temperature stability problems. At low temperatures the electrolyte can freeze, halting power production and potentially leading to physical damage. 
Higher temperatures cause the liquid to expand, making sealing the panels a serious problem. Another disadvantage is that costly ruthenium (dye), platinum (catalyst) and conducting glass or plastic (contact) are needed to produce a DSSC. A third major drawback is that the electrolyte solution contains volatile organic compounds (VOCs), solvents which must be carefully sealed as they are hazardous to human health and the environment. This, along with the fact that the solvents permeate plastics, has precluded large-scale outdoor application and integration into flexible structures.\nReplacing the liquid electrolyte with a solid has been a major ongoing field of research. Recent experiments using solidified melted salts have shown some promise, but currently suffer from higher degradation during continued operation, and are not flexible.", "Dye-sensitised solar cells operate as a photoanode (n-DSC), where the photocurrent results from electron injection by the sensitized dye. Photocathodes (p-DSCs) operate in an inverse mode compared to the conventional n-DSC, where dye-excitation is followed by rapid electron transfer from a p-type semiconductor to the dye (dye-sensitized hole injection, instead of electron injection). Such p-DSCs and n-DSCs can be combined to construct tandem solar cells (pn-DSCs), and the theoretical efficiency of tandem DSCs is well beyond that of single-junction DSCs.\nA standard tandem cell consists of one n-DSC and one p-DSC in a simple sandwich configuration with an intermediate electrolyte layer. The n-DSC and p-DSC are connected in series, which implies that the resulting photocurrent will be controlled by the weakest photoelectrode, whereas the photovoltages are additive (this series behaviour is summarized in the short relations below). Thus, photocurrent matching is very important for the construction of highly efficient tandem pn-DSCs. However, unlike in n-DSCs, fast charge recombination following dye-sensitized hole injection usually results in low photocurrents in p-DSCs and thus hampers the efficiency of the overall device.\nResearchers have found that using dyes comprising a perylenemonoimide (PMI) as the acceptor and an oligothiophene coupled to triphenylamine as the donor greatly improves the performance of p-DSCs by reducing the charge recombination rate following dye-sensitized hole injection. The researchers constructed a tandem DSC device with NiO on the p-DSC side and TiO2 on the n-DSC side. Photocurrent matching was achieved through adjustment of the NiO and TiO2 film thicknesses to control the optical absorptions and therefore match the photocurrents of both electrodes. The energy conversion efficiency of the device is 1.91%, which exceeds the efficiency of its individual components, but is still much lower than that of high-performance n-DSC devices (6%–11%). The results are still promising since the tandem DSC was in itself rudimentary. The dramatic improvement in performance in p-DSCs can eventually lead to tandem devices with much greater efficiency than lone n-DSCs.\nAs previously mentioned, using a solid-state electrolyte has several advantages over a liquid system (such as no leakage and faster charge transport), which has also been realised for dye-sensitised photocathodes.
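The series behaviour of the tandem pn-DSC described above can be summarized compactly; FF here denotes the fill factor of the combined device and is introduced only for this illustration:
:J_pn = min(J_n, J_p)   (the photocurrent is limited by the weaker photoelectrode)
:V_pn ≈ V_n + V_p   (the photovoltages add)
:η_pn = (J_pn × V_pn × FF) / P_in
These relations make explicit why photocurrent matching between the n-DSC and p-DSC governs the efficiency of the tandem device.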
Using electron transporting materials such as PCBM, TiO2 and ZnO instead of the conventional liquid redox couple electrolyte, researchers have managed to fabricate solid state p-DSCs (p-ssDSCs), aiming for solid state tandem dye sensitized solar cells, which have the potential to achieve much greater photovoltages than a liquid tandem device.", "In a conventional n-type DSSC, sunlight enters the cell through the transparent SnO2:F top contact, striking the dye on the surface of the TiO2. Photons striking the dye with enough energy to be absorbed create an excited state of the dye, from which an electron can be \"injected\" directly into the conduction band of the TiO2. From there it moves by diffusion (as a result of an electron concentration gradient) to the clear anode on top.\nMeanwhile, the dye molecule has lost an electron and the molecule will decompose if another electron is not provided. The dye strips one from iodide in the electrolyte below the TiO2, oxidizing it into triiodide. This reaction occurs quite quickly compared to the time that it takes for the injected electron to recombine with the oxidized dye molecule, preventing this recombination reaction that would effectively short-circuit the solar cell.\nThe triiodide then recovers its missing electron by mechanically diffusing to the bottom of the cell, where the counter electrode re-introduces the electrons after they have flowed through the external circuit.", "An article published in Nature Materials demonstrated cell efficiencies of 8.2% using a new solvent-free liquid redox electrolyte consisting of a melt of three salts, as an alternative to using organic solvents as an electrolyte solution. Although the efficiency with this electrolyte is less than the 11% being delivered using the existing iodine-based solutions, the team is confident the efficiency can be improved.", "In DSSCs, electrodes consist of sintered semiconducting nanoparticles, mainly TiO2 or ZnO. These nanoparticle DSSCs rely on trap-limited diffusion through the semiconductor nanoparticles for electron transport. This limits the device efficiency since it is a slow transport mechanism. Recombination is more likely to occur at longer wavelengths of radiation. Moreover, sintering of nanoparticles requires a high temperature of about 450 °C, which restricts the fabrication of these cells to robust, rigid solid substrates. It has been shown that the efficiency of a DSSC increases if the sintered nanoparticle electrode is replaced by a specially designed electrode possessing an exotic nanoplant-like morphology.", "One last area that has been actively studied is the synergy of different materials in promoting superior electroactive performance. Whether through various charge transport materials, electrochemical species, or morphologies, exploiting the synergetic relationship between different materials has paved the way for even newer counter electrode materials.\nIn 2016, Lu et al. mixed nickel cobalt sulfide microparticles with reduced graphene oxide (rGO) nanoflakes to create the counter electrode. Lu et al. discovered not only that the rGO acted as a co-catalyst in accelerating the triiodide reduction, but also that the microparticles and rGO had a synergistic interaction that decreased the charge transfer resistance of the overall system.
Although the efficiency of this system was slightly lower than that of its platinum analog (efficiency of the NCS/rGO system: 8.96%; efficiency of the Pt system: 9.11%), it provided a platform on which further research can be conducted.", "A dye-sensitized solar cell (DSSC, DSC, DYSC or Grätzel cell) is a low-cost solar cell belonging to the group of thin film solar cells. It is based on a semiconductor formed between a photo-sensitized anode and an electrolyte, a photoelectrochemical system. The modern version of a dye solar cell, also known as the Grätzel cell, was originally co-invented in 1988 by Brian O'Regan and Michael Grätzel at UC Berkeley and this work was later developed by the aforementioned scientists at the École Polytechnique Fédérale de Lausanne (EPFL) until the publication of the first high efficiency DSSC in 1991. Michael Grätzel was awarded the 2010 Millennium Technology Prize for this invention.\nThe DSSC has a number of attractive features; it is simple to make using conventional roll-printing techniques, is semi-flexible and semi-transparent, which offers a variety of uses not applicable to glass-based systems, and most of the materials used are low-cost. In practice it has proven difficult to eliminate a number of expensive materials, notably platinum and ruthenium, and the liquid electrolyte presents a serious challenge to making a cell suitable for use in all weather. Although its conversion efficiency is less than that of the best thin-film cells, in theory its price/performance ratio should be good enough to allow DSSCs to compete with fossil fuel electrical generation by achieving grid parity. Commercial applications, which were held up due to chemical stability problems, had been forecast in the European Union Photovoltaic Roadmap to significantly contribute to renewable electricity generation by 2020.", "Photosensitizers are dye compounds that absorb the photons from incoming light and eject electrons, producing an electric current that can be used to power a device or a storage unit. According to a new study performed by Michael Grätzel and fellow scientist Anders Hagfeldt, advances in photosensitizers have resulted in a substantial improvement in the performance of DSSCs under solar and ambient light conditions. Another key factor in achieving power-conversion records is cosensitization, owing to its ability to combine dyes that can absorb light across a wider range of the light spectrum. Cosensitization is a chemical manufacturing method that produces DSSC electrodes containing two or more different dyes with complementary optical absorption capabilities, enabling the use of all available sunlight.\nThe researchers from Switzerland’s École polytechnique fédérale de Lausanne (EPFL) found that the efficiency of cosensitized solar cells can be raised by the pre-adsorption of a monolayer of a hydroxamic acid derivative on the surface of nanocrystalline mesoporous titanium dioxide, which functions as the electron transport mechanism of the electrode. The two photosensitizer molecules used in the study were the organic dye SL9, which served as the primary long-wavelength light harvester, and the dye SL10, which provided an additional absorption peak that compensates for SL9’s inefficient blue light harvesting. It was found that adding this hydroxamic acid layer improved the dye layer’s molecular packing and ordering.
This slowed down the adsorption of the sensitizers and augmented their fluorescence quantum yield, improving the power conversion efficiency of the cell.\nThe DSSC developed by the team showed a record-breaking power conversion efficiency of 15.2% under standard global simulated sunlight and long-term operational stability over 500 hours. In addition, devices with a larger active area exhibited efficiencies of around 30% while maintaining high stability, offering new possibilities for the DSSC field.", "The field of building-integrated photovoltaics (BIPV) has gained attention from the scientific community due to its potential to reduce pollution and materials and electricity costs, as well as to improve the aesthetics of a building. In recent years, scientists have looked at ways to incorporate DSSCs in BIPV applications, since the dominant Si-based PV systems in the market have a limited presence in this field due to their energy-intensive manufacturing methods, poor conversion efficiency under low light intensities, and high maintenance requirements. In 2021, a group of researchers from the Silesian University of Technology in Poland developed a DSSC in which the classic glass counter electrode was replaced by an electrode based on a ceramic tile and nickel foil. The motivation for this change was that, although glass substrates have resulted in the highest recorded efficiencies for DSSCs, lighter and more flexible materials are essential for BIPV applications like roof tiles or building facades. These include plastic films, metals, steel, or paper, which may also reduce manufacturing costs. The team found that the cell had an efficiency of 4% (close to that of a solar cell with a glass counter electrode), demonstrating the potential for creating building-integrated DSSCs that are stable and low-cost.", "During the last 5–10 years, a new kind of DSSC has been developed – the solid state dye-sensitized solar cell. In this case the liquid electrolyte is replaced by one of several solid hole conducting materials. From 2009 to 2013 the efficiency of solid state DSSCs dramatically increased from 4% to 15%. Michael Grätzel announced the fabrication of solid state DSSCs with 15.0% efficiency, reached by means of a hybrid perovskite CH3NH3PbI3 dye, deposited sequentially from separate solutions of CH3NH3I and PbI2.\nThe first architectural integration was demonstrated at EPFL's SwissTech Convention Center in partnership with Romande Energie. The total surface is 300 m2, in 1400 modules of 50 cm x 35 cm. Designed by artists Daniel Schlaepfer and Catherine Bolle.", "The dyes used in early experimental cells (circa 1995) were sensitive only in the high-frequency end of the solar spectrum, in the UV and blue. Newer versions were quickly introduced (circa 1999) that had much wider frequency response, notably \"triscarboxy-ruthenium terpyridine\" [Ru(4,4',4\"-(COOH)3-terpy)(NCS)3], which is efficient right into the low-frequency range of red and IR light. The wide spectral response results in the dye having a deep brown-black color, and it is referred to simply as \"black dye\".
The dyes have an excellent chance of converting a photon into an electron, originally around 80% but improving to almost perfect conversion in more recent dyes; the overall efficiency is about 90%, with the \"lost\" 10% being largely accounted for by the optical losses in the top electrode.\nA solar cell must be capable of producing electricity for at least twenty years, without a significant decrease in efficiency (life span). The \"black dye\" system was subjected to 50 million cycles, the equivalent of ten years' exposure to the sun in Switzerland. No discernible performance decrease was observed. However, the dye is subject to breakdown in high-light situations. Over the last decade an extensive research program has been carried out to address these concerns. The newer dyes included 1-ethyl-3-methylimidazolium tetracyanoborate [EMIB(CN)4], which is extremely light- and temperature-stable, copper indium gallium diselenide [Cu(In,Ga)Se2], which offers higher conversion efficiencies, and others with varying special-purpose properties.\nDSSCs are still at the start of their development cycle. Efficiency gains are possible and have recently begun to be studied more widely. These include the use of quantum dots for conversion of higher-energy (higher frequency) light into multiple electrons, using solid-state electrolytes for better temperature response, and changing the doping of the TiO2 to better match it with the electrolyte being used.", "Dyesol and Tata Steel Europe announced in June the development of the world's largest dye sensitized photovoltaic module, printed onto steel in a continuous line.\nDyesol and CSIRO announced in October the successful completion of the second milestone in the joint Dyesol/CSIRO project.\nDyesol Director Gordon Thompson said, \"The materials developed during this joint collaboration have the potential to significantly advance the commercialisation of DSC in a range of applications where performance and stability are essential requirements.\nDyesol is extremely encouraged by the breakthroughs in the chemistry allowing the production of the target molecules. This creates a path to the immediate commercial utilisation of these new materials.\"\nDyesol and Tata Steel Europe announced in November the targeted development of grid-parity-competitive BIPV solar steel that does not require government-subsidised feed-in tariffs. TATA-Dyesol \"Solar Steel\" Roofing is currently being installed on the Sustainable Building Envelope Centre (SBEC) in Shotton, Wales.", "In the case of the original Grätzel and O'Regan design, the cell has three primary parts. On top is a transparent anode made of fluoride-doped tin dioxide (SnO2:F) deposited on the back of a (typically glass) plate. On the back of this conductive plate is a thin layer of titanium dioxide (TiO2), which forms into a highly porous structure with an extremely high surface area. The TiO2 particles are chemically bound together by a process called sintering. TiO2 only absorbs a small fraction of the solar photons (those in the UV). The plate is then immersed in a mixture of a photosensitive ruthenium-polypyridyl dye (also called molecular sensitizers) and a solvent. After soaking the film in the dye solution, a thin layer of the dye is left covalently bonded to the surface of the TiO2. The bond is either an ester, chelating, or bidentate bridging linkage.\nA separate plate is then made with a thin layer of the iodide electrolyte spread over a conductive sheet, typically platinum metal.
The two plates are then joined and sealed together to prevent the electrolyte from leaking. The construction is simple enough that there are hobby kits available to hand-construct them. Although they use a number of \"advanced\" materials, these are inexpensive compared to the silicon needed for normal cells because they require no expensive manufacturing steps. TiO2, for instance, is already widely used as a paint base.\nOne of the most efficient DSSC devices uses a ruthenium-based molecular dye, e.g. [Ru(4,4'-dicarboxy-2,2'-bipyridine)2(NCS)2] (N3), that is bound to the photoanode via carboxylate moieties. The photoanode consists of a 12 μm thick film of transparent 10–20 nm diameter TiO2 nanoparticles covered with a 4 μm thick film of much larger (400 nm diameter) particles that scatter photons back into the transparent film. The excited dye rapidly injects an electron into the TiO2 after light absorption. The injected electron diffuses through the sintered particle network to be collected at the front-side transparent conducting oxide (TCO) electrode, while the dye is regenerated via reduction by a redox shuttle, I−/I3−, dissolved in solution. Diffusion of the oxidized form of the shuttle to the counter electrode completes the circuit.", "Researchers have investigated the role of surface plasmon resonances present on gold nanorods in the performance of dye-sensitized solar cells. They found that with increasing nanorod concentration, the light absorption grew linearly; however, charge extraction was also dependent on the concentration. With an optimized concentration, they found that the overall power conversion efficiency improved from 5.31 to 8.86% for Y123 dye-sensitized solar cells.\nThe synthesis of one-dimensional TiO2 nanostructures directly on fluorine-doped tin oxide glass substrates was successfully demonstrated via a two-step solvothermal reaction. Additionally, through a TiO2 sol treatment, the performance of the dual TiO2 nanowire cells was enhanced, reaching a power conversion efficiency of 7.65%.\nStainless steel based counter-electrodes for DSSCs have been reported which further reduce cost compared to conventional platinum-based counter electrodes and are suitable for outdoor application.\nResearchers from EPFL have advanced DSSCs based on copper complex redox electrolytes, which have achieved 13.1% efficiency under standard AM1.5G, 100 mW/cm2 conditions and a record 32% efficiency under 1000 lux of indoor light.\nResearchers from Uppsala University have used n-type semiconductors instead of a redox electrolyte to fabricate solid state p-type dye sensitized solar cells.", "Of course, the composition of the material that is used as the counter electrode is extremely important to creating a working photovoltaic, as the valence and conduction energy bands must overlap with those of the redox electrolyte species to allow for efficient electron exchange.\nIn 2018, Jin et al. prepared ternary nickel cobalt selenide (NiCoSe) films at various stoichiometric ratios of nickel and cobalt to understand their impact on the resulting cell performance. Nickel and cobalt bimetallic alloys were known to have outstanding electron conduction and stability, so optimizing their stoichiometry would ideally produce more efficient and stable cell performance than their singly metallic counterparts. This is indeed what Jin et al.
found: the NiCoSe films achieved a superior power conversion efficiency (8.61%), lower charge transfer impedance, and higher electrocatalytic ability than both their platinum and binary selenide counterparts.", "Even with the same composition, the morphology of the nanoparticles that make up the counter electrode plays an integral role in determining the efficiency of the overall photovoltaic. Because a material's electrocatalytic potential is highly dependent on the amount of surface area available to facilitate the diffusion and reduction of the redox species, numerous research efforts have been focused towards understanding and optimizing the morphology of nanostructures for DSSC counter electrodes.\nIn 2017, Huang et al. utilized various surfactants in a microemulsion-assisted hydrothermal synthesis of CoSe/CoSeO composite crystals to produce nanocubes, nanorods, and nanoparticles. Comparison of these three morphologies revealed that the hybrid composite nanoparticles, due to having the largest electroactive surface area, had the highest power conversion efficiency of 9.27%, even higher than that of their platinum counterpart. In addition, the nanoparticle morphology displayed the highest peak current density and the smallest potential gap between the anodic and cathodic peak potentials, thus implying the best electrocatalytic ability.\nIn a similar study but with a different system, Du et al. in 2017 determined that the ternary oxide NiCoO had the greatest power conversion efficiency and electrocatalytic ability as nanoflowers when compared to nanorods or nanosheets. Du et al. realized that exploring various growth mechanisms that help to exploit the larger active surface areas of nanoflowers may provide an opening for extending DSSC applications to other fields.", "One of the most important components of a DSSC is the counter electrode. As stated before, the counter electrode is responsible for collecting electrons from the external circuit and introducing them back into the electrolyte to catalyze the reduction reaction of the redox shuttle, generally I3− to I−. Thus, it is important for the counter electrode not only to have high electron conductivity and diffusive ability, but also electrochemical stability, high catalytic activity and an appropriate band structure. The counter electrode material most commonly used in DSSCs is platinum, but it is not sustainable owing to its high cost and scarcity. Thus, much research has focused on discovering new hybrid and doped materials that can replace platinum with comparable or superior electrocatalytic performance. One such category being widely studied includes chalcogen compounds of cobalt, nickel, and iron (CCNI), particularly the effects of morphology, stoichiometry, and synergy on the resulting performance. It has been found that, in addition to the elemental composition of the material, these three parameters greatly impact the resulting counter electrode efficiency. Of course, there are a variety of other materials currently being researched, such as highly mesoporous carbons, tin-based materials, gold nanostructures, and lead-based nanocrystals. However, the following section compiles a variety of ongoing research efforts specifically relating to CCNI aimed at optimizing DSSC counter electrode performance.", "In the late 1960s it was discovered that illuminated organic dyes can generate electricity at oxide electrodes in electrochemical cells.
In an effort to understand and simulate the primary processes in photosynthesis the phenomenon was studied at the University of California at Berkeley with chlorophyll extracted from spinach (bio-mimetic or bionic approach). On the basis of such experiments electric power generation via the dye sensitization solar cell (DSSC) principle was demonstrated and discussed in 1972. The instability of the dye solar cell was identified as a main challenge. Its efficiency could, during the following two decades, be improved by optimizing the porosity of the electrode prepared from fine oxide powder, but the instability remained a problem.\nA modern n-type DSSC, the most common type of DSSC, is composed of a porous layer of titanium dioxide nanoparticles, covered with a molecular dye that absorbs sunlight, like the chlorophyll in green leaves. The titanium dioxide is immersed under an electrolyte solution, above which is a platinum-based catalyst. As in a conventional alkaline battery, an anode (the titanium dioxide) and a cathode (the platinum) are placed on either side of a liquid conductor (the electrolyte).\nThe working principle for n-type DSSCs can be summarized into a few basic steps. Sunlight passes through the transparent electrode into the dye layer where it can excite electrons that then flow into the conduction band of the n-type semiconductor, typically titanium dioxide. The electrons from titanium dioxide then flow toward the transparent electrode where they are collected for powering a load. After flowing through the external circuit, they are re-introduced into the cell on a metal electrode on the back, also known as the counter electrode, and flow into the electrolyte. The electrolyte then transports the electrons back to the dye molecules and regenerates the oxidized dye.\nThe basic working principle above, is similar in a p-type DSSC, where the dye-sensitised semiconductor is of p-type nature (typically nickel oxide). However, instead of injecting an electron into the semiconductor, in a p-type DSSC, a hole flows from the dye into the valence band of the p-type semiconductor.\nDye-sensitized solar cells separate the two functions provided by silicon in a traditional cell design. Normally the silicon acts as both the source of photoelectrons, as well as providing the electric field to separate the charges and create a current. In the dye-sensitized solar cell, the bulk of the semiconductor is used solely for charge transport, the photoelectrons are provided from a separate photosensitive dye. Charge separation occurs at the surfaces between the dye, semiconductor and electrolyte.\nThe dye molecules are quite small (nanometer sized), so in order to capture a reasonable amount of the incoming light the layer of dye molecules needs to be made fairly thick, much thicker than the molecules themselves. To address this problem, a nanomaterial is used as a scaffold to hold large numbers of the dye molecules in a 3-D matrix, increasing the number of molecules for any given surface area of cell. In existing designs, this scaffolding is provided by the semiconductor material, which serves double-duty.", "In a traditional solid-state semiconductor, a solar cell is made from two doped crystals, one doped with n-type impurities (n-type semiconductor), which add additional free conduction band electrons, and the other doped with p-type impurities (p-type semiconductor), which add additional electron holes. 
When placed in contact, some of the electrons in the n-type portion flow into the p-type to \"fill in\" the missing electrons, also known as electron holes. Eventually enough electrons will flow across the boundary to equalize the Fermi levels of the two materials. The result is a region at the interface, the p–n junction, where charge carriers are depleted and/or accumulated on each side of the interface. In silicon, this transfer of electrons produces a potential barrier of about 0.6 to 0.7 eV.\nWhen placed in the sun, photons of the sunlight can excite electrons on the p-type side of the semiconductor, a process known as photoexcitation. In silicon, sunlight can provide enough energy to push an electron out of the lower-energy valence band into the higher-energy conduction band. As the name implies, electrons in the conduction band are free to move about the silicon. When a load is placed across the cell as a whole, these electrons will flow out of the p-type side into the n-type side, lose energy while moving through the external circuit, and then flow back into the p-type material where they can once again re-combine with the valence-band hole they left behind. In this way, sunlight creates an electric current.\nIn any semiconductor, the band gap means that only photons with that amount of energy, or more, will contribute to producing a current. In the case of silicon, the majority of visible light from red to violet has sufficient energy to make this happen. Unfortunately higher energy photons, those at the blue and violet end of the spectrum, have more than enough energy to cross the band gap; although some of this extra energy is transferred into the electrons, the majority of it is wasted as heat. Another issue is that in order to have a reasonable chance of capturing a photon, the n-type layer has to be fairly thick. This also increases the chance that a freshly ejected electron will meet up with a previously created hole in the material before reaching the p–n junction. These effects produce an upper limit on the efficiency of silicon solar cells, currently around 20% for common modules and up to 27.1% for the best laboratory cells (33.16% is the theoretical maximum efficiency for single band gap solar cells, see Shockley–Queisser limit.).\nBy far the biggest problem with the conventional approach is cost; solar cells require a relatively thick layer of doped silicon in order to have reasonable photon capture rates, and silicon processing is expensive. There have been a number of different approaches to reduce this cost over the last decade, notably the thin-film approaches, but to date they have seen limited application due to a variety of practical problems. Another line of research has been to dramatically improve efficiency through the multi-junction approach, although these cells are very high cost and suitable only for large commercial deployments. 
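As a rough numerical illustration of the band-gap threshold discussed above (the silicon band gap of roughly 1.1 eV is standard reference data rather than a figure given in the text):
:E_photon = hc/λ ≈ 1240 eV·nm / λ
:λ = 700 nm (red) → E ≈ 1.8 eV;  λ = 400 nm (violet) → E ≈ 3.1 eV
Both exceed the ≈1.1 eV gap of silicon, so either photon can promote an electron across the gap, but the excess energy (≈0.7 eV and ≈2.0 eV respectively) is largely lost as heat.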
In general terms, the types of cells suitable for rooftop deployment have not changed significantly in efficiency, although costs have dropped somewhat due to increased supply.", "The following steps convert photons (light) to current in a conventional n-type DSSC:\nThe efficiency of a DSSC depends on four energy levels of the component: the excited state (approximately the LUMO) and the ground state (HOMO) of the photosensitizer, the Fermi level of the TiO2 electrode and the redox potential of the mediator (I−/I3−) in the electrolyte.", "Quantum dots are unique fluorophores relative to organic dyes, such as fluorescein or rhodamine, because they are composed of semiconductor materials instead of a π-conjugated carbon-bonding framework. With organic dyes, the length of the π-conjugated framework (quantum confinement), as well as side-groups (electron donating/withdrawing or halogens), tends to dictate the absorption and emission spectra of the molecule. Semiconductor quantum dots also work on the concept of quantum confinement (often referred to as \"Particle in a Box\" theory), where an exciton is formed inside the crystal lattice by an incident photon of higher energy. The electron and hole of the exciton have an interaction energy that is tuned by changing the physical size of the quantum dot. The absorption and emission colors are tuned such that smaller quantum dots confine the exciton into a tighter physical space and increase the energy. Alternatively, a larger quantum dot confines the exciton into a larger physical space, lowering the interaction energy of the electron and hole, and decreasing the energy of the system. As shown in the table above, the diameter of the CdSe quantum dots is related to the emission energy such that the smaller quantum dots emit photons toward the blue wavelength range (higher energy) and the larger quantum dots emit photons toward the red wavelength range (lower energy).\nTo the right are representative absorption (blue) and emission (red) spectra for the eFluor-605 nanocrystal. The absorption spectrum of nanocrystals displays a number of peaks overlaid on a background that rises exponentially toward the ultraviolet, where the lowest energy absorption peak arises from the 1S-1S transition and has been correlated to the physical size of the quantum dot. This peak is generally referred to as the \"1st exciton\" and is the primary absorption characteristic used to determine both size and concentration for most quantum dots.\nThe photoluminescence spectra of quantum dots are also unique relative to organic dyes in that they are typically Gaussian-shaped curves with no red-tailing to the spectrum. The width of the photoluminescence peak represents the heterogeneity in size dispersion of the quantum dots, where a large size dispersion will lead to broad emission peaks, and a tight size dispersion will lead to narrow emission peaks, often quantified by the full width at half maximum (FWHM) value. eFluor Nanocrystals are specified at ≤30 nm FWHM for the CdSe nanocrystals, and ≤70 nm FWHM for the InGaP eFluor 700 nanocrystals.", "The optical emission properties of eFluor Nanocrystals are primarily dictated by their size, as discussed in the next section. There are at least two aspects to consider when discussing the \"size\" of a quantum dot: the physical size of the semiconductor structure, and the size of the entire quantum dot moiety including the associated ligands and hydrophilic coating.
The size of the semiconductor structure is tabulated below, and reflects the diameter of the spherical quantum dot without ligands. eFluor Nanocrystals are rendered water-dispersible with a patented poly(ethylene glycol) (PEG) lipid layer that functions both as a protective hydrophilic coating around the quantum dot and as a means of reducing non-specific binding. By dynamic light scattering measurements, the hydrodynamic radius of all eFluor Nanocrystals ranges from 10 to 13 nm.", "eFluor nanocrystals are a class of fluorophores made of semiconductor quantum dots. The nanocrystals can be provided with either primary amine, carboxylate, or non-functional groups on the surface, allowing conjugation to biomolecules of a researcher's choice. The nanocrystals can be conjugated to primary antibodies, which are then used for flow cytometry, immunohistochemistry, microarrays, in vivo imaging and microscopy.", "Electrofiltration is a method that combines membrane filtration and electrophoresis in a dead-end process.\nElectrofiltration is regarded as an appropriate technique for the concentration and fractionation of biopolymers. The film formation on the filter membrane which hinders filtration can be minimized or completely avoided by the application of an electric field, improving filtration performance and increasing selectivity in the case of fractionation. This approach significantly reduces the expense of downstream processing in bioprocesses.", "*Vorobiev E., Lebovka N. (2008). Electrotechnologies for Extraction from Food Plants and Biomaterials.", "Electrofiltration is a technique for the separation and concentration of colloidal substances – for instance biopolymers. The principle of electrofiltration is based on overlaying an electric field on a standard dead-end filtration. The resulting polarity produces an electrophoretic force, opposed to the resistance force of the filtrate flow, that directs the charged biopolymers. This drastically decreases film formation on the micro- or ultra-filtration membranes and reduces the filtration time from several hours with standard filtration to a few minutes with electrofiltration. In comparison to cross-flow filtration, electrofiltration not only exhibits increased permeate flow but also guarantees reduced shear stress, which qualifies it as a particularly mild technique for the separation of biopolymers that are usually unstable.\nIts promise for the purification of biotechnological products rests on the fact that biopolymers are difficult to filter but, on the other hand, are usually charged owing to the presence of amino and carboxyl groups. The objective of electrofiltration is to prevent the formation of a filter cake and to improve the filtration kinetics of products that are difficult to filter.\nThe electrophoresis of the particles and the electro-osmosis become essential when the filtration process is overlaid with an electric field. In electrofiltration, conventional filtration is overlaid with a DC electric field acting parallel to the filtrate's flow direction. When the electrophoretic force F_E, directed opposite to the flow, exceeds the hydrodynamic resistance force F_W, the charged particles migrate away from the filter medium, significantly reducing the thickness of the filter cake on the membrane (a small numerical sketch of this force balance is given below).\nWhen the solid particles subject to separation are negatively charged, they migrate towards the anode (positive pole) and deposit on the filter cloth situated there.
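The force balance described here, and formalized in the equations of the following section, can be illustrated numerically. The minimal Python sketch below uses invented placeholder values for every parameter; none of the numbers are measurements from the text.

import math

eta = 1.0e-3    # dynamic viscosity of water, Pa*s (assumed)
r_h = 50e-9     # hydrodynamic radius of the colloid, m (assumed)
q   = 1.6e-17   # net particle charge, C (about 100 elementary charges, assumed)
E   = 5.0e4     # applied DC electric field, V/m (assumed)

F_E = q * E                           # electrophoretic (Coulomb) force
v = F_E / (6 * math.pi * eta * r_h)   # migration speed at which the Stokes drag
                                      # F_W balances F_E, i.e. F_E + F_W = 0

print(f"electrophoretic force F_E: {F_E:.2e} N")
print(f"steady-state migration speed: {v:.2e} m/s")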
As a result, on the cathode side's membrane (negative pole) there is only a very thin film, allowing nearly the whole filtrate to efflux through this membrane.\nFigure 1 presents a schematic description of an electrofiltration chamber with flushing electrodes. A buffer solution is used for the flushing circulation. This approach has been patented.", "The hydrodynamic resistance force F_W is evaluated following Stokes' law, and the electrophoretic force F_E following Coulomb's law:\n:F_W = 6·π·η·r_H·v\n:F_E = q·E, with q ≈ 4·π·ε_0·ε_r·ζ·r_H for a spherical colloid\nIn these equations r_H denotes the hydrodynamic radius of the colloids, v the speed of electrophoretic migration, η the dynamic viscosity of the solution, ε_0 the dielectric constant (permittivity) of vacuum, ε_r the relative dielectric constant of water at 298 K, ζ the zeta potential, and E the electric field. The hydrodynamic radius is the sum of the particle radius and the thickness of the stationary solvent interface.\nDuring steady-state electrophoretic migration of charged colloids the electrophoretic force and the hydrodynamic resistance force are in equilibrium, described by:\n:F_E + F_W = 0\nThese effects influence the electrofiltration of biopolymers, which, being charged themselves, are acted on not only by the hydrodynamic resistance force but also by the electric field force. Focusing on the cathode side reveals that the negatively charged particles are affected by the electric field force, which is opposite to the hydrodynamic resistance force. In this manner the formation of filter cake on this side is impeded or, in the ideal situation, no filter cake is formed at all. In this case the electric field is referred to as the critical electric field E_c. As a result of the equilibrium of those forces, liquids subjected to the influence of the electric field also become charged. In addition to the applied hydraulic pressure Δp_H, the process is also influenced by the electro-osmotic pressure P_E.\nModifying Darcy's basic equation describing filter cake formation to include electro-kinetic effects, and integrating under the assumption that the electro-osmotic pressure P_E, the critical electric field E_c and the applied electric field E are constant, yields:\nPrevious scientific work conducted at the [http://www.bio-ag.de/ Dept. of Bioprocess Engineering, Institute of Engineering in Life Sciences, University of Karlsruhe] demonstrated that electrofiltration is effective for the concentration of charged biopolymers. Very promising results concerning the purification of the charged polysaccharide xanthan have already been obtained. Figure 2 shows a xanthan filter cake.", "Electroluminescent devices are fabricated using either organic or inorganic electroluminescent materials. The active materials are generally semiconductors with a band gap wide enough to allow the exit of the light.\nThe most typical inorganic thin-film EL (TFEL) material is ZnS:Mn with yellow-orange emission. Examples of the range of EL materials include:\n* Powdered zinc sulfide doped with copper (producing greenish light) or silver (producing bright blue light)\n* Thin-film zinc sulfide doped with manganese (producing orange-red color)\n* Naturally blue diamond, which includes a trace of boron that acts as a dopant\n* Semiconductors containing Group III and Group V elements, such as indium phosphide (InP), gallium arsenide (GaAs), and gallium nitride (GaN) (light-emitting diodes)\n* Certain organic semiconductors, such as [Ru(bpy)3](PF6)2, where bpy is 2,2'-bipyridine", "Electroluminescence (EL) is an optical and electrical phenomenon in which a material emits light in response to the passage of an electric current or to a strong electric field.
This is distinct from black body light emission resulting from heat (incandescence), chemical reactions (chemiluminescence), reactions in a liquid (electrochemiluminescence), sound (sonoluminescence), or other mechanical action (mechanoluminescence), or \norganic electroluminescence.", "Electroluminescence is the result of radiative recombination of electrons & holes in a material, usually a semiconductor. The excited electrons release their energy as photons - light. Prior to recombination, electrons and holes may be separated either by doping the material to form a p-n junction (in semiconductor electroluminescent devices such as light-emitting diodes) or through excitation by impact of high-energy electrons accelerated by a strong electric field (as with the phosphors in electroluminescent displays).\nIt has been recently shown that as a solar cell improves its light-to-electricity efficiency (improved open-circuit voltage), it will also improve its electricity-to-light (EL) efficiency.", "Electroluminescent technologies have low power consumption compared to competing lighting technologies, such as neon or fluorescent lamps. This, together with the thinness of the material, has made EL technology valuable to the advertising industry. Relevant advertising applications include electroluminescent billboards and signs. EL manufacturers can control precisely which areas of an electroluminescent sheet illuminate, and when. This has given advertisers the ability to create more dynamic advertising that is still compatible with traditional advertising spaces.\nAn EL film is a so-called Lambertian radiator: unlike with neon lamps, filament lamps, or LEDs, the brightness of the surface appears the same from all angles of view; electroluminescent light is not directional. The light emitted from the surface is perfectly homogeneous and is well-perceived by the eye. EL film produces single-frequency (monochromatic) light that has a very narrow bandwidth, is uniform and visible from a great distance.\nIn principle, EL lamps can be made in any color. However, the commonly used greenish color closely matches the peak sensitivity of human vision, producing the greatest apparent light output for the least electrical power input. Unlike neon and fluorescent lamps, EL lamps are not negative resistance devices so no extra circuitry is needed to regulate the amount of current flowing through them. A new technology now being used is based on multispectral phosphors that emit light from 600 to 400nm depending on the drive frequency; this is similar to the color-changing effect seen with aqua EL sheet but on a larger scale.", "The Sylvania Lighting Division in Salem and Danvers, Massachusetts, produced and marketed an EL night light, under the trade name Panelescent at roughly the same time that the Chrysler instrument panels entered production. These lamps have proven extremely reliable, with some samples known to be still functional after nearly 50 years of continuous operation.\nLater in the 1960s, Sylvania's Electronic Systems Division in Needham, Massachusetts developed and manufactured several instruments for the Apollo Lunar Module and Command Module using electroluminescent display panels manufactured by the Electronic Tube Division of Sylvania at Emporium, Pennsylvania. 
Raytheon in Sudbury, Massachusetts manufactured the Apollo Guidance Computer, which used a Sylvania electroluminescent display panel as part of its display-keyboard interface (DSKY).", "The most common electroluminescent (EL) devices are composed of either powder (primarily used in lighting applications) or thin films (for information displays.)", "Powder phosphor-based electroluminescent panels are frequently used as backlights for liquid crystal displays. They readily provide gentle, even illumination for the entire display while consuming relatively little electric power. This makes them convenient for battery-operated devices such as pagers, wristwatches, and computer-controlled thermostats, and their gentle green-cyan glow is common in the technological world.\nEL backlights require relatively high voltage (between 60 and 600 volts). For battery-operated devices, this voltage must be generated by a boost converter circuit within the device. This converter often makes a faintly audible whine or siren sound while the backlight is activated. Line-voltage-operated devices may be activated directly from the power line; some electroluminescent nightlights operate in this fashion. Brightness per unit area increases with increased voltage and frequency.\nThin-film phosphor electroluminescence was first commercialized during the 1980s by Sharp Corporation in Japan, Finlux (Oy Lohja Ab) in Finland, and Planar Systems in the US. In these devices, bright, long-life light emission is achieved in thin-film yellow-emitting manganese-doped zinc sulfide material. Displays using this technology were manufactured for medical and vehicle applications where ruggedness and wide viewing angles were crucial, and liquid crystal displays were not well developed. In 1992, Timex introduced its Indiglo EL display on some watches.\nRecently, blue-, red-, and green-emitting thin film electroluminescent materials that offer the potential for long life and full-color electroluminescent displays have been developed.\nThe EL material must be enclosed between two electrodes and at least one electrode must be transparent to allow the escape of the produced light. Glass coated with indium tin oxide is commonly used as the front (transparent) electrode, while the back electrode is coated with reflective metal. Additionally, other transparent conducting materials, such as carbon nanotube coatings or PEDOT can be used as the front electrode.\nThe display applications are primarily passive (i.e., voltages are driven from the edge of the display cf. driven from a transistor on the display). Similar to LCD trends, there have also been Active Matrix EL (AMEL) displays demonstrated, where the circuitry is added to prolong voltages at each pixel. The solid-state nature of TFEL allows for a very rugged and high-resolution display fabricated even on silicon substrates. AMEL displays of 1280×1024 at over 1000 lines per inch (LPI) have been demonstrated by a consortium including Planar Systems.", "Thick-film dielectric electroluminescent technology (TDEL) is a phosphor-based flat panel display technology developed by Canadian company iFire Technology Corp. TDEL is based on inorganic electroluminescent (IEL) technology that combines both thick-and thin-film processes. The TDEL structure is made with glass or other substrates, consisting of a thick-film dielectric layer and a thin-film phosphor layer sandwiched between two sets of electrodes to create a matrix of pixels. 
Inorganic phosphors within this matrix emit light in the presence of an alternating electric field.", "Color By Blue (CBB) was developed in 2003. The Color By Blue process achieves higher luminance and better performance than the previous triple pattern process, with increased contrast, grayscale rendition, and color uniformity across the panel. Color By Blue is based on the physics of photoluminescence. High luminance inorganic blue phosphor is used in combination with specialized color conversion materials, which absorb the blue light and re-emit red or green light, to generate the other colors.", "Electroluminescent lighting is now used as an application for public safety identification involving alphanumeric characters on the roof of vehicles for clear visibility from an aerial perspective.\nElectroluminescent lighting, especially electroluminescent wire (EL wire), has also made its way into clothing as many designers have brought this technology to the entertainment and nightlife industry. From 2006, t-shirts with an electroluminescent panel stylized as an audio equalizer, the T-Qualizer, saw a brief period of popularity.\nEngineers have developed an electroluminescent \"skin\" that can stretch more than six times its original size while still emitting light. This hyper-elastic light-emitting capacitor (HLEC) can endure more than twice the strain of previously tested stretchable displays. It consists of layers of transparent hydrogel electrodes sandwiching an insulating elastomer sheet. The elastomer changes luminance and capacitance when stretched, rolled, and otherwise deformed. In addition to its ability to emit light under a strain of greater than 480% of its original size, the group's HLEC was shown to be capable of being integrated into a soft robotic system. Three six-layer HLEC panels were bound together to form a crawling soft robot, with the top four layers making up the light-up skin and the bottom two the pneumatic actuators. The discovery could lead to significant advances in health care, transportation, electronic communication and other areas.", "Light-emitting capacitor, or LEC, is a term used since at least 1961 to describe electroluminescent panels. General Electric has patents dating to 1938 on flat electroluminescent panels that are still made as night lights and backlights for instrument panel displays. Electroluminescent panels are a capacitor where the dielectric between the outside plates is a phosphor that gives off photons when the capacitor is charged. By making one of the contacts transparent, the large area exposed emits light.\nElectroluminescent automotive instrument panel backlighting, with each gauge pointer also an individual light source, entered production on 1960 Chrysler and Imperial passenger cars, and was continued successfully on several Chrysler vehicles through 1967 and marketed as \"Panelescent Lighting\".", "* AMEL Active Matrix Electroluminescence\n* TFEL Thin Film Electroluminescence\n* TDEL Thick Dielectric Electroluminescence", "Electroluminescent displays have been a very niche format and are very rarely used nowadays. Some uses have included to indicate speed and altitude at the front of the Concorde, and as floor indicators on Otis Elevators from around 1989 to 2007, mostly only available to high-rise buildings and modernizations.", "EL works by exciting atoms by passing an electric current through them, causing them to emit photons. By varying the material being excited, the colour of the light emitted can be changed. 
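The colour statement above has a simple quantitative reading: the wavelength of the emitted photon is set by the energy released per radiative recombination event in the material being excited. Below is a minimal sketch (in Python; the relation E = hc/λ is standard, but the example transition energies are hypothetical and not taken from any phosphor named in the text) converting energies to wavelengths; the results span roughly the 400 to 600 nm range quoted earlier for multispectral phosphors.

```python
# Minimal sketch: relate the energy released per radiative recombination
# event to the wavelength (and hence colour) of the emitted photon via
# E = h*c / wavelength. The example energies are illustrative only.

H_C_EV_NM = 1239.84  # Planck constant times speed of light, in eV*nm

def wavelength_nm(photon_energy_ev: float) -> float:
    """Wavelength in nanometres of a photon carrying the given energy in eV."""
    return H_C_EV_NM / photon_energy_ev

if __name__ == "__main__":
    # Hypothetical transition energies spanning the visible range.
    for energy_ev in (3.1, 2.5, 2.1, 1.9):
        print(f"{energy_ev:.1f} eV  ->  {wavelength_nm(energy_ev):.0f} nm")
```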
The actual ELD is constructed using flat, opaque electrode strips running parallel to each other, covered by a layer of electroluminescent material, followed by another layer of electrodes, running perpendicular to the bottom layer. This top layer must be transparent in order to let light escape. At each intersection, the material lights, creating a pixel.", "Electroluminescent Displays (ELDs) are a type of flat panel display created by sandwiching a layer of electroluminescent material such as Gallium arsenide between two layers of conductors. When current flows, the layer of material emits radiation in the form of visible light. Electroluminescence (EL) is an optical and electrical phenomenon where a material emits light in response to an electric current passed through it, or to a strong electric field. The term \"electroluminescent display\" describes displays that use neither LED nor OLED devices, that instead use traditional electroluminescent materials. Beneq is the only manufacturer of TFEL (Thin Film Electroluminescent Display) and TAESL displays, which are branded as LUMINEQ Displays. The structure of a TFEL is similar to that of a passive matrix LCD or OLED display, and TAESL displays are essentially transparent TEFL displays with transparent electrodes. TAESL displays can have a transparency of 80%. Both TEFL and TAESL displays use chip-on-glass technology, which mounts the display driver IC directly on one of the edges of the display. TAESL displays can be embedded onto glass sheets. Unlike LCDs, TFELs are much more rugged and can operate at temperatures from −60 to 105°C and unlike OLEDs, TFELs can operate for 100,000 hours without considerable burn-in, retaining about 85% of their initial brightness. The electroluminescent material is deposited using atomic layer deposition, which is a process that deposits one 1-atom thick layer at a time.", "By arranging each strand of EL wire into a shape slightly different from the previous one, it is possible to create animations using EL wire sequencers. EL wire sequencers are also used for costumes and have been used to create animations on various items such as kimono, purses, neckties, and motorcycle tanks. They are increasingly popular among artists, dancers, maker culture, and similar creative communities, such as exhibited in the annual Burning Man alt-culture festival.", "EL wire's construction consists of five major components. First is a solid-copper wire core coated with phosphor. A very fine wire or pair of wires is spiral-wound around the phosphor-coated copper core and then the outer Indium tin oxide (ITO) conductive coating is evaporated on. This fine wire is electrically isolated from the copper core. Surrounding this \"sandwich\" of copper core, phosphor and fine copper wire is a clear PVC sleeve. Finally, surrounding this thin and clear PVC sleeve is another clear, colored translucent or fluorescent PVC sleeve.\nAn alternating current electric potential of approximately 90 to 120 volts at about 1000 Hz is applied between the copper core wire and the fine wire that surrounds the copper core. The wire can be modeled as a coaxial capacitor with about 1 nF of capacitance per 30 cm, and the rapid charging and discharging of this capacitor excites the phosphor to emit light. The colors of light that can be produced efficiently by phosphors are limited, so many types of wire use an additional fluorescent organic dye in the clear PVC sleeve to produce the final result. 
These organic dyes produce colors like red and purple when excited by the blue-green light of the core.\nA resonant oscillator is typically used to generate the high voltage drive signal. Because of the capacitance load of the EL wire, using an inductive (coiled) transformer makes the driver a very efficient tuned LC oscillator. The efficiency of EL wire is very high, and thus up to a hundred meters of EL wire can be driven by AA batteries for several hours.\nIn recent years, the LC circuit has been replaced for some applications with a single chip switched capacitor inverter IC such as the Supertex HV850; this can run 30 cm of angel hair wire at high efficiency, and is suitable for solar lanterns and safety applications. The other advantage of these chips is that the control signals can be derived from a microcontroller, so brightness and colour can be varied programmatically; this can be controlled by using external sensors that sense, for example, battery state, ambient temperature, or ambient light etc.\nEL wire - in common with other types of EL devices - does have limitations: at high frequency it dissipates a lot of heat, and that can lead to breakdown and loss of brightness over time. Because the wire is unshielded and typically operates at a relatively high voltage, EL wire can produce high-frequency interference (corresponding to the frequency of the oscillator) that can be picked up by sensitive audio equipment, such as guitar pickups. \nThere is also a voltage limit: typical EL wire breaks down at around 180 volts peak-to-peak, so if using an unregulated transformer, back-to-back zener diodes and series current-limiting resistors are essential.\nIn addition, EL sheet and wire can sometimes be used as a touch sensor, since compressing the capacitor will change its value.", "Electroluminescent wire (often abbreviated as EL wire) is a thin copper wire coated in a phosphor that produces light through electroluminescence when an alternating current is applied to it. It can be used in a wide variety of applications—vehicle and structure decoration, safety and emergency lighting, toys, clothing etc.—much as rope light or Christmas lights are often used. Unlike these types of strand lights, EL wire is not a series of points, but produces a continuous unbroken line of visible light. Its thin diameter makes it flexible and ideal for use in a variety of applications such as clothing or costumes.", "EL wire sequencers can flash electroluminescent wire, or EL wire, in sequential patterns. EL wire requires a low-power, high-frequency driver to cause the wire to illuminate. Most EL wire drivers simply light up one strand of EL wire in a constant-on mode, and some drivers may additionally have a blink or strobe mode. A sound-activated driver will light EL wire in synchronization to music, speech, or other ambient sound, but an EL wire sequencer will allow multiple lengths of EL wire to be flashed in a desired sequence. The lengths of EL wire can all be the same color, or a variety of colors.\nThe images above show a sign that displays a telephone number, where the numbers were formed using different colors of EL wire. There are ten numbers, each of which is connected to a different channel of the EL wire sequencer.\nLike EL wire drivers, sequencers are rated to drive (or power) a range or specific length of EL wire. 
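Because the text above models a run of EL wire as a coaxial capacitor (about 1 nF per 30 cm) driven at roughly 90 to 120 V and about 1000 Hz, the load presented to a driver or sequencer grows in direct proportion to wire length. The following is a rough sketch under those assumptions, treating the wire as a purely capacitive load; the rated length range at the end is a hypothetical illustration, not a specification of any particular driver.

```python
# Rough sketch: treat a length of EL wire as the coaxial capacitor described
# above (~1 nF per 30 cm) and estimate the RMS current a driver must supply
# at the quoted ~1000 Hz, ~90-120 V drive, using I = 2*pi*f*C*V for a purely
# capacitive load. The rated range check is a hypothetical illustration.
import math

C_PER_METRE = 1e-9 / 0.30  # farads per metre, from the ~1 nF per 30 cm figure above

def drive_current_ma(length_m: float, volts_rms: float = 110.0,
                     freq_hz: float = 1000.0) -> float:
    """Approximate RMS drive current in milliamps for a given wire length."""
    capacitance = C_PER_METRE * length_m
    return 2 * math.pi * freq_hz * capacitance * volts_rms * 1e3

def within_rating(length_m: float, min_m: float, max_m: float) -> bool:
    """Crude check that a planned run of wire sits inside a driver's rated range."""
    return min_m <= length_m <= max_m

if __name__ == "__main__":
    for length in (0.3, 3.0, 10.0):
        print(f"{length:4.1f} m -> ~{drive_current_ma(length):5.2f} mA at 110 V, 1 kHz")
    print(within_rating(10.0, 1.5, 14.0))  # hypothetical 1.5-14 m rated driver
```

Commercial drivers and sequencers fold this scaling into a rated wire-length range, as the example that follows in the text illustrates.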
For example, using a sequencer rated for 1.5 to 14 meters (5 to 45 feet), if less than 1.5m is used, there is a risk of burning out the sequencer, and if more than 14m is used, the EL wire will not light as brightly as intended.\nThere are commercially available EL wire sequencers capable of lighting three, four, five, or ten lengths of EL wire. There are professional and experimental sequencers with many more than ten channels, but for most applications, ten channels is enough. Sequencers usually have options for changing the speed, reversing, changing the order of the sequence, and sometimes, to change whether the first wires remain lit or go off as the rest of the wires in the sequence are lit. EL wire sequencers tend to be smaller than a pack of cigarettes and most are powered by batteries. This versatility lends to the sequencers' use at nighttime events where mains electricity is not available.", "An electrostatic separator is a device for separating particles by mass in a low energy charged beam.\nAn example is the electrostatic precipitator used in coal-fired power plants to treat exhaust gas, removing small particles that cause air pollution. \nElectrostatic separation is a process that uses electrostatic charges to separate crushed particles of material. An industrial process used to separate large amounts of material particles, electrostatic separating is most often used in the process of sorting mineral ore. This process can help remove valuable material from ore, or it can help remove foreign material to purify a substance. In mining, the process of crushing mining ore into particles for the purpose of separating minerals is called beneficiation.\nGenerally, electrostatic charges are used to attract or repel differently charged material. When electrostatic separation uses the force of attraction to sort particles, conducting particles stick to an oppositely charged object, such as a metal drum, thereby separating them from the particle mixture. When this type of beneficiation uses repelling force, it is normally employed to change the trajectory of falling objects to sort them into different places. This way, when a mixture of particles falls past a repelling object, the particles with the correct charge fall away from the other particles when they are repelled by the similarly charged object.\nAn electric charge can be positive or negative — objects with a positive charge repel other positively charged objects, thereby causing them to push away from each other, while a positively charged object would attract to a negatively charged object, thereby causing the two to draw together. \nExperiments showing electrostatic sorting in action can help make the process more clear. To exhibit electrostatic separation at home, an experiment can be conducted using peanuts that are still in their shells. When the shells are rubbed off of the peanuts and gently smashed into pieces, an electrostatically charged device, like a comb rubbed quickly against a wool sweater, will pick up the peanut shells with static electricity. The lightweight crushed shells that are oppositely charged from the comb easily move away from the edible peanut parts when the comb is passed nearby.\nThe electrostatic separation of conductors is one method of beneficiation; another common beneficiation method is magnetic beneficiation. Electrostatic separation is a preferred sorting method when dealing with separating conductors from electrostatic separation non-conductors. 
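The trajectory-sorting mechanism described above can be illustrated with elementary kinematics: a falling particle that carries charge q in a horizontal field E is pushed sideways with force qE, so its landing point shifts relative to an uncharged (or oppositely charged) particle. A minimal sketch, with all numerical values chosen purely for illustration rather than taken from any real separator:

```python
# Minimal sketch of the trajectory-deflection sorting described above: a
# particle falling through a horizontal electric field E while carrying
# charge q is accelerated sideways at a = q*E/m, so charged and uncharged
# (or differently charged) particles land in different bins. All numbers
# are purely illustrative, not parameters of any real separator.
import math

G = 9.81  # gravitational acceleration, m/s^2

def lateral_deflection_m(charge_c: float, mass_kg: float,
                         field_v_per_m: float, drop_height_m: float) -> float:
    """Horizontal displacement accumulated while falling through the field region."""
    fall_time = math.sqrt(2 * drop_height_m / G)
    sideways_accel = charge_c * field_v_per_m / mass_kg
    return 0.5 * sideways_accel * fall_time ** 2

if __name__ == "__main__":
    field = 2e5   # V/m, illustrative
    height = 0.5  # metres of free fall inside the field region, illustrative
    for label, q, m in (("charged particle", 1e-11, 1e-6),
                        ("uncharged particle", 0.0, 1e-6)):
        shift_cm = lateral_deflection_m(q, m, field, height) * 100
        print(f"{label}: deflected about {shift_cm:.1f} cm")
```

The same arithmetic shows why particles with similar charge but different mass also land in different places, since the deflection scales with the charge-to-mass ratio.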
In a similar way to that in which electrostatic separation sorts particles with different electrostatic charges, magnetic beneficiation sorts particles that respond to a magnetic field. Electrostatic beneficiation is effective for removing particulate matter, such as ash from mined coal, while magnetic separation functions well for removing the magnetic iron ore from deposits of clay in the earth.", "Laboratories have developed grading methods to judge oocyte and embryo quality. In order to optimise pregnancy rates, there is significant evidence that a morphological scoring system is the best strategy for the selection of embryos. Since 2009, when the first time-lapse microscopy system for IVF was approved for clinical use, morphokinetic scoring systems have been shown to improve pregnancy rates further. However, when all different types of time-lapse embryo imaging devices, with or without morphokinetic scoring systems, are compared against conventional embryo assessment for IVF, there is insufficient evidence of a difference in live birth, pregnancy, stillbirth or miscarriage to choose between them. A small prospectively randomized study in 2016 reported poorer embryo quality and more staff time in an automated time-lapse embryo imaging device compared to conventional embryology. Active efforts to develop a more accurate embryo selection analysis based on artificial intelligence and deep learning are underway. The Embryo Ranking Intelligent Classification Algorithm (ERICA) is a clear example. This deep learning software substitutes manual classifications with a ranking system based on an individual embryo's predicted genetic status in a non-invasive fashion. Studies in this area are still pending and current feasibility studies support its potential.", "Embryo transfer can be performed after various durations of embryo culture, conferring different stages in embryogenesis. The main stages at which embryo transfer is performed are the cleavage stage (day 2 to 4 after co-incubation) or the blastocyst stage (day 5 or 6 after co-incubation).\nBecause, in vivo, a cleavage stage embryo still resides in the fallopian tube, and it is known that the nutritional environment of the uterus is different from that of the tube, it is postulated that this may cause stress on the embryo if it is transferred on day 3, resulting in reduced implantation potential. A blastocyst stage embryo does not have this problem, as it is best suited for the uterine environment. [https://www.cochranelibrary.com/cdsr/doi/10.1002/14651858.CD002118.pub5/full]\nEmbryos that reach the day 3 cell stage can be tested for chromosomal or specific genetic defects prior to possible transfer by preimplantation genetic diagnosis (PGD). Transferring at the blastocyst stage confers a significant increase in live birth rate per transfer, but also confers a decreased number of embryos available for transfer and embryo cryopreservation, so the cumulative clinical pregnancy rates are increased with cleavage stage transfer. It is uncertain whether there is any difference in live birth rate between transfer on day two or day three after fertilization.\nMonozygotic twinning is not increased after blastocyst transfer compared with cleavage-stage embryo transfer.\nThere are significantly higher odds of preterm birth (odds ratio 1.3) and congenital anomalies (odds ratio 1.3) among births having reached the blastocyst stage compared with the cleavage stage.
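The odds ratios quoted above are multiplicative on the odds scale, not on the risk scale, so translating them into absolute risk requires a baseline. A minimal sketch of that conversion (the 8% baseline used here is a hypothetical figure for illustration only, not a value from the text):

```python
# Minimal sketch of what "odds ratio 1.3" means in absolute terms: convert a
# baseline risk to odds, scale the odds by the odds ratio, convert back.
# The 8% baseline risk is a hypothetical illustration, not a figure from the text.

def apply_odds_ratio(baseline_risk: float, odds_ratio: float) -> float:
    """Absolute risk after scaling the baseline odds by the given odds ratio."""
    baseline_odds = baseline_risk / (1.0 - baseline_risk)
    new_odds = odds_ratio * baseline_odds
    return new_odds / (1.0 + new_odds)

if __name__ == "__main__":
    print(f"{apply_odds_ratio(0.08, 1.3):.3f}")  # ~0.102, i.e. roughly 10%
```

With that hypothetical baseline, an odds ratio of 1.3 corresponds to an absolute risk of roughly 10%, i.e. about a two-percentage-point increase.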
Because of increased female embryo mortality due to epigenetic modifications induced by extended culture, blastocyst transfer leads to more male births (56.1% male) versus 2 or 3 day transfer (a normal sex ratio of 51.5% male).", "The embryo transfer procedure starts by placing a speculum in the vagina to visualize the cervix, which is cleansed with saline solution or culture media. A transfer catheter is loaded with the embryos and handed to the clinician after confirmation of the patient's identity. The catheter is inserted through the cervical canal and advanced into the uterine cavity. Several types of catheters are used for this process, however, there is good evidence that using a soft vs a hard transfer catheter can increase the chances of clinical pregnancy.\nThere is good and consistent evidence of benefit in ultrasound guidance, that is, making an abdominal ultrasound to ensure correct placement, which is 1–2 cm from the uterine fundus. There is evidence of a significant increase in clinical pregnancy using ultrasound guidance compared with only \"clinical touch\", as well as performing the transfer with hyaluronic acid enriched transfer media. Anesthesia is generally not required. Single embryo transfers in particular require accuracy and precision in placement within the uterine cavity. The optimal target for embryo placement, known as the maximal implantation potential (MIP) point, is identified using 3D/4D ultrasound. However, there is limited evidence that supports deposition of embryos in the midportion of the uterus.\nAfter insertion of the catheter, the contents are expelled and the embryos are deposited. Limited evidence supports making trial transfers before performing the procedure with embryos. After expulsion, the duration that the catheter remains inside the uterus has no effect on pregnancy rates. Limited evidence suggests avoiding negative pressure from the catheter after expulsion. After withdrawal, the catheter is handed to the embryologist, who inspects it for retained embryos.\nIn the process of zygote intrafallopian transfer (ZIFT), eggs are removed from the woman, fertilised, and then placed in the woman's fallopian tubes rather than the uterus.", "Embryos can be either \"fresh\" from fertilized egg cells of the same menstrual cycle, or \"frozen\", that is they have been generated in a preceding cycle and undergone embryo cryopreservation, and are thawed just prior to the transfer, which is then termed \"frozen embryo transfer\" (FET). The outcome from using cryopreserved embryos has uniformly been positive with no increase in birth defects or development abnormalities, also between fresh versus frozen eggs used for intracytoplasmic sperm injection (ICSI). In fact, pregnancy rates are increased following FET, and perinatal outcomes are less affected, compared to embryo transfer in the same cycle as ovarian hyperstimulation was performed. The endometrium is believed to not be optimally prepared for implantation following ovarian hyperstimulation, and therefore frozen embryo transfer avails for a separate cycle to focus on optimizing the chances of successful implantation. Children born from vitrified blastocysts have significantly higher birthweight than those born from non-frozen blastocysts. 
When transferring a frozen-thawed oocyte, the chance of pregnancy is essentially the same whether it is transferred in a natural cycle or one with ovulation induction.\nThere is probably little or no difference between FET and fresh embryo transfers in terms of live birth rate and ongoing pregnancy rate and the risk of ovarian hyperstimulation syndrome may be less using the \"freeze all\" strategy. The risk of having a large-for-gestational-age baby and higher birth rate, in addition to maternal hypertensive disorders of pregnancy may be increased using a \"freeze all\" strategy.", "The technique of selecting only one embryo to transfer to the woman is called elective-single embryo transfer (e-SET) or, when embryos are at the blastocyst stage, it can also be called elective single blastocyst transfer (eSBT). It significantly lowers the risk of multiple pregnancies, compared with e.g. Double Embryo Transfer (DET) or double blastocyst transfer (2BT), with a twinning rate of approximately 3.5% in sET compared with approximately 38% in DET, or 2% in eSBT compared with approximately 25% in 2BT. At the same time, pregnancy rates is not significantly less with eSBT than with 2BT. That is, the cumulative live birth rate associated with single fresh embryo transfer followed by a single frozen and thawed embryo transfer is comparable with that after one cycle of double fresh embryo transfer. Furthermore, SET has better outcomes in terms of mean gestational age at delivery, mode of delivery, birthweight, and risk of neonatal intensive care unit necessity than DET. e-SET of embryos at the cleavage stage reduces the likelihood of live birth by 38% and multiple birth by 94%. Evidence from randomized, controlled trials suggests that increasing the number of e-SET attempts (fresh and/or frozen) results in a cumulative live birth rate similar to that of DET.\nThe usage of single embryo transfer is highest in Sweden (69.4%), but as low as 2.8% in the USA. Access to public funding for ART, availability of good cryopreservation facilities, effective education about the risks of multiple pregnancy, and legislation appear to be the most important factors for regional usage of single embryo transfer. Also, personal choice plays a significant role as many subfertile couples have a strong preference for twins.", "In the human, the uterine lining (endometrium) needs to be appropriately prepared so that the embryo can implant. In a natural cycle the embryo transfer takes place in the luteal phase at a time where the lining is appropriately undeveloped in relation to the status of the present Luteinizing Hormone. In a stimulated or cycle where a \"frozen\" embryo is transferred, the recipient woman could be given first estrogen preparations (about 2 weeks), then a combination of estrogen and progesterone so that the lining becomes receptive for the embryo. The time of receptivity is the implantation window. A scientific review in 2013 came to the conclusion that it is not possible to identify one method of endometrium preparation in frozen embryo transfer as being more effective than another.\nLimited evidence also supports removal of cervical mucus before transfer.", "Embryo transfer refers to a step in the process of assisted reproduction in which embryos are placed into the uterus of a female with the intent to establish a pregnancy. 
This technique - which is often used in connection with in vitro fertilization (IVF) - may be used in humans or in other animals, in which the situations and goals may vary.\nEmbryo transfer can be done at day two or day three, or later in the blastocyst stage, which was first performed in 1984.\nFactors that can affect the success of embryo transfer include endometrial receptivity, embryo quality, and embryo transfer technique.", "A major issue is how many embryos should be transferred, since placement of multiple embryos carries a risk of multiple pregnancy. While in the past physicians placed multiple embryos to increase the chance of pregnancy, this approach has fallen out of favor. Professional societies, and legislatures in many countries, have issued guidelines or laws to curtail the practice. There is low to moderate evidence that making a double embryo transfer during one cycle achieves a higher live birth rate than a single embryo transfer; but making two single embryo transfers in two cycles has the same live birth rate and would avoid multiple pregnancies.\nThe appropriate number of embryos to be transferred depends on the age of the woman, whether it is the first, second or third full IVF cycle attempt, and whether there are top-quality embryos available. According to a guideline from the National Institute for Health and Care Excellence (NICE) in 2013, the number of embryos transferred in a cycle should be chosen as in the following table:", "It is not necessary that the embryo transfer be performed on the female who provided the eggs. Thus another female whose uterus is appropriately prepared can receive the embryo and become pregnant.\nEmbryo transfer may be used where a woman has eggs but no uterus and wants to have a biological baby; she would require the help of a gestational carrier or surrogate to carry the pregnancy. Also, a woman who has no eggs but a uterus may utilize egg donor IVF, in which case another woman would provide eggs for fertilization and the resulting embryos are placed into the uterus of the patient. Fertilization may be performed using the woman's partner's sperm or by using donor sperm. Spare embryos which are created for another couple undergoing IVF treatment but which are then surplus to that couple's needs may also be transferred (called embryo donation). Embryos may be specifically created by using eggs and sperm from donors and these can then be transferred into the uterus of another woman. A surrogate may carry a baby produced by embryo transfer for another couple, even though neither she nor the commissioning couple is biologically related to the child. Third party reproduction is controversial and regulated in many countries. Persons entering gestational surrogacy arrangements must make sense of an entirely new type of relationship that does not fit any of the traditional scripts we use to categorize relations as kinship, friendship, romantic partnership or market relations. Surrogates have the experience of carrying a baby that they conceptualize as not of their own kin, while intended mothers have the experience of waiting through nine months of pregnancy and transitioning to motherhood from outside of the pregnant body. This can lead to new conceptualizations of body and self.", "The first transfer of an embryo from one human to another resulting in pregnancy was reported in July 1983 and subsequently led to the announcement of the first human birth on 3 February 1984.
This procedure was performed at the Harbor UCLA Medical Center under the direction of Dr. John Buster and the University of California at Los Angeles School of Medicine.\nIn the procedure, an embryo that was just beginning to develop was transferred from one woman in whom it had been conceived by artificial insemination to another woman who gave birth to the infant 38 weeks later. The sperm used in the artificial insemination came from the husband of the woman who bore the baby.\nThis scientific breakthrough established standards and became an agent of change for women with infertility and for women who did not want to pass on genetic disorders to their children. Donor embryo transfer has given women a mechanism to become pregnant and give birth to a child that will contain their husband's genetic makeup. Although donor embryo transfer as practiced today has evolved from the original non-surgical method, it now accounts for approximately 5% of in vitro fertilization recorded births.\nPrior to this, thousands of women who were infertile, had adoption as the only path to parenthood. This set the stage to allow open and candid discussion of embryo donation and transfer. This breakthrough has given way to the donation of human embryos as a common practice similar to other donations such as blood and major organ donations. At the time of this announcement the event was captured by major news carriers and fueled healthy debate and discussion on this practice which impacted the future of reproductive medicine by creating a platform for further advancements in woman's health.\nThis work established the technical foundation and legal-ethical framework surrounding the clinical use of human oocyte and embryo donation, a mainstream clinical practice, which has evolved over the past 25 years.", "Fresh blastocyst (day 5 to 6) stage transfer seems to be more effective than cleavage (day 2 or 3) stage transfer in assisted reproductive technologies. The Cochrane study showed a small improvement in live birth rate per couple for blastocyst transfers. This would mean that for a typical rate of 31% in clinics that use early cleavage stage cycles, the rate would increase to 32% to 41% live births if clinics used blastocyst transfer. Recent systematic review showed that along with selection of embryo, the techniques followed during transfer procedure may result in successful pregnancy outcome. The following interventions are supported by the literature for improving pregnancy rates:\nAbdominal ultrasound guidance for embryo transfer\nRemoval of cervical mucus\nUse of soft embryo transfer catheters\nPlacement of embryo transfer tip in the upper or middle (central) area of the uterine cavity, greater than 1 cm from the fundus, for embryo expulsion\nImmediate ambulation once the embryo transfer procedure is completed", "Embryo transfer techniques allow top quality female livestock to have a greater influence on the genetic advancement of a herd or flock in much the same way that artificial insemination has allowed greater use of superior sires. ET also allows the continued use of animals such as competition mares to continue training and showing, while producing foals. The general epidemiological aspects of embryo transfer indicates that the transfer of embryos provides the opportunity to introduce genetic material into populations of livestock while greatly reducing the risk for transmission of infectious diseases. 
Recent developments in the sexing of embryos before transfer and implantation have great potential in the dairy and other livestock industries.\nEmbryo transfer is also used in laboratory mice. For example, embryos of genetically modified strains that are difficult to breed or expensive to maintain may be stored frozen, and only thawed and implanted into a pseudopregnant dam when needed.\nOn February 19, 2020, the first pair of cheetah cubs to be conceived through embryo transfer from a surrogate cheetah mother was born at Columbus Zoo in Ohio.", "The development of various methods of cryopreservation of bovine embryos has made embryo transfer a considerably more efficient technology, no longer dependent on the immediate readiness of suitable recipients. Pregnancy rates are just slightly less than those achieved with fresh embryos. Recently, the use of cryoprotectants such as ethylene glycol has permitted the direct transfer of bovine embryos. The world's first live crossbred bovine calf produced under tropical conditions by Direct Transfer (DT) of an embryo frozen in ethylene glycol freeze media was born on 23 June 1996. Dr. Binoy Sebastian Vettical of Kerala Livestock Development Board Ltd produced the embryo, stored frozen in ethylene glycol freeze media by the slow programmable freezing (SPF) technique, and transferred it directly to a recipient cow immediately after thawing the frozen straw in water, resulting in the birth of this calf. In a study, in vivo produced crossbred bovine embryos stored frozen in ethylene glycol freeze media were transferred directly to recipients under tropical conditions and achieved a pregnancy rate of 50 percent. In a survey of the North American embryo transfer industry, success rates from direct transfer of embryos were as good as those achieved with glycerol. Moreover, in 2011, more than 95% of frozen-thawed embryos were transferred by Direct Transfer.", "Patients usually start progesterone medication after egg (also called oocyte) retrieval. While daily intramuscular injections of progesterone-in-oil (PIO) have been the standard route of administration, PIO injections are not FDA-approved for use in pregnancy. A recent meta-analysis showed that the intravaginal route with an appropriate dose and dosing frequency is equivalent to daily intramuscular injections. In addition, a recent case-matched study comparing vaginal progesterone with PIO injections showed that live birth rates were nearly identical with both methods. A progesterone administration duration of 11 days results in almost the same birth rates as longer durations.\nPatients are also given estrogen medication in some cases after the embryo transfer. Pregnancy testing is typically done two weeks after egg retrieval.", "It is uncertain whether the use of mechanical closure of the cervical canal following embryo transfer has any effect.\nThere is considerable evidence that prolonged bed rest (more than 20 minutes) after embryo transfer is associated with reduced chances of clinical pregnancy.\nUsing hyaluronic acid as an adherence medium for the embryo may increase live birth rates. There may be little or no benefit in having a full bladder, removal of cervical mucus, or flushing of the endometrial or endocervical cavity at the time of embryo transfer. Adjunctive antibiotics in the form of amoxicillin plus clavulanic acid probably do not increase the clinical pregnancy rate compared with no antibiotics.
The use of Atosiban, G-CSF and hCG around the time of embryo transfer showed a trend towards increased clinical pregnancy rate.\nFor frozen-thawed embryo transfer or transfer of embryo from egg donation, no previous ovarian hyperstimulation is required for the recipient before transfer, which can be performed in spontaneous ovulatory cycles. Still, various protocols exist for frozen-thawed embryo transfers as well, such as protocols with ovarian hyperstimulation, protocols in which the endometrium is artificially prepared by estrogen and/or progesterone. There is some evidence that in cycles where the endometrium is artificially prepared by estrogen or progesterone, it may be beneficial to administer an additional drug that suppresses hormone production by the ovaries such as continuous administration of a gonadotropin releasing hormone agonist (GnRHa). For egg donation, there is evidence of a lower pregnancy rate and a higher cycle cancellation rate when the progesterone supplementation in the recipient is commenced prior to oocyte retrieval from the donor, as compared to commenced day of oocyte retrieval or the day after.\nSeminal fluid contains several proteins that interact with epithelial cells of the cervix and uterus, inducing active gestational immune tolerance. There are significantly improved outcomes when women are exposed to seminal plasma around the time of embryo transfer, with statistical significance for clinical pregnancy, but not for ongoing pregnancy or live birth rates with the limited data available.", "An endotransglucosylase is an enzyme which is able to transfer a saccharide unit from one saccharide to another.", "Loss of E-cadherin is considered to be a fundamental event in EMT. Many transcription factors (TFs) that can repress E-cadherin directly or indirectly can be considered as EMT-TF (EMT inducing TFs). SNAI1/Snail 1, SNAI2/Snail 2 (also known as Slug), ZEB1, ZEB2, TCF3 and KLF8 (Kruppel-like factor 8) can bind to the E-cadherin promoter and repress its transcription, whereas factors such as Twist, Goosecoid, TCF4 (also known as E2.2), homeobox protein SIX1 and FOXC2 (fork-head box protein C2) repress E-cadherin indirectly. SNAIL and ZEB factors bind to E-box consensus sequences on the promoter region, while KLF8 binds to promoter through GT boxes. These EMT-TFs not only directly repress E-cadherin, but also repress transcriptionally other junctional proteins, including claudins and desmosomes, thus facilitating EMT. On the other hand, transcription factors such as grainyhead-like protein 2 homologue (GRHL2), and ETS-related transcription factors ELF3 and ELF5 are downregulated during EMT and are found to actively drive MET when overexpressed in mesenchymal cells. Since EMT in cancer progression recaptures EMT in developmental programs, many of the EMT-TFs are involved in promoting metastatic events.\nSeveral signaling pathways (TGF-β, FGF, EGF, HGF, Wnt/beta-catenin and Notch) and hypoxia may induce EMT. In particular, Ras-MAPK has been shown to activate Snail and Slug. Slug triggers the steps of desmosomal disruption, cell spreading, and partial separation at cell–cell borders, which comprise the first and necessary phase of the EMT process. On the other hand, Slug cannot trigger the second phase, which includes the induction of cell motility, repression of the cytokeratin expression, and activation of vimentin expression. 
Snail and Slug are known to regulate the expression of p63 isoforms, another transcription factor that is required for proper development of epithelial structures. The altered expression of p63 isoforms reduced cell–cell adhesion and increased the migratory properties of cancer cells. The p63 factor is involved in inhibiting EMT and reduction of certain p63 isoforms may be important in the development of epithelial cancers. Some of them are known to regulate the expression of cytokeratins. The phosphatidylinositol 3' kinase (PI3K)/AKT axis, Hedgehog signaling pathway, nuclear factor-kappaB and Activating Transcription Factor 2 have also been implicated to be involved in EMT.\nWnt signaling pathway regulates EMT in gastrulation, cardiac valve formation and cancer. Activation of Wnt pathway in breast cancer cells induces the EMT regulator SNAIL and upregulates the mesenchymal marker, vimentin. Also, active Wnt/beta-catenin pathway correlates with poor prognosis in breast cancer patients in the clinic. Similarly, TGF-β activates the expression of SNAIL and ZEB to regulate EMT in heart development, palatogenesis, and cancer. The breast cancer bone metastasis has activated TGF-β signaling, which contributes to the formation of these lesions. However, on the other hand, p53, a well-known tumor suppressor, represses EMT by activating the expression of various microRNAs – miR-200 and miR-34 that inhibit the production of protein ZEB and SNAIL, and thus maintain the epithelial phenotype.", "After the initial stage of embryogenesis, the implantation of the embryo and the initiation of placenta formation are associated with EMT. The trophoectoderm cells undergo EMT to facilitate the invasion of endometrium and appropriate placenta placement, thus enabling nutrient and gas exchange to the embryo. Later in embryogenesis, during gastrulation, EMT allows the cells to ingress in a specific area of the embryo – the primitive streak in amniotes, and the ventral furrow in Drosophila. The cells in this tissue express E-cadherin and apical-basal polarity. Since gastrulation is a very rapid process, E-cadherin is repressed transcriptionally by Twist and SNAI1 (commonly called Snail), and at the protein level by P38 interacting protein. The primitive streak, through invagination, further generates mesoendoderm, which separates to form a mesoderm and an endoderm, again through EMT. Mesenchymal cells from the primitive streak participate also in the formation of many epithelial mesodermal organs, such as notochord as well as somites, through the reverse of EMT, i.e. mesenchymal–epithelial transition. Amphioxus forms an epithelial neural tube and dorsal notochord but does not have the EMT potential of the primitive streak. In higher chordates, the mesenchyme originates out of the primitive streak migrates anteriorly to form the somites and participate with neural crest mesenchyme in formation of the heart mesoderm.\nIn vertebrates, epithelium and mesenchyme are the basic tissue phenotypes. During embryonic development, migratory neural crest cells are generated by EMT involving the epithelial cells of the neuroectoderm. As a result, these cells dissociate from neural folds, gain motility, and disseminate to various parts of the embryo, where they differentiate to many other cell types. Also, craniofacial crest mesenchyme that forms the connective tissue forming the head and face, is formed by neural tube epithelium by EMT. 
EMT takes place during the construction of the vertebral column out of the extracellular matrix, which is to be synthesized by fibroblasts and osteoblasts that encircle the neural tube. The major source of these cells are sclerotome and somite mesenchyme as well as primitive streak. Mesenchymal morphology allows the cells to travel to specific targets in the embryo, where they differentiate and/or induce differentiation of other cells.\nDuring wound healing, keratinocytes at the border of the wound undergo EMT and undergo re-epithelialization or MET when the wound is closed. Snail2 expression at the migratory front influences this state, as its overexpression accelerates wound healing. Similarly, in each menstrual cycle, the ovarian surface epithelium undergoes EMT during post-ovulatory wound healing.", "Not all cells undergo a complete EMT, i.e. losing their cell-cell adhesion and gaining solitary migration characteristics. Instead, most cells undergo partial EMT, a state in which they retain some epithelial traits such as cell-cell adhesion or apico-basal polarity, and gain migratory traits, thus cells in this hybrid epithelial/mesenchymal (E/M) phenotype are endowed with special properties such as collective cell migration. Single-cell tracking contributes to enabling the visualization of morphological transitions during EMT, the discernment of cell migration phenotypes, and the correlation of the heritability of these traits among sister cells. Two mathematical models have been proposed, attempting to explain the emergence of this hybrid E/M phenotype, and its highly likely that different cell lines adopt different hybrid state(s), as shown by experiments in MCF10A, HMLE and H1975 cell lines. Although a hybrid E/M state has been referred to as metastable or transient, recent experiments in H1975 cells suggest that this state can be stably maintained by cells.", "Platelets in the blood have the ability to initiate the induction of EMT in cancer cells. When platelets are recruited to a site in the blood vessel they can release a variety of growth factors (PDGF, VEGF, Angiopoietin-1) and cytokines including the EMT inducer TGF-β. The release of TGF-β by platelets in blood vessels near primary tumors enhances invasiveness and promotes metastasis of cancer cells in the tumor. Studies looking at defective platelets and reduced platelet counts in mouse models have shown that impaired platelet function is associated with decreased metastatic formation. In humans, platelet counts and thrombocytosis within the upper end of the normal range have been associated with advanced, often metastatic, stage cancer in cervical cancer, ovarian cancer, gastric cancer, and esophageal cancer. Although a great deal of research has been applied to studying interactions between tumor cells and platelets, a cancer therapy targeting this interaction has not yet been established. This may be in part due to the redundancy of prothrombotic pathways which would require the use of multiple therapeutic approaches in order to prevent pro-metastatic events via EMT induction in cancer cells by activated platelets.\nTo improve the chances for the development of a cancer metastasis, a cancer cell must avoid detection and targeting by the immune system once it enters the bloodstream. 
Activated platelets have the ability to bind glycoproteins and glycolipids (P-selectin ligands such as PSGL-1) on the surface of cancer cells to form a physical barrier that protects the cancer cell from natural killer cell-mediated lysis in the bloodstream. Furthermore, activated platelets promote the adhesion of cancer cells to activated endothelial cells lining blood vessels using adhesion molecules present on platelets. P-selectin ligands on the surface of cancer cells remain to be elucidated and may serve as potential biomarkers for disease progression in cancer.", "Similar to generation of Cancer Stem Cells, EMT was demonstrated to generate endocrine progenitor cells from human pancreatic islets. Initially, the human islet-derived progenitor cells (hIPCs) were proposed to be better precursors since β-cell progeny in these hIPCs inherit epigenetic marks that define an active insulin promoter region. However, later, another set of experiments suggested that labelled β-cells de-differentiate to a mesenchymal-like phenotype in vitro, but fail to proliferate; thus initiating a debate in 2007.\nSince these studies in human islets lacked lineage-tracing analysis, these findings from irreversibly tagged beta cells in mice were extrapolated to human islets. Thus, using a dual lentiviral and genetic lineage tracing system to label β-cells, it was convincingly demonstrated that adult human islet β-cells undergo EMT and proliferate in vitro. Also, these findings were confirmed in human fetal pancreatic insulin-producing cells, and the mesenchymal cells derived from pancreatic islets can undergo the reverse of EMT – MET – to generate islet-like cell aggregates. Thus, the concept of generating progenitors from insulin-producing cells by EMT or generation of Cancer Stem Cells during EMT in cancer may have potential for replacement therapy in diabetes, and call for drugs targeting inhibition of EMT in cancer.", "Small molecules that are able to inhibit TGF-β induced EMT are under development. Silmitasertib (CX-4945) is a small molecule inhibitor of protein kinase CK2, which has been supported to be linked with TGF-β induced EMT, and is currently in clinical trials for cholangiocarcinoma (bile duct cancer), as well as in preclinical development for hematological and lymphoid malignancies. In January 2017, Silmitasertib was granted orphan drug status by the U.S. Food and Drug Administration for cholangiocarcinoma and is currently in phase II study. Silmitasertib is being developed by Senhwa Biosciences. Another small molecule inhibitor Galunisertib (LY2157299) is a potent TGF-β type I receptor kinase inhibitor that was demonstrated to reduce the size, the growth rate of tumors, and the tumor forming potential in triple negative breast cancer cell lines using mouse xenografts. Galunisertib is currently being developed by Lilly Oncology and is in phase I/II clinical trials for hepatocellular carcinoma, unresectable pancreatic cancer, and malignant glioma. Small molecule inhibitors of EMT are suggested to not act as a replacement for traditional chemotherapeutic agents but are likely to display the greatest efficacy in treating cancers when used in conjunction with them.\nAntagomirs and microRNA mimics have gained interest as a potential source of therapeutics to target EMT induced metastasis in cancer as well as treating many other diseases. 
Antagomirs were first developed to target miR-122, a microRNA that was abundant and specific to the liver, and this discovery has led to the development of other antagomirs that can pair with specific microRNAs present in the tumor microenvironment or in the cancer cells. A microRNA mimic to miR-655 was found to suppress EMT through the targeting of EMT inducing transcription factor ZEB1 and TGF-β receptor 2 in a pancreatic cancer cell line. Overexpression of the miR-655 mimic in the Panc1 cancer cell line upregulated the expression of E-cadherin and suppressed the migration and invasion of mesenchymal-like cancer cells. The use of microRNA mimics to suppress EMT has expanded to other cancer cell lines and holds potential for clinical drug development. However, microRNA mimics and antagomirs suffer from a lack of stability in vivo and lack an accurate delivery system to target these molecules to the tumor cells or tissue for treatment. Improvements to antagomir and microRNA mimic stability through chemical modifications such as locked nucleic acid (LNA) oligonucleotides or peptide nucleic acids (PNA) can prevent the fast clearing of these small molecules by RNases. Delivery of antagomirs and microRNA mimics into cells by enclosing these molecules in liposome-nanoparticles has generated interest however liposome structures suffer from their own drawbacks that will need to be overcome for their effective use as a drug delivery mechanism. These drawbacks of liposome-nanoparticles include nonspecific uptake by cells and induction of immune responses. The role that microRNAs play in cancer development and metastasis is under much scientific investigation and it is yet to be demonstrated whether microRNA mimics or antagomirs may serve as standard clinical treatments to suppress EMT or oncogenic microRNAs in cancers.", "Initiation of metastasis requires invasion, which is enabled by EMT. Carcinoma cells in a primary tumor lose cell-cell adhesion mediated by E-cadherin repression and break through the basement membrane with increased invasive properties, and enter the bloodstream through intravasation. Later, when these circulating tumor cells (CTCs) exit the bloodstream to form micro-metastases, they undergo MET for clonal outgrowth at these metastatic sites. Thus, EMT and MET form the initiation and completion of the invasion-metastasis cascade. At this new metastatic site, the tumor may undergo other processes to optimize growth. For example, EMT has been associated with PD-L1 expression, particularly in lung cancer. Increased levels of PD-L1 suppresses the immune system which allows the cancer to spread more easily. \nEMT confers resistance to oncogene-induced premature senescence. Twist1 and Twist2, as well as ZEB1 protects human cells and mouse embryonic fibroblasts from senescence. Similarly, TGF-β can promote tumor invasion and evasion of immune surveillance at advanced stages. When TGF-β acts on activated Ras-expressing mammary epithelial cells, EMT is favored and apoptosis is inhibited. This effect can be reversed by inducers of epithelial differentiation, such as GATA-3.\nEMT has been shown to be induced by androgen deprivation therapy in metastatic prostate cancer. Activation of EMT programs via inhibition of the androgen axis provides a mechanism by which tumor cells can adapt to promote disease recurrence and progression. 
Brachyury, Axl, MEK, and Aurora kinase A are molecular drivers of these programs, and inhibitors are currently in clinical trials to determine therapeutic applications. Oncogenic PKC-iota can promote melanoma cell invasion by activating Vimentin during EMT. PKC-iota inhibition or knockdown resulted an increase E-cadherin and RhoA levels while decreasing total Vimentin, phosphorylated Vimentin (S39) and Par6 in metastatic melanoma cells. These results suggested that PKC-ι is involved in signaling pathways which upregulate EMT in melanoma.\nEMT has been indicated to be involved in acquiring drug resistance. Gain of EMT markers was found to be associated with the resistance of ovarian carcinoma epithelial cell lines to paclitaxel. Similarly, SNAIL also confers resistance to paclitaxel, adriamycin and radiotherapy by inhibiting p53-mediated apoptosis. Furthermore, inflammation, that has been associated with the progression of cancer and fibrosis, was recently shown to be related to cancer through inflammation-induced EMT. Consequently, EMT enables cells to gain a migratory phenotype, as well as induce multiple immunosuppression, drug resistance, evasion of apoptosis mechanisms.\nSome evidence suggests that cells that undergo EMT gain stem cell-like properties, thus giving rise to Cancer Stem Cells (CSCs). Upon transfection by activated Ras, a subpopulation of cells exhibiting the putative stem cell markers CD44high/CD24low increases with the concomitant induction of EMT. Also, ZEB1 is capable of conferring stem cell-like properties, thus strengthening the relationship between EMT and stemness. Thus, EMT may present increased danger to cancer patients, as EMT not only enables the carcinoma cells to enter the bloodstream, but also endows them with properties of stemness which increases tumorigenic and proliferative potential.\nHowever, recent studies have further shifted the primary effects of EMT away from invasion and metastasis, toward resistance to chemotherapeutic agents. Research on breast cancer and pancreatic cancer both demonstrated no difference in cells' metastatic potential upon acquisition of EMT. These are in agreement with another study showing that the EMT transcription factor TWIST actually requires intact adherens junctions in order to mediate local invasion in breast cancer. The effects of EMT and its relationship to invasion and metastasis may therefore be highly context specific.\nIn urothelial carcinoma cell lines overexpression of HDAC5 inhibits long-term proliferation but can promote epithelial-to-mesenchymal transition (EMT).", "The epithelial–mesenchymal transition (EMT) is a process by which epithelial cells lose their cell polarity and cell–cell adhesion, and gain migratory and invasive properties to become mesenchymal stem cells; these are multipotent stromal cells that can differentiate into a variety of cell types. EMT is essential for numerous developmental processes including mesoderm formation and neural tube formation. EMT has also been shown to occur in wound healing, in organ fibrosis and in the initiation of metastasis in cancer progression.", "Epithelial–mesenchymal transition was first recognized as a feature of embryogenesis by Betty Hay in the 1980s. EMT, and its reverse process, MET (mesenchymal-epithelial transition) are critical for development of many tissues and organs in the developing embryo, and numerous embryonic events such as gastrulation, neural crest formation, heart valve formation, secondary palate development, and myogenesis. 
Epithelial and mesenchymal cells differ in phenotype as well as function, though both share inherent plasticity. Epithelial cells are closely connected to each other by tight junctions, gap junctions and adherens junctions, have an apico-basal polarity, polarization of the actin cytoskeleton and are bound by a basal lamina at their basal surface. Mesenchymal cells, on the other hand, lack this polarization, have a spindle-shaped morphology and interact with each other only through focal points. Epithelial cells express high levels of E-cadherin, whereas mesenchymal cells express those of N-cadherin, fibronectin and vimentin. Thus, EMT entails profound morphological and phenotypic changes to a cell.\nBased on the biological context, EMT has been categorized into 3 types: developmental (Type I), fibrosis and wound healing (Type II), and cancer (Type III).", "Many studies have proposed that induction of EMT is the primary mechanism by which epithelial cancer cells acquire malignant phenotypes that promote metastasis. Drug development targeting the activation of EMT in cancer cells has thus become an aim of pharmaceutical companies.", "EuroCarbDB was an EU-funded initiative for the creation of software and standards for the systematic collection of carbohydrate structures and their experimental data, which was discontinued in 2010 due to lack of funding. The project included a database of known carbohydrate structures and experimental data, specifically mass spectrometry, HPLC and NMR data, accessed via a web interface that provides for browsing, searching and contribution of structures and data to the database. The project also produces a number of associated bioinformatics tools for carbohydrate researchers:\n* GlycanBuilder, a Java applet for drawing glycan structures\n* GlycoWorkbench, a standalone Java application for semi-automated analysis and annotation of glycan mass spectra\n* GlycoPeakfinder, a webapp for calculating glycan compositions from mass data\nThe canonical online version of EuroCarbDB was hosted by the European Bioinformatics Institute at www.ebi.ac.uk up to 2012, and then relax.organ.su.se.\nEuroCarb code has since been incorporated into and extended by UniCarb-DB, which also includes the work of the defunct GlycoSuite database.", "EFDA (1999 — 2013) has been followed by EUROfusion, which is a consortium of national fusion research institutes located in the European Union and Switzerland.\nThe European Union has a strongly coordinated nuclear fusion research programme. At the European level, the so-called [http://europa.eu/legislation_summaries/institutional_affairs/treaties/treaties_euratom_en.htm EURATOM Treaty] is the international legal framework under which member states cooperate in the fields of nuclear fusion research.\nThe [https://web.archive.org/web/20140914125632/http://www.efda.org/ European Fusion Development Agreement] (EFDA) is an agreement between European fusion research institutions and the European Commission (which represents Euratom) to strengthen their coordination and collaboration, and to participate in collective activities in the field of nuclear fusion research.\nIn Europe, fusion research takes place in a great number of research institutes and universities. In each member state of the European Fusion Programme at least one research organisation has a \"Contract of Association\" with the European Commission. 
All the fusion research organisations and institutions of a country are connected to the program through this (these) contracted organisation(s). After the name of the contract, the groups of fusion research organisations of the member states are called \"Associations\".", "In order to achieve its objectives EFDA conducts the following group of activities:\n* Collective use of JET, the world's largest fusion experiment\n* Reinforced coordination of fusion physics and technology research and development in EU laboratories. \n* Training and carrier development of researchers, promoting links to universities and carrying out support actions for the benefit of the fusion programme. \n* EU contributions to international collaborations outside F4E\nEFDA coordinates a range of activities to be carried out by the Associations in 7 key physics and technology areas. The implementation of these activities benefits from structures so called Task Forces and Topical Groups. The European Task Forces on [https://web.archive.org/web/20091222223222/http://www.efda.org/about_efda/activities-plasma_wall_interaction.htm Plasma Wall Interaction] (PWI) and on [https://web.archive.org/web/20091222223205/http://www.efda.org/about_efda/activities-integrated_tokamak_modelling.htm Integrated Tokamak Modelling] (ITM) set up respectively in 2002 and 2003. To strengthen the co-ordination in other key areas five Topical Groups have been set up in 2008: on [https://web.archive.org/web/20090419230108/http://www.efda.org/about_efda/activities-fusion_materials.htm Fusion Materials Development], [https://web.archive.org/web/20091222223153/http://www.efda.org/about_efda/activities-diagnostics.htm Diagnostics], [https://web.archive.org/web/20091222223159/http://www.efda.org/about_efda/activities-heating_and_current_drive.htm Heating and Current Drive], [https://web.archive.org/web/20091222223242/http://www.efda.org/about_efda/activities-transport.htm Transport] and [https://web.archive.org/web/20091222223217/http://www.efda.org/about_efda/activities-magnetohydrodinamics.htm Plasma Stability and Control].", "EFDA has two locations, which each house a so-called Close Support Unit (CSU), responsible for part of EFDA's activities. The EFDA-CSU Garching is located in Garching, near Munich (Germany), and is hosted by the German [http://www.ipp.mpg.de/ Max-Planck Institut für Plasmaphysik]. [https://web.archive.org/web/20090723101818/http://www.jet.efda.org/ EFDA-CSU Culham] is hosted by the [http://www.ccfe.ac.uk/ CCFE] laboratory in Culham (UK), home of the Joint European Torus facilities.\nA large number of scientists and engineers from the associated laboratories work together on different projects of EFDA. The main task of the Close Support Units is to ensure that these diverse activities are integrated in a coordinated European Fusion Programme.\nThe EFDA management consists of the EFDA Leader (Dr. Francesco Romanelli) and the EFDA-Associate Leader for JET (Dr. Francesco Romanelli).", "The European Fusion Development Agreement (EFDA) was created in 1999.\nUntil 2008 EFDA was responsible for the exploitation of the Joint European Torus, the coordination and support of fusion-related research & development activities carried out by the Associations and by European Industry and coordination of the European contribution to large scale international collaborations, such as the ITER-project.\n2008 has brought a significant change to the structure of the European Fusion Programme. 
The change was triggered by the signature of the ITER agreement at the end of 2006. The ITER parties had agreed to provide contributions to ITER through legal entities referred to as \"Domestic Agencies\". Europe has fulfilled its obligation by launching the European Domestic Agency called \"[http://fusionforenergy.europa.eu/ Fusion for Energy]\", also called F4E, in March 2007.\nWith the appearance of F4E EFDA´s role has changed and it has been reorganised. A revised European Fusion Development Agreement entered into force on 1 January 2008 focuses on research coordination with two main objectives: to prepare for the operation and exploitation of ITER and to further develop and consolidate the knowledge base needed for overall fusion development and in particular for DEMO, the first electricity producing experimental fusion power plant being built after ITER.", "Selection favors different traits in captive populations than it does in wild populations, so this may result in adaptations that are beneficial in captivity but are deleterious in the wild. This reduces the success of re-introductions, so it is important to manage captive populations in order to reduce adaptations to captivity. Adaptations to captivity can be reduced by minimizing the number of generations in captivity and by maximizing the number of migrants from wild populations. Minimizing selection on captive populations by creating an environment that is similar to their natural environment is another method of reducing adaptations to captivity, but it is important to find a balance between an environment that minimizes adaptation to captivity and an environment that permits adequate reproduction. Adaptations to captivity can also be reduced by managing the captive population as a series of population fragments. In this management strategy, the captive population is split into several sub-populations or fragments which are maintained separately. Smaller populations have lower adaptive potentials, so the population fragments are less likely to accumulate adaptations associated with captivity. The fragments are maintained separately until inbreeding becomes a concern. Immigrants are then exchanged between the fragments to reduce inbreeding, and then the fragments are managed separately again.", "The storage of seeds in a temperature and moisture controlled environment. This technique is used for taxa with orthodox seeds that tolerate desiccation. Seed bank facilities vary from sealed boxes to climate controlled walk-in freezers or vaults. Taxa with recalcitrant seeds that do not tolerate desiccation are typically not held in seed banks for extended periods of time.", "Plant cryopreservation consist of the storage of seeds, pollen, tissue, or embryos in liquid nitrogen. This method can be used for virtually indefinite storage of material without deterioration over a much greater time-period relative to all other methods of ex situ conservation. Cryopreservation is also used for the conservation of livestock genetics through cryoconservation of animal genetic resources. Technical limitations prevent the cryopreservation of many species, but cryobiology is a field of active research, and many studies concerning plants are underway.", "Ex situ conservation, while helpful in humankinds efforts to sustain and protect our environment, is rarely enough to save a species from extinction. 
It is to be used as a last resort, or as a supplement to in situ conservation because it cannot recreate the habitat as a whole: the entire genetic variation of a species, its symbiotic counterparts, or those elements which, over time, might help a species adapt to its changing surroundings. Instead, ex situ conservation removes the species from its natural ecological contexts, preserving it under semi-isolated conditions whereby natural evolution and adaptation processes are either temporarily halted or altered by introducing the specimen to an unnatural habitat. In the case of cryogenic storage methods, the preserved specimens adaptation processes are (quite literally) frozen altogether. The downside to this is that, when re-released, the species may lack the genetic adaptations and mutations which would allow it to thrive in its ever-changing natural habitat.\nFurthermore, ex situ conservation techniques are often costly, with cryogenic storage being economically infeasible in most cases since species stored in this manner cannot provide a profit but instead slowly drain the financial resources of the government or organization determined to operate them. Seedbanks are ineffective for certain plant genera with recalcitrant seeds that do not remain fertile for long periods of time. Diseases and pests foreign to the species, to which the species has no natural defense, may also cripple crops of protected plants in ex situ plantations and in animals living in ex situ breeding grounds. These factors, combined with the specific environmental needs of many species, some of which are nearly impossible to recreate by man, make ex situ conservation impossible for a great number of the world's endangered flora and fauna.", "Showy Indian clover, Trifolium amoenum, is an example of a species that was thought to be extinct, but was rediscovered in 1993 in the form of a single plant at a site in western Sonoma County. Seeds were harvested and the species grown in ex situ facilities.\nThe Wollemi pine is another example of a plant that is being preserved via ex situ conservation, as they are being grown in nurseries to be sold to the general public.\nThe Orange-bellied parrot, with a wild population of 14 birds as of early February 2017, are being bred in a captive breeding program. The captive population consists of around 300 birds.", "Genetic disorders are often an issue within captive populations due to the fact that the populations are usually established from a small number of founders. In large, outbreeding populations, the frequencies of most deleterious alleles are relatively low, but when a population undergoes a bottleneck during the founding of a captive population, previously rare alleles may survive and increase in number. Further inbreeding within the captive population may also increase the likelihood that deleterious alleles will be expressed due to increasing homozygosity within the population. The high occurrence of genetic disorders within a captive population can threaten both the survival of the captive population and its eventual reintroduction back into the wild. If the genetic disorder is dominant, it may be possible to eliminate the disease completely in a single generation by avoiding breeding of the affected individuals. However, if the genetic disorder is recessive, it may not be possible to completely eliminate the allele due to its presence in unaffected heterozygotes. 
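To see concretely why a recessive allele resists elimination, a standard population-genetics approximation (not a calculation from any study cited here) can be used: if breeding of affected homozygotes is completely prevented and carriers are unaffected, the allele frequency q falls only to q/(1+q) each generation, so rare alleles decline very slowly. A minimal sketch:
```python
# Minimal sketch (illustrative only): decline of a recessive allele when
# homozygous carriers are excluded from breeding each generation.
# Standard deterministic approximation: q_next = q / (1 + q).

def generations_to_halve(q0: float) -> int:
    """Generations needed to halve the frequency of a recessive allele
    when selection acts only against homozygotes (hypothetical start q0)."""
    q, generations = q0, 0
    while q > q0 / 2:
        q = q / (1 + q)   # complete selection against affected homozygotes
        generations += 1
    return generations

if __name__ == "__main__":
    for q0 in (0.20, 0.05, 0.01):          # hypothetical starting frequencies
        print(f"q0 = {q0:.2f}: {generations_to_halve(q0)} generations to halve")
```
Under this approximation, an allele starting at a frequency of 0.01 takes about 100 generations to halve.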
In this case, the best option is to attempt to minimize the frequency of the allele by selectively choosing mating pairs. In the process of eliminating genetic disorders, it is important to consider that when certain individuals are prevented from breeding, alleles and therefore genetic diversity are removed from the population; if these alleles are not present in other individuals, they may be lost completely. Preventing certain individuals from the breeding also reduces the effective population size, which is associated with problems such as the loss of genetic diversity and increased inbreeding.", "Genetic diversity is often lost within captive populations due to the founder effect and subsequent small population sizes. Minimizing the loss of genetic diversity within the captive population is an important component of ex situ conservation and is critical for successful reintroductions and the long term success of the species, since more diverse populations have higher adaptive potential. The loss of genetic diversity due to the founder effect can be minimized by ensuring that the founder population is large enough and genetically representative of the wild population. This is often difficult because removing large numbers of individuals from the wild populations may further reduce the genetic diversity of a species that is already of conservation concern. An alternative to this is collecting sperm from wild individuals and using this via artificial insemination to bring in fresh genetic material. Maximizing the captive population size and the effective population size can decrease the loss of genetic diversity by minimizing the random loss of alleles due to genetic drift. Minimizing the number of generations in captivity is another effective method for reducing the loss of genetic diversity in captive populations.", "Botanical gardens, zoos, and aquariums are the most conventional methods of ex situ conservation. Also in ex situ conservation, all of which house whole, protected specimens for breeding and reintroduction into the wild when necessary and possible. These facilities provide not only housing and care for specimens of endangered species, but also have an educational value. They inform the public of the threatened status of endangered species and of those factors which cause the threat, with the hope of creating public interest in stopping and reversing those factors which jeopardize a species survival in the first place. They are the most publicly visited ex situ' conservation sites, with the WZCS (World Zoo Conservation Strategy) estimating that the 1,100 organized zoos in the world receive more than 600 million visitors annually. Globally there is an estimated total of 2,107 aquaria and zoos in 125 countries. Additionally many private collectors or other not-for-profit groups hold animals and they engage in conservation or reintroduction efforts. Similarly there are approximately 2,000 botanical gardens in 148 counties cultivating or storing an estimated 80,000 taxa of plants.", "Captive populations are subject to problems such as inbreeding depression, loss of genetic diversity and adaptations to captivity. It is important to manage captive populations in a way that minimizes these issues so that the individuals to be introduced will resemble the original founders as closely as possible, which will increase the chances of successful reintroductions. During the initial growth phase, the population size is rapidly expanded until a target population size is reached. 
The target population size is the number of individuals that are required to maintain appropriate levels of genetic diversity, which is generally considered to be 90% of the current genetic diversity after 100 years. The number of individuals required to meet this goal varies based on potential growth rate, effective size, current genetic diversity, and generation time. Once the target population size is reached, the focus shifts to maintaining the population and avoiding genetic issues within the captive population.", "Endangered animal species and breeds are preserved using similar techniques. Animal species can be preserved in genebanks, which consist of cryogenic facilities used to store living sperm, eggs, or embryos. For example, the Zoological Society of San Diego has established a \"frozen zoo\" to store such samples using cryopreservation techniques from more than 355 species, including mammals, reptiles, and birds.\nA potential technique for aiding in reproduction of endangered species is interspecific pregnancy, implanting embryos of an endangered species into the womb of a female of a related species, carrying it to term. It has been carried out for the Spanish ibex.", "Somatic tissue can be stored in vitro for short periods of time. This is done in a light- and temperature-controlled environment that regulates the growth of cells. As an ex situ conservation technique, tissue culture is primarily used for clonal propagation of vegetative tissue or immature seeds. This allows for the proliferation of clonal plants from a relatively small amount of parent tissue.", "Plants are kept under horticultural care, but the environment is managed to near-natural conditions. This occurs with either restored or semi-natural environments. This technique is primarily used for taxa that are rare or in areas where habitat has been severely degraded.", "Plants under horticultural care in a constructed landscape, typically a botanic garden or arboretum. This technique is similar to a field gene bank in that plants are maintained in the ambient environment, but the collections are typically not as genetically diverse or extensive. These collections are susceptible to hybridization, artificial selection, genetic drift, and disease transmission. Species that cannot be conserved by other ex situ techniques are often included in cultivated collections.", "An extensive open-air planting used to maintain genetic diversity of wild, agricultural, or forestry species. Typically, species that are either difficult or impossible to conserve in seed banks are conserved in field gene banks. Field gene banks may also be used to grow and select progeny of species stored by other ex situ techniques.", "Ex situ conservation (literally \"off-site conservation\") is the process of protecting an endangered species, variety, or breed of plant or animal outside its natural habitat. This may be done, for example, by removing part of the population from a threatened habitat and placing it in a new location: an artificial environment similar to the natural habitat of the respective animal and within the care of humans, such as a zoological park or wildlife sanctuary. The degree to which humans control or modify the natural dynamics of the managed population varies widely, and this may include alteration of living environments, reproductive patterns, access to resources, and protection from predation and mortality.\nEx situ management can occur within or outside a species' natural geographic range. 
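The 90%-after-100-years target mentioned above is often translated into a required effective population size with the standard drift approximation H_t/H_0 = (1 − 1/(2Ne))^t, where t is the number of generations; the sketch below inverts that relation for a few hypothetical generation times (illustrative values only, not figures from any management plan).
```python
# Minimal sketch (standard drift approximation, illustrative only):
# expected heterozygosity retained after t generations at effective size Ne is
#   H_t / H_0 = (1 - 1/(2 * Ne)) ** t
# Invert this to estimate the Ne needed to retain a target fraction over 100 years.

def required_ne(retained: float, years: float, generation_time: float) -> float:
    """Effective population size needed so that the expected fraction of
    heterozygosity retained after `years` is at least `retained`."""
    t = years / generation_time                  # number of generations
    per_gen = retained ** (1.0 / t)              # retention needed per generation
    return 1.0 / (2.0 * (1.0 - per_gen))

if __name__ == "__main__":
    for gen_time in (2, 5, 10):                  # hypothetical generation times (years)
        ne = required_ne(retained=0.90, years=100, generation_time=gen_time)
        print(f"generation time {gen_time:>2} y -> Ne ≈ {ne:,.0f}")
```
Longer-lived species pass through fewer generations per century, so they can meet the same retention target with a smaller effective population.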
Individuals maintained ex situ exist outside an ecological niche. This means that they are not under the same selection pressures as wild populations, and they may undergo artificial selection if maintained ex situ' for multiple generations.\nAgricultural biodiversity is also conserved in ex situ collections. This is primarily in the form of gene banks where samples are stored in order to conserve the genetic resources of major crop plants and their wild relatives.", "Managing populations based on minimizing mean kinship values is often an effective way to increase genetic diversity and to avoid inbreeding within captive populations. Kinship is the probability that two alleles will be identical by descent when one allele is taken randomly from each mating individual. The mean kinship value is the average kinship value between a given individual and every other member of the population. Mean kinship values can help determine which individuals should be mated. In choosing individuals for breeding, it is important to choose individuals with the lowest mean kinship values because these individuals are least related to the rest of the population and have the least common alleles. This ensures that rarer alleles are passed on, which helps to increase genetic diversity. It is also important to avoid mating two individuals with very different mean kinship values because such pairings propagate both the rare alleles that are present in the individual with the low mean kinship value as well as the common alleles that are present in the individual with the high mean kinship value. This genetic management technique requires that ancestry is known, so in circumstances where ancestry is unknown, it might be necessary to use molecular genetics such as microsatellite data to help resolve unknowns.", "The main advantages of excimer lamps over other sources of UV and VUV radiation are as follows:\n* high average specific power of UV radiation (up to 1 Watt per cubic centimeter of active medium);\n* high energy of an emitted photon (from 3.5 to 11.5 eV);\n* quasimonochromatic radiation with the spectral full-width at half maximum from 2 to 15 nm;\n* high power spectral density of UV radiation;\n* choice of the wavelength of the spectral maximum of UV radiation for specific purposes (see table);\n* availability of multi-wave UV radiation owing to simultaneous excitation of several kinds of working excimer molecules;\n* absence of visible and IR radiation;\n* instant achievement of the operating mode;\n* low heating of radiating surface;\n* absence of mercury.", "Light sources emitting in the UV spectral region are widely used in techniques involving photo-chemical processes, e.g., curing of inks, adhesives, varnishes and coatings, photolithography, UV induced growth of dielectrics, UV induced surface modification, and cleaning or material deposition. Incoherent sources of UV radiation have some advantages over laser sources because of their lower cost, a huge area of irradiation, and ease of use, especially when large-scale industrial processes are envisaged.\nMercury lamps (λ = 253.7 nm) are widely spread UV sources, but their production, use, and disposal of old lamps pose a threat to human health and environmental pollution. Comparing with commonly used mercury lamps, excimer lamps have a number of advantages. A specific feature of an excimer molecule is the absence of a strong bond in the ground electronic state. 
Thanks to this, high-intensity UV radiation can be extracted from a plasma without significant self-absorption. This makes possible to convert efficiently energy deposited to the active medium into UV radiation.\nExcimer lamps are referred to cold sources of UV radiation since the radiating surface of excimer lamps remains at relatively low temperatures in contrast with traditional UV lamps like a mercury one. Because the medium does not need to be heated, excimer lamps reach their peak output almost immediately after they are turned on.\nRare gas and rare gas-halide excimer lamps generally radiate in the ultraviolet (UV) and vacuum-ultraviolet (VUV) spectral regions (see table). Their unique narrow-band emission characteristics, high quantum efficiency, and high-energy photons make them suitable for applications such as absorption spectroscopy, UV curing, UV coating, disinfection, ozone generation, destruction of gaseous organic waste, photo-etching and photo-deposition and more other applications.\nLight sources emitting photons in the energy range of 3.5–10 eV find applications in many fields due to the ability of high-energy photons to cleave most chemical bonds and kill microbes destroying nucleic acids and disrupting their DNA. Examples of excimer lamp applications include purification and disinfection of drinking water, pool water, air, sewage purification, decontamination of industrial waste, photochemical synthesis and degradation of organic compounds in flue gases and water, photopolymerization of organic coatings and paints, and photo-enhanced chemical vapor deposition. In all cases UV photons excite species or cleave chemical bonds, resulting in the formation of radicals or other chemical reagents, which initiate a required reaction.\nAn excimer lamp has selective action. UV radiation of a given wavelength can selectively excite species or generate required radicals. Such lamps can be useful for photophysical and photochemical processing such as UV curing of paints, varnishes, and adhesives, cleansing and modifying surface properties, polymerization of lacquers and paints, and photo-degradation of a variety of pollutants. Photo-etching of polymers is possible using different wavelengths: 172 nm by xenon excimer, 222 nm by krypton chloride, and 308 nm by xenon chloride. Excimer UV sources can be used for microstructuring large-area polymer surfaces. XeCl-excimer lamps (308 nm) are especially suitable to get tan.\nFluorescence spectroscopy is one of the most common methods for detecting biomolecules. Biomolecules can be labeled with fluoroprobe, which then is excited by a short pulse of UV light, leading to re-emission in the visible spectral region. Detecting this re-emitted light, one can judge the density of labeled molecules. Lanthanide complexes are commonly used as fluoroprobes. Due to their long lifetime, they play an important role in Förster resonance energy transfer (FRET) analysis.\nAt present, excimer lamps are coming into use in ecology, photochemistry, photobiology, medicine, criminalistics, petrochemistry, physics, microelectronics, different engineering tasks, wide-ranging technologies, science, various branches of industry including the food industry, and many others.", "One of the widely used ways to excite emission of excimer molecules is an electric discharge. There are a lot of discharge types used for pumping excimer lamps. 
Some examples are glow discharge, pulsed discharge, capacitive discharge, longitudinal and transverse discharges, volume discharge, spark discharge, and microhollow discharge.\nDielectric barrier discharge (DBD), a type of capacitive discharge, is the most common type used in commercial lamps. A benefit of DBD excimer lamps is that the electrodes are not in direct contact with the active medium (plasma). The absence of interaction between the electrodes and the discharge eliminates electrode corrosion as well as contamination of the active medium by sputtered electrode material, which considerably increases the lifetime of DBD excimer lamps in comparison with others. Moreover, a dielectric barrier discharge ensures effective excitation of a gas mixture over a wide range of working pressures, from a few torr to more than one atmosphere. Excimer lamps can be made with any desired shape of the radiating surface, satisfying the requirements of a specific task.", "Mercury lamps are the most common source of UV radiation due to their high efficiency. However, the use of mercury in these lamps poses disposal and environmental problems. In contrast, excimer lamps based on rare gases are non-hazardous, and excimer lamps containing halogens are more environmentally benign than mercury ones.", "Radiation is produced owing to the spontaneous transition of an excimer molecule from an excited electronic state to the ground state. Excimer and exciplex molecules are not long-lived species. They rapidly decompose, typically within a few nanoseconds, releasing their excitation energy in the form of a UV photon:\n: emission by an excimer molecule: Rg2* → 2Rg + hν,\n: emission by an exciplex molecule: RgX* → Rg + X + hν,\nwhere Rg2* is an excimer molecule, RgX* is an exciplex molecule, Rg is an atom of rare gas, and X is an atom of halogen.", "Excimer lamps are quasimonochromatic light sources operating over a wide range of wavelengths in the ultraviolet (UV) and vacuum ultraviolet (VUV) spectral regions. Operation of an excimer lamp is based on the formation of excited dimers (excimers), which, by spontaneously transiting from the excited state to the ground state, emit UV photons. The spectral maximum of excimer lamp radiation is determined by the working excimer molecule.\nExcimers are diatomic molecules (dimers) or polyatomic molecules that have stable excited electronic states and an unbound or weakly bound (thermally unstable) ground state. Initially, only homonuclear diatomic molecules with a stable excited state but a repulsive ground state were called excimers (excited dimers). The term \"excimer\" was later extended to refer to any polyatomic molecule with a repulsive or weakly bound ground state. One can also come across the term \"exciplex\" (from \"excited complex\"). It is also an excimer molecule, but not a homonuclear dimer. For instance, Xe2*, Kr2*, and Ar2* are excimer molecules, while XeCl*, KrCl*, XeBr*, ArCl*, and Xe2Cl* are referred to as exciplex molecules. Dimers of rare gases and rare-gas–halogen dimers are the most widespread and best-studied excimers. Rare-gas–halide trimers, metal excimers, metal–rare-gas excimers, metal–halide excimers, and rare-gas–oxide excimers are also known, but they are rarely used.\nAn excimer molecule can exist in an excited electronic state for a limited time, as a rule from a few to a few tens of nanoseconds. After that, an excimer molecule transits to the ground electronic state, releasing the energy of internal electronic excitation in the form of a photon. 
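The correspondence between the photon energies discussed here and the emitted UV/VUV wavelengths follows directly from λ = hc/E, or roughly λ[nm] ≈ 1240/E[eV]; a minimal conversion sketch (generic, with representative energies chosen for illustration):
```python
# Minimal sketch: convert photon energy in eV to vacuum wavelength in nm
# using lambda = h*c / E, i.e. lambda[nm] ≈ 1239.84 / E[eV].

HC_EV_NM = 1239.84  # h*c expressed in eV·nm

def wavelength_nm(energy_ev: float) -> float:
    """Vacuum wavelength (nm) of a photon with the given energy (eV)."""
    return HC_EV_NM / energy_ev

if __name__ == "__main__":
    for e in (3.5, 7.2, 10.0):   # representative excimer-band photon energies (eV)
        print(f"{e:>5.1f} eV  ->  {wavelength_nm(e):6.1f} nm")
```
At 7.2 eV this gives about 172 nm, matching the xenon excimer band mentioned earlier, while 3.5 eV and 10 eV bracket the near-UV and VUV ends of the range.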
Owing to the specific electronic structure of an excimer molecule, the energy gap between the lowest bound excited electronic state and the ground state ranges from 3.5 to 10 eV, depending on the kind of excimer molecule, and provides light emission in the UV and VUV spectral regions. A typical spectral characteristic of excimer lamp radiation consists mainly of one intense narrow emission band. About 70–80% of the whole radiation power of an excimer lamp is concentrated in this emission band. The full width at half maximum of the emission band depends on the kind of excimer molecule and the excitation conditions and ranges from 2 to 15 nm. In fact, excimer lamps are sources of quasimonochromatic light. Therefore, such sources are suitable for spectral-selective irradiation and can even replace lasers in some cases.", "An excimer lamp (or excilamp) is a source of ultraviolet light based on spontaneous emission of excimer (exciplex) molecules.", "It is convenient to generate excimer molecules in a plasma. Electrons play an important role in a plasma and, in particular, in the formation of excimer molecules. To efficiently generate excimer molecules, the working medium (plasma) should contain a sufficient concentration of electrons with energies that are high enough to produce the precursors of the excimer molecules, which are mainly excited and ionized rare gas atoms. Introduction of power into a gaseous mixture results in the formation of excited and ionized rare gas atoms as follows:\nElectron excitation\n:Rg + e → Rg* + e,\nDirect electron ionization\n:Rg + e → Rg+ + 2e,\nStepwise ionization\n:Rg* + e → Rg+ + 2e,\nwhere Rg* is a rare gas atom in an excited electronic state, Rg+ is a rare gas ion, and e is an electron.\nWhen there are enough excited rare gas atoms accumulated in a plasma, the excimer molecules are formed by the following reaction:\n:Rg* + Rg + M → Rg2* + M,\nwhere Rg2* is an excimer molecule, and M is a third particle carrying away the excess energy to stabilize the excimer molecule. As a rule, it is a rare gas atom of the working medium.\nAnalyzing this three-body reaction, one can see that the efficiency of the production of excimer molecules is proportional to the concentration of excited rare gas atoms and the square of the concentration of rare gas atoms in the ground state. From this point of view, the concentration of rare gas in the working medium should be as high as possible. A higher concentration of rare gas is achieved by increasing gas pressure. However, an increase in the concentration of rare gas also intensifies the collisional quenching of excimer molecules, resulting in their radiationless decay:\n:Rg2* + Rg → Rg* + 2Rg.\nThe collisional quenching of excimer molecules is negligible as long as the mean time between collisions is much longer than the lifetime of an excimer molecule in an excited electronic state. In practice, the optimal pressure of a working medium is found experimentally, and it amounts to approximately one atmosphere.\nThe mechanism underlying the formation of exciplex molecules (rare gas halides) is somewhat more complicated than the mechanism of excimer molecule formation. The formation of exciplex molecules occurs in two main ways. The first way is a reaction of ion-ion recombination, i.e., recombination of a positive rare gas ion and a negative halogen ion:\n:Rg+ + X− + M → RgX* + M,\nwhere RgX* is an exciplex molecule, and M is a collisional third partner, which is usually an atom or molecule of the gaseous mixture or buffer gas. 
The third particle takes the excess energy and stabilizes the exciplex molecule.\nThe formation of a negative halogen ion results from the interaction of a low-energy electron with a halogen molecule in a process called dissociative electron attachment:\n:X2 + e → X− + X,\nwhere X2 is a halogen molecule, X− is a negative halogen ion, and X is a halogen atom.\nThe pressure of the gaseous mixture is of great importance for efficient production of exciplex molecules through the reaction of ion-ion recombination. The process of ion-ion recombination depends on three-body collisions, and the probability of such a collision increases with pressure. At low pressures of the gaseous mixture (several tens of torr), the reaction of ion-ion recombination is inefficient, while it is quite productive at pressures above 100 Torr.\nThe second way of forming exciplex molecules is the harpoon reaction. In this case, a halogen molecule or halogen-containing compound captures a weakly bound electron of an excited rare gas atom, and an exciplex molecule in an excited electronic state is formed:\n:Rg* + X2 → RgX* + X.\nSince the harpoon reaction is a two-body collision process, it can proceed productively at a pressure significantly lower than that required for a three-body reaction. Thus, the harpoon reaction makes possible the efficient operation of an excimer lamp at low pressures of the gaseous mixture. The collisional quenching of exciplex molecules at low pressures of the gaseous mixture is much weaker than at the pressures required for the ion-ion recombination reaction to proceed productively. Due to this, a low-pressure excimer lamp ensures the maximum efficiency in converting the pumping energy to UV radiation.\nIt should be mentioned that both the harpoon reaction and the reaction of ion-ion recombination proceed simultaneously. The dominance of the first or second reaction is mainly determined by the pressure of the gaseous mixture. The harpoon reaction predominates at low pressures (below 50 Torr), while the reaction of ion-ion recombination prevails at higher pressures (above 100 Torr).\nThe kinetics of the reactions proceeding in a plasma is diverse and is not limited to the processes considered above. The efficiency of producing exciplex molecules depends on the composition of the gaseous mixture and the conditions of its excitation. The type of halogen donor plays an important role. The most effective and widely used halogen-carriers are homonuclear diatomic halogen molecules. More complex halogen compounds such as hydrogen halides, metal halides, and interhalogens are also used as halogen-carriers, but to a lesser extent.\nA noteworthy class of halogen-carriers is the alkali halides. A feature of alkali halides is the similarity of their chemical bonding to that of exciplex molecules in excited electronic states: exciplex molecules in excited electronic states are characterized by an ionic bond, as are alkali halides in the ground state. This opens up alternative mechanisms for the formation of exciplex molecules, namely substitution reactions:\n:Rg* + AX → RgX* + A,\n:Rg+ + AX → RgX* + A+,\nwhere AX is an alkali halide molecule, A is an alkali metal atom, and A+ is an alkali metal ion.\nThese mechanisms of exciplex molecule formation are fundamentally different from the ion-ion recombination and harpoon reactions. 
An exciplex molecule is formed simply by replacing an atom/ion of alkali metal from an alkali halide molecule by an excited atom/ion of rare gas.\nAn advantage of using alkali halides is that both the substitution reactions can simultaneously proceed at low pressures with comparable productivity. Moreover, both excited atoms and ions of rare gas are effectively used in the production of exciplex molecules in contrast to excimer lamps using other halogen-carriers. It is of importance because the ionization and excitation of rare gas consume most of the introduced energy. Since the reaction of ion-ion recombination and harpoon reaction dominate depending on the pressure of a gaseous mixture, the generation of rare gas ions is unprofitable at low pressures, while the excitation of rare gas is unreasonable at high pressures. A drawback of using alkali halides is high temperatures required for providing the necessary concentration of alkali halide molecules in a gaseous mixture. Despite this, the use of alkali halides as a halogen-carrier is especially promising in the development of exciplex lasers operating at low pressures.", "A low-FODMAP diet consists of the global restriction of all fermentable carbohydrates (FODMAPs), and is recommended only for a short time. A low-FODMAP diet is recommended for managing patients with irritable bowel syndrome (IBS) and can reduce digestive symptoms of IBS, including bloating and flatulence.\nSeveral studies have found a low-FODMAP diet to improve digestive symptoms in adults with irritable bowel syndrome, but its long-term use can have negative effects, because it has a detrimental impact on the gut microbiota and metabolome. It should only be used for short periods and under the advice of a specialist. More study is needed to evaluate its effectiveness in children with irritable bowel syndrome. Small studies (which are susceptible to bias) show little evidence of its effectiveness in treating functional symptoms of inflammatory bowel disease (IBD). More study is needed to assess the true impact of this diet on health.", "People following a low-FODMAP diet may be able to tolerate moderate amounts of fructose and lactose, particularly if they have lactase persistence.", "FODMAPs present in gluten-containing grains have been identified as a possible cause of gastrointestinal symptoms in people with non-celiac gluten sensitivity, either by themselves, or in combination effect with gluten and other proteins in gluten-containing cereals, such as amylase-trypsin inhibitors (ATIs). The amount of fructans in these cereals is small. In rye, they account for 3.6–6.6% of dry matter, 0.7–2.9% in wheat, and barley contains only trace amounts. They are only minor sources of FODMAPs when eaten in common dietary amounts. Wheat and rye may comprise a major source of fructans when consumed in large amounts.\nIn a 2018 double-blind, crossover research study on 59 persons on a gluten-free diet with challenges of gluten, fructans, or placebo, intestinal symptoms (specifically bloating) were (borderline) significantly higher after challenge with fructans, in comparison with gluten proteins (P=0.049). Although the differences between the three interventions were small, the authors concluded that fructans are more likely to cause gastrointestinal symptoms in non-celiac gluten sensitivity than gluten. 
Fructans used in the study were extracted from chicory root, and the results may or may not apply to wheat fructans.\nA 2018 review concluded that although fructan intolerance may play a role in non-celiac gluten sensitivity, it only explains some gastrointestinal symptoms. Fructan intolerance does not explain the extra-digestive symptoms that people with non-celiac gluten sensitivity may develop, such as neurological disorders, fibromyalgia, psychological disturbances, and dermatitis. This review also found that FODMAPs may cause digestive symptoms when the person is hypersensitive to luminal distension.\nA 2019 review concluded that wheat fructans could cause certain IBS-like symptoms, such as bloating, but that they are not likely to cause immune activation or extra-digestive symptoms, as many people with non-celiac gluten sensitivity reported resolution of their symptoms after removing gluten-containing cereals. These same participants continued to eat fruits and vegetables with high FODMAP content without issue.", "Polyols are found naturally in mushrooms, some fruit (particularly stone fruits), including apples, apricots, avocados, blackberries, cherries, lychees, nectarines, peaches, pears, plums, prunes, watermelon, and in some vegetables, including cauliflower, snow peas, and mange-tout peas. Cabbage, chicory, and fennel contain moderate amounts, but may be eaten in a low-FODMAP diet if the advised portion size is observed.\nPolyols, specifically sugar alcohols, used as artificial sweeteners in commercially prepared food, beverages, and chewing gum, include isomalt, maltitol, mannitol, sorbitol, and xylitol.", "Pulses and beans are the main dietary sources (although green beans, canned lentils, sprouted mung beans, tofu (not silken), and tempeh contain comparatively low amounts). Supplements of the enzyme alpha-galactosidase may reduce symptoms, assuming the enzyme product does not contain other FODMAPs, such as polyol artificial sweeteners.", "Sources of fructans include wheat, rye, barley, onion, garlic, Jerusalem and globe artichoke, beetroot, dandelion leaves, the white part of leeks, the white part of spring onion, brussels sprouts, savoy cabbage, and prebiotics such as fructooligosaccharides (FOS), oligofructose and inulin. Asparagus, fennel, red cabbage, and radicchio contain moderate amounts but may be eaten if the advised portion size is observed.", "The significance of sources of FODMAPs varies through differences in dietary groups such as geography, ethnicity, and other factors. Commonly used FODMAPs comprise the following:\n* oligosaccharides, including fructans and galactooligosaccharides\n* disaccharides, including lactose\n* monosaccharides, including fructose\n* polyols, including sorbitol, xylitol, and mannitol", "Some FODMAPs, such as fructose, are readily absorbed in the small intestine of humans via GLUT receptors. Absorption thus depends on the appropriate expression and delivery of these receptors in the intestinal enterocyte to both the apical surface, contacting the lumen of the intestine (e.g., GLUT5), and to the basal membrane, contacting the blood (e.g., GLUT2). Improper absorption of these FODMAPS in the small intestine leaves them available for absorption by gut flora. 
The resultant metabolism by the gut flora leads to the production of gas and potentially results in bloating and flatulence.\nAlthough FODMAPs can cause certain digestive discomfort in some people, not only do they not cause intestinal inflammation, but they help prevent it because they produce beneficial alterations in the intestinal flora that contribute to maintaining good colon health.\nFODMAPs are not the cause of irritable bowel syndrome or other functional gastrointestinal disorders, but rather a person develops symptoms when the underlying bowel response is exaggerated or abnormal.\nFructose malabsorption and lactose intolerance may produce IBS symptoms through the same mechanism, but unlike other FODMAPs, poor absorption of fructose is found in only a minority of people. Lactose intolerance is found in most adults, except for specific geographic populations, notably those of European descent. Many who benefit from a low FODMAP diet need not restrict fructose or lactose. It is possible to identify these two conditions with hydrogen and methane breath testing, thus eliminating the necessity for dietary compliance.", "FODMAPs or fermentable oligosaccharides, disaccharides, monosaccharides, and polyols are short-chain carbohydrates that are poorly absorbed in the small intestine and ferment in the colon. They include short-chain oligosaccharide polymers of fructose (fructans) and galactooligosaccharides (GOS, stachyose, raffinose), disaccharides (lactose), monosaccharides (fructose), and sugar alcohols (polyols), such as sorbitol, mannitol, xylitol, and maltitol. Most FODMAPs are naturally present in food and the human diet, but the polyols may be added artificially in commercially prepared foods and beverages.\nFODMAPs may cause digestive discomfort in some people. The reasons are hypersensitivity to luminal distension and/or a proclivity to excess water retention and gas production and accumulation, but they do not cause intestinal inflammation. Naturally occurring FODMAPs may help avert digestive discomfort for some people because they produce beneficial alterations in the gut flora. They are not the cause of these disorders, but a low-FODMAP diet, restricting FODMAPs, might help to improve digestive symptoms in adults with irritable bowel syndrome (IBS) and other functional gastrointestinal disorders (FGID). Avoiding all FODMAPs long-term may have a detrimental impact on the gut microbiota and metabolome.\nFODMAPs, especially fructans, are present in small amounts in gluten-containing grains and have been identified as a possible cause of symptoms in people with non-celiac gluten sensitivity. They are only minor sources of FODMAPs when eaten in the usual standard quantities in the daily diet. As of 2019, reviews conclude that although FODMAPs present in wheat and related grains may play a role in non-celiac gluten sensitivity, they only explain certain gastrointestinal symptoms, such as bloating, but not the extra-digestive symptoms that people with non-celiac gluten sensitivity may develop, such as neurological disorders, fibromyalgia, psychological disturbances, and dermatitis. Consuming a low FODMAP diet without a previous medical evaluation could cause health risks because it can ameliorate and mask digestive symptoms of celiac disease, delaying or avoiding its correct diagnosis and therapy.", "When ferrimagnets are exposed to an external magnetic field, they display what is called magnetic hysteresis, where magnetic behavior depends on the history of the magnet. 
They also exhibit a saturation magnetization, M_s; this magnetization is reached when the external field is strong enough to make all the moments align in the same direction. When this point is reached, the magnetization cannot increase, as there are no more moments to align. When the external field is removed, the magnetization of the ferrimagnet does not disappear, but a nonzero remanent magnetization remains. This effect is often used in applications of magnets. If an external field in the opposite direction is applied subsequently, the magnet will demagnetize further until it eventually reaches a magnetization of −M_s, saturation in the opposite direction. This behavior results in what is called a hysteresis loop.", "Unlike ferromagnetism, the magnetization curves of ferrimagnetism can take many different shapes depending on the strength of the interactions and the relative abundance of atoms. The most notable instances of this property are that the direction of magnetization can reverse while heating a ferrimagnetic material from absolute zero to its critical temperature, and that the strength of magnetization can increase while heating a ferrimagnetic material to the critical temperature, both of which cannot occur for ferromagnetic materials. These temperature dependencies have also been experimentally observed in NiFeCrO and LiFeCeO.\nA temperature lower than the Curie temperature, but at which the opposing magnetic moments are equal (resulting in a net magnetic moment of zero), is called a magnetization compensation point. This compensation point is observed easily in garnets and rare-earth–transition-metal alloys (RE-TM). Furthermore, ferrimagnets may also have an angular momentum compensation point, at which the net angular momentum vanishes. This compensation point is crucial for achieving fast magnetization reversal in magnetic-memory devices.", "The oldest known magnetic material, magnetite, is a ferrimagnetic substance. The tetrahedral and octahedral sites of its crystal structure exhibit opposite spin. Other known ferrimagnetic materials include yttrium iron garnet (YIG); cubic ferrites composed of iron oxides with other elements such as aluminum, cobalt, nickel, manganese, and zinc; and hexagonal or spinel-type ferrites, including rhenium ferrite (ReFeO), PbFeO, BaFeO, and pyrrhotite (FeS).\nFerrimagnetism can also occur in single-molecule magnets. A classic example is a dodecanuclear manganese molecule with an effective spin S = 10 derived from antiferromagnetic interaction of Mn(IV) metal centers with Mn(III) and Mn(II) metal centers.", "Ferrimagnetic materials have high resistivity and anisotropic properties. The anisotropy is actually induced by an external applied field. When this applied field aligns with the magnetic dipoles, it causes a net magnetic dipole moment and causes the magnetic dipoles to precess at a frequency controlled by the applied field, called the Larmor or precession frequency. As a particular example, a microwave signal circularly polarized in the same direction as this precession interacts strongly with the magnetic dipole moments; when it is polarized in the opposite direction, the interaction is very low. When the interaction is strong, the microwave signal can pass through the material. This directional property is used in the construction of microwave devices like isolators, circulators, and gyrators. Ferrimagnetic materials are also used to produce optical isolators and circulators. 
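For a rough sense of the precession frequencies involved: with a free-electron-like gyromagnetic ratio, the Larmor frequency is about 28 GHz per tesla, so bias fields of a few tenths of a tesla already place the precession in the microwave band. The sketch below assumes the free-electron value, which real ferrimagnetic materials only approximate:
```python
# Minimal sketch (assumes the free-electron gyromagnetic ratio; real ferrimagnetic
# materials deviate from this): Larmor precession frequency f = (gamma / 2*pi) * B.

GAMMA_OVER_2PI_GHZ_PER_T = 28.02  # free-electron value, GHz per tesla

def larmor_frequency_ghz(field_tesla: float) -> float:
    """Precession frequency (GHz) in an applied field of the given strength (T)."""
    return GAMMA_OVER_2PI_GHZ_PER_T * field_tesla

if __name__ == "__main__":
    for b in (0.05, 0.1, 0.35):   # hypothetical bias fields in tesla
        print(f"B = {b:.2f} T  ->  f ≈ {larmor_frequency_ghz(b):5.2f} GHz")
```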
Ferrimagnetic minerals in various rock types are used to study ancient geomagnetic properties of Earth and other planets. That field of study is known as paleomagnetism. In addition, it has been shown that ferrimagnets such as magnetite can be used for thermal energy storage.", "Ferrimagnetism has the same physical origins as ferromagnetism and antiferromagnetism. In ferrimagnetic materials the magnetization is also caused by a combination of dipole–dipole interactions and exchange interactions resulting from the Pauli exclusion principle. The main difference is that in ferrimagnetic materials there are different types of atoms in the material's unit cell. For example, the atoms with a smaller magnetic moment can point in the opposite direction of the larger moments. This arrangement is similar to that present in antiferromagnetic materials, but in ferrimagnetic materials the net moment is nonzero because the opposed moments differ in magnitude.\nFerrimagnets have a critical temperature above which they become paramagnetic, just as ferromagnets do. At this temperature (called the Curie temperature) there is a second-order phase transition, and the system can no longer maintain a spontaneous magnetization. This is because at higher temperatures the thermal motion is strong enough that it exceeds the tendency of the dipoles to align.", "Until the twentieth century, all naturally occurring magnetic substances were called ferromagnets. In 1936, Louis Néel published a paper proposing the existence of a new form of cooperative magnetism he called antiferromagnetism. While working with MnSb, French physicist Charles Guillaud discovered that the current theories on magnetism were not adequate to explain the behavior of the material, and made a model to explain the behavior. In 1948, Néel published a paper about a third type of cooperative magnetism, based on the assumptions in Guillaud's model. He called it ferrimagnetism. In 1970, Néel was awarded the Nobel Prize in Physics for his work on magnetism.", "There are various ways to describe ferrimagnets, the simplest of which is with mean-field theory. In mean-field theory the field acting on the atoms can be written as\n:H = H_0 + H_m,\nwhere H_0 is the applied magnetic field, and H_m is the field caused by the interactions between the atoms. The following assumption then is\n:H_m = γ M.\nHere M is the average magnetization of the lattice, and γ is the molecular field coefficient. When we allow γ and M to be position- and orientation-dependent, we can then write it in the form\n:H_i = H_0 + Σ_k γ_ik M_k,\nwhere H_i is the field acting on the i-th substructure, and γ_ik is the molecular field coefficient between the i-th and k-th substructures. For a diatomic lattice we can designate two types of sites, a and b. We can designate N the number of magnetic ions per unit volume, λ the fraction of the magnetic ions on the a sites, and μ = 1 − λ the fraction on the b sites. This then gives\n:H_a = H_0 + γ_aa M_a + γ_ab M_b,\n:H_b = H_0 + γ_ba M_a + γ_bb M_b.\nIt can be shown that γ_ab = γ_ba and that γ_aa ≠ γ_bb unless the a and b structures are identical. A positive γ_ab favors a parallel alignment of M_a and M_b, while a negative γ_ab favors an anti-parallel alignment. For ferrimagnets, γ_ab < 0, so it will be convenient to take γ_ab as a positive quantity and write the minus sign explicitly in front of it. For the total fields on a and b this then gives\n:H_a = H_0 + γ_aa M_a − γ_ab M_b,\n:H_b = H_0 − γ_ab M_a + γ_bb M_b.\nFurthermore, we will introduce the parameters α = γ_aa/γ_ab and β = γ_bb/γ_ab, which give the ratio between the strengths of the interactions. At last we will introduce the reduced magnetizations\n:σ_a = M_a/(λ N g μ_B S_a),\n:σ_b = M_b/(μ N g μ_B S_b),\nwith S_i the spin of the i-th element, g the Landé g-factor, and μ_B the Bohr magneton. 
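The definitions above lead to a pair of coupled equations for σ_a and σ_b that is usually solved self-consistently. The following sketch illustrates that procedure for the simplest case S_a = S_b = 1/2, where the Brillouin function reduces to a hyperbolic tangent; the coupling ratios, site fractions, and reduced-temperature scale are arbitrary illustrative choices, not values from any real material:
```python
# Minimal numerical sketch (illustrative only): self-consistent solution of a
# two-sublattice mean-field model with spins S_a = S_b = 1/2, for which the
# Brillouin function reduces to tanh. All physical constants are absorbed into
# the reduced temperature t; parameters below are assumed, not measured values.
import math

ALPHA, BETA = 0.6, 0.4      # gamma_aa/gamma_ab and gamma_bb/gamma_ab (assumed)
LAM = 0.4                   # fraction of magnetic ions on a sites (assumed)
MU = 1.0 - LAM              # fraction on b sites

def sublattice_magnetizations(t: float, n_iter: int = 2000, mix: float = 0.3):
    """Iterate the coupled tanh equations at reduced temperature t."""
    sa, sb = 1.0, -1.0                      # start from antiparallel saturation
    for _ in range(n_iter):
        ha = ALPHA * LAM * sa - MU * sb     # reduced field on a sites
        hb = -LAM * sa + BETA * MU * sb     # reduced field on b sites
        sa = (1 - mix) * sa + mix * math.tanh(ha / t)   # damped update
        sb = (1 - mix) * sb + mix * math.tanh(hb / t)
    return sa, sb

if __name__ == "__main__":
    for t in (0.05, 0.3, 0.6, 0.9):
        sa, sb = sublattice_magnetizations(t)
        net = LAM * sa + MU * sb            # signed net reduced magnetization
        print(f"t = {t:.2f}: sigma_a = {sa:+.3f}, sigma_b = {sb:+.3f}, net = {net:+.3f}")
```
Below the critical reduced temperature the two sublattices settle into unequal, opposed values of σ, leaving the small net magnetization characteristic of a ferrimagnet; above it both iterate to zero.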
This then gives for the fields:\n:H_a = H_0 + γ_ab N g μ_B (α λ S_a σ_a − μ S_b σ_b),\n:H_b = H_0 + γ_ab N g μ_B (−λ S_a σ_a + β μ S_b σ_b).\nThe solutions to these equations (omitted here) are then given by\n:σ_a = B_S(g μ_B S_a H_a / (k_B T)) and σ_b = B_S(g μ_B S_b H_b / (k_B T)),\nwhere B_S is the Brillouin function for spin S and k_B is the Boltzmann constant. The simplest case to solve now is S_a = S_b = 1/2. Since B_1/2(x) = tanh(x), this then gives a pair of coupled equations in which each reduced magnetization appears inside a hyperbolic tangent of the corresponding field. These equations do not have a known analytical solution, so they must be solved numerically to find the temperature dependence of σ_a and σ_b.", "A ferrimagnetic material is a material that has populations of atoms with opposing magnetic moments, as in antiferromagnetism, but these moments are unequal in magnitude, so a spontaneous magnetization remains. This can, for example, occur when the populations consist of different atoms or ions (such as Fe2+ and Fe3+).\nLike ferromagnetic substances, ferrimagnetic substances are attracted by magnets and can be magnetized to make permanent magnets. The oldest known magnetic substance, magnetite (Fe3O4), was classified as a ferromagnet before Louis Néel discovered ferrimagnetism in 1948. Since the discovery, numerous uses have been found for ferrimagnetic materials, such as hard-drive platters and biomedical applications.", "A fibrin scaffold is a network of protein that holds together and supports a variety of living tissues. It is produced naturally by the body after injury, but it can also be engineered as a tissue substitute to speed healing. The scaffold consists of naturally occurring biomaterials composed of a cross-linked fibrin network and has broad use in biomedical applications.\nFibrin is formed from the blood proteins fibrinogen and thrombin, which participate in blood clotting. Fibrin glue or fibrin sealant is also referred to as a fibrin-based scaffold and is used to control surgical bleeding, speed wound healing, seal off hollow body organs or cover holes made by standard sutures, and provide slow-release delivery of medications like antibiotics to exposed tissues.\nFibrin scaffold use is helpful in repairing injuries to the urinary tract, liver, lung, spleen, kidney, and heart. In biomedical research, fibrin scaffolds have been used to fill bone cavities and to repair neurons, heart valves, vascular grafts, and the surface of the eye.\nThe complexity of biological systems requires customized care to sustain their function. When they are no longer able to perform their purpose, delivery of new cells and biological cues is provided by a scaffold material. Fibrin scaffolds have many advantageous properties: they are biocompatible, biodegradable, and easily processable. Furthermore, they can be of autologous origin and can be fashioned into various sizes and shapes. Fibrin's inherent role in wound healing is helpful in surgical applications. Many factors can be bound to a fibrin scaffold and released in a cell-controlled manner. Its stiffness can be tuned by changing the fibrin concentration according to the needs of surrounding or encapsulated cells. Additional mechanical properties can be obtained by combining fibrin with other suitable scaffolds. Each biomedical application has its own characteristic requirements for different kinds of tissue, and recent studies with fibrin scaffolds point towards faster recovery, fewer complications, and longer-lasting solutions.", "In orthopedics, minimally invasive methods are desired, and improving injectable systems is a leading aim. Bone cavities can be filled by materials that polymerize after injection, allowing them to adapt to the shape of the cavity. 
Shorter surgical operation time, minimum large muscle retraction harm, smaller scar size, less pain after operation and consequently faster recovery can be obtained by using such systems. In a study to evaluate if injectable fibrin scaffold is helpful for transplantation of bone marrow stromal cell (BMSC) when central nervous system (CNS) tissue is damaged, Yasuda et al. found that BMSC has extended survival, migration and differentiation after transplantation to rat cortical lesion although there is complete degradation of fibrin matrix after four weeks. Another study to assess if fibrin glue enriched with platelet is better than just platelet rich plasma (PRP) on bone formation was conducted. Each combined with bone marrow mesenchymal stem cells and bone morphogenetic protein 2 (BMP-2) are injected into the subcutaneous space. Results shows that fibrin glue enriched with platelet has better osteogenic properties when compared to PRP. To initiate and speed up tissue repair and regeneration, platelet-rich fibrin gels are ideal since they have a high concentration of platelet releasing growth factors and bioactive proteins. Addition of fibrin glue to calcium phosphate granules has promising results leading to faster bone repair by inducing mineralization and possible effects of fibrin on angiogenesis, cell attachment and proliferation.", "Valvular heart disease is a major cause of death globally. Both mechanical valves and fixed biological xenograft or homografts used clinically have many drawbacks. One study focused on fibrin-based heart valves to assess structure and mechanical durability on sheep revealed promising potential for patient originated valve replacements. From autologous arterial-derived cells and fibrin scaffold, tissue engineered heart valves are formed, then mechanically conditioned and transplanted into the pulmonary trunk of the same animals. The preliminary result are potentially hopeful towards autologous heart valve production.", "In atherosclerosis, a severe disease in modern society, coronary blood vessels occlude. These vessels have to be freed and held open i.e. by stents. Unfortunately after certain time these vessels close again and have to be bypassed to allow for upkeep of circulation. Usually autologous vessels from the patient or synthetic polymer grafts are used for this purpose. Both options have disadvantages. Firstly there are only few autologous vessels available in a human body that might be of low quality, considering the health status of the patient. The synthetic polymer based grafts on the other hand often have insufficient haemocompatibility and thus rapidly occlude - a problem that is especially prone in small calibre grafts. In this context the fibrin-gel-based tissue engineering of autologous vessel substitutes is a very promising approach to overcome the current problems. Cells and fibrin are isolated by a low invasive procedure from the patient and shaped in individual moulds to meet the required dimensions. Additional pre-cultivation in a specialized bioreactor is inevitable to ensure appropriate properties of the graft.", "Bullous keratopathy that is characterized by corneal stromal edema related to cell loss and endothelial decompensation as well as subepithelial fibrosis and corneal vascularization in further cases, results vision problems due to loss of corneal transparency. Fibrin glue is used as a sutureless method onto the corneal surface to fix amniotic membrane that is cryopreserved. 
Complete re-epithelialization of the ocular surface with no symptoms is achieved in 3 weeks. Results show that fibrin glue fixation is easy, reliable and efficient on the corneal surface.", "The polymerization time of fibrinogen and thrombin is affected primarily by thrombin concentration and temperature, while fibrinogen concentration has a minor effect. Fibrin gel characterization by scanning electron microscopy reveals that thick fibers make up a dense structure at lower fibrinogen concentrations (5 mg/ml), while thinner fibers and a looser gel are obtained as the fibrinogen concentration increases (20 mg/ml); an increase in thrombin concentration (from 0.5 U/ml to 5 U/ml) has no such significant effect, although the fibers steadily get thinner.\nFibrin gels can be enriched by addition of other extracellular matrix (ECM) components such as fibronectin, vitronectin, laminin and collagen. These can be linked covalently to the fibrin scaffold by reactions catalyzed by transglutaminase. Laminin-derived substrate amino acid sequences for transglutaminase include IKVAV, YIGSR and RNIAEIIKDI. The collagen-derived sequence is DGEA, and the RGD sequence derived from many other ECM proteins is another example. The heparin-binding sequences KβAFAKLAARLYRKA, RβAFARLAARLYRRA, KHKGRDVILKKDVR and YKKIIKKL are from antithrombin III, modified antithrombin III, neural cell adhesion molecule and platelet factor 4, respectively. Heparin-binding growth factors can be attached to heparin-binding domains via heparin. As a result, a reservoir that liberates growth factors over an extended time can be provided instead of passive diffusion. Acidic and basic fibroblast growth factor, neurotrophin 3, transforming growth factor beta 1, transforming growth factor beta 2, nerve growth factor and brain-derived neurotrophic factor are examples of such growth factors.\nFor some tissues like cartilage, highly dense polymeric scaffolds such as polyethylene glycol (PEG) are essential because of mechanical stress; these are combined with natural biodegradable cell-adhesive scaffolds, since cells cannot attach to synthetic polymers and receive the proper signals for normal cell function. A recent study examined various scaffold combinations with PEG-based hydrogels to assess the chondrogenic response to dynamic strain stimulation. PEG-proteoglycan, PEG-fibrinogen and PEG-albumin conjugates, as well as PEG-only hydrogels, are used to evaluate the mechanical effect on bovine chondrocytes using a pneumatic reactor system. The most substantial increase in stiffness is observed in the PEG-fibrinogen conjugated hydrogel after 28 days of mechanical stimulation.", "The use of fibrin hydrogel in gene delivery (transfection) has been studied to address essential factors controlling the delivery process, such as fibrinogen and pDNA concentrations, as well as the significance of cell-mediated fibrin degradation, with a view to cell-transfection microarray engineering or in vivo gene transfer. Gene transfer is more successful in-gel than on-gel, probably because of the proximity of lipoplexes and target cells. Less cytotoxicity is observed owing to the reduced use of transfection agents like lipofectamine and the steady degradation of fibrin.
Consequently, each cell type requires optimization of fibrinogen and pDNA concentrations for higher transfection yields, and studies towards high-throughput transfection microarray experiments are promising.", "Because fibrin fulfills the mechanical requirements of neuronal growth without initiating glial proliferation, it can potentially be used in neuronal wound healing even without the need for growth factors or similar constituents. Neurons and astrocytes, two major cell types of the central nervous system, show different responses to differences in matrix stiffness. Neuronal development of precursor cells is maintained by gels with a low elastic modulus. When the stiffness of the matrix is greater than that of normal brain, extension of spinal cord and cortical brain neurons is inhibited, since neurite extension and branch formation take place on soft materials. Salmon fibrin promotes neurite growth best and is more proteolysis-resistant than mammalian fibrins. Because salmon fibrinogen can clot at temperatures down to 0 °C, whereas human fibrinogen polymerizes slowly below 37 °C, this can be an advantage in cooler surgical settings. Therefore, salmon fibrin can be a useful biomaterial for treating central nervous system damage.\nIn a recent study, a fibrin scaffold was used with glial-derived neurotrophic factor (GDNF) for sciatic nerve regeneration. Survival of both sensory and motor neurons is promoted by glial-derived neurotrophic factor, and its delivery to the peripheral nervous system improves regeneration after an injury. GDNF and nerve growth factor (NGF) are sequestered in the gel via a bi-domain peptide. This peptide is composed of a heparin-binding domain and a transglutaminase substrate domain, which can be cross-linked into the fibrin matrix during polymerization via the transglutaminase activity of factor XIIIa. Many neurotrophic factors can bind to heparin through its sulfated domains. This is an affinity-based delivery system in which growth factor release is controlled by cell-mediated degradation. After a 13 mm rat sciatic nerve defect is made, the fibrin matrix delivery system is applied to the gap as a nerve-guiding channel. Results show that such a delivery system efficiently enhances the maturity and promotes the organized architecture of the regenerating nerve in the presence of GDNF, and points to promising treatment options for peripheral nerve injuries.", "Fibrin is an important scaffold material in tissue engineering approaches. It is advantageous compared to synthetic polymers and collagen gels where cost, inflammation, immune response, toxicity and cell adhesion are concerned. When the body suffers a trauma, cells at the site start the blood-clotting cascade, and fibrin is normally the first scaffold formed. For clinical use of a scaffold, fast and complete incorporation into host tissue is essential. Regeneration of the tissue and degradation of the scaffold should be balanced in terms of rate, surface area and interaction so that ideal templating can be achieved. Fibrin satisfies many requirements of scaffold function. Biomaterials made of fibrin can attach to many biological surfaces with high adhesion. Its biocompatibility comes from not being toxic, allergenic or inflammatory. With the help of fibrinolysis inhibitors or fiber cross-linkers, biodegradation can be managed.
Fibrin can be obtained repeatedly from the individuals to be treated, so gels from autologous fibrin cause no undesired immunogenic reactions and are also reproducible. Inherently, the structure and biochemistry of fibrin have an important role in wound healing. Although there are limitations due to diffusion, exceptional cellular growth and tissue development can be achieved. Depending on the application, fibrin scaffold characteristics can be adjusted by manipulating the concentrations of its components. Long-lasting, durable fibrin hydrogels are desirable in many applications.", "Field-induced polymer electroluminescent (FIPEL) technology is a low power electroluminescent light source. Three layers of moldable light-emitting polymer blended with a small amount of carbon nanotubes glow when an alternating current is passed through them. The technology can produce white light similar to that of the Sun, or other tints if desired. It is also more efficient than compact fluorescent lamps in terms of the energy required to produce light. As cited from the Carroll Research Group at Wake Forest University, \"To date our brightest device – without output couplers – exceeds 18,000 cd/m2.\" This confirms that FIPEL technology is a viable solution for area lighting.\nFIPEL lights are different from LED lighting, in that there is no junction. Instead, the light emitting component is a layer of polymer containing an iridium compound which is doped with multi-wall carbon nanotubes. This planar light emitting structure is energized by an AC field from insulated electrodes. The lights can be shaped into many different forms, from mimicking conventional light bulbs to unusual forms such as 2-foot-by-4-foot flat sheets and straight or bent tubes. The technology was developed by a team headed by Dr. David Carroll of Wake Forest University in Winston-Salem, North Carolina.", "Fluorophores have particular importance in the field of biochemistry and protein studies, for example, in immunofluorescence, cell analysis, immunohistochemistry, and small molecule sensors.", "Abbreviations:\n*Ex (nm): Excitation wavelength in nanometers\n*Em (nm): Emission wavelength in nanometers\n*MW: Molecular weight\n*QY: Quantum yield\n*BR: Brightness: Molar absorption coefficient * quantum yield / 1000\n*PS: Photostability: time [sec] to reduce brightness by 50%", "Abbreviations:\n*Ex (nm): Excitation wavelength in nanometers\n*Em (nm): Emission wavelength in nanometers\n*MW: Molecular weight\n*QY: Quantum yield", "Fluorophore molecules can either be utilized alone or serve as a fluorescent motif of a functional system.
Based on molecular complexity and synthetic methods, fluorophore molecules could be generally classified into four categories: proteins and peptides, small organic compounds, synthetic oligomers and polymers, and multi-component systems.\nFluorescent proteins GFP, YFP, and RFP (green, yellow, and red, respectively) can be attached to other specific proteins to form a fusion protein, synthesized in cells after transfection of a suitable plasmid carrier.\nNon-protein organic fluorophores belong to following major chemical families:\n* Xanthene derivatives: fluorescein, rhodamine, Oregon green, eosin, and Texas red\n* Cyanine derivatives: cyanine, indocarbocyanine, oxacarbocyanine, thiacarbocyanine, and merocyanine\n* Squaraine derivatives and ring-substituted squaraines, including Seta and Square dyes\n* Squaraine rotaxane derivatives: See Tau dyes\n* Naphthalene derivatives (dansyl and prodan derivatives)\n* Coumarin derivatives\n* Oxadiazole derivatives: pyridyloxazole, nitrobenzoxadiazole, and benzoxadiazole\n* Anthracene derivatives: anthraquinones, including DRAQ5, DRAQ7, and CyTRAK Orange\n* Pyrene derivatives: cascade blue, etc.\n* Oxazine derivatives: Nile red, Nile blue, cresyl violet, oxazine 170, etc.\n* Acridine derivatives: proflavin, acridine orange, acridine yellow, etc.\n* Arylmethine derivatives: auramine, crystal violet, malachite green\n* Tetrapyrrole derivatives: porphin, phthalocyanine, bilirubin\n* Dipyrromethene derivatives: BODIPY, aza-BODIPY\nThese fluorophores fluoresce due to delocalized electrons which can jump a band and stabilize the energy absorbed. For example, benzene, one of the simplest aromatic hydrocarbons, is excited at 254 nm and emits at 300 nm. This discriminates fluorophores from quantum dots, which are fluorescent semiconductor nanoparticles.\nThey can be attached to proteins to specific functional groups, such as amino groups (active ester, carboxylate, isothiocyanate, hydrazine), carboxyl groups (carbodiimide), thiol (maleimide, acetyl bromide), and organic azide (via click chemistry or non-specifically (glutaraldehyde)).\nAdditionally, various functional groups can be present to alter their properties, such as solubility, or confer special properties, such as boronic acid which binds to sugars or multiple carboxyl groups to bind to certain cations. When the dye contains an electron-donating and an electron-accepting group at opposite ends of the aromatic system, this dye will probably be sensitive to the environment's polarity (solvatochromic), hence called environment-sensitive. 
Often dyes are used inside cells, which are impermeable to charged molecules; as a result of this, the carboxyl groups are converted into an ester, which is removed by esterases inside the cells, e.g., fura-2AM and fluorescein-diacetate.\nThe following dye families are trademark groups, and do not necessarily share structural similarities.\n* CF dye (Biotium)\n* DRAQ and CyTRAK probes (BioStatus)\n* BODIPY (Invitrogen)\n* EverFluor (Setareh Biotech)\n* Alexa Fluor (Invitrogen)\n* Bella Fluor (Setareh Biotech)\n* DyLight Fluor (Thermo Scientific, Pierce)\n* Atto and Tracy (Sigma Aldrich)\n* FluoProbes (Interchim)\n* Abberior Dyes (Abberior)\n* DY and MegaStokes Dyes (Dyomics)\n* Sulfo Cy dyes (Cyandye)\n* HiLyte Fluor (AnaSpec)\n* Seta, SeTau and Square Dyes (SETA BioMedicals)\n* Quasar and Cal Fluor dyes (Biosearch Technologies)\n* SureLight Dyes (APC, RPEPerCP, Phycobilisomes) (Columbia Biosciences)\n* APC, APCXL, RPE, BPE (Phyco-Biotech, Greensea, Prozyme, Flogen)\n* Vio Dyes (Miltenyi Biotec)", "Most fluorophores are organic small molecules of 20–100 atoms (200–1000 Dalton; the molecular weight may be higher depending on grafted modifications and conjugated molecules), but there are also much larger natural fluorophores that are proteins: green fluorescent protein (GFP) is 27 kDa, and several phycobiliproteins (PE, APC...) are ≈240kDa. As of 2020, the smallest known fluorophore was claimed to be 3-hydroxyisonicotinaldehyde, a compound of 14 atoms and only 123 Da.\nFluorescence particles like quantum dots (2–10 nm diameter, 100–100,000 atoms) are also considered fluorophores.\nThe size of the fluorophore might sterically hinder the tagged molecule and affect the fluorescence polarity.", "The fluorophore absorbs light energy of a specific wavelength and re-emits light at a longer wavelength. The absorbed wavelengths, energy transfer efficiency, and time before emission depend on both the fluorophore structure and its chemical environment, since the molecule in its excited state interacts with surrounding molecules. Wavelengths of maximum absorption (≈ excitation) and emission (for example, Absorption/Emission = 485 nm/517 nm) are the typical terms used to refer to a given fluorophore, but the whole spectrum may be important to consider. The excitation wavelength spectrum may be a very narrow or broader band, or it may be all beyond a cutoff level. The emission spectrum is usually sharper than the excitation spectrum, and it is of a longer wavelength and correspondingly lower energy. Excitation energies range from ultraviolet through the visible spectrum, and emission energies may continue from visible light into the near infrared region.\nThe main characteristics of fluorophores are:\n* Maximum excitation and emission wavelength (expressed in nanometers (nm)): corresponds to the peak in the excitation and emission spectra (usually one peak each).\n* Molar absorption coefficient (in molcm): links the quantity of absorbed light, at a given wavelength, to the concentration of fluorophore in solution.\n* Quantum yield: efficiency of the energy transferred from incident light to emitted fluorescence (the number of emitted photons per absorbed photons).\n* Lifetime (in picoseconds): duration of the excited state of a fluorophore before returning to its ground state. 
It refers to the time taken for a population of excited fluorophores to decay to 1/e (≈0.368) of the original amount.\n* Stokes shift: the difference between the maximum excitation and maximum emission wavelengths.\n* Dark fraction: the proportion of the molecules not active in fluorescence emission. For quantum dots, prolonged single-molecule microscopy showed that 20-90% of all particles never emit fluorescence. On the other hand, conjugated polymer nanoparticles (Pdots) show almost no dark fraction in their fluorescence. Fluorescent proteins can have a dark fraction from protein misfolding or defective chromophore formation.\nThese characteristics drive other properties, including photobleaching or photoresistance (loss of fluorescence upon continuous light excitation). Other parameters should be considered, as the polarity of the fluorophore molecule, the fluorophore size and shape (i.e. for polarization fluorescence pattern), and other factors can change the behavior of fluorophores.\nFluorophores can also be used to quench the fluorescence of other fluorescent dyes or to relay their fluorescence at even longer wavelengths.", "A fluorophore (or fluorochrome, similarly to a chromophore) is a fluorescent chemical compound that can re-emit light upon light excitation. Fluorophores typically contain several combined aromatic groups, or planar or cyclic molecules with several π bonds.\nFluorophores are sometimes used alone, as a tracer in fluids, as a dye for staining of certain structures, as a substrate of enzymes, or as a probe or indicator (when its fluorescence is affected by environmental aspects such as polarity or ions). More generally they are covalently bonded to macromolecules, serving as a markers (or dyes, or tags, or reporters) for affine or bioactive reagents (antibodies, peptides, nucleic acids). Fluorophores are notably used to stain tissues, cells, or materials in a variety of analytical methods, such as fluorescent imaging and spectroscopy.\nFluorescein, via its amine-reactive isothiocyanate derivative fluorescein isothiocyanate (FITC), has been one of the most popular fluorophores. From antibody labeling, the applications have spread to nucleic acids thanks to carboxyfluorescein. Other historically common fluorophores are derivatives of rhodamine (TRITC), coumarin, and cyanine. Newer generations of fluorophores, many of which are proprietary, often perform better, being more photostable, brighter, or less pH-sensitive than traditional dyes with comparable excitation and emission.", "Fluorescent dyes find a wide use in industry, going under the name of \"neon colors\", such as:\n* Multi-ton scale usages in textile dyeing and optical brighteners in laundry detergents\n* Advanced cosmetic formulations\n* Safety equipment and clothing\n* Organic light-emitting diodes (OLEDs)\n* Fine arts and design (posters and paintings)\n* Synergists for insecticides and experimental drugs\n* Dyes in highlighters to give off a glow-like effect\n* Solar panels to collect more light / wavelengths\n* Fluorescent sea dye is used to help airborne search and rescue teams locate objects in the water", "A formation light, also known as a slime light, is a type of thin film electroluminescent light that assists aircraft flying in formation in low visibility environments.", "Certain materials, such as glass and glycerol, may harden without crystallizing; these are called amorphous solids. 
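Returning to the fluorophore characteristics listed above, here is a small worked example using brightness as defined in the abbreviations (molar absorption coefficient × quantum yield / 1000) together with the Stokes shift and the 1/e lifetime; the dye values are illustrative, not taken from the source.

```python
import math

# Illustrative values for a hypothetical fluorescein-like dye (not from the source).
ex_nm, em_nm = 490, 520          # excitation / emission maxima (nm)
epsilon = 80_000                 # molar absorption coefficient (M^-1 cm^-1)
quantum_yield = 0.9              # emitted photons per absorbed photon
lifetime_ns = 4.0                # excited-state lifetime

stokes_shift = em_nm - ex_nm                 # difference of the two maxima (nm)
brightness = epsilon * quantum_yield / 1000  # "BR" as defined in the abbreviations

# Fraction of an excited population still excited after one lifetime,
# assuming simple single-exponential decay: N(t)/N0 = exp(-t / lifetime).
remaining_after_one_lifetime = math.exp(-lifetime_ns / lifetime_ns)  # = 1/e ~ 0.368

print(f"Stokes shift: {stokes_shift} nm")
print(f"Brightness (epsilon * QY / 1000): {brightness:.0f}")
print(f"Fraction remaining after one lifetime: {remaining_after_one_lifetime:.3f}")
```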
Amorphous materials, as well as some polymers, do not have a freezing point, as there is no abrupt phase change at any specific temperature. Instead, there is a gradual change in their viscoelastic properties over a range of temperatures. Such materials are characterized by a glass transition that occurs at a glass transition temperature, which may be roughly defined as the \"knee\" point of the material's density vs. temperature graph. Because vitrification is a non-equilibrium process, it does not qualify as freezing, which requires an equilibrium between the crystalline and liquid state.", "Freezing is almost always an exothermic process, meaning that as liquid changes into solid, heat and pressure are released. This is often seen as counter-intuitive, since the temperature of the material does not rise during freezing, except if the liquid were supercooled. But this can be understood since heat must be continually removed from the freezing liquid or the freezing process will stop. The energy released upon freezing is a latent heat, and is known as the enthalpy of fusion and is exactly the same as the energy required to melt the same amount of the solid.\nLow-temperature helium is the only known exception to the general rule. Helium-3 has a negative enthalpy of fusion at temperatures below 0.3 K. Helium-4 also has a very slightly negative enthalpy of fusion below 0.8 K. This means that, at appropriate constant pressures, heat must be added to these substances in order to freeze them.", "In spite of the second law of thermodynamics, crystallization of pure liquids usually begins at a lower temperature than the melting point, due to high activation energy of homogeneous nucleation. The creation of a nucleus implies the formation of an interface at the boundaries of the new phase. Some energy is expended to form this interface, based on the surface energy of each phase. If a hypothetical nucleus is too small, the energy that would be released by forming its volume is not enough to create its surface, and nucleation does not proceed. Freezing does not start until the temperature is low enough to provide enough energy to form stable nuclei. In presence of irregularities on the surface of the containing vessel, solid or gaseous impurities, pre-formed solid crystals, or other nucleators, heterogeneous nucleation may occur, where some energy is released by the partial destruction of the previous interface, raising the supercooling point to be near or equal to the melting point. The melting point of water at 1 atmosphere of pressure is very close to 0 °C (32 °F, 273.15 K), and in the presence of nucleating substances the freezing point of water is close to the melting point, but in the absence of nucleators water can supercool to before freezing. Under high pressure (2,000 atmospheres) water will supercool to as low as before freezing.", "Most liquids freeze by crystallization, formation of crystalline solid from the uniform liquid. This is a first-order thermodynamic phase transition, which means that as long as solid and liquid coexist, the temperature of the whole system remains very nearly equal to the melting point due to the slow removal of heat when in contact with air, which is a poor heat conductor. Because of the latent heat of fusion, the freezing is greatly slowed and the temperature will not drop anymore once the freezing starts but will continue dropping once it finishes. \nCrystallization consists of two major events, nucleation and crystal growth. 
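The volume-versus-surface energy balance described in the nucleation passage above is commonly written, in classical nucleation theory, as follows (a standard textbook form, not reproduced from the source; σ is the interfacial energy and ΔG_v the negative free-energy change per unit volume of the new solid phase):

```latex
\Delta G(r) = \tfrac{4}{3}\pi r^{3}\,\Delta G_{v} + 4\pi r^{2}\,\sigma,
\qquad
r^{*} = -\frac{2\sigma}{\Delta G_{v}},
\qquad
\Delta G^{*} = \frac{16\pi\,\sigma^{3}}{3\,\Delta G_{v}^{2}}
```

Nuclei smaller than the critical radius r* shrink because the surface term dominates; only nuclei that reach r* by fluctuation can grow, which is why freezing typically requires some supercooling, or a heterogeneous nucleation site that lowers the barrier ΔG*.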
\"Nucleation\" is the step wherein the molecules start to gather into clusters, on the nanometer scale, arranging in a defined and periodic manner that defines the crystal structure. \"Crystal growth\" is the subsequent growth of the nuclei that succeed in achieving the critical cluster size.", "Freezing is a common method of food preservation that slows both food decay and the growth of micro-organisms. Besides the effect of lower temperatures on reaction rates, freezing makes water less available for bacteria growth. Freezing is one of the oldest and most widely used methods of food preservation; since as long ago as 1842, freezing has been used in an ice and salt brine. In freezing, flavours, smell and nutritional content generally remain unchanged. Freezing became commercially applicable after the advent (introduction) of mechanical refrigeration. Freezing has been successfully employed for long term preservation of many foods providing a significantly extended shelf-life. Freezing preservation is generally regarded as superior to canning and dehydration with respect to retention in sensory attributes and nutritive attributes.", "Freezing is a phase transition where a liquid turns into a solid when its temperature is lowered below its freezing point. In accordance with the internationally established definition, freezing means the solidification phase change of a liquid or the liquid content of a substance, usually due to cooling.\nFor most substances, the melting and freezing points are the same temperature; however, certain substances possess differing solid-liquid transition temperatures. For example, agar displays a hysteresis in its melting point and freezing point. It melts at 85 °C (185 °F) and solidifies from 32 °C to 40 °C (89.6 °F to 104 °F).", "The Frozen Ark is a charitable frozen zoo project created jointly by the Zoological Society of London, the Natural History Museum and University of Nottingham. The project aims to preserve the DNA and living cells of endangered species to retain the genetic knowledge for the future. The Frozen Ark collects and stores samples taken from animals in zoos and those threatened with extinction in the wild. Its current director is Michael W. Bruford (Cardiff University). The Frozen Ark was a finalist for the Saatchi & Saatchi Award for World Changing Ideas in 2006.\nThe project was founded by Ann Clarke, her husband Bryan Clarke and Dame Anne McLaren. Since Bryan Clarkes death in 2014, the Frozen Arks interim director has been Mike Bruford.", "The banteng was the second endangered species to be successfully cloned, and the first clone to survive beyond infancy. Scientists at Advanced Cell Technology in Worcester, Massachusetts, extracted DNA from skin cells of a dead male banteng, that were preserved in San Diego 's Frozen Zoo facility, and transferred it into eggs from domestic banteng cows, a process called somatic cell nuclear transfer. Thirty embryos were created and implanted in domestic banteng cows. Two were carried to term and delivered by Caesarian section. The first was born on 1 April 2003, and the second two days later. The second was euthanized, apparently suffering from large offspring syndrome (an overgrowth disorder), but the first survived and lived for seven years at the San Diego Zoo, where it died in April 2010 after it broke a leg and was euthanized.", "The first frozen zoo was established at the San Diego Zoo by pathologist Kurt Benirschke in 1972. 
At the time there was no technology available to make use of the collection, but Benirschke believed such technology would be developed in the future. The frozen zoo idea was later supported in Gregory Benford's 1992 paper proposing a Library of Life. Zoos such as the San Diego Zoo and research programs such as the Audubon Center for Research of Endangered Species cryopreserve genetic material in order to protect the diversity of the gene pool of endangered species, or to provide for a prospective reintroduction of such extinct species as the Tasmanian tiger and the mammoth.\nGathering material for a frozen zoo is rendered simple by the abundance of sperm in males. Sperm can be taken from an animal following death. The production of eggs, which in females is usually low, can be increased through hormone treatment to obtain 10–20 oocytes, dependent on the species. Some frozen zoos prefer to fertilize eggs and freeze the resulting embryo, as embryos are more resilient under the cryopreservation process. Some centers also collect skin cell samples of endangered animals or extinct species. The Scripps Research Institute has successfully made skin cells into cultures of special cells called induced pluripotent stem cells (IPS cells). It is theoretically possible to make sperm and egg cells from these IPS cells.\nSeveral animals whose cells were preserved in frozen zoos have been cloned to increase the genetic diversity of endangered species, . One attempt to clone an extinct species was made in 2003; the newborn Pyrenean ibex died of a development disorder which may have been linked to the cloning, and there are not enough genetic samples in frozen zoos to re-create a breeding Pyrenean ibex population.", "A frozen zoo is a storage facility in which genetic materials taken from animals (e.g. DNA, sperm, eggs, embryos and live tissue) are stored at very low temperatures (−196 °C) in tanks of liquid nitrogen. Material preserved in this way can be stored indefinitely and used for artificial insemination, in vitro fertilization, embryo transfer, and cloning. There are a few frozen zoos across the world that implement this technology for conservation efforts. Several different species have been introduced to this technology, including the Pyrenean ibex, Black-footed ferret, and potentially the white rhinoceros.", "Due to the very low temperatures required, varying levels of stress are put on the DNA samples. Spermatozoa, in particular, are stressed by temperature shock, osmotic stress, and oxidative stress with the latter being the most detrimental. When temperature shock occurs, the membrane is damaged through freezing and thawing of the sperm. Osmotic stress occurs when ice crystals form inside the nucleus during the freezing process, causing differing osmotic pressures within the cell. Oxidative stress is the result of too many reactive oxygen species (ROS), which is highly reactive and damaging to all parts of the cell. Although these stressors are present within the cell, there are solutions to each. By introducing cholesterol to the samples, temperature shock can be reduced. The use of antifreeze proteins provides one solution for osmotic stress. 
Oxidative stress is the most difficult to combat because of the highly reactive components of ROS, but some measures like adding certain proteins to limit freeze-thaw damage and increase the survival rate of the DNA.", "The Frozen Zoo at the San Diego Zoo's Institute for Conservation Research currently stores a collection of 8,400 samples from over 800 species and subspecies. Frozen Zoo at San Diego Zoo Conservation Research has acted as a forebear to similar projects at other zoos in the United States and Europe. However, there are still less than a dozen frozen zoos worldwide.\nAt the United Arab Emirates Breeding Centre for Endangered Arabian Wildlife (BCEAW) in Sharjah, the embryos stored include the extremely endangered Gordons wildcat (Felis silvestris gordoni) and the Arabian leopard (Panthera pardus nimr) (of which there are only 50 in the wild).\nThe Audubon Center for Research of Endangered Species, affiliated with the University of New Orleans, is maintaining a frozen zoo. In 2000 the Center implanted a frozen-thawed embryo from the highly endangered African wildcat into the uterus of a domestic house cat, resulting in a healthy male wildcat.\nThe Frozen Ark is a frozen zoo established in 2004 and jointly managed by the Zoological Society of London, the London Natural History Museum, and the University of Nottingham. This organization operates as a charity with many different departments including the DNA laboratory, consortium, taxon expert groups, and the database. In the DNA laboratory, samples are contained after collection from scientists, and different research projects are conducted there. The consortium acts as a bridge to bring together different, but important, groups from zoos, aquariums, museums, and universities. The taxon expert groups monitor the major phyla and lists like the IUCN Red List. The database is the essential piece as it holds all reports and records needed to perform all of the other functions for the charity. The hope for the future is for zoos and aquariums to be able to collect samples from their threatened and/or endangered species in house to help with conservation efforts. The collection and freezing of these samples allows for the distribution of gametes among populations. Samples can be collected from living hosts and from deceased hosts as well.\nThe University of Georgia's Regenerative Bioscience Center is building a frozen zoo. RBC Director Steven Stice and animal and dairy science assistant professor Franklin West created the facility with the thought of saving endangered cat species. The scientists have already extracted cells from a Sumatran tiger, which could be used for artificial insemination. Artificial insemination provides a remedy for animals who, due to anatomical or physiological reasons, are unable to reproduce in the natural way. Reproduction of stored genetic material also allows for the fostering of genetic improvements, and the prevention of inbreeding. Modern technology allows for genetic manipulation in animals without keeping them in captivity. However, the success of their restoration into the wild would require the application of new science and a sufficient amount of previously collected material.", "The Pyrenean ibex went extinct in 2000. In 2003 frozen cells from the last one (a female killed by a falling branch) were used to clone 208 embryos, of which 7 successfully implanted in goats, and one made it to term. 
That one ibex died of respiratory failure just after birth; quite possibly as a result of the cloning process, its lungs had not developed properly. There may not be enough individuals' cells preserved to create a breeding population. Despite the death of the ibex, DNA analysis revealed that the offspring was a legitimate clone from its last living descendent.", "In 2020, the first cloned Przewalski's horse was born, the result of a collaboration between San Diego Zoo Global, ViaGen Equine and Revive & Restore. The cloning was carried out by somatic cell nuclear transfer (SCNT), whereby a viable embryo is created by transplanting the DNA-containing nucleus of a somatic cell into an immature egg cell (oocyte) that has had its own nucleus removed, producing offspring genetically identical to the somatic cell donor. Since the oocyte used was from a domestic horse, this was an example of interspecies SCNT.\nThe somatic cell donor was a Przewalskis horse stallion named Kuporovic, born in the UK in 1975, and relocated three years later to the US, where he died in 1998. Due to concerns over the loss of genetic variation in the captive Przewalskis horse population, and in anticipation of the development of new cloning techniques, tissue from the stallion was cryopreserved at the San Diego Zoos Frozen Zoo. Breeding of this individual in the 1980s had already substantially increased the genetic diversity of the captive population, after he was discovered to have more unique alleles than any other horse living at the time, including otherwise-lost genetic material from two of the original captive founders. To produce the clone, frozen skin fibroblasts were thawed, and grown in cell culture. An oocyte was collected from a domestic horse, and its nucleus replaced by a nucleus collected from a cultured Przewalskis horse fibroblast. The resulting embryo was induced to begin division and was cultured until it reached the blastocyst stage, then implanted into a domestic horse surrogate mare, which carried the embryo to term and delivered a foal with the Przewalski's horse DNA of the long-deceased stallion.\nThe cloned horse was named Kurt, after Dr. Kurt Benirschke, a geneticist who developed the idea of cryopreserving genetic material from species considered to be endangered. His ideas led to the creation of the Frozen Zoo as a genetic library. There is a breeding herd in the San Diego Zoo Safari Park. Once the foal matures, he will be relocated to the breeding herd at the San Diego Zoo Safari Park, so as to pass Kuporovics genes into the larger captive Przewalskis horse population and increase the genetic variation of the species.", "To help mitigate inbreeding depression for two endangered species, the Black-footed ferret(Mustela nigripes), Revive & Restore facilitates on-going efforts to clone individuals from historic cell lines stored at the San Diego Zoo Wildlife Alliance Frozen Zoo. The program seeks to restore genetic variation lost from the living gene pool.\nOn December 10, 2020, the world's first cloned black-footed ferret was born. This ferret, named Elizabeth Ann, marked the first time a U.S. endangered species was successfully cloned. \nThe cells of two 1980s wild-caught black-footed ferrets that never bred in captivity were preserved in the San Diego Wildlife Alliance Frozen Zoo. One of them was cloned to increase genetic diversity in this species in December 2020. More clones of both are planned. 
They will initially be bred separately from the non-cloned population.", "Over the years, concerns over population declines of the northern white rhinoceros (Ceratotherium simum cottoni) have increased with the increasing value of their horns to poachers. Specifically, the population has declined nearly seventy percent from 2011 to 2019. Processes like SCNT can help aid in conservation efforts towards the revival of their population. Researchers are looking towards induced pluripotent stem cells (iPSC), as they hold limitless possibilities. With the lack of natural mating occurring within the species due to the limited number of them, this sub-species provides researchers the opportunity for iPSC intervention. Other methods, including artificial insemination with fresh semen (AI), have been used successfully in another sub-species, the Southern White Rhinoceros (Ceratotherium simum simum). Frozen-thawed semen has been tested and has seen some successes, helping solve issues with reproduction of the species as a whole.", "A gaur that died of natural causes had some skin cells frozen and added to the San Diego Frozen Zoo. Eight years later, DNA from these cells was inserted into a domestic-cow egg to create an embryo (trans-species cloning), which was then implanted in a domestic cow (Bos taurus). On 8 January 2001, the gaur, named Noah, was born in Sioux Center, Iowa. Noah was initially healthy, but the next day, he came down with clostridial enteritis, and died of dysentery within 48 hours of birth. This is not uncommon in uncloned animals, and the researchers did not think it was due to the cloning.", "The synthesis of glycogen in the liver following a fructose-containing meal proceeds from gluconeogenic precursors. Fructose is initially converted to DHAP and glyceraldehyde by fructokinase and aldolase B. The resultant glyceraldehyde then undergoes phosphorylation to glyceraldehyde-3-phosphate. Increased concentrations of DHAP and glyceraldehyde-3-phosphate in the liver drive the gluconeogenic pathway toward glucose-6-phosphate, glucose-1-phosphate and glycogen formation. It appears that fructose is a better substrate for glycogen synthesis than glucose and that glycogen replenishment takes precedence over triglyceride formation. Once liver glycogen is replenished, the intermediates of fructose metabolism are primarily directed toward triglyceride synthesis.", "Fructose consumption results in the insulin-independent induction of several important hepatic lipogenic enzymes including pyruvate kinase, NADP-dependent malate dehydrogenase, citrate lyase, acetyl CoA carboxylase, fatty acid synthase, as well as pyruvate dehydrogenase. Although not a consistent finding among metabolic feeding studies, diets high in refined fructose have been shown to lead to hypertriglyceridemia in a wide range of populations including individuals with normal glucose metabolism as well as individuals with impaired glucose tolerance, diabetes, hypertriglyceridemia, and hypertension. The hypertriglyceridemic effects observed are a hallmark of increased dietary carbohydrate, and fructose appears to be dependent on a number of factors including the amount of dietary fructose consumed and degree of insulin resistance.\n‡ = Mean ± SEM activity in nmol/min per mg protein\n§ = 12 rats/group\n = Significantly different from control at p < 0.05", "Carbons from dietary fructose are found in both the FFA and glycerol moieties of plasma triglycerides (TG). 
Excess dietary fructose can be converted to pyruvate, enter the Krebs cycle and emerges as citrate directed toward free fatty acid synthesis in the cytosol of hepatocytes. The DHAP formed during fructolysis can also be converted to glycerol and then glycerol 3-phosphate for TG synthesis. Thus, fructose can provide trioses for both the glycerol 3-phosphate backbone, as well as the free fatty acids in TG synthesis. Indeed, fructose may provide the bulk of the carbohydrate directed toward de novo TG synthesis in humans.", "The lack of two important enzymes in fructose metabolism results in the development of two inborn errors in carbohydrate metabolism – essential fructosuria and hereditary fructose intolerance. In addition, reduced phosphorylation potential within hepatocytes can occur with intravenous infusion of fructose.", "Fructolysis refers to the metabolism of fructose from dietary sources. Though the metabolism of glucose through glycolysis uses many of the same enzymes and intermediate structures as those in fructolysis, the two sugars have very different metabolic fates in human metabolism. Unlike glucose, which is directly metabolized widely in the body, fructose is mostly metabolized in the liver in humans, where it is directed toward replenishment of liver glycogen and triglyceride synthesis. Under one percent of ingested fructose is directly converted to plasma triglyceride. 29% - 54% of fructose is converted in liver to glucose, and about a quarter of fructose is converted to lactate. 15% - 18% is converted to glycogen. Glucose and lactate are then used normally as energy to fuel cells all over the body.\nFructose is a dietary monosaccharide present naturally in fruits and vegetables, either as free fructose or as part of the disaccharide sucrose, and as its polymer inulin. It is also present in the form of refined sugars including granulated sugars (white crystalline table sugar, brown sugar, confectioner's sugar, and turbinado sugar), refined crystalline fructose , as high fructose corn syrups as well as in honey. About 10% of the calories contained in the Western diet are supplied by fructose (approximately 55 g/day).\nUnlike glucose, fructose is not an insulin secretagogue, and can in fact lower circulating insulin. In addition to the liver, fructose is metabolized in the intestines, testis, kidney, skeletal muscle, fat tissue and brain, but it is not transported into cells via insulin-sensitive pathways (insulin regulated transporters GLUT1 and GLUT4). Instead, fructose is taken in by GLUT5. Fructose in muscles and adipose tissue is phosphorylated by hexokinase.", "Although the metabolism of fructose and glucose share many of the same intermediate structures, they have very different metabolic fates in human metabolism. Fructose is metabolized almost completely in the liver in humans, and is directed toward replenishment of liver glycogen and triglyceride synthesis, while much of dietary glucose passes through the liver and goes to skeletal muscle, where it is metabolized to CO, HO and ATP, and to fat cells where it is metabolized primarily to glycerol phosphate for triglyceride synthesis as well as energy production. The products of fructose metabolism are liver glycogen and de novo lipogenesis of fatty acids and eventual synthesis of endogenous triglyceride. 
This synthesis can be divided into two main phases: The first phase is the synthesis of the trioses, dihydroxyacetone (DHAP) and glyceraldehyde; the second phase is the subsequent metabolism of these trioses either in the gluconeogenic pathway for glycogen replenishment and/or the complete metabolism in the fructolytic pathway to pyruvate, which enters the Krebs cycle, is converted to citrate and subsequently directed toward de novo synthesis of the free fatty acid palmitate.", "The absence of fructose-1-phosphate aldolase (aldolase B) results in the accumulation of fructose 1 phosphate in hepatocytes, kidney and small intestines. An accumulation of fructose-1-phosphate following fructose ingestion inhibits glycogenolysis (breakdown of glycogen) and gluconeogenesis, resulting in severe hypoglycemia. It is symptomatic resulting in severe hypoglycemia, abdominal pain, vomiting, hemorrhage, jaundice, hepatomegaly, and hyperuricemia eventually leading to liver and/or kidney failure and death. The incidence varies throughout the world, but it is estimated at 1:55,000 (range 1:10,000 to 1:100,000) live births.", "Intravenous (i.v.) infusion of fructose has been shown to lower phosphorylation potential in liver cells by trapping inorganic phosphate (Pi) as fructose 1-phosphate. The fructokinase reaction occurs quite rapidly in hepatocytes trapping fructose in cells by phosphorylation. On the other hand, the splitting of fructose 1 phosphate to DHAP and glyceraldehyde by Aldolase B is relatively slow. Therefore, fructose-1-phosphate accumulates with the corresponding reduction of intracellular Pi available for phosphorylation reactions in the cell. This is why fructose is contraindicated for total parenteral nutrition (TPN) solutions and is never given intravenously as a source of carbohydrate. It has been suggested that excessive dietary intake of fructose may also result in reduced phosphorylation potential. However, this is still a contentious issue. Dietary fructose is not well absorbed and increased dietary intake often results in malabsorption. Whether or not sufficient amounts of dietary fructose could be absorbed to cause a significant reduction in phosphorylating potential in liver cells remains questionable and there are no clear examples of this in the literature.", "The first step in the metabolism of fructose is the phosphorylation of fructose to fructose 1-phosphate by fructokinase (Km = 0.5 mM, ≈ 9 mg/100 ml), thus trapping fructose for metabolism in the liver. Hexokinase IV (Glucokinase), also occurs in the liver and would be capable of phosphorylating fructose to fructose 6-phosphate (an intermediate in the gluconeogenic pathway); however, it has a relatively high Km (12 mM) for fructose and, therefore, essentially all of the fructose is converted to fructose-1-phosphate in the human liver. Much of the glucose, on the other hand, is not phosphorylated (Km of hepatic glucokinase (hexokinase IV) = 10 mM), passes through the liver directed toward peripheral tissues, and is taken up by the insulin-dependent glucose transporter, GLUT 4, present on adipose tissue and skeletal muscle.\nFructose-1-phosphate then undergoes hydrolysis by fructose-1-phosphate aldolase (aldolase B) to form dihydroxyacetone phosphate (DHAP) and glyceraldehyde; DHAP can either be isomerized to glyceraldehyde 3-phosphate by triosephosphate isomerase or undergo reduction to glycerol 3-phosphate by glycerol 3-phosphate dehydrogenase. 
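To see why the Km values quoted above imply that essentially all hepatic fructose is handled by fructokinase rather than by hexokinase IV, here is a rough Michaelis–Menten comparison; the substrate concentration used is an illustrative assumption, and relative enzyme amounts are ignored.

```python
# Rough Michaelis-Menten comparison using the Km values quoted in the text.
# The hepatic fructose concentration below is an illustrative assumption.
KM_FRUCTOKINASE = 0.5    # mM (from the text)
KM_HEXOKINASE_IV = 12.0  # mM for fructose (from the text)
fructose_mM = 0.5        # assumed hepatic fructose concentration (illustrative)

def saturation(substrate_mM, km_mM):
    """Fraction of Vmax at a given substrate concentration: v/Vmax = [S]/(Km + [S])."""
    return substrate_mM / (km_mM + substrate_mM)

frk = saturation(fructose_mM, KM_FRUCTOKINASE)
hk4 = saturation(fructose_mM, KM_HEXOKINASE_IV)
print(f"Fructokinase:  v/Vmax ~ {frk:.2f}")   # ~0.50
print(f"Hexokinase IV: v/Vmax ~ {hk4:.2f}")   # ~0.04
```

At sub-millimolar fructose, fructokinase operates near half-saturation while hexokinase IV is far below its Km, so fructose is overwhelmingly trapped as fructose 1-phosphate.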
The glyceraldehyde produced may also be converted to glyceraldehyde 3-phosphate by glyceraldehyde kinase or converted to glycerol 3-phosphate by glyceraldehyde 3-phosphate dehydrogenase. The metabolism of fructose at this point yields intermediates in gluconeogenic pathway leading to glycogen synthesis, or can be oxidized to pyruvate and reduced to lactate, or be decarboxylated to acetyl CoA in the mitochondria and directed toward the synthesis of free fatty acid, resulting finally in triglyceride synthesis.", "The absence of fructokinase results in the inability to phosphorylate fructose to fructose-1-phosphate within the cell. As a result, fructose is neither trapped within the cell nor directed toward its metabolism. Free fructose concentrations in the liver increase and fructose is free to leave the cell and enter plasma. This results in an increase in plasma concentration of fructose, eventually exceeding the kidneys' threshold for fructose reabsorption resulting in the appearance of fructose in the urine. Essential fructosuria is a benign asymptomatic condition.", "Fusion Nuclear Science Facility (FNSF) is a low cost, low aspect ratio compact tokamak reactor design, aiming for a 9 Tesla field at the plasma centre.\nIt is considered a step after ITER on the path to a fusion power plant.\nBecause of the high neutron irradiation damage expected, non-insulating superconducting coils are being considered for it.", "Ignition should not be confused with breakeven, a similar concept that compares the total energy being given off to the energy being used to heat the fuel. The key difference is that breakeven ignores losses to the surroundings, which do not contribute to heating the fuel, and thus are not able to make the reaction self-sustaining. Breakeven is an important goal in the fusion energy field, but ignition is required for a practical energy producing design.\nIn nature, stars reach ignition at temperatures similar to that of the Sun, around 15 million kelvins (27 million degrees F). Stars are so large that the fusion products will almost always interact with the plasma before their energy can be lost to the environment at the outside of the star. In comparison, man-made reactors are far less dense and much smaller, allowing the fusion products to easily escape the fuel. To offset this, much higher rates of fusion are required, and thus much higher temperatures; most man-made fusion reactors are designed to work at temperatures over 100 million kelvins (180 million degrees F).\nFusion ignition was first achieved by humans in the cores of detonating thermonuclear weapons. A thermonuclear weapon uses a conventional fission (U-235 or Pu-239/241) \"sparkplug\" to generate high pressures and compress a rod of fusion fuel (usually lithium deuteride). The fuel reaches high enough pressures and densities to ignite, releasing large amounts of energy and neutrons in the process.\nThe National Ignition Facility at Lawrence Livermore National Laboratory performs laser-driven inertial confinement fusion experiments that achieve fusion ignition. 
This is similar to a thermonuclear weapon, but the National Ignition Facility uses a 1.8 MJ laser system instead of a fission weapon to compress the fuel, and uses a much smaller amount of fuel (a mixture of deuterium and tritium, which are both isotopes of hydrogen).\nIn January 2012, National Ignition Facility Director Mike Dunne predicted in a Photonics West 2012 plenary talk that ignition would be achieved at NIF by October 2012.\nBy 2022 the NIF had achieved ignition.\nBased on the tokamak reactor design, the ITER is intended to sustain fusion mostly by internal fusion heating and yield in its plasma a ten-fold return on power. Construction is expected to be completed in 2025.\nExperts believe that achieving fusion ignition is the first step towards electricity generation using fusion power.", "The National Ignition Facility at the Lawrence Livermore National Laboratory in California reported in 2021 that it had triggered ignition in the laboratory on 8 August 2021, for the first time in the over-60-year history of the ICF program. The shot yielded 1.3 megajoules of fusion energy, an 8-fold improvement on tests done in spring 2021. NIF estimates that the laser supplied 1.9 megajoules of energy, 230 kilojoules of which reached the fuel capsule. This corresponds to a total scientific energy gain of 0.7 and a capsule energy gain of 6. While the experiment fell short of ignition as defined by the National Academy of Sciences – a total energy gain greater than one – most people working in the field viewed the experiment as the demonstration of ignition as defined by the Lawson criterion.\nIn August 2022, the results of the experiment were confirmed in three peer-reviewed papers: one in Physical Review Letters and two in Physical Review E. Throughout 2022, the NIF researchers tried and failed to replicate the August result. However, on 13 December 2022, the United States Department of Energy announced via Twitter that an experiment on December 5 had surpassed the August result, achieving a scientific gain of 1.5,\nsurpassing the National Academy of Sciences definition of ignition.", "Fusion ignition is the point at which a nuclear fusion reaction becomes self-sustaining. This occurs when the energy being given off by the reaction heats the fuel mass more rapidly than it cools. In other words, fusion ignition is the point at which the increasing self-heating of the nuclear fusion removes the need for external heating.\nThis is quantified by the Lawson criterion.\nIgnition can also be defined by the fusion energy gain factor.\nIn the laboratory, fusion ignition defined by the Lawson criterion was first achieved in August 2021,\nand ignition defined by the energy gain factor was achieved in December 2022,\nboth by the U.S. National Ignition Facility.", "Galactan (galactosan) is a polysaccharide consisting of polymerized galactose. In general, galactans in natural sources contain a core of galactose units connected by α(1→3) or α(1→6), with structures containing other monosaccharides as side-chains.\nGalactan derived from Anogeissus latifolia is primarily α(1→6), but galactan from acacia trees is primarily α(1→3).\nHalymenia durvillei is a red seaweed (algae) that produces a sulfated galactan. Several other algae species also contain galactans. Including Carpopeltis .", "Galactogen is synthesized by secretory cells in the albumen gland of adult female snails and later transferred to the egg. This process is under neurohormonal control, notably by the brain galactogenin. 
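As a quick check of the gain figures quoted for the August 2021 NIF shot described above (energies as stated in the text; the arithmetic simply divides fusion yield by the relevant input energy):

```python
# Energies for the August 2021 NIF shot, as quoted in the text (in megajoules).
fusion_yield_MJ = 1.3
laser_energy_MJ = 1.9
capsule_energy_MJ = 0.230   # 230 kilojoules reaching the fuel capsule

total_gain = fusion_yield_MJ / laser_energy_MJ      # ~0.68, reported as ~0.7
capsule_gain = fusion_yield_MJ / capsule_energy_MJ  # ~5.7, reported as ~6

print(f"Total (scientific) gain: {total_gain:.2f}")
print(f"Capsule gain:            {capsule_gain:.1f}")
```

The December 2022 shot, with a reported scientific gain of about 1.5, is what crossed the greater-than-one threshold used in the National Academy of Sciences definition mentioned above.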
The biochemical pathways for glycogen and galactogen synthesis are closely related. Both use glucose as a common precursor and its conversion to activated galactose is catalyzed by UDP-glucose 4-epimerase and galactose-1-P uridyl-transferase. This enables glucose to be the common precursor for both glycogenesis and galactogenesis. In fact, both polysaccharides are found in the same secretory cells of the albumen gland and are subject to independent seasonal variations. Glycogen accumulates in autumn as a general energy storage for hibernation, whereas galactogen is synthesized during spring in preparation of egg-laying. It is commonly accepted that galactogen production is restricted to embryo nutrition and therefore is mainly transferred to eggs.\nLittle is known about the galactogen-synthesizing enzymes. A D-galactosyltransferase was described in the albumen gland of Helix pomatia. This enzyme catalyzes the transfer of D-galactose to a (1→6) linkage and is dependent upon the presence of acceptor galactogen. Similarly, a β-(1→3)-galactosyltransferase activity has been detected in albumen gland extracts from Limnaea stagnalis.\nIn embryos and fasting newly hatched snails, galactogen is most likely an important donor (via galactose) of metabolic intermediates. In feeding snails, the primary diet is glucose-containing starch and cellulose. These polymers are digested and contribute glucose to the pathways of intermediary metabolism. Galactogen consumption begins at the gastrula stage and continues throughout development. Up to 46-78 % of egg galactogen disappears during embryo development. The remainder is used up within the first days after hatching.\nOnly snail embryos and hatchlings are able to degrade galactogen, whereas other animals and even adult snails do not. β-galactosidase may be important in the release of galactose from galactogen; however, most of the catabolic pathway of this polysaccharide is still unknown.", "Galactogen is a polysaccharide of galactose that functions as energy storage in pulmonate snails and some Caenogastropoda. This polysaccharide is exclusive of the reproduction and is only found in the albumen gland from the female snail reproductive system and in the perivitelline fluid of eggs.\nGalactogen serves as an energy reserve for developing embryos and hatchlings, which is later replaced by glycogen in juveniles and adults. The advantage of accumulating galactogen instead of glycogen in eggs remains unclear, although some hypotheses have been proposed (see below).", "Galactogen has been reported in the albumen gland of pulmonate snails such as Helix pomatia, Limnaea stagnalis, Oxychilus cellarius, Achatina fulica, Aplexa nitens and Otala lactea, Bulimnaea megasoma, Ariolimax columbianis, Ariophanta, Biomphalaria glabrata, and Strophochelius oblongus. This polysaccharide was also identified in the Caenogastropoda Pila virens and Viviparus, Pomacea canaliculata, and Pomacea maculata.\nIn adult gastropods, galactogen is confined to the albumen gland, showing a large variation in content during the year and reaching a higher peak in the reproductive season. During the reproductive season, this polysaccharide is rapidly restored in the albumen gland after being transferred to the eggs, decreasing its total amount only after repeated ovipositions. In Pomacea canaliculata snails, galactogen would act, together with perivitellins, as a main limiting factor of reproduction. 
This polysaccharide has been identified in the Golgi zone of the secretory cells from the albumen gland in the form of discrete granules 200 Å in diameter. The appearance of galactogen granules within the secretory globules suggests that this is the site of biosynthesis of the polysaccharide.\nApart from the albumen gland, galactogen is also found as a major component of the perivitelline fluid from the snail eggs, comprising the main energy source for the developing embryo.", "Galactogen is a polymer of galactose with species-specific structural variations. In this polysaccharide, the D-galactose are predominantly β (1→3) and β (1→6) linked; however some species also have β (1→2) and β (1→4). The galactogen of the aquatic Basommatophora (e.g. Lymnaea, Biomphalaria) is highly branched with only 5-8 % of the sugar residues in linear sections, and β(1→3) and β(1→6) bonds alternate more-or-Iess regularly. In the terrestrial Stylommatophora (e.g. Helix, Arianta, Cepaea, Achatina) up to 20% of the sugar residues are linear β(1→3) bound. The galactogen of Ampullarius sp species has an unusually large proportion of linearly arranged sugars, with 5% β(1→3), 26% β(1→6), and 10% β(1→2). Other analyses in Helix pomatia suggested a dichotomous structure, where each galactopyranose unit bears a branch or side chain.\nMolecular weight determinations in galactogen extracted from the eggs of Helix pomatia and Limnaea stagnalis were estimated in 4x10 and 2.2x10, respectively. In these snails galactogen contains only D-galactose. Depending upon the origin of the galactogen, apart from D-galactose, L-galactose, L-fucose, D-glucose, L-glucose and phosphate residues may also be present; for instance, the galactogen from Ampullarius sp. contains 98% of D-galacotose and 2% of L- fucose, and the one isolated from Pomacea maculata eggs consist in 68% of D-galactose and 32% of D-glucose. Phosphate-substituted galactose residues are found in the galactogen of individual species from various snail genera such as Biomphalaria, Helix and Cepaea. Therefore, current knowledge indicates it could be considered either a homopolysaccharide of or a heteropolysaccharide dominated by galactose.", "Besides being a source of energy, few other functions have been described for galactogen in the snail eggs, and all of them are related to embryo defense and protection. Given that carbohydrates retain water, the high amount of this polysaccharide would protect the eggs from desiccation from those snails that have aerial oviposition. Besides, the high viscosity that the polysaccharide may confer to the perivitelline fluid has been suggested as a potential antimicrobial defense.\nSince galactogen is a β-linked polysaccharide, such as cellulose or hemicelluloses, specific biochemical adaptations are needed to exploit it as a nutrient, such as specific glycosidases. However, apart from snail embryos and hatchlings, no animal seems to be able to catabolize galactogen, including adult snails. This fact led to consider galactogen as part of an antipredation defense system exclusive of gastropods, deterring predators by lowering the nutritional value of eggs.", "Galactomannans are polysaccharides consisting of a mannose backbone with galactose side groups, more specifically, a (1-4)-linked beta-D-mannopyranose backbone with branchpoints from their 6-positions linked to alpha-D-galactose, (i.e. 
1-6-linked alpha-D-galactopyranose).\nIn order of increasing mannose-to-galactose ratio:\n*fenugreek gum, mannose:galactose ~1:1\n*guar gum, mannose:galactose ~2:1\n*tara gum, mannose:galactose ~3:1\n*locust bean gum or carob gum, mannose:galactose ~4:1\n*cassia gum, mannose:galactose ~5:1\nGalactomannans are often used in food products to increase the viscosity of the water phase.\nGuar gum has been used to add viscosity to artificial tears, but is not as stable as carboxymethylcellulose.", "Galactomannans are used in foods as stabilisers. Guar and locust bean gum (LBG) are commonly used in ice cream to improve texture and reduce ice cream meltdown. LBG is also used extensively in cream cheese, fruit preparations and salad dressings. Tara gum is seeing growing acceptability as a food ingredient but is still used to a much lesser extent than guar or LBG. Guar has the highest usage in foods, largely due to its low and stable price.", "Galactomannan is a component of the cell wall of the mold Aspergillus and is released during growth. Detection of galactomannan in blood is used to diagnose invasive aspergillosis infections in humans. This is performed with monoclonal antibodies in a double-sandwich ELISA; this assay from Bio-Rad Laboratories was approved by the FDA in 2003 and is of moderate accuracy. The assay is most useful in patients who have had hemopoietic cell transplants (stem cell transplants). False-positive Aspergillus galactomannan tests have been found in patients on intravenous treatment with some antibiotics or with fluids containing gluconate or citric acid, such as some platelet transfusions, parenteral nutrition or PlasmaLyte.", "Galvanoluminescence is the emission of light produced by the passage of an electric current through an appropriate electrolyte in which an electrode, made of certain metals such as aluminium or tantalum, has been immersed. An example is the electrolysis of sodium bromide (NaBr).", "A typical adult human stomach will secrete about 1.5 liters of gastric acid daily. Gastric acid secretion proceeds in several steps. Chloride and hydrogen ions are secreted separately from the cytoplasm of parietal cells and mixed in the canaliculi. Gastric acid is then secreted into the lumen of the gastric gland and gradually reaches the main stomach lumen. The exact manner in which the secreted acid reaches the stomach lumen is controversial, as acid must first cross the relatively pH-neutral gastric mucus layer.\nChloride and sodium ions are secreted actively from the cytoplasm of the parietal cell into the lumen of the canaliculus. This creates a negative potential of between −40 and −70 mV across the parietal cell membrane that causes potassium ions and a small number of sodium ions to diffuse from the cytoplasm into the parietal cell canaliculi.\nThe enzyme carbonic anhydrase catalyses the reaction between carbon dioxide and water to form carbonic acid. This acid immediately dissociates into hydrogen and bicarbonate ions. The hydrogen ions leave the cell through H/K ATPase antiporter pumps.\nAt the same time, sodium ions are actively reabsorbed. This means that the majority of secreted K (potassium) and Na (sodium) ions return to the cytoplasm. In the canaliculus, secreted hydrogen and chloride ions mix and are secreted into the lumen of the oxyntic gland.\nThe highest concentration that gastric acid reaches in the stomach is 160 mM in the canaliculi. This is about 3 million times that of arterial blood, but almost exactly isotonic with other bodily fluids. 
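As a rough numerical check of these figures (a minimal sketch in Python; the arterial blood pH of about 7.4 is a standard reference value and is not taken from the text):

 import math

 # Peak acid concentration reached in the canaliculi, as stated above.
 canalicular_h = 0.160                 # mol/L (160 mM)

 # Arterial blood pH of ~7.4 is a standard physiology value (assumption);
 # by definition pH = -log10([H+]).
 blood_h = 10 ** -7.4                  # ≈ 4e-8 mol/L

 print(f"[H+] ratio, canaliculi : blood ≈ {canalicular_h / blood_h:,.0f}")
 # ≈ 4,000,000 – the same order of magnitude as the "about 3 million times" above.

 # The lowest secreted pH of 0.8 mentioned just below corresponds to:
 print(f"[H+] at pH 0.8 ≈ {10 ** -0.8:.3f} mol/L")
 # ≈ 0.158 mol/L, consistent with the 160 mM canalicular concentration.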
The lowest pH of the secreted acid is 0.8, but the acid is diluted in the stomach lumen to a pH of between 1 and 3.\nThere is a small continuous basal secretion of gastric acid between meals of usually less than 10mEq/hour.\nThere are three phases in the secretion of gastric acid which increase the secretion rate in order to digest a meal:\n# The cephalic phase: Thirty percent of the total gastric acid secretions to be produced is stimulated by anticipation of eating and the smell or taste of food. This signalling occurs from higher centres in the brain through the vagus nerve (Cranial Nerve X). It activates parietal cells to release acid and ECL cells to release histamine. The vagus nerve (CN X) also releases gastrin releasing peptide onto G cells. Finally, it also inhibits somatostatin release from D cells.\n# The gastric phase: About sixty percent of the total acid for a meal is secreted in this phase. Acid secretion is stimulated by distension of the stomach and by amino acids present in the food.\n# The intestinal phase: The remaining 10% of acid is secreted when chyme enters the small intestine, and is stimulated by small intestine distension and by amino acids. The duodenal cells release entero-oxyntin which acts on parietal cells without affecting gastrin.", "The pH of gastric acid in humans is 1.5-2.0. This is a much lower pH level than that of most animals and very close to scavengers, which eat carrion. This suggests that carrion feeding could have been more important in human evolution than previously thought.", "The role of gastric acid in digestion was established in the 1820s and 1830s by William Beaumont on Alexis St. Martin, who, as a result of an accident, had a fistula (hole) in his stomach, which allowed Beaumont to observe the process of digestion and to extract gastric acid, verifying that acid played a crucial role in digestion.", "The proton pump enzyme is the target of proton pump inhibitors, used to increase gastric pH (and hence decrease stomach acidity) in diseases that feature excess acid. H antagonists indirectly decrease gastric acid production. Antacids neutralize existing acid.", "Gastric acid, gastric juice, or stomach acid is a digestive fluid formed within the stomach lining. With a pH between 1 and 3, gastric acid plays a key role in digestion of proteins by activating digestive enzymes, which together break down the long chains of amino acids of proteins. Gastric acid is regulated in feedback systems to increase production when needed, such as after a meal. Other cells in the stomach produce bicarbonate, a base, to buffer the fluid, ensuring a regulated pH. These cells also produce mucus – a viscous barrier to prevent gastric acid from damaging the stomach. The pancreas further produces large amounts of bicarbonate and secretes bicarbonate through the pancreatic duct to the duodenum to neutralize gastric acid passing into the digestive tract.\nThe primary active component of gastric acid is hydrochloric acid (HCl), which is produced by parietal cells in the gastric glands in the stomach. The secretion is a complex and relatively energetically expensive process. Parietal cells contain an extensive secretory network (called canaliculi) from which the \"hydrochloric acid\" is secreted into the lumen of the stomach. The pH of gastric acid is 1.5 to 3.5 in the human stomach lumen, a level maintained by the proton pump H/K ATPase. 
The parietal cell releases bicarbonate into the bloodstream in the process, which causes a temporary rise of pH in the blood, known as an alkaline tide.\nThe highly acidic environment in the stomach lumen degrades proteins (e.g., food). The peptide bonds that hold proteins together are labilized. The gastric chief cells of the stomach secrete enzymes for protein breakdown (inactive pepsinogen, and in infancy rennin). The low pH activates pepsinogen into the enzyme pepsin, which then aids digestion by breaking the amino acid bonds, a process called proteolysis. In addition, many microorganisms are inhibited or destroyed in an acidic environment, preventing infection or sickness.", "In the duodenum, gastric acid is neutralized by bicarbonate. This also inactivates gastric enzymes that have their pH optima in the acid range. The secretion of bicarbonate from the pancreas is stimulated by secretin. This polypeptide hormone is activated and secreted from so-called S cells in the mucosa of the duodenum and jejunum when the pH in the duodenum falls below 4.5 to 5.0. The neutralization is described by the equation:\n:HCl + NaHCO3 → NaCl + H2CO3\nThe carbonic acid rapidly equilibrates with carbon dioxide and water through catalysis by carbonic anhydrase enzymes bound to the gut epithelial lining, leading to a net release of carbon dioxide gas within the lumen associated with neutralisation. In the absorptive upper intestine, such as the duodenum, both the dissolved carbon dioxide and carbonic acid will tend to equilibrate with the blood, leading to most of the gas produced on neutralisation being exhaled through the lungs.", "Gastric acid production is regulated by both the autonomic nervous system and several hormones. The parasympathetic nervous system, via the vagus nerve, and the hormone gastrin stimulate the parietal cell to produce gastric acid, acting both directly on parietal cells and indirectly, through the stimulation of the secretion of the hormone histamine from enterochromaffin-like (ECL) cells. Vasoactive intestinal peptide, cholecystokinin, and secretin all inhibit production.\nThe production of gastric acid in the stomach is tightly regulated by positive regulators and negative feedback mechanisms. Four types of cells are involved in this process: parietal cells, G cells, D cells and enterochromaffin-like cells. Besides this, the endings of the vagus nerve (CN X) and the intramural nervous plexus in the digestive tract influence the secretion significantly.\nNerve endings in the stomach secrete two stimulatory neurotransmitters: acetylcholine and gastrin-releasing peptide. Their action is both direct on parietal cells and mediated through the secretion of gastrin from G cells and histamine from enterochromaffin-like cells. Gastrin acts on parietal cells both directly and indirectly, by stimulating the release of histamine.\nThe release of histamine is the most important positive regulatory mechanism of the secretion of gastric acid in the stomach. Its release is stimulated by gastrin and acetylcholine and inhibited by somatostatin.", "In hypochlorhydria and achlorhydria, there is low or no gastric acid in the stomach, potentially leading to problems as the disinfectant properties of the gastric lumen are decreased. 
In such conditions, there is greater risk of infections of the digestive tract (such as infection with Vibrio or Helicobacter bacteria).\nIn Zollinger–Ellison syndrome and hypercalcemia, there are increased gastrin levels, leading to excess gastric acid production, which can cause gastric ulcers.\nIn diseases featuring excess vomiting, patients develop hypochloremic metabolic alkalosis (decreased blood acidity due to depletion of hydrogen and chloride ions).\nGastroesophageal reflux disease (GERD) occurs when stomach acid repeatedly flows back into the esophagus; this backwash (acid reflux) can irritate the lining of the esophagus.\nMany people experience acid reflux from time to time. However, when acid reflux happens repeatedly over time, it can cause GERD.\nMost people are able to manage the discomfort of GERD with lifestyle changes and medications. While it is uncommon, some may need surgery to ease symptoms.", "Gastruloids are three dimensional aggregates of embryonic stem cells (ESCs) that, when cultured in specific conditions, exhibit an organization resembling that of an embryo. They develop with three orthogonal axes and contain the primordial cells for various tissues derived from the three germ layers, without the presence of extraembryonic tissues. Notably, they do not possess forebrain, midbrain, and hindbrain structures. Gastruloids are embryonic organoids that serve as a valuable model system for studying mammalian development, including human development, as well as diseases associated with it.", "The Gastruloid model system draws its origins from work by Marikawa et al. In that study, small numbers of mouse P19 embryonal carcinoma (EC) cells were aggregated as embryoid bodies (EBs) and used to model and investigate the processes involved in anteroposterior polarity and the formation of a primitive streak region. In this work, the EBs were able to organise themselves into structures with polarised gene expression, axial elongation/organisation and up-regulation of posterior mesodermal markers. This was in stark contrast to work using EBs from mouse ESCs, which had shown some polarisation of gene expression in a small number of cases but no further development of the multicellular system.\nFollowing this study, the [http://amapress.gen.cam.ac.uk Martinez Arias] laboratory in the Department of Genetics at the University of Cambridge demonstrated how aggregates of mouse embryonic stem cells (ESCs) were able to generate structures that exhibited collective behaviours with striking similarity to those during early development, such as symmetry-breaking (in terms of gene expression), axial elongation and germ-layer specification. To quote from the original paper: "Altogether, these observations further emphasize the similarity between the processes that we have uncovered here and the events in the embryo. The movements are related to those of cells in gastrulating embryos and for this reason we term these aggregates ‘gastruloids’". 
As noted by the authors of this protocol, a crucial difference between this culture method and previous work with mouse EBs was the use of small numbers of cells, which may be important for generating the correct length scale for patterning, and the use of culture conditions derived from directed differentiation of ESCs in adherent culture.\nBrachyury (T/Bra), a gene which marks the primitive streak and the site of gastrulation, is up-regulated in the Gastruloids following a pulse of the Wnt/β-Catenin agonist CHIR99021 (Chi; other factors have also been tested) and becomes regionalised to the elongating tip of the Gastruloid. From or near the region expressing T/Bra, cells expressing the mesodermal marker tbx6 are extruded from the aggregate, similar to cells in the gastrulating embryo; it is for this reason that these structures are called Gastruloids.\nFurther studies revealed that the events that specify T/Bra expression in gastruloids mimic those in the embryo. After seven days gastruloids exhibit an organization very similar to a midgestation embryo, with spatially organized primordia for all mesodermal (axial, paraxial, intermediate, cardiac, cranial and hematopoietic) and endodermal derivatives as well as the spinal cord. They also implement Hox gene expression with the same spatiotemporal coordinates as the embryo. Gastruloids lack brain as well as extraembryonic tissues, but characterisation of their cellular complexity at the level of single-cell and spatial transcriptomics reveals that they contain representatives of the three germ layers, including neural crest, primordial germ cells and placodal primordia.\nA feature of gastruloids is a disconnect between the transcriptional programs they outline and morphogenesis. However, changes in the culture conditions can elicit morphogenesis; most significantly, gastruloids have been shown to form somites and early cardiac structures. In addition, interactions between gastruloids and extraembryonic tissues promote an anterior, brain-like polarised tissue.\nGastruloids have recently been obtained from human ESCs, which gives developmental biologists the ability to study early human development without needing human embryos. Importantly though, the human gastruloid model is not able to form a human embryo, meaning that it is non-intact, non-viable and not equivalent to in vivo human embryos.\nThe term Gastruloid has been expanded to include self-organised human embryonic stem cell arrangements on patterned surfaces (micropatterns) that mimic early patterning events in development; these arrangements should be referred to as 2D gastruloids.", "In gene-activated matrix technology (GAM), cytokines and growth factors can be delivered not as recombinant proteins but as plasmid genes. GAM is one of the tissue engineering approaches to wound healing. Following gene delivery, the recombinant cytokine can be expressed in situ by endogenous wound-healing cells – in small amounts but for a prolonged period of time – leading to reproducible tissue regeneration. The matrix can be modified by incorporating a viral vector, mRNA or DNA bound to a delivery system, or a naked plasmid.", "Twenty irregular tetrahedra pack with a common vertex in such a way that the twelve outer vertices form a regular icosahedron. Indeed, the icosahedron edge length l is slightly longer than the circumsphere radius r (l ≈ 1.05r). There is a solution with regular tetrahedra if the space is not Euclidean, but spherical. 
It is the polytope {3,3,5}, using the Schläfli notation, also known as the 600-cell.\nThere are one hundred and twenty vertices which all belong to the hypersphere S3 with radius equal to the golden ratio (φ = (1 + √5)/2 ≈ 1.618) if the edges are of unit length. The six hundred cells are regular tetrahedra grouped by five around a common edge and by twenty around a common vertex. This structure is called a polytope (see Coxeter), the general name in higher dimensions for the series containing polygons and polyhedra. Even though this structure is embedded in four dimensions, it can be regarded as a three-dimensional (curved) manifold. This point is conceptually important for the following reason. The ideal models that have been introduced in the curved space are three-dimensional curved templates. They look locally like three-dimensional Euclidean models. So, the {3,3,5} polytope, which is a tiling by tetrahedra, provides a very dense atomic structure if atoms are located on its vertices. It is therefore naturally used as a template for amorphous metals, but one should not forget that it is at the price of successive idealizations.", "The stability of metals is a longstanding question of solid state physics, which can only be understood in the quantum mechanical framework by properly taking into account the interaction between the positively charged ions and the valence and conduction electrons. It is nevertheless possible to use a very simplified picture of metallic bonding and keep only an isotropic type of interaction, leading to structures which can be represented as densely packed spheres. And indeed the crystalline simple metal structures are often either close-packed face-centered cubic (fcc) or hexagonal close packing (hcp) lattices. To some extent amorphous metals and quasicrystals can also be modeled by close packing of spheres. The local atomic order is well modeled by a close packing of tetrahedra, leading to an imperfect icosahedral order.\nA regular tetrahedron is the densest configuration for the packing of four equal spheres. The dense random packing of hard spheres problem can thus be mapped on the tetrahedral packing problem. It is a practical exercise to try to pack table tennis balls in order to form only tetrahedral configurations. One starts with four balls arranged as a perfect tetrahedron, and tries to add new spheres, while forming new tetrahedra. The next solution, with five balls, is trivially two tetrahedra sharing a common face; note that already with this solution, the fcc structure, which contains individual tetrahedral holes, does not show such a configuration (the tetrahedra share edges, not faces). With six balls, three regular tetrahedra are built, and the cluster is incompatible with all compact crystalline structures (fcc and hcp). Adding a seventh sphere gives a new cluster consisting of two "axial" balls touching each other and five others touching the latter two balls, the outer shape being an almost regular pentagonal bi-pyramid. However, we now face a real packing problem, analogous to the one encountered above with the pentagonal tiling in two dimensions. The dihedral angle of a tetrahedron is not commensurable with 2π; consequently, a hole remains between two faces of neighboring tetrahedra. As a consequence, a perfect tiling of the Euclidean space R3 is impossible with regular tetrahedra. 
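Both geometric facts used in this argument – that the icosahedron edge is slightly longer than its circumsphere radius, and that five tetrahedral dihedral angles fall short of a full turn around a shared edge – can be checked numerically. A minimal sketch in Python (the closed-form expressions for the circumradius and the dihedral angle are standard results, not taken from the text):

 import math

 # Regular icosahedron with unit edge: circumsphere radius R = (1/4)*sqrt(10 + 2*sqrt(5)).
 R = 0.25 * math.sqrt(10 + 2 * math.sqrt(5))
 print(f"edge / circumradius ≈ {1 / R:.4f}")            # ≈ 1.0515, i.e. l ≈ 1.05 r

 # Regular tetrahedron: dihedral angle = arccos(1/3).
 dihedral = math.degrees(math.acos(1 / 3))
 print(f"tetrahedral dihedral angle ≈ {dihedral:.2f}°")  # ≈ 70.53°, not a divisor of 360°

 # Five tetrahedra sharing a common edge leave a small angular gap:
 print(f"gap for five tetrahedra ≈ {360 - 5 * dihedral:.2f}°")  # ≈ 7.36°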
The frustration has a topological character: it is impossible to fill Euclidean space with tetrahedra, even severely distorted ones, if we impose that a constant number of tetrahedra (here five) share a common edge.\nThe next step is crucial: the search for an unfrustrated structure by allowing for curvature in the space, in order for the local configurations to propagate identically and without defects throughout the whole space.", "Two-dimensional examples are helpful in order to get some understanding about the origin of the competition between local rules and geometry in the large. Consider first an arrangement of identical discs (a model for a hypothetical two-dimensional metal) on a plane; we suppose that the interaction between discs is isotropic and locally tends to arrange the disks in the densest way possible. The best arrangement for three disks is trivially an equilateral triangle with the disk centers located at the triangle vertices. The study of the long range structure can therefore be reduced to that of plane tilings with equilateral triangles. A well known solution is provided by the triangular tiling, with a total compatibility between the local and global rules: the system is said to be "unfrustrated".\nBut now suppose the interaction energy is at a minimum when atoms sit on the vertices of a regular pentagon. Trying to propagate a long-range packing of these pentagons sharing edges (atomic bonds) and vertices (atoms) is impossible. This is due to the impossibility of tiling a plane with regular pentagons, simply because the pentagon vertex angle does not divide 2π. Three such pentagons can easily fit at a common vertex, but a gap remains between two edges. It is this kind of discrepancy which is called "geometric frustration". There is one way to overcome this difficulty. Let the surface to be tiled be free of any presupposed topology, and let us build the tiling with a strict application of the local interaction rule. In this simple example, we observe that the surface inherits the topology of a sphere and so receives a curvature. The final structure, here a pentagonal dodecahedron, allows for a perfect propagation of the pentagonal order. It is called an \"ideal\" (defect-free) model for the considered structure.", "Another type of geometrical frustration arises from the propagation of a local order. A main question that a condensed matter physicist faces is to explain the stability of a solid.\nIt is sometimes possible to establish some local rules, of a chemical nature, which lead to low energy configurations and therefore govern structural and chemical order. This is not generally the case, however, and often the local order defined by local interactions cannot propagate freely, leading to geometric frustration. A common feature of all these systems is that, even with simple local rules, they present a large set of, often complex, structural realizations. Geometric frustration plays a role in many fields of condensed matter, ranging from clusters and amorphous solids to complex fluids.\nThe general approach to resolving these complications follows two steps. First, the constraint of perfect space-filling is relaxed by allowing for space curvature. An ideal, unfrustrated structure is defined in this curved space. Then, specific distortions are applied to this ideal template in order to embed it into three-dimensional Euclidean space. 
The final structure is a mixture of ordered regions, where the local order is similar to that of the template, and defects arising from the embedding. Among the possible defects, disclinations play an important role.", "With the help of lithography techniques, it is possible to fabricate sub-micrometer size magnetic islands whose geometric arrangement reproduces the frustration found in naturally occurring spin ice materials. Recently R. F. Wang et al. reported the discovery of an artificial geometrically frustrated magnet composed of arrays of lithographically fabricated single-domain ferromagnetic islands. These islands are manually arranged to create a two-dimensional analog to spin ice. The magnetic moments of the ordered ‘spin’ islands were imaged with magnetic force microscopy (MFM) and then the local accommodation of frustration was thoroughly studied. In their previous work on a square lattice of frustrated magnets, they observed both ice-like short-range correlations and the absence of long-range correlations, just like in spin ice at low temperature. These results solidify the uncharted ground on which the real physics of frustration can be visualized and modeled by these artificial geometrically frustrated magnets, and inspire further research activity.\nThese artificially frustrated ferromagnets can exhibit unique magnetic properties when studying their global response to an external field using the magneto-optical Kerr effect. In particular, a non-monotonic angular dependence of the square lattice coercivity is found to be related to disorder in the artificial spin ice system.", "The mathematical definition is simple (and analogous to the so-called Wilson loop in quantum chromodynamics): One considers for example expressions ("total energies" or "Hamiltonians") of the form\n:H = −Σ_(i,k)∈G J_i,k S_i · S_k,\nwhere G is the graph considered, whereas the quantities J_i,k are the so-called "exchange energies" between nearest-neighbours, which (in the energy units considered) assume the values ±1 (mathematically, this is a signed graph), while the S_i · S_k are inner products of scalar or vectorial spins or pseudo-spins. If the graph G has quadratic or triangular faces P, the so-called "plaquette variables" P_W, "loop-products" of the following kind, appear:\n:P_W = J_1,2 J_2,3 J_3,4 J_4,1 and P_W = J_1,2 J_2,3 J_3,1, respectively,\nwhich are also called "frustration products". One has to perform a sum of these products over all plaquettes. The result for a single plaquette is either +1 or −1. In the last-mentioned case the plaquette is "geometrically frustrated".\nIt can be shown that the result has a simple gauge invariance: it does not change &ndash; nor do other measurable quantities, e.g. the "total energy" – even if locally the exchange integrals and the spins are simultaneously modified as follows:\n:J_i,k → ε_i J_i,k ε_k, S_i → ε_i S_i, S_k → ε_k S_k.\nHere the numbers ε_i and ε_k are arbitrary signs, i.e. +1 or −1, so that the modified structure may look totally random.", "A mathematically analogous situation to the degeneracy in water ice is found in the spin ices. A common spin ice structure is shown in Figure 6 in the cubic pyrochlore structure with one magnetic atom or ion residing on each of the four corners. Due to the strong crystal field in the material, each of the magnetic ions can be represented by an Ising ground state doublet with a large moment. This suggests a picture of Ising spins residing on the corner-sharing tetrahedral lattice with spins fixed along their local quantization axes, a situation realized in spin ice materials such as Ho2Ti2O7, Dy2Ti2O7, and Ho2Sn2O7. 
These materials all show nonzero residual entropy at low temperature.", "Although most previous and current research on frustration focuses on spin systems, the phenomenon was first studied in ordinary ice. In 1936 Giauque and Stout published The Entropy of Water and the Third Law of Thermodynamics. Heat Capacity of Ice from 15 K to 273 K, reporting calorimeter measurements on water through the freezing and vaporization transitions up to the high temperature gas phase. The entropy was calculated by integrating the heat capacity and adding the latent heat contributions; the low temperature measurements were extrapolated to zero using Debye's then recently derived formula. The resulting entropy, S = 44.28 cal/(K·mol) = 185.3 J/(mol·K), was compared to the theoretical result from statistical mechanics of an ideal gas, S = 45.10 cal/(K·mol) = 188.7 J/(mol·K). The two values differ by S0 = 0.82 ± 0.05 cal/(K·mol) = 3.4 J/(mol·K). This result was later explained, to an excellent approximation, by Linus Pauling, who showed that ice possesses a finite entropy (estimated as 0.81 cal/(K·mol) or 3.4 J/(mol·K)) at zero temperature due to the configurational disorder intrinsic to the protons in ice.\nIn the hexagonal or cubic ice phase the oxygen ions form a tetrahedral structure with an O–O bond length of 2.76 Å (276 pm), while the O–H bond length measures only 0.96 Å (96 pm). Every oxygen (white) ion is surrounded by four hydrogen ions (black) and each hydrogen ion is surrounded by two oxygen ions, as shown in Figure 5. Maintaining the internal H2O molecule structure, the minimum energy position of a proton is not half-way between two adjacent oxygen ions. There are two equivalent positions a hydrogen may occupy on the line of the O–O bond, a far and a near position. Thus a rule leads to the frustration of the proton positions in a ground state configuration: for each oxygen, two of the neighboring protons must reside in the far position and two of them in the near position, the so-called ‘ice rules’. Pauling proposed that the open tetrahedral structure of ice affords many equivalent states satisfying the ice rules.\nPauling went on to compute the configurational entropy in the following way: consider one mole of ice, consisting of N oxygen ions and 2N protons. Each O–O bond has two positions for a proton, leading to 2^(2N) possible configurations. However, among the 16 possible configurations associated with each oxygen, only 6 are energetically favorable, maintaining the H2O molecule constraint. An upper bound on the number of configurations that the ground state can take is then estimated as Ω ≤ 2^(2N)(6/16)^N = (3/2)^N. Correspondingly, the configurational entropy S0 = kB ln(Ω) = N kB ln(3/2) = 0.81 cal/(K·mol) = 3.4 J/(mol·K) is in amazing agreement with the missing entropy measured by Giauque and Stout.\nAlthough Pauling's calculation neglected both the global constraint on the number of protons and the local constraint arising from closed loops on the wurtzite lattice, the estimate was subsequently shown to be of excellent accuracy.", "The spin ice model is only one subdivision of frustrated systems. The word frustration was initially introduced to describe a system's inability to simultaneously minimize the competing interaction energies between its components. In general, frustration is caused either by competing interactions due to site disorder (see also the Villain model) or by lattice structure such as in the triangular, face-centered cubic (fcc), hexagonal-close-packed, tetrahedron, pyrochlore and kagome lattices with antiferromagnetic interaction. 
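As a brief aside, Pauling's residual-entropy estimate quoted above is easy to reproduce numerically. A minimal sketch in Python (the gas constant and the calorie-to-joule conversion are standard values, not taken from the text):

 import math

 R = 8.314462618     # molar gas constant in J/(mol·K), i.e. N_A·kB per mole
 CAL = 4.184         # joules per thermochemical calorie

 # Pauling's counting argument gives Omega ≈ (3/2)^N per mole of H2O,
 # so the molar configurational entropy is S0 = R·ln(3/2).
 S0 = R * math.log(1.5)
 print(f"S0 ≈ {S0:.2f} J/(mol·K) ≈ {S0 / CAL:.2f} cal/(K·mol)")
 # ≈ 3.37 J/(mol·K) ≈ 0.81 cal/(K·mol), matching the measured missing entropy
 # of 0.82 ± 0.05 cal/(K·mol) reported by Giauque and Stout.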
So frustration is divided into two categories: the first corresponds to the spin glass, which has both disorder in structure and frustration in spin; the second is the geometrical frustration with an ordered lattice structure and frustration of spin. The frustration of a spin glass is understood within the framework of the RKKY model, in which the interaction property, either ferromagnetic or anti-ferromagnetic, is dependent on the distance of the two magnetic ions. Due to the lattice disorder in the spin glass, one spin of interest and its nearest neighbors could be at different distances and have a different interaction property, which thus leads to different preferred alignment of the spin.", "Geometrical frustration is an important feature in magnetism, where it stems from the relative arrangement of spins. A simple 2D example is shown in Figure 1. Three magnetic ions reside on the corners of a triangle with antiferromagnetic interactions between them; the energy is minimized when each spin is aligned opposite to neighbors. Once the first two spins align antiparallel, the third one is frustrated because its two possible orientations, up and down, give the same energy. The third spin cannot simultaneously minimize its interactions with both of the other two. Since this effect occurs for each spin, the ground state is sixfold degenerate. Only the two states where all spins are up or down have more energy.\nSimilarly in three dimensions, four spins arranged in a tetrahedron (Figure 2) may experience geometric frustration. If there is an antiferromagnetic interaction between spins, then it is not possible to arrange the spins so that all interactions between spins are antiparallel. There are six nearest-neighbor interactions, four of which are antiparallel and thus favourable, but two of which (between 1 and 2, and between 3 and 4) are unfavourable. It is impossible to have all interactions favourable, and the system is frustrated.\nGeometrical frustration is also possible if the spins are arranged in a non-collinear way. If we consider a tetrahedron with a spin on each vertex pointing along the easy axis (that is, directly towards or away from the centre of the tetrahedron), then it is possible to arrange the four spins so that there is no net spin (Figure 3). This is exactly equivalent to having an antiferromagnetic interaction between each pair of spins, so in this case there is no geometrical frustration. With these axes, geometric frustration arises if there is a ferromagnetic interaction between neighbours, where energy is minimized by parallel spins. The best possible arrangement is shown in Figure 4, with two spins pointing towards the centre and two pointing away. The net magnetic moment points upwards, maximising ferromagnetic interactions in this direction, but left and right vectors cancel out (i.e. are antiferromagnetically aligned), as do forwards and backwards. There are three different equivalent arrangements with two spins out and two in, so the ground state is three-fold degenerate.", "In condensed matter physics, the term geometrical frustration (or in short: frustration) refers to a phenomenon where atoms tend to stick to non-trivial positions or where, on a regular crystal lattice, conflicting inter-atomic forces (each one favoring rather simple, but different structures) lead to quite complex structures. 
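The frustration-product bookkeeping defined earlier, and the sixfold-degenerate antiferromagnetic triangle of Figure 1, can both be illustrated by a brute-force enumeration. A minimal sketch in Python (the ±1 Ising spins and the sign convention E = −Σ J·s·s are assumptions consistent with the energy expression given above):

 from itertools import product as spin_states

 # Ising triangle with all bonds antiferromagnetic.
 # Convention matching the Hamiltonian above: E = -sum_ij J_ij * s_i * s_j,
 # so J = +1 is a ferromagnetic bond and J = -1 an antiferromagnetic one.
 bonds = {(0, 1): -1, (1, 2): -1, (2, 0): -1}

 # Frustration product of the plaquette: -1 means geometrically frustrated.
 P = 1
 for J in bonds.values():
     P *= J
 print("plaquette product P =", P)                      # -1 -> frustrated

 # Brute-force enumeration of the 2^3 spin configurations.
 def energy(s):
     return -sum(J * s[i] * s[j] for (i, j), J in bonds.items())

 energies = {s: energy(s) for s in spin_states((-1, +1), repeat=3)}
 ground_energy = min(energies.values())
 ground_states = [s for s, E in energies.items() if E == ground_energy]
 print("ground-state energy:", ground_energy)           # -1: one bond is always unsatisfied
 print("ground-state degeneracy:", len(ground_states))  # 6, as described for Figure 1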
As a consequence of the frustration in the geometry or in the forces, a plenitude of distinct ground states may result at zero temperature, and usual thermal ordering may be suppressed at higher temperatures. Much-studied examples are amorphous materials, glasses, and dilute magnets.\nThe term frustration, in the context of magnetic systems, was introduced by Gerard Toulouse in 1977. Frustrated magnetic systems had been studied even before. Early work includes a study of the Ising model on a triangular lattice with nearest-neighbor spins coupled antiferromagnetically, by G. H. Wannier, published in 1950. Related features occur in magnets with competing interactions, where both ferromagnetic as well as antiferromagnetic couplings between pairs of spins or magnetic moments are present, with the type of interaction depending on the separation distance of the spins. In that case helical spin arrangements, among others, may result, as had been discussed originally, especially by A. Yoshimori, T. A. Kaplan, R. J. Elliott, and others, starting in 1959, to describe experimental findings on rare-earth metals. A renewed interest in such spin systems with frustrated or competing interactions arose about two decades later, beginning in the 1970s, in the context of spin glasses and spatially modulated magnetic superstructures. In spin glasses, frustration is augmented by stochastic disorder in the interactions, as may occur experimentally in non-stoichiometric magnetic alloys. Carefully analyzed spin models with frustration include the Sherrington–Kirkpatrick model, describing spin glasses, and the ANNNI model, describing commensurate and incommensurate magnetic superstructures. Recently, the concept of frustration has been used in brain network analysis to identify the non-trivial assemblage of neural connections and highlight the adjustable elements of the brain.", "High-pressure lamps are much more similar to HID lamps than to fluorescent lamps.\nThese lamps radiate broad-band UVC radiation, rather than a single line. They are widely used in industrial water treatment, because they are very intense radiation sources. High-pressure lamps produce very bright bluish white light.", "Excimer lamps emit narrow-band UVC and vacuum-ultraviolet radiation at a variety of wavelengths depending on the medium. They are mercury-free, reach full output quicker than a mercury lamp, and generate less heat. Excimer emission at 207 and 222 nm appears to be safer than traditional 254 nm germicidal radiation, due to the greatly reduced penetration of these wavelengths in human skin.", "A germicidal lamp (also known as a disinfection lamp or sterilizer lamp) is an electric light that produces ultraviolet C (UVC) light. This short-wave ultraviolet light disrupts DNA base pairing, causing formation of pyrimidine dimers, and leads to the inactivation of bacteria, viruses, and protozoans. It can also be used to produce ozone for water disinfection. Such lamps are used in ultraviolet germicidal irradiation (UVGI).\nThere are four common types available:\n* Low-pressure mercury lamps\n* High-pressure mercury lamps\n* Excimer lamps\n* LEDs", "Low-pressure mercury lamps are very similar to fluorescent lamps, with a wavelength of 253.7 nm (1182.5 THz).\nThe most common form of germicidal lamp looks similar to an ordinary fluorescent lamp but the tube contains no fluorescent phosphor. In addition, rather than being made of ordinary borosilicate glass, the tube is made of fused quartz or Vycor 7913 glass. 
These two changes combine to allow the 253.7 nm ultraviolet light produced by the mercury arc to pass out of the lamp unmodified (whereas, in common fluorescent lamps, it causes the phosphor to fluoresce, producing visible light). Germicidal lamps still produce a small amount of visible light due to other mercury radiation bands.\nAn older design looks like an incandescent lamp but with the envelope containing a few droplets of mercury. In this design, the incandescent filament heats the mercury, producing a vapor which eventually allows an arc to be struck, short-circuiting the incandescent filament.\nAs with all gas-discharge lamps, low- and high-pressure mercury lamps exhibit negative resistance and require the use of an external ballast to regulate the current flow. The older lamps that resembled an incandescent lamp were often operated in series with an ordinary 40 W incandescent "appliance" lamp; the incandescent lamp acted as the ballast for the germicidal lamp.", "For most purposes, ozone production would be a detrimental side effect of lamp operation. To prevent this, most germicidal lamps are treated to absorb the 185 nm mercury emission line (which is the longest wavelength of mercury light that will ionize oxygen).\nIn some cases (such as water sanitization), ozone production is precisely the point. This requires specialized lamps which do not have the surface treatment.", "Short-wave UV light is harmful to humans. In addition to causing sunburn and (over time) skin cancer, this light can produce extremely painful inflammation of the cornea of the eye, which may lead to temporary or permanent vision impairment. For this reason, the light produced by a germicidal lamp must be carefully shielded against direct viewing, with consideration of reflections and dispersed light. A February 2017 risk analysis of UVC lights concluded that ultraviolet light from these lamps can cause skin and eye problems.", "Recent developments in light-emitting diode (LED) technology have led to the commercial availability of UVC LED sources.\nUVC LEDs use semiconductor materials to produce light in a solid-state device. The wavelength of emission is tuneable by adjusting the chemistry of the semiconductor material, giving a selectivity to the emission profile of the LED across, and beyond, the germicidal wavelength band. Advances in understanding and synthesis of the AlGaN materials system led to significant increases in the output power, device lifetime, and efficiency of UVC LEDs in the early 2010s.\nThe reduced size of LEDs opens up options for small reactor systems allowing point-of-use applications and integration into medical devices. The low power consumption of semiconductors makes possible UV disinfection systems that use small solar cells in remote or Third World applications.\nBy 2019, LEDs made up 41.4% of UV light sales, up from 19.2% in 2014. The UV-C LED global market is expected to rise from $223m in 2017 to US$991m in 2023.", "Germicidal lamps are used to sterilize workspaces and tools used in biology laboratories and medical facilities. If the quartz envelope transmits shorter wavelengths, such as the 185 nm mercury emission line, they can also be used wherever ozone is desired, for example, in the sanitizing systems of hot tubs and aquariums. They are also used by geologists to provoke fluorescence in mineral samples, aiding in their identification. 
In this application, the light produced by the lamp is usually filtered to remove as much visible light as possible, leaving just the UV light. Germicidal lamps are also used in waste water treatment in order to kill microorganisms.\nThe light produced by germicidal lamps is also used to erase EPROMs; the ultraviolet photons are sufficiently energetic to allow the electrons trapped on the transistors' floating gates to tunnel through the gate insulation, eventually removing the stored charge that represents binary ones and zeroes.", "*9,10-Diphenylanthracene (DPA) emits blue light\n*9-(2-Phenylethenyl) anthracene emits teal light\n*1-Chloro-9,10-diphenylanthracene (1-chloro(DPA)) and 2-chloro-9,10-diphenylanthracene (2-chloro(DPA)) emit blue-green light more efficiently than nonsubstituted DPA\n*9,10-Bis(phenylethynyl)anthracene (BPEA) emits green light with maximum at 486 nm\n*1-Chloro-9,10-bis(phenylethynyl)anthracene emits yellow-green light, used in 30-minute high-intensity Cyalume sticks\n*2-Chloro-9,10-bis(phenylethynyl)anthracene emits green light, used in 12-hour low-intensity Cyalume sticks\n*1,8-Dichloro-9,10-bis(phenylethynyl)anthracene emits yellow light, used in Cyalume sticks\n*Rubrene emits orange-yellow at 550 nm\n*2,4-Di-tert-butylphenyl 1,4,5,8-tetracarboxynaphthalene diamide emits deep red light, together with DPA is used to produce white or hot-pink light, depending on their ratio\n*Rhodamine B emits red light. It is rarely used, as it breaks down in contact with CPPO, shortening the shelf life of the mixture.\n*5,12-Bis(phenylethynyl)naphthacene emits orange light\n*Violanthrone emits orange light at 630 nm\n*16,17-(1,2-Ethylenedioxy)violanthrone emits red at 680 nm\n*16,17-Dihexyloxyviolanthrone emits infrared at 725 nm\n*16,17-Butyloxyviolanthrone emits infrared\n*N,N′-Bis(2,5,-di-tert-butylphenyl)-3,4,9,10-perylenedicarboximide emits red\n*1-(N,N-Dibutylamino)anthracene emits infrared\n*6-Methylacridinium iodide emits infrared", "Glow sticks emit light when two chemicals are mixed. The reaction between the two chemicals is catalyzed by a base, usually sodium salicylate. The sticks consist of a tiny, brittle container within a flexible outer container. Each container holds a different solution. When the outer container is flexed, the inner container breaks, allowing the solutions to combine, causing the necessary chemical reaction. After breaking, the tube is shaken to thoroughly mix the components.\nThe glow stick contains two chemicals, a base catalyst, and a suitable dye (sensitizer, or fluorophor). This creates an exergonic reaction. The chemicals inside the plastic tube are a mixture of the dye, the base catalyst, and diphenyl oxalate. The chemical in the glass vial is hydrogen peroxide. By mixing the peroxide with the phenyl oxalate ester, a chemical reaction takes place, yielding two moles of phenol and one mole of peroxyacid ester (1,2-dioxetanedione). The peroxyacid decomposes spontaneously to carbon dioxide, releasing energy that excites the dye, which then relaxes by releasing a photon. The wavelength of the photon—the color of the emitted light—depends on the structure of the dye. The reaction releases energy mostly as light, with very little heat. 
The reason for this is that the reverse [2 + 2] photocycloaddition of 1,2-dioxetanedione is a forbidden transition (it violates the Woodward–Hoffmann rules) and cannot proceed through a regular thermal mechanism.\nBy adjusting the concentrations of the two chemicals and the base, manufacturers can produce glow sticks that glow either brightly for a short amount of time or more dimly for an extended length of time. This also allows glow sticks to perform satisfactorily in hot or cold climates, by compensating for the temperature dependence of the reaction. At maximum concentration (typically found only in laboratory settings), mixing the chemicals results in a furious reaction, producing large amounts of light for only a few seconds. The same effect can be achieved by adding copious amounts of sodium salicylate or other bases. Heating a glow stick also causes the reaction to proceed faster and the glow stick to glow more brightly for a brief period. Cooling a glow stick slows the reaction slightly and causes it to last longer, but the light is dimmer. This can be demonstrated by refrigerating or freezing an active glow stick; when it warms up again, it will resume glowing. The dyes used in glow sticks usually exhibit fluorescence when exposed to ultraviolet radiation; even a spent glow stick may therefore shine under a black light.\nThe light intensity is high immediately after activation, then decays exponentially. Leveling of this initial high output is possible by refrigerating the glow stick before activation.\nA combination of two fluorophores can be used, with one in the solution and another incorporated into the walls of the container. This is advantageous when the second fluorophore would degrade in solution or be attacked by the chemicals. The emission spectrum of the first fluorophore and the absorption spectrum of the second one have to largely overlap, and the first one has to emit at a shorter wavelength than the second one. A downconversion from ultraviolet to visible is possible, as is conversion between visible wavelengths (e.g., green to orange) or visible to near-infrared. The shift can be as much as 200 nm, but usually the emitted wavelength is about 20–100 nm longer than the absorbed one. Glow sticks using this approach tend to have colored containers, due to the dye embedded in the plastic. Infrared glow sticks may appear dark-red to black, as the dyes absorb the visible light produced inside the container and reemit near-infrared.\nOn the other hand, various colors can also be achieved by simply mixing several fluorophores within the solution to achieve the desired effect. These colors are possible thanks to the principles of additive color. For example, a combination of red, yellow, and green fluorophores is used in orange light sticks, and a combination of several fluorescers is used in white light sticks.", "By the 2020s, work was being done to create safer glow sticks and alternatives. The Canadian company LUX BIO, for example, developed glow stick alternatives such as the Light Wand, which is biodegradable and glows with bioluminescence rather than chemiluminescence, and LÜMI, a reusable and non-toxic alternative that glows with phosphorescence and is chemically and biologically inert.", "Glow sticks also contribute to the plastic waste problem, as glow sticks are single-use and made from plastic. 
Additionally, since the inner vial is often made from glass and the chemicals inside are dangerous if improperly handled, the plastic used for glow sticks is non-recoverable by recycling services, so glow sticks are categorized as non-recyclable waste.", "In glow sticks, phenol is produced as a byproduct. It is advisable to keep the mixture away from skin and to prevent accidental ingestion if the glow stick case splits or breaks. If spilled on skin, the chemicals could cause slight skin irritation, swelling, or, in extreme circumstances, vomiting and nausea. Some of the chemicals used in older glow sticks were thought to be potential carcinogens. The sensitizers used are polynuclear aromatic hydrocarbons, a class of compounds known for their carcinogenic properties.\nDibutyl phthalate, a plasticizer sometimes used in glow sticks (and many plastics), has raised some health concerns. It was put on California's list of suspected teratogens in 2006. Glow stick liquid contains ingredients that can act as a plasticizer, softening plastics onto which it leaks. Diphenyl oxalate can sting and burn eyes, irritate and sting skin and can burn the mouth and throat if ingested.\nResearchers in Brazil, concerned about waste from glowsticks used in fishing in their country, published a study in 2014 on this topic. It measured the secondary reactions that continue within used glow sticks, toxicity to cells in culture, and chemical reactions with DNA in vitro. The authors found \"high toxicity\" of light stick solutions, and evidence of reactivity with DNA. They concluded that light stick solutions \"are hazardous and that the health risks associated with exposure have not yet been properly evaluated.\"", "Glow sticks are used by police, fire, and emergency medical services as light sources, similar to their military applications. Often, emergency rescue crews will hand out glow sticks in order to keep track of people at night, who may not have access to their own lighting. Glow sticks are sometimes attached to life vests and lifeboats on passenger and commercial vessels, to ensure night time visibility. \nGlow sticks are often part of emergency kits to provide basic lighting and provide ease of identification in dark areas. They can be found in emergency lighting kits in buildings, public transportation vehicles, and subway stations.", "Bis(2,4,5-trichlorophenyl-6-carbopentoxyphenyl)oxalate, trademarked \"Cyalume\", was invented in 1971 by Michael M. Rauhut, of American Cyanamid, based on work by Edwin A. Chandross and David Iba Sr. of Bell Labs.\nOther early work on chemiluminescence was carried out at the same time, by researchers under Herbert Richter at China Lake Naval Weapons Center.\nSeveral US patents for glow stick-type devices were issued in 1973–74. A later 1976 patent recommended a single glass ampoule that is suspended in a second substance, that when broken and mixed together, provide the chemiluminescent light. The design also included a stand for the signal device so it could be thrown from a moving vehicle and remain standing in an upright position on the road. The idea was this would replace traditional emergency roadside flares and would be superior, since it was not a fire hazard, would be easier and safer to deploy, and would not be made ineffective if struck by passing vehicles. 
This design, in which a single glass ampoule sits inside a plastic tube filled with a second substance and the tube is bent to break the glass and then shaken to mix the substances, most closely resembles the typical glow stick sold today.\nIn the early 1980s the majority of glow sticks were produced in Novato, California by Omniglow Corp. Omniglow completed a leveraged buyout of American Cyanamid's chemical light division in 1994 and became the leading supplier of glow sticks worldwide until going out of business in 2014. Most glow sticks seen today are made in China.", "There are specific industrial uses of glow sticks, which are often used as a light source in circumstances where electric lighting and LEDs are not best suited. For example, in the mining industry, glow sticks are required for emergency evacuation in the case of a gas leak. Use of an electric light source in this case may cause an unintended explosion. Chemiluminescence, the type of light used in glow sticks, is a "cold light": it does not use electricity and will not cause a gas leak to ignite.\nGlow sticks are also used worldwide in the marine industry, often as fishing lures in long-line, recreational, and commercial fishing, as well as for personnel safety.", "Glow sticks are used for outdoor recreation, often at night for marking. Scuba divers use diving-rated glow sticks to mark themselves during night dives, and can then turn off bright diving lights. This is done to enable visibility of bioluminescent marine organisms, which cannot be seen while a bright dive light is illuminated. Glow sticks are used on backpacks, tent pegs, and on jackets during overnight camping expeditions. Often, glow sticks are recommended as an addition to survival kits.", "Glowsticking is the use of glow sticks in dancing (such as in glow poi and wotagei). They are frequently used for entertainment at parties (in particular raves), concerts, and dance clubs. They are used by marching band conductors for evening performances; glow sticks are also used in festivals and celebrations around the world. Glow sticks also serve multiple functions as toys, readily visible night-time warnings to motorists, and luminous markings that enable parents to keep track of their children. Another use is for balloon-carried light effects. Glow sticks are also used to create special effects in low light photography and film.\nThe Guinness Book of Records recorded that the world's largest glow stick, at tall, was created by the University of Wisconsin–Whitewater's Chemistry Department to celebrate the school's sesquicentennial (150th birthday) in Whitewater, Wisconsin, and was cracked on 9 September 2018.", "Glow sticks are waterproof, do not use batteries, consume no oxygen, generate no or negligible heat, produce neither spark nor flame, can tolerate high pressures such as those found under water, are inexpensive, and are reasonably disposable. This makes them ideal as light sources and light markers for military forces, campers, spelunkers, and recreational divers.", "A glow stick, also known as a light stick, chem light, light wand, light rod, and rave light, is a self-contained, short-term light source. It consists of a translucent plastic tube containing isolated substances that, when combined, make light through chemiluminescence. The light cannot be turned off and can be used only once. The used tube is then thrown away. Glow sticks are often used for recreation, such as for events, camping, outdoor exploration, and concerts. 
Glow sticks are also used for light in military and emergency services applications. Industrial uses include marine, transportation, and mining.", "Glow sticks are used by militaries, and occasionally also police tactical units, as light sources during night operations or close-quarters combat in dark areas. They are also used to mark secured areas or objects of note. When worn, they can be used to identify friendly soldiers during nighttime operations.", "Glucocerebroside (also called glucosylceramide) is any of the cerebrosides in which the monosaccharide head group is glucose.", "In Gaucher's disease, the enzyme glucocerebrosidase is nonfunctional and cannot break down glucocerebroside into glucose and ceramide in the lysosome. Affected macrophages, called Gaucher cells, have a distinct appearance similar to "wrinkled tissue paper" under light microscopy, because the substrate builds up within the lysosome.", "Different sources include different members in this class. Members marked with a "#" are considered by MeSH to be glucosidases.", "Alpha-glucosidases are targeted by alpha-glucosidase inhibitors such as acarbose and miglitol to control diabetes mellitus type 2.", "Alpha-glucosidases are enzymes involved in breaking down complex carbohydrates such as starch and glycogen into their monomers.\nThey catalyze the cleavage of individual glucosyl residues from various glycoconjugates, including alpha- or beta-linked polymers of glucose. These enzymes convert complex sugars into simpler ones.", "About 132 different glucosinolates are known to occur naturally in plants. They are biosynthesized from amino acids: the so-called aliphatic glucosinolates are derived mainly from methionine, but also from alanine, leucine, isoleucine, or valine. (Most glucosinolates are actually derived from chain-elongated homologues of these amino acids, e.g. glucoraphanin is derived from dihomomethionine, which is methionine chain-elongated twice.) Aromatic glucosinolates include indolic glucosinolates, such as glucobrassicin, derived from tryptophan, and others derived from phenylalanine, its chain-elongated homologue homophenylalanine, and sinalbin, derived from tyrosine.", "The glucosinolate sinigrin, among others, was shown to be responsible for the bitterness of cooked cauliflower and Brussels sprouts. Glucosinolates may alter animal eating behavior.", "Glucosinolates are natural components of many pungent plants such as mustard, cabbage, and horseradish. The pungency of those plants is due to mustard oils produced from glucosinolates when the plant material is chewed, cut, or otherwise damaged. These natural chemicals most likely contribute to plant defence against pests and diseases, and impart a characteristic bitter flavor to cruciferous vegetables.", "Glucosinolates constitute a natural class of organic compounds that contain sulfur and nitrogen and are derived from glucose and an amino acid.\nThey are water-soluble anions and belong to the glucosides. Every glucosinolate contains a central carbon atom, which is bound to the sulfur atom of the thioglucose group, and via a nitrogen atom to a sulfate group (making a sulfated aldoxime). 
In addition, the central carbon is bound to a side group; different glucosinolates have different side groups, and it is variation in the side group that is responsible for the variation in the biological activities of these plant compounds.\nThe essence of glucosinolate chemistry is their ability to convert into an isothiocyanate (a \"mustard oil\") upon hydrolysis of the thioglucoside bond by the enzyme myrosinase.\nThe semisystematic naming of glucosinolates consists of the chemical name of the side group \"R\" followed by \"glucosinolate\", with or without a space. For example, allylglucosinolate and allyl glucosinolate refer to the same compound: both versions are found in the literature. Isothiocyanates are conventionally written as two words.\nThe following are some glucosinolates and their isothiocyanate products:\n* Allylglucosinolate (sinigrin) is the precursor of allyl isothiocyanate\n* Benzylglucosinolate (glucotropaeolin) is the precursor of benzyl isothiocyanate\n* Phenethylglucosinolate (gluconasturtiin) is the precursor of phenethyl isothiocyanate\n* (R)-4-(methylsulfinyl)butylglucosinolate (glucoraphanin) is the precursor of (R)-4-(methylsulfinyl)butyl isothiocyanate (sulforaphane)\n* (R)-2-hydroxybut-3-enylglucosinolate (progoitrin) is probably the precursor of (S)-2-hydroxybut-3-enyl isothiocyanate, which is expected to be unstable and immediately cyclize to form (S)-5-vinyloxazolidine-2-thione (goitrin)\nSinigrin was the first of the class to be isolated, in 1839, as its potassium salt. Its chemical structure had been established by 1930, showing that it is a glucose derivative with the β-D-glucopyranose configuration. It was unclear at that time whether the C=N bond was in the Z (or syn) form, with the sulfur and oxygen substituents on the same side of the double bond, or the alternative E form in which they are on opposite sides. The matter was settled by X-ray crystallography in 1963. It is now known that all natural glucosinolates are of the Z form, although both forms can be made in the laboratory. The \"ate\" ending in the naming of these compounds implies that they are anions at physiological pH, and an early name for the potassium salt of allylglucosinolate was potassium myronate. Care must be taken when discussing these compounds, since some older publications do not make it clear whether they refer to the anion alone, its corresponding acid or the potassium salt.", "Full details of the sequence of reactions that converts individual amino acids into the corresponding glucosinolate have been studied in the cress Arabidopsis thaliana.\nA sequence of seven enzyme-catalysed steps is used. The sulfur atom is incorporated from glutathione (GSH) and the sugar component is added to the resulting thiol derivative by a glycosyltransferase before the final sulfonation step.", "Glucosinolates occur as secondary metabolites of almost all plants of the order Brassicales. This includes the economically important family Brassicaceae as well as Capparaceae and Caricaceae.\nOutside of the Brassicales, the genera Drypetes and Putranjiva in the family Putranjivaceae are the only other known occurrence of glucosinolates.\nGlucosinolates occur in various edible plants such as cabbage (white cabbage, Chinese cabbage, broccoli), Brussels sprouts, watercress, horseradish, capers, and radishes, where the breakdown products often contribute a significant part of the distinctive taste.
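A schematic of how these breakdown products arise, summarising the myrosinase-catalysed hydrolysis described above (the general scheme is standard thioglucoside chemistry rather than a detail taken from this text; sinigrin is used here only as a concrete example):

R–C(S–Glc)=N–OSO₃⁻ + H₂O → (myrosinase) → glucose + [R–C(S⁻)=N–OSO₃⁻] → R–N=C=S + SO₄²⁻

For sinigrin, R is the allyl group CH₂=CH–CH₂–, so the end product is allyl isothiocyanate, the pungent \"mustard oil\" of mustard and horseradish.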
The glucosinolates are also found in seeds of these plants.", "The use of glucosinolate-containing crops as primary food source for animals can have negative effects if the concentration of glucosinolate is higher than what is acceptable for the animal in question, because some glucosinolates have been shown to have toxic effects (mainly as goitrogens and anti-thyroid agents) in livestock at high doses. However, tolerance level to glucosinolates varies even within the same genus (e.g. Acomys cahirinus and Acomys russatus).\nDietary amounts of glucosinolate are not toxic to humans given normal iodine intake.", "The plants contain the enzyme myrosinase, which, in the presence of water, cleaves off the glucose group from a glucosinolate. The remaining molecule then quickly converts to an isothiocyanate, a nitrile, or a thiocyanate; these are the active substances that serve as defense for the plant. Glucosinolates are also called mustard oil glycosides. The standard product of the reaction is the isothiocyanate (mustard oil); the other two products mainly occur in the presence of specialised plant proteins that alter the outcome of the reaction.\nIn the chemical reaction illustrated above, the red curved arrows in the left side of figure are simplified compared to reality, as the role of the enzyme myrosinase is not shown. However, the mechanism shown is fundamentally in accordance with the enzyme-catalyzed reaction.\nIn contrast, the reaction illustrated by red curved arrows at the right side of the figure, depicting the rearrangement of atoms resulting in the isothiocyanate, is expected to be non-enzymatic. This type of rearrangement can be named a Lossen rearrangement, or a Lossen-like rearrangement, since this name was first used for the analogous reaction leading to an organic isocyanate (R-N=C=O).\nTo prevent damage to the plant itself, the myrosinase and glucosinolates are stored in separate compartments of the cell or in different cells in the tissue, and come together only or mainly under conditions of physical injury (see Myrosinase).", "Plants produce glucosinolates in response to the degree of herbivory being suffered. Their production in relation to atmospheric CO concentrations is complex: increased CO can give increased, decreased or unchanged production and there may be genetic variation within the Brassicales.", "Glucosinolates and their products have a negative effect on many insects, resulting from a combination of deterrence and toxicity. In an attempt to apply this principle in an agronomic context, some glucosinolate-derived products can serve as antifeedants, i.e., natural pesticides.\nIn contrast, the diamondback moth, a pest of cruciferous plants, may recognize the presence of glucosinolates, allowing it to identify the proper host plant. Indeed, a characteristic, specialised insect fauna is found on glucosinolate-containing plants, including butterflies, such as large white, small white, and orange tip, but also certain aphids, moths, such as the southern armyworm, sawflies, and flea beetles. For instance, the large white butterfly deposits its eggs on these glucosinolate-containing plants, and the larvae survive even with high levels of glucosinolates and eat plant material containing glucosinolates. The whites and orange tips all possess the so-called nitrile specifier protein, which diverts glucosinolate hydrolysis toward nitriles rather than reactive isothiocyanates. 
In contrast, the diamondback moth possesses a completely different protein, glucosinolate sulfatase, which desulfates glucosinolates, thereby making them unfit for degradation to toxic products by myrosinase.\nOther kinds of insects (specialised sawflies and aphids) sequester glucosinolates. In specialised aphids, but not in sawflies, a distinct animal myrosinase is found in muscle tissue, leading to degradation of sequestered glucosinolates upon aphid tissue destruction. This diverse panel of biochemical solutions to the same plant chemical plays a key role in the evolution of plant-insect relationships.", "The isothiocyanates formed from glucosinolates are under laboratory research to assess the expression and activation of enzymes that metabolize xenobiotics, such as carcinogens. Observational studies have been conducted to determine whether consumption of cruciferous vegetables affects cancer risk in humans, but there is insufficient clinical evidence to indicate that consuming isothiocyanates in cruciferous vegetables is beneficial, according to a 2017 review.", "Glycan arrays, like those offered by the Consortium for Functional Glycomics (CFG), the National Center for Functional Glycomics (NCFG) and Z Biotech, LLC (http://www.zbiotech.com/), contain carbohydrate compounds that can be screened with lectins, antibodies or cell receptors to define carbohydrate specificity and identify ligands. Glycan array screening works in much the same way as other microarray technologies, such as DNA microarrays used to study gene expression or protein microarrays used to study protein interactions.\nGlycan arrays are composed of various oligosaccharides and/or polysaccharides immobilised on a solid support in a spatially-defined arrangement. This technology provides the means of studying glycan-protein interactions in a high-throughput environment. These natural or synthetic (see carbohydrate synthesis) glycans are incubated with any glycan-binding protein, such as lectins or cell surface receptors, or possibly with a whole organism such as a virus. Binding is quantified using fluorescence-based detection methods. Certain types of glycan microarrays can even be re-used for multiple samples using a method called Microwave Assisted Wet-Erase.", "Glycan arrays have been used to characterize previously unknown biochemical interactions. For example, photo-generated glycan arrays have been used to characterize the immunogenic properties of a tetrasaccharide found on the surface of anthrax spores. Hence, glycan array technology can be used to study the specificity of host-pathogen interactions.\nEarly on, glycan arrays proved useful in determining the specificity of the hemagglutinin of the Influenza A virus for host receptors and in distinguishing between different strains of flu (including avian versus mammalian strains). This was shown with CFG arrays as well as customised arrays. \nCross-platform benchmarks have highlighted the effect of glycan presentation and spacing on binding.\nGlycan arrays can be combined with other techniques such as Surface Plasmon Resonance (SPR) to refine the characterisation of glycan binding. For example, this combination was used to demonstrate the calcium-dependent heparin binding of Annexin A1, which is involved in several biological processes including inflammation, apoptosis and membrane trafficking.", "Glycation (non-enzymatic glycosylation) is the covalent attachment of a sugar to a protein, lipid or nucleic acid molecule.
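As a rough outline of the chemistry involved (a conventional summary of the Schiff base, Amadori and Maillard steps mentioned below, not a mechanism specific to this text):

protein–NH₂ + glucose (open-chain aldehyde) ⇌ Schiff base → Amadori product (a ketoamine such as fructosamine) → advanced glycation end products (AGEs), via further oxidation, dehydration and cross-linking

The Schiff base step is reversible, whereas the Amadori product and the AGEs accumulate, which is why glycated proteins such as glycated hemoglobin reflect average sugar exposure over the lifetime of the protein.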
Typical sugars that participate in glycation are glucose, fructose, and their derivatives. Glycation is the non-enzymatic process responsible for many (e.g. micro and macrovascular) complications in diabetes mellitus and is implicated in some diseases and in aging. Glycation end products are believed to play a causative role in the vascular complications of diabetes mellitus.\nIn contrast with glycation, glycosylation is the enzyme-mediated ATP-dependent attachment of sugars to protein or lipid. Glycosylation occurs at defined sites on the target molecule. It is a common form of post-translational modification of proteins and is required for the functioning of the mature protein.", "Glycations occur mainly in the bloodstream to a small proportion of the absorbed simple sugars: glucose, fructose, and galactose. It appears that fructose has approximately ten times the glycation activity of glucose, the primary body fuel. Glycation can occur through Amadori reactions, Schiff base reactions, and Maillard reactions; which lead to advanced glycation end products (AGEs).", "The term DNA glycation applies to DNA damage induced by reactive carbonyls (principally methylglyoxal and glyoxal) that are present in cells as by-products of sugar metabolism. Glycation of DNA can cause mutation, breaks in DNA and cytotoxicity. Guanine in DNA is the base most susceptible to glycation. Glycated DNA, as a form of damage, appears to be as frequent as the more well studied oxidative DNA damage. A protein, designated DJ-1 (also known as PARK7), is employed in the repair of glycated DNA bases in humans, and homologs of this protein have also been identified in bacteria.", "Red blood cells have a consistent lifespan of 120 days and are accessible for measurement of glycated hemoglobin. Measurement of HbA1c—the predominant form of glycated hemoglobin—enables medium-term blood sugar control to be monitored in diabetes.\nSome glycation products are implicated in many age-related chronic diseases, including cardiovascular diseases (the endothelium, fibrinogen, and collagen are damaged) and Alzheimer's disease (amyloid proteins are side-products of the reactions progressing to AGEs).\nLong-lived cells (such as nerves and different types of brain cell), long-lasting proteins (such as crystallins of the lens and cornea), and DNA can sustain substantial glycation over time. Damage by glycation results in stiffening of the collagen in the blood vessel walls, leading to high blood pressure, especially in diabetes. Glycations also cause weakening of the collagen in the blood vessel walls, which may lead to micro- or macro-aneurysm; this may cause strokes if in the brain.", "In molecular biology and biochemistry, glycoconjugates are the classification family for carbohydrates – referred to as glycans – which are covalently linked with chemical species such as proteins, peptides, lipids, and other compounds. Glycoconjugates are formed in processes termed glycosylation.\nGlycoconjugates are very important compounds in biology and consist of many different categories such as glycoproteins, glycopeptides, peptidoglycans, glycolipids, glycosides, and lipopolysaccharides. 
They are involved in cell–cell interactions, including cell–cell recognition; in cell–matrix interactions; and in detoxification processes.\nGenerally, the carbohydrate part(s) play an integral role in the function of a glycoconjugate; prominent examples of this are the neural cell adhesion molecule (NCAM) and blood proteins, where fine details in the carbohydrate structure determine whether cells bind (or not) and how long the protein survives in circulation.\nAlthough the important molecular species DNA, RNA, ATP, cAMP, cGMP, NADH, NADPH, and coenzyme A all contain a carbohydrate part, they are generally not considered glycoconjugates.\nGlycoconjugates of carbohydrates covalently linked to antigens and protein scaffolds can achieve a long-term immunological response in the body. Immunization with glycoconjugates has successfully induced long-term immune memory against carbohydrate antigens. Glycoconjugate vaccines, introduced since the 1990s, have yielded effective results against influenza and meningococcus.\nIn 2021 glycoRNAs were observed for the first time.", "Glycogen synthase (UDP-glucose-glycogen glucosyltransferase) is a key enzyme in glycogenesis, the conversion of glucose into glycogen. It is a glycosyltransferase (EC 2.4.1.11) that catalyses the reaction of UDP-glucose and (1,4-α-D-glucosyl)ₙ to yield UDP and (1,4-α-D-glucosyl)ₙ₊₁.", "Much research has been done on glycogen degradation through studying the structure and function of glycogen phosphorylase, the key regulatory enzyme of glycogen degradation. On the other hand, much less is known about the structure of glycogen synthase, the key regulatory enzyme of glycogen synthesis. The crystal structure of glycogen synthase from Agrobacterium tumefaciens, however, has been determined at 2.3 Å resolution. In its asymmetric form, glycogen synthase is found as a dimer, whose monomers are composed of two Rossmann-fold domains. This structural property, among others, is shared with related enzymes, such as glycogen phosphorylase and other glycosyltransferases of the GT-B superfamily. Nonetheless, a more recent characterization of the Saccharomyces cerevisiae (yeast) glycogen synthase crystal structure reveals that the dimers may actually interact to form a tetramer. Specifically, the inter-subunit interactions are mediated by the α15/16 helix pairs, forming allosteric sites between subunits in one combination of dimers and active sites between subunits in the other combination of dimers. Since the structure of eukaryotic glycogen synthase is highly conserved among species, glycogen synthase likely forms a tetramer in humans as well.\nGlycogen synthase can be classified into two general protein families. The first family (GT3), found in mammals and yeast, is approximately 80 kDa, uses UDP-glucose as a sugar donor, and is regulated by phosphorylation and ligand binding. The second family (GT5), found in bacteria and plants, is approximately 50 kDa, uses ADP-glucose as a sugar donor, and is unregulated.", "Although the catalytic mechanisms used by glycogen synthase are not well known, structural similarities to glycogen phosphorylase at the catalytic and substrate binding site suggest that the mechanism for synthesis is similar in glycogen synthase and glycogen phosphorylase.", "Mutations in the GYS1 gene are associated with glycogen storage disease type 0. In humans, defects in the tight control of glucose uptake and utilization are also associated with diabetes and hyperglycemia.
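The regulatory logic described in the paragraphs that follow (inactivation by kinase phosphorylation, reactivation by PP1-mediated dephosphorylation, and allosteric activation by glucose-6-phosphate) can be caricatured in a few lines of code. This is purely an illustrative sketch with made-up numbers, not a quantitative model from the literature:

```python
# Toy illustration of glycogen synthase regulation as described in the text below:
# kinases such as GSK-3 phosphorylate and inactivate the enzyme, PP1 dephosphorylates
# and reactivates it, and glucose-6-phosphate (G6P) acts as an allosteric activator
# that can partially override phosphorylation. All numbers are illustrative assumptions.

def glycogen_synthase_activity(phosphorylated: bool, g6p_mM: float) -> float:
    """Return a notional relative activity between 0 and 1."""
    base = 0.1 if phosphorylated else 1.0        # phosphorylation -> low intrinsic activity
    allosteric_boost = g6p_mM / (g6p_mM + 0.5)   # saturating G6P activation (0.5 mM is assumed)
    return min(1.0, base + (1.0 - base) * allosteric_boost)

# Insulin signalling (activating PP1, inhibiting GSK-3) corresponds to switching
# `phosphorylated` from True to False; glucagon signalling does the opposite.
print(glycogen_synthase_activity(phosphorylated=True,  g6p_mM=0.0))   # ~0.1 (inactive)
print(glycogen_synthase_activity(phosphorylated=True,  g6p_mM=5.0))   # ~0.9 (G6P rescues activity)
print(glycogen_synthase_activity(phosphorylated=False, g6p_mM=0.0))   # 1.0 (fully active)
```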
Patients with type 2 diabetes normally exhibit low glycogen storage levels because of impairments in insulin-stimulated glycogen synthesis and suppression of glycogenolysis. Insulin stimulates glycogen synthase by inhibiting glycogen synthase kinases and/or activating protein phosphatase 1 (PP1), among other mechanisms.", "In humans, there are two paralogous isozymes of glycogen synthase: the muscle enzyme (encoded by GYS1) and the liver enzyme (encoded by GYS2).\nThe liver enzyme expression is restricted to the liver, whereas the muscle enzyme is widely expressed. Liver glycogen serves as a storage pool to maintain the blood glucose level during fasting, whereas muscle glycogen synthesis accounts for disposal of up to 90% of ingested glucose. The role of muscle glycogen is as a reserve to provide energy during bursts of activity.\nMeanwhile, the muscle isozyme plays a major role in the cellular response to long-term adaptation to hypoxia. Notably, hypoxia induces expression of only the muscle isozyme and not the liver isozyme. However, muscle-specific glycogen synthase activation may lead to excessive accumulation of glycogen, leading to damage in the heart and central nervous system following ischemic insults.", "The reaction is highly regulated by allosteric effectors such as glucose 6-phosphate (an activator) and by phosphorylation reactions (deactivating). The allosteric activation by glucose-6-phosphate allows glycogen synthase to operate as a glucose-6-phosphate sensor. The inactivating phosphorylation is triggered by the hormone glucagon, which is secreted by the pancreas in response to decreased blood glucose levels. The enzyme also cleaves the ester bond between the C1 position of glucose and the pyrophosphate of UDP itself.\nThe control of glycogen synthase is a key step in regulating glycogen metabolism and glucose storage. Glycogen synthase is directly regulated by glycogen synthase kinase 3 (GSK-3), AMPK, protein kinase A (PKA), and casein kinase 2 (CK2). Each of these protein kinases phosphorylates glycogen synthase, leaving it catalytically inactive. \nFor enzymes in the GT3 family, these regulatory kinases inactivate glycogen synthase by phosphorylating it at the N-terminal of the 25th residue and the C-terminal of the 120th residue. Glycogen synthase is also regulated by protein phosphatase 1 (PP1), which activates glycogen synthase via dephosphorylation. PP1 is targeted to the glycogen pellet by four targeting subunits, GM, GL, PTG and R6. These regulatory enzymes are themselves regulated by the insulin and glucagon signaling pathways.", "Glycogen synthase catalyzes the transfer of the glucosyl (Glc) moiety of uridine diphosphate glucose (UDP-Glc) onto glycogen via an α(1→4) glycosidic bond. However, since glycogen synthase requires an oligosaccharide primer as a glucose acceptor, it relies on glycogenin to initiate de novo glycogen synthesis.\nIn a study of transgenic mice, overexpression of glycogen synthase and overexpression of phosphatase each resulted in excess glycogen storage levels. This suggests that glycogen synthase plays an important biological role in regulating glycogen/glucose levels and is activated by dephosphorylation.", "Clarification is the name for the process of separating solid particles from a fluid.
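How quickly particles settle under gravity is commonly estimated with Stokes' law for a small sphere in a viscous fluid (a standard approximation quoted here for context, not a formula taken from this text):

v_s = \frac{(\rho_p - \rho_f)\, g\, d^2}{18\, \mu}

where ρ_p and ρ_f are the particle and fluid densities, g is gravitational acceleration, d is the particle diameter and μ is the fluid viscosity. Because the settling velocity grows with the square of the diameter, aggregating fine particles into larger flocs speeds up their settling dramatically, which is why flocculation is commonly paired with clarification.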
Clarification is often used along with flocculation to make the solid particles sink faster to the bottom of the clarification pool, while fluid that is free of solid particles is drawn off at the surface.\nThickening is the reverse of clarification: the solids that sink to the bottom are collected, and the fluid at the surface is rejected.\nThe difference between the two methods can be illustrated by wastewater processing: in the clarification phase, sludge sinks to the bottom of the pool and clear water flows over the clear-water grooves and continues its journey. The collected sludge is then pumped into the thickeners, where it thickens further before being pumped to digestion for processing into fertilizer.", "Heavy liquids such as tetrabromoethane can be used to separate ores from supporting rocks by preferential flotation. The rocks are crushed, and while sand, limestone, dolomite, and other types of rock material will float on TBE, ores such as sphalerite, galena and pyrite will sink.", "For cleaning gases, a common and generally effective method of removing large particles is to blow the gas into a large chamber, where its velocity decreases and the solid particles settle to the bottom. This method is used mostly because of its low cost.", "Gravity separation is an industrial method of separating two components, either a suspension or a dry granular mixture, wherever separating the components with gravity is sufficiently practical, i.e. the components of the mixture have different specific weights. Every gravitational method uses gravity as the primary force for separation. One type of gravity separator lifts the material by vacuum over an inclined, vibrating, screen-covered deck.\nThis results in the material being suspended in air while the heavier impurities are left behind on the screen and are discharged from the stone outlet. Gravity separation is used in a wide variety of industries, and applications can be most simply differentiated by the character of the mixture to be separated: wet (a suspension) versus dry (a granular mixture). Often other methods are applied to make the separation faster and more efficient, such as flocculation, coagulation and suction. The most notable advantages of the gravitational methods are their cost effectiveness and, in some cases, the excellent separations they achieve. Gravity separation is an attractive unit operation as it generally has low capital and operating costs, uses few if any chemicals that might cause environmental concerns, and the recent development of new equipment enhances the range of separations possible.", "Agriculture:\nGravity separation tables are used for the removal of impurities, admixtures, insect-damaged and immature kernels from crops such as wheat, barley, oilseed rape, peas, beans, cocoa beans, and linseed. They can be used to separate and standardize coffee beans, cocoa beans, peanuts, corn, peas, rice, wheat, sesame and other food grains.\nThe gravity separator separates products of the same size but with a difference in specific weight. It has a vibrating rectangular deck, which makes it easy for the product to travel a longer distance, ensuring improved quality of the end product. The pressurized air in the deck enables the material to split according to its specific weight. As a result, the heavier particles travel to the higher level while the lighter particles travel to the lower level of the deck.
It comes with easily adjustable air fans to control the volume of air distribution at different areas of the vibrating deck to meet the air supply needs of the deck. The table inclination, speed of eccentric motion and the feed rate can be precisely adjusted to achieve smooth operation of the machine.", "Fahy is the world's foremost expert in organ cryopreservation by vitrification. Fahy introduced the modern successful approach to vitrification for cryopreservation in cryobiology and he is widely credited, along with William F. Rall, for introducing vitrification into the field of reproductive biology.\nIn 2005, where he was a keynote speaker at the annual Society for Cryobiology meeting, Fahy announced that Twenty-First Century Medicine had successfully cryopreserved a rabbit kidney at −130 °C by vitrification and transplanted it into a rabbit after rewarming, with subsequent long-term life support by the vitrified-rewarmed kidney as the sole kidney. This research breakthrough was later published in the peer-reviewed journal Organogenesis.\nFahy is also a biogerontologist and is the originator and Editor-in-Chief of The Future of Aging: Pathways to Human Life Extension, a multi-authored book on the future of biogerontology. He currently serves on the editorial boards of Rejuvenation Research and the Open Geriatric Medicine Journal and served for 16 years as a Director of the American Aging Association and for 6 years as the editor of AGE News, the organization's newsletter.", "As a scientist with the American Red Cross, Fahy was the originator of the first practical method of cryopreservation by vitrification and the inventor of computer-based systems to apply this technology to whole organs. Before joining Twenty-First Century Medicine, he was the chief scientist for Organ, Inc and of LRT, Inc. He was also Head of the Tissue Cryopreservation Section of the Transfusion and Cryopreservation Research Program of the U.S. Naval Medical Research Institute in Bethesda, Maryland where he spearheaded the original concept of ice blocking agents. In 2014, he was named a Fellow of the Society for Cryobiology in recognition of the impact of his work in low temperature biology.\nIn 2015–2017, Fahy led the TRIIM (Thymus Regeneration, Immunorestoration, and Insulin Mitigation) human clinical trial, designed to reverse aspects of human aging. The purpose of the TRIIM trial was to investigate the possibility of using recombinant human growth hormone (rhGH) to prevent or reverse signs of immunosenescence in ten 51‐ to 65‐year‐old putatively healthy men. The study:", "Gregory M. Fahy is a California-based cryobiologist, biogerontologist, and businessman. He is Vice President and Chief Scientific Officer at Twenty-First Century Medicine, Inc, and has co-founded Intervene Immune, a company developing clinical methods to reverse immune system aging. He is the 2022–2023 president of the Society for Cryobiology.", "Fahy was named as a Fellow of the Society for Cryobiology in 2014, and in 2010 he received the Distinguished Scientist Award for Reproductive Biology from the Reproductive Biology Professional Group of the American Society of Reproductive Medicine. He received the Cryopreservation Award from the International Longevity and Cryopreservation Summit held in Madrid, Spain in 2017 in recognition of his career in and dedication to the field of cryobiology. Fahy also received the Grand Prize for Medicine from INPEX in 1995 for his invention of computerized organ cryoprotectant perfusion technology. 
In 2005, he was recognized as a Fellow of the American Aging Association.", "A native of California, Fahy holds a Bachelor of Science degree in Biology from the University of California, Irvine and a PhD in pharmacology and cryobiology from the Medical College of Georgia in Augusta.\nHe currently serves on the board of directors of two organizations and as a referee for numerous scientific journals and funding agencies, and holds 35 patents on cryopreservation methods, aging interventions, transplantation, and other topics.", "The Guillemin effect is one of the magnetomechanical effects. It is the tendency of a previously bent rod made of magnetostrictive material to straighten when subjected to a magnetic field applied along the rod's axis.", "Hair multiplication or hair cloning is a proposed technique to counter hair loss. The technology to clone hair is in its early stages, but multiple groups have demonstrated pieces of the technology at a small scale, with a few in commercial development. \nScientists previously assumed that in the case of complete baldness, follicles are completely absent from the scalp, so they cannot be regenerated. However, it was discovered that the follicles are not entirely absent, as there are stem cells in the bald scalp from which the follicles naturally arise. The abnormal behavior of these follicles is suggested to be the result of progenitor cell deficiency in these areas. One recently discovered molecule, SCUBE3, may aid in activating these cells and regrowing hair.\nThe basic idea of hair cloning is that healthy follicle cells or dermal papillae can be extracted from the subject from areas that are not bald and are not suffering hair loss. They can be multiplied (cloned) by various culturing methods, and the new cells can be injected back into the bald scalp, where they would produce healthy hair. In 2015, initial trials for human hair were successful in generating new follicles, but the hairs grew in various directions, giving an unnatural look. Scientists believe they may have solved this problem by using nearly microscopic 3D-printed shafts to assist follicles in growing upward through the scalp. This technique, however, is still in the research phase and is not available for public or commercial use. \nAs of 2023, estimates for when there will be successful hair cloning for humans are around 2030-2035; recent advancements in stem cell research and follicle generation mean that balding may be solved in around 10 years.", "The Vancouver-based firm RepliCel Life Sciences Inc. has been researching the replacement of hormone-compromised hair follicle cells.\nIn 2013, RepliCel created a partnership with the cosmetics company Shiseido, giving Shiseido an exclusive license to use its RCH-01 technology in Japan, China, South Korea, Taiwan, and the ASEAN countries. Shiseido trialed RepliCel's RCH-01 in Japan and received modest results. In 2021, RepliCel initiated arbitration against Shiseido and terminated the company's license agreement.", "One of the first companies to begin experimenting with hair cloning was Intercytex. Researchers at the company were convinced that their approach was the cure for baldness, and that, if the technology were fully developed, it could essentially eliminate hair loss due to hereditary factors. This therapy would also eliminate the need for donor hair, as hair could simply be grown from the patient's own cells.\nIntercytex tried to clone new hair follicles from stem cells harvested from the back of the neck.
They hoped that if they multiplied (cloned) the follicles and then implanted them back in the scalp in the bald areas they would be successful in regrowing the hair itself. They tested the method in their Phase II trials, which showed very promising results as two-thirds of the bald male patients were able to grow new hair after the treatment.\nThe company was hoping to complete the research so they can make it available to the public, so they began Phase III trials. They estimated they would be able to finish the process in a few years. However, these tests did not show the expected progress. In 2008 Intercytex admitted that they failed in fully developing the hair cloning therapy and decided to discontinue all research.\nThis was not solely the result of the failed tests, as the company's financial background also became unstable in 2008 and they had to implement several cost-cutting measures. They laid off a great number of staff members and cut funding to the research projects such as hair cloning. In 2010 they went out of business.", "Another firm researching hair cloning was ARI (Aderans Research Institute), a Japanese company that operated in the US and was the greatest competitor of Intercytex in developing the therapy. The company worked on what they called the \"Ji Gami\" process, which involved the removal of a small strip of the scalp, which is broken down into individual follicular stem cells. After the extraction, these cells are cultured, multiplied, and injected back into the bald areas of the scalp. Scientists hoped that after implantation these cloned follicular cells would mature into full-grown hair.\nDuring Phase II trials they found that the process was not suitable for multiplication but instead, it revitalized the follicles and successfully prevented future loss. The trials continued in 2012. Aderans decided to discontinue the funding of its hair multiplication research in July 2013.", "In 2012 scientists from the University of Pennsylvania School of Medicine published their own findings regarding hair cloning. During their investigation, they found that non-bald and bald scalps have the same number of stem cells, but the progenitor cell number was significantly depleted in the case of the latter. Based on this, they concluded that it is not the absence of the stem cells that are responsible for hair loss but the unsuccessful activation of said cells.\nThe researchers continued their investigation and are looking for a way to convert regular stem cells into progenitor cells, which could mean they may be able to activate the natural generation of hair on a previously bald scalp.", "In late 2013, new results were published by a research team at Durham University which suggested progress. The scientists tried a new method for multiplying, cloning the original cells not in a 2D but in a 3D system.\nA team took healthy dermal papillae from hair transplants and dissected them, then cultured them in a petri dish. In 30 hours they were able to produce 3000 dermal papilla cells. The goal was to create dermal papillae that when injected would reprogram cells around it to produce healthy hair. They chose to try the method by injecting the cloned cells in foreskin samples to \"challenge\" the cells, as the cells in the foreskin normally don't grow hair. The human skin samples were grafted on rats. 
After six weeks the cloned papillae cells formed brand-new hair follicles which were able to grow hair.\nThese are early results and as it is a new approach to hair cloning, several more studies and tests have to be conducted before they can move on to human testing. They also encountered new problems, such as that some of the newly grown hair appeared without pigmentation.", "The first time scientists were able to grow artificial hair follicles from stem cells was in 2010. Scientists at the Berlin Technical University in Germany took animal cells and created follicles by using them. As a result, they produced follicles \"thinner than normal\", but they were confident they could develop the right method of cloning hair from human stem cells by 2011. They estimated that the therapy would be publicly available by 2015 as they were already preparing for the clinical trials. Scientists working on the project said if the treatment was finished, it would mean a cure for approximately 80 percent of those who suffer from hair loss.\nThe university was working together with Intercytex and several other research teams, but they encountered several problems. One of them was that the multiplication process was not efficient enough. They were only able to clone one or two follicles from an extracted hair but for the process to be efficient this number should have been around 1000. There was no indication that researchers were able to overcome this obstacle.", "In October 2022, researchers from the Japan-based Yokohama National University successfully cloned fully-grown mouse hair follicles for the first time in history. It may take 5-10 years for this technology to be tested successfully in humans.", "In 2016, scientists in Japan announced they had successfully grown human skin in a lab. The skin was created using induced pluripotent stem cells, and when implanted in a mouse, the skin grew hairs successfully. Dr. Takashi Tsuji has sought donations for the group's research. The group has also formed a partnership with Organ Technologies and Kyocera Corporation to commercially develop the research. Organ Technologies secured funding from Kobayashi Pharmaceutical in late 2022 and was renamed to OrganTech in 2023. OrganTech hopes to transplant both regenerated hair follicle primordia and what they term \"next-generation implants\" into humans as soon as Q2 2024.", "dNovo, a Silicon-valley based company, was founded in 2018 and participated in the Y Combinator accelerator. The company has demonstrated its technology by growing a patch of human hair on a mouse.", "In July 2019, a researcher from San Diego-based Stemson Therapeutics, partnered with UCSD, successfully grew his own follicles on a mouse using iPSC-derived epithelial and dermal cell therapy. The hair grew straight and was aligned properly with a 3D-printed biodegradable shaft. The hairs were permanent and regenerated naturally. Stemson intends to enter clinical trials in 2026.", "Epibiotech developed an autologous dermal papilla cell that was scheduled to enter clinical trials at the end of 2023.", "In September 2023, TrichoSeeds together with Rhoto Pharmaceutical was aiming to enter clinical trials in 2024.", "Helimagnetism is a form of magnetic ordering where spins of neighbouring magnetic moments arrange themselves in a spiral or helical pattern, with a characteristic turn angle of somewhere between 0 and 180 degrees. It results from the competition between ferromagnetic and antiferromagnetic exchange interactions. 
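A standard way to see how this competition produces a spiral (a textbook classical-chain argument, not a derivation taken from this text) is to consider a chain of spins with nearest-neighbour exchange J₁ and next-nearest-neighbour exchange J₂. For a planar spiral with turn angle q between neighbouring spins, the classical energy per spin is

E(q) = S^2 \left( J_1 \cos q + J_2 \cos 2q \right),

which, in the convention where J₁ < 0 is ferromagnetic and J₂ > 0 is antiferromagnetic, is minimised at \cos q = -J_1/(4 J_2) whenever |J₁| < 4J₂, giving a turn angle strictly between 0 and 180 degrees; otherwise the minimum falls at a collinear arrangement (q = 0 or q = 180°).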
It is possible to view ferromagnetism and antiferromagnetism as helimagnetic structures with characteristic turn angles of 0 and 180 degrees respectively. Helimagnetic order breaks spatial inversion symmetry, as it can be either left-handed or right-handed in nature.\nStrictly speaking, helimagnets have no permanent magnetic moment, and as such are sometimes considered a complicated type of antiferromagnet. This distinguishes helimagnets from conical magnets (e.g. holmium below 20 K), which have a spiral modulation in addition to a permanent magnetic moment. Helimagnets can be characterized by the distance it takes for the spiral to complete one turn. In analogy to the pitch of a screw thread, this period of repetition is known as the \"pitch\" of the helimagnet. If the spiral's period is a rational multiple of the crystal's unit cell, the structure is commensurate, like the structure originally proposed for MnO₂. On the other hand, if the multiple is irrational, the magnetism is incommensurate, like the updated MnO₂ structure.\nHelimagnetism was first proposed in 1959, as an explanation of the magnetic structure of manganese dioxide. Initially inferred from neutron diffraction, it has since been observed more directly by Lorentz electron microscopy. Some helimagnetic structures are reported to be stable up to room temperature. Just as ordinary ferromagnets have domain walls that separate individual magnetic domains, helimagnets have their own classes of domain walls, which are characterized by topological charge.\nMany helimagnets have a chiral cubic structure, such as the FeSi (B20) crystal structure type. In these materials, the combination of ferromagnetic exchange and the Dzyaloshinskii–Moriya interaction leads to helices with relatively long periods. Since the crystal structure is noncentrosymmetric even in the paramagnetic state, the magnetic transition to a helimagnetic state does not break inversion symmetry, and the direction of the spiral is locked to the crystal structure.\nOn the other hand, helimagnetism in other materials can also be based on frustrated magnetism or the RKKY interaction. The result is that centrosymmetric structures like the MnP-type (B31) compounds can also exhibit double-helix-type helimagnetism, in which both left- and right-handed spirals coexist. For these itinerant helimagnets, the direction of the helicity can be controlled by applied electric currents and magnetic fields.", "Since HeH⁺ cannot be stored in any usable form, its chemistry must be studied by forming it in situ.\nReactions with organic substances, for example, can be studied by creating a tritium derivative of the desired organic compound. Decay of the tritium to ³He⁺, followed by its extraction of a hydrogen atom, yields ³HeH⁺, which is then surrounded by the organic material and will in turn react.", "The helium hydride ion, hydridohelium(1+) ion, or helonium is a cation (positively charged ion) with the chemical formula HeH⁺. It consists of a helium atom bonded to a hydrogen atom, with one electron removed. It can also be viewed as protonated helium. It is the lightest heteronuclear ion, and is believed to be the first compound formed in the Universe after the Big Bang.\nThe ion was first produced in a laboratory in 1925. It is stable in isolation, but extremely reactive, and cannot be prepared in bulk, because it would react with any other molecule with which it came into contact.
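A representative example of this reactivity (consistent with the protonation reactions listed later in the article) is proton transfer to water:

HeH⁺ + H₂O → He + H₃O⁺

Because even very weak bases are protonated in this way, any bulk sample would destroy itself on contact with its surroundings.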
Noted as the strongest known acid, stronger than even fluoroantimonic acid, its occurrence in the interstellar medium had been conjectured since the 1970s, and it was finally detected in April 2019 using the airborne SOFIA telescope.", "The helium hydride ion is isoelectronic with molecular hydrogen (H₂).\nUnlike the dihydrogen ion H₂⁺, the helium hydride ion has a permanent dipole moment, which makes its spectroscopic characterization easier. The calculated dipole moment of HeH⁺ is 2.26 or 2.84 D. The electron density in the ion is higher around the helium nucleus than around the hydrogen nucleus: about 80% of the electron charge is closer to the helium nucleus than to the hydrogen nucleus.\nSpectroscopic detection is hampered because one of its most prominent spectral lines, at 149.14 μm, coincides with a doublet of spectral lines belonging to the methylidyne radical CH.\nThe length of the covalent bond in the ion is 0.772 Å or 77.2 pm.", "The helium hydride ion has six relatively stable isotopologues, which differ in the isotopes of the two elements, and hence in the total atomic mass number (A) and the total number of neutrons (N) in the two nuclei:\n* ³HeH⁺ (A = 4, N = 1) \n* ³HeD⁺ (A = 5, N = 2) \n* ³HeT⁺ (A = 6, N = 3; radioactive) \n* ⁴HeH⁺ (A = 5, N = 2) \n* ⁴HeD⁺ (A = 6, N = 3) \n* ⁴HeT⁺ (A = 7, N = 4; radioactive) \nThey all have three protons and two electrons. The first three are generated by radioactive decay of tritium in the molecules HT = ¹H³H, DT = ²H³H, and T₂ = ³H₂, respectively. The last three can be generated by ionizing the appropriate isotopologue of H₂ in the presence of helium-4.\nThe following isotopologues of the helium hydride ion, of the dihydrogen ion H₂⁺, and of the trihydrogen ion H₃⁺ have the same total atomic mass number A:\n* ³HeH⁺, D₂⁺, HT⁺, H₂D⁺ (A = 4)\n* ³HeD⁺, ⁴HeH⁺, DT⁺, H₂T⁺, HD₂⁺ (A = 5)\n* ³HeT⁺, ⁴HeD⁺, T₂⁺, HDT⁺, D₃⁺ (A = 6)\n* ⁴HeT⁺, HT₂⁺, D₂T⁺ (A = 7)\nThe masses in each row above are not equal, though, because the binding energies in the nuclei are different.", "Unlike the helium hydride ion, the neutral helium hydride molecule HeH is not stable in the ground state. However, it does exist in an excited state as an excimer (HeH*), and its spectrum was first observed in the mid-1980s.\nThe neutral molecule is the first entry in the Gmelin database.", "Hydridohelium(1+), specifically ⁴He¹H⁺, was first detected indirectly in 1925 by T. R. Hogness and E. G. Lunn. They were injecting protons of known energy into a rarefied mixture of hydrogen and helium, in order to study the formation of hydrogen ions like H⁺, H₂⁺ and H₃⁺. They observed that HeH⁺ appeared at the same beam energy (16 eV) as H₃⁺, and its concentration increased with pressure much more than that of the other two ions. From these data, they concluded that the ions were transferring a proton to molecules that they collided with, including helium.\nIn 1933, K. Bainbridge used mass spectrometry to compare the masses of the ions ⁴HeH⁺ (helium hydride ion) and HD₂⁺ (twice-deuterated trihydrogen ion) in order to obtain an accurate measurement of the atomic mass of deuterium relative to that of helium. Both ions have 3 protons, 2 neutrons, and 2 electrons. He also compared ⁴HeD⁺ (helium deuteride ion) with D₃⁺ (trideuterium ion), both with 3 protons and 3 neutrons.", "HeH⁺ cannot be prepared in a condensed phase, as it would donate a proton to any anion, molecule or atom that it came in contact with. It has been shown to protonate O₂, NH₃, SO₂, H₂O, and CO₂, giving O₂H⁺ (dioxidanylium), NH₄⁺ (ammonium), HSO₂⁺, H₃O⁺, and HCO₂⁺, respectively.
Other molecules such as nitric oxide, nitrogen dioxide, nitrous oxide, hydrogen sulfide, methane, acetylene, ethylene, ethane, methanol and acetonitrile also react, but break up due to the large amount of energy produced.\nIn fact, HeH⁺ is the strongest known acid, with a proton affinity of 177.8 kJ/mol. Its hypothetical aqueous acidity can be estimated using Hess's law: a free energy change of dissociation of −360 kJ/mol is equivalent to a pKa of about −63 at 298 K, since pKa = ΔG°/(RT ln 10) and RT ln 10 ≈ 5.7 kJ/mol at that temperature.", "H. Schwartz observed in 1955 that the decay of the tritium molecule T₂ = ³H₂ should generate the helium hydride ion with high probability.\nIn 1963, F. Cacace at the Sapienza University of Rome conceived the decay technique for preparing and studying organic radicals and carbenium ions. In a variant of that technique, exotic species like methanium are produced by reacting organic compounds with the ³HeT⁺ that is produced by the decay of the T₂ that is mixed with the desired reagents. Much of what we know about the chemistry of HeH⁺ came through this technique.", "The first attempt to compute the structure of the HeH⁺ ion by quantum mechanical theory was made by J. Beach in 1936. Improved computations were sporadically published over the next decades.", "In 1980, V. Lubimov (Lyubimov) at the ITEP laboratory in Moscow claimed to have detected a mildly significant rest mass of (30 ± 16) eV for the neutrino, by analyzing the energy spectrum of the β decay of tritium. The claim was disputed, and several other groups set out to check it by studying the decay of molecular tritium T₂. It was known that some of the energy released by that decay would be diverted to the excitation of the decay products, including ³HeT⁺, and this phenomenon could be a significant source of error in that experiment. This observation motivated numerous efforts to precisely compute the expected energy states of that ion in order to reduce the uncertainty of those measurements. Many groups have improved the computations since then, and now there is quite good agreement between computed and experimental properties, including for several of the isotopologues.", "In 1956, M. Cantwell predicted theoretically that the spectrum of vibrations of that ion should be observable in the infrared, and that the spectra of the deuterium and common hydrogen isotopologues (HeD⁺ and HeH⁺) should lie closer to visible light and hence be easier to observe. The first detection of the spectrum of HeH⁺ was made by D. Tolliver and others in 1979, at wavenumbers between 1,700 and 1,900 cm⁻¹. In 1982, P. Bernath and T. Amano detected nine infrared lines between 2,164 and 3,158 cm⁻¹.", "HeH⁺ has been conjectured since the 1970s to exist in the interstellar medium. Its first detection, in the nebula NGC 7027, was reported in an article published in the journal Nature in April 2019.", "The helium hydride ion is formed during the decay of tritium in the molecule HT or the tritium molecule T₂. Although excited by the recoil from the beta decay, the molecule remains bound together.", "It is believed to be the first compound to have formed in the universe, and is of fundamental importance in understanding the chemistry of the early universe. This is because hydrogen and helium were almost the only types of atoms formed in Big Bang nucleosynthesis. Stars formed from the primordial material should contain HeH⁺, which could influence their formation and subsequent evolution.
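The formation route usually invoked for this primordial chemistry (the standard radiative-association step cited in the astrochemistry literature, not a detail taken from this text) is

He + H⁺ → HeH⁺ + γ

followed by destruction through reaction with neutral hydrogen, HeH⁺ + H → He + H₂⁺, which in turn feeds the chemistry that eventually produces molecular hydrogen.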
In particular, its strong dipole moment makes it relevant to the opacity of zero-metallicity stars. HeH⁺ is also thought to be an important constituent of the atmospheres of helium-rich white dwarfs, where it increases the opacity of the gas and causes the star to cool more slowly.\nHeH⁺ could be formed in the cooling gas behind dissociative shocks in dense interstellar clouds, such as the shocks caused by stellar winds, supernovae and outflowing material from young stars. If the speed of the shock is greater than about , quantities large enough to detect might be formed. If detected, the emissions from HeH⁺ would then be useful tracers of the shock.\nSeveral locations had been suggested as possible places where HeH⁺ might be detected. These included cool helium stars, H II regions, and dense planetary nebulae, like NGC 7027, where, in April 2019, HeH⁺ was reported to have been detected.", "Additional helium atoms can attach to HeH⁺ to form larger clusters such as He₂H⁺, He₃H⁺, He₄H⁺, He₅H⁺ and He₆H⁺.\nThe dihelium hydride cation, He₂H⁺, is formed by the reaction of the dihelium cation with molecular hydrogen:\n: He₂⁺ + H₂ → He₂H⁺ + H\nIt is a linear ion with hydrogen in the centre.\nThe hexahelium hydride ion, He₆H⁺, is particularly stable.\nOther helium hydride ions are known or have been studied theoretically. The helium dihydride ion, or dihydridohelium(1+), HeH₂⁺, has been observed using microwave spectroscopy. It has a calculated binding energy of 25.1 kJ/mol, while trihydridohelium(1+), HeH₃⁺, has a calculated binding energy of 0.42 kJ/mol.", "Ion separation is another application of magnetic separation. The separation is driven by the magnetic field, which induces a separating force. The force then differentiates between heavier and lighter ions, causing the separation. This phenomenon has been demonstrated at test-bench and pilot scale.", "In the recent past there were few alternatives for removing deleterious iron particles from a process stream. Magnetic separation was typically limited and only moderately effective. Magnetic separators that used permanent magnets could generate fields of low intensity only. These worked well in removing ferrous tramp but not fine paramagnetic particles. Thus high-intensity magnetic separators that were effective in collecting paramagnetic particles came into existence. These focus on the separation of very fine particles that are paramagnetic.\nThe current is passed through the coil, which creates a magnetic field, which magnetizes the expanded-steel matrix ring. The paramagnetic matrix material behaves like a magnet in the magnetic field and thereby attracts the fines. The ring is rinsed while it is in the magnetic field, and all the non-magnetic particles are carried away with the rinse water. Next, as the ring leaves the magnetic zone, it is flushed and a vacuum of about −0.3 bar is applied to remove the magnetic particles attached to the matrix ring.", "A high-gradient magnetic separator is used to separate magnetic and non-magnetic particles (concentrate and tails) from the feed slurry. The feed comes from the intermediate thickener underflow pump through a linear screen and passive matrix. Tailings go to the tailings thickener and the product goes to the product launder through vacuum tanks.", "HTK (branded as Custodiol® by Essential Pharmaceuticals LLC) has been presented by industry to surgeons as an alternative solution that exceeds other cardioplegias in myocardial protection during cardiac surgery.
This claim relies on the single-dose administration of HTK compared with other multidose cardioplegias (MDC), sparing time in the adjustment of equipment during cardioplegia re-administration, allowing greater time to operate and thus a decreased CPB duration. Other benefits include a lower concentration of sodium, calcium, and potassium compared with other cardioplegias with cardiac arrest arising from the deprivation of sodium. Finally, histidine is thought to aid buffering, mannitol and tryptophan to improve membrane stability, and ketoglutarate to help ATP production during reperfusion.\nA 2021 meta-analysis demonstrated no statistical advantage of HTK over blood or other crystalloid cardioplegias during adult cardiac surgery. The only practical advantage of HTK, therefore, is the single-dose administration compared to multi-dose requirements of blood and other crystalloid cardioplegia.", "Histidine-tryptophan-ketoglutarate, or Custodiol HTK solution, is a high-flow, low-potassium preservation solution used for organ transplantation. The solution was initially developed by Hans-Jürgen Bretschneider.\nHTK solution is intended for perfusion and flushing of donor liver, kidney, heart, lung and pancreas prior to removal from the donor and for preserving these organs during hypothermic storage and transport to the recipient. HTK solution is based on the principle of inactivating organ function by withdrawal of extracellular sodium and calcium, together with intensive buffering of the extracellular space by means of histidine/histidine hydrochloride, so as to prolong the period during which the organs will tolerate interruption of oxygenated blood. The composition of HTK is similar to that of intracellular fluid. All of the components of HTK occur naturally in the body. The osmolarity of HTK is 310 mOsm/L.", "* Sodium: 15 mmol/L\n* Potassium: 9 mmol/L\n* Magnesium: 4 mmol/L\n* Calcium: 0.015 mmol/L\n* Ketoglutarate/glutamic acid: 1 mmol/L\n* Histidine: 198 mmol/L\n* Mannitol: 30 mmol/L\n* Tryptophan: 2 mmol/L", "Princeton's conversion of the Model C stellarator to a tokamak produced results matching the Soviets. With an apparent solution to the magnetic bottle problem in-hand, plans begin for a larger machine to test scaling and methods to heat the plasma.\nIn 1972, John Nuckolls outlined the idea of fusion ignition, a fusion chain reaction. Hot helium made during fusion reheats the fuel and starts more reactions. Nuckolls's paper started a major development effort. LLNL built laser systems including Argus, Cyclops, Janus, the neodymium-doped glass (Nd:glass) laser Long Path, Shiva laser, and the 10 beam Nova in 1984. Nova would ultimately produce 120 kilojoules of infrared light during a nanosecond pulse.\nThe UK built the Central Laser Facility in 1976.\nThe \"advanced tokamak\" concept emerged, which included non-circular plasma, internal diverters and limiters, superconducting magnets, and operation in the so-called \"H-mode\" island of increased stability. Two other designs became prominent; the compact tokamak sited the magnets on the inside of the vacuum chamber, and the spherical tokamak with as small a cross section as possible.\nIn 1974 J.B. Taylor re-visited ZETA and noticed that after an experimental run ended, the plasma entered a short period of stability. This led to the reversed field pinch concept. 
On May 1, 1974, the KMS fusion company (founded by Kip Siegel) achieved the world's first laser induced fusion in a deuterium-tritium pellet.\nThe Princeton Large Torus (PLT), the follow-on to the Symmetrical Tokamak, surpassed the best Soviet machines and set temperature records that were above what was needed for a commercial reactor. Soon after it received funding with the target of breakeven.\nIn the mid-1970s, Project PACER, carried out at LANL explored the possibility of exploding small hydrogen bombs (fusion bombs) inside an underground cavity. As an energy source, the system was the only system that could work using the technology of the time. It required a large, continuous supply of nuclear bombs, however, with questionable economics.\nIn 1976, the two beam Argus laser became operational at LLNL. In 1977, the 20 beam Shiva laser there was completed, capable of delivering 10.2 kilojoules of infrared energy on target. At a price of $25 million and a size approaching that of a football field, Shiva was the first megalaser.\nAt a 1977 workshop at the Claremont Hotel in Berkeley Dr. C. Martin Stickley, then Director of the Energy Research and Development Agency ’s Office of Inertial Fusion, claimed that \"no showstoppers\" lay on the road to fusion energy.\nThe DOE selected a Princeton design Tokamak Fusion Test Reactor (TFTR) and the challenge of running on deuterium-tritium fuel.\nThe 20 beam Shiva laser at LLNL became capable of delivering 10.2 kilojoules of infrared energy on target. Costing $25 million and nearly covering a football field, Shiva was the first \"megalaser\" at LLNL.", "In 1920, the British physicist, Francis William Aston, discovered that the mass of four hydrogen atoms is greater than the mass of one helium atom (He-4), which implied that energy can be released by combining hydrogen atoms to form helium. This provided the first hints of a mechanism by which stars could produce energy. Throughout the 1920s, Arthur Stanley Eddington became a major proponent of the proton–proton chain reaction (PP reaction) as the primary system running the Sun. Quantum tunneling was discovered by Friedrich Hund in 1929, and shortly afterwards Robert Atkinson and Fritz Houtermans used the measured masses of light elements to show that large amounts of energy could be released by fusing small nuclei.\nHenry Norris Russell observed that the relationship in the Hertzsprung–Russell diagram suggested that a stars heat came from a hot core rather than from the entire star. Eddington used this to calculate that the temperature of the core would have to be about 40 million K. This became a matter of debate, because the value is much higher than astronomical observations that suggested about one-third to one-half that value. George Gamow introduced the mathematical basis for quantum tunnelling in 1928. In 1929 Atkinson and Houtermans provided the first estimates of the stellar fusion rate. They showed that fusion can occur at lower energies than previously believed, backing Eddingtons calculations.\nNuclear experiments began using a particle accelerator built by John Cockcroft and Ernest Walton at Ernest Rutherfords Cavendish Laboratory at the University of Cambridge. In 1932, Walton produced the first man-made fission by using protons from the accelerator to split lithium into alpha particles. The accelerator was then used to fire deuterons at various targets. 
Working with Rutherford and others, Mark Oliphant discovered the nuclei of helium-3 (helions) and tritium (tritons), the first case of human-caused fusion.\nNeutrons from fusion were first detected in 1933. The experiment involved the acceleration of protons towards a target at energies of up to 600,000 electron volts.\nA theory verified by Hans Bethe in 1939 showed that beta decay and quantum tunneling in the Sun's core might convert one of the protons into a neutron and thereby produce deuterium rather than a diproton. The deuterium would then fuse through other reactions to further increase the energy output. For this work, Bethe won the 1967 Nobel Prize in Physics.\nIn 1938, Peter Thonemann developed a detailed plan for a pinch device, but was told to do other work for his thesis.\nThe first patent related to a fusion reactor was registered in 1946 by the United Kingdom Atomic Energy Authority. The inventors were Sir George Paget Thomson and Moses Blackman. This was the first detailed examination of the Z-pinch concept. Starting in 1947, two UK teams carried out experiments based on this concept.", "The first successful man-made fusion device was the boosted fission weapon tested in 1951 in the Greenhouse Item test. The first true fusion weapon was 1952's Ivy Mike, and the first practical example was 1954's Castle Bravo. In these devices, the energy released by a fission explosion compresses and heats the fuel, starting a fusion reaction. Fusion releases neutrons. These neutrons hit the surrounding fission fuel, causing the atoms to split apart much faster than normal fission processes. This increased the effectiveness of bombs: normal fission weapons blow themselves apart before all their fuel is used; fusion/fission weapons do not waste their fuel.", "In 1949 expatriate German Ronald Richter proposed the Huemul Project in Argentina, announcing positive results in 1951. These turned out to be fake, but prompted others' interest. Lyman Spitzer began considering ways to solve problems involved in confining a hot plasma, and, unaware of the Z-pinch efforts, he created the stellarator. Spitzer applied to the US Atomic Energy Commission for funding to build a test device.\nDuring this period, James L. Tuck, who had worked with the UK teams on Z-pinch, had been introducing the stellarator concept to his coworkers at LANL. When he heard of Spitzer's pitch, he applied to build a pinch machine of his own, the Perhapsatron.\nSpitzer's idea won funding and he began work under Project Matterhorn. His work led to the creation of the Princeton Plasma Physics Laboratory (PPPL). Tuck returned to LANL and arranged local funding to build his machine. By this time it was clear that the pinch machines were afflicted by instability, stalling progress. In 1953, Tuck and others suggested solutions that led to a second series of pinch machines, such as the ZETA and Sceptre devices.\nSpitzer's first machine, A, worked, but his next one, B, suffered from instabilities and plasma leakage.\nIn 1954 AEC chair Lewis Strauss foresaw electricity as \"too cheap to meter\". Strauss was likely referring to fusion power, part of the secret Project Sherwood—but his statement was interpreted as referring to fission. The AEC had issued more realistic testimony regarding fission to Congress months before, projecting that \"costs can be brought down... [to]... 
about the same as the cost of electricity from conventional sources...\"", "In 1951 Edward Teller and Stanislaw Ulam at Los Alamos National Laboratory (LANL) developed the Teller-Ulam design for a thermonuclear weapon, allowing for the development of multi-megaton yield fusion bombs. Fusion work in the UK was classified after the Klaus Fuchs affair.\nIn the mid-1950s the theoretical tools used to calculate the performance of fusion machines were not predicting their actual behavior. Machines invariably leaked plasma at rates far higher than predicted. In 1954, Edward Teller gathered fusion researchers at the Princeton Gun Club. He pointed out the problems and suggested that any system that confined plasma within concave fields was doomed due to what became known as interchange instability. Attendees remember him saying in effect that the fields were like rubber bands, and they would attempt to snap back to a straight configuration whenever the power was increased, ejecting the plasma. He suggested that the only way to predictably confine plasma would be to use convex fields: a \"cusp\" configuration.\nWhen the meeting concluded, most researchers turned out papers explaining why Teller's concerns did not apply to their devices. Pinch machines did not use magnetic fields in this way, while the mirror and stellarator claques proposed various solutions. This was soon followed by Martin David Kruskal and Martin Schwarzschild's paper discussing pinch machines, however, which demonstrated those devices' instabilities were inherent.", "The largest \"classic\" pinch device was the ZETA, which started operation in the UK in 1957. Its name is a take-off on small experimental fission reactors that often had \"zero energy\" in their name, such as ZEEP.\nIn early 1958, John Cockcroft announced that fusion had been achieved in the ZETA, an announcement that made headlines around the world. He dismissed US physicists' concerns. US experiments soon produced similar neutrons, although temperature measurements suggested these could not be from fusion. The ZETA neutrons were later demonstrated to be from different versions of the instability processes that had plagued earlier machines. Cockcroft was forced to retract his fusion claims, tainting the entire field for years. ZETA ended in 1968.", "The first experiment to achieve controlled thermonuclear fusion was accomplished using Scylla I at LANL in 1958. Scylla I was a θ-pinch machine, with a cylinder full of deuterium. Electric current shot down the sides of the cylinder. The current made magnetic fields that pinched the plasma, raising temperatures to 15 million degrees Celsius for long enough that atoms fused and produced neutrons. The Sherwood program sponsored a series of Scylla machines at Los Alamos. The program began with 5 researchers and $100,000 in US funding in January 1952. By 1965, a total of $21 million had been spent. The θ-pinch approach was abandoned after calculations showed it could not scale up to produce a reactor.", "The history of nuclear fusion began early in the 20th century as an inquiry into how stars powered themselves and expanded to incorporate a broad inquiry into the nature of matter and energy, as potential applications expanded to include warfare, energy production and rocket propulsion.", "In 1950–1951 in the Soviet Union, Igor Tamm and Andrei Sakharov first discussed a tokamak-like approach. 
Experimental research on those designs began in 1956 at the Moscow Kurchatov Institute, carried out by a group of Soviet scientists led by Lev Artsimovich. The tokamak essentially combined a low-power pinch device with a low-power stellarator. The notion was to combine the fields in such a way that the particles orbited within the reactor a particular number of times, today known as the \"safety factor\". The combination of these fields dramatically improved confinement times and densities, resulting in huge improvements over existing devices.", "In 1952 Ivy Mike, part of Operation Ivy, became the first detonation of a thermonuclear weapon, yielding 10.4 megatons of TNT using liquid deuterium. Cousins and Ware built a toroidal pinch device in England and demonstrated that the plasma in pinch devices is inherently unstable. In 1953 the Soviet Union tested its RDS-6S device (codenamed \"Joe 4\" in the US), which demonstrated a fission/fusion/fission (\"Layercake\") design that yielded 600 kilotons. Igor Kurchatov spoke at Harwell on pinch devices, revealing that the USSR was working on fusion.\nSeeking to generate electricity, Japan, France and Sweden all started fusion research programs.\nIn 1955, John D. Lawson formulated what is now known as the Lawson criterion, the condition a fusion reactor must satisfy to produce more energy than is lost to the environment through processes like bremsstrahlung radiation.\nIn 1956 the Soviet Union began publishing articles on plasma physics, leading the US and UK to follow over the next several years.\nThe Sceptre III z-pinch plasma column remained stable for 300 to 400 microseconds, a dramatic improvement on previous efforts. The team calculated that the plasma had an electrical resistivity around 100 times that of copper, and was able to carry 200 kA of current for 500 microseconds.", "In 1960 John Nuckolls published the concept of inertial confinement fusion (ICF). The laser, introduced the same year, turned out to be a suitable \"driver\".\nIn 1961 the Soviet Union tested its 50 megaton Tsar Bomba, the most powerful thermonuclear weapon ever.\nSpitzer published a key plasma physics text at Princeton in 1963. He took the ideal gas laws and adapted them to an ionized plasma, developing many of the fundamental equations used to model a plasma.\nLaser fusion was suggested in 1962 by scientists at LLNL. Initially, lasers had little power. Laser fusion (inertial confinement fusion) research began as early as 1965.\nAt the 1964 World's Fair, the public was given its first fusion demonstration. The device was a Theta-pinch from General Electric. This was similar to the Scylla machine developed earlier at Los Alamos.\nBy the mid-1960s progress had stalled across the world. All of the major designs were losing plasma at unsustainable rates. The 12-beam \"4 pi laser\" attempt at inertial confinement fusion developed at LLNL targeted a gas-filled target chamber of about 20 centimeters in diameter.\nThe magnetic mirror was first published in 1967 by Richard F. Post and many others at LLNL. The mirror consisted of two large magnets arranged so they had strong fields within them, and a weaker, but connected, field between them. Plasma introduced in the area between the two magnets would \"bounce back\" from the stronger fields in the middle.\nA.D. Sakharov's group constructed the first tokamaks. The most successful were the T-3 and its larger version T-4. T-4 was tested in 1968 in Novosibirsk, producing the first quasistationary fusion reaction. 
When this was announced, the international community was skeptical. A British team was invited to see T-3, and confirmed the Soviet claims. A burst of activity followed as many planned devices were abandoned and tokamaks were introduced in their place—the C model stellarator, then under construction after many redesigns, was quickly converted to the Symmetrical Tokamak.\nIn his work with vacuum tubes, Philo Farnsworth observed that electric charge accumulated in the tube, an effect that became known as the Multipactor effect. In 1962, Farnsworth patented a design using a positive inner cage to concentrate plasma and fuse protons. During this time, Robert L. Hirsch joined Farnsworth Television labs and began work on what became the Farnsworth-Hirsch Fusor. Hirsch patented the design in 1966 and published it in 1967.\nPlasma temperatures of approximately 40 million degrees Celsius and 10 deuteron-deuteron fusion reactions per discharge were achieved at LANL with Scylla IV.\nIn 1968 the Soviets announced results from the T-3 tokamak, claiming temperatures an order of magnitude higher than any other device. A UK team, nicknamed \"The Culham Five\", confirmed the results. The results prompted many other teams to follow, including the Princeton group, which converted their stellarator to a tokamak.", "In 1985, Donna Strickland and Gérard Mourou invented a method to amplify laser pulses by \"chirping\". This changed a single wavelength into a full spectrum. The system amplified the beam at each wavelength and then reversed the beam into one color. Chirped pulse amplification became instrumental for NIF and the Omega EP system.\nLANL constructed a series of laser facilities. They included Gemini (a two beam system), Helios (eight beams), Antares (24 beams) and Aurora (96 beams). The program ended in the early nineties with a cost on the order of one billion dollars.\nIn 1987, Akira Hasegawa noticed that in a dipolar magnetic field, fluctuations tended to compress the plasma without energy loss. This effect was noticed in data taken by Voyager 2, when it encountered Uranus. This observation became the basis for a fusion approach known as the levitated dipole.\nIn tokamaks, the Tore Supra was under construction from 1983 to 1988 in Cadarache, France. Its superconducting magnets permitted it to generate a strong permanent toroidal magnetic field. First plasma came in 1988.\nIn 1983, JET achieved first plasma. In 1985, the Japanese tokamak JT-60 produced its first plasmas. In 1988, the T-15, a Soviet tokamak, was completed, the first to use (helium-cooled) superconducting magnets.", "The US funded a magnetic mirror program in the late 1970s and early 1980s. This program resulted in a series of magnetic mirror devices including 2X, Baseball I, Baseball II, the Tandem Mirror Experiment and upgrade, the Mirror Fusion Test Facility, and MFTF-B. These machines were built and tested at LLNL from the late 1960s to the mid-1980s. The final machine, MFTF, cost 372 million dollars and was, at that time, the most expensive project in LLNL history. It opened on February 21, 1986, and immediately closed, allegedly to balance the federal budget.", "Laser fusion progress: in 1983, the NOVETTE laser was completed. The following December, the ten-beam NOVA laser was finished. Five years later, NOVA produced 120 kilojoules of infrared light during a nanosecond pulse.\nResearch focused on either fast delivery or beam smoothness. 
Both focused on increasing energy uniformity. One early problem was that the light in the infrared wavelength lost energy before hitting the fuel. Breakthroughs were made at the LLE at the University of Rochester. Rochester scientists used frequency-tripling crystals to transform infrared laser beams into ultraviolet beams.", "In 1984, Martin Peng proposed an alternate arrangement of magnet coils that would greatly reduce the aspect ratio while avoiding the erosion issues of the compact tokamak: a spherical tokamak. Instead of wiring each magnet coil separately, he proposed using a single large conductor in the center, and wiring the magnets as half-rings off of this conductor. What was once a series of individual rings passing through the hole in the center of the reactor was reduced to a single post, allowing for aspect ratios as low as 1.2. The ST concept appeared to represent an enormous advance in tokamak design. The proposal came during a period when US fusion research budgets were dramatically smaller. ORNL was provided with funds to develop a suitable central column built out of a high-strength copper alloy called \"Glidcop\". However, they were unable to secure funding to build a demonstration machine.\nHaving failed at ORNL, Peng began a worldwide effort to interest other teams in the concept and get a test machine built. One approach would be to convert a spheromak. Peng's advocacy caught the interest of Derek Robinson, of the United Kingdom Atomic Energy Authority. Robinson gathered a team and secured on the order of 100,000 pounds to build an experimental machine, the Small Tight Aspect Ratio Tokamak, or START. Parts of the machine were recycled from earlier projects, while others were loaned from other labs, including a 40 keV neutral beam injector from ORNL. Construction began in 1990 and operation started in January 1991. It achieved a record beta (plasma pressure compared to magnetic field pressure) of 40% using a neutral beam injector.", "The International Thermonuclear Experimental Reactor (ITER) coalition formed, involving EURATOM, Japan, the Soviet Union and the United States, and kicked off the conceptual design process.", "In 1991 JET's Preliminary Tritium Experiment achieved the world's first controlled release of fusion power.\nIn 1992, Physics Today published Robert McCrory's outline of the current state of ICF, advocating for a national ignition facility. This was followed by a review article from John Lindl in 1995, making the same point. During this time various ICF subsystems were developed, including target manufacturing, cryogenic handling systems, new laser designs (notably the NIKE laser at NRL) and improved diagnostics including time of flight analyzers and Thomson scattering. This work was done at the NOVA laser system, General Atomics, Laser Mégajoule and the GEKKO XII system in Japan. Through this work and lobbying by groups like Fusion Power Associates and John Sethian at NRL, Congress authorized funding for the NIF project in the late nineties.\nIn 1992 the United States and the former republics of the Soviet Union stopped testing nuclear weapons.\nIn 1993 TFTR at PPPL experimented with 50% deuterium, 50% tritium, eventually reaching 10 megawatts.\nIn the early nineties, theoretical and experimental work regarding fusors and polywells was published. In response, Todd Rider at MIT developed general models of these devices, arguing that all plasma systems at thermodynamic equilibrium were fundamentally limited. 
In 1995, William Nevins published a criticism arguing that the particles inside fusors and polywells would acquire angular momentum, causing the dense core to degrade.\nIn 1995, the University of Wisconsin–Madison built a large fusor, known as HOMER. Dr George H. Miley at Illinois built a small fusor that produced neutrons using deuterium and discovered the \"star mode\" of fusor operation. At this time in Europe, an IEC device was developed as a commercial neutron source by Daimler-Chrysler and NSD Fusion.\nThe next year, Tore Supra reached a record plasma duration of two minutes with a current of almost 1 MA driven non-inductively by 2.3 MW of lower hybrid frequency waves (i.e. 280 MJ of injected and extracted energy), enabled by actively cooled plasma-facing components.\nThe upgraded Z-machine opened to the public in August 1998. The key attributes were its 18 million ampere current and a discharge time of less than 100 nanoseconds. This generated a magnetic pulse inside a large oil tank, which struck a liner (an array of tungsten wires). Firing the Z-machine became a way to test high energy, high temperature (2 billion degrees) conditions.\nIn 1997, JET reached 16.1 MW (65% of heat to plasma), sustaining over 10 MW for over 0.5 sec. As of 2020 this remained the record output level. Four megawatts of alpha particle self-heating was achieved.\nITER was officially announced as part of a seven-party consortium (six countries and the EU). ITER was designed to produce ten times more fusion power than the input power. ITER was sited in Cadarache. The US withdrew from the project in 1999.\nJT-60 produced a reversed shear plasma with an equivalent fusion amplification factor of 1.25; as of 2021 this remained the world record.\nIn the late nineties, a team at Columbia University and MIT developed the levitated dipole, a fusion device that consisted of a superconducting electromagnet floating in a saucer-shaped vacuum chamber. Plasma swirled around this donut and fused along the center axis.\nIn 1999 MAST replaced START.", "\"Fast ignition\" appeared in the late nineties, as part of a push by LLE to build the Omega EP system, which finished in 2008. Fast ignition showed dramatic power savings and moved ICF into the race for energy production. The HiPER experimental facility became dedicated to fast ignition.\nIn 2001 the United States, China and Republic of Korea joined ITER while Canada withdrew.\nIn April 2005, a UCLA team announced a way of producing fusion using a machine that \"fits on a lab bench\", using lithium tantalate to generate enough voltage to fuse deuterium. The process did not generate net power.\nThe next year, China's EAST test reactor was completed. This was the first tokamak to use superconducting magnets to generate both toroidal and poloidal fields.\nIn the early 2000s, LANL researchers claimed that an oscillating plasma could reach local thermodynamic equilibrium. This prompted the POPS and Penning trap designs.\nIn 2005 NIF fired its first bundle of eight beams, achieving the most powerful laser pulse to date: 152.8 kJ (infrared).\nMIT researchers became interested in fusors for space propulsion, using fusors with multiple inner cages. Greg Piefer founded Phoenix Nuclear Labs and developed the fusor into a neutron source for medical isotope production. Robert Bussard began speaking openly about the polywell in 2006.\nIn March 2009, NIF became operational.\nIn the early 2000s privately backed fusion companies launched to develop commercial fusion power. 
Tri Alpha Energy, founded in 1998, began by exploring a field-reversed configuration approach. In 2002, Canadian company General Fusion began proof-of-concept experiments based on a hybrid magneto-inertial approach called Magnetized Target Fusion. Investors included Jeff Bezos (General Fusion) and Paul Allen (Tri Alpha Energy). Toward the end of the decade, Tokamak Energy started exploring spherical tokamak devices using reconnection.", "In 2017, General Fusion developed its plasma injector technology and Tri Alpha Energy constructed and operated its C-2U device. In August 2014, Phoenix Nuclear Labs announced the sale of a high-yield neutron generator that could sustain 5×10 deuterium fusion reactions per second over a 24-hour period.\nIn October 2014, Lockheed Martin's Skunk Works announced the development of a high beta fusion reactor, the Compact Fusion Reactor. Although the original concept was to build a 20-ton, container-sized unit, the team conceded in 2018 that the minimum scale would be 2,000 tons.\nIn January 2015, the polywell was presented at Microsoft Research. TAE Technologies announced that its Norman reactor had achieved plasma.\nIn 2017, Helion Energy's fifth-generation plasma machine went into operation, seeking to achieve plasma density of 20 T and fusion temperatures. ST40 generated \"first plasma\".\nIn 2018, Eni announced a $50 million investment in Commonwealth Fusion Systems, to attempt to commercialize ARC technology using a test reactor (SPARC) in collaboration with MIT. The reactor planned to employ yttrium barium copper oxide (YBCO) high-temperature superconducting magnet technology. Commonwealth Fusion Systems in 2021 tested successfully a 20 T magnet making it the strongest high-temperature superconducting magnet in the world. Following the 20 T magnet CFS raised $1.8 billion from private investors.\nGeneral Fusion began developing a 70% scale demo system. In 2018, TAE Technologies' reactor reached nearly 20 M°C.", "In 2010, NIF researchers conducted a series of \"tuning\" shots to determine the optimal target design and laser parameters for high-energy ignition experiments with fusion fuel. Net fuel energy gain was achieved in September 2013.\nIn April 2014, LLNL ended the Laser Inertial Fusion Energy (LIFE) program and directed their efforts towards NIF.\nA 2012 paper demonstrated that a dense plasma focus had achieved temperatures of 1.8 billion degrees Celsius, sufficient for boron fusion, and that fusion reactions were occurring primarily within the contained plasmoid, necessary for net power.\nIn August 2014, MIT announced a tokamak it named the ARC fusion reactor, using rare-earth barium-copper oxide (REBCO) superconducting tapes to construct high-magnetic field coils that it claimed produced comparable magnetic field strength in a smaller configuration than other designs.\nIn October 2015, researchers at the Max Planck Institute of Plasma Physics completed building the largest stellarator to date, the Wendelstein 7-X. In December they produced the first helium plasma, and in February 2016 produced hydrogen plasma. In 2015, with plasma discharges lasting up to 30 minutes, Wendelstein 7-X attempted to demonstrate the essential stellarator attribute: continuous operation of a high-temperature plasma.\nIn 2014 EAST achieved a record confinement time of 30 seconds for plasma in the high-confinement mode (H-mode), thanks to improved heat dispersal. This was an order of magnitude improvement vs other reactors. 
In 2017 the reactor achieved a stable 101.2-second steady-state high confinement plasma, setting a world record in long-pulse H-mode operation.\nIn 2018 MIT scientists formulated a theoretical means to remove the excess heat from compact nuclear fusion reactors via larger and longer divertors.\nIn 2019 the United Kingdom announced a planned £200-million (US$248-million) investment to produce a design for a fusion facility named the Spherical Tokamak for Energy Production (STEP), by the early 2040s.", "In December 2020, the Chinese experimental nuclear fusion reactor HL-2M achieved its first plasma discharge. In May 2021, Experimental Advanced Superconducting Tokamak (EAST) announced a new world record for superheated plasma, sustaining a temperature of 120 M°C for 101 seconds and a peak of 160 M°C for 20 seconds. In December 2021 EAST set a new world record for high temperature (70 M°C) plasma of 1,056 seconds.\nIn 2020, Chevron Corporation announced an investment in start-up Zap Energy, co-founded by British entrepreneur and investor, Benj Conway, together with physicists Brian Nelson and Uri Shumlak from University of Washington. In 2021 the company raised $27.5 million in Series B funding led by Addition.\nIn 2021, the US DOE launched the INFUSE program, a public-private knowledge sharing initiative involving a PPPL, MIT Plasma Science and Fusion Center and Commonwealth Fusion Systems partnership, together with partnerships with TAE Technologies, Princeton Fusion Systems, and Tokamak Energy. In 2021, DOE's Fusion Energy Sciences Advisory Committee approved a strategic plan to guide fusion energy and plasma physics research that included a working power plant by 2040, similar to Canadian, Chinese, and U.K. efforts.\nIn January 2021, SuperOx announced the commercialization of a new superconducting wire, with more than 700 A/mm2 current capability.\nTAE Technologies announced that its Norman device had sustained a temperature of about 60 million degrees C for 30 milliseconds, 8 and 10 times higher, respectively, than the company's previous devices. The duration was claimed to be limited by the power supply rather than the device.\nIn August 2021, the National Ignition Facility recorded a record-breaking 1.3 megajoules of energy created from fusion which is the first example of the Lawson criterion being surpassed in a laboratory.\nIn February 2022, JET sustained 11 MW and a Q value of 0.33 for over 5 seconds, outputting 59.7 megajoules, using a mix of deuterium and tritium for fuel. In March 2022 it was announced that Tokamak Energy achieved a record plasma temperature of 100 million kelvins, inside a commercial compact tokamak.\nIn October 2022, the Korea Superconducting Tokamak Advanced Research (KSTAR) reached a record plasma duration of 45 seconds, sustaining the high-temperature fusion plasma over the 100 million degrees Celsus based on the integrated real-time RMP control for ELM-less H-mode, i.e. fast ions regulated enhancement (FIRE) mode, machine learning algorithm, and 3D field optimization via an edge-localized RMP.\nIn December 2022, the NIF achieved the first scientific breakeven controlled fusion experiment, with an energy gain of 1.5.\nIn February 2024, the KSTAR tokamak set a new record (shot #34705) for the longest duration (102 seconds) of a magnetically confined plasma. The plasma was operated in ELM-less H-mode, with much better control of the error field than was possible previously. 
KSTAR also set a record (shot #34445) for the longest steady-state duration at a temperature of 100 million degrees Celsius (48 seconds).", "In the German/US HIBALL study, Garching used the high repetition rate of the RF driver to serve four reactor chambers using liquid lithium inside the chamber cavity. In 1982 high-confinement mode (H-mode) was discovered in tokamaks.", "Some amphibians and reptiles have the ability to regenerate limbs, eyes, spinal cords, hearts, intestines, and upper and lower jaws. The Japanese fire belly newt can regenerate its eye lens 18 times over a period of 16 years and retain its structural and functional properties. The cells at the site of the injury have the ability to dedifferentiate, reproduce rapidly, and differentiate again to create a new limb or organ.\nHox genes are a group of related genes that control the body plan of an embryo along the head-tail axis. They are responsible for body segment differentiation and express the arrangement of numerous body components during initial embryonic development. Primarily, these sets of genes are utilized during the development of body plans by coding for the transcription factors that trigger production of body segment specific structures. Additionally, in most animals these genes are laid out along the chromosome in an order similar to that in which they are expressed along the anterior–posterior axis.\nVariants of the Hox genes are found in almost every phylum, with the exception of sponges, which use a different type of developmental gene. The homology of these genes is of particular interest to scientists, as they may hold more answers to the evolution of many species. In fact, these genes demonstrate such a high degree of homology that a human Hox gene variant – HOXB4 – could mimic the function of its homolog in the fruit fly (Drosophila). Studies suggest that the regulation of these genes and their target genes in different species are actually what causes such great phenotypic differences between species.\nHox genes contain a DNA sequence known as the homeobox that is involved in the regulation of patterns of anatomical development. This sequence provides instructions for making a string of 60 protein building blocks (amino acids) referred to as the homeodomain. Most homeodomain-containing proteins function as transcription factors and fundamentally bind and regulate the activity of different genes. The homeodomain is the segment of the protein that binds to precise regulatory regions of the target genes. Genes within the homeobox family are implicated in a wide variety of significant activities during growth. These activities include directing the development of limbs and organs along the anterior-posterior axis and regulating the process by which cells mature to carry out specific functions, a process known as cellular differentiation. Certain homeobox genes can act as tumor suppressors, which means they help prevent cells from growing and dividing too rapidly or in an uncontrolled way.\nBecause homeobox genes have so many important functions, mutations in these genes are responsible for a wide array of developmental disorders. Changes in certain homeobox genes often result in eye disorders or cause abnormal head, face, and tooth development. 
Additionally, increased or decreased activity of certain homeobox genes has been associated with several forms of cancer later in life.", "As previously mentioned, Hox genes encode transcription factors that regulate embryonic and post-embryonic developmental processes. The expression of Hox genes is regulated in part by the tight, spatial arrangement of conserved coding and non-coding DNA regions. The potential for evolutionary alterations in Hox cluster composition is viewed as small among vertebrates. On the other hand, recent studies of a small number of non-mammalian taxa suggest greater dissimilarity than initially thought. Next-generation sequencing was used to analyze large genomic fragments (greater than 100 kilobases) from the eastern newt (Notophthalmus viridescens). It was found that the composition of Hox cluster genes was conserved relative to orthologous regions from other vertebrates. Furthermore, it was found that the length of introns and intergenic regions varied. In particular, the distance between HoxD13 and HoxD11 is longer in the newt than in orthologous regions from vertebrate species with expanded Hox clusters and is predicted to exceed the length of the entire HoxD clusters (HoxD13–HoxD4) of humans, mice, and frogs. Many repetitive DNA sequences were identified in newt Hox clusters, including an enrichment of DNA transposon-like sequences similar to those in non-coding genomic fragments. The researchers interpreted these results to suggest that Hox cluster expansion and transposon accumulation are common features of non-mammalian tetrapod vertebrates.\nAfter the loss of a limb, cells draw together to form a clump known as a blastema. This superficially appears undifferentiated, but cells that originated in the skin later develop into new skin, muscle cells into new muscle and cartilage cells into new cartilage. It is only the cells from just beneath the surface of the skin that are pluripotent and able to develop into any type of cell. Salamander Hox genomic regions show elements of conservation and variety in comparison to other vertebrate species. Whereas the structure and organization of Hox coding genes is conserved, newt Hox clusters show variation in the lengths of introns and intergenic regions, and the HoxD13–11 region exceeds the lengths of orthologous segments even among vertebrate species with expanded Hox clusters. Researchers have suggested that the HoxD13–11 expansion predated a basal salamander genome size amplification that occurred approximately 191 million years ago, because it is preserved in all three extant amphibian groups. Additional evidence supports the proposal that Hox clusters are amenable to structural evolution, with variation present in the lengths of introns and intergenic regions, relatively high numbers of repetitive sequences, and non-random accumulations of DNA transposons in newts and lizards. Researchers found that the non-random accumulation of DNA transposons could possibly change developmental programs by generating sequence motifs for transcriptional control.\nIn conclusion, the available data from several non-mammalian tetrapods suggest that Hox structural flexibility is the rule, not the exception. It is thought that this flexibility may allow for developmental variation across non-mammalian taxa. 
This holds both for embryogenesis and for the redeployment of Hox genes during post-embryonic developmental processes, such as metamorphosis and regeneration.", "Another phenomenon observed in animal models is the presence of gradient fields in early development. More specifically, this has been shown in an aquatic amphibian, the newt. These \"gradient fields\", as they are known in developmental biology, have the ability to form the appropriate tissues they are destined to form even when cells from other parts of the embryo are introduced or transplanted into them. This was first reported in 1934. Originally the specific mechanism behind this rather surprising phenomenon was not known; Hox genes have since been shown to be central to the process. More specifically, a concept now known as polarity has been identified as one, though not the only, mechanism driving this development.\nStudies done by Oliver and colleagues in 1988 showed that different concentrations of XIHbox 1 antigen were present along the anterior-posterior mesoderm of various developing animal models. One conclusion is that this varied concentration of protein expression causes differentiation among the various tissues and could be one of the factors behind these so-called \"gradient fields\". While the protein products of Hox genes are strongly involved in these fields and in differentiation in amphibians and reptiles, other causal factors are involved. For example, retinoic acid and other growth factors have been shown to play a role in these gradient fields.", "Hox genes, especially HoxA and HoxD genes, play a major role in the ability of some amphibians and reptiles to regenerate lost limbs.\nIf the processes involved in forming new tissue can be reverse-engineered into humans, it may be possible to heal injuries of the spinal cord or brain, repair damaged organs and reduce scarring and fibrosis after surgery. Despite the strong conservation of Hox genes through evolution, mammals, and humans specifically, cannot regenerate limbs. This raises the question of why humans, which also possess analogs of these genes, cannot regrow and regenerate limbs. Besides the lack of specific growth factors, studies have shown that something as small as base-pair differences between amphibian and human Hox analogs plays a crucial role in the human inability to regenerate limbs. Undifferentiated stem cells and the ability to establish polarity in tissues are vital to this process.", "Essentially, Hox genes contribute to the specification of the three main components of limb development: the stylopod, zeugopod and autopod. Certain mutations in Hox genes can potentially lead to proximal and/or distal losses, along with other abnormalities. Three different models have been created for outlining the patterning of these regions. The zone of polarizing activity (ZPA) in the limb bud has pattern-organizing activity through the utilization of a morphogen gradient of a protein called Sonic hedgehog (Shh). Sonic hedgehog is turned on in the posterior region via the early expression of HoxD genes, along with the expression of Hoxb8. Shh is maintained in the posterior through a feedback loop between the ZPA and the AER. Shh cleaves the Ci/Gli3 transcriptional repressor complex to convert the transcription factor Gli3 to an activator, which activates the transcription of HoxD genes along the anterior/posterior axis. 
It is evident that different Hox genes are critical for proper limb development in different amphibians.\nResearchers conducted a study targeting the Hox-9 to Hox-13 genes in different species of frogs and other amphibians. As an ancient tetrapod group with assorted limb types, amphibians are important for understanding the origin and diversification of limbs in land vertebrates. A PCR (polymerase chain reaction) study was conducted in two species of each amphibian order to identify Hox-9 to Hox-13. Fifteen distinct posterior Hox genes and one retro-pseudogene were identified, and the former confirm the existence of four Hox clusters in each amphibian order. Certain genes expected to occur in all tetrapods, based on the posterior Hox complement of mammals, fishes and the coelacanth, were not recovered. HoxD-12 is absent in frogs and possibly other amphibians. By definition, the autopodium is the distal segment of a limb, comprising the hand or foot. Considering Hox-12's function in autopodium development, the loss of this gene may be related to the absence of the fifth finger in frogs and salamanders.", "The Human Tissue (Scotland) Act 2006 (asp 4) is an Act of the Scottish Parliament to consolidate and overhaul previous legislation regarding the handling of human tissue.\nIt deals with three distinct uses of human tissue: its donation primarily for the purpose of transplantation, but also for research, education or training and audit; its removal, retention and use following a post-mortem examination; and its use for the purposes of the Anatomy Act 1984 as amended for Scotland by the 2006 Act.\nIts counterpart in the rest of the United Kingdom is the Human Tissue Act 2004.", "At the intracellular level, hECTs exhibit several essential structural features of cardiomyocytes, including organized sarcomeres, gap junctions, and sarcoplasmic reticulum structures; however, the distribution and organization of many of these structures is characteristic of neonatal heart tissue rather than adult human heart muscle. hECTs also express key cardiac genes (α-MHC, SERCA2a and ACTC1) nearing the levels seen in the adult heart. Analogous to the characteristics of ECTs from animal models, hECTs beat spontaneously and reconstitute many fundamental physiological responses of normal heart muscle, such as the Frank-Starling mechanism and sensitivity to calcium. hECTs show dose-dependent responses to certain drugs, such as morphological changes in action potentials due to ion channel blockers and modulation of contractile properties by inotropic and lusitropic agents.", "Even with current technologies, hECT structure and function remain closer to newborn heart muscle than to adult myocardium. Nonetheless, important advances have led to the generation of hECT patches for myocardial repair in animal models and their use in in vitro models for drug screening. hECTs can also be used to experimentally model CVD using genetic manipulation and adenoviral-mediated gene transfer. In animal models of myocardial infarction (MI), hECT injection into the hearts of rats and mice reduces infarct size and improves heart function and contractility. As a proof of principle, grafts of engineered heart tissues have been implanted in rats following MI with beneficial effects on left ventricular function. The use of hECTs in generating tissue engineered heart valves is also being explored to improve current heart valve constructs for in vivo animal studies. 
As tissue engineering technology advances to overcome current limitations, hECTs are a promising avenue for experimental drug discovery, screening and disease modelling, and in vivo repair.", "hESCs and hiPSCs are the primary cells used to generate hECTs. Human pluripotent stem cells are differentiated into cardiomyocytes (hPSC-CMs) in culture through a milieu containing small-molecule mediators (e.g. cytokines, growth and transcription factors). Transforming hPSC-CMs into hECTs incorporates the use of 3-dimensional (3D) tissue scaffolds to mimic the natural physiological environment of the heart. This 3D scaffold, along with collagen – a major component of the cardiac extracellular matrix – provides the appropriate conditions to promote cardiomyocyte organization, growth and differentiation.", "Human engineered cardiac tissues (hECTs) are derived by experimental manipulation of pluripotent stem cells, such as human embryonic stem cells (hESCs) and, more recently, human induced pluripotent stem cells (hiPSCs), to differentiate into human cardiomyocytes. Interest in these bioengineered cardiac tissues has risen due to their potential use in cardiovascular research and clinical therapies. These tissues provide a unique in vitro model to study cardiac physiology with a species-specific advantage over cultured animal cells in experimental studies. hECTs also have therapeutic potential for in vivo regeneration of heart muscle. hECTs provide a valuable resource to reproduce the normal development of human heart tissue, understand the development of human cardiovascular disease (CVD), and may lead to engineered tissue-based therapies for CVD patients.", "The acid is usually formed by acidification of an azide salt like sodium azide. Normally solutions of sodium azide in water contain trace quantities of hydrazoic acid in equilibrium with the azide salt, but introduction of a stronger acid can convert the primary species in solution to hydrazoic acid. The pure acid may be subsequently obtained by fractional distillation as an extremely explosive colorless liquid with an unpleasant smell.\nIts aqueous solution can also be prepared by treatment of a barium azide solution with dilute sulfuric acid, filtering off the insoluble barium sulfate.\nIt was originally prepared by the reaction of aqueous hydrazine with nitrous acid:\nN2H4 + HNO2 → HN3 + 2 H2O\nWith the hydrazinium cation this reaction is written as:\nN2H5+ + HNO2 → HN3 + H2O + H3O+\nOther oxidizing agents, such as hydrogen peroxide, nitrosyl chloride, trichloramine or nitric acid, can also be used to produce hydrazoic acid from hydrazine.", "Hydrazoic acid, also known as hydrogen azide, azic acid or azoimide, is a compound with the chemical formula HN3. It is a colorless, volatile, and explosive liquid at room temperature and pressure. It is a compound of nitrogen and hydrogen, and is therefore a pnictogen hydride. The oxidation state of the nitrogen atoms in hydrazoic acid is fractional and is -1/3. It was first isolated in 1890 by Theodor Curtius. The acid has few applications, but its conjugate base, the azide ion, is useful in specialized processes.\nHydrazoic acid, like its fellow mineral acids, is soluble in water. Undiluted hydrazoic acid is dangerously explosive, with a standard enthalpy of formation ΔfH°(l, 298 K) = +264 kJ/mol. 
When dilute, the gas and aqueous solutions (<10%) can be safely prepared but should be used immediately; because of its low boiling point, hydrazoic acid is enriched upon evaporation and condensation, such that dilute solutions incapable of explosion can form droplets in the headspace of the container or reactor that are capable of explosion.", "Hydrazoic acid reacts with nitrous acid:\nHN3 + HNO2 → N2 + N2O + H2O\nThis reaction is unusual in that it involves compounds with nitrogen in four different oxidation states.", "Hydrazoic acid is volatile and highly toxic. It has a pungent smell and its vapor can cause violent headaches. The compound acts as a non-cumulative poison.", "2-Furonitrile, a pharmaceutical intermediate and potential artificial sweetening agent, has been prepared in good yield by treating furfural with a mixture of hydrazoic acid (HN3) and perchloric acid (HClO4) in the presence of magnesium perchlorate in benzene solution at 35 °C.\nThe all gas-phase iodine laser (AGIL) mixes gaseous hydrazoic acid with chlorine to produce excited nitrogen chloride, which is then used to cause iodine to lase; this avoids the liquid chemistry requirements of COIL lasers.", "In its properties hydrazoic acid shows some analogy to the halogen acids, since it forms poorly soluble (in water) lead, silver and mercury(I) salts. The metallic salts all crystallize in the anhydrous form and decompose on heating, leaving a residue of the pure metal. It is a weak acid (pKa = 4.75). Its heavy metal salts are explosive and readily interact with alkyl iodides. Azides of heavier alkali metals (excluding lithium) or alkaline earth metals are not explosive, but decompose in a more controlled way upon heating, releasing spectroscopically pure nitrogen gas. Solutions of hydrazoic acid dissolve many metals (e.g. zinc, iron) with liberation of hydrogen and formation of salts, which are called azides (formerly also called azoimides or hydrazoates).\nHydrazoic acid may react with carbonyl derivatives, including aldehydes, ketones, and carboxylic acids, to give an amine or amide, with expulsion of nitrogen. This is called the Schmidt reaction or Schmidt rearrangement.\nDissolution in the strongest acids produces explosive salts containing the aminodiazonium ion H2N3+, for example:\nThe H2N3+ ion is isoelectronic with diazomethane, CH2N2.\nThe decomposition of hydrazoic acid, triggered by shock, friction, spark, etc., produces nitrogen and hydrogen:\n2 HN3 → H2 + 3 N2\nHydrazoic acid undergoes unimolecular decomposition at sufficient energy:\nHN3 → NH + N2\nThe lowest energy pathway produces NH in the triplet state, making it a spin-forbidden reaction. This is one of the few reactions whose rate has been determined for specific amounts of vibrational energy in the ground electronic state, by laser photodissociation studies. In addition, these unimolecular rates have been analyzed theoretically, and the experimental and calculated rates are in reasonable agreement.", "Hydroiodic acid is listed as a U.S. Federal DEA List I Chemical, owing to its use as a reducing agent related to the production of methamphetamine from ephedrine or pseudoephedrine (recovered from nasal decongestant pills).", "The Cativa process is a major end use of hydroiodic acid, which serves as a co-catalyst for the production of acetic acid by the carbonylation of methanol.", "Hydroiodic acid reacts with oxygen in air to give iodine:\n:4 HI + O2 → 2 H2O + 2 I2\nLike other hydrogen halides, hydroiodic acid adds to alkenes to give alkyl iodides. 
It can also be used as a reducing agent, for example in the reduction of aromatic nitro compounds to anilines.", "Hydroiodic acid (or hydriodic acid) is a colorless aqueous solution of hydrogen iodide (HI). It is a strong acid, which is ionized completely in an aqueous solution. Concentrated solutions of hydroiodic acid are usually 48% to 57% HI.", "In chemistry, hydronium (hydroxonium in traditional British English) is the common name for the cation H3O+, also written as [H3O]+, the type of oxonium ion produced by protonation of water. It is often viewed as the positive ion present when an Arrhenius acid is dissolved in water, as Arrhenius acid molecules in solution give up a proton (a positive hydrogen ion, H+) to the surrounding water molecules (H2O). In fact, acids must be surrounded by more than a single water molecule in order to ionize, yielding the aqueous proton and the conjugate base. Three main structures for the aqueous proton have garnered experimental support: the Eigen cation, which is a tetrahydrate, H3O+(H2O)3; the Zundel cation, which is a symmetric dihydrate, H+(H2O)2; and the Stoyanov cation, an expanded Zundel cation, which is a hexahydrate, H+(H2O)2(H2O)4. Spectroscopic evidence from well-defined IR spectra overwhelmingly supports the Stoyanov cation as the predominant form. For this reason, it has been suggested that wherever possible, the symbol H+(aq) should be used instead of the hydronium ion.", "The molar concentration of hydronium or H+ ions determines a solution's pH according to\npH = -log10([H3O+]/M),\nwhere M = mol/L. The concentration of hydroxide ions analogously determines a solution's pOH. The molecules in pure water auto-dissociate into aqueous protons and hydroxide ions in the following equilibrium:\n2 H2O ⇌ H3O+ + OH-\nIn pure water, there is an equal number of hydroxide and H3O+ ions, so it is a neutral solution. At 25 °C, pure water has a pH of 7 and a pOH of 7 (this varies when the temperature changes: see self-ionization of water). A pH value less than 7 indicates an acidic solution, and a pH value more than 7 indicates a basic solution.", "According to IUPAC nomenclature of organic chemistry, the hydronium ion should be referred to as oxonium. Hydroxonium may also be used unambiguously to identify it.\nAn oxonium ion is any cation containing a trivalent oxygen atom.", "Since O+ and N have the same number of electrons, H3O+ is isoelectronic with ammonia. H3O+ has a trigonal pyramidal molecular geometry with the oxygen atom at its apex. The bond angle is approximately 113°, and the center of mass is very close to the oxygen atom. Because the base of the pyramid is made up of three identical hydrogen atoms, the molecule's symmetric top configuration is such that it belongs to the C3v point group. Because of this symmetry and the fact that it has a dipole moment, the rotational selection rules are ΔJ = ±1 and ΔK = 0. The transition dipole lies along the c-axis and, because the negative charge is localized near the oxygen atom, the dipole moment points to the apex, perpendicular to the base plane.", "The hydrated proton is very acidic: at 25 °C, its pKa is approximately 0. The values commonly given for pKa(H3O+) are 0 or –1.74. The former uses the convention that the activity of the solvent in a dilute solution (in this case, water) is 1, while the latter uses the value of the concentration of water in the pure liquid of 55.5 M. Silverstein has shown that the latter value is thermodynamically unsupportable. 
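The sketch below is included only to make the quantities just discussed concrete: it applies the logarithmic pH definition given above to an illustrative hydronium concentration (a placeholder value, not one taken from the text) and shows that the gap between the two quoted pKa conventions for H3O+ (0 versus –1.74) corresponds to the logarithm of the 55.5 M concentration of pure water.

import math

# pH from a hydronium concentration, using pH = -log10([H3O+]/M) as defined above.
h3o_molar = 1.0e-4             # mol/L, illustrative placeholder value
print(-math.log10(h3o_molar))  # 4.0

# Difference between the two quoted pKa conventions for H3O+:
# treating pure water as 55.5 M shifts the value by log10(55.5), i.e. about 1.74.
print(-math.log10(55.5))       # approximately -1.74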
The disagreement comes from the ambiguity that, to define pKa of H3O+ in water, H2O has to act simultaneously as a solute and the solvent. The IUPAC has not given an official definition of pKa that would resolve this ambiguity. Burgot has argued that H3O+(aq) + H2O(l) ⇄ H2O(aq) + H3O+(aq) is simply not a thermodynamically well-defined process. For an estimate of pKa(H3O+), Burgot suggests taking the measured value pKa(H3O+) = 0.3, the pKa of H3O+ in ethanol, and applying the correlation equation pKa(aq) = pKa(EtOH) – 1.0 (± 0.3) to convert the ethanol pKa to an aqueous value, to give a value of pKa(H3O+) = –0.7 (± 0.3). On the other hand, Silverstein has shown that Ballinger and Long's experimental results support a pKa of 0.0 for the aqueous proton. Neils and Schaertel provide added arguments for a pKa of 0.0.\nThe aqueous proton is the most acidic species that can exist in water (assuming sufficient water for dissolution): any stronger acid will ionize and yield a hydrated proton. The acidity of H+(aq) is the implicit standard used to judge the strength of an acid in water: strong acids must be better proton donors than H+(aq), as otherwise a significant portion of acid will exist in a non-ionized state (i.e. a weak acid). Unlike H+(aq) in neutral solutions that result from water's autodissociation, in acidic solutions H+(aq) is long-lasting and concentrated, in proportion to the strength of the dissolved acid.\npH was originally conceived to be a measure of the hydrogen ion concentration of aqueous solution. Virtually all such free protons are quickly hydrated; acidity of an aqueous solution is therefore more accurately characterized by its concentration of H+(aq). In organic syntheses, such as acid catalyzed reactions, the hydronium ion (H3O+) is used interchangeably with the H+ ion; choosing one over the other has no significant effect on the mechanism of reaction.", "For many strong acids, it is possible to form crystals of their hydronium salt that are relatively stable. These salts are sometimes called acid monohydrates. As a rule, any acid with an ionization constant of or higher may do this. Acids whose ionization constants are below generally cannot form stable salts. For example, nitric acid has an ionization constant of , and mixtures with water at all proportions are liquid at room temperature. However, perchloric acid has an ionization constant of , and if liquid anhydrous perchloric acid and water are combined in a 1:1 molar ratio, they react to form solid hydronium perchlorate.\nThe hydronium ion also forms stable compounds with the carborane superacid. X-ray crystallography shows a C3v symmetry for the hydronium ion, with each proton interacting with a bromine atom from each of three carborane anions 320 pm apart on average. The salt is also soluble in benzene. In crystals grown from a benzene solution the solvent co-crystallizes and the cation is completely separated from the anion. In the cation, three benzene molecules surround hydronium, forming pi-cation interactions with the hydrogen atoms. The closest (non-bonding) approach of the anion at chlorine to the cation at oxygen is 348 pm.\nThere are also many known examples of salts containing hydrated hydronium ions, such as the ion in , the and ions both found in .\nSulfuric acid is also known to form a hydronium salt at temperatures below .", "Hydronium is an abundant molecular ion in the interstellar medium and is found in diffuse and dense molecular clouds as well as the plasma tails of comets. 
Interstellar sources of hydronium observations include the regions of Sagittarius B2, Orion OMC-1, Orion BN–IRc2, Orion KL, and the comet Hale–Bopp.\nInterstellar hydronium is formed by a chain of reactions started by the ionization of H2 into H2+ by cosmic radiation. H3O+ can produce either OH or H2O through dissociative recombination reactions, which occur very quickly even at the low (≥10 K) temperatures of dense clouds. This leads to hydronium playing a very important role in interstellar ion-neutral chemistry.\nAstronomers are especially interested in determining the abundance of water in various interstellar climates due to its key role in the cooling of dense molecular gases through radiative processes. However, H2O does not have many favorable transitions for ground-based observations. Although observations of HDO (the deuterated version of water) could potentially be used for estimating H2O abundances, the ratio of HDO to H2O is not known very accurately.\nHydronium, on the other hand, has several transitions that make it a superior candidate for detection and identification in a variety of situations. This information has been used in conjunction with laboratory measurements of the branching ratios of the various dissociative recombination reactions to provide what are believed to be relatively accurate OH and H2O abundances without requiring direct observation of these species.", "As mentioned previously, H3O+ is found in both diffuse and dense molecular clouds. By applying the reaction rate constants (α, β, and γ) corresponding to all of the currently available characterized reactions involving H3O+, it is possible to calculate k(T) for each of these reactions. By multiplying these k(T) by the relative abundances of the products, the relative rates (in cm3/s) for each reaction at a given temperature can be determined. These relative rates can be converted into absolute rates by multiplying them by the . By assuming for a dense cloud and for a diffuse cloud, the results indicate that the most dominant formation and destruction mechanisms were the same for both cases. It should be mentioned that the relative abundances used in these calculations correspond to TMC-1, a dense molecular cloud, and that the calculated relative rates are therefore expected to be more accurate at . The three fastest formation and destruction mechanisms are listed in the table below, along with their relative rates. Note that the rates of these six reactions are such that they make up approximately 99% of the hydronium ion's chemical interactions under these conditions. All three destruction mechanisms in the table below are classified as dissociative recombination reactions.\nIt is also worth noting that the relative rates for the formation reactions in the table above are the same for a given reaction at both temperatures. This is due to the reaction rate constants for these reactions having β and γ constants of 0, resulting in k = α, which is independent of temperature.\nSince all three of these reactions produce either H2O or OH, these results reinforce the strong connection between their relative abundances and that of H3O+.", "As early as 1973 and before the first interstellar detection, chemical models of the interstellar medium (the first corresponding to a dense cloud) predicted that hydronium was an abundant molecular ion and that it played an important role in ion-neutral chemistry. 
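As a minimal sketch of the rate calculation described above: the α, β and γ constants are assumed here to enter through the modified Arrhenius (Arrhenius–Kooij) form k(T) = α(T/300)^β exp(−γ/T) conventionally used by astrochemical reaction-rate databases; the parameter values and temperatures below are illustrative placeholders rather than measured constants for any particular hydronium reaction.

import math

def rate_coefficient(alpha, beta, gamma, T):
    # Modified Arrhenius form assumed above; k(T) carries the units of alpha.
    return alpha * (T / 300.0) ** beta * math.exp(-gamma / T)

# With beta = 0 and gamma = 0 the expression reduces to k = alpha, i.e. a
# temperature-independent rate coefficient, as noted for the formation reactions.
for T in (10.0, 50.0):  # placeholder dense- and diffuse-cloud temperatures (K)
    print(T, rate_coefficient(alpha=1.0e-9, beta=0.0, gamma=0.0, T=T))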
However, before an astronomical search could be undertaken, there was still the matter of determining hydronium's spectroscopic features in the gas phase, which at this point were unknown. The first studies of these characteristics came in 1977, and they were followed by other, higher-resolution spectroscopy experiments. Once several lines had been identified in the laboratory, the first interstellar detection of HO was made by two groups almost simultaneously in 1986. The first, published in June 1986, reported observation of the J' = 1 − 2 transition at in OMC-1 and Sgr B2. The second, published in August, reported observation of the same transition toward the Orion-KL nebula.\nThese first detections have been followed by observations of a number of additional transitions. The first observations of each subsequent transition detection are given below in chronological order:\nIn 1991, the 3 − 2 transition at was observed in OMC-1 and Sgr B2. One year later, the 3 − 2 transition at was observed in several regions, the clearest of which was the W3 IRS 5 cloud.\nThe first observation of the far-IR 4 − 3 transition, at 69.524 µm (4.3121 THz), was made in 1996 near Orion BN-IRc2. In 2001, three additional transitions of in were observed in the far infrared in Sgr B2: the 2 − 1 transition at 100.577 µm (2.98073 THz), the 1 − 1 at 181.054 µm (1.65582 THz), and the 2 − 1 at 100.869 µm (2.9721 THz).", "Researchers have yet to fully characterize the solvation of the hydronium ion in water, in part because many different meanings of solvation exist. A freezing-point depression study determined that the mean hydration ion in cold water is approximately : on average, each hydronium ion is solvated by 6 water molecules which are unable to solvate other solute molecules.\nSome hydration structures are quite large: the magic ion number structure (called magic number because of its increased stability with respect to hydration structures involving a comparable number of water molecules – this is a similar usage of the term magic number as in nuclear physics) might place the hydronium inside a dodecahedral cage. However, more recent ab initio molecular dynamics simulations have shown that, on average, the hydrated proton resides on the surface of the cluster. Further, several disparate features of these simulations agree with their experimental counterparts, suggesting an alternative interpretation of the experimental results.\nTwo other well-known structures are the Zundel cation and the Eigen cation. The Eigen solvation structure has the hydronium ion at the center of an complex in which the hydronium is strongly hydrogen-bonded to three neighbouring water molecules. In the Zundel complex, the proton is shared equally by two water molecules in a symmetric hydrogen bond. Recent work indicates that both of these complexes represent ideal structures in a more general hydrogen bond network defect.\nIsolation of the hydronium ion monomer in the liquid phase was achieved in a nonaqueous, low-nucleophilicity superacid solution ().
The ion was characterized by high resolution nuclear magnetic resonance.\nA 2007 calculation of the enthalpies and free energies of the various hydrogen bonds around the hydronium cation in liquid protonated water at room temperature and a study of the proton hopping mechanism using molecular dynamics showed that the hydrogen-bonds around the hydronium ion (formed with the three water ligands in the first solvation shell of the hydronium) are quite strong compared to those of bulk water.\nA new model was proposed by Stoyanov based on infrared spectroscopy in which the proton exists as an ion. The positive charge is thus delocalized over 6 water molecules.", "As the temperature decreases, further physiological systems falter and heart rate, respiratory rate, and blood pressure all decrease. This results in an expected heart rate in the 30s at a temperature of .\nThere is often cold, inflamed skin, hallucinations, lack of reflexes, fixed dilated pupils, low blood pressure, pulmonary edema, and shivering is often absent. Pulse and respiration rates decrease significantly, but fast heart rates (ventricular tachycardia, atrial fibrillation) can also occur. Atrial fibrillation is not typically a concern in and of itself.", "Twenty to fifty percent of hypothermia deaths are associated with paradoxical undressing. This typically occurs during moderate and severe hypothermia, as the person becomes disoriented, confused, and combative. They may begin discarding their clothing, which, in turn, increases the rate of heat loss.\nRescuers who are trained in mountain survival techniques are taught to expect this; however, people who die from hypothermia in urban environments who are found in an undressed state are sometimes incorrectly assumed to have been subjected to sexual assault.\nOne explanation for the effect is a cold-induced malfunction of the hypothalamus, the part of the brain that regulates body temperature. Another explanation is that the muscles contracting peripheral blood vessels become exhausted (known as a loss of vasomotor tone) and relax, leading to a sudden surge of blood (and heat) to the extremities, causing the person to feel overheated.", "As hypothermia progresses, symptoms include: mental status changes such as amnesia, confusion, slurred speech, decreased reflexes, and loss of fine motor skills.", "Hypothermia usually occurs from exposure to low temperatures, and is frequently complicated by alcohol consumption. Any condition that decreases heat production, increases heat loss, or impairs thermoregulation, however, may contribute. Thus, hypothermia risk factors include: substance use disorders (including alcohol use disorder), homelessness, any condition that affects judgment (such as hypoglycemia), the extremes of age, poor clothing, chronic medical conditions (such as hypothyroidism and sepsis), and living in a cold environment. Hypothermia occurs frequently in major trauma, and is also observed in severe cases of anorexia nervosa. Hypothermia is also associated with worse outcomes in people with sepsis. While most people with sepsis develop fevers (elevated body temperature), some develop hypothermia.\nIn urban areas, hypothermia frequently occurs with chronic cold exposure, such as in cases of homelessness, as well as with immersion accidents involving drugs, alcohol or mental illness. 
While studies have shown that people experiencing homelessness are at risk of premature death from hypothermia, the true incidence of hypothermia-related deaths in this population is difficult to determine. In more rural environments, the incidence of hypothermia is higher among people with significant comorbidities and less able to move independently. With rising interest in wilderness exploration, and outdoor and water sports, the incidence of hypothermia secondary to accidental exposure may become more frequent in the general population.", "Heat is primarily generated in muscle tissue, including the heart, and in the liver, while it is lost through the skin (90%) and lungs (10%). Heat production may be increased two- to four-fold through muscle contractions (i.e. exercise and shivering). The rate of heat loss is determined, as with any object, by convection, conduction, and radiation. The rates of these can be affected by body mass index, body surface area to volume ratios, clothing and other environmental conditions.\nMany changes to physiology occur as body temperatures decrease. These occur in the cardiovascular system leading to the Osborn J wave and other dysrhythmias, decreased central nervous system electrical activity, cold diuresis, and non-cardiogenic pulmonary edema.\nResearch has shown that glomerular filtration rates (GFR) decrease as a result of hypothermia. In essence, hypothermia increases preglomerular vasoconstriction, thus decreasing both renal blood flow (RBF) and GFR.", "Hypothermia continues to be a major limitation to swimming or diving in cold water. The reduction in finger dexterity due to pain or numbness decreases general safety and work capacity, which consequently increases the risk of other injuries.\nOther factors predisposing to immersion hypothermia include dehydration, inadequate rewarming between repetitive dives, starting a dive while wearing cold, wet dry suit undergarments, sweating with work, inadequate thermal insulation, and poor physical conditioning.\nHeat is lost much more quickly in water than in air. Thus, water temperatures that would be quite reasonable as outdoor air temperatures can lead to hypothermia in survivors, although this is not usually the direct clinical cause of death for those who are not rescued. A water temperature of can lead to death in as little as one hour, and water temperatures near freezing can cause death in as little as 15 minutes. During the sinking of the Titanic, most people who entered the water died in 15&ndash;30 minutes.\nThe actual cause of death in cold water is usually the bodily reactions to heat loss and to freezing water, rather than hypothermia (loss of core temperature) itself. For example, plunged into freezing seas, around 20% of victims die within two minutes from cold shock (uncontrolled rapid breathing, and gasping, causing water inhalation, massive increase in blood pressure and cardiac strain leading to cardiac arrest, and panic); another 50% die within 15–30 minutes from cold incapacitation: inability to use or control limbs and hands for swimming or gripping, as the body \"protectively\" shuts down the peripheral muscles of the limbs to protect its core. Exhaustion and unconsciousness cause drowning, claiming the rest within a similar time.", "Staying dry and wearing proper clothing help to prevent hypothermia. Synthetic and wool fabrics are superior to cotton as they provide better insulation when wet and dry. 
Some synthetic fabrics, such as polypropylene and polyester, are used in clothing designed to wick perspiration away from the body, such as liner socks and moisture-wicking undergarments. Clothing should be loose fitting, as tight clothing reduces the circulation of warm blood. In planning outdoor activity, prepare appropriately for possible cold weather. Those who drink alcohol before or during outdoor activity should ensure at least one sober person is present responsible for safety.\nCovering the head is effective, but no more effective than covering any other part of the body. While common folklore says that people lose most of their heat through their heads, heat loss from the head is no more significant than that from other uncovered parts of the body. However, heat loss from the head is significant in infants, whose head is larger relative to the rest of the body than in adults. Several studies have shown that for uncovered infants, lined hats significantly reduce heat loss and thermal stress. Children have a larger surface area per unit mass, and other things being equal should have one more layer of clothing than adults in similar conditions, and the time they spend in cold environments should be limited. However children are often more active than adults, and may generate more heat. In both adults and children, overexertion causes sweating and thus increases heat loss.\nBuilding a shelter can aid survival where there is danger of death from exposure. Shelters can be constructed out of a variety of materials. Metal can conduct heat away from the occupants and is sometimes best avoided. The shelter should not be too big so body warmth stays near the occupants. Good ventilation is essential especially if a fire will be lit in the shelter. Fires should be put out before the occupants sleep to prevent carbon monoxide poisoning. People caught in very cold, snowy conditions can build an igloo or snow cave to shelter.\nThe United States Coast Guard promotes using life vests to protect against hypothermia through the 50/50/50 rule: If someone is in water for 50 minutes, they have a 50 percent better chance of survival if they are wearing a life jacket. A heat escape lessening position can be used to increase survival in cold water.\nBabies should sleep at 16–20 °C (61–68 °F) and housebound people should be checked regularly to make sure the temperature of the home is at least 18 °C (64 °F).", "An apparent self-protective behaviour, known as \"terminal burrowing\", or \"hide-and-die syndrome\", occurs in the final stages of hypothermia. Those affected will enter small, enclosed spaces, such as underneath beds or behind wardrobes. It is often associated with paradoxical undressing. Researchers in Germany claim this is \"obviously an autonomous process of the brain stem, which is triggered in the final state of hypothermia and produces a primitive and burrowing-like behavior of protection, as seen in hibernating mammals\". This happens mostly in cases where temperature drops slowly.", "Hypothermia is defined as a body core temperature below in humans. Symptoms depend on the temperature. In mild hypothermia, there is shivering and mental confusion. In moderate hypothermia, shivering stops and confusion increases. In severe hypothermia, there may be hallucinations and paradoxical undressing, in which a person removes their clothing, as well as an increased risk of the heart stopping.\nHypothermia has two main types of causes. 
It classically occurs from exposure to cold weather and cold water immersion. It may also occur from any condition that decreases heat production or increases heat loss. Commonly, this includes alcohol intoxication but may also include low blood sugar, anorexia and advanced age. Body temperature is usually maintained near a constant level of through thermoregulation. Efforts to increase body temperature involve shivering, increased voluntary activity, and putting on warmer clothing. Hypothermia may be diagnosed based on either a persons symptoms in the presence of risk factors or by measuring a persons core temperature.\nThe treatment of mild hypothermia involves warm drinks, warm clothing, and voluntary physical activity. In those with moderate hypothermia, heating blankets and warmed intravenous fluids are recommended. People with moderate or severe hypothermia should be moved gently. In severe hypothermia, extracorporeal membrane oxygenation (ECMO) or cardiopulmonary bypass may be useful. In those without a pulse, cardiopulmonary resuscitation (CPR) is indicated along with the above measures. Rewarming is typically continued until a person's temperature is greater than . If there is no improvement at this point or the blood potassium level is greater than 12 millimoles per litre at any time, resuscitation may be discontinued.\nHypothermia is the cause of at least 1,500 deaths a year in the United States. It is more common in older people and males. One of the lowest documented body temperatures from which someone with accidental hypothermia has survived is in a 2-year-old boy from Poland named Adam. Survival after more than six hours of CPR has been described. In individuals for whom ECMO or bypass is used, survival is around 50%. Deaths due to hypothermia have played an important role in many wars. \nThe term is from Greek ῠ̔πο (ypo), meaning \"under\", and θέρμη (thérmē), meaning \"heat\". The opposite of hypothermia is hyperthermia, an increased body temperature due to failed thermoregulation.", "Alcohol consumption increases the risk of hypothermia in two ways: vasodilation and temperature controlling systems in the brain. Vasodilation increases blood flow to the skin, resulting in heat being lost to the environment. This produces the effect of feeling warm, when one is actually losing heat. Alcohol also affects the temperature-regulating system in the brain, decreasing the body's ability to shiver and use energy that would normally aid the body in generating heat. The overall effects of alcohol lead to a decrease in body temperature and a decreased ability to generate body heat in response to cold environments. Alcohol is a common risk factor for death due to hypothermia. Between 33% and 73% of hypothermia cases are complicated by alcohol.", "Accurate determination of core temperature often requires a special low temperature thermometer, as most clinical thermometers do not measure accurately below . A low temperature thermometer can be placed in the rectum, esophagus or bladder. Esophageal measurements are the most accurate and are recommended once a person is intubated. Other methods of measurement such as in the mouth, under the arm, or using an infrared ear thermometer are often not accurate.\nAs a hypothermic person's heart rate may be very slow, prolonged feeling for a pulse could be required before detecting. In 2005, the American Heart Association recommended at least 30–45 seconds to verify the absence of a pulse before initiating CPR. 
Others recommend a 60-second check.\nThe classical ECG finding of hypothermia is the Osborn J wave. Also, ventricular fibrillation frequently occurs below and asystole below . The Osborn J may look very similar to those of an acute ST elevation myocardial infarction. Thrombolysis as a reaction to the presence of Osborn J waves is not indicated, as it would only worsen the underlying coagulopathy caused by hypothermia.", "Symptoms of mild hypothermia may be vague, with sympathetic nervous system excitation (shivering, high blood pressure, fast heart rate, fast respiratory rate, and contraction of blood vessels). These are all physiological responses to preserve heat. Increased urine production due to cold, mental confusion, and liver dysfunction may also be present. Hyperglycemia may be present, as glucose consumption by cells and insulin secretion both decrease, and tissue sensitivity to insulin may be blunted. Sympathetic activation also releases glucose from the liver. In many cases, however, especially in people with alcoholic intoxication, hypoglycemia appears to be a more common cause. Hypoglycemia is also found in many people with hypothermia, as hypothermia may be a result of hypoglycemia.", "Various degrees of hypothermia may be deliberately induced in medicine for purposes of treatment of brain injury, or lowering metabolism so that total brain ischemia can be tolerated for a short time. Deep hypothermic circulatory arrest is a medical technique in which the brain is cooled as low as 10 °C, which allows the heart to be stopped and blood pressure to be lowered to zero, for the treatment of aneurysms and other circulatory problems that do not tolerate arterial pressure or blood flow. The time limit for this technique, as also for accidental arrest in ice water (which internal temperatures may drop to as low as 15 °C), is about one hour.", "Aggressiveness of treatment is matched to the degree of hypothermia. Treatment ranges from noninvasive, passive external warming to active external rewarming, to active core rewarming. In severe cases resuscitation begins with simultaneous removal from the cold environment and management of the airway, breathing, and circulation. Rapid rewarming is then commenced. Moving the person as little and as gently as possible is recommended as aggressive handling may increase risks of a dysrhythmia.\nHypoglycemia is a frequent complication and needs to be tested for and treated. Intravenous thiamine and glucose is often recommended, as many causes of hypothermia are complicated by Wernicke's encephalopathy.\nThe UK National Health Service advises against putting a person in a hot bath, massaging their arms and legs, using a heating pad, or giving them alcohol. These measures can cause a rapid fall in blood pressure and potential cardiac arrest.", "Rewarming can be done with a number of methods including passive external rewarming, active external rewarming, and active internal rewarming. Passive external rewarming involves the use of a person's own ability to generate heat by providing properly insulated dry clothing and moving to a warm environment. Passive external rewarming is recommended for those with mild hypothermia.\nActive external rewarming involves applying warming devices externally, such as a heating blanket. These may function by warmed forced air (Bair Hugger is a commonly used device), chemical reactions, or electricity. In wilderness environments, hypothermia may be helped by placing hot water bottles in both armpits and in the groin. 
Active external rewarming is recommended for moderate hypothermia. Active core rewarming involves the use of intravenous warmed fluids, irrigation of body cavities with warmed fluids (the chest or abdomen), use of warm humidified inhaled air, or use of extracorporeal rewarming such as via a heart-lung machine or extracorporeal membrane oxygenation (ECMO). Extracorporeal rewarming is the fastest method for those with severe hypothermia. When severe hypothermia has led to cardiac arrest, effective extracorporeal warming results in survival with normal mental function about 50% of the time. Chest irrigation is recommended if bypass or ECMO is not possible.\nRewarming shock (or rewarming collapse) is a sudden drop in blood pressure in combination with a low cardiac output, which may occur during active treatment of a severely hypothermic person. There was a theoretical concern that external rewarming rather than internal rewarming may increase the risk. These concerns were partly believed to be due to afterdrop, a situation detected during laboratory experiments where there is a continued decrease in core temperature after rewarming has been started. Recent studies have not supported these concerns, and problems are not found with active external rewarming.", "For people who are alert and able to swallow, drinking warm (not hot) sweetened liquids can help raise the temperature. General medical consensus advises against alcohol and caffeinated drinks. As most hypothermic people are moderately dehydrated due to cold-induced diuresis, intravenous fluids warmed to a temperature of are often recommended.", "Hypothermia is often defined as any body temperature below . With this method it is divided into degrees of severity based on the core temperature.\nAnother classification system, the Swiss staging system, divides hypothermia based on the presenting symptoms, which is preferred when it is not possible to determine an accurate core temperature.\nOther cold-related injuries that can be present either alone or in combination with hypothermia include:\n*Chilblains: a condition caused by repeated exposure of skin to temperatures just above freezing. The cold causes damage to small blood vessels in the skin. This damage is permanent and the redness and itching will return with additional exposure. The redness and itching typically occur on cheeks, ears, fingers, and toes.\n*Frostbite: the freezing and destruction of tissue, which happens below the freezing point of water\n*Frostnip: a superficial cooling of tissues without cellular destruction\n*Trench foot or immersion foot: a condition caused by repetitive exposure to water at non-freezing temperatures\nThe normal human body temperature is often stated as . Hyperthermia and fever are defined as a temperature of greater than .", "It is usually recommended not to declare a person dead until their body is warmed to a near-normal body temperature of greater than , since extreme hypothermia can suppress heart and brain function. This is summarized in the common saying \"You're not dead until you're warm and dead.\" Exceptions include cases with obvious fatal injuries or a chest frozen so that it cannot be compressed. If a person was buried in an avalanche for more than 35 minutes and is found with a mouth packed full of snow without a pulse, stopping early may also be reasonable. This is also the case if a person's blood potassium is greater than 12 mmol/L.\nThose who are stiff with pupils that do not move may survive if treated aggressively.
Survival with good function also occasionally occurs even after the need for hours of CPR. Children who have near-drowning accidents in water near can occasionally be revived, even over an hour after losing consciousness. The cold water lowers the metabolism, allowing the brain to withstand a much longer period of hypoxia. While survival is possible, mortality from severe or profound hypothermia remains high despite optimal treatment. Studies estimate mortality at between 38% and 75%.\nIn those who have hypothermia due to another underlying health problem, when death occurs it is frequently from that underlying health problem.", "Between 1995 and 2004 in the United States, an average of 1560 cold-related emergency department visits occurred per year, and in the years 1999 to 2004, an average of 647 people died per year due to hypothermia. Of deaths reported between 1999 and 2002 in the US, 49% of those affected were 65 years or older and two-thirds were male. Most deaths were not work-related (63%) and 23% of affected people were at home. Hypothermia was most common during the autumn and winter months of October through March. In the United Kingdom, an estimated 300 deaths per year are due to hypothermia, whereas the annual incidence of hypothermia-related deaths in Canada is 8000.", "Hypothermia has played a major role in the success or failure of many military campaigns, from Hannibal's loss of nearly half his men in the Second Punic War (218 B.C.) to the near destruction of Napoleon's armies in Russia in 1812. Men wandered around confused by hypothermia; some lost consciousness and died, others shivered, later developed torpor, and tended to sleep. Others too weak to walk fell on their knees; some stayed that way for some time resisting death. The pulse of some was weak and hard to detect; others groaned; yet others had eyes open and wild with quiet delirium. Deaths from hypothermia in Russian regions continued through the First and Second World Wars, especially in the Battle of Stalingrad.\nCivilian examples of deaths caused by hypothermia occurred during the sinkings of the RMS Titanic and RMS Lusitania, and more recently of the MS Estonia.\nAntarctic explorers developed hypothermia; Ernest Shackleton and his team measured body temperatures \"below 94.2°, which spells death at home\", though this probably referred to oral temperatures rather than core temperature and corresponded to mild hypothermia. One of Scott's team, Atkinson, became confused through hypothermia.\nNazi human experimentation during World War II amounting to medical torture included hypothermia experiments, which killed many victims. There were 360 to 400 experiments and 280 to 300 subjects, indicating some had more than one experiment performed on them. Various methods of rewarming were attempted: \"One assistant later testified that some victims were thrown into boiling water for rewarming\".", "In those without signs of life, cardiopulmonary resuscitation (CPR) should be continued during active rewarming. For ventricular fibrillation or ventricular tachycardia, a single defibrillation should be attempted. However, people with severe hypothermia may not respond to pacing or defibrillation. It is not known if further defibrillation should be withheld until the core temperature reaches . In Europe, epinephrine is not recommended until the person's core temperature reaches , while the American Heart Association recommends up to three doses of epinephrine before a core temperature of is reached.
Once a temperature of has been reached, normal ACLS protocols should be followed.", "Hypothermia can happen in most mammals in cold weather and can be fatal. Baby mammals such as kittens are unable to regulate their body temperatures and have a risk of hypothermia if they are not kept warm by their mothers. \nMany animals other than humans often induce hypothermia during hibernation or torpor.\nWater bears (Tardigrade), microscopic multicellular organisms, can survive freezing at low temperatures by replacing most of their internal water with the sugar trehalose, preventing the crystallization that otherwise damages cell membranes.", "Signs and symptoms vary depending on the degree of hypothermia, and may be divided by the three stages of severity. People with hypothermia may appear pale and feel cold to touch.\nInfants with hypothermia may feel cold when touched, with bright red skin and an unusual lack of energy. \nBehavioural changes such as impaired judgement, impaired sense of time and place, unusual aggression and numbness can be observed in individuals with hypothermia, they can also deny their condition and refuse any help. A hypothermic person can be euphoric and hallucinating. \nCold stress refers to a near-normal body temperature with low skin temperature, signs include shivering. Cold stress is caused by cold exposure and it can lead to hypothermia and frostbite if not treated.", "Systems with rate-independent hysteresis have a persistent memory of the past that remains after the transients have died out. The future development of such a system depends on the history of states visited, but does not fade as the events recede into the past. If an input variable cycles from to and back again, the output may be initially but a different value upon return. The values of depend on the path of values that passes through but not on the speed at which it traverses the path. Many authors restrict the term hysteresis to mean only rate-independent hysteresis. Hysteresis effects can be characterized using the Preisach model and the generalized Prandtl−Ishlinskii model.", "In aerodynamics, hysteresis can be observed when decreasing the angle of attack of a wing after stall, regarding the lift and drag coefficients. The angle of attack at which the flow on top of the wing reattaches is generally lower than the angle of attack at which the flow separates during the increase of the angle of attack.", "There are a great variety of applications of the hysteresis in ferromagnets. Many of these make use of their ability to retain a memory, for example magnetic tape, hard disks, and credit cards. In these applications, hard magnets (high coercivity) like iron are desirable, such that as much energy is absorbed as possible during the write operation and the resultant magnetized information is not easily erased.\nOn the other hand, magnetically soft (low coercivity) iron is used for the cores in electromagnets. The low coercivity minimizes the energy loss associated with hysteresis, as the magnetic field periodically reverses in the presence of an alternating current. The low energy loss during a hysteresis loop is the reason why soft iron is used for transformer cores and electric motors.", "Often, some amount of hysteresis is intentionally added to an electronic circuit to prevent unwanted rapid switching. 
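The effect of such intentionally added hysteresis can be sketched in a few lines of code. The fragment below is a simplified software analogue rather than a circuit design; the two threshold values and the sample data are invented for illustration only.

```python
def hysteretic_switch(samples, low, high, state=False):
    """Two-threshold (Schmitt-trigger-like) switch: it turns on only above
    `high`, turns off only below `low`, and keeps its previous state in the
    dead band in between -- that retained state is the hysteresis."""
    out = []
    for x in samples:
        if x >= high:
            state = True
        elif x <= low:
            state = False
        out.append(state)
    return out

# A noisy signal dithering around a single threshold would make the output
# chatter; with the dead band it switches exactly once in each direction.
noisy = [0.10, 0.40, 0.55, 0.45, 0.60, 0.80, 0.75, 0.50, 0.35, 0.20]
print(hysteretic_switch(noisy, low=0.3, high=0.7))
```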
This and similar techniques are used to compensate for contact bounce in switches, or noise in an electrical signal.\nA Schmitt trigger is a simple electronic circuit that exhibits this property.\nA latching relay uses a solenoid to actuate a ratcheting mechanism that keeps the relay closed even if power to the relay is terminated.\nSome positive feedback from the output to one input of a comparator can increase the natural hysteresis (a function of its gain) it exhibits.\nHysteresis is essential to the workings of some memristors (circuit components which \"remember\" changes in the current passing through them by changing their resistance).\nHysteresis can be used when connecting arrays of elements such as nanoelectronics, electrochrome cells and memory effect devices using passive matrix addressing. Shortcuts are made between adjacent components (see crosstalk) and the hysteresis helps to keep the components in a particular state while the other components change states. Thus, all rows can be addressed at the same time instead of individually.\nIn the field of audio electronics, a noise gate often implements hysteresis intentionally to prevent the gate from \"chattering\" when signals close to its threshold are applied.", "Moving parts within machines, such as the components of a gear train, normally have a small gap between them, to allow movement and lubrication. As a consequence of this gap, any reversal in direction of a drive part will not be passed on immediately to the driven part. This unwanted delay is normally kept as small as practicable, and is usually called backlash. The amount of backlash will increase with time as the surfaces of moving parts wear.", "A hysteresis is sometimes intentionally added to computer algorithms. The field of user interface design has borrowed the term hysteresis to refer to times when the state of the user interface intentionally lags behind the apparent user input. For example, a menu that was drawn in response to a mouse-over event may remain on-screen for a brief moment after the mouse has moved out of the trigger region and the menu region. This allows the user to move the mouse directly to an item on the menu, even if part of that direct mouse path is outside of both the trigger region and the menu region. For instance, right-clicking on the desktop in most Windows interfaces will create a menu that exhibits this behavior.", "Hysteresis can be observed in the stage-flow relationship of a river during rapidly changing conditions such as passing of a flood wave. It is most pronounced in low gradient streams with steep leading edge hydrographs. https://pubs.usgs.gov/ja/70193968/70193968.pdf", "The phenomenon of hysteresis in ferromagnetic materials is the result of two effects: rotation of magnetization and changes in size or number of magnetic domains. In general, the magnetization varies (in direction but not magnitude) across a magnet, but in sufficiently small magnets, it does not. In these single-domain magnets, the magnetization responds to a magnetic field by rotating. Single-domain magnets are used wherever a strong, stable magnetization is needed (for example, magnetic recording).\nLarger magnets are divided into regions called domains. Across each domain, the magnetization does not vary; but between domains are relatively thin domain walls in which the direction of magnetization rotates from the direction of one domain to another. If the magnetic field changes, the walls move, changing the relative sizes of the domains. 
Because the domains are not magnetized in the same direction, the magnetic moment per unit volume is smaller than it would be in a single-domain magnet; but domain walls involve rotation of only a small part of the magnetization, so it is much easier to change the magnetic moment. The magnetization can also change by addition or subtraction of domains (called nucleation and denucleation).", "The contact angle formed between a liquid and solid phase will exhibit a range of contact angles that are possible. There are two common methods for measuring this range of contact angles. The first method is referred to as the tilting base method. Once a drop is dispensed on the surface with the surface level, the surface is then tilted from 0° to 90°. As the drop is tilted, the downhill side will be in a state of imminent wetting while the uphill side will be in a state of imminent dewetting. As the tilt increases the downhill contact angle will increase and represents the advancing contact angle while the uphill side will decrease; this is the receding contact angle. The values for these angles just prior to the drop releasing will typically represent the advancing and receding contact angles. The difference between these two angles is the contact angle hysteresis.\nThe second method is often referred to as the add/remove volume method. When the maximum liquid volume is removed from the drop without the interfacial area decreasing the receding contact angle is thus measured. When volume is added to the maximum before the interfacial area increases, this is the advancing contact angle. As with the tilt method, the difference between the advancing and receding contact angles is the contact angle hysteresis. Most researchers prefer the tilt method; the add/remove method requires that a tip or needle stay embedded in the drop which can affect the accuracy of the values, especially the receding contact angle.", "The equilibrium shapes of bubbles expanding and contracting on capillaries (blunt needles) can exhibit hysteresis depending on the relative magnitude of the maximum capillary pressure to ambient pressure, and the relative magnitude of the bubble volume at the maximum capillary pressure to the dead volume in the system. The bubble shape hysteresis is a consequence of gas compressibility, which causes the bubbles to behave differently across expansion and contraction. During expansion, bubbles undergo large non equilibrium jumps in volume, while during contraction the bubbles are more stable and undergo a relatively smaller jump in volume resulting in an asymmetry across expansion and contraction. The bubble shape hysteresis is qualitatively similar to the adsorption hysteresis, and as in the contact angle hysteresis, the interfacial properties play an important role in bubble shape hysteresis.\nThe existence of the bubble shape hysteresis has important consequences in interfacial rheology experiments involving bubbles. As a result of the hysteresis, not all sizes of the bubbles can be formed on a capillary. Further the gas compressibility causing the hysteresis leads to unintended complications in the phase relation between the applied changes in interfacial area to the expected interfacial stresses. These difficulties can be avoided by designing experimental systems to avoid the bubble shape hysteresis.", "Hysteresis can also occur during physical adsorption processes. In this type of hysteresis, the quantity adsorbed is different when gas is being added than it is when being removed. 
The specific causes of adsorption hysteresis are still an active area of research, but it is linked to differences in the nucleation and evaporation mechanisms inside mesopores. These mechanisms are further complicated by effects such as cavitation and pore blocking.\nIn physical adsorption, hysteresis is evidence of mesoporosity-indeed, the definition of mesopores (2–50 nm) is associated with the appearance (50 nm) and disappearance (2 nm) of mesoporosity in nitrogen adsorption isotherms as a function of Kelvin radius. An adsorption isotherm showing hysteresis is said to be of Type IV (for a wetting adsorbate) or Type V (for a non-wetting adsorbate), and hysteresis loops themselves are classified according to how symmetric the loop is. Adsorption hysteresis loops also have the unusual property that it is possible to scan within a hysteresis loop by reversing the direction of adsorption while on a point on the loop. The resulting scans are called \"crossing\", \"converging\", or \"returning\", depending on the shape of the isotherm at this point.", "The relationship between matric water potential and water content is the basis of the water retention curve. Matric potential measurements (Ψ) are converted to volumetric water content (θ) measurements based on a site or soil specific calibration curve. Hysteresis is a source of water content measurement error. Matric potential hysteresis arises from differences in wetting behaviour causing dry medium to re-wet; that is, it depends on the saturation history of the porous medium. Hysteretic behaviour means that, for example, at a matric potential (Ψ) of , the volumetric water content (θ) of a fine sandy soil matrix could be anything between 8% and 25%.\nTensiometers are directly influenced by this type of hysteresis. Two other types of sensors used to measure soil water matric potential are also influenced by hysteresis effects within the sensor itself. Resistance blocks, both nylon and gypsum based, measure matric potential as a function of electrical resistance. The relation between the sensor's electrical resistance and sensor matric potential is hysteretic. Thermocouples measure matric potential as a function of heat dissipation. Hysteresis occurs because measured heat dissipation depends on sensor water content, and the sensor water content–matric potential relationship is hysteretic. , only desorption curves are usually measured during calibration of soil moisture sensors. Despite the fact that it can be a source of significant error, the sensor specific effect of hysteresis is generally ignored.", "When an external magnetic field is applied to a ferromagnetic material such as iron, the atomic domains align themselves with it. Even when the field is removed, part of the alignment will be retained: the material has become magnetized. Once magnetized, the magnet will stay magnetized indefinitely. To demagnetize it requires heat or a magnetic field in the opposite direction. This is the effect that provides the element of memory in a hard disk drive.\nThe relationship between field strength and magnetization is not linear in such materials. If a magnet is demagnetized () and the relationship between and is plotted for increasing levels of field strength, follows the initial magnetization curve. This curve increases rapidly at first and then approaches an asymptote called magnetic saturation. If the magnetic field is now reduced monotonically, follows a different curve. 
At zero field strength, the magnetization is offset from the origin by an amount called the remanence. If the relationship is plotted for all strengths of applied magnetic field the result is a hysteresis loop called the main loop. The width of the middle section is twice the coercivity of the material.\nA closer look at a magnetization curve generally reveals a series of small, random jumps in magnetization called Barkhausen jumps. This effect is due to crystallographic defects such as dislocations.\nMagnetic hysteresis loops are not exclusive to materials with ferromagnetic ordering. Other magnetic orderings, such as spin glass ordering, also exhibit this phenomenon.", "In the elastic hysteresis of rubber, the area in the centre of a hysteresis loop is the energy dissipated due to material internal friction.\nElastic hysteresis was one of the first types of hysteresis to be examined.\nThe effect can be demonstrated using a rubber band with weights attached to it. If the top of a rubber band is hung on a hook and small weights are attached to the bottom of the band one at a time, it will stretch and get longer. As more weights are loaded onto it, the band will continue to stretch because the force the weights are exerting on the band is increasing. When each weight is taken off, or unloaded, the band will contract as the force is reduced. As the weights are taken off, each weight that produced a specific length as it was loaded onto the band now contracts less, resulting in a slightly longer length as it is unloaded. This is because the band does not obey Hooke's law perfectly. The hysteresis loop of an idealized rubber band is shown in the figure.\nIn terms of force, the rubber band was harder to stretch when it was being loaded than when it was being unloaded. In terms of time, when the band is unloaded, the effect (the length) lagged behind the cause (the force of the weights) because the length has not yet reached the value it had for the same weight during the loading part of the cycle. In terms of energy, more energy was required during the loading than the unloading, the excess energy being dissipated as thermal energy.\nElastic hysteresis is more pronounced when the loading and unloading is done quickly than when it is done slowly. Some materials such as hard metals don't show elastic hysteresis under a moderate load, whereas other hard materials like granite and marble do. Materials such as rubber exhibit a high degree of elastic hysteresis.\nWhen the intrinsic hysteresis of rubber is being measured, the material can be considered to behave like a gas. When a rubber band is stretched it heats up, and if it is suddenly released, it cools down perceptibly. These effects correspond to a large hysteresis from the thermal exchange with the environment and a smaller hysteresis due to internal friction within the rubber. This proper, intrinsic hysteresis can be measured only if the rubber band is thermally isolated.\nSmall vehicle suspensions using rubber (or other elastomers) can achieve the dual function of springing and damping because rubber, unlike metal springs, has pronounced hysteresis and does not return all the absorbed compression energy on the rebound. Mountain bikes have made use of elastomer suspension, as did the original Mini car.\nThe primary cause of rolling resistance when a body (such as a ball, tire, or wheel) rolls on a surface is hysteresis. 
This is attributed to the viscoelastic characteristics of the material of the rolling body.", "The term \"hysteresis\" is derived from , an Ancient Greek word meaning \"deficiency\" or \"lagging behind\". It was coined in 1881 by Sir James Alfred Ewing to describe the behaviour of magnetic materials.\nSome early work on describing hysteresis in mechanical systems was performed by James Clerk Maxwell. Subsequently, hysteretic models have received significant attention in the works of Ferenc Preisach (Preisach model of hysteresis), Louis Néel and Douglas Hugh Everett in connection with magnetism and adsorption. A more formal mathematical theory of systems with hysteresis was developed in the 1970s by a group of Russian mathematicians led by Mark Krasnosel'skii.", "In control systems, hysteresis can be used to filter signals so that the output reacts less rapidly than it otherwise would by taking recent system history into account. For example, a thermostat controlling a heater may switch the heater on when the temperature drops below A, but not turn it off until the temperature rises above B. (For instance, if one wishes to maintain a temperature of 20 °C, then one might set the thermostat to turn the heater on when the temperature drops to below 18 °C and off when the temperature exceeds 22 °C.)\nSimilarly, a pressure switch can be designed to exhibit hysteresis, with pressure set-points substituted for temperature thresholds.", "The best-known empirical models of hysteresis are the Preisach and Jiles–Atherton models. These models allow accurate modeling of the hysteresis loop and are widely used in industry; a minimal sketch of the Preisach construction is given below. However, these models lose the connection with thermodynamics, and energy consistency is not ensured. A more recent model, with a more consistent thermodynamical foundation, is the vectorial incremental nonconservative consistent hysteresis (VINCH) model of Lavet et al. (2011).", "A hysteresis effect may be observed in voicing onset versus offset. The threshold value of the subglottal pressure required to start the vocal fold vibration is lower than the threshold value at which the vibration stops, when other parameters are kept constant. In utterances of vowel-voiceless consonant-vowel sequences during speech, the intraoral pressure is lower at the voice onset of the second vowel compared to the voice offset of the first vowel, the oral airflow is lower, the transglottal pressure is larger and the glottal width is smaller.", "Hysteresis is the dependence of the state of a system on its history. For example, a magnet may have more than one possible magnetic moment in a given magnetic field, depending on how the field changed in the past. Plots of a single component of the moment often form a loop or hysteresis curve, where there are different values of one variable depending on the direction of change of another variable. This history dependence is the basis of memory in a hard disk drive and the remanence that retains a record of the Earth's magnetic field magnitude in the past. Hysteresis occurs in ferromagnetic and ferroelectric materials, as well as in the deformation of rubber bands and shape-memory alloys and many other natural phenomena. In natural systems, it is often associated with irreversible thermodynamic change such as phase transitions and with internal friction; and dissipation is a common side effect.\nHysteresis can be found in physics, chemistry, engineering, biology, and economics.
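The Preisach construction mentioned above can be sketched, under simplifying assumptions, as a weighted superposition of elementary two-threshold relays; the thresholds, weights, and input sequence below are arbitrary illustrative choices, not a fitted model.

```python
def relay(x, up, down, state):
    """Elementary non-ideal relay: output +1 above `up`, -1 below `down`,
    and the previous state in between (down < up)."""
    if x >= up:
        return 1
    if x <= down:
        return -1
    return state

def preisach_output(inputs, relays, weights):
    """Weighted sum of relay states -- the basic Preisach superposition."""
    states = [-1] * len(relays)             # start with every relay switched down
    trace = []
    for x in inputs:
        states = [relay(x, up, down, s) for (up, down), s in zip(relays, states)]
        trace.append(sum(w * s for w, s in zip(weights, states)))
    return trace

relays = [(0.2, -0.2), (0.5, 0.1), (0.8, 0.4)]   # (switch-up, switch-down) thresholds
weights = [1.0, 1.0, 1.0]
path = [0.0, 0.3, 0.6, 0.9, 0.6, 0.3, 0.0]       # rise and then fall through the same values
print(preisach_output(path, relays, weights))    # outputs differ between the two branches
```

Running the sketch gives different outputs at the same input value on the rising and falling branches, which is the loop-shaped behaviour such empirical models are built to reproduce.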
It is incorporated in many artificial systems: for example, in thermostats and Schmitt triggers, it prevents unwanted frequent switching.\nHysteresis can be a dynamic lag between an input and an output that disappears if the input is varied more slowly; this is known as rate-dependent hysteresis. However, phenomena such as the magnetic hysteresis loops are mainly rate-independent, which makes a durable memory possible.\nSystems with hysteresis are nonlinear, and can be mathematically challenging to model. Some hysteretic models, such as the Preisach model (originally applied to ferromagnetism) and the Bouc–Wen model, attempt to capture general features of hysteresis; and there are also phenomenological models for particular phenomena such as the Jiles–Atherton model for ferromagnetism.\nIt is difficult to define hysteresis precisely. Isaak D. Mayergoyz wrote \"...the very meaning of hysteresis varies from one area to another, from paper to paper and from author to author. As a result, a stringent mathematical definition of hysteresis is needed in order to avoid confusion and ambiguity.\".", "Hysteresis in cell biology often follows bistable systems where the same input state can lead to two different, stable outputs. Where bistability can lead to digital, switch-like outputs from the continuous inputs of chemical concentrations and activities, hysteresis makes these systems more resistant to noise. These systems are often characterized by higher values of the input required to switch into a particular state as compared to the input required to stay in the state, allowing for a transition that is not continuously reversible, and thus less susceptible to noise. \nCells undergoing cell division exhibit hysteresis in that it takes a higher concentration of cyclins to switch them from G2 phase into mitosis than to stay in mitosis once begun.\nBiochemical systems can also show hysteresis-like output when slowly varying states that are not directly monitored are involved, as in the case of the cell cycle arrest in yeast exposed to mating pheromone. Here, the duration of cell cycle arrest depends not only on the final level of input Fus3, but also on the previously achieved Fus3 levels. This effect is achieved due to the slower time scales involved in the transcription of intermediate Far1, such that the total Far1 activity reaches its equilibrium value slowly, and for transient changes in Fus3 concentration, the response of the system depends on the Far1 concentration achieved with the transient value. Experiments in this type of hysteresis benefit from the ability to change the concentration of the inputs with time. The mechanisms are often elucidated by allowing independent control of the concentration of the key intermediate, for instance, by using an inducible promoter.\nDarlington in his classic works on genetics discussed hysteresis of the chromosomes, by which he meant \"failure of the external form of the chromosomes to respond immediately to the internal stresses due to changes in their molecular spiral\", as they lie in a somewhat rigid medium in the limited space of the cell nucleus.\nIn developmental biology, cell type diversity is regulated by long range-acting signaling molecules called morphogens that pattern uniform pools of cells in a concentration- and time-dependent manner. 
The morphogen sonic hedgehog (Shh), for example, acts on limb bud and neural progenitors to induce expression of a set of homeodomain-containing transcription factors to subdivide these tissues into distinct domains. It has been shown that these tissues have a memory of previous exposure to Shh.\nIn neural tissue, this hysteresis is regulated by a homeodomain (HD) feedback circuit that amplifies Shh signaling. In this circuit, expression of Gli transcription factors, the executors of the Shh pathway, is suppressed. Glis are processed to repressor forms (GliR) in the absence of Shh, but in the presence of Shh, a proportion of Glis are maintained as full-length proteins allowed to translocate to the nucleus, where they act as activators (GliA) of transcription. By reducing Gli expression then, the HD transcription factors reduce the total amount of Gli (GliT), so a higher proportion of GliT can be stabilized as GliA for the same concentration of Shh.", "There is some evidence that T cells exhibit hysteresis in that it takes a lower signal threshold to activate T cells that have been previously activated. Ras GTPase activation is required for downstream effector functions of activated T cells. Triggering of the T cell receptor induces high levels of Ras activation, which results in higher levels of GTP-bound (active) Ras at the cell surface. Since higher levels of active Ras have accumulated at the cell surface in T cells that have been previously stimulated by strong engagement of the T cell receptor, weaker subsequent T cell receptor signals received shortly afterwards will deliver the same level of activation due to the presence of higher levels of already activated Ras as compared to a naïve cell.", "The property by which some neurons do not return to their basal conditions from a stimulated condition immediately after removal of the stimulus is an example of hysteresis.", "Neuropsychology, in exploring the neural correlates of consciousness, interfaces with neuroscience, although the complexity of the central nervous system is a challenge to its study (that is, its operation resists easy reduction). Context-dependent memory and state-dependent memory show hysteretic aspects of neurocognition.", "Lung hysteresis is evident when observing the compliance of a lung on inspiration versus expiration. The difference in compliance (Δvolume/Δpressure) is due to the additional energy required to overcome surface tension forces during inspiration to recruit and inflate additional alveoli.\nThe transpulmonary pressure vs Volume curve of inhalation is different from the Pressure vs Volume curve of exhalation, the difference being described as hysteresis. Lung volume at any given pressure during inhalation is less than the lung volume at any given pressure during exhalation.", "The idea of hysteresis is used extensively in the area of labor economics, specifically with reference to the unemployment rate. According to theories based on hysteresis, severe economic downturns (recession) and/or persistent stagnation (slow demand growth, usually after a recession) cause unemployed individuals to lose their job skills (commonly developed on the job) or to find that their skills have become obsolete, or become demotivated, disillusioned or depressed or lose job-seeking skills. In addition, employers may use time spent in unemployment as a screening tool, i.e., to weed out less desired employees in hiring decisions. 
Then, in times of an economic upturn, recovery, or \"boom\", the affected workers will not share in the prosperity, remaining unemployed for long periods (e.g., over 52 weeks). This makes unemployment \"structural\", i.e., extremely difficult to reduce simply by increasing the aggregate demand for products and labor without causing increased inflation. That is, it is possible that a ratchet effect in unemployment rates exists, so a short-term rise in unemployment rates tends to persist. For example, traditional anti-inflationary policy (the use of recession to fight inflation) leads to a permanently higher \"natural\" rate of unemployment (more scientifically known as the NAIRU). This occurs first because inflationary expectations are \"sticky\" downward due to wage and price rigidities (and so adapt slowly over time rather than being approximately correct as in theories of rational expectations) and second because labor markets do not clear instantly in response to unemployment.\nThe existence of hysteresis has been put forward as a possible explanation for the persistently high unemployment of many economies in the 1990s. Hysteresis has been invoked by Olivier Blanchard among others to explain the differences in long-run unemployment rates between Europe and the United States. Labor market reform (usually meaning institutional change promoting more flexible wages, firing, and hiring) or strong demand-side economic growth may not therefore reduce this pool of long-term unemployed. Thus, specific targeted training programs are presented as a possible policy solution. However, the hysteresis hypothesis suggests such training programs are aided by persistently high demand for products (perhaps with incomes policies to avoid increased inflation), which makes the transition out of unemployment and into paid employment easier.", "One type of hysteresis is a lag between input and output. An example is a sinusoidal input X(t) = X₀ sin(ωt) that results in a sinusoidal output Y(t) = Y₀ sin(ωt − φ), but with a phase lag φ.\nSuch behavior can occur in linear systems, and a more general form of response is\nY(t) = χ X(t) + ∫₀^∞ Φ(τ) X(t − τ) dτ,\nwhere χ is the instantaneous response and Φ(τ) is the impulse response to an impulse that occurred τ time units in the past. In the frequency domain, input and output are related by a complex generalized susceptibility that can be computed from Φ; it is mathematically equivalent to a transfer function in linear filter theory and analogue signal processing.\nThis kind of hysteresis is often referred to as rate-dependent hysteresis. If the input is reduced to zero, the output continues to respond for a finite time. This constitutes a memory of the past, but a limited one because it disappears as the output decays to zero. The phase lag depends on the frequency of the input, and goes to zero as the frequency decreases.\nWhen rate-dependent hysteresis is due to dissipative effects like friction, it is associated with power loss.", "Hysteresis manifests itself in state transitions when melting temperature and freezing temperature do not agree. For example, agar melts at 85 °C and solidifies at around 40 °C. This is to say that once agar is melted at 85 °C, it retains a liquid state until cooled to 40 °C.
Therefore, at temperatures between 40 and 85 °C, agar can be either solid or liquid, depending on which state it was in before.", "Hysteretic models are mathematical models capable of simulating complex nonlinear behavior (hysteresis) characterizing mechanical systems and materials used in different fields of engineering, such as aerospace, civil, and mechanical engineering. Some examples of mechanical systems and materials having hysteretic behavior are:\n* materials, such as steel, reinforced concrete, wood;\n* structural elements, such as steel, reinforced concrete, or wood joints;\n* devices, such as seismic isolators and dampers.\nEach subject that involves hysteresis has models that are specific to the subject. In addition, there are hysteretic models that capture general features of many systems with hysteresis. An example is the Preisach model of hysteresis, which represents a hysteresis nonlinearity as a linear superposition of square loops called non-ideal relays. Many complex models of hysteresis arise from the simple parallel connection, or superposition, of elementary carriers of hysteresis termed hysterons.\nA simple and intuitive parametric description of various hysteresis loops may be found in the Lapshin model. Along with the smooth loops, substitution of trapezoidal, triangular or rectangular pulses instead of the harmonic functions allows piecewise-linear hysteresis loops frequently used in discrete automatics to be built in the model. There are implementations of the hysteresis loop model in Mathcad and in the R programming language.\nThe Bouc–Wen model of hysteresis is often used to describe non-linear hysteretic systems. It was introduced by Bouc and extended by Wen, who demonstrated its versatility by producing a variety of hysteretic patterns. This model is able to capture, in analytical form, a range of shapes of hysteretic cycles that match the behaviour of a wide class of hysteretic systems; therefore, given its versatility and mathematical tractability, the Bouc–Wen model has quickly gained popularity and has been extended and applied to a wide variety of engineering problems, including multi-degree-of-freedom (MDOF) systems, buildings, frames, bidirectional and torsional response of hysteretic systems, two- and three-dimensional continua, and soil liquefaction, among others. The Bouc–Wen model and its variants/extensions have been used in applications of structural control, in particular in the modeling of the behaviour of magnetorheological dampers, base isolation devices for buildings and other kinds of damping devices; it has also been used in the modelling and analysis of structures built of reinforced concrete, steel, masonry and timber. The most important extension of the Bouc–Wen model was carried out by Baber and Noori and later by Noori and co-workers. That extended model, named BWBN, can reproduce the complex shear-pinching or slip-lock phenomenon that the earlier model could not. The BWBN model has been used in a wide spectrum of applications, and implementations are available in software such as OpenSees.\nHysteretic models may have a generalized displacement as input variable and a generalized force as output variable, or vice versa. 
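As a concrete illustration of a displacement-in, force-out hysteretic model, below is a minimal numerical sketch of the Bouc–Wen model discussed above, using its standard single-element form with explicit Euler integration. The parameter values are arbitrary illustrative choices and are not taken from any of the works mentioned here.

```python
import numpy as np

def bouc_wen_force(x, dt, k=1.0, alpha=0.5, A=1.0, beta=0.5, gamma=0.5, n=1.0):
    """Restoring force of a Bouc-Wen hysteretic element for a displacement history x.

    F(t) = alpha*k*x(t) + (1 - alpha)*k*z(t), with the hysteretic variable z evolving as
    dz/dt = A*dx/dt - beta*|dx/dt|*|z|**(n-1)*z - gamma*(dx/dt)*|z|**n.
    Explicit Euler integration; parameter values are illustrative only.
    """
    z = 0.0
    force = np.empty_like(x)
    for i in range(len(x)):
        v = (x[i] - x[i - 1]) / dt if i > 0 else 0.0            # input velocity dx/dt
        dz = A * v - beta * abs(v) * abs(z) ** (n - 1) * z - gamma * v * abs(z) ** n
        z += dz * dt                                            # update hysteretic variable
        force[i] = alpha * k * x[i] + (1.0 - alpha) * k * z     # elastic part + hysteretic part
    return force

# Cyclic displacement input: plotting (x, F) traces a hysteresis loop rather than a single curve.
t = np.linspace(0.0, 20.0, 4001)
x = 2.0 * np.sin(0.5 * t)
F = bouc_wen_force(x, dt=t[1] - t[0])
print(f"peak restoring force ≈ {F.max():.2f}")
```

Note that the time step cancels out of the update of z (the increment dz·dt depends only on the displacement increment), so the computed force depends on the path of the input but not on how fast it is traversed.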
In particular, in rate-independent hysteretic models, the output variable does not depend on the rate of variation of the input.\nRate-independent hysteretic models can be classified into four different categories depending on the type of equation that needs to be solved to compute the output variable:\n* algebraic models\n* transcendental models\n* differential models\n* integral models", "Some notable hysteretic models are listed below with their associated fields.\n* Bean's critical state model (magnetism)\n* Bouc–Wen model (structural engineering)\n* Ising model (magnetism)\n* Jiles–Atherton model (magnetism)\n* Novak–Tyson model (cell-cycle control)\n* Preisach model (magnetism)\n* Stoner–Wohlfarth model (magnetism)", "When hysteresis occurs with extensive and intensive variables, the work done on the system is the area under the hysteresis graph (for a magnetic material, for example, the energy dissipated per unit volume over one full cycle is the enclosed loop area, ∮ H dB).", "Hysteresis is a commonly encountered phenomenon in ecology and epidemiology, where the observed equilibrium of a system cannot be predicted solely based on environmental variables, but also requires knowledge of the system's past history. Notable examples include the theory of spruce budworm outbreaks and behavioral effects on disease transmission.\nIt is commonly examined in relation to critical transitions between ecosystem or community types in which dominant competitors or entire landscapes can change in a largely irreversible fashion.", "Electrical hysteresis typically occurs in ferroelectric material, where domains of polarization contribute to the total polarization. Polarization is the electrical dipole moment (either C·m⁻² or C·m). The mechanism, an organization of the polarization into domains, is similar to that of magnetic hysteresis.", "Economic systems can exhibit hysteresis. For example, export performance is subject to strong hysteresis effects: because of the fixed transportation costs it may take a big push to start a country's exports, but once the transition is made, not much may be required to keep them going.\nWhen some negative shock reduces employment in a company or industry, fewer employed workers then remain. As the employed workers usually have the power to set wages, their reduced number incentivizes them to bargain for even higher wages when the economy again gets better, instead of letting the wage stay at the equilibrium level, where the supply and demand of workers would match. This causes hysteresis: unemployment becomes permanently higher after negative shocks.", "Contraceptive implants are primarily used to prevent unintended pregnancy and treat conditions such as non-pathological forms of menorrhagia. Examples include copper- and hormone-based intrauterine devices.", "Sensory and neurological implants are used for disorders affecting the major senses and the brain, as well as other neurological disorders. They are predominantly used in the treatment of conditions such as cataract, glaucoma, keratoconus, and other visual impairments; otosclerosis and other hearing loss issues, as well as middle ear diseases such as otitis media; and neurological diseases such as epilepsy, Parkinson's disease, and treatment-resistant depression. Examples include the intraocular lens, intrastromal corneal ring segment, cochlear implant, tympanostomy tube, and neurostimulator.", "Other types of organ dysfunction can occur in the systems of the body, including the gastrointestinal, respiratory, and urological systems. 
Implants are used in those and other locations to treat conditions such as gastroesophageal reflux disease, gastroparesis, respiratory failure, sleep apnea, urinary and fecal incontinence, and erectile dysfunction. Examples include the LINX, implantable gastric stimulator, diaphragmatic/phrenic nerve stimulator, neurostimulator, surgical mesh, artificial urinary sphincter and penile implant.", "An implant is a medical device manufactured to replace a missing biological structure, support a damaged biological structure, or enhance an existing biological structure. For example, an implant may be a rod, used to strengthen weak bones. Medical implants are human-made devices, in contrast to a transplant, which is a transplanted biomedical tissue. The surface of an implant that contacts the body might be made of a biomedical material such as titanium, silicone, or apatite, depending on what is the most functional. In 2018, for example, American Elements developed a nickel alloy powder for 3D printing robust, long-lasting, and biocompatible medical implants. In some cases implants contain electronics, e.g. artificial pacemakers and cochlear implants. Some implants are bioactive, such as subcutaneous drug delivery devices in the form of implantable pills or drug-eluting stents.", "Cardiovascular medical devices are implanted in cases where the heart, its valves, or the rest of the circulatory system is in disorder. They are used to treat conditions such as heart failure, cardiac arrhythmia, ventricular tachycardia, valvular heart disease, angina pectoris, and atherosclerosis. Examples include the artificial heart, artificial heart valve, implantable cardioverter-defibrillator, artificial cardiac pacemaker, and coronary stent.", "Electrical implants are being used to relieve pain from rheumatoid arthritis. The implant is embedded in the neck of patients with rheumatoid arthritis and sends electrical signals to electrodes in the vagus nerve. The device is being tested as an alternative to medicating people with rheumatoid arthritis for their lifetime.", "Cosmetic implants, often prosthetics, attempt to bring some portion of the body back to an acceptable aesthetic norm. They are used as a follow-up to mastectomy due to breast cancer, for correcting some forms of disfigurement, and for modifying aspects of the body (as in buttock augmentation and chin augmentation). Examples include the breast implant, nose prosthesis, ocular prosthesis, and injectable filler.", "Orthopaedic implants help alleviate issues with the bones and joints of the body. They are used to treat bone fractures, osteoarthritis, scoliosis, spinal stenosis, and chronic pain. Examples include a wide variety of pins, rods, screws, and plates used to anchor fractured bones while they heal.\nMetallic glasses based on magnesium with zinc and calcium additions are being tested as potential metallic biomaterials for biodegradable medical implants.\nPatients with orthopaedic implants sometimes need to undergo magnetic resonance imaging (MRI) for detailed musculoskeletal study. Therefore, concerns have been raised regarding the loosening and migration of the implant, heating of the implant metal, which could cause thermal damage to surrounding tissues, and distortion of the MRI scan that affects the imaging results. 
A 2005 study of orthopaedic implants showed that the majority of orthopaedic implants, with the exception of external fixator clamps, do not react with the magnetic field of a 1.0 tesla MRI scanner. However, at 7.0 tesla, several orthopaedic implants, such as heel and fibular implants, show significant interaction with the MRI magnetic field.", "Medical devices are classified by the US Food and Drug Administration (FDA) under three different classes depending on the risks the medical device may impose on the user. According to 21CFR 860.3, Class I devices are considered to pose the least amount of risk to the user and require the least amount of control. Class I devices include simple devices such as arm slings and hand-held surgical instruments. Class II devices are considered to need more regulation than Class I devices and are required to meet specific requirements before FDA approval. Class II devices include X-ray systems and physiological monitors. Class III devices require the most regulatory controls, since the device supports or sustains human life or may not be well tested. Class III devices include replacement heart valves and implanted cerebellar stimulators. Many implants typically fall under Class II or Class III.", "A variety of minimally bioreactive metals are routinely implanted. The most commonly implanted form of stainless steel is 316L. Cobalt-chromium and titanium-based implant alloys are also permanently implanted. All of these are made passive by a thin layer of oxide on their surface. A consideration, however, is that metal ions diffuse outward through the oxide and end up in the surrounding tissue. Bioreaction to metal implants includes the formation of a small envelope of fibrous tissue. The thickness of this layer is determined by the products being dissolved, and the extent to which the implant moves around within the enclosing tissue. Pure titanium may have only a minimal fibrous encapsulation. Stainless steel, on the other hand, may elicit encapsulation of as much as 2 mm.", "Under ideal conditions, implants should initiate the desired host response. Ideally, the implant should not cause any undesired reaction from neighboring or distant tissues. However, the interaction between the implant and the tissue surrounding the implant can lead to complications. The process of implantation of medical devices is subject to the same complications that other invasive medical procedures can have during or after surgery. Common complications include infection, inflammation, and pain. Other complications that can occur include risk of rejection from implant-induced coagulation and allergic foreign body response. Depending on the type of implant, the complications may vary.\nWhen the site of an implant becomes infected during or after surgery, the surrounding tissue becomes infected by microorganisms. Three main categories of infection can occur after an operation. Superficial immediate infections are caused by organisms that commonly grow near or on skin. The infection usually occurs at the surgical opening. Deep immediate infection, the second type, occurs immediately after surgery at the site of the implant. Skin-dwelling and airborne bacteria cause deep immediate infection. These bacteria enter the body by attaching to the implant's surface prior to implantation. 
Though not common, deep immediate infections can also occur from dormant bacteria from previous infections of the tissue at the implantation site that have been activated from being disturbed during the surgery. The last type, late infection, occurs months to years after the implantation of the implant. Late infections are caused by dormant blood-borne bacteria attached to the implant prior to implantation. The blood-borne bacteria colonize on the implant and eventually get released from it. Depending on the type of material used to make the implant, it may be infused with antibiotics to lower the risk of infections during surgery. However, only certain types of materials can be infused with antibiotics, the use of antibiotic-infused implants runs the risk of rejection by the patient since the patient may develop a sensitivity to the antibiotic, and the antibiotic may not work on the bacteria.\nInflammation, a common occurrence after any surgical procedure, is the body's response to tissue damage as a result of trauma, infection, intrusion of foreign materials, or local cell death, or as a part of an immune response. Inflammation starts with the rapid dilation of local capillaries to supply the local tissue with blood. The inflow of blood causes the tissue to become swollen and may cause cell death. The excess blood, or edema, can activate pain receptors at the tissue. The site of the inflammation becomes warm from local disturbances of fluid flow and the increased cellular activity to repair the tissue or remove debris from the site.\nImplant-induced coagulation is similar to the coagulation process done within the body to prevent blood loss from damaged blood vessels. However, the coagulation process is triggered from proteins that become attached to the implant surface and lose their shapes. When this occurs, the protein changes conformation and different activation sites become exposed, which may trigger an immune system response where the body attempts to attack the implant to remove the foreign material. The trigger of the immune system response can be accompanied by inflammation. The immune system response may lead to chronic inflammation where the implant is rejected and has to be removed from the body. The immune system may encapsulate the implant as an attempt to remove the foreign material from the site of the tissue by encapsulating the implant in fibrinogen and platelets. The encapsulation of the implant can lead to further complications, since the thick layers of fibrous encapsulation may prevent the implant from performing the desired functions. Bacteria may attack the fibrous encapsulation and become embedded into the fibers. Since the layers of fibers are thick, antibiotics may not be able to reach the bacteria and the bacteria may grow and infect the surrounding tissue. In order to remove the bacteria, the implant would have to be removed. Lastly, the immune system may accept the presence of the implant and repair and remodel the surrounding tissue. Similar responses occur when the body initiates an allergic foreign body response. In the case of an allergic foreign body response, the implant would have to be removed.", "1) The elastic modulus of the implant is decreased, allowing the implant to better match the elastic modulus of the bone. 
The elastic modulus of cortical bone (~18 GPa) is significantly lower than that of typical solid titanium or steel implants (110 GPa and 210 GPa, respectively), causing the implant to take up a disproportionate amount of the load applied to the appendage, leading to an effect called stress shielding.\n2) Porosity enables osteoblastic cells to grow into the pores of implants. Cells can span gaps smaller than 75 microns and grow into pores larger than 200 microns. Bone ingrowth is a favorable effect, as it anchors the cells into the implant, increasing the strength of the bone-implant interface. More load is transferred from the implant to the bone, reducing stress shielding effects. The density of the bone around the implant is likely to be higher due to the increased load applied to the bone. Bone ingrowth reduces the likelihood of the implant loosening over time because stress shielding and the corresponding bone resorption over extended timescales are avoided. Porosity of greater than 40% is favorable to facilitate sufficient anchoring of the osteoblastic cells.", "The many examples of implant failure include rupture of silicone breast implants, failure of hip replacement joints, and failure of artificial heart valves, such as the Bjork–Shiley valve, all of which have caused FDA intervention. The consequences of implant failure depend on the nature of the implant and its position in the body. Thus, heart valve failure is likely to threaten the life of the individual, while breast implant or hip joint failure is less likely to be life-threatening.\nDevices implanted directly in the grey matter of the brain produce the highest quality signals, but are prone to scar-tissue build-up, causing the signal to become weaker, or even non-existent, as the body reacts to a foreign object in the brain.\nIn 2018, the Implant Files, an investigation by the ICIJ, revealed that medical devices that are unsafe and have not been adequately tested were implanted in patients' bodies. In the United Kingdom, Prof Derek Alderson, president of the Royal College of Surgeons, concluded: \"All implantable devices should be registered and tracked to monitor efficacy and patient safety in the long-term.\"", "Porous implants are characterized by the presence of voids in the metallic or ceramic matrix. Voids can be regular, such as in additively manufactured (AM) lattices, or stochastic, such as in gas-infiltrated production processes. The reduction in the modulus of the implant follows a complex nonlinear relationship dependent on the volume fraction of base material and the morphology of the pores.\nExperimental models exist to predict the range of modulus that stochastic porous material may take. Above 10% volume fraction porosity, the models begin to deviate significantly. Different models, such as the rule of mixtures for low-porosity, two-material matrices, have been developed to describe mechanical properties.\nAM lattices have more predictable mechanical properties compared to stochastic porous materials and can be tuned such that they have favorable directional mechanical properties. Variables such as strut diameter, strut shape, and number of cross-beams can have a dramatic effect on the loading characteristics of the lattice. 
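As a rough illustration of the stiffness-matching argument above, the sketch below estimates the effective modulus of a porous titanium implant from its porosity. It assumes a Gibson–Ashby-type scaling, E ≈ E_solid · (1 − porosity)², a commonly used approximation for cellular solids introduced here only for illustration; as noted above, real stochastic and lattice structures deviate from any single formula.

```python
# Illustrative only: Gibson-Ashby-type scaling for porous/cellular metals,
# E_eff ≈ E_solid * (relative density)^2, with relative density = 1 - porosity.
E_TI_SOLID_GPA = 110.0   # solid titanium modulus quoted in the text
E_BONE_GPA = 18.0        # cortical bone modulus quoted in the text

def effective_modulus_gpa(porosity: float, e_solid_gpa: float = E_TI_SOLID_GPA) -> float:
    """Estimated effective elastic modulus of a porous metal (assumed scaling law)."""
    relative_density = 1.0 - porosity
    return e_solid_gpa * relative_density ** 2

for porosity in (0.0, 0.4, 0.6):
    e_eff = effective_modulus_gpa(porosity)
    print(f"porosity {porosity:.0%}: E_eff ≈ {e_eff:5.1f} GPa "
          f"({e_eff / E_BONE_GPA:.1f}x cortical bone)")
# Under this assumed scaling, roughly 60% porosity brings titanium close to the
# stiffness of cortical bone, reducing the stress-shielding mismatch.
```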
AM has the ability to fine-tune the lattice spacing to within a much smaller range than stochastically porous structures, enabling the future cell-development of specific cultures in tissue engineering.", "* ASTM F67 Unalloyed (Commercially Pure) Titanium\n* ASTM F136 Ti-6Al-4V-ELI\n* ASTM F1295 Ti-6Al-7Nb\n* ASTM F1472 Ti-6Al-4V", "Ovarian hyperstimulation is the stimulation to induce development of multiple follicles of the ovaries. It should start with response prediction by e.g. age, antral follicle count and level of anti-Müllerian hormone. The resulting prediction of e.g. poor or hyper-response to ovarian hyperstimulation determines the protocol and dosage for ovarian hyperstimulation.\nOvarian hyperstimulation also includes suppression of spontaneous ovulation, for which two main methods are available: Using a (usually longer) GnRH agonist protocol or a (usually shorter) GnRH antagonist protocol. In a standard long GnRH agonist protocol the day when hyperstimulation treatment is started and the expected day of later oocyte retrieval can be chosen to conform to personal choice, while in a GnRH antagonist protocol it must be adapted to the spontaneous onset of the previous menstruation. On the other hand, the GnRH antagonist protocol has a lower risk of ovarian hyperstimulation syndrome (OHSS), which is a life-threatening complication.\nFor the ovarian hyperstimulation in itself, injectable gonadotropins (usually FSH analogues) are generally used under close monitoring. Such monitoring frequently checks the estradiol level and, by means of gynecologic ultrasonography, follicular growth. Typically approximately 10 days of injections will be necessary.\nWhen stimulating ovulation after suppressing endogenous secretion, it is necessary to supply exogenous gonadotropines. The most common one is the human menopausal gonadotropin (hMG), which is obtained by donation of menopausal women. Other pharmacological preparations are FSH+LH or coripholitropine alpha.", "By sperm washing, the risk that a chronic disease in the individual providing the sperm would infect the birthing parent or offspring can be brought to negligible levels.\nIf the sperm donor has hepatitis B, The Practice Committee of the American Society for Reproductive Medicine advises that sperm washing is not necessary in IVF to prevent transmission, unless the birthing partner has not been effectively vaccinated. In birthing people with hepatitis B, the risk of vertical transmission during IVF is no different from the risk in spontaneous conception. However, there is not enough evidence to say that ICSI procedures are safe in birthing people with hepatitis B in regard to vertical transmission to the offspring.\nRegarding potential spread of HIV/AIDS, Japan's government prohibited the use of IVF procedures in which both partners are infected with HIV. Despite the fact that the ethics committees previously allowed the Ogikubo, Tokyo Hospital, located in Tokyo, to use IVF for couples with HIV, the Ministry of Health, Labour and Welfare of Japan decided to block the practice. 
Hideji Hanabusa, the vice president of the Ogikubo Hospital, states that together with his colleagues, he managed to develop a method through which scientists are able to remove HIV from sperm.\nIn the United States, people seeking to be an embryo recipient undergo infectious disease screening required by the Food and Drug Administration (FDA), and reproductive tests to determine the best placement location and cycle timing before the actual embryo transfer occurs. The amount of screening the embryo has already undergone is largely dependent on the genetic parents' own IVF clinic and process. The embryo recipient may elect to have their own embryologist conduct further testing.", "A risk of ovarian stimulation is the development of ovarian hyperstimulation syndrome, particularly if hCG is used for inducing final oocyte maturation. This results in swollen, painful ovaries. It occurs in 30% of patients. Mild cases can be treated with over the counter medications and cases can be resolved in the absence of pregnancy. In moderate cases, ovaries swell and fluid accumulated in the abdominal cavities and may have symptoms of heartburn, gas, nausea or loss of appetite. In severe cases, patients have sudden excess abdominal pain, nausea, vomiting and will result in hospitalisation.\nDuring egg retrieval, there exists a small chance of bleeding, infection, and damage to surrounding structures such as bowel and bladder (transvaginal ultrasound aspiration) as well as difficulty in breathing, chest infection, allergic reactions to medication, or nerve damage (laparoscopy).\nEctopic pregnancy may also occur if a fertilised egg develops outside the uterus, usually in the fallopian tubes and requires immediate destruction of the foetus.\nIVF does not seem to be associated with an elevated risk of cervical cancer, nor with ovarian cancer or endometrial cancer when neutralising the confounder of infertility itself. Nor does it seem to impart any increased risk for breast cancer.\nRegardless of pregnancy result, IVF treatment is usually stressful for patients. Neuroticism and the use of escapist coping strategies are associated with a higher degree of distress, while the presence of social support has a relieving effect. A negative pregnancy test after IVF is associated with an increased risk for depression, but not with any increased risk of developing anxiety disorders. Pregnancy test results do not seem to be a risk factor for depression or anxiety among men when the relationships is between two cisgender, heterosexual people. Hormonal agents such as gonadotropin-releasing hormone agonist (GnRH agonist) are associated with depression.\nStudies show that there is an increased risk of venous thrombosis or pulmonary embolism during the first trimester of IVF. When looking at long-term studies comparing patients who received or did not receive IVF, there seems to be no correlation with increased risk of cardiac events. There are more ongoing studies to solidify this.\nSpontaneous pregnancy has occurred after successful and unsuccessful IVF treatments. Within 2 years of delivering an infant conceived through IVF, subfertile patients had a conception rate of 18%.", "The Latin term in vitro, meaning \"in glass\", is used because early biological experiments involving cultivation of tissues outside the living organism were carried out in glass containers, such as beakers, test tubes, or Petri dishes. 
Today, the scientific term \"in vitro\" is used to refer to any biological procedure that is performed outside the organism in which it would normally have occurred, to distinguish it from an in vivo procedure (such as in vivo fertilisation), where the tissue remains inside the living organism in which it is normally found.\nA colloquial term for babies conceived as the result of IVF, \"test tube babies\", refers to the tube-shaped containers of glass or plastic resin, called test tubes, that are commonly used in chemistry and biology labs. However, IVF is usually performed in Petri dishes, which are both wider and shallower and often used to cultivate cultures.\nIVF is a form of assisted reproductive technology.", "The first successful birth of a child after IVF treatment, Louise Brown, occurred in 1978. Louise Brown was born as a result of natural cycle IVF where no stimulation was made. The procedure took place at Dr Kershaws Cottage Hospital (now Dr Kershaws Hospice) in Royton, Oldham, England. Robert G. Edwards, the physiologist who co-developed the treatment, was awarded the Nobel Prize in Physiology or Medicine in 2010. His co-workers, Patrick Steptoe and Jean Purdy, were not eligible for consideration as the Nobel Prize is not awarded posthumously.\nThe second successful birth of a test tube baby occurred in India just 67 days after Louise Brown was born. The girl, named Durga, was conceived in vitro using a method developed independently by Subhash Mukhopadhyay, a physician and researcher from Hazaribag. Mukhopadhyay had been performing experiments on his own with primitive instruments and a household refrigerator. However, state authorities prevented him from presenting his work at scientific conferences, and it was many years before Mukhopadhyay's contribution was acknowledged in works dealing with the subject.\nAdriana Iliescu held the record as the oldest woman to give birth using IVF and a donor egg, when she gave birth in 2004 at the age of 66, a record passed in 2006. After the IVF treatment some couples are able to get pregnant without any fertility treatments. In 2018 it was estimated that eight million children had been born worldwide using IVF and other assisted reproduction techniques.", "IVF may be used to overcome female infertility when it is due to problems with the fallopian tubes, making in vivo fertilisation difficult. It can also assist in male infertility, in those cases where there is a defect in sperm quality; in such situations intracytoplasmic sperm injection (ICSI) may be used, where a sperm cell is injected directly into the egg cell. This is used when sperm has difficulty penetrating the egg. ICSI is also used when sperm numbers are very low. When indicated, the use of ICSI has been found to increase the success rates of IVF.\nAccording to UK's National Institute for Health and Care Excellence (NICE) guidelines, IVF treatment is appropriate in cases of unexplained infertility for people who have not conceived after 2 years of regular unprotected sexual intercourse.\nIn people with anovulation, it may be an alternative after 7–12 attempted cycles of ovulation induction, since the latter is expensive and more easy to control.", "The sperm and the egg are incubated together at a ratio of about 75,000:1 in a culture media in order for the actual fertilisation to take place. A review in 2013 came to the result that a duration of this co-incubation of about 1 to 4 hours results in significantly higher pregnancy rates than 16 to 24 hours. 
In most cases, the egg will be fertilised during co-incubation and will show two pronuclei. In certain situations, such as low sperm count or motility, a single sperm may be injected directly into the egg using intracytoplasmic sperm injection (ICSI). The fertilised egg is passed to a special growth medium and left for about 48 hours until the embryo consists of six to eight cells.\nIn gamete intrafallopian transfer, eggs are removed from the woman and placed in one of the fallopian tubes, along with the mans sperm. This allows fertilisation to take place inside the womans body. Therefore, this variation is actually an in vivo fertilisation, not in vitro.", "The live birth rate is the percentage of all IVF cycles that lead to a live birth. This rate does not include miscarriage or stillbirth; multiple-order births, such as twins and triplets, are counted as one pregnancy. A 2019 summary compiled by the Society for Assisted Reproductive Technology (SART) which reports the average IVF success rates in the United States per age group using non-donor eggs compiled the following data:\nIn 2006, Canadian clinics reported a live birth rate of 27%. Birth rates in younger patients were slightly higher, with a success rate of 35.3% for those 21 and younger, the youngest group evaluated. Success rates for older patients were also lower and decrease with age, with 37-year-olds at 27.4% and no live births for those older than 48, the oldest group evaluated. Some clinics exceeded these rates, but it is impossible to determine if that is due to superior technique or patient selection, since it is possible to artificially increase success rates by refusing to accept the most difficult patients or by steering them into oocyte donation cycles (which are compiled separately). Further, pregnancy rates can be increased by the placement of several embryos at the risk of increasing the chance for multiples.\nBecause not each IVF cycle that is started will lead to oocyte retrieval or embryo transfer, reports of live birth rates need to specify the denominator, namely IVF cycles started, IVF retrievals, or embryo transfers. The SART summarised 2008–9 success rates for US clinics for fresh embryo cycles that did not involve donor eggs and gave live birth rates by the age of the prospective mother, with a peak at 41.3% per cycle started and 47.3% per embryo transfer for patients under 35 years of age.\nIVF attempts in multiple cycles result in increased cumulative live birth rates. Depending on the demographic group, one study reported 45% to 53% for three attempts, and 51% to 71% to 80% for six attempts.\nEffective from 15 February 2021 the majority of Australian IVF clinics publish their individual success rate online via YourIVFSuccess.com.au. This site also contains a predictor tool.", "Pregnancy rate may be defined in various ways. In the United States, SART and the Centers for Disease Control (and appearing in the table in the Success Rates section above) include statistics on positive pregnancy test and clinical pregnancy rate.\nThe 2019 summary compiled by the SART the following data for non-donor eggs (first embryo transfer) in the United States:\nIn 2006, Canadian clinics reported an average pregnancy rate of 35%. A French study estimated that 66% of patients starting IVF treatment finally succeed in having a child (40% during the IVF treatment at the centre and 26% after IVF discontinuation). 
Achievement of having a child after IVF discontinuation was mainly due to adoption (46%) or spontaneous pregnancy (42%).", "According to a study done by the Mayo Clinic, miscarriage rates for IVF are somewhere between 15 and 25% for those under the age of 35. In naturally conceived pregnancies, the rate of miscarriage is between 10 and 20% for those under the age of 35. Risk of miscarriage, regardless of the method of conception, does increase with age.", "The main potential factors that influence pregnancy (and live birth) rates in IVF have been suggested to be maternal age, duration of infertility or subfertility, bFSH and number of oocytes, all reflecting ovarian function. Optimal age is 23–39 years at time of treatment.\nBiomarkers that affect the pregnancy chances of IVF include:\n* Antral follicle count, with higher count giving higher success rates.\n* Anti-Müllerian hormone levels, with higher levels indicating higher chances of pregnancy, as well as of live birth after IVF, even after adjusting for age.\n* Level of DNA fragmentation as measured, e.g. by Comet assay, advanced maternal age and semen quality.\n* People with ovary-specific FMR1 genotypes including het-norm/low have significantly decreased pregnancy chances in IVF.\n*Progesterone elevation on the day of induction of final maturation is associated with lower pregnancy rates in IVF cycles in women undergoing ovarian stimulation using GnRH analogues and gonadotrophins. At this time, compared to a progesterone level below 0.8 ng/ml, a level between 0.8 and 1.1 ng/ml confers an odds ratio of pregnancy of approximately 0.8, and a level between 1.2 and 3.0 ng/ml confers an odds ratio of pregnancy of between 0.6 and 0.7. On the other hand, progesterone elevation does not seem to confer a decreased chance of pregnancy in frozen–thawed cycles and cycles with egg donation.\n* Characteristics of cells from the cumulus oophorus and the membrana granulosa, which are easily aspirated during oocyte retrieval. These cells are closely associated with the oocyte and share the same microenvironment, and the rate of expression of certain genes in such cells are associated with higher or lower pregnancy rate.\n* An endometrial thickness (EMT) of less than 7 mm decreases the pregnancy rate by an odds ratio of approximately 0.4 compared to an EMT of over 7 mm. However, such low thickness rarely occurs, and any routine use of this parameter is regarded as not justified.\nOther determinants of outcome of IVF include:\n* As maternal age increases, the likelihood of conception decreases and the chance of miscarriage increases.\n*With increasing paternal age, especially 50 years and older, the rate of blastocyst formation decreases.\n* Tobacco smoking reduces the chances of IVF producing a live birth by 34% and increases the risk of an IVF pregnancy miscarrying by 30%.\n* A body mass index (BMI) over 27 causes a 33% decrease in likelihood to have a live birth after the first cycle of IVF, compared to those with a BMI between 20 and 27. Also, pregnant people who are obese have higher rates of miscarriage, gestational diabetes, hypertension, thromboembolism and problems during delivery, as well as leading to an increased risk of fetal congenital abnormality. Ideal body mass index is 19–30. 
\n* Salpingectomy or laparoscopic tubal occlusion before IVF treatment increases chances for people with hydrosalpinges.\n* Success with previous pregnancy and/or live birth increases chances\n* Low alcohol/caffeine intake increases success rate\n* The number of embryos transferred in the treatment cycle\n* Embryo quality\n* Some studies also suggest that autoimmune disease may also play a role in decreasing IVF success rates by interfering with the proper implantation of the embryo after transfer.\nAspirin is sometimes prescribed to people for the purpose of increasing the chances of conception by IVF, but there was no evidence to show that it is safe and effective.\nA 2013 review and meta analysis of randomised controlled trials of acupuncture as an adjuvant therapy in IVF found no overall benefit, and concluded that an apparent benefit detected in a subset of published trials where the control group (those not using acupuncture) experienced a lower than average rate of pregnancy requires further study, due to the possibility of publication bias and other factors.\nA Cochrane review came to the result that endometrial injury performed in the month prior to ovarian induction appeared to increase both the live birth rate and clinical pregnancy rate in IVF compared with no endometrial injury. There was no evidence of a difference between the groups in miscarriage, multiple pregnancy or bleeding rates. Evidence suggested that endometrial injury on the day of oocyte retrieval was associated with a lower live birth or ongoing pregnancy rate.\nIntake of antioxidants (such as N-acetyl-cysteine, melatonin, vitamin A, vitamin C, vitamin E, folic acid, myo-inositol, zinc or selenium) has not been associated with a significantly increased live birth rate or clinical pregnancy rate in IVF according to Cochrane reviews. The review found that oral antioxidants given to the sperm donor with male factor or unexplained subfertility may improve live birth rates, but more evidence is needed.\nA Cochrane review in 2015 came to the result that there is no evidence identified regarding the effect of preconception lifestyle advice on the chance of a live birth outcome.", "Theoretically, IVF could be performed by collecting the contents from the fallopian tubes or uterus after natural ovulation, mixing it with sperm, and reinserting the fertilised ova into the uterus. However, without additional techniques, the chances of pregnancy would be extremely small. The additional techniques that are routinely used in IVF include ovarian hyperstimulation to generate multiple eggs, ultrasound-guided transvaginal oocyte retrieval directly from the ovaries, co-incubation of eggs and sperm, as well as culture and selection of resultant embryos before embryo transfer into a uterus.", "There are several methods termed natural cycle IVF:\n* IVF using no drugs for ovarian hyperstimulation, while drugs for ovulation suppression may still be used.\n* IVF using ovarian hyperstimulation, including gonadotropins, but with a GnRH antagonist protocol so that the cycle initiates from natural mechanisms.\n* Frozen embryo transfer; IVF using ovarian hyperstimulation, followed by embryo cryopreservation, followed by embryo transfer in a later, natural, cycle.\nIVF using no drugs for ovarian hyperstimulation was the method for the conception of Louise Brown. This method can be successfully used when people want to avoid taking ovarian stimulating drugs with its associated side-effects. 
HFEA has estimated the live birth rate to be approximately 1.3% per IVF cycle using no hyperstimulation drugs for women aged between 40 and 42.\nMild IVF is a method where a small dose of ovarian stimulating drugs is used for a short duration during a natural menstrual cycle, aimed at producing 2–7 eggs and creating healthy embryos. This method appears to be an advance in the field to reduce complications and side-effects for women, and it is aimed at the quality, not the quantity, of eggs and embryos. One study comparing a mild treatment (mild ovarian stimulation with GnRH antagonist co-treatment combined with single embryo transfer) to a standard treatment (stimulation with a GnRH agonist long protocol and transfer of two embryos) came to the result that the proportions of cumulative pregnancies that resulted in term live birth after 1 year were 43.4% with mild treatment and 44.7% with standard treatment. Mild IVF can be cheaper than conventional IVF, with a significantly reduced risk of multiple gestation and OHSS.", "In certain countries, including Austria, Italy, Estonia, Hungary, Spain and Israel, the male does not have the full ability to withdraw consent to storage or use of embryos once they are fertilised. In the United States, the matter has been left to the courts on a more or less ad hoc basis. If embryos are implanted and a child is born contrary to the wishes of the male, he still has the legal and financial responsibilities of a father.", "The eggs are retrieved from the patient using a transvaginal technique called transvaginal oocyte retrieval, involving an ultrasound-guided needle piercing the vaginal wall to reach the ovaries. Through this needle, follicles can be aspirated, and the follicular fluid is passed to an embryologist to identify ova. It is common to remove between ten and thirty eggs. The retrieval process, which lasts approximately 20 to 40 minutes, is performed under conscious sedation or general anesthesia to ensure patient comfort. Following optimal follicular development, the eggs are meticulously retrieved using transvaginal ultrasound guidance with the aid of a specialised ultrasound probe and a fine needle aspiration technique. The follicular fluid, containing the retrieved eggs, is expeditiously transferred to the embryology laboratory for subsequent processing.", "In vitro fertilisation (IVF) is a process of fertilisation where an egg is combined with sperm in vitro (\"in glass\"). The process involves monitoring and stimulating a woman's ovulatory process, removing an ovum or ova (egg or eggs) from their ovaries and letting a man's sperm fertilise them in a culture medium in a laboratory. After the fertilised egg (zygote) undergoes embryo culture for 2–6 days, it is transferred by catheter into the uterus, with the intention of establishing a successful pregnancy.\nIVF is a type of assisted reproductive technology used for infertility treatment, gestational surrogacy, and, in combination with pre-implantation genetic testing, avoiding transmission of genetic conditions. A fertilised egg from a donor may implant into a surrogate's uterus, and the resulting child is genetically unrelated to the surrogate. Some countries have banned or otherwise regulate the availability of IVF treatment, giving rise to fertility tourism. Restrictions on the availability of IVF include costs and age, in order for a person to carry a healthy pregnancy to term. 
Children born through IVF are colloquially called test tube babies.\nIn July 1978, Louise Brown was the first child successfully born after her mother received IVF treatment. Brown was born as a result of natural-cycle IVF, in which no stimulation was used. The procedure took place at Dr Kershaws Cottage Hospital (now Dr Kershaws Hospice) in Royton, Oldham, England. Robert Edwards was awarded the Nobel Prize in Physiology or Medicine in 2010. The physiologist co-developed the treatment together with Patrick Steptoe and embryologist Jean Purdy, but the latter two were not eligible for consideration as they had died and the Nobel Prize is not awarded posthumously.\nAssisted by egg donation and IVF, many women who are past their reproductive years, have infertile partners, have idiopathic female-fertility issues, or have reached menopause can still become pregnant. After the IVF treatment, some couples get pregnant without any fertility treatments. In 2023, it was estimated that twelve million children had been born worldwide using IVF and other assisted reproduction techniques. A 2019 study that explores 10 adjuncts with IVF (screening hysteroscopy, DHEA, testosterone, GH, aspirin, heparin, antioxidants, seminal plasma and PRP) suggests that these adjuncts should be avoided until there is more evidence that they are safe and effective.", "In the laboratory, for ICSI treatments, the identified eggs are stripped of surrounding cells (also known as cumulus cells) and prepared for fertilisation. An oocyte selection may be performed prior to fertilisation to select eggs that can be fertilised, as they are required to be in metaphase II. If oocytes are in the metaphase I stage, they can be kept in culture so as to undergo a later sperm injection. In the meantime, semen is prepared for fertilisation by removing inactive cells and seminal fluid in a process called sperm washing. If semen is being provided by a sperm donor, it will usually have been prepared for treatment before being frozen and quarantined, and it will be thawed ready for use.", "Certain kinds of IVF have been shown to lead to distortions in the sex ratio at birth. Intracytoplasmic sperm injection (ICSI), which was first applied in 1991, leads to slightly more female births (51.3% female). Blastocyst transfer, which was first applied in 1984, leads to significantly more male births (56.1% male). Standard IVF done on the second or third day leads to a normal sex ratio.\nEpigenetic modifications caused by extended culture leading to the death of more female embryos have been theorised as the reason why blastocyst transfer leads to a higher male sex ratio; however, adding retinoic acid to the culture can bring this ratio back to normal. A second theory is that the male-biased sex ratio may be due to a higher rate of selection of male embryos. Male embryos develop faster in vitro, and thus may appear more viable for transfer.", "The major complication of IVF is the risk of multiple births. This is directly related to the practice of transferring multiple embryos at embryo transfer. Multiple births are related to increased risk of pregnancy loss, obstetrical complications, prematurity, and neonatal morbidity with the potential for long-term damage. Strict limits on the number of embryos that may be transferred have been enacted in some countries (e.g. Britain, Belgium) to reduce the risk of high-order multiples (triplets or more), but are not universally followed or accepted. 
Spontaneous splitting of embryos in the uterus after transfer can occur, but this is rare and would lead to identical twins. A double blind, randomised study followed IVF pregnancies that resulted in 73 infants, and reported that 8.7% of singleton infants and 54.2% of twins had a birth weight of less than . There is some evidence that making a double embryo transfer during one cycle achieves a higher live birth rate than a single embryo transfer; but making two single embryo transfers in two cycles has the same live birth rate and would avoid multiple pregnancies.", "* Intracytoplasmic sperm injection (ICSI) is where a single sperm is injected directly into an egg. Its main usage as an expansion of IVF is to overcome male infertility problems, although it may also be used where eggs cannot easily be penetrated by sperm, and occasionally in conjunction with sperm donation. It can be used in teratozoospermia, since once the egg is fertilised abnormal sperm morphology does not appear to influence blastocyst development or blastocyst morphology.\n* Additional methods of embryo profiling. For example, methods are emerging in making comprehensive analyses of up to entire genomes, transcriptomes, proteomes and metabolomes which may be used to score embryos by comparing the patterns with ones that have previously been found among embryos in successful versus unsuccessful pregnancies.\n* Assisted zona hatching (AZH) can be performed shortly before the embryo is transferred to the uterus. A small opening is made in the outer layer surrounding the egg in order to help the embryo hatch out and aid in the implantation process of the growing embryo.\n* In egg donation and embryo donation, the resultant embryo after fertilisation is inserted in another person than the one providing the eggs. These are resources for those with no eggs due to surgery, chemotherapy, or genetic causes; or with poor egg quality, previously unsuccessful IVF cycles or advanced maternal age. In the egg donor process, eggs are retrieved from a donors ovaries, fertilised in the laboratory with sperm, and the resulting healthy embryos are returned to the recipients uterus.\n* In oocyte selection, the oocytes with optimal chances of live birth can be chosen. It can also be used as a means of preimplantation genetic screening.\n* Embryo splitting can be used for twinning to increase the number of available embryos.\n*Cytoplasmic transfer is where the cytoplasm from a donor egg is injected into an egg with compromised mitochondria. The resulting egg is then fertilised with sperm and introduced into a uterus, usually that of the person who provided the recipient egg and nuclear DNA. Cytoplasmic transfer was created to aid those who experience infertility due to deficient or damaged mitochondria, contained within an egg's cytoplasm.", "Cryopreservation can be performed as oocyte cryopreservation before fertilisation, or as embryo cryopreservation after fertilisation.\nThe Rand Consulting Group has estimated there to be 400,000 frozen embryos in the United States in 2006. The advantage is that patients who fail to conceive may become pregnant using such embryos without having to go through a full IVF cycle. Or, if pregnancy occurred, they could return later for another pregnancy. Spare oocytes or embryos resulting from fertility treatments may be used for oocyte donation or embryo donation to another aspiring parent, and embryos may be created, frozen and stored specifically for transfer and donation by using donor eggs and sperm. 
Also, oocyte cryopreservation can be used for those who are likely to lose their ovarian reserve due to undergoing chemotherapy.\nBy 2017, many centres had adopted embryo cryopreservation as their primary IVF therapy, performing few or no fresh embryo transfers. The two main reasons for this have been better endometrial receptivity when embryos are transferred in cycles without exposure to ovarian stimulation and also the ability to store the embryos while awaiting the results of preimplantation genetic testing.\nThe outcome from using cryopreserved embryos has uniformly been positive with no increase in birth defects or developmental abnormalities.", "There are various expansions or additional techniques that can be applied in IVF, which are usually not necessary for the IVF procedure itself, but would be virtually impossible or technically difficult to perform without concomitantly performing methods of IVF.", "Luteal support is the administration of medication, generally progesterone, progestins, hCG, or GnRH agonists, often accompanied by estradiol, to increase the success rate of implantation and early embryogenesis, thereby complementing and/or supporting the function of the corpus luteum. A Cochrane review found that hCG or progesterone given during the luteal phase may be associated with higher rates of live birth or ongoing pregnancy, but that the evidence is not conclusive. Co-treatment with GnRH agonists appears to improve outcomes, by a live birth rate RD of +16% (95% confidence interval +10 to +22%). On the other hand, growth hormone or aspirin as adjunctive medication in IVF has no evidence of overall benefit.", "Preimplantation genetic screening (PGS) or preimplantation genetic diagnosis (PGD) has been suggested as a way of selecting, in IVF, an embryo that appears to have the greatest chances for successful pregnancy. However, a systematic review and meta-analysis of existing randomised controlled trials came to the result that there is no evidence of a beneficial effect of PGS with cleavage-stage biopsy as measured by live birth rate. On the contrary, for those of advanced maternal age, PGS with cleavage-stage biopsy significantly lowers the live birth rate. Technical drawbacks, such as the invasiveness of the biopsy, and non-representative samples because of mosaicism are the major underlying factors for the inefficacy of PGS.\nStill, as an expansion of IVF, patients who can benefit from PGS/PGD include:\n* Those who have a family history of inherited disease\n* Those who want prenatal sex discernment. This can be used to diagnose monogenic disorders with sex linkage. It can potentially be used for sex selection, wherein a fetus is aborted if having an undesired sex.\n* Those who already have a child with an incurable disease and need compatible cells from a second healthy child to cure the first, resulting in a \"saviour sibling\" that matches the sick child in HLA type.\nPGS screens for numerical chromosomal abnormalities while PGD diagnoses the specific molecular defect of the inherited disease. In both PGS and PGD, individual cells from a pre-embryo, or preferably trophectoderm cells biopsied from a blastocyst, are analysed during the IVF process. Before the transfer of a pre-embryo back to a person's uterus, one or two cells are removed from the pre-embryos (8-cell stage), or preferably from a blastocyst. These cells are then evaluated for normality. 
Typically within one to two days, following completion of the evaluation, only the normal pre-embryos are transferred back to the uterus. Alternatively, a blastocyst can be cryopreserved via vitrification and transferred at a later date to the uterus. In addition, PGS can significantly reduce the risk of multiple pregnancies because fewer embryos, ideally just one, are needed for implantation.", "The number to be transferred depends on the number available, the age of the patient and other health and diagnostic factors. In countries such as Canada, the UK, Australia and New Zealand, a maximum of two embryos are transferred except in unusual circumstances. In the UK and according to HFEA regulations, a woman over 40 may have up to three embryos transferred, whereas in the US, there is no legal limit on the number of embryos which may be transferred, although medical associations have provided practice guidelines. Most clinics and country regulatory bodies seek to minimise the risk of multiple pregnancy, as it is not uncommon for multiple embryos to implant if multiple embryos are transferred. Embryos are transferred to the patient's uterus through a thin, plastic catheter, which goes through their vagina and cervix. Several embryos may be passed into the uterus to improve chances of implantation and pregnancy.", "Laboratories have developed grading methods to judge oocyte and embryo quality. In order to optimise pregnancy rates, there is significant evidence that a morphological scoring system is the best strategy for the selection of embryos. Since 2009, when the first time-lapse microscopy system for IVF was approved for clinical use, morphokinetic scoring systems have been shown to improve pregnancy rates further. However, when all the different types of time-lapse embryo imaging devices, with or without morphokinetic scoring systems, are compared against conventional embryo assessment for IVF, there is insufficient evidence of a difference in live birth, pregnancy, stillbirth or miscarriage to choose between them. Active efforts to develop a more accurate embryo selection analysis based on artificial intelligence and deep learning are underway. Embryo Ranking Intelligent Classification Assistant (ERICA) is a clear example. This deep learning software substitutes manual classifications with a ranking system based on an individual embryo's predicted genetic status in a non-invasive fashion. Studies in this area are still pending and current feasibility studies support its potential.", "The main durations of embryo culture are until the cleavage stage (day two to four after co-incubation) or the blastocyst stage (day five or six after co-incubation). Embryo culture until the blastocyst stage confers a significant increase in live birth rate per embryo transfer, but also confers a decreased number of embryos available for transfer and embryo cryopreservation, so the cumulative clinical pregnancy rates are increased with cleavage-stage transfer. Transfer on day two instead of day three after fertilisation has no differences in live birth rate. There are significantly higher odds of preterm birth (odds ratio 1.3) and congenital anomalies (odds ratio 1.3) among births resulting from embryos cultured until the blastocyst stage compared with the cleavage stage.", "When the ovarian follicles have reached a certain degree of development, induction of final oocyte maturation is performed, generally by an injection of human chorionic gonadotropin (hCG). 
Commonly, this is known as the \"trigger shot.\" hCG acts as an analogue of luteinising hormone, and ovulation would occur between 38 and 40 hours after a single HCG injection, but the egg retrieval is performed at a time usually between 34 and 36 hours after hCG injection, that is, just prior to when the follicles would rupture. This avails for scheduling the egg retrieval procedure at a time where the eggs are fully mature. HCG injection confers a risk of ovarian hyperstimulation syndrome. Using a GnRH agonist instead of hCG eliminates most of the risk of ovarian hyperstimulation syndrome, but with a reduced delivery rate if the embryos are transferred fresh. For this reason, many centers will freeze all oocytes or embryos following agonist trigger.", "A review in 2013 came to the result that infants resulting from IVF (with or without ICSI) have a relative risk of birth defects of 1.32 (95% confidence interval 1.24–1.42) compared to naturally conceived infants. In 2008, an analysis of the data of the National Birth Defects Study in the US found that certain birth defects were significantly more common in infants conceived through IVF, notably septal heart defects, cleft lip with or without cleft palate, esophageal atresia, and anorectal atresia; the mechanism of causality is unclear. However, in a population-wide cohort study of 308,974 births (with 6,163 using assisted reproductive technology and following children from birth to age five) researchers found: \"The increased risk of birth defects associated with IVF was no longer significant after adjustment for parental factors.\" Parental factors included known independent risks for birth defects such as maternal age, smoking status, etc. Multivariate correction did not remove the significance of the association of birth defects and ICSI (corrected odds ratio 1.57), although the authors speculate that underlying male infertility factors (which would be associated with the use of ICSI) may contribute to this observation and were not able to correct for these confounders. The authors also found that a history of infertility elevated risk itself in the absence of any treatment (odds ratio 1.29), consistent with a Danish national registry study and \"implicates patient factors in this increased risk.\" The authors of the Danish national registry study speculate: \"our results suggest that the reported increased prevalence of congenital malformations seen in singletons born after assisted reproductive technology is partly due to the underlying infertility or its determinants.\"", "IVF success rates are the percentage of all IVF procedures that result in favourable outcomes. Depending on the type of calculation used, this outcome may represent the number of confirmed pregnancies, called the pregnancy rate, or the number of live births, called the live birth rate. Due to advances in reproductive technology, live birth rates by cycle five of IVF have increased from 76% in 2005 to 80% in 2010, despite a reduction in the number of embryos being transferred (which decreased the multiple birth rate from 25% to 8%).\nThe success rate depends on variable factors such as age of the birthing person, cause of infertility, embryo status, reproductive history, and lifestyle factors. Younger candidates of IVF are more likely to get pregnant. People older than 41 are more likely to get pregnant with a donor egg. 
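As a brief aside on interpreting cumulative figures such as the "by cycle five" live birth rates quoted above, the following minimal Python sketch shows how a cumulative rate over several cycles relates to a per-cycle rate under the simplifying (and unrealistic) assumption that every cycle has the same, independent chance of success. The function names and the constant per-cycle model are illustrative assumptions, not part of any cited study.

```python
def cumulative_live_birth(p_per_cycle: float, n_cycles: int) -> float:
    """Probability of at least one live birth within n cycles,
    assuming a constant, independent per-cycle success rate (a simplification)."""
    return 1 - (1 - p_per_cycle) ** n_cycles


def implied_per_cycle_rate(cumulative: float, n_cycles: int) -> float:
    """Back out the constant per-cycle rate implied by a cumulative rate."""
    return 1 - (1 - cumulative) ** (1 / n_cycles)


# Under this simplified model, an ~80% cumulative live birth rate by cycle five
# corresponds to roughly a 27-28% constant per-cycle rate.
p = implied_per_cycle_rate(0.80, 5)
print(f"implied per-cycle rate: {p:.1%}")
print(f"cumulative after 5 cycles: {cumulative_live_birth(p, 5):.1%}")
```

In practice, per-cycle success varies with age, diagnosis and cycle number, so this is only a back-of-the-envelope illustration of how the two kinds of rate relate.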
People who have been previously pregnant are in many cases more successful with IVF treatments than those who have never been pregnant.", "Some alternatives to IVF are:\n*Artificial insemination, including intracervical insemination and intrauterine insemination of semen. It requires that a woman ovulates, but is a relatively simple procedure, and can be used in the home for self-insemination without medical practitioner assistance. The beneficiaries of artificial insemination are people who desire to give birth to their own child who may be single, people who are in a lesbian relationship or females who are in a heterosexual relationship but with a male partner who is infertile or who has a physical impairment which prevents full intercourse from taking place.\n*Ovulation induction (in the sense of medical treatment aiming for the development of one or two ovulatory follicles) is an alternative for people with anovulation or oligoovulation, since it is less expensive and more easy to control. It generally involves antiestrogens such as clomifene citrate or letrozole, and is followed by natural or artificial insemination.\n*Surrogacy, the process in which a surrogate agrees to bear a child for another person or persons, who will become the child's parent(s) after birth. People may seek a surrogacy arrangement when pregnancy is medically impossible, when pregnancy risks are too dangerous for the intended gestational carrier, or when a single man or a male couple wish to have a child.\n*Adoption whereby a person assumes the parenting of another, usually a child, from that person's biological or legal parent or parents.", "Costs of IVF can be broken down into direct and indirect costs. Direct costs include the medical treatments themselves, including doctor consultations, medications, ultrasound scanning, laboratory tests, the actual IVF procedure, and any associated hospital charges and administrative costs. Indirect costs includes the cost of addressing any complications with treatments, compensation for the gestational surrogate, patients' travel costs, and lost hours of productivity. These costs can be exaggerated by the increasing age of the woman undergoing IVF treatment (particularly those over the age of 40), and the increase costs associated with multiple births. For instance, a pregnancy with twins can cost up to three times that of a singleton pregnancy. While some insurances cover one cycle of IVF, it takes multiple cycles of IVF to have a successful outcome. A study completed in Northern California reveals that the IVF procedure alone that results in a successful outcome costs $61,377, and this can be more costly with the use of a donor egg.\nThe cost of IVF rather reflects the costliness of the underlying healthcare system than the regulatory or funding environment, and ranges, on average for a standard IVF cycle and in 2006 United States dollars, between $12,500 in the United States to $4,000 in Japan. In Ireland, IVF costs around €4,000, with fertility drugs, if required, costing up to €3,000. The cost per live birth is highest in the United States ($41,000) and United Kingdom ($40,000) and lowest in Scandinavia and Japan (both around $24,500).\nThe high cost of IVF is also a barrier to access for disabled individuals, who typically have lower incomes, face higher health care costs, and seek health care services more often than non-disabled individuals.\nNavigating insurance coverage for transgender expectant parents presents a unique challenge. 
Insurance plans are designed to cater towards a specific population, meaning that some plans can provide adequate coverage for gender-affirming care but fail to provide fertility services for transgender patients. Additionally, insurance coverage is constructed around a person's legally recognised sex and not their anatomy; thus, transgender people may not get coverage for the services they need, including transgender men for fertility services.", "The research on transgender reproduction and family planning is limited. A 2020 comparative study of children born to a transgender father and cisgender mother via donor sperm insemination in France showed no significant differences to IVF and naturally conceived children of cisgender parents.\nTransgender men can experience challenges in pregnancy and birthing from the cis-normative structure within the medical system, as well as psychological challenges such as renewed gender dysphoria. The effect of continued testosterone therapy during pregnancy and breastfeeding is undetermined. Ethical concerns include reproductive rights, reproductive justice, physician autonomy, and transphobia within the health care setting.", "People with disabilities who wish to have children are equally or more likely than the non-disabled population to experience infertility, yet disabled individuals are much less likely to have access to fertility treatment such as IVF. There are many extraneous factors that hinder disabled individuals access to IVF, such as assumptions about decision-making capacity, sexual interests and abilities, heritability of a disability, and beliefs about parenting ability. These same misconceptions about people with disabilities that once led health care providers to sterilise thousands of women with disabilities now lead them to provide or deny reproductive care on the basis of stereotypes concerning people with disabilities and their sexuality.\nNot only do misconceptions about disabled individuals parenting ability, sexuality, and health restrict and hinder access to fertility treatment such as IVF, structural barriers such as providers uneducated in disability healthcare and inaccessible clinics severely hinder disabled individuals access to receiving IVF.", "In some cases, laboratory mix-ups (misidentified gametes, transfer of wrong embryos) have occurred, leading to legal action against the IVF provider and complex paternity suits. An example is the case of a woman in California who received the embryo of another couple and was notified of this mistake after the birth of her son. This has led to many authorities and individual clinics implementing procedures to minimise the risk of such mix-ups. The HFEA, for example, requires clinics to use a double witnessing system, the identity of specimens is checked by two people at each point at which specimens are transferred. Alternatively, technological solutions are gaining favour, to reduce the manpower cost of manual double witnessing, and to further reduce risks with uniquely numbered RFID tags which can be identified by readers connected to a computer. The computer tracks specimens throughout the process and alerts the embryologist if non-matching specimens are identified. Although the use of RFID tracking has expanded in the US, it is still not widely adopted.", "Pre-implantation genetic diagnosis (PGD) is criticised for giving select demographic groups disproportionate access to a means of creating a child possessing characteristics that they consider \"ideal\". 
Many fertile couples now demand equal access to embryonic screening so that their child can be just as healthy as one created through IVF. Mass use of PGD, especially as a means of population control or in the presence of legal measures related to population or demographic control, can lead to intentional or unintentional demographic effects such as the skewed live-birth sex ratios seen in China following implementation of its one-child policy.\nWhile PGD was originally designed to screen for embryos carrying hereditary genetic diseases, the method has been applied to select features that are unrelated to diseases, thus raising ethical questions. Examples of such cases include the selection of embryos based on histocompatibility (HLA) for the donation of tissues to a sick family member, the diagnosis of genetic susceptibility to disease, and sex selection.\nThese examples raise ethical issues because of the morality of eugenics. It becomes frowned upon because of the advantage of being able to eliminate unwanted traits and selecting desired traits. By using PGD, individuals are given the opportunity to create a human life unethically and rely on science and not by natural selection.\nFor example, a deaf British couple, Tom and Paula Lichy, have petitioned to create a deaf baby using IVF. Some medical ethicists have been very critical of this approach. Jacob M. Appel wrote that \"intentionally culling out blind or deaf embryos might prevent considerable future suffering, while a policy that allowed deaf or blind parents to select for such traits intentionally would be far more troublesome.\"", "In 2008, a California physician transferred 12 embryos to a woman who gave birth to octuplets (Suleman octuplets). This led to accusations that a doctor is willing to endanger the health and even life of people in order to gain money. Robert Winston, professor of fertility studies at Imperial College London, had called the industry \"corrupt\" and \"greedy\" stating that \"one of the major problems facing us in healthcare is that IVF has become a massive commercial industry,\" and that \"what has happened, of course, is that money is corrupting this whole technology\", and accused authorities of failing to protect couples from exploitation: \"The regulatory authority has done a consistently bad job. Its not prevented the exploitation of people, its not put out very good information to couples, it's not limited the number of unscientific treatments people have access to\". The IVF industry has been described as a market-driven construction of health, medicine and the human body.\nThe industry has been accused of making unscientific claims, and distorting facts relating to infertility, in particular through widely exaggerated claims about how common infertility is in society, in an attempt to get as many couples as possible and as soon as possible to try treatments (rather than trying to conceive naturally for a longer time). This risks removing infertility from its social context and reducing the experience to a simple biological malfunction, which not only can be treated through bio-medical procedures, but should be treated by them.", "All pregnancies can be risky, but there are greater risk for birthing parents who are older and are over the age of 40. As people get older, they are more likely to develop conditions such as gestational diabetes and pre-eclampsia. 
If the birthing parent does conceive over the age of 40, their offspring may be of lower birth weight and more likely to require intensive care. Because of this, the increased risk is a sufficient cause for concern. The high incidence of caesarean section in older patients is commonly regarded as a risk.\nThose conceiving at 40 have a greater risk of gestational hypertension and premature birth. Offspring born to older mothers face these risks in addition to the risks associated with being conceived through IVF.\nAdriana Iliescu held the record for a while as the oldest woman to give birth using IVF and a donor egg, when she gave birth in 2004 at the age of 66. In September 2019, a 74-year-old woman became the oldest-ever to give birth after she delivered twins at a hospital in Guntur, Andhra Pradesh.", "Although menopause is a natural barrier to further conception, IVF has allowed people to be pregnant in their fifties and sixties. People whose uteruses have been appropriately prepared receive embryos that originated from an egg donor. Therefore, although they do not have a genetic link with the child, they have a physical link through pregnancy and childbirth. Even after menopause, the uterus is fully capable of carrying out a pregnancy.", "A 2009 statement from the ASRM found no persuasive evidence that children are harmed or disadvantaged solely by being raised by single parents, unmarried parents, or homosexual parents. It did not support restricting access to assisted reproductive technologies on the basis of a prospective parent's marital status or sexual orientation. A 2018 study found that children's psychological well-being did not differ when they were raised by either same-sex parents or heterosexual parents, even finding that psychological well-being was better amongst children raised by same-sex parents.\nEthical concerns include reproductive rights, the welfare of offspring, nondiscrimination against unmarried and homosexual individuals, and professional autonomy.\nA controversy in California focused on the question of whether physicians opposed to same-sex relationships should be required to perform IVF for a lesbian couple. Guadalupe T. Benitez, a lesbian medical assistant from San Diego, sued doctors Christine Brody and Douglas Fenton of the North Coast Women's Care Medical Group after Brody told her that she had "religious-based objections to treating her and homosexuals in general to help them conceive children by artificial insemination," and Fenton refused to authorise a refill of her prescription for the fertility drug Clomid on the same grounds. The California Medical Association had initially sided with Brody and Fenton, but the case, North Coast Women's Care Medical Group v. Superior Court, was decided unanimously by the California State Supreme Court in favour of Benitez on 19 August 2008.\nNadya Suleman came to international attention after having twelve embryos implanted, eight of which survived, resulting in eight newborns being added to her existing six-child family. The Medical Board of California sought to have fertility doctor Michael Kamrava, who treated Suleman, stripped of his licence. State officials alleged that performing Suleman's procedure was evidence of unreasonable judgment, substandard care, and a lack of concern for the eight children she would conceive and the six she was already struggling to raise.
On 1 June 2011 the Medical Board issued a ruling that Kamravas medical licence be revoked effective 1 July 2011.", "Some children conceived by IVF using anonymous donors report being troubled over not knowing about their donor parent as well any genetic relatives they may have and their family history.\nAlana Stewart, who was conceived using donor sperm, began an online forum for donor children called AnonymousUS in 2010. The forum welcomes the viewpoints of anyone involved in the IVF process. Olivia Pratten, a donor-conceived Canadian, sued the province of British Columbia for access to records on her donor fathers identity in 2008. \"Im not a treatment, Im a person, and those records belong to me,\" Pratten said. In May 2012, a court ruled in Prattens favour, agreeing that the laws at the time discriminated against donor children and making anonymous sperm and egg donation in British Columbia illegal.\nIn the U.K., Sweden, Norway, Germany, Italy, New Zealand, and some Australian states, donors are not paid and cannot be anonymous.\nIn 2000, a website called Donor Sibling Registry was created to help biological children with a common donor connect with each other.\nIn 2012, a documentary called Anonymous Fathers Day' was released that focuses on donor-conceived children.", "There may be leftover embryos or eggs from IVF procedures if the person for whom they were originally created has successfully carried one or more pregnancies to term, and no longer wishes to use them. With the patient's permission, these may be donated to help others conceive by means of third party reproduction.\nIn embryo donation, these extra embryos are given to others for transfer, with the goal of producing a successful pregnancy. Embryo recipients have genetic issues or poor-quality embryos or eggs of their own. The resulting child is considered the child of whoever birthed them, and not the child of the donor, the same as occurs with egg donation or sperm donation. As per The National Infertility Association, typically, genetic parents donate the eggs or embryos to a fertility clinic where they are preserved by oocyte cryopreservation or embryo cryopreservation until a carrier is found for them. The process of matching the donation with the prospective parents is conducted by the agency itself, at which time the clinic transfers ownership of the embryos to the prospective parent(s).\nAlternatives to donating unused embryos are destroying them (or having them transferred at a time when pregnancy is very unlikely), keeping them frozen indefinitely, or donating them for use in research (rendering them non-viable). Individual moral views on disposing of leftover embryos may depend on personal views on the beginning of human personhood and the definition and/or value of potential future persons, and on the value that is given to fundamental research questions. Some people believe donation of leftover embryos for research is a good alternative to discarding the embryos when patients receive proper, honest and clear information about the research project, the procedures and the scientific values.\nDuring the embryo selection and transfer phases, many embryos may be discarded in favour of others. This selection may be based on criteria such as genetic disorders or the sex. 
One of the earliest cases of special gene selection through IVF was the case of the Collins family in the 1990s, who selected the sex of their child.\nThe ethic issues remain unresolved as no worldwide consensus exists in science, religion, and philosophy on when a human embryo should be recognised as a person. For those who believe that this is at the moment of conception, IVF becomes a moral question when multiple eggs are fertilised, begin development, and only a few are chosen for uterus transfer.\nIf IVF were to involve the fertilisation of only a single egg, or at least only the number that will be transferred, then this would not be an issue. However, this has the chance of increasing costs dramatically as only a few eggs can be attempted at a time. As a result, the couple must decide what to do with these extra embryos. Depending on their view of the embryo's humanity or the chance the couple will want to try to have another child, the couple has multiple options for dealing with these extra embryos. Couples can choose to keep them frozen, donate them to other infertile couples, thaw them, or donate them to medical research. Keeping them frozen costs money, donating them does not ensure they will survive, thawing them renders them immediately unviable, and medical research results in their termination. In the realm of medical research, the couple is not necessarily told what the embryos will be used for, and as a result, some can be used in stem cell research.\nIn February 2024, the Alabama Supreme Court ruled in LePage v. Center for Reproductive Medicine that cryopreserved embryos were \"persons\" or \"extrauterine children\". After Dobbs v. Jackson Women's Health Organization (2022), some antiabortionists had hoped to get a judgement that fetuses and embryos were \"person[s]\".", "The Catholic Church opposes all kinds of assisted reproductive technology and artificial contraception, on the grounds that they separate the procreative goal of marital sex from the goal of uniting married couples.\nThe Catholic Church permits the use of a small number of reproductive technologies and contraceptive methods such as natural family planning, which involves charting ovulation times, and allows other forms of reproductive technologies that allow conception to take place from normative sexual intercourse, such as a fertility lubricant. Pope Benedict XVI had publicly re-emphasised the Catholic Church's opposition to in vitro fertilisation, saying that it replaces love between a husband and wife.\nThe Catechism of the Catholic Church, in accordance with the Catholic understanding of natural law, teaches that reproduction has an \"inseparable connection\" to the sexual union of married couples. In addition, the church opposes IVF because it might result in the disposal of embryos; in Catholicism, an embryo is viewed as an individual with a soul that must be treated as a person. The Catholic Church maintains that it is not objectively evil to be infertile, and advocates adoption as an option for such couples who still wish to have children.\nHindus welcome IVF as a gift for those who are unable to bear children and have declared doctors related to IVF to be conducting punya as there are several characters who were claimed to be born without intercourse, mainly Kaurav and five Pandavas.\nRegarding the response to IVF by Islam, a general consensus from the contemporary Sunni scholars concludes that IVF methods are immoral and prohibited. 
However, Gad El-Hak Ali Gad El-Hak's ART fatwa includes that:\n*IVF of an egg from the wife with the sperm of her husband and the transfer of the fertilised egg back to the uterus of the wife is allowed, provided that the procedure is indicated for a medical reason and is carried out by an expert physician.\n*Since marriage is a contract between the wife and husband during the span of their marriage, no third party should intrude into the marital functions of sex and procreation. This means that a third party donor is not acceptable, whether he or she is providing sperm, eggs, embryos, or a uterus. The use of a third party is tantamount to zina, or adultery.\nWithin the Orthodox Jewish community the concept is debated as there is little precedent in traditional Jewish legal textual sources. Regarding laws of sexuality, religious challenges include masturbation (which may be regarded as \"seed wasting\"), laws related to sexual activity and menstruation (niddah) and the specific laws regarding intercourse. An additional major issue is that of establishing paternity and lineage. For a baby conceived naturally, the fathers identity is determined by a legal presumption (chazakah) of legitimacy: rov biot achar habaal – a womans sexual relations are assumed to be with her husband. Regarding an IVF child, this assumption does not exist and as such Rabbi Eliezer Waldenberg (among others) requires an outside supervisor to positively identify the father. Reform Judaism has generally approved IVF.", "Many women of sub-Saharan Africa choose to foster their children to infertile women. IVF enables these infertile women to have their own children, which imposes new ideals to a culture in which fostering children is seen as both natural and culturally important. Many infertile women are able to earn more respect in their society by taking care of the children of other mothers, and this may be lost if they choose to use IVF instead. As IVF is seen as unnatural, it may even hinder their societal position as opposed to making them equal with fertile women. It is also economically advantageous for infertile women to raise foster children as it gives these children greater ability to access resources that are important for their development and also aids the development of their society at large. If IVF becomes more popular without the birth rate decreasing, there could be more large family homes with fewer options to send their newborn children. This could result in an increase of orphaned children and/or a decrease in resources for the children of large families. This would ultimately stifle the childrens and the communitys growth.\nIn the US, the pineapple has emerged as a symbol of IVF users, possibly because some people thought, without scientific evidence, that eating pineapple might slightly increase the success rate for the procedure.", "Studies have indicated that IVF mothers show greater emotional involvement with their child, and they enjoy motherhood more than mothers by natural conception. Similarly, studies have indicated that IVF fathers express more warmth and emotional involvement than fathers by adoption and natural conception and enjoy fatherhood more. 
Some IVF parents become overly involved with their children.", "If the underlying infertility is related to abnormalities in spermatogenesis, it is plausible, but too early to examine that male offspring are at higher risk for sperm abnormalities.\nIVF does not seem to confer any risks regarding cognitive development, school performance, social functioning, and behaviour. Also, IVF infants are known to be as securely attached to their parents as those who were naturally conceived, and IVF adolescents are as well-adjusted as those who have been naturally conceived.\nLimited long-term follow-up data suggest that IVF may be associated with an increased incidence of hypertension, impaired fasting glucose, increase in total body fat composition, advancement of bone age, subclinical thyroid disorder, early adulthood clinical depression and binge drinking in the offspring. It is not known, however, whether these potential associations are caused by the IVF procedure in itself, by adverse obstetric outcomes associated with IVF, by the genetic origin of the children or by yet unknown IVF-associated causes. Increases in embryo manipulation during IVF result in more deviant fetal growth curves, but birth weight does not seem to be a reliable marker of fetal stress.\nIVF, including ICSI, is associated with an increased risk of imprinting disorders (including Prader–Willi syndrome and Angelman syndrome), with an odds ratio of 3.7 (95% confidence interval 1.4 to 9.7).\nAn IVF-associated incidence of cerebral palsy and neurodevelopmental delay are believed to be related to the confounders of prematurity and low birthweight. Similarly, an IVF-associated incidence of autism and attention-deficit disorder are believed to be related to confounders of maternal and obstetric factors.\nOverall, IVF does not cause an increased risk of childhood cancer. Studies have shown a decrease in the risk of certain cancers and an increased risks of certain others including retinoblastoma, hepatoblastoma and rhabdomyosarcoma.", "The laws of many countries permit IVF for only single individuals, lesbian couples, and persons participating in surrogacy arrangements.", "In larger urban centres, studies have noted that lesbian, gay, bisexual, transgender and queer (LGBTQ+) populations are among the fastest-growing users of fertility care. IVF is increasingly being used to allow lesbian and other LGBT couples to share in the reproductive process through a technique called reciprocal IVF. The eggs of one partner are used to create embryos which the other partner carries through pregnancy. For gay male couples, many elect to use IVF through gestational surrogacy, where one partners sperm is used to fertilise a donor ovum, and the resulting embryo is transplanted into a surrogate carriers womb. There are various IVF options available for same-sex couples including, but not limited to, IVF with donor sperm, IVF with a partners oocytes, reciprocal IVF, IVF with donor eggs, and IVF with gestational surrogate. IVF with donor sperm can be considered traditional IVF for lesbian couples, but reciprocal IVF or using a partners oocytes are other options for lesbian couples trying to conceive to include both partners in the biological process. Using a partners oocytes is an option for partners who are unsuccessful in conceiving with their own, and reciprocal IVF involves undergoing reproduction with a donor egg and sperm that is then transferred to a partner who will gestate. Donor IVF involves conceiving with a third partys eggs. 
Typically, for gay male couples hoping to use IVF, the common techniques are using IVF with donor eggs and gestational surrogates.", "Many LGBT communities centre their support around cisgender gay, lesbian and bisexual people and neglect to include proper support for transgender people. A 2020 literature review analyses the social, emotional and physical experiences of pregnant transgender men. A common obstacle faced by pregnant transgender men is the possibility of gender dysphoria. Literature shows that transgender men report uncomfortable procedures and interactions during their pregnancies as well as feeling misgendered due to gendered terminology used by healthcare providers. Outside of the healthcare system, pregnant transgender men may experience gender dysphoria due to cultural assumptions that all pregnant people are cisgender women. These people use three common approaches to navigating their pregnancy: passing as a cisgender woman, hiding their pregnancy, or being out and visibly pregnant as a transgender man. Some transgender and gender diverse patients describe their experience in seeking gynaecological and reproductive health care as isolating and discriminatory, as the strictly binary healthcare system often leads to denial of healthcare coverage or unnecessary revelation of their transgender status to their employer.\nMany transgender people retain their original sex organs and choose to have children through biological reproduction. Advances in assisted reproductive technology and fertility preservation have broadened the options transgender people have to conceive a child using their own gametes or a donor's. Transgender men and women may opt for fertility preservation before any gender-affirming surgery, but it is not required for future biological reproduction. It is also recommended that fertility preservation is conducted before any hormone therapy. Additionally, while fertility specialists often suggest that transgender men discontinue their testosterone hormones prior to pregnancy, research on this topic is still inconclusive. However, a 2019 study found that transgender male patients seeking oocyte retrieval via assisted reproductive technology (including IVF) were able to undergo treatment four months after stopping testosterone treatment, on average. All patients experienced menses and normal AMH, FSH and estradiol (E2) levels and antral follicle counts after coming off testosterone, which allowed for successful oocyte retrieval. Despite assumptions that long-term androgen treatment negatively impacts fertility, oocyte retrieval, an integral part of the IVF process, does not appear to be affected.\nBiological reproductive options available to transgender women include, but are not limited to, IVF and IUI with the trans woman's sperm and a donor's or a partner's eggs and uterus. Fertility treatment options for transgender men include, but are not limited to, IUI or IVF using his own eggs with a donor's sperm and/or a donor's eggs, his uterus, or a different uterus, whether that is a partner's or a surrogate's.", "In Australia, the average age of women undergoing ART treatment is 35.5 years among those using their own eggs (one in four being 40 or older) and 40.5 years among those using donated eggs. While IVF is available in Australia, Australians using IVF are unable to choose their baby's gender.", "In Canada, one cycle of IVF treatment can cost between $7,750 and $12,250 CAD, and medications alone can cost from $2,500 to over $7,000 CAD.
The funding mechanisms that influence accessibility in Canada vary by province and territory, with some provinces providing full, partial or no coverage.\nNew Brunswick provides partial funding through its Infertility Special Assistance Fund – a one-time grant of up to $5,000. Patients may only claim up to 50% of treatment costs or $5,000 (whichever is less) for costs incurred after April 2014. Eligible patients must be full-time New Brunswick residents with a valid Medicare card and have an official medical infertility diagnosis by a physician.\nIn December 2015, the Ontario provincial government enacted the Ontario Fertility Program for patients with medical and non-medical infertility, regardless of sexual orientation, gender or family composition. Eligible patients for IVF treatment must be Ontario residents under the age of 43, have a valid Ontario Health Insurance Plan card, and not have already undergone any IVF cycles. Coverage is extensive, but not universal. Coverage extends to certain blood and urine tests, physician/nurse counselling and consultations, certain ultrasounds, up to two cycle monitorings, embryo thawing, freezing and culture, fertilisation and embryology services, single transfers of all embryos, and one surgical sperm retrieval using certain techniques only if necessary. Drugs and medications are not covered under this Program, along with psychologist or social worker counselling, storage and shipping of eggs, sperm or embryos, and the purchase of donor sperm or eggs.", "IVF is expensive in China and not generally accessible to unmarried women. In August 2022, China's National Health Authority announced that it will take steps to make assisted reproductive technology more accessible, including by guiding local governments to include such technology in the national medical system.\nNo egg or sperm donation takes place in Croatia; however, using donated sperm or eggs in ART and IUI is allowed. With donated eggs, sperm or embryos, heterosexual couples and single women have legal access to IVF. Same-sex couples, male or female, do not have access to ART as a form of reproduction. The minimum age for males and females to access ART in Croatia is 18; there is no maximum age. Donor anonymity applies, but the born child can be given access to the donor's identity at a certain age.", "The penetration of the IVF market in India is quite low, with only 2,800 cycles per million infertile people in the reproductive age group (20–44 years), as compared to China, which has 6,500 cycles. The key challenges are lack of awareness, affordability and accessibility. Since 2018, however, India has become a destination for fertility tourism, because of lower costs than in the Western world. In December 2021, the Lok Sabha passed the Assisted Reproductive Technology (Regulation) Bill 2020, to regulate ART services including IVF centres, sperm and egg banks.", "Israel has the highest rate of IVF in the world, with 1,657 procedures performed per million people per year. Couples without children can receive funding for IVF for up to two children. The same funding is available for people without children who will raise up to two children in a single parent home. IVF is available for people aged 18 to 45. The Israeli Health Ministry says it spends roughly $3,450 per procedure.", "One, two or three IVF treatments are government subsidised for people who are younger than 40 and have no children.
The rules for how many treatments are subsidised, and the upper age limit for the people, vary between different county councils. Single people are treated, and embryo adoption is allowed. There are also private clinics that offer the treatment for a fee.", "Availability of IVF in England is determined by Clinical Commissioning Groups (CCGs). The National Institute for Health and Care Excellence (NICE) recommends up to 3 cycles of treatment for people under 40 years old with minimal success conceiving after 2 years of unprotected sex. Cycles will not be continued for people who are older than 40 years. CCGs in Essex, Bedfordshire and Somerset have reduced funding to one cycle, or none, and it is expected that reductions will become more widespread. Funding may be available in \"exceptional circumstances\" – for example if a male partner has a transmittable infection or one partner is affected by cancer treatment. According to the campaign group Fertility Fairness \"at the end of 2014 every CCG in England was funding at least one cycle of IVF\". Prices paid by the NHS in England varied between under £3,000 to more than £6,000 in 2014/5. In February 2013, the cost of implementing the NICE guidelines for IVF along with other treatments for infertility was projected to be £236,000 per year per 100,000 members of the population.\nIVF increasingly appears on NHS treatments blacklists. In August 2017 five of the 208 CCGs had stopped funding IVF completely and others were considering doing so. By October 2017 only 25 CCGs were delivering the three recommended NHS IVF cycles to eligible people under 40. Policies could fall foul of discrimination laws if they treat same sex couples differently from heterosexual ones. In July 2019 Jackie Doyle-Price said that women were registering with surgeries further away from their own home in order to get around CCG rationing policies.\nThe Human Fertilisation and Embryology Authority said in September 2018 that parents who are limited to one cycle of IVF, or have to fund it themselves, are more likely choose to implant multiple embryos in the hope it increases the chances of pregnancy. This significantly increases the chance of multiple births and the associated poor outcomes, which would increase NHS costs. The president of the Royal College of Obstetricians and Gynaecologists said that funding 3 cycles was \"the most important factor in maintaining low rates of multiple pregnancies and reduce(s) associated complications\".", "In the United States, overall availability of IVF in 2005 was 2.5 IVF physicians per 100,000 population, and utilisation was 236 IVF cycles per 100,000. 126 procedures are performed per million people per year. Utilisation highly increases with availability and IVF insurance coverage, and to a significant extent also with percentage of single persons and median income. In the US, an average cycle, from egg retrieval to embryo implantation, costs $12,400, and insurance companies that do cover treatment, even partially, usually cap the number of cycles they pay for. As of 2015, more than 1 million babies had been born utilising IVF technologies.\nIn the US, nineteen states have laws requiring insurance coverage for infertility treatment, and thirteen of those specifically include IVF. These states that mandate IVF coverage are: Arkansas, California, Colorado, Connecticut, Delaware, Hawaii, Illinois, Louisiana, Maryland, Massachusetts, Montana, New Hampshire, New Jersey, New York, Ohio, Rhode Island, Texas, Utah, and West Virginia. 
These laws differ by state but many require an egg be fertilised with sperm from a spouse and that in order to be covered you must show you cannot become pregnant through penile-vaginal sex. These requirements are not possible for a same-sex couple to meet. No state Medicaid program, however, covers for IVF according to a 2020 report.\nMany fertility clinics in the United States limit the upper age at which people are eligible for IVF to 50 or 55 years. These cut-offs make it difficult for people older than fifty-five to utilise the procedure.", "Government agencies in China passed bans on the use of IVF in 2003 by unmarried people or by couples with certain infectious diseases.\nIn India, the use of IVF as a means of sex selection (preimplantation genetic diagnosis) is banned under the Pre-Conception and Pre-Natal Diagnostic Techniques Act, 1994.\nSunni Muslim nations generally allow IVF between married couples when conducted with their own respective sperm and eggs, but not with donor eggs from other couples. But Iran, which is Shia Muslim, has a more complex scheme. Iran bans sperm donation but allows donation of both fertilised and unfertilised eggs. Fertilised eggs are donated from married couples to other married couples, while unfertilised eggs are donated in the context of mutah or temporary marriage to the father.\nBy 2012 Costa Rica was the only country in the world with a complete ban on IVF technology, it having been ruled unconstitutional by the nations Supreme Court because it \"violated life.\" Costa Rica had been the only country in the western hemisphere that forbade IVF. A law project sent reluctantly by the government of President Laura Chinchilla was rejected by parliament. President Chinchilla has not publicly stated her position on the question of IVF. However, given the massive influence of the Catholic Church in her government any change in the status quo seems very unlikely. In spite of Costa Rican government and strong religious opposition, the IVF ban has been struck down by the Inter-American Court of Human Rights in a decision of 20 December 2012. The court said that a long-standing Costa Rican guarantee of protection for every human embryo violated the reproductive freedom of infertile couples because it prohibited them from using IVF, which often involves the disposal of embryos not implanted in a womans uterus. On 10 September 2015, President Luis Guillermo Solís signed a decree legalising in-vitro fertilisation. The decree was added to the countrys official gazette on 11 September. Opponents of the practice have since filed a lawsuit before the countrys Constitutional Court.\nAll major restrictions on single but infertile people using IVF were lifted in Australia in 2002 after a final appeal to the Australian High Court was rejected on procedural grounds in the Leesa Meldrum case. A Victorian federal court had ruled in 2000 that the existing ban on all single women and lesbians using IVF constituted sex discrimination. Victoria's government announced changes to its IVF law in 2007 eliminating remaining restrictions on fertile single women and lesbians, leaving South Australia as the only state maintaining them.\nFederal regulations in the United States include screening requirements and restrictions on donations, but generally do not affect sexually intimate partners. However, doctors may be required to provide treatments due to nondiscrimination laws, as for example in California. 
The US state of Tennessee proposed a bill in 2009 that would have defined donor IVF as adoption. During the same session, another bill proposed barring adoption by any unmarried and cohabiting couple, and activist groups stated that passing the first bill would effectively stop unmarried women from using IVF. Neither of these bills passed.\nIn 2024, the Supreme Court of Alabama ruled that embryos created during in-vitro fertilisation are "extrauterine children", and that an 1872 state law allowing parents to sue over the death of a minor "applies to all unborn children, regardless of their location." This ruling raised concerns from The National Infertility Association and the American Society for Reproductive Medicine that the decision would mean Alabama's bans on abortion prohibit IVF as well, while the University of Alabama at Birmingham health system paused IVF treatments. Eight days later the Alabama legislature voted to protect IVF providers and patients from criminal or civil liability.\nFew American courts have addressed the issue of the "property" status of a frozen embryo. This issue might arise in the context of a divorce case, in which a court would need to determine which spouse would be able to decide the disposition of the embryos. It could also arise in the context of a dispute between a sperm donor and egg donor, even if they were unmarried. In 2015, an Illinois court held that such disputes could be decided by reference to any contract between the parents-to-be. In the absence of a contract, the court would weigh the relative interests of the parties.", "Research has shown that men largely view themselves as "passive contributors" since they have "less physical involvement" in IVF treatment. Despite this, many men feel distressed after seeing the toll of hormonal injections and ongoing physical intervention on their female partner. Fertility was found to be a significant factor in a man's perception of his masculinity, driving many to keep the treatment a secret. In cases where men did share that they and their partners were undergoing IVF, they reported having been teased, mainly by other men, although some viewed this as an affirmation of support and friendship. For others, this led to feeling socially isolated. In comparison with females, males showed less deterioration in mental health in the years following a failed treatment. However, many men did feel guilt, disappointment and inadequacy, stating that they were simply trying to provide an "emotional rock" for their partners.", "Materials used in the construction of an in vivo bioreactor space vary widely depending on the type of substrate, type of tissue, and mechanical demands of the tissue being grown. At its simplest, a bioreactor space can be created between tissue layers through the injection of a hydrogel. Early models used an impermeable silicone shroud to encase a scaffold, though more recent studies have begun 3D printing custom bioreactor molds to further enhance the mechanical growth properties of the bioreactors. The bioreactor chamber material must generally be nontoxic and medical grade; examples include "silicon, polycarbonate, and acrylic polymer". Recently both Teflon and titanium have been used in the growth of bone. One study utilized poly(methyl methacrylate) as a chamber material and 3D printed hollow rectangular blocks.
Yet another study pushed the limits of the in vivo bioreactor by proving that the omentum is suitable as a bioreactor space and chamber. Specifically, highly vascularized and functional bladder tissue was grown within the omentum space.", "An example of the implementation of the IVB approach was in the engineering of autologous bone by injecting calcium alginate in a sub-periosteal location. The periosteum is a membrane that covers the long bones, jawbone, ribs and the skull. This membrane contains an endogenous population of pluripotent cells called periosteal cells, which are a type of mesenchymal stem cell (MSC) and reside in the cambium layer, i.e., the side facing the bone. A key step in the procedure is the elevation of the periosteum without damaging the cambium surface; to ensure this, a new technique called hydraulic elevation was developed.\nThe sub-periosteal site is used because stimulation of the cambium layer using transforming growth factor–beta resulted in enhanced chondrogenesis, i.e., formation of cartilage. In development, bone formation can occur either via a cartilage template initially formed by the MSCs, which is then ossified through a process called endochondral ossification, or directly from MSC differentiation to bone via a process termed intramembranous ossification. Upon exposure of the periosteal cells to calcium from the alginate gel, these cells become bone cells and start producing bone matrix through the intramembranous ossification process, recapitulating all steps of bone matrix deposition. The extension of the IVB paradigm to engineering autologous hyaline cartilage was also recently demonstrated. In this case, agarose is injected and this triggers local hypoxia, which then results in the differentiation of the periosteal MSCs into articular chondrocytes, i.e., cells similar to those found in the joint cartilage. Since this process occurs in a relatively short period of less than two weeks and cartilage can remodel into bone, this approach might provide some advantages in the treatment of both cartilage and bone loss. The IVB concept has yet to be realized in humans, however, and work toward this is currently being undertaken.", "Initially, with a focus on bone growth, subcutaneous pockets were used for bone prefabrication as a simple in vivo bioreactor model. The pocket is an artificially created space between varying levels of subcutaneous fascia. The location provides regenerative cues to the bioreactor implant but does not rely on pre-existing bone tissue as a substrate. Furthermore, these bioreactors may be wrapped with muscle tissue to encourage vascularization and bone growth. Another strategy is to wrap a periosteal flap around the bioreactor, or around the scaffold itself, to create an in vivo bioreactor. This strategy utilizes the guided bone regeneration treatment scheme, and is a safe method for bone prefabrication. These flap methods of packing the bioreactor within fascia, or wrapping it in tissue, are effective, though somewhat random due to the non-directed vascularization these methods incur. The axial vascular bundle (AVB) strategy requires that an artery and vein be inserted into the in vivo bioreactor to deliver growth factors and cells and to remove waste. This ultimately results in extensive vascularization of the bioreactor space and a vast improvement in growth capability.
This vascularization, though effective, is limited by the surface contact that it can achieve between the scaffold and the capillaries filling the bioreactor space. Thus, a combination of the flap and AVB techniques can maximize the growth rate and vascular contact of the bioreactor, as suggested by Han and Dai, by inserting a vascular bundle into a scaffold wrapped in either musculature or periosteum. If inadequate pre-existing vasculature is present in the growth site due to damage or disease, an arteriovenous loop (AVL) can be used. The AVL strategy requires a surgical connection be made between an artery and a vein to form an arteriovenous fistula, which is then placed within an in vivo bioreactor space containing a scaffold. A capillary network will form from this loop and accelerate the vascularization of new tissue.", "Scaffold materials are designed to enhance tissue formation through control of the local and surrounding environments. Scaffolds are critical in regulating cellular growth and provide a volume in which vascularization and stem cell differentiation can occur. Scaffold geometry significantly affects tissue differentiation through physical growth cues. Predicting tissue formation computationally requires theories that link physical growth cues to cell differentiation. Current models rely on mechano-regulation theory, widely shaped by Prendergast et al., for predicting cell growth (a brief illustrative sketch follows below). This makes a quantitative analysis of the geometry and materials commonly used in tissue scaffolds possible.\nSuch materials include:\n* Porous ceramic and demineralized bone matrix supports\n* Coralline cylinders\n* Biodegradable materials such as poly(α-hydroxy esters)\n* Decellularized tissue matrices\n* Injectable biomaterials or hydrogels, typically composed of polysaccharides, proteins/peptide mimetics, or synthetic polymers such as poly(ethylene glycol)\n* Peptide amphiphile (PA) systems, which are self-assembling and can form solid bioactive scaffolds after injection within the body\n* Inert systems, which have been proven to be adequate for tissue formation; cartilage formation has occurred by injecting an inert agarose gel beneath the periosteum in a rabbit model, although vascularization was restricted\n* Fibrin\n* Sponges made from collagen", "Tissue engineering done in vivo is capable of recruiting local cellular populations into a bioreactor space. Indeed, a range of neotissues has been grown in this way: bone, cartilage, fat, and muscle. In theory, any tissue type could be grown in this manner if all necessary components (growth factors, environmental and physical cues) are met. Recruitment of stem cells requires a complex process of mobilization from their niche, though research suggests that mature cells transplanted upon the bioreactor scaffold can improve stem cell recruitment. These cells secrete growth factors that promote repair and can be co-cultured with stem cells to improve tissue formation.", "The in vivo bioreactor is a tissue engineering paradigm that uses bioreactor methodology to grow neotissue in vivo that augments or replaces malfunctioning native tissue. Tissue engineering principles are used to construct a confined, artificial bioreactor space in vivo that hosts a tissue scaffold and key biomolecules necessary for neotissue growth. Said space often requires inoculation with pluripotent or specific stem cells to encourage initial growth, and access to a blood source. A blood source allows for recruitment of stem cells from the body alongside nutrient delivery for continual growth.
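Returning briefly to the mechano-regulation models mentioned in the scaffold discussion above: as a hedged illustration only, the Python sketch below encodes a Prendergast-type biophysical stimulus of the form S = γ/a + v/b (octahedral shear strain γ and interstitial fluid velocity v) and maps it to a predicted tissue phenotype. The constants and thresholds used here are values commonly quoted in the computational literature but are included purely for illustration; published implementations differ in both the constants and the tissue boundaries, and this toy example is not taken from any specific study cited in this article.

```python
# Illustrative sketch of a Prendergast-type mechano-regulation stimulus.
# Constants and thresholds below are illustrative assumptions only.

A_STRAIN = 0.0375    # reference octahedral shear strain (dimensionless)
B_VELOCITY = 3.0     # reference interstitial fluid velocity (micrometres/s)


def biophysical_stimulus(shear_strain: float, fluid_velocity_um_s: float) -> float:
    """Compute S = gamma/a + v/b for a single point in the scaffold."""
    return shear_strain / A_STRAIN + fluid_velocity_um_s / B_VELOCITY


def predicted_tissue(s: float) -> str:
    """Map the stimulus to a predicted phenotype (illustrative thresholds)."""
    if s > 3:
        return "fibrous tissue"
    if s > 1:
        return "cartilage"
    return "bone"


for gamma, v in [(0.10, 10.0), (0.05, 2.0), (0.01, 0.5)]:
    s = biophysical_stimulus(gamma, v)
    print(f"gamma={gamma:.2f}, v={v:.1f} um/s -> S={s:.2f}: {predicted_tissue(s)}")
```

In scaffold design studies, such a stimulus is typically evaluated element by element in a finite element model of the loaded scaffold, rather than from single scalar inputs as in this sketch.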
The delivery of cells and nutrients to the bioreactor eventually results in the formation of a neotissue product.", "Conceptually, the in vivo bioreactor was born of complications in bone grafting, a repair method for bone fracture, bone loss, necrosis, and tumor reconstruction. Traditional bone grafting strategies require fresh, autologous bone harvested from the iliac crest; this harvest site is limited by the amount of bone that can safely be removed, as well as associated pain and morbidity. Other methods include cadaveric allografts and synthetic options (often made of hydroxyapatite) that have become available in recent years. In response to the question of limited bone sourcing, it has been posited that bone can be grown to fit a damaged region within the body through the application of tissue engineering principles.\nTissue engineering is a biomedical engineering discipline that combines biology, chemistry, and engineering to design neotissue (newly formed tissue) on a scaffold. Tissue scaffolds are functionally analogous to the native extracellular matrix, acting as a site upon which regenerative cellular components adsorb to encourage cellular growth. This cellular growth is then artificially stimulated by additive growth factors in the environment that encourage tissue formation. The scaffold is often seeded with stem cells and growth additives to encourage a smooth transition from cells to tissues and, more recently, organs. Traditionally, this method of tissue engineering is performed in vitro, where scaffold components and environmental manipulation recreate in vivo stimuli that direct growth. Environmental manipulation includes changes in physical stimulation, pH, potential gradients, cytokine gradients, and oxygen concentration. The overarching goal of in vitro tissue engineering is to create a functional tissue that is equivalent to native tissue in terms of composition, biomechanical properties, and physiological performance. However, in vitro tissue engineering suffers from a limited ability to mimic in vivo conditions, often leading to inadequate tissue substitutes. Therefore, in vivo tissue engineering has been suggested as a method to circumvent the tedium of environmental manipulation and use native in vivo stimuli to direct cell growth. To achieve in vivo tissue growth, an artificial bioreactor space must be established in which cells may grow. The in vivo bioreactor depends on harnessing the reparative qualities of the body to recruit stem cells into an implanted scaffold, and on utilizing the vasculature to supply all necessary growth components.", "Incandescence is the emission of electromagnetic radiation (including visible light) from a hot body as a result of its high temperature. The term derives from the Latin verb incandescere, to glow white. A common use of incandescence is the incandescent light bulb, now being phased out.\nIncandescence is due to thermal radiation. It usually refers specifically to visible light, while thermal radiation refers also to infrared or any other electromagnetic radiation.", "In practice, virtually all solid or liquid substances start to glow around 525 °C (798 K), with a mildly dull red color, whether or not a chemical reaction takes place that produces light as a result of an exothermic process. This limit is called the Draper point.
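As a rough numerical illustration of the Draper point and of why incandescent sources radiate mostly outside the visible band (a point taken up again below), the following Python sketch evaluates Wien's displacement law and numerically integrates Planck's law over the visible range. The ~2800 K filament temperature is an assumed, typical value rather than one taken from this article, and the 380–750 nm visible band is an approximation.

```python
import numpy as np

# Physical constants (SI units)
H = 6.626e-34       # Planck constant, J s
C = 2.998e8         # speed of light, m/s
KB = 1.381e-23      # Boltzmann constant, J/K
SIGMA = 5.670e-8    # Stefan-Boltzmann constant, W m^-2 K^-4
WIEN_B = 2.898e-3   # Wien displacement constant, m K


def planck(wavelength_m, temp_k):
    """Blackbody spectral radiance B_lambda in W sr^-1 m^-3 (vectorised)."""
    return (2.0 * H * C**2 / wavelength_m**5) / np.expm1(
        H * C / (wavelength_m * KB * temp_k))


def visible_fraction(temp_k):
    """Approximate fraction of total blackbody emission in the 380-750 nm band."""
    lam = np.linspace(380e-9, 750e-9, 2000)
    visible = np.sum(planck(lam, temp_k)) * (lam[1] - lam[0])
    total = SIGMA * temp_k**4 / np.pi   # integral of B_lambda over all wavelengths
    return visible / total


for label, temp in [("Draper point, ~798 K", 798.0),
                    ("incandescent filament, ~2800 K (assumed)", 2800.0)]:
    peak_um = WIEN_B / temp * 1e6
    print(f"{label}: emission peak ~{peak_um:.2f} um, "
          f"visible fraction ~{visible_fraction(temp):.2e}")
```

At the Draper point the emission peak lies at roughly 3.6 μm, deep in the infrared, and only a vanishing fraction of the power is visible, which is why the glow is barely perceptible; even at typical filament temperatures only a few percent of the emitted power falls in the visible band.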
The incandescence does not vanish below that temperature, but it is too weak in the visible spectrum to be perceptible.\nAt higher temperatures, the substance becomes brighter and its color changes from red towards white and finally blue.\nIncandescence is exploited in incandescent light bulbs, in which a filament is heated to a temperature at which a fraction of the radiation falls in the visible spectrum. The majority of the radiation, however, is emitted in the infrared part of the spectrum, rendering incandescent lights relatively inefficient as a light source. If the filament could be made hotter, efficiency would increase; however, there are currently no materials able to withstand such temperatures which would be appropriate for use in lamps.\nMore efficient light sources, such as fluorescent lamps and LEDs, do not function by incandescence.\nSunlight is the incandescence of the \"white hot\" surface of the Sun.", "Indiglo backlights typically emit a distinct greenish-blue color and evenly light the entire display or dial. Certain Indiglo models, e.g., Timex Datalink USB, use a negative liquid-crystal display so that only the digits are illuminated, rather than the entire display.", "Timex introduced the Indiglo technology in 1992 in their Ironman watch line and subsequently expanded its use to 70% of their watch line, including mens and womens watches, sport watches and chronographs. Casio introduced their version of electroluminescent backlight technology in 1995.\nThe Indiglo name was later licensed to other companies, such as Austin Innovations Inc., for use on their electroluminescent products.\nFrom 2006-2011, the Timex Group marketed a line of high-end quartz watches under the TX Watch Company brand, using a proprietary six-hand, four-motor, micro-processor controlled movement. To separate the brand from Timex, the movements had luxury features associated with a higher-end brand, e.g., sapphire crystals and stainless steel or titanium casework &mdash; and used hands treated with super-luminova luminescent pigment for low-light legibility &mdash; rather than indiglo technology. \nWhen the Timex Group migrated the microprocessor-controlled, multi-motor, multi-hand technology to its Timex brand in 2012, it created a sub-collection marketed as Intelligent Quartz (IQ). The line employed the same movements and capabilities from the TX brand, at a much lower price-point -- incorporating indiglo technology rather than the super-luminova pigments.", "Indiglo is a product feature on watches marketed by Timex, incorporating an electroluminescent panel as a backlight for even illumination of the watch dial.\nThe brand is owned by Indiglo Corporation, which is in turn solely owned by Timex, and the name derives from the word indigo, as the original watches featuring the technology emitted a green-blue light.", "Vitamin D is produced when the skin is exposed to UVB, whether from sunlight or an artificial source. It is needed for mineralization of bone and bone growth. Areas in which vitamin D's role is being investigated include reducing the risk of cancer, heart disease, multiple sclerosis and glucose dysregulation. Exposing arms and legs to a minimal 0.5 erythemal (mild sunburn) UVB dose is equal to consuming about 3000 IU of vitamin D3. 
In a study in Boston, MA, researchers found that adults who used tanning beds had \"robust\" levels of 25(OH)D (46 ng/mL on average), along with higher hip bone density, compared to adults who did not use them.\nObtaining vitamin D from indoor tanning has to be weighed against the risk of developing skin cancer. The indoor-tanning industry has stressed the relationship between tanning and the production of vitamin D. According to the US National Institutes of Health, some researchers have suggested that \"5–30 minutes of sun exposure between 10 AM and 3 PM at least twice a week to the face, arms, legs, or back without sunscreen usually lead to sufficient vitamin D synthesis and that the moderate use of commercial tanning beds that emit 2%–6% UVB radiation is also effective\". Most researchers say the health risks outweigh the benefits, that the UVB doses produced by tanning beds exceed what is needed for adequate vitamin D production, and that adequate vitamin D levels can be achieved by taking supplements and eating fortified foods.", "Indoor tanning involves using a device that emits ultraviolet radiation to produce a cosmetic tan. Typically found in tanning salons, gyms, spas, hotels, and sporting facilities, and less often in private residences, the most common device is a horizontal tanning bed, also known as a sunbed or solarium. Vertical devices are known as tanning booths or stand-up sunbeds.\nFirst introduced in the 1960s, indoor tanning became popular with people in the Western world, particularly in Scandinavia, in the late 1970s. The practice finds a cultural parallel in skin whitening in Asian countries, and both support multibillion-dollar industries. Most indoor tanners are women, 16–25 years old, who want to improve their appearance or mood, acquire a pre-holiday tan, or treat a skin condition.\nAcross Australia, Canada, Northern Europe and the United States, 18.2% of adults, 45.2% of university students, and 22% of adolescents had tanned indoors in the previous year, according to studies in 2007–2012. As of 2010 the indoor-tanning industry employed 160,000 in the United States, where 10–30 million tanners visit 25,000 indoor facilities annually. In the United Kingdom, 5,350 tanning salons were in operation in 2009. From 1997 several countries and US states banned under-18s from indoor tanning. The commercial use of tanning beds was banned entirely in Brazil in 2009 and Australia in 2015. , thirteen U.S. states and one territory have banned under-18s from using them, and at least 42 states and the District of Columbia have imposed regulations, such as requiring parental consent.\nIndoor tanning is a source of UV radiation, which is known to cause skin cancer, including melanoma and skin aging, and is associated with sunburn, photodrug reactions, infections, weakening of the immune system, and damage to the eyes, including cataracts, photokeratitis (snow blindness) and eye cancer. Injuries caused by tanning devices lead to over 3,000 emergency-room cases a year in the United States alone. Physicians may use or recommend tanning devices to treat skin conditions such as psoriasis, but the World Health Organization does not recommend their use for cosmetic purposes. The WHO's International Agency for Research on Cancer includes tanning devices, along with ultraviolet radiation from the sun, in its list of group 1 carcinogens. 
Researchers at the Yale School of Public Health found evidence of addiction to tanning in a 2017 paper.", "Indoor tanning is most popular with white females, 16–25 years old, with low-to-moderate skin sensitivity, who know other tanners. Studies seeking to link indoor tanning to education level and income have returned inconsistent results. Prevalence was highest in one German study among those with a moderate level of education (neither high nor low).\nThe late teens to early–mid 20s is the highest-prevalence age group. In a national survey of white teenagers in 2003 in the US (aged 13–19), 24% had used a tanning facility. Indoor-tanning prevalence figures in the US vary from 30 million each year to just under 10 million (7.8 million women and 1.9 million men).\nThe figures in the US are in decline: according to the Centres for Disease Control and Prevention, usage in the 18–29 age group fell from 11.3 percent in 2010 to 8.6 percent in 2013, perhaps attributable in part to a 10% \"tanning tax\" introduced in 2010. Attitudes toward tanning vary across states; in one study, doctors in the northeast and midwest of the country were more likely than those in the south or west to recommend tanning beds to treat vitamin D deficiency and depression.\nTanning bed use is more prevalent in northern countries. In Sweden in 2001, 44% said they had used one (in a survey of 1,752 men and women aged 18–37). Their use increased in Denmark between 1994 and 2002 from 35% to 50% (reported use in the previous two years). In Germany, between 29% and 47% had used one, and one survey found that 21% had done so in the previous year. In France, 15% of adults in 1994–1995 had tanned indoors; the practice was more common in the north of France. In 2006, 12% of grade 9–10 students in Canada had used a tanning bed in the last year. In 2004, 7% of 8–11-year-olds in Scotland said they had used one. Tanning bed use is higher in the UK in the north of England. One study found that the prevalence was lower in London than in less urban areas of the country.", "In 1890 the Danish physician Niels Ryberg Finsen developed a carbon arc lamp (\"Finsen's light\" or a \"Finsen lamp\") that produced ultraviolet radiation for use in skin therapy, including to treat lupus vulgaris. He won the 1903 Nobel Prize in Physiology or Medicine for his work.\nUntil the late 19th century in Europe and the United States, pale skin was a symbol of high social class among white people. Victorian women would carry parasols and wear wide-brimmed hats and gloves; their homes featured heavy curtains that kept out the sun. But as the working classes moved from country work to city factories, and to crowded, dark, unsanitary homes, pale skin became increasingly associated with poverty and ill health. In 1923 Coco Chanel returned from a holiday in Cannes with a tan, later telling Vogue magazine: \"A golden tan is the index of chic!\" Tanned skin had become a fashion accessory.\nIn parallel physicians began advising their patients on the benefits of the \"sun cure\", citing its antiseptic properties. Sunshine was promoted as a treatment for depression, diabetes, constipation, pneumonia, high and low blood pressure, and many other ailments. Home-tanning equipment was introduced in the 1920s in the form of \"sunlamps\" or \"health lamps\", UV lamps that emitted a large percentage of UVB, leading to burns. 
Friedrich Wolff, a German scientist, began using UV light on athletes, and developed beds that emitted 95% UVA and 5% UVB, which reduced the likelihood of burning. The worlds first tanning salon opened in 1977 in Berlin, followed by tanning salons in Europe and North America in the late 1970s. In 1978 Wolffs devices began selling in the United States, and the indoor tanning industry was born.", "Tanning lamps, also known as tanning bulbs or tanning tubes, produce the ultraviolet light in tanning devices. The performance (or output) varies widely between brands and styles. Most are low-pressure fluorescent tubes, but high-pressure bulbs also exist. The electronics systems and number of lamps affect performance, but to a lesser degree than the lamp itself. Tanning lamps are regulated separately from tanning beds in most countries, as they are the consumable portion of the system.", "Most tanning beds are horizontal enclosures with a bench and canopy (lid) that house long, low-pressure fluorescent bulbs (100–200 watt) under an acrylic surface. The tanner is surrounded by bulbs when the canopy is closed. Modern tanning beds emit mostly UVA (the sun emits around 95% UVA and 5% UVB). One review of studies found that the UVB irradiance of beds was on average lower than the summer sun at latitudes 37°S to 35°N, but that UVA irradiance was on average much higher.\nThe user sets a timer (or it is set remotely by the salon operator), lies on the bed and pulls down the canopy. The maximum exposure time for most low-pressure beds is 15–20 minutes. In the US, maximum times are set by the manufacturer according to how long it takes to produce four \"minimal erythema doses\" (MEDs), an upper limit laid down by the FDA. An MED is the amount of UV radiation that will produce erythema (redness of the skin) within a few hours of exposure.\nHigh-pressure beds use smaller, higher-wattage quartz bulbs and emit a higher percentage of UVA. They may emit 10–15 times more UVA than the midday sun, and have a shorter maximum exposure time (typically 10–12 minutes). UVA gives an immediate, short-term tan by bronzing melanin in the skin, but no new melanin is formed. UVB has no immediate bronzing effect, but with a delay of 72 hours makes the skin produce new melanin, leading to tans of longer duration. UVA is less likely to cause burning or dry skin than UVB, but is associated with wrinkling and loss of elasticity because it penetrates deeper.\nCommercial tanning beds cost $6,000 to $30,000 as of 2006, with high-pressure beds at the high end. One Manhattan chain was charging $10 to $35 per session in 2016, depending on the number, strength, and type of bulbs. This is known as level 1–6 tanning; level 1 involves a basic low-pressure bed with 36 x 100-watt bulbs. Depending on the quality of the bed, it may contain a separate facial tanner, shoulder tanners, a choice of tanning levels and UVA/UVB combinations, sound system, MP3 connection, aromatherapy, air conditioning, a misting option and voice guide. There are also open-air beds, in which the tanner is not entirely enclosed.", "Tanning booths (also known as stand-up sunbeds) are vertical enclosures; the tanner stands during exposure, hanging onto straps or handrails, and is surrounded by tanning bulbs. In most models, the tanner closes a door, but there are open designs too. Some booths use the same electronics and lamps as tanning beds, but most have more lamps and are likely to use 100–160 watt lamps. They often have a maximum session of 7–15 minutes. 
There are other technical differences, or degrees of intensity, but for all practical intents, their function and safety are the same as a horizontal bed. Booths have a smaller footprint, which some commercial operators find useful. Some tanners prefer booths out of concern for hygiene, since the only shared surface is the floor.", "Before entering a tanning unit, the tanner usually applies indoor tanning lotion to the whole body and may use a separate facial-tanning lotion. These lotions are considerably more expensive than drugstore lotions. They contain no sunscreen, but instead moisturize the skin with ingredients such as aloe vera, hempseed oil and sunflower seed oil. They may also contain dihydroxyacetone, a sunless tanner. So-called \"tingle\" tanning lotions cause vasodilation, increasing blood circulation.\nGoggles (eye protection) should be worn to avoid eye damage. In one 2004 study, tanners said they avoided goggles to prevent leaving pale skin around the eyes. In the US, CFR Title 21 requires that new tanning equipment come with eye protection and most states require that commercial tanning operators provide eye protection for their clients. Laws in other countries are similar.", "Reasons cited for indoor tanning include improving appearance, acquiring a pre-holiday tan, feeling good and treating a skin condition. Tanners often cite feelings of well-being; exposure to tanning beds is reported to \"increase serum beta-endorphin levels by 44%\". Beta-endorphin is associated with feelings of relaxation and euphoria, including \"runner's high\".\nImproving appearance is the most-cited reason. Studies show that tanned skin has semiotic power, signifying health, beauty, youth and the ability to seduce. Women, in particular, say not only that they prefer their appearance with tanned skin, but that they receive the same message from friends and family, especially from other women. They believe tanned skin makes them look thinner and more toned, and that it covers or heals skin blemishes such as acne. Other reasons include acquiring a base tan for further sunbathing; that a uniform tan is easier to achieve in a tanning unit than in the sun, and a desire to avoid tan lines. Proponents of indoor tanning say that tanning beds deliver more consistent, predictable exposure than the sun, but studies show that indoor tanners do suffer burns. In two surveys in the US in 1998 and 2004, 58% of indoor tanners said they had been burned during sessions.", "Tanning facilities are ubiquitous in the US, although the figures are in decline. In a study in the US published in 2002, there was a higher density in colder areas with a lower median income and higher proportion of whites. A study in 1997 found an average of 50.3 indoor-tanning facilities in 20 US cities (13.89 facilities for every 100,000 residents); the highest was 134 in Minneapolis, MN, and the lowest four in Honolulu, Hawaii. In 2006 a study of 116 cities in the US found 41.8 facilities on average, a higher density than either Starbucks or McDonalds. Of the countrys 125 top colleges and universities in 2014, 12% had indoor-tanning facilities on campus and 42.4% in off-campus housing, 96% of the latter free of charge to the tenants.\nThere are fewer professional salons than tanning facilities; the latter includes tanning beds in gyms, spas and similar. According to the FDA, citing the Indoor Tanning Association, there were 25,000 tanning salons in 2010 in the US (population 308.7 million in 2010). 
Mailing-list data suggest there were 18,200 in September 2008 and 12,200 in September 2015, a decline of 30 percent. According to Chris Sternberg of the American Suntanning Association, the figures are 18,000 in 2009 and 9,500 in 2016.\nThe South West Public Health Observatory found 5,350 tanning salons in the UK in 2009: 4,492 in England (population 52.6 million in 2010), 484 in Scotland (5.3 million), 203 in Wales (3 million) and 171 in Northern Ireland (1.8 million).", "Certain skin conditions, including keratosis, psoriasis, eczema and acne, may be treated with UVB light therapy, including by using tanning beds in commercial salons. Using tanning beds allows patients to access UV exposure when dermatologist-provided phototherapy is not available. A systematic review of studies, published in Dermatology and Therapy in 2015, noted that moderate sunlight is a treatment recommended by the American National Psoriasis Foundation, and suggested that clinicians consider UV phototherapy and tanning beds as a source of that therapy.\nWhen UV light therapy is used in combination with psoralen, an oral or topical medication, the combined therapy is referred to as PUVA. A concern with the use of commercial tanning is that beds that primarily emit UVA may not treat psoriasis effectively. One study found that plaque psoriasis is responsive to erythemogenic doses of either UVA or UVB. It does require more energy to reach erythemogenic dosing with UVA.", "Ultraviolet radiation (UVR) is part of the electromagnetic spectrum, just beyond visible light. Ultraviolet wavelengths are 100 to 400 nanometres (nm, billionths of a metre) and are divided into three bands: A, B and C. UVA wavelengths are the longest, 315 to 400 nm; UVB are 280 to 315 nm, and UVC wavelengths are the shortest, 100 to 280 nm.\nAbout 95% of the UVR that reaches the earth from the sun is UVA and 5% UVB; no appreciable UVC reaches the earth. While tanning systems before the 1970s produced some UVC, modern tanning devices produce no UVC, a small amount of UVB and mostly UVA. Classified by the WHO as a group 1 carcinogen, UVR has \"complex and mixed effects on human health\". While it causes skin cancer and other damage, including wrinkles, it also triggers the synthesis of vitamin D and endorphins in the skin.", "Exposure to UV radiation is associated with skin aging, wrinkle production, liver spots, loss of skin elasticity, erythema (reddening of the skin), sunburn, photokeratitis (snow blindness), ocular melanoma (eye cancer), and infections. Tanning beds can contain many microbes, some of which are pathogens that can cause skin infections and gastric distress. In one study in New York in 2009, the most common pathogens found on tanning beds were Pseudomonas spp. (aeruginosa and putida), Bacillus spp., Klebsiella pneumoniae, Enterococcus species, Staphylococcus aureus, and Enterobacter cloacae. Several prescription and over-the-counter drugs, including antidepressants, antibiotics, antifungals and anti-diabetic medication, can cause photosensitivity, which makes burning the skin while tanning more likely. This risk is increased by a lack of staff training in tanning facilities.", "Exposure to ultraviolet radiation (UVR), whether from the sun or tanning devices is known to be a major cause of the three main types of skin cancer: non-melanoma skin cancer (basal cell carcinoma and squamous cell carcinoma) and melanoma. 
Overexposure to UVR induces at least two types of DNA damage: cyclobutane–pyrimidine dimers (CPDs) and 6–4 photoproducts (6–4PPs). While DNA repair enzymes can fix some of this damage, if they are not sufficiently effective, a cell will acquire genetic mutations which may cause the cell to die or become cancerous. These mutations can result in cancer, aging, persistent mutation and cell death. For example, squamous cell carcinoma can be caused by a UVB-induced mutation in the p53 gene.\nNon-melanoma skin cancer includes squamous cell carcinoma (SCC) and basal cell carcinoma (BCC), and is more common than melanoma. With early detection and treatment, it is typically not life-threatening. Prevalence increases with age, cumulative exposure to UV, and proximity to the equator. It is most prevalent in Australia, where the rate is 1,000 in 100,000 and where, as of 2000, it represented 75 percent of all cancers.\nMelanoma accounts for approximately one percent of skin cancers, but causes most skin cancer-related deaths. The average age of diagnosis is 63, and it is the most common cancer in the 25–29 age group and the second most common in the 15–29 age group, which may be due in part to the increased UV exposure and use of indoor tanning observed in this population. In the United States, the melanoma incidence rate was 22.3 per 100,000, based on 2010–2014 data from the National Institutes of Health Surveillance, Epidemiology and End Results (SEER) Program, and the death rate was 2.7 per 100,000. An estimated 9,730 people were expected to die of melanoma in the United States in 2017, and these numbers are anticipated to continue rising. Although 91.7% of patients diagnosed with melanoma survive beyond 5 years, advanced melanoma is largely incurable, and only 19.9% of patients with metastatic disease survive beyond 5 years. An international meta-analysis performed in 2014 estimated that 464,170 cases of skin cancer annually can be attributed to exposure to indoor tanning.\nA 2012 analysis of epidemiological studies found a 20% increase in the risk of melanoma (a relative risk of 1.20) among those who had ever used a tanning device compared to those who had not, and a 59% increase (a relative risk of 1.59) among those who had used one before age 35. Additionally, a 2014 systematic review and meta-analysis found that indoor tanners had a 16 percent increased risk of developing melanoma, which increased to 23 percent for North Americans. For those who started tanning indoors before age 25, the risk increased further, to 35%, compared with those who began after age 25.
In 2010 under the Affordable Care Act, a 10% \"tanning tax\" was introduced, which is added to the fees charged by tanning facilities; it was expected to raise $2.7 billion for health care over ten years.\nTanning beds are regulated in the United States by the federal government's Code of Federal Regulations (21 CFR 1040.20). This is designed to ensure that the devices adhere to a set of safety rules, with the primary focus on sunbed and lamp manufacturers regarding maximum exposure times and product equivalence. Additionally, devices must have a \"Recommended Exposure Schedule\" posted on both the front of the tanning bed and in the owner's manual, and must list the original lamp that was certified for that particular tanning bed. Salon owners are required to replace the lamps with either exactly the same lamp, or a lamp certified by the lamp manufacturer to be equivalent.\nStates control regulations for salons regarding operator training, sanitization of sunbeds and eyewear, and additional warning signs. Many states also ban or regulate the use of tanning beds by minors under the age of 18.\nAmerican osteopathic physician Joseph Mercola was prosecuted in 2016 by the Federal Trade Commission (FTC) for selling tanning beds with claims that they would \"reverse your wrinkles\" and \"slash your risk of cancer\". The settlement meant that consumers who had purchased the devices were eligible for refunds totalling $5.3 million. Mercola had falsely claimed that the FDA \"endorsed indoor tanning devices as safe\", and had failed to disclose that he had paid the Vitamin D Council for its endorsement of his devices. The FTC said that it was deceptive for the defendants to fail to disclose that tanning is not necessary to produce vitamin D.", "In New Zealand, indoor tanning is regulated by a voluntary code of practice. Salons are asked to turn away under-18s, those with type 1 skin (fair skin that burns easily or never tans), people who experienced episodes of sunburn as children, and anyone taking certain medications, anyone with several moles, or anyone who has had skin cancer. Tanners are asked to sign a consent form, which includes health information and advice about the importance of wearing goggles. Surveys have found a high level of non-compliance. The government has carried out bi-annual surveys of tanning facilities since 2012.", "In 1997 France became the first country to ban minors from indoor tanning. Under-18s are similarly prohibited in Austria, Belgium, Germany, Ireland, Portugal, Spain, Poland and the United Kingdom. In addition, Ireland prohibits salons from offering \"happy hour\" discounts. The Netherlands also forbids the use of tanning beds by those under the age of 18.", "Book chapters are cited in short form above and long form below. All other sources are cited above only.\n*Coups, Elliot J. and Phillips, L. Alison (2012). \"Prevalence and Correlates of Indoor Tanning\", in Carolyn J. Heckman, Sharon L. Manne (eds.), Shedding Light on Indoor Tanning. Dordrecht: Springer Science & Business Media, 5–32.\n*Hay, Jennifer and Lipsky, Samara (2012). \"International Perspectives on Indoor Tanning\", in Heckman and Manne (eds.), 179–193.\n*Hunt, Yvonne; Augustson, Erik; Rutten, Lila; Moser, Richard; and Yaroch, Amy (2012). \"History and Culture of Tanning in the United States\", in Heckman and Manne (eds.), 33–68.\n*Lessin, Stuart R.; Perlis, Clifford S.; and Zook, Matthew B. (2012). \"How Ultraviolet Radiation Tans Skin\", in Heckman and Manne (eds.), 87–94.\n*Lluria-Prevatt, Maria; Dickinson, Sally E.; and Alberts, David S. (2013).
\"Skin Cancer Prevention\", in David Alberts, Lisa M. Hess (eds.). Fundamentals of Cancer Prevention. Heidelberg and Berlin: Springer Verlag, 321–376.", "Brazil's National Health Surveillance Agency banned the use of tanning beds for cosmetic purposes in 2009, making that country the first to enact a ban. It followed a 2002 ban on minors using the beds.", "Commercial tanning services are banned in all states, except the Northern Territory where no salons are in operation. Private ownership of tanning beds is permitted.", "Addiction to indoor tanning has been recognized as a psychiatric disorder. The disorder is characterized as excessive indoor tanning that causes the subject personal distress; it has been associated with anxiety, eating disorders and smoking. The media has described the addiction as tanorexia. According to the Canadian Pediatric Society, \"repeated UVR exposures, and the use of indoor tanning beds specifically, may have important systemic and behavioural consequences, including mood changes, compulsive disorders, pain and physical dependency.\"", "Children and adolescents who use tanning beds are at greater risk because of biological vulnerability to UV radiation. Epidemiological studies have shown that exposure to artificial tanning increases the risk of malignant melanoma and that the longer the exposure, the greater the risk, particularly in individuals exposed before the age of 30 or who have been sunburned.\nOne study conducted among college students found that awareness of the risks of tanning beds did not deter the students from using them. Teenagers are frequent targets of tanning industry marketing, which includes offers of coupons and placing ads in high-school newspapers. Members of the United States House Committee on Energy and Commerce commissioned a \"sting\" operation in 2012, in which callers posing as a 16-year-old woman who wanted to tan for the first time called 300 tanning salons in the US. Staff reportedly failed to follow FDA recommendations, denied the risks of tanning, and offered misleading information about benefits.", "Indoor tanning is prohibited for under-18s in British Columbia, Alberta, Manitoba, Saskatchewan, Ontario, Quebec, and Prince Edward Island; and for under-19s in New Brunswick, Nova Scotia, Newfoundland and Labrador, and the Northwest Territories. Health Canada recommends against the use of tanning equipment.", "A heterogeneous mixture (e. g. liquid and solid) can be separated by mechanical separation processes like filtration or centrifugation. Homogeneous mixtures can be separated by molecular separation processes; these are either equilibrium-based or rate-controlled. Equilibrium-based processes are operating by the formation of two immiscible phases with different compositions at equilibrium, an example is distillation (in distillation the vapor has another composition than the liquid). Rate-controlled processes are based on different transport rates of compounds through a medium, examples are adsorption, ion exchange or crystallization.\nSeparation of a mixture into two phases can be done by an energy separating agent, a mass separating agent, a barrier or external fields. Energy-separating agents are used for creating a second phase (immiscible of different composition than the first phase), they are the most common techniques used in industry. For example, leads the addition of heat (the separating agent) to a liquid (first phase) to the formation of vapor (second phase). Mass-separating agents are other chemicals. 
They selectively dissolve or absorb one of the products; they are either a liquid (for sorption, extractive distillation or extraction) or a solid (for adsorption or ion exchange). The use of a barrier which restricts the movement of one compound but not the other (semipermeable membranes) is less common; external fields are used only in special applications.", "Industrial separation processes are technical procedures used in industry to separate a product from impurities or other products. The original mixture may either be a natural resource (like ore, oil or sugar cane) or the product of a chemical reaction (like a drug or an organic solvent).", "Separation processes are of great economic importance, as they account for 40–90% of capital and operating costs in industry. The separation processes applied to mixtures include, among others, washing, extraction, pressing, drying, clarification, evaporation, crystallization and filtration. Often several separation processes are performed successively. Separation operations have several different functions:\n* Purification of raw materials and products and recovery of by-products\n* Recycling of solvents and unconverted reactants\n* Removal of contaminants from effluents", "The magnetoelastic effect can be used in the development of force sensors. This effect has been used in sensors:\n* in civil engineering.\n* for monitoring of large diesel engines in locomotives.\n* for monitoring of ball valves.\n* for biomedical monitoring.\nInverse magnetoelastic effects also have to be considered as a side effect of accidental or intentional application of mechanical stresses to the magnetic core of an inductive component, e.g. fluxgates or generator/motor stators installed with interference fits.", "A method suitable for effective testing of the magnetoelastic effect in magnetic materials should fulfill the following requirements:\n* the magnetic circuit of the tested sample should be closed. An open magnetic circuit causes demagnetization, which reduces the magnetoelastic effect and complicates its analysis.\n* the distribution of stresses should be uniform, and the value and direction of the stresses should be known.\n* it should be possible to make magnetizing and sensing windings on the sample - these are necessary to measure the magnetic hysteresis loop under mechanical stresses.\nThe following testing methods have been developed:\n* tensile stresses applied to a strip of magnetic material in the shape of a ribbon. Disadvantage: open magnetic circuit of the tested sample.\n* tensile or compressive stresses applied to a frame-shaped sample. Disadvantage: only bulk materials may be tested; there are no stresses in the joints of the sample columns.\n* compressive stresses applied to a ring core in the sideways direction. Disadvantage: non-uniform stress distribution in the core.\n* tensile or compressive stresses applied axially to a ring sample. Disadvantage: the stresses are perpendicular to the magnetizing field.", "In the case of a single stress σ acting upon a single magnetic domain, the magnetic strain energy density can be expressed as:\nE_σ = (3/2)·λ_s·σ·sin²θ\nwhere λ_s is the magnetostrictive expansion at saturation, and θ is the angle between the saturation magnetization and the stress's direction.\nWhen λ_s and σ are both positive (as in iron under tension), the energy is minimum for θ = 0, i.e. when the tension is aligned with the saturation magnetization. Consequently, the magnetization is increased by tension.", "In fact, magnetostriction is more complex and depends on the direction of the crystal axes.
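As a quick numerical check of the single-domain expression above (a sketch, not from the source; the saturation magnetostriction and the 100 MPa stress are illustrative values rather than measured constants for any particular material):

```python
import numpy as np

def magnetoelastic_energy(lambda_s, sigma, theta):
    """Single-domain magnetoelastic energy density E = (3/2)*lambda_s*sigma*sin^2(theta), in J/m^3."""
    return 1.5 * lambda_s * sigma * np.sin(theta) ** 2

lambda_s = 20e-6   # illustrative saturation magnetostriction (dimensionless)
sigma = 100e6      # illustrative tensile stress of 100 MPa, in Pa

for deg in (0, 30, 60, 90):
    E = magnetoelastic_energy(lambda_s, sigma, np.radians(deg))
    print(f"theta = {deg:2d} deg  ->  E = {E:7.1f} J/m^3")
```

The minimum at θ = 0 is what the surrounding text describes: for positive λ_s under tension, the magnetization is pulled toward the stress axis.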
In iron, the [100] axes are the directions of easy magnetization, while there is little magnetization along the [111] directions (unless the magnetization becomes close to the saturation magnetization, leading to the change of the domain orientation from [111] to [100]). This magnetic anisotropy pushed authors to define two independent longitudinal magnetostrictions, λ100 and λ111.\n* In cubic materials, the magnetostriction along any axis can be defined by a known linear combination of these two constants. For instance, the elongation along [110] is a linear combination of λ100 and λ111.\n* Under the assumption of isotropic magnetostriction (i.e. the domain magnetostriction is the same in all crystallographic directions), λ100 = λ111 = λ_s and the linear dependence between the elastic energy and the stress is conserved: E_σ = −(3/2)·λ_s·σ·(α1γ1 + α2γ2 + α3γ3)². Here, α1, α2 and α3 are the direction cosines of the domain magnetization, and γ1, γ2, γ3 those of the bond directions, towards the crystallographic directions.", "Under a given uni-axial mechanical stress σ, the flux density B for a given magnetizing field strength H may increase or decrease. The way in which a material responds to stresses depends on its saturation magnetostriction λ_s. For this analysis, compressive stresses are considered negative, whereas tensile stresses are positive.\nAccording to Le Chatelier's principle:\n(∂B/∂σ)_H = (∂λ/∂H)_σ\nThis means that when the product λ_s·σ is positive, the flux density increases under stress. On the other hand, when the product λ_s·σ is negative, the flux density decreases under stress. This effect has been confirmed experimentally.", "The inverse magnetostrictive effect, magnetoelastic effect or Villari effect, after its discoverer Emilio Villari, is the change of the magnetic susceptibility of a material when subjected to a mechanical stress.", "The magnetostriction characterizes the shape change of a ferromagnetic material during magnetization, whereas the inverse magnetostrictive effect characterizes the change of sample magnetization (for a given magnetizing field strength H) when mechanical stresses are applied to the sample.", "In aqueous solution it is a weak acid, having a pKa of 3.7:\nHNCO ⇌ H+ + NCO−\nIsocyanic acid hydrolyses to carbon dioxide and ammonia:\nHNCO + H2O → CO2 + NH3\nDilute solutions of isocyanic acid are stable in inert solvents, e.g. ether and chlorinated hydrocarbons.\nAt high concentrations, isocyanic acid oligomerizes to give the trimer cyanuric acid and cyamelide, a polymer. These species usually are easily separated from liquid- or gas-phase reaction products.\nIsocyanic acid reacts with amines to give ureas (carbamides):\nHNCO + RNH2 → RNHC(O)NH2\nThis reaction is called carbamylation.\nHNCO adds across electron-rich double bonds, such as vinyl ethers, to give the corresponding isocyanates.\nIsocyanic acid, HNCO, is a Lewis acid whose free energy, enthalpy and entropy changes for its 1:1 association with a number of bases in carbon tetrachloride solution at 25 °C have been reported. The acceptor properties of HNCO are compared with those of other Lewis acids in the ECW model.\nLow-temperature photolysis of solids containing HNCO creates the tautomer cyanic acid, HOCN, also called hydrogen cyanate. Pure cyanic acid has not been isolated, and isocyanic acid is the predominant form in all solvents. Sometimes information presented for cyanic acid in reference books is actually for isocyanic acid.", "Isocyanic acid is a chemical compound with the structural formula HNCO, which is often written as H−N=C=O. It is a colourless, volatile and poisonous substance, with a boiling point of 23.5 °C. It is the predominant tautomer and an isomer of cyanic acid (aka.
cyanol, HOCN).\nThe derived anion of isocyanic acid is the same as the derived anion of cyanic acid; that anion is NCO−, which is called cyanate. The related functional group is isocyanate; it is distinct from cyanate, fulminate, and nitrile oxide.\nIsocyanic acid was discovered in 1830 by Justus von Liebig and Friedrich Wöhler.\nIsocyanic acid is the simplest stable chemical compound that contains carbon, hydrogen, nitrogen, and oxygen, the four most commonly found elements in organic chemistry and biology. It is the only fairly stable one of the four linear isomers with molecular formula HOCN that have been synthesized, the others being cyanic acid (cyanol, HOCN) and the elusive fulminic acid (HCNO) and isofulminic acid (HONC).", "Although the electronic structure according to valence bond theory can be written as H−N=C=O, the vibrational spectrum has a band at 2268.8 cm−1 in the gas phase, which some say indicates a carbon–nitrogen triple bond. If so, then the canonical form H−N+≡C−O− is the major resonance structure.\nHowever, classic vibrational analysis would indicate that the 2268.8 cm−1 band is the asymmetric N=C=O stretch, as per Colthup et al., as well as the NIST Chemistry WebBook, which also reports the corresponding symmetric N=C=O stretch (weak in infrared, but strong in Raman) to be 1327 cm−1. Based on these classic assignments, there is no need to invoke a fully charged state for the N and O atoms to explain the vibrational spectral data.", "Isocyanic acid can be made by protonation of the cyanate anion, such as from salts like potassium cyanate, by either gaseous hydrogen chloride or acids such as oxalic acid.\nHNCO also can be made by the high-temperature thermal decomposition of the trimer cyanuric acid:\n(HNCO)3 → 3 HNCO\nIn the reverse of the famous synthesis of urea by Friedrich Wöhler,\nCO(NH2)2 → NH3 + HNCO\nisocyanic acid is produced and rapidly trimerizes to cyanuric acid.", "Isocyanic acid has been detected in many kinds of interstellar environments.\nIsocyanic acid is also present in various forms of smoke, including smog and cigarette smoke. It was detected using mass spectrometry, and easily dissolves in water, posing a health risk to the lungs.", "The tautomer, known as cyanic acid, HOCN, in which the oxygen atom is protonated, exists in equilibrium with isocyanic acid to the extent of about 3%. The vibrational spectrum is indicative of the presence of a triple bond between the nitrogen and carbon atoms.", "Like allolactose, IPTG binds to the lac repressor and releases the tetrameric repressor from the lac operator in an allosteric manner, thereby allowing the transcription of genes in the lac operon, such as the gene coding for beta-galactosidase, a hydrolase enzyme that catalyzes the hydrolysis of β-galactosides into monosaccharides. But unlike allolactose, the sulfur (S) atom creates a chemical bond which is non-hydrolyzable by the cell, preventing the cell from metabolizing or degrading the inducer. Therefore, its concentration remains constant during an experiment.\nIPTG uptake by E. coli can be independent of the action of lactose permease, since other transport pathways are also involved. At low concentrations, IPTG enters cells through lactose permease, but at high concentrations (typically used for protein induction), IPTG can enter the cells independently of lactose permease.", "When stored as a powder at 4 °C or below, IPTG is stable for 5 years. It is significantly less stable in solution; Sigma recommends storage for no more than a month at room temperature.
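As a practical aside on preparing IPTG for induction experiments (a hypothetical worked example, not from the source: the stock concentration, volumes and target concentration are arbitrary illustrative values; the molar mass of IPTG, C9H18O5S, is about 238.3 g/mol):

```python
# Hypothetical worked example: preparing an IPTG stock and diluting it for induction.
MOLAR_MASS_IPTG = 238.30  # g/mol for C9H18O5S

def grams_for_stock(conc_mol_per_l, volume_ml):
    """Grams of IPTG powder needed for a stock of the given concentration and volume."""
    return conc_mol_per_l * (volume_ml / 1000.0) * MOLAR_MASS_IPTG

def stock_volume_for_induction(stock_mM, target_mM, culture_ml):
    """Millilitres of stock to add to a culture to reach the target concentration (C1*V1 = C2*V2)."""
    return target_mM * culture_ml / stock_mM

# 10 mL of a 1 M (1000 mM) stock, then induce 50 mL of culture at 0.5 mM IPTG
print(f"powder needed: {grams_for_stock(1.0, 10):.3f} g")
print(f"stock to add:  {stock_volume_for_induction(1000, 0.5, 50)*1000:.0f} µL")
```

The 0.5 mM target used here sits inside the 100 μmol/L to 3.0 mmol/L induction range mentioned next.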
IPTG is an effective inducer of protein expression in the concentration range of 100 μmol/L to 3.0 mmol/L. If lacIq, a mutant that over-produces the lac repressor, is present, then a higher concentration of IPTG may be necessary.\nIn blue-white screening, IPTG is used together with X-gal. Blue-white screening allows colonies that have been transformed with a recombinant plasmid, rather than a non-recombinant one, to be identified in cloning experiments.", "Isopropyl β-D-1-thiogalactopyranoside (IPTG) is a molecular biology reagent. This compound is a molecular mimic of allolactose, a lactose metabolite that triggers transcription of the lac operon, and it is therefore used to induce protein expression where the gene is under the control of the lac operator.", "Jiří Linhart (13 April 1924 &ndash; 6 January 2011) was a nuclear fusion physicist and Czech Olympic swimmer. He competed in the men's 200 metre breaststroke at the 1948 Summer Olympics in London. He stayed on in London, where he took his PhD under the supervision of Denis Gabor. He was a pioneer of nuclear fusion, the author of [https://www.cambridge.org/core/journals/journal-of-fluid-mechanics/article/abs/plasma-physics-by-j-g-linhart-amsterdam-north-holland-publishing-co-1960-278-pp-50s/B5B22BC6E784AF39819CB8F40B17D112# \"Plasma Physics\" (1960) - the first textbook on plasma science], and of many [https://iopscience.iop.org/article/10.1088/0029-5515/10/3/001 academic papers] and early [https://www.freepatentsonline.com/3113917.html patents on nuclear reactors].\nIn 1956 he became group Head of Acceleration at CERN, and in 1960 he became the head of the EURATOM group in Frascati.\nHe was also a very keen chess player, playing in the [https://www.chessgames.com/perl/chessplayer?pid=161073 Haifa Olympiad in 1976.]", "Julie Elizabeth Gough is a Professor of Biomaterials and Tissue Engineering at The University of Manchester. She specializes in controlling cellular responses at the cell-biomaterial interface by engineering defined surfaces for mechanically sensitive connective tissues.", "Gough is a cell biologist. She studied cell- and immunobiology, and molecular pathology and toxicology, at the University of Leicester, graduating with a BSc in 1993 and an MSc in 1994, respectively. She continued her doctoral studies at the University of Nottingham, earning her PhD in Biomaterials in 1998. Between 1998 and 2002, she furthered her studies at both Nottingham and Imperial College London as a postdoctoral fellow working on novel composites and bioactive glasses for bone repair.", "Gough joined the School of Materials, Faculty of Science and Engineering at The University of Manchester, as a lecturer in 2002. She was quickly promoted to Senior Lecturer and Reader in 2006 and 2010, respectively.\nFrom 2012 to 2013 she was a Royal Academy of Engineering/Leverhulme Trust Senior Research Fellow. Gough was made full Professor in 2014.\nSince then, she has continued her research in tissue engineering of mechanically sensitive connective tissues such as bone, cartilage, skeletal muscle and the intervertebral disc. This includes analysis and control of cells such as osteoblasts, chondrocytes, fibroblasts, keratinocytes, myoblasts and macrophages on a variety of materials and scaffolds. Her research also involves the development of scaffolds for tissue repair using novel hydrogels and magnesium alloys as various porous and fibrous materials.
Gough has worked on the advisory board of the journal Biomaterials Science, and as part of the local organising committee for the World Biomaterials Congress.", "The K factor or characterization factor is defined from the boiling temperature expressed in degrees Rankine, °R = 1.8·Tb[K], and the density ρ relative to water at 60 °F:\nK(UOP) = (Tb[°R])^(1/3) / ρ\nThe K factor is a systematic way of classifying a crude oil according to its paraffinic, naphthenic, intermediate or aromatic nature. Values of 12.5 or higher indicate a crude oil of predominantly paraffinic constituents, while values of 10 or lower indicate a crude of more aromatic nature. The K(UOP) is also referred to as the UOP K factor or just UOPK.", "Dr. Kenneth B. Storey is among the top 2% of highly cited scientists in the world.\n*[https://pubmed.ncbi.nlm.nih.gov/?term=storey+kb&sort=date&size=100 PubMed]\n* [https://scholar.google.com/citations?user=mzhKxEoAAAAJ&hl=en Google Scholar]", "Storey's research includes studies of enzyme properties, gene expression, protein phosphorylation, epigenetics, and cellular signal transduction mechanisms to seek out the basic principles of how organisms endure and flourish under extreme conditions. He is particularly known within the field of cryobiology for his studies of animals that can survive freezing, especially the frozen \"frog-sicles\" (Rana sylvatica) that have made his work popular with multiple TV shows and magazines. Storey's studies of the adaptations that allow frogs, insects, and other animals to survive freezing have made major advances in the understanding of how cells, tissues and organs can endure freezing. Storey was also responsible for the discovery that some turtle species are freeze tolerant: newly hatched painted turtles that spend their first winter on land (Chrysemys picta marginata & C. p. bellii). These turtles are unique as they are the only reptiles, and the highest vertebrate life form, known to tolerate prolonged natural freezing of extracellular body fluids during winter hibernation. These advances may aid the development of organ cryopreservation technology. A second area of his research is metabolic rate depression - understanding the mechanisms by which some animals can reduce their metabolism and enter a state of hypometabolism or torpor that allows them to survive prolonged environmental stresses. His studies have identified molecular mechanisms that underlie metabolic arrest across phylogeny and that support phenomena including mammalian hibernation, estivation, and anoxia- and ischemia-tolerance. These studies hold key applications for medical science, particularly for preservation technologies that aim to extend the survival time of excised organs in cold or frozen storage. Additional applications include insights into hyperglycemia in metabolic syndrome and diabetes, and anoxic and ischemic damage caused by heart attack and stroke. Furthermore, Storey's lab has created several web-based programs freely available for [http://www.kenstoreylab.com/research-tools/ data management, data plotting, and microRNA analysis].", "Kenneth B. Storey (born October 23, 1949) is a Canadian scientist whose work draws from a variety of fields including biochemistry and molecular biology. He is a Professor of Biology, Biochemistry and Chemistry at Carleton University in Ottawa, Canada.
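Returning to the characterization factor defined above, here is a minimal sketch of the calculation (not from the source; the boiling point and specific gravity below are illustrative values, not data for any particular crude):

```python
def uop_k_factor(boiling_point_kelvin, specific_gravity_60F):
    """Watson/UOP characterization factor: cube root of the boiling point in degrees
    Rankine divided by the specific gravity relative to water at 60 °F."""
    t_rankine = 1.8 * boiling_point_kelvin
    return t_rankine ** (1.0 / 3.0) / specific_gravity_60F

# Illustrative values: mean average boiling point of 600 K, specific gravity 0.85
k = uop_k_factor(600.0, 0.85)
print(f"K(UOP) ≈ {k:.2f}")   # ≈ 12.1, i.e. intermediate-to-paraffinic by the thresholds above
```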
Storey has a worldwide reputation for his research on biochemical adaptation - the molecular mechanisms that allow animals to adapt to and endure severe environmental stresses such as deep cold, oxygen deprivation, and desiccation.", "Kenneth Storey studied biochemistry at the University of Calgary (B.Sc. '71) and zoology at the University of British Columbia (Ph.D. '74). Storey is a Professor of Biochemistry, cross-appointed in the Departments of Biology, Chemistry and Neuroscience, and holds the Canada Research Chair in Molecular Physiology at Carleton University in Ottawa, Canada.\nStorey is an elected fellow of the Royal Society of Canada, of the Society for Cryobiology and of the American Association for the Advancement of Science. He has won fellowships and awards for research excellence including the Fry medal from the Canadian Society of Zoologists (2011), the Flavelle medal from the Royal Society of Canada (2010), the Ottawa Life Sciences Council Basic Research Award (1998), a [https://killamprogram.canadacouncil.ca/ Killam Senior Research Fellowship] (1993–1995), the Ayerst Award from the [https://csmb-scbm.ca/ Canadian Society for Molecular Biosciences] (1989), an E.W.R. Steacie Memorial Fellowship from the Natural Sciences and Engineering Research Council of Canada (1984–1986), and four Carleton University Research Achievement Awards. Storey is the author of over 1,200 research articles and the editor of seven books; he has given over 500 talks at conferences and institutes worldwide and has organized numerous international symposia.", "The Koenigsberger ratio is the proportion of remanent magnetization relative to induced magnetization in natural rocks. It was first described by Koenigsberger. It is a dimensionless parameter often used in geophysical exploration to describe the magnetic characteristics of a geological body to help in interpreting magnetic anomaly patterns.\nThe total magnetization of a rock is the sum of its natural remanent magnetization and the magnetization induced by the ambient geomagnetic field. Thus, a Koenigsberger ratio, Q, greater than 1 indicates that the remanence properties contribute the majority of the total magnetization of the rock.", "LASNEX is a computer program that simulates the interactions between x-rays and a plasma, along with many effects associated with these interactions. The program is used to predict the performance of inertial confinement fusion (ICF) devices such as the Nova laser or proposed particle beam \"drivers\".\nVersions of LASNEX have been used since the late 1960s or early 1970s, and the program has been constantly updated. LASNEX's existence was mentioned in John Nuckolls' seminal paper in Nature in 1972 that first widely introduced the ICF concept, saying it was \"...like breaking an enemy code. It tells you how many divisions to bring to bear on a problem.\"\nLASNEX uses a 2-dimensional finite element method (FEM) for calculations, breaking down the experimental area into a grid of arbitrary polygons. Each node on the grid records values for various parameters in the simulation. Values for thermal (low-energy) electrons and ions, super-thermal (high-energy and relativistic) electrons, x-rays from the laser, reaction products and the electric and magnetic fields are all stored for each node. The simulation engine then evolves the system forward through time, reading values from the nodes, applying formulas, and writing them back out.
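As a brief numerical aside on the Koenigsberger ratio described above (a sketch assuming the conventional definition Q = NRM / (χ·H); the remanence, susceptibility and field values are illustrative):

```python
MU0 = 4e-7 * 3.141592653589793  # vacuum permeability, T*m/A

def koenigsberger_ratio(nrm_A_per_m, susceptibility_si, geomagnetic_field_T):
    """Q = natural remanent magnetization / induced magnetization (chi * H)."""
    H = geomagnetic_field_T / MU0       # ambient field strength, A/m
    induced = susceptibility_si * H     # induced magnetization, A/m
    return nrm_A_per_m / induced

# Illustrative basalt-like values: NRM = 2 A/m, chi = 0.02 (SI), field = 50 microtesla
q = koenigsberger_ratio(2.0, 0.02, 50e-6)
print(f"Q ≈ {q:.1f}")   # Q > 1: remanence dominates the total magnetization
```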
The process is very similar to other FEM systems, like those used in aerodynamics.\nIn spite of numerous problems in very early ICF research, LASNEX offered clear suggestions that slight increases in performance would be all that was needed to reach ignition. By the late 1970s further work with LASNEX indicated that the issue was not energy as much as the number of laser beams, and suggested that the Shiva laser, with 10 kJ of energy in 20 beams, would reach ignition. It did not, failing to contain the Rayleigh–Taylor instability. A review of the progress by The New York Times the following year noted that the system \"fell short of the more optimistic estimates by a factor of 10,000\".\nReal-world results from the Shiva project were then used to tune the LASNEX code, which now predicted that a somewhat larger machine, the Nova laser, would reach ignition. It did not; although Nova demonstrated fusion reactions on a large scale, it was far from ignition.\nNova's results were also used to tune the LASNEX system, which once again predicted that ignition could be reached, this time with a significantly larger machine. Given the past failures and rising costs, the Department of Energy decided to directly test the concept with a series of underground nuclear tests known as \"Halite\" and \"Centurion\", depending on which lab was handling the experiment. Halite/Centurion placed typical ICF targets in hohlraums, metal cylinders intended to smooth out the driver's energy so it shines on the fuel target evenly. The hohlraum/fuel assemblies were then placed at various distances from a small atomic bomb, detonation of which released significant quantities of x-rays. These x-rays heated the hohlraums until they glowed in the x-ray spectrum (having been heated \"x-ray hot\" as opposed to \"white hot\"), and it was this smooth x-ray illumination that started the fusion reactions within the fuel. These results demonstrated that the amount of energy needed to cause ignition was approximately 100 MJ, about 25 times greater than that of any machine being considered.\nThe data from Halite/Centurion were used to further tune LASNEX, which then predicted that careful shaping of the laser pulse would reduce the energy required by a factor of about 100, to between 1 and 2 MJ, so a design with a total output of 4 MJ was chosen to be on the safe side. This emerged as the National Ignition Facility concept. In 2022, NIF achieved ignition, triggering a self-sustaining fusion reaction which released 3.15 MJ of energy using 2.05 MJ of laser energy.\nFor these reasons, LASNEX is somewhat controversial in the ICF field. More accurately, LASNEX generally predicted a device's low-energy behaviour quite closely, but became increasingly inaccurate as the energy levels were increased.\nAdvanced 3D versions of the same basic concept, like ICF3D and HYDRA, continue to drive modern ICF design, and likewise have failed to closely match experimental performance.", "The LCP family or TagU family of proteins is a conserved family of phosphotransferases that are involved in the attachment of teichoic acid (TA) molecules to the gram-positive cell wall or cell membrane. It was initially thought to be the LytR (lytic repressor) component of a LytABC operon encoding autolysins, but the mechanism of regulation was later realized to be the production of TA molecules.
It was accordingly renamed TagU.\nThe \"LCP\" acronym derives from three proteins initially identified as containing this domain: LytR (now TagU), cpsA (\"Capsular polysaccharide expression regulator\"), and psr (\"PBP 5 synthesis repressor\"). These proteins were mistaken for transcriptional regulators for different reasons, but all three of them are now known to be TagU-like enzymes. While TagU itself only attaches TA molecules to the peptidoglycan cell wall (forming WTA), other LCP proteins may glycosylate cell wall proteins (A. oris LcpA) or attach TA molecules to a cell membrane anchor (forming LTA). Most, if not all, LCP proteins also have a secondary pyrophosphatase activity.\nTypical TagU proteins are made up of an N-terminal transmembrane domain (for anchoring), an optional, non-conserved accessory domain (CATH 3tflA01), a core catalytic domain, and sometimes a C-terminal domain for which the structure is unknown. The core LCP domain is a magnesium-dependent enzyme.", "Lactulose is a non-absorbable sugar used in the treatment of constipation and hepatic encephalopathy. It is administered orally for constipation, and either orally or rectally for hepatic encephalopathy. It generally begins working after 8–12 hours, but may take up to 2 days to improve constipation.\nCommon side effects include abdominal bloating and cramps. A potential exists for electrolyte problems as a result of the diarrhea it produces. No evidence of harm to the fetus has been found when used during pregnancy. It is generally regarded as safe during breastfeeding. It is classified as an osmotic laxative.\nLactulose was first made in 1929, and has been used medically since the 1950s. Lactulose is made from the milk sugar lactose, which is composed of two simple sugars, galactose and glucose. It is on the World Health Organization's List of Essential Medicines. It is available as a generic medication. In 2021, it was the 265th most commonly prescribed medication in the United States, with more than 1 million prescriptions.", "Lactulose is used in the treatment of chronic constipation in patients of all ages as a long-term treatment. The dosage of lactulose for chronic idiopathic constipation is adjusted depending on the constipation severity and desired effect, from a mild stool softener to causing diarrhea. Lactulose is contraindicated in cases of galactosemia, as most preparations contain the monosaccharide galactose due to its synthesis process.\nLactulose may be used to counter the constipating effects of opioids, and in the symptomatic treatment of hemorrhoids as a stool softener.\nLactulose is commonly prescribed for children who develop fear of their bowel movements and are withholders. This is because lactulose, when dosed in the proper amount, causes a bowel movement that is impossible to retain for very long. Lactulose is also used for the elderly because of its gentle and consistent results.", "Lactulose is used as a test for small intestine bacterial overgrowth (SIBO). Recently, its reliability for diagnosing SIBO has been seriously questioned. A large amount of it is given with subsequent testing of molecular hydrogen gas in the breath. The test is positive if an increase in exhaled hydrogen occurs before that which would be expected from digestion by the normal gut flora in the colon. An earlier result has been hypothesized to indicate digestion occurring within the small intestine.
An alternate explanation for differences in results is the variance in small bowel transit time among tested subjects.", "No evidence of harm to the fetus has been found when used during pregnancy. It is generally regarded as safe during breastfeeding.", "Lactulose is useful in treating hyperammonemia (high blood ammonia), which can lead to hepatic encephalopathy. Lactulose helps trap the ammonia (NH3) in the colon and bind it. It does this by using gut flora to acidify the colon, transforming the freely diffusible ammonia into ammonium ions (NH4+), which can no longer diffuse back into the blood. It is also useful for preventing hyperammonemia caused as a side effect of administration of valproic acid.", "Lactulose is not absorbed in the small intestine nor broken down by human enzymes, and thus stays in the digestive bolus through most of its course, causing retention of water through osmosis, leading to softer, easier-to-pass stool. It has a secondary laxative effect in the colon, where it is fermented by the gut flora, producing metabolites which have osmotic powers and peristalsis-stimulating effects (such as acetate), but also methane, which is associated with flatulence.\nLactulose is metabolized in the colon by bacterial flora into short-chain fatty acids, including lactic acid and acetic acid. These partially dissociate, acidifying the colonic contents (increasing the hydrogen ion concentration in the gut). This favors the formation of the nonabsorbable NH4+ from NH3, trapping NH3 in the colon and effectively reducing plasma ammonia concentrations. Lactulose is therefore effective in treating hepatic encephalopathy. Specifically, it is effective as secondary prevention of hepatic encephalopathy in people with cirrhosis. Moreover, research showed improved cognitive functions and health-related quality of life in people with cirrhosis with minimal hepatic encephalopathy treated with lactulose.", "Lactulose is a disaccharide formed from one molecule each of the simple sugars (monosaccharides) fructose and galactose. Lactulose is not normally present in raw milk, but is a product of heat processes: the greater the heat, the greater the amount of this substance (from 3.5 mg/L in low-temperature pasteurized milk to 744 mg/L in in-container sterilized milk).\nLactulose is produced commercially by isomerization of lactose. A variety of reaction conditions and catalysts can be used.", "Lactulose is available as a generic medication. It is available without prescription in most countries, but a prescription is required in the United States, the Philippines, and Austria.", "In some countries where lactulose may be obtained without a prescription, lactulose is commonly used as a food additive to improve taste and promote intestinal transit.", "Common side effects of lactulose are abdominal cramping, borborygmus, and flatulence. In normal individuals, overdose is considered uncomfortable, but not life-threatening. Uncommon side effects are nausea and vomiting. In sensitive individuals, such as the elderly or people with reduced kidney function, excess lactulose dosage can result in dehydration and electrolyte disturbances such as low magnesium levels. Ingestion of lactulose does not cause weight gain because it is not digestible and has no nutritional value. Although lactulose is less likely to cause dental caries than sucrose, as a sugar some potential for this exists. This should be taken into consideration when it is taken by people with a high susceptibility to this condition.",
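The ammonia-trapping step described above is essentially an acid-base partitioning effect, and the numbers can be made concrete with a short calculation. The following sketch is not from the source; it assumes the usual Henderson-Hasselbalch relation and a pKa of about 9.25 for the NH4+/NH3 pair, and estimates how the fraction of freely diffusible NH3 falls as fermentation acidifies the colonic contents.

```python
# Fraction of total ammonia present as freely diffusible NH3 at a given pH,
# from the Henderson-Hasselbalch relation with pKa(NH4+) ~ 9.25 (assumed value).
def nh3_fraction(pH, pKa=9.25):
    ratio = 10 ** (pH - pKa)       # [NH3]/[NH4+]
    return ratio / (1 + ratio)

for pH in (7.0, 6.0, 5.0):
    print(f"pH {pH}: {nh3_fraction(pH) * 100:.3f}% of total ammonia is NH3")

# Dropping the luminal pH from 7 to 5 cuts the diffusible NH3 fraction from
# roughly 0.6% to under 0.01%, which is the trapping effect described above.
```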
"In the general case, LLE (2) is nonintegrable. But it admits two integrable reductions:\n: a) in 1+1 dimensions, that is Eq. (3), it is integrable;\n: b) when the anisotropy matrix J vanishes. In this case the (1+1)-dimensional LLE (3) turns into the continuous classical Heisenberg ferromagnet equation (see e.g. Heisenberg model (classical)), which is already integrable.", "In solid-state physics, the Landau–Lifshitz equation (LLE), named for Lev Landau and Evgeny Lifshitz, is a partial differential equation describing the time evolution of magnetism in solids, depending on 1 time variable and 1, 2, or 3 space variables.", "The LLE describes an anisotropic magnet. The equation can be described as follows: it is an equation for a vector field S, in other words a function of the time and space coordinates taking values in R^3. The equation depends on a fixed symmetric 3 by 3 matrix J, usually assumed to be diagonal, that is, J = diag(J1, J2, J3). It is given by Hamilton's equation of motion for a Hamiltonian built from the gradient energy of S and the anisotropy term J(S) (where J(S) is the quadratic form of J applied to the vector S). In 1+1 dimensions this equation is\n: ∂S/∂t = S × ∂²S/∂x² + S × JS.\nIn 2+1 dimensions this equation takes the form\n: ∂S/∂t = S × (∂²S/∂x² + ∂²S/∂y²) + S × JS,\nwhich is the (2+1)-dimensional LLE. For the (3+1)-dimensional case the LLE looks like\n: ∂S/∂t = S × (∂²S/∂x² + ∂²S/∂y² + ∂²S/∂z²) + S × JS.", "In 1996 John Slonczewski expanded the model to account for the spin-transfer torque, i.e. the torque induced upon the magnetization by a spin-polarized current flowing through the ferromagnet. This is commonly written in terms of the unit moment m = M/M_s; the resulting equation involves the dimensionless damping parameter α, two driving-torque coefficients, and the unit vector along the polarization of the current.", "In a ferromagnet, the magnitude of the magnetization at each spacetime point is approximated by the saturation magnetization M_s (although it can be smaller when averaged over a chunk of volume). The Landau–Lifshitz equation, a precursor of the LLG equation, phenomenologically describes the rotation of the magnetization in response to the effective field H_eff, which accounts for not only a real magnetic field but also internal magnetic interactions such as exchange and anisotropy. An earlier, but equivalent, equation (the Landau–Lifshitz equation) was introduced by Landau and Lifshitz in 1935:\n: dM/dt = −γ M × H_eff − λ M × (M × H_eff),\nwhere γ is the electron gyromagnetic ratio and λ is a phenomenological damping parameter, often replaced by λ = αγ/M_s, where α is a dimensionless constant called the damping factor. The effective field H_eff is a combination of the external magnetic field, the demagnetizing field, and various internal magnetic interactions involving quantum mechanical effects, and is typically defined as the functional derivative of the magnetic free energy with respect to the local magnetization M. To solve this equation, additional conditions for the demagnetizing field must be included to accommodate the geometry of the material.",
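To make the damped precession described by the Landau–Lifshitz form above tangible, here is a minimal macrospin sketch. It is not taken from the source: the field strength, damping value, and explicit-Euler time stepping are illustrative assumptions, and the equation is written in its explicit (Landau–Lifshitz) form with γ′ = γ/(1 + α²).

```python
import numpy as np

# Minimal macrospin integration of the LLG equation in explicit form:
#   dm/dt = -g' m x H_eff - g'*alpha m x (m x H_eff),   g' = gamma/(1 + alpha^2)
gamma = 1.76e11                     # gyromagnetic ratio (rad s^-1 T^-1)
alpha = 0.02                        # dimensionless Gilbert damping (assumed)
H_eff = np.array([0.0, 0.0, 0.1])   # effective field (T), here a static z-field
m = np.array([1.0, 0.0, 0.0])       # unit magnetization, initially along x

dt, steps = 1e-13, 20000            # crude explicit-Euler stepping
g_prime = gamma / (1.0 + alpha**2)
for _ in range(steps):
    m_x_H = np.cross(m, H_eff)
    dm = -g_prime * m_x_H - g_prime * alpha * np.cross(m, m_x_H)
    m = m + dt * dm
    m /= np.linalg.norm(m)          # renormalize: the LLG conserves |m|

print(m)  # m has precessed about z while relaxing toward the field direction
```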
"In physics, the Landau–Lifshitz–Gilbert equation (usually abbreviated as the LLG equation), named for Lev Landau, Evgeny Lifshitz, and T. L. Gilbert, is the name used for a differential equation describing the dynamics (typically the precessional motion) of magnetization in a solid. It is a modified version by Gilbert of the original equation of Landau and Lifshitz. The LLG equation is similar to the Bloch equation, but they differ in the form of the damping term. The LLG equation describes a more general scenario of magnetization dynamics beyond the simple Larmor precession. In particular, the effective field driving the precessional motion of the magnetization is not restricted to real magnetic fields; it incorporates a wide range of mechanisms including magnetic anisotropy, exchange interaction, and so on.\nThe various forms of the LLG equation are commonly used in micromagnetics to model the effects of a magnetic field and other magnetic interactions on ferromagnetic materials. It provides a practical way to model the time-domain behavior of magnetic elements. Recent developments generalize the LLG equation to include the influence of spin-polarized currents in the form of spin-transfer torque.", "In 1955 Gilbert replaced the damping term in the Landau–Lifshitz (LL) equation by one that depends on the time derivative of the magnetization:\n: dM/dt = −γ M × H_eff + (α/M_s) M × dM/dt.\nThis is the Landau–Lifshitz–Gilbert (LLG) equation, where α is the damping parameter, which is characteristic of the material. It can be transformed into the Landau–Lifshitz equation:\n: dM/dt = −γ′ M × H_eff − (γ′α/M_s) M × (M × H_eff),\nwhere γ′ = γ/(1 + α²). In this form of the LL equation, the precessional term depends on the damping term. This better represents the behavior of real ferromagnets when the damping is large.", "The reaction is fueled with deuterium, a widely available non-radioactive hydrogen isotope composed of one proton, one neutron, and one electron. The deuterium is confined in the space between the atoms of a metal solid such as erbium or titanium. Erbium can indefinitely maintain deuterium atoms (deuterons) at densities of the order of 10^23 per cubic centimetre at room temperature. The deuteron-saturated metal forms an overall neutral plasma. The electron density of the metal reduces the likelihood that two deuterium nuclei will repel each other as they get closer together.\nA dynamitron electron-beam accelerator generates an electron beam that hits a tantalum target and produces gamma rays, irradiating titanium deuteride or erbium deuteride. A gamma ray of about 2.2 megaelectron volts (MeV) strikes a deuteron and splits it into a proton and a neutron. The neutron collides with another deuteron. This second, energetic deuteron can experience screened fusion or a stripping reaction.\nAlthough the lattice is notionally at room temperature, LCF creates an energetic environment inside the lattice where individual atoms achieve fusion-level energies. Heated regions are created at the micrometer scale.", "Lattice confinement fusion (LCF) is a type of nuclear fusion in which deuteron-saturated metals are exposed to gamma radiation or ion beams, such as in an IEC fusor, avoiding the confined high-temperature plasmas used in other methods of fusion.", "A related technique pumps deuterium gas through the wall of palladium-silver alloy tubing. The palladium is electrolytically loaded with deuterium. In some experiments this produces fast neutrons that trigger further reactions. Other experimenters (Fralick et al.) also made claims of anomalous heat produced by this system.", "In 2020, a team of NASA researchers seeking a new energy source for deep-space exploration missions published the first paper describing a method for triggering nuclear fusion in the space between the atoms of a metal solid, an example of screened fusion. The experiments did not produce self-sustaining reactions, and the electron source itself was energetically expensive.",
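The energies quoted in this account can be cross-checked from standard atomic masses. The sketch below is not from the source; the mass values are rounded to six decimal places, and it reproduces the roughly 2.2 MeV threshold for splitting a deuteron as well as the energy released by the two deuteron–deuteron fusion branches mentioned just below.

```python
# Back-of-envelope check of the reaction energies, using atomic masses in
# unified mass units (u) and 1 u = 931.494 MeV.
U_TO_MEV = 931.494
m_H1, m_n = 1.007825, 1.008665             # hydrogen-1 atom, free neutron
m_D, m_T, m_He3 = 2.014102, 3.016049, 3.016029

# Minimum gamma-ray energy needed to split a deuteron into a proton and a neutron:
print((m_H1 + m_n - m_D) * U_TO_MEV)       # ~2.22 MeV, matching the ~2.2 MeV quoted

# Energy released by the two D-D fusion branches:
print((2 * m_D - m_He3 - m_n) * U_TO_MEV)  # D + D -> He-3 + n, ~3.3 MeV
print((2 * m_D - m_T - m_H1) * U_TO_MEV)   # D + D -> T + p,    ~4.0 MeV
```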
"In a stripping reaction, the metal strips a neutron from an accelerated deuteron and fuses it with the metal, yielding a different isotope of the metal. If the produced metal isotope is radioactive, it may decay into another element, releasing energy in the form of ionizing radiation in the process.", "The energetic deuteron fuses with another deuteron, yielding either a helium-3 nucleus and a neutron or a tritium (hydrogen-3) nucleus and a proton. These fusion products may fuse with other deuterons, creating an alpha particle, or with another helium-3 or tritium nucleus. Each reaction releases energy, continuing the process.", "Pyroelectric fusion has previously been observed in erbium hydrides. A high-energy beam of deuterium ions generated by pyroelectric crystals was directed at a stationary, room-temperature deuterated target, and fusion was observed.\nIn previous fusion research, such as inertial confinement fusion (ICF), fuel such as the rarer tritium is subjected to high pressure for a nanosecond interval, triggering fusion. In magnetic confinement fusion (MCF), the fuel is heated in a plasma to temperatures much higher than those at the center of the Sun. In LCF, conditions sufficient for fusion are created in a metal lattice that is held at ambient temperature during exposure to high-energy photons. ICF devices momentarily reach densities of 10 cc, while MCF devices momentarily achieve 10.\nLattice confinement fusion requires energetic deuterons and is therefore not cold fusion.\nLattice confinement fusion is used as a method to increase the cathode fuel density of inertial electrostatic fusion devices such as a Farnsworth-Hirsch fusor. This increases the probability of fusion events occurring and therefore the radiation output produced. In applications where fusors are used as an X-ray, neutron, or proton radiation source, lattice confinement fusion improves the energy efficiency of the device.", "Some hepatitis C viral glycoproteins may attach to C-type lectins on the host cell surface (liver cells) to initiate infection. To avoid clearance from the body by the innate immune system, pathogens (e.g., virus particles and bacteria that infect human cells) often express surface lectins known as adhesins and hemagglutinins that bind to tissue-specific glycans on host cell-surface glycoproteins and glycolipids. Multiple viruses, including influenza and several viruses in the Paramyxoviridae family, use this mechanism to bind and gain entry to target cells.", "Lectins are carbohydrate-binding proteins that are highly specific for sugar groups that are part of other molecules, and so cause agglutination of particular cells or precipitation of glycoconjugates and polysaccharides. Lectins have a role in recognition at the cellular and molecular level and play numerous roles in biological recognition phenomena involving cells, carbohydrates, and proteins. Lectins also mediate attachment and binding of bacteria, viruses, and fungi to their intended targets.\nLectins are found in many foods. Some foods, such as beans and grains, need to be cooked, fermented or sprouted to reduce lectin content. Some lectins are beneficial, such as CLEC11A, which promotes bone growth, while others may be powerful toxins such as ricin.\nLectins may be disabled by specific mono- and oligosaccharides, which bind to ingested lectins from grains, legumes, nightshade plants, and dairy; binding can prevent their attachment to the carbohydrates within the cell membrane.
The selectivity of lectins means that they are useful for analyzing blood type, and they have been researched for potential use in genetically engineered crops to transfer pest resistance.", "The first writer to advocate a lectin-free diet was Peter J. DAdamo, a naturopathic physician best known for promoting the Blood type diet. He argued that lectins may damage a persons blood type by interfering with digestion, food metabolism, hormones, insulin production—and so should be avoided. D'Adamo provided no scientific evidence nor published data for his claims, and his diet has been criticized for making inaccurate statements about biochemistry.\nSteven Gundry proposed a lectin-free diet in his book The Plant Paradox (2017). It excludes a large range of commonplace foods including whole grains, legumes, and most fruit, as well as the nightshade vegetables: tomatoes, potatoes, eggplant, bell peppers, and chili peppers. Gundry's claims about lectins are considered pseudoscience. His book cites studies that have nothing to do with lectins, and some that show—contrary to his own recommendations—that avoiding the whole grains wheat, barley, and rye will allow increase of harmful bacteria while diminishing helpful bacteria.", "Lectins may bind to a soluble carbohydrate or to a carbohydrate moiety that is a part of a glycoprotein or glycolipid. They typically agglutinate certain animal cells and/or precipitate glycoconjugates. Most lectins do not possess enzymatic activity.", "The function of lectins in plants (legume lectin) is still uncertain. Once thought to be necessary for rhizobia binding, this proposed function was ruled out through lectin-knockout transgene studies.\nThe large concentration of lectins in plant seeds decreases with growth, and suggests a role in plant germination and perhaps in the seed's survival itself. The binding of glycoproteins on the surface of parasitic cells also is believed to be a function. Several plant lectins have been found to recognize noncarbohydrate ligands that are primarily hydrophobic in nature, including adenine, auxins, cytokinin, and indole acetic acid, as well as water-soluble porphyrins. These interactions may be physiologically relevant, since some of these molecules function as phytohormones.\nLectin receptor kinases (LecRKs) are believed to recognize damage associated molecular patterns (DAMPs), which are created or released from herbivore attack. In Arabidopsis, legume-type LecRKs Clade 1 has 11 LecRK proteins. LecRK-1.8 has been reported to recognize extracellular NAD molecules and LecRK-1.9 has been reported to recognize extracellular ATP molecules.", "Lectins are widespread in nature, and many foods contain the proteins. Some lectins can be harmful if poorly cooked or consumed in great quantities. They are most potent when raw as boiling, stewing or soaking in water for several hours can render most lectins inactive. Cooking raw beans at low heat, though, such as in a slow cooker, will not remove all the lectins.\nSome studies have found that lectins may interfere with absorption of some minerals, such as calcium, iron, phosphorus, and zinc. 
The binding of lectins to cells in the digestive tract may disrupt the breakdown and absorption of some nutrients, and as they bind to cells for long periods of time, some theories hold that they may play a role in certain inflammatory conditions such as rheumatoid arthritis and type 1 diabetes, but research supporting claims of long-term health effects in humans is limited and most existing studies have focused on developing countries where malnutrition may be a factor, or dietary choices are otherwise limited.", "Lectins are one of many toxic constituents of many raw plants that are inactivated by proper processing and preparation (e.g., cooking with heat, fermentation). For example, raw kidney beans naturally contain toxic levels of lectin (e.g. phytohaemagglutinin). Adverse effects may include nutritional deficiencies, and immune (allergic) reactions.", "Lectins are considered a major family of protein antinutrients, which are specific sugar-binding proteins exhibiting reversible carbohydrate-binding activities. Lectins are similar to antibodies in their ability to agglutinate red blood cells.\nMany legume seeds have been proven to contain high lectin activity, termed hemagglutination. Soybean is the most important grain legume crop in this category. Its seeds contain high activity of soybean lectins (soybean agglutinin or SBA).", "Lectins from legume plants, such as PHA or concanavalin A, have been used widely as model systems to understand the molecular basis of how proteins recognize carbohydrates, because they are relatively easy to obtain and have a wide variety of sugar specificities. The many crystal structures of legume lectins have led to a detailed insight of the atomic interactions between carbohydrates and proteins.\nLegume seed lectins have been studied for their insecticidal potential and have shown harmful effects for the development of pest.", "Concanavalin A and other commercially available lectins have been used widely in affinity chromatography for purifying glycoproteins.\nIn general, proteins may be characterized with respect to glycoforms and carbohydrate structure by means of affinity chromatography, blotting, affinity electrophoresis, and affinity immunoelectrophoreis with lectins, as well as in microarrays, as in evanescent-field fluorescence-assisted lectin microarray.", "One example of the powerful biological attributes of lectins is the biochemical warfare agent ricin. The protein ricin is isolated from seeds of the castor oil plant and comprises two protein domains. Abrin from the jequirity pea is similar:\n* One domain is a lectin that binds cell surface galactosyl residues and enables the protein to enter cells.\n* The second domain is an N-glycosidase that cleaves nucleobases from ribosomal RNA, resulting in inhibition of protein synthesis and cell death.", "Lectins have these functions in animals:\n* The regulation of cell adhesion\n* The regulation of glycoprotein synthesis\n* The regulation of blood protein levels\n* The binding of soluble extracellular and intercellular glycoproteins\n* As a receptor on the surface of mammalian liver cells for the recognition of galactose residues, which results in removal of certain glycoproteins from the circulatory system\n* As a receptor that recognizes hydrolytic enzymes containing mannose-6-phosphate, and targets these proteins for delivery to the lysosomes; I-cell disease is one type of defect in this particular system.\n* Lectins are known to play important roles in the innate immune system. 
Lectins such as the mannose-binding lectin, help mediate the first-line defense against invading microorganisms. Other immune lectins play a role in self-nonself discrimination and they likely modulate inflammatory and autoreactive processes. Intelectins (X-type lectins) bind microbial glycans and may function in the innate immune system as well. Lectins may be involved in pattern recognition and pathogen elimination in the innate immunity of vertebrates including fishes.", "Long before a deeper understanding of their numerous biological functions, the plant lectins, also known as phytohemagglutinins, were noted for their particularly high specificity for foreign glycoconjugates (e.g., those of fungi and animals) and used in biomedicine for blood cell testing and in biochemistry for fractionation.\nAlthough they were first discovered more than 100 years ago in plants, now lectins are known to be present throughout nature. The earliest description of a lectin is believed to have been given by Peter Hermann Stillmark in his doctoral thesis presented in 1888 to the University of Dorpat. Stillmark isolated ricin, an extremely toxic hemagglutinin, from seeds of the castor plant (Ricinus communis).\nThe first lectin to be purified on a large scale and available on a commercial basis was concanavalin A, which is now the most-used lectin for characterization and purification of sugar-containing molecules and cellular structures. The legume lectins are probably the most well-studied lectins.", "Purified lectins are important in a clinical setting because they are used for blood typing. Some of the glycolipids and glycoproteins on an individual's red blood cells can be identified by lectins.\n* A lectin from Dolichos biflorus is used to identify cells that belong to the A1 blood group.\n* A lectin from Ulex europaeus is used to identify the H blood group antigen.\n* A lectin from Vicia graminea is used to identify the N blood group antigen.\n* A lectin from Iberis amara is used to identify the M blood group antigen.\n* A lectin from coconut milk is used to identify Theros antigen.\n* A lectin from Carex is used to identify R antigen.\nIn neuroscience, the anterograde labeling method is used to trace the path of efferent axons with PHA-L, a lectin from the kidney bean.\nA lectin (BanLec) from bananas inhibits HIV-1 in vitro. Achylectins, isolated from Tachypleus tridentatus, show specific agglutinating activity against human A-type erythrocytes. Anti-B agglutinins such as anti-BCJ and anti-BLD separated from Charybdis japonica and Lymantria dispar, respectively, are of value both in routine blood grouping and research.", "William C. Boyd alone and then together with Elizabeth Shapleigh introduced the term \"lectin\" in 1954 from the Latin word lectus, \"chosen\" (from the verb legere, to choose or pick out).", "Lentztrehaloses A, B, and C are trehalose analogues found in an actinomycete Lentzea sp. ML457-mF8. Lentztrehaloses A and B can be synthesized chemically. The non-reducing disaccharide trehalose is commonly used in foods and various products as stabilizer and humectant, respectively. Trehalose has been shown to have curative effects for treating various diseases in animal models including neurodegenerative diseases, hepatic diseases, and arteriosclerosis. Trehalose, however, is readily digested by hydrolytic enzyme trehalase that is widely expressed in many organisms from microbes to human. As a result, trehalose may cause decomposition of the containing products. 
And its medicinal effect may be reduced by the hydrolysis by trehalase. Lentztrehaloses are rarely hydrolyzed by microbial and mammalian trehalases and may be used in various areas as a biologically stable substitute of trehalose.", "An important parameter in wet scrubbing systems is the rate of liquid flow. It is common in wet scrubber terminology to express the liquid flow as a function of the gas flow rate that is being treated. This is commonly called the liquid-to-gas ratio (L/G ratio) and uses the units of gallons per 1,000 actual cubic feet or litres per cubic metre (L/m). \nExpressing the amount of liquid used as a ratio enables systems of different sizes to be readily compared.\nFor particulate removal, the liquid-to-gas ratio is a function of the mechanical design of the system; while for gas absorption this ratio gives an indication of the difficulty of removing a pollutant. Most wet scrubbers used for particulate control operate with liquid-to-gas ratios in the range of 4 to 20 gallons per 1,000 actual cubic foot (0.5 to 3 litres per actual cubic metre). \nDepending on scrubber design, a minimum volume of liquid is required to \"wet\" the scrubber internals and create sufficient collection targets. After a certain optimum point, adding excess liquid to a particulate wet scrubber does not increase efficiency and in fact, could be counter-productive by causing excessive pressure loss. Liquid-to-gas ratios for gas absorption are often higher, in the range of 20 to 40 gallons per 1,000 actual cubic foot (3 to 6 litres per actual cubic metre).\nL/G ratio illustrates a number of points about the choice of wet scrubbers used for gas absorption. For example, because flue-gas desulfurization systems must deal with heavy particulate loadings, open, simple designs (such as venturi, spray chamber and moving bed) are used.\nAlso, the liquid-to-gas ratio for the absorption process is higher than for particle removal and gas velocities are kept low to enhance the absorption process.\nSolubility is a very important factor affecting the amount of a pollutant that can be absorbed. Solubility governs the amount of liquid required (liquid-to-gas ratio) and the necessary contact time. More soluble gases require less liquid. Also, more soluble gases will be absorbed faster.", "* Bethea, R. M. 1978. Air Pollution Control Technology. New York: Van Nostrand Reinhold.\n* National Asphalt Pavement Association. 1978. The Maintenance and Operation of Exhaust Systems in the Hot Mix Batch Plant. 2nd ed. Information Series 52.\n* Perry, J. H. (Ed.). 1973. Chemical Engineers’ Handbook. 5th ed. New York: McGraw-Hill.\n* Richards, J. R. 1995. Control of Particulate Emissions (APTI Course 413). U.S. Environmental Protection Agency.\n* Richards, J. R. 1995. Control of Gaseous Emissions. (APTI Course 415). U.S. Environmental Protection Agency.\n* Schifftner, K. C. 1979, April. Venturi scrubber operation and maintenance. Paper presented at the U.S. EPA Environmental Research Information Center. Atlanta, GA.\n* Semrau, K. T. 1977. Practical process design of particulate scrubbers. Chemical Engineering. 84:87-91.\n* U.S. Environmental Protection Agency. 1982, September. Control Techniques for Particulate Emissions from Stationary Sources. Vol. 1. EPA 450/3-81-005a.\n* Wechselblatt, P. M. 1975. Wet scrubbers (particulates). In F. L. Cross and H. E. Hesketh (Eds.), Handbook for the Operation and Maintenance of Air Pollution Control Equipment. 
Westport: Technomic Publishing.", "Burning of the most abundant isotope of lithium, lithium-7, occurs by a collision of Li and a proton producing beryllium-8, which promptly decays into two helium-4 nuclei. The temperature necessary for this reaction is just below the temperature necessary for hydrogen fusion. Convection in low-mass stars ensures that lithium in the whole volume of the star is depleted. Therefore, the presence of the lithium line in a candidate brown dwarf's spectrum is a strong indicator that it is indeed substellar.", "From a study of lithium abundances in 53 T Tauri stars, it has been found that lithium depletion varies strongly with size, suggesting that lithium burning by the P-P chain, during the last highly convective and unstable stages during the pre–main sequence later phase of the Hayashi contraction may be one of the main sources of energy for T Tauri stars. Rapid rotation tends to improve mixing and increase the transport of lithium into deeper layers where it is destroyed. T Tauri stars generally increase their rotation rates as they age, through contraction and spin-up, as they conserve angular momentum. This causes an increased rate of lithium loss with age. Lithium burning will also increase with higher temperatures and mass, and will last for at most a little over 100 million years.\nThe P-P chain for lithium burning is as follows\nIt will not occur in stars less than sixty times the mass of Jupiter. In this way, the rate of lithium depletion can be used to calculate the age of the star.", "The use of lithium to distinguish candidate brown dwarfs from low-mass stars is commonly referred to as the lithium test. Heavier stars like the Sun can retain lithium in their outer atmospheres, which never get hot enough for lithium depletion, but those are distinguishable from brown dwarfs by their size. Brown dwarfs at the high end of their mass range (60–75 M) can be hot enough to deplete their lithium when they are young. Dwarfs of mass greater than 65 M can burn off their lithium by the time they are half a billion years old; thus, this test is not perfect.", "Lithium burning is a nucleosynthetic process in which lithium is depleted in a star. Lithium is generally present in brown dwarfs and not in older low-mass stars. Stars, which by definition must achieve the high temperature (2.5 × 10 K) necessary for fusing hydrogen, rapidly deplete their lithium.", "The basis of many functional gastrointestinal disorders (FGIDs) is distension of the intestinal lumen. Such luminal distension may induce pain, a sensation of bloating, abdominal distension and motility disorders. Therapeutic approaches seek to reduce factors that lead to distension, particularly of the distal small and proximal large intestine. Food substances that can induce distension are those that are poorly absorbed in the proximal small intestine, osmotically active, and fermented by intestinal bacteria with hydrogen (as opposed to methane) production. The small molecule FODMAPs exhibit these characteristics.\nOver many years, there have been multiple observations that ingestion of certain short-chain carbohydrates, including lactose, fructose and sorbitol, fructans and galactooligosaccharides, can induce gastrointestinal discomfort similar to that of people with irritable bowel syndrome. 
These studies also showed that dietary restriction of short-chain carbohydrates was associated with symptoms improvement.\nThese short-chain carbohydrates (lactose, fructose and sorbitol, fructans and GOS) behave similarly in the intestine. Firstly, being small molecules and either poorly absorbed or not absorbed at all, they drag water into the intestine via osmosis. Secondly, these molecules are readily fermented by colonic bacteria, so upon malabsorption in the small intestine they enter the large intestine where they generate gases (hydrogen, carbon dioxide and methane). The dual actions of these carbohydrates cause an expansion in volume of intestinal contents, which stretches the intestinal wall and stimulates nerves in the gut. It is this stretching that triggers the sensations of pain and discomfort that are commonly experienced by people with IBS.\nThe FODMAP concept was first published in 2005 as part of a hypothesis paper. In this paper, it was proposed that a collective reduction in the dietary intake of all indigestible or slowly absorbed, short-chain carbohydrates would minimise stretching of the intestinal wall. This was proposed to reduce stimulation of the guts nervous system and provide the best chance of reducing symptom generation in people with IBS (see below). At the time, there was no collective term for indigestible or slowly absorbed, short-chain carbohydrates, so the term FODMAP' was created to improve understanding and facilitate communication of the concept.\nThe low FODMAP diet was originally developed by a research team at Monash University in Melbourne, Australia. The Monash team undertook the first research to investigate whether a low FODMAP diet improved symptom control in patients with IBS and established the mechanism by which the diet exerted its effect. Monash University also established a rigorous food analysis program to measure the FODMAP content of a wide selection of Australian and international foods. The FODMAP composition data generated by Monash University updated previous data that was based on limited literature, with guesses (sometimes wrong) made where there was little information.", "Below are low-FODMAP foods categorized by group according to the Monash University \"Low-FODMAP Diet\".\n* Vegetables: alfalfa, bean sprouts, green beans, bok choy, capsicum (bell pepper), carrot, chives, fresh herbs, choy sum, cucumber, lettuce, tomato, zucchini, the green parts of leeks and spring onions\n* Fruits: orange, grapes, honeydew melon (not watermelon)\n* Protein: meats, fish, chicken, eggs, tofu (not silken), tempeh\n* Dairy: lactose-free milk, lactose-free yoghurts, hard cheese\n* Breads and cereals: rice, crisped rice, maize or corn, potatoes, quinoa, and breads made with their flours alone; however, oats and spelt are relatively low in FODMAPs\n* Biscuits (cookies) and snacks: made with flour of cereals listed above, without high FODMAP ingredients added (such as onion, pear, honey, or polyol artificial sweeteners)\n* Nuts and seeds: almonds (no more than ten nuts per serving), pumpkin seeds; not cashews or pistachios\n* Beverage options: water, coffee, tea\nOther sources confirm the suitability of these and suggest some additional foods.", "A low-FODMAP diet is a person's global restriction of consumption of all fermentable carbohydrates (FODMAPs), recommended only for a short time. 
A low-FODMAP diet is recommended for managing patients with irritable bowel syndrome (IBS) and can reduce digestive symptoms of IBS including bloating and flatulence.\nIf the problem lies with indigestible fiber instead, the patient may be directed to a low-residue diet.", "A low-FODMAP diet might help to improve short-term digestive symptoms in adults with functional abdominal bloating and irritable bowel syndrome, but its long-term use can have negative effects because it causes a detrimental impact on the gut microbiota and metabolome. It should only be used for short periods of time and under the advice of a specialist. More studies are needed to evaluate its effectiveness in children with irritable bowel syndrome.\nThere is only a little evidence of its effectiveness in treating functional symptoms in inflammatory bowel disease from small studies that are susceptible to bias. More studies are needed to assess the true impact of this diet on health.\nIn addition, the use of a low-FODMAP diet without medical advice can lead to serious health risks, including nutritional deficiencies and misdiagnosis, so it is advisable to conduct a complete medical evaluation before starting a low-FODMAP diet to ensure a correct diagnosis and that the appropriate therapy may be undertaken.\nSince the consumption of gluten is suppressed or reduced with a low-FODMAP diet, the improvement of the digestive symptoms with this diet may not be related to the withdrawal of the FODMAPs, but of gluten, indicating the presence of an unrecognized celiac disease, avoiding its diagnosis and correct treatment, with the consequent risk of several serious health complications, including various types of cancer.\nA low-FODMAP diet is highly restrictive in various groups of nutrients, can be impractical to follow in the long-term and may add an unnecessary financial burden.", "Lume is a short term for the luminous phosphorescent glowing solution applied on watch dials. There are some people who \"relume\" watches, or replace faded lume. Formerly, lume consisted mostly of radium; however, radium is radioactive and has been mostly replaced on new watches by less bright, but less toxic compounds. After radium was effectively outlawed in 1968, tritium became the luminescent material of choice, because, while still radioactive, it is much less potent than radium, tritium being about as radioactive as an x-ray, the decrease in radioactivity resulting from a diminishment of strength and quantity of the beta waves that are given off by tritium as an element.\nCommon pigments used in lume include the phosphorescent pigments zinc sulfide and strontium aluminate. Use of zinc sulfide for safety related products dates back to the 1930s. However, the development of strontium oxide aluminate, with a luminance approximately 10 times greater than zinc sulfide, has relegated most zinc sulfide based products to the novelty category. 
Strontium oxide aluminate based pigments are now used in exit signs, pathway marking, and other safety related signage.\nStrontium aluminate based afterglow pigments are marketed under brandnames like Super-LumiNova, Watchlume Co, NoctiLumina, and Glow in the Dark (Phosphorescent) Technologies.", "*Light-emitting diodes (LEDs) emit light via electro-luminescence.\n*Phosphors, materials that emit light when irradiated by higher-energy electromagnetic radiation or particle radiation\n*Laser, and lamp industry\n*Phosphor thermometry, measuring temperature using phosphorescence\n*Thermoluminescence dating\n*Thermoluminescent dosimeter\n*Non-disruptive observation of processes within a cell.\nLuminescence occurs in some minerals when they are exposed to low-powered sources of ultraviolet or infrared electromagnetic radiation (for example, portable UV lamps) at atmospheric pressure and atmospheric temperatures. This property of these minerals can be used during the process of mineral identification at rock outcrops in the field or in the laboratory.", "Luminescence is the \"spontaneous emission of radiation from an electronically excited species (or from a vibrationally excited species) not in thermal equilibrium with its environment\", according to the IUPAC definition. A luminescent object is emitting \"cold light\", in contrast to \"incandescence\", where an object only emits light after heating. Generally, the emission of light is due to the movement of electrons between different energy levels within an atom after excitation by external factors. However, the exact mechanism of light emission in \"vibrationally excited species\" is unknown, as seen in sonoluminescence. \nThe dials, hands, scales, and signs of aviation and navigational instruments and markings are often coated with luminescent materials in a process known as \"luminising\".", "*Radioluminescence, a result of bombardment by ionizing radiation\n*Electroluminescence, a result of an electric current passed through a substance\n**Cathodoluminescence, a result of a luminescent material being struck by electrons\n*Chemiluminescence, the emission of light as a result of a chemical reaction\n**Bioluminescence, a result of biochemical reactions in a living organism\n**Electrochemiluminescence, a result of an electrochemical reaction\n**Lyoluminescence, a result of dissolving a solid (usually heavily irradiated) in a liquid solvent\n**Candoluminescence, is light emitted by certain materials at elevated temperatures, which differs from the blackbody emission expected at the temperature in question.\n*Mechanoluminescence, a result of a mechanical action on a solid\n**Triboluminescence, generated when bonds in a material are broken when that material is scratched, crushed, or rubbed\n**Fractoluminescence, generated when bonds in certain crystals are broken by fractures\n**Piezoluminescence, produced by the action of pressure on certain solids\n**Sonoluminescence, a result of imploding bubbles in a liquid when excited by sound\n*Crystalloluminescence, produced during crystallization\n*Thermoluminescence, the re-emission of absorbed energy when a substance is heated\n**Cryoluminescence, the emission of light when an object is cooled (an example of this is wulfenite)\n*Photoluminescence, a result of the absorption of photons\n**Fluorescence, traditionally defined as the emission of light that ends immediately after the source of excitation is removed. 
As the definition does not fully describe the phenomenon, quantum mechanics is employed where it is defined as there is no change in spin multiplicity from the state of excitation to emission of light.\n**Phosphorescence, traditionally defined as persistent emission of light after the end of excitation. As the definition does not fully describe the phenomenon, quantum mechanics is employed where it is defined as there is a change in spin multiplicity from the state of excitation to the emission of light.", "In chemistry, a luminophore (sometimes shortened to lumophore) is an atom or functional group in a chemical compound that is responsible for its luminescent properties. Luminophores can be either organic or inorganic.\nLuminophores can be further classified as fluorophores or phosphors, depending on the nature of the excited state responsible for the emission of photons. However, some luminophores cannot be classified as being exclusively fluorophores or phosphors. Examples include transition-metal complexes such as tris(bipyridine)ruthenium(II) chloride, whose luminescence comes from an excited (nominally triplet) metal-to-ligand charge-transfer (MLCT) state, which is not a true triplet state in the strict sense of the definition; and colloidal quantum dots, whose emissive state does not have either a purely singlet or triplet spin.\nMost luminophores consist of conjugated π systems or transition-metal complexes. There are also purely inorganic luminophores, such as zinc sulfide doped with rare-earth metal ions, rare-earth metal oxysulfides doped with other rare-earth metal ions, yttrium oxide doped with rare-earth metal ions, zinc orthosilicate doped with manganese ions, etc. Luminophores can be observed in action in fluorescent lights, television screens, computer monitor screens, organic light-emitting diodes and bioluminescence.\nThe correct, textbook terminology is luminophore, not lumophore, although the latter term has been frequently used in the chemical literature.", "Radioluminescent paint was invented in 1908 by Sabin Arnold von Sochocky and originally incorporated radium-226. Radium paint was widely used for 40 years on the faces of watches, compasses, and aircraft instruments, so they could be read in the dark. Radium is a radiological hazard, emitting gamma rays that can penetrate a glass watch dial and into human tissue. During the 1920s and 1930s, the harmful effects of this paint became increasingly clear. A notorious case involved the \"Radium Girls\", a group of women who painted watchfaces and later suffered adverse health effects from ingestion, in many cases resulting in death. In 1928, Dr von Sochocky himself died of aplastic anemia as a result of radiation exposure. Thousands of legacy radium dials are still owned by the public and the paint can still be dangerous if ingested in sufficient quantities, which is why it has been banned in many countries.\nRadium paint used zinc sulfide phosphor, usually trace metal doped with an activator, such as copper (for green light), silver (blue-green), and more rarely copper-magnesium (for yellow-orange light). The phosphor degrades relatively fast and the dials lose luminosity in several years to a few decades; clocks and other devices available from antique shops and other sources therefore are not luminous any more. 
However, due to the long 1600 year half-life of the Ra-226 isotope they are still radioactive and can be identified with a Geiger counter.\nThe dials can be renovated by application of a very thin layer of fresh phosphor, without the radium content (with the original material still acting as the energy source); the phosphor layer has to be thin due to the light self-absorption in the material.", "Fluorescent paints glow when exposed to short-wave ultraviolet (UV) radiation. These UV wavelengths are found in sunlight and many artificial lights, but the paint requires a special black light to view so these glowing-paint applications are called black-light effects. Fluorescent paint is available in a wide range of colors and is used in theatrical lighting and effects, posters, and as entertainment for children.\nThe fluorescent chemicals in fluorescent paint absorb the invisible UV radiation, then emit the energy as longer wavelength visible light of a particular color. Human eyes perceive this light as the unusual glow of fluorescence. The painted surface also reflects any ordinary visible light striking it, which tends to wash out the dim fluorescent glow. So viewing fluorescent paint requires a longwave UV light which does not emit much visible light. This is called a black light. It has a dark blue filter material on the bulb which lets the invisible UV pass but blocks the visible light the bulb produces, allowing only a little purple light through. Fluorescent paints are best viewed in a darkened room.\nFluorescent paints are made in both visible and invisible types. Visible fluorescent paint also has ordinary visible light pigments, so under white light it appears a particular color, and the color just appears enhanced brilliantly under black lights. Invisible fluorescent paints appear transparent or pale under daytime lighting, but will glow under UV light. Since patterns painted with this type are invisible under ordinary visible light, they can be used to create a variety of clever effects.\nBoth types of fluorescent painting benefit when used within a contrasting ambiance of clean, matte-black backgrounds and borders. Such a \"black out\" effect will minimize other awareness, so cultivating the peculiar luminescence of UV fluorescence. Both types of paints have extensive application where artistic lighting effects are desired, particularly in \"black box\" entertainments and environments such as theaters, bars, shrines, etc. The effective wattage needed to light larger empty spaces increases, with narrow-band light such as UV wavelengths being rapidly scattered in outdoor environments.", "Phosphorescent paint is commonly called \"glow-in-the-dark\" paint. It is made from phosphors such as silver-activated zinc sulfide or doped strontium aluminate, and typically glows a pale green to greenish-blue color. The mechanism for producing light is similar to that of fluorescent paint, but the emission of visible light persists long after it has been exposed to light. Phosphorescent paints have a sustained glow which lasts for up to 12 hours after exposure to light, fading over time.\nThis type of paint has been used to mark escape paths in aircraft and for decorative use such as \"stars\" applied to walls and ceilings. It is an alternative to radioluminescent paint. Kenners Lightning Bug Glo-Juice was a popular non-toxic paint product in 1968, marketed at children, alongside other glow-in-the-dark toys and novelties. 
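The wavelength shift that both the fluorescent and phosphorescent paints described above rely on can be expressed in photon energies. The sketch below is not from the source; the 365 nm excitation and 520 nm emission wavelengths are illustrative assumptions, chosen only to show that the emitted photon carries less energy than the absorbed one (the Stokes shift), with the difference dissipated inside the pigment.

```python
# Photon energy E = h*c / wavelength, expressed in electronvolts.
H = 6.626e-34      # Planck constant, J*s
C = 2.998e8        # speed of light, m/s
EV = 1.602e-19     # joules per electronvolt

for label, wavelength_nm in (("UV excitation", 365), ("visible emission", 520)):
    energy_ev = H * C / (wavelength_nm * 1e-9) / EV
    print(f"{label} at {wavelength_nm} nm: {energy_ev:.2f} eV per photon")
# ~3.4 eV absorbed vs ~2.4 eV emitted: about 1 eV per photon is lost as heat.
```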
Phosphorescent paint is typically used as body paint, on childrens walls and outdoors.\nWhen applied as a paint or a more sophisticated coating (e.g. a thermal barrier coating), phosphorescence can be used for temperature detection or degradation measurements known as phosphor thermometry.", "Radioluminescent paint is a self-luminous paint that consists of a small amount of a radioactive isotope (radionuclide) mixed with a radioluminescent phosphor chemical. The radioisotope continually decays, emitting radiation particles which strike molecules of the phosphor, exciting them to emit visible light. The isotopes selected are typically strong emitters of beta radiation, preferred since this radiation will not penetrate an enclosure. Radioluminescent paints will glow without exposure to light until the radioactive isotope has decayed (or the phosphor degrades), which may be many years.\nBecause of safety concerns and tighter regulation, consumer products such as clocks and watches now increasingly use phosphorescent rather than radioluminescent substances. Previously radioluminicesent paints were used extensively on watch and clock dials and known colloquially to watchmakers as \"clunk\". Radioluminescent paint may still be preferred in specialist applications, such as diving watches.", "The latest generation of the radioluminescent materials is based on tritium, a radioactive isotope of hydrogen with half-life of 12.32 years that emits very low-energy beta radiation. The devices are similar to a fluorescent tube in construction, as they consist of a hermetically sealed (usually borosilicate-glass) tube, coated inside with a phosphor, and filled with tritium. They are known under many names – e.g. gaseous tritium light source (GTLS), traser, betalight.\nTritium light sources are most often seen as \"permanent\" illumination for the hands of wristwatches intended for diving, nighttime, or tactical use. They are additionally used in glowing novelty keychains, in self-illuminated exit signs, and formerly in fishing lures. They are favored by the military for applications where a power source may not be available, such as for instrument dials in aircraft, compasses, lights for map reading, and sights for weapons.\nTritium lights are also found in some old rotary dial telephones, though due to their age they no longer produce a useful amount of light.", "In the second half of the 20th century, radium was progressively replaced with promethium-147. Promethium is only a relatively low-energy beta-emitter, which, unlike alpha emitters, does not degrade the phosphor lattice and the luminosity of the material does not degrade as fast. Promethium-based paints are significantly safer than radium, but the half-life of Pm is only 2.62 years and therefore it is not suitable for long-life applications.\nPromethium-based paint was used to illuminate Apollo Lunar Module electrical switch tips, the Apollo command and service module hatch and EVA handles, and control panels of the Lunar Roving Vehicle.", "Luminous paint (or luminescent paint) is paint that emits visible light through fluorescence, phosphorescence, or radioluminescence.", "Lyoluminescence refers to the emission of light while dissolving a solid into a liquid solvent. It is a form of chemiluminescence. The most common lyoluminescent effect is seen when solid samples which have been heavily irradiated by ionizing radiation are dissolved in water. 
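As an aside on the isotope lifetimes quoted for the radioluminescent materials above, the sketch below (not from the source; it simply applies exponential decay to the stated half-lives) shows why radium dials remain radioactive for centuries while tritium and promethium sources fade within years to decades.

```python
from math import exp, log

# Fraction of a radionuclide remaining after t years, given its half-life in years.
def remaining(t_years, half_life_years):
    return exp(-log(2) * t_years / half_life_years)

print(remaining(100, 1600))   # Ra-226 after a century: ~96% still present
print(remaining(24, 12.32))   # tritium after ~two half-lives: ~26% left
print(remaining(50, 12.32))   # tritium after 50 years: ~6%, too dim to be useful
print(remaining(10, 2.62))    # Pm-147 after a decade: ~7%, hence unsuited to long-life use
```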
The total amount of light emitted by the material increases proportionally with the total radiation dose received by the material up to a certain level called the saturation value.\nMany gamma-irradiated substances are known to produce lyoluminescence; these include spices, powdered milk, soups, cotton and paper. While the broad variety of materials which exhibit lyoluminescence confounds explanation by a single common mechanism there is a common feature to the phenomenon, the production of free radicals in solution. Lyoluminescence intensity can be increased by performing the dissolution of the solid in a solution containing conventionally chemiluminescent compounds such as luminol. These are thus called lyoluminescence sensitizers.", "In chemistry, a lyonium ion is the cation derived by the protonation of a solvent molecule. For example, a hydronium ion is formed by the protonation of water, and is the cation formed by the protonation of methanol.\nIts counterpart is a lyate ion, the anion formed by the deprotonation of a solvent molecule.\nLyonium and lyate ions, resulting from molecular autoionization, contribute to the molar conductivity of protolytic solvents.", "Active metabolism of glucose with production of bicarbonate has been demonstrated by Pettersson and Cohen.\nPettersson studies were on the metabolism of glucose and fatty acids by kidneys during 6 day hypothermic perfusion storage and he found that the kidneys consumed glucose at 4.4 μmol/g/day and fatty acids at 5.8 μmol/g/day. In Cohens study the best 8 day stored kidneys consumed glucose at the rate of 2.3 μmol/g/day and 4.9 μmol/g/day respectively which made it likely that they were using fatty acids at similar rates to Petterssons dogs' kidneys. The constancy of both the glucose consumption rate and the rate of bicarbonate production implied that no injury was affecting the glycolytic enzyme or carbonic anhydrase enzyme systems.\nLee showed that fatty acids were the preferred substrate of the rabbits kidney cortex at normothermic temperatures, and glucose the preferred substrate for the medullary cells which normally metabolise anaerobically. Abodeely showed that both fatty acids and glucose could be utilised by the outer medulla of the rabbits kidney but that glucose was used preferentially. At hypothermia the metabolic needs of the kidney are much reduced but measurable consumption of glucose, fatty acids and ketone bodies occurs. Horsburgh showed that lipid is utilised by hypothermic kidneys, with palmitate consumption being 0-15% of normal in the rat kidney cortex at 15 °C. Pettersson showed that, on a molar basis, glucose and fatty acids were metabolised by hypothermically perfused kidneys at about the same rates. The cortex of the hypothermic dog kidney was shown by Huang to lose lipid (35% loss of total lipid after 24 hours) unless oleate was added to the kidney perfusate. Huang commented that this loss could affect the structure of the cell and that the loss also suggested that the kidney was utilising fatty acid. In a later publication Huang showed that dog kidney cortex slices metabolised fatty acids, but not glucose, at 10 °C.\nEven if the correct nutrients are provided, they may be lost by absorption into the tubing of the preservation system. 
Lee demonstrated that silicone rubber (a material used extensively in kidney preservation systems) absorbed 46% of a perfusate's oleic acid after 4 hours of perfusion.", "An essential preliminary to the development of kidney storage and transplantation was the work of Alexis Carrel in developing methods for vascular anastomosis. Carrel went on to describe the first kidney transplants, which were performed in dogs in 1902; Ullman independently described similar experiments in the same year. In these experiments kidneys were transplanted without there being any attempt at storage.\nThe crucial step in making in vitro storage of kidneys possible, was the demonstration by Fuhrman in 1943, of a reversible effect of hypothermia on the metabolic processes of isolated tissues. Prior to this, kidneys had been stored at normal body temperatures using blood or diluted blood perfusates, but no successful reimplantations had been made. Fuhrman showed that slices of rat kidney cortex and brain withstood cooling to 0.2 °C for one hour at which temperature their oxygen consumption was minimal. When the slices were rewarmed to 37 °C their oxygen consumption recovered to normal.\nThe beneficial effect of hypothermia on ischaemic intact kidneys was demonstrated by Owens in 1955 when he showed that, if dogs were cooled to 23-26 °C, and their thoracic aortas were occluded for 2 hours, their kidneys showed no apparent damage when the dogs were rewarmed. This protective effect of hypothermia on renal ischaemic damage was confirmed by Bogardus who showed a protective effect from surface cooling of dog kidneys whose renal pedicles were clamped in situ for 2 hours. Moyer demonstrated the applicability of these dog experiments to the human, by showing the same effect on dog and human kidney function from the same periods of hypothermic ischaemia.\nIt was not until 1958 that it was shown that intact dog kidneys would survive ischaemia even better if they were cooled to lower temperatures. Stueber showed that kidneys would survive in situ clamping of the renal pedicle for 6 hours if the kidneys were cooled to 0-5 °C by being placed in a cooling jacket, and Schloerb showed that a similar technique with cooling of heparinised dog kidneys to 2-4 °C gave protection for 8 hours but not 12 hours. Schloerb also attempted in vitro storage and auto-transplantation of cooled kidneys, and had one long term survivor after 4 hours kidney storage followed by reimplantation and immediate contralateral nephrectomy. He also had a near survivor, after 24-hour kidney storage and delayed contralateral nephrectomy, in a dog that developed a late arterial thrombosis in the kidney.\nThese methods of surface cooling were improved by the introduction of techniques in which the kidney's vascular system was flushed out with cold fluid prior to storage. This had the effect of increasing the speed of cooling of the kidney and removed red cells from the vascular system. Kiser used this technique to achieve successful 7 hours in vitro storage of a dog kidney, when the kidney had been flushed at 5 °C with a mixture of dextran and diluted blood prior to storage. In 1960 Lapchinsky confirmed that similar storage periods were possible, when he reported eight dogs surviving after their kidneys had been stored at 2-4 °C for 28 hours, followed by auto-transplantation and delayed contralateral nephrectomy. 
Although Lapchinsky gave no details in his paper, Humphries reported that these experiments had involved cooling the kidneys for 1 hour with cold blood, and then storage at 2-4 °C, followed by rewarming of the kidneys over 1 hour with warm blood at the time of reimplantation. The contralateral nephrectomies were delayed for two months.\nHumphries developed this storage technique by continuously perfusing the kidney throughout the period of storage. He used diluted plasma or serum as the perfusate and pointed out the necessity for low perfusate pressures to prevent kidney swelling, but admitted that the optimum values for such variables as perfusate temperature, Po, and flow, remained unknown. His best results, at this time, were 2 dogs that survived after having their kidneys stored for 24 hours at 4-10 °C followed by auto-transplantation and delayed contralateral nephrectomy a few weeks later.\nCalne challenged the necessity of using continuous perfusion methods by demonstrating that successful 12-hour preservation could be achieved using much simpler techniques. Calne had one kidney supporting life even when the contralateral nephrectomy was performed at the same time as the reimplantation operation. Calne merely heparinised dog kidneys and then stored them in iced solution at 4 °C. Although 17-hour preservation was shown to be possible in one experiment when nephrectomy was delayed, no success was achieved with 24-hour storage.\nThe next advance was made by Humphries in 1964, when he modified the perfusate used in his original continuous perfusion system, and had a dog kidney able to support life after 24-hour storage, even when an immediate contralateral nephrectomy was performed at the same time as the reimplantation. In these experiments autogenous blood, diluted 50% with Tis-U-Sol solution at 10 °C, was used as the perfusate. The perfusate pressure was 40 mm Hg and perfusate pH 7.11-7.35 (at 37 °C). A membrane lung was used for oxygenation to avoid damaging the blood.\nIn attempting to improve on these results Manax investigated the effect of hyperbaric oxygen, and found that successful 48-hour storage of dog kidneys was possible at 2 °C without using continuous perfusion, when the kidneys were flushed with a dextran/Tis-U-Sol solution before storage at 7.9 atmospheres pressure, and if the contralateral nephrectomy was delayed till 2 to 4 weeks after reimplantation. Manax postulated that hyperbaric oxygen might work either by inhibiting metabolism or by aiding diffusion of oxygen into the kidney cells, but he reported no control experiments to determine whether other aspects of his model were more important than hyperbaria.\nA marked improvement in storage times was achieved by Belzer in 1967 when he reported successful 72-hour kidney storage after returning to the use of continuous perfusion using a canine plasma based perfusate at 8-12 °C. Belzer found that the crucial factor in permitting uncomplicated 72-hour perfusion was cryoprecipitation of the plasma used in the perfusate to reduce the amount of unstable lipo-proteins which otherwise precipitated out of solution and progressively obstructed the kidney's vascular system. A membrane oxygenator was also used in the system in a further attempt to prevent denaturation of the lipo-proteins because only 35% of the lipo-proteins were removed by cryo-precipitation. 
The perfusate comprised 1 litre of canine plasma, 4 mEq of magnesium sulphate, 250 mL of dextrose, 80 units of insulin, 200,000 units of penicillin and 100 mg of hydrocortisone. Besides being cryo-precipitated, the perfusate was pre-filtered through a 0.22 micron filter immediately prior to use. Belzer used a perfusate pH of 7.4-7.5, a PO2 of 150–190 mm Hg, and a perfusate pressure of 50–80 mm Hg systolic, in a machine that produced a pulsatile perfusate flow. Using this system Belzer had 6 dogs surviving after their kidneys had been stored for 72 hours and then reimplanted, with immediate contralateral nephrectomies being performed at the reimplantation operations.\nBelzer's use of hydrocortisone as an adjuvant to preservation had been suggested by Lotke's work with dog kidney slices, in which hydrocortisone improved the ability of slices to excrete PAH and consume oxygen after 30-hour storage at 2-4 °C; Lotke suggested that hydrocortisone might be acting as a lysosomal membrane stabiliser in these experiments. The other components of Belzer's model were arrived at empirically. The insulin and magnesium were used partially in an attempt to induce artificial hibernation, as Suomalainen found this regime to be effective in inducing hibernation in natural hibernators. The magnesium was also provided as a metabolic inhibitor following Kamiyama's demonstration that it was an effective agent in dog heart preservation. A further justification for the magnesium was that it was needed to replace calcium which had been bound by citrate in the plasma.\nBelzer demonstrated the applicability of his dog experiments to human kidney storage when he reported his experiences in human renal transplantation using the same storage techniques as he had used for dog kidneys. He was able to store kidneys for up to 50 hours with only 8% of patients requiring post-operative dialysis when the donor had been well prepared.\nIn 1968 Humphries reported 1 survivor out of 14 dogs following 5-day storage of their kidneys in a perfusion machine at 10 °C, using a diluted plasma medium containing extra fatty acids. However, delayed contralateral nephrectomy 4 weeks after reimplantation was necessary in these experiments to achieve success, and this indicated that the kidneys were severely injured during storage.\nIn 1969 Collins reported an improvement in the results that could be achieved with simple non-perfusion methods of hypothermic kidney storage. He based his technique on the observation by Keller that the loss of electrolytes from a kidney during storage could be prevented by the use of a storage fluid containing cations in quantities approaching those normally present in cells. In Collins' model, the dogs were well hydrated prior to nephrectomy, and were also given mannitol to induce a diuresis. Phenoxybenzamine, a vasodilator and lysosomal enzyme stabiliser, was injected into the renal artery before nephrectomy. The kidneys were immersed in saline immediately after removal, and perfused through the renal artery with 100-150 mL of a cold electrolyte solution from a height of 100 cm. The kidneys remained in iced saline for the rest of the storage period. The solution used for these successful cold perfusions imitated the electrolyte composition of intracellular fluids by containing large amounts of potassium and magnesium. The solution also contained glucose, heparin, procaine and phenoxybenzamine. The solution's pH was 7.0 at 25 °C.
Collins was able to achieve successful 24-hour storage of 6 kidneys, and 30-hour storage of 3 kidneys, with the kidneys functioning immediately after reimplantation, despite immediate contralateral nephrectomies. Collins emphasised the poor results obtained with a Ringer's solution flush, finding that this management gave results similar to those for kidneys treated by surface cooling alone. Liu reported that Collins' solution could give successful 48-hour storage when the solution was modified by the inclusion of amino acids and vitamins. However, Liu performed no control experiments to show that these modifications were crucial.\nDifficulty was found by other workers in repeating Belzer's successful 72-hour perfusion storage experiments. Woods was able to achieve successful 48-hour storage of 3 out of 6 kidneys when he used the Belzer additives with cryoprecipitated plasma as the perfusate in a hypothermic perfusion system, but he was unable to extend the storage time to 72 hours as Belzer had done. However, Woods later achieved successful 3- and 7-day storage of dog kidneys. Woods had modified Belzer's perfusate by the addition of 250 mg of methyl prednisolone, increased the magnesium sulphate content to 16.2 mEq and the insulin to 320 units. Six of 6 kidneys produced life-sustaining function when they were reimplanted after 72 hours storage despite immediate contralateral nephrectomies; 1 of 2 kidneys produced life-sustaining function after 96 hours storage, 1 of 2 after 120 hours storage, and 1 of 2 after 168 hours storage. Perfusate pressure was 60 mm Hg with a perfusate pump rate of 70 beats per minute, and perfusate pH was automatically maintained at 7.4 by a CO2 titrator. Woods stressed the importance of hydration of the donor and recipient animals. Without the methyl prednisolone, Woods found vessel fragility to be a problem when storage times were longer than 48 hours.\nA major simplification to the techniques of hypothermic perfusion storage was made by Johnson and Claes in 1972 with the introduction of an albumin-based perfusate. This perfusate eliminated the need for the manufacture of the cryoprecipitated and millipore-filtered plasma used by Belzer. The preparation of this perfusate had been laborious and time-consuming, and there was the potential risk from hepatitis virus and cytotoxic antibodies. The absence of lipo-proteins from the perfusate meant that the membrane oxygenator could be eliminated from the perfusion circuit, as there was no need to avoid a perfusate/air interface to prevent precipitation of lipo-proteins. Both workers used the same additives as recommended by Belzer.\nThe solution that Johnson used was prepared by the Blood Products Laboratory (Elstree: England) by extracting heat-labile fibrinogen and gamma globulins from plasma to give a plasma protein fraction (PPF) solution. The solution was incubated at 60 °C for 10 hours to inactivate the agent of serum hepatitis. The result was a 45 g/L human albumin solution containing small amounts of gamma and beta globulins which was stable between 0 °C and 30 °C for 5 years. PPF contained 2.2 mmol/L of free fatty acids.\nJohnson's experiments were mainly concerned with the storage of kidneys that had been damaged by prolonged warm injury. However, in a control group of non-warm-injured dog kidneys, Johnson showed that 24-hour preservation was easily achieved when using a PPF perfusate, and he described elsewhere a survivor after 72 hours of perfusion and reimplantation with immediate contralateral nephrectomy.
With warm injured kidneys, PPF perfusion gave better results than Collins method, with 6 out of 6 dogs surviving after 40 minutes warm injury and 24-hour storage followed by reimplantation of the kidneys and immediate contralateral nephrectomy. Potassium, magnesium, insulin, glucose, hydrocortisone and ampicillin were added to the PPF solution to provide an energy source and to prevent leakage of intracellular potassium. Perfusate temperature was 6 °C, pressure 40–80 mm Hg, and Po 200–400 mm Hg. The pH was maintained between 7.2 and 7.4.\nClaes used a perfusate based on human albumin (Kabi: Sweden) diluted with saline to a concentration of 45 g/L. Claes preserved 4 out of 5 dog kidneys for 96 hours with the kidneys functioning immediately after reimplantation despite immediate contralateral nephrectomies. Claes also compared this perfusate with Belzer's cryoprecipitated plasma in a control group and found no significant difference between the function of the reimplanted kidneys in the two groups.\nThe only other group besides Woods' to report successful seven-day storage of kidneys was Liu and Humphries in 1973. They had three out of seven dogs surviving, after their kidneys had been stored for seven days followed by reimplantation and immediate contralateral nephrectomy. Their best dog had a peak post reimplantation creatinine of 50 mg/L (0.44 mmol/L). Liu used well hydrated dogs undergoing a mannitol diuresis and stored the kidneys at 9 °C – 10 °C using a perfusate derived from human PPF. The PPF was further fractionated by using a highly water-soluble polymer (Pluronic F-38), and sodium acetyl tryptophanate and sodium caprylate were added to the PPF as stabilisers to permit pasteurisation. To this solution were added human albumin, heparin, mannitol, glucose, magnesium sulphate, potassium chloride, insulin, methyl prednisolone, carbenicillin, and water to adjust the osmolality to 300-310 mosmol/kg. The perfusate was exchanged after 3.5 days storage. Perfusate pressure was 60 mm Hg or less, at a pump rate of 60 per minute. Perfusate pH was 7.12–7.32 (at 37 °C), Pco2 27–47 mm Hg, and Po 173–219 mm Hg. In a further report on this study Humphries found that when the experiments were repeated with a new batch of PPF no survivors were obtained, and histology of the survivors from the original experiment showed glomerular hypercellularity which he attributed to a possible toxic effect of the Pluronic polymer.\nJoyce and Proctor reported the successful use of a simple dextran based perfusate for 72-hour storage of dog kidneys. 10 out of 17 kidneys were viable after reimplantation and immediate contralateral nephrectomy. Joyce used non pulsatile perfusion at 4 °C with a perfusate containing Dextran 70 (Pharmacia) 2.1%, with additional electrolytes, glucose (19.5 g/L), procaine and hydrocortisone. The perfusate contained no plasma or plasma components. Perfusate pressure was only 30 cm HO, pH 7.34-7.40 and Po 250–400 mm Hg. This work showed that, for 72-hour storage, no nutrients other than glucose were needed, and low perfusate pressures and flows were adequate.\nIn 1973 Sacks showed that simple ice storage could be successfully used for 72-hour storage when a new flushing solution was used for the initial cooling and flush out of the kidney. Sacks removed kidneys from well hydrated dogs that were diuresing after a mannitol infusion, and flushed the kidneys with 200 mL of solution from a height of 100 cm. The kidneys were then simply kept at 2 °C for 72 hours without further perfusion. 
Reimplantation was followed by immediate contralateral nephrectomies. The flush solution was designed to imitate intracellular fluid composition and contained mannitol as an impermeant solute to further prevent cell swelling. The osmolality of the solution was 430 mosmol/kg and its pH was 7.0 at 2 °C. The additives that had been used by Collins (dextrose, phenoxybenzamine, procaine and heparin) were omitted by Sacks.\nThese results have been equalled by Ross who also achieved successful 72-hour storage without using continuous perfusion, although he was unable to reproduce Collins' or Sacks' results using the original Collins or Sacks solutions. Ross's successful solution was similar in electrolyte composition to intracellular fluid with the addition of hypertonic citrate and mannitol. No phosphate, bicarbonate, chloride or glucose were present in the solution; the osmolality was 400 mosmol/kg and the pH 7.1. Five of 8 dogs survived reimplantation of their kidneys and immediate contralateral nephrectomy, when the kidneys had been stored for 72 hours after having been flushed with Ross's solution; but Ross was unable to achieve 7-day storage with this technique even when delayed contralateral nephrectomy was used.\nThe requirements for successful 72-hour hypothermic perfusion storage have been further defined by Collins who showed that pulsatile perfusion was not needed if a perfusate pressure of 49 mm Hg was used, and that 7 °C was a better temperature for storage than 2 °C or 12 °C. He also compared various perfusate compositions and found that a phosphate-buffered perfusate could be used successfully, so eliminating the need for a carbon dioxide supply. Grundmann has also shown that low perfusate pressure is adequate. He used a mean pulsatile pressure of 20 mm Hg in 72-hour perfusions and found that this gave better results than mean pressures of 15, 40, 50 or 60 mm Hg.\nSuccessful storage up to 8 days was reported by Cohen using various types of perfusate – with the best result being obtained when using a phosphate-buffered perfusate at 8 °C. Inability to repeat these successful experiments was thought to be due to changes that had been made in the way that the PPF was manufactured, with a higher octanoic acid content being detrimental. Octanoic acid was shown to be able to stimulate metabolic activity during hypothermic perfusion, and this might be detrimental.", "Perfusion storage methods can mechanically injure the vascular endothelium of the kidney, which leads to arterial thrombosis or fibrin deposition after reimplantation. Hill noted that, in human kidneys, fibrin deposition in the glomerulus after reimplantation, and postoperative function, correlated with the length of perfusion storage. He had taken biopsies at revascularisation from human kidneys preserved by perfusion or ice storage, and showed by electron microscopy that endothelial disruption only occurred in those kidneys that had been perfused. Biopsies taken one hour after revascularisation showed platelets and fibrin adherent to any areas of denuded vascular basement membrane. A different type of vascular damage was described by Sheil who showed how a jet lesion could be produced distal to the cannula tied into the renal artery, leading to arterial thrombosis approximately 1 cm distal to the cannula site.", "Nuclear DNA is injured during cold storage of kidneys.
Lazarus showed that single-stranded DNA breaks occurred within 16 hours in hypothermically stored mouse kidneys, with the injury being inhibited a little by storage in Collins' or Sacks' solutions. This nuclear injury differed from that seen in warm injury, when double-stranded DNA breaks occurred.", "Certain perfusates have been shown to have toxic effects on kidneys as a result of the inadvertent inclusion of particular chemicals in their formulation. Collins showed that the procaine included in the formulation of his flush fluids could be toxic, and Pegg has commented on how toxic materials, such as PVC plasticizers, may be washed out of perfusion circuit tubing. Dvorak showed that the methyl-prednisolone addition to the perfusate that was thought to be essential by Woods might in some circumstances be harmful. He showed that with more than a gram of methyl-prednisolone in 650 mL of perfusate (compared with 250 mg in 1 litre used by Woods), irreversible haemodynamic and structural changes were produced in the kidney after 20 hours of perfusion. There was necrosis of capillary loops, occlusion of Bowman's spaces, basement membrane thickening and endothelial cell damage.", "Abouna showed that ammonia was released into the perfusate during 3-day kidney storage, and suggested that this might be toxic to the kidney cells unless removed by frequent replacement of the perfusate. Some support for the use of perfusate exchange during long perfusions was provided by Liu who used perfusate exchange in his successful 7-day storage experiments. Grundmann also found that 96-hour preservation quality was improved by the use of a double volume of perfusate or by perfusate exchange. However, Grundmann's conclusions were based on comparisons with a control group of only 3 dogs. Cohen was unable to demonstrate any production of ammonia during 8 days of perfusion, and found no benefit from perfusate exchange; the progressive alkalinity that occurred during perfusion was shown to be due to bicarbonate production.", "All cells require ATP as an energy source for their metabolic activity. The kidney is damaged by anoxia when kidney cortical cells are unable to generate sufficient ATP under anaerobic conditions to meet the needs of the cells. When excising a kidney some anoxia is inevitable in the interval between dividing the renal artery and cooling the kidney. It has been shown by Bergstrom that 50% of the ATP content of a dog kidney's cortical cells is lost within 1 minute of clamping the renal artery, and similar results were found by Warnick in whole mouse kidneys, with a 50% fall in cellular ATP after about 30 seconds of warm anoxia. Warnick and Bergstrom also showed that cooling the kidney immediately after removal markedly reduced any further ATP loss. When these non-warm-injured kidneys were perfused with oxygenated hypothermic plasma, ATP levels were reduced by 50% after 24-hour storage and, after 48 hours, mean tissue ATP levels were a little higher than this, indicating that synthesis of ATP had occurred. Pegg has shown that rabbit kidneys can resynthesize ATP after a period of perfusion storage following warm injury, but no resynthesis occurred in non-warm-injured kidneys.\nWarm anoxia can also occur during reimplantation of the kidney after storage.
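The ATP figures quoted above lend themselves to a simple worked illustration. The sketch below is not from any of the cited studies; it merely treats the reported half-loss times (about 1 minute for dog kidney cortex and about 30 seconds for whole mouse kidneys) as first-order decay, to show how quickly warm anoxia depletes ATP.

```python
# A minimal sketch (an assumption, not the cited authors' model): treat the
# reported "50% ATP loss in ~1 min (dog cortex) or ~30 s (mouse kidney)" as
# simple exponential decay during warm anoxia.

def atp_fraction_remaining(seconds: float, half_time_s: float) -> float:
    """Fraction of the initial ATP left after `seconds` of warm anoxia,
    assuming first-order decay with the given half-time."""
    return 0.5 ** (seconds / half_time_s)

if __name__ == "__main__":
    for t in (30, 60, 120, 300):
        dog = atp_fraction_remaining(t, half_time_s=60)    # ~1 min half-loss (dog cortex)
        mouse = atp_fraction_remaining(t, half_time_s=30)  # ~30 s half-loss (mouse kidney)
        print(f"{t:>4} s of warm anoxia: dog ~{dog:.0%}, mouse ~{mouse:.0%} ATP remaining")
```

On these assumptions, two minutes of warm anoxia would leave roughly a quarter of the original ATP in dog cortex and well under a tenth in mouse kidney, which is consistent with the emphasis placed on immediate cooling after removal.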
Lannon showed, by measurements of succinate metabolism, how the kidney was more sensitive to a period of warm hypoxia occurring after storage than to the same period of warm hypoxia occurring immediately prior to storage.", "Machine perfusion (MP) is a technique used in organ transplantation as a means of preserving the organs which are to be transplanted.\nMachine perfusion has various forms and can be categorised according to the temperature of the perfusate: cold (4 °C) and warm (37 °C). Machine perfusion has been applied to renal transplantation, liver transplantation and lung transplantation. It is an alternative to static cold storage (SCS).", "The mechanisms that damage kidneys during hypothermic storage can be sub-divided as follows:\n# Injury to the metabolic processes of the cell caused by:\n## Cold\n## Anoxia when the kidney is warm both before and after the period of hypothermic storage.\n## Failure to supply the correct nutrients.\n## Toxin accumulation in the perfusate.\n## Toxic damage from the storage fluid.\n## Washout of essential substrates from the kidney cells.\n# Injury to nuclear DNA.\n# Mechanical injury to the vascular system of the kidney during hypothermic perfusion.\n# Post reimplantation injury.", "There is evidence that immunological mechanisms may injure hypothermically perfused kidneys after reimplantation if the perfusate contained specific antibody. Cross described two pairs of human cadaver kidneys that were perfused simultaneously with cryoprecipitated plasma containing type specific HLA antibody to one of the pairs. Both these kidneys suffered early arterial thrombosis. Light described similar hyperacute rejection following perfusion storage and showed that the cryoprecipitated plasma used contained cytotoxic IgM antibody. This potential danger of using cryoprecipitated plasma was demonstrated experimentally by Filo who perfused dog kidneys for 24 hours with specifically sensitised cryoprecipitated dog plasma and found that he could induce glomerular and vascular lesions with capillary engorgement, endothelial swelling, infiltration by polymorphonuclear leucocytes and arterial thrombosis. Immunofluorescent microscopy demonstrated specific binding of IgG along endothelial surfaces, in glomeruli, and also in vessels. After reimplantation, complement fixation and tissue damage occurred in a similar pattern. There was some correlation between the severity of the histological damage and subsequent function of the kidneys.\nMany workers have attempted to prevent kidneys rewarming during reimplantation but only Cohen has described using a system of active cooling. Measurements of lysosomal enzyme release from kidneys subjected to sham anastomoses, when either in or out of the cooling system, demonstrated how sensitive kidneys were to rewarming after a period of cold storage, and confirmed the effectiveness of the cooling system in preventing enzyme release. A further factor in minimising injury at the reimplantation operations may have been that the kidneys were kept at 7 °C within the cooling coil, which was within a degree of the temperature used during perfusion storage, so that the kidneys were not subjected to the greater changes in temperature that would have occurred if ice cooling had been used.\nDempster described using slow release of the vascular clamps at the end of kidney reimplantation operations to avoid injuring the kidney, but other workers have not mentioned whether or not they used this manoeuvre. 
After Cohen found vascular injury with intra-renal bleeding after 3 days of perfusion storage, a technique of slow revascularisation was used for all subsequent experiments, with the aim of giving the intra-renal vessels time to recover their tone sufficiently to prevent full systolic pressure being applied to the fragile glomerular vessels. The absence of gross vascular injury in his later perfusions may be attributable to the use of this manoeuvre.", "The level of nucleotides remaining in the cell after storage was thought by Warnick to be important in determining whether the cell would be able to re-synthesize ATP and recover after rewarming. Frequent changing of the perfusate or the use of a large volume of perfusate has the theoretical disadvantage that broken down adenine nucleotides may be washed out of the cells and so not be available for re-synthesis into ATP when the kidney is rewarmed.", "A record-long preservation of a human transplant organ was reported in 2022, with machine perfusion of a liver for 3 days rather than the usual <12 hours. The approach could possibly be extended to 10 days and could prevent the substantial cell damage caused by low-temperature preservation methods. Alternative approaches include novel cryoprotectant solvents.\nThere is a novel organ perfusion system under development that can restore multiple vital (pig) organs at the cellular level one hour after death (during which the body had a prolonged warm ischaemia), and a similar method/system for reviving (pig) brains hours after death. The system for cellular recovery could be used to preserve donor organs or for revival treatments in medical emergencies.", "The structural changes that occur during 72-hour hypothermic storage of previously uninjured kidneys have been described by Mackay, who showed how there was progressive vacuolation of the cytoplasm of the cells, which particularly affected the proximal tubules. On electron microscopy the mitochondria were seen to become swollen with early separation of the internal cristal membranes and later loss of all internal structure. Lysosomal integrity was well preserved until late, and the destruction of the cell did not appear to be caused by lytic enzymes because there was no more injury immediately adjacent to the lysosomes than in the rest of the cell.\nWoods and Liu, when describing successful 5- and 7-day kidney storage, described the light microscopic changes seen at the end of perfusion and at post mortem, but found few gross abnormalities apart from some infiltration with lymphocytes and occasional tubular atrophy.\nThe changes during short perfusions of human kidneys prior to reimplantation have been described by Hill, who also performed biopsies 1 hour after reimplantation. On electron microscopy Hill found endothelial damage which correlated with the severity of the fibrin deposition after reimplantation. The changes that Hill saw in the glomeruli on light microscopy were occasional fibrin thrombi and infiltration with polymorphs. Hill suspected that these changes were an immunologically induced lesion, but found that there was no correlation between the severity of the histological lesion and the presence or absence of immunoglobulin deposits.\nThere are several reports of the analysis of urine produced by kidneys during perfusion storage.
Kastagir analysed urine produced during 24-hour perfusion and found it to be an ultrafiltrate of the perfusate, Scott found a trace of protein in the urine during 24-hour storage, and Pederson found only a trace of protein after 36 hours perfusion storage. Pederson mentioned that he had found heavy proteinuria during earlier experiments. Woods noted protein casts in the tubules of viable kidneys after 5 day storage, but he did not analyse the urine produced during perfusion. In Cohen's study there was a progressive increase in urinary protein concentration during 8 day preservation until the protein content of the urine equalled that of the perfusate. This may have been related to the swelling of the glomerular basement membranes and the progressive fusion of epithelial cell foot processes that was also observed during the same period of perfusion storage.", "At normal temperatures pumping mechanisms in cell walls retain intracellular potassium at high levels and extrude sodium. If these pumps fail sodium is taken up by the cell and potassium lost. Water follows the sodium passively and results in swelling of the cells. The importance of this control of cell swelling was demonstrated by McLoughlin who found a significant correlation between canine renal cortical water content and the ability of kidneys to support life after 36-hour storage. The pumping mechanism is driven by the enzyme system known as Na+K+- activated ATPase and is inhibited by cold. Levy found that metabolic activity at 10 °C, as indicated by oxygen consumption measurements, was reduced to about 5% of normal and, because all enzyme systems are affected in a similar way by hypothermia, ATPase activity is markedly reduced at 10 °C.\nThere are, however, tissue and species differences in the cold sensitivity of this ATPase which may account for the differences in the ability of tissues to withstand hypothermia. Martin has shown that in dog kidney cortical cells some ATPase activity is still present at 10 °C but not at 0 °C. In liver and heart cells activity was completely inhibited at 10 °C and this difference in the cold sensitivity of ATPase correlated with the greater difficulty in controlling cell swelling during hypothermic storage of liver and heart cells. A distinct ATPase is found in vessel walls, and this was shown by Belzer to be completely inhibited at 10 °C, when at this temperature kidney cortical cells ATPase is still active. These experiments were performed on aortic endothelium, but if the vascular endothelium of the kidney has the same properties, then vascular injury may be the limiting factor in prolonged kidney storage.\nWillis has shown how hibernators derive some of their ability to survive low temperatures by having a Na+K+-ATPase which is able to transport sodium and potassium actively across their cell membranes, at 5 °C, about six times faster than in non-hibernators; this transport rate is sufficient to prevent cell swelling.\nThe rate of cooling of a tissue may also be significant in the production of injury to enzyme systems. Francavilla showed that when liver slices were rapidly cooled (immediate cooling to 12 °C in 6 minutes) anaerobic glycolysis, as measured on rewarming to 37 °C, was inhibited by about 67% of the activity that was demonstrated in slices that had been subjected to delayed cooling. 
However, dog kidney slices were less severely affected by the rapid cooling than were the liver slices.", "A magnetic particle with triaxial anisotropy still has a single easy axis, but it also has a hard axis (direction of maximum energy) and an intermediate axis (direction associated with a saddle point in the energy). The coordinates can be chosen so the energy has the form\nIf the easy axis is the direction, the intermediate axis is the direction and the hard axis is the direction.", "In condensed matter physics, magnetic anisotropy describes how an objects magnetic properties can be different depending on direction. In the simplest case, there is no preferential direction for an objects magnetic moment. It will respond to an applied magnetic field in the same way, regardless of which direction the field is applied. This is known as magnetic isotropy. In contrast, magnetically anisotropic materials will be easier or harder to magnetize depending on which way the object is rotated.\nFor most magnetically anisotropic materials, there are two easiest directions to magnetize the material, which are a 180° rotation apart. The line parallel to these directions is called the easy axis. In other words, the easy axis is an energetically favorable direction of spontaneous magnetization. Because the two opposite directions along an easy axis are usually equivalently easy to magnetize along, the actual direction of magnetization can just as easily settle into either direction, which is an example of spontaneous symmetry breaking.\nMagnetic anisotropy is a prerequisite for hysteresis in ferromagnets: without it, a ferromagnet is superparamagnetic.", "The observed magnetic anisotropy in an object can happen for several different reasons. Rather than having a single cause, the overall magnetic anisotropy of a given object is often explained by a combination of these different factors:\n; Magnetocrystalline anisotropy: The atomic structure of a crystal introduces preferential directions for the magnetization.\n; Shape anisotropy: When a particle is not perfectly spherical, the demagnetizing field will not be equal for all directions, creating one or more easy axes.\n; Magnetoelastic anisotropy: Tension may alter magnetic behaviour, leading to magnetic anisotropy.\n; Exchange anisotropy: Occurs when antiferromagnetic and ferromagnetic materials interact.", "The magnetic anisotropy of a benzene ring (A), alkene (B), carbonyl (C), alkyne (D), and a more complex molecule (E) are shown in the figure. Each of these unsaturated functional groups (A-D) create a tiny magnetic field and hence some local anisotropic regions (shown as cones) in which the shielding effects and the chemical shifts are unusual. The bisazo compound (E) shows that the designated proton {H} can appear at different chemical shifts depending on the photoisomerization state of the azo groups. The trans isomer holds proton {H} far from the cone of the benzene ring thus the magnetic anisotropy is not present. While the cis form holds proton {H} in the vicinity of the cone, shields it and decreases its chemical shift. This phenomenon enables a new set of nuclear Overhauser effect (NOE) interactions (shown in red) that come to existence in addition to the previously existing ones (shown in blue).", "A magnetic particle with uniaxial anisotropy has one easy axis. 
If the easy axis is in the z direction, the anisotropy energy can be expressed in the form\nE = K V sin²θ,\nwhere V is the volume, K the anisotropy constant, and θ the angle between the easy axis and the particle's magnetization. When shape anisotropy is explicitly considered, a different symbol is often used for the anisotropy constant. In the widely used Stoner–Wohlfarth model, the anisotropy is uniaxial.", "Suppose that a ferromagnet is single-domain in the strictest sense: the magnetization is uniform and rotates in unison. If the magnetic moment is μ and the volume of the particle is V, the magnetization is M = μ/V = Ms(α, β, γ), where Ms is the saturation magnetization and α, β, γ are direction cosines (components of a unit vector), so α² + β² + γ² = 1. The energy associated with magnetic anisotropy can depend on the direction cosines in various ways, the most common of which are discussed below.", "A magnetic particle with cubic anisotropy has three or four easy axes, depending on the anisotropy parameters. The energy has the form\nE = K V (α²β² + β²γ² + γ²α²).\nIf K > 0, the easy axes are the x, y, and z axes. If K < 0, there are four easy axes characterized by x = ±y = ±z.", "The main application of these space groups is to magnetic structure, where the black/white lattice points correspond to spin up/spin down configuration of electron spin. More abstractly, the magnetic space groups are often thought of as representing time reversal symmetry. This is in contrast to time crystals, which instead have time translation symmetry. In the most general form, magnetic space groups can represent symmetries of any two-valued lattice point property, such as positive/negative electrical charge or the alignment of electric dipole moments. The magnetic space groups place restrictions on the electronic band structure of materials. Specifically, they place restrictions on the connectivity of the different electron bands, which in turn defines whether a material has symmetry-protected topological order. Thus, the magnetic space groups can be used to identify topological materials, such as topological insulators.\nExperimentally, the main source of information about magnetic space groups is neutron diffraction experiments. The resulting experimental profile can be matched to theoretical structures by Rietveld refinement or simulated annealing.\nAdding the two-valued symmetry is also a useful concept for frieze groups, which are often used to classify artistic patterns. In that case, the 7 frieze groups with the addition of color reversal become 24 color-reversing frieze groups. Beyond the simple two-valued property, the idea has been extended further to three colors in three dimensions, and to even higher dimensions and more colors.", "A major step was the work of Heinrich Heesch, who first rigorously established the concept of antisymmetry as part of a series of papers in 1929 and 1930. Applying this antisymmetry operation to the 32 crystallographic point groups gives a total of 122 magnetic point groups. However, although Heesch correctly laid out each of the magnetic point groups, his work remained obscure, and the point groups were later re-derived by Tavger and Zaitsev. The concept was more fully explored by Shubnikov in terms of color symmetry. When applied to space groups, the number increases from the usual 230 three-dimensional space groups to 1651 magnetic space groups, as found in the 1953 thesis of Alexandr Zamorzaev.
While the magnetic space groups were originally found using geometry, it was later shown that the same magnetic space groups can be found using generating sets.", "The magnetic space groups can be placed into three categories. First, the 230 colorless groups contain only spatial symmetry, and correspond to the crystallographic space groups. Then there are 230 grey groups, which are invariant under antisymmetry. Finally, there are the 1191 black-white groups, which contain the more complex symmetries. There are two common conventions for giving names to the magnetic space groups. They are Opechowski-Guiccione and Belov-Neronova-Smirnova. For colorless and grey groups, the conventions use the same names, but they treat the black-white groups differently. A full list of the magnetic space groups (in both conventions) can be found both in the original papers, and in several places online.\nThe types can be distinguished by their different construction. Type I magnetic space groups are identical to the ordinary space groups.\nType II magnetic space groups (the grey groups) are made up of all the symmetry operations of the crystallographic space group, plus the products of those operations with the time reversal operation. Equivalently, this can be seen as the direct product of an ordinary space group with the two-element group generated by time reversal.\nType III magnetic space groups are constructed from a subgroup of the crystallographic space group with index 2; the operations in that subgroup are kept as they are, while the remaining operations are combined with time reversal.\nType IV magnetic space groups are constructed with the use of a pure translation combined with time reversal (in Seitz notation, a null rotation together with a translation). Here the translation is a vector (usually given in fractional coordinates) pointing from a black colored point to a white colored point, or vice versa.", "In solid state physics, the magnetic space groups, or Shubnikov groups, are the symmetry groups which classify the symmetries of a crystal both in space and in a two-valued property such as electron spin. To represent such a property, each lattice point is colored black or white, and in addition to the usual three-dimensional symmetry operations, there is a so-called \"antisymmetry\" operation which turns all black lattice points white and all white lattice points black. Thus, the magnetic space groups serve as an extension to the crystallographic space groups which describe spatial symmetry alone.\nThe application of magnetic space groups to crystal structures is motivated by Curie's Principle. Compatibility with a material's symmetries, as described by the magnetic space group, is a necessary condition for a variety of material properties, including ferromagnetism, ferroelectricity, and topological insulation.", "When the periodicity of the magnetic order coincides with the periodicity of crystallographic order, the magnetic phase is said to be commensurate, and can be well-described by a magnetic space group. However, when this is not the case, the order does not correspond to any magnetic space group. These phases can instead be described by magnetic superspace groups, which describe incommensurate order. This is the same formalism often used to describe the ordering of some quasicrystals.", "The following table lists all of the 122 possible three-dimensional magnetic point groups, given in the short version of Hermann–Mauguin notation. Here, the addition of an apostrophe to a symmetry operation indicates that the combination of the symmetry element and the antisymmetry operation is a symmetry of the structure.
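As a small illustration of the antisymmetry operation and of the type II (grey group) construction described above, the sketch below pairs every operation of a toy point group with its time-reversed partner. The two-element point group used here is only an illustrative stand-in, not one of the groups tabulated in the text.

```python
# A minimal sketch of the grey-group (type II) construction: every spatial
# operation of a group is paired with itself combined with the time-reversal
# ("antisymmetry") operation 1'. Operations are modelled as (matrix, primed)
# pairs; the tiny point group {E, C2z} is an illustrative assumption.
import numpy as np

E = np.eye(3)
C2z = np.diag([-1.0, -1.0, 1.0])   # two-fold rotation about the z axis
point_group = [E, C2z]

# Grey group: direct product with {1, 1'}; True marks a primed (time-reversed) operation.
grey_group = [(R, primed) for R in point_group for primed in (False, True)]

for R, primed in grey_group:
    label = "primed  " if primed else "unprimed"
    print(label, R.diagonal())
print("order of grey group:", len(grey_group))  # always twice the order of the point group
```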
There are 32 crystallographic point groups, 32 grey groups, and 58 black-white magnetic point groups.\nThe magnetic point groups which are compatible with ferromagnetism are colored cyan, the magnetic point groups which are compatible with ferroelectricity are colored red, and the magnetic point groups which are compatible with both ferromagnetism and ferroelectricity are purple. There are 31 magnetic point groups which are compatible with ferromagnetism. These groups, sometimes called admissible, leave at least one component of the spin invariant under operations of the point group. There are 31 point groups compatible with ferroelectricity; these are generalizations of the crystallographic polar point groups. There are also 31 point groups compatible with the theoretically proposed ferrotorodicity. Similar symmetry arguments have been extended to other electromagnetic material properties such as magnetoelectricity or piezoelectricity.\nThe following diagrams show the stereographic projection of most of the magnetic point groups onto a flat surface. Not shown are the grey point groups, which look identical to the ordinary crystallographic point groups, except they are also invariant under the antisymmetry operation.", "The Landau theory of second-order phase transitions has been applied to magnetic phase transitions. The magnetic space group of the disordered structure transitions to the magnetic space group of the ordered phase; the latter is a subgroup of the former, and keeps only the symmetries which have not been broken during the phase transition. This can be tracked numerically by the evolution of the order parameter, which belongs to a single irreducible representation of the disordered group.\nImportant magnetic phase transitions include the paramagnetic to ferromagnetic transition at the Curie temperature and the paramagnetic to antiferromagnetic transition at the Néel temperature. Differences in the magnetic phase transitions explain why Fe2O3, MnCO3, and CoCO3 are weakly ferromagnetic, whereas the structurally similar Cr2O3 and FeCO3 are purely antiferromagnetic. This theory developed into what is now known as antisymmetric exchange.\nA related scheme is the classification of Aizu species, which consist of a prototypical non-ferroic magnetic point group, the letter \"F\" for ferroic, and a ferromagnetic or ferroelectric point group which is a subgroup of the prototypical group and can be reached by continuous motion of the atoms in the crystal structure.", "The black-white Bravais lattices characterize the translational symmetry of the structure like the typical Bravais lattices, but also contain additional symmetry elements. For black-white Bravais lattices, the number of black and white sites is always equal. There are 14 traditional Bravais lattices, 14 grey lattices, and 22 black-white Bravais lattices, for a total of 50 two-color lattices in three dimensions.\nThe table shows the 36 black-white Bravais lattices, including the 14 traditional Bravais lattices, but excluding the 14 grey lattices which look identical to the traditional lattices. The lattice symbols are those used for the traditional Bravais lattices. The suffix in the symbol indicates the mode of centering by the black (antisymmetry) points in the lattice, where s denotes edge centering.", "The magnetocrystalline anisotropy energy is generally represented as an expansion in powers of the direction cosines of the magnetization. The magnetization vector can be written M = Ms(α1, α2, α3), where Ms is the saturation magnetization and α1, α2, α3 are direction cosines.
Because of time reversal symmetry, only even powers of the cosines are allowed. The nonzero terms in the expansion depend on the crystal system (e.g., cubic or hexagonal). The order of a term in the expansion is the sum of all the exponents of magnetization components; a term such as α1α2, for example, is second order.", "The magnetocrystalline anisotropy parameters are generally defined for ferromagnets that are constrained to remain undeformed as the direction of magnetization changes. However, coupling between the magnetization and the lattice does result in deformation, an effect called magnetostriction. To keep the lattice from deforming, a stress must be applied. If the crystal is not under stress, magnetostriction alters the effective magnetocrystalline anisotropy. If a ferromagnet is single domain (uniformly magnetized), the effect is to change the magnetocrystalline anisotropy parameters.\nIn practice, the correction is generally not large. In hexagonal crystals, there is no change; in cubic crystals, there is a small change, as in the table below.", "The magnetocrystalline anisotropy parameters have a strong dependence on temperature. They generally decrease rapidly as the temperature approaches the Curie temperature, so the crystal becomes effectively isotropic. Some materials also have an isotropic point at which the first anisotropy constant passes through zero. Magnetite (Fe3O4), a mineral of great importance to rock magnetism and paleomagnetism, has an isotropic point at 130 K.\nMagnetite also has a phase transition at which the crystal symmetry changes from cubic (above the transition) to monoclinic or possibly triclinic (below it). The temperature at which this occurs, called the Verwey temperature, is 120 K.", "In a cubic crystal the lowest order terms in the energy density are\nE/V = K1 (α1²α2² + α2²α3² + α3²α1²) + K2 α1²α2²α3².\nIf the second term can be neglected, the easy axes are the ⟨100⟩ axes (i.e., the x, y, and z directions) for K1 > 0 and the ⟨111⟩ directions for K1 < 0 (see images on right).\nIf K2 is not assumed to be zero, the easy axes depend on both K1 and K2. These are given in the table below, along with hard axes (directions of greatest energy) and intermediate axes (saddle points in the energy). In energy surfaces like those on the right, the easy axes are analogous to valleys, the hard axes to peaks and the intermediate axes to mountain passes.\nBelow are some room-temperature anisotropy constants for cubic ferromagnets. The compounds involving iron oxide are ferrites, an important class of ferromagnets. In general the anisotropy parameters for cubic ferromagnets are higher than those for uniaxial ferromagnets. This is consistent with the fact that the lowest order term in the expression for cubic anisotropy is fourth order, while that for uniaxial anisotropy is second order.", "The energy density for a tetragonal crystal contains a basal-plane anisotropy term which is fourth order (the same order as the second uniaxial term); the definition of this constant may vary by a constant multiple between publications. A corresponding expression applies for a rhombohedral crystal.", "In a hexagonal system the c axis is an axis of sixfold rotation symmetry. The energy density is, to fourth order,\nE/V = K1 sin²θ + K2 sin⁴θ,\nwhere θ is the angle between the magnetization and the c axis. The uniaxial anisotropy is mainly determined by these first two terms.
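Returning to the cubic case above, the following sketch numerically checks the statement that, when the second term is neglected, the ⟨100⟩ directions are easiest for K1 > 0 and the ⟨111⟩ directions for K1 < 0. The K1 values used are round illustrative numbers, not entries from the tables referred to in the text.

```python
# A worked check of the cubic anisotropy statement above, using
# E/V = K1*(a1^2*a2^2 + a2^2*a3^2 + a3^2*a1^2) with K2 neglected.
# The K1 values are illustrative assumptions.
import math

def cubic_energy_density(K1: float, a1: float, a2: float, a3: float) -> float:
    """Lowest-order cubic anisotropy energy density for direction cosines (a1, a2, a3)."""
    return K1 * (a1*a1*a2*a2 + a2*a2*a3*a3 + a3*a3*a1*a1)

d100 = (1.0, 0.0, 0.0)                               # a <100> direction
d111 = tuple(1.0 / math.sqrt(3) for _ in range(3))   # a <111> direction

for K1 in (+4.8e4, -1.2e4):   # J/m^3, illustrative magnitudes
    e100 = cubic_energy_density(K1, *d100)
    e111 = cubic_energy_density(K1, *d111)
    easy = "<100>" if e100 < e111 else "<111>"
    print(f"K1 = {K1:+.1e}  E<100> = {e100:+.2e}  E<111> = {e111:+.2e}  easy axes: {easy}")
```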
Depending on the values of K1 and K2, there are four different kinds of anisotropy (isotropic, easy axis, easy plane and easy cone):\n* K1 = K2 = 0: the ferromagnet is isotropic.\n* K1 > 0 and K2 > −K1: the c axis is an easy axis.\n* K1 > 0 and K2 < −K1: the basal plane is an easy plane.\n* K1 < 0 and K2 < −K1/2: the basal plane is an easy plane.\n* K1 < 0 and K2 > −K1/2: the ferromagnet has an easy cone (see figure to right).\nThe basal plane anisotropy is determined by the next term in the expansion, which is sixth-order. The easy directions are projected onto three axes in the basal plane.\nBelow are some room-temperature anisotropy constants for hexagonal ferromagnets. Since all the values of K1 and K2 are positive, these materials have an easy axis.\nHigher order constants, in particular conditions, may lead to first-order magnetization processes (FOMP).", "Magnetocrystalline anisotropy has a great influence on industrial uses of ferromagnetic materials. Materials with high magnetic anisotropy usually have high coercivity, that is, they are hard to demagnetize. These are called \"hard\" ferromagnetic materials and are used to make permanent magnets. For example, the high anisotropy of rare-earth metals is mainly responsible for the strength of rare-earth magnets. During manufacture of magnets, a powerful magnetic field aligns the microcrystalline grains of the metal such that their \"easy\" axes of magnetization all point in the same direction, freezing a strong magnetic field into the material.\nOn the other hand, materials with low magnetic anisotropy usually have low coercivity: their magnetization is easy to change. These are called \"soft\" ferromagnets and are used to make magnetic cores for transformers and inductors. The small energy required to turn the direction of magnetization minimizes core losses, the energy dissipated in the transformer core when the alternating current changes direction.", "The spin-orbit interaction is the primary source of magnetocrystalline anisotropy. It is basically the orbital motion of the electrons coupling with the crystal electric field that gives rise to the first-order contribution to magnetocrystalline anisotropy. The second order arises due to the mutual interaction of the magnetic dipoles. This effect is weak compared to the exchange interaction and is difficult to compute from first principles, although some successful computations have been made.", "In physics, a ferromagnetic material is said to have magnetocrystalline anisotropy if it takes more energy to magnetize it in certain directions than in others. These directions are usually related to the principal axes of its crystal lattice. It is a special case of magnetic anisotropy. In other words, the excess energy required to magnetize a specimen in a particular direction over that required to magnetize it along the easy direction is called crystalline anisotropy energy.", "More than one kind of crystal system has a single axis of high symmetry (threefold, fourfold or sixfold). The anisotropy of such crystals is called uniaxial anisotropy. If the z axis is taken to be the main symmetry axis of the crystal, the lowest order term in the energy is\nE = K1 V (α1² + α2²) = K1 V (1 − α3²).\nThe ratio E/V is an energy density (energy per unit volume). This can also be represented in spherical polar coordinates with α1 = sin θ cos φ, α2 = sin θ sin φ, and α3 = cos θ:\nE/V = K1 sin²θ.\nThe parameter K1, often represented as Ku, has units of energy density and depends on composition and temperature.\nThe minima in this energy with respect to θ satisfy ∂E/∂θ = 0.\nIf K1 > 0,\nthe directions of lowest energy are the ±z directions. The z axis is called the easy axis.
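The easy axis, easy plane and easy cone cases listed earlier can also be recovered numerically, without recalling the inequalities, by directly minimising the fourth-order energy density E(θ) = K1 sin²θ + K2 sin⁴θ. The sketch below does this on a grid of angles; the K1, K2 pairs are illustrative assumptions, not tabulated material constants.

```python
# A small numerical sketch: minimise E(theta) = K1*sin^2(theta) + K2*sin^4(theta)
# over 0..90 degrees and report whether the minimum sits on the axis,
# in the basal plane, or on a cone in between. K1, K2 values are illustrative.
import math

def classify(K1: float, K2: float, n: int = 20001) -> str:
    thetas = [math.pi / 2 * i / (n - 1) for i in range(n)]   # 0 to 90 degrees
    energies = [K1 * math.sin(t) ** 2 + K2 * math.sin(t) ** 4 for t in thetas]
    i_min = min(range(n), key=energies.__getitem__)
    if i_min == 0:
        return "easy axis"
    if i_min == n - 1:
        return "easy plane"
    return f"easy cone at ~{math.degrees(thetas[i_min]):.1f} deg"

if __name__ == "__main__":
    for K1, K2 in ((4e5, 1e5), (-3e5, -1e5), (-2e5, 4e5)):   # J/m^3, illustrative
        print(f"K1 = {K1:+.0e}, K2 = {K2:+.0e} -> {classify(K1, K2)}")
```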
If K1 < 0, there is an easy plane perpendicular to the symmetry axis (the basal plane of the crystal).\nMany models of magnetization represent the anisotropy as uniaxial and ignore higher order terms. However, if K1 < 0, the lowest energy term does not determine the direction of the easy axes within the basal plane. For this, higher-order terms are needed, and these depend on the crystal system (hexagonal, tetragonal or rhombohedral).", "Magnetomechanical effects connect magnetic, mechanical and electric phenomena in solid materials.\n* Magnetostriction \n* Inverse magnetostrictive effect\n* Wiedemann effect \n* Matteucci effect \n* Guillemin effect\nMagnetostriction is thermodynamically opposite to the inverse magnetostrictive effect. The same situation occurs for the Wiedemann and Matteucci effects.\nFor magnetic, mechanical and electric phenomena in fluids see Magnetohydrodynamics and Electrohydrodynamics.", "*1. The (ordinary) Hall effect changes sign upon magnetic field reversal and it is an orbital effect (unrelated to spin) due to the Lorentz force. Transversal AMR (planar Hall effect) does not change sign and it is caused by spin-orbit interaction.", "An example of magnetoresistance due to direct action of magnetic field on electric current can be studied on a Corbino disc (see Figure).\nIt consists of a conducting annulus with perfectly conducting rims. Without a magnetic field, the battery drives a radial current between the rims. When a magnetic field perpendicular to the plane of the annulus is applied (either into or out of the page), a circular component of current flows as well, due to the Lorentz force. Initial interest in this problem began with Boltzmann in 1886, and it was independently re-examined by Corbino in 1911.\nIn a simple model, supposing the response to the Lorentz force is the same as for an electric field, the carrier velocity v is given by:\nv = μ(E + v × B),\nwhere μ is the carrier mobility. Solving for the velocity, we find:\nv = μ(E + μE × B + μ²(E·B)B) / (1 + (μB)²),\nwhere the effective reduction in mobility due to the B-field (for motion perpendicular to this field) is apparent. Electric current (proportional to the radial component of velocity) will decrease with increasing magnetic field and hence the resistance of the device will increase. Critically, this magnetoresistive scenario depends sensitively on the device geometry and current lines and it does not rely on magnetic materials.\nIn a semiconductor with a single carrier type, the magnetoresistance is proportional to (1 + (μB)²), where μ is the semiconductor mobility (units m²·V⁻¹·s⁻¹ or, equivalently, T⁻¹) and B is the magnetic field (units teslas). Indium antimonide, an example of a high mobility semiconductor, could have an electron mobility above 4 m²·V⁻¹·s⁻¹ at 300 K. So in a 0.25 T field, for example, the magnetoresistance increase would be 100%.", "Thomson's experiments are an example of AMR, a property of a material in which a dependence of electrical resistance on the angle between the direction of electric current and direction of magnetization is observed. The effect arises in most cases from the simultaneous action of magnetization and spin-orbit interaction (exceptions related to non-collinear magnetic order notwithstanding, see Sec. 4(b) in the review) and its detailed mechanism depends on the material. It can be for example due to a larger probability of s-d scattering of electrons in the direction of magnetization (which is controlled by the applied magnetic field).
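The indium antimonide example above is easy to check: for a single carrier type the geometric magnetoresistance scales as 1 + (μB)², so a mobility of 4 m²·V⁻¹·s⁻¹ in a 0.25 T field gives μB = 1 and hence a 100% resistance increase. A minimal sketch of that arithmetic follows; the only assumptions are the rounded numbers already quoted in the text.

```python
# Worked check of the geometric magnetoresistance example above:
# fractional resistance increase = (mu * B)^2 for carriers moving across B.

def geometric_mr_increase(mobility_m2_per_Vs: float, B_tesla: float) -> float:
    """Fractional resistance increase for a single-carrier semiconductor."""
    return (mobility_m2_per_Vs * B_tesla) ** 2

if __name__ == "__main__":
    mu, B = 4.0, 0.25            # InSb-like mobility (m^2/Vs) and field (T) from the text
    print(f"Resistance increase: {geometric_mr_increase(mu, B):.0%}")   # -> 100%
```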
The net effect (in most materials) is that the electrical resistance has its maximum value when the direction of current is parallel to the applied magnetic field. AMR of new materials is being investigated and magnitudes up to 50% have been observed in some uranium (but otherwise quite conventional) ferromagnetic compounds. Very recently, materials with extreme AMR have been identified, driven by unconventional mechanisms such as a metal-insulator transition triggered by rotating the magnetic moments (for some directions of the magnetic moments the system is semimetallic, while for other directions a gap opens).\nIn polycrystalline ferromagnetic materials, the AMR can only depend on the angle φ between the magnetization and the current direction and, as long as the resistivity of the material can be described by a rank-two tensor, it must follow\nρ(φ) = ρ⊥ + (ρ∥ − ρ⊥) cos²φ,\nwhere ρ(φ) is the (longitudinal) resistivity of the film and ρ∥ and ρ⊥ are the resistivities for φ = 0 and φ = 90°, respectively. Associated with longitudinal resistivity, there is also transversal resistivity dubbed (somewhat confusingly[1]) the planar Hall effect. In monocrystals, the resistivity depends also on the individual orientations of the current and magnetization with respect to the crystal axes, not only on the angle between them.\nTo compensate for the non-linear characteristics and inability to detect the polarity of a magnetic field, the following structure is used for sensors. It consists of stripes of aluminum or gold placed on a thin film of permalloy (a ferromagnetic material exhibiting the AMR effect) inclined at an angle of 45°. This structure forces the current not to flow along the “easy axes” of the thin film, but at an angle of 45°. The dependence of resistance now has a permanent offset which is linear around the null point. Because of its appearance, this sensor type is called a barber pole.\nThe AMR effect is used in a wide array of sensors for measurement of Earth's magnetic field (electronic compass), for electric current measuring (by measuring the magnetic field created around the conductor), for traffic detection and for linear position and angle sensing. The biggest AMR sensor manufacturers are Honeywell, NXP Semiconductors, STMicroelectronics, and Sensitec GmbH (http://www.sensitec.com).\nOn the theoretical side, I. A. Campbell, A. Fert, and O. Jaoul (CFJ) derived an expression for the AMR ratio of Ni-based alloys using the two-current model with s-s and s-d scattering processes, where s denotes conduction electrons and d the 3d states with spin-orbit interaction. The AMR ratio is expressed in terms of a spin-orbit coupling constant, an exchange field, and the spin-dependent resistivities. More recently, Satoshi Kokado et al. obtained a general expression of the AMR ratio for 3d transition-metal ferromagnets by extending the CFJ theory to a more general one. The general expression can also be applied to half-metals.", "Magnetoresistance is the tendency of a material (often ferromagnetic) to change the value of its electrical resistance in an externally applied magnetic field. There are a variety of effects that can be called magnetoresistance. Some occur in bulk non-magnetic metals and semiconductors, such as geometrical magnetoresistance, Shubnikov–de Haas oscillations, or the common positive magnetoresistance in metals. Other effects occur in magnetic metals, such as negative magnetoresistance in ferromagnets or anisotropic magnetoresistance (AMR). Finally, in multicomponent or multilayer systems (e.g.
magnetic tunnel junctions), giant magnetoresistance (GMR), tunnel magnetoresistance (TMR), colossal magnetoresistance (CMR), and extraordinary magnetoresistance (EMR) can be observed.\nThe first magnetoresistive effect was discovered in 1856 by William Thomson, better known as Lord Kelvin, but he was unable to lower the electrical resistance of anything by more than 5%. Today, systems including semimetals and concentric ring EMR structures are known. In these, a magnetic field can adjust the resistance by orders of magnitude. Since different mechanisms can alter the resistance, it is useful to separately consider situations where it depends on a magnetic field directly (e.g. geometric magnetoresistance and multiband magnetoresistance) and those where it does so indirectly through magnetization (e.g. AMR and TMR).", "William Thomson (Lord Kelvin) first discovered ordinary magnetoresistance in 1856. He experimented with pieces of iron and discovered that the resistance increases when the current is in the same direction as the magnetic force and decreases when the current is at 90° to the magnetic force. He then did the same experiment with nickel and found that it was affected in the same way but the magnitude of the effect was greater. This effect is referred to as anisotropic magnetoresistance (AMR).\nIn 2007, Albert Fert and Peter Grünberg were jointly awarded the Nobel Prize for the discovery of giant magnetoresistance.", "Internally, ferromagnetic materials have a structure that is divided into domains, each of which is a region of uniform magnetization. When a magnetic field is applied, the boundaries between the domains shift and the domains rotate; both of these effects cause a change in the materials dimensions. The reason that a change in the magnetic domains of a material results in a change in the materials dimensions is a consequence of magnetocrystalline anisotropy; it takes more energy to magnetize a crystalline material in one direction than in another. If a magnetic field is applied to the material at an angle to an easy axis of magnetization, the material will tend to rearrange its structure so that an easy axis is aligned with the field to minimize the free energy of the system. Since different crystal directions are associated with different lengths, this effect induces a strain in the material.\nThe reciprocal effect, the change of the magnetic susceptibility (response to an applied field) of a material when subjected to a mechanical stress, is called the Villari effect. Two other effects are related to magnetostriction: the Matteucci effect is the creation of a helical anisotropy of the susceptibility of a magnetostrictive material when subjected to a torque and the Wiedemann effect is the twisting of these materials when a helical magnetic field is applied to them.\nThe Villari reversal is the change in sign of the magnetostriction of iron from positive to negative when exposed to magnetic fields of approximately 40 kA/m.\nOn magnetization, a magnetic material undergoes changes in volume which are small: of the order 10.", "* Electronic article surveillance – using magnetostriction to prevent shoplifting\n* Magnetostrictive delay lines - an earlier form of computer memory\n* Magnetostrictive loudspeakers and headphones", "These materials generally show non-linear behavior with a change in applied magnetic field or stress. For small magnetic fields, linear piezomagnetic constitutive behavior is enough. 
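The small-field linear description mentioned in the last sentence couples strain and flux density to stress and field through a single magnetostrictive ("piezomagnetic") coefficient. The one-dimensional sketch below is only illustrative; the coefficient values are made up for the example and do not describe any particular alloy.

```python
# Minimal 1-D linear magneto-mechanical constitutive sketch:
#   strain S = s_H * T + d * H
#   flux   B = d   * T + mu_T * H
# where T is stress and H is field; s_H, d and mu_T are illustrative constants,
# not data for any real magnetostrictive material.

def linear_piezomagnetic(T_stress_Pa: float, H_field_A_per_m: float,
                         s_H=1.0e-11, d=1.0e-8, mu_T=5.0e-6):
    """Return (strain, flux density B in tesla) for the linear coupled model."""
    strain = s_H * T_stress_Pa + d * H_field_A_per_m
    B = d * T_stress_Pa + mu_T * H_field_A_per_m
    return strain, B

strain, B = linear_piezomagnetic(T_stress_Pa=10e6, H_field_A_per_m=5e3)
print(f"strain = {strain:.2e}, B = {B:.3f} T")
```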
Non-linear magnetic behavior is captured using a classical macroscopic model such as the Preisach model and Jiles-Atherton model. For capturing magneto-mechanical behavior, Armstrong proposed an \"energy average\" approach. More recently, Wahi et al. have proposed a computationally efficient constitutive model wherein constitutive behavior is captured using a \"locally linearizing\" scheme.", "For actuator applications, maximum rotation of magnetic moments leads to the highest possible magnetostriction output. This can be achieved by processing techniques such as stress annealing and field annealing. However, mechanical pre-stresses can also be applied to thin sheets to induce alignment perpendicular to actuation as long as the stress is below the buckling limit. For example, it has been demonstrated that applied compressive pre-stress of up to ~50 MPa can result in an increase of magnetostriction by ~90%. This is hypothesized to be due to a \"jump\" in initial alignment of domains perpendicular to applied stress and improved final alignment parallel to applied stress.", "Magnetostrictive materials can convert magnetic energy into kinetic energy, or the reverse, and are used to build actuators and sensors. The property can be quantified by the magnetostrictive coefficient, λ, which may be positive or negative and is defined as the fractional change in length as the magnetization of the material increases from zero to the saturation value. The effect is responsible for the familiar \"electric hum\" () which can be heard near transformers and high power electrical devices.\nCobalt exhibits the largest room-temperature magnetostriction of a pure element at 60 microstrains. Among alloys, the highest known magnetostriction is exhibited by Terfenol-D, (Ter for terbium, Fe for iron, NOL for Naval Ordnance Laboratory, and D for dysprosium). Terfenol-D, , exhibits about 2,000 microstrains in a field of 160 kA/m (2 kOe) at room temperature and is the most commonly used engineering magnetostrictive material. Galfenol, , and Alfer, , are newer alloys that exhibit 200-400 microstrains at lower applied fields (~200 Oe) and have enhanced mechanical properties from the brittle Terfenol-D. Both of these alloys have <100> easy axes for magnetostriction and demonstrate sufficient ductility for sensor and actuator applications.\nAnother very common magnetostrictive composite is the amorphous alloy with its trade name Metglas 2605SC. Favourable properties of this material are its high saturation-magnetostriction constant, λ, of about 20 microstrains and more, coupled with a low magnetic-anisotropy field strength, H, of less than 1 kA/m (to reach magnetic saturation). Metglas 2605SC also exhibits a very strong ΔE-effect with reductions in the effective Young's modulus up to about 80% in bulk. This helps build energy-efficient magnetic MEMS.\nCobalt ferrite, (CoO·FeO), is also mainly used for its magnetostrictive applications like sensors and actuators, thanks to its high saturation magnetostriction (~200 parts per million). In the absence of rare-earth elements, it is a good substitute for Terfenol-D. Moreover, its magnetostrictive properties can be tuned by inducing a magnetic uniaxial anisotropy. This can be done by magnetic annealing, magnetic field assisted compaction, or reaction under uniaxial pressure. 
This last solution has the advantage of being ultrafast (20 min), thanks to the use of spark plasma sintering.\nIn early sonar transducers during World War II, nickel was used as a magnetostrictive material. To alleviate the shortage of nickel, the Japanese navy used an iron-aluminium alloy from the Alperm family.", "Like the flux density, the magnetostriction also exhibits hysteresis versus the strength of the magnetizing field. The shape of this hysteresis loop (called a \"dragonfly loop\") can be reproduced using the Jiles-Atherton model.", "Magnetostriction is a property of magnetic materials that causes them to change their shape or dimensions during the process of magnetization. The variation of a material's magnetization under an applied magnetic field changes the magnetostrictive strain until it reaches its saturation value, λ. The effect was first identified in 1842 by James Joule when observing a sample of iron.\nMagnetostriction applies to magnetic fields, while electrostriction applies to electric fields.\nMagnetostriction causes energy loss due to frictional heating in susceptible ferromagnetic cores, and is also responsible for the low-pitched humming sound that can be heard coming from transformers, where alternating currents produce a changing magnetic field.", "Single-crystal alloys exhibit superior microstrain, but are vulnerable to yielding due to the anisotropic mechanical properties of most metals. It has been observed that for polycrystalline alloys with a high area coverage of grains oriented preferentially for microstrain, the mechanical properties (ductility) of magnetostrictive alloys can be significantly improved. Targeted metallurgical processing steps promote abnormal grain growth of {011} grains in galfenol and alfenol thin sheets, which contain two easy axes for magnetic domain alignment during magnetostriction. This can be accomplished by adding particles such as boride species and niobium carbide (NbC) during initial chill casting of the ingot.\nFor a polycrystalline alloy, an established formula for the magnetostriction, λ, from known directional microstrain measurements is:\nλ = 1/5(2λ₁₀₀ + 3λ₁₁₁)\nDuring subsequent hot rolling and recrystallization steps, particle strengthening occurs in which the particles introduce a “pinning” force at grain boundaries that hinders normal (stochastic) grain growth in an annealing step assisted by a controlled atmosphere. Thus, a single-crystal-like texture (~90% {011} grain coverage) is attainable, reducing the interference with magnetic domain alignment and increasing the microstrain attainable for polycrystalline alloys, as measured by semiconducting strain gauges. These surface textures can be visualized using electron backscatter diffraction (EBSD) or related diffraction techniques.", "Magnonics is an emerging field of modern magnetism, which can be considered a sub-field of modern solid state physics. Magnonics combines the study of waves and magnetism. Its main aim is to investigate the behaviour of spin waves in nano-structured elements. In essence, spin waves are a propagating re-ordering of the magnetisation in a material and arise from the precession of magnetic moments. Magnetic moments arise from the orbital and spin moments of the electron; most often it is the spin moment that contributes to the net magnetic moment.\nFollowing the success of the modern hard disk, there is much current interest in future magnetic data storage and in using spin waves for things such as magnonic logic and data storage. 
Similarly, spintronics looks to utilize the inherent spin degree of freedom to complement the already successful charge property of the electron used in contemporary electronics. Modern magnetism is concerned with furthering the understanding of the behaviour of the magnetisation on very small (sub-micrometre) length scales and very fast (sub-nanosecond) timescales, and with how this can be applied to improving existing technologies or generating new technologies and computing concepts. A magnon torque device based on such potential uses was invented and later perfected at the National University of Singapore's Electrical & Computer Engineering department, with results published on November 29, 2019, in Science.\nA magnonic crystal is a magnetic metamaterial with alternating magnetic properties. As with conventional metamaterials, its properties arise from geometrical structuring, rather than from its bandstructure or composition directly. Small spatial inhomogeneities create an effective macroscopic behaviour, leading to properties not readily found in nature. By alternating parameters such as the relative permeability or saturation magnetisation, it is possible to tailor magnonic bandgaps in the material. By tuning the size of this bandgap, only spin-wave modes able to cross the bandgap can propagate through the medium, leading to selective propagation of certain spin-wave frequencies. See Surface magnon polariton.", "Spin waves can propagate in magnetic media with magnetic ordering such as ferromagnets and antiferromagnets. The frequencies of the precession of the magnetisation depend on the material and its magnetic parameters; in general, precession frequencies lie in the microwave range from 1–100 GHz, and exchange resonances in particular materials can even reach frequencies up to several THz. This higher precession frequency opens new possibilities for analogue and digital signal processing.\nSpin waves themselves have group velocities on the order of a few km per second. The damping of spin waves in a magnetic material also causes the amplitude of the spin wave to decay with distance, meaning the distance freely propagating spin waves can travel is usually only several tens of μm. The damping of the dynamical magnetisation is accounted for phenomenologically by the Gilbert damping constant in the Landau-Lifshitz-Gilbert equation (LLG equation); the energy loss mechanism itself is not completely understood, but is known to arise microscopically from magnon-magnon scattering, magnon-phonon scattering and losses due to eddy currents. The Landau-Lifshitz-Gilbert equation is the equation of motion for the magnetisation. All of the properties of the magnetic system, such as the applied bias field and the sample's exchange, anisotropy and dipolar fields, are described in terms of an effective magnetic field that enters the Landau–Lifshitz–Gilbert equation. The study of damping in magnetic systems is an ongoing modern research topic.\nThe LL equation was introduced in 1935 by Landau and Lifshitz to model the precessional motion of magnetization in a solid with an effective magnetic field and with damping. Later, Gilbert modified the damping term, which in the limit of small damping yields identical results. The LLG equation is\ndM/dt = −γ M × H_eff + (α/M_s) M × dM/dt,\nwhere the constant α is the Gilbert phenomenological damping parameter and depends on the solid, γ is the electron gyromagnetic ratio, H_eff is the effective magnetic field and M_s is the saturation magnetisation.\nResearch in magnetism, like the rest of modern science, is conducted with a symbiosis of theoretical and experimental approaches. 
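Micromagnetic solvers integrate essentially this equation of motion. The toy script below is a sketch only, not how OOMMF or NMAG work internally: it relaxes a single macrospin towards a static effective field using the explicit (Landau-Lifshitz) form of the LLG equation, with all parameter values chosen purely for illustration.

```python
import numpy as np

# Toy single-macrospin integration of the Landau-Lifshitz-Gilbert equation in its
# explicit (Landau-Lifshitz) form. A real micromagnetic solver also includes
# exchange, anisotropy and demagnetizing contributions to the effective field.

gamma = 1.76e11                      # gyromagnetic ratio, rad s^-1 T^-1
alpha = 0.1                          # Gilbert damping (dimensionless, illustrative)
H_eff = np.array([0.0, 0.0, 0.1])    # static effective field along z, tesla

m = np.array([1.0, 0.0, 0.0])        # initial unit magnetisation along x
dt = 2e-13                           # time step, seconds

for _ in range(100_000):
    precession = -gamma / (1 + alpha**2) * np.cross(m, H_eff)
    damping = -gamma * alpha / (1 + alpha**2) * np.cross(m, np.cross(m, H_eff))
    m = m + dt * (precession + damping)
    m /= np.linalg.norm(m)           # keep |m| = 1

print("final m:", np.round(m, 3))    # relaxes towards the field direction (+z)
```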
Both approaches go hand-in-hand, experiments test the predictions of theory and theory provides explanations and predictions of new experiments. The theoretical side focuses on numerical modelling and simulations, so called micromagnetic modelling. Programs such as OOMMF or NMAG are micromagnetic solvers that numerically solve the LLG equation with appropriate boundary conditions. Prior to the start of the simulation, magnetic parameters of the sample and the initial groundstate magnetisation and bias field details are stated.", "Experimentally, there are many techniques that exist to study magnetic phenomena, each with its own limitations and advantages. The experimental techniques can be distinguished by being time-domain (optical and field pumped TR-MOKE), field-domain (ferromagnetic resonance (FMR)) and frequency-domain techniques (Brillouin light scattering (BLS), vector network analyser - ferromagnetic resonance (VNA-FMR)). Time-domain techniques allow the temporal evolution of the magnetisation to be traced indirectly by recording the polarisation response of the sample. The magnetisation can be inferred by the so-called Kerr rotation. Field-domain techniques such as FMR tickle the magnetisation with a CW microwave field. By measuring the absorption of the microwave radiation through the sample, as an external magnetic field is swept provides information about magnetic resonances in the sample. Importantly, the frequency at which the magnetisation precesses depends on the strength of the applied magnetic field. As the external field strength is increased, so does the precession frequency. Frequency-domain techniques such as VNA-FMR, examine the magnetic response due to excitation by an RF current, the frequency of the current is swept through the GHz range and the amplitude of either the transmitted or reflected current can be measured.\nModern ultrafast lasers allow femtosecond (fs) temporal resolution for time-domain techniques, such tools are now standard in laboratory environments. Based on the magneto-optic Kerr effect, TR-MOKE is a pump-probe technique where a pulsed laser source illuminates the sample with two separate laser beams. The pump beam is designed to excite or perturb the sample from equilibrium, it is very intense designed to create highly non-equilibrium conditions within the sample material, exciting the electron, and thereby subsequently the phonon and the spin system. Spin-wave states at high energy are excited and subsequently populate the lower lying states during their relaxation paths. A much weaker beam called a probe beam is spatially overlapped with the pump beam on the magnonic materials surface. The probe beam is passed along a delay line, which is a mechanical way of increasing the probe path length. By increasing the probe path length, it becomes delayed with respect to the pump beam and arrives at a later time on the sample surface. Time-resolution is built in the experiment by changing the delay distance. As the delay line position is stepped, the reflected beam properties are measured. The measured Kerr rotation is proportional to the dynamic magnetisation as the spin-waves propagate in the media. The temporal resolution is limited by the temporal width of the laser pulse only. This allows to connect ultrafast optics with a local spin-wave excitation and contact free detection in magnonic metamaterials, photomagnonics.\nSince 2009 \"Magnonics\" conferences are organised every second year. 
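For a rough sense of the field-to-frequency scaling exploited in the FMR and VNA-FMR measurements described above, the bare Larmor precession rate of a free electron spin is about 28 GHz per tesla. The snippet below only evaluates that rule of thumb; it ignores anisotropy and demagnetizing fields, so it is an order-of-magnitude guide rather than the full Kittel formula.

```python
import math

# Bare Larmor precession frequency f = (gamma / 2*pi) * B for a free electron spin.
# Real FMR frequencies also depend on anisotropy and demagnetizing fields (Kittel
# formula), so this is only an order-of-magnitude guide.
GAMMA_E = 1.760859e11                # electron gyromagnetic ratio, rad s^-1 T^-1

def larmor_frequency_GHz(B_tesla: float) -> float:
    return GAMMA_E * B_tesla / (2 * math.pi) / 1e9

for B in (0.01, 0.1, 1.0):           # applied flux densities in tesla
    print(f"B = {B:5.2f} T  ->  f ~ {larmor_frequency_GHz(B):6.1f} GHz")
```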
The next conference takes place in July-August 2025 in Cala Millor, Mallorca, Spain.", "From manna, produced by several species of tree and shrub e.g. Fraxinus ornus from whose secretions mannitol was originally isolated.", "GDP-mannose is produced from GTP and mannose-6-phosphate by the enzyme mannose-1-phosphate guanylyltransferase.\nThe degradation of mannans (and many related forms of hemicellulose) has been well studied. The hydrolysis of the main mannan backbone is catalyzed by various enzymes including β-mannosidase, β-glucosidase, and β-mannase. The side chains are degraded by esterases and α-galactosidase.\nWhen a long chain of mannan is hydrolyzed into shorter chains, these smaller molecules are known as mannan oligosaccharide (MOS). MOS by definition can be produced from either insoluble galactomannan or soluble glucomannan, although the latter type is more widely marketed.\nGlucomannan MOS is used as prebiotics in animal husbandry and nutritional supplements due to its bioactivity.", "Plant mannans have β(1-4) linkages, occasionally with α(1-6) galactose branches, forming galactomannans. They are insoluble and a form of storage polysaccharide. Ivory nut is a source of mannans. An additional type is galactoglucomannan found in soft wood with a mixed mannose/glucose β(1-4) backbone. Many mannans are acetylated and some from marine sources, have sulfate esters side chains.\nYeast and some plants such as conjac and salep have a different type of mannans in their cell wall, with a α(1-6) linked backbone and α(1-2) and α(1-3) linked glucose branches, hence \"glucomannan\". It is water soluble. It is serologically similar to structures found on mammalian glycoproteins. Detection of mannan leads to lysis in the mannan-binding lectin pathway.", "Mannans are polymers containing the sugar mannose as a principal component.\nThey are a type of polysaccharide found in hemicellulose, a major source of biomass found in higher plants such as softwoods. These polymers also typically contain two other sugars, galactose and glucose. They are often branched (unlike cellulose).", "Marcel Joseph Vogel (April 14, 1917 – February 12, 1991) was a research scientist working at the IBM San Jose Research Center for 27 years. He is sometimes referred to as Dr. Vogel, although this title was based on an honorary degree, not a Ph.D. Later in his career, he became interested in various theories of quartz crystals and other occult and esoteric fields of study.", "It is claimed that Vogel started his research into luminescence while he was still in his teens. This research eventually led him to publish his thesis, Luminescence in Liquids and Solids and Their Practical Application, in collaboration with University of Chicago's Dr. Peter Pringsheim in 1943.\nTwo years after the publication, Vogel incorporated his own company, Vogel Luminescence, in San Francisco. For the next decade the firm developed a variety of new products: fluorescent crayons, tags for insecticides, a black light inspection kit to determine the secret trackways of rodents in cellars from their urine and the psychedelic colors popular in \"new age\" posters. In 1957, Vogel Luminescence was sold to Ultra Violet Products and Vogel joined IBM as a full-time research scientist. He retired from IBM in 1984.\nIn 1977 and 1978, Vogel participated in experiments with the Markovich Tesla Electrical Power Source, referred to as MTEPS, that was built by Peter T. Markovich.\nHe received 32 patents for his inventions up through his tenure at IBM. 
Among these was the magnetic coating for the 24\" hard disk drive systems still in use. His areas of expertise, besides luminescence, were phosphor technology, magnetics and liquid crystal systems.\nAt Vogel's February 14, 1991 funeral, IBM researcher and Sacramento, California physician Bernard McGinity, M.D. said of him, \"He made his mark because of the brilliance of his mind, his prolific ideas, and his seemingly limitless creativity.\"", "Vogel examined a metal sample which was allegedly given to Billy Meier by extraterrestrials, but by misinterpreting a graph on a test instrument, erroneously concluded it contained thallium, a rare metal.", "Vogel was a proponent of research into plant consciousness and believed \"empathy between plant and human\" could be established.", "Vogel was featured in the first episode of In Search Of... hosted by Leonard Nimoy, called \"Other Voices\". He gave his theories regarding the possibility of communication between plants.", "Zenobi-Wong works in the area of tissue engineering, in particular for cartilage regeneration. She develops functional biomaterials which mimic the extracellular matrix. The biofabrication techniques used to develop these materials include electrospinning, casting, two-photon polymerization and bioprinting.\nZenobi-Wong holds four licensed patents in the fields of tissue engineering, tissue engineering techniques, and gene expression assays. She was one of the originators of the MSc Biomedical Engineering program at ETH Zürich, and developed several graduate level courses in tissue engineering and biomedical engineering. Zenobi-Wong currently serves as President of the Swiss Society for Biomaterials and Regenerative Medicine, and as secretary general of the International Society of Biofabrication.", "Marcy Zenobi-Wong is an American engineer and professor of Tissue Engineering and Biofabrication at the Swiss Federal Institute of Technology (ETH Zurich). She is known for her work in the field of Tissue Engineering.", "Zenobi-Wong completed her undergraduate degree in mechanical engineering at the Massachusetts Institute of Technology, and a graduate degree at Stanford University. She completed her PhD on the role of mechanical forces in skeletal development in 1990. After this, she first worked for a year as a postdoc in the Orthopaedic Research Laboratories, University of Michigan, before moving to the University of Bern as group leader Cartilage Biomechanics in 1992, where she habilitated in 2000. In 2003, she moved to ETH Zürich, first to the Institute for Biomedical Engineering, and later to the Department of Health Sciences and Technology, where she became an associate professor in 2017.", "In chemical separation processes, a mass separating agent (MSA) is a chemical species that is added to ensure that the intended separation process takes place. It is analogous to an energy separating agent, which aids separations processes via addition of energy. An MSA may be partially immiscible with one or more mixture components and frequently is the constituent of highest concentration in the added phase. 
Alternatively, the MSA may be miscible with a liquid feed mixture, but may selectively alter partitioning of species between liquid and vapor phases.\nDisadvantages of using an MSA are a need for an additional separator to recover the MSA for recycle, a need for MSA makeup, possible MSA product contamination, and more difficult design procedures.\nProcesses like absorption and stripping generally utilize various MSAs.", "Matteucci effect is one of the magnetomechanical effects, which is thermodynamically inverse to Wiedemann effect. This effect was described by Carlo Matteucci in 1858. It is observable in amorphous wires with helical domain structure, which can be obtained by twisting the wire, or annealing under twist. The effect is most distinct in the so-called dwarven alloys (called so because of the historical cobalt element etymology), with cobalt as main substituent.", "The maximum energy product is defined based on the magnetic hysteresis saturation loop (- curve), in the demagnetizing portion where the and fields are in opposition. It is defined as the maximal value of the product of and along this curve (actually, the maximum of the negative of the product, , since they have opposing signs):\nEquivalently, it can be graphically defined as the area of the largest rectangle that can be drawn between the origin and the saturation demagnetization B-H curve (see figure).\nThe significance of is that the volume of magnet necessary for any given application tends to be inversely proportional to . This is illustrated by considering a simple magnetic circuit containing a permanent magnet of volume and an air gap of volume , connected to each other by a magnetic core. Suppose the goal is to reach a certain field strength in the gap. In such a situation, the total magnetic energy in the gap (volume-integrated magnetic energy density) is directly equal to half the volume-integrated in the magnet:\nthus in order to achieve the desired magnetic field in the gap, the required volume of magnet can be minimized by maximizing in the magnet. By choosing a magnetic material with a high , and also choosing the aspect ratio of the magnet so that its is equal to , the required volume of magnet to achieve a target flux density in the air gap is minimized. This expression assumes that the permeability in the core that is connecting the magnetic material to the air gap is infinite, so unlike the equation might imply, you cannot get arbitrarily large flux density in the air gap by decreasing the gap distance. A real core will eventually saturate.", "In magnetics, the maximum energy product is an important figure-of-merit for the strength of a permanent magnet material. It is often denoted and is typically given in units of either (kilojoules per cubic meter, in SI electromagnetism) or (mega-gauss-oersted, in gaussian electromagnetism). 1 MGOe is equivalent to .\nDuring the 20th century, the maximum energy product of commercially available magnetic materials rose from around 1 MGOe (e.g. in KS Steel) to over 50 MGOe (in neodymium magnets). 
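The unit conversion and the ideal-magnet estimate behind these figures are easy to check numerically. The sketch below assumes a straight-line demagnetization curve, for which (BH)max = Br²/(4μ0); this assumption holds well for modern rare-earth magnets but not for all materials, and the remanence value used is only an illustrative one.

```python
import math

# (BH)max for an ideal straight-line demagnetization curve is Br^2 / (4*mu0).
MU0 = 4 * math.pi * 1e-7            # vacuum permeability, T m / A
KJ_PER_M3_PER_MGOE = 7.9577         # 1 MGOe is about 7.96 kJ/m^3

def bhmax_ideal_kJ_per_m3(Br_tesla: float) -> float:
    return Br_tesla**2 / (4 * MU0) / 1e3

Br = 1.3                             # remanence of a typical sintered NdFeB grade (illustrative)
bhmax = bhmax_ideal_kJ_per_m3(Br)
print(f"(BH)max ~ {bhmax:.0f} kJ/m^3 ~ {bhmax / KJ_PER_M3_PER_MGOE:.0f} MGOe")
```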
Other important permanent magnet properties include the remanence (Br) and coercivity (Hc); these quantities are also determined from the saturation loop and are related to the maximum energy product, though not directly.", "The limit for the ambient size of the bubble is set by the appearance of instabilities in the shape of the oscillating bubble.\nThe shape stability thresholds depend on changes in the radial dynamics, caused by different liquid viscosities or driving frequencies. If the frequency is decreased, the parametric instability is suppressed, as the stabilizing influence of viscosity has longer to act to suppress perturbations. However, the collapses of low-frequency-driven bubbles favor an earlier onset of the Rayleigh-Taylor instability. Larger bubbles can be stabilized to show sonoluminescence when the applied forcing pressures are not too high. At low frequencies the water vapor becomes more important. The bubbles can be stabilized by cooling the fluid, and more light is then emitted.", "One of the greatest obstacles in sonoluminescence research has been trying to obtain measurements of the interior of the bubble. Most quantities, such as temperature and pressure, are measured indirectly using models and bubble dynamics.", "The theory of bubble dynamics was started in 1917 by Lord Rayleigh during his work with the Royal Navy to investigate cavitation damage on ship propellers. Over several decades his work was refined and developed by Milton Plesset, Andrea Prosperetti, and others. The Rayleigh–Plesset equation is:\nρ(RR̈ + 3/2 Ṙ²) = p_g − P₀ − P(t) − 4η Ṙ/R − 2σ/R,\nwhere R is the bubble radius, R̈ is the second order derivative of the bubble radius with respect to time, Ṙ is the first order derivative of the bubble radius with respect to time, ρ is the density of the liquid, p_g is the pressure in the gas (which is assumed to be uniform), P₀ is the background static pressure, P(t) is the sinusoidal driving pressure, η is the viscosity of the liquid, and σ is the surface tension of the gas-liquid interface.", "Sonoluminescence is a phenomenon that occurs when a small gas bubble is acoustically suspended and periodically driven in a liquid solution at ultrasonic frequencies, resulting in bubble collapse, cavitation, and light emission. The thermal energy that is released from the bubble collapse is so great that it can cause weak light emission. The mechanism of the light emission remains uncertain, but some of the current theories, which are categorized under either thermal or electrical processes, are Bremsstrahlung radiation, the argon rectification hypothesis, and the hot spot theory. Some researchers are beginning to favor thermal process explanations, as temperature differences have consistently been observed with different methods of spectral analysis. In order to understand the light emission mechanism, it is important to know what is happening in the bubble's interior and at the bubble's surface.", "SBSL emits more light than MBSL due to fewer interactions between neighboring bubbles. Another advantage of SBSL is that a single bubble collapses without being affected by other surrounding bubbles, allowing more accurate studies of acoustic cavitation and sonoluminescence theories. Some exotic theories have been proposed, for example by Schwinger in 1992, who suggested the dynamical Casimir effect as a potential photon-emission process. Several theories say that the location of light emission is in the liquid instead of inside the bubble. 
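The Rayleigh–Plesset equation written out above is an ordinary differential equation for the bubble radius R(t) and is straightforward to integrate numerically. The sketch below uses SciPy with purely illustrative parameter values (a water-like liquid, a 5 µm bubble, a 20 kHz drive, and a polytropic gas law), and it omits the thermal and acoustic-radiation corrections that Keller–Miksis-type formulations add.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative integration of the Rayleigh-Plesset equation for a small bubble in
# water driven at 20 kHz. All parameter values are rough, textbook-style numbers
# chosen for illustration, not fitted to any experiment.
rho, eta, sigma = 998.0, 1.0e-3, 0.0725    # density, viscosity, surface tension (SI)
P0, Pa, f = 101325.0, 1.0e5, 20e3          # static pressure, drive amplitude, drive frequency
R0, kappa = 5e-6, 1.4                      # equilibrium radius (5 um), polytropic exponent

def rp_rhs(t, y):
    R, Rdot = y
    p_gas = (P0 + 2 * sigma / R0) * (R0 / R) ** (3 * kappa)   # polytropic gas pressure
    p_drive = Pa * np.sin(2 * np.pi * f * t)                  # sinusoidal acoustic forcing
    Rddot = (p_gas - P0 - p_drive - 4 * eta * Rdot / R - 2 * sigma / R) / (rho * R) \
            - 1.5 * Rdot**2 / R
    return [Rdot, Rddot]

sol = solve_ivp(rp_rhs, (0.0, 2 / f), [R0, 0.0], method="LSODA", rtol=1e-8, atol=1e-12)
print(f"max radius ~ {sol.y[0].max()*1e6:.1f} um, min radius ~ {sol.y[0].min()*1e6:.2f} um")
```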
Other SBSL theories explain that the emission of photons due to the high temperatures in the bubble are analogical to the hot spot theories of MBSL. Regarding the thermal emission a large variety of different processes are prevalent. Because temperatures are increasing from several hundred to many thousand kelvin during collapse, the processes can be molecular recombination, collision-induced emission, molecular emission, excimers, atomic recombination, radiative attachments of ions, neutral and ion Bremsstrahlung, or emission from confined electrons in voids. Which of these theories applies depends on accurate measurements and calculations of the temperature inside the bubble.", "Unlike single-bubble sonoluminescence, multi-bubble sonoluminescence is the creation of many oscillating and collapsing bubbles. Typically in MBSL, the light emission from each individual bubble is weaker than in SBSL because the neighboring bubbles can interact and affect each other. Because each neighboring bubble can interact with each other, it can make it more difficult to produce accurate studies and to characterize the properties of the collapsing bubble.", "Some of the developed theories about the mechanism of SBSL result in prognoses for the peak temperature from 6000 K to 20,000 K. What they all have in common is, a) the interior of the bubble heats up and becomes at least as hot as that measured for MBSL, b) water vapor is the main temperature-limiting factor and c) the averaged temperature over the bubble does not rise higher than 10,000 K.", "These equations were made using five major assumptions, with four of them being common to all the equations:\n# The bubble remains spherical\n# The bubble contents obey the ideal gas law\n# The internal pressure remains uniform throughout the bubble\n# No evaporation or condensation occurs inside the bubble\nThe fifth assumption, which changes between each formulation, pertains to the thermodynamic behavior of the liquid surrounding the bubble. These assumptions severely limit the models when the pulsations are large and the wall velocities reach the speed of sound.", "Prosperetti found a way to accurately determine the internal pressure of the bubble using the following equation.\nwhere is the temperature, is the thermal conductivity of the gas, and is the radial distance.", "This formulation allows the study of the motions and the effects of heat conduction, shear viscosity, compressibility, and surface tension on small cavitation bubbles in liquids that are set into motion by an acoustic pressure field. The effect of vapor pressure on the cavitation bubble can also be determined using the interfacial temperature. The formulation is specifically designed to describe the motion of a bubble that expands to a maximum radius and then violently collapses or contracts. This set of equations was solved using an improved Euler method.\nwhere is the radius of the bubble, the dots indicate first and second time derivatives, is the density of the liquid, is the speed of sound through the liquid, is the pressure on the liquid side of the bubble's interface, is time, and is the driving pressure.", "Prior to the early 1990s, the studies on different chemical and physical variables of sonoluminescence were all conducted using multi-bubble sonoluminescence (MBSL). 
This was a problem since all of the theories and bubble dynamics were based on single bubble sonoluminescence (SBSL) and researchers believed that the bubble oscillations of neighboring bubbles could affect each other. Single bubble sonoluminescence wasn't achieved until the early 1990s and allowed the study of the effects of various parameters on a single cavitating bubble. After many of the early theories were disproved, the remaining plausible theories can be classified into two different processes: electrical and thermal.", "The surface of a collapsing bubble like those seen in both SBSL and MBSL serves as a boundary layer between the liquid and vapor phases of the solution.", "It can be inferred from these results that the difference in surface tension between these different compounds is the source of different spectra emitted and the time scales in which emission occur.", "Because the bubble collapse occurs within microseconds, the hot spot theory states that the thermal energy results from an adiabatic bubble collapse. In 1950 it was assumed that the bubble internal temperatures were as high as 10,000 K at the collapse of a spherical symmetric bubble. In the 1990s, sonoluminescence spectra were used by Suslick to measure effective emission temperatures in bubble clouds (multibubble sonoluminescence) of 5000 K, and more recently temperatures as high as 20,000 K in single bubble cavitation.", "MBSL has been observed in many different solutions under a variety of conditions. Unfortunately it is more difficult to study as the bubble cloud is uneven and can contain a wide range of pressures and temperatures. SBSL is easier to study due to the predictable nature of the bubble. This bubble is sustained in a standing acoustic wave of moderate pressure, approximately 1.5 atm. Since cavitation does not normally occur at these pressures the bubble may be seeded through several techniques:\n# Transient boiling through short current pulse in nichrome wire.\n# A small jet of water perturbs the surface to introduce air bubbles.\n# A rapidly formed vapor cavity via focused laser pulse.\nThe standing acoustic wave, which contains pressure antinodes at the center of the containment vessel, causes the bubbles to quickly coalesce into a single radially oscillating bubble.", "The inertia of a collapsing bubble generates high pressures and temperatures capable of ionizing a small fraction of the noble gas within the volume of the bubble. This small fraction of ionized gas is transparent and allows for volume emission to be detected. Free electrons from the ionized noble gas begin to interact with other neutral atoms causing thermal bremsstrahlung radiation. Surface emission emits a more intense flash of light with a longer duration and is dependent on wavelength. Experimental data suggest that only volume emission occurs in the case of sonoluminescence. As the sound wave reaches a low energy trough the bubble expands and electrons are able to recombine with free ions and halt light emission. Light pulse time is dependent on the ionization energy of the noble gas with argon having a light pulse of 160 picoseconds.", "In 1937, the explanations for the light emission have favored electrical discharges. The first ideas have been about the charge separation in cavitation bubbles, which have been seen as spherical capacitors with charges at the center and the wall. \nAt the collapse, the capacitance decreases and voltage increases until electric breakdown occurs. 
A further suggestion was a charge separation by enhancing charge fluctuations on the bubble wall, however, a breakdown should take place during the expansion phase of the bubble dynamics. \nThese discharge theories have to assume that the emitting bubble undergoes an asymmetric collapse, because a symmetric charge distribution cannot radiate light.", "The effect that different chemicals present in solution have to the velocity of the collapsing bubble has recently been studied. Nonvolatile liquids such as sulfuric and phosphoric acid have been shown to produce flashes of light several nanoseconds in duration with a much slower bubble wall velocity, and producing several thousand-fold greater light emission. This effect is probably masked in SBSL in aqueous solutions by the absorption of light by water molecules and contaminants.", "The collapsed bubble expands due to high internal pressure and experiences a diminishing effect until the high pressure antinode returns to the center of the vessel. The bubble continues to occupy more or less the same space due to the acoustic radiation force, the Bjerknes force, and the buoyancy force of the bubble.", "Once a single bubble is stabilized in the pressure antinode of the standing wave, it can be made to emit pulses of light by driving the bubble into highly nonlinear oscillations. This is done by the increasing pressure of the acoustic wave to disrupt the steady, linear growth of the bubble which cause the bubble to collapse in a runaway reaction that only reverts due to the high pressures inside the bubble at its minimum radius.", "The Keller–Miksis formulation is an equation derived for the large, radial oscillations of a bubble trapped in a sound field. When the frequency of the sound field approaches the natural frequency of the bubble, it will result in large amplitude oscillations. The Keller–Miksis equation takes into account the viscosity, surface tension, incident sound wave, and acoustic radiation coming from the bubble, which was previously unaccounted for in Lauterborns calculations. Lauterborn solved the equation that Plesset, et al. modified from Rayleighs original analysis of large oscillating bubbles. Keller and Miksis obtained the following formula:\nwhere is the radius of the bubble, the dots indicate first and second time derivatives, is the density of the liquid, is the speed of sound through the liquid, is the pressure on the liquid side of the bubble's interface, is time, and is the time-delayed driving pressure.", "Mechanochromic luminescence (ML) references to intensity and/or color changes of (solid-state) luminescent materials induced by mechanical forces, such as rubbing, crushing, pressing, shearing, or smearing. Unlike \"triboluminescence\" which does not require additional excitation source other than force itself, ML is often manifested by external photoexcitation such as a UV lamp. The most common cause of ML is related to changes of intermolecular interactions of dyes and pigments, which gives rise to various strong (exciton splitting) and/or weak (Forster) excited state interactions. For example, a certain boron complex of [http://pubs.acs.org/doi/abs/10.1021/ja9097719 sunscreen compound] avobenzone exhibits reversible ML. 
A [http://pubs.rsc.org/en/Content/ArticleLanding/2012/JM/c2jm32809g recent detailed study] suggests that ML from the boron complex consists of two critical coupled steps: 1) generation of low-energy exciton traps via mechanical perturbation; and 2) exciton migration from regions where photoexcitation results in a higher excited state. Since solid-state energy transfer can be very efficient, only a small fraction of the low-energy exciton traps is required when mechanical force is applied. As a result, for crystalline ML materials, XRD measurements may not be able to detect changes before and after mechanical stimuli, while the photoluminescence can be quite different.", "Mechanoluminescence is light emission resulting from any mechanical action on a solid. It can be produced through ultrasound, or through other means.\n* Electrochemiluminescence is the emission induced by an electrochemical stimulus.\n* Fractoluminescence is caused by stress that results in the formation of fractures, which in turn yield light.\n* Piezoluminescence is caused by pressure that results in elastic deformation and large polarization from the piezoelectric effect.\n* Sonoluminescence is the emission of short bursts of light from imploding bubbles in a liquid when excited by sound.\n* Triboluminescence is nominally caused by rubbing, but sometimes occurs because of resulting fractoluminescence. It is often used as a synonym.", "Metamagnetism is a sudden (often dramatic) increase in the magnetization of a material with a small change in an externally applied magnetic field. The metamagnetic behavior may have quite different physical causes for different types of metamagnets. Some examples of physical mechanisms leading to metamagnetic behavior are:\n# Itinerant metamagnetism - Exchange splitting of the Fermi surface in a paramagnetic system of itinerant electrons causes an energetically favorable transition to bulk magnetization near the transition to a ferromagnet or other magnetically ordered state.\n# Antiferromagnetic transition - Field-induced spin flips in antiferromagnets cascade at a critical energy determined by the applied magnetic field.\nDepending on the material and experimental conditions, metamagnetism may be associated with a first-order phase transition, a continuous phase transition at a critical point (classical or quantum), or crossovers beyond a critical point that do not involve a phase transition at all. These wildly different physical explanations sometimes lead to confusion as to what the term \"metamagnetic\" refers to in specific cases.", "Methanediol, also known as formaldehyde monohydrate or methylene glycol, is an organic compound with the chemical formula CH₂(OH)₂. It is the simplest geminal diol. In aqueous solutions it coexists with oligomers (short polymers). The compound is closely related and convertible to the industrially significant derivatives paraformaldehyde (HO(CH₂O)ₙH), formaldehyde (CH₂O), and 1,3,5-trioxane ((CH₂O)₃).\nMethanediol is a product of the hydration of formaldehyde. The equilibrium constant for hydration is estimated to be about 10³, so the diol predominates in dilute (<0.1%) solution. 
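A quick calculation shows why a hydration constant of that order means the diol dominates in dilute solution. The snippet below assumes the order-of-magnitude value quoted above (about 10³) and treats the water activity as constant, both of which are simplifications.

```python
# Fraction of dissolved formaldehyde present as the diol, CH2(OH)2, assuming a
# hydration equilibrium K = [diol]/[CH2O] of order 1e3 (order of magnitude from
# the text) and constant water activity in dilute solution.
K_hydration = 1e3
fraction_diol = K_hydration / (1 + K_hydration)
print(f"~{fraction_diol:.1%} of the dissolved formaldehyde exists as methanediol")
# -> ~99.9%
```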
In more concentrated solutions, it oligomerizes to .", "The dianion, methanediolate, is believed to be an intermediate in the crossed Cannizzaro reaction.\nGaseous methanediols can be generated by electron irradiation and sublimation of a mixture of methanol and oxygen ices.\nMethanediol is believed to occur as an intermediate in the decomposition of carbonyl compounds in the atmosphere, and as a product of ozonolysis on these compounds.", "Methanediol, rather than formaldehyde, is listed as one of the main ingredients of \"Brazilian blowout\", a hair-straightening formula marketed in the United States. The equilibrium with formaldehyde has caused concern since formaldehyde in hair straighteners is a health hazard. Research funded by the Professional Keratin Smoothing Council (PKSC), an industry association that represents selected manufacturers of professional-use only keratin smoothing products, has disputed the risk.", "The demagnetizing field is the magnetic field created by the magnetic sample upon itself. The associated energy is:\nwhere H is the demagnetizing field. This field depends on the magnetic configuration itself, and it can be found by solving:\nwhere −∇·M is sometimes called magnetic charge density. The solution of these equations (c.f. magnetostatics) is:\nwhere r is the vector going from the current integration point to the point where H is being calculated.\nIt is worth noting that the magnetic charge density can be infinite at the edges of the sample, due to M changing discontinuously from a finite value inside to zero outside of the sample. This is usually dealt with by using suitable boundary conditions on the edge of the sample.\nThe energy of the demagnetizing field favors magnetic configurations that minimize magnetic charges. In particular, on the edges of the sample, the magnetization tends to run parallel to the surface. In most cases it is not possible to minimize this energy term at the same time as the others. The static equilibrium then is a compromise that minimizes the total magnetic energy, although it may not minimize individually any particular term.", "The interaction of micromagnetics with mechanics is also of interest in the context of industrial applications that deal with magneto-acoustic resonance such as in hypersound speakers, high frequency magnetostrictive transducers etc. \nFEM simulations taking into account the effect of magnetostriction into micromagnetics are of importance. Such simulations use models described above within a finite element framework.\nApart from conventional magnetic domains and domain-walls, the theory also treats the statics and dynamics of topological line and point configurations, e.g. magnetic vortex and antivortex states; or even 3d-Bloch points, where, for example, the magnetization leads radially into all directions from the origin, or into topologically equivalent configurations. Thus in space, and also in time, nano- (and even pico-)scales are used.\nThe corresponding topological quantum numbers are thought to be used as information carriers, to apply the most recent, and already studied, propositions in information technology.\nAnother application that has emerged in the last decade is the application of micromagnetics towards neuronal stimulation. In this discipline, numerical methods such as finite-element analysis are used to analyze the electric/magnetic fields generated by the stimulation apparatus; then the results are validated or explored further using in-vivo or in-vitro neuronal stimulation. 
Several distinct set of neurons have been studied using this methodology including retinal neurons, cochlear neurons, vestibular neurons, and cortical neurons of embryonic rats.", "This is the equation of motion of the magnetization. It describes a Larmor precession of the magnetization around the effective field, with an additional damping term arising from the coupling of the magnetic system to the environment. The equation can be written in the so-called Gilbert form (or implicit form) as:\nwhere γ is the electron gyromagnetic ratio and α the Gilbert damping constant.\nIt can be shown that this is mathematically equivalent to the following Landau-Lifshitz (or explicit) form:\nWhere is the Gilbert Damping constant, characterizing how quickly the damping term takes away energy from the system ( = 0, no damping, permanent precession).", "The effective field is the local field felt by the magnetization. It can be described informally as the derivative of the magnetic energy density with respect to the orientation of the magnetization, as in:\nwhere dE/dV is the energy density. In variational terms, a change dm of the magnetization and the associated change dE of the magnetic energy are related by:\nSince m is a unit vector, dm is always perpendicular to m. Then the above definition leaves unspecified the component of H that is parallel to m. This is usually not a problem, as this component has no effect on the magnetization dynamics.\nFrom the expression of the different contributions to the magnetic energy, the effective field can be found to be:", "The magnetoelastic energy describes the energy storage due to elastic lattice distortions. It may be neglected if magnetoelastic coupled effects are neglected.\nThere exists a preferred local distortion of the crystalline solid associated with the magnetization director m, . \nFor a simple model, one can assume this strain to be isochoric and fully\nisotropic in the lateral direction, yielding the deviatoric ansatz\nwhere the material parameter E > 0 is the magnetostrictive\nconstant. Clearly, E is the strain induced by the magnetization in\nthe direction m. With this ansatz at hand, we consider the elastic\nenergy density to be a function of the elastic, stress-producing\nstrains . A quadratic form for the magnetoelastic energy is\nwhere \nis the fourth-order elasticity tensor. Here the elastic response is assumed to be isotropic (based on \nthe two Lamé constants λ and μ).\nTaking into account the constant length of m, we obtain the invariant-based representation\nThis energy term contributes to magnetostriction.", "The Zeeman energy is the interaction energy between the magnetization and any externally applied field. It's written as:\nwhere H is the applied field and µ is the vacuum permeability.\nThe Zeeman energy favors alignment of the magnetization parallel to the applied field.", "The purpose of dynamic micromagnetics is to predict the time evolution of the magnetic configuration of a sample subject to some non-steady conditions such as the application of a field pulse or an AC field. This is done by solving the Landau-Lifshitz-Gilbert equation, which is a partial differential equation describing the evolution of the magnetization in terms of the local effective field acting on it.", "The exchange energy is a phenomenological continuum description of the quantum-mechanical exchange interaction. 
It is written as:\nE_exch = A ∫ ((∇m_x)² + (∇m_y)² + (∇m_z)²) dV,\nwhere A is the exchange constant; m_x, m_y and m_z are the components of m;\nand the integral is performed over the volume of the sample.\nThe exchange energy tends to favor configurations where the magnetization varies only slowly across the sample. This energy is minimized when the magnetization is perfectly uniform.", "The purpose of static micromagnetics is to solve for the spatial distribution of the magnetization M at equilibrium. In most cases, as the temperature is much lower than the Curie temperature of the material considered, the modulus |M| of the magnetization is assumed to be everywhere equal to the saturation magnetization M_s. The problem then consists in finding the spatial orientation of the magnetization, which is given by the magnetization direction vector m = M/M_s, also called the reduced magnetization.\nThe static equilibria are found by minimizing the magnetic energy,\nsubject to the constraint |M| = M_s or |m| = 1.\nThe contributions to this energy are the following:", "Magnetic anisotropy arises due to a combination of crystal structure and spin-orbit interaction. It can be generally written as:\nE_anis = ∫ F(m) dV,\nwhere F, the anisotropy energy density, is a function of the orientation of the magnetization. Minimum-energy directions for F are called easy axes.\nTime-reversal symmetry ensures that F is an even function of m. The simplest such function is\nF = −K m_z²,\nwhere K is called the anisotropy constant. In this approximation, called uniaxial anisotropy, the easy axis is the z direction.\nThe anisotropy energy favors magnetic configurations where the magnetization is everywhere aligned along an easy axis.", "Micromagnetics as a field (i.e., that deals specifically with the behaviour of ferromagnetic materials at sub-micrometer length scales) was introduced in 1963 when William Fuller Brown Jr. published a paper on antiparallel domain wall structures. Until comparatively recently, computational micromagnetics was prohibitively expensive in terms of computational power, but smaller problems are now solvable on a modern desktop PC.", "Micromagnetics is a field of physics dealing with the prediction of magnetic behaviors at sub-micrometer length scales. The length scales considered are large enough for the atomic structure of the material to be ignored (the continuum approximation), yet small enough to resolve magnetic structures such as domain walls or vortices.\nMicromagnetics can deal with static equilibria, by minimizing the magnetic energy, and with dynamic behavior, by solving the time-dependent dynamical equation.", "Microscope-based diagnostics are widely performed and serve as the gold standard in histological analysis. However, this procedure generally requires a series of time-consuming lab-based steps, including fixation, paraffin embedding, sectioning, and staining, to produce microscope slides with optically thin tissue sections (4–6 µm). While histology is commonly used in developed regions, people who live in areas with limited resources can hardly access it and consequently need a low-cost, more efficient way to access pathological diagnosis. The main significance of the MUSE system comes from its capacity to produce high-resolution microscopic images with subcellular features in a time-efficient manner, at lower cost and with less laboratory expertise required.\nWith 280 nm deep-UV excitation and a simple but robust hardware design, the MUSE system can collect fluorescence signals without the need for fluorescence filtering techniques or complex mathematical image reconstruction. 
It has potential for generate high quality images containing more information than microscope slides in terms of its 2.5 dimensional features. MUSE images have been validated with diagnostic values. The system is capable to produce images from various tissue type in different sizes, either fresh or fixed.", "The microscope setup is based on an inverted microscope design. An automated stage is used to record larger areas by mosaicing a series of single adjacent frames. The LED light is focused using a ball lens with a short focal length onto the sample surface in an oblique-angle cis-illumination scheme since standard microscopy optics do not transmit UV light efficiently. No dichroic mirror or filter is required as microscope objectives are opaque to UV excitation light. The emitted fluorescence light is collected using a long-working-distance objective and focused via a tube lens onto a CCD camera.\nSpecimens are submerged in exogenous dye for 10 seconds and then briefly washed in water or phosphate-buffered saline (PBS). The resulting stained specimens generate bright enough signals for direct and interpretable visualization through microscope eyepiece.", "Previous work from MUSE includes the detection of endogenous fluorescent molecules in intact clinical and human tissues for functional and structural characterization, which is limited by the relatively dim autofluorescence found in tissue. However, the use of bright exogenous dyes can provide substantially more remitted light than the autofluorescence approach.\nSeveral dyes have been studied for MUSE's application, including eosin, rhodamine, DAPI, Hoechst, acridine orange, propidium iodide, and proflavine. Eosin and rhodamine stain the cytoplasm and the extracellular matrix, making the bulk of the tissue visible. Hoechst and DAPI fluoresce brightly when bound to DNA, allowing them to serve as excellent nuclear stains.", "Microscopy with UV Surface Excitation (MUSE) is a novel microscopy method that utilizes the shallow penetration of UV photons (230–300 nm) excitation. Compared to conventional microscopes, which usually require sectioning to exclude blurred signals from outside of the focal plane, MUSE's low penetration depth limits the excitation volume to a thin layer, and removes the tissue sectioning requirement. The entire signal collected is the desired light, and all photons collected contribute to the image formation.", "MUSE system mainly serves as a low-cost alternative to traditional histological analysis for cancer diagnostics with simpler and less time-consuming techniques. By integrating microscopy and fresh tissue fluorescence staining into an automated optical system, the overall acquiring time needed for getting digital images with diagnostic values can be much shortened into the scale of minutes comparing with conventional pathology, where general procedure can take from hours to days. The color-mapping techniques that correlated fluorescence staining to traditional H&E staining provide the same visual representation to pathologists based on existing knowledge with no need for additional training on image recognition.\nAdditionally, this system also has great potential to be used for intraoperative consultation, a method performed in pathologists lab that examine the microscopic features of tissue during oncological surgery usually for rapid cancer lesion and margin detection. 
It also can play an important role in biological and medical research, which might require examination on cellular features of tissue samples. In the future, the system can be further optimized to include more features including staining protocol, LEDs wavelength for more research usages and applications.", "Mictomagnetism is a spin system in which various exchange interactions are mixed. It is observed in several kinds of alloys, including Cu–Mn, Fe–Al and Ni–Mn alloys. Cooled in zero magnetic field, these materials have low remanence and coercivity. Cooled in a magnetic field, they have much larger remanence, and the hysteresis loop is shifted in the direction opposite to the field (an effect similar to exchange bias).", "The MIRAGE Commission consists of three groups which tightly interact with each other.\nThe advisory board consists of leading scientists in glycobiology, who, for example, critically review the outcomes of the working group and promote the reporting guidelines within the community.\nThe working group seeks for external consultation and directly interacts with the glycomics community. The group members carry out defined subprojects (e.g. development and revision of guidelines) by focusing on specific research areas to fulfill the overall aims of the MIRAGE project.\nThe co-ordination team links the subprojects from the working group together and passes the outcomes to the advisory board for review.", "The following reporting guidelines were developed and published:\n* MIRAGE MS guidelines for reporting mass spectrometry-based glycan analysis. These guidelines are based on the MIAPE guideline template, i.e. MIAPE-MS version 2.24. \n* MIRAGE Sample preparation guidelines which are considered a common basis for any further MIRAGE reporting guidelines in order to keep the requirements for data analysis short and consistent.\n* MIRAGE Glycan microarray guidelines for the comprehensive description of Glycan array experiments the reporting guidelines for glycan microarray analysis have been developed. In order to assist the authors to reporting in compliance with these guidelines, exemplar publications and a template with a data example is provided.\n* MIRAGE Liquid chromatography guidelines for reporting of liquid chromatography (LC) glycan data.", "The Minimum Information Required About a Glycomics Experiment (MIRAGE) initiative is part of the Minimum Information Standards and specifically applies to guidelines for reporting (describing metadata) on a glycomics experiment. The initiative is supported by the Beilstein Institute for the Advancement of Chemical Sciences. The MIRAGE project focuses on the development of publication guidelines for interaction and structural glycomics data as well as the development of data exchange formats. The project was launched in 2011 in Seattle and set off with the description of the aims of the MIRAGE project.", "The MIRAGE reporting guidelines provide essential frameworks for subsequent projects related with the development of both software tools for the analysis of experimental glycan data and databases for the deposition of interaction analysis data (e.g. from glycan microarray experiments) and structural analysis data (e.g. from mass spectrometry and liquid chromatography experiments). 
As the guidelines include the definitions of the minimum information required for reporting glycomics experiments comprehensively, this information is incorporated in database structures, data acquisition forms and data exchange formats.<br>\nThe following databases comply with the MIRAGE guidelines:\n* UniCarb-DB, which stores curated data and information on glycan structures and associated fragment data characterised by LC-Tandem_mass spectrometry strategies.\n* [https://glycostore.org/ GlycoStore] a curated Chromatography, Electrophoresis and Mass spectrometry derived composition database of N-, O-, glycosphingolipid (GSL) glycans and free oligosaccharides associated with a range of glycoproteins, glycolipids and biotherapeutics.\n* UniCarb-DR a MS data repository for glycan structures\n* [https://glycopost.glycosmos.org/ GlycoPOST] a mass spectra repository for glycomics and glycoproteomics\nThe following projects refer to the MIRAGE standards:\n* [https://glytoucan.org/ GlyTouCan] is a glycan structure repository where unique identifiers are assigned to individually reported glycan structures \n* [http://www.unicarbkb.org/ UniCarbKB] a database of glycans and glycoproteins \n* [http://www.glygen.org/index.html GlyGen], a data integration and dissemination project for carbohydrate and glycoconjugate related data\n* [https://glyconnect.expasy.org/ GlyConnect], an integrated platform for glycomics and glycoproteomics", "Mixer settlers are a class of mineral process equipment used in the solvent extraction process. A mixer settler consists of a first stage that mixes the phases together followed by a quiescent settling stage that allows the phases to separate by gravity.", "A mixing chamber where a mechanical agitator brings in intimate contact the feed solution and the solvent to carry out the transfer of solute(s). The mechanical agitator is equipped with a motor which drives a mixing and pumping turbine. This turbine draws the two phases from the settlers of the adjacent stages, mixes them, and transfers this emulsion to the associated settler. \nThe mixer may consists of one or multiple stages of mixing tanks. Common laboratory mixers consist of a single mixing stage, whereas industrial scale copper mixers may consist of up to three mixer stages where each stage performs a combined pumping and mixing action. Use of multiple stages allows a longer reaction time and also minimizes the short circuiting of unreacted material through the mixers.", "A settling chamber where the two phases separate by static decantation. Coalescence plates facilitate the separation of the emulsion into two phases (heavy and light). The two phases then pass to continuous stages by overflowing the light phase and heavy phase weirs. The height of the heavy phase weir can be adjusted in order to position the heavy/light interphase in the settling chamber based on the density of each one of the phases.\nThe settler is a calm pool downstream of the mixer where the liquids are allowed to separate by gravity. The liquids are then removed separately from the end of the mixer.", "In the case of oxide copper ore, a heap leaching pad will dissolve a dilute copper sulfate solution in a weak sulfuric acid solution. This pregnant leach solution (PLS) is pumped to an extraction mixer settler where it is mixed with the organic phase (a kerosene hosted extractant). 
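To illustrate how a train of countercurrent mixer-settler stages transfers the solute into the organic phase, the sketch below solves the stage-by-stage mass balance for an idealized copper solvent-extraction circuit. The stage count, flows, and distribution coefficient are illustrative assumptions (ideal stages, linear equilibrium), not process data from the text.

```python
import numpy as np

# Hypothetical countercurrent extraction: aqueous feed enters stage 1, fresh organic
# enters stage N; equilibrium in each stage is y = D * x (organic/aqueous).
N = 3          # number of mixer-settler stages
A = 100.0      # aqueous (PLS) flow, m3/h
O = 80.0       # organic flow, m3/h
D = 3.0        # distribution coefficient at equilibrium
x_feed = 2.0   # Cu concentration in the pregnant leach solution, g/L

# Balance around stage i:  A*x[i-1] + O*D*x[i+1] = (A + O*D)*x[i]
sys_matrix = np.zeros((N, N))
rhs = np.zeros(N)
for i in range(N):
    sys_matrix[i, i] = A + O * D
    if i > 0:
        sys_matrix[i, i - 1] = -A
    if i < N - 1:
        sys_matrix[i, i + 1] = -O * D
rhs[0] = A * x_feed                      # feed enters stage 1 with the aqueous phase
x = np.linalg.solve(sys_matrix, rhs)     # aqueous Cu leaving each stage

raffinate = x[-1]
loaded_organic = D * x[0]                # organic leaving stage 1 toward stripping
recovery = 1 - raffinate / x_feed
print(f"raffinate {raffinate:.3f} g/L, loaded organic {loaded_organic:.2f} g/L, "
      f"Cu recovery {recovery:.1%}")
```

With the assumed extraction factor (O·D/A ≈ 2.4), three ideal stages already move over 95% of the copper into the organic phase, which is why only a few stages are typically arrayed in series.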
The copper transfers to the organic phase, and the aqueous phase (now called raffinate) is pumped back to the heap to recover more copper.\nIn a high-chloride environment typical of Chilean copper mines, a wash stage will rinse any residual pregnant solution entrained in the organic with clean water.\nThe copper is then stripped from organic phase in the strip stage into a strong sulfuric acid solution suitable for electrowinning. This strong acid solution is called barren electrolyte when it enters the cell, and strong electrolyte when it is copper bearing after reacting in the cell.", "Industrial mixer settlers are commonly used in the copper, nickel, uranium, lanthanide, and cobalt hydrometallurgy industries, when solvent extraction processes are applied.\nThey are also used in the Nuclear reprocessing field to separate and purify primarily Uranium and Plutonium, removing the fission product impurities.\nIn the multiple countercurrent process, multiple mixer settlers are installed with mixing and settling chambers located at alternating ends for each stage (since the outlet of the settling sections feed the inlets of the adjacent stage's mixing sections). Mixer-settlers are used when a process requires longer residence times and when the solutions are easily separated by gravity. They require a large facility footprint, but do not require much headspace, and need limited remote maintenance capability for occasional replacement of mixing motors. (Colven, 1956; Davidson, 1957)\nThe equipment units can be arrayed as:\n*extraction (moving an ion of interest from an aqueous phase to an organic phase), \n*washing (rinsing entrained aqueous contaminant out of an organic phase containing the ion of interest), and \n*stripping (moving an ion of interest from an organic phase into an aqueous phase).", "Flow rate of the liquid phase and mole fraction of the desired compound in it are and .\nFlow rate of the vapour phase and mole fraction of the desired compound in it are and .", ", where is the enthalpy of the liquid and is the enthalpy of the vapour\nBy substituting the mass balance equation in above equation we get the following expression:", "Flow rate of the liquid phase and molar fractions of the desired compound in it are and .\nFlow rate of the vapour phase and molar fractions of the desired compound in it are and .", "Distillation is a process in which we separate components of different vapour pressure. One fraction leaves overhead and is condensed to distillate and the other is the bottom product. The bottom product is mostly liquid while the overhead fraction can be vapour or an aerosol. \nThis method requires the components to have different volatility to be separated.\nThe column consists of three sections: a stripping section, a rectification section, and a feed section.\nFor rectification and stripping a countercurrent liquid phase must flow through the column, so that liquid and vapour can contact each other on each stage.\nThe distillation column is fed with a mixture containing the mole fraction xf of the desired compound. The overhead mixture is a gas or an aerosol which contains the mole fraction xD of the desired compound and the bottom product contains a mixture with the fraction xB of the desired compound.\nAn overhead condenser is a heat exchange equipment used for condensing the mixture leaving the top of the column. 
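The feed, distillate, and bottoms compositions xF, xD and xB introduced above are linked by the overall and component mass balances around the column; a minimal worked example with assumed numbers:

```python
# Overall and component balances around the column (illustrative numbers):
#   F = D + B            (total molar balance)
#   F*xF = D*xD + B*xB   (balance on the more volatile, desired compound)
F, xF = 100.0, 0.40      # feed flow (kmol/h) and its mole fraction of the desired compound
xD, xB = 0.95, 0.05      # specified distillate and bottoms mole fractions

D = F * (xF - xB) / (xD - xB)   # distillate flow obtained by combining the two balances
B = F - D
print(f"D = {D:.1f} kmol/h, B = {B:.1f} kmol/h")

assert abs(F * xF - (D * xD + B * xB)) < 1e-9   # component balance closes
```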
Either cooling water or air is used as a cooling agent.\nAn overhead accumulator is a horizontal pressure vessel containing the condensed mixture.\nPumps can be used to control the reflux to the column.\nA Reboiler produces the vapour stream in the distillation column. It can be used internally and externally.", "The total molar hold up in the nth tray Mn is considered constant.\nThe imbalances in the input and output flows are taken into account for in the component and the heat balance equations.", "Molecular distillation is a type of short-path vacuum distillation, characterized by an extremely low vacuum pressure, 0.01 torr or below, which is performed using a molecular still. It is a process of separation, purification and concentration of natural products, complex and thermally sensitive molecules for example vitamins and polyunsaturated fatty acids. This process is characterized by short term exposure of the distillate liquid to high temperatures in high vacuum (around mmHg) in the distillation column and a small distance between the evaporator and the condenser around 2 cm. In molecular distillation, fluids are in the free molecular flow regime, i.e. the mean free path of molecules is comparable to the size of the equipment. The gaseous phase no longer exerts significant pressure on the substance to be evaporated, and consequently, rate of evaporation no longer depends on pressure. The motion of molecules is in the line of sight, because they do not form a continuous gas anymore. Thus, a short path between the hot surface and the cold surface is necessary, typically by suspending a hot plate covered with a film of feed next to a cold plate with a line of sight in between.\nThis process has the advantages of avoiding the problem of toxicity that occurs in techniques that use solvents as the separating agent, and also of minimizing losses due to thermal decomposition. and can be used in a continuous feed process to harvest distillate without having to break vacuum.\nMolecular distillation is used industrially for purification of oils. It is also used to enrich borage oil in γ-linolenic acid (GLA) and also to recover tocopherols from deodorizer distillate of soybean oil (DDSO). Molecular stills were historically used by Wallace Carothers in the synthesis of larger polymers, as a reaction product, water, interfered with polymerization by undoing the reaction via hydrolysis, but the water could be removed by the molecular still.", "The Morin transition (also known as a spin-flop transition) is a magnetic phase transition in α-FeO hematite where the antiferromagnetic ordering is reorganized from being aligned perpendicular to the c-axis to be aligned parallel to the c-axis below T.\nT = 260K for Fe in α-FeO.\nA change in magnetic properties takes place at the Morin transition temperature.", "All chromatographic purifications and separations which are executed via solvent gradient batch chromatography can be performed using MCSGP. Typical examples are reversed phase purification of peptides, hydrophobic interaction chromatography for fatty acids or for example ion exchange chromatography of proteins or antibodies. The process can effectively enrich components, which have been fed in only small amounts. Continuous capturing of antibodies without affinity chromatography can be realized with the MCSGP-process.", "Biomolecules are often purified via solvent gradient batch chromatography. 
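The free-molecular-flow condition described in the molecular distillation passage above (a mean free path comparable to the evaporator-condenser gap) can be checked with the kinetic-theory estimate λ = k_B·T/(√2·π·d²·p); the temperature and molecular diameter used below are illustrative assumptions.

```python
import math

k_B = 1.380649e-23   # Boltzmann constant, J/K
T = 400.0            # K, illustrative distillation temperature (assumed)
d = 0.5e-9           # m, effective diameter of a large organic molecule (assumed)

for p_torr in (0.01, 0.001):
    p = p_torr * 133.322                                  # convert torr to Pa
    lam = k_B * T / (math.sqrt(2) * math.pi * d**2 * p)   # mean free path
    print(f"p = {p_torr} torr -> mean free path ~ {lam * 100:.1f} cm")
# At these pressures the mean free path is of the order of the ~2 cm gap,
# so evaporating molecules travel line-of-sight instead of behaving as a continuous gas.
```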
Here smooth linear solvent gradients are applied to carefully handle the separation between the desired component and hundreds of impurities. The desired product is usually intermediate between weakly and strongly absorbing impurities. A center cut is required to get the desired pure product. Often the preparative resins have a low efficiency due to strong axial dispersion and slow mass transfer. Then a purification in one chromatographic step is not possible. Countercurrent movement as known from the SMB process would be required. For large scale productions and for very valuable molecules countercurrent solid movement need to be applied to increase the separation efficiency, the yield and the productivity of the purification. The MCSGP process combines both techniques in one process, the countercurrent SMB principle and the solvent gradient batch technique.\nDiscontinuous mode consists of equilibration, loading, washing, purification and regeneration steps. The discontinuous mode of operation allows exploiting the advantage of solvent gradients, but it implies high solvent consumptions and low productivities with respect to continuous countercurrent processes. An established process of this kind is the simulated moving bed technique (SMB) that requires the solvent-consuming steps of equilibration, washing, regeneration only once per operation and has a better resin utilization. However, major drawbacks of SMB are the inability of separating a mixture into three fractions and the lack of solvent gradient applicability.\nIn the case of antibodies, the state-of-the-art technique is based on batch affinity chromatography (with Protein A or Protein G as ligands) which is able to selectively bind antibody molecules. In general, affinity techniques have the advantage of purifying biomolecules with high yields and purities but the disadvantages are in general the high stationary phase cost, ligand leaching and reduced cleanability.\nThe MCSGP process can result in purities and yields comparable to those of purification using Protein A. The second application example for the MCSGP prototype is the separation of three MAb variants using a preparative weak cation-exchange resin. Although the intermediately eluting MAb variant can only be obtained with 80% purity at recoveries close to zero in a batch chromatographic process, the MCSGP process can provide 90% purity at 93% yield. A numerical comparison of the MCSGP process with the batch chromatographic process, and a batch chromatographic process including ideal recycling, has been performed using an industrial polypeptide purification as the model system. It shows that the MCSGP process can increase the productivity by a factor of 10 and reduce the solvent requirement by 90%.\nThe main advantages with respect to solvent gradient batch chromatography are high yields also for difficult separations, less solvent consumption, higher productivity, usage of countercurrent solid movement, which increases the separation efficiency. The process is continuous. Once a steady state is reached, it delivers continuously purified product in constant quality and quantity. Automatic cleaning in place is integrated. A pure empirical design of the operating conditions from a single solvent gradient batch chromatogram is possible.", "The MCSGP process consists of several, at least two, chromatographic columns which are switched in position opposite to the flow direction. 
Most of the columns are equipped with a gradient pump to adjust the modifier concentration at the column inlet. Some columns are connected directly, so that non pure product streams are internally recycled. Other columns are short circuited, so that they operate in pure batch mode. The system is split into several sections, from which every section performs a tasks analogous to the tasks of a batch purification. These tasks are loading the feed, running the gradient elution, recycling of weakly adsorbing site fractions, fractionation of the purified product, recycling of strongly adsorbing site fractions, cleaning the column from strongly adsorbing impurities, cleaning in place and re-equilibration of the column to start the next purification run. All of the tasks mentioned here are carried out at the same time in one unit. Recycling of non-pure side fractions is performed in countercurrent movement.", "Multicolumn countercurrent solvent gradient purification (MCSGP) is a form of chromatography that is used to separate or purify biomolecules from complex mixtures. It was developed at the Swiss Federal Institute of Technology Zürich by Aumann and Morbidelli. The process consists of two to six chromatographic columns which are connected to one another in such a way that as the mixture moves through the columns the compound is purified into several fractions.", "There are four major mechanisms to induce exchange interactions between two magnetic moments in a system: 1). Direct exchange 2). RKKY 3). Superexchange 4). Spin-Lattice. No matter which one is dominated, a general form of the exchange interaction can be written as\nwhere are the site indexes and is the coupling constant that couples two multipole moments and . One can immediately find if is restricted to 1 only, the Hamiltonian reduces to conventional Heisenberg model.\nAn important feature of the multipolar exchange Hamiltonian is its anisotropy. The value of coupling constant is usually very sensitive to the relative angle between two multipoles. Unlike conventional spin only exchange Hamiltonian where the coupling constants are isotropic in a homogeneous system, the highly anisotropic atomic orbitals (recall the shape of the wave functions) coupling to the system's magnetic moments will inevitably introduce huge anisotropy even in a homogeneous system. This is one of the main reasons that most multipolar orderings tend to be non-colinear.", "Consider a quantum mechanical system with Hilbert space spanned by , where is the total angular momentum and is its projection on the quantization axis. Then any quantum operators can be represented using the basis set as a matrix with dimension . Therefore, one can define matrices to completely expand any quantum operator in this Hilbert space. Taking J=1/2 as an example, a quantum operator A can be expanded as \nObviously, the matrices: form a basis set in the operator space. Any quantum operator defined in this Hilbert can be expended by operators. In the following, let's call these matrices as a super basis to distinguish the eigen basis of quantum states. More specifically the above super basis can be called a transition super basis because it describes the transition between states and . In fact, this is not the only super basis that does the trick. 
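A minimal sketch of the expansion just described for J = 1/2: build the four transition matrices, expand an arbitrary operator in them via the trace inner product, and do the same with a normalized identity-plus-Pauli set (anticipating the cubic super basis discussed next). The normalization factors are assumptions of the sketch.

```python
import numpy as np

# Transition super basis for J = 1/2: the four matrices |M><M'| with M, M' in {+1/2, -1/2}.
basis = []
for m in range(2):
    for mp in range(2):
        X = np.zeros((2, 2), dtype=complex)
        X[m, mp] = 1.0
        basis.append(X)

# Any 2x2 operator A expands as A = sum_i c_i X_i with c_i = Tr(X_i^dagger A).
rng = np.random.default_rng(0)
A = rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2))
coeffs = [np.trace(X.conj().T @ A) for X in basis]
assert np.allclose(A, sum(c * X for c, X in zip(coeffs, basis)))

# The same operator expands equally well in a basis built from the identity and the
# three Pauli matrices, normalized so that Tr(S_i^dagger S_j) = delta_ij.
pauli = [np.eye(2), np.array([[0, 1], [1, 0]]),
         np.array([[0, -1j], [1j, 0]]), np.array([[1, 0], [0, -1]])]
pauli = [P / np.sqrt(2) for P in pauli]
coeffs2 = [np.trace(P.conj().T @ A) for P in pauli]
assert np.allclose(A, sum(c * P for c, P in zip(coeffs2, pauli)))
print("both super bases reproduce A exactly")
```

Both expansions reproduce A exactly, which is the sense in which these matrix sets act as "super bases" for the J = 1/2 operator space.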
We can also use Pauli matrices and the identity matrix to form a super basis\nSince the rotation properties of follow the same rules as the rank 1 tensor of cubic harmonics and the identity matrix follows the same rules as the rank 0 tensor , the basis set can be called cubic super basis. Another commonly used super basis is spherical harmonic super basis which is built by replacing the to the raising and lowering operators \nAgain, share the same rotational properties as rank 1 spherical harmonic tensors , so it is called spherical super basis.\nBecause atomic orbitals are also described by spherical or cubic harmonic functions, one can imagine or visualize these operators using the wave functions of atomic orbitals although they are essentially matrices not spatial functions.\nIf we extend the problem to , we will need 9 matrices to form a super basis. For transition super basis, we have . For cubic super basis, we have . For spherical super basis, we have . In group theory, are called scalar or rank 0 tensor, are called dipole or rank 1 tensors, are called quadrupole or rank 2 tensors.\nThe example tells us, for a -multiplet problem, one will need all rank tensor operators to form a complete super basis. Therefore, for a system, its density matrix must have quadrupole components. This is the reason why a problem will automatically introduce high-rank multipoles to the system", "A general definition of spherical harmonic super basis of a -multiplet problem can be expressed as \nwhere the parentheses denote a 3-j symbol; K is the rank which ranges ; Q is the\nprojection index of rank K which ranges from −K to +K. A cubic harmonic super basis where all the tensor operators are hermitian can be defined as \nThen, any quantum operator defined in the -multiplet Hilbert space can be expanded as\nwhere the expansion coefficients can be obtained by taking the trace inner product, e.g. .\nApparently, one can make linear combination of these operators to form a new super basis that have different symmetries.", "Unlike magnetic spin ordering where the antiferromagnetism can be defined by flipping the magnetization axis of two neighbor sites from a ferromagnetic configuration, flipping of the magnetization axis of a multipole is usually meaningless. Taking a moment as an example, if one flips the z-axis by making a rotation toward the y-axis, it just changes nothing. Therefore, a suggested definition of antiferromagnetic multipolar ordering is to flip their phases by , i.e. . In this regard, the antiferromagnetic spin ordering is just a special case of this definition, i.e. flipping the phase of a dipole moment is equivalent to flipping its magnetization axis. As for high rank multipoles, e.g. , it actually becomes a rotation and for it is even not any kind of rotation.", "Magnetic materials with strong spin-orbit interaction, such as: LaFeAsO, PrFeP, YbRuGe, UO, NpO, CeLaB, URuSi and many other compounds, are found to have magnetic ordering constituted by high rank multipoles, e.g. quadruple, octople, etc. Due to the strong spin-orbit coupling, multipoles are automatically introduced to the systems when the total angular momentum quantum number J is larger than 1/2. If those multipoles are coupled by some exchange mechanisms, those multipoles could tend to have some ordering as conventional spin 1/2 Heisenberg problem. 
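The 3-j-symbol construction of the spherical harmonic super basis given above can be reproduced directly; the sketch below builds all rank-0, 1 and 2 operators for J = 1 and checks that they are mutually orthogonal under the trace inner product. The overall K-dependent normalization constant is left out, which is an assumption of the sketch.

```python
from sympy import simplify, zeros
from sympy.physics.wigner import wigner_3j

J = 1
dim = 2 * J + 1

def T(K, Q):
    """Rank-K, projection-Q tensor operator on the (2J+1)-dimensional multiplet."""
    mat = zeros(dim, dim)
    for a, Ma in enumerate(range(J, -J - 1, -1)):      # row index: M = J ... -J
        for b, Mb in enumerate(range(J, -J - 1, -1)):  # column index: M' = J ... -J
            mat[a, b] = (-1) ** (J - Ma) * wigner_3j(J, K, J, -Ma, Q, Mb)
    return mat

ops = [(K, Q, T(K, Q)) for K in range(0, 2 * J + 1) for Q in range(-K, K + 1)]
print(len(ops), "operators")   # 9 for J = 1: 1 scalar + 3 dipole + 5 quadrupole

# Pairwise orthogonality under Tr(T1^dagger T2): together they span the operator space.
for i, (K1, Q1, A) in enumerate(ops):
    for K2, Q2, B in ops[i + 1:]:
        assert simplify((A.H * B).trace()) == 0
print("pairwise orthogonal under the trace inner product")
```

For J = 1 this yields nine operators, one scalar, three dipoles and five quadrupoles, matching the counting given in the text.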
Except the multipolar ordering, many hidden order phenomena are believed closely related to the multipolar interactions", "Calculation of multipolar exchange interactions remains a challenging issue in many aspects. Although there were many works based on fitting the model Hamiltonians with experiments, predictions of the coupling constants based on first-principle schemes remain lacking. Currently there are two studies implemented first-principles approach to explore multipolar exchange interactions. An early study was developed in 80's. It is based on a mean field approach that can greatly reduce the complexity of coupling constants induced by RKKY mechanism, so the multipolar exchange Hamiltonian can be described by just a few unknown parameters and can be obtained by fitting with experiment data. Later on, a first-principles approach to estimate the unknown parameters was further developed and got good agreements with a few selected compounds, e.g. cerium momnpnictides. Another first-principle approach was also proposed recently. It maps all the coupling constants induced by all static exchange mechanisms to a series of DFT+U total energy calculations and got agreement with uranium dioxide.", "Using the addition theorem of tensor operators, the product of a rank n tensor and a rank m tensor can generate a new tensor with rank n+m ~ |n-m|. Therefore, a high rank tensor can be expressed as the product of low rank tensors. This convention is useful to interpret the high rank multipolar exchange terms as a \"multi-exchange\" process of dipoles (or pseudospins). For example, for the spherical harmonic tensor operators of case, we have\nIf so, a quadrupole-quadrupole interaction (see next section) can be considered as a two steps dipole-dipole interaction. For example, , so the one step quadrupole transition on site now becomes a two steps of dipole transition . Hence not only inter-site-exchange but also intra-site-exchange terms appear (so called multi-exchange). If is even larger, one can expect more complicated intra-site-exchange terms would appear. However, one has to note that it is not a perturbation expansion but just a mathematical technique. The high rank terms are not necessarily smaller than low rank terms. In many systems, high rank terms are more important than low rank terms.", "In the muon-catalyzed fusion of most interest, a positively charged deuteron (d), a positively charged triton (t), and a muon essentially form a positively charged muonic molecular heavy hydrogen ion (d–μ–t). The muon, with a rest mass 207 times greater than the rest mass of an electron, is able to drag the more massive triton and deuteron 207 times closer together to each other\n in the muonic (d–μ–t) molecular ion than can an electron in the corresponding electronic (d–e–t) molecular ion. The average separation between the triton and the deuteron in the electronic molecular ion is about one angstrom (100 pm), so the average separation between the triton and the deuteron in the muonic molecular ion is 207 times smaller than that. Due to the strong nuclear force, whenever the triton and the deuteron in the muonic molecular ion happen to get even closer to each other during their periodic vibrational motions, the probability is very greatly enhanced that the positively charged triton and the positively charged deuteron would undergo quantum tunnelling through the repulsive Coulomb barrier that acts to keep them apart. 
Indeed, the quantum mechanical tunnelling probability depends roughly exponentially on the average separation between the triton and the deuteron, allowing a single muon to catalyze the d–t nuclear fusion in less than about half a picosecond, once the muonic molecular ion is formed.\nThe formation time of the muonic molecular ion is one of the \"rate-limiting steps\" in muon-catalyzed fusion that can easily take up to ten thousand or more picoseconds in a liquid molecular deuterium and tritium mixture (D, DT, T), for example. Each catalyzing muon thus spends most of its ephemeral existence of 2.2 microseconds, as measured in its rest frame, wandering around looking for suitable deuterons and tritons with which to bind.\nAnother way of looking at muon-catalyzed fusion is to try to visualize the ground state orbit of a muon around either a deuteron or a triton. Suppose the muon happens to have fallen into an orbit around a deuteron initially, which it has about a 50% chance of doing if there are approximately equal numbers of deuterons and tritons present, forming an electrically neutral muonic deuterium atom (d–μ) that acts somewhat like a \"fat, heavy neutron\" due both to its relatively small size (again, 207 times smaller than an electrically neutral electronic deuterium atom (d–e)) and to the very effective \"shielding\" by the muon of the positive charge of the proton in the deuteron. Even so, the muon still has a much greater chance of being transferred to any triton that comes near enough to the muonic deuterium than it does of forming a muonic molecular ion. The electrically neutral muonic tritium atom (t–μ) thus formed will act somewhat like an even \"fatter, heavier neutron,\" but it will most likely hang on to its muon, eventually forming a muonic molecular ion, most likely due to the resonant formation of a hyperfine molecular state within an entire deuterium molecule D (d=e=d), with the muonic molecular ion acting as a \"fatter, heavier nucleus\" of the \"fatter, heavier\" neutral \"muonic/electronic\" deuterium molecule ([d–μ–t]=e=d), as predicted by Vesman, an Estonian graduate student, in 1967.\nOnce the muonic molecular ion state is formed, the shielding by the muon of the positive charges of the proton of the triton and the proton of the deuteron from each other allows the triton and the deuteron to tunnel through the Coulomb barrier in time span of order of a nanosecond The muon survives the d–t muon-catalyzed nuclear fusion reaction and remains available (usually) to catalyze further d–t muon-catalyzed nuclear fusions. Each exothermic d–t nuclear fusion releases about 17.6 MeV of energy in the form of a \"very fast\" neutron having a kinetic energy of about 14.1 MeV and an alpha particle α (a helium-4 nucleus) with a kinetic energy of about 3.5 MeV. An additional 4.8 MeV can be gleaned by having the fast neutrons moderated in a suitable \"blanket\" surrounding the reaction chamber, with the blanket containing lithium-6, whose nuclei, known by some as \"lithions,\" readily and exothermically absorb thermal neutrons, the lithium-6 being transmuted thereby into an alpha particle and a triton.", "To create this effect, a stream of negative muons, most often created by decaying pions, is sent to a block that may be made up of all three hydrogen isotopes (protium, deuterium, and/or tritium), where the block is usually frozen, and the block may be at temperatures of about 3 kelvin (−270 degrees Celsius) or so. The muon may bump the electron from one of the hydrogen isotopes. 
The muon, 207 times more massive than the electron, effectively shields and reduces the electromagnetic repulsion between two nuclei and draws them much closer into a covalent bond than an electron can. Because the nuclei are so close, the strong nuclear force is able to kick in and bind both nuclei together. They fuse, release the catalytic muon (most of the time), and part of the original mass of both nuclei is released as energetic particles, as with any other type of nuclear fusion. The release of the catalytic muon is critical to continue the reactions. The majority of the muons continue to bond with other hydrogen isotopes and continue fusing nuclei together. However, not all of the muons are recycled: some bond with other debris emitted following the fusion of the nuclei (such as alpha particles and helions), removing the muons from the catalytic process. This gradually chokes off the reactions, as there are fewer and fewer muons with which the nuclei may bond. The number of reactions achieved in the lab can be as high as 150 d–t fusions per muon (average).", "The first kind of muon–catalyzed fusion to be observed experimentally, by L.W. Alvarez et al., was protium (H or H) and deuterium (D or H) muon-catalyzed fusion. The fusion rate for p–d (or pd) muon-catalyzed fusion has been estimated to be about a million times slower than the fusion rate for d–t muon-catalyzed fusion.\nOf more practical interest, deuterium–deuterium muon-catalyzed fusion has been frequently observed and extensively studied experimentally, in large part because deuterium already exists in relative abundance and, like protium, deuterium is not at all radioactive. (Tritium rarely occurs naturally, and is radioactive with a half-life of about 12.5 years.)\nThe fusion rate for d–d muon-catalyzed fusion has been estimated to be only about 1% of the fusion rate for d–t muon-catalyzed fusion, but this still gives about one d–d nuclear fusion every 10 to 100 picoseconds or so. However, the energy released with every d–d muon-catalyzed fusion reaction is only about 20% or so of the energy released with every d–t muon-catalyzed fusion reaction. Moreover, the catalyzing muon has a probability of sticking to at least one of the d–d muon-catalyzed fusion reaction products that Jackson in this 1957 paper estimated to be at least 10 times greater than the corresponding probability of the catalyzing muon sticking to at least one of the d–t muon-catalyzed fusion reaction products, thereby preventing the muon from catalyzing any more nuclear fusions. Effectively, this means that each muon catalyzing d–d muon-catalyzed fusion reactions in pure deuterium is only able to catalyze about one-tenth of the number of d–t muon-catalyzed fusion reactions that each muon is able to catalyze in a mixture of equal amounts of deuterium and tritium, and each d–d fusion only yields about one-fifth of the yield of each d–t fusion, thereby making the prospects for useful energy release from d–d muon-catalyzed fusion at least 50 times worse than the already dim prospects for useful energy release from d–t muon-catalyzed fusion.\nPotential \"aneutronic\" (or substantially aneutronic) nuclear fusion possibilities, which result in essentially no neutrons among the nuclear fusion products, are almost certainly not very amenable to muon-catalyzed fusion. 
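As a quick arithmetic check of the "at least 50 times worse" estimate in the d–d discussion above, treating the factors quoted in the text as rough order-of-magnitude numbers:

```python
# Factors as stated in the text (normalized to d-t values).
fusions_per_muon_dd = 0.1     # ~10x higher sticking probability -> ~1/10 the fusions per muon
energy_per_fusion_dd = 0.2    # each d-d fusion yields ~1/5 of the energy of a d-t fusion

yield_ratio = fusions_per_muon_dd * energy_per_fusion_dd
print(f"d-d energy per muon is ~{yield_ratio:.2f} of d-t, i.e. ~{1 / yield_ratio:.0f}x worse")
# -> 0.02, i.e. about 50x worse, as stated above
```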
One such essentially aneutronic nuclear fusion reaction involves a deuteron from deuterium fusing with a helion (He) from helium-3, which yields an energetic alpha particle and a much more energetic proton, both positively charged (with a few neutrons coming from inevitable d–d nuclear fusion side reactions). However, one muon with only one negative electric charge is incapable of shielding both positive charges of a helion from the one positive charge of a deuteron. The chances of the requisite two muons being present simultaneously are exceptionally remote.", "Muon-catalyzed fusion (abbreviated as μCF or MCF) is a process allowing nuclear fusion to take place at temperatures significantly lower than the temperatures required for thermonuclear fusion, even at room temperature or lower. It is one of the few known ways of catalyzing nuclear fusion reactions.\nMuons are unstable subatomic particles which are similar to electrons but 207 times more massive. If a muon replaces one of the electrons in a hydrogen molecule, the nuclei are consequently drawn 186 times closer than in a normal molecule, due to the reduced mass being 186 times the mass of an electron. When the nuclei move closer together, the fusion probability increases, to the point where a significant number of fusion events can happen at room temperature.\nMethods for obtaining muons, however, require far more energy than can be produced by the resulting fusion reactions. Muons have a mean lifetime of , much longer than many other subatomic particles but nevertheless far too brief to allow their useful storage. \nTo create useful room-temperature muon-catalyzed fusion, reactors would need a cheap, efficient muon source and/or a way for each individual muon to catalyze many more fusion reactions.", "According to Gordon Pusch, a physicist at Argonne National Laboratory, various breakeven calculations on muon-catalyzed fusion omit the heat energy the muon beam itself deposits in the target. By taking this factor into account, muon-catalyzed fusion can already exceed breakeven; however, the recirculated power is usually very large compared to power out to the electrical grid (about 3–5 times as large, according to estimates). Despite this rather high recirculated power, the overall cycle efficiency is comparable to conventional fission reactors; however the need for 4–6 MW electrical generating capacity for each megawatt out to the grid probably represents an unacceptably large capital investment. Pusch suggested using Bogdan Maglich's \"migma\" self-colliding beam concept to significantly increase the muon production efficiency, by eliminating target losses, and using tritium nuclei as the driver beam, to optimize the number of negative muons.\nIn 2021, Kelly, Hart and Rose produced a μCF model whereby the ratio, Q, of thermal energy produced to the kinetic energy of the accelerated deuterons used to create negative pions (and thus negative muons through pion decay) was optimized. In this model, the heat energy of the incoming deuterons as well as that of the particles produced due to the deuteron beam impacting a tungsten target was recaptured to the extent possible, as suggested by Gordon Pusch in the previous paragraph. Additionally, heat energy due to tritium breeding in a lithium-lead shell was recaptured, as suggested by Jändel, Danos and Rafelski in 1988. The best Q value was found to be about 130% assuming that 50% of the muons produced were actually utilized for fusion catalysis. 
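The "186 times closer" figure quoted in the introduction above follows from the muon-proton reduced mass; a short check using standard (rounded) mass ratios:

```python
# Masses expressed in electron masses (rounded standard values).
m_e = 1.0
m_mu = 206.77    # muon
m_p = 1836.15    # proton

mu_ep = m_e * m_p / (m_e + m_p)       # reduced mass of the electron-proton system (~0.9995)
mu_mup = m_mu * m_p / (m_mu + m_p)    # reduced mass of the muon-proton system (~185.8)

# The Bohr radius scales as 1/(reduced mass), so the muonic bond length shrinks by:
print(f"shrink factor ~ {mu_mup / mu_ep:.0f}")   # ~186
```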
Furthermore, assuming that the accelerator was 18% efficient at transforming electrical energy into deuteron kinetic energy and conversion efficiency of heat energy into electrical energy of 60%, they estimate that, currently, the amount of electrical energy that could be produced by a μCF reactor would be 14% of the electrical energy consumed. In order for this to improve, they suggest that some combination of a) increasing accelerator efficiency and b) increasing the number of fusion reactions per negative muon above the assumed level of 150 would be needed.", "If muon-catalyzed d–t nuclear fusion is realized practically, it will be a much more attractive way of generating power than conventional nuclear fission reactors because muon-catalyzed d–t nuclear fusion (like most other types of nuclear fusion), produces far fewer harmful (and far less long-lived) radioactive wastes.\nThe large number of neutrons produced in muon-catalyzed d–t nuclear fusions may be used to breed fissile fuels from fertile material – for example, thorium-232 could breed uranium-233 in this way. The fissile fuels that have been bred can then be \"burned,\" either in a conventional critical nuclear fission reactor or in an unconventional subcritical fission reactor, for example, a reactor using nuclear transmutation to process nuclear waste, or a reactor using the energy amplifier concept devised by Carlo Rubbia and others.\nAnother benefit of muon-catalyzed fusion is that the fusion process can start with pure deuterium gas without tritium. Plasma fusion reactors like ITER or Wendelstein X7 need tritium to initiate and also need a tritium factory. Muon-catalyzed fusion generates tritium under operation and increases operating efficiency up to an optimum point when the deuterium:tritium ratio reaches about 1:1. Muon-catalyzed fusion can operate as a tritium factory and deliver tritium for material and plasma fusion research.", "Except for some refinements, little has changed since Jacksons 1957 assessment of the feasibility of muon-catalyzed fusion other than Vesmans 1967 prediction of the hyperfine resonant formation of the muonic (d–μ–t) molecular ion which was subsequently experimentally observed. This helped spark renewed interest in the whole field of muon-catalyzed fusion, which remains an active area of research worldwide. However, as Jackson observed in his paper, muon-catalyzed fusion is \"unlikely\" to provide \"useful power production ... unless an energetically cheaper way of producing μ-mesons can be found.\"\nOne practical problem with the muon-catalyzed fusion process is that muons are unstable, decaying in (in their rest frame). Hence, there needs to be some cheap means of producing muons, and the muons must be arranged to catalyze as many nuclear fusion reactions as possible before decaying.\nAnother, and in many ways more serious, problem is the \"alpha-sticking\" problem, which was recognized by Jackson in his 1957 paper. The α-sticking problem is the approximately 1% probability of the muon \"sticking\" to the alpha particle that results from deuteron-triton nuclear fusion, thereby effectively removing the muon from the muon-catalysis process altogether. 
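The cycle counts discussed in the following paragraph follow directly from the α-sticking probability: if sticking were the only loss channel, a muon would catalyze on average about 1/ω_s fusions before being removed. A minimal sketch using the per-fusion energy figures quoted in the text:

```python
# Energy per catalyzed fusion: 17.6 MeV from d-t fusion plus ~4.8 MeV from a lithium blanket.
MeV_per_fusion = 17.6 + 4.8

# Sticking probabilities from ~1% (Jackson's estimate) down to ~0.3-0.5% (later measurements).
for omega_s in (0.01, 0.005, 0.003):
    n_fusions = 1.0 / omega_s                 # expected fusions per muon before sticking
    thermal_GeV = n_fusions * MeV_per_fusion / 1000.0
    print(f"sticking {omega_s:.1%}: ~{n_fusions:.0f} fusions/muon, "
          f"~{thermal_GeV:.1f} GeV thermal per muon")
```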
Even if muons were absolutely stable, each muon could catalyze, on average, only about 100 d-t fusions before sticking to an alpha particle, which is only about one-fifth the number of muon catalyzed d–t fusions needed for break-even, where as much thermal energy is generated as electrical energy is consumed to produce the muons in the first place, according to Jackson's rough estimate.\nMore recent measurements seem to point to more encouraging values for the α-sticking probability, finding the α-sticking probability to be around 0.3% to 0.5%, which could mean as many as about 200 (even up to 350) muon-catalyzed d–t fusions per muon. Indeed, the team led by Steven E. Jones achieved 150 d–t fusions per muon (average) at the Los Alamos Meson Physics Facility. The results were promising and almost enough to reach theoretical break-even. Unfortunately, these measurements for the number of muon-catalyzed d–t fusions per muon are still not enough to reach industrial break-even. Even with break-even, the conversion efficiency from thermal energy to electrical energy is only about 40% or so, further limiting viability. The best recent estimates of the electrical \"energy cost\" per muon is about with accelerators that are (coincidentally) about 40% efficient at transforming electrical energy from the power grid into acceleration of the deuterons.\nAs of 2012, no practical method of producing energy through this means has been published, although some discoveries using the Hall effect show promise.", "The term \"cold fusion\" was coined to refer to muon-catalyzed fusion in a 1956 New York Times article about Luis W. Alvarez's paper.\nIn 1957 Theodore Sturgeon wrote a novelette, \"The Pod in the Barrier\", in which humanity has ubiquitous cold fusion reactors that work with muons. The reaction is \"When hydrogen one and hydrogen two are in the presence of Mu mesons, they fuse into helium three, with an energy yield in electron volts of 5.4 times ten to the fifth power\". Unlike the thermonuclear bomb contained in the Pod (which is used to destroy the Barrier) they can become temporarily disabled by \"concentrated disbelief\" that muon fusion works.\nIn Sir Arthur C. Clarkes third novel in the Space Odyssey series, 2061: Odyssey Three', muon-catalyzed fusion is the technology that allows mankind to achieve easy interplanetary travel. The main character, Heywood Floyd, compares Luis Alvarez to Lord Rutherford for underestimating the future potential of their discoveries.", "Andrei Sakharov and F.C. Frank predicted the phenomenon of muon-catalyzed fusion on theoretical grounds before 1950. Yakov Borisovich Zeldovich also wrote about the phenomenon of muon-catalyzed fusion in 1954. Luis W. Alvarez et al.', when analyzing the outcome of some experiments with muons incident on a hydrogen bubble chamber at Berkeley in 1956, observed muon-catalysis of exothermic p–d, proton and deuteron, nuclear fusion, which results in a helion, a gamma ray, and a release of about 5.5 MeV of energy. The Alvarez experimental results, in particular, spurred John David Jackson to publish one of the first comprehensive theoretical studies of muon-catalyzed fusion in his ground-breaking 1957 paper. This paper contained the first serious speculations on useful energy release from muon-catalyzed fusion. 
Jackson concluded that it would be impractical as an energy source, unless the \"alpha-sticking problem\" (see below) could be solved, leading potentially to an energetically cheaper and more efficient way of utilizing the catalyzing muons.", "The majority of current advancements in muscle tissue engineering reside in the skeletal muscle category, so the majority of these examples will have to do with skeletal muscle engineering and regeneration. We will review a couple of examples of smooth muscle tissue engineering and cardiac muscle tissue engineering in this section as well.", "* Autologous MDSC Injections to Treat Urinary Incontinence: an in vivo injection technique for pure stress incontinence in female subjects in which defective muscle cells were replaced with stem cells that would differentiate to become functioning smooth muscle cells in the urinary sphincter\n* Vascular Smooth Muscle regeneration using induced pluripotent stem cells (iPSCs); an in vitro technique in which iPSCs were differentiated into proliferative smooth muscle cells using a nanofibrous scaffold.\n* Formation of coiled three-dimensional (3D) cellular constructs containing smooth muscle-like cells differentiated from dedifferentiated fat (DFAT) cells: an in vitro technique for controlling the 3D organization of smooth muscle cells in which DFAT cells are suspended in a mixture of extracellular proteins with optimized stiffness so that they differentiate into smooth muscle-like cells with specific 3D orientation; a muscle tissue engineered construct for a smooth muscle cell precursor", "The term muscle tissue engineering, while it is a subset of the much larger discipline, tissue engineering, was first coined in 1988 when Herman Vandenburgh, a surgeon, cultured avian myotubes in collagen-coated culture plates. This started a new era of in vitro tissue engineering. The ideal was officially adopted in 1988 in Vandenburgh's publication titled Maintenance of Highly Contractile Tissue-Cultured Avian Skeletal Myotubes in Collagen Gel. In 1989, the same group determined that mechanical stimulation of myoblasts in vitro facilitates engineered skeletal muscle growth.", "A rudimentary understanding of muscle tissue began to develop as early as 1835, when embryonic myogenesis was first described. In the 1860s, it was shown that muscle is capable of regeneration and an experimental regeneration was conducted to better understand the specific method by which this was done in vivo. Following this discovery, muscle generation and degeneration in man were described for the first time. Researchers consequently assessed several aspects of muscle regeneration in vivo, including \"the continuous or discontinuous regeneration depending on tissue type\" to increase functional understanding of the phenomena. It was not until the 1960s, however, that researchers determined what components were required for muscle regeneration.", "In 1957, it was determined via DNA content that myoblasts proliferate, but myonuclei do not. Following this discovery, the satellite cell was experimentally uncovered by Mauro and Katz as stem cells which sit on the surface of the myofibre and have the capability to differentiate into muscle cells. Satellite cells provide myoblasts for growth, differentiation, and repair of muscle tissue. Muscle tissue engineering officially began as a discipline in 1988 when Herman Vandenburgh cultured avian myotubes in collagen-coated culture plates. 
Following this development, it was found in 1989 that mechanical stimulation of myoblasts in vitro facilitates engineered skeletal muscle growth. Most of the modern innovations in the field of muscle tissue engineering are found in the 21st century.", "Between 2000 and 2010, the effects of volumetric muscle loss (VML) were assessed as it pertains to muscle tissue engineering. VML can be caused by a variety of injuries or diseases, including general trauma, postoperative damage, cancer ablation, congenital defects, and degenerative myopathy. Although muscle contains a stem cell population called satellite cells that are capable of regenerating small muscle injuries, muscle damage in VML is so extensive that it overwhelms muscles natural regenerative capabilities. Currently VML is treated through an autologous muscle flap or graft but there are various problems associated with this procedure. Donor site morbidity, lack of donor tissue, and inadequate vascularization all limit the ability of doctors to adequately treat VML. The field of muscle tissue engineering attempts to address this problem through the design of a functional muscle construct that can be used to treat the damaged muscle instead of harvesting an autologous muscle flap from elsewhere on the patients body.\nResearch conducted between 2000 and 2010 informed the conclusion that functional analysis of a tissue engineered muscle construct is important to illustrate its potential to help regenerate muscle. A variety of assays are generally used to evaluate a tissue engineered muscle construct including immunohistochemistry, RT-PCR, electrical stimulation and resulting peak-to-peak voltage, scanning electron microscope imaging, and in vivo response.\nThe most recent advances in the field include cultured meat, biorobotic systems, and biohybrid impants in regenerative medicine or disease modeling.", "* Avian myotubes: highly contractile skeletal myotubes cultured and differentiated in vitro on collagen-coated culture plates\n* Cultured Meat (CM): cultured, cell based, lab grown, in vitro, clean meat obtained through cellular agriculture\n* Human Bio-Artificial Muscle (BAM): formed through a seven day, in vitro tissue engineering procedure in which human myoblasts fuse and differentiate into aligned myofibres in an extracellular matrix; these constructs are used for intramuscular drug injection to replace pre- or non-clinical injection models and complement animal studies\n* Myoblast transfer in the treatment of Duchenne's Muscular Dystrophy (DMD): an in vivo technique to replace dystrophin, a skeletal muscle protein which is deficient in patients with DMD; myoblasts fuse with muscle fibers and contribute their nuclei which then replace deficient gene products in the host nuclei\n* Autologous hematopoetic stem cell transplantation (AHSCT) as a method for treating Multiple Sclerosis (MS): an in vivo technique for treating MS in which the immune system is destroyed and is reconstituted with hematopoetic stem cells; has been shown to reduce the effects of MS for 4-5 years in 70-80% of patients\n* Volumetric muscle loss repair using Muscle Derived Stem Cells (MDSCs): an in situ technique for muscle loss repair in which patients have suffered from trauma or combat injuries; MDSCs cast in an in situ fibrin gel were capable of forming new myofibres that became engrafted in a muscle defect that was created by a partial-thickness wedge resection in the tibialis anterior muscle of laboratory mice\n* Development of skeletal muscle organoids to 
model neuromuscular disorders and muscular dystrophies; an in vitro technique in which human pluripotent stem cells (hPSCs) are differentiated into functional 3D human skeletal muscle organoid (hSkMOs); hPSCs were guided towards the paraxial mesodermal lineage which then gives rise to myogenic pregenitor cells and myoblasts in well plates with no scaffold; organoids were round, uniformly sized, and exhibited homogeneous morphology upon full development and were shown to successfully model muscle development and regeneration\n* Bioprinted Tibialis Anterior (TA) Muscle in Rats: an in vitro technique in which bioengineered skeletal muscle tissue composed of human primary muscle pregenitor cells (hMPCs) was fabricated – upon implantation, the bioprinted material reached 82% functional recovery in rodent models of the TA muscle", "* Intracoronary Administration of Bone Marrow-Derived Progenitor Cells: an in vivo technique in which progenitor cells derived from bone marrow are administered into an infarct artery to differentiate into functional cardiac cells and recover contractile function after an acute, ST-elevation myocardial infarction, thus preventing adverse remodeling of the left ventricle.\n* Human Cardiac Organoids:an in vitro, scaffold-free technique for producing a functioning cardiac organoid; cardiac spheroids made from a mixed cell population derived from human induced pluripotent stem cell-derived cardiomyocytes (hiPSC-CMs) cultured on gelatin-coated well plates, without a scaffold, resulted in the generation of a functioning cardiac organoid", "Muscle tissue engineering methods are consistently categorized across literature into three groups: in situ, in vivo, and in vitro muscle tissue engineering. We will assess each of these categories and detail specific practices used in each one.", "\"In vivo\" is a latin phrase whose literal translation is \"in a living thing.\" This term is used in the English language to describe a process which occurs inside of a living organism. In the realm of muscle tissue engineering, this term applies to the seeding of cells into a biomaterial scaffold immediately prior to implantation. The goal of in vivo muscle tissue engineering is to create a cell-seeded scaffold that once implanted into the wound site will preserve cell efficacy. In vivo methods provide a greater amount of control over cell phenotype, mechanical properties, and functionality of the tissue construct.\nAs described in Skeletal Muscle Tissue Engineering: Biomaterials-Based Strategies for the Treatment of Volumetric Muscle Loss (Carnes & Pins, 2020), in vivo muscle tissue engineering builds on the concept of in situ engineering by not only implanting a biomaterial scaffold with specific mechanical and chemical properties, but also seeding the scaffold with the specific cell type needed for regeneration of the tissue. Reid et al. describe common scaffolds utilized in the in vivo muscle tissue engineering process. These scaffolds include hydrogels infused with hyaluronic acid (HA), gelatin silk fibroin, and chitosan as these materials promote muscle cell migration and proliferation. For example, a biodegradable and renewable material derived from chitin known as chitosan, has unique mechanical properties which support smooth muscle cell differentiation and retention in the tissue regeneration site. When this scaffold is further functionalized with Arginine-Glycine-Aspartic Acid (RGD), it provides a better growth environment for smooth muscle cells. 
Another scaffold commonly used is decellularized extracellular matrix (ECM) tissue as it is fully biocompatible, biodegradable, and contains all of the necessary protein binding sites for full functional recovery and integration of muscle tissue. Once seeded with cells, this material becomes an optimal environment for cell proliferation and integration with existing tissue as it effectively mimics the environment in which tissue naturally regenerates in the mammalian body.\nThe in vivo muscle tissue engineering technique provides the wound healing process with a \"head start\" in development, as the body no longer needs to recruit host cells to begin regeneration. This approach also bypasses the need for cell manipulation prior to implantation, thus ensuring that they maintain all of their mechanical and functional properties.", "\"In vitro\" is a latin phrase whose literal translation is \"within the glass.\" This term is used in the English language to describe a process which occurs outside of a living organism. Within the context of muscle tissue engineering, the term \"in vitro\" applies to the seeding of cells into a biomaterial scaffold with growth factors and nutrients, then culturing these constructs until a functional construct, such as myofibres, is developed. These developed constructs are then implanted into the wound site with the expectation that they will continue to proliferate and integrate into host muscle tissue. The goal of in vitro muscle tissue engineering is to increase the functionality of the tissue before it is ever implanted into the body, thus increasing mechanical properties and potential to thrive in the host body.\nAbdulghani & Mitchell describe in vitro muscle tissue engineering as a concept with utilizes the same basic strategies of in vivo tissue engineering. The difference between the two methods, however, is the development of a fully functional tissue engineered muscle graft (TEMG) that occurs in the in vitro technique. In vitro muscle tissue engineering includes the seeding of cells onto a biomaterial scaffold, but goes a step further by adding growth factors and biochemical and biophysical cues to promote cell growth, proliferation, differentiation, and finally regeneration into a functional muscle tissue construct. Typically, in vitro scaffolds contain specific surface features which guide the direction of cell proliferation. They are usually fibrous with aligned pores as these features encourage cell adhesion during regeneration. Beyond the types of scaffolds used in this technique, a largely important aspect of this technique is the electrical and mechanical stimulation which mimic the natural regeneration environment and encourage the expansion of intracellular communication pathways. Before TEMGs are introduced into the wound defect, they musts be vascularized to promote proper integration with the host tissue. To achieve vascularization, researchers typically seed a scaffold with multiple cell types in order to develop both muscle tissue and vascular pathways. This process prevents rejection of the TEMG upon implantation as it is able to effectively thrive in the host tissue environment. There is always a risk of immune rejection when implanting fully developed tissue, though, so this method tissue regeneration is the most closely monitored post-implantation.\nThe in vitro muscle tissue engineering technique is used to create muscle tissue with more successful functional and mechanical properties. 
According to Carnes & Pins in Skeletal Muscle Tissue Engineering: Biomaterials-Based Strategies for the Treatment of Volumetric Muscle Loss, this approach develops a microenvironment that is more conducive to enhancing tissue regeneration upon implantation, thus restoring full functionality to patients.", "Current muscle tissue engineering trends lead towards the development of skeletal muscle regeneration techniques over smooth muscle or cardiac muscle regeneration. A current trend found throughout the literature is the treatment of Volumetric Muscle Loss (VML) using muscle tissue engineering techniques. VML is the result of abrupt loss of skeletal muscle due to surgical resection, trauma, or combat injuries. It has been observed that tissue grafts, the current treatment plan, do not restore full functionality or aesthetic integrity to the site of injury. Muscle tissue engineering offers an optimistic possibility for patients, as in situ, in vivo, and in vitro techniques have been shown to restore functionality to muscle tissue in the wound site. Methods being explored include acellular scaffold implantation, cell-seeded scaffold implantation, and in vitro fabrication of muscle grafts. Preliminary data from each of these methods promises a solution for patients suffering from VML.\nBeyond specific technological advances in the field of muscle tissue engineering, researchers are working to establish a connection with the larger umbrella that is tissue engineering.", "Muscle tissue engineering is a subset of the general field of tissue engineering, which studies the combined use of cells and scaffolds to design therapeutic tissue implants. Within the clinical setting, muscle tissue engineering involves the culturing of cells from the patient's own body or from a donor, development of muscle tissue with or without the use of scaffolds, then the insertion of functional muscle tissue into the patient's body. Ideally, this implantation results in full regeneration of function and aesthetics within the patient's body. Outside the clinical setting, muscle tissue engineering is involved in drug screening, hybrid mechanical muscle actuators, robotic devices, and the development of engineered meat as a new food source.\nInnovations within the field of muscle tissue engineering seek to repair and replace defective muscle tissue, thus returning normal function. The practice begins by harvesting and isolating muscle cells from a donor site, then culturing those cells in media. The cultured cells form cell sheets and finally muscle bundles which are implanted into the patient.", "Muscle is a naturally aligned organ, with individual muscle fibers packed together into larger units called muscle fascicles. The uniaxial alignment of muscle fibers allows them to simultaneously contract in the same direction and properly propagate force on the bones via the tendons. Approximately 45% of the human body is composed of muscle tissue, and this tissue can be classified into three different groups: skeletal muscle, cardiac muscle, and smooth muscle. Muscle plays a role in structure, stability, and movement in mammalian bodies. The basic unit of a muscle is the muscle fiber, which is made up of the myofilaments actin and myosin. This muscle fiber contains sarcomeres, which generate the force required for contraction.\nA major focus of muscle tissue engineering is to create constructs with the functionality of native muscle and the ability to contract. To this end, alignment of the tissue engineered construct is extremely important. 
It has been shown that cells grown on substrates with alignment cues form more robust muscle fibers. Several other design criteria considered in muscle tissue engineering include the scaffold porosity, stiffness, biocompatibility, and degradation timeline. Substrate stiffness should ideally be in the myogenic range, which has been shown to be 10-15 kPa.\nThe purpose of muscle tissue engineering is to reconstruct functional muscular tissue which has been lost via traumatic injury, tumor ablation, or functional damage caused by myopathies. Until now, the only method used to restore muscular tissue function and aesthetics has been free tissue transfer. Full function is typically not restored, however, which results in donor site morbidity and volume deficiency. The success of tissue engineering as it pertains to the regeneration of skin, cartilage, and bone suggests that similar success may be found in engineering muscular tissue. Early innovations in the field yielded in vitro cell culturing and regeneration of muscle tissue which would be implanted in the body, but advances in recent years have shown that there may be potential for in vivo muscle tissue engineering using scaffolding.", "“In situ” is a Latin phrase whose literal translation is “on site.” It is a term that has been used in the English language since the mid-eighteenth century to describe something that is in its original place or position. In the context of muscle tissue engineering, in situ tissue engineering involves the introduction and implantation of an acellular scaffold into the site of injury or degenerated tissue. The goal of in situ muscle tissue engineering is to encourage host cell recruitment, natural scaffold formation, and proliferation and differentiation of host cells. The main idea on which in situ muscle tissue engineering is based is the self-healing, regenerative capacity of the mammalian body. The primary method for in situ muscle tissue engineering is described in the following section:\nAs described in Biomaterials for In Situ Tissue Regeneration: A Review (Abdulghani & Mitchell, 2019), in situ muscle tissue engineering requires very specific biomaterials which have the capability to recruit stem cells or progenitor cells to the site of the muscle defect, thus allowing regeneration of tissue without implantation of seed cells. The key to a successful scaffold is the appropriate properties (i.e. biocompatibility, mechanical strength, elasticity, biodegradability) and the correct shape and volume for the specific muscle defect in which it is implanted. This scaffold should effectively mimic the cellular response of the host tissue, and Mann et al. have found that polyethylene glycol-based hydrogels are very successful as in situ biomaterial scaffolds because they are chemically modified to be degraded by biological enzymes, thus encouraging cell migration and proliferation. Beyond polyethylene glycol-based hydrogels, synthetic biomaterials such as PLA and PCL are successful in situ scaffolds as they can be fully customized to each specific patient. These materials' stiffness, degradation, and porosity properties are tailored to the degenerated tissue's topology, volume, and cell type so as to provide the optimal environment for host cell migration and proliferation.\nIn situ engineering promotes natural regeneration of damaged tissue by effectively mimicking the mammalian body's own wound healing response. 
The use of both biological and synthetic biomaterials as scaffolds promotes host cell migration and proliferation directly to the defect site, thus decreasing the amount of time required for muscle tissue regeneration. Furthermore, in situ engineering effectively bypasses the risk of implant rejection by the immune system due to the biodegradable qualities of each scaffold.", "Established in 1972, the Institute focuses its research on cryoinjury, cryosurgery, cryopreservation, lyophilization and hypothermia. Since 1985 the Institute has published the open access peer-reviewed scientific journal Problems of Cryobiology and Cryomedicine.", "The Institute for Problems of Cryobiology and Cryomedicine in Kharkiv is one of the institutes of the National Academy of Sciences of Ukraine, and is the largest institute devoted to cryobiology research in the world.", "A nanofountain probe (NFP) is a device for drawing micropatterns of liquid chemicals at extremely small resolution. An NFP contains a cantilevered micro-fluidic device terminated in a nanofountain. The embedded microfluidics facilitates rapid and continuous delivery of molecules from the on-chip reservoirs to the fountain tip. When the tip is brought into contact with the substrate, a liquid meniscus forms, providing a path for molecular transport to the substrate. By controlling the geometry of the meniscus through hold time and deposition speed, various inks and biomolecules can be patterned on a surface with sub-100 nm resolution.", "Taking advantage of the unique tip geometry of the NFP, nanomaterials are directly injected into live cells with minimal invasiveness. This enables unique studies of nanoparticle-mediated delivery, as well as cellular pathways and toxicity. Whereas typical in vitro studies are limited to cell populations, these broadly applicable tools enable multifaceted interrogation at a truly single-cell level.", "Nanofountain probes (NFPs) are fabricated at the wafer scale using microfabrication techniques, allowing for batch fabrication of numerous chips. Through the different generations of devices, design and experimentation improved the device, yielding a robust fabrication process. The highly enhanced feature dimensions and shapes are expected to improve performance in writing and imaging.", "The advent of dip-pen nanolithography (DPN) in recent years represented a revolution in nanoscale patterning technology. With sub-100-nanometer resolution and an architecture conducive to massive parallelization, DPN is capable of producing large arrays of nanoscale features. However, conventional DPN and other probe-based techniques are generally limited in their rate of deposition and by the need for repeated re-inking during extended patterning.\nTo address these challenges, the nanofountain probe was developed by Espinosa et al., in which microchannels were embedded in AFM probes to transport ink or bio-molecules from reservoirs to substrates, realizing continuous writing at the nanoscale. Integration of continuous liquid ink feeding within the NFP facilitates more rapid deposition and eliminates the need for repeated dipping, all while preserving the sub-100-nanometer resolution of DPN.", "The NFP is used in the development of a scalable, direct-write nanomanufacturing platform. The platform is capable of constructing complex, highly functional nanoscale devices from a diverse suite of materials (e.g., nanoparticles, catalysts (which increase reaction rates), biomolecules, and chemical solutions). 
Demonstrated nanopatterning capabilities include:\n• Biomolecules (proteins, DNA) for biodetection assays or cell adhesion studies\n• Functional nanoparticles for drug delivery studies and nanosystem fabrication\n• Catalysts for carbon nanotube growth in nanodevice fabrication\n• Thiols for directed self-assembly of nanostructures.", "This effect was first seen by Russian physicists in the 1960s at the A.F. Ioffe Physicotechnical Institute, Leningrad, Russia. Subsequently, it was studied in semiconductors such as indium antimonide (InSb), germanium (Ge) and indium arsenide (InAs) by workers in West Germany, Ukraine (Institute of Semiconductor Physics, Kyiv), Japan (Chiba University) and the United States. It was first observed in the mid-infrared (3-5 µm wavelength) in more convenient diode structures (InSb heterostructure diodes) by workers at the Defence Research Agency, Great Malvern, UK (now QinetiQ). These British workers later demonstrated LWIR band (8-12 µm) negative luminescence using mercury cadmium telluride diodes.\nLater the Naval Research Laboratory, Washington DC, started work on negative luminescence in mercury cadmium telluride (HgCdTe). The phenomenon has since been observed by several university groups around the world.", "Negative luminescence is most readily observed in semiconductors. Incoming infrared radiation is absorbed in the material by the creation of an electron–hole pair. An electric field is used to remove the electrons and holes from the region before they have a chance to recombine and re-emit thermal radiation. This effect occurs most efficiently in regions of low charge carrier density.\nNegative luminescence has also been observed in semiconductors in orthogonal electric and magnetic fields. In this case, the junction of a diode is not necessary and the effect can be observed in bulk material. A term that has been applied to this type of negative luminescence is galvanomagnetic luminescence.\nNegative luminescence might appear to be a violation of Kirchhoff's law of thermal radiation. This is not true, as the law only applies in thermal equilibrium.\nAnother term that has been used to describe negative luminescent devices is \"emissivity switch\", as an electric current changes the effective emissivity.", "Negative luminescence is a physical phenomenon by which an electronic device emits less thermal radiation when an electric current is passed through it than it does in thermal equilibrium (current off). When viewed by a thermal camera, an operating negative luminescent device looks colder than its environment.", "In tissue engineering, a neo-organ is the final structure of a procedure based on transplantation consisting of endogenous stem/progenitor cells grown ex vivo within predesigned matrix scaffolds. Current organ donation faces the problems of patients waiting for a matching organ and the possible risk of the patient's body rejecting the organ. Neo-organs are being researched as a solution to those problems with organ donation. Suitable methods for creating neo-organs are still under development. One experimental method uses adult stem cells, drawing on the patient's own stem cells to grow the new organ. Currently this method can be combined with decellularization, which uses a donor organ for structural support but removes the donor's cells from the organ. Similarly, the concept of 3-D bioprinting organs has shown experimental success in printing bioink layers that mimic the layers of organ tissue. 
However, these bioinks do not provide structural support like a donor organ. Current clinically successful neo-organ methods combine decellularized donor organs with adult stem cells from the organ recipient, providing both the structural support of a donor organ and the personalization of the organ for each individual patient, which reduces the chance of rejection.", "The word neo-organ comes from the Greek word \"neos,\" which means new. Organ transplants have been successfully used for medical purposes since 1954. The difficulty with the traditional process of organ transplants is that it requires waiting for a viable donor to donate an organ. The process of matching the organ to make sure it is compatible with the patient has also proven to be challenging. There are two main challenges: finding the right candidate for the patient and avoiding rejection of the organ by the patient even if it is a match. Neo-organs can be used to avoid the process of organ matching and donation.", "Research is being conducted into three methods of creating neo-organs: adult stem cells, decellularization, and 3-D bioprinting:", "One of the most studied methods is to use the patient's own cells to generate a new organ ex vivo. Specifically, researchers have chosen to focus on adult stem cells, or somatic stem cells, for the generation of new organ cells to create organs. There has been success in the production and use of some organs. The first stem-cell based organ, a tracheal graft, was transplanted successfully in 2008. The method involves obtaining a donor organ, removing the cells and MHC antigens from the donor organ, and colonizing it with stem cells obtained from the patient. This method does not create an entire organ from stem cells, and it still requires a donor to provide the decellularized graft. However, the first surgery done with this method was successful and the patient has shown no signs of rejection since. The current debate with this method is whether the decellularized graft was only used to provide the shape of the organ, or whether it provided additional benefits because it was a donor graft. Current research is being done to find ways to use adult stem cells for neo-organs without using decellularized donor organs for structural support.", "Researchers have begun to focus on decellularization for organ transplants since it reduces the chance of rejection nearly to zero. This process was used in the first successful stem-cell based organ transplant by removing the cells and MHC antigens from the donor organ. There are different ways to remove the cells from the organ, which can include physical, chemical, and enzymatic treatments. This method is especially useful when trying to create a neo-heart, because the heart's structure must be preserved. Since the stem cells used are currently not able to maintain a shape, researchers have started to look more into decellularization of existing organs to be able to perform successful transplant procedures without the problem of rejection. While this method may assist with the problem of rejection, donors are still needed to provide this structure to patients.", "The process of creating a 3-D organ with stem cells is thought not to be possible without the structural support of a donor organ. However, new studies are being conducted on the process of 3-D bioprinting organs. 
The process of 3-D bioprinting involves combining cells and growth factors to create a bioink, then using that bioink to print individual layers of tissue. Research is being done to find ways to use the formulated bioink to print organs that have the same structural support as donor organs without the need for donors. While there has not yet been experimental success with printing structural organs, there has been success with using bioink to print tissue layers. A method for creating gelatin-based vascularized bone equivalents has been shown to be successful in a small-scale experiment, but it has not been used clinically.", "Deuterium (D, hydrogen-2, ²H) and tritium (T, hydrogen-3, ³H) fusion reactions are the most common accelerator-based (as opposed to radioactive isotopes) neutron sources. In these systems, neutrons are produced by creating ions of deuterium, tritium, or deuterium and tritium and accelerating these into a hydride target loaded with deuterium, or deuterium and tritium. The DT reaction is used more than the DD reaction because the yield of the DT reaction is 50–100 times higher than that of the DD reaction.\nD + T → n + ⁴He   (Eₙ = 14.1 MeV)\nD + D → n + ³He   (Eₙ = 2.5 MeV)\nNeutrons produced by DD and DT reactions are emitted somewhat anisotropically from the target, slightly biased in the forward direction (along the axis of the ion beam). The anisotropy of the neutron emission from DD and DT reactions arises from the fact that the reactions are isotropic in the center-of-momentum coordinate system (COM) but this isotropy is lost in the transformation from the COM coordinate system to the laboratory frame of reference. In both frames of reference, the helium nuclei recoil in the direction opposite to the emitted neutron, consistent with the law of conservation of momentum.\nThe gas pressure in the ion source region of the neutron tubes generally ranges between 0.01 and 0.1 mm Hg. The mean free path of electrons must be shorter than the discharge space to achieve ionization (lower limit for pressure), while the pressure must be kept low enough to avoid formation of discharges at the high extraction voltages applied between the electrodes. The pressure in the accelerating region, however, has to be much lower, as the mean free path of electrons must be longer to prevent formation of a discharge between the high voltage electrodes.\nThe ion accelerator usually consists of several electrodes with cylindrical symmetry, acting as an einzel lens. The ion beam can thus be focused to a small point at the target. The accelerators typically require power supplies of 100–500 kV. They usually have several stages, with the voltage between the stages not exceeding 200 kV to prevent field emission.\nIn comparison with radionuclide neutron sources, neutron tubes can produce much higher neutron fluxes, and consistent (monochromatic) neutron energy spectra can be obtained. The neutron production rate can also be controlled.", "The central part of a neutron generator is the particle accelerator itself, sometimes called a neutron tube. Neutron tubes have several components including an ion source, ion optic elements, and a beam target; all of these are enclosed within a vacuum-tight enclosure. High voltage insulation between the ion optical elements of the tube is provided by glass and/or ceramic insulators. The neutron tube is, in turn, enclosed in a metal housing, the accelerator head, which is filled with a dielectric medium to insulate the high voltage elements of the tube from the operating area. 
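As a quick numeric check on the neutron energies quoted earlier for the DT and DD reactions, the following sketch (Python; the rounded particle masses are assumptions used only for illustration) splits each reaction's total energy release between the neutron and the recoiling helium nucleus in inverse proportion to their masses, which is what momentum conservation requires for a two-body exit channel with the reactants nearly at rest.

```python
# Two-body kinematics for accelerator fusion reactions: the energy release Q
# is shared between the neutron and the recoiling helium nucleus in inverse
# proportion to their masses. Masses below are rounded values in atomic mass
# units, assumed here for illustration only.

M_N, M_HE4, M_HE3 = 1.0087, 4.0026, 3.0160   # neutron, helium-4, helium-3

def split_q(q_mev, m_neutron, m_recoil):
    """Return (neutron energy, recoil energy) for a two-body exit channel."""
    e_n = q_mev * m_recoil / (m_neutron + m_recoil)
    return e_n, q_mev - e_n

# D + T -> n + 4He releases about 17.6 MeV; D + D -> n + 3He about 3.27 MeV.
for label, q, m_recoil in [("D-T", 17.6, M_HE4), ("D-D", 3.27, M_HE3)]:
    e_n, e_he = split_q(q, M_N, m_recoil)
    print(f"{label}: E_n = {e_n:.2f} MeV, E_He = {e_he:.2f} MeV")
# D-T gives E_n ~ 14.1 MeV and D-D gives E_n ~ 2.45 MeV, matching the figures
# quoted for these reactions above.
```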
The accelerator and ion source high voltages are provided by external power supplies. The control console allows the operator to adjust the operating parameters of the neutron tube. The power supplies and control equipment are normally located near the accelerator head in laboratory instruments, but may be several kilometers away in well logging instruments.\nIn comparison with their predecessors, sealed neutron tubes do not require vacuum pumps and gas sources for operation. They are therefore more mobile and compact, while also durable and reliable. For example, sealed neutron tubes have replaced radioactive modulated neutron initiators in supplying a pulse of neutrons to the imploding core of modern nuclear weapons.\nExamples of neutron tube ideas date as far back as the 1930s, in the pre-nuclear weapons era, when German scientists filed a 1938 German patent (March 1938, patent #261,156) and obtained a United States patent (July 1941, USP #2,251,190); examples of the present state of the art are given by developments such as the Neutristor, a mostly solid-state device resembling a computer chip, invented at Sandia National Laboratories in Albuquerque, NM. Typical sealed designs are used in a pulsed mode and can be operated at different output levels, depending on the life of the ion source and loaded targets.", "A good ion source should provide a strong ion beam without consuming much of the gas. For hydrogen isotopes, production of atomic ions is favored over molecular ions, as atomic ions have a higher neutron yield on collision. The ions generated in the ion source are then extracted by an electric field into the accelerator region, and accelerated towards the target. The gas consumption is chiefly caused by the pressure difference between the ion generating and ion accelerating spaces that has to be maintained. Ion currents of 10 mA at gas consumptions of 40 cm³/hour are achievable.\nFor a sealed neutron tube, the ideal ion source should use low gas pressure, give a high ion current with a large proportion of atomic ions, have low gas clean-up, use low power, and have high reliability and a long lifetime; its construction has to be simple and robust and its maintenance requirements have to be low.\nGas can be efficiently stored in a replenisher, an electrically heated coil of zirconium wire. Its temperature determines the rate of absorption/desorption of hydrogen by the metal, which regulates the pressure in the enclosure.", "The Penning source is a low gas pressure, cold cathode ion source which utilizes crossed electric and magnetic fields. The ion source anode is at a positive potential, either DC or pulsed, with respect to the source cathode. The ion source voltage is normally between 2 and 7 kilovolts. A magnetic field, oriented parallel to the source axis, is produced by a permanent magnet. A plasma is formed along the axis of the anode, which traps electrons that, in turn, ionize the gas in the source. The ions are extracted through the exit cathode. Under normal operation, the ion species produced by the Penning source are over 90% molecular ions. This disadvantage is, however, compensated for by the other advantages of the system.\nOne of the cathodes is a cup made of soft iron, enclosing most of the discharge space. The bottom of the cup has a hole through which most of the generated ions are ejected by the magnetic field into the acceleration space. 
The soft iron shields the acceleration space from the magnetic field, to prevent a breakdown.\nIons emerging from the exit cathode are accelerated through the potential difference between the exit cathode and the accelerator electrode. The schematic indicates that the exit cathode is at ground potential and the target is at high (negative) potential. This is the case in many sealed tube neutron generators. However, in cases when it is desired to deliver the maximum flux to a sample, it is desirable to operate the neutron tube with the target grounded and the source floating at high (positive) potential. The accelerator voltage is normally between 80 and 180 kilovolts.\nThe accelerating electrode has the shape of a long hollow cylinder. The ion beam has a slightly diverging angle (about 0.1 radian). The electrode shape and distance from the target can be chosen so the entire target surface is bombarded with ions. Acceleration voltages of up to 200 kV are achievable.\nThe ions pass through the accelerating electrode and strike the target. When ions strike the target, 2–3 electrons per ion are produced by secondary emission. In order to prevent these secondary electrons from being accelerated back into the ion source, the accelerator electrode is biased negative with respect to the target. This voltage, called the suppressor voltage, must be at least 500 volts and may be as high as a few kilovolts. Loss of suppressor voltage will result in damage, possibly catastrophic, to the neutron tube.\nSome neutron tubes incorporate an intermediate electrode, called the focus or extractor electrode, to control the size of the beam spot on the target. The gas pressure in the source is regulated by heating or cooling the gas reservoir element.", "Ions can be created by electrons formed in a high-frequency electromagnetic field. The discharge is formed in a tube located between electrodes, or inside a coil. A proportion of atomic ions of over 90% is achievable.", "The targets used in neutron generators are thin films of metal such as titanium, scandium, or zirconium which are deposited onto a silver, copper or molybdenum substrate. Titanium, scandium, and zirconium form stable chemical compounds called metal hydrides when combined with hydrogen or its isotopes. These metal hydrides are made up of two hydrogen (deuterium or tritium) atoms per metal atom and allow the target to have extremely high densities of hydrogen. This is important to maximize the neutron yield of the neutron tube. The gas reservoir element also uses metal hydrides, e.g. uranium hydride, as the active material.\nTitanium is preferred to zirconium as it can withstand higher temperatures (200 °C), and gives a higher neutron yield as it captures deuterons better than zirconium. The maximum temperature allowed for the target, above which hydrogen isotopes undergo desorption and escape the material, limits the ion current per surface unit of the target; slightly divergent beams are therefore used. A 1 microampere ion beam accelerated at 200 kV to a titanium-tritium target can generate up to 10 neutrons per second. The neutron yield is mostly determined by the accelerating voltage and the ion current level.\nAn example of a tritium target in use is a 0.2 mm thick silver disc with a 1 micrometer layer of titanium deposited on its surface; the titanium is then saturated with tritium.\nMetals with sufficiently low hydrogen diffusion can be turned into deuterium targets by bombardment of deuterons until the metal is saturated. 
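As a rough numeric aside on the statement above that the neutron yield is determined mainly by the accelerating voltage and the ion current: the beam current fixes how many ions strike the target each second, and the fusion probability per incident ion (which depends on the voltage and the target loading, and is not estimated here) then scales that rate down to the actual neutron output. A minimal sketch, with an illustrative 1 microampere beam assumed:

```python
# Ion arrival rate implied by a given beam current of singly charged ions.
# The neutron yield is this rate multiplied by the fusion probability per
# incident ion, which is not modelled here.

E_CHARGE = 1.602e-19  # coulombs per elementary charge

def ions_per_second(beam_current_amperes: float) -> float:
    return beam_current_amperes / E_CHARGE

print(f"{ions_per_second(1e-6):.2e} ions per second for a 1 microampere beam")
# -> roughly 6.2e12 deuterons or tritons striking the target each second
```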
Gold targets under such conditions show four times higher efficiency than titanium. Even better results can be achieved with targets made of a thin film of a high-absorption high-diffusivity metal (e.g. titanium) on a substrate with low hydrogen diffusivity (e.g. silver), as the hydrogen is then concentrated on the top layer and cannot diffuse away into the bulk of the material. Using a deuterium-tritium gas mixture, self-replenishing D-T targets can be made. The neutron yield of such targets is lower than that of tritium-saturated targets in deuteron beams, but their advantage is a much longer lifetime and a constant level of neutron production. Self-replenishing targets are also tolerant to high-temperature bake-out of the tubes, as their saturation with hydrogen isotopes is performed after the bakeout and tube sealing.", "One approach for generating the high voltage fields needed to accelerate ions in a neutron tube is to use a pyroelectric crystal. In April 2005 researchers at UCLA demonstrated the use of a thermally cycled pyroelectric crystal to generate high electric fields in a neutron generator application. In February 2006 researchers at Rensselaer Polytechnic Institute demonstrated the use of two oppositely poled crystals for this application. Using these low-tech power supplies, it is possible to generate a sufficiently high electric field gradient across an accelerating gap to accelerate deuterium ions into a deuterated target to produce the D + D fusion reaction. These devices are similar in their operating principle to conventional sealed-tube neutron generators, which typically use Cockcroft–Walton type high voltage power supplies. The novelty of this approach is in the simplicity of the high voltage source. Unfortunately, the relatively low accelerating current that pyroelectric crystals can generate, together with the modest pulsing frequencies that can be achieved (a few cycles per minute), limits their near-term application in comparison with today's commercial products (see below). Also see pyroelectric fusion.", "Neutron generators are neutron source devices which contain compact linear particle accelerators and produce neutrons by fusing isotopes of hydrogen together. The fusion reactions take place in these devices by accelerating either deuterium, tritium, or a mixture of these two isotopes into a metal hydride target which also contains deuterium, tritium or a mixture of these isotopes. Fusion of deuterium atoms (D + D) results in the formation of a helium-3 ion and a neutron with a kinetic energy of approximately 2.5 MeV. Fusion of a deuterium and a tritium atom (D + T) results in the formation of a helium-4 ion and a neutron with a kinetic energy of approximately 14.1 MeV. Neutron generators have applications in medicine, security, and materials analysis.\nThe basic concept was first developed by Ernest Rutherford's team in the Cavendish Laboratory in the early 1930s. Using a linear accelerator driven by a Cockcroft–Walton generator, Mark Oliphant led an experiment that fired deuterium ions into a deuterium-infused metal foil and noticed that a small number of these particles gave off alpha particles. This was the first demonstration of nuclear fusion, as well as the discovery of helium-3 and tritium, created in these reactions. The introduction of new power sources has continually shrunk the size of these machines, from Oliphant's, which filled the corner of the lab, to modern machines that are highly portable. 
Thousands of such small, relatively inexpensive systems have been built over the past five decades.\nWhile neutron generators do produce fusion reactions, the number of accelerated ions that cause these reactions is very low. It can be easily demonstrated that the energy released by these reactions is many times lower than the energy needed to accelerate the ions, so there is no possibility of these machines being used to produce net fusion power. A related concept, colliding beam fusion, attempts to address this issue using two accelerators firing at each other.", "Another type of innovative neutron generator is the inertial electrostatic confinement fusion device. This neutron generator avoids using a solid target, which would be sputter eroded, causing metallization of insulating surfaces. Depletion of the reactant gas within the solid target is also avoided. A far greater operational lifetime is achieved. Originally called a fusor, it was invented by Philo Farnsworth, the inventor of electronic television.", "Neutron generators find application in the semiconductor production industry. They also have use cases in the enrichment of depleted uranium, acceleration of breeder reactors, and activation and excitation of experimental thorium reactors.\nIn materials analysis, neutron activation analysis is used to determine the concentration of different elements in mixed materials such as minerals or ores.", "In addition to the conventional neutron generator design described above, several other approaches exist to use electrical systems for producing neutrons.", "Nitrosylsulfuric acid is the chemical compound with the formula NOHSO4. It is a colourless solid that is used industrially in the production of caprolactam, and was formerly part of the lead chamber process for producing sulfuric acid. The compound is the mixed anhydride of sulfuric acid and nitrous acid.\nIn organic chemistry, it is used as a nitrosating agent, a diazotizing agent, and an oxidizing agent.", "A typical procedure entails dissolving sodium nitrite in cold sulfuric acid. It can also be prepared by the reaction of nitric acid and sulfur dioxide.\nNitrosylsulfuric acid is used in organic chemistry to prepare diazonium salts from amines, for example in the Sandmeyer reaction. Related NO-delivery reagents include nitrosonium tetrafluoroborate and nitrosyl chloride.\nIn industry, the nitrosodecarboxylation reaction between nitrosylsulfuric acid and cyclohexanecarboxylic acid is used to generate caprolactam.", "Calculation can be employed to determine the nuclear binding energy of nuclei. The calculation involves determining the mass defect, converting it into energy, and expressing the result as energy per mole of atoms, or as energy per nucleon.", "Small nuclei that are larger than hydrogen can combine into bigger ones and release energy, but in combining such nuclei, the amount of energy released is much smaller compared to hydrogen fusion. The reason is that while the overall process releases energy from letting the nuclear attraction do its work, energy must first be injected to force together positively charged protons, which also repel each other with their electric charge.\nFor elements that weigh more than iron (a nucleus with 26 protons), the fusion process no longer releases energy. In even heavier nuclei energy is consumed, not released, by combining similarly sized nuclei. 
With such large nuclei, overcoming the electric repulsion (which affects all protons in the nucleus) requires more energy than is released by the nuclear attraction (which is effective mainly between close neighbors). Conversely, energy could actually be released by breaking apart nuclei heavier than iron.\nWith the nuclei of elements heavier than lead, the electric repulsion is so strong that some of them spontaneously eject positive fragments, usually nuclei of helium, which form stable alpha particles. This spontaneous break-up is one of the forms of radioactivity exhibited by some nuclei.\nNuclei heavier than lead (except for bismuth, thorium, and uranium) spontaneously break up too quickly to appear in nature as primordial elements, though they can be produced artificially or as intermediates in the decay chains of heavier elements. Generally, the heavier the nuclei are, the faster they spontaneously decay.\nIron nuclei are the most stable nuclei (in particular iron-56), and the best sources of energy are therefore nuclei whose weights are as far removed from iron as possible. One can combine the lightest ones—nuclei of hydrogen (protons)—to form nuclei of helium, and that is how the Sun generates its energy. Alternatively, one can break up the heaviest ones—nuclei of uranium or plutonium—into smaller fragments, and that is what nuclear reactors do.", "An example that illustrates nuclear binding energy is the nucleus of ¹²C (carbon-12), which contains 6 protons and 6 neutrons. The protons are all positively charged and repel each other, but the nuclear force overcomes the repulsion and causes them to stick together. The nuclear force is a close-range force (it is strongly attractive at a distance of 1.0 fm and becomes extremely small beyond a distance of 2.5 fm), and virtually no effect of this force is observed outside the nucleus. The nuclear force also pulls neutrons together, or neutrons and protons.\nThe energy of the nucleus is negative with regard to the energy of the particles pulled apart to infinite distance (just like the gravitational energy of planets of the Solar System), because energy must be utilized to split a nucleus into its individual protons and neutrons. Mass spectrometers have measured the masses of nuclei, which are always less than the sum of the masses of protons and neutrons that form them, and the difference—by the formula E = Δmc²—gives the binding energy of the nucleus.", "Mass defect is defined as the difference between the mass of a nucleus and the sum of the masses of the nucleons of which it is composed. The mass defect is determined by calculating three quantities. These are: the actual mass of the nucleus, the composition of the nucleus (number of protons and of neutrons), and the masses of a proton and of a neutron. This is then followed by converting the mass defect into energy. This quantity is the nuclear binding energy; however, it must be expressed as energy per mole of atoms or as energy per nucleon.", "The binding energy of helium is the energy source of the Sun and of most stars. The Sun is composed of 74 percent hydrogen (measured by mass), an element having a nucleus consisting of a single proton. Energy is released in the Sun when 4 protons combine into a helium nucleus, a process in which two of them are also converted to neutrons.\nThe conversion of protons to neutrons is the result of another nuclear force, known as the weak (nuclear) force. The weak force, like the strong force, has a short range, but is much weaker than the strong force. 
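The binding-energy procedure described above (find the mass defect, convert it to energy, then express it per nucleon or per mole) can be made concrete with helium-4. The sketch below is a minimal illustration; the nucleon and nuclear masses are rounded literature values assumed here, and 931.494 MeV per atomic mass unit is the usual E = mc² conversion factor.

```python
# Nuclear binding energy of helium-4 from its mass defect.
M_PROTON, M_NEUTRON = 1.007276, 1.008665   # atomic mass units (rounded)
M_HE4_NUCLEUS = 4.001506                   # nuclear (not atomic) mass, rounded
U_TO_MEV = 931.494                         # MeV per u of mass defect (E = mc^2)
MEV_TO_J = 1.602e-13
AVOGADRO = 6.022e23

mass_defect = 2 * M_PROTON + 2 * M_NEUTRON - M_HE4_NUCLEUS   # ~0.0304 u
binding_mev = mass_defect * U_TO_MEV                         # ~28.3 MeV
per_nucleon = binding_mev / 4                                # ~7.07 MeV
per_mole_j = binding_mev * MEV_TO_J * AVOGADRO               # ~2.7e12 J/mol

print(f"mass defect    ~ {mass_defect:.4f} u")
print(f"binding energy ~ {binding_mev:.1f} MeV ({per_nucleon:.2f} MeV per nucleon)")
print(f"per mole of helium-4 ~ {per_mole_j:.2e} J")
```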
The weak force tries to make the number of neutrons and protons into the most energetically stable configuration. For nuclei containing fewer than 40 particles, these numbers are usually about equal. Protons and neutrons are closely related and are collectively known as nucleons. As the number of particles increases toward a maximum of about 209, the number of neutrons required to maintain stability begins to outstrip the number of protons, until the ratio of neutrons to protons is about three to two.\nThe protons of hydrogen combine into helium only if they have enough velocity to overcome each other's mutual repulsion sufficiently to get within range of the strong nuclear attraction. This means that fusion only occurs within a very hot gas. Hydrogen hot enough to combine into helium requires an enormous pressure to keep it confined, but suitable conditions exist in the central regions of the Sun, where such pressure is provided by the enormous weight of the layers above the core, pressed inwards by the Sun's strong gravity. The process of combining protons to form helium is an example of nuclear fusion.\nProducing helium from normal hydrogen would be practically impossible on Earth because of the difficulty in creating deuterium. Research is being undertaken on developing a process using deuterium and tritium. The Earth's oceans contain a large amount of deuterium that could be used, and tritium can be made in the reactor itself from lithium; furthermore, the helium product does not harm the environment, so some consider nuclear fusion a good alternative to supply our energy needs. Experiments to carry out this form of fusion have so far only partially succeeded. Sufficiently hot deuterium and tritium must be confined. One technique is to use very strong magnetic fields, because charged particles (like those trapped in the Earth's radiation belt) are guided by magnetic field lines.", "In the main isotopes of light elements, such as carbon, nitrogen and oxygen, the most stable combination of neutrons and of protons occurs when the numbers are equal (this continues to element 20, calcium). However, in heavier nuclei, the disruptive energy of protons increases, since they are confined to a tiny volume and repel each other. The energy of the strong force holding the nucleus together also increases, but at a slower rate, as if inside the nucleus, only nucleons close to each other are tightly bound, not ones more widely separated.\nThe net binding energy of a nucleus is that of the nuclear attraction, minus the disruptive energy of the electric force. As nuclei get heavier than helium, their net binding energy per nucleon (deduced from the difference in mass between the nucleus and the sum of masses of component nucleons) grows more and more slowly, reaching its peak at iron. As nucleons are added, the total nuclear binding energy always increases—but the total disruptive energy of electric forces (positive protons repelling other protons) also increases, and past iron, the second increase outweighs the first. Iron-56 (⁵⁶Fe) is the most efficiently bound nucleus, meaning that it has the least average mass per nucleon. However, nickel-62 is the most tightly bound nucleus in terms of binding energy per nucleon. 
(Nickel-62's higher binding energy does not translate to a larger mean mass loss than ⁵⁶Fe, because ⁶²Ni has a slightly higher ratio of neutrons/protons than does iron-56, and the presence of the heavier neutrons increases nickel-62's average mass per nucleon.)\nTo reduce the disruptive energy, the weak interaction allows the number of neutrons to exceed that of protons—for instance, the main isotope of iron has 26 protons and 30 neutrons. Isotopes also exist where the number of neutrons differs from the most stable number for that number of nucleons. If changing one proton into a neutron or one neutron into a proton increases the stability (lowering the mass), then this will happen through beta decay, meaning the nuclide will be radioactive.\nThe two methods for this conversion are mediated by the weak force, and involve types of beta decay. In the simplest beta decay, neutrons are converted to protons by emitting a negative electron and an antineutrino. This is always possible outside a nucleus because neutrons are more massive than protons by an equivalent of about 2.5 electrons. In the opposite process, which only happens within a nucleus, and not to free particles, a proton may become a neutron by ejecting a positron and an electron neutrino. This is permitted if enough energy is available between parent and daughter nuclides to do this (the required energy difference is equal to 1.022 MeV, which is the mass of 2 electrons). If the mass difference between parent and daughter is less than this, a proton-rich nucleus may still convert protons to neutrons by the process of electron capture, in which a proton simply captures one of the atom's K-shell electrons, emits a neutrino, and becomes a neutron.\nAmong the heaviest nuclei, starting with tellurium nuclei (element 52) containing 104 or more nucleons, electric forces may be so destabilizing that entire chunks of the nucleus may be ejected, usually as alpha particles, which consist of two protons and two neutrons (alpha particles are fast helium nuclei). (Beryllium-8 also decays, very quickly, into two alpha particles.) This type of decay becomes more and more probable as elements rise in atomic weight past 104.\nThe curve of binding energy is a graph that plots the binding energy per nucleon against atomic mass. This curve has its main peak at iron and nickel and then slowly decreases again, and also a narrow isolated peak at helium, which is more stable than other low-mass nuclides. The heaviest nuclei found in more than trace quantities in nature, those of uranium-238 (²³⁸U), are unstable, but having a half-life of 4.5 billion years, close to the age of the Earth, they are still relatively abundant; they (and other nuclei heavier than helium) have formed in stellar evolution events like supernova explosions preceding the formation of the Solar System. The most common isotope of thorium, thorium-232 (²³²Th), also undergoes alpha particle emission, and its half-life (the time over which half a number of atoms decays) is even longer, by several times. In each of these, radioactive decay produces daughter isotopes that are also unstable, starting a chain of decays that ends in some stable isotope of lead.", "The nuclear fusion process works as follows: five billion years ago, the new Sun formed when gravity pulled together a vast cloud of hydrogen and dust, from which the Earth and other planets also arose. 
The gravitational pull released energy and heated the early Sun, much in the way Helmholtz proposed.\nThermal energy appears as the motion of atoms and molecules: the higher the temperature of a collection of particles, the greater is their velocity and the more violent are their collisions. When the temperature at the center of the newly formed Sun became great enough for collisions between hydrogen nuclei to overcome their electric repulsion, and bring them into the short range of the attractive nuclear force, nuclei began to stick together. When this began to happen, protons combined into deuterium and then helium, with some protons changing in the process to neutrons (plus positrons, positive electrons, which combine with electrons and annihilate into gamma-ray photons). This released nuclear energy now keeps up the high temperature of the Sun's core, and the heat also keeps the gas pressure high, keeping the Sun at its present size, and stopping gravity from compressing it any more. There is now a stable balance between gravity and pressure.\nDifferent nuclear reactions may predominate at different stages of the Sun's existence, including the proton–proton reaction and the carbon–nitrogen cycle—which involves heavier nuclei, but whose final product is still the combination of protons to form helium.\nA branch of physics, the study of controlled nuclear fusion, has tried since the 1950s to derive useful power from nuclear fusion reactions that combine small nuclei into bigger ones, typically to heat boilers, whose steam could turn turbines and produce electricity. No earthly laboratory can match one feature of the solar powerhouse: the great mass of the Sun, whose weight keeps the hot plasma compressed and confines the nuclear furnace to the Sun's core. Instead, physicists use strong magnetic fields to confine the plasma, and for fuel they use heavy forms of hydrogen, which burn more easily. Magnetic traps can be rather unstable, and any plasma hot enough and dense enough to undergo nuclear fusion tends to slip out of them after a short time. Even with ingenious tricks, the confinement in most cases lasts only a small fraction of a second.", "Nuclear binding energy in experimental physics is the minimum energy that is required to disassemble the nucleus of an atom into its constituent protons and neutrons, known collectively as nucleons. The binding energy for stable nuclei is always a positive number, as the nucleus must gain energy for the nucleons to move apart from each other. Nucleons are attracted to each other by the strong nuclear force. In theoretical nuclear physics, the nuclear binding energy is considered a negative number. In this context it represents the energy of the nucleus relative to the energy of the constituent nucleons when they are infinitely far apart. Both the experimental and theoretical views are equivalent, with slightly different emphasis on what the binding energy means.\nThe mass of an atomic nucleus is less than the sum of the individual masses of the free constituent protons and neutrons. The difference in mass can be calculated by the Einstein equation, E = mc², where E is the nuclear binding energy, c is the speed of light, and m is the difference in mass. This missing mass is known as the mass defect, and represents the energy that was released when the nucleus was formed.\nThe term \"nuclear binding energy\" may also refer to the energy balance in processes in which the nucleus splits into fragments composed of more than one nucleon. 
If new binding energy is available when light nuclei fuse (nuclear fusion), or when heavy nuclei split (nuclear fission), either process can result in the release of this binding energy. This energy may be made available as nuclear energy and can be used to produce electricity, as in nuclear power, or in a nuclear weapon. When a large nucleus splits into pieces, excess energy is emitted as gamma rays and the kinetic energy of various ejected particles (nuclear fission products).\nThese nuclear binding energies and forces are on the order of one million times greater than the electron binding energies of light atoms like hydrogen.", "An absorption or release of nuclear energy occurs in nuclear reactions or radioactive decay; those that absorb energy are called endothermic reactions and those that release energy are exothermic reactions. Energy is consumed or released because of differences in the nuclear binding energy between the incoming and outgoing products of the nuclear transmutation.\nThe best-known classes of exothermic nuclear transmutations are nuclear fission and nuclear fusion. Nuclear energy may be released by fission, when heavy atomic nuclei (like uranium and plutonium) are broken apart into lighter nuclei. The energy from fission is used to generate electric power in hundreds of locations worldwide. Nuclear energy is also released during fusion, when light nuclei like hydrogen are combined to form heavier nuclei such as helium. The Sun and other stars use nuclear fusion to generate thermal energy which is later radiated from the surface, a type of stellar nucleosynthesis. In any exothermic nuclear process, nuclear mass might ultimately be converted to thermal energy, emitted as heat.\nIn order to quantify the energy released or absorbed in any nuclear transmutation, one must know the nuclear binding energies of the nuclear components involved in the transmutation.", "There are around 94 naturally occurring elements on Earth. The atoms of each element have a nucleus containing a specific number of protons (always the same number for a given element), and some number of neutrons, which is often roughly similar. Two atoms of the same element having different numbers of neutrons are known as isotopes of the element. Different isotopes may have different properties – for example one might be stable and another might be unstable, and gradually undergo radioactive decay to become another element.\nThe hydrogen nucleus contains just one proton. Its isotope deuterium, or heavy hydrogen, contains a proton and a neutron. Helium contains two protons and two neutrons, and carbon, nitrogen and oxygen – six, seven and eight of each particle, respectively. However, a helium nucleus weighs less than the sum of the weights of the two heavy hydrogen nuclei which combine to make it. The same is true for carbon, nitrogen and oxygen. For example, the carbon nucleus is slightly lighter than three helium nuclei, which can combine to make a carbon nucleus. This difference is known as the mass defect.", "Mass defect (also called \"mass deficit\") is the difference between the mass of an object and the sum of the masses of its constituent particles. It can be explained using the formula E = mc², published by Albert Einstein in 1905, which describes the equivalence of energy and mass. The decrease in mass is equal to the energy emitted in the reaction of an atom's creation divided by c². 
By this formula, adding energy also increases mass (both weight and inertia), whereas removing energy decreases mass. For example, a helium atom containing four nucleons has a mass about 0.8% less than the total mass of four hydrogen atoms (each containing one nucleon). The helium nucleus has four nucleons bound together, and the binding energy which holds them together is, in effect, the missing 0.8% of mass.\nIf a combination of particles contains extra energy—for instance, in a molecule of the explosive TNT—weighing it reveals some extra mass, compared to its end products after an explosion. (The end products must be weighed after they have been stopped and cooled, however, as the extra mass must escape from the system as heat before its loss can be noticed, in theory.) On the other hand, if one must inject energy to separate a system of particles into its components, then the initial mass is less than that of the components after they are separated. In the latter case, the energy injected is \"stored\" as potential energy, which shows as the increased mass of the components that store it. This is an example of the fact that energy of all types is seen in systems as mass, since mass and energy are equivalent, and each is a \"property\" of the other.\nThe latter scenario is the case with nuclei such as helium: to break them up into protons and neutrons, one must inject energy. On the other hand, if a process existed going in the opposite direction, by which hydrogen atoms could be combined to form helium, then energy would be released. The energy can be computed using E = Δmc² for each nucleus, where Δm is the difference between the mass of the helium nucleus and the mass of four protons (plus two electrons, absorbed to create the neutrons of helium).\nFor lighter elements, energy can be released by assembling them from still lighter elements, that is, energy is released when they fuse. This is true for nuclei lighter than iron/nickel. For heavier nuclei, more energy is needed to bind them, and that energy may be released by breaking them up into fragments (known as nuclear fission). Nuclear power is generated at present by breaking up uranium nuclei in nuclear power reactors, and capturing the released energy as heat, which is converted to electricity.\nAs a rule, very light elements can fuse comparatively easily, and very heavy elements can break up via fission very easily; elements in the middle are more stable and it is difficult to make them undergo either fusion or fission in an environment such as a laboratory.\nThe reason the trend reverses after iron is the growing positive charge of the nuclei, which tends to force nuclei to break up. It is resisted by the strong nuclear interaction, which holds nucleons together. The electric force may be weaker than the strong nuclear force, but the strong force has a much more limited range: in an iron nucleus, each proton repels the other 25 protons, while the nuclear force only binds close neighbors. So for larger nuclei, the electrostatic forces tend to dominate and the nucleus will tend over time to break up.\nAs nuclei grow bigger still, this disruptive effect becomes steadily more significant. By the time polonium is reached (84 protons), nuclei can no longer accommodate their large positive charge, but emit their excess protons quite rapidly in the process of alpha radioactivity—the emission of helium nuclei, each containing two protons and two neutrons. (Helium nuclei are an especially stable combination.) 
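The helium example above (four hydrogen atoms assembled into one helium atom) can be written out as a short calculation. The atomic masses are rounded literature values assumed here for illustration; the result is consistent with the roughly 0.8% mass loss quoted earlier.

```latex
% Mass lost when four hydrogen atoms are combined into one helium atom
% (rounded atomic masses in unified atomic mass units, assumed here).
\begin{align*}
\Delta m &= 4\,m(^{1}\mathrm{H}) - m(^{4}\mathrm{He})
          \approx 4(1.00783\,\mathrm{u}) - 4.00260\,\mathrm{u}
          = 0.02872\,\mathrm{u},\\[2pt]
\frac{\Delta m}{4\,m(^{1}\mathrm{H})} &\approx \frac{0.02872}{4.03132}
          \approx 0.7\%,\\[2pt]
E &= \Delta m\,c^{2} \approx 0.02872 \times 931.5\ \mathrm{MeV}
          \approx 26.7\ \mathrm{MeV}.
\end{align*}
```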
Because of this process, nuclei with more than 94 protons are not found naturally on Earth (see periodic table). The isotopes beyond uranium (atomic number 92) with the longest half-lives are plutonium-244 (80 million years) and curium-247 (16 million years).", "Electrons and nuclei are kept together by electrostatic attraction (negative attracts positive). Furthermore, electrons are sometimes shared by neighboring atoms or transferred to them (by processes of quantum physics); this link between atoms is referred to as a chemical bond and is responsible for the formation of all chemical compounds.\nThe electric force does not hold nuclei together, because all protons carry a positive charge and repel each other. If two protons were touching, their repulsion force would be almost 40 newtons. Because each of the neutrons carries total charge zero, a proton could electrically attract a neutron if the proton could induce the neutron to become electrically polarized. However, having the neutron between two protons (so their mutual repulsion decreases to 10 N) would attract the neutron only for an electric quadrupole arrangement. Higher multipoles, needed to satisfy more protons, cause weaker attraction, and quickly become implausible.\nAfter the proton and neutron magnetic moments were measured and verified, it was apparent that their magnetic forces might be 20 or 30 newtons, attractive if properly oriented. A pair of protons would do about 10⁻¹³ joules of work to each other as they approach – that is, they would need to release energy of 0.5 MeV in order to stick together. On the other hand, once a pair of nucleons magnetically stick, their external fields are greatly reduced, so it is difficult for many nucleons to accumulate much magnetic energy.\nTherefore, another force, called the nuclear force (or residual strong force) holds the nucleons of nuclei together. This force is a residuum of the strong interaction, which binds quarks into nucleons at an even smaller level of distance.\nThe fact that nuclei do not clump together (fuse) under normal conditions suggests that the nuclear force must be weaker than the electric repulsion at larger distances, but stronger at close range. Therefore, it has short-range characteristics. An analogy to the nuclear force is the force between two small magnets: magnets are very difficult to separate when stuck together, but once pulled a short distance apart, the force between them drops almost to zero.\nUnlike gravity or electrical forces, the nuclear force is effective only at very short distances. At greater distances, the electrostatic force dominates: the protons repel each other because they are positively charged, and like charges repel. For that reason, the protons forming the nuclei of ordinary hydrogen—for instance, in a balloon filled with hydrogen—do not combine to form helium (a process that also would require some protons to combine with electrons and become neutrons). They cannot get close enough for the nuclear force, which attracts them to each other, to become important. Only under conditions of extreme pressure and temperature (for example, within the core of a star), can such a process take place.", "In artificial fusion, the primary fuel is not constrained to be protons and higher temperatures can be used, so reactions with larger cross-sections are chosen. 
Another concern is the production of neutrons, which activate the reactor structure radiologically, but also have the advantages of allowing volumetric extraction of the fusion energy and tritium breeding. Reactions that release no neutrons are referred to as aneutronic.\nTo be a useful energy source, a fusion reaction must satisfy several criteria. It must:\n;Be exothermic: This limits the reactants to the low Z (number of protons) side of the curve of binding energy. It also makes helium-4 the most common product because of its extraordinarily tight binding, although helium-3 and tritium also show up.\n;Involve low atomic number (Z) nuclei: This is because the electrostatic repulsion that must be overcome before the nuclei are close enough to fuse (the Coulomb barrier) is directly related to the number of protons each nucleus contains – its atomic number.\n;Have two reactants: At anything less than stellar densities, three-body collisions are too improbable. In inertial confinement, both stellar densities and temperatures are exceeded to compensate for the shortcomings of the third parameter of the Lawson criterion, ICF's very short confinement time.\n;Have two or more products: This allows simultaneous conservation of energy and momentum without relying on the electromagnetic force.\n;Conserve both protons and neutrons: The cross sections for the weak interaction are too small.\nFew reactions meet these criteria. The following are those with the largest cross sections:\nFor reactions with two products, the energy is divided between them in inverse proportion to their masses, as shown. In most reactions with three products, the distribution of energy varies. For reactions that can result in more than one set of products, the branching ratios are given.\nSome reaction candidates can be eliminated at once. The D–⁶Li reaction has no advantage compared to p–¹¹B because it is roughly as difficult to burn but produces substantially more neutrons through D–D side reactions. There is also a p–⁷Li reaction, but the cross section is far too low, except possibly when T > 1 MeV, but at such high temperatures an endothermic, direct neutron-producing reaction also becomes very significant. Finally there is also a p–⁹Be reaction, which is not only difficult to burn, but ⁹Be can be easily induced to split into two alpha particles and a neutron.\nIn addition to the fusion reactions, the following reactions with neutrons are important in order to \"breed\" tritium in \"dry\" fusion bombs and some proposed fusion reactors:\n:⁶Li + n → T + ⁴He + 4.784 MeV\n:⁷Li + n → T + ⁴He + n − 2.467 MeV\nThe latter of the two equations was unknown when the U.S. conducted the Castle Bravo fusion bomb test in 1954. Being just the second fusion bomb ever tested (and the first to use lithium), the designers of the Castle Bravo \"Shrimp\" had understood the usefulness of ⁶Li in tritium production, but had failed to recognize that ⁷Li fission would greatly increase the yield of the bomb. While ⁷Li has a small neutron cross-section for low neutron energies, it has a higher cross section above 5 MeV. The 15 Mt yield was 150% greater than the predicted 6 Mt and caused unexpected exposure to fallout.\nTo evaluate the usefulness of these reactions, in addition to the reactants, the products, and the energy released, one needs to know something about the nuclear cross section. Any given fusion device has a maximum plasma pressure it can sustain, and an economical device would always operate near this maximum. Given this pressure, the largest fusion output is obtained when the temperature is chosen so that ⟨σv⟩/T² is a maximum.
This is also the temperature at which the value of the triple product required for ignition is a minimum, since that required value is inversely proportional to ⟨σv⟩/T² (see Lawson criterion). (A plasma is \"ignited\" if the fusion reactions produce enough power to maintain the temperature without external heating.) This optimum temperature and the value of ⟨σv⟩/T² at that temperature are given for a few of these reactions in the following table.\nNote that many of the reactions form chains. For instance, a reactor fueled with T and ³He creates some D, which is then possible to use in the D–³He reaction if the energies are \"right\". An elegant idea is to combine the reactions (8) and (9). The ³He from reaction (8) can react with ⁶Li in reaction (9) before completely thermalizing. This produces an energetic proton, which in turn undergoes reaction (8) before thermalizing. Detailed analysis shows that this idea would not work well, but it is a good example of a case where the usual assumption of a Maxwellian plasma is not appropriate.", "Any of the reactions above can in principle be the basis of fusion power production. In addition to the temperature and cross section discussed above, we must consider the total energy of the fusion products E_fus, the energy of the charged fusion products E_ch, and the atomic number Z of the non-hydrogenic reactant.\nSpecification of the D–D reaction entails some difficulties, though. To begin with, one must average over the two branches (2i) and (2ii). More difficult is to decide how to treat the T and ³He products. T burns so well in a deuterium plasma that it is almost impossible to extract from the plasma. The D–³He reaction is optimized at a much higher temperature, so the burnup of ³He at the optimum D–D temperature may be low. Therefore, it seems reasonable to assume the T but not the ³He gets burned up and adds its energy to the net reaction, which means the total reaction would be the sum of (2i), (2ii), and (1):\n:5 D → ⁴He + 2 n + ³He + p, E_fus = 4.03 + 17.6 + 3.27 = 24.9 MeV, E_ch = 4.03 + 3.5 + 0.82 = 8.35 MeV.\nFor calculating the power of a reactor (in which the reaction rate is determined by the D–D step), we count the D–D fusion energy per D–D reaction as E_fus = (4.03 MeV + 17.6 MeV) × 50% + (3.27 MeV) × 50% = 12.5 MeV and the energy in charged particles as E_ch = (4.03 MeV + 3.5 MeV) × 50% + (0.82 MeV) × 50% = 4.2 MeV. (Note: if the tritium ion reacts with a deuteron while it still has a large kinetic energy, then the kinetic energy of the helium-4 produced may be quite different from 3.5 MeV, so this calculation of energy in charged particles is only an approximation of the average.) The amount of energy per deuteron consumed is 2/5 of this, or 5.0 MeV (a specific energy of about 225 million MJ per kilogram of deuterium).\nAnother unique aspect of the D–D reaction is that there is only one reactant, which must be taken into account when calculating the reaction rate.\nWith this choice, we tabulate parameters for four of the most important reactions.\nThe last column is the neutronicity of the reaction, the fraction of the fusion energy released as neutrons. This is an important indicator of the magnitude of the problems associated with neutrons like radiation damage, biological shielding, remote handling, and safety. For the first two reactions it is calculated as (E_fus − E_ch)/E_fus. For the last two reactions, where this calculation would give zero, the values quoted are rough estimates based on side reactions that produce neutrons in a plasma in thermal equilibrium.\nOf course, the reactants should also be mixed in the optimal proportions.
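The branch-averaged energy bookkeeping above can be reproduced with a few lines of arithmetic. The sketch below assumes the 50/50 branching and the branch energies quoted in the text; the printed specific energy comes out at the same order as the figure given above.
 # Branch-averaged D-D energy accounting, using the branch energies quoted in the text.
 MEV_TO_J = 1.602e-13
 DEUTERON_MASS_KG = 3.344e-27

 e_fus = 0.5 * (4.03 + 17.6) + 0.5 * 3.27   # MeV of fusion energy per D-D reaction
 e_ch = 0.5 * (4.03 + 3.5) + 0.5 * 0.82     # MeV carried by charged particles
 per_deuteron = (2.0 / 5.0) * e_fus          # MeV per deuteron consumed

 specific_energy_j_per_kg = per_deuteron * MEV_TO_J / DEUTERON_MASS_KG
 print(f"E_fus = {e_fus:.1f} MeV, E_ch = {e_ch:.1f} MeV")                          # ~12.5 and ~4.2 MeV
 print(f"Specific energy: {specific_energy_j_per_kg / 1e12:.0f} million MJ/kg")    # same order as the text's figure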
This is the case when each reactant ion plus its associated electrons accounts for half the pressure. Assuming that the total pressure is fixed, this means that the particle density of the non-hydrogenic ion is smaller than that of the hydrogenic ion by a factor 2/(Z + 1). Therefore, the rate for these reactions is reduced by the same factor, on top of any differences in the values of ⟨σv⟩/T². On the other hand, because the D–D reaction has only one reactant, its rate is twice as high as when the fuel is divided between two different hydrogenic species, thus creating a more efficient reaction.\nThus there is a \"penalty\" of 2/(Z + 1) for non-hydrogenic fuels arising from the fact that they require more electrons, which take up pressure without participating in the fusion reaction. (It is usually a good assumption that the electron temperature will be nearly equal to the ion temperature. Some authors, however, discuss the possibility that the electrons could be maintained substantially colder than the ions. In such a case, known as a \"hot ion mode\", the \"penalty\" would not apply.) There is at the same time a \"bonus\" of a factor 2 for D–D because each ion can react with any of the other ions, not just a fraction of them.\nWe can now compare these reactions in the following table.\nThe maximum value of ⟨σv⟩/T² is taken from a previous table. The \"penalty/bonus\" factor is that related to a non-hydrogenic reactant or a single-species reaction. The values in the column \"inverse reactivity\" are found by dividing the value for the D–T reaction by the product of the second and third columns. It indicates the factor by which the other reactions occur more slowly than the D–T reaction under comparable conditions. The column \"Lawson criterion\" weights these results with E_ch and gives an indication of how much more difficult it is to achieve ignition with these reactions, relative to the difficulty for the D–T reaction. The next-to-last column is labeled \"power density\" and weights the practical reactivity by E_fus. The final column indicates how much lower the fusion power density of the other reactions is compared to the D–T reaction and can be considered a measure of the economic potential.
The following table shows estimates of the optimum temperature and the power ratio at that temperature for several reactions:\nThe actual ratios of fusion to Bremsstrahlung power will likely be significantly lower for several reasons. For one, the calculation assumes that the energy of the fusion products is transmitted completely to the fuel ions, which then lose energy to the electrons by collisions, which in turn lose energy by Bremsstrahlung. However, because the fusion products move much faster than the fuel ions, they will give up a significant fraction of their energy directly to the electrons. Secondly, the ions in the plasma are assumed to be purely fuel ions. In practice, there will be a significant proportion of impurity ions, which will then lower the ratio. In particular, the fusion products themselves must remain in the plasma until they have given up their energy, and will remain for some time after that in any proposed confinement scheme. Finally, all channels of energy loss other than Bremsstrahlung have been neglected. The last two factors are related. On theoretical and experimental grounds, particle and energy confinement seem to be closely related. In a confinement scheme that does a good job of retaining energy, fusion products will build up. If the fusion products are efficiently ejected, then energy confinement will be poor, too.\nThe temperatures maximizing the fusion power compared to the Bremsstrahlung are in every case higher than the temperature that maximizes the power density and minimizes the required value of the fusion triple product. This will not change the optimum operating point for D–T very much because the Bremsstrahlung fraction is low, but it will push the other fuels into regimes where the power density relative to D–T is even lower and the required confinement even more difficult to achieve. For D–D and D–³He, Bremsstrahlung losses will be a serious, possibly prohibitive problem. For ³He–³He, p–⁶Li and p–¹¹B the Bremsstrahlung losses appear to make a fusion reactor using these fuels with a quasineutral, isotropic plasma impossible. Some ways out of this dilemma have been considered but rejected. This limitation does not apply to non-neutral and anisotropic plasmas; however, these have their own challenges to contend with.", "In a classical picture, nuclei can be understood as hard spheres that repel each other through the Coulomb force but fuse once the two spheres come close enough for contact. Estimating the radius of an atomic nucleus as about one femtometer, the energy needed for fusion of two hydrogen nuclei is:\n:E_thresh = (1/(4πε₀)) e²/(2r), on the order of 1 MeV.\nThis would imply that for the core of the sun, which has a Boltzmann distribution with a temperature of around 1.4 keV, the probability that hydrogen would reach the threshold is vanishingly small; that is, fusion would never occur. However, fusion in the sun does occur due to quantum mechanics.", "At the temperatures and densities in stellar cores, the rates of fusion reactions are notoriously slow. For example, at solar core temperature (T ≈ 15 MK) and density (160 g/cm³), the energy release rate is only 276 μW/cm³—about a quarter of the volumetric rate at which a resting human body generates heat. Thus, reproduction of stellar core conditions in a lab for nuclear fusion power production is completely impractical. Because nuclear reaction rates depend on density as well as temperature and most fusion schemes operate at relatively low densities, those methods are strongly dependent on higher temperatures.
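The comparison with a resting human body can be sanity-checked with round numbers. The sketch below assumes a resting metabolic output of about 100 W spread over roughly 70 litres of body volume; both figures are assumptions used only for illustration.
 # Compare the quoted solar-core power density with human metabolic heat output.
 SOLAR_CORE_UW_PER_CM3 = 276.0   # microwatts per cm^3, value quoted in the text

 body_power_w = 100.0            # assumed resting metabolic rate, W
 body_volume_cm3 = 70_000.0      # assumed body volume, about 70 litres
 body_uw_per_cm3 = body_power_w / body_volume_cm3 * 1e6

 ratio = SOLAR_CORE_UW_PER_CM3 / body_uw_per_cm3
 print(f"Human body: about {body_uw_per_cm3:.0f} microW/cm^3")
 print(f"Solar core relative to body: {ratio:.2f}")   # about 0.2, i.e. roughly the quarter quoted in the text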
The fusion rate as a function of temperature (exp(−E/kT)) leads to the need to achieve temperatures in terrestrial reactors 10–100 times higher than in stellar interiors: T ≈ (0.1–1.0)×10⁹ K.", "The Naval Research Lab's plasma physics formulary gives the total cross section in barns as a function of the energy (in keV) of the incident particle towards a target ion at rest fit by the formula:\n:σ(ε) = (A5 + A2/[(A4 − A3 ε)² + 1]) / [ε (exp(A1/√ε) − 1)]\nwith the following coefficient values:\nBosch-Hale also report R-matrix calculated cross sections fitting observation data with Padé rational approximating coefficients. With energy in units of keV and cross sections in units of millibarn, the factor has the form:\n:S(ε) = (A1 + ε(A2 + ε(A3 + ε(A4 + ε A5)))) / (1 + ε(B1 + ε(B2 + ε(B3 + ε B4)))), with the coefficient values:\nwhere the cross section itself is σ(ε) = S(ε)/[ε exp(B_G/√ε)], with B_G the Gamow constant of the reacting pair.", "One force capable of confining the fuel well enough to satisfy the Lawson criterion is gravity. The mass needed, however, is so great that gravitational confinement is only found in stars—the least massive stars capable of sustained fusion are red dwarfs, while brown dwarfs are able to fuse deuterium and lithium if they are of sufficient mass. In stars heavy enough, after the supply of hydrogen is exhausted in their cores, their cores (or a shell around the core) start fusing helium to carbon. In the most massive stars (at least 8–11 solar masses), the process is continued until some of their energy is produced by fusing lighter elements to iron. As iron has one of the highest binding energies, reactions producing heavier elements are generally endothermic. Therefore, significant amounts of heavier elements are not formed during stable periods of massive star evolution, but are formed in supernova explosions. Some lighter stars also form these elements in their outer layers over long periods of time, by absorbing neutrons that are emitted from fusion processes in the interior of the star.\nAll of the elements heavier than iron have some potential energy to release, in theory. At the extremely heavy end of element production, these heavier elements can produce energy in the process of being split again back toward the size of iron, in the process of nuclear fission. Nuclear fission thus releases energy that has been stored, sometimes billions of years before, during stellar nucleosynthesis.", "There are also electrostatic confinement fusion devices. These devices confine ions using electrostatic fields. The best known is the fusor. This device has a cathode inside an anode wire cage. Positive ions fly towards the negative inner cage, and are heated by the electric field in the process. If they miss the inner cage they can collide and fuse. Ions typically hit the cathode, however, creating prohibitively high conduction losses. Also, fusion rates in fusors are very low due to competing physical effects, such as energy loss in the form of light radiation. Designs have been proposed to avoid the problems associated with the cage, by generating the field using a non-neutral cloud. These include a plasma oscillating device, a Penning trap and the polywell. The technology is relatively immature, however, and many scientific and engineering questions remain.\nThe best-known inertial electrostatic confinement approach is the fusor. Starting in 1999, a number of amateurs have been able to do amateur fusion using these homemade devices.
Other IEC devices include: the Polywell, MIX POPS and Marble concepts.", "The probability that fusion occurs is greatly increased compared to the classical picture, thanks to the smearing of the effective radius as the de Broglie wavelength as well as quantum tunneling through the potential barrier. To determine the rate of fusion reactions, the value of most interest is the cross section, which describes the probability that particles will fuse by giving a characteristic area of interaction. An estimation of the fusion cross-sectional area is often broken into three pieces:\n:σ ≈ σ_geometry × T × R,\nwhere σ_geometry is the geometric cross section, T is the barrier transparency and R is the reaction characteristics of the reaction.\nσ_geometry is of the order of the square of the de Broglie wavelength, σ_geometry ≈ λ² = (ħ/(m_r v))² ∝ 1/ε, where m_r is the reduced mass of the system and ε is the center of mass energy of the system.\nT can be approximated by the Gamow transparency, which has the form T ≈ exp(−√(ε_G/ε)), where ε_G is the Gamow factor and comes from estimating the quantum tunneling probability through the potential barrier.\nR contains all the nuclear physics of the specific reaction and takes very different values depending on the nature of the interaction. However, for most reactions, the variation of R is small compared to the variation from the Gamow factor and so R is approximated by a function called the astrophysical S-factor, S(ε), which is weakly varying in energy. Putting these dependencies together, one approximation for the fusion cross section as a function of energy takes the form:\n:σ(ε) ≈ (S(ε)/ε) exp(−√(ε_G/ε)).\nMore detailed forms of the cross-section can be derived through nuclear physics-based models and R-matrix theory.", "A third confinement principle is to apply a rapid pulse of energy to a large part of the surface of a pellet of fusion fuel, causing it to simultaneously \"implode\" and heat to very high pressure and temperature. If the fuel is dense enough and hot enough, the fusion reaction rate will be high enough to burn a significant fraction of the fuel before it has dissipated. To achieve these extreme conditions, the initially cold fuel must be explosively compressed. Inertial confinement is used in the hydrogen bomb, where the driver is x-rays created by a fission bomb. Inertial confinement is also attempted in \"controlled\" nuclear fusion, where the driver is a laser, ion, or electron beam, or a Z-pinch. Another method is to use conventional high explosive material to compress a fuel to fusion conditions. The UTIAS explosive-driven-implosion facility was used to produce stable, centred and focused hemispherical implosions to generate neutrons from D-D reactions. The simplest and most direct method proved to be in a predetonated stoichiometric mixture of deuterium-oxygen. The other successful method was using a miniature Voitenko compressor, where a plane diaphragm was driven by the implosion wave into a secondary small spherical cavity that contained pure deuterium gas at one atmosphere.", "In fusion systems that are in thermal equilibrium, the particles are in a Maxwell–Boltzmann distribution, meaning the particles have a range of energies centered around the plasma temperature. The sun, magnetically confined plasmas and inertial confinement fusion systems are well modeled to be in thermal equilibrium. In these cases, the value of interest is the fusion cross-section averaged across the Maxwell–Boltzmann distribution.
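Such a Maxwell-averaged reactivity can be estimated numerically by integrating a cross section over the distribution. The sketch below uses a cross section of the Gamow form given above; the S-factor, Gamow energy and reduced mass used here are placeholder values for illustration only, not fitted data for any particular reaction.
 # Maxwell-averaged reactivity <sigma*v> for an assumed Gamow-form cross section.
 import numpy as np

 S_FACTOR_KEV_BARN = 50.0     # placeholder astrophysical S-factor, keV*barn (illustrative only)
 GAMOW_ENERGY_KEV = 1000.0    # placeholder Gamow energy, keV (illustrative only)
 REDUCED_MASS_KEV = 1.0e6     # placeholder reduced-mass rest energy m_r c^2, keV (illustrative only)
 C_CM_PER_S = 3.0e10          # speed of light, cm/s

 def sigma_barn(e_kev):
     # Cross section of the Gamow form: sigma(E) ~ (S(E)/E) * exp(-sqrt(E_G/E)).
     return (S_FACTOR_KEV_BARN / e_kev) * np.exp(-np.sqrt(GAMOW_ENERGY_KEV / e_kev))

 def reactivity_cm3_per_s(t_kev, n_points=200000):
     # Average sigma*v over a Maxwell-Boltzmann distribution of centre-of-mass energies.
     e, de = np.linspace(0.01, 50.0 * t_kev, n_points, retstep=True)
     maxwell = 2.0 * np.sqrt(e / np.pi) * t_kev ** -1.5 * np.exp(-e / t_kev)
     v = C_CM_PER_S * np.sqrt(2.0 * e / REDUCED_MASS_KEV)   # relative speed, cm/s
     sigma_cm2 = sigma_barn(e) * 1.0e-24                     # barn -> cm^2
     return float(np.sum(sigma_cm2 * v * maxwell) * de)

 print(f"<sigma v> at 10 keV: {reactivity_cm3_per_s(10.0):.2e} cm^3/s")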
The Naval Research Lab's plasma physics formulary tabulates Maxwell averaged fusion cross sections reactivities in .\nFor energies the data can be represented by:\nwith in units of keV.", "American chemist William Draper Harkins was the first to propose the concept of nuclear fusion in 1915. Then in 1921, Arthur Eddington suggested hydrogen–helium fusion could be the primary source of stellar energy. Quantum tunneling was discovered by Friedrich Hund in 1927, and shortly afterwards Robert Atkinson and Fritz Houtermans used the measured masses of light elements to demonstrate that large amounts of energy could be released by fusing small nuclei. Building on the early experiments in artificial nuclear transmutation by Patrick Blackett, laboratory fusion of hydrogen isotopes was accomplished by Mark Oliphant in 1932. In the remainder of that decade, the theory of the main cycle of nuclear fusion in stars was worked out by Hans Bethe. Research into fusion for military purposes began in the early 1940s as part of the Manhattan Project. Self-sustaining nuclear fusion was first carried out on 1 November 1952, in the Ivy Mike hydrogen (thermonuclear) bomb test.\nWhile fusion was achieved in the operation of the hydrogen bomb (H-bomb), the reaction must be controlled and sustained in order for it to be a useful energy source. Research into developing controlled fusion inside fusion reactors has been ongoing since the 1930s, but the technology is still in its developmental phase.\nThe US National Ignition Facility, which uses laser-driven inertial confinement fusion, was designed with a goal of break-even fusion; the first large-scale laser target experiments were performed in June 2009 and ignition experiments began in early 2011. On 13 December 2022, the United States Department of Energy announced that on 5 December 2022, they had successfully accomplished break-even fusion, \"delivering 2.05 megajoules (MJ) of energy to the target, resulting in 3.15 MJ of fusion energy output.\"\nPrior to this breakthrough, controlled fusion reactions had been unable to produce break-even (self-sustaining) controlled fusion. The two most advanced approaches for it are magnetic confinement (toroid designs) and inertial confinement (laser designs). Workable designs for a toroidal reactor that theoretically will deliver ten times more fusion energy than the amount needed to heat plasma to the required temperatures are in development (see ITER). The ITER facility is expected to finish its construction phase in 2025. It will start commissioning the reactor that same year and initiate plasma experiments in 2025, but is not expected to begin full deuterium–tritium fusion until 2035.\nPrivate companies pursuing the commercialization of nuclear fusion received $2.6 billion in private funding in 2021 alone, going to many notable startups including but not limited to Commonwealth Fusion Systems, Helion Energy Inc., General Fusion, TAE Technologies Inc. and Zap Energy Inc.", "The key problem in achieving thermonuclear fusion is how to confine the hot plasma. Due to the high temperature, the plasma cannot be in direct contact with any solid material, so it has to be located in a vacuum. Also, high temperatures imply high pressures. The plasma tends to expand immediately and some force is necessary to act against it. 
This force can take one of three forms: gravitation in stars, magnetic forces in magnetic confinement fusion reactors, or inertial as the fusion reaction may occur before the plasma starts to expand, so the plasma's inertia is keeping the material together.", "Muon-catalyzed fusion is a fusion process that occurs at ordinary temperatures. It was studied in detail by Steven Jones in the early 1980s. Net energy production from this reaction has been unsuccessful because of the high energy required to create muons, their short 2.2 µs half-life, and the high chance that a muon will bind to the new alpha particle and thus stop catalyzing fusion.", "Accelerator-based light-ion fusion is a technique using particle accelerators to achieve particle kinetic energies sufficient to induce light-ion fusion reactions.\nAccelerating light ions is relatively easy, and can be done in an efficient manner—requiring only a vacuum tube, a pair of electrodes, and a high-voltage transformer; fusion can be observed with as little as 10 kV between the electrodes. The system can be arranged to accelerate ions into a static fuel-infused target, known as beam–target fusion, or by accelerating two streams of ions towards each other, beam–beam fusion. The key problem with accelerator-based fusion (and with cold targets in general) is that fusion cross sections are many orders of magnitude lower than Coulomb interaction cross-sections. Therefore, the vast majority of ions expend their energy emitting bremsstrahlung radiation and the ionization of atoms of the target. Devices referred to as sealed-tube neutron generators are particularly relevant to this discussion. These small devices are miniature particle accelerators filled with deuterium and tritium gas in an arrangement that allows ions of those nuclei to be accelerated against hydride targets, also containing deuterium and tritium, where fusion takes place, releasing a flux of neutrons. Hundreds of neutron generators are produced annually for use in the petroleum industry where they are used in measurement equipment for locating and mapping oil reserves.\nA number of attempts to recirculate the ions that \"miss\" collisions have been made over the years. One of the better-known attempts in the 1970s was Migma, which used a unique particle storage ring to capture ions into circular orbits and return them to the reaction area. Theoretical calculations made during funding reviews pointed out that the system would have significant difficulty scaling up to contain enough fusion fuel to be relevant as a power source. In the 1990s, a new arrangement using a field-reverse configuration (FRC) as the storage system was proposed by Norman Rostoker and continues to be studied by TAE Technologies . A closely related approach is to merge two FRC's rotating in opposite directions, which is being actively studied by Helion Energy. Because these approaches all have ion energies well beyond the Coulomb barrier, they often suggest the use of alternative fuel cycles like p-B that are too difficult to attempt using conventional approaches.", "Thermonuclear fusion is the process of atomic nuclei combining or \"fusing\" using high temperatures to drive them close enough together for this to become possible. Such temperatures cause the matter to become a plasma and, if confined, fusion reactions may occur due to collisions with extreme thermal kinetic energies of the particles. 
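Thermal energies of this kind are commonly translated into temperatures through Boltzmann's constant. The short sketch below shows the conversion and reproduces the figure, quoted just below, that an energy scale of 0.1 MeV corresponds to a temperature above a billion kelvin.
 # Convert a characteristic particle energy to an equivalent temperature (E = k_B * T).
 K_BOLTZMANN_EV_PER_K = 8.617e-5   # eV per kelvin

 def energy_ev_to_kelvin(energy_ev):
     return energy_ev / K_BOLTZMANN_EV_PER_K

 print(f"0.1 MeV corresponds to about {energy_ev_to_kelvin(1.0e5):.2e} K")   # ~1.2e9 K
 print(f"15 keV corresponds to about {energy_ev_to_kelvin(1.5e4):.2e} K")    # ~1.7e8 K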
There are two forms of thermonuclear fusion: uncontrolled, in which the resulting energy is released in an uncontrolled manner, as it is in thermonuclear weapons (\"hydrogen bombs\") and in most stars; and controlled, where the fusion reactions take place in an environment allowing some or all of the energy released to be harnessed for constructive purposes.\nTemperature is a measure of the average kinetic energy of particles, so by heating the material it will gain energy. After reaching sufficient temperature, given by the Lawson criterion, the energy of accidental collisions within the plasma is high enough to overcome the Coulomb barrier and the particles may fuse together.\nIn a deuterium–tritium fusion reaction, for example, the energy necessary to overcome the Coulomb barrier is 0.1 MeV. Converting between energy and temperature shows that the 0.1 MeV barrier would be overcome at a temperature in excess of 1.2 billion kelvin.\nThere are two effects that are needed to lower the actual temperature. One is the fact that temperature is the average kinetic energy, implying that some nuclei at this temperature would actually have much higher energy than 0.1 MeV, while others would be much lower. It is the nuclei in the high-energy tail of the velocity distribution that account for most of the fusion reactions. The other effect is quantum tunnelling. The nuclei do not actually have to have enough energy to overcome the Coulomb barrier completely. If they have nearly enough energy, they can tunnel through the remaining barrier. For these reasons fuel at lower temperatures will still undergo fusion events, at a lower rate.\nThermonuclear fusion is one of the methods being researched in the attempts to produce fusion power. If thermonuclear fusion becomes favorable to use, it would significantly reduce the world's carbon footprint.", "A substantial energy barrier of electrostatic forces must be overcome before fusion can occur. At large distances, two naked nuclei repel one another because of the repulsive electrostatic force between their positively charged protons. If two nuclei can be brought close enough together, however, the electrostatic repulsion can be overcome by the quantum effect in which nuclei can tunnel through coulomb forces.\nWhen a nucleon such as a proton or neutron is added to a nucleus, the nuclear force attracts it to all the other nucleons of the nucleus (if the atom is small enough), but primarily to its immediate neighbors due to the short range of the force. The nucleons in the interior of a nucleus have more neighboring nucleons than those on the surface. Since smaller nuclei have a larger surface-area-to-volume ratio, the binding energy per nucleon due to the nuclear force generally increases with the size of the nucleus but approaches a limiting value corresponding to that of a nucleus with a diameter of about four nucleons. It is important to keep in mind that nucleons are quantum objects. So, for example, since two neutrons in a nucleus are identical to each other, the goal of distinguishing one from the other, such as which one is in the interior and which is on the surface, is in fact meaningless, and the inclusion of quantum mechanics is therefore necessary for proper calculations.\nThe electrostatic force, on the other hand, is an inverse-square force, so a proton added to a nucleus will feel an electrostatic repulsion from all the other protons in the nucleus. 
The electrostatic energy per nucleon due to the electrostatic force thus increases without limit as the atomic number of nuclei grows.\nThe net result of the opposing electrostatic and strong nuclear forces is that the binding energy per nucleon generally increases with increasing size, up to the elements iron and nickel, and then decreases for heavier nuclei. Eventually, the binding energy becomes negative and very heavy nuclei (all with more than 208 nucleons, corresponding to a diameter of about 6 nucleons) are not stable. The four most tightly bound nuclei, in decreasing order of binding energy per nucleon, are ⁶²Ni, ⁵⁸Fe, ⁵⁶Fe, and ⁶⁰Ni. Even though the nickel isotope, ⁶²Ni, is more stable, the iron isotope ⁵⁶Fe is an order of magnitude more common. This is due to the fact that there is no easy way for stars to create ⁶²Ni through the alpha process.\nAn exception to this general trend is the helium-4 nucleus, whose binding energy is higher than that of lithium, the next heavier element. This is because protons and neutrons are fermions, which according to the Pauli exclusion principle cannot exist in the same nucleus in exactly the same state. Each proton or neutron's energy state in a nucleus can accommodate both a spin up particle and a spin down particle. Helium-4 has an anomalously large binding energy because its nucleus consists of two protons and two neutrons (it is a doubly magic nucleus), so all four of its nucleons can be in the ground state. Any additional nucleons would have to go into higher energy states. Indeed, the helium-4 nucleus is so tightly bound that it is commonly treated as a single quantum mechanical particle in nuclear physics, namely, the alpha particle.\nThe situation is similar if two nuclei are brought together. As they approach each other, all the protons in one nucleus repel all the protons in the other. Not until the two nuclei actually come close enough for long enough can the strong attractive nuclear force take over and overcome the repulsive electrostatic force. This can also be described as the nuclei overcoming the so-called Coulomb barrier. The kinetic energy to achieve this can be lower than the barrier itself because of quantum tunneling.\nThe Coulomb barrier is smallest for isotopes of hydrogen, as their nuclei contain only a single positive charge. A diproton is not stable, so neutrons must also be involved, ideally in such a way that a helium nucleus, with its extremely tight binding, is one of the products.\nUsing deuterium–tritium fuel, the resulting energy barrier is about 0.1 MeV. In comparison, the energy needed to remove an electron from hydrogen is 13.6 eV. The (intermediate) result of the fusion is an unstable ⁵He nucleus, which immediately ejects a neutron with 14.1 MeV. The recoil energy of the remaining ⁴He nucleus is 3.5 MeV, so the total energy liberated is 17.6 MeV. This is many times more than what was needed to overcome the energy barrier.\nThe reaction cross section (σ) is a measure of the probability of a fusion reaction as a function of the relative velocity of the two reactant nuclei. If the reactants have a distribution of velocities, e.g. a thermal distribution, then it is useful to perform an average over the distributions of the product of cross-section and velocity. This average is called the reactivity, denoted ⟨σv⟩.
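Once the reactivity is known, it fixes the volumetric reaction rate discussed in the next paragraph. The sketch below evaluates that rate and the corresponding fusion power density for assumed, purely illustrative values of the ion densities and of the reactivity.
 # Volumetric fusion rate f = n1 * n2 * <sigma v> and the corresponding power density.
 N_ION_1 = 5.0e19               # assumed ion density, m^-3 (illustrative)
 N_ION_2 = 5.0e19               # assumed ion density, m^-3 (illustrative)
 REACTIVITY_M3_PER_S = 1.0e-22  # assumed <sigma v> (illustrative placeholder)
 ENERGY_PER_REACTION_J = 17.6e6 * 1.602e-19   # 17.6 MeV per D-T reaction

 rate = N_ION_1 * N_ION_2 * REACTIVITY_M3_PER_S    # reactions per m^3 per second
 power_density_w_per_m3 = rate * ENERGY_PER_REACTION_J

 print(f"Rate: {rate:.2e} reactions/m^3/s")
 print(f"Fusion power density: {power_density_w_per_m3 / 1e6:.2f} MW/m^3")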
The reaction rate (fusions per volume per time) is ⟨σv⟩ times the product of the reactant number densities:\n:f = n₁ n₂ ⟨σv⟩.\nIf a species of nuclei is reacting with a nucleus like itself, such as in the D–D reaction, then the product n₁n₂ must be replaced by n²/2.\n⟨σv⟩ increases from virtually zero at room temperature up to meaningful magnitudes at temperatures of 10–100 keV. At these temperatures, well above typical ionization energies (13.6 eV in the hydrogen case), the fusion reactants exist in a plasma state.\nThe significance of ⟨σv⟩ as a function of temperature in a device with a particular energy confinement time is found by considering the Lawson criterion. This is an extremely challenging barrier to overcome on Earth, which explains why fusion research has taken many years to reach the current advanced technical state.", "An important fusion process is the stellar nucleosynthesis that powers stars, including the Sun. In the 20th century, it was recognized that the energy released from nuclear fusion reactions accounts for the longevity of stellar heat and light. The fusion of nuclei in a star, starting from its initial hydrogen and helium abundance, provides that energy and synthesizes new nuclei. Different reaction chains are involved, depending on the mass of the star (and therefore the pressure and temperature in its core).\nAround 1920, Arthur Eddington anticipated the discovery and mechanism of nuclear fusion processes in stars, in his paper The Internal Constitution of the Stars. At that time, the source of stellar energy was unknown; Eddington correctly speculated that the source was fusion of hydrogen into helium, liberating enormous energy according to Einstein's equation E = mc². This was a particularly remarkable development since at that time fusion and thermonuclear energy had not yet been discovered, nor even that stars are largely composed of hydrogen (see metallicity). Eddington's paper reasoned that:\n# The leading theory of stellar energy, the contraction hypothesis, should cause the rotation of a star to visibly speed up due to conservation of angular momentum. But observations of Cepheid variable stars showed this was not happening.\n# The only other known plausible source of energy was conversion of matter to energy; Einstein had shown some years earlier that a small amount of matter was equivalent to a large amount of energy. \n# Francis Aston had also recently shown that the mass of a helium atom was about 0.8% less than the mass of the four hydrogen atoms which would, combined, form a helium atom (according to the then-prevailing theory of atomic structure which held atomic weight to be the distinguishing property between elements; work by Henry Moseley and Antonius van den Broek would later show that nuclear charge was the distinguishing property and that a helium nucleus, therefore, consisted of two hydrogen nuclei plus additional mass). This suggested that if such a combination could happen, it would release considerable energy as a byproduct.\n# If a star contained just 5% of fusible hydrogen, it would suffice to explain how stars got their energy.
(It is now known that most ordinary stars contain far more than 5% hydrogen.)\n# Further elements might also be fused, and other scientists had speculated that stars were the \"crucible\" in which light elements combined to create heavy elements, but without more accurate measurements of their atomic masses nothing more could be said at the time.\nAll of these speculations were proven correct in the following decades.\nThe primary source of solar energy, and that of similar size stars, is the fusion of hydrogen to form helium (the proton–proton chain reaction), which occurs at a solar-core temperature of 14 million kelvin. The net result is the fusion of four protons into one alpha particle, with the release of two positrons and two neutrinos (which changes two of the protons into neutrons), and energy. In heavier stars, the CNO cycle and other processes are more important. As a star uses up a substantial fraction of its hydrogen, it begins to synthesize heavier elements. The heaviest elements are synthesized by fusion that occurs when a more massive star undergoes a violent supernova at the end of its life, a process known as supernova nucleosynthesis.", "The release of energy with the fusion of light elements is due to the interplay of two opposing forces: the nuclear force, a manifestation of the strong interaction, which holds protons and neutrons tightly together in the atomic nucleus; and the Coulomb force, which causes positively charged protons in the nucleus to repel each other. Lighter nuclei (nuclei smaller than iron and nickel) are sufficiently small and proton-poor to allow the nuclear force to overcome the Coulomb force. This is because the nucleus is sufficiently small that all nucleons feel the short-range attractive force at least as strongly as they feel the infinite-range Coulomb repulsion. Building up nuclei from lighter nuclei by fusion releases the extra energy from the net attraction of particles. For larger nuclei, however, no energy is released, because the nuclear force is short-range and cannot act across larger nuclei.\nFusion powers stars and produces virtually all elements in a process called nucleosynthesis. The Sun is a main-sequence star, and, as such, generates its energy by nuclear fusion of hydrogen nuclei into helium. In its core, the Sun fuses 620 million metric tons of hydrogen and makes 616 million metric tons of helium each second. The fusion of lighter elements in stars releases energy and the mass that always accompanies it. For example, in the fusion of two hydrogen nuclei to form helium, 0.645% of the mass is carried away in the form of kinetic energy of an alpha particle or other forms of energy, such as electromagnetic radiation.\nIt takes considerable energy to force nuclei to fuse, even those of the lightest element, hydrogen. When accelerated to high enough speeds, nuclei can overcome this electrostatic repulsion and be brought close enough such that the attractive nuclear force is greater than the repulsive Coulomb force. The strong force grows rapidly once the nuclei are close enough, and the fusing nucleons can essentially \"fall\" into each other and the result is fusion and net energy produced. 
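The net energy release of a particular reaction follows directly from the mass difference between reactants and products. The sketch below does this for the deuterium–tritium reaction, using standard atomic-mass values quoted here only for illustration, and recovers roughly 17.6 MeV.
 # Q-value of D + T -> He-4 + n from the mass difference (E = delta_m * c^2).
 U_TO_MEV = 931.494
 MASS_U = {"D": 2.014102, "T": 3.016049, "He4": 4.002602, "n": 1.008665}

 delta_m = (MASS_U["D"] + MASS_U["T"]) - (MASS_U["He4"] + MASS_U["n"])
 q_value_mev = delta_m * U_TO_MEV
 print(f"D-T energy release: {q_value_mev:.1f} MeV")   # about 17.6 MeV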
The fusion of lighter nuclei, which creates a heavier nucleus and often a free neutron or proton, generally releases more energy than it takes to force the nuclei together; this is an exothermic process that can produce self-sustaining reactions.\nEnergy released in most nuclear reactions is much larger than in chemical reactions, because the binding energy that holds a nucleus together is greater than the energy that holds electrons to a nucleus. For example, the ionization energy gained by adding an electron to a hydrogen nucleus is —less than one-millionth of the released in the deuterium–tritium (D–T) reaction shown in the adjacent diagram. Fusion reactions have an energy density many times greater than nuclear fission; the reactions produce far greater energy per unit of mass even though individual fission reactions are generally much more energetic than individual fusion ones, which are themselves millions of times more energetic than chemical reactions. Only direct conversion of mass into energy, such as that caused by the annihilatory collision of matter and antimatter, is more energetic per unit of mass than nuclear fusion. (The complete conversion of one gram of matter would release of energy.)", "Some other confinement principles have been investigated.\n* Antimatter-initialized fusion uses small amounts of antimatter to trigger a tiny fusion explosion. This has been studied primarily in the context of making nuclear pulse propulsion, and pure fusion bombs feasible. This is not near becoming a practical power source, due to the cost of manufacturing antimatter alone.\n* Pyroelectric fusion was reported in April 2005 by a team at UCLA. The scientists used a pyroelectric crystal heated from −34 to 7 °C (−29 to 45 °F), combined with a tungsten needle to produce an electric field of about 25 gigavolts per meter to ionize and accelerate deuterium nuclei into an erbium deuteride target. At the estimated energy levels, the D–D fusion reaction may occur, producing helium-3 and a 2.45 MeV neutron. Although it makes a useful neutron generator, the apparatus is not intended for power generation since it requires far more energy than it produces. D–T fusion reactions have been observed with a tritiated erbium target.\n* Nuclear fusion–fission hybrid (hybrid nuclear power) is a proposed means of generating power by use of a combination of nuclear fusion and fission processes. The concept dates to the 1950s, and was briefly advocated by Hans Bethe during the 1970s, but largely remained unexplored until a revival of interest in 2009, due to the delays in the realization of pure fusion.\n* Project PACER, carried out at Los Alamos National Laboratory (LANL) in the mid-1970s, explored the possibility of a fusion power system that would involve exploding small hydrogen bombs (fusion bombs) inside an underground cavity. As an energy source, the system is the only fusion power system that could be demonstrated to work using existing technology. However it would also require a large, continuous supply of nuclear bombs, making the economics of such a system rather questionable.\n* Bubble fusion also called sonofusion was a proposed mechanism for achieving fusion via sonic cavitation which rose to prominence in the early 2000s. 
Subsequent attempts at replication failed and the principal investigator, Rusi Taleyarkhan, was judged guilty of research misconduct in 2008.", "Nuclear fusion is a reaction in which two or more atomic nuclei, usually deuterium and tritium (hydrogen isotopes), combine to form one or more different atomic nuclei and subatomic particles (neutrons or protons). The difference in mass between the reactants and products is manifested as either the release or absorption of energy. This difference in mass arises due to the difference in nuclear binding energy between the atomic nuclei before and after the reaction. Nuclear fusion is the process that powers active or main-sequence stars and other high-magnitude stars, where large amounts of energy are released.\nA nuclear fusion process that produces atomic nuclei lighter than iron-56 or nickel-62 will generally release energy. These elements have a relatively small mass and a relatively large binding energy per nucleon. Fusion of nuclei lighter than these releases energy (an exothermic process), while the fusion of heavier nuclei results in energy retained by the product nucleons, and the resulting reaction is endothermic. The opposite is true for the reverse process, called nuclear fission. Nuclear fusion uses lighter elements, such as hydrogen and helium, which are in general more fusible; while the heavier elements, such as uranium, thorium and plutonium, are more fissionable. The extreme astrophysical event of a supernova can produce enough energy to fuse nuclei into elements heavier than iron.", "Electrically charged particles (such as fuel ions) will follow magnetic field lines (see Guiding centre). The fusion fuel can therefore be trapped using a strong magnetic field. A variety of magnetic configurations exist, including the toroidal geometries of tokamaks and stellarators and open-ended mirror confinement systems.", "Practical engineering designs must first take into account safety as the primary goal. All designs should incorporate passive cooling in combination with refractory materials to prevent melting and reconfiguration of fissionables into geometries capable of un-intentional criticality. Blanket layers of Lithium bearing compounds will generally be included as part of the design to generate Tritium to allow the system to be self-supporting for one of the key fuel element components. Tritium, because of its relatively short half-life and extremely high radioactivity, is best generated on-site to obviate the necessity of transportation from a remote location. D-T fuel can be manufactured on-site using Deuterium derived from heavy water production and Tritium generated in the hybrid reactor itself. Nuclear spallation to generate additional neutrons can be used to enhance the fission output, with the caveat that this is a tradeoff between the number of neutrons (typically 20-30 neutrons per spallation event) against a reduction of the individual energy of each neutron. This is a consideration if the reactor is to use natural Thorium as a fuel. While high energy (0.17c) neutrons produced from fusion events are capable of directly causing fission in both Thorium and U, the lower energy neutrons produced by spallation generally cannot. This is a tradeoff that affects the mixture of fuels against the degree of spallation used in the design.", "There are three main components to the hybrid fusion fuel cycle: deuterium, tritium, and fissionable elements. 
Deuterium can be derived by the separation of hydrogen isotopes in seawater (see heavy water production). Tritium may be generated in the hybrid process itself by absorption of neutrons in lithium bearing compounds. This would entail an additional lithium-bearing blanket and a means of collection. Small amounts of tritium are also produced by neutron activation in nuclear fission reactors, particularly when heavy water is used as a neutron moderator or coolant. The third component is externally derived fissionable materials from demilitarized supplies of fissionables, or commercial nuclear fuel and waste streams. Fusion driven fission also offers the possibility of using thorium as a fuel, which would greatly increase the potential amount of fissionables available. The extremely energetic nature of the fast neutrons emitted during the fusion events (up to 0.17 the speed of light) can allow normally non-fissioning U to undergo fission directly (without conversion first to Pu), enabling refined natural Uranium to be used with very low enrichment, while still maintaining a deeply subcritical regime.", "The surrounding blanket can be a fissile material (enriched uranium or plutonium) or a fertile material (capable of conversion to a fissionable material by neutron bombardment) such as thorium, depleted uranium or spent nuclear fuel. Such subcritical reactors (which also include particle accelerator-driven neutron spallation systems) offer the only currently-known means of active disposal (versus storage) of spent nuclear fuel without reprocessing. Fission by-products produced by the operation of commercial light water nuclear reactors (LWRs) are long-lived and highly radioactive, but they can be consumed using the excess neutrons in the fusion reaction along with the fissionable components in the blanket, essentially destroying them by nuclear transmutation and producing a waste product which is far safer and less of a risk for nuclear proliferation. The waste would contain significantly reduced concentrations of long-lived, weapons-usable actinides per gigawatt-year of electric energy produced compared to the waste from a LWR. In addition, there would be about 20 times less waste per unit of electricity produced. This offers the potential to efficiently use the very large stockpiles of enriched fissile materials, depleted uranium, and spent nuclear fuel.", "Hybrid nuclear fusion–fission (hybrid nuclear power) is a proposed means of generating power by use of a combination of nuclear fusion and fission processes.\nThe basic idea is to use high-energy fast neutrons from a fusion reactor to trigger fission in non-fissile fuels like U-238 or Th-232. Each neutron can trigger several fission events, multiplying the energy released by each fusion reaction hundreds of times. As the fission fuel is not fissile, there is no self-sustaining chain reaction from fission. This would not only make fusion designs more economical in power terms, but also be able to burn fuels that were not suitable for use in conventional fission plants, even their nuclear waste.\nIn general terms, the hybrid is similar in concept to the fast breeder reactor, which uses a compact high-energy fission core in place of the hybrid's fusion core. 
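The energy multiplication behind the hybrid concept can be illustrated with rough numbers. The sketch below assumes about 200 MeV per fission event and an assumed number of fission events triggered per fusion neutron; both are illustrative choices rather than design values.
 # Rough energy multiplication in a fusion-fission hybrid blanket.
 FUSION_ENERGY_MEV = 17.6            # per D-T fusion event
 FISSION_ENERGY_MEV = 200.0          # assumed typical energy per fission event
 FISSIONS_PER_FUSION_NEUTRON = 10    # assumed blanket multiplication (illustrative)

 blanket_energy_mev = FISSIONS_PER_FUSION_NEUTRON * FISSION_ENERGY_MEV
 total_mev = FUSION_ENERGY_MEV + blanket_energy_mev
 print(f"Total energy per fusion event: {total_mev:.0f} MeV")
 print(f"Multiplication over fusion alone: about {total_mev / FUSION_ENERGY_MEV:.0f}x")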
Another similar concept is the accelerator-driven subcritical reactor, which uses a particle accelerator to provide the neutrons instead of nuclear reactions.", "In contrast to current commercial fission reactors, hybrid reactors potentially demonstrate what is considered inherently safe behavior because they remain deeply subcritical under all conditions and decay heat removal is possible via passive mechanisms. The fission is driven by neutrons provided by fusion ignition events, and is consequently not self-sustaining. If the fusion process is deliberately shut off or the process is disrupted by a mechanical failure, the fission damps out and stops nearly instantly. This is in contrast to the forced damping in a conventional reactor by means of control rods which absorb neutrons to reduce the neutron flux below the critical, self-sustaining, level. The inherent danger of a conventional fission reactor is any situation leading to a positive feedback, runaway, chain reaction such as occurred during the Chernobyl disaster. In a hybrid configuration the fission and fusion reactions are decoupled, i.e. while the fusion neutron output drives the fission, the fission output has no effect whatsoever on the fusion reaction, eliminating any chance of a positive feedback loop.", "Conventional fission power systems rely on a chain reaction of nuclear fission events that release a few neutrons that cause further fission events. By careful arrangement and the use of various absorber materials the system can be set in a balance of released and absorbed neutrons, known as criticality.\nNatural uranium is a mix of several isotopes, mainly a trace amount of U and over 99% U. When they undergo fission, both of these isotopes release fast neutrons with an energy distribution peaking around 1 to 2 MeV. This energy is too low to cause fission in U, which means it cannot sustain a chain reaction. U will undergo fission when struck by neutrons of this energy, so it is possible for U to sustain a chain reaction. There are too few U atoms in natural uranium to sustain a chain reaction, the atoms are spread out too far and the chance a neutron will hit one is too small. Chain reactions are accomplished by concentrating, or enriching, the fuel, increasing the amount of U to produce enriched uranium, while the leftover, now mostly U, is a waste product known as depleted uranium. U will sustain a chain reaction if enriched to about 20% of the fuel mass.\nU will undergo fission more easily if the neutrons are of lower energy, the so-called thermal neutrons. Neutrons can be slowed to thermal energies through collisions with a neutron moderator material, the easiest to use are the hydrogen atoms found in water. By placing the fission fuel in water, the probability that the neutrons will cause fission in another U is greatly increased, which means the level of enrichment needed to reach criticality is greatly reduced. This leads to the concept of reactor-grade enriched uranium, with the amount of U increased from just less than 1% in natural ore to between 3 and 5%, depending on the reactor design. This is in contrast to weapons-grade enrichment, which increases to the U to at least 20%, and more commonly, over 90%.\nIn order to maintain criticality, the fuel has to retain that extra concentration of U. A typical fission reactor burns off enough of the U to cause the reaction to stop over a period on the order of a few months. 
A combination of burnup of the U along with the creation of neutron absorbers, or poisons, as part of the fission process eventually results in the fuel mass not being able to maintain criticality. This burned up fuel has to be removed and replaced with fresh fuel. The result is nuclear waste that is highly radioactive and filled with long-lived radionuclides that present a safety concern.\nThe waste contains most of the U it started with, only 1% or so of the energy in the fuel is extracted by the time it reaches the point where it is no longer fissile. One solution to this problem is to reprocess the fuel, which uses chemical processes to separate the U (and other non-poison elements) from the waste, and then mixes the extracted U5 in fresh fuel loads. This reduces the amount of new fuel that needs to be mined and also concentrates the unwanted portions of the waste into a smaller load. Reprocessing is expensive, however, and it has generally been more economical to simply buy fresh fuel from the mine.\nLike U, Pu can maintain a chain reaction, so it is a useful reactor fuel. However, Pu is not found in commercially useful amounts in nature. Another possibility is to breed Pu from the U through neutron capture, or various other means. This process only occurs with higher-energy neutrons than would be found in a moderated reactor, so a conventional reactor only produces small amounts of Pu when the neutron is captured within the fuel mass before it is moderated.\nMore typically, special reactors are used that are designed specifically for the breeding of Pu. The simplest way to achieve this is to further enrich the original U fuel well beyond what is needed for use in a moderated reactor, to the point where the U maintains criticality even with the fast neutrons. The extra fast neutrons escaping the fuel load can then be used to breed fuel in a U assembly surrounding the reactor core, most commonly taken from the stocks of depleted uranium. Pu can also be used for the core, which means once the system is up and running, it can be refuelled using the Pu it creates, with enough left over to feed into other reactors as well.\nExtracting the Pu from the U feedstock can be achieved with chemical processing, in the same fashion as normal reprocessing. The difference is that the mass will contain far fewer other elements, particularly some of the highly radioactive fission products found in normal nuclear waste. Unfortunately it is a tendency that breeder reactors in the \"free world\" (like the SNR-300, the Integral fast reactor) that have been built were demolished before operation, as a \"symbol\" (as Bill Clinton has stated). The Prototype Fast Breeder Reactor passed tests in 2017 and apparently is about to face the same fate, leaving some military reactors and the Russian BN-800 reactor operating, mostly consuming spent nuclear fuel.", "Fusion–fission designs essentially replace the lithium blanket with a blanket of fission fuel, either natural uranium ore or even nuclear waste. The fusion neutrons have more than enough energy to cause fission in the U, as well as many of the other elements in the fuel, including some of the transuranic waste elements. The reaction can continue even when all of the U is burned off; the rate is controlled not by the neutrons from the fission events, but the neutrons being supplied by the fusion reactor.\nFission occurs naturally because each event gives off more than one neutron capable of producing additional fission events. 
Fusion, at least in D-T fuel, gives off only a single neutron, and that neutron is not capable of producing more fusion events. When that neutron strikes fissile material in the blanket, one of two reactions may occur. In many cases, the kinetic energy of the neutron will cause one or two neutrons to be struck out of the nucleus without causing fission. These neutrons still have enough energy to cause other fission events. In other cases the neutron will be captured and cause fission, which will release two or three neutrons. This means that every fusion neutron in the fusion–fission design can result in anywhere between two and four neutrons in the fission fuel.\nThis is a key idea in the hybrid concept, known as fission multiplication. For every fusion event, several fission events may occur, each of which gives off much more energy than the original fusion, about 11 times. This greatly increases the total power output of the reactor. This has been suggested as a way to produce practical fusion reactors in spite of the fact that no fusion reactor has yet reached break-even, by multiplying the power output using cheap fuel or waste. However, a number of studies have repeatedly demonstrated that this only becomes practical when the overall reactor is very large, 2 to 3 GWt, which makes it expensive to build.\nThese processes also have the side-effect of breeding Pu-239 or U-233, which can be removed and used as fuel in conventional fission reactors. This leads to an alternate design where the primary purpose of the fusion–fission reactor is to reprocess waste into new fuel. Although far less economical than chemical reprocessing, this process also burns off some of the nastier elements instead of simply physically separating them out. This also has advantages for non-proliferation, as enrichment and reprocessing technologies are also associated with nuclear weapons production. However, the cost of the nuclear fuel produced is very high, and is unlikely to be able to compete with conventional sources.", "A key issue for the fusion–fission concept is the number and lifetime of the neutrons in the various processes, the so-called neutron economy.\nIn a pure fusion design, the neutrons are used for breeding tritium in a lithium blanket. Natural lithium consists of about 92% Li-7 and the rest is mostly Li-6. Breeding from Li-7 requires neutron energies even higher than those released by fission, around 5 MeV, well within the range of energies provided by fusion. This reaction produces tritium and helium-4, and another slow neutron. Li-6 can react with high- or low-energy neutrons, including those released by the Li-7 reaction. This means that a single fusion reaction can produce several tritium atoms, which is a requirement if the reactor is going to make up for natural decay and losses in the fusion processes.\nWhen the lithium blanket is replaced, or supplanted, by fission fuel in the hybrid design, neutrons that do react with the fissile material are no longer available for tritium breeding. The new neutrons released from the fission reactions can be used for this purpose, but only in Li-6. One could process the lithium to increase the amount of Li-6 in the blanket, making up for these losses, but the downside to this process is that the Li-6 reaction only produces one tritium atom. 
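The competing demands on each fusion neutron, driving fission for energy on one hand and breeding replacement tritium on the other, can be illustrated with some rough bookkeeping. The only figures taken from the text are the energy ratio (each fission releases about 11 times the roughly 17.6 MeV of a D-T fusion event) and the range of 2 to 4 blanket neutrons per fusion neutron; how those neutrons are split between further fission and lithium capture is assumed here purely for illustration.

```python
# Rough per-fusion-event bookkeeping for a fusion-fission blanket.
# Assumed splits between fission and Li-6 capture; not data from any design.

E_FUSION_MEV = 17.6                  # energy of one D-T fusion event
E_FISSION_MEV = 11 * E_FUSION_MEV    # "about 11 times" per fission (~190-200 MeV)

def per_fusion_event(blanket_neutrons, to_fission, to_lithium):
    """Energy multiplication and tritium bred per fusion event.

    blanket_neutrons : neutrons available in the blanket per fusion neutron (2-4)
    to_fission       : assumed fraction of them that cause a further fission
    to_lithium       : assumed fraction absorbed in Li-6 (one triton each)
    """
    fissions = blanket_neutrons * to_fission
    tritons = blanket_neutrons * to_lithium
    energy_gain = 1 + fissions * E_FISSION_MEV / E_FUSION_MEV
    return energy_gain, tritons

for n in (2, 3, 4):
    gain, tritons = per_fusion_event(n, to_fission=0.6, to_lithium=0.35)
    print(f"{n} blanket neutrons: energy ~{gain:.1f}x fusion alone, tritium bred ~{tritons:.2f}")
```

With only two blanket neutrons the assumed split breeds less than one triton per fusion event, i.e. the plant cannot replace its own fuel, while pushing more neutrons toward lithium cuts directly into the fission energy gain. That tension is exactly the balance discussed in the following paragraphs.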
As noted above, only the high-energy reaction between the fusion neutron and Li-7 can create more than one tritium atom, and this is essential for keeping the reactor running.\nTo address this issue, at least some of the fission neutrons must also be used for tritium breeding in Li-6. Every one that does is no longer available for fission, reducing the reactor output. This requires a very careful balance if one wants the reactor to be able to produce enough tritium to keep itself running, while also producing enough fission events to keep the fission side energy positive. If these cannot be accomplished simultaneously, there is no reason to build a hybrid. Even if this balance can be maintained, it might only occur at a level that is economically infeasible.", "Through the early development of the hybrid concept the question of overall economics appeared difficult to handle. A series of studies starting in the late 1970s provided a much clearer picture of the hybrid in a complete fuel cycle, and allowed the economics to be better understood. These studies appeared to indicate there was no reason to build a hybrid.\nOne of the most detailed of these studies was published in 1980 by Los Alamos National Laboratory (LANL). Their study noted that the hybrid would produce most of its energy indirectly, both through the fission events in its own reactor, and much more by providing Pu-239 to fuel conventional fission reactors. In this overall picture, the hybrid is essentially identical to the breeder reactor, which uses fast neutrons from plutonium fission to breed more fuel in a fission blanket in largely the same fashion as the hybrid. Both require chemical processing to remove the bred Pu-239, both presented the same proliferation and safety risks as a result, and both produced about the same amount of fuel. Since that fuel is the primary source of energy in the overall cycle, the two systems were almost identical in the end.\nWhat was not identical, however, was the technical maturity of the two designs. The hybrid would require considerable additional research and development before it would be known whether it could even work, and even if that were demonstrated, the result would be a system essentially identical to the breeders which were already being built at that time. The report concluded:", "The concept dates to the 1950s, and was strongly advocated by Hans Bethe during the 1970s. At that time the first powerful fusion experiments were being built, but it would still be many years before they could be economically competitive. Hybrids were proposed as a way of greatly accelerating their market introduction, producing energy even before the fusion systems reached break-even. However, detailed studies of the economics of the systems suggested they could not compete with existing fission reactors.\nThe idea was abandoned and lay dormant until the 2000s, when the continued delays in reaching break-even led to a brief revival around 2009. These studies generally concentrated on the nuclear waste disposal aspects of the design, as opposed to the production of energy. The concept has seen cyclical interest since then, based largely on the success or failure of more conventional solutions like the Yucca Mountain nuclear waste repository.\nAnother major design effort for energy production was started at Lawrence Livermore National Laboratory (LLNL) under their LIFE program. Industry input led to the abandonment of the hybrid approach for LIFE, which was then re-designed as a pure-fusion system. 
LIFE was cancelled when the underlying technology, from the National Ignition Facility, failed to reach its design performance goals.\nApollo Fusion, a company founded by Google executive Mike Cassidy in 2017, was also reported to be focused on using the subcritical nuclear fusion-fission hybrid method. Their website is now focused on their Hall-effect thrusters, and mentions fusion only in passing.\nOn 9 September 2022, Professor Peng Xianjue of the Chinese Academy of Engineering Physics announced that the Chinese government had approved the construction of the world's largest pulsed-power plant, the Z-FFR (Z-pinch fission-fusion reactor), in Chengdu, Sichuan province. Neutrons produced in a Z-pinch facility (endowed with cylindrical symmetry and fuelled with deuterium and tritium) will strike a coaxial blanket including both uranium and lithium isotopes. Uranium fission will boost the facility's overall heat output by 10 to 20 times. Interaction of lithium and neutrons will provide tritium for further fueling. An innovative, quasi-spherical geometry near the core of the Z-FFR leads to high performance of the Z-pinch discharge. According to Prof. Peng, this will considerably speed up the use of fusion energy and prepare it for commercial power production by 2035.", "The fusion process alone currently does not achieve sufficient gain (power output over power input) to be viable as a power source. By using the excess neutrons from the fusion reaction to in turn cause a high-yield fission reaction (close to 100%) in the surrounding subcritical fissionable blanket, the net yield from the hybrid fusion–fission process can provide a targeted gain of 100 to 300 times the input energy (an increase by a factor of three or four over fusion alone). Even allowing for high inefficiencies on the input side (i.e. low laser efficiency in ICF and Bremsstrahlung losses in tokamak designs), this can still yield sufficient heat output for economical electric power generation. This can be seen as a shortcut to viable fusion power until more efficient pure-fusion technologies can be developed, or as an end in itself to generate power, and also consume existing stockpiles of nuclear fissionables and waste products.\nIn the LIFE project at Lawrence Livermore National Laboratory (LLNL), using technology developed at the National Ignition Facility, the goal is to use fuel pellets of deuterium and tritium surrounded by a fissionable blanket to produce energy sufficiently greater than the input (laser) energy for electrical power generation. The principle involved is to induce inertial confinement fusion (ICF) in the fuel pellet, which acts as a highly concentrated point source of neutrons which in turn converts and fissions the outer fissionable blanket. In parallel with the ICF approach, the University of Texas at Austin is developing a system based on the tokamak fusion reactor, optimising for nuclear waste disposal versus power generation. The principles behind using either ICF or tokamak reactors as a neutron source are essentially the same (the primary difference being that ICF is essentially a point source of neutrons while tokamaks are more diffuse toroidal sources).", "Fusion reactors typically burn a mixture of deuterium (D) and tritium (T). When heated to millions of degrees, the kinetic energy in the fuel begins to overcome the natural electrostatic repulsion between nuclei, the so-called Coulomb barrier, and the fuel begins to undergo fusion. 
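How the released energy is shared between the two reaction products follows from ordinary two-body kinematics rather than anything specific to this design: for reactants that are roughly at rest, momentum conservation gives each product a share of the D-T reaction's 17.6 MeV inversely proportional to its mass. A short worked example:

```python
# Worked example (standard D-T kinematics, not taken from the source text):
# D + T -> He-4 + n releases about 17.6 MeV, split in inverse proportion to mass.

Q_MEV = 17.6       # total energy released per D-T fusion
M_NEUTRON = 1.0    # approximate masses in atomic mass units
M_ALPHA = 4.0

E_neutron = Q_MEV * M_ALPHA / (M_ALPHA + M_NEUTRON)    # ~14.1 MeV
E_alpha = Q_MEV * M_NEUTRON / (M_ALPHA + M_NEUTRON)    # ~3.5 MeV

print(f"neutron: {E_neutron:.1f} MeV, alpha particle: {E_alpha:.1f} MeV")
# The ~3.5 MeV alpha is what must stay in the fuel to sustain the burn, while the
# ~14 MeV neutron escapes and is available to the surrounding blanket.
```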
This reaction gives off an alpha particle and a high-energy neutron of 14 MeV. A key requirement for the economic operation of a fusion reactor is that the alphas deposit their energy back into the fuel mix, heating it so that additional fusion reactions take place. This leads to a condition not unlike the chain reaction in the fission case, known as ignition.\nDeuterium can be obtained by the separation of hydrogen isotopes in sea water (see heavy water production). Tritium has a short half-life of just over a decade, so only trace amounts are found in nature. To fuel the reactor, the neutrons from the reaction are used to breed more tritium through a reaction in a blanket of lithium surrounding the reaction chamber. Tritium breeding is key to the success of a D-T fusion cycle, and to date this technique has not been demonstrated. Predictions based on computer modeling suggest that the breeding ratios are quite small and a fusion plant would barely be able to cover its own use. Many years would be needed to breed enough surplus to start another reactor.", "In superparamagnetism (a form of magnetism), the Néel effect appears when a superparamagnetic material inside a conducting coil is subjected to magnetic fields of varying frequencies. The non-linearity of the superparamagnetic material acts as a frequency mixer, so that the voltage measured at the coil terminals contains several frequency components: the initial frequencies and certain of their linear combinations. This frequency shift of the field to be measured allows the detection of a direct-current field with a standard coil.", "In 1949 the French physicist Louis Néel (1904-2000) discovered that ferromagnetic nanoparticles, when finely divided below a certain size, lose their hysteresis; this phenomenon is known as superparamagnetism. The magnetization of these materials as a function of the applied field is highly non-linear.\nThis curve is well described by the Langevin function, but for weak fields it can be written simply as a linear term plus a small non-linear correction, where the coefficient of the linear term is the susceptibility at zero field and the leading non-linear coefficient is known as the Néel coefficient. The Néel coefficient reflects the non-linearity of superparamagnetic materials in low fields.", "Consider a coil of a given number of turns and cross-section, carrying an excitation current and immersed in a magnetic field collinear with the coil axis, with a superparamagnetic material deposited inside the coil.\nThe electromotive force at the terminals of a winding of the coil is given by Faraday's law of induction: it is proportional to the rate of change of the magnetic flux, which is set by the magnetic induction inside the coil.\nIn the absence of magnetic material, the induction is simply proportional to the total field, and differentiating this expression shows that the voltage contains only the frequencies already present in the excitation current and in the external magnetic field.\nIn the presence of the superparamagnetic material, and neglecting the higher terms of the Taylor expansion of the magnetization, the induction acquires an additional non-linear term. Differentiating the linear part again gives voltage components at the frequencies of the excitation current and of the external magnetic field. The non-linear part, however, multiplies frequency components together, so intermodulation products appear at linear combinations of these frequencies: the non-linearity of the superparamagnetic material acts as a frequency mixer.\nWriting the total magnetic field within the coil as a function of position along its axis, integrating the induction along the coil and differentiating with respect to time yields three terms. The first two are the conventional self-inductance and Rogowski-effect terms, which appear at the original frequencies. 
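This frequency mixing can be checked numerically. The sketch below models the magnetization with a single lowest-order non-linear term, M = chi0*H + Ne*H**3, which is a toy stand-in for the Langevin curve; the coefficient values, frequencies and field amplitudes are arbitrary assumptions, not measured parameters. It shows that a component appears at twice the excitation frequency whose amplitude is proportional to the steady external field and to the square of the excitation amplitude.

```python
import numpy as np

# Toy model of superparamagnetic frequency mixing (all parameter values assumed).
chi0, Ne = 1.0, 0.05     # zero-field susceptibility and a "Néel-like" coefficient
f_exc = 50.0             # excitation frequency in Hz
fs, T = 5000.0, 1.0      # sampling rate and record length
t = np.arange(0, T, 1 / fs)

def second_harmonic(H_ext, h_exc):
    """Amplitude of the 2*f_exc component of M(t) for a given DC external field."""
    H = H_ext + h_exc * np.sin(2 * np.pi * f_exc * t)
    M = chi0 * H + Ne * H ** 3                      # non-linear magnetization
    spectrum = np.abs(np.fft.rfft(M)) / len(t) * 2  # single-sided amplitude spectrum
    freqs = np.fft.rfftfreq(len(t), 1 / fs)
    return spectrum[np.argmin(np.abs(freqs - 2 * f_exc))]

for H_ext in (0.0, 0.5, 1.0, 2.0):
    print(f"H_ext = {H_ext:3.1f}  ->  2nd-harmonic amplitude = {second_harmonic(H_ext, 1.0):.4f}")
# The 2f component vanishes when H_ext = 0 and grows linearly with H_ext
# (analytically (3/2)*Ne*H_ext*h_exc**2), which is the basis of Néel-effect sensing.
```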
The third term is due to the Néel effect; it reflects the intermodulation between the excitation current and the external field.\nWhen the excitation current is sinusoidal, the Néel effect is characterized by the appearance of a second harmonic, at twice the excitation frequency, whose amplitude carries the information about the field to be measured.", "An important application of the Néel effect is as a current sensor, measuring the magnetic field radiated by a conductor carrying a current; this is the principle of Néel-effect current sensors. The Néel effect allows the accurate, contactless measurement of currents, including direct and very low-frequency currents, in the manner of a current transformer.\nThe transducer of a Néel-effect current sensor consists of a coil with a core of superparamagnetic nanoparticles. The coil is traversed by an excitation current.\nIn the presence of an external magnetic field to be measured, the transducer transposes (via the Néel effect) the information to be measured, H(f), around a carrier frequency, the second harmonic of the excitation current, where it is easier to process. The electromotive force generated by the coil is proportional to the magnetic field to be measured and to the square of the excitation current.\nTo improve the measurement's performance (such as linearity and sensitivity to temperature and vibration), the sensor includes a second winding carrying a feedback (counter-reaction) current that cancels the second harmonic. The ratio between the feedback current and the primary current is then set by the number of turns of the feedback winding.", "Mass spectrometry identified S-GlcNAc as a post-translational modification found on cysteine residues. In vitro experiments demonstrated that OGT could catalyze the formation of S-GlcNAc and that OGA is incapable of hydrolyzing S-GlcNAc. Though a previous report suggested that OGA is capable of hydrolyzing thioglycosides, this was only demonstrated on the aryl thioglycoside para-nitrophenyl-S-GlcNAc; para-nitrothiophenol is a more activated leaving group than a cysteine residue. Recent studies have supported the use of S-GlcNAc as an enzymatically stable structural model of O-GlcNAc that can be incorporated through solid-phase peptide synthesis or site-directed mutagenesis.", "O-GlcNAc (short for O-linked GlcNAc or O-linked β-N-acetylglucosamine) is a reversible enzymatic post-translational modification that is found on serine and threonine residues of nucleocytoplasmic proteins. The modification is characterized by a β-glycosidic bond between the hydroxyl group of serine or threonine side chains and N-acetylglucosamine (GlcNAc). O-GlcNAc differs from other forms of protein glycosylation: (i) O-GlcNAc is not elongated or modified to form more complex glycan structures, (ii) O-GlcNAc is almost exclusively found on nuclear and cytoplasmic proteins rather than membrane proteins and secretory proteins, and (iii) O-GlcNAc is a highly dynamic modification that turns over more rapidly than the proteins which it modifies. O-GlcNAc is conserved across metazoans.\nDue to the dynamic nature of O-GlcNAc and its presence on serine and threonine residues, O-GlcNAcylation is similar to protein phosphorylation in some respects. While there are roughly 500 kinases and 150 phosphatases that regulate protein phosphorylation in humans, there are only 2 enzymes that regulate the cycling of O-GlcNAc: O-GlcNAc transferase (OGT) and O-GlcNAcase (OGA) catalyze the addition and removal of O-GlcNAc, respectively. 
OGT utilizes UDP-GlcNAc as the donor sugar for sugar transfer.\nFirst reported in 1984, this post-translational modification has since been identified on over 5,000 proteins. Numerous functional roles for O-GlcNAcylation have been reported including crosstalk with serine/threonine phosphorylation, regulating protein-protein interactions, altering protein structure or enzyme activity, changing protein subcellular localization, and modulating protein stability and degradation. Numerous components of the cell's transcription machinery have been identified as being modified by O-GlcNAc, and many studies have reported links between O-GlcNAc, transcription, and epigenetics. Many other cellular processes are influenced by O-GlcNAc such as apoptosis, the cell cycle, and stress responses. As UDP-GlcNAc is the final product of the hexosamine biosynthetic pathway, which integrates amino acid, carbohydrate, fatty acid, and nucleotide metabolism, it has been suggested that O-GlcNAc acts as a \"nutrient sensor\" and responds to the cell's metabolic status. Dysregulation of O-GlcNAc has been implicated in many pathologies including Alzheimer's disease, cancer, diabetes, and neurodegenerative disorders.", "In 1984, the Hart lab was probing for terminal GlcNAc residues on the surfaces of thymocytes and lymphocytes. Bovine milk β-1,4-galactosyltransferase, which reacts with terminal GlcNAc residues, was used to perform radiolabeling with UDP-[3H]galactose. β-Elimination of serine and threonine residues demonstrated that most of the [3H]galactose was attached to proteins O-glycosidically; chromatography revealed that the major β-elimination product was Galβ1-4GlcNAcitol. Insensitivity to peptide N-glycosidase treatment provided additional evidence for O-linked GlcNAc. Permeabilizing cells with detergent prior to radiolabeling greatly increased the amount of [3H]galactose incorporated into Galβ1-4GlcNAcitol, leading the authors to conclude that most of the O-linked GlcNAc monosaccharide residues were intracellular.", "O-GlcNAc is generally a dynamic modification that can be cycled on and off various proteins. Some residues are thought to be constitutively modified by O-GlcNAc. The O-GlcNAc modification is installed by OGT in a sequential bi-bi mechanism where the donor sugar, UDP-GlcNAc, binds to OGT first, followed by the substrate protein. The O-GlcNAc modification is removed by OGA in a hydrolysis mechanism involving anchimeric assistance (substrate-assisted catalysis) to yield the unmodified protein and GlcNAc. While crystal structures have been reported for both OGT and OGA, the exact mechanisms by which OGT and OGA recognize substrates have not been completely elucidated. Unlike N-linked glycosylation, for which glycosylation occurs in a specific consensus sequence (Asn-X-Ser/Thr, where X is any amino acid except Pro), no definitive consensus sequence has been identified for O-GlcNAc. Consequently, predicting sites of O-GlcNAc modification is challenging, and identifying modification sites generally requires mass spectrometry methods. For OGT, studies have shown that substrate recognition is regulated by a number of factors including aspartate and asparagine ladder motifs in the lumen of the superhelical TPR domain, active site residues, and adaptor proteins. As crystal structures have shown that OGT requires its substrate to be in an extended conformation, it has been proposed that OGT has a preference for flexible substrates. 
In in vitro kinetic experiments measuring OGT and OGA activity on a panel of protein substrates, kinetic parameters for OGT were shown to be variable between various proteins while kinetic parameters for OGA were relatively constant between various proteins. This result suggested that OGT is the \"senior partner\" in regulating O-GlcNAc and OGA primarily recognizes substrates via the presence of O-GlcNAc rather than the identity of the modified protein.", "Several methods exist to detect the presence of O-GlcNAc and characterize the specific residues modified.", "Wheat germ agglutinin, a plant lectin, is able to recognize terminal GlcNAc residues and is thus often used for detection of O-GlcNAc. This lectin has been applied in lectin affinity chromatography for the enrichment and detection of O-GlcNAc.", "Pan-O-GlcNAc antibodies that recognize the O-GlcNAc modification largely irrespective of the modified protein's identity are commonly used. These include RL2, an IgG antibody raised against O-GlcNAcylated nuclear pore complex proteins, and CTD110.6, an IgM antibody raised against an immunogenic peptide with a single serine O-GlcNAc modification. Other O-GlcNAc-specific antibodies have been reported and demonstrated to have some dependence on the identity of the modified protein.", "Expressed protein ligation has been used to prepare O-GlcNAc-modified proteins in a site-specific manner. Methods exist for solid-phase peptide synthesis incorporation of GlcNAc-modified serine, threonine, or cysteine.", "Many metabolic chemical reporters have been developed to identify O-GlcNAc. Metabolic chemical reporters are generally sugar analogues that bear an additional chemical moiety allowing for additional reactivity. For example, peracetylated GlcNAz (Ac4GlcNAz) is a cell-permeable azido sugar that is de-esterified intracellularly by esterases to GlcNAz and converted to UDP-GlcNAz in the hexosamine salvage pathway. UDP-GlcNAz can be utilized as a sugar donor by OGT to yield the O-GlcNAz modification. The presence of the azido sugar can then be visualized via alkyne-containing bioorthogonal chemical probes in an azide-alkyne cycloaddition reaction. These probes can incorporate easily identifiable tags such as the FLAG peptide, biotin, and dye molecules. Mass tags based on polyethylene glycol (PEG) have also been used to measure O-GlcNAc stoichiometry. Conjugation of 5 kDa PEG molecules leads to a mass shift for modified proteins - more heavily O-GlcNAcylated proteins will have multiple PEG molecules and thus migrate more slowly in gel electrophoresis. Other metabolic chemical reporters bearing azides or alkynes (generally at the 2 or 6 positions) have been reported. Instead of GlcNAc analogues, GalNAc analogues may be used as well, since UDP-GalNAc is in equilibrium with UDP-GlcNAc in cells due to the action of UDP-galactose-4-epimerase (GALE). Treatment with Ac4GalNAz was found to result in enhanced labeling of O-GlcNAc relative to Ac4GlcNAz, possibly due to a bottleneck in UDP-GlcNAc pyrophosphorylase processing of GlcNAz-1-P to UDP-GlcNAz. AcGlcN-β-Ala-NBD-α-1-P(Ac-SATE), a metabolic chemical reporter that is processed intracellularly to a fluorophore-labeled UDP-GlcNAc analogue, has been shown to achieve one-step fluorescent labeling of O-GlcNAc in live cells. Metabolic labeling may also be used to identify binding partners of O-GlcNAcylated proteins. The N-acetyl group may be elongated to incorporate a diazirine moiety. 
Treatment of cells with peracetylated, phosphate-protected AcGlcNDAz-1-P(Ac-SATE) leads to modification of proteins with O-GlcNDAz. UV irradiation then induces photocrosslinking between proteins bearing the O-GlcNDAz modification and interacting proteins.\nSome issues have been identified with various metabolic chemical reporters, e.g., their use may inhibit the hexosamine biosynthetic pathway, they may not be recognized by OGA and therefore are not able to capture O-GlcNAc cycling, or they may be incorporated into glycosylation modifications besides O-GlcNAc, as seen in secreted proteins. Metabolic chemical reporters with chemical handles at the N-acetyl position may also label acetylated proteins, as the acetyl group may be hydrolyzed into acetate analogues that can be utilized for protein acetylation. Additionally, per-O-acetylated monosaccharides have been identified to react with cysteines, leading to artificial S-glycosylation. This occurs via an elimination-addition mechanism.", "Biochemical approaches such as Western blotting may provide supporting evidence that a protein is modified by O-GlcNAc; mass spectrometry (MS) is able to provide definitive evidence as to the presence of O-GlcNAc. Glycoproteomic studies applying MS have contributed to the identification of proteins modified by O-GlcNAc.\nAs O-GlcNAc is substoichiometric and ion suppression occurs in the presence of unmodified peptides, an enrichment step is usually performed prior to mass spectrometry analysis. This may be accomplished using lectins, antibodies, or chemical tagging. The O-GlcNAc modification is labile under collision-induced fragmentation methods such as collision-induced dissociation (CID) and higher-energy collisional dissociation (HCD), so these methods in isolation are not readily applicable for O-GlcNAc site mapping. HCD generates fragment ions characteristic of N-acetylhexosamines that can be used to determine O-GlcNAcylation status. In order to facilitate site mapping with HCD, β-elimination followed by Michael addition with dithiothreitol (BEMAD) may be used to convert the labile O-GlcNAc modification into a more stable mass tag. For BEMAD mapping of O-GlcNAc, the sample must be treated with phosphatase, otherwise other serine/threonine post-translational modifications such as phosphorylation may be detected. Electron-transfer dissociation (ETD) is used for site mapping as ETD causes peptide backbone cleavage while leaving post-translational modifications such as O-GlcNAc intact.\nTraditional proteomic studies perform tandem MS on the most abundant species in the full-scan mass spectra, prohibiting full characterization of lower-abundance species. One modern strategy for targeted proteomics uses isotopic labels, e.g., dibromide, to tag O-GlcNAcylated proteins. This method allows for algorithmic detection of low-abundance species, which are then sequenced by tandem MS. Directed tandem MS and targeted glycopeptide assignment allow for identification of O-GlcNAcylated peptide sequences. One example probe consists of a biotin affinity tag, an acid-cleavable silane, an isotopic recoding motif, and an alkyne. 
Unambiguous site mapping is possible for peptides with only one serine/threonine residue.\nThe general procedure for this isotope-targeted glycoproteomics (IsoTaG) method is the following:\n# Metabolically label O-GlcNAc to install O-GlcNAz onto proteins\n# Use click chemistry to link the IsoTaG probe to O-GlcNAz\n# Use streptavidin beads to enrich for tagged proteins\n# Treat beads with trypsin to release non-modified peptides\n# Cleave isotopically recoded glycopeptides from beads using mild acid\n# Obtain a full-scan mass spectrum from isotopically recoded glycopeptides\n# Apply an algorithm to detect the unique isotope signature from the probe\n# Perform tandem MS on the isotopically recoded species to obtain glycopeptide amino acid sequences\n# Search a protein database for the identified sequences\nOther methodologies have been developed for quantitative profiling of O-GlcNAc using differential isotopic labeling. Example probes generally consist of a biotin affinity tag, a cleavable linker (acid- or photo-cleavable), a heavy or light isotopic tag, and an alkyne.", "Various chemical and genetic strategies have been developed to manipulate O-GlcNAc, both on a proteome-wide basis and on specific proteins.", "Small molecule inhibitors have been reported for both OGT and OGA that function in cells or in vivo. OGT inhibitors result in a global decrease of O-GlcNAc while OGA inhibitors result in a global increase of O-GlcNAc; these inhibitors are not able to modulate O-GlcNAc on specific proteins.\nInhibition of the hexosamine biosynthetic pathway is also able to decrease O-GlcNAc levels. For instance, the glutamine analogues azaserine and 6-diazo-5-oxo-L-norleucine (DON) can inhibit GFAT, though these molecules may also non-specifically affect other pathways.", "Peptide therapeutics are attractive for their high specificity and potency, but they often have poor pharmacokinetic profiles due to their degradation by serum proteases. Though O-GlcNAc is generally associated with intracellular proteins, it has been found that engineered peptide therapeutics modified by O-GlcNAc have enhanced serum stability in a mouse model and have similar structure and activity compared to the respective unmodified peptides. This method has been applied to engineer GLP-1 and PTH peptides.", "Site-directed mutagenesis of O-GlcNAc-modified serine or threonine residues to alanine may be used to evaluate the function of O-GlcNAc at specific residues. As alanine's side chain is a methyl group and is thus not able to act as an O-GlcNAc site, this mutation effectively permanently removes O-GlcNAc at a specific residue. While serine/threonine phosphorylation may be modeled by mutagenesis to aspartate or glutamate, which have negatively charged carboxylate side chains, none of the 20 canonical amino acids sufficiently recapitulate the properties of O-GlcNAc. Mutagenesis to tryptophan has been used to mimic the steric bulk of O-GlcNAc, though tryptophan is much more hydrophobic than O-GlcNAc. Mutagenesis may also perturb other post-translational modifications, e.g., if a serine is alternatively phosphorylated or O-GlcNAcylated, alanine mutagenesis permanently eliminates the possibilities of both phosphorylation and O-GlcNAcylation.", "Chemoenzymatic labeling provides an alternative strategy to incorporate handles for click chemistry. 
The Click-iT O-GlcNAc Enzymatic Labeling System, developed by the Hsieh-Wilson group and subsequently commercialized by Invitrogen, utilizes a mutant GalT Y289L enzyme that is able to transfer the azido sugar GalNAz onto O-GlcNAc. The presence of GalNAz (and therefore also O-GlcNAc) can be detected with various alkyne-containing probes with identifiable tags such as biotin, dye molecules, and PEG.", "Fusion constructs of a nanobody and a TPR-truncated OGT allow for proximity-induced, protein-specific O-GlcNAcylation in cells. The nanobody may be directed towards protein tags, e.g., GFP, that are fused to the target protein, or the nanobody may be directed towards endogenous proteins. For example, a nanobody recognizing a C-terminal EPEA sequence can direct OGT enzymatic activity to α-synuclein.", "Apoptosis, a form of controlled cell death, has been suggested to be regulated by O-GlcNAc. In various cancers, elevated O-GlcNAc levels have been reported to suppress apoptosis. Caspase-3, caspase-8, and caspase-9 have been reported to be modified by O-GlcNAc. Caspase-8 is modified near its cleavage/activation sites; O-GlcNAc modification may block caspase-8 cleavage and activation by steric hindrance. Pharmacological lowering of O-GlcNAc with 5S-GlcNAc accelerated caspase activation while pharmacological raising of O-GlcNAc with thiamet-G inhibited caspase activation.", "An engineered protein biosensor has been developed that can detect changes in O-GlcNAc levels using Förster resonance energy transfer (FRET). This sensor consists of four components linked together in the following order: cyan fluorescent protein (CFP), an O-GlcNAc binding domain (based on GafD, a lectin sensitive for terminal β-O-GlcNAc), a CKII peptide that is a known OGT substrate, and yellow fluorescent protein (YFP). Upon O-GlcNAcylation of the CKII peptide, the GafD domain binds the O-GlcNAc moiety, bringing the CFP and YFP domains into close proximity and generating a FRET signal. Generation of this signal is reversible and can be used to monitor O-GlcNAc dynamics in response to various treatments. This sensor may be genetically encoded and used in cells. Addition of a localization sequence allows for targeting of this O-GlcNAc sensor to the nucleus, cytoplasm, or plasma membrane.", "O-GlcNAc has been implicated in influenza A virus (IAV)-induced cytokine storm. Specifically, O-GlcNAcylation of S430 on interferon regulatory factor-5 (IRF5) has been shown to promote its interaction with TNF receptor-associated factor 6 (TRAF6) in cellular and mouse models. TRAF6 mediates K63-linked ubiquitination of IRF5, which is necessary for IRF5 activity and subsequent cytokine production. Analysis of clinical samples showed that blood glucose levels were elevated in IAV-infected patients compared to healthy individuals. In IAV-infected patients, blood glucose levels positively correlated with IL-6 and IL-8 levels. O-GlcNAcylation of IRF5 was also relatively higher in peripheral blood mononuclear cells of IAV-infected patients.", "Many known phosphorylation sites and O-GlcNAcylation sites are near each other or overlapping. As protein O-GlcNAcylation and phosphorylation both occur on serine and threonine residues, these post-translational modifications can regulate each other. For example, in CKIIα, S347 O-GlcNAc has been shown to antagonize T344 phosphorylation. 
Reciprocal inhibition, i.e., phosphorylation inhibiting O-GlcNAcylation and O-GlcNAcylation inhibiting phosphorylation, has been observed on other proteins including murine estrogen receptor β, RNA Pol II, tau, p53, CaMKIV, p65, β-catenin, and α-synuclein. Positive cooperativity has also been observed between these two post-translational modifications, i.e., phosphorylation induces O-GlcNAcylation or O-GlcNAcylation induces phosphorylation. This has been demonstrated on MeCP2 and HDAC1. In other proteins, e.g., cofilin, phosphorylation and O-GlcNAcylation appear to occur independently of each other.\nIn some cases, therapeutic strategies are under investigation to modulate O-GlcNAcylation to have a downstream effect on phosphorylation. For instance, elevating tau O-GlcNAcylation may offer therapeutic benefit by inhibiting pathological tau hyperphosphorylation.\nBesides phosphorylation, O-GlcNAc has been found to influence other post-translational modifications such as lysine acetylation and monoubiquitination.", "Histone proteins, the primary protein component of chromatin, are known to be modified by O-GlcNAc. O-GlcNAc has been identified on all core histones (H2A, H2B, H3, and H4). The presence of O-GlcNAc on histones has been suggested to affect gene transcription as well as other histone marks such as acetylation and monoubiquitination. TET2 has been reported to interact with the TPR domain of OGT and facilitate recruitment of OGT to histones. This interaction is associated with H2B S112 O-GlcNAc, which in turn is associated with H2B K120 monoubiquitination. Phosphorylation of OGT T444 via AMPK has been found to inhibit OGT-chromatin association and downregulate H2B S112 O-GlcNAc.", "The hexosamine biosynthetic pathway's product, UDP-GlcNAc, is utilized by OGT to catalyze the addition of O-GlcNAc. This pathway integrates information about the concentrations of various metabolites including amino acids, carbohydrates, fatty acids, and nucleotides. Consequently, UDP-GlcNAc levels are sensitive to cellular metabolite levels. OGT activity is in part regulated by UDP-GlcNAc concentration, making a link between cellular nutrient status and O-GlcNAc.\nGlucose deprivation causes a decline in UDP-GlcNAc levels and an initial decline in O-GlcNAc, but counterintuitively, O-GlcNAc is later significantly upregulated. This later increase has been shown to be dependent on AMPK and p38 MAPK activation, and this effect is partially due to increases in OGT mRNA and protein levels. It has also been suggested that this effect is dependent on calcium and CaMKII. Activated p38 is able to recruit OGT to specific protein targets, including neurofilament H; O-GlcNAc modification of neurofilament H enhances its solubility. During glucose deprivation, glycogen synthase is modified by O-GlcNAc, which inhibits its activity.", "NRF2, a transcription factor associated with the cellular response to oxidative stress, has been found to be indirectly regulated by O-GlcNAc. KEAP1, an adaptor protein for the cullin 3-dependent E3 ubiquitin ligase complex, mediates the degradation of NRF2; oxidative stress leads to conformational changes in KEAP1 that repress degradation of NRF2. O-GlcNAc modification of KEAP1 at S104 is required for efficient ubiquitination and subsequent degradation of NRF2, linking O-GlcNAc to oxidative stress. Glucose deprivation leads to a reduction in O-GlcNAc and reduces NRF2 degradation. 
Cells expressing a KEAP1 S104A mutant are resistant to erastin-induced ferroptosis, consistent with higher NRF2 levels upon removal of S104 O-GlcNAc.\nElevated O-GlcNAc levels have been associated with diminished synthesis of hepatic glutathione, an important cellular antioxidant. Acetaminophen overdose leads to accumulation of the strongly oxidizing metabolite NAPQI in the liver, which is detoxified by glutathione. In mice, OGT knockout has a protective effect against acetaminophen-induced liver injury, while OGA inhibition with thiamet-G exacerbates acetaminophen-induced liver injury.", "O-GlcNAc has been found to slow protein aggregation, though the generality of this phenomenon is unknown.\nSolid-phase peptide synthesis was used to prepare full-length α-synuclein with an O-GlcNAc modification at T72. Thioflavin T aggregation assays and transmission electron microscopy demonstrated that this modified α-synuclein does not readily form aggregates.\nTreatment of JNPL3 tau transgenic mice with an OGA inhibitor was shown to increase microtubule-associated protein tau O-GlcNAcylation. Immunohistochemistry analysis of the brainstem revealed decreased formation of neurofibrillary tangles. Recombinant O-GlcNAcylated tau was shown to aggregate more slowly than unmodified tau in an in vitro thioflavin S aggregation assay. Similar results were obtained for a recombinantly prepared O-GlcNAcylated TAB1 construct versus its unmodified form.", "Protein kinases are the enzymes responsible for phosphorylation of serine and threonine residues. O-GlcNAc has been identified on over 100 kinases (~20% of the human kinome), and this modification is often associated with alterations in kinase activity or kinase substrate scope.\nThe first report of a kinase being directly regulated by O-GlcNAc was published in 2009. CaMKIV is glycosylated at multiple sites, though S189 was found to be the major site. An S189A mutant was more readily activated by CaMKIV T200 phosphorylation, suggesting that O-GlcNAc at S189 inhibits CaMKIV activity. Homology modeling showed that S189 O-GlcNAc may interfere with ATP binding.\nAMPK and OGT are known to modify each other, i.e., AMPK phosphorylates OGT and OGT O-GlcNAcylates AMPK. AMPK activation by AICA ribonucleotide is associated with nuclear localization of OGT in differentiated C2C12 mouse skeletal muscle myotubes, resulting in increased nuclear O-GlcNAc. This effect was not observed in proliferating cells and undifferentiated myoblastic cells. AMPK phosphorylation of OGT T444 has been found to block OGT association with chromatin and decrease H2B S112 O-GlcNAc. Overexpression of GFAT, the enzyme that controls glucose flux into the hexosamine biosynthetic pathway, in mouse adipose tissue has been found to lead to AMPK activation and downstream ACC inhibition and elevated fatty acid oxidation. Glucosamine treatment in cultured 3T3-L1 adipocytes showed a similar effect. The exact relationship between O-GlcNAc and AMPK has not been completely elucidated, as various studies have reported that OGA inhibition inhibits AMPK activation, OGT inhibition also inhibits AMPK activation, upregulating O-GlcNAc by glucosamine treatment activates AMPK, and OGT knockdown activates AMPK; these results suggest either additional indirect communication between AMPK pathways and O-GlcNAc or cell type-specific effects.\nCKIIα substrate recognition has been shown to be altered upon S347 O-GlcNAcylation.", "Protein phosphatase 1 subunits PP1β and PP1γ have been shown to form functional complexes with OGT. 
A synthetic phosphopeptide could be dephosphorylated and O-GlcNAcylated by an OGT immunoprecipitate. This complex has been referred to as a \"yin-yang complex\" as it replaces a phosphate modification with an O-GlcNAc modification.\nMYPT1 is another protein phosphatase subunit that forms complexes with OGT and is itself O-GlcNAcylated. MYPT1 appears to have a role in directing OGT towards specific substrates.", "Co-translational O-GlcNAc has been identified on Sp1 and Nup62. This modification suppresses co-translational ubiquitination and thus protects nascent polypeptides from proteasomal degradation. Similar protective effects of O-GlcNAc on full-length Sp1 have been observed. It is unknown if this pattern is universal or only applicable to specific proteins.\nProtein phosphorylation is often used as a mark for subsequent degradation. Tumor suppressor protein p53 is targeted for proteasomal degradation via COP9 signalosome-mediated phosphorylation of T155. O-GlcNAcylation of p53 S149 has been associated with decreased T155 phosphorylation and protection of p53 from degradation. β-Catenin O-GlcNAcylation competes with T41 phosphorylation, which signals β-catenin for degradation, stabilizing the protein.\nO-GlcNAcylation of the Rpt2 ATPase subunit of the 26S proteasome has been shown to inhibit proteasome activity. Testing various peptide sequences revealed that this modification slows proteasomal degradation of hydrophobic peptides; degradation of hydrophilic peptides does not appear to be affected. This modification has been shown to suppress other pathways that activate the proteasome, such as Rpt6 phosphorylation by cAMP-dependent protein kinase.\nOGA-S localizes to lipid droplets and has been proposed to locally activate the proteasome to promote remodeling of lipid droplet surface proteins.", "Dysregulation of O-GlcNAc is associated with cancer cell proliferation and tumor growth.\nO-GlcNAcylation of the glycolytic enzyme PFK1 at S529 has been found to inhibit PFK1 enzymatic activity, reducing glycolytic flux and redirecting glucose towards the pentose phosphate pathway. Structural modeling and biochemical experiments suggested that O-GlcNAc at S529 would inhibit PFK1 allosteric activation by fructose 2,6-bisphosphate and oligomerization into active forms. In a mouse model, mice injected with cells expressing a PFK1 S529A mutant showed lower tumor growth than mice injected with cells expressing wild-type PFK1. Additionally, OGT overexpression enhanced tumor growth in the latter system but had no significant effect on the system with mutant PFK1. Hypoxia induces PFK1 S529 O-GlcNAc and increases flux through the pentose phosphate pathway to generate more NADPH, which maintains glutathione levels and detoxifies reactive oxygen species, imparting a growth advantage to cancer cells. PFK1 was found to be glycosylated in human breast and lung tumor tissues. OGT has also been reported to positively regulate HIF-1α. HIF-1α is normally degraded under normoxic conditions by prolyl hydroxylases that utilize α-ketoglutarate as a co-substrate. OGT suppresses α-ketoglutarate levels, protecting HIF-1α from proteasomal degradation by pVHL and promoting aerobic glycolysis. In contrast with the previous study on PFK1, this study found that elevating OGT or O-GlcNAc upregulated PFK1, though the two studies are consistent in finding that O-GlcNAc levels are positively associated with flux through the pentose phosphate pathway. 
This study also found that decreasing O-GlcNAc selectively killed cancer cells via ER stress-induced apoptosis.\nHuman pancreatic ductal adenocarcinoma (PDAC) cell lines have higher O-GlcNAc levels than human pancreatic duct epithelial (HPDE) cells. PDAC cells have some dependency upon O-GlcNAc for survival, as OGT knockdown selectively inhibited PDAC cell proliferation (OGT knockdown did not significantly affect HPDE cell proliferation), and inhibition of OGT with 5S-GlcNAc showed the same result. Hyper-O-GlcNAcylation in PDAC cells appeared to be anti-apoptotic, inhibiting cleavage and activation of caspase-3 and caspase-9. Numerous sites on the p65 subunit of NF-κB were found to be modified by O-GlcNAc in a dynamic manner; O-GlcNAc at p65 T305 and S319 in turn positively regulates other modifications associated with NF-κB activation such as p300-mediated K310 acetylation and IKK-mediated S536 phosphorylation. These results suggested that NF-κB is constitutively activated by O-GlcNAc in pancreatic cancer.\nOGT stabilization of EZH2 in various breast cancer cell lines has been found to inhibit expression of tumor suppressor genes. In hepatocellular carcinoma models, O-GlcNAc is associated with activating phosphorylation of HDAC1, which in turn regulates expression of the cell cycle regulator p21 and the cell motility regulator E-cadherin.\nOGT has been found to stabilize SREBP-1 and activate lipogenesis in breast cancer cell lines. This stabilization was dependent on the proteasome and AMPK. OGT knockdown resulted in decreased nuclear SREBP-1, but proteasomal inhibition with MG132 blocked this effect. OGT knockdown also increased the interaction between SREBP-1 and the E3 ubiquitin ligase FBW7. AMPK is activated by T172 phosphorylation upon OGT knockdown, and AMPK phosphorylates SREBP-1 S372 to inhibit its cleavage and maturation. OGT knockdown had a diminished effect on SREBP-1 levels in AMPK-null cell lines. In a mouse model, OGT knockdown inhibited tumor growth, but SREBP-1 overexpression partly rescued this effect. These results contrast with those of a previous study, which found that OGT knockdown/inhibition inhibited AMPK T172 phosphorylation and increased lipogenesis.\nIn breast and prostate cancer cell lines, high levels of OGT and O-GlcNAc have been associated both in vitro and in vivo with processes associated with disease progression, e.g., angiogenesis, invasion, and metastasis. OGT knockdown or inhibition was found to downregulate the transcription factor FoxM1 and upregulate the cell-cycle inhibitor p27 (which is regulated by FoxM1-dependent expression of the E3 ubiquitin ligase component Skp2), causing G1 cell cycle arrest. This appeared to be dependent on proteasomal degradation of FoxM1, as expression of a FoxM1 mutant lacking a degron rescued the effects of OGT knockdown. FoxM1 was found not to be directly modified by O-GlcNAc, suggesting that hyper-O-GlcNAcylation of FoxM1 regulators impairs FoxM1 degradation. Targeting OGT also lowered levels of FoxM1-regulated proteins associated with cancer invasion and metastasis (MMP-2 and MMP-9) and angiogenesis (VEGF). O-GlcNAc modification of cofilin S108 has also been reported to be important for breast cancer cell invasion by regulating cofilin subcellular localization in invadopodia.", "O-GlcNAcylation of a protein can alter its interactome. As O-GlcNAc is highly hydrophilic, its presence may disrupt hydrophobic protein-protein interactions. 
For example, O-GlcNAc disrupts Sp1 interaction with TAF110, and O-GlcNAc disrupts CREB interaction with TAF130 and CRTC.\nSome studies have also identified instances where protein-protein interactions are induced by O-GlcNAc. Metabolic labeling with the diazirine-containing O-GlcNDAz has been applied to identify protein-protein interactions induced by O-GlcNAc. Using a bait glycopeptide based roughly on a consensus sequence for O-GlcNAc, α-enolase, EBP1, and 14-3-3 were identified as potential O-GlcNAc readers. X-ray crystallography showed that 14-3-3 recognized O-GlcNAc through an amphipathic groove that also binds phosphorylated ligands. Hsp70 has also been proposed to act as a lectin to recognize O-GlcNAc. It has been suggested that O-GlcNAc plays a role in the interaction of α-catenin and β-catenin.", "Various cellular stress stimuli have been associated with changes in O-GlcNAc. Treatment with hydrogen peroxide, cobalt(II) chloride, UVB light, ethanol, sodium chloride, heat shock, and sodium arsenite all results in elevated O-GlcNAc. Knockout of OGT sensitizes cells to thermal stress. Elevated O-GlcNAc has been associated with expression of Hsp40 and Hsp70.", "Numerous studies have identified aberrant phosphorylation of tau as a hallmark of Alzheimer's disease. O-GlcNAcylation of bovine tau was first characterized in 1996. A subsequent report in 2004 demonstrated that human brain tau is also modified by O-GlcNAc. O-GlcNAcylation of tau was demonstrated to regulate tau phosphorylation, with hyperphosphorylation of tau observed in the brains of mice lacking OGT; this hyperphosphorylation has been associated with the formation of neurofibrillary tangles. Analysis of brain samples showed that protein O-GlcNAcylation is compromised in Alzheimer's disease and that paired helical filament tau was not recognized by traditional O-GlcNAc detection methods, suggesting that pathological tau has impaired O-GlcNAcylation relative to tau isolated from control brain samples. Elevating tau O-GlcNAcylation was proposed as a therapeutic strategy for reducing tau phosphorylation.\nTo test this therapeutic hypothesis, a selective and blood-brain barrier-permeable OGA inhibitor, thiamet-G, was developed. Thiamet-G treatment was able to increase tau O-GlcNAcylation and suppress tau phosphorylation in cell culture and in vivo in healthy Sprague-Dawley rats. A subsequent study showed that thiamet-G treatment also increased tau O-GlcNAcylation in a JNPL3 tau transgenic mouse model. In this model, tau phosphorylation was not significantly affected by thiamet-G treatment, though decreased numbers of neurofibrillary tangles and slower motor neuron loss were observed. Additionally, O-GlcNAcylation of tau was noted to slow tau aggregation in vitro.\nOGA inhibition with MK-8719 is being investigated in clinical trials as a potential treatment strategy for Alzheimer's disease and other tauopathies including progressive supranuclear palsy.", "The proteins that regulate epigenetics are often categorized as writers, readers, and erasers, i.e., enzymes that install epigenetic modifications, proteins that recognize these modifications, and enzymes that remove these modifications. To date, O-GlcNAc has been identified on writer and eraser enzymes. O-GlcNAc is found in multiple locations on EZH2, the catalytic methyltransferase subunit of PRC2, and is thought to stabilize EZH2 prior to PRC2 complex formation and regulate di- and tri-methyltransferase activity. 
All three members of the ten-eleven translocation (TET) family of dioxygenases (TET1, TET2, and TET3) are known to be modified by O-GlcNAc. O-GlcNAc has been suggested to cause nuclear export of TET3, reducing its enzymatic activity by depleting it from the nucleus. O-GlcNAcylation of HDAC1 is associated with elevated activating phosphorylation of HDAC1.", "Elevated O-GlcNAc has been associated with diabetes.\nPancreatic β cells synthesize and secrete insulin to regulate blood glucose levels. One study found that inhibition of OGA with streptozotocin followed by glucosamine treatment resulted in O-GlcNAc accumulation and apoptosis in β cells; a subsequent study showed that a galactose-based analogue of streptozotocin was unable to inhibit OGA but still resulted in apoptosis, suggesting that the apoptotic effects of streptozotocin are not directly due to OGA inhibition.\nO-GlcNAc has been suggested to attenuate insulin signaling. In 3T3-L1 adipocytes, OGA inhibition with PUGNAc inhibited insulin-mediated glucose uptake. PUGNAc treatment also inhibited insulin-stimulated Akt T308 phosphorylation and downstream GSK3β S9 phosphorylation. In a later study, insulin stimulation of COS-7 cells caused OGT to localize to the plasma membrane. Inhibition of PI3K with wortmannin reversed this effect, suggesting dependence on phosphatidylinositol (3,4,5)-trisphosphate. Increasing O-GlcNAc levels by subjecting cells to high glucose conditions or PUGNAc treatment inhibited insulin-stimulated phosphorylation of Akt T308 and Akt activity. IRS1 phosphorylation at S307 and S632/S635, which is associated with attenuated insulin signaling, was enhanced. Subsequent experiments in mice with adenoviral delivery of OGT showed that OGT overexpression negatively regulated insulin signaling in vivo. Many components of the insulin signaling pathway, including β-catenin, IR-β, IRS1, Akt, PDK1, and the p110α subunit of PI3K, were found to be directly modified by O-GlcNAc. Insulin signaling has also been reported to lead to OGT tyrosine phosphorylation and OGT activation, resulting in increased O-GlcNAc levels.\nAs PUGNAc also inhibits lysosomal β-hexosaminidases, the OGA-selective inhibitor NButGT was developed to further probe the relationship between O-GlcNAc and insulin signaling in 3T3-L1 adipocytes. This study also found that PUGNAc resulted in impaired insulin signaling, but NButGT did not, as measured by changes in phosphorylation of Akt T308, suggesting that the effects observed with PUGNAc may be due to off-target effects besides OGA inhibition.", "Parkinson's disease is associated with aggregation of α-synuclein. As O-GlcNAc modification of α-synuclein has been found to inhibit its aggregation, elevating α-synuclein O-GlcNAc is being explored as a therapeutic strategy to treat Parkinson's disease.", "Treatment of macrophages with lipopolysaccharide (LPS), a major component of the Gram-negative bacterial outer membrane, results in elevated O-GlcNAc in cellular and mouse models. During infection, cytosolic OGT was de-S-nitrosylated and activated. Suppressing O-GlcNAc with DON inhibited the O-GlcNAcylation and nuclear translocation of NF-κB, as well as downstream induction of inducible nitric oxide synthase and IL-1β production. DON treatment also improved cell survival during LPS treatment.", "Oleum is produced in the contact process, where sulfur is oxidized to sulfur trioxide, which is subsequently dissolved in concentrated sulfuric acid. 
Sulfuric acid itself is regenerated by dilution of part of the oleum.\nThe lead chamber process for sulfuric acid production was abandoned, partly because it could not produce sulfur trioxide or concentrated sulfuric acid directly, due to corrosion of the lead and absorption of nitrogen oxide gases. Until this process was made obsolete by the contact process, oleum had to be obtained through indirect methods. Historically, the biggest production of oleum came from the distillation of iron sulfates at Nordhausen, from which the historical name Nordhausen sulfuric acid is derived.", "Oleum (Latin oleum, meaning oil), or fuming sulfuric acid, is a term referring to solutions of various compositions of sulfur trioxide in sulfuric acid, or sometimes more specifically to disulfuric acid (also known as pyrosulfuric acid).\nOleums can be described by the formula ySO3·H2O, where y is the total molar sulfur trioxide content. The value of y can be varied to include different oleums. They can also be described by the formula H2SO4·xSO3, where x is now defined as the molar free sulfur trioxide content. Oleum is generally assessed according to the free SO3 content by mass. It can also be expressed as a percentage of sulfuric acid strength; for oleum concentrations, that would be over 100%. For example, 10% oleum can also be expressed as H2SO4·0.13611SO3, 1.13611SO3·H2O, or 102.25% sulfuric acid. The conversion between % acid and % oleum is: % acid = 100 + 0.225 × (% oleum), the factor 0.225 being the ratio of the molar masses of H2O (about 18.02 g/mol) and SO3 (about 80.06 g/mol).\nFor x = 1 and y = 2 the empirical formula H2S2O7 for disulfuric (pyrosulfuric) acid is obtained. Pure disulfuric acid is a solid at room temperature, melting at 36&thinsp;°C, and is rarely used either in the laboratory or in industrial processes, although recent research indicates that pure disulfuric acid has never been isolated yet.", "Oleum is an important intermediate in the manufacture of sulfuric acid due to its high enthalpy of hydration. When SO3 is added to water, rather than dissolving, it tends to form a fine mist of sulfuric acid, which is difficult to manage. However, SO3 added to concentrated sulfuric acid readily dissolves, forming oleum, which can then be diluted with water to produce additional concentrated sulfuric acid.\nTypically, above concentrations of 98.3%, sulfuric acid will undergo a spontaneous decomposition into sulfur trioxide and water. This means that sulfuric acid above that concentration will readily degenerate until it reaches 98.3%; this is impractical in some applications, such as syntheses where anhydrous conditions are preferred (like alcohol eliminations). The addition of sulfur trioxide allows the concentration to be increased by means of Le Chatelier's principle.", "Oleum is a useful form for transporting sulfuric acid compounds, typically in rail tank cars, between oil refineries (which produce various sulfur compounds as a byproduct of refining) and industrial consumers.\nCertain compositions of oleum are solid at room temperature, and thus are safer to ship than as a liquid. Solid oleum can be converted into liquid at the destination by steam heating or by dilution or concentration. This requires care to prevent overheating and evaporation of sulfur trioxide. To extract it from a tank car requires careful heating using steam conduits inside the tank car. Great care must be taken to avoid overheating, as this can increase the pressure in the tank car beyond the tank's safety valve limit.\nIn addition, oleum is less corrosive to metals than sulfuric acid, because there is no free water to attack surfaces. 
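As a quick check of the composition conventions described earlier in this article, the short snippet below converts a free-SO3 mass percentage into the equivalent percent-sulfuric-acid figure, using only the molar masses of SO3 and H2O; the specific oleum strengths chosen are arbitrary examples.

```python
# Oleum strength conversion (only molar masses are used; example strengths are arbitrary).
# "% oleum" is the free-SO3 mass fraction; "% sulfuric acid" counts the H2SO4 that would
# result if the free SO3 were fully hydrated.

M_SO3, M_H2O = 80.06, 18.02  # g/mol

def percent_acid(percent_oleum):
    """Equivalent sulfuric-acid strength for an oleum with the given free-SO3 mass %."""
    # Each gram of free SO3 would take up 18.02/80.06 g of water to become H2SO4,
    # so the equivalent acid mass exceeds 100% by that ratio.
    return 100.0 + percent_oleum * M_H2O / M_SO3

for oleum in (10, 20, 30, 65):
    print(f"{oleum}% oleum -> {percent_acid(oleum):.2f}% sulfuric acid")
# 10% oleum -> 102.25% sulfuric acid, matching the worked example in the text.
```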
Because oleum is less corrosive to metals, sulfuric acid is sometimes concentrated to oleum for in-plant pipelines and then diluted back to acid for use in industrial reactions.\nIn Richmond, California, in 1993, a significant release of sulfur trioxide occurred due to overheating; the escaping gas absorbed moisture from the atmosphere, creating a mist of micrometre-sized sulfuric acid particles that formed an inhalation health hazard. This mist spread over a wide area.", "Oleum is used in the manufacture of many explosives, with the notable exception of nitrocellulose. (In modern manufacturing of nitrocellulose, the H₂SO₄ concentration is often adjusted using oleum.) Explosives manufacture often requires anhydrous mixtures containing nitric acid and sulfuric acid. Ordinary commercial-grade nitric acid consists of the constant-boiling azeotrope of nitric acid and water, and contains 68% nitric acid. Mixtures of ordinary nitric acid in sulfuric acid therefore contain substantial amounts of water and are unsuitable for processes such as those that occur in the manufacture of trinitrotoluene.\nThe synthesis of RDX and certain other explosives does not require oleum.\nAnhydrous nitric acid, referred to as white fuming nitric acid, can be used to prepare water-free nitration mixtures, and this method is used in laboratory-scale operations where the cost of material is not of primary importance. Fuming nitric acid is hazardous to handle and transport, because it is extremely corrosive and volatile. For industrial use, such strong nitration mixtures are prepared by mixing oleum with ordinary commercial nitric acid so that the free sulfur trioxide in the oleum consumes the water in the nitric acid.", "Like concentrated sulfuric acid, oleum is such a strong dehydrating agent that if poured onto powdered glucose, or virtually any other sugar, it will draw the elements of water (hydrogen and oxygen) out of the sugar in an exothermic reaction, leaving a residue of nearly pure carbon as a solid. This carbon expands outward, hardening as a solid black substance with gas bubbles in it.", "Oleum is a harsh reagent, and is highly corrosive. One important use of oleum as a reagent is the secondary nitration of nitrobenzene. The first nitration can occur with nitric acid in sulfuric acid, but this deactivates the ring towards further electrophilic substitution. A stronger reagent, oleum, is needed to introduce the second nitro group onto the aromatic ring.", "Oocyte selection is a procedure that is performed prior to in vitro fertilization, in order to use oocytes with maximal chances of resulting in pregnancy. In contrast, embryo selection takes place after fertilization.\nNot all women can conceive naturally, leaving a need for technologies and research that can help them have children. Women who are unable to conceive naturally may have the option of in vitro fertilization, a series of treatments in which a mature egg is fertilized with sperm in a laboratory. Oocyte selection is part of the process of in vitro fertilization. An oocyte is an egg (ovum) that has not yet fully matured or developed and has not been fertilized.", "Selecting an oocyte for in vitro fertilization involves assessing the quality of the oocyte, which is usually done by evaluating its morphological features. 
The major parts of the oocyte that are assessed for quality in terms of morphological characteristics are the cumulus cells, zona pellucida, polar body, perivitelline space, and cytoplasm; these are the main parts of the oocyte and are usually assessed by conventional microscopy. The size of an oocyte is another indicator of its quality; larger oocytes are usually of higher quality than smaller ones. Chromosomal evaluation may be performed. Embryos from rescued in vitro-matured metaphase II (IVM-MII) oocytes show significantly higher fertilization rates and more blastomeres per embryo compared with those from arrested metaphase I (MI) oocytes (58.5% vs. 43.9% and 5.7 vs. 5.0, respectively).\nMorphological features of the oocyte can also be obtained by standard light or polarized light microscopy. However, there is no clear tendency in recent publications towards a general increase in the predictive value of morphological features. Suggested techniques include zona pellucida imaging, which can detect differences in birefringence between eggs; birefringence is a predictor of compaction, blastulation and pregnancy.\nPotentially, polar body biopsy may be used for molecular analysis, and can be used for preimplantation genetic screening.", "The most common classes of compounds with this property are the stilbenes, e.g., 4,4′-diamino-2,2′-stilbenedisulfonic acid. Older, non-commercial fluorescent compounds include umbelliferone, which absorbs in the UV portion of the spectrum and re-emits it in the blue portion of the visible spectrum. A white surface treated with an optical brightener can emit more visible light than that which shines on it, making it appear brighter. The blue light emitted by the brightener compensates for the diminishing blue of the treated material and changes the hue away from yellow or brown and toward white.\nApproximately 400 brightener types are listed in the international Colour Index database, but fewer than 90 are produced commercially, and only a handful are commercially important. Colour Index Generic Names and Constitution Numbers can be assigned to a specific substance; however, some are duplicated, since manufacturers apply for the index number when they produce it. The global OBA production for paper, textiles, and detergents is dominated by just a few di- and tetra-sulfonated triazole-stilbenes and a di-sulfonated stilbene-biphenyl derivative. The stilbene derivatives are subject to fading upon prolonged exposure to UV, due to the formation of optically inactive cis-stilbenes. They are also degraded by oxygen in air, like most dye colorants. All brighteners have extended conjugation and/or aromaticity, allowing for electron movement. Some non-stilbene brighteners are used in more permanent applications such as whitening synthetic fiber.\nBrighteners can be \"boosted\" by the addition of certain polyols, such as high molecular weight polyethylene glycol or polyvinyl alcohol. These additives increase the visible blue light emissions significantly. Brighteners can also be \"quenched\". 
Excess brightener will often cause a greening effect as emissions start to show above the blue region in the visible spectrum.", "Optical brighteners, optical brightening agents (OBAs), fluorescent brightening agents (FBAs), or fluorescent whitening agents (FWAs), are chemical compounds that absorb light in the ultraviolet and violet region (usually 340-370 nm) of the electromagnetic spectrum, and re-emit light in the blue region (typically 420-470 nm) through the phenomenon of fluorescence. These additives are often used to enhance the appearance of color of fabric and paper, causing a \"whitening\" effect; they make intrinsically yellow/orange materials look less so, by compensating the deficit in blue and purple light reflected by the material, with the blue and purple optical emission of the fluorophore.", "Brighteners are commonly added to laundry detergents to make the clothes appear cleaner. Normally cleaned laundry appears yellowish, which consumers do not like. Optical brighteners have replaced bluing which was formerly used to produce the same effect.\nBrighteners are used in many papers, especially high brightness papers, resulting in their strongly fluorescent appearance under UV illumination. Paper brightness is typically measured at 457 nm, well within the fluorescent activity range of brighteners. Paper used for banknotes does not contain optical brighteners, so a common method for detecting counterfeit notes is to check for fluorescence.\nOptical brighteners have also found use in cosmetics. One application is to formulas for washing and conditioning grey or blonde hair, where the brightener can not only increase the luminance and sparkle of the hair, but can also correct dull, yellowish discoloration without darkening the hair. Some advanced face and eye powders contain optical brightener microspheres that brighten shadowed or dark areas of the skin, such as \"tired eyes\".\nEnd uses of optical brighteners include:\n# Detergent whitener (instead of bluing agents)\n# Paper brightening (internal or in a coating)\n# Fiber whitening (internal, added to polymer melts)\n# Textile whitening (external, added to fabric finishes)\n# Color-correcting or brightening additive in advanced cosmetic formulas (shampoos, conditioners, eye makeup)", "From around 2002 to 2012, chemical brighteners were used by many Chinese farmers to enhance the appearance of their white mushrooms. This illegal use was mostly eliminated by the Chinese Ministry of Agriculture.", "Collagen is the primary component of the extracellular matrix. Collagen scaffolds efficiently support fibroblast growth, which in turn allows keratinocytes to grow nicely into multilayers. Collagen (mainly collagen type I) is often used as a scaffold because it is biocompatible, non-immunogenic and available. However, collagen biodegrades relatively rapidly and is not good at withstanding mechanical forces. Improved characteristics can be created by cross-linking collagen-based matrices: this is an effective method to correct the instability and mechanical properties.", "Gelatin is the denatured form of collagen. Gelatin possesses several advantages for tissue-engineering application: they attract fibroblasts, are non-immunogenic, easy to manipulate and boost the formation of epithelium. 
There are several types of gelatin-based scaffolds:\n* Gelatin-oxidized dextran matrix\n* Gelatin-chitosan-oxidized dextran matrix\n* Gelatin-glucan matrix\n* Gelatin-hyaluronate matrix\n* Gelatin-chitosan hyaluronic acid matrix.\nGlucan is a polysaccharide with antibacterial, antiviral and anticoagulant properties. Hyaluronic acid is added to improve the biological and mechanical properties of the matrix.", "Cell culture techniques make it possible to produce epithelial sheets for the replacement of damaged oral mucosa. Partial-thickness tissue engineering uses one type of cell layer, which can be in monolayers or multilayers. Monolayer epithelial sheets suffice for the study of the basic biology of oral mucosa, for example its responses to stimuli such as mechanical stress, growth factor addition and radiation damage. Oral mucosa, however, is a complex multilayer structure with proliferating and differentiating cells, and monolayer epithelial sheets have been shown to be fragile, difficult to handle and likely to contract without a supporting extracellular matrix. Monolayer epithelial sheets can be used to manufacture multilayer cultures. These multilayer epithelial sheets show signs of differentiation such as the formation of a basement membrane and keratinization. Fibroblasts are the most common cells in extracellular matrix and are important for epithelial morphogenesis. If fibroblasts are absent from the matrix, the epithelium stops proliferating but continues to differentiate. The structures obtained by partial-thickness oral mucosa engineering form the basis for full-thickness oral mucosa engineering.", "A scaffold or matrix serves as a temporary supporting structure (extracellular matrix), the initial architecture, on which the cells can grow three-dimensionally into the desired tissue. A scaffold must provide the environment needed for cellular growth and differentiation; it must provide the strength to withstand mechanical stress and guide cell growth. Moreover, scaffolds should be biodegradable and degrade at the same rate as the tissue regenerates, so that they are optimally replaced by the host tissue. There are numerous scaffolds to choose from; when choosing a scaffold, biocompatibility, porosity and stability should also be taken into account. Available scaffolds for oral mucosa tissue engineering are:", "* Acellular Dermis. An acellular dermis is made by removing the cells (epidermis and dermal fibroblasts) from split-thickness skin. It has two sides: one side has a basal lamina suitable for the epithelial cells, and the other is suitable for fibroblast infiltration because it has intact vessel channels. It is durable, able to keep its structure and does not trigger immune reactions (non-immunogenic).\n* Amniotic Membrane. The amniotic membrane, the inner part of the placenta, has a thick basement membrane of collagen type IV and laminin and avascular connective tissue.", "Fibroblast-populated Skin Substitutes are scaffolds which contain fibroblasts that are able to proliferate and produce extracellular matrix and growth factors within 2 to 3 weeks. This creates a matrix similar to that of a dermis. \nCommercially available examples include: \n* Dermagraft\n* Apligraf\n* Orcel\n* Polyactive\n* Hyalograf 3D", "Tissue engineering of oral mucosa combines cells, materials and engineering to produce a three-dimensional reconstruction of oral mucosa. It is meant to simulate the real anatomical structure and function of oral mucosa. 
Tissue engineered oral mucosa shows promise for clinical use, such as the replacement of soft tissue defects in the oral cavity. These defects can be divided into two major categories: the gingival recessions (receding gums) which are tooth-related defects, and the non tooth-related defects. Non tooth-related defects can be the result of trauma, chronic infection or defects caused by tumor resection or ablation (in the case of oral cancer). Common approaches for replacing damaged oral mucosa are the use of autologous grafts and cultured epithelial sheets.", "With the advancement of tissue engineering an alternative approach was developed: the full-thickness engineered oral mucosa. Full-thickness engineered oral mucosa is a better simulation of the in vivo situation because they take the anatomical structure of native oral mucosa into account. Problems, such as tissue shortage and donor site morbidity, do not occur when using full-thickness engineered oral mucosa.\nThe main goal when producing full-thickness engineered oral mucosa is to make it resemble normal oral mucosa as much as possible. This is achieved by using a combination of different cell types and scaffolds. \n* Lamina propria: is mimicked by seeding oral fibroblasts, producing extracellular matrix, into a biocompatible (porous) scaffold and culturing them in a fibroblast differentiation medium.\n* Basement membrane: containing type IV collagen, laminin, fibronectin and integrins. Ideally, the basement membrane must contain a lamina lucida and a lamina densa.\n* Stratified squamous epithelium: is simulated by oral keratinocytes cultured in a medium containing keratinocyte growth factors such as the epidermal growth factor (EGF).\nTo obtain the best results, the type and origin of the fibroblasts and keratinocytes used in oral mucosa tissue engineering are important factors to hold into account. Fibroblasts are usually taken from the dermis of the skin or oral mucosa. Kertinocytes can be isolated from different areas of the oral cavity (such as the palate or gingiva). It is important that the fibroblasts and keratinocytes are used in the earliest stage possible as the function of these cells decreases with time. The transplanted keratinocytes and fibroblasts should adapt to their new environment and adopt their function. There is a risk of losing the transplanted tissue if the cells do not adapt properly. This adaptation goes more smoothly when the donor tissue cells resemble the cells of the native tissue.", "Autologous grafts are used to transfer tissue from one site to another on the same body. The use of autologous grafts prevents transplantation rejection reactions. \nGrafts used for oral reconstruction are preferably taken from the oral cavity itself (such as gingival and palatal grafts). However, their limited availability and small size leads to the use of either skin transplants or intestinal mucosa to be able to cover bigger defects.\nOther than tissue shortage, donor site morbidity is a common problem that may occur when using autologous grafts. When tissue is obtained from somewhere other than the oral cavity (such as the intestine or skin) there is a risk of the graft not being able to lose its original donor tissue characteristics. For example, skin grafts are often taken from the radial forearm or lateral upper arm when covering more extensive defects. A positive aspect of using skin grafts is the large availability of skin. 
However, skin grafts differ from oral mucosa in: consistency, color and keratinization pattern. The transplanted skin graft often continues to grow hair in the oral cavity.", "Fibrin-based scaffolds contain fibrin which gives the keratinocytes stability. Moreover, they are simple to reproduce and handle.", "A hybrid scaffold is a skin substitute based on a combination of synthetic and natural materials. Examples of hybrid scaffolds are HYAFF and Laserskin. These hybrid scaffolds have been shown to have good in-vitro and in-vivo biocompatibilities and their biodegradability is controllable.", "The use of natural materials in scaffolds has its disadvantages. Usually, they are expensive, not available in large quantities and they have the risk of disease transmission. This has led to the development of synthetic scaffolds.\nWhen producing synthetic scaffolds there is full control over their properties. For example, they can be made to have good mechanical properties and the right biodegradability. When it comes to synthetic scaffolds thickness, porosity and pore size are important factors for controlling connective tissue formation. \nExamples of synthetic scaffolds are:\n* Polyethylene terephthalate membranes (PET membranes)\n* Polycarbonate-permeable membranes (PC membranes)\n* Porous polylactic glycolic acid (PLGA)\nHistorical use of electrospinning to produce synthetic scaffolds dates back to at least the late 1980s when Simon showed that technology could be used to produce nano- and submicron-scale fibrous scaffolds from polymer solutions specifically intended for use as in vitro cell and tissue substrates. This early use of electrospun lattices for cell culture and tissue engineering showed that various cell types would adhere to and proliferate upon polycarbonate fibers. It was noted that as opposed to the flattened morphology typically seen in 2D culture, cells grown on the electrospun fibers exhibited a more rounded 3-dimensional morphology generally observed of tissues in vivo.", "Although it has not yet been commercialized for clinical use clinical studies have been done on intra- and extra-oral treatments with full-thickness engineered oral mucosa.\nFull-thickness engineered oral mucosa is mainly used in maxillofacial reconstructive surgery and periodontal peri-implant reconstruction. Good clinical and histological results have been obtained. For example, there is vascular ingrowth and the transplanted keratinocytes integrate well into the native epithelium. Full-thickness engineered oral mucosa has also shown good results for extra-oral applications such as urethral reconstruction, ocular surface reconstruction and eyelid reconstruction.", "To better understand the challenges for building full-thickness engineered oral mucosa it is important to first understand the structure of normal oral mucosa. Normal oral mucosa consists of two layers, the top stratified squamous epithelial layer and the bottom lamina propria. The epithelial layer consists of four layers:\n* Stratum basale (basal layer)\n* Stratum spinosum (spinous layer)\n* Stratum granulosum (granular layer)\n* Stratum corneum (keratinized/superficial layer)\nDepending on the region of the mouth the epithelium may be keratinized or non-keratinized. Non-keratinized squamous epithelium covers the soft palate, lips, cheeks and the floor of the mouth. Keratinized squamous epithelium is present in the gingiva and hard palate. 
Keratinization is the differentiation of keratinocytes in the granular layer into dead surface cells to form the stratum corneum. The cells terminally differentiate as they migrate to the surface (from the basal layer, where the progenitor cells are located, to the dead superficial surface).\nThe lamina propria is a fibrous connective tissue layer that consists of a network of type I and III collagen and elastin fibers. The main cells of the lamina propria are the fibroblasts, which are responsible for the production of the extracellular matrix. The basement membrane forms the border between the epithelial layer and the lamina propria.", "Compound collagen-based scaffolds have been developed in an attempt to improve the function of these scaffolds for tissue engineering. An example of a compound collagen scaffold is the collagen-chitosan matrix. Chitosan is a polysaccharide that is chemically similar to cellulose. Unlike collagen, chitosan biodegrades relatively slowly. However, chitosan is not very biocompatible with fibroblasts. Crosslinking chitosan with gelatin or collagen improves both the stability of the gelatin or collagen and the biocompatibility of the chitosan; the two materials compensate for each other's shortcomings.\nThe collagen-elastin membrane, the collagen-glycosaminoglycan (C-GAG) matrix, and the cross-linked collagen matrices Integra and Terudermis are other examples of compound collagen scaffolds.\nAllogeneic cultured keratinocytes and fibroblasts in bovine collagen (Gintuit) is the first cell-based product made from allogeneic human cells and bovine collagen approved by the US Food and Drug Administration (FDA). It is an allogeneic cellularized scaffold product and was approved for medical use in the United States in March 2012.", "Selective laser sintering (SLS) uses powdered material as the substrate for printing new objects. SLS can be used to create metal, plastic, and ceramic objects. This technique uses a laser controlled by a computer as the power source to sinter powdered material. The laser traces a cross-section of the shape of the desired object in the powder, which fuses it together into a solid form. A new layer of powder is then laid down and the process repeats itself, building each layer with every new application of powder, one by one, to form the entirety of the object. One of the advantages of SLS printing is that it requires very little additional finishing, such as sanding, once the object is printed. Recent advances in organ printing using SLS include 3D constructs of craniofacial implants as well as scaffolds for cardiac tissue engineering.", "Fused deposition modeling (FDM) is more common and inexpensive compared to selective laser sintering. This printer uses a printhead that is similar in structure to an inkjet printer; however, ink is not used. Plastic beads are heated at high temperature and released from the printhead as it moves, building the object in thin layers. A variety of plastics can be used with FDM printers. Additionally, most of the parts printed by FDM are typically composed of the same thermoplastics that are utilized in traditional injection molding or machining techniques. Due to this, these parts have analogous durability, mechanical properties, and stability characteristics. Precise control allows a consistent amount of material to be released at specific locations for each layer contributing to the shape. As the heated plastic is deposited from the printhead, it fuses or bonds to the layers below. 
As each layer cools, it hardens, and the object gradually takes on its intended solid shape as more layers are added to the structure.", "Natural-synthetic hybrid polymers are based on the synergic effect between synthetic and biopolymeric constituents. Gelatin-methacryloyl (GelMA) has become a popular biomaterial in the field of bioprinting. GelMA has shown viable potential as a bioink material due to its suitable biocompatibility and readily tunable physicochemical properties. Hyaluronic acid (HA)-PEG is another natural-synthetic hybrid polymer that has proven to be very successful in bioprinting applications. HA combined with synthetic polymers aids in obtaining more stable structures with high cell viability and limited loss in mechanical properties after printing. A recent application of HA-PEG in bioprinting is the creation of artificial liver. Lastly, a series of biodegradable polyurethane (PU)-gelatin hybrid polymers with tunable mechanical properties and efficient degradation rates have been implemented in organ printing. This hybrid has the ability to print complicated structures such as a nose-shaped construct.\nAll of the polymers described above have the potential to be manufactured into implantable, bioartificial organs for purposes including, but not limited to, customized organ restoration, drug screening, as well as metabolic model analysis.", "Extrusion bioprinting involves the continuous deposition of a specific printing material and cell line from an extruder, a type of mobile print head. This tends to be a more controlled and gentler process for material or cell deposition, and it permits greater cell densities to be used in the construction of 3D tissue or organ structures. However, these benefits are offset by the slower printing speeds of this technique. Extrusion bioprinting is frequently coupled with UV light, which photopolymerizes the printed material to create a more stable, integrated construct.", "Drop-based bioprinting creates cellular constructs using droplets of a designated material, which has often been combined with a cell line. Cells themselves can also be deposited in this manner, with or without polymer. When printing polymer scaffolds using these methods, each drop starts to polymerize upon contact with the substrate surface, and the drops merge into a larger structure as they coalesce. Polymerization can happen through a variety of methods depending on the polymer used. For instance, alginate polymerization is initiated by calcium ions in the substrate, which diffuse into the liquefied bioink and allow the formation of a solid gel. Drop-based bioprinting is commonly utilized due to its speed, although this may make it less appropriate for more complicated organ structures.", "Sacrificial writing into functional tissue (SWIFT) is a method of organ printing where living cells are packed tightly to mimic the density that occurs in the human body. While packing, tunnels are carved to mimic blood vessels, and oxygen and essential nutrients are delivered via these tunnels. This technique combines earlier methods that only packed cells or only created vasculature; by doing both, it brings researchers closer to creating functional artificial organs.", "Printing materials must fit a broad spectrum of criteria, one of the foremost being biocompatibility. 
The resulting scaffolds formed by 3D printed materials should be physically and chemically appropriate for cell proliferation. Biodegradability is another important factor, and ensures that the artificially formed structure can be broken down upon successful transplantation, to be replaced by a completely natural cellular structure. Due to the nature of 3D printing, materials used must be customizable and adaptable, being suited to a wide array of cell types and structural conformations.", "3D printing for the manufacturing of artificial organs has been a major topic of study in biological engineering. As the rapid manufacturing techniques entailed by 3D printing become increasingly efficient, their applicability in artificial organ synthesis has grown more evident. Some of the primary benefits of 3D printing lie in its capability of mass-producing scaffold structures, as well as the high degree of anatomical precision in scaffold products. This allows for the creation of constructs that more effectively resemble the microstructure of a natural organ or tissue structure. Organ printing using 3D printing can be conducted using a variety of techniques, each of which confers specific advantages that can be suited to particular types of organ production.", "The field of organ printing stemmed from research in the area of stereolithography, the basis for the practice of 3D printing that was invented in 1984. In this early era of 3D printing, it was not possible to create lasting objects because the material used for the printing process was not durable. 3D printing was instead used as a way to model potential end products that would eventually be made from different materials under more traditional techniques. In the beginning of the 1990s, nanocomposites were developed that allowed 3D printed objects to be more durable, permitting 3D printed objects to be used for more than just models. It was around this time that those in the medical field began considering 3D printing as an avenue for generating artificial organs. By the late 1990s, medical researchers were searching for biocompatible materials that could be used in 3D printing.\nThe concept of bioprinting was first demonstrated in 1988. At this time, a researcher used a modified HP inkjet printer to deposit cells using cytoscribing technology. Progress continued in 1999 when the first artificial organ made using bioprinting was printed by a team of scientists led by Dr. Anthony Atala at the Wake Forest Institute for Regenerative Medicine. The scientists at Wake Forest printed an artificial scaffold for a human bladder and then seeded the scaffold with cells from their patient. Using this method, they were able to grow a functioning organ, and ten years after implantation the patient had no serious complications.\nAfter the bladder at Wake Forest, strides were taken towards printing other organs. In 2002, a miniature, fully functional kidney was printed. In 2003, Dr. Thomas Boland from Clemson University patented the use of inkjet printing for cells. This process utilized a modified spotting system for the deposition of cells into organized 3D matrices placed on a substrate. This printer allowed for extensive research into bioprinting and suitable biomaterials. For instance, since these initial findings, the 3D printing of biological structures has been further developed to encompass the production of tissue and organ structures, as opposed to cell matrices. 
Additionally, more techniques for printing, such as extrusion bioprinting, have been researched and subsequently introduced as a means of production.\nIn 2004, the field of bioprinting was drastically changed by yet another new bioprinter. This new printer was able to use live human cells without having to build an artificial scaffold first. In 2009, Organovo used this novel technology to create the first commercially available bioprinter. Soon after, Organovo's bioprinter was used to develop a biodegradable blood vessel, the first of its kind, without a cell scaffold.\nIn the 2010s and beyond, further research has been put forth into producing other organs, such as the liver and heart valves, and tissues, such as a blood-borne network, via 3D printing. In 2019, scientists in Israel made a major breakthrough when they were able to print a rabbit-sized heart with a network of blood vessels that were capable of contracting like natural blood vessels. The printed heart had the correct anatomical structure and function compared to real hearts. This breakthrough represented a real possibility of printing fully functioning human organs. In fact, scientists at the Warsaw Foundation for Research and Development of Science in Poland have been working on creating a fully artificial pancreas using bioprinting technology. As of today, these scientists have been able to develop a functioning prototype. This is a growing field and much research is still being conducted.", "Materials for 3D printing usually consist of alginate or fibrin polymers that have been integrated with cellular adhesion molecules, which support the physical attachment of cells. Such polymers are specifically designed to maintain structural stability and be receptive to cellular integration. The term bio-ink has been used as a broad classification of materials that are compatible with 3D bioprinting. Hydrogel alginates have emerged as one of the most commonly used materials in organ printing research, as they are highly customizable, and can be fine-tuned to simulate certain mechanical and biological properties characteristic of natural tissue. The ability of hydrogels to be tailored to specific needs allows them to be used as an adaptable scaffold material, that are suited for a variety of tissue or organ structures and physiological conditions. A major challenge in the use of alginate is its stability and slow degradation, which makes it difficult for the artificial gel scaffolding to be broken down and replaced with the implanted cells' own extracellular matrix. Alginate hydrogel that is suitable for extrusion printing is also often less structurally and mechanically sound; however, this issue can be mediated by the incorporation of other biopolymers, such as nanocellulose, to provide greater stability. The properties of the alginate or mixed-polymer bioink are tunable and can be altered for different applications and types of organs.\nOther natural polymers that have been used for tissue and 3D organ printing include chitosan, hydroxyapatite (HA), collagen, and gelatin. Gelatin is a thermosensitive polymer with properties exhibiting excellent wear solubility, biodegradability, biocompatibility, as well as a low immunologic rejection. These qualities are advantageous and result in high acceptance of the 3D bioprinted organ when implanted in vivo.", "This method of organ printing uses spatially controlled light or laser to create a 2D pattern that is layered through a selective photopolymerization in the bio-ink reservoir. 
A 3D structure can then be built in layers using the 2D pattern. Afterwards the bio-ink is removed from the final product. SLA bioprinting allows for the creation of complex shapes and internal structures. The feature resolution for this method is extremely high and the only disadvantage is the scarcity of resins that are biocompatible.", "Synthetic polymers are human made through chemical reactions of monomers. Their mechanical properties are favorable in that their molecular weights can be regulated from low to high based on differing requirements. However, their lack of functional groups and structural complexity has limited their usage in organ printing. Current synthetic polymers with excellent 3D printability and in vivo tissue compatibility, include polyethylene glycol (PEG), poly(lactic-glycolic acid) (PLGA), and polyurethane (PU). PEG is a biocompatible, nonimmunogenic synthetic polyether that has tunable mechanical properties for use in 3D bioprinting. Though PEG has been utilized in various 3D printing applications, the lack of cell-adhesive domains has limited further use in organ printing. PLGA, a synthetic copolymer, is widely familiar in living creatures, such as animals, humans, plants, and microorganisms. PLGA is used in conjunction with other polymers to create different material systems, including PLGA-gelatin, PLGA-collagen, all of which enhance mechanical properties of the material, biocompatible when placed in vivo, and have tunable biodegradability. PLGA has most often been used in printed constructs for bone, liver, and other large organ regeneration efforts. Lastly, PU is unique in that it can be classified into two groups: biodegradable or non-biodegradable. It has been used in the field of bioprinting due to its excellent mechanical and bioinert properties. An application of PU would be inanimate artificial hearts; however, using existing 3D bioprinters, this polymer cannot be printed. A new elastomeric PU was created composed of PEG and polycaprolactone (PCL) monomers. This new material exhibits excellent biocompatibility, biodegradability, bioprintability, and biostability for use in complex bioartificial organ printing and manufacturing. Due to high vascular and neural network construction, this material can be applied to organ printing in a variety of complex ways, such as the brain, heart, lung, and kidney.", "Surgical usage of 3D printing has evolved from printing surgical instrumentation to the development of patient-specific technologies for total joint replacements, dental implants, and hearing aids. In the field of organ printing, applications can be applied for patients and surgeons. For instance, printed organs have been used to model structure and injury to better understand the anatomy and discuss a treatment regime with patients. For these cases, the functionality of the organ is not required and is used for proof-of-concept. These model organs provide advancement for improving surgical techniques, training inexperienced surgeons, and moving towards patient-specific treatments.", "The creation of a complete organ often requires incorporation of a variety of different cell types, arranged in distinct and patterned ways. One advantage of 3D-printed organs, compared to traditional transplants, is the potential to use cells derived from the patient to make the new organ. 
This significantly decreases the likelihood of transplant rejection, and may remove the need for immunosuppressive drugs after transplant, which would reduce the health risks of transplants. However, since it may not always be possible to collect all the needed cell types, it may be necessary to collect adult stem cells or induce pluripotency in collected tissue. This involves resource-intensive cell growth and differentiation and comes with its own set of potential health risks, since cell proliferation in a printed organ occurs outside the body and requires external application of growth factors. However, the ability of some tissues to self-organize into differentiated structures may provide a way to simultaneously construct the tissues and form distinct cell populations, improving the efficacy and functionality of organ printing.", "Organ printing for medical applications is still in the developmental stages. Thus, the long term impacts of organ printing have yet to be determined. Researchers hope that organ printing could decrease the organ transplant shortage. There is currently a shortage of available organs, including liver, kidneys, and lungs. The lengthy wait time to receive life saving organs is one of the leading causes of death in the United States, with nearly one third of deaths each year in the United States that could be delayed or prevented with organ transplants. Currently the only organ that has been 3D bioprinted and successfully transplanted into a human is a bladder. The bladder was formed from the host's bladder tissue. Researchers have proposed that a potential positive impact of 3D printed organs is the ability to customize organs for the recipient. Developments enabling an organ recipient’s host cells to be used to synthesize organs decreases the risk of organ rejection.\nThe ability to print organs has decreased the demand for animal testing. Animal testing is used to determine the safety of products ranging from makeup to medical devices. Cosmetic companies are already using smaller tissue models to test new products on skin. The ability to 3D print skin reduces the need for animal trials for makeup testing. In addition, the ability to print models of human organs to test the safety and efficacy of new drugs further reduces the necessity for animal trials. Researchers at Harvard University determined that drug safety can be accurately tested on smaller tissue models of lungs. The company Organovo, which designed one of the initial commercial bioprinters in 2009, has displayed that biodegradable 3D tissue models can be used to research and develop new drugs, including those to treat cancer. An additional impact of organ printing includes the ability to rapidly create tissue models, therefore increasing productivity.", "Organ printing utilizes techniques similar to conventional 3D printing where a computer model is fed into a printer that lays down successive layers of plastics or wax until a 3D object is produced. In the case of organ printing, the material being used by the printer is a biocompatible plastic. The biocompatible plastic forms a scaffold that acts as the skeleton for the organ that is being printed. As the plastic is being laid down, it is also seeded with human cells from the patient's organ that is being printed for. After printing, the organ is transferred to an incubation chamber to give the cells time to grow. 
After a sufficient amount of time, the organ is implanted into the patient.\nTo many researchers the ultimate goal of organ printing is to create organs that can be fully integrated into the human body. Successful organ printing has the potential to impact several industries, notably artificial organs organ transplants, pharmaceutical research, and the training of physicians and surgeons.", "From an ethical standpoint, there are concerns with respect to the availability of organ printing technologies, the cell sources, and public expectations. Although this approach may be less expensive than traditional surgical transplantation, there is skepticism in regards to social availability of these 3D printed organs. Contemporary research has found that there is potential social stratification for the wealthier population to have access to this therapy while the general population remains on the organ registry. The cell sources mentioned previously also need to be considered. Organ printing can decrease or eliminate animal studies and trials, but also raises questions on the ethical implications of autologous and allogenic sources. More specifically, studies have begun to examine future risks for humans undergoing experimental testing. Generally, this application can give rise to social, cultural, and religious differences, making it more difficult for worldwide integration and regulation. Overall, the ethical considerations of organ printing are similar to those of general ethics of bioprinting, but are extrapolated from tissue to organ. Altogether, organ printing possesses short- and long-term legal and ethical consequences that need to be considered before mainstream production can be feasible.", "The current American regulation for organ matching is centered on the national registry of organ donors after the National Organ Transplant Act was passed in 1984. This act was set in place to ensure equal and honest distribution, although it has been proven insufficient due to the large demand for organ transplants. Organ printing can assist in diminishing the imbalance between supply and demand by printing patient-specific organ replacements, all of which is unfeasible without regulation. The Food and Drug Administration (FDA) is responsible for regulation of biologics, devices, and drugs in the United States. Due to the complexity of this therapeutic approach, the location of organ printing on the spectrum has not been discerned. Studies have characterized printed organs as multi-functional combination products, meaning they fall between the biologics and devices sectors of the FDA; this leads to more extensive processes for review and approval. In 2016, the FDA issued draft guidance on the Technical Considerations for Additive Manufactured Devices and is currently evaluating new submissions for 3D printed devices. However, the technology itself is not advanced enough for the FDA to mainstream it directly. Currently, the 3D printers, rather than the finished products, are the main focus in safety and efficacy evaluations in order to standardize the technology for personalized treatment approaches. From a global perspective, only South Korea and Japan's medical device regulation administrations have provided guidelines that are applicable to 3D bio-printing.\nThere are also concerns with intellectual property and ownership. These can have a large impact on more consequential matters such as piracy, quality control for manufacturing, and unauthorized use on the black market. 
These considerations are focused more on the materials and fabrication processes; they are more extensively explained in the legal aspects subsection of 3D printing.", "One of the challenges of 3D printing organs is to recreate the vasculature required to keep the organs alive. Designing a correct vasculature is necessary for the transport of nutrients, oxygen, and waste. Blood vessels, especially capillaries, are difficult due to the small diameter. Progress has been made in this area at Rice University, where researchers designed a 3D printer to make vessels in biocompatible hydrogels and designed a model of lungs that can oxygenate blood. However, accompanied with this technique is the challenge of replicating the other minute details of organs. It is difficult to replicate the entangled networks of airways, blood vessels, and bile ducts and complex geometry of organs.\nThe challenges faced in the organ printing field extends beyond the research and development of techniques to solve the issues of multivascularization and difficult geometries. Before organ printing can become widely available, a source for sustainable cell sources must be found and large-scale manufacturing processes need to be developed. Additional challenges include designing clinical trials to test the long-term viability and biocompatibility of synthetic organs. While many developments have been made in the field of organ printing, more research must be conducted.", "Organ printing technology can also be combined with microfluidic technology to develop organs-on-chips. These organs-on-chips have the potential to be used for disease models, aiding in drug discovery, and performing high-throughput assays. Organ-on-chips work by providing a 3D model that imitates the natural extracellular matrix, allowing them to display realistic responses to drugs. Thus far, research has been focused on developing liver-on-a-chip and heart-on-a-chip, but there exists the potential to develop an entire body-on-a-chip model.\nBy combining 3D printed organs, researchers are able to create a body-on-a-chip. The heart-on-a-chip model has already been used to investigate how several drugs with heart rate-based negative side effects, such as the chemotherapeutic drug doxorubicin could affect people on an individual basis. The new body-on-a-chip platform includes liver, heart, lungs, and kidney-on-a-chip. The organs-on-a-chip are separately printed or constructed and then integrated together. Using this platform drug toxicity studies are performed in high throughput, lowering the cost and increasing the efficiency in the drug-discovery pipeline.", "3D organ printing technology permits the fabrication of high degrees of complexity with great reproducibility, in a fast and cost-effective manner. 3D printing has been used in pharmaceutical research and fabrication, providing a transformative system allowing precise control of droplet size and dose, personalized medicine, and the production of complex drug-release profiles. This technology calls for implantable drug delivery devices, in which the drug is injected into the 3D printed organ and is released once in vivo. Also, organ printing has been used as a transformative tool for in vitro testing. The printed organ can be utilized in discovery and dosage research upon drug-release factors.", "Currently, the sole method for treatment for those in organ failure is to await a transplant from a living or recently deceased donor. 
In the United States alone, there are over 100,000 patients on the organ transplant list waiting for donor organs to become available. Patients on the donor list can wait days, weeks, months, or even years for a suitable organ to become available. The average wait time for some common organ transplants are as follows: four months for a heart or lung, eleven months for a liver, two years for a pancreas, and five years for a kidney. This is a significant increase from the 1990s, when a patient could wait as little as five weeks for a heart. These extensive wait times are due to a shortage of organs as well as the requirement for finding an organ that is suitable for the recipient. An organ is deemed suitable for a patient based on blood type, comparable body size between donor and recipient, the severity of the patients medical condition, the length of time the patient has been waiting for an organ, patient availability (i.e. ability to contact patient, if patient has an infection), the proximity of the patient to the donor, and the viability time of the donor organ. In the United States, 20 people die everyday waiting for organs. 3D organ printing has the potential to remove both these issues; if organs could be printed as soon as there is need, there would be no shortage. Additionally, seeding printed organs with a patients own cells would eliminate the need to screen donor organs for compatibility.", "The types of printers used for organ printing include:\n* Inkjet printer\n* Multi-nozzle\n* Hybrid printer\n* Electrospinning\n* Drop-on-demand\nThese printers are used in the methods described previously. Each printer requires different materials and has its own advantages and limitations.", "3D-printing techniques have been used in a variety of industries for the overall goal of fabricating a product. Organ printing, on the other hand, is a novel industry that utilizes biological components to develop therapeutic applications for organ transplants. Due to the increased interest in this field, regulation and ethical considerations desperately need to be established. Specifically, there can be legal complications from pre-clinical to clinical translation for this treatment method.", "3D cell-culture models exceed 2D culture systems by promoting higher levels of cell differentiation and tissue organization. 3D culture systems are more successful because the flexibility of the ECM gels accommodates shape changes and cell-cell connections – formerly prohibited by rigid 2D culture substrates. Nevertheless, even the best 3D culture models fail to mimic an organs cellular properties in many aspects, including tissue-to-tissue interfaces (e.g., epithelium and vascular endothelium), spatiotemporal gradients of chemicals, and the mechanically active microenvironments (e.g. arteries vasoconstriction and vasodilator responses to temperature differentials). The application of microfluidics in organs-on-chips enables the efficient transport and distribution of nutrients and other soluble cues throughout the viable 3D tissue constructs. Organs-on-chips are referred to as the next wave of 3D cell-culture models that mimic whole living organs' biological activities, dynamic mechanical properties and biochemical functionalities.", "In the early phase of drug development, animal models were the only way of obtaining in vivo data that would predict the human pharmacokinetic responses. However, experiments on animals are lengthy, expensive and controversial. 
For example, animal models are often subjected to mechanical or chemical techniques that simulate human injuries. There are also concerns with regard to the validity of such animal models, due to deficiencies in cross-species extrapolation. Moreover, animal models offer very limited control of individual variables, and it can be cumbersome to harvest specific information.\nTherefore, mimicking a human's physiological responses in an in vitro model needs to be made more affordable, and needs to offer cellular-level control in biological experiments: biomimetic microfluidic systems could replace animal testing. The development of MEMS-based biochips that reproduce complex organ-level pathological responses could revolutionize many fields, including toxicology and the developmental process of pharmaceuticals and cosmetics that rely on animal testing and clinical trials.\nRecently, physiologically based perfusion in vitro systems have been developed to provide a cell culture environment close to the in vivo cell environment. New testing platforms based on multi-compartmental perfused systems have gained remarkable interest in pharmacology and toxicology. They aim to provide a cell culture environment close to the in vivo situation in order to reproduce more reliably in vivo mechanisms and ADME processes (absorption, distribution, metabolism, and elimination). Perfused in vitro systems combined with kinetic modelling are promising tools for studying in vitro the different processes involved in the toxicokinetics of xenobiotics.\nEfforts have been made toward the development of microfabricated cell culture systems that aim to replicate aspects of the human body as closely as possible; examples have demonstrated their potential use in drug development, such as identifying synergistic drug interactions as well as simulating multi-organ metabolic interactions. Multi-compartment microfluidic-based devices, particularly those that are physical representations of physiologically based pharmacokinetic (PBPK) models that represent the mass transfer of compounds in compartmental models of the mammalian body, may contribute to improving the drug development process. Some emerging technologies have the ability to measure multiple biological processes in a co-culture of mixed cell types taken from different parts of the body, which is suggested to provide more similarity to in vivo models. \nMathematical pharmacokinetic (PK) models aim to estimate concentration-time profiles within each organ on the basis of the initial drug dose. Such mathematical models can be relatively simple, treating the body as a single compartment in which the drug distribution reaches a rapid equilibrium after administration (a minimal one-compartment example is sketched below). Mathematical models can be highly accurate when all parameters involved are known. Models that combine PK or PBPK models with PD models can predict the time-dependent pharmacological effects of a drug. PBPK models can nowadays predict the PK of almost any chemical in humans, almost from first principles. These models can be either very simple, like statistical dose-response models, or sophisticated and based on systems biology, according to the goal pursued and the data available. All that such models need are good parameter values for the molecule of interest.\nMicrofluidic cell culture systems such as micro cell culture analogs (μCCAs) could be used in conjunction with PBPK models. 
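As a concrete illustration of the single-compartment case referred to above, the following Python sketch assumes an intravenous bolus dose, instantaneous distribution into one well-mixed volume, and first-order elimination; the function name and parameter values are illustrative placeholders, not figures from the text.

```python
# Minimal one-compartment pharmacokinetic sketch (illustrative only).
# After an IV bolus, the drug is assumed to distribute instantly into a single
# well-mixed volume V and to be eliminated by first-order kinetics with
# rate constant k = CL / V, giving C(t) = (dose / V) * exp(-k * t).

import math

def concentration(t_h: float, dose_mg: float, v_l: float, cl_l_per_h: float) -> float:
    """Plasma concentration (mg/L) at time t_h hours after an IV bolus dose."""
    k = cl_l_per_h / v_l                     # first-order elimination rate constant (1/h)
    return (dose_mg / v_l) * math.exp(-k * t_h)

# Example with placeholder values: 100 mg dose, 40 L volume of distribution, 5 L/h clearance
for t in (0, 2, 4, 8, 12):
    print(t, round(concentration(t, dose_mg=100, v_l=40, cl_l_per_h=5), 3))
```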
These scaled-down μCCA devices, also termed body-on-a-chip devices, can simulate multi-tissue interactions under near-physiological fluid flow conditions and with realistic tissue-to-tissue size ratios. Data obtained with these systems may be used to test and refine mechanistic hypotheses. Microfabricating devices also allows us to custom-design them and scale the organs' compartments correctly with respect to one another.\nBecause the device can be used with both animal and human cells, it can facilitate cross-species extrapolation. Used in conjunction with PBPK models, the devices permit an estimation of effective concentrations that can be used for studies with animal models or predict the human response. In the development of multicompartment devices, representations of the human body such as those used in PBPK models can be used to guide the device design with regard to the arrangement of chambers and fluidic channel connections to augment the drug development process, resulting in increased success in clinical trials.", "Researchers are working towards building a multi-channel 3D microfluidic cell culture system that compartmentalizes microenvironments in which 3D cellular aggregates are cultured to mimic multiple organs in the body. Most organ-on-a-chip models today only culture one cell type, so even though they may be valid models for studying whole organ functions, the systemic effect of a drug on the human body is not verified.\nIn particular, an integrated cell culture analog (µCCA) was developed that included lung cells, drug-metabolizing liver cells and fat cells. The cells were linked in a 2D fluidic network with culture medium circulating as a blood surrogate, thus efficiently providing a nutritional delivery transport system, while simultaneously removing wastes from the cells. \"The development of the µCCA laid the foundation for a realistic in vitro pharmacokinetic model and provided an integrated biomimetic system for culturing multiple cell types with high fidelity to in vivo situations\", claim C. Zhang et al. They have developed a microfluidic human-on-a-chip, culturing four different cell types to mimic four human organs: liver, lung, kidney and fat. They focused on developing a standard serum-free culture medium that would be valuable to all cell types included in the device. Optimized standard media are generally targeted to one specific cell type, whereas a human-on-a-chip will evidently require a common medium (CM). In fact, they claim to have identified a cell culture CM that, when used to perfuse all cell cultures in the microfluidic device, maintains the cells' functional levels. Heightening the sensitivity of the in vitro cultured cells ensures the validity of the device, that is, that any drug injected into the microchannels will stimulate the same physiological and metabolic reaction from the sample cells as whole organs would in humans.\nA human-on-a-chip design that allows tuning microfluidic transport to multiple tissues using a single fluidic actuator was designed and evaluated for modelling prediabetic hyperglycaemia using liver and pancreatic tissues.\nWith more extensive development of these kinds of chips, pharmaceutical companies will potentially be able to measure direct effects of one organ's reaction on another. For instance, the delivery of biochemical substances would be screened to confirm that even though it may benefit one cell type, it does not compromise the functions of others. 
It is probably already possible to print these organs with 3D printers, but the cost is too high. Designing whole-body biomimetic devices addresses a major reservation that pharmaceutical companies have towards organs-on-chips, namely the isolation of organs. As these devices become more and more accessible, the complexity of the designs increases rapidly. Systems will soon have to simultaneously provide mechanical perturbation and fluid flow through a circulatory system. "Anything that requires dynamic control rather than just static control is a challenge", says Takayama from the University of Michigan. This challenge has been partially tackled by the tissue engineering group of Linda Griffith at MIT. A complex multi-organ-on-a-chip was developed with 4, 7, or 10 organs interconnected through fluidic control, and the system is able to maintain the function of these organs for weeks.", "Human skin is the first line of defense against many pathogens and can itself be subject to a variety of diseases and issues, such as cancers and inflammation. As such, skin-on-a-chip (SoC) applications include testing of topical pharmaceuticals and cosmetics, studying the pathology of skin diseases and inflammation, and "creating noninvasive automated cellular assays" to test for the presence of antigens or antibodies that could denote the presence of a pathogen. Despite the wide variety of potential applications, relatively little research has gone into developing a skin-on-a-chip compared to many other organ-on-a-chip systems, such as lungs and kidneys. Issues such as detachment of the collagen scaffolding from microchannels, incomplete cellular differentiation, and the predominant use of poly(dimethylsiloxane) (PDMS) for device fabrication (a material that has been shown to leach chemicals into biological samples and cannot be mass-produced) stymie the standardization of a platform. One additional difficulty is the variability of the cell-culture scaffolding, the base substance in which cells are cultured, that is used in skin-on-chip devices. In the human body, this substance is known as the extracellular matrix.\nThe extracellular matrix (ECM) is composed primarily of collagen, and various collagen-based scaffolds have been tested in SoC models. Collagen tends to detach from the microfluidic backbone during culturing due to the contraction of fibroblasts. One study attempted to address this problem by comparing the qualities of collagen scaffolding from three different animal sources: pig skin, rat tail, and duck feet. Other studies also faced detachment issues due to contraction, which can be problematic considering that the process of full skin differentiation can take up to several weeks. Contraction issues have been avoided by replacing collagen scaffolding with a fibrin-based dermal matrix, which did not contract. Greater differentiation and formation of cell layers were also reported in microfluidic culture when compared to traditional static culture, agreeing with earlier findings of improved cell-cell and cell-matrix interactions due to dynamic perfusion, or increased permeation through interstitial spaces due to the pressure from continuous media flow. This improved differentiation and growth is thought to be in part a product of the shear stress created by the pressure gradient along a microchannel due to fluid flow, which may also improve nutrient supply to cells not directly adjacent to the medium.
In static cultures, used in traditional skin equivalents, cells receive nutrients in the medium only through diffusion, whereas dynamic perfusion can improve nutrient flow through interstitial spaces, or gaps between cells. This perfusion has also been demonstrated to improve tight junction formation of the stratum corneum, the tough outer layer of the epidermis, which is the main barrier to penetration of the surface layer of the skin.\nDynamic perfusion may also improve cell viability, demonstrated by placing a commercial skin equivalent in a microfluidic platform that extended the expected lifespan by several weeks. This early study also demonstrated the importance of hair follicles in skin equivalent models. Hair follicles are the primary route into the subcutaneous layer for topical creams and other substances applied to the surface of the skin, a feature that more recent studies have often not accounted for.\nOne study developed a SoC consisting of three layers, the epidermis, dermis, and endothelial layer, separated by porous membranes, to study edema, swelling due to extracellular fluid accumulation, a common response to infection or injury and an essential step for cellular repair. It was demonstrated that pre-application of Dex, a steroidal cream with anti-inflammatory properties, reduced this swelling in the SoC.", "Cardiovascular diseases are often caused by changes in structure and function of small blood vessels. For instance, self-reported rates of hypertension suggest that the rate is increasing, says a 2003 report from the National Health and Nutrition Examination Survey. A microfluidic platform simulating the biological response of an artery could not only enable organ-based screens to occur more frequently throughout a drug development trial, but also yield a comprehensive understanding of the underlying mechanisms behind pathologic changes in small arteries and develop better treatment strategies. Axel Gunther from the University of Toronto argues that such MEMS-based devices could potentially help in the assessment of a patient's microvascular status in a clinical setting (personalized medicine).\nConventional methods used to examine intrinsic properties of isolated resistance vessels (arterioles and small arteries with diameters varying between 30 µm and 300 µm) include the pressure myography technique. However, such methods currently require manually skilled personnel and are not scalable. An artery-on-a-chip could overcome several of these limitations by accommodating an artery onto a platform which would be scalable, inexpensive and possibly automated in its manufacturing.\nAn organ-based microfluidic platform has been developed as a lab-on-a-chip onto which a fragile blood vessel can be fixed, allowing for determinants of resistance artery malfunctions to be studied.\nThe artery microenvironment is characterized by surrounding temperature, transmural pressure, and luminal & abluminal drug concentrations. The multiple inputs from a microenvironment cause a wide range of mechanical or chemical stimuli on the smooth muscle cells (SMCs) and endothelial cells (ECs) that line the vessel's outer and luminal walls, respectively. Endothelial cells are responsible for releasing vasoconstriction and vasodilator factors, thus modifying tone. Vascular tone is defined as the degree of constriction inside a blood vessel relative to its maximum diameter. 
Pathogenic concepts currently believe that subtle changes to this microenvironment have pronounced effects on arterial tone and can severely alter peripheral vascular resistance. The engineers behind this design believe that a specific strength lies in its ability to control and simulate heterogeneous spatiotemporal influences found within the microenvironment, whereas myography protocols have, by virtue of their design, only established homogeneous microenvironments. They proved that by delivering phenylephrine through only one of the two channels providing superfusion to the outer walls, the drug-facing side constricted much more than the drug opposing side.\nThe artery-on-a-chip is designed for reversible implantation of the sample. The device contains a microchannel network, an artery loading area and a separate artery inspection area. There is a microchannel used for loading the artery segment, and when the loading well is sealed, it is also used as a perfusion channel, to replicate the process of nutritive delivery of arterial blood to a capillary bed in the biological tissue. Another pair of microchannels serves to fix the two ends of the arterial segment. Finally, the last pair of microchannels is used to provide superfusion flow rates, in order to maintain the physiological and metabolic activity of the organ by delivering a constant sustaining medium over the abluminal wall. A thermoelectric heater and a thermoresistor are connected to the chip and maintain physiological temperatures at the artery inspection area.\nThe protocol of loading and securing the tissue sample into the inspection zone helps understand how this approach acknowledges whole organ functions. After immersing the tissue segment into the loading well, the loading process is driven by a syringe withdrawing a constant flow rate of buffer solution at the far end of the loading channel. This causes the transport of the artery towards its dedicated position. This is done with closed fixation and superfusion in/outlet lines. After stopping the pump, sub-atmospheric pressure is applied through one of the fixation channels. Then after sealing the loading well shut, the second fixation channel is subjected to a sub-atmospheric pressure. Now the artery is symmetrically established in the inspection area, and a transmural pressure is felt by the segment. The remaining channels are opened and constant perfusion and superfusion are adjusted using separate syringe pumps.\nVessel-on-chips have been applied to study many disease processes. For example, Alireza Mashaghi and his co-workers developed a model to study viral hemorrhagic syndrome, which involves virus induced vascular integrity loss. The model was used to study Ebola virus disease and to study anti-Ebola drugs. In 2021, the approach has been adapted to model Lassa fever and to show the therapeutic effects of peptide FX-06 for Lassa virus disease.", "Recreation of the prostate epithelium is motivated by evidence suggesting it to be the site of nucleation in cancer metastasis. These systems essentially serve as the next step in the development of cells cultured from mice to two and subsequently three-dimensional human cell culturing. 
PDMS developments have enabled the creation of microfluidic systems that offer the benefit of adjustable topography, gas and liquid exchange, as well as an ease of observation via conventional microscopy.\nResearchers at the University of Grenoble Alpes have outlined a methodology that utilizes such a microfluidic system in the attempt to construct a viable Prostate epithelium model. The approach focuses on a cylindrical microchannel configuration, mimicking the morphology of a human secretory duct, within which the epithelium is located. Various microchannel diameters were assessed for successful promotion of cell cultures, and it was observed that diameters of 150-400 µm were the most successful. Furthermore, cellular adhesion endured throughout this experimentation, despite the introduction of physical stress through variations in microfluidic currents.\nThe objective of these constructions is to facilitate the collection of prostatic fluid, along with gauging cellular reactions to microenvironmental changes. Additionally, prostate-on-a-chip enables the recreation of metastasis scenarios, which allows the assessment of drug candidates and other therapeutic approaches. Scalability of this method is also attractive to researchers, as the reusable mold approach ensures a low-cost of production.", "Lung-on-a-chips are being designed in an effort to improve the physiological relevance of existing in vitro alveolar-capillary interface models. Such a multifunctional microdevice can reproduce key structural, functional and mechanical properties of the human alveolar-capillary interface (i.e., the fundamental functional unit of the living lung).\nDongeun Huh from Wyss Institute for Biologically Inspired Engineering at Harvard describes their fabrication of a system containing two closely apposed microchannels separated by a thin (10 µm) porous flexible membrane made of PDMS. The device largely comprises three microfluidic channels, and only the middle one holds the porous membrane. Culture cells were grown on either side of the membrane: human alveolar epithelial cells on one side, and human pulmonary microvascular endothelial cells on the other.\nThe compartmentalization of the channels facilitates not only the flow of air as a fluid which delivers cells and nutrients to the apical surface of the epithelium, but also allows for pressure differences to exist between the middle and side channels. During normal inspiration in a human's respiratory cycle, intrapleural pressure decreases, triggering an expansion of the alveoli. As air is pulled into the lungs, alveolar epithelium and the coupled endothelium in the capillaries are stretched. Since a vacuum is connected to the side channels, a decrease in pressure will cause the middle channel to expand, thus stretching the porous membrane and subsequently, the entire alveolar-capillary interface. The pressure-driven dynamic motion behind the stretching of the membrane, also described as a cyclic mechanical strain (valued at approximately 10%), significantly increases the rate of nanoparticle translocation across the porous membrane, when compared to a static version of this device, and to a Transwell culture system.\nIn order to fully validate the biological accuracy of a device, its whole-organ responses must be evaluated. 
In this instance, researchers inflicted injuries on the cells:\n* Pulmonary inflammation: Pulmonary inflammatory responses entail a multistep strategy: alongside an early-response release of cytokines from the epithelial cells, the interface should express an increased number of leukocyte adhesion molecules. In Huh's experiment, pulmonary inflammation was simulated by introducing medium containing a potent proinflammatory mediator. Only hours after the injury was caused, the cells in the microfluidic device subjected to cyclic strain reacted in accordance with the previously mentioned biological response.\n* Pulmonary infection: Living E. coli bacteria were used to demonstrate how the system can even mimic the innate cellular response to a bacterial pulmonary infection. The bacteria were introduced onto the apical surface of the alveolar epithelium. Within hours, neutrophils were detected in the alveolar compartment, meaning that they had transmigrated from the vascular microchannel across the porous membrane and had phagocytized the bacteria.\nAdditionally, researchers believe this lung-on-a-chip system will be valuable in toxicology applications. By investigating the pulmonary response to nanoparticles, researchers hope to learn more about health risks in certain environments and to correct previously oversimplified in vitro models. Because a microfluidic lung-on-a-chip can more exactly reproduce the mechanical properties of a living human lung, its physiological responses will be quicker and more accurate than those of a Transwell culture system. Nevertheless, published studies admit that responses of a lung-on-a-chip do not yet fully reproduce the responses of native alveolar epithelial cells.", "Renal cells and nephrons have already been simulated by microfluidic devices. "Such cell cultures can lead to new insights into cell and organ function and be used for drug screening". A kidney-on-a-chip device has the potential to accelerate research encompassing artificial replacement for lost kidney function. Nowadays, dialysis requires patients to go to a clinic up to three times per week. A more transportable and accessible form of treatment would not only improve the patient's overall health (by increasing the frequency of treatment), but would also make the whole process more efficient and tolerable. Artificial kidney research is striving to bring transportability, wearability and perhaps implantation capability to the devices through innovative disciplines: microfluidics, miniaturization and nanotechnology.\nThe nephron is the functional unit of the kidney and is composed of a glomerulus and a tubular component. Researchers at MIT claim to have designed a bioartificial device that replicates the function of the nephron's glomerulus, proximal convoluted tubule and loop of Henle.\nEach part of the device has its unique design, generally consisting of two microfabricated layers separated by a membrane. The only inlet to the microfluidic device is designed for the entering blood sample. In the glomerulus section of the nephron, the membrane allows certain blood particles through its wall of capillary cells, composed of the endothelium, basement membrane and the epithelial podocytes. The fluid that is filtered from the capillary blood into Bowman's space is called filtrate or primary urine.\nIn the tubules, some substances are added to the filtrate as part of urine formation, and some substances are reabsorbed out of the filtrate and back into the blood.
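The filtration and reabsorption targets quoted further below (a glomerular filtration fraction of 15-20% and proximal-tubule reabsorption of 65-70%) can be tied together with a simple flow balance. The sketch below does this for a hypothetical inlet flow and a plasma-like urea level; all numbers are illustrative assumptions, not specifications of the MIT device.

```python
def nephron_flow_balance(inlet_flow_ul_min=10.0,
                         filtration_fraction=0.18,    # within the 15-20% target quoted below
                         proximal_reabsorption=0.67,  # within the 65-70% target quoted below
                         perfusate_urea_mM=5.0,       # plasma-like urea level (assumption)
                         target_urine_urea_mM=300.0): # middle of the 200-400 mM range
    """Toy flow balance for a nephron-like chip; all values are illustrative."""
    filtrate = inlet_flow_ul_min * filtration_fraction            # primary urine, µL/min
    after_proximal = filtrate * (1.0 - proximal_reabsorption)     # flow leaving the proximal tubule
    # If urea were freely filtered and not reabsorbed, concentrating it from the
    # perfusate level to the target urinary level requires shrinking the filtrate
    # volume by the same factor:
    final_urine = filtrate * perfusate_urea_mM / target_urine_urea_mM
    return filtrate, after_proximal, final_urine

if __name__ == "__main__":
    f, p, u = nephron_flow_balance()
    print(f"filtrate {f:.2f} µL/min, post-proximal {p:.2f} µL/min, final urine {u:.3f} µL/min")
```

The gap between the post-proximal flow and the final urine flow illustrates how much additional water reabsorption the downstream segments (loop of Henle and collecting ducts) must provide before the quoted urinary urea range is reached.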
The first segment of these tubules is the proximal convoluted tubule. This is where the almost complete absorption of nutritionally important substances takes place. In the device, this section is merely a straight channel, but blood particles passing to the filtrate have to cross the previously mentioned membrane and a layer of renal proximal tubule cells. The second segment of the tubules is the loop of Henle, where the reabsorption of water and ions from the urine takes place. The device's looping channels strive to simulate the countercurrent mechanism of the loop of Henle. Likewise, the loop of Henle requires a number of different cell types because each cell type has distinct transport properties and characteristics. These include the descending limb cells, thin ascending limb cells, thick ascending limb cells, cortical collecting duct cells and medullary collecting duct cells.\nOne step towards validating the microfluidic device's simulation of the full filtration and reabsorption behavior of a physiological nephron would include demonstrating that the transport properties between blood and filtrate are identical with regard to where they occur and what the membrane lets through. For example, the large majority of passive water transport occurs in the proximal tubule and the descending thin limb, while the active transport of NaCl largely occurs in the proximal tubule and the thick ascending limb. The device's design requirements include a filtration fraction in the glomerulus of 15–20%, a filtrate reabsorption in the proximal convoluted tubule of 65–70%, and finally a urea concentration in the urine (collected at one of the two outlets of the device) of 200–400 mM.\nOne recent report illustrates a biomimetic nephron on hydrogel microfluidic devices, establishing the function of passive diffusion. The complex physiological function of the nephron is achieved on the basis of interactions between vessels and tubules (both of which are hollow channels). However, conventional laboratory techniques usually focus on 2D structures, such as the Petri dish, which lack the capability to recapitulate the real physiology that occurs in 3D. Therefore, the authors developed a new method to fabricate functional, cell-lined and perfusable microchannels inside a 3D hydrogel. Vascular endothelial and renal epithelial cells are cultured inside the hydrogel microchannels and form cellular coverings that mimic vessels and tubules, respectively. The authors employed confocal microscopy to examine the passive diffusion of a small organic molecule (typically a drug) between the vessels and tubules in the hydrogel. The study demonstrates the potential of mimicking renal physiology for regenerative medicine and drug screening.", "An organ-on-a-chip (OOC) is a multi-channel 3-D microfluidic cell culture, integrated circuit (chip) that simulates the activities, mechanics and physiological response of an entire organ or an organ system. It constitutes the subject matter of significant biomedical engineering research, more precisely in bio-MEMS. The convergence of labs-on-chips (LOCs) and cell biology has permitted the study of human physiology in an organ-specific context.
By acting as a more sophisticated in vitro approximation of complex tissues than standard cell culture, they have the potential to serve as an alternative to animal models for drug development and toxin testing.\nAlthough multiple publications claim to have translated organ functions onto this interface, the development of these microfluidic applications is still in its infancy. Organs-on-chips vary in design and approach between different researchers. Organs that have been simulated by microfluidic devices include the brain, lung, heart, kidney, liver, prostate, vessel (artery), skin, bone, cartilage and more.\nA limitation of the early organ-on-a-chip approach is that simulation of an isolated organ may miss significant biological phenomena that occur in the body's complex network of physiological processes, and that this oversimplification limits the inferences that can be drawn. Many aspects of subsequent microphysiometry aim to address these constraints by modeling more sophisticated physiological responses under accurately simulated conditions via microfabrication, microelectronics and microfluidics.\nThe development of organ chips has enabled the study of the complex pathophysiology of human viral infections. An example is the liver chip platform that has enabled studies of viral hepatitis.", "A lab-on-a-chip is a device that integrates one or several laboratory functions on a single chip and handles particles in hollow microfluidic channels. It has been under development for over a decade. Advantages of handling particles at such a small scale include lower fluid volume consumption (lower reagent costs, less waste), increased portability of the devices, increased process control (due to quicker thermo-chemical reactions) and decreased fabrication costs. Additionally, microfluidic flow is essentially laminar (i.e., no turbulence); for example, aqueous medium flowing at 1 mm/s through a 100 µm channel has a Reynolds number on the order of 0.1, far below the threshold for turbulence. Consequently, there is virtually no mixing between neighboring streams in one hollow channel. In its convergence with cellular biology, this rare fluid property has been leveraged to better study complex cell behaviors, such as cell motility in response to chemotactic stimuli, stem cell differentiation, axon guidance, subcellular propagation of biochemical signaling and embryonic development.", "The human gut-on-a-chip contains two microchannels that are separated by a flexible, porous, extracellular matrix (ECM)-coated membrane lined with gut epithelial (Caco-2) cells, which have been used extensively as a model of the intestinal barrier. Caco-2 cells, derived from a human colon adenocarcinoma, are cultured under conditions that permit their spontaneous differentiation, and they serve as a model of the protective and absorptive properties of the gut. The microchannels are fabricated from polydimethylsiloxane (PDMS) polymer. In order to mimic the gut microenvironment, peristalsis-like fluid flow is applied. By inducing suction in vacuum chambers along both sides of the main cell channel bilayer, cyclic mechanical strain (stretching and relaxing) is generated to mimic gut motions. Furthermore, the cells undergo spontaneous villus morphogenesis and differentiation, which recapitulates characteristics of intestinal cells. On the three-dimensional villus scaffold, cells not only proliferate, but their metabolic activities are also enhanced. Another important player in the gut is its microbial community, the gut microbiota. Many microbial species in the gut microbiota are strict anaerobes.
In order to co-culture these oxygen-intolerant anaerobes with the oxygen-requiring intestinal cells, a gut-on-a-chip fabricated from polysulfone was designed. The system maintained the co-culture of colon epithelial cells, goblet-like cells, and the bacteria Faecalibacterium prausnitzii, [https://www.ncbi.nlm.nih.gov/Taxonomy/Browser/wwwtax.cgi?mode=Info&id=39491 Eubacterium rectale], and Bacteroides thetaiotaomicron.\nOral administration is one of the most common routes of drug administration. It allows patients, especially out-patients, to self-administer drugs with minimal risk of acute drug reactions and, in most cases, without pain. However, the drug's action in the body can be strongly influenced by the first-pass effect. The gut, which plays an important role in the human digestive system, determines the effectiveness of a drug by absorbing it selectively according to its chemical and biological properties. While developing new drugs is costly and time consuming, the high throughput attainable with gut-on-a-chip technology can significantly decrease the research and development costs and time for new drugs.\nEven though the cause of inflammatory bowel disease (IBD) is elusive, its pathophysiology involves the gut microbiota. Current methods of inducing IBD-like conditions use inflammatory cues to activate Caco-2 cells. It was found that the intestinal epithelium experienced a reduction in barrier function and increased cytokine concentrations. The gut-on-a-chip allowed for the assessment of drug transport, absorption and toxicity, as well as potential developments in studying pathogenesis and interactions in the microenvironment overall. Because immune cells are essential in mediating inflammatory processes in many gastrointestinal disorders, a recent gut-on-a-chip system also includes multiple immune cell types, e.g., macrophages, dendritic cells, and CD4+ T cells. Additionally, the gut-on-a-chip allows the testing of the anti-inflammatory effects of bacterial species.\nThe chip was used to model human radiation-induced injury to the intestine in vitro, as it recapitulated the injuries at both the cellular and tissue levels. These injuries include, but are not limited to, inhibition of mucus production, promotion of villus blunting, and distortion of microvilli.", "Brain-on-a-chip devices create an interface between neuroscience and microfluidics by: 1) improving culture viability; 2) supporting high-throughput screening; 3) modeling organ-level physiology and disease in vitro/ex vivo; and 4) adding the high precision and tunability of microfluidic devices. Brain-on-a-chip devices span multiple levels of complexity in terms of cell culture methodology. Devices have been made using platforms that range from traditional 2D cell culture to 3D tissues in the form of organotypic brain slices.\nOrganotypic brain slices are an in vitro model that replicates in vivo physiology with additional throughput and optical benefits, thus pairing well with microfluidic devices. Brain slices have advantages over primary cell culture in that tissue architecture is preserved and multicellular interactions can still occur. There is flexibility in their use, as slices can be used acutely (less than 6 hours after slice harvesting) or cultured for later experimental use. Because organotypic brain slices can maintain viability for weeks, they allow long-term effects to be studied.
Slice-based systems also provide experimental access with precise control of extracellular environments, making them a suitable platform for correlating disease with neuropathological outcomes. Because approximately 10 to 20 slices can be extracted from a single brain, animal usage is significantly reduced relative to in vivo studies. Organotypic brain slices can be extracted and cultured from multiple animal species (e.g. rats), but also from humans.\nMicrofluidic devices have been paired with organotypic slices to improve culture viability. The standard procedure for culturing organotypic brain slices (around 300 microns in thickness) uses semi-porous membranes to create an air-medium interface, but this technique results in diffusion limitations of nutrients and dissolved gases. Because microfluidic systems introduce laminar flow of these necessary nutrients and gases, transport is improved and higher tissue viability can be achieved. In addition to keeping standard slices viable, brain-on-a-chip platforms have allowed the successful culturing of thicker brain slices (approximately 700 microns), despite a significant transport barrier due to thickness. As thicker slices retain more native tissue architecture, this allows brain-on-a-chip devices to achieve more "in vivo-like" characteristics without sacrificing cell viability. Microfluidic devices support high-throughput screening and toxicological assessments in both 2D and slice cultures, leading to the development of novel therapeutics targeted to the brain. One device was able to screen the drugs pitavastatin and irinotecan combinatorially in glioblastoma multiforme (the most common form of human brain cancer). These screening approaches have been combined with the modeling of the blood-brain barrier (BBB), a significant hurdle for drugs to overcome when treating the brain, allowing drug efficacy across this barrier to be studied in vitro. Microfluidic probes have been used to deliver dyes with high regional precision, making way for localized microperfusion in drug applications. Microfluidic BBB in vitro models replicate a 3D environment for embedded cells (which provides precise control of the cellular and extracellular environment), replicate shear stress, have more physiologically relevant morphology in comparison to 2D models, and allow easy incorporation of different cell types into the device. Because microfluidic devices can be designed with optical accessibility, this also allows for the visualization of morphology and processes in specific regions or individual cells. Brain-on-a-chip systems can model organ-level physiology in neurological diseases, such as Alzheimer's disease, Parkinson's disease, and multiple sclerosis, more accurately than traditional 2D and 3D cell culture techniques. The ability to model these diseases in a way that is indicative of in vivo conditions is essential for the translation of therapies and treatments. Additionally, brain-on-a-chip devices have been used for medical diagnostics, such as biomarker detection for cancer in brain tissue slices.\nBrain-on-a-chip devices can cause shear stress on cells or tissue due to flow through small channels, which can result in cellular damage. These small channels also introduce susceptibility to the trapping of air bubbles, which can disrupt flow and potentially damage the cells. The widespread use of PDMS (polydimethylsiloxane) in brain-on-a-chip devices has some drawbacks.
Although PDMS is cheap, malleable, and transparent, proteins and small molecules can be absorbed by it and later leach out at uncontrolled rates.\nDespite the progress in microfluidic BBB devices, these devices are often too technically complex, require highly specialized setups and equipment, and are unable to detect temporal and spatial differences in the transport kinetics of substances that migrate across cellular barriers. Also, direct measurements of permeability in these models are limited due to the limited perfusion and the complex, poorly defined geometry of the newly formed microvascular network.", "Past efforts to replicate in vivo cardiac tissue environments have proven challenging due to difficulties in mimicking contractility and electrophysiological responses. Such features would greatly increase the accuracy of in vitro experiments.\nMicrofluidics has already contributed to in vitro experiments on cardiomyocytes, which generate the electrical impulses that control the heart rate. For instance, researchers have built an array of PDMS microchambers, aligned with sensors and stimulating electrodes, as a tool to electrochemically and optically monitor cardiomyocyte metabolism. Another lab-on-a-chip similarly combined a microfluidic network in PDMS with planar microelectrodes, this time to measure extracellular potentials from single adult murine cardiomyocytes.\nA reported design of a heart-on-a-chip claims to have built "an efficient means of measuring structure-function relationships in constructs that replicate the hierarchical tissue architectures of laminar cardiac muscle." With this chip it was determined that the alignment of the myocytes in the contractile apparatus of the cardiac tissue and the gene expression profile (affected by shape and cell-structure deformation) contribute to the force produced in cardiac contractility. This heart-on-a-chip is a biohybrid construct: an engineered anisotropic ventricular myocardium on an elastomeric thin film.\nThe design and fabrication process of this particular microfluidic device entails first covering the edges of a glass surface with tape (or any protective film) so as to contour the substrate's desired shape. A spin-coated layer of PNIPA is then applied. After its dissolution, the protective film is peeled away, resulting in a self-standing body of PNIPA. The final steps involve spin coating a protective surface of PDMS over the cover slip and curing it. Muscular thin films (MTF) enable cardiac muscle monolayers to be engineered on a thin flexible substrate of PDMS. In order to properly seed the 2D cell culture, a microcontact printing technique was used to lay out a fibronectin "brick wall" pattern on the PDMS surface. Once the ventricular myocytes were seeded on the functionalized substrate, the fibronectin pattern oriented them to generate an anisotropic monolayer.\nAfter the thin films are cut into two rows with rectangular teeth and the whole device is placed in a bath, electrodes stimulate the contraction of the myocytes via field stimulation, thus curving the strips/teeth of the MTF.
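As the following passage notes, tissue stress can be inferred from how tightly the strips curl. In the simplest approximation this is the classical Stoney relation for a thin contractile layer on an elastic substrate; the published muscular-thin-film analyses use a modified form that accounts for comparable film and substrate thicknesses, so the sketch below (with hypothetical PDMS properties and layer thicknesses) should be read only as an order-of-magnitude illustration.

```python
def stoney_stress(radius_of_curvature_m,
                  substrate_modulus_pa=1.5e6,   # PDMS Young's modulus (assumed)
                  substrate_poisson=0.49,       # PDMS Poisson ratio (assumed)
                  substrate_thickness_m=15e-6,  # PDMS film thickness (assumed)
                  tissue_thickness_m=5e-6):     # cardiac monolayer thickness (assumed)
    """Classical Stoney estimate of the stress in a thin contractile layer
    that bends an elastic substrate to a given radius of curvature R:
        sigma = E_s * t_s**2 / (6 * (1 - nu_s) * t_f * R)
    This neglects the finite-thickness corrections used in actual MTF analyses."""
    return (substrate_modulus_pa * substrate_thickness_m ** 2 /
            (6.0 * (1.0 - substrate_poisson) * tissue_thickness_m * radius_of_curvature_m))

if __name__ == "__main__":
    for radius_mm in (5.0, 2.0, 1.0):   # tighter curling implies larger tissue stress
        sigma = stoney_stress(radius_mm * 1e-3)
        print(f"R = {radius_mm:4.1f} mm  ->  tissue stress ~ {sigma / 1e3:.1f} kPa")
```

Tighter curling (a smaller radius of curvature) corresponds to higher tissue stress, which is the basis of the correlation described next.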
Researchers have developed a correlation between tissue stress and the radius of curvature of the MTF strips during the contractile cycle, validating the demonstrated chip as a "platform for quantification of stress, electrophysiology and cellular architecture."\nWhile researchers have focused on 2D cell cultures, 3D cell constructs mimic the in vivo environment and the interactions (e.g., cell to cell) occurring in the human body better. Hence, they are considered promising models for studies such as toxicology and drug response. In the study by Chen et al., the interactions of valvular endothelial and interstitial cells (VECs/VICs) were studied via a 3D PDMS-glass microfluidic device with a top channel perfused with VECs under shear stress, a membrane with uniform pores, and a bottom channel containing VIC-laden hydrogel. VECs were shown to restrain the differentiation of VICs into pathological myofibroblasts, with the suppression reinforced by shear stress.\nAnother 3D PDMS microfluidic heart-on-a-chip design was measured to generate 10% to 15% uniaxial cyclic mechanical strain. The device consists of a cell culture compartment with hanging posts for caging the tissue and an actuation compartment with scaffolding posts to avoid buckling of the PDMS, driven by a pressure signal imitating the cardiac cycle. The neonatal rat micro-engineered cardiac tissues (μECTs) stimulated by this design show improved synchronous beating, proliferation, maturation, and viability compared to the unstimulated control. The contraction rate of human induced pluripotent stem cell-derived cardiomyocytes (hiPSC-CMs) was observed to accelerate with 100-fold less isoprenaline, a heart block treatment, when an electrical pacing signal (+ES) was applied than without ES.\n3D microfluidic heart-on-a-chips have also facilitated research on heart diseases. For instance, cardiac hypertrophy and fibrosis are studied via the respective biomarker levels of mechanically stimulated μECTs, such as atrial natriuretic peptide (ANP) for the former and transforming growth factor-β (TGF-β) for the latter. Knowledge of ischaemia has also been gained from action potential observations.\nThe microfluidic approaches utilized for teasing apart specific mechanisms at the single-cell level and at the tissue level are becoming increasingly sophisticated, and so are the fabrication methods. The rapid dissemination and availability of low-cost, high-resolution 3D printing technology are revolutionizing this space and opening new possibilities for building patient-specific heart and cardiovascular systems. The confluence of high-resolution 3D printing and patient-derived iPSCs with artificial intelligence is poised to make significant strides towards truly personalized heart modelling and, ultimately, patient care.", "The liver is a major organ of metabolism, involved in glycogen storage, the decomposition of red blood cells, the synthesis of certain proteins and hormones, and detoxification. Among these functions, its detoxification response is essential for new drug development and clinical trials. In addition, because of its many functions, the liver is prone to many diseases, and liver diseases have become a global challenge.\nLiver-on-a-chip devices utilize microfluidic techniques to simulate the hepatic system by imitating the complex hepatic lobules that carry out liver functions. Liver-on-a-chip devices provide a good model to help researchers study liver dysfunction and pathogenesis at relatively low cost.
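When liver chips of this kind are used to estimate drug metabolism, the intrinsic clearance measured in vitro is commonly scaled to a whole-organ prediction. That scaling step is not described in the text above; the sketch below therefore uses the standard well-stirred liver model purely as an illustrative assumption, with hypothetical values for hepatic blood flow, unbound fraction and intrinsic clearance.

```python
def well_stirred_hepatic_clearance(q_h_l_per_h=90.0,      # hepatic blood flow (hypothetical, L/h)
                                   fu=0.1,                # unbound fraction in blood (hypothetical)
                                   cl_int_l_per_h=200.0): # scaled intrinsic clearance (hypothetical)
    """Well-stirred liver model:
        CL_h = Q_h * fu * CL_int / (Q_h + fu * CL_int)
    Returns the predicted hepatic clearance and the extraction ratio."""
    cl_h = q_h_l_per_h * fu * cl_int_l_per_h / (q_h_l_per_h + fu * cl_int_l_per_h)
    extraction_ratio = cl_h / q_h_l_per_h
    return cl_h, extraction_ratio

if __name__ == "__main__":
    cl_h, er = well_stirred_hepatic_clearance()
    print(f"predicted hepatic clearance ~ {cl_h:.1f} L/h, extraction ratio ~ {er:.2f}")
```

In a chip-to-human workflow, the intrinsic clearance would be derived from the measured on-chip depletion rate rather than assumed, and the result could then feed into the PBPK models discussed earlier.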
Researchers coculture primary rat hepatocytes with other, nonparenchymal cells. This coculture method has been extensively studied and has proved beneficial for extending hepatocyte survival time and supporting the performance of liver-specific functions. Many liver-on-a-chip systems are made of poly(dimethylsiloxane) (PDMS) with multiple channels and chambers, depending on the specific design and objective. PDMS has become popular because the raw material is relatively inexpensive and it is easily molded into microfluidic devices, but it can absorb important signaling molecules, including proteins and hormones. Other, more inert materials such as polysulfone or polycarbonate are therefore also used in liver chips.\nA study by Emulate researchers assessed the advantages of using liver chips to predict drug-induced liver injury, which could reduce the high costs and time needed in drug development workflows and pipelines, sometimes described as the pharmaceutical industry's "productivity crisis". Zaher Nahle subsequently outlined 12 "reasons why micro-physiological systems (MPS) like organ-chips are better at modeling human diseases".\nOne design from Kane et al. cocultures primary rat hepatocytes and 3T3-J2 fibroblasts in an 8×8 element array of microfluidic wells. Each well is separated into two chambers. The primary chamber contains rat hepatocytes and 3T3-J2 fibroblasts and is made of glass for cell adhesion. Each primary chamber is connected to a microfluidic network that supplies metabolic substrates and removes metabolic byproducts. A 100 µm thick PDMS membrane separates the primary and secondary chambers, allowing the secondary chamber to be connected to another microfluidic network that perfuses 37 °C room air with 10% carbon dioxide, providing gas exchange for the rat hepatocytes. Urea production and steady-state protein levels demonstrate the viability of this device for use in high-throughput toxicity studies.\nAnother design from Kang et al. cocultures primary rat hepatocytes and endothelial cells. A single-channel device is made first; hepatocytes and endothelial cells are seeded on the device, separated by a thin Matrigel layer, and metabolic substrates and byproducts share this channel for supply and removal. Later, a dual-channel device is made, in which the endothelial cells and hepatocytes each have their own channel for supplying substrates and removing byproducts. The production of urea and a positive result in a hepatitis B virus (HBV) replication test show its potential for studying hepatotropic viruses.\nThere are several other applications of the liver-on-a-chip. Lu et al. developed a liver tumor-on-a-chip model. The decellularized liver matrix (DLM)-gelatin methacryloyl (GelMA)-based biomimetic liver tumor-on-a-chip proved to be a suitable design for further anti-tumor studies. Zhou et al. analyzed alcohol injury to hepatocytes and the associated signaling and recovery.\nThe liver-on-a-chip has shown great potential for liver-related research. Future goals for liver-on-a-chip devices focus on recapitulating a more realistic hepatic environment, including the reagents in the fluids and the cell types used, and on extending survival times.", "Organoids enable the study of how cells interact together in an organ, their interactions with their environment, how diseases affect them, and the effects of drugs. In vitro culture makes this system easy to manipulate and facilitates its monitoring.
While organs are difficult to culture because their size limits the penetration of nutrients, the small size of organoids limits this problem. On the other hand, they do not exhibit all organ features and interactions with other organs are not recapitulated in vitro. While research on stem cells and regulation of stemness was the first field of application of intestinal organoids, they are now also used to study e.g. uptake of nutrients, drug transport and secretion of incretin hormones. This is of great relevance in the context of malabsorption diseases as well as metabolic diseases such as obesity, insulin resistance, and diabetes.", "A multitude of organ structures have been recapitulated using organoids. This section aims to outline the state of the field as of now through providing an abridged list of the organoids that have been successfully created, along with a brief outline based on the most recent literature for each organoid, and examples of how it has been utilized in research.", "Attempts to create organs in vitro started with one of the first dissociation-reaggregation experiments where Henry Van Peters Wilson demonstrated that mechanically dissociated sponge cells can reaggregate and self-organize to generate a whole organism. In the subsequent decades, multiple labs were able to generate different types of organs in vitro through the dissociation and reaggregation of organ tissues obtained from amphibians and embryonic chicks. The formation of first tissue-like colonies in vitro was observed for the first time by co-culturing keratinocytes and 3T3 fibroblasts. The phenomena of mechanically dissociated cells aggregating and reorganizing to reform the tissue they were obtained from subsequently led to the development of the differential adhesion hypothesis by Malcolm Steinberg. With the advent of the field of stem cell biology, the potential of stem cells to form organs in vitro was realized early on with the observation that when stem cells form teratomas or embryoid bodies, the differentiated cells can organize into different structures resembling those found in multiple tissue types. The advent of the field of organoids, started with a shift from culturing and differentiating stem cells in two dimensional (2D) media, to three dimensional (3D) media to allow for the development of the complex 3-dimensional structures of organs. Utilization of 3D media culture media methods for the structural organization was made possible with the development of extracellular matrices (ECM). In the late 1980s, Bissell and colleagues showed that a laminin rich gel can be used as a basement membrane for differentiation and morphogenesis in cell cultures of mammary epithelial cells. Since 1987, researchers have devised different methods for 3D culturing, and were able to utilize different types of stem cells to generate organoids resembling a multitude of organs. In the 1990s, in addition to their role in physical support, the role of ECM components in gene expression by their interaction with integrin-based focal adhesion pathways was reported. In 2006, Yaakov Nahmias and David Odde showed the self-assembly of vascular liver organoid maintained for over 50 days in vitro. In 2008, Yoshiki Sasai and his team at RIKEN institute demonstrated that stem cells can be coaxed into balls of neural cells that self-organize into distinctive layers. 
In 2009 the Laboratory of Hans Clevers at Hubrecht Institute and University Medical Center Utrecht, Netherlands, showed that single LGR5-expressing intestinal stem cells self-organize to crypt-villus structures in vitro without necessity of a mesenchymal niche, making them the first organoids. In 2010, Mathieu Unbekandt & Jamie A. Davies demonstrated the production of renal organoids from murine fetus-derived renogenic stem cells. In 2014, Qun Wang and co-workers engineered collagen-I and laminin based gels and synthetic foam biomaterials for the culture and delivery of intestinal organoids and encapsulated DNA-functionalized gold nanoparticles into intestinal organoids to form an intestinal Trojan horse for drug delivery and gene therapy. Subsequent reports showed significant physiological function of these organoids in vitro and in vivo.\nOther significant early advancements included in 2013, Madeline Lancaster at the Institute of Molecular Biotechnology of the Austrian Academy of Sciences established a protocol starting from pluripotent stem cells to generate cerebral organoids that mimic the developing human brain's cellular organization. Meritxell Huch and Craig Dorrell at Hubrecht Institute and University Medical Center Utrecht demonstrated that single Lgr5+ cells from damaged mouse liver can be clonally expanded as liver organoids in Rspo1-based culture medium over several months. In 2014, Artem Shkumatov et al. at the University of Illinois at Urbana-Champaign demonstrated that cardiovascular organoids can be formed from ES cells through modulation of the substrate stiffness, to which they adhere. Physiological stiffness promoted three-dimensionality of EBs and cardiomyogenic differentiation. In 2015, Takebe et al. demonstrated a generalized method for organ bud formation from diverse tissues by combining pluripotent stem cell-derived tissue-specific progenitors or relevant tissue samples with endothelial cells and mesenchymal stem cells. They suggested that the less mature tissues, or organ buds, generated through the self-organized condensation principle might be the most efficient approach toward the reconstitution of mature organ functions after transplantation, rather than condensates generated from cells of a more advanced stage.", "Organoids provide an opportunity to create cellular models of human disease, which can be studied in the laboratory to better understand the causes of disease and identify possible treatments. The power of organoids in this regard was first shown for a genetic form of microcephaly, where patient cells were used to make cerebral organoids, which were smaller and showed abnormalities in early generation of neurons. In another example, the genome editing system called CRISPR was applied to human pluripotent stem cells to introduce targeted mutations in genes relevant to two different kidney diseases, polycystic kidney disease and focal segmental glomerulosclerosis. These CRISPR-modified pluripotent stem cells were subsequently grown into human kidney organoids, which exhibited disease-specific phenotypes. Kidney organoids from stem cells with polycystic kidney disease mutations formed large, translucent cyst structures from kidney tubules. When cultured in the absence of adherent cues (in suspension), these cysts reached sizes of 1 cm in diameter over several months. Kidney organoids with mutations in a gene linked to focal segmental glomerulosclerosis developed junctional defects between podocytes, the filtering cells affected in that disease. 
Importantly, these disease phenotypes were absent in control organoids of identical genetic background, but lacking the CRISPR mutations. Comparison of these organoid phenotypes to diseased tissues from mice and humans suggested similarities to defects in early development.\nAs first developed by Takahashi and Yamanaka in 2007, induced pluripotent stem cells (iPSC) can also be reprogrammed from patient skin fibroblasts. These stem cells carry the exact genetic background of the patient including any genetic mutations which might contribute to the development of human disease. Differentiation of these cells into kidney organoids has been performed from patients with Lowe Syndrome due to ORCL1 mutations. This report compared kidney organoids differentiated from patient iPSC to unrelated control iPSC and demonstrated an inability of patient kidney cells to mobilise transcription factor SIX2 from the golgi complex. Because SIX2 is a well characterised marker of nephron progenitor cells in the cap mesenchyme, the authors concluded that renal disease frequently seen in Lowe Syndrome (global failure of proximal tubule reabsorption or renal Fanconi syndrome) could be related to alteration in nephron patterning arising from nephron progenitor cells lacking this important SIX2 gene expression.\nOther studies have used CRISPR gene editing to correct the patients mutation in the patient iPSC cells to create an isogenic control, which can be performed simultaneously with iPSC reprogramming. Comparison of a patient iPSC derived organoid against an isogenic control is the current gold standard in the field as it permits isolation of the mutation of interest as the only variable within the experimental model. In one such report, kidney organoids derived from iPSC of a patient with Mainzer-Saldino Syndrome due to compound heterozygous mutations in IFT140 were compared to an isogenic control organoid in which an IFT140' variant giving rise to a non-viable mRNA transcript was corrected by CRISPR. Patient kidney organoids demonstrated abnormal ciliary morphology consistent with existing animal models which was rescued to wild type morphology in the gene corrected organoids. Comparative transcriptional profiling of epithelial cells purified from patient and control organoids highlighted pathways involved in cell polarity, cell-cell junctions and dynein motor assembly, some of which had been implicated for other genotypes within the phenotypic family of renal ciliopathies. Another report utilising an isogenic control demonstrated abnormal nephrin localisation in the glomeruli of kidney organoids generated from a patient with congenital nephrotic syndrome.\nThings such as epithelial metabolism can also be modelled.", "Organoids offer researchers an exceptional model to study developmental biology. Since the identification of pluripotent stem cells, there have been great advancements in directing pluripotent stem cells fate in vitro using 2D cultures. These advancements in PSC fate direction, coupled with the advancements in 3D culturing techniques allowed for the creation of organoids that recapitulate the properties of various specific subregions of a multitude of organs. The use of these organoids has thus greatly contributed to expanding our understanding of the processes of organogenesis, and the field of developmental biology. In central nervous system development, for example, organoids have contributed to our understanding of the physical forces that underlie retinal cup formation. 
More recent work has extended cortical organoid growth periods extensively and at nearly a year under specific differentiation conditions, the organoids persist and have some features of human fetal development stages.", "The first successful transplantation of an organoid into a human, a patient with ulcerative colitis whose cells were used for the organoid, was carried out in 2022.", "Organoid formation generally requires culturing the stem cells or progenitor cells in a 3D medium. Stem cells have the ability to self-renew and differentiate into various cell subtypes, and they enable understanding the processes of development and disease progression. Therefore organoids derived from stem cells enable studying biology and physiology at the organ level. The 3D medium can be made using an extracellular matrix hydrogel such as Matrigel or Cultrex BME, which is a laminin-rich extracellular matrix that is secreted by the Engelbreth-Holm-Swarm tumor line. Organoid bodies can then be made through embedding stem cells in the 3D medium. When pluripotent stem cells are used for the creation of the organoid, the cells are usually, but not all the time, allowed to form embryoid bodies. Those embryoid bodies are then pharmacologically treated with patterning factors to drive the formation of the desired organoid identity. Organoids have also been created using adult stem cells extracted from the target organ, and cultured in 3D media.\nBiochemical cues have been incorporated in 3D organoid cultures and with exposure of morphogenes, morphogen inhibitors, or growth factors, organoid models can be developed using embryonic stem cells (ESCs) or adult stem cells (ASCs). Vascularization techniques can be utilized to embody microenvironments that are close to their counterparts, physiologically. Vasculature systems that can facilitate oxygen or nutrients to the inner mass of organoids can be achieved through microfluidic systems, vascular endothelial growth factor delivery systems, and endothelial cell-coated modules. With patient-derived induced pluripotent stem cells (iPSCs) and CRISPR/Cas-based genome editing technologies, genome-edited or mutated pluripotent stem cells (PSCs) with altered signaling cues can be generated to control intrinsic cues within organoids.", "An organoid is a miniaturised and simplified version of an organ produced in vitro in three dimensions that mimics the key functional, structural and biological complexity of that organ. They are derived from one or a few cells from a tissue, embryonic stem cells or induced pluripotent stem cells, which can self-organize in three-dimensional culture owing to their self-renewal and differentiation capacities. The technique for growing organoids has rapidly improved since the early 2010s, and The Scientist names it as one of the biggest scientific advancements of 2013. Scientists and engineers use organoids to study development and disease in the laboratory, drug discovery and development in industry, personalized diagnostics and medicine, gene and cell therapies, tissue engineering and regenerative medicine.", "A cerebral organoid describes artificially grown, in vitro, miniature organs resembling the brain. Cerebral organoids are created by culturing human pluripotent stem cells in a three-dimensional structure using rotational bioreactor and develop over the course of months. The procedure has potential applications in the study of brain development, physiology and function. 
Cerebral organoids may experience "simple sensations" in response to external stimulation, and neuroscientists are among those expressing concern that such organs could develop sentience. They propose that further evolution of the technique needs to be subject to a rigorous oversight procedure. In 2023, researchers built a hybrid biocomputer that combines a laboratory-grown human brain organoid with conventional circuits and can complete tasks such as voice recognition. Cerebral organoids are currently being used to research and develop Organoid Intelligence (OI) technologies.", "Gastrointestinal organoids refer to organoids that recapitulate structures of the gastrointestinal tract. The gastrointestinal tract arises from the endoderm, which during development forms a tube that can be divided into three distinct regions, which give rise to, along with other organs, the following sections of the gastrointestinal tract: \n:# The foregut gives rise to the oral cavity and the stomach\n:# The midgut gives rise to the small intestines and the ascending colon \n:# The hindgut gives rise to the rectum and the rest of the colon\nOrganoids have been created for the following structures of the gastrointestinal tract:", "Intestinal organoids have thus far been among the gut organoids to be generated directly from intestinal tissues or from pluripotent stem cells. One way human pluripotent stem cells can be driven to form intestinal organoids is through the application first of activin A, to drive the cells into a mesendodermal identity, followed by pharmacological upregulation of the Wnt3a and Fgf4 signaling pathways, as these have been demonstrated to promote posterior gut fate. Intestinal organoids have also been generated from intestinal stem cells extracted from adult tissue and cultured in 3D media. These adult stem cell-derived organoids are often referred to as enteroids or colonoids, depending on their segment of origin, and have been established from both the human and the murine intestine. Intestinal organoids consist of a single layer of polarized intestinal epithelial cells surrounding a central lumen. As such, they recapitulate the crypt-villus structure of the intestine, reproducing its function, physiology and organization, and maintaining all the cell types normally found in the structure, including intestinal stem cells. Thus, intestinal organoids are a valuable model for studying intestinal nutrient transport, drug absorption and delivery, nanomaterials and nanomedicine, incretin hormone secretion, and infection by various enteropathogens. For example, Qun Wang's team rationally designed artificial virus nanoparticles as oral drug delivery vehicles (ODDVs) with gut organoid-derived mucosal models and demonstrated a new concept of using newly established colon organoids as tools for high-throughput drug screening, toxicity testing, and oral drug development. Intestinal organoids also recapitulate the crypt-villus structure to such a high degree of fidelity that they have been successfully transplanted into mouse intestines, and they are hence highly regarded as a valuable model for research. One of the fields of research in which intestinal organoids have been utilized is that of the stem cell niche.
Intestinal organoids were used to study the nature of the intestinal stem cell niche, and research done with them demonstrated the positive role IL-22 has in maintaining intestinal stem cells, along with demonstrating the roles of other cell types, like neurons and fibroblasts, in the maintenance of intestinal stem cells. In the field of infection biology, different intestinal organoid-based model systems have been explored. On one hand, organoids can be infected in bulk by simply mixing them with the enteropathogen of interest. However, to model infection via a more natural route starting from the intestinal lumen, microinjection of the pathogen is required. In addition, the polarity of intestinal organoids can be inverted, and they can even be dissociated into single cells and cultured as 2D monolayers in order to make both the apical and basolateral sides of the epithelium more easily accessible. Intestinal organoids have also demonstrated therapeutic potential.\nIn order to more accurately recapitulate the intestine in vivo, co-cultures of intestinal organoids and immune cells have been developed. Furthermore, organ-on-a-chip models combine intestinal organoids with other cell types, such as endothelial or immune cells, as well as peristaltic flow.", "Gastric organoids recapitulate at least partly the physiology of the stomach. Gastric organoids have been generated directly from pluripotent stem cells through the temporal manipulation of the FGF, WNT, BMP, retinoic acid and EGF signalling pathways in three-dimensional culture conditions. Gastric organoids have also been generated using LGR5-expressing stomach adult stem cells. Gastric organoids have been used as a model for the study of cancer along with human disease and development. For example, one study investigated the underlying genetic alterations behind a patient's metastatic tumor population, and identified that, unlike the patient's primary tumor, the metastasis had both alleles of the TGFBR2 gene mutated. To further assess the role of TGFBR2 in the metastasis, the investigators created organoids in which TGFBR2 expression was knocked down, through which they were able to demonstrate that reduced TGFBR2 activity leads to invasion and metastasis of cancerous tumors both in vitro and in vivo.", "Intestinal organoids grown from rectal biopsies using culture protocols established by the Clevers group have been used to model cystic fibrosis, and led to the first application of organoids for personalised treatment. Cystic fibrosis is an inherited disease that is caused by mutations of the cystic fibrosis transmembrane conductance regulator gene, which encodes an epithelial ion channel necessary for healthy epithelial surface fluids. Studies by the laboratory of Jeffrey Beekman (Wilhelmina Children's Hospital, University Medical Center Utrecht, The Netherlands) described in 2013 that stimulation of colorectal organoids with cAMP-raising agonists such as forskolin or cholera toxin induced rapid swelling of organoids in a fully CFTR-dependent manner. Whereas organoids from non-cystic fibrosis subjects swell in response to forskolin as a consequence of fluid transport into the organoids' lumens, this is severely reduced or absent in organoids derived from people with cystic fibrosis. Swelling could be restored by therapeutics that repair the CFTR protein (CFTR modulators), indicating that individual responses to CFTR-modulating therapy could be quantitated in a preclinical laboratory setting. Schwank et al.
also demonstrated in 2013 that the intestinal cystic fibrosis organoid phenotype could be repaired by CRISPR-Cas9 gene editing.\nFollow-up studies by Dekkers et al. in 2016 revealed that quantitative differences in forskolin-induced swelling between intestinal organoids derived from people with cystic fibrosis associate with known diagnostic and prognostic markers such as CFTR gene mutations or in vivo biomarkers of CFTR function. In addition, the authors demonstrated that CFTR modulator responses in intestinal organoids with specific CFTR mutations correlated with published clinical trial data of these treatments. This led to preclinical studies in which organoids from patients with extremely rare CFTR mutations, for whom no treatment was registered, were found to respond strongly to a clinically available CFTR modulator. The suggested clinical benefit of treatment for these subjects based on the preclinical organoid test was subsequently confirmed upon clinical introduction of treatment by members of the clinical CF center under supervision of Kors van der Ent (Department of Paediatric Pulmonology, Wilhelmina Children's Hospital, University Medical Center Utrecht, The Netherlands). These studies showed for the first time that organoids can be used for the individual tailoring of therapy, or personalised medicine.", "* Tooth organoid (TO) (see also tooth regeneration)\n* Thyroid organoid\n* Thymic organoid\n::Thymic organoids recapitulate at least partly the architecture and stem-cell niche functionality of the thymus, which is a lymphoid organ where T cells mature. Thymic organoids have been generated through the seeding of thymic stromal cells in 3-dimensional culture. Thymic organoids seem to successfully recapitulate the thymus' function, as co-culturing human hematopoietic or bone marrow stem cells with mouse thymic organoids resulted in the production of T cells. \n* Testicular organoid\n* Prostate organoid\n* Hepatic organoid. A recent study showed the usefulness of the technology for identifying novel medication for the treatment of hepatitis E, as it allows the entire viral life cycle to be recapitulated.\n* Pancreatic organoid\n::Recent advances in cell-repellent microtiter plates have allowed rapid, cost-effective screening of large, drug-like small-molecule libraries against 3D models of pancreatic cancer. These models are consistent in phenotype and expression profiles with those found in the lab of Dr. David Tuveson.\n* Epithelial organoid\n* Lung organoid\n* Kidney organoid\n* Gastruloid (embryonic organoid) – Generates all embryonic axes and fully implements the collinear Hox gene expression patterns along the anteroposterior axis.\n* Blastoid (blastocyst-like organoid)\n* Endometrial organoid\n* Cardiac organoid – In 2018 hollow cardiac organoids were made to beat, and to respond to stimuli to beat faster or slower.\n* Retinal organoid\n* Breast cancer organoid\n* Colorectal cancer organoid\n* Glioblastoma organoid\n* Neuroendocrine tumor organoid\n* Myelinoid (myelin organoid)\n* Blood-brain barrier (BBB) organoid", "Lingual organoids are organoids that recapitulate, at least partly, aspects of tongue physiology. Epithelial lingual organoids have been generated using BMI1-expressing epithelial stem cells in three-dimensional culture conditions through the manipulation of EGF, WNT, and TGF-β. This organoid culture, however, lacks taste receptors, as these cells do not arise from Bmi1-expressing epithelial stem cells.
Lingual taste bud organoids containing taste cells, however, have been created using the LGR5+ or CD44+ stem/progenitor cells of circumvallate (CV) papilla tissue. These taste bud organoids have been successfully created both directly, from isolated Lgr5- or LGR6-expressing taste stem/progenitor cells, and indirectly, through the isolation, digestion, and subsequent culturing of CV tissue containing Lgr5+ or CD44+ stem/progenitor cells.", "Lancaster and Knoblich define an organoid as a collection of organ-specific cell types that develops from stem cells or organ progenitors, self-organizes through cell sorting and spatially restricted lineage commitment in a manner similar to in vivo, and exhibits the following properties: \n* it has multiple organ-specific cell types;\n* it is capable of recapitulating some specific function of the organ (e.g. contraction, neural activity, endocrine secretion, filtration, excretion);\n* its cells are grouped together and spatially organized, similar to an organ.", "Osazones are highly coloured, crystalline compounds that are readily distinguished by their characteristic crystal shapes.\n*Maltosazone (from maltose) forms petal-shaped crystals.\n*Lactosazone (from lactose) forms powder puff-shaped crystals.\n*Galactosazone (from galactose) forms rhombic-plate shaped crystals.\n*Glucosazone (from glucose, fructose or mannose) forms broomstick or needle-shaped crystals.", "Osazone formation was developed by Emil Fischer, who used the reaction as a test to identify monosaccharides.\nThe formation of a pair of hydrazone functionalities involves both oxidation and condensation reactions. Since the reaction requires a free carbonyl group, only \"reducing sugars\" participate. Sucrose, which is nonreducing, does not form an osazone.", "Osazones are a class of carbohydrate derivatives encountered in organic chemistry, formed when reducing sugars are reacted with an excess of phenylhydrazine at boiling temperatures.", "There are currently very few ova banks in existence.\nGenerally, the main purpose of storing ova, at present, is to overcome infertility which may arise at a later age, or due to a disease. The ova are generally collected between 31 and 35 years of age.\nThe procedure of collecting ova may or may not include ovarian hyperstimulation.\nIt can be expected, however, that ova collection will become more important in the future, e.g. for third-party reproduction and/or for producing stem cells from unfertilized eggs (oocytes).", "An ova bank, cryobank, or egg cell bank is a facility that collects and stores human ova, mainly from ova donors, primarily for the purpose of achieving pregnancies either of the donor, at a later time (i.e. to overcome issues of infertility), or through third-party reproduction, notably by artificial insemination. Ova donated in this way are known as donor ova.", "Ovine forestomach matrix (OFM) (marketed as AROA ECM) is a layer of decellularized extracellular matrix (ECM) biomaterial isolated from the propria submucosa of the rumen of sheep. OFM is used in tissue engineering and as a tissue scaffold for wound healing and surgical applications.", "Multi-layered OFM devices, reinforced with synthetic polymer, were first described in 2008 and in the scientific literature in 2010. These devices, termed ‘reinforced biologics’, have been designed for applications in the surgical repair of hernia as an alternative to synthetic surgical mesh (a mesh prosthesis). OFM reinforced biologics are distributed in the US by Tela Bio Inc.
Clinical studies have shown that OFM reinforced biologics have lower hernia recurrence rates than synthetic hernia meshes or biologics such as acellular dermis.", "OFM was cleared by the FDA in 2016 and 2021 for surgical applications in plastic and reconstructive surgery as a multi-layered product (Myriad Matrix™) and a powdered format (Myriad Morcells™). OFM-based surgical devices are routinely used in complex lower extremity reconstruction, pilonidal sinus reconstruction, hidradenitis suppurativa and complex traumatic wounds.\nOFM-based surgical devices are also routinely used in plastic and reconstructive surgery for the regeneration of soft tissues when used as an artificial skin.", "Aroa Biosurgery Limited first distributed OFM commercially in 2012 as Endoform™ Dermal Template (later Endoform™ Natural) through a distribution partnership with Hollister Incorporated (IL, USA). Endoform™ Natural and Endoform™ Antimicrobial (0.3% ionic silver w/w) are single layers of OFM used in the treatment of acute and chronic wounds, including diabetic foot ulcers (DFU) and venous leg ulcers (VLU). Endoform™ Natural has been shown to accelerate wound healing of DFU. The wound product Symphony™ combines OFM and hyaluronic acid and is designed to support healing during the proliferative phase, particularly in patients whose healing is severely impaired or compromised due to disease.", "OFM can be fabricated into a range of different product presentations for tissue engineering applications, and can be functionalized with therapeutic agents including silver, doxycycline and hyaluronic acid. OFM has been commercialized as single and multi-layered sheets, reinforced biologics and powders.\nWhen placed in the body, OFM does not elicit a negative inflammatory response and is absorbed into the regenerating tissues via a process called tissue remodeling.", "OFM comprises more than 24 collagens (most notably types I and III), but also contains many growth factors, polysaccharides and proteoglycans that naturally exist as part of the extracellular matrix and play important roles in wound healing and soft tissue repair. The composition includes more than 150 different proteins, including elastin, fibronectin, glycosaminoglycans, basement membrane components, and various growth factors, such as vascular endothelial growth factor (VEGF), fibroblast growth factor (FGF) and platelet-derived growth factor (PDGF). OFM has been shown to recruit mesenchymal stem cells, stimulate cell proliferation, angiogenesis and vasculogenesis, and modulate matrix metalloproteinases and neutrophil elastase. The porous structure of OFM has been characterized by differential scanning calorimetry (DSC), scanning electron microscopy (SEM), atomic force microscopy (AFM), histology, Sirius Red staining, small-angle X-ray scattering (SAXS), and micro-computed tomography (microCT). OFM has been shown to contain residual vascular channels that facilitate blood vessel formation through angioconduction.", "OFM was developed and is manufactured by Aroa Biosurgery Limited (New Zealand, formerly Mesynthes Limited) and was first patented in 2008 and described in the scientific literature in 2010. OFM is manufactured from sheep rumen tissue, using a process of decellularization to selectively remove the unwanted sheep cells and cell components and leave an intact and functional extracellular matrix.
OFM comprises a special layer of tissue found in the rumen, the propria submucosa, which is structurally and functionally distinct from the submucosa of other gastrointestinal tissues.\nOFM was first cleared by the FDA in 2009 for the treatment of wounds. Since 2008 there have been >70 publications describing OFM and its clinical applications, and over 6 million clinical applications of OFM-based devices.", "An oxyacid, oxoacid, or ternary acid is an acid that contains oxygen. Specifically, it is a compound that contains hydrogen, oxygen, and at least one other element, with at least one hydrogen atom bonded to oxygen that can dissociate to produce the H+ cation and the anion of the acid.", "Many inorganic oxyacids are traditionally given names ending in the word acid, names which also contain, in a somewhat modified form, the name of the element they contain in addition to hydrogen and oxygen. Well-known examples of such acids are sulfuric acid, nitric acid and phosphoric acid.\nThis practice is well established, and IUPAC has accepted such names. In light of current chemical nomenclature, this practice is an exception, because systematic names of compounds are formed according to the elements they contain and their molecular structure, not according to other properties (for example, acidity) they have.\nIUPAC, however, recommends against naming compounds not yet discovered with names ending in the word acid. Indeed, acids can be given names formed by adding the word hydrogen in front of the corresponding anion; for example, sulfuric acid could just as well be called hydrogen sulfate (or dihydrogen sulfate). In fact, the fully systematic name of sulfuric acid, according to IUPAC's rules, would be dihydroxidodioxidosulfur, and that of the sulfate ion, tetraoxidosulfate(2−). Such names, however, are almost never used.\nHowever, the same element can form more than one acid when compounded with hydrogen and oxygen. In such cases, the English practice to distinguish such acids is to use the suffix -ic in the name of the element in the name of the acid containing more oxygen atoms, and the suffix -ous in the name of the element in the name of the acid containing fewer oxygen atoms. Thus, for example, sulfuric acid is H2SO4, and sulfurous acid, H2SO3. Analogously, nitric acid is HNO3, and nitrous acid, HNO2. If there are more than two oxyacids having the same element as the central atom, then, in some cases, acids are distinguished by adding the prefix per- or hypo- to their names. The prefix per-, however, is used only when the central atom is a halogen or a group 7 element. For example, chlorine has the following four oxyacids:\n* hypochlorous acid HClO\n* chlorous acid HClO2\n* chloric acid HClO3\n* perchloric acid HClO4\nSome elemental atoms can exist in a high enough oxidation state that they can hold one more double-bonded oxygen atom than the perhalic acids do. In that case, any acids of such an element are given the prefix hyper-. Currently, the only known acid with this prefix is hyperruthenic acid, HRuO.\nThe suffix -ite occurs in names of anions and salts derived from acids whose names end in the suffix -ous. On the other hand, the suffix -ate occurs in names of anions and salts derived from acids whose names end in the suffix -ic. The prefixes hypo- and per- also occur in the names of anions and salts; for example, the ClO4− ion is called perchlorate.\nIn a few cases, the prefixes ortho- and meta- occur in names of some oxyacids and their derivative anions.
In such cases, the meta- acid is what can be thought of as remaining of the ortho- acid if a water molecule is separated from the ortho- acid molecule. For example, phosphoric acid, H3PO4, has sometimes been called orthophosphoric acid, in order to distinguish it from metaphosphoric acid, HPO3. However, according to IUPAC's current rules, the prefix ortho- should only be used in the names of orthotelluric acid and orthoperiodic acid, and their corresponding anions and salts.", "In the following table, the formula and the name of the anion refer to what remains of the acid when it loses all its hydrogen atoms as protons. Many of these acids, however, are polyprotic, and in such cases there also exist one or more intermediate anions. In the names of such anions, the prefix hydrogen- (in older nomenclature bi-) is added, with numeral prefixes if needed. For example, SO4(2−) is the sulfate anion, and HSO4(−), the hydrogensulfate (or bisulfate) anion. Similarly, PO4(3−) is phosphate, HPO4(2−) is hydrogenphosphate, and H2PO4(−) is dihydrogenphosphate.", "Under Lavoisier's original theory, all acids contained oxygen, which was named from the Greek ὀξύς (oxys: acid, sharp) and the root -γενής (-genes: creator). It was later discovered that some acids, notably hydrochloric acid, did not contain oxygen, and so acids were divided into oxo-acids and these new hydroacids.\nAll oxyacids have the acidic hydrogen bound to an oxygen atom, so bond strength (length) is not a factor, as it is with binary nonmetal hydrides. Rather, the electronegativity of the central atom and the number of oxygen atoms determine oxyacid acidity. For oxyacids with the same central atom, acid strength increases with the number of oxygen atoms attached to it. With the same number of oxygen atoms attached to it, acid strength increases with increasing electronegativity of the central atom.\nCompared to the salts of their deprotonated forms (a class of compounds known as the oxyanions), oxyacids are generally less stable, and many of them only exist formally as hypothetical species, or only exist in solution and cannot be isolated in pure form. There are several general reasons for this: (1) they may condense to form oligomers (e.g., H2CrO4 to H2Cr2O7), or dehydrate all the way to form the anhydride (e.g., H2CO3 to CO2), (2) they may disproportionate to one compound of a higher and another of a lower oxidation state (e.g., HClO2 to HClO3 and HClO), or (3) they might exist almost entirely as another, more stable tautomeric form (e.g., phosphorous acid P(OH)3 exists almost entirely as phosphonic acid HP(=O)(OH)2). Nevertheless, perchloric acid (HClO4), sulfuric acid (H2SO4), and nitric acid (HNO3) are a few common oxyacids that are relatively easily prepared as pure substances.\nImidic acids are created by replacing =O with =NR in an oxyacid.", "An oxyacid molecule contains the structure X−O−H, where other atoms or atom groups can be connected to the central atom X. In a solution, such a molecule can be dissociated into ions in two distinct ways:\n* X−O−H ⇄ (X−O)− + H+\n* X−O−H ⇄ X+ + OH−\nIf the central atom X is strongly electronegative, then it strongly attracts the electrons of the oxygen atom. In that case, the bond between the oxygen and hydrogen atom is weak, and the compound ionizes easily in the way of the former of the two chemical equations above. In this case, the compound XOH is an acid, because it releases a proton, that is, a hydrogen ion.
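This qualitative X−O−H rule can be illustrated with a small sketch. The Pauling electronegativity values below are standard, but the numeric cutoffs are illustrative assumptions chosen only to reproduce the qualitative trend; they are not part of any formal definition.

```python
# Toy illustration of the X-O-H rule described above: a strongly electronegative
# central atom X favours O-H ionization (acidic), a weakly electronegative one
# favours X-OH ionization (basic), and intermediate values suggest amphoteric
# behaviour. The cutoffs 1.5 and 2.2 are illustrative assumptions only.
PAULING_EN = {'Na': 0.93, 'Ca': 1.00, 'Al': 1.61, 'N': 3.04, 'S': 2.58, 'Cl': 3.16}

def xoh_character(central_atom):
    en = PAULING_EN[central_atom]
    if en >= 2.2:
        return 'acidic: dissociates as (X-O)- + H+'
    if en <= 1.5:
        return 'basic: dissociates as X+ + OH-'
    return 'amphoteric: can dissociate either way'

for atom in ('Cl', 'S', 'N', 'Al', 'Ca', 'Na'):
    print(f'{atom}-O-H  ->  {xoh_character(atom)}')
```

Run as written, the sketch labels the chlorine, sulfur and nitrogen compounds acidic and the sodium and calcium compounds basic, matching the examples discussed in the surrounding text; aluminium falls into the intermediate band, consistent with aluminium hydroxide commonly being described as amphoteric.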
For example, nitrogen, sulfur and chlorine are strongly electronegative elements, and therefore nitric acid, sulfuric acid, and perchloric acid are strong acids.\nIf, however, the electronegativity of X is low, then the compound dissociates to ions according to the latter chemical equation, and XOH is an alkaline hydroxide. Examples of such compounds are sodium hydroxide, NaOH, and calcium hydroxide, Ca(OH)2. Owing to the high electronegativity of oxygen, however, most of the common oxobases, such as sodium hydroxide, while strongly basic in water, are only moderately basic in comparison to other bases. For example, the pKa of the conjugate acid of sodium hydroxide, water, is 15.7, while that of sodium amide, ammonia, is closer to 40, making sodium hydroxide a much weaker base than sodium amide.\nIf the electronegativity of X is somewhere in between, the compound can be amphoteric, and in that case it can dissociate to ions in both ways: in the former way when reacting with bases, and in the latter way when reacting with acids. Examples of this include aliphatic alcohols, such as ethanol.\nInorganic oxyacids typically have a chemical formula of the type HmXOn, where X is an atom functioning as the central atom and the parameters m and n depend on the oxidation state of the element X. In most cases, the element X is a nonmetal, but some metals, for example chromium and manganese, can form oxyacids when occurring at their highest oxidation states.\nWhen oxyacids are heated, many of them dissociate into water and the anhydride of the acid. In most cases, such anhydrides are oxides of nonmetals. For example, carbon dioxide, CO2, is the anhydride of carbonic acid, H2CO3, and sulfur trioxide, SO3, is the anhydride of sulfuric acid, H2SO4. These anhydrides react quickly with water and form those oxyacids again.\nMany organic acids, like carboxylic acids and phenols, are oxyacids. Their molecular structure, however, is much more complicated than that of inorganic oxyacids.\nMost of the commonly encountered acids are oxyacids. Indeed, in the 18th century, Lavoisier assumed that all acids contain oxygen and that oxygen causes their acidity. Because of this, he gave the element its name, oxygenium, derived from Greek and meaning acid-maker, which is still, in a more or less modified form, used in most languages. Later, however, Humphry Davy showed that the so-called muriatic acid did not contain oxygen, despite its being a strong acid; instead, it is a solution of hydrogen chloride, HCl. Such acids, which do not contain oxygen, are nowadays known as hydroacids.", "The thickness of the ozone layer varies worldwide: the layer is generally thinner near the equator and thicker near the poles. Thickness refers to how much ozone is in a column over a given area, and it varies from season to season. These variations are due to atmospheric circulation patterns and solar intensity.\nThe majority of ozone is produced over the tropics and is transported towards the poles by stratospheric wind patterns. In the northern hemisphere these patterns, known as the Brewer–Dobson circulation, make the ozone layer thickest in the spring and thinnest in the fall. Ozone is produced by solar UV radiation in the tropics, where circulation lifts ozone-poor air out of the troposphere and into the stratosphere and the sun photolyzes oxygen molecules and turns them into ozone.
Then, the ozone-rich air is carried to higher latitudes and drops into lower layers of the atmosphere.\nResearch has found that ozone levels in the United States are highest in the spring months of April and May and lowest in October. While the total amount of ozone increases moving from the tropics to higher latitudes, the concentrations are greater in high northern latitudes than in high southern latitudes, with spring ozone columns in high northern latitudes occasionally exceeding 600 DU and averaging 450 DU, whereas 400 DU constituted a usual maximum in the Antarctic before anthropogenic ozone depletion. This difference occurred naturally because of the weaker polar vortex and stronger Brewer–Dobson circulation in the northern hemisphere, owing to that hemisphere’s large mountain ranges and greater contrasts between land and ocean temperatures. The difference between high northern and southern latitudes has increased since the 1970s due to the ozone hole phenomenon. The highest amounts of ozone are found over the Arctic during the spring months of March and April, while the Antarctic has its lowest amounts of ozone during the austral spring months of September and October.", "The photochemical mechanisms that give rise to the ozone layer were discovered by the British physicist Sydney Chapman in 1930. Ozone in the Earth's stratosphere is created by ultraviolet light striking ordinary oxygen molecules containing two oxygen atoms (O2), splitting them into individual oxygen atoms (atomic oxygen); the atomic oxygen then combines with unbroken O2 to create ozone, O3. The ozone molecule is unstable (although, in the stratosphere, long-lived) and when ultraviolet light hits ozone it splits into a molecule of O2 and an individual atom of oxygen, a continuing process called the ozone-oxygen cycle. Chemically, this can be described as:\nO2 + ultraviolet photon → 2 O\nO + O2 → O3\nO3 + ultraviolet photon → O2 + O\nAbout 90 percent of the ozone in the atmosphere is contained in the stratosphere. Ozone concentrations are greatest between about , where they range from about 2 to 8 parts per million. If all of the ozone were compressed to the pressure of the air at sea level, it would be only thick.", "The ozone layer or ozone shield is a region of Earth's stratosphere that absorbs most of the Sun's ultraviolet radiation. It contains a high concentration of ozone (O3) in relation to other parts of the atmosphere, although still small in relation to other gases in the stratosphere. The ozone layer contains less than 10 parts per million of ozone, while the average ozone concentration in Earth's atmosphere as a whole is about 0.3 parts per million. The ozone layer is mainly found in the lower portion of the stratosphere, from approximately above Earth, although its thickness varies seasonally and geographically.\nThe ozone layer was discovered in 1913 by French physicists Charles Fabry and Henri Buisson. Measurements of the sun showed that the radiation sent out from its surface and reaching the ground on Earth is usually consistent with the spectrum of a black body with a temperature in the range of , except that there was no radiation below a wavelength of about 310 nm at the ultraviolet end of the spectrum. It was deduced that the missing radiation was being absorbed by something in the atmosphere. Eventually the spectrum of the missing radiation was matched to only one known chemical, ozone. Its properties were explored in detail by the British meteorologist G. M. B.
Dobson, who developed a simple spectrophotometer (the Dobsonmeter) that could be used to measure stratospheric ozone from the ground. Between 1928 and 1958, Dobson established a worldwide network of ozone monitoring stations, which continue to operate to this day. The \"Dobson unit\" (DU), a convenient measure of the amount of ozone overhead, is named in his honor (a brief numerical illustration of the unit is given further below).\nThe ozone layer absorbs 97 to 99 percent of the Sun's medium-frequency ultraviolet light (from about 200 nm to 315 nm wavelength), which otherwise would potentially damage exposed life forms near the surface.\nIn 1985, atmospheric research revealed that the ozone layer was being depleted by chemicals released by industry, mainly chlorofluorocarbons (CFCs). Concerns that increased UV radiation due to ozone depletion threatened life on Earth, including increased skin cancer in humans and other ecological problems, led to bans on the chemicals, and the latest evidence is that ozone depletion has slowed or stopped. The United Nations General Assembly has designated September 16 as the International Day for the Preservation of the Ozone Layer.\nVenus also has a thin ozone layer at an altitude of 100 kilometers above the planet's surface.", "The ozone layer can be depleted by free radical catalysts, including nitric oxide (NO), nitrous oxide (N2O), hydroxyl (OH), atomic chlorine (Cl), and atomic bromine (Br). While there are natural sources for all of these species, the concentrations of chlorine and bromine increased markedly in recent decades because of the release of large quantities of man-made organohalogen compounds, especially chlorofluorocarbons (CFCs) and bromofluorocarbons. These highly stable compounds are capable of surviving the rise to the stratosphere, where Cl and Br radicals are liberated by the action of ultraviolet light. Each radical is then free to initiate and catalyze a chain reaction capable of breaking down over 100,000 ozone molecules. By 2009, nitrous oxide was the largest ozone-depleting substance (ODS) emitted through human activities.\nThe breakdown of ozone in the stratosphere results in reduced absorption of ultraviolet radiation. Consequently, unabsorbed and dangerous ultraviolet radiation is able to reach the Earth's surface at a higher intensity. Ozone levels have dropped by a worldwide average of about 4 percent since the late 1970s. For approximately 5 percent of the Earth's surface, around the north and south poles, much larger seasonal declines have been seen, and these are described as \"ozone holes\". \"Ozone holes\" are actually patches in the ozone layer in which the ozone is thinner. The thinnest parts of the ozone layer are at the polar points of Earth's axis. The discovery of the annual depletion of ozone above the Antarctic was first announced by Joe Farman, Brian Gardiner and Jonathan Shanklin, in a paper which appeared in Nature on May 16, 1985.\nRegulation attempts have included, but have not been limited to, the Clean Air Act implemented by the United States Environmental Protection Agency. The Clean Air Act introduced the requirement of [https://www.epa.gov/criteria-air-pollutants/naaqs-table National Ambient Air Quality Standards (NAAQS)], with ozone pollution being one of the six criteria pollutants. This regulation has proven to be effective, since counties, cities and tribal regions must abide by these standards and the EPA also provides assistance for each region to regulate contaminants.
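Since several of the ozone-column values quoted above are given in Dobson units, a short numerical sketch may help make the unit concrete. It relies on the standard definition that 1 DU corresponds to a 0.01 mm thick layer of pure ozone at standard temperature and pressure, equivalent to roughly 2.687 × 10^16 ozone molecules per square centimetre; the example columns are the figures quoted above (450, 600 and 400 DU).

```python
# Convert ozone columns in Dobson units (DU) into an equivalent layer thickness
# of pure ozone at STP and a molecular column density.
# 1 DU = 0.01 mm of pure ozone at STP, or about 2.687e16 molecules per cm^2.
MM_PER_DU = 0.01
MOLECULES_PER_CM2_PER_DU = 2.687e16

def describe_column(du):
    thickness_mm = du * MM_PER_DU
    column = du * MOLECULES_PER_CM2_PER_DU
    return f'{du:.0f} DU = {thickness_mm:.1f} mm of ozone at STP = {column:.2e} molecules/cm^2'

# Column values quoted earlier: northern-latitude spring average and peak,
# and the pre-depletion Antarctic maximum.
for du in (450, 600, 400):
    print(describe_column(du))
```

The same arithmetic shows why the entire ozone layer, compressed to sea-level pressure, would amount to only a few millimetres of gas.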
Effective presentation of information has also proven to be important in order to educate the general population about the existence and regulation of ozone depletion and contaminants. Sheldon Ungar wrote a scientific paper exploring how information about ozone depletion, climate change and various related topics was communicated to the public. The ozone case was communicated to lay persons \"with easy-to-understand bridging metaphors derived from the popular culture\" and related to \"immediate risks with everyday relevance\". The specific metaphors used in the discussion (ozone shield, ozone hole) proved quite useful and, compared to global climate change, the ozone case was much more widely seen as a \"hot issue\" and imminent risk. Lay people were cautious about a depletion of the ozone layer and the risks of skin cancer.\n\"Bad\" ozone can cause adverse respiratory health effects (difficulty breathing) and is proven to be an aggravator of respiratory illnesses such as asthma, COPD and emphysema. That is why many countries have set in place regulations to preserve \"good\" ozone and prevent the increase of \"bad\" ozone in urban or residential areas. In terms of ozone protection (the preservation of \"good\" ozone), the European Union has strict guidelines on what products are allowed to be bought, distributed or used in specific areas. With effective regulation, the ozone layer is expected to heal over time.\nIn 1978, the United States, Canada and Norway enacted bans on CFC-containing aerosol sprays that damage the ozone layer. The European Community rejected an analogous proposal to do the same. In the U.S., chlorofluorocarbons continued to be used in other applications, such as refrigeration and industrial cleaning, until after the discovery of the Antarctic ozone hole in 1985. After negotiation of an international treaty (the Montreal Protocol), CFC production was capped at 1986 levels with commitments to long-term reductions. This allowed for a ten-year phase-in for developing countries (identified in Article 5 of the protocol). Since that time, the treaty has been amended to ban CFC production after 1995 in the developed countries, and later in developing countries. Today, all of the world's 197 countries have signed the treaty. Beginning January 1, 1996, only recycled and stockpiled CFCs were available for use in developed countries like the US. This production phaseout was possible because of efforts to ensure that there would be substitute chemicals and technologies for all ODS uses.\nOn August 2, 2003, scientists announced that the global depletion of the ozone layer may be slowing down because of the international regulation of ozone-depleting substances. In a study organized by the American Geophysical Union, three satellites and three ground stations confirmed that the upper-atmosphere ozone-depletion rate had slowed significantly during the previous decade. Some breakdown can be expected to continue because of ODSs used by nations which have not banned them, and because of gases which are already in the stratosphere. Some ODSs, including CFCs, have very long atmospheric lifetimes, ranging from 50 to over 100 years. It has been estimated that the ozone layer will recover to 1980 levels near the middle of the 21st century. A gradual trend toward \"healing\" was reported in 2016.\nCompounds containing C–H bonds (such as hydrochlorofluorocarbons, or HCFCs) have been designed to replace CFCs in certain applications.
These replacement compounds are more reactive and less likely to survive long enough in the atmosphere to reach the stratosphere, where they could affect the ozone layer. While being less damaging than CFCs, HCFCs can have a negative impact on the ozone layer, so they are also being phased out. These in turn are being replaced by hydrofluorocarbons (HFCs) and other compounds that do not destroy stratospheric ozone at all.\nThe residual effects of CFCs accumulating within the atmosphere lead to a concentration gradient between the atmosphere and the ocean. These organohalogen compounds are able to dissolve into the ocean's surface waters and can act as a time-dependent tracer. This tracer helps scientists study ocean circulation by tracing biological, physical and chemical pathways.", "As ozone in the atmosphere prevents the most energetic ultraviolet radiation from reaching the surface of the Earth, astronomical data in these wavelengths have to be gathered from satellites orbiting above the atmosphere and ozone layer. Most of the light from young hot stars is in the ultraviolet, and so the study of these wavelengths is important for studying the origins of galaxies. The Galaxy Evolution Explorer, GALEX, is an orbiting ultraviolet space telescope launched on April 28, 2003, which operated until early 2012.", "Although the concentration of the ozone in the ozone layer is very small, it is vitally important to life because it absorbs biologically harmful ultraviolet (UV) radiation coming from the Sun. Extremely short or vacuum UV (10–100 nm) is screened out by nitrogen. UV radiation capable of penetrating nitrogen is divided into three categories, based on its wavelength; these are referred to as UV-A (400–315 nm), UV-B (315–280 nm), and UV-C (280–100 nm).\nUV-C, which is very harmful to all living things, is entirely screened out by a combination of dioxygen (< 200 nm) and ozone (> about 200 nm) by around altitude. UV-B radiation can be harmful to the skin and is the main cause of sunburn; excessive exposure can also cause cataracts, immune system suppression, and genetic damage, resulting in problems such as skin cancer. The ozone layer (which absorbs from about 200 nm to 310 nm with a maximal absorption at about 250 nm) is very effective at screening out UV-B; for radiation with a wavelength of 290 nm, the intensity at the top of the atmosphere is 350 million times stronger than at the Earth's surface. Nevertheless, some UV-B, particularly at its longest wavelengths, reaches the surface, and it is important for the skin's production of vitamin D in mammals.\nOzone is transparent to most UV-A, so most of this longer-wavelength UV radiation reaches the surface, and it constitutes most of the UV reaching the Earth. This type of UV radiation is significantly less harmful to DNA, although it may still potentially cause physical damage, premature aging of the skin, indirect genetic damage, and skin cancer.", "Psoralens are materials that make the skin more sensitive to UV light. They are photosensitizing agents found naturally in plants and also manufactured synthetically. Psoralens are taken as pills (systemically) or can be applied directly to the skin, by soaking the skin in a solution that contains the psoralens. They allow UVA energy to be effective at lower doses. When combined with exposure to the UVA in PUVA, psoralens are highly effective at clearing psoriasis and vitiligo. In the case of vitiligo, they work by increasing the sensitivity of melanocytes, the cells that manufacture skin color, to UVA light.
Melanocytes have sensors that detect UV light and trigger the manufacture of brown skin color. This color protects the body from the harmful effects of UV light. It can also be connected to the skin's immune response.\nLED PUVA lamps give much more intense light than fluorescent-type lamps. This reduces the treatment time, makes the treatment more effective, and enables the use of a weaker psoralen.\nThe physician and physiotherapists can choose a starting dose of UV based on the patient's skin type. The UV dose is increased at every treatment until the skin starts to respond, normally when it becomes a little bit pink.\nNormally the UVA dose is increased slowly, starting from 10 seconds and increasing by 10 seconds a day, until the skin becomes a little bit pink. When the skin is a little bit pink, the exposure time is held steady.\nTo reduce the number of treatments, some clinics test the skin before the treatments by exposing a small area of the patient's skin to UVA after ingestion of psoralen. The dose of UVA that produces redness 12 hours later, called the minimum phototoxic dose (MPD) or minimal erythema dose (MED), becomes the starting dose for treatment.", "At least for vitiligo, narrowband ultraviolet B (UVB) phototherapy is now used more commonly than PUVA since it does not require the use of the psoralen. As with PUVA, treatment is carried out 2 to 3 times a week in a clinic or every day at home, but there is no need to use psoralen.\nNarrowband UVB therapy is less effective for the legs and hands compared to the face and neck. For the hands and legs, PUVA may be more effective. The reason may be that UVA penetrates deeper into the skin, and the melanocytes in the skin of the hands and legs are positioned deeper in the skin. Narrowband UVB at 311 nanometers is blocked by the topmost skin layer, whereas UVA at 365 nanometers reaches the melanocytes that are in the bottom skin layer.\nMelanin is a dark pigment of the skin, produced by the melanocytes when their receptors detect UV light. The purpose of the melanin is to block UV light so that it will not cause damage to the body cells under the skin.", "PUVA (psoralen and UVA) is an ultraviolet light therapy treatment for skin diseases: vitiligo, eczema, psoriasis, graft-versus-host disease, mycosis fungoides, large plaque parapsoriasis, and cutaneous T-cell lymphoma, using the sensitizing effects of the drug psoralen. The psoralen is applied or taken orally to sensitize the skin, then the skin is exposed to UVA.\nPhotodynamic therapy is the general use of nontoxic light-sensitive compounds that are exposed selectively to light, whereupon they become toxic to targeted malignant and other diseased cells. Still, PUVA therapy is often classified as a separate technique from photodynamic therapy.", "In Egypt around 2000 BC, the juice of Ammi majus was rubbed on patches of vitiligo, after which patients were encouraged to lie in the sun. In the 13th century, vitiligo was treated with a tincture of honey and the powdered seeds of a plant called \"aatrillal\", which was abundant in the Nile Valley. The plant has since been identified as A. majus, which contains significant amounts of both bergapten and methoxsalen, two psoralen derivatives well known for their photosensitizing effects.\nIn the 1890s Niels Ryberg Finsen of Copenhagen developed a bulky phototherapy machine to treat skin diseases using UV light.
In 1900, the French electrical engineer Gustave Trouvé miniaturized Finsen's machine with a series of portable light radiators to heal skin diseases such as lupus and epithelioma. Psoralens have only been available in a chemically synthesized form since the 1970s.\nIn the 1940s, Abdel Monem El Mofty from Cairo University Medical School used crystalline methoxsalen (8-methoxypsoralen, also called xanthotoxin) followed by sunlight exposure to treat vitiligo. This began the development of modern PUVA therapy for the treatment of vitiligo, psoriasis, and other diseases of the skin.", "For small spots of vitiligo, it is possible to use psoralen as drops, applied only on the spots. This method does not have side effects, since the amount is very low.\nFor larger areas, the psoralen is taken as a pill, and the amount is high (10 mg); some patients experience nausea and itching after ingesting the psoralen compound. For these patients PUVA bath therapy may be a good option.\nLong-term use of PUVA therapy with a pill has been associated with higher rates of skin cancer.\nThe most significant complication of PUVA therapy for psoriasis is squamous cell skin cancer. Two carcinogenic components of the therapy are the nonionizing radiation of UVA light and the psoralen intercalation with DNA. Both processes negatively contribute to genome instability.", "Peroxyoxalates are esters initially formed by the reaction of hydrogen peroxide with oxalate diesters or oxalyl chloride, with or without base, although the reaction is much faster with base.\nPeroxyoxalates are intermediates that rapidly transform into 1,2-dioxetanedione, another high-energy intermediate; a likely mechanism for this base-assisted transformation has been proposed. 1,2-Dioxetanedione in turn rapidly decomposes into carbon dioxide (CO2). If there is no fluorescer present, only heat is released. However, in the presence of a fluorescer, light can be generated (chemiluminescence). \nPeroxyoxalate chemiluminescence (CL) was first reported by Rauhut in 1967 in the reaction of diphenyl oxalate. The emission is generated by the reaction of an oxalate ester with hydrogen peroxide in the presence of a suitably fluorescent energy acceptor. This reaction is used in glow sticks.\nThe three most widely used oxalates are bis(2,4,6-trichlorophenyl) oxalate (TCPO), bis(2,4,5-trichloro-6-carbopentoxyphenyl) oxalate (CPPO) and bis(2,4-dinitrophenyl) oxalate (DNPO). Other aryl oxalates have been synthesized and evaluated with respect to their possible analytical applications. Divanillyl oxalate, a more eco-friendly or \"green\" oxalate for chemiluminescence, decomposes into the nontoxic, biodegradable compound vanillin and works in nontoxic, biodegradable triacetin. Peroxyoxalate CL is an example of indirect or sensitized chemiluminescence, in which the energy from an excited intermediate is transferred to a suitable fluorescent molecule, which relaxes to the ground state by emitting a photon. Rauhut and co-workers have reported that the intermediate responsible for providing the energy of excitation is 1,2-dioxetanedione. The peroxyoxalate reaction is able to excite many different compounds, having emissions spanning the visible and infrared regions of the spectrum, and the reaction can supply up to 440 kJ mol−1, corresponding to excitation at 272 nm.
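The correspondence quoted above between roughly 440 kJ per mole of excitation energy and a 272 nm photon can be checked with the relation E = NA·h·c/λ. The constants below are standard physical constants, and 440 kJ/mol is the figure from the text.

```python
# Check that ~440 kJ/mol of excitation energy corresponds to a photon wavelength
# of roughly 272 nm, using E_per_mole = N_A * h * c / wavelength.
N_A = 6.022e23   # Avogadro constant, 1/mol
H = 6.626e-34    # Planck constant, J*s
C = 2.998e8      # speed of light, m/s

def wavelength_nm(e_kj_per_mol):
    e_joule_per_mol = e_kj_per_mol * 1e3
    return N_A * H * C / e_joule_per_mol * 1e9

print(round(wavelength_nm(440.0), 1))  # about 271.9 nm
```

Any fluorophore whose singlet excitation energy lies below this ceiling can in principle be pumped by the reaction, which is consistent with the broad range of visible and infrared emitters mentioned above.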
It has been found, however, that the chemiluminescence intensity corrected for quantum yield decreases as the singlet excitation energy of the fluorescent molecule increases. There is also a linear relationship between the corrected chemiluminescence intensity and the oxidation potential of the molecule. This suggests the possibility of an electron transfer step in the mechanism, as demonstrated in several other chemiluminescence systems. It has been postulated that a transient charge transfer complex is formed between the intermediate 1,2-dioxetanedione and the fluorescer, and a modified mechanism was proposed involving the transfer of an electron from the fluorescer to the reactive intermediate. The emission of light is thought to result from the annihilation of the fluorescer radical cation with the carbon dioxide radical anion formed when the 1,2-dioxetanedione decomposes. This process is called chemically induced electron exchange luminescence (CIEEL).\nChemiluminescent reactions are widely used in analytical chemistry.", "Persistent luminescence materials are mainly used in safety signs, watch dials, decorative objects and toys. They have also been used as nanoprobes in small-animal optical imaging.", "Commonly referred to as phosphorescence, persistent luminescence is the emission of light by a phosphorescent material after excitation by ultraviolet or visible light. Such materials \"glow in the dark\".", "The mechanism underlying this phenomenon is not fully understood. However, persistent luminescence must not be mistaken for fluorescence or phosphorescence. Indeed, in fluorescence the lifetime of the excited state is on the order of a few nanoseconds, and in phosphorescence, even though the lifetime of the emission can reach several seconds, the long emission is due to de-excitation between two electronic states of different spin multiplicity. For persistent luminescence, it has long been known that the phenomenon involves energy traps (such as electron or hole traps) in the material, which are filled during the excitation.
After the end of the excitation, the stored energy is gradually released to emitter centers which emit light usually by a fluorescence-like mechanism.", "Some other phosphors commercially available, for use as X-ray screens, neutron detectors, alpha particle scintillators, etc., are:\n*GdOS:Tb (P43), green (peak at 545 nm), 1.5 ms decay to 10%, low afterglow, high X-ray absorption, for X-ray, neutrons and gamma\n*GdOS:Eu, red (627 nm), 850 μs decay, afterglow, high X-ray absorption, for X-ray, neutrons and gamma\n*GdOS:Pr, green (513 nm), 7 μs decay, no afterglow, high X-ray absorption, for X-ray, neutrons and gamma\n*, green (513 nm), 4 μs decay, no afterglow, high X-ray absorption, for X-ray, neutrons and gamma\n*YOS:Tb (P45), white (545 nm), 1.5 ms decay, low afterglow, for low-energy X-ray\n*YOS:Eu (P22R), red (627 nm), 850 μs decay, afterglow, for low-energy X-ray\n*YOS:Pr, white (513 nm), 7 μs decay, no afterglow, for low-energy X-ray\n* (HS), green (560 nm), 80 μs decay, afterglow, efficient but low-res X-ray\n* (HSr), red (630 nm), 80 μs decay, afterglow, efficient but low-res X-ray\n*CdWO, blue (475 nm), 28 μs decay, no afterglow, intensifying phosphor for X-ray and gamma\n*CaWO, blue (410 nm), 20 μs decay, no afterglow, intensifying phosphor for X-ray\n*MgWO, white (500 nm), 80 μs decay, no afterglow, intensifying phosphor\n*YSiO:Ce (P47), blue (400 nm), 120 ns decay, no afterglow, for electrons, suitable for photomultipliers\n*YAlO:Ce (YAP), blue (370 nm), 25 ns decay, no afterglow, for electrons, suitable for photomultipliers\n*YAlO:Ce (YAG), green (550 nm), 70 ns decay, no afterglow, for electrons, suitable for photomultipliers\n* (YGG), green (530 nm), 250 ns decay, low afterglow, for electrons, suitable for photomultipliers\n*CdS:In, green (525 nm), <1 ns decay, no afterglow, ultrafast, for electrons\n*ZnO:Ga, blue (390 nm), <5 ns decay, no afterglow, ultrafast, for electrons\n*ZnO:Zn (P15), blue (495 nm), 8 μs decay, no afterglow, for low-energy electrons\n* (P22G), green (565 nm), 35 μs decay, low afterglow, for electrons\n* (P22G), green (540 nm), 35 μs decay, low afterglow, for electrons\n* (P20), green (530 nm), 80 μs decay, low afterglow, for electrons\n*ZnS:Ag (P11), blue (455 nm), 80 μs decay, low afterglow, for alpha particles and electrons\n*anthracene, blue (447 nm), 32 ns decay, no afterglow, for alpha particles and electrons\n*plastic (EJ-212), blue (400 nm), 2.4 ns decay, no afterglow, for alpha particles and electrons\n*ZnSiO:Mn (P1), green (530 nm), 11 ms decay, low afterglow, for electrons\n*ZnS:Cu (GS), green (520 nm), decay in minutes, long afterglow, for X-rays\n*NaI:Tl, for X-ray, alpha, and electrons\n*CsI:Tl, green (545 nm), 5 μs decay, afterglow, for X-ray, alpha, and electrons\n*LiF/ZnS:Ag (ND), blue (455 nm), 80 μs decay, for thermal neutrons\n* (NDg), green (565 nm), 35 μs decay, for neutrons\n*Cerium doped YAG phosphor, yellow, used in white LEDs for turning blue to white light with a broad spectrum of light", "For projection televisions, where the beam power density can be two orders of magnitude higher than in conventional CRTs, some different phosphors have to be used.\nFor blue color, is employed. However, it saturates. can be used as an alternative that is more linear at high energy densities.\nFor green, a terbium-activated ; its color purity and brightness at low excitation densities is worse than the zinc sulfide alternative, but it behaves linear at high excitation energy densities, while zinc sulfide saturates. 
However, it also saturates, so alternatives can be substituted. One such alternative is bright but water-sensitive, degradation-prone, and the plate-like morphology of its crystals hampers its use; these problems are now solved, so it is gaining use due to its higher linearity.\nAnother phosphor is used for red emission.", "White light-emitting diodes are usually blue InGaN LEDs with a coating of a suitable material. Cerium(III)-doped YAG (YAG:Ce, or Y3Al5O12:Ce) is often used; it absorbs the light from the blue LED and emits in a broad range from greenish to reddish, with most of its output in yellow. This yellow emission, combined with the remaining blue emission, gives the \"white\" light, which can be adjusted to a color temperature of warm (yellowish) or cold (bluish) white. The pale yellow emission of the Ce:YAG can be tuned by substituting the cerium with other rare-earth elements such as terbium and gadolinium, and can even be further adjusted by substituting some or all of the aluminium in the YAG with gallium. However, this process is not one of phosphorescence. The yellow light is produced by a process known as scintillation, the complete absence of an afterglow being one of the characteristics of the process.\nSome rare-earth-doped SiAlONs are photoluminescent and can serve as phosphors. Europium(II)-doped β-SiAlON absorbs in the ultraviolet and visible light spectrum and emits intense broadband visible emission. Its luminance and color do not change significantly with temperature, due to the temperature-stable crystal structure. It has great potential as a green down-conversion phosphor for white LEDs; a yellow variant also exists (α-SiAlON). For white LEDs, a blue LED is used with a yellow phosphor, or with a green and yellow SiAlON phosphor and a red CaAlSiN-based (CASN) phosphor.\nWhite LEDs can also be made by coating near-ultraviolet-emitting LEDs with a mixture of high-efficiency europium-based red- and blue-emitting phosphors plus green-emitting copper- and aluminium-doped zinc sulfide. This is a method analogous to the way fluorescent lamps work.\nSome newer white LEDs use a yellow and blue emitter in series to approximate white; this technology has been used in some Motorola phones such as the Blackberry, as well as in LED lighting and the original-version stacked emitters using GaN on SiC on InGaP, though these were later found to fracture at higher drive currents.\nMany white LEDs used in general lighting systems can be used for data transfer, as, for example, in systems that modulate the LED to act as a beacon.\nIt is also common for white LEDs to use phosphors other than Ce:YAG, or to use two or three phosphors to achieve a higher CRI, often at the cost of efficiency. Examples of additional phosphors are R9, which produces a saturated red, nitrides, which produce red, and aluminates such as lutetium aluminium garnet, which produce green. Silicate phosphors are brighter but fade more quickly, and are used in LCD LED backlights in mobile devices. LED phosphors can be placed directly over the die or made into a dome and placed above the LED: this approach is known as a remote phosphor. Some colored LEDs use a blue LED with a colored phosphor instead of a colored semiconductor emitter, because such an arrangement is more efficient than a colored LED. Oxynitride phosphors can also be used in LEDs. The precursors used to make the phosphors may degrade when exposed to air.", "The phosphors in color CRTs need higher contrast and resolution than the black-and-white ones.
The energy density of the electron beam is about 100 times greater than in black-and-white CRTs; the electron spot is focused to about 0.2 mm diameter instead of the roughly 0.6 mm diameter of black-and-white CRTs. Effects related to electron irradiation degradation are therefore more pronounced.\nColor CRTs require three different phosphors, emitting in red, green and blue, patterned on the screen. Three separate electron guns are used for color production (except for displays that use beam-index tube technology, which is rare). The red phosphor has always been a problem, being the dimmest of the three, necessitating that the brighter green and blue electron beam currents be adjusted down to match the red phosphor's lower brightness. This made early color TVs usable only indoors, as bright light made it impossible to see the dim picture, while portable black-and-white TVs viewable in outdoor sunlight were already common.\nThe composition of the phosphors changed over time, as better phosphors were developed and as environmental concerns led to lowering the content of cadmium and later abandoning it entirely. One formulation was replaced with a version with a lower cadmium/zinc ratio, and then with a cadmium-free version.\nThe blue phosphor stayed generally unchanged, a silver-doped zinc sulfide. The green phosphor initially used manganese-doped zinc silicate, then evolved through silver-activated cadmium-zinc sulfide, to a lower-cadmium copper-aluminium-activated formula, and then to a cadmium-free version of the same. The red phosphor saw the most changes; it was originally manganese-activated zinc phosphate, then a silver-activated cadmium-zinc sulfide, and then the europium(III)-activated phosphors appeared, first in a yttrium vanadate matrix, then in yttrium oxide and currently in yttrium oxysulfide.", "Phosphor banded stamps first appeared in 1959 as guides for machines to sort mail. Around the world many varieties exist with different amounts of banding. Postage stamps are sometimes collected by whether or not they are \"tagged\" with phosphor (or printed on luminescent paper).", "A phosphor is a substance that exhibits the phenomenon of luminescence; it emits light when exposed to some type of radiant energy. The term is used both for fluorescent or phosphorescent substances which glow on exposure to ultraviolet or visible light, and for cathodoluminescent substances which glow when struck by an electron beam (cathode rays) in a cathode-ray tube.\nWhen a phosphor is exposed to radiation, the orbital electrons in its molecules are excited to a higher energy level; when they return to their former level they emit the energy as light of a certain color. Phosphors can be classified into two categories: fluorescent substances, which emit the energy immediately and stop glowing when the exciting radiation is turned off, and phosphorescent substances, which emit the energy after a delay, so they keep glowing after the radiation is turned off, decaying in brightness over a period of milliseconds to days.\nFluorescent materials are used in applications in which the phosphor is excited continuously: cathode-ray tubes (CRT) and plasma video display screens, fluoroscope screens, fluorescent lights, scintillation sensors, white LEDs, and luminous paints for black light art.
Phosphorescent materials are used where a persistent light is needed, such as glow-in-the-dark watch faces and aircraft instruments, and in radar screens to allow the target blips to remain visible as the radar beam rotates. CRT phosphors were standardized beginning around World War II and designated by the letter \"P\" followed by a number.\nPhosphorus, the light-emitting chemical element for which phosphors are named, emits light due to chemiluminescence, not phosphorescence.", "The scintillation process in inorganic materials is due to the electronic band structure found in the crystals. An incoming particle can excite an electron from the valence band to either the conduction band or the exciton band (located just below the conduction band and separated from the valence band by an energy gap). This leaves an associated hole behind, in the valence band. Impurities create electronic levels in the forbidden gap. The excitons are loosely bound electron–hole pairs that wander through the crystal lattice until they are captured as a whole by impurity centers. The latter then rapidly de-excite by emitting scintillation light (fast component). In the case of inorganic scintillators, the activator impurities are typically chosen so that the emitted light is in the visible range or near-UV, where photomultipliers are effective. The holes associated with electrons in the conduction band are independent from the latter. Those holes and electrons are captured successively by impurity centers exciting certain metastable states not accessible to the excitons. The delayed de-excitation of those metastable impurity states, slowed by reliance on the low-probability forbidden mechanism, again results in light emission (slow component).\nPhosphors are often transition-metal compounds or rare-earth compounds of various types. In inorganic phosphors, these inhomogeneities in the crystal structure are created usually by addition of a trace amount of dopants, impurities called activators. (In rare cases dislocations or other crystal defects can play the role of the impurity.) The wavelength emitted by the emission center is dependent on the atom itself and on the surrounding crystal structure.", "Phosphors are usually made from a suitable host material with an added activator. The best known type is a copper-activated zinc sulfide (ZnS) and the silver-activated zinc sulfide (zinc sulfide silver).\nThe host materials are typically oxides, nitrides and oxynitrides, sulfides, selenides, halides or silicates of zinc, cadmium, manganese, aluminium, silicon, or various rare-earth metals. The activators prolong the emission time (afterglow). In turn, other materials (such as nickel) can be used to quench the afterglow and shorten the decay part of the phosphor emission characteristics.\nMany phosphor powders are produced in low-temperature processes, such as sol-gel, and usually require post-annealing at temperatures of ~1000 °C, which is undesirable for many applications. However, proper optimization of the growth process allows manufacturers to avoid the annealing.\nPhosphors used for fluorescent lamps require a multi-step production process, with details that vary depending on the particular phosphor. Bulk material must be milled to obtain a desired particle size range, since large particles produce a poor-quality lamp coating, and small particles produce less light and degrade more quickly. 
During the firing of the phosphor, process conditions must be controlled to prevent oxidation of the phosphor activators or contamination from the process vessels. After milling, the phosphor may be washed to remove minor excess of activator elements. Volatile elements must not be allowed to escape during processing. Lamp manufacturers have changed compositions of phosphors to eliminate some toxic elements, such as beryllium, cadmium, or thallium, formerly used.\nThe commonly quoted parameters for phosphors are the wavelength of emission maximum (in nanometers, or alternatively color temperature in kelvins for white blends), the peak width (in nanometers at 50% of intensity), and decay time (in seconds).\nExamples:\n* Calcium sulfide with strontium sulfide with bismuth as activator, , yields blue light with glow times up to 12 hours, red and orange are modifications of the zinc sulfide formula. Red color can be obtained from strontium sulfide.\n* Zinc sulfide with about 5 ppm of a copper activator is the most common phosphor for the glow-in-the-dark toys and items. It is also called GS phosphor.\n*Mix of zinc sulfide and cadmium sulfide emit color depending on their ratio; increasing of the CdS content shifts the output color towards longer wavelengths; its persistence ranges between 1–10 hours.\n* Strontium aluminate activated by europium, SrAlO:Eu(II):Dy(III), is a material developed in 1993 by Nemoto & Co. engineer Yasumitsu Aoki with higher brightness and significantly longer glow persistence; it produces green and aqua hues, where green gives the highest brightness and aqua the longest glow time. SrAlO:Eu:Dy is about 10 times brighter, 10 times longer glowing, and 10 times more expensive than ZnS:Cu. The excitation wavelengths for strontium aluminate range from 200 to 450 nm. The wavelength for its green formulation is 520 nm, its blue-green version emits at 505 nm, and the blue one emits at 490 nm. Colors with longer wavelengths can be obtained from the strontium aluminate as well, though for the price of some loss of brightness.", "Many phosphors tend to lose efficiency gradually by several mechanisms. The activators can undergo change of valence (usually oxidation), the crystal lattice degrades, atoms – often the activators – diffuse through the material, the surface undergoes chemical reactions with the environment with consequent loss of efficiency or buildup of a layer absorbing either the exciting or the radiated energy, etc.\nThe degradation of electroluminescent devices depends on frequency of driving current, the luminance level, and temperature; moisture impairs phosphor lifetime very noticeably as well.\nHarder, high-melting, water-insoluble materials display lower tendency to lose luminescence under operation.\nExamples:\n* BaMgAlO:Eu (BAM), a plasma-display phosphor, undergoes oxidation of the dopant during baking. Three mechanisms are involved; absorption of oxygen atoms into oxygen vacancies on the crystal surface, diffusion of Eu(II) along the conductive layer, and electron transfer from Eu(II) to absorbed oxygen atoms, leading to formation of Eu(III) with corresponding loss of emissivity. Thin coating of aluminium phosphate or lanthanum(III) phosphate is effective in creating a barrier layer blocking access of oxygen to the BAM phosphor, for the cost of reduction of phosphor efficiency. 
Addition of hydrogen, acting as a reducing agent, to argon in the plasma displays significantly extends the lifetime of BAM:Eu phosphor, by reducing the Eu(III) atoms back to Eu(II).\n* YO:Eu phosphors under electron bombardment in the presence of oxygen form a non-phosphorescent layer on the surface, where electron–hole pairs recombine nonradiatively via surface states.\n* ZnS:Mn, used in AC thin-film electroluminescent (ACTFEL) devices, degrades mainly due to the formation of deep-level traps, by reaction of water molecules with the dopant; the traps act as centers for nonradiative recombination. The traps also damage the crystal lattice. Phosphor aging leads to decreased brightness and elevated threshold voltage.\n* ZnS-based phosphors in CRTs and FEDs degrade by surface excitation, coulombic damage, build-up of electric charge, and thermal quenching. Electron-stimulated reactions of the surface are directly correlated to loss of brightness. The electrons dissociate impurities in the environment; the reactive oxygen species then attack the surface and form carbon monoxide and carbon dioxide with traces of carbon, and nonradiative zinc oxide and zinc sulfate on the surface; the reactive hydrogen removes sulfur from the surface as hydrogen sulfide, forming a nonradiative layer of metallic zinc. Sulfur can also be removed as sulfur oxides.\n* ZnS and CdS phosphors degrade by reduction of the metal ions by captured electrons. The M ions are reduced to M; two M then exchange an electron and become one M and one neutral M atom. The reduced metal can be observed as a visible darkening of the phosphor layer. The darkening (and the brightness loss) is proportional to the phosphor's exposure to electrons and can be observed on some CRT screens that displayed the same image (e.g. a terminal login screen) for prolonged periods.\n* Europium(II)-doped alkaline earth aluminates degrade by formation of color centers.\n* :Ce degrades by loss of luminescent Ce ions.\n* :Mn (P1) degrades by desorption of oxygen under electron bombardment.\n* Oxide phosphors can degrade rapidly in the presence of fluoride ions remaining from incomplete removal of flux from phosphor synthesis.\n* Loosely packed phosphors, e.g. when an excess of silica gel (formed from the potassium silicate binder) is present, have a tendency to overheat locally due to poor thermal conductivity. E.g. :Tb is subject to accelerated degradation at higher temperatures.", "Cathode-ray tubes produce signal-generated light patterns in a (typically) round or rectangular format. Bulky CRTs were used in the black-and-white household television (TV) sets that became popular in the 1950s, as well as first-generation, tube-based color TVs, and most earlier computer monitors. CRTs have also been widely used in scientific and engineering instrumentation, such as oscilloscopes, usually with a single phosphor color, typically green. Phosphors for such applications may have long afterglow, for increased image persistence.\nThe phosphors can be deposited as either thin film, or as discrete particles, a powder bound to the surface. Thin films have better lifetime and better resolution, but provide a dimmer and less efficient image than powder ones. This is caused by multiple internal reflections in the thin film, scattering the emitted light.\nWhite (in black-and-white): The mix of zinc cadmium sulfide and zinc sulfide silver is the white P4 phosphor used in black-and-white television CRTs. Mixes of yellow and blue phosphors are usual. 
Mixes of red, green and blue, or a single white phosphor, can also be encountered.\nRed: Yttrium oxide-sulfide activated with europium is used as the red phosphor in color CRTs. The development of color TV took a long time due to the search for a red phosphor. The first red-emitting rare-earth phosphor, YVO:Eu, was introduced by Levine and Palilla as a primary color in television in 1964. In single crystal form, it was used as an excellent polarizer and laser material.\nYellow: When mixed with cadmium sulfide, the resulting zinc cadmium sulfide provides strong yellow light.\nGreen: The combination of zinc sulfide with copper, the P31 phosphor or ZnS:Cu, provides green light peaking at 531 nm, with a long glow.\nBlue: The combination of zinc sulfide with a few ppm of silver, ZnS:Ag, when excited by electrons, provides a strong blue glow with a maximum at 450 nm and a short afterglow of about 200 nanoseconds. It is known as the P22B phosphor. This material, zinc sulfide silver, is still one of the most efficient phosphors in cathode-ray tubes. It is used as a blue phosphor in color CRTs.\nThe phosphors are usually poor electrical conductors. This may lead to deposition of residual charge on the screen, effectively decreasing the energy of the impacting electrons due to electrostatic repulsion (an effect known as \"sticking\"). To eliminate this, a thin layer of aluminium (about 100 nm) is deposited over the phosphors, usually by vacuum evaporation, and connected to the conductive layer inside the tube. This layer also reflects the phosphor light in the desired direction, and protects the phosphor from ion bombardment resulting from an imperfect vacuum.\nTo reduce image degradation caused by reflection of ambient light, contrast can be increased by several methods. In addition to black masking of unused areas of the screen, the phosphor particles in color screens are coated with pigments of matching color. For example, the red phosphors are coated with ferric oxide (replacing earlier Cd(S,Se) due to cadmium toxicity), and blue phosphors can be coated with marine blue (CoO·n alumina) or ultramarine. Green phosphors based on ZnS:Cu do not have to be coated due to their own yellowish color.", "The black-and-white television screens require an emission color close to white. Usually, a combination of phosphors is employed.\nThe most common combination is (blue + yellow). Others are (blue + yellow), and (blue + green + red – does not contain cadmium and has poor efficiency). The color tone can be adjusted by the ratios of the components.\nAs the compositions contain discrete grains of different phosphors, they produce an image that may not be entirely smooth. A single white-emitting phosphor overcomes this obstacle. Due to its low efficiency, it is used only on very small screens.\nThe screens are typically covered with phosphor using sedimentation coating, where particles suspended in a solution are allowed to settle on the surface.", "For displaying a limited palette of colors, there are a few options.\nIn beam penetration tubes, different color phosphors are layered and separated with dielectric material. The acceleration voltage is used to determine the energy of the electrons; lower-energy ones are absorbed in the top layer of the phosphor, while some of the higher-energy ones shoot through and are absorbed in the lower layer. So either the first color or a mixture of the first and second color is shown. 
With a display with red outer layer and green inner layer, the manipulation of accelerating voltage can produce a continuum of colors from red through orange and yellow to green.\nAnother method is using a mixture of two phosphors with different characteristics. The brightness of one is linearly dependent on electron flux, while the other one's brightness saturates at higher fluxes—the phosphor does not emit any more light regardless of how many more electrons impact it. At low electron flux, both phosphors emit together; at higher fluxes, the luminous contribution of the nonsaturating phosphor prevails, changing the combined color.\nSuch displays can have high resolution, due to absence of two-dimensional structuring of RGB CRT phosphors. Their color palette is, however, very limited. They were used e.g. in some older military radar displays.", "Zinc sulfide phosphors are used with radioactive materials, where the phosphor was excited by the alpha- and beta-decaying isotopes, to create luminescent paint for dials of watches and instruments (radium dials). Between 1913 and 1950 radium-228 and radium-226 were used to activate a phosphor made of silver doped zinc sulfide (ZnS:Ag), which gave a greenish glow. The phosphor is not suitable to be used in layers thicker than 25 mg/cm, as the self-absorption of the light then becomes a problem. Furthermore, zinc sulfide undergoes degradation of its crystal lattice structure, leading to gradual loss of brightness significantly faster than the depletion of radium. ZnS:Ag coated spinthariscope screens were used by Ernest Rutherford in his experiments discovering atomic nucleus.\nCopper doped zinc sulfide (ZnS:Cu) is the most common phosphor used and yields blue-green light. Copper and magnesium doped zinc sulfide yields yellow-orange light.\nTritium is also used as a source of radiation in various products utilizing tritium illumination.", "Quenching of the triplet state by O (which has a triplet ground state) as a result of Dexter energy transfer is well known in solutions of phosphorescent heavy-metal complexes and doped polymers. In recent years, phosphorescence porous materials(such as Metal–organic frameworks and Covalent organic frameworks) have shown promising oxygen sensing capabilities, for their non-linear gas-adsorption in ultra-low partial pressures of oxygen.", "In these applications, the phosphor is directly added to the plastic used to mold the toys, or mixed with a binder for use as paints.\nZnS:Cu phosphor is used in glow-in-the-dark cosmetic creams frequently used for Halloween make-ups.\nGenerally, the persistence of the phosphor increases as the wavelength increases. \nSee also lightstick for chemiluminescence-based glowing items.", "Phosphor thermometry is a temperature measurement approach that uses the temperature dependence of certain phosphors. For this, a phosphor coating is applied to a surface of interest and, usually, the decay time is the emission parameter that indicates temperature. Because the illumination and detection optics can be situated remotely, the method may be used for moving surfaces such as high speed motor surfaces. Also, phosphor may be applied to the end of an optical fiber as an optical analog of a thermocouple.", "Phosphor layers provide most of the light produced by fluorescent lamps, and are also used to improve the balance of light produced by metal halide lamps. Various neon signs use phosphor layers to produce different colors of light. 
Electroluminescent displays found, for example, in aircraft instrument panels, use a phosphor layer to produce glare-free illumination or as numeric and graphic display devices. White LED lamps consist of a blue or ultra-violet emitter with a phosphor coating that emits at longer wavelengths, giving a full spectrum of visible light. Unfocused and undeflected cathode-ray tubes have been used as stroboscope lamps since 1958.", "Electroluminescence can be exploited in light sources. Such sources typically emit from a large area, which makes them suitable for backlights of LCD displays. The excitation of the phosphor is usually achieved by applying a high-intensity electric field, usually at a suitable frequency. Current electroluminescent light sources tend to degrade with use, resulting in their relatively short operation lifetimes.\nZnS:Cu was the first formulation successfully displaying electroluminescence, tested in 1936 by Georges Destriau in Madame Marie Curie's laboratories in Paris.\nPowder or AC electroluminescence is found in a variety of backlight and night light applications. Several groups offer branded EL offerings (e.g. IndiGlo used in some Timex watches) or \"Lighttape\", another trade name of an electroluminescent material, used in electroluminescent light strips. The Apollo space program is often credited with being the first significant use of EL for backlights and lighting.", "A phosphoroscope is a piece of experimental equipment devised in 1857 by physicist A. E. Becquerel to measure how long it takes a phosphorescent material to stop glowing after it has been excited.\nIt consists of two rotating disks with holes in them. The holes are arranged on each disk at equal angular intervals and a constant distance from the centre, but the holes in one disk do not align with the holes in the other. A sample of phosphorescent material is placed in between the two disks. Light coming in through a hole in one of the discs excites the phosphorescent material, which then emits light for a short amount of time. The disks are then rotated, and by changing their speed, the length of time the material glows can be determined.", "Pure organochlorides like polyvinyl chloride (PVC) do not absorb any light above 220 nm. The initiation of photo-oxidation is instead caused by various irregularities in the polymer chain, such as structural defects as well as hydroperoxides, carbonyl groups, and double bonds. \nHydroperoxides formed during processing are initially the most important initiators; however, their concentration decreases during photo-oxidation whereas the carbonyl concentration increases, so carbonyls may become the primary initiators over time.\nPropagation steps involve the hydroperoxyl radical, which can abstract hydrogen from both hydrocarbon (-CH-) and organochloride (-CHCl-) sites in the polymer at comparable rates. Radicals formed at hydrocarbon sites rapidly convert to alkenes with loss of radical chlorine. This forms allylic hydrogens, which are more susceptible to hydrogen abstraction, leading to the formation of polyenes in zipper-like reactions.\nWhen the polyenes contain at least eight conjugated double bonds they become coloured, leading to yellowing and eventual browning of the material. This is offset slightly by longer polyenes being photobleached by atmospheric oxygen; however, PVC does eventually discolour unless polymer stabilisers are present. 
Reactions at organochloride sites proceed via the usual hydroperoxyl radical and hydroperoxide before photolysis yields the α-chloro-alkoxyl radical. This species can undergo various reactions to give carbonyls, peroxide cross-links and beta scission products.", "In polymer chemistry, photo-oxidation (sometimes: oxidative photodegradation) is the degradation of a polymer surface due to the combined action of light and oxygen. It is the most significant factor in the weathering of plastics. Photo-oxidation causes the polymer chains to break (chain scission), resulting in the material becoming increasingly brittle. This leads to mechanical failure and, at an advanced stage, the formation of microplastics. In textiles the process is called phototendering.\nTechnologies have been developed to both accelerate and inhibit this process. For example, plastic building components like doors, window frames and gutters are expected to last for decades, requiring the use of advanced UV-polymer stabilizers. Conversely, single-use plastics can be treated with biodegradable additives to accelerate their fragmentation.\nMany pigments and dyes can similarly have effects due to their ability to absorb UV-energy.", "Susceptibility to photo-oxidation varies depending on the chemical structure of the polymer. Some materials have excellent stability, such as fluoropolymers, polyimides, silicones and certain acrylate polymers. However, global polymer production is dominated by a range of commodity plastics which account for the majority of plastic waste. Of these, polyethylene terephthalate (PET) has only moderate UV resistance, and the others, which include polystyrene, polyvinyl chloride (PVC) and polyolefins like polypropylene (PP) and polyethylene (PE), are all highly susceptible. \nPhoto-oxidation is a form of photodegradation and begins with the formation of free radicals on the polymer chain, which then react with oxygen in chain reactions. For many polymers the general autoxidation mechanism is a reasonable approximation of the underlying chemistry. The process is autocatalytic, generating increasing numbers of radicals and reactive oxygen species. These reactions result in changes to the molecular weight (and molecular weight distribution) of the polymer, and as a consequence the material becomes more brittle. The process can be divided into four stages:\n:Initiation – the process of generating the initial free radical.\n:Propagation – the conversion of one active species to another.\n:Chain branching – steps which end with more than one active species being produced. The photolysis of hydroperoxides is the main example.\n:Termination – steps in which active species are removed, for instance by radical disproportionation.\nPhoto-oxidation can occur simultaneously with other processes like thermal degradation, and each of these can accelerate the other.", "Polyolefins such as polyethylene and polypropylene are susceptible to photo-oxidation, and around 70% of light stabilizers produced world-wide are used in their protection, despite them representing only around 50% of global plastic production. Aliphatic hydrocarbons can only absorb high-energy UV rays with a wavelength below ~250 nm; however, the Earth’s atmosphere and ozone layer screen out such rays, with the normal minimum wavelength being 280–290 nm.\nThe bulk of the polymer is therefore photo-inert and degradation is instead attributed to the presence of various impurities, which are introduced during the manufacturing or processing stages. 
These include hydroperoxide and carbonyl groups, as well as metal salts such as catalyst residues. \nAll of these species act as photoinitiators.\nThe organic hydroperoxide and carbonyl groups are able to absorb UV light above 290 nm, whereupon they undergo photolysis to generate radicals. Metal impurities act as photocatalysts, although such reactions can be complex. It has also been suggested that polymer-O charge-transfer complexes are involved. Initiation generates radical-carbons on the polymer chain, sometimes called macroradicals (P•).\nChain initiation\nChain propagation\nChain branching\nTermination\nClassically, the carbon-centred macroradicals (P•) rapidly react with oxygen to form hydroperoxyl radicals (POO•), which in turn abstract an H atom from the polymer chain to give a hydroperoxide (POOH) and a fresh macroradical. Hydroperoxides readily undergo photolysis to give an alkoxyl macroradical (PO•) and a hydroxyl radical (HO•), both of which may go on to form new polymer radicals via hydrogen abstraction. Non-classical alternatives to these steps have been proposed. The alkoxyl radical may also undergo beta scission, generating an acyl-ketone and a macroradical. This is considered to be the main cause of chain breaking in polypropylene.\nSecondary hydroperoxides can also undergo an intramolecular reaction to give a ketone group, although this is limited to polyethylene.\nThe ketones generated by these processes are themselves photo-active, although much more weakly. At ambient temperatures they undergo Type II Norrish reactions with chain scission. They may also absorb UV-energy, which they can then transfer to O, causing it to enter its highly reactive singlet state. Singlet oxygen is a potent oxidising agent that can go on to cause further degradation.", "For polystyrene the complete mechanism of photo-oxidation is still a matter of debate, as different pathways may operate concurrently and vary according to the wavelength of the incident light.\nRegardless, there is agreement on the major steps. \nPure polystyrene should not be able to absorb light with a wavelength above ~280 nm, and initiation is explained through photo-labile impurities (hydroperoxides) and charge transfer complexes, all of which are able to absorb normal sunlight. Charge-transfer complexes of oxygen and polystyrene phenyl groups absorb light to form singlet oxygen, which acts as a radical initiator. Carbonyl impurities in the polymer (cf. acetophenone) also absorb light in the near ultraviolet range (300 to 400 nm), forming excited ketones able to abstract hydrogen atoms directly from the polymer. Hydroperoxides undergo photolysis to form hydroxyl and alkoxyl radicals.\nThese initiation steps generate macroradicals at tertiary sites, as these are more stabilised. The propagation steps are essentially identical to those seen for polyolefins, with oxidation, hydrogen abstraction and photolysis leading to beta scission reactions and increasing numbers of radicals. \nThese steps account for the majority of chain-breaking; however, in a minor pathway the hydroperoxide reacts directly with the polymer to form a ketone group (acetophenone) and a terminal alkene without the formation of additional radicals.\nPolystyrene is observed to yellow during photo-oxidation, which is attributed to the formation of polyenes from these terminal alkenes.", "Perhaps surprisingly, the effect of temperature is often greater than the effect of UV exposure. 
This can be seen in terms of the Arrhenius equation, which shows that reaction rates have an exponential dependence on temperature. By comparison, the dependence of degradation rate on UV exposure and the availability of oxygen is broadly linear. As the oceans are cooler than land, plastic pollution in the marine environment degrades more slowly. Materials buried in landfill do not degrade by photo-oxidation at all, though they may gradually decay by other processes.\nMechanical stress can affect the rate of photo-oxidation and may also accelerate the physical breakup of plastic objects. Stress can be caused by mechanical load (tensile and shear stresses) or even by temperature cycling, particularly in composite systems consisting of materials with differing temperature coefficients of expansion. Similarly, sudden rainfall can cause thermal stress.", "Biodegradable additives may be added to polymers to accelerate their degradation. In the case of photo-oxidation, OXO-biodegradation additives are used. These are transition metal salts such as iron (Fe), manganese (Mn), and cobalt (Co). Fe complexes increase the rate of photooxidation by promoting the homolysis of hydroperoxides via Fenton reactions.\nThe use of such additives has been controversial due to concerns that treated plastics do not fully biodegrade and instead result in the accelerated formation of microplastics. Oxo-plastics would be difficult to distinguish from untreated plastic, but their inclusion during plastic recycling can create a destabilised product with fewer potential uses, potentially jeopardising the business case for recycling any plastic. OXO-biodegradation additives were banned in the EU in 2019.", "UV attack by sunlight can be ameliorated or prevented by adding anti-UV polymer stabilizers, usually prior to shaping the product by injection moulding. UV stabilizers in plastics usually act by absorbing the UV radiation preferentially, and dissipating the energy as low-level heat. The chemicals used are similar to those in sunscreen products, which protect skin from UV attack. They are used frequently in plastics, including cosmetics and films. Different UV stabilizers are utilized depending upon the substrate, intended functional life, and sensitivity to UV degradation. UV stabilizers, such as benzophenones, work by absorbing the UV radiation and preventing the formation of free radicals. Depending upon substitution, the UV absorption spectrum is changed to match the application. Concentrations normally range from 0.05% to 2%, with some applications up to 5%.\nFrequently, glass can be a better alternative to polymers when it comes to UV degradation. Most of the commonly used glass types are highly resistant to UV radiation. Explosion-protection lamps for oil rigs, for example, can be made either from polymer or glass. Here, the UV radiation and rough weather belabor the polymer so much that the material has to be replaced frequently.\nPoly(ethylene-naphthalate) (PEN) can be protected by applying a zinc oxide coating, which acts as a protective film reducing the diffusion of oxygen. Zinc oxide can also be used on polycarbonate (PC) to decrease the oxidation and photo-yellowing rate caused by solar radiation.", "The photo-oxidation of polymers can be investigated by either natural or accelerated weather testing. 
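Before turning to the testing methods, the temperature point made at the start of this passage (an Arrhenius-type exponential dependence of reaction rate on temperature, versus a roughly linear dependence on UV dose and oxygen availability) can be illustrated with a minimal Python sketch. The activation energy, the temperatures, and the linear dose model below are illustrative assumptions, not values taken from the text.

```python
import math

R = 8.314    # gas constant, J/(mol*K)
Ea = 80e3    # assumed activation energy for a degradation step, J/mol

def arrhenius_rate(temp_kelvin, prefactor=1.0):
    """Relative rate constant k = A * exp(-Ea / (R*T))."""
    return prefactor * math.exp(-Ea / (R * temp_kelvin))

k_sea = arrhenius_rate(288.15)     # ~15 C, cool sea surface
k_land = arrhenius_rate(308.15)    # ~35 C, sun-warmed plastic on land

# Exponential temperature dependence: a 20 C rise multiplies the rate several-fold.
print(f"rate ratio (35 C vs 15 C): {k_land / k_sea:.1f}x")

def degradation_rate(k_temp, uv_dose):
    """Assumed roughly linear dependence on UV dose: doubling the dose doubles the rate."""
    return k_temp * uv_dose

print(degradation_rate(k_land, 1.0), degradation_rate(k_land, 2.0))
```

With these assumed numbers the 20 °C temperature difference changes the rate by nearly an order of magnitude, whereas the same factor from UV exposure would require nearly ten times the dose, which is the contrast the passage above draws.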
Such weather testing is important in determining the expected service-life of plastic items as well as the fate of waste plastic.\nIn natural weather testing, polymer samples are directly exposed to open weather for a continuous period of time, while accelerated weather testing uses a specialized test chamber which simulates weathering by sending a controlled amount of UV light and water at a sample. A test chamber may be advantageous in that the exact weathering conditions can be controlled, and the UV or moisture conditions can be made more intense than in natural weathering. Thus, degradation is accelerated and the test is less time-consuming. \nThrough weather testing, the impact of photooxidative processes on the mechanical properties and lifetimes of polymer samples can be determined. For example, the tensile behavior can be elucidated through measuring the stress–strain curve for a specimen. This stress–strain curve is created by applying a tensile stress (which is measured as the force per area applied to a sample face) and measuring the corresponding strain (the fractional change in length). Stress is usually applied until the material fractures, and from this stress–strain curve, mechanical properties such as the Young’s modulus can be determined. Overall, weathering weakens the sample, and as it becomes more brittle, it fractures more easily. This is observed as a decrease in the yield strain, fracture strain, and toughness, as well as an increase in the Young’s modulus and break stress (the stress at which the material fractures). \nAside from measuring the impact of degradation on mechanical properties, the degradation rate of plastic samples can also be quantified by measuring the change in mass of a sample over time, as microplastic fragments can break off from the bulk material as degradation progresses and the material becomes more brittle through chain-scission. Thus, the percentage change in mass is often measured in experiments to quantify degradation. \nMathematical models can also be created to predict the change in mass of a polymer sample over the weathering process. Because mass loss occurs at the surface of the polymer sample, the degradation rate is dependent on surface area. Thus, a model for the dependence of degradation on surface area can be made by assuming that the rate of change in mass resulting from degradation is directly proportional to the surface area SA of the specimen. In this relation the density appears together with k, the specific surface degradation rate (SSDR), which changes depending on the polymer sample’s chemical composition and weathering environment. Furthermore, for a microplastic sample, SA is often approximated as the surface area of a cylinder or sphere. Such an equation can be solved to determine the mass of a polymer sample as a function of time.", "Dyes and pigments are used in polymer materials to provide colour; however, they can also affect the rate of photo-oxidation. Many absorb UV rays and in so doing protect the polymer; however, absorption can cause the dyes to enter an excited state where they may attack the polymer or transfer energy to O to form damaging singlet oxygen. Cu-phthalocyanine is an example: it strongly absorbs UV light, but the excited Cu-phthalocyanine may act as a photoinitiator by abstracting hydrogen atoms from the polymer. 
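The surface-area mass-loss model described a little earlier can be made concrete with a minimal Python sketch. It assumes the commonly used form in which the rate of mass loss is proportional to the density times the surface area, dm/dt = -k·ρ·SA, and approximates the particle as a sphere; the density, the SSDR value k, and the initial radius below are illustrative assumptions, not figures from the text.

```python
import math

# Assumed model: dm/dt = -k * rho * SA, particle approximated as a sphere.
# With m = rho*(4/3)*pi*r^3 and SA = 4*pi*r^2, this reduces to dr/dt = -k,
# i.e. the radius shrinks linearly at the specific surface degradation rate k.

rho = 950.0      # kg/m^3, rough density of polyethylene (assumed)
k = 3.2e-13      # m/s, hypothetical SSDR (~10 micrometres per year)
r0 = 100e-6      # 100 micrometre initial particle radius (assumed)

def mass(radius):
    """Mass of a spherical particle of the given radius."""
    return rho * (4.0 / 3.0) * math.pi * radius ** 3

def radius_at(t_seconds):
    """Radius after t seconds under the constant-SSDR assumption."""
    return max(r0 - k * t_seconds, 0.0)

year = 365.25 * 24 * 3600.0
for years in (0, 1, 5, 10):
    r = radius_at(years * year)
    loss = 100.0 * (1.0 - mass(r) / mass(r0))
    print(f"after {years:2d} y: radius {r * 1e6:6.1f} um, mass loss {loss:5.1f} %")
```

Because the radius shrinks linearly under this assumption, the percentage mass loss accelerates as the particle gets smaller, which is consistent with degradation being a surface-driven process.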
The interactions of such dyes may become even more complicated when other additives are present.\nFillers such as carbon black can screen out UV light, effectively stabilising the polymer, whereas flame retardants tend to cause increased levels of photo-oxidation.", "Degradation can be detected before serious cracks are seen in a product by using infrared spectroscopy, which is able to detect chemical species formed by photo-oxidation. In particular, peroxy-species and carbonyl groups have distinct absorption bands.\nIn one such example, carbonyl groups were easily detected by IR spectroscopy from a cast thin film. The product was a road cone made by rotational moulding in LDPE, which had cracked prematurely in service. Many similar cones also failed because an anti-UV additive had not been used during processing. Other plastic products which failed included polypropylene mancabs used at roadworks, which cracked after only a few months of service.\nThe effects of degradation can also be characterized through scanning electron microscopy (SEM). For example, through SEM, defects like cracks and pits can be directly visualized. In one such comparison, samples were exposed to 840 hours of UV light and moisture in a test chamber. Crack formation is often associated with degradation, such that materials that do not display significant cracking behavior, such as HDPE in that comparison, are more likely to be stable against photooxidation than other materials like LDPE and PP. However, some plastics that have undergone photooxidation may also appear smoother in an SEM image, with some defects like grooves having disappeared afterwards. This was seen in polystyrene in that comparison.", "Unlike most other commodity plastics, polyethylene terephthalate (PET) is able to absorb the near ultraviolet rays in sunlight. Absorption begins at 360 nm, becomes stronger below 320 nm, and is very significant below 300 nm. Despite this, PET has better resistance to photo-oxidation than other commodity plastics; this is due to a poor quantum yield of the absorption. The degradation chemistry is complicated due to simultaneous photodissociation (i.e. not involving oxygen) and photo-oxidation reactions of both the aromatic and aliphatic parts of the molecule. Chain scission is the dominant process, with chain branching and the formation of coloured impurities being less common. Carbon monoxide, carbon dioxide, and carboxylic acids are the main products.\nThe photo-oxidation of other linear polyesters such as polybutylene terephthalate and polyethylene naphthalate proceeds similarly. \nPhotodissociation involves the formation of an excited terephthalic acid unit which undergoes Norrish reactions. The type I reaction dominates, which causes chain scission at the carbonyl unit to give a range of products.\nType II Norrish reactions are less common but give rise to acetaldehyde by way of vinyl alcohol esters. This has an exceedingly low odour and taste threshold and can cause an off-taste in bottled water.\nRadicals formed by photolysis may initiate the photo-oxidation in PET. Photo-oxidation of the aromatic terephthalic acid core results in its step-wise oxidation to 2,5-dihydroxyterephthalic acid. 
The photo-oxidation process at aliphatic sites is similar to that seen for polyolefins, with the formation of hydroperoxide species eventually leading to beta-scission of the polymer chain.", "Picramic acid, also known as 2-amino-4,6-dinitrophenol, is an acid obtained by neutralizing an alcoholic solution of picric acid with ammonium hydroxide. Hydrogen sulfide is then added to the resulting solution, which turns red, yielding sulfur and red crystals. These are the ammonium salts of picramic acid, from which it can be extracted using acetic acid. Picramic acid is explosive and very toxic. It has a bitter taste.\nAlong with its sodium salt (sodium picramate) it is used in low concentrations in certain hair dyes, such as henna, it is considered safe for this use provided its concentration remains low.", "Piezoluminescence is a form of luminescence created by pressure upon certain solids. This phenomenon is characterized by recombination processes involving electrons, holes and impurity ion centres. Some piezoelectric crystals give off a certain amount of piezoluminescence when under pressure. Irradiated salts, such as NaCl, KCl, KBr and polycrystalline chips of LiF (TLD-100), have been found to exhibit piezoluminescent properties. It has also been discovered that ferroelectric polymers exhibit piezoluminescence upon the application of stress.\nIn the folk-literature surrounding psychedelic production, DMT, 5-MeO-DMT, and LSD have been reported to exhibit piezoluminescence. As specifically noted in the book Acid Dreams, it is stated that Augustus Owsley Stanley III, one of the most prolific producers of LSD in the 1960s, observed piezoluminescence in the compound's purest form, which observation is confirmed by Alexander Shulgin: \"A totally pure salt, when dry and when shaken in the dark, will emit small flashes of white light.\"", "Piezomagnetism is a phenomenon observed in some antiferromagnetic and ferrimagnetic crystals. It is characterized by a linear coupling between the system's magnetic polarization and mechanical strain. In a piezomagnetic material, one may induce a spontaneous magnetic moment by applying mechanical stress, or a physical deformation by applying a magnetic field.\nPiezomagnetism differs from the related property of magnetostriction; if an applied magnetic field is reversed in direction, the strain produced changes signs. Additionally, a non-zero piezomagnetic moment can be produced by mechanical strain alone, at zero fields, which is not true of magnetostriction. According to the Institute of Electrical and Electronics Engineers (IEEE): \nThe piezomagnetic effect is made possible by an absence of certain symmetry elements in a crystal structure; specifically, symmetry under time reversal forbids the property.\nThe first experimental observation of piezomagnetism was made in 1960, in the fluorides of cobalt and manganese.\nThe strongest piezomagnet known is uranium dioxide, with magnetoelastic memory switching at magnetic fields near 180,000 Oe at temperatures below 30 kelvins.", "The Polder tensor is a tensor introduced by Dirk Polder for the description of magnetic permeability of ferrites. 
The tensor notation needs to be used because ferrimagnetic material becomes anisotropic in the presence of a magnetizing field.\nThe tensor is described mathematically as:\nNeglecting the effects of damping, the components of the tensor are given by\nwhere\n (rad / s) / (A / m) is the effective gyromagnetic ratio and , the so-called effective g-factor, is a ferrite material constant typically in the range 1.5–2.6, depending on the particular ferrite material. is the frequency of the RF/microwave signal propagating through the ferrite, is the internal magnetic bias field, is the magnetization of the ferrite material and is the magnetic permeability of free space.\nTo simplify computations, the radian frequencies of and can be replaced with frequencies (Hz) in the equations for and because the factor cancels. In this case, Hz / (A / m) MHz / Oe. If CGS units are used, computations can be further simplified because the factor can be dropped.", "Polythionic acid is an oxoacid which has a straight chain of sulfur atoms and has the chemical formula S(SOH) (n > 2). Trithionic acid (HSO) and tetrathionic acid (HSO) are simple examples. They are the conjugate acids of polythionates. Compounds outside this range do not belong to the polythionic acids, owing to their strongly different properties.", "Polythionic acids are rarely encountered, but polythionates are common and important. \nPolythionic acids have been identified in crater lakes. The phenomenon may be useful to predict volcanic activity.", "All polythionate anions contain chains of sulfur atoms attached to terminal SOH groups. The names of polythionic acids are determined by the number of atoms in the chain of sulfur atoms:\n* – dithionic acid\n* – trithionic acid\n* – , etc.", "react with or , forming thiosulfuric acid , as the analogous reaction with forms disulfonomonosulfonic acid ; similarly polysulfanes HS (n = 2–6) give HSSOH. Reactions from both ends of the polysulfane chain lead to the formation of polysulfonodisulfonic acid HOSSSOH.\nMany methods exist for the synthesis of these acids, but the mechanism is unclear because of the large number of simultaneously occurring and competing reactions such as redox, chain transfer, and disproportionation. Typical examples are:\n* Interaction between hydrogen sulfide and sulfur dioxide in highly dilute aqueous solution. This yields a complex mixture of various oxyacids of sulfur of different structures, called Wackenroder solution. At temperatures above 20 °C the solutes slowly decompose with separation of sulfur, sulfur dioxide, and sulfuric acid.\n::HS + HSO → HSO + HO\n::HSO + 2 HSO → HSO + 2 HO\n::HSO + HSO → HSO + HSO\n* Reactions of sulfur halides with or , for example:\n:: SCl + 2 → [OSSSO ] + 2 HCl\n:: SCl + 2 → [OSSSO] + 2 HCl\n:: SCl + 2 → [OSSSO] + 2 HCl\nAnhydrous polythionic acids can be formed in diethyl ether solution in the following three general ways:\n: HSSOH + SO → HSO (n = 1, 2 ... 8)\n: HS + 2 SO → HSO (n = 1, 2 ... 8)\n: 2 HSSOH + I → HSO + 2 HI (n = 1, 2 ... 6)\nPolythionic acids with a small number of sulfur atoms in the chain (n = 3, 4, 5, 6) are the most stable. Polythionic acids are stable only in aqueous solutions, and are rapidly destroyed at higher concentrations with the release of sulfur, sulfur dioxide and, sometimes, sulfuric acid. Acid salts of polythionic acids do not exist. 
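Returning briefly to the Polder tensor passage above, its undamped components can be illustrated numerically with a minimal Python sketch. It assumes the commonly quoted form mu = mu0*(1 + omega0*omegam/(omega0^2 - omega^2)) and kappa = mu0*omega*omegam/(omega0^2 - omega^2), with omega0 = gamma*H0 and omegam = gamma*Ms; the gyromagnetic ratio and the example bias field, magnetization and frequency are illustrative assumptions, not values from the text.

```python
import numpy as np

MU0 = 4e-7 * np.pi     # permeability of free space, H/m
GAMMA = 2.21e5         # effective gyromagnetic ratio, (rad/s)/(A/m), assuming g ~ 2

def polder_tensor(f_hz, h0_am, ms_am):
    """3x3 complex permeability tensor of a ferrite biased along z (undamped form, assumed)."""
    omega = 2 * np.pi * f_hz       # signal frequency in rad/s
    omega0 = GAMMA * h0_am         # precession frequency set by the internal bias field
    omegam = GAMMA * ms_am         # frequency corresponding to the magnetization
    denom = omega0 ** 2 - omega ** 2
    mu = MU0 * (1 + omega0 * omegam / denom)
    kappa = MU0 * (omega * omegam / denom)
    return np.array([[mu, 1j * kappa, 0.0],
                     [-1j * kappa, mu, 0.0],
                     [0.0, 0.0, MU0]])

# Hypothetical ferrite biased well away from resonance, at 10 GHz.
print(polder_tensor(f_hz=10e9, h0_am=1.2e5, ms_am=1.4e5))
```

The off-diagonal ±jκ terms are what encode the field-induced anisotropy mentioned above.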
Polythionate ions are significantly more stable than the corresponding acids.\nUnder the action of oxidants (potassium permanganate, potassium dichromate), polythionic acids and their salts are oxidized to sulfate, and the interaction with strong reducing agents (sodium amalgam) converts them into sulfites and dithionites.", "Numerous acids and salts of this group have a venerable history, and the chemistry of the systems in which they exist dates back to the studies John Dalton devoted to the behavior of hydrogen sulfide in aqueous solutions of sulfur dioxide (1808). This solution now bears the name of Heinrich Wilhelm Ferdinand Wackenroder, who conducted a systematic study (1846). Over the next 60–80 years, numerous studies showed the presence of such ions, in particular the tetrathionate and pentathionate anions ( and , respectively).", "Research centered on three plasma confinement designs: the stellarator, headed by Lyman Spitzer at the Princeton Plasma Physics Laboratory; the toroidal pinch, or Perhapsatron, led by James Tuck at the Los Alamos National Laboratory; and the magnetic mirror devices at the Livermore National Laboratory, led by Richard F. Post. By June 1954, a preliminary study had been completed for a full-scale \"Model D\" stellarator that would be over long and produce 5,000 MW of electricity at a capital cost of $209 per kilowatt. However, each concept encountered unanticipated problems, in the form of plasma instabilities that prevented the requisite temperatures and pressures from being achieved, and it eventually became clear that sustained hydrogen fusion would not be developed quickly. Strauss left the AEC in 1958, and his successor did not share Strauss' enthusiasm for fusion research. Consequently, Project Sherwood was relegated from a crash program to one that concentrated on basic research.", "Project Sherwood was the codename for a United States program in controlled nuclear fusion during the period it was classified. After 1958, when fusion research was declassified around the world, the project was reorganized as a separate division within the United States Atomic Energy Commission (AEC) and lost its codename.\nSherwood developed out of a number of ad hoc efforts dating back to about 1951. Primary among these was the stellarator program at Princeton University, itself code-named Project Matterhorn. Since then, the weapons labs had clamored to join the club: Los Alamos with its z-pinch efforts, Livermore's magnetic mirror program, and later Oak Ridge's fuel injector efforts. By 1953, the combined budgets were increasing into the million-dollar range, demanding some sort of oversight at the AEC level.\nThe name \"Sherwood\" was suggested by Paul McDaniel, Deputy Director of the AEC. He noted that funding for the wartime Hood Building was being dropped and moved to the new program, so they were \"robbing Hood to pay Friar Tuck\", a reference to the British physicist and fusion researcher James L. Tuck. The connection to Robin Hood and Friar Tuck gave the project its name.\nLewis Strauss strongly supported keeping the program secret until pressure from the United Kingdom led to a declassification effort at the 2nd Atoms for Peace meeting in the fall of 1958. After this time, a number of purely civilian organizations also formed to organize meetings on the topic, with the American Physical Society organizing meetings under their Division of Plasma Physics. These meetings have been carried on to this day and were renamed the International Sherwood Fusion Theory Conference. 
The original Project Sherwood became simply the Controlled Thermonuclear Research program within the AEC and its follow-on organizations.", "In the early 1950s, Oak Ridge National Laboratory was composed of a small group of scientists who were mostly experienced in ion-source research. However, research from Project Sherwood was a growing area of interest, and the researchers at Oak Ridge National Laboratory wanted to participate in the discovery of controlled fusion. They studied areas of controlled fusion such as the rate of plasma diffusion in a magnetic field and the charge-exchange process. However, ion-source work still made up a large part of their research.", "The declassification of the program was a large topic of discussion among scientists at all of the laboratories involved with the project and at the Sherwood conferences. The reasoning for the initially high classification status was that if the research into controlled fusion were successful, it would be a significant military advantage. In particular, fusion produces high-energy neutrons, which could be used to breed plutonium from uranium for nuclear bomb production. If a small fusion machine was possible, this represented a significant proliferation risk.\nHowever, as the difficulty of making a working fusion reactor became increasingly clear, fears of hidden reactors faded. Additionally, while some of the required industrial work could be conducted without access to the classified information, there were some instances where the classified information of the program was a necessity for those working on projects such as the large-scale stellarator, the ultra-high vacuum, and the problem of energy storage. In these instances, there was a contract with the Commission stipulating that the information being used would only be shared with the personnel directly working on the project. It soon became apparent that industrial companies were expected to become highly invested in the area of fission, and because of this it became clear that these companies should have full access to the research information obtained by Project Sherwood. In June 1956, permits for the research information from Project Sherwood became available through the Commission for companies that were qualified.\nBetween 1955 and 1958, information became more and more available to the public with its gradual declassification, beginning with the sharing of information with the United Kingdom. Strong supporters of declassifying the program included the director of the Division of Research, Thomas Johnson, and a member of his staff, Amasa Bishop. Some of their reasoning for wanting declassification was that the secrecy of the project could negatively impact their ability to recruit and employ experienced personnel for the program. They also argued that it would change the way their conferences could be held. The scientists working on the project would be able to freely discuss their findings with others in the scientific community rather than only with the scientists working on the same project.\nIn 1956, Soviet physicist Igor Kurchatov gave a talk in the UK where he revealed the entire Soviet fusion program and detailed the problems they were having. Now that the very group of people the classification was intended to keep in the dark were at roughly the same stage of development, there was no obvious reason to continue classification. 
While the UK had been among the first to classify their program in the aftermath of the Klaus Fuchs affair in 1950, in the summer of 1957 they appeared to have successfully created fusion in their new ZETA and were clamoring to tell the press of their advances. Their agreement to share information with the US required them to classify their work, and now they also began pressing the US to agree to declassification.\nBy May 1958, basic information about the various projects within Project Sherwood including the stellarator, magnetic mirrors, and molecular ion beams had been released to the public.", "The funding for Project Sherwood began with the closure of another program called Project Lincoln at the Hood Laboratory. As the number of people working on the projects grew, so did the budget. Under Strauss the program was reorganized, and its funding and staffing increased dramatically. From early 1954 to 1955, the number of people working on Project Sherwood grew from 45 to 110. By the next year, that number had doubled. The original budget from the shut down of Project Lincoln was $1 million. The breakdown of the year budget from 1951 to 1957 can be seen in the table below. At its peak, Project Sherwood had a budget of $23 million per year and retained more than 500 scientists.", "In 1954, there was a program started at New York University called the Division of Research. It was a small program that included personnel from the Institute of Mathematical Sciences at New York University.", "There was another small group of scientists at Tufts College in Medford, Massachusetts that had become involved in research of the pinch effect. Although their work was not officially part of the Atomic Energy Commission, some of their personnel attended the Sherwood conferences.", "Although there was already a main project (magnetic mirror) at the University of California, scientist W. R. Baker began research into the pinch effect at UCRL, Berkeley in 1952. Two years later, Stirling Colgate began research on shock-heating at UCRL, Livermore.", "*Massachusetts Institute of Technology (MIT)\n*Carnegie Institute\n*Westinghouse Electric Corporation\n*Gould-National Batteries, Inc.\n*General Electrical Company", "The Beckett skimmer has some similarities to the downdraft skimmer but introduced a foam nozzle to produce the flow of air bubbles. The name Beckett comes from the patented foam nozzle developed and sold by the Beckett Corporation (United States), although similar foam nozzle designs are sold by other companies outside the United States (e.g. Sicce (Italy)). Instead of using the plastic media that is found in downdraft skimmer designs, the Beckett skimmer uses design concepts from previous generations of skimmers, specifically the downdraft skimmer and the venturi skimmer (the Beckett 1408 Foam Nozzle is a modified 4 port venturi) to produce a hybrid that is capable of using powerful pressure rated water pumps and quickly processing large amounts of aquarium water in a short period of time. Commercial Beckett skimmers come in single Beckett, dual Beckett, and quad Beckett designs. Well engineered Beckett skimmers are quiet and reliable. Due to the advances in pump technologies and introduction of DC pumps, the concerns of powerful pumps taking up additional space, introducing additional noise, and using more electricity have all been alleviated. 
Unlike the Downdraft and Spray Induction skimmers, Beckett skimmer designs are produced by a number of companies in the United States and elsewhere and are not known to be restricted by patents.", "A protein skimmer or foam fractionator is a device used to remove organic compounds such as food and waste particles from water. It is most commonly used in commercial applications like municipal water treatment facilities, public aquariums, and aquaculture facilities. Smaller protein skimmers are also used for filtration of home saltwater aquariums and even freshwater aquariums and ponds.", "A recent trend is to change the method by which the skimmer is fed dirty water from the aquarium as a means to recirculate water within the skimmer multiple times before it is returned to the sump or the aquarium. Aspirating pump skimmers are the most popular type of skimmer to use recirculating designs although other types of skimmers, such as Beckett skimmers, are also available in recirculating versions. While there is a popular belief among some aquarist that this recirculation increases the dwell or contact time of the generated air bubbles within the skimmer there is no authoritative evidence that this is true. Each time water is recirculated within the skimmer any air bubbles in that water sample are destroyed and new bubbles are generated by the recirculating pump venturi apparatus so the air-water contact time begins again for these newly created bubbles. In non-recirculating skimmer designs, a skimmer has one inlet supplied by a pump that pulls water in from the aquarium and injects it with air into the skimmer and releasing the foam or air/water mix into the reaction chamber. With a recirculating design, the one inlet is usually driven by a separate feed pump, or in some cases may be gravity fed, to receive the dirty water to process, while the pump providing the foam or air/water mix into the reaction chamber is set up separately in a closed loop on the side of the skimmer. The recirculating pump pulls water out of the skimmer and injects air to generate the foam or air/water mix before returning it to the skimmer reaction chamber—thus recirculating it. The feed pump in a recirculating design typically injects a smaller amount of dirty water than co/counter-current designs. The separate feed pump allows easy control of the rate of water exchange through the skimmer and for many aquarists this is one of the important attractions of recirculating skimmer designs. Because the pump configuration of these skimmers is similar to that of aspirating pump skimmers, the power consumption advantages are also similar.", "This method is related to the downdraft, but uses a pump to power a spray nozzle, fixed a few inches above the water level. The spray action entraps and shreds the air in the base of the unit, similar to holding your thumb over a garden hose, which then rises to the collection chamber. In the United States, one company has patented the spray induction technology and the commercial product offerings are limited to that single company.", "The Downdraft skimmer is both a proprietary skimmer design and a style of protein skimmer that injects water under high pressure into tubes that have a foam or bubble generating mechanism and carry the air/water mixture down into the skimmer and into a separate chamber. The proprietary design is protected in the United States with patents and commercial skimmer products in the US are limited to that single company. 
Their design uses one or more tubes with plastic media such as bio balls inside to mix water under high pressure and air in the body of the skimmer resulting in foam that collects protein waste in a collection cup. This was one of the earlier high performance protein skimmer designs and large models were produced that saw success in large and public aquariums.", "All skimmers have key features in common: water flows through a chamber and is brought into contact with a column of fine bubbles. The bubbles collect proteins and other substances and carry them to the top of the device where the foam, but not the water, collects in a cup. Here the foam condenses to a liquid, which can be easily removed from the system. The material that collects in the cup can range from pale greenish-yellow, watery liquid to a thick black tar.\nConsider this summary of optimal protein skimmer design by Randy Holmes-Farley: \nAlso under considerable recent attention has been the general shape of a skimmer as well. In particular, much attention has been given to the introduction of cone shaped skimmer units. Originally designed by Klaus Jensen in 2004, the concept was founded on the principle that a conical body allows the foam to accumulate more steadily through a gently sloping transition. It was claimed that this reduces the overall turbulence, resulting in more efficient skimming. However, this design reduces the overall volume inside the skimmer, reducing dwell time. Cylindrical-shaped protein skimmers are the most popular design and allow for the largest volume of air and water.\nOverall, protein skimmers can be classed in two ways depending on whether they operate by co-current flow or counter-current flow. In a co-current flow system, air is introduced at the bottom of the chamber and is in contact with the water as it rises upwards towards the collection chamber. In a counter-current system, air is forced into the system under pressure and moves against the flow of the water for a while before it rises up towards the collection cup. Because the air bubbles may be in contact with the water for a longer period in a counter-current flow system, protein skimmers of this type are considered by some to be more effective at removing organic wastes.", "The premise behind these skimmers is that a high pressure pump combined with a venturi, can be used to introduce the bubbles into the water stream. The tank water is pumped through the venturi, in which fine bubbles are introduced via pressure differential, then enters the skimmer body. This method was popular due to its compact size and high efficiency for the time but venturi designs are now outdated and surpassed by more efficient needle-wheel designs.", "The original method of protein skimming, running pressurized air through a diffuser to produce large quantities of micro bubbles, remains a viable, effective, and economic choice, although newer technologies may require lower maintenance. The air stone is most often an oblong, partially hollowed block of wood, most often of the genus Tilia. The most popular wooden air-stones for skimmers are made from limewood (Tilia europaea or European limewood) although basswood (Tilia americana or American Linden), works as well, may be cheaper and is often more readily available. The wooden blocks are drilled, tapped, fitted with an air fitting, and connected by air tubing to one or more air pumps delivering at least 1 cfm. The wooden air stone is placed at the bottom of a tall column of water. 
The tank water is pumped into the column, allowed to pass by the rising bubbles, and is then returned to the tank. To get enough contact time with the bubbles, these units can be many feet in height.\nAir stone protein skimmers may be constructed as a DIY project from PVC pipes and fittings at low cost [http://www.angelfire.com/ok/dog1/skimmer.html] [http://www.hawkfish.org/snailman/diy8inskimmer.htm] and with varying degrees of complexity [https://web.archive.org/web/20121225063946/http://ozreef.org/diy_plans/protein_skimmers/air_stone_protein_skimmer.html].\nAir stone protein skimmers require powerful air pumps, which are often power hungry, loud, and hot, leading to an increase in aquarium water temperature. While this method has been around for many years, the emergence of more efficient technologies has led many to regard it as inefficient for current use in larger systems or systems with large bio-loads.", "Protein skimming removes certain organic compounds, including proteins and amino acids found in food particles and fish waste, by exploiting the polarity of the protein itself. Due to their intrinsic charge, water-borne molecules are either repelled or attracted by the air/water interface, and can be described as hydrophobic (such as fats or oils) or hydrophilic (such as salt, sugar, ammonia, most amino acids, and most inorganic compounds). However, some larger organic molecules have both hydrophobic and hydrophilic portions; these molecules are called amphipathic or amphiphilic. Commercial protein skimmers work by generating a large air/water interface, specifically by injecting large numbers of bubbles into the water column. In general, the smaller the bubbles, the more effective the protein skimming, because the total surface area of small bubbles occupying a given volume is much greater than that of larger bubbles occupying the same volume. Large numbers of small bubbles therefore present an enormous air/water interface on which hydrophobic and amphipathic organic molecules can collect. Water movement hastens the diffusion of organic molecules, which effectively brings more of them to the air/water interface and lets them accumulate on the surface of the air bubbles. This process continues until the interface is saturated, unless the bubble is removed from the water or it bursts, in which case the accumulated molecules are released back into the water column. Even so, further exposure of a saturated air bubble to organic molecules can still alter its load, as compounds that bind more strongly may displace more weakly bound molecules that have already accumulated on the interface. Although some aquarists believe that increasing the contact time (or dwell time, as it is sometimes called) is always good, it is incorrect to claim that it is always better to increase the contact time between bubbles and the aquarium water. As the bubbles accumulate near the top of the protein skimmer water column, they pack more densely and the water begins to drain away, creating the foam that will carry the organic molecules to the skimmate collection cup or to a separate skimmate waste collector; the organic molecules, along with any inorganic molecules that may have become bound to them, are thereby exported from the water system.\nIn addition to the proteins removed by skimming, there are a number of other organic and inorganic molecules that are typically removed.
These include a variety of fats, fatty acids, carbohydrates, metals such as copper, and trace elements such as iodine. Particulates, phytoplankton, bacteria, and detritus are also removed; this is desired by some aquarists, and is often enhanced by placement of the skimmer before other forms of filtration, lessening the burden on the filtration system as a whole. There is at least one published study that provides a detailed list of the export products removed by the skimmer. Aquarists who keep filter-feeding invertebrates, however, sometimes prefer to keep these particulates in the water to serve as natural food.\nProtein skimmers are used to harvest algae and phytoplankton gently enough to maintain viability for culturing or commercial sale as live cultures.\nAlternative forms of water filtration have recently come into use, including the algae scrubber, which leaves food particles in the water for corals and small fish to consume, but removes the noxious compounds including ammonia, nitrite, nitrate, and phosphate that protein skimmers do not remove.", "This basic concept is more correctly known as an aspirating skimmer, since some skimmer designs using an aspirator do not use a \"Pin-Wheel\"/\"Adrian-Wheel\" or \"Needle-Wheel\". \"Pin-Wheel\"/\"Adrian-Wheel\" describes the look of an impeller that consists of a disk with pins mounted perpendicular (90°) to the disc and parallel to the rotor. \"Needle-Wheel\" describes the look of an impeller that consists of a series of pins projecting out perpendicular to the rotor from a central axis. \"Mesh-Wheel\" describes the look of an impeller that consists of a mesh material attached to a plate or central axis on the rotor. The purpose of these modified impellers is to chop or shred the air that is introduced via an air aspirator apparatus or external air pump into very fine bubbles. The Mesh-Wheel design provides excellent results in the short term because of its ability to create fine bubbles with its thin cutting surfaces, but its propensity for clogging makes it an unreliable design.\nThe air aspirator differs from the venturi by the positioning of the water pump. With a venturi, the water is pushed through the unit, creating a vacuum to draw in air. With an air aspirator, the water is pulled through the unit, creating a vacuum to draw in air. These terms, however, are often incorrectly interchanged. \nThis style of protein skimmer has become very popular with public aquariums and is believed to be the most popular type of skimmer used with residential reef aquariums today. It has been particularly successful in smaller aquariums due to its usually compact size, ease of set up and use, and quiet operation. Since the pump is pushing a mixture of air and water, the power required to turn the rotor can be decreased and may result in a lower power requirement for that pump vs. the same pump with a different impeller when it is only pumping water.", "In 1940, ZoBell and Conn stated that they had never encountered \"true psychrophiles\" or organisms that grow best at relatively low temperatures. In 1958, J. L. Ingraham supported this by concluding that there are very few or possibly no bacteria that fit the textbook definitions of psychrophiles. Richard Y. Morita emphasizes this by using the term psychrotroph to describe organisms that do not meet the definition of psychrophiles. 
The confusion between the terms psychrotrophs and psychrophiles arose because early investigators were unaware of the thermolability of psychrophilic organisms at laboratory temperatures. As a result, they did not determine the cardinal temperatures for their isolates.\nThe two groups are similar in that both are capable of growing at zero, but the optimum and upper temperature limits for growth are lower for psychrophiles than for psychrotrophs. Psychrophiles are also more often isolated from permanently cold habitats than psychrotrophs are. Although psychrophilic enzymes remain under-used because the cost of production and processing at low temperatures is higher than for the commercial enzymes presently in use, the renewed attention and resurgence of research interest in psychrophiles and psychrotrophs is expected to contribute to the betterment of the environment and to efforts to conserve energy.", "Microscopic algae that can tolerate extremely cold temperatures can survive in snow, ice, and very cold seawater. On snow, cold-tolerant algae can bloom on the snow surface covering land, glaciers, or sea ice when there is sufficient light. These snow algae darken the surface of the snow and can contribute to snow melt. In seawater, phytoplankton that can tolerate both very high salinities and very cold temperatures are able to live in sea ice. One example of a psychrophilic phytoplankton species is the ice-associated diatom Fragilariopsis cylindrus. Phytoplankton living in the cold ocean waters near Antarctica often have very high protein content, containing some of the highest concentrations ever measured of enzymes like Rubisco.", "Psychrotrophic microbes are able to grow at temperatures below , but have better growth rates at higher temperatures. Psychrotrophic bacteria and fungi are able to grow at refrigeration temperatures, and can be responsible for food spoilage and act as foodborne pathogens such as Yersinia. They provide an estimation of a product's shelf life, and they can also be found in soils, in surface and deep sea waters, in Antarctic ecosystems, and in foods.\nPsychrotrophic bacteria are of particular concern to the dairy industry. Most are killed by pasteurization; however, they can be present in milk as post-pasteurization contaminants due to less than adequate sanitation practices. According to the Food Science Department at Cornell University, psychrotrophs are bacteria capable of growth at temperatures at or less than . At freezing temperatures, growth of psychrotrophic bacteria becomes negligible or virtually stops.\nAll three subunits of the RecBCD enzyme are essential for the physiological activities of the enzyme in the Antarctic Pseudomonas syringae, namely repairing DNA damage and supporting growth at low temperature. The RecBCD enzymes are exchangeable between the psychrophilic P. syringae and the mesophilic E. coli when provided with the entire protein complex from the same species. However, the RecBC proteins (RecBCPs and RecBCEc) of the two bacteria are not equivalent; RecBCEc is proficient in DNA recombination and repair, and supports the growth of P. syringae at low temperature, while RecBCPs is insufficient for these functions. Finally, although both the helicase and nuclease activities of RecBCDPs are important for DNA repair and the growth of P.
syringae at low temperature, the RecB-nuclease activity is not essential in vivo.", "Insects that are psychrotrophic can survive cold temperatures through several general mechanisms (unlike opportunistic and chill susceptible insects): (1) chill tolerance, (2) freeze avoidance, and (3) freeze tolerance. Chill tolerant insects succumb to freezing temperatures after prolonged exposure to mild or moderate freezing temperatures. Freeze avoiding insects can survive extended periods of time at sub-freezing temperatures in a supercooled state, but die at their supercooling point. Freeze tolerant insects can survive ice crystal formation within their body at sub-freezing temperatures. Freeze tolerance within insects is argued to be on a continuum, with some insect species exhibiting partial (e.g., Tipula paludosa, Hemideina thoracica\n), moderate (e.g., Cryptocercus punctulatus), and strong freezing tolerance (e.g., Eurosta solidaginis and Syrphus ribesii), and other insect species exhibiting freezing tolerance with low supercooling point (e.g., Pytho deplanatus).", "The cold environments that psychrophiles inhabit are ubiquitous on Earth, as a large fraction of the planetary surface experiences temperatures lower than 10 °C. They are present in permafrost, polar ice, glaciers, snowfields and deep ocean waters. These organisms can also be found in pockets of sea ice with high salinity content. Microbial activity has been measured in soils frozen below −39 °C. In addition to their temperature limit, psychrophiles must also adapt to other extreme environmental constraints that may arise as a result of their habitat. These constraints include high pressure in the deep sea, and high salt concentration on some sea ice.", "Psychrophiles or cryophiles (adj. psychrophilic or cryophilic) are extremophilic organisms that are capable of growth and reproduction in low temperatures, ranging from to . They are found in places that are permanently cold, such as the polar regions and the deep sea. They can be contrasted with thermophiles, which are organisms that thrive at unusually high temperatures, and mesophiles at intermediate temperatures. Psychrophile is Greek for cold-loving, .\nMany such organisms are bacteria or archaea, but some eukaryotes such as lichens, snow algae, phytoplankton, fungi, and wingless midges, are also classified as psychrophiles.", "Psychrophiles include bacteria, lichens, snow algae, phytoplankton, fungi, and insects.\nAmong the bacteria that can tolerate extreme cold are Arthrobacter sp., Psychrobacter sp. and members of the genera Halomonas, Pseudomonas, Hyphomonas, and Sphingomonas. Another example is Chryseobacterium greenlandensis, a psychrophile that was found in 120,000-year-old ice.\nUmbilicaria antarctica and Xanthoria elegans are lichens that have been recorded photosynthesizing at temperatures ranging down to −24 °C, and they can grow down to around −10 °C. Some multicellular eukaryotes can also be metabolically active at sub-zero temperatures, such as some conifers; those in the Chironomidae family are still active at −16 °C.\nMicroalgae that live in snow and ice include green, brown, and red algae. Snow algae species such as Chloromonas sp., Chlamydomonas sp., and Chlorella sp. are found in polar environments.\nSome phytoplankton can tolerate extremely cold temperatures and high salinities that occur in brine channels when sea ice forms in polar oceans. 
Some examples are diatoms like Fragilariopsis cylindrus, Nitzschia lecointeii, Entomoneis kjellmanii, Nitzschia stellata, Thalassiosira australis, Berkelaya adeliense, and Navicula glaciei.\nPenicillium is a genus of fungi found in a wide range of environments, including extreme cold.\nAmong the psychrophile insects, the Grylloblattidae or ice crawlers, found on mountaintops, have optimal temperatures between 1 and 4 °C. The wingless midge (Chironomidae) Belgica antarctica can tolerate salt, freezing, and strong ultraviolet radiation, and has the smallest known genome of any insect. The small genome, of 99 million base pairs, is thought to be an adaptation to extreme environments.", "Psychrophiles are protected from freezing and the expansion of ice by ice-induced desiccation and vitrification (glass transition), as long as they cool slowly. Free-living cells desiccate and vitrify between −10 °C and −26 °C. Cells of multicellular organisms may vitrify at temperatures below −50 °C. The cells may continue to have some metabolic activity in the extracellular fluid down to these temperatures, and they remain viable once restored to normal temperatures.\nThey must also overcome the stiffening of their lipid cell membrane, as this is important for the survival and functionality of these organisms. To accomplish this, psychrophiles adapt lipid membrane structures that have a high content of short, unsaturated fatty acids. Compared with longer, saturated fatty acids, incorporating this type of fatty acid gives the lipid cell membrane a lower melting point, which increases its fluidity. In addition, carotenoids present in the membrane help modulate its fluidity.\nAntifreeze proteins are also synthesized to keep the psychrophiles' internal space liquid and to protect their DNA when temperatures drop below water's freezing point. These proteins prevent ice formation or recrystallization from occurring.\nThe enzymes of these organisms have been hypothesized to engage in an activity-stability-flexibility relationship as a method for adapting to the cold; the flexibility of the enzyme structure increases as a way to compensate for the freezing effect of their environment.\nCertain cryophiles, such as the Gram-negative bacteria Vibrio and Aeromonas spp., can transition into a viable but nonculturable (VBNC) state. During VBNC, a micro-organism can respire and use substrates for metabolism; however, it cannot replicate. An advantage of this state is that it is highly reversible. It has been debated whether VBNC is an active survival strategy or whether the organism's cells will eventually no longer be able to be revived. There is evidence, however, that it can be very effective: Gram-positive Actinobacteria have been shown to have survived for about 500,000 years in the permafrost conditions of Antarctica, Canada, and Siberia.", "Some researchers have examined the use of antimatter as an alternative fusion trigger, mainly in the context of antimatter-catalyzed nuclear pulse propulsion but also for nuclear weapons. Such a system, in a weapons context, would have many of the desired properties of a pure fusion weapon. The technical barriers to producing and containing the required quantities of antimatter appear formidable, well beyond present capabilities.\nInduced gamma emission is another approach that is currently being researched. Very high energy-density chemicals, such as ballotechnics, have also been suggested as a means of triggering a pure fusion weapon.
\nNuclear isomers have also been investigated for use in pure fusion weaponry. Hafnium and tantalum isomers can be induced to emit very strong gamma radiation. Gamma emission from these isomers may have enough energy to start a thermonuclear reaction, without requiring any fissile material.", "Despite the many millions of dollars spent by the U.S. between 1952 and 1992 to produce a pure fusion weapon, no measurable success was ever achieved. In 1998, the U.S. Department of Energy (DOE) released a restricted data declassification decision stating that even if the DOE made a substantial investment in the past to develop a pure fusion weapon, \"the U.S. is not known to have and is not developing a pure fusion weapon and no credible design for a pure fusion weapon resulted from the DOE investment\". The power densities needed to ignite a fusion reaction still seem attainable only with the aid of a fission explosion, or with large apparatus such as powerful lasers like those at the National Ignition Facility, the Sandia Z-pinch machine, or various magnetic tokamaks. Regardless of any claimed advantages of pure fusion weapons, building those weapons does not appear to be feasible using currently available technologies and many have expressed concern that pure fusion weapons research and development would subvert the intent of the Nuclear Non-Proliferation Treaty and the Comprehensive Test Ban Treaty.\nIt has been claimed that it is possible to conceive of a crude, deliverable, pure fusion weapon, using only present-day, unclassified technology. The weapon design weighs approximately 3 tonnes, and might have a total yield of approximately 3 tonnes of TNT. The proposed design uses a large explosively pumped flux compression generator to produce the high power density required to ignite the fusion fuel. From the point of view of explosive damage, such a weapon would have no clear advantages over a conventional explosive, but the massive neutron flux could deliver a lethal dose of radiation to humans within a 500-meter radius (most of those fatalities would occur over a period of months, rather than immediately).", "A pure fusion weapon is a hypothetical hydrogen bomb design that does not need a fission \"primary\" explosive to ignite the fusion of deuterium and tritium, two heavy isotopes of hydrogen used in fission-fusion thermonuclear weapons. Such a weapon would require no fissile material and would therefore be much easier to develop in secret than existing weapons. Separating weapons-grade uranium (U-235) or breeding plutonium (Pu-239) requires a substantial and difficult-to-conceal industrial investment, and blocking the sale and transfer of the needed machinery has been the primary mechanism to control nuclear proliferation to date.", "All current thermonuclear weapons use a fission bomb as a first stage to create the high temperatures and pressures necessary to start a fusion reaction between deuterium and tritium in a second stage. For many years, nuclear weapon designers have researched whether it is possible to create high enough temperatures and pressures inside a confined space to ignite a fusion reaction, without using fission. Pure fusion weapons offer the possibility of generating arbitrarily small nuclear yields because no critical mass of fissile fuel need be assembled for detonation, as with a conventional fission primary needed to spark a fusion explosion. 
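For scale, the 3-tonne-of-TNT yield quoted above corresponds to a strikingly small amount of fusion fuel, which is the underlying reason arbitrarily small yields are conceivable. The rough estimate below uses the standard ~17.6 MeV released per deuterium–tritium reaction and the conventional 4.184 GJ per tonne of TNT; both figures are outside reference values assumed for this sketch, not numbers taken from the text, so treat the result as an order-of-magnitude illustration only.

```python
# Rough order-of-magnitude sketch: D-T fuel mass for a 3 t TNT-equivalent yield.
# The 17.6 MeV per reaction and 4.184 GJ per tonne TNT are standard reference
# values assumed here, not figures taken from the text above.

MEV_TO_J = 1.602e-13        # joules per MeV
TNT_TONNE_J = 4.184e9       # joules per tonne of TNT equivalent
E_DT_MEV = 17.6             # kinetic energy released per D-T fusion (MeV)
AMU_KG = 1.661e-27          # kilograms per atomic mass unit

yield_j = 3 * TNT_TONNE_J                       # ~1.3e10 J
reactions = yield_j / (E_DT_MEV * MEV_TO_J)     # ~4.5e21 fusion reactions
fuel_kg = reactions * (2.014 + 3.016) * AMU_KG  # one D and one T consumed per reaction

print(f"reactions needed : {reactions:.1e}")
print(f"D-T fuel burned  : {fuel_kg * 1e6:.0f} mg (assuming complete burn-up)")
```

On these assumptions, a few tens of milligrams of fully burned fuel would suffice in principle, in sharp contrast to the kilogram-scale critical masses required by fission primaries.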
There is also the advantage of reduced collateral damage stemming from fallout because these weapons would not create the highly radioactive byproducts made by fission-type weapons. These weapons would be lethal not only because of their explosive force, which could be large compared to bombs based on chemical explosives, but also because of the neutrons they generate.\nWhile various neutron source devices have been developed, some of them based on fusion reactions, none of them are able to produce a net energy yield, either in controlled form for energy production or uncontrolled for a weapon.", "As the density increases, the Gamow peak increases in height and shifts towards lower energy, while the potential barriers are depressed. If the potential barriers are depressed by the amount of , the Gamow peak is shifted across the origin, making the reactions density-dependent, as the Gamow peak energy is much larger than the thermal energy. The material becomes a degenerate gas at such densities. Harrison proposed that models fully independent of temperature be called cryonuclear.\nPycnonuclear reactions can proceed in two ways: direct ( or ) or through chain of electron capture reactions ().", "In Wolf–Rayet stars, the triple-alpha reaction is accommodated by the low-energy of resonance. However, in neutron stars the temperature in the core is so low that the triple-alpha reactions can occur via the pycnonuclear pathway.", "As the neutron stars undergo accretion, the density in the crust increases, passing the electron capture threshold. As the electron capture threshold ( g cm) is exceeded, it allows for the formation of light nuclei from the process of double electron capture (), forming the light neon nuclei and free neutrons, which further increases the density of the crust. As the density increases, the crystal lattices of neutron-rich nuclei are forced closer together due to gravitational collapse of accreting material, and at a point where the nuclei are pushed so close together that their zero-point oscillations allow them to break through the Coulomb barrier, fusion occurs. While the main site of pycnonuclear fusion within neutron stars is the inner crust, pycnonuclear reactions between light nuclei can occur even in the plasma ocean. Since the core of neutron stars was approximated to be g cm, at such extreme densities, pycnonuclear reactions play a large role as demonstrated by Haensel & Zdunik, who showed that at densities of g cm, they serve as a major heat source. In the fusion processes of the inner crust, the burning of neutron-rich nuclei () releases a lot of heat, allowing pycnonuclear fusion to perform as a major energy source, possibly even acting as an energy basin for gamma-ray bursts.\nFurther studies have established that most magnetars are found at densities of g cm, indicating that pycnonuclear reactions along with subsequent electron capture reactions could serve as major heat sources.", "Pycnonuclear reactions can occur anywhere and in any matter, but under standard conditions, the speed of the reaction is exceedingly low, and thus, have no significant role outside of extremely dense systems, neutron-rich and free electron-rich environments, such as the inner crust of a neutron star. 
A feature of pycnonuclear reactions is that the rate of the reaction is directly proportional to the density of the space that the reaction is occurring in, but is almost fully independent of the temperature of the environment.\nPycnonuclear reactions are observed in neutron stars or white dwarfs, with evidence present of them occurring in lab-generated deuterium-tritium plasma. Some speculations also relate the fact that Jupiter emits more radiation than it receives from the Sun with pycnonuclear reactions or cold fusion.", "In white dwarfs, the core of the star is cold, under which conditions, so, if treated classically, the nuclei that arrange themselves into a crystal lattice are in their ground state. The zero-point oscillations of nuclei in the crystal lattice with energy at the energy at Gamow's peak equal to can overcome the Coulomb barrier, actuating pycnonuclear reactions. A semi-analytical model indicates that in white dwarfs, a thermonuclear runaway can occur at much earlier ages than that of the universe, as the pycnonuclear reactions in the cores of white dwarfs exceed the luminosity of the white dwarfs, allowing C-burning to occur, which catalyzes the formation of type Ia supernovas in accreting white dwarfs, whose mass is equal to the Chandrasekhar mass.\nSome studies indicate that the contribution of pycnonuclear reactions towards instability of white dwarfs is only significant in carbon white dwarfs, while in oxygen white dwarfs, such instability is caused mostly due to electron capture. Although other authors disagree that the pycnonuclear reactions can act as major long-term heating sources for massive (1.25 ) white dwarfs, as their density would not suffice for a high rate of pycnonuclear reactions.\nWhile most studies indicate that at the end of their lifecycle, white dwarfs slowly decay into black dwarfs, where pycnonuclear reactions slowly turn their cores into , according to some versions, a collapse of black dwarfs is possible: M.E. Caplan (2020) theorizes that in the most massive black dwarfs (1.25 ), due to their declining electron fraction resulting from production, they will exceed the Chandrasekhar limit in the very far future, speculating that their lifetime and delay time can stretch to up to years.", "Before delving into the mathematical model, it is important to understand that pycnonuclear fusion, in its essence, occurs due to two main events:\n* A phenomenon of quantum nature called quantum diffusion.\n* Overlap of the wave functions of zero-point oscillations of the nuclei.\nBoth of these effects are heavily affected by screening. The term screening is generally used by nuclear physicists when referring to plasmas of particularly high density. In order for the pycnonuclear fusion to occur, the two particles must overcome the electrostatic repulsion between them - the energy required for this is called the Coulomb barrier. Due to the presence of other charged particles (mainly electrons) next to the reacting pair, they exert the effect of shielding - as the electrons create an electron cloud around the positively charged ions - effectively reducing the electrostatic repulsion between them, lowering the Coulomb barrier. This phenomenon of shielding is referred to as \"screening\", and in cases where it is particularly strong, it is called \"strong screening\". 
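The qualitative picture above, a Coulomb barrier that screening lowers and that the nuclei must tunnel through, can be summarised with the standard textbook expressions. The forms below are generic sketches (an ion-sphere-style constant shift for strong screening and the WKB/Gamow tunnelling factor); they are not the specific formulas of the papers discussed in this section, and $a$ denotes an assumed screening length set by the surrounding electrons and ions.

```latex
% Bare and (strongly) screened Coulomb barrier between nuclei of charge Z_1 e and Z_2 e:
V(r) = \frac{Z_1 Z_2 e^2}{r},
\qquad
V_{\mathrm{scr}}(r) \simeq \frac{Z_1 Z_2 e^2}{r} - \frac{Z_1 Z_2 e^2}{a},
\qquad r \ll a .

% WKB transmission coefficient through the barrier (reduced mass \mu, energy E),
% reducing at low energy to the familiar Gamow factor with relative velocity v:
P \sim \exp\!\left(-\frac{2}{\hbar}\int_{r_1}^{r_2}
      \sqrt{2\mu\,[\,V_{\mathrm{scr}}(r)-E\,]}\;\mathrm{d}r\right)
  \;\longrightarrow\;
  \exp\!\left(-\frac{2\pi Z_1 Z_2 e^2}{\hbar v}\right).
```

Because the exponent depends on the barrier height, the reduced mass, and the relative velocity, even a modest screening correction changes the tunnelling probability, and hence the fusion rate, by many orders of magnitude.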
Consequently, in cases where the plasma has a strong screening effect, the rate of pycnonuclear fusion is substantially enhanced.\nQuantum tunnelling is the foundation of the quantum physical approach to pycnonuclear fusion. It is closely intertwined with the screening effect, as the transmission coefficient depends on the height of the potential barrier, the mass of the particles, and their relative velocity (since the total energy of the system depends on the kinetic energy). From this it follows that the transmission coefficient is very sensitive to the effects of screening. Thus, screening not only reduces the potential barrier, allowing \"classical\" fusion to occur via the overlap of the wave functions of the zero-point oscillations of the particles, but also increases the transmission coefficient, both of which raise the rate of pycnonuclear fusion.\nIn addition to the other terminology related to pycnonuclear fusion, the papers also introduce several regimes that define the rate of pycnonuclear fusion; specifically, they identify the zero-temperature, intermediate, and thermally-enhanced regimes as the main ones.", "The pioneers of the derivation of the rate of pycnonuclear fusion in a one-component plasma (OCP) were Edwin Salpeter and David Van Horn, whose article was published in 1969. Their approach used a semiclassical method to solve the Schrödinger equation by using the Wentzel-Kramers-Brillouin (WKB) approximation and Wigner-Seitz (WS) spheres. Their model is heavily simplified and, while primitive, it is needed to understand the other approaches, which largely build on the work of Salpeter & Van Horn. They employed the WS spheres to divide the OCP into regions containing one ion each, with the ions situated on the vertices of a BCC crystal lattice. Then, using the WKB approximation, they resolved the effect of quantum tunnelling on the fusing nuclei. Extrapolating this to the entire lattice allowed them to arrive at their formula for the rate of pycnonuclear fusion:\nwhere is the density of the plasma, is the mean molecular weight per electron (atomic nucleus), is a constant equal to and serves as a conversion factor from atomic mass units to grams, and represents the thermal average of the pairwise reaction probability.\nHowever, the major shortcoming of the method proposed by Salpeter & Van Horn is that it neglects the dynamics of the lattice. This was improved upon by Schramm and Koonin in 1990. In their model, they found that the lattice dynamics cannot be neglected, but that it is possible for the effects it causes to cancel out.", "There is currently no coherent consensus on the rate of pycnonuclear reactions. Many uncertainties have to be considered when modelling the rate of pycnonuclear reactions, especially in spaces with high numbers of free particles. The primary focus of current research is on the effects of crystal lattice deformation and the presence of free neutrons on the reaction rate. Every time fusion occurs, nuclei are removed from the crystal lattice, creating a defect. The difficulty in approximating this model lies in the fact that the subsequent changes to the lattice, and the effect of the various deformations on the rate, are thus far unknown. Since neighbouring parts of the lattice can also affect the reaction rate, neglecting such deformations could lead to major discrepancies.
Another confounding variable is the presence of free neutrons in the crusts of neutron stars. Free neutrons could potentially affect the Coulomb barrier, making it either taller or thicker. A study published by D.G. Yakovlev in 2006 showed that the rate calculation for the first pycnonuclear fusion of two nuclei in the crust of a neutron star can carry an uncertainty of up to seven orders of magnitude. In this study, Yakovlev also highlighted the uncertainty in the threshold of pycnonuclear fusion (i.e., at what density it starts), giving the approximate density required for the start of pycnonuclear fusion of g cm, arriving at a conclusion similar to that of Haensel and Zdunik. According to Haensel and Zdunik, additional uncertainty in the rate calculations for neutron stars can also arise from the uneven distribution of crustal heating, which can affect the thermal states of neutron stars before and after accretion.\nIn white dwarfs and neutron stars, the nuclear reaction rates can be affected not only by pycnonuclear reactions but also by the plasma screening of the Coulomb interaction. The Ukrainian Electrodynamic Research Laboratory \"Proton-21\" established that by forming a thin electron plasma layer on the surface of the target material, and thus forcing the self-compression of the target material at low temperatures, it could stimulate the process of pycnonuclear fusion. The startup of the process was due to the self-contracting plasma \"scanning\" the entire volume of the target material, screening the Coulomb field.", "Pycnonuclear fusion () is a type of nuclear fusion reaction which occurs due to the zero-point oscillations of nuclei around their equilibrium positions in a crystal lattice. In quantum physics, the phenomenon can be interpreted as an overlap of the wave functions of neighboring ions, and its rate is proportional to the overlapping amplitude. Under the conditions of above-threshold ionization, the reactions of neutronization and pycnonuclear fusion can lead to the creation of absolutely stable environments in superdense substances.\nThe term \"pycnonuclear\" was coined by A.G.W. Cameron in 1959, but research showing the possibility of nuclear fusion in extremely dense and cold compositions was published by W. A. Wildhack in 1940.", "The first successful results with pyroelectric fusion using a tritiated target were reported in 2010. Putterman and Naranjo worked with T. Venhaus of Los Alamos National Laboratory to measure a 14.1 MeV neutron signal far above background.", "In April 2005, a UCLA team headed by chemistry professor James K. Gimzewski and physics professor Seth Putterman utilized a tungsten probe attached to a pyroelectric crystal to increase the electric field strength. Brian Naranjo, a graduate student working under Putterman, conducted the experiment demonstrating the use of a pyroelectric power source for producing fusion on a laboratory benchtop device. The device used a lithium tantalate () pyroelectric crystal to ionize deuterium atoms and to accelerate the deuterons towards a stationary erbium dideuteride (ErD) target. Around 1000 fusion reactions per second took place, each resulting in the production of an 820 keV helium-3 nucleus and a 2.45 MeV neutron.
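The 820 keV / 2.45 MeV split quoted above is exactly what two-body kinematics predicts for the D(d,n)³He branch, whose total kinetic energy release is about 3.27 MeV (a standard figure, assumed here rather than taken from the text). With the reacting deuterons essentially at rest compared with the products, the lighter neutron carries most of the energy:

```python
# Two-body kinematics for D + D -> He-3 + n (Q value ~3.27 MeV, standard figure).
# With negligible initial momentum, the products share Q in inverse proportion
# to their masses.

Q_MEV = 3.27      # total kinetic energy released (assumed standard value)
M_N = 1.0087      # neutron mass, atomic mass units
M_HE3 = 3.0160    # helium-3 mass, atomic mass units

e_neutron = Q_MEV * M_HE3 / (M_N + M_HE3)  # lighter product gets the larger share
e_helium3 = Q_MEV * M_N / (M_N + M_HE3)

print(f"neutron : {e_neutron:.2f} MeV")         # ~2.45 MeV
print(f"He-3    : {e_helium3 * 1000:.0f} keV")  # ~820 keV
```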
The team anticipates applications of the device as a neutron generator or possibly in microthrusters for space propulsion.\nA team at Rensselaer Polytechnic Institute, led by Yaron Danon and his graduate student Jeffrey Geuther, improved upon the UCLA experiments using a device with two pyroelectric crystals that was capable of operating at non-cryogenic temperatures.\nPyroelectric fusion has been hyped in the news media, which overlooked the earlier work of Dougar Jabon, Fedorovich and Samsonenko. Pyroelectric fusion is not related to the earlier claims of fusion reactions said to have been observed during sonoluminescence (bubble fusion) experiments conducted under the direction of Rusi Taleyarkhan of Purdue University. Naranjo of the UCLA team was one of the main critics of those earlier prospective fusion claims from Taleyarkhan.", "Pyroelectric fusion refers to the technique of using pyroelectric crystals to generate high-strength electrostatic fields to accelerate deuterium ions (tritium might also be used someday) into a metal hydride target also containing deuterium (or tritium) with sufficient kinetic energy to cause these ions to undergo nuclear fusion. It was reported in April 2005 by a team at UCLA. The scientists used a pyroelectric crystal heated from −34 to 7 °C (−29 to 45 °F), combined with a tungsten needle to produce an electric field of about 25 gigavolts per meter to ionize and accelerate deuterium nuclei into an erbium deuteride target. Though the energy of the deuterium ions generated by the crystal has not been directly measured, the authors used 100 keV (a temperature of about 10 K) as an estimate in their modeling. At these energy levels, two deuterium nuclei can fuse to produce a helium-3 nucleus, a 2.45 MeV neutron and bremsstrahlung. Although it makes a useful neutron generator, the apparatus is not intended for power generation since it requires far more energy than it produces.", "The process of light ion acceleration using electrostatic fields and deuterium ions to produce fusion in solid deuterated targets was first demonstrated by Cockcroft and Walton in 1932 (see Cockcroft–Walton generator). That process is used in miniaturized versions of their original accelerator, in the form of small sealed-tube neutron generators, for petroleum exploration.\nThe process of pyroelectricity has been known since ancient times. The first use of a pyroelectric field to accelerate deuterons was in a 1997 experiment conducted by Drs. V.D. Dougar Jabon, G.V. Fedorovich, and N.V. Samsonenko. This group was the first to utilize a lithium tantalate () pyroelectric crystal in fusion experiments.\nThe novel element of the pyroelectric approach to fusion is its application of the pyroelectric effect to generate the accelerating electric fields. This is done by heating the crystal from &minus;34 °C to +7 °C over a period of a few minutes.\nNuclear D-D fusion driven by pyroelectric crystals was proposed by Naranjo and Putterman in 2002. It was also discussed by Brownridge and Shafroth in 2004. The possibility of using pyroelectric crystals in a neutron production device (by D-D fusion) was proposed in a conference paper by Geuther and Danon in 2004 and later in a publication discussing electron and ion acceleration by pyroelectric crystals. None of these later authors had prior knowledge of the earlier 1997 experimental work conducted by Dougar Jabon, Fedorovich, and Samsonenko, who mistakenly believed that fusion occurred within the crystals.
The key ingredient of using a tungsten needle to produce sufficient ion beam current for use with a pyroelectric crystal power supply was first demonstrated in the 2005 Nature paper, although in a broader context tungsten emitter tips have been used as ion sources in other applications for many years. In 2010, it was found that tungsten emitter tips are not necessary to increase the acceleration potential of pyroelectric crystals; the acceleration potential can allow positive ions to reach kinetic energies between 300 and 310 keV.", "The sample (fruits, vegetables, tobacco, etc.) is homogenized and centrifuged with a reagent and agitated for 1 minute. The reagents used depend on the type of sample to be analyzed. Following this, the sample is put through a dispersive solid phase extraction cleanup prior to analysis by gas-liquid chromatography or liquid-liquid chromatography.\nSamples prepared using the QuEChERS method can be processed more quickly using a homogenization instrument. Such instruments can homogenize the food sample in a centrifuge tube, then agitate the sample with the reagent of choice, before moving the extracted sample for centrifuging. By using such an instrument, the samples can be moved through the QuEChERS method more quickly.\nSome modifications to the original QuEChERS method had to be introduced to ensure efficient extraction of pH-dependent compounds (e.g., phenoxyalkanoic acids), to minimize degradation of susceptible compounds (e.g., base and acid labile pesticides) and to expand the spectrum of matrices covered.", "QuEChERS is a solid phase extraction method for detection of biocide residues in food. The name is a portmanteau word formed from \"quick, easy, cheap, effective, rugged, and safe\".\n__TOC__", "The rapid development in the multidisciplinary field of tissue engineering has resulted in a variety of new and innovative medicinal products, often carrying living cells, intended to repair, regenerate or replace damaged human tissue. Tissue engineered medicinal products (TEMPs) vary in terms of the type and origin of cells and the product’s complexity. As all medicinal products, the safety and efficacy of TEMPs must be consistent throughout the manufacturing process. Quality control and assurance are of paramount importance and products are constantly assessed throughout the manufacturing process to ensure their safety, efficacy, consistency and reproducibility between batches. The European Medicines Agency (EMA) is responsible for the development, assessment and supervision of medicines in the EU. The appointed committees are involved in referral procedures concerning safety or the balance of benefit/risk of a medicinal product. In addition, the committees organize inspections with regards to the conditions under which medicinal products are being manufactured. For example, the compliance with good manufacturing practice (GMP), good clinical practice (GCP), good laboratory practice (GLP) and pharmacovigilance (PhV).", "When quality control of TEMPs is considered, a risk assessment needs to be conducted. A risk is defined as a \"potentially unfavourable effect that can be attributed to the clinical use of advanced therapy medicinal products (ATMPs) and is of concern to the patient and/or to other populations (e.g. caregivers and off-spring)\". Some risks include immunogenicity, disease transmission, tumor formation, treatment failure, undesirable tissue formation, and inadvertent germ transduction. 
A risk factor is defined as a \"qualitative or quantitative characteristic that contributes to a specific risk following handling and/or administration of an ATMP\". The integration of all available information on risks and risk factors is called risk profiling. Due to the fact that every TEMP is different, the risks associated with each one of them vary and, subsequently, the procedures that must be implemented to ensure its quality are also unique to the product. Once the risks associated with the TEMP are identified, the appropriate tests must be developed and validated accordingly. Thus, there is no standard set of tests for the quality control of TEMPs. The EMA has released a set of regulatory guidelines on the topics to be considered by companies involved in the development and marketing of medicines for use in the European Union. These guidelines have to be followed in order for the marketing authorization of a product to be issued. Fictitious examples of risk analysis for further elucidation of the process are provided in the EMA guidelines.", "Careful and detailed documentation concerning the characteristics of the starting materials (e.g. history of the cell line derivation and cell banking) and manufacturing process steps (e.g. procurement of tissue or cells and manipulation) must be maintained. The cellular part of every cell-based medicinal product must be characterized in terms of identity, purity, potency, viability and suitability for the intended use. The non-cellular constituents must be also characterized with regards to their intended function in the final product. For example, scaffolds or membranes that are used to support the cells must be identified and characterized in terms of porosity, density, microscopic structure and particular size. The same requirement for characterization applies for biologically active molecules, such as growth factors or cytokines.", "Proper quality control involves the release testing of the final product through updated and validated methods. The release specifications of the product must be selected on the basis of the parameters defined during the characterization studies and the appropriate release tests must be performed. In case a release test cannot be performed on the final product but only on previous stages of the manufacturing, exceptions can be made after proper justification. However, in these cases adequate quality control has to rise from the manufacturing process. Specifications about the stability of the product, the presence or not of genetically modified cells, structural components and whether it is a combination product must also be defined.", "The integrability is underpinned by the existence of large symmetry algebras for the different models. For the XXX case this is the Yangian , while in the XXZ case this is the quantum group , the q-deformation of the affine Lie algebra of , as explained in the notes by .\nThese appear through the transfer matrix, and the condition that the Bethe vectors are generated from a state satisfying corresponds to the solutions being part of a highest-weight representation of the extended symmetry algebras.", "For higher spins, say spin , replace with coming from the Lie algebra representation of the Lie algebra , of dimension . The XXX Hamiltonian\nis solvable by Bethe ansatz with Bethe equations", "Following the approach of , the spectrum of the Hamiltonian for the XXX model\ncan be determined by the Bethe ansatz. 
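The displayed equations in this passage appear to have been stripped during extraction. For orientation only, in one common convention (a Faddeev-style treatment of the spin-1/2 chain; signs and normalisations differ between sources, so this is a hedged reconstruction rather than the article's own notation) the XXX Hamiltonian, the Bethe equations for the rapidities λ_j, and the energy of the corresponding Bethe vector read:

```latex
H_{\mathrm{XXX}}
  = J \sum_{n=1}^{N}\Bigl(S^{x}_{n}S^{x}_{n+1}+S^{y}_{n}S^{y}_{n+1}
        +S^{z}_{n}S^{z}_{n+1}-\tfrac14\Bigr),
  \qquad S^{a}_{n}=\tfrac12\,\sigma^{a}_{n},

\left(\frac{\lambda_{j}+i/2}{\lambda_{j}-i/2}\right)^{\!N}
  = \prod_{\substack{k=1\\ k\neq j}}^{M}
    \frac{\lambda_{j}-\lambda_{k}+i}{\lambda_{j}-\lambda_{k}-i},
  \qquad j=1,\dots,M,

E = -\frac{J}{2}\sum_{j=1}^{M}\frac{1}{\lambda_{j}^{2}+1/4}.
```

Here M counts the down-turned spins (magnons) created by acting with the operators B(λ_j) on the all-spins-up reference state, which is the construction described next.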
In this context, for an appropriately defined family of operators dependent on a spectral parameter acting on the total Hilbert space with each , a Bethe vector is a vector of the form\nwhere .\nIf the satisfy the Bethe equation\nthen the Bethe vector is an eigenvector of with eigenvalue .\nThe family as well as three other families come from a transfer matrix (in turn defined using a Lax matrix), which acts on along with an auxiliary space , and can be written as a block matrix with entries in ,\nwhich satisfies fundamental commutation relations (FCRs) similar in form to the Yang–Baxter equation used to derive the Bethe equations. The FCRs also show there is a large commuting subalgebra given by the generating function , as , so when is written as a polynomial in , the coefficients all commute, spanning a commutative subalgebra which is an element of. The Bethe vectors are in fact simultaneous eigenvectors for the whole subalgebra.", "For spin and a parameter for the deformation from the XXX model, the BAE (Bethe ansatz equation) is\nNotably, for these are precisely the BAEs for the six-vertex model, after identifying , where is the anisotropy parameter of the six-vertex model. This was originally thought to be coincidental until Baxter showed the XXZ Hamiltonian was contained in the algebra generated by the transfer matrix , given exactly by", "For quantum mechanical reasons (see exchange interaction or ), the dominant coupling between two dipoles may cause nearest-neighbors to have lowest energy when they are aligned. Under this assumption (so that magnetic interactions only occur between adjacent dipoles) and on a 1-dimensional periodic lattice, the Hamiltonian can be written in the form\nwhere is the coupling constant and dipoles are represented by classical vectors (or \"spins\") σ, subject to the periodic boundary condition . \nThe Heisenberg model is a more realistic model in that it treats the spins quantum-mechanically, by replacing the spin by a quantum operator acting upon the tensor product , of dimension . To define it, recall the Pauli spin-1/2 matrices\nand for and denote , where is the identity matrix.\nGiven a choice of real-valued coupling constants and , the Hamiltonian is given by\nwhere the on the right-hand side indicates the external magnetic field, with periodic boundary conditions. The objective is to determine the spectrum of the Hamiltonian, from which the partition function can be calculated and the thermodynamics of the system can be studied.\nIt is common to name the model depending on the values of , and : if , the model is called the Heisenberg XYZ model; in the case of , it is the Heisenberg XXZ model; if , it is the Heisenberg XXX model. The spin 1/2 Heisenberg model in one dimension may be solved exactly using the Bethe ansatz. In the algebraic formulation, these are related to particular quantum affine algebras and elliptic quantum groups in the XXZ and XYZ cases respectively. Other approaches do so without Bethe ansatz.", "The quantum Heisenberg model, developed by Werner Heisenberg, is a statistical mechanical model used in the study of critical points and phase transitions of magnetic systems, in which the spins of the magnetic systems are treated quantum mechanically. It is related to the prototypical Ising model, where at each site of a lattice, a spin represents a microscopic magnetic dipole to which the magnetic moment is either up or down. 
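The Hamiltonian whose displayed form is missing from the paragraph above is conventionally written in terms of the Pauli matrices as follows. This is the standard textbook form (the overall factor of 1/2 and the sign conventions vary between sources), with h the external magnetic field and site N+1 identified with site 1 by the periodic boundary conditions:

```latex
\hat{H} = -\frac{1}{2}\sum_{j=1}^{N}\Bigl(
    J_{x}\,\sigma^{x}_{j}\sigma^{x}_{j+1}
  + J_{y}\,\sigma^{y}_{j}\sigma^{y}_{j+1}
  + J_{z}\,\sigma^{z}_{j}\sigma^{z}_{j+1}
  + h\,\sigma^{z}_{j}\Bigr),
  \qquad \sigma^{a}_{N+1}\equiv\sigma^{a}_{1}.
```

In this notation, the naming convention quoted above corresponds to all three couplings being distinct for the XYZ model, J_x = J_y ≠ J_z for the XXZ model, and J_x = J_y = J_z for the XXX model.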
Except the coupling between magnetic dipole moments, there is also a multipolar version of Heisenberg model called the multipolar exchange interaction.", "The physics of the Heisenberg XXX model strongly depends on the sign of the coupling constant\n and the dimension of the space. For positive the ground state is always ferromagnetic. At negative the ground state is antiferromagnetic in two and three dimensions. In one dimension the nature of correlations in the antiferromagnetic Heisenberg model depends on the spin of the magnetic dipoles. If the spin is integer then only short-range order is present. A system of half-integer spins exhibits quasi-long range order.\nA simplified version of Heisenberg model is the one-dimensional Ising model, where the transverse magnetic field is in the x-direction, and the interaction is only in the z-direction:\nAt small g and large g, the ground state degeneracy is different, which implies that there must be a quantum phase transition in between. It can be solved exactly for the critical point using the duality analysis. The duality transition of the Pauli matrices is and , where and are also Pauli matrices which obey the Pauli matrix algebra.\nUnder periodic boundary conditions, the transformed Hamiltonian can be shown is of a very similar form:\nbut for the attached to the spin interaction term. Assuming that there's only one critical point, we can conclude that the phase transition happens at .", "* Another important object is entanglement entropy. One way to describe it is to subdivide the unique ground state into a block (several sequential spins) and the environment (the rest of the ground state). The entropy of the block can be considered as entanglement entropy. At zero temperature in the critical region (thermodynamic limit) it scales logarithmically with the size of the block. As the temperature increases the logarithmic dependence changes into a linear function. For large temperatures linear dependence follows from the second law of thermodynamics.\n* The Heisenberg model provides an important and tractable theoretical example for applying density matrix renormalisation.\n* The six-vertex model can be solved using the algebraic Bethe ansatz for the Heisenberg spin chain .\n* The half-filled Hubbard model in the limit of strong repulsive interactions can be mapped onto a Heisenberg model with representing the strength of the superexchange interaction.\n* Limits of the model as the lattice spacing is sent to zero (and various limits are taken for variables appearing in the theory) describes integrable field theories, both non-relativistic such as the nonlinear Schrödinger equation, and relativistic, such as the sigma model, the sigma model (which is also a principal chiral model) and the sine-Gordon model.\n* Calculating certain correlation functions in the planar or large limit of N = 4 supersymmetric Yang–Mills theory", "Infrared radiofluorescence (sometimes spelt radio-fluorescence) is a dating technique involving the infrared (~ 880 nm) luminescence signal of orthoclase from exposure to ionizing radiation. It can reveal the last time of daylight exposure of sediments, e.g., a layer of sand exposed to light before deposition.", "In the second half of the 20th century, radium was progressively replaced with paint containing promethium-147. Promethium is a low-energy beta-emitter, which, unlike alpha emitters like radium, does not degrade the phosphor lattice, so the luminosity of the material will not degrade so quickly. 
It also does not emit the penetrating gamma rays which radium does. The half-life of Pm is only 2.62 years, so in a decade the radioactivity of a promethium dial will decline to only 1/16 of its original value, making it safer to dispose of, compared to radium with its half life of 1600 years. This short half-life meant that the luminosity of promethium dials also dropped by half every 2.62 years, giving them a short useful life, which led to promethium's replacement by tritium.\nPromethium-based paint was used to illuminate Apollo Lunar Module electrical switch tips and painted on control panels of the Lunar Roving Vehicle.", "The first use of radioluminescence was in luminous paint containing radium, a natural radioisotope. Beginning in 1908, luminous paint containing a mixture of radium and copper-doped zinc sulfide was used to paint watch faces and instrument dials, giving a greenish glow. Phosphors containing copper-doped zinc sulfide (ZnS:Cu) yield blue-green light; copper and manganese-doped zinc sulfide (), yielding yellow-orange light are also used. Radium-based luminescent paint is no longer used due to the radiation hazard posed to persons manufacturing the dials. These phosphors are not suitable for use in layers thicker than 25 mg/cm, as the self-absorption of the light then becomes a problem. Zinc sulfide undergoes degradation of its crystal lattice structure, leading to gradual loss of brightness significantly faster than the depletion of radium.\nZnS:Ag coated spinthariscope screens were used by Ernest Rutherford in his experiments discovering the atomic nucleus.\nRadium was used in luminous paint until the 1960s, when it was replaced with the other radioisotopes mentioned above due to health concerns. In addition to alpha and beta particles, radium emits penetrating gamma rays, which can pass through the metal and glass of a watch dial, and skin. A typical older radium wristwatch dial has a radioactivity of 3–10 kBq and could expose its wearer to an annual dose of 24 millisieverts if worn continuously. Another health hazard is its decay product, the radioactive gas radon, which constitutes a significant risk even at extremely low concentrations when inhaled. Radium's long half-life of 1600 years means that surfaces coated with radium paint, such as watch faces and hands, remain a health hazard long after their useful life is over. There are still millions of luminous radium clock, watch, and compass faces and aircraft instrument dials owned by the public. The case of the \"Radium Girls\", workers in watch factories in the early 1920s who painted watch faces with radium paint and later contracted fatal cancer through ingesting radium when they pointed their brushes with their lips, increased public awareness of the hazards of radioluminescent materials, and radioactivity in general.", "The latest generation of radioluminescent materials is based on tritium, a radioactive isotope of hydrogen with half-life of 12.32 years that emits very low-energy beta radiation. It is used on wristwatch faces, gun sights, and emergency exit signs. The tritium gas is contained in a small glass tube, coated with a phosphor on the inside. Beta particles emitted by the tritium strike the phosphor coating and cause it to fluoresce, emitting light, usually yellow-green.\nTritium is used because it is believed to pose a negligible threat to human health, in contrast to the previous radioluminescent source, radium, which proved to be a significant radiological hazard. 
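The decline figures quoted in this section for promethium and tritium dials follow directly from exponential decay. The short check below uses only the half-lives stated above (2.62 years for Pm-147, 12.32 years for tritium); the helper function is illustrative, not part of any cited source.

```python
# Fraction of a radioisotope's activity remaining after t years.

def remaining_fraction(t_years: float, half_life_years: float) -> float:
    return 0.5 ** (t_years / half_life_years)

# Promethium-147 dial: roughly 1/14 left after exactly 10 years, and about 1/16
# after four half-lives (around 10.5 years), consistent with the statement above.
print(f"Pm-147 after 10 y   : {remaining_fraction(10, 2.62):.3f}")      # ~0.071

# Tritium source: brightness (proportional to activity) halves every 12.32 years.
print(f"H-3 after 12.32 y   : {remaining_fraction(12.32, 12.32):.3f}")  # 0.500
```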
The low-energy 5.7 keV beta particles emitted by tritium cannot pass through the enclosing glass tube. Even if they could, they are not able to penetrate human skin. Tritium is only a health threat if ingested or inhaled. Since tritium is a gas, if a tritium tube breaks, the gas dissipates in the air and is diluted to safe concentrations.\nTritium has a half-life of 12.32 years, so the brightness of a tritium light source will decline to half its initial value in that time.", "Radioluminescence occurs when an incoming particle of ionizing radiation collides with an atom or molecule, exciting an orbital electron to a higher energy level. The particle usually comes from the radioactive decay of an atom of a radioisotope, an isotope of an element which is radioactive. The electron then returns to its ground energy level by emitting the extra energy as a photon of light. A chemical that releases light of a particular color when struck by ionizing radiation is called a phosphor. Radioluminescent light sources usually consist of a radioactive substance mixed with, or in proximity to, a phosphor.", "Radioluminescence is the phenomenon by which light is produced in a material by bombardment with ionizing radiation such as alpha particles, beta particles, or gamma rays. Radioluminescence is used as a low level light source for night illumination of instruments or signage. Radioluminescent paint is occasionally used for clock hands and instrument dials, enabling them to be read in the dark. Radioluminescence is also sometimes seen around high-power radiation sources, such as nuclear reactors and radioisotopes.", "Since radioactivity was discovered around the beginning of the 20th century, the main application of radioluminescence has been in radioluminescent paint, used on watch and compass dials, gunsights, aircraft flight instrument faces, and other instruments, allowing them to be seen in darkness. Radioluminescent paint consists of a mixture of a chemical containing a radioisotope with a radioluminescent chemical (phosphor). The continuous radioactive decay of the isotope's atoms releases radiation particles which strike the molecules of the phosphor, causing them to emit light. The constant bombardment by radioactive particles causes the chemical breakdown of many types of phosphor, so radioluminescent paints lose some of their luminosity during their working life.\nRadioluminescent materials may also be used in the construction of an optoelectric nuclear battery, a type of radioisotope generator in which nuclear energy is converted into light.", "Radium was discovered by Marie and Pierre Curie in 1898 and was soon combined with paint to make luminescent paint, which was applied to clocks, airplane instruments, and the like, to be able to read them in the dark.\nIn 1914, Dr. Sabin Arnold von Sochocky and Dr. George S. Willis founded the Radium Luminous Material Corporation. The company made luminescent paint. The company later changed its name to the United States Radium Corporation. The use of radium to provide luminescence for hands and indices on watches soon followed.\nThe Ingersoll Watch division of the Waterbury Clock Company, a nationally-known maker of low-cost pocket and wristwatches, was a leading popularizer of the use of radium for watch hands and indices through the introduction of their \"Radiolite\" watches in 1916. 
The Radiolite series, made in various sizes and models, became a signature of the Connecticut-based company.\nRadium dials were typically painted by young women, who used to point their brushes by licking and shaping the bristles prior to painting the fine lines and numbers on the dials. This practice resulted in the ingestion of radium, which caused serious jaw-bone degeneration, malignancy, and other dental diseases. The disease, radium-induced osteonecrosis, was recognized as an occupational disease in 1925 after a group of radium painters, known as the Radium Girls, from the United States Radium Corporation sued. By 1930, all dial painters stopped pointing their brushes by mouth. Stopping this practice drastically reduced the amount of radium ingested and therefore the incidence of malignancy.\nLuminous Processes employees interviewed by a journalist in 1978 stated they had been left ignorant of radium's dangers. They were told that eliminating lip-pointing had ended earlier problems. They worked in unvented rooms and wore smocks that they laundered at home. Geiger counters could pick up readings from pants returned from a dry cleaner and from clothes stored away in a cedar chest.", "According to the United States Environmental Protection Agency, \"radioactive antiques [including watches] are usually not a health risk as long as they are intact and in good condition.\" However, radium is highly radioactive, emitting alpha, beta, and gamma radiation — the effects of which are particularly deleterious if inhaled or ingested since there is no shielding within the body. Indeed, the body treats radium as it does calcium, storing it in bone where it may cause bone degeneration and cancer.\nTherefore, it is of the utmost importance that watches with radium dials should not be taken apart without proper training, technique, and facilities. Radium paint can be ingested by inhaling flaking paint particles. The alpha particles emitted by the radium, which is taken up in bone, will kill off surrounding bone tissue, resulting in a condition loosely referred to as radium jaw. Inhaled or ingested particles may deposit a high local dose with a risk of radiation-caused lung or gastrointestinal cancer. Penetrating gamma radiation produced by some dials also represents a significant health risk.\nAlthough old radium dials generally no longer produce light, this is due to the breakdown of the crystal structure of the luminous zinc sulfide rather than the radioactive decay of the radium. The radium isotope (Ra-226) used has a half-life of about 1,600 years, so radium dials remain essentially just as radioactive as when originally painted 50 or 100 years ago, whether or not they remain luminous.\nRadium dials held near the face have been shown to produce radiation doses in excess of 10 µSv/hour. After about 20 minutes this delivers the equivalent of one whole day's worth of normal background radiation. This rate probably only represents the dose rate from gamma emission, as the alpha emission will be stopped by the lacquer, or crystal, or case; hence, the dose rate following ingestion or inhalation of the dust could be much higher.\nChronic exposure to high levels of radium can result in an increased incidence of bone, liver, or breast cancer. Decaying radium also produces the gas radon, recognized as the second leading cause of lung cancer in the United States and the United Kingdom. 
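As a rough cross-check of the dial dose-rate figure quoted above, the sketch below compares a dose rate of 10 µSv/hour with an assumed worldwide-average natural background of about 2.4 mSv/year; the background value is my assumption, not a figure from the source.

```python
# Rough arithmetic check (my own assumption: ~2.4 mSv/year average natural background).
BACKGROUND_MSV_PER_YEAR = 2.4
background_usv_per_day = BACKGROUND_MSV_PER_YEAR * 1000 / 365.25

dial_usv_per_hour = 10.0   # the "in excess of 10 uSv/hour" figure quoted in the text
minutes_for_one_day = background_usv_per_day / dial_usv_per_hour * 60

print(f"Average background: ~{background_usv_per_day:.1f} uSv/day")
print(f"At 10 uSv/hour that is matched in ~{minutes_for_one_day:.0f} minutes; "
      "dials somewhat above 10 uSv/hour reach it in roughly the 20 minutes quoted.")
```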
A 2018 study by researchers from the University of Northampton found that a collection of 30 vintage military watches with radium dials kept in a small, unventilated room produced a radon concentration 134 times greater than the UK's recommended \"safe\" level.\nIngestion of radium has been linked to anemia, cataracts, broken teeth, and reduced bone growth.", "In the United States there does not appear ever to have been prohibitory legislation banning the use of radium in clocks and watches. It was only with the passage of the Energy Policy Act of 2005 that the United States Nuclear Regulatory Commission (NRC) was given oversight of the use of radium. Prior to that date, \"the federal government had a limited role, if any, in ensuring the safe use of radium,\" according to the NRC. The element was phased out of use by industry of its own volition as superior and safer luminous materials entered the marketplace.", "Radium dials are watch, clock and other instrument dials painted with luminous paint containing radium-226 to produce radioluminescence. Radium dials were produced throughout most of the 20th century before being replaced by safer tritium-based luminous material in the 1970s and finally by non-toxic, non-radioactive strontium aluminate–based photoluminescent material from the middle 1990s.", "*Undark produced by the United States Radium Corporation\n*Luna produced by the Radium Dial Company\n*Marvelite produced by the Cold Light Manufacturing Company (a subsidiary of the Radium Company of Colorado)", "In the naphtha cracking process, C4R4 refers to the C4 residual obtained after separation of 1,3-butadiene, isobutylene, 1-butene, and cis- or trans-2-butene from the C4 raffinate stream, and it mainly consists of n-butane. Normally C4R4 is a side product in a tert-butyl alcohol plant if C4R3 is used as the feed.", "In the naphtha cracking process, C4R3 refers to the C4 residual obtained after separation of 1,3-butadiene, isobutylene, and 1-butene from the C4 raffinate stream, and it mainly consists of cis- or trans-2-butene, n-butane, and unseparated 1-butene. Normally C4R3 is processed through a selective hydrogenation unit (SHU) and a CDHydro deisobutenizer unit to produce isobutylene as a feed to a tert-butyl alcohol plant.", "In chemical separation terminology, the raffinate (from French raffiner, to refine) is a product which has had a component or components removed. The product containing the removed materials is referred to as the extract. For example, in solvent extraction, the raffinate is the liquid stream which remains after solutes from the original liquid are removed through contact with an immiscible liquid. In metallurgy, raffinating refers to a process in which impurities are removed from liquid material.\nIn pressure swing adsorption the raffinate refers to the gas which is not adsorbed during the high-pressure stage. The species which is desorbed from the adsorbent at low pressure may be called the \"extract\" product.", "In the naphtha cracking process, C4R1 refers to the C4 residual obtained after separation of 1,3-butadiene from the C4 raffinate stream, and it mainly consists of isobutylene 40~50 wt% and cis- or trans-2-butene 30~35 wt%. Normally C4R1 is a side product in a 1,3-butadiene plant and a feed to a tert-butyl alcohol plant.", "In the naphtha cracking process, C4R2 refers to the C4 residual obtained after separation of 1,3-butadiene and isobutylene from the C4 raffinate stream, and it mainly consists of cis- or trans-2-butene 50~60 wt%, 1-butene 10~15 wt%, and n-butane ~20 wt%. 
Normally C4R2 is a side product in tert-butyl alcohol plant if C4R1 is used for feed.", "A rare sugar is a sugar that occurs in limited quantities in nature. Rare sugars can be made using enzymes, choosing which enzymes to use if you know the substrate can be aided by the Izumoring-strategy.\nSpecific examples of rare sugars are:\n* Allulose\n* Allose\n* Sorbose\n* Tagatose", "Growing crystals for X-ray crystallography can be quite difficult. For X-ray analysis, single perfect crystals are required. Typically a small amount (5–100 mg) of a pure compound is used, and crystals are allowed to grow very slowly. Several techniques can be used to grow these perfect crystals:\n* Slow evaporation of a single solvent - typically the compound is dissolved in a suitable solvent and the solvent is allowed to slowly evaporate. Once the solution is saturated crystals can be formed.\n* Slow evaporation of a multi-solvent system - the same as above, however as the solvent composition changes due to evaporation of the more volatile solvent. The compound is more soluble in the volatile solvent, and so the compound becomes increasingly insoluble in solution and crystallizes.\n* Slow diffusion - similar to the above. However, a second solvent is allowed to evaporate from one container into a container holding the compound solution (gas diffusion). As the solvent composition changes due to an increase in the solvent that has gas diffused into the solution, the compound becomes increasingly insoluble in the solution and crystallizes.\n* Interface/slow mixing (often performed in an NMR tube). Similar to the above, but instead of one solvent gas-diffusing into another, the two solvents mix (diffuse) by liquid-liquid diffusion. Typically a second solvent is \"layered\" carefully on top of the solution containing the compound. Over time the two solution mix. As the solvent composition changes due to diffusion, the compound becomes increasingly insoluble in solution and crystallizes, usually at the interface. Additionally, it is better to use a denser solvent as the lower layer, and/or a hotter solvent as the upper layer because this results in the slower mixing of the solvents.\n* Specialized equipment can be used in the shape of an \"H\" to perform the above, where one of the vertical lines of the \"H\" is a tube containing a solution of the compound, and the other vertical line of the \"H\" is a tube containing a solvent which the compound is not soluble in, and the horizontal line of the \"H\" is a tube which joins the two vertical tubes, which also has a fine glass sinter that restricts the mixing of the two solvents.\n* Once single perfect crystals have been obtained, it is recommended that the crystals are kept in a sealed vessel with some of the liquid of crystallization to prevent the crystal from drying out. Single perfect crystals may contain solvent of crystallization in the crystal lattice. Loss of this internal solvent from the crystals can result in the crystal lattice breaking down, and the crystals turning to powder.", "Crystallization requires an initiation step. This can be spontaneous or can be done by adding a small amount of the pure compound (a seed crystal) to the saturated solution, or can be done by simply scratching the glass surface to create a seeding surface for crystal growth. It is thought that even dust particles can act as simple seeds.", "This method is the same as the above but where two (or more) solvents are used. 
This relies on both \"compound A\" and \"impurity B\" being soluble in a first solvent. A second solvent is slowly added. Either \"compound A\" or \"impurity B\" will be insoluble in this solvent and precipitate, whilst the other of \"compound A\"/\"impurity B\" will remain in solution. Thus the proportion of first and second solvents is critical. Typically the second solvent is added slowly until one of the compounds begins to crystallize from the solution and then the solution is cooled. Heating is not required for this technique but can be used.\nThe reverse of this method can be used where a mixture of solvents dissolves both A and B. One of the solvents is then removed by distillation or by an applied vacuum. This results in a change in the proportions of the solvent causing either \"compound A\" or \"impurity B\" to precipitate.", "Typically, the mixture of \"compound A\" and \"impurity B\" is dissolved in the smallest amount of hot solvent to fully dissolve the mixture, thus making a saturated solution. The solution is then allowed to cool. As the solution cools the solubility of compounds in the solution drops. This results in the desired compound dropping (recrystallizing) from the solution. The slower the rate of cooling, the bigger the crystals form.\nIn an ideal situation the solubility product of the impurity, B, is not exceeded at any temperature. In that case, the solid crystals will consist of pure A and all the impurities will remain in the solution. The solid crystals are collected by filtration and the filtrate is discarded. If the solubility product of the impurity is exceeded, some of the impurities will co-precipitate. However, because of the relatively low concentration of the impurity, its concentration in the precipitated crystals will be less than its concentration in the original solid. Repeated recrystallization will result in an even purer crystalline precipitate. The purity is checked after each recrystallization by measuring the melting point, since impurities lower the melting point. NMR spectroscopy can also be used to check the level of impurity. Repeated recrystallization results in some loss of material because of the non-zero solubility of compound A.\nThe crystallization process requires an initiation step, such as the addition of a \"seed\" crystal. In the laboratory, a minuscule fragment of glass, produced by scratching the side of the glass recrystallization vessel, may provide the nucleus on which crystals may grow. \nSuccessful recrystallization depends on finding the right solvent. This is usually a combination of prediction/experience and trial/error. The compounds must be more soluble at higher temperatures than at lower temperatures. Any insoluble impurity is removed by the technique of hot filtration.", "For ice, recrystallization refers to the growth of larger crystals at the expense of smaller ones. Some biological antifreeze proteins have been shown to inhibit this process, and the effect may be relevant in freezing-tolerant organisms.", "In chemistry, recrystallization is a procedure for purifying compounds. The most typical situation is that a desired \"compound A\" is contaminated by a small amount of \"impurity B\". There are various methods of purification that may be attempted (see Separation process), recrystallization being one of them. There are also different recrystallization techniques that can be used such as:", "In chemistry, recrystallization is a technique used to purify chemicals. 
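The single-solvent recrystallization described above always loses some of "compound A" to its residual solubility in the cold solvent. A small worked mass balance, with invented solubility values (my own illustration, not from the source), makes that trade-off concrete:

```python
# Illustrative mass balance for single-solvent recrystallization: dissolve in the
# minimum amount of hot solvent, cool, collect the crystals. Whatever remains
# dissolved at the cold temperature is lost to the filtrate. All numbers are invented.
sample_g = 10.0              # crude "compound A"
sol_hot_g_per_ml = 0.20      # assumed solubility in hot solvent (g/mL)
sol_cold_g_per_ml = 0.02     # assumed solubility in cold solvent (g/mL)

min_hot_solvent_ml = sample_g / sol_hot_g_per_ml
lost_to_filtrate_g = sol_cold_g_per_ml * min_hot_solvent_ml
recovered_g = sample_g - lost_to_filtrate_g

print(f"Hot solvent used: {min_hot_solvent_ml:.0f} mL")
print(f"Recovered crystals: {recovered_g:.1f} g ({recovered_g / sample_g:.0%} recovery); "
      f"{lost_to_filtrate_g:.1f} g stays dissolved in the cold filtrate")
```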
By dissolving a mixture of a compound and impurities in an appropriate solvent, either the desired compound or impurities can be removed from the solution, leaving the other behind. It is named for the crystals often formed when the compound precipitates out. Alternatively, recrystallization can refer to the natural growth of larger ice crystals at the expense of smaller ones.", "Hot filtration can be used to separate \"compound A\" from both \"impurity B\" and some \"insoluble matter C\". This technique normally uses a single-solvent system as described above. When both \"compound A\" and \"impurity B\" are dissolved in the minimum amount of hot solvent, the solution is filtered to remove \"insoluble matter C\". This matter may be anything from a third impurity compound to fragments of broken glass. For a successful procedure, one must ensure that the filtration apparatus is hot in order to stop the dissolved compounds from crystallizing from the solution during filtration, thus forming crystals on the filter paper or funnel.\nOne way to achieve this is to heat a conical flask containing a small amount of clean solvent on a hot plate. A filter funnel is rested on the mouth, and hot solvent vapors keep the stem warm. Jacketed filter funnels may also be used. The filter paper is preferably fluted, rather than folded into a quarter; this allows quicker filtration, thus less opportunity for the desired compound to cool and crystallize from the solution.\nOften it is simpler to do the filtration and recrystallization as two independent and separate steps. That is dissolve \"compound A\" and \"impurity B\" in a suitable solvent at room temperature, filter (to remove insoluble compound/glass), remove the solvent and then recrystallize using any of the methods listed above.", "Regenerative medicine deals with the \"process of replacing, engineering or regenerating human or animal cells, tissues or organs to restore or establish normal function\". This field holds the promise of engineering damaged tissues and organs by stimulating the body's own repair mechanisms to functionally heal previously irreparable tissues or organs.\nRegenerative medicine also includes the possibility of growing tissues and organs in the laboratory and implanting them when the body cannot heal itself. When the cell source for a regenerated organ is derived from the patient's own tissue or cells, the challenge of organ transplant rejection via immunological mismatch is circumvented. This approach could alleviate the problem of the shortage of organs available for donation.\nSome of the biomedical approaches within the field of regenerative medicine may involve the use of stem cells. Examples include the injection of stem cells or progenitor cells obtained through directed differentiation (cell therapies); the induction of regeneration by biologically active molecules administered alone or as a secretion by infused cells (immunomodulation therapy); and transplantation of in vitro grown organs and tissues (tissue engineering).", "Though uses of cord blood beyond blood and immunological disorders is speculative, some research has been done in other areas. Any such potential beyond blood and immunological uses is limited by the fact that cord cells are hematopoietic stem cells (which can differentiate only into blood cells), and not pluripotent stem cells (such as embryonic stem cells, which can differentiate into any type of tissue). Cord blood has been studied as a treatment for diabetes. 
However, apart from blood disorders, the use of cord blood for other diseases is not a routine clinical modality and remains a major challenge for the stem cell community.\nAlong with cord blood, Wharton's jelly and the cord lining have been explored as sources for mesenchymal stem cells (MSC), and as of 2015 had been studied in vitro, in animal models, and in early stage clinical trials for cardiovascular diseases, as well as neurological deficits, liver diseases, immune system diseases, diabetes, lung injury, kidney injury, and leukemia.", "The ancient Greeks postulated whether parts of the body could be regenerated in the 700s BC. Skin grafting, invented in the late 19th century, can be thought of as the earliest major attempt to recreate bodily tissue to restore structure and function. Advances in transplanting body parts in the 20th century further pushed the theory that body parts could regenerate and grow new cells. These advances led to tissue engineering, and from this field, the study of regenerative medicine expanded and began to take hold. This began with cellular therapy, which led to the stem cell research that is widely being conducted today.\nThe first cell therapies were intended to slow the aging process. This began in the 1930s with Paul Niehans, a Swiss doctor who was known to have treated famous historical figures such as Pope Pius XII, Charlie Chaplin, and king Ibn Saud of Saudi Arabia. Niehans would inject cells of young animals (usually lambs or calves) into his patients in an attempt to rejuvenate them. In 1956, a more sophisticated process was created to treat leukemia by inserting bone marrow from a healthy person into a patient with leukemia. This process worked mostly due to both the donor and receiver in this case being identical twins. Nowadays, bone marrow can be taken from people who are similar enough to the patient who needs the cells to prevent rejection.\nThe term \"regenerative medicine\" was first used in a 1992 article on hospital administration by Leland Kaiser. Kaiser's paper closes with a series of short paragraphs on future technologies that will impact hospitals. One paragraph had \"Regenerative Medicine\" as a bold print title and stated, \"A new branch of medicine will develop that attempts to change the course of chronic disease and in many instances will regenerate tired and failing organ systems.\"\nThe term was brought into the popular culture in 1999 by William A. Haseltine when he coined the term during a conference on Lake Como, to describe interventions that restore to normal function that which is damaged by disease, injured by trauma, or worn by time. Haseltine was briefed on the project to isolate human embryonic stem cells and embryonic germ cells at Geron Corporation in collaboration with researchers at the University of Wisconsin–Madison and Johns Hopkins School of Medicine. He recognized that these cells' unique ability to differentiate into all the cell types of the human body (pluripotency) had the potential to develop into a new kind of regenerative therapy. Explaining the new class of therapies that such cells could enable, he used the term \"regenerative medicine\" in the way that it is used today: \"an approach to therapy that ... 
employs human genes, proteins and cells to re-grow, restore or provide mechanical replacements for tissues that have been injured by trauma, damaged by disease or worn by time\" and \"offers the prospect of curing diseases that cannot be treated effectively today, including those related to aging\".\nLater, Haseltine would go on to explain that regenerative medicine acknowledges the reality that most people, regardless of which illness they have or which treatment they require, simply want to be restored to normal health. Designed to be applied broadly, the original definition includes cell and stem cell therapies, gene therapy, tissue engineering, genomic medicine, personalized medicine, biomechanical prosthetics, recombinant proteins, and antibody treatments. It also includes more familiar chemical pharmacopeia—in short, any intervention that restores a person to normal health. In addition to functioning as shorthand for a wide range of technologies and treatments, the term “regenerative medicine” is also patient friendly. It solves the problem that confusing or intimidating language discourages patients.\nThe term regenerative medicine is increasingly conflated with research on stem cell therapies. Some academic programs and departments retain the original broader definition while others use it to describe work on stem cell research.\nFrom 1995 to 1998 Michael D. West, PhD, organized and managed the research between Geron Corporation and its academic collaborators James Thomson at the University of Wisconsin–Madison and John Gearhart of Johns Hopkins University that led to the first isolation of human embryonic stem and human embryonic germ cells, respectively.\nIn March 2000, Haseltine, Antony Atala, M.D., Michael D. West, Ph.D., and other leading researchers founded E-Biomed: The Journal of Regenerative Medicine. The peer-reviewed journal facilitated discourse around regenerative medicine by publishing innovative research on stem cell therapies, gene therapies, tissue engineering, and biomechanical prosthetics. The Society for Regenerative Medicine, later renamed the Regenerative Medicine and Stem Cell Biology Society, served a similar purpose, creating a community of like-minded experts from around the world.\nIn June 2008, at the Hospital Clínic de Barcelona, Professor Paolo Macchiarini and his team, of the University of Barcelona, performed the first tissue engineered trachea (wind pipe) transplantation. Adult stem cells were extracted from the patients bone marrow, grown into a large population, and matured into cartilage cells, or chondrocytes, using an adaptive method originally devised for treating osteoarthritis. The team then seeded the newly grown chondrocytes, as well as epithelial cells, into a decellularised (free of donor cells) tracheal segment that was donated from a 51-year-old transplant donor who had died of cerebral hemorrhage. After four days of seeding, the graft was used to replace the patients left main bronchus. After one month, a biopsy elicited local bleeding, indicating that the blood vessels had already grown back successfully.\nIn 2009, the SENS Foundation was launched, with its stated aim as \"the application of regenerative medicine – defined to include the repair of living cells and extracellular material in situ – to the diseases and disabilities of ageing\". 
In 2012, Professor Paolo Macchiarini and his team improved upon the 2008 implant by transplanting a laboratory-made trachea seeded with the patient's own cells.\nOn September 12, 2014, surgeons at the Institute of Biomedical Research and Innovation Hospital in Kobe, Japan, transplanted a 1.3 by 3.0 millimeter sheet of retinal pigment epithelium cells, which were differentiated from iPS cells through directed differentiation, into an eye of an elderly woman who suffered from age-related macular degeneration.\nIn 2016, Paolo Macchiarini was fired from Karolinska University in Sweden due to falsified test results and lies. The TV show Experimenten, aired on Swedish Television, detailed the lies and falsified results.", "* [http://www.adigosstemcells.com/regenerative-medicines.php Regenerative Medicine] gives more details about regenerative stem cells.\n* Kevin Strange and Viravuth Yin, \"A Shot at Regeneration: A once abandoned drug compound shows an ability to rebuild organs damaged by illness and injury\", Scientific American, vol. 320, no. 4 (April 2019), pp. 56–61.", "Regenerative medicine has been studied by dentists to find ways in which damaged teeth can be repaired and restored to their natural structure and function. Dental tissues are often damaged by tooth decay and are often deemed irreplaceable except by synthetic or metal dental fillings or crowns, which require further damage to be done to the teeth by drilling into them to prevent the loss of an entire tooth.\nResearchers from King's College London have created a drug called Tideglusib that is claimed to have the ability to regrow dentin, the second layer of the tooth beneath the enamel, which encases and protects the pulp (often referred to as the nerve).\nAnimal studies conducted on mice in Japan in 2007 showed great promise for regenerating an entire tooth. Some mice had a tooth extracted, and cells from bioengineered tooth germs were implanted into them and allowed to grow. The result was perfectly functioning and healthy teeth, complete with all three layers, as well as roots. These teeth also had the necessary ligaments to stay rooted in their sockets and allow for natural shifting. They contrast with traditional dental implants, which are restricted to one spot as they are drilled into the jawbone.\nA person's baby teeth are known to contain stem cells that can be used for regeneration of the dental pulp after a root canal treatment or injury. These cells can also be used to repair damage from periodontitis, an advanced form of gum disease that causes bone loss and severe gum recession. Research is still being done to see if these stem cells are viable enough to grow into completely new teeth. Some parents even opt to keep their children's baby teeth in special storage with the thought that, when older, the children could use the stem cells within them to treat a condition.", "Extracellular matrix materials are commercially available and are used in reconstructive surgery, treatment of chronic wounds, and some orthopedic surgeries; as of January 2017 clinical studies were under way to use them in heart surgery to try to repair damaged heart tissue.\nThe use of fish skin, with its natural constituent of omega 3, has been developed by an Icelandic company, Kereceis. Omega 3 is a natural anti-inflammatory, and the fish skin material acts as a scaffold for cell regeneration. In 2016 their product Omega3 Wound was approved by the FDA for the treatment of chronic wounds and burns. 
In 2021 the FDA gave approval for Omega3 Surgibind to be used in surgical applications including plastic surgery.", "Widespread interest in and funding for research on regenerative medicine have prompted institutions in the United States and around the world to establish departments and research institutes that specialize in regenerative medicine, including the Department of Rehabilitation and Regenerative Medicine at Columbia University, the Institute for Stem Cell Biology and Regenerative Medicine at Stanford University, the Center for Regenerative and Nanomedicine at Northwestern University, the Wake Forest Institute for Regenerative Medicine, and the British Heart Foundation Centers of Regenerative Medicine at the University of Oxford. In China, institutes dedicated to regenerative medicine are run by the Chinese Academy of Sciences, Tsinghua University, and the Chinese University of Hong Kong, among others.", "Post-treatment disinfection provides secondary protection against compromised membranes and downstream problems. Disinfection by means of ultraviolet (UV) lamps (sometimes called germicidal or bactericidal) may be employed to sterilize pathogens that evade the RO process. Chlorination or chloramination (chlorine and ammonia) protects against pathogens that may have lodged in the distribution system downstream.", "Many reef aquarium keepers use RO systems to make fish-friendly seawater. Ordinary tap water can contain excessive chlorine, chloramines, copper, nitrates, nitrites, phosphates, silicates, or other chemicals detrimental to marine organisms. Contaminants such as nitrogen and phosphates can lead to unwanted algae growth. An effective combination of both RO and deionization is popular among reef aquarium keepers, and is preferred over other water purification processes due to its low ownership and operating costs. Where chlorine and chloramines are found in the water, carbon filtration is needed before RO, as common residential membranes do not address these compounds.\nFreshwater aquarists also use RO to duplicate the soft water found in many tropical environments. While many tropical fish can survive in treated tap water, breeding can be impossible. Many aquatic shops sell containers of RO water for this purpose.", "Reverse osmosis (RO) is a water purification process that uses a semi-permeable membrane to separate water molecules from other substances. RO applies pressure to overcome the osmotic pressure that favors an even distribution of solutes. RO can remove dissolved or suspended chemical species as well as biological substances (principally bacteria), and is used in industrial processes and the production of potable water. RO retains the solute on the pressurized side of the membrane and the purified solvent passes to the other side. It relies on the relative sizes of the various molecules to decide what passes through. \"Selective\" membranes reject large molecules, while accepting smaller molecules (such as solvent molecules, e.g., water).\nRO is most commonly known for its use in drinking water purification from seawater, removing the salt and other effluent materials from the water.\nAs of 2013 the world's largest RO desalination plant was in Sorek, Israel.", "A process of osmosis through semi-permeable membranes was first observed in 1748 by Jean-Antoine Nollet. For the following 200 years, osmosis was only a laboratory phenomenon. In 1950, the University of California at Los Angeles (UCLA) first investigated osmotic desalination. 
Researchers at both UCLA and the University of Florida desalinated seawater in the mid-1950s, but the flux was too low to be commercially viable. Sidney Loeb at UCLA and Srinivasa Sourirajan at the National Research Council of Canada, Ottawa, found techniques for making asymmetric membranes characterized by an effectively thin \"skin\" layer supported atop a highly porous and much thicker substrate region. John Cadotte, of Filmtec Corporation, discovered that membranes with particularly high flux and low salt passage could be made by interfacial polymerization of m-phenylene diamine and trimesoyl chloride. Cadotte's patent on this process was the subject of litigation and expired. Almost all commercial RO membranes are now made by this method. By 2019, approximately 16,000 desalination plants operated around the world; around half of this capacity was in the Middle East and North Africa region.\nIn 1977 Cape Coral, Florida became the first US municipality to use RO at scale, with an initial operating capacity of 11.35 million liters (3 million US gal) per day. By 1985, rapid growth led the city to operate the world's largest low-pressure RO plant, producing 56.8 million liters (15 million US gal) per day.", "In (forward) osmosis, the solvent moves from an area of low solute concentration (high water potential), through a membrane, to an area of high solute concentration (low water potential). The driving force for the movement of the solvent is the reduction in the Gibbs free energy of the system as the difference in solvent concentration between the sides of the membrane is reduced; the associated pressure difference is called the osmotic pressure, and it decreases as the solvent moves into the more concentrated solution. Applying an external pressure to reverse the natural flow of pure solvent is thus reverse osmosis. The process is similar to other membrane technology applications.\nRO differs from filtration in that the mechanism of fluid flow is reversed, as the solvent crosses the membrane, leaving the solute behind. The predominant removal mechanism in membrane filtration is straining, or size exclusion, where the pores are 0.01 micrometers or larger, so the process can theoretically achieve perfect efficiency regardless of parameters such as the solution's pressure and concentration. RO instead involves solvent diffusion across a membrane that is either nonporous or uses nanofiltration with pores 0.001 micrometers in size. The predominant removal mechanism arises from differences in solubility or diffusivity, and the process is dependent on pressure, solute concentration, and other conditions.\nRO requires pressures of 2–17 bar (30–250 psi) for fresh and brackish water, and 40–82 bar (600–1200 psi) for seawater. Seawater has around 27 bar (390 psi) of natural osmotic pressure that must be overcome.\nMembrane pore sizes vary from 0.1 to 5,000 nm. Particle filtration removes particles of 1 µm or larger. Microfiltration removes particles of 50 nm or larger. Ultrafiltration removes particles of roughly 3 nm or larger. Nanofiltration removes particles of 1 nm or larger. 
RO is in the final category of membrane filtration, hyperfiltration, and removes particles larger than 0.1 nm.", "Around the world, household drinking water purification systems, including an RO step, are commonly used for improving water for drinking and cooking.\nSuch systems typically include these steps:\n* a sediment filter to trap particles, including rust and calcium carbonate\n* a second sediment filter with smaller pores\n* an activated carbon filter to trap organic chemicals and chlorine, which degrades certain types of thin-film composite membrane\n* an RO thin-film composite membrane\n* an ultraviolet lamp for sterilizing any microbes that survive RO\n* a second carbon filter to capture chemicals that survive RO\nIn some systems, the carbon prefilter is replaced by a cellulose triacetate (CTA) membrane. CTA is a paper by-product membrane bonded to a synthetic layer that allows contact with chlorine in the water. These require a small amount of chlorine in the water source to prevent bacteria from forming on it. The typical rejection rate for CTA membranes is 85–95%.\nThe cellulose triacetate membrane rots unless protected by chlorinated water, while the thin-film composite membrane breaks down in the presence of chlorine. The thin-film composite (TFC) membrane is made of synthetic material, and requires the chlorine to be removed before the water enters the membrane. To protect the TFC membrane elements from chlorine damage, carbon filters are used as pre-treatment. TFC membranes have a higher rejection rate of 95–98% and a longer life than CTA membranes.\nPortable RO water processors are sold for personal water purification. To work effectively, the water fed to these units should be under pressure (typically 280 kPa (40 psi) or greater). These processors can be used in areas lacking clean water.\nUS mineral water production uses RO. In Europe such processing of natural mineral water (as defined by a European directive) is not allowed. In practice, a fraction of the living bacteria pass through RO through membrane imperfections or bypass the membrane entirely through leaks in seals.\nFor household purification absent the need to remove dissolved minerals (soften the water), the alternative to RO is an activated carbon filter with a microfiltration membrane.", "For small-scale hydrogen production, RO is sometimes used to prevent formation of mineral deposits on the surface of electrodes.", "A solar-powered desalination unit produces potable water from saline water by using a photovoltaic system to supply the energy. Solar power works well for water purification in settings lacking grid electricity and can reduce operating costs and greenhouse emissions. For example, a solar-powered desalination unit passed tests in Australia's Northern Territory.\nSunlight's intermittent nature makes output prediction difficult without an energy storage capability. However, batteries or thermal energy storage systems can provide power when the sun is not shining.", "RO-purified rainwater collected from storm drains is used for landscape irrigation and industrial cooling in Los Angeles and other cities.\nIn industry, RO removes minerals from boiler water at power plants. The water is distilled multiple times to ensure that it does not leave deposits on the machinery or cause corrosion.\nRO is used to clean effluent and brackish groundwater. The effluent in larger volumes (more than 500 m³/day) is treated in a water treatment plant first, and then the effluent runs through RO. 
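The roughly 27 bar osmotic pressure of seawater quoted in the theory passage above can be estimated with the van't Hoff relation. A back-of-the-envelope sketch of my own, with assumed salinity and temperature values (the source gives only the final figure):

```python
# van't Hoff estimate: pi = i * M * R * T. Assumptions: seawater treated as 35 g/L NaCl,
# full dissociation (i = 2), temperature 25 degrees C.
R = 0.083145               # L*bar/(mol*K)
T = 298.15                 # K
molar_mass_nacl = 58.44    # g/mol
salinity_g_per_L = 35.0    # assumed NaCl-equivalent salinity

molarity = salinity_g_per_L / molar_mass_nacl
pi_bar = 2 * molarity * R * T

print(f"Estimated osmotic pressure: {pi_bar:.0f} bar")  # ~30 bar, in line with the ~27 bar quoted
```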
This hybrid process reduces treatment cost significantly and lengthens membrane life.\nRO can be used for the production of deionized water.\nIn 2002, Singapore announced that a process named NEWater would be a significant part of its water plans. RO would be used to treat wastewater before discharging the effluent into reservoirs.", "Reverse osmosis is a more economical way to concentrate liquids (such as fruit juices) than conventional heat-treatment. Concentration of orange and tomato juice has advantages including a lower operating cost and the ability to avoid heat-treatment, which makes it suitable for heat-sensitive substances such as proteins and enzymes.\nRO is used in the dairy industry to produce whey protein powders and to concentrate milk. The whey (liquid remaining after cheese manufacture) is concentrated with RO from 6% solids to 10–20% solids before ultrafiltration processing. The retentate can then be used to make whey powders, including whey protein isolate. Additionally, the permeate, which contains lactose, is concentrated by RO from 5% solids to 18% or more total solids to reduce crystallization and drying costs.\nAlthough RO was once avoided in the wine industry, it is now widespread. An estimated 60 RO machines were in use in Bordeaux, France, in 2002. Known users include many elite firms, such as Château Léoville-Las Cases.", "In 1946, some maple syrup producers started using RO to remove water from sap before boiling the sap to syrup. RO allows about 75–90% of the water to be removed, reducing energy consumption and exposure of the syrup to high temperatures.", "When beer at typical concentration is subjected to reverse osmosis, both water and alcohol pass across the membrane more readily than other components, leaving a \"beer concentrate\". The concentrate is then diluted with fresh water to restore the non-volatile components to their original intensity.", "Treatment with RO is limited, with high concentrations (measured by electrical conductivity) resulting in low recoveries and membrane fouling. RO applicability is limited by conductivity, organics, and scaling inorganic elements such as CaSO4, Si, Fe and Ba. For low organic scaling, spiral-wound membranes can be used; for high organic scaling, high conductivity and higher pressures (up to 90 bar), disc tube modules with RO membranes can be used. Disc tube modules were redesigned for the purification of landfill leachate, which is usually contaminated with organic material. Because of the cross-flow design, a flow booster pump recirculates the flow over the membrane between 1.5 and 3 times before it is released as concentrate. The high velocity protects against membrane scaling and allows membrane cleaning.", "Larger-scale reverse osmosis water purification units (ROWPU) exist for military use. These have been adopted by the United States armed forces and the Canadian Forces. Some models are containerized, some are trailers, and some are themselves vehicles.\nThe water is treated with a polymer to initiate coagulation. Next, it is run through a multi-media filter where it undergoes primary treatment, removing turbidity. It is then pumped through a cartridge filter which is usually spiral-wound cotton. This process strips any particles larger than 5 µm and eliminates almost all turbidity.\nThe clarified water is then fed through a high-pressure piston pump into a series of RO vessels. 
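The juice and whey concentration figures above follow from a simple solids mass balance: if RO removes only water, the solids mass is conserved, so the fraction of the feed that must leave as permeate is 1 - (feed solids)/(target solids). A short illustrative sketch of my own, using the 6% to 10–20% whey figures from the text:

```python
# Solids mass balance for RO concentration (assumes the permeate is essentially pure water).
def water_removed_fraction(c_feed, c_out):
    """Fraction of the feed mass that must permeate as water to reach c_out solids."""
    return 1.0 - c_feed / c_out

for c_out in (0.10, 0.20):
    frac = water_removed_fraction(0.06, c_out)
    print(f"6% -> {c_out:.0%} solids: remove ~{frac:.0%} of the feed mass as permeate")
```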
90.00–99.98% of the raw water's total dissolved solids are removed, and military standards require that the result have no more than 1000–1500 parts per million as measured by electrical conductivity. It is then disinfected with chlorine.", "Areas that have limited surface water or groundwater may choose to desalinate. RO is an increasingly common method, because of its relatively low energy consumption.\nEnergy consumption has fallen with the development of more efficient energy recovery devices and improved membrane materials. According to the International Desalination Association, for 2011, RO was used in 66% of installed desalination capacity (0.0445 of 0.0674 km³/day), and nearly all new plants. Other plants use thermal distillation methods: multiple-effect distillation and multi-stage flash.\nSea-water RO (SWRO) desalination requires around 3 kWh/m³, much higher than that required for other forms of water supply, including RO treatment of wastewater, at 0.1 to 1 kWh/m³. Up to 50% of the seawater input can be recovered as fresh water, though lower recovery rates may reduce membrane fouling and energy consumption.\nBrackish water reverse osmosis (BWRO) is the desalination of water with less salt than seawater, usually from river estuaries or saline wells. The process is substantially the same as SWRO, but requires lower pressures and less energy. Up to 80% of the feed water input can be recovered as fresh water, depending on feed salinity.\nThe Ashkelon desalination plant in Israel is the world's largest.\nThe typical single-pass SWRO system consists of:\n* Intake\n* Pretreatment\n* High-pressure pump (if not combined with energy recovery)\n* Membrane assembly\n* Energy recovery (if used)\n* Remineralisation and pH adjustment\n* Disinfection\n* Alarm/control panel", "Graphene membranes are meant to take advantage of their thinness to increase efficiency. Graphene is a single layer of carbon atoms, so it is about 1000 times thinner than existing membranes: graphene membranes are around 100 nm thick, while current membranes are about 100 µm. Many researchers were concerned about the durability of graphene and whether it would be able to handle RO pressures. New research finds that, depending on the substrate (a supporting layer that does no filtration and only provides structural support), graphene membranes can withstand 57 MPa of pressure, which is about 10 times the typical pressure for seawater RO.\nBatch RO may offer increased energy efficiency, more durable equipment and higher salinity limits.\nThe conventional approach claimed that molecules cross the membrane individually. A research team devised a \"solution-friction\" theory, claiming that molecules move in groups through transient pores. Characterizing that process could guide membrane development. The accepted theory is that individual water molecules diffuse through the membrane, termed the \"solution-diffusion\" model.", "Carbon nanotubes could potentially solve the typical tradeoff between the permeability and the selectivity of RO membranes. CNTs present many ideal characteristics, including mechanical strength, electron affinity, and flexibility during modification. By restructuring carbon nanotubes and coating or impregnating them with other chemical compounds, scientists can manufacture these membranes to have all of the most desirable traits. The hope with CNT membranes is to find a combination of high water permeability while also decreasing the amount of neutral solutes taken out of the water. 
This would help decrease energy costs and the cost of remineralization after purification through the membrane.", "RO removes both harmful contaminants and desirable minerals. Some studies report some relation between long-term health effects and consumption of water low on calcium and magnesium, although these studies are of low quality.", "The high pressure pump pushes water through the membrane. Typical pressures for brackish water range from 1.6 to 2.6 MPa (225 to 376 psi). In the case of seawater, they range from 5.5 to 8 MPa (800 to 1,180 psi). This requires substantial energy. Where energy recovery is used, part of the high pressure pump's work is done by the energy recovery device, reducing energy inputs.", "The membrane assembly consists of a pressure vessel with a membrane that allows feedwater to be pushed against it. The membrane must be strong enough to withstand the pressure. RO membranes are made in a variety of configurations. The two most common are spiral-wound and hollow-fiber.\nOnly part of the water pumped onto the membrane passes through. The left-behind \"concentrate\" passes along the saline side of the membrane and flushes away the salt and other remnants. The percentage of desalinated water is the \"recovery ratio\". This varies with salinity and system design parameters: typically 20% for small seawater systems, 40% – 50% for larger seawater systems, and 80% – 85% for brackish water. The concentrate flow is typically 3 bar/50 psi less than the feed pressure, and thus retains much of the input energy.\nThe desalinated water purity is a function of the feed water salinity, membrane selection and recovery ratio. To achieve higher purity a second pass can be added which generally requires another pumping cycle. Purity expressed as total dissolved solids typically varies from 100 to 400 parts per million (ppm or mg/litre) on a seawater feed. A level of 500 ppm is generally the upper limit for drinking water, while the US Food and Drug Administration classifies mineral water as water containing at least 250 ppm.", "Energy recovery can reduce energy consumption by 50% or more. Much of the input energy can be recovered from the concentrate flow, and the increasing efficiency of energy recovery devices greatly reduces energy requirements. Devices used, in order of invention, are:\n* Turbine or Pelton wheel: a water turbine driven by the concentrate flow, connected to the pump drive shaft provides part of the input power. Positive displacement axial piston motors have been used in place of turbines on smaller systems.\n* Turbocharger: a water turbine driven by concentrate flow, directly connected to a centrifugal pump that boosts the output pressure, reducing the pressure needed from the pump and thereby its energy input, similar in construction principle to car engine turbochargers.\n* Pressure exchanger: using the pressurized concentrate flow, via direct contact or a piston, to pressurize part of the membrane feed flow to near concentrate flow pressure. A boost pump then raises this pressure by typically 3 bar / 50 psi to the membrane feed pressure. This reduces flow needed from the high-pressure pump by an amount equal to the concentrate flow, typically 60%, and thereby its energy input. These are widely used on larger low-energy systems. They are capable of 3 kWh/m or less energy consumption.\n* Energy-recovery pump: a reciprocating piston pump. 
The pressurized concentrate flow is applied to one side of each piston to help drive the membrane feed flow from the opposite side. These are the simplest energy recovery devices to apply, combining the high-pressure pump and energy recovery in a single self-regulating unit. These are widely used on smaller low-energy systems. They are capable of 3 kWh/m³ or less energy consumption.\n* Batch operation: RO systems run with a fixed volume of fluid (thermodynamically a closed system) do not suffer from wasted energy in the brine stream, as the energy to pressurize a virtually incompressible fluid (water) is negligible. Such systems have the potential to reach second-law efficiencies of 60%.", "The desalinated water is stabilized to protect downstream pipelines and storage, usually by adding lime or caustic soda to prevent corrosion of concrete-lined surfaces. Liming material is used to adjust the pH to between 6.8 and 8.1 to meet the potable water specifications, primarily for effective disinfection and for corrosion control. Remineralisation may be needed to replace minerals removed from the water by desalination, although this process has proved costly and inconvenient as a way of supplying the minerals, needed by humans and plants, that are found in typical freshwater. For instance, water from Israel's national water carrier typically contains dissolved magnesium levels of 20 to 25 mg/liter, while water from the Ashkelon plant has no magnesium. Ashkelon water created magnesium-deficiency symptoms in crops, including tomatoes, basil, and flowers, and had to be remedied by fertilization. Israeli drinking water standards require a minimum calcium level of 20 mg/liter. Ashkelon's post-desalination treatment uses sulfuric acid to dissolve calcite (limestone), resulting in calcium concentrations of 40 to 46 mg/liter, lower than the 45 to 60 mg/liter found in typical Israeli fresh water.", "Large-scale industrial/municipal systems typically recover 75% to 80% of the feed water, or as high as 90%, because they can generate the required higher pressure.", "An increasingly popular method of cleaning windows is the \"water-fed pole\" system. Instead of washing windows with conventional detergent, they are scrubbed with purified water, typically containing less than 10 ppm dissolved solids, using a brush on the end of a pole wielded from ground level. RO is commonly used to purify the water.", "Research has examined integrating RO with electrodialysis to improve recovery of valuable deionized products, or to reduce concentrate volumes.", "Household RO units use a lot of water because they have low back pressure. Household RO water purifiers typically produce one liter of usable water and 3–25 liters of wastewater. Because the wastewater carries the rejected contaminants, recovering this water is not practical for household systems, and it is typically discharged into house drains. This led India's National Green Tribunal to propose a ban on RO water purification systems in areas where the total dissolved solids (TDS) measure in water is less than 500 mg/liter. In Delhi, large-scale use of household RO devices has increased the total water demand of the already water-parched National Capital Territory of India.", "Another approach is low-pressure high-recovery multistage RO (LPHR). 
It produces concentrated brine and freshwater by cycling the output repeatedly through a relatively porous membrane at relatively low pressure. Each cycle removes additional impurities. Once the output is relatively pure, it is sent through a conventional RO membrane at conventional pressure to complete the filtration step. LPHR was found to be economically feasible, recovering more than 70% with an OPD between 58 and 65 bar and leaving no more than 350 ppm TDS from a seawater feed with 35,000 ppm TDS.", "Depending upon the desired product, either the solvent or the solute stream of RO will be waste. For food concentration applications, the concentrated solute stream is the product and the solvent stream is waste. For water treatment applications, the solvent stream is purified water and the solute stream is concentrated waste. The solvent waste stream from food processing may be used as reclaimed water, but there may be fewer options for disposal of a concentrated waste solute stream. Ships may use marine dumping, and coastal desalination plants typically use marine outfalls. Landlocked RO plants may require evaporation ponds or injection wells to avoid polluting groundwater or surface runoff.", "Current RO membranes, thin-film composite (TFC) polyamide membranes, are being studied to find ways of improving their permeability. Through new imaging methods, researchers were able to make 3D models of membranes and examine how water flowed through them. They found that areas of low flow in TFC membranes significantly decreased water permeability. By ensuring uniformity of the membranes and allowing water to flow continuously without slowing down, membrane permeability could be improved by 30–40%.", "Pretreatment is important when working with nanofiltration membranes due to their spiral-wound design. The material is engineered to allow one-way flow. The design does not allow for backpulsing with water or air agitation to scour its surface and remove accumulated solids. Since material cannot be removed from the membrane surface, it is susceptible to fouling (loss of production capacity). Therefore, pretreatment is a necessity for any RO or nanofiltration system. Pretreatment has four major components:\n* Screening solids: Solids must be removed and the water treated to prevent membrane fouling by particle or biological growth, and to reduce the risk of damage to high-pressure components.\n* Cartridge filtration: String-wound polypropylene filters are typically used to remove particles of 1–5 µm diameter.\n* Dosing: Oxidizing biocides, such as chlorine, are added to kill bacteria, followed by bisulfite dosing to deactivate the chlorine, which can destroy a thin-film composite membrane. Biofouling inhibitors do not kill bacteria but prevent them from growing slime on the membrane surface and plant walls.\n* Prefiltration pH adjustment: If the pH, hardness and alkalinity in the feedwater would result in scaling when concentrated in the reject stream, acid is dosed to maintain carbonates in their soluble carbonic acid form:\n:CO₃²⁻ + H₃O⁺ = HCO₃⁻ + H₂O\n:HCO₃⁻ + H₃O⁺ = H₂CO₃ + H₂O\n* Carbonic acid cannot combine with calcium to form calcium carbonate scale. Calcium carbonate scaling tendency is estimated using the Langelier saturation index. 
Adding too much sulfuric acid to control carbonate scales may result in calcium sulfate, barium sulfate, or strontium sulfate scale formation on the membrane.\n* Prefiltration antiscalants: Scale inhibitors (also known as antiscalants) prevent formation of more scales than acid, which can only prevent formation of calcium carbonate and calcium phosphate scales. In addition to inhibiting carbonate and phosphate scales, antiscalants inhibit sulfate and fluoride scales and disperse colloids and metal oxides. Despite claims that antiscalants can inhibit silica formation, no concrete evidence proves that silica polymerization is inhibited by antiscalants. Antiscalants can control acid-soluble scales at a fraction of the dosage required to control the same scale using sulfuric acid.\n* Some small-scale desalination units use beach wells. These are usually drilled on the seashore. These intake facilities are relatively simple to build and the seawater they collect is pretreated via slow filtration through subsurface sand/seabed formations. Raw seawater collected using beach wells is often of better quality in terms of solids, silt, oil, grease, organic contamination, and microorganisms, compared to open seawater intakes. Beach intakes may also yield source water of lower salinity.", "AFPs work through an interaction with small ice crystals that is similar to an enzyme-ligand binding mechanism which inhibits recrystallization of ice. This explanation of the interruption of the ice crystal structure by the AFP has come to be known as the adsorption-inhibition hypothesis.\nAccording to this hypothesis, AFPs disrupt the thermodynamically favourable growth of an ice crystal via kinetic inhibition of contact between solid ice and liquid water. In this manner, the nucleation sites of the ice crystal lattice are blocked by the AFP, inhibiting the rapid growth of the crystal that could be fatal for the organism. In physical chemistry terms, the AFPs adsorbed onto the exposed ice crystal force the growth of the ice crystal in a convex fashion as the temperature drops, which elevates the ice vapour pressure at the nucleation sites. Ice vapour pressure continues to increase until it reaches equilibrium with the surrounding solution (water), at which point the growth of the ice crystal stops.\nThe aforementioned effect of AFPs on ice crystal nucleation is lost at the thermal hysteresis point. At a certain low temperature, the maximum convexity of the ice nucleation site is reached. Any further cooling will actually result in a \"spreading\" of the nucleation site away from this convex region, causing rapid, uncontrollable nucleation of the ice crystal. The temperature at which this phenomenon occurs is the thermal hysteresis point.<br>\nThe adsorption-inhibition hypothesis is further supported by the observation that antifreeze activity increases with increasing AFP concentration – the more AFPs adsorb onto the forming ice crystal, the more crowded these proteins become, making ice crystal nucleation less favourable.\nIn the R. inquisitor beetle, AFPs are found in the haemolymph, a fluid that bathes all the cells of the beetle and fills a cavity called the haemocoel. The presence of AFPs in R. inquisitor allows the tissues and fluids within the beetle to withstand freezing up to -30 °C (the thermal hysteresis point for this AFP). 
This strategy provides an obvious survival benefit to these beetles, who are endemic to cold climates, such as Scandinavia, Siberia, and Alaska.", "RiAFP refers to an antifreeze protein (AFP) produced by the Rhagium inquisitor longhorned beetle. It is a type V antifreeze protein with a molecular weight of 12.8 kDa; this type of AFP is noted for its hyperactivity. R. inquisitor is a freeze-avoidant species, meaning that, due to its AFP, R. inquisitor prevents its body fluids from freezing altogether. This contrasts with freeze-tolerant species, whose AFPs simply depress levels of ice crystal formation in low temperatures. Whereas most insect antifreeze proteins contain cysteines at least every sixth residue, as well as varying numbers of 12- or 13-mer repeats of 8.3-12.5kDa, RiAFP is notable for containing only one disulfide bridge. This property of RiAFP makes it particularly attractive for recombinant expression and biotechnological applications.", "The fact that the binding motif appears as a \"triplet\" of the conserved TxT repeat, as well as the observation that blastp queries have returned no viable matches, has led some researchers to suggest that RiAFP represents a new type of AFP – one that differs from the heavily studied TmAFP (from T. molitor), DcAFP (from D. canadensis), and CfAFP (from C. fumiferana). On the basis of these observations, it has been predicted that the need for insect AFPs came about after insect evolutionary divergence, much like the evolution of fish AFPs; thus, different AFPs most likely evolved in parallel from adaptations to cold (environmental) stress. As a result, homology modelling with TmAFP, DcAFP, or CfAFP would prove to be fruitless.\nSecondary structure modelling algorithms have determined that the internal repeats are spaced sufficiently to tend towards β-strand configuration; no helical regions include the conserved repeats; and all turn regions are located at the ends of β-strand regions. These data suggest that RiAFP is a well-folded β-helical protein, having six β-strand regions consisting of 13-amino acids (including one TxTxTxT binding motif) per strand.\nPrimary crystallographic studies, have been published on a RiAFP crystal (which diffracted to 1.3Å resolution) in the trigonal space group P321 (or P321), with unit-cell parameters a = b = 46.46, c = 193.21Å.", "The primary structure of RiAFP (the sequence may be found [https://www.ncbi.nlm.nih.gov/protein/313766639 here]) determined by Mass Spectroscopy, Edman degradation and by constructing a partial cDNA sequence and PCR have shown that a TxTxTxT internal repeat exists. Sequence logos constructed from the RiAFP internal repeats, have been particularly helpful in the determination of the consensus sequence of these repeats. The TxTxTxT domains are irregularly spaced within the protein and have been shown to be conserved from the TxT binding motif of other AFPs. The hydroxyl moiety of the T residues fits well, when spaced as they are in the internal repeats, with the hydroxyl moieties of externally facing water molecules in the forming ice lattice. This mimics the formation of the growth cone at a nucleation site in the absence of AFPs. Thus, the binding of RiAFP inhibits the growth of the crystal in the basal and prism planes of the ice.", "In the early 1970s Bussard became Assistant Director under Director Robert Hirsch at the Controlled Thermonuclear Reaction Division of what was then known as the Atomic Energy Commission. 
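Relating back to the RiAFP primary-structure passage above: a TxTxTxT-style repeat (threonine at every other position) can be located in a sequence string with a simple pattern scan. The sketch below is illustrative only; the toy sequence is invented and is not the real RiAFP sequence linked in that passage.

```python
import re

def find_txtxtxt(seq: str):
    """Return (position, motif) pairs for TxTxTxT-style repeats:
    T at positions 1, 3, 5 and 7 of a 7-residue window, any residue between.
    A lookahead is used so that overlapping matches are reported too."""
    return [(m.start(), m.group(1)) for m in re.finditer(r"(?=(T.T.T.T))", seq)]

# Toy sequence for illustration only (not the real RiAFP sequence).
toy_sequence = "GASTATATATGKAAATSTETTT"
for pos, motif in find_txtxtxt(toy_sequence):
    print(pos, motif)
```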
They founded the mainline fusion program for the United States: the Tokamak. In June 1995, Bussard claimed in a letter to all fusion laboratories, as well as to key members of the US Congress, that he and the other founders of the program supported the Tokamak not out of conviction that it was the best technical approach but rather as a vehicle for generating political support, thereby allowing them to pursue \"all the hopeful new things the mainline labs would not try\".\nIn a 1998 Analog magazine article, fellow fusion researcher Tom Ligon described an easily built demonstration fusor system along with some of Bussard's ideas for fusion reactors and incredibly powerful spacecraft propulsion systems, with which spacecraft could swiftly move throughout the solar system.", "During 2006 and 2007, Bussard sought the large-scale funding necessary to design and construct a full-scale Polywell fusion power plant. His fusor design is feasible enough, he asserted, to render unnecessary the construction of larger and larger test models still too small to achieve break-even. Also, the scaling of power with size goes as the seventh power of the machine radius, while the gain scales as the fifth power, so there is little incentive to build half-scale systems; one might as well build the real thing.\nOn March 29, 2006, Bussard claimed on the fusor.net internet forum that EMC² had developed an inertial electrostatic confinement fusion process that was 100,000 times more efficient than previous designs, but that the US Navy budget line item that supported the work was zero-funded in FY2006.\nBussard provided more details of his breakthrough and the circumstances surrounding the end of his Navy funding in a letter to the James Randi Educational Foundation internet forum on June 23.\nFrom October 2, 2006, to October 6, 2006, Bussard presented an informal overview of the previous decade of his work at the 57th International Astronautical Congress. This was the first publication of this work in 11 years, as the U.S. Navy had put an embargo on publications of the research, in 1994.\nBussard presented further details of his IEC fusion research at a Google Tech Talk on November 9, 2006, of which a video was widely circulated.\nBussard presented more of his thoughts on the potential world impact of fusion power at a Yahoo! Tech Talk on April 10, 2007.\n(The video is only available internally for Yahoo employees.) He also spoke on the internet talk radio show The Space Show, Broadcast 709, on May 7, 2007.\nHe founded a non-profit organization to solicit tax-deductible donations to restart the work in 2007, EMC2 Fusion Development Corporation.", "\"Thus, we have the ability to do away with oil (and other fossil fuels) but it will take 4–6 years and ca. $100–200M to build the full-scale plant and demonstrate it.\"\n\"Somebody will build it; and when it's built, it will work; and when it works people will begin to use it, and it will begin to displace all other forms of energy.\"", "Bussard Ramjets are common plot devices in science fiction.\nLarry Niven uses them in his Known Space setting to propel interstellar flight. Following a standard hi-tech faster/cheaper/better learning curve, he started with robot probes during the early stages of interstellar colonization and eventually plotted them as affordable to wealthy individuals relocating their families off a too-crowded Earth (in \"The Ethics of Madness\"). 
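Putting numbers on the scaling argument in the Polywell funding passage above (output claimed to scale as the seventh power of the machine radius and gain as the fifth power): a half-scale machine would deliver only 1/128 of the power and 1/32 of the gain, which is the stated reason for skipping intermediate sizes. A minimal sketch, taking those exponents as given:

```python
def relative_value(size_ratio: float, exponent: int) -> float:
    """Relative value of a quantity that scales as radius**exponent."""
    return size_ratio ** exponent

half_scale = 0.5
print(f"half-scale power: 1/{1 / relative_value(half_scale, 7):.0f}")  # 1/128
print(f"half-scale gain:  1/{1 / relative_value(half_scale, 5):.0f}")  # 1/32
```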
Niven also employed Bussard Ramjets as the propulsion / stabilizing engine of the Ringworld (four novels), which were also set in Known Space.\nIn the Star Trek universe, a variation called the Bussard Hydrogen Collector or Bussard Ramscoop appears as part of the matter/antimatter propulsion system that allows Starfleet ships to travel faster than the speed of light. The ramscoops attach to the front of the warp nacelles, and when the ships internal supply of deuterium runs low, they collect interstellar hydrogen and convert it to deuterium and anti-deuterium for use as the primary fuel in a starships warp drive.", "Robert W. Bussard (August 11, 1928 &ndash; October 6, 2007) was an American physicist who worked primarily in nuclear fusion energy research. He was the recipient of the Schreiber-Spence Achievement Award for STAIF-2004. He was also a fellow of the International Academy of Astronautics and held a Ph.D. from Princeton University.", "In June 1955 Bussard moved to Los Alamos and joined the Nuclear Propulsion Divisions Project Rover designing nuclear thermal rocket engines. Bussard and R.D. DeLauer wrote two important monographs on nuclear propulsion, Nuclear Rocket Propulsion and Fundamentals of Nuclear Flight'.", "Bussard worked on a promising new type of inertial electrostatic confinement (IEC) fusor, called the Polywell, that has a magnetically shielded grid (MaGrid). He founded Energy/Matter Conversion Corporation, Inc. (EMC2) in 1985 to validate his theory, and tested several (15) experimental devices from 1994 through 2006. The U.S. Navy contract funding that supported the work expired while experiments were still small. However, the final tests of the last device, WB-6, reputedly solved the last remaining physics problem just as the funding expired and the EMC2 labs had to be shut down.\nFurther funding was eventually found, the work continued and the WB-7 prototype was constructed and tested, and the research is ongoing.", "In 1960, Bussard conceived of the Bussard ramjet, an interstellar space drive powered by hydrogen fusion using hydrogen collected with a magnetic field from the interstellar gas. Due to the presence of high-energy particles throughout space, much of the interstellar hydrogen exists in an ionized state (H II regions) that can be manipulated by magnetic or electric fields. Bussard proposed to \"scoop\" up ionized hydrogen and funnel it into a fusion reactor, using the exhaust from the reactor as a rocket engine.\nIt appears the energy gain in the reactor must be extremely high for the ramjet to work at all; any hydrogen picked up by the scoop must be sped up to the same speed as the ship in order to provide thrust, and the energy required to do so increases with the ship's speed. Hydrogen itself does not fuse very well (unlike deuterium, which is rare in the interstellar medium), and so cannot be used directly to produce energy, a fact which accounts for the billion-year scale of stellar lifetimes. This problem was solved, in principle, according to Bussard by use of the stellar CNO cycle in which carbon is used as a catalyst to burn hydrogen via the strong nuclear reaction.", "The waste discharge can be sent into incineration plant, where the organic solid undergoes combustion process. The combustion process produces heat that can be used to generate electricity.", "Select filter cloth to obtain a good surface for cake formation. Use twill weave variation in the construction pattern of the fabric for better wear resistance. 
The belt tension, de-mooning bar height, wash water quantity and discharge roll speed are carefully tuned to maintain a good path for cake transfer and to prevent excessive wear of the filter cloth.", "Select the filter cloth based on the type of filter aid used (refer to Filter aid selection), and adjust the advancing knife to optimize the knife advance rate per drum revolution (detailed in the Advance blade section).", "Select a filter cloth that gives good wear and solids-binding characteristics. Use moderate blowback pressure to avoid high wear, and keep the blowback duration just long enough to release the cake from the filter cloth. Tuning of the valve body is important so that excess filtrate is not forced back out of the pipe together with the released cake solids; this minimises wear and filter media maintenance.", "Select a filter cloth that resists solids binding and gives good cake release. A coated fabric gives more effective cake release and a longer-lasting cloth, since solids are less likely to bind into the weave. The discharge roll speed and the drum speed must be the same. Adjust the scraper knife to leave a significant heel on the discharge roll to produce a continuous cake transfer.", "The waste stream is irradiated with ultraviolet (UV) radiation. The UV radiation disinfects by disrupting the pathogen cells so that they mutate and cannot replicate; the mutated cells eventually die off, and the process also eliminates odour.", "This is the most commonly used post-treatment: chlorine is dissolved in water to form hydrochloric acid and hypochlorous acid. The latter acts as a disinfectant that is able to eliminate pathogens such as bacteria, viruses and protozoa by penetrating their cell walls.", "Vat level and drum speed are the two basic operating parameters for any rotary vacuum drum filter. These parameters are adjusted in relation to each other to optimize the filtration performance.\nVat level determines the proportions of the filter cycle. The filter cycle consists of drum rotation, formation of the cake from the slurry and the drying period for the formed cake, as shown in figure 1. By default, operate the vat at its maximum level to maximise the rate of filtration. Reduce the vat level if the discharged solid forms a thin and slimy cake or if the discharged solid is very thick.\nA decrease in the vat level reduces the portion of the drum submerged in the slurry and exposes more surface for cake drying, hence a larger drying-to-formation time ratio. This results in lower moisture content and reduced thickness of the formed solids. At a lower vat level the flow rate per drum revolution also decreases, ultimately giving a thinner cake; in the case of pre-coat discharge, the filter aid efficiency increases.\nDrum speed is the driving factor for the filter output and is expressed in minutes per drum revolution. At steady operating conditions, filter throughput is proportional to drum speed, as shown in figure 2.", "The stream is exposed to ozone, which is unstable at atmospheric conditions. The ozone (O3) decomposes into oxygen (O2), and more oxygen is dissolved into the stream. Pathogens are oxidised, forming carbon dioxide.
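As a rough illustration of the vat-level and drum-speed trade-off described in the operating-parameters passage above: the submerged fraction of the drum sets the cake-formation share of each revolution, and the rest of the revolution is available for washing and drying. The split below is a simplified assumption for illustration; the submergence fractions and drum speed are invented numbers, not values from the passage.

```python
def form_and_dry_times(submerged_fraction: float, minutes_per_rev: float):
    """Split one drum revolution into cake-formation time (submerged arc)
    and dewatering/drying time (the remaining arc)."""
    form_time = submerged_fraction * minutes_per_rev
    dry_time = (1.0 - submerged_fraction) * minutes_per_rev
    return form_time, dry_time

# Lowering the vat level reduces the submerged fraction, so the drying
# share of each revolution grows and the discharged cake is thinner and drier.
for submerged in (0.35, 0.25):
    form_time, dry_time = form_and_dry_times(submerged, minutes_per_rev=2.0)
    print(f"submergence {submerged:.0%}: form {form_time:.2f} min, "
          f"dry {dry_time:.2f} min, dry/form ratio {dry_time / form_time:.1f}")
```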
This process eliminates the odour of the stream but results in a slightly acidic product due to the carbon dioxide present.", "The waste discharge can be used as a land stabilizer in the form of dry bio-solids that can be distributed to the market. The land stabilizer is used in reclaiming marginal land such as mining waste land. This process helps to restore the land to its initial appearance.", "The approximate knife advance rate can be determined for a set of operating conditions using table 6 below. The table indicates the number of hours that the filter can operate on a one-inch pre-coat cake; the required condition is that the advance blade is kept at a constant setting. This method can be used to check for the optimum operating range.\nIf the operating parameter is higher than the optimum range, the user can reduce the knife advance rate and use a tighter grade of filter aid. This will result in less filter aid used (lower capital cost) and less filter aid being removed (lower disposal cost). However, if the operating parameter is lower than the optimum range, the user can increase the knife advance rate (more production) and decrease the drum speed for less filter aid usage (reduced operating cost).", "Generally, the main process in a rotary vacuum drum filter is continuous filtration whereby solids are separated from liquids through a filter medium by a vacuum. The filter cloth is one of the most important components on a filter and is typically made of woven polymer yarns. The right choice of cloth can increase the performance of the filtration. Initially, slurry is pumped into the trough and, as the drum rotates, it is partially submerged in the slurry. The vacuum draws liquid and air through the filter media and out through the shaft, forming a layer of cake. An agitator is used to keep the slurry in suspension if its texture is coarse and it is settling rapidly. Solids that are trapped on the surface of the drum are washed and dried over about 2/3 of a revolution, removing all the free moisture.\nDuring the washing stage, the wash liquid can either be poured onto the drum or sprayed onto the cake. Cake pressing is optional, but its advantages are preventing cake cracking and removing more moisture. Cake discharge is when all the solids are removed from the surface of the cake by a scraper blade, leaving a clean surface as the drum re-enters the slurry. There are a few types of discharge, namely scraper, roller, string, endless belt and pre-coat. The filtrate and air flow through internal pipes and a valve into the vacuum receiver, where liquid and gas are separated, producing a clear filtrate. Pre-coat filtration is an ideal method to produce a filtrate of high clarity. Basically, the drum surface is pre-coated with a filter aid such as diatomaceous earth (DE) or perlite to improve filtration and increase cake permeability. It then undergoes the same process cycle as the conventional rotary vacuum drum filter; however, pre-coat filtration uses a higher-precision blade to scrape off the cake.\nThe filter is assessed by the size of the drum or filter area and its possible output. Typically, the output is expressed in pounds per hour of dry solids per square foot of filter area. The size of the auxiliary parts depends on the area of the filter and the type of usage. Rotary vacuum filters are flexible in handling a variety of materials, with estimated solids yields ranging from 5 to 200 pounds per hour per square foot.
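The knife-advance passage above relates precoat life to the advance per revolution and the drum speed; the sketch below makes that arithmetic explicit. The function and the numbers are hypothetical stand-ins for the table the passage refers to.

```python
def precoat_hours(cake_thickness_in: float,
                  knife_advance_in_per_rev: float,
                  minutes_per_rev: float) -> float:
    """Hours a precoat cake lasts when the knife removes a fixed slice of it
    on every drum revolution."""
    revolutions_available = cake_thickness_in / knife_advance_in_per_rev
    return revolutions_available * minutes_per_rev / 60.0

# Example: a 1-inch precoat, 0.004 in removed per revolution, 3 min per revolution.
print(f"{precoat_hours(1.0, 0.004, 3.0):.1f} h of operation")  # 12.5 h
```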
For pre-coat discharge, the solid output is approximately 2 to 40 gallons per hour per square foot. Filtration efficiency can also be improved, in terms of filter cake dryness, by preventing filtrate liquid from being held up in the filter drum during the filtration phase. Using multiple filters, for example running three filter units instead of two, yields a thicker cake and hence a clearer filtrate, which is beneficial in terms of both production cost and quality.", "This is the standard drum filter discharge. A scraper blade, which serves to redirect the filter cake into the discharge chute, removes the cake from the filter cloth just before the cloth re-enters the vat. Scraper discharge is used if the desired separation requires a high filtration rate, if a heavy solids slurry is used, if the slurry filters easily to form a cake, or if longer wear resistance is desired for the separation of the slurry in question.", "Thin and fragile filter cakes are usually the end products of this discharge type. The materials are capable of changing phase, from solid to liquid, owing to instability and disturbance. Two rollers guide the strings back to the drum surface, and the filter cake separates from the strings as they pass over the rollers. String discharge is applied in the pharmaceutical and starch industries. String discharge is used if a high solids concentration slurry is used, if the slurry filters easily to form a cake, if the discharged solid is fibrous, stringy or pulpy, or if longer wear resistance is desired for the separation of the slurry in question.", "It is a suitable discharge option for cakes that are thin and have a tendency to stick together. The filter cakes on the drum and the discharge roll are pressed against one another to ensure that the thin filter cake is peeled or pulled from the drum. Removal of solids from the discharge roll is done via a knife blade. Roll discharge is used if the desired separation requires a high filtration rate, if a high solids content slurry is used, if the slurry filters easily to form a cake, or if the discharged solid is a sticky or mud-like cake.", "The filter cloth is washed on both sides with each drum rotation while discharging filter cakes. The products of this mechanism are usually sticky, wet and thin, thus requiring the aid of a discharge roll. Belt discharge is used if a slurry with moderate solids concentration is used, if the slurry filters easily to form a cake, or if longer wear resistance is desired for the separation of the slurry in question.", "There are basically five types of discharge used for the rotary vacuum drum filter: belt, scraper, roll, string and pre-coat discharge.", "* Due to the structure, the pressure difference is theoretically limited to atmospheric pressure (1 bar), and in practice it is somewhat lower.\n* Besides the drum, other accessories are required, for example agitators, a vacuum pump, vacuum receivers and slurry pumps.\n* The discharged cake contains residual moisture.\n* The cake tends to crack due to the air drawn through it by the vacuum system, so that washing and drying are not efficient.
\n* High energy consumption by the vacuum pump.", "* The rotary vacuum drum filter is a continuous and automatic operation, so the operating cost is low.\n* Variation of the drum rotation speed can be used to control the cake thickness.\n* The process can be easily modified (pre-coating filter process).\n* It can produce a relatively clean product when a showering device is added.", "Applications:\n* The rotary filter is most suitable for continuous operation on large quantities of slurry.\n* Slurries containing a considerable amount of solids, that is, in the range of 15-30%.\n* Examples of pharmaceutical applications include the collection of calcium carbonate, magnesium carbonate and starch.\n* The separation of the mycelia from the fermentation liquor in the manufacture of antibiotics.\n* Block and instant yeast production.", "The rotary vacuum drum filter (RVDF), patented in 1872, is one of the oldest filters used in industrial liquid-solids separation. It fits into a wide range of industrial processing flow sheets and provides flexible application of dewatering, washing and/or clarification.\nA rotary vacuum filter consists of a large rotating drum covered by a cloth. The drum is suspended on an axle over a trough containing the liquid-solids slurry, with approximately 50-80% of the screen area immersed in the slurry.\nAs the drum rotates into and out of the trough, solids are sucked onto the surface of the cloth and rotated out of the suspension as a cake. As the cake rotates out, it is dewatered in the drying zone, since the vacuum continuously draws water out of it. In the final step of the separation, the cake is discharged as a solids product and the drum rotates on into another separation cycle.", "A Rotary Vacuum Filter Drum consists of a cylindrical filter membrane that is partly submerged in a slurry to be filtered. The inside of the drum is held at a pressure lower than ambient. As the drum rotates through the slurry, the liquid is sucked through the membrane, leaving solids to cake on the membrane surface while the drum is submerged. A knife or blade is positioned to scrape the product from the surface.\nThe technique is well suited to slurries, flocculated suspensions, and liquids with a high solids content, which could clog other forms of filter. It is common to pre-coat the drum with a filter aid, typically diatomaceous earth (DE) or perlite. In some implementations, the knife also cuts off a small portion of the filter medium to reveal a fresh surface that will enter the liquid as the drum rotates. Such systems advance the knife automatically as the surface is removed.", "Filter aid selection: the filter aid forms the pre-coat cake that acts as the actual filter medium, and there are two types, diatomaceous earth and perlite.\nAn important parameter to consider is the penetration of solids into the pre-coat cake, which should be limited to a thickness of 0.002 to 0.005 inch.\nIf a large amount of filter aid is used, i.e. an “open” grade, more filter aid is removed, which leads to higher disposal cost. If too little filter aid is used, i.e. a “tight” grade, there will be no flow into the drum. This comparison is illustrated in figure 5.", "Minimise the lateral pressure of the strings by adjusting the alignment tine bar to avoid the strings being cut off.
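Tying the drum-filter descriptions above to the sizing figures quoted a few passages earlier (an output of roughly 5 to 200 pounds of dry solids per hour per square foot): the required filter area follows from the dry-solids duty and an assumed specific yield. A minimal sketch; the duty and yield values below are illustrative assumptions.

```python
def required_filter_area(dry_solids_lb_per_h: float,
                         specific_yield_lb_per_h_ft2: float) -> float:
    """Filter area (ft^2) needed for a given dry-solids duty and specific yield."""
    return dry_solids_lb_per_h / specific_yield_lb_per_h_ft2

# Example: 6,000 lb/h of dry solids at an assumed yield of 40 lb/h per ft^2.
print(f"{required_filter_area(6000.0, 40.0):.0f} ft^2 of filter area")  # 150 ft^2
```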
Have ceramic tube place over each aligning tine bar to act as bearing surface for the strings.", "Application of this discharge are usually seen where production of filter cakes that blind the filter media thoroughly and processes that have low solid concentration slurry. Pre coat discharge is used if slurry with very low solid concentration slurry is used that resulted in difficult cake formation or if the slurry is difficult to filter to produce cake .", "The rotary vacuum drum filter designs available vary in physical aspects and their characteristics. The filtration area ranges from 0.5 m to 125 m. Disregarding the size of the design, filter cloth washing is a priority as it ensures efficiency of cake washing and acting vacuum. However, a smaller design would be more economical as the maintenance, energy usage and investment cost would be less than a bigger rotary vacuum drum filter.\nOver the years, the technology drive has pushed development to further heights revolving around rotary vacuum drum filter in terms of design, performance, maintenance and cost. This has also led to the development of smaller rotary drum vacuum filters, ranging from laboratory scale to pilot scale, both of which can be used for smaller applications (such as at a lab in a university) High performance capacity, optimised filtrate drainage with low flow resistance and minimal pressure loss are just a few of the benefits. \nWith advanced control systems prompting automation, this has reduced the operation of attention needed hence, reducing the operational cost. Advancements in technology also means that precoat can be cut to 1/20th the thickness of human hair, thus making the use of precoat more efficient Lowered operational and capital cost can also be achieved nowadays due to easier maintenance and cleaning. Complete cell emptying can be done quickly with the installation of leading and trailing pipes.\nGiven that the filter cloth is usually one of the more expensive component in the rotary vacuum drum filter build up, priority on its maintenance must be kept quite high. A longer lifetime, protection from damage and consistent performance are the few criteria that must not be overlooked.\nBesides considering production cost and quality, cake washing and cake thickness are essential issues that are important in the process. Methods have been performed to ensure a minimal amount of cake moisture while undergoing good cake washing with large cake dewatering angle. An even thickness of filter cake besides having a complete cake discharge is also possible.", "Rui Luís Reis (born 19 April 1967) is a Portuguese scientist known for his research in tissue engineering, regenerative medicine, biomaterials, biomimetics, stem cells, and biodegradable polymers.\nReis is a professor of at the University of Minho in Braga and Guimarães. He is the Founding Director of the 3Bs Research Group, part of the Research Institute on Biomaterials, Biodegradables and Biomimetics (I3Bs) of UMinho (www.i3bs.uminho.pt), a group that specializes in the areas of Regenerative Medicine, Tissue Engineering, Stem Cells and Biomaterials. He is also the Director of the ICVS/3Bs Associate Laboratory of UMinho. He is the CEO of the European Institute of Excellence on Tissue Engineering and Regenerative Medicine. Rui L. Reis was, from 2013 to 2017, the Vice-Rector (vice-president) for research and innovation of UMinho. From 2007 to 2021 Reis was the editor-in-chief of the Journal of Tissue Engineering and Regenerative Medicine. 
From 2016 to 2018, he was president of the Tissue Engineering and Regenerative Medicine International Society (TERMIS).\nReis is in the board of several scientific societies, companies and associations. From 2017 to 2019, he was the President of TECMINHO - the technology transfer office of the University of Minho. \nReis is the CEO of the European Institute of Excellence on Tissue Engineering and Regenerative Medicine in Avepark, Guimarães.\nHe co-founded different start up companies originating from the research and activities of 3B's research group, such as Stemmatters and HydruStent/HydruMedical.\nReis is the current president of the I3B's research institute, and one of the most cited Portuguese researchers in science.", "Reis was born and has always lived in Porto, being one of three children of a chemical engineering professor and a domestic. Reis spent a small part of his childhood in Metangula, Mozambique, a small town near Lake Niassa, while his father was engaged in military service during the Portuguese Colonial War. He is married with Olga Paiva and has one son, Bernardo Reis (born in 2001). He is a strong supporter of FC Porto.\nReis graduated in Metallurgical Engineering, University of Porto, Portugal, in 1990. He then completed a master's degree at the University of Porto, Portugal, in 1994. Reis did his PhD on Polymer Engineering – Biomaterials, Regenerative Medicine & Tissue Engineering, in the University of Minho, Portugal and Brunel University London, in 1999. He also completed a Doctor of Science (D.Sc.) degree on Biomedical Engineering - Biomaterials & Tissue Engineering, by University of Minho, Portugal, in 2007. \nReis has also received two Honoris Causa degrees: A first in Medicine from University of Granada, Spain, in 2010 and a second in Engineering from University Polytechnica of Bucharest, Romania, in 2018.", "Reis is a researcher who has been involved in the field of biomaterials since 1990. He has worked with several universities and companies abroad.\nSome of Reis' research has been on liver and neurological tissues regeneration, new strategies for antimicrobial materials, innovative high-throughput approaches for studying cell/materials interactions, as well as on TE approaches for developing different 3D disease models, including different cancer models, and therapies for treatment of diabetes and Alzheimers.\nReis has also been responsible for several cooperation programs with universities and companies worldwide. He has coordinated four major EU research projects, including the STREP \"HIPPOCRATES\".\nUnder HORIZON 2020, Reis was the coordinator of the ERA Chairs FoReCast grant for 3B's-UMinho. He has coordinated two TWINNING projects Gene2Skin and Chem2Nature, and is currently coordinating another TWINNING project. Until 2021, he was the coordinator of the 15 MEuros EC funded TEAMING proposal, \"The Discoveries Centre for Regenerative and Precision Medicine\" with UCL - University College London, UPorto, UAveiro, ULisboa, and UNova Lisboa. 
He is also the PI of a major project of the Portuguese roadmap for strategic infrastructures, TERM Research Hub.", "*2002: Jean LeRay Award by the European Society for Biomaterials for outstanding contributions to the biomaterials field as a young scientist \n*2007: Pfizer Award for Clinical Research\n*2007: START Innovation Award\n*2011: George Winter Award by the European Society for Biomaterials\n*2011: Gold Medal of Scientific Merit from the City of Guimarães\n*2014: Clemson Award for Contributions to the Literature by the Society for Biomaterials (SFB, USA) \n*2014: Nomination as a Commander (Comendador, a kind of knighthood) of the Military Order of Saint James of the Sword by the Portuguese President of the Republic\n*2015: International Fellow of Tissue Engineering of Regenerative Medicine (FTERM), Boston\n*2016: Induction as a foreigner member of the National Academy of Engineering (NAE) of the USA\n*2018: IET A F Harvey Prize – Institute of Engineering and Technology\n*2018: Induction as Fellow of the European Alliance for Medical and Biological Engineering and Science (EAMBES)\n*2018: UNESCO-Equatorial Guinea International Prize for Research in the Life Sciences\n*2018: Induction as Fellow of the American Institute for Medical and Biological Engineering (AIMBE)\n*2018: Honoris Causa degree awarded by the University Politechnica of Bucharest (UPB)\n*2019: Career Achievement Award of the Tissue Engineering and Regenerative Medicine International Society, TERMIS-EU\n*2020: Gold Medal of the City of Braga\n*2022: Klaas de Groot Award by the European Society for Biomaterials", "Salting in refers to the effect where increasing the ionic strength of a solution increases the solubility of a solute, such as a protein. This effect tends to be observed at lower ionic strengths.\nProtein solubility is a complex function of physicochemical nature of the protein, pH, temperature, and the concentration of the salt used. It also depends on whether the salt is kosmotropic, whereby the salt will stabilize water. The solubility of proteins usually increases slightly in the presence of salt, referred to as \"salting in\". However, at high concentrations of salt, the solubility of the proteins drop sharply and proteins can precipitate out, referred to as \"salting out\".", "Initial salting in at low concentrations is explained by the Debye–Huckel theory. Proteins are surrounded by the salt counterions (ions of opposite net charge) and this screening results in decreasing electrostatic free energy of the protein and increasing activity of the solvent, which in turn leads to increasing solubility. This theory predicts that the logarithm of solubility is proportional to the square root of the ionic strength.\nThe behavior of proteins in solutions at high salt concentrations is explained by John Gamble Kirkwood. The abundance of the salt ions decreases the solvating power of salt ions, resulting in the decrease in the solubility of the proteins and precipitation results.\nAt high salt concentrations, the solubility is given by the following empirical expression.\n:log S = B − KI\nwhere S is the solubility of the protein, B is a constant (function of protein, pH and temperature), K is the salting out constant (function of pH, mixing and salt), and I is the ionic strength of the salt. 
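A minimal numerical sketch of the empirical salting-out expression given above, log S = B − K·I; the parameter values are invented for illustration, and a second helper shows how B and K could be backed out from two measured solubility points.

```python
import math

def solubility(B: float, K: float, ionic_strength: float) -> float:
    """Empirical salting-out expression: log10(S) = B - K * I."""
    return 10 ** (B - K * ionic_strength)

def fit_B_K(I1: float, S1: float, I2: float, S2: float):
    """Back out B and K from two (ionic strength, solubility) measurements."""
    K = (math.log10(S1) - math.log10(S2)) / (I2 - I1)
    B = math.log10(S1) + K * I1
    return B, K

# Illustrative numbers only: solubility in mg/mL, ionic strength in mol/L.
B, K = fit_B_K(1.0, 30.0, 3.0, 0.3)
print(f"B = {B:.2f}, K = {K:.2f}")
for I in (0.5, 1.0, 2.0, 3.0):
    print(f"I = {I:.1f} mol/L  ->  S = {solubility(B, K, I):6.2f} mg/mL")
```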
This expression is an approximation to that proposed by Long and McDevit.", "Salting out (also known as salt-induced precipitation, salt fractionation, anti-solvent crystallization, precipitation crystallization, or drowning out) is a purification technique that utilizes the reduced solubility of certain molecules in a solution of very high ionic strength. Salting out is typically used to precipitate large biomolecules, such as proteins or DNA. Because the salt concentration needed for a given protein to precipitate out of the solution differs from protein to protein, a specific salt concentration can be used to precipitate a target protein. This process is also used to concentrate dilute solutions of proteins. Dialysis can be used to remove the salt if needed.", "As different proteins have different compositions of amino acids, different protein molecules precipitate at different concentrations of salt solution.\nUnwanted proteins can be removed from a protein solution mixture by salting out as long as the solubility of the protein in various concentrations of salt solution is known.\nAfter removing the precipitate by filtration or centrifugation, the desired protein can be precipitated by altering the salt concentration to the level at which the desired protein becomes insoluble.\nOne demerit of salting out in purification of proteins is that, in addition to precipitating a specific protein of interest, contaminants are also precipitated as well. Thus to obtain a purer protein of interest, additional purification methods such as ion exchange chromatography may be required.", "Salt compounds dissociate in aqueous solutions. This property is exploited in the process of salting out. When the salt concentration is increased, some of the water molecules are attracted by the salt ions, which decreases the number of water molecules available to interact with the charged part of the protein.\nThere are hydrophobic amino acids and hydrophilic amino acids in protein molecules.\nAfter protein folding in aqueous solution, hydrophobic amino acids usually form protected hydrophobic areas while hydrophilic amino acids interact with the molecules of solvation and allow proteins to form hydrogen bonds with the surrounding water molecules. If enough of the protein surface is hydrophilic, the protein can be dissolved in water.\nWhen salt is added to the solution, there is more frequent interaction between solvent molecules and salt ions. As a result, the protein and salt ions compete to interact with the solvent molecules with the result that there are fewer solvent molecules available for interaction with the protein molecules than before. The protein–protein interactions thus become stronger than the solvent–solute interactions and the protein molecules associate by forming hydrophobic interactions with each other. After dissociation in a given solvent, the negatively charged atoms from a chosen salt begin to compete for interactions with positively charged molecules present in the solution. Similarly, the positively charged cations compete for interactions with the negatively charged molecules of the solvent. This process is known as salting out.\nSoaps are easily precipitated by concentrated salt solution, the metal ion in the salt reacts with the fatty acids forming back the soap and glycerin (glycerol). To separate glycerin from the soap, the pasty boiling mass is treated with brine (NaCl solution). 
Contents of the kettle salt out (separate) into an upper layer that is a curdy mass of impure soap and a lower layer that consists of an aqueous salt solution with the glycerin dissolved in it. The slightly alkaline salt solution, termed spent lye, is extracted from the bottom of the pan or kettle and may be subsequently treated for glycerin recovery.", "At very high particle concentrations, the settling particles come into contact with one another as they approach the floor of the sedimentation tank, so that further settling can only occur by adjustment of the particle matrix, and the sedimentation rate decreases. This is illustrated by the lower region of the zone-settling diagram (Figure 3). In the compression zone, the settled solids are compressed by gravity: they are squeezed under the weight of the overlying solids, and water is expelled as the pore space shrinks.", "Sedimentation in potable water treatment generally follows a step of chemical coagulation and flocculation, which groups particles together into flocs of a bigger size. This increases the settling speed of suspended solids and allows colloids to settle.", "Sedimentation has been used to treat wastewater for millennia.\nPrimary treatment of sewage is removal of floating and settleable solids through sedimentation. Primary clarifiers reduce the content of suspended solids as well as the pollutants embedded in the suspended solids. Because of the large amount of reagent necessary to treat domestic wastewater, preliminary chemical coagulation and flocculation are generally not used, remaining suspended solids being reduced by the following stages of the system. However, coagulation and flocculation can be used for building a compact treatment plant (also called a \"package treatment plant\"), or for further polishing of the treated water.\nSedimentation tanks called \"secondary clarifiers\" remove flocs of biological growth created in some methods of secondary treatment including activated sludge, trickling filters and rotating biological contactors.", "In a horizontal sedimentation tank, some particles may not follow the diagonal line in Fig. 1, since they settle faster as they grow. This means that particles could grow and develop a higher settling velocity if the tank were deeper and the retention time longer. However, the chance of collisions would be even greater if the same retention time were spread over a longer, shallower tank. In practice, in order to avoid hydraulic short-circuiting, tanks are usually made 3–6 m deep with retention times of a few hours.", "The physical process of sedimentation (the act of depositing sediment) has applications in water treatment, whereby gravity acts to remove suspended solids from water. Solid particles entrained by the turbulence of moving water may be removed naturally by sedimentation in the still water of lakes and oceans. Settling basins are ponds constructed for the purpose of removing entrained solids by sedimentation. Clarifiers are tanks built with mechanical means for continuous removal of solids being deposited by sedimentation; however, clarification does not remove dissolved solids.", "Suspended solids (SS) is the mass of dry solids retained by a filter of a given porosity, related to the volume of the water sample; it includes particles of 10 μm and greater.\nColloids are particles of a size between 1 nm (0.001 µm) and 1 µm, depending on the method of quantification.
Because of Brownian motion and electrostatic forces balancing gravity, they are not likely to settle naturally.\nThe limit sedimentation velocity of a particle is its theoretical descending speed in clear and still water. In settling process theory, a particle will settle only if:\n# In a vertical ascending flow, the ascending water velocity is lower than the limit sedimentation velocity.\n# In a longitudinal flow, the ratio of the length of the tank to the height of the tank is higher than the ratio of the water velocity to the limit sedimentation velocity.\nRemoval of suspended particles by sedimentation depends upon the size, zeta potential and specific gravity of those particles. Suspended solids retained on a filter may remain in suspension if their specific gravity is similar to that of water, while very dense particles passing through the filter may settle. Settleable solids are measured as the visible volume accumulated at the bottom of an Imhoff cone after water has settled for one hour.\nGravitational theory is employed, alongside the derivation from Newton's second law and the Navier–Stokes equations.\nStokes' law explains the relationship between the settling rate and the particle diameter: under specific conditions, the particle settling rate is directly proportional to the square of the particle diameter and inversely proportional to the liquid viscosity.\nThe settling velocity, together with the residence time needed for the particles to settle in the tank, enables the calculation of tank volume. Precise design and operation of a sedimentation tank is of high importance in order to keep the amount of sediment entering the diversion system below a minimum threshold, while maintaining the transport system and stream stability so that the sediment diverted from the system can be removed. This is achieved by reducing the stream velocity as far as possible for the longest period of time possible, which is feasible by widening the approach channel and lowering its floor to reduce the flow velocity, thus allowing sediment to settle out of suspension under gravity. The settling behavior of heavier particulates is also affected by turbulence.", "Although sedimentation might occur in tanks of other shapes, removal of accumulated solids is easiest with conveyor belts in rectangular tanks or with scrapers rotating around the central axis of circular tanks. Settling basins and clarifiers should be designed based on the settling velocity (v0) of the smallest particle to be theoretically 100% removed. The overflow rate is defined as:\n:Overflow rate (v0) = Flow of water, Q (m³/s) / Surface area of settling basin, A (m²)\nIn many countries this value is called the surface loading, in m³/h per m². Overflow rate is also often quoted for flow over an edge (for example a weir), in the unit m³/h per m.\nThe unit of overflow rate is usually meters (or feet) per second, i.e. a velocity. Any particle with settling velocity (vs) greater than the overflow rate will settle out, while other particles will be removed in the ratio vs/v0.\nThere are recommendations on the overflow rates for each design that ideally take into account the change in particle size as the solids move through the operation:\n* Quiescent zones: per second\n* Full-flow basins: per second\n* Off-line basins: per second\nHowever, factors such as flow surges, wind shear, scour, and turbulence reduce the effectiveness of settling.
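To tie together the Stokes'-law dependence and the overflow-rate definition discussed above, here is a small Python sketch: it estimates a Stokes settling velocity for a particle and compares it with the overflow rate Q/A of a basin, giving the removal ratio vs/v0 for particles slower than the overflow rate. The fluid properties, particle data and basin dimensions are assumed values for illustration only.

```python
G = 9.81             # gravitational acceleration, m/s^2
RHO_WATER = 1000.0   # density of water, kg/m^3
MU = 1.0e-3          # dynamic viscosity of water near 20 C, Pa*s

def stokes_velocity(diameter_m: float, particle_density: float) -> float:
    """Terminal settling velocity of a small sphere in the laminar regime:
    v = g * (rho_p - rho_w) * d**2 / (18 * mu)."""
    return G * (particle_density - RHO_WATER) * diameter_m ** 2 / (18.0 * MU)

def overflow_rate(flow_m3_per_s: float, surface_area_m2: float) -> float:
    """Overflow rate v0 = Q / A; particles with vs >= v0 are fully removed."""
    return flow_m3_per_s / surface_area_m2

v0 = overflow_rate(0.05, 200.0)   # 0.05 m^3/s over a 200 m^2 basin
for d_um in (10, 50):             # particle diameters in micrometres
    vs = stokes_velocity(d_um * 1e-6, 2650.0)   # sand-like particle density
    removal = min(1.0, vs / v0)
    print(f"d = {d_um} um: vs = {vs*1000:.2f} mm/s, v0 = {v0*1000:.2f} mm/s, "
          f"removal = {removal:.0%}")
```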
To compensate for these less than ideal conditions, it is recommended to double the area calculated by the previous equation.\nIt is also important to equalize the flow distribution at each point across the cross-section of the basin. Poor inlet and outlet designs can produce extremely poor flow characteristics for sedimentation.\nSettling basins and clarifiers can be designed as long rectangles (Figure 1.a), which are hydraulically more stable and easier to control for large volumes. Circular clarifiers (Fig. 1.b) work as a common thickener (without the use of rakes), or as upflow tanks (Fig. 1.c).\nSedimentation efficiency does not depend on the tank depth. Provided the forward velocity is low enough that the settled material does not re-suspend from the tank floor, the area remains the main parameter when designing a settling basin or clarifier, taking care that the depth is not too low.", "Settling basins and clarifiers are designed to retain water so that suspended solids can settle. By sedimentation principles, suitable treatment technologies should be chosen depending on the specific gravity, size and shear resistance of the particles. Depending on the size and density of the particles, and the physical properties of the solids, there are four types of sedimentation processes:\n* Type 1 – Dilute, non-flocculent, free-settling (every particle settles independently).\n* Type 2 – Dilute, flocculent (particles can flocculate as they settle).\n* Type 3 – Concentrated suspensions, zone settling, hindered settling (sludge thickening).\n* Type 4 – Concentrated suspensions, compression (sludge thickening).\nDifferent factors control the sedimentation rate in each.", "As the concentration of particles in a suspension is increased, a point is reached where particles are so close together that they no longer settle independently of one another and the velocity fields of the fluid displaced by adjacent particles overlap. There is also a net upward flow of liquid displaced by the settling particles. This results in a reduced particle-settling velocity, and the effect is known as hindered settling.\nIn a common case of hindered settling, the whole suspension tends to settle as a ‘blanket’ because of its extremely high particle concentration. This is known as zone settling, because it is easy to distinguish several different zones separated by concentration discontinuities. Fig. 3 represents a typical batch-settling column test on a suspension exhibiting zone-settling characteristics. When such a suspension is left to stand in a settling column, a clear interface forms near the top of the column, separating the settling sludge mass from the clarified supernatant. As the suspension settles, this interface moves downwards at a constant speed. At the same time, an interface forms near the bottom between the settled solids and the suspended blanket. The bottom interface moves upwards until, once settling of the suspension is complete, it meets the top interface moving downwards.", "Unhindered settling is a process that removes discrete particles at very low concentration without interference from nearby particles. In general, if the concentration of the suspension is lower than 500 mg/L total suspended solids, sedimentation is considered discrete. Concentrations of raceway effluent total suspended solids (TSS) in the west are usually less than 5 mg/L net. TSS concentrations of off-line settling basin effluent are less than 100 mg/L net.
The particles keep their size and shape during discrete settling, each with an independent velocity. With such low concentrations of suspended particles, the probability of particle collisions is very low and consequently the rate of flocculation is small enough to be neglected for most calculations. Thus the surface area of the settling basin becomes the main factor in the sedimentation rate. All continuous-flow settling basins are divided into four parts: inlet zone, settling zone, sludge zone and outlet zone (Figure 2).\nIn the inlet zone, flow is established in a uniform forward direction. Sedimentation occurs in the settling zone as the water flows towards the outlet zone, and the clarified liquid then flows out through the outlet zone.\nSludge zone: settled solids are collected here, and it is usually assumed that a particle is removed from the water flow once it arrives in the sludge zone.\nIn an ideal rectangular sedimentation tank, the critical particle enters at the top of the settling zone and has the smallest settling velocity that still allows it to reach the sludge zone by the end of the settling zone. The velocity of this critical particle has a vertical component, the settling velocity vs, and a horizontal component, the flow velocity vh.\nFrom Figure 1, the time needed for the particle to settle is\n:t = H/vs = L/vh (3)\nSince the surface area of the tank is WL, vs = Q/(WL) and vh = Q/(WH), where Q is the flow rate and W, L and H are the width, length and depth of the tank.\nAccording to Eq. 1, the quantity Q/(WL) is also a basic factor controlling sedimentation tank performance and is called the overflow rate.", "Sensor-based sorting has been introduced by Wotruba and Harbeck as an umbrella term for all applications where particles are individually detected by a sensor technique and then rejected by an amplified mechanical, hydraulic or pneumatic process.", "A precondition for the applicability of sensor-based ore sorting is the presence of liberation at the particle size of interest. Before entering sensor-based ore sorting test programmes, the degree of liberation can be assessed through inspection of drill cores, hand-counting and washability analysis. The quantification of liberation does not include any process efficiencies, but gives an estimate of the possible sorting result and can thus be applied for desktop financial feasibility analysis.\nDrill core analysis\nBoth for green-field and brown-field applications, inspection of drill core in combination with the grade distribution and mineralogical description is a good option for estimating the liberation characteristics and the possible success of sensor-based ore sorting. In combination with the mining method and mine plan, an estimation of the possible grade distribution in coarse particles can be made.", "Compared with other coarse particle separation technologies, sensor-based ore sorting is relatively cheap. While the equipment itself is relatively expensive in both capital expenditure and operating cost, the absence of extensive infrastructure in such a system results in overall operating costs comparable to those of jigging. The specific costs depend strongly on the average particle size of the feed and on the ease of the separation. Coarser particles imply higher capacity and thus lower costs.
Detailed costing can be conducted after the mini-bulk stage in the technical feasibility evaluation.\nA widespread prejudice against waste rejection with sensor-based sorting is that the loss of valuables, i.e. the recovery penalty of the process, outweighs the potential downstream cost savings and therefore makes it economically unviable. It must be noted that for waste rejection the separation with sensor-based ore sorting must be aimed at maximum recovery, meaning that only low-grade or barren waste is rejected, because the financial feasibility is very sensitive to that factor. Nevertheless, by rejecting waste before the comminution and concentration steps, recovery in the downstream process can often be increased, so that the overall recovery is equal to or even higher than in the base case. Instead of losing product, additional product can be produced, which adds the extra revenue to the cost savings on the positive side of the cash flow.\nIf the rejected material is replaced with additional higher grade material, the main economic benefit unfolds through the additional production. This implies that, in conjunction with sensor-based ore sorting, the capacity of the crushing station is increased to allow for the additional mass flow that is subsequently taken out by the sensor-based ore sorters as waste.", "The feed is then transferred to the presentation mechanism, which is the belt or the chute in the two main machine types respectively. This sub-process passes single particles of the material stream in a stable and predictable manner, i.e. in a unidirectional movement orthogonal to the detection line with a uniform speed profile.", "A sized screen fraction with a size range coefficient (d95/d5) of 2-5 (optimally 2-3) is fed onto a vibratory feeder, which creates a mono-layer by pre-accelerating the particles. A common misunderstanding in plant design is that the vibratory feeder can also be used to discharge from a buffer bunker; a separate unit needs to be applied, since the feed distribution is very important to the efficiency of the sensor-based sorter and different loads on the feeder change its position and vibration characteristics.", "Single particle testing is an extensive but powerful laboratory procedure developed by Tomra. A sample set of several hundred fragments in the size range 30-60 mm is measured individually on each of the available detection technologies. After recording of the raw data, all the fragments are comminuted and assayed individually, which then allows the liberation function of the sample set to be plotted, together with the detection efficiency of each detection technology in combination with the calibration method applied. This makes it possible to evaluate detection and calibration and subsequently to select the most powerful combination. The analysis can also be applied to quarter or half sections of drill core.", "The chute-type machine has a smaller footprint and fewer moving parts, which results in lower investment and operating costs. In general, it is more applicable to well liberated material and to surface detection, because double-sided scanning can be implemented more reliably on this setup.
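A minimal mass-balance sketch of the waste-rejection argument in the sorting-economics passage above: rejecting near-barren material upgrades the feed to the mill while keeping metal recovery across the sorter high. All tonnages, grades and mass splits below are invented example numbers, not data from the passage.

```python
def sorter_balance(feed_tph: float, feed_grade: float,
                   reject_mass_fraction: float, reject_grade: float):
    """Grade/recovery balance across a sensor-based sorter.
    Grades are expressed as mass fractions of the valuable component."""
    reject_tph = feed_tph * reject_mass_fraction
    accept_tph = feed_tph - reject_tph
    metal_in = feed_tph * feed_grade
    metal_lost = reject_tph * reject_grade
    accept_grade = (metal_in - metal_lost) / accept_tph
    sorter_recovery = (metal_in - metal_lost) / metal_in
    return accept_tph, accept_grade, sorter_recovery

# Example: 25% of the feed rejected as near-barren waste.
tph, grade, recovery = sorter_balance(feed_tph=500.0, feed_grade=0.010,
                                      reject_mass_fraction=0.25, reject_grade=0.001)
print(f"accepted: {tph:.0f} t/h at {grade:.2%}, sorter recovery {recovery:.1%}")
```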
The applicable top size of the chute-type machine is bigger, as material handling of particles up to is only technically viable on this setup.", "The belt-type machine is generally more applicable to smaller and to adhesive feed. In addition, the feed presentation is more stable, which makes it more applicable for more difficult and heterogeneous applications.", "Tungsten plays a large and indispensable role in modern high-tech industry. Up to 500,000 tons of raw tungsten ore are mined each year by Wolfram Bergbau und Hütten AG (WHB) in Felbertal, Austria, which is the largest scheelite deposit in Europe. 25% of the run-of-mine ore is separated as waste before entering the mill.", "Sensor-based ore sorting is the terminology used in the mining industry. It is a physical coarse particle separation technology usually applied in the size range . The aim is either to create a lumpy product in ferrous metals, coal or industrial minerals applications, or to reject waste before it enters production bottlenecks and the more expensive comminution and concentration steps in the process.\nIn the majority of all mining processes, particles of sub-economic grade enter the traditional comminution, classification and concentration steps. If the amount of sub-economic material in the above-mentioned fraction is roughly 25% or more, there is good potential that sensor-based ore sorting is a technically and financially viable option. High added value can be achieved with relatively low capital expenditure, especially when increasing productivity through downstream processing of higher grade feed and through increased overall recovery when rejecting deleterious waste.", "Sensor-based sorting is a coarse particle separation technology applied in mining for the dry separation of bulk materials. The functional principle does not limit the technology to any particular segment or mineral application, but makes the technical viability depend mainly on the liberation characteristics at the size range , which is usually sorted. If physical liberation is present, there is good potential that one of the sensors available on industrial-scale sorting machines can differentiate between valuable and non-valuable particles. \nThe separation is based on features measured with a detection technology that are used to derive a yes/no decision for actuation of usually pneumatic impulses. Sensor-based sorting is a disruptive technology in the mining industry which is universally applicable for all commodities. A comprehensive study examines both the technology's potential and its limitations, whilst providing a framework for application development and evaluation. All relevant aspects, from sampling to plant design and integration into mining and mineral processing systems, are covered. Other terminologies used in the industry include ore sorting, automated sorting, electronic sorting, and optical sorting.", "The main subprocesses of sensor-based sorting are material conditioning, material presentation, detection, data processing and separation. \n* Material conditioning includes all operations which prepare the particles for being detected by the sensor. All optical sensors need clean material to be able to detect optical characteristics. Conditioning includes screening and cleaning of the feed material.
\n* The aim of the material presentation is the isolation of the particles by creating a single-particle layer with the densest surface cover possible, without particles touching each other and with enough distance between them to allow for a selective detection and rejection of each single particle. \nThere are two types of sensor-based sorters: the chute type and the belt type. For both types the first step is spreading out and accelerating the particles by a vibrating feeder followed by either a fast belt or a chute. On the belt type the sensor usually detects the particles horizontally while they pass it on the belt. For the chute type the material detection is usually done vertically while the material passes the sensor in free fall. The data processing is done in real time by a computer. The computer transfers the result of the data processing to an ultra-fast ejection unit which, depending on the sorting decision, ejects a particle or lets it pass.", "As for any other physical separation process, liberation is a pre-requisite for a possible separation. Liberation characteristics are well known and relatively easy to study for particulate lots in smaller size ranges, e.g. flotation feed and products. The analysis is essential for understanding the possible results of physical separation and is relatively easy to conduct in the laboratory on a couple of dozen grams of sample, which can be studied using optical methods or instruments such as the QEMSCAN. \nFor larger particles above , the liberation characteristics are widely known for applications that are treated using density separation methods, such as coal or iron ore. Here, the washability analysis can be conducted on sample masses of up to 10 tonnes in equipped laboratories. For sensor-based sorting, where laboratory methods can only describe the liberation characteristics when the describing feature is density (e.g. iron ore, coal), hand counting, single-particle tests and bulk tests can reveal the liberation characteristics of a bulk material: hereby, only single-particle tests reveal the true liberation, while hand counting and bulk testing give a result which also incorporates the separation efficiency of the type of analysis. More information on the testing procedures used in technical feasibility evaluation can be found in the respective chapter.", "Hand-counting is a cheap and easy-to-conduct method to estimate the liberation characteristics of a bulk sample either originating from run-of-mine material, a waste dump or, for example, exploration trenching. Analysis of particles in the size range 10-100 mm has been conducted on a total sample mass of 10 tonnes. By visual inspection by trained personnel, a classification of each particle into different bins (e.g. lithology, grade) is possible, and the distribution is determined by weighing each bin. A trained professional can quickly estimate the efficiency of a specific detection and the process efficiency of sensor-based ore sorting, knowing the sensor response of the mineralogy of the ore in question and other process efficiency parameters.", "A size range coefficient of approximately three is advisable. As little undersized fine material as possible should enter the machines, in order to optimize availability. Moisture of the feed is not important if the material is sufficiently dewatered and the undersize fraction is efficiently removed. For surface detection technologies, spray water on the classifying screen is sometimes required to clean the surfaces. 
Surface detection technologies would otherwise measure the reflectance of the adhesions on the surface, and a correlation to the particle's content would not be given.", "During the more than 80 years of technical development of sensor-based ore sorting equipment, various types of machines have been developed. This includes the channel-type, bucket-wheel type and cone type sorters. The main machine types being installed in the mining industry today are belt-type and chute-type machines. Harbeck made a good comparison of the disadvantages and advantages of both systems for different sorting applications. The selection of a machine type for an application depends on various case-dependent factors, including the detection system applied, particle size, moisture and yield, amongst others.", "Sensor-based sorting is an umbrella term for all applications in which particles are detected using a sensor technique and rejected by an amplified mechanical, hydraulic or pneumatic process. \nThe technique is generally applied in mining, recycling and food processing and used in the particle size range between . Since sensor-based sorting is a single particle separation technology, the throughput is proportional to the average particle size and weight fed onto the machine.", "The washability analysis is widely known in bulk material analysis, where the specific density is the physical property describing the liberation and the separation results, which are then expressed in the form of the partition curve. The partition curve is defined as the curve which gives, as a function of a physical property or characteristic, the proportions in which different elemental classes of raw feed having the same property are split into separate products. It is thus, per its definition, not limited to, but predominantly applied in, the analysis of liberation and process efficiency of density separation processes. For sensor-based ore sorting, the partition (also called Tromp) curves for chromite, iron ore and coal are known and can thus be applied for process modelling (a small computational sketch is given below).", "The oldest form of mineral processing, practiced since the Stone Age, is hand-picking. Georgius Agricola also describes hand-picking in his book De re metallica in 1556. Sensor-based sorting is the automation and extension of hand-picking. In addition to sensors that measure visible differences like color (and the further interpretation of the data regarding texture and shape), other sensors are available on industrial-scale sorters that are able to measure differences invisible to the human eye (EM, XRT, NIR).\nThe principles of the technology and the first machinery have been developed since the 1920s. Nevertheless, it is a widely applied and standard technology only in the industrial minerals and gemstone segments. Mining is benefiting from the step-change developments in sensing and computing technologies and from machine development in the recycling and food processing industries.\nIn 2002, Cutmore and Eberhard stated that the relatively small installed base of sensor-based sorters in mining is more a result of insufficient industry interest than of any technical barriers to their effective use. \nNowadays sensor-based sorting is beginning to reveal its potential in various applications in basically all segments of mineral production (industrial minerals, gemstones, base metals, precious metals, ferrous metals, fuel). A precondition is physical liberation in coarse size ranges (~) to make physical separation possible. 
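The partition (Tromp) curve defined above can be computed directly from sorted products once each class of the feed has been assayed in both output streams. A minimal sketch with invented masses per density class follows; the same arithmetic applies when the classifying property is a sensor response rather than density.

```python
# Partition (Tromp) numbers: the fraction of each feed class reporting to the
# product stream. Masses per class are invented for illustration only.

# density classes (g/cm^3) and the mass (t) of each class found in product / reject
classes =        [2.7, 2.9, 3.1, 3.3, 3.5]
mass_product_t = [ 1.0,  4.0, 12.0, 18.0, 20.0]
mass_reject_t  = [19.0, 16.0,  8.0,  2.0,  0.5]

for rho, mp, mr in zip(classes, mass_product_t, mass_reject_t):
    partition = mp / (mp + mr)   # proportion of this class split to the product
    print(f"class {rho} g/cm3: partition number = {partition:.2f}")
```

The steepness of the resulting curve reflects both the liberation of the feed and the efficiency of the separation itself.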
Either the product fraction or, more often, the waste fraction needs to be liberated. If liberation is present, there is good potential that one of the detection technologies available on today's sensor-based sorters can positively or negatively identify one of the two desired fractions.", "To gather relevant statistical data, higher sample masses are needed in some cases. Transportation of the sample to the mini-bulk testing facility then becomes unviable and the equipment is instead set up in the field. Containerised units in conjunction with Diesel-powered crushing and screening equipment are often applied and used for production test runs under full-scale operating conditions.", "Spectral and spatial data are collected by the detection system. The spatial component captures the position of each particle across the width of the sorting machine, which is then used in case the ejection mechanism is activated for that particle. Spectral data comprises the features that are used for material discrimination. In a subsequent processing step, spectral and spatial data can be combined to include patterns in the separation criterion. A huge amount of data is collected in real time, and multiple processing and filtering steps bring the data down to a yes/no decision – either ejecting a particle or keeping the ejection mechanism still for that one (a simplified sketch of such a decision is given below).", "In the detection sub-process, location and property vectors are recorded to allow particle localization for ejection and material classification for discrimination purposes. All detection technologies applied have in common that they are cheap, contactless and fast. The technologies are subdivided into transmitting and reflecting groups, the first measuring the inner content of a particle while the latter only uses the surface reflection for discrimination. Surface, or reflection, technologies have the disadvantage that the surfaces need to be representative of the content, and thus need to be clean of clay and dust adhesions. By default, surface reflection technologies violate the Fundamental Sampling Principle, because not all components of a particle have the same probability of being detected. \nThe main transmitting technologies are EM (electromagnetics) and XRT (X-ray transmission). EM detection is based on the conductivity of the material passing an alternating electromagnetic field. The principle of XRT is widely known through its application in medical diagnostics and airport luggage scanners. The main surface or reflection technologies are traditionally X-ray luminescence detectors, capturing the fluorescence of diamonds under the excitation of X-ray radiation, and color cameras detecting brightness and colour differences. Spectroscopic methods such as near-infrared spectroscopy, known from remote sensing in mining exploration for decades, have found their way into industrial-scale sensor-based sorters. The advantage of applying near-infrared spectroscopy is that evidence can be measured of the presence of specific molecular bonds, and thus of the mineral composition of the near-infrared-active minerals. There are more detection technologies available on industrial-scale sensor-based ore sorters; readers who want to go into detail can find more in the literature.", "Sensor-based sorting installations normally comprise the following basic units: crusher, screen, sensor-based sorter and compressor. 
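Before turning to the installations themselves, the reduction of spectral and spatial data to a per-particle yes/no decision, described earlier in this section, can be illustrated with a deliberately simplified sketch. The feature name, threshold and pixel values below are invented and not taken from any particular machine.

```python
# Simplified per-particle data processing: spectral data is reduced to a
# yes/no decision, spatial data is kept so the ejection stage knows where
# the particle is. Feature name and threshold are illustrative assumptions.

EJECT_THRESHOLD = 0.55   # calibration-dependent, invented value

def classify(particle_pixels):
    """Reduce the spectral data of one particle to a single feature and
    compare it against the calibrated threshold (the yes/no decision)."""
    mean_feature = sum(px["attenuation_ratio"] for px in particle_pixels) / len(particle_pixels)
    return mean_feature < EJECT_THRESHOLD     # True -> eject as waste

# one detected particle: a handful of pixels with spectral + spatial values
pixels = [{"attenuation_ratio": 0.48, "x_mm": 305.0},
          {"attenuation_ratio": 0.41, "x_mm": 312.5},
          {"attenuation_ratio": 0.44, "x_mm": 320.0}]

eject = classify(pixels)
x_centre = sum(px["x_mm"] for px in pixels) / len(pixels)   # spatial information
print(f"eject={eject}, particle centre at x = {x_centre:.1f} mm")
```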
There are principally two different kinds of installations, which are described in the following paragraphs – stationary and semi-mobile installations.", "Transportable semi-mobile installations have gained increasing popularity in the last two decades. They are enabled by the fact that complete sensor-based sorting systems are relatively compact in relation to their capacity in tonnes per hour. This is mainly because little infrastructure is needed. The picture shows a containerised sensor-based sorter which is applied in chromitite sorting. The system is operated in conjunction with a Diesel-powered mobile crusher and screen. Material handling of the feed, undersize fraction, product and waste fraction is conducted using a wheel loader. The system is powered by a Diesel generator, and a compressor station delivers the instrument-quality air needed for the operation.\nSemi-mobile installations are applied primarily to minimise material handling and save transport costs. Another reason for choosing the semi-mobile option for an installation is bulk testing of new ore bodies. The capacity of a system very much depends on the size fraction sorted, but 250 tph is a good estimate for semi-mobile installations, considering a capacity of 125 tph sorter feed and 125 tph undersize material. During the last decade both generic plant designs and customised designs have been developed, for example in the framework of the i2mine project.", "Mini-bulk tests are conducted with 1-100 t of sample on industrial-scale sensor-based ore sorters. The size fraction intervals to be treated are prepared using screen classification. Full capacity is then established with each fraction, and multiple cut-points are programmed in the sorting software. After creating multiple sorting fractions in rougher, scavenger and cleaner steps, these fractions are weighed and sent for assays. The resulting data delivers all the input for flow-sheet development. Since the tests are conducted on industrial-scale equipment, there is no scale-up factor involved when designing a flow-sheet and installation of sensor-based ore sorting.", "For higher grade applications such as ferrous metals, coal and industrial minerals, sensor-based ore sorting can be applied to create a final product. A pre-condition is that the liberation allows for the creation of a sellable product. Undersize material is usually bypassed as product, but can also be diverted to the waste fraction if its composition does not meet the required specifications. This is case and application dependent.", "The most prominent example of the application of sensor-based ore sorting is the rejection of barren waste before transporting and comminution. Waste rejection is also known under the term pre-concentration. A discrimination has been introduced by Robben. A rule of thumb is that at least 25% of liberated barren waste must be present in the fraction to be treated by sensor-based ore sorting to make waste rejection financially feasible.\nReduction of waste before it enters comminution and grinding processes not only reduces the costs in those processes, but also releases capacity that can be filled with higher grade material, and thus implies higher productivity of the system. A prejudice against the application of a waste rejection process is that the valuable content lost in this process is a penalty higher than the savings that can be achieved. But it is reported in the literature that the overall recovery even increases through bringing higher grade material as feed into the mill. 
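As a rough numerical sketch of the waste-rejection argument above (all tonnages and grades below are invented): rejecting a low-grade coarse fraction before milling raises the grade of the remaining mill feed at the cost of a small amount of contained metal.

```python
# Invented pre-concentration mass balance: reject a low-grade coarse fraction
# before milling and see the effect on mill feed grade and contained metal.

feed_tph = 400.0          # run-of-mine feed to the sorter, t/h (invented)
feed_grade = 0.60         # % metal in the feed (invented)
reject_fraction = 0.25    # 25% of the mass rejected as barren/low-grade waste
reject_grade = 0.08       # % metal in the rejected waste (invented)

reject_tph = feed_tph * reject_fraction
accept_tph = feed_tph - reject_tph
metal_feed = feed_tph * feed_grade / 100        # contained metal in feed, t/h
metal_lost = reject_tph * reject_grade / 100    # contained metal in reject, t/h
accept_grade = (metal_feed - metal_lost) / accept_tph * 100

print(f"mill feed: {accept_tph:.0f} t/h at {accept_grade:.2f} % "
      f"(sorter metal loss {metal_lost / metal_feed:.1%})")
```

Whether the few per cent of contained metal left in the coarse reject is genuinely lost, or is compensated by higher downstream recovery, is case dependent, which is exactly the sensitivity discussed above.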
In addition, the higher productivity is an additional source of income. If noxious waste such as acid-consuming calcite is removed, the downstream recovery increases and the downstream costs decrease disproportionally, as reported for example by Bergmann. The coarse waste rejected can be an additional source of income if there is a local market for aggregates.", "Sensor-based ore sorting is financially especially attractive for low grade or marginal ore or waste dump material. In this scenario, waste dump material or marginal ore is sorted and added to the run-of-mine production. The capacity needed for the sensor-based ore sorting step is lower in this case, as are the costs involved. A requirement is that two crude material streams are fed in parallel, requiring two crushing stations. Alternatively, marginal and high grade ore can be buffered on an intermediate stockpile and dispatched in an alternating operation. The latter option has the disadvantage that the planned production time, i.e. the loading, of the sensor-based ore sorter is low, unless a significant intermediate stockpile or bunker is installed. Treating the marginal ore separately has the advantage that less equipment is needed, since the processed material stream is smaller, but it has the disadvantage that the potential of the technology is not unfolded for the higher grade material, where sensor-based sorting would also add benefit.", "To cope with high-volume mass flows, and for applications where a changing physical location of the sensor-based sorting process is of no benefit to the financial feasibility of the operation, stationary installations are applied. Another reason for applying stationary installations is multistage (Rougher, Scavenger, Cleaner) sensor-based ore sorting processes. Within stationary installations, sorters are usually located in parallel, which allows transport of the discharge fractions with one product and one waste belt respectively, decreasing the plant footprint and the number of conveyors.", "Sensor-based sorting can be applied to separate the coarse fraction of the run-of-mine material according to its characteristics. Possible separation criteria are grade, mineralogy and grindability, amongst others. Treating different ore types separately results either in an optimised cash flow, in the sense that revenue is shifted to an earlier point in time, or in increased overall recovery, which translates to higher productivity and thus revenue. If two separate plant lines are installed, the increased productivity must compensate for the overall higher capital expenditure and operating costs.", "The expert conference “Sensor-Based Sorting” addresses new developments and applications in the field of automatic sensor separation techniques for primary and secondary raw materials. The conference provides a platform for plant operators, manufacturers, developers and scientists to exchange know-how and experiences.\nThe congress is hosted by the Department of Processing and Recycling and the Unit for Mineral Processing (AMR) of RWTH Aachen University in cooperation with the GDMB Society of Metallurgists and Miners, Clausthal. Scientific supervisors are Professor Thomas Pretz and Professor Hermann Wotruba.", "Comex provides sorting technologies for mining industries using a multi-sensor solution integrated in the same sorting units, like X-ray, hyper-spectral IR and color optical sensors and 3D cameras, which can be very effective in identifying and sorting various mineral particles. 
Integration of AI models for sensor data processing is of critical importance to achieve good sorting results.", "Pebble circuits are a very advantageous location for the application of sensor-based ore sorters. Usually it is hard waste recirculating and limiting the total mill capacity. In addition, the tonnage is significantly lower in comparison to the total run-of-mine stream, the size range is applicable and usually uniform and the particles' surfaces are clean. High impact on total mill capacity is reported in the literature.", "Raytec Vision is a camera and sensor-based manufacturer based in Parma and specialized in food sorting. The applications of Raytec Vision's machines are many: tomatoes, tubers, fruit, fresh cut, vegetables and confectionery products. Each machine can separate good products from wastes, foreign bodies and defects and guarantees high levels of food safety for the final consumer.", "A sensor-based sorting equipment supplier with large installed base in the industries mining, recycling and food. Tomras sensor-based sorting equipment and services for the precious metals and base metals segment are marketed through a cooperation agreement with Outotec from Finland, which brings the extensive comminution, processing and application experience of Outotec together with Tomras sensor-based ore sorting technology and application expertise.", "Steinert provides sorting technologies for recycling and mining industries using a variety of sensors, like X-ray, inductive, NIR and color optical sensors and 3D laser camera, which can be combined for sorting a variety of materials. NIR technology is used in the recycling field.", "The process efficiency of sensor-based ore sorting is described in detail by C. Robben in 2014. The total process efficiency is subdivided into the following sub-process efficiencies; Platform efficiency, preparation efficiency, presentation efficiency, detection efficiency and separation efficiency. All the sub-process contribute to the total process efficiency, of course in combination with the liberation characteristics of the bulk material that the technology is applied to. The detailed description of the sib-processes and their contribution to the total process efficiency can be found in the literature.", "The state-of-the-art mechanism of today's sensor-based ore sorters is a pneumatic ejection. Here, a combination of high speed air valves and an array of nozzles perpendicular to the acceleration belt or chute allows precise application of air pulses to change the direction of flight of single particles. The nozzle pitch and diameter is adapted to the particle size. The air impulse must be precise enough to change the direction of flight of a single particle by applying the drag force to this single particle and directing it over the mechanical splitter plate.", "A separation process is a method that converts a mixture or a solution of chemical substances into two or more distinct product mixtures, a scientific process of separating two or more substances in order to obtain purity. At least one product mixture from the separation is enriched in one or more of the source mixture's constituents. In some cases, a separation may fully divide the mixture into pure constituents. 
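Returning to the pneumatic ejection mechanism of sensor-based sorters described just above: the controller has to translate a particle's detected position into which nozzles to open and when. A minimal sketch with invented geometry and belt speed (not figures from the text):

```python
# Invented geometry: map a detected particle to nozzle indices and to the
# delay between the detection line and the ejection (nozzle) line.

BELT_SPEED_M_S = 3.0        # presentation speed, m/s (invented)
DETECT_TO_NOZZLE_M = 0.50   # distance from scan line to nozzle bar, m (invented)
NOZZLE_PITCH_MM = 12.5      # nozzle spacing, mm (invented)

def ejection_plan(x_centre_mm, width_mm, t_detect_s):
    """Return (nozzle indices covering the particle, valve opening time)."""
    first = int((x_centre_mm - width_mm / 2) // NOZZLE_PITCH_MM)
    last = int((x_centre_mm + width_mm / 2) // NOZZLE_PITCH_MM)
    t_fire = t_detect_s + DETECT_TO_NOZZLE_M / BELT_SPEED_M_S
    return list(range(first, last + 1)), t_fire

nozzles, t_fire = ejection_plan(x_centre_mm=312.5, width_mm=40.0, t_detect_s=10.000)
print(f"open nozzles {nozzles} at t = {t_fire:.3f} s")
```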
Separations exploit differences in chemical properties or physical properties (such as size, shape, mass, density, or chemical affinity) between the constituents of a mixture.\nProcesses are often classified according to the particular properties they exploit to achieve separation. If no single difference can be used to accomplish the desired separation, multiple operations can often be combined to achieve the desired end.\nWith a few exceptions, elements or compounds exist in nature in an impure state. Often these raw materials must go through a separation before they can be put to productive use, making separation techniques essential for the modern industrial economy. \nThe purpose of separation may be:\n* analytical: to identify the size of each fraction of a mixture is attributable to each component without attempting to harvest the fractions.\n* preparative: to \"prepare\" fractions for input into processes that benefit when components are separated.\nSeparations may be performed on a small scale, as in a laboratory for analytical purposes, or on a large scale, as in a chemical plant.", "* Centrifugation and cyclonic separation, separates based on density differences\n* Chelation\n* Chromatography separates dissolved substances by different interaction with (i.e., travel through) a material.\n** High-performance liquid chromatography (HPLC)\n** Thin-layer chromatography (TLC)\n** Countercurrent chromatography (CCC)\n** Droplet countercurrent chromatography (DCC)\n** Paper chromatography\n** Ion chromatography\n** Size-exclusion chromatography (SEC)\n** Affinity chromatography\n** Centrifugal partition chromatography\n** Gas chromatography and Inverse gas chromatography\n* Crystallization\n* Decantation\n* Demister (vapor), removes liquid droplets from gas streams\n* Distillation, used for mixtures of liquids with different boiling points\n* Drying, removes liquid from a solid by vaporization or evaporation\n* Electrophoresis, separates organic molecules based on their different interaction with a gel under an electric potential (i.e., different travel)\n** Capillary electrophoresis\n* Electrostatic separation, works on the principle of corona discharge, where two plates are placed close together and high voltage is applied. 
This high voltage is used to separate the ionized particles.\n* Elutriation\n* Evaporation\n* Extraction\n** Leaching\n** Liquid–liquid extraction\n** Solid phase extraction\n** Supercritical fluid extraction\n** Subcritical fluid extraction\n* Field flow fractionation\n* Filtration – Mesh, bag and paper filters are used to remove large particulates suspended in fluids (e.g., fly ash), while membrane processes, including microfiltration, ultrafiltration, nanofiltration, reverse osmosis and dialysis (biochemistry) utilising synthetic membranes, separate micrometre-sized or smaller species\n* Flocculation, separates a solid from a liquid in a colloid by use of a flocculant, which promotes the solid clumping into flocs\n* Fractional distillation\n* Fractional freezing\n* Magnetic separation\n* Oil-water separation, gravimetrically separates suspended oil droplets from waste water in oil refineries, petrochemical and chemical plants, natural gas processing plants and similar industries\n* Precipitation\n* Recrystallization\n* Scrubbing, separation of particulates (solids) or gases from a gas stream using liquid\n* Sedimentation, separates based on density differences\n** Gravity separation\n* Sieving\n* Adsorption, adhesion of atoms, ions or molecules of gas, liquid, or dissolved solids to a surface\n* Stripping\n* Sublimation\n* Vapor–liquid separation, separates by gravity, based on the Souders–Brown equation\n* Winnowing\n* Zone refining", "Some types of separation require complete purification of a certain component. An example is the production of aluminum metal from bauxite ore through electrolytic refining. In contrast, an incomplete separation process may specify an output that consists of a mixture instead of a single pure component. A good example of an incomplete separation technique is oil refining. Crude oil occurs naturally as a mixture of various hydrocarbons and impurities. The refining process splits this mixture into other, more valuable mixtures such as natural gas, gasoline and chemical feedstocks, none of which are pure substances, but each of which must be separated from the raw crude.\nIn both complete and incomplete separation, a series or cascade of separations may be necessary to obtain the desired end products. In the case of oil refining, crude is subjected to a long series of individual distillation steps, each of which produces a different product or intermediate.", "Short-path distillation is a distillation technique that involves the distillate traveling a short distance, often only a few centimeters, and is normally done at reduced pressure. Short-path distillation systems often have a variety of names depending on the manufacturer of the system and what compounds are being distilled within them. A classic example would be a distillation involving the distillate traveling from one glass bulb to another, without the need for a condenser separating the two chambers. This technique is often used for compounds which are unstable at high temperatures, or to purify small amounts of compound. The advantage is that the heating temperature can be considerably lower at reduced pressure than the boiling point of the liquid at standard pressure, and the distillate only has to travel a short distance before condensing. A short path ensures that little compound is lost on the sides of the apparatus. The Kugelrohr is a kind of short-path distillation apparatus which can contain multiple chambers to collect distillate fractions. 
To increase the evaporation rate without increasing temperature there are several modern techniques that increase the surface area of the liquid such as thin film, wiped film or wiper film, and rolled film all of which involve mechanically spreading a film of the liquid over a large surface.", "The Society for Cryobiology is an international scientific society that was founded in 1964. Its objectives are to promote research in low temperature biology, to improve scientific understanding in this field, and to disseminate and aid in the application of this knowledge. The Society also publishes a journal called Cryobiology.\nThe society has hosted 60 annual meetings to date, with the 2024 annual meeting being held in Washington. The three-day event will host over 350 delegates from more than 35 countries.", "Pontecorvo was never able to prove his theory, but he was on to something with his thinking. In 2002, results from an experiment conducted 2100 meters underground at the Sudbury Neutrino Observatory proved and supported Pontecorvo's theory and discovered that neutrinos released from the Sun can in fact change form or flavor because they are not completely massless. This discovery of neutrino oscillation solved the solar neutrino problem, nearly 40 years after Davis and Bahcall began studying solar neutrinos.", "A solar neutrino is a neutrino originating from nuclear fusion in the Sun's core, and is the most common type of neutrino passing through any source observed on Earth at any particular moment. Neutrinos are elementary particles with extremely small rest mass and a neutral electric charge. They only interact with matter via the weak interaction and gravity, making their detection very difficult. This has led to the now-resolved solar neutrino problem. Much is now known about solar neutrinos, but the research in this field is ongoing.", "The timeline of solar neutrinos and their discovery dates back to the 1960s, beginning with the two astrophysicists John N. Bahcall and Raymond Davis Jr. The experiment, known as the Homestake experiment, named after the town in which it was conducted (Homestake, South Dakota), aimed to count the solar neutrinos arriving at Earth. Bahcall, using a solar model he developed, came to the conclusion that the most effective way to study solar neutrinos would be via the chlorine-argon reaction. Using his model, Bahcall was able to calculate the number of neutrinos expected to arrive at Earth from the Sun. Once the theoretical value was determined, the astrophysicists began pursuing experimental confirmation. Davis developed the idea of taking hundreds of thousands of liters of perchloroethylene, a chemical compound made up of carbon and chlorine, and searching for neutrinos using a chlorine-argon detector. The process was conducted very far underground, hence the decision to conduct the experiment in Homestake as the town was home to the Homestake Gold Mine. By conducting the experiment deep underground, Bahcall and Davis were able to avoid cosmic ray interactions which could affect the process and results. The entire experiment lasted several years as it was able to detect only a few chlorine to argon conversions each day, and the first results were not yielded by the team until 1968. To their surprise, the experimental value of the solar neutrinos present was less than 20% of the theoretical value Bahcall calculated. 
At the time, it was unknown whether there was an error with the experiment or with the calculations, or whether Bahcall and Davis had not accounted for all variables, but this discrepancy gave birth to what became known as the solar neutrino problem.", "The Sudbury Neutrino Observatory (SNO), an underground observatory in Sudbury, Canada, is the other site where neutrino oscillation research was taking place in the late 1990s and early 2000s. The results from experiments at this observatory, along with those at Super-Kamiokande, are what helped solve the solar neutrino problem.\nThe SNO is also a heavy-water Cherenkov detector and is designed to work in the same way as the Super-Kamiokande. Neutrinos reacting with the heavy water produce the blue Cherenkov light, signaling the detection of neutrinos to researchers and observers.", "Davis and Bahcall continued their work to understand where they may have gone wrong or what they were missing, along with other astrophysicists who also did their own research on the subject. Many reviewed and redid Bahcall's calculations in the 1970s and 1980s, and although there was more data making the results more precise, the difference still remained. Davis even repeated his experiment, changing the sensitivity and other factors to make sure nothing was overlooked, but he found nothing and the results still showed \"missing\" neutrinos. By the end of the 1970s, the widely accepted result was that the experimental data yielded about 39% of the calculated number of neutrinos. In 1969, Bruno Pontecorvo, an Italo-Russian astrophysicist, suggested the new idea that maybe we do not understand neutrinos as well as we think we do, and that neutrinos could change in some way, meaning that the neutrinos released by the Sun had changed form and were no longer neutrinos as they were then understood by the time they reached Earth, where the experiment was conducted. This theory of Pontecorvo's would account for the discrepancy between the experimental and theoretical results that persisted.", "The Super-Kamiokande is a 50,000 ton water Cherenkov detector located underground. The primary uses for this detector in Japan, in addition to neutrino observation, are cosmic ray observation as well as searching for proton decay. In 1998, the Super-Kamiokande was the site of the Super-Kamiokande experiment which led to the discovery of neutrino oscillation, the process by which neutrinos change their flavor, either to electron, muon or tau.\nThe Super-Kamiokande experiment began in 1996 and is still active. In the experiment, the detector spots neutrinos by analyzing water molecules and detecting electrons being removed from them, which produces the blue Cherenkov light associated with neutrinos. When this blue light is detected, it can be inferred that a neutrino is present, and it is counted.", "The critical issue of the solar neutrino problem, which many astrophysicists interested in solar neutrinos studied and attempted to solve in the late 1900s and early 2000s, is solved. In the 21st century, even without a main problem to solve, there is still unique and novel research ongoing in this field of astrophysics.", "Solar neutrinos are produced in the core of the Sun through various nuclear fusion reactions, each of which occurs at a particular rate and leads to its own spectrum of neutrino energies. Details of the more prominent of these reactions are described below.\nThe main contribution comes from the proton–proton chain. 
The reaction is:\n:p + p → ²H + e⁺ + νₑ\nor in words:\n: two protons → deuteron + positron + electron neutrino.\nOf all solar neutrinos, approximately 91% are produced from this reaction. As shown in the figure titled \"Solar neutrinos (proton–proton chain) in the standard solar model\", the deuteron will fuse with another proton to create a ³He nucleus and a gamma ray. This reaction can be seen as:\n:²H + p → ³He + γ\nThe isotope ⁴He can be produced by fusing two of the ³He nuclei from the previous reaction, as seen below.\n:³He + ³He → ⁴He + p + p\nWith both helium-3 and helium-4 now in the environment, one helium nucleus of each weight can fuse to produce beryllium:\n:³He + ⁴He → ⁷Be + γ\nBeryllium-7 can follow two different paths from this stage: it could capture an electron and produce the more stable lithium-7 nucleus and an electron neutrino, or alternatively it could capture one of the abundant protons, which would create boron-8. The first reaction, via lithium-7, is:\n:⁷Be + e⁻ → ⁷Li + νₑ\nThis lithium-yielding reaction produces approximately 7% of the solar neutrinos. The resulting lithium-7 later combines with a proton to produce two nuclei of helium-4. The alternative reaction is proton capture, which produces boron-8; this then beta decays into beryllium-8, as shown below:\n:⁷Be + p → ⁸B + γ\n:⁸B → ⁸Be* + e⁺ + νₑ\nThis alternative boron-yielding reaction produces about 0.02% of the solar neutrinos; although so few that they would conventionally be neglected, these rare solar neutrinos stand out because of their higher average energies. The asterisk (*) on the beryllium-8 nucleus indicates that it is in an excited, unstable state. The excited beryllium-8 nucleus then splits into two helium-4 nuclei:\n:⁸Be* → ⁴He + ⁴He", "The highest flux of solar neutrinos comes directly from the proton–proton interaction, and has a low energy, up to 400 keV. There are also several other significant production mechanisms, with energies up to 18 MeV. The neutrino flux at Earth is around 7·10¹⁰ particles·cm⁻²·s⁻¹. The number of neutrinos can be predicted with great confidence by the standard solar model, but the number of neutrinos detected on Earth is only about a third of the number predicted, which is the solar neutrino problem.\nSolar models additionally predict the location within the Sun's core where solar neutrinos should originate, depending on the nuclear fusion reaction which leads to their production. Future neutrino detectors will be able to detect the incoming direction of these neutrinos with enough precision to measure this effect.\nThe energy spectrum of solar neutrinos is also predicted by solar models. It is essential to know this energy spectrum because different neutrino detection experiments are sensitive to different neutrino energy ranges. The Homestake experiment used chlorine and was most sensitive to solar neutrinos produced by the decay of the beryllium isotope ⁷Be. The Sudbury Neutrino Observatory is most sensitive to solar neutrinos produced by ⁸B. The detectors that use gallium are most sensitive to the solar neutrinos produced by the proton–proton chain reaction process; however, they were not able to observe this contribution separately. The observation of the neutrinos from the basic reaction of this chain, proton–proton fusion into deuterium, was achieved for the first time by Borexino in 2014. In 2012 the same collaboration reported detecting low-energy neutrinos from the proton–electron–proton reaction (pep reaction), which produces 1 in 400 deuterium nuclei in the Sun. 
The detector contained 100 metric tons of liquid and saw on average 3 events each day (due to C production) from this relatively uncommon thermonuclear reaction.\nIn 2014, Borexino reported a successful direct detection of neutrinos from the pp-reaction at a rate of 144±33/day, consistent with the predicted rate of 131±2/day that was expected based on the standard solar model prediction that the pp-reaction generates 99% of the Suns luminosity and their analysis of the detectors efficiency.\nAnd in 2020, Borexino reported the first detection of CNO cycle neutrinos from deep within the solar core.\nNote that Borexino measured neutrinos of several energies; in this manner they have demonstrated experimentally, for the first time, the pattern of solar neutrino oscillations predicted by the theory. Neutrinos can trigger nuclear reactions. By looking at ancient ores of various ages that have been exposed to solar neutrinos over geologic time, it may be possible to interrogate the luminosity of the Sun over time, which, according to the standard solar model, has changed over the eons as the (presently) inert byproduct helium has accumulated in its core.", "Wolfgang Pauli was the first to suggest the idea of a particle such as the neutrino existing in our universe in 1930. He believed such a particle to be completely massless. This was the belief amongst the astrophysics community until the solar neutrino problem was solved.\nFrederick Reines, from the University of California at Irvine, and George A. Cowan were the first astrophysicists to detect neutrinos in 1956. They won a Nobel Prize in Physics for their work in 1995.\nRaymond Davis and John Bahcall are the pioneers of solar neutrino studies. While Bahcall never won a Nobel Prize, Davis along with Masatoshi Koshiba won the Nobel Prize in Physics in 2002 after the solar neutrino problem was solved for their contributions in helping solve the problem.\nPontecorvo, known as the first astrophysicist to suggest the idea neutrinos have some mass and can oscillate, never received a Nobel Prize for his contributions due to his passing in 1993.\nArthur B. McDonald, a Canadian physicist, was a key contributor in building the Sudbury Neutrino Observatory (SNO) in the mid 1980s and later became the director of the SNO and leader of the team that solved the solar neutrino problem. McDonald, along with Japanese physicist Kajita Takaaki both received a Nobel Prize for their work discovering the oscillation of neutrinos in 2015.", "This research, published in 2017, aimed to solve the solar neutrino and antineutrino flux for extremely low energies (keV range). Processes at these low energies consisted vital information that told researchers about the solar metallicity. Solar metallicity is the measure of elements present in the particle that are heavier than hydrogen and helium, typically in this field this element is usually iron. The results from this research yielded significantly different findings compared to past research in terms of the overall flux spectrum. Currently technology does not yet exist to put these findings to the test.", "This research, published in 2017, aimed to search for the solar neutrino effective magnetic moment. The search was completed using data from exposure from the Borexino experiment's second phase which consisted of data over 1291.5 days (3.54 years). 
The results yielded that the electron recoil spectrum shape was as expected with no major changes or deviations from it.", "The Borexino detector is located at the Laboratori Nazionali de Gran Sasso, Italy. Borexino is an actively used detector, and experiments are on-going at the site. The goal of the Borexino experiment is measuring low energy, typically below 1 MeV, solar neutrinos in real-time. The detector is a complex structure consisting of photomultipliers, electrons, and calibration systems making it equipped to take proper measurements of the low energy solar neutrinos. Photomultipliers are used as the detection device in this system as they are able to detect light for extremely weak signals.\nSolar neutrinos are able to provide direct insight into the core of the Sun because that is where the solar neutrinos originate. Solar neutrinos leaving the Sun's core reach Earth before light does due to the fact solar neutrinos do not interact with any other particle or subatomic particle during their path, while light (photons) bounces around from particle to particle. The Borexino experiment used this phenomenon to discover that the Sun releases the same amount of energy currently as it did a 100,000 years ago.", "Solar-blind technology is a set of technologies to produce images without interference from the Sun. This is done by using wavelengths of ultraviolet light that are totally absorbed by the ozone layer, yet are transmitted in the Earth's atmosphere. Wavelengths from 240 to 280 nm are completely absorbed by the ozone layer. Elements of this technology are ultraviolet light sources, ultraviolet image detectors, and filters that only transmit the range of wavelengths that are blocked by ozone. A system will also have a signal processing system, and a way to display the results (image).", "Solar-blind imaging can be used to detect corona discharge, in electrical infrastructure. Missile exhaust can be detected from the troposphere or ground. Also when looking down on the Earth from space, the Earth appears dark in this range, so rockets can be easily detected from above once they pass the ozone layer.\nIsrael, People's Republic of China, Russia, South Africa, United Kingdom, and United States are developing this technology.", "Ultraviolet illumination can be produced from longer wavelengths using non-linear optical materials. These can be a second harmonic generator. They must have a suitable birefringence in order to phase match the output frequency doubled UV light. One compound commercially used is L-arginine phosphate monohydrate known as LAP. Research is underway for substances that are very non-linear, have a suitable birefringence, are transparent in the spectrum and have a high degree of resistance to damage from lasers.", "An optical filter can be used to block out visible light and near-ultraviolet light. It is important to have a high transmittance within the solar-blind spectrum, but to strongly block the other wavelengths.\nInterference filters can pass 25% of the wanted rays, and reduce others by 1000 to 10,000 times. However they are unstable and have a narrow field of view.\nAbsorption filters may only pass 10% of wanted UV, but can reject by a ratio of 10. They can have a wide field of view and are stable.", "Semiconductor ultraviolet detectors are solid state, and convert an ultraviolet photon into an electric pulse. 
If they are transparent to visible light, then they will not be sensitive to light.", "Normal glass does not transmit below 350 nm, so it is not used for optics in solar-blind systems. Instead calcium fluoride, fused silica, and magnesium fluoride are used as they are transparent to shorter wavelengths.", "Solid acids are used in catalysis in many industrial chemical processes, from large-scale catalytic cracking in petroleum refining to the synthesis of various fine chemicals.\nOne large scale application is alkylation, e.g., the combination of benzene and ethylene to give ethylbenzene. Another application is the rearrangement of cyclohexanone oxime to caprolactam. Many alkylamines are prepared by amination of alcohols, catalyzed by solid acids.\nSolid acids can be used as electrolytes in fuel cells.", "Most solid state acids are organic acids such as oxalic acid, tartaric acid, citric acid, maleic acid, etc. Examples of inorganic solid acids include silico-aluminates (zeolites, alumina, silico-aluminophosphate), and sulfated zirconia. Many transition metal oxides are acidic, including titania, zirconia, and niobia. Such acids are used in cracking. Many solid Brønsted acids are also employed industrially, including polystyrene sulfonate, solid phosphoric acid, niobic acid, and heteropolyoxometallates.", "Solid acids are acids that are insoluble in the reaction medium. They are often used as heterogeneous catalysts.", "Applications in biotechnology were developed only most recently. This is due to the sensitivity of bioproducts such as proteins towards organic extractants.\nOne approach by C. van den Berg et al. focuses on the use of impregnated particles for in situ recovery of phenol from Pseudomonas putida fermentations using ionic liquids. Further development led to the use of high capacity polysulfone capsules. These capsules are basically hollow particles surrounded by a membrane. The interior is completely filled with extractant and thus increases the impregnation capacity as compared to classical SIRs.\nA completely new approach of using SIRs for the separation or purification of biotechnological products such as proteins is based on the concept of impregnating porous particles with aqueous polymer solutions developed by B. Burghoff. These so-called Tunable Aqueous Polymer-Phase Impregnated Resins (TAPPIR) enhance aqueous two-phase extraction (ATPE) by applying the SIR technology. During classical aqueous two-phase extraction, biotechnological components such as proteins are extracted from aqueous solutions by using a second aqueous phase. This second aqueous phase contains e.g. polyethylene glycol (PEG). On the one hand, a low density difference and low interfacial tension between the two aqueous phases facilitate comparatively fast mass transfer between the phases. On the other hand, PEG appears to stabilize the protein molecules, which results in a comparatively low protein denaturation during the extraction. However, a significant drawback of ATPE is the persistent emulsification, which makes phase separation a challenge. The idea behind TAPPIR is to use the advantages posed by SIRs, namely low extractant loss due to immobilization in the pores and less emulsification than in liquid-liquid extraction. This way, the drawbacks of ATPE could be remedied. The setup would consist of a packed column or a fluidized bed rather than liquid-liquid extraction equipment with additional phase separation steps. 
Nonetheless, as yet only first feasibility studies are under way to prove the concept. A drawback of this method is the non-continuous working mode. The packed column is run similarly to a chromatographic column.", "Only recently have other extraction applications also been investigated, e.g. the large-scale recovery of apolar organics on offshore oil platforms using the so-called Macro-Porous Polymer Extraction (MPPE) Technology. In such an application, where the SIR particles are contained in a packed bed, flow rates from 0.5 m³ h⁻¹ upward without maximum flow restrictions can apparently be treated cost-competitively with air stripping/activated carbon, steam stripping and bio-treatment systems, according to the technology developer. Additional investigations, mostly done in an academic environment, include polar organics like amino-alcohols, organic acids, amino acids, flavonoids, and aldehydes on a bench or pilot scale. Also, the application of SIRs for the separation of more polar solutes, such as for instance ethers and phenols, has been investigated in the group of A.B. de Haan.", "Mostly, SIRs have been investigated and used for the recovery of heavy metals. Applications include the removal of cadmium, vanadium, copper, chromium, iridium, etc.", "Figure 1 to the right explains the basic principle, in which the organic extractant E is contained inside the pores of a porous particle. The solute S, which is initially dissolved in the aqueous phase surrounding the SIR particle, physically dissolves in the organic extractant phase during the extraction process. Furthermore, the solute S can react with the extractant to form a complex ES. This complexation of the solute with the extractant shifts the overall extraction equilibrium further towards the organic phase. This way, the extraction of the solute is enhanced (a small numerical sketch of this enhancement is given below).\nWhile during conventional liquid-liquid extraction the solvent and the extractant have to be dispersed, in a SIR setup the dispersion is already achieved by the impregnated particles. This also avoids an additional phase separation step, which would be necessary after the emulsification occurring in liquid-liquid extraction. In order to elucidate the effect of emulsification, Figure 2 (to the left) compares the two systems of an extractant in liquid-liquid equilibrium with water, left, and SIR particles in equilibrium with water, right. The figure shows that no emulsification occurs in the SIR system, whereas the liquid-liquid system shows turbidity implying emulsification. Also, the impregnation step decreases the solvent loss into the aqueous phase compared to liquid-liquid extraction. This decrease of extractant loss is attributed to physical sorption of the extractant on the particle surface, which means that the extractant inside the pores does not entirely behave as a bulk liquid. Depending on the pore size of the particles used, capillary forces may also play a role in retaining the extractant. Otherwise, van-der-Waals forces, pi-pi interactions or hydrophobic interactions might stabilize the extractant inside the particle pores. However, the possible decrease of extractant loss depends largely on the pore size and the water solubility of the extractant. Nonetheless, SIRs have a significant advantage over e.g. custom-made ion-exchange resins with chemically bonded ligands. SIRs can be reused for different separation tasks by simply rinsing one complexing agent out and re-impregnating them with another, more suitable extractant. 
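A small numerical sketch of the equilibrium enhancement described for Figure 1: if the solute S partitions physically with coefficient P and additionally forms a 1:1 complex ES with the extractant (complexation constant K, free extractant concentration [E]), the overall distribution ratio is approximately D = P·(1 + K·[E]). All constants below are invented for illustration and are not data from the text.

```python
# Illustrative reactive-extraction estimate for a solvent impregnated resin:
# physical partitioning plus 1:1 complexation of the solute with the extractant.
# All constants are invented for illustration.

P = 2.0          # physical partition coefficient of S (org/aq), invented
K = 50.0         # complexation constant of S + E <-> ES in the pores, L/mol, invented
E_free = 0.5     # free extractant concentration in the pores, mol/L, invented

D_physical = P                     # distribution ratio without complexation
D_reactive = P * (1 + K * E_free)  # with complexation pulling S into the pores

print(f"distribution ratio without complexation: {D_physical:.0f}")
print(f"distribution ratio with complexation:    {D_reactive:.0f}")
```

The complexation term is what shifts the equilibrium towards the impregnated phase; rinsing out one complexing agent and re-impregnating with another, as noted above, simply changes K and [E] for the next separation task.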
This way, potentially expensive design and production steps of e.g. affinity resins can be avoided. Finally, by filling the whole volume of the particle pores with an extractant (complexing agent), a higher capacity for solutes can be achieved than with ordinary adsorption or ion exchange resins, where only the surface area is available.\nHowever, there are possible drawbacks of SIR technology, such as leaching of the extractant or clogging of a fixed bed by attrition of the particles. These might be remedied by choosing the proper particle-extractant-system. This implies selecting a suitable extractant with low water solubility, which is sufficiently retained inside the pores, and selecting mechanically stable particles as a solid support for the extractant. Additionally, SIRs can be stabilized by coating them, as shown by D. Muraviev et al. As coating material, A. W. Trochimczuk et al. used polyvinyl alcohol.\nIn order to remove or recover the extracted solute, SIR particles can be regenerated using low pressure steam stripping, which is particularly effective for the recovery of volatile hydrocarbons. However, if the vapor pressure of the extracted solute is too low, or if the complexation between solute and extractant is too strong, other techniques need to be applied, e.g. pH swing.", "The principle of Solvent Impregnated Resins was first shown in 1971 by Abraham Warshawsky. This first venture was aimed at the extraction of metals. Ever since then, SIRs have been mainly used for metal extraction, be it heavy metals or specifically radioactive metals. Much research on SIRs has been done by J.L Cortina and e.g. N. Kabay, K. Jerabek or J. Serarols. However, lately investigations also go towards using SIRs for the separation of natural compounds, and even for separation of biotechnological products.", "Solvent impregnated resins (SIRs) are commercially available (macro)porous resins impregnated with a solvent/an extractant. In this approach, a liquid extractant is contained within the pores of (adsorption) particles. Usually, the extractant is an organic liquid. Its purpose is to extract one or more dissolved components from a surrounding aqueous environment. The basic principle combines adsorption, chromatography and liquid-liquid extraction.", "The main impregnation techniques are wet impregnation and dry impregnation. During wet impregnation, the porous particles are dissolved in the extractant and allowed to soak with the respective fluid. In this approach, the particles are either contacted with a precalculated amount of extractant, which completely soaks into the porous matrix, or the particles are contacted with an excess of extractant. After soaking, the remaining extractant, which is not inside the pores, is evaporated.\nIf the wet method is used, the extractant is dissolved in an additional solvent prior to impregnation. The porous particles are then dispersed in the extractant-solvent solution. After soaking the particles, the excess solvent can either be filtered off or evaporated. In the first case, an extractant-solvent mixture would be retained within the pores. This would be of interest for extractants which would be solid at design conditions when pure. In the second case, only the extractant would remain inside the pores. Figure 3 shows porous particles dispersed in an aqueous solution after wet impregnation. 
The cut-out in Figure 3 shows an enlarged segment of the surface of such an impregnated particle.\nAn additional, albeit less frequently used, technique is the modifier addition method. This technique relies on the use of an extractant/solvent/modifier system. The additional modifier is supposed to enhance the penetration of the extractant into the particle pores. The solvent is subsequently evaporated, leaving extractant and modifier in the particle pores.\nFurthermore, the dynamic column method can be used. The particles are contacted with a solvent until they are completely soaked. This can be done prior to or after packing into the column. The packed bed is then rinsed with the liquid extractant until inlet and outlet concentrations are the same. This approach is particularly interesting when particles are already packed in a column and are to be reused for a SIR application.", "Some have argued that the Rayleigh–Plesset equation described above is unreliable for predicting bubble temperatures and that actual temperatures in sonoluminescing systems can be far higher than 20,000 kelvins. Some research claims to have measured temperatures as high as 100,000 kelvins and speculates that temperatures could reach into the millions of kelvins. Temperatures this high could cause thermonuclear fusion. This possibility is sometimes referred to as bubble fusion and is likened to the implosion design used in the fusion component of thermonuclear weapons.\nExperiments in 2002 and 2005 by R. P. Taleyarkhan using deuterated acetone showed measurements of tritium and neutron output consistent with fusion. However, the papers were considered to be of low quality, and a report on the author's scientific misconduct cast further doubt on them, so the claims lost credibility among the scientific community.\nOn January 27, 2006, researchers at Rensselaer Polytechnic Institute claimed to have produced fusion in sonoluminescence experiments.", "Pistol shrimp (also called snapping shrimp) produce a type of cavitation luminescence from a collapsing bubble caused by quickly snapping their claws. The animal snaps a specialized claw shut to create a cavitation bubble that generates acoustic pressures of up to 80 kPa at a distance of 4 cm from the claw. As it extends out from the claw, the bubble reaches speeds of 60 miles per hour (97 km/h) and releases a sound reaching 218 decibels. The pressure is strong enough to kill small fish. The light produced is of lower intensity than the light produced by typical sonoluminescence and is not visible to the naked eye. The light and heat produced by the bubble may have no direct significance, as it is the shockwave produced by the rapidly collapsing bubble which these shrimp use to stun or kill prey. However, it is the first known instance of an animal producing light by this effect and was whimsically dubbed \"shrimpoluminescence\" upon its discovery in 2001.
It has subsequently been discovered that another group of crustaceans, the mantis shrimp, contains species whose club-like forelimbs can strike so quickly and with such force as to induce sonoluminescent cavitation bubbles upon impact.\nA mechanical device with 3D printed snapper claw at five times the actual size was also reported to emit light in a similar fashion, this bioinspired design was based on the snapping shrimp snapper claw molt shed from an Alpheus formosus, the striped snapping shrimp.", "An unusually exotic hypothesis of sonoluminescence, which has received much popular attention, is the Casimir energy hypothesis suggested by noted physicist Julian Schwinger and more thoroughly considered in a paper by Claudia Eberlein of the University of Sussex. Eberlein's paper suggests that the light in sonoluminescence is generated by the vacuum within the bubble in a process similar to Hawking radiation, the radiation generated at the event horizon of black holes. According to this vacuum energy explanation, since quantum theory holds that vacuum contains virtual particles, the rapidly moving interface between water and gas converts virtual photons into real photons. This is related to the Unruh effect or the Casimir effect. The argument has been made that sonoluminescence releases too large an amount of energy and releases the energy on too short a time scale to be consistent with the vacuum energy explanation, although other credible sources argue the vacuum energy explanation might yet prove to be correct.", "The mechanism of the phenomenon of sonoluminescence is unknown. Hypotheses include: hotspot, bremsstrahlung radiation, collision-induced radiation and corona discharges, nonclassical light, proton tunneling, electrodynamic jets and fractoluminescent jets (now largely discredited due to contrary experimental evidence). \nIn 2002, M. Brenner, S. Hilgenfeldt, and D. Lohse published a 60-page review that contains a detailed explanation of the mechanism. An important factor is that the bubble contains mainly inert noble gas such as argon or xenon (air contains about 1% argon, and the amount dissolved in water is too great; for sonoluminescence to occur, the concentration must be reduced to 20–40% of its equilibrium value) and varying amounts of water vapor. Chemical reactions cause nitrogen and oxygen to be removed from the bubble after about one hundred expansion-collapse cycles. The bubble will then begin to emit light. The light emission of highly compressed noble gas is exploited technologically in the argon flash devices.\nDuring bubble collapse, the inertia of the surrounding water causes high pressure and high temperature, reaching around 10,000 kelvins in the interior of the bubble, causing the ionization of a small fraction of the noble gas present. The amount ionized is small enough for the bubble to remain transparent, allowing volume emission; surface emission would produce more intense light of longer duration, dependent on wavelength, contradicting experimental results. Electrons from ionized atoms interact mainly with neutral atoms, causing thermal bremsstrahlung radiation. As the wave hits a low energy trough, the pressure drops, allowing electrons to recombine with atoms and light emission to cease due to this lack of free electrons. This makes for a 160-picosecond light pulse for argon (even a small drop in temperature causes a large drop in ionization, due to the large ionization energy relative to photon energy). 
This description is simplified from the literature above, which details various steps of differing duration from 15 microseconds (expansion) to 100 picoseconds (emission).\nComputations based on the theory presented in the review produce radiation parameters (intensity and duration time versus wavelength) that match experimental results with errors no larger than expected due to some simplifications (e.g., assuming a uniform temperature in the entire bubble), so it seems the phenomenon of sonoluminescence is at least roughly explained, although some details of the process remain obscure.\nAny discussion of sonoluminescence must include a detailed analysis of metastability. Sonoluminescence in this respect is what is physically termed a bounded phenomenon meaning that the sonoluminescence exists in a bounded region of parameter space for the bubble; a coupled magnetic field being one such parameter. The magnetic aspects of sonoluminescence are very well documented.", "Sonoluminescence can occur when a sound wave of sufficient intensity induces a gaseous cavity within a liquid to collapse quickly. This cavity may take the form of a preexisting bubble or may be generated through a process known as cavitation. Sonoluminescence in the laboratory can be made to be stable so that a single bubble will expand and collapse over and over again in a periodic fashion, emitting a burst of light each time it collapses. For this to occur, a standing acoustic wave is set up within a liquid, and the bubble will sit at a pressure antinode of the standing wave. The frequencies of resonance depend on the shape and size of the container in which the bubble is contained.\nSome facts about sonoluminescence:\n* The light that flashes from the bubbles last between 35 and a few hundred picoseconds long, with peak intensities of the order of .\n* The bubbles are very small when they emit light—about in diameter—depending on the ambient fluid (e.g., water) and the gas content of the bubble (e.g., atmospheric air).\n* SBSL pulses can have very stable periods and positions. In fact, the frequency of light flashes can be more stable than the rated frequency stability of the oscillator making the sound waves driving them. The stability analyses of the bubble, however, show that the bubble itself undergoes significant geometric instabilities due to, for example, the Bjerknes forces and Rayleigh–Taylor instabilities.\n* The addition of a small amount of noble gas (such as helium, argon, or xenon) to the gas in the bubble increases the intensity of the emitted light.\nSpectral measurements have given bubble temperatures in the range from , the exact temperatures depending on experimental conditions including the composition of the liquid and gas. Detection of very high bubble temperatures by spectral methods is limited due to the opacity of liquids to short wavelength light characteristic of very high temperatures.\nA study describes a method of determining temperatures based on the formation of plasmas. Using argon bubbles in sulfuric acid, the data shows the presence of ionized molecular oxygen , sulfur monoxide, and atomic argon populating high-energy excited states, which confirms a hypothesis that the bubbles have a hot plasma core. The ionization and excitation energy of dioxygenyl cations, which they observed, is . 
From this observation, they conclude the core temperatures reach at least —hotter than the surface of the Sun.", "The sonoluminescence effect was first discovered at the University of Cologne in 1934 as a result of work on sonar. Hermann Frenzel and H. Schultes put an ultrasound transducer in a tank of photographic developer fluid. They hoped to speed up the development process. Instead, they noticed tiny dots on the film after developing and realized that the bubbles in the fluid were emitting light with the ultrasound turned on. It was too difficult to analyze the effect in early experiments because of the complex environment of a large number of short-lived bubbles. This phenomenon is now referred to as multi-bubble sonoluminescence (MBSL).\nIn 1960, Peter Jarman from Imperial College of London proposed the most reliable theory of sonoluminescence phenomenon. He concluded that sonoluminescence is basically thermal in origin and that it might possibly arise from microshocks with the collapsing cavities.\nIn 1990, an experimental advance was reported by Gaitan and Crum, who produced stable single-bubble sonoluminescence (SBSL). In SBSL, a single bubble trapped in an acoustic standing wave emits a pulse of light with each compression of the bubble within the standing wave. This technique allowed a more systematic study of the phenomenon because it isolated the complex effects into one stable, predictable bubble. It was realized that the temperature inside the bubble was hot enough to melt steel, as seen in an experiment done in 2012; the temperature inside the bubble as it collapsed reached about . Interest in sonoluminescence was renewed when an inner temperature of such a bubble well above was postulated. This temperature is thus far not conclusively proven; rather, recent experiments indicate temperatures around .", "Sonoluminescence is the emission of light from imploding bubbles in a liquid when excited by sound.\nSonoluminescence was first discovered in 1934 at the University of Cologne. It occurs when a sound wave of sufficient intensity induces a gaseous cavity within a liquid to collapse quickly, emitting a burst of light. The phenomenon can be observed in stable single-bubble sonoluminescence (SBSL) and multi-bubble sonoluminescence (MBSL). In 1960, Peter Jarman proposed that sonoluminescence is thermal in origin and might arise from microshocks within collapsing cavities. Later experiments revealed that the temperature inside the bubble during SBSL could reach up to . The exact mechanism behind sonoluminescence remains unknown, with various hypotheses including hotspot, bremsstrahlung, and collision-induced radiation. Some researchers have even speculated that temperatures in sonoluminescing systems could reach millions of kelvins, potentially causing thermonuclear fusion; this idea, however, has been met with skepticism by other researchers. The phenomenon has also been observed in nature, with the pistol shrimp being the first known instance of an animal producing light through sonoluminescence.", "The dynamics of the motion of the bubble is characterized to a first approximation by the Rayleigh–Plesset equation (named after Lord Rayleigh and Milton Plesset):\nThis is an approximate equation that is derived from the Navier–Stokes equations (written in spherical coordinate system) and describes the motion of the radius of the bubble R as a function of time t. 
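The equation itself does not appear in the text above; for reference, a commonly quoted form of the Rayleigh–Plesset equation, written here as a sketch using the quantities listed in the following sentence (the internal bubble pressure is denoted $p_g$, the far-field pressure $P_\infty$, and the liquid density $\rho$, which is a notational assumption since those symbols are elided in the text), is

$$\rho\left(R\ddot{R} + \tfrac{3}{2}\dot{R}^{2}\right) = p_g - P_\infty - \frac{4\mu\dot{R}}{R} - \frac{2\gamma}{R}.$$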
Here, μ is the viscosity, is the external pressure infinitely far from the bubble, is the internal pressure of the bubble, is the liquid density, and γ is the surface tension. The over-dots represent time derivatives. This equation, though approximate, has been shown to give good estimates on the motion of the bubble under the acoustically driven field except during the final stages of collapse. Both simulation and experimental measurement show that during the critical final stages of collapse, the bubble wall velocity exceeds the speed of sound of the gas inside the bubble. Thus a more detailed analysis of the bubble's motion is needed beyond Rayleigh–Plesset to explore the additional energy focusing that an internally formed shock wave might produce. In the static case, the Rayleigh-Plesset equation simplifies, yielding the Young–Laplace equation.", "In the United States, sperm banks maintain lists or catalogs of donors which provide basic information about the donor such as racial origin, skin color, height, weight, color of eyes, and blood group. Some of these catalogs are available for browsing on the Internet, while others are made available to patients only when they apply to a sperm bank for treatment. Some sperm banks make additional information about each donor available for an additional fee, and others make additional basic information known to children produced from donors when those children reach the age of 18. Some clinics offer \"exclusive donors\" whose sperm is used to produce pregnancies for only one recipient woman. How accurate this is, or can be, is not known, and neither is it known whether the information produced by sperm banks, or by the donors themselves, is true. Many sperm banks will, however, carry out whatever checks they can to verify the information they request, such as checking the identity of the donor and contacting his own doctor to verify medical details.\nIn the United Kingdom, most donors are anonymous at the point of donation and recipients can see only non-identifying information about their donor (height, weight, ethnicity etc.). Donors need to provide identifying information to the clinic and clinics will usually ask the donors doctor to confirm any medical details they have been given. Donors are asked to provide a pen portrait of themselves which is held by the HFEA and can be obtained by the adult conceived from the donation at the age of 18, along with identifying information such as the donors name and last known address. Known donation is permitted and it is not uncommon for family or friends to donate to a recipient couple.\nQualities that potential recipients typically prefer in donors include the donors being tall, college educated, and with a consistently high sperm count.\nA review came to the result that 68% of donors had given information to the clinical staff regarding physical characteristics and education but only 16% had provided additional information such as hereditary aptitudes and temperament or character.", "Sperm banks make information available about the sperm donors whose donations they hold to enable customers to select the donor whose sperm they wish to use. This information is often available by way of an online catalog. Subscription fees to be able to view the sperm donor through California Cryobank, for example, start at $145. 
This cost could potentially be a barrier for many people on limited incomes who may not have discretionary income to spend on sperm donor services.\nA sperm bank will also usually have facilities to help customers to make their choice, and it will be able to advise on the suitability of donors for individual recipients and their partners.\nWhere the recipient has a partner, they may prefer to use sperm from a donor whose physical features are similar to those of their partner. In some cases, the choice of a donor with the correct blood group will be paramount, with particular considerations for the protection of recipients with negative blood groups. If a surrogate is to be used, such as where the customer is not intending to carry the child, considerations about the surrogate's blood group etc. will also need to be taken into account. Similar considerations will apply where both partners in a lesbian couple intend to have a child using the same donor.\nInformation made available by a sperm bank will usually include the race, height, weight, blood group, health and eye color of the donor. Sometimes information about the donor's age, family history and educational achievements will also be given. Some sperm banks make a personal profile of a donor available, and occasionally more information may be purchased about a donor, either in the form of a DVD or in written form. Catalogs usually state whether samples supplied by a particular donor have already given rise to pregnancies, but this is not necessarily a guide to the fecundity of the sperm since a donor may not have been in the program long enough for any pregnancies to have been recorded. The donor's educational qualifications are also taken into account when choosing a donor.\nIf an individual intends to have more than one child, they may wish to have the additional child or children by the same donor. Sperm banks will usually advise whether sufficient stocks of sperm are available from a particular donor for subsequent pregnancies, and they normally have facilities available so that the woman may purchase and store additional vials from that donor on payment of an appropriate fee. These will be stored until required for subsequent pregnancies, or they may be on-sold if they become surplus to the woman's requirements.\nThe catalogue will also state whether samples of sperm are available for ICI, IUI, or IVF use.", "Some sperm banks enable recipients to choose the sex of their child through methods of sperm sorting. Although the methods used do not guarantee 100% success, the chances of being able to select the sex of a child are held to be considerably increased.\nOne of the processes used is the swim-up method, whereby a sperm extender is added to the donor's freshly ejaculated sperm and the test-tube is left to settle. After about half an hour, the lighter sperm, which carry the male-determining Y chromosome, will have swum to the top, leaving the heavier sperm, which carry the X chromosome, at the bottom, thus allowing selection and storage according to sex.\nThe alternative process is the Percoll method, which is similar to the swim-up method but additionally involves centrifuging the sperm in a similar way to the washing of samples produced for IUI inseminations or for IVF purposes.\nSex selection is not permitted in a number of countries, including the UK.", "There is a market for vials of processed sperm, and for various reasons a sperm bank may sell on stocks of vials which it holds, a practice known as onselling.
The costs of screening donors and of storing frozen donor sperm vials are not insignificant, and in practice most sperm banks will try to dispose of all samples from an individual donor. The onselling of sperm therefore enables a sperm bank to maximize the sale and disposal of sperm samples which it has processed. Onselling may also occur where part of, or even the main part of, a particular sperm bank's business is to process and store sperm rather than to use it in fertility treatments, or where a sperm bank is able to collect and store more sperm than it can use within nationally set limits. In the latter case a sperm bank may onsell sperm from a particular donor for use in another jurisdiction after the number of pregnancies achieved from that donor has reached its national maximum.\nSperm banks may supply other sperm banks or a fertility clinic with donor sperm to be used for achieving pregnancies.\nSperm banks may also supply sperm for research or educational purposes.", "In the United States, sperm banks are regulated as Human Cell and Tissue or Cell and Tissue Bank Product (HCT/Ps) establishments by the Food and Drug Administration (FDA), with new guidelines in effect May 25, 2005. Many states also have regulations in addition to those imposed by the FDA, including New York and California.\nIn the European Union a sperm bank must have a license according to the EU Tissue Directive, which came into effect on April 7, 2006. In the United Kingdom, sperm banks are regulated by the Human Fertilisation and Embryology Authority.\nIn countries where sperm banks are allowed to operate, the sperm donor will not usually become the legal father of the children produced from the sperm he donates, but he will be the biological father of such children. In cases of surrogacy involving embryo donation, a form of gestational surrogacy, the commissioning mother or the commissioning parents will not be biologically related to the child and may need to go through an adoption procedure.\nAs with other forms of third party reproduction, the use of donor sperm from a sperm bank gives rise to a number of moral, legal, and ethical issues, including, but not limited to, the sperm donor's right to remain anonymous and the child's right to know their familial background.\nFurthermore, local regulations may reduce the size of the donor pool and, in some cases, exclude entire classes of potential buyers such as single women and lesbian couples by restricting donations to married heterosexual couples only. Some customers therefore choose to buy abroad or on the internet, having the samples delivered at home.", "There have been reports of incidents of abuse regarding forced insemination with sperm samples bought online.\nFurther abuse of sperm banks comes from fertility clinic staff themselves. There have been a number of reports of staff at sperm banks and fertility clinics providing their own sperm in place of donor sperm. There have also been cases in which men have claimed their sperm sample was used by a clinic to inseminate a woman without their consent. This has led to cases of malpractice and, in some states, lobbying to create fertility fraud laws.
These incidents have also led to outcry by people who had been conceived by such incidents, raising concerns of consanguinity, as well as the simple right to know who their siblings and biologic parents are.", "One study conducted by investigators at the University of North Carolina Chapel Hill looked into donated sperm utilization within the United States from 1995 to 2017. Cross-sectional studies recorded that an estimated 170,701 individuals during 1995 used donated sperm, while the 2011 to 2013 cohort had a decreased amount of donated sperm use of 37,385. Most recently, in the 2015 to 2017 cohort, 440,986 individuals were reported to use donated sperm. When looking at 200,197 individuals across 2011–2017, 76% had a 4-year college degree or further while 24% had high school or 2-year college degree. In terms of household percent of poverty, 71% of the sperm bank users were at or above 400% of the household poverty level while only 11% were between 200 and 399% of the household poverty levels. Although the household income levels were not explicit, there seems to be an obvious trend that higher education level attainment (such as finishing college or higher) and being at much higher income level above the household poverty levels were the common tendencies in the sperm bank users.", "A sperm bank, semen bank, or cryobank is a facility or enterprise which purchases, stores and sells human semen. The semen is produced and sold by men who are known as sperm donors. The sperm is purchased by or for other persons for the purpose of achieving a pregnancy or pregnancies other than by a sexual partner. Sperm sold by a sperm donor is known as donor sperm.\nA sperm bank may be a separate entity supplying donor sperm to individuals or to fertility centers or clinics, or it may be a facility which is run by a clinic or other medical establishment mainly or exclusively for their patients or customers.\nA pregnancy may be achieved using donor sperm for insemination with similar outcomes to sexual intercourse. By using sperm from a donor rather than from the sperm recipient's partner, the process is a form of third party reproduction. In the 21st century artificial insemination with donor sperm from a sperm bank is most commonly used for individuals with no male partner, i.e. single women and coupled lesbians.\nA sperm donor must generally meet specific requirements regarding age and screening for medical history. In the United States, sperm banks are regulated as Human Cell and Tissue or Cell and Tissue Bank Product (HCT/Ps) establishments by the Food and Drug Administration. Many states also have regulations in addition to those imposed by the FDA. In the European Union a sperm bank must have a license according to the EU Tissue Directive. In the United Kingdom, sperm banks are regulated by the Human Fertilisation and Embryology Authority.", "The majority of sperm donors who donate their sperm through a sperm bank receive some kind of payment, although this is rarely a significant amount. A review including 29 studies from nine countries came to the result that the amount of money actual donors received for their donation varied from $10 to €70 per donation or sample. The payments vary from the situation in the United Kingdom where donors are only entitled to their expenses in connection with the donation, to the situation with some US sperm banks where a donor receives a set fee for each donation plus an additional amount for each vial stored. 
At one prominent California sperm bank, TSBC, for example, donors receive roughly $50 for each donation (ejaculation) which has acceptable motility/survival rates both at donation and at a test-thaw a couple of days later. Because of the requirement for the two-day celibacy period before donation, and geographical factors which usually require the donor to travel, it is not a viable way to earn a significant income, and it is far less lucrative than selling human eggs. Some private donors may seek remuneration, although others donate for altruistic reasons. According to the EU Tissue Directive, donors in the EU may only receive compensation, which is strictly limited to making good the expenses and inconveniences related to the donation.", "Based on the statistics presented in earlier discussions, there is controversy with regard to a perceived lack of diversity within the donor sperm pool of many sperm banks. This includes, but is not limited to, height requirements implemented by some sperm banks. As a result, it is alleged that potential sperm recipients often encounter very limited sperm donor pool options. Lack of diversity results in very limited choices, especially among ethnic minorities within the United States. Whenever an individual chooses to specify their preferred donor background, the number of available options (sperm donors that meet the particular individual's criteria) can dwindle down to the low single digits. Scott Brown from California Cryobank admitted: \"We don't get as many minority applicants as we [would] like.\" Even after numerous attempts to reach out to various ethnic communities, the response can be nearly nonexistent.\nAt California Cryobank, Brown states that only about one applicant out of 100 ultimately becomes a sperm donor, while Ottey from Fairfax Cryobank puts the figure at about one out of 200. In addition, California Cryobank's locations are in Los Angeles and Los Altos, California; mid-Manhattan; and Cambridge, Massachusetts. These locations are known to have populations of higher socioeconomic status who are more likely to be able to afford the services. Moreover, one of the requirements is that the potential sperm donor live near the sperm bank in order to provide samples once or twice a month for a term of at least six months. This could create barriers for populations who are at a socioeconomic disadvantage and do not have their own transportation, often having to rely on multiple forms of public transportation to reach certain places. This factor could cause a significant decrease in the sperm donor pool and less diverse availability for sperm recipients.\nSome controversy stems from the fact that donors father children for others, in the majority of cases for single people or same-sex couples, but usually take no part in the upbringing of such children. The issue of sperm banks providing fertility services to single women and coupled lesbians so that they can have their own biological children by a donor is itself often controversial in some jurisdictions, but in many countries where sperm banks operate this group forms the main body of recipients. Donors usually do not have a say in who may be a recipient of their sperm.\nAnother controversy centers around the use of sperm posthumously, or after the death of the sperm donor, as pioneered by California Cryobank.
Within the United States, courts have differed on whether a child conceived after the father's death is eligible for survivors benefits. Under California law, there was one court case (Vernoff v. Astrue) in which the mother's child (conceived after the father's death) was held not to be eligible for survivors benefits. Arizona courts, however, took a different approach, ruling that children born after the father's death are eligible for survivors benefits. There have been numerous other reports of similar situations across different states in the United States and even in the United Kingdom. Canada, France, Germany, and Sweden do not permit the posthumous retrieval or use of sperm.", "After the sample has been processed for cryoprotection, the sperm is stored in small vials or straws holding between 0.4 and 1.0 ml of sperm and then cryogenically preserved in liquid nitrogen tanks. Two approaches for sperm cryopreservation are conventional freezing and vitrification. The conventional technique consists of a slow freezing process and is the one most commonly used for assisted reproduction technologies (ART), whereas the vitrification method is a faster approach for sperm cryopreservation that converts the sample from a liquid to a solid, glass-like state. The disadvantages of this latter process are an increased risk of contamination from the liquid nitrogen and the smaller sperm sample sizes needed to achieve the high cooling rate.\nIt has been proposed that there should be an upper limit on how long frozen sperm can be stored; however, a baby has been conceived in the United Kingdom using sperm frozen for 21 years, and andrology experts believe sperm can be frozen indefinitely. The UK government places an upper limit for storage of 55 years.\nFollowing the necessary quarantine period, which is usually six months, a sample will be thawed. To thaw a sperm sample, the vial or straw is left at room temperature for approximately 30 minutes, and then brought to body temperature by holding it in the hands of the person performing the insemination. Once a sperm sample is thawed, it cannot be frozen again, and should be used to artificially inseminate a recipient or used for another assisted reproduction technology (ART) treatment immediately.\nFreeze-drying is another promising alternative for storing semen, owing to its accessibility with a regular refrigerator. This method has been successfully replicated in animal species. However, DNA can be damaged in this process; therefore, further research is warranted to determine factors that can affect the efficacy of this method.", "After collection, sperm must be processed for storage. According to the Sperm Bank of California, sperm banks and clinics can use either an unwashed or a washed method to process sperm samples. The washed method involves removing unwanted particles and adding buffer solutions to preserve viable sperm. However, this approach can contribute to further stress on the sperm cells and decrease the survival of sperm after freezing. The unwashed approach allows for more flexibility in freezing the semen sample and increases the number of sperm that survive. One sample can produce 1–20 vials or straws, depending on the quantity of the ejaculate and whether the sample is washed or unwashed.
Unwashed samples are used for intracervical insemination (ICI) treatments, and washed samples are used in intrauterine insemination (IUI) and for in-vitro fertilization (IVF) or assisted reproduction technologies (ART) procedures.\nA cryoprotectant semen extender is added if the semen sample is to be placed in the freezer for storage. Semen extenders play a key role in protecting the sperm sample from freezing and osmotic shock, oxidative stress, and cell injury due to the formation of ice crystals during frozen storage. The collected semen is preserved by stabilizing the properties of the sperm cells, such as membrane integrity, motility, and DNA integrity, in order to create a sustainably viable environment. There are two common forms of medium for sperm cryopreservation, one containing egg yolk from hens and glycerol, and the other containing just glycerol. One study compared media supplemented with egg yolk and media supplemented with soy lecithin, finding no significant difference in sperm motility, morphology, chromatin decondensation, or binding between the two, indicating that soy lecithin may be a viable alternative to egg yolk.", "A sperm donor will usually be required to enter into a contract with a sperm bank to supply their semen, typically for a period of six to twenty-four months depending on the number of pregnancies which the sperm bank intends to produce from the donor. If a sperm bank has access to world markets, e.g. by direct sales or sales to clinics outside its own jurisdiction, a man may donate for a longer period than two years, as the risk of consanguinity is reduced (although local laws vary widely). Some sperm banks with access to world markets impose their own rules on the number of pregnancies which can be achieved in a given regional area or a state or country, and these sperm banks may permit donors to donate for four or five years, or even longer.\nThe contract may also specify the place and hours for donation, a requirement to notify the sperm bank in the case of acquiring a sexual infection, and the requirement not to have intercourse or to masturbate for a period of usually 2–3 days before making a donation.\nThe contract may also describe the types of treatment for which the donated sperm may be used, such as artificial insemination and IVF, and whether the donor's sperm may be used in surrogacy arrangements. It may also stipulate whether the sperm may be used for research or training purposes. In certain cases, a sperm donor may specify the maximum number of offspring or families which may be produced from the donor's sperm. A family may be defined as a couple who may each bear children from the same donor. The contract may also require consent if the donor's samples are to be exported. In the United Kingdom, for example, the maximum number of families which a donor is permitted to help create is ten, but a sperm bank or fertility center in the UK may export sperm to other fertility centers so that this may be used to produce more pregnancies abroad. Where this happens, consent must be provided by the donor. Faced with a growing demand for donor sperm, sperm banks may try to maximize the use of a donor whilst still reducing the risk of consanguinity. In jurisdictions with a national register of sperm donors or a national regulatory body, a sperm donor may be required to fill in a separate form of consent which will be registered with the regulatory authority.
In the United Kingdom this body is the HFEA.\nA sperm donor generally produces and collects sperm at a sperm bank or clinic by masturbation in a private room or cabin, known as a mens production room (UK), donor cabin' (DK) or a masturbatorium (US). Many of these facilities contain pornography such as videos/DVD, magazines, and/or photographs which may assist the donor in becoming aroused in order to facilitate production of the ejaculate, also known as the \"semen sample\" but the increasing usage of porn in the U.S. has dulled many men to its effects. Often, using any type of personal lubricant, saliva, oil or anything else to lubricate and stimulate the genitals is prohibited as it can contaminate the semen sample and have negative impacts on the quality and health of sperm. In some circumstances, it may also be possible for semen from donors to be collected during sexual intercourse with the use of a collection condom which results in higher sperm counts.", "A sperm bank will aim to provide donor sperm which is safe by checking and screening donors and of their semen. A sperm donor must generally meet specific requirements regarding age and medical history. Requirements for sperm donors are strictly enforced, as in a study of 24,040 potential sperm donors, only 5620, or 23.38% were eligible to donate their sperm.\nSperm banks typically screen potential donors for a range of diseases and disorders, including genetic diseases, chromosomal abnormalities and sexually transmitted infections that may be transmitted through sperm. The screening procedure generally also includes a quarantine period, in which the samples are frozen and stored for at least six months after which the donor will be re-tested for the STIs. This is to ensure no new infections have been acquired or have developed during the period of donation. Providing the result is negative, the sperm samples can be released from quarantine and used in treatments. Common reasons for sperm rejection include suboptimal semen quality and STDs. Chromosomal abnormalities are also a cause for semen rejection, but are less common. Children conceived through sperm donation have a birth defect rate of almost a fifth compared with the general population.\nA sperm bank takes a number of steps to ensure the health and quality of the sperm which it supplies and it will inform customers of the checks which it undertakes, providing relevant information about individual donors. A sperm bank will usually guarantee the quality and number of motile sperm available in a sample after thawing. They will try to select men as donors who are particularly fertile and whose sperm will survive the freezing and thawing process. Samples are often sold as containing a particular number of motile sperm per milliliter, and different types of samples may be sold by a sperm bank for differing types of use, e.g. ICI or IUI.\nThe sperm will be checked to ensure its fecundity and also to ensure that motile sperm will survive the freezing process. If a man is accepted onto the sperm banks program as a sperm donor, his sperm will be constantly monitored, the donor will be regularly checked for infectious diseases, and samples of his blood will be taken at regular intervals. A sperm bank may provide a donor with dietary supplements containing herbal or mineral substances such as maca, zinc, vitamin E and arginine which are designed to improve the quality and quantity of the donors semen, as well as reducing the refractory time (i.e. the time between viable ejaculations). 
All sperm is frozen in straws or vials and stored for as long as the sperm donor may and can maintain it.\nDonors are subject to tests for infectious diseases such as human immunoviruses HIV (HIV-1 and HIV-2), human T-cell lymphotropic viruses (HTLV-1 and HTLV-2), syphilis, chlamydia, gonorrhea, hepatitis B virus, hepatitis C virus, cytomegalovirus (CMV), Trypanosoma cruzi and malaria as well as hereditary diseases such as cystic fibrosis, Sickle cell anemia, Familial Mediterranean fever, Gauchers disease, thalassaemia, Tay–Sachs disease, Canavans disease, familial dysautonomia, congenital adrenal hyperplasia, carnitine transporter deficiency and Karyotyping 46XY. Karyotyping is not a requirement in either EU or the US but some sperm banks choose to test donors as an extra service to the customer.\nA sperm donor may also be required to produce their medical records and those of their family, often for several generations. A sperm sample is usually tested micro-biologically at the sperm bank before it is prepared for freezing and subsequent use. A sperm donor's blood group may also be registered to ensure compatibility with the recipient.\nSome sperm banks may disallow sexually active gay men from donating sperm due to the population's increased risk of HIV and hepatitis B. Modern sperm banks have also been known to screen out potential donors based on genetic conditions and family medical history.", "The finding of a potential sperm donor and motivating them to actually donate sperm is typically called recruitment. A sperm bank can recruit donors by advertising, often in colleges, in local newspapers, and also on the internet.\nA donor must be a fit and healthy male, normally between 18 and 45 years of age, and willing to undergo frequent and rigorous testing. The donor must also be willing to donate their sperm so that it can be used to impregnate people who are unrelated to and unknown by them. Some sperm banks require two screenings and a laboratory screening before a donor is eligible. The donor must agree to relinquish all legal rights to all children which result from their donations. The donor must produce their sperm at the sperm bank thus enabling the identity of the donor, once proven, always to be ascertained, and also enabling fresh samples of sperm to be produced for immediate processing. Some sperm banks have been accused of heightism due to minimum height requirements.", "Subject to any regulations restricting who can obtain donor sperm, donor sperm is available to all people who, for whatever reason, wish to have a child. These regulations vary significantly across jurisdictions, and some countries do not have any regulations. When an individual finds that they are barred from receiving donor sperm within their jurisdiction, they may travel to another jurisdiction to obtain sperm. Regulations change from time to time. In most jurisdictions, donor sperm is available to an individual if their partner is infertile or where they have a genetic disorder. However, the categories of individuals who may obtain donor sperm is expanding, with its availability to single persons and to same-sex couples becoming more common, and some sperm banks supply fertility centers which specialize in the treatment of such people.\nFrozen vials of donor sperm may be shipped by the sperm bank to a recipient's home for self-insemination, or they may be shipped to a fertility clinic or physician for use in fertility treatments. 
The sperm bank will rely on the recipient woman or medical practitioner to report the outcome of any use of the sperm to the sperm bank. This enables a sperm bank to adhere to any national limits of pregnancy numbers. The sperm bank may also impose its own worldwide limit on numbers.\nSperm is introduced into the recipient by means of artificial insemination or by IVF. The most common technique is conventional artificial insemination which consists of a catheter to put the sperm into the vagina where it is deposited at the entrance to the cervix. In biological terms, this is much the same process as when semen is ejaculated from the penis during sexual intercourse. Owing to its simplicity, this method of insemination is commonly used for home and self inseminations principally by single women and lesbians. Other types of uses include intrauterine insemination (IUI) and deep intrauterine artificial insemination where washed sperm must be used. These methods of insemination are most commonly used in fertility centers and clinics mainly because they produce better pregnancy rates than ICI insemination especially where the woman has no underlying fertility issues.\nMen may also store their own sperm at a sperm bank for future use particularly where they anticipate traveling to a war zone or having to undergo chemotherapy which might damage the testes.\nSperm from a sperm donor may also be used in surrogacy arrangements and for creating embryos for embryo donation. Donor sperm may be supplied by the sperm bank directly to the recipient to enable a woman to perform her own artificial insemination which can be carried out using a needleless syringe or a cervical cap conception device. The cervical cap conception device allows the donor semen to be held in place close to the cervix for between six and eight hours to allow fertilization to take place. Alternatively, donor sperm can be supplied by a sperm bank through a registered medical practitioner who will perform an appropriate method of insemination or IVF treatment using the donor sperm in order for the woman to become pregnant.", "The first sperm banks began as early as 1964 in Iowa, USA and Tokyo, Japan and were established for a medical therapeutic approach to support individuals who were infertile. As a result, over 1 million babies were born within 40 years.\nSperm banks provide the opportunity for individuals to have a child who otherwise would not be able to conceive naturally. This includes, but is not limited to, single women, same-sexed couples, and couples where one partner is infertile.\nWhere a sperm bank provides fertility services directly to a recipient woman, it may employ different methods of fertilization using donor sperm in order to optimize the chances of a pregnancy. Sperm banks do not provide a cure for infertility in individuals who produce non-viable sperm. Nevertheless, the increasing range of services available through sperm banks enables people to have choices over challenges with reproduction.\nIndividuals may choose an anonymous donor who will not be a part of family life, or they may choose known donors who may be contacted later in life by the donor children. People may choose to use a surrogate to bear their children, using eggs provided by the person and sperm from a donor. Sperm banks often provide services which enable an individual to have subsequent pregnancies by the same donor, but equally, people may choose to have children by a number of different donors. 
Sperm banks sometimes enable an individual to choose the sex of their child, enabling even greater control over the way families are planned. Sperm banks increasingly adopt a less formal approach to the provision of their services thereby enabling people to take a relaxed approach to their own individual requirements.\nMen who donate semen through a sperm bank provide an opportunity for others who cannot have children on their own. Sperm donors may or may not have legal obligations or responsibilities to the child conceived through this route. Whether a donor is anonymous or not, this factor is important in allowing sperm banks to recruit sperm donors and to use their sperm to produce whatever number of pregnancies from each donor as are permitted where they operate, or alternatively, whatever number they decide.\nIn many parts of the world sperm banks are not allowed to be established or to operate. Where sperm banks are allowed to operate they are often controlled by local legislation which is primarily intended to protect the unborn child, but which may also provide a compromise between the conflicting views which surround their operation. A particular example of this is the control which is often placed on the number of children which a single donor may father and which may be designed to protect against consanguinity. However, such legislation usually cannot prevent a sperm bank from supplying donor sperm outside the jurisdiction in which it operates, and neither can it prevent sperm donors from donating elsewhere during their lives. There is an acute shortage of sperm donors in many parts of the world and there is obvious pressure from many quarters for donor sperm from those willing and able to provide it to be made available as safely and as freely as possible.", "Some antiferromagnetic materials exhibit a non-zero magnetic moment at a temperature near absolute zero. This effect is ascribed to spin canting, a phenomenon through which spins are tilted by a small angle about their axis rather than being exactly co-parallel. \nSpin canting is due to two factors contrasting each other: isotropic exchange would align the spins exactly antiparallel, while antisymmetric exchange arising from relativistic effects (spin–orbit coupling) would align the spins at 90° to each other. The net result is a small perturbation, the extent of which depends on the relative strength of these effects.\nThis effect is observable in many materials such as hematite.", "A spin chain is a type of model in statistical physics. Spin chains were originally formulated to model magnetic systems, which typically consist of particles with magnetic spin located at fixed sites on a lattice. A prototypical example is the quantum Heisenberg model. Interactions between the sites are modelled by operators which act on two different sites, often neighboring sites.\nThey can be seen as a quantum version of statistical lattice models, such as the Ising model, in the sense that the parameter describing the spin at each site is promoted from a variable taking values in a discrete set (typically , representing spin up and spin down) to a variable taking values in a vector space (typically the spin-1/2 or two-dimensional representation of ).", "The prototypical example, and a particular example of the Heisenberg spin chain, is known as the spin 1/2 Heisenberg XXX model. \nThe graph is the periodic 1-dimensional lattice with -sites. 
Explicitly, this is given by , and the elements of being with identified with .\nThe associated Lie algebra is .\nAt site there is an associated Hilbert space which is isomorphic to the two dimensional representation of (and therefore further isomorphic to ). The Hilbert space of system configurations is , of dimension .\nGiven an operator on the two-dimensional representation of , denote by the operator on which acts as on and as identity on the other with . Explicitly, it can be written \nwhere the 1 denotes identity.\nThe Hamiltonian is essentially, up to an affine transformation,\nwith implied summation over index , and where are the Pauli matrices. The Hamiltonian has symmetry under the action of the three total spin operators .\nThe central problem is then to determine the spectrum (eigenvalues and eigenvectors in ) of the Hamiltonian. This is solved by the method of an Algebraic Bethe ansatz, discovered by Hans Bethe and further explored by Ludwig Faddeev.", "The lattice is described by a graph with vertex set and edge set .\nThe model has an associated Lie algebra . More generally, this Lie algebra can be taken to be any complex, finite-dimensional semi-simple Lie algebra . More generally still it can be taken to be an arbitrary Lie algebra.\nEach vertex has an associated representation of the Lie algebra , labelled . This is a quantum generalization\nof statistical lattice models, where each vertex has an associated spin variable.\nThe Hilbert space for the whole system, which could be called the configuration space, is the tensor product of the representation spaces at each vertex: \nA Hamiltonian is then an operator on the Hilbert space. In the theory of spin chains, there are possibly many Hamiltonians which mutually commute. This allows the operators to be simultaneously diagonalized. \nThere is a notion of exact solvability for spin chains, often stated as determining the spectrum of the model. In precise terms, this means determining the simultaneous eigenvectors of the Hilbert space for the Hamiltonians of the system as well as the eigenvalues of each eigenvector with respect to each Hamiltonian.", "The prototypical example of a spin chain is the Heisenberg model, described by Werner Heisenberg in 1928. This models a one-dimensional lattice of fixed particles with spin 1/2. A simple version (the antiferromagnetic XXX model) was solved, that is, the spectrum of the Hamiltonian of the Heisenberg model was determined, by Hans Bethe using the Bethe ansatz. \nNow the term Bethe ansatz is used generally to refer to many ansatzes used to solve exactly solvable problems in spin chain theory such as for the other variations of the Heisenberg model (XXZ, XYZ), and even in statistical lattice theory, such as for the six-vertex model. \nAnother spin chain with physical applications is the Hubbard model, introduced by John Hubbard in 1963.\nThis model was shown to be exactly solvable by Elliott Lieb and Fa-Yueh Wu in 1968.\nAnother example of (a class of) spin chains is the Gaudin model, described and solved by Michel Gaudin in 1976", "In addition to unusual experimental properties, spin glasses are the subject of extensive theoretical and computational investigations. A substantial part of early theoretical work on spin glasses dealt with a form of mean-field theory based on a set of replicas of the partition function of the system.\nAn important, exactly solvable model of a spin glass was introduced by David Sherrington and Scott Kirkpatrick in 1975. 
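Before turning to spin glasses, the spin-1/2 Heisenberg XXX chain described above can be made concrete with a small numerical sketch (not from the source; the chain length N, the coupling J and the overall sign convention are illustrative assumptions, and this brute-force dense-matrix approach only works for very short chains):

```python
# Minimal sketch: spin-1/2 Heisenberg XXX chain with periodic boundary conditions,
# built from tensor products of Pauli matrices and diagonalized exactly.
# N, J and the sign choice are illustrative assumptions, not values from the text.
import numpy as np

N = 8          # number of lattice sites (kept small: the matrix is 2^N x 2^N)
J = 1.0        # exchange coupling; with this sign choice J > 0 is antiferromagnetic

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
identity = np.eye(2, dtype=complex)

def site_operator(op: np.ndarray, j: int) -> np.ndarray:
    """Embed a single-site operator at site j, acting as the identity on every other site."""
    result = np.array([[1.0 + 0j]])
    for k in range(N):
        result = np.kron(result, op if k == j else identity)
    return result

# H = J * sum_j (sx_j sx_{j+1} + sy_j sy_{j+1} + sz_j sz_{j+1}), with site N identified with site 0
H = np.zeros((2**N, 2**N), dtype=complex)
for j in range(N):
    jp = (j + 1) % N
    for op in (sx, sy, sz):
        H += J * site_operator(op, j) @ site_operator(op, jp)

energies = np.linalg.eigvalsh(H)          # full spectrum of the Hamiltonian
print("ground-state energy:", energies[0])
```

For longer chains the exponential growth of the Hilbert space makes this brute-force diagonalization infeasible, which is one reason the Bethe-ansatz methods mentioned above are so valuable.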
The Sherrington–Kirkpatrick (SK) model is an Ising model with long-range frustrated ferromagnetic as well as antiferromagnetic couplings. It corresponds to a mean-field approximation of spin glasses describing the slow dynamics of the magnetization and the complex non-ergodic equilibrium state.\nUnlike the Edwards–Anderson (EA) model, although only two-spin interactions are considered in this system, the range of each interaction can be potentially infinite (of the order of the size of the lattice). Therefore, we see that any two spins can be linked with a ferromagnetic or an antiferromagnetic bond, and the distribution of these is given exactly as in the case of the Edwards–Anderson model. The Hamiltonian for the SK model is very similar to that of the EA model:\nwhere the symbols have the same meanings as in the EA model. The equilibrium solution of the model, after some initial attempts by Sherrington, Kirkpatrick and others, was found by Giorgio Parisi in 1979 with the replica method. The subsequent work of interpretation of the Parisi solution, by M. Mezard, G. Parisi, M.A. Virasoro and many others, revealed the complex nature of a glassy low-temperature phase characterized by ergodicity breaking, ultrametricity and non-self-averageness. Further developments led to the creation of the cavity method, which allowed study of the low-temperature phase without replicas. A rigorous proof of the Parisi solution has been provided in the work of Francesco Guerra and Michel Talagrand.\nThe formalism of replica mean-field theory has also been applied in the study of neural networks, where it has enabled calculations of properties such as the storage capacity of simple neural network architectures without requiring a training algorithm (such as backpropagation) to be designed or implemented.\nMore realistic spin glass models with short-range frustrated interactions and disorder, like the Gaussian model where the couplings between neighboring spins follow a Gaussian distribution, have been studied extensively as well, especially using Monte Carlo simulations. These models display spin glass phases bordered by sharp phase transitions.\nBesides its relevance in condensed matter physics, spin glass theory has acquired a strongly interdisciplinary character, with applications to neural network theory, computer science, theoretical biology, econophysics etc.", "It is the time dependence which distinguishes spin glasses from other magnetic systems.\nAbove the spin glass transition temperature, T, the spin glass exhibits typical magnetic behaviour (such as paramagnetism).\nIf a magnetic field is applied as the sample is cooled to the transition temperature, magnetization of the sample increases as described by the Curie law. Upon reaching T, the sample becomes a spin glass, and further cooling results in little change in magnetization. This is referred to as the field-cooled magnetization.\nWhen the external magnetic field is removed, the magnetization of the spin glass falls rapidly to a lower value known as the remanent magnetization.\nMagnetization then decays slowly as it approaches zero (or some small fraction of the original value; this remains unknown). This decay is non-exponential, and no simple function can fit the curve of magnetization versus time adequately. This slow decay is particular to spin glasses.
Experimental measurements on the order of days have shown continual changes above the noise level of instrumentation.\nSpin glasses differ from ferromagnetic materials in that, after the external magnetic field is removed from a ferromagnetic substance, the magnetization remains indefinitely at the remanent value. Paramagnetic materials differ from spin glasses in that, after the external magnetic field is removed, the magnetization rapidly falls to zero with no remanent magnetization; the decay is rapid and exponential.\nIf the sample is cooled below T in the absence of an external magnetic field, and a magnetic field is applied after the transition to the spin glass phase, there is a rapid initial increase to a value called the zero-field-cooled magnetization. A slow upward drift then occurs toward the field-cooled magnetization.\nSurprisingly, the sum of the two complicated functions of time (the zero-field-cooled and remanent magnetizations) is a constant, namely the field-cooled value, and thus both share identical functional forms with time, at least in the limit of very small external fields.", "This is similar to the Ising model. In this model, we have spins arranged on a d-dimensional lattice with only nearest-neighbor interactions. This model can be solved exactly for the critical temperatures, and a glassy phase is observed to exist at low temperatures. The Hamiltonian for this spin system is a sum of coupling terms over nearest-neighbor pairs, in which each spin operator is the Pauli spin matrix for the spin-half particle at its lattice point. A negative value of the coupling between two points denotes an antiferromagnetic-type interaction between the spins at those points. The sum runs over all nearest-neighbor positions on a lattice of any dimension. The variables representing the magnetic nature of the spin-spin interactions are called bond or link variables.\nIn order to determine the partition function for this system, one needs to average the free energy over all possible values of the bond variables. The distribution of the bond values is taken to be a Gaussian with a specified mean and variance.\nSolving for the free energy using the replica method, one finds that below a certain temperature a new magnetic phase, called the spin glass phase (or glassy phase), exists; it is characterized by a vanishing magnetization along with a non-vanishing value of the two-point correlation function between spins at the same lattice point in two different replicas. The order parameter for the ferromagnetic to spin glass phase transition is therefore this replica correlation, and that for the paramagnetic to spin glass transition is the same quantity. Hence the new set of order parameters describing the three magnetic phases consists of both the magnetization and the replica correlation.\nUnder the assumption of replica symmetry, the mean-field free energy can be written in closed form.", "In condensed matter physics, a spin glass is a magnetic state characterized by randomness as well as cooperative behavior in the freezing of spins at a temperature called the "freezing temperature" T.
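For reference, the Edwards–Anderson Hamiltonian, the Gaussian bond distribution and the replica order parameter described above are conventionally written as follows (standard notation, with the spins represented by Ising variables <math>S_i=\pm 1</math>, i.e. the z-components of the spin-half operators mentioned in the text):
:<math>H = -\sum_{\langle ij\rangle} J_{ij}\,S_i S_j ,\qquad P(J_{ij}) = \frac{1}{\sqrt{2\pi\Delta^{2}}}\exp\!\left[-\frac{(J_{ij}-J_{0})^{2}}{2\Delta^{2}}\right],</math>
:<math>q^{\alpha\beta} = \frac{1}{N}\sum_{i}\langle S_i^{\alpha} S_i^{\beta}\rangle ,\qquad \alpha\neq\beta,</math>
where the first sum runs over nearest-neighbour pairs, <math>J_{0}</math> and <math>\Delta^{2}</math> are the mean and variance of the bond distribution, and <math>\alpha,\beta</math> label replicas.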
In ferromagnetic solids, component atoms' magnetic spins all align in the same direction. Spin glass when contrasted with a ferromagnet is defined as \"disordered\" magnetic state in which spins are aligned randomly or without a regular pattern and the couplings too are random.\nThe term \"glass\" comes from an analogy between the magnetic disorder in a spin glass and the positional disorder of a conventional, chemical glass, e.g., a window glass. In window glass or any amorphous solid the atomic bond structure is highly irregular; in contrast, a crystal has a uniform pattern of atomic bonds. In ferromagnetic solids, magnetic spins all align in the same direction; this is analogous to a crystal's lattice-based structure.\nThe individual atomic bonds in a spin glass are a mixture of roughly equal numbers of ferromagnetic bonds (where neighbors have the same orientation) and antiferromagnetic bonds (where neighbors have exactly the opposite orientation: north and south poles are flipped 180 degrees). These patterns of aligned and misaligned atomic magnets create what are known as frustrated interactions distortions in the geometry of atomic bonds compared to what would be seen in a regular, fully aligned solid. They may also create situations where more than one geometric arrangement of atoms is stable.\nSpin glasses and the complex internal structures that arise within them are termed \"metastable\" because they are \"stuck\" in stable configurations other than the lowest-energy configuration (which would be aligned and ferromagnetic). The mathematical complexity of these structures is difficult but fruitful to study experimentally or in simulations; with applications to physics, chemistry, materials science and artificial neural networks in computer science.", "A thermodynamic system is ergodic when, given any (equilibrium) instance of the system, it eventually visits every other possible (equilibrium) state (of the same energy). One characteristic of spin glass systems is that, below the freezing temperature , instances are trapped in a \"non-ergodic\" set of states: the system may fluctuate between several states, but cannot transition to other states of equivalent energy. Intuitively, one can say that the system cannot escape from deep minima of the hierarchically disordered energy landscape; the distances between minima are given by an ultrametric, with tall energy barriers between minima. The participation ratio counts the number of states that are accessible from a given instance, that is, the number of states that participate in the ground state. The ergodic aspect of spin glass was instrumental in the awarding of half the 2021 Nobel Prize in Physics to Giorgio Parisi.\nFor physical systems, such as dilute manganese in copper, the freezing temperature is typically as low as 30 kelvins (−240 °C), and so the spin-glass magnetism appears to be practically without applications in daily life. The non-ergodic states and rugged energy landscapes are, however, quite useful in understanding the behavior of certain neural networks, including Hopfield networks, as well as many problems in computer science optimization and genetics.", "Elemental crystalline neodymium is paramagnetic at room temperature and becomes an antiferromagnet with incommensurate order upon cooling below 19.9 K. 
Below this transition temperature it exhibits a complex set of magnetic phases that have long spin relaxation times and spin-glass behavior that does not rely on structural disorder.", "A detailed account of the history of spin glasses from the early 1960s to the late 1980s can be found in a series of popular articles by Philip W. Anderson in Physics Today.", "This is also called the \"p-spin model\". The infinite-range model is a generalization of the Sherrington–Kirkpatrick model where we not only consider two spin interactions but -spin interactions, where and is the total number of spins. Unlike the Edwards–Anderson model, similar to the SK model, the interaction range is still infinite. The Hamiltonian for this model is described by:\nwhere have similar meanings as in the EA model. The limit of this model is known as the random energy model. In this limit, it can be seen that the probability of the spin glass existing in a particular state, depends only on the energy of that state and not on the individual spin configurations in it.\nA gaussian distribution of magnetic bonds across the lattice is assumed usually to solve this model. Any other distribution is expected to give the same result, as a consequence of the central limit theorem. The gaussian distribution function, with mean and variance , is given as:\nThe order parameters for this system are given by the magnetization and the two point spin correlation between spins at the same site , in two different replicas, which are the same as for the SK model. This infinite range model can be solved explicitly for the free energy in terms of and , under the assumption of replica symmetry as well as 1-Replica Symmetry Breaking.", "A spin ice is a magnetic substance that does not have a single minimal-energy state. It has magnetic moments (i.e. \"spin\") as elementary degrees of freedom which are subject to frustrated interactions. By their nature, these interactions prevent the moments from exhibiting a periodic pattern in their orientation down to a temperature much below the energy scale set by the said interactions. Spin ices show low-temperature properties, residual entropy in particular, closely related to those of common crystalline water ice. The most prominent compounds with such properties are dysprosium titanate (DyTiO) and holmium titanate (HoTiO). The orientation of the magnetic moments in spin ice resembles the positional organization of hydrogen atoms (more accurately, ionized hydrogen, or protons) in conventional water ice (see figure 1).\nExperiments have found evidence for the existence of deconfined magnetic monopoles in these materials, with properties resembling those of the hypothetical magnetic monopoles postulated to exist in vacuum.", "In 1935, Linus Pauling noted that the hydrogen atoms in water ice would be expected to remain disordered even at absolute zero. That is, even upon cooling to zero temperature, water ice is expected to have residual entropy, i.e., intrinsic randomness. This is due to the fact that the hexagonal crystalline structure of common water ice contains oxygen atoms with four neighboring hydrogen atoms. In ice, for each oxygen atom, two of the neighboring hydrogen atoms are near (forming the traditional HO molecule), and two are further away (being the hydrogen atoms of two neighboring water molecules). 
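Pauling's counting argument can be sketched with standard numbers (assumed here, since they are not written out above): a crystal with <math>N</math> oxygen atoms has <math>2N</math> hydrogen positions, each hydrogen can sit at one of two sites along its O–O axis, and only 6 of the <math>2^{4}=16</math> proton arrangements around a given oxygen satisfy the two-near, two-far rule, so
:<math>W \approx 2^{2N}\left(\tfrac{6}{16}\right)^{N} = \left(\tfrac{3}{2}\right)^{N},\qquad S_{0}=k_{\mathrm B}\ln W = N k_{\mathrm B}\ln\tfrac{3}{2},</math>
about 3.4 J K<sup>−1</sup> mol<sup>−1</sup> per mole of water, close to the measured residual entropy.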
Pauling noted that the number of configurations conforming to this \"two-near, two-far\" ice rule grows exponentially with the system size, and, therefore, that the zero-temperature entropy of ice was expected to be extensive. Pauling's findings were confirmed by specific heat measurements, though pure crystals of water ice are particularly hard to create.\nSpin ices are materials that consist of regular corner-linked tetrahedra of magnetic ions, each of which has a non-zero magnetic moment, often abridged to \"spin\", which must satisfy in their low-energy state a \"two-in, two-out\" rule on each tetrahedron making the crystalline structure (see figure 2). This is highly analogous to the two-near, two far rule in water ice (see figure 1). Just as Pauling showed that the ice rule leads to an extensive entropy in water ice, so does the two-in, two-out rule in the spin ice systems – these exhibit the same residual entropy properties as water ice. Be that as it may, depending on the specific spin ice material, it is generally much easier to create large single crystals of spin ice materials than water ice crystals. Additionally, the ease to induce interaction of the magnetic moments with an external magnetic field in a spin ice system makes the spin ices more suitable than water ice for exploring how the residual entropy can be affected by external influences.\nWhile Philip Anderson had already noted in 1956 the connection between the problem of the frustrated Ising antiferromagnet on a (pyrochlore) lattice of corner-shared tetrahedra and Pauling's water ice problem, real spin ice materials were only discovered forty years later. The first materials identified as spin ices were the pyrochlores DyTiO (dysprosium titanate), HoTiO (holmium titanate). In addition, compelling evidence has been reported that DySnO (dysprosium stannate) and HoSnO (holmium stannate) are spin ices. These four compounds belong to the family of rare-earth pyrochlore oxides. CdErSe, a spinel in which the magnetic Er ions sit on corner-linked tetrahedra, also displays spin ice behavior.\nSpin ice materials are characterized by a random disorder in the orientation of the moment of the magnetic ions, even when the material is at very low temperatures. Alternating current (AC) magnetic susceptibility measurements find evidence for a dynamic freezing of the magnetic moments as the temperature is lowered somewhat below the temperature at which the specific heat displays a maximum. The broad maximum in the heat capacity does not correspond to a phase transition. Rather, the temperature at which the maximum occurs, about 1K in DyTiO, signals a rapid change in the number of tetrahedra where the two-in, two-out rule is violated. Tetrahedra where the rule is violated are sites where the aforementioned monopoles reside. Mathematically, spin ice configurations can be described by closed Eulerian paths.", "Spin ices are geometrically frustrated magnetic systems. While frustration is usually associated with triangular or tetrahedral arrangements of magnetic moments coupled via antiferromagnetic exchange interactions, as in Andersons Ising model, spin ices are frustrated ferromagnets. It is the very strong local magnetic anisotropy from the crystal field forcing the magnetic moments to point either in or out of a tetrahedron that renders ferromagnetic interactions frustrated in spin ices. 
Most importantly, it is the long-range magnetostatic dipole–dipole interaction, and not' the nearest-neighbor exchange, that causes the frustration and the consequential two-in, two-out rule that leads to the spin ice phenomenology.\nFor a tetrahedron in a two-in, two-out state, the magnetization field is divergent-free; there is as much \"magnetization intensity\" entering a tetrahedron as there is leaving (see figure 3). In such a divergent-free situation, there exists no source or sink for the field. According to Gauss theorem (also known as Ostrogradskys theorem), a nonzero divergence of a field is caused, and can be characterized, by a real number called \"charge\". In the context of spin ice, such charges characterizing the violation of the two-in, two-out magnetic moment orientation rule are the aforementioned monopoles.\nIn Autumn 2009, researchers reported experimental observation of low-energy quasiparticles resembling the predicted monopoles in spin ice. A single crystal of the dysprosium titanate spin ice candidate was examined in the temperature range of 0.6–2.0K. Using neutron scattering, the magnetic moments were shown to align in the spin ice material into interwoven tube-like bundles resembling Dirac strings. At the defect formed by the end of each tube, the magnetic field looks like that of a monopole. Using an applied magnetic field, the researchers were able to control the density and orientation of these strings. A description of the heat capacity of the material in terms of an effective gas of these quasiparticles was also presented.\nThe effective charge of a magnetic monopole, Q (see figure 3) in both the dysprosium and holmium titanate spin ice compounds is approximately (Bohr magnetons per angstrom). The elementary magnetic constituents of spin ice are magnetic dipoles, so the emergence of monopoles is an example of the phenomenon of fractionalization.\nThe microscopic origin of the atomic magnetic moments in magnetic materials is quantum mechanical; the Planck constant enters explicitly in the equation defining the magnetic moment of an electron, along with its charge and its mass. Yet, the magnetic moments in the dysprosium titanate and the holmium titanate spin ice materials are effectively described by classical statistical mechanics, and not quantum statistical mechanics, over the experimentally relevant and reasonably accessible temperature range (between 0.05K and 2K) where the spin ice phenomena manifest themselves. Although the weakness of quantum effects in these two compounds is rather unusual, it is believed to be understood. There is current interest in the search of quantum spin ices, materials in which the laws of quantum mechanics now become needed to describe the behavior of the magnetic moments. Magnetic ions other than dysprosium (Dy) and holmium (Ho) are required to generate a quantum spin ice, with praseodymium (Pr), terbium (Tb) and ytterbium (Yb) being possible candidates. One reason for the interest in quantum spin ice is the belief that these systems may harbor a quantum spin liquid, a state of matter where magnetic moments continue to wiggle (fluctuate) down to absolute zero temperature. The theory describing the low-temperature and low-energy properties of quantum spin ice is akin to that of vacuum quantum electrodynamics, or QED. 
This constitutes an example of the idea of emergence.", "Artificial spin ices are metamaterials consisting of coupled nanomagnets arranged on periodic and aperiodic lattices.\nThese systems have enabled the experimental investigation of a variety of phenomena such as frustration, emergent magnetic monopoles, and phase transitions. In addition, artificial spin ices show potential as reprogrammable magnonic crystals and have been studied for their fast dynamics. A variety of geometries have been explored, including quasicrystalline systems and 3D structures, as well as different magnetic materials to modify anisotropies and blocking temperatures.\nFor example, polymer magnetic composites comprising 2D lattices of droplets of solid-liquid phase change material, with each droplet containing a single magnetic dipole particle, form an artificial spin ice above the droplet melting point, and, after cooling, a spin glass state with low bulk remanence. Spontaneous emergence of 2D magnetic vortices was observed in such spin ices, which vortex geometries were correlated with the external bulk remanence.\nFuture work in this field includes further developments in fabrication and characterization methods, exploration of new geometries and material combinations, and potential applications in computation, data storage, and reconfigurable microwave circuits.\nIn 2021 a study demonstrated neuromorphic reservoir computing using artificial spin ice, solving a range of computational tasks using the complex magnetic dynamics of the artificial spin ice.\nIn 2022, another studied achieved an artificial kagome spin ice which could potentially be used in the future for novel high-speed computers with low power consumption.", "Spin waves are observed through four experimental methods: inelastic neutron scattering, inelastic light scattering (Brillouin scattering, Raman scattering and inelastic X-ray scattering), inelastic electron scattering (spin-resolved electron energy loss spectroscopy), and spin-wave resonance (ferromagnetic resonance). \n* In inelastic neutron scattering the energy loss of a beam of neutrons that excite a magnon is measured, typically as a function of scattering vector (or equivalently momentum transfer), temperature and external magnetic field. Inelastic neutron scattering measurements can determine the dispersion curve for magnons just as they can for phonons. Important inelastic neutron scattering facilities are present at the ISIS neutron source in Oxfordshire, UK, the Institut Laue-Langevin in Grenoble, France, the High Flux Isotope Reactor at Oak Ridge National Laboratory in Tennessee, USA, and at the National Institute of Standards and Technology in Maryland, USA. \n* Brillouin scattering similarly measures the energy loss of photons (usually at a convenient visible wavelength) reflected from or transmitted through a magnetic material. Brillouin spectroscopy is similar to the more widely known Raman scattering, but probes a lower energy and has a superior energy resolution in order to be able to detect the meV energy of magnons. \n* Ferromagnetic (or antiferromagnetic) resonance instead measures the absorption of microwaves, incident on a magnetic material, by spin waves, typically as a function of angle, temperature and applied field. Ferromagnetic resonance is a convenient laboratory method for determining the effect of magnetocrystalline anisotropy on the dispersion of spin waves. 
One group at the Max Planck Institute of Microstructure Physics in Halle, Germany proved that by using spin polarized electron energy loss spectroscopy (SPEELS), very high energy surface magnons can be excited. This technique allows one to probe the dispersion of magnons in the ultrathin ferromagnetic films. The first experiment was performed for a 5 ML Fe film. With momentum resolution, the magnon dispersion was explored for an 8 ML fcc Co film on Cu(001) and an 8 ML hcp Co on W(110), respectively. The maximum magnon energy at the border of the surface Brillouin zone was 240 meV.", "In this model the magnetization\nwhere is the volume. The propagation of spin waves is described by the Landau-Lifshitz equation of motion:\nwhere is the gyromagnetic ratio and is the damping constant. The cross-products in this forbidding-looking equation show that the propagation of spin waves is governed by the torques generated by internal and external fields. (An equivalent form is the Landau-Lifshitz-Gilbert equation, which replaces the final term by a more \"simple looking\" equivalent one.)\nThe first term on the right hand side of the equation describes the precession of the magnetization under the influence of the applied field, while the above-mentioned final term describes how the magnetization vector \"spirals in\" towards the field direction as time progresses. In metals the damping forces described by the constant are in many cases dominated by the eddy currents.\nOne important difference between phonons and magnons lies in their dispersion relations. The dispersion relation for phonons is to first order linear in wavevector , namely , where is frequency, and is the velocity of sound. Magnons have a parabolic dispersion relation: where the parameter represents a \"spin stiffness.\" The form is the third term of a Taylor expansion of a cosine term in the energy expression originating from the dot product. The underlying reason for the difference in dispersion relation is that the order parameter (magnetization) for the ground-state in ferromagnets violates time-reversal symmetry. Two adjacent spins in a solid with lattice constant that participate in a mode with wavevector have an angle between them equal to .", "In condensed matter physics, a spin wave is a propagating disturbance in the ordering of a magnetic material. These low-lying collective excitations occur in magnetic lattices with continuous symmetry. From the equivalent quasiparticle point of view, spin waves are known as magnons, which are bosonic modes of the spin lattice that correspond roughly to the phonon excitations of the nuclear lattice. As temperature is increased, the thermal excitation of spin waves reduces a ferromagnet's spontaneous magnetization. The energies of spin waves are typically only in keeping with typical Curie points at room temperature and below.", "When magnetoelectronic devices are operated at high frequencies, the generation of spin waves can be an important energy loss mechanism. Spin wave generation limits the linewidths and therefore the quality factors Q of ferrite components used in microwave devices. 
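For reference, the Landau–Lifshitz equation of motion and the two dispersion relations contrasted above can be written in their standard forms (notation assumed here):
:<math>\frac{\partial\mathbf{M}}{\partial t} = -\gamma\,\mathbf{M}\times\mathbf{H}_{\mathrm{eff}} - \frac{\lambda}{M_{s}^{2}}\,\mathbf{M}\times\left(\mathbf{M}\times\mathbf{H}_{\mathrm{eff}}\right),</math>
:<math>\omega_{\mathrm{phonon}} \approx c\,k ,\qquad \omega_{\mathrm{magnon}} \approx D\,k^{2},</math>
where <math>\gamma</math> is the gyromagnetic ratio, <math>\lambda</math> the damping constant, <math>M_{s}</math> the saturation magnetization, <math>c</math> the velocity of sound and <math>D</math> the spin stiffness.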
The reciprocal of the lowest frequency of the characteristic spin waves of a magnetic material gives a time scale for the switching of a device based on that material.", "The simplest way of understanding spin waves is to consider the Hamiltonian for the Heisenberg ferromagnet:\nwhere is the exchange energy, the operators represent the spins at Bravais lattice points, is the Landé -factor, is the Bohr magneton and is the internal field which includes the external field plus any \"molecular\" field. Note that in the classical continuum case and in dimensions the Heisenberg ferromagnet equation has the form\nIn and dimensions this equation admits several integrable and non-integrable extensions like the Landau-Lifshitz equation, the Ishimori equation and so on. For a ferromagnet and the ground state of the Hamiltonian is that in which all spins are aligned parallel with the field . That is an eigenstate of can be verified by rewriting it in terms of the spin-raising and spin-lowering operators given by:\nresulting in\nwhere has been taken as the direction of the magnetic field. The spin-lowering operator annihilates the state with minimum projection of spin along the -axis, while the spin-raising operator annihilates the ground state with maximum spin projection along the -axis. Since\nfor the maximally aligned state, we find\nwhere N is the total number of Bravais lattice sites. The proposition that the ground state is an eigenstate of the Hamiltonian is confirmed.\nOne might guess that the first excited state of the Hamiltonian has one randomly selected spin at position rotated so that\nbut in fact this arrangement of spins is not an eigenstate. The reason is that such a state is transformed by the spin raising and lowering operators. The operator will increase the -projection of the spin at position back to its low-energy orientation, but the operator will lower the -projection of the spin at position . The combined effect of the two operators is therefore to propagate the rotated spin to a new position, which is a hint that the correct eigenstate is a spin wave, namely a superposition of states with one reduced spin. The exchange energy penalty associated with changing the orientation of one spin is reduced by spreading the disturbance over a long wavelength. The degree of misorientation of any two near-neighbor spins is thereby minimized. From this explanation one can see why the Ising model magnet with discrete symmetry has no spin waves: the notion of spreading a disturbance in the spin lattice over a long wavelength makes no sense when spins have only two possible orientations. The existence of low-energy excitations is related to the fact that in the absence of an external field, the spin system has an infinite number of degenerate ground states with infinitesimally different spin orientations. The existence of these ground states can be seen from the fact that the state does not have the full rotational symmetry of the Hamiltonian , a phenomenon which is called spontaneous symmetry breaking.", "Spinning band distillation is a technique used to separate liquid mixtures which are similar in boiling points. When liquids with similar boiling points are distilled, the vapors are mixtures, and not pure compounds. Fractionating columns help separate the mixture by allowing the mixed vapors to cool, condense, and vaporize again in accordance with Raoult's law. With each condensation-vaporization cycles, the vapors are enriched in a certain component. 
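The enrichment at each cycle follows from Raoult's law; for an ideal binary mixture the standard relations (assumed here, not quoted from the text) are
:<math>p_{i} = x_{i}\,p_{i}^{\ast},\qquad y_{i} = \frac{x_{i}\,p_{i}^{\ast}}{x_{1}p_{1}^{\ast}+x_{2}p_{2}^{\ast}},</math>
where <math>x_{i}</math> and <math>y_{i}</math> are the liquid and vapour mole fractions and <math>p_{i}^{\ast}</math> the pure-component vapour pressure, so each condensation-vaporization cycle enriches the vapour in the more volatile component.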
A larger surface area allows more cycles, improving separation.\nSpinning band distillation takes this concept one step further by using a spinning helical band made of an inert material such as metal or Teflon to push the rising vapors and descending condensate to the sides of the column, coming into close contact with each other. This speeds up equilibration and provides for a greater number of condensation-vaporization cycles.", "Spinning band distillation may sometimes be used to recycle waste solvents which contain different solvents, and other chemical compounds.", "Spinning cone columns are used in a form of low temperature vacuum steam distillation to gently extract volatile chemicals from liquid foodstuffs while minimising the effect on the taste of the product. For instance, the columns can be used to remove some of the alcohol from wine, off smells from cream, and to capture aroma compounds that would otherwise be lost in coffee processing.", "The columns are made of stainless steel. Conical vanes are attached alternately to the wall of the column and to a central rotating shaft. The product is poured in at the top under vacuum, and steam is pumped into the column from below. The vanes provide a large surface area over which volatile compounds can evaporate into the steam, and the rotation ensures a thin layer of the product is constantly moved over the moving cone. It typically takes 20 seconds for the liquid to move through the column, and industrial columns might process . The temperature and pressure can be adjusted depending on the compounds targeted.", "Improvements in viticulture and warmer vintages have led to increasing levels of sugar in wine grapes, which have translated to higher levels of alcohol - which can reach over 15% ABV in Zinfandels from California. Some producers feel that this unbalances their wine, and use spinning cones to reduce the alcohol by 1-2 percentage points. In this case the wine is passed through the column once to distill out the most volatile aroma compounds which are then put to one side while the wine goes through the column a second time at higher temperature to extract alcohol. The aroma compounds are then mixed back into the wine. Some producers such as Joel Peterson of Ravenswood argue that technological \"fixes\" such as spinning cones remove a sense of terroir from the wine; if the wine has the tannins and other components to balance 15% alcohol, Peterson argues that it should be accepted on its own terms.\nThe use of spinning cones, and other technologies such as reverse osmosis, was banned in the EU until recently, although for many years they could freely be used in wines imported into the EU from certain New World wine producing countries such as Australia and the USA. In November 2007, the Wine Standards Branch (WSB) of the UK's Food Standards Agency banned the sale of a wine called Sovio, made from Spanish grapes that would normally produce wines of 14% ABV. 
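The mass balance behind such a partial treatment is straightforward (illustrative round numbers, not figures from the producers named above): if a fraction <math>f</math> of the wine is stripped of essentially all of its alcohol and blended back, the final strength is roughly
:<math>\mathrm{ABV}_{\mathrm{final}} \approx (1-f)\,\mathrm{ABV}_{\mathrm{initial}},</math>
so stripping about 10% of a 15% ABV wine gives roughly 13.5% ABV (a reduction of about 1.5 percentage points), while stripping 40–50% of a 14% ABV wine brings it down to about 7–8.5% ABV.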
Sovio runs 40-50% of the wine over spinning cones to reduce the alcohol content to 8%, which means that under EU law it could not be sold as wine as it was below 8.5%; above that, under the rules prevailing at the time, it would be banned because spinning cones could not be used in EU winemaking.\nSubsequently, the EU legalized dealcoholization with a 2% adjustment limit in its Code of Winemaking Practices, publishing that in its Commission Regulation (EC) No 606/2009 and stipulating that the dealcoholization must be accomplished by physical separation techniques which would embrace the spinning cone method.\nMore recently, in International Organisation of Vine and Wine Resolutions OIV-OENO 394A-2012 and OIV-OENO 394B-2012 of June 22, 2012 EU recommended winemaking procedures were modified to permit use of the spinning cone column and membrane techniques such as reverse osmosis on wine, subject to a limitation on the adjustment. That limitation is currently under review following the proposal by some EU members that it be eliminated altogether. The limitation is applicable only to products formally labeled as \"wine\".", "Spiral separators of the wet type, also called spiral concentrators, are devices to separate solid components in a slurry, based upon a combination of the solid particle density as well as the particle's hydrodynamic properties (e.g. drag). The device consists of a tower, around which is wound a sluice, from which slots or channels are placed in the base of the sluice to extract solid particles that have come out of suspension.\nAs larger and heavier particles sink to the bottom of the sluice faster and experience more drag from the bottom, they travel slower, and so move towards the center of the spiral. Conversely, light particles stay towards the outside of the spiral, with the water, and quickly reach the bottom. At the bottom, a \"cut\" is made with a set of adjustable bars, channels, or slots, separating the low and high density parts.", "Dry spiral separators, capable of distinguishing round particles from nonrounds, are used to sort the feed by shape. The device consists of a tower, around which is wound an inwardly inclined flight. A catchment funnel is placed around this inner flight. Round particles roll at a higher speed than other objects, and so are flung off the inner flight and into the collection funnel. Shapes which are not round enough are collected at the bottom of the flight.\nSeparators of this type may be used for removing weed seeds from the intended harvest, or to remove deformed lead shot.", "The term spiral separator can refer to either a device for separating slurry components by density (wet spiral separators), or for a device for sorting particles by shape (dry spiral separators).", "Typical spiral concentrators will use a slurry from about 20%-40% solids by weight, with a particle size somewhere between 0.75—1.5mm (17-340 mesh), though somewhat larger particle sizes are sometimes used. The spiral separator is less efficient at the particle sizes of 0.1—0.074mm however. For efficient separation, the density difference between the heavy minerals and the light minerals in the feedstock should be at least 1 g/cm; and because the separation is dependent upon size and density, spiral separators are most effective at purifying ore if its particles are of uniform size and shape. A spiral separator may process a couple tons per hour of ore, per flight, and multiple flights may be stacked in the same space as one, to improve capacity. 
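The performance of a given cut is usually judged from a simple two-product mass balance on the feed, concentrate and tailings streams. The sketch below uses the standard two-product formula with illustrative (hypothetical) assay values rather than figures from the text:
<syntaxhighlight lang="python">
# Two-product formula for evaluating a density separation from assayed grades.
# f, c, t: mass fraction of the heavy mineral in feed, concentrate and tailings.
def two_product(f, c, t):
    """Return (fraction of feed mass reporting to concentrate, recovery of the heavy mineral)."""
    mass_split = (f - t) / (c - t)
    recovery = mass_split * c / f
    return mass_split, recovery

# Hypothetical assays: 5% heavies in the feed, 45% in the concentrate, 1% in the tailings.
split, rec = two_product(0.05, 0.45, 0.01)
print(f"{split:.1%} of the feed mass reports to concentrate, recovering {rec:.1%} of the heavy mineral")
</syntaxhighlight>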
\nMany things can be done to improve the separation efficiency, including:\n* changing the rate of material feed\n*changing the grain size of the material\n*changing the slurry mass percentage\n*adjusting the cutter bar positions\n*running the output of one spiral separator (often, a third, intermediate, cut) through a second.\n*adding washwater inlets along the length of the spiral, to aid in separating light minerals\n*adding multiple outlets along the length, to improve the ability of the spiral to remove heavy contaminants\n*adding ridges on the sluice at an angle to the direction of flow.", "Since ethanol boils at a much lower temperature than water, simple distillation can separate ethanol from water by applying heat to the mixture. Historically, a copper vessel was used for this purpose, since copper removes undesirable sulfur-based compounds from the alcohol. However, many modern stills are made of stainless steel pipes with copper linings to prevent erosion of the entire vessel and lower copper levels in the waste product (which in large distilleries is processed to become animal feed). Copper is the preferred material for stills because it yields an overall better-tasting spirit. The taste is improved by the chemical reaction between the copper in the still and the sulfur compounds created by the yeast during fermentation. These unwanted and flavor-changing sulfur compounds are chemically removed from the final product resulting in a smoother, better-tasting drink. All copper stills will require repairs about every eight years due to the precipitation of copper-sulfur compounds. The beverage industry was the first to implement a modern distillation apparatus and led the way in developing equipment standards which are now widely accepted in the chemical industry.\nThere is also an increasing usage of the distillation of gin under glass and PTFE, and even at reduced pressures, to facilitate a fresher product. This is irrelevant to alcohol quality because the process starts with triple distilled grain alcohol, and the distillation is used solely to harvest botanical flavors such as limonene and other terpene like compounds. The ethyl alcohol is relatively unchanged.\nThe simplest standard distillation apparatus is commonly known as a pot still, consisting of a single heated chamber and a vessel to collect purified alcohol. A pot still incorporates only one condensation, whereas other types of distillation equipment have multiple stages which result in higher purification of the more volatile component (alcohol). Pot still distillation gives an incomplete separation, but this can be desirable for the flavor of some distilled beverages.\nIf a purer distillate is desired, a reflux still is the most common solution. Reflux stills incorporate a fractionating column, commonly created by filling copper vessels with glass beads to maximize available surface area. As alcohol boils, condenses, and reboils through the column, the effective number of distillations greatly increases. Vodka and gin and other neutral grain spirits are distilled by this method, then diluted to concentrations appropriate for human consumption.\nAlcoholic products from home distilleries are common throughout the world but are sometimes in violation of local statutes. The product of illegal stills in the United States is commonly referred to as moonshine and in Ireland, poitín. However, poitín, although made illegal in 1661, has been legal for export in Ireland since 1997. 
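The enrichment achieved by the repeated distillations in a reflux column, described above, can be sketched with the idealized Fenske relation for a binary mixture (a standard textbook result, assuming total reflux and a constant relative volatility <math>\alpha</math>, neither of which is stated in the text):
:<math>\frac{x_{D}}{1-x_{D}} = \alpha^{\,N}\,\frac{x_{B}}{1-x_{B}},</math>
where <math>x_{B}</math> and <math>x_{D}</math> are the mole fractions of the more volatile component (here ethanol) at the bottom and the top of the column and <math>N</math> is the effective number of equilibrium stages; each added stage multiplies the enrichment by another factor of <math>\alpha</math>.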
Note that the term moonshine itself is often misused as many believe it to be a specific kind of high-proof alcohol that was distilled from corn, but the term can refer to any illicitly distilled alcohol.", "A still is an apparatus used to distill liquid mixtures by heating to selectively boil and then cooling to condense the vapor. A still uses the same concepts as a basic distillation apparatus, but on a much larger scale. Stills have been used to produce perfume and medicine, water for injection (WFI) for pharmaceutical use, generally to separate and purify different chemicals, and to produce distilled beverages containing ethanol.", "Stripping works on the basis of mass transfer. The idea is to make the conditions favorable for the component, A, in the liquid phase to transfer to the vapor phase. This involves a gas–liquid interface that A must cross. The total amount of A that has moved across this boundary can be defined as the flux of A, N.", "Stripping is commonly used in industrial applications to remove harmful contaminants from waste streams. One example would be the removal of TBT and PAH contaminants from harbor soils. The soils are dredged from the bottom of contaminated harbors, mixed with water to make a slurry and then stripped with steam. The cleaned soil and contaminant rich steam mixture are then separated. This process is able to decontaminate soils almost completely.\nSteam is also frequently used as a stripping agent for water treatment. Volatile organic compounds are partially soluble in water and because of environmental considerations and regulations, must be removed from groundwater, surface water, and wastewater. These compounds can be present because of industrial, agricultural, and commercial activity.", "The variables and design considerations for strippers are many. Among them are the entering conditions, the degree of recovery of the solute needed, the choice of the stripping agent and its flow, the operating conditions, the number of stages, the heat effects, and the type and size of the equipment.\nThe degree of recovery is often determined by environmental regulations, such as for volatile organic compounds like chloroform.\nFrequently, steam, air, inert gases, and hydrocarbon gases are used as stripping agents. This is based on solubility, stability, degree of corrosiveness, cost, and availability. As stripping agents are gases, operation at nearly the highest temperature and lowest pressure that will maintain the components and not vaporize the liquid feed stream is desired. This allows for the minimization of flow. As with all other variables, minimizing cost while achieving efficient separation is the ultimate goal.\nThe size of the equipment, and particularly the height and diameter, is important in determining the possibility of flow channeling that would reduce the contact area between the liquid and vapor streams. If flow channeling is suspected to be occurring, a redistribution plate is often necessary to, as the name indicates, redistribute the liquid flow evenly to reestablish a higher contact area.\nAs mentioned previously, strippers can be trayed or packed. Packed columns, and particularly when random packing is used, are usually favored for smaller columns with a diameter less than 2 feet and a packed height of not more than 20 feet. Packed columns can also be advantageous for corrosive fluids, high foaming fluids, when fluid velocity is high, and when particularly low pressure drop is desired. 
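In its simplest two-film form, the flux of A across the gas–liquid interface introduced above, and the stripping factor commonly used in sizing such columns, can be written as (standard forms, assumed here)
:<math>N_{A} = K_{L}\left(C_{A}-C_{A}^{\ast}\right),\qquad S = \frac{K\,V}{L},</math>
where <math>K_{L}</math> is the overall liquid-phase mass-transfer coefficient, <math>C_{A}</math> the bulk liquid concentration, <math>C_{A}^{\ast}</math> the liquid concentration in equilibrium with the vapour, <math>K</math> the vapour–liquid equilibrium ratio of A, and <math>V</math> and <math>L</math> the molar vapour and liquid flows; effective stripping generally requires <math>S</math> somewhat greater than 1.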
Trayed strippers are advantageous because of ease of design and scale up. Structured packing can be used similar to trays despite possibly being the same material as dumped (random) packing. Using structured packing is a common method to increase the capacity for separation or to replace damaged trays.\nTrayed strippers can have sieve, valve, or bubble cap trays while packed strippers can have either structured packing or random packing. Trays and packing are used to increase the contact area over which mass transfer can occur as mass transfer theory dictates. Packing can have varying material, surface area, flow area, and associated pressure drop. Older generation packing include ceramic Raschig rings and Berl saddles. More common packing materials are metal and plastic Pall rings, metal and plastic Zbigniew Białecki rings, and ceramic Intalox saddles. Each packing material of this newer generation improves the surface area, the flow area, and/or the associated pressure drop across the packing. Also important, is the ability of the packing material to not stack on top of itself. If such stacking occurs, it drastically reduces the surface area of the material. Lattice design work has been increasing of late that will further improve these characteristics.\nDuring operation, monitoring the pressure drop across the column can help to determine the performance of the stripper. A changed pressure drop over a significant range of time can be an indication that the packing may need to be replaced or cleaned.", "Stripping is mainly conducted in trayed towers (plate columns) and packed columns, and less often in spray towers, bubble columns, and centrifugal contactors.\nTrayed towers consist of a vertical column with liquid flowing in the top and out the bottom. The vapor phase enters in the bottom of the column and exits out of the top. Inside of the column are trays or plates. These trays force the liquid to flow back and forth horizontally while the vapor bubbles up through holes in the trays. The purpose of these trays is to increase the amount of contact area between the liquid and vapor phases.\nPacked columns are similar to trayed columns in that the liquid and vapor flows enter and exit in the same manner. The difference is that in packed towers there are no trays. Instead, packing is used to increase the contact area between the liquid and vapor phases. There are many different types of packing used and each one has advantages and disadvantages.", "Stripping is a physical separation process where one or more components are removed from a liquid stream by a vapor stream. In industrial applications the liquid and vapor streams can have co-current or countercurrent flows. Stripping is usually carried out in either a packed or trayed column.", "Strontium aluminate phosphors produce green and aqua hues, where green gives the highest brightness and aqua the longest glow time. Different aluminates can be used as the host matrix. This influences the wavelength of emission of the europium ion, by its covalent interaction with surrounding oxygens, and crystal field splitting of the 5d orbital energy levels.\nThe excitation wavelengths for strontium aluminate range from 200 to 450 nm, and the emission wavelengths range from 420 to 520 nm. The wavelength for its green formulation is 520 nm, its aqua, or blue-green, version emits at 505 nm, and its blue emits at 490 nm. 
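For orientation, these emission wavelengths can be converted to photon energies with the usual relation (values rounded):
:<math>E = \frac{hc}{\lambda} \approx \frac{1240\ \mathrm{eV\,nm}}{\lambda},</math>
so the 520 nm green emission corresponds to about 2.4 eV and the 490 nm blue emission to about 2.5 eV, while excitation in the 200–450 nm band supplies roughly 2.8–6.2 eV.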
Strontium aluminate can be formulated to phosphoresce at longer (yellow to red) wavelengths as well, though such emission is often dimmer than that of more common phosphorescence at shorter wavelengths.\nFor europium-dysprosium doped aluminates, the peak emission wavelengths are 520 nm for , 480 nm for , and 400 nm for .\n is important as a persistently luminescent phosphor for industrial applications. It can be produced by molten salt assisted process at 900 °C.\nThe most described type is the stoichiometric green-emitting (approx. 530 nm) . shows significantly longer afterglow than the europium-only doped material. The Eu dopant shows high afterglow, while Eu has almost none. Polycrystalline is used as a green phosphor for plasma displays, and when doped with praseodymium or neodymium it can act as a good active laser medium. is a phosphor emitting at 305 nm, with quantum efficiency of 70%. Several strontium aluminates can be prepared by the sol-gel process.\nThe wavelengths produced depend on the internal crystal structure of the material. Slight modifications in the manufacturing process (the type of reducing atmosphere, small variations of stoichiometry of the reagents, addition of carbon or rare-earth halides) can significantly influence the emission wavelengths.\nStrontium aluminate phosphor is usually fired at about 1250 °C, though higher temperatures are possible. Subsequent exposure to temperatures above 1090 °C is likely to cause loss of its phosphorescent properties. At higher firing temperatures, the undergoes transformation to .\nCerium and manganese doped strontium aluminate shows intense narrowband (22 nm wide) phosphorescence at 515 nm when excited by ultraviolet radiation (253.7 nm mercury emission line, to lesser degree 365 nm). It can be used as a phosphor in fluorescent lamps in photocopiers and other devices. A small amount of silicon substituting the aluminium can increase emission intensity by about 5%; the preferred composition of the phosphor is .\nHowever, the material has high hardness, causing abrasion to the machinery used in processing it; manufacturers frequently coat the particles with a suitable lubricant when adding them to a plastic. Coating also prevents the phosphor from water degradation over time. \nThe glow intensity depends on the particle size; generally, the bigger the particles, the better the glow.\nStrontium aluminate is insoluble in water and has an approximate pH of 8 (very slightly basic).", "Strontium aluminate cement can be used as refractory structural material. It can be prepared by sintering of a blend of strontium oxide or strontium carbonate with alumina in a roughly equimolar ratio at about 1500 °C. It can be used as a cement for refractory concrete for temperatures up to 2000 °C as well as for radiation shielding. The use of strontium aluminate cements is limited by the availability of the raw materials.\nStrontium aluminates have been examined as proposed materials for immobilization of fission products of radioactive waste, namely strontium-90. Europium-doped strontium aluminate nanoparticles are proposed as indicators of stress and cracks in materials, as they emit light when subjected to mechanical stress (mechanoluminescence). They are also useful for fabricating mechano-optical nanodevices. 
Non-agglomerated particles are needed for this purpose; they are difficult to prepare conventionally but can be made by ultrasonic spray pyrolysis of a mixture of strontium acetylacetonate, aluminium acetylacetonate and europium acetylacetonate in reducing atmosphere (argon with 5% of hydrogen).", "Strontium aluminate based afterglow pigments are marketed under numerous brand names such as Core Glow, Super-LumiNova and Lumibrite, developed by Seiko. \nMany companies additionally sell products that contain a mix of strontium aluminate particles and a host material. Due to the nearly endless ability to recharge, strontium aluminate products cross many industries. Some of the most popular uses are for street lighting, such as the viral bike path. \nCompanies offer an industrial marble aggregate mixed with the strontium aluminate, to enable ease of using within standard construction processes. The glowing marble aggregates are often pressed into the cement or asphalt during the final stages of [https://static1.squarespace.com/static/5fd2b4109f12ec3a24e2bf24/t/5ffca2aa15cc5241eed1cb48/1610392238473/core_glow_project_guide.pdf construction]. \nReusable and non-toxic glow stick alternatives are now being developed using strontium aluminate particles.\nCubic strontium aluminate can be used used as a water-soluble sacrificial layer for the production of free-standing films of complex oxide materials.", "Strontium aluminates are considered non-toxic, and are biologically and chemically inert.\nCare should be used when handling loose powder, which can cause irritation if inhaled or exposed to mucous membranes.", "Phosphorescent materials were discovered in the 1700s, and people have been studying them and making improvements over the centuries. The development of strontium aluminate pigments in 1993 was spurred on by the need to find a substitute for glow-in-the-dark materials with high luminance and long phosphorescence, especially those that used promethium. This led to the discovery by Yasumitsu Aoki (Nemoto & Co.) of materials with luminance approximately 10 times greater than zinc sulfide and phosphorescence approximately 10 times longer, and 10 times more expensive. The invention was patented by Nemoto & Co., Ltd. and licensed to other manufacturers and watch brands. Strontium aluminates are now the longest lasting and brightest phosphorescent material commercially available. \nFor many phosphorescence-based purposes, strontium aluminate is a superior phosphor to its predecessor, copper-activated zinc sulfide, being about 10 times brighter and 10 times longer glowing. It is frequently used in glow in the dark objects, where it replaces the cheaper but less efficient Cu:ZnS that many people recognize with nostalgia – this is what made glow in the dark stars stickers glow. \nAdvancements in understanding of phosphorescent mechanisms, as well as advancements in molecular imaging, have enabled the development of novel, state-of-the-art strontium aluminates.", "Strontium aluminate is an aluminate compound with the chemical formula (sometimes written as ). It is a pale yellow, monoclinic crystalline powder that is odourless and non-flammable. When activated with a suitable dopant (e.g. europium, written as ), it acts as a photoluminescent phosphor with long persistence of phosphorescence.\nStrontium aluminates exist in a variety of other compositions including (monoclinic), (cubic), (hexagonal), and (orthorhombic). 
The different compositions cause different colours of light to be emitted.", "The term sublimation refers specifically to a physical change of state and is not used to describe the transformation of a solid to a gas in a chemical reaction. For example, the dissociation on heating of solid ammonium chloride into hydrogen chloride and ammonia is not sublimation but a chemical reaction. Similarly the combustion of candles, containing paraffin wax, to carbon dioxide and water vapor is not sublimation but a chemical reaction with oxygen.", "In ancient alchemy, a protoscience that contributed to the development of modern chemistry and medicine, alchemists developed a structure of basic laboratory techniques, theory, terminology, and experimental methods. Sublimation was used to refer to the process in which a substance is heated to a vapor, then immediately collects as sediment on the upper portion and neck of the heating medium (typically a retort or alembic), but can also be used to describe other similar non-laboratory transitions. It was mentioned by alchemical authors such as Basil Valentine and George Ripley, and in the Rosarium philosophorum, as a process necessary for the completion of the magnum opus. Here, the word sublimation was used to describe an exchange of \"bodies\" and \"spirits\" similar to laboratory phase transition between solids and gases. Valentine, in his Le char triomphal de lantimoine' (Triumphal Chariot of Antimony, published 1646) made a comparison to spagyrics in which a vegetable sublimation can be used to separate the spirits in wine and beer. Ripley used language more indicative of the mystical implications of sublimation, indicating that the process has a double aspect in the spiritualization of the body and the corporalizing of the spirit. He writes:\n<blockquote><poem>\nAnd Sublimations we make for three causes,\nThe first cause is to make the body spiritual.\nThe second is that the spirit may be corporeal,\nAnd become fixed with it and consubstantial.\nThe third cause is that from its filthy original.\nIt may be cleansed, and its saltiness sulphurious\nMay be diminished in it, which is infectious.", "Sublimation is a technique used by chemists to purify compounds. A solid is typically placed in a sublimation apparatus and heated under vacuum. Under this reduced pressure, the solid volatilizes and condenses as a purified compound on a cooled surface (cold finger), leaving a non-volatile residue of impurities behind. Once heating ceases and the vacuum is removed, the purified compound may be collected from the cooling surface.\nFor even higher purification efficiencies, a temperature gradient is applied, which also allows for the separation of different fractions. Typical setups use an evacuated glass tube that is heated gradually in a controlled manner. The material flow is from the hot end, where the initial material is placed, to the cold end that is connected to a pump stand. 
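The separation in such a gradient tube relies on the strong temperature dependence of the sublimation vapour pressure, which in idealized form follows a Clausius–Clapeyron relation (a standard approximation, assuming the enthalpy of sublimation <math>\Delta H_{\mathrm{sub}}</math> is constant over the range of interest):
:<math>\ln\frac{p_{2}}{p_{1}} = -\frac{\Delta H_{\mathrm{sub}}}{R}\left(\frac{1}{T_{2}}-\frac{1}{T_{1}}\right),</math>
so a compound with a larger <math>\Delta H_{\mathrm{sub}}</math> (lower volatility) re-condenses closer to the hot end of the tube.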
By controlling temperatures along the length of the tube, the operator can control the zones of re-condensation, with very volatile compounds being pumped out of the system completely (or caught by a separate cold trap), moderately volatile compounds re-condensing along the tube according to their different volatilities, and non-volatile compounds remaining in the hot end.\nVacuum sublimation of this type is also the method of choice for purification of organic compounds for use in the organic electronics industry, where very high purities (often > 99.99%) are needed to satisfy the standards for consumer electronics and other applications.", "Iodine gradually sublimes and produces visible fumes on gentle heating at standard atmospheric temperature. It is possible to obtain liquid iodine at atmospheric pressure by controlling the temperature at just between the melting point and the boiling point of iodine. In forensic science, iodine vapor can reveal latent fingerprints on paper.", "Naphthalene, an organic compound commonly found in pesticides such as mothballs, sublimes easily because it is made of non-polar molecules that are held together only by van der Waals intermolecular forces. Naphthalene is a solid that sublimes gradually at standard temperature and pressure, at a high rate, with the critical sublimation point at around 80°C or 176°F. At low temperature, its vapour pressure is high enough, 1mmHg at 53°C, to make the solid form of naphthalene evaporate into gas. On cool surfaces, the naphthalene vapours will solidify to form needle-like crystals.", "For clarification, a distinction between the two corresponding cases is needed. With reference to a phase diagram, the sublimation that occurs left of the solid-gas boundary, the triple point or the solid-liquid boundary (corresponding to evaporation in vaporization) may be called gradual sublimation; and the substance sublimes gradually, regardless of rate. The sublimation that occurs at the solid-gas boundary (critical sublimation point) (corresponding to boiling in vaporization) may be called rapid sublimation, and the substance sublimes rapidly. The words \"gradual\" and \"rapid\" have acquired special meanings in this context and no longer describe the rate of sublimation.", "Snow and ice sublime gradually at temperatures below the solid-liquid boundary (melting point) (generally 0 °C), and at partial pressures below the triple point pressure of , at a low rate. In freeze-drying, the material to be dehydrated is frozen and its water is allowed to sublime under reduced pressure or vacuum. The loss of snow from a snowfield during a cold spell is often caused by sunshine acting directly on the upper layers of the snow. Sublimation of ice is a factor to the erosive wear of glacier ice, also called ablation in glaciology.", "Sublimation is historically used as a generic term to describe a two-step phase transition ― a solid-to-gas transition (sublimation in a more precise definition) followed by a gas-to-solid transition (deposition). (See below)", "Vaporization (from liquid to gas) is divided into two types: vaporization on the surface of the liquid is called evaporation, and vaporization at the boiling point with formation of bubbles in the interior of the liquid is called boiling. 
However there is no such distinction for the solid-to-gas transition, which is always called sublimation in both corresponding cases.", "Sublimation is the transition of a substance directly from the solid to the gas state, without passing through the liquid state. The verb form of sublimation is sublime, or less preferably, sublimate. Sublimate also refers to the product obtained by sublimation. The point at which sublimation occurs rapidly (for further details, see below) is called critical sublimation point, or simply sublimation point. Notable examples include sublimation of dry ice at room temperature and atmospheric pressure, and that of solid iodine with heating.\nThe reverse process of sublimation is deposition (also called desublimation), in which a substance passes directly from a gas to a solid phase, without passing through the liquid state.\nAll solids sublime, though most sublime at extremely low rates that are hardly detectable. At normal pressures, most chemical compounds and elements possess three different states at different temperatures. In these cases, the transition from the solid to the gas state requires an intermediate liquid state. The pressure referred to is the partial pressure of the substance, not the total (e.g. atmospheric) pressure of the entire system. Thus, any solid can sublime if its vapour pressure is higher than the surrounding partial pressure of the same substance, and in some cases, sublimes at an appreciable rate (e.g. water ice just below 0 °C).\nFor some substances, such as carbon and arsenic, sublimation from solid state is much more achievable than evaporation from liquid state and it is difficult to obtain them as liquids. This is because the pressure of their triple point in its phase diagram (which corresponds to the lowest pressure at which the substance can exist as a liquid) is very high.\nSublimation is caused by the absorption of heat which provides enough energy for some molecules to overcome the attractive forces of their neighbors and escape into the vapor phase. Since the process requires additional energy, sublimation is an endothermic change. The enthalpy of sublimation (also called heat of sublimation) can be calculated by adding the enthalpy of fusion and the enthalpy of vaporization.", "The enthalpy of sublimation has commonly been predicted using the equipartition theorem. If the lattice energy is assumed to be approximately half the packing energy, then the following thermodynamic corrections can be applied to predict the enthalpy of sublimation. Assuming a 1 molar ideal gas gives a correction for the thermodynamic environment (pressure and volume) in which pV = RT, hence a correction of 1RT. Additional corrections for the vibrations, rotations and translation then need to be applied. From the equipartition theorem gaseous rotation and translation contribute 1.5RT each to the final state, therefore a +3RT correction. Crystalline vibrations and rotations contribute 3RT each to the initial state, hence −6RT. Summing the RT corrections; −6RT + 3RT + RT = −2RT. This leads to the following approximate sublimation enthalpy. 
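Written out explicitly, and using U for the assumed lattice energy, the corrections above amount to approximately

\[
\Delta H_{\text{sub}} \approx -U - 2RT,
\]

a reconstruction from the stated corrections rather than an exact result.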
A similar approximation can be found for the entropy term if rigid bodies are assumed.", "Solid carbon dioxide (dry ice) sublimes rapidly along the solid-gas boundary (sublimation point) below the triple point (e.g., at the temperature of −78.5 °C, at atmospheric pressure), whereas its melting into liquid CO can occur along the solid-liquid boundary (melting point) at pressures and temperatures above the triple point (i.e., 5.1 atm, −56.6 °C).", "Arsenic can sublime readily at high temperatures.\nCadmium and zinc sublime much more than other common materials, so they are not suitable materials for use in vacuum.", "While the definition of sublimation is simple, there is often confusion as to what counts as a sublimation.", "Dye-sub printing is a digital printing technology using full color artwork that works with polyester and polymer-coated substrates. Also referred to as digital sublimation, the process is commonly used for decorating apparel, signs and banners, as well as novelty items such as cell phone covers, plaques, coffee mugs, and other items with sublimation-friendly surfaces. The process uses the science of sublimation, in which heat and pressure are applied to a solid, turning it into a gas through an endothermic reaction without passing through the liquid phase.\nIn sublimation printing, unique sublimation dyes are transferred to sheets of “transfer” paper via liquid gel ink through a piezoelectric print head. The ink is deposited on these high-release inkjet papers, which are used for the next step of the sublimation printing process. After the digital design is printed onto sublimation transfer sheets, it is placed on a heat press along with the substrate to be sublimated.\nIn order to transfer the image from the paper to the substrate, it requires a heat press process that is a combination of time, temperature and pressure. The heat press applies this special combination, which can change depending on the substrate, to “transfer” the sublimation dyes at the molecular level into the substrate. The most common dyes used for sublimation activate at 350 degrees Fahrenheit. However, a range of 380 to 420 degrees Fahrenheit is normally recommended for optimal color.\nThe result of the sublimation process is a nearly permanent, high resolution, full color print. Because the dyes are infused into the substrate at the molecular level, rather than applied at a topical level (such as with screen printing and direct to garment printing), the prints will not crack, fade or peel from the substrate under normal conditions.", "A typical sublimation apparatus separates a mix of appropriate solid materials in a vessel in which it applies heat under a controllable atmosphere (air, vacuum or inert gas). If the material is not at first solid, then it may freeze under reduced pressure. Conditions are so chosen that the solid volatilizes and condenses as a purified compound on a cooled surface, leaving the non-volatile residual impurities or solid products behind. \nThe form of the cooled surface often is a so-called cold finger which for very low-temperature sublimation may actually be cryogenically cooled. If the operation is a batch process, then the sublimed material can be collected from the cooled surface once heating ceases and the vacuum is released. 
Although this may be quite convenient for small quantities, adapting sublimation processes to large volumes is generally not practical, as the apparatus becomes extremely large and generally needs to be disassembled to recover products and remove residue. \nAmong the advantages of applying the principle to certain materials are the comparatively low working temperatures, reduced exposure to gases such as oxygen that might harm certain products, and the ease with which it can be performed on extremely small quantities. The same apparatus may also be used for conventional distillation of extremely small quantities due to the very small volume and surface area between evaporating and condensing regions, although this is generally only useful if the cold finger can be cold enough to solidify the condensate.", "More sophisticated variants of sublimation apparatus include those that apply a temperature gradient so as to allow for controlled recrystallization of different fractions along the cold surface. Thermodynamic processes follow a statistical distribution, and suitably designed apparatus exploit this principle with a gradient that will yield different purities in particular temperature zones along the collection surface. Such techniques are especially helpful when the requirement is to refine or separate multiple products or impurities from the same mix of raw materials. They are necessary in particular when some of the required products have similar sublimation points or pressure curves.", "A sublimatory or sublimation apparatus is equipment, commonly laboratory glassware, for purification of compounds by selective sublimation. In principle, the operation resembles purification by distillation, except that the products do not pass through a liquid phase.", "Fructose, along with glucose, is one of the principal sugars involved in the creation of wine. At the time of harvest, there is usually an equal amount of glucose and fructose molecules in the grape; however, as the grape overripens, the level of fructose will become higher. In wine, fructose can taste nearly twice as sweet as glucose and is a key component in the creation of sweet dessert wines. During fermentation, glucose is consumed first by the yeast and converted into alcohol. A winemaker who chooses to halt fermentation (either by temperature control or the addition of brandy spirits in the process of fortification) will be left with a wine that is high in fructose and notable residual sugars. The technique of süssreserve, where unfermented grape must is added after the wine's fermentation is complete, will result in a wine that tastes less sweet than a wine whose fermentation was halted. This is because the unfermented grape must will still have roughly equal parts of fructose and the less sweet-tasting glucose. Similarly, the process of chaptalization, where sucrose (which is one part glucose and one part fructose) is added, will usually not increase the sweetness level of the wine.", "Sucrose is a disaccharide, a molecule composed of the two monosaccharides glucose and fructose. Invertase is the enzyme that cleaves the glycosidic linkage between the glucose and fructose molecules.\nIn most wines, there will be very little sucrose, since it is not a natural constituent of grapes, and sucrose added for the purpose of chaptalisation will be consumed in the fermentation.
The exception to this rule is Champagne and other sparkling wines, to which an amount of liqueur d'expédition (typically sucrose dissolved in a still wine) is added after the second fermentation in bottle, a practice known as dosage.", "Glucose, along with fructose, is one of the primary sugars found in wine grapes. In wine, glucose tastes less sweet than fructose. It is a six-carbon sugar molecule derived from the breakdown of sucrose. At the beginning of the ripening stage there is usually more glucose than fructose present in the grape (as much as five times more), but the rapid development of fructose shifts the ratio to where at harvest there are generally equal amounts. Grapes that are overripe, such as those used in some late harvest wines, may have more fructose than glucose. During fermentation, yeast cells break down and convert glucose first. The linking of glucose molecules with aglycones, in a process that creates glycosides, also plays a role in the resulting flavor of the wine due to their relation and interactions with phenolic compounds like anthocyanins and terpenoids.", "Flash release is a technique used in wine pressing. The technique allows for a better extraction of wine polysaccharides.", "Sugars in wine are at the heart of what makes winemaking possible. During the process of fermentation, sugars from wine grapes are broken down and converted by yeast into alcohol (ethanol) and carbon dioxide. Grapes accumulate sugars as they grow on the grapevine through the translocation of sucrose molecules that are produced by photosynthesis in the leaves. During ripening the sucrose molecules are hydrolyzed (separated) by the enzyme invertase into glucose and fructose. By the time of harvest, between 15 and 25% of the grape will be composed of simple sugars. Both glucose and fructose are six-carbon sugars, but three-, four-, five- and seven-carbon sugars are also present in the grape. Not all sugars are fermentable, with sugars like the five-carbon arabinose, rhamnose and xylose still being present in the wine after fermentation. Very high sugar content will effectively kill the yeast once a certain (high) alcohol content is reached. For these reasons, no wine is ever fermented completely \"dry\" (meaning without any residual sugar). Sugar's role in dictating the final alcohol content of the wine (and thus its resulting body and \"mouth-feel\") sometimes encourages winemakers to add sugar (usually sucrose) during winemaking in a process known as chaptalization solely in order to boost the alcohol content – chaptalization does not increase the sweetness of a wine.", "In wine tasting, humans are least sensitive to the taste of sweetness (in contrast to sensitivity to bitterness or sourness), with the majority of the population being able to detect sugar or \"sweetness\" in wines at between 1% and 2.5% residual sugar. Additionally, other components of wine such as acidity and tannins can mask the perception of sugar in the wine.", "By the late 1960s, radium was phased out and replaced with safer alternatives.\nTritium was used on the original Panerai Luminor and Radiomir dive watches and on almost all Swiss watches from 1960 to 1998, when it was banned. Tritium-based substances ceased to be used by Omega SA in 1997.\nIn the 21st century, one radioluminescent alternative to afterglow pigments, requiring radiation protection, is being produced and used for watches and other purposes. These are tritium-based devices called \"gaseous tritium light source\" (GTLS).
GTLS are made using sturdy (often glass) containers internally coated with a phosphor layer and filled with tritium gas before the containers are permanently sealed. They have the advantage of being self-powered and producing a consistent luminosity that does not gradually fade during the night. However, GTLS contain radioactive tritium gas that has a half-life of slightly over 12.3 years. Additionally, phosphor degradation will cause the brightness of a tritium container to drop further during that period. The more tritium that is initially inserted in the container, the brighter it is to begin with, and the longer its useful life. This means the intensity of the tritium-powered light source will slowly fade, generally becoming too dim to be useful for dark-adapted human eyes after 20 to 30 years.", "Super-LumiNova granulated pigments are applied either by manual application, screen printing or pad printing. RC Tritech AG recommends a maximum application thickness, applied in one or multiple layers. Beyond that thickness, ultraviolet light struggles to reach and activate the bottom of the deposited pigment, diminishing the returns of additional application thickness. The pigments and binders are produced separately, as there is no optimal binder for differing applications. This forces RC Tritech AG to offer many solvent- and non-solvent-based binder systems to maximally concentrate the granulated pigments in the mixture for application on various surfaces.\nAlternatively, RC Tritech AG offers Lumicast pieces, which are highly concentrated luminous Super-LumiNova 3D-castings. According to RC Tritech AG these ceramic parts can be made in any customer-desired shape and result in a higher light emission brightness when compared to the common application methods. Lumicast pieces can be glued or form-fitted on various surfaces.", "Over time, RC Tritech AG developed afterglow color variations other than the original Nemoto & Co. C3 green, as well as higher grades of afterglow pigments.\nAny Super-LumiNova emission color other than C3 is achieved by adding colorants that absorb light and hence limit the amount of light the afterglow pigment can absorb and emit. After the green-glowing C3 variant (emission at 515 nm), which appears pale yellow-green in daylight, the blue-green-glowing BGW9 variant (emission at 485 nm, close to the turquoise wavelength), which appears white in daylight, is the second most effective variant regarding pure afterglow brightness. Different colors can, however, be chosen to optimize perceived light emission, as dictated by the variance of the human eye's luminous efficiency function. Maximal light emission around wavelengths of 555 nm (green) is important for obtaining optimal photopic vision using the eye cone cells for observation in – or just coming from – well-lit conditions. Maximal light emission around wavelengths of 498 nm (cyan) is important for obtaining optimal scotopic vision using the eye rod cells for observation in low-light conditions. Besides these technical and physiological reasons, esthetic or other considerations can also influence Super-LumiNova color choices.\nSuper-LumiNova is offered in three grade levels: Standard, A and X1. The initial brightness of these grades does not significantly vary, but the light intensity decay over time of the A and X1 grades is significantly reduced. This means the X1 grade takes the longest to become too dim to be useful for the human eye.
Not all Super-LumiNova color variations are available in all three grades.", "Because no chemical change occurs after a charge-discharge cycle, the pigments theoretically retain their afterglow properties indefinitely. A reduction in light intensity only occurs very slowly, almost imperceptibly. This reduction increases with the degree of coloring of the pigments. Intensely colored types lose their intensity more quickly than neutral ones. High temperatures of up to several hundred degrees Celsius are not a problem. The only thing that needs to be avoided is prolonged contact with water or high humidity, as this creates a hydroxide layer that negatively affects the light emission intensity.", "Super-LumiNova is a brand name under which strontium aluminate–based non-radioactive and nontoxic photoluminescent or afterglow pigments, used for illuminating markings on watch dials, hands, bezels, etc. in the dark, are marketed. When activated with suitable dopants (europium and dysprosium), the material acts as a photoluminescent phosphor with long persistence of phosphorescence. This technology offers up to ten times higher brightness than previous zinc sulfide–based materials.\nThese types of phosphorescent pigments, often called lume, operate like a light battery. After sufficient activation by sunlight, fluorescent, LED, UV (backlight), incandescent and other light sources, they glow in the dark for hours. Electrons within the pigment are \"excited\" by ultraviolet light exposure – the excitation wavelengths for strontium aluminate range from 200 to 450 nm – to a higher energy state and, after the excitation source is removed, fall back to their normal energy state, releasing the stored energy as visible light over a period of time. Although fading over time, larger markings with an appropriately thick application remain visible to dark-adapted human eyes for the whole night. This ultraviolet-induced activation and subsequent light emission process can be repeated again and again.", "Besides being used in timepieces by industry and hobbyists, Super-LumiNova is also marketed for application on:\n* Instruments: scales, dials, markings, indicators, etc.\n* Scales: engravings, silkscreen-printing\n* Aviation instruments and markings\n* Jewelry\n* Safety- and emergency panels, signs, markings\n* Aiming posts\n* Various other parts", "Nemoto & Co., Ltd. – a global manufacturer of phosphorescent pigments and other specialized phosphors – was founded by Kenzo Nemoto in December 1941 as a luminous paint processing company and has developed and supplied luminous paint to the watch, clock and aviation instrument industries ever since.\nSuper-LumiNova is based on LumiNova-branded pigments, invented in 1993 by the Nemoto staff members Yoshihiko Murayama, Nobuyoshi Takeuchi, Yasumitsu Aoki and Takashi Matsuzawa as a safe replacement for radium-based luminous paints. The invention was patented by Nemoto & Co., Ltd. and licensed to other manufacturers and watch brands.\nIn 1998 Nemoto & Co. established a joint venture with RC Tritech AG called LumiNova AG Switzerland to manufacture 100 percent Swiss-made afterglow pigments branded as Super-LumiNova. After that, the production of radioactive luminous compounds by RC Tritech AG was completely stopped.
According to RC Tritech AG the Swiss watch brands all use their Super-LumiNova pigments.", "Superferromagnetism is the magnetism of an ensemble of magnetically interacting super-moment-bearing material particles that would be superparamagnetic if they were not interacting. Nanoparticles of iron oxides, such as ferrihydrite (nominally FeOOH), often cluster and interact magnetically. These interactions change the magnetic behaviours of the nanoparticles (both above and below their blocking temperatures) and lead to an ordered low-temperature phase with non-randomly oriented particle super-moments.", "The phenomenon appears to have been first described and the term \"superferromagnatism\" introduced by Bostanjoglo and Röhkel, for a metallic film system. A decade later, the same phenomenon was rediscovered and described to occur in small-particle systems. The discovery is attributed as such in the scientific literature.", "The time of the NP decaying magnetic field for bound particles in SPMR measurements is on the order of seconds. Unbound particles of similar size decay on the order of milliseconds, contributing very little to the results.\nThe decay curve for bound NP is fit by an equation of the form\nor\nThe constants are fit to the experimental data and a particular time point is used to extract the value of the magnetic field. The fields from all the sensor positions are then used to construct a field contour map.", "One application of SPMR is the detection of disease and cancer. This is accomplished by functionalizing the NP with biomarkers, including cell antibodies (Ab). The functionalized NP+Ab may be subsequently attached to cells targeted by the biomarker in cell cultures, blood and marrow samples, as well as animal models.\nA variety of biochemical procedures are used to conjugate the NP with the biomarker. The resulting NP+Ab are either directly mixed with incubated blood or diseased cells, or injected into animals. Following injection the functionalized NP reside in the bloodstream until encountering cells that are specific to the biomarker attached to the Ab.\nConjugation of NP with Ab followed by attachment to cells is accomplished by identifying particular cell lines expressing varying levels of the Ab by flow cytometry. The Ab is conjugated to the superparamagnetic iron oxide NP by different methods including the carbodiimide method. The conjugated NP+Ab are then incubated with the cell lines and may be examined by transmission-electron microscopy (TEM) to confirm that the NP+Ab are attached to the cells. Other methods to determine whether NP are present on the surface of the cell are confocal microscopy, Prussian blue histochemistry, and SPMR. The resulting carboxylate functionality of the polymer-encapsulated NPs by this method allows conjugation of amine groups on the Ab to the carboxylate anions on the surface of the NPs using standard two-step EDC/NHS chemistry.", "A system of magnetic coils are used for magnetizing the NP during SPMR measurements such as those used for medical research applications. The subject of investigation may be living cell cultures, animals, or humans. 
The optimum magnitude of the magnetizing field will saturate the NP magnetic moment, although physical coil size and electrical constraints may be the limiting factor.\nThe use of magnetizing fields that provide a uniform field across the subject in one direction is desirable, as it reduces the number of variables when solving the inverse electromagnetic problem to determine the coordinates of NP sources in the sample. A uniform magnetizing field may be obtained with the use of Helmholtz coils.\nThe magnetizing field is applied for a sufficient time to allow the NP dipole moment to reach its maximum value. This field is then rapidly turned off > 1 msec, followed by a short duration to allow for any induced currents from the magnetizing field pulse to die away. Following this, the sensors are turned on and measure the decaying field for a sufficient time to obtain an accurate value of the decay time constant; 1–3 s. Magnetizing fields of ~5 mT for a Helmholtz coil of 1 m in diameter are used.\nThe magnetic sensors that measure the decaying magnetic fields require high magnetic field sensitivity in order to determine magnetic moments of NP with adequate sensitivity. SQUID sensors, similar to those used in magnetoencephalography are appropriate for this task. Atomic magnetometers also have adequate sensitivity.\nUnshielded environments reduce expense and provide greater flexibility in location of the equipment but limit the sensitivity of the measurement to ~ 1 pT. This is offset by reducing the effect of external electromagnetic noise with noise reduction algorithms.\nA contour map of the decaying magnetic fields is used to localize the sources containing bound NP. This map is produced from the field distribution obtained from an array of SQUID sensors, multiple positions of the sources under the sensors, or a combination of both. The magnetic moments of the sources is obtained during this procedure.", "Localization of magnetic sources producing the SPMR fields is done by solving the inverse problem of electromagnetism. The forward electromagnetic problem consists of modeling the sources as magnetic dipoles for each magnetic source or more complex configurations that model each source as a distributed source. Examples of the latter are multiple models, Bayesian models, or distributed dipole models. The magnetic dipole model has the form\nwhere r and p are the location and dipole moment vectors of the magnetic dipole, and is the magnetic permeability of free space.\nFor a subject containing N sources, a minimum of 4N measurements of the magnetic field are required to determine the coordinates and magnetic moment of each source. In the case where the particles have been aligned by the external magnetizing field in a particular orientation, 3N measurements are required to obtain solutions. This latter situation leads to increased accuracy for finding the locations of objects as fewer variables are required in the inverse solution algorithm. Increased number of measurements provides an over-determined solution, increasing the localization accuracy.\nSolving the inverse problem for magnetic dipole or more complex models is performed with non-linear algorithms. The Levenberg-Marquardt algorithm is one approach to obtaining solutions to this non-linear problem. 
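As a concrete illustration of such a fit, the sketch below recovers the position and moment of a single dipole from simulated single-axis sensor readings using SciPy's Levenberg-Marquardt solver; the sensor grid, noise level, and source parameters are arbitrary assumptions, not values from any particular SPMR system.

```python
import numpy as np
from scipy.optimize import least_squares

MU0 = 4e-7 * np.pi  # magnetic permeability of free space (T*m/A)

def dipole_field(sensors, r_dip, p_dip):
    """Field of a point magnetic dipole at each sensor position (N x 3 array)."""
    d = sensors - r_dip                         # displacement vectors, sensor minus source
    dist = np.linalg.norm(d, axis=1)[:, None]   # distances, kept as a column
    return MU0 / (4 * np.pi) * (3 * d * (d @ p_dip)[:, None] / dist**5 - p_dip / dist**3)

# Assumed sensor array: a 5 x 5 grid, 12 cm above the origin, measuring Bz only.
xs, ys = np.meshgrid(np.linspace(-0.1, 0.1, 5), np.linspace(-0.1, 0.1, 5))
sensors = np.column_stack([xs.ravel(), ys.ravel(), np.full(xs.size, 0.12)])

# Synthetic "true" source (position in metres, moment in A*m^2) plus sensor noise.
rng = np.random.default_rng(0)
r_true, p_true = np.array([0.02, -0.01, 0.0]), np.array([0.0, 0.0, 1e-6])
bz_meas = dipole_field(sensors, r_true, p_true)[:, 2] + rng.normal(0, 1e-13, sensors.shape[0])

def residuals(params):
    r_dip, p_dip = params[:3], params[3:]
    return dipole_field(sensors, r_dip, p_dip)[:, 2] - bz_meas

# Levenberg-Marquardt fit starting from a rough initial guess.
fit = least_squares(residuals, x0=[0.0, 0.0, 0.01, 0.0, 0.0, 5e-7], method="lm")
print("fitted position (m):   ", fit.x[:3])
print("fitted moment (A*m^2): ", fit.x[3:])
```

With more sensors or sensor positions than unknowns, the same least-squares machinery gives the over-determined solutions mentioned above.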
More complex methods are available from other biomagnetism programs.\nCoordinates and magnetic moments, for each source assumed to be present in the sample, are determined from solution of the inverse problem.", "SPMR measurements depend on the characteristics of the nanoparticle (NP) that is used. The NP must have the property that the bulk material is normally ferromagnetic in the bulk. Magnetite (FeO) is one such example as it is ferromagnetic when below its Curie temperature. However, if the NPs are single domain, and of a size less than ~50 nm, they exhibit paramagnetic properties even below the Curie temperature due to the energy of the NP being dominated by thermal activity rather than magnetic energy. If an external magnetic field is applied, the NPs align with that field and have a magnetic moment now characteristic of ferromagnetic behavior. When this external field is removed, the NPs relax back to their paramagnetic state.\nThe size of the NP determines the rate of decay of the relaxation process after the extinction of the external magnetization field. The NP decay rate also depends on whether the particle is bound (tethered) to a surface, or is free to rotate. The latter case is dominated by thermal activity, Brownian motion.\nFor the bound case, the decay rate is given by the Néel equation\nHere the value of is normally taken as &thinsp;≈&thinsp;10&thinsp;s, is the anisotropy energy density of the magnetic material (1.35&thinsp;×&thinsp;10&thinsp;J/m), the magnetic core volume, is Boltzmann’s constant, and is the absolute temperature. This exponential relationship between the particle volume and the decay time implies a very strong dependence on the diameter of the NP used in SPMR studies, requiring precise size restrictions on producing these particles.\nFor magnetite, this requires a particle diameter of ~25 nm. The NP also require high monodispersity around this diameter as NP a few nm below this value will decay too fast and a few nm above will decay too slowly to fit into the time window of the measurement.\nThe value of the time constant, , depends on the method of fabrication of the NP. Different chemical procedures will produce slightly different values as well as different NP magnetic moments. Equally important characteristics of the NP are monodispersity, single domain character, and crystalline structure.", "Superparamagnetic relaxometry (SPMR) is a technology combining the use of sensitive magnetic sensors and the superparamagnetic properties of magnetite nanoparticles (NP). For NP of a sufficiently small size, on the order of tens of nanometers (nm), the NP exhibit paramagnetic properties, i.e., they have little or no magnetic moment. When they are exposed to a small external magnetic field, on the order of a few millitesla (mT), the NP align with that field and exhibit ferromagnetic properties with large magnetic moments. Following removal of the magnetizing field, the NP slowly become thermalized, decaying with a distinct time constant from the ferromagnetic state back to the paramagnetic state. This time constant depends strongly upon the NP diameter and whether they are unbound or bound to an external surface such as a cell. Measurement of this decaying magnetic field is typically done by superconducting quantum interference detectors (SQUIDs). The magnitude of the field during the decay process determines the magnetic moment of the NPs in the source. 
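The sensitivity of the relaxation time to particle diameter described above is easy to see numerically. The short sketch below evaluates the Néel expression for spherical particles using assumed, magnetite-like constants (an attempt time of about 1e-10 s and an anisotropy energy density of about 1.35e4 J/m^3); the absolute values are indicative only, but the change of several orders of magnitude over a few nanometres is the relevant point.

```python
import math

KB = 1.380649e-23   # J/K, Boltzmann constant
TAU0 = 1e-10        # s, assumed attempt time
K_ANISO = 1.35e4    # J/m^3, assumed anisotropy energy density (magnetite-like)
T = 300.0           # K, room temperature

def neel_time(diameter_nm: float) -> float:
    """Néel relaxation time tau = tau0 * exp(K*V / (kB*T)) for a spherical particle."""
    volume = math.pi / 6.0 * (diameter_nm * 1e-9) ** 3  # m^3
    return TAU0 * math.exp(K_ANISO * volume / (KB * T))

for d in (21, 23, 25, 27, 29):
    print(f"d = {d} nm  ->  tau ~ {neel_time(d):.3g} s")
```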
A spatial contour map of the field distribution determines the location of the source in three dimensions as well as the magnetic moment.", "A superparamagnetic system can be measured with AC susceptibility measurements, where an applied magnetic field varies in time, and the magnetic response of the system is measured. A superparamagnetic system will show a characteristic frequency dependence: When the frequency is much higher than 1/τ, there will be a different magnetic response than when the frequency is much lower than 1/τ, since in the latter case, but not the former, the ferromagnetic clusters will have time to respond to the field by flipping their magnetization. The precise dependence can be calculated from the Néel–Arrhenius equation, assuming that the neighboring clusters behave independently of one another (if clusters interact, their behavior becomes more complicated). It is also possible to perform magneto-optical AC susceptibility measurements with magneto-optically active superparamagnetic materials such as iron oxide nanoparticles in the visible wavelength range.", "Superparamagnetism sets a limit on the storage density of hard disk drives due to the minimum size of particles that can be used. This limit on areal-density is known as the superparamagnetic limit.\n* Older hard disk technology uses longitudinal recording. It has an estimated limit of 100 to 200 Gbit/in.\n* Current hard disk technology uses perpendicular recording. drives with densities of approximately 1 Tbit/in are available commercially. This is at the limit for conventional magnetic recording that was predicted in 1999.\n* Future hard disk technologies currently in development include: heat-assisted magnetic recording (HAMR) and microwave-assisted magnetic recording (MAMR), which use materials that are stable at much smaller sizes. They require localized heating or microwave excitation before the magnetic orientation of a bit can be changed. Bit-patterned recording (BPR) avoids the use of fine-grained media and is another possibility. In addition, magnetic recording technologies based on topological distortions of the magnetization, known as skyrmions, have been proposed.", "There is no time-dependence of the magnetization when the nanoparticles are either completely blocked () or completely superparamagnetic (). There is, however, a narrow window around where the measurement time and the relaxation time have comparable magnitude. In this case, a frequency-dependence of the susceptibility can be observed. For a randomly oriented sample, the complex susceptibility is:\nwhere\n* is the frequency of the applied field\n* is the susceptibility in the superparamagnetic state\n* is the susceptibility in the blocked state\n* is the relaxation time of the assembly\nFrom this frequency-dependent susceptibility, the time-dependence of the magnetization for low-fields can be derived:", "* Imaging: contrast agents in magnetic resonance imaging (MRI)\n* Magnetic separation: cell-, DNA-, protein- separation, RNA fishing\n* Treatments: targeted drug delivery, magnetic hyperthermia, magnetofection", "Let us imagine that the magnetization of a single superparamagnetic nanoparticle is measured and let us define as the measurement time. If , the nanoparticle magnetization will flip several times during the measurement, then the measured magnetization will average to zero. 
If , the magnetization will not flip during the measurement, so the measured magnetization will be what the instantaneous magnetization was at the beginning of the measurement. In the former case, the nanoparticle will appear to be in the superparamagnetic state whereas in the latter case it will appear to be “blocked” in its initial state.\nThe state of the nanoparticle (superparamagnetic or blocked) depends on the measurement time. A transition between superparamagnetism and blocked state occurs when . In several experiments, the measurement time is kept constant but the temperature is varied, so the transition between superparamagnetism and blocked state is seen as a function of the temperature. The temperature for which is called the blocking temperature:\nFor typical laboratory measurements, the value of the logarithm in the previous equation is in the order of 20–25.\nEquivalently, blocking temperature is the temperature below which a material shows slow relaxation of magnetization.", "Normally, any ferromagnetic or ferrimagnetic material undergoes a transition to a paramagnetic state above its Curie temperature. Superparamagnetism is different from this standard transition since it occurs below the Curie temperature of the material.\nSuperparamagnetism occurs in nanoparticles which are single-domain, i.e. composed of a single magnetic domain. This is possible when their diameter is below 3–50 nm, depending on the materials. In this condition, it is considered that the magnetization of the nanoparticles is a single giant magnetic moment, sum of all the individual magnetic moments carried by the atoms of the nanoparticle. Those in the field of superparamagnetism call this \"macro-spin approximation\".\nBecause of the nanoparticle’s magnetic anisotropy, the magnetic moment has usually only two stable orientations antiparallel to each other, separated by an energy barrier. The stable orientations define the nanoparticle’s so called “easy axis”. At finite temperature, there is a finite probability for the magnetization to flip and reverse its direction. The mean time between two flips is called the Néel relaxation time and is given by the following Néel–Arrhenius equation:\nwhere:\n* is thus the average length of time that it takes for the nanoparticle’s magnetization to randomly flip as a result of thermal fluctuations.\n* is a length of time, characteristic of the material, called the attempt time or attempt period (its reciprocal is called the attempt frequency); its typical value is between 10 and 10 second.\n* K is the nanoparticle’s magnetic anisotropy energy density and V its volume. KV is therefore the energy barrier associated with the magnetization moving from its initial easy axis direction, through a “hard plane”, to the other easy axis direction.\n* k is the Boltzmann constant.\n* T is the temperature.\nThis length of time can be anywhere from a few nanoseconds to years or much longer. In particular, it can be seen that the Néel relaxation time is an exponential function of the grain volume, which explains why the flipping probability becomes rapidly negligible for bulk materials or large nanoparticles.", "Superparamagnetism is a form of magnetism which appears in small ferromagnetic or ferrimagnetic nanoparticles. In sufficiently small nanoparticles, magnetization can randomly flip direction under the influence of temperature. The typical time between two flips is called the Néel relaxation time. 
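In symbols, using the quantities defined above and writing \(\tau_m\) for the measurement time, the Néel–Arrhenius relation and the blocking-temperature condition discussed in this section are commonly written as

\[
\tau_N = \tau_0 \exp\!\left(\frac{KV}{k_B T}\right),
\qquad
T_B = \frac{KV}{k_B \ln(\tau_m/\tau_0)},
\]

a standard reconstruction consistent with the statement that the logarithm is of order 20–25 for typical laboratory measurement times.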
In the absence of an external magnetic field, when the time used to measure the magnetization of the nanoparticles is much longer than the Néel relaxation time, their magnetization appears to be in average zero; they are said to be in the superparamagnetic state. In this state, an external magnetic field is able to magnetize the nanoparticles, similarly to a paramagnet. However, their magnetic susceptibility is much larger than that of paramagnets.", "When an external magnetic field H is applied to an assembly of superparamagnetic nanoparticles, their magnetic moments tend to align along the applied field, leading to a net magnetization. The magnetization curve of the assembly, i.e. the magnetization as a function of the applied field, is a reversible S-shaped increasing function. This function is quite complicated but for some simple cases:\n# If all the particles are identical (same energy barrier and same magnetic moment), their easy axes are all oriented parallel to the applied field and the temperature is low enough (T &lt; T ≲ KV/(10 k)), then the magnetization of the assembly is\n# If all the particles are identical and the temperature is high enough (T ≳ KV/k), then, irrespective of the orientations of the easy axes:\nIn the above equations:\n* n is the density of nanoparticles in the sample\n* is the magnetic permeability of vacuum\n* is the magnetic moment of a nanoparticle\n* is the Langevin function\nThe initial slope of the function is the magnetic susceptibility of the sample :\nThe latter susceptibility is also valid for all temperatures if the easy axes of the nanoparticles are randomly oriented.\nIt can be seen from these equations that large nanoparticles have a larger µ and so a larger susceptibility. This explains why superparamagnetic nanoparticles have a much larger susceptibility than standard paramagnets: they behave exactly as a paramagnet with a huge magnetic moment.", "Tagging of postage stamps means that the stamps are printed on luminescent paper or with luminescent ink to facilitate automated mail processing. Both fluorescence and phosphorescence are used. The same stamp may have been printed with and without these luminescent features, the two varieties are referred to as tagged and untagged, respectively.", "The US Post Office Department started experiments with fluorescent compounds in the early 1960s. An 8¢ air mail stamp issued in 1963 was the first stamp printed for trials with new cancelling machines. The 5¢ City Delivery issue of 1963 was the first commemorative issue produced with tagging.\nPrecancelled stamps and service-inscribed stamps are not usually tagged because they need not be routed through the cancelling equipment.", "Deutsche Bundespost started issuing stamps on fluorescent Lumogen paper in 1960 in connection with trials for automated mail processing in the Darmstadt area. Fluorescent paper was generally used for stamps of Deutsche Bundespost and Deutsche Bundespost Berlin from 1961 on. Deutsche Post AG continues to use this technology. Deutsche Post of the GDR did not use luminescent tagging on stamps.", "Since luminescent ink or luminescent paper are only delivered to specialist printers, tagging also serves as an anti-counterfeiting measure, similar to the practice on banknotes.", "Letters and postcards fed into an automated mail processing plant are illuminated with ultraviolet light. 
The response of the stamps' luminescent features to this illumination is used to position the mail items so that the stamps can be cancelled, and so that the significant parts of the address, such as postcodes, may be read and the mail sorted accordingly.\nThe luminescent features of the stamps are generally invisible or barely visible to the human eye in normal illumination. They can, however, be identified under ultraviolet light in much the same way as in the postal machinery. In general, fluorescent features can be identified with UV light of a longer wavelength than is needed for phosphorescent features (see below).\nThe luminescent substance (\"taggant\") can be printed over the whole surface of the stamp, the main design, the margins only, single bands or bars or other patterns, or can be added to the paper itself.\nThe tagging pattern can also be varied to enable sorting of mail according to the service class.", "Fluorescent stamps can be detected with a black light fluorescent tube. Phosphorescent stamps can be detected using a shortwave UV lamp. The effects of both processes can be recorded photographically. Lamps for both ranges of wavelengths, as well as combinations of both, are available. Care must be taken when using UV lamps, since their light can damage the eyes.", "Phosphorescent materials release the absorbed energy only slowly, so that they exhibit an \"afterglow\". Materials for stamp tagging absorb ultraviolet light of wavelengths between 180 nm and 300 nm (UVC, short-wave UV) and emit light of a greenish or reddish colour depending on the substances used.", "Luminescent tagging has been added to postage stamps of the United Kingdom since the Wilding issues of 1959, in the shape of vertical bands. Stamps of the current Machin series have been printed with one or two such \"phosphor bands\"; those for second-class mail bear only one such band, those for first-class mail bear two. The positions of the bands may vary: stamps from booklets may have shortened, notched, or inset bands that do not extend onto neighbouring gutters, to avoid the use of the latter instead of stamps for franking. Due to the presence of optical brighteners in many printing papers, phosphorescent materials were chosen for stamp tagging in the UK.", "The first tagged stamps of Canada were issued in 1962 with vertical phosphorescent bands. In 1972, fluorescent general tagging was introduced, initially as vertical bars, now normally on all four sides of the stamp.", "When Deutsche Post of the GDR expanded automated mail processing in the 1980s, they did not use luminescent tagging, but used sideways illumination to identify the shadows of the stamp perforation in order to position mail items in cancelling and sorting machinery. Red light was used for this purpose, giving a good contrast to ordinary writing ink colours and enabling machine reading of postcodes. Some issues of postal cards were printed entirely in orange to facilitate the latter process. However, the colours of the imprinted stamps were later changed to those of the usual definitives of the corresponding value, and simulated perforations were added around the stamp design to help locate the stamp position.", "Upon absorption of light, fluorescent materials emit light of a longer wavelength (lower energy) than the absorbed radiation, but cease to do so almost immediately once the illumination is stopped.
The tagging of stamps uses substances that absorb ultraviolet light of wavelengths between 300 nm and 450 nm (\"Black light\", UVA, long-wave UV) and emit light in the visible spectrum. Under UV illumination they usually glow a greenish or yellowish colour.\nIt must not be confused with the \"whitening\" of paper which is achieved by adding optical brighteners that usually re-emit light in the blue region of the spectrum, making the paper appear whiter by compensating a perceived deficit in reflected colours of these wavelengths.", "There are a number of methods through which hypothermia is induced. These include: cooling catheters, cooling blankets, and application of ice applied around the body among others. As of 2013 it is unclear if one method is any better than the others. While cool intravenous fluid may be given to start the process, further methods are required to keep the person cold.\nCore body temperature must be measured (either via the esophagus, rectum, bladder in those who are producing urine, or within the pulmonary artery) to guide cooling. A temperature below should be avoided, as adverse events increase significantly. The person should be kept at the goal temperature plus or minus half a degree Celsius for 24 hours. Rewarming should be done slowly with suggested speeds of per hour.\nTargeted temperature management should be started as soon as possible. The goal temperature should be reached before 8 hours. Targeted temperature management remains partially effective even when initiated as long as 6 hours after collapse.\nPrior to the induction of targeted temperature management, pharmacological agents to control shivering must be administered. When body temperature drops below a certain threshold—typically around —people may begin to shiver. It appears that regardless of the technique used to induce hypothermia, people begin to shiver when temperature drops below this threshold. Drugs commonly used to prevent and treat shivering in targeted temperature management include acetaminophen, buspirone, opioids including pethidine (meperidine), dexmedetomidine, fentanyl, and/or propofol. If shivering is unable to be controlled with these drugs, patients are often placed under general anesthesia and/or are given paralytic medication like vecuronium. People should be rewarmed slowly and steadily in order to avoid harmful spikes in intracranial pressure.", "Targeted temperature management (TTM) previously known as therapeutic hypothermia or protective hypothermia is an active treatment that tries to achieve and maintain a specific body temperature in a person for a specific duration of time in an effort to improve health outcomes during recovery after a period of stopped blood flow to the brain. This is done in an attempt to reduce the risk of tissue injury following lack of blood flow. Periods of poor blood flow may be due to cardiac arrest or the blockage of an artery by a clot as in the case of a stroke.\nTargeted temperature management improves survival and brain function following resuscitation from cardiac arrest. Evidence supports its use following certain types of cardiac arrest in which an individual does not regain consciousness. The target temperature is often between 32–34 °C. Targeted temperature management following traumatic brain injury is of unclear benefit. 
While associated with some complications, these are generally mild.\nTargeted temperature management is thought to prevent brain injury by several methods, including decreasing the brain's oxygen demand, reducing the production of neurotransmitters like glutamate, as well as reducing free radicals that might damage the brain. Body temperature may be lowered by many means, including cooling blankets, cooling helmets, cooling catheters, ice packs and ice water lavage.", "The 2013 ILCOR and 2010 American Heart Association guidelines support the use of cooling following resuscitation from cardiac arrest. These recommendations were largely based on two trials from 2002 which showed improved survival and brain function when cooled to after cardiac arrest.\nHowever, more recent research suggests that there is no benefit to cooling to when compared with less aggressive cooling only to a near-normal temperature of ; it appears cooling is effective because it prevents fever, a common complication seen after cardiac arrest. There is no difference in long term quality of life following mild compared to more severe cooling.\nIn children, following cardiac arrest, cooling does not appear useful as of 2018.", "Hypothermia therapy for neonatal encephalopathy has been proven to improve outcomes for newborn infants affected by perinatal hypoxia-ischemia, hypoxic ischemic encephalopathy or birth asphyxia. A 2013 Cochrane review found that it is useful in full term babies with encephalopathy. Whole body or selective head cooling to , begun within six hours of birth and continued for 72 hours, reduces mortality and reduces cerebral palsy and neurological deficits in survivors.", "Targeted temperature management is used during open-heart surgery because it decreases the metabolic needs of the brain, heart, and other organs, reducing the risk of damage to them. The patient is given medication to prevent shivering. The body is then cooled to 25–32 °C (79–89 °F). The heart is stopped and an external heart-lung pump maintains circulation to the patient's body. The heart is cooled further and is maintained at a temperature below 15 °C (60 °F) for the duration of the surgery. This very cold temperature helps the heart muscle to tolerate its lack of blood supply during the surgery.", "Possible complications may include: infection, bleeding, dysrhythmias and high blood sugar. One review found an increased risk of pneumonia and sepsis but not the overall risk of infection. Another review found a trend towards increased bleeding but no increase in severe bleeding. Hypothermia induces a \"cold diuresis\" which can lead to electrolyte abnormalities – specifically hypokalemia, hypomagnesaemia, and hypophosphatemia, as well as hypovolemia.", "The earliest rationale for the effects of hypothermia as a neuroprotectant focused on the slowing of cellular metabolism resulting from a drop in body temperature. For every one degree Celsius drop in body temperature, cellular metabolism slows by 5–7%. Accordingly, most early hypotheses suggested that hypothermia reduces the harmful effects of ischemia by decreasing the body's need for oxygen. 
The initial emphasis on cellular metabolism explains why the early studies almost exclusively focused on the application of deep hypothermia, as these researchers believed that the therapeutic effects of hypothermia correlated directly with the extent of temperature decline.\nIn the special case of infants with perinatal asphyxia, it appears that apoptosis is a prominent cause of cell death and that hypothermia therapy for neonatal encephalopathy interrupts the apoptotic pathway. In general, cell death is not directly caused by oxygen deprivation, but occurs indirectly as a result of the cascade of subsequent events. Cells need oxygen to create ATP, a molecule used by cells to store energy, and cells need ATP to regulate intracellular ion levels. ATP is used to fuel both the importation of ions necessary for cellular function and the removal of ions that are harmful to cellular function. Without oxygen, cells cannot manufacture the necessary ATP to regulate ion levels and thus cannot prevent the intracellular environment from approaching the ion concentration of the outside environment. It is not oxygen deprivation itself that precipitates cell death, but rather that without oxygen the cell cannot make the ATP it needs to regulate ion concentrations and maintain homeostasis.\nNotably, even a small drop in temperature encourages cell membrane stability during periods of oxygen deprivation. For this reason, a drop in body temperature helps prevent an influx of unwanted ions during an ischemic insult. By making the cell membrane more impermeable, hypothermia helps prevent the cascade of reactions set off by oxygen deprivation. Even moderate dips in temperature strengthen the cellular membrane, helping to minimize any disruption to the cellular environment. Many now postulate that it is by moderating this disruption of homeostasis, caused by a blockage of blood flow, that hypothermia minimizes the trauma resulting from ischemic injuries.\nTargeted temperature management may also help to reduce reperfusion injury, damage caused by oxidative stress when the blood supply is restored to a tissue after a period of ischemia. Various inflammatory immune responses occur during reperfusion. These inflammatory responses cause increased intracranial pressure, which leads to cell injury and, in some situations, cell death. Hypothermia has been shown to help moderate intracranial pressure and therefore to minimize the harmful effects of a patient's inflammatory immune responses during reperfusion. The oxidation that occurs during reperfusion also increases free radical production. Since hypothermia reduces both intracranial pressure and free radical production, this might be yet another mechanism of action for hypothermia's therapeutic effect. Overactivation of N-methyl-D-aspartate (NMDA) receptors following brain injuries can lead to calcium entry, which triggers neuronal death via the mechanisms of excitotoxicity.", "Cooling catheters are inserted into a femoral vein. Cooled saline solution is circulated through either a metal-coated tube or a balloon in the catheter. The saline cools the person's whole body by lowering the temperature of the person's blood. Catheters reduce temperature at rates ranging from per hour. Through the use of the control unit, catheters can bring body temperature to within of the target level. Furthermore, catheters can raise temperature at a steady rate, which helps to avoid harmful rises in intracranial pressure.
A number of studies have demonstrated that targeted temperature management via catheter is safe and effective.\nAdverse events associated with this invasive technique include bleeding, infection, vascular puncture, and deep vein thrombosis (DVT). Infection caused by cooling catheters is particularly harmful, as resuscitated people are highly vulnerable to the complications associated with infections. Bleeding represents a significant danger, due to a decreased clotting threshold caused by hypothermia. The risk of deep vein thrombosis may be the most pressing medical complication.\nDeep vein thrombosis can be characterized as a medical event whereby a blood clot forms in a deep vein, usually the femoral vein. This condition may become potentially fatal if the clot travels to the lungs and causes a pulmonary embolism. Another potential problem with cooling catheters is the potential to block access to the femoral vein, which is a site normally used for a variety of other medical procedures, including angiography of the venous system and the right side of the heart. However, most cooling catheters are triple lumen catheters, and the majority of people post-arrest will require central venous access. Unlike non-invasive methods which can be administered by nurses, the insertion of cooling catheters must be performed by a physician fully trained and familiar with the procedure. The time delay between identifying a person who might benefit from the procedure and the arrival of an interventional radiologist or other physician to perform the insertion may minimize some of the benefit of invasive methods' more rapid cooling.", "Transnasal evaporative cooling is a method of inducing the hypothermia process and provides a means of continuous cooling of a person throughout the early stages of targeted temperature management and during movement throughout the hospital environment. This technique uses two cannulae, inserted into a person's nasal cavity, to deliver a spray of coolant mist that evaporates directly underneath the brain and base of the skull. As blood passes through the cooling area, it reduces the temperature throughout the rest of the body.\nThe method is compact enough to be used at the point of cardiac arrest, during ambulance transport, or within the hospital proper. It is intended to reduce rapidly the person's temperature to below while targeting the brain as the first area of cooling. Research into the device has shown cooling rates of per hour in the brain (measured through infrared tympanic measurement) and per hour for core body temperature reduction.", "With these technologies, cold water circulates through a blanket, or torso wraparound vest and leg wraps. To lower temperature with optimal speed, 70% of a persons surface area should be covered with water blankets. The treatment represents the most well studied means of controlling body temperature. Water blankets lower a persons temperature exclusively by cooling a person's skin and accordingly require no invasive procedures.\nWater blankets possess several undesirable qualities. They are susceptible to leaking, which may represent an electrical hazard since they are operated in close proximity to electrically powered medical equipment. The Food and Drug Administration also has reported several cases of external cooling blankets causing significant burns to the skin of person. 
Other problems with external cooling include overshoot of temperature (20% of people will have overshoot), slower induction time versus internal cooling, increased compensatory response, decreased patient access, and discontinuation of cooling for invasive procedures such as cardiac catheterization.\nIf therapy with water blankets is given along with two litres of cold intravenous saline, people can be cooled to in 65 minutes. Most machines now come with core temperature probes. When the probe is inserted into the rectum, core body temperature is monitored and fed back to the machine, which adjusts the water blanket to achieve the desired set temperature. In the past, some models of cooling machine overshot the target temperature and cooled people to levels below , resulting in increased adverse events. They have also rewarmed patients at too fast a rate, leading to spikes in intracranial pressure. Some of the newer models have software that attempts to prevent this overshoot by using warmer water as the target temperature approaches (a schematic sketch of this kind of feedback loop appears further below). Some of the newer machines also have three rates of cooling and warming; a rewarming rate on one of these machines allows a patient to be rewarmed at a very slow rate of just an hour in the \"automatic mode\", allowing rewarming from to over 24 hours.", "There are a number of non-invasive head cooling caps and helmets designed to target cooling at the brain. A hypothermia cap is typically made of a synthetic material such as neoprene, silicone, or polyurethane and filled with a cooling agent such as ice or gel, which is either cooled to a very cold temperature, , before application or continuously cooled by an auxiliary control unit. Their most notable uses are in preventing or reducing alopecia in chemotherapy, and in preventing cerebral palsy in babies born with hypoxic ischemic encephalopathy. In the continuously cooled iteration, coolant is cooled with the aid of a compressor and pumped through the cooling cap. Circulation is regulated by means of valves and temperature sensors in the cap. If the temperature deviates or if other errors are detected, an alarm system is activated. The frozen iteration involves continuous application of caps filled with Crylon gel cooled to to the scalp before, during and after intravenous chemotherapy. As the caps warm on the head, multiple cooled caps must be kept on hand and applied every 20 to 30 minutes.", "TTM has been studied in several use scenarios where it has not usually been found to be helpful, or is still under investigation, despite theoretical grounds for its usefulness.", "There is currently no evidence supporting targeted temperature management use in humans for stroke, and clinical trials have not been completed. Most of the data concerning hypothermia's effectiveness in treating stroke is limited to animal studies. These studies have focused primarily on ischemic stroke as opposed to hemorrhagic stroke, as hypothermia is associated with a lower clotting threshold. In these animal studies, hypothermia proved to be an effective neuroprotectant. The use of hypothermia to control intracranial pressure (ICP) after an ischemic stroke was found to be both safe and practical.", "Animal studies have shown the benefit of targeted temperature management in traumatic central nervous system (CNS) injuries. Clinical trials have shown mixed results with regard to the optimal temperature and delay of cooling. 
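Returning briefly to the cooling hardware described earlier in this section: the water-blanket feedback behaviour (a rectal core-temperature probe feeding back to the control unit, progressively warmer circulating water as the target is approached to limit overshoot, and a deliberately slow rewarming rate) can be summarised in a minimal sketch. The Python fragment below is purely illustrative and is not any manufacturer's algorithm; the gain, water-temperature limits, and rewarming rate are hypothetical values.

# Minimal sketch of the closed-loop water-blanket control described above.
# All numeric values (gain, water limits, rewarming rate) are hypothetical.

def command_water_temperature(core_c, target_c, gain=6.0, water_min_c=4.0, water_max_c=40.0):
    """Proportional command: the further the core is from target, the colder the water;
    near the target the commanded water approaches body temperature, limiting overshoot."""
    error = core_c - target_c                    # positive while the patient is too warm
    command = core_c - gain * error              # cool hard when far from target, gently when close
    return max(water_min_c, min(water_max_c, command))

def ramp_setpoint(current_setpoint_c, final_target_c, dt_h, max_rewarm_c_per_h=0.15):
    """During rewarming, raise the working setpoint no faster than a slow fixed rate,
    mirroring the very slow 'automatic mode' rewarming described in the text."""
    step = final_target_c - current_setpoint_c
    if step > 0:                                 # rewarming is rate-limited; cooling is not
        step = min(step, max_rewarm_c_per_h * dt_h)
    return current_setpoint_c + step

# Hypothetical example: a patient at 36.8 degrees C being cooled toward 33.0 degrees C.
print(command_water_temperature(36.8, 33.0))     # very cold water while far from target
print(command_water_temperature(33.2, 33.0))     # much warmer water once close to target

In a real device the loop runs continuously against the probe reading; the point of the sketch is only that the commanded water temperature converges on body temperature as the error shrinks, which is what prevents overshoot.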
Achieving therapeutic temperatures of is thought to prevent secondary neurological injuries after severe CNS trauma. A systematic review of randomised controlled trials in traumatic brain injury (TBI) suggests there is no evidence that hypothermia is beneficial.", "A clinical trial in cardiac arrest patients showed that hypothermia improved neurological outcome and reduced mortality. A retrospective study of the use of hypothermia for cardiac arrest patients showed favorable neurological outcome and survival. Osborn waves on electrocardiogram (ECG) are frequent during TTM after cardiac arrest, particularly in patients treated with 33 °C. Osborn waves are not associated with increased risk of ventricular arrhythmia, and may be considered a benign physiological phenomenon, associated with lower mortality in univariable analyses.", "As of 2015 hypothermia had shown no improvements in neurological outcomes or in mortality in neurosurgery.", "Hypothermia has been applied therapeutically since antiquity. The Greek physician Hippocrates, the namesake of the Hippocratic Oath, advocated the packing of wounded soldiers in snow and ice. Napoleonic surgeon Baron Dominique Jean Larrey recorded that officers who were kept closer to the fire survived less often than the minimally pampered infantrymen. In modern times, the first medical article concerning hypothermia was published in 1945. This study focused on the effects of hypothermia on patients with severe head injury. In the 1950s, hypothermia received its first medical application, being used in intracerebral aneurysm surgery to create a bloodless field. Most of the early research focused on the applications of deep hypothermia, defined as a body temperature of . Such an extreme drop in body temperature brings with it a whole host of side effects, which made the use of deep hypothermia impractical in most clinical situations.\nThis period also saw sporadic investigation of more mild forms of hypothermia, with mild hypothermia being defined as a body temperature of . In the 1950s, Doctor Rosomoff demonstrated in dogs the positive effects of mild hypothermia after brain ischemia and traumatic brain injury. In the 1980s further animal studies indicated the ability of mild hypothermia to act as a general neuroprotectant following a blockage of blood flow to the brain. This animal data was supported by two landmark human studies that were published simultaneously in 2002 by the New England Journal of Medicine. Both studies, one occurring in Europe and the other in Australia, demonstrated the positive effects of mild hypothermia applied following cardiac arrest. Responding to this research, in 2003 the American Heart Association (AHA) and the International Liaison Committee on Resuscitation (ILCOR) endorsed the use of targeted temperature management following cardiac arrest. Currently, a growing percentage of hospitals around the world incorporate the AHA/ILCOR guidelines and include hypothermic therapies in their standard package of care for patients with cardiac arrest. Some researchers go so far as to contend that hypothermia represents a better neuroprotectant following a blockage of blood to the brain than any known drug. Over this same period a particularly successful research effort showed that hypothermia is a highly effective treatment when applied to newborn infants following birth asphyxia. 
Meta-analysis of a number of large randomised controlled trials showed that hypothermia for 72 hours started within 6 hours of birth significantly increased the chance of survival without brain damage.", "Tengion, Inc. is an American development-stage regenerative medicine company headquartered in Winston-Salem, North Carolina, founded in 2003 with financing from J&J Development Corporation, HealthCap and Oak Investment Partners. Its goals are discovering, developing, manufacturing and commercializing a range of replacement organs and tissues, or neo-organs and neo-tissues, to address unmet medical needs in urologic, renal, gastrointestinal, and vascular diseases and disorders. The company creates these human neo-organs from a patient’s own cells, or autologous cells, in conjunction with its Organ Regeneration Platform. \nThe company declared Chapter 7 bankruptcy in December 2014, and, along with its assets and tissue engineering samples, it was bought back by its creditors and former executives as of March 2015. The purchase was expedited so that time-sensitive research could continue.", "All of Tengion's current regenerative medicine product candidates are investigational and will not be commercially available until the completion of clinical trials and the review and approval of associated marketing applications by the Food and Drug Administration.", "Its most advanced candidate is the Neo-Urinary Conduit. A Phase I clinical trial of the Tengion Neo-Urinary Conduit was completed at several health care institutions in patients with bladder cancer who required a total cystectomy. The trial ended in December 2014; however, information on the results has not yet been made publicly available.\nThe company also develops the Neo-Bladder Augment, a Phase II clinical trial product for the treatment of neurogenic bladder resulting from spina bifida in pediatric patients, as well as neurogenic bladder resulting from spinal cord injury in adult patients; the Neo-Bladder Replacement to serve as a functioning bladder, eliminating the need for an ostomy bag, for patients who have their bladders removed due to cancer; and the Neo-Kidney Augment to prevent or delay dialysis by increasing renal function in patients with advanced chronic kidney disease.", "In addition, it is involved in developing the Neo-GI Augment, a gastrointestinal development program; and the Neo-Vessel Replacement, which targets various blood vessel applications consisting of vascular access grafts and arteriovenous shunts for patients with ESRD (end-stage renal disease) undergoing hemodialysis treatment, as well as vessel replacement for patients undergoing coronary or peripheral artery bypass procedures.", "Founded in 2003 and formerly headquartered in East Norriton Township, Pennsylvania before moving to Winston-Salem, North Carolina in 2012, Tengion went public in 2010, after its stock had been approved for listing on the NASDAQ, through a $26 million IPO to help advance its research and development activities. Some of the groundbreaking regenerative medicine technologies of Dr. Anthony Atala, director of the Wake Forest Institute for Regenerative Medicine, formed the core from which those research and development activities developed. 
\nOn September 4, 2012, Tengion received a notice from NASDAQ stating that the company had not regained compliance with NASDAQ Listing Rule 5550(b)(1) and that its common stock would cease trading on the NASDAQ Capital Market effective on September 6, 2012, and would begin trading on the OTCQB tier of the OTC Marketplace. The company was bought by former executives and creditors after declaring bankruptcy in 2014.", "Tetrabromoauric acid is an inorganic compound with the formula . It is the bromide analog of chloroauric acid. It is generated analogously, by reacting a mixture of hydrobromic and nitric acids with elemental gold. The oxidation state of gold in and anion is +3. The salts of (tetrabromoauric(III) acid) are tetrabromoaurates(III), containing anions (tetrabromoaurate(III) anions), which have square planar molecular geometry.", "High energy radiation creates electronic excited states in crystalline materials. In some materials, these states are trapped, or arrested, for extended periods of time by localized defects, or imperfections, in the lattice interrupting the normal intermolecular or inter-atomic interactions in the crystal lattice. Quantum-mechanically, these states are stationary states which have no formal time dependence; however, they are not stable energetically, as vacuum fluctuations are always \"prodding\" these states. Heating the material enables the trapped states to interact with phonons, i.e. lattice vibrations, to rapidly decay into lower-energy states, causing the emission of photons in the process.", "Thermoluminescence is a form of luminescence that is exhibited by certain crystalline materials, such as some minerals, when previously absorbed energy from electromagnetic radiation or other ionizing radiation is re-emitted as light upon heating of the material. The phenomenon is distinct from that of black-body radiation.", "The amount of luminescence is proportional to the original dose of radiation received. In thermoluminescence dating, this can be used to date buried objects that have been heated in the past, since the ionizing dose received from radioactive elements in the soil or from cosmic rays is proportional to age. This phenomenon has been applied in the thermoluminescent dosimeter, a device to measure the radiation dose received by a chip of suitable material that is carried by a person or placed with an object.\nThermoluminescence is a common geochronology tool for dating pottery or other fired archeological materials, as heat empties or resets the thermoluminescent signature of the material (Figure 1). Subsequent recharging of this material from ambient radiation can then be empirically dated by the equation:\nAge = (subsequently accumulated dose of ambient radiation) / (dose accumulated per year)\nThis technique was modified for use as a passive sand migration analysis tool (Figure 2). The research shows direct consequences resulting from the improper replenishment of starving beaches using fine sands. Beach nourishment is a problem worldwide and receives large amounts of attention due to the millions of dollars spent yearly in order to keep beaches beautified for tourists, e.g. in Waikiki, Hawaii. Sands with sizes 90–150 μm (very fine sand) were found to migrate from the swash zone 67% faster than sand grains of 150-212 μm (fine sand; Figure 3). 
Furthermore, the technique was shown to provide a passive method of policing sand replenishment and a passive method of observing riverine or other sand inputs along shorelines (Figure 4).", "Thermoluminescence dating (TL) is the determination, by means of measuring the accumulated radiation dose, of the time elapsed since material containing crystalline minerals was either heated (lava, ceramics) or exposed to sunlight (sediments). As a crystalline material is heated during measurements, the process of thermoluminescence starts. Thermoluminescence emits a weak light signal that is proportional to the radiation dose absorbed by the material. It is a type of luminescence dating.\nThe technique has wide application, and is relatively cheap at some US$300–700 per object; ideally a number of samples are tested. Sediments are more expensive to date. The destruction of a relatively significant amount of sample material is necessary, which can be a limitation in the case of artworks. The heating must have taken the object above 500 °C, which covers most ceramics, although very high-fired porcelain creates other difficulties. It will often work well with stones that have been heated by fire. The clay core of bronze sculptures made by lost wax casting can also be tested.\nDifferent materials vary considerably in their suitability for the technique, depending on several factors. Subsequent irradiation, for example if an x-ray is taken, can affect accuracy, as will the \"annual dose\" of radiation a buried object has received from the surrounding soil. Ideally this is assessed by measurements made at the precise findspot over a long period. For artworks, it may be sufficient to confirm whether a piece is broadly ancient or modern (that is, authentic or a fake), and this may be possible even if a precise date cannot be estimated.", "Oxford Authentication® Ltd authenticates ceramic antiquities using the scientific technique of thermoluminescence (TL). TL testing is a dating method for archaeological items which can distinguish between genuine and fake antiquities.", "Natural crystalline materials contain imperfections: impurity ions, stress dislocations, and other phenomena that disturb the regularity of the electric field that holds the atoms in the crystalline lattice together. These imperfections lead to local humps and dips in the crystalline material's electric potential. Where there is a dip (a so-called \"electron trap\"), a free electron may be attracted and trapped.\nThe flux of ionizing radiation&mdash;both from cosmic radiation and from natural radioactivity&mdash;excites electrons from atoms in the crystal lattice into the conduction band where they can move freely. Most excited electrons will soon recombine with lattice ions, but some will be trapped, storing part of the energy of the radiation in the form of trapped electric charge (Figure 1).\nDepending on the depth of the traps (the energy required to free an electron from them), the storage time of trapped electrons will vary; some traps are sufficiently deep to store charge for hundreds of thousands of years.", "Thermoluminescence dating is used for material where radiocarbon dating is not available, like sediments. Its use is now common in the authentication of old ceramic wares, for which it gives the approximate date of the last firing. 
An example of this can be seen in [http://www.antiquity.ac.uk/ant/079/ant0790390.htm Rink and Bartoll, 2005].\nThermoluminescence dating was modified for use as a passive sand migration analysis tool by [http://www.jcronline.org/perlserv/?request=get-abstract&doi=10.2112%2F04-0406.1 Keizars, et al., 2008] (Figure 3), demonstrating the direct consequences resulting from the improper replenishment of starving beaches using fine sands, as well as providing a passive method of policing sand replenishment and observing riverine or other sand inputs along shorelines (Figure 4).", "Optically stimulated luminescence dating is a related measurement method which replaces heating with exposure to intense light. The sample material is illuminated with a very bright source of green or blue light (for quartz) or infrared light (for potassium feldspar). Ultraviolet light emitted by the sample is detected for measurement.", "Another important technique in testing samples from a historic or archaeological site is a process known as thermoluminescence testing, which involves the principle that all objects absorb radiation from the environment. This process frees electrons within elements or minerals, and these electrons remain caught within the item. Thermoluminescence testing involves heating a sample until it releases a type of light, which is then measured to determine the last time the item was heated.\nIn thermoluminescence dating, these long-term traps are used to determine the age of materials: When irradiated crystalline material is again heated or exposed to strong light, the trapped electrons are given sufficient energy to escape. In the process of recombining with a lattice ion, they lose energy and emit photons (light quanta), detectable in the laboratory.\nThe amount of light produced is proportional to the number of trapped electrons that have been freed, which is in turn proportional to the radiation dose accumulated. In order to relate the signal (the thermoluminescence&mdash;light produced when the material is heated) to the radiation dose that caused it, it is necessary to calibrate the material with known doses of radiation, since the density of traps is highly variable.\nThermoluminescence dating presupposes a \"zeroing\" event in the history of the material, either heating (in the case of pottery or lava) or exposure to sunlight (in the case of sediments), that removes the pre-existing trapped electrons. Therefore, at that point the thermoluminescence signal is zero.\nAs time goes on, the ionizing radiation field around the material causes the trapped electrons to accumulate (Figure 2). In the laboratory, the accumulated radiation dose can be measured, but this by itself is insufficient to determine the time since the zeroing event.\nThe radiation dose rate&mdash;the dose accumulated per year&mdash;must be determined first. This is commonly done by measurement of the alpha radioactivity (the uranium and thorium content) and the potassium content (K-40 is a beta and gamma emitter) of the sample material.\nOften the gamma radiation field at the position of the sample material is measured, or it may be calculated from the alpha radioactivity and potassium content of the sample environment, and the cosmic ray dose is added in. 
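The arithmetic behind this procedure can be sketched briefly. The short Python fragment below is only an illustration; the calibration factor, dose-rate components, and sample values in it are hypothetical rather than measured data.

# Illustrative sketch of the thermoluminescence age calculation described above.
# All numbers are hypothetical; real work requires laboratory calibration of the
# TL signal against known radiation doses and site-specific dose-rate measurements.

def paleodose_gy(tl_signal, gy_per_signal_unit):
    """Convert the measured TL light signal into the accumulated dose (paleodose),
    using a calibration factor obtained by irradiating the sample with known doses."""
    return tl_signal * gy_per_signal_unit

def annual_dose_gy(alpha, beta, gamma, cosmic):
    """Total dose accumulated per year: alpha, beta, gamma and cosmic-ray components."""
    return alpha + beta + gamma + cosmic

def tl_age_years(paleodose, annual_dose):
    """Age = accumulated dose / dose accumulated per year (years since the zeroing event)."""
    return paleodose / annual_dose

# Hypothetical example: a paleodose of 12.6 Gy and an annual dose of 3.5 mGy per year
# correspond to an age of about 3600 years since the last firing or sunlight exposure.
print(round(tl_age_years(paleodose=12.6, annual_dose=3.5e-3)))   # -> 3600

In practice each of these quantities carries measurement uncertainty, which propagates into the quoted age.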
Once all components of the radiation field are determined, the accumulated dose from the thermoluminescence measurements is divided by the dose accumulating each year, to obtain the years since the zeroing event.", "*[http://www.users.globalnet.co.uk/~qtls/ GlobalNet.co.uk], Quaternary TL Surveys - Guide to thermoluminescence date measurement\n*Aitken, M.J., Thermoluminescence Dating, Academic Press, London (1985) &ndash; Standard text for introduction to the field. Quite complete and rather technical, but well written and well organized. There is a second edition.\n*Aitken, M.J., Introduction to Optical Dating, Oxford University Press (1998) &ndash; Good introduction to the field.\n*Keizars, K.Z. 2003. NRTL as a method of analysis of sand transport along the coast of the St. Joseph Peninsula, Florida. GAC/MAC 2003. Presentation: Brock University, St. Catharines, Ontario, Canada.\n* [http://www.jcronline.org/perlserv/?request=get-abstract&doi=10.2112%2F04-0406.1 JCRonline.org], Ķeizars, Z., Forrest, B., Rink, W.J. 2008. Natural Residual Thermoluminescence as a Method of Analysis of Sand Transport along the Coast of the St. Joseph Peninsula, Florida. Journal of Coastal Research, 24: 500–507.\n*Keizars, Z. 2008b. NRTL trends observed in the sands of St. Joseph Peninsula, Florida. Queens University. Presentation: Queens University, Kingston, Ontario, Canada.\n*Liritzis, I., 2011. Surface Dating by Luminescence: An Overview. Geochronometria, 38(3): 292–302.\n*Mortlock, AJ; Price, D and Gardiner, G. The Discovery and Preliminary Thermoluminescence Dating of Two Aboriginal Cave Shelters in the Selwyn Ranges, Queensland [online]. Australian Archaeology, No. 9, Nov 1979: 82–86. Availability: <[https://archive.today/20150204113920/http://search.informit.com.au/documentSummary;dn=993492375664325;res=IELHSS]> . [cited 04 Feb 15].\n* [http://www.antiquity.ac.uk/ant/079/ant0790390.htm Antiquity.ac.uk], Rink, W. J., Bartoll, J. 2005. Dating the geometric Nasca lines in the Peruvian desert. Antiquity, 79: 390–401.\n*Sullasi, H. S., Andrade, M. B., Ayta, W. E. F., Frade, M., Sastry, M. D., & Watanabe, S. (2004). Irradiation for dating Brazilian fish fossil by thermoluminescence and EPR technique. Nuclear Instruments and Methods in Physics Research Section B: Beam Interactions with Materials and Atoms, 213, 756–760.[http://doi:10.1016/S0168-583X(03)01698-7 doi:10.1016/S0168-583X(03)01698-7]", "The compound has been prepared in a multistep process starting with the base hydrolysis of phosphorus pentasulfide to give dithiophosphate, which is isolated as its barium salt:\nIn a second stage, the barium salt is decomposed with sulfuric acid, precipitating barium sulfate and liberating free dithiophosphoric acid:\nUnder controlled conditions, dithiophosphoric acid hydrolyses to give the monothioderivative:", "Thiophosphoric acid is an inorganic compound with the chemical formula . Structurally, it is the acid derived from phosphoric acid with one oxygen atom replaced by sulfur atom, although it cannot be prepared from phosphoric acid. It is a colorless compound that is rarely isolated in pure form, but rather as a solution. The structure of the compound has not been reported, but two tautomers are reasonable: and .", "** HIF-2010 Symposium in Darmstadt, Germany. 
Robert J Burke presented on Single Pass (Heavy Ion Fusion) HIF and Charles Helsley made a presentation on the commercialization of HIF within the decade.\n** May 23–26, Workshop for Accelerators for Heavy Ion Fusion at Lawrence Berkeley National Laboratory, presentation by Robert J. Burke on \"Single Pass Heavy Ion Fusion\". The Accelerator Working Group publishes recommendations supporting moving RF accelerator driven HIF toward commercialization.\n** Stephen Slutz & Roger Vesey of Sandia National Labs publish a paper in Physical Review Letters presenting a computer simulation of the MagLIF concept showing it can produce high gain. According to the simulation, a 70 Mega Amp Z-pinch facility in combination with a Laser may be able to produce a spectacular energy return of 1000 times the expended energy. A 60 MA facility would produce a 100x yield.\n** JET announces a major breakthrough in controlling instabilities in a fusion plasma. [http://phys.org/news/2012-01-closer-nuclear-fusion.html?=y One step closer to controlling nuclear fusion]\n** In August Robert J. Burke presents updates to the SPRFD HIF process and Charles Helsley presents the Economics of SPRFD at the 19th International HIF Symposium at Berkeley, California. Industry was there in support of ion generation for SPRFD. The Fusion Power Corporation SPRFD patent is granted in Russia.\n** China's EAST tokamak test reactor achieves a record confinement time of 30 seconds for plasma in the high-confinement mode (H-mode), thanks to improvements in heat dispersal from tokamak walls. This is an improvement of an order of magnitude with respect to state-of-the-art reactors.\n** Construction of JT-60SA begins in January.\n** US Scientists at NIF successfully generate more energy from fusion reactions than the energy absorbed by the nuclear fuel.\n** Phoenix Nuclear Labs announces the sale of a high-yield neutron generator that could sustain 5×10 deuterium fusion reactions per second over a 24-hour period.\n** On 9 October 2014, fusion research bodies from European Union member states and Switzerland signed an agreement to cement European collaboration on fusion research and EUROfusion, the European Consortium for the Development of Fusion Energy, was born.\n** Germany conducts the first plasma discharge in Wendelstein 7-X, a large-scale stellarator capable of steady-state plasma confinement under fusion conditions.\n** In January the polywell is presented at Microsoft Research.\n** In August, MIT announces the ARC fusion reactor, a compact tokamak using rare-earth barium-copper oxide (REBCO) superconducting tapes to produce high-magnetic field coils that it claims produce comparable magnetic field strength in a smaller configuration than other designs.\n** The Wendelstein 7-X produces the device's first hydrogen plasma.\n** China's EAST tokamak test reactor achieves a stable 101.2-second steady-state high confinement plasma, setting a world record in long-pulse H-mode operation on the night of July 3.\n** Helion Energy's fifth-generation plasma machine goes into operation, seeking to achieve plasma density of 20 Tesla and fusion temperatures.\n** UK company Tokamak Energy's ST40 fusion reactor generates first plasma.\n** TAE Technologies announces that the Norman reactor had achieved plasma.\n** Energy corporation Eni announces a $50 million investment in start-up Commonwealth Fusion Systems, to commercialize ARC technology via the SPARC test reactor in collaboration with MIT.\n** MIT scientists formulate a theoretical means to remove 
the excess heat from compact nuclear fusion reactors via larger and longer divertors.\n** General Fusion begins developing a 70% scale demo system to be completed around 2023.\n** TAE Technologies announces its reactor has reached a high temperature of nearly 20 million°C.\n** The Fusion Industry Association founded as an initiative in 2018, is the unified voice of the fusion industry, working to transform the energy system with commercially viable fusion power.\n** The United Kingdom announces a planned £200-million (US$248-million) investment to produce a design for the Spherical Tokamak for Energy Production (STEP) fusion facility around 2040.", "** Building construction for the immense 192-beam 500-terawatt NIF project is completed and construction of laser beam-lines and target bay diagnostics commences, expecting to take its first full system shot in 2010.\n** Negotiations on the Joint Implementation of ITER begin between Canada, countries represented by the European Union, Japan and Russia.\n** Claims and counter-claims are published regarding bubble fusion, in which a table-top apparatus was reported as producing small-scale fusion in a liquid undergoing acoustic cavitation. Like cold fusion (see 1989), it is later dismissed.\n** European Union proposes Cadarache in France and Vandellos in Spain as candidate sites for ITER while Japan proposes Rokkasho.\n** The United States rejoins the ITER project with China and Republic of Korea also joining. Canada withdraws.\n** Cadarache in France is selected as the European Candidate Site for ITER.\n** Sandia National Laboratories begins fusion experiments in the Z machine.\n** The United States drops its own ITER-scale tokamak project, FIRE, recognising an inability to match EU progress.\n** Following final negotiations between the EU and Japan, ITER chooses Cadarache over Rokkasho for the site of the reactor. In concession, Japan is able to host the related materials research facility and granted rights to fill 20% of the project's research posts while providing 10% of the funding.\n** The NIF fires its first bundle of eight beams achieving the highest ever energy laser pulse of 152.8 kJ (infrared).\n** China's EAST test reactor is completed, the first tokamak experiment to use superconducting magnets to generate both the toroidal and poloidal fields.\n** Construction of the NIF reported as complete.\n** Ricardo Betti, the third Under Secretary, responsible for Nuclear Energy, testifies before Congress: \"IFE [ICF for energy production] has no home\".", "** Decision to construct the National Ignition Facility \"beamlet\" laser at LLNL is made.\n** The START Tokamak fusion experiment begins in Culham. The experiment would eventually achieve a record beta (plasma pressure compared to magnetic field pressure) of 40% using a neutral beam injector. It was the first design that adapted the conventional toroidal fusion experiments into a tighter spherical design.\n** The JT-60 tokamak was upgraded to JT-60U in March.\n** The Engineering Design Activity for the ITER starts with participants EURATOM, Japan, Russia and United States. 
It ended in 2001.\n** The United States and the former republics of the Soviet Union cease nuclear weapons testing.\n** The TFTR tokamak at Princeton (PPPL) experiments with a 50% deuterium, 50% tritium mix, eventually producing as much as 10 megawatts of power from a controlled fusion reaction.\n** NIF Beamlet laser is completed and begins experiments validating the expected performance of NIF.\n** The USA declassifies information about indirectly driven (hohlraum) target design.\n** Comprehensive European-based study of HIF driver begins, centered at the Gesellschaft für Schwerionenforschung (GSI) and involving 14 laboratories, including USA and Russia. The Heavy Ion Driven Inertial Fusion (HIDIF) study will be completed in 1997.\n** A record is reached at Tore Supra: a plasma duration of two minutes with a current of almost 1 million amperes driven non-inductively by 2.3 MW of lower hybrid frequency waves (i.e. 280 MJ of injected and extracted energy). This result was possible due to the actively cooled plasma-facing components installed in the machine.\n** The JT-60U tokamak achieves extrapolated breakeven at Q = 1.05.\n** The JET tokamak in the UK produces 16 MW of fusion power - this remains the world record for fusion power until 2022 when JET sets an even higher record. Four megawatts of alpha particle self-heating was achieved.\n** LLNL study compared projected costs of power from ICF and other fusion approaches to the projected future costs of existing energy sources.\n** Groundbreaking ceremony held for the National Ignition Facility (NIF).\n** The JT-60 tokamak in Japan produced a high performance reversed shear plasma with the equivalent fusion amplification factor of 1.25 - the current world record of Q, fusion energy gain factor.\n** Results of European-based study of heavy ion driven fusion power system (HIDIF, GSI-98-06) incorporates telescoping beams of multiple isotopic species. This technique multiplies the 6-D phase space usable for the design of HIF drivers.\n** The United States withdraws from the ITER project.\n** The START experiment is succeeded by MAST.", "** HIBALL study by German and US institutions, Garching uses the high repetition rate of the RF accelerator driver to serve four reactor chambers and first-wall protection using liquid lithium inside the chamber cavity.\n** Tore Supra construction starts at Cadarache, France. Its superconducting magnets will permit it to generate a strong permanent toroidal magnetic field.\n** high-confinement mode (H-mode) discovered in tokamaks.\n** JET, the largest operational magnetic confinement plasma physics experiment is completed on time and on budget. First plasmas achieved.\n** The NOVETTE laser at LLNL comes on line and is used as a test bed for the next generation of ICF lasers, specifically the NOVA laser.\n** The huge 10 beam NOVA laser at LLNL is completed and switches on in December. NOVA would ultimately produce a maximum of 120 kilojoules of infrared laser light during a nanosecond pulse in a 1989 experiment.\n** National Academy of Sciences reviewed military ICF programs, noting HIF's major advantages clearly but averring that HIF was \"supported primarily by other [than military] programs\". The review of ICF by the National Academy of Sciences marked the trend with the observation: \"The energy crisis is dormant for the time being.\" Energy becomes the sole purpose of heavy ion fusion. \n** The Japanese tokamak, JT-60 completed. 
First plasmas achieved.\n** The T-15, Soviet tokamak with superconducting helium-cooled coils completed.\n** The Conceptual Design Activity for the International Thermonuclear Experimental Reactor (ITER), the successor to T-15, TFTR, JET and JT-60, begins. Participants include EURATOM, Japan, the Soviet Union and United States. It ended in 1990.\n** The first plasma produced at Tore Supra in April.\n** On March 23, two Utah electrochemists, Stanley Pons and Martin Fleischmann, announced that they had achieved cold fusion: fusion reactions which could occur at room temperatures. However, they made their announcements before any peer review of their work was performed, and no subsequent experiments by other researchers revealed any evidence of fusion.", "** Princeton's conversion of the Model C stellarator to the Symmetrical Tokamak is completed, and tests match and then best the Soviet results. With an apparent solution to the magnetic bottle problem in-hand, plans begin for a larger machine to test the scaling and various methods to heat the plasma.\n** Kapchinskii and Teplyakov introduce a particle accelerator for heavy ions that appear suitable as an ICF driver in place of lasers.\n** The first neodymium-doped glass (Nd:glass) laser for ICF research, the \"Long Path laser\" is completed at LLNL and is capable of delivering ~50 joules to a fusion target.\n** Design work on JET, the Joint European Torus, begins.\n** J.B. Taylor re-visited ZETA results of 1958 and explained that the quiet-period was in fact very interesting. This led to the development of reversed field pinch, now generalised as \"self-organising plasmas\", an ongoing line of research.\n** KMS Fusion, a private-sector company, builds an ICF reactor using laser drivers. Despite limited resources and numerous business problems, KMS successfully compresses fuel in December 1973, and on 1 May 1974 successfully demonstrates the world's first laser-induced fusion. Neutron-sensitive nuclear emulsion detectors, developed by Nobel Prize winner Robert Hofstadter, were used to provide evidence of this discovery.\n** Beams using mature high-energy accelerator technology are hailed as the elusive \"brand-X\" driver capable of producing fusion implosions for commercial power. The Livingston Curve, which illustrates the improvement in power of particle accelerators over time, is modified to show the energy needed for fusion to occur. Experiments commence on the single beam LLNL Cyclops laser, testing new optical designs for future ICF lasers.\n** The Princeton Large Torus (PLT), the follow-on to the Symmetrical Tokamak, begins operation. It soon surpasses the best Soviet machines and sets several temperature records that are above what is needed for a commercial reactor. PLT continues to set records until it is decommissioned.\n** Workshop, called by the US-ERDA (now DoE) at the Claremont Hotel in Berkeley, CA for an ad-hoc two-week summer study. Fifty senior scientists from the major US ICF programs and accelerator laboratories participated, with program heads and Nobel laureates also attending. In the closing address, Dr. C. Martin Stickley, then Director of US-ERDA's Office of Inertial Fusion, announced the conclusion was \"no showstoppers\" on the road to fusion energy. \n** The two beam Argus laser is completed at LLNL and experiments involving more advanced laser-target interactions commence.\n** Based on the continued success of the PLT, the DOE selects a larger Princeton design for further development. 
Although the design was initially intended simply to test a commercial-sized tokamak, the DOE team instead gives the project the explicit goal of running on deuterium-tritium fuel as opposed to test fuels like hydrogen or deuterium. The project is given the name Tokamak Fusion Test Reactor (TFTR).\n** The 20-beam Shiva laser at LLNL is completed, capable of delivering 10.2 kilojoules of infrared energy on target. At a price of $25 million and a size approaching that of a football field, the Shiva laser is the first of the \"megalasers\" at LLNL and brings the field of ICF research fully within the realm of \"big science\".\n** The JET project is given the go-ahead by the EC, choosing the UK's center at Culham as its site.\n** As PLT continues to set new records, Princeton is given additional funding to adapt TFTR with the explicit goal of reaching breakeven.\n** LANL successfully demonstrates the radio frequency quadrupole accelerator (RFQ).\n** ANL and Hughes Research Laboratories demonstrate the required ion source brightness with a xenon beam at 1.5 MeV. \n** The Foster Panel report to US-DoE's Energy Research Advisory Board on ICF concludes that heavy ion fusion (HIF) is the \"conservative approach\" to ICF. Listing HIF's advantages in his report, John Foster remarked: \"...now that is kind of exciting.\" After the DoE Office of Inertial Fusion completed its review of programs, Director Gregory Canavan decides to accelerate the HIF effort.", "** Assembly of ITER, which has been under construction for years, commences.\n** The Chinese experimental nuclear fusion reactor HL-2M is turned on for the first time, achieving its first plasma discharge.\n** China's EAST tokamak sets a new world record for superheated plasma, sustaining a temperature of 120 million degrees Celsius for 101 seconds and a peak of 160 million degrees Celsius for 20 seconds.\n** The National Ignition Facility achieves generation of 70% of the input energy from inertial confinement fusion, approaching the level necessary to sustain fusion; this is an 8x improvement over previous experiments in spring 2021 and a 25x increase over the yields achieved in 2018.\n** The first Fusion Industry Association report, \"The global fusion industry in 2021\", is published.\n** China's Experimental Advanced Superconducting Tokamak (EAST), a nuclear fusion reactor research facility, sustained plasma at 70 million degrees Celsius for as long as 1,056 seconds (17 minutes, 36 seconds), achieving a new world record for sustained high temperatures (fusion energy generation, however, requires among other things temperatures over 150 million °C).\n** The Joint European Torus in Oxford, UK, reports 59 megajoules produced with nuclear fusion over five seconds (11 megawatts of power), more than double the previous record of 1997.\n** United States researchers at Lawrence Livermore National Laboratory's National Ignition Facility (NIF) in California record the first case of ignition on August 8, 2021, producing an energy yield of 0.72 (the ratio of fusion output to laser beam input).\n** Building on this achievement, American researchers at Lawrence Livermore National Laboratory's National Ignition Facility (NIF) in California record the first ever net energy production from nuclear fusion, producing more fusion energy than the laser beams put in. 
Laser efficiency was on the order of 1%.\n** On February 15, 2023, Wendelstein 7-X reached a new milestone: a power plasma with gigajoule energy turnover sustained for eight minutes.\n** JT-60SA achieves first plasma in October, making it the largest operational superconducting tokamak in the world.\n** The Korea Superconducting Tokamak Advanced Research (KSTAR) device achieved a new record of 102-second-long operation in February 2024 (ELM-less H-mode with integrated RMP control, a notable advancement in favorable error-field control, and a tungsten divertor), including 48 seconds at a high temperature of about 100 million degrees Celsius, after its previous record of 45-second-long operation (ELM-less H-mode (FIRE mode), carbon-based divertor, 2022).", "** George Paget Thomson of Imperial College, London, designs the toroidal solenoid, a simple fusion device. With Moses Blackman, he further develops the concept and files for a patent. This becomes the first fusion device to receive a patent. Repeated attempts to get development funding fail.\n** A meeting at Harwell on the topic of fusion raises new concerns with the concept. On his return to London, Thomson gets graduate students James L. Tuck and Alan Alfred Ware to build a prototype device out of old radar parts.\n** Peter Thonemann comes up with a similar idea, but uses a different method of heating the fuel. This seems much more practical and finally gains the mild interest of the UK nuclear establishment. Not aware of who he is talking to, Thonemann describes the concept to Thomson, who adopts the same concept.\n** Herbert Skinner begins to write a lengthy report on the entire fusion concept, pointing out several areas of little or no knowledge.\n** The Ministry of Supply (MoS) asks Thomson about the status of his patent filing, and he describes the problems he has getting funding. The MoS forces Harwell to provide some money, and Thomson releases his rights to the patent. It is granted late that year.\n** Skinner publishes his report, calling for some experimental effort to explore the areas of concern. Along with the MoS's calls for funding of Thomson, this event marks the beginning of formal fusion research in the UK.", "** Ernest Rutherford's Cavendish Laboratory at Cambridge University begins nuclear experiments with a particle accelerator built by John Cockcroft and Ernest Walton.\n** In April, Walton produces the first man-made fission by using protons from the accelerator to split lithium into alpha particles.\n** Using an updated version of the equipment firing deuterium rather than hydrogen, Mark Oliphant discovers helium-3 and tritium, and that heavy hydrogen nuclei can be made to react with each other. This is the first direct demonstration of fusion in the lab.\n** Kantrowitz and Jacobs of the NACA Langley Research Center build a toroidal magnetic bottle and heat the plasma with a 150 W radio source. They hope to heat the plasma to millions of degrees, but the system fails and they are forced to abandon their Diffusion Inhibitor. This is the first attempt to make a working fusion reactor.\n** Peter Thonemann develops a detailed plan for a pinch device, but is told to do other work for his thesis.\n** Hans Bethe provides detailed calculations of the proton–proton chain reaction that powers stars. This work results in a Nobel Prize for Physics.\n** Based on F.W. 
Aston's measurements of the masses of low-mass elements and Einstein's discovery that E = mc², Arthur Eddington proposes that large amounts of energy released by fusing small nuclei together provide the energy source that powers the stars.\n** Henry Norris Russell notes that the relationship in the Hertzsprung–Russell diagram suggests a hot core rather than burning throughout the star. Eddington uses this to calculate that the core would have to be about 40 million kelvin. This was a matter of some debate at the time, because the value is much higher than observations suggest, which point to about one-third to one-half that value.\n** George Gamow introduces the mathematical basis for quantum tunnelling.\n** Atkinson and Houtermans provide the first calculations of the rate of nuclear fusion in stars. Based on Gamow's tunnelling, they show fusion can occur at lower energies than previously believed. When used with Eddington's calculations of the required fusion rates in stars, their calculations demonstrate this would occur at the lower temperatures that Eddington had calculated.", "This timeline of nuclear fusion is an incomplete chronological summary of significant events in the study and use of nuclear fusion.", "** After considering the concept for some time, John Nuckolls publishes the concept of inertial confinement fusion. The laser, introduced the same year, appears to be a suitable \"driver\".\n** The Soviet Union tests the Tsar Bomba (50 megatons), the most powerful thermonuclear weapon ever.\n** Plasma temperatures of approximately 40 million degrees Celsius and a few billion deuteron-deuteron fusion reactions per discharge were achieved at LANL with the Scylla IV device.\n** At an international meeting at the UK's new fusion research centre in Culham, the Soviets release early results showing greatly improved performance in toroidal pinch machines. The announcement is met by scepticism, especially by the UK team, whose ZETA was largely identical. Spitzer, chairing the meeting, essentially dismisses it out of hand.\n** At the same meeting, odd results from the ZETA machine are published. It will be years before the significance of these results is realized.\n** By the end of the meeting, it is clear that most fusion efforts have stalled. All of the major designs, including the stellarator, pinch machines and magnetic mirrors, are losing plasma at rates that are simply too high to be useful in a reactor setting. Less-known designs like the levitron and astron are faring no better.\n** The 12-beam \"4 pi laser\" using ruby as the lasing medium is developed at Lawrence Livermore National Laboratory (LLNL); it includes a gas-filled target chamber of about 20 centimeters in diameter.\n** A demonstration of the Farnsworth-Hirsch Fusor appears to generate neutrons in a nuclear reaction.\n** Hans Bethe wins the 1967 Nobel Prize in Physics for his 1939 work on how fusion powers the stars.\n** Robert L. Hirsch is hired by Amasa Bishop of the Atomic Energy Commission as a staff physicist. Hirsch would eventually end up running the fusion program during the 1970s.\n** Further results from the T-3 tokamak, similar to the toroidal pinch machine mentioned in 1965, claim temperatures over an order of magnitude higher than in any other device. Western scientists remain highly sceptical.\n** The Soviets invite a UK team from ZETA to perform independent measurements on T-3.\n** The UK team, nicknamed \"The Culham Five\", confirm the Soviet results early in the year. 
They publish their results in the October edition of Nature. This leads to a \"veritable stampede\" of tokamak construction around the world.\n** After learning of the Culham Five's results in August, a furious debate breaks out in the US establishment over whether or not to build a tokamak. After initially pooh-poohing the concept, the Princeton group eventually decides to convert their stellarator to a tokamak.", "** In January, Klaus Fuchs admits to passing nuclear secrets to the Soviet Union. Almost all nuclear research in the UK, including the fledgling fusion program, is immediately classified. Thomson, until this time working at Imperial College, is moved to the Atomic Weapons Research Establishment.\n** The tokamak, a type of magnetic confinement fusion device, is proposed by Soviet scientists Andrei Sakharov and Igor Tamm.\n** Edward Teller and Stanislaw Ulam at Los Alamos National Laboratory (LANL) develop the Teller-Ulam design for the thermonuclear weapon, allowing for the development of multi-megaton weapons.\n** A press release from Argentina claims that their Huemul Project had produced controlled nuclear fusion. This prompts a wave of responses in other countries, especially the U.S.\n*** Lyman Spitzer dismisses the Argentinian claims, but, while thinking about it, comes up with the stellarator concept. Funding is arranged under Project Matterhorn, which develops into the Princeton Plasma Physics Laboratory.\n*** Tuck introduces the British pinch work to LANL. He develops the Perhapsatron under the codename Project Sherwood. The project name is a play on his name via Friar Tuck.\n*** Richard F. Post presents his magnetic mirror concept and also receives initial funding, eventually moving to Lawrence Livermore National Laboratory (LLNL).\n*** In the UK, repeated requests for more funding that had previously been turned down are suddenly approved. Within a short time, three separate efforts are started, one at Harwell and two at the Atomic Weapons Establishment (Aldermaston). Early planning for a much larger machine at Harwell begins.\n*** Using the Huemul release as leverage, Soviet researchers find their funding proposals rapidly approved. Work on linear pinch machines begins that year.\n** The Ivy Mike shot of Operation Ivy, the first detonation of a thermonuclear weapon, yields 10.4 megatons of TNT out of a fusion fuel of liquid deuterium.\n** Cousins and Ware build a larger toroidal pinch device in England and demonstrate that the plasma in pinch devices is inherently unstable.\n** The Soviet RDS-6S test, code named \"Joe 4\", demonstrates a fission/fusion/fission (\"Layercake\") design for a nuclear weapon.\n** Linear pinch devices in the US and USSR report detections of neutrons, an indication of fusion reactions. Both are later explained as coming from instabilities in the fuel, and are non-fusion in nature.\n** Early planning for the large ZETA device at Harwell begins. The name is a take-off on small experimental fission reactors which often had \"zero energy\" in their name, ZEEP being an example.\n** Edward Teller gives a now-famous speech on plasma stability in magnetic bottles at the Princeton Gun Club. His work suggests that most magnetic bottles are inherently unstable, outlining what is today known as the interchange instability.\n** At the first Atoms for Peace meeting in Geneva, Homi J. Bhabha predicts that fusion will be in commercial use within two decades. 
This prompts a number of countries to begin fusion research; Japan, France and Sweden all start programs this year or the next.\n** Experimental research on tokamak systems starts at the Kurchatov Institute, Moscow, by a group of Soviet scientists led by Lev Artsimovich.\n** Construction of ZETA begins at Harwell.\n** Igor Kurchatov gives a talk at Harwell on pinch devices, revealing for the first time that the USSR is also working on fusion. He details the problems they are seeing, mirroring those in the US and UK.\n** In August, a number of articles on plasma physics appear in various Soviet journals.\n** In the wake of Kurchatov's speech, the US and UK begin to consider releasing their own data. Eventually, they settle on a release prior to the 2nd Atoms for Peace conference in Geneva in 1958.\n** In the US, at LANL, Scylla I begins operation using the θ-pinch design.\n** ZETA is completed in the summer; it will be the largest fusion machine for a decade.\n** In August, initial results on ZETA appear to suggest the machine has successfully reached basic fusion temperatures. UK researchers start pressing for public release, while the US demurs.\n** Scientists at the AEI Research Laboratory in Harwell report that the Sceptre III plasma column remained stable for 300 to 400 microseconds, a dramatic improvement on previous efforts. Working backward, the team calculated that the plasma had an electrical resistivity around 100 times that of copper, and was able to carry 200 kA of current for 500 microseconds in total.\n** In January, the US and UK release large amounts of data, with the ZETA team claiming fusion. Other researchers, notably Artsimovich and Spitzer, are sceptical.\n** In May, a series of new tests demonstrate the measurements on ZETA were erroneous, and the claims of fusion have to be retracted.\n** American, British and Soviet scientists begin to share previously classified controlled fusion research as part of the Atoms for Peace conference in Geneva in September. It is the largest international scientific meeting to date. It becomes clear that basic pinch concepts are not successful and that no device has yet created fusion at any level.\n** Scylla demonstrates the first controlled thermonuclear fusion in any laboratory, although confirmation came too late to be announced at Geneva. This θ-pinch approach will ultimately be abandoned as calculations show it cannot scale up to produce a reactor.", "A contract was signed between TERMIS and the publisher Mary Ann Liebert which designated the journal Tissue Engineering, Parts A, B, and C as the official journal of TERMIS, with free on-line access for the membership.", "Each TERMIS chapter has defined awards to recognize outstanding scientists and their contributions within the community.", "Fellows of Tissue Engineering and Regenerative Medicine (FTERM) recipients are:\n* Alini, Mauro\n* Atala, Anthony\n* Badylak, Stephen\n* Cancedda, Ranieri\n* Cao, Yilin\n* Chatzinikolaidou, Maria\n* El Haj, Alicia\n* Fontanilla, Marta\n* Germain, Lucie\n* Gomes, Manuela\n* Griffith, Linda\n* Guldberg, Robert\n* Hellman, Kiki\n* Hilborn, Jöns\n* Hubbell, Jeffrey\n* Hutmacher, Dietmar\n* Khang, Gilson\n* Kirkpatrick, C. 
James \n* Langer, Robert\n* Lee, Hai-Bang\n* Lee, Jin Ho\n* Lewandowska-Szumiel, Malgorzata\n* Marra, Kacey\n* Martin, Ivan\n* McGuigan, Alison\n* Mikos, Antonios\n* Mooney, David\n* Motta, Antonella\n* Naughton, Gail\n* Okano, Teruo\n* Pandit, Abhay\n* Parenteau, Nancy\n* Radisic, Milica\n* Ratner, Buddy\n* Redl, Heinz\n* Reis, Rui L.\n* Richards, R. Geoff\n* Russell, Alan\n* Schenke-Layland, Katja\n* Shoichet, Molly\n* Smith, David\n* Tabata, Yasuhiko\n* Tuan, Rocky\n* Vacanti, Charles\n* Vacanti, Joseph\n* van Osch, Gerjo\n* Vunjak-Novakovic, Gordana\n* Wagner, William\n* Weiss, Anthony S.\nEmeritus\n* Johnson, Peter\n* Williams, David\nDeceased Fellows\n* Nerem, Robert", "It was determined that each Chapter would have its own Council, with overall activities determined by the Governing Board, on which each Council was represented, and by an executive committee.", "* The Career Achievement Award recognizes individuals who have made outstanding contributions to the field of TERM and have carried out most of their career in the TERMIS-EU geographical area. \n* The Mid Terms Career Award was established in 2020 to recognize individuals who are within 10–20 years of obtaining their PhD, with a successful research group and clear evidence of outstanding performance.\n* The Robert Brown Early Career Principal Investigator Award recognizes individuals who are within 2–10 years of obtaining their PhD, with clear evidence of a growing profile.", "The Student and Young Investigator Section of TERMIS (TERMIS-SYIS) brings together undergraduate and graduate students, post-doctoral researchers and young investigators in industry and academia working in tissue engineering and regenerative medicine. It follows the organizational and working pattern of TERMIS.", "At the beginning of the Society, it was agreed that there would be Continental Chapters of TERMIS, initially TERMIS-North America (TERMIS-NA) and TERMIS-Europe (TERMIS-EU), to be joined at the time of the major Shanghai conference in October 2005 by TERMIS-Asia Pacific (TERMIS-AP). It was subsequently agreed that the remit of TERMIS-North America should be expanded to incorporate activity in South America, the chapter officially becoming TERMIS-Americas (TERMIS-AM) in 2012.", "It was agreed that there would be a World Congress every three years, with each Chapter organizing its own conference in the intervening two years.", "Tissue engineering emerged during the 1990s as a potentially powerful option for regenerating tissue, and research initiatives were established in various cities in the US and in European countries including the UK, Italy, Germany and Switzerland, and also in Japan. Soon fledgling societies were formed in these countries in order to represent these new sciences, notably the European Tissue Engineering Society (ETES) and, in the US, the Tissue Engineering Society (TES), soon to become the Tissue Engineering Society International (TESi), and the Regenerative Medicine Society (RMS).\nBecause of the overlap between the activities of these societies and the increasing globalization of science and medicine, considerations of a merger between TESi, ETES and RMS were initiated in 2004, and agreement was reached during 2005 on the formation of the consolidated society, the Tissue Engineering and Regenerative Medicine International Society (TERMIS). 
Election of officers for TERMIS took place in September 2005, and the by-laws were approved by the Board.\nRapid progress in the organization of TERMIS took place during late 2005 and 2006. The SYIS, Student and Young Investigator Section was established in January 2006, website and newsletter launched and membership dues procedures put in place.", "Regenerative medicine involves processes of replacing, engineering or regenerating human cells, tissues or organs to restore or establish normal function. A major technology of regenerative medicine is tissue engineering, which has variously been defined as \"an interdisciplinary field that applies the principles of engineering and the life sciences toward the development of biological substitutes that restore, maintain, or improve tissue function\", or \"the creation of new tissue by the deliberate and controlled stimulation of selected target cells through a systematic combination of molecular and mechanical signals\".", "Tissue Engineering and Regenerative Medicine International Society is an international learned society dedicated to tissue engineering and regenerative medicine.", "* The Excellence Achievement Award has been established to recognize a researcher in the Asia-Pacific region who has made continuous and landmark contributions to the tissue engineering and regenerative medicine field.\n* The Outstanding Scientist Award has been established to recognize a mid-career researcher in the Asia-Pacific region who has made significant contributions to the TERM field.\n* The Young Scholar Award has been established to recognize a young researcher in the Asia-Pacific region who has made significant and consistent achievements in the TERM field, showing clear evidence of their potential to excel.\n* The Mary Ann Liebert, Inc. Best TERM Paper Award has been established to recognize a student researcher (undergraduate/graduate/postdoc) in the Asia-Pacific region who has achieved outstanding research accomplishments in the TERM field.\n* The TERMIS-AP Innovation Team Award has been established to recognize a team of researchers in the Asia-Pacific region. It aims to recognize successful applications of tissue engineering and regenerative medicine leading to the development of relevant products/therapies/technologies which will ultimately benefit the patients.", "Common methods include those of the DISCO family, including 3DISCO, and CLARITY and related protocols. Others include BABB, PEGASOS, SHANEL, SeeDB, CUBIC, ExM, and SHIELD.", "Tissue clearing has been applied to the nervous system, bones (including teeth), skeletal muscles, hearts and vasculature, gastrointestinal organs, urogenital organs, skin, lymph nodes, mammary glands, lungs, eyes, tumors, and adipose tissues. Whole-body clearing is less common, but has been done in smaller animals, including rodents. Tissue clearing has also been applied to human cancer tissues", "Tissue clearing refers to a group of chemical techniques used to turn tissues transparent. This allows deep insight into these tissues, while preserving spatial resolution. Many tissue clearing methods exist, each with different strengths and weaknesses. Some are generally applicable, while others are designed for specific applications. Tissue clearing is usually combined with one or more labeling techniques and subsequently imaged, most often by optical sectioning microscopy techniques. 
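As a loose, non-authoritative illustration of the imaging step just described (and of the kind of volumetric data handling discussed later in this article), the short sketch below builds a synthetic 3-D image stack standing in for an optically sectioned, cleared-tissue volume and computes a maximum-intensity projection with NumPy. The stack dimensions, the bright "labels", and the choice of NumPy are assumptions made for the example, not details taken from the source.

```python
# Illustrative only: a synthetic stand-in for a 3-D optical-sectioning stack
# of a cleared, fluorescently labeled sample (real data would be read from
# disk, e.g. as a TIFF series).
import numpy as np

rng = np.random.default_rng(0)
z, y, x = 64, 256, 256                           # assumed stack dimensions (planes, rows, cols)
volume = rng.normal(100.0, 5.0, size=(z, y, x))  # uniform background with camera-like noise

# Add a few bright cubes standing in for labeled cells.
for _ in range(20):
    cz = rng.integers(8, z - 8)
    cy = rng.integers(16, y - 16)
    cx = rng.integers(16, x - 16)
    volume[cz - 3:cz + 3, cy - 5:cy + 5, cx - 5:cx + 5] += 500.0

# Maximum-intensity projection along the optical (z) axis: a common first
# look at a cleared-tissue volume before more detailed 3-D analysis.
mip = volume.max(axis=0)
print("projection shape:", mip.shape, "peak intensity:", round(float(mip.max()), 1))
```

Real cleared-tissue datasets are far larger than this toy array, which is why chunked or memory-mapped storage is typically preferred over a single in-memory volume.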
Tissue clearing has been applied to many areas in biological research.", "In the early 1900s, Werner Spalteholz developed a technique that allowed the clarification of large tissues, using Wintergrünöl (methyl salicylate) and benzyl benzoate. Over the next hundred years, various scientists introduced their own variations on Spalteholzs technique. Tuchin et al. introduced TOC in 1997, adding a new branch of tissue clearing that was hydrophilic instead of hydrophobic like Spalteholzs technique. In 2007, Dodt et al. developed a two step process, wherein tissues were first dehydrated with ethanol and hexane and subsequently made transparent by immersion in benzyl alcohol and benzyl benzoate (BABB), a technique they coupled with light sheet fluorescence microscopy. Hama et al. developed another hydrophilic approach, Scale, in 2011. The following year, Ertürk et al. developed a hydrophobic approach called 3DISCO, in which they pretreated tissue with tetrahydrofuran and dichloromethane before clearing it in dibenzyl ether. A year later, in 2013, Chung et al. developed CLARITY, the first approach to use hydrogel monomers to clear tissue.", "Imaging cleared tissues generates massive volumes of complex data, which requires powerful computational hardware and software to store, process, analyze, and visualize. A single mouse brain can generate terabytes of data. Both commercial and open-source software exists to address this need, some of it adapted from solutions for two-dimensional images and some of it designed specifically for the three-dimensional images produced by imaging of cleared tissues.", "While multiple classification standards for tissue clearing exist, the most common classifications use the chemical principle and mechanism of clearing to group tissue clearing methods. These include hydrophobic clearing methods, which may also be known as organic, solvent-based, organic solvent-based, or dehydration clearing methods; hydrophilic clearing methods, which may also be known as aqueous-based or water-based methods, and may be further sub-categorized into simple immersion and hyperhydration (also called delipidation/ hydration); and hydrogel-based clearing methods, which may also be known as detergent or hydrogel embedding methods. Tissue-expansion clearing methods use hydrogel, and may be included under hydrogel-based clearing or as their own category.", "Tissue clearing methods have varying compatibility with different methods of fluorescent labeling. Some are better suited to pre-clearing tagging approaches, such as genetic labeling. while others require post-clearing tagging, such as immunolabeling and chemical dye labeling.", "After clearing and labeling, tissues are typically imaged using confocal microscopy, two-photon microscopy, or one of the many variants of light-sheet fluorescence microscopy. Other less commonly used methods include optical projection tomography and stimulated Raman scattering.", "Tissue opacity is thought to be the result of light scattering due to heterogeneous refractive indices. Tissue clearing methods chemically homogenize refractive indices, resulting in almost completely transparent tissue.", "Implantation of any foreign device or material through the means of surgery results in at least some degree of tissue trauma. Therefore, especially when removing a native heart valve either partially or completely, the tissue trauma will trigger a cascade of inflammatory responses and elicit acute inflammation. 
During the initial phase of acute inflammation, vasodilation occurs to increase blood flow to the wound site along with the release of growth factors, cytokines, and other immune cells. Furthermore, cells release reactive oxygen species and cytokines, which cause secondary damage to surrounding tissue. These chemical factors then proceed to promote the recruitment of other immune responsive cells such as monocytes or white blood cells, which help foster the formation of a blood clot and protein-rich matrix.", "If the acute inflammatory response persists, the body then proceeds to undergo chronic inflammation. During this continual and systemic inflammation phase, one of the primary driving forces is the infiltration of macrophages. The macrophages and lymphocytes induce the formation of new tissues and blood vessels to help supply nutrients to the biomaterial site. New fibrous tissue then encapsulates the foreign biomaterial in order to minimize interactions between the biomaterial and surrounding tissue. While the prolonging of chronic inflammation may be a likely indicator for an infection, inflammation may on occasion be present upwards to five years post-surgery. Chronic inflammation marked by the presence of fibrosis and inflammatory cells was observed in rat cells 30 days post implantation of a device.\nFollowing chronic inflammation, mineralization occurs approximately 60 days after implantation due to the buildup of cellular debris and calcification, which has the potential to compromise the functionality of biocompatible implanted devices in vivo.", "Various biomaterials, whether they are biological, synthetic, or a combination of both, can be used to create scaffolds, which when implanted in a human body can promote host tissue regeneration. First, cells from the patient in which the scaffold will be implanted in are harvested. These cells are expanded and seeded into the created scaffold, which is then inserted inside the human body. The human body serves as a bioreactor, which allows the formation of an extracellular matrix (ECM) along with fibrous proteins around the scaffold to provide the necessary environment for the heart and circulatory system. The initial implantation of the foreign scaffold triggers various signaling pathways guided by the foreign body response for cell recruitment from neighboring tissues. The new nanofiber network surrounding the scaffold mimics the native ECM of the host body. Once cells begin to populate the cell, the scaffold is designed to gradually degrade, leaving behind a constructed heart valve made of the host body's own cells that is fully capable of cell repopulation and withstanding environmental changes within the body. The scaffold designed for tissue engineering is one of the most crucial components because it guides tissue construction, viability, and functionality long after implantation and degradation.", "Biological scaffolds can be created from human donor tissue or from animals; however, animal tissue is often more popular since it is more widely accessible and more plentiful. Xenograft, from a donor of a different species from the recipient, heart valves can be from either pigs, cows, or sheep. If either human or animal tissue is used, the first step in creating useful scaffolds is decellularization, which means to remove the cellular contents all the while preserving the ECM matrix, which is advantageous compared to manufacturing synthetic scaffolds from scratch. 
Many decellularization methods have been used such as the use of nonionic and ionic detergents that disrupt cellular material interactions or the use of enzymes to cleave peptide bonds, RNA, and DNA.", "There are also current approaches that are manufacturing scaffolds and coupling them with biological cues. Fabricated scaffolds can also be manufactured using either biological, synthetic, or a combination of both materials from scratch to mimic the native heart valve observed using imaging techniques. Since the scaffold is created from raw materials, there is much more flexibility in controlling the scaffold's properties and can be more tailored. Some types of fabricated scaffolds include solid 3-D porous scaffolds that have a large pore network that permits the flow through of cellular debris, allowing further tissue and vascular growth. 3-D porous scaffolds can be manufactured through 3-D printing or various polymers, ranging from polyglycolic acid (PGA) and polylactic acid (PLA) to more natural polymers such as collagen.\nFibrous scaffolds have the potential to closely match the structure of ECM through its use of fibers, which have a high growth factor. Techniques to produce fibrous scaffolds include electrospinning, in which a liquid solution of polymers is stretched from an applied high electric voltage to produce thin fibers. Conversely to the 3-D porous scaffolds, fibrous scaffolds have a very small pore size that prevents the pervasion of cells within the scaffold.\nHydrogel scaffolds are created by cross-linking hydrophilic polymers through various reaction such as free radical polymerization or conjugate addition reaction. Hydrogels are beneficial because they have a high water content, which allows the ease of nutrients and small materials to pass through.", "The biocompatibility of surgically implanted foreign biomaterial refers to the interactions between the biomaterial and the host body tissue. Cell line as well as cell type such as fibroblasts can largely impact tissue responses towards implanted foreign devices by changing cell morphology. Thus the cell source as well as protein adsorption, which is dependent on biomaterial surface property, play a crucial role in tissue response and cell infiltration at the scaffold site.", "Tissue engineered heart valves (TEHV) offer a new and advancing proposed treatment of creating a living heart valve for people who are in need of either a full or partial heart valve replacement. Currently, there are over a quarter of a million prosthetic heart valves implanted annually, and the number of patients requiring replacement surgeries is only suspected to rise and even triple over the next fifty years. While current treatments offered such as mechanical valves or biological valves are not deleterious to ones health, they both have their own limitations in that mechanical valves necessitate the lifelong use of anticoagulants while biological valves are susceptible to structural degradation and reoperation. Thus, in situ (in its original position or place) tissue engineering of heart valves serves as a novel approach that explores the use creating a living heart valve composed of the hosts own cells that is capable of growing, adapting, and interacting within the human body's biological system.\nResearch has not yet reached the stage of clinical trials.", "Studies performed seeded scaffolds made of polymers with various cell lines in vitro, in which the scaffolds degraded over time while leaving behind a cellular matrix and proteins. 
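The hand-over from scaffold to host tissue described above can be pictured with a toy model; the sketch below is purely illustrative and assumes arbitrary first-order resorption and logistic in-growth rates, none of which come from the source.

```python
# Toy illustration (assumed numbers, not from the source): the scaffold is
# resorbed roughly exponentially while neotissue accumulates, so mechanical
# support is gradually handed over from the implant to the host's own cells.
import math

K_DEGRADATION = 0.05     # assumed scaffold resorption rate, per week
K_GROWTH = 0.15          # assumed steepness of tissue in-growth, per week
T_HALF_GROWTH = 20.0     # assumed week at which in-growth is half complete

for week in range(0, 61, 10):
    scaffold_fraction = math.exp(-K_DEGRADATION * week)
    tissue_fraction = 1.0 / (1.0 + math.exp(-K_GROWTH * (week - T_HALF_GROWTH)))
    print(f"week {week:2d}: scaffold remaining {scaffold_fraction:.2f}, "
          f"neotissue formed {tissue_fraction:.2f}")
```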
The first study on tissue engineering of heart valves was published in 1995. During 1995 and 1996, Shinoka used a scaffold made of polyglycolic acid (PGA), approved by the FDA for human implantation, and seeded it with sheep endothelial cells and fibroblasts with the goal of replacing a sheeps pulmonary valve leaflet. What resulted from Shinokas study was an engineered heart valve that was much thicker and more rigid, which prompted Hoerstrup to conduct a study to replace all three pulmonary valve leaflets in a sheep using a poly-4-hydroxybutyrate (P4HB) coated PGA scaffold and sheep endothelial cells and myofibroblast.", "While many in vitro and in vivo studies have been tested in animal models, the translation from animal models to humans has not begun. Factors such as the size of surgical cut sites, duration of the procedure, and available resources and cost must all be considered. Synthetic nanomaterials have the potential to advance scaffoldings used in tissue engineering of heart valves. The use of nanotechnology could help expand beneficial properties of fabricated scaffolds such as higher tensile strength.", "Many risks and challenges must still be addressed and explored before tissue engineered heart valves can fully be clinically implemented:\n* Contamination – Particular source materials can foster a microbiological environment that is conducive to the susceptibility of viruses and infectious diseases. Anytime an external scaffold is implanted within the human body, contamination, while inevitable, can be diminished through the enforcement of sterile technique.\n* Scaffold Interactions - There are many risks associated with the interactions between cells and the implanted scaffold as specific biocompatibility requirements are still largely unknown with current research. The response to these interactions are also highly individualistic, dependent on the specific patient's biological environment; therefore, animal models researched prior may not accurately portray outcomes in the human body. Due to the highly interactive nature between the scaffold and surrounding tissue, properties such as biodegradability, biocompatibility, and immunogenicity must all be carefully considered as they are key factors in the performance of the final product.\n* Structural complexity – Heart valves with their heterogeneous structure are very complex and dynamic, thus posing a challenge for tissue engineered valves to mimic. The new valves must have high durability while also meeting the anatomical shape and mechanical functions of the native valve.", "Another option studied was using decellularized biological scaffolds and seeding them with their corresponding cells in vitro. In 2000, Steinhoff implanted a decellularized sheep pulmonary valve scaffold seeded with sheep endothelial cells and myofibroblasts. Dohmen then created a decellularized cryopreserved pulmonary allograft scaffold and seeded it with human vascular endothelial cells to reconstruct the right ventricular outflow tract (RVOT) in a human patient in 2002. Perry in 2003 seeded a P4HB coated PGA scaffold with sheep mesenchymal stem cells in vitro; however, an in vivo study was not performed. In 2004, Iwai conducted a study using a poly(lactic-co-glycolic acid) PLGA compounded with collagen microsponge sphere scaffold, which was seeded with endothelial and smooth muscle cells at the site of a dog's pulmonary artery. 
Sutherland in 2005 utilized a sheep mesenchymal stem cell seeded PGA and poly-L-lactic acid (PLLA) scaffold to replace all three pulmonary valve leaflets in a sheep.", "A handful of studies utilized tissue engineering of heart valves in vivo in animal models and humans. In 2000, Matheny conducted a study in which he used a pig's small intestinal submucosa to replace one pulmonary valve leaflet. Limited studies have also been conducted in a clinical setting. For instance in 2001, Elkins implanted SynerGraft treated decellularized human pulmonary valves in patients. Simon similarly used SynerGraft decellularized pig valves for implantation in children; however, these valves widely failed as there were no host cells but rather high amounts of inflammatory cells found at the scaffold site instead. Studies led by Dohmen, Konertz, and colleagues in Berlin, Germany involved the implantation of a biological pig valve in 50 patients who underwent the Ross operation from 2002 to 2004. Using a decellularized porcine xenograft valve, also called Matrix P, in adults with a median age of 46 years, the aim of the study was to offer a proposal for pulmonary valve replacement. While some patients died postoperatively and had to undergo reoperation, the short-term results appear to be going well as the valve is behaving similarly to a native, healthy valve. One animal trial combined the transcatheter aortic valve replacement (TAVR) procedure with tissue engineered heart valves (TEHVs). A TAVR stent integrated with human cell-derived extracellular matrix was implanted and examined in sheep, in which the valve upheld structural integrity and cell infiltration, allowing the potential clinical application to extend TAVR to younger patients.", "Tissue engineered heart valves offer certain advantages over traditional biological and mechanical valves:\n* Living valve – The option of a living heart valve replacement is highly optimal for children as the live valve has the ability to grow and respond to its biological environment, which is especially beneficial for children whose bodies are continually changing. This option would help reduce the number of reoperation needed in a child's life.\n* Customized process – Since the scaffolds used in tissue engineering can be manufactured from scratch, there is a higher degree of flexibility and control. This allows the potential of tailoring tissue engineered heart valves and its properties such as the scaffold's shape and biomaterial makeup to be tailored specifically to the patient.", "Under normal physiological conditions, inflammatory cells protect the body from foreign objects, and the body undergoes a foreign body reaction based on the adsorption of blood and proteins on the biomaterial surface. In the first two to four weeks post implant, there is an association between biomaterial adherent macrophages and cytokine expression near the foreign implant site, which can be explored using semi-quantitative RT-PCR. Macrophages fuse together to form foreign body giant cells (FBGCs), which similarly express cytokine receptors on their cell membranes and actively participate in the inflammatory response. Device failure in organic polyether polyurethane (PEU) pacemakers compared to silicone rubber showcases that the foreign body response may indeed lead to degradation of biomaterials, causing subsequent device failures. 
The utilization of to prevent functionality and durability compromise is proposed to minimize and slow the rate of biomaterial degradation.", "Tissue remodeling is the reorganization or renovation of existing tissues. Tissue remodeling can be either physiological or pathological. The process can either change the characteristics of a tissue such as in blood vessel remodeling, or result in the dynamic equilibrium of a tissue such as in bone remodeling. Macrophages repair wounds and remodel tissue by producing extracellular matrix and proteases to modify that specific matrix.\nA myocardial infarction induces tissue remodeling of the heart in a three-phase process: inflammation, proliferation, and maturation. Inflammation is characterized by massive necrosis in the infarcted area. Inflammatory cells clear the dead cells. In the proliferation phase, inflammatory cells die by apoptosis, being replaced by myofibroblasts which produce large amounts of collagen. In the maturation phase, myofibroblast numbers are reduced by apoptosis, allowing for infiltration by endothelial cells (for blood vessels) and cardiomyocytes (heart tissue cells). Usually, however, much of the tissue remodeling is pathological, resulting in a large amount of fibrous tissue. By contrast, aerobic exercise can produce beneficial cardiac tissue remodeling in those suffering from left ventricular hypertrophy.\nProgrammed cellular senescence contributes to beneficial tissue remodeling during embryonic development of the fetus.\nIn a brain stroke the penumbra area surrounding the ischemic event initially undergoes a damaging remodeling, but later transitions to a tissue remodeling characterized by repair.\nVascular remodeling refers to a compensatory change in blood vessel walls due to plaque growth. Vascular expansion is called positive remodeling, whereas vascular constriction is called negative remodeling.\nTissue remodeling occurs in adipose tissue with increased body fat. In obese subjects, this remodeling is often pathological, characterized by excessive inflammation and fibrosis.", "A tree wrap or tree wrapping is a wrap of garden tree saplings, roses, and other delicate plants to protect them from frost damage (e.g. frost cracks or complete death). In the past it was made of straw (straw wrap) . Now there are commercial tree wrap materials, such as crepe paper or burlap tapes. Tree wrapping is also used to prevent saplings from sunscald and drying of the bark. A disadvantage of tape wrapping is dampness under the wrapping during rainy seasons.", "Trehalose is a disaccharide formed by a bond between two α-glucose units. It is found in nature as a disaccharide and also as a monomer in some polymers. Two other isomers exist, α,β-trehalose, otherwise known as neotrehalose, and β,β-trehalose (also referred to as isotrehalose). Neotrehalose has not been isolated from a living organism. Isotrehalose is also yet to be isolated from a living organism, but was found in starch hydroisolates.", "In 1832, H.A.L. Wiggers discovered trehalose in an ergot of rye, and in 1859 Marcellin Berthelot isolated it from Trehala manna, a substance made by weevils and named it trehalose.\nTrehalose has long been known as an autophagy inducer that acts independently of mTOR. 
In 2017, research was published showing that trehalose induces autophagy by activating TFEB, a protein that acts as a master regulator of the autophagy-lysosome pathway.", "Trehalose is an ingredient, along with hyaluronic acid, in an artificial tears product used to treat dry eye. Outbreaks of Clostridium difficile were initially associated with trehalose,. This finding was disputed in 2019.\nIn 2021, the FDA accepted an Investigational New Drug (IND) application and granted fast track status for an injectable form of trehalose (SLS-005) as a potential treatment for spinocerebellar ataxia type 3 (SCA3).", "Five biosynthesis pathways have been reported for trehalose. The most common pathway is TPS/TPP pathway which is used by organisms that synthesize trehalose using the enzyme trehalose-6-phosphate (T6P) synthase (TPS). Second, trehalose synthase (TS) in certain types of bacteria could produce trehalose by using maltose and another disaccharide with two glucose units as substrates. Third, the TreY-TreZ pathway in some bacteria converts starch that contain maltooligosaccharide or glycogen directly into trehalose. Fourth, in primitive bacteria, trehalose glycisyltransferring synthase (TreT) produces trehalose from ADP-glucose and glucose. Fifth, trehalose phosphorylase (TreP) either hydrolyses trehalose into glucose-1-phosphate and glucose or may act reversibly in certain species. Vertebrates do not have the ability to synthesize or store trehalose. Trehalase in humans is found only in specific location such as the intestinal mucosa, renal brush-border, liver and blood. Expression of this enzyme in vertebrates is initially found during the gestation period that is the highest after weaning. Then, the level of trehalase remained constant in the intestine throughout life. Meanwhile, diets consisting of plants and fungi contain trehalose. Moderate amount of trehalose in diet is essential and having low amount of trehalose could result in diarrhea, or other intestinal symptoms.", "At least three biological pathways support trehalose biosynthesis. An industrial process can derive trehalose from corn starch.", "Organisms ranging from bacteria, yeast, fungi, insects, invertebrates, and lower and higher plants have enzymes that can make trehalose.\nIn nature, trehalose can be found in plants, and microorganisms. In animals, trehalose is prevalent in shrimp, and also in insects, including grasshoppers, locusts, butterflies, and bees, in which trehalose serves as blood-sugar. Trehalase genes are found in tardigrades, the microscopic ecdysozoans found worldwide in diverse extreme environments.\nTrehalose is the major carbohydrate energy storage molecule used by insects for flight. One possible reason for this is that the glycosidic linkage of trehalose, when acted upon by an insect trehalase, releases two molecules of glucose, which is required for the rapid energy requirements of flight. This is double the efficiency of glucose release from the storage polymer starch, for which cleavage of one glycosidic linkage releases only one glucose molecule.\nIn plants, trehalose is seen in sunflower seeds, moonwort, Selaginella plants, and sea algae. 
Within the fungi, it is prevalent in some mushrooms, such as shiitake (Lentinula edodes), oyster, king oyster, and golden needle.\nEven within the plant kingdom, Selaginella (sometimes called the resurrection plant), which grows in desert and mountainous areas, may be cracked and dried out, but will turn green again and revive after rain because of the function of trehalose.\nThe two prevalent theories as to how trehalose works within the organism in the state of cryptobiosis are the vitrification theory, a state that prevents ice formation, or the water displacement theory, whereby water is replaced by trehalose.\nIn bacterial cell wall, trehalose has a structural role in adaptive responses to stress such as osmotic differences and extreme temperature. Yeast uses trehalose as a carbon source in response to abiotic stresses. In humans, the only known function of trehalose is its ability to activate autophagy inducer.\nTrehalose has also been reported for anti-bacterial, anti-biofilm, and anti-inflammatory (in vitro and in vivo) activities, upon its esterification with fatty acids of varying chain lengths.", "Trehalose is a nonreducing sugar formed from two glucose units joined by a 1–1 alpha bond, giving it the name The bonding makes trehalose very resistant to acid hydrolysis, and therefore is stable in solution at high temperatures, even under acidic conditions. The bonding keeps nonreducing sugars in closed-ring form, such that the aldehyde or ketone end groups do not bind to the lysine or arginine residues of proteins (a process called glycation). Trehalose is less soluble than sucrose, except at high temperatures (>80 °C). Trehalose forms a rhomboid crystal as the dihydrate, and has 90% of the calorific content of sucrose in that form. Anhydrous forms of trehalose readily regain moisture to form the dihydrate. Anhydrous forms of trehalose can show interesting physical properties when heat-treated.\nTrehalose aqueous solutions show a concentration-dependent clustering tendency. Owing to their ability to form hydrogen bonds, they self-associate in water to form clusters of various sizes. All-atom molecular dynamics simulations showed that concentrations of 1.5–2.2 molar allow trehalose molecular clusters to percolate and form large and continuous aggregates.\nTrehalose directly interacts with nucleic acids, facilitates melting of double stranded DNA and stabilizes single-stranded nucleic acids.", "Trehalose (from Turkish tıgala – a sugar derived from insect cocoons + -ose) is a sugar consisting of two molecules of glucose. It is also known as mycose or tremalose. Some bacteria, fungi, plants and invertebrate animals synthesize it as a source of energy, and to survive freezing and lack of water.\nExtracting trehalose was once a difficult and costly process, but around 2000, the Hayashibara company (Okayama, Japan) discovered an inexpensive extraction technology from starch. Trehalose has high water retention capabilities, and is used in food, cosmetics and as a drug. A procedure developed in 2017 using trehalose allows sperm storage at room temperatures.", "Trehalose is rapidly broken down into glucose by the enzyme trehalase, which is present in the brush border of the intestinal mucosa of omnivores (including humans) and herbivores. It causes less of a spike in blood sugar than glucose. 
Trehalose has about 45% the sweetness of sucrose at concentrations above 22%, but when the concentration is reduced, its sweetness decreases more quickly than that of sucrose, so that a 2.3% solution tastes 6.5 times less sweet than the equivalent sugar solution.\nIt is commonly used in prepared frozen foods, like ice cream, because it lowers the freezing point of foods.\nDeficiency of the trehalase enzyme is unusual in humans, except in the Greenlandic Inuit, where it occurs in 10–15% of the population.", "Trinitromethane as a neutral molecule is colorless. It is highly acidic, easily forming an intensely yellow anion, (NO₂)₃C⁻. The pKa of trinitromethane has been measured at 0.17 ± 0.02 at 20 °C, which is remarkably acidic for a methane derivative. Trinitromethane easily dissolves in water to form an acidic yellow solution.\nThere is some evidence that the anion, which obeys the 4n+2 Hückel rule, is aromatic.", "Trinitromethane forms a series of bright yellow ionic salts. Many of these salts tend to be unstable and can be easily detonated by heat or impact.\nThe potassium salt of nitroform, KC(NO₂)₃, is a lemon yellow crystalline solid that decomposes slowly at room temperature and explodes above 95 °C. The ammonium salt is somewhat more stable, and deflagrates or explodes above 200 °C. The hydrazine salt, hydrazinium nitroformate, is thermally stable to above 125 °C and is being investigated as an ecologically friendly oxidizer for use in solid fuels for rockets.", "Trinitromethane, also referred to as nitroform, is a nitroalkane and oxidizer with chemical formula HC(NO₂)₃. It was first obtained in 1857 as the ammonium salt by the Russian chemist Leon Nikolaevich Shishkov (1830–1908). In 1900, it was discovered that nitroform can be produced by the reaction of acetylene with anhydrous nitric acid. This method went on to become the industrial process of choice during the 20th century. In the laboratory, nitroform can also be produced by hydrolysis of tetranitromethane under mild basic conditions.", "Carbon is a necessary component of all known life. ¹²C, a stable isotope of carbon, is abundantly produced in stars due to three factors:\n# The decay lifetime of a ⁸Be nucleus is four orders of magnitude larger than the time for two ⁴He nuclei (alpha particles) to scatter.\n# An excited state of the ¹²C nucleus exists a little (0.3193 MeV) above the energy level of ⁸Be + ⁴He. This is necessary because the ground state of ¹²C is 7.3367 MeV below the energy of ⁸Be + ⁴He; a ⁸Be nucleus and a ⁴He nucleus cannot reasonably fuse directly into a ground-state ¹²C nucleus. However, ⁸Be and ⁴He use the kinetic energy of their collision to fuse into the excited ¹²C (kinetic energy supplies the additional 0.3193 MeV necessary to reach the excited state), which can then transition to its stable ground state. According to one calculation, the energy level of this excited state must be between about 7.3 MeV and 7.9 MeV to produce sufficient carbon for life to exist, and must be further \"fine-tuned\" to between 7.596 MeV and 7.716 MeV in order to produce the abundant level of ¹²C observed in nature. The Hoyle state has been measured to be about 7.65 MeV above the ground state of ¹²C.\n# In the reaction ¹²C + ⁴He → ¹⁶O, there is an excited state of oxygen which, if it were slightly higher, would provide a resonance and speed up the reaction.
In that case, insufficient carbon would exist in nature; almost all of it would have converted to oxygen.\nSome scholars argue the 7.656 MeV Hoyle resonance, in particular, is unlikely to be the product of mere chance. Fred Hoyle argued in 1982 that the Hoyle resonance was evidence of a \"superintellect\"; Leonard Susskind in The Cosmic Landscape rejects Hoyle's intelligent design argument. Instead, some scientists believe that different universes, portions of a vast \"multiverse\", have different fundamental constants: according to this controversial fine-tuning hypothesis, life can only evolve in the minority of universes where the fundamental constants happen to be fine-tuned to support the existence of life. Other scientists reject the hypothesis of the multiverse on account of the lack of independent evidence.", "The triple-alpha process is highly dependent on carbon-12 and beryllium-8 having resonances with slightly more energy than helium-4. Based on known resonances, by 1952 it seemed impossible for ordinary stars to produce carbon as well as any heavier element. Nuclear physicist William Alfred Fowler had noted the beryllium-8 resonance, and Edwin Salpeter had calculated the reaction rate for Be, C, and O nucleosynthesis taking this resonance into account. However, Salpeter calculated that red giants burned helium at temperatures of 2·10 K or higher, whereas other recent work hypothesized temperatures as low as 1.1·10 K for the core of a red giant.\nSalpeters paper mentioned in passing the effects that unknown resonances in carbon-12 would have on his calculations, but the author never followed up on them. It was instead astrophysicist Fred Hoyle who, in 1953, used the abundance of carbon-12 in the universe as evidence for the existence of a carbon-12 resonance. The only way Hoyle could find that would produce an abundance of both carbon and oxygen was through a triple-alpha process with a carbon-12 resonance near 7.68 MeV, which would also eliminate the discrepancy in Salpeters calculations.\nHoyle went to Fowlers lab at Caltech and said that there had to be a resonance of 7.68 MeV in the carbon-12 nucleus. (There had been reports of an excited state at about 7.5 MeV.) Fred Hoyles audacity in doing this is remarkable, and initially, the nuclear physicists in the lab were skeptical. Finally, a junior physicist, Ward Whaling, fresh from Rice University, who was looking for a project decided to look for the resonance. Fowler permitted Whaling to use an old Van de Graaff generator that was not being used. Hoyle was back in Cambridge when Fowler's lab discovered a carbon-12 resonance near 7.65 MeV a few months later, validating his prediction. The nuclear physicists put Hoyle as first author on a paper delivered by Whaling at the summer meeting of the American Physical Society. A long and fruitful collaboration between Hoyle and Fowler soon followed, with Fowler even coming to Cambridge.\nThe final reaction product lies in a 0+ state (spin 0 and positive parity). Since the Hoyle state was predicted to be either a 0+ or a 2+ state, electron–positron pairs or gamma rays were expected to be seen. However, when experiments were carried out, the gamma emission reaction channel was not observed, and this meant the state must be a 0+ state. This state completely suppresses single gamma emission, since single gamma emission must carry away at least 1 unit of angular momentum. 
Pair production from an excited 0+ state is possible because their combined spins (0) can couple to a reaction that has a change in angular momentum of 0.", "The triple-alpha steps are strongly dependent on the temperature and density of the stellar material. The power released by the reaction is approximately proportional to the temperature to the 40th power, and the density squared. In contrast, the proton–proton chain reaction produces energy at a rate proportional to the fourth power of temperature, the CNO cycle at about the 17th power of the temperature, and both are linearly proportional to the density. This strong temperature dependence has consequences for the late stage of stellar evolution, the red-giant stage.\nFor lower mass stars on the red-giant branch, the helium accumulating in the core is prevented from further collapse only by electron degeneracy pressure. The entire degenerate core is at the same temperature and pressure, so when its density becomes high enough, fusion via the triple-alpha process rate starts throughout the core. The core is unable to expand in response to the increased energy production until the pressure is high enough to lift the degeneracy. As a consequence, the temperature increases, causing an increased reaction rate in a positive feedback cycle that becomes a runaway reaction. This process, known as the helium flash, lasts a matter of seconds but burns 60–80% of the helium in the core. During the core flash, the star's energy production can reach approximately 10 solar luminosities which is comparable to the luminosity of a whole galaxy, although no effects will be immediately observed at the surface, as the whole energy is used up to lift the core from the degenerate to normal, gaseous state. Since the core is no longer degenerate, hydrostatic equilibrium is once more established and the star begins to \"burn\" helium at its core and hydrogen in a spherical layer above the core. The star enters a steady helium-burning phase which lasts about 10% of the time it spent on the main sequence (the Sun is expected to burn helium at its core for about a billion years after the helium flash).\nFor higher mass stars, carbon collects in the core, displacing the helium to a surrounding shell where helium burning occurs. In this helium shell, the pressures are lower and the mass is not supported by electron degeneracy. Thus, as opposed to the center of the star, the shell is able to expand in response to increased thermal pressure in the helium shell. Expansion cools this layer and slows the reaction, causing the star to contract again. This process continues cyclically, and stars undergoing this process will have periodically variable radius and power production. These stars will also lose material from their outer layers as they expand and contract.", "With further increases of temperature and density, fusion processes produce nuclides only up to nickel-56 (which decays later to iron); heavier elements (those beyond Ni) are created mainly by neutron capture. The slow capture of neutrons, the s-process, produces about half of elements beyond iron. The other half are produced by rapid neutron capture, the r-process, which probably occurs in core-collapse supernovae and neutron star mergers.", "Ordinarily, the probability of the triple-alpha process is extremely small. However, the beryllium-8 ground state has almost exactly the energy of two alpha particles. In the second step, Be + He has almost exactly the energy of an excited state of C. 
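Putting the level energies quoted above together gives a quick consistency check on where the Hoyle state sits; the arithmetic below only restates numbers already given in this article.

```latex
\begin{align*}
E\left({}^{8}\mathrm{Be} + {}^{4}\mathrm{He}\right) - E\left({}^{12}\mathrm{C},\ \mathrm{g.s.}\right) &\approx 7.3367\ \mathrm{MeV},\\
E\left(\mathrm{Hoyle\ state}\right) - E\left({}^{8}\mathrm{Be} + {}^{4}\mathrm{He}\right) &\approx 0.3193\ \mathrm{MeV},\\
E\left(\mathrm{Hoyle\ state}\right) - E\left({}^{12}\mathrm{C},\ \mathrm{g.s.}\right) &\approx 7.3367 + 0.3193 = 7.656\ \mathrm{MeV}.
\end{align*}
```

This agrees with the measured value of about 7.65 MeV quoted earlier.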
This resonance greatly increases the probability that an incoming alpha particle will combine with beryllium-8 to form carbon. The existence of this resonance was predicted by Fred Hoyle before its actual observation, based on the physical necessity for it to exist, in order for carbon to be formed in stars. The prediction and then discovery of this energy resonance and process gave very significant support to Hoyle's hypothesis of stellar nucleosynthesis, which posited that all chemical elements had originally been formed from hydrogen, the true primordial substance. The anthropic principle has been cited to explain the fact that nuclear resonances are sensitively arranged to create large amounts of carbon and oxygen in the universe.", "Helium accumulates in the cores of stars as a result of the proton–proton chain reaction and the carbon–nitrogen–oxygen cycle.\nThe nuclear fusion of two helium-4 nuclei produces beryllium-8, which is highly unstable and decays back into smaller nuclei almost immediately, unless within that time a third alpha particle fuses with the beryllium-8 nucleus to produce an excited resonance state of carbon-12, called the Hoyle state. This state nearly always decays back into three alpha particles, but roughly once in every 2,421.3 decays it releases energy and changes into the stable ground state of carbon-12. When a star runs out of hydrogen to fuse in its core, it begins to contract and heat up. If the central temperature rises to 10⁸ K, six times hotter than the Sun's core, alpha particles can fuse fast enough to get past the beryllium-8 barrier and produce significant amounts of stable carbon-12.\nThe net energy release of the process is 7.275 MeV.\nAs a side effect of the process, some carbon nuclei fuse with additional helium to produce a stable isotope of oxygen and energy:\n:¹²C + ⁴He → ¹⁶O + γ (+7.162 MeV)\nNuclear fusion reactions of helium with hydrogen produce lithium-5, which is also highly unstable and likewise decays back into smaller nuclei almost immediately.\nFusing with additional helium nuclei can create heavier elements in a chain of stellar nucleosynthesis known as the alpha process, but these reactions are only significant at higher temperatures and pressures than in cores undergoing the triple-alpha process. This creates a situation in which stellar nucleosynthesis produces large amounts of carbon and oxygen, but only a small fraction of those elements are converted into neon and heavier elements. Oxygen and carbon are the main \"ash\" of helium-4 burning.", "The triple-alpha process is a set of nuclear fusion reactions by which three helium-4 nuclei (alpha particles) are transformed into carbon.", "The triple-alpha process is ineffective at the pressures and temperatures early in the Big Bang. One consequence of this is that no significant amount of carbon was produced in the Big Bang.", "UV curing is used for converting or curing inks, adhesives, and coatings. UV-cured adhesive has become a high-speed replacement for two-part adhesives, eliminating the need for solvent removal, ratio mixing, and potential life concern. It is used in flexographic, offset, pad, and screen printing processes, where UV curing systems are used to polymerize images on screen-printed products, ranging from T-shirts to 3D and cylindrical parts. It is used in fine instrument finishing (guitars, violins, ukuleles, etc.), pool cue manufacturing and other wood craft industries.
Printing with UV curable inks provides the ability to print on a very wide variety of substrates such as plastics, paper, canvas, glass, metal, foam boards, tile, films, and many other materials.\nIndustries that use UV curing include medicine, automobiles, cosmetics (for example artificial fingernails and gel nail polish), food, science, education, and art. UV curable inks have successfully met the demands of the publication sector in terms of print quality, durability, and compatibility with different substrates, making them a suitable choice for printing applications in this industry.", "Medium-pressure mercury-vapor lamps have historically been the industry standard for curing products with ultraviolet light. The bulbs work by sending an electric discharge to excite a mixture of mercury and noble gases, generating a plasma. Once the mercury reaches a plasma state, it irradiates a high spectral output in the UV region of the electromagnetic spectrum. Major peaks in light intensity occur in the 240-270 nm and 350-380 nm regions. These intense peaks, when matched with the absorption profile of a photoinitiator, cause the rapid curing of materials. By modifying the bulb mixture with different gases and metal halides, the distribution of wavelength peaks can be altered, and material interactions are changed.\nMedium-pressure lamps can either be standard gas-discharge lamps or electrodeless lamps, and typically use an elongated bulb to emit energy. By incorporating optical designs such an elliptical or even aconic reflector, light can either be focused or projected over a far distance. These lamps can often operate at over 900 degrees Celsius and produce UV energy levels over 10 W/cm.", "UV curing (ultraviolet curing) is the process by which ultraviolet light initiates a photochemical reaction that generates a crosslinked network of polymers through radical polymerization or cationic polymerization. UV curing is adaptable to printing, coating, decorating, stereolithography, and in the assembly of a variety of products and materials. UV curing is a low-temperature, high speed, and solventless process as curing occurs via polymerization. Originally introduced in the 1960s, this technology has streamlined and increased automation in many industries in the manufacturing sector.", "A primary advantage of curing with ultraviolet light is the speed at which a material can be processed. Speeding up the curing, or drying step, in a process can reduce flaws and errors by decreasing time that an ink or coating spends as wet. This can increase the quality of a finished item, and potentially allow for greater consistency. Another benefit to decreasing manufacturing time is that less space needs to be devoted to storing items which can not be used until the drying step is finished.\nBecause UV energy has unique interactions with many different materials, UV curing allows for the creation of products with characteristics not achievable via other means. This has led to UV curing becoming fundamental in many fields of manufacturing and technology, where changes in strength, hardness, durability, chemical resistance, and many other properties are required.", "Since development of the aluminium gallium nitride LED in the early 2000s, UV LED technology has seen sustained growth in the UV curing marketplace. Generating energy most efficiently in the 365-405 nm UVA wavelengths, continued technological advances, have allowed for improved electrical efficiency of UV LEDs as well as significant increases in output. 
UV LED lamps generate high energy directed to a specific area which strengthen the uniformity. Benefiting from lower-temperature operation and the lack of hazardous mercury, UV LEDs have replaced medium-pressure lamps in many applications. Major limitations include difficulties in designing optics for curing on complex three-dimensional objects, and poor efficiency at generating lower-wavelength energy, though development work continues.", "Radical Polymerization is used in the curing of acrylic resins in the presence of UV in the industry. Light energy from UV breaks apart photoinitiaters, forming radicals. The radical then react with the polymers, forming polymers with radical groups that then react with additional monomers. The monomer chain extends until it reaches another polymer and reacts with the polymer. Polymers will form with monomer bridges between them, thus leading to a cross-linked network.", "Low-pressure mercury-vapor lamps generate primarily 254 nm UVC energy, and are most commonly used in disinfection applications. Operated at lower temperatures and with less voltage than medium-pressure lamps, they, like all UV sources, require shielding when operated to prevent excess exposure of skin and eyes.", "Cationic polymerization is used in the curing of epoxy resins in the presence of UV in the industry. Light energy from UV breaks apart photoinitiaters, forming an acidic which then donates a proton to the polymer. The monomers then attach themselves to the polymer, forming longer and longer chains leading to a cross-linked network.", "The main components of a UV curing solution includes resins, monomers, and photoinitiators. Resin is an oligomer that imparts specific properties to the final polymer. A monomer is used as a cross-linking agent and regulates the viscosity of the mixture to suit the application. The photoinitiator is responsible for absorbing the light and kickstarting the reaction, which helps control the cure rate and depth of cure. Each of these elements has a role to play in the crosslinking process and is linked to the composition of the final polymer.", "UV pinning is the process of applying a dose of low intensity ultraviolet (UV) light to a UV curable ink (UV ink). The lights wavelengths must be correctly matched to the inks photochemical properties. As a result, the ink droplets move to a higher viscosity state, but stop short of full cure. This is also referred to as the \"gelling\" of the ink.\nUV pinning is typically used in UV ink jet applications (e.g. the printing of labels, the printing of electronics, and the fabrication of 3-D microstructures).", "UV pinning enhances the management of drop size and image integrity, minimizing the unwanted mixing of drops and providing the highest possible image quality and the sharpest colour rendering.\nChallenge:\nOvercome the wetting problems that were causing UV-Curable inks to spread and cause ink droplets to bleed into each other before full curing single-pass digital printing of narrow web labels.\nSolution:\nA UV pinning system that uses high power UV light emitting diodes(LEDs) installed next to the inkjet array (print head). The UV light from the pinning system, typically lower than that of the full cure UV system, causes the UV ink to thicken, also known as gelling, but not fully cure. 
This ink thickening stops dot gain and holds the ink droplet pattern in place until it reaches the full cure UV system.", "UV-sensitive syndrome is a cutaneous condition inherited in an autosomal recessive fashion, characterized by photosensitivity and solar lentigines. Recent research identified that mutations of the KIAA1530 (UVSSA) gene as cause for the development of UV-sensitive syndrome. Furthermore, this protein was identified as a new player in the Transcription-coupled repair (TC-NER).", "Corona discharge on electrical apparatus can be detected by its ultraviolet emissions. Corona causes degradation of electrical insulation and emission of ozone and nitrogen oxide.\nEPROMs (Erasable Programmable Read-Only Memory) are erased by exposure to UV radiation. These modules have a transparent (quartz) window on the top of the chip that allows the UV radiation in.", "Ultraviolet radiation is helpful in the treatment of skin conditions such as psoriasis and vitiligo. Exposure to UVA, while the skin is hyper-photosensitive, by taking psoralens is an effective treatment for psoriasis. Due to the potential of psoralens to cause damage to the liver, PUVA therapy may be used only a limited number of times over a patient's lifetime.\nUVB phototherapy does not require additional medications or topical preparations for the therapeutic benefit; only the exposure is needed. However, phototherapy can be effective when used in conjunction with certain topical treatments such as anthralin, coal tar, and vitamin A and D derivatives, or systemic treatments such as methotrexate and Soriatane.", "Light-emitting diodes (LEDs) can be manufactured to emit radiation in the ultraviolet range. In 2019, following significant advances over the preceding five years, UV‑A LEDs of 365 nm and longer wavelength were available, with efficiencies of 50% at 1.0 W output. Currently, the most common types of UV LEDs are in 395 nm and 365 nm wavelengths, both of which are in the UV‑A spectrum. The rated wavelength is the peak wavelength that the LEDs put out, but light at both higher and lower wavelengths are present.\nThe cheaper and more common 395 nm UV LEDs are much closer to the visible spectrum, and give off a purple color. Other UV LEDs deeper into the spectrum do not emit as much visible light LEDs are used for applications such as UV curing applications, charging glow-in-the-dark objects such as paintings or toys, and lights for detecting counterfeit money and bodily fluids. UV LEDs are also used in digital print applications and inert UV curing environments. Power densities approaching 3 W/cm (30 kW/m) are now possible, and this, coupled with recent developments by photo-initiator and resin formulators, makes the expansion of LED cured UV materials likely.\nUV‑C LEDs are developing rapidly, but may require testing to verify effective disinfection. Citations for large-area disinfection are for non-LED UV sources known as germicidal lamps. Also, they are used as line sources to replace deuterium lamps in liquid chromatography instruments.", "Some animals, including birds, reptiles, and insects such as bees, can see near-ultraviolet wavelengths. Many fruits, flowers, and seeds stand out more strongly from the background in ultraviolet wavelengths as compared to human color vision. Scorpions glow or take on a yellow to green color under UV illumination, thus assisting in the control of these arachnids. 
Many birds have patterns in their plumage that are invisible at usual wavelengths but observable in ultraviolet, and the urine and other secretions of some animals, including dogs, cats, and human beings, are much easier to spot with ultraviolet. Urine trails of rodents can be detected by pest control technicians for proper treatment of infested dwellings.\nButterflies use ultraviolet as a communication system for sex recognition and mating behavior. For example, in the Colias eurytheme butterfly, males rely on visual cues to locate and identify females. Instead of using chemical stimuli to find mates, males are attracted to the ultraviolet-reflecting color of female hind wings. In Pieris napi butterflies it was shown that females in northern Finland with less UV-radiation present in the environment possessed stronger UV signals to attract their males than those occurring further south. This suggested that it was evolutionarily more difficult to increase the UV-sensitivity of the eyes of the males than to increase the UV-signals emitted by the females.\nMany insects use the ultraviolet wavelength emissions from celestial objects as references for flight navigation. A local ultraviolet emitter will normally disrupt the navigation process and will eventually attract the flying insect.\nThe green fluorescent protein (GFP) is often used in genetics as a marker. Many substances, such as proteins, have significant light absorption bands in the ultraviolet that are of interest in biochemistry and related fields. UV-capable spectrophotometers are common in such laboratories.\nUltraviolet traps called bug zappers are used to eliminate various small flying insects. They are attracted to the UV and are killed using an electric shock, or trapped once they come into contact with the device. Different designs of ultraviolet radiation traps are also used by entomologists for collecting nocturnal insects during faunistic survey studies.", "Ultraviolet lamps are used to sterilize workspaces and tools used in biology laboratories and medical facilities. Commercially available low-pressure mercury-vapor lamps emit about 86% of their radiation at 254 nanometers (nm), with 265 nm being the peak germicidal effectiveness curve. UV at these germicidal wavelengths damage a microorganism's DNA/RNA so that it cannot reproduce, making it harmless, (even though the organism may not be killed). Since microorganisms can be shielded from ultraviolet rays in small cracks and other shaded areas, these lamps are used only as a supplement to other sterilization techniques.\nUV-C LEDs are relatively new to the commercial market and are gaining in popularity. Due to their monochromatic nature (±5 nm) these LEDs can target a specific wavelength needed for disinfection. This is especially important knowing that pathogens vary in their sensitivity to specific UV wavelengths. LEDs are mercury free, instant on/off, and have unlimited cycling throughout the day.\nDisinfection using UV radiation is commonly used in wastewater treatment applications and is finding an increased usage in municipal drinking water treatment. Many bottlers of spring water use UV disinfection equipment to sterilize their water. Solar water disinfection has been researched for cheaply treating contaminated water using natural sunlight. The UV-A irradiation and increased water temperature kill organisms in the water.\nUltraviolet radiation is used in several food processes to kill unwanted microorganisms. 
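Germicidal exposure is commonly book-kept as a UV dose (fluence), i.e. irradiance multiplied by time, with survival often approximated as log-linear in that dose. The sketch below is a generic illustration of this bookkeeping only; the irradiance, exposure time, and dose-per-log figures are arbitrary placeholders, not values from this article.

```python
# Generic illustration with assumed numbers: UV disinfection is usually
# characterised by dose = irradiance x time, and microbial inactivation is
# often approximated as log-linear in that dose.
IRRADIANCE_MW_CM2 = 0.5        # assumed irradiance at the target surface, mW/cm^2
EXPOSURE_S = 40.0              # assumed exposure time, seconds
DOSE_PER_LOG_MJ_CM2 = 5.0      # assumed dose needed per 1-log (90%) reduction, mJ/cm^2

dose_mj_cm2 = IRRADIANCE_MW_CM2 * EXPOSURE_S      # 1 mW*s/cm^2 = 1 mJ/cm^2
log_reduction = dose_mj_cm2 / DOSE_PER_LOG_MJ_CM2
surviving_fraction = 10.0 ** (-log_reduction)

print(f"dose {dose_mj_cm2:.0f} mJ/cm^2 -> {log_reduction:.1f}-log reduction, "
      f"surviving fraction {surviving_fraction:.0e}")
```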
UV can be used to pasteurize fruit juices by flowing the juice over a high-intensity ultraviolet source. The effectiveness of such a process depends on the UV absorbance of the juice.\nPulsed light (PL) is a technique of killing microorganisms on surfaces using pulses of an intense broad spectrum, rich in UV-C between 200 and 280 nm. Pulsed light works with xenon flash lamps that can produce flashes several times per second. Disinfection robots use pulsed UV.\nStudies of the antimicrobial effectiveness of filtered far-UVC (222 nm) light on a range of pathogens, including bacteria and fungi, showed inhibition of pathogen growth; since far-UVC has lesser harmful effects on human tissue, these findings provide essential insights for reliable disinfection in healthcare settings, such as hospitals and long-term care homes. UVC has also been shown to be effective at degrading the SARS-CoV-2 virus.", "Using a catalytic chemical reaction from titanium dioxide and UVC exposure, oxidation of organic matter converts pathogens, pollens, and mold spores into harmless inert byproducts. However, the reaction of titanium dioxide and UVC is not a straightforward one. Several hundred reactions occur before the inert-byproduct stage and can hinder the overall reaction, creating formaldehyde, aldehydes, and other VOCs en route to the final stage. Thus, the use of titanium dioxide and UVC requires very specific parameters for a successful outcome. The cleansing mechanism of UV is a photochemical process. Contaminants in the indoor environment are almost entirely organic carbon-based compounds, which break down when exposed to high-intensity UV at 240 to 280 nm. Short-wave ultraviolet radiation can destroy DNA in living microorganisms. UV-C's effectiveness is directly related to intensity and exposure time.\nUV has also been shown to reduce gaseous contaminants such as carbon monoxide and VOCs. UV lamps radiating at 184 and 254 nm can remove low concentrations of hydrocarbons and carbon monoxide if the air is recycled between the room and the lamp chamber. This arrangement prevents the introduction of ozone into the treated air. Likewise, air may be treated by passing it by a single UV source operating at 184 nm and then over iron pentaoxide to remove the ozone produced by the UV lamp.", "Electronic components that require clear transparency for light to exit or enter (photovoltaic panels and sensors) can be potted using acrylic resins that are cured using UV energy. The advantages are low VOC emissions and rapid curing.\nCertain inks, coatings, and adhesives are formulated with photoinitiators and resins. When exposed to UV light, polymerization occurs, and so the adhesives harden or cure, usually within a few seconds. Applications include glass and plastic bonding, optical fiber coatings, the coating of flooring, UV coating and paper finishes in offset printing, dental fillings, and decorative fingernail \"gels\".\nUV sources for UV curing applications include UV lamps, UV LEDs, and excimer flash lamps. Fast processes such as flexo or offset printing require high-intensity light focused via reflectors onto a moving substrate and medium, so high-pressure Hg (mercury) or doped Fe (iron)-based bulbs are used, energized with electric arcs or microwaves. Lower-power fluorescent lamps and LEDs can be used for static applications.
Small high-pressure lamps can have light focused and transmitted to the work area via liquid-filled or fiber-optic light guides.\nThe impact of UV on polymers is used for modification of the properties (roughness and hydrophobicity) of polymer surfaces. For example, a poly(methyl methacrylate) surface can be smoothed by vacuum ultraviolet.\nUV radiation is useful in preparing low-surface-energy polymers for adhesives. Polymers exposed to UV will oxidize, thus raising the surface energy of the polymer. Once the surface energy of the polymer has been raised, the bond between the adhesive and the polymer is stronger.", "Ultraviolet radiation is used for very fine resolution photolithography, a procedure wherein a chemical called a photoresist is exposed to UV radiation that has passed through a mask. The exposure causes chemical reactions to occur in the photoresist. After removal of unwanted photoresist, a pattern determined by the mask remains on the sample. Steps may then be taken to \"etch\" away, deposit on or otherwise modify areas of the sample where no photoresist remains.\nPhotolithography is used in the manufacture of semiconductors, integrated circuit components, and printed circuit boards. Photolithography processes used to fabricate electronic integrated circuits presently use 193 nm UV and are experimentally using 13.5 nm UV for extreme ultraviolet lithography.", "In general, ultraviolet detectors use either a solid-state device, such as one based on silicon carbide or aluminium nitride, or a gas-filled tube as the sensing element. UV detectors that are sensitive to UV in any part of the spectrum respond to irradiation by sunlight and artificial light. A burning hydrogen flame, for instance, radiates strongly in the 185- to 260-nanometer range and only very weakly in the IR region, whereas a coal fire emits very weakly in the UV band yet very strongly at IR wavelengths; thus, a fire detector that operates using both UV and IR detectors is more reliable than one with a UV detector alone. Virtually all fires emit some radiation in the UVC band, whereas the Sun's radiation at this band is absorbed by the Earth's atmosphere. The result is that the UV detector is \"solar blind\", meaning it will not cause an alarm in response to radiation from the Sun, so it can easily be used both indoors and outdoors.\nUV detectors are sensitive to most fires, including hydrocarbons, metals, sulfur, hydrogen, hydrazine, and ammonia. Arc welding, electrical arcs, lightning, X-rays used in nondestructive metal testing equipment (though this is highly unlikely), and radioactive materials can produce levels that will activate a UV detection system. The presence of UV-absorbing gases and vapors will attenuate the UV radiation from a fire, adversely affecting the ability of the detector to detect flames. Likewise, the presence of an oil mist in the air or an oil film on the detector window will have the same effect.", "UV/Vis spectroscopy is widely used as a technique in chemistry to analyze chemical structure, most notably conjugated systems. UV radiation is often used to excite a given sample, and the fluorescent emission is measured with a spectrofluorometer. In biological research, UV radiation is used for quantification of nucleic acids or proteins.
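As an illustrative aside (not part of the source text): UV quantification of nucleic acids and proteins rests on the Beer–Lambert law, and laboratories commonly use the rule-of-thumb conversion shown below for double-stranded DNA; the numbers are standard approximations, not values taken from this article.
\[ A = \varepsilon \, c \, \ell \quad\Longrightarrow\quad c = \frac{A}{\varepsilon \, \ell} \]
\[ \text{e.g., for dsDNA: } A_{260} = 1.0 \text{ over a 1 cm path} \;\approx\; 50~\mu\mathrm{g/mL} \]
Here \(A\) is the measured absorbance (typically at 260 nm for nucleic acids or 280 nm for proteins), \(\varepsilon\) the absorption coefficient, \(c\) the concentration, and \(\ell\) the path length.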
In environmental chemistry, UV radiation can also be used to detect contaminants of emerging concern in water samples.\nIn pollution control applications, ultraviolet analyzers are used to detect emissions of nitrogen oxides, sulfur compounds, mercury, and ammonia, for example in the flue gas of fossil-fired power plants. Ultraviolet radiation can detect thin sheens of spilled oil on water, either by the high reflectivity of oil films at UV wavelengths, by the fluorescence of compounds in the oil, or by the absorption of UV light created by Raman scattering in water. UV absorbance can also be used to quantify contaminants in wastewater. Absorbance at 254 nm is most commonly used as a surrogate parameter to quantify natural organic matter (NOM). Another light-based detection method uses excitation–emission matrices (EEM) across a wide spectrum to detect and identify contaminants based on their fluorescence properties. EEM can be used to discriminate different groups of NOM based on differences in the light emission and excitation of their fluorophores. NOM with certain molecular structures is reported to have fluorescent properties across a wide range of excitation/emission wavelengths.\nUltraviolet lamps are also used as part of the analysis of some minerals and gems.", "Ultraviolet helps detect organic material deposits that remain on surfaces where periodic cleaning and sanitizing may have failed. It is used in the hotel industry, manufacturing, and other industries where levels of cleanliness or contamination are inspected.\nPerennial news features for many television news organizations involve an investigative reporter using a similar device to reveal unsanitary conditions in hotels, public toilets, hand rails, and such.", "Using multi-spectral imaging it is possible to read illegible papyrus, such as the burned papyri of the Villa of the Papyri or of Oxyrhynchus, or the Archimedes palimpsest. The technique involves taking pictures of the illegible document using different filters in the infrared or ultraviolet range, finely tuned to capture certain wavelengths of light. Thus, the optimum spectral portion can be found for distinguishing ink from paper on the papyrus surface.\nSimple NUV sources can be used to highlight faded iron-based ink on vellum.", "UV is an investigative tool at the crime scene helpful in locating and identifying bodily fluids such as semen, blood, and saliva. For example, ejaculated fluids or saliva can be detected by high-power UV sources, irrespective of the structure or colour of the surface the fluid is deposited upon. UV–vis microspectroscopy is also used to analyze trace evidence, such as textile fibers and paint chips, as well as questioned documents.\nOther applications include the authentication of various collectibles and art, and detecting counterfeit currency. Even materials not specially marked with UV-sensitive dyes may have distinctive fluorescence under UV exposure or may fluoresce differently under short-wave versus long-wave ultraviolet.", "Colorless fluorescent dyes that emit blue light under UV are added as optical brighteners to paper and fabrics. The blue light emitted by these agents counteracts yellow tints that may be present and causes the colors and whites to appear whiter or more brightly colored.\nUV fluorescent dyes that glow in the primary colors are used in paints, papers, and textiles either to enhance color under daylight illumination or to provide special effects when lit with UV lamps.
Blacklight paints that contain dyes that glow under UV are used in a number of art and aesthetic applications.\nAmusement parks often use UV lighting to fluoresce ride artwork and backdrops. This often has the side effect of causing rider's white clothing to glow light-purple.\nTo help prevent counterfeiting of currency, or forgery of important documents such as driver's licenses and passports, the paper may include a UV watermark or fluorescent multicolor fibers that are visible under ultraviolet light. Postage stamps are tagged with a phosphor that glows under UV rays to permit automatic detection of the stamp and facing of the letter.\nUV fluorescent dyes are used in many applications (for example, biochemistry and forensics). Some brands of pepper spray will leave an invisible chemical (UV dye) that is not easily washed off on a pepper-sprayed attacker, which would help police identify the attacker later.\nIn some types of nondestructive testing UV stimulates fluorescent dyes to highlight defects in a broad range of materials. These dyes may be carried into surface-breaking defects by capillary action (liquid penetrant inspection) or they may be bound to ferrite particles caught in magnetic leakage fields in ferrous materials (magnetic particle inspection).", "Photographic film responds to ultraviolet radiation but the glass lenses of cameras usually block radiation shorter than 350 nm. Slightly yellow UV-blocking filters are often used for outdoor photography to prevent unwanted bluing and overexposure by UV rays. For photography in the near UV, special filters may be used. Photography with wavelengths shorter than 350 nm requires special quartz lenses which do not absorb the radiation.\nDigital cameras sensors may have internal filters that block UV to improve color rendition accuracy. Sometimes these internal filters can be removed, or they may be absent, and an external visible-light filter prepares the camera for near-UV photography. A few cameras are designed for use in the UV.\nPhotography by reflected ultraviolet radiation is useful for medical, scientific, and forensic investigations, in applications as widespread as detecting bruising of skin, alterations of documents, or restoration work on paintings. Photography of the fluorescence produced by ultraviolet illumination uses visible wavelengths of light.\nIn ultraviolet astronomy, measurements are used to discern the chemical composition of the interstellar medium, and the temperature and composition of stars. Because the ozone layer blocks many UV frequencies from reaching telescopes on the surface of the Earth, most UV observations are made from space.", "Ultraviolet rays are usually invisible to most humans. The lens of the human eye blocks most radiation in the wavelength range of 300–400 nm; shorter wavelengths are blocked by the cornea. Humans also lack color receptor adaptations for ultraviolet rays. Nevertheless, the photoreceptors of the retina are sensitive to near-UV, and people lacking a lens (a condition known as aphakia) perceive near-UV as whitish-blue or whitish-violet. Under some conditions, children and young adults can see ultraviolet down to wavelengths around 310 nm. Near-UV radiation is visible to insects, some mammals, and some birds. Birds have a fourth color receptor for ultraviolet rays; this, coupled with eye structures that transmit more UV gives smaller birds \"true\" UV vision.", "The eye is most sensitive to damage by UV in the lower UV‑C band at 265–275 nm. 
Radiation of this wavelength is almost absent from sunlight at the surface of the Earth but is emitted by artificial sources such as the electrical arcs employed in arc welding. Unprotected exposure to these sources can cause \"welder's flash\" or \"arc eye\" (photokeratitis) and can lead to cataracts, pterygium and pinguecula formation. To a lesser extent, UV‑B in sunlight from 310 to 280 nm also causes photokeratitis (\"snow blindness\"), and the cornea, the lens, and the retina can be damaged.\nProtective eyewear is beneficial to those exposed to ultraviolet radiation. Since light can reach the eyes from the sides, full-coverage eye protection is usually warranted if there is an increased risk of exposure, as in high-altitude mountaineering. Mountaineers are exposed to higher-than-ordinary levels of UV radiation, both because there is less atmospheric filtering and because of reflection from snow and ice.\nOrdinary, untreated eyeglasses give some protection. Most plastic lenses give more protection than glass lenses, because, as noted above, glass is transparent to UV‑A and the common acrylic plastic used for lenses is less so. Some plastic lens materials, such as polycarbonate, inherently block most UV.", "Ultraviolet (UV) light is electromagnetic radiation of wavelengths of 10–400 nanometers, shorter than that of visible light, but longer than X-rays. UV radiation is present in sunlight, and constitutes about 10% of the total electromagnetic radiation output from the Sun. It is also produced by electric arcs, Cherenkov radiation, and specialized lights, such as mercury-vapor lamps, tanning lamps, and black lights. \nThe photons of ultraviolet have greater energy than those of visible light, from about 3.1 to 12 electron volts, around the minimum energy required to ionize atoms. Although long-wavelength ultraviolet is not considered an ionizing radiation because its photons lack sufficient energy, it can induce chemical reactions and cause many substances to glow or fluoresce. Many practical applications, including chemical and biological effects, are derived from the way that UV radiation can interact with organic molecules. These interactions can involve absorption or adjusting energy states in molecules, but do not necessarily involve heating. Short-wave ultraviolet light is ionizing radiation. Consequently, short-wave UV damages DNA and sterilizes surfaces with which it comes into contact.\nFor humans, suntan and sunburn are familiar effects of exposure of the skin to UV light, along with an increased risk of skin cancer. The amount of UV light produced by the Sun means that the Earth would not be able to sustain life on dry land if most of that light were not filtered out by the atmosphere. More energetic, shorter-wavelength \"extreme\" UV below 121 nm ionizes air so strongly that it is absorbed before it reaches the ground. However, ultraviolet light (specifically, UVB) is also responsible for the formation of vitamin D in most land vertebrates, including humans. The UV spectrum, thus, has effects both beneficial and detrimental to life.\nThe lower wavelength limit of the visible spectrum is conventionally taken as 400 nm, so ultraviolet rays are not visible to humans, although people can sometimes perceive light at shorter wavelengths than this. Insects, birds, and some mammals can see near-UV (NUV), i.e., slightly shorter wavelengths than what humans can see.", "Reptiles need UVB for biosynthesis of vitamin D, and other metabolic processes. 
Specifically, UVB is required to synthesize cholecalciferol (vitamin D3), which is needed for basic cellular and neural functioning as well as the utilization of calcium for bone and egg production. The UVA wavelength is also visible to many reptiles and might play a significant role in their ability to survive in the wild as well as in visual communication between individuals. Therefore, in a typical reptile enclosure, a fluorescent UVA/UVB source (at the proper strength and spectrum for the species) must be available for many captive species to survive. Simple supplementation with cholecalciferol (vitamin D3) is not enough, because it leapfrogs a complete biosynthetic pathway (with risks of possible overdose); the intermediate molecules and metabolites also play important roles in the animals' health. Natural sunlight at the right levels is superior to artificial sources, but providing it may not be possible for keepers in some parts of the world.\nIt is a known problem that high levels of UVA output can cause cellular and DNA damage to sensitive parts of the body – especially the eyes, where blindness or photokeratitis can result from improper use and placement of a UVA/UVB source. For many keepers there must also be provision for an adequate heat source; this has resulted in the marketing of heat and light \"combination\" products. Keepers should be cautious with these \"combination\" light, heat, and UVA/UVB generators: they typically emit high levels of UVA with lower levels of UVB, in fixed proportions that are difficult to control to meet the animals' needs. A better strategy is to use individual sources of these elements, so that they can be placed and controlled by the keepers for the maximum benefit of the animals.", "The evolution of early reproductive proteins and enzymes is attributed in modern models of evolutionary theory to ultraviolet radiation. UVB causes adjacent thymine bases in genetic sequences to bond together into thymine dimers, a disruption in the strand that reproductive enzymes cannot copy. This leads to frameshifting during genetic replication and protein synthesis, usually killing the cell. Before formation of the UV-blocking ozone layer, when early prokaryotes approached the surface of the ocean, they almost invariably died out. The few that survived had developed enzymes that monitored the genetic material and removed thymine dimers by nucleotide excision repair. Many enzymes and proteins involved in modern mitosis and meiosis are similar to repair enzymes, and are believed to be evolved modifications of the enzymes originally used to overcome DNA damage caused by UV.", "Gas lasers, laser diodes, and solid-state lasers can be manufactured to emit ultraviolet rays, and lasers are available that cover the entire UV range. The nitrogen gas laser uses electronic excitation of nitrogen molecules to emit a beam that is mostly UV. The strongest ultraviolet lines are at 337.1 nm and 357.6 nm in wavelength. Another type of high-power gas laser is the excimer laser. Excimer lasers are widely used lasers emitting in the ultraviolet and vacuum ultraviolet wavelength ranges. Presently, UV argon-fluoride excimer lasers operating at 193 nm are routinely used in integrated circuit production by photolithography. The current wavelength limit of production of coherent UV is about 126 nm, characteristic of the Ar₂* (argon excimer) laser.\nDirect UV-emitting laser diodes are available at 375 nm.
UV diode-pumped solid state lasers have been demonstrated using cerium-doped lithium strontium aluminum fluoride crystals (Ce:LiSAF), a process developed in the 1990s at Lawrence Livermore National Laboratory. Wavelengths shorter than 325 nm are commercially generated in diode-pumped solid-state lasers. Ultraviolet lasers can also be made by applying frequency conversion to lower-frequency lasers.\nUltraviolet lasers have applications in industry (laser engraving), medicine (dermatology, and keratectomy), chemistry (MALDI), free-air secure communications, computing (optical storage), and manufacture of integrated circuits.", "UV degradation is one form of polymer degradation that affects plastics exposed to sunlight. The problem appears as discoloration or fading, cracking, loss of strength or disintegration. The effects of attack increase with exposure time and sunlight intensity. The addition of UV absorbers inhibits the effect.\nSensitive polymers include thermoplastics and speciality fibers like aramids. UV absorption leads to chain degradation and loss of strength at sensitive points in the chain structure. Aramid rope must be shielded with a sheath of thermoplastic if it is to retain its strength.\nMany pigments and dyes absorb UV and change colour, so paintings and textiles may need extra protection both from sunlight and fluorescent lamps, two common sources of UV radiation. Window glass absorbs some harmful UV, but valuable artifacts need extra shielding. Many museums place black curtains over watercolour paintings and ancient textiles, for example. Since watercolours can have very low pigment levels, they need extra protection from UV. Various forms of picture framing glass, including acrylics (plexiglass), laminates, and coatings, offer different degrees of UV (and visible light) protection.", "Overexposure to UV‑B radiation not only can cause sunburn but also some forms of skin cancer. However, the degree of redness and eye irritation (which are largely not caused by UV‑A) do not predict the long-term effects of UV, although they do mirror the direct damage of DNA by ultraviolet.\nAll bands of UV radiation damage collagen fibers and accelerate aging of the skin. Both UV‑A and UV‑B destroy vitamin A in skin, which may cause further damage.\nUVB radiation can cause direct DNA damage. This cancer connection is one reason for concern about ozone depletion and the ozone hole.\nThe most deadly form of skin cancer, malignant melanoma, is mostly caused by DNA damage independent from UV‑A radiation. This can be seen from the absence of a direct UV signature mutation in 92% of all melanoma. Occasional overexposure and sunburn are probably greater risk factors for melanoma than long-term moderate exposure. UV‑C is the highest-energy, most-dangerous type of ultraviolet radiation, and causes adverse effects that can variously be mutagenic or carcinogenic.\nIn the past, UV‑A was considered not harmful or less harmful than UV‑B, but today it is known to contribute to skin cancer via indirect DNA damage (free radicals such as reactive oxygen species). UV‑A can generate highly reactive chemical intermediates, such as hydroxyl and oxygen radicals, which in turn can damage DNA. The DNA damage caused indirectly to skin by UV‑A consists mostly of single-strand breaks in DNA, while the damage caused by UV‑B includes direct formation of thymine dimers or cytosine dimers and double-strand DNA breakage. 
UV‑A is immunosuppressive for the entire body (accounting for a large part of the immunosuppressive effects of sunlight exposure), and is mutagenic for basal cell keratinocytes in skin.\nUVB photons can cause direct DNA damage. UV‑B radiation excites DNA molecules in skin cells, causing aberrant covalent bonds to form between adjacent pyrimidine bases, producing a dimer. Most UV-induced pyrimidine dimers in DNA are removed by the process known as nucleotide excision repair that employs about 30 different proteins. Those pyrimidine dimers that escape this repair process can induce a form of programmed cell death (apoptosis) or can cause DNA replication errors leading to mutation.\nAs a defense against UV radiation, the amount of the brown pigment melanin in the skin increases when exposed to moderate (depending on skin type) levels of radiation; this is commonly known as a sun tan. The purpose of melanin is to absorb UV radiation and dissipate the energy as harmless heat, protecting the skin against both direct and indirect DNA damage from the UV. UV‑A gives a quick tan that lasts for days by oxidizing melanin that was already present and by triggering the release of melanin from melanocytes. UV‑B yields a tan that takes roughly 2 days to develop because it stimulates the body to produce more melanin.", "In humans, excessive exposure to UV radiation can result in acute and chronic harmful effects on the eye's dioptric system and retina. The risk is elevated at high altitudes, and people living in high-latitude areas, where snow covers the ground into early summer and the Sun remains low in the sky even at its zenith, are particularly at risk. Skin, the circadian system, and the immune system can also be affected.\nThe differential effects of various wavelengths of light on the human cornea and skin are sometimes called the \"erythemal action spectrum\". The action spectrum shows that UVA does not cause immediate reaction, but rather UV begins to cause photokeratitis and skin redness (with lighter skinned individuals being more sensitive) at wavelengths starting near the beginning of the UVB band at 315 nm, and rapidly increasing to 300 nm. The skin and eyes are most sensitive to damage by UV at 265–275 nm, which is in the lower UV‑C band. At still shorter wavelengths of UV, damage continues to happen, but the overt effects are not as great with so little penetrating the atmosphere. The WHO-standard ultraviolet index is a widely publicized measurement of the total strength of UV wavelengths that cause sunburn on human skin, weighting UV exposure for action spectrum effects at a given time and location. This standard shows that most sunburn happens due to UV at wavelengths near the boundary of the UV‑A and UV‑B bands.", "Photobiology is the scientific study of the beneficial and harmful interactions of non-ionizing radiation in living organisms, conventionally demarcated around 10 eV, the first ionization energy of oxygen. UV ranges roughly from 3 to 30 eV in energy. Hence photobiology covers some, but not all, of the UV spectrum.", "UV light (specifically, UV‑B) causes the body to produce vitamin D, which is essential for life. Humans need some UV radiation to maintain adequate vitamin D levels. According to the World Health Organization:\nVitamin D can also be obtained from food and supplementation. Excess sun exposure produces harmful effects, however.\nVitamin D promotes the creation of serotonin.
The production of serotonin is in direct proportion to the degree of bright sunlight the body receives. Serotonin is thought to provide sensations of happiness, well-being and serenity to human beings.", "The impact of ultraviolet radiation on human health has implications for the risks and benefits of sun exposure and is also implicated in issues such as fluorescent lamps and health. Getting too much sun exposure can be harmful, but in moderation, sun exposure is beneficial.", "Lasers have been used to indirectly generate non-coherent extreme UV (E‑UV) radiation at 13.5 nm for extreme ultraviolet lithography. The E‑UV is not emitted by the laser, but rather by electron transitions in an extremely hot tin or xenon plasma, which is excited by an excimer laser. This technique does not require a synchrotron, yet can produce UV at the edge of the X‑ray spectrum. Synchrotron light sources can also produce all wavelengths of UV, including those at the boundary of the UV and X‑ray spectra at 10 nm.", "The vacuum ultraviolet (V‑UV) band (100–200 nm) can be generated by non-linear four-wave mixing in gases by sum or difference frequency mixing of two or more longer-wavelength lasers. The generation is generally done in gases (e.g., krypton or hydrogen, which are two-photon resonant near 193 nm) or metal vapors (e.g., magnesium). By making one of the lasers tunable, the V‑UV can be tuned. If one of the lasers is resonant with a transition in the gas or vapor then the V‑UV production is intensified. However, resonances also generate wavelength dispersion, and thus the phase matching can limit the tunable range of the four-wave mixing. Difference frequency mixing (i.e., generating the difference of the input frequencies rather than their sum) has an advantage over sum frequency mixing because the phase matching can provide greater tuning.\nIn particular, difference frequency mixing two photons of an ArF (193 nm) excimer laser with a tunable visible or near-IR laser in hydrogen or krypton provides resonantly enhanced tunable V‑UV covering from 100 nm to 200 nm. Practically, the lack of suitable gas/vapor cell window materials above the lithium fluoride cut-off wavelength limits the tuning range to longer than about 110 nm. Tunable V‑UV wavelengths down to 75 nm were achieved using window-free configurations.", "Black light incandescent lamps are also made from an incandescent light bulb with a filter coating which absorbs most visible light. Halogen lamps with fused quartz envelopes are used as inexpensive UV light sources in the near-UV range, from 400 to 300 nm, in some scientific instruments. Due to its black-body spectrum, a filament light bulb is a very inefficient ultraviolet source, emitting only a fraction of a percent of its energy as UV.", "Specialized UV gas-discharge lamps containing different gases produce UV radiation at particular spectral lines for scientific purposes. Argon and deuterium arc lamps are often used as stable sources, either windowless or with various windows such as magnesium fluoride. These are often the emitting sources in UV spectroscopy equipment for chemical analysis.\nOther UV sources with more continuous emission spectra include xenon arc lamps (commonly used as sunlight simulators), deuterium arc lamps, mercury-xenon arc lamps, and metal-halide arc lamps.\nThe excimer lamp, a UV source developed in the early 2000s, is seeing increasing use in scientific fields. It has the advantages of high intensity, high efficiency, and operation at a variety of wavelength bands into the vacuum ultraviolet.", "UV rays also treat certain skin conditions.
Modern phototherapy has been used to successfully treat psoriasis, eczema, jaundice, vitiligo, atopic dermatitis, and localized scleroderma. In addition, UV light, in particular UV‑B radiation, has been shown to induce cell cycle arrest in keratinocytes, the most common type of skin cell. As such, sunlight therapy can be a candidate for treatment of conditions such as psoriasis and exfoliative cheilitis, conditions in which skin cells divide more rapidly than usual or necessary.", "Shortwave UV lamps are made using a fluorescent lamp tube with no phosphor coating, composed of fused quartz or vycor, since ordinary glass absorbs UV‑C. These lamps emit ultraviolet light with two peaks in the UV‑C band at 253.7 nm and 185 nm due to the mercury within the lamp, as well as some visible light. From 85% to 90% of the UV produced by these lamps is at 253.7 nm, whereas only 5–10% is at 185 nm. The fused quartz tube passes the 253.7 nm radiation but blocks the 185 nm wavelength. Such tubes have two or three times the UV‑C power of a regular fluorescent lamp tube. These low-pressure lamps have a typical efficiency of approximately 30–40%, meaning that for every 100 watts of electricity consumed by the lamp, they will produce approximately 30–40 watts of total UV output. They also emit bluish-white visible light, due to mercury's other spectral lines. These \"germicidal\" lamps are used extensively for disinfection of surfaces in laboratories and food-processing industries, and for disinfecting water supplies.", "A black light lamp emits long-wave UV‑A radiation and little visible light. Fluorescent black light lamps work similarly to other fluorescent lamps, but use a phosphor on the inner tube surface which emits UV‑A radiation instead of visible light. Some lamps use a deep-bluish-purple Woods glass optical filter that blocks almost all visible light with wavelengths longer than 400 nanometers. The purple glow given off by these tubes is not the ultraviolet itself, but visible purple light from mercurys 404 nm spectral line which escapes being filtered out by the coating. Other black lights use plain glass instead of the more expensive Wood's glass, so they appear light-blue to the eye when operating.\nIncandescent black lights are also produced, using a filter coating on the envelope of an incandescent bulb that absorbs visible light (see section below). These are cheaper but very inefficient, emitting only a small fraction of a percent of their power as UV. Mercury-vapor black lights in ratings up to 1 kW with UV-emitting phosphor and an envelope of Wood's glass are used for theatrical and concert displays.\nBlack lights are used in applications in which extraneous visible light must be minimized; mainly to observe fluorescence, the colored glow that many substances give off when exposed to UV light. UV‑A / UV‑B emitting bulbs are also sold for other special purposes, such as tanning lamps and reptile-husbandry.", "Ultraviolet absorbers are molecules used in organic materials (polymers, paints, etc.) to absorb UV radiation to reduce the UV degradation (photo-oxidation) of a material. The absorbers can themselves degrade over time, so monitoring of absorber levels in weathered materials is necessary.\nIn sunscreen, ingredients that absorb UVA/UVB rays, such as avobenzone, oxybenzone and octyl methoxycinnamate, are organic chemical absorbers or \"blockers\". 
They are contrasted with inorganic absorbers/\"blockers\" of UV radiation such as carbon black, titanium dioxide, and zinc oxide.\nFor clothing, the ultraviolet protection factor (UPF) represents the ratio of sunburn-causing UV without and with the protection of the fabric, similar to sun protection factor (SPF) ratings for sunscreen. Standard summer fabrics have UPFs around 6, which means that about 20% of UV will pass through.\nSuspended nanoparticles in stained-glass prevent UV rays from causing chemical reactions that change image colors. A set of stained-glass color-reference chips is planned to be used to calibrate the color cameras for the 2019 ESA Mars rover mission, since they will remain unfaded by the high level of UV present at the surface of Mars.\nCommon soda–lime glass, such as window glass, is partially transparent to UVA, but is opaque to shorter wavelengths, passing about 90% of the light above 350 nm, but blocking over 90% of the light below 300 nm. A study found that car windows allow 3–4% of ambient UV to pass through, especially if the UV was greater than 380 nm. Other types of car windows can reduce transmission of UV that is greater than 335 nm. Fused quartz, depending on quality, can be transparent even to vacuum UV wavelengths. Crystalline quartz and some crystals such as CaF₂ and MgF₂ transmit well down to 150 nm or 160 nm wavelengths.\nWood's glass is a deep violet-blue barium-sodium silicate glass with about 9% nickel oxide developed during World War I to block visible light for covert communications. It allows both infrared daylight and ultraviolet night-time communications by being transparent between 320 nm and 400 nm and also the longer infrared and just-barely-visible red wavelengths. Its maximum UV transmission is at 365 nm, one of the wavelengths of mercury lamps.", "Very hot objects emit UV radiation (see black-body radiation). The Sun emits ultraviolet radiation at all wavelengths, including the extreme ultraviolet where it crosses into X-rays at 10 nm. Extremely hot stars (such as O- and B-type) emit proportionally more UV radiation than the Sun. Sunlight in space at the top of Earth's atmosphere (see solar constant) is composed of about 50% infrared light, 40% visible light, and 10% ultraviolet light, for a total intensity of about 1400 W/m² in vacuum.\nThe atmosphere blocks about 77% of the Sun's UV when the Sun is highest in the sky (at zenith), with absorption increasing at shorter UV wavelengths. At ground level with the Sun at zenith, sunlight is 44% visible light, 3% ultraviolet, and the remainder infrared. Of the ultraviolet radiation that reaches the Earth's surface, more than 95% is the longer wavelengths of UVA, with the small remainder UVB. Almost no UVC reaches the Earth's surface. The fraction of UVA and UVB remaining in UV radiation after it passes through the atmosphere is heavily dependent on cloud cover and atmospheric conditions. On \"partly cloudy\" days, patches of blue sky showing between clouds are also sources of (scattered) UVA and UVB, which are produced by Rayleigh scattering in the same way as the visible blue light from those parts of the sky. UVB also plays a major role in plant development, as it affects most of the plant hormones.
During total overcast, the amount of absorption due to clouds is heavily dependent on the thickness of the clouds and latitude, with no clear measurements correlating specific thickness and absorption of UVA and UVB.\nThe shorter bands of UVC, as well as even more-energetic UV radiation produced by the Sun, are absorbed by oxygen and generate the ozone in the ozone layer when single oxygen atoms produced by UV photolysis of dioxygen react with more dioxygen. The ozone layer is especially important in blocking most UVB and the remaining part of UVC not already blocked by ordinary oxygen in air.", "The electromagnetic spectrum of ultraviolet radiation (UVR), defined most broadly as 10–400 nanometers, can be subdivided into a number of ranges recommended by the standard ISO 21348:\nSeveral solid-state and vacuum devices have been explored for use in different parts of the UV spectrum. Many approaches seek to adapt visible light-sensing devices, but these can suffer from unwanted response to visible light and various instabilities. Ultraviolet can be detected by suitable photodiodes and photocathodes, which can be tailored to be sensitive to different parts of the UV spectrum. Sensitive UV photomultipliers are available. Spectrometers and radiometers are made for measurement of UV radiation. Silicon detectors are used across the spectrum.\nVacuum UV, or VUV, wavelengths (shorter than 200 nm) are strongly absorbed by molecular oxygen in the air, though the longer wavelengths around 150–200 nm can propagate through nitrogen. Scientific instruments can, therefore, use this spectral range by operating in an oxygen-free atmosphere (commonly pure nitrogen), without the need for costly vacuum chambers. Significant examples include 193-nm photolithography equipment (for semiconductor manufacturing) and circular dichroism spectrometers.\nTechnology for VUV instrumentation was largely driven by solar astronomy for many decades. While optics can be used to remove unwanted visible light that contaminates the VUV, detectors can, in general, be limited by their response to non-VUV radiation, and the development of solar-blind devices has been an important area of research. Wide-gap solid-state devices or vacuum devices with high-cutoff photocathodes can be attractive compared to silicon diodes.\nExtreme UV (EUV or sometimes XUV) is characterized by a transition in the physics of interaction with matter. Wavelengths longer than about 30 nm interact mainly with the outer valence electrons of atoms, while wavelengths shorter than that interact mainly with inner-shell electrons and nuclei. The long end of the EUV spectrum is set by a prominent He⁺ spectral line at 30.4 nm. EUV is strongly absorbed by most known materials, but synthesizing multilayer optics that reflect up to about 50% of EUV radiation at normal incidence is possible. This technology was pioneered by the NIXT and MSSTA sounding rockets in the 1990s, and it has been used to make telescopes for solar imaging. See also the Extreme Ultraviolet Explorer satellite.\nSome sources use the distinction of \"hard UV\" and \"soft UV\". For instance, in the case of astrophysics, the boundary may be at the Lyman limit (wavelength 91.2 nm), with \"hard UV\" being more energetic; the same terms may also be used in other fields, such as cosmetology, optoelectronics, etc.
The numerical values of the boundary between hard/soft, even within similar scientific fields, do not necessarily coincide; for example, one applied-physics publication used a boundary of 190 nm between hard and soft UV regions.", "\"Ultraviolet\" means \"beyond violet\" (from Latin ultra, \"beyond\"), violet being the color of the highest frequencies of visible light. Ultraviolet has a higher frequency (thus a shorter wavelength) than violet light.\nUV radiation was discovered in 1801 when the German physicist Johann Wilhelm Ritter observed that invisible rays just beyond the violet end of the visible spectrum darkened silver chloride-soaked paper more quickly than violet light itself. He called them \"(de-)oxidizing rays\" to emphasize chemical reactivity and to distinguish them from \"heat rays\", discovered the previous year at the other end of the visible spectrum. The simpler term \"chemical rays\" was adopted soon afterwards, and remained popular throughout the 19th century, although some said that this radiation was entirely different from light (notably John William Draper, who named them \"tithonic rays\"). The terms \"chemical rays\" and \"heat rays\" were eventually dropped in favor of ultraviolet and infrared radiation, respectively. In 1878, the sterilizing effect of short-wavelength light on bacteria was discovered. By 1903, the most effective wavelengths were known to be around 250 nm. In 1960, the effect of ultraviolet radiation on DNA was established.\nThe discovery of ultraviolet radiation with wavelengths below 200 nm, named \"vacuum ultraviolet\" because it is strongly absorbed by the oxygen in air, was made in 1893 by German physicist Victor Schumann.", "Ultraviolet radiation can aggravate several skin conditions and diseases, including systemic lupus erythematosus, Sjögren's syndrome, Senear–Usher syndrome, rosacea, dermatomyositis, Darier's disease, Kindler–Weary syndrome, and porokeratosis.", "Because of its ability to cause chemical reactions and excite fluorescence in materials, ultraviolet radiation has a number of applications. The following list gives some uses of specific wavelength bands in the UV spectrum.\n* 13.5 nm: Extreme ultraviolet lithography\n* 30–200 nm: Photoionization, ultraviolet photoelectron spectroscopy, standard integrated circuit manufacture by photolithography\n* 230–365 nm: UV-ID, label tracking, barcodes\n* 230–400 nm: Optical sensors, various instrumentation\n* 240–280 nm: Disinfection, decontamination of surfaces and water (DNA absorption has a peak at 260 nm), germicidal lamps\n* 200–400 nm: Forensic analysis, drug detection\n* 270–360 nm: Protein analysis, DNA sequencing, drug discovery\n* 280–400 nm: Medical imaging of cells\n* 300–320 nm: Light therapy in medicine\n* 300–365 nm: Curing of polymers and printer inks\n* 350–370 nm: Bug zappers (flies are most attracted to light at 365 nm)", "Medical organizations recommend that patients protect themselves from UV radiation by using sunscreen. Five sunscreen ingredients have been shown to protect mice against skin tumors. However, some sunscreen chemicals produce potentially harmful substances if they are illuminated while in contact with living cells. The amount of sunscreen that penetrates into the lower layers of the skin may be large enough to cause damage.\nSunscreen reduces the direct DNA damage that causes sunburn, by blocking UV‑B, and the usual SPF rating indicates how effectively this radiation is blocked.
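As a rough illustration (an aside, not a claim from the source): SPF can be read as the ratio of the minimal erythemal (sunburn-producing) UV dose on sunscreen-protected skin to that on unprotected skin, so under idealized, uniform application the fraction of erythemally weighted UV‑B transmitted is roughly the reciprocal of the SPF.
\[ \mathrm{SPF} = \frac{\mathrm{MED}_{\text{protected}}}{\mathrm{MED}_{\text{unprotected}}}, \qquad T_{\mathrm{UVB}} \approx \frac{1}{\mathrm{SPF}} \]
For example, SPF 30 corresponds to transmitting roughly 1/30 ≈ 3% of the erythemally weighted UV‑B (about 97% blocked), assuming the rated amount of sunscreen is actually applied.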
SPF is, therefore, also called UVB-PF, for \"UV‑B protection factor\". This rating, however, offers no data about important protection against UVA, which does not primarily cause sunburn but is still harmful, since it causes indirect DNA damage and is also considered carcinogenic. Several studies suggest that the absence of UV‑A filters may be the cause of the higher incidence of melanoma found in sunscreen users compared to non-users. Some sunscreen lotions contain titanium dioxide, zinc oxide, and avobenzone, which help protect against UV‑A rays.\nThe photochemical properties of melanin make it an excellent photoprotectant. However, sunscreen chemicals cannot dissipate the energy of the excited state as efficiently as melanin and therefore, if sunscreen ingredients penetrate into the lower layers of the skin, the amount of reactive oxygen species may be increased. The amount of sunscreen that penetrates through the stratum corneum may or may not be large enough to cause damage.\nIn an experiment by Hanson et al. that was published in 2006, the amount of harmful reactive oxygen species (ROS) was measured in untreated and in sunscreen treated skin. In the first 20 minutes, the film of sunscreen had a protective effect and the number of ROS species was smaller. After 60 minutes, however, the amount of absorbed sunscreen was so high that the amount of ROS was higher in the sunscreen-treated skin than in the untreated skin. The study indicates that sunscreen must be reapplied within 2 hours in order to prevent UV light from penetrating to sunscreen-infused live skin cells.", "UVGI can be used to disinfect air with prolonged exposure. In the 1930s and 40s, an experiment in public schools in Philadelphia showed that upper-room ultraviolet fixtures could significantly reduce the transmission of measles among students. In 2020, UVGI is again being researched as a possible countermeasure against COVID-19.\nUV and violet light are able to neutralize the infectivity of SARS-CoV-2. Viral titers usually found in the sputum of COVID-19 patients are completely inactivated by levels of UV-A and UV-B irradiation that are similar to those levels experienced from natural sun exposure. This finding suggests that the reduced incidence of SARS-COV-2 in the summer may be, in part, due to the neutralizing activity of solar UV irradiation.\nVarious UV-emitting devices can be used for SARS-CoV-2 disinfection, and these devices may help in reducing the spread of infection. SARS-CoV-2 can be inactivated by a wide range of UVC wavelengths, and the wavelength of 222 nm provides the most effective disinfection performance.\nDisinfection is a function of UV intensity and time. For this reason, it is in theory not as effective on moving air, or when the lamp is perpendicular to the flow, as exposure times are dramatically reduced. However, numerous professional and scientific publications have indicated that the overall effectiveness of UVGI actually increases when used in conjunction with fans and HVAC ventilation, which facilitate whole-room circulation that exposes more air to the UV source. Air purification UVGI systems can be free-standing units with shielded UV lamps that use a fan to force air past the UV light. Other systems are installed in forced air systems so that the circulation for the premises moves microorganisms past the lamps. Key to this form of sterilization is placement of the UV lamps and a good filtration system to remove the dead microorganisms. 
For example, forced air systems by design impede line-of-sight, thus creating areas of the environment that will be shaded from the UV light. However, a UV lamp placed at the coils and drain pans of cooling systems will keep microorganisms from forming in these naturally damp places.", "When the modeling phase is complete, selected systems are validated using a professional third party to provide oversight and to determine how closely the model is able to predict the reality of system performance. System validation uses non-pathogenic surrogates such as MS2 phage or Bacillus subtilis to determine the Reduction Equivalent Dose within an envelope of flow and transmittance.\nTo validate effectiveness in drinking water systems, the method described in the EPA UV guidance manual is typically used by US water utilities, whilst Europe has adopted Germany's DVGW 294 standard. For wastewater systems, the NWRI/AwwaRF Ultraviolet Disinfection Guidelines for Drinking Water and Water Reuse protocols are typically used, especially in wastewater reuse applications.", "Sizing of a UV system is affected by three variables: flow rate, lamp power, and UV transmittance in the water. Manufacturers typically develop sophisticated computational fluid dynamics (CFD) models validated with bioassay testing. This involves testing the UV reactor's disinfection performance with either MS2 or T1 bacteriophages at various flow rates, UV transmittances, and power levels in order to develop a regression model for system sizing. For example, this is a requirement for all public water systems in the United States per the EPA UV manual.\nThe flow profile is produced from the chamber geometry, flow rate, and particular turbulence model selected. The radiation profile is developed from inputs such as water quality, lamp type (power, germicidal efficiency, spectral output, arc length), and the transmittance and dimension of the quartz sleeve. Proprietary CFD software simulates both the flow and radiation profiles. Once the 3D model of the chamber is built, it is populated with a grid or mesh that comprises thousands of small cubes.\nPoints of interest—such as at a bend, on the quartz sleeve surface, or around the wiper mechanism—use a higher resolution mesh, whilst other areas within the reactor use a coarse mesh. Once the mesh is produced, hundreds of thousands of virtual particles are \"fired\" through the chamber. Each particle has several variables of interest associated with it, and the particles are \"harvested\" after the reactor. Discrete phase modeling produces delivered dose, head loss, and other chamber-specific parameters.", "Recent developments in LED technology have led to commercially available UV-C LEDs. UV-C LEDs use semiconductors to emit light between 255 nm and 280 nm. The wavelength emission is tuneable by adjusting the material of the semiconductor. The electrical-to-UV-C conversion efficiency of LEDs has been lower than that of mercury lamps. The reduced size of LEDs opens up options for small reactor systems, allowing for point-of-use applications and integration into medical devices. The low power consumption of semiconductors makes possible UV disinfection systems that use small solar cells in remote or Third World applications.\nUV-C LEDs do not necessarily last longer than traditional germicidal lamps in terms of hours used, instead having more-variable engineering characteristics and better tolerance for short-term operation.
A UV-C LED can achieve a longer installed time than a traditional germicidal lamp in intermittent use. Likewise, LED degradation increases with heat, while filament and HID lamp output wavelength is dependent on temperature, so engineers can design LEDs of a particular size and cost to have a higher output and faster degradation or a lower output and slower decline over time.", "Germicidal UV for disinfection is most typically generated by a mercury-vapor lamp. Low-pressure mercury vapor has a strong emission line at 254 nm, which is within the range of wavelengths that demonstrate strong disinfection effect. The optimal wavelengths for disinfection are close to 260 nm.\nMercury vapor lamps may be categorized as either low-pressure (including amalgam) or medium-pressure lamps. Low-pressure UV lamps offer high efficiencies (approx. 35% UV-C) but lower power, typically 1 W/cm power density (power per unit of arc length). Amalgam UV lamps utilize an amalgam to control mercury pressure to allow operation at a somewhat higher temperature and power density. They operate at higher temperatures and have a lifetime of up to 16,000 hours. Their efficiency is slightly lower than that of traditional low-pressure lamps (approx. 33% UV-C output), and power density is approximately 2–3 W/cm. Medium-pressure UV lamps operate at much higher temperatures, up to about 800 degrees Celsius, and have a polychromatic output spectrum and a high radiation output but lower UV-C efficiency of 10% or less. Typical power density is 30 W/cm or greater.\nDepending on the quartz glass used for the lamp body, low-pressure and amalgam UV emit radiation at 254 nm and also at 185 nm, which has chemical effects. UV radiation at 185 nm is used to generate ozone.\nThe UV lamps for water treatment consist of specialized low-pressure mercury-vapor lamps that produce ultraviolet radiation at 254 nm, or medium-pressure UV lamps that produce a polychromatic output from 200 nm to visible and infrared energy. The UV lamp never contacts the water; it is either housed in a quartz glass sleeve inside the water chamber or mounted externally to the water, which flows through the transparent UV tube. Water passing through the flow chamber is exposed to UV rays, which are absorbed by suspended solids, such as microorganisms and dirt, in the stream.", "UVGI is often used to disinfect equipment such as safety goggles, instruments, pipettors, and other devices. Lab personnel also disinfect glassware and plasticware this way. Microbiology laboratories use UVGI to disinfect surfaces inside biological safety cabinets (\"hoods\") between uses.", "Ultraviolet germicidal irradiation (UVGI) is a disinfection technique employing ultraviolet (UV) light, particularly UV-C (180-280 nm), to kill or inactivate microorganisms. UVGI primarily inactivates microbes by damaging their genetic material, thereby inhibiting their capacity to carry out vital functions.\nThe use of UVGI extends to an array of applications, encompassing food, surface, air, and water disinfection. UVGI devices can inactivate microorganisms including bacteria, viruses, fungi, molds, and other pathogens. Recent studies have substantiated the ability of UV-C light to inactivate SARS-CoV-2, the strain of coronavirus that causes COVID-19.\nUV-C wavelengths demonstrate varied germicidal efficacy and effects on biological tissue. Many germicidal lamps like low-pressure mercury (LP-Hg) lamps, with peak emissions around 254 nm, contain UV wavelengths that can be hazardous to humans. 
As a result, UVGI systems have been primarily limited to applications where people are not directly exposed, including hospital surface disinfection, upper-room UVGI (https://www.cdc.gov/coronavirus/2019-ncov/community/ventilation/uvgi.html), and water treatment. More recently, the application of wavelengths between 200 and 235 nm, often referred to as far-UVC, has gained traction for surface and air disinfection. These wavelengths are regarded as much safer due to their significantly reduced penetration into human tissue.\nNotably, UV-C light is virtually absent in sunlight reaching the Earth's surface due to the absorptive properties of the ozone layer within the atmosphere.", "Ultraviolet sterilizers are often used to help control unwanted microorganisms in aquaria and ponds. UV irradiation ensures that pathogens cannot reproduce, thus decreasing the likelihood of a disease outbreak in an aquarium.\nAquarium and pond sterilizers are typically small, with fittings for tubing that allows the water to flow through the sterilizer on its way from a separate external filter or water pump. Within the sterilizer, water flows as close as possible to the ultraviolet light source. Water pre-filtration is critical as water turbidity lowers UV-C penetration.\nMany of the better UV sterilizers have long dwell times and limit the space between the UV-C source and the inside wall of the UV sterilizer device.", "A 2006 project at University of California, Berkeley produced a design for inexpensive water disinfection in resource-deprived settings. The project was designed to produce an open source design that could be adapted to meet local conditions. In a somewhat similar proposal in 2014, Australian students designed a system using potato chip (crisp) packet foil to reflect solar UV radiation into a glass tube that disinfects water without power.", "Ultraviolet disinfection of water is a purely physical, chemical-free process. Even parasites such as Cryptosporidium or Giardia, which are extremely resistant to chemical disinfectants, are efficiently reduced. UV can also be used to remove chlorine and chloramine species from water; this process is called photolysis, and requires a higher dose than normal disinfection. The dead microorganisms are not removed from the water. UV disinfection does not remove dissolved organics, inorganic compounds or particles in the water. The world's largest water disinfection plant treats drinking water for New York City. The Catskill-Delaware Water Ultraviolet Disinfection Facility, commissioned on 8 October 2013, incorporates a total of 56 energy-efficient UV reactors treating up to a day.\nUltraviolet can also be combined with ozone or hydrogen peroxide to produce hydroxyl radicals to break down trace contaminants through an advanced oxidation process.\nIt used to be thought that UV disinfection was more effective for bacteria and viruses, which have more-exposed genetic material, than for larger pathogens that have outer coatings or that form cyst states (e.g., Giardia) that shield their DNA from UV light. However, it was recently discovered that ultraviolet radiation can be somewhat effective for treating the microorganism Cryptosporidium. The findings resulted in the use of UV radiation as a viable method to treat drinking water. Giardia in turn has been shown to be very susceptible to UV-C when the tests were based on infectivity rather than excystation.
It has been found that protists are able to survive high UV-C doses but are sterilized at low doses.", "UVC radiation is able to break chemical bonds. This leads to rapid aging of plastics and other material, and insulation and gaskets. Plastics sold as \"UV-resistant\" are tested only for the lower-energy UVB since UVC does not normally reach the surface of the Earth. When UV is used near plastic, rubber, or insulation, these materials may be protected by metal tape or aluminum foil.", "Since the U.S. Food and Drug Administration issued a rule in 2001 requiring that virtually all fruit and vegetable juice producers follow HACCP controls, and mandating a 5-log reduction in pathogens, UVGI has seen some use in sterilization of juices such as fresh-pressed.", "UV can influence indoor air chemistry, leading to the formation of ozone and other potentially harmful pollutants, including particulate pollution. This occurs primarily through photolysis, where UV photons break molecules into smaller radicals that form radicals such as OH. The radicals can react with volatile organic compounds (VOCs) to produce oxidized VOCs (OVOCs) and secondary organic aerosols (SOA).\nWavelengths below 242 nm can also generate ozone, which not only contributes to OVOCs and SOA formation but can be harmful in itself. When inhaled in high quantities, these pollutants can irritate the eyes and respiratory system and exacerbate conditions like asthma.\nThe specific pollutants produced depend on the initial air chemistry and the UV source power and wavelength. To control ozone and other indoor pollutants, ventilation and filtration methods are used, diluting airborne pollutants and maintaining indoor air quality.", "Many UVGI systems use UV wavelengths that can be harmful to humans, resulting in both immediate and long-term effects. Acute impacts on the eyes and skin can include conditions such as photokeratitis (often termed \"snow blindness\") and erythema (reddening of the skin), while chronic exposure may heighten the risk of skin cancer.\nHowever, the safety and effects of UV vary extensively by wavelength, implying that not all UVGI systems pose the same level of hazards. Humans typically encounter UV light in the form of solar UV, which comprises significant portions of UV-A and UV-B, but excludes UV-C. The UV-B band, able to penetrate deep into living, replicating tissue, is recognized as the most damaging and carcinogenic.\nMany standard UVGI systems, such as low-pressure mercury (LP-Hg) lamps, produce broad-band emissions in the UV-C range and also peaks in the UV-B band. This often makes it challenging to attribute damaging effects to a specific wavelength. Nevertheless, longer wavelengths in the UV-C band can cause conditions like photokeratitis and erythema. Hence, many UVGI systems are used in settings where direct human exposure is limited, such as with upper-room UVGI air cleaners and water disinfection systems.\nPrecautions are commonly implemented to protect users of these UVGI systems, including:\n* Warning labels: Labels alert users to the dangers of UV light.\n* Interlocking systems: Shielded systems, such as closed water tanks or air circulation units, often have interlocks that automatically shut off the UV lamps if the system is opened for human access. Clear viewports that block UV-C are also available.\n* Personal protective equipment: Most protective eyewear, particularly those compliant with ANSI Z87.1, block UV-C. 
Similarly, clothing, plastics, and most types of glass (excluding fused silica) effectively impede UV-C.\nSince the early 2010s there has been growing interest in the far-UVC wavelengths of 200-235 nm for whole-room exposure. These wavelengths are generally considered safer due to their limited penetration depth caused by increased protein absorption. This feature confines far-UVC exposure to the superficial layers of tissue, such as the outer layer of dead skin (the stratum corneum) and the tear film and surface cells of the cornea. As these tissues do not contain replicating cells, damage to them poses less carcinogenic risk. It has also been demonstrated that far-UVC does not cause erythema or damage to the cornea at levels many times that of solar UV or conventional 254 nm UVGI systems.", "UV disinfection is most effective for treating high-clarity, purified reverse osmosis distilled water. Suspended particles are a problem because microorganisms buried within particles are shielded from the UV light and pass through the unit unaffected. However, UV systems can be coupled with a pre-filter to remove those larger organisms that would otherwise pass through the UV system unaffected. The pre-filter also clarifies the water to improve light transmittance and therefore UV dose throughout the entire water column. Another key factor of UV water treatment is the flow rate: if the flow is too high, water will pass through without sufficient UV exposure. If the flow is too low, heat may build up and damage the UV lamp.\nA disadvantage of UVGI is that while water treated by chlorination is resistant to reinfection (until the chlorine off-gasses), UVGI water is not resistant to reinfection. UVGI water must be transported or delivered in such a way as to avoid reinfection.", "UV water treatment devices can be used for well water and surface water disinfection. UV treatment compares favourably with other water disinfection systems in terms of cost, labour and the need for technically trained personnel for operation. Water chlorination treats larger organisms and offers residual disinfection, but these systems are expensive because they need special operator training and a steady supply of a potentially hazardous material. Finally, boiling of water is the most reliable treatment method but it demands labour and imposes a high economic cost. UV treatment is rapid and, in terms of primary energy use, approximately 20,000 times more efficient than boiling.", "The degree of inactivation by ultraviolet radiation is directly related to the UV dose applied to the water. The dosage, a product of UV light intensity and exposure time, is usually measured in microjoules per square centimeter, or equivalently as microwatt seconds per square centimeter (1 μW·s/cm = 10 mW·s/m = 0.01 W·s/m). Dosages for a 90% kill of most bacteria and viruses range between 2,000 and 8,000 μW·s/cm. Larger parasites such as Cryptosporidium require a lower dose for inactivation. As a result, US EPA has accepted UV disinfection as a method for drinking water plants to obtain Cryptosporidium, Giardia or virus inactivation credits. 
For example, for a 90% reduction of Cryptosporidium, a minimum dose of 2,500 μW·s/cm is required based on EPA's 2006 guidance manual.", "The effectiveness of germicidal UV depends on the duration a microorganism is exposed to UV, the intensity and wavelength of the UV radiation, the presence of particles that can protect the microorganisms from UV, and a microorganism's ability to withstand UV during its exposure.\nIn many systems, redundancy in exposing microorganisms to UV is achieved by circulating the air or water repeatedly. This ensures multiple passes so that the UV is effective against the highest number of microorganisms and will irradiate resistant microorganisms more than once to break them down.\n\"Sterilization\" is often misquoted as being achievable. While it is theoretically possible in a controlled environment, it is very difficult to prove and the term \"disinfection\" is generally used by companies offering this service as to avoid legal reprimand. Specialist companies will often advertise a certain log reduction, e.g., 6-log reduction or 99.9999% effective, instead of sterilization. This takes into consideration a phenomenon known as light and dark repair (photoreactivation and base excision repair, respectively), in which a cell can repair DNA that has been damaged by UV light.\nThe effectiveness of this form of disinfection depends on line-of-sight exposure of the microorganisms to the UV light. Environments where design creates obstacles that block the UV light are not as effective. In such an environment, the effectiveness is then reliant on the placement of the UVGI system so that line of sight is optimum for disinfection.\nDust and films coating the bulb lower UV output. Therefore, bulbs require periodic cleaning and replacement to ensure effectiveness. The lifetime of germicidal UV bulbs varies depending on design. Also, the material that the bulb is made of can absorb some of the germicidal rays.\nLamp cooling under airflow can also lower UV output. Increases in effectiveness and UV intensity can be achieved by using reflection. Aluminum has the highest reflectivity rate versus other metals and is recommended when using UV.\nOne method for gauging UV effectiveness in water disinfection applications is to compute UV dose. The U.S. Environmental Protection Agency (EPA) published UV dosage guidelines for water treatment applications in 1986. UV dose cannot be measured directly but can be inferred based on the known or estimated inputs to the process:\n* Flow rate (contact time)\n* Transmittance (light reaching the target)\n* Turbidity (cloudiness)\n* Lamp age or fouling or outages (reduction in UV intensity)\nIn air and surface disinfection applications the UV effectiveness is estimated by calculating the UV dose which will be delivered to the microbial population. The UV dose is calculated as follows:\n: UV dose (μW·s/cm) = UV intensity (μW/cm) × exposure time (seconds)\nThe UV intensity is specified for each lamp at a distance of 1 meter. UV intensity is inversely proportional to the square of the distance so it decreases at longer distances. Alternatively, it rapidly increases at distances shorter than 1m. In the above formula, the UV intensity must always be adjusted for distance unless the UV dose is calculated at exactly from the lamp. 
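The dose arithmetic described above can be made concrete with a short sketch. It applies the stated relation (dose = intensity × exposure time), scales the intensity quoted at 1 meter by the inverse square of the distance, and optionally applies the end-of-lamp-life and coating derating factors discussed in the following paragraph. All numeric inputs in the example are hypothetical.

```python
# Minimal sketch of the UV dose estimate described in the text:
#   dose (uW*s/cm^2) = intensity (uW/cm^2) x exposure time (s),
# with the lamp intensity specified at 1 m and adjusted by the inverse
# square law for other distances. The end-of-life and coating factors
# mirror the derating discussed in the surrounding paragraphs; the
# specific input values below are hypothetical.

def uv_dose(intensity_at_1m_uw_cm2: float,
            distance_m: float,
            exposure_s: float,
            end_of_life_factor: float = 0.8,   # lamp expected to reach ~80% of initial output at EOL
            coating_factor: float = 1.0) -> float:
    """Estimated UV dose in uW*s/cm^2 at the given distance."""
    intensity = intensity_at_1m_uw_cm2 / (distance_m ** 2)  # inverse square law
    intensity *= end_of_life_factor * coating_factor
    return intensity * exposure_s

# Hypothetical example: 100 uW/cm^2 at 1 m, target 2 m away, 60 s exposure,
# shatter-proof coating assumed to remove 20% of output.
print(uv_dose(100.0, distance_m=2.0, exposure_s=60.0, coating_factor=0.8))  # 960 uW*s/cm^2
```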
Also, to ensure effectiveness, the UV dose must be calculated at the end of lamp life (EOL is specified in number of hours when the lamp is expected to reach 80% of its initial UV output) and at the furthest distance from the lamp on the periphery of the target area. Some shatter-proof lamps are coated with a fluorated ethylene polymer to contain glass shards and mercury in case of breakage; this coating reduces UV output by as much as 20%.\nTo accurately predict what UV dose will be delivered to the target, the UV intensity, adjusted for distance, coating, and end of lamp life, will be multiplied by the exposure time. In static applications the exposure time can be as long as needed for an effective UV dose to be reached. In case of rapidly moving air, in AC air ducts, for example, the exposure time is short, so the UV intensity must be increased by introducing multiple UV lamps or even banks of lamps. Also, the UV installation should ideally be located in a long straight duct section with the lamps directing UVC in a direction parallel to the airflow to maximize the time the air is irradiated.\nThese calculations actually predict the UV fluence and it is assumed that the UV fluence will be equal to the UV dose. The UV dose is the amount of germicidal UV energy absorbed by a microbial population over a period of time. If the microorganisms are planktonic (free floating) the UV fluence will be equal the UV dose. However, if the microorganisms are protected by mechanical particles, such as dust and dirt, or have formed biofilm a much higher UV fluence will be needed for an effective UV dose to be introduced to the microbial population.", "UV light is electromagnetic radiation with wavelengths shorter than visible light but longer than X-rays. UV is categorised into several wavelength ranges, with short-wavelength UV (UV-C) considered \"germicidal UV\". Wavelengths between about 200 nm and 300 nm are strongly absorbed by nucleic acids. The absorbed energy can result in defects including pyrimidine dimers. These dimers can prevent replication or can prevent the expression of necessary proteins, resulting in the death or inactivation of the organism. Recently, it has been shown that these dimers are fluorescent.\n* Mercury-based lamps operating at low vapor pressure emit UV light at the 253.7 nm line.\n* Ultraviolet light-emitting diode (UV-C LED) lamps emit UV light at selectable wavelengths between 255 and 280 nm.\n* Pulsed-xenon lamps emit UV light across the entire UV spectrum with a peak emission near 230 nm.\nThis process is similar to, but stronger than, the effect of longer wavelengths (UV-B) producing sunburn in humans. Microorganisms have less protection against UV and cannot survive prolonged exposure to it.\nA UVGI system is designed to expose environments such as water tanks, rooms and forced air systems to germicidal UV. Exposure comes from germicidal lamps that emit germicidal UV at the correct wavelength, thus irradiating the environment. The forced flow of air or water through this environment ensures exposure of that air or water.", "Exposure limits for UV, particularly the germicidal UV-C range, have evolved over time due to scientific research and changing technology. The American Conference of Governmental Industrial Hygienists (ACGIH) and the International Commission on Non-Ionizing Radiation Protection (ICNIRP) have set exposure limits to safeguard against both immediate and long-term effects of UV exposure. 
These limits, also referred to as Threshold Limit Values (TLVs), form the basis for emission limits in product safety standards.\nThe UV-C photobiological spectral band is defined as 100–280 nm, with limits currently applying only from 180 to 280 nm. This reflects concerns about acute damage such as erythema and photokeratitis as well as long-term delayed effects like photocarcinogenesis. However, with the increased safety evidence surrounding UV-C for germicidal applications, the existing ACGIH TLVs were revised in 2022.\nThe TLVs for the 222 nm UV-C wavelength (peak emissions from KrCl excimer lamps), following the 2022 revision, are now 161 mJ/cm for eye exposure and 479 mJ/cm for skin exposure over an eight-hour period. For the 254 nm UV wavelength, the updated exposure limit is now set at 6 mJ/cm for eyes and 10 mJ/cm for skin.", "Using UV light for disinfection of drinking water dates back to 1910 in Marseille, France. The prototype plant was shut down after a short time due to poor reliability. In 1955, UV water treatment systems were applied in Austria and Switzerland; by 1985 about 1,500 plants were employed in Europe. In 1998 it was discovered that protozoa such as cryptosporidium and giardia were more vulnerable to UV light than previously thought; this opened the way to wide-scale use of UV water treatment in North America. By 2001, over 6,000 UV water treatment plants were operating in Europe.\nOver time, UV costs have declined as researchers develop and use new UV methods to disinfect water and wastewater. Several countries have published regulations and guidance for the use of UV to disinfect drinking water supplies, including the US and the UK.", "The utilization of UVGI for air disinfection began in earnest in the mid-1930s. William F. Wells demonstrated in 1935 that airborne infectious organisms, specifically aerosolized B. coli exposed to 254 nm UV, could be rapidly inactivated. This built upon earlier theories of infectious droplet nuclei transmission put forth by Carl Flüugge and Wells himself. Prior to this, UV radiation had been studied predominantly in the context of liquid or solid media, rather than airborne microbes.\nShortly after Wells' initial experiments, high-intensity UVGI was employed to disinfect a hospital operating room at Duke University in 1936. The method proved a success, reducing postoperative wound infections from 11.62% without the use of UVGI to 0.24% with the use of UVGI. Soon, this approach was extended to other hospitals and infant wards using UVGI \"light curtains\", designed to prevent respiratory cross-infections, with noticeable success.\nAdjustments in the application of UVGI saw a shift from \"light curtains\" to upper-room UVGI, confining germicidal irradiation above human head level. Despite its dependency on good vertical air movement, this approach yielded favorable outcomes in preventing cross-infections. This was exemplified by Wells' successful usage of upper-room UVGI between 1937 and 1941 to curtail the spread of measles in suburban Philadelphia day schools. His study found that 53.6% of susceptibles in schools without UVGI became infected, while only 13.3% of susceptibles in schools with UVGI were infected.\nRichard L. Riley, initially a student of Wells, continued the study of airborne infection and UVGI throughout the 1950s and 60s, conducting significant experiments in a Veterans Hospital TB ward. 
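Returning to the revised ACGIH exposure limits quoted earlier in this section, a small sketch can translate an 8-hour TLV into a maximum continuous exposure time for a given irradiance. The TLV figures are taken from the text; the example irradiance of 1 μW/cm² is a hypothetical input.

```python
# Sketch only: converts the 2022 ACGIH TLVs quoted above (mJ/cm^2 over an
# 8-hour period) into an allowable exposure time at an assumed irradiance.
# The 222 nm and 254 nm TLV values come from the text; the 1 uW/cm^2
# irradiance is a hypothetical example.

TLV_MJ_CM2 = {
    (222, "eye"): 161.0,
    (222, "skin"): 479.0,
    (254, "eye"): 6.0,
    (254, "skin"): 10.0,
}

def allowed_exposure_seconds(wavelength_nm: int, tissue: str, irradiance_uw_cm2: float) -> float:
    """Time (s) to accumulate the 8-hour TLV at a constant irradiance."""
    tlv_uj_cm2 = TLV_MJ_CM2[(wavelength_nm, tissue)] * 1000.0  # mJ -> uJ
    return tlv_uj_cm2 / irradiance_uw_cm2  # (uJ/cm^2) / (uW/cm^2) = seconds

# At a hypothetical 1 uW/cm^2, 254 nm eye exposure reaches its TLV in 6,000 s
# (~1.7 h), while 222 nm eye exposure would take 161,000 s, far beyond 8 h.
print(allowed_exposure_seconds(254, "eye", 1.0), allowed_exposure_seconds(222, "eye", 1.0))
```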
Riley successfully demonstrated that UVGI could efficiently inactivate airborne pathogens and prevent the spread of tuberculosis.\nDespite initial successes, the use of UVGI declined in the second half of the 20th century era due to various factors, including a rise in alternative infection control and prevention methods, inconsistent efficacy results, and concerns regarding its safety and maintenance requirements. However, recent events like a rise in multiple drug-resistant bacteria and the COVID-19 pandemic have renewed interest in UVGI for air disinfection.", "The development of UVGI traces back to 1878 when Arthur Downes and Thomas Blunt found that sunlight, particularly its shorter wavelengths, hindered microbial growth. Expanding upon this work, Émile Duclaux, in 1885, identified variations in sunlight sensitivity among different bacterial species. A few years later, in 1890, Robert Koch demonstrated the lethal effect of sunlight on Mycobacterium tuberculosis, hinting at UVGI's potential for combating diseases like tuberculosis.\nSubsequent studies further defined the wavelengths most efficient for germicidal inactivation. In 1892, it was noted that the UV segment of sunlight had the most potent bactericidal effect. Research conducted in the early 1890s demonstrated the superior germicidal efficacy of UV-C compared to UV-A and UV-B.\nThe mutagenic effects of UV were first unveiled in a 1914 study that observed metabolic changes in Bacillus anthracis upon exposure to sublethal doses of UV. Frederick Gates, in the late 1920s, offered the first quantitative bactericidal action spectra for Staphylococcus aureus and Bacillus coli, noting peak effectiveness at 265 nm. This matched the absorption spectrum of nucleic acids, hinting at DNA damage as the key factor in bacterial inactivation. This understanding was solidified by the 1960s through research demonstrating the ability of UV-C to form thymine dimers, leading to microbial inactivation. These early findings collectively laid the groundwork for modern UVGI as a disinfection tool.", "Ultraviolet in sewage treatment is commonly replacing chlorination. This is in large part because of concerns that reaction of the chlorine with organic compounds in the waste water stream could synthesize potentially toxic and long lasting chlorinated organics and also because of the environmental risks of storing chlorine gas or chlorine containing chemicals. Individual wastestreams to be treated by UVGI must be tested to ensure that the method will be effective due to potential interferences such as suspended solids, dyes, or other substances that may block or absorb the UV radiation. According to the World Health Organization, \"UV units to treat small batches (1 to several liters) or low flows (1 to several liters per minute) of water at the community level are estimated to have costs of US$20 per megaliter, including the cost of electricity and consumables and the annualized capital cost of the unit.\"\nLarge-scale urban UV wastewater treatment is performed in cities such as Edmonton, Alberta. The use of ultraviolet light has now become standard practice in most municipal wastewater treatment processes. Effluent is now starting to be recognized as a valuable resource, not a problem that needs to be dumped. Many wastewater facilities are being renamed as water reclamation facilities, whether the wastewater is discharged into a river, used to irrigate crops, or injected into an aquifer for later recovery. 
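As a simple illustration of the WHO cost figure quoted above (about US$20 per megaliter treated, including electricity, consumables and annualized capital cost), the fragment below scales that unit cost to an assumed daily flow; the flow value is hypothetical.

```python
# Back-of-the-envelope sketch using the WHO figure quoted above:
# ~US$20 per megaliter for small community-scale UV units.
# The 0.5 ML/day flow is a hypothetical example, not a value from the text.

COST_PER_MEGALITER_USD = 20.0

def annual_uv_treatment_cost_usd(flow_ml_per_day: float) -> float:
    return flow_ml_per_day * COST_PER_MEGALITER_USD * 365.0

print(annual_uv_treatment_cost_usd(0.5))  # ~US$3,650 per year at 0.5 ML/day
```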
Ultraviolet light is now being used to ensure water is free from harmful organisms.", "Ultraviolet-sensitive beads (UV beads) are beads that are colorful in the presence of ultraviolet radiation. Ultraviolet rays are present in sunlight and light from various artificial sources and can cause sunburn or skin cancer. The color change in the beads alerts the wearer to the presence of the radiation.\nWhen changing colour they undergo photochromism.\nWhen the beads are not exposed to ultraviolet rays, they are colorless and either translucent or opaque. However, when sunlight falls onto the beads, they instantly turn into red, orange, yellow, blue, purple, or pink.", "Undark was a trade name for luminous paint made with a mixture of radioactive radium and zinc sulfide, as produced by the U.S. Radium Corporation between 1917 and 1938. It was used primarily in watch and clock dials. The people working in the industry who applied the radioactive paint became known as the Radium Girls because many of them became ill and some died from exposure to the radiation emitted by the radium contained within the product. The product was the direct cause of radium jaw in the dial painters. Undark was also available as a kit for general consumer use and marketed as glow-in-the-dark paint.", "Mixtures similar to Undark, consisting of radium and zinc sulfide were used by other companies. Trade names include:\n* Luna, used by the Radium Dial Company, a division of Standard Chemical Company\nand\n* Marvelite, used by Cold Light Manufacturing Company (a subsidiary of the Radium Company of Colorado)", "There are at least three stages involve in the operation of a vacuum disc filter:\nStage 1: Cake formation\nThe discs rotate in a slurry trough, compartmentalized to reduce the volume held in it at any one time, and therefore to reduce the residence time of slurry in the trough. The time available for this stage depends on two factors, the rotation speed of the disc and the height of the slurry level in the basin. A vacuum is applied inside the discs to promote cake filtration.\nStage 2: Cake dewatering\nWashing is largely restricted to the upper portions where the cake surface is nearly horizontal in orientation, which occurs at the temperature of the feed. The ceramic filter uses a sintered alumina disc to dewater slurry under low vacuum. The dewatering occurs by drawing water from the slurry by capillary action. This ensures that no air or particles are drawn into the filter medium to cause blockage. However, if too much wash water is applied then it can cascade down the cake and into the feed trough, where it merely dilutes the slurry.\nStage 3: Cake drying\nThe final water (moisture) content in the cake is regulated by passing dry (cold or hot) air or gas through the cake. Drying time is dependent on the distribution valve timing, slurry level on the basin, rotation speed, and scraper position.\nStage 4: Cake discharge\nThese are the typical conditions for the overall operation of the vacuum ceramic filter:\n* Slurry level: must be higher than the top of the sectors as they pass through the trough (otherwise air would simply pass through the cloth during cake formation).\n* Solids throughput: up to 4,000 kg/mh \n* Typical filtration capacity: 200-5,000 L/mh\n* Typical air consumption/ flow rate: 50–80 m/h·m at 500 Torr vacuum\n* Pressure difference: Typically, the pressure difference with ceramic disc is between 0.90 and 0.95 bar. 
However, pressure differences across the filter are usually limited to less than 85 kPa making it possible to process a wide range of feed materials in a continuous manner.\n* Rotating speed: Higher rotating speeds enable greater solid production rates by formation of thinner cakes. However, this may not be wholly desirable as washing efficiency is likely to be compromised. Moreover, an increased rotating speed requires more electrical power.\n* Minimum cake thickness: 3/8-1/2 in or 10–13 mm (for effective discharge)\n* Submergence required for cake discharge: 25% of cycle\n* Effective maximum submergence of the disk: 28% of cycle.", "The main advantage over other filtration systems is the reduction in energy consumption, up to 90% because no air flows through the discs due to the use of capillary force acting on the pores. Air breakthrough is prevented by the fine pores of the filter, thus allowing retention of higher vacuum levels. Therefore, the vacuum losses are less, which means the vacuum pump required is smaller than in conventional disc filters, thus minimizing operating costs. Power consumed by a vacuum ceramic filter with 45 m of filtration area is 15 kW while 170 kW is consumed by similar filters with cloth membranes.\nGenerally, conventional disc filters are not suitable for cake washing because the water quickly runs off the surface of the cake. As the cake solids are sprayed with a wash liquid to remove impurities, they are not suitable for conventional filtration systems where channelling or uneven distribution occurs, leading to cake cracking. However, cake washing has been proved to be more efficient with vacuum ceramic filters due to the steady flow profile and the even distribution of the cake.\nA further advantage of vacuum ceramic filter is the high output capacity with a very low water content and drier filter cake. By comparison, the performance of a VDFK-3 ceramic filter was compared with the existing BOU-40 and BLN40-3 drum type vacuum filters to filter aluminium hydroxide. From the results, the average moisture content was 5% (abs? or rel?) lower when a vacuum ceramic filter was used.\nVacuum ceramic filters also have a longer service life while cloth filters have to be replaced, which eventually increases the moisture content of the cake, lowers the productivity and disturbs the production operations. In addition, the ceramic filter is both mechanically and chemically reliable enough to withstand regeneration.\nWhilst the vacuum ceramic filter has proved to be a great innovation, there are still some limitations involved when operating the equipment. Ceramic filters exhibit large fluctuations in the recoiling washing pressure (0.05~0.35 MPa). This raises the short-term negative pressure and induces dilute acid due to the falling suck phenomenon. Therefore, the cleaning effect of the ceramic plates and the efficiency of the filter will be negatively affected.", "There are many design criteria which vary according to the type of disc and the required filtering capacity. The typical filter for extracting iron contains 12 ceramic filtering plates of the filtering elements (discs), which have a diameter of about 2705 mm, making the total filter surface 120 m. This filter is most suited to filter feed slurries with high solid concentrations (5-20% w/w) and particles ranging in size from 1–700 µm. 
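The geometry quoted above (a typical iron-concentrate filter with 12 ceramic discs of about 2705 mm diameter giving roughly 120 m2 of filter surface) can be checked with a short sketch, which also combines the quoted solids-throughput figure with that surface. Treating each disc as a full double-sided circle is a simplifying assumption, since real discs have a central hub and sector frames, and reading the quoted throughput as 4,000 kg per square metre of filter area per hour is an interpretation of the stripped units.

```python
import math

# Rough consistency check of the figures quoted in the text: 12 discs of
# ~2705 mm diameter giving a total filter surface of about 120 m^2.
# Modelling each disc as a plain double-sided circle (no central hub or
# sector frames) is a simplifying assumption, so this is an upper bound.

def double_sided_disc_area_m2(diameter_mm: float, n_discs: int) -> float:
    radius_m = diameter_mm / 1000.0 / 2.0
    return n_discs * 2.0 * math.pi * radius_m ** 2

area = double_sided_disc_area_m2(2705, 12)
print(round(area, 1), "m^2 (quoted: ~120 m^2)")   # ~137.9 m^2, plausibly above the quoted net area

# Combining the quoted maximum solids throughput (read as 4,000 kg per m^2
# per hour) with the quoted 120 m^2 surface gives an upper-bound solids rate:
print(120 * 4000, "kg/h upper bound")              # 480,000 kg/h
```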
The area of the filters available in the ceramic filter is up to 45 m, making them useful for metal and mineral concentrate processing.\nThe ceramic discs are available in two types, cast plate and membrane plate. The cast plate is a one piece ceramic plate with a homogeneous surface and a granulated core. The filter medium of the cast plate is the thick walls, separated by ceramic granules. These features form a rigid mechanical structure. The membrane plate type contains a thin membrane over a coarser core and a multi-layer porous structure made of aluminium oxide. The coarse part of the equipment provides mechanical strength to its structure while the intermediate layer acts as a membrane carrier. The outer layer membrane acts as a filtering layer. The filtration layer of the ceramic filter has uniform pores, which means that only a certain size of particles can be filtered by using vacuum ceramic filters.", "* If it takes more than five minutes to form 1/8 in. cake thicknesses, continuous filtration should not be attempted.\n* For negligible cake build up in clarification, cartridges, pre-coat drums, or sand filters are used for filtration\n*When the filtering surface is expected to be more than a few square meters, it is advisable to do laboratory tests to determine whether cake washing is critical. If there is a problem with the cake drying, filter precoating might be needed.\n* For finely ground ores and minerals, rotary drum filtration rates may be 1500 lb/(day)(sqft), at 20 rev/h and 18-25 inch Hg vacuum\n* Coarse solids and crystals may be filtered at rates of 6000 lb/ (day) (sqft) at 20 rev/h, 2-6 inch Hg vacuum. \n* Surface areas in porous ceramics: Porous ceramics processed by a sol-gel technique have extremely large surface areas, ranging from 200 to 500 square meters per gram", "Vacuum ceramic filters are to be found in the following industries:\n* paper making \n* metallurgy\n* water treatment \n* chemical \n* ore beneficiation process in mining (iron, gold, nickel, copper and quartz). \nThe process is used during a large continuous process of separating free filtering suspensions where washing is not required. Basically the filter works to separates solid-liquid mixtures by removing the water from mineral concentrates and moulding the feed slurries into pellets. This is accomplished by capillary action under low vacuum pressure. The pelletizing of the slurries is done by adding some solid matter to the sewage sludge so that water can be easily removed from the mixture. Eventually, the final cake products contain very little moisture and can be deposited as sewage. This process is commonly followed by bleaching and heating the cake. The end product of this filtration is a dry cake and filtrate containing no solid product.", "A vacuum ceramic filter is designed to separate liquids from solids for dewatering of ore concentrates purposes. The device consists of a rotator, slurry tank, ceramic filter plate, distributor, discharge scraper, cleaning device, frame, agitating device, pipe system, vacuum system, automatic acid dosing system, automatic lubricating system, valve and discharge chute. The operation and construction principle of vacuum ceramic filter resemble those of a conventional disc filter, but the filter medium is replaced by a finely porous ceramic disc. The disc material is inert, has a long operational life and is resistant to almost all chemicals. 
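The operating rules of thumb listed above can be tied together with a small timing sketch: at a rotation speed of about 20 rev/h, each revolution lasts three minutes, and reading the quoted 25% submergence as the fraction of each revolution available for cake formation gives the time a sector actually spends forming cake per pass. Combining these particular quoted values in one example is an assumption.

```python
# Sketch tying together two figures quoted above: a rotation speed of about
# 20 rev/h and submergence of roughly 25% of the cycle. Interpreting the
# submerged fraction as the cake-formation window is an assumption.

def cake_formation_time_s(rev_per_hour: float, submergence_fraction: float) -> float:
    seconds_per_revolution = 3600.0 / rev_per_hour
    return seconds_per_revolution * submergence_fraction

t = cake_formation_time_s(rev_per_hour=20, submergence_fraction=0.25)
print(round(t), "s of cake formation per revolution")  # 45 s per pass
```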
Performance can be optimized by taking into account all those factors which affect the overall efficiency of the separation process. Some of the variables affecting the performance of a vacuum ceramic filter include the solid concentration, speed rotation of the disc, slurry level in the feed basin, temperature of the feed slurry, and the pressure during dewatering stages and filter cake formation.", "The most important operating parameters of disc filters are the height of the slurry tank, agitation and the intensity and rotation speed of the disc as these will determine the cake formation and drying times. It is important to continuously agitate the slurry in order to prevent sedimentation of the solids. Excessively high agitation intensity may affect cake formation or change the particle size distribution of the product. One of the most commonly used agitators for filtration using vacuum disc filters is an oscillating cradle-type agitator located in the bottom of the basin, which requires fairly high rotation speeds to form homogeneous slurry. For processing rapidly settling high concentration slurries, bottom-feed rotary disc filters are usually used.\nStage 1: Filtration\nThe filtrate from the internal passages of the discs is removed by the low vacuum used in the filter, while the small pressure differential across the disc causes cake formation. With a thicker cake produced in this stage, more effective washing is achieved at higher wash liquor flows. However, this causes larger air volumes to be consumed at discharge due to reduced resistance and marginally lower cake moisture.\nStage 2: Dewatering\nIn rare cases, due to the even structure of the cakes formed, the steady flow profile of the ceramic filter media and the gas free filtrate flow cake, washing has proved to be efficient in ceramic disc filters. The formation of thicker cakes during filtration and higher vacuum level leads to greater removal of solute.\nStage 3: Discharge\nThe basic scraper works well when the cakes are relatively thick and non-sticky. The final cakes are discharged by blade or wire scrapers on either side of the discs However, other types of agitators should be considered and installed if the cake is sticky or thin. An air blow-back system is often employed to aid cake removal where wetter cakes are discharged from disc filter.", "One improvement over the standard design of ceramic vacuum filter is to use serialized pore size distributions of non-fibrous porous ceramic filters. The porosity of this type of ceramic can be varied from 20% to 60% by volume, which allows a low-pressure drop of liquid and gas flow. Custom sizes from 1 mm diameter/0.5 mm bore of porous ceramic filters are available for a range of designs. A non-fibrous porous ceramic filter is more resistant in alkaline and acidic conditions compared to fibrous ceramic filters. Thus, it has a longer service life as it has good wearing and erosion resistance as well as being able to withstand high temperatures.\nAnother improvement is applied at the regeneration stage when the residual filter cake is removed by back-flushing the clean plant water to wash the internal ceramic filter. Filter cake dewatering of ceramic filters produces low final cake moistures at minimum operation and maintenance costs. The residuals moisture are removed from the filter cake due to capillary action within the ceramic elements, which rotate above the slurry level. 
This process gives maximum filtration, and the final cake can be maintained at the lowest moisture content due to the effective cleaning of both ceramic sectors. In addition, performance can be optimized by using an ultrasonic cleaning system to achieve efficient operation conditions for regeneration of plates. The use of filtrate in looped water cycle in the design operation can reduce the water consumption up to 30-50%. High filtrate purity can be obtained, as there is only 0.001-0.005 g/L solids in the filtrate produced from this process. This eventually results in the reduction of polymer flocculant consumption in thickeners.\nCeramic scraper knives have been introduced to this design as they are able to shave through the mass formed in filter cake dewatering. The remaining layer of solid residue on the filter provides protection from mechanical abrasion. Therefore, the maintenance costs can be reduced while the service life of the ceramic filter increases.", "Filtrate is the waste that has been discharge in vacuum ceramic filters through the waste stream. During cake washing, a wash liquid is sprayed on the cake solids to remove impurities or additional filtrate. The filtrate goes into filtrate tank and is drained through a discharge system. However, the filtrate is recyclable and has low suspended solid content. Thus, it can be recycled through the system without further treatment. Filtrate is used to flush the disc during back flow washing to clean the micro-porous structure and remove any residual cake.", "Vaginal transplantation is procedure whereby donated or laboratory-grown vagina tissue is used to create a neovagina. It is most often used in women who have vaginal aplasia (the congenital absence of a vagina).", "Vaginal aplasia is a rare medical condition in which the vagina does not form properly before birth. Those with the condition may have a partially formed vagina, or none at all. The condition is typically treated by reconstructive surgery. First a space is surgically created where the vagina would typically exist. Then tissue from another part of the body is harvested, molded into the shape of a vagina, and grafted into the vagina cavity. This technique has significant drawbacks. Typically, the implanted tissue does not function normally as a muscle, which can lead to low enjoyment of sexual intercourse. Additionally, stenosis (narrowing of the cavity) can occur over time. Most women require multiple surgeries before a satisfactory result is achieved. An alternative to traditional reconstructive surgery is transplantation.", "In a handful of cases, a woman with vaginal aplasia has received a successful vagina transplant donated by her mother. The first such case is believed to have occurred in 1970, with no signs of rejection taking place after three years. In at least one case, a woman who received such a transplant was able to conceive and give birth. In 1981, a 12-year-old girl with vaginal aplasia received a vaginal wall implant from her mother. She became sexually active seven years later, without incident. At age 24, she conceived and carried a child to term. The child was born via cesarean section.", "In April 2014, a team of scientists led by Anthony Atala reported that they had successfully transplanted laboratory-grown vaginas into four female teenaged girls with a rare medical condition called Mayer-Rokitansky-Küster-Hauser syndrome that causes the vagina to develop improperly, or sometimes not at all. 
Between 1 in 1,500 and 1 in 4,000 females are born with this condition.\nThe four patients began treatment between May 2005 and August 2008. In each case, the medical research team began by taking a small sample of genital tissue from each teenager's vulva. The sample was used as a seed to grow additional tissue in the lab which was then placed in a vagina-shaped, biodegradable mold. Vaginal-lining cells were placed on the inside of the tube, while muscle cells were attached to the outside. Five to six weeks later, the structure was implanted into the patients, where the tissue continued to grow and connected with the girls' circulatory and other bodily systems. After about eight years, all four patients reported normal function and pleasure levels during sexual intercourse according to the Female Sexual Function Index questionnaire, a validated self-report tool. No adverse results or complications were reported.\nIn two of the four women, the vagina was attached to the uterus, making pregnancy possible. No pregnancies were reported, however, during the study period. Martin Birchall, who works on tissue engineering but was not involved in the study, said it \"addressed some of the most important questions facing translation of tissue engineering technologies.\" Commentary published by the National Health Service (NHS) called the study \"an important proof of concept\" and said it showed that tissue engineering had \"a great deal of potential.\" However, the NHS also cautioned that the sample size was very small and further research was necessary to determine the general viability of the technique.\nThe laboratory-grown autologous transplant technique could also be used on women who want reconstructive surgery due to cancer or other disease once the technique is perfected. However, more studies will need to be conducted and the techniques further developed before commercial production can begin.", "Viaspan was the trademark under which the University of Wisconsin cold storage solution (also known as University of Wisconsin solution or UW solution) was sold. Currently, UW solution is sold under the Belzer UW trademark and others like Bel-Gen or StoreProtect. UW solution was the first solution designed for use in organ transplantation, and became the first intracellular-like preservation medium. Developed in the late 1980s by Folkert Belzer and James Southard for pancreas preservation, the solution soon displaced EuroCollins solution as the preferred medium for cold storage of livers and kidneys, as well as the pancreas. The solution has also been used for hearts and other organs. 
University of Wisconsin cold storage solution remains what is often called the gold standard for organ preservation, despite the development of other solutions that are in some respects superior.", "The guiding principles for the development of UW Solution were:\n# osmotic concentration maintained by the use of metabolically inert substances like lactobionate and raffinose rather than with glucose\n# Hydroxyethyl starch (HES) is used to prevent edema\n# Substances are added to scavenge free radicals, along with steroids and insulin.", "* Potassium lactobionate: 100 mM\n* KHPO: 25 mM\n* MgSO: 5 mM\n* Raffinose: 30 mM\n* Adenosine: 5 mM\n* Glutathione: 3 mM\n* Allopurinol: 1 mM\n* Hydroxyethyl starch: 50 g/L", "*Stefan Lovgren, [https://web.archive.org/web/20050319053238/http://news.nationalgeographic.com/news/2005/03/0318_050318_cryonics.html \"Corpses Frozen for Future Rebirth by Arizona Company\"], March 2005, National Geographic", "When sucrose is cooled slowly it results in crystal sugar (or rock candy), but when cooled rapidly it can form syrupy cotton candy (candyfloss).\nVitrification can also occur in a liquid such as water, usually through very rapid cooling or the introduction of agents that suppress the formation of ice crystals. This is in contrast to ordinary freezing which results in ice crystal formation. Vitrification is used in cryo-electron microscopy to cool samples so quickly that they can be imaged with an electron microscope without damage. In 2017, the Nobel prize for chemistry was awarded for the development of this technology, which can be used to image objects such as proteins or virus particles.\nOrdinary soda-lime glass, used in windows and drinking containers, is created by the addition of sodium carbonate and lime (calcium oxide) to silicon dioxide. Without these additives, silicon dioxide would require very high temperature to obtain a melt, and subsequently (with slow cooling) a glass.\nVitrification is used in disposal and long-term storage of nuclear waste or other hazardous wastes in a method called geomelting. Waste is mixed with glass-forming chemicals in a furnace to form molten glass that then solidifies in canisters, thereby immobilizing the waste. The final waste form resembles obsidian and is a non-leaching, durable material that effectively traps the waste inside. It is widely assumed that such waste can be stored for relatively long periods in this form without concern for air or groundwater contamination. Bulk vitrification uses electrodes to melt soil and wastes where they lie buried. The hardened waste may then be disinterred with less danger of widespread contamination. According to the Pacific Northwest National Labs, \"Vitrification locks dangerous materials into a stable glass form that will last for thousands of years.\"", "Vitrification in cryopreservation is used to preserve, for example, human egg cells (oocytes) (in oocyte cryopreservation) and embryos (in embryo cryopreservation). It prevents ice crystal formation and is a very fast process: -23,000°C/min. \nCurrently, vitrification techniques have only been applied to brains (neurovitrification) by Alcor and to the upper body by the Cryonics Institute, but research is in progress by both organizations to apply vitrification to the whole body.\nMany woody plants living in polar regions naturally vitrify their cells to survive the cold. Some can survive immersion in liquid nitrogen and liquid helium. Vitrification can also be used to preserve endangered plant species and their seeds. 
For example, recalcitrant seeds are considered hard to preserve. Plant vitrification solution (PVS), one of application of vitrification, has successfully preserved Nymphaea caerulea seeds.\nAdditives used in cryobiology or produced naturally by organisms living in polar regions are called cryoprotectants.", "Vitrification (, via French ) is the full or partial transformation of a substance into a glass, that is to say, a non-crystalline amorphous solid. Glasses differ from liquids structurally and glasses possess a higher degree of connectivity with the same Hausdorff dimensionality of bonds as crystals: dim = 3. In the production of ceramics, vitrification is responsible for their impermeability to water.\nVitrification is usually achieved by heating materials until they liquidize, then cooling the liquid, often rapidly, so that it passes through the glass transition to form a glassy solid. Certain chemical reactions also result in glasses.\nIn terms of chemistry, vitrification is characteristic for amorphous materials or disordered systems and occurs when bonding between elementary particles (atoms, molecules, forming blocks) becomes higher than a certain threshold value. Thermal fluctuations break the bonds; therefore, the lower the temperature, the higher the degree of connectivity. Because of that, amorphous materials have a characteristic threshold temperature termed glass transition temperature (T): below T amorphous materials are glassy whereas above T they are molten.\nThe most common applications are in the making of pottery, glass, and some types of food, but there are many others, such as the vitrification of an antifreeze-like liquid in cryopreservation.\nIn a different sense of the word, the embedding of material inside a glassy matrix is also called vitrification. An important application is the vitrification of radioactive waste to obtain a substance that is thought to be safer and more stable for disposal.\nOne study suggests during the eruption of Mount Vesuvius in 79 AD, a victim's brain was vitrified by the extreme heat of the volcanic ash; however, this has been strenuously disputed.", "Vitrification is the progressive partial fusion of a clay, or of a body, as a result of a firing process. As vitrification proceeds, the proportion of glassy bond increases and the apparent porosity of the fired product becomes progressively lower. Vitreous bodies have open porosity, and may be either opaque or translucent. In this context, \"zero porosity\" may be defined as less than 1% water absorption. However, various standard procedures define the conditions of water absorption. An example is by ASTM, who state \"The term vitreous generally signifies less than 0.5% absorption, except for floor and wall tile and low-voltage electrical insulators, which are considered vitreous up to 3% water absorption.\"\nPottery can be made impermeable to water by glazing or by vitrification. Porcelain, bone china, and sanitaryware are examples of vitrified pottery, and are impermeable even without glaze. Stoneware may be vitrified or semi-vitrified; the latter type would not be impermeable without glaze.", "The Voitenko compressor is a shaped charge adapted from its original purpose of piercing thick steel armour to the task of accelerating shock waves. It was proposed by Anatoly Emelyanovich Voitenko (Анатолий Емельянович Войтенко), a Soviet scientist, in 1964. 
It slightly resembles a wind tunnel.\nThe Voitenko compressor initially separates a test gas from a shaped charge with a malleable steel plate. When the shaped charge detonates, most of its energy is focused on the steel plate, driving it forward and pushing the test gas ahead of it. Ames Research Center translated this idea into a self-destroying shock tube. A shaped charge accelerated the gas in a 3-cm glass-walled tube 2 meters in length. The velocity of the resulting shock wave was a phenomenal . The apparatus exposed to the detonation was, of course, completely destroyed, but not before useful data was extracted. In a typical Voitenko compressor, a shaped charge accelerates hydrogen gas, which in turn accelerates a thin disk up to about 40 km/s. A slight modification to the Voitenko compressor concept is a super-compressed detonation, a device that uses a compressible liquid or solid fuel in the steel compression chamber instead of a traditional gas mixture. A further extension of this technology is the explosive diamond anvil cell, utilizing multiple opposed shaped-charge jets projected at a single steel-encapsulated fuel, such as hydrogen. The fuels used in these devices, along with the secondary combustion reactions and long blast impulse, produce similar conditions to those encountered in fuel-air and thermobaric explosives.\nThis method of detonation produces energies over 100 keV (~10 K temperatures), suitable not only for nuclear fusion, but other higher-order quantum reactions as well. The UTIAS explosive-driven-implosion facility was used to produce stable, centered and focused hemispherical implosions to generate neutrons from D–D reactions. The simplest and most direct method proved to be in a predetonated stoichiometric mixture of deuterium and oxygen. The other successful method was using a miniature Voitenko-type compressor, where a plane diaphragm was driven by the implosion wave into a secondary small spherical cavity that contained pure deuterium gas at one atmosphere. In brief, PETN solid explosive is used to form a hemispherical shell (3–6 mm thick) in a 20-cm diameter hemispherical cavity milled in a massive steel chamber. The remaining volume is filled with a stoichiometric mixture of (H or D and O). This mixture is detonated by a very short, thin exploding wire located at the geometric center. The arrival of the detonation wave at the spherical surface instantly and simultaneously fires the explosive liner. The detonation wave in the explosive liner hits the metal cavity, reflects, and implodes on the preheated burnt gases, focuses at the center of the hemisphere (50 microseconds after the initiation of the exploding wire) and reflects, leaving behind a very small pocket (1 mm) of extremely high-temperature, high-pressure and high-density plasma.", "One study demonstrated the direct oxidation of glucose to arabinose by the same sodium hypochlorite, skipping the aldonic acid and aldoamide steps. For example, the general degradation of D-gluconamide into D-arabinose:\nOn top of that, the Weerman test could be used to show whether a hydroxylic group is beside the amido group. This reaction is only important in a historical sense because it is slow yielding and thus rarely used.", "During the degradation of α-hydroxy-substituted carbonic acid amides, the carbon chain shortens about one carbon-atom, too. 
\nThe reaction is very slow at room temperature, therefore the reaction mixture is heated up to 60–65 °C.", "Additionally the Weerman degradation could be executed with α,β-unsaturated carbonic acid amides. For example, acrylamide.", "During the degradation of α-hydroxy-substituted carbonic acid amides, the carbon chain shortens by one carbon-atom.\nThe reaction proceeds very slowly at room temperature, therefore the reaction mixture is heated up to 60-65 °C.", "The Weermann degradation could be executed with α-hydroxy-substituted carbonic acid amides. For example, sugar.", "Weerman degradation, also named Weerman reaction, is a name reaction in organic chemistry. It is named after Rudolf Adrian Weerman, who discovered it in 1910. In general, it is an organic reaction in carbohydrate chemistry in which amides are degraded by sodium hypochlorite, forming an aldehyde with one less carbon. Some have regarded it as an extension of the Hofmann rearrangement.", "The reaction mechanism is that of the related Hofmann degradation. \nAt first the carbonic acid amide (1) reacts with the sodium hypochlorite. After separate water and chloride an amine with a free bond is built 2. The intermediate (3) is generated by rearrangement. At this point two different mechanisms are possible. In the mechanism above two methanol molecules reacts with the intermediate. So is the compound (4) generated. After this carbon dioxide, water, ammonium and methanol are separated in different steps. At least it is protonated into an aldehyde (5).\nUntil the intermediate (3) the mechanism is the same like above. Then only one methanol-atom is added 4. With a protonation water, methanol and carbon dioxide are separated. An ammonium ion (5) is generated. During the hydrolysis a hydroxylic group is built 6. An aldehyde (7) is generated by separating an ammonium ion.", "The reaction mechanism is that of the related Hofmann degradation. \nAt first the carbonic acid amide (1) reacts with the sodium hypochlorite. After the separation of water and chloride an amine with a free bond is built 2. The intermediate (3) is generated by rearrangement. In the next step a hydrolysis takes place. Water is added at the carbon-atom with the number 1. A hydroxylic group is generated. The last step is that an acidic amide is separated and the aldehyde (4) is generated.", "The twisting of a ferromagnetic rod through which an electric current is flowing when the rod is placed in a longitudinal magnetic field. It was discovered by the German physicist Gustav Wiedemann in 1858\n. The Wiedemann effect is one of the manifestations of magnetostriction in a field formed by the combination of a longitudinal magnetic field and a circular magnetic field that is created by an electric current. If the electric current (or the magnetic field) is alternating, the rod will begin torsional oscillation.\nIn linear approach angle of rod torsion α does not depend on its cross-section form and is defined only by current density and magnetoelastic properties of the rod:\nwhere\n* is current density;\n* is magnetoelastic parameter, proportional to longitudinal magnetic field value;\n* is the shear modulus.", "Magnetostrictive position sensors use the Wiedemann effect to excite an ultrasonic pulse. Typically a small magnet is used to mark a position along a magnetostrictive wire. The magnetic field from a short current pulse in the wire combined with that from the position magnet excites the ultrasonic pulse. 
The time required for this pulse to travel from the point of excitation to a pickup at the end of the wire gives the position. Reflections from the other end of the wire could lead to disturbances. In order to avoid this the wire is connected to a mechanical damper that end.", "Winnowing is a process by which chaff is separated from grain. It can also be used to remove pests from stored grain. Winnowing usually follows threshing in grain preparation. In its simplest form, it involves throwing the mixture into the air so that the wind blows away the lighter chaff, while the heavier grains fall back down for recovery. Techniques included using a winnowing fan (a shaped basket shaken to raise the chaff) or using a tool (a winnowing fork or shovel) on a pile of harvested grain.", "In 1737 Andrew Rodger, a farmer on the estate of Cavers in Roxburghshire, developed a winnowing machine for corn, called a Fanner. These were successful and the family sold them throughout Scotland for many years. Some Scottish Presbyterian ministers saw the fanners as sins against God, for the wind was a thing specially made by him and an artificial wind was a daring and impious attempt to usurp what belonged to God alone. As the Industrial Revolution progressed, the winnowing process was mechanized by the invention of additional winnowing machines, such as fanning mills.", "In ancient China, the method was improved by mechanization with the development of the rotary winnowing fan, which used a cranked fan to produce the airstream. This was featured in Wang Zhens book the Nong Shu' of 1313 AD.", "The development of the winnowing barn allowed rice plantations in South Carolina to increase their yields dramatically.", "The winnowing-fan (λίκνον [líknon], also meaning a \"cradle\") featured in the rites accorded Dionysus and in the Eleusinian Mysteries: \"it was a simple agricultural implement taken over and mysticized by the religion of Dionysus,\" Jane Ellen Harrison remarked. Dionysus Liknites (\"Dionysus of the winnowing fan\") was wakened by the Dionysian women, in this instance called Thyiades, in a cave on Parnassus high above Delphi; the winnowing-fan links the god connected with the mystery religions to the agricultural cycle, but mortal Greek babies too were laid in a winnowing-fan. In Callimachus Hymn to Zeus, Adrasteia lays the infant Zeus in a golden líknon, her goat suckles him and he is given honey. In the Odyssey, the dead oracle Teiresias tells Odysseus to walk away from Ithaca with an oar until a wayfarer tells him it is a winnowing fan (i.e., until Odysseus has come so far from the sea that people dont recognize oars), and there to build a shrine to Poseidon.", "In Saxon settlements such as one identified in Northumberland as Bedes Ad Gefrin (now called Yeavering) the buildings were shown by an excavators reconstruction to have opposed entries. In barns a draught created by the use of these opposed doorways was used in winnowing.\nThe technique developed by the Chinese was not adopted in Europe until the 18th century when winnowing machines used a sail fan. The rotary winnowing fan was exported to Europe, brought there by Dutch sailors between 1700 and 1720. Apparently, they had obtained them from the Dutch settlement of Batavia in Java, Dutch East Indies. The Swedes imported some from south China at about the same time and Jesuits had taken several to France from China by 1720. 
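Returning to the magnetostrictive position sensors described a few paragraphs above, the position readout is essentially a time-of-flight measurement: the ultrasonic pulse excited at the magnet travels along the waveguide to a pickup at one end, so position is wave speed multiplied by travel time. The sketch below assumes a torsional-wave speed of about 2,800 m/s, a typical order-of-magnitude value and not a figure taken from the text.

```python
# Sketch of the time-of-flight position calculation used by the
# magnetostrictive sensors described above. The wave speed is an assumed
# typical value for a magnetostrictive waveguide, not taken from the text.

ASSUMED_WAVE_SPEED_M_S = 2800.0

def magnet_position_m(travel_time_s: float, wave_speed_m_s: float = ASSUMED_WAVE_SPEED_M_S) -> float:
    return wave_speed_m_s * travel_time_s

# A pulse arriving 250 microseconds after excitation implies the magnet is
# about 0.7 m from the pickup (hypothetical numbers).
print(magnet_position_m(250e-6))
```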
Until the beginning of the 18th century, no rotary winnowing fans existed in the West.", "In sedimentology, winnowing is the natural removal of fine material from a coarser sediment by wind or flowing water. Once a sediment has been deposited, subsequent changes in the speed or direction of wind or water flowing over it can agitate the grains in the sediment and allow the preferential removal of the finer grains. This action can improve the sorting and increase the mean grain size of a sediment after it has been deposited.\nThe term winnowing is from the analogous process for the agricultural separation of wheat from chaff.", "Xylomannan is an antifreeze molecule, found in the freeze-tolerant Alaskan beetle Upis ceramboides. Unlike antifreeze proteins, xylomannan is not a protein. Instead, it is a combination of a sugar (saccharide) and a fatty acid that is found in cell membranes. As such, it is expected to work in a different manner than AFPs. It is believed to work by incorporating itself directly into the cell membrane and preventing the freezing of water molecules within the cell.\nXylomannan is also found in the red seaweed Nothogenia fastigiata (Scinaiaceae family). Fraction F6 of a sulphated xylomannan from Nothogenia fastigiata was found to inhibit replication of a variety of viruses, including Herpes simplex virus types 1 and 2 (HSV-1, HSV-2), Human cytomegalovirus (HCMV, HHV-5), Respiratory syncytial virus (RSV), Influenzavirus A, Influenzavirus B, Junin and Tacaribe virus, Simian immunodeficiency virus, and (weakly) Human immunodeficiency virus types 1 and 2.", "Fung was born in Jiangsu Province, China, in 1919. He earned a bachelor's degree in 1941 and a master's degree in 1943 from the National Central University (later renamed Nanjing University in mainland China and reinstated in Taiwan), and earned a Ph.D. from the California Institute of Technology in 1948. Fung was Professor Emeritus and Research Engineer at the University of California San Diego. He published prominent texts along with Pin Tong, who was then at Hong Kong University of Science & Technology. Fung died at the Jacobs Medical Center in San Diego, California, aged 100, on December 15, 2019.\nFung was married to Luna Yu Hsien-Shih, a former mathematician and cofounder of the UC San Diego International Center, until her death in 2017. The couple raised two children.", "He is the author of numerous books including Foundations of Solid Mechanics, Continuum Mechanics, and a series of books on Biomechanics. He is also one of the principal founders of the Journal of Biomechanics and was a past chair of the ASME International Applied Mechanics Division. In 1972, Fung established the Biomechanics Symposium under the American Society of Mechanical Engineers. This biannual summer meeting, first held at the Georgia Institute of Technology, became the annual Summer Bioengineering Conference. Fung and colleagues were also the first to recognize the importance of residual stress on arterial mechanical behavior.", "Fung's famous exponential strain constitutive equation for preconditioned soft tissues is\nW = (1/2)[q + c(e^Q − 1)]\nwith\nq = a_ijkl E_ij E_kl and Q = b_ijkl E_ij E_kl\nquadratic forms of the Green-Lagrange strains E_ij, and a_ijkl, b_ijkl and c material constants. W is a strain energy function per volume unit, which is the mechanical strain energy for a given temperature. Materials that follow this law are known as Fung-elastic.",
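To make the reconstructed constitutive relation above concrete, here is a minimal one-dimensional Python sketch of the Fung exponential strain-energy form. The constants a, b, c and the strain values are hypothetical illustration values, not measured tissue parameters, and the scalar reduction (a single strain component) is an assumed simplification of the full tensor form.

```python
import math

# One-dimensional sketch of the Fung strain-energy form quoted above:
#     W = 0.5 * (q + c * (exp(Q) - 1)),  with q = a*E**2 and Q = b*E**2
# where E is a single Green-Lagrange strain component. a, b, c are
# hypothetical illustration constants, not fitted tissue data.

def fung_strain_energy(E, a=1.0, b=5.0, c=0.1):
    q = a * E**2          # quadratic form in the strain (linear-elastic part)
    Q = b * E**2          # quadratic form in the exponent (strain-stiffening part)
    return 0.5 * (q + c * (math.exp(Q) - 1.0))

if __name__ == "__main__":
    for E in (0.0, 0.05, 0.10, 0.20):
        print(f"E = {E:.2f}  ->  W = {fung_strain_energy(E):.5f}")
```

The exponential term is what gives Fung-elastic materials their characteristic stiffening at larger strains, while the quadratic term dominates near zero strain.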
"Yuan-Cheng \"Bert\" Fung (September 15, 1919 – December 15, 2019) was a Chinese-American bioengineer and writer. He is regarded as a founding figure of bioengineering and tissue engineering, and as the \"Founder of Modern Biomechanics\".", "* Theodore von Karman Medal, 1976\n* Otto Laporte Award, 1977\n* Worcester Reed Warner Medal, 1984\n* Jean-Leonard-Marie Poiseuille Award, 1986\n* Timoshenko Medal, 1991\n* Lissner Award for Bioengineering, from ASME\n* Borelli Medal, from ASB\n* Landis Award, from Microcirculation Society\n* Alza Award, from BMES\n* Melville Medal, 1994\n* United States National Academy of Engineering Founders Award (NAE Founders Award), 1998\n* National Medal of Science, 2000\n* Fritz J. and Dolores H. Russ Prize, 2007 (\"for the characterization and modeling of human tissue mechanics and function leading to prevention and mitigation of trauma.\")\n* Revelle Medal, from UC San Diego, 2016\nFung was elected to the United States National Academy of Sciences (1993), the National Academy of Engineering (1979), the Institute of Medicine (1991), and the Academia Sinica (1968), and was a Foreign Member of the Chinese Academy of Sciences (elected 1994).", "Fine ZnS powder is an efficient photocatalyst, which produces hydrogen gas from water upon illumination. Sulfur vacancies can be introduced in ZnS during its synthesis; this gradually turns the white-yellowish ZnS into a brown powder and boosts the photocatalytic activity through enhanced light absorption.", "Zinc sulfide (or zinc sulphide) is an inorganic compound with the chemical formula ZnS. This is the main form of zinc found in nature, where it mainly occurs as the mineral sphalerite. Although this mineral is usually black because of various impurities, the pure material is white, and it is widely used as a pigment. In its dense synthetic form, zinc sulfide can be transparent, and it is used as a window for visible optics and infrared optics.", "Zinc sulfide is also used as an infrared optical material, transmitting from visible wavelengths to just over 12 micrometers. It can be used in planar form as an optical window or shaped into a lens. It is made as microcrystalline sheets by synthesis from hydrogen sulfide gas and zinc vapour, and this is sold as FLIR-grade (Forward Looking Infrared), where the zinc sulfide is in a milky-yellow, opaque form. This material, when hot isostatically pressed (HIPed), can be converted to a water-clear form known as Cleartran (trademark). Early commercial forms were marketed as Irtran-2, but this designation is now obsolete.", "Zinc sulfide is a common pigment, sometimes called sachtolith. When combined with barium sulfate, zinc sulfide forms lithopone.", "ZnS exists in two main crystalline forms. This dualism is an example of polymorphism. In each form, the coordination geometry at Zn and S is tetrahedral. The more stable cubic form is also known as zinc blende or sphalerite. The hexagonal form is known as the mineral wurtzite, although it can also be produced synthetically. The transition from the sphalerite form to the wurtzite form occurs at around 1020 °C.", "The phosphorescence of ZnS was first reported by the French chemist Théodore Sidot in 1866. His findings were presented by A. E. Becquerel, who was renowned for his research on luminescence. ZnS was used by Ernest Rutherford and others in the early years of nuclear physics as a scintillation detector, because it emits light upon excitation by X-rays or an electron beam, making it useful for X-ray screens and cathode ray tubes. 
This property made zinc sulfide useful in the dials of radium watches.", "Zinc sulfide is usually produced from waste materials from other applications. Typical sources include smelter, slag, and pickle liquors. As an example, the synthesis of ammonia from methane requires a priori removal of hydrogen sulfide impurities in the natural gas, for which zinc oxide is used. This scavenging produces zinc sulfide:\n:ZnO + H₂S → ZnS + H₂O", "It is easily produced by igniting a mixture of zinc and sulfur. Since zinc sulfide is insoluble in water, it can also be produced in a precipitation reaction. Solutions containing Zn²⁺ salts readily form a precipitate of ZnS in the presence of sulfide ions (e.g., from H₂S).\n:Zn²⁺ + S²⁻ → ZnS\nThis reaction is the basis of a gravimetric analysis for zinc.", "Zinc sulfide, with the addition of a few ppm of a suitable activator, exhibits strong phosphorescence. The phenomenon was described by Nikola Tesla in 1893, and is currently used in many applications, from cathode ray tubes through X-ray screens to glow-in-the-dark products. When silver is used as the activator, the resulting color is bright blue, with a maximum at 450 nanometers. Using manganese yields an orange-red color at around 590 nanometers. Copper gives a longer glow, and it has the familiar greenish glow-in-the-dark color. Copper-doped zinc sulfide (\"ZnS plus Cu\") is also used in electroluminescent panels. It also exhibits phosphorescence due to impurities on illumination with blue or ultraviolet light.", "Both sphalerite and wurtzite are intrinsic, wide-bandgap semiconductors. These are prototypical II-VI semiconductors, and they adopt structures related to those of many other semiconductors, such as gallium arsenide. The cubic form of ZnS has a band gap of about 3.54 electron volts at 300 kelvins, but the hexagonal form has a band gap of about 3.91 electron volts. ZnS can be doped as either an n-type semiconductor or a p-type semiconductor." ]
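The zinc sulfide passages above quote two quantitative facts: band gaps of about 3.54 eV (cubic) and 3.91 eV (hexagonal), and a precipitation reaction used for gravimetric analysis of zinc. The short Python sketch below illustrates both, converting a band gap to the corresponding photon wavelength (λ = hc/E) and computing the mass of zinc in a weighed ZnS precipitate. The physical constants and molar masses are standard approximate values; this is an illustration, not laboratory code.

```python
# Minimal sketch illustrating two numbers quoted in the zinc sulfide passages:
#  - the optical absorption edge implied by a band gap, lambda = h*c / E_gap
#  - the gravimetric factor for determining Zn from a weighed ZnS precipitate
# Constants are standard approximate values.

H_PLANCK = 6.626e-34      # Planck constant, J*s
C_LIGHT = 2.998e8         # speed of light, m/s
EV = 1.602e-19            # joules per electron volt
M_ZN, M_S = 65.38, 32.06  # approximate molar masses, g/mol

def band_gap_to_wavelength_nm(e_gap_ev):
    """Wavelength (nm) of a photon whose energy equals the band gap."""
    return H_PLANCK * C_LIGHT / (e_gap_ev * EV) * 1e9

def zinc_mass_from_zns(mass_zns_g):
    """Mass of zinc (g) contained in a weighed ZnS precipitate."""
    return mass_zns_g * M_ZN / (M_ZN + M_S)

if __name__ == "__main__":
    print(band_gap_to_wavelength_nm(3.54))  # cubic ZnS: ~350 nm (near-UV edge)
    print(band_gap_to_wavelength_nm(3.91))  # hexagonal ZnS: ~317 nm
    print(zinc_mass_from_zns(0.500))        # ~0.335 g Zn in 0.500 g of ZnS
```

Both band gaps correspond to absorption edges in the near ultraviolet, which is consistent with pure ZnS appearing white and transmitting visible light.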
[ "Luminescence", "Luminescence", "Luminescence", "Luminescence", "Luminescence", "Luminescence", "Luminescence", "Luminescence", "Luminescence", "Luminescence", "Cryobiology", "Tissue Engineering", "Tissue Engineering", "Tissue Engineering", "Tissue Engineering", "Tissue Engineering", "Tissue Engineering", "Tissue Engineering", "Tissue Engineering", "Tissue Engineering", "Tissue Engineering", "Tissue Engineering", "Tissue Engineering", "Tissue Engineering", "Tissue Engineering", "Tissue Engineering", "Tissue Engineering", "Tissue Engineering", "Tissue Engineering", "Tissue Engineering", "Tissue Engineering", "Luminescence", "Tissue Engineering", "Tissue Engineering", "Tissue Engineering", "Tissue Engineering", "Nuclear Fusion", "Nuclear Fusion", "Nuclear Fusion", "Nuclear Fusion", "Nuclear Fusion", "Nuclear Fusion", "Acids + Bases", "Acids + Bases", "Acids + Bases", "Acids + Bases", "Acids + Bases", "Acids + Bases", "Acids + Bases", "Acids + Bases", "Acids + Bases", "Acids + Bases", "Acids + Bases", "Acids + Bases", "Acids + Bases", "Acids + Bases", "Acids + Bases", "Acids + Bases", "Acids + Bases", "Acids + Bases", "Acids + Bases", "Acids + Bases", "Acids + Bases", "Acids + Bases", "Acids + Bases", "Acids + Bases", "Acids + Bases", "Acids + Bases", "Acids + Bases", "Acids + Bases", "Acids + Bases", "Acids + Bases", "Acids + Bases", "Acids + Bases", "Acids + Bases", "Acids + Bases", "Acids + Bases", "Acids + Bases", "Acids + Bases", "Acids + Bases", "Acids + Bases", "Acids + Bases", "Acids + Bases", "Acids + Bases", "Acids + Bases", "Acids + Bases", "Acids + Bases", "Acids + Bases", "Acids + Bases", "Acids + Bases", "Acids + Bases", "Luminescence", "Separation Processes", "Carbohydrates", "Cryobiology", "Cryobiology", "Cryobiology", "Nuclear Fusion", "Nuclear Fusion", "Nuclear Fusion", "Magnetic Ordering", "Magnetic Ordering", "Magnetic Ordering", "Magnetic Ordering", "Magnetic Ordering", "Nuclear Fusion", "Magnetic Ordering", "Magnetic Ordering", "Magnetic Ordering", "Magnetic Ordering", "Carbohydrates", "Carbohydrates", "Carbohydrates", "Carbohydrates", "Carbohydrates", "Carbohydrates", "Carbohydrates", "Carbohydrates", "Carbohydrates", "Carbohydrates", "Carbohydrates", "Carbohydrates", "Magnetic Ordering", "Luminescence", "Luminescence", "Luminescence", "Magnetic Ordering", "Magnetic Ordering", "Magnetic Ordering", "Magnetic Ordering", "Magnetic Ordering", "Cryobiology", "Cryobiology", "Cryobiology", "Cryobiology", "Cryobiology", "Cryobiology", "Cryobiology", "Cryobiology", "Cryobiology", "Cryobiology", "Cryobiology", "Cryobiology", "Cryobiology", "Cryobiology", "Cryobiology", "Cryobiology", "Separation Processes", "Separation Processes", "Separation Processes", "Separation Processes", "Separation Processes", "Separation Processes", "Magnetic Ordering", "Magnetic Ordering", "Magnetic Ordering", "Magnetic Ordering", "Luminescence", "Luminescence", "Luminescence", "Tissue Engineering", "Tissue Engineering", "Tissue Engineering", "Tissue Engineering", "Acids + Bases", "Tissue Engineering", "Tissue Engineering", "Tissue Engineering", "Tissue Engineering", "Tissue Engineering", "Tissue Engineering", "Ultraviolet Radiation", "Ultraviolet Radiation", "Ultraviolet Radiation", "Ultraviolet Radiation", "Ultraviolet Radiation", "Ultraviolet Radiation", "Ultraviolet Radiation", "Ultraviolet Radiation", "Ultraviolet Radiation", "Ultraviolet Radiation", "Ultraviolet Radiation", "Ultraviolet Radiation", "Ultraviolet Radiation", "Ultraviolet Radiation", "Ultraviolet Radiation", "Ultraviolet 
Radiation", "Ultraviolet Radiation", "Ultraviolet Radiation", "Ultraviolet Radiation", "Ultraviolet Radiation", "Ultraviolet Radiation", "Tissue Engineering", "Nuclear Fusion", "Cryobiology", "Cryobiology", "Cryobiology", "Acids + Bases", "Acids + Bases", "Acids + Bases", "Acids + Bases", "Acids + Bases", "Nuclear Fusion", "Nuclear Fusion", "Nuclear Fusion", "Nuclear Fusion", "Nuclear Fusion", "Nuclear Fusion", "Luminescence", "Luminescence", "Luminescence", "Luminescence", "Carbohydrates", "Carbohydrates", "Carbohydrates", "Carbohydrates", "Carbohydrates", "Carbohydrates", "Carbohydrates", "Carbohydrates", "Carbohydrates", "Carbohydrates", "Carbohydrates", "Carbohydrates", "Carbohydrates", "Carbohydrates", "Carbohydrates", "Carbohydrates", "Carbohydrates", "Carbohydrates", "Carbohydrates", "Carbohydrates", "Carbohydrates", "Carbohydrates", "Carbohydrates", "Cryobiology", "Luminescence", "Luminescence", "Luminescence", "Luminescence", "Luminescence", "Luminescence", "Ultraviolet Radiation", "Tissue Engineering", "Tissue Engineering", "Tissue Engineering", "Tissue Engineering", "Tissue Engineering", "Tissue Engineering", "Tissue Engineering", "Tissue Engineering", "Tissue Engineering", "Tissue Engineering", "Tissue Engineering", "Tissue Engineering", "Tissue Engineering", "Tissue Engineering", "Tissue Engineering", "Tissue Engineering", "Tissue Engineering", "Tissue Engineering", "Tissue Engineering", "Tissue Engineering", "Tissue Engineering", "Tissue Engineering", "Tissue Engineering", "Tissue Engineering", "Tissue Engineering", "Tissue Engineering", "Cryobiology", "Acids + Bases", "Acids + Bases", "Acids + Bases", "Acids + Bases", "Acids + Bases", "Acids + Bases", "Acids + Bases", "Magnetic Ordering", "Magnetic Ordering", "Magnetic Ordering", "Magnetic Ordering", "Magnetic Ordering", "Magnetic Ordering", "Nuclear Fusion", "Nuclear Fusion", "Nuclear Fusion", "Nuclear Fusion", "Nuclear Fusion", "Nuclear Fusion", "Nuclear Fusion", "Nuclear Fusion", "Nuclear Fusion", "Nuclear Fusion", "Nuclear Fusion", "Nuclear Fusion", "Nuclear Fusion", "Nuclear Fusion", "Nuclear Fusion", "Nuclear Fusion", "Nuclear Fusion", "Nuclear Fusion", "Nuclear Fusion", "Nuclear Fusion", "Nuclear Fusion", "Nuclear Fusion", "Nuclear Fusion", "Nuclear Fusion", "Nuclear Fusion", "Nuclear Fusion", "Nuclear Fusion", "Nuclear Fusion", "Nuclear Fusion", "Nuclear Fusion", "Cryobiology", "Cryobiology", "Cryobiology", "Tissue Engineering", "Nuclear Fusion", "Nuclear Fusion", "Nuclear Fusion", "Nuclear Fusion", "Nuclear Fusion", "Nuclear Fusion", "Nuclear Fusion", "Nuclear Fusion", "Nuclear Fusion", "Nuclear Fusion", "Nuclear Fusion", "Separation Processes", "Separation Processes", "Nuclear Fusion", "Nuclear Fusion", "Separation Processes", "Separation Processes", "Separation Processes", "Separation Processes", "Cryobiology", "Cryobiology", "Cryobiology", "Separation Processes", "Cryobiology", "Cryobiology", "Cryobiology", "Cryobiology", "Cryobiology", "Cryobiology", "Cryobiology", "Cryobiology", "Cryobiology", "Cryobiology", "Cryobiology", "Cryobiology", "Cryobiology", "Cryobiology", "Cryobiology", "Cryobiology", "Cryobiology", "Cryobiology", "Cryobiology", "Cryobiology", "Cryobiology", "Cryobiology", "Cryobiology", "Cryobiology", "Cryobiology", "Cryobiology", "Cryobiology", "Cryobiology", "Cryobiology", "Cryobiology", "Cryobiology", "Magnetic Ordering", "Magnetic Ordering", "Magnetic Ordering", "Magnetic Ordering", "Magnetic Ordering", "Magnetic Ordering", "Magnetic Ordering", "Magnetic Ordering", "Tissue Engineering", "Tissue 
Engineering", "Tissue Engineering", "Tissue Engineering", "Tissue Engineering", "Tissue Engineering", "Separation Processes", "Separation Processes", "Separation Processes", "Separation Processes", "Separation Processes", "Separation Processes", "Separation Processes", "Separation Processes", "Separation Processes", "Separation Processes", "Separation Processes", "Tissue Engineering", "Tissue Engineering", "Tissue Engineering", "Tissue Engineering", "Tissue Engineering", "Tissue Engineering", "Tissue Engineering", "Tissue Engineering", "Tissue Engineering", "Tissue Engineering", "Tissue Engineering", "Tissue Engineering", "Tissue Engineering", "Tissue Engineering", "Nuclear Fusion", "Nuclear Fusion", "Nuclear Fusion", "Nuclear Fusion", "Nuclear Fusion", "Nuclear Fusion", "Nuclear Fusion", "Nuclear Fusion", "Nuclear Fusion", "Nuclear Fusion", "Tissue Engineering", "Tissue Engineering", "Tissue Engineering", "Tissue Engineering", "Tissue Engineering", "Tissue Engineering", "Tissue Engineering", "Tissue Engineering", "Tissue Engineering", "Tissue Engineering", "Tissue Engineering", "Tissue Engineering", "Tissue Engineering", "Ultraviolet Radiation", "Ultraviolet Radiation", "Ultraviolet Radiation", "Ultraviolet Radiation", "Ultraviolet Radiation", "Ultraviolet Radiation", "Ultraviolet Radiation", "Ultraviolet Radiation", "Ultraviolet Radiation", "Ultraviolet Radiation", "Ultraviolet Radiation", "Ultraviolet Radiation", "Ultraviolet Radiation", "Ultraviolet Radiation", "Ultraviolet Radiation", "Ultraviolet Radiation", "Ultraviolet Radiation", "Ultraviolet Radiation", "Ultraviolet Radiation", "Ultraviolet Radiation", "Ultraviolet Radiation", "Ultraviolet Radiation", "Ultraviolet Radiation", "Ultraviolet Radiation", "Ultraviolet Radiation", "Ultraviolet Radiation", "Ultraviolet Radiation", "Ultraviolet Radiation", "Ultraviolet Radiation", "Luminescence", "Luminescence", "Luminescence", "Separation Processes", "Separation Processes", "Separation Processes", "Separation Processes", "Luminescence", "Luminescence", "Luminescence", "Luminescence", "Luminescence", "Luminescence", "Luminescence", "Luminescence", "Luminescence", "Luminescence", "Luminescence", "Luminescence", "Luminescence", "Luminescence", "Luminescence", "Luminescence", "Luminescence", "Luminescence", "Luminescence", "Separation Processes", "Cryobiology", "Cryobiology", "Cryobiology", "Cryobiology", "Cryobiology", "Cryobiology", "Cryobiology", "Cryobiology", "Cryobiology", "Cryobiology", "Cryobiology", "Cryobiology", "Cryobiology", "Cryobiology", "Cryobiology", "Carbohydrates", "Tissue Engineering", "Tissue Engineering", "Tissue Engineering", "Tissue Engineering", "Tissue Engineering", "Tissue Engineering", "Tissue Engineering", "Tissue Engineering", "Tissue Engineering", "Tissue Engineering", "Carbohydrates", "Nuclear Fusion", "Nuclear Fusion", "Nuclear Fusion", "Nuclear Fusion", "Cryobiology", "Cryobiology", "Cryobiology", "Cryobiology", "Cryobiology", "Cryobiology", "Cryobiology", "Cryobiology", "Cryobiology", "Cryobiology", "Cryobiology", "Cryobiology", "Cryobiology", "Cryobiology", "Cryobiology", "Cryobiology", "Ultraviolet Radiation", "Ultraviolet Radiation", "Ultraviolet Radiation", "Ultraviolet Radiation", "Ultraviolet Radiation", "Ultraviolet Radiation", "Ultraviolet Radiation", "Ultraviolet Radiation", "Carbohydrates", "Carbohydrates", "Carbohydrates", "Carbohydrates", "Carbohydrates", "Carbohydrates", "Carbohydrates", "Carbohydrates", "Carbohydrates", "Magnetic Ordering", "Magnetic Ordering", "Magnetic Ordering", "Magnetic 
Ordering", "Magnetic Ordering", "Magnetic Ordering", "Magnetic Ordering", "Magnetic Ordering", "Tissue Engineering", "Tissue Engineering", "Tissue Engineering", "Tissue Engineering", "Tissue Engineering", "Tissue Engineering", "Tissue Engineering", "Tissue Engineering", "Tissue Engineering", "Luminescence", "Luminescence", "Luminescence", "Luminescence", "Luminescence", "Luminescence", "Luminescence", "Luminescence", "Luminescence", "Luminescence", "Cryobiology", "Cryobiology", "Cryobiology", "Cryobiology", "Cryobiology", "Cryobiology", "Cryobiology", "Cryobiology", "Cryobiology", "Cryobiology", "Cryobiology", "Cryobiology", "Cryobiology", "Cryobiology", "Cryobiology", "Cryobiology", "Cryobiology", "Carbohydrates", "Carbohydrates", "Carbohydrates", "Carbohydrates", "Carbohydrates", "Carbohydrates", "Carbohydrates", "Carbohydrates", "Carbohydrates", "Carbohydrates", "Nuclear Fusion", "Nuclear Fusion", "Nuclear Fusion", "Nuclear Fusion", "Carbohydrates", "Carbohydrates", "Carbohydrates", "Carbohydrates", "Carbohydrates", "Carbohydrates", "Carbohydrates", "Carbohydrates", "Carbohydrates", "Luminescence", "Acids + Bases", "Acids + Bases", "Acids + Bases", "Acids + Bases", "Acids + Bases", "Acids + Bases", "Acids + Bases", "Acids + Bases", "Tissue Engineering", "Tissue Engineering", "Tissue Engineering", "Magnetic Ordering", "Magnetic Ordering", "Magnetic Ordering", "Magnetic Ordering", "Magnetic Ordering", "Magnetic Ordering", "Magnetic Ordering", "Magnetic Ordering", "Magnetic Ordering", "Magnetic Ordering", "Magnetic Ordering", "Ultraviolet Radiation", "Ultraviolet Radiation", "Ultraviolet Radiation", "Ultraviolet Radiation", "Ultraviolet Radiation", "Ultraviolet Radiation", "Ultraviolet Radiation", "Ultraviolet Radiation", "Luminescence", "Luminescence", "Luminescence", "Luminescence", "Luminescence", "Luminescence", "Luminescence", "Luminescence", "Luminescence", "Luminescence", "Luminescence", "Luminescence", "Luminescence", "Carbohydrates", "Carbohydrates", "Carbohydrates", "Carbohydrates", "Carbohydrates", "Carbohydrates", "Carbohydrates", "Carbohydrates", "Carbohydrates", "Carbohydrates", "Carbohydrates", "Carbohydrates", "Carbohydrates", "Carbohydrates", "Carbohydrates", "Carbohydrates", "Carbohydrates", "Carbohydrates", "Carbohydrates", "Carbohydrates", "Carbohydrates", "Carbohydrates", "Carbohydrates", "Carbohydrates", "Carbohydrates", "Carbohydrates", "Carbohydrates", "Carbohydrates", "Carbohydrates", "Carbohydrates", "Separation Processes", "Separation Processes", "Separation Processes", "Separation Processes", "Separation Processes", "Cryobiology", "Cryobiology", "Cryobiology", "Cryobiology", "Cryobiology", "Magnetic Ordering", "Tissue Engineering", "Tissue Engineering", "Tissue Engineering", "Tissue Engineering", "Tissue Engineering", "Tissue Engineering", "Tissue Engineering", "Tissue Engineering", "Tissue Engineering", "Tissue Engineering", "Tissue Engineering", "Tissue Engineering", "Tissue Engineering", "Magnetic Ordering", "Acids + Bases", "Acids + Bases", "Acids + Bases", "Acids + Bases", "Acids + Bases", "Acids + Bases", "Acids + Bases", "Acids + Bases", "Acids + Bases", "Acids + Bases", "Acids + Bases", "Acids + Bases", "Acids + Bases", "Acids + Bases", "Acids + Bases", "Separation Processes", "Separation Processes", "Separation Processes", "Cryobiology", "Cryobiology", "Cryobiology", "Nuclear Fusion", "Nuclear Fusion", "Nuclear Fusion", "Nuclear Fusion", "Nuclear Fusion", "Nuclear Fusion", "Nuclear Fusion", "Nuclear Fusion", "Nuclear Fusion", "Nuclear Fusion", "Nuclear 
Fusion", "Nuclear Fusion", "Nuclear Fusion", "Nuclear Fusion", "Nuclear Fusion", "Nuclear Fusion", "Nuclear Fusion", "Nuclear Fusion", "Nuclear Fusion", "Nuclear Fusion", "Nuclear Fusion", "Nuclear Fusion", "Tissue Engineering", "Tissue Engineering", "Tissue Engineering", "Tissue Engineering", "Tissue Engineering", "Tissue Engineering", "Tissue Engineering", "Tissue Engineering", "Tissue Engineering", "Tissue Engineering", "Acids + Bases", "Acids + Bases", "Acids + Bases", "Acids + Bases", "Acids + Bases", "Acids + Bases", "Acids + Bases", "Acids + Bases", "Acids + Bases", "Acids + Bases", "Acids + Bases", "Acids + Bases", "Acids + Bases", "Acids + Bases", "Acids + Bases", "Acids + Bases", "Acids + Bases", "Acids + Bases", "Acids + Bases", "Acids + Bases", "Cryobiology", "Cryobiology", "Cryobiology", "Cryobiology", "Cryobiology", "Cryobiology", "Cryobiology", "Cryobiology", "Cryobiology", "Cryobiology", "Cryobiology", "Cryobiology", "Cryobiology", "Cryobiology", "Cryobiology", "Cryobiology", "Cryobiology", "Cryobiology", "Cryobiology", "Cryobiology", "Cryobiology", "Cryobiology", "Cryobiology", "Magnetic Ordering", "Magnetic Ordering", "Magnetic Ordering", "Magnetic Ordering", "Magnetic Ordering", "Magnetic Ordering", "Magnetic Ordering", "Magnetic Ordering", "Magnetic Ordering", "Magnetic Ordering", "Magnetic Ordering", "Magnetic Ordering", "Magnetic Ordering", "Magnetic Ordering", "Magnetic Ordering", "Magnetic Ordering", "Magnetic Ordering", "Magnetic Ordering", "Magnetic Ordering", "Magnetic Ordering", "Magnetic Ordering", "Magnetic Ordering", "Magnetic Ordering", "Magnetic Ordering", "Magnetic Ordering", "Magnetic Ordering", "Magnetic Ordering", "Magnetic Ordering", "Magnetic Ordering", "Magnetic Ordering", "Magnetic Ordering", "Magnetic Ordering", "Magnetic Ordering", "Tissue Engineering", "Tissue Engineering", "Tissue Engineering", "Tissue Engineering", "Tissue Engineering", "Tissue Engineering", "Tissue Engineering", "Tissue Engineering", "Tissue Engineering", "Tissue Engineering", "Tissue Engineering", "Tissue Engineering", "Tissue Engineering", "Tissue Engineering", "Tissue Engineering", "Cryobiology", "Cryobiology", "Cryobiology", "Cryobiology", "Cryobiology", "Cryobiology", "Cryobiology", "Cryobiology", "Cryobiology", "Cryobiology", "Cryobiology", "Cryobiology", "Cryobiology", "Cryobiology", "Cryobiology", "Cryobiology", "Cryobiology", "Cryobiology", "Cryobiology", "Cryobiology", "Cryobiology", "Cryobiology", "Cryobiology", "Cryobiology", "Cryobiology", "Cryobiology", "Cryobiology", "Cryobiology", "Cryobiology", "Cryobiology", "Cryobiology", "Cryobiology", "Cryobiology", "Cryobiology", "Cryobiology", "Cryobiology", "Cryobiology", "Cryobiology", "Cryobiology", "Cryobiology", "Cryobiology", "Cryobiology", "Cryobiology", "Cryobiology", "Cryobiology", "Cryobiology", "Cryobiology", "Cryobiology", "Cryobiology", "Cryobiology", "Cryobiology", "Cryobiology", "Cryobiology", "Cryobiology", "Cryobiology", "Cryobiology", "Cryobiology", "Cryobiology", "Cryobiology", "Tissue Engineering", "Tissue Engineering", "Tissue Engineering", "Tissue Engineering", "Tissue Engineering", "Tissue Engineering", "Tissue Engineering", "Luminescence", "Luminescence", "Luminescence", "Luminescence", "Luminescence", "Ultraviolet Radiation", "Ultraviolet Radiation", "Ultraviolet Radiation", "Ultraviolet Radiation", "Ultraviolet Radiation", "Ultraviolet Radiation", "Ultraviolet Radiation", "Ultraviolet Radiation", "Ultraviolet Radiation", "Ultraviolet Radiation", "Ultraviolet Radiation", "Ultraviolet Radiation", 
"Ultraviolet Radiation", "Ultraviolet Radiation", "Ultraviolet Radiation", "Ultraviolet Radiation", "Ultraviolet Radiation", "Ultraviolet Radiation", "Ultraviolet Radiation", "Ultraviolet Radiation", "Ultraviolet Radiation", "Ultraviolet Radiation", "Ultraviolet Radiation", "Separation Processes", "Separation Processes", "Separation Processes", "Magnetic Ordering", "Magnetic Ordering", "Magnetic Ordering", "Magnetic Ordering", "Magnetic Ordering", "Magnetic Ordering", "Magnetic Ordering", "Acids + Bases", "Acids + Bases", "Acids + Bases", "Acids + Bases", "Acids + Bases", "Acids + Bases", "Carbohydrates", "Carbohydrates", "Carbohydrates", "Nuclear Fusion", "Tissue Engineering", "Tissue Engineering", "Tissue Engineering", "Separation Processes", "Cryobiology", "Cryobiology", "Cryobiology", "Cryobiology", "Magnetic Ordering", "Nuclear Fusion", "Acids + Bases", "Carbohydrates", "Carbohydrates", "Carbohydrates", "Carbohydrates", "Carbohydrates", "Carbohydrates", "Carbohydrates", "Carbohydrates", "Carbohydrates", "Carbohydrates", "Magnetic Ordering", "Magnetic Ordering", "Magnetic Ordering", "Magnetic Ordering", "Magnetic Ordering", "Magnetic Ordering", "Magnetic Ordering", "Nuclear Fusion", "Nuclear Fusion", "Nuclear Fusion", "Nuclear Fusion", "Nuclear Fusion", "Nuclear Fusion", "Nuclear Fusion", "Carbohydrates", "Carbohydrates", "Carbohydrates", "Carbohydrates", "Carbohydrates", "Carbohydrates", "Carbohydrates", "Carbohydrates", "Carbohydrates", "Carbohydrates", "Carbohydrates", "Carbohydrates", "Carbohydrates", "Carbohydrates", "Carbohydrates", "Carbohydrates", "Separation Processes", "Separation Processes", "Nuclear Fusion", "Nuclear Fusion", "Nuclear Fusion", "Nuclear Fusion", "Carbohydrates", "Carbohydrates", "Carbohydrates", "Carbohydrates", "Luminescence", "Luminescence", "Luminescence", "Luminescence", "Luminescence", "Luminescence", "Luminescence", "Luminescence", "Luminescence", "Luminescence", "Luminescence", "Luminescence", "Luminescence", "Acids + Bases", "Cryobiology", "Cryobiology", "Cryobiology", "Cryobiology", "Cryobiology", "Cryobiology", "Cryobiology", "Cryobiology", "Cryobiology", "Cryobiology", "Cryobiology", "Cryobiology", "Cryobiology", "Cryobiology", "Magnetic Ordering", "Magnetic Ordering", "Magnetic Ordering", "Magnetic Ordering", "Magnetic Ordering", "Magnetic Ordering", "Magnetic Ordering", "Magnetic Ordering", "Magnetic Ordering", "Magnetic Ordering", "Magnetic Ordering", "Magnetic Ordering", "Magnetic Ordering", "Magnetic Ordering", "Magnetic Ordering", "Magnetic Ordering", "Magnetic Ordering", "Magnetic Ordering", "Magnetic Ordering", "Magnetic Ordering", "Magnetic Ordering", "Magnetic Ordering", "Magnetic Ordering", "Magnetic Ordering", "Magnetic Ordering", "Magnetic Ordering", "Magnetic Ordering", "Magnetic Ordering", "Magnetic Ordering", "Magnetic Ordering", "Magnetic Ordering", "Magnetic Ordering", "Magnetic Ordering", "Magnetic Ordering", "Magnetic Ordering", "Magnetic Ordering", "Magnetic Ordering", "Magnetic Ordering", "Magnetic Ordering", "Magnetic Ordering", "Magnetic Ordering", "Magnetic Ordering", "Carbohydrates", "Carbohydrates", "Carbohydrates", "Carbohydrates", "Luminescence", "Luminescence", "Luminescence", "Luminescence", "Luminescence", "Tissue Engineering", "Tissue Engineering", "Tissue Engineering", "Separation Processes", "Magnetic Ordering", "Magnetic Ordering", "Magnetic Ordering", "Luminescence", "Luminescence", "Luminescence", "Luminescence", "Luminescence", "Luminescence", "Luminescence", "Luminescence", "Luminescence", "Luminescence", 
"Luminescence", "Luminescence", "Luminescence", "Luminescence", "Luminescence", "Luminescence", "Luminescence", "Luminescence", "Luminescence", "Luminescence", "Luminescence", "Luminescence", "Luminescence", "Magnetic Ordering", "Carbohydrates", "Carbohydrates", "Carbohydrates", "Magnetic Ordering", "Magnetic Ordering", "Magnetic Ordering", "Magnetic Ordering", "Magnetic Ordering", "Magnetic Ordering", "Magnetic Ordering", "Magnetic Ordering", "Magnetic Ordering", "Magnetic Ordering", "Magnetic Ordering", "Magnetic Ordering", "Ultraviolet Radiation", "Ultraviolet Radiation", "Ultraviolet Radiation", "Ultraviolet Radiation", "Ultraviolet Radiation", "Magnetic Ordering", "Carbohydrates", "Carbohydrates", "Carbohydrates", "Carbohydrates", "Separation Processes", "Separation Processes", "Separation Processes", "Separation Processes", "Separation Processes", "Separation Processes", "Separation Processes", "Separation Processes", "Separation Processes", "Separation Processes", "Separation Processes", "Magnetic Ordering", "Separation Processes", "Separation Processes", "Separation Processes", "Separation Processes", "Magnetic Ordering", "Magnetic Ordering", "Magnetic Ordering", "Magnetic Ordering", "Magnetic Ordering", "Magnetic Ordering", "Magnetic Ordering", "Nuclear Fusion", "Nuclear Fusion", "Nuclear Fusion", "Nuclear Fusion", "Nuclear Fusion", "Nuclear Fusion", "Nuclear Fusion", "Nuclear Fusion", "Nuclear Fusion", "Tissue Engineering", "Tissue Engineering", "Tissue Engineering", "Tissue Engineering", "Tissue Engineering", "Tissue Engineering", "Tissue Engineering", "Tissue Engineering", "Tissue Engineering", "Tissue Engineering", "Tissue Engineering", "Tissue Engineering", "Tissue Engineering", "Tissue Engineering", "Tissue Engineering", "Cryobiology", "Cryobiology", "Tissue Engineering", "Tissue Engineering", "Tissue Engineering", "Tissue Engineering", "Tissue Engineering", "Luminescence", "Luminescence", "Luminescence", "Tissue Engineering", "Tissue Engineering", "Tissue Engineering", "Tissue Engineering", "Tissue Engineering", "Tissue Engineering", "Nuclear Fusion", "Nuclear Fusion", "Nuclear Fusion", "Nuclear Fusion", "Nuclear Fusion", "Nuclear Fusion", "Nuclear Fusion", "Nuclear Fusion", "Nuclear Fusion", "Nuclear Fusion", "Nuclear Fusion", "Acids + Bases", "Acids + Bases", "Nuclear Fusion", "Nuclear Fusion", "Nuclear Fusion", "Nuclear Fusion", "Nuclear Fusion", "Nuclear Fusion", "Nuclear Fusion", "Nuclear Fusion", "Nuclear Fusion", "Nuclear Fusion", "Nuclear Fusion", "Nuclear Fusion", "Nuclear Fusion", "Nuclear Fusion", "Nuclear Fusion", "Nuclear Fusion", "Nuclear Fusion", "Nuclear Fusion", "Nuclear Fusion", "Nuclear Fusion", "Nuclear Fusion", "Nuclear Fusion", "Nuclear Fusion", "Nuclear Fusion", "Nuclear Fusion", "Nuclear Fusion", "Nuclear Fusion", "Nuclear Fusion", "Nuclear Fusion", "Nuclear Fusion", "Nuclear Fusion", "Nuclear Fusion", "Nuclear Fusion", "Nuclear Fusion", "Nuclear Fusion", "Nuclear Fusion", "Nuclear Fusion", "Nuclear Fusion", "Nuclear Fusion", "Nuclear Fusion", "Nuclear Fusion", "Nuclear Fusion", "Nuclear Fusion", "Nuclear Fusion", "Nuclear Fusion", "Nuclear Fusion", "Magnetic Ordering", "Magnetic Ordering", "Magnetic Ordering", "Magnetic Ordering", "Carbohydrates", "Carbohydrates", "Carbohydrates", "Carbohydrates", "Carbohydrates", "Carbohydrates", "Carbohydrates", "Carbohydrates", "Carbohydrates", "Carbohydrates", "Carbohydrates", "Carbohydrates", "Carbohydrates", "Carbohydrates", "Carbohydrates", "Carbohydrates", "Carbohydrates", "Carbohydrates", "Carbohydrates", 
"Carbohydrates", "Carbohydrates", "Carbohydrates", "Carbohydrates", "Carbohydrates", "Carbohydrates", "Carbohydrates", "Carbohydrates", "Carbohydrates", "Carbohydrates", "Carbohydrates", "Carbohydrates", "Carbohydrates", "Carbohydrates", "Carbohydrates", "Carbohydrates", "Acids + Bases", "Acids + Bases", "Acids + Bases", "Acids + Bases", "Acids + Bases", "Acids + Bases", "Acids + Bases", "Cryobiology", "Cryobiology", "Luminescence", "Luminescence", "Luminescence", "Luminescence", "Tissue Engineering", "Tissue Engineering", "Tissue Engineering", "Tissue Engineering", "Tissue Engineering", "Tissue Engineering", "Tissue Engineering", "Tissue Engineering", "Tissue Engineering", "Tissue Engineering", "Tissue Engineering", "Tissue Engineering", "Tissue Engineering", "Tissue Engineering", "Tissue Engineering", "Tissue Engineering", "Tissue Engineering", "Tissue Engineering", "Tissue Engineering", "Tissue Engineering", "Tissue Engineering", "Tissue Engineering", "Tissue Engineering", "Tissue Engineering", "Tissue Engineering", "Tissue Engineering", "Tissue Engineering", "Tissue Engineering", "Tissue Engineering", "Tissue Engineering", "Tissue Engineering", "Tissue Engineering", "Tissue Engineering", "Tissue Engineering", "Tissue Engineering", "Tissue Engineering", "Tissue Engineering", "Tissue Engineering", "Tissue Engineering", "Tissue Engineering", "Tissue Engineering", "Tissue Engineering", "Tissue Engineering", "Tissue Engineering", "Tissue Engineering", "Tissue Engineering", "Tissue Engineering", "Tissue Engineering", "Tissue Engineering", "Tissue Engineering", "Tissue Engineering", "Tissue Engineering", "Tissue Engineering", "Tissue Engineering", "Tissue Engineering", "Tissue Engineering", "Tissue Engineering", "Tissue Engineering", "Tissue Engineering", "Tissue Engineering", "Tissue Engineering", "Tissue Engineering", "Tissue Engineering", "Tissue Engineering", "Tissue Engineering", "Tissue Engineering", "Tissue Engineering", "Tissue Engineering", "Tissue Engineering", "Carbohydrates", "Carbohydrates", "Carbohydrates", "Cryobiology", "Cryobiology", "Tissue Engineering", "Tissue Engineering", "Tissue Engineering", "Tissue Engineering", "Tissue Engineering", "Tissue Engineering", "Tissue Engineering", "Acids + Bases", "Acids + Bases", "Acids + Bases", "Acids + Bases", "Acids + Bases", "Ultraviolet Radiation", "Ultraviolet Radiation", "Ultraviolet Radiation", "Ultraviolet Radiation", "Ultraviolet Radiation", "Ultraviolet Radiation", "Ultraviolet Radiation", "Ultraviolet Radiation", "Ultraviolet Radiation", "Ultraviolet Radiation", "Ultraviolet Radiation", "Luminescence", "Luminescence", "Luminescence", "Luminescence", "Luminescence", "Luminescence", "Luminescence", "Luminescence", "Luminescence", "Luminescence", "Luminescence", "Luminescence", "Luminescence", "Luminescence", "Luminescence", "Luminescence", "Luminescence", "Luminescence", "Luminescence", "Luminescence", "Luminescence", "Luminescence", "Luminescence", "Ultraviolet Radiation", "Ultraviolet Radiation", "Ultraviolet Radiation", "Ultraviolet Radiation", "Ultraviolet Radiation", "Ultraviolet Radiation", "Ultraviolet Radiation", "Ultraviolet Radiation", "Ultraviolet Radiation", "Ultraviolet Radiation", "Ultraviolet Radiation", "Ultraviolet Radiation", "Acids + Bases", "Luminescence", "Magnetic Ordering", "Magnetic Ordering", "Acids + Bases", "Acids + Bases", "Acids + Bases", "Acids + Bases", "Acids + Bases", "Nuclear Fusion", "Nuclear Fusion", "Nuclear Fusion", "Nuclear Fusion", "Nuclear Fusion", "Nuclear Fusion", "Nuclear Fusion", 
"Nuclear Fusion", "Nuclear Fusion", "Separation Processes", "Separation Processes", "Separation Processes", "Separation Processes", "Separation Processes", "Separation Processes", "Separation Processes", "Separation Processes", "Separation Processes", "Separation Processes", "Cryobiology", "Cryobiology", "Cryobiology", "Cryobiology", "Cryobiology", "Cryobiology", "Cryobiology", "Cryobiology", "Nuclear Fusion", "Nuclear Fusion", "Nuclear Fusion", "Nuclear Fusion", "Nuclear Fusion", "Nuclear Fusion", "Nuclear Fusion", "Nuclear Fusion", "Nuclear Fusion", "Nuclear Fusion", "Nuclear Fusion", "Nuclear Fusion", "Nuclear Fusion", "Nuclear Fusion", "Nuclear Fusion", "Nuclear Fusion", "Nuclear Fusion", "Separation Processes", "Separation Processes", "Tissue Engineering", "Tissue Engineering", "Tissue Engineering", "Tissue Engineering", "Magnetic Ordering", "Magnetic Ordering", "Magnetic Ordering", "Magnetic Ordering", "Magnetic Ordering", "Magnetic Ordering", "Magnetic Ordering", "Magnetic Ordering", "Luminescence", "Luminescence", "Luminescence", "Luminescence", "Luminescence", "Luminescence", "Luminescence", "Luminescence", "Luminescence", "Luminescence", "Luminescence", "Luminescence", "Separation Processes", "Separation Processes", "Separation Processes", "Separation Processes", "Separation Processes", "Carbohydrates", "Separation Processes", "Separation Processes", "Separation Processes", "Separation Processes", "Separation Processes", "Separation Processes", "Separation Processes", "Separation Processes", "Tissue Engineering", "Tissue Engineering", "Tissue Engineering", "Tissue Engineering", "Tissue Engineering", "Tissue Engineering", "Tissue Engineering", "Separation Processes", "Separation Processes", "Separation Processes", "Separation Processes", "Separation Processes", "Separation Processes", "Separation Processes", "Separation Processes", "Separation Processes", "Separation Processes", "Separation Processes", "Separation Processes", "Separation Processes", "Separation Processes", "Separation Processes", "Separation Processes", "Separation Processes", "Separation Processes", "Separation Processes", "Separation Processes", "Separation Processes", "Separation Processes", "Separation Processes", "Separation Processes", "Separation Processes", "Separation Processes", "Separation Processes", "Separation Processes", "Separation Processes", "Separation Processes", "Cryobiology", "Cryobiology", "Cryobiology", "Cryobiology", "Nuclear Fusion", "Nuclear Fusion", "Nuclear Fusion", "Nuclear Fusion", "Nuclear Fusion", "Nuclear Fusion", "Nuclear Fusion", "Nuclear Fusion", "Separation Processes", "Separation Processes", "Separation Processes", "Separation Processes", "Separation Processes", "Separation Processes", "Separation Processes", "Separation Processes", "Separation Processes", "Separation Processes", "Separation Processes", "Separation Processes", "Separation Processes", "Separation Processes", "Separation Processes", "Separation Processes", "Separation Processes", "Separation Processes", "Separation Processes", "Separation Processes", "Separation Processes", "Separation Processes", "Separation Processes", "Separation Processes", "Separation Processes", "Separation Processes", "Tissue Engineering", "Tissue Engineering", "Tissue Engineering", "Tissue Engineering", "Separation Processes", "Separation Processes", "Separation Processes", "Separation Processes", "Separation Processes", "Separation Processes", "Separation Processes", "Separation Processes", "Separation Processes", "Separation 
Processes", "Separation Processes", "Separation Processes", "Separation Processes", "Separation Processes", "Separation Processes", "Separation Processes", "Separation Processes", "Separation Processes", "Separation Processes", "Separation Processes", "Separation Processes", "Separation Processes", "Separation Processes", "Separation Processes", "Separation Processes", "Separation Processes", "Separation Processes", "Separation Processes", "Separation Processes", "Separation Processes", "Separation Processes", "Separation Processes", "Separation Processes", "Separation Processes", "Separation Processes", "Separation Processes", "Separation Processes", "Separation Processes", "Separation Processes", "Separation Processes", "Separation Processes", "Separation Processes", "Separation Processes", "Separation Processes", "Separation Processes", "Separation Processes", "Separation Processes", "Separation Processes", "Separation Processes", "Separation Processes", "Separation Processes", "Separation Processes", "Separation Processes", "Separation Processes", "Separation Processes", "Separation Processes", "Separation Processes", "Cryobiology", "Nuclear Fusion", "Nuclear Fusion", "Nuclear Fusion", "Nuclear Fusion", "Nuclear Fusion", "Nuclear Fusion", "Nuclear Fusion", "Nuclear Fusion", "Nuclear Fusion", "Nuclear Fusion", "Nuclear Fusion", "Nuclear Fusion", "Nuclear Fusion", "Ultraviolet Radiation", "Ultraviolet Radiation", "Ultraviolet Radiation", "Ultraviolet Radiation", "Ultraviolet Radiation", "Ultraviolet Radiation", "Acids + Bases", "Acids + Bases", "Acids + Bases", "Separation Processes", "Separation Processes", "Separation Processes", "Separation Processes", "Separation Processes", "Separation Processes", "Separation Processes", "Luminescence", "Luminescence", "Luminescence", "Luminescence", "Luminescence", "Luminescence", "Luminescence", "Luminescence", "Cryobiology", "Cryobiology", "Cryobiology", "Cryobiology", "Cryobiology", "Cryobiology", "Cryobiology", "Cryobiology", "Cryobiology", "Cryobiology", "Cryobiology", "Cryobiology", "Cryobiology", "Cryobiology", "Cryobiology", "Cryobiology", "Cryobiology", "Magnetic Ordering", "Magnetic Ordering", "Magnetic Ordering", "Magnetic Ordering", "Magnetic Ordering", "Magnetic Ordering", "Magnetic Ordering", "Magnetic Ordering", "Magnetic Ordering", "Magnetic Ordering", "Magnetic Ordering", "Magnetic Ordering", "Magnetic Ordering", "Magnetic Ordering", "Magnetic Ordering", "Magnetic Ordering", "Magnetic Ordering", "Magnetic Ordering", "Magnetic Ordering", "Magnetic Ordering", "Magnetic Ordering", "Magnetic Ordering", "Magnetic Ordering", "Separation Processes", "Separation Processes", "Separation Processes", "Separation Processes", "Separation Processes", "Separation Processes", "Separation Processes", "Separation Processes", "Separation Processes", "Separation Processes", "Separation Processes", "Separation Processes", "Separation Processes", "Separation Processes", "Separation Processes", "Separation Processes", "Luminescence", "Luminescence", "Luminescence", "Luminescence", "Luminescence", "Luminescence", "Separation Processes", "Separation Processes", "Separation Processes", "Separation Processes", "Separation Processes", "Separation Processes", "Separation Processes", "Separation Processes", "Separation Processes", "Separation Processes", "Separation Processes", "Separation Processes", "Separation Processes", "Separation Processes", "Separation Processes", "Separation Processes", "Separation Processes", "Separation Processes", "Carbohydrates", 
"Carbohydrates", "Carbohydrates", "Carbohydrates", "Carbohydrates", "Carbohydrates", "Luminescence", "Luminescence", "Luminescence", "Luminescence", "Luminescence", "Luminescence", "Luminescence", "Magnetic Ordering", "Magnetic Ordering", "Magnetic Ordering", "Magnetic Ordering", "Magnetic Ordering", "Magnetic Ordering", "Magnetic Ordering", "Magnetic Ordering", "Magnetic Ordering", "Magnetic Ordering", "Magnetic Ordering", "Magnetic Ordering", "Magnetic Ordering", "Magnetic Ordering", "Magnetic Ordering", "Magnetic Ordering", "Luminescence", "Luminescence", "Luminescence", "Luminescence", "Luminescence", "Luminescence", "Luminescence", "Luminescence", "Luminescence", "Luminescence", "Luminescence", "Cryobiology", "Cryobiology", "Cryobiology", "Cryobiology", "Cryobiology", "Cryobiology", "Cryobiology", "Cryobiology", "Cryobiology", "Cryobiology", "Cryobiology", "Cryobiology", "Cryobiology", "Cryobiology", "Cryobiology", "Cryobiology", "Cryobiology", "Tissue Engineering", "Tissue Engineering", "Tissue Engineering", "Tissue Engineering", "Tissue Engineering", "Acids + Bases", "Luminescence", "Luminescence", "Luminescence", "Luminescence", "Luminescence", "Luminescence", "Luminescence", "Luminescence", "Luminescence", "Luminescence", "Acids + Bases", "Acids + Bases", "Nuclear Fusion", "Nuclear Fusion", "Nuclear Fusion", "Nuclear Fusion", "Nuclear Fusion", "Nuclear Fusion", "Nuclear Fusion", "Nuclear Fusion", "Nuclear Fusion", "Nuclear Fusion", "Nuclear Fusion", "Nuclear Fusion", "Tissue Engineering", "Tissue Engineering", "Tissue Engineering", "Tissue Engineering", "Tissue Engineering", "Tissue Engineering", "Tissue Engineering", "Tissue Engineering", "Tissue Engineering", "Tissue Engineering", "Tissue Engineering", "Tissue Engineering", "Tissue Engineering", "Tissue Engineering", "Tissue Engineering", "Tissue Engineering", "Tissue Engineering", "Tissue Engineering", "Tissue Engineering", "Tissue Engineering", "Tissue Engineering", "Tissue Engineering", "Tissue Engineering", "Tissue Engineering", "Tissue Engineering", "Tissue Engineering", "Tissue Engineering", "Tissue Engineering", "Tissue Engineering", "Tissue Engineering", "Tissue Engineering", "Tissue Engineering", "Tissue Engineering", "Tissue Engineering", "Tissue Engineering", "Tissue Engineering", "Cryobiology", "Carbohydrates", "Carbohydrates", "Carbohydrates", "Carbohydrates", "Carbohydrates", "Carbohydrates", "Carbohydrates", "Carbohydrates", "Carbohydrates", "Acids + Bases", "Acids + Bases", "Acids + Bases", "Nuclear Fusion", "Nuclear Fusion", "Nuclear Fusion", "Nuclear Fusion", "Nuclear Fusion", "Nuclear Fusion", "Nuclear Fusion", "Nuclear Fusion", "Ultraviolet Radiation", "Ultraviolet Radiation", "Ultraviolet Radiation", "Ultraviolet Radiation", "Ultraviolet Radiation", "Ultraviolet Radiation", "Ultraviolet Radiation", "Ultraviolet Radiation", "Ultraviolet Radiation", "Ultraviolet Radiation", "Ultraviolet Radiation", "Ultraviolet Radiation", "Ultraviolet Radiation", "Ultraviolet Radiation", "Ultraviolet Radiation", "Ultraviolet Radiation", "Ultraviolet Radiation", "Ultraviolet Radiation", "Ultraviolet Radiation", "Ultraviolet Radiation", "Ultraviolet Radiation", "Ultraviolet Radiation", "Ultraviolet Radiation", "Ultraviolet Radiation", "Ultraviolet Radiation", "Ultraviolet Radiation", "Ultraviolet Radiation", "Ultraviolet Radiation", "Ultraviolet Radiation", "Ultraviolet Radiation", "Ultraviolet Radiation", "Ultraviolet Radiation", "Ultraviolet Radiation", "Ultraviolet Radiation", "Ultraviolet Radiation", "Ultraviolet 
Radiation", "Ultraviolet Radiation", "Ultraviolet Radiation", "Ultraviolet Radiation", "Ultraviolet Radiation", "Ultraviolet Radiation", "Ultraviolet Radiation", "Ultraviolet Radiation", "Ultraviolet Radiation", "Ultraviolet Radiation", "Ultraviolet Radiation", "Ultraviolet Radiation", "Ultraviolet Radiation", "Ultraviolet Radiation", "Ultraviolet Radiation", "Ultraviolet Radiation", "Ultraviolet Radiation", "Ultraviolet Radiation", "Ultraviolet Radiation", "Ultraviolet Radiation", "Ultraviolet Radiation", "Ultraviolet Radiation", "Ultraviolet Radiation", "Ultraviolet Radiation", "Ultraviolet Radiation", "Ultraviolet Radiation", "Ultraviolet Radiation", "Ultraviolet Radiation", "Ultraviolet Radiation", "Ultraviolet Radiation", "Ultraviolet Radiation", "Ultraviolet Radiation", "Ultraviolet Radiation", "Ultraviolet Radiation", "Ultraviolet Radiation", "Ultraviolet Radiation", "Ultraviolet Radiation", "Ultraviolet Radiation", "Ultraviolet Radiation", "Ultraviolet Radiation", "Ultraviolet Radiation", "Ultraviolet Radiation", "Ultraviolet Radiation", "Luminescence", "Luminescence", "Separation Processes", "Separation Processes", "Separation Processes", "Separation Processes", "Separation Processes", "Separation Processes", "Separation Processes", "Separation Processes", "Separation Processes", "Tissue Engineering", "Tissue Engineering", "Tissue Engineering", "Tissue Engineering", "Cryobiology", "Cryobiology", "Cryobiology", "Cryobiology", "Cryobiology", "Cryobiology", "Cryobiology", "Cryobiology", "Nuclear Fusion", "Carbohydrates", "Carbohydrates", "Carbohydrates", "Carbohydrates", "Carbohydrates", "Carbohydrates", "Carbohydrates", "Carbohydrates", "Magnetic Ordering", "Magnetic Ordering", "Separation Processes", "Separation Processes", "Separation Processes", "Separation Processes", "Separation Processes", "Separation Processes", "Separation Processes", "Cryobiology", "Tissue Engineering", "Tissue Engineering", "Tissue Engineering", "Tissue Engineering", "Tissue Engineering", "Luminescence", "Luminescence", "Luminescence", "Luminescence", "Luminescence", "Luminescence", "Luminescence", "Luminescence", "Luminescence", "Luminescence" ]