Creeping normality
Creeping normality (also called gradualism or landscape amnesia) is a process by which a major change can come to be accepted as normal and acceptable if it happens gradually, through small, often unnoticeable increments of change. The same change could be regarded as remarkable and objectionable if it took hold suddenly or within a short time span. American scientist Jared Diamond used creeping normality in his 2005 book Collapse: How Societies Choose to Fail or Succeed. Prior to releasing the book, Diamond had explored this idea while attempting to explain why, in the course of long-term environmental degradation, Easter Island natives would, seemingly irrationally, chop down the last tree.

See also
There are a number of metaphors related to creeping normality, including:
Boiling frog
Camel's nose
Lingchi
"First they came ..."
Habituation
If You Give a Mouse a Cookie
Moving the goalposts
Normalisation of deviance
Overton window
Principiis obsta (et respice finem) – 'resist the beginnings (and consider the end)'
Salami tactics
Shifting baseline
Slippery slope
Technological change as a social process
Tyranny of small decisions
Conjugate variables (thermodynamics)
In thermodynamics, the internal energy of a system is expressed in terms of pairs of conjugate variables such as temperature and entropy (T, S), pressure and volume (p, V), or chemical potential and particle number (μ, N). In fact, all thermodynamic potentials are expressed in terms of conjugate pairs. The product of two quantities that are conjugate has units of energy or sometimes power. For a mechanical system, a small increment of energy is the product of a force times a small displacement. A similar situation exists in thermodynamics. An increment in the energy of a thermodynamic system can be expressed as the sum of the products of certain generalized "forces" that, when unbalanced, cause certain generalized "displacements", and the product of the two is the energy transferred as a result. These forces and their associated displacements are called conjugate variables. The thermodynamic force is always an intensive variable and the displacement is always an extensive variable, yielding an extensive energy transfer. The intensive (force) variable is the derivative of the internal energy with respect to the extensive (displacement) variable, while all other extensive variables are held constant. The thermodynamic square can be used as a tool to recall and derive some of the thermodynamic potentials based on conjugate variables. In the above description, the product of two conjugate variables yields an energy. In other words, the conjugate pairs are conjugate with respect to energy. In general, conjugate pairs can be defined with respect to any thermodynamic state function. Conjugate pairs with respect to entropy are often used, in which the product of the conjugate pairs yields an entropy. Such conjugate pairs are particularly useful in the analysis of irreversible processes, as exemplified in the derivation of the Onsager reciprocal relations.

Overview
Just as a small increment of energy in a mechanical system is the product of a force times a small displacement, so an increment in the energy of a thermodynamic system can be expressed as the sum of the products of certain generalized "forces" which, when unbalanced, cause certain generalized "displacements" to occur, with their product being the energy transferred as a result. These forces and their associated displacements are called conjugate variables. For example, consider the pressure–volume conjugate pair (p, V). The pressure p acts as a generalized force: pressure differences force a change in volume V, and their product is the energy lost by the system due to work. Here, pressure is the driving force, volume is the associated displacement, and the two form a pair of conjugate variables. In a similar way, temperature differences drive changes in entropy, and their product is the energy transferred by heat transfer. The thermodynamic force is always an intensive variable and the displacement is always an extensive variable, yielding an extensive energy. The intensive (force) variable is the derivative of the (extensive) internal energy with respect to the extensive (displacement) variable, with all other extensive variables held constant. The theory of thermodynamic potentials is not complete until one considers the number of particles in a system as a variable on par with the other extensive quantities such as volume and entropy. The number of particles is, like volume and entropy, the displacement variable in a conjugate pair. The generalized force component of this pair is the chemical potential.
The chemical potential may be thought of as a force which, when imbalanced, pushes an exchange of particles, either with the surroundings, or between phases inside the system. In cases where there is a mixture of chemicals and phases, this is a useful concept. For example, if a container holds liquid water and water vapor, there will be a chemical potential (which is negative) for the liquid which pushes the water molecules into the vapor (evaporation) and a chemical potential for the vapor, pushing vapor molecules into the liquid (condensation). Only when these "forces" equilibrate, and the chemical potential of each phase is equal, is equilibrium obtained.

The most commonly considered conjugate thermodynamic variables are (with corresponding SI units):
Thermal parameters:
Temperature: T (K)
Entropy: S (J K−1)
Mechanical parameters:
Pressure: p (Pa = J m−3)
Volume: V (m3 = J Pa−1)
or, more generally,
Stress: σ_ij (Pa = J m−3)
Volume × Strain: V × ε_ij (m3 = J Pa−1)
Material parameters:
Chemical potential: μ (J)
Particle number: N (particles or moles)

For a system with several different types of particles, a small change in the internal energy is given by:
dU = T dS − p dV + Σᵢ μᵢ dNᵢ,
where U is the internal energy, T is the temperature, S is the entropy, p is the pressure, V is the volume, μᵢ is the chemical potential of the i-th particle type, and Nᵢ is the number of i-type particles in the system. Here, the temperature, pressure, and chemical potential are the generalized forces, which drive the generalized changes in entropy, volume, and particle number respectively. These parameters all affect the internal energy of a thermodynamic system. A small change in the internal energy of the system is given by the sum of the flow of energy across the boundaries of the system due to the corresponding conjugate pair. These concepts will be expanded upon in the following sections.

While dealing with processes in which systems exchange matter or energy, classical thermodynamics is not concerned with the rate at which such processes take place, termed kinetics. For this reason, the term thermodynamics is usually used synonymously with equilibrium thermodynamics. A central notion for this connection is that of quasistatic processes, namely idealized, "infinitely slow" processes. Time-dependent thermodynamic processes far away from equilibrium are studied by non-equilibrium thermodynamics. This can be done through linear or non-linear analysis of irreversible processes, allowing systems near and far away from equilibrium to be studied, respectively.

Pressure/volume and stress/strain pairs
As an example, consider the (p, V) conjugate pair. The pressure acts as a generalized force – pressure differences force a change in volume, and their product is the energy lost by the system due to mechanical work. Pressure is the driving force, volume is the associated displacement, and the two form a pair of conjugate variables. The above holds true only for non-viscous fluids. In the case of viscous fluids, plastic and elastic solids, the pressure force is generalized to the stress tensor, and changes in volume are generalized to the volume multiplied by the strain tensor. These then form a conjugate pair. If σ_ij is the ij component of the stress tensor, and ε_ij is the ij component of the strain tensor, then the mechanical work done as the result of a stress-induced infinitesimal strain dε_ij is:
δw = V Σ_ij σ_ij dε_ij
or, using Einstein notation for the tensors, in which repeated indices are assumed to be summed:
δw = V σ_ij dε_ij.
In the case of pure compression (i.e.
no shearing forces), the stress tensor is simply the negative of the pressure times the unit tensor, σ_ij = −p δ_ij, so that
δw = −p V δ_ij dε_ij = −p V dε_ii.
The trace of the strain tensor is the fractional change in volume, dε_ii = dV/V, so that the above reduces to δw = −p dV, as it should.

Temperature/entropy pair
In a similar way, temperature differences drive changes in entropy, and their product is the energy transferred by heating. Temperature is the driving force, entropy is the associated displacement, and the two form a pair of conjugate variables. The temperature/entropy pair of conjugate variables is the only heat term; the other terms are essentially all various forms of work.

Chemical potential/particle number pair
The chemical potential is like a force which pushes an increase in particle number. In cases where there is a mixture of chemicals and phases, this is a useful concept. For example, if a container holds water and water vapor, there will be a chemical potential (which is negative) for the liquid, pushing water molecules into the vapor (evaporation), and a chemical potential for the vapor, pushing vapor molecules into the liquid (condensation). Only when these "forces" equilibrate is equilibrium obtained.

See also
Generalized coordinate and generalized force: analogous conjugate variable pairs found in classical mechanics.
Intensive and extensive properties
Bond graph
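As a concrete illustration of the fundamental relation above, dU = T dS − p dV + Σᵢ μᵢ dNᵢ, the following is a minimal numerical sketch. The state values and increments are arbitrary example numbers, not data for any particular substance.

```python
# Energy increment from conjugate (intensive force, extensive displacement) pairs:
#   dU = T dS - p dV + sum_i mu_i dN_i
# All numbers are arbitrary illustrative values.

def energy_increment(T, dS, p, dV, chemical_terms):
    """Sum the contributions of each conjugate pair to dU (in joules)."""
    dU = T * dS          # heat term: temperature x entropy change
    dU += -p * dV        # mechanical work term: -pressure x volume change
    for mu, dN in chemical_terms:
        dU += mu * dN    # particle-exchange term for each species
    return dU

dU = energy_increment(
    T=300.0,                           # temperature, K
    dS=0.02,                           # entropy change, J/K
    p=101325.0,                        # pressure, Pa
    dV=-1.0e-5,                        # volume change, m^3 (a compression)
    chemical_terms=[(-12000.0, 1.0e-3)],  # (chemical potential in J/mol, dN in mol)
)
print(f"dU = {dU:.3f} J")
```

Each term is an intensive "force" multiplied by the change in its extensive "displacement", which is exactly the structure described in the paragraphs above.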
Guidance, navigation, and control
Guidance, navigation and control (abbreviated GNC, GN&C, or G&C) is a branch of engineering dealing with the design of systems to control the movement of vehicles, especially automobiles, ships, aircraft, and spacecraft. In many cases these functions can be performed by trained humans. However, because of the speed of, for example, a rocket's dynamics, human reaction time is too slow to control this movement. Therefore, systems—now almost exclusively digital electronic—are used for such control. Even in cases where humans can perform these functions, it is often the case that GNC systems provide benefits such as alleviating operator workload, smoothing turbulence, fuel savings, etc. In addition, sophisticated applications of GNC enable automatic or remote control. Guidance refers to the determination of the desired path of travel (the "trajectory") from the vehicle's current location to a designated target, as well as desired changes in velocity, rotation and acceleration for following that path. Navigation refers to the determination, at a given time, of the vehicle's location and velocity (the "state vector") as well as its attitude. Control refers to the manipulation of the forces, by way of steering controls, thrusters, etc., needed to execute guidance commands while maintaining vehicle stability.

Parts
Guidance, navigation, and control systems consist of three essential parts: navigation, which tracks current location; guidance, which leverages navigation data and target information to direct flight control "where to go"; and control, which accepts guidance commands to effect changes in aerodynamic and/or engine controls.

Navigation is the art of determining where you are, a science that received tremendous focus in 1714 with the Longitude prize. Navigation aids either measure position from a fixed point of reference (e.g. landmark, North Star, LORAN beacon), measure relative position to a target (e.g. radar, infra-red, ...) or track movement from a known position/starting point (e.g. IMU). Today's complex systems use multiple approaches to determine current position. For example, some of today's most advanced navigation systems are embodied within anti-ballistic missiles: the RIM-161 Standard Missile 3 leverages GPS, IMU and ground segment data in the boost phase and relative position data for intercept targeting. Complex systems typically have multiple layers of redundancy to address drift, improve accuracy (e.g. relative to a target) and address isolated system failure. Navigation systems therefore take multiple inputs from many different sensors, both internal to the system and/or external (e.g. ground-based updates). The Kalman filter provides the most common approach to combining navigation data (from multiple sensors) to resolve current position; a minimal sketch of this kind of sensor fusion appears below.

Guidance is the "driver" of a vehicle. It takes input from the navigation system (where am I) and uses targeting information (where do I want to go) to send signals to the flight control system that will allow the vehicle to reach its destination (within the operating constraints of the vehicle). The "targets" for guidance systems are one or more state vectors (position and velocity) and can be inertial or relative. During powered flight, guidance is continually calculating steering directions for flight control. For example, the Space Shuttle targets an altitude, velocity vector, and flight-path angle (gamma) to drive main engine cut off. Similarly, an intercontinental ballistic missile also targets a vector.
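The navigation paragraph above names the Kalman filter as the usual way to fuse readings from several sensors into a single position estimate. Below is a minimal one-dimensional sketch of that idea, not any particular flight system's implementation; the constant-velocity model and the noise values are illustrative assumptions.

```python
import numpy as np

# Minimal 1-D constant-velocity Kalman filter fusing noisy position fixes
# (GPS-like measurements) with a simple motion model.
# All numbers below are illustrative assumptions, not real sensor specs.

dt = 1.0                                   # time step (s)
F = np.array([[1.0, dt], [0.0, 1.0]])      # state transition: [position, velocity]
H = np.array([[1.0, 0.0]])                 # we only measure position
Q = np.diag([0.01, 0.01])                  # process (model) noise covariance
R = np.array([[4.0]])                      # measurement noise covariance (2 m std dev)

x = np.array([0.0, 0.0])                   # initial state estimate
P = np.eye(2) * 10.0                       # initial state covariance

rng = np.random.default_rng(0)
true_pos, true_vel = 0.0, 1.0              # simulated "truth" for the demo

for _ in range(20):
    true_pos += true_vel * dt
    z = true_pos + rng.normal(0.0, 2.0)    # noisy position fix

    # Predict step: propagate the motion model
    x = F @ x
    P = F @ P @ F.T + Q

    # Update step: blend prediction with the measurement
    y = z - H @ x                          # innovation
    S = H @ P @ H.T + R                    # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)         # Kalman gain
    x = x + K @ y
    P = (np.eye(2) - K @ H) @ P

print(f"estimated position {x[0]:.2f} m, velocity {x[1]:.2f} m/s")
```

Real GNC stacks track higher-dimensional states (position, velocity, attitude) and use extended or unscented variants of the filter, but the predict/update structure is the same.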
The target vectors are developed to fulfill the mission and can be preplanned or dynamically created. Control Flight control is accomplished either aerodynamically or through powered controls such as engines. Guidance sends signals to flight control. A Digital Autopilot (DAP) is the interface between guidance and control. Guidance and the DAP are responsible for calculating the precise instruction for each flight control. The DAP provides feedback to guidance on the state of flight controls. Examples GNC systems are found in essentially all autonomous or semi-autonomous systems. These include: Autopilots Driverless cars, like Mars rovers or those participating in the DARPA Grand Challenge Guided missiles Precision-guided airdrop systems Reaction control systems for spacecraft Spacecraft launch vehicles Unmanned aerial vehicles Auto-steering tractors Autonomous underwater vehicle Related examples are: Celestial navigation is a position fixing technique that was devised to help sailors cross the featureless oceans without having to rely on dead reckoning to enable them to strike land. Celestial navigation uses angular measurements (sights) between the horizon and a common celestial object. The Sun is most often measured. Skilled navigators can use the Moon, planets or one of 57 navigational stars whose coordinates are tabulated in nautical almanacs. Historical tools include a sextant, watch and ephemeris data. Today's space shuttle, and most interplanetary spacecraft, use optical systems to calibrate inertial navigation systems: Crewman Optical Alignment Sight (COAS), Star Tracker. Inertial Measurement Units (IMUs) are the primary inertial system for maintaining current position (navigation) and orientation in missiles and aircraft. They are complex machines with one or more rotating Gyroscopes that can rotate freely in 3 degrees of motion within a complex gimbal system. IMUs are "spun up" and calibrated prior to launch. A minimum of 3 separate IMUs are in place within most complex systems. In addition to relative position, the IMUs contain accelerometers which can measure acceleration in all axes. The position data, combined with acceleration data provide the necessary inputs to "track" motion of a vehicle. IMUs have a tendency to "drift", due to friction and accuracy. Error correction to address this drift can be provided via ground link telemetry, GPS, radar, optical celestial navigation and other navigation aids. When targeting another (moving) vehicle, relative vectors become paramount. In this situation, navigation aids which provide updates of position relative to the target are more important. In addition to the current position, inertial navigation systems also typically estimate a predicted position for future computing cycles. See also Inertial navigation system. Astro-inertial guidance is a sensor fusion/information fusion of the Inertial guidance and Celestial navigation. Long-range Navigation (LORAN) : This was the predecessor of GPS and was (and to an extent still is) used primarily in commercial sea transportation. The system works by triangulating the ship's position based on directional reference to known transmitters. Global Positioning System (GPS) : GPS was designed by the US military with the primary purpose of addressing "drift" within the inertial navigation of Submarine-launched ballistic missile(SLBMs) prior to launch. GPS transmits 2 signal types: military and a commercial. 
The accuracy of the military signal is classified but can be assumed to be well under 0.5 meters. The GPS system space segment is composed of 24 to 32 satellites in medium Earth orbit at an altitude of approximately 20,200 km (12,600 mi). The satellites are in six specific orbits and transmit highly accurate time and satellite location information which can be used to derive distances and calculate position.
Radar/Infrared/Laser: This form of navigation provides information to guidance relative to a known target; it has both civilian (e.g. rendezvous) and military applications. Variants include active (employs its own radar to illuminate the target), passive (detects the target's radar emissions) and semi-active radar homing.
Infrared homing: This form of guidance is used exclusively for military munitions, specifically air-to-air and surface-to-air missiles. The missile's seeker head homes in on the infrared (heat) signature from the target's engines (hence the term "heat-seeking missile").
Ultraviolet homing, used in the FIM-92 Stinger, is more resistant to countermeasures than IR homing systems.
Laser guidance: A laser designator device calculates relative position to a highlighted target. Most are familiar with the military uses of the technology in laser-guided bombs. The space shuttle crew leverages a hand-held device to feed information into rendezvous planning. The primary limitation of this device is that it requires a line of sight between the target and the designator.
Terrain contour matching (TERCOM): Uses a ground-scanning radar to "match" topography against digital map data to fix current position. Used by cruise missiles such as the Tomahawk (missile family).

See also
Aeronautics
Air navigation
Aircraft flight control system
Control engineering
Flight control surfaces
Missile guidance
Navigation

External links
AIAA GNC Conference (annual)
Academic Earth: Aircraft Systems Engineering: Lecture 16 GNC. Phil Hattis – MIT
Georgia Tech: GNC: Theory and Applications
NASA Shuttle Technology: GNC
Boeing: Defense, Space & Security: International Space Station: GNC
Princeton Satellite Systems: GNC of High-Altitude Airships. Joseph Mueller
CEAS: EuroGNC Conference
Shock wave
In physics, a shock wave (also spelled shockwave), or shock, is a type of propagating disturbance that moves faster than the local speed of sound in the medium. Like an ordinary wave, a shock wave carries energy and can propagate through a medium, but is characterized by an abrupt, nearly discontinuous, change in pressure, temperature, and density of the medium. For the purpose of comparison, in supersonic flows, additional increased expansion may be achieved through an expansion fan, also known as a Prandtl–Meyer expansion fan. The accompanying expansion wave may approach and eventually collide and recombine with the shock wave, creating a process of destructive interference. The sonic boom associated with the passage of a supersonic aircraft is a type of sound wave produced by constructive interference. Unlike solitons (another kind of nonlinear wave), the energy and speed of a shock wave alone dissipates relatively quickly with distance. When a shock wave passes through matter, energy is preserved but entropy increases. This change in the matter's properties manifests itself as a decrease in the energy which can be extracted as work, and as a drag force on supersonic objects; shock waves are strongly irreversible processes. Terminology Shock waves can be: Normal At 90° (perpendicular) to the shock medium's flow direction. Oblique At an angle to the direction of flow. Bow Occurs upstream of the front (bow) of a blunt object when the upstream flow velocity exceeds Mach 1. Some other terms: Shock front: The boundary over which the physical conditions undergo an abrupt change because of a shock wave. Contact front: In a shock wave caused by a driver gas (for example the "impact" of a high explosive on the surrounding air), the boundary between the driver (explosive products) and the driven (air) gases. The contact front trails the shock front. In supersonic flows The abruptness of change in the features of the medium, that characterize shock waves, can be viewed as a phase transition: the pressure–time diagram of a supersonic object propagating shows how the transition induced by a shock wave is analogous to a dynamic phase transition. When an object (or disturbance) moves faster than the information can propagate into the surrounding fluid, then the fluid near the disturbance cannot react or "get out of the way" before the disturbance arrives. In a shock wave the properties of the fluid (density, pressure, temperature, flow velocity, Mach number) change almost instantaneously. Measurements of the thickness of shock waves in air have resulted in values around 200 nm (about 10−5 in), which is on the same order of magnitude as the mean free path of gas molecules. In reference to the continuum, this implies the shock wave can be treated as either a line or a plane if the flow field is two-dimensional or three-dimensional, respectively. Shock waves are formed when a pressure front moves at supersonic speeds and pushes on the surrounding air. At the region where this occurs, sound waves travelling against the flow reach a point where they cannot travel any further upstream and the pressure progressively builds in that region; a high-pressure shock wave rapidly forms. Shock waves are not conventional sound waves; a shock wave takes the form of a very sharp change in the gas properties. Shock waves in air are heard as a loud "crack" or "snap" noise. 
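Earlier in this section, the measured thickness of a shock front (around 200 nm) is compared with the mean free path of gas molecules. The following is a quick kinetic-theory estimate of that mean free path, a sketch that assumes a hard-sphere effective diameter for air molecules of about 0.37 nm:

```python
import math

# Rough mean free path of air at sea-level conditions, for comparison with
# the ~200 nm shock thickness quoted above. The effective molecular
# diameter is an assumed hard-sphere value, not a measured constant.
k_B = 1.380649e-23      # Boltzmann constant, J/K
T = 293.0               # temperature, K
p = 101325.0            # pressure, Pa
d = 3.7e-10             # assumed effective diameter of an air molecule, m

mean_free_path = k_B * T / (math.sqrt(2.0) * math.pi * d**2 * p)
print(f"mean free path ≈ {mean_free_path * 1e9:.0f} nm")   # ≈ 66 nm
```

The estimate (tens of nanometres) confirms that a shock front is only a few mean free paths thick, which is why continuum models can treat it as an effective discontinuity.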
Over longer distances, a shock wave can change from a nonlinear wave into a linear wave, degenerating into a conventional sound wave as it heats the air and loses energy. The sound wave is heard as the familiar "thud" or "thump" of a sonic boom, commonly created by the supersonic flight of aircraft. The shock wave is one of several different ways in which a gas in a supersonic flow can be compressed. Some other methods are isentropic compressions, including Prandtl–Meyer compressions. The method of compression of a gas results in different temperatures and densities for a given pressure ratio which can be analytically calculated for a non-reacting gas. A shock wave compression results in a loss of total pressure, meaning that it is a less efficient method of compressing gases for some purposes, for instance in the intake of a scramjet. The appearance of pressure-drag on supersonic aircraft is mostly due to the effect of shock compression on the flow. Normal shocks In elementary fluid mechanics utilizing ideal gases, a shock wave is treated as a discontinuity where entropy increases abruptly as the shock passes. Since no fluid flow is discontinuous, a control volume is established around the shock wave, with the control surfaces that bound this volume parallel to the shock wave (with one surface on the pre-shock side of the fluid medium and one on the post-shock side). The two surfaces are separated by a very small depth such that the shock itself is entirely contained between them. At such control surfaces, momentum, mass flux and energy are constant; within combustion, detonations can be modelled as heat introduction across a shock wave. It is assumed the system is adiabatic (no heat exits or enters the system) and no work is being done. The Rankine–Hugoniot conditions arise from these considerations. Taking into account the established assumptions, in a system where the downstream properties are becoming subsonic: the upstream and downstream flow properties of the fluid are considered isentropic. Since the total amount of energy within the system is constant, the stagnation enthalpy remains constant over both regions. However, entropy is increasing; this must be accounted for by a drop in stagnation pressure of the downstream fluid. Other shocks Oblique shocks When analyzing shock waves in a flow field, which are still attached to the body, the shock wave which is deviating at some arbitrary angle from the flow direction is termed oblique shock. These shocks require a component vector analysis of the flow; doing so allows for the treatment of the flow in an orthogonal direction to the oblique shock as a normal shock. Bow shocks When an oblique shock is likely to form at an angle which cannot remain on the surface, a nonlinear phenomenon arises where the shock wave will form a continuous pattern around the body. These are termed bow shocks. In these cases, the 1d flow model is not valid and further analysis is needed to predict the pressure forces which are exerted on the surface. Shock waves due to nonlinear steepening Shock waves can form due to steepening of ordinary waves. The best-known example of this phenomenon is ocean waves that form breakers on the shore. In shallow water, the speed of surface waves is dependent on the depth of the water. An incoming ocean wave has a slightly higher wave speed near the crest of each wave than near the troughs between waves, because the wave height is not infinitesimal compared to the depth of the water. 
The crests overtake the troughs until the leading edge of the wave forms a vertical face and spills over to form a turbulent shock (a breaker) that dissipates the wave's energy as sound and heat. Similar phenomena affect strong sound waves in gas or plasma, due to the dependence of the sound speed on temperature and pressure. Strong waves heat the medium near each pressure front, due to adiabatic compression of the air itself, so that high pressure fronts outrun the corresponding pressure troughs. There is a theory that the sound pressure levels in brass instruments such as the trombone become high enough for steepening to occur, forming an essential part of the bright timbre of the instruments. While shock formation by this process does not normally happen to unenclosed sound waves in Earth's atmosphere, it is thought to be one mechanism by which the solar chromosphere and corona are heated, via waves that propagate up from the solar interior. Analogies A shock wave may be described as the furthest point upstream of a moving object which "knows" about the approach of the object. In this description, the shock wave position is defined as the boundary between the zone having no information about the shock-driving event and the zone aware of the shock-driving event, analogous with the light cone described in the theory of special relativity. To produce a shock wave, an object in a given medium (such as air or water) must travel faster than the local speed of sound. In the case of an aircraft travelling at high subsonic speed, regions of air around the aircraft may be travelling at exactly the speed of sound, so that the sound waves leaving the aircraft pile up on one another, similar to a traffic jam on a motorway. When a shock wave forms, the local air pressure increases and then spreads out sideways. Because of this amplification effect, a shock wave can be very intense, more like an explosion when heard at a distance (not coincidentally, since explosions create shock waves). Analogous phenomena are known outside fluid mechanics. For example, charged particles accelerated beyond the speed of light in a refractive medium (such as water, where the speed of light is less than that in a vacuum) create visible shock effects, a phenomenon known as Cherenkov radiation. Phenomenon types Below are a number of examples of shock waves, broadly grouped with similar shock phenomena: Moving shock Usually consists of a shock wave propagating into a stationary medium In this case, the gas ahead of the shock is stationary (in the laboratory frame) and the gas behind the shock can be supersonic in the laboratory frame. The shock propagates with a wavefront which is normal (at right angles) to the direction of flow. The speed of the shock is a function of the original pressure ratio between the two bodies of gas. Moving shocks are usually generated by the interaction of two bodies of gas at different pressure, with a shock wave propagating into the lower pressure gas and an expansion wave propagating into the higher pressure gas. Examples: Balloon bursting, shock tube, shock wave from explosion. Detonation wave A detonation wave is essentially a shock supported by a trailing exothermic reaction. It involves a wave travelling through a highly combustible or chemically unstable medium, such as an oxygen-methane mixture or a high explosive. The chemical reaction of the medium occurs following the shock wave, and the chemical energy of the reaction drives the wave forward. 
A detonation wave follows slightly different rules from an ordinary shock since it is driven by the chemical reaction occurring behind the shock wavefront. In the simplest theory for detonations, an unsupported, self-propagating detonation wave proceeds at the Chapman–Jouguet flow velocity. A detonation will also cause a shock to propagate into the surrounding air due to the overpressure induced by the explosion. When a shock wave is created by high explosives such as TNT (which has a detonation velocity of 6,900 m/s), it will always travel at high, supersonic velocity from its point of origin. Bow shock (detached shock) These shocks are curved and form a small distance in front of the body. Directly in front of the body, they stand at 90 degrees to the oncoming flow and then curve around the body. Detached shocks allow the same type of analytic calculations as for the attached shock, for the flow near the shock. They are a topic of continuing interest, because the rules governing the shock's distance ahead of the blunt body are complicated and are a function of the body's shape. Additionally, the shock standoff distance varies drastically with the temperature for a non-ideal gas, causing large differences in the heat transfer to the thermal protection system of the vehicle. See the extended discussion on this topic at atmospheric reentry. These follow the "strong-shock" solutions of the analytic equations, meaning that for some oblique shocks very close to the deflection angle limit, the downstream Mach number is subsonic. See also bow shock or oblique shock. Such a shock occurs when the maximum deflection angle is exceeded. A detached shock is commonly seen on blunt bodies, but may also be seen on sharp bodies at low Mach numbers. Examples: Space return vehicles (Apollo, Space shuttle), bullets, the boundary (bow shock) of a magnetosphere. The name "bow shock" comes from the example of a bow wave, the detached shock formed at the bow (front) of a ship or boat moving through water, whose slow surface wave speed is easily exceeded (see ocean surface wave). Attached shock These shocks appear as attached to the tip of sharp bodies moving at supersonic speeds. Examples: Supersonic wedges and cones with small apex angles. The attached shock wave is a classic structure in aerodynamics because, for a perfect gas and inviscid flow field, an analytic solution is available, such that the pressure ratio, temperature ratio, angle of the wedge and the downstream Mach number can all be calculated knowing the upstream Mach number and the shock angle. Smaller shock angles are associated with higher upstream Mach numbers, and the special case where the shock wave is at 90° to the oncoming flow (Normal shock), is associated with a Mach number of one. These follow the "weak-shock" solutions of the analytic equations. In rapid granular flows Shock waves can also occur in rapid flows of dense granular materials down inclined channels or slopes. Strong shocks in rapid dense granular flows can be studied theoretically and analyzed to compare with experimental data. Consider a configuration in which the rapidly moving material down the chute impinges on an obstruction wall erected perpendicular at the end of a long and steep channel. Impact leads to a sudden change in the flow regime from a fast moving supercritical thin layer to a stagnant thick heap. 
This flow configuration is particularly interesting because it is analogous to some hydraulic and aerodynamic situations associated with flow regime changes from supercritical to subcritical flows.

In astrophysics
Astrophysical environments feature many different types of shock waves. Some common examples are supernova shock waves or blast waves travelling through the interstellar medium, the bow shock caused by the Earth's magnetic field colliding with the solar wind and shock waves caused by galaxies colliding with each other. Another interesting type of shock in astrophysics is the quasi-steady reverse shock or termination shock that terminates the ultra-relativistic wind from young pulsars.

Meteor entering events
Shock waves are generated by meteoroids when they enter the Earth's atmosphere. The Tunguska event and the 2013 Russian meteor event are the best-documented evidence of the shock wave produced by a massive meteoroid. When the 2013 meteor entered the Earth's atmosphere with an energy release equivalent to 100 or more kilotons of TNT, dozens of times more powerful than the atomic bomb dropped on Hiroshima, the meteor's shock wave produced damage similar to that of a supersonic jet's flyby (directly underneath the meteor's path) and acted as a detonation wave, with the circular shock wave centred at the meteor explosion, causing multiple instances of broken glass in the city of Chelyabinsk and neighbouring areas.

Technological applications
In the examples below, the shock wave is controlled: it is produced by a technological device such as an airfoil, or occurs in the interior of one, such as a turbine.

Recompression shock
These shocks appear when the flow over a transonic body is decelerated to subsonic speeds. Examples: transonic wings, turbines. Where the flow over the suction side of a transonic wing is accelerated to a supersonic speed, the resulting re-compression can be by either Prandtl–Meyer compression or by the formation of a normal shock. This shock is of particular interest to makers of transonic devices because it can cause separation of the boundary layer at the point where it touches the transonic profile. This can then lead to full separation and stall on the profile, higher drag, or shock-buffet, a condition where the separation and the shock interact in a resonance condition, causing resonating loads on the underlying structure.

Pipe flow
This shock appears when supersonic flow in a pipe is decelerated. Examples: in supersonic propulsion: ramjet, scramjet, unstart; in flow control: needle valve, choked venturi. In this case the gas ahead of the shock is supersonic (in the laboratory frame), and the gas behind the shock system is either supersonic (oblique shocks) or subsonic (a normal shock), although for some oblique shocks very close to the deflection angle limit, the downstream Mach number is subsonic. The shock is the result of the deceleration of the gas by a converging duct, or by the growth of the boundary layer on the wall of a parallel duct.

Combustion engines
The wave disk engine (also named "Radial Internal Combustion Wave Rotor") is a kind of pistonless rotary engine that utilizes shock waves to transfer energy from a high-energy fluid to a low-energy fluid, thereby increasing both the temperature and pressure of the low-energy fluid.

Memristors
In memristors, under an externally applied electric field, shock waves can be launched across the transition-metal oxides, creating fast and non-volatile resistivity changes.
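The recompression and pipe-flow cases above rely on the ideal-gas normal-shock (Rankine–Hugoniot) relations to connect upstream and downstream states. Below is a small sketch for a calorically perfect gas; the upstream Mach number and the ratio of specific heats are example inputs, not values taken from any particular device.

```python
import math

def normal_shock(M1: float, gamma: float = 1.4):
    """Ideal-gas relations across a stationary normal shock."""
    if M1 <= 1.0:
        raise ValueError("A normal shock requires supersonic upstream flow (M1 > 1).")
    # Downstream Mach number
    M2 = math.sqrt((1.0 + 0.5 * (gamma - 1.0) * M1**2) /
                   (gamma * M1**2 - 0.5 * (gamma - 1.0)))
    # Static property ratios across the shock
    p_ratio = 1.0 + 2.0 * gamma / (gamma + 1.0) * (M1**2 - 1.0)
    rho_ratio = (gamma + 1.0) * M1**2 / ((gamma - 1.0) * M1**2 + 2.0)
    T_ratio = p_ratio / rho_ratio
    # Stagnation (total) pressure ratio -- always < 1, reflecting the entropy rise
    p0_ratio = (rho_ratio ** (gamma / (gamma - 1.0)) *
                ((gamma + 1.0) / (2.0 * gamma * M1**2 - (gamma - 1.0))) ** (1.0 / (gamma - 1.0)))
    return M2, p_ratio, rho_ratio, T_ratio, p0_ratio

M2, p2p1, r2r1, T2T1, p02p01 = normal_shock(2.0)
print(f"M2 = {M2:.3f}, p2/p1 = {p2p1:.2f}, rho2/rho1 = {r2r1:.2f}, "
      f"T2/T1 = {T2T1:.2f}, p02/p01 = {p02p01:.3f}")
```

For an upstream Mach number of 2 this gives a downstream Mach number of about 0.58 and a total-pressure ratio of about 0.72, i.e. roughly 28% of the stagnation pressure is lost across the shock, the kind of loss referred to in the scramjet-intake remark earlier in the article.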
Shock capturing and detection
Advanced techniques are needed to capture shock waves and to detect shock waves in both numerical computations and experimental observations. Computational fluid dynamics is commonly used to obtain the flow field with shock waves. Though shock waves are sharp discontinuities, in numerical solutions of fluid flow with discontinuities (shock wave, contact discontinuity or slip line), the shock wave can be smoothed out by a low-order numerical method (due to numerical dissipation), or spurious oscillations can appear near the shock surface with a high-order numerical method (due to the Gibbs phenomenon). There are other discontinuities in fluid flow besides the shock wave. The slip surface (3D) or slip line (2D) is a plane across which the tangent velocity is discontinuous, while pressure and normal velocity are continuous. Across the contact discontinuity, the pressure and velocity are continuous and the density is discontinuous. A strong expansion wave or shear layer may also contain high-gradient regions which appear to be a discontinuity. The features these flow structures share with shock waves, together with the limitations of numerical and experimental tools, lead to two important problems in practice: (1) some shock waves cannot be detected, or their positions are detected incorrectly, and (2) some flow structures which are not shock waves are wrongly detected to be shock waves. In fact, correct capturing and detection of shock waves are important since shock waves have the following influences: (1) causing loss of total pressure, which may be a concern related to scramjet engine performance, (2) providing lift for a wave-rider configuration, as the oblique shock wave at the lower surface of the vehicle can produce high pressure to generate lift, (3) leading to wave drag of a high-speed vehicle, which is harmful to vehicle performance, (4) inducing severe pressure loads and heat flux, e.g. Type IV shock–shock interference can yield a 17-fold heating increase at the vehicle surface, and (5) interacting with other structures, such as boundary layers, to produce new flow structures such as flow separation, transition, etc.

See also
Blast wave
Shock waves in astrophysics
Atmospheric focusing
Atmospheric reentry
Cherenkov radiation
Explosion
Hydraulic jump
Joule–Thomson effect
Mach wave
Magnetopause
Moreton wave
Normal shock tables
Oblique shock
Prandtl condition
Prandtl–Meyer expansion fan
Shocks and discontinuities (MHD)
Shock (mechanics)
Sonic boom
Supercritical airfoil
Undercompressive shock wave
Unstart
Shock diamond
Kelvin wake pattern

External links
NASA Glenn Research Center information on: Oblique Shocks, Multiple Crossed Shocks, Expansion Fans
Selkirk college: Aviation intranet: High speed (supersonic) flight
Energy loss in a shock wave, normal and oblique shock waves
Formation of a normal shock wave
Fundamentals of compressible flow, 2007
NASA 2015 Schlieren image of the shock wave of a T-38C
Circular dichroism
Circular dichroism (CD) is dichroism involving circularly polarized light, i.e., the differential absorption of left- and right-handed light. Left-hand circular (LHC) and right-hand circular (RHC) polarized light represent two possible spin angular momentum states for a photon, and so circular dichroism is also referred to as dichroism for spin angular momentum. This phenomenon was discovered by Jean-Baptiste Biot, Augustin Fresnel, and Aimé Cotton in the first half of the 19th century. Circular dichroism and circular birefringence are manifestations of optical activity. Circular dichroism is exhibited in the absorption bands of optically active chiral molecules. CD spectroscopy has a wide range of applications in many different fields. Most notably, UV CD is used to investigate the secondary structure of proteins. UV/Vis CD is used to investigate charge-transfer transitions. Near-infrared CD is used to investigate geometric and electronic structure by probing metal d→d transitions. Vibrational circular dichroism, which uses light from the infrared energy region, is used for structural studies of small organic molecules, and most recently proteins and DNA.

Physical principles
Circular polarization of light
Electromagnetic radiation consists of an electric and magnetic field that oscillate perpendicular to one another and to the propagating direction, a transverse wave. While linearly polarized light occurs when the electric field vector oscillates only in one plane, circularly polarized light occurs when the direction of the electric field vector rotates about its propagation direction while the vector retains constant magnitude. At a single point in space, the circularly polarized vector will trace out a circle over one period of the wave frequency, hence the name. Plots of the electric field vectors of linearly and circularly polarized light at one moment of time, for a range of positions, show that the circularly polarized electric vector forms a helix along the direction of propagation. For left circularly polarized light (LCP) with propagation towards the observer, the electric vector rotates counterclockwise. For right circularly polarized light (RCP), the electric vector rotates clockwise.

Interaction of circularly polarized light with matter
When circularly polarized light passes through an absorbing optically active medium, the speeds of the right and left polarizations differ (c_L ≠ c_R), as do their wavelengths (λ_L ≠ λ_R) and the extents to which they are absorbed (ε_L ≠ ε_R). Circular dichroism is the difference Δε = ε_L − ε_R. The electric field of a light beam causes a linear displacement of charge when interacting with a molecule (electric dipole), whereas its magnetic field causes a circulation of charge (magnetic dipole). These two motions combined cause an excitation of an electron in a helical motion, which includes translation and rotation and their associated operators. The experimentally determined rotational strength R of a transition is proportional to the integrated Δε over the absorption band:
R_exp ∝ ∫ (Δε/ν) dν.
The rotational strength has also been determined theoretically; for a transition between the ground state ψ_g and an excited state ψ_e,
R_theo = Im[⟨ψ_g|μ̂_elec|ψ_e⟩ · ⟨ψ_e|μ̂_mag|ψ_g⟩].
We see from these two equations that, in order to have non-zero Δε, the electric and magnetic dipole moment operators (μ̂_elec and μ̂_mag) must transform as the same irreducible representation. Only the chiral point groups (those containing no improper rotations) allow this, making only chiral molecules CD active. Simply put, since circularly polarized light itself is "chiral", it interacts differently with chiral molecules. That is, the two types of circularly polarized light are absorbed to different extents.
In a CD experiment, equal amounts of left and right circularly polarized light of a selected wavelength are alternately radiated into a (chiral) sample. One of the two polarizations is absorbed more than the other one, and this wavelength-dependent difference of absorption is measured, yielding the CD spectrum of the sample. Due to the interaction with the molecule, the electric field vector of the light traces out an elliptical path after passing through the sample. It is important that the chirality of the molecule can be conformational rather than structural. That is, for instance, a protein molecule with a helical secondary structure can have a CD that changes with changes in the conformation.

Delta absorbance
By definition,
ΔA = A_L − A_R,
where ΔA (delta absorbance) is the difference between the absorbance of left circularly polarized (LCP) and right circularly polarized (RCP) light (this is what is usually measured). ΔA is a function of wavelength, so for a measurement to be meaningful the wavelength at which it was performed must be known.

Molar circular dichroism
It can also be expressed, by applying Beer's law, as:
ΔA = (ε_L − ε_R) C l,
where ε_L and ε_R are the molar extinction coefficients for LCP and RCP light, C is the molar concentration, and l is the path length in centimeters (cm). Then
Δε = ε_L − ε_R
is the molar circular dichroism. This intrinsic property is what is usually meant by the circular dichroism of the substance. Since Δε is a function of wavelength, a molar circular dichroism value must specify the wavelength at which it is valid.

Extrinsic effects on circular dichroism
In many practical applications of circular dichroism (CD), as discussed below, the measured CD is not simply an intrinsic property of the molecule, but rather depends on the molecular conformation. In such a case the CD may also be a function of temperature, concentration, and the chemical environment, including solvents. In this case the reported CD value must also specify these other relevant factors in order to be meaningful. In ordered structures lacking two-fold rotational symmetry, optical activity, including differential transmission (and reflection) of circularly polarized waves, also depends on the propagation direction through the material. In this case, so-called extrinsic 3D chirality is associated with the mutual orientation of the light beam and the structure.

Molar ellipticity
Although ΔA is usually measured, for historical reasons most measurements are reported in degrees of ellipticity. Molar ellipticity is circular dichroism corrected for concentration. Molar circular dichroism and molar ellipticity, [θ], are readily interconverted by the equation:
[θ] = 3298.2 Δε.
This relationship is derived by defining the ellipticity of the polarization as:
tan θ = (E_R − E_L) / (E_R + E_L),
where E_R and E_L are the magnitudes of the electric field vectors of the right-circularly and left-circularly polarized light, respectively. When E_R equals E_L (when there is no difference in the absorbance of right- and left-circularly polarized light), θ is 0° and the light is linearly polarized. When either E_R or E_L is equal to zero (when there is complete absorbance of the circularly polarized light in one direction), θ is 45° and the light is circularly polarized. Generally, the circular dichroism effect is small, so tan θ is small and can be approximated as θ in radians.
Since the intensity or irradiance, I, of light is proportional to the square of the electric-field vector, the ellipticity becomes:
θ ≈ (I_R^(1/2) − I_L^(1/2)) / (I_R^(1/2) + I_L^(1/2)).
Then by substituting for I using Beer's law in natural logarithm form, I = I_0 e^(−A ln 10):
θ ≈ (e^(−A_R ln10/2) − e^(−A_L ln10/2)) / (e^(−A_R ln10/2) + e^(−A_L ln10/2)).
The ellipticity can now be written as:
θ = tanh[(A_L − A_R)(ln 10)/4].
Since ΔA = A_L − A_R is small, this expression can be approximated by expanding the exponentials in a Taylor series to first order, discarding terms of ΔA in comparison with unity, and converting from radians to degrees:
θ ≈ ΔA ((ln 10)/4)(180/π) ≈ 32.98 ΔA degrees.
The linear dependence of solute concentration and pathlength is removed by defining molar ellipticity as
[θ] = 100 θ / (C l).
Then combining the last two expressions with Beer's law, molar ellipticity becomes:
[θ] = 100 ((ln 10)/4)(180/π) Δε ≈ 3298.2 Δε.
The units of molar ellipticity are historically (deg·cm2/dmol). To calculate molar ellipticity, the sample concentration (g/L), cell pathlength (cm), and the molecular weight (g/mol) must be known. If the sample is a protein, the mean residue weight (average molecular weight of the amino acid residues it contains) is often used in place of the molecular weight, essentially treating the protein as a solution of amino acids. Using mean residue ellipticity facilitates comparing the CD of proteins of different molecular weight; use of this normalized CD is important in studies of protein structure.

Mean residue ellipticity
Methods for estimating secondary structure in polymers, proteins and polypeptides in particular, often require that the measured molar ellipticity spectrum be converted to a normalized value, specifically a value independent of the polymer length. Mean residue ellipticity is used for this purpose; it is simply the measured molar ellipticity of the molecule divided by the number of monomer units (residues) in the molecule.

Application to biological molecules
In general, this phenomenon will be exhibited in absorption bands of any optically active molecule. As a consequence, circular dichroism is exhibited by biological molecules, because of their dextrorotary and levorotary components. Even more important is that a secondary structure will also impart a distinct CD to its respective molecules. Therefore, the alpha helix of proteins and the double helix of nucleic acids have CD spectral signatures representative of their structures. The capacity of CD to give a representative structural signature makes it a powerful tool in modern biochemistry with applications that can be found in virtually every field of study. CD is closely related to the optical rotatory dispersion (ORD) technique, and is generally considered to be more advanced. CD is measured in or near the absorption bands of the molecule of interest, while ORD can be measured far from these bands. CD's advantage is apparent in the data analysis. Structural elements are more clearly distinguished since their recorded bands do not overlap extensively at particular wavelengths as they do in ORD. In principle, these two spectral measurements can be interconverted through an integral transform (Kramers–Kronig relation), if all the absorptions are included in the measurements. The far-UV (ultraviolet) CD spectrum of proteins can reveal important characteristics of their secondary structure. CD spectra can be readily used to estimate the fraction of a molecule that is in the alpha-helix conformation, the beta-sheet conformation, the beta-turn conformation, or some other (e.g. random coil) conformation. These fractional assignments place important constraints on the possible secondary conformations that the protein can be in.
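The fraction estimates just described are, at their simplest, a decomposition of the measured far-UV spectrum into reference spectra of pure secondary-structure types. The sketch below shows the idea with fabricated basis spectra and a fabricated measurement; real analyses use curated experimental reference sets (e.g. the datasets behind methods such as CDSSTR or BeStSel) and more careful constraints.

```python
import numpy as np

# Toy decomposition of a far-UV CD spectrum into secondary-structure fractions.
# The "basis" spectra and the "measured" spectrum are fabricated for illustration;
# a real analysis would use experimentally derived reference sets.

wavelengths = np.arange(190, 251, 1)                    # nm

def gauss(center, width, amp):
    return amp * np.exp(-((wavelengths - center) / width) ** 2)

# Crude stand-ins for reference spectra (arbitrary ellipticity units)
basis = np.stack([
    gauss(192, 6, 60) + gauss(208, 6, -35) + gauss(222, 7, -35),   # "alpha helix"-like
    gauss(196, 6, 30) + gauss(217, 8, -15),                        # "beta sheet"-like
    gauss(198, 7, -20) + gauss(220, 10, 2),                        # "random coil"-like
], axis=1)                                              # shape: (n_wavelengths, 3)

true_fractions = np.array([0.55, 0.25, 0.20])
measured = basis @ true_fractions + np.random.default_rng(1).normal(0, 0.5, wavelengths.size)

# Least-squares fit for the structure fractions
fit, *_ = np.linalg.lstsq(basis, measured, rcond=None)
fit = np.clip(fit, 0, None)
fit /= fit.sum()                                        # renormalize to fractions

print("estimated fractions (helix, sheet, coil):", np.round(fit, 2))
```

The fit recovers the assumed 55/25/20 split to within the added noise, which is the essence of what the far-UV estimation methods do with much better reference data.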
CD cannot, in general, say where the alpha helices that are detected are located within the molecule or even completely predict how many there are. Despite this, CD is a valuable tool, especially for showing changes in conformation. It can, for instance, be used to study how the secondary structure of a molecule changes as a function of temperature or of the concentration of denaturing agents, e.g. Guanidinium chloride or urea. In this way it can reveal important thermodynamic information about the molecule (such as the enthalpy and Gibbs free energy of denaturation) that cannot otherwise be easily obtained. Anyone attempting to study a protein will find CD a valuable tool for verifying that the protein is in its native conformation before undertaking extensive and/or expensive experiments with it. Also, there are a number of other uses for CD spectroscopy in protein chemistry not related to alpha-helix fraction estimation. Moreover, CD spectroscopy has been used in bioinorganic interface studies. Specifically it has been used to analyze the differences in secondary structure of an engineered protein before and after titration with a reagent. The near-UV CD spectrum (>250 nm) of proteins provides information on the tertiary structure. The signals obtained in the 250–300 nm region are due to the absorption, dipole orientation and the nature of the surrounding environment of the phenylalanine, tyrosine, cysteine (or S-S disulfide bridges) and tryptophan amino acids. Unlike in far-UV CD, the near-UV CD spectrum cannot be assigned to any particular 3D structure. Rather, near-UV CD spectra provide structural information on the nature of the prosthetic groups in proteins, e.g., the heme groups in hemoglobin and cytochrome c. Visible CD spectroscopy is a very powerful technique to study metal–protein interactions and can resolve individual d–d electronic transitions as separate bands. CD spectra in the visible light region are only produced when a metal ion is in a chiral environment, thus, free metal ions in solution are not detected. This has the advantage of only observing the protein-bound metal, so pH dependence and stoichiometries are readily obtained. Optical activity in transition metal ion complexes have been attributed to configurational, conformational and the vicinal effects. Klewpatinond and Viles (2007) have produced a set of empirical rules for predicting the appearance of visible CD spectra for Cu2+ and Ni2+ square-planar complexes involving histidine and main-chain coordination. CD gives less specific structural information than X-ray crystallography and protein NMR spectroscopy, for example, which both give atomic resolution data. However, CD spectroscopy is a quick method that does not require large amounts of proteins or extensive data processing. Thus CD can be used to survey a large number of solvent conditions, varying temperature, pH, salinity, and the presence of various cofactors. CD spectroscopy is usually used to study proteins in solution, and thus it complements methods that study the solid state. This is also a limitation, in that many proteins are embedded in membranes in their native state, and solutions containing membrane structures are often strongly scattering. CD is sometimes measured in thin films. CD spectroscopy has also been done using semiconducting materials such as TiO2 to obtain large signals in the UV range of wavelengths, where the electronic transitions for biomolecules often occur. 
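Before any of the protein analyses discussed above, a raw instrument reading is normally converted using the molar and mean residue ellipticity definitions given earlier. The following is a minimal sketch of that conversion; the concentration, path length, residue count and molecular weight are made-up example values, not data from any real measurement.

```python
# Convert an observed ellipticity (millidegrees) into mean residue ellipticity
# [theta]_MRE (deg·cm²/dmol) and per-residue molar CD Δε (M⁻¹·cm⁻¹).
# The sample values below are illustrative, not real data.

theta_obs_mdeg = 25.0     # observed ellipticity, millidegrees
conc_mg_per_ml = 0.20     # protein concentration, mg/mL
path_cm = 0.1             # cuvette path length, cm
n_residues = 120          # residues in the protein
mol_weight = 13000.0      # molecular weight, g/mol

mean_residue_weight = mol_weight / n_residues           # g/mol per residue

# Mean residue ellipticity: normalizes out concentration, path length and chain length.
mre = theta_obs_mdeg * mean_residue_weight / (10.0 * path_cm * conc_mg_per_ml)

# Per-residue molar CD, using [theta] = 3298.2 * Δε
delta_eps = mre / 3298.2

print(f"mean residue ellipticity ≈ {mre:,.0f} deg·cm²/dmol")
print(f"Δε ≈ {delta_eps:.2f} M⁻¹·cm⁻¹")
```

This normalization is what allows spectra of proteins with very different chain lengths and concentrations to be compared directly, as noted in the molar ellipticity discussion above.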
Experimental limitations CD has also been studied in carbohydrates, but with limited success due to the experimental difficulties associated with measurement of CD spectra in the vacuum ultraviolet (VUV) region of the spectrum (100–200 nm), where the corresponding CD bands of unsubstituted carbohydrates lie. Substituted carbohydrates with bands above the VUV region have been successfully measured. Measurement of CD is also complicated by the fact that typical aqueous buffer systems often absorb in the range where structural features exhibit differential absorption of circularly polarized light. Phosphate, sulfate, carbonate, and acetate buffers are generally incompatible with CD unless made extremely dilute e.g. in the 10–50 mM range. The TRIS buffer system should be completely avoided when performing far-UV CD. Borate and Onium compounds are often used to establish the appropriate pH range for CD experiments. Some experimenters have substituted fluoride for chloride ion because fluoride absorbs less in the far UV, and some have worked in pure water. Another, almost universal, technique is to minimize solvent absorption by using shorter path length cells when working in the far UV, 0.1 mm path lengths are not uncommon in this work. In addition to measuring in aqueous systems, CD, particularly far-UV CD, can be measured in organic solvents e.g. ethanol, methanol, trifluoroethanol (TFE). The latter has the advantage to induce structure formation of proteins, inducing beta-sheets in some and alpha helices in others, which they would not show under normal aqueous conditions. Most common organic solvents such as acetonitrile, THF, chloroform, dichloromethane are however, incompatible with far-UV CD. It may be of interest to note that the protein CD spectra used in secondary structure estimation are related to the π to π* orbital absorptions of the amide bonds linking the amino acids. These absorption bands lie partly in the so-called vacuum ultraviolet (wavelengths less than about 200 nm). The wavelength region of interest is actually inaccessible in air because of the strong absorption of light by oxygen at these wavelengths. In practice these spectra are measured not in vacuum but in an oxygen-free instrument (filled with pure nitrogen gas). Once oxygen has been eliminated, perhaps the second most important technical factor in working below 200 nm is to design the rest of the optical system to have low losses in this region. Critical in this regard is the use of aluminized mirrors whose coatings have been optimized for low loss in this region of the spectrum. The usual light source in these instruments is a high pressure, short-arc xenon lamp. Ordinary xenon arc lamps are unsuitable for use in the low UV. Instead, specially constructed lamps with envelopes made from high-purity synthetic fused silica must be used. Light from synchrotron sources has a much higher flux at short wavelengths, and has been used to record CD down to 160 nm. In 2010 the CD spectrophotometer at the electron storage ring facility ISA at the University of Aarhus in Denmark was used to record solid state CD spectra down to 120 nm. At the quantum mechanical level, the feature density of circular dichroism and optical rotation are identical. Optical rotary dispersion and circular dichroism share the same quantum information content. 
See also
Chirality-induced spin selectivity
Hyper Rayleigh scattering optical activity
Linear dichroism
Magnetic circular dichroism
Optical activity
Optical isomerism
Optical rotation
Optical rotatory dispersion
Protein Circular Dichroism Data Bank
Synchrotron radiation circular dichroism spectroscopy
Two-photon circular dichroism
Vibrational circular dichroism

External links
Circular Dichroism spectroscopy by Alliance Protein Laboratories, a commercial service provider
An Introduction to Circular Dichroism Spectroscopy by Applied Photophysics, an equipment supplier
An animated, step-by-step tutorial on Circular Dichroism and Optical Rotation by Prof Valev.
0.772061
0.993247
0.766847
Scale factor (cosmology)
The expansion of the universe is parametrized by a dimensionless scale factor $a(t)$. Also known as the cosmic scale factor or sometimes the Robertson–Walker scale factor, this is a key parameter of the Friedmann equations. In the early stages of the Big Bang, most of the energy was in the form of radiation, and that radiation was the dominant influence on the expansion of the universe. Later, with cooling from the expansion, the roles of matter and radiation changed and the universe entered a matter-dominated era. Recent results suggest that we have already entered an era dominated by dark energy, but examination of the roles of matter and radiation is most important for understanding the early universe. Using the dimensionless scale factor to characterize the expansion of the universe, the effective energy densities of radiation and matter scale differently. This leads to a radiation-dominated era in the very early universe but a transition to a matter-dominated era at a later time and, since about 4 billion years ago, a subsequent dark-energy-dominated era. Detail Some insight into the expansion can be obtained from a Newtonian expansion model which leads to a simplified version of the Friedmann equation. It relates the proper distance (which can change over time, unlike the comoving distance, which is constant and set to today's distance) between a pair of objects, e.g. two galaxy clusters, moving with the Hubble flow in an expanding or contracting FLRW universe at any arbitrary time $t$ to their distance at some reference time $t_0$. The formula for this is: $d(t) = a(t)\,d_0$, where $d(t)$ is the proper distance at epoch $t$, $d_0$ is the distance at the reference time $t_0$, usually also referred to as the comoving distance, and $a(t)$ is the scale factor. Thus, by definition, $d_0 = d(t_0)$ and $a(t_0) = 1$. The scale factor is dimensionless, with $t$ counted from the birth of the universe and $t_0$ set to the present age of the universe, about 13.8 billion years, giving the current value of the scale factor as $a(t_0)$ or $a_0 = 1$. The evolution of the scale factor is a dynamical question, determined by the equations of general relativity, which are presented in the case of a locally isotropic, locally homogeneous universe by the Friedmann equations. The Hubble parameter is defined as: $H(t) \equiv \dot{a}(t)/a(t)$, where the dot represents a time derivative. The Hubble parameter varies with time, not with space, with the Hubble constant $H_0$ being its current value. From the previous equation $d(t) = a(t)\,d_0$ one can see that $\dot{d}(t) = \dot{a}(t)\,d_0$, and also that $d_0 = d(t)/a(t)$, so combining these gives $\dot{d}(t) = \dot{a}(t)\,d(t)/a(t)$, and substituting the above definition of the Hubble parameter gives $\dot{d}(t) = H(t)\,d(t)$, which is just Hubble's law. Current evidence suggests that the expansion of the universe is accelerating, which means that the second derivative of the scale factor $\ddot{a}(t)$ is positive, or equivalently that the first derivative $\dot{a}(t)$ is increasing over time. This also implies that any given galaxy recedes from us with increasing speed over time, i.e. for that galaxy $\dot{d}(t)$ is increasing with time. In contrast, the Hubble parameter seems to be decreasing with time, meaning that if we were to look at some fixed distance d and watch a series of different galaxies pass that distance, later galaxies would pass that distance at a smaller velocity than earlier ones. According to the Friedmann–Lemaître–Robertson–Walker metric which is used to model the expanding universe, if at present time we receive light from a distant object with a redshift of z, then the scale factor at the time the object originally emitted that light is $a(t) = \frac{1}{1+z}$.
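The two relations just derived, Hubble's law $\dot{d}(t) = H(t)\,d(t)$ and the emission-time scale factor $a = 1/(1+z)$, are easy to evaluate numerically. The sketch below assumes a round value of the Hubble constant of about 70 km/s/Mpc purely for illustration.

```python
# Minimal numerical sketch of the relations above: Hubble's law v = H0 * d and
# the emission-time scale factor a = 1 / (1 + z). The H0 value is an assumed
# round number (~70 km/s/Mpc), not a precise measurement.

H0 = 70.0  # Hubble constant, km/s per Mpc (assumed)

def recession_velocity(proper_distance_mpc):
    """Hubble's law: recession velocity in km/s for a proper distance in Mpc."""
    return H0 * proper_distance_mpc

def scale_factor_at_emission(z):
    """Scale factor (relative to a0 = 1 today) when light of redshift z was emitted."""
    return 1.0 / (1.0 + z)

print(recession_velocity(100.0))       # ~7000 km/s at 100 Mpc
print(scale_factor_at_emission(1.0))   # universe was half its present linear scale
```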
Chronology Radiation-dominated era After Inflation, and until about 47,000 years after the Big Bang, the dynamics of the early universe were set by radiation (referring generally to the constituents of the universe which moved relativistically, principally photons and neutrinos). For a radiation-dominated universe the evolution of the scale factor in the Friedmann–Lemaître–Robertson–Walker metric is obtained by solving the Friedmann equations: $a(t) \propto t^{1/2}$. Matter-dominated era Between about 47,000 years and 9.8 billion years after the Big Bang, the energy density of matter exceeded both the energy density of radiation and the vacuum energy density. When the early universe was about 47,000 years old (redshift 3600), mass–energy density surpassed the radiation energy, although the universe remained optically thick to radiation until the universe was about 378,000 years old (redshift 1100). This second moment in time (close to the time of recombination), at which the photons which compose the cosmic microwave background radiation were last scattered, is often mistaken as marking the end of the radiation era. For a matter-dominated universe the evolution of the scale factor in the Friedmann–Lemaître–Robertson–Walker metric is easily obtained by solving the Friedmann equations: $a(t) \propto t^{2/3}$. Dark-energy-dominated era In physical cosmology, the dark-energy-dominated era is proposed as the last of the three phases of the known universe, the other two being the radiation-dominated era and the matter-dominated era. The dark-energy-dominated era began after the matter-dominated era, i.e. when the Universe was about 9.8 billion years old. In the era of cosmic inflation, the Hubble parameter is also thought to be constant, so the expansion law of the dark-energy-dominated era also holds for the inflationary prequel of the big bang. The cosmological constant is given the symbol Λ, and, considered as a source term in the Einstein field equation, can be viewed as equivalent to a "mass" of empty space, or dark energy. Since this increases with the volume of the universe, the expansion pressure is effectively constant, independent of the scale of the universe, while the other terms decrease with time. Thus, as the density of other forms of matter – dust and radiation – drops to very low concentrations, the cosmological constant (or "dark energy") term will eventually dominate the energy density of the Universe. Recent measurements of the change in Hubble constant with time, based on observations of distant supernovae, show this acceleration in expansion rate, indicating the presence of such dark energy. For a dark-energy-dominated universe, the evolution of the scale factor in the Friedmann–Lemaître–Robertson–Walker metric is easily obtained by solving the Friedmann equations: $a(t) \propto e^{Ht}$. Here, the coefficient $H$ in the exponential, the Hubble constant, is $H = \sqrt{\Lambda c^{2}/3}$. This exponential dependence on time makes the spacetime geometry identical to the de Sitter universe, and only holds for a positive sign of the cosmological constant, which is the case according to the currently accepted value of the cosmological constant, Λ, that is approximately $1.1 \times 10^{-52}\ \mathrm{m^{-2}}$. The current density of the observable universe is of the order of $9 \times 10^{-27}\ \mathrm{kg\,m^{-3}}$ and the age of the universe is of the order of 13.8 billion years, or about $4.4 \times 10^{17}\ \mathrm{s}$. The Hubble constant, $H_0$, is approximately $71\ \mathrm{km\,s^{-1}\,Mpc^{-1}}$ (the corresponding Hubble time is 13.79 billion years). See also Cosmological principle Lambda-CDM model Redshift Notes References External links Relation of the scale factor with the cosmological constant and the Hubble constant Physical cosmology
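The era-dependent growth laws above ($a \propto t^{1/2}$, $a \propto t^{2/3}$, $a \propto e^{Ht}$) can be compared with a short sketch. The normalisations and the unit value of $H$ are arbitrary choices for illustration, not a solution of the full Friedmann equations.

```python
# Toy illustration of the era-dependent growth laws quoted above:
# a ∝ t^(1/2) (radiation), a ∝ t^(2/3) (matter), a ∝ exp(H t) (dark energy).
# Normalisations are arbitrary; this is a qualitative sketch, not a solver
# for the full Friedmann equations.
import math

def scale_factor(t, era, H=1.0):
    if era == "radiation":
        return t ** 0.5
    if era == "matter":
        return t ** (2.0 / 3.0)
    if era == "dark_energy":
        return math.exp(H * t)
    raise ValueError("unknown era")

for era in ("radiation", "matter", "dark_energy"):
    # doubling the time: growth factor of the scale factor in each regime
    growth = scale_factor(2.0, era) / scale_factor(1.0, era)
    print(f"{era:12s} a(2t)/a(t) = {growth:.3f}")
```

The exponential regime is the only one in which the growth factor over a fixed time interval does not decline as the universe ages, which is what makes the dark-energy era qualitatively different from the two earlier eras.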
0.774255
0.990413
0.766832
VUCA
VUCA is an acronym based on the leadership theories of Warren Bennis and Burt Nanus, to describe or to reflect on the volatility, uncertainty, complexity and ambiguity of general conditions and situations. The U.S. Army War College introduced the concept of VUCA in 1987, to describe a more complex multilateral world perceived as resulting from the end of the Cold War. More frequent use and discussion of the term began from 2002. It has subsequently spread to strategic leadership in organizations, from for-profit corporations to education. Meaning The VUCA framework provides a lens through which organizations can interpret their challenges and opportunities. It emphasizes strategic foresight, insight, and the behavior of entities within organizations. Furthermore, it highlights both systemic and behavioral failures often associated with organizational missteps. V = Volatility: Characterizes the rapid and unpredictable nature of change. U = Uncertainty: Denotes the unpredictability of events and issues. C = Complexity: Describes the intertwined forces and issues, making cause-and-effect relationships unclear. A = Ambiguity: Points to the unclear realities and potential misunderstandings stemming from mixed messages. These elements articulate how organizations perceive their current and potential challenges. They establish the parameters for planning and policy-making. Interacting in various ways, they can either complicate decision-making or enhance the ability to strategize, plan, and progress. Essentially, VUCA lays the groundwork for effective management and leadership. The VUCA framework is a conceptual tool that underscores the conditions and challenges organizations face when making decisions, planning, managing risks, driving change, and solving problems. It primarily shapes an organization's ability to: Anticipate the key issues that emerge. Understand the repercussions of particular issues and actions. Appreciate how variables interrelate. Prepare for diverse scenarios and challenges. Interpret and tackle pertinent opportunities. VUCA serves as a guideline for fostering awareness and preparedness in various sectors, including business, the military, education, and government. It provides a roadmap for organizations to develop strategies for readiness, foresight, adaptation, and proactive intervention. Themes VUCA, as a system of thought, revolves around an idea expressed by Andrew Porteous: "Failure in itself may not be a catastrophe. Still, failure to learn from failure is." This perspective underlines the significance of resilience and adaptability in leadership. It suggests that beyond mere competencies, it is behavioural nuances, like the ability to learn from failures and adapt, that distinguish exceptional leaders from average ones. Leaders using VUCA as a guide often see change not just as inevitable but as something to anticipate. Within VUCA, several thematic areas of consideration emerge, providing a framework for introspection and evaluation: Knowledge management and sense-making: An exploration into how we organize and interpret information. Planning and readiness considerations: A reflection on our preparedness for unforeseen challenges. Process management and resource systems: A contemplation on our efficiency in resource utilization and system deployment. Functional responsiveness and impact models: Understanding our capacity to adapt to changes. Recovery systems and forward practices: An inquiry into our resilience and future-oriented strategies. 
Systemic failures: A philosophical dive into organizational vulnerabilities. Behavioural failures: Exploring the human tendencies that lead to mistakes. Within the VUCA system of thought, an organization's ability to navigate these challenges is closely tied to its foundational beliefs, values, and aspirations. Those enterprises that consider themselves prepared and resolved align their strategic approach with VUCA's principles, signaling a holistic awareness. The essence of VUCA philosophy also emphasizes the need for a deep-rooted understanding of one's environment, spanning technical, social, political, market, and economic realms. Psychometrics which measure fluid intelligence by tracking information processing when faced with unfamiliar, dynamic, and vague data can predict cognitive performance in VUCA environments. Social categorization Volatility Volatility is the V component of VUCA, which refers to the different situational social-categorizations of people due to specific traits or reactions that stand out in particular situations. When people act based on a specific situation, there is a possibility that the public categorizes them into a different group than they were in a previous situation. These people might respond differently to individual situations due to social or environmental cues. The idea that situational occurrences cause certain social categorization is known as volatility and is one of the main aspects of self-categorization theory. Sociologists use volatility to better understand the impacts of stereotypes and social categorization on the situation at hand and any external forces that may cause people to perceive others differently. Volatility is the changing dynamic of social categorization in environmental situations. The dynamic can change due to any shift in a situation, whether social, technical, biological, or anything else. Studies have been conducted, but finding the specific component that causes the change in situational social categorization has proven challenging. Two distinct components link individuals to their social identities. The first component is normative fit, which pertains to how a person aligns with the stereotypes and norms associated with their particular identity. For instance, when a Hispanic woman is cleaning the house, people often associate gender stereotypes with the situation, while her ethnicity is not a central concern. However, when this same woman eats an enchilada, ethnicity stereotypes come to the forefront, while her gender is not the focal point. The second social cue is comparative fit. This is when a specific characteristic or trait of a person is prominent in certain situations compared to others. For example, as mentioned by Bodenhausen and Peery, when there is one woman in a room full of men. She stands out, because she is the only one of her gender. However, all of the men are clumped together because they do not have any specific traits that stand out. Comparative fit shows that people categorize others based on the relative social context. In a particular situation, particular characteristics are made obvious because others around that individual do not possess that characteristic. However, in other cases, this characteristic may be the norm and would not be a key characteristic in the categorization process. People can be less critical of the same person in different scenarios. 
For example, when looking at an African American man on the street in a low-income neighborhood and the same man inside a school in a high-income neighborhood, people will be less judgmental when seeing him in school. Nothing else has changed about this man, other than his location. When individuals are spotted in certain social contexts, the basic-level categories are forgotten, and the more partial categories are brought to light. This helps to describe the problems of situational social-categorization. This also illustrates how stereotypes can shift the perspectives of those around an individual. Uncertainty Uncertainty in the VUCA framework occurs when the availability or predictability of information in events is unknown. Uncertainty often occurs in volatile environments consisting of complex unanticipated interactions. Uncertainty may occur with the intention to imply causation or correlation between the events of a social perceiver and a target. Situations where there is either a lack of information to prove why perception is in occurrence or informational availability but lack of causation, are where uncertainty is salient. The uncertainty component of the framework serves as a grey area and is compensated by the use of social categorization and/or stereotypes. Social categorization can be described as a collection of people that have no interaction but tend to share similar characteristics. People tend to engage in social categorization, especially when there is a lack of information surrounding the event. Literature suggests that default categories tend to be assumed in the absence of any clear data when referring to someone's gender or race in the essence of a discussion. Individuals often associate general references (e.g. people, they, them, a group) with the male gender, meaning people = male. This usually occurs when there is insufficient information to distinguish someone's gender clearly. For example, when discussing a written piece of information, most assume the author is male. If an author's name is unavailable (due to lack of information), it is difficult to determine the gender of the author through the context of whatever was written. People automatically label the author as male without having any prior basis of gender, thus placing the author in a social category. This social categorization happens in this example, but people will also assume someone is male if the gender is not known in many other situations as well. Social categorization occurs in the realm of not only gender, but also race. Default assumptions may be made, like in gender, to the race of an individual or a group based on prior known stereotypes. For example, race-occupation combinations such as basketball or golf players usually receive race assumptions. Without any information on the individual's race, people usually assume a basketball player is black, and a golf player is white. This is based upon stereotypes because each sport tends to be dominated by a single race. In reality, there are other races within each sport. Complexity Complexity is the C component of VUCA, which refers to the interconnectivity and interdependence of multiple parts in a system. When conducting research, complexity is a component that scholars have to keep in mind. The results of a deliberately controlled environment are unexpected because of the non-linear interaction and interdependencies within different groups and categories. 
In a sociological aspect, the VUCA framework is utilized in research to understand social perception in the real world and how that plays into social categorization and stereotypes. Galen V. Bodenhausen and Destiny Peery's article, Social Categorization and Stereotyping In vivo: The VUCA Challenge, focused on researching how social categories impacted the process of social cognition and perception. The strategy used to conduct the research is to manipulate or isolate a single identity of a target while keeping all other identities constant. This method clearly shows how a specific identity in a social category can change one's perception of other identities, thus creating stereotypes. There are problems with categorizing an individual's social identity due to the complexity of an individual's background. This research fails to address the complexity of the real world, and its results highlighted an even broader picture of social categorization and stereotyping. Complexity adds many layers of different components to an individual's identity and creates challenges for sociologists trying to examine social categories. In the real world, people are far more complex than in a modified social environment. Individuals identify with more than one social category, which opens the door to a more profound discovery about stereotyping. Results from research conducted by Bodenhausen reveal that specific identities are more dominant than others. Perceivers who recognize these distinct identities latch on to them, associate their preconceived notions with that identity, and make initial assumptions about the individual; hence stereotypes are created. Conversely, perceivers who share some identities with the target tend to be more open-minded. They consider multiple social identities simultaneously, a phenomenon known as cross-categorization effects. Some social categories are nested within larger categorical structures, making subcategories more salient to perceivers. Cross-categorization can trigger both positive and negative effects. On the positive side, perceivers become more open-minded and motivated to delve deeper into their understanding of the target, moving beyond dominant social categories. However, cross-categorization can also result in social invisibility, where some cross-over identities diminish the visibility of others, leading to "intersectional invisibility", in which neither social identity stands out distinctly and the person is overlooked. Ambiguity Ambiguity is the A component of VUCA. This refers to situations in which the general meaning of something is unclear even when an appropriate amount of information is provided. Ambiguity is often confused with uncertainty; the two ideas are similar but arise from different factors. Uncertainty is when relevant information is unavailable and unknown, whereas ambiguity is when relevant information is available but its overall meaning is still unclear. Both uncertainty and ambiguity exist in our culture today. Sociologists use ambiguity to determine how and why an answer has been developed. Sociologists focus on details such as whether there was enough information present, whether the subject had the full knowledge necessary to make a decision, and why the subject came to their specific answer. Ambiguity is considered one of the leading causes of conflict within organizations. Ambiguity often prompts individuals to make assumptions, including those related to race, gender, sexual orientation, and even class stereotypes.
When people possess some information but lack a complete answer, they tend to generate their own conclusions based on the available relevant information. For instance, as Bodenhausen notes, we may occasionally encounter individuals who possess a degree of androgyny, making it challenging to determine their gender. In such cases, brief exposure might lead to misclassifications based on gender-atypical features, such as very long hair on a man or very short hair on a woman. Ambiguity can result in premature categorizations, potentially leading to inaccurate conclusions due to the absence of crucial details. Sociologists suggest that ambiguity can fuel racial stereotypes and discrimination. In a South African study, white participants were shown images of racially mixed faces and asked to categorize them as European or African. Since all the participants were white, they struggled to classify these mixed-race faces as European and instead labeled them as African. This difficulty arose due to the ambiguity present in the images. The only information available to the participants was the subjects' skin tone and facial features. Despite having this information, the participants still couldn't confidently determine the ethnicity because the individuals didn't precisely resemble their own racial group. Responses and revisions Levent Işıklıgöz has suggested that the C of VUCA be changed from complexity to chaos, arguing that it is more suitable according to our era. Bill George, a professor of management practice at Harvard Business School, argues that VUCA calls for a leadership response which he calls VUCA 2.0: Vision, understanding, courage and adaptability. George's response seems a minor adaptation of Bob Johansen's VUCA prime: Vision, understanding, clarity and agility German academic Ali Aslan Gümüsay adds "paradox" to the acronym, calling it VUCA + paradox or VUCAP. See also Antifragile (disambiguation) Cynefin framework Fear, uncertainty, and doubt (FUD) Global Simplicity Index Goldilocks process Innovation butterfly Software bug References Business models
0.770081
0.995773
0.766826
Computational mechanics
Computational mechanics is the discipline concerned with the use of computational methods to study phenomena governed by the principles of mechanics. Before the emergence of computational science (also called scientific computing) as a "third way" besides theoretical and experimental sciences, computational mechanics was widely considered to be a sub-discipline of applied mechanics. It is now considered to be a sub-discipline within computational science. Overview Computational mechanics (CM) is interdisciplinary. Its three pillars are mechanics, mathematics, and computer science. Mechanics Computational fluid dynamics, computational thermodynamics, computational electromagnetics, computational solid mechanics are some of the many specializations within CM. Mathematics The areas of mathematics most related to computational mechanics are partial differential equations, linear algebra and numerical analysis. The most popular numerical methods used are the finite element, finite difference, and boundary element methods in order of dominance. In solid mechanics finite element methods are far more prevalent than finite difference methods, whereas in fluid mechanics, thermodynamics, and electromagnetism, finite difference methods are almost equally applicable. The boundary element technique is in general less popular, but has a niche in certain areas including acoustics engineering, for example. Computer Science With regard to computing, computer programming, algorithms, and parallel computing play a major role in CM. The most widely used programming language in the scientific community, including computational mechanics, is Fortran. Recently, C++ has increased in popularity. The scientific computing community has been slow in adopting C++ as the lingua franca. Because of its very natural way of expressing mathematical computations, and its built-in visualization capacities, the proprietary language/environment MATLAB is also widely used, especially for rapid application development and model verification. Process Scientists within the field of computational mechanics follow a list of tasks to analyze their target mechanical process: A mathematical model of the physical phenomenon is made. This usually involves expressing the natural or engineering system in terms of partial differential equations. This step uses physics to formalize a complex system. The mathematical equations are converted into forms which are suitable for digital computation. This step is called discretization because it involves creating an approximate discrete model from the original continuous model. In particular, it typically translates a partial differential equation (or a system thereof) into a system of algebraic equations. The processes involved in this step are studied in the field of numerical analysis. Computer programs are made to solve the discretized equations using direct methods (which are single step methods resulting in the solution) or iterative methods (which start with a trial solution and arrive at the actual solution by successive refinement). Depending on the nature of the problem, supercomputers or parallel computers may be used at this stage. The mathematical model, numerical procedures, and the computer codes are verified using either experimental results or simplified models for which exact analytical solutions are available. Quite frequently, new numerical or computational techniques are verified by comparing their result with those of existing well-established numerical methods. 
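As a toy illustration of this workflow (model, discretize, solve, verify), the following sketch applies a second-order finite difference discretization to a deliberately simple model problem and checks the result against its analytical solution. The specific PDE, grid size, and source term are assumptions chosen only for the example; production computational-mechanics codes handle far richer models.

```python
# Illustrative walk-through of the steps above for a simple model problem:
# the 1D Poisson equation -u''(x) = f(x) on (0, 1) with u(0) = u(1) = 0.
import numpy as np

n = 50                        # interior grid points (discretization step)
h = 1.0 / (n + 1)
x = np.linspace(h, 1.0 - h, n)

f = np.pi**2 * np.sin(np.pi * x)      # source chosen so the exact solution is known

# Tridiagonal system from the second-order central difference of -u''
A = (np.diag(2.0 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
     - np.diag(np.ones(n - 1), -1)) / h**2

u = np.linalg.solve(A, f)             # direct (single-step) solution

u_exact = np.sin(np.pi * x)           # analytical solution used for verification
print("max error:", np.max(np.abs(u - u_exact)))
```

The reported maximum error shrinks roughly with the square of the grid spacing, which is the kind of convergence check used when verifying a code against an exact or benchmark solution.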
In many cases, benchmark problems are also available. The numerical results also have to be visualized and often physical interpretations will be given to the results. Applications Some examples where computational mechanics have been put to practical use are vehicle crash simulation, petroleum reservoir modeling, biomechanics, glass manufacturing, and semiconductor modeling. Complex systems that would be very difficult or impossible to treat using analytical methods have been successfully simulated using the tools provided by computational mechanics. See also Scientific computing Dynamical systems theory Movable cellular automaton References External links United States Association for Computational Mechanics Santa Fe Institute Comp Mech Publications Computational science Mechanics Computational fields of study Computational physics
0.789218
0.971599
0.766803
Neutron radiation
Neutron radiation is a form of ionizing radiation that presents as free neutrons. Typical phenomena are nuclear fission or nuclear fusion causing the release of free neutrons, which then react with nuclei of other atoms to form new nuclides—which, in turn, may trigger further neutron radiation. Free neutrons are unstable, decaying into a proton, an electron, plus an electron antineutrino. Free neutrons have a mean lifetime of 887 seconds (14 minutes, 47 seconds). Neutron radiation is distinct from alpha, beta and gamma radiation. Sources Neutrons may be emitted from nuclear fusion or nuclear fission, or from other nuclear reactions such as radioactive decay or particle interactions with cosmic rays or within particle accelerators. Large neutron sources are rare, and usually limited to large-sized devices such as nuclear reactors or particle accelerators, including the Spallation Neutron Source. Neutron radiation was discovered from observing an alpha particle colliding with a beryllium nucleus, which was transformed into a carbon nucleus while emitting a neutron, Be(α, n)C. The combination of an alpha particle emitter and an isotope with a large (α, n) nuclear reaction probability is still a common neutron source. Neutron radiation from fission The neutrons in nuclear reactors are generally categorized as slow (thermal) neutrons or fast neutrons depending on their energy. Thermal neutrons are similar in energy distribution (the Maxwell–Boltzmann distribution) to a gas in thermodynamic equilibrium; but are easily captured by atomic nuclei and are the primary means by which elements undergo nuclear transmutation. To achieve an effective fission chain reaction, neutrons produced during fission must be captured by fissionable nuclei, which then split, releasing more neutrons. In most fission reactor designs, the nuclear fuel is not sufficiently refined to absorb enough fast neutrons to carry on the chain reaction, due to the lower cross section for higher-energy neutrons, so a neutron moderator must be introduced to slow the fast neutrons down to thermal velocities to permit sufficient absorption. Common neutron moderators include graphite, ordinary (light) water and heavy water. A few reactors (fast neutron reactors) and all nuclear weapons rely on fast neutrons. Cosmogenic neutrons Cosmogenic neutrons are produced from cosmic radiation in the Earth's atmosphere or surface, as well as in particle accelerators. They often possess higher energy levels compared to neutrons found in reactors. Many of these neutrons activate atomic nuclei before reaching the Earth's surface, while a smaller fraction interact with nuclei in the atmospheric air. When these neutrons interact with nitrogen-14 atoms, they can transform them into carbon-14 (14C), which is extensively utilized in radiocarbon dating. Uses Cold, thermal and hot neutron radiation is most commonly used in scattering and diffraction experiments, to assess the properties and the structure of materials in crystallography, condensed matter physics, biology, solid state chemistry, materials science, geology, mineralogy, and related sciences. Neutron radiation is also used in Boron Neutron Capture Therapy to treat cancerous tumors due to its highly penetrating and damaging nature to cellular structure. Neutrons can also be used for imaging of industrial parts termed neutron radiography when using film, neutron radioscopy when taking a digital image, such as through image plates, and neutron tomography for three-dimensional images. 
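As a small numerical aside, two figures quoted above, the roughly 887 s mean lifetime of the free neutron and the thermal (Maxwell–Boltzmann-like) energy scale of moderated neutrons, can be turned into quick estimates. The sketch below uses standard physical constants; the flight time and moderator temperature are arbitrary illustrative choices.

```python
# Small numerical sketch tied to figures mentioned above: the free-neutron
# mean lifetime (~887 s) and the "thermal" energy scale of moderated neutrons.
import math

MEAN_LIFETIME_S = 887.0          # free neutron mean lifetime, s
K_B = 1.380649e-23               # Boltzmann constant, J/K
M_N = 1.674927e-27               # neutron mass, kg
EV = 1.602177e-19                # J per eV

def surviving_fraction(t_seconds):
    """Fraction of free neutrons not yet decayed after t seconds."""
    return math.exp(-t_seconds / MEAN_LIFETIME_S)

T = 293.0                        # assumed moderator temperature, K
E_thermal = K_B * T              # characteristic thermal energy ~ kT
v_thermal = math.sqrt(2.0 * E_thermal / M_N)

print(f"surviving fraction after 60 s: {surviving_fraction(60.0):.3f}")
print(f"thermal energy ~ {E_thermal / EV * 1000:.1f} meV, speed ~ {v_thermal:.0f} m/s")
```

The result, about 25 meV and roughly 2,200 m/s, is the conventional reference point for "thermal" neutrons.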
Neutron imaging is commonly used in the nuclear industry, the space and aerospace industry, as well as the high reliability explosives industry. Ionization mechanisms and properties Neutron radiation is often called indirectly ionizing radiation. It does not ionize atoms in the same way that charged particles such as protons and electrons do (exciting an electron), because neutrons have no charge. However, neutron interactions are largely ionizing, for example when neutron absorption results in gamma emission and the gamma ray (photon) subsequently removes an electron from an atom, or a nucleus recoiling from a neutron interaction is ionized and causes more traditional subsequent ionization in other atoms. Because neutrons are uncharged, they are more penetrating than alpha radiation or beta radiation. In some cases they are more penetrating than gamma radiation, which is impeded in materials of high atomic number. In materials of low atomic number such as hydrogen, a low energy gamma ray may be more penetrating than a high energy neutron. Health hazards and protection In health physics, neutron radiation is a type of radiation hazard. Another, more severe hazard of neutron radiation, is neutron activation, the ability of neutron radiation to induce radioactivity in most substances it encounters, including bodily tissues. This occurs through the capture of neutrons by atomic nuclei, which are transformed to another nuclide, frequently a radionuclide. This process accounts for much of the radioactive material released by the detonation of a nuclear weapon. It is also a problem in nuclear fission and nuclear fusion installations as it gradually renders the equipment radioactive such that eventually it must be replaced and disposed of as low-level radioactive waste. Neutron radiation protection relies on radiation shielding. Due to the high kinetic energy of neutrons, this radiation is considered the most severe and dangerous radiation to the whole body when it is exposed to external radiation sources. In comparison to conventional ionizing radiation based on photons or charged particles, neutrons are repeatedly bounced and slowed (absorbed) by light nuclei so hydrogen-rich material is more effective at shielding than iron nuclei. The light atoms serve to slow down the neutrons by elastic scattering so they can then be absorbed by nuclear reactions. However, gamma radiation is often produced in such reactions, so additional shielding must be provided to absorb it. Care must be taken to avoid using materials whose nuclei undergo fission or neutron capture that causes radioactive decay of nuclei, producing gamma rays. Neutrons readily pass through most material, and hence the absorbed dose (measured in grays) from a given amount of radiation is low, but interact enough to cause biological damage. The most effective shielding materials are water, or hydrocarbons like polyethylene or paraffin wax. Water-extended polyester (WEP) is effective as a shielding wall in harsh environments due to its high hydrogen content and resistance to fire, allowing it to be used in a range of nuclear, health physics, and defense industries. Hydrogen-based materials are suitable for shielding as they are proper barriers against radiation. Concrete (where a considerable number of water molecules chemically bind to the cement) and gravel provide a cheap solution due to their combined shielding of both gamma rays and neutrons. Boron is also an excellent neutron absorber (and also undergoes some neutron scattering). 
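To give a feel for how shield thickness is sized in practice, the sketch below applies the simple exponential (removal-style) attenuation relation to a few hydrogen-rich materials. The macroscopic coefficients are placeholder values assumed only to illustrate the calculation, not tabulated data, and the model ignores the secondary gamma production discussed above.

```python
# Rough exponential-attenuation sketch for comparing shield thicknesses.
# I(x) = I0 * exp(-Sigma * x) is a standard first approximation; the macroscopic
# coefficients below are placeholders chosen for illustration, not tabulated data.
import math

SIGMA_REMOVAL_PER_CM = {    # assumed fast-neutron removal coefficients, 1/cm
    "water": 0.10,
    "polyethylene": 0.12,
    "concrete": 0.09,
}

def transmitted_fraction(material, thickness_cm):
    return math.exp(-SIGMA_REMOVAL_PER_CM[material] * thickness_cm)

def thickness_for_attenuation(material, factor):
    """Thickness needed to reduce the fast flux by the given factor (e.g. 1e3)."""
    return math.log(factor) / SIGMA_REMOVAL_PER_CM[material]

for m in SIGMA_REMOVAL_PER_CM:
    print(f"{m:12s} 30 cm transmits {transmitted_fraction(m, 30):.3f}, "
          f"x1000 reduction needs ~{thickness_for_attenuation(m, 1e3):.0f} cm")
```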
Boron decays into carbon or helium and produces virtually no gamma radiation with boron carbide, a shield commonly used where concrete would be cost prohibitive. Commercially, tanks of water or fuel oil, concrete, gravel, and B4C are common shields that surround areas of large amounts of neutron flux, e.g., nuclear reactors. Boron-impregnated silica glass, standard borosilicate glass, high-boron steel, paraffin, and Plexiglas have niche uses. Because neutrons that strike the hydrogen nucleus (proton, or deuteron) impart energy to that nucleus, they in turn break from their chemical bonds and travel a short distance before stopping. Such hydrogen nuclei are high linear energy transfer particles, and are in turn stopped by ionization of the material they travel through. Consequently, in living tissue, neutrons have a relatively high relative biological effectiveness, and are roughly ten times more effective at causing biological damage compared to gamma or beta radiation of equivalent energy exposure. These neutrons can either cause cells to change in their functionality or to completely stop replicating, causing damage to the body over time. Neutrons are particularly damaging to soft tissues like the cornea of the eye. Effects on materials High-energy neutrons damage and degrade materials over time; bombardment of materials with neutrons creates collision cascades that can produce point defects and dislocations in the material, the creation of which is the primary driver behind microstructural changes occurring over time in materials exposed to radiation. At high neutron fluences this can lead to embrittlement of metals and other materials, and to neutron-induced swelling in some of them. This poses a problem for nuclear reactor vessels and significantly limits their lifetime (which can be somewhat prolonged by controlled annealing of the vessel, reducing the number of the built-up dislocations). Graphite neutron moderator blocks are especially susceptible to this effect, known as Wigner effect, and must be annealed periodically. The Windscale fire was caused by a mishap during such an annealing operation. Radiation damage to materials occurs as a result of the interaction of an energetic incident particle (a neutron, or otherwise) with a lattice atom in the material. The collision causes a massive transfer of kinetic energy to the lattice atom, which is displaced from its lattice site, becoming what is known as the primary knock-on atom (PKA). Because the PKA is surrounded by other lattice atoms, its displacement and passage through the lattice results in many subsequent collisions and the creations of additional knock-on atoms, producing what is known as the collision cascade or displacement cascade. The knock-on atoms lose energy with each collision, and terminate as interstitials, effectively creating a series of Frenkel defects in the lattice. Heat is also created as a result of the collisions (from electronic energy loss), as are possibly transmuted atoms. The magnitude of the damage is such that a single 1 MeV neutron creating a PKA in an iron lattice produces approximately 1,100 Frenkel pairs. The entire cascade event occurs over a timescale of 1 × 10−13 seconds, and therefore, can only be "observed" in computer simulations of the event. The knock-on atoms terminate in non-equilibrium interstitial lattice positions, many of which annihilate themselves by diffusing back into neighboring vacant lattice sites and restore the ordered lattice. 
Those that do not or cannot return leave behind vacancies, which causes a local rise in the vacancy concentration far above the equilibrium concentration. These vacancies tend to migrate as a result of thermal diffusion towards vacancy sinks (i.e., grain boundaries, dislocations) but exist for significant amounts of time, during which additional high-energy particles bombard the lattice, creating collision cascades and additional vacancies, which migrate towards sinks. The main effect of irradiation in a lattice is the significant and persistent flux of defects to sinks in what is known as the defect wind. Vacancies can also annihilate by combining with one another to form dislocation loops and, later, lattice voids. The collision cascade creates many more vacancies and interstitials in the material than exist at equilibrium for a given temperature, and diffusivity in the material is dramatically increased as a result. This leads to an effect called radiation-enhanced diffusion, which leads to microstructural evolution of the material over time. The mechanisms leading to the evolution of the microstructure are many, may vary with temperature, flux, and fluence, and are a subject of extensive study. Radiation-induced segregation results from the aforementioned flux of vacancies to sinks, implying a flux of lattice atoms away from sinks, but not necessarily in proportion to the alloy composition in the case of an alloyed material. These fluxes may therefore lead to depletion of alloying elements in the vicinity of sinks. For the flux of interstitials introduced by the cascade, the effect is reversed: the interstitials diffuse toward sinks, resulting in alloy enrichment near the sink. Dislocation loops are formed if vacancies form clusters on a lattice plane. If these vacancy clusters expand in three dimensions, a void forms. By definition, voids are under vacuum, but they may become gas-filled in the case of alpha-particle radiation (helium) or if the gas is produced as a result of transmutation reactions. The void is then called a bubble, and leads to dimensional instability (neutron-induced swelling) of parts subject to radiation. Swelling presents a major long-term design problem, especially in reactor components made out of stainless steel. Alloys with crystallographic anisotropy, such as Zircaloys, are subject to the creation of dislocation loops, but do not exhibit void formation. Instead, the loops form on particular lattice planes, and can lead to irradiation-induced growth, a phenomenon distinct from swelling, but that can also produce significant dimensional changes in an alloy. Irradiation of materials can also induce phase transformations in the material: in the case of a solid solution, the solute enrichment or depletion at sinks (radiation-induced segregation) can lead to the precipitation of new phases in the material. The mechanical effects of these mechanisms include irradiation hardening, embrittlement, creep, and environmentally-assisted cracking. The defect clusters, dislocation loops, voids, bubbles, and precipitates produced as a result of radiation in a material all contribute to the strengthening and embrittlement (loss of ductility) in the material. Embrittlement is of particular concern for the material comprising the reactor pressure vessel, where as a result the energy required to fracture the vessel decreases significantly. It is possible to restore ductility by annealing the defects out, and much of the life-extension of nuclear reactors depends on the ability to safely do so.
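For rough estimates of displacement damage, a standard textbook relation (the Kinchin–Pease / NRT model) is often used. This is not the cascade-simulation figure of roughly 1,100 Frenkel pairs quoted earlier, and the displacement threshold energy below is an assumed typical value for iron, but the sketch shows how a primary knock-on atom's energy translates into an approximate defect count.

```python
# Order-of-magnitude sketch of displacement production using the standard
# Kinchin-Pease / NRT estimate, N_d = 0.8 * T_dam / (2 * E_d). A textbook model,
# not the cascade-simulation result quoted in the text; E_d is an assumed value.

E_D_EV = 40.0                       # assumed displacement threshold energy for Fe, eV

def max_energy_transfer(neutron_energy_ev, mass_number):
    """Maximum kinetic energy an elastically scattered neutron can give a PKA."""
    A = mass_number
    return 4.0 * A / (1.0 + A) ** 2 * neutron_energy_ev

def nrt_displacements(damage_energy_ev, e_d_ev=E_D_EV):
    if damage_energy_ev < e_d_ev:
        return 0.0
    return 0.8 * damage_energy_ev / (2.0 * e_d_ev)

E_n = 1.0e6                          # 1 MeV neutron
T_max = max_energy_transfer(E_n, 56) # iron PKA, upper bound ~69 keV
print(f"max PKA energy: {T_max/1e3:.0f} keV")
print(f"NRT displacements for that PKA: ~{nrt_displacements(T_max):.0f}")
```

Such analytic estimates give order-of-magnitude defect counts only; detailed cascade simulations, like the figure cited above, account for recombination within the cascade and electronic energy losses.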
Creep is also greatly accelerated in irradiated materials, though not as a result of the enhanced diffusivities, but rather as a result of the interaction between lattice stress and the developing microstructure. Environmentally-assisted cracking or, more specifically, irradiation-assisted stress corrosion cracking (IASCC) is observed especially in alloys subject to neutron radiation and in contact with water, caused by hydrogen absorption at crack tips resulting from radiolysis of the water, leading to a reduction in the required energy to propagate the crack. See also Neutron emission Neutron flux Neutron radiography References https://journals.aps.org/prl/abstract/10.1103/PhysRevLett.111.222501 External links EPA definitions of various terms Comparison of Neutron Radiographic and X-Radiographic Images Neutron techniques A unique tool for research and development IARC Group 1 carcinogens Ionizing radiation Radiation Neutron-related techniques
0.772644
0.9924
0.766772
Metabolism
Metabolism (, from metabolē, "change") is the set of life-sustaining chemical reactions in organisms. The three main functions of metabolism are: the conversion of the energy in food to energy available to run cellular processes; the conversion of food to building blocks of proteins, lipids, nucleic acids, and some carbohydrates; and the elimination of metabolic wastes. These enzyme-catalyzed reactions allow organisms to grow and reproduce, maintain their structures, and respond to their environments. The word metabolism can also refer to the sum of all chemical reactions that occur in living organisms, including digestion and the transportation of substances into and between different cells, in which case the above described set of reactions within the cells is called intermediary (or intermediate) metabolism. Metabolic reactions may be categorized as catabolic—the breaking down of compounds (for example, of glucose to pyruvate by cellular respiration); or anabolic—the building up (synthesis) of compounds (such as proteins, carbohydrates, lipids, and nucleic acids). Usually, catabolism releases energy, and anabolism consumes energy. The chemical reactions of metabolism are organized into metabolic pathways, in which one chemical is transformed through a series of steps into another chemical, each step being facilitated by a specific enzyme. Enzymes are crucial to metabolism because they allow organisms to drive desirable reactions that require energy and will not occur by themselves, by coupling them to spontaneous reactions that release energy. Enzymes act as catalysts—they allow a reaction to proceed more rapidly—and they also allow the regulation of the rate of a metabolic reaction, for example in response to changes in the cell's environment or to signals from other cells. The metabolic system of a particular organism determines which substances it will find nutritious and which poisonous. For example, some prokaryotes use hydrogen sulfide as a nutrient, yet this gas is poisonous to animals. The basal metabolic rate of an organism is the measure of the amount of energy consumed by all of these chemical reactions. A striking feature of metabolism is the similarity of the basic metabolic pathways among vastly different species. For example, the set of carboxylic acids that are best known as the intermediates in the citric acid cycle are present in all known organisms, being found in species as diverse as the unicellular bacterium Escherichia coli and huge multicellular organisms like elephants. These similarities in metabolic pathways are likely due to their early appearance in evolutionary history, and their retention is likely due to their efficacy. In various diseases, such as type II diabetes, metabolic syndrome, and cancer, normal metabolism is disrupted. The metabolism of cancer cells is also different from the metabolism of normal cells, and these differences can be used to find targets for therapeutic intervention in cancer. Key biochemicals Most of the structures that make up animals, plants and microbes are made from four basic classes of molecules: amino acids, carbohydrates, nucleic acid and lipids (often called fats). As these molecules are vital for life, metabolic reactions either focus on making these molecules during the construction of cells and tissues, or on breaking them down and using them to obtain energy, by their digestion. These biochemicals can be joined to make polymers such as DNA and proteins, essential macromolecules of life. 
Amino acids and proteins Proteins are made of amino acids arranged in a linear chain joined by peptide bonds. Many proteins are enzymes that catalyze the chemical reactions in metabolism. Other proteins have structural or mechanical functions, such as those that form the cytoskeleton, a system of scaffolding that maintains the cell shape. Proteins are also important in cell signaling, immune responses, cell adhesion, active transport across membranes, and the cell cycle. Amino acids also contribute to cellular energy metabolism by providing a carbon source for entry into the citric acid cycle (tricarboxylic acid cycle), especially when a primary source of energy, such as glucose, is scarce, or when cells undergo metabolic stress. Lipids Lipids are the most diverse group of biochemicals. Their main structural uses are as part of internal and external biological membranes, such as the cell membrane. Their chemical energy can also be used. Lipids contain a long, non-polar hydrocarbon chain with a small polar region containing oxygen. Lipids are usually defined as hydrophobic or amphipathic biological molecules but will dissolve in organic solvents such as ethanol, benzene or chloroform. The fats are a large group of compounds that contain fatty acids and glycerol; a glycerol molecule attached to three fatty acids by ester linkages is called a triacylglyceride. Several variations of the basic structure exist, including backbones such as sphingosine in sphingomyelin, and hydrophilic groups such as phosphate in phospholipids. Steroids such as sterol are another major class of lipids. Carbohydrates Carbohydrates are aldehydes or ketones, with many hydroxyl groups attached, that can exist as straight chains or rings. Carbohydrates are the most abundant biological molecules, and fill numerous roles, such as the storage and transport of energy (starch, glycogen) and structural components (cellulose in plants, chitin in animals). The basic carbohydrate units are called monosaccharides and include galactose, fructose, and most importantly glucose. Monosaccharides can be linked together to form polysaccharides in almost limitless ways. Nucleotides The two nucleic acids, DNA and RNA, are polymers of nucleotides. Each nucleotide is composed of a phosphate attached to a ribose or deoxyribose sugar group which is attached to a nitrogenous base. Nucleic acids are critical for the storage and use of genetic information, and its interpretation through the processes of transcription and protein biosynthesis. This information is protected by DNA repair mechanisms and propagated through DNA replication. Many viruses have an RNA genome, such as HIV, which uses reverse transcription to create a DNA template from its viral RNA genome. RNA in ribozymes such as spliceosomes and ribosomes is similar to enzymes as it can catalyze chemical reactions. Individual nucleosides are made by attaching a nucleobase to a ribose sugar. These bases are heterocyclic rings containing nitrogen, classified as purines or pyrimidines. Nucleotides also act as coenzymes in metabolic-group-transfer reactions. Coenzymes Metabolism involves a vast array of chemical reactions, but most fall under a few basic types of reactions that involve the transfer of functional groups of atoms and their bonds within molecules. This common chemistry allows cells to use a small set of metabolic intermediates to carry chemical groups between different reactions. These group-transfer intermediates are called coenzymes. 
Each class of group-transfer reactions is carried out by a particular coenzyme, which is the substrate for a set of enzymes that produce it, and a set of enzymes that consume it. These coenzymes are therefore continuously made, consumed and then recycled. One central coenzyme is adenosine triphosphate (ATP), the energy currency of cells. This nucleotide is used to transfer chemical energy between different chemical reactions. There is only a small amount of ATP in cells, but as it is continuously regenerated, the human body can use about its own weight in ATP per day. ATP acts as a bridge between catabolism and anabolism. Catabolism breaks down molecules, and anabolism puts them together. Catabolic reactions generate ATP, and anabolic reactions consume it. It also serves as a carrier of phosphate groups in phosphorylation reactions. A vitamin is an organic compound needed in small quantities that cannot be made in cells. In human nutrition, most vitamins function as coenzymes after modification; for example, all water-soluble vitamins are phosphorylated or are coupled to nucleotides when they are used in cells. Nicotinamide adenine dinucleotide (NAD+), a derivative of vitamin B3 (niacin), is an important coenzyme that acts as a hydrogen acceptor. Hundreds of separate types of dehydrogenases remove electrons from their substrates and reduce NAD+ into NADH. This reduced form of the coenzyme is then a substrate for any of the reductases in the cell that need to transfer hydrogen atoms to their substrates. Nicotinamide adenine dinucleotide exists in two related forms in the cell, NADH and NADPH. The NAD+/NADH form is more important in catabolic reactions, while NADP+/NADPH is used in anabolic reactions. Mineral and cofactors Inorganic elements play critical roles in metabolism; some are abundant (e.g. sodium and potassium) while others function at minute concentrations. About 99% of a human's body weight is made up of the elements carbon, nitrogen, calcium, sodium, chlorine, potassium, hydrogen, phosphorus, oxygen and sulfur. Organic compounds (proteins, lipids and carbohydrates) contain the majority of the carbon and nitrogen; most of the oxygen and hydrogen is present as water. The abundant inorganic elements act as electrolytes. The most important ions are sodium, potassium, calcium, magnesium, chloride, phosphate and the organic ion bicarbonate. The maintenance of precise ion gradients across cell membranes maintains osmotic pressure and pH. Ions are also critical for nerve and muscle function, as action potentials in these tissues are produced by the exchange of electrolytes between the extracellular fluid and the cell's fluid, the cytosol. Electrolytes enter and leave cells through proteins in the cell membrane called ion channels. For example, muscle contraction depends upon the movement of calcium, sodium and potassium through ion channels in the cell membrane and T-tubules. Transition metals are usually present as trace elements in organisms, with zinc and iron being most abundant of those. Metal cofactors are bound tightly to specific sites in proteins; although enzyme cofactors can be modified during catalysis, they always return to their original state by the end of the reaction catalyzed. Metal micronutrients are taken up into organisms by specific transporters and bind to storage proteins such as ferritin or metallothionein when not in use. Catabolism Catabolism is the set of metabolic processes that break down large molecules. 
These include breaking down and oxidizing food molecules. The purpose of the catabolic reactions is to provide the energy and components needed by anabolic reactions which build molecules. The exact nature of these catabolic reactions differ from organism to organism, and organisms can be classified based on their sources of energy, hydrogen, and carbon (their primary nutritional groups), as shown in the table below. Organic molecules are used as a source of hydrogen atoms or electrons by organotrophs, while lithotrophs use inorganic substrates. Whereas phototrophs convert sunlight to chemical energy, chemotrophs depend on redox reactions that involve the transfer of electrons from reduced donor molecules such as organic molecules, hydrogen, hydrogen sulfide or ferrous ions to oxygen, nitrate or sulfate. In animals, these reactions involve complex organic molecules that are broken down to simpler molecules, such as carbon dioxide and water. Photosynthetic organisms, such as plants and cyanobacteria, use similar electron-transfer reactions to store energy absorbed from sunlight. The most common set of catabolic reactions in animals can be separated into three main stages. In the first stage, large organic molecules, such as proteins, polysaccharides or lipids, are digested into their smaller components outside cells. Next, these smaller molecules are taken up by cells and converted to smaller molecules, usually acetyl coenzyme A (acetyl-CoA), which releases some energy. Finally, the acetyl group on acetyl-CoA is oxidized to water and carbon dioxide in the citric acid cycle and electron transport chain, releasing more energy while reducing the coenzyme nicotinamide adenine dinucleotide (NAD+) into NADH. Digestion Macromolecules cannot be directly processed by cells. Macromolecules must be broken into smaller units before they can be used in cell metabolism. Different classes of enzymes are used to digest these polymers. These digestive enzymes include proteases that digest proteins into amino acids, as well as glycoside hydrolases that digest polysaccharides into simple sugars known as monosaccharides. Microbes simply secrete digestive enzymes into their surroundings, while animals only secrete these enzymes from specialized cells in their guts, including the stomach and pancreas, and in salivary glands. The amino acids or sugars released by these extracellular enzymes are then pumped into cells by active transport proteins. Energy from organic compounds Carbohydrate catabolism is the breakdown of carbohydrates into smaller units. Carbohydrates are usually taken into cells after they have been digested into monosaccharides such as glucose and fructose. Once inside, the major route of breakdown is glycolysis, in which glucose is converted into pyruvate. This process generates the energy-conveying molecule NADH from NAD+, and generates ATP from ADP for use in powering many processes within the cell. Pyruvate is an intermediate in several metabolic pathways, but the majority is converted to acetyl-CoA and fed into the citric acid cycle, which enables more ATP production by means of oxidative phosphorylation. This oxidation consumes molecular oxygen and releases water and the waste product carbon dioxide. 
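As a rough illustration of how the catabolic stages above contribute to the cell's energy budget, the sketch below tallies commonly cited approximate ATP yields per molecule of glucose. The exact numbers depend on shuttle systems and proton stoichiometry, so they are illustrative assumptions rather than fixed constants.

```python
# Hedged bookkeeping sketch of approximate ATP yield per glucose for the
# catabolic stages described above. Values are commonly cited round figures;
# real yields vary with shuttle systems and proton stoichiometry.

ATP_YIELD_PER_GLUCOSE = {
    "glycolysis (net, substrate-level)": 2,
    "citric acid cycle (GTP/ATP)": 2,
    "oxidative phosphorylation (approx.)": 28,
}

total = sum(ATP_YIELD_PER_GLUCOSE.values())
for stage, atp in ATP_YIELD_PER_GLUCOSE.items():
    print(f"{stage:40s} ~{atp} ATP")
print(f"{'total (approximate)':40s} ~{total} ATP")
```

The tally makes the key point of the text concrete: most of the energy captured from glucose comes not from glycolysis itself but from the subsequent oxidation of acetyl-CoA and oxidative phosphorylation.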
When oxygen is lacking, or when pyruvate is temporarily produced faster than it can be consumed by the citric acid cycle (as in intense muscular exertion), pyruvate is converted to lactate by the enzyme lactate dehydrogenase, a process that also oxidizes NADH back to NAD+ for re-use in further glycolysis, allowing energy production to continue. The lactate is later converted back to pyruvate for ATP production where energy is needed, or back to glucose in the Cori cycle. An alternative route for glucose breakdown is the pentose phosphate pathway, which produces less energy but supports anabolism (biomolecule synthesis). This pathway reduces the coenzyme NADP+ to NADPH and produces pentose compounds such as ribose 5-phosphate for synthesis of many biomolecules such as nucleotides and aromatic amino acids. Fats are catabolized by hydrolysis to free fatty acids and glycerol. The glycerol enters glycolysis and the fatty acids are broken down by beta oxidation to release acetyl-CoA, which then is fed into the citric acid cycle. Fatty acids release more energy upon oxidation than carbohydrates. Steroids are also broken down by some bacteria in a process similar to beta oxidation, and this breakdown process involves the release of significant amounts of acetyl-CoA, propionyl-CoA, and pyruvate, which can all be used by the cell for energy. M. tuberculosis can also grow on the lipid cholesterol as a sole source of carbon, and genes involved in the cholesterol-use pathway(s) have been validated as important during various stages of the infection lifecycle of M. tuberculosis. Amino acids are either used to synthesize proteins and other biomolecules, or oxidized to urea and carbon dioxide to produce energy. The oxidation pathway starts with the removal of the amino group by a transaminase. The amino group is fed into the urea cycle, leaving a deaminated carbon skeleton in the form of a keto acid. Several of these keto acids are intermediates in the citric acid cycle, for example α-ketoglutarate formed by deamination of glutamate. The glucogenic amino acids can also be converted into glucose, through gluconeogenesis. Energy transformations Oxidative phosphorylation In oxidative phosphorylation, the electrons removed from organic molecules in areas such as the citric acid cycle are transferred to oxygen and the energy released is used to make ATP. This is done in eukaryotes by a series of proteins in the membranes of mitochondria called the electron transport chain. In prokaryotes, these proteins are found in the cell's inner membrane. These proteins use the energy from reduced molecules like NADH to pump protons across a membrane. Pumping protons out of the mitochondria creates a proton concentration difference across the membrane and generates an electrochemical gradient. This force drives protons back into the mitochondrion through the base of an enzyme called ATP synthase. The flow of protons makes the stalk subunit rotate, causing the active site of the synthase domain to change shape and phosphorylate adenosine diphosphate—turning it into ATP. Energy from inorganic compounds Chemolithotrophy is a type of metabolism found in prokaryotes where energy is obtained from the oxidation of inorganic compounds. These organisms can use hydrogen, reduced sulfur compounds (such as sulfide, hydrogen sulfide and thiosulfate), ferrous iron (Fe(II)) or ammonia as sources of reducing power and they gain energy from the oxidation of these compounds. 
These microbial processes are important in global biogeochemical cycles such as acetogenesis, nitrification and denitrification and are critical for soil fertility. Energy from light The energy in sunlight is captured by plants, cyanobacteria, purple bacteria, green sulfur bacteria and some protists. This process is often coupled to the conversion of carbon dioxide into organic compounds, as part of photosynthesis, which is discussed below. The energy capture and carbon fixation systems can, however, operate separately in prokaryotes, as purple bacteria and green sulfur bacteria can use sunlight as a source of energy, while switching between carbon fixation and the fermentation of organic compounds. In many organisms, the capture of solar energy is similar in principle to oxidative phosphorylation, as it involves the storage of energy as a proton concentration gradient. This proton motive force then drives ATP synthesis. The electrons needed to drive this electron transport chain come from light-gathering proteins called photosynthetic reaction centres. Reaction centers are classified into two types depending on the nature of photosynthetic pigment present, with most photosynthetic bacteria only having one type, while plants and cyanobacteria have two. In plants, algae, and cyanobacteria, photosystem II uses light energy to remove electrons from water, releasing oxygen as a waste product. The electrons then flow to the cytochrome b6f complex, which uses their energy to pump protons across the thylakoid membrane in the chloroplast. These protons move back through the membrane as they drive the ATP synthase, as before. The electrons then flow through photosystem I and can then be used to reduce the coenzyme NADP+. Anabolism Anabolism is the set of constructive metabolic processes where the energy released by catabolism is used to synthesize complex molecules. In general, the complex molecules that make up cellular structures are constructed step-by-step from smaller and simpler precursors. Anabolism involves three basic stages. First, the production of precursors such as amino acids, monosaccharides, isoprenoids and nucleotides, secondly, their activation into reactive forms using energy from ATP, and thirdly, the assembly of these precursors into complex molecules such as proteins, polysaccharides, lipids and nucleic acids. Anabolism in organisms can be different according to the source of constructed molecules in their cells. Autotrophs such as plants can construct the complex organic molecules in their cells such as polysaccharides and proteins from simple molecules like carbon dioxide and water. Heterotrophs, on the other hand, require a source of more complex substances, such as monosaccharides and amino acids, to produce these complex molecules. Organisms can be further classified by ultimate source of their energy: photoautotrophs and photoheterotrophs obtain energy from light, whereas chemoautotrophs and chemoheterotrophs obtain energy from oxidation reactions. Carbon fixation Photosynthesis is the synthesis of carbohydrates from sunlight and carbon dioxide (CO2). In plants, cyanobacteria and algae, oxygenic photosynthesis splits water, with oxygen produced as a waste product. This process uses the ATP and NADPH produced by the photosynthetic reaction centres, as described above, to convert CO2 into glycerate 3-phosphate, which can then be converted into glucose. This carbon-fixation reaction is carried out by the enzyme RuBisCO as part of the Calvin–Benson cycle. 
Three types of photosynthesis occur in plants: C3 carbon fixation, C4 carbon fixation and CAM photosynthesis. These differ by the route that carbon dioxide takes to the Calvin cycle, with C3 plants fixing CO2 directly, while C4 and CAM photosynthesis incorporate the CO2 into other compounds first, as adaptations to deal with intense sunlight and dry conditions. In photosynthetic prokaryotes the mechanisms of carbon fixation are more diverse. Here, carbon dioxide can be fixed by the Calvin–Benson cycle, a reversed citric acid cycle, or the carboxylation of acetyl-CoA. Prokaryotic chemoautotrophs also fix CO2 through the Calvin–Benson cycle, but use energy from inorganic compounds to drive the reaction. Carbohydrates and glycans In carbohydrate anabolism, simple organic acids can be converted into monosaccharides such as glucose and then used to assemble polysaccharides such as starch. The generation of glucose from compounds like pyruvate, lactate, glycerol, glycerate 3-phosphate and amino acids is called gluconeogenesis. Gluconeogenesis converts pyruvate to glucose-6-phosphate through a series of intermediates, many of which are shared with glycolysis. However, this pathway is not simply glycolysis run in reverse, as several steps are catalyzed by non-glycolytic enzymes. This is important as it allows the formation and breakdown of glucose to be regulated separately, and prevents both pathways from running simultaneously in a futile cycle. Although fat is a common way of storing energy, in vertebrates such as humans the fatty acids in these stores cannot be converted to glucose through gluconeogenesis, as these organisms cannot convert acetyl-CoA into pyruvate; plants have the necessary enzymatic machinery, but animals do not. As a result, after long-term starvation, vertebrates need to produce ketone bodies from fatty acids to replace glucose in tissues such as the brain that cannot metabolize fatty acids. In other organisms such as plants and bacteria, this metabolic problem is solved using the glyoxylate cycle, which bypasses the decarboxylation step in the citric acid cycle and allows the transformation of acetyl-CoA to oxaloacetate, where it can be used for the production of glucose. Besides fat, glucose is stored in most tissues as glycogen, an energy reserve available within the tissue; this store is built up through glycogenesis and is also used to maintain the level of glucose in the blood. Polysaccharides and glycans are made by the sequential addition of monosaccharides by glycosyltransferase from a reactive sugar-phosphate donor such as uridine diphosphate glucose (UDP-Glc) to an acceptor hydroxyl group on the growing polysaccharide. As any of the hydroxyl groups on the ring of the substrate can be acceptors, the polysaccharides produced can have straight or branched structures. The polysaccharides produced can have structural or metabolic functions themselves, or be transferred to lipids and proteins by the enzymes oligosaccharyltransferases. Fatty acids, isoprenoids and sterols Fatty acids are made by fatty acid synthases that polymerize and then reduce acetyl-CoA units. The acyl chains in the fatty acids are extended by a cycle of reactions that add the acyl group, reduce it to an alcohol, dehydrate it to an alkene group and then reduce it again to an alkane group.
The enzymes of fatty acid biosynthesis are divided into two groups: in animals and fungi, all these fatty acid synthase reactions are carried out by a single multifunctional type I protein, while in plant plastids and bacteria separate type II enzymes perform each step in the pathway. Terpenes and isoprenoids are a large class of lipids that include the carotenoids and form the largest class of plant natural products. These compounds are made by the assembly and modification of isoprene units donated from the reactive precursors isopentenyl pyrophosphate and dimethylallyl pyrophosphate. These precursors can be made in different ways. In animals and archaea, the mevalonate pathway produces these compounds from acetyl-CoA, while in plants and bacteria the non-mevalonate pathway uses pyruvate and glyceraldehyde 3-phosphate as substrates. One important reaction that uses these activated isoprene donors is sterol biosynthesis. Here, the isoprene units are joined to make squalene and then folded up and formed into a set of rings to make lanosterol. Lanosterol can then be converted into other sterols such as cholesterol and ergosterol. Proteins Organisms vary in their ability to synthesize the 20 common amino acids. Most bacteria and plants can synthesize all twenty, but mammals can only synthesize eleven nonessential amino acids, so nine essential amino acids must be obtained from food. Some simple parasites, such as the bacteria Mycoplasma pneumoniae, lack all amino acid synthesis and take their amino acids directly from their hosts. All amino acids are synthesized from intermediates in glycolysis, the citric acid cycle, or the pentose phosphate pathway. Nitrogen is provided by glutamate and glutamine. Nonessensial amino acid synthesis depends on the formation of the appropriate alpha-keto acid, which is then transaminated to form an amino acid. Amino acids are made into proteins by being joined in a chain of peptide bonds. Each different protein has a unique sequence of amino acid residues: this is its primary structure. Just as the letters of the alphabet can be combined to form an almost endless variety of words, amino acids can be linked in varying sequences to form a huge variety of proteins. Proteins are made from amino acids that have been activated by attachment to a transfer RNA molecule through an ester bond. This aminoacyl-tRNA precursor is produced in an ATP-dependent reaction carried out by an aminoacyl tRNA synthetase. This aminoacyl-tRNA is then a substrate for the ribosome, which joins the amino acid onto the elongating protein chain, using the sequence information in a messenger RNA. Nucleotide synthesis and salvage Nucleotides are made from amino acids, carbon dioxide and formic acid in pathways that require large amounts of metabolic energy. Consequently, most organisms have efficient systems to salvage preformed nucleotides. Purines are synthesized as nucleosides (bases attached to ribose). Both adenine and guanine are made from the precursor nucleoside inosine monophosphate, which is synthesized using atoms from the amino acids glycine, glutamine, and aspartic acid, as well as formate transferred from the coenzyme tetrahydrofolate. Pyrimidines, on the other hand, are synthesized from the base orotate, which is formed from glutamine and aspartate. Xenobiotics and redox metabolism All organisms are constantly exposed to compounds that they cannot use as foods and that would be harmful if they accumulated in cells, as they have no metabolic function. 
These potentially damaging compounds are called xenobiotics. Xenobiotics such as synthetic drugs, natural poisons and antibiotics are detoxified by a set of xenobiotic-metabolizing enzymes. In humans, these include cytochrome P450 oxidases, UDP-glucuronosyltransferases, and glutathione S-transferases. This system of enzymes acts in three stages to firstly oxidize the xenobiotic (phase I) and then conjugate water-soluble groups onto the molecule (phase II). The modified water-soluble xenobiotic can then be pumped out of cells and in multicellular organisms may be further metabolized before being excreted (phase III). In ecology, these reactions are particularly important in microbial biodegradation of pollutants and the bioremediation of contaminated land and oil spills. Many of these microbial reactions are shared with multicellular organisms, but due to the incredible diversity of types of microbes these organisms are able to deal with a far wider range of xenobiotics than multicellular organisms, and can degrade even persistent organic pollutants such as organochloride compounds. A related problem for aerobic organisms is oxidative stress. Here, processes including oxidative phosphorylation and the formation of disulfide bonds during protein folding produce reactive oxygen species such as hydrogen peroxide. These damaging oxidants are removed by antioxidant metabolites such as glutathione and enzymes such as catalases and peroxidases. Thermodynamics of living organisms Living organisms must obey the laws of thermodynamics, which describe the transfer of heat and work. The second law of thermodynamics states that in any isolated system, the amount of entropy (disorder) cannot decrease. Although living organisms' amazing complexity appears to contradict this law, life is possible as all organisms are open systems that exchange matter and energy with their surroundings. Living systems are not in equilibrium, but instead are dissipative systems that maintain their state of high complexity by causing a larger increase in the entropy of their environments. The metabolism of a cell achieves this by coupling the spontaneous processes of catabolism to the non-spontaneous processes of anabolism. In thermodynamic terms, metabolism maintains order by creating disorder. Regulation and control As the environments of most organisms are constantly changing, the reactions of metabolism must be finely regulated to maintain a constant set of conditions within cells, a condition called homeostasis. Metabolic regulation also allows organisms to respond to signals and interact actively with their environments. Two closely linked concepts are important for understanding how metabolic pathways are controlled. Firstly, the regulation of an enzyme in a pathway is how its activity is increased and decreased in response to signals. Secondly, the control exerted by this enzyme is the effect that these changes in its activity have on the overall rate of the pathway (the flux through the pathway). For example, an enzyme may show large changes in activity (i.e. it is highly regulated) but if these changes have little effect on the flux of a metabolic pathway, then this enzyme is not involved in the control of the pathway. There are multiple levels of metabolic regulation. In intrinsic regulation, the metabolic pathway self-regulates to respond to changes in the levels of substrates or products; for example, a decrease in the amount of product can increase the flux through the pathway to compensate. 
This type of regulation often involves allosteric regulation of the activities of multiple enzymes in the pathway. Extrinsic control involves a cell in a multicellular organism changing its metabolism in response to signals from other cells. These signals are usually in the form of water-soluble messengers such as hormones and growth factors and are detected by specific receptors on the cell surface. These signals are then transmitted inside the cell by second messenger systems that often involve the phosphorylation of proteins. A very well understood example of extrinsic control is the regulation of glucose metabolism by the hormone insulin. Insulin is produced in response to rises in blood glucose levels. Binding of the hormone to insulin receptors on cells then activates a cascade of protein kinases that cause the cells to take up glucose and convert it into storage molecules such as fatty acids and glycogen. The metabolism of glycogen is controlled by the activity of phosphorylase, the enzyme that breaks down glycogen, and glycogen synthase, the enzyme that makes it. These enzymes are regulated in a reciprocal fashion, with phosphorylation inhibiting glycogen synthase, but activating phosphorylase. Insulin causes glycogen synthesis by activating protein phosphatases and producing a decrease in the phosphorylation of these enzymes. Evolution The central pathways of metabolism described above, such as glycolysis and the citric acid cycle, are present in all three domains of living things and were present in the last universal common ancestor. This universal ancestral cell was prokaryotic and probably a methanogen that had extensive amino acid, nucleotide, carbohydrate and lipid metabolism. The retention of these ancient pathways during later evolution may be the result of these reactions having been an optimal solution to their particular metabolic problems, with pathways such as glycolysis and the citric acid cycle producing their end products highly efficiently and in a minimal number of steps. The first pathways of enzyme-based metabolism may have been parts of purine nucleotide metabolism, while previous metabolic pathways were a part of the ancient RNA world. Many models have been proposed to describe the mechanisms by which novel metabolic pathways evolve. These include the sequential addition of novel enzymes to a short ancestral pathway, the duplication and then divergence of entire pathways as well as the recruitment of pre-existing enzymes and their assembly into a novel reaction pathway. The relative importance of these mechanisms is unclear, but genomic studies have shown that enzymes in a pathway are likely to have a shared ancestry, suggesting that many pathways have evolved in a step-by-step fashion with novel functions created from pre-existing steps in the pathway. An alternative model comes from studies that trace the evolution of protein structures in metabolic networks; these have suggested that enzymes are pervasively recruited, with existing enzymes borrowed to perform similar functions in different metabolic pathways (as is evident in the MANET database). These recruitment processes result in an evolutionary enzymatic mosaic. A third possibility is that some parts of metabolism might exist as "modules" that can be reused in different pathways and perform similar functions on different molecules. As well as the evolution of new metabolic pathways, evolution can also cause the loss of metabolic functions.
For example, in some parasites metabolic processes that are not essential for survival are lost and preformed amino acids, nucleotides and carbohydrates may instead be scavenged from the host. Similar reduced metabolic capabilities are seen in endosymbiotic organisms. Investigation and manipulation Classically, metabolism is studied by a reductionist approach that focuses on a single metabolic pathway. Particularly valuable is the use of radioactive tracers at the whole-organism, tissue and cellular levels, which define the paths from precursors to final products by identifying radioactively labelled intermediates and products. The enzymes that catalyze these chemical reactions can then be purified and their kinetics and responses to inhibitors investigated. A parallel approach is to identify the small molecules in a cell or tissue; the complete set of these molecules is called the metabolome. Overall, these studies give a good view of the structure and function of simple metabolic pathways, but are inadequate when applied to more complex systems such as the metabolism of a complete cell. An idea of the complexity of the metabolic networks in cells that contain thousands of different enzymes is given by maps showing the interactions between just 43 proteins and 40 metabolites; the sequences of genomes provide lists containing anything up to 26,500 genes. However, it is now possible to use this genomic data to reconstruct complete networks of biochemical reactions and produce more holistic mathematical models that may explain and predict their behavior (a minimal flux-balance sketch illustrating this approach is given at the end of this article). These models are especially powerful when used to integrate the pathway and metabolite data obtained through classical methods with data on gene expression from proteomic and DNA microarray studies. Using these techniques, a model of human metabolism has now been produced, which will guide future drug discovery and biochemical research. These models are now used in network analysis, to classify human diseases into groups that share common proteins or metabolites. Bacterial metabolic networks are a striking example of bow-tie organization, an architecture able to input a wide range of nutrients and produce a large variety of products and complex macromolecules using relatively few intermediate common currencies. A major technological application of this information is metabolic engineering. Here, organisms such as yeast, plants or bacteria are genetically modified to make them more useful in biotechnology and aid the production of drugs such as antibiotics or industrial chemicals such as 1,3-propanediol and shikimic acid. These genetic modifications usually aim to reduce the amount of energy used to produce the product, increase yields and reduce the production of wastes. History The term metabolism is derived from the Ancient Greek word μεταβολή ("metabole", "a change"), which is derived from μεταβάλλειν ("metaballein", "to change"). Greek philosophy Aristotle's The Parts of Animals sets out enough details of his views on metabolism for an open flow model to be made. He believed that at each stage of the process, materials from food were transformed, with heat being released as the classical element of fire, and residual materials being excreted as urine, bile, or faeces.
Ibn al-Nafis described metabolism in his 1260 AD work titled Al-Risalah al-Kamiliyyah fil Siera al-Nabawiyyah (The Treatise of Kamil on the Prophet's Biography), which included the following phrase: "Both the body and its parts are in a continuous state of dissolution and nourishment, so they are inevitably undergoing permanent change." Application of the scientific method and modern metabolic theories The history of the scientific study of metabolism spans several centuries and has moved from examining whole animals in early studies, to examining individual metabolic reactions in modern biochemistry. The first controlled experiments in human metabolism were published by Santorio Santorio in 1614 in his book Ars de statica medicina. He described how he weighed himself before and after eating, sleep, working, sex, fasting, drinking, and excreting. He found that most of the food he took in was lost through what he called "insensible perspiration". In these early studies, the mechanisms of these metabolic processes had not been identified and a vital force was thought to animate living tissue. In the 19th century, when studying the fermentation of sugar to alcohol by yeast, Louis Pasteur concluded that fermentation was catalyzed by substances within the yeast cells he called "ferments". He wrote that "alcoholic fermentation is an act correlated with the life and organization of the yeast cells, not with the death or putrefaction of the cells." This discovery, along with the publication by Friedrich Wöhler in 1828 of a paper on the chemical synthesis of urea (notable as the first organic compound prepared from wholly inorganic precursors), proved that the organic compounds and chemical reactions found in cells were no different in principle from any other part of chemistry. It was the discovery of enzymes at the beginning of the 20th century by Eduard Buchner that separated the study of the chemical reactions of metabolism from the biological study of cells, and marked the beginnings of biochemistry. The mass of biochemical knowledge grew rapidly throughout the early 20th century. One of the most prolific of these modern biochemists was Hans Krebs who made huge contributions to the study of metabolism. He discovered the urea cycle and later, working with Hans Kornberg, the citric acid cycle and the glyoxylate cycle. See also Iron–sulfur world hypothesis, a "metabolism first" theory of the origin of life Microphysiometry Oncometabolism References Further reading Introductory Advanced External links General information The Biochemistry of Metabolism (archived 8 March 2005) Sparknotes SAT biochemistry Overview of biochemistry. School level. MIT Biology Hypertextbook Undergraduate-level guide to molecular biology. Human metabolism Topics in Medical Biochemistry Guide to human metabolic pathways. School level. THE Medical Biochemistry Page Comprehensive resource on human metabolism. Databases Flow Chart of Metabolic Pathways at ExPASy IUBMB-Nicholson Metabolic Pathways Chart SuperCYP: Database for Drug-Cytochrome-Metabolism Metabolic pathways Metabolism reference Pathway Underwater diving physiology
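As a deliberately tiny illustration of the genome-scale modelling approach mentioned in the investigation and manipulation section above, the sketch below performs a flux balance analysis on a hypothetical two-metabolite, four-reaction network: fluxes are chosen to maximize a nominal biomass reaction subject to steady-state mass balance (S·v = 0) and capacity bounds. The stoichiometric matrix, bounds and reaction names are invented for illustration; only the linear-programming formulation, here solved with SciPy's linprog, reflects how real reconstructions are analysed.

# Toy flux balance analysis (FBA): maximize a "biomass" flux subject to
# steady-state mass balance S @ v = 0 and flux bounds. Requires NumPy and SciPy.
import numpy as np
from scipy.optimize import linprog

# Hypothetical network with 2 internal metabolites (A, B) and 4 reactions:
#   R1: uptake -> A      R2: A -> B      R3: A -> exported byproduct      R4: B -> biomass
# Rows = metabolites, columns = reactions; entries are stoichiometric coefficients.
S = np.array([
    [ 1, -1, -1,  0],   # metabolite A
    [ 0,  1,  0, -1],   # metabolite B
], dtype=float)

bounds = [(0, 10),    # R1: uptake limited to 10 flux units
          (0, None),  # R2
          (0, None),  # R3
          (0, None)]  # R4 (biomass)

# linprog minimizes, so minimize -v4 in order to maximize the biomass flux v4.
c = np.array([0.0, 0.0, 0.0, -1.0])

res = linprog(c, A_eq=S, b_eq=np.zeros(S.shape[0]), bounds=bounds, method="highs")
print("optimal fluxes [R1..R4]:", np.round(res.x, 3))
print("maximal biomass flux:", round(-res.fun, 3))  # expected 10: all uptake routed A -> B -> biomass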
Isenthalpic process
An isenthalpic process or isoenthalpic process is a process that proceeds without any change in enthalpy, H; or specific enthalpy, h. Overview If a steady-state, steady-flow process is analysed using a control volume, everything outside the control volume is considered to be the surroundings. Such a process will be isenthalpic if there is no transfer of heat to or from the surroundings, no work done on or by the surroundings, and no change in the kinetic energy of the fluid. This is a sufficient but not necessary condition for isoenthalpy. The necessary condition for a process to be isoenthalpic is that the sum of each of the terms of the energy balance other than enthalpy (work, heat, changes in kinetic energy, etc.) cancel each other, so that the enthalpy remains unchanged. For a process in which magnetic and electric effects (among others) give negligible contributions, the associated energy balance per unit mass of fluid can be written as δq − δw = dh + d(ke) + d(pe), where ke and pe denote the kinetic and potential energy of the fluid per unit mass. If δq = 0, δw = 0, d(ke) = 0 and d(pe) = 0, then it must be that dh = 0. The throttling process is a good example of an isenthalpic process in which significant changes in pressure and temperature can occur to the fluid, and yet the net sum of the associated terms in the energy balance is null, thus rendering the transformation isenthalpic. The lifting of a relief (or safety) valve on a pressure vessel is an example of a throttling process. The specific enthalpy of the fluid inside the pressure vessel is the same as the specific enthalpy of the fluid as it escapes through the valve. With a knowledge of the specific enthalpy of the fluid and the pressure outside the pressure vessel, it is possible to determine the temperature and speed of the escaping fluid. A worked numerical sketch of such a throttling calculation is given at the end of this article. In an isenthalpic process: dh = 0, i.e. h1 = h2. Isenthalpic processes on an ideal gas follow isotherms, since for an ideal gas dh = cp dT, so dh = 0 implies dT = 0. See also Adiabatic process Joule–Thomson effect Ideal gas laws Isentropic process References G. J. Van Wylen and R. E. Sonntag (1985), Fundamentals of Classical Thermodynamics, John Wiley & Sons, Inc., New York Notes Thermodynamic processes Enthalpy
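As a worked version of the relief-valve example above, the sketch below finds the downstream state of steam throttled isenthalpically from 10 bar to 1 bar. It assumes the third-party CoolProp property library is installed; the chosen pressures, the upstream temperature and the use of water/steam are illustrative values, not taken from the article.

# Isenthalpic throttling of steam across a relief valve: h is conserved,
# so the downstream state is fixed by (P_downstream, h_upstream).
# Assumes the CoolProp package is installed (pip install CoolProp).
from CoolProp.CoolProp import PropsSI

P1, T1 = 10e5, 500.0          # upstream: 10 bar, 500 K (superheated steam)
P2 = 1e5                      # downstream: 1 bar (roughly atmospheric)

h1 = PropsSI("H", "P", P1, "T", T1, "Water")   # upstream specific enthalpy, J/kg
T2 = PropsSI("T", "P", P2, "H", h1, "Water")   # downstream temperature at the same h
s1 = PropsSI("S", "P", P1, "T", T1, "Water")
s2 = PropsSI("S", "P", P2, "H", h1, "Water")

print(f"h is conserved: h1 = {h1/1e3:.1f} kJ/kg")
print(f"T drops from {T1:.1f} K to {T2:.1f} K across the valve")
print(f"entropy rises from {s1/1e3:.3f} to {s2/1e3:.3f} kJ/(kg K)  (throttling is irreversible)")

For an ideal gas the same calculation would return an unchanged temperature, consistent with the statement above that isenthalpic processes on an ideal gas follow isotherms; for real steam the enthalpy is conserved while the temperature drops slightly and the entropy rises.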
Penman–Monteith equation
The Penman–Monteith equation approximates net evapotranspiration (ET) from meteorological data as a replacement for direct measurement of evapotranspiration. The equation is widely used, and was derived by the United Nations Food and Agriculture Organization for modeling reference evapotranspiration ET0. Significance Evapotranspiration contributions are significant in a watershed's water balance, yet are often not emphasized in results because the precision of this component is often weak relative to more directly measured phenomena, e.g., rain and stream flow. In addition to weather uncertainties, the Penman–Monteith equation is sensitive to vegetation-specific parameters, e.g., stomatal resistance or conductance. Various forms of crop coefficients (Kc) account for differences between specific vegetation modeled and a reference evapotranspiration (RET or ET0) standard. Stress coefficients (Ks) account for reductions in ET due to environmental stress (e.g. soil saturation reduces root-zone O2, low soil moisture induces wilt, air pollution effects, and salinity). Models of native vegetation cannot assume crop management to avoid recurring stress. Equation Per Monteith's Evaporation and Environment, the equation is: λv E = [Δ (Rn − G) + ρa cp (δe) ga] / [Δ + γ (1 + ga / gs)] = Lv ET where: λv = Latent heat of vaporization. The energy required per unit mass of water vaporized. (J g−1) Lv = Volumetric latent heat of vaporization. The energy required per unit volume of water vaporized. (Lv = 2453 MJ m−3) E = Mass water evapotranspiration rate (g s−1 m−2) ET = Water volume evapotranspired (mm s−1) Δ = Rate of change of saturation vapour pressure with air temperature. (Pa K−1) Rn = Net irradiance (W m−2), the external source of energy flux G = Ground heat flux (W m−2), usually difficult to measure cp = Specific heat capacity of air (J kg−1 K−1) ρa = dry air density (kg m−3) δe = vapor pressure deficit (Pa) ga = Conductivity of air, atmospheric conductance (m s−1) gs = Conductivity of stoma, surface or stomatal conductance (m s−1) γ = Psychrometric constant (γ ≈ 66 Pa K−1) Note: Often, resistances are used rather than conductivities, with g = 1/r (i.e. ga = 1/ra and gs = 1/rc), where rc refers to the resistance to flux from a vegetation canopy to the extent of some defined boundary layer. The atmospheric conductance ga accounts for aerodynamic effects like the zero plane displacement height and the roughness length of the surface. The stomatal conductance gs accounts for the effect of leaf density (Leaf Area Index), water stress, and carbon dioxide concentration in the air, that is to say plant reaction to external factors. Different models exist to link the stomatal conductance to these vegetation characteristics, like the ones from P.G. Jarvis (1976) or Jacobs et al. (1996). Accuracy While the Penman–Monteith method is widely considered accurate for practical purposes and is recommended by the Food and Agriculture Organization of the United Nations, errors when compared to direct measurement or other techniques can range from -9 to 40%. Variations and alternatives FAO 56 Penman–Monteith equation To avoid the inherent complexity of determining stomatal and atmospheric conductance, the Food and Agriculture Organization proposed in 1998 a simplified equation for the reference evapotranspiration ET0. It is defined as the evapotranspiration for "[an] hypothetical reference crop with an assumed crop height of 0.12 m, a fixed surface resistance of 70 s m-1 and an albedo of 0.23."
This reference surface is defined to represent "an extensive surface of green grass of uniform height, actively growing, completely shading the ground and with adequate water". The corresponding equation is: ET0 = [0.408 Δ (Rn − G) + γ (900 / (T + 273)) u2 δe] / [Δ + γ (1 + 0.34 u2)] where: ET0 = Reference evapotranspiration, Water volume evapotranspired (mm day−1) Δ = Rate of change of saturation vapour pressure with air temperature. (kPa °C−1) Rn = Net irradiance (MJ m−2 day−1), the external source of energy flux G = Ground heat flux (MJ m−2 day−1), usually taken as zero over a daily time step T = Mean air temperature at 2 m height (°C; the 273 in the denominator converts it to kelvins) u2 = Wind speed at 2 m height (m/s) δe = vapor pressure deficit (kPa) γ = Psychrometric constant (γ ≈ 0.066 kPa °C−1) N.B.: The coefficients 0.408 and 900 are not unitless but account for the conversion from energy values to equivalent water depths: radiation [mm day−1] = 0.408 radiation [MJ m−2 day−1]. This reference evapotranspiration ET0 can then be used to evaluate the evapotranspiration rate ET from unstressed plants through crop coefficients Kc: ET = Kc * ET0. A short implementation sketch of the ET0 equation is given at the end of this article. Variations The standard methods of the American Society of Civil Engineers modify the standard Penman–Monteith equation for use with an hourly time step. The SWAT model is one of many GIS-integrated hydrologic models estimating ET using Penman–Monteith equations. Priestley–Taylor The Priestley–Taylor equation was developed as a substitute for the Penman–Monteith equation to remove dependence on observations. For Priestley–Taylor, only radiation (irradiance) observations are required. This is done by removing the aerodynamic terms from the Penman–Monteith equation and adding an empirically derived constant factor, α. The underlying concept behind the Priestley–Taylor model is that an air mass moving above a vegetated area with abundant water would become saturated with water. In these conditions, the actual evapotranspiration would match the Penman rate of reference evapotranspiration. However, observations revealed that actual evaporation was about 1.26 times the reference evaporation. Therefore, the equation for actual evaporation was found by taking reference evapotranspiration and multiplying it by α. The assumption here is for vegetation with an abundant water supply (i.e. the plants have low moisture stress). Areas like arid regions with high moisture stress are estimated to have higher α values. The assumption that an air mass moving over a vegetated surface with abundant water saturates has later been questioned. The atmosphere's lowest and most turbulent part, the atmospheric boundary layer, is not a closed box but constantly brings in dry air from higher up in the atmosphere towards the surface. As water evaporates more readily into a dry atmosphere, evapotranspiration is enhanced. This explains the larger-than-unity value of the Priestley–Taylor parameter α. The proper equilibrium of the system has been derived. It involves the characteristics of the interface of the atmospheric boundary layer and the overlying free atmosphere. History The equation is named after Howard Penman and John Monteith. Penman published his equation in 1948, and Monteith revised it in 1965. References External links Derivation of the equation Equations Hydrology Agronomy Meteorological concepts
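The following is a minimal Python sketch of the FAO 56 reference evapotranspiration equation given above. The helper expressions for Δ and γ are the standard FAO 56 forms, but the function names and the example input values are invented for illustration; a complete implementation would also derive net radiation and the vapour pressure deficit from raw weather observations.

import math

def delta_svp(t_c: float) -> float:
    """Slope of the saturation vapour pressure curve, kPa per degC (standard FAO 56 form)."""
    es = 0.6108 * math.exp(17.27 * t_c / (t_c + 237.3))
    return 4098.0 * es / (t_c + 237.3) ** 2

def gamma_psy(pressure_kpa: float = 101.3) -> float:
    """Psychrometric constant, kPa per degC (standard FAO 56 form)."""
    return 0.000665 * pressure_kpa

def eto_fao56(rn, g, t_c, u2, vpd_kpa, pressure_kpa=101.3):
    """Daily reference evapotranspiration ET0 in mm/day.
    rn, g   : net radiation and soil heat flux, MJ m-2 day-1
    t_c     : mean daily air temperature at 2 m, degC
    u2      : wind speed at 2 m, m/s
    vpd_kpa : vapour pressure deficit (es - ea), kPa
    """
    d = delta_svp(t_c)
    g_psy = gamma_psy(pressure_kpa)
    num = 0.408 * d * (rn - g) + g_psy * (900.0 / (t_c + 273.0)) * u2 * vpd_kpa
    den = d + g_psy * (1.0 + 0.34 * u2)
    return num / den

if __name__ == "__main__":
    # Illustrative mid-summer day: input values are made up, not measured.
    et0 = eto_fao56(rn=13.3, g=0.0, t_c=21.5, u2=2.1, vpd_kpa=0.59)
    print(f"ET0 = {et0:.2f} mm/day")
    # A crop-specific estimate then follows as ET = Kc * ET0.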
Phase diagram
A phase diagram in physical chemistry, engineering, mineralogy, and materials science is a type of chart used to show conditions (pressure, temperature, etc.) at which thermodynamically distinct phases (such as solid, liquid or gaseous states) occur and coexist at equilibrium. Overview Common components of a phase diagram are lines of equilibrium or phase boundaries, which refer to lines that mark conditions under which multiple phases can coexist at equilibrium. Phase transitions occur along lines of equilibrium. Metastable phases are not shown in phase diagrams as, despite their common occurrence, they are not equilibrium phases. Triple points are points on phase diagrams where lines of equilibrium intersect. Triple points mark conditions at which three different phases can coexist. For example, the water phase diagram has a triple point corresponding to the single temperature and pressure at which solid, liquid, and gaseous water can coexist in a stable equilibrium ( and a partial vapor pressure of ). The pressure on a pressure-temperature diagram (such as the water phase diagram shown) is the partial pressure of the substance in question. The solidus is the temperature below which the substance is stable in the solid state. The liquidus is the temperature above which the substance is stable in a liquid state. There may be a gap between the solidus and liquidus; within the gap, the substance consists of a mixture of crystals and liquid (like a "slurry"). Working fluids are often categorized on the basis of the shape of their phase diagram. Types 2-dimensional diagrams Pressure vs temperature The simplest phase diagrams are pressure–temperature diagrams of a single simple substance, such as water. The axes correspond to the pressure and temperature. The phase diagram shows, in pressure–temperature space, the lines of equilibrium or phase boundaries between the three phases of solid, liquid, and gas. The curves on the phase diagram show the points where the free energy (and other derived properties) becomes non-analytic: their derivatives with respect to the coordinates (temperature and pressure in this example) change discontinuously (abruptly). For example, the heat capacity of a container filled with ice will change abruptly as the container is heated past the melting point. The open spaces, where the free energy is analytic, correspond to single phase regions. Single phase regions are separated by lines of non-analytical behavior, where phase transitions occur, which are called phase boundaries. In the diagram on the right, the phase boundary between liquid and gas does not continue indefinitely. Instead, it terminates at a point on the phase diagram called the critical point. This reflects the fact that, at extremely high temperatures and pressures, the liquid and gaseous phases become indistinguishable, in what is known as a supercritical fluid. In water, the critical point occurs at around Tc = , pc = and ρc = 356 kg/m3. The existence of the liquid–gas critical point reveals a slight ambiguity in labelling the single phase regions. When going from the liquid to the gaseous phase, one usually crosses the phase boundary, but it is possible to choose a path that never crosses the boundary by going to the right of the critical point. Thus, the liquid and gaseous phases can blend continuously into each other. The solid–liquid phase boundary can only end in a critical point if the solid and liquid phases have the same symmetry group. 
For most substances, the solid–liquid phase boundary (or fusion curve) in the phase diagram has a positive slope so that the melting point increases with pressure. This is true whenever the solid phase is denser than the liquid phase. The greater the pressure on a given substance, the closer together the molecules of the substance are brought to each other, which increases the effect of the substance's intermolecular forces. Thus, the substance requires a higher temperature for its molecules to have enough energy to break out of the fixed pattern of the solid phase and enter the liquid phase. A similar concept applies to liquid–gas phase changes. Water is an exception which has a solid-liquid boundary with negative slope so that the melting point decreases with pressure. This occurs because ice (solid water) is less dense than liquid water, as shown by the fact that ice floats on water. At a molecular level, ice is less dense because it has a more extensive network of hydrogen bonding which requires a greater separation of water molecules. Other exceptions include antimony and bismuth. At very high pressures above 50 GPa (500 000 atm), liquid nitrogen undergoes a liquid-liquid phase transition to a polymeric form and becomes denser than solid nitrogen at the same pressure. Under these conditions therefore, solid nitrogen also floats in its liquid. The value of the slope dP/dT is given by the Clausius–Clapeyron equation for fusion (melting), dP/dT = ΔHfus / (T ΔVfus), where ΔHfus is the heat of fusion, which is always positive, and ΔVfus is the volume change for fusion. For most substances ΔVfus is positive so that the slope is positive. However for water and other exceptions, ΔVfus is negative so that the slope is negative; a numerical illustration of this slope for water is given at the end of this article. Other thermodynamic properties In addition to temperature and pressure, other thermodynamic properties may be graphed in phase diagrams. Examples of such thermodynamic properties include specific volume, specific enthalpy, or specific entropy. For example, single-component graphs of temperature vs. specific entropy (T vs. s) for water/steam or for a refrigerant are commonly used to illustrate thermodynamic cycles such as a Carnot cycle, Rankine cycle, or vapor-compression refrigeration cycle. Any two thermodynamic quantities may be shown on the horizontal and vertical axes of a two-dimensional diagram. Additional thermodynamic quantities may each be illustrated in increments as a series of lines: curved, straight, or a combination of curved and straight. Each of these iso-lines represents the thermodynamic quantity at a certain constant value. 3-dimensional diagrams It is possible to envision three-dimensional (3D) graphs showing three thermodynamic quantities. For example, for a single component, a 3D Cartesian coordinate type graph can show temperature (T) on one axis, pressure (p) on a second axis, and specific volume (v) on a third. Such a 3D graph is sometimes called a p–v–T diagram. The equilibrium conditions are shown as curves on a curved surface in 3D with areas for solid, liquid, and vapor phases and areas where solid and liquid, solid and vapor, or liquid and vapor coexist in equilibrium. A line on the surface called a triple line is where solid, liquid and vapor can all coexist in equilibrium. The critical point remains a point on the surface even on a 3D phase diagram. An orthographic projection of the 3D p–v–T graph showing pressure and temperature as the vertical and horizontal axes collapses the 3D plot into the standard 2D pressure–temperature diagram.
When this is done, the solid–vapor, solid–liquid, and liquid–vapor surfaces collapse into three corresponding curved lines meeting at the triple point, which is the collapsed orthographic projection of the triple line. Binary mixtures Other much more complex types of phase diagrams can be constructed, particularly when more than one pure component is present. In that case, concentration becomes an important variable. Phase diagrams with more than two dimensions can be constructed that show the effect of more than two variables on the phase of a substance. Phase diagrams can use other variables in addition to or in place of temperature, pressure and composition, for example the strength of an applied electrical or magnetic field, and they can also involve substances that take on more than just three states of matter. One type of phase diagram plots temperature against the relative concentrations of two substances in a binary mixture called a binary phase diagram, as shown at right. Such a mixture can be either a solid solution, eutectic or peritectic, among others. These two types of mixtures result in very different graphs. Another type of binary phase diagram is a boiling-point diagram for a mixture of two components, i. e. chemical compounds. For two particular volatile components at a certain pressure such as atmospheric pressure, a boiling-point diagram shows what vapor (gas) compositions are in equilibrium with given liquid compositions depending on temperature. In a typical binary boiling-point diagram, temperature is plotted on a vertical axis and mixture composition on a horizontal axis. A two component diagram with components A and B in an "ideal" solution is shown. The construction of a liquid vapor phase diagram assumes an ideal liquid solution obeying Raoult's law and an ideal gas mixture obeying Dalton's law of partial pressure. A tie line from the liquid to the gas at constant pressure would indicate the two compositions of the liquid and gas respectively. A simple example diagram with hypothetical components 1 and 2 in a non-azeotropic mixture is shown at right. The fact that there are two separate curved lines joining the boiling points of the pure components means that the vapor composition is usually not the same as the liquid composition the vapor is in equilibrium with. See Vapor–liquid equilibrium for more information. In addition to the above-mentioned types of phase diagrams, there are many other possible combinations. Some of the major features of phase diagrams include congruent points, where a solid phase transforms directly into a liquid. There is also the peritectoid, a point where two solid phases combine into one solid phase during cooling. The inverse of this, when one solid phase transforms into two solid phases during cooling, is called the eutectoid. A complex phase diagram of great technological importance is that of the iron–carbon system for less than 7% carbon (see steel). The x-axis of such a diagram represents the concentration variable of the mixture. As the mixtures are typically far from dilute and their density as a function of temperature is usually unknown, the preferred concentration measure is mole fraction. A volume-based measure like molarity would be inadvisable. Ternary phase diagrams A system with three components is called a ternary system. At constant pressure the maximum number of independent variables is three – the temperature and two concentration values. 
For a representation of ternary equilibria a three-dimensional phase diagram is required. Often such a diagram is drawn with the composition as a horizontal plane and the temperature on an axis perpendicular to this plane. To represent composition in a ternary system an equilateral triangle is used, called Gibbs triangle (see also Ternary plot). The temperature scale is plotted on the axis perpendicular to the composition triangle. Thus, the space model of a ternary phase diagram is a right-triangular prism. The prism sides represent corresponding binary systems A-B, B-C, A-C. However, the most common methods to present phase equilibria in a ternary system are the following: 1) projections on the concentration triangle ABC of the liquidus, solidus, solvus surfaces; 2) isothermal sections; 3) vertical sections. Crystals Polymorphic and polyamorphic substances have multiple crystal or amorphous phases, which can be graphed in a similar fashion to solid, liquid, and gas phases. Mesophases Some organic materials pass through intermediate states between solid and liquid; these states are called mesophases. Attention has been directed to mesophases because they enable display devices and have become commercially important through the so-called liquid-crystal technology. Phase diagrams are used to describe the occurrence of mesophases. See also CALPHAD (method) Computational thermodynamics Congruent melting and incongruent melting Gibbs phase rule Glass databases Hamiltonian mechanics Phase separation Saturation dome Schreinemaker's analysis Simple phase envelope algorithm References External links Iron-Iron Carbide Phase Diagram Example How to build a phase diagram Phase Changes: Phase Diagrams: Part 1 Equilibrium Fe-C phase diagram Phase diagrams for lead free solders DoITPoMS Phase Diagram Library DoITPoMS Teaching and Learning Package – "Phase Diagrams and Solidification" Phase Diagrams: The Beginning of Wisdom – Open Access Journal Article Binodal curves, tie-lines, lever rule and invariant points – How to read phase diagrams (Video by SciFox on TIB AV-Portal) The Alloy Phase Diagram International Commission (APDIC) Periodic table of phase diagrams of the elements (pdf poster) Diagram Equilibrium chemistry Materials science Metallurgy Charts Diagrams Gases Chemical engineering thermodynamics
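As a numerical illustration of the fusion-curve slope discussed above, the sketch below evaluates dP/dT for the ice–water transition at 0 °C from the Clausius–Clapeyron relation. The enthalpy of fusion and the densities of ice and liquid water are approximate handbook values quoted from memory, so the result should be read as an order-of-magnitude illustration rather than a reference datum.

# Slope of the solid-liquid boundary for water at 0 degC via Clausius-Clapeyron:
#   dP/dT = dH_fus / (T * dV_fus)
# Handbook-style values (approximate):
T = 273.15                        # melting point, K
dH_fus = 6.01e3                   # enthalpy of fusion, J/mol
M = 18.015e-3                     # molar mass, kg/mol
rho_ice, rho_liq = 916.7, 999.8   # densities near 0 degC, kg/m^3

V_ice = M / rho_ice               # molar volumes, m^3/mol
V_liq = M / rho_liq
dV_fus = V_liq - V_ice            # negative: liquid water is denser than ice

slope = dH_fus / (T * dV_fus)     # Pa/K
print(f"dV_fus = {dV_fus:.3e} m^3/mol (negative)")
print(f"dP/dT  = {slope/1e6:.1f} MPa/K")   # roughly -13 MPa/K: melting point falls with pressure

The negative sign reproduces the anomalous behaviour described above: increasing the pressure on ice at 0 °C lowers its melting point, by roughly 0.007 K per atmosphere.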
Reactionless drive
A reactionless drive is a hypothetical device producing motion without the exhaust of a propellant. A propellantless drive is not necessarily reactionless when it constitutes an open system interacting with external fields; but a reactionless drive is a particular case of a propellantless drive that is a closed system, presumably in contradiction with the law of conservation of momentum. Reactionless drives are often considered similar to a perpetual motion machine. The name comes from Newton's third law, often expressed as: "For every action, there is an equal and opposite reaction." Many infeasible reactionless drives are a staple of science fiction for space propulsion. Closed systems Through the years there have been numerous claims for functional reactionless drive designs using ordinary mechanics (i.e., devices not said to be based on quantum mechanics, relativity or atomic forces or effects). Two of these represent their general classes: the Dean drive is perhaps the best known example of a "linear oscillating mechanism" reactionless drive; the gyroscopic inertial thruster is perhaps the best known example of a "rotating mechanism" reactionless drive. These two also stand out as they both received much publicity from their promoters and the popular press in their day and both were eventually rejected when proven to not produce any reactionless drive forces. The rise and fall of these devices now serves as a cautionary tale for those making and reviewing similar claims. More recently, the EmDrive was taken seriously enough to be tested by a handful of physics labs, but similarly proved to not produce a reactionless drive force. Dean drive The Dean drive was a mechanical device concept promoted by inventor Norman L. Dean. Dean claimed that his device was a "reactionless thruster" and that his working models could demonstrate this effect. He held several private demonstrations but never revealed the exact design of the models nor allowed independent analysis of them. Dean's claims of reactionless thrust generation were subsequently shown to be in error and the "thrust" producing the directional motion was likely to be caused by friction between the device and the surface on which the device was resting and would not work in free space. Gyroscopic Inertial Thruster (GIT) The Gyroscopic Inertial Thruster is a proposed reactionless drive based on the mechanical principles of a rotating mechanism. The concept involves various methods of leverage applied against the supports of a large gyroscope. The supposed operating principle of a GIT is a mass traveling around a circular trajectory at a variable speed. The high-speed part of the trajectory allegedly generates greater centrifugal force than the low, so that there is a greater thrust in one direction than the other. Scottish inventor Sandy Kidd, a former RAF radar technician, investigated the possibility (without success) in the 1980s. He posited that a gyroscope set at various angles could provide a lifting force, defying gravity. In the 1990s, several people sent suggestions to the Space Exploration Outreach Program (SEOP) at NASA recommending that NASA study a gyroscopic inertial drive, especially the developments attributed to the American inventor Robert Cook and the Canadian inventor Roy Thornson. In the 1990s and 2000s, enthusiasts attempted the building and testing of GIT machines. 
Eric Laithwaite, the "Father of Maglev", received a US patent for his own propulsion system, which was claimed to create a linear thrust through gyroscopic and inertial forces. However, after years of theoretical analysis and laboratory testing of actual devices, no rotating (or any other) mechanical device has been found to produce unidirectional reactionless thrust in free space. Helical engine David M. Burns, formerly a NASA engineer at the Marshall Space Flight Center in Alabama, theorized a potential spacecraft propulsion drive that could possibly exploit the known mass-altering effects that occur at near the speed of light. He wrote a paper published in 2019 by NASA in which he describes it as "A new concept for in-space propulsion is proposed in which propellant is not ejected from the engine, but instead is captured to create a nearly infinite specific impulse". Open systems Movement with thrust Several kinds of thrust-generating methods are in use or have been proposed that are propellantless, as they do not work like rockets and reaction mass is not carried nor expelled from the device. However they are not reactionless, as they constitute open systems interacting with electromagnetic waves or various kinds of fields. Most famous propellantless methods are the gravity assist maneuver or gravitational slingshot of a spacecraft accelerating at the expense of the momentum of the planet it orbits, through the gravitational field, or beam-powered propulsion and solar sailing, using the radiation pressure of electromagnetic waves from a distant source like a laser or the sun. More speculative methods have also been proposed, like the Mach effect, the quantum vacuum plasma thruster or various hypotheses associated with resonant cavity thrusters. Movement without thrust Because there is no well-defined "center of mass" in curved spacetime, general relativity allows a stationary object to, in a sense, "change its position" in a counter-intuitive manner, without violating conservation of momentum. The Alcubierre drive is a hypothetical method of apparent faster-than-light propulsion for interstellar travel postulated from the theory of general relativity. Although this concept may be allowed by the currently accepted laws of physics, it remains unproven; implementation would require a negative energy density, and possibly a better understanding of quantum gravity. It is not clear how (or whether) this effect could provide a useful means of accelerating an actual space vehicle and no practical designs have been proposed. "Swimming in spacetime" is a general relativistic effect, where an extended body can change its position by using cyclic deformations in shape to exploit the curvature of space, such as due to a gravitational field. In weak gravitational fields, like that of Earth, the change in position per deformation cycle would be far too small to detect. See also Beam-powered propulsion Bernard Haisch Field propulsion Harold E. Puthoff Inertialess drive Perpetual motion RF resonant cavity thruster (EmDrive) Spacecraft propulsion Stochastic electrodynamics References External links "Breakthroughs" commonly submitted to NASA Inertial Propulsion Engine Reactionless Propulsion (Not) at MathPages Spacecraft propulsion Pseudoscience Perpetual motion Hypothetical technology Propulsion Discovery and invention controversies Fringe physics
Unruh effect
The Unruh effect (also known as the Fulling–Davies–Unruh effect) is a theoretical prediction in quantum field theory that an observer who is uniformly accelerating through empty space will perceive a thermal bath. This means that even in the absence of any external heat sources, an accelerating observer will detect particles and experience a temperature. In contrast, an inertial observer in the same region of spacetime would observe no temperature. In other words, the background appears to be warm from an accelerating reference frame. In layman's terms, an accelerating thermometer in empty space (like one being waved around), without any other contribution to its temperature, will record a non-zero temperature, just from its acceleration. Heuristically, for a uniformly accelerating observer, the ground state of an inertial observer is seen as a mixed state in thermodynamic equilibrium with a non-zero temperature bath. The Unruh effect was first described by Stephen Fulling in 1973, Paul Davies in 1975 and W. G. Unruh in 1976. It is currently not clear whether the Unruh effect has actually been observed, since the claimed observations are disputed. There is also some doubt about whether the Unruh effect implies the existence of Unruh radiation. Temperature equation The Unruh temperature, sometimes called the Davies–Unruh temperature, was derived separately by Paul Davies and William Unruh and is the effective temperature experienced by a uniformly accelerating detector in a vacuum field. It is given by T = ħa / (2πck_B), where ħ is the reduced Planck constant, a is the proper uniform acceleration, c is the speed of light, and k_B is the Boltzmann constant. Thus, for example, a proper acceleration of about 2.47 × 10^20 m/s² corresponds approximately to a temperature of 1 K. Conversely, an acceleration of 1 m/s² corresponds to a temperature of about 4 × 10^−21 K. A short numerical sketch of this conversion is given at the end of this article. The Unruh temperature has the same form as the Hawking temperature T_H = ħg / (2πck_B), with g denoting the surface gravity of a black hole, which was derived by Stephen Hawking in 1974. In the light of the equivalence principle, it is, therefore, sometimes called the Hawking–Unruh temperature. Solving the Unruh temperature for the uniform acceleration, it can be expressed as a = 2πck_BT / ħ = 2π (T / T_P) a_P, where a_P is the Planck acceleration and T_P is the Planck temperature. Explanation Unruh demonstrated theoretically that the notion of vacuum depends on the path of the observer through spacetime. From the viewpoint of the accelerating observer, the vacuum of the inertial observer will look like a state containing many particles in thermal equilibrium, a warm gas. The Unruh effect would only appear to an accelerating observer. And although the Unruh effect would initially be perceived as counter-intuitive, it makes sense if the word vacuum is interpreted in the following specific way. In quantum field theory, the concept of "vacuum" is not the same as "empty space": Space is filled with the quantized fields that make up the universe. Vacuum is simply the lowest possible energy state of these fields. The energy states of any quantized field are defined by the Hamiltonian, based on local conditions, including the time coordinate. According to special relativity, two observers moving relative to each other must use different time coordinates. If those observers are accelerating, there may be no shared coordinate system. Hence, the observers will see different quantum states and thus different vacua. In some cases, the vacuum of one observer is not even in the space of quantum states of the other.
In technical terms, this comes about because the two vacua lead to unitarily inequivalent representations of the quantum field canonical commutation relations. This is because two mutually accelerating observers may not be able to find a globally defined coordinate transformation relating their coordinate choices. An accelerating observer will perceive an apparent event horizon forming (see Rindler spacetime). The existence of Unruh radiation could be linked to this apparent event horizon, putting it in the same conceptual framework as Hawking radiation. On the other hand, the theory of the Unruh effect explains that the definition of what constitutes a "particle" depends on the state of motion of the observer. The free field needs to be decomposed into positive and negative frequency components before defining the creation and annihilation operators. This can only be done in spacetimes with a timelike Killing vector field. This decomposition happens to be different in Cartesian and Rindler coordinates (although the two are related by a Bogoliubov transformation). This explains why the "particle numbers", which are defined in terms of the creation and annihilation operators, are different in both coordinates. The Rindler spacetime has a horizon, and locally any non-extremal black hole horizon is Rindler. So the Rindler spacetime gives the local properties of black holes and cosmological horizons. It is possible to rearrange the metric restricted to these regions to obtain the Rindler metric. The Unruh effect would then be the near-horizon form of Hawking radiation. The Unruh effect is also expected to be present in de Sitter space. It is worth stressing that the Unruh effect only says that, according to uniformly-accelerated observers, the vacuum state is a thermal state specified by its temperature, and one should resist reading too much into the thermal state or bath. Different thermal states or baths at the same temperature need not be equal, for they depend on the Hamiltonian describing the system. In particular, the thermal bath seen by accelerated observers in the vacuum state of a quantum field is not the same as a thermal state of the same field at the same temperature according to inertial observers. Furthermore, uniformly accelerated observers, static with respect to each other, can have different proper accelerations (depending on their separation), which is a direct consequence of relativistic red-shift effects. This makes the Unruh temperature spatially inhomogeneous across the uniformly accelerated frame. Calculations In special relativity, an observer moving with uniform proper acceleration through Minkowski spacetime is conveniently described with Rindler coordinates, which are related to the standard (Cartesian) Minkowski coordinates by The line element in Rindler coordinates, i.e. Rindler space is where , and where is related to the observer's proper time by (here ). An observer moving with fixed traces out a hyperbola in Minkowski space, therefore this type of motion is called hyperbolic motion. The coordinate is related to the Schwarzschild spherical coordinate by the relation An observer moving along a path of constant is uniformly accelerating, and is coupled to field modes which have a definite steady frequency as a function of . These modes are constantly Doppler shifted relative to ordinary Minkowski time as the detector accelerates, and they change in frequency by enormous factors, even after only a short proper time. 
Translation in is a symmetry of Minkowski space: it can be shown that it corresponds to a boost in x, t coordinate around the origin. Any time translation in quantum mechanics is generated by the Hamiltonian operator. For a detector coupled to modes with a definite frequency in , we can treat as "time" and the boost operator is then the corresponding Hamiltonian. In Euclidean field theory, where the minus sign in front of the time in the Rindler metric is changed to a plus sign by multiplying to the Rindler time, i.e. a Wick rotation or imaginary time, the Rindler metric is turned into a polar-coordinate-like metric. Therefore any rotations must close themselves after 2 in a Euclidean metric to avoid being singular. So A path integral with real time coordinate is dual to a thermal partition function, related by a Wick rotation. The periodicity of imaginary time corresponds to a temperature of in thermal quantum field theory. Note that the path integral for this Hamiltonian is closed with period 2. This means that the modes are thermally occupied with temperature . This is not an actual temperature, because is dimensionless. It is conjugate to the timelike polar angle , which is also dimensionless. To restore the length dimension, note that a mode of fixed frequency in at position has a frequency which is determined by the square root of the (absolute value of the) metric at , the redshift factor. This can be seen by transforming the time coordinate of a Rindler observer at fixed to an inertial, co-moving observer observing a proper time. From the Rindler-line-element given above, this is just . The actual inverse temperature at this point is therefore It can be shown that the acceleration of a trajectory at constant in Rindler coordinates is equal to , so the actual inverse temperature observed is Restoring units yields The temperature of the vacuum, seen by an isolated observer accelerating at the Earth's gravitational acceleration of = , is only . For an experimental test of the Unruh effect it is planned to use accelerations up to , which would give a temperature of about . The Rindler derivation of the Unruh effect is unsatisfactory to some, since the detector's path is super-deterministic. Unruh later developed the Unruh–DeWitt particle detector model to circumvent this objection. Other implications The Unruh effect would also cause the decay rate of accelerating particles to differ from inertial particles. Stable particles like the electron could have nonzero transition rates to higher mass states when accelerating at a high enough rate. Unruh radiation Although Unruh's prediction that an accelerating detector would see a thermal bath is not controversial, the interpretation of the transitions in the detector in the non-accelerating frame is. It is widely, although not universally, believed that each transition in the detector is accompanied by the emission of a particle, and that this particle will propagate to infinity and be seen as Unruh radiation. The existence of Unruh radiation is not universally accepted. Smolyaninov claims that it has already been observed, while O'Connell and Ford claim that it is not emitted at all. While these skeptics accept that an accelerating object thermalizes at the Unruh temperature, they do not believe that this leads to the emission of photons, arguing that the emission and absorption rates of the accelerating particle are balanced. 
Experimental observation Researchers claim experiments that successfully detected the Sokolov–Ternov effect may also detect the Unruh effect under certain conditions. Theoretical work in 2011 suggests that accelerating detectors could be used for the direct detection of the Unruh effect with current technology. The Unruh effect may have been observed for the first time in 2019 in the high energy channeling radiation explored by the NA63 experiment at CERN. See also Dynamical Casimir effect Cosmic Background Radiation Hawking radiation Black hole thermodynamics Pair production Quantum information Superradiance Virtual particle References Further reading External links Thermodynamics Quantum field theory Theory of relativity Acceleration Physical phenomena Hypothetical processes
0.772591
0.992409
0.766726
Einstein–de Haas effect
The Einstein–de Haas effect is a physical phenomenon in which a change in the magnetic moment of a free body causes this body to rotate. The effect is a consequence of the conservation of angular momentum. It is strong enough to be observable in ferromagnetic materials. The experimental observation and accurate measurement of the effect demonstrated that the phenomenon of magnetization is caused by the alignment (polarization) of the angular momenta of the electrons in the material along the axis of magnetization. These measurements also allow the separation of the two contributions to the magnetization: that which is associated with the spin and with the orbital motion of the electrons. The effect also demonstrated the close relation between the notions of angular momentum in classical and in quantum physics. The effect was predicted by O. W. Richardson in 1908. It is named after Albert Einstein and Wander Johannes de Haas, who published two papers in 1915 claiming the first experimental observation of the effect. Description The orbital motion of an electron (or any charged particle) around a certain axis produces a magnetic dipole with the magnetic moment of where and are the charge and the mass of the particle, while is the angular momentum of the motion (SI units are used). In contrast, the intrinsic magnetic moment of the electron is related to its intrinsic angular momentum (spin) as (see Landé g-factor and anomalous magnetic dipole moment). If a number of electrons in a unit volume of the material have a total orbital angular momentum of with respect to a certain axis, their magnetic moments would produce the magnetization of . For the spin contribution the relation would be . A change in magnetization, implies a proportional change in the angular momentum, of the electrons involved. Provided that there is no external torque along the magnetization axis applied to the body in the process, the rest of the body (practically all its mass) should acquire an angular momentum due to the law of conservation of angular momentum. Experimental setup The experiments involve a cylinder of a ferromagnetic material suspended with the aid of a thin string inside a cylindrical coil which is used to provide an axial magnetic field that magnetizes the cylinder along its axis. A change in the electric current in the coil changes the magnetic field the coil produces, which changes the magnetization of the ferromagnetic cylinder and, due to the effect described, its angular momentum. A change in the angular momentum causes a change in the rotational speed of the cylinder, monitored using optical devices. The external field interacting with a magnetic dipole cannot produce any torque along the field direction. In these experiments the magnetization happens along the direction of the field produced by the magnetizing coil, therefore, in absence of other external fields, the angular momentum along this axis must be conserved. In spite of the simplicity of such a layout, the experiments are not easy. The magnetization can be measured accurately with the help of a pickup coil around the cylinder, but the associated change in the angular momentum is small. Furthermore, the ambient magnetic fields, such as the Earth field, can provide a 107–108 times larger mechanical impact on the magnetized cylinder. The later accurate experiments were done in a specially constructed demagnetized environment with active compensation of the ambient fields. 
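To get a feel for the size of the effect, the sketch below estimates the angular momentum transferred to a small iron cylinder when its magnetization is fully reversed, and the resulting rotation rate. The cylinder dimensions and material values are illustrative assumptions; taking g ≈ 2, the angular momentum per unit magnetic moment is roughly m_e/e.

# Rough order-of-magnitude sketch (illustrative numbers, not from the text).
import numpy as np

m_e = 9.109e-31      # electron mass, kg
e   = 1.602e-19      # elementary charge, C
M_s = 1.7e6          # saturation magnetization of iron, A/m (approximate)
rho = 7870.0         # density of iron, kg/m^3

r, L = 1.0e-3, 0.10                  # cylinder radius and length, m (assumed)
V = np.pi * r**2 * L                 # volume
d_moment = 2 * M_s * V               # change in magnetic moment on full reversal

dL = (m_e / e) * d_moment            # transferred angular momentum (g ~ 2)
I = 0.5 * rho * V * r**2             # moment of inertia about the cylinder axis
print(f"Delta L ~ {dL:.2e} kg m^2/s, angular speed ~ {dL / I:.2e} rad/s")

The resulting rotation rate is tiny, which is why the resonant torsion-pendulum technique described above is needed.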
The measurement methods typically use the properties of the torsion pendulum, providing periodic current to the magnetization coil at frequencies close to the pendulum's resonance. The experiments measure directly the ratio: and derive the dimensionless gyromagnetic factor of the material from the definition: . The quantity is called the gyromagnetic ratio. History The expected effect and a possible experimental approach were first described by Owen Willans Richardson in a paper published in 1908. The electron spin was discovered in 1925, so only the orbital motion of electrons was considered before that. Richardson derived the expected relation of . The paper mentioned the ongoing attempts to observe the effect at Princeton University. In that historical context the idea of the orbital motion of electrons in atoms contradicted classical physics. This contradiction was addressed in the Bohr model in 1913, and was later removed with the development of quantum mechanics. Samuel Jackson Barnett, motivated by Richardson's paper, realized that the opposite effect should also happen – a change in rotation should cause a magnetization (the Barnett effect). He published the idea in 1909, after which he pursued experimental studies of the effect. Einstein and de Haas published two papers in April 1915 containing a description of the expected effect and the experimental results. In the paper "Experimental proof of the existence of Ampere's molecular currents" they described in detail the experimental apparatus and the measurements performed. Their result for the ratio of the angular momentum of the sample to its magnetic moment (the authors called it ) was very close (within 3%) to the expected value of . It was realized later that their result, with the quoted uncertainty of 10%, was not consistent with the correct value, which is close to . Apparently, the authors underestimated the experimental uncertainties. Barnett reported the results of his measurements at several scientific conferences in 1914. In October 1915 he published the first observation of the Barnett effect in a paper titled "Magnetization by Rotation". His result for was close to the right value of , which was unexpected at that time. In 1918 John Quincy Stewart published the results of his measurements confirming Barnett's result. In his paper he called the phenomenon the 'Richardson effect'. The following experiments demonstrated that the gyromagnetic ratio for iron is indeed close to rather than . This phenomenon, dubbed the "gyromagnetic anomaly", was finally explained after the discovery of the spin and the introduction of the Dirac equation in 1928. The experimental equipment was later donated by Geertruida de Haas-Lorentz, wife of de Haas and daughter of Lorentz, to the Ampère Museum in Lyon, France, in 1961. It was lost and later rediscovered in 2023. Literature about the effect and its discovery Detailed accounts of the historical context and the explanations of the effect can be found in the literature. Commenting on the papers by Einstein, Calaprice in The Einstein Almanac writes: 52. "Experimental Proof of Ampère's Molecular Currents" (Experimenteller Nachweis der Ampereschen Molekularströme) (with Wander J. de Haas). Deutsche Physikalische Gesellschaft, Verhandlungen 17 (1915): 152–170.
Considering [André-Marie] Ampère's hypothesis that magnetism is caused by the microscopic circular motions of electric charges, the authors proposed a design to test [Hendrik] Lorentz's theory that the rotating particles are electrons. The aim of the experiment was to measure the torque generated by a reversal of the magnetisation of an iron cylinder. Calaprice further writes: 53. "Experimental Proof of the Existence of Ampère's Molecular Currents" (with Wander J. de Haas) (in English). Koninklijke Akademie van Wetenschappen te Amsterdam, Proceedings 18 (1915–16). Einstein wrote three papers with Wander J. de Haas on experimental work they did together on Ampère's molecular currents, known as the Einstein–De Haas effect. He immediately wrote a correction to paper 52 (above) when Dutch physicist H. A. Lorentz pointed out an error. In addition to the two papers above [that is 52 and 53] Einstein and de Haas cowrote a "Comment" on paper 53 later in the year for the same journal. This topic was only indirectly related to Einstein's interest in physics, but, as he wrote to his friend Michele Besso, "In my old age I am developing a passion for experimentation." The second paper by Einstein and de Haas was communicated to the "Proceedings of the Royal Netherlands Academy of Arts and Sciences" by Hendrik Lorentz who was the father-in-law of de Haas. According to Viktor Frenkel, Einstein wrote in a report to the German Physical Society: "In the past three months I have performed experiments jointly with de Haas–Lorentz in the Imperial Physicotechnical Institute that have firmly established the existence of Ampère molecular currents." Probably, he attributed the hyphenated name to de Haas, not meaning both de Haas and H. A. Lorentz. Later measurements and applications The effect was used to measure the properties of various ferromagnetic elements and alloys. The key to more accurate measurements was better magnetic shielding, while the methods were essentially similar to those of the first experiments. The experiments measure the value of the g-factor (here we use the projections of the pseudovectors and onto the magnetization axis and omit the sign). The magnetization and the angular momentum consist of the contributions from the spin and the orbital angular momentum: , . Using the known relations , and , where is the g-factor for the anomalous magnetic moment of the electron, one can derive the relative spin contribution to magnetization as: . For pure iron the measured value is , and . Therefore, in pure iron 96% of the magnetization is provided by the polarization of the electrons' spins, while the remaining 4% is provided by the polarization of their orbital angular momenta. See also Barnett effect References External links "Einsteins's only experiment" (links to a directory of the Home Page of Physikalisch-Technische Bundesanstalt (PTB), Germany ). Here is a replica to be seen of the original apparatus on which the Einstein–de Haas experiment was carried out. Experimental physics Magnetism Quantum magnetism Albert Einstein
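The split between spin and orbital contributions discussed above follows from a simple relation: approximating the spin g-factor by 2, the spin fraction of the magnetization works out to 2(g′ − 1)/g′, where g′ is the measured gyromagnetic g-factor. The sketch below evaluates this for an assumed illustrative value g′ = 1.92 (the article's own measured figure is not reproduced above), which gives roughly the 96% spin contribution quoted for pure iron; both the relation in this simplified form and the numerical input are assumptions of this sketch.

# Sketch: spin fraction of the magnetization from the measured g-factor,
# using the approximation g_spin ~ 2.  g_measured = 1.92 is an assumed value.
def spin_fraction(g_measured):
    return 2.0 * (g_measured - 1.0) / g_measured

g_measured = 1.92
f = spin_fraction(g_measured)
print(f"spin contribution ~ {100 * f:.0f}%, orbital ~ {100 * (1 - f):.0f}%")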
0.786381
0.974958
0.766688
Geometric Brownian motion
A geometric Brownian motion (GBM) (also known as exponential Brownian motion) is a continuous-time stochastic process in which the logarithm of the randomly varying quantity follows a Brownian motion (also called a Wiener process) with drift. It is an important example of stochastic processes satisfying a stochastic differential equation (SDE); in particular, it is used in mathematical finance to model stock prices in the Black–Scholes model. Technical definition: the SDE A stochastic process St is said to follow a GBM if it satisfies the following stochastic differential equation (SDE): where is a Wiener process or Brownian motion, and ('the percentage drift') and ('the percentage volatility') are constants. The former parameter is used to model deterministic trends, while the latter parameter models unpredictable events occurring during the motion. Solving the SDE For an arbitrary initial value S0 the above SDE has the analytic solution (under Itô's interpretation): The derivation requires the use of Itô calculus. Applying Itô's formula leads to where is the quadratic variation of the SDE. When , converges to 0 faster than , since . So the above infinitesimal can be simplified by Plugging the value of in the above equation and simplifying we obtain Taking the exponential and multiplying both sides by gives the solution claimed above. Arithmetic Brownian Motion The process for , satisfying the SDE or more generally the process solving the SDE where and are real constants and for an initial condition , is called an Arithmetic Brownian Motion (ABM). This was the model postulated by Louis Bachelier in 1900 for stock prices, in the first published attempt to model Brownian motion, known today as Bachelier model. As was shown above, the ABM SDE can be obtained through the logarithm of a GBM via Itô's formula. Similarly, a GBM can be obtained by exponentiation of an ABM through Itô's formula. Properties of GBM The above solution (for any value of t) is a log-normally distributed random variable with expected value and variance given by They can be derived using the fact that is a martingale, and that The probability density function of is: To derive the probability density function for GBM, we must use the Fokker-Planck equation to evaluate the time evolution of the PDF: where is the Dirac delta function. To simplify the computation, we may introduce a logarithmic transform , leading to the form of GBM: Then the equivalent Fokker-Planck equation for the evolution of the PDF becomes: Define and . By introducing the new variables and , the derivatives in the Fokker-Planck equation may be transformed as: Leading to the new form of the Fokker-Planck equation: However, this is the canonical form of the heat equation. which has the solution given by the heat kernel: Plugging in the original variables leads to the PDF for GBM: When deriving further properties of GBM, use can be made of the SDE of which GBM is the solution, or the explicit solution given above can be used. For example, consider the stochastic process log(St). This is an interesting process, because in the Black–Scholes model it is related to the log return of the stock price. Using Itô's lemma with f(S) = log(S) gives It follows that . This result can also be derived by applying the logarithm to the explicit solution of GBM: Taking the expectation yields the same result as above: . 
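The expectation and variance formulas above are easy to check by Monte Carlo: sample the explicit solution S_t = S_0·exp((μ − σ²/2)t + σW_t) and compare the sample mean with S_0·e^{μt} and the sample variance with S_0²·e^{2μt}(e^{σ²t} − 1). The parameter values in the sketch below are arbitrary illustrations.

# Monte Carlo check of E[S_t] and Var[S_t] against the closed-form expressions.
import numpy as np

rng = np.random.default_rng(0)
S0, mu, sigma, t, n = 100.0, 0.05, 0.2, 1.0, 1_000_000

W = rng.normal(0.0, np.sqrt(t), n)                      # W_t ~ N(0, t)
S = S0 * np.exp((mu - 0.5 * sigma**2) * t + sigma * W)  # exact solution samples

mean_exact = S0 * np.exp(mu * t)
var_exact = S0**2 * np.exp(2 * mu * t) * (np.exp(sigma**2 * t) - 1)
print("mean:", S.mean(), "vs", mean_exact)
print("var :", S.var(), "vs", var_exact)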
Simulating sample paths
# Python code for the plot
import numpy as np
import matplotlib.pyplot as plt

mu = 1        # drift
n = 50        # number of time steps
dt = 0.1      # time step
x0 = 100      # initial value
np.random.seed(1)

sigma = np.arange(0.8, 2, 0.2)   # a range of volatilities, one per path

# Exact one-step GBM increments: exp((mu - sigma^2/2) dt + sigma dW)
x = np.exp(
    (mu - sigma ** 2 / 2) * dt
    + sigma * np.random.normal(0, np.sqrt(dt), size=(len(sigma), n)).T
)
x = np.vstack([np.ones(len(sigma)), x])   # prepend the starting point
x = x0 * x.cumprod(axis=0)                # cumulative product builds the paths

plt.plot(x)
plt.legend(np.round(sigma, 2))
plt.xlabel("$t$")
plt.ylabel("$x$")
plt.title(
    "Realizations of Geometric Brownian Motion with different variances\n $\\mu=1$"
)
plt.show()
Multivariate version GBM can be extended to the case where there are multiple correlated price paths. Each price path follows the underlying process where the Wiener processes are correlated such that where . For the multivariate case, this implies that A multivariate formulation that maintains the driving Brownian motions independent is where the correlation between and is now expressed through the terms. Use in finance Geometric Brownian motion is used to model stock prices in the Black–Scholes model and is the most widely used model of stock price behavior. Some of the arguments for using GBM to model stock prices are: The expected returns of GBM are independent of the value of the process (stock price), which agrees with what we would expect in reality. A GBM process only assumes positive values, just like real stock prices. A GBM process shows the same kind of 'roughness' in its paths as we see in real stock prices. Calculations with GBM processes are relatively easy. However, GBM is not a completely realistic model, in particular it falls short of reality in the following points: In real stock prices, volatility changes over time (possibly stochastically), but in GBM, volatility is assumed constant. In real life, stock prices often show jumps caused by unpredictable events or news, but in GBM, the path is continuous (no discontinuity). Apart from modeling stock prices, geometric Brownian motion has also found applications in the monitoring of trading strategies. Extensions In an attempt to make GBM more realistic as a model for stock prices, also in relation to the volatility smile problem, one can drop the assumption that the volatility is constant. If we assume that the volatility is a deterministic function of the stock price and time, this is called a local volatility model. A straightforward extension of the Black–Scholes GBM is a local volatility SDE whose distribution is a mixture of distributions of GBM, the lognormal mixture dynamics, resulting in a convex combination of Black–Scholes prices for options. If instead we assume that the volatility has a randomness of its own—often described by a different equation driven by a different Brownian motion—the model is called a stochastic volatility model, see for example the Heston model. See also Brownian surface References External links Geometric Brownian motion models for stock movement except in rare events. Excel Simulation of a Geometric Brownian Motion to simulate Stock Prices Non-Newtonian calculus website Trading Strategy Monitoring: Modeling the PnL as a Geometric Brownian Motion Wiener process Non-Newtonian calculus Articles with example Python (programming language) code
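As a companion to the plotting code above, here is a minimal sketch of the multivariate case: correlated driving Brownian increments are generated with a Cholesky factor of an assumed correlation matrix, and each asset then follows its own exact one-step GBM update. The correlation matrix, drifts and volatilities are illustrative assumptions.

# Sketch: two correlated GBM paths via Cholesky-factored Brownian increments.
import numpy as np

rng = np.random.default_rng(1)
S0 = np.array([100.0, 50.0])
mu = np.array([0.05, 0.03])
sigma = np.array([0.2, 0.3])
corr = np.array([[1.0, 0.6],
                 [0.6, 1.0]])        # assumed correlation between the drivers
L = np.linalg.cholesky(corr)

n, dt = 252, 1.0 / 252
S = np.empty((n + 1, 2))
S[0] = S0
for i in range(n):
    dW = L @ rng.normal(0.0, np.sqrt(dt), 2)        # correlated increments
    S[i + 1] = S[i] * np.exp((mu - 0.5 * sigma**2) * dt + sigma * dW)
print(S[-1])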
0.769895
0.995802
0.766663
Speed of electricity
The word electricity refers generally to the movement of electrons, or other charge carriers, through a conductor in the presence of a potential difference or an electric field. The speed of this flow has multiple meanings. In everyday electrical and electronic devices, the signals travel as electromagnetic waves typically at 50%–99% of the speed of light in vacuum. The electrons themselves move much more slowly. See drift velocity and electron mobility. Electromagnetic waves The speed at which energy or signals travel down a cable is actually the speed of the electromagnetic wave traveling along (guided by) the cable; i.e., a cable is a form of waveguide. The propagation of the wave is affected by the interaction with the material(s) in and surrounding the cable, caused by the presence of electric charge carriers, interacting with the electric field component, and magnetic dipoles, interacting with the magnetic field component. These interactions are typically described using mean field theory by the permeability and the permittivity of the materials involved. The energy/signal usually flows overwhelmingly outside the electric conductor of a cable. The purpose of the conductor is thus not to conduct energy, but to guide the energy-carrying wave. Velocity of electromagnetic waves in good dielectrics The velocity of electromagnetic waves in a low-loss dielectric is given by v = c/√(μr εr), where c = speed of light in vacuum; μ0 = the permeability of free space = 4π × 10⁻⁷ H/m; μr = relative magnetic permeability of the material (usually in good dielectrics, e.g. vacuum, air, Teflon, μr ≈ 1); ε0 = the permittivity of free space = 8.854 × 10⁻¹² F/m; and εr = relative permittivity of the material (usually taken as approximately 1 for good conductors, e.g. copper, silver, gold). Velocity of electromagnetic waves in good conductors The velocity of transverse electromagnetic (TEM) mode waves in a good conductor is given by v = √(2ω/(μσ)), where f = frequency; ω = angular frequency = 2πf; σ = conductivity of the material (annealed copper has σ ≈ 5.8 × 10⁷ S/m, and for hard-drawn copper the conductivity relative to annealed copper may be as low as 0.97); and the permeability μ = μr μ0 is defined as above: μ0 = the permeability of free space = 4π × 10⁻⁷ H/m, μr = relative magnetic permeability of the material. Nonmagnetic conductive materials such as copper typically have μr near 1. This velocity is the speed with which electromagnetic waves penetrate into the conductor and is not the drift velocity of the conduction electrons. In copper at 60 Hz, this velocity is about 3.2 m/s. As a consequence of Snell's law and the extremely low speed, electromagnetic waves always enter good conductors in a direction that is within a milliradian of normal to the surface, regardless of the angle of incidence. Electromagnetic waves in circuits In the theoretical investigation of electric circuits, the velocity of propagation of the electromagnetic field through space is usually not considered; the field is assumed, as a precondition, to be present throughout space. The magnetic component of the field is considered to be in phase with the current, and the electric component is considered to be in phase with the voltage. The electric field starts at the conductor, and propagates through space at the velocity of light, which depends on the material it is traveling through. The electromagnetic fields do not move through space. It is the electromagnetic energy that moves. The corresponding fields simply grow and decline in a region of space in response to the flow of energy.
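The two velocity expressions above can be evaluated directly. The sketch below does so for a polyethylene-like dielectric (εr ≈ 2.25, μr ≈ 1, an assumed example) and for copper at 60 Hz, reproducing the few-metres-per-second figure quoted in the text.

# Sketch: EM wave speed in a low-loss dielectric and in a good conductor.
import numpy as np

c = 299_792_458.0
mu0 = 4e-7 * np.pi

# Low-loss dielectric: v = c / sqrt(mu_r * eps_r)
eps_r, mu_r = 2.25, 1.0               # polyethylene-like values (assumed)
print("dielectric:", c / np.sqrt(mu_r * eps_r), "m/s")   # about 2/3 of c

# Good conductor: v = sqrt(2 * omega / (mu * sigma))
f = 60.0
omega = 2 * np.pi * f
sigma = 5.8e7                         # annealed copper conductivity, S/m
print("copper at 60 Hz:", np.sqrt(2 * omega / (mu0 * sigma)), "m/s")  # ~3.2 m/s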
At any point in space, the electric field corresponds not to the condition of the electric energy flow at that moment, but to that of the flow at a moment earlier. The latency is determined by the time required for the field to propagate from the conductor to the point under consideration. In other words, the greater the distance from the conductor, the more the electric field lags. Since the velocity of propagation is very high – about 300,000 kilometers per second – the wave of an alternating or oscillating current, even of high frequency, is of considerable length. At 60 cycles per second, the wavelength is 5,000 kilometers, and even at 100,000 hertz, the wavelength is 3 kilometers. This is a very large distance compared to those typically used in field measurement and application. The important part of the electric field of a conductor extends to the return conductor, which usually is only a few feet distant. At greater distance, the aggregate field can be approximated by the differential field between conductor and return conductor, which tend to cancel. Hence, the intensity of the electric field is usually inappreciable at a distance which is still small compared to the wavelength. Within the range in which an appreciable field exists, this field is practically in phase with the flow of energy in the conductor. That is, the velocity of propagation has no appreciable effect unless the return conductor is very distant, or entirely absent, or the frequency is so high that the distance to the return conductor is an appreciable portion of the wavelength. Charge carrier drift The drift velocity deals with the average velocity of a particle, such as an electron, due to an electric field. In general, an electron will propagate randomly in a conductor at the Fermi velocity. Free electrons in a conductor follow a random path. Without the presence of an electric field, the electrons have no net velocity. When a DC voltage is applied, the electron drift velocity will increase in speed proportionally to the strength of the electric field. The drift velocity in a 2 mm diameter copper wire in 1 ampere current is approximately 8 cm per hour. AC voltages cause no net movement. The electrons oscillate back and forth in response to the alternating electric field, over a distance of a few micrometers – see example calculation. See also Speed of light Speed of gravity Speed of sound Telegrapher's equations Reflections of signals on conducting lines References Further reading Alfvén, H. (1950). Cosmical electrodynamics. Oxford: Clarendon Press Alfvén, H. (1981). Cosmic plasma. Taylor & Francis US. "Velocity of Propagation of Electric Field", Theory and Calculation of Transient Electric Phenomena and Oscillations by Charles Proteus Steinmetz, Chapter VIII, p. 394-, McGraw-Hill, 1920. Fleming, J. A. (1911). Propagation of electric currents in telephone & telegraph conductors. New York: Van Nostrand Electromagnetism Electricity
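The drift-velocity figure quoted above (about 8 cm per hour for a 2 mm copper wire carrying 1 A) follows from v = I/(nqA). The sketch below uses a free-electron density for copper of about 8.5 × 10²⁸ m⁻³, which is an assumed textbook value.

# Sketch: electron drift velocity v = I / (n * q * A) in a copper wire.
import numpy as np

I = 1.0                      # current, A
d = 2.0e-3                   # wire diameter, m
n = 8.5e28                   # free-electron density of copper, 1/m^3 (assumed)
q = 1.602e-19                # elementary charge, C

A = np.pi * (d / 2) ** 2     # cross-sectional area
v = I / (n * q * A)
print(f"v = {v:.2e} m/s  =  {v * 3600 * 100:.1f} cm/hour")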
0.771342
0.993932
0.766661
Neorealism (international relations)
Neorealism or structural realism is a theory of international relations that emphasizes the role of power politics in international relations, sees competition and conflict as enduring features and sees limited potential for cooperation. The anarchic state of the international system means that states cannot be certain of other states' intentions and their security, thus prompting them to engage in power politics. It was first outlined by Kenneth Waltz in his 1979 book Theory of International Politics. Alongside neoliberalism, neorealism is one of the two most influential contemporary approaches to international relations; the two perspectives dominated international relations theory from the 1960s to the 1990s. Neorealism emerged from the North American discipline of political science, and reformulates the classical realist tradition of E. H. Carr, Hans Morgenthau, George Kennan, and Reinhold Niebuhr. Neorealism is subdivided into defensive and offensive neorealism. Origins Neorealism is an ideological departure from Hans Morgenthau's writing on classical realism. Classical realism originally explained the machinations of international politics as being based on human nature and therefore subject to the ego and emotion of world leaders. Neorealist thinkers instead propose that structural constraints—not strategy, egoism, or motivation—will determine behavior in international relations. John Mearsheimer made significant distinctions between his version of offensive neorealism and Morgenthau in his book titled The Tragedy of Great Power Politics. Theory Structural realism holds that the nature of the international structure is defined by its ordering principle (anarchy), units of the system (states), and by the distribution of capabilities (measured by the number of great powers within the international system), with only the last being considered an independent variable with any meaningful change over time. The anarchic ordering principle of the international structure is decentralized, meaning there is no formal central authority; every sovereign state is formally equal in this system. These states act according to the logic of egoism, meaning states seek their own interest and will not subordinate their interest to the interests of other states. States are assumed at a minimum to want to ensure their own survival as this is a prerequisite to pursue other goals. This driving force of survival is the primary factor influencing their behavior and in turn ensures states develop offensive military capabilities for foreign interventionism and as a means to increase their relative power. Because states can never be certain of other states' future intentions, there is a lack of trust between states which requires them to be on guard against relative losses of power which could enable other states to threaten their survival. This lack of trust, based on uncertainty, is called the security dilemma. States are deemed similar in terms of needs but not in capabilities for achieving them. The positional placement of states in terms of abilities determines the distribution of capabilities. The structural distribution of capabilities then limits cooperation among states through fears of relative gains made by other states, and the possibility of dependence on other states. The desire and relative abilities of each state to maximize relative power constrain each other, resulting in a 'balance of power', which shapes international relations. 
It also gives rise to the 'security dilemma' that all nations face. There are two ways in which states balance power: internal balancing and external balancing. Internal balancing occurs as states grow their own capabilities by increasing economic growth and/or increasing military spending. External balancing occurs as states enter into alliances to check the power of more powerful states or alliances. Neorealism sees states as "black boxes," as the structure of the international system is emphasized rather than the units and their unique characteristics within it as being causal. Neorealists contend that there are essentially three possible systems according to changes in the distribution of capabilities, defined by the number of great powers within the international system. A unipolar system contains only one great power, a bipolar system contains two great powers, and a multipolar system contains more than two great powers. Neorealists conclude that a bipolar system is more stable (less prone to great power war and systemic change) than a multipolar system because balancing can only occur through internal balancing as there are no extra great powers with which to form alliances. Because there is only internal balancing in a bipolar system, rather than external balancing, there is less opportunity for miscalculations and therefore less chance of great power war. That is a simplification and a theoretical ideal. Neorealists argue that processes of emulation and competition lead states to behave in the aforementioned ways. Emulation leads states to adopt the behaviors of successful states (for example, those victorious in war), whereas competition leads states to vigilantly ensure their security and survival through the best means possible. Due to the anarchic nature of the international system and the inability of states to rely on other states or organizations, states have to engage in "self-help." For neorealists, social norms are considered largely irrelevant. This is in contrast to some classical realists which did see norms as potentially important. Neorealists are also skeptical of the ability of international organizations to act independently in the international system and facilitate cooperation between states. Defensive realism Structural realism has become divided into two branches, defensive and offensive realism, following the publication of Mearsheimer's The Tragedy of Great Power Politics in 2001. Waltz's original formulation of neorealism is now sometimes called defensive realism, while Mearsheimer's modification of the theory is referred to as offensive realism. Both branches agree that the structure of the system is what causes states to compete, but defensive realism posits that most states concentrate on maintaining their security (i.e. states are security maximizers), while offensive realism claims that all states seek to gain as much power as possible (i.e. states are power maximizers). A foundational study in the area of defensive realism is Robert Jervis' classic 1978 article on the "security dilemma." It examines how uncertainty and the offense-defense balance may heighten or soften the security dilemma. Building on Jervis, Stephen Van Evera explores the causes of war from a defensive realist perspective. Offensive realism Offensive realism, developed by Mearsheimer differs in the amount of power that states desire. Mearsheimer proposes that states maximize relative power ultimately aiming for regional hegemony. 
In addition to Mearsheimer, a number of other scholars have sought to explain why states expand when opportunities to do so arise. For instance, Randall Schweller refers to states' revisionist agendas to account for their aggressive military action. Eric Labs investigates the expansion of war aims during wartime as an example of offensive behavior. Fareed Zakaria analyzes the history of US foreign relations from 1865 to 1914 and asserts that foreign interventions during this period were not motivated by worries about external threats but by a desire to expand US influence. Scholarly debate Within realist thought While neorealists agree that the structure of the international relations is the primary impetus in seeking security, there is disagreement among neorealist scholars as to whether states merely aim to survive or whether states want to maximize their relative power. The former represents the ideas of Kenneth Waltz, while the latter represents the ideas of John Mearsheimer and offensive realism. Other debates include the extent to which states balance against power (in Waltz's original neorealism and classic realism), versus the extent to which states balance against threats (as introduced in Stephen Walt's 'The Origins of Alliances' (1987)), or balance against competing interests (as introduced in Randall Schweller's 'Deadly Imbalances' (1998)). With other schools of thought Neorealists conclude that because war is an effect of the anarchic structure of the international system, it is likely to continue in the future. Indeed, neorealists often argue that the ordering principle of the international system has not fundamentally changed from the time of Thucydides to the advent of nuclear warfare. The view that long-lasting peace is not likely to be achieved is described by other theorists as a largely pessimistic view of international relations. One of the main challenges to neorealist theory is the democratic peace theory and supporting research, such as the book Never at War. Neorealists answer this challenge by arguing that democratic peace theorists tend to pick and choose the definition of democracy to achieve the desired empirical result. For example, the Germany of Kaiser Wilhelm II, the Dominican Republic of Juan Bosch, and the Chile of Salvador Allende are not considered to be "democracies of the right kind" or the conflicts do not qualify as wars according to these theorists. Furthermore, they claim several wars between democratic states have been averted only by causes other than ones covered by democratic peace theory. Advocates of democratic peace theory see the spreading of democracy as helping to mitigate the effects of anarchy. With enough democracies in the world, Bruce Russett thinks that it "may be possible in part to supersede the 'realist' principles (anarchy, the security dilemma of states) that have dominated practice since at least the seventeenth century." John Mueller believes that it is not the spreading of democracy but rather other conditions (e.g., power) that bring about democracy and peace. In consenting with Mueller's argument, Kenneth Waltz notes that "some of the major democracies—Britain in the nineteenth century and the United States in the twentieth century—have been among the most powerful states of their eras." 
One of the most notable schools contending with neorealist thought, aside from neoliberalism, is the constructivist school, which is often seen to disagree with the neorealist focus on power and instead emphasises a focus on ideas and identity as an explanatory point for international relations trends. Recently, however, a school of thought called the English School merges neo-realist tradition with the constructivist technique of analyzing social norms to provide an increasing scope of analysis for international relations. Criticism Neorealism has been criticized from various directions. Other major paradigms of international relations scholarship, such as liberal and constructivist approaches have criticized neorealist scholarship in terms of theory and empirics. Within realism, classical realists and neoclassical realists have also challenged some aspects of neorealism. Among the issues that neorealism has been criticized over is the neglect of domestic politics, race, gains from trade, the pacifying effects of institutions, and the relevance of regime type for foreign policy behavior. David Strang argues that neorealist predictions fail to account for transformations in sovereignty over time and across regions. These transformations in sovereignty have had implications for cooperation and competition, as polities that were recognized as sovereign have seen considerably greater stability. In response to criticisms that neorealism lacks relevance for contemporary international policy and does a poor job explaining the foreign policy behavior of major powers, Charles Glaser wrote in 2003, "this is neither surprising nor a serious problem, because scholars who use a realist lens to understand international politics can, and have, without inconsistency or contradiction also employed other theories to understand issues that fall outside realism's central focus." Notable neorealists Robert J. Art Richard K. Betts Robert Gilpin Robert W. Tucker Joseph Grieco Robert Jervis Christopher Layne Jack Snyder John Mearsheimer Stephen Walt Kenneth Waltz Stephen Van Evera Barry Posen Charles L. Glaser Marc Trachtenberg Gottfried-Karl Kindermann See also Foreign interventionism International relations theory Mercantilism Neofunctionalism Neoliberalism Realpolitik Notes References Further reading Books Waltz, Kenneth N. (1959). Man, The State, and War: A Theoretical Analysis . Walt, Stephen (1990). The Origins of Alliances Van Evera, Stephen. (2001). Causes of War Waltz, Kenneth N. (2008). Realism and International Politics Art, Robert J. (2008). America's Grand Strategy and World Politics Glaser, Charles L. (2010). Rational Theory of International Politics: The Logic of Competition and Cooperation Articles Jervis, Robert (1978). Cooperation Under the Security Dilemma (World Politics, Vol. 30, No.2, 1978) Art, Robert J. (1998). Geopolitics Updated: The Strategy of Selective Engagement (International Security, Vol. 23, No. 3, 1998–99) Farber, Henry S.; Gowa, Jeanne (1995). Polities and Peace (International Security, Vol. 20, No. 2, 1995) Gilpin, Robert (1988). The Theory of Hegemonic War (The Journal of Interdisciplinary History, Vol. 18, No. 4, 1988) Posen, Barry (2003). Command of the Commons: The Military Foundations of U.S. Hegemony (International Security, Vol. 28, No. 1, 2003) External links Theory Talks Interview with Kenneth Waltz, founder of neorealism (May 2011) Theory Talks Interview with neorealist Robert Jervis (July 2008) International relations theory
0.769718
0.996023
0.766658
Havok (software)
Havok is a middleware software suite developed by the Irish company Havok. Havok provides physics engine, navigation, and cloth simulation components that can be integrated into video game engines. In 2007, Intel acquired Havok Inc. In 2008, Havok was honored at the 59th Annual Technology & Engineering Emmy Awards for advancing the development of physics engines in electronic entertainment. In 2015, Microsoft acquired Havok. Products The Havok middleware suite consists of the following modules: Havok Physics: It is designed primarily for video games, and allows for real-time collision and dynamics of rigid bodies in three dimensions. It provides multiple types of dynamic constraints between rigid bodies (e.g. for ragdoll physics), and has a highly optimized collision detection library. By using dynamical simulation, Havok Physics allows for more realistic virtual worlds in games. The company was developing a specialized version of Havok Physics called Havok FX that made use of ATI and Nvidia GPUs for physics simulations, but the goal of GPU acceleration did not materialize until several years later. Havok Navigation: In 2009, Havok released Havok AI, which provides advanced pathfinding capabilities for games. Havok AI provides navigation mesh generation, pathfinding and path following for video game environments. In 2024, this product was renamed to Havok Navigation. Havok Cloth: Released in 2008, Havok Cloth deals with efficient simulation of character garments and soft body dynamics. Havok Destruction (discontinued): Also released in 2008, Havok Destruction provides tools for creation of destructible and deformable rigid body environments. Havok Animation Studio (discontinued): Havok Animation Studio is formally known as Havok Behavior and Havok Animation. Havok Behavior is a runtime SDK for controlling game character animation at a high level using finite state machines. Havok Animation provides efficient playback and compression of character animations in games, and features such as inverse kinematics. Havok Script (discontinued): Havok Script is a Lua-compatible virtual machine designed for video game development. It is shipped as part of the Havok Script Studio. Havok Vision Engine (discontinued): In 2011, Havok acquired German game engine development company Trinigy and their Vision Engine and toolset. Supported platforms Version 1.0 of the Havok SDK was unveiled at the Game Developers Conference (GDC) in 2000. The Havok SDK is multi-platform by nature and is always updated to run on the majority of the latest platforms. Licensees are given access to most of the C/C++ source-code, giving them the freedom to customize the engine's features, or port it to different platforms although some libraries are only provided in binary format. In March 2011, Havok showed off a version of the Havok physics engine designed for use with the Sony Xperia Play, or more specifically, Android 2.3. During Microsoft's //BUILD/ 2012 conference, Havok unveiled a full technology suite for Windows 8, Windows RT, Windows Phone 8 and later Windows 10. As of February 2023, Havok supports 18 targets across 10 platforms. These platforms include: Windows, Linux, Xbox Series S/X, Playstation 5, iOS, Nintendo Switch and Android. Prebuilt engines Unity In 2019, Unity and Havok signed a partnership to build a complete physics solution for DOTS-based projects in Unity. This was completed and released as production ready in December 2022. 
Unreal Engine Havok maintains integrations for all of their products to Epic's Unreal Engine. Havok Physics can be used to replace the inbuilt physics engine (Chaos Physics) at an engine level, while Havok Navigation is a stand alone plugin, and Havok Cloth is a separate tool that works alongside the engine. Babylon.js In April 2023, Babylon.js 6.0 was released with a physics implementation by Havok. This implementation was released as a WASM plugin and involved an overhaul of the Babylon.js Physics API. Usage Video games The first game to use Havok Physics was London Racer by Davilex Games. In 2023, Havok products were used in twelve of the top twenty best selling video games in the United States. Other software Havok can also be found in: Futuremark's 3DMark2001 and 03 benchmarking tools a plug-in for Maya animation software Valve's Source game engine uses VPhysics, which is a physics engine modified from Havok Havok addons in 3D Studio Max Havok supplies tools (the "Havok Content Tools") for export of assets for use with all Havok products from Autodesk 3ds Max, Autodesk Maya, and (formerly) Autodesk Softimage. Havok was also used in the virtual world Second Life, with all physics handled by its online simulator servers, rather than by the users' client computers. An upgrade to Havok version 4 was released in April 2008 and an upgrade to version 7 started in June 2010. Second Life resident Emilin Nakamori constructed a weight-driven, pendulum-regulated mechanical clock functioning entirely by Havok Physics in March 2019. References 2000 software Computer physics engines Microsoft software Middleware for video games Video game development software Video game engines Virtual reality
0.771236
0.994043
0.766643
Classical electromagnetism and special relativity
The theory of special relativity plays an important role in the modern theory of classical electromagnetism. It gives formulas for how electromagnetic objects, in particular the electric and magnetic fields, are altered under a Lorentz transformation from one inertial frame of reference to another. It sheds light on the relationship between electricity and magnetism, showing that frame of reference determines if an observation follows electric or magnetic laws. It motivates a compact and convenient notation for the laws of electromagnetism, namely the "manifestly covariant" tensor form. Maxwell's equations, when they were first stated in their complete form in 1865, would turn out to be compatible with special relativity. Moreover, the apparent coincidences in which the same effect was observed due to different physical phenomena by two different observers would be shown to be not coincidental in the least by special relativity. In fact, half of Einstein's 1905 first paper on special relativity, "On the Electrodynamics of Moving Bodies," explains how to transform Maxwell's equations. Transformation of the fields between inertial frames The E and B fields This equation considers two inertial frames. The primed frame is moving relative to the unprimed frame at velocity v. Fields defined in the primed frame are indicated by primes, and fields defined in the unprimed frame lack primes. The field components parallel to the velocity v are denoted by and while the field components perpendicular to v are denoted as and . In these two frames moving at relative velocity v, the E-fields and B-fields are related by: where is called the Lorentz factor and c is the speed of light in free space. The equations above are in SI. In CGS these equations can be derived by replacing with , and with , except . Lorentz factor is the same in both systems. The inverse transformations are the same except . An equivalent, alternative expression is: where is the velocity unit vector. With previous notations, one actually has and . Component by component, for relative motion along the x-axis , this works out to be the following: If one of the fields is zero in one frame of reference, that doesn't necessarily mean it is zero in all other frames of reference. This can be seen by, for instance, making the unprimed electric field zero in the transformation to the primed electric field. In this case, depending on the orientation of the magnetic field, the primed system could see an electric field, even though there is none in the unprimed system. This does not mean two completely different sets of events are seen in the two frames, but that the same sequence of events is described in two different ways (see Moving magnet and conductor problem below). If a particle of charge q moves with velocity u with respect to frame S, then the Lorentz force in frame S is: In frame S', the Lorentz force is: A derivation for the transformation of the Lorentz force for the particular case u = 0 is given here. A more general one can be seen here. The transformations in this form can be made more compact by introducing the electromagnetic tensor (defined below), which is a covariant tensor. The D and H fields For the electric displacement D and magnetic intensity H, using the constitutive relations and the result for c2: gives Analogously for E and B, the D and H form the electromagnetic displacement tensor. 
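The component-by-component transformation for a boost along x can be written as a short function. The sketch below also checks that the two field invariants E·B and E² − c²B² are unchanged by the boost, which is a useful sanity test of the formulas above; the field values are arbitrary illustrations.

# Sketch: Lorentz transformation of E and B for a boost with speed v along x.
import numpy as np

c = 299_792_458.0

def boost_fields(E, B, v):
    g = 1.0 / np.sqrt(1.0 - (v / c) ** 2)          # Lorentz factor
    Ex, Ey, Ez = E
    Bx, By, Bz = B
    Ep = np.array([Ex, g * (Ey - v * Bz), g * (Ez + v * By)])
    Bp = np.array([Bx, g * (By + v * Ez / c**2), g * (Bz - v * Ey / c**2)])
    return Ep, Bp

E = np.array([1.0, 2.0, 3.0])            # V/m, illustrative
B = np.array([1e-8, 2e-8, -1e-8])        # T, illustrative
Ep, Bp = boost_fields(E, B, 0.6 * c)

# Both invariants should come out (numerically) the same in the two frames.
print(np.dot(E, B), np.dot(Ep, Bp))
print(np.dot(E, E) - c**2 * np.dot(B, B), np.dot(Ep, Ep) - c**2 * np.dot(Bp, Bp))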
The φ and A fields An alternative simpler transformation of the EM field uses the electromagnetic potentials - the electric potential φ and magnetic potential A: where is the parallel component of A to the direction of relative velocity between frames v, and is the perpendicular component. These transparently resemble the characteristic form of other Lorentz transformations (like time-position and energy-momentum), while the transformations of E and B above are slightly more complicated. The components can be collected together as: The ρ and J fields Analogously for the charge density ρ and current density J, Collecting components together: Non-relativistic approximations For speeds v ≪ c, the relativistic factor γ ≈ 1, which yields: so that there is no need to distinguish between the spatial and temporal coordinates in Maxwell's equations. Relationship between electricity and magnetism Deriving magnetism from electric laws The chosen reference frame determines whether an electromagnetic phenomenon is viewed as an electric or magnetic effect or a combination of the two. Authors usually derive magnetism from electrostatics when special relativity and charge invariance are taken into account. The Feynman Lectures on Physics (vol. 2, ch. 13–6) uses this method to derive the magnetic force on charge in parallel motion next to a current-carrying wire. See also Haskell and Landau. If the charge instead moves perpendicular to a current-carrying wire, electrostatics cannot be used to derive the magnetic force. In this case, it can instead be derived by considering the relativistic compression of the electric field due to the motion of the charges in the wire. Fields intermix in different frames The above transformation rules show that the electric field in one frame contributes to the magnetic field in another frame, and vice versa. This is often described by saying that the electric field and magnetic field are two interrelated aspects of a single object, called the electromagnetic field. Indeed, the entire electromagnetic field can be represented in a single rank-2 tensor called the electromagnetic tensor; see below. Moving magnet and conductor problem A famous example of the intermixing of electric and magnetic phenomena in different frames of reference is called the "moving magnet and conductor problem", cited by Einstein in his 1905 paper on Special Relativity. If a conductor moves with a constant velocity through the field of a stationary magnet, eddy currents will be produced due to a magnetic force on the electrons in the conductor. In the rest frame of the conductor, on the other hand, the magnet will be moving and the conductor stationary. Classical electromagnetic theory predicts that precisely the same microscopic eddy currents will be produced, but they will be due to an electric force. Covariant formulation in vacuum The laws and mathematical objects in classical electromagnetism can be written in a form which is manifestly covariant. Here, this is only done so for vacuum (or for the microscopic Maxwell equations, not using macroscopic descriptions of materials such as electric permittivity), and uses SI units. This section uses Einstein notation, including Einstein summation convention. See also Ricci calculus for a summary of tensor index notations, and raising and lowering indices for definition of superscript and subscript indices, and how to switch between them. The Minkowski metric tensor η here has metric signature (+ − − −). 
Field tensor and 4-current The above relativistic transformations suggest the electric and magnetic fields are coupled together, in a mathematical object with 6 components: an antisymmetric second-rank tensor, or a bivector. This is called the electromagnetic field tensor, usually written as Fμν. In matrix form: where c the speed of light - in natural units c = 1. There is another way of merging the electric and magnetic fields into an antisymmetric tensor, by replacing E/c → B and B → − E/c, to get the dual tensor Gμν. In the context of special relativity, both of these transform according to the Lorentz transformation according to , where Λαν is the Lorentz transformation tensor for a change from one reference frame to another. The same tensor is used twice in the summation. The charge and current density, the sources of the fields, also combine into the four-vector called the four-current. Maxwell's equations in tensor form Using these tensors, Maxwell's equations reduce to: where the partial derivatives may be written in various ways, see 4-gradient. The first equation listed above corresponds to both Gauss's Law (for β = 0) and the Ampère-Maxwell Law (for β = 1, 2, 3). The second equation corresponds to the two remaining equations, Gauss's law for magnetism (for β = 0) and Faraday's Law (for β = 1, 2, 3). These tensor equations are manifestly covariant, meaning they can be seen to be covariant by the index positions. This short form of Maxwell's equations illustrates an idea shared amongst some physicists, namely that the laws of physics take on a simpler form when written using tensors. By lowering the indices on Fαβ to obtain Fαβ: the second equation can be written in terms of Fαβ as: where is the contravariant Levi-Civita symbol. Notice the cyclic permutation of indices in this equation: . Another covariant electromagnetic object is the electromagnetic stress-energy tensor, a covariant rank-2 tensor which includes the Poynting vector, Maxwell stress tensor, and electromagnetic energy density. 4-potential The EM field tensor can also be written where is the four-potential and is the four-position. Using the 4-potential in the Lorenz gauge, an alternative manifestly-covariant formulation can be found in a single equation (a generalization of an equation due to Bernhard Riemann by Arnold Sommerfeld, known as the Riemann–Sommerfeld equation, or the covariant form of the Maxwell equations): where is the d'Alembertian operator, or four-Laplacian. See also Mathematical descriptions of the electromagnetic field Relativistic electromagnetism References Electromagnetism Special relativity
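The rank-2 field tensor described above can be assembled explicitly and boosted via F′ = ΛFΛᵀ; reading E′ and B′ back from the boosted tensor and comparing with the component formulas earlier in the article is a quick consistency check. The sketch below works in units with c = 1 and uses one common sign convention for F^{μν} with metric signature (+ − − −); that convention, and the sample field values, are assumptions of this sketch since conventions differ between texts.

# Sketch: build F^{mu nu} from E and B (c = 1, metric +---, one common sign
# convention), boost it with F' = Lambda F Lambda^T, and read back E', B'.
import numpy as np

def field_tensor(E, B):
    Ex, Ey, Ez = E
    Bx, By, Bz = B
    return np.array([[0.0, -Ex, -Ey, -Ez],
                     [ Ex, 0.0, -Bz,  By],
                     [ Ey,  Bz, 0.0, -Bx],
                     [ Ez, -By,  Bx, 0.0]])

def boost_x(beta):
    g = 1.0 / np.sqrt(1.0 - beta**2)
    L = np.eye(4)
    L[0, 0] = L[1, 1] = g
    L[0, 1] = L[1, 0] = -g * beta
    return L

E = np.array([1.0, 2.0, 3.0])          # illustrative values (c = 1 units)
B = np.array([0.5, -1.0, 0.25])
F = field_tensor(E, B)
L = boost_x(0.6)
Fp = L @ F @ L.T                       # transformed field tensor

E_prime = np.array([Fp[1, 0], Fp[2, 0], Fp[3, 0]])
B_prime = np.array([Fp[3, 2], Fp[1, 3], Fp[2, 1]])
print(E_prime, B_prime)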
0.776043
0.987872
0.766631
Degrees of freedom (mechanics)
In physics, the degrees of freedom (DOF) of a mechanical system is the number of independent parameters that define its configuration or state. It is important in the analysis of systems of bodies in mechanical engineering, structural engineering, aerospace engineering, robotics, and other fields. The position of a single railcar (engine) moving along a track has one degree of freedom because the position of the car is defined by the distance along the track. A train of rigid cars connected by hinges to an engine still has only one degree of freedom because the positions of the cars behind the engine are constrained by the shape of the track. An automobile with highly stiff suspension can be considered to be a rigid body traveling on a plane (a flat, two-dimensional space). This body has three independent degrees of freedom consisting of two components of translation and one angle of rotation. Skidding or drifting is a good example of an automobile's three independent degrees of freedom. The position and orientation of a rigid body in space is defined by three components of translation and three components of rotation, which means that it has six degrees of freedom. The exact constraint mechanical design method manages the degrees of freedom to neither underconstrain nor overconstrain a device. Motions and dimensions The position of an n-dimensional rigid body is defined by the rigid transformation, [T] = [A, d], where d is an n-dimensional translation and A is an n × n rotation matrix, which has n translational degrees of freedom and n(n − 1)/2 rotational degrees of freedom. The number of rotational degrees of freedom comes from the dimension of the rotation group SO(n). A non-rigid or deformable body may be thought of as a collection of many minute particles (infinite number of DOFs), this is often approximated by a finite DOF system. When motion involving large displacements is the main objective of study (e.g. for analyzing the motion of satellites), a deformable body may be approximated as a rigid body (or even a particle) in order to simplify the analysis. The degree of freedom of a system can be viewed as the minimum number of coordinates required to specify a configuration. Applying this definition, we have: For a single particle in a plane two coordinates define its location so it has two degrees of freedom; A single particle in space requires three coordinates so it has three degrees of freedom; Two particles in space have a combined six degrees of freedom; If two particles in space are constrained to maintain a constant distance from each other, such as in the case of a diatomic molecule, then the six coordinates must satisfy a single constraint equation defined by the distance formula. This reduces the degree of freedom of the system to five, because the distance formula can be used to solve for the remaining coordinate once the other five are specified. Rigid bodies A single rigid body has at most six degrees of freedom (6 DOF) 3T3R consisting of three translations 3T and three rotations 3R. See also Euler angles. 
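The count of n translational plus n(n − 1)/2 rotational degrees of freedom can be tabulated with a one-line function; the familiar values of 3 in the plane and 6 in space drop out directly.

# Sketch: degrees of freedom of a rigid body in n-dimensional space.
def rigid_body_dof(n):
    return n + n * (n - 1) // 2      # translations + dimension of SO(n)

for n in range(1, 5):
    print(n, "D:", rigid_body_dof(n))   # prints 1, 3, 6, 10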
For example, the motion of a ship at sea has the six degrees of freedom of a rigid body, and is described as: Translation and rotation: Walking (or surging): Moving forward and backward; Strafing (or swaying): Moving left and right; Elevating (or heaving): Moving up and down; Roll rotation: Pivots side to side; Pitch rotation: Tilts forward and backward; Yaw rotation: Swivels left and right; For example, the trajectory of an airplane in flight has three degrees of freedom and its attitude along the trajectory has three degrees of freedom, for a total of six degrees of freedom. For rolling in flight and ship dynamics, see roll (aviation) and roll (ship motion), respectively. An important derivative is the roll rate (or roll velocity), which is the angular speed at which an aircraft can change its roll attitude, and is typically expressed in degrees per second. For pitching in flight and ship dynamics, see pitch (aviation) and pitch (ship motion), respectively. For yawing in flight and ship dynamics, see yaw (aviation) and yaw (ship motion), respectively. One important derivative is the yaw rate (or yaw velocity), the angular speed of yaw rotation, measured with a yaw rate sensor. Another important derivative is the yawing moment, the angular momentum of a yaw rotation, which is important for adverse yaw in aircraft dynamics. Lower mobility Physical constraints may limit the number of degrees of freedom of a single rigid body.  For example, a block sliding around on a flat table has 3 DOF 2T1R consisting of two translations 2T and 1 rotation 1R.  An XYZ positioning robot like SCARA has 3 DOF 3T lower mobility. Mobility formula The mobility formula counts the number of parameters that define the configuration of a set of rigid bodies that are constrained by joints connecting these bodies. Consider a system of n rigid bodies moving in space has 6n degrees of freedom measured relative to a fixed frame. In order to count the degrees of freedom of this system, include the fixed body in the count of bodies, so that mobility is independent of the choice of the body that forms the fixed frame. Then the degree-of-freedom of the unconstrained system of N = n + 1 is because the fixed body has zero degrees of freedom relative to itself. Joints that connect bodies in this system remove degrees of freedom and reduce mobility. Specifically, hinges and sliders each impose five constraints and therefore remove five degrees of freedom. It is convenient to define the number of constraints c that a joint imposes in terms of the joint's freedom f, where c = 6 − f. In the case of a hinge or slider, which are one degree of freedom joints, have f = 1 and therefore c = 6 − 1 = 5. The result is that the mobility of a system formed from n moving links and j joints each with freedom fi, i = 1, ..., j, is given by Recall that N includes the fixed link. There are two important special cases: (i) a simple open chain, and (ii) a simple closed chain. A single open chain consists of n moving links connected end to end by n joints, with one end connected to a ground link. Thus, in this case N = j + 1 and the mobility of the chain is For a simple closed chain, n moving links are connected end-to-end by n + 1 joints such that the two ends are connected to the ground link forming a loop. In this case, we have N = j and the mobility of the chain is An example of a simple open chain is a serial robot manipulator. 
These robotic systems are constructed from a series of links connected by six one degree-of-freedom revolute or prismatic joints, so the system has six degrees of freedom. An example of a simple closed chain is the RSSR spatial four-bar linkage. The sum of the freedom of these joints is eight, so the mobility of the linkage is two, where one of the degrees of freedom is the rotation of the coupler around the line joining the two S joints. Planar and spherical movement It is common practice to design the linkage system so that the movement of all of the bodies are constrained to lie on parallel planes, to form what is known as a planar linkage. It is also possible to construct the linkage system so that all of the bodies move on concentric spheres, forming a spherical linkage. In both cases, the degrees of freedom of the links in each system is now three rather than six, and the constraints imposed by joints are now c = 3 − f. In this case, the mobility formula is given by and the special cases become planar or spherical simple open chain, planar or spherical simple closed chain, An example of a planar simple closed chain is the planar four-bar linkage, which is a four-bar loop with four one degree-of-freedom joints and therefore has mobility M = 1. Systems of bodies A system with several bodies would have a combined DOF that is the sum of the DOFs of the bodies, less the internal constraints they may have on relative motion. A mechanism or linkage containing a number of connected rigid bodies may have more than the degrees of freedom for a single rigid body. Here the term degrees of freedom is used to describe the number of parameters needed to specify the spatial pose of a linkage. It is also defined in context of the configuration space, task space and workspace of a robot. A specific type of linkage is the open kinematic chain, where a set of rigid links are connected at joints; a joint may provide one DOF (hinge/sliding), or two (cylindrical). Such chains occur commonly in robotics, biomechanics, and for satellites and other space structures. A human arm is considered to have seven DOFs. A shoulder gives pitch, yaw, and roll, an elbow allows for pitch, and a wrist allows for pitch, yaw and roll. Only 3 of those movements would be necessary to move the hand to any point in space, but people would lack the ability to grasp things from different angles or directions. A robot (or object) that has mechanisms to control all 6 physical DOF is said to be holonomic. An object with fewer controllable DOFs than total DOFs is said to be non-holonomic, and an object with more controllable DOFs than total DOFs (such as the human arm) is said to be redundant. Although keep in mind that it is not redundant in the human arm because the two DOFs; wrist and shoulder, that represent the same movement; roll, supply each other since they can't do a full 360. The degree of freedom are like different movements that can be made. In mobile robotics, a car-like robot can reach any position and orientation in 2-D space, so it needs 3 DOFs to describe its pose, but at any point, you can move it only by a forward motion and a steering angle. So it has two control DOFs and three representational DOFs; i.e. it is non-holonomic. A fixed-wing aircraft, with 3–4 control DOFs (forward motion, roll, pitch, and to a limited extent, yaw) in a 3-D space, is also non-holonomic, as it cannot move directly up/down or left/right. 
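The mobility formula discussed above can be mechanized in a few lines. The sketch below (Python; the joint lists are illustrative) evaluates M = 6(N − 1 − j) + Σ fᵢ for spatial chains and M = 3(N − 1 − j) + Σ fᵢ for planar or spherical ones, reproducing the serial 6R manipulator (M = 6), the RSSR four-bar (M = 2), and the planar four-bar (M = 1) mentioned earlier.

```python
def mobility(n_links, joint_freedoms, spatial=True):
    """Kutzbach mobility: M = 6(N - 1 - j) + sum(f_i) for spatial chains,
    or M = 3(N - 1 - j) + sum(f_i) for planar/spherical chains.
    n_links counts every link, including the fixed (ground) link."""
    k = 6 if spatial else 3
    j = len(joint_freedoms)
    return k * (n_links - 1 - j) + sum(joint_freedoms)

# Serial 6R manipulator: ground plus 6 moving links, six 1-DOF revolute joints
print(mobility(7, [1] * 6))                      # 6

# RSSR spatial four-bar: two revolute (f=1) and two spherical (f=3) joints
print(mobility(4, [1, 3, 3, 1]))                 # 2 (one DOF is the coupler spinning about the S-S line)

# Planar four-bar linkage: four links, four 1-DOF hinges, planar formula
print(mobility(4, [1, 1, 1, 1], spatial=False))  # 1
```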
A summary of formulas and methods for computing the degrees-of-freedom in mechanical systems has been given by Pennestri, Cavacece, and Vita. Electrical engineering In electrical engineering, the term degrees of freedom is often used to describe the number of directions in which a phased array antenna can form either beams or nulls. It is equal to one less than the number of elements contained in the array, as one element is used as a reference against which either constructive or destructive interference may be applied using each of the remaining antenna elements. The concept is used in both radar practice and communication link practice, with beam steering being more prevalent for radar applications and null steering being more prevalent for interference suppression in communication links. See also References Mechanics Robot kinematics Rigid bodies
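As a concrete illustration of the "one less than the number of elements" rule, the following sketch (Python with NumPy; the element count, half-wavelength spacing, look direction and null directions are all invented, and the idealized narrowband array response is assumed) spends the N − 1 = 3 available degrees of freedom of a 4-element uniform linear array on three nulls while keeping unit gain in the look direction.

```python
import numpy as np

N = 4                                    # array elements
d_over_lambda = 0.5                      # element spacing in wavelengths

def steering(theta_deg):
    """Idealized narrowband steering vector of a uniform linear array."""
    n = np.arange(N)
    return np.exp(2j * np.pi * d_over_lambda * n * np.sin(np.radians(theta_deg)))

look = 0.0                               # desired beam direction, degrees
nulls = [-40.0, 25.0, 60.0]              # N - 1 = 3 independent null directions

# One linear equation per direction: unit response at the look angle, zero at each null.
A = np.vstack([steering(look)] + [steering(t) for t in nulls])
b = np.array([1.0, 0.0, 0.0, 0.0], dtype=complex)
w = np.linalg.solve(A, b)                # exactly determined: 4 weights, 4 constraints

for t in [look] + nulls:
    print(f"{t:6.1f} deg -> |response| = {abs(steering(t) @ w):.3e}")
```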
Power (physics)
Power is the amount of energy transferred or converted per unit time. In the International System of Units, the unit of power is the watt, equal to one joule per second. Power is a scalar quantity. Specifying power in particular systems may require attention to other quantities; for example, the power involved in moving a ground vehicle is the product of the aerodynamic drag plus traction force on the wheels, and the velocity of the vehicle. The output power of a motor is the product of the torque that the motor generates and the angular velocity of its output shaft. Likewise, the power dissipated in an electrical element of a circuit is the product of the current flowing through the element and of the voltage across the element. Definition Power is the rate with respect to time at which work is done; it is the time derivative of work: where is power, is work, and is time. We will now show that the mechanical power generated by a force F on a body moving at the velocity v can be expressed as the product: If a constant force F is applied throughout a distance x, the work done is defined as . In this case, power can be written as: If instead the force is variable over a three-dimensional curve C, then the work is expressed in terms of the line integral: From the fundamental theorem of calculus, we know that Hence the formula is valid for any general situation. In older works, power is sometimes called activity. Units The dimension of power is energy divided by time. In the International System of Units (SI), the unit of power is the watt (W), which is equal to one joule per second. Other common and traditional measures are horsepower (hp), comparing to the power of a horse; one mechanical horsepower equals about 745.7 watts. Other units of power include ergs per second (erg/s), foot-pounds per minute, dBm, a logarithmic measure relative to a reference of 1 milliwatt, calories per hour, BTU per hour (BTU/h), and tons of refrigeration. Average power and instantaneous power As a simple example, burning one kilogram of coal releases more energy than detonating a kilogram of TNT, but because the TNT reaction releases energy more quickly, it delivers more power than the coal. If is the amount of work performed during a period of time of duration , the average power over that period is given by the formula It is the average amount of work done or energy converted per unit of time. Average power is often called "power" when the context makes it clear. Instantaneous power is the limiting value of the average power as the time interval approaches zero. When power is constant, the amount of work performed in time period can be calculated as In the context of energy conversion, it is more customary to use the symbol rather than . Mechanical power Power in mechanical systems is the combination of forces and movement. In particular, power is the product of a force on an object and the object's velocity, or the product of a torque on a shaft and the shaft's angular velocity. Mechanical power is also described as the time derivative of work. In mechanics, the work done by a force on an object that travels along a curve is given by the line integral: where defines the path and is the velocity along this path. If the force is derivable from a potential (conservative), then applying the gradient theorem (and remembering that force is the negative of the gradient of the potential energy) yields: where and are the beginning and end of the path along which the work was done. 
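A short numerical check of the statements above (Python with NumPy; the force field and trajectory are invented): it integrates the instantaneous power P = F · v over time and confirms that the accumulated energy equals the work from the line integral, here the potential energy lost along the path of a conservative spring-like force.

```python
import numpy as np

k = 2.0                                    # spring constant (illustrative)
force = lambda x: -k * x                   # conservative force with potential U = 0.5*k*|x|^2
potential = lambda x: 0.5 * k * np.dot(x, x)

# An arbitrary smooth trajectory x(t) and its velocity v(t)
t = np.linspace(0.0, 3.0, 20001)
x = np.stack([np.cos(t), np.sin(2 * t), 0.5 * t]).T
v = np.gradient(x, t, axis=0)

P = np.einsum('ij,ij->i', force(x), v)     # instantaneous power P(t) = F . v
W = float(np.sum(0.5 * (P[1:] + P[:-1]) * np.diff(t)))   # work = time integral of power

print(W)                                   # numerically equals the potential energy lost:
print(potential(x[0]) - potential(x[-1]))
```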
The power at any point along the curve is the time derivative: In one dimension, this can be simplified to: In rotational systems, power is the product of the torque and angular velocity , where is angular frequency, measured in radians per second. The represents scalar product. In fluid power systems such as hydraulic actuators, power is given by where is pressure in pascals or N/m2, and is volumetric flow rate in m3/s in SI units. Mechanical advantage If a mechanical system has no losses, then the input power must equal the output power. This provides a simple formula for the mechanical advantage of the system. Let the input power to a device be a force acting on a point that moves with velocity and the output power be a force acts on a point that moves with velocity . If there are no losses in the system, then and the mechanical advantage of the system (output force per input force) is given by The similar relationship is obtained for rotating systems, where and are the torque and angular velocity of the input and and are the torque and angular velocity of the output. If there are no losses in the system, then which yields the mechanical advantage These relations are important because they define the maximum performance of a device in terms of velocity ratios determined by its physical dimensions. See for example gear ratios. Electrical power The instantaneous electrical power P delivered to a component is given by where is the instantaneous power, measured in watts (joules per second), is the potential difference (or voltage drop) across the component, measured in volts, and is the current through it, measured in amperes. If the component is a resistor with time-invariant voltage to current ratio, then: where is the electrical resistance, measured in ohms. Peak power and duty cycle In the case of a periodic signal of period , like a train of identical pulses, the instantaneous power is also a periodic function of period . The peak power is simply defined by: The peak power is not always readily measurable, however, and the measurement of the average power is more commonly performed by an instrument. If one defines the energy per pulse as then the average power is One may define the pulse length such that so that the ratios are equal. These ratios are called the duty cycle of the pulse train. Radiant power Power is related to intensity at a radius ; the power emitted by a source can be written as: See also Simple machines Orders of magnitude (power) Pulsed power Intensity – in the radiative sense, power per area Power gain – for linear, two-port networks Power density Signal strength Sound power References Force Temporal rates Physical quantities
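A few of the formulas in this article, evaluated numerically (Python; all component values are made-up examples): shaft power as torque times angular velocity, electrical power as P = VI = I²R for a resistor, and the average power of a pulse train as peak power times duty cycle.

```python
import math

# Rotational mechanical power: P = torque * angular velocity
torque = 12.0                      # N*m (illustrative)
rpm = 3000.0
omega = rpm * 2 * math.pi / 60.0   # rad/s
print(torque * omega)              # ~3770 W of shaft power

# Electrical power in a resistor: P = V*I = I^2 * R
V, R = 12.0, 6.0
I = V / R
print(V * I, I**2 * R)             # both give 24 W

# Pulse train: average power = peak power * duty cycle
peak_power = 1.0e3                 # W
pulse_length = 2.0e-3              # s
period = 50.0e-3                   # s
duty_cycle = pulse_length / period
print(peak_power * duty_cycle)     # 40 W average
```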
Euler's equations (rigid body dynamics)
In classical mechanics, Euler's rotation equations are a vectorial quasilinear first-order ordinary differential equation describing the rotation of a rigid body, using a rotating reference frame with angular velocity ω whose axes are fixed to the body. Their general vector form is
I\,\dot{\boldsymbol\omega} + \boldsymbol\omega \times (I\,\boldsymbol\omega) = \mathbf{M},
where M is the applied torque and I is the inertia matrix. The vector \dot{\boldsymbol\omega} is the angular acceleration. Again, note that all quantities are defined in the rotating reference frame. In orthogonal principal axes of inertia coordinates the equations become
I_1\dot{\omega}_1 + (I_3 - I_2)\,\omega_2\omega_3 = M_1,
I_2\dot{\omega}_2 + (I_1 - I_3)\,\omega_3\omega_1 = M_2,
I_3\dot{\omega}_3 + (I_2 - I_1)\,\omega_1\omega_2 = M_3,
where Mk are the components of the applied torques, Ik are the principal moments of inertia and ωk are the components of the angular velocity. In the absence of applied torques, one obtains the Euler top. When the torques are due to gravity, there are special cases when the motion of the top is integrable. Derivation In an inertial frame of reference (subscripted "in"), Euler's second law states that the time derivative of the angular momentum L equals the applied torque:
\frac{d\mathbf{L}_{\text{in}}}{dt} = \mathbf{M}_{\text{in}}.
For point particles such that the internal forces are central forces, this may be derived using Newton's second law. For a rigid body, one has the relation between angular momentum and the moment of inertia Iin given as
\mathbf{L}_{\text{in}} = I_{\text{in}}\,\boldsymbol\omega.
In the inertial frame, the differential equation is not always helpful in solving for the motion of a general rotating rigid body, as both Iin and ω can change during the motion. One may instead change to a coordinate frame fixed in the rotating body, in which the moment of inertia tensor is constant. Using a reference frame such as that at the center of mass, the frame's position drops out of the equations. In any rotating reference frame, the time derivative must be replaced so that the equation becomes
\left(\frac{d\mathbf{L}}{dt}\right)_{\text{rot}} + \boldsymbol\omega \times \mathbf{L} = \mathbf{M},
and so the cross product arises, see time derivative in rotating reference frame. The vector components of the torque in the inertial and the rotating frames are related by
\mathbf{M}_{\text{in}} = \mathbf{Q}\,\mathbf{M},
where Q is the rotation tensor (not rotation matrix), an orthogonal tensor related to the angular velocity vector by
\dot{\mathbf{Q}}\,\mathbf{u} = \boldsymbol\omega \times (\mathbf{Q}\,\mathbf{u})
for any vector u. Now L = Iω is substituted and the time derivatives are taken in the rotating frame, while realizing that the particle positions and the inertia tensor do not depend on time. This leads to the general vector form of Euler's equations, which are valid in such a frame:
I\,\dot{\boldsymbol\omega} + \boldsymbol\omega \times (I\,\boldsymbol\omega) = \mathbf{M}.
The equations are also derived from Newton's laws in the discussion of the resultant torque. More generally, by the tensor transform rules, any rank-2 tensor T fixed in the rotating body has a time-derivative such that for any vector u, one has
\dot{\mathbf{T}}\,\mathbf{u} = \boldsymbol\omega \times (\mathbf{T}\mathbf{u}) - \mathbf{T}(\boldsymbol\omega \times \mathbf{u}).
This yields the Euler's equations by plugging in T = I and u = ω. Principal axes form When choosing a frame so that its axes are aligned with the principal axes of the inertia tensor, its component matrix is diagonal, which further simplifies calculations. As described in the moment of inertia article, the angular momentum L can then be written
\mathbf{L} = (I_1\omega_1,\ I_2\omega_2,\ I_3\omega_3).
In some frames not tied to the body it is also possible to obtain such simple (diagonal tensor) equations for the rate of change of the angular momentum. Then ω must be the angular velocity for rotation of that frame's axes instead of the rotation of the body. It is, however, still required that the chosen axes are principal axes of inertia. The resulting form of the Euler rotation equations is useful for rotation-symmetric objects that allow some of the principal axes of rotation to be chosen freely. Special case solutions Torque-free precessions Torque-free precessions are non-trivial solutions for the situation where the torque on the right hand side is zero.
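Before turning to the case of a time-varying inertia tensor below, here is a minimal numerical sketch of the torque-free equations in the principal-axis frame (Python with NumPy; the principal moments, initial spin and step size are made up). A fixed-step Runge–Kutta integrator advances the equations, and the two conserved quantities, rotational kinetic energy and the magnitude of the angular momentum, are checked at the end; starting the spin near the intermediate axis reproduces the familiar tumbling (Dzhanibekov) behaviour.

```python
import numpy as np

I = np.array([1.0, 2.0, 3.0])           # principal moments of inertia (illustrative)

def omega_dot(w):
    """Torque-free Euler equations: I_1 dw1/dt = (I_2 - I_3) w2 w3, and cyclic permutations."""
    w1, w2, w3 = w
    return np.array([(I[1] - I[2]) * w2 * w3,
                     (I[2] - I[0]) * w3 * w1,
                     (I[0] - I[1]) * w1 * w2]) / I

def rk4_step(w, dt):
    k1 = omega_dot(w)
    k2 = omega_dot(w + 0.5 * dt * k1)
    k3 = omega_dot(w + 0.5 * dt * k2)
    k4 = omega_dot(w + dt * k3)
    return w + dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6

w = np.array([0.01, 1.0, 0.01])         # mostly about the intermediate axis -> unstable tumbling
dt, steps = 1e-3, 20000
for _ in range(steps):
    w = rk4_step(w, dt)

L = I * w                                # angular momentum components in the body frame
print("kinetic energy:", 0.5 * np.dot(I * w, w))   # conserved (~1.0)
print("|L|:", np.linalg.norm(L))                    # conserved (~2.0)
```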
When I is not constant in the external reference frame (i.e. the body is moving and its inertia tensor is not constantly diagonal) then I cannot be pulled through the derivative operator acting on L. In this case I(t) and ω(t) do change together in such a way that the derivative of their product is still zero. This motion can be visualized by Poinsot's construction. Generalized Euler equations The Euler equations can be generalized to any simple Lie algebra. The original Euler equations come from fixing the Lie algebra to be , with generators satisfying the relation . Then if (where is a time coordinate, not to be confused with basis vectors ) is an -valued function of time, and (with respect to the Lie algebra basis), then the (untorqued) original Euler equations can be written To define in a basis-independent way, it must be a self-adjoint map on the Lie algebra with respect to the invariant bilinear form on . This expression generalizes readily to an arbitrary simple Lie algebra, say in the standard classification of simple Lie algebras. This can also be viewed as a Lax pair formulation of the generalized Euler equations, suggesting their integrability. See also Euler angles Dzhanibekov effect Moment of inertia Poinsot's ellipsoid Rigid rotor References C. A. Truesdell, III (1991) A First Course in Rational Continuum Mechanics. Vol. 1: General Concepts, 2nd ed., Academic Press. . Sects. I.8-10. C. A. Truesdell, III and R. A. Toupin (1960) The Classical Field Theories, in S. Flügge (ed.) Encyclopedia of Physics. Vol. III/1: Principles of Classical Mechanics and Field Theory, Springer-Verlag. Sects. 166–168, 196–197, and 294. Landau L.D. and Lifshitz E.M. (1976) Mechanics, 3rd. ed., Pergamon Press. (hardcover) and (softcover). Goldstein H. (1980) Classical Mechanics, 2nd ed., Addison-Wesley. Symon KR. (1971) Mechanics, 3rd. ed., Addison-Wesley. Rigid bodies Rigid bodies mechanics Rotation in three dimensions Equations de:Eulersche Gleichungen it:Equazioni di Eulero (dinamica)
Sandia National Laboratories
Sandia National Laboratories (SNL), also known as Sandia, is one of three research and development laboratories of the United States Department of Energy's National Nuclear Security Administration (NNSA). Headquartered in Kirtland Air Force Base in Albuquerque, New Mexico, it has a second principal facility next to Lawrence Livermore National Laboratory in Livermore, California, and a test facility in Waimea, Kauai, Hawaii. Sandia is owned by the U.S. federal government but privately managed and operated by National Technology and Engineering Solutions of Sandia, a wholly owned subsidiary of Honeywell International. Established in 1949, SNL is a "multimission laboratory" with the primary goal of advancing U.S. national security by developing various science-based technologies. Its work spans roughly 70 areas of activity, including nuclear deterrence, arms control, nonproliferation, hazardous waste disposal, and climate change. Sandia hosts a wide variety of research initiatives, including computational biology, physics, materials science, alternative energy, psychology, MEMS, and cognitive science. Most notably, it hosted some of the world's earliest and fastest supercomputers, ASCI Red and ASCI Red Storm, and is currently home to the Z Machine, the largest X-ray generator in the world, which is designed to test materials in conditions of extreme temperature and pressure. Sandia conducts research through partnership agreements with academic, governmental, and commercial entities; educational opportunities are available through several programs, including the Securing Top Academic Research & Talent at Historically Black Colleges and Universities (START HBCU) Program and the Sandia University Partnerships Network (a collaboration with Purdue University, University of Texas at Austin, Georgia Institute of Technology, University of Illinois Urbana–Champaign, and University of New Mexico). Lab history Sandia National Laboratories' roots go back to World War II and the Manhattan Project. Prior to the United States formally entering the war, the U.S. Army leased land near an Albuquerque, New Mexico airport known as Oxnard Field to service transient Army and U.S. Navy aircraft. In January 1941 construction began on the Albuquerque Army Air Base, leading to establishment of the Bombardier School-Army Advanced Flying School near the end of the year. Soon thereafter it was renamed Kirtland Field, after early Army military pilot Colonel Roy C. Kirtland, and in mid-1942 the Army acquired Oxnard Field. During the war years facilities were expanded further and Kirtland Field served as a major Army Air Forces training installation. In the many months leading up to successful detonation of the first atomic bomb, the Trinity test, and delivery of the first airborne atomic weapon, Project Alberta, J. Robert Oppenheimer, Director of Los Alamos Laboratory, and his technical advisor, Hartly Rowe, began looking for a new site convenient to Los Alamos for the continuation of weapons development especially its non-nuclear aspects. They felt a separate division would be best to perform these functions. Kirtland had fulfilled Los Alamos' transportation needs for both the Trinity and Alberta projects, thus, Oxnard Field was transferred from the jurisdiction of the Army Air Corps to the U.S. Army Service Forces Chief of Engineer District, and thereafter, assigned to the Manhattan Engineer District. 
In July 1945, the forerunner of Sandia Laboratory, known as "Z" Division, was established at Oxnard Field to handle future weapons development, testing, and bomb assembly for the Manhattan Engineer District. The District-directive calling for establishing a secure area and construction of "Z" Division facilities referred to this as "Sandia Base" , after the nearby Sandia Mountains — apparently the first official recognition of the "Sandia" name. Sandia Laboratory was operated by the University of California until 1949, when President Harry S. Truman asked Western Electric, a subsidiary of American Telephone and Telegraph (AT&T), to assume the operation as an "opportunity to render an exceptional service in the national interest." Sandia Corporation, a wholly owned subsidiary of Western Electric, was formed on October 5, 1949, and, on November 1, 1949, took over management of the Laboratory. The United States Congress designated Sandia Laboratories as a National laboratory in 1979. In October 1993, Sandia National Laboratories (SNL) was managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin. In December 2016, it was announced that National Technology and Engineering Solutions of Sandia, under the direction of Honeywell International, would take over the management of Sandia National Laboratories beginning May 1, 2017; this contract remains in effect as of November 2022, covering government-owned facilities in Albuquerque, New Mexico (SNL/NM); Livermore, California (SNL/CA); Tonopah, Nevada; Shoreview, Minnesota; and Kauai, Hawaii. SNL/NM is the headquarters and the largest laboratory, employing more than 12,000 employees, while SNL/CA is a smaller laboratory, with around 1,700 employees. Tonopah and Kauai are occupied on a "campaign" basis, as test schedules dictate. The lab also managed the DOE/SNL Scaled Wind Farm Technology (SWiFT) Facility in Lubbock, Texas. Sandia led a project that studied how to decontaminate a subway system in the event of a biological weapons attack (such as anthrax). As of September 2017, the process to decontaminate subways in such an event is "virtually ready to implement," said a lead Sandia engineer. Sandia's integration with its local community includes a program through the Department of Energy's Tribal Energy program to deliver alternative renewable power to remote Navajo communities, spearheaded by senior engineer Sandra Begay. Legal issues On February 13, 2007, a New Mexico State Court found Sandia Corporation liable for $4.7 million in damages for the firing of a former network security analyst, Shawn Carpenter, who had reported to his supervisors that hundreds of military installations and defense contractors' networks were compromised and sensitive information was being stolen including hundreds of sensitive Lockheed documents on the Mars Reconnaissance Orbiter project. When his supervisors told him to drop the investigation and do nothing with the information, he went to intelligence officials in the United States Army and later the Federal Bureau of Investigation to address the national security breaches. When Sandia managers discovered his actions months later, they revoked his security clearance and fired him. In 2014, an investigation determined Sandia Corp. used lab operations funds to pay for lobbying related to the renewal of its $2 billion contract to operate the lab. Sandia Corp. and its parent company, Lockheed Martin, agreed to pay a $4.8 million fine. 
Technical areas SNL/NM consists of five technical areas (TA) and several additional test areas. Each TA has its own distinctive operations; however, the operations of some groups at Sandia may span more than one TA, with one part of a team working on a problem from one angle, and another subset of the same team located in a different building or area working with other specialized equipment. A description of each area is given below. TA-I operations are dedicated primarily to three activities: the design, research, and development of weapon systems; limited production of weapon system components; and energy programs. TA-I facilities include the main library and offices, laboratories, and shops used by administrative and technical staff. TA-II is a facility that was established in 1948 for the assembly of chemical high explosive main charges for nuclear weapons and later for production scale assembly of nuclear weapons. Activities in TA-II include the decontamination, decommissioning, and remediation of facilities and landfills used in past research and development activities. Remediation of the Classified Waste Landfill which started in March 1998, neared completion in FY2000. A testing facility, the Explosive Component Facility, integrates many of the previous TA-II test activities as well as some testing activities previously performed in other remote test areas. The Access Delay Technology Test Facility is also located in TA-II. TA-III is adjacent to and south of TA-V [both are approximately seven miles (11 km) south of TA-I]. TA-III facilities include extensive design-test facilities such as rocket sled tracks, centrifuges and a radiant heat facility. Other facilities in TA-III include a paper destructor, the Melting and Solidification Laboratory and the Radioactive and Mixed Waste Management Facility (RMWMF). RMWMF serves as central processing facility for packaging and storage of low-level and mixed waste. The remediation of the Chemical Waste Landfill, which started in September 1998, is an ongoing activity in TA-III. TA-IV, located approximately south of TA-I, consists of several inertial-confinement fusion research and pulsed power research facilities, including the High Energy Radiation Megavolt Electron Source (Hermes-III), the Z Facility, the Short Pulsed High Intensity Nanosecond X-Radiator (SPHINX) Facility, and the Saturn Accelerator. TA-IV also hosts some computer science and cognition research. TA-V contains two research reactor facilities, an intense gamma irradiation facility (using cobalt-60 and caesium-137 sources), and the Hot Cell Facility. SNL/NM also has test areas outside of the five technical areas listed above. These test areas, collectively known as Coyote Test Field, are located southeast of TA-III and/or in the canyons on the west side of the Manzanita Mountains. Facilities in the Coyote Canyon Test Field include the Solar Tower Facility (34.9623 N, 106.5097 W), the Lurance Canyon Burn Site and the Aerial Cable Facility. DOE/SNL Scaled Wind Farm Technology (SWIFT) Facility In collaboration with the Wind Energy Technologies Office (WETO) of U.S. Department of Energy, Texas Tech University, and the Vestas wind turbine corporation, SNL operates the Scaled Wind Farm Technology (SWiFT) Facility in Lubbock, Texas. Open-source software In the 1970s, the Sandia, Los Alamos, Air Force Weapons Laboratory Technical Exchange Committee initiated the development of the SLATEC library of mathematical and statistical routines, written in FORTRAN 77. 
Today, Sandia National Laboratories is home to several open-source software projects: FCLib (Feature Characterization Library) is a library for the identification and manipulation of coherent regions or structures from spatio-temporal data. FCLib focuses on providing data structures that are "feature-aware" and support feature-based analysis. It is written in C and developed under a "BSD-like" license. LAMMPS (Large-scale Atomic/Molecular Massively Parallel Simulator) is a molecular dynamics library that can be used to model parallel atomic/subatomic processes at large scale. It is produced under the GNU General Public License (GPL) and distributed on the Sandia National Laboratories website as well as SourceForge. LibVMI is a library for simplifying the reading and writing of memory in running virtual machines, a technique known as virtual machine introspection. It is licensed under the GNU Lesser General Public License. MapReduce-MPI Library is an implementation of MapReduce for distributed-memory parallel machines, utilizing the Message Passing Interface (MPI) for communication. It is developed under a modified Berkeley Software Distribution license. MultiThreaded Graph Library (MTGL) is a collection of graph-based algorithms designed to take advantage of parallel, shared-memory architectures such as the Cray XMT, Symmetric Multiprocessor (SMP) machines, and multi-core workstations. It is developed under a BSD License. ParaView is a cross-platform application for performing data analysis and visualization. It is a collaborative effort, developed by Sandia National Laboratories, Los Alamos National Laboratories, and the United States Army Research Laboratory, and funded by the Advanced Simulation and Computing Program. It is developed under a BSD license. Pyomo is a python-based optimization Mathematical Programming Language which supports most commercial and open-source solver engines. Soccoro, a collaborative effort with Wake Forest and Vanderbilt Universities, is object-oriented software for performing electronic-structure calculations based on density-functional theory. It utilizes libraries such as MPI, BLAS, and LAPACK and is developed under the GNU General Public License. Titan Informatics Toolkit is a collection of cross-platform libraries for ingesting, analyzing, and displaying scientific and informatics data. It is a collaborative effort with Kitware, Inc., and uses various open-source components such as the Boost Graph Library. It is developed under a New BSD license. Trilinos is an object-oriented library for building scalable scientific and engineering applications, with a focus on linear algebra techniques. Most Trilinos packages are licensed under a Modified BSD License. Xyce is an open source, SPICE-compatible, high-performance analog circuit simulator, capable of solving extremely large circuit problems. Charon is a TCAD simulator which was open-sourced by Sandia in 2020. It is significant as previously there were no major TCAD simulators for large-scale simulations that were open source. In addition, Sandia National Laboratories collaborates with Kitware, Inc. in developing the Visualization Toolkit (VTK), a cross-platform graphics and visualization software suite. This collaboration has focused on enhancing the information visualization capabilities of VTK and has in turn fed back into other projects such as ParaView and Titan. Self-guided bullet On January 30, 2012, Sandia announced that it successfully test-fired a self-guided dart that can hit targets at . 
The dart is long, has its center of gravity at the nose, and is made to be fired from a small-caliber smoothbore gun. It is kept straight in flight by four electromagnetically actuated fins encased in a plastic puller sabot that falls off when the dart leaves the bore. The dart cannot be fired from conventional rifled barrels because the gyroscopic stability provided by rifling grooves for regular bullets would prevent the self-guided bullet from reliably turning towards a target when in flight, so fins are responsible for stabilizing rather than spinning. A laser designator marks a target, which is tracked by the dart's optical sensor and 8-bit CPU. The guided projectile is kept cheap because it does not need an inertial measurement unit, since its small size allows it to make the fast corrections necessary without the aid of an IMU. The natural body frequency of the bullet is about 30 hertz, so corrections can be made 30 times per second in flight. Muzzle velocity with commercial gunpowder is (Mach 2.1), but military customized gunpowder can increase its speed and range. Computer modeling shows that a standard bullet would miss a target at by , while an equivalent guided bullet would hit within . Accuracy increases as distances get longer, since the bullet's motions settle more the longer it is in flight. Supercomputers List of supercomputers that have been operated by or resided at Sandia: Intel Paragon XP/S 140, 1993 to ? ASCI Red, 1997 to 2006 Red Storm, 2005 to 2012 Cielo, 2010 to 2016 Trinity, 2015 to current Astra, 2018 to current, based on ARM processors Attaway, 2019 to current See also National Renewable Energy Laboratory Brookhaven National Laboratory Lawrence Livermore National Laboratory Test Readiness Program Jess (programming language) VxInsight Decontamination foam Titan Rain References Further reading Computerworld article "Reverse Hacker Case Gets Costlier for Sandia Labs" San Jose Mercury News article "Ill Lab Workers Fight For Federal Compensation" Wired Magazine article "Linkin Park's Mysterious Cyberstalker" Slate article "Stalking Linkin Park" FedSmith.com article "Linkin Park, Nuclear Research and Obsession" The Santa Fe New Mexican article "Judge Upholds $4.3 Million Jury Award to Fired Sandia Lab Analyst" TIME article "A Security Analyst Wins Big in Court" The Santa Fe New Mexican article "Jury Awards Fired Sandia Analyst $4.3 Million" HPCwire article "Sandia May Unwittingly Have Sold Supercomputer to China" Federal Computer Weekly article "Intercepts: Chinese Checkers" Congressional Research Service report "China: Suspected Acquisition of U.S. 
Nuclear Weapon Secrets" Sandia National Laboratory Cooperative Monitoring Center article "Engagement with China" BBC News "Security Overhaul at US Nuclear Labs" Fox News "Iowa Republican Demands Tighter Nuclear Lab Security" UPI article "Workers Get Bonus After Being Disciplined" IndustryWeek article "3D Silicon Photonic Lattice" October 6, 2005 The Santa Fe New Mexican article "Sandia Security Managers Recorded Workers' Calls" May 17, 2002 New Mexico Business Weekly article "Sandia National Laboratories Says it's Worthless" External links DOE Laboratory Fact Sheet Economy of Albuquerque, New Mexico Nuclear weapons infrastructure of the United States Plasma physics facilities United States Department of Energy national laboratories Federally Funded Research and Development Centers Supercomputer sites Weapons manufacturing companies Honeywell Lockheed Martin Livermore, California Military research of the United States 1949 establishments in New Mexico Research institutes in New Mexico
Computer graphics (computer science)
Computer graphics is a sub-field of computer science which studies methods for digitally synthesizing and manipulating visual content. Although the term often refers to the study of three-dimensional computer graphics, it also encompasses two-dimensional graphics and image processing. Overview Computer graphics studies manipulation of visual and geometric information using computational techniques. It focuses on the mathematical and computational foundations of image generation and processing rather than purely aesthetic issues. Computer graphics is often differentiated from the field of visualization, although the two fields have many similarities. Connected studies include: Applied mathematics Computational geometry Computational topology Computer vision Image processing Information visualization Scientific visualization Applications of computer graphics include: Print design Digital art Special effects Video games Visual effects History There are several international conferences and journals where the most significant results in computer graphics are published. Among them are the SIGGRAPH and Eurographics conferences and the Association for Computing Machinery (ACM) Transactions on Graphics journal. The joint Eurographics and ACM SIGGRAPH symposium series features the major venues for the more specialized sub-fields: Symposium on Geometry Processing, Symposium on Rendering, Symposium on Computer Animation, and High Performance Graphics. As in the rest of computer science, conference publications in computer graphics are generally more significant than journal publications (and subsequently have lower acceptance rates). Subfields A broad classification of major subfields in computer graphics might be: Geometry: ways to represent and process surfaces Animation: ways to represent and manipulate motion Rendering: algorithms to reproduce light transport Imaging: image acquisition or image editing Geometry The subfield of geometry studies the representation of three-dimensional objects in a discrete digital setting. Because the appearance of an object depends largely on its exterior, boundary representations are most commonly used. Two dimensional surfaces are a good representation for most objects, though they may be non-manifold. Since surfaces are not finite, discrete digital approximations are used. Polygonal meshes (and to a lesser extent subdivision surfaces) are by far the most common representation, although point-based representations have become more popular recently (see for instance the Symposium on Point-Based Graphics). These representations are Lagrangian, meaning the spatial locations of the samples are independent. Recently, Eulerian surface descriptions (i.e., where spatial samples are fixed) such as level sets have been developed into a useful representation for deforming surfaces which undergo many topological changes (with fluids being the most notable example). Geometry subfields include: Implicit surface modeling – an older subfield which examines the use of algebraic surfaces, constructive solid geometry, etc., for surface representation. Digital geometry processing – surface reconstruction, simplification, fairing, mesh repair, parameterization, remeshing, mesh generation, surface compression, and surface editing all fall under this heading. Discrete differential geometry – a nascent field which defines geometric quantities for the discrete surfaces used in computer graphics. 
Point-based graphics – a recent field which focuses on points as the fundamental representation of surfaces. Subdivision surfaces Out-of-core mesh processing – another recent field which focuses on mesh datasets that do not fit in main memory. Animation The subfield of animation studies descriptions for surfaces (and other phenomena) that move or deform over time. Historically, most work in this field has focused on parametric and data-driven models, but recently physical simulation has become more popular as computers have become more powerful computationally. Animation subfields include: Performance capture Character animation Physical simulation (e.g. cloth modeling, animation of fluid dynamics, etc.) Rendering Rendering generates images from a model. Rendering may simulate light transport to create realistic images or it may create images that have a particular artistic style in non-photorealistic rendering. The two basic operations in realistic rendering are transport (how much light passes from one place to another) and scattering (how surfaces interact with light). See Rendering (computer graphics) for more information. Rendering subfields include: Transport describes how illumination in a scene gets from one place to another. Visibility is a major component of light transport. Scattering: Models of scattering (how light interacts with the surface at a given point) and shading (how material properties vary across the surface) are used to describe the appearance of a surface. In graphics these problems are often studied within the context of rendering since they can substantially affect the design of rendering algorithms. Descriptions of scattering are usually given in terms of a bidirectional scattering distribution function (BSDF). The latter issue addresses how different types of scattering are distributed across the surface (i.e., which scattering function applies where). Descriptions of this kind are typically expressed with a program called a shader. (There is some confusion since the word "shader" is sometimes used for programs that describe local geometric variation.) Non-photorealistic rendering Physically based rendering – concerned with generating images according to the laws of geometric optics Real-time rendering – focuses on rendering for interactive applications, typically using specialized hardware like GPUs Relighting – recent area concerned with quickly re-rendering scenes Notable researchers Arthur Appel James Arvo Brian A. Barsky Jim Blinn Jack E. Bresenham Loren Carpenter Edwin Catmull James H. Clark Robert L. Cook Franklin C. Crow Paul Debevec David C. Evans Ron Fedkiw Steven K. Feiner James D. Foley David Forsyth Henry Fuchs Andrew Glassner Henri Gouraud (computer scientist) Donald P. Greenberg Eric Haines R. A. Hall Pat Hanrahan John Hughes Jim Kajiya Takeo Kanade Kenneth Knowlton Marc Levoy Martin Newell (computer scientist) James O'Brien Ken Perlin Matt Pharr Bui Tuong Phong Przemyslaw Prusinkiewicz William Reeves David F. Rogers Holly Rushmeier Peter Shirley James Sethian Ivan Sutherland Demetri Terzopoulos Kenneth Torrance Greg Turk Andries van Dam Henrik Wann Jensen Gregory Ward John Warnock J. 
Turner Whitted Lance Williams Applications for their use Bitmap Design / Image Editing Adobe Photoshop Corel Photo-Paint GIMP Krita Vector drawing Adobe Illustrator CorelDRAW Inkscape Affinity Designer Sketch Architecture VariCAD FreeCAD AutoCAD QCAD LibreCAD DataCAD Corel Designer Video editing Adobe Premiere Pro Sony Vegas Final Cut DaVinci Resolve Cinelerra VirtualDub Sculpting, Animation, and 3D Modeling Blender 3D Wings 3D ZBrush Sculptris SolidWorks Rhino3D SketchUp 3ds Max Cinema 4D Maya Houdini Digital composition Nuke Blackmagic Fusion Adobe After Effects Natron Rendering V-Ray RedShift RenderMan Octane Render Mantra Lumion (Architectural visualization) Other applications examples ACIS - geometric core Autodesk Softimage POV-Ray Scribus Silo Hexagon Lightwave See also Computer facial animation Computer science Computer science and engineering Computer graphics Digital geometry Digital image editing Geometry processing IBM PCPG, (1980s) Painter's algorithm Stanford Bunny Utah Teapot References Further reading Foley et al. Computer Graphics: Principles and Practice. Shirley. Fundamentals of Computer Graphics. Watt. 3D Computer Graphics. External links A Critical History of Computer Graphics and Animation History of Computer Graphics series of articles Industry Industrial labs doing "blue sky" graphics research include: Adobe Advanced Technology Labs MERL Microsoft Research – Graphics Nvidia Research Major film studios notable for graphics research include: ILM PDI/Dreamworks Animation Pixar +
Persistence (computer science)
In computer science, persistence refers to the characteristic of state of a system that outlives (persists more than) the process that created it. This is achieved in practice by storing the state as data in computer data storage. Programs have to transfer data to and from storage devices and have to provide mappings from the native programming-language data structures to the storage device data structures. Picture editing programs or word processors, for example, achieve state persistence by saving their documents to files. Orthogonal or transparent persistence Persistence is said to be "orthogonal" or "transparent" when it is implemented as an intrinsic property of the execution environment of a program. An orthogonal persistence environment does not require any specific actions by programs running in it to retrieve or save their state. Non-orthogonal persistence requires data to be written and read to and from storage using specific instructions in a program, resulting in the use of persist as a transitive verb: On completion, the program persists the data. The advantage of orthogonal persistence environments is simpler and less error-prone programs. The term "persistent" was first introduced by Atkinson and Morrison in the sense of orthogonal persistence: they used an adjective rather than a verb to emphasize persistence as a property of the data, as distinct from an imperative action performed by a program. The use of the transitive verb "persist" (describing an action performed by a program) is a back-formation. Adoption Orthogonal persistence is widely adopted in operating systems for hibernation and in platform virtualization systems such as VMware and VirtualBox for state saving. Research prototype languages such as PS-algol, Napier88, Fibonacci and pJama, successfully demonstrated the concepts along with the advantages to programmers. Persistence techniques System images Using system images is the simplest persistence strategy. Notebook hibernation is an example of orthogonal persistence using a system image because it does not require any actions by the programs running on the machine. An example of non-orthogonal persistence using a system image is a simple text editing program executing specific instructions to save an entire document to a file. Shortcomings: Requires enough RAM to hold the entire system state. State changes made to a system after its last image was saved are lost in the case of a system failure or shutdown. Saving an image for every single change would be too time-consuming for most systems, so images are not used as the single persistence technique for critical systems. Journals Using journals is the second simplest persistence technique. Journaling is the process of storing events in a log before each one is applied to a system. Such logs are called journals. On startup, the journal is read and each event is reapplied to the system, avoiding data loss in the case of system failure or shutdown. The entire "Undo/Redo" history of user commands in a picture editing program, for example, when written to a file, constitutes a journal capable of recovering the state of an edited picture at any point in time. Journals are used by journaling file systems, prevalent systems and database management systems where they are also called "transaction logs" or "redo logs". Shortcomings: When journals are used exclusively, the entire (potentially large) history of all system events must be reapplied on every system startup. 
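A minimal sketch of the journaling technique described above (Python; the log format and the toy key-value store are invented for illustration): every event is appended to the journal before it is applied, and on startup the whole journal is replayed to rebuild the state, which is exactly the startup cost noted in the shortcoming above.

```python
import json
import os

JOURNAL = "journal.log"          # hypothetical append-only log file

class KVStore:
    """A toy in-memory system whose state is made persistent only through the journal."""
    def __init__(self):
        self.state = {}
        self._replay()

    def _replay(self):
        """On startup, reapply every journaled event to reconstruct the state."""
        if os.path.exists(JOURNAL):
            with open(JOURNAL) as f:
                for line in f:
                    self._apply(json.loads(line))

    def _apply(self, event):
        if event["op"] == "set":
            self.state[event["key"]] = event["value"]
        elif event["op"] == "delete":
            self.state.pop(event["key"], None)

    def _journal(self, event):
        """Write ahead: the event is durably logged before it is applied."""
        with open(JOURNAL, "a") as f:
            f.write(json.dumps(event) + "\n")
            f.flush()
            os.fsync(f.fileno())

    def set(self, key, value):
        event = {"op": "set", "key": key, "value": value}
        self._journal(event)
        self._apply(event)

store = KVStore()
store.set("user", "alice")       # survives a crash: a new KVStore() replays the journal
```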
As a result, journals are often combined with other persistence techniques. Dirty writes This technique is the writing to storage of only those portions of system state that have been modified (are dirty) since their last write. Sophisticated document editing applications, for example, will use dirty writes to save only those portions of a document that were actually changed since the last save. Shortcomings: This technique requires state changes to be intercepted within a program. This is achieved in a non-transparent way by requiring specific storage-API calls or in a transparent way with automatic program transformation. This results in code that is slower than native code and more complicated to debug. Persistence layers Any software layer that makes it easier for a program to persist its state is generically called a persistence layer. Most persistence layers will not achieve persistence directly but will use an underlying database management system. System prevalence System prevalence is a technique that combines system images and transaction journals, mentioned above, to overcome their limitations. Shortcomings: A prevalent system must have enough RAM to hold the entire system state. Database management systems (DBMSs) DBMSs use a combination of the dirty writes and transaction journaling techniques mentioned above. They provide not only persistence but also other services such as queries, auditing and access control. Persistent operating systems Persistent operating systems are operating systems that remain persistent even after a crash or unexpected shutdown. Operating systems that employ this ability include KeyKOS EROS, the successor to KeyKOS Coyotos, successor to EROS Multics with its single-level store Phantom IBM System/38 IBM i Grasshopper OS Lua OS tahrpuppy-6.0.5 See also Persistent data Persistent data structure Persistent identifier Persistent memory Copy-on-write CRUD Java Data Objects Java Persistence API System prevalence Orthogonality Service Data Object Snapshot (computer storage) References Computing terminology Computer programming Models of computation
Knudsen number
The Knudsen number (Kn) is a dimensionless number defined as the ratio of the molecular mean free path length to a representative physical length scale. This length scale could be, for example, the radius of a body in a fluid. The number is named after Danish physicist Martin Knudsen (1871–1949). The Knudsen number helps determine whether statistical mechanics or the continuum mechanics formulation of fluid dynamics should be used to model a situation. If the Knudsen number is near or greater than one, the mean free path of a molecule is comparable to a length scale of the problem, and the continuum assumption of fluid mechanics is no longer a good approximation. In such cases, statistical methods should be used. Definition The Knudsen number is a dimensionless number defined as
\mathrm{Kn} = \frac{\lambda}{L},
where λ = mean free path [L1], L = representative physical length scale [L1]. The representative length scale considered, L, may correspond to various physical traits of a system, but most commonly relates to a gap length over which thermal transport or mass transport occurs through a gas phase. This is the case in porous and granular materials, where the thermal transport through a gas phase depends highly on its pressure and the consequent mean free path of molecules in this phase. For a Boltzmann gas, the mean free path may be readily calculated, so that
\lambda = \frac{k_{\mathrm{B}} T}{\sqrt{2}\,\pi d^2 p} = \frac{k_{\mathrm{B}}}{\sqrt{2}\,\pi d^2 \rho R},
where k_B is the Boltzmann constant (1.380649 × 10−23 J/K in SI units) [M1 L2 T−2 Θ−1], T is the thermodynamic temperature [θ1], d is the particle hard-shell diameter [L1], p is the static pressure [M1 L−1 T−2], R is the specific gas constant [L2 T−2 θ−1] (287.05 J/(kg K) for air), ρ is the density [M1 L−3]. If the temperature is increased, but the volume kept constant, then the Knudsen number (and the mean free path) doesn't change (for an ideal gas). In this case, the density stays the same. If the temperature is increased, and the pressure kept constant, then the gas expands and therefore its density decreases. In this case, the mean free path increases and so does the Knudsen number. Hence, it may be helpful to keep in mind that the mean free path (and therefore the Knudsen number) is really dependent on the thermodynamic variable density (proportional to the reciprocal of density), and only indirectly on temperature and pressure. For particle dynamics in the atmosphere, and assuming standard temperature and pressure, i.e. 0 °C and 1 atm, we have λ ≈ 8 × 10−8 m (80 nm). Relationship to Mach and Reynolds numbers in gases The Knudsen number can be related to the Mach number and the Reynolds number. Using the dynamic viscosity
\mu = \tfrac{1}{2}\,\rho\,\bar{c}\,\lambda
with the average molecule speed (from the Maxwell–Boltzmann distribution)
\bar{c} = \sqrt{\frac{8 k_{\mathrm{B}} T}{\pi m}},
the mean free path is determined as follows:
\lambda = \frac{\mu}{\rho}\sqrt{\frac{\pi m}{2 k_{\mathrm{B}} T}}.
Dividing through by L (some characteristic length), the Knudsen number is obtained:
\mathrm{Kn} = \frac{\mu}{\rho L}\sqrt{\frac{\pi m}{2 k_{\mathrm{B}} T}},
where c̄ is the average molecular speed from the Maxwell–Boltzmann distribution [L1 T−1], T is the thermodynamic temperature [θ1], μ is the dynamic viscosity [M1 L−1 T−1], m is the molecular mass [M1], kB is the Boltzmann constant [M1 L2 T−2 θ−1], ρ is the density [M1 L−3]. The dimensionless Mach number can be written as
\mathrm{Ma} = \frac{U_{\infty}}{c_{\mathrm{s}}},
where the speed of sound is given by
c_{\mathrm{s}} = \sqrt{\frac{\gamma R T}{M}},
where U∞ is the freestream speed [L1 T−1], R is the Universal gas constant (in SI, 8.314 47215 J K−1 mol−1) [M1 L2 T−2 θ−1 mol−1], M is the molar mass [M1 mol−1], γ is the ratio of specific heats [1].
The dimensionless Reynolds number can be written as
\mathrm{Re} = \frac{\rho U_{\infty} L}{\mu}.
Dividing the Mach number by the Reynolds number:
\frac{\mathrm{Ma}}{\mathrm{Re}} = \frac{\mu}{\rho L\sqrt{\gamma R T / M}} = \frac{\mu}{\rho L}\sqrt{\frac{m}{\gamma k_{\mathrm{B}} T}},
and by multiplying by \sqrt{\gamma\pi/2} yields the Knudsen number:
\frac{\mathrm{Ma}}{\mathrm{Re}}\sqrt{\frac{\gamma\pi}{2}} = \frac{\mu}{\rho L}\sqrt{\frac{\pi m}{2 k_{\mathrm{B}} T}} = \mathrm{Kn}.
The Mach, Reynolds and Knudsen numbers are therefore related by
\mathrm{Kn} = \frac{\mathrm{Ma}}{\mathrm{Re}}\sqrt{\frac{\gamma\pi}{2}}.
Application The Knudsen number can be used to determine the rarefaction of a flow: Kn < 0.01: Continuum flow; 0.01 < Kn < 0.1: Slip flow; 0.1 < Kn < 10: Transitional flow; Kn > 10: Free molecular flow. This regime classification is empirical and problem dependent but has proven useful to adequately model flows. Problems with high Knudsen numbers include the calculation of the motion of a dust particle through the lower atmosphere and the motion of a satellite through the exosphere. One of the most widely used applications for the Knudsen number is in microfluidics and MEMS device design where flows range from continuum to free-molecular. In recent years, it has been applied in other disciplines such as transport in porous media, e.g., petroleum reservoirs. Movements of fluids in situations with a high Knudsen number are said to exhibit Knudsen flow, also called free molecular flow. Airflow around an aircraft such as an airliner has a low Knudsen number, making it firmly in the realm of continuum mechanics. Using the Knudsen number an adjustment for Stokes' law can be used in the Cunningham correction factor; this is a drag force correction due to slip in small particles (i.e. dp < 5 μm). The flow of water through a nozzle will usually be a situation with a low Knudsen number. Mixtures of gases with different molecular masses can be partly separated by sending the mixture through small holes of a thin wall because the number of molecules that pass through a hole is proportional to the pressure of the gas and inversely proportional to its molecular mass. The technique has been used to separate isotopic mixtures, such as uranium, using porous membranes. It has also been successfully demonstrated for use in hydrogen production from water. The Knudsen number also plays an important role in thermal conduction in gases. For insulation materials, for example, where gases are contained under low pressure, the Knudsen number should be as high as possible to ensure low thermal conductivity. See also References External links Knudsen number and diffusivity calculators Dimensionless numbers Fluid dynamics Dimensionless numbers of fluid mechanics
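A worked example of the definitions above (Python; the effective hard-shell diameter used for air is a typical textbook value of about 3.7 × 10⁻¹⁰ m, and the length scales are invented): it computes the mean free path of air at 0 °C and 1 atm, the Knudsen number for a few length scales, and the corresponding flow regime from the classification given above.

```python
import math

k_B = 1.380649e-23        # Boltzmann constant, J/K

def mean_free_path(T, p, d):
    """lambda = k_B T / (sqrt(2) * pi * d^2 * p)"""
    return k_B * T / (math.sqrt(2) * math.pi * d**2 * p)

def knudsen(T, p, d, L):
    return mean_free_path(T, p, d) / L

def regime(Kn):
    if Kn < 0.01:
        return "continuum flow"
    if Kn < 0.1:
        return "slip flow"
    if Kn < 10:
        return "transitional flow"
    return "free molecular flow"

T, p = 273.15, 101_325.0          # 0 degC, 1 atm
d_air = 3.7e-10                   # assumed effective hard-shell diameter of an air molecule, m

lam = mean_free_path(T, p, d_air)
print(f"mean free path ~ {lam * 1e9:.0f} nm")   # ~60 nm with this d, the same order as the ~80 nm quoted above

for L in (1.0, 1e-6, 1e-8):       # airliner-scale, a microchannel, a nanopore (illustrative)
    Kn = knudsen(T, p, d_air, L)
    print(f"L = {L:g} m -> Kn = {Kn:.3g} ({regime(Kn)})")
```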
0.773888
0.990552
0.766577
Torricelli's law
Torricelli's law, also known as Torricelli's theorem, is a theorem in fluid dynamics relating the speed of fluid flowing from an orifice to the height of fluid above the opening. The law states that the speed of efflux of a fluid through a sharp-edged hole in the wall of the tank filled to a height above the hole is the same as the speed that a body would acquire in falling freely from a height , where is the acceleration due to gravity. This expression comes from equating the kinetic energy gained, , with the potential energy lost, , and solving for . The law was discovered (though not in this form) by the Italian scientist Evangelista Torricelli, in 1643. It was later shown to be a particular case of Bernoulli's principle. Derivation Under the assumptions of an incompressible fluid with negligible viscosity, Bernoulli's principle states that the hydraulic energy is constant at any two points in the flowing liquid. Here is fluid speed, is the acceleration due to gravity, is the height above some reference point, is the pressure, and is the density. In order to derive Torricelli's formula the first point with no index is taken at the liquid's surface, and the second just outside the opening. Since the liquid is assumed to be incompressible, is equal to and; both can be represented by one symbol . The pressure and are typically both atmospheric pressure, so . Furthermore is equal to the height of the liquid's surface over the opening: The velocity of the surface can by related to the outflow velocity by the continuity equation , where is the orifice's cross section and is the (cylindrical) vessel's cross section. Renaming to (A like Aperture) gives: Torricelli's law is obtained as a special case when the opening is very small relative to the horizontal cross-section of the container : Torricelli's law can only be applied when viscous effects can be neglected which is the case for water flowing out through orifices in vessels. Experimental verification: Spouting can experiment Every physical theory must be verified by experiments. The spouting can experiment consists of a cylindrical vessel filled up with water and with several holes in different heights. It is designed to show that in a liquid with an open surface, pressure increases with depth. The fluid exit velocity is greater further down the vessel. The outflowing jet forms a downward parabola where every parabola reaches farther out the larger the distance between the orifice and the surface is. The shape of the parabola is only dependent on the outflow velocity and can be determined from the fact that every molecule of the liquid forms a ballistic trajectory (see projectile motion) where the initial velocity is the outflow velocity : The results confirm the correctness of Torricelli's law very well. Discharge and time to empty a cylindrical vessel Assuming that a vessel is cylindrical with fixed cross-sectional area , with orifice of area at the bottom, then rate of change of water level height is not constant. The water volume in the vessel is changing due to the discharge out of the vessel: Integrating both sides and re-arranging, we obtain where is the initial height of the water level and is the total time taken to drain all the water and hence empty the vessel. This formula has several implications. If a tank with volume with cross section and height , so that , is fully filled, then the time to drain all the water is This implies that high tanks with same filling volume drain faster than wider ones. 
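A short numerical sketch (not part of the original article; values are assumptions chosen for illustration) of the two results just derived: the efflux speed including the finite-vessel correction from the continuity equation, and the time to empty a cylindrical vessel.

```python
import math

g = 9.81  # m/s^2, assumed standard gravity

def efflux_speed(h, A_hole, A_vessel):
    """Outflow speed from Bernoulli + continuity; reduces to Torricelli's
    sqrt(2*g*h) when the hole is much smaller than the vessel cross-section."""
    return math.sqrt(2.0 * g * h / (1.0 - (A_hole / A_vessel) ** 2))

def drain_time(H, A_vessel, A_hole):
    """Time to empty a cylindrical vessel from initial level H: (A/a)*sqrt(2*H/g)."""
    return (A_vessel / A_hole) * math.sqrt(2.0 * H / g)

# Assumed example: 1 m of water, 0.5 m^2 vessel cross-section, 1 cm^2 sharp-edged hole
h, A_vessel, A_hole = 1.0, 0.5, 1.0e-4

print(f"Torricelli speed       : {math.sqrt(2 * g * h):.3f} m/s")
print(f"with finite-vessel term: {efflux_speed(h, A_hole, A_vessel):.3f} m/s")  # indistinguishable here
print(f"time to empty          : {drain_time(h, A_vessel, A_hole):.0f} s")
```

The two speeds agree to many digits because the aperture is tiny compared with the vessel, which is exactly the special case in which Torricelli's law applies.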
Lastly, we can re-arrange the above equation to determine the height of the water level as a function of time as where is the height of the container while is the discharge time as given above. Discharge experiment, coefficient of discharge The discharge theory can be tested by measuring the emptying time or time series of the water level within the cylindrical vessel. In many cases, such experiments do not confirm the presented discharge theory: when comparing the theoretical predictions of the discharge process with measurements, very large differences can be found in such cases. In reality, the tank usually drains much more slowly. Looking at the discharge formula two quantities could be responsible for this discrepancy: the outflow velocity or the effective outflow cross section. In 1738 Daniel Bernoulli attributed the discrepancy between the theoretical and the observed outflow behavior to the formation of a vena contracta which reduces the outflow cross-section from the orifice's cross-section to the contracted cross-section and stated that the discharge is: Actually this is confirmed by state-of-the-art experiments (see ) in which the discharge, the outflow velocity and the cross-section of the vena contracta were measured. Here it was also shown that the outflow velocity is predicted extremely well by Torricelli's law and that no velocity correction (like a "coefficient of velocity") is needed. The problem remains how to determine the cross-section of the vena contracta. This is normally done by introducing a discharge coefficient which relates the discharge to the orifice's cross-section and Torricelli's law: For low viscosity liquids (such as water) flowing out of a round hole in a tank, the discharge coefficient is in the order of 0.65. By discharging through a round tube or hose, the coefficient of discharge can be increased to over 0.9. For rectangular openings, the discharge coefficient can be up to 0.67, depending on the height-width ratio. Applications Horizontal distance covered by the jet of liquid If is height of the orifice above the ground and is height of the liquid column from the ground (height of liquid's surface), then the horizontal distance covered by the jet of liquid to reach the same level as the base of the liquid column can be easily derived. Since be the vertical height traveled by a particle of jet stream, we have from the laws of falling body where is the time taken by the jet particle to fall from the orifice to the ground. If the horizontal efflux velocity is , then the horizontal distance traveled by the jet particle during the time duration is Since the water level is above the orifice, the horizontal efflux velocity as given by Torricelli's law. Thus, we have from the two equations The location of the orifice that yields the maximum horizontal range is obtained by differentiating the above equation for with respect to , and solving . Here we have Solving we obtain and the maximum range Clepsydra problem A clepsydra is a clock that measures time by the flow of water. It consists of a pot with a small hole at the bottom through which the water can escape. The amount of escaping water gives the measure of time. As given by the Torricelli's law, the rate of efflux through the hole depends on the height of the water; and as the water level diminishes, the discharge is not uniform. A simple solution is to keep the height of the water constant. 
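The horizontal-range result described above can be checked numerically. The sketch below is illustrative only; the surface height is an assumed value, and the discharge coefficient is the rough figure of about 0.65 quoted in the text for a round hole.

```python
import math

g = 9.81  # m/s^2

def jet_range(h_orifice, H_surface):
    """Horizontal distance covered by the jet before returning to the base level,
    combining the Torricelli efflux speed with free fall: D = 2*sqrt(h*(H - h))."""
    h, H = h_orifice, H_surface
    return 2.0 * math.sqrt(h * (H - h))

H = 2.0  # assumed height of the liquid surface above the ground (m)
best = max((jet_range(h, H), h) for h in [i * 0.01 for i in range(1, 200)])
print(f"max range ~ {best[0]:.2f} m at orifice height ~ {best[1]:.2f} m")  # ~H, reached at h ~ H/2

# Effective discharge through a round hole, using the ~0.65 coefficient mentioned above
A_hole, h_head, Cd = 1.0e-4, 1.0, 0.65
Q = Cd * A_hole * math.sqrt(2.0 * g * h_head)
print(f"discharge ~ {Q * 1000:.3f} L/s")
```

The brute-force scan recovers the analytic result stated above: the range is maximised when the orifice sits halfway up the liquid column, and the maximum range equals the surface height.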
This can be attained by letting a constant stream of water flow into the vessel, the overflow of which is allowed to escape from the top, from another hole. Thus having a constant height, the discharging water from the bottom can be collected in another cylindrical vessel with uniform graduation to measure time. This is an inflow clepsydra. Alternatively, by carefully selecting the shape of the vessel, the water level in the vessel can be made to decrease at constant rate. By measuring the level of water remaining in the vessel, the time can be measured with uniform graduation. This is an example of outflow clepsydra. Since the water outflow rate is higher when the water level is higher (due to more pressure), the fluid's volume should be more than a simple cylinder when the water level is high. That is, the radius should be larger when the water level is higher. Let the radius increase with the height of the water level above the exit hole of area That is, . We want to find the radius such that the water level has a constant rate of decrease, i.e. . At a given water level , the water surface area is . The instantaneous rate of change in water volume is From Torricelli's law, the rate of outflow is From these two equations, Thus, the radius of the container should change in proportion to the quartic root of its height, Likewise, if the shape of the vessel of the outflow clepsydra cannot be modified according to the above specification, then we need to use non-uniform graduation to measure time. The emptying time formula above tells us the time should be calibrated as the square root of the discharged water height, More precisely, where is the time taken by the water level to fall from the height of to height of . Torricelli's original derivation Evangelista Torricelli's original derivation can be found in the second book 'De motu aquarum' of his 'Opera Geometrica'. He starts a tube AB (Figure (a)) filled up with water to the level A. Then a narrow opening is drilled at the level of B and connected to a second vertical tube BC. Due to the hydrostatic principle of communicating vessels the water lifts up to the same filling level AC in both tubes (Figure (b)). When finally the tube BC is removed (Figure (c)) the water should again lift up to this height, which is named AD in Figure (c). The reason for that behavior is the fact that a droplet's falling velocity from a height A to B is equal to the initial velocity that is needed to lift up a droplet from B to A. When performing such an experiment only the height C (instead of D in figure (c)) will be reached which contradicts the proposed theory. Torricelli attributes this defect to the air resistance and to the fact that the descending drops collide with ascending drops. Torricelli's argumentation is, as a matter of fact, wrong because the pressure in free jet is the surrounding atmospheric pressure, while the pressure in a communicating vessel is the hydrostatic pressure. At that time the concept of pressure was unknown. 
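A small simulation (assumed illustrative dimensions, not from the source) confirming the quartic-root profile described above: if the vessel radius grows as r ∝ h^(1/4), a simple forward-Euler drain calculation gives an essentially constant rate of fall of the water level, which is what an outflow clepsydra needs.

```python
import math

g = 9.81
a_hole = 1.0e-5        # orifice area (m^2), assumed
r0, h0 = 0.05, 0.5     # radius at the reference level h0 (m), assumed

def radius(h):
    """Quartic-root profile: r(h) = r0 * (h / h0)**0.25."""
    return r0 * (h / h0) ** 0.25

h, dt, rates = h0, 0.05, []
while h > 0.05 * h0:
    A = math.pi * radius(h) ** 2                       # water surface area at level h
    dh_dt = -a_hole * math.sqrt(2.0 * g * h) / A       # Torricelli outflow
    rates.append(dh_dt)
    h += dh_dt * dt

print(f"fall rate stays between {min(rates):.4e} and {max(rates):.4e} m/s")  # nearly constant
```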
See also Darcy's law Dynamic pressure Fluid statics Hagen–Poiseuille equation Helmholtz's theorems Kirchhoff equations Knudsen equation Manning equation Mild-slope equation Morison equation Navier–Stokes equations Oseen flow Pascal's law Poiseuille's law Potential flow Pressure Static pressure Pressure head Relativistic Euler equations Reynolds decomposition Stokes flow Stokes stream function Stream function Streamlines, streaklines and pathlines References Further reading Stanley Middleman, An Introduction to Fluid Dynamics: Principles of Analysis and Design (John Wiley & Sons, 1997) Eponymous theorems of physics Fluid dynamics Physics experiments
0.773697
0.990796
0.766576
The Theoretical Minimum
The Theoretical Minimum: What You Need to Know to Start Doing Physics is a popular science book by Leonard Susskind and George Hrabovsky. The book was initially published on January 29, 2013 by Basic Books. The Theoretical Minimum is a book and a Stanford University-based continuing-education lecture series, which became a popular YouTube-featured content. The series commenced with What You Need to Know (above) reissued under the title Classical Mechanics: The Theoretical Minimum. The series presently stands at four books (as of early 2023) covering the first four of six core courses devoted to: classical mechanics, quantum mechanics, special relativity and classical field theory, general relativity, cosmology, and statistical mechanics. Videos for all of these courses are available online. In addition, Susskind has made available video lectures over a range of supplement subject areas including: advanced quantum mechanics, the Higgs boson, quantum entanglement, string theory, and black holes. The full series delivers over 100 lectures amounting to something on the order of 200 hours of content, with some of the individual lectures having received over a million YouTube views. What You Need to Know book overview The book is a mathematical introduction to various theoretical physics concepts, such as principle of least action, Lagrangian mechanics, Hamiltonian mechanics, Poisson brackets, and electromagnetism. It is the first book in a series called The Theoretical Minimum, based on Stanford Continuing Studies courses taught by world renowned physicist Leonard Susskind. The courses collectively teach everything required to gain a basic understanding of each area of modern physics, including much of the fundamental mathematics. Full lecture series Core Course 1: Classical Mechanics The book, also published in 2014 by Penguin Books under the title Classical Mechanics: The Theoretical Minimum, is complemented by video recordings of the complete lectures which are available on-line. There is also a supplemental website for the book. Core Course 2: Quantum Mechanics The second book in the series, by Leonard Susskind and Art Friedman, was published in 2014 by Basic Books under the title Quantum Mechanics: The Theoretical Minimum. Video recordings of the complete lectures are available on-line. Core Course 3: Special Relativity and Classical Field Theory The third book in the series, by Leonard Susskind and Art Friedman, was published in 2017. This covers special relativity and classical field theory. Core Course 4: General Relativity The fourth book in the series, by Leonard Susskind and André Cabannes, was published in January 2023. This covers the general theory of relativity. Core Courses 5-6 Lectures in the remaining two courses, on the subjects of: Cosmology. Statistical mechanics. are available on-line as video recordings, or in written notes Supplemental Courses Further lecture courses in the Theoretical Minimum series have been delivered by Susskind, on these subjects (or with these titles): Advanced quantum mechanics. Higgs boson. Quantum entanglement. Relativity. Particle Physics 1: Basic Concepts. Particle Physics 2: Standard Model. Particle Physics 3: Super-symmetry and Grand Unification. String theory. Cosmology and black holes. These are also available on-line as video recordings. References External links The Theoretical Minimum website at the Stanford Institute for Theoretical Physics. Solutions to The Theoretical Minimum, Classical Mechanics by Filip Van Lijsebetten. 
Solutions to The Theoretical Minimum, Quantum Mechanics by Filip Van Lijsebetten. Popular physics books 2013 non-fiction books Basic Books books Books of lectures
0.778106
0.985173
0.766569
Quantum electrodynamics
In particle physics, quantum electrodynamics (QED) is the relativistic quantum field theory of electrodynamics. In essence, it describes how light and matter interact and is the first theory where full agreement between quantum mechanics and special relativity is achieved. QED mathematically describes all phenomena involving electrically charged particles interacting by means of exchange of photons and represents the quantum counterpart of classical electromagnetism giving a complete account of matter and light interaction. In technical terms, QED can be described as a very accurate way to calculate the probability of the position and movement of particles, even those massless such as photons, and the quantity depending on position (field) of those particles, and described light and matter beyond the wave-particle duality proposed by Albert Einstein in 1905. Richard Feynman called it "the jewel of physics" for its extremely accurate predictions of quantities like the anomalous magnetic moment of the electron and the Lamb shift of the energy levels of hydrogen. It is the most precise and stringently tested theory in physics. History The first formulation of a quantum theory describing radiation and matter interaction is attributed to British scientist Paul Dirac, who (during the 1920s) was able to compute the coefficient of spontaneous emission of an atom. He is also credited with coining the term "quantum electrodynamics". Dirac described the quantization of the electromagnetic field as an ensemble of harmonic oscillators with the introduction of the concept of creation and annihilation operators of particles. In the following years, with contributions from Wolfgang Pauli, Eugene Wigner, Pascual Jordan, Werner Heisenberg and an elegant formulation of quantum electrodynamics by Enrico Fermi, physicists came to believe that, in principle, it would be possible to perform any computation for any physical process involving photons and charged particles. However, further studies by Felix Bloch with Arnold Nordsieck, and Victor Weisskopf, in 1937 and 1939, revealed that such computations were reliable only at a first order of perturbation theory, a problem already pointed out by Robert Oppenheimer. At higher orders in the series infinities emerged, making such computations meaningless and casting serious doubts on the internal consistency of the theory itself. With no solution for this problem known at the time, it appeared that a fundamental incompatibility existed between special relativity and quantum mechanics. Difficulties with the theory increased through the end of the 1940s. Improvements in microwave technology made it possible to take more precise measurements of the shift of the levels of a hydrogen atom, now known as the Lamb shift and magnetic moment of the electron. These experiments exposed discrepancies which the theory was unable to explain. A first indication of a possible way out was given by Hans Bethe in 1947, after attending the Shelter Island Conference. While he was traveling by train from the conference to Schenectady he made the first non-relativistic computation of the shift of the lines of the hydrogen atom as measured by Lamb and Retherford. Despite the limitations of the computation, agreement was excellent. The idea was simply to attach infinities to corrections of mass and charge that were actually fixed to a finite value by experiments. In this way, the infinities get absorbed in those constants and yield a finite result in good agreement with experiments. 
This procedure was named renormalization. Based on Bethe's intuition and fundamental papers on the subject by Shin'ichirō Tomonaga, Julian Schwinger, Richard Feynman and Freeman Dyson, it was finally possible to get fully covariant formulations that were finite at any order in a perturbation series of quantum electrodynamics. Shin'ichirō Tomonaga, Julian Schwinger and Richard Feynman were jointly awarded with the 1965 Nobel Prize in Physics for their work in this area. Their contributions, and those of Freeman Dyson, were about covariant and gauge-invariant formulations of quantum electrodynamics that allow computations of observables at any order of perturbation theory. Feynman's mathematical technique, based on his diagrams, initially seemed very different from the field-theoretic, operator-based approach of Schwinger and Tomonaga, but Freeman Dyson later showed that the two approaches were equivalent. Renormalization, the need to attach a physical meaning at certain divergences appearing in the theory through integrals, has subsequently become one of the fundamental aspects of quantum field theory and has come to be seen as a criterion for a theory's general acceptability. Even though renormalization works very well in practice, Feynman was never entirely comfortable with its mathematical validity, even referring to renormalization as a "shell game" and "hocus pocus". Thence, neither Feynman nor Dirac were happy with that way to approach the observations made in theoretical physics, above all in quantum mechanics. QED has served as the model and template for all subsequent quantum field theories. One such subsequent theory is quantum chromodynamics, which began in the early 1960s and attained its present form in the 1970s work by H. David Politzer, Sidney Coleman, David Gross and Frank Wilczek. Building on the pioneering work of Schwinger, Gerald Guralnik, Dick Hagen, and Tom Kibble, Peter Higgs, Jeffrey Goldstone, and others, Sheldon Glashow, Steven Weinberg and Abdus Salam independently showed how the weak nuclear force and quantum electrodynamics could be merged into a single electroweak force. Feynman's view of quantum electrodynamics Introduction Near the end of his life, Richard Feynman gave a series of lectures on QED intended for the lay public. These lectures were transcribed and published as Feynman (1985), QED: The Strange Theory of Light and Matter, a classic non-mathematical exposition of QED from the point of view articulated below. The key components of Feynman's presentation of QED are three basic actions. A photon goes from one place and time to another place and time. An electron goes from one place and time to another place and time. An electron emits or absorbs a photon at a certain place and time. These actions are represented in the form of visual shorthand by the three basic elements of diagrams: a wavy line for the photon, a straight line for the electron and a junction of two straight lines and a wavy one for a vertex representing emission or absorption of a photon by an electron. These can all be seen in the adjacent diagram. As well as the visual shorthand for the actions, Feynman introduces another kind of shorthand for the numerical quantities called probability amplitudes. The probability is the square of the absolute value of total probability amplitude, . If a photon moves from one place and time to another place and time , the associated quantity is written in Feynman's shorthand as , and it depends on only the momentum and polarization of the photon. 
The similar quantity for an electron moving from to is written . It depends on the momentum and polarization of the electron, in addition to a constant Feynman calls n, sometimes called the "bare" mass of the electron: it is related to, but not the same as, the measured electron mass. Finally, the quantity that tells us about the probability amplitude for an electron to emit or absorb a photon Feynman calls j, and is sometimes called the "bare" charge of the electron: it is a constant, and is related to, but not the same as, the measured electron charge e. QED is based on the assumption that complex interactions of many electrons and photons can be represented by fitting together a suitable collection of the above three building blocks and then using the probability amplitudes to calculate the probability of any such complex interaction. It turns out that the basic idea of QED can be communicated while assuming that the square of the total of the probability amplitudes mentioned above (P(A to B), E(C to D) and j) acts just like our everyday probability (a simplification made in Feynman's book). Later on, this will be corrected to include specifically quantum-style mathematics, following Feynman. The basic rules of probability amplitudes that will be used are: The indistinguishability criterion in (a) is very important: it means that there is no observable feature present in the given system that in any way "reveals" which alternative is taken. In such a case, one cannot observe which alternative actually takes place without changing the experimental setup in some way (e.g. by introducing a new apparatus into the system). Whenever one is able to observe which alternative takes place, one always finds that the probability of the event is the sum of the probabilities of the alternatives. Indeed, if this were not the case, the very term "alternatives" to describe these processes would be inappropriate. What (a) says is that once the physical means for observing which alternative occurred is removed, one cannot still say that the event is occurring through "exactly one of the alternatives" in the sense of adding probabilities; one must add the amplitudes instead. Similarly, the independence criterion in (b) is very important: it only applies to processes which are not "entangled". Basic constructions Suppose we start with one electron at a certain place and time (this place and time being given the arbitrary label A) and a photon at another place and time (given the label B). A typical question from a physical standpoint is: "What is the probability of finding an electron at C (another place and a later time) and a photon at D (yet another place and time)?". The simplest process to achieve this end is for the electron to move from A to C (an elementary action) and for the photon to move from B to D (another elementary action). From a knowledge of the probability amplitudes of each of these sub-processes – E(A to C) and P(B to D) – we would expect to calculate the probability amplitude of both happening together by multiplying them, using rule b) above. This gives a simple estimated overall probability amplitude, which is squared to give an estimated probability. But there are other ways in which the result could come about. The electron might move to a place and time E, where it absorbs the photon; then move on before emitting another photon at F; then move on to C, where it is detected, while the new photon moves on to D. 
The probability of this complex process can again be calculated by knowing the probability amplitudes of each of the individual actions: three electron actions, two photon actions and two vertexes – one emission and one absorption. We would expect to find the total probability amplitude by multiplying the probability amplitudes of each of the actions, for any chosen positions of E and F. We then, using rule a) above, have to add up all these probability amplitudes for all the alternatives for E and F. (This is not elementary in practice and involves integration.) But there is another possibility, which is that the electron first moves to G, where it emits a photon, which goes on to D, while the electron moves on to H, where it absorbs the first photon, before moving on to C. Again, we can calculate the probability amplitude of these possibilities (for all points G and H). We then have a better estimation for the total probability amplitude by adding the probability amplitudes of these two possibilities to our original simple estimate. Incidentally, the name given to this process of a photon interacting with an electron in this way is Compton scattering. There is an infinite number of other intermediate "virtual" processes in which more and more photons are absorbed and/or emitted. For each of these processes, a Feynman diagram could be drawn describing it. This implies a complex computation for the resulting probability amplitudes, but provided it is the case that the more complicated the diagram, the less it contributes to the result, it is only a matter of time and effort to find as accurate an answer as one wants to the original question. This is the basic approach of QED. To calculate the probability of any interactive process between electrons and photons, it is a matter of first noting, with Feynman diagrams, all the possible ways in which the process can be constructed from the three basic elements. Each diagram involves some calculation involving definite rules to find the associated probability amplitude. That basic scaffolding remains when one moves to a quantum description, but some conceptual changes are needed. One is that whereas we might expect in our everyday life that there would be some constraints on the points to which a particle can move, that is not true in full quantum electrodynamics. There is a nonzero probability amplitude of an electron at A, or a photon at B, moving as a basic action to any other place and time in the universe. That includes places that could only be reached at speeds greater than that of light and also earlier times. (An electron moving backwards in time can be viewed as a positron moving forward in time.) Probability amplitudes Quantum mechanics introduces an important change in the way probabilities are computed. Probabilities are still represented by the usual real numbers we use for probabilities in our everyday world, but probabilities are computed as the square modulus of probability amplitudes, which are complex numbers. Feynman avoids exposing the reader to the mathematics of complex numbers by using a simple but accurate representation of them as arrows on a piece of paper or screen. (These must not be confused with the arrows of Feynman diagrams, which are simplified representations in two dimensions of a relationship between points in three dimensions of space and one of time.) The amplitude arrows are fundamental to the description of the world given by quantum theory. 
They are related to our everyday ideas of probability by the simple rule that the probability of an event is the square of the length of the corresponding amplitude arrow. So, for a given process, if two probability amplitudes, v and w, are involved, the probability of the process will be given either by or The rules as regards adding or multiplying, however, are the same as above. But where you would expect to add or multiply probabilities, instead you add or multiply probability amplitudes that now are complex numbers. Addition and multiplication are common operations in the theory of complex numbers and are given in the figures. The sum is found as follows. Let the start of the second arrow be at the end of the first. The sum is then a third arrow that goes directly from the beginning of the first to the end of the second. The product of two arrows is an arrow whose length is the product of the two lengths. The direction of the product is found by adding the angles that each of the two have been turned through relative to a reference direction: that gives the angle that the product is turned relative to the reference direction. That change, from probabilities to probability amplitudes, complicates the mathematics without changing the basic approach. But that change is still not quite enough because it fails to take into account the fact that both photons and electrons can be polarized, which is to say that their orientations in space and time have to be taken into account. Therefore, P(A to B) consists of 16 complex numbers, or probability amplitude arrows. There are also some minor changes to do with the quantity j, which may have to be rotated by a multiple of 90° for some polarizations, which is only of interest for the detailed bookkeeping. Associated with the fact that the electron can be polarized is another small necessary detail, which is connected with the fact that an electron is a fermion and obeys Fermi–Dirac statistics. The basic rule is that if we have the probability amplitude for a given complex process involving more than one electron, then when we include (as we always must) the complementary Feynman diagram in which we exchange two electron events, the resulting amplitude is the reverse – the negative – of the first. The simplest case would be two electrons starting at A and B ending at C and D. The amplitude would be calculated as the "difference", , where we would expect, from our everyday idea of probabilities, that it would be a sum. Propagators Finally, one has to compute P(A to B) and E(C to D) corresponding to the probability amplitudes for the photon and the electron respectively. These are essentially the solutions of the Dirac equation, which describe the behavior of the electron's probability amplitude and the Maxwell's equations, which describes the behavior of the photon's probability amplitude. These are called Feynman propagators. The translation to a notation commonly used in the standard literature is as follows: where a shorthand symbol such as stands for the four real numbers that give the time and position in three dimensions of the point labeled A. Mass renormalization A problem arose historically which held up progress for twenty years: although we start with the assumption of three basic "simple" actions, the rules of the game say that if we want to calculate the probability amplitude for an electron to get from A to B, we must take into account all the possible ways: all possible Feynman diagrams with those endpoints. 
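Feynman's arrow arithmetic described in the probability-amplitude discussion above is ordinary complex-number arithmetic. The sketch below (not from the source; the lengths and angles are made-up illustrative values) adds and multiplies a few amplitudes and applies the two-electron exchange sign rule.

```python
import cmath

def prob(amplitude):
    """The probability is the squared length of the amplitude arrow."""
    return abs(amplitude) ** 2

# Two illustrative amplitude "arrows" (arbitrary lengths and angles)
v = 0.6 * cmath.exp(1j * 0.3)
w = 0.5 * cmath.exp(1j * 2.0)

# Indistinguishable alternatives: add the arrows tip-to-tail, then square the length
print("P(either alternative) =", prob(v + w))

# Independent sub-processes: multiply the arrows (lengths multiply, angles add)
print("P(both sub-processes) =", prob(v * w), "=", prob(v) * prob(w))

# Fermi statistics: exchanging the two final-state electrons flips the sign, so the
# total amplitude is a difference, e.g. E(A->C)*E(B->D) - E(A->D)*E(B->C)
E_AC, E_BD = 0.7 * cmath.exp(1j * 0.1), 0.4 * cmath.exp(1j * 1.2)
E_AD, E_BC = 0.3 * cmath.exp(1j * 0.9), 0.5 * cmath.exp(1j * 0.4)
total = E_AC * E_BD - E_AD * E_BC
print("P(two-electron process) =", prob(total))
```

Note that the probability of two independent sub-processes factorises, while the probability of indistinguishable alternatives does not: the arrows can partly cancel, which is the interference the text is describing.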
Thus there will be a way in which the electron travels to C, emits a photon there and then absorbs it again at D before moving on to B. Or it could do this kind of thing twice, or more. In short, we have a fractal-like situation in which if we look closely at a line, it breaks up into a collection of "simple" lines, each of which, if looked at closely, are in turn composed of "simple" lines, and so on ad infinitum. This is a challenging situation to handle. If adding that detail only altered things slightly, then it would not have been too bad, but disaster struck when it was found that the simple correction mentioned above led to infinite probability amplitudes. In time this problem was "fixed" by the technique of renormalization. However, Feynman himself remained unhappy about it, calling it a "dippy process", and Dirac also criticized this procedure as "in mathematics one does not get rid of infinities when it does not please you". Conclusions Within the above framework physicists were then able to calculate to a high degree of accuracy some of the properties of electrons, such as the anomalous magnetic dipole moment. However, as Feynman points out, it fails to explain why particles such as the electron have the masses they do. "There is no theory that adequately explains these numbers. We use the numbers in all our theories, but we don't understand them – what they are, or where they come from. I believe that from a fundamental point of view, this is a very interesting and serious problem." Mathematical formulation QED action Mathematically, QED is an abelian gauge theory with the symmetry group U(1), defined on Minkowski space (flat spacetime). The gauge field, which mediates the interaction between the charged spin-1/2 fields, is the electromagnetic field. The QED Lagrangian for a spin-1/2 field interacting with the electromagnetic field in natural units gives rise to the action where are Dirac matrices. a bispinor field of spin-1/2 particles (e.g. electron–positron field). , called "psi-bar", is sometimes referred to as the Dirac adjoint. is the gauge covariant derivative. e is the coupling constant, equal to the electric charge of the bispinor field. is the covariant four-potential of the electromagnetic field generated by the electron itself. It is also known as a gauge field or a connection. is the external field imposed by external source. m is the mass of the electron or positron. is the electromagnetic field tensor. This is also known as the curvature of the gauge field. Expanding the covariant derivative reveals a second useful form of the Lagrangian (external field set to zero for simplicity) where is the conserved current arising from Noether's theorem. It is written Equations of motion Expanding the covariant derivative in the Lagrangian gives For simplicity, has been set to zero. Alternatively, we can absorb into a new gauge field and relabel the new field as From this Lagrangian, the equations of motion for the and fields can be obtained. Equation of motion for ψ These arise most straightforwardly by considering the Euler-Lagrange equation for . 
Since the Lagrangian contains no terms, we immediately get so the equation of motion can be written Equation of motion for Aμ Using the Euler–Lagrange equation for the field, the derivatives this time are Substituting back into leads to which can be written in terms of the current as Now, if we impose the Lorenz gauge condition the equations reduce to which is a wave equation for the four-potential, the QED version of the classical Maxwell equations in the Lorenz gauge. (The square represents the wave operator, .) Interaction picture This theory can be straightforwardly quantized by treating bosonic and fermionic sectors as free. This permits us to build a set of asymptotic states that can be used to start computation of the probability amplitudes for different processes. In order to do so, we have to compute an evolution operator, which for a given initial state will give a final state in such a way to have This technique is also known as the S-matrix. The evolution operator is obtained in the interaction picture, where time evolution is given by the interaction Hamiltonian, which is the integral over space of the second term in the Lagrangian density given above: and so, one has where T is the time-ordering operator. This evolution operator only has meaning as a series, and what we get here is a perturbation series with the fine-structure constant as the development parameter. This series is called the Dyson series. Feynman diagrams Despite the conceptual clarity of this Feynman approach to QED, almost no early textbooks follow him in their presentation. When performing calculations, it is much easier to work with the Fourier transforms of the propagators. Experimental tests of quantum electrodynamics are typically scattering experiments. In scattering theory, particles' momenta rather than their positions are considered, and it is convenient to think of particles as being created or annihilated when they interact. Feynman diagrams then look the same, but the lines have different interpretations. The electron line represents an electron with a given energy and momentum, with a similar interpretation of the photon line. A vertex diagram represents the annihilation of one electron and the creation of another together with the absorption or creation of a photon, each having specified energies and momenta. Using Wick's theorem on the terms of the Dyson series, all the terms of the S-matrix for quantum electrodynamics can be computed through the technique of Feynman diagrams. In this case, rules for drawing are the following To these rules we must add a further one for closed loops that implies an integration on momenta , since these internal ("virtual") particles are not constrained to any specific energy–momentum, even that usually required by special relativity (see Propagator for details). The signature of the metric is . From them, computations of probability amplitudes are straightforwardly given. An example is Compton scattering, with an electron and a photon undergoing elastic scattering. Feynman diagrams are in this case and so we are able to get the corresponding amplitude at the first order of a perturbation series for the S-matrix: from which we can compute the cross section for this scattering. Nonperturbative phenomena The predictive success of quantum electrodynamics largely rests on the use of perturbation theory, expressed in Feynman diagrams. However, quantum electrodynamics also leads to predictions beyond perturbation theory. 
In the presence of very strong electric fields, it predicts that electrons and positrons will be spontaneously produced, so causing the decay of the field. This process, called the Schwinger effect, cannot be understood in terms of any finite number of Feynman diagrams and hence is described as nonperturbative. Mathematically, it can be derived by a semiclassical approximation to the path integral of quantum electrodynamics. Renormalizability Higher-order terms can be straightforwardly computed for the evolution operator, but these terms display diagrams containing the following simpler ones that, being closed loops, imply the presence of diverging integrals having no mathematical meaning. To overcome this difficulty, a technique called renormalization has been devised, producing finite results in very close agreement with experiments. A criterion for the theory being meaningful after renormalization is that the number of diverging diagrams is finite. In this case, the theory is said to be "renormalizable". The reason for this is that to get observables renormalized, one needs a finite number of constants to maintain the predictive value of the theory untouched. This is exactly the case of quantum electrodynamics displaying just three diverging diagrams. This procedure gives observables in very close agreement with experiment as seen e.g. for electron gyromagnetic ratio. Renormalizability has become an essential criterion for a quantum field theory to be considered as a viable one. All the theories describing fundamental interactions, except gravitation, whose quantum counterpart is only conjectural and presently under very active research, are renormalizable theories. Nonconvergence of series An argument by Freeman Dyson shows that the radius of convergence of the perturbation series in QED is zero. The basic argument goes as follows: if the coupling constant were negative, this would be equivalent to the Coulomb force constant being negative. This would "reverse" the electromagnetic interaction so that like charges would attract and unlike charges would repel. This would render the vacuum unstable against decay into a cluster of electrons on one side of the universe and a cluster of positrons on the other side of the universe. Because the theory is "sick" for any negative value of the coupling constant, the series does not converge but is at best an asymptotic series. From a modern perspective, we say that QED is not well defined as a quantum field theory to arbitrarily high energy. The coupling constant runs to infinity at finite energy, signalling a Landau pole. The problem is essentially that QED appears to suffer from quantum triviality issues. This is one of the motivations for embedding QED within a Grand Unified Theory. Electrodynamics in curved spacetime This theory can be extended, at least as a classical field theory, to curved spacetime. This arises similarly to the flat spacetime case, from coupling a free electromagnetic theory to a free fermion theory and including an interaction which promotes the partial derivative in the fermion theory to a gauge-covariant derivative. 
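As a numerical illustration of two points made above — the success of low-order perturbation theory and the Landau-pole behaviour of the running coupling — here is a rough sketch. The one-loop running formula assumes a single charged lepton and is a standard textbook approximation, not a statement taken from this article; the experimental value of the anomalous moment is quoted only approximately.

```python
import math

alpha = 1.0 / 137.035999   # fine-structure constant at low energy

# Leading one-loop ("Schwinger") contribution to the electron anomalous magnetic moment
a_first_order = alpha / (2.0 * math.pi)
a_measured = 0.00115965218          # approximate experimental value
print(f"a_e at first order : {a_first_order:.8f}")
print(f"a_e measured (ca.) : {a_measured:.8f}")   # the lowest-order diagram already agrees to ~0.2%

# One-loop running of the coupling for a single charged fermion (illustrative only):
#   alpha(Q) = alpha / (1 - (2*alpha/(3*pi)) * ln(Q/m_e))
def alpha_running(log_Q_over_me):
    denom = 1.0 - (2.0 * alpha / (3.0 * math.pi)) * log_Q_over_me
    return alpha / denom if denom > 0 else float("inf")

for lnQ in (0.0, 100.0, 400.0, 600.0, 645.0):
    print(f"ln(Q/m_e) = {lnQ:5.0f}  ->  1/alpha(Q) ~ {1.0 / alpha_running(lnQ):.1f}")

# The denominator vanishes near ln(Q/m_e) = 3*pi/(2*alpha), i.e. at a fantastically
# high energy scale: the Landau pole mentioned above.
print("Landau pole at ln(Q/m_e) ~", 3.0 * math.pi / (2.0 * alpha))
```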
See also Abraham–Lorentz force Anomalous magnetic moment Bhabha scattering Cavity quantum electrodynamics Circuit quantum electrodynamics Compton scattering Euler–Heisenberg Lagrangian Gupta–Bleuler formalism Lamb shift Landau pole Moeller scattering Non-relativistic quantum electrodynamics Photon polarization Positronium Precision tests of QED QED vacuum QED: The Strange Theory of Light and Matter Quantization of the electromagnetic field Scalar electrodynamics Schrödinger equation Schwinger model Schwinger–Dyson equation Vacuum polarization Vertex function Wheeler–Feynman absorber theory References Further reading Books Journals External links Feynman's Nobel Prize lecture describing the evolution of QED and his role in it Feynman's New Zealand lectures on QED for non-physicists http://qed.wikina.org/ – Animations demonstrating QED Freeman Dyson Quantum electronics Quantum field theory
0.768798
0.997098
0.766567
Phenomenon
A phenomenon (: phenomena), sometimes spelled phaenomenon, is an observable event. The term came into its modern philosophical usage through Immanuel Kant, who contrasted it with the noumenon, which cannot be directly observed. Kant was heavily influenced by Gottfried Wilhelm Leibniz in this part of his philosophy, in which phenomenon and noumenon serve as interrelated technical terms. Far predating this, the ancient Greek Pyrrhonist philosopher Sextus Empiricus also used phenomenon and noumenon as interrelated technical terms. Common usage In popular usage, a phenomenon often refers to an extraordinary, unusual or notable event. According to the Dictionary of Visual Discourse:In ordinary language 'phenomenon/phenomena' refer to any occurrence worthy of note and investigation, typically an untoward or unusual event, person or fact that is of special significance or otherwise notable. Philosophy In modern philosophical use, the term phenomena means things as they are experienced through the senses and processed by the mind as distinct from things in and of themselves (noumena). In his inaugural dissertation, titled On the Form and Principles of the Sensible and Intelligible World, Immanuel Kant (1770) theorizes that the human mind is restricted to the logical world and thus can only interpret and understand occurrences according to their physical appearances. He wrote that humans could infer only as much as their senses allowed, but not experience the actual object itself. Thus, the term phenomenon refers to any incident deserving of inquiry and investigation, especially processes and events which are particularly unusual or of distinctive importance. Science In scientific usage, a phenomenon is any event that is observable, including the use of instrumentation to observe, record, or compile data. Especially in physics, the study of a phenomenon may be described as measurements related to matter, energy, or time, such as Isaac Newton's observations of the Moon's orbit and of gravity; or Galileo Galilei's observations of the motion of a pendulum. In natural sciences, a phenomenon is an observable happening or event. Often, this term is used without considering the causes of a particular event. Example of a physical phenomenon is an observable phenomenon of the lunar orbit or the phenomenon of oscillations of a pendulum. A mechanical phenomenon is a physical phenomenon associated with the equilibrium or motion of objects. Some examples are Newton's cradle, engines, and double pendulums. Sociology Group phenomena concern the behavior of a particular group of individual entities, usually organisms and most especially people. The behavior of individuals often changes in a group setting in various ways, and a group may have its own behaviors not possible for an individual because of the herd mentality. Social phenomena apply especially to organisms and people in that subjective states are implicit in the term. Attitudes and events particular to a group may have effects beyond the group, and either be adapted by the larger society, or seen as aberrant, being punished or shunned. See also Awareness Condition of possibility Essence Electrical phenomena Experience Intuition List of cycles List of effects List of electrical phenomena List of geological phenomena List of Internet phenomena List of natural phenomena List of severe weather phenomena List of syntactic phenomena Observation Optical phenomena References External links Concepts in metaphysics Observation Phenomenology
0.769264
0.996476
0.766553
Circular motion
In physics, circular motion is a movement of an object along the circumference of a circle or rotation along a circular arc. It can be uniform, with a constant rate of rotation and constant tangential speed, or non-uniform with a changing rate of rotation. The rotation around a fixed axis of a three-dimensional body involves the circular motion of its parts. The equations of motion describe the movement of the center of mass of a body, which remains at a constant distance from the axis of rotation. In circular motion, the distance between the body and a fixed point on its surface remains the same, i.e., the body is assumed rigid. Examples of circular motion include: special satellite orbits around the Earth (circular orbits), a ceiling fan's blades rotating around a hub, a stone that is tied to a rope and is being swung in circles, a car turning through a curve in a race track, an electron moving perpendicular to a uniform magnetic field, and a gear turning inside a mechanism. Since the object's velocity vector is constantly changing direction, the moving object is undergoing acceleration by a centripetal force in the direction of the center of rotation. Without this acceleration, the object would move in a straight line, according to Newton's laws of motion. Uniform circular motion In physics, uniform circular motion describes the motion of a body traversing a circular path at a constant speed. Since the body describes circular motion, its distance from the axis of rotation remains constant at all times. Though the body's speed is constant, its velocity is not constant: velocity, a vector quantity, depends on both the body's speed and its direction of travel. This changing velocity indicates the presence of an acceleration; this centripetal acceleration is of constant magnitude and directed at all times toward the axis of rotation. This acceleration is, in turn, produced by a centripetal force which is also constant in magnitude and directed toward the axis of rotation. In the case of rotation around a fixed axis of a rigid body that is not negligibly small compared to the radius of the path, each particle of the body describes a uniform circular motion with the same angular velocity, but with velocity and acceleration varying with the position with respect to the axis. Formula For motion in a circle of radius , the circumference of the circle is . If the period for one rotation is , the angular rate of rotation, also known as angular velocity, is: and the units are radians/second. The speed of the object traveling the circle is: The angle swept out in a time is: The angular acceleration, , of the particle is: In the case of uniform circular motion, will be zero. The acceleration due to change in the direction is: The centripetal and centrifugal force can also be found using acceleration: The vector relationships are shown in Figure 1. The axis of rotation is shown as a vector perpendicular to the plane of the orbit and with a magnitude . The direction of is chosen using the right-hand rule. With this convention for depicting rotation, the velocity is given by a vector cross product as which is a vector perpendicular to both and , tangential to the orbit, and of magnitude . Likewise, the acceleration is given by which is a vector perpendicular to both and of magnitude and directed exactly opposite to . In the simplest case the speed, mass, and radius are constant. Consider a body of one kilogram, moving in a circle of radius one metre, with an angular velocity of one radian per second. 
The speed is 1 metre per second. The inward acceleration is 1 metre per square second, . It is subject to a centripetal force of 1 kilogram metre per square second, which is 1 newton. The momentum of the body is 1 kg·m·s−1. The moment of inertia is 1 kg·m2. The angular momentum is 1 kg·m2·s−1. The kinetic energy is 0.5 joule. The circumference of the orbit is 2 (~6.283) metres. The period of the motion is 2 seconds. The frequency is (2)−1 hertz. In polar coordinates During circular motion, the body moves on a curve that can be described in the polar coordinate system as a fixed distance from the center of the orbit taken as the origin, oriented at an angle from some reference direction. See Figure 4. The displacement vector is the radial vector from the origin to the particle location: where is the unit vector parallel to the radius vector at time and pointing away from the origin. It is convenient to introduce the unit vector orthogonal to as well, namely . It is customary to orient to point in the direction of travel along the orbit. The velocity is the time derivative of the displacement: Because the radius of the circle is constant, the radial component of the velocity is zero. The unit vector has a time-invariant magnitude of unity, so as time varies its tip always lies on a circle of unit radius, with an angle the same as the angle of . If the particle displacement rotates through an angle in time , so does , describing an arc on the unit circle of magnitude . See the unit circle at the left of Figure 4. Hence: where the direction of the change must be perpendicular to (or, in other words, along ) because any change in the direction of would change the size of . The sign is positive because an increase in implies the object and have moved in the direction of . Hence the velocity becomes: The acceleration of the body can also be broken into radial and tangential components. The acceleration is the time derivative of the velocity: The time derivative of is found the same way as for . Again, is a unit vector and its tip traces a unit circle with an angle that is . Hence, an increase in angle by implies traces an arc of magnitude , and as is orthogonal to , we have: where a negative sign is necessary to keep orthogonal to . (Otherwise, the angle between and would decrease with an increase in .) See the unit circle at the left of Figure 4. Consequently, the acceleration is: The centripetal acceleration is the radial component, which is directed radially inward: while the tangential component changes the magnitude of the velocity: Using complex numbers Circular motion can be described using complex numbers. Let the axis be the real axis and the axis be the imaginary axis. The position of the body can then be given as , a complex "vector": where is the imaginary unit, and is the argument of the complex number as a function of time, . Since the radius is constant: where a dot indicates differentiation in respect of time. With this notation, the velocity becomes: and the acceleration becomes: The first term is opposite in direction to the displacement vector and the second is perpendicular to it, just like the earlier results shown before. Velocity Figure 1 illustrates velocity and acceleration vectors for uniform motion at four different points in the orbit. Because the velocity is tangent to the circular path, no two velocities point in the same direction. Although the object has a constant speed, its direction is always changing. 
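The complex-number description above can be checked numerically. This sketch (illustrative radius and angular rate, not from the source) differentiates z(t) = R exp(iωt) by finite differences and confirms that the speed is ωR, that the velocity is tangential (v = iωz), and that the acceleration is −ω²z, i.e. directed toward the centre.

```python
import cmath

R, omega = 2.0, 3.0     # radius (m) and angular rate (rad/s); assumed illustrative values
dt, t = 1e-5, 0.7       # small step for numerical differentiation, arbitrary instant

def z(t):
    """Position on the circle as a complex number: z = R * exp(i*omega*t)."""
    return R * cmath.exp(1j * omega * t)

# Central-difference first and second derivatives of the position
v = (z(t + dt) - z(t - dt)) / (2 * dt)
a = (z(t + dt) - 2 * z(t) + z(t - dt)) / dt**2

print(f"|v| = {abs(v):.6f}   (omega * R = {omega * R})")
ratio = a / z(t)
print(f"a / z = {ratio.real:.5f} + {ratio.imag:.5f}j   (expected -omega^2 = {-omega**2})")
print(f"v / (i*z) = {(v / (1j * z(t))).real:.5f}   (expected omega = {omega})")
```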
This change in velocity is caused by an acceleration , whose magnitude is (like that of the velocity) held constant, but whose direction also is always changing. The acceleration points radially inwards (centripetally) and is perpendicular to the velocity. This acceleration is known as centripetal acceleration. For a path of radius , when an angle is swept out, the distance traveled on the periphery of the orbit is . Therefore, the speed of travel around the orbit is where the angular rate of rotation is . (By rearrangement, .) Thus, is a constant, and the velocity vector also rotates with constant magnitude , at the same angular rate . Relativistic circular motion In this case, the three-acceleration vector is perpendicular to the three-velocity vector, and the square of proper acceleration, expressed as a scalar invariant, the same in all reference frames, becomes the expression for circular motion, or, taking the positive square root and using the three-acceleration, we arrive at the proper acceleration for circular motion: Acceleration The left-hand circle in Figure 2 is the orbit showing the velocity vectors at two adjacent times. On the right, these two velocities are moved so their tails coincide. Because speed is constant, the velocity vectors on the right sweep out a circle as time advances. For a swept angle the change in is a vector at right angles to and of magnitude , which in turn means that the magnitude of the acceleration is given by Non-uniform circular motion In a non-uniform circular motion, an object is moving in a circular path with a varying speed. Since the speed is changing, there is tangential acceleration in addition to normal acceleration. In a non-uniform circular motion, the net acceleration (a) is along the direction of , which is directed inside the circle but does not pass through its center (see figure). The net acceleration may be resolved into two components: tangential acceleration and normal acceleration also known as the centripetal or radial acceleration. Unlike tangential acceleration, centripetal acceleration is present in both uniform and non-uniform circular motion. In a non-uniform circular motion, normal force does not always point in the opposite direction of weight. Here is an example with an object traveling in a straight path then looping a loop back into a straight path again. This diagram shows the normal force pointing in other directions rather than opposite to the weight force. The normal force is actually the sum of the radial and tangential forces. The component of weight force is responsible for the tangential force here (We have neglected frictional force). The radial force (centripetal force) is due to the change in the direction of velocity as discussed earlier. In a non-uniform circular motion, normal force and weight may point in the same direction. Both forces can point down, yet the object will remain in a circular path without falling straight down. First, let's see why normal force can point down in the first place. In the first diagram, let's say the object is a person sitting inside a plane, the two forces point down only when it reaches the top of the circle. The reason for this is that the normal force is the sum of the tangential force and centripetal force. The tangential force is zero at the top (as no work is performed when the motion is perpendicular to the direction of force applied. 
Here weight force is perpendicular to the direction of motion of the object at the top of the circle) and centripetal force points down, thus normal force will point down as well. From a logical standpoint, a person who is travelling in the plane will be upside down at the top of the circle. At that moment, the person's seat is actually pushing down on the person, which is the normal force. The reason why the object does not fall down when subjected to only downward forces is a simple one. Think about what keeps an object up after it is thrown. Once an object is thrown into the air, there is only the downward force of Earth's gravity that acts on the object. That does not mean that once an object is thrown in the air, it will fall instantly. What keeps that object up in the air is its velocity. The first of Newton's laws of motion states that an object's inertia keeps it in motion, and since the object in the air has a velocity, it will tend to keep moving in that direction. A varying angular speed for an object moving in a circular path can also be achieved if the rotating body does not have a homogeneous mass distribution. For inhomogeneous objects, it is necessary to approach the problem as in. One can deduce the formulae of speed, acceleration and jerk, assuming all the variables to depend on : Further transformations may involve and corresponding derivatives: Applications Solving applications dealing with non-uniform circular motion involves force analysis. With a uniform circular motion, the only force acting upon an object traveling in a circle is the centripetal force. In a non-uniform circular motion, there are additional forces acting on the object due to a non-zero tangential acceleration. Although there are additional forces acting upon the object, the sum of all the forces acting on the object will have to be equal to the centripetal force. Radial acceleration is used when calculating the total force. Tangential acceleration is not used in calculating total force because it is not responsible for keeping the object in a circular path. The only acceleration responsible for keeping an object moving in a circle is the radial acceleration. Since the sum of all forces is the centripetal force, drawing centripetal force into a free body diagram is not necessary and usually not recommended. Using , we can draw free body diagrams to list all the forces acting on an object and then set it equal to . Afterward, we can solve for whatever is unknown (this can be mass, velocity, radius of curvature, coefficient of friction, normal force, etc.). For example, the visual above showing an object at the top of a semicircle would be expressed as . In a uniform circular motion, the total acceleration of an object in a circular path is equal to the radial acceleration. Due to the presence of tangential acceleration in a non uniform circular motion, that does not hold true any more. To find the total acceleration of an object in a non uniform circular, find the vector sum of the tangential acceleration and the radial acceleration. Radial acceleration is still equal to . Tangential acceleration is simply the derivative of the speed at any given point: . This root sum of squares of separate radial and tangential accelerations is only correct for circular motion; for general motion within a plane with polar coordinates , the Coriolis term should be added to , whereas radial acceleration then becomes . 
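A minimal sketch of the decomposition just described (the speed profile and radius are assumptions for illustration): for motion on a circle of radius r with varying speed v(t), the total acceleration is the root sum of squares of the radial component v²/r and the tangential component dv/dt, which, as noted above, is valid for circular motion.

```python
import math

r = 5.0  # radius of the circular path (m), assumed

def speed(t):
    """An assumed, smoothly increasing speed profile (m/s)."""
    return 2.0 + 0.5 * t

def accelerations(t, dt=1e-6):
    v = speed(t)
    a_radial = v**2 / r                                       # centripetal component
    a_tangential = (speed(t + dt) - speed(t - dt)) / (2 * dt) # dv/dt
    a_total = math.hypot(a_radial, a_tangential)              # root sum of squares
    return a_radial, a_tangential, a_total

for t in (0.0, 2.0, 4.0):
    ar, at, a = accelerations(t)
    print(f"t = {t:.0f} s   a_r = {ar:.3f}   a_t = {at:.3f}   |a| = {a:.3f} m/s^2")
```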
See also Angular momentum Equations of motion for circular motion Fictitious force Geostationary orbit Geosynchronous orbit Pendulum (mechanics) Reactive centrifugal force Reciprocating motion Sling (weapon) References External links Physclips: Mechanics with animations and video clips from the University of New South Wales Circular Motion – a chapter from an online textbook, Mechanics, by Benjamin Crowell (2019) Circular Motion Lecture – a video lecture on CM – an online textbook with different analysis for circular motion Rotation Classical mechanics Motion (physics) Circles
Deflagration
Deflagration (Lat: de + flagrare, 'to burn down') is subsonic combustion in which a pre-mixed flame propagates through an explosive or a mixture of fuel and oxidizer. Deflagrations in high and low explosives or fuel–oxidizer mixtures may transition to a detonation depending upon confinement and other factors. Most fires found in daily life are diffusion flames. Deflagrations with flame speeds in the range of 1 m/s differ from detonations which propagate supersonically with detonation velocities in the range of km/s. Applications Deflagrations are often used in engineering applications when the force of the expanding gas is used to move an object such as a projectile down a barrel, or a piston in an internal combustion engine. Deflagration systems and products can also be used in mining, demolition and stone quarrying via gas pressure blasting as a beneficial alternative to high explosives. Terminology of explosive safety When studying or discussing explosive safety, or the safety of systems containing explosives, the terms deflagration, detonation and deflagration-to-detonation transition (commonly referred to as DDT) must be understood and used appropriately to convey relevant information. As explained above, a deflagration is a subsonic reaction, whereas a detonation is a supersonic (greater than the sound speed of the material) reaction. Distinguishing between a deflagration or a detonation can be difficult to impossible to the casual observer. Rather, confidently differentiating between the two requires instrumentation and diagnostics to ascertain reaction speed in the affected material. Therefore, when an unexpected event or an accident occurs with an explosive material or an explosive-containing system it is usually impossible to know whether the explosive deflagrated or detonated as both can appear as very violent, energetic reactions. Therefore, the energetic materials community coined the term "high explosive violent reaction" or "HEVR" to describe a violent reaction that, because it lacked diagnostics to measure sound-speed, could have been either a deflagration or a detonation. Flame physics The underlying flame physics can be understood with the help of an idealized model consisting of a uniform one-dimensional tube of unburnt and burned gaseous fuel, separated by a thin transitional region of width in which the burning occurs. The burning region is commonly referred to as the flame or flame front. In equilibrium, thermal diffusion across the flame front is balanced by the heat supplied by burning. Two characteristic timescales are important here. The first is the thermal diffusion timescale , which is approximately equal to where is the thermal diffusivity. The second is the burning timescale that strongly decreases with temperature, typically as where is the activation barrier for the burning reaction and is the temperature developed as the result of burning; the value of this so-called "flame temperature" can be determined from the laws of thermodynamics. For a stationary moving deflagration front, these two timescales must be equal: the heat generated by burning is equal to the heat carried away by heat transfer. This makes it possible to calculate the characteristic width of the flame front: thus Now, the thermal flame front propagates at a characteristic speed , which is simply equal to the flame width divided by the burn time: This simplified model neglects the change of temperature and thus the burning rate across the deflagration front. 
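As a rough numerical illustration of this balance (a sketch only; the relations delta ≈ sqrt(kappa·tau_burn) and S_l ≈ sqrt(kappa/tau_burn) follow from equating the diffusion and burning timescales, and the input values below are assumed rather than taken from the text):

import math

def laminar_flame_estimates(kappa, tau_burn):
    """Order-of-magnitude flame-front width and laminar flame speed obtained by
    balancing the thermal diffusion time (~ delta**2 / kappa) against the burn time."""
    delta = math.sqrt(kappa * tau_burn)   # characteristic flame-front width
    s_l = delta / tau_burn                # equivalently sqrt(kappa / tau_burn)
    return delta, s_l

# Assumed illustrative inputs: kappa ~ 1e-4 m^2/s, tau_burn ~ 1e-4 s
delta, s_l = laminar_flame_estimates(kappa=1e-4, tau_burn=1e-4)
print(f"flame width ~ {delta * 1e3:.2f} mm, laminar flame speed ~ {s_l:.1f} m/s")
# ~0.10 mm and ~1.0 m/s, consistent with deflagration speeds of order 1 m/s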
This model also neglects the possible influence of turbulence. As a result, this derivation gives only the laminar flame speed—hence the designation . Damaging events Damage to buildings, equipment and people can result from a large-scale, short-duration deflagration. The potential damage is primarily a function of the total amount of fuel burned in the event (total energy available), the maximum reaction velocity that is achieved, and the manner in which the expansion of the combustion gases is contained. Vented deflagrations tend to be less violent or damaging than contained deflagrations. In free-air deflagrations, there is a continuous variation in deflagration effects relative to the maximum flame velocity. When flame velocities are low, the effect of a deflagration is to release heat, such as in a flash fire. At flame velocities near the speed of sound, the energy released is in the form of pressure, and the resulting high pressure can damage equipment and buildings. See also Conflagration Deflagration to detonation transition Pressure piling References Combustion Explosives Physical chemistry Process safety
Centripetal force
A centripetal force (from Latin centrum, "center" and petere, "to seek") is a force that makes a body follow a curved path. The direction of the centripetal force is always orthogonal to the motion of the body and towards the fixed point of the instantaneous center of curvature of the path. Isaac Newton described it as "a force by which bodies are drawn or impelled, or in any way tend, towards a point as to a centre". In Newtonian mechanics, gravity provides the centripetal force causing astronomical orbits. One common example involving centripetal force is the case in which a body moves with uniform speed along a circular path. The centripetal force is directed at right angles to the motion and also along the radius towards the centre of the circular path. The mathematical description was derived in 1659 by the Dutch physicist Christiaan Huygens. Formula From the kinematics of curved motion it is known that an object moving at tangential speed v along a path with radius of curvature r accelerates toward the center of curvature at a rate Here, is the centripetal acceleration and is the difference between the velocity vectors at and . By Newton's second law, the cause of acceleration is a net force acting on the object, which is proportional to its mass m and its acceleration. The force, usually referred to as a centripetal force, has a magnitude and is, like centripetal acceleration, directed toward the center of curvature of the object's trajectory. Derivation The centripetal acceleration can be inferred from the diagram of the velocity vectors at two instances. In the case of uniform circular motion the velocities have constant magnitude. Because each one is perpendicular to its respective position vector, simple vector subtraction implies two similar isosceles triangles with congruent angles – one comprising a base of and a leg length of , and the other a base of (position vector difference) and a leg length of : Therefore, can be substituted with : The direction of the force is toward the center of the circle in which the object is moving, or the osculating circle (the circle that best fits the local path of the object, if the path is not circular). The speed in the formula is squared, so twice the speed needs four times the force, at a given radius. This force is also sometimes written in terms of the angular velocity ω of the object about the center of the circle, related to the tangential velocity by the formula so that Expressed using the orbital period T for one revolution of the circle, the equation becomes In particle accelerators, velocity can be very high (close to the speed of light in vacuum) so the same rest mass now exerts greater inertia (relativistic mass) thereby requiring greater force for the same centripetal acceleration, so the equation becomes: where is the Lorentz factor. Thus the centripetal force is given by: which is the rate of change of relativistic momentum . Sources In the case of an object that is swinging around on the end of a rope in a horizontal plane, the centripetal force on the object is supplied by the tension of the rope. The rope example is an example involving a 'pull' force. The centripetal force can also be supplied as a 'push' force, such as in the case where the normal reaction of a wall supplies the centripetal force for a wall of death or a Rotor rider. Newton's idea of a centripetal force corresponds to what is nowadays referred to as a central force. 
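As a small check of the equivalent forms of the formula above (a sketch; the mass, speed and radius are assumed for illustration), the magnitude can be computed as F = m v^2 / r, as F = m ω^2 r with ω = v/r, or as F = m (2π/T)^2 r with T the period, and the relativistic case simply multiplies by the Lorentz factor γ:

import math

def centripetal_force(m, v, r, gamma=1.0):
    """Magnitude of the centripetal force; gamma = 1 gives the Newtonian case,
    gamma = 1/sqrt(1 - v**2/c**2) the relativistic expression gamma*m*v**2/r."""
    return gamma * m * v**2 / r

m, v, r = 2.0, 3.0, 1.5       # assumed: 2 kg, 3 m/s, radius 1.5 m
omega = v / r                 # angular velocity in rad/s
T = 2 * math.pi / omega       # orbital period in s

print(centripetal_force(m, v, r))       # 12.0 N
print(m * omega**2 * r)                 # 12.0 N, same force via omega
print(m * (2 * math.pi / T)**2 * r)     # 12.0 N, same force via the period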
When a satellite is in orbit around a planet, gravity is considered to be a centripetal force even though in the case of eccentric orbits, the gravitational force is directed towards the focus, and not towards the instantaneous center of curvature. Another example of centripetal force arises in the helix that is traced out when a charged particle moves in a uniform magnetic field in the absence of other external forces. In this case, the magnetic force is the centripetal force that acts towards the helix axis. Analysis of several cases Below are three examples of increasing complexity, with derivations of the formulas governing velocity and acceleration. Uniform circular motion Uniform circular motion refers to the case of constant rate of rotation. Here are two approaches to describing this case. Calculus derivation In two dimensions, the position vector , which has magnitude (length) and directed at an angle above the x-axis, can be expressed in Cartesian coordinates using the unit vectors and : The assumption of uniform circular motion requires three things: The object moves only on a circle. The radius of the circle does not change in time. The object moves with constant angular velocity around the circle. Therefore, where is time. The velocity and acceleration of the motion are the first and second derivatives of position with respect to time: The term in parentheses is the original expression of in Cartesian coordinates. Consequently, negative shows that the acceleration is pointed towards the center of the circle (opposite the radius), hence it is called "centripetal" (i.e. "center-seeking"). While objects naturally follow a straight path (due to inertia), this centripetal acceleration describes the circular motion path caused by a centripetal force. Derivation using vectors The image at right shows the vector relationships for uniform circular motion. The rotation itself is represented by the angular velocity vector Ω, which is normal to the plane of the orbit (using the right-hand rule) and has magnitude given by: with θ the angular position at time t. In this subsection, dθ/dt is assumed constant, independent of time. The distance traveled dℓ of the particle in time dt along the circular path is which, by properties of the vector cross product, has magnitude rdθ and is in the direction tangent to the circular path. Consequently, In other words, Differentiating with respect to time, Lagrange's formula states: Applying Lagrange's formula with the observation that Ω • r(t) = 0 at all times, In words, the acceleration is pointing directly opposite to the radial displacement r at all times, and has a magnitude: where vertical bars |...| denote the vector magnitude, which in the case of r(t) is simply the radius r of the path. This result agrees with the previous section, though the notation is slightly different. When the rate of rotation is made constant in the analysis of nonuniform circular motion, that analysis agrees with this one. A merit of the vector approach is that it is manifestly independent of any coordinate system. Example: The banked turn The upper panel in the image at right shows a ball in circular motion on a banked curve. The curve is banked at an angle θ from the horizontal, and the surface of the road is considered to be slippery. The objective is to find what angle the bank must have so the ball does not slide off the road. 
Intuition tells us that, on a flat curve with no banking at all, the ball will simply slide off the road; while with a very steep banking, the ball will slide to the center unless it travels the curve rapidly. Apart from any acceleration that might occur in the direction of the path, the lower panel of the image above indicates the forces on the ball. There are two forces; one is the force of gravity vertically downward through the center of mass of the ball mg, where m is the mass of the ball and g is the gravitational acceleration; the second is the upward normal force exerted by the road at a right angle to the road surface man. The centripetal force demanded by the curved motion is also shown above. This centripetal force is not a third force applied to the ball, but rather must be provided by the net force on the ball resulting from vector addition of the normal force and the force of gravity. The resultant or net force on the ball found by vector addition of the normal force exerted by the road and vertical force due to gravity must equal the centripetal force dictated by the need to travel a circular path. The curved motion is maintained so long as this net force provides the centripetal force requisite to the motion. The horizontal net force on the ball is the horizontal component of the force from the road, which has magnitude . The vertical component of the force from the road must counteract the gravitational force: , which implies . Substituting into the above formula for yields a horizontal force to be: On the other hand, at velocity |v| on a circular path of radius r, kinematics says that the force needed to turn the ball continuously into the turn is the radially inward centripetal force Fc of magnitude: Consequently, the ball is in a stable path when the angle of the road is set to satisfy the condition: or, As the angle of bank θ approaches 90°, the tangent function approaches infinity, allowing larger values for |v|2/r. In words, this equation states that for greater speeds (bigger |v|) the road must be banked more steeply (a larger value for θ), and for sharper turns (smaller r) the road also must be banked more steeply, which accords with intuition. When the angle θ does not satisfy the above condition, the horizontal component of force exerted by the road does not provide the correct centripetal force, and an additional frictional force tangential to the road surface is called upon to provide the difference. If friction cannot do this (that is, the coefficient of friction is exceeded), the ball slides to a different radius where the balance can be realized. These ideas apply to air flight as well. See the FAA pilot's manual. Nonuniform circular motion As a generalization of the uniform circular motion case, suppose the angular rate of rotation is not constant. The acceleration now has a tangential component, as shown the image at right. This case is used to demonstrate a derivation strategy based on a polar coordinate system. Let r(t) be a vector that describes the position of a point mass as a function of time. Since we are assuming circular motion, let , where R is a constant (the radius of the circle) and ur is the unit vector pointing from the origin to the point mass. The direction of ur is described by θ, the angle between the x-axis and the unit vector, measured counterclockwise from the x-axis. The other unit vector for polar coordinates, uθ is perpendicular to ur and points in the direction of increasing θ. 
These polar unit vectors can be expressed in terms of Cartesian unit vectors in the x and y directions, denoted and respectively: and One can differentiate to find velocity: where is the angular velocity . This result for the velocity matches expectations that the velocity should be directed tangentially to the circle, and that the magnitude of the velocity should be . Differentiating again, and noting that we find that the acceleration, a is: Thus, the radial and tangential components of the acceleration are: and where is the magnitude of the velocity (the speed). These equations express mathematically that, in the case of an object that moves along a circular path with a changing speed, the acceleration of the body may be decomposed into a perpendicular component that changes the direction of motion (the centripetal acceleration), and a parallel, or tangential component, that changes the speed. General planar motion Polar coordinates The above results can be derived perhaps more simply in polar coordinates, and at the same time extended to general motion within a plane, as shown next. Polar coordinates in the plane employ a radial unit vector uρ and an angular unit vector uθ, as shown above. A particle at position r is described by: where the notation ρ is used to describe the distance of the path from the origin instead of R to emphasize that this distance is not fixed, but varies with time. The unit vector uρ travels with the particle and always points in the same direction as r(t). Unit vector uθ also travels with the particle and stays orthogonal to uρ. Thus, uρ and uθ form a local Cartesian coordinate system attached to the particle, and tied to the path travelled by the particle. By moving the unit vectors so their tails coincide, as seen in the circle at the left of the image above, it is seen that uρ and uθ form a right-angled pair with tips on the unit circle that trace back and forth on the perimeter of this circle with the same angle θ(t) as r(t). When the particle moves, its velocity is To evaluate the velocity, the derivative of the unit vector uρ is needed. Because uρ is a unit vector, its magnitude is fixed, and it can change only in direction, that is, its change duρ has a component only perpendicular to uρ. When the trajectory r(t) rotates an amount dθ, uρ, which points in the same direction as r(t), also rotates by dθ. See image above. Therefore, the change in uρ is or In a similar fashion, the rate of change of uθ is found. As with uρ, uθ is a unit vector and can only rotate without changing size. To remain orthogonal to uρ while the trajectory r(t) rotates an amount dθ, uθ, which is orthogonal to r(t), also rotates by dθ. See image above. Therefore, the change duθ is orthogonal to uθ and proportional to dθ (see image above): The equation above shows the sign to be negative: to maintain orthogonality, if duρ is positive with dθ, then duθ must decrease. Substituting the derivative of uρ into the expression for velocity: To obtain the acceleration, another time differentiation is done: Substituting the derivatives of uρ and uθ, the acceleration of the particle is: As a particular example, if the particle moves in a circle of constant radius R, then dρ/dt = 0, v = vθ, and: where These results agree with those above for nonuniform circular motion. See also the article on non-uniform circular motion. 
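The decomposition just stated can be verified symbolically. The sketch below (a SymPy check under the constant-radius assumption ρ = R used in the particular example above; not part of the original text) differentiates the position of a particle on a circle twice and projects the result onto the radial and tangential unit vectors, recovering a radial part −R (dθ/dt)² and a tangential part R d²θ/dt².

import sympy as sp

t = sp.symbols('t')
R = sp.symbols('R', positive=True)
theta = sp.Function('theta')(t)

# Cartesian position of a particle on a circle of constant radius R
x = R * sp.cos(theta)
y = R * sp.sin(theta)

# Acceleration components and the local radial / tangential unit vectors
a = sp.Matrix([sp.diff(x, t, 2), sp.diff(y, t, 2)])
u_r = sp.Matrix([sp.cos(theta), sp.sin(theta)])
u_t = sp.Matrix([-sp.sin(theta), sp.cos(theta)])

print(sp.simplify(a.dot(u_r)))   # centripetal term: -R times (dtheta/dt)**2
print(sp.simplify(a.dot(u_t)))   # tangential term: R times d2theta/dt2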
If this acceleration is multiplied by the particle mass, the leading term is the centripetal force and the negative of the second term related to angular acceleration is sometimes called the Euler force. For trajectories other than circular motion, for example, the more general trajectory envisioned in the image above, the instantaneous center of rotation and radius of curvature of the trajectory are related only indirectly to the coordinate system defined by uρ and uθ and to the length |r(t)| = ρ. Consequently, in the general case, it is not straightforward to disentangle the centripetal and Euler terms from the above general acceleration equation. To deal directly with this issue, local coordinates are preferable, as discussed next. Local coordinates Local coordinates mean a set of coordinates that travel with the particle, and have orientation determined by the path of the particle. Unit vectors are formed as shown in the image at right, both tangential and normal to the path. This coordinate system sometimes is referred to as intrinsic or path coordinates or nt-coordinates, for normal-tangential, referring to these unit vectors. These coordinates are a very special example of a more general concept of local coordinates from the theory of differential forms. Distance along the path of the particle is the arc length s, considered to be a known function of time. A center of curvature is defined at each position s located a distance ρ (the radius of curvature) from the curve on a line along the normal un (s). The required distance ρ(s) at arc length s is defined in terms of the rate of rotation of the tangent to the curve, which in turn is determined by the path itself. If the orientation of the tangent relative to some starting position is θ(s), then ρ(s) is defined by the derivative dθ/ds: The radius of curvature usually is taken as positive (that is, as an absolute value), while the curvature κ is a signed quantity. A geometric approach to finding the center of curvature and the radius of curvature uses a limiting process leading to the osculating circle. See image above. Using these coordinates, the motion along the path is viewed as a succession of circular paths of ever-changing center, and at each position s constitutes non-uniform circular motion at that position with radius ρ. The local value of the angular rate of rotation then is given by: with the local speed v given by: As for the other examples above, because unit vectors cannot change magnitude, their rate of change is always perpendicular to their direction (see the left-hand insert in the image above): Consequently, the velocity and acceleration are: and using the chain-rule of differentiation: with the tangential acceleration In this local coordinate system, the acceleration resembles the expression for nonuniform circular motion with the local radius ρ(s), and the centripetal acceleration is identified as the second term. Extending this approach to three dimensional space curves leads to the Frenet–Serret formulas. Alternative approach Looking at the image above, one might wonder whether adequate account has been taken of the difference in curvature between ρ(s) and ρ(s + ds) in computing the arc length as ds = ρ(s)dθ. Reassurance on this point can be found using a more formal approach outlined below. This approach also makes connection with the article on curvature. 
To introduce the unit vectors of the local coordinate system, one approach is to begin in Cartesian coordinates and describe the local coordinates in terms of these Cartesian coordinates. In terms of arc length s, let the path be described as: Then an incremental displacement along the path ds is described by: where primes are introduced to denote derivatives with respect to s. The magnitude of this displacement is ds, showing that: (Eq. 1) This displacement is necessarily a tangent to the curve at s, showing that the unit vector tangent to the curve is: while the outward unit vector normal to the curve is Orthogonality can be verified by showing that the vector dot product is zero. The unit magnitude of these vectors is a consequence of Eq. 1. Using the tangent vector, the angle θ of the tangent to the curve is given by: and The radius of curvature is introduced completely formally (without need for geometric interpretation) as: The derivative of θ can be found from that for sinθ: Now: in which the denominator is unity. With this formula for the derivative of the sine, the radius of curvature becomes: where the equivalence of the forms stems from differentiation of Eq. 1: With these results, the acceleration can be found: as can be verified by taking the dot product with the unit vectors ut(s) and un(s). This result for acceleration is the same as that for circular motion based on the radius ρ. Using this coordinate system in the inertial frame, it is easy to identify the force normal to the trajectory as the centripetal force and that parallel to the trajectory as the tangential force. From a qualitative standpoint, the path can be approximated by an arc of a circle for a limited time, and for the limited time a particular radius of curvature applies, the centrifugal and Euler forces can be analyzed on the basis of circular motion with that radius. This result for acceleration agrees with that found earlier. However, in this approach, the question of the change in radius of curvature with s is handled completely formally, consistent with a geometric interpretation, but not relying upon it, thereby avoiding any questions the image above might suggest about neglecting the variation in ρ. Example: circular motion To illustrate the above formulas, let x, y be given as: Then: which can be recognized as a circular path around the origin with radius α. The position s = 0 corresponds to [α, 0], or 3 o'clock. To use the above formalism, the derivatives are needed: With these results, one can verify that: The unit vectors can also be found: which serve to show that s = 0 is located at position [ρ, 0] and s = ρπ/2 at [0, ρ], which agrees with the original expressions for x and y. In other words, s is measured counterclockwise around the circle from 3 o'clock. Also, the derivatives of these vectors can be found: To obtain velocity and acceleration, a time-dependence for s is necessary. For counterclockwise motion at variable speed v(t): where v(t) is the speed and t is time, and s(t = 0) = 0. Then: where it already is established that α = ρ. This acceleration is the standard result for non-uniform circular motion. 
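As a numerical cross-check of this formalism (a sketch; the radius α = 2 is an assumed value), the tangent angle θ(s) can be computed by finite differences along the arc-length parametrisation x = α cos(s/α), y = α sin(s/α) used in the example, and the radius of curvature ρ = ds/dθ then comes out equal to α, as expected:

import numpy as np

alpha = 2.0                                      # assumed radius of the circular path
s = np.linspace(0.0, 0.5 * np.pi * alpha, 1001)  # arc length over a quarter circle

x = alpha * np.cos(s / alpha)
y = alpha * np.sin(s / alpha)

# Unit tangent from numerical derivatives with respect to arc length
dx = np.gradient(x, s)
dy = np.gradient(y, s)
theta = np.unwrap(np.arctan2(dy, dx))            # tangent angle along the path

rho = 1.0 / np.gradient(theta, s)                # radius of curvature ds/dtheta
print(rho[100], rho[900])                        # both ~2.0 (= alpha), up to discretisation error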
See also Analytical mechanics Applied mechanics Bertrand theorem Central force Centrifugal force Circular motion Classical mechanics Coriolis force Dynamics (physics) Eskimo yo-yo Example: circular motion Fictitious force Frenet-Serret formulas History of centrifugal and centripetal forces Kinematics Kinetics Mechanics of planar particle motion Orthogonal coordinates Reactive centrifugal force Statics Notes and references Further reading Centripetal force vs. Centrifugal force, from an online Regents Exam physics tutorial by the Oswego City School District External links Notes from Physics and Astronomy HyperPhysics at Georgia State University Force Mechanics Kinematics Rotation Acceleration Articles containing video clips
Unmoved mover
The unmoved mover or prime mover is a concept advanced by Aristotle as a primary cause (or first uncaused cause) or "mover" of all the motion in the universe. As is implicit in the name, the unmoved mover moves other things, but is not itself moved by any prior action. In Book 12 of his Metaphysics, Aristotle describes the unmoved mover as being perfectly beautiful, indivisible, and contemplating only the perfect contemplation: self-contemplation. He also equates this concept with the active intellect. This Aristotelian concept had its roots in cosmological speculations of the earliest Greek pre-Socratic philosophers and became highly influential and widely drawn upon in medieval philosophy and theology. St. Thomas Aquinas, for example, elaborated on the unmoved mover in the Quinque viae. First philosophy Aristotle argues, in Book 8 of the Physics and Book 12 of the Metaphysics, "that there must be an immortal, unchanging being, ultimately responsible for all wholeness and orderliness in the sensible world". In the Physics (VIII 4–6) Aristotle finds "surprising difficulties" explaining even commonplace change, and in support of his approach of explanation by four causes, he required "a fair bit of technical machinery". This "machinery" includes potentiality and actuality, hylomorphism, the theory of categories, and "an audacious and intriguing argument, that the bare existence of change requires the postulation of a first cause, an unmoved mover whose necessary existence underpins the ceaseless activity of the world of motion". Aristotle's "first philosophy", or Metaphysics ("after the Physics"), develops his peculiar theology of the prime mover as an independent, divine, eternal, unchanging, immaterial substance. Celestial spheres Aristotle adopted the geometrical model of Eudoxus of Cnidus to provide a general explanation of the apparent wandering of the classical planets arising from uniform circular motions of celestial spheres. While the number of spheres in the model itself was subject to change (47 or 55), Aristotle's account of aether, and of potentiality and actuality, required an individual unmoved mover for each sphere. Final cause and efficient cause Despite their apparent function in the celestial model, the unmoved movers were a final cause, not an efficient cause, of the movement of the spheres; they were solely a constant inspiration, and even if taken for an efficient cause precisely due to being a final cause, the nature of the explanation is purely teleological. Aristotle's theology The unmoved movers, if they were anywhere, were said to fill the outer void, beyond the sphere of fixed stars. The unmoved movers are immaterial substances (separate and individual beings), having neither parts nor magnitude. As such, it would be physically impossible for them to move material objects of any size by pushing, pulling, or collision. Because matter is, for Aristotle, a substratum in which a potential to change can be actualized, any and all potentiality must be actualized in a being that is eternal; but it must not be still, because continuous activity is essential for all forms of life. This immaterial form of activity must be intellectual in nature and it cannot be contingent upon sensory perception if it is to remain uniform; therefore, eternal substance must think only of thinking itself and exist outside the starry sphere, where even the notion of place is undefined for Aristotle. 
Their influence on lesser beings is purely the result of an "aspiration or desire", and each aetheric celestial sphere emulates one of the unmoved movers, as best it can, by uniform circular motion. The first heaven, the outmost sphere of fixed stars, is moved by a desire to emulate the prime mover (first cause), in relation to whom, the subordinate movers suffer an accidental dependency. Many of Aristotle's contemporaries complained that oblivious, powerless gods are unsatisfactory. Nonetheless, it was a life which Aristotle enthusiastically endorsed as one most enviable and perfect, the unembellished basis of theology. As the whole of nature depends on the inspiration of the eternal unmoved movers, Aristotle was concerned to establish the metaphysical necessity of the perpetual motions of the heavens. It is through the seasonal action of the Sun upon the terrestrial spheres, that the cycles of generation and corruption give rise to all natural motion as efficient cause. The intellect, nous, "or whatever else it be that is thought to rule and lead us by nature, and to have cognizance of what is noble and divine" is the highest activity, according to Aristotle (contemplation or speculative thinking, theōríā). It is also the most sustainable, pleasant, self-sufficient activity; something which is aimed at for its own sake. (In contrast to politics and warfare, it does not involve doing things we'd rather not do, but rather something we do at our leisure.) This aim is not strictly human: to achieve it means to live in accordance not with mortal thoughts, but something immortal and divine which is within humans. According to Aristotle, contemplation is the only type of happy activity which it would not be ridiculous to imagine the gods having. In Aristotle's psychology and biology, the intellect is the soul (see also eudaimonia). According to Giovanni Reale, the first Unmoved Mover is a living, thinking and personal God who "possesses the theoretical knowledge alone or in the highest degree...knows not only Himself, but all things in their causes and first principles." First cause In Book VIII of his Physics, Aristotle examines the notions of change or motion, and attempts to show by a challenging argument, that the mere supposition of a 'before' and an 'after', requires a first principle. He argues that in the beginning, if the cosmos had come to be, its first motion would lack an antecedent state; and, as Parmenides said, "nothing comes from nothing". The cosmological argument, later attributed to Aristotle, thereby draws the conclusion that God exists. However, if the cosmos had a beginning, Aristotle argued, it would require an efficient first cause, a notion that Aristotle took to demonstrate a critical flaw. The purpose of Aristotle's cosmological argument, that at least one eternal unmoved mover must exist, is to support everyday change. In Aristotle's estimation, an explanation without the temporal actuality and potentiality of an infinite locomotive chain is required for an eternal cosmos with neither beginning nor end: an unmoved eternal substance for whom the Primum Mobile turns diurnally and whereby all terrestrial cycles are driven: day and night, the seasons of the year, the transformation of the elements, and the nature of plants and animals. Substance and change Aristotle begins by describing substance, of which he says there are three types: the sensible, which is subdivided into the perishable, which belongs to physics, and the eternal, which belongs to "another science". 
He notes that sensible substance is changeable and that there are several types of change, including quality and quantity, generation and destruction, increase and diminution, alteration, and motion. Change occurs when one given state becomes something contrary to it: that is to say, what exists potentially comes to exist actually (see potentiality and actuality). Therefore, "a thing [can come to be], incidentally, out of that which is not, [and] also all things come to be out of that which is, but is potentially, and is not actually." That by which something is changed is the mover, that which is changed is the matter, and that into which it is changed is the form. Substance is necessarily composed of different elements. The proof for this is that there are things which are different from each other and that all things are composed of elements. Since elements combine to form composite substances, and because these substances differ from each other, there must be different elements: in other words, "b or a cannot be the same as ba". Number of movers Near the end of Metaphysics, Book , Aristotle introduces a surprising question, asking "whether we have to suppose one such [mover] or more than one, and if the latter, how many". Aristotle concludes that the number of all the movers equals the number of separate movements, and we can determine these by considering the mathematical science most akin to philosophy, i.e., astronomy. Although the mathematicians differ on the number of movements, Aristotle considers that the number of celestial spheres would be 47 or 55. Nonetheless, he concludes his Metaphysics, Book , with a quotation from the Iliad: "The rule of many is not good; one ruler let there be." Influence John Burnet (1892) noted Aristotle's principles of being (see section above) influenced Anselm's view of God, whom he called "that than which nothing greater can be conceived." Anselm thought that God did not feel emotions such as anger or love, but appeared to do so through our imperfect understanding. The incongruity of judging "being" against something that might not exist, may have led Anselm to his famous ontological argument for God's existence. Many medieval philosophers made use of the idea of approaching a knowledge of God through negative attributes. For example, we should not say that God exists in the usual sense of the term; all we can safely say is that God is not nonexistent. We should not say that God is wise; but, we can say that God is not ignorant (i.e. in some way God has some properties of knowledge). We should not say that God is One; but, we can state that there is no multiplicity in God's being. Aristotelian theological concepts were accepted by many later Jewish, Islamic, and Christian philosophers. Key Jewish philosophers included Samuel Ibn Tibbon, Maimonides, and Gersonides, among many others. Their views of God are considered mainstream by many Jews of all denominations even today. Preeminent among Islamic philosophers who were influenced by Aristotelian theology are Avicenna and Averroes. In Christian theology, the key philosopher influenced by Aristotle was undoubtedly Thomas Aquinas. There had been earlier Aristotelian influences within Christianity (notably Anselm), but Aquinas (who, incidentally, found his Aristotelian influence via Avicenna, Averroes, and Maimonides) incorporated extensive Aristotelian ideas throughout his own theology. 
Through Aquinas and the Scholastic Christian theology of which he was a significant part, Aristotle became "academic theology's great authority in the course of the thirteenth century" and exerted an influence upon Christian theology that become both widespread and deeply embedded. However, notable Christian theologians rejected Aristotelian theological influence, especially the first generation of Christian Reformers and most notably Martin Luther. In subsequent Protestant theology, Aristotelian thought quickly reemerged in Protestant scholasticism. See also Notes References Sources The Theology of Aristotle in the Stanford Encyclopedia of Philosophy Philosophy of Aristotle Conceptions of God Causality Aristotelianism Concepts in metaphysics
Isothermal process
An isothermal process is a type of thermodynamic process in which the temperature T of a system remains constant: ΔT = 0. This typically occurs when a system is in contact with an outside thermal reservoir, and a change in the system occurs slowly enough to allow the system to be continuously adjusted to the temperature of the reservoir through heat exchange (see quasi-equilibrium). In contrast, an adiabatic process is where a system exchanges no heat with its surroundings (Q = 0). Simply, we can say that in an isothermal process For ideal gases only, internal energy while in adiabatic processes: Etymology The noun isotherm is derived from the Ancient Greek words , meaning "equal", and , meaning "heat". Examples Isothermal processes can occur in any kind of system that has some means of regulating the temperature, including highly structured machines, and even living cells. Some parts of the cycles of some heat engines are carried out isothermally (for example, in the Carnot cycle). In the thermodynamic analysis of chemical reactions, it is usual to first analyze what happens under isothermal conditions and then consider the effect of temperature. Phase changes, such as melting or evaporation, are also isothermal processes when, as is usually the case, they occur at constant pressure. Isothermal processes are often used as a starting point in analyzing more complex, non-isothermal processes. Isothermal processes are of special interest for ideal gases. This is a consequence of Joule's second law which states that the internal energy of a fixed amount of an ideal gas depends only on its temperature. Thus, in an isothermal process the internal energy of an ideal gas is constant. This is a result of the fact that in an ideal gas there are no intermolecular forces. Note that this is true only for ideal gases; the internal energy depends on pressure as well as on temperature for liquids, solids, and real gases. In the isothermal compression of a gas there is work done on the system to decrease the volume and increase the pressure. Doing work on the gas increases the internal energy and will tend to increase the temperature. To maintain the constant temperature energy must leave the system as heat and enter the environment. If the gas is ideal, the amount of energy entering the environment is equal to the work done on the gas, because internal energy does not change. For isothermal expansion, the energy supplied to the system does work on the surroundings. In either case, with the aid of a suitable linkage the change in gas volume can perform useful mechanical work. For details of the calculations, see calculation of work. For an adiabatic process, in which no heat flows into or out of the gas because its container is well insulated, Q = 0. If there is also no work done, i.e. a free expansion, there is no change in internal energy. For an ideal gas, this means that the process is also isothermal. Thus, specifying that a process is isothermal is not sufficient to specify a unique process. Details for an ideal gas For the special case of a gas to which Boyle's law applies, the product pV (p for gas pressure and V for gas volume) is a constant if the gas is kept at isothermal conditions. The value of the constant is nRT, where n is the number of moles of the present gas and R is the ideal gas constant. In other words, the ideal gas law pV = nRT applies. Therefore: holds. The family of curves generated by this equation is shown in the graph in Figure 1. 
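The family of curves can be tabulated directly from the ideal gas law; the short sketch below (with an assumed amount of gas and temperature) evaluates a few points on one isotherm and confirms that the product pV stays fixed at nRT along it.

R = 8.314       # J/(mol K), ideal gas constant
n = 1.0         # mol, assumed amount of gas
T = 300.0       # K, assumed temperature; each choice of T gives one curve of the family

for V in (0.010, 0.020, 0.030, 0.040):           # volumes in m^3
    p = n * R * T / V                            # pressure in Pa from pV = nRT
    print(f"V = {V:.3f} m^3  ->  p = {p / 1e3:.1f} kPa,  pV = {p * V:.1f} J")
# pV is the same (about 2494 J = nRT) in every row, as Boyle's law requires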
Each curve is called an isotherm, meaning a curve at a same temperature T. Such graphs are termed indicator diagrams and were first used by James Watt and others to monitor the efficiency of engines. The temperature corresponding to each curve in the figure increases from the lower left to the upper right. Calculation of work In thermodynamics, the reversible work involved when a gas changes from state A to state B is where p for gas pressure and V for gas volume. For an isothermal (constant temperature T), reversible process, this integral equals the area under the relevant PV (pressure-volume) isotherm, and is indicated in purple in Figure 2 for an ideal gas. Again, p =  applies and with T being constant (as this is an isothermal process), the expression for work becomes: In IUPAC convention, work is defined as work on a system by its surroundings. If, for example, the system is compressed, then the work is done on the system by the surrounding so the work is positive and the internal energy of the system increases. Conversely, if the system expands (i.e., system surrounding expansion, so free expansions not the case), then the work is negative as the system does work on the surroundings and the internal energy of the system decreases. It is also worth noting that for ideal gases, if the temperature is held constant, the internal energy of the system U also is constant, and so ΔU = 0. Since the First Law of Thermodynamics states that ΔU = Q + W in IUPAC convention, it follows that Q = −W for the isothermal compression or expansion of ideal gases. Example of an isothermal process The reversible expansion of an ideal gas can be used as an example of work produced by an isothermal process. Of particular interest is the extent to which heat is converted to usable work, and the relationship between the confining force and the extent of expansion. During isothermal expansion of an ideal gas, both and change along an isotherm with a constant product (i.e., constant T). Consider a working gas in a cylindrical chamber 1 m high and 1 m2 area (so 1m3 volume) at 400 K in static equilibrium. The surroundings consist of air at 300 K and 1 atm pressure (designated as ). The working gas is confined by a piston connected to a mechanical device that exerts a force sufficient to create a working gas pressure of 2 atm (state ). For any change in state that causes a force decrease, the gas will expand and perform work on the surroundings. Isothermal expansion continues as long as the applied force decreases and appropriate heat is added to keep = 2 [atm·m3] (= 2 atm × 1 m3). The expansion is said to be internally reversible if the piston motion is sufficiently slow such that at each instant during the expansion the gas temperature and pressure is uniform and conform to the ideal gas law. Figure 3 shows the relationship for = 2 [atm·m3] for isothermal expansion from 2 atm (state ) to 1 atm (state ). The work done (designated ) has two components. First, expansion work against the surrounding atmosphere pressure (designated as ), and second, usable mechanical work (designated as ). The output here could be movement of the piston used to turn a crank-arm, which would then turn a pulley capable of lifting water out of flooded salt mines. The system attains state ( = 2 [atm·m3] with = 1 atm and = 2 m3) when the applied force reaches zero. At that point, equals –140.5 kJ, and is –101.3 kJ. By difference, = –39.1 kJ, which is 27.9% of the heat supplied to the process (- 39.1 kJ / - 140.5 kJ). 
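The numbers in this example can be reproduced from the isothermal work integral; the sketch below (using the same assumed conditions: pV held at 2 atm·m³, expansion from 2 atm and 1 m³ to 1 atm and 2 m³ against a 1 atm surrounding, IUPAC sign convention) recovers the −140.5 kJ, −101.3 kJ and −39.1 kJ figures and the 27.9% fraction.

import math

atm = 101.325e3                      # Pa per standard atmosphere
pV = 2.0 * atm * 1.0                 # constant product p*V along the isotherm (J)

V_A, V_B = 1.0, 2.0                  # initial and final volumes in m^3
p_surr = 1.0 * atm                   # surrounding pressure

# IUPAC convention: W is work done on the gas, negative for an expansion
W_total = -pV * math.log(V_B / V_A)  # about -140.5 kJ
W_atm = -p_surr * (V_B - V_A)        # about -101.3 kJ, spent pushing back the atmosphere
W_useful = W_total - W_atm           # about -39.1 kJ of usable mechanical work

print(f"{W_total/1e3:.1f} kJ, {W_atm/1e3:.1f} kJ, {W_useful/1e3:.1f} kJ, "
      f"fraction = {W_useful / W_total:.1%}")   # fraction = 27.9%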
This is the maximum amount of usable mechanical work obtainable from the process at the stated conditions. The percentage of is a function of and , and approaches 100% as approaches zero. To pursue the nature of isothermal expansion further, note the red line on Figure 3. The fixed value of causes an exponential increase in piston rise vs. pressure decrease. For example, a pressure decrease from 2 to 1.9 atm causes a piston rise of 0.0526 m. In comparison, a pressure decrease from 1.1 to 1 atm causes a piston rise of 0.1818 m. Entropy changes Isothermal processes are especially convenient for calculating changes in entropy since, in this case, the formula for the entropy change, ΔS, is simply where Qrev is the heat transferred (internally reversible) to the system and T is absolute temperature. This formula is valid only for a hypothetical reversible process; that is, a process in which equilibrium is maintained at all times. A simple example is an equilibrium phase transition (such as melting or evaporation) taking place at constant temperature and pressure. For a phase transition at constant pressure, the heat transferred to the system is equal to the enthalpy of transformation, ΔHtr, thus Q = ΔHtr. At any given pressure, there will be a transition temperature, Ttr, for which the two phases are in equilibrium (for example, the normal boiling point for vaporization of a liquid at one atmosphere pressure). If the transition takes place under such equilibrium conditions, the formula above may be used to directly calculate the entropy change . Another example is the reversible isothermal expansion (or compression) of an ideal gas from an initial volume VA and pressure PA to a final volume VB and pressure PB. As shown in Calculation of work, the heat transferred to the gas is . This result is for a reversible process, so it may be substituted in the formula for the entropy change to obtain . Since an ideal gas obeys Boyle's Law, this can be rewritten, if desired, as . Once obtained, these formulas can be applied to an irreversible process, such as the free expansion of an ideal gas. Such an expansion is also isothermal and may have the same initial and final states as in the reversible expansion. Since entropy is a state function (that depends on an equilibrium state, not depending on a path that the system takes to reach that state), the change in entropy of the system is the same as in the reversible process and is given by the formulas above. Note that the result Q = 0 for the free expansion can not be used in the formula for the entropy change since the process is not reversible. The difference between the reversible and irreversible is found in the entropy of the surroundings. In both cases, the surroundings are at a constant temperature, T, so that ΔSsur = −; the minus sign is used since the heat transferred to the surroundings is equal in magnitude and opposite in sign to the heat Q transferred to the system. In the reversible case, the change in entropy of the surroundings is equal and opposite to the change in the system, so the change in entropy of the universe is zero. In the irreversible, Q = 0, so the entropy of the surroundings does not change and the change in entropy of the universe is equal to ΔS for the system. See also Joule–Thomson effect Joule expansion (also called free expansion) Adiabatic process Cyclic process Isobaric process Isochoric process Polytropic process Spontaneous process References Thermodynamic processes Atmospheric thermodynamics
Chemotaxis
Chemotaxis (from chemo- + taxis) is the movement of an organism or entity in response to a chemical stimulus. Somatic cells, bacteria, and other single-cell or multicellular organisms direct their movements according to certain chemicals in their environment. This is important for bacteria to find food (e.g., glucose) by swimming toward the highest concentration of food molecules, or to flee from poisons (e.g., phenol). In multicellular organisms, chemotaxis is critical to early development (e.g., movement of sperm towards the egg during fertilization) and development (e.g., migration of neurons or lymphocytes) as well as in normal function and health (e.g., migration of leukocytes during injury or infection). In addition, it has been recognized that mechanisms that allow chemotaxis in animals can be subverted during cancer metastasis, and the aberrant change of the overall property of these networks, which control chemotaxis, can lead to carcinogenesis. The aberrant chemotaxis of leukocytes and lymphocytes also contribute to inflammatory diseases such as atherosclerosis, asthma, and arthritis. Sub-cellular components, such as the polarity patch generated by mating yeast, may also display chemotactic behavior. Positive chemotaxis occurs if the movement is toward a higher concentration of the chemical in question; negative chemotaxis if the movement is in the opposite direction. Chemically prompted kinesis (randomly directed or nondirectional) can be called chemokinesis. History of chemotaxis research Although migration of cells was detected from the early days of the development of microscopy by Leeuwenhoek, a Caltech lecture regarding chemotaxis propounds that 'erudite description of chemotaxis was only first made by T. W. Engelmann (1881) and W. F. Pfeffer (1884) in bacteria, and H. S. Jennings (1906) in ciliates'. The Nobel Prize laureate I. Metchnikoff also contributed to the study of the field during 1882 to 1886, with investigations of the process as an initial step of phagocytosis. The significance of chemotaxis in biology and clinical pathology was widely accepted in the 1930s, and the most fundamental definitions underlying the phenomenon were drafted by this time. The most important aspects in quality control of chemotaxis assays were described by H. Harris in the 1950s. In the 1960s and 1970s, the revolution of modern cell biology and biochemistry provided a series of novel techniques that became available to investigate the migratory responder cells and subcellular fractions responsible for chemotactic activity. The availability of this technology led to the discovery of C5a, a major chemotactic factor involved in acute inflammation. The pioneering works of J. Adler modernized Pfeffer's capillary assay and represented a significant turning point in understanding the whole process of intracellular signal transduction of bacteria. Bacterial chemotaxis—general characteristics Some bacteria, such as E. coli, have several flagella per cell (4–10 typically). These can rotate in two ways: Counter-clockwise rotation aligns the flagella into a single rotating bundle, causing the bacterium to swim in a straight line; and Clockwise rotation breaks the flagella bundle apart such that each flagellum points in a different direction, causing the bacterium to tumble in place. The directions of rotation are given for an observer outside the cell looking down the flagella toward the cell. 
Behavior The overall movement of a bacterium is the result of alternating tumble and swim phases, called run-and-tumble motion. As a result, the trajectory of a bacterium swimming in a uniform environment will form a random walk with relatively straight swims interrupted by random tumbles that reorient the bacterium. Bacteria such as E. coli are unable to choose the direction in which they swim, and are unable to swim in a straight line for more than a few seconds due to rotational diffusion; in other words, bacteria "forget" the direction in which they are going. By repeatedly evaluating their course, and adjusting if they are moving in the wrong direction, bacteria can direct their random walk motion toward favorable locations. In the presence of a chemical gradient bacteria will chemotax, or direct their overall motion based on the gradient. If the bacterium senses that it is moving in the correct direction (toward attractant/away from repellent), it will keep swimming in a straight line for a longer time before tumbling; however, if it is moving in the wrong direction, it will tumble sooner. Bacteria like E. coli use temporal sensing to decide whether their situation is improving or not, and in this way, find the location with the highest concentration of attractant, detecting even small differences in concentration. This biased random walk is a result of simply choosing between two methods of random movement; namely tumbling and straight swimming. The helical nature of the individual flagellar filament is critical for this movement to occur. The protein structure that makes up the flagellar filament, flagellin, is conserved among all flagellated bacteria. Vertebrates seem to have taken advantage of this fact by possessing an immune receptor (TLR5) designed to recognize this conserved protein. As in many instances in biology, there are bacteria that do not follow this rule. Many bacteria, such as Vibrio, are monoflagellated and have a single flagellum at one pole of the cell. Their method of chemotaxis is different. Others possess a single flagellum that is kept inside the cell wall. These bacteria move by spinning the whole cell, which is shaped like a corkscrew. Signal transduction Chemical gradients are sensed through multiple transmembrane receptors, called methyl-accepting chemotaxis proteins (MCPs), which vary in the molecules that they detect. Thousands of MCP receptors are known to be encoded across the bacterial kingdom. These receptors may bind attractants or repellents directly or indirectly through interaction with proteins of periplasmatic space. The signals from these receptors are transmitted across the plasma membrane into the cytosol, where Che proteins are activated. The Che proteins alter the tumbling frequency, and alter the receptors. Flagellum regulation The proteins CheW and CheA bind to the receptor. The absence of receptor activation results in autophosphorylation in the histidine kinase, CheA, at a single highly conserved histidine residue. CheA, in turn, transfers phosphoryl groups to conserved aspartate residues in the response regulators CheB and CheY; CheA is a histidine kinase and it does not actively transfer the phosphoryl group, rather, the response regulator CheB takes the phosphoryl group from CheA. This mechanism of signal transduction is called a two-component system, and it is a common form of signal transduction in bacteria. 
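The run-and-tumble behaviour and the temporal comparison described above can be caricatured in a few lines; the toy model below (an illustration only, with arbitrary rates and an assumed linear attractant gradient, not the real Che network) tumbles less often when the sensed concentration has increased since the previous step, which is enough to produce a net drift up the gradient.

import math
import random

def attractant(x, y):
    """Assumed attractant field: a simple linear gradient increasing along +x."""
    return 0.1 * x

def run_and_tumble(steps=2000, speed=1.0, dt=0.1):
    x = y = 0.0
    heading = random.uniform(0.0, 2.0 * math.pi)
    previous = attractant(x, y)
    for _ in range(steps):
        # Run: swim straight at constant speed along the current heading
        x += speed * dt * math.cos(heading)
        y += speed * dt * math.sin(heading)
        current = attractant(x, y)
        # Temporal sensing: tumble rarely if the concentration is rising,
        # readily if it is not (arbitrary toy probabilities)
        p_tumble = 0.05 if current > previous else 0.5
        if random.random() < p_tumble:
            heading = random.uniform(0.0, 2.0 * math.pi)   # tumble: new random direction
        previous = current
    return x, y

random.seed(0)
print(run_and_tumble())   # the final x coordinate is typically large and positive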
CheY induces tumbling by interacting with the flagellar switch protein FliM, inducing a change from counter-clockwise to clockwise rotation of the flagellum. Change in the rotation state of a single flagellum can disrupt the entire flagella bundle and cause a tumble. Receptor regulation CheB, when activated by CheA, acts as a methylesterase, removing methyl groups from glutamate residues on the cytosolic side of the receptor; it works antagonistically with CheR, a methyltransferase, which adds methyl residues to the same glutamate residues. If the level of an attractant remains high, the level of phosphorylation of CheA (and, therefore, CheY and CheB) will remain low, the cell will swim smoothly, and the level of methylation of the MCPs will increase (because CheB-P is not present to demethylate). The MCPs no longer respond to the attractant when they are fully methylated; therefore, even though the level of attractant might remain high, the level of CheA-P (and CheB-P) increases and the cell begins to tumble. The MCPs can be demethylated by CheB-P, and, when this happens, the receptors can once again respond to attractants. The situation is the opposite with regard to repellents: fully methylated MCPs respond best to repellents, while least-methylated MCPs respond worst to repellents. This regulation allows the bacterium to 'remember' chemical concentrations from the recent past, a few seconds, and compare them to those it is currently experiencing, thus 'know' whether it is traveling up or down a gradient. that bacteria have to chemical gradients, other mechanisms are involved in increasing the absolute value of the sensitivity on a given background. Well-established examples are the ultra-sensitive response of the motor to the CheY-P signal, and the clustering of chemoreceptors. Chemoattractants and chemorepellents Chemoattractants and chemorepellents are inorganic or organic substances possessing chemotaxis-inducer effect in motile cells. These chemotactic ligands create chemical concentration gradients that organisms, prokaryotic and eukaryotic, move toward or away from, respectively. Effects of chemoattractants are elicited via chemoreceptors such as methyl-accepting chemotaxis proteins (MCP). MCPs in E.coli include Tar, Tsr, Trg and Tap. Chemoattracttants to Trg include ribose and galactose with phenol as a chemorepellent. Tap and Tsr recognize dipeptides and serine as chemoattractants, respectively. Chemoattractants or chemorepellents bind MCPs at its extracellular domain; an intracellular signaling domain relays the changes in concentration of these chemotactic ligands to downstream proteins like that of CheA which then relays this signal to flagellar motors via phosphorylated CheY (CheY-P). CheY-P can then control flagellar rotation influencing the direction of cell motility. For E.coli, S. meliloti, and R. spheroides, the binding of chemoattractants to MCPs inhibit CheA and therefore CheY-P activity, resulting in smooth runs, but for B. substilis, CheA activity increases. Methylation events in E.coli cause MCPs to have lower affinity to chemoattractants which causes increased activity of CheA and CheY-P resulting in tumbles. In this way cells are able to adapt to the immediate chemoattractant concentration and detect further changes to modulate cell motility. Chemoattractants in eukaryotes are well characterized for immune cells. Formyl peptides, such as fMLF, attract leukocytes such as neutrophils and macrophages, causing movement toward infection sites. 
Non-acylated methioninyl peptides do not act as chemoattractants to neutrophils and macrophages. Leukocytes also move toward chemoattractants C5a, a complement component, and pathogen-specific ligands on bacteria. Mechanisms concerning chemorepellents are less known than chemoattractants. Although chemorepellents work to confer an avoidance response in organisms, Tetrahymena thermophila adapt to a chemorepellent, Netrin-1 peptide, within 10 minutes of exposure; however, exposure to chemorepellents such as GTP, PACAP-38, and nociceptin show no such adaptations. GTP and ATP are chemorepellents in micro-molar concentrations to both Tetrahymena and Paramecium. These organisms avoid these molecules by producing avoiding reactions to re-orient themselves away from the gradient. Eukaryotic chemotaxis The mechanism of chemotaxis that eukaryotic cells employ is quite different from that in the bacteria E. coli; however, sensing of chemical gradients is still a crucial step in the process. Due to their small size and other biophysical constraints, E. coli cannot directly detect a concentration gradient. Instead, they employ temporal gradient sensing, where they move over larger distances several times their own width and measure the rate at which perceived chemical concentration changes. Eukaryotic cells are much larger than prokaryotes and have receptors embedded uniformly throughout the cell membrane. Eukaryotic chemotaxis involves detecting a concentration gradient spatially by comparing the asymmetric activation of these receptors at the different ends of the cell. Activation of these receptors results in migration towards chemoattractants, or away from chemorepellants. In mating yeast, which are non-motile, patches of polarity proteins on the cell cortex can relocate in a chemotactic fashion up pheromone gradients. It has also been shown that both prokaryotic and eukaryotic cells are capable of chemotactic memory. In prokaryotes, this mechanism involves the methylation of receptors called methyl-accepting chemotaxis proteins (MCPs). This results in their desensitization and allows prokaryotes to "remember" and adapt to a chemical gradient. In contrast, chemotactic memory in eukaryotes can be explained by the Local Excitation Global Inhibition (LEGI) model. LEGI involves the balance between a fast excitation and delayed inhibition which controls downstream signaling such as Ras activation and PIP3 production. Levels of receptors, intracellular signalling pathways and the effector mechanisms all represent diverse, eukaryotic-type components. In eukaryotic unicellular cells, amoeboid movement and cilium or the eukaryotic flagellum are the main effectors (e.g., Amoeba or Tetrahymena). Some eukaryotic cells of higher vertebrate origin, such as immune cells also move to where they need to be. Besides immune competent cells (granulocyte, monocyte, lymphocyte) a large group of cells—considered previously to be fixed into tissues—are also motile in special physiological (e.g., mast cell, fibroblast, endothelial cells) or pathological conditions (e.g., metastases). Chemotaxis has high significance in the early phases of embryogenesis as development of germ layers is guided by gradients of signal molecules. 
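The Local Excitation Global Inhibition (LEGI) balance mentioned above can be sketched with two first-order equations: a fast excitation that tracks the local signal at each point of the membrane and a slow inhibition that tracks the cell-wide average, with the downstream response driven by their difference. The following is a minimal sketch under invented rate constants and a static, shallow gradient; it is not a model of any specific pathway, and the Ras/PIP3-like readout is only a stand-in.

```python
import numpy as np

n = 100                                   # membrane points around the cell perimeter
theta = np.linspace(0, 2 * np.pi, n, endpoint=False)
signal = 1.0 + 0.1 * np.cos(theta)        # shallow external gradient across the cell

k_e, k_i, dt = 1.0, 0.05, 0.01            # fast local excitation, slow global inhibition
E = np.zeros(n)
I = 0.0                                   # inhibition is global: a single value for the whole cell

for _ in range(20000):
    E += dt * k_e * (signal - E)          # local excitation tracks the local signal quickly
    I += dt * k_i * (signal.mean() - I)   # global inhibition tracks the mean signal slowly
    response = np.clip(E - I, 0.0, None)  # downstream activity (a Ras/PIP3-like readout)

print("front response:", response[0], "back response:", response[n // 2])
```

At steady state the response tracks the deviation of the local signal from the cell-wide mean, so the front of the cell stays activated while a spatially uniform increase in stimulus is eventually cancelled, which is the adaptive memory attributed to LEGI above.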
Detection of a gradient of chemoattractant The specific molecule or molecules that allow eukaryotic cells to detect a gradient of chemoattractant ligands (that is, a sort of molecular compass that detects the direction of a chemoattractant) seem to change depending on the cell and the chemoattractant receptor involved, or even on the concentration of the chemoattractant. However, these molecules are apparently activated independently of the motility of the cell. That is, even an immobilized cell is still able to detect the direction of a chemoattractant. There appear to be mechanisms by which an external chemotactic gradient is sensed and turned into intracellular Ras and PIP3 gradients, which result in the activation of a signaling pathway culminating in the polymerisation of actin filaments. The growing distal end of actin filaments develops connections with the internal surface of the plasma membrane via different sets of peptides and results in the formation of anterior pseudopods and posterior uropods. Cilia of eukaryotic cells can also produce chemotaxis; in this case, it is mainly a Ca2+-dependent induction of the microtubular system of the basal body and the beat of the 9 + 2 microtubules within cilia. The orchestrated beating of hundreds of cilia is synchronized by a submembranous system built between basal bodies. The details of the signaling pathways are still not totally clear. Chemotaxis-related migratory responses Chemotaxis refers to the directional migration of cells in response to chemical gradients; several variations of chemical-induced migration exist as listed below. Chemokinesis refers to an increase in cellular motility in response to chemicals in the surrounding environment. Unlike chemotaxis, the migration stimulated by chemokinesis lacks directionality, and instead increases environmental scanning behaviors. In haptotaxis the gradient of the chemoattractant is expressed or bound on a surface, in contrast to the classical model of chemotaxis, in which the gradient develops in a soluble fluid. The most common biologically active haptotactic surface is the extracellular matrix (ECM); the presence of bound ligands is responsible for induction of transendothelial migration and angiogenesis. Necrotaxis embodies a special type of chemotaxis in which the chemoattractant molecules are released from necrotic or apoptotic cells. Depending on the chemical character of released substances, necrotaxis can accumulate or repel cells, which underlines the pathophysiological significance of this phenomenon. Receptors In general, eukaryotic cells sense the presence of chemotactic stimuli through the use of 7-transmembrane (or serpentine) heterotrimeric G-protein-coupled receptors, a class representing a significant portion of the genome. Some members of this gene superfamily are used in eyesight (rhodopsins) as well as in olfaction (smelling). The main classes of chemotaxis receptors are triggered by: Formyl peptides - formyl peptide receptors (FPR), Chemokines - chemokine receptors (CCR or CXCR), and Leukotrienes - leukotriene receptors (BLT). However, activation of a wide set of membrane receptors (e.g., for cyclic nucleotides, amino acids, insulin, and vasoactive peptides) also elicits migration of the cell. Chemotactic selection While some chemotaxis receptors are expressed in the surface membrane with long-term characteristics, as they are determined genetically, others have short-term dynamics, as they are assembled ad hoc in the presence of the ligand.
The diverse features of the chemotaxis receptors and ligands allow for the possibility of selecting chemotactic responder cells with a simple chemotaxis assay. By chemotactic selection, we can determine whether a still-uncharacterized molecule acts via the long- or the short-term receptor pathway. The term chemotactic selection is also used to designate a technique that separates eukaryotic or prokaryotic cells according to their chemotactic responsiveness to selector ligands. Chemotactic ligands The number of molecules capable of eliciting chemotactic responses is relatively high, and we can distinguish primary and secondary chemotactic molecules. The main groups of the primary ligands are as follows: Formyl peptides are di-, tri-, or tetrapeptides of bacterial origin, formylated on the N-terminus of the peptide. They are released from bacteria in vivo or after decomposition of the cell; a typical member of this group is N-formylmethionyl-leucyl-phenylalanine (abbreviated fMLF or fMLP). Bacterial fMLF is a key component of inflammation and has characteristic chemoattractant effects in neutrophil granulocytes and monocytes. The chemotactic factor ligands and receptors related to formyl peptides are summarized in the related article, Formyl peptide receptors. Complement 3a (C3a) and complement 5a (C5a) are intermediate products of the complement cascade. Their synthesis is linked to the three pathways (classical, lectin-dependent, and alternative) of complement activation by a convertase enzyme. The main target cells of these derivatives are neutrophil granulocytes and monocytes as well. Chemokines belong to a special class of cytokines; not only do their groups (C, CC, CXC, CX3C chemokines) represent structurally related molecules with a special arrangement of disulfide bridges but also their target cell specificity is diverse. CC chemokines act on monocytes (e.g., RANTES), and CXC chemokines are neutrophil granulocyte-specific (e.g., IL-8). Investigations of the three-dimensional structures of chemokines provided evidence that a characteristic composition of beta-sheets and an alpha helix presents the sequences required for interaction with the chemokine receptors. Formation of dimers and their increased biological activity was demonstrated by crystallography of several chemokines, e.g. IL-8. Metabolites of polyunsaturated fatty acids Leukotrienes are eicosanoid lipid mediators made by the metabolism of arachidonic acid by ALOX5 (also termed 5-lipoxygenase). Their most prominent member with chemotactic factor activity is leukotriene B4, which elicits adhesion, chemotaxis, and aggregation of leukocytes. The chemoattractant action of LTB4 is induced via either of two G protein–coupled receptors, BLT1 and BLT2, which are highly expressed in cells involved in inflammation and allergy. The family of 5-hydroxyicosatetraenoic acid eicosanoids consists of arachidonic acid metabolites also formed by ALOX5. Three members of the family form naturally and have prominent chemotactic activity. These, listed in order of decreasing potency, are: 5-oxo-eicosatetraenoic acid, 5-oxo-15-hydroxy-eicosatetraenoic acid, and 5-Hydroxyeicosatetraenoic acid. This family of agonists stimulates chemotactic responses in human eosinophils, neutrophils, and monocytes by binding to the Oxoeicosanoid receptor 1, which, like the receptors for leukotriene B4, is a G protein-coupled receptor. Aside from the skin, neutrophils are the body's first line of defense against bacterial infections.
After leaving nearby blood vessels, these cells recognize chemicals produced by bacteria in a cut or scratch and migrate "toward the smell". 5-hydroxyeicosatrienoic acid and 5-oxoeicosatrienoic acid are metabolites of Mead acid (5Z,8Z,11Z-eicosatrienoic acid); they stimulate leukocyte chemotaxis through the oxoeicosanoid receptor 1, with 5-oxoeicosatrienoic acid being as potent as its arachidonic acid-derived analog, 5-oxo-eicosatetraenoic acid, in stimulating human blood eosinophil and neutrophil chemotaxis. 12-Hydroxyeicosatetraenoic acid is an eicosanoid metabolite of arachidonic acid made by ALOX12, which stimulates leukocyte chemotaxis through the leukotriene B4 receptor, BLT2. Prostaglandin D2 is an eicosanoid metabolite of arachidonic acid made by cyclooxygenase 1 or cyclooxygenase 2 that stimulates chemotaxis through the Prostaglandin DP2 receptor. It elicits chemotactic responses in eosinophils, basophils, and T helper cells of the Th2 subtype. 12-Hydroxyheptadecatrienoic acid is a non-eicosanoid metabolite of arachidonic acid made by cyclooxygenase 1 or cyclooxygenase 2 that stimulates leukocyte chemotaxis through the leukotriene B4 receptor, BLT2. 15-oxo-eicosatetraenoic acid is an eicosanoid metabolite of arachidonic acid made by ALOX15; it has weak chemotactic activity for human monocytes (see 15-Hydroxyeicosatetraenoic acid#15-oxo-ETE). The receptor or other mechanism by which this metabolite stimulates chemotaxis has not been elucidated. Chemotactic range fitting Chemotactic responses elicited by ligand-receptor interactions vary with the concentration of the ligand. Investigations of ligand families (e.g. amino acids or oligopeptides) demonstrate that chemoattractant activity occurs over a wide range, while chemorepellent activities have narrow ranges. Clinical significance A changed migratory potential of cells has relatively high importance in the development of several clinical symptoms and syndromes. Altered chemotactic activity of extracellular (e.g., Escherichia coli) or intracellular (e.g., Listeria monocytogenes) pathogens itself represents a significant clinical target. Modification of the endogenous chemotactic ability of these microorganisms by pharmaceutical agents can decrease or inhibit the rate of infections or the spreading of infectious diseases. Apart from infections, there are some other diseases wherein impaired chemotaxis is the primary etiological factor, as in Chédiak–Higashi syndrome, where giant intracellular vesicles inhibit normal migration of cells. Mathematical models Several mathematical models of chemotaxis have been developed, depending on: the type of migration (e.g., basic differences of bacterial swimming, movement of unicellular eukaryotes with cilia/flagellum, and amoeboid migration); the physico-chemical characteristics of the chemicals working as ligands (e.g., diffusion); the biological characteristics of the ligands (attractant, neutral, and repellent molecules); the assay systems applied to evaluate chemotaxis (see incubation times, development, and stability of concentration gradients); and other environmental effects possessing a direct or indirect influence on the migration (lighting, temperature, magnetic fields, etc.). Although interactions of the factors listed above make the behavior of the solutions of mathematical models of chemotaxis rather complex, it is possible to describe the basic phenomenon of chemotaxis-driven motion in a straightforward way.
Indeed, let us denote with φ the spatially non-uniform concentration of the chemo-attractant and ∇φ its gradient. Then the chemotactic cellular flow (also called current) J that is generated by the chemotaxis is linked to the above gradient by the law J = χ ρ ∇φ, where ρ is the spatial density of the cells and χ is the so-called 'chemotactic coefficient'; χ is often not constant, but a decreasing function of the chemo-attractant. For some quantity ρ that is subject to a total flux J and a generation/destruction term S, it is possible to formulate a continuity equation: ∂ρ/∂t + ∇·J = S, where ∇· is the divergence. This general equation applies to both the cell density and the chemo-attractant. Therefore, incorporating a diffusion flux into the total flux term, the interactions between these quantities are governed by a set of coupled reaction-diffusion partial differential equations describing the change in ρ and φ: ∂ρ/∂t = ∇·(D_ρ ∇ρ) − ∇·(χ ρ ∇φ) + f(ρ) and ∂φ/∂t = D_φ ∇²φ + g(φ, ρ), where f describes the growth in cell density, g is the kinetics/source term for the chemo-attractant, and the diffusion coefficients for cell density and the chemo-attractant are respectively D_ρ and D_φ. Spatial ecology of soil microorganisms is a function of their chemotactic sensitivities towards substrate and fellow organisms. The chemotactic behavior of bacteria has been shown to lead to non-trivial population patterns even in the absence of environmental heterogeneities. The presence of structural pore-scale heterogeneities has an extra impact on the emerging bacterial patterns. Measurement of chemotaxis A wide range of techniques is available to evaluate the chemotactic activity of cells or the chemoattractant and chemorepellent character of ligands. The basic requirements of the measurement are as follows: concentration gradients can develop relatively quickly and persist for a long time in the system; chemotactic and chemokinetic activities are distinguished; migration of cells is free toward and away on the axis of the concentration gradient; and detected responses are the result of active migration of cells. Despite the fact that an ideal chemotaxis assay is still not available, there are several protocols and pieces of equipment that offer good correspondence with the conditions described above. Artificial chemotactic systems Chemical robots that use artificial chemotaxis to navigate autonomously have been designed. Applications include targeted delivery of drugs in the body. More recently, enzyme molecules have also shown positive chemotactic behavior in the gradient of their substrates. The thermodynamically favorable binding of enzymes to their specific substrates is recognized as the origin of enzymatic chemotaxis. Additionally, enzymes in cascades have also shown substrate-driven chemotactic aggregation. Apart from active enzymes, non-reacting molecules also show chemotactic behavior. This has been demonstrated by using dye molecules that move directionally in gradients of polymer solution through favorable hydrophobic interactions. See also McCutcheon index Tropism Durotaxis Haptotaxis Mechanotaxis Plithotaxis Thin layers (oceanography) References Further reading External links Chemotaxis Neutrophil Chemotaxis Cell Migration Gateway Downloadable Matlab chemotaxis simulator Bacterial Chemotaxis Interactive Simulator (web-app) Motile cells Perception Taxes (biology) Transmembrane receptors Transport phenomena
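The coupled reaction-diffusion system above (a Keller-Segel-type model) can be integrated with straightforward finite differences. The sketch below works in one dimension with periodic boundaries, takes the growth and attractant source terms f and g to be zero, holds χ constant, and uses small, purely illustrative parameter values so that the explicit scheme stays stable.

```python
import numpy as np

nx, L = 200, 1.0
dx = L / nx
dt = 1e-5
D_rho, D_phi, chi = 1e-3, 1e-3, 5e-3      # illustrative coefficients

x = np.linspace(0, L, nx)
rho = np.ones(nx)                          # initially uniform cell density
phi = np.exp(-((x - 0.5) ** 2) / 0.01)     # attractant bump in the middle of the domain

def lap(u):
    """1D Laplacian with periodic boundaries (adequate for this illustration)."""
    return (np.roll(u, 1) + np.roll(u, -1) - 2 * u) / dx**2

def grad(u):
    """Centered first difference with periodic boundaries."""
    return (np.roll(u, -1) - np.roll(u, 1)) / (2 * dx)

for _ in range(20000):
    # d(rho)/dt = div(D_rho grad rho) - div(chi rho grad phi), with f = 0
    flux = D_rho * grad(rho) - chi * rho * grad(phi)
    rho += dt * grad(flux)
    # d(phi)/dt = D_phi laplacian(phi), with g = 0
    phi += dt * D_phi * lap(phi)

print("cells accumulated near the attractant peak:", rho[nx // 2] > rho[0])
```

The cell density piles up around the attractant maximum while total cell mass is conserved, which is the basic chemotaxis-driven aggregation behaviour the equations are meant to capture.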
Relativistic angular momentum
In physics, relativistic angular momentum refers to the mathematical formalisms and physical concepts that define angular momentum in special relativity (SR) and general relativity (GR). The relativistic quantity is subtly different from the three-dimensional quantity in classical mechanics. Angular momentum is an important dynamical quantity derived from position and momentum. It is a measure of an object's rotational motion and resistance to changes in its rotation. Also, in the same way momentum conservation corresponds to translational symmetry, angular momentum conservation corresponds to rotational symmetry – the connection between symmetries and conservation laws is made by Noether's theorem. While these concepts were originally discovered in classical mechanics, they are also true and significant in special and general relativity. In terms of abstract algebra, the invariance of angular momentum, four-momentum, and other symmetries in spacetime, are described by the Lorentz group, or more generally the Poincaré group. Physical quantities that remain separate in classical physics are naturally combined in SR and GR by enforcing the postulates of relativity. Most notably, the space and time coordinates combine into the four-position, and energy and momentum combine into the four-momentum. The components of these four-vectors depend on the frame of reference used, and change under Lorentz transformations to other inertial frames or accelerated frames. Relativistic angular momentum is less obvious. The classical definition of angular momentum is the cross product of position x with momentum p to obtain a pseudovector , or alternatively as the exterior product to obtain a second order antisymmetric tensor . What does this combine with, if anything? There is another vector quantity not often discussed – it is the time-varying moment of mass polar-vector (not the moment of inertia) related to the boost of the centre of mass of the system, and this combines with the classical angular momentum pseudovector to form an antisymmetric tensor of second order, in exactly the same way as the electric field polar-vector combines with the magnetic field pseudovector to form the electromagnetic field antisymmetric tensor. For rotating mass–energy distributions (such as gyroscopes, planets, stars, and black holes) instead of point-like particles, the angular momentum tensor is expressed in terms of the stress–energy tensor of the rotating object. In special relativity alone, in the rest frame of a spinning object, there is an intrinsic angular momentum analogous to the "spin" in quantum mechanics and relativistic quantum mechanics, although for an extended body rather than a point particle. In relativistic quantum mechanics, elementary particles have spin and this is an additional contribution to the orbital angular momentum operator, yielding the total angular momentum tensor operator. In any case, the intrinsic "spin" addition to the orbital angular momentum of an object can be expressed in terms of the Pauli–Lubanski pseudovector. Definitions Orbital 3d angular momentum For reference and background, two closely related forms of angular momentum are given. In classical mechanics, the orbital angular momentum of a particle with instantaneous three-dimensional position vector and momentum vector , is defined as the axial vector which has three components, that are systematically given by cyclic permutations of Cartesian directions (e.g. 
change to , to , to , repeat) A related definition is to conceive orbital angular momentum as a plane element. This can be achieved by replacing the cross product by the exterior product in the language of exterior algebra, and angular momentum becomes a contravariant second order antisymmetric tensor or writing and momentum vector , the components can be compactly abbreviated in tensor index notation where the indices and take the values 1, 2, 3. On the other hand, the components can be systematically displayed fully in a 3 × 3 antisymmetric matrix This quantity is additive, and for an isolated system, the total angular momentum of a system is conserved. Dynamic mass moment In classical mechanics, the three-dimensional quantity for a particle of mass m moving with velocity u has the dimensions of mass moment – length multiplied by mass. It is equal to the mass of the particle or system of particles multiplied by the distance from the space origin to the centre of mass (COM) at the time origin, as measured in the lab frame. There is no universal symbol, nor even a universal name, for this quantity. Different authors may denote it by other symbols if any (for example μ), may designate other names, and may define N to be the negative of what is used here. The above form has the advantage that it resembles the familiar Galilean transformation for position, which in turn is the non-relativistic boost transformation between inertial frames. This vector is also additive: for a system of particles, the vector sum is the resultant where the system's centre of mass position and velocity and total mass are respectively For an isolated system, N is conserved in time, which can be seen by differentiating with respect to time. The angular momentum L is a pseudovector, but N is an "ordinary" (polar) vector, and is therefore invariant under inversion. The resultant Ntot for a multiparticle system has the physical visualization that, whatever the complicated motion of all the particles are, they move in such a way that the system's COM moves in a straight line. This does not necessarily mean all particles "follow" the COM, nor that all particles all move in almost the same direction simultaneously, only that the collective motion of the particles is constrained in relation to the centre of mass. In special relativity, if the particle moves with velocity u relative to the lab frame, then where is the Lorentz factor and m is the mass (i.e. the rest mass) of the particle. The corresponding relativistic mass moment in terms of , , , , in the same lab frame is The Cartesian components are Special relativity Coordinate transformations for a boost in the x direction Consider a coordinate frame which moves with velocity relative to another frame F, along the direction of the coincident axes. The origins of the two coordinate frames coincide at times . The mass–energy and momentum components of an object, as well as position coordinates and time in frame are transformed to , , , and in according to the Lorentz transformations The Lorentz factor here applies to the velocity v, the relative velocity between the frames. This is not necessarily the same as the velocity u of an object. For the orbital 3-angular momentum L as a pseudovector, we have In the second terms of and , the and components of the cross product can be inferred by recognizing cyclic permutations of and with the components of , Now, is parallel to the relative velocity , and the other components and are perpendicular to . 
The parallel–perpendicular correspondence can be facilitated by splitting the entire 3-angular momentum pseudovector into components parallel (∥) and perpendicular (⊥) to v, in each frame, Then the component equations can be collected into the pseudovector equations Therefore, the components of angular momentum along the direction of motion do not change, while the components perpendicular do change. By contrast to the transformations of space and time, time and the spatial coordinates change along the direction of motion, while those perpendicular do not. These transformations are true for all , not just for motion along the axes. Considering as a tensor, we get a similar result where The boost of the dynamic mass moment along the direction is Collecting parallel and perpendicular components as before Again, the components parallel to the direction of relative motion do not change, those perpendicular do change. Vector transformations for a boost in any direction So far these are only the parallel and perpendicular decompositions of the vectors. The transformations on the full vectors can be constructed from them as follows (throughout here is a pseudovector for concreteness and compatibility with vector algebra). Introduce a unit vector in the direction of , given by . The parallel components are given by the vector projection of or into while the perpendicular component by vector rejection of L or N from n and the transformations are or reinstating , These are very similar to the Lorentz transformations of the electric field and magnetic field , see Classical electromagnetism and special relativity. Alternatively, starting from the vector Lorentz transformations of time, space, energy, and momentum, for a boost with velocity , inserting these into the definitions gives the transformations. 4d angular momentum as a bivector In relativistic mechanics, the COM boost and orbital 3-space angular momentum of a rotating object are combined into a four-dimensional bivector in terms of the four-position X and the four-momentum P of the object In components which are six independent quantities altogether. Since the components of and are frame-dependent, so is . Three components are those of the familiar classical 3-space orbital angular momentum, and the other three are the relativistic mass moment, multiplied by . The tensor is antisymmetric; The components of the tensor can be systematically displayed as a matrix in which the last array is a block matrix formed by treating N as a row vector which matrix transposes to the column vector NT, and as a 3 × 3 antisymmetric matrix. The lines are merely inserted to show where the blocks are. Again, this tensor is additive: the total angular momentum of a system is the sum of the angular momentum tensors for each constituent of the system: Each of the six components forms a conserved quantity when aggregated with the corresponding components for other objects and fields. The angular momentum tensor M is indeed a tensor, the components change according to a Lorentz transformation matrix Λ, as illustrated in the usual way by tensor index notation where, for a boost (without rotations) with normalized velocity , the Lorentz transformation matrix elements are and the covariant βi and contravariant βi components of β are the same since these are just parameters. In other words, one can Lorentz-transform the four position and four momentum separately, and then antisymmetrize those newly found components to obtain the angular momentum tensor in the new frame. 
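As a numerical check of the construction just described, one can build the antisymmetric tensor M^{αβ} = X^α P^β − X^β P^α (the convention used here) from a four-position and a four-momentum, boost both four-vectors along x, and verify that re-assembling M from the boosted vectors agrees with transforming M directly on both indices. The sketch below uses units with c = 1; the sign conventions are one common choice and the particular numbers are illustrative only.

```python
import numpy as np

def boost_x(beta):
    """Lorentz boost matrix along x acting on (ct, x, y, z), with c = 1."""
    g = 1.0 / np.sqrt(1.0 - beta**2)
    L = np.eye(4)
    L[0, 0] = L[1, 1] = g
    L[0, 1] = L[1, 0] = -g * beta
    return L

def angular_momentum_tensor(X, P):
    """M[a, b] = X^a P^b - X^b P^a."""
    return np.outer(X, P) - np.outer(P, X)

# Illustrative event and four-momentum of a unit-mass particle (c = 1).
u = np.array([0.3, 0.4, 0.1])                 # 3-velocity
gamma_u = 1.0 / np.sqrt(1.0 - u @ u)
X = np.array([2.0, 1.0, -3.0, 0.5])           # (t, x, y, z)
P = gamma_u * np.array([1.0, *u])             # (E, px, py, pz) for m = 1

M = angular_momentum_tensor(X, P)
Lam = boost_x(0.6)

# Transform the tensor on both indices, and compare with transforming X and P first.
M_direct = Lam @ M @ Lam.T
M_rebuilt = angular_momentum_tensor(Lam @ X, Lam @ P)
print(np.allclose(M_direct, M_rebuilt))        # True

# The purely spatial block holds the 3-angular momentum L = x cross p;
# the time-space components encode the mass moment N (up to sign and factors of c, by convention).
L3 = np.array([M[2, 3], M[3, 1], M[1, 2]])
print("L =", L3, "M[0, 1:] =", M[0, 1:])
```

The equality of the two constructions is just the statement that M is a genuine second-order tensor, which is the point made in the paragraph above.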
Rigid body rotation For a particle moving in a curve, the cross product of its angular velocity (a pseudovector) and position give its tangential velocity which cannot exceed a magnitude of , since in SR the translational velocity of any massive object cannot exceed the speed of light c. Mathematically this constraint is , the vertical bars denote the magnitude of the vector. If the angle between and is (assumed to be nonzero, otherwise u would be zero corresponding to no motion at all), then and the angular velocity is restricted by The maximum angular velocity of any massive object therefore depends on the size of the object. For a given |x|, the minimum upper limit occurs when and are perpendicular, so that and . For a rotating rigid body rotating with an angular velocity , the is tangential velocity at a point inside the object. For every point in the object, there is a maximum angular velocity. The angular velocity (pseudovector) is related to the angular momentum (pseudovector) through the moment of inertia tensor (the dot denotes tensor contraction on one index). The relativistic angular momentum is also limited by the size of the object. Spin in special relativity Four-spin A particle may have a "built-in" angular momentum independent of its motion, called spin and denoted s. It is a 3d pseudovector like orbital angular momentum L. The spin has a corresponding spin magnetic moment, so if the particle is subject to interactions (like electromagnetic fields or spin-orbit coupling), the direction of the particle's spin vector will change, but its magnitude will be constant. The extension to special relativity is straightforward. For some lab frame F, let F′ be the rest frame of the particle and suppose the particle moves with constant 3-velocity u. Then F′ is boosted with the same velocity and the Lorentz transformations apply as usual; it is more convenient to use . As a four-vector in special relativity, the four-spin S generally takes the usual form of a four-vector with a timelike component st and spatial components s, in the lab frame although in the rest frame of the particle, it is defined so the timelike component is zero and the spatial components are those of particle's actual spin vector, in the notation here s′, so in the particle's frame Equating norms leads to the invariant relation so if the magnitude of spin is given in the rest frame of the particle and lab frame of an observer, the magnitude of the timelike component st is given in the lab frame also. The covariant constraint on the spin is orthogonality to the velocity vector, In 3-vector notation for explicitness, the transformations are The inverse relations are the components of spin the lab frame, calculated from those in the particle's rest frame. Although the spin of the particle is constant for a given particle, it appears to be different in the lab frame. The Pauli–Lubanski pseudovector The Pauli–Lubanski pseudovector applies to both massive and massless particles. Spin–orbital decomposition In general, the total angular momentum tensor splits into an orbital component and a spin component, This applies to a particle, a mass–energy–momentum distribution, or field. Angular momentum of a mass–energy–momentum distribution Angular momentum from the mass–energy–momentum tensor The following is a summary from MTW. Throughout for simplicity, Cartesian coordinates are assumed. In special and general relativity, a distribution of mass–energy–momentum, e.g. 
a fluid, or a star, is described by the stress–energy tensor Tβγ (a second order tensor field depending on space and time). Since T00 is the energy density, Tj0 for j = 1, 2, 3 is the jth component of the object's 3d momentum per unit volume, and Tij form components of the stress tensor including shear and normal stresses, the orbital angular momentum density about the position 4-vector β is given by a 3rd order tensor This is antisymmetric in α and β. In special and general relativity, T is a symmetric tensor, but in other contexts (e.g., quantum field theory), it may not be. Let Ω be a region of 4d spacetime. The boundary is a 3d spacetime hypersurface ("spacetime surface volume" as opposed to "spatial surface area"), denoted ∂Ω where "∂" means "boundary". Integrating the angular momentum density over a 3d spacetime hypersurface yields the angular momentum tensor about , where dΣγ is the volume 1-form playing the role of a unit vector normal to a 2d surface in ordinary 3d Euclidean space. The integral is taken over the coordinates X, not . The integral within a spacelike surface of constant time is which collectively form the angular momentum tensor. Angular momentum about the centre of mass There is an intrinsic angular momentum in the centre-of-mass frame, in other words, the angular momentum about any event on the wordline of the object's center of mass. Since T00 is the energy density of the object, the spatial coordinates of the center of mass are given by Setting Y = XCOM obtains the orbital angular momentum density about the centre-of-mass of the object. Angular momentum conservation The conservation of energy–momentum is given in differential form by the continuity equation where ∂γ is the four-gradient. (In non-Cartesian coordinates and general relativity this would be replaced by the covariant derivative). The total angular momentum conservation is given by another continuity equation The integral equations use Gauss' theorem in spacetime Torque in special relativity The torque acting on a point-like particle is defined as the derivative of the angular momentum tensor given above with respect to proper time: or in tensor components: where F is the 4d force acting on the particle at the event X. As with angular momentum, torque is additive, so for an extended object one sums or integrates over the distribution of mass. Angular momentum as the generator of spacetime boosts and rotations The angular momentum tensor is the generator of boosts and rotations for the Lorentz group. Lorentz boosts can be parametrized by rapidity, and a 3d unit vector pointing in the direction of the boost, which combine into the "rapidity vector" where is the speed of the relative motion divided by the speed of light. Spatial rotations can be parametrized by the axis–angle representation, the angle and a unit vector pointing in the direction of the axis, which combine into an "axis-angle vector" Each unit vector only has two independent components, the third is determined from the unit magnitude. Altogether there are six parameters of the Lorentz group; three for rotations and three for boosts. The (homogeneous) Lorentz group is 6-dimensional. 
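The parametrization just described, a rapidity vector for boosts and an axis-angle vector for rotations, can be checked numerically by exponentiating the corresponding 4 × 4 generator matrices, anticipating the combined generator introduced in the next paragraph. The sketch below adopts one common sign convention (the boost generator enters the exponent with a minus sign); other texts differ by signs, and the numbers are illustrative.

```python
import numpy as np
from scipy.linalg import expm

# Boost generators K_i and rotation generators J_i as 4x4 matrices acting on (ct, x, y, z).
K = np.zeros((3, 4, 4))
for i in range(3):
    K[i, 0, i + 1] = K[i, i + 1, 0] = 1.0

J = np.zeros((3, 4, 4))
J[0, 2, 3], J[0, 3, 2] = -1.0, 1.0   # rotation about x
J[1, 3, 1], J[1, 1, 3] = -1.0, 1.0   # rotation about y
J[2, 1, 2], J[2, 2, 1] = -1.0, 1.0   # rotation about z

def lorentz(zeta, theta):
    """General Lorentz transformation from a rapidity vector and an axis-angle vector."""
    gen = -sum(z * Ki for z, Ki in zip(zeta, K)) + sum(t * Ji for t, Ji in zip(theta, J))
    return expm(gen)

# A pure boost along x with rapidity zeta = artanh(beta) reproduces the standard boost matrix.
beta = 0.6
zeta = np.arctanh(beta)
Lam = lorentz([zeta, 0, 0], [0, 0, 0])
g = 1 / np.sqrt(1 - beta**2)
expected = np.array([[g, -g * beta, 0, 0],
                     [-g * beta, g, 0, 0],
                     [0, 0, 1, 0],
                     [0, 0, 0, 1]])
print(np.allclose(Lam, expected))   # True
```

Three rapidity parameters and three rotation angles together give the six parameters of the homogeneous Lorentz group, matching the count in the paragraph above.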
The boost generators and rotation generators can be combined into one generator for Lorentz transformations; the antisymmetric angular momentum tensor, with components and correspondingly, the boost and rotation parameters are collected into another antisymmetric four-dimensional matrix , with entries: where the summation convention over the repeated indices i, j, k has been used to prevent clumsy summation signs. The general Lorentz transformation is then given by the matrix exponential and the summation convention has been applied to the repeated matrix indices α and β. The general Lorentz transformation Λ is the transformation law for any four vector A = (A0, A1, A2, A3), giving the components of this same 4-vector in another inertial frame of reference The angular momentum tensor forms 6 of the 10 generators of the Poincaré group, the other four are the components of the four-momentum for spacetime translations. Angular momentum in general relativity The angular momentum of test particles in a gently curved background is more complicated in GR but can be generalized in a straightforward manner. If the Lagrangian is expressed with respect to angular variables as the generalized coordinates, then the angular momenta are the functional derivatives of the Lagrangian with respect to the angular velocities. Referred to Cartesian coordinates, these are typically given by the off-diagonal shear terms of the spacelike part of the stress–energy tensor. If the spacetime supports a Killing vector field tangent to a circle, then the angular momentum about the axis is conserved. One also wishes to study the effect of a compact, rotating mass on its surrounding spacetime. The prototype solution is of the Kerr metric, which describes the spacetime around an axially symmetric black hole. It is obviously impossible to draw a point on the event horizon of a Kerr black hole and watch it circle around. However, the solution does support a constant of the system that acts mathematically similarly to an angular momentum. See also References Further reading Special relativity General relativity External links Angular momentum Dynamics (mechanics) Angular momentum Rotation Angular momentum
Causality
Causality is an influence by which one event, process, state, or object (a cause) contributes to the production of another event, process, state, or object (an effect) where the cause is at least partly responsible for the effect, and the effect is at least partly dependent on the cause. In general, a process can have multiple causes, which are also said to be causal factors for it, and all lie in its past. An effect can in turn be a cause of, or causal factor for, many other effects, which all lie in its future. Some writers have held that causality is metaphysically prior to notions of time and space. Causality is an abstraction that indicates how the world progresses. As such it is a basic concept; it is more apt to be an explanation of other concepts of progression than something to be explained by other more fundamental concepts. The concept is like those of agency and efficacy. For this reason, a leap of intuition may be needed to grasp it. Accordingly, causality is implicit in the structure of ordinary language, as well as explicit in the language of scientific causal notation. In English studies of Aristotelian philosophy, the word "cause" is used as a specialized technical term, the translation of Aristotle's term αἰτία, by which Aristotle meant "explanation" or "answer to a 'why' question". Aristotle categorized the four types of answers as material, formal, efficient, and final "causes". In this case, the "cause" is the explanans for the explanandum, and failure to recognize that different kinds of "cause" are being considered can lead to futile debate. Of Aristotle's four explanatory modes, the one nearest to the concerns of the present article is the "efficient" one. David Hume, as part of his opposition to rationalism, argued that pure reason alone cannot prove the reality of efficient causality; instead, he appealed to custom and mental habit, observing that all human knowledge derives solely from experience. The topic of causality remains a staple in contemporary philosophy. Concept Metaphysics The nature of cause and effect is a concern of the subject known as metaphysics. Kant thought that time and space were notions prior to human understanding of the progress or evolution of the world, and he also recognized the priority of causality. But he did not have the understanding that came with knowledge of Minkowski geometry and the special theory of relativity, that the notion of causality can be used as a prior foundation from which to construct notions of time and space. Ontology A general metaphysical question about cause and effect is: "what kind of entity can be a cause, and what kind of entity can be an effect?" One viewpoint on this question is that cause and effect are of one and the same kind of entity, causality being an asymmetric relation between them. That is to say, it would make good sense grammatically to say either "A is the cause and B the effect" or "B is the cause and A the effect", though only one of those two can be actually true. In this view, one opinion, proposed as a metaphysical principle in process philosophy, is that every cause and every effect is respectively some process, event, becoming, or happening. An example is 'his tripping over the step was the cause, and his breaking his ankle the effect'. Another view is that causes and effects are 'states of affairs', with the exact natures of those entities being more loosely defined than in process philosophy. 
Another viewpoint on this question is the more classical one, that a cause and its effect can be of different kinds of entity. For example, in Aristotle's efficient causal explanation, an action can be a cause while an enduring object is its effect. For example, the generative actions of his parents can be regarded as the efficient cause, with Socrates being the effect, Socrates being regarded as an enduring object, in philosophical tradition called a 'substance', as distinct from an action. Epistemology Since causality is a subtle metaphysical notion, considerable intellectual effort, along with exhibition of evidence, is needed to establish knowledge of it in particular empirical circumstances. According to David Hume, the human mind is unable to perceive causal relations directly. On this ground, the scholar distinguished between the regularity view of causality and the counterfactual notion. According to the counterfactual view, X causes Y if and only if, without X, Y would not exist. Hume interpreted the latter as an ontological view, i.e., as a description of the nature of causality but, given the limitations of the human mind, advised using the former (stating, roughly, that X causes Y if and only if the two events are spatiotemporally conjoined, and X precedes Y) as an epistemic definition of causality. We need an epistemic concept of causality in order to distinguish between causal and noncausal relations. The contemporary philosophical literature on causality can be divided into five big approaches to causality. These include the (mentioned above) regularity, probabilistic, counterfactual, mechanistic, and manipulationist views. The five approaches can be shown to be reductive, i.e., define causality in terms of relations of other types. According to this reading, they define causality in terms of, respectively, empirical regularities (constant conjunctions of events), changes in conditional probabilities, counterfactual conditions, mechanisms underlying causal relations, and invariance under intervention. Geometrical significance Causality has the properties of antecedence and contiguity. These are topological, and are ingredients for space-time geometry. As developed by Alfred Robb, these properties allow the derivation of the notions of time and space. Max Jammer writes "the Einstein postulate ... opens the way to a straightforward construction of the causal topology ... of Minkowski space." Causal efficacy propagates no faster than light. Thus, the notion of causality is metaphysically prior to the notions of time and space. In practical terms, this is because use of the relation of causality is necessary for the interpretation of empirical experiments. Interpretation of experiments is needed to establish the physical and geometrical notions of time and space. Volition The deterministic world-view holds that the history of the universe can be exhaustively represented as a progression of events following one after the other as cause and effect. Incompatibilism holds that determinism is incompatible with free will, so if determinism is true, "free will" does not exist. Compatibilism, on the other hand, holds that determinism is compatible with, or even necessary for, free will. Necessary and sufficient causes Causes may sometimes be distinguished into two types: necessary and sufficient. A third type of causation, which requires neither necessity nor sufficiency, but which contributes to the effect, is called a "contributory cause". 
Necessary causes If x is a necessary cause of y, then the presence of y necessarily implies the prior occurrence of x. The presence of x, however, does not imply that y will occur. Sufficient causes If x is a sufficient cause of y, then the presence of x necessarily implies the subsequent occurrence of y. However, another cause z may alternatively cause y. Thus the presence of y does not imply the prior occurrence of x. Contributory causes For some specific effect, in a singular case, a factor that is a contributory cause is one among several co-occurrent causes. It is implicit that all of them are contributory. For the specific effect, in general, there is no implication that a contributory cause is necessary, though it may be so. In general, a factor that is a contributory cause is not sufficient, because it is by definition accompanied by other causes, which would not count as causes if it were sufficient. For the specific effect, a factor that is on some occasions a contributory cause might on some other occasions be sufficient, but on those other occasions it would not be merely contributory. J. L. Mackie argues that usual talk of "cause" in fact refers to INUS conditions (insufficient but non-redundant parts of a condition which is itself unnecessary but sufficient for the occurrence of the effect). An example is a short circuit as a cause for a house burning down. Consider the collection of events: the short circuit, the proximity of flammable material, and the absence of firefighters. Together these are unnecessary but sufficient to the house's burning down (since many other collections of events certainly could have led to the house burning down, for example shooting the house with a flamethrower in the presence of oxygen and so forth). Within this collection, the short circuit is an insufficient (since the short circuit by itself would not have caused the fire) but non-redundant (because the fire would not have happened without it, everything else being equal) part of a condition which is itself unnecessary but sufficient for the occurrence of the effect. So, the short circuit is an INUS condition for the occurrence of the house burning down. Contrasted with conditionals Conditional statements are not statements of causality. An important distinction is that statements of causality require the antecedent to precede or coincide with the consequent in time, whereas conditional statements do not require this temporal order. Confusion commonly arises since many different statements in English may be presented using "If ..., then ..." form (and, arguably, because this form is far more commonly used to make a statement of causality). The two types of statements are distinct, however. For example, all of the following statements are true when interpreting "If ..., then ..." as the material conditional: If Barack Obama is president of the United States in 2011, then Germany is in Europe. If George Washington is president of the United States in 2011, then . The first is true since both the antecedent and the consequent are true. The second is true in sentential logic and indeterminate in natural language, regardless of the consequent statement that follows, because the antecedent is false. The ordinary indicative conditional has somewhat more structure than the material conditional. For instance, although the first is the closest, neither of the preceding two statements seems true as an ordinary indicative reading. 
But the sentence: If Shakespeare of Stratford-on-Avon did not write Macbeth, then someone else did. intuitively seems to be true, even though there is no straightforward causal relation in this hypothetical situation between Shakespeare's not writing Macbeth and someone else's actually writing it. Another sort of conditional, the counterfactual conditional, has a stronger connection with causality, yet even counterfactual statements are not all examples of causality. Consider the following two statements: If A were a triangle, then A would have three sides. If switch S were thrown, then bulb B would light. In the first case, it would be incorrect to say that A's being a triangle caused it to have three sides, since the relationship between triangularity and three-sidedness is that of definition. The property of having three sides actually determines A's state as a triangle. Nonetheless, even when interpreted counterfactually, the first statement is true. An early version of Aristotle's "four cause" theory is described as recognizing "essential cause". In this version of the theory, that the closed polygon has three sides is said to be the "essential cause" of its being a triangle. This use of the word 'cause' is of course now far obsolete. Nevertheless, it is within the scope of ordinary language to say that it is essential to a triangle that it has three sides. A full grasp of the concept of conditionals is important to understanding the literature on causality. In everyday language, loose conditional statements are often enough made, and need to be interpreted carefully. Questionable cause Fallacies of questionable cause, also known as causal fallacies, non-causa pro causa (Latin for "non-cause for cause"), or false cause, are informal fallacies where a cause is incorrectly identified. Theories Counterfactual theories Counterfactual theories define causation in terms of a counterfactual relation, and can often be seen as "floating" their account of causality on top of an account of the logic of counterfactual conditionals. Counterfactual theories reduce facts about causation to facts about what would have been true under counterfactual circumstances. The idea is that causal relations can be framed in the form of "Had C not occurred, E would not have occurred." This approach can be traced back to David Hume's definition of the causal relation as that "where, if the first object had not been, the second never had existed." More full-fledged analysis of causation in terms of counterfactual conditionals only came in the 20th century after development of the possible world semantics for the evaluation of counterfactual conditionals. In his 1973 paper "Causation," David Lewis proposed the following definition of the notion of causal dependence: An event E causally depends on C if, and only if, (i) if C had occurred, then E would have occurred, and (ii) if C had not occurred, then E would not have occurred. Causation is then analyzed in terms of counterfactual dependence. That is, C causes E if and only if there exists a sequence of events C, D1, D2, ... Dk, E such that each event in the sequence counterfactually depends on the previous. This chain of causal dependence may be called a mechanism. Note that the analysis does not purport to explain how we make causal judgements or how we reason about causation, but rather to give a metaphysical account of what it is for there to be a causal relation between some pair of events. 
If correct, the analysis has the power to explain certain features of causation. Knowing that causation is a matter of counterfactual dependence, we may reflect on the nature of counterfactual dependence to account for the nature of causation. For example, in his paper "Counterfactual Dependence and Time's Arrow," Lewis sought to account for the time-directedness of counterfactual dependence in terms of the semantics of the counterfactual conditional. If correct, this theory can serve to explain a fundamental part of our experience, which is that we can causally affect the future but not the past. One challenge for the counterfactual account is overdetermination, whereby an effect has multiple causes. For instance, suppose Alice and Bob both throw bricks at a window and it breaks. If Alice hadn't thrown the brick, then it still would have broken, suggesting that Alice wasn't a cause; however, intuitively, Alice did cause the window to break. The Halpern-Pearl definitions of causality take account of examples like these. The first and third Halpern-Pearl conditions are easiest to understand: AC1 requires that Alice threw the brick and the window broke in the actual world. AC3 requires that Alice throwing the brick is a minimal cause (cf. blowing a kiss and throwing a brick). Taking the "updated" version of AC2(a), the basic idea is that we have to find a set of variables and settings thereof such that preventing Alice from throwing a brick also stops the window from breaking. One way to do this is to stop Bob from throwing the brick. Finally, for AC2(b), we have to hold things as per AC2(a) and show that Alice throwing the brick breaks the window. (The full definition is a little more involved, involving checking all subsets of variables.) Probabilistic causation Interpreting causation as a deterministic relation means that if A causes B, then A must always be followed by B. In this sense, war does not cause deaths, nor does smoking cause cancer or emphysema. As a result, many turn to a notion of probabilistic causation. Informally, A ("The person is a smoker") probabilistically causes B ("The person has now or will have cancer at some time in the future"), if the information that A occurred increases the likelihood of B's occurrence. Formally, P{B|A} ≥ P{B}, where P{B|A} is the conditional probability that B will occur given the information that A occurred, and P{B} is the probability that B will occur having no knowledge whether A did or did not occur. This intuitive condition is not adequate as a definition for probabilistic causation because it is too general and thus does not meet our intuitive notion of cause and effect. For example, if A denotes the event "The person is a smoker," B denotes the event "The person now has or will have cancer at some time in the future" and C denotes the event "The person now has or will have emphysema some time in the future," then the following three relationships hold: P{B|A} ≥ P{B}, P{C|A} ≥ P{C} and P{B|C} ≥ P{B}. The last relationship states that knowing that the person has emphysema increases the likelihood that he will have cancer. The reason for this is that having the information that the person has emphysema increases the likelihood that the person is a smoker, thus indirectly increasing the likelihood that the person will have cancer. However, we would not want to conclude that having emphysema causes cancer. Thus, we need additional conditions such as the temporal relationship of A to B and a rational explanation as to the mechanism of action.
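The smoker example above is easy to reproduce in a small simulation: when smoking raises the probability of both cancer and emphysema, the inequality P{B|C} ≥ P{B} comes out true even though emphysema does not cause cancer, which is exactly why the inequality alone cannot serve as a definition. All probabilities in the sketch below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 1_000_000

smoker = rng.random(n) < 0.3                               # A: the person is a smoker
cancer = rng.random(n) < np.where(smoker, 0.20, 0.05)      # B: raised by smoking
emphysema = rng.random(n) < np.where(smoker, 0.30, 0.02)   # C: also raised by smoking, not by cancer

p_b = cancer.mean()
p_b_given_a = cancer[smoker].mean()
p_b_given_c = cancer[emphysema].mean()

print(f"P(B)     = {p_b:.3f}")
print(f"P(B | A) = {p_b_given_a:.3f}  >= P(B), as expected for a genuine cause")
print(f"P(B | C) = {p_b_given_c:.3f}  >= P(B), even though C does not cause B")
```

The spurious inequality P(B | C) ≥ P(B) arises purely because C is informative about the common cause A, which is the point the paragraph above makes in prose.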
It is hard to quantify this last requirement and thus different authors prefer somewhat different definitions. Causal calculus When experimental interventions are infeasible or illegal, the derivation of a cause-and-effect relationship from observational studies must rest on some qualitative theoretical assumptions, for example, that symptoms do not cause diseases, usually expressed in the form of missing arrows in causal graphs such as Bayesian networks or path diagrams. The theory underlying these derivations relies on the distinction between conditional probabilities, as in P(cancer | smoking), and interventional probabilities, as in P(cancer | do(smoking)). The former reads: "the probability of finding cancer in a person known to smoke, having started, unforced by the experimenter, to do so at an unspecified time in the past", while the latter reads: "the probability of finding cancer in a person forced by the experimenter to smoke at a specified time in the past". The former is a statistical notion that can be estimated by observation with negligible intervention by the experimenter, while the latter is a causal notion which is estimated in an experiment with an important controlled randomized intervention. It is specifically characteristic of quantal phenomena that observations defined by incompatible variables always involve important intervention by the experimenter, as described quantitatively by the observer effect. In classical thermodynamics, processes are initiated by interventions called thermodynamic operations. In other branches of science, for example astronomy, the experimenter can often observe with negligible intervention. The theory of "causal calculus" (also known as do-calculus, Judea Pearl's Causal Calculus, Calculus of Actions) permits one to infer interventional probabilities from conditional probabilities in causal Bayesian networks with unmeasured variables. One very practical result of this theory is the characterization of confounding variables, namely, a sufficient set of variables that, if adjusted for, would yield the correct causal effect between variables of interest. It can be shown that a sufficient set for estimating the causal effect of X on Y is any set of non-descendants of X that d-separates X from Y after removing all arrows emanating from X. This criterion, called "backdoor", provides a mathematical definition of "confounding" and helps researchers identify accessible sets of variables worthy of measurement. Structure learning While derivations in causal calculus rely on the structure of the causal graph, parts of the causal structure can, under certain assumptions, be learned from statistical data. The basic idea goes back to Sewall Wright's 1921 work on path analysis. A "recovery" algorithm was developed by Rebane and Pearl (1987) which rests on Wright's distinction between the three possible types of causal substructures allowed in a directed acyclic graph (DAG): type 1, a chain X → Y → Z; type 2, a fork (common cause) X ← Y → Z; and type 3, a collider (common effect) X → Y ← Z. Type 1 and type 2 represent the same statistical dependencies (i.e., X and Z are independent given Y) and are, therefore, indistinguishable within purely cross-sectional data. Type 3, however, can be uniquely identified, since X and Z are marginally independent and all other pairs are dependent. Thus, while the skeletons (the graphs stripped of arrows) of these three triplets are identical, the directionality of the arrows is partially identifiable. The same distinction applies when X and Z have common ancestors, except that one must first condition on those ancestors.
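The distinguishability claim above can be checked on synthetic data: in a collider X → Y ← Z, X and Z are marginally independent but become dependent once Y is fixed, whereas in a chain X → Y → Z they are marginally dependent and independent given Y. The sketch below uses invented binary mechanisms and a simple correlation readout in place of a formal independence test.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 500_000

def corr(a, b):
    """Absolute Pearson correlation, used as a crude dependence indicator."""
    return abs(np.corrcoef(a, b)[0, 1])

# Type 3 (collider): X -> Y <- Z
X = rng.random(n) < 0.5
Z = rng.random(n) < 0.5
Y = (X & Z) | (rng.random(n) < 0.1)        # Y depends on both parents, plus noise

print("collider, marginal |corr(X, Z)|:    ", round(corr(X, Z), 3))        # close to 0
print("collider, |corr(X, Z)| given Y = 1: ", round(corr(X[Y], Z[Y]), 3))  # clearly nonzero

# Type 1 (chain): X -> Y -> Z
X = rng.random(n) < 0.5
Y = X ^ (rng.random(n) < 0.2)              # noisy copy of X
Z = Y ^ (rng.random(n) < 0.2)              # noisy copy of Y

print("chain, marginal |corr(X, Z)|:       ", round(corr(X, Z), 3))        # clearly nonzero
print("chain, |corr(X, Z)| given Y = 1:    ", round(corr(X[Y], Z[Y]), 3))  # close to 0
```

Only the collider leaves this distinctive statistical fingerprint, which is why algorithms of the kind described next can orient some, but not all, of the arrows from observational data alone.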
Algorithms have been developed to systematically determine the skeleton of the underlying graph and, then, orient all arrows whose directionality is dictated by the conditional independencies observed. Alternative methods of structure learning search through the many possible causal structures among the variables, and remove ones which are strongly incompatible with the observed correlations. In general this leaves a set of possible causal relations, which should then be tested by analyzing time series data or, preferably, designing appropriately controlled experiments. In contrast with Bayesian networks, path analysis (and its generalization, structural equation modeling) serves better to estimate a known causal effect or to test a causal model than to generate causal hypotheses. For nonexperimental data, causal direction can often be inferred if information about time is available. This is because (according to many, though not all, theories) causes must precede their effects temporally. This can be determined by statistical time series models, for instance, or with a statistical test based on the idea of Granger causality, or by direct experimental manipulation. The use of temporal data can permit statistical tests of a pre-existing theory of causal direction. For instance, our degree of confidence in the direction and nature of causality is much greater when supported by cross-correlations, ARIMA models, or cross-spectral analysis using vector time series data than by cross-sectional data. Derivation theories Nobel laureate Herbert A. Simon and philosopher Nicholas Rescher claim that the asymmetry of the causal relation is unrelated to the asymmetry of any mode of implication that contraposes. Rather, a causal relation is not a relation between values of variables, but a function of one variable (the cause) onto another (the effect). So, given a system of equations, and a set of variables appearing in these equations, we can introduce an asymmetric relation among individual equations and variables that corresponds perfectly to our commonsense notion of a causal ordering. The system of equations must have certain properties, most importantly, if some values are chosen arbitrarily, the remaining values will be determined uniquely through a path of serial discovery that is perfectly causal. They postulate that the inherent serialization of such a system of equations may correctly capture causation in all empirical fields, including physics and economics. Manipulation theories Some theorists have equated causality with manipulability. Under these theories, x causes y only in the case that one can change x in order to change y. This coincides with commonsense notions of causation, since often we ask causal questions in order to change some feature of the world. For instance, we are interested in knowing the causes of crime so that we might find ways of reducing it. These theories have been criticized on two primary grounds. First, theorists complain that these accounts are circular. Attempting to reduce causal claims to manipulation requires that manipulation is more basic than causal interaction. But describing manipulations in non-causal terms has proven substantially difficult. The second criticism centers around concerns of anthropocentrism. It seems to many people that causality is some existing relationship in the world that we can harness for our desires. If causality is identified with our manipulation, then this intuition is lost.
In this sense, it makes humans overly central to interactions in the world. Some attempts to defend manipulability theories are recent accounts that do not claim to reduce causality to manipulation. These accounts use manipulation as a sign or feature in causation without claiming that manipulation is more fundamental than causation. Process theories Some theorists are interested in distinguishing between causal processes and non-causal processes (Russell 1948; Salmon 1984). These theorists often want to distinguish between a process and a pseudo-process. As an example, a ball moving through the air (a process) is contrasted with the motion of a shadow (a pseudo-process). The former is causal in nature while the latter is not. Salmon (1984) claims that causal processes can be identified by their ability to transmit an alteration over space and time. An alteration of the ball (a mark by a pen, perhaps) is carried with it as the ball goes through the air. On the other hand, an alteration of the shadow (insofar as it is possible) will not be transmitted by the shadow as it moves along. These theorists claim that the important concept for understanding causality is not causal relationships or causal interactions, but rather identifying causal processes. The former notions can then be defined in terms of causal processes. A subgroup of the process theories is the mechanistic view on causality. It states that causal relations supervene on mechanisms. While the notion of mechanism is understood differently, the definition put forward by the group of philosophers referred to as the 'New Mechanists' dominate the literature. Fields Science For the scientific investigation of efficient causality, the cause and effect are each best conceived of as temporally transient processes. Within the conceptual frame of the scientific method, an investigator sets up several distinct and contrasting temporally transient material processes that have the structure of experiments, and records candidate material responses, normally intending to determine causality in the physical world. For instance, one may want to know whether a high intake of carrots causes humans to develop the bubonic plague. The quantity of carrot intake is a process that is varied from occasion to occasion. The occurrence or non-occurrence of subsequent bubonic plague is recorded. To establish causality, the experiment must fulfill certain criteria, only one example of which is mentioned here. For example, instances of the hypothesized cause must be set up to occur at a time when the hypothesized effect is relatively unlikely in the absence of the hypothesized cause; such unlikelihood is to be established by empirical evidence. A mere observation of a correlation is not nearly adequate to establish causality. In nearly all cases, establishment of causality relies on repetition of experiments and probabilistic reasoning. Hardly ever is causality established more firmly than as more or less probable. It is most convenient for establishment of causality if the contrasting material states of affairs are precisely matched, except for only one variable factor, perhaps measured by a real number. Physics One has to be careful in the use of the word cause in physics. Properly speaking, the hypothesized cause and the hypothesized effect are each temporally transient processes. For example, force is a useful concept for the explanation of acceleration, but force is not by itself a cause. More is needed. 
For example, a temporally transient process might be characterized by a definite change of force at a definite time. Such a process can be regarded as a cause. Causality is not inherently implied in equations of motion, but postulated as an additional constraint that needs to be satisfied (i.e. a cause always precedes its effect). This constraint has mathematical implications such as the Kramers-Kronig relations. Causality is one of the most fundamental and essential notions of physics. Causal efficacy cannot 'propagate' faster than light. Otherwise, reference coordinate systems could be constructed (using the Lorentz transform of special relativity) in which an observer would see an effect precede its cause (i.e. the postulate of causality would be violated). Causal notions appear in the context of the flow of mass-energy. Any actual process has causal efficacy that can propagate no faster than light. In contrast, an abstraction has no causal efficacy. Its mathematical expression does not propagate in the ordinary sense of the word, though it may refer to virtual or nominal 'velocities' with magnitudes greater than that of light. For example, wave packets are mathematical objects that have group velocity and phase velocity. The energy of a wave packet travels at the group velocity (under normal circumstances); since energy has causal efficacy, the group velocity cannot be faster than the speed of light. The phase of a wave packet travels at the phase velocity; since phase is not causal, the phase velocity of a wave packet can be faster than light. Causal notions are important in general relativity to the extent that the existence of an arrow of time demands that the universe's semi-Riemannian manifold be orientable, so that "future" and "past" are globally definable quantities. Engineering A causal system is a system with output and internal states that depends only on the current and previous input values. A system that has some dependence on input values from the future (in addition to possible past or current input values) is termed an acausal system, and a system that depends solely on future input values is an anticausal system. Acausal filters, for example, can only exist as postprocessing filters, because these filters can extract future values from a memory buffer or a file. We have to be very careful with causality in physics and engineering. Cellier, Elmqvist, and Otter describe causality forming the basis of physics as a misconception, because physics is essentially acausal. In their article they cite a simple example: "The relationship between voltage across and current through an electrical resistor can be described by Ohm's law: V = IR, yet, whether it is the current flowing through the resistor that causes a voltage drop, or whether it is the difference between the electrical potentials on the two wires that causes current to flow is, from a physical perspective, a meaningless question". In fact, if we explain cause-effect using the law, we need two explanations to describe an electrical resistor: as a voltage-drop-causer or as a current-flow-causer. There is no physical experiment in the world that can distinguish between action and reaction. Biology, medicine and epidemiology Austin Bradford Hill built upon the work of Hume and Popper and suggested in his paper "The Environment and Disease: Association or Causation?" 
that aspects of an association such as strength, consistency, specificity, and temporality be considered in attempting to distinguish causal from noncausal associations in the epidemiological situation. (See Bradford Hill criteria.) He did not note however, that temporality is the only necessary criterion among those aspects. Directed acyclic graphs (DAGs) are increasingly used in epidemiology to help enlighten causal thinking. Psychology Psychologists take an empirical approach to causality, investigating how people and non-human animals detect or infer causation from sensory information, prior experience and innate knowledge. Attribution: Attribution theory is the theory concerning how people explain individual occurrences of causation. Attribution can be external (assigning causality to an outside agent or force—claiming that some outside thing motivated the event) or internal (assigning causality to factors within the person—taking personal responsibility or accountability for one's actions and claiming that the person was directly responsible for the event). Taking causation one step further, the type of attribution a person provides influences their future behavior. The intention behind the cause or the effect can be covered by the subject of action. See also accident; blame; intent; and responsibility. Causal powers Whereas David Hume argued that causes are inferred from non-causal observations, Immanuel Kant claimed that people have innate assumptions about causes. Within psychology, Patricia Cheng attempted to reconcile the Humean and Kantian views. According to her power PC theory, people filter observations of events through an intuition that causes have the power to generate (or prevent) their effects, thereby inferring specific cause-effect relations. Causation and salience Our view of causation depends on what we consider to be the relevant events. Another way to view the statement, "Lightning causes thunder" is to see both lightning and thunder as two perceptions of the same event, viz., an electric discharge that we perceive first visually and then aurally. Naming and causality David Sobel and Alison Gopnik from the Psychology Department of UC Berkeley designed a device known as the blicket detector which would turn on when an object was placed on it. Their research suggests that "even young children will easily and swiftly learn about a new causal power of an object and spontaneously use that information in classifying and naming the object." Perception of launching events Some researchers such as Anjan Chatterjee at the University of Pennsylvania and Jonathan Fugelsang at the University of Waterloo are using neuroscience techniques to investigate the neural and psychological underpinnings of causal launching events in which one object causes another object to move. Both temporal and spatial factors can be manipulated. See Causal Reasoning (Psychology) for more information. Statistics and economics Statistics and economics usually employ pre-existing data or experimental data to infer causality by regression methods. The body of statistical techniques involves substantial use of regression analysis. 
Typically a linear relationship such as yi = a0 + a1x1,i + a2x2,i + ... + akxk,i + ei is postulated, in which yi is the ith observation of the dependent variable (hypothesized to be the caused variable), xj,i for j = 1, ..., k is the ith observation on the jth independent variable (hypothesized to be a causative variable), and ei is the error term for the ith observation (containing the combined effects of all other causative variables, which must be uncorrelated with the included independent variables). If there is reason to believe that none of the xj is caused by y, then estimates of the coefficients aj are obtained. If the null hypothesis that aj = 0 is rejected, then the alternative hypothesis that aj ≠ 0, and equivalently that xj causes y, cannot be rejected. On the other hand, if the null hypothesis that aj = 0 cannot be rejected, then equivalently the hypothesis of no causal effect of xj on y cannot be rejected. Here the notion of causality is one of contributory causality as discussed above: If the true value aj ≠ 0, then a change in xj will result in a change in y unless some other causative variable(s), either included in the regression or implicit in the error term, change in such a way as to exactly offset its effect; thus a change in xj is not sufficient to change y. Likewise, a change in xj is not necessary to change y, because a change in y could be caused by something implicit in the error term (or by some other causative explanatory variable included in the model). The above way of testing for causality requires belief that there is no reverse causation, in which y would cause xj. This belief can be established in one of several ways. First, the variable xj may be a non-economic variable: for example, if rainfall amount is hypothesized to affect the futures price y of some agricultural commodity, it is impossible that in fact the futures price affects rainfall amount (provided that cloud seeding is never attempted). Second, the instrumental variables technique may be employed to remove any reverse causation by introducing a role for other variables (instruments) that are known to be unaffected by the dependent variable. Third, the principle that effects cannot precede causes can be invoked, by including on the right side of the regression only variables that precede in time the dependent variable; this principle is invoked, for example, in testing for Granger causality and in its multivariate analog, vector autoregression, both of which control for lagged values of the dependent variable while testing for causal effects of lagged independent variables. Regression analysis controls for other relevant variables by including them as regressors (explanatory variables). This helps to avoid false inferences of causality due to the presence of a third, underlying, variable that influences both the potentially causative variable and the potentially caused variable: its effect on the potentially caused variable is captured by directly including it in the regression, so that effect will not be picked up as an indirect effect through the potentially causative variable of interest. Given the above procedures, coincidental (as opposed to causal) correlation can be probabilistically rejected if data samples are large and if regression results pass cross-validation tests showing that the correlations hold even for data that were not used in the regression. Asserting with certitude that a common-cause is absent and the regression represents the true causal structure is in principle impossible. The problem of omitted variable bias, however, has to be balanced against the risk of inserting causal colliders, in which the addition of a new variable (a common effect of the causative and caused variables) induces a spurious correlation between them via Berkson's paradox.
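A minimal regression sketch of the coefficient test described above, on simulated data where one regressor truly contributes to y and the other does not; the data-generating values are invented for illustration, and the test is an ordinary t-test on each coefficient.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5_000

# Simulated data: x1 genuinely contributes to y (a1 = 2), x2 does not (a2 = 0).
x1 = rng.normal(size=n)
x2 = rng.normal(size=n)
y = 1.0 + 2.0 * x1 + 0.0 * x2 + rng.normal(scale=1.5, size=n)

X = np.column_stack([np.ones(n), x1, x2])      # design matrix with intercept
beta, *_ = np.linalg.lstsq(X, y, rcond=None)   # OLS estimates of a0, a1, a2
resid = y - X @ beta
s2 = resid @ resid / (n - X.shape[1])          # residual variance
cov = s2 * np.linalg.inv(X.T @ X)              # covariance of the estimates
t_stats = beta / np.sqrt(np.diag(cov))

for name, b, t in zip(["a0", "a1", "a2"], beta, t_stats):
    print(f"{name}: estimate = {b:+.3f}, t = {t:+.1f}")
# a1 comes out with a very large t-statistic (the null a1 = 0 is rejected);
# a2 does not, so no causal effect of x2 on y can be claimed from this regression.
```

As the text stresses, surviving such a test only means that the hypothesis of a contributory effect cannot be rejected under the maintained assumptions (no reverse causation, no omitted common cause); it does not by itself establish causation.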
Apart from constructing statistical models of observational and experimental data, economists use axiomatic (mathematical) models to infer and represent causal mechanisms. Highly abstract theoretical models that isolate and idealize one mechanism dominate microeconomics. In macroeconomics, economists use broad mathematical models that are calibrated on historical data. A subgroup of calibrated models, dynamic stochastic general equilibrium (DSGE) models, are employed to represent (in a simplified way) the whole economy and simulate changes in fiscal and monetary policy. Management For quality control in manufacturing in the 1960s, Kaoru Ishikawa developed a cause and effect diagram, known as an Ishikawa diagram or fishbone diagram. The diagram categorizes causes, for example into six main categories, and these categories are then sub-divided. Ishikawa's method identifies "causes" in brainstorming sessions conducted among various groups involved in the manufacturing process. These groups can then be labeled as categories in the diagrams. The use of these diagrams has now spread beyond quality control, and they are used in other areas of management and in design and engineering. Ishikawa diagrams have been criticized for failing to make the distinction between necessary conditions and sufficient conditions. It seems that Ishikawa was not even aware of this distinction. Humanities History In the discussion of history, events are sometimes considered as if in some way being agents that can then bring about other historical events. Thus, the combination of poor harvests, the hardships of the peasants, high taxes, lack of representation of the people, and kingly ineptitude are among the causes of the French Revolution. This is a somewhat Platonic and Hegelian view that reifies causes as ontological entities. In Aristotelian terminology, this use approximates to the case of the efficient cause. Some philosophers of history such as Arthur Danto have claimed that "explanations in history and elsewhere" describe "not simply an event—something that happens—but a change". Like many practicing historians, they treat causes as intersecting actions and sets of actions which bring about "larger changes", in Danto's words: to decide "what are the elements which persist through a change" is "rather simple" when treating an individual's "shift in attitude", but "it is considerably more complex and metaphysically challenging when we are interested in such a change as, say, the break-up of feudalism or the emergence of nationalism". Much of the historical debate about causes has focused on the relationship between communicative and other actions, between singular and repeated ones, and between actions, structures of action or group and institutional contexts and wider sets of conditions. John Gaddis has distinguished between exceptional and general causes (following Marc Bloch) and between "routine" and "distinctive links" in causal relationships: "in accounting for what happened at Hiroshima on August 6, 1945, we attach greater importance to the fact that President Truman ordered the dropping of an atomic bomb than to the decision of the Army Air Force to carry out his orders." He has also pointed to the difference between immediate, intermediate and distant causes.
For his part, Christopher Lloyd puts forward four "general concepts of causation" used in history: the "metaphysical idealist concept, which asserts that the phenomena of the universe are products of or emanations from an omnipotent being or such final cause"; "the empiricist (or Humean) regularity concept, which is based on the idea of causation being a matter of constant conjunctions of events"; "the functional/teleological/consequential concept", which is "goal-directed, so that goals are causes"; and the "realist, structurist and dispositional approach, which sees relational structures and internal dispositions as the causes of phenomena". Law According to law and jurisprudence, legal cause must be demonstrated to hold a defendant liable for a crime or a tort (i.e. a civil wrong such as negligence or trespass). It must be proven that causality, or a "sufficient causal link" relates the defendant's actions to the criminal event or damage in question. Causation is also an essential legal element that must be proven to qualify for remedy measures under international trade law. History Hindu philosophy Vedic period (–500 BCE) literature has karma's Eastern origins. Karma is the belief held by Sanatana Dharma and major religions that a person's actions cause certain effects in the current life and/or in future life, positively or negatively. The various philosophical schools (darshanas) provide different accounts of the subject. The doctrine of satkaryavada affirms that the effect inheres in the cause in some way. The effect is thus either a real or apparent modification of the cause. The doctrine of asatkaryavada affirms that the effect does not inhere in the cause, but is a new arising. See Nyaya for some details of the theory of causation in the Nyaya school. In Brahma Samhita, Brahma describes Krishna as the prime cause of all causes. Bhagavad-gītā 18.14 identifies five causes for any action (knowing which it can be perfected): the body, the individual soul, the senses, the efforts and the supersoul. According to Monier-Williams, in the Nyāya causation theory from Sutra I.2.I,2 in the Vaisheshika philosophy, from causal non-existence is effectual non-existence; but, not effectual non-existence from causal non-existence. A cause precedes an effect. With a threads and cloth metaphors, three causes are: Co-inherence cause: resulting from substantial contact, 'substantial causes', threads are substantial to cloth, corresponding to Aristotle's material cause. Non-substantial cause: Methods putting threads into cloth, corresponding to Aristotle's formal cause. Instrumental cause: Tools to make the cloth, corresponding to Aristotle's efficient cause. Monier-Williams also proposed that Aristotle's and the Nyaya's causality are considered conditional aggregates necessary to man's productive work. Buddhist philosophy Karma is the causality principle focusing on 1) causes, 2) actions, 3) effects, where it is the mind's phenomena that guide the actions that the actor performs. Buddhism trains the actor's actions for continued and uncontrived virtuous outcomes aimed at reducing suffering. This follows the Subject–verb–object structure. The general or universal definition of pratityasamutpada (or "dependent origination" or "dependent arising" or "interdependent co-arising") is that everything arises in dependence upon multiple causes and conditions; nothing exists as a singular, independent entity. 
A traditional example in Buddhist texts is of three sticks standing upright and leaning against each other and supporting each other. If one stick is taken away, the other two will fall to the ground. Causality in the Chittamatrin Buddhist school approach, Asanga's mind-only Buddhist school, asserts that objects cause consciousness in the mind's image. Because causes precede effects and must be different entities from them, subject and object are different. For this school, there are no objects which are entities external to a perceiving consciousness. The Chittamatrin and the Yogachara Svatantrika schools accept that there are no objects external to the observer's causality. This largely follows the Nikayas approach. The Vaibhashika is an early Buddhist school which favors direct object contact and accepts simultaneous cause and effects. This is based on the consciousness example, which says that intentions and feelings are mutually accompanying mental factors that support each other like the poles of a tripod. In contrast, those who reject simultaneous cause and effect say that if the effect already exists, then the cause cannot produce it again in the same way. How past, present and future are accepted is a basis for various Buddhist schools' causality viewpoints. All the classic Buddhist schools teach karma. "The law of karma is a special instance of the law of cause and effect, according to which all our actions of body, speech, and mind are causes and all our experiences are their effects." Western philosophy Aristotelian Aristotle identified four kinds of answer or explanatory mode to various "Why?" questions. He thought that, for any given topic, all four kinds of explanatory mode were important, each in its own right. As a result of traditional specialized philosophical peculiarities of language, with translations between ancient Greek, Latin, and English, the word 'cause' is nowadays in specialized philosophical writings used to label Aristotle's four kinds. In ordinary language, the word 'cause' has a variety of meanings, the most common of which refers to efficient causation, which is the topic of the present article. Material cause, the material whence a thing has come or that which persists while it changes, as for example, one's mother or the bronze of a statue (see also substance theory). Formal cause, whereby a thing's dynamic form or static shape determines the thing's properties and function, as a human differs from a statue of a human or as a statue differs from a lump of bronze. Efficient cause, which imparts the first relevant movement, as a human lifts a rock or raises a statue. This is the main topic of the present article. Final cause, the criterion of completion, or the end; it may refer to an action or to an inanimate process. Examples: Socrates takes a walk after dinner for the sake of his health; earth falls to the lowest level because that is its nature. Of Aristotle's four kinds or explanatory modes, only one, the 'efficient cause', is a cause as defined in the leading paragraph of this present article. The other three explanatory modes might be rendered material composition, structure and dynamics, and, again, criterion of completion. The word that Aristotle used was aitia (αἰτία). For the present purpose, that Greek word would be better translated as "explanation" than as "cause" as those words are most often used in current English. Another translation of Aristotle is that he meant "the four Becauses" as four kinds of answer to "why" questions.
Aristotle assumed efficient causality as referring to a basic fact of experience, not explicable by, or reducible to, anything more fundamental or basic. In some works of Aristotle, the four causes are listed as (1) the essential cause, (2) the logical ground, (3) the moving cause, and (4) the final cause. In this listing, a statement of essential cause is a demonstration that an indicated object conforms to a definition of the word that refers to it. A statement of logical ground is an argument as to why an object statement is true. These are further examples of the idea that a "cause" in general in the context of Aristotle's usage is an "explanation". The word "efficient" used here can also be translated from Aristotle as "moving" or "initiating". Efficient causation was connected with Aristotelian physics, which recognized the four elements (earth, air, fire, water), and added the fifth element (aether). Water and earth by their intrinsic property gravitas or heaviness intrinsically fall toward, whereas air and fire by their intrinsic property levitas or lightness intrinsically rise away from, Earth's center—the motionless center of the universe—in a straight line while accelerating during the substance's approach to its natural place. As air remained on Earth, however, and did not escape Earth while eventually achieving infinite speed—an absurdity—Aristotle inferred that the universe is finite in size and contains an invisible substance that holds planet Earth and its atmosphere, the sublunary sphere, centered in the universe. And since celestial bodies exhibit perpetual, unaccelerated motion orbiting planet Earth in unchanging relations, Aristotle inferred that the fifth element, aither, that fills space and composes celestial bodies intrinsically moves in perpetual circles, the only constant motion between two points. (An object traveling a straight line from point A to B and back must stop at either point before returning to the other.) Left to itself, a thing exhibits natural motion, but can—according to Aristotelian metaphysics—exhibit enforced motion imparted by an efficient cause. The form of plants endows plants with the processes nutrition and reproduction, the form of animals adds locomotion, and the form of humankind adds reason atop these. A rock normally exhibits natural motion—explained by the rock's material cause of being composed of the element earth—but a living thing can lift the rock, an enforced motion diverting the rock from its natural place and natural motion. As a further kind of explanation, Aristotle identified the final cause, specifying a purpose or criterion of completion in light of which something should be understood. Aristotle himself explained, Aristotle further discerned two modes of causation: proper (prior) causation and accidental (chance) causation. All causes, proper and accidental, can be spoken as potential or as actual, particular or generic. The same language refers to the effects of causes, so that generic effects are assigned to generic causes, particular effects to particular causes, and actual effects to operating causes. Averting infinite regress, Aristotle inferred the first mover—an unmoved mover. The first mover's motion, too, must have been caused, but, being an unmoved mover, must have moved only toward a particular goal or desire. Pyrrhonism While the plausibility of causality was accepted in Pyrrhonism, it was equally accepted that it was plausible that nothing was the cause of anything. 
Middle Ages In line with Aristotelian cosmology, Thomas Aquinas posed a hierarchy prioritizing Aristotle's four causes: "final > efficient > material > formal". Aquinas sought to identify the first efficient cause—now simply first cause—as everyone would agree, said Aquinas, to call it God. Later in the Middle Ages, many scholars conceded that the first cause was God, but explained that many earthly events occur within God's design or plan, and thereby scholars sought freedom to investigate the numerous secondary causes. After the Middle Ages For Aristotelian philosophy before Aquinas, the word cause had a broad meaning. It meant 'answer to a why question' or 'explanation', and Aristotelian scholars recognized four kinds of such answers. With the end of the Middle Ages, in many philosophical usages, the meaning of the word 'cause' narrowed. It often lost that broad meaning, and was restricted to just one of the four kinds. For authors such as Niccolò Machiavelli, in the field of political thinking, and Francis Bacon, concerning science more generally, Aristotle's moving cause was the focus of their interest. A widely used modern definition of causality in this newly narrowed sense was assumed by David Hume. He undertook an epistemological and metaphysical investigation of the notion of moving cause. He denied that we can ever perceive cause and effect, except by developing a habit or custom of mind where we come to associate two types of object or event, always contiguous and occurring one after the other. In Part III, section XV of his book A Treatise of Human Nature, Hume expanded this to a list of eight ways of judging whether two things might be cause and effect. The first three: "The cause and effect must be contiguous in space and time." "The cause must be prior to the effect." "There must be a constant union betwixt the cause and effect. 'Tis chiefly this quality, that constitutes the relation." And then additionally there are three connected criteria which come from our experience and which are "the source of most of our philosophical reasonings": And then two more: In 1949, physicist Max Born distinguished determination from causality. For him, determination meant that actual events are so linked by laws of nature that certainly reliable predictions and retrodictions can be made from sufficient present data about them. He describes two kinds of causation: nomic or generic causation and singular causation. Nomic causality means that cause and effect are linked by more or less certain or probabilistic general laws covering many possible or potential instances; this can be recognized as a probabilized version of Hume's criterion 3. An occasion of singular causation is a particular occurrence of a definite complex of events that are physically linked by antecedence and contiguity, which may be recognized as criteria 1 and 2. 
See also General Catch-22 (logic) Causal research Causal inference Causality (book) Causation (sociology) Cosmological argument Domino effect Sequence of events Mathematics Causal filter Causal system Causality conditions Chaos theory Physics Anthropic principle Arrow of time Butterfly effect Chain reaction Delayed choice quantum eraser Feedback Grandfather paradox Quantum Zeno effect Retrocausality Schrödinger's cat Wheeler–Feynman absorber theory Philosophy Aetiology Arche (ἀρχή) Causa sui Chance (philosophy) Chicken or the egg Condition of possibility Determinism Mill's Methods Newcomb's paradox Non sequitur (logic) Ontological paradox Post hoc ergo propter hoc Predestination paradox Proposed proofs of universal validity (principle of causality) Proximate and ultimate causation Quidditism Supervenience Philosophy of mind Synchronicity Statistics Causal loop diagram Causal Markov condition Correlation does not imply causation Experimental design Granger causality Linear regression Randomness Causal model (structural causal model) Rubin causal model Validity (statistics) Psychology and medicine Adverse effect Clinical trial Force dynamics Iatrogenesis Nocebo Placebo Scientific control Suggestibility Suggestion Pathology and epidemiology Causal inference Epidemiology Etiology Molecular pathology Molecular pathological epidemiology Pathogenesis Pathology Sociology and economics Instrumental variable Root cause analysis Self-fulfilling prophecy Supply and demand Unintended consequence Virtuous circle and vicious circle Environmental issues Causes of global warming Causes of deforestation Causes of land degradation Causes of soil contamination Causes of habitat fragmentation References Further reading Arthur Danto (1965). Analytical Philosophy of History. Cambridge University Press. Idem, 'Complex Events', Philosophy and Phenomenological Research, 30 (1969), 66–77. Idem, 'On Explanations in History', Philosophy of Science, 23 (1956), 15–30. Green, Celia (2003). The Lost Cause: Causation and the Mind-Body Problem. Oxford: Oxford Forum. Includes three chapters on causality at the microlevel in physics. Hewitson, Mark (2014). History and Causality. Palgrave Macmillan. . Little, Daniel (1998). Microfoundations, Method and Causation: On the Philosophy of the Social Sciences. New York: Transaction. Lloyd, Christopher (1993). The Structures of History. Oxford: Blackwell. Idem (1986). Explanation in Social History. Oxford: Blackwell. Maurice Mandelbaum (1977). The Anatomy of Historical Knowledge. Baltimore: Johns Hopkins Press. Judea Pearl (2000). Causality: Models of Reasoning and Inference CAUSALITY, 2nd Edition, 2009 Cambridge University Press Rosenberg, M. (1968). The Logic of Survey Analysis. New York: Basic Books, Inc. Spirtes, Peter, Clark Glymour and Richard Scheines Causation, Prediction, and Search, MIT Press, University of California journal articles, including Judea Pearl's articles between 1984 and 1998 Search Results - Technical Reports . Miguel Espinoza, Théorie du déterminisme causal, L'Harmattan, Paris, 2006. . 
External links Causation – Internet Encyclopedia of Philosophy Metaphysics of Science – Internet Encyclopedia of Philosophy Causal Processes at the Stanford Encyclopedia of Philosophy The Art and Science of Cause and Effect – A slide show and tutorial lecture by Judea Pearl Donald Davidson: Causal Explanation of Action – The Internet Encyclopedia of Philosophy Causal inference in statistics: An overview – By Judea Pearl (September 2009) An R implementation of causal calculus TimeSleuth – A tool for discovering causality Concepts in epistemology Metaphysical properties Conditionals Time Philosophy of science Scientific method
0.767556
0.998472
0.766383
Applications of quantum mechanics
Quantum physics is a branch of modern physics in which energy and matter are described at their most fundamental level, that of energy quanta, elementary particles, and quantum fields. Quantum physics encompasses any discipline concerned with systems that exhibit notable quantum-mechanical effects, where waves have properties of particles, and particles behave like waves. Applications of quantum mechanics include explaining phenomena found in nature as well as developing technologies that rely upon quantum effects, like integrated circuits and lasers. Quantum mechanics is also critically important for understanding how individual atoms are joined by covalent bonds to form molecules. The application of quantum mechanics to chemistry is known as quantum chemistry. Quantum mechanics can also provide quantitative insight into ionic and covalent bonding processes by explicitly showing which molecules are energetically favorable to which others and the magnitudes of the energies involved. Historically, the first applications of quantum mechanics to physical systems were the algebraic determination of the hydrogen spectrum by Wolfgang Pauli and the treatment of diatomic molecules by Lucy Mensing. In many aspects modern technology operates at a scale where quantum effects are significant. Important applications of quantum theory include quantum chemistry, quantum optics, quantum computing, superconducting magnets, light-emitting diodes, the optical amplifier and the laser, the transistor and semiconductors such as the microprocessor, medical and research imaging such as magnetic resonance imaging and electron microscopy. Explanations for many biological and physical phenomena are rooted in the nature of the chemical bond, most notably the macro-molecule DNA. Electronics Many modern electronic devices are designed using quantum mechanics. Examples include lasers, electron microscopes, magnetic resonance imaging (MRI) devices and the components used in computing hardware. The study of semiconductors led to the invention of the diode and the transistor, which are indispensable parts of modern electronics systems, computer and telecommunications devices. Another application is for making laser diodes and light-emitting diodes, which are a high-efficiency source of light. The global positioning system (GPS) makes use of atomic clocks to measure precise time differences and therefore determine a user's location. Many electronic devices operate using the effect of quantum tunneling. Flash memory chips found in USB drives use quantum tunneling to erase their memory cells. Some negative differential resistance devices also utilize the quantum tunneling effect, such as resonant tunneling diodes. Unlike in classical diodes, their current is carried by resonant tunneling through two or more potential barriers. Their negative resistance behavior can only be understood with quantum mechanics: As the confined state moves close to the Fermi level, tunnel current increases. As it moves away, the current decreases. Quantum mechanics is necessary to understand and design such electronic devices. Cryptography Many scientists are currently seeking robust methods of directly manipulating quantum states. Efforts are being made to more fully develop quantum cryptography, which will theoretically allow guaranteed secure transmission of information. An inherent advantage yielded by quantum cryptography when compared to classical cryptography is the detection of passive eavesdropping.
This is a natural result of the behavior of quantum bits; due to the observer effect, if a bit in a superposition state were to be observed, the superposition state would collapse into an eigenstate. Because the intended recipient was expecting to receive the bit in a superposition state, the intended recipient would know there was an attack, because the bit's state would no longer be in a superposition. Quantum computing Another goal is the development of quantum computers, which are expected to perform certain computational tasks exponentially faster than classical computers. Instead of using classical bits, quantum computers use qubits, which can be in superpositions of states. Quantum programmers are able to manipulate the superposition of qubits in order to solve problems that classical computing cannot do effectively, such as searching unsorted databases or integer factorization. IBM claims that the advent of quantum computing may progress the fields of medicine, logistics, financial services, artificial intelligence and cloud security. Another active research topic is quantum teleportation, which deals with techniques to transmit quantum information over arbitrary distances. Macroscale quantum effects While quantum mechanics primarily applies to the smaller atomic regimes of matter and energy, some systems exhibit quantum mechanical effects on a large scale. Superfluidity, the frictionless flow of a liquid at temperatures near absolute zero, is one well-known example. So is the closely related phenomenon of superconductivity, the frictionless flow of an electron gas in a conducting material (an electric current) at sufficiently low temperatures. The fractional quantum Hall effect is a topological ordered state which corresponds to patterns of long-range quantum entanglement. States with different topological orders (or different patterns of long range entanglements) cannot change into each other without a phase transition. Other phenomena Quantum theory also provides accurate descriptions for many previously unexplained phenomena, such as black-body radiation and the stability of the orbitals of electrons in atoms. It has also given insight into the workings of many different biological systems, including smell receptors and protein structures. Recent work on photosynthesis has provided evidence that quantum correlations play an essential role in this fundamental process of plants and many other organisms. Even so, classical physics can often provide good approximations to results otherwise obtained by quantum physics, typically in circumstances with large numbers of particles or large quantum numbers. Since classical formulas are much simpler and easier to compute than quantum formulas, classical approximations are used and preferred when the system is large enough to render the effects of quantum mechanics insignificant. Notes References Quantum mechanics
0.778042
0.985014
0.766382
Detailed balance
The principle of detailed balance can be used in kinetic systems which are decomposed into elementary processes (collisions, or steps, or elementary reactions). It states that at equilibrium, each elementary process is in equilibrium with its reverse process. History The principle of detailed balance was explicitly introduced for collisions by Ludwig Boltzmann. In 1872, he proved his H-theorem using this principle. The arguments in favor of this property are founded upon microscopic reversibility. Five years before Boltzmann, James Clerk Maxwell used the principle of detailed balance for gas kinetics with the reference to the principle of sufficient reason. He compared the idea of detailed balance with other types of balancing (like cyclic balance) and found that "Now it is impossible to assign a reason" why detailed balance should be rejected (pg. 64). In 1901, Rudolf Wegscheider introduced the principle of detailed balance for chemical kinetics. In particular, he demonstrated that the irreversible cycles A1 -> A2 -> \cdots -> A_\mathit{n} -> A1 are impossible and found explicitly the relations between kinetic constants that follow from the principle of detailed balance. In 1931, Lars Onsager used these relations in his works, for which he was awarded the 1968 Nobel Prize in Chemistry. Albert Einstein in 1916 used the principle of detailed balance in a background for his quantum theory of emission and absorption of radiation. The principle of detailed balance has been used in Markov chain Monte Carlo methods since their invention in 1953. In particular, in the Metropolis–Hastings algorithm and in its important particular case, Gibbs sampling, it is used as a simple and reliable condition to provide the desirable equilibrium state. Now, the principle of detailed balance is a standard part of the university courses in statistical mechanics, physical chemistry, chemical and physical kinetics. Microscopic background The microscopic "reversing of time" turns at the kinetic level into the "reversing of arrows": the elementary processes transform into their reverse processes. For example, the reaction transforms into and conversely. (Here, are symbols of components or states, are coefficients). The equilibrium ensemble should be invariant with respect to this transformation because of microreversibility and the uniqueness of thermodynamic equilibrium. This leads us immediately to the concept of detailed balance: each process is equilibrated by its reverse process. This reasoning is based on three assumptions: does not change under time reversal; Equilibrium is invariant under time reversal; The macroscopic elementary processes are microscopically distinguishable. That is, they represent disjoint sets of microscopic events. Any of these assumptions may be violated. For example, Boltzmann's collision can be represented as where is a particle with velocity v. Under time reversal transforms into . Therefore, the collision is transformed into the reverse collision by the PT transformation, where P is the space inversion and T is the time reversal. Detailed balance for Boltzmann's equation requires PT-invariance of collisions' dynamics, not just T-invariance. Indeed, after the time reversal the collision transforms into For the detailed balance we need transformation into For this purpose, we need to apply additionally the space reversal P. Therefore, for the detailed balance in Boltzmann's equation not T-invariance but PT-invariance is needed. 
Equilibrium may be not T- or PT-invariant even if the laws of motion are invariant. This non-invariance may be caused by the spontaneous symmetry breaking. There exist nonreciprocal media (for example, some bi-isotropic materials) without T and PT invariance. If different macroscopic processes are sampled from the same elementary microscopic events then macroscopic detailed balance may be violated even when microscopic detailed balance holds. Now, after almost 150 years of development, the scope of validity and the violations of detailed balance in kinetics seem to be clear. Detailed balance Reversibility A Markov process is called a reversible Markov process or reversible Markov chain if there exists a positive stationary distribution π that satisfies the detailed balance equations πi Pij = πj Pji, where Pij is the Markov transition probability from state i to state j, i.e. Pij = Pr(Xt+1 = j | Xt = i), and πi and πj are the equilibrium probabilities of being in states i and j, respectively. When Pr(Xt = i) = πi for all i, this is equivalent to the joint probability matrix Pr(Xt = i, Xt+1 = j) being symmetric in i and j, or symmetric in t and t + 1. The definition carries over straightforwardly to continuous variables, where π becomes a probability density and P(s′, s) a transition kernel probability density from state s′ to state s: π(s′) P(s′, s) = π(s) P(s, s′). The detailed balance condition is stronger than that required merely for a stationary distribution, because there are Markov processes with stationary distributions that do not have detailed balance. Transition matrices that are symmetric (Pij = Pji, or P(s′, s) = P(s, s′) in the continuous case) always have detailed balance. In these cases, a uniform distribution over the states is an equilibrium distribution. Kolmogorov's criterion Reversibility is equivalent to Kolmogorov's criterion: the product of transition rates over any closed loop of states is the same in both directions. For example, it implies that, for all a, b and c, Pab Pbc Pca = Pac Pcb Pba. For example, if we have a Markov chain with three states such that only the transitions a → b, b → c and c → a have nonzero probability, then the chain violates Kolmogorov's criterion, because the product of the rates around the loop is positive in one direction and zero in the other. Closest reversible Markov chain For continuous systems with detailed balance, it may be possible to continuously transform the coordinates until the equilibrium distribution is uniform, with a transition kernel which then is symmetric. In the case of discrete states, it may be possible to achieve something similar by breaking the Markov states into appropriately-sized degenerate sub-states. For a Markov transition matrix and a stationary distribution, the detailed balance equations may not be valid. However, it can be shown that a unique Markov transition matrix exists which is closest according to the stationary distribution and a given norm. The closest matrix can be computed by solving a quadratic-convex optimization problem. Detailed balance and entropy increase For many systems of physical and chemical kinetics, detailed balance provides sufficient conditions for the strict increase of entropy in isolated systems. For example, the famous Boltzmann H-theorem states that, according to the Boltzmann equation, the principle of detailed balance implies positivity of entropy production. The Boltzmann formula (1872) for entropy production in rarefied gas kinetics with detailed balance served as a prototype of many similar formulas for dissipation in mass action kinetics and generalized mass action kinetics with detailed balance. Nevertheless, the principle of detailed balance is not necessary for entropy growth. For example, in the linear irreversible cycle A1 -> A2 -> A3 -> A1, entropy production is positive but the principle of detailed balance does not hold. Thus, the principle of detailed balance is a sufficient but not necessary condition for entropy increase in Boltzmann kinetics.
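A minimal numerical check of these conditions for a three-state chain, with illustrative numbers: the first matrix is constructed to satisfy detailed balance with respect to a chosen π (so Kolmogorov's loop products agree), while the second, a one-way cycle, has a stationary distribution but violates both detailed balance and Kolmogorov's criterion.

```python
import numpy as np

pi = np.array([0.2, 0.3, 0.5])

# Reversible chain: build P from a symmetric "flow" matrix S via P[i, j] = S[i, j] / pi[i],
# so that pi[i] * P[i, j] = S[i, j] = pi[j] * P[j, i] by construction.
S = np.array([[0.00, 0.05, 0.05],
              [0.05, 0.00, 0.10],
              [0.05, 0.10, 0.00]])
P = S / pi[:, None]
np.fill_diagonal(P, 1.0 - P.sum(axis=1))   # complete the rows so they sum to 1

# One-way cycle a -> b -> c -> a: stationary (uniform) but not reversible.
Q = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [1.0, 0.0, 0.0]])

def checks(name, T, dist):
    stationary = np.allclose(dist @ T, dist)
    detailed = np.allclose(dist[:, None] * T, (dist[:, None] * T).T)
    loop_fwd = T[0, 1] * T[1, 2] * T[2, 0]
    loop_bwd = T[0, 2] * T[2, 1] * T[1, 0]
    print(f"{name}: stationary={stationary}, detailed balance={detailed}, "
          f"loop products {loop_fwd:.4f} vs {loop_bwd:.4f}")

checks("P (reversible)", P, pi)
checks("Q (one-way cycle)", Q, np.full(3, 1.0 / 3.0))
```

The one-way cycle is exactly the kind of system discussed above: it reaches a steady state and can even produce entropy, yet each elementary step is not balanced by its reverse.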
These relations between the principle of detailed balance and the second law of thermodynamics were clarified in 1887 when Hendrik Lorentz objected to the Boltzmann H-theorem for polyatomic gases. Lorentz stated that the principle of detailed balance is not applicable to collisions of polyatomic molecules. Boltzmann immediately invented a new, more general condition sufficient for entropy growth. Boltzmann's condition holds for all Markov processes, irrespective of time-reversibility. Later, entropy increase was proved for all Markov processes by a direct method. These theorems may be considered as simplifications of the Boltzmann result. Later, this condition was referred to as the "cyclic balance" condition (because it holds for irreversible cycles) or the "semi-detailed balance" or the "complex balance". In 1981, Carlo Cercignani and Maria Lampis proved that the Lorentz arguments were wrong and the principle of detailed balance is valid for polyatomic molecules. Nevertheless, the extended semi-detailed balance conditions invented by Boltzmann in this discussion remain a remarkable generalization of detailed balance. Wegscheider's conditions for the generalized mass action law In chemical kinetics, the elementary reactions are represented by the stoichiometric equations αr1A1 + ... + αrnAn -> βr1A1 + ... + βrnAn (r = 1, ..., m), where Ai are the components and αri ≥ 0, βri ≥ 0 are the stoichiometric coefficients. Here, the reverse reactions with positive constants are included in the list separately. We need this separation of direct and reverse reactions to apply later the general formalism to the systems with some irreversible reactions. The system of stoichiometric equations of elementary reactions is the reaction mechanism. The stoichiometric matrix is Γ = (γri), γri = βri − αri (gain minus loss). This matrix need not be square. The stoichiometric vector γr is the rth row of Γ with coordinates γri = βri − αri. According to the generalized mass action law, the reaction rate for an elementary reaction is wr = kr ∏i ai^αri, where ai ≥ 0 is the activity (the "effective concentration") of Ai. The reaction mechanism includes reactions with the reaction rate constants kr > 0. For each r the following notations are used: kr+ = kr; wr+ = wr; kr− is the reaction rate constant for the reverse reaction if it is in the reaction mechanism and 0 if it is not; wr− is the reaction rate for the reverse reaction if it is in the reaction mechanism and 0 if it is not. For a reversible reaction, Kr = kr+/kr− is the equilibrium constant. The principle of detailed balance for the generalized mass action law is: For given values kr there exists a positive equilibrium ai > 0 that satisfies detailed balance, that is, wr+ = wr−. This means that the system of linear detailed balance equations is solvable. The following classical result gives the necessary and sufficient conditions for the existence of a positive equilibrium with detailed balance (see, for example, the textbook). Two conditions are sufficient and necessary for solvability of the system of detailed balance equations: If kr+ > 0 then kr− > 0 and, conversely, if kr− > 0 then kr+ > 0 (reversibility); For any solution λ = (λr) of the system ∑r λr γr = 0 the Wegscheider identity holds: ∏r (kr+)^λr = ∏r (kr−)^λr. Remark. It is sufficient to use in the Wegscheider conditions a basis of solutions of the system ∑r λr γr = 0.
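A minimal numerical sketch of these conditions for the simple monomolecular cycle A1 <=> A2 <=> A3 <=> A1, with rate constants invented for illustration. Because γ1 + γ2 + γ3 = 0, the single Wegscheider identity is k1+ k2+ k3+ = k1− k2− k3−; when it holds, the linear system Γ ln a = ln(k+/k−) is solvable and yields a positive point of detailed balance.

```python
import numpy as np

# Cycle mechanism A1 <=> A2, A2 <=> A3, A3 <=> A1; rows are the stoichiometric vectors γr.
Gamma = np.array([[-1.0,  1.0,  0.0],
                  [ 0.0, -1.0,  1.0],
                  [ 1.0,  0.0, -1.0]])

k_plus = np.array([1.0, 2.0, 3.0])    # illustrative forward constants
k_minus = np.array([2.0, 3.0, 1.0])   # chosen so that the cycle identity holds

# Wegscheider identity for λ = (1, 1, 1), since γ1 + γ2 + γ3 = 0.
print("cycle identity holds:", np.isclose(np.prod(k_plus), np.prod(k_minus)))

# Detailed balance equations in logarithmic form: Gamma @ ln(a) = ln(k+ / k-).
b = np.log(k_plus / k_minus)
x, *_ = np.linalg.lstsq(Gamma, b, rcond=None)
a_eq = np.exp(x)                      # a positive equilibrium (defined up to an overall scale)

# Unimolecular rates: wr+ = kr+ * a(source), wr- = kr- * a(product).
w_plus = k_plus * a_eq
w_minus = k_minus * np.roll(a_eq, -1)
print("equilibrium activities:", np.round(a_eq, 4))
print("detailed balance w+ == w-:", np.allclose(w_plus, w_minus))
```

If the cycle identity were violated (say, by setting k3− = 2), the same linear system would be inconsistent and no positive detailed-balance equilibrium would exist, even though a positive steady state with a nonzero circulating flux would still be possible.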
In particular, for any cycle in the monomolecular (linear) reactions the product of the reaction rate constants in the clockwise direction is equal to the product of the reaction rate constants in the counterclockwise direction. The same condition is valid for the reversible Markov processes (it is equivalent to the "no net flow" condition). A simple nonlinear example gives us a linear cycle supplemented by one nonlinear step: A1 <=> A2 A2 <=> A3 A3 <=> A1 {A1}+A2 <=> 2A3 There are two nontrivial independent Wegscheider's identities for this system: and They correspond to the following linear relations between the stoichiometric vectors: and The computational aspect of the Wegscheider conditions was studied by D. Colquhoun with co-authors. The Wegscheider conditions demonstrate that whereas the principle of detailed balance states a local property of equilibrium, it implies the relations between the kinetic constants that are valid for all states far from equilibrium. This is possible because a kinetic law is known and relations between the rates of the elementary processes at equilibrium can be transformed into relations between kinetic constants which are used globally. For the Wegscheider conditions this kinetic law is the law of mass action (or the generalized law of mass action). Dissipation in systems with detailed balance To describe dynamics of the systems that obey the generalized mass action law, one has to represent the activities as functions of the concentrations cj and temperature. For this purpose, use the representation of the activity through the chemical potential: where μi is the chemical potential of the species under the conditions of interest, is the chemical potential of that species in the chosen standard state, R is the gas constant and T is the thermodynamic temperature. The chemical potential can be represented as a function of c and T, where c is the vector of concentrations with components cj. For the ideal systems, and : the activity is the concentration and the generalized mass action law is the usual law of mass action. Consider a system in isothermal (T=const) isochoric (the volume V=const) condition. For these conditions, the Helmholtz free energy measures the “useful” work obtainable from a system. It is a functions of the temperature T, the volume V and the amounts of chemical components Nj (usually measured in moles), N is the vector with components Nj. For the ideal systems, The chemical potential is a partial derivative: . The chemical kinetic equations are If the principle of detailed balance is valid then for any value of T there exists a positive point of detailed balance ceq: Elementary algebra gives where For the dissipation we obtain from these formulas: The inequality holds because ln is a monotone function and, hence, the expressions and have always the same sign. Similar inequalities are valid for other classical conditions for the closed systems and the corresponding characteristic functions: for isothermal isobaric conditions the Gibbs free energy decreases, for the isochoric systems with the constant internal energy (isolated systems) the entropy increases as well as for isobaric systems with the constant enthalpy. Onsager reciprocal relations and detailed balance Let the principle of detailed balance be valid. 
Then, for small deviations from equilibrium, the kinetic response of the system can be approximated as linearly related to its deviation from chemical equilibrium, giving the reaction rates for the generalized mass action law as: Therefore, again in the linear response regime near equilibrium, the kinetic equations are: This is exactly the Onsager form: following the original work of Onsager, we should introduce the thermodynamic forces and the matrix of coefficients in the form The coefficient matrix is symmetric: These symmetry relations, , are exactly the Onsager reciprocal relations. The coefficient matrix is non-positive. It is negative on the linear span of the stoichiometric vectors . So, the Onsager relations follow from the principle of detailed balance in the linear approximation near equilibrium. Semi-detailed balance To formulate the principle of semi-detailed balance, it is convenient to count the direct and inverse elementary reactions separately. In this case, the kinetic equations have the form: Let us use the notations , for the input and the output vectors of the stoichiometric coefficients of the rth elementary reaction. Let be the set of all these vectors . For each , let us define two sets of numbers: if and only if is the vector of the input stoichiometric coefficients for the rth elementary reaction; if and only if is the vector of the output stoichiometric coefficients for the rth elementary reaction. The principle of semi-detailed balance means that in equilibrium the semi-detailed balance condition holds: for every The semi-detailed balance condition is sufficient for the stationarity: it implies that For the Markov kinetics the semi-detailed balance condition is just the elementary balance equation and holds for any steady state. For the nonlinear mass action law it is, in general, sufficient but not necessary condition for stationarity. The semi-detailed balance condition is weaker than the detailed balance one: if the principle of detailed balance holds then the condition of semi-detailed balance also holds. For systems that obey the generalized mass action law the semi-detailed balance condition is sufficient for the dissipation inequality (for the Helmholtz free energy under isothermal isochoric conditions and for the dissipation inequalities under other classical conditions for the corresponding thermodynamic potentials). Boltzmann introduced the semi-detailed balance condition for collisions in 1887 and proved that it guaranties the positivity of the entropy production. For chemical kinetics, this condition (as the complex balance condition) was introduced by Horn and Jackson in 1972. The microscopic backgrounds for the semi-detailed balance were found in the Markov microkinetics of the intermediate compounds that are present in small amounts and whose concentrations are in quasiequilibrium with the main components. Under these microscopic assumptions, the semi-detailed balance condition is just the balance equation for the Markov microkinetics according to the Michaelis–Menten–Stueckelberg theorem. Dissipation in systems with semi-detailed balance Let us represent the generalized mass action law in the equivalent form: the rate of the elementary process is where is the chemical potential and is the Helmholtz free energy. The exponential term is called the Boltzmann factor and the multiplier is the kinetic factor. 
Let us count the direct and reverse reaction in the kinetic equation separately: An auxiliary function of one variable is convenient for the representation of dissipation for the mass action law This function may be considered as the sum of the reaction rates for deformed input stoichiometric coefficients . For it is just the sum of the reaction rates. The function is convex because . Direct calculation gives that according to the kinetic equations This is the general dissipation formula for the generalized mass action law. Convexity of gives the sufficient and necessary conditions for the proper dissipation inequality: The semi-detailed balance condition can be transformed into identity . Therefore, for the systems with semi-detailed balance . Cone theorem and local equivalence of detailed and complex balance For any reaction mechanism and a given positive equilibrium a cone of possible velocities for the systems with detailed balance is defined for any non-equilibrium state N where cone stands for the conical hull and the piecewise-constant functions do not depend on (positive) values of equilibrium reaction rates and are defined by thermodynamic quantities under assumption of detailed balance. The cone theorem states that for the given reaction mechanism and given positive equilibrium, the velocity (dN/dt) at a state N for a system with complex balance belongs to the cone . That is, there exists a system with detailed balance, the same reaction mechanism, the same positive equilibrium, that gives the same velocity at state N. According to cone theorem, for a given state N, the set of velocities of the semidetailed balance systems coincides with the set of velocities of the detailed balance systems if their reaction mechanisms and equilibria coincide. This means local equivalence of detailed and complex balance. Detailed balance for systems with irreversible reactions Detailed balance states that in equilibrium each elementary process is equilibrated by its reverse process and requires reversibility of all elementary processes. For many real physico-chemical complex systems (e.g. homogeneous combustion, heterogeneous catalytic oxidation, most enzyme reactions etc.), detailed mechanisms include both reversible and irreversible reactions. If one represents irreversible reactions as limits of reversible steps, then it becomes obvious that not all reaction mechanisms with irreversible reactions can be obtained as limits of systems or reversible reactions with detailed balance. For example, the irreversible cycle A1 -> A2 -> A3 -> A1 cannot be obtained as such a limit but the reaction mechanism A1 -> A2 -> A3 <- A1 can.Gorban–Yablonsky theorem'. A system of reactions with some irreversible reactions is a limit of systems with detailed balance when some constants tend to zero if and only if (i) the reversible part of this system satisfies the principle of detailed balance and (ii) the convex hull of the stoichiometric vectors of the irreversible reactions has empty intersection with the linear span of the stoichiometric vectors of the reversible reactions.'' Physically, the last condition means that the irreversible reactions cannot be included in oriented cyclic pathways. See also T-symmetry Microscopic reversibility Master equation Balance equation Gibbs sampling Metropolis–Hastings algorithm Atomic spectral line (deduction of the Einstein coefficients) Random walks on graphs References Non-equilibrium thermodynamics Statistical mechanics Markov models Chemical kinetics
0.775585
0.988114
0.766367
Mass
Mass is an intrinsic property of a body. It was traditionally believed to be related to the quantity of matter in a body, until the discovery of the atom and particle physics. It was found that different atoms and different elementary particles, theoretically with the same amount of matter, have nonetheless different masses. Mass in modern physics has multiple definitions which are conceptually distinct, but physically equivalent. Mass can be experimentally defined as a measure of the body's inertia, meaning the resistance to acceleration (change of velocity) when a net force is applied. The object's mass also determines the strength of its gravitational attraction to other bodies. The SI base unit of mass is the kilogram (kg). In physics, mass is not the same as weight, even though mass is often determined by measuring the object's weight using a spring scale, rather than balance scale comparing it directly with known masses. An object on the Moon would weigh less than it does on Earth because of the lower gravity, but it would still have the same mass. This is because weight is a force, while mass is the property that (along with gravity) determines the strength of this force. In the Standard Model of physics, the mass of elementary particles is believed to be a result of their coupling with the Higgs boson in what is known as the Brout–Englert–Higgs mechanism. Phenomena There are several distinct phenomena that can be used to measure mass. Although some theorists have speculated that some of these phenomena could be independent of each other, current experiments have found no difference in results regardless of how it is measured: Inertial mass measures an object's resistance to being accelerated by a force (represented by the relationship ). Active gravitational mass determines the strength of the gravitational field generated by an object. Passive gravitational mass measures the gravitational force exerted on an object in a known gravitational field. The mass of an object determines its acceleration in the presence of an applied force. The inertia and the inertial mass describe this property of physical bodies at the qualitative and quantitative level respectively. According to Newton's second law of motion, if a body of fixed mass m is subjected to a single force F, its acceleration a is given by F/m. A body's mass also determines the degree to which it generates and is affected by a gravitational field. If a first body of mass mA is placed at a distance r (center of mass to center of mass) from a second body of mass mB, each body is subject to an attractive force , where is the "universal gravitational constant". This is sometimes referred to as gravitational mass. Repeated experiments since the 17th century have demonstrated that inertial and gravitational mass are identical; since 1915, this observation has been incorporated a priori in the equivalence principle of general relativity. Units of mass The International System of Units (SI) unit of mass is the kilogram (kg). The kilogram is 1000 grams (g), and was first defined in 1795 as the mass of one cubic decimetre of water at the melting point of ice. 
However, because precise measurement of a cubic decimetre of water at the specified temperature and pressure was difficult, in 1889 the kilogram was redefined as the mass of a metal object, and thus became independent of the metre and the properties of water, this being a copper prototype of the grave in 1793, the platinum Kilogramme des Archives in 1799, and the platinum–iridium International Prototype of the Kilogram (IPK) in 1889. However, the mass of the IPK and its national copies have been found to drift over time. The re-definition of the kilogram and several other units came into effect on 20 May 2019, following a final vote by the CGPM in November 2018. The new definition uses only invariant quantities of nature: the speed of light, the caesium hyperfine frequency, the Planck constant and the elementary charge. Non-SI units accepted for use with SI units include: the tonne (t) (or "metric ton"), equal to 1000 kg the electronvolt (eV), a unit of energy, used to express mass in units of eV/c2 through mass–energy equivalence the dalton (Da), equal to 1/12 of the mass of a free carbon-12 atom, approximately . Outside the SI system, other units of mass include: the slug (sl), an Imperial unit of mass (about 14.6 kg) the pound (lb), a unit of mass (about 0.45 kg), which is used alongside the similarly named pound (force) (about 4.5 N), a unit of force the Planck mass (about ), a quantity derived from fundamental constants the solar mass, defined as the mass of the Sun, primarily used in astronomy to compare large masses such as stars or galaxies (≈ ) the mass of a particle, as identified with its inverse Compton wavelength the mass of a star or black hole, as identified with its Schwarzschild radius. Definitions In physical science, one may distinguish conceptually between at least seven different aspects of mass, or seven physical notions that involve the concept of mass. Every experiment to date has shown these seven values to be proportional, and in some cases equal, and this proportionality gives rise to the abstract concept of mass. There are a number of ways mass can be measured or operationally defined: Inertial mass is a measure of an object's resistance to acceleration when a force is applied. It is determined by applying a force to an object and measuring the acceleration that results from that force. An object with small inertial mass will accelerate more than an object with large inertial mass when acted upon by the same force. One says the body of greater mass has greater inertia. Active gravitational mass is a measure of the strength of an object's gravitational flux (gravitational flux is equal to the surface integral of gravitational field over an enclosing surface). Gravitational field can be measured by allowing a small "test object" to fall freely and measuring its free-fall acceleration. For example, an object in free-fall near the Moon is subject to a smaller gravitational field, and hence accelerates more slowly, than the same object would if it were in free-fall near the Earth. The gravitational field near the Moon is weaker because the Moon has less active gravitational mass. Passive gravitational mass is a measure of the strength of an object's interaction with a gravitational field. Passive gravitational mass is determined by dividing an object's weight by its free-fall acceleration. 
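To make the list of units above concrete, here is a small Python sketch that converts between several of them by routing through kilograms. The conversion factors are rounded approximations of commonly quoted CODATA/IAU figures, not values taken from this article, and the dictionary and function names are just for illustration.

```python
# Approximate kilogram equivalents of the mass units discussed above (rounded values).
KG_PER_UNIT = {
    "kg":          1.0,
    "tonne":       1.0e3,
    "dalton":      1.66054e-27,   # unified atomic mass unit
    "eV/c^2":      1.78266e-36,   # from E = mc^2 with E = 1 eV
    "slug":        14.5939,
    "pound":       0.45359237,    # international avoirdupois pound
    "Planck mass": 2.176e-8,
    "solar mass":  1.989e30,
}

def convert_mass(value, from_unit, to_unit):
    """Convert a mass by routing through kilograms."""
    return value * KG_PER_UNIT[from_unit] / KG_PER_UNIT[to_unit]

print(convert_mass(1.0, "solar mass", "kg"))    # ~1.99e30 kg
print(convert_mass(1.0, "dalton", "eV/c^2"))    # ~9.31e8, i.e. ~931.5 MeV/c^2
print(convert_mass(70.0, "kg", "pound"))        # ~154 lb
```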
Two objects within the same gravitational field will experience the same acceleration; however, the object with a smaller passive gravitational mass will experience a smaller force (less weight) than the object with a larger passive gravitational mass. According to relativity, mass is nothing else than the rest energy of a system of particles, meaning the energy of that system in a reference frame where it has zero momentum. Mass can be converted into other forms of energy according to the principle of mass–energy equivalence. This equivalence is exemplified in a large number of physical processes including pair production, beta decay and nuclear fusion. Pair production and nuclear fusion are processes in which measurable amounts of mass are converted to kinetic energy or vice versa. Curvature of spacetime is a relativistic manifestation of the existence of mass. Such curvature is extremely weak and difficult to measure. For this reason, curvature was not discovered until after it was predicted by Einstein's theory of general relativity. Extremely precise atomic clocks on the surface of the Earth, for example, are found to measure less time (run slower) when compared to similar clocks in space. This difference in elapsed time is a form of curvature called gravitational time dilation. Other forms of curvature have been measured using the Gravity Probe B satellite. Quantum mass manifests itself as a difference between an object's quantum frequency and its wave number. The quantum mass of a particle is proportional to the inverse Compton wavelength and can be determined through various forms of spectroscopy. In relativistic quantum mechanics, mass is one of the irreducible representation labels of the Poincaré group. Weight vs. mass In everyday usage, mass and "weight" are often used interchangeably. For instance, a person's weight may be stated as 75 kg. In a constant gravitational field, the weight of an object is proportional to its mass, and it is unproblematic to use the same unit for both concepts. But because of slight differences in the strength of the Earth's gravitational field at different places, the distinction becomes important for measurements with a precision better than a few percent, and for places far from the surface of the Earth, such as in space or on other planets. Conceptually, "mass" (measured in kilograms) refers to an intrinsic property of an object, whereas "weight" (measured in newtons) measures an object's resistance to deviating from its current course of free fall, which can be influenced by the nearby gravitational field. No matter how strong the gravitational field, objects in free fall are weightless, though they still have mass. The force known as "weight" is proportional to mass and acceleration in all situations where the mass is accelerated away from free fall. For example, when a body is at rest in a gravitational field (rather than in free fall), it must be accelerated by a force from a scale or the surface of a planetary body such as the Earth or the Moon. This force keeps the object from going into free fall. Weight is the opposing force in such circumstances and is thus determined by the acceleration of free fall. On the surface of the Earth, for example, an object with a mass of 50 kilograms weighs 491 newtons, which means that 491 newtons is being applied to keep the object from going into free fall. 
By contrast, on the surface of the Moon, the same object still has a mass of 50 kilograms but weighs only 81.5 newtons, because only 81.5 newtons is required to keep this object from going into a free fall on the moon. Restated in mathematical terms, on the surface of the Earth, the weight W of an object is related to its mass m by , where is the acceleration due to Earth's gravitational field, (expressed as the acceleration experienced by a free-falling object). For other situations, such as when objects are subjected to mechanical accelerations from forces other than the resistance of a planetary surface, the weight force is proportional to the mass of an object multiplied by the total acceleration away from free fall, which is called the proper acceleration. Through such mechanisms, objects in elevators, vehicles, centrifuges, and the like, may experience weight forces many times those caused by resistance to the effects of gravity on objects, resulting from planetary surfaces. In such cases, the generalized equation for weight W of an object is related to its mass m by the equation , where a is the proper acceleration of the object caused by all influences other than gravity. (Again, if gravity is the only influence, such as occurs when an object falls freely, its weight will be zero). Inertial vs. gravitational mass Although inertial mass, passive gravitational mass and active gravitational mass are conceptually distinct, no experiment has ever unambiguously demonstrated any difference between them. In classical mechanics, Newton's third law implies that active and passive gravitational mass must always be identical (or at least proportional), but the classical theory offers no compelling reason why the gravitational mass has to equal the inertial mass. That it does is merely an empirical fact. Albert Einstein developed his general theory of relativity starting with the assumption that the inertial and passive gravitational masses are the same. This is known as the equivalence principle. The particular equivalence often referred to as the "Galilean equivalence principle" or the "weak equivalence principle" has the most important consequence for freely falling objects. Suppose an object has inertial and gravitational masses m and M, respectively. If the only force acting on the object comes from a gravitational field g, the force on the object is: Given this force, the acceleration of the object can be determined by Newton's second law: Putting these together, the gravitational acceleration is given by: This says that the ratio of gravitational to inertial mass of any object is equal to some constant K if and only if all objects fall at the same rate in a given gravitational field. This phenomenon is referred to as the "universality of free-fall". In addition, the constant K can be taken as 1 by defining our units appropriately. The first experiments demonstrating the universality of free-fall were—according to scientific 'folklore'—conducted by Galileo obtained by dropping objects from the Leaning Tower of Pisa. This is most likely apocryphal: he is more likely to have performed his experiments with balls rolling down nearly frictionless inclined planes to slow the motion and increase the timing accuracy. Increasingly precise experiments have been performed, such as those performed by Loránd Eötvös, using the torsion balance pendulum, in 1889. , no deviation from universality, and thus from Galilean equivalence, has ever been found, at least to the precision 10−6. 
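As a small numerical illustration of the two points made in this section, that weight depends on local gravity while mass does not, and that free-fall acceleration is independent of mass, here is a short Python sketch. The surface-gravity values are rounded approximations and are not taken from this article.

```python
# Approximate surface gravitational accelerations.
g_earth = 9.81   # m/s^2
g_moon = 1.63    # m/s^2, about one sixth of Earth's

m = 50.0  # kg, the mass used in the example above
print(f"Weight on Earth: {m * g_earth:.1f} N")   # ~490-491 N (g varies slightly with location)
print(f"Weight on Moon:  {m * g_moon:.1f} N")    # ~81.5 N, same mass, smaller weight

# Universality of free fall: with gravitational mass equal to inertial mass,
# a = F/m = (m*g)/m = g for every body, whatever its mass.
for mass in (0.1, 50.0, 1000.0):
    force = mass * g_earth            # gravitational force on the body
    print(mass, "kg ->", force / mass, "m/s^2")
```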
More precise experimental efforts are still being carried out. The universality of free-fall only applies to systems in which gravity is the only acting force. All other forces, especially friction and air resistance, must be absent or at least negligible. For example, if a hammer and a feather are dropped from the same height through the air on Earth, the feather will take much longer to reach the ground; the feather is not really in free-fall because the force of air resistance upwards against the feather is comparable to the downward force of gravity. On the other hand, if the experiment is performed in a vacuum, in which there is no air resistance, the hammer and the feather should hit the ground at exactly the same time (assuming the acceleration of both objects towards each other, and of the ground towards both objects, for its own part, is negligible). This can easily be done in a high school laboratory by dropping the objects in transparent tubes that have the air removed with a vacuum pump. It is even more dramatic when done in an environment that naturally has a vacuum, as David Scott did on the surface of the Moon during Apollo 15. A stronger version of the equivalence principle, known as the Einstein equivalence principle or the strong equivalence principle, lies at the heart of the general theory of relativity. Einstein's equivalence principle states that within sufficiently small regions of spacetime, it is impossible to distinguish between a uniform acceleration and a uniform gravitational field. Thus, the theory postulates that the force acting on a massive object caused by a gravitational field is a result of the object's tendency to move in a straight line (in other words its inertia) and should therefore be a function of its inertial mass and the strength of the gravitational field. Origin In theoretical physics, a mass generation mechanism is a theory which attempts to explain the origin of mass from the most fundamental laws of physics. To date, a number of different models have been proposed which advocate different views of the origin of mass. The problem is complicated by the fact that the notion of mass is strongly related to the gravitational interaction but a theory of the latter has not been yet reconciled with the currently popular model of particle physics, known as the Standard Model. Pre-Newtonian concepts Weight as an amount The concept of amount is very old and predates recorded history. The concept of "weight" would incorporate "amount" and acquire a double meaning that was not clearly recognized as such. Humans, at some early era, realized that the weight of a collection of similar objects was directly proportional to the number of objects in the collection: where W is the weight of the collection of similar objects and n is the number of objects in the collection. Proportionality, by definition, implies that two values have a constant ratio: , or equivalently An early use of this relationship is a balance scale, which balances the force of one object's weight against the force of another object's weight. The two sides of a balance scale are close enough that the objects experience similar gravitational fields. Hence, if they have similar masses then their weights will also be similar. This allows the scale, by comparing weights, to also compare masses. Consequently, historical weight standards were often defined in terms of amounts. The Romans, for example, used the carob seed (carat or siliqua) as a measurement standard. 
If an object's weight was equivalent to 1728 carob seeds, then the object was said to weigh one Roman pound. If, on the other hand, the object's weight was equivalent to 144 carob seeds then the object was said to weigh one Roman ounce (uncia). The Roman pound and ounce were both defined in terms of different sized collections of the same common mass standard, the carob seed. The ratio of a Roman ounce (144 carob seeds) to a Roman pound (1728 carob seeds) was: Planetary motion In 1600 AD, Johannes Kepler sought employment with Tycho Brahe, who had some of the most precise astronomical data available. Using Brahe's precise observations of the planet Mars, Kepler spent the next five years developing his own method for characterizing planetary motion. In 1609, Johannes Kepler published his three laws of planetary motion, explaining how the planets orbit the Sun. In Kepler's final planetary model, he described planetary orbits as following elliptical paths with the Sun at a focal point of the ellipse. Kepler discovered that the square of the orbital period of each planet is directly proportional to the cube of the semi-major axis of its orbit, or equivalently, that the ratio of these two values is constant for all planets in the Solar System. On 25 August 1609, Galileo Galilei demonstrated his first telescope to a group of Venetian merchants, and in early January 1610, Galileo observed four dim objects near Jupiter, which he mistook for stars. However, after a few days of observation, Galileo realized that these "stars" were in fact orbiting Jupiter. These four objects (later named the Galilean moons in honor of their discoverer) were the first celestial bodies observed to orbit something other than the Earth or Sun. Galileo continued to observe these moons over the next eighteen months, and by the middle of 1611, he had obtained remarkably accurate estimates for their periods. Galilean free fall Sometime prior to 1638, Galileo turned his attention to the phenomenon of objects in free fall, attempting to characterize these motions. Galileo was not the first to investigate Earth's gravitational field, nor was he the first to accurately describe its fundamental characteristics. However, Galileo's reliance on scientific experimentation to establish physical principles would have a profound effect on future generations of scientists. It is unclear if these were just hypothetical experiments used to illustrate a concept, or if they were real experiments performed by Galileo, but the results obtained from these experiments were both realistic and compelling. A biography by Galileo's pupil Vincenzo Viviani stated that Galileo had dropped balls of the same material, but different masses, from the Leaning Tower of Pisa to demonstrate that their time of descent was independent of their mass. In support of this conclusion, Galileo had advanced the following theoretical argument: He asked if two bodies of different masses and different rates of fall are tied by a string, does the combined system fall faster because it is now more massive, or does the lighter body in its slower fall hold back the heavier body? The only convincing resolution to this question is that all bodies must fall at the same rate. A later experiment was described in Galileo's Two New Sciences published in 1638. One of Galileo's fictional characters, Salviati, describes an experiment using a bronze ball and a wooden ramp. 
The wooden ramp was "12 cubits long, half a cubit wide and three finger-breadths thick" with a straight, smooth, polished groove. The groove was lined with "parchment, also smooth and polished as possible". And into this groove was placed "a hard, smooth and very round bronze ball". The ramp was inclined at various angles to slow the acceleration enough so that the elapsed time could be measured. The ball was allowed to roll a known distance down the ramp, and the time taken for the ball to move the known distance was measured. The time was measured using a water clock described as follows: a large vessel of water placed in an elevated position; to the bottom of this vessel was soldered a pipe of small diameter giving a thin jet of water, which we collected in a small glass during the time of each descent, whether for the whole length of the channel or for a part of its length; the water thus collected was weighed, after each descent, on a very accurate balance; the differences and ratios of these weights gave us the differences and ratios of the times, and this with such accuracy that although the operation was repeated many, many times, there was no appreciable discrepancy in the results. Galileo found that for an object in free fall, the distance that the object has fallen is always proportional to the square of the elapsed time: Galileo had shown that objects in free fall under the influence of the Earth's gravitational field have a constant acceleration, and Galileo's contemporary, Johannes Kepler, had shown that the planets follow elliptical paths under the influence of the Sun's gravitational mass. However, Galileo's free fall motions and Kepler's planetary motions remained distinct during Galileo's lifetime. Mass as distinct from weight According to K. M. Browne: "Kepler formed a [distinct] concept of mass ('amount of matter' (copia materiae)), but called it 'weight' as did everyone at that time." Finally, in 1686, Newton gave this distinct concept its own name. In the first paragraph of Principia, Newton defined quantity of matter as “density and bulk conjunctly”, and mass as quantity of matter. Newtonian mass Robert Hooke had published his concept of gravitational forces in 1674, stating that all celestial bodies have an attraction or gravitating power towards their own centers, and also attract all the other celestial bodies that are within the sphere of their activity. He further stated that gravitational attraction increases by how much nearer the body wrought upon is to its own center. In correspondence with Isaac Newton from 1679 and 1680, Hooke conjectured that gravitational forces might decrease according to the double of the distance between the two bodies. Hooke urged Newton, who was a pioneer in the development of calculus, to work through the mathematical details of Keplerian orbits to determine if Hooke's hypothesis was correct. Newton's own investigations verified that Hooke was correct, but due to personal differences between the two men, Newton chose not to reveal this to Hooke. Isaac Newton kept quiet about his discoveries until 1684, at which time he told a friend, Edmond Halley, that he had solved the problem of gravitational orbits, but had misplaced the solution in his office. After being encouraged by Halley, Newton decided to develop his ideas about gravity and publish all of his findings. 
In November 1684, Isaac Newton sent a document to Edmund Halley, now lost but presumed to have been titled De motu corporum in gyrum (Latin for "On the motion of bodies in an orbit"). Halley presented Newton's findings to the Royal Society of London, with a promise that a fuller presentation would follow. Newton later recorded his ideas in a three-book set, entitled Philosophiæ Naturalis Principia Mathematica (English: Mathematical Principles of Natural Philosophy). The first was received by the Royal Society on 28 April 1685–86; the second on 2 March 1686–87; and the third on 6 April 1686–87. The Royal Society published Newton's entire collection at their own expense in May 1686–87. Isaac Newton had bridged the gap between Kepler's gravitational mass and Galileo's gravitational acceleration, resulting in the discovery of the following relationship which governed both of these: where g is the apparent acceleration of a body as it passes through a region of space where gravitational fields exist, μ is the gravitational mass (standard gravitational parameter) of the body causing gravitational fields, and R is the radial coordinate (the distance between the centers of the two bodies). By finding the exact relationship between a body's gravitational mass and its gravitational field, Newton provided a second method for measuring gravitational mass. The mass of the Earth can be determined using Kepler's method (from the orbit of Earth's Moon), or it can be determined by measuring the gravitational acceleration on the Earth's surface, and multiplying that by the square of the Earth's radius. The mass of the Earth is approximately three-millionths of the mass of the Sun. To date, no other accurate method for measuring gravitational mass has been discovered. Newton's cannonball Newton's cannonball was a thought experiment used to bridge the gap between Galileo's gravitational acceleration and Kepler's elliptical orbits. It appeared in Newton's 1728 book A Treatise of the System of the World. According to Galileo's concept of gravitation, a dropped stone falls with constant acceleration down towards the Earth. However, Newton explains that when a stone is thrown horizontally (meaning sideways or perpendicular to Earth's gravity) it follows a curved path. "For a stone projected is by the pressure of its own weight forced out of the rectilinear path, which by the projection alone it should have pursued, and made to describe a curve line in the air; and through that crooked way is at last brought down to the ground. And the greater the velocity is with which it is projected, the farther it goes before it falls to the Earth." Newton further reasons that if an object were "projected in an horizontal direction from the top of a high mountain" with sufficient velocity, "it would reach at last quite beyond the circumference of the Earth, and return to the mountain from which it was projected." Universal gravitational mass In contrast to earlier theories (e.g. celestial spheres) which stated that the heavens were made of entirely different material, Newton's theory of mass was groundbreaking partly because it introduced universal gravitational mass: every object has gravitational mass, and therefore, every object generates a gravitational field. Newton further assumed that the strength of each object's gravitational field would decrease according to the square of the distance to that object. 
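The two routes to the Earth's gravitational mass mentioned above, Kepler's method applied to the Moon's orbit and the product of surface gravity with the squared radius, can be compared numerically. The following is a minimal Python sketch using rounded textbook values for the Moon's orbit, the Earth's radius and the gravitational constant; none of the numbers are taken from this article.

```python
import math

# Route 1: surface gravity -> gravitational parameter mu = g * R^2
g = 9.81            # m/s^2, surface gravitational acceleration (approximate)
R_earth = 6.371e6   # m, mean radius of the Earth (approximate)
mu_surface = g * R_earth**2

# Route 2: Kepler's method from the Moon's orbit, mu ~ 4*pi^2 * a^3 / T^2
a_moon = 3.844e8            # m, semi-major axis of the Moon's orbit (approximate)
T_moon = 27.32 * 86400.0    # s, sidereal month (approximate)
mu_orbit = 4 * math.pi**2 * a_moon**3 / T_moon**2

print(f"mu from surface gravity: {mu_surface:.3e} m^3/s^2")   # ~3.98e14
print(f"mu from Moon's orbit:    {mu_orbit:.3e} m^3/s^2")     # ~4.02e14

# The ~1% difference arises mainly because the Moon's own mass is not negligible:
# the orbital value is really G*(M_earth + M_moon).
G = 6.674e-11  # m^3 kg^-1 s^-2 (approximate)
print(f"Earth mass from surface route: {mu_surface / G:.3e} kg")   # ~5.97e24 kg
```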
If a large collection of small objects were formed into a giant spherical body such as the Earth or Sun, Newton calculated the collection would create a gravitational field proportional to the total mass of the body, and inversely proportional to the square of the distance to the body's center. For example, according to Newton's theory of universal gravitation, each carob seed produces a gravitational field. Therefore, if one were to gather an immense number of carob seeds and form them into an enormous sphere, then the gravitational field of the sphere would be proportional to the number of carob seeds in the sphere. Hence, it should be theoretically possible to determine the exact number of carob seeds that would be required to produce a gravitational field similar to that of the Earth or Sun. In fact, by unit conversion it is a simple matter of abstraction to realize that any traditional mass unit can theoretically be used to measure gravitational mass. Measuring gravitational mass in terms of traditional mass units is simple in principle, but extremely difficult in practice. According to Newton's theory, all objects produce gravitational fields and it is theoretically possible to collect an immense number of small objects and form them into an enormous gravitating sphere. However, from a practical standpoint, the gravitational fields of small objects are extremely weak and difficult to measure. Newton's books on universal gravitation were published in the 1680s, but the first successful measurement of the Earth's mass in terms of traditional mass units, the Cavendish experiment, did not occur until 1797, over a hundred years later. Henry Cavendish found that the Earth's density was 5.448 ± 0.033 times that of water. As of 2009, the Earth's mass in kilograms is only known to around five digits of accuracy, whereas its gravitational mass is known to over nine significant figures. Given two objects A and B, of masses MA and MB, separated by a displacement RAB, Newton's law of gravitation states that each object exerts a gravitational force on the other, of magnitude , where G is the universal gravitational constant. The above statement may be reformulated in the following way: if g is the magnitude at a given location in a gravitational field, then the gravitational force on an object with gravitational mass M is . This is the basis by which masses are determined by weighing. In simple spring scales, for example, the force F is proportional to the displacement of the spring beneath the weighing pan, as per Hooke's law, and the scales are calibrated to take g into account, allowing the mass M to be read off. Assuming the gravitational field is equivalent on both sides of the balance, a balance measures relative weight, giving the relative gravitation mass of each object. Inertial mass Mass was traditionally believed to be a measure of the quantity of matter in a physical body, equal to the "amount of matter" in an object. For example, Barre´ de Saint-Venant argued in 1851 that every object contains a number of "points" (basically, interchangeable elementary particles), and that mass is proportional to the number of points the object contains. (In practice, this "amount of matter" definition is adequate for most of classical mechanics, and sometimes remains in use in basic education, if the priority is to teach the difference between mass from weight.) 
This traditional "amount of matter" belief was contradicted by the fact that different atoms (and, later, different elementary particles) can have different masses, and was further contradicted by Einstein's theory of relativity (1905), which showed that the measurable mass of an object increases when energy is added to it (for example, by increasing its temperature or forcing it near an object that electrically repels it.) This motivates a search for a different definition of mass that is more accurate than the traditional definition of "the amount of matter in an object". Inertial mass is the mass of an object measured by its resistance to acceleration. This definition has been championed by Ernst Mach and has since been developed into the notion of operationalism by Percy W. Bridgman. The simple classical mechanics definition of mass differs slightly from the definition in the theory of special relativity, but the essential meaning is the same. In classical mechanics, according to Newton's second law, we say that a body has a mass m if, at any instant of time, it obeys the equation of motion where F is the resultant force acting on the body and a is the acceleration of the body's centre of mass. For the moment, we will put aside the question of what "force acting on the body" actually means. This equation illustrates how mass relates to the inertia of a body. Consider two objects with different masses. If we apply an identical force to each, the object with a bigger mass will experience a smaller acceleration, and the object with a smaller mass will experience a bigger acceleration. We might say that the larger mass exerts a greater "resistance" to changing its state of motion in response to the force. However, this notion of applying "identical" forces to different objects brings us back to the fact that we have not really defined what a force is. We can sidestep this difficulty with the help of Newton's third law, which states that if one object exerts a force on a second object, it will experience an equal and opposite force. To be precise, suppose we have two objects of constant inertial masses m1 and m2. We isolate the two objects from all other physical influences, so that the only forces present are the force exerted on m1 by m2, which we denote F12, and the force exerted on m2 by m1, which we denote F21. Newton's second law states that where a1 and a2 are the accelerations of m1 and m2, respectively. Suppose that these accelerations are non-zero, so that the forces between the two objects are non-zero. This occurs, for example, if the two objects are in the process of colliding with one another. Newton's third law then states that and thus If is non-zero, the fraction is well-defined, which allows us to measure the inertial mass of m1. In this case, m2 is our "reference" object, and we can define its mass m as (say) 1 kilogram. Then we can measure the mass of any other object in the universe by colliding it with the reference object and measuring the accelerations. Additionally, mass relates a body's momentum p to its linear velocity v: , and the body's kinetic energy K to its velocity: . The primary difficulty with Mach's definition of mass is that it fails to take into account the potential energy (or binding energy) needed to bring two masses sufficiently close to one another to perform the measurement of mass. 
This is most vividly demonstrated by comparing the mass of the proton in the nucleus of deuterium, to the mass of the proton in free space (which is greater by about 0.239%—this is due to the binding energy of deuterium). Thus, for example, if the reference weight m2 is taken to be the mass of the neutron in free space, and the relative accelerations for the proton and neutron in deuterium are computed, then the above formula over-estimates the mass m1 (by 0.239%) for the proton in deuterium. At best, Mach's formula can only be used to obtain ratios of masses, that is, as m1 / m2 = |a2| / |a1|. An additional difficulty was pointed out by Henri Poincaré, which is that the measurement of instantaneous acceleration is impossible: unlike the measurement of time or distance, there is no way to measure acceleration with a single measurement; one must make multiple measurements (of position, time, etc.) and perform a computation to obtain the acceleration. Poincaré termed this to be an "insurmountable flaw" in the Mach definition of mass. Atomic masses Typically, the mass of objects is measured in terms of the kilogram, which since 2019 is defined in terms of fundamental constants of nature. The mass of an atom or other particle can be compared more precisely and more conveniently to that of another atom, and thus scientists developed the dalton (also known as the unified atomic mass unit). By definition, 1 Da (one dalton) is exactly one-twelfth of the mass of a carbon-12 atom, and thus, a carbon-12 atom has a mass of exactly 12 Da.
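The binding-energy effect described above can be made concrete with the dalton just defined. A minimal Python sketch, using rounded particle masses that are approximate and not taken from this article: the deuteron is lighter than a free proton plus a free neutron, and the difference corresponds to the deuterium binding energy.

```python
# Approximate masses in daltons (unified atomic mass units); rounded illustrative values.
m_proton = 1.007276
m_neutron = 1.008665
m_deuteron = 2.013553

MEV_PER_DALTON = 931.494   # energy equivalent of 1 Da, approximate

mass_defect = m_proton + m_neutron - m_deuteron        # ~0.00239 Da
binding_energy = mass_defect * MEV_PER_DALTON          # ~2.22 MeV

print(f"mass defect:     {mass_defect:.6f} Da")
print(f"binding energy:  {binding_energy:.3f} MeV")
print(f"relative to m_p: {mass_defect / m_proton:.3%}")   # a few tenths of a percent
```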
Because the relativistic mass is proportional to the energy, it has gradually fallen into disuse among physicists. There is disagreement over whether the concept remains useful pedagogically. In bound systems, the binding energy must often be subtracted from the mass of the unbound system, because binding energy commonly leaves the system at the time it is bound. The mass of the system changes in this process merely because the system was not closed during the binding process, so the energy escaped. For example, the binding energy of atomic nuclei is often lost in the form of gamma rays when the nuclei are formed, leaving nuclides which have less mass than the free particles (nucleons) of which they are composed. Mass–energy equivalence also holds in macroscopic systems. For example, if one takes exactly one kilogram of ice, and applies heat, the mass of the resulting melt-water will be more than a kilogram: it will include the mass from the thermal energy (latent heat) used to melt the ice; this follows from the conservation of energy. This number is small but not negligible: about 3.7 nanograms. It is given by the latent heat of melting ice (334 kJ/kg) divided by the speed of light squared (c2 ≈ ). General relativity In general relativity, the equivalence principle is the equivalence of gravitational and inertial mass. At the core of this assertion is Albert Einstein's idea that the gravitational force as experienced locally while standing on a massive body (such as the Earth) is the same as the pseudo-force experienced by an observer in a non-inertial (i.e. accelerated) frame of reference. However, it turns out that it is impossible to find an objective general definition for the concept of invariant mass in general relativity. At the core of the problem is the non-linearity of the Einstein field equations, making it impossible to write the gravitational field energy as part of the stress–energy tensor in a way that is invariant for all observers. For a given observer, this can be achieved by the stress–energy–momentum pseudotensor. In quantum physics In classical mechanics, the inert mass of a particle appears in the Euler–Lagrange equation as a parameter m: After quantization, replacing the position vector x with a wave function, the parameter m appears in the kinetic energy operator: In the ostensibly covariant (relativistically invariant) Dirac equation, and in natural units, this becomes: where the "mass" parameter m is now simply a constant associated with the quantum described by the wave function ψ. In the Standard Model of particle physics as developed in the 1960s, this term arises from the coupling of the field ψ to an additional field Φ, the Higgs field. In the case of fermions, the Higgs mechanism results in the replacement of the term mψ in the Lagrangian with . This shifts the explanandum of the value for the mass of each elementary particle to the value of the unknown coupling constant Gψ. Tachyonic particles and imaginary (complex) mass A tachyonic field, or simply tachyon, is a quantum field with an imaginary mass. Although tachyons (particles that move faster than light) are a purely hypothetical concept not generally believed to exist, fields with imaginary mass have come to play an important role in modern physics and are discussed in popular books on physics. 
Under no circumstances do any excitations ever propagate faster than light in such theories—the presence or absence of a tachyonic mass has no effect whatsoever on the maximum velocity of signals (there is no violation of causality). While the field may have imaginary mass, any physical particles do not; the "imaginary mass" shows that the system becomes unstable, and sheds the instability by undergoing a type of phase transition called tachyon condensation (closely related to second order phase transitions) that results in symmetry breaking in current models of particle physics. The term "tachyon" was coined by Gerald Feinberg in a 1967 paper, but it was soon realized that Feinberg's model in fact did not allow for superluminal speeds. Instead, the imaginary mass creates an instability in the configuration:- any configuration in which one or more field excitations are tachyonic will spontaneously decay, and the resulting configuration contains no physical tachyons. This process is known as tachyon condensation. Well known examples include the condensation of the Higgs boson in particle physics, and ferromagnetism in condensed matter physics. Although the notion of a tachyonic imaginary mass might seem troubling because there is no classical interpretation of an imaginary mass, the mass is not quantized. Rather, the scalar field is; even for tachyonic quantum fields, the field operators at spacelike separated points still commute (or anticommute), thus preserving causality. Therefore, information still does not propagate faster than light, and solutions grow exponentially, but not superluminally (there is no violation of causality). Tachyon condensation drives a physical system that has reached a local limit and might naively be expected to produce physical tachyons, to an alternate stable state where no physical tachyons exist. Once the tachyonic field reaches the minimum of the potential, its quanta are not tachyons any more but rather are ordinary particles with a positive mass-squared. This is a special case of the general rule, where unstable massive particles are formally described as having a complex mass, with the real part being their mass in the usual sense, and the imaginary part being the decay rate in natural units. However, in quantum field theory, a particle (a "one-particle state") is roughly defined as a state which is constant over time; i.e., an eigenvalue of the Hamiltonian. An unstable particle is a state which is only approximately constant over time; If it exists long enough to be measured, it can be formally described as having a complex mass, with the real part of the mass greater than its imaginary part. If both parts are of the same magnitude, this is interpreted as a resonance appearing in a scattering process rather than a particle, as it is considered not to exist long enough to be measured independently of the scattering process. In the case of a tachyon, the real part of the mass is zero, and hence no concept of a particle can be attributed to it. In a Lorentz invariant theory, the same formulas that apply to ordinary slower-than-light particles (sometimes called "bradyons" in discussions of tachyons) must also apply to tachyons. 
In particular the energy–momentum relation E² = (pc)² + (mc²)² (where p is the relativistic momentum of the bradyon and m is its rest mass) should still apply, along with the formula for the total energy of a particle: E = mc² / √(1 − v²/c²). This equation shows that the total energy of a particle (bradyon or tachyon) contains a contribution from its rest mass (the "rest mass–energy") and a contribution from its motion, the kinetic energy. When v is larger than c, the denominator in the equation for the energy is "imaginary", as the value under the radical is negative. Because the total energy must be real, the numerator must also be imaginary: i.e. the rest mass m must be imaginary, as a pure imaginary number divided by another pure imaginary number is a real number. See also Mass versus weight Effective mass (spring–mass system) Effective mass (solid-state physics) Extension (metaphysics) International System of Quantities 2019 revision of the SI Notes References External links Jim Baggott (27 September 2017). The Concept of Mass (video) published by the Royal Institution on YouTube. Physical quantities SI base quantities Moment (physics) Extensive quantities
0.767252
0.998821
0.766347
Hartree
The hartree (symbol: Eh), also known as the Hartree energy, is the unit of energy in the atomic units system, named after the British physicist Douglas Hartree. Its CODATA recommended value is Eh ≈ 4.3597×10⁻¹⁸ J (about 27.211 eV). The hartree is approximately the negative electric potential energy of the electron in a hydrogen atom in its ground state and, by the virial theorem, approximately twice its ionization energy; the relationships are not exact because of the finite mass of the nucleus of the hydrogen atom and relativistic corrections. The hartree is usually used as a unit of energy in atomic physics and computational chemistry: for experimental measurements at the atomic scale, the electronvolt (eV) or the reciprocal centimetre (cm−1) are much more widely used. Other relationships Eh = 2 Ry = 2 R∞hc = ħ²/(me a0²) = e²/(4πε0 a0) = α² me c² ≘ 27.211 eV ≘ 4.360×10⁻¹⁸ J ≘ 2625.5 kJ/mol ≘ 627.5 kcal/mol, where: ħ is the reduced Planck constant, me is the electron mass, e is the elementary charge, a0 is the Bohr radius, ε0 is the electric constant, c is the speed of light in vacuum, and α is the fine-structure constant. Effective hartree units are used in semiconductor physics, where ε0 is replaced by ε0ε (ε is the static dielectric constant) and the electron mass is replaced by the effective band mass m*. The effective hartree in semiconductors becomes small enough to be measured in millielectronvolts (meV). References Units of energy Physical constants
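As a numerical cross-check of the relationships above, the hartree can be computed from physical constants in several equivalent ways. A minimal Python sketch with rounded constant values (approximate CODATA-style figures, not taken from this article):

```python
# Rounded physical constants (approximate values).
hbar = 1.054572e-34    # J s, reduced Planck constant
m_e = 9.109384e-31     # kg, electron mass
c = 2.997925e8         # m/s, speed of light in vacuum
alpha = 7.297353e-3    # fine-structure constant
a0 = 5.291772e-11      # m, Bohr radius
eV = 1.602177e-19      # J per electronvolt

E_h_1 = hbar**2 / (m_e * a0**2)   # hbar^2 / (m_e a0^2)
E_h_2 = alpha**2 * m_e * c**2     # alpha^2 m_e c^2

print(f"E_h = {E_h_1:.6e} J = {E_h_1 / eV:.4f} eV")   # ~4.3597e-18 J, ~27.211 eV
print(f"E_h = {E_h_2:.6e} J (same value via alpha^2 m_e c^2)")
```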
0.777157
0.986078
0.766338
Climate change mitigation
Climate change mitigation (or decarbonisation) is action to limit the greenhouse gases in the atmosphere that cause climate change. Climate change mitigation actions include conserving energy and replacing fossil fuels with clean energy sources. Secondary mitigation strategies include changes to land use and removing carbon dioxide (CO2) from the atmosphere. Current climate change mitigation policies are insufficient as they would still result in global warming of about 2.7 °C by 2100, significantly above the 2015 Paris Agreement's goal of limiting global warming to below 2 °C. Solar energy and wind power can replace fossil fuels at the lowest cost compared to other renewable energy options. The availability of sunshine and wind is variable and can require electrical grid upgrades, such as using long-distance electricity transmission to group a range of power sources. Energy storage can also be used to even out power output, and demand management can limit power use when power generation is low. Cleanly generated electricity can usually replace fossil fuels for powering transportation, heating buildings, and running industrial processes. Certain processes are more difficult to decarbonise, such as air travel and cement production. Carbon capture and storage (CCS) can be an option to reduce net emissions in these circumstances, although fossil fuel power plants with CCS technology is currently a high cost climate change mitigation strategy. Human land use changes such as agriculture and deforestation cause about 1/4th of climate change. These changes impact how much is absorbed by plant matter and how much organic matter decays or burns to release . These changes are part of the fast carbon cycle, whereas fossil fuels release that was buried underground as part of the slow carbon cycle. Methane is a short lived greenhouse gas that is produced by decaying organic matter and livestock, as well as fossil fuel extraction. Land use changes can also impact precipitation patterns and the reflectivity of the surface of the Earth. It is possible to cut emissions from agriculture by reducing food waste, switching to a more plant-based diet (also referred to as low-carbon diet), and by improving farming processes. Various policies can encourage climate change mitigation. Carbon pricing systems have been set up that either tax emissions or cap total emissions and trade emission credits. Fossil fuel subsidies can be eliminated in favor of clean energy subsidies, and incentives offered for installing energy efficiency measures or switching to electric power sources. Another issue is overcoming environmental objections when constructing new clean energy sources and making grid modifications. Definitions and scope Climate change mitigation aims to sustain ecosystems to maintain human civilisation. This requires drastic cuts in greenhouse gas emissions . The Intergovernmental Panel on Climate Change (IPCC) defines mitigation (of climate change) as "a human intervention to reduce emissions or enhance the sinks of greenhouse gases". It is possible to approach various mitigation measures in parallel. This is because there is no single pathway to limit global warming to 1.5 or 2 °C. 
There are four types of measures: sustainable energy and sustainable transport; energy conservation, including efficient energy use; sustainable agriculture and green industrial policy; and enhancing carbon sinks and carbon dioxide removal (CDR), including carbon sequestration. The IPCC defined carbon dioxide removal as "Anthropogenic activities removing carbon dioxide from the atmosphere and durably storing it in geological, terrestrial, or ocean reservoirs, or in products. It includes existing and potential anthropogenic enhancement of biological or geochemical sinks and direct air carbon dioxide capture and storage (DACCS), but excludes natural uptake not directly caused by human activities." Relationship with solar radiation modification (SRM) While solar radiation modification (SRM) could reduce surface temperatures, it temporarily masks climate change rather than addressing the root cause, which is greenhouse gases. SRM would work by altering how much solar radiation the Earth absorbs. Examples include reducing the amount of sunlight reaching the surface, reducing the optical thickness and lifetime of clouds, and changing the ability of the surface to reflect radiation. The IPCC describes SRM as a climate risk reduction strategy or supplementary option rather than a climate mitigation option. The terminology in this area is still evolving. Experts sometimes use the term geoengineering or climate engineering in the scientific literature for both CDR and SRM, if the techniques are used at a global scale. IPCC reports no longer use the terms geoengineering or climate engineering. Emission trends and pledges Greenhouse gas emissions from human activities strengthen the greenhouse effect. This contributes to climate change. Most of this is carbon dioxide from burning fossil fuels: coal, oil, and natural gas. Human-caused emissions have increased atmospheric carbon dioxide by about 50% over pre-industrial levels. Emissions in the 2010s averaged a record 56 billion tonnes (Gt) a year. In 2016, energy for electricity, heat and transport was responsible for 73.2% of GHG emissions. Direct industrial processes accounted for 5.2%, waste for 3.2% and agriculture, forestry and land use for 18.4%. Electricity generation and transport are major emitters. The largest single source is coal-fired power stations with 20% of greenhouse gas emissions. Deforestation and other changes in land use also emit carbon dioxide and methane. The largest sources of anthropogenic methane emissions are agriculture, and gas venting and fugitive emissions from the fossil-fuel industry. The largest agricultural methane source is livestock. Agricultural soils emit nitrous oxide, partly due to fertilizers. There is now a political solution to the problem of fluorinated gases from refrigerants. This is because many countries have ratified the Kigali Amendment. Carbon dioxide is the dominant emitted greenhouse gas. Methane emissions have almost the same short-term impact. Nitrous oxide (N2O) and fluorinated gases (F-Gases) play a minor role. Livestock and manure produce 5.8% of all greenhouse gas emissions. But this depends on the time frame used to calculate the global warming potential of the respective gas. Greenhouse gas (GHG) emissions are measured in CO2 equivalents. Scientists determine their CO2 equivalents from their global warming potential (GWP). This depends on their lifetime in the atmosphere. 
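To make the CO2-equivalent bookkeeping concrete, the sketch below converts emissions of individual gases into CO2-eq by multiplying each mass by its global warming potential. The GWP100 values used here (methane 28, nitrous oxide 273) are approximate, roughly AR5/AR6-era figures, and are included only as illustrative assumptions.

```python
# Minimal illustration of greenhouse gas accounting in CO2 equivalents.
# GWP100 values below are approximate and used here only as assumptions.
GWP_100 = {
    "CO2": 1,
    "CH4": 28,    # methane (fossil vs. biogenic values differ slightly)
    "N2O": 273,   # nitrous oxide
}

def to_co2_eq(emissions_tonnes: dict) -> float:
    """Convert a dict of {gas: tonnes emitted} into tonnes of CO2-equivalent."""
    return sum(GWP_100[gas] * tonnes for gas, tonnes in emissions_tonnes.items())

# Hypothetical example: a facility emitting 1000 t CO2, 10 t CH4 and 1 t N2O.
example = {"CO2": 1000, "CH4": 10, "N2O": 1}
print(f"{to_co2_eq(example):.0f} t CO2-eq")  # 1000 + 10*28 + 1*273 = 1553 t CO2-eq
```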
There are widely used greenhouse gas accounting methods that convert volumes of methane, nitrous oxide and other greenhouse gases to carbon dioxide equivalents. Estimates largely depend on the ability of oceans and land sinks to absorb these gases. Short-lived climate pollutants (SLCPs) persist in the atmosphere for a period ranging from days to 15 years. Carbon dioxide can remain in the atmosphere for millennia. Short-lived climate pollutants include methane, hydrofluorocarbons (HFCs), tropospheric ozone and black carbon. Scientists increasingly use satellites to locate and measure greenhouse gas emissions and deforestation. Earlier, scientists largely relied on calculated estimates of greenhouse gas emissions and on governments' self-reported data. Needed emissions cuts The annual "Emissions Gap Report" by UNEP stated in 2022 that it was necessary to almost halve emissions. "To get on track for limiting global warming to 1.5°C, global annual GHG emissions must be reduced by 45 per cent compared with emissions projections under policies currently in place in just eight years, and they must continue to decline rapidly after 2030, to avoid exhausting the limited remaining atmospheric carbon budget." The report commented that the world should focus on broad-based economy-wide transformations and not incremental change. In 2022, the Intergovernmental Panel on Climate Change (IPCC) released its Sixth Assessment Report on climate change. It warned that greenhouse gas emissions must peak before 2025 at the latest and decline 43% by 2030 to have a good chance of limiting global warming to 1.5 °C (2.7 °F). Or in the words of Secretary-General of the United Nations António Guterres: "Main emitters must drastically cut emissions starting this year". Pledges Climate Action Tracker described the situation on 9 November 2021 as follows. The global temperature will rise by 2.7 °C by the end of the century with current policies and by 2.9 °C with nationally adopted policies. The temperature will rise by 2.4 °C if countries only implement the pledges for 2030. The rise would be 2.1 °C with the achievement of the long-term targets too. Full achievement of all announced targets would mean the rise in global temperature will peak at 1.9 °C and go down to 1.8 °C by the year 2100. Experts gather information about climate pledges in the Global Climate Action Portal - Nazca. The scientific community is checking their fulfilment. There has not been a definitive or detailed evaluation of most goals set for 2020. But it appears the world failed to meet most or all international goals set for that year. One update came during the 2021 United Nations Climate Change Conference in Glasgow. The group of researchers running the Climate Action Tracker looked at countries responsible for 85% of greenhouse gas emissions. It found that only four countries or political entities—the EU, UK, Chile and Costa Rica—have published a detailed official policy plan that describes the steps to realise 2030 mitigation targets. These four polities are responsible for 6% of global greenhouse gas emissions. In 2021, the US and EU launched the Global Methane Pledge to cut methane emissions by 30% by 2030. The UK, Argentina, Indonesia, Italy and Mexico joined the initiative. Ghana and Iraq signaled interest in joining. A White House summary of the meeting noted those countries represent six of the top 15 methane emitters globally. Israel also joined the initiative. Low-carbon energy The energy system includes the delivery and use of energy. 
It is the main emitter of carbon dioxide. Rapid and deep reductions in the carbon dioxide and other greenhouse gas emissions from the energy sector are necessary to limit global warming to well below 2 °C. IPCC recommendations include reducing fossil fuel consumption, increasing production from low- and zero-carbon energy sources, and increasing use of electricity and alternative energy carriers. Nearly all scenarios and strategies involve a major increase in the use of renewable energy in combination with increased energy efficiency measures. It will be necessary to accelerate the deployment of renewable energy six-fold from 0.25% annual growth in 2015 to 1.5% to keep global warming under 2 °C. The competitiveness of renewable energy is key to rapid deployment. In 2020, onshore wind and solar photovoltaics were the cheapest sources for new bulk electricity generation in many regions. Renewables may have higher storage costs but non-renewables may have higher clean-up costs. A carbon price can increase the competitiveness of renewable energy. Solar and wind energy Wind and sun can provide large amounts of low-carbon energy at competitive production costs. The IPCC estimates that these two mitigation options have the largest potential to reduce emissions before 2030 at low cost. Solar photovoltaics (PV) has become the cheapest way to generate electricity in many regions of the world. The growth of photovoltaics has been close to exponential. It has about doubled every three years since the 1990s. A different technology is concentrated solar power (CSP). This uses mirrors or lenses to concentrate a large area of sunlight on to a receiver. With CSP, the energy can be stored for a few hours. This provides supply in the evening. Solar water heating doubled between 2010 and 2019. Regions in the higher northern and southern latitudes have the greatest potential for wind power. Offshore wind farms are more expensive. But offshore units deliver more energy per installed capacity with fewer fluctuations. In most regions, wind power generation is higher in the winter when PV output is low. For this reason, combinations of wind and solar power lead to better-balanced systems. Other renewables Other well-established renewable energy forms include hydropower, bioenergy and geothermal energy. Hydroelectricity is electricity generated by hydropower and plays a leading role in countries like Brazil, Norway and China, but there are geographical limits and environmental issues. Tidal power can be used in coastal regions. Bioenergy can provide energy for electricity, heat and transport. Bioenergy, in particular biogas, can provide dispatchable electricity generation. While burning plant-derived biomass releases CO2, the plants withdraw CO2 from the atmosphere while they grow. The technologies for producing, transporting and processing a fuel have a significant impact on the lifecycle emissions of the fuel. For example, aviation is starting to use renewable biofuels. Geothermal power is electrical power generated from geothermal energy. Geothermal electricity generation is currently used in 26 countries. Geothermal heating is in use in 70 countries. Integrating variable renewable energy Wind and solar power production does not consistently match demand. To deliver reliable electricity from variable renewable energy sources such as wind and solar, electrical power systems must be flexible. Most electrical grids were constructed for non-intermittent energy sources such as coal-fired power plants. 
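A simple way to see why flexibility matters is to look at the residual load, that is, demand minus variable renewable generation, hour by hour. The sketch below uses entirely invented hourly numbers purely to illustrate the calculation; negative values indicate a surplus that storage, exports or demand shifting would have to absorb, and positive values a deficit that dispatchable plants or imports must cover.

```python
# Illustrative residual-load calculation with invented hourly data (MW).
# Residual load = demand - (wind + solar); negative values mean surplus renewables.
demand = [60, 55, 50, 52, 58, 65, 70, 72, 70, 68, 70, 72]   # hypothetical demand
wind   = [40, 42, 45, 40, 30, 22, 18, 15, 14, 20, 30, 38]   # hypothetical wind output
solar  = [ 0,  0,  0,  5, 25, 50, 65, 75, 65, 40, 12,  0]   # hypothetical solar output

residual = [d - (w + s) for d, w, s in zip(demand, wind, solar)]
for hour, r in enumerate(residual):
    status = "surplus (store/export/shift demand)" if r < 0 else "deficit (dispatch/import)"
    print(f"hour {hour:2d}: residual load {r:4d} MW -> {status}")
```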
The integration of larger amounts of solar and wind energy into the grid requires a change of the energy system; this is necessary to ensure that the supply of electricity matches demand. There are various ways to make the electricity system more flexible. In many places, wind and solar generation are complementary on a daily and a seasonal scale. There is more wind during the night and in winter when solar energy production is low. Linking different geographical regions through long-distance transmission lines also makes it possible to reduce variability. It is possible to shift energy demand in time. Energy demand management and the use of smart grids make it possible to match demand to the times when variable energy production is highest. Sector coupling can provide further flexibility. This involves coupling the electricity sector to the heat and mobility sectors via power-to-heat systems and electric vehicles. Energy storage helps overcome barriers to intermittent renewable energy. The most commonly used and available storage method is pumped-storage hydroelectricity. This requires locations with large differences in height and access to water. Batteries are also in wide use. They typically store electricity for short periods. Batteries have low energy density. This and their cost make them impractical for the large energy storage necessary to balance inter-seasonal variations in energy production. Some locations have implemented pumped hydro storage with capacity for multi-month usage. Nuclear power Nuclear power could complement renewables for electricity. On the other hand, environmental and security risks could outweigh the benefits. The construction of new nuclear reactors currently takes about 10 years. This is much longer than scaling up the deployment of wind and solar, and this timing gives rise to credit risks. However, nuclear may be much cheaper in China. China is building a significant number of new power plants. The cost of extending nuclear power plant lifetimes is competitive with other electricity generation technologies if long-term costs for nuclear waste disposal are excluded from the calculation. There is also insufficient financial insurance for nuclear accidents. Replacing coal with natural gas Demand reduction Reducing demand for products and services that cause greenhouse gas emissions can help in mitigating climate change. One way is to reduce demand through behavioural and cultural changes, for example by making changes in diet; reducing meat consumption in particular is an effective action individuals can take to fight climate change. Another is to reduce demand by improving infrastructure, for example by building a good public transport network. Lastly, changes in end-use technology can reduce energy demand. For instance, a well-insulated house emits less than a poorly insulated house. Mitigation options that reduce demand for products or services help people make personal choices to reduce their carbon footprint. This could be in their choice of transport or food. These options therefore have many social aspects that focus on demand reduction; they are demand-side mitigation actions. For example, people with high socio-economic status often cause more greenhouse gas emissions than those from a lower status. If they reduce their emissions and promote green policies, these people could become low-carbon lifestyle role models. However, there are many psychological variables that influence consumers. These include awareness and perceived risk. 
Government policies can support or hinder demand-side mitigation options. For example, public policy can promote circular economy concepts which would support climate change mitigation. Reducing greenhouse gas emissions is linked to the sharing economy. There is a debate regarding the correlation of economic growth and emissions. It seems economic growth no longer necessarily means higher emissions. Energy conservation and efficiency Global primary energy demand exceeded 161,000 terawatt hours (TWh) in 2018. This refers to electricity, transport and heating including all losses. In transport and electricity production, fossil fuel usage has a low efficiency of less than 50%. Large amounts of heat in power plants and in motors of vehicles go to waste. The actual amount of energy consumed is significantly lower at 116,000 TWh. Energy conservation is the effort made to reduce the consumption of energy by using less of an energy service. One way is to use energy more efficiently. This means using less energy than before to produce the same service. Another way is to reduce the amount of service used. An example of this would be to drive less. Energy conservation is at the top of the sustainable energy hierarchy. When consumers reduce wastage and losses they can conserve energy. The upgrading of technology as well as the improvements to operations and maintenance can result in overall efficiency improvements. Efficient energy use (or energy efficiency) is the process of reducing the amount of energy required to provide products and services. Improved energy efficiency in buildings ("green buildings"), industrial processes and transportation could reduce the world's energy needs in 2050 by one third. This would help reduce global emissions of greenhouse gases. For example, insulating a building allows it to use less heating and cooling energy to achieve and maintain thermal comfort. Improvements in energy efficiency are generally achieved by adopting a more efficient technology or production process. Another way is to use commonly accepted methods to reduce energy losses. Lifestyle changes Individual action on climate change can include personal choices in many areas. These include diet, travel, household energy use, consumption of goods and services, and family size. People who wish to reduce their carbon footprint can take high-impact actions such as avoiding frequent flying and petrol-fuelled cars, eating mainly a plant-based diet, having fewer children, using clothes and electrical products for longer, and electrifying homes. These approaches are more practical for people in high-income countries with high-consumption lifestyles. Naturally, it is more difficult for those with lower income statuses to make these changes. This is because choices like electric-powered cars may not be available. Excessive consumption is more to blame for climate change than population increase. High-consumption lifestyles have a greater environmental impact, with the richest 10% of people emitting about half the total lifestyle emissions. Dietary change Some scientists say that avoiding meat and dairy foods is the single biggest way an individual can reduce their environmental impact. The widespread adoption of a vegetarian diet could cut food-related greenhouse gas emissions by 63% by 2050. China introduced new dietary guidelines in 2016 which aim to cut meat consumption by 50% and thereby reduce greenhouse gas emissions by 1Gt per year by 2030. 
Overall, food accounts for the largest share of consumption-based greenhouse gas emissions. It is responsible for nearly 20% of the global carbon footprint. Almost 15% of all anthropogenic greenhouse gas emissions have been attributed to the livestock sector. A shift towards plant-based diets would help to mitigate climate change. In particular, reducing meat consumption would help to reduce methane emissions. If high-income nations switched to a plant-based diet, vast amounts of land used for animal agriculture could be allowed to return to their natural state. This in turn has the potential to sequester 100 billion tonnes of CO2 by the end of the century. A comprehensive analysis found that plant-based diets reduce emissions, water pollution and land use significantly (by 75%), while also reducing the destruction of wildlife and the use of water. Family size Population growth has resulted in higher greenhouse gas emissions in most regions, particularly Africa. However, economic growth has a bigger effect than population growth. Rising incomes, changes in consumption and dietary patterns, as well as population growth, cause pressure on land and other natural resources. This leads to more greenhouse gas emissions and fewer carbon sinks. Some scholars have argued that humane policies to slow population growth should be part of a broad climate response together with policies that end fossil fuel use and encourage sustainable consumption. Advances in female education and reproductive health, especially voluntary family planning, can contribute to reducing population growth. Preserving and enhancing carbon sinks An important mitigation measure is "preserving and enhancing carbon sinks". This refers to the management of Earth's natural carbon sinks in a way that preserves or increases their capability to remove CO2 from the atmosphere and to store it durably. Scientists also call this process carbon sequestration. In the context of climate change mitigation, the IPCC defines a sink as "Any process, activity or mechanism which removes a greenhouse gas, an aerosol or a precursor of a greenhouse gas from the atmosphere". Globally, the two most important carbon sinks are vegetation and the ocean. To enhance the ability of ecosystems to sequester carbon, changes are necessary in agriculture and forestry. Examples are preventing deforestation and restoring natural ecosystems by reforestation. Scenarios that limit global warming to 1.5 °C typically project the large-scale use of carbon dioxide removal methods over the 21st century. There are concerns about over-reliance on these technologies, and their environmental impacts. But ecosystem restoration and reduced conversion are among the mitigation tools that can yield the most emissions reductions before 2030. Land-based mitigation options are referred to as "AFOLU mitigation options" in the 2022 IPCC report on mitigation. The abbreviation stands for "agriculture, forestry and other land use". The report described the economic mitigation potential from relevant activities around forests and ecosystems as follows: "the conservation, improved management, and restoration of forests and other ecosystems (coastal wetlands, peatlands, savannas and grasslands)". A high mitigation potential is found for reducing deforestation in tropical regions. The economic potential of these activities has been estimated to be 4.2 to 7.4 gigatonnes of carbon dioxide equivalent (GtCO2-eq) per year. 
Forests Conservation The Stern Review on the economics of climate change stated in 2007 that curbing deforestation was a highly cost-effective way of reducing greenhouse gas emissions. About 95% of deforestation occurs in the tropics, where clearing of land for agriculture is one of the main causes. One forest conservation strategy is to transfer rights over land from public ownership to its indigenous inhabitants. Land concessions often go to powerful extractive companies. Conservation strategies that exclude and even evict humans, called fortress conservation, often lead to more exploitation of the land. This is because the native inhabitants turn to work for extractive companies to survive. Proforestation is the practice of allowing existing forests to grow to their full ecological potential. This is a mitigation strategy as secondary forests that have regrown in abandoned farmland are found to have less biodiversity than the original old-growth forests. Original forests store 60% more carbon than these new forests. Strategies include rewilding and establishing wildlife corridors. Afforestation and reforestation Afforestation is the establishment of trees where there was previously no tree cover. Scenarios for new plantations covering up to 4000 million hectares (Mha) (6300 x 6300 km) suggest cumulative carbon storage of more than 900 GtC (2300 Gt) until 2100. But they are not a viable alternative to aggressive emissions reduction. This is because the plantations would need to be so large they would eliminate most natural ecosystems or reduce food production. One example is the Trillion Tree Campaign. However, preserving biodiversity is also important, and, for example, not all grasslands are suitable for conversion into forests. Grasslands can even turn from carbon sinks to carbon sources. Reforestation is the restocking of existing depleted forests or of places where there were recently forests. Reforestation could save at least 1 GtCO2 per year, at an estimated cost of $5–15 per tonne of carbon dioxide (tCO2). Restoring all degraded forests all over the world could capture about 205 GtC (750 GtCO2). With increased intensive agriculture and urbanization, there is an increase in the amount of abandoned farmland. By some estimates, for every acre of original old-growth forest cut down, more than 50 acres of new secondary forests are growing. In some countries, promoting regrowth on abandoned farmland could offset years of emissions. Planting new trees can be expensive and a risky investment. For example, about 80 percent of planted trees in the Sahel die within two years. Reforestation has higher carbon storage potential than afforestation. Even long-deforested areas still contain an "underground forest" of living roots and tree stumps. Helping native species sprout naturally is cheaper than planting new trees and the resulting trees are more likely to survive. This could include pruning and coppicing to accelerate growth. This also provides woodfuel, the gathering of which is otherwise a major source of deforestation. Such practices, called farmer-managed natural regeneration, are centuries old but the biggest obstacle towards implementation is ownership of the trees by the state. The state often sells timber rights to businesses which leads to locals uprooting seedlings because they see them as a liability. Legal aid for locals and changes to property law such as in Mali and Niger have led to significant changes. Scientists describe them as the largest positive environmental transformation in Africa. 
It is possible to discern from space the border between Niger and the more barren land in Nigeria, where the law has not changed. Soils There are many measures to increase soil carbon. This makes soil carbon complex and hard to measure and account for. One advantage is that there are fewer trade-offs for these measures than for BECCS or afforestation, for example. Globally, protecting healthy soils and restoring the soil carbon sponge could remove 7.6 billion tonnes of carbon dioxide from the atmosphere annually. This is more than the annual emissions of the US. Trees capture CO2 while growing above ground and exude larger amounts of carbon below ground. Trees contribute to the building of a soil carbon sponge. Carbon formed above ground is released as CO2 immediately when wood is burned. If dead wood remains untouched, only some of the carbon returns to the atmosphere as decomposition proceeds. Farming can deplete soil carbon and render soil incapable of supporting life. However, conservation farming can protect carbon in soils, and repair damage over time. The farming practice of cover crops is a form of carbon farming. Methods that enhance carbon sequestration in soil include no-till farming, residue mulching and crop rotation. Scientists have described the best management practices for European soils to increase soil organic carbon. These are conversion of arable land to grassland, straw incorporation, reduced tillage, straw incorporation combined with reduced tillage, ley cropping system and cover crops. Another mitigation option is the production of biochar and its storage in soils. Biochar is the solid material that remains after the pyrolysis of biomass. Biochar production releases half of the carbon from the biomass—either released into the atmosphere or captured with CCS—and retains the other half in the stable biochar. It can endure in soil for thousands of years. Biochar may increase the soil fertility of acidic soils and increase agricultural productivity. During production of biochar, heat is released which may be used as bioenergy. Wetlands Wetland restoration is an important mitigation measure. It has moderate to great mitigation potential on a limited land area with low trade-offs and costs. Wetlands perform two important functions in relation to climate change. They can sequester carbon, converting carbon dioxide to solid plant material through photosynthesis. They also store and regulate water. Wetlands store about 45 million tonnes of carbon per year globally. Some wetlands are a significant source of methane emissions. Some also emit nitrous oxide. Peatland globally covers just 3% of the land's surface. But it stores up to 550 gigatonnes (Gt) of carbon. This represents 42% of all soil carbon and exceeds the carbon stored in all other vegetation types, including the world's forests. Threats to peatlands include draining the areas for agriculture. Another threat is cutting down trees for lumber, as the trees help hold and fix the peatland. Additionally, peat is often sold for compost. It is possible to restore degraded peatlands by blocking drainage channels in the peatland, and allowing natural vegetation to recover. Mangroves, salt marshes and seagrasses make up the majority of the ocean's vegetated habitats. They only equal 0.05% of the plant biomass on land. But they store carbon 40 times faster than tropical forests. Bottom trawling, dredging for coastal development and fertilizer runoff have damaged coastal habitats. 
Notably, 85% of oyster reefs globally have been removed in the last two centuries. Oyster reefs clean the water and help other species thrive. This increases biomass in that area. In addition, oyster reefs mitigate the effects of climate change by reducing the force of waves from hurricanes. They also reduce the erosion from rising sea levels. Restoration of coastal wetlands is thought to be more cost-effective than restoration of inland wetlands. Deep ocean These options focus on the carbon which ocean reservoirs can store. They include ocean fertilization, ocean alkalinity enhancement or enhanced weathering. The IPCC found in 2022 that ocean-based mitigation options currently have only limited deployment potential. But it assessed that their future mitigation potential is large. It found that in total, ocean-based methods could remove 1–100 Gt of CO2 per year. Their costs are in the order of US$40–500 per tonne of CO2. Most of these options could also help to reduce ocean acidification. This is the drop in pH value caused by increased atmospheric CO2 concentrations. Blue carbon management is another type of ocean-based biological carbon dioxide removal (CDR). It can involve land-based as well as ocean-based measures. The term usually refers to the role that tidal marshes, mangroves and seagrasses can play in carbon sequestration. Some of these efforts can also take place in deep ocean waters. This is where the vast majority of ocean carbon is held. These ecosystems can contribute to climate change mitigation and also to ecosystem-based adaptation. Conversely, when blue carbon ecosystems are degraded or lost they release carbon back to the atmosphere. There is increasing interest in developing blue carbon potential. Scientists have found that in some cases these types of ecosystems remove far more carbon per area than terrestrial forests. However, the long-term effectiveness of blue carbon as a carbon dioxide removal solution remains under discussion. Enhanced weathering Enhanced weathering could remove 2–4 Gt of CO2 per year. This process aims to accelerate natural weathering by spreading finely ground silicate rock, such as basalt, onto surfaces. This speeds up chemical reactions between rocks, water, and air. It removes carbon dioxide from the atmosphere, permanently storing it in solid carbonate minerals or ocean alkalinity. Cost estimates are in the range of US$50–200 per tonne of CO2. Other methods to capture and store CO2 In addition to traditional land-based methods to remove carbon dioxide (CO2) from the air, other technologies are under development. These could reduce CO2 emissions and lower existing atmospheric CO2 levels. Carbon capture and storage (CCS) is a method to mitigate climate change by capturing CO2 from large point sources, such as cement factories or biomass power plants. It then stores it away safely instead of releasing it into the atmosphere. The IPCC estimates that the costs of halting global warming would double without CCS. Bioenergy with carbon capture and storage (BECCS) expands on the potential of CCS and aims to lower atmospheric CO2 levels. This process uses biomass grown for bioenergy. The biomass yields energy in useful forms such as electricity, heat or biofuels through combustion, fermentation or pyrolysis of the biomass. The process captures the CO2 that was extracted from the atmosphere when the biomass grew. It then stores it underground or via land application as biochar. This effectively removes it from the atmosphere. 
This makes BECCS a negative emissions technology (NET). Scientists estimated the potential range of negative emissions from BECCS in 2018 as 0–22 Gt per year. BECCS was capturing approximately 2 million tonnes of CO2 per year. The cost and availability of biomass limit wide deployment of BECCS. BECCS currently forms a big part of achieving climate targets beyond 2050 in modelling, such as by the Integrated Assessment Models (IAMs) associated with the IPCC process. But many scientists are sceptical due to the risk of loss of biodiversity. Direct air capture is a process of capturing CO2 directly from the ambient air. This is in contrast to CCS which captures carbon from point sources. It generates a concentrated stream of CO2 for sequestration, utilization or production of carbon-neutral fuel and windgas. Artificial processes vary, and there are concerns about the long-term effects of some of these processes. Mitigation by sector Buildings The building sector accounts for 23% of global energy-related emissions. About half of the energy is used for space and water heating. Building insulation can reduce the primary energy demand significantly. Heat pump loads may also provide a flexible resource that can participate in demand response to integrate variable renewable resources into the grid. Solar water heating uses thermal energy directly. Sufficiency measures include moving to smaller houses when the needs of households change, mixed use of spaces and the collective use of devices. Planners and civil engineers can construct new buildings using passive solar building design, low-energy building, or zero-energy building techniques. In addition, it is possible to design buildings that are more energy-efficient to cool by using lighter-coloured, more reflective materials in the development of urban areas. Heat pumps efficiently heat buildings, and cool them by air conditioning. A modern heat pump typically transports around three to five times more thermal energy than the electrical energy it consumes. The amount depends on the coefficient of performance and the outside temperature. Refrigeration and air conditioning account for about 10% of global emissions caused by fossil fuel-based energy production and the use of fluorinated gases. Alternative cooling systems, such as passive cooling building design and passive daytime radiative cooling surfaces, can reduce air conditioning use. Suburbs and cities in hot and arid climates can significantly reduce energy consumption from cooling with daytime radiative cooling. Energy consumption for cooling is likely to rise significantly due to increasing heat and availability of devices in poorer countries. Of the 2.8 billion people living in the hottest parts of the world, only 8% currently have air conditioners, compared with 90% of people in the US and Japan. Adoption of air conditioners typically increases in warmer areas at above $10,000 annual household income. By combining energy efficiency improvements and decarbonising electricity for air conditioning with the transition away from super-polluting refrigerants, the world could avoid cumulative greenhouse gas emissions of up to 210–460 GtCO2-eq over the next four decades. A shift to renewable energy in the cooling sector comes with two advantages: solar energy production, with its mid-day peaks, corresponds with the load required for cooling, and cooling additionally has a large potential for load management in the electric grid. Urban planning Cities emitted 28 GtCO2-eq of combined CO2 and methane emissions in 2020. 
This was from producing and consuming goods and services. Climate-smart urban planning aims to reduce sprawl to reduce the distance travelled. This lowers emissions from transportation. Switching from cars by improving walkability and cycling infrastructure is beneficial to a country's economy as a whole. Urban forestry, lakes and other blue and green infrastructure can reduce emissions directly and indirectly by reducing energy demand for cooling. Methane emissions from municipal solid waste can be reduced by segregation, composting, and recycling. Transport Transportation accounts for 15% of emissions worldwide. Increasing the use of public transport, low-carbon freight transport and cycling are important components of transport decarbonisation. Electric vehicles and environmentally friendly rail help to reduce the consumption of fossil fuels. In most cases, electric trains are more efficient than air transport and truck transport. Other efficiency means include improved public transport, smart mobility, carsharing and electric hybrids. Fossil fuel use for passenger cars can be included in emissions trading. Furthermore, moving away from a car-dominated transport system towards a low-carbon, advanced public transport system is important. Heavyweight, large personal vehicles (such as cars) require a lot of energy to move and take up much urban space. Several alternative modes of transport are available to replace these. The European Union has made smart mobility part of its European Green Deal. In smart cities, smart mobility is also important. The World Bank is helping lower-income countries buy electric buses. Their purchase price is higher than that of diesel buses. But lower running costs and health improvements due to cleaner air can offset this higher price. Between one quarter and three quarters of cars on the road by 2050 are forecast to be electric vehicles. Hydrogen may be a solution for long-distance heavy freight trucks, if batteries alone are too heavy. Shipping In the shipping industry, the use of liquefied natural gas (LNG) as a marine bunker fuel is driven by emissions regulations. Ship operators must switch from heavy fuel oil to more expensive oil-based fuels, implement costly flue gas treatment technologies or switch to LNG engines. Methane slip, when gas leaks unburned through the engine, lowers the advantages of LNG. Maersk, the world's biggest container shipping line and vessel operator, warns of stranded assets when investing in transitional fuels like LNG. The company lists green ammonia as one of the preferred fuel types of the future. It has announced the first carbon-neutral vessel on the water by 2023, running on carbon-neutral methanol. Cruise operators are trialling partially hydrogen-powered ships. Hybrid and all-electric ferries are suitable for short distances. Norway's goal is an all-electric fleet by 2025. Air transport Jet airliners contribute to climate change by emitting carbon dioxide, nitrogen oxides, contrails and particulates. Their radiative forcing is estimated at 1.3–1.4 times that of their CO2 emissions alone, excluding induced cirrus cloud. In 2018, global commercial operations generated 2.4% of all CO2 emissions. The aviation industry has become more fuel efficient. But overall emissions have risen as the volume of air travel has increased. By 2020, aviation emissions were 70% higher than in 2005 and they could grow by 300% by 2050. It is possible to reduce aviation's environmental footprint by better fuel economy in aircraft. 
Optimising flight routes to lower non-CO2 effects on climate from nitrogen oxides, particulates or contrails can also help. Aviation biofuel, carbon emission trading and carbon offsetting, part of the 191-nation ICAO's Carbon Offsetting and Reduction Scheme for International Aviation (CORSIA), can lower emissions. Short-haul flight bans, train connections, personal choices and taxation on flights can lead to fewer flights. Hybrid electric aircraft and electric aircraft or hydrogen-powered aircraft may replace fossil fuel-powered aircraft. Experts expect emissions from aviation to rise in most projections, at least until 2040. They currently amount to 180 Mt of CO2, or 11% of transport emissions. Aviation biofuel and hydrogen can only cover a small proportion of flights in the coming years. Experts expect hybrid-driven aircraft to start commercial regional scheduled flights after 2030. Battery-powered aircraft are likely to enter the market after 2035. Under CORSIA, flight operators can purchase carbon offsets to cover their emissions above 2019 levels. CORSIA will be compulsory from 2027. Agriculture, forestry and land use Almost 20% of greenhouse gas emissions come from the agriculture and forestry sector. To significantly reduce these emissions, annual investments in the agriculture sector need to increase to $260 billion by 2030. The potential benefits from these investments are estimated at about $4.3 trillion by 2030, offering a substantial economic return of 16-to-1. Mitigation measures in the food system can be divided into four categories. These are demand-side changes, ecosystem protections, mitigation on farms, and mitigation in supply chains. On the demand side, limiting food waste is an effective way to reduce food emissions. Changes towards diets less reliant on animal products, such as plant-based diets, are also effective. With 21% of global methane emissions, cattle are a major driver of global warming. When rainforests are cut and the land is converted for grazing, the impact is even higher. In Brazil, producing 1 kg of beef can result in the emission of up to 335 kg CO2-eq. Other livestock, manure management and rice cultivation also emit greenhouse gases, in addition to fossil fuel combustion in agriculture. Important mitigation options for reducing the greenhouse gas emissions from livestock include genetic selection, introduction of methanotrophic bacteria into the rumen, vaccines, feeds, diet modification and grazing management. Other options are diet changes towards ruminant-free alternatives, such as milk substitutes and meat analogues. Non-ruminant livestock, such as poultry, emit far fewer GHGs. It is possible to cut methane emissions in rice cultivation by improved water management, combining dry seeding and one drawdown, or executing a sequence of wetting and drying. This results in emission reductions of up to 90% compared to full flooding and even increased yields. Industry Industry is the largest emitter of greenhouse gases when direct and indirect emissions are included. Electrification can reduce emissions from industry. Green hydrogen can play a major role in energy-intensive industries for which electricity is not an option. Further mitigation options involve the steel and cement industry, which can switch to a less polluting production process. Products can be made with less material to reduce emission-intensity and industrial processes can be made more efficient. Finally, circular economy measures reduce the need for new materials. 
This also saves on emissions that would have been released from the mining or collecting of those materials. The decarbonisation of cement production requires new technologies, and therefore investment in innovation. Bioconcrete is one possibility to reduce emissions. But no such mitigation technology is yet mature, so CCS will be necessary at least in the short term. Another sector with a significant carbon footprint is the steel sector, which is responsible for about 7% of global emissions. Emissions can be reduced by using electric arc furnaces to melt and recycle scrap steel. To produce virgin steel without emissions, blast furnaces could be replaced by hydrogen direct reduced iron and electric arc furnaces. Alternatively, carbon capture and storage solutions can be used. Coal, gas and oil production often come with significant methane leakage. In the early 2020s some governments recognized the scale of the problem and introduced regulations. Methane leaks at oil and gas wells and processing plants are cost-effective to fix in countries which can easily trade gas internationally. There are leaks in countries where gas is cheap, such as Iran, Russia, and Turkmenistan. Nearly all of this can be stopped by replacing old components and preventing routine flaring. Coalbed methane may continue leaking even after the mine has been closed. But it can be captured by drainage and/or ventilation systems. Fossil fuel firms do not always have financial incentives to tackle methane leakage. Co-benefits Research on the co-benefits of climate change mitigation, also often referred to as ancillary benefits, was initially dominated by studies describing how lower GHG emissions lead to better air quality and consequently benefit human health. The scope of co-benefits research expanded to its economic, social, ecological and political implications. Positive secondary effects that occur from climate mitigation and adaptation measures have been mentioned in research since the 1990s. The IPCC first mentioned the role of co-benefits in 2001, followed by its fourth and fifth assessment cycles, which stressed improved working environments, reduced waste, health benefits and reduced capital expenditures. In the early 2000s, the OECD further fostered its efforts to promote ancillary benefits. The IPCC pointed out in 2007: "Co-benefits of GHG mitigation can be an important decision criteria in analyses carried out by policy-makers, but they are often neglected" and added that the co-benefits are "not quantified, monetised or even identified by businesses and decision-makers". Appropriate consideration of co-benefits can greatly "influence policy decisions concerning the timing and level of mitigation action", and there can be "significant advantages to the national economy and technical innovation". An analysis of climate action in the UK found that public health benefits are a major component of the total benefits derived from climate action. Employment and economic development Co-benefits can positively impact employment, industrial development, states' energy independence and energy self-consumption. The deployment of renewable energies can foster job opportunities. Depending on the country and deployment scenario, replacing coal power plants with renewable energy can more than double the number of jobs per average MW capacity. Investments in renewable energies, especially in solar and wind energy, can boost the value of production. 
Countries which rely on energy imports can enhance their energy independence and ensure supply security by deploying renewables. National energy generation from renewables lowers the demand for fossil fuel imports, which increases annual economic savings. The European Commission forecasts a shortage of 180,000 skilled workers in hydrogen production and 66,000 in solar photovoltaic power by 2030. Energy security A higher share of renewables can additionally lead to more energy security. Socioeconomic co-benefits that have been analysed include energy access in rural areas and improved rural livelihoods. Rural areas which are not fully electrified can benefit from the deployment of renewable energies. Solar-powered mini-grids can remain economically viable, cost-competitive and reduce the number of power cuts. Energy reliability has additional social implications: stable electricity improves the quality of education. The International Energy Agency (IEA) spelled out the "multiple benefits approach" of energy efficiency while the International Renewable Energy Agency (IRENA) operationalised the list of co-benefits of the renewable energy sector. Health and well-being The health benefits from climate change mitigation are significant. Potential measures can not only mitigate future health impacts from climate change but also improve health directly. Climate change mitigation is interconnected with various health co-benefits, such as those from reduced air pollution. Air pollution generated by fossil fuel combustion is both a major driver of global warming and the cause of a large number of annual deaths. Some estimates of the resulting excess deaths during 2018 alone run into the millions. A 2023 study estimated that fossil fuels kill over 5 million people each year, as of 2019, by causing diseases such as heart attack, stroke and chronic obstructive pulmonary disease. Particulate air pollution kills by far the most, followed by ground-level ozone. Mitigation policies can also promote healthier diets with less red meat, more active lifestyles, and increased exposure to green urban spaces. Access to urban green spaces provides benefits to mental health as well. The increased use of green and blue infrastructure can reduce the urban heat island effect. This reduces heat stress on people. Climate change adaptation Some mitigation measures have co-benefits in the area of climate change adaptation. This is for example the case for many nature-based solutions. Examples in the urban context include urban green and blue infrastructure which provide mitigation as well as adaptation benefits. This can be in the form of urban forests and street trees, green roofs and walls, urban agriculture and so forth. The mitigation is achieved through the conservation and expansion of carbon sinks and reduced energy use of buildings. Adaptation benefits come, for example, through reduced heat stress and flooding risk. Negative side effects Mitigation measures can also have negative side effects and risks. In agriculture and forestry, mitigation measures can affect biodiversity and ecosystem functioning. In renewable energy, mining for metals and minerals can increase threats to conservation areas. There is some research into ways to recycle solar panels and electronic waste. This would create a source of materials so there is no need to mine them. Scholars have found that discussions about risks and negative side effects of mitigation measures can lead to deadlock or the feeling that there are insuperable barriers to taking action. 
Costs and funding Several factors affect mitigation cost estimates. One is the baseline. This is a reference scenario that the alternative mitigation scenario is compared with. Others are the way costs are modelled, and assumptions about future government policy. Cost estimates for mitigation for specific regions depend on the quantity of emissions allowed for that region in future, as well as the timing of interventions. Mitigation costs will vary according to how and when emissions are cut. Early, well-planned action will minimize the costs. Globally, the benefits of keeping warming under 2 °C exceed the costs. Economists estimate the cost of climate change mitigation at between 1% and 2% of GDP. While this is a large sum, it is still far less than the subsidies governments provide to the ailing fossil fuel industry. The International Monetary Fund estimated this at more than $5 trillion per year. Another estimate says that financial flows for climate mitigation and adaptation are going to be over $800 billion per year. These financial requirements are predicted to exceed $4 trillion per year by 2030. Globally, limiting warming to 2 °C may result in higher economic benefits than economic costs. The economic repercussions of mitigation vary widely across regions and households, depending on policy design and level of international cooperation. Delayed global cooperation increases policy costs across regions, especially in those that are relatively carbon intensive at present. Pathways with uniform carbon values show higher mitigation costs in more carbon-intensive regions, in fossil-fuel exporting regions and in poorer regions. Aggregate quantifications expressed in GDP or monetary terms undervalue the economic effects on households in poorer countries. The actual effects on welfare and well-being are comparatively larger. Cost–benefit analysis may be unsuitable for analysing climate change mitigation as a whole. But it is still useful for analysing the difference between a 1.5 °C target and 2 °C. One way of estimating the cost of reducing emissions is by considering the likely costs of potential technological and output changes. Policymakers can compare the marginal abatement costs of different methods to assess the cost and amount of possible abatement over time (a simplified illustration is sketched below). The marginal abatement costs of the various measures will differ by country, by sector, and over time. Eco-tariffs applied only to imports contribute to reduced global export competitiveness and to deindustrialization. Avoided costs of climate change effects It is possible to avoid some of the costs of the effects of climate change by limiting climate change. According to the Stern Review, the cost of inaction can be as high as the equivalent of losing at least 5% of global gross domestic product (GDP) each year, now and forever. This can be up to 20% of GDP or more when including a wider range of risks and impacts. But mitigating climate change will only cost about 2% of GDP. It may also not be a good idea from a financial perspective to delay significant reductions in greenhouse gas emissions. Mitigation solutions are often evaluated in terms of costs and greenhouse gas reduction potentials. This fails to take into account the direct effects on human well-being. Distributing emissions abatement costs Mitigation at the speed and scale required to limit warming to 2 °C or below implies deep economic and structural changes. These raise multiple types of distributional concerns across regions, income classes and sectors. 
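The marginal abatement cost comparison mentioned above can be illustrated with a toy abatement cost curve: options are sorted by cost per tonne and accumulated until a target amount of abatement is reached. The option names and numbers in the sketch below are hypothetical placeholders, not estimates from the literature, and the sketch is meant only to show the bookkeeping, not any real-world ranking.

```python
# Toy marginal abatement cost curve: entirely hypothetical options and numbers.
# Each option: (name, abatement potential in MtCO2-eq/yr, cost in $/tCO2-eq).
options = [
    ("building insulation",         40, -20),   # negative cost = net savings
    ("onshore wind",               120,  15),
    ("solar PV",                   150,  20),
    ("industrial electrification",  60,  60),
    ("direct air capture",          30, 300),
]

def abatement_plan(options, target_mt):
    """Pick the cheapest options first until the abatement target is met."""
    plan, total_mt, total_cost = [], 0.0, 0.0
    for name, potential, cost in sorted(options, key=lambda o: o[2]):
        if total_mt >= target_mt:
            break
        take = min(potential, target_mt - total_mt)
        plan.append((name, take, cost))
        total_mt += take
        total_cost += take * 1e6 * cost  # convert Mt to tonnes

    return plan, total_mt, total_cost

plan, achieved, cost = abatement_plan(options, target_mt=300)
for name, take, c in plan:
    print(f"{name}: {take:.0f} Mt at ${c}/t")
print(f"total: {achieved:.0f} Mt for ${cost / 1e9:.1f} bn per year")
```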
There have been different proposals on how to allocate responsibility for cutting emissions. These include egalitarianism, basic needs according to a minimum level of consumption, proportionality and the polluter-pays principle. A specific proposal is "equal per capita entitlements". This approach has two categories. In the first category, emissions are allocated according to national population. In the second category, emissions are allocated in a way that attempts to account for historical or cumulative emissions. Funding In order to reconcile economic development with mitigating carbon emissions, developing countries need particular support. This would be both financial and technical. The IPCC found that accelerated support would also tackle inequities in financial and economic vulnerability to climate change. One way to achieve this is the Kyoto Protocol's Clean Development Mechanism (CDM). Policies National policies Climate change mitigation policies can have a large and complex impact on the socio-economic status of individuals and countries. This can be both positive and negative. It is important to design policies well and make them inclusive. Otherwise climate change mitigation measures can impose higher financial costs on poor households. An evaluation was conducted on 1,500 climate policy interventions made between 1998 and 2022. The interventions took place in 41 countries and across 6 continents, which together contributed 81% of the world's total emissions as of 2019. The evaluation found 63 successful interventions that resulted in significant emission reductions; the total release averted by these interventions was between 0.6 and 1.8 billion metric tonnes. The study focused on interventions with at least 4.5% emission reductions, but the researchers noted that meeting the reductions required by the Paris Agreement would require 23 billion metric tonnes per year. Generally, carbon pricing was found to be most effective in developed countries, while regulation was most effective in developing countries. Complementary policy mixes benefited from synergies, and were mostly found to be more effective interventions than the implementation of isolated policies. The OECD recognises 48 distinct climate mitigation policies suitable for implementation at the national level. Broadly, these can be categorised into three types: market-based instruments, non-market-based instruments and other policies. Other policies include establishing an independent climate advisory body. Non-market-based policies include implementing or tightening regulatory standards. These set technology or performance standards. They can be effective in addressing the market failure of informational barriers. Among market-based policies, the carbon price has been found to be the most effective (at least for developed economies), and has its own section below. Additional market-based policy instruments for climate change mitigation include: Emissions taxes: These often require domestic emitters to pay a fixed fee or tax for every tonne of CO2 emissions they release into the atmosphere. Methane emissions from fossil fuel extraction are also occasionally taxed. But methane and nitrous oxide from agriculture are typically not subject to tax. Removing unhelpful subsidies: Many countries provide subsidies for activities that affect emissions. For example, significant fossil fuel subsidies are present in many countries. Phasing out fossil fuel subsidies is crucial to address the climate crisis. 
It must, however, be done carefully to avoid protests and to avoid making poor people poorer. Creating helpful subsidies: Providing subsidies and financial incentives. One example is energy subsidies to support clean generation that is not yet commercially viable, such as tidal power. Tradable permits: A permit system can limit emissions. Carbon pricing Imposing additional costs on greenhouse gas emissions can make fossil fuels less competitive and accelerate investments into low-carbon sources of energy. A growing number of countries raise a fixed carbon tax or participate in dynamic carbon emission trading (ETS) systems. In 2021, more than 21% of global greenhouse gas emissions were covered by a carbon price. This was a big increase from earlier due to the introduction of the Chinese national carbon trading scheme. Trading schemes offer the possibility to limit emission allowances to certain reduction targets. However, an oversupply of allowances keeps most ETSs at low price levels around $10 with a low impact. This includes the Chinese ETS which started at $7/t in 2021. One exception is the European Union Emission Trading Scheme where prices began to rise in 2018. They reached about €80/t in 2022. This results in additional costs of about €0.04/kWh for coal and €0.02/kWh for gas combustion for electricity, depending on the emission intensity. Industries which have high energy requirements and high emissions often pay only very low energy taxes, or even none at all. While this is often part of national schemes, carbon offsets and credits can also be part of a voluntary market, such as the international voluntary carbon market. Notably, the company Blue Carbon of the UAE has bought ownership of an area of land equivalent in size to the United Kingdom to be preserved in return for carbon credits. International agreements Almost all countries are parties to the United Nations Framework Convention on Climate Change (UNFCCC). The ultimate objective of the UNFCCC is to stabilize atmospheric concentrations of greenhouse gases at a level that would prevent dangerous human interference with the climate system. Although not designed for this purpose, the Montreal Protocol has benefited climate change mitigation efforts. The Montreal Protocol is an international treaty that has successfully reduced emissions of ozone-depleting substances such as CFCs. These are also greenhouse gases. Paris Agreement History Historically, efforts to deal with climate change have taken place at a multinational level. They involve attempts to reach a consensus decision at the United Nations, under the United Nations Framework Convention on Climate Change (UNFCCC). This has historically been the dominant approach for engaging as many governments as possible in taking action on a worldwide public issue. The Montreal Protocol in 1987 is a precedent that this approach can work. But some critics say the top-down framework of only utilizing the UNFCCC consensus approach is ineffective. They put forward counter-proposals of bottom-up governance. At the same time, this would lessen the emphasis on the UNFCCC. The Kyoto Protocol to the UNFCCC adopted in 1997 set out legally binding emission reduction commitments for the "Annex 1" countries. The Protocol defined three international policy instruments ("Flexibility Mechanisms") which could be used by the Annex 1 countries to meet their emission reduction commitments. 
According to Bashmakov, use of these instruments could significantly reduce the costs for Annex 1 countries in meeting their emission reduction commitments. The Paris Agreement, reached in 2015, succeeded the Kyoto Protocol, which expired in 2020. Countries that ratified the Kyoto Protocol committed to reduce their emissions of carbon dioxide and five other greenhouse gases, or engage in carbon emissions trading if they maintain or increase emissions of these gases. In 2015, the UNFCCC's "structured expert dialogue" came to the conclusion that, "in some regions and vulnerable ecosystems, high risks are projected even for warming above 1.5 °C". Together with the strong diplomatic voice of the poorest countries and the island nations in the Pacific, this expert finding was the driving force leading to the decision of the 2015 Paris Climate Conference to lay down this 1.5 °C long-term target on top of the existing 2 °C goal. Society and culture Commitments to divest More than 1000 organizations with investments worth US$8 trillion have made commitments to fossil fuel divestment. Socially responsible investing funds allow investors to invest in funds that meet high environmental, social and corporate governance (ESG) standards. Barriers There are individual, institutional and market barriers to achieving climate change mitigation. They differ for all the different mitigation options, regions and societies. Difficulties with accounting for carbon dioxide removal can act as economic barriers. This would apply to BECCS (bioenergy with carbon capture and storage). The strategies that companies follow can act as a barrier, but they can also accelerate decarbonisation. In order to decarbonise societies, the state needs to play a predominant role, because decarbonisation requires a massive coordination effort. This strong government role can only work well if there is social cohesion, political stability and trust. For land-based mitigation options, finance is a major barrier. Other barriers are cultural values, governance, accountability and institutional capacity. Developing countries face further barriers to mitigation. The cost of capital increased in the early 2020s. A lack of available capital and finance is common in developing countries. Together with the absence of regulatory standards, this barrier supports the proliferation of inefficient equipment. There are also financial and capacity barriers in many of these countries. One study estimates that only 0.12% of all funding for climate-related research is spent on the social science of climate change mitigation. Vastly more funding is spent on natural science studies of climate change. Considerable sums also go to studies of the impact of climate change and of adaptation to it. Impacts of the COVID-19 pandemic The COVID-19 pandemic led some governments to shift their focus away from climate action, at least temporarily. This obstacle to environmental policy efforts may have contributed to slowed investment in green energy technologies. The economic slowdown resulting from COVID-19 added to this effect. In 2020, carbon dioxide emissions fell by 6.4% or 2.3 billion tonnes globally. Greenhouse gas emissions rebounded later in the pandemic as many countries began lifting restrictions. The direct impact of pandemic policies had a negligible long-term effect on climate change. Examples by country United States China China has committed to peak emissions by 2030 and reach net zero by 2060. 
Warming cannot be limited to 1.5 °C if any coal plants in China (without carbon capture) operate after 2045. The Chinese national carbon trading scheme started in 2021. European Union The European Commission estimates that an additional €477 million in annual investment is needed for the European Union to meet its Fit-for-55 decarbonization goals. In the European Union, government-driven policies and the European Green Deal have helped position greentech (as an example) as a vital area for venture capital investment. By 2023, venture capital in the EU's greentech sector equaled that of the United States, reflecting a concerted effort to drive innovation and mitigate climate change through targeted financial support. The European Green Deal has fostered policies that contributed to a 30% rise in venture capital for greentech companies in the EU from 2021 to 2023, despite a downturn in other sectors during the same period. While overall venture capital investment in the EU remains about six times lower than in the United States, the greentech sector has closed this gap significantly, attracting substantial funding. Key areas benefitting from increased investments are energy storage, circular economy initiatives, and agricultural technology. This is supported by the EU's ambitious goal to reduce greenhouse gas emissions by at least 55% by 2030. See also Carbon budget Carbon offsets and credits Carbon price Climate movement Climate change denial Tipping points in the climate system References Biogeochemical cycle Biogeography Cycle Chemical oceanography Climate change policy Geochemistry Numerical climate and weather models Soil
0.76963
0.995714
0.766331
Ekman transport
Ekman transport is part of Ekman motion theory, first investigated in 1902 by Vagn Walfrid Ekman. Winds are the main source of energy for ocean circulation, and Ekman transport is a component of wind-driven ocean currents. Ekman transport occurs when ocean surface waters are influenced by the friction force acting on them via the wind. As the wind blows, it exerts a frictional force on the ocean surface that drags the upper 10–100 m of the water column with it. However, due to the influence of the Coriolis effect, the ocean water moves at a 90° angle from the direction of the surface wind. The direction of transport is dependent on the hemisphere: in the northern hemisphere, transport occurs at 90° clockwise from wind direction, while in the southern hemisphere it occurs at 90° anticlockwise. This phenomenon was first noted by Fridtjof Nansen, who recorded that ice transport appeared to occur at an angle to the wind direction during his Arctic expedition of the 1890s. Ekman transport has significant impacts on the biogeochemical properties of the world's oceans. This is because it leads to upwelling (Ekman suction) and downwelling (Ekman pumping) in order to obey mass conservation laws. Mass conservation, in reference to Ekman transfer, requires that any water displaced within an area must be replenished. This can be done by either Ekman suction or Ekman pumping depending on wind patterns. Theory Ekman theory explains the theoretical state of circulation if water currents were driven only by the transfer of momentum from the wind. In the physical world, this is difficult to observe because of the influences of many simultaneous current driving forces (for example, pressure and density gradients). Though the following theory technically applies to the idealized situation involving only wind forces, Ekman motion describes the wind-driven portion of circulation seen in the surface layer. Surface currents flow at a 45° angle to the wind due to a balance between the Coriolis force and the drag generated by the wind and the water. If the ocean is divided vertically into thin layers, the magnitude of the velocity (the speed) decreases from a maximum at the surface until it dissipates. The direction also shifts slightly across each subsequent layer (right in the northern hemisphere and left in the southern hemisphere). This is called the Ekman spiral. The layer of water from the surface to the point of dissipation of this spiral is known as the Ekman layer. If all the flow over the Ekman layer is integrated, the net transport is at 90° to the right (left) of the surface wind in the northern (southern) hemisphere. 
The third wind pattern influencing Ekman transfer is large-scale wind patterns in the open ocean. Open ocean wind circulation can lead to gyre-like structures of piled up sea surface water resulting in horizontal gradients of sea surface height. This pile up of water causes the water to have a downward flow and suction, due to gravity and mass balance. Ekman pumping downward in the central ocean is a consequence of this convergence of water. Ekman suction Ekman suction is the component of Ekman transport that results in areas of upwelling due to the divergence of water. Returning to the concept of mass conservation, any water displaced by Ekman transport must be replenished. As the water diverges it creates space and acts as a suction in order to fill in the space by pulling up, or upwelling, deep sea water to the euphotic zone. Ekman suction has major consequences for the biogeochemical processes in the area because it leads to upwelling. Upwelling carries nutrient rich, and cold deep-sea water to the euphotic zone, promoting phytoplankton blooms and kickstarting an extremely high-productive environment. Areas of upwelling lead to the promotion of fisheries, in fact nearly half of the world's fish catch comes from areas of upwelling. Ekman suction occurs both along coastlines and in the open ocean, but also occurs along the equator. Along the Pacific coastline of California, Central America, and Peru, as well as along the Atlantic coastline of Africa there are areas of upwelling due to Ekman suction, as the currents move equatorwards. Due to the Coriolis effect the surface water moves 90° to the left (in the South Hemisphere, as it travels toward the equator) of the wind current, therefore causing the water to diverge from the coast boundary, leading to Ekman suction. Additionally, there are areas of upwelling as a consequence of Ekman suction where the Polar Easterlies winds meet the Westerlies in the subpolar regions north of the subtropics, as well as where the Northeast Trade Winds meet the Southeast Trade Winds along the Equator. Similarly, due to the Coriolis effect the surface water moves 90° to the left (in the South Hemisphere) of the wind currents, and the surface water diverges along these boundaries, resulting in upwelling in order to conserve mass. Ekman pumping Ekman pumping is the component of Ekman transport that results in areas of downwelling due to the convergence of water. As discussed above, the concept of mass conservation requires that a pile up of surface water must be pushed downward. This pile up of warm, nutrient-poor surface water gets pumped vertically down the water column, resulting in areas of downwelling. Ekman pumping has dramatic impacts on the surrounding environments. Downwelling, due to Ekman pumping, leads to nutrient poor waters, therefore reducing the biological productivity of the area. Additionally, it transports heat and dissolved oxygen vertically down the water column as warm oxygen rich surface water is being pumped towards the deep ocean water. Ekman pumping can be found along the coasts as well as in the open ocean. Along the Pacific Coast in the Southern Hemisphere northerly winds move parallel to the coastline. Due to the Coriolis effect the surface water gets pulled 90° to the left of the wind current, therefore causing the water to converge along the coast boundary, leading to Ekman pumping. In the open ocean Ekman pumping occurs with gyres. 
Specifically, in the subtropics, between 20°N and 50°N, there is Ekman pumping as the trade winds shift to westerlies, causing a pile up of surface water. Mathematical derivation Some assumptions of the fluid dynamics involved in the process must be made in order to simplify the process to a point where it is solvable. The assumptions made by Ekman were: no boundaries; infinitely deep water; eddy viscosity, $A_z$, is constant (this is only true for laminar flow. In the turbulent atmospheric and oceanic boundary layer it is a strong function of depth); the wind forcing is steady and has been blowing for a long time; barotropic conditions with no geostrophic flow; the Coriolis parameter, $f$, is kept constant. The simplified equations for the Coriolis force in the x and y directions follow from these assumptions: $\frac{1}{\rho}\frac{\partial \tau_{xz}}{\partial z} = -fv$ and $\frac{1}{\rho}\frac{\partial \tau_{yz}}{\partial z} = fu$, where $\tau$ is the wind stress, $\rho$ is the density, $u$ is the east–west velocity, and $v$ is the north–south velocity. Integrating each equation over the entire Ekman layer: $\tau_{xz}(0) = -f M_y$ and $\tau_{yz}(0) = f M_x$, where $M_x = \int_{-\infty}^{0} \rho u \, dz$ and $M_y = \int_{-\infty}^{0} \rho v \, dz$. Here $M_x$ and $M_y$ represent the zonal and meridional mass transport terms with units of mass per unit time per unit length. Contrarily to common logic, north–south winds cause mass transport in the east–west direction. In order to understand the vertical velocity structure of the water column, the equations can be rewritten in terms of the vertical eddy viscosity term, $\tau_{xz} = \rho A_z \frac{\partial u}{\partial z}$ and $\tau_{yz} = \rho A_z \frac{\partial v}{\partial z}$, where $A_z$ is the vertical eddy viscosity coefficient. This gives a set of differential equations of the form $A_z \frac{\partial^2 u}{\partial z^2} = -fv$ and $A_z \frac{\partial^2 v}{\partial z^2} = fu$. In order to solve this system of two differential equations, two boundary conditions can be applied: $(u, v) \to 0$ as $z \to -\infty$, and $\rho A_z \left(\frac{\partial u}{\partial z}, \frac{\partial v}{\partial z}\right) = (\tau_{xz}, \tau_{yz})$ at $z = 0$, as friction is equal to wind stress at the free surface. Things can be further simplified by considering wind blowing in the y-direction only. This means $\tau_{xz} = 0$, so the results will be relative to a north–south wind (although these solutions could be produced relative to wind in any other direction): $u = \pm V_0 \cos\left(\frac{\pi}{4} + \frac{\pi}{D_E} z\right) e^{\pi z / D_E}$ and $v = V_0 \sin\left(\frac{\pi}{4} + \frac{\pi}{D_E} z\right) e^{\pi z / D_E}$, where $u$ and $v$ represent the Ekman flow in the x and y directions; in the equation for $u$ the plus sign applies to the northern hemisphere and the minus sign to the southern hemisphere; $V_0 = \frac{\sqrt{2}\,\pi \tau_{yz}}{D_E \rho |f|}$, with $\tau_{yz}$ the wind stress on the sea surface; and $D_E = \pi\sqrt{2 A_z/|f|}$ is the Ekman depth (depth of the Ekman layer). By solving this at z=0, the surface current is found to be (as expected) 45 degrees to the right (left) of the wind in the Northern (Southern) Hemisphere. This also gives the expected shape of the Ekman spiral, both in magnitude and direction. Integrating these equations over the Ekman layer shows that the net Ekman transport term is 90 degrees to the right (left) of the wind in the Northern (Southern) Hemisphere. Applications Ekman transport leads to coastal upwelling, which provides the nutrient supply for some of the largest fishing markets on the planet and can impact the stability of the Antarctic Ice Sheet by pulling warm deep water onto the continental shelf. Wind in these regimes blows parallel to the coast (such as along the coast of Peru, where the wind blows out of the southeast, and also in California, where it blows out of the northwest). From Ekman transport, surface water has a net movement of 90° to the right of wind direction in the northern hemisphere (left in the southern hemisphere). Because the surface water flows away from the coast, the water must be replaced with water from below. In shallow coastal waters, the Ekman spiral is normally not fully formed and the wind events that cause upwelling episodes are typically rather short. This leads to many variations in the extent of upwelling, but the ideas are still generally applicable. 
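As a rough numerical illustration of the transport and Ekman-depth formulas derived above (a minimal sketch; the wind stress, latitude, and eddy viscosity below are assumed, typical mid-latitude values rather than figures from the text), the integrated transport and the Ekman depth can be evaluated directly in Python:

import math

# Assumed, typical mid-latitude values (illustrative only)
tau_y = 0.1           # northward wind stress on the sea surface, N/m^2
rho = 1025.0          # seawater density, kg/m^3
lat = 45.0            # latitude, degrees north
A_z = 1.0e-2          # vertical eddy viscosity, m^2/s

omega = 7.2921e-5                               # Earth's rotation rate, rad/s
f = 2.0 * omega * math.sin(math.radians(lat))   # Coriolis parameter, 1/s

# Integrated Ekman mass transport per unit width: M_x = tau_y / f (here tau_x = 0),
# directed 90 degrees to the right of the wind in the northern hemisphere
M_x = tau_y / f                                 # kg per metre per second
U_x = M_x / rho                                 # equivalent volume transport, m^2/s
D_E = math.pi * math.sqrt(2.0 * A_z / abs(f))   # Ekman depth, m

print(f"f = {f:.2e} 1/s, M_x = {M_x:.0f} kg/(m s), U_x = {U_x:.2f} m^2/s, D_E = {D_E:.0f} m")

For these values the transport is roughly one cubic metre per second across each metre of a line parallel to the wind, and the Ekman layer is a few tens of metres deep, consistent with the 10–100 m depth range mentioned earlier.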
Ekman transport is similarly at work in equatorial upwelling, where, in both hemispheres, a trade wind component towards the west causes a net transport of water towards the pole, and a trade wind component towards the east causes a net transport of water away from the pole. On smaller scales, cyclonic winds induce Ekman transport which causes net divergence and upwelling, or Ekman suction, while anti-cyclonic winds cause net convergence and downwelling, or Ekman pumping. Ekman transport is also a factor in the circulation of the ocean gyres and garbage patches. Ekman transport causes water to flow toward the center of the gyre in all locations, creating a sloped sea surface, and initiating geostrophic flow (Colling, p. 65). Harald Sverdrup applied Ekman transport while including pressure gradient forces to develop a theory for this (see Sverdrup balance). See also Notes References Colling, A., Ocean Circulation, Open University Course Team. Second Edition. 2001. Emerson, Steven R.; Hedges, John I. (2017). Chemical Oceanography and the Marine Carbon Cycle. New York: Cambridge University Press. Knauss, J.A., Introduction to Physical Oceanography, Waveland Press. Second Edition. 2005. Lindstrom, Eric J. "Ocean Motion: Definition: Wind Driven Surface Currents – Upwelling and Downwelling". oceanmotion.org. Mann, K.H. and Lazier, J.R., Dynamics of Marine Ecosystems, Blackwell Publishing. Third Edition. 2006. Miller, Charles B.; Wheeler, Patricia A. Biological Oceanography (Second ed.). Wiley-Blackwell. Pond, S. and Pickard, G. L., Introductory Dynamical Oceanography, Pergamon Press. Second Edition. 1983. Sarmiento, Jorge L.; Gruber, Nicolas (2006). Ocean Biogeochemical Dynamics. Princeton University Press. Sverdrup, K.A., Duxbury, A.C., Duxbury, A.B., An Introduction to The World's Oceans, McGraw-Hill. Eighth Edition. 2005. External links What is Ekman transport? Aquatic ecology Oceanography Fluid dynamics Science of underwater diving Transport phenomena
0.778157
0.984799
0.766328
Angular momentum operator
In quantum mechanics, the angular momentum operator is one of several related operators analogous to classical angular momentum. The angular momentum operator plays a central role in the theory of atomic and molecular physics and other quantum problems involving rotational symmetry. Being an observable, its eigenfunctions represent the distinguishable physical states of a system's angular momentum, and the corresponding eigenvalues the observable experimental values. When applied to a mathematical representation of the state of a system, the operator yields the same state multiplied by its angular momentum value if the state is an eigenstate (as per the eigenstates/eigenvalues equation). In both classical and quantum mechanical systems, angular momentum (together with linear momentum and energy) is one of the three fundamental properties of motion. There are several angular momentum operators: total angular momentum (usually denoted J), orbital angular momentum (usually denoted L), and spin angular momentum (spin for short, usually denoted S). The term angular momentum operator can (confusingly) refer to either the total or the orbital angular momentum. Total angular momentum is always conserved, see Noether's theorem. Overview In quantum mechanics, angular momentum can refer to one of three different, but related things. Orbital angular momentum The classical definition of angular momentum is $\mathbf{L} = \mathbf{r} \times \mathbf{p}$. The quantum-mechanical counterparts of these objects share the same relationship: $\mathbf{L} = \mathbf{r} \times \mathbf{p}$, where r is the quantum position operator, p is the quantum momentum operator, × is cross product, and L is the orbital angular momentum operator. L (just like p and r) is a vector operator (a vector whose components are operators), i.e. $\mathbf{L} = (L_x, L_y, L_z)$, where Lx, Ly, Lz are three different quantum-mechanical operators. In the special case of a single particle with no electric charge and no spin, the orbital angular momentum operator can be written in the position basis as $\mathbf{L} = -i\hbar\,(\mathbf{r} \times \nabla)$, where $\nabla$ is the vector differential operator, del. Spin angular momentum There is another type of angular momentum, called spin angular momentum (more often shortened to spin), represented by the spin operator $\mathbf{S}$. Spin is often depicted as a particle literally spinning around an axis, but this is only a metaphor: the closest classical analog is based on wave circulation. All elementary particles have a characteristic spin (scalar bosons have zero spin). For example, electrons always have "spin 1/2" while photons always have "spin 1" (details below). Total angular momentum Finally, there is total angular momentum $\mathbf{J}$, which combines both the spin and orbital angular momentum of a particle or system: $\mathbf{J} = \mathbf{L} + \mathbf{S}$. Conservation of angular momentum states that J for a closed system, or J for the whole universe, is conserved. However, L and S are not generally conserved. For example, the spin–orbit interaction allows angular momentum to transfer back and forth between L and S, with the total J remaining constant. Commutation relations Commutation relations between components The orbital angular momentum operator is a vector operator, meaning it can be written in terms of its vector components $L_x, L_y, L_z$. The components have the following commutation relations with each other: $[L_x, L_y] = i\hbar L_z$, $[L_y, L_z] = i\hbar L_x$, $[L_z, L_x] = i\hbar L_y$, where $[X, Y]$ denotes the commutator $XY - YX$. This can be written generally as $[L_l, L_m] = i\hbar \sum_n \varepsilon_{lmn} L_n$, where l, m, n are the component indices (1 for x, 2 for y, 3 for z), and $\varepsilon_{lmn}$ denotes the Levi-Civita symbol. 
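These commutation relations can be checked numerically in any finite-dimensional representation. The short sketch below (an illustration added here, not part of the original text) uses the spin-1/2 matrices $S_i = (\hbar/2)\sigma_i$, in units where ħ = 1, and verifies the cyclic relations just stated:

import numpy as np

hbar = 1.0  # work in units where hbar = 1

# Spin-1/2 angular momentum matrices S_i = (hbar/2) * (Pauli matrix i)
Sx = 0.5 * hbar * np.array([[0, 1], [1, 0]], dtype=complex)
Sy = 0.5 * hbar * np.array([[0, -1j], [1j, 0]], dtype=complex)
Sz = 0.5 * hbar * np.array([[1, 0], [0, -1]], dtype=complex)

def commutator(a, b):
    return a @ b - b @ a

print(np.allclose(commutator(Sx, Sy), 1j * hbar * Sz))  # True: [Sx, Sy] = i hbar Sz
print(np.allclose(commutator(Sy, Sz), 1j * hbar * Sx))  # True: [Sy, Sz] = i hbar Sx
print(np.allclose(commutator(Sz, Sx), 1j * hbar * Sy))  # True: [Sz, Sx] = i hbar Sy

The same check passes for the matrices of any other spin, since the relations are fixed by the Lie algebra rather than by the particular representation.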
A compact expression as one vector equation is also possible: The commutation relations can be proved as a direct consequence of the canonical commutation relations , where is the Kronecker delta. There is an analogous relationship in classical physics: where Ln is a component of the classical angular momentum operator, and is the Poisson bracket. The same commutation relations apply for the other angular momentum operators (spin and total angular momentum): These can be assumed to hold in analogy with L. Alternatively, they can be derived as discussed below. These commutation relations mean that L has the mathematical structure of a Lie algebra, and the are its structure constants. In this case, the Lie algebra is SU(2) or SO(3) in physics notation ( or respectively in mathematics notation), i.e. Lie algebra associated with rotations in three dimensions. The same is true of J and S. The reason is discussed below. These commutation relations are relevant for measurement and uncertainty, as discussed further below. In molecules the total angular momentum F is the sum of the rovibronic (orbital) angular momentum N, the electron spin angular momentum S, and the nuclear spin angular momentum I. For electronic singlet states the rovibronic angular momentum is denoted J rather than N. As explained by Van Vleck, the components of the molecular rovibronic angular momentum referred to molecule-fixed axes have different commutation relations from those given above which are for the components about space-fixed axes. Commutation relations involving vector magnitude Like any vector, the square of a magnitude can be defined for the orbital angular momentum operator, is another quantum operator. It commutes with the components of , One way to prove that these operators commute is to start from the [Lℓ, Lm] commutation relations in the previous section: Mathematically, is a Casimir invariant of the Lie algebra SO(3) spanned by . As above, there is an analogous relationship in classical physics: where is a component of the classical angular momentum operator, and is the Poisson bracket. Returning to the quantum case, the same commutation relations apply to the other angular momentum operators (spin and total angular momentum), as well, Uncertainty principle In general, in quantum mechanics, when two observable operators do not commute, they are called complementary observables. Two complementary observables cannot be measured simultaneously; instead they satisfy an uncertainty principle. The more accurately one observable is known, the less accurately the other one can be known. Just as there is an uncertainty principle relating position and momentum, there are uncertainty principles for angular momentum. The Robertson–Schrödinger relation gives the following uncertainty principle: where is the standard deviation in the measured values of X and denotes the expectation value of X. This inequality is also true if x, y, z are rearranged, or if L is replaced by J or S. Therefore, two orthogonal components of angular momentum (for example Lx and Ly) are complementary and cannot be simultaneously known or measured, except in special cases such as . It is, however, possible to simultaneously measure or specify L2 and any one component of L; for example, L2 and Lz. This is often useful, and the values are characterized by the azimuthal quantum number (l) and the magnetic quantum number (m). In this case the quantum state of the system is a simultaneous eigenstate of the operators L2 and Lz, but not of Lx or Ly. 
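As a small numerical illustration of the uncertainty relation just quoted (a sketch with an arbitrarily chosen example state, not taken from the text), both sides can be evaluated for a spin-1/2 system, again in units where ħ = 1:

import numpy as np

hbar = 1.0
Sx = 0.5 * hbar * np.array([[0, 1], [1, 0]], dtype=complex)
Sy = 0.5 * hbar * np.array([[0, -1j], [1j, 0]], dtype=complex)
Sz = 0.5 * hbar * np.array([[1, 0], [0, -1]], dtype=complex)

# An arbitrary normalized spin state, chosen only for illustration
psi = np.array([np.sqrt(0.8), np.sqrt(0.2) * np.exp(1j * np.pi / 4)], dtype=complex)

def expval(op):
    return (psi.conj() @ op @ psi).real

def stdev(op):
    return np.sqrt(expval(op @ op) - expval(op) ** 2)

lhs = stdev(Sx) * stdev(Sy)          # product of the two standard deviations
rhs = 0.5 * hbar * abs(expval(Sz))   # (hbar/2) |<Sz>|
print(f"{lhs:.3f} >= {rhs:.3f}: {lhs >= rhs}")   # the Robertson bound holds

Repeating this with other states shows the bound being saturated for some of them, which is why two orthogonal components of angular momentum can be known simultaneously only in the special cases noted above.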
The eigenvalues are related to l and m, as shown in the table below. Quantization In quantum mechanics, angular momentum is quantized – that is, it cannot vary continuously, but only in "quantum leaps" between certain allowed values. For any system, the following restrictions on measurement results apply, where is reduced Planck constant: Derivation using ladder operators A common way to derive the quantization rules above is the method of ladder operators. The ladder operators for the total angular momentum are defined as: Suppose is a simultaneous eigenstate of and (i.e., a state with a definite value for and a definite value for ). Then using the commutation relations for the components of , one can prove that each of the states and is either zero or a simultaneous eigenstate of and , with the same value as for but with values for that are increased or decreased by respectively. The result is zero when the use of a ladder operator would otherwise result in a state with a value for that is outside the allowable range. Using the ladder operators in this way, the possible values and quantum numbers for and can be found. Since and have the same commutation relations as , the same ladder analysis can be applied to them, except that for there is a further restriction on the quantum numbers that they must be integers. Visual interpretation Since the angular momenta are quantum operators, they cannot be drawn as vectors like in classical mechanics. Nevertheless, it is common to depict them heuristically in this way. Depicted on the right is a set of states with quantum numbers , and for the five cones from bottom to top. Since , the vectors are all shown with length . The rings represent the fact that is known with certainty, but and are unknown; therefore every classical vector with the appropriate length and z-component is drawn, forming a cone. The expected value of the angular momentum for a given ensemble of systems in the quantum state characterized by and could be somewhere on this cone while it cannot be defined for a single system (since the components of do not commute with each other). Quantization in macroscopic systems The quantization rules are widely thought to be true even for macroscopic systems, like the angular momentum L of a spinning tire. However they have no observable effect so this has not been tested. For example, if is roughly 100000000, it makes essentially no difference whether the precise value is an integer like 100000000 or 100000001, or a non-integer like 100000000.2—the discrete steps are currently too small to measure. Angular momentum as the generator of rotations The most general and fundamental definition of angular momentum is as the generator of rotations. More specifically, let be a rotation operator, which rotates any quantum state about axis by angle . As , the operator approaches the identity operator, because a rotation of 0° maps all states to themselves. Then the angular momentum operator about axis is defined as: where 1 is the identity operator. Also notice that R is an additive morphism : ; as a consequence where exp is matrix exponential. The existence of the generator is guaranteed by the Stone's theorem on one-parameter unitary groups. In simpler terms, the total angular momentum operator characterizes how a quantum system is changed when it is rotated. The relationship between angular momentum operators and rotation operators is the same as the relationship between Lie algebras and Lie groups in mathematics, as discussed further below. 
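A brief numerical sketch of the ladder-operator construction described above (illustrative only; the choice of the j = 1 representation and of units with ħ = 1 are assumptions made here): building J_z and J_± in the J_z eigenbasis shows the raising operator shifting m up by one step and annihilating the top of the ladder.

import numpy as np

hbar = 1.0
j = 1                                   # assumed spin-1 representation, for illustration
m = np.array([1.0, 0.0, -1.0])          # eigenvalues of Jz / hbar, ordered top to bottom

Jz = hbar * np.diag(m)

# Matrix elements <j, m+1| J+ |j, m> = hbar * sqrt(j(j+1) - m(m+1))
Jp = np.zeros((3, 3))
for k in range(1, 3):                   # J+ maps the state with m[k] onto the one with m[k-1]
    Jp[k - 1, k] = hbar * np.sqrt(j * (j + 1) - m[k] * (m[k] + 1))
Jm = Jp.T                               # J- is the adjoint of J+ (a real matrix here)

print(np.allclose(Jz @ Jp - Jp @ Jz, hbar * Jp))   # True: [Jz, J+] = hbar J+, so J+ raises m by one
state_m0 = np.array([0.0, 1.0, 0.0])    # |j=1, m=0>
print(Jp @ state_m0)                    # ~[1.414, 0, 0]: proportional to |j=1, m=+1>
print(Jm @ state_m0)                    # ~[0, 0, 1.414]: proportional to |j=1, m=-1>
print(Jp @ np.array([1.0, 0.0, 0.0]))   # [0, 0, 0]: the ladder terminates at m = j

Analogous matrices can be built for any j, integer or half-integer, which is the sense in which the ladder analysis classifies all the allowed values.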
Just as J is the generator for rotation operators, L and S are generators for modified partial rotation operators. The operator rotates the position (in space) of all particles and fields, without rotating the internal (spin) state of any particle. Likewise, the operator rotates the internal (spin) state of all particles, without moving any particles or fields in space. The relation J = L + S comes from: i.e. if the positions are rotated, and then the internal states are rotated, then altogether the complete system has been rotated. SU(2), SO(3), and 360° rotations Although one might expect (a rotation of 360° is the identity operator), this is not assumed in quantum mechanics, and it turns out it is often not true: When the total angular momentum quantum number is a half-integer (1/2, 3/2, etc.), , and when it is an integer, . Mathematically, the structure of rotations in the universe is not SO(3), the group of three-dimensional rotations in classical mechanics. Instead, it is SU(2), which is identical to SO(3) for small rotations, but where a 360° rotation is mathematically distinguished from a rotation of 0°. (A rotation of 720° is, however, the same as a rotation of 0°.) On the other hand, in all circumstances, because a 360° rotation of a spatial configuration is the same as no rotation at all. (This is different from a 360° rotation of the internal (spin) state of the particle, which might or might not be the same as no rotation at all.) In other words, the operators carry the structure of SO(3), while and carry the structure of SU(2). From the equation , one picks an eigenstate and draws which is to say that the orbital angular momentum quantum numbers can only be integers, not half-integers. Connection to representation theory Starting with a certain quantum state , consider the set of states for all possible and , i.e. the set of states that come about from rotating the starting state in every possible way. The linear span of that set is a vector space, and therefore the manner in which the rotation operators map one state onto another is a representation of the group of rotation operators. From the relation between J and rotation operators, (The Lie algebras of SU(2) and SO(3) are identical.) The ladder operator derivation above is a method for classifying the representations of the Lie algebra SU(2). Connection to commutation relations Classical rotations do not commute with each other: For example, rotating 1° about the x-axis then 1° about the y-axis gives a slightly different overall rotation than rotating 1° about the y-axis then 1° about the x-axis. By carefully analyzing this noncommutativity, the commutation relations of the angular momentum operators can be derived. (This same calculational procedure is one way to answer the mathematical question "What is the Lie algebra of the Lie groups SO(3) or SU(2)?") Conservation of angular momentum The Hamiltonian H represents the energy and dynamics of the system. In a spherically symmetric situation, the Hamiltonian is invariant under rotations: where R is a rotation operator. As a consequence, , and then due to the relationship between J and R. By the Ehrenfest theorem, it follows that J is conserved. To summarize, if H is rotationally-invariant (The Hamiltonian function defined on an inner product space is said to have rotational invariance if its value does not change when arbitrary rotations are applied to its coordinates.), then total angular momentum J is conserved. This is an example of Noether's theorem. 
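The 360° property discussed above can also be checked directly by exponentiating the generators (an illustrative sketch in units where ħ = 1, not taken from the source): a full turn generated by the spin-1/2 J_z gives minus the identity, while the same turn in the integer-spin (spin-1) representation gives exactly the identity.

import numpy as np

# Jz for spin-1/2 and for spin-1, in units with hbar = 1 (both matrices are diagonal)
Jz_half = np.diag([0.5, -0.5]).astype(complex)
Jz_one = np.diag([1.0, 0.0, -1.0]).astype(complex)

def rotation_about_z(Jz, theta):
    # R(theta) = exp(-i * theta * Jz); since Jz is diagonal, exponentiate entrywise
    return np.diag(np.exp(-1j * theta * np.diag(Jz)))

print(np.round(rotation_about_z(Jz_half, 2 * np.pi), 3))  # -1 times the identity (SU(2) behaviour)
print(np.round(rotation_about_z(Jz_one, 2 * np.pi), 3))   # the identity (SO(3) behaviour)

A 720° rotation returns the identity in both cases, matching the statement that a rotation of 720° is the same as a rotation of 0°.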
If H is just the Hamiltonian for one particle, the total angular momentum of that one particle is conserved when the particle is in a central potential (i.e., when the potential energy function depends only on ). Alternatively, H may be the Hamiltonian of all particles and fields in the universe, and then H is always rotationally-invariant, as the fundamental laws of physics of the universe are the same regardless of orientation. This is the basis for saying conservation of angular momentum is a general principle of physics. For a particle without spin, J = L, so orbital angular momentum is conserved in the same circumstances. When the spin is nonzero, the spin–orbit interaction allows angular momentum to transfer from L to S or back. Therefore, L is not, on its own, conserved. Angular momentum coupling Often, two or more sorts of angular momentum interact with each other, so that angular momentum can transfer from one to the other. For example, in spin–orbit coupling, angular momentum can transfer between L and S, but only the total J = L + S is conserved. In another example, in an atom with two electrons, each has its own angular momentum J1 and J2, but only the total J = J1 + J2 is conserved. In these situations, it is often useful to know the relationship between, on the one hand, states where all have definite values, and on the other hand, states where all have definite values, as the latter four are usually conserved (constants of motion). The procedure to go back and forth between these bases is to use Clebsch–Gordan coefficients. One important result in this field is that a relationship between the quantum numbers for : For an atom or molecule with J = L + S, the term symbol gives the quantum numbers associated with the operators . Orbital angular momentum in spherical coordinates Angular momentum operators usually occur when solving a problem with spherical symmetry in spherical coordinates. The angular momentum in the spatial representation is In spherical coordinates the angular part of the Laplace operator can be expressed by the angular momentum. This leads to the relation When solving to find eigenstates of the operator , we obtain the following where are the spherical harmonics. See also Runge–Lenz vector (used to describe the shape and orientation of bodies in orbit) Holstein–Primakoff transformation Jordan map (Schwinger's bosonic model of angular momentum) Pauli–Lubanski pseudovector Angular momentum diagrams (quantum mechanics) Spherical basis Tensor operator Orbital magnetization Orbital angular momentum of free electrons Orbital angular momentum of light Notes References Further reading Angular momentum Quantum mechanics Rotational symmetry
0.770252
0.994881
0.766309
Stern–Gerlach experiment
In quantum physics, the Stern–Gerlach experiment demonstrated that the spatial orientation of angular momentum is quantized. Thus an atomic-scale system was shown to have intrinsically quantum properties. In the original experiment, silver atoms were sent through a spatially-varying magnetic field, which deflected them before they struck a detector screen, such as a glass slide. Particles with non-zero magnetic moment were deflected, owing to the magnetic field gradient, from a straight path. The screen revealed discrete points of accumulation, rather than a continuous distribution, owing to their quantized spin. Historically, this experiment was decisive in convincing physicists of the reality of angular-momentum quantization in all atomic-scale systems. After its conception by Otto Stern in 1921, the experiment was first successfully conducted with Walther Gerlach in early 1922. Description The Stern–Gerlach experiment involves sending silver atoms through an inhomogeneous magnetic field and observing their deflection. Silver atoms were evaporated using an electric furnace in a vacuum. Using thin slits, the atoms were guided into a flat beam and the beam sent through an inhomogeneous magnetic field before colliding with a metallic plate. The laws of classical physics predict that the collection of condensed silver atoms on the plate should form a thin solid line in the same shape as the original beam. However, the inhomogeneous magnetic field caused the beam to split in two separate directions, creating two lines on the metallic plate. The results show that particles possess an intrinsic angular momentum that is closely analogous to the angular momentum of a classically spinning object, but that takes only certain quantized values. Another important result is that only one component of a particle's spin can be measured at one time, meaning that the measurement of the spin along the z-axis destroys information about a particle's spin along the x and y axis. The experiment is normally conducted using electrically neutral particles such as silver atoms. This avoids the large deflection in the path of a charged particle moving through a magnetic field and allows spin-dependent effects to dominate. If the particle is treated as a classical spinning magnetic dipole, it will precess in a magnetic field because of the torque that the magnetic field exerts on the dipole (see torque-induced precession). If it moves through a homogeneous magnetic field, the forces exerted on opposite ends of the dipole cancel each other out and the trajectory of the particle is unaffected. However, if the magnetic field is inhomogeneous then the force on one end of the dipole will be slightly greater than the opposing force on the other end, so that there is a net force which deflects the particle's trajectory. If the particles were classical spinning objects, one would expect the distribution of their spin angular momentum vectors to be random and continuous. Each particle would be deflected by an amount proportional to the dot product of its magnetic moment with the external field gradient, producing some density distribution on the detector screen. Instead, the particles passing through the Stern–Gerlach apparatus are deflected either up or down by a specific amount. This was a measurement of the quantum observable now known as spin angular momentum, which demonstrated possible outcomes of a measurement where the observable has a discrete set of values or point spectrum. 
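The size of the deflection can be estimated from the classical force on a magnetic dipole in a field gradient, F_z = μ_z ∂B/∂z (a rough order-of-magnitude sketch; the gradient, magnet length, and oven temperature used below are assumed illustrative values, not figures reported for the original apparatus):

import math

mu_B = 9.274e-24      # Bohr magneton, J/T (magnitude of the silver atom's magnetic moment)
m_Ag = 1.79e-25       # mass of a silver atom, kg
k_B = 1.381e-23       # Boltzmann constant, J/K

dBdz = 1.0e3          # assumed field gradient, T/m
L = 0.035             # assumed length of the magnet region, m
T = 1000.0            # assumed oven temperature, K

v = math.sqrt(3 * k_B * T / m_Ag)    # typical thermal speed of the atoms, m/s
t = L / v                            # time spent in the gradient, s
a = mu_B * dBdz / m_Ag               # acceleration from F_z = mu_z * dB/dz, m/s^2
deflection = 0.5 * a * t ** 2        # transverse deflection of one beam, m

print(f"v = {v:.0f} m/s, deflection = {deflection * 1e3:.2f} mm per beam")

With these assumptions the two beams separate by a few tenths of a millimetre, which is the right order of magnitude for the split recorded on the detector plate.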
Although some discrete quantum phenomena, such as atomic spectra, were observed much earlier, the Stern–Gerlach experiment allowed scientists to directly observe separation between discrete quantum states for the first time in the history of science. Theoretically, quantum angular momentum of any kind has a discrete spectrum, which is sometimes briefly expressed as "angular momentum is quantized". Experiment using particles with +1/2 or −1/2 spin If the experiment is conducted using charged particles like electrons, there will be a Lorentz force that tends to bend the trajectory in a circle. This force can be cancelled by an electric field of appropriate magnitude oriented transverse to the charged particle's path. Electrons are spin-1/2 particles. These have only two possible spin angular momentum values measured along any axis, $+\frac{\hbar}{2}$ or $-\frac{\hbar}{2}$, a purely quantum mechanical phenomenon. Because its value is always the same, it is regarded as an intrinsic property of electrons, and is sometimes known as "intrinsic angular momentum" (to distinguish it from orbital angular momentum, which can vary and depends on the presence of other particles). If one measures the spin along a vertical axis, electrons are described as "spin up" or "spin down", based on the magnetic moment pointing up or down, respectively. To mathematically describe the experiment with spin-1/2 particles, it is easiest to use Dirac's bra–ket notation. As the particles pass through the Stern–Gerlach device, they are deflected either up or down, and observed by the detector which resolves to either spin up or spin down. These outcomes are described by the spin projection along the measurement axis, which can take on one of only two allowed values, either $+\frac{\hbar}{2}$ or $-\frac{\hbar}{2}$. The act of observing (measuring) the momentum along the $z$ axis corresponds to the $z$-axis angular momentum operator, often denoted $S_z$. In mathematical terms, the initial state of the particles is $|\psi\rangle = c_1\,|{\uparrow}\rangle + c_2\,|{\downarrow}\rangle$, where the constants $c_1$ and $c_2$ are complex numbers. This initial state spin can point in any direction. The squares of the absolute values, $|c_1|^2$ and $|c_2|^2$, are respectively the probabilities for a system in the state $|\psi\rangle$ to be found in $|{\uparrow}\rangle$ and $|{\downarrow}\rangle$ after the measurement along the $z$ axis is made. The constants $c_1$ and $c_2$ must also be normalized in order that the probability of finding either one of the values be unity; that is, we must ensure that $|c_1|^2 + |c_2|^2 = 1$. However, this information is not sufficient to determine the values of $c_1$ and $c_2$, because they are complex numbers. Therefore, the measurement yields only the squared magnitudes of the constants, which are interpreted as probabilities. 
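A minimal sketch of this probability rule (illustrative only, with arbitrary example amplitudes and ħ set to 1): writing the state as a two-component vector and applying the Born rule in the z- and x-eigenbases reproduces the behaviour of the chained apparatuses described next.

import numpy as np

# z-basis states: up = (1, 0), down = (0, 1); the x-basis states are their equal superpositions
z_up = np.array([1, 0], dtype=complex)
z_down = np.array([0, 1], dtype=complex)
x_up = (z_up + z_down) / np.sqrt(2)
x_down = (z_up - z_down) / np.sqrt(2)

def prob(state, outcome):
    # Born rule: the probability is the squared magnitude of the overlap
    return abs(np.vdot(outcome, state)) ** 2

psi = 0.6 * z_up + 0.8 * z_down              # example initial state with |c1|^2 + |c2|^2 = 1

print(prob(psi, z_up), prob(psi, z_down))    # 0.36 and 0.64 for the first z-measurement
print(prob(z_up, x_up), prob(z_up, x_down))  # 0.5 and 0.5: the z+ beam measured along x
print(prob(x_up, z_up), prob(x_up, z_down))  # 0.5 and 0.5: the x+ beam measured along z again

The last line is the point of the third experiment below: once the x-measurement has been made, the earlier z+ selection no longer determines the outcome of a new z-measurement.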
This result is expected since all particles at this point are expected to have z+ spin, as only the z+ beam from the first apparatus entered the second apparatus. Experiment 2 The middle system shows what happens when a different S-G apparatus is placed at the exit of the z+ beam resulting of the first apparatus, the second apparatus measuring the deflection of the beams on the x axis instead of the z axis. The second apparatus produces x+ and x- outputs. Now classically we would expect to have one beam with the x characteristic oriented + and the z characteristic oriented +, and another with the x characteristic oriented - and the z characteristic oriented +. Experiment 3 The bottom system contradicts that expectation. The output of the third apparatus which measures the deflection on the z axis again shows an output of z- as well as z+. Given that the input to the second S-G apparatus consisted only of z+, it can be inferred that a S-G apparatus must be altering the states of the particles that pass through it. This experiment can be interpreted to exhibit the uncertainty principle: since the angular momentum cannot be measured on two perpendicular directions at the same time, the measurement of the angular momentum on the x direction destroys the previous determination of the angular momentum in the z direction. That's why the third apparatus measures renewed z+ and z- beams like the x measurement really made a clean slate of the z+ output. History The Stern–Gerlach experiment was conceived by Otto Stern in 1921 and performed by him and Walther Gerlach in Frankfurt in 1922. At the time of the experiment, the most prevalent model for describing the atom was the Bohr-Sommerfeld model, which described electrons as going around the positively charged nucleus only in certain discrete atomic orbitals or energy levels. Since the electron was quantized to be only in certain positions in space, the separation into distinct orbits was referred to as space quantization. The Stern–Gerlach experiment was meant to test the Bohr–Sommerfeld hypothesis that the direction of the angular momentum of a silver atom is quantized. The experiment was first performed with an electromagnet that allowed the non-uniform magnetic field to be turned on gradually from a null value. When the field was null, the silver atoms were deposited as a single band on the detecting glass slide. When the field was made stronger, the middle of the band began to widen and eventually to split into two, so that the glass-slide image looked like a lip-print, with an opening in the middle, and closure at either end. In the middle, where the magnetic field was strong enough to split the beam into two, statistically half of the silver atoms had been deflected by the non-uniformity of the field. Note that the experiment was performed several years before George Uhlenbeck and Samuel Goudsmit formulated their hypothesis about the existence of electron spin in 1925. Even though the result of the Stern−Gerlach experiment has later turned out to be in agreement with the predictions of quantum mechanics for a spin-1/2 particle, the experimental result was also consistent with the Bohr–Sommerfeld theory. In 1927, T.E. Phipps and J.B. Taylor reproduced the effect using hydrogen atoms in their ground state, thereby eliminating any doubts that may have been caused by the use of silver atoms. However, in 1926 the non-relativistic scalar Schrödinger equation had incorrectly predicted the magnetic moment of hydrogen to be zero in its ground state. 
To correct this problem Wolfgang Pauli considered a spin-1/2 version of the Schrödinger equation using the 3 Pauli matrices which now bear his name, which was later shown by Paul Dirac in 1928 to be a consequence of his relativistic Dirac equation. In the early 1930's Stern, together with Otto Robert Frisch and Immanuel Estermann improved the molecular beam apparatus sufficiently to measure the magnetic moment of the proton, a value nearly 2000 times smaller than the electron moment. In 1931, theoretical analysis by Gregory Breit and Isidor Isaac Rabi showed that this apparatus could be used to measure nuclear spin whenever the electronic configuration of the atom was known. The concept was applied by Rabi and Victor W. Cohen in 1934 to determine the spin of sodium atoms. In 1938 Rabi and coworkers inserted an oscillating magnetic field element into their apparatus, inventing nuclear magnetic resonance spectroscopy. By tuning the frequency of the oscillator to the frequency of the nuclear precessions they could selectively tune into each quantum level of the material under study. Rabi was awarded the Nobel Prize in 1944 for this work. Importance The Stern–Gerlach experiment was the first direct evidence of angular-momentum quantization in quantum mechanics, and it strongly influenced later developments in modern physics: In the decade that followed, scientists showed using similar techniques, that the nuclei of some atoms also have quantized angular momentum. It is the interaction of this nuclear angular momentum with the spin of the electron that is responsible for the hyperfine structure of the spectroscopic lines. Norman F. Ramsey later modified the Rabi apparatus to improve its sensitivity (using the separated oscillatory field method). In the early sixties, Ramsey, H. Mark Goldenberg, and Daniel Kleppner used a Stern–Gerlach system to produce a beam of polarized hydrogen as the source of energy for the hydrogen maser. This led to developing an extremely stable clock based on a hydrogen maser. From 1967 until 2019, the second was defined based on 9,192,631,770 Hz hyperfine transition of a cesium-133 atom; the atomic clock which is used to set this standard is an application of Ramsey's work. The Stern–Gerlach experiment has become a prototype for quantum measurement, demonstrating the observation of a single, real value (eigenvalue) of a previously indeterminate physical property. Entering the Stern–Gerlach magnet, the direction of the silver atom's magnetic moment is indefinite, but when the atom is registered at the screen, it is observed to be at either one spot or the other, and this outcome cannot be predicted in advance. Because the experiment illustrates the character of quantum measurements, The Feynman Lectures on Physics use idealized Stern–Gerlach apparatuses to explain the basic mathematics of quantum theory. See also Photon polarization Stern–Gerlach Medal German inventors and discoverers References Further reading External links Stern–Gerlach Experiment Java Applet Animation Stern–Gerlach Experiment Flash Model Detailed explanation of the Stern–Gerlach Experiment Animation, applications and research linked to the spin (Université Paris Sud) Wave Mechanics and Stern–Gerlach experiment at MIT OpenCourseWare Quantum measurement Foundational quantum physics Physics experiments Spintronics 1922 in science Articles containing video clips
0.768897
0.996624
0.766301
Directed-energy weapon
A directed-energy weapon (DEW) is a ranged weapon that damages its target with highly focused energy without a solid projectile, including lasers, microwaves, particle beams, and sound beams. Potential applications of this technology include weapons that target personnel, missiles, vehicles, and optical devices. In the United States, the Pentagon, DARPA, the Air Force Research Laboratory, United States Army Armament Research Development and Engineering Center, and the Naval Research Laboratory are researching directed-energy weapons to counter ballistic missiles, hypersonic cruise missiles, and hypersonic glide vehicles. These systems of missile defense are expected to come online no sooner than the mid to late-2020s. China, France, Germany, the United Kingdom, Russia, India, Israel, and Pakistan are also developing military-grade directed-energy weapons, while Iran and Turkey claim to have them in active service. The first use of directed-energy weapons in combat between military forces was claimed to have occurred in Libya in August 2019 by Turkey, which claimed to use the ALKA directed-energy weapon. After decades of research and development, most directed-energy weapons are still at the experimental stage and it remains to be seen if or when they will be deployed as practical, high-performance military weapons. Operational advantages Directed energy weapons could have several main advantages over conventional weaponry: Directed-energy weapons can be used discreetly; radiation does not generate sound and is invisible if outside the visible spectrum.<ref name="Defence IQ Press">"Defence IQ talks to Dr Palíšek about Directed Energy Weapon systems", Defence iQ', Nov. 20, 2012</ref> Light is, for practical purposes, unaffected by gravity, windage and Coriolis force, giving it an almost perfectly flat trajectory. This makes aim much more precise and extends the range to line-of-sight, limited only by beam diffraction and spread (which dilute the power and weaken the effect), and absorption or scattering by intervening atmospheric contents. Lasers travel at light-speed and have long range, making them suitable for use in space warfare. Laser weapons potentially eliminate many logistical problems in terms of ammunition supply, as long as there is enough energy to power them. Depending on several operational factors, directed-energy weapons may be cheaper to operate than conventional weapons in certain contexts. Use of high-powered microwave weapons, which are typically used to degrade and damage electronics such as drones, can be hard to attribute to a particular actor. Types Microwave Some devices are described as microwave weapons; the microwave frequency is commonly defined as being between 300 MHz and 300 GHz (wavelengths of 1 meter to 1 millimeter), which is within the radiofrequency (RF) range. Some examples of weapons which have been publicized by the military are as follows: Active Denial System Active Denial System is a millimeter wave source that heats the water in a human target's skin and thus causes incapacitating pain. It was developed by the U.S. Air Force Research Laboratory and Raytheon for riot-control duty. Though intended to cause severe pain while leaving no lasting damage, concern has been voiced as to whether the system could cause irreversible damage to the eyes. There has yet to be testing for long-term side effects of exposure to the microwave beam. It can also destroy unshielded electronics. 
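The "beam diffraction and spread" limit mentioned under operational advantages can be made concrete with the standard diffraction estimate θ ≈ 1.22 λ/D for the divergence of a beam from a circular aperture (a back-of-the-envelope sketch; the wavelength, aperture, and range below are assumed example values, not data about any particular system):

import math

wavelength = 1.06e-6   # assumed near-infrared laser wavelength, m
aperture = 0.3         # assumed output aperture diameter, m
target_range = 5000.0  # assumed target range, m

theta = 1.22 * wavelength / aperture          # diffraction-limited divergence angle, rad
spot = aperture + 2 * theta * target_range    # approximate beam diameter at the target, m

print(f"divergence ~ {theta * 1e6:.1f} microradians, beam diameter ~ {spot:.2f} m at {target_range / 1000:.0f} km")

Doubling the range roughly doubles the added spread, so keeping the delivered intensity up at long range requires a larger aperture or a shorter wavelength, on top of whatever the atmosphere absorbs or scatters.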
Vigilant Eagle Vigilant Eagle is a ground-based airport defense system that directs high-frequency microwaves towards any projectile that is fired at an aircraft. It was announced by Raytheon in 2005, and its waveforms were reported to have been demonstrated in field tests to be highly effective in defeating MANPADS missiles. The system consists of a missile-detecting and tracking subsystem (MDT), a command and control system, and a scanning array. The MDT is a fixed grid of passive infrared (IR) cameras. The command and control system determines the missile launch point. The scanning array projects microwaves that disrupt the surface-to-air missile's guidance system, deflecting it from the aircraft. Vigilant Eagle was not mentioned on Raytheon's website in 2022. Bofors HPM Blackout Bofors HPM Blackout is a high-powered microwave weapon that is said to be able to destroy at short distance a wide variety of commercial off-the-shelf (COTS) electronic equipment and is purportedly non-lethal. Magnus Karlsson (2009). "Bofors HPM Blackout". Artilleri-Tidskrift (2–2009): pp. 12–15. Retrieved 2010-01-04. EL/M-2080 Green Pine The effective radiated power (ERP) of the EL/M-2080 Green Pine radar makes it a hypothetical candidate for conversion into a directed-energy weapon, by focusing pulses of radar energy on target missiles. The energy spikes are tailored to enter missiles through antennas or sensor apertures, where they can fool guidance systems, scramble computer memories or even burn out sensitive electronic components. Active electronically scanned array AESA radars mounted on fighter aircraft have been slated as directed-energy weapons against missiles; however, a senior US Air Force officer noted: "they aren't particularly suited to create weapons effects on missiles because of limited antenna size, power and field of view". Potentially lethal effects are produced only inside 100 meters range, and disruptive effects at distances on the order of one kilometer. Moreover, cheap countermeasures can be applied to existing missiles. Anti-drone rifle A weapon often described as an "anti-drone rifle" or "anti-drone gun" is a battery-powered electromagnetic pulse weapon held to an operator's shoulder, pointed at a flying target in a way similar to a rifle, and operated. While not a rifle or gun, it is so nicknamed as it is handled in the same way as a personal rifle. The device emits separate electromagnetic pulses to suppress navigation and transmission channels used to operate an aerial drone, terminating the drone's contact with its operator; the out-of-control drone then crashes. The Russian Stupor is reported to have a range of two kilometers, covering a 20-degree sector; it also suppresses the drone's cameras. Stupor is reported to have been used by Russian forces during the Russian military intervention in the Syrian civil war. Both Russia and Ukraine are reported to have used these devices during the 2022 Russian invasion of Ukraine. The Ukrainian army is reported to use the Ukrainian KVS G-6, with a 3.5 km range and able to operate continuously for 30 minutes. The manufacturer states that the weapon can disrupt remote control, the transmission of video at 2.4 and 5 GHz, and GPS and Glonass satellite navigation signals. Ukraine has also used the EDM4S anti-drone rifle to shoot down Russian Eleron-3 drones. Due to the threat posed by drones in regard to terrorism, several police forces have carried anti-drone guns as part of their equipment. 
For example, during the policing of the Commonwealth Games in 2018, the Australian Queensland Police Service carried anti-drone guns with an effective range of . In Myanmar, police have been equipped with anti-drone guns "ostensibly to defend VIPs". Counter-electronics High Power Microwave Advanced Missile Project THOR/Mjolnir Radio Frequency Directed Energy Weapon (RFDEW) This UK-developed system was unveiled in May 2024 and uses radio waves to fry the electronic components of its targets, rendering them inoperable. It is capable of engaging multiple targets, including drone swarms, and reportedly costs less than 10 pence (13 cents) per shot, making it a cheaper alternative to traditional missile-based air defense systems. Laser A laser weapon is a directed-energy weapon based on lasers. DragonFire An example of a laser directed-energy weapon is the DragonFire currently being developed by the United Kingdom. It is reportedly in the 50 kW class and is capable of engaging any target within line-of-sight at a currently classified range. It has been tested against drones and mortar rounds and is expected to equip ships, aircraft and ground vehicles from 2027. Particle-beam Particle-beam weapons can use charged or neutral particles, and can be either endoatmospheric or exoatmospheric. Particle beams as beam weapons are theoretically possible, but practical weapons have not been demonstrated yet. Certain types of particle beams have the advantage of being self-focusing in the atmosphere. Blooming is also a problem in particle-beam weapons. Energy that would otherwise be focused on the target spreads out and the beam becomes less effective: Thermal blooming occurs in both charged and neutral particle beams, and occurs when particles bump into one another under the effects of thermal vibration, or bump into air molecules. Electrical blooming occurs only in charged particle beams, as ions of like charge repel one another. Plasma Plasma weapons fire a beam, bolt, or stream of plasma, which is an excited state of matter consisting of atomic electrons and nuclei, and free electrons if ionized, or other particles if pinched. The MARAUDER (Magnetically Accelerated Ring to Achieve Ultra-high Directed-Energy and Radiation) used the Shiva Star project (a high energy capacitor bank which provided the means to test weapons and other devices requiring brief and extremely large amounts of energy) to accelerate a toroid of plasma at a significant percentage of the speed of light. Additionally, the Russian Federation claims to be developing various plasma weapons. Sonic Long Range Acoustic Device (LRAD) The Long Range Acoustic Device (LRAD) is an acoustic hailing device developed by Genasys (formerly LRAD Corporation) to send messages and warning tones over longer distances or at higher volume than normal loudspeakers, and as a non-lethal directed-acoustic-energy weapon. LRAD systems are used for long-range communications in a variety of applications and as a means of non-lethal, non-projectile crowd control. They are also used on ships as an anti-piracy measure. According to the manufacturer's specifications, the systems weigh from and can emit sound in a 30°- 60° beam at 2.5 kHz. They range in size from small, portable handheld units which can be strapped to a person's chest, to larger models which require a mount. 
The power of the sound beam which LRADs produce is sufficient to penetrate vehicles and buildings while retaining a high degree of fidelity, so that verbal messages can be conveyed clearly in some situations. History Ancient Mirrors of Archimedes According to a legend, Archimedes created a mirror with an adjustable focal length (or more likely, a series of mirrors focused on a common point) to focus sunlight on ships of the Roman fleet as they invaded Syracuse, setting them on fire. Historians point out that the earliest accounts of the battle did not mention a "burning mirror", but merely stated that Archimedes's ingenuity combined with a way to hurl fire were relevant to the victory. Some attempts to replicate this feat have met with partial success; in particular, an experiment by students at MIT showed that a mirror-based weapon was at least possible, if not necessarily practical. The hosts of MythBusters tackled the Mirrors of Archimedes three times (in episodes 19, 57 and 172) and were never able to make the target ship catch fire, declaring the myth busted three separate times. 20th Century Robert Watson-Watt In 1935, the British Air Ministry asked Robert Watson-Watt of the Radio Research Station whether a "death ray" was possible. He and colleague Arnold Wilkins quickly concluded that it was not feasible, but as a consequence suggested using radio for the detection of aircraft, and this started the development of radar in Britain. The fictional "engine-stopping ray" Stories in the 1930s and World War II gave rise to the idea of an "engine-stopping ray". They seem to have arisen from the testing of the television transmitter in Feldberg, Germany. Because electrical noise from car engines would interfere with field strength measurements, sentries would stop all traffic in the vicinity for the twenty minutes or so needed for a test. Reversing the order of events in retelling the story created a "tale" in which the tourists' car engine stopped first, and only then were they approached by a German soldier who told them that they had to wait. The soldier returned a short time later to say that the engine would now work, and the tourists drove off. Such stories were circulating in Britain around 1938, and during the war British Intelligence relaunched the myth as a "British engine-stopping ray," trying to spoof the Germans into researching what the British had supposedly invented in an attempt to tie up German scientific resources. German World War II experimental weapons During the early 1940s Axis engineers developed a sonic cannon that could cause fatal vibrations in its target body. A methane gas combustion chamber leading to two parabolic dishes pulse-detonated at roughly 44 Hz. This sound, magnified by the dish reflectors, caused vertigo and nausea at by vibrating the middle ear bones and shaking the cochlear fluid within the inner ear. At distances of , the sound waves could act on organ tissues and fluids by repeatedly compressing and releasing compression-resistant organs such as the kidneys, spleen, and liver. (It had little detectable effect on malleable organs such as the heart, stomach and intestines.) Lung tissue was affected at only the closest ranges, as atmospheric air is highly compressible and only the blood-rich alveoli resist compression. In practice, the weapon was highly vulnerable to enemy fire. Rifle, bazooka and mortar rounds easily deformed the parabolic reflectors, rendering the wave amplification ineffective. 
In the later phases of World War II, Nazi Germany increasingly put its hopes on research into technologically revolutionary secret weapons, the Wunderwaffe. Among the directed-energy weapons the Nazis investigated were X-ray beam weapons developed under Heinz Schmellenmeier, Richard Gans and Fritz Houtermans. They built an electron accelerator called Rheotron to generate hard X-ray synchrotron beams for the Reichsluftfahrtministerium (RLM). Invented by Max Steenbeck at Siemens-Schuckert in the 1930s, these were later called Betatrons by the Americans. The intent was to pre-ionize ignition in aircraft engines and hence serve as anti-aircraft DEW and bring planes down into the reach of the flak. The Rheotron was captured by the Americans in Burggrub on April 14, 1945. Another approach was Ernst Schiebold's 'Röntgenkanone', developed from 1943 in Großostheim near Aschaffenburg. Richert Seifert & Co from Hamburg delivered parts. Reported use in Sino-Soviet conflicts The Central Intelligence Agency informed Secretary Henry Kissinger that it had twelve reports of Soviet forces using laser weapons against Chinese forces during the 1969 Sino-Soviet border clashes, though William Colby doubted that they had actually been employed. Northern Ireland "squawk box" field trials In 1973, New Scientist magazine reported that a sonic weapon known as a "squawk box" underwent successful field trials in Northern Ireland, using soldiers as guinea pigs. The device combined two slightly different frequencies which, in the ear, would be perceived as the sum of the two frequencies (ultrasonic) and the difference between the two frequencies (infrasonic); e.g., two directional speakers emitting 16,000 Hz and 16,002 Hz would produce in the ear two frequencies of 32,002 Hz and 2 Hz. The article states: "The squawk box is highly directional which gives it its appeal. Its effective beam width is so small that it can be directed at individuals in a riot. Other members of a crowd are unaffected, except by panic when they see people fainting, being sick, or running from the scene with their hands over their ears. The virtual inaudibility of the equipment is said to produce a 'spooky' psychological effect." The UK's Ministry of Defence denied the existence of such a device. It stated that it did have, however, an "ultra-loud public address system which [...] could be 'used for verbal communication over two miles, or put out a sustained or modulated sound blanket to make conversation, and thus crowd organisation, impossible.'" East German "decomposition" methods In East Germany in the 1960s, in an effort to avoid international condemnation for arresting and interrogating people who held politically incorrect views or performed actions deemed hostile by the state, the state security service, the Stasi, attempted alternative methods of repression which could paralyze people without imprisoning them. One such method was called decomposition (German: Zersetzung). In the 1970s and 1980s it became the primary method of repressing domestic "hostile-negative" forces. Some of the victims of this method suffered from cancer and claimed that they had also been targeted with directed X-rays. In addition, when the East German state collapsed, powerful X-ray equipment was found in prisons without there being any apparent reason to justify its presence. 
In 1999, the modern German state was investigating the possibility that this X-ray equipment was being used as weaponry and that it was a deliberate policy of the Stasi to attempt to give prisoners radiation poisoning, and thereby cancer, through the use of directed X-rays. The negative effects of the radiation poisoning and cancer would extend past the period of incarceration. In this manner someone could be debilitated even though they were no longer imprisoned. The historian Mary Fulbrook states, Strategic Defense Initiative In the 1980s, U.S. President Ronald Reagan proposed the Strategic Defense Initiative (SDI) program, which was nicknamed Star Wars. It suggested that lasers, perhaps space-based X-ray lasers, could destroy ICBMs in flight. Panel discussions on the role of high-power lasers in SDI took place at various laser conferences, during the 1980s, with the participation of noted physicists including Edward Teller.Duarte, F. J. (Ed.), Proceedings of the International Conference on Lasers '87 (STS, McLean, Va, 1988). A notable example of a directed energy system which came out of the SDI program is the Neutral Particle Beam Accelerator developed by Los Alamos National Laboratory. This system is officially described (on the Smithsonian Air and Space Museum website) as a low power neutral particle beam (NPB) accelerator, which was among several directed energy weapons examined by the Strategic Defense Initiative Organization for potential use in missile defense. In July 1989, the accelerator was launched from White Sands Missile Range as part of the Beam Experiment Aboard Rocket (BEAR) project, reaching an altitude of 200 kilometers (124 miles) and operating successfully in space before being recovered intact after reentry. The primary objectives of the test were to assess NPB propagation characteristics in space and gauge the effects on spacecraft components. Despite continued research into NPBs, no known weapon system utilizing this technology has been deployed. Though the strategic missile defense concept has continued to the present under the Missile Defense Agency, most of the directed-energy weapon concepts were shelved. However, Boeing has been somewhat successful with the Boeing YAL-1 and Boeing NC-135, the first of which destroyed two missiles in February 2010. Funding has been cut to both of the programs. Iraq War During the Iraq War, electromagnetic weapons, including high power microwaves, were used by the U.S. military to disrupt and destroy Iraqi electronic systems and may have been used for crowd control. Types and magnitudes of exposure to electromagnetic fields are unknown. Alleged tracking of Space Shuttle Challenger The Soviet Union invested some effort in the development of ruby and carbon dioxide lasers as anti-ballistic missile systems, and later as a tracking and anti-satellite system. There are reports that the Terra-3 complex at Sary Shagan was used on several occasions to temporarily "blind" US spy satellites in the IR range. It has been claimed that the USSR made use of the lasers at the Terra-3 site to target the Space Shuttle Challenger in 1984. At the time, the Soviet Union was concerned that the shuttle was being used as a reconnaissance platform. On 10 October 1984 (STS-41-G), the Terra-3 tracking laser was allegedly aimed at Challenger as it passed over the facility. 
Early reports claimed that this was responsible for causing "malfunctions on the space shuttle and distress to the crew", and that the United States filed a diplomatic protest about the incident. However, this story is comprehensively denied by the crew members of STS-41-G and knowledgeable members of the US intelligence community. After the end of the Cold War, the Terra-3 facility was found to be a low-power laser testing site with limited satellite tracking capabilities, which is now abandoned and partially disassembled. Modern 21st-century use Havana syndrome Havana syndrome is a disputed medical condition reported by US personnel in Havana, Cuba and other locations, originally suspected to be caused by microwave radiation. In January 2022, the Central Intelligence Agency issued an interim assessment concluding that the syndrome is not the result of "a sustained global campaign by a hostile power." Foreign involvement was ruled out in 976 cases of the 1,000 reviewed. In February 2022, the State Department released a report by the JASON Advisory Group, which stated that it was highly unlikely that a directed-energy attack had caused the health incidents. The cause of Havana syndrome remains unknown and controversial. Anti-piracy measures LRADs are often fitted on commercial and military ships. They have been used on several occasions to repel pirate attacks by sending warnings and by producing intolerable levels of sound. For example, in 2005 the cruise liner Seabourn Spirit used a sonic weapon to defend itself from Somali pirates in the Indian ocean. A few years later, the cruise liner Spirit of Adventure also defended itself from Somali pirates by using its LRAD to force them to retreat. Non-lethal weapon capability The TECOM Technology Symposium in 1997 concluded on non-lethal weapons, "determining the target effects on personnel is the greatest challenge to the testing community", primarily because "the potential of injury and death severely limits human tests". Also, "directed-energy weapons that target the central nervous system and cause neurophysiological disorders may violate the Certain Conventional Weapons Convention of 1980. Weapons that go beyond non-lethal intentions and cause 'superfluous injury or unnecessary suffering' may also violate the Protocol I to the Geneva Conventions of 1977." Some common bio-effects of non-lethal electromagnetic weapons include: Difficulty breathing Disorientation Nausea Pain Vertigo Other systemic discomfort Interference with breathing poses the most significant, potentially lethal results. Light and repetitive visual signals can induce epileptic seizures. Vection and motion sickness can also occur. Russia has reportedly been using blinding laser weapons during the Russo-Ukrainian War. See also Electronic warfare Electromagnetic pulse Ivan's hammer L3Harris Technologies Laser applications MEDUSA (weapon) Notes References The E-Bomb: How America's New Directed Energy Weapons Will Change the Way Future Wars Will Be Fought. Doug Beason (2005). . US claims that China has used high-energy lasers to interfere with US satellites: Jane's Defence Weekly, 18 October 2006 China jamming test sparks U.S. satellite concerns: USA Today Beijing secretly fires lasers to disable US satellites: The Daily Telegraph China Attempted To Blind U.S. 
Satellites With Laser: Defense News China Has Not Attacked US Satellites Says DoD: United Press International "China Has Not Attacked US Satellites Says DoD": Space Daily External links Airpower Australia Applied Energetics – Photonic and high-voltage energetics (formerly Ionatron) Wired (AP) article on weapons deployment in Iraq, Active Denial System and Stunstrike, July 10, 2005 Boeing Tests Laser-Mounted Humvee as IED Hunter, November 13, 2007 WSTIAC Quarterly, Vol. 7, No. 1 – "Directed Energy Weapons" Ogonek Report on '21st Century Weapons' "How 'Revolutionary' Is CHAMP, New Air Force Microwave Weapon?", November 28, 2012 by David Axe Electromagnetic radiation Non-lethal weapons
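The 16,000 Hz and 16,002 Hz arithmetic quoted in the Northern Ireland "squawk box" account above is ordinary intermodulation: any square-law (quadratic) nonlinearity acting on two tones produces their sum and difference frequencies. The short Python sketch below is purely illustrative (a toy square-law detector, not a model of any actual device) and simply confirms the 2 Hz and 32,002 Hz products numerically.

```python
# Illustrative only: a square-law nonlinearity applied to two tones at 16,000 Hz and
# 16,002 Hz produces intermodulation products at the difference (2 Hz) and sum
# (32,002 Hz) frequencies, matching the arithmetic quoted in the squawk-box passage.
import numpy as np

fs = 96_000                       # sample rate in Hz, high enough to represent 32,002 Hz
t = np.arange(0, 2.0, 1 / fs)     # 2 s of signal gives 0.5 Hz frequency resolution
f1, f2 = 16_000.0, 16_002.0

two_tones = np.cos(2 * np.pi * f1 * t) + np.cos(2 * np.pi * f2 * t)
distorted = two_tones ** 2        # square-law nonlinearity generates sum/difference tones

spectrum = np.abs(np.fft.rfft(distorted))
freqs = np.fft.rfftfreq(len(distorted), 1 / fs)

for target in (f2 - f1, f1 + f2):             # 2 Hz and 32,002 Hz
    idx = np.argmin(np.abs(freqs - target))
    print(f"{freqs[idx]:>10.1f} Hz  relative level {spectrum[idx] / spectrum.max():.2f}")
```

Besides the expected 2 Hz and 32,002 Hz lines, the squared signal also contains harmonics at 32,000 Hz and 32,004 Hz, which is exactly what the product-to-sum expansion of the two-tone signal predicts.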
0.767187
0.998831
0.76629
Svedberg
In chemistry, a Svedberg unit or svedberg (symbol S, sometimes Sv) is a non-SI metric unit for sedimentation coefficients. The Svedberg unit offers a measure of a particle's size indirectly based on its sedimentation rate under acceleration (i.e. how fast a particle of given size and shape settles out of suspension). The svedberg is a measure of time, defined as exactly 10⁻¹³ seconds (100 fs). For biological macromolecules and cell organelles like ribosomes, the sedimentation rate is typically measured as the rate of travel in a centrifuge tube subjected to high g-force. The svedberg (S) is distinct from the SI unit sievert and the non-SI unit sverdrup, which also use the symbol Sv, and from the SI unit siemens, which likewise uses the symbol S. Naming The unit is named after the Swedish chemist Theodor Svedberg (1884–1971), winner of the 1926 Nobel Prize in Chemistry for his work on disperse systems, colloids and his invention of the ultracentrifuge. Factors The Svedberg coefficient is a nonlinear function of a particle's properties: its mass, density, and shape determine its S value. The S value depends on the frictional forces retarding its movement, which, in turn, are related to the average cross-sectional area of the particle. The sedimentation coefficient is the ratio of the speed of a substance in a centrifuge to its acceleration in comparable units. A substance with a sedimentation coefficient of 26S will travel at 26 micrometers per second under the influence of an acceleration of a million gravities (10⁷ m/s²). Centrifugal acceleration is given as rω², where r is the radial distance from the rotation axis and ω is the angular velocity in radians per second. Bigger particles tend to sediment faster and so have higher Svedberg values. Svedberg units are not directly additive since they represent a rate of sedimentation, not weight. Use In centrifugation of small biochemical species, a convention has developed in which sedimentation coefficients are expressed in Svedberg units. The svedberg is the most important measure used to distinguish ribosomes. Ribosomes are composed of two complex subunits, each including rRNA and protein components. In prokaryotes (including bacteria), the subunits are named 30S and 50S for their "size" in Svedberg units. These subunits are made up of three forms of rRNA (16S, 23S, and 5S) and ribosomal proteins. For bacterial ribosomes, ultracentrifugation yields intact ribosomes (70S) as well as separated ribosomal subunits, the large subunit (50S) and the small subunit (30S). Within cells, ribosomes normally exist as a mixture of joined and separate subunits. The largest particles (whole ribosomes) sediment near the bottom of the tube, whereas the smaller particles (separated 50S and 30S subunits) appear in upper fractions. See also Sedimentation coefficient Differential centrifugation Footnotes References External links Svedberg unit - nobelprize.org Units of time Non-SI metric units
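A minimal numerical sketch (illustrative only) of the definitions above: since 1 S is exactly 10⁻¹³ s and the sedimentation coefficient is the ratio of settling speed to applied acceleration, the speed follows directly as v = s × a. The helper function below is a hypothetical convenience written for this sketch, not part of any standard library.

```python
# Minimal numerical sketch of the svedberg definitions given above (illustrative only).
SVEDBERG = 1e-13  # one svedberg, in seconds

def settling_speed(s_value: float, acceleration: float) -> float:
    """Sedimentation speed v = s * a, with s in svedbergs and a in m/s^2."""
    return s_value * SVEDBERG * acceleration

# A 26S particle at one million gravities (about 1e7 m/s^2):
v = settling_speed(26, 1e7)                      # 2.6e-5 m/s
print(f"26S particle: {v * 1e6:.1f} micrometers per second")

# S values are not additive: the 30S and 50S bacterial ribosomal subunits
# associate into a 70S ribosome (not 80S), because S reflects size and shape.
print("30S + 50S subunits -> 70S ribosome, not", 30 + 50, "S")
```

Running it reproduces the figure quoted above, about 26 micrometers per second for a 26S particle at 10⁷ m/s², and the final line restates why svedberg values are not directly additive.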
0.776773
0.986494
0.766282
Spin–statistics theorem
The spin–statistics theorem proves that the observed relationship between the intrinsic spin of a particle (angular momentum not due to the orbital motion) and the quantum particle statistics of collections of such particles is a consequence of the mathematics of quantum mechanics. In units of the reduced Planck constant ħ, all particles that move in 3 dimensions have either integer spin and obey Bose–Einstein statistics or half-integer spin and obey Fermi–Dirac statistics. Spin-statistics connection All known particles obey either Fermi–Dirac statistics or Bose–Einstein statistics. A particle's intrinsic spin always predicts the statistics of a collection of such particles and conversely: integral-spin particles are bosons with Bose–Einstein statistics, half-integral-spin particles are fermions with Fermi–Dirac statistics. A spin–statistics theorem shows that the mathematical logic of quantum mechanics predicts or explains this physical result. The statistics of indistinguishable particles is among the most fundamental of physical effects. The Pauli exclusion principle that every occupied quantum state contains at most one fermion controls the formation of matter. The basic building blocks of matter such as protons, neutrons, and electrons are all fermions. Conversely, particles such as the photon, which mediate forces between matter particles, are all bosons. A spin–statistics theorem attempts to explain the origin of this fundamental dichotomy. Background Naively, spin, an angular momentum property intrinsic to a particle, would be unrelated to fundamental properties of a collection of such particles. However, these are indistinguishable particles: any physical prediction relating multiple indistinguishable particles must not change when the particles are exchanged. Quantum states and indistinguishable particles In a quantum system, a physical state is described by a state vector. A pair of distinct state vectors are physically equivalent if they differ only by an overall phase factor, ignoring other interactions. A pair of indistinguishable particles such as this has only one state. This means that if the positions of the particles are exchanged (i.e., they undergo a permutation), this does not identify a new physical state, but rather one matching the original physical state. In fact, one cannot tell which particle is in which position. While the physical state does not change under the exchange of the particles' positions, it is possible for the state vector to change sign as a result of an exchange. Since this sign change is just an overall phase, this does not affect the physical state. The essential ingredient in proving the spin-statistics relation is relativity, that the physical laws do not change under Lorentz transformations. The field operators transform under Lorentz transformations according to the spin of the particle that they create, by definition. Additionally, the assumption (known as microcausality) that spacelike-separated fields either commute or anticommute can be made only for relativistic theories with a time direction. Otherwise, the notion of being spacelike is meaningless. However, the proof involves looking at a Euclidean version of spacetime, in which the time direction is treated as a spatial one, as will be now explained. Lorentz transformations include 3-dimensional rotations and boosts. A boost transfers to a frame of reference with a different velocity and is mathematically like a rotation into time. 
By analytic continuation of the correlation functions of a quantum field theory, the time coordinate may become imaginary, and then boosts become rotations. The new "spacetime" has only spatial directions and is termed Euclidean. Exchange symmetry or permutation symmetry Bosons are particles whose wavefunction is symmetric under such an exchange or permutation, so if we swap the particles, the wavefunction does not change. Fermions are particles whose wavefunction is antisymmetric, so under such a swap the wavefunction gets a minus sign, meaning that the amplitude for two identical fermions to occupy the same state must be zero. This is the Pauli exclusion principle: two identical fermions cannot occupy the same state. This rule does not hold for bosons. In quantum field theory, a state or a wavefunction is described by field operators operating on some basic state called the vacuum. In order for the operators to project out the symmetric or antisymmetric component of the creating wavefunction, they must have the appropriate commutation law. The operator ∫ ψ(x,y) φ(x) φ(y) dx dy (with φ an operator and ψ(x,y) a numerical function with complex values) creates a two-particle state with wavefunction ψ(x,y), and depending on the commutation properties of the fields, either only the antisymmetric parts or the symmetric parts matter. Let us assume that x ≠ y and the two operators take place at the same time; more generally, they may have spacelike separation, as is explained hereafter. If the fields commute, meaning that the following holds: φ(x)φ(y) = φ(y)φ(x), then only the symmetric part of ψ contributes, so that ψ(x,y) = ψ(y,x), and the field will create bosonic particles. On the other hand, if the fields anti-commute, meaning that φ has the property that φ(x)φ(y) = −φ(y)φ(x), then only the antisymmetric part of ψ contributes, so that ψ(x,y) = −ψ(y,x), and the particles will be fermionic. Proofs An elementary explanation for the spin–statistics theorem cannot be given despite the fact that the theorem is so simple to state. In the Feynman Lectures on Physics, Richard Feynman said that this probably means that we do not have a complete understanding of the fundamental principle involved. Numerous notable proofs have been published, with different kinds of limitations and assumptions. They are all "negative proofs", meaning that they establish that integral spin fields cannot result in fermion statistics while half-integral spin fields cannot result in boson statistics. Proofs that avoid using any relativistic quantum field theory mechanism have defects. Many such proofs rely on a claim that ψ(x₁, x₂) = ±ψ(x₂, x₁), in which the exchange operator simply permutes the coordinates. However, the value on the left-hand side represents the probability amplitude for particle 1 at x₁, particle 2 at x₂, and so on, and is thus quantum-mechanically invalid for indistinguishable particles. The first proof was formulated in 1939 by Markus Fierz, a student of Wolfgang Pauli, and was rederived in a more systematic way by Pauli the following year. In a later summary, Pauli listed three postulates within relativistic quantum field theory as required for these versions of the theorem: Any state with particle occupation has higher energy than the vacuum state. Spatially separated measurements do not disturb each other (they commute). Physical probabilities are positive (the metric of the Hilbert space is positive-definite). Their analysis neglected particle interactions other than commutation/anti-commutation of the state. In 1949 Richard Feynman gave a completely different type of proof based on vacuum polarization, which was later critiqued by Pauli. 
Pauli showed that Feynman's proof explicitly relied on the first two postulates he used and implicitly used the third one by first allowing negative probabilities but then rejecting field theory results with probabilities greater than one. A proof by Julian Schwinger in 1950 based on time-reversal invariance followed a proof by Frederik Belinfante in 1940 based on charge-conjugation invariance, leading to a connection to the CPT theorem more fully developed by Pauli in 1955. These proofs were notably difficult to follow. Work on the mathematical foundations of quantum mechanics by Arthur Wightman led to a theorem stating that the expectation value of the product of two fields, ⟨φ(x)φ(y)⟩, could be analytically continued to all separations (x − y). (The first two postulates of the Pauli-era proofs involve the vacuum state and fields at separate locations.) The new result allowed more rigorous proofs of the spin–statistics theorems by Gerhart Lüders and Bruno Zumino and by Peter Burgoyne. In 1957 Res Jost derived the CPT theorem using the spin–statistics theorem, and Burgoyne's proof of the spin–statistics theorem in 1958 required no constraints on the interactions nor on the form of the field theories. These results are among the most rigorous practical theorems. In spite of these successes, Feynman, in his 1963 undergraduate lecture that discussed the spin–statistics connection, says: "We apologize for the fact that we cannot give you an elementary explanation." Neuenschwander echoed this in 1994, asking whether there was any progress, spurring additional proofs and books. Neuenschwander's 2013 popularization of the spin–statistics connection suggested that simple explanations remain elusive. Experimental tests In 1987 Greenberg and Mohapatra proposed that the spin–statistics theorem could have small violations. With the help of very precise calculations for states of the He atom that violate the Pauli exclusion principle, Deilamian, Gillaspy and Kelleher looked for the 1s2s ¹S₀ state of He using an atomic-beam spectrometer. The search was unsuccessful, with an upper limit of 5×10⁻⁶. Relation to representation theory of the Lorentz group The Lorentz group has no non-trivial unitary representations of finite dimension. Thus it seems impossible to construct a Hilbert space in which all states have finite, non-zero spin and positive, Lorentz-invariant norm. This problem is overcome in different ways depending on particle spin–statistics. For a state of integer spin the negative norm states (known as "unphysical polarization") are set to zero, which makes the use of gauge symmetry necessary. For a state of half-integer spin the argument can be circumvented by having fermionic statistics. Quasiparticle anyons in 2 dimensions In 1982, physicist Frank Wilczek published a research paper on the possibility of fractional-spin particles, which he termed anyons from their ability to take on "any" spin. He wrote that they were theoretically predicted to arise in low-dimensional systems where motion is restricted to fewer than three spatial dimensions. Wilczek described their spin statistics as "interpolating continuously between the usual boson and fermion cases". The effect has become the basis for understanding the fractional quantum Hall effect. 
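The exchange-symmetry discussion above can be made concrete with a small numerical toy (an illustration of the statistics only, not of the field-theoretic proof). Assuming a finite discretization in which a two-particle amplitude ψ(x, y) is stored as a matrix, the symmetric (bosonic) part is (ψ + ψᵀ)/2 and the antisymmetric (fermionic) part is (ψ − ψᵀ)/2; the vanishing diagonal of the antisymmetric part is the Pauli exclusion principle in miniature.

```python
# Toy illustration (not a proof): project a generic two-particle amplitude psi(x, y),
# stored as a matrix psi[i, j], onto its bosonic (symmetric) and fermionic
# (antisymmetric) parts, and check the Pauli exclusion principle numerically.
import numpy as np

rng = np.random.default_rng(0)
psi = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))  # arbitrary psi(x, y)

psi_boson = 0.5 * (psi + psi.T)    # symmetric part:     psi(x, y) =  psi(y, x)
psi_fermion = 0.5 * (psi - psi.T)  # antisymmetric part: psi(x, y) = -psi(y, x)

assert np.allclose(psi_boson, psi_boson.T)
assert np.allclose(psi_fermion, -psi_fermion.T)

# Two identical fermions in the same state have zero amplitude (Pauli exclusion):
print("fermionic amplitudes psi(x, x):", np.diag(psi_fermion))   # all zeros
print("bosonic amplitude  psi(0, 0):", psi_boson[0, 0])          # generally nonzero
```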
See also Parastatistics Anyonic statistics Braid statistics References Further reading External links A nice nearly-proof at John Baez's home page Animation of the Dirac belt trick with a double belt, showing that belts behave as spin 1/2 particles Animation of a Dirac belt trick variant showing that spin 1/2 particles are fermions Articles containing proofs Particle statistics Physics theorems Quantum field theory Statistical mechanics theorems Theorems in quantum mechanics Theorems in mathematical physics
0.775065
0.98864
0.76626
Atomic, molecular, and optical physics
Atomic, molecular, and optical physics (AMO) is the study of matter–matter and light–matter interactions, at the scale of one or a few atoms and energy scales around several electron volts. The three areas are closely interrelated. AMO theory includes classical, semi-classical and quantum treatments. Typically, the theory and applications of emission, absorption, scattering of electromagnetic radiation (light) from excited atoms and molecules, analysis of spectroscopy, generation of lasers and masers, and the optical properties of matter in general, fall into these categories. Atomic and molecular physics Atomic physics is the subfield of AMO that studies atoms as an isolated system of electrons and an atomic nucleus, while molecular physics is the study of the physical properties of molecules. The term atomic physics is often associated with nuclear power and nuclear bombs, due to the synonymous use of atomic and nuclear in standard English. However, physicists distinguish between atomic physics — which deals with the atom as a system consisting of a nucleus and electrons — and nuclear physics, which considers atomic nuclei alone. The important experimental techniques are the various types of spectroscopy. Molecular physics, while closely related to atomic physics, also overlaps greatly with theoretical chemistry, physical chemistry and chemical physics. Both subfields are primarily concerned with electronic structure and the dynamical processes by which these arrangements change. Generally this work involves using quantum mechanics. For molecular physics, this approach is known as quantum chemistry. One important aspect of molecular physics is that the essential atomic orbital theory in the field of atomic physics expands to the molecular orbital theory. Molecular physics is concerned with atomic processes in molecules, but it is additionally concerned with effects due to the molecular structure. In addition to the electronic excitation states known from atoms, molecules are able to rotate and to vibrate. These rotations and vibrations are quantized; there are discrete energy levels. The smallest energy differences exist between different rotational states; therefore, pure rotational spectra lie in the far infrared region (about 30 - 150 μm wavelength) of the electromagnetic spectrum. Vibrational spectra are in the near infrared (about 1 - 5 μm) and spectra resulting from electronic transitions are mostly in the visible and ultraviolet regions. From measured rotational and vibrational spectra, properties of molecules such as the distance between the nuclei can be calculated. As with many scientific fields, strict delineation can be highly contrived and atomic physics is often considered in the wider context of atomic, molecular, and optical physics. Physics research groups are usually so classified. Optical physics Optical physics is the study of the generation of electromagnetic radiation, the properties of that radiation, and the interaction of that radiation with matter, especially its manipulation and control. It differs from general optics and optical engineering in that it is focused on the discovery and application of new phenomena. There is no strong distinction, however, between optical physics, applied optics, and optical engineering, since the devices of optical engineering and the applications of applied optics are necessary for basic research in optical physics, and that research leads to the development of new devices and applications. 
Often the same people are involved in both the basic research and the applied technology development, for example the experimental demonstration of electromagnetically induced transparency by S. E. Harris and of slow light by Harris and Lene Vestergaard Hau. Researchers in optical physics use and develop light sources that span the electromagnetic spectrum from microwaves to X-rays. The field includes the generation and detection of light, linear and nonlinear optical processes, and spectroscopy. Lasers and laser spectroscopy have transformed optical science. Major study in optical physics is also devoted to quantum optics and coherence, and to femtosecond optics. In optical physics, support is also provided in areas such as the nonlinear response of isolated atoms to intense, ultra-short electromagnetic fields, the atom-cavity interaction at high fields, and quantum properties of the electromagnetic field. Other important areas of research include the development of novel optical techniques for nano-optical measurements, diffractive optics, low-coherence interferometry, optical coherence tomography, and near-field microscopy. Research in optical physics places an emphasis on ultrafast optical science and technology. The applications of optical physics create advancements in communications, medicine, manufacturing, and even entertainment. History One of the earliest steps towards atomic physics was the recognition that matter was composed of atoms, in modern terms the basic unit of a chemical element. This theory was developed by John Dalton in the early 19th century. At this stage, it was not clear what atoms were, although they could be described and classified by their observable properties in bulk, as summarized by the developing periodic table of John Newlands and Dmitri Mendeleyev around the mid-to-late 19th century. Later, the connection between atomic physics and optical physics became apparent through the discovery of spectral lines and attempts to describe the phenomenon, notably by Joseph von Fraunhofer, Fresnel, and others in the 19th century. From that time to the 1920s, physicists were seeking to explain atomic spectra and blackbody radiation. One attempt to explain hydrogen spectral lines was the Bohr atom model. Experiments on the interaction of electromagnetic radiation and matter - such as the photoelectric effect, the Compton effect, and the spectrum of sunlight due to the then-unknown element helium - as well as the limitation of the Bohr model to hydrogen and numerous other considerations, led to an entirely new mathematical model of matter and light: quantum mechanics. Classical oscillator model of matter Early models to explain the origin of the index of refraction treated an electron in an atomic system classically according to the model of Paul Drude and Hendrik Lorentz. The theory was developed to attempt to provide an origin for the wavelength-dependent refractive index n of a material. In this model, incident electromagnetic waves forced an electron bound to an atom to oscillate. The amplitude of the oscillation would then have a relationship to the frequency of the incident electromagnetic wave and the resonant frequencies of the oscillator. The superposition of these emitted waves from many oscillators would then lead to a wave which moved more slowly. Early quantum model of matter and light Max Planck derived a formula to describe the electromagnetic field inside a box when in thermal equilibrium in 1900. His model consisted of a superposition of standing waves. 
In one dimension, the box has length L, and only sinusoidal waves of wavenumber k = nπ/L can occur in the box, where n is a positive integer (mathematically denoted by n ∈ ℕ). The equation describing these standing waves is given by E = E0 sin(nπx/L), where E0 is the magnitude of the electric field amplitude, and E is the magnitude of the electric field at position x. From this basis, Planck's law was derived. In 1911, Ernest Rutherford concluded, based on alpha particle scattering, that an atom has a central pointlike proton. He also thought that an electron would still be attracted to the proton by Coulomb's law, which he had verified still held at small scales. As a result, he believed that electrons revolved around the proton. Niels Bohr, in 1913, combined the Rutherford model of the atom with the quantisation ideas of Planck. Only specific and well-defined orbits of the electron could exist, which also do not radiate light. In jumping between orbits, the electron would emit or absorb light corresponding to the difference in energy of the orbits. His prediction of the energy levels was then consistent with observation. These results, based on a discrete set of specific standing waves, were inconsistent with the continuous classical oscillator model. Work by Albert Einstein in 1905 on the photoelectric effect led to the association of a light wave of frequency ν with a photon of energy hν. In 1917 Einstein created an extension to Bohr's model by the introduction of the three processes of stimulated emission, spontaneous emission and absorption (electromagnetic radiation). Modern treatments The largest steps towards the modern treatment were the formulation of quantum mechanics with the matrix mechanics approach by Werner Heisenberg and the discovery of the Schrödinger equation by Erwin Schrödinger. There are a variety of semi-classical treatments within AMO. Which aspects of the problem are treated quantum mechanically and which are treated classically is dependent on the specific problem at hand. The semi-classical approach is ubiquitous in computational work within AMO, largely due to the large decrease in computational cost and complexity associated with it. For matter under the action of a laser, a fully quantum mechanical treatment of the atomic or molecular system is combined with the system being under the action of a classical electromagnetic field. Since the field is treated classically it cannot deal with spontaneous emission. This semi-classical treatment is valid for most systems, particularly those under the action of high-intensity laser fields. The distinction between optical physics and quantum optics is the use of semi-classical and fully quantum treatments respectively. Within collision dynamics and using the semi-classical treatment, the internal degrees of freedom may be treated quantum mechanically, whilst the relative motion of the quantum systems under consideration is treated classically. When considering medium to high speed collisions, the nuclei can be treated classically while the electron is treated quantum mechanically. In low speed collisions the approximation fails. Classical Monte-Carlo methods for the dynamics of electrons can be described as semi-classical in that the initial conditions are calculated using a fully quantum treatment, but all further treatment is classical. Isolated atoms and molecules Atomic, Molecular and Optical physics frequently considers atoms and molecules in isolation. 
Atomic models will consist of a single nucleus that may be surrounded by one or more bound electrons, whilst molecular models are typically concerned with molecular hydrogen and its molecular hydrogen ion. It is concerned with processes such as ionization, above threshold ionization and excitation by photons or collisions with atomic particles. While modelling atoms in isolation may not seem realistic, if one considers molecules in a gas or plasma then the time-scales for molecule-molecule interactions are huge in comparison to the atomic and molecular processes that we are concerned with. This means that the individual molecules can be treated as if each were in isolation for the vast majority of the time. By this consideration atomic and molecular physics provides the underlying theory in plasma physics and atmospheric physics even though both deal with huge numbers of molecules. Electronic configuration Electrons form notional shells around the nucleus. These are naturally in a ground state but can be excited by the absorption of energy from light (photons), magnetic fields, or interaction with a colliding particle (typically other electrons). Electrons that populate a shell are said to be in a bound state. The energy necessary to remove an electron from its shell (taking it to infinity) is called the binding energy. Any quantity of energy absorbed by the electron in excess of this amount is converted to kinetic energy according to the conservation of energy. The atom is said to have undergone the process of ionization. In the event that the electron absorbs a quantity of energy less than the binding energy, it may transition to an excited state or to a virtual state. After a statistically sufficient quantity of time, an electron in an excited state will undergo a transition to a lower state via spontaneous emission. The change in energy between the two energy levels must be accounted for (conservation of energy). In a neutral atom, the system will emit a photon of the difference in energy. However, if the lower state is in an inner shell, a phenomenon known as the Auger effect may take place where the energy is transferred to another bound electrons causing it to go into the continuum. This allows one to multiply ionize an atom with a single photon. There are strict selection rules as to the electronic configurations that can be reached by excitation by light—however there are no such rules for excitation by collision processes. See also Born–Oppenheimer approximation Frequency doubling Diffraction Hyperfine structure Interferometry Isomeric shift Metamaterial cloaking Molecular energy state Molecular modeling Nanotechnology Negative index metamaterials Nonlinear optics Optical engineering Photon polarization Quantum chemistry Quantum optics Rigid rotor Spectroscopy Superlens Stationary state Transition of state Notes References Solid State Physics (2nd Edition), J.R. Hook, H.E. Hall, Manchester Physics Series, John Wiley & Sons, 2010, Light and Matter: Electromagnetism, Optics, Spectroscopy and Lasers, Y.B. Band, John Wiley & Sons, 2010, The Light Fantastic – Introduction to Classic and Quantum Optics, I.R. 
Kenyon, Oxford University Press, 2008, Handbook of atomic, molecular, and optical physics, Editor: Gordon Drake, Springer, Various authors, 1996, External links ScienceDirect - Advances In Atomic, Molecular, and Optical Physics Journal of Physics B: Atomic, Molecular and Optical Physics Institutions American Physical Society - Division of Atomic, Molecular & Optical Physics European Physical Society - Atomic, Molecular & Optical Physics Division National Science Foundation - Atomic, Molecular and Optical Physics MIT-Harvard Center for Ultracold Atoms Stanford QFARM Initiative for Quantum Science & Engineering JILA - Atomic and Molecular Physics Joint Quantum Institute at University of Maryland and NIST ORNL Physics Division Queen's University Belfast - Center for Theoretical, Atomic, Molecular and Optical Physics, University of California, Berkeley - Atomic, Molecular and Optical Physics
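The wavelength ranges quoted for rotational, vibrational and electronic spectra above correspond, through the photon-energy relation E = hc/λ, to the hierarchy of energy scales mentioned in the opening paragraph. The Python sketch below is illustrative only; the 200–700 nm band used for electronic transitions is an assumed representative range for the visible and ultraviolet.

```python
# Convert the wavelength ranges quoted above into photon energies via E = h*c/lambda.
# Illustrative only: shows why rotational < vibrational < electronic energy scales.
H = 6.626e-34      # Planck constant, J*s
C = 2.998e8        # speed of light, m/s
EV = 1.602e-19     # joules per electron volt

def photon_energy_ev(wavelength_m: float) -> float:
    return H * C / wavelength_m / EV

bands = {
    "rotational (30-150 um)": (30e-6, 150e-6),
    "vibrational (1-5 um)": (1e-6, 5e-6),
    "electronic (200-700 nm)": (200e-9, 700e-9),   # assumed visible/UV range
}
for name, (short, long) in bands.items():
    print(f"{name}: {photon_energy_ev(long):.3g} to {photon_energy_ev(short):.3g} eV")
```

It prints roughly 0.008–0.04 eV for rotational transitions, 0.25–1.2 eV for vibrational transitions, and about 1.8–6.2 eV for electronic transitions, consistent with "energy scales around several electron volts".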
0.780248
0.982046
0.766239
Telematics
Telematics is an interdisciplinary field encompassing telecommunications, vehicular technologies (road transport, road safety, etc.), electrical engineering (sensors, instrumentation, wireless communications, etc.), and computer science (multimedia, Internet, etc.). Telematics can involve any of the following: The technology of sending, receiving, and storing information using telecommunication devices to control remote objects The integrated use of telecommunications and informatics for application in vehicles and to control vehicles on the move Global navigation satellite system technology integrated with computers and mobile communications technology in automotive navigation systems (Most narrowly) The use of such systems within road vehicles (also called vehicle telematics) History Telematics is a translation of the French word télématique, which was first coined by Simon Nora and Alain Minc in a 1978 report to the French government on the computerization of society. It referred to the transfer of information over telecommunications and was a portmanteau blending the French words télécommunications ("telecommunications") and informatique ("computing science"). The original broad meaning of telematics continues to be used in academic fields, but in commerce it now generally means vehicle telematics. Vehicle telematics Telematics can be described as thus: The convergence of telecommunications and information processing, the term later evolved to refer to automation in automobiles, such as the invention of the emergency warning system for vehicles. GPS navigation, integrated hands-free cell phones, wireless safety communications, and automatic driving assistance systems all are covered under the telematics umbrella. The science of telecommunications and informatics applied in wireless technologies and computational systems. 802.11p, the IEEE standard in the 802.11 family and also referred to as Wireless Access for the Vehicular Environment (WAVE), is the primary standard that addresses and enhances Intelligent Transport System. Vehicle telematics can help improve the efficiency of an organization. Vehicle tracking Vehicle tracking is monitoring the location, movements, status, and behavior of a vehicle or fleet of vehicles. This is achieved through a combination of a GPS (GNSS) receiver and an electronic device (usually comprising a GSM GPRS modem or SMS sender) installed in each vehicle, communicating with the user (dispatching, emergency, or co-ordinating unit) and PC-based or web-based software. The data is turned into information by management reporting tools in conjunction with a visual display on computerized mapping software. Vehicle tracking systems may also use odometry or dead reckoning as an alternative or complementary means of navigation. GPS tracking is usually accurate to around 10–20 meters, but the European Space Agency has developed the EGNOS technology to provide accuracy to 1.5 meters. Trailer tracking Trailer tracking refers to the tracking of movements and position of an articulated vehicle's trailer unit through the use of a location unit fitted to the trailer and a method of returning the position data via mobile communication network, IOT (Internet of things), or geostationary satellite communications for use through either PC- or web-based software. 
Cold-store freight trailers that deliver fresh or frozen foods are increasingly incorporating telematics to gather time-series data on the temperature inside the cargo container, both to trigger alarms and record an audit trail for business purposes. An increasingly sophisticated array of sensors, many incorporating RFID technology, is being used to ensure the cold chain. Container tracking Freight containers can be tracked by GPS using a similar approach to that used for trailer tracking (i.e. a battery-powered GPS device communicating its position via mobile phone or satellite communications). Benefits of this approach include increased security and the possibility to reschedule the container transport movements based on accurate information about its location. According to Berg Insight, the installed base of tracking units in the intermodal shipping container segment reached 190,000 at the end of 2013. Growing at a compound annual growth rate of 38.2 percent, the installed base reached 960,000 units at the end of 2018. Fleet management Fleet management is the management of a company's fleet and includes the management of ships and/or motor vehicles such as cars, vans, and trucks. Fleet (vehicle) management can include a range of functions, such as vehicle financing, vehicle maintenance, vehicle telematics (tracking and diagnostics), driver management, fuel management, health and safety management, and dynamic vehicle scheduling. Fleet management is a function which allows companies that rely on transport in their business to remove or minimize the risks associated with vehicle investment, improving efficiency and productivity while reducing overall transport costs and ensuring compliance with government legislation and Duty of Care obligations. These functions can either be dealt with by an in-house fleet management department or an outsourced fleet management provider. Telematics standards The Association of Equipment Management Professionals (AEMP) developed the industry's first telematics standard. In 2008, AEMP brought together the major construction equipment manufacturers and telematics providers in the heavy equipment industry to discuss the development of the industry's first telematics standard. Following agreement from Caterpillar, Volvo CE, Komatsu, and John Deere Construction & Forestry to support such a standard, the AEMP formed a standards development subcommittee chaired by Pat Crail CEM to develop the standard. This committee consisted of developers provided by the Caterpillar/Trimble joint venture known as Virtual Site Solutions, Volvo CE, and John Deere. This group worked from February 2009 through September 2010 to develop the industry's first standard for the delivery of telematics data. The result, the AEMP Telematics Data Standard V1.1, was released in 2010 and officially went live on October 1, 2010. As of November 1, 2010, Caterpillar, Volvo CE, John Deere Construction & Forestry, OEM Data Delivery, and Navman Wireless are able to support customers with delivery of basic telematics data in a standard xml format. Komatsu, Topcon, and others are finishing beta testing and have indicated their ability to support customers in the near future. The AEMP's telematics data standard was developed to allow end users to integrate key telematics data (operating hours, location, fuel consumed, and odometer reading where applicable) into their existing fleet management reporting systems. 
As such, the standard was primarily intended to facilitate importation of these data elements into enterprise software systems such as those used by many medium-to-large construction contractors. Prior to the standard, end users had few options for integrating this data into their reporting systems in a mixed-fleet environment consisting of multiple brands of machines and a mix of telematics-equipped machines and legacy machines (those without telematics devices where operating data is still reported manually via pen and paper). One option available to machine owners was to visit multiple websites to manually retrieve data from each manufacturer's telematics interface and then manually enter it into their fleet management program's database. This option was cumbersome and labor-intensive. A second option was for the end user to develop an API (Application Programming Interface), or program, to integrate the data from each telematics provider into their database. This option was quite costly as each telematics provider had different procedures for accessing and retrieving the data and the data format varied from provider to provider. This option automated the process, but because each provider required a unique, custom API to retrieve and parse the data, it was an expensive option. In addition, another API had to be developed any time another brand of machine or telematics device was added to the fleet. A third option for mixed-fleet integration was to replace the various factory-installed telematics devices with devices from a third party telematics provider. Although this solved the problem of having multiple data providers requiring unique integration methods, this was by far the most expensive option. In addition to the expense, many third-party devices available for construction equipment are unable to access data directly from the machine's electronic control modules (ECMs), or computers, and are more limited than the device installed by the OEM (Cat, Volvo, Deere, Komatsu, etc.) in the data they are able to provide. In some cases, these devices are limited to location and engine runtime, although they are increasingly able to accommodate a number of add-on sensors to provide additional data. The AEMP Telematics Data Standard provides a fourth option. By concentrating on the key data elements that drive the majority of fleet management reports (hours, miles, location, fuel consumption), making those data elements available in a standardized xml format, and standardizing the means by which the document is retrieved, the standard enables the end user to use one API to retrieve data from any participating telematics provider (as opposed to the unique API for each provider that was required previously), greatly reducing integration development costs. The current draft version of the AEMP Telematics Data Standard is now called the AEM/AEMP Draft Telematics API Standard, which expands the original standard Version 1.2 to include 19 data fields (with fault code capability). This new draft standard is a collaborative effort of AEMP and the Association of Equipment Manufacturers (AEM), working on behalf of their members and the industry. This Draft API replaces the current version 1.2 and does not currently cover some types of equipment, e.g., agriculture equipment, cranes, mobile elevating work platforms, air compressors, and other niche products. 
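As a sketch of what consuming such a feed can look like, the snippet below parses a hypothetical XML payload carrying the four core data elements discussed above (operating hours, location, fuel consumed, odometer). The element names are invented for illustration and are not the actual AEMP/AEM schema; they only show how a single standardized document could replace per-vendor integration code.

```python
# Hypothetical sketch of consuming a telematics payload of the kind described above.
# The XML element names below are invented for illustration; they are NOT the real
# AEMP/AEM Telematics Data Standard schema.
import xml.etree.ElementTree as ET

sample_payload = """
<fleetSnapshot>
  <equipment serialNumber="EX-0001">
    <operatingHours>1523.4</operatingHours>
    <location lat="40.7128" lon="-74.0060"/>
    <fuelConsumedLiters>8210.5</fuelConsumedLiters>
    <odometerKm>12894</odometerKm>
  </equipment>
</fleetSnapshot>
"""

root = ET.fromstring(sample_payload)
for unit in root.iter("equipment"):
    loc = unit.find("location")
    print(
        unit.attrib["serialNumber"],
        unit.findtext("operatingHours"),
        (loc.attrib["lat"], loc.attrib["lon"]),
        unit.findtext("fuelConsumedLiters"),
        unit.findtext("odometerKm"),
    )
```

The point of the design, as described above, is that one parser like this can serve every participating telematics provider, instead of one custom API integration per vendor.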
In addition to the new data fields, the AEM/AEMP Draft Telematics API Standard changes how data is accessed in an effort to make it easier to consume and integrate with other systems and processes. It includes standardized communication protocols for the ability to transfer telematics information in mixed-equipment fleets to end user business enterprise systems, enabling the end user to employ their own business software to collect and then analyze asset data from mixed-equipment fleets without the need to work across multiple telematics provider applications. To achieve a globally recognized standard for conformity worldwide, the AEM/AEMP Draft Telematics API Standard will be submitted for acceptance by the International Organization for Standardization (ISO). Final language is dependent upon completion of the ISO acceptance process. Satellite navigation Satellite navigation in the context of vehicle telematics is the technology of using a GPS and electronic mapping tool to enable a driver to locate a position, plan a route, and navigate a journey. Mobile data Mobile data is the use of wireless data communications using radio waves to send and receive real-time computer data to, from, and between devices used by field-based personnel. These devices can be fitted solely for use while in the vehicle (Fixed Data Terminal) or for use in and out of the vehicle (Mobile Data Terminal). See mobile Internet. The common methods for mobile data communication for telematics were based on private vendors' RF communication infrastructure. During the early 2000s, manufacturers of mobile data terminals/AVL devices moved to try cellular data communication to offer cheaper ways to transmit telematics information and wider range based on cellular provider coverage. Since then, as a result of cellular providers offering low GPRS (2.5G) and later UMTS (3G) rates, mobile data is almost totally offered to telematics customers via cellular communication. Wireless vehicle safety communications Wireless vehicle safety communications telematics aid in car safety and road safety. It is an electronic subsystem in a vehicle used for exchanging safety information about road hazards and the locations and speeds of vehicles over short-range radio links. This may involve temporary ad hoc wireless local area networks. Wireless units are often installed in vehicles and fixed locations, such as near traffic signals and emergency call boxes along the road. Sensors in vehicles and at fixed locations, as well as in possible connections to wider networks, provide information displayed to drivers. The range of the radio links can be extended by forwarding messages along multi-hop paths. Even without fixed units, information about fixed hazards can be maintained by moving vehicles by passing it backwards. It also seems possible for traffic lights, which one can expect to become smarter, to use this information to reduce the chance of collisions. In the future, it may connect directly to the adaptive cruise control or other vehicle control aids. Cars and trucks with the wireless system connected to their brakes may move in convoys to save fuel and space on the roads. When a column member slows down, those behind it will automatically slow also. Certain scenarios may required less engineering effort, such as when a radio beacon is connected to a brake light. In fall 2008, network ideas were tested in Europe, where radio frequency bandwidth had been allocated. 
The 30 MHz allocated is at 5.9 GHz, and unallocated bandwidth at 5.4 GHz may also be used. The standard is IEEE 802.11p, a low-latency form of the Wi-Fi local area network standard. Similar efforts are underway in Japan and the USA. Emergency warning system for vehicles Telematics technologies are self-orientating open network architecture structures of variable programmable intelligent beacons developed for application in the development of intelligent vehicles with the intent to accord (blend or mesh) warning information with surrounding vehicles in the vicinity of travel, intra-vehicle, and infrastructure. Emergency warning systems for vehicle telematics are developed particularly for international harmonization and standardization of vehicle-to-vehicle, infrastructure-to-vehicle, and vehicle-to-infrastructure real-time Dedicated Short-Range Communication (DSRC) systems. Telematics most commonly relate to computerized systems that update information at the same rate they receive data, enabling them to direct or control a process such as an instantaneous autonomous warning notification in a remote machine or group of machines. In the use of telematics relating to intelligent vehicle technologies, instantaneous direction travel cognizance of a vehicle may be transmitted in real-time to surrounding vehicles traveling in the local area of vehicles equipped (with EWSV) to receive said warning signals of danger. Intelligent vehicle technologies Telematics comprise electronic, electromechanical, and electromagnetic devices—usually silicon micro-machined components operating in conjunction with computer-controlled devices and radio transceivers to provide precision repeatability functions (such as in robotics artificial intelligence systems) emergency warning validation performance reconstruction. Intelligent vehicle technologies commonly apply to car safety systems and self-contained autonomous electromechanical sensors generating warnings that can be transmitted within a specified targeted area of interest, i.e. within 100 meters of the emergency warning system for the vehicle's transceiver. In ground applications, intelligent vehicle technologies are utilized for safety and commercial communications between vehicles or between a vehicle and a sensor along the road. On November 3, 2009, the most advanced Intelligent Vehicle concept car was demonstrated in New York City when a 2010 Toyota Prius became the first LTE connected car. The demonstration was provided by the NG Connect project, a collaboration of automotive telematic technologies designed to exploit in-car 4G wireless network connectivity. Carsharing Telematics technology has enabled the emergence of carsharing services such as Local Motion, Uber, Lyft, Car2Go, Zipcar worldwide, or City Car Club in the UK. Telematics-enabled computers allow organizers to track members' usage and bill them on a pay-as-you-drive basis. Some systems show users where to find an idle vehicle. Car Clubs such as Australia's Charter Drive use telematics to monitor and report on vehicle use within predefined geofence areas to demonstrate the reach of their transit media car club fleet. Auto insurance/Usage-based insurance (UBI) The general idea of telematics auto insurance is that a driver's behavior is monitored directly while the person drives and this information is transmitted to an insurance company. The insurance company then assesses the risk of that driver having an accident and charges insurance premiums accordingly. 
A driver who drives less responsibly will be charged a higher premium than a driver who drives smoothly and with less calculated risk of claim propensity. Other benefits can be delivered to end users with Telematics 2.0-based telematics, as customer engagement can be enhanced with direct customer interaction. Telematics auto insurance was independently invented and patented by a major U.S. auto insurance company, Progressive Auto Insurance, and a Spanish independent inventor, Salvador Minguijon Perez (European Patent EP0700009B1). The Perez patents cover monitoring the car's engine control computer to determine distance driven, speed, time of day, braking force, etc. Progressive is currently developing the Perez technology in the U.S. and European auto insurer Norwich Union is developing the Progressive technology for Europe. Both patents have since been overturned in courts due to prior work in the commercial insurance sectors. Trials conducted by Norwich Union in 2005 found that young drivers (18- to 23-year-olds) signing up for telematics auto insurance had a 20% lower accident rate than average. In 2007, theoretical economic research on the social welfare effects of Progressive's telematics technology business process patents questioned whether the business process patents are Pareto efficient for society. Preliminary results suggested that they were not, but more work is needed. In April 2014, Progressive patents were overturned by the U.S. legal system on the grounds of "lack of originality." The smartphone as the in-vehicle device for insurance telematics has been discussed in great detail, and the instruments are available for the design of smartphone-driven insurance telematics. Telematics education Engineering Degree programs Federico Santa María Technical University (UTFSM) in Chile has a Telematics Engineering program which is a six-year full-time program of study (12 academic semesters). The final degree in Telematics Engineering has the title of Ingeniería Civil Telemática (with the suffix of Civil). Pontifical Catholic University Mother and Teacher (PUCMM) in the Dominican Republic has a Telematics Engineering program which is a four-year full-time program of study (12 academic four-month periods). The final degree in Telematics Engineering has the title of Ingeniería Telemática. University Bachelor programs Harokopio University of Athens has a four-year full-time program of study. The department's goal is the development and advancement of computer science, primarily in the field of network information systems and related e-services. For this purpose, attention is focused on the fields of telematics (teleinformatics) which relate to network and internet technologies, e-business, e-government, e-health, advanced transport telematics, etc. TH Wildau in Wildau, Germany has provided a three-year full-time telematics Bachelor study program since 1999. TU Graz in Graz, Austria offers a three-year Bachelor in telematics (now called "Information and Computer Engineering"). Singapore Institute of Technology offers a three-year Bachelor in Telematics. National Open and Distance Learning University of Mexico (UNADM) offers a four-year degree in Telematics delivered online. 
University Masters programs Several universities provide two-year Telematics Master of Science programs: Norwegian University of Science and Technology (NTNU), Norway University of Twente (UT), The Netherlands University Carlos III of Madrid (UC3M), Spain Harokopio University Athens TH Wildau in Wildau, Germany TU Graz in Graz, Austria (now called "Information and Computer Engineering") European Automotive Digital Innovation Studio (EADIS) In 2007, a project entitled the European Automotive Digital Innovation Studio (EADIS) was awarded 400,000 Euros from the European commission under its Leonardo da Vinci program. EADIS used a virtual work environment called the Digital Innovation Studio to train and develop professional designers in the automotive industry in the impact and application of vehicle telematics so they could integrate new technologies into future products within the automotive industry. Funding ended in 2013. See also Artificial Passenger Fleet telematics system Floating car data GNSS road pricing Infotainment Map database management Mass surveillance Telematic art Telematic control unit Telematics for Libraries Program Notes References Matthew Wright, Editor, UK Telematics Online IEEE Communications Magazine, April 2005, "Ad Hoc Peer-to-Peer Network Architecture for Vehicle Safety Communications" IEEE Communications Magazine, April 2005, "The Application-Based Clustering Concept and Requirements for Intervehicle Networks" Jerzy Mikulski, Editor, "Advances in Transport Systems Telematics". Monograph. Publisher Jacek Skalmierski Computer Studio. Katowice 2006. Jerzy Mikulski, Editor, "Advances in Transport Systems Telematics 2". Monograph. Publisher Chair of Automatic Control in Transport, Faculty of Transport, Silesian University of Technology. Katowice 2007. World report on road traffic injury prevention. World Health Organization. Automotive electronics Dashboard head units Global Positioning System American inventions Vehicle technology Wireless locating
Geosphere
There are several conflicting usages of the term geosphere. It may be taken as the collective name for the lithosphere, the hydrosphere, the cryosphere, and the atmosphere. The different components of the geosphere can exchange mass and/or energy fluxes (measurable amounts of change). The exchange of these fluxes affects the balance of the different spheres of the geosphere. For example, soil acts as a part of the biosphere while also acting as a source of flux exchange. In Aristotelian physics, the term was applied to four spherical natural places, concentrically nested around the center of the Earth, as described in the lectures Physica and Meteorologica. They were believed to explain the motions of the four terrestrial elements: Earth, Water, Air, and Fire. In modern texts and in Earth system science, geosphere refers to the solid parts of the Earth; it is used along with atmosphere, hydrosphere, and biosphere to describe the systems of the Earth (the interaction of these systems with the magnetosphere is sometimes listed). In that context, the term lithosphere is sometimes used instead of geosphere or solid Earth. The lithosphere, however, only refers to the uppermost layers of the solid Earth (oceanic and continental crustal rocks and uppermost mantle). Since space exploration began, it has been observed that the extent of the ionosphere or plasmasphere is highly variable, and often much larger than previously appreciated, at times extending to the boundaries of the Earth's magnetosphere. This highly variable outer boundary of geogenic matter has been referred to as the "geopause" (or magnetopause), to suggest the relative scarcity of such matter beyond it, where the solar wind dominates. References Geophysics
Alcubierre drive
The Alcubierre drive is a speculative warp drive idea according to which a spacecraft could achieve apparent faster-than-light travel by contracting space in front of it and expanding space behind it, under the assumption that a configurable energy-density field lower than that of vacuum (that is, negative mass) could be created. Proposed by theoretical physicist Miguel Alcubierre in 1994, the Alcubierre drive is based on a solution of Einstein's field equations. Since those solutions are metric tensors, the Alcubierre drive is also referred to as Alcubierre metric. Objects cannot accelerate to the speed of light within normal spacetime; instead, the Alcubierre drive shifts space around an object so that the object would arrive at its destination more quickly than light would in normal space without breaking any physical laws. Although the metric proposed by Alcubierre is consistent with the Einstein field equations, construction of such a drive is not necessarily possible. The proposed mechanism of the Alcubierre drive implies a negative energy density and therefore requires exotic matter or manipulation of dark energy. If exotic matter with the correct properties cannot exist, then the drive cannot be constructed. At the close of his original article, however, Alcubierre argued (following an argument developed by physicists analyzing traversable wormholes) that the Casimir vacuum between parallel plates could fulfill the negative-energy requirement for the Alcubierre drive. Another possible issue is that, although the Alcubierre metric is consistent with Einstein's equations, general relativity does not incorporate quantum mechanics. Some physicists have presented arguments to suggest that a theory of quantum gravity (which would incorporate both theories) would eliminate those solutions in general relativity that allow for backward time travel (see the chronology protection conjecture) and thus make the Alcubierre drive invalid. History In 1994, Miguel Alcubierre proposed a method for changing the geometry of space by creating a wave that would cause the fabric of space ahead of a spacecraft to contract and the space behind it to expand. The ship would then ride this wave inside a region of flat space, known as a warp bubble, and would not move within this bubble but instead be carried along as the region itself moves due to the actions of the drive. The local velocity relative to the deformed spacetime would be subluminal, but the speed at which a spacecraft could move would be superluminal, thereby rendering possible interstellar flight, such as a visit to Proxima Centauri within a few days. Alcubierre metric The Alcubierre metric defines the warp-drive spacetime. It is a Lorentzian manifold that, if interpreted in the context of general relativity, allows a warp bubble to appear in previously flat spacetime and move away at effectively faster-than-light speed. The interior of the bubble is an inertial reference frame and inhabitants experience no proper acceleration. This method of transport does not involve objects in motion at faster-than-light speeds with respect to the contents of the warp bubble; that is, a light beam within the warp bubble would still always move more quickly than the ship. 
Because objects within the bubble are not moving (locally) more quickly than light, the mathematical formulation of the Alcubierre metric is consistent with the conventional claims of the laws of relativity (namely, that an object with mass cannot attain or exceed the speed of light) and conventional relativistic effects such as time dilation would not apply as they would with conventional motion at near-light speeds. An extension of the Alcubierre metric that eliminates the expansion of the volume elements and instead relies on the change in distances along the direction of travel is that of mathematician José Natário. In his metric, spacetime contracts towards the prow of the ship and expands in the direction perpendicular to the motion, meaning that the bubble actually "slides" through space, roughly speaking by "pushing space aside". The Alcubierre drive remains a hypothetical concept with seemingly difficult problems, although the amount of energy required is no longer thought to be unobtainably large. Furthermore, Alexey Bobrick and Gianni Martire claim that, in principle, a class of subluminal, spherically symmetric warp drive spacetimes can be constructed based on physical principles presently known to humanity, such as positive energy. Mathematics Using the ADM formalism of general relativity, the spacetime is described by a foliation of space-like hypersurfaces of constant coordinate time $t$, with the metric taking the following general form: $ds^2 = -\left(\alpha^2 - \beta_i \beta^i\right) dt^2 + 2 \beta_i \, dx^i \, dt + \gamma_{ij} \, dx^i \, dx^j$, where $\alpha$ is the lapse function that gives the interval of proper time between nearby hypersurfaces, $\beta^i$ is the shift vector that relates the spatial coordinate systems on different hypersurfaces, and $\gamma_{ij}$ is a positive-definite metric on each of the hypersurfaces. The particular form that Alcubierre studied is defined by $\alpha = 1$, $\gamma_{ij} = \delta_{ij}$, $\beta^x = -v_s(t)\, f\big(r_s(t)\big)$ and $\beta^y = \beta^z = 0$, where $v_s(t) = dx_s(t)/dt$ is the velocity of the bubble centre $x_s(t)$, $r_s(t) = \sqrt{(x - x_s(t))^2 + y^2 + z^2}$ is the distance from that centre, and the shape function is $f(r_s) = \dfrac{\tanh\big(\sigma(r_s + R)\big) - \tanh\big(\sigma(r_s - R)\big)}{2\tanh(\sigma R)}$ with arbitrary parameters $R > 0$ and $\sigma > 0$. Alcubierre's specific form of the metric can thus be written: $ds^2 = -dt^2 + \big(dx - v_s f(r_s)\, dt\big)^2 + dy^2 + dz^2$. With this particular form of the metric, it can be shown that the energy density measured by observers whose 4-velocity is normal to the hypersurfaces is given by $-\dfrac{1}{8\pi}\dfrac{v_s^2 \left(y^2 + z^2\right)}{4 g^2 r_s^2}\left(\dfrac{df}{dr_s}\right)^2$, where $g$ is the determinant of the metric tensor. Thus, because the energy density is negative, one needs exotic matter to travel more quickly than the speed of light. The existence of exotic matter is not theoretically ruled out; however, generating and sustaining enough exotic matter to perform feats such as faster-than-light travel (and to keep open the "throat" of a wormhole) is thought to be impractical. According to writer Robert Low, within the context of general relativity it is impossible to construct a warp drive in the absence of exotic matter. Connection to dark energy and dark matter Astrophysicist Jamie Farnes from the University of Oxford has proposed a theory, published in the peer-reviewed scientific journal Astronomy & Astrophysics, that unifies dark energy and dark matter into a single dark fluid, and which is expected to be testable by the Square Kilometre Array around 2030. Farnes found that Albert Einstein had explored the idea of gravitationally repulsive negative masses while developing the equations of general relativity, an idea which leads to a "beautiful" hypothesis where the cosmos has equal amounts of positive and negative qualities. Farnes' theory relies on negative masses that behave identically to the physics of the Alcubierre drive, providing a natural solution for the current "crisis in cosmology" due to a time-variable Hubble parameter. As Farnes' theory allows a positive mass (i.e. 
a ship) to reach a speed equal to the speed of light, it has been dubbed "controversial". If the theory is correct, which has been highly debated in the scientific literature, it would explain dark energy, dark matter, allow closed timelike curves (see time travel), and suggest that an Alcubierre drive is physically possible with exotic matter. Physics With regard to certain specific effects of special relativity, such as Lorentz contraction and time dilation, the Alcubierre metric has some apparently peculiar aspects. In particular, Alcubierre has shown that a ship using an Alcubierre drive travels on a free-fall geodesic even while the warp bubble is accelerating: its crew would be in free fall while accelerating without experiencing accelerational g-forces. Enormous tidal forces, however, would be present near the edges of the flat-space volume because of the large space curvature there, but a suitable specification of the metric would keep the tidal forces very small within the volume occupied by the ship. The original warp-drive metric and simple variants of it happen to have the ADM form, which is often used in discussing the initial-value formulation of general relativity. This might explain the widespread misconception that this spacetime is a solution of the field equation of general relativity. Metrics in ADM form are adapted to a certain family of inertial observers, but these observers are not really physically distinguished from other such families. Alcubierre interpreted his "warp bubble" in terms of a contraction of space ahead of the bubble and an expansion behind, but this interpretation could be misleading, since the contraction and expansion actually refer to the relative motion of nearby members of the family of ADM observers. In general relativity, one often first specifies a plausible distribution of matter and energy, and then finds the geometry of the spacetime associated with it; but it is also possible to run the Einstein field equations in the other direction, first specifying a metric and then finding the energy–momentum tensor associated with it, and this is what Alcubierre did in building his metric. This practice means that the solution can violate various energy conditions and require exotic matter. The need for exotic matter raises questions about whether one can distribute the matter in an initial spacetime that lacks a warp bubble in such a way that the bubble is created at a later time, although some physicists have proposed models of dynamical warp-drive spacetimes in which a warp bubble is formed in a previously flat space. Moreover, according to Serguei Krasnikov, generating a bubble in a previously flat space for a one-way faster-than-light trip requires forcing the exotic matter to move at local faster-than-light speeds, something that would require the existence of tachyons, although Krasnikov also notes that when the spacetime is not flat from the outset, a similar result could be achieved without tachyons by placing in advance some devices along the travel path and programming them to come into operation at preassigned moments and to operate in a preassigned manner. Some suggested methods avoid the problem of tachyonic motion, but would probably generate a naked singularity at the front of the bubble. Allen Everett and Thomas Roman comment on Krasnikov's finding (Krasnikov tube): [The finding] does not mean that Alcubierre bubbles, if it were possible to create them, could not be used as a means of superluminal travel. 
It only means that the actions required to change the metric and create the bubble must be taken beforehand by some observer whose forward light cone contains the entire trajectory of the bubble. For example, if one wanted to travel to Deneb (2,600 light-years away) and arrive less than 2,600 years in the future according to external clocks, it would be required that someone had already begun work on warping the space from Earth to Deneb at least 2,600 years ago: A spaceship appropriately located with respect to the bubble trajectory could then choose to enter the bubble, rather like a passenger catching a passing trolley car, and thus make the superluminal journey ... as Krasnikov points out, causality considerations do not prevent the crew of a spaceship from arranging, by their own actions, to complete a round trip from Earth to a distant star and back in an arbitrarily short time, as measured by clocks on Earth, by altering the metric along the path of their outbound trip. Difficulties Mass–energy requirement The metric of this form has significant difficulties because all known warp-drive spacetime theories violate various energy conditions. Nevertheless, an Alcubierre-type warp drive might be realized by exploiting certain experimentally verified quantum phenomena, such as the Casimir effect, that lead to stress–energy tensors that also violate the energy conditions, such as negative mass–energy, when described in the context of the quantum field theories. If certain quantum inequalities conjectured by Ford and Roman hold, the energy requirements for some warp drives may be unfeasibly large as well as negative. For example, the energy equivalent of −1064 kg might be required to transport a small spaceship across the Milky Way—an amount orders of magnitude greater than the estimated mass of the observable universe. Counterarguments to these apparent problems have also been offered, although the energy requirements still generally require a Type III civilization on the Kardashev scale. Chris Van Den Broeck of the Katholieke Universiteit Leuven in Belgium, in 1999, tried to address the potential issues. By contracting the 3+1-dimensional surface area of the bubble being transported by the drive, while at the same time expanding the three-dimensional volume contained inside, Van Den Broeck was able to reduce the total energy needed to transport small atoms to less than three solar masses. Later in 2003, by slightly modifying the Van den Broeck metric, Serguei Krasnikov reduced the necessary total amount of negative mass to a few milligrams. Van Den Broeck detailed this by saying that the total energy can be reduced dramatically by keeping the surface area of the warp bubble itself microscopically small, while at the same time expanding the spatial volume inside the bubble. However, Van Den Broeck concludes that the energy densities required are still unachievable, as are the small size (a few orders of magnitude above the Planck scale) of the spacetime structures needed. In 2012, physicist Harold White and collaborators announced that modifying the geometry of exotic matter could reduce the mass–energy requirements for a macroscopic space ship from the equivalent of the planet Jupiter to that of the Voyager 1 spacecraft (c. 700 kg) or less, and stated their intent to perform small-scale experiments in constructing warp fields. 
White proposed to thicken the extremely thin wall of the warp bubble, so the energy is focused in a larger volume, but the overall peak energy density is actually smaller. In a flat 2D representation, the ring of positive and negative energy, initially very thin, becomes a larger, fuzzy torus (donut shape). However, as this less energetic warp bubble also thickens toward the interior region, it leaves less flat space to house the spacecraft, which has to be smaller. Furthermore, if the intensity of the space warp can be oscillated over time, the energy required is reduced even more. According to White, a modified Michelson–Morley interferometer could test the idea: one of the legs of the interferometer would appear to have a slightly different length when the test devices were energised. Alcubierre has expressed skepticism about the experiment, saying: "from my understanding there is no way it can be done, probably not for centuries if at all". In 2021, physicist Erik Lentz described a way warp drives sourced from known and familiar purely positive energy could exist—warp bubbles based on superluminal self-reinforcing "soliton" waves. The claim is controversial, with other physicists arguing that all physically reasonable warp drives violate the weak energy condition, as well as both the strong and dominant energy conditions. Placement of matter Krasnikov proposed that if tachyonic matter cannot be found or used, then a solution might be to arrange for masses along the path of the vessel to be set in motion in such a way that the required field was produced. But in this case, the Alcubierre drive vessel can only travel routes that, like a railroad, have first been equipped with the necessary infrastructure. The pilot inside the bubble is causally disconnected from its walls and cannot carry out any action outside the bubble: the bubble cannot be used for the first trip to a distant star because the pilot cannot place infrastructure ahead of the bubble while "in transit". For example, traveling to Vega (which is 25 light-years from Earth) requires arranging everything so that the bubble moving toward Vega with a superluminal velocity would appear; such arrangements will always take more than 25 years. Coule has argued that schemes, such as the one proposed by Alcubierre, are infeasible because matter placed en route of the intended path of a craft must be placed at superluminal speed—that constructing an Alcubierre drive requires an Alcubierre drive even if the metric that allows it is physically meaningful. Coule further argues that an analogous objection will apply to any proposed method of constructing an Alcubierre drive. Survivability inside the bubble An article by José Natário (2002) argues that crew members could not control, steer or stop the ship in its warp bubble because the ship could not send signals to the front of the bubble. A 2009 article by Carlos Barceló, Stefano Finazzi, and Stefano Liberati uses quantum theory to argue that the Alcubierre drive at faster-than-light velocities is impossible mostly because extremely high temperatures caused by Hawking radiation would destroy anything inside the bubble at superluminal velocities and destabilize the bubble itself; the article also argues that these problems are absent if the bubble velocity is subluminal, although the drive still requires exotic matter. Damaging effect on destination Brendan McMonigal, Geraint F. 
Lewis, and Philip O'Byrne have argued that were an Alcubierre-driven ship to decelerate from superluminal speed, the particles that its bubble had gathered in transit would be released in energetic outbursts akin to the infinitely-blueshifted radiation hypothesized to occur at the inner event horizon of a Kerr black hole; forward-facing particles would thereby be energetic enough to destroy anything at the destination directly in front of the ship. Wall thickness The amount of negative energy required for such a propulsion is not yet known. Pfenning and Allen Everett of Tufts hold that a warp bubble traveling at 10-times the speed of light must have a wall thickness of no more than 10−32 meters—close to the limiting Planck length, 1.6 × 10−35 meters. In Alcubierre's original calculations, a bubble macroscopically large enough to enclose a ship of 200 meters would require a total amount of exotic matter greater than the mass of the observable universe, and straining the exotic matter to an extremely thin band of 10−32 meters is considered impractical. Similar constraints apply to Krasnikov's superluminal subway. Chris Van den Broeck constructed a modification of Alcubierre's model that requires much less exotic matter but places the ship in a curved spacetime "bottle" whose neck is about 10−32 meters. Causality violation and semiclassical instability Calculations by physicist Allen Everett show that warp bubbles could be used to create closed timelike curves in general relativity, meaning that the theory predicts that they could be used for backwards time travel. While it is possible that the fundamental laws of physics might allow closed timelike curves, the chronology protection conjecture hypothesizes that in all cases where the classical theory of general relativity allows them, quantum effects would intervene to eliminate the possibility, making these spacetimes impossible to realize. A possible type of effect that would accomplish this is a buildup of vacuum fluctuations on the border of the region of spacetime where time travel would first become possible, causing the energy density to become high enough to destroy the system that would otherwise become a time machine. Some results in semiclassical gravity appear to support the conjecture, including a calculation dealing specifically with quantum effects in warp-drive spacetimes that suggested that warp bubbles would be semiclassically unstable, but ultimately the conjecture can only be decided by a full theory of quantum gravity. Alcubierre briefly discusses some of these issues in a series of lecture slides posted online, where he writes: "beware: in relativity, any method to travel faster than light can in principle be used to travel back in time (a time machine)". In the next slide, he brings up the chronology protection conjecture and writes: "The conjecture has not been proven (it wouldn't be a conjecture if it had), but there are good arguments in its favor based on quantum field theory. The conjecture does not prohibit faster-than-light travel. It just states that if a method to travel faster than light exists, and one tries to use it to build a time machine, something will go wrong: the energy accumulated will explode, or it will create a black hole." Relation to Star Trek warp drive The Star Trek television series and films use the term "warp drive" to describe their method of faster-than-light travel. 
Neither the Alcubierre theory, nor anything similar, existed when the series was conceived—the term "warp drive" and general concept originated with John W. Campbell's 1931 science fiction novel Islands of Space. Alcubierre stated in an email to William Shatner that his theory was directly inspired by the term used in the show and cites the "'warp drive' of science fiction" in his 1994 article. A USS Alcubierre appears in the Star Trek tabletop RPG Star Trek Adventures. Since the release of Star Trek: The Original Series, more recent Star Trek spin-off series have made closer use of the theory behind the Alcubierre Drive, incorporating warp bubbles/fields into the in-universe science. See also EmDrive Exact solutions in general relativity (for more on the sense in which the Alcubierre spacetime is a solution) IXS Enterprise Quantum vacuum thruster Reactionless drive Spacecraft propulsion Unruh effect Notes References External links It describes the concept in layman's terms. (hosted by John Michael Godier). A short video clip of the hypothetical effects of the warp drive. Marcelo B. Ribeiro's Page on Warp Drive Theory. Interstellar travel Warp drive theory Lorentzian manifolds Science fiction themes Hypothetical technology 1994 introductions Exact solutions in general relativity
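As a small illustration of the shape function reconstructed in the Mathematics section above, the following sketch (our own, with arbitrary parameter values) evaluates f(r_s) and shows its "top-hat" profile: close to 1 inside the bubble radius R and close to 0 well outside it.

```python
import math

# Alcubierre's shape function f(r_s), as given in the Mathematics section above.
# R (bubble radius) and sigma (wall steepness) are arbitrary illustrative values.
R, sigma = 1.0, 8.0

def f(r_s):
    return (math.tanh(sigma * (r_s + R)) - math.tanh(sigma * (r_s - R))) / (2.0 * math.tanh(sigma * R))

for r_s in (0.0, 0.5, 1.0, 1.5, 2.0):
    print(f"r_s = {r_s:.1f}  ->  f(r_s) = {f(r_s):.4f}")  # ~1 inside, ~0.5 at the wall, ~0 outside
```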
Hounsfield scale
The Hounsfield scale, named after Sir Godfrey Hounsfield, is a quantitative scale for describing radiodensity. It is frequently used in CT scans, where its value is also termed CT number. Definition The Hounsfield unit (HU) scale is a linear transformation of the original linear attenuation coefficient measurement into one in which the radiodensity of distilled water at standard pressure and temperature (STP) is defined as 0 Hounsfield units (HU), while the radiodensity of air at STP is defined as −1000 HU. In a voxel with average linear attenuation coefficient $\mu$, the corresponding HU value is therefore given by $HU = 1000 \times \dfrac{\mu - \mu_{\text{water}}}{\mu_{\text{water}} - \mu_{\text{air}}}$, where $\mu_{\text{water}}$ and $\mu_{\text{air}}$ are respectively the linear attenuation coefficients of water and air. Thus, a change of one Hounsfield unit (HU) represents a change of 0.1% of the attenuation coefficient of water since the attenuation coefficient of air is nearly zero. Calibration tests of HU with reference to water and other materials may be done to ensure a standardised response. This is particularly important for CT scans used in radiotherapy treatment planning, where HU is converted to electron density. Variation in the measured values of reference materials with known composition, and variation between and within slices, may be used as part of test procedures. Rationale The above standards were chosen as they are universally available references and suited to the key application for which computed axial tomography was developed: imaging the internal anatomy of living creatures based on organized water structures and mostly living in air, e.g. humans. Values for different body tissues and material HU-based differentiation of material applies to medical-grade dual-energy CT scans but not to cone beam computed tomography (CBCT) scans, as CBCT scans provide unreliable HU readings. Values reported here are approximations. Different dynamics are reported from one study to another. Exact HU dynamics can vary from one CT acquisition to another due to CT acquisition and reconstruction parameters (kV, filters, reconstruction algorithms, etc.). The use of contrast agents modifies HU as well in some body parts (mainly blood). A practical application of this is in the evaluation of tumors, where, for example, an adrenal tumor with a radiodensity of less than 10 HU is rather fatty in composition and almost certainly a benign adrenal adenoma. See also Cone beam computed tomography: Bone density and the Hounsfield scale. References External links Hounsfield Unit - fpnotebook.com Imaging of deep brain stimulation leads using extended Hounsfield unit CT. Stereotact Funct Neurosurg. 2009;87(3):155-60. doi: 10.1159/000209296 Radiology
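A minimal numerical sketch of the definition above (our own illustration; the attenuation coefficient assumed for water, roughly 0.19 cm⁻¹ at typical CT energies, is an assumption, and the helper function name is ours):

```python
# Hounsfield units from linear attenuation coefficients, per the definition above.

def hounsfield_units(mu, mu_water, mu_air=0.0):
    """Linear transformation of an attenuation coefficient into HU."""
    return 1000.0 * (mu - mu_water) / (mu_water - mu_air)

mu_water = 0.19  # cm^-1, illustrative value at typical CT energies (assumption)
for label, mu in [("air", 0.0), ("water", 0.19), ("slightly denser tissue", 0.21)]:
    print(f"{label:22s} -> {hounsfield_units(mu, mu_water):8.1f} HU")  # air -> -1000, water -> 0
```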
Fine-structure constant
In physics, the fine-structure constant, also known as the Sommerfeld constant, commonly denoted by $\alpha$ (the Greek letter alpha), is a fundamental physical constant which quantifies the strength of the electromagnetic interaction between elementary charged particles. It is a dimensionless quantity, independent of the system of units used, which is related to the strength of the coupling of an elementary charge e with the electromagnetic field by the formula $\alpha = e^2 / (4\pi\varepsilon_0 \hbar c)$. Its numerical value is approximately $0.0072973525693$, or about $1/137.036$, with a relative uncertainty of about $1.5 \times 10^{-10}$. The constant was named by Arnold Sommerfeld, who introduced it in 1916 when extending the Bohr model of the atom. It quantified the gap in the fine structure of the spectral lines of the hydrogen atom, which had been measured precisely by Michelson and Morley in 1887. Why the constant should have this value is not understood, but there are a number of ways to measure its value. Definition In terms of other physical constants, $\alpha$ may be defined as $\alpha = \dfrac{e^2}{4\pi\varepsilon_0 \hbar c}$, where $e$ is the elementary charge; $h$ is the Planck constant; $\hbar = h/2\pi$ is the reduced Planck constant; $c$ is the speed of light; and $\varepsilon_0$ is the electric constant. Since the 2019 revision of the SI, the only quantity in this list that does not have an exact value in SI units is the electric constant (vacuum permittivity). Alternative systems of units The electrostatic CGS system implicitly sets $4\pi\varepsilon_0 = 1$, as commonly found in older physics literature, where the expression of the fine-structure constant becomes $\alpha = \dfrac{e^2}{\hbar c}$. A nondimensionalised system commonly used in high energy physics sets $\varepsilon_0 = c = \hbar = 1$, where the expression for the fine-structure constant becomes $\alpha = \dfrac{e^2}{4\pi}$. As such, the fine-structure constant is just a quantity determining (or determined by) the elementary charge: $e = \sqrt{4\pi\alpha} \approx 0.3028$ in terms of such a natural unit of charge. In the system of atomic units, which sets $e = m_e = \hbar = 4\pi\varepsilon_0 = 1$, the expression for the fine-structure constant becomes $\alpha = \dfrac{1}{c}$. Measurement The 2018 CODATA recommended value of $\alpha$ is $7.2973525693(11) \times 10^{-3}$. This has a relative standard uncertainty of $1.5 \times 10^{-10}$. This value for $\alpha$ gives a value for the vacuum magnetic permeability $\mu_0$ that is 0.8 times the standard uncertainty away from its old defined value of $4\pi \times 10^{-7}\ \mathrm{H/m}$, with the mean differing from the old value by only 0.13 parts per billion. Historically the value of the reciprocal of the fine-structure constant is often given. The 2018 CODATA recommended value is $1/\alpha = 137.035999084(21)$. While the value of $\alpha$ can be determined from estimates of the constants that appear in any of its definitions, the theory of quantum electrodynamics (QED) provides a way to measure $\alpha$ directly using the quantum Hall effect or the anomalous magnetic moment of the electron. Other methods include the A.C. Josephson effect and photon recoil in atom interferometry. There is general agreement for the value of $\alpha$, as measured by these different methods. The preferred methods in 2019 are measurements of electron anomalous magnetic moments and of photon recoil in atom interferometry. The theory of QED predicts a relationship between the dimensionless magnetic moment of the electron and the fine-structure constant (the magnetic moment of the electron is also referred to as the electron $g$-factor). One of the most precise values of $\alpha$ obtained experimentally (as of 2023) is based on a measurement of $g$ using a one-electron so-called "quantum cyclotron" apparatus, together with a calculation via the theory of QED that involved tenth-order Feynman diagrams. This measurement of $\alpha$ has a relative standard uncertainty on the order of $10^{-10}$. This value and uncertainty are about the same as the latest experimental results. 
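As a quick numerical cross-check of the definition and the CODATA value quoted above, here is a short sketch (our own; the value of ε₀ is taken as the 2018 CODATA recommendation and is an input, not a result):

```python
import math

# Cross-check of alpha = e^2 / (4*pi*eps0*hbar*c) using SI values.
# e, h and c are exact in the SI since the 2019 redefinition; eps0 is the
# 2018 CODATA recommended value (an assumption/input here).
e = 1.602176634e-19        # C (exact)
h = 6.62607015e-34         # J*s (exact)
hbar = h / (2 * math.pi)   # reduced Planck constant
c = 299792458.0            # m/s (exact)
eps0 = 8.8541878128e-12    # F/m (2018 CODATA)

alpha = e**2 / (4 * math.pi * eps0 * hbar * c)
print(alpha, 1 / alpha)    # ~0.0072973525..., ~137.036
```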
Further refinement of the experimental value was published by the end of 2020, giving the value with a relative accuracy of , which has a significant discrepancy from the previous experimental value. Physical interpretations The fine-structure constant, , has several physical interpretations. is: When perturbation theory is applied to quantum electrodynamics, the resulting perturbative expansions for physical results are expressed as sets of power series in . Because is much less than one, higher powers of are soon unimportant, making the perturbation theory practical in this case. On the other hand, the large value of the corresponding factors in quantum chromodynamics makes calculations involving the strong nuclear force extremely difficult. Variation with energy scale In quantum electrodynamics, the more thorough quantum field theory underlying the electromagnetic coupling, the renormalization group dictates how the strength of the electromagnetic interaction grows logarithmically as the relevant energy scale increases. The value of the fine-structure constant is linked to the observed value of this coupling associated with the energy scale of the electron mass: the electron's mass gives a lower bound for this energy scale, because it (and the positron) is the lightest charged object whose quantum loops can contribute to the running. Therefore, is the asymptotic value of the fine-structure constant at zero energy. At higher energies, such as the scale of the Z boson, about 90 GeV, one instead measures an effective ≈ 1/127. As the energy scale increases, the strength of the electromagnetic interaction in the Standard Model approaches that of the other two fundamental interactions, a feature important for grand unification theories. If quantum electrodynamics were an exact theory, the fine-structure constant would actually diverge at an energy known as the Landau pole – this fact undermines the consistency of quantum electrodynamics beyond perturbative expansions. History Based on the precise measurement of the hydrogen atom spectrum by Michelson and Morley in 1887, Arnold Sommerfeld extended the Bohr model to include elliptical orbits and relativistic dependence of mass on velocity. He introduced a term for the fine-structure constant in 1916. The first physical interpretation of the fine-structure constant was as the ratio of the velocity of the electron in the first circular orbit of the relativistic Bohr atom to the speed of light in the vacuum. Equivalently, it was the quotient between the minimum angular momentum allowed by relativity for a closed orbit, and the minimum angular momentum allowed for it by quantum mechanics. It appears naturally in Sommerfeld's analysis, and determines the size of the splitting or fine-structure of the hydrogenic spectral lines. This constant was not seen as significant until Paul Dirac's linear relativistic wave equation in 1928, which gave the exact fine structure formula. With the development of quantum electrodynamics (QED) the significance of has broadened from a spectroscopic phenomenon to a general coupling constant for the electromagnetic field, determining the strength of the interaction between electrons and photons. The term is engraved on the tombstone of one of the pioneers of QED, Julian Schwinger, referring to his calculation of the anomalous magnetic dipole moment. History of measurements The CODATA values in the above table are computed by averaging other measurements; they are not independent experiments. 
Potential variation over time Physicists have pondered whether the fine-structure constant is in fact constant, or whether its value differs by location and over time. A varying has been proposed as a way of solving problems in cosmology and astrophysics. String theory and other proposals for going beyond the Standard Model of particle physics have led to theoretical interest in whether the accepted physical constants (not just ) actually vary. In the experiments below, represents the change in over time, which can be computed by prev − now . If the fine-structure constant really is a constant, then any experiment should show that or as close to zero as experiment can measure. Any value far away from zero would indicate that does change over time. So far, most experimental data is consistent with being constant. Past rate of change The first experimenters to test whether the fine-structure constant might actually vary examined the spectral lines of distant astronomical objects and the products of radioactive decay in the Oklo natural nuclear fission reactor. Their findings were consistent with no variation in the fine-structure constant between these two vastly separated locations and times. Improved technology at the dawn of the 21st century made it possible to probe the value of at much larger distances and to a much greater accuracy. In 1999, a team led by John K. Webb of the University of New South Wales claimed the first detection of a variation in . Using the Keck telescopes and a data set of 128 quasars at redshifts , Webb et al. found that their spectra were consistent with a slight increase in over the last 10–12 billion years. Specifically, they found that In other words, they measured the value to be somewhere between and . This is a very small value, but the error bars do not actually include zero. This result either indicates that is not constant or that there is experimental error unaccounted for. In 2004, a smaller study of 23 absorption systems by Chand et al., using the Very Large Telescope, found no measurable variation: However, in 2007 simple flaws were identified in the analysis method of Chand et al., discrediting those results. King et al. have used Markov chain Monte Carlo methods to investigate the algorithm used by the UNSW group to determine from the quasar spectra, and have found that the algorithm appears to produce correct uncertainties and maximum likelihood estimates for for particular models. This suggests that the statistical uncertainties and best estimate for stated by Webb et al. and Murphy et al. are robust. Lamoreaux and Torgerson analyzed data from the Oklo natural nuclear fission reactor in 2004, and concluded that has changed in the past 2 billion years by 45 parts per billion. They claimed that this finding was "probably accurate to within 20%". Accuracy is dependent on estimates of impurities and temperature in the natural reactor. These conclusions have to be verified. In 2007, Khatri and Wandelt of the University of Illinois at Urbana-Champaign realized that the 21 cm hyperfine transition in neutral hydrogen of the early universe leaves a unique absorption line imprint in the cosmic microwave background radiation. They proposed using this effect to measure the value of during the epoch before the formation of the first stars. In principle, this technique provides enough information to measure a variation of 1 part in (4 orders of magnitude better than the current quasar constraints). 
However, the constraint which can be placed on $\Delta\alpha/\alpha$ is strongly dependent upon the effective integration time. The European LOFAR radio telescope would only be able to constrain $\Delta\alpha/\alpha$ to about 0.3%. The collecting area required to constrain $\Delta\alpha/\alpha$ to the current level of quasar constraints is on the order of 100 square kilometers, which is economically impracticable at present. Present rate of change In 2008, Rosenband et al. used the frequency ratio of Al+ and Hg+ in single-ion optical atomic clocks to place a very stringent constraint on the present-time temporal variation of $\alpha$, namely a fractional drift rate consistent with zero at the level of a few parts in $10^{17}$ per year. A present day null constraint on the time variation of alpha does not necessarily rule out time variation in the past. Indeed, some theories that predict a variable fine-structure constant also predict that the value of the fine-structure constant should become practically fixed in its value once the universe enters its current dark energy-dominated epoch. Spatial variation – Australian dipole Researchers from Australia have said they had identified a variation of the fine-structure constant across the observable universe. These results have not been replicated by other researchers. In September and October 2010, after research released by Webb et al., physicists C. Orzel and S.M. Carroll separately suggested various approaches of how Webb's observations may be wrong. Orzel argues that the study may contain wrong data due to subtle differences in the two telescopes, while Carroll takes a totally different approach; he looks at the fine-structure constant as a scalar field and claims that if the telescopes are correct and the fine-structure constant varies smoothly over the universe, then the scalar field must have a very small mass. However, previous research has shown that the mass is not likely to be extremely small. Both of these scientists' early criticisms point to the fact that different techniques are needed to confirm or contradict the results, a conclusion Webb, et al., previously stated in their study. Other research finds no meaningful variation in the fine structure constant. Anthropic explanation The anthropic principle is an argument about the reason the fine-structure constant has the value it does: stable matter, and therefore life and intelligent beings, could not exist if its value were very different. One example is that, if modern grand unified theories are correct, then $\alpha$ needs to be between around 1/180 and 1/85 for proton decay to be slow enough for life to be possible. 
Richard Feynman, one of the originators and early developers of the theory of quantum electrodynamics (QED), referred to the fine-structure constant in these terms: Conversely, statistician I. J. Good argued that a numerological explanation would only be acceptable if it could be based on a good theory that is not yet known but "exists" in the sense of a Platonic Ideal. Attempts to find a mathematical basis for this dimensionless constant have continued up to the present time. However, no numerological explanation has ever been accepted by the physics community. In the early 21st century, multiple physicists, including Stephen Hawking in his book A Brief History of Time, began exploring the idea of a multiverse, and the fine-structure constant was one of several universal constants that suggested the idea of a fine-tuned universe. Quotes See also Dimensionless physical constant Hyperfine structure Footnotes References External links (adapted from the Encyclopædia Britannica, 15th ed. by NIST) Physicists Nail Down the ‘Magic Number’ That Shapes the Universe (Natalie Wolchover, Quanta magazine, December 2, 2020). The value of this constant is given here as 1/137.035999206 (note the difference in the last three digits). It was determined by a team of four physicists led by Saïda Guellati-Khélifa at the Kastler Brossel Laboratory in Paris. Dimensionless constants Electromagnetism Fundamental constants Arnold Sommerfeld
Goldilocks principle
The Goldilocks principle is named by analogy to the children's story "Goldilocks and the Three Bears", in which a young girl named Goldilocks tastes three different bowls of porridge and finds she prefers porridge that is neither too hot nor too cold but has just the right temperature. The concept of "just the right amount" is easily understood and applied to a wide range of disciplines, including developmental psychology, biology, astronomy, economics and engineering. Applications In cognitive science and developmental psychology, the Goldilocks effect or principle refers to an infant's preference to attend events that are neither too simple nor too complex according to their current representation of the world. This effect was observed in infants, who are less likely to look away from a visual sequence when the current event is moderately probable, as measured by an idealized learning model. In astrobiology, the Goldilocks zone refers to the habitable zone around a star. As Stephen Hawking put it, "Like Goldilocks, the development of intelligent life requires that planetary temperatures be 'just right. The Rare Earth hypothesis uses the Goldilocks principle in the argument that a planet must be neither too far away from nor too close to a star and galactic centre to support life, while either extreme would result in a planet incapable of supporting life. Such a planet is colloquially called a "Goldilocks Planet". Paul Davies has argued for the extension of the principle to cover the selection of our universe from a (postulated) multiverse: "Observers arise only in those universes where, like Goldilocks' porridge, things are by accident 'just right. In medicine, it can refer to a drug that can hold both antagonist (inhibitory) and agonist (excitatory) properties. For example, the antipsychotic Aripiprazole causes not only antagonism of dopamine D2 receptors in areas such as the mesolimbic area of the brain (which shows increased dopamine activity in psychosis) but also agonism of dopamine receptors in areas of dopamine hypoactivity, such as the mesocortical area. In economics, a Goldilocks economy sustains moderate economic growth and low inflation, which allows a market-friendly monetary policy. A Goldilocks market occurs when the price of commodities sits between a bear market and a bull market. Goldilocks pricing, also known as good–better–best pricing, is a marketing strategy that uses product differentiation to offer three versions of a product to corner different parts of the market: a high-end version, a middle version, and a low-end version. In communication, the Goldilocks principle describes the amount, type, and detail of communication necessary in a system to maximise effectiveness while minimising redundancy and excessive scope on the "too much" side and avoiding incomplete or inaccurate communication on the "too little" side. In statistics, the "Goldilocks Fit" references a linear regression model that represents the perfect flexibility to reduce the error caused by bias and variance. In the design sprint, the "Goldilocks Quality" means to create a prototype with just enough quality to evoke honest reactions from customers. In machine learning, the Goldilocks learning rate is the learning rate that results in an algorithm taking the fewest steps to achieve minimal loss. Algorithms with a learning rate that is too large often fail to converge at all, while those with too small a learning rate take too long to converge. 
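To make the machine-learning usage above concrete, here is a minimal sketch (our own example, not drawn from any cited source) of gradient descent on a simple quadratic loss with a learning rate that is too small, roughly right, and too large:

```python
# Gradient descent on f(x) = x^2 with three learning rates, illustrating the
# "Goldilocks" learning rate: too small converges slowly, too large diverges.

def steps_to_converge(lr, x0=10.0, tol=1e-6, max_steps=10_000):
    x = x0
    for step in range(1, max_steps + 1):
        x -= lr * 2 * x          # gradient of x^2 is 2x
        if abs(x) < tol:
            return step          # converged
        if abs(x) > 1e12:
            return None          # diverged
    return None                  # ran out of steps

for lr in (0.001, 0.4, 1.1):     # too small, about right, too large
    result = steps_to_converge(lr)
    status = f"converged in {result} steps" if result else "did not converge"
    print(f"learning rate {lr}: {status}")
```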
See also Cosmic Jackpot Frugality Anthropic principle Big History Fine-tuned universe Golden mean (philosophy) Anna Karenina principle References Astronomical hypotheses Articles containing video clips Goldilocks and the Three Bears
Vicsek model
The Vicsek model is a mathematical model used to describe active matter. One motivation of the study of active matter by physicists is the rich phenomenology associated with this field. Collective motion and swarming are among the most studied phenomena. Within the huge number of models that have been developed to capture such behavior from a microscopic description, the most famous is the model introduced by Tamás Vicsek et al. in 1995. Physicists have a great interest in this model as it is minimal and describes a kind of universality. It consists of point-like self-propelled particles that evolve at constant speed and align their velocity with that of their neighbours in the presence of noise. Such a model shows collective motion at high particle density or low alignment noise. Model (mathematical description) As this model aims at being minimal, it assumes that flocking is due to the combination of any kind of self propulsion and of effective alignment. Since the speed of each particle is constant, the net momentum of the system is not conserved during collisions. An individual $i$ is described by its position $\mathbf{r}_i(t)$ and the angle $\theta_i(t)$ defining the direction of its velocity at time $t$. The discrete time evolution of one particle is set by two equations. At each time step $\Delta t$, each agent aligns with its neighbours within a given distance $r$ with an uncertainty due to a noise $\eta_i(t)$: $\theta_i(t + \Delta t) = \langle \theta_j(t) \rangle_{|\mathbf{r}_i - \mathbf{r}_j| < r} + \eta_i(t)$. The particle then moves at constant speed $v$ in the new direction: $\mathbf{r}_i(t + \Delta t) = \mathbf{r}_i(t) + v\,\Delta t\,\big(\cos\theta_i(t + \Delta t),\ \sin\theta_i(t + \Delta t)\big)$. In these equations, $\langle \theta_j(t) \rangle_{|\mathbf{r}_i - \mathbf{r}_j| < r}$ denotes the average direction of the velocities of particles (including particle $i$) within a circle of radius $r$ surrounding particle $i$. The average normalized velocity acts as the order parameter for this system, and is given by $\varphi = \frac{1}{N v}\left|\sum_{i=1}^{N}\mathbf{v}_i\right|$. The whole model is controlled by three parameters: the density of particles, the amplitude of the noise on the alignment, and the ratio $v\,\Delta t / r$ of the travel distance to the interaction range. From these two simple iteration rules, various continuous theories have been elaborated, such as the Toner–Tu theory which describes the system at the hydrodynamic level. An Enskog-like kinetic theory, which is valid at arbitrary particle density, has been developed. This theory quantitatively describes the formation of steep density waves, also called invasion waves, near the transition to collective motion. Phenomenology This model shows a phase transition from a disordered motion to large-scale ordered motion. At large noise or low density, particles are on average not aligned, and they can be described as a disordered gas. At low noise and large density, particles are globally aligned and move in the same direction (collective motion). This state is interpreted as an ordered liquid. The transition between these two phases is not continuous; indeed, the phase diagram of the system exhibits a first order phase transition with a microphase separation. In the co-existence region, finite-size liquid bands emerge in a gas environment and move along their transverse direction. Recently, a new phase has been discovered: a polar ordered "cross sea" phase of density waves with an inherently selected crossing angle. This spontaneous organization of particles epitomizes collective motion. Extensions Since its appearance in 1995 this model has been very popular within the physics community; many scientists have worked on and extended it. For example, one can extract several universality classes from simple symmetry arguments concerning the motion of the particles and their alignment. 
Moreover, in real systems, many parameters can be included in order to give a more realistic description, for example attraction and repulsion between agents (finite-size particles), chemotaxis (biological systems), memory, non-identical particles, the surrounding liquid. A simpler theory, the Active Ising model, has been developed to facilitate the analysis of the Vicsek model. References Multi-agent systems
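For readers who want to experiment, here is a minimal simulation sketch of the update rules described in the Model section above (our own illustration; all parameter values are arbitrary):

```python
import numpy as np

# Minimal Vicsek model sketch: N point particles in a periodic box of size L,
# moving at constant speed v and aligning with neighbours within radius r,
# with angular noise drawn uniformly from [-eta/2, eta/2].
rng = np.random.default_rng(0)
N, L, v, r, eta, steps = 300, 10.0, 0.3, 1.0, 0.5, 200

pos = rng.uniform(0.0, L, size=(N, 2))
theta = rng.uniform(-np.pi, np.pi, size=N)

for _ in range(steps):
    # pairwise separations with periodic boundary conditions
    d = pos[:, None, :] - pos[None, :, :]
    d -= L * np.round(d / L)
    neighbours = (d ** 2).sum(axis=-1) < r ** 2      # includes the particle itself
    # average direction of neighbouring velocities, plus angular noise
    mean_sin = (neighbours * np.sin(theta)[None, :]).sum(axis=1)
    mean_cos = (neighbours * np.cos(theta)[None, :]).sum(axis=1)
    theta = np.arctan2(mean_sin, mean_cos) + rng.uniform(-eta / 2, eta / 2, size=N)
    pos = (pos + v * np.column_stack((np.cos(theta), np.sin(theta)))) % L

# order parameter: average normalized velocity (~1 when aligned, ~0 when disordered)
phi = np.hypot(np.cos(theta).sum(), np.sin(theta).sum()) / N
print(f"order parameter after {steps} steps: {phi:.2f}")
```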
Hypervelocity
Hypervelocity is very high velocity, approximately over 3,000 meters per second (11,000 km/h, 6,700 mph, 10,000 ft/s, or Mach 8.8). In particular, hypervelocity is velocity so high that the strength of materials upon impact is very small compared to inertial stresses. Thus, metals and fluids behave alike under hypervelocity impact. An impact under extreme hypervelocity results in vaporization of the impactor and target. For structural metals, hypervelocity is generally considered to be over 2,500 m/s (5,600 mph, 9,000 km/h, 8,200 ft/s, or Mach 7.3). Meteorite craters are also examples of hypervelocity impacts. Overview The term "hypervelocity" refers to velocities in the range from a few kilometers per second to some tens of kilometers per second. This is especially relevant in the field of space exploration and military use of space, where hypervelocity impacts (e.g. by space debris or an attacking projectile) can result in anything from minor component degradation to the complete destruction of a spacecraft or missile. The impactor, as well as the surface it hits, can undergo temporary liquefaction. The impact process can generate plasma discharges, which can interfere with spacecraft electronics. Hypervelocity usually occurs during meteor showers and deep space reentries, as carried out during the Zond, Apollo and Luna programs. Given the intrinsic unpredictability of the timing and trajectories of meteors, space capsules are prime data gathering opportunities for the study of thermal protection materials at hypervelocity (in this context, hypervelocity is defined as greater than escape velocity). Given the rarity of such observation opportunities since the 1970s, the Genesis and Stardust Sample Return Capsule (SRC) reentries as well as the recent Hayabusa SRC reentry have spawned observation campaigns, most notably at NASA's Ames Research Center. Hypervelocity collisions can be studied by examining the results of naturally occurring collisions (between micrometeorites and spacecraft, or between meteorites and planetary bodies), or they may be performed in laboratories. Currently, the primary tool for laboratory experiments is a light-gas gun, but some experiments have used linear motors to accelerate projectiles to hypervelocity. The properties of metals under hypervelocity have been integrated with weapons, such as explosively formed penetrator. The vaporization upon impact and liquification of surfaces allow metal projectiles formed under hypervelocity forces to penetrate vehicle armor better than conventional bullets. NASA studies the effects of simulated orbital debris at the White Sands Test Facility Remote Hypervelocity Test Laboratory (RHTL). Objects smaller than a softball cannot be detected on radar. This has prompted spacecraft designers to develop shields to protect spacecraft from unavoidable collisions. At RHTL, micrometeoroid and orbital debris (MMOD) impacts are simulated on spacecraft components and shields allowing designers to test threats posed by the growing orbital debris environment and evolve shield technology to stay one step ahead. At RHTL, four two-stage light-gas guns propel diameter projectiles to velocities as fast as . Hypervelocity reentry events Other definitions of hypervelocity According to the United States Army, hypervelocity can also refer to the muzzle velocity of a weapon system, with the exact definition dependent upon the weapon in question. 
For small arms, a muzzle velocity of 5,000 ft/s (1,524 m/s) or greater is considered hypervelocity; for tank cannons the muzzle velocity must meet or exceed 3,350 ft/s (1,021.08 m/s), and the threshold for artillery cannons is 3,500 ft/s (1,066.8 m/s). See also 2009 satellite collision Hypersonic aircraft Hypersonic flight Hypersonic Hypervelocity star Impact depth#Newton's approximation for the impact depth Kinetic energy penetrator Terminal velocity References Collision Materials science Physical quantities Space hazards Spaceflight concepts Velocity
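Restated purely as an illustrative sketch of the thresholds quoted above (the dictionary, function name, and conversion constant are assumptions of this example, not part of any Army specification or existing library):

```python
FT_PER_S_TO_M_PER_S = 0.3048

# U.S. Army muzzle-velocity thresholds (ft/s) as quoted in the text above.
HYPERVELOCITY_THRESHOLDS_FPS = {
    "small arms": 5000.0,
    "tank cannon": 3350.0,
    "artillery cannon": 3500.0,
}

def is_army_hypervelocity(weapon_class: str, muzzle_velocity_fps: float) -> bool:
    """True if the muzzle velocity meets or exceeds the threshold for its weapon class."""
    return muzzle_velocity_fps >= HYPERVELOCITY_THRESHOLDS_FPS[weapon_class]

print(is_army_hypervelocity("tank cannon", 3600.0),          # True
      f"{3600.0 * FT_PER_S_TO_M_PER_S:.0f} m/s")             # ~1097 m/s
```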
0.777626
0.985202
0.766119
Isaac Newton
Sir Isaac Newton (25 December 1642 – 20 March 1726/27) was an English polymath active as a mathematician, physicist, astronomer, alchemist, theologian, and author who was described in his time as a natural philosopher. He was a key figure in the Scientific Revolution and the Enlightenment that followed. His pioneering book Philosophiæ Naturalis Principia Mathematica (Mathematical Principles of Natural Philosophy), first published in 1687, consolidated many previous results and established classical mechanics. Newton also made seminal contributions to optics, and shares credit with German mathematician Gottfried Wilhelm Leibniz for formulating infinitesimal calculus, though he developed calculus years before Leibniz. In the Principia, Newton formulated the laws of motion and universal gravitation that formed the dominant scientific viewpoint for centuries until it was superseded by the theory of relativity. He used his mathematical description of gravity to derive Kepler's laws of planetary motion, account for tides, the trajectories of comets, the precession of the equinoxes and other phenomena, eradicating doubt about the Solar System's heliocentricity. He demonstrated that the motion of objects on Earth and celestial bodies could be accounted for by the same principles. Newton's inference that the Earth is an oblate spheroid was later confirmed by the geodetic measurements of Maupertuis, La Condamine, and others, convincing most European scientists of the superiority of Newtonian mechanics over earlier systems. He built the first practical reflecting telescope and developed a sophisticated theory of colour based on the observation that a prism separates white light into the colours of the visible spectrum. His work on light was collected in his highly influential book Opticks, published in 1704. He formulated an empirical law of cooling, which was the first heat transfer formulation, made the first theoretical calculation of the speed of sound, and introduced the notion of a Newtonian fluid. Furthermore, he made early investigations into electricity, with an idea from his book Opticks arguably the beginning of the field theory of the electric force. In addition to his work on calculus, as a mathematician, he contributed to the study of power series, generalised the binomial theorem to non-integer exponents, developed a method for approximating the roots of a function, and classified most of the cubic plane curves. Newton was a fellow of Trinity College and the second Lucasian Professor of Mathematics at the University of Cambridge. He was a devout but unorthodox Christian who privately rejected the doctrine of the Trinity. He refused to take holy orders in the Church of England, unlike most members of the Cambridge faculty of the day. Beyond his work on the mathematical sciences, Newton dedicated much of his time to the study of alchemy and biblical chronology, but most of his work in those areas remained unpublished until long after his death. Politically and personally tied to the Whig party, Newton served two brief terms as Member of Parliament for the University of Cambridge, in 1689–1690 and 1701–1702. He was knighted by Queen Anne in 1705 and spent the last three decades of his life in London, serving as Warden (1696–1699) and Master (1699–1727) of the Royal Mint, as well as president of the Royal Society (1703–1727).
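The method for approximating the roots of a function credited to Newton above survives, in modernised form, as the Newton-Raphson iteration. A minimal sketch in that modern form (the function names and tolerances are illustrative, not Newton's own formulation):

```python
def newton_root(f, fprime, x0, tol=1e-12, max_iter=50):
    """Approximate a root of f via the iteration x -> x - f(x)/f'(x)."""
    x = x0
    for _ in range(max_iter):
        step = f(x) / fprime(x)
        x -= step
        if abs(step) < tol:
            return x
    raise RuntimeError("Newton iteration did not converge")

# Example: the square root of 2 as the positive root of x^2 - 2 = 0.
print(newton_root(lambda x: x * x - 2, lambda x: 2 * x, x0=1.0))  # 1.4142135623730951
```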
Early life Isaac Newton was born (according to the Julian calendar in use in England at the time) on Christmas Day, 25 December 1642 (NS 4 January 1643) at Woolsthorpe Manor in Woolsthorpe-by-Colsterworth, a hamlet in the county of Lincolnshire. His father, also named Isaac Newton, had died three months before. Born prematurely, Newton was a small child; his mother Hannah Ayscough reportedly said that he could have fit inside a quart mug. When Newton was three, his mother remarried and went to live with her new husband, the Reverend Barnabas Smith, leaving her son in the care of his maternal grandmother, Margery Ayscough (née Blythe). Newton disliked his stepfather and maintained some enmity towards his mother for marrying him, as revealed by this entry in a list of sins committed up to the age of 19: "Threatening my father and mother Smith to burn them and the house over them." Newton's mother had three children (Mary, Benjamin, and Hannah) from her second marriage. The King's School From the age of about twelve until he was seventeen, Newton was educated at The King's School in Grantham, which taught Latin and Ancient Greek and probably imparted a significant foundation of mathematics. He was removed from school by his mother and returned to Woolsthorpe-by-Colsterworth by October 1659. His mother, widowed for the second time, attempted to make him a farmer, an occupation he hated. Henry Stokes, master at The King's School, persuaded his mother to send him back to school. Motivated partly by a desire for revenge against a schoolyard bully, he became the top-ranked student, distinguishing himself mainly by building sundials and models of windmills. University of Cambridge In June 1661, Newton was admitted to Trinity College at the University of Cambridge. His uncle the Reverend William Ayscough, who had studied at Cambridge, recommended him to the university. At Cambridge, Newton started as a subsizar, paying his way by performing valet duties until he was awarded a scholarship in 1664, which covered his university costs for four more years until the completion of his MA. At the time, Cambridge's teachings were based on those of Aristotle, whom Newton read along with then more modern philosophers, including Descartes and astronomers such as Galileo Galilei and Thomas Street. He set down in his notebook a series of "Quaestiones" about mechanical philosophy as he found it. In 1665, he discovered the generalised binomial theorem and began to develop a mathematical theory that later became calculus. Soon after Newton obtained his BA degree at Cambridge in August 1665, the university temporarily closed as a precaution against the Great Plague. Although he had been undistinguished as a Cambridge student, Newton's private studies at his home in Woolsthorpe over the next two years saw the development of his theories on calculus, optics, and the law of gravitation. In April 1667, Newton returned to the University of Cambridge, and in October he was elected as a fellow of Trinity. Fellows were required to take holy orders and be ordained as Anglican priests, although this was not enforced in the Restoration years, and an assertion of conformity to the Church of England was sufficient. He made the commitment that "I will either set Theology as the object of my studies and will take holy orders when the time prescribed by these statutes [7 years] arrives, or I will resign from the college." 
Up until this point he had not thought much about religion and had twice signed his agreement to the Thirty-nine Articles, the basis of Church of England doctrine. By 1675 the issue could not be avoided, and by then his unconventional views stood in the way. His academic work impressed the Lucasian professor Isaac Barrow, who was anxious to develop his own religious and administrative potential (he became master of Trinity College two years later); in 1669, Newton succeeded him, only one year after receiving his MA. The terms of the Lucasian professorship required that the holder not be active in the church – presumably to leave more time for science. Newton argued that this should exempt him from the ordination requirement, and King Charles II, whose permission was needed, accepted this argument; thus, a conflict between Newton's religious views and Anglican orthodoxy was averted. The position of Lucasian Professor of Mathematics at Cambridge also included the responsibility of instructing in geography. In 1672, and again in 1681, Newton published a revised, corrected, and amended edition of the Geographia Generalis, a geography textbook first published in 1650 by the then-deceased Bernhardus Varenius. In the Geographia Generalis, Varenius attempted to create a theoretical foundation linking scientific principles to classical concepts in geography, and considered geography to be a mix between science and pure mathematics applied to quantifying features of the Earth. While it is unclear if Newton ever lectured in geography, the 1733 Dugdale and Shaw English translation of the book stated Newton published the book to be read by students while he lectured on the subject. The Geographia Generalis is viewed by some as the dividing line between ancient and modern traditions in the history of geography, and Newton's involvement in the subsequent editions is thought to be a large part of the reason for this enduring legacy. Newton was elected a Fellow of the Royal Society (FRS) in 1672. Mid-life Calculus Newton's work has been said "to distinctly advance every branch of mathematics then studied". His work on the subject, usually referred to as fluxions or calculus, seen in a manuscript of October 1666, is now published among Newton's mathematical papers. His work De analysi per aequationes numero terminorum infinitas, sent by Isaac Barrow to John Collins in June 1669, was identified by Barrow in a letter sent to Collins that August as the work "of an extraordinary genius and proficiency in these things". Newton later became involved in a dispute with Leibniz over priority in the development of calculus. Most modern historians believe that Newton and Leibniz developed calculus independently, although with very different mathematical notations. However, it is established that Newton came to develop calculus much earlier than Leibniz. Leibniz's notation and "differential Method", nowadays recognised as much more convenient notations, were adopted by continental European mathematicians, and after 1820 or so, also by British mathematicians. His work extensively uses calculus in geometric form based on limiting values of the ratios of vanishingly small quantities: in the Principia itself, Newton gave demonstration of this under the name of "the method of first and last ratios" and explained why he put his expositions in this form, remarking also that "hereby the same thing is performed as by the method of indivisibles."
Because of this, the Principia has been called "a book dense with the theory and application of the infinitesimal calculus" in modern times and in Newton's time "nearly all of it is of this calculus." His use of methods involving "one or more orders of the infinitesimally small" is present in his De motu corporum in gyrum of 1684 and in his papers on motion "during the two decades preceding 1684". Newton had been reluctant to publish his calculus because he feared controversy and criticism. He was close to the Swiss mathematician Nicolas Fatio de Duillier. In 1691, Duillier started to write a new version of Newton's Principia, and corresponded with Leibniz. In 1693, the relationship between Duillier and Newton deteriorated and the book was never completed. Starting in 1699, other members of the Royal Society accused Leibniz of plagiarism. The dispute then broke out in full force in 1711 when the Royal Society proclaimed in a study that it was Newton who was the true discoverer and labelled Leibniz a fraud; it was later found that Newton wrote the study's concluding remarks on Leibniz. Thus began the bitter controversy which marred the lives of both Newton and Leibniz until the latter's death in 1716. Newton is generally credited with the generalised binomial theorem, valid for any exponent. He discovered Newton's identities, Newton's method, classified cubic plane curves (polynomials of degree three in two variables), made substantial contributions to the theory of finite differences, and was the first to use fractional indices and to employ coordinate geometry to derive solutions to Diophantine equations. He approximated partial sums of the harmonic series by logarithms (a precursor to Euler's summation formula) and was the first to use power series with confidence and to revert power series. Newton's work on infinite series was inspired by Simon Stevin's decimals. Optics In 1666, Newton observed that the spectrum of colours exiting a prism in the position of minimum deviation is oblong, even when the light ray entering the prism is circular, which is to say, the prism refracts different colours by different angles. This led him to conclude that colour is a property intrinsic to light – a point which had, until then, been a matter of debate. From 1670 to 1672, Newton lectured on optics. During this period he investigated the refraction of light, demonstrating that the multicoloured image produced by a prism, which he named a spectrum, could be recomposed into white light by a lens and a second prism. Modern scholarship has revealed that Newton's analysis and resynthesis of white light owes a debt to corpuscular alchemy. He showed that coloured light does not change its properties by separating out a coloured beam and shining it on various objects, and that regardless of whether reflected, scattered, or transmitted, the light remains the same colour. Thus, he observed that colour is the result of objects interacting with already-coloured light rather than objects generating the colour themselves. This is known as Newton's theory of colour. From this work, he concluded that the lens of any refracting telescope would suffer from the dispersion of light into colours (chromatic aberration). As a proof of the concept, he constructed a telescope using reflective mirrors instead of lenses as the objective to bypass that problem. 
Building the design, the first known functional reflecting telescope, today known as a Newtonian telescope, involved solving the problem of a suitable mirror material and shaping technique. Newton ground his own mirrors out of a custom composition of highly reflective speculum metal, using Newton's rings to judge the quality of the optics for his telescopes. In late 1668, he was able to produce this first reflecting telescope. It was about eight inches long and it gave a clearer and larger image. In 1671, the Royal Society asked for a demonstration of his reflecting telescope. Their interest encouraged him to publish his notes, Of Colours, which he later expanded into the work Opticks. When Robert Hooke criticised some of Newton's ideas, Newton was so offended that he withdrew from public debate. Newton and Hooke had brief exchanges in 1679–80, when Hooke, appointed to manage the Royal Society's correspondence, opened up a correspondence intended to elicit contributions from Newton to Royal Society transactions, which had the effect of stimulating Newton to work out a proof that the elliptical form of planetary orbits would result from a centripetal force inversely proportional to the square of the radius vector. But the two men remained generally on poor terms until Hooke's death. Newton argued that light is composed of particles or corpuscles, which were refracted by accelerating into a denser medium. He verged on soundlike waves to explain the repeated pattern of reflection and transmission by thin films (Opticks Bk. II, Props. 12), but still retained his theory of 'fits' that disposed corpuscles to be reflected or transmitted (Props.13). However, later physicists favoured a purely wavelike explanation of light to account for the interference patterns and the general phenomenon of diffraction. Today's quantum mechanics, photons, and the idea of wave–particle duality bear only a minor resemblance to Newton's understanding of light. In his Hypothesis of Light of 1675, Newton posited the existence of the ether to transmit forces between particles. The contact with the Cambridge Platonist philosopher Henry More revived his interest in alchemy. He replaced the ether with occult forces based on Hermetic ideas of attraction and repulsion between particles. John Maynard Keynes, who acquired many of Newton's writings on alchemy, stated that "Newton was not the first of the age of reason: He was the last of the magicians." Newton's contributions to science cannot be isolated from his interest in alchemy. This was at a time when there was no clear distinction between alchemy and science. In 1704, Newton published Opticks, in which he expounded his corpuscular theory of light. He considered light to be made up of extremely subtle corpuscles, that ordinary matter was made of grosser corpuscles and speculated that through a kind of alchemical transmutation "Are not gross Bodies and Light convertible into one another, ... and may not Bodies receive much of their Activity from the Particles of Light which enter their Composition?" Newton also constructed a primitive form of a frictional electrostatic generator, using a glass globe. In his book Opticks, Newton was the first to show a diagram using a prism as a beam expander, and also the use of multiple-prism arrays. Some 278 years after Newton's discussion, multiple-prism beam expanders became central to the development of narrow-linewidth tunable lasers. Also, the use of these prismatic beam expanders led to the multiple-prism dispersion theory. 
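In modern notation, which is the later wave-optics formulation rather than Newton's own corpuscular mathematics, the behaviour underlying these prism and telescope observations can be summarised as

\[
n_1(\lambda)\,\sin\theta_1 = n_2(\lambda)\,\sin\theta_2,
\qquad
\frac{1}{f(\lambda)} = \bigl(n(\lambda) - 1\bigr)\left(\frac{1}{R_1} - \frac{1}{R_2}\right).
\]

Because the refractive index n depends on wavelength, a prism bends each colour through a slightly different angle (dispersion), and a simple lens focuses each colour at a slightly different distance (chromatic aberration); a mirror, governed by the wavelength-independent law of reflection, avoids the latter, which is why Newton turned to a reflecting objective.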
Subsequent to Newton, much has been amended. Young and Fresnel discarded Newton's particle theory in favour of Huygens' wave theory to show that colour is the visible manifestation of light's wavelength. Science also slowly came to realise the difference between perception of colour and mathematisable optics. The German poet and scientist, Goethe, could not shake the Newtonian foundation but "one hole Goethe did find in Newton's armour, ... Newton had committed himself to the doctrine that refraction without colour was impossible. He, therefore, thought that the object-glasses of telescopes must forever remain imperfect, achromatism and refraction being incompatible. This inference was proved by Dollond to be wrong." Gravity Newton had been developing his theory of gravitation as far back as 1665. In 1679, Newton returned to his work on celestial mechanics by considering gravitation and its effect on the orbits of planets with reference to Kepler's laws of planetary motion. This followed stimulation by a brief exchange of letters in 1679–80 with Hooke, who had been appointed Secretary of the Royal Society, and who opened a correspondence intended to elicit contributions from Newton to Royal Society transactions. Newton's reawakening interest in astronomical matters received further stimulus by the appearance of a comet in the winter of 1680–1681, on which he corresponded with John Flamsteed. After the exchanges with Hooke, Newton worked out a proof that the elliptical form of planetary orbits would result from a centripetal force inversely proportional to the square of the radius vector. Newton communicated his results to Edmond Halley and to the Royal Society in De motu corporum in gyrum, a tract written on about nine sheets which was copied into the Royal Society's Register Book in December 1684. This tract contained the nucleus that Newton developed and expanded to form the Principia. The Principia was published on 5 July 1687 with encouragement and financial help from Halley. In this work, Newton stated the three universal laws of motion. Together, these laws describe the relationship between any object, the forces acting upon it and the resulting motion, laying the foundation for classical mechanics. They contributed to many advances during the Industrial Revolution which soon followed and were not improved upon for more than 200 years. Many of these advances continue to be the underpinnings of non-relativistic technologies in the modern world. He used the Latin word gravitas (weight) for the effect that would become known as gravity, and defined the law of universal gravitation. In the same work, Newton presented a calculus-like method of geometrical analysis using 'first and last ratios', gave the first analytical determination (based on Boyle's law) of the speed of sound in air, inferred the oblateness of Earth's spheroidal figure, accounted for the precession of the equinoxes as a result of the Moon's gravitational attraction on the Earth's oblateness, initiated the gravitational study of the irregularities in the motion of the Moon, provided a theory for the determination of the orbits of comets, and much more. Newton's biographer David Brewster reported that the complexity of applying his theory of gravity to the motion of the moon was so great it affected Newton's health: "[H]e was deprived of his appetite and sleep" during his work on the problem in 1692–93, and told the astronomer John Machin that "his head never ached but when he was studying the subject".
According to Brewster, Edmund Halley also told John Conduitt that when pressed to complete his analysis Newton "always replied that it made his head ache, and kept him awake so often, that he would think of it no more". [Emphasis in original] Newton made clear his heliocentric view of the Solar System—developed in a somewhat modern way because already in the mid-1680s he recognised the "deviation of the Sun" from the centre of gravity of the Solar System. For Newton, it was not precisely the centre of the Sun or any other body that could be considered at rest, but rather "the common centre of gravity of the Earth, the Sun and all the Planets is to be esteem'd the Centre of the World", and this centre of gravity "either is at rest or moves uniformly forward in a right line". (Newton adopted the "at rest" alternative in view of common consent that the centre, wherever it was, was at rest.) Newton was criticised for introducing "occult agencies" into science because of his postulate of an invisible force able to act over vast distances. Later, in the second edition of the Principia (1713), Newton firmly rejected such criticisms in a concluding General Scholium, writing that it was enough that the phenomena implied a gravitational attraction, as they did; but they did not so far indicate its cause, and it was both unnecessary and improper to frame hypotheses of things that were not implied by the phenomena. (Here Newton used what became his famous expression Hypotheses non fingo, "I frame no hypotheses".) With the Principia, Newton became internationally recognised. He acquired a circle of admirers, including the Swiss-born mathematician Nicolas Fatio de Duillier. In 1710, Newton found 72 of the 78 "species" of cubic curves and categorised them into four types. In 1717, and probably with Newton's help, James Stirling proved that every cubic was one of these four types. Newton also claimed that the four types could be obtained by plane projection from one of them, and this was proved in 1731, four years after his death. Later life Royal Mint In the 1690s, Newton wrote a number of religious tracts dealing with the literal and symbolic interpretation of the Bible. A manuscript Newton sent to John Locke, in which he disputed the authenticity of 1 John 5:7—the Johannine Comma—and its fidelity to the original manuscripts of the New Testament, remained unpublished until 1785. Newton was also a member of the Parliament of England for Cambridge University in 1689 and 1701, but according to some accounts his only comments were to complain about a cold draught in the chamber and request that the window be closed. He was, however, noted by Cambridge diarist Abraham de la Pryme to have rebuked students who were frightening locals by claiming that a house was haunted. Newton moved to London to take up the post of warden of the Royal Mint during the reign of King William III in 1696, a position that he had obtained through the patronage of Charles Montagu, 1st Earl of Halifax, then Chancellor of the Exchequer. He took charge of England's great recoining, trod on the toes of Lord Lucas, Governor of the Tower, and secured the job of deputy comptroller of the temporary Chester branch for Edmond Halley. Newton became perhaps the best-known Master of the Mint upon the death of Thomas Neale in 1699, a position Newton held for the last 30 years of his life. These appointments were intended as sinecures, but Newton took them seriously. He retired from his Cambridge duties in 1701, and exercised his authority to reform the currency and punish clippers and counterfeiters.
As Warden, and afterwards as Master, of the Royal Mint, Newton estimated that 20 percent of the coins taken in during the Great Recoinage of 1696 were counterfeit. Counterfeiting was high treason, punishable by the felon being hanged, drawn and quartered. Despite this, convicting even the most flagrant criminals could be extremely difficult, but Newton proved equal to the task. Disguised as a habitué of bars and taverns, he gathered much of that evidence himself. For all the barriers placed to prosecution, and separating the branches of government, English law still had ancient and formidable customs of authority. Newton had himself made a justice of the peace in all the home counties. A draft letter regarding the matter is included in Newton's personal first edition of Philosophiæ Naturalis Principia Mathematica, which he must have been amending at the time. Then he conducted more than 100 cross-examinations of witnesses, informers, and suspects between June 1698 and Christmas 1699. Newton successfully prosecuted 28 coiners. Newton was made president of the Royal Society in 1703 and an associate of the French Académie des Sciences. In his position at the Royal Society, Newton made an enemy of John Flamsteed, the Astronomer Royal, by prematurely publishing Flamsteed's Historia Coelestis Britannica, which Newton had used in his studies. Knighthood In April 1705, Queen Anne knighted Newton during a royal visit to Trinity College, Cambridge. The knighthood is likely to have been motivated by political considerations connected with the parliamentary election in May 1705, rather than any recognition of Newton's scientific work or services as Master of the Mint. Newton was the second scientist to be knighted, after Francis Bacon. As a result of a report written by Newton on 21 September 1717 to the Lords Commissioners of His Majesty's Treasury, the bimetallic relationship between gold coins and silver coins was changed by royal proclamation on 22 December 1717, forbidding the exchange of gold guineas for more than 21 silver shillings. This inadvertently resulted in a silver shortage as silver coins were used to pay for imports, while exports were paid for in gold, effectively moving Britain from the silver standard to its first gold standard. It is a matter of debate as to whether he intended to do this or not. It has been argued that Newton conceived of his work at the Mint as a continuation of his alchemical work. Newton was invested in the South Sea Company and lost some £20,000 (£4.4 million in 2020) when it collapsed in around 1720. Toward the end of his life, Newton took up residence at Cranbury Park, near Winchester, with his niece and her husband, until his death. His half-niece, Catherine Barton, served as his hostess in social affairs at his house on Jermyn Street in London; he was her "very loving Uncle", according to his letter to her when she was recovering from smallpox. Death Newton died in his sleep in London on 20 March 1727 (OS 20 March 1726; NS 31 March 1727). He was given a ceremonial funeral, attended by nobles, scientists, and philosophers, and was buried in Westminster Abbey among kings and queens. He was the first scientist to be buried in the abbey. Voltaire may have been present at his funeral. A bachelor, he had divested much of his estate to relatives during his last years, and died intestate. His papers went to John Conduitt and Catherine Barton. Shortly after his death, a plaster death mask was moulded of Newton. 
It was used by Flemish sculptor John Michael Rysbrack in making a sculpture of Newton. It is now held by the Royal Society, who created a 3D scan of it in 2012. Newton's hair was posthumously examined and found to contain mercury, probably resulting from his alchemical pursuits. Mercury poisoning could explain Newton's eccentricity in late life. Personality Although it was claimed that he was once engaged, Newton never married. The French writer and philosopher Voltaire, who was in London at the time of Newton's funeral, said that he "was never sensible to any passion, was not subject to the common frailties of mankind, nor had any commerce with women—a circumstance which was assured me by the physician and surgeon who attended him in his last moments.” There exists a widespread belief that Newton died a virgin, and writers as diverse as mathematician Charles Hutton, economist John Maynard Keynes, and physicist Carl Sagan have commented on it. Newton had a close friendship with the Swiss mathematician Nicolas Fatio de Duillier, whom he met in London around 1689—some of their correspondence has survived. Their relationship came to an abrupt and unexplained end in 1693, and at the same time Newton suffered a nervous breakdown, which included sending wild accusatory letters to his friends Samuel Pepys and John Locke. His note to the latter included the charge that Locke had endeavoured to "embroil" him with "woemen & by other means". Newton was relatively modest about his achievements, writing in a letter to Robert Hooke in February 1676, "If I have seen further it is by standing on the shoulders of giants." Two writers think that the sentence, written at a time when Newton and Hooke were in dispute over optical discoveries, was an oblique attack on Hooke (said to have been short and hunchbacked), rather than—or in addition to—a statement of modesty. On the other hand, the widely known proverb about standing on the shoulders of giants, published among others by seventeenth-century poet George Herbert (a former orator of the University of Cambridge and fellow of Trinity College) in his (1651), had as its main point that "a dwarf on a giant's shoulders sees farther of the two", and so its effect as an analogy would place Newton himself rather than Hooke as the 'dwarf'. In a later memoir, Newton wrote, "I do not know what I may appear to the world, but to myself I seem to have been only like a boy playing on the sea-shore, and diverting myself in now and then finding a smoother pebble or a prettier shell than ordinary, whilst the great ocean of truth lay all undiscovered before me." Theology Religious views Although born into an Anglican family, by his thirties Newton held a Christian faith that, had it been made public, would not have been considered orthodox by mainstream Christianity, with one historian labelling him a heretic. By 1672, he had started to record his theological researches in notebooks which he showed to no one and which have only been available for public examination since 1972. Over half of what Newton wrote concerned theology and alchemy, and most has never been printed. His writings demonstrate an extensive knowledge of early Church writings and show that in the conflict between Athanasius and Arius which defined the Creed, he took the side of Arius, the loser, who rejected the conventional view of the Trinity. Newton "recognized Christ as a divine mediator between God and man, who was subordinate to the Father who created him." 
He was especially interested in prophecy, but for him, "the great apostasy was trinitarianism." Newton tried unsuccessfully to obtain one of the two fellowships that exempted the holder from the ordination requirement. At the last moment in 1675 he received a dispensation from the government that excused him and all future holders of the Lucasian chair. Worshipping Jesus Christ as God was, in Newton's eyes, idolatry, an act he believed to be the fundamental sin. In 1999, historian Stephen D. Snobelen wrote, "Isaac Newton was a heretic. But ... he never made a public declaration of his private faith—which the orthodox would have deemed extremely radical. He hid his faith so well that scholars are still unraveling his personal beliefs." Snobelen concludes that Newton was at least a Socinian sympathiser (he owned and had thoroughly read at least eight Socinian books), possibly an Arian and almost certainly an anti-trinitarian. Although the laws of motion and universal gravitation became Newton's best-known discoveries, he warned against using them to view the Universe as a mere machine, as if akin to a great clock. He said, "So then gravity may put the planets into motion, but without the Divine Power it could never put them into such a circulating motion, as they have about the sun". Along with his scientific fame, Newton's studies of the Bible and of the early Church Fathers were also noteworthy. Newton wrote works on textual criticism, most notably An Historical Account of Two Notable Corruptions of Scripture and Observations upon the Prophecies of Daniel, and the Apocalypse of St. John. He placed the crucifixion of Jesus Christ at 3 April, AD 33, which agrees with one traditionally accepted date. He believed in a rationally immanent world, but he rejected the hylozoism implicit in Leibniz and Baruch Spinoza. The ordered and dynamically informed Universe could be understood, and must be understood, by an active reason. In his correspondence, Newton claimed that in writing the Principia "I had an eye upon such Principles as might work with considering men for the belief of a Deity". He saw evidence of design in the system of the world: "Such a wonderful uniformity in the planetary system must be allowed the effect of choice". But Newton insisted that divine intervention would eventually be required to reform the system, due to the slow growth of instabilities. For this, Leibniz lampooned him: "God Almighty wants to wind up his watch from time to time: otherwise it would cease to move. He had not, it seems, sufficient foresight to make it a perpetual motion." Newton's position was vigorously defended by his follower Samuel Clarke in a famous correspondence. A century later, Pierre-Simon Laplace's Celestial Mechanics offered a natural explanation of why the planetary orbits do not require periodic divine intervention. The contrast between Laplace's mechanistic worldview and Newton's is most striking in the famous answer the French scientist gave Napoleon, who had criticised him for the absence of the Creator in the Mécanique céleste: "Sire, j'ai pu me passer de cette hypothèse" ("Sire, I was able to do without that hypothesis"). Scholars long debated whether Newton disputed the doctrine of the Trinity. His first biographer, David Brewster, who compiled his manuscripts, interpreted Newton as questioning the veracity of some passages used to support the Trinity, but never denying the doctrine of the Trinity as such.
In the twentieth century, encrypted manuscripts written by Newton and bought by John Maynard Keynes (among others) were deciphered and it became known that Newton did indeed reject Trinitarianism. Religious thought Newton and Robert Boyle's approach to the mechanical philosophy was promoted by rationalist pamphleteers as a viable alternative to the pantheists and enthusiasts, and was accepted hesitantly by orthodox preachers as well as dissident preachers like the latitudinarians. The clarity and simplicity of science was seen as a way to combat the emotional and metaphysical superlatives of both superstitious enthusiasm and the threat of atheism, and at the same time, the second wave of English deists used Newton's discoveries to demonstrate the possibility of a "Natural Religion". The attacks made against pre-Enlightenment "magical thinking", and the mystical elements of Christianity, were given their foundation with Boyle's mechanical conception of the universe. Newton gave Boyle's ideas their completion through mathematical proofs and, perhaps more importantly, was very successful in popularising them. Alchemy Of an estimated ten million words of writing in Newton's papers, about one million deal with alchemy. Many of Newton's writings on alchemy are copies of other manuscripts, with his own annotations. Alchemical texts mix artisanal knowledge with philosophical speculation, often hidden behind layers of wordplay, allegory, and imagery to protect craft secrets. Some of the content contained in Newton's papers could have been considered heretical by the church. In 1888, after spending sixteen years cataloguing Newton's papers, Cambridge University kept a small number and returned the rest to the Earl of Portsmouth. In 1936, a descendant offered the papers for sale at Sotheby's. The collection was broken up and sold for a total of about £9,000. John Maynard Keynes was one of about three dozen bidders who obtained part of the collection at auction. Keynes went on to reassemble an estimated half of Newton's collection of papers on alchemy before donating his collection to Cambridge University in 1946. All of Newton's known writings on alchemy are currently being put online in a project undertaken by Indiana University: "The Chymistry of Isaac Newton" and summarised in a book. In June 2020, two unpublished pages of Newton's notes on Jan Baptist van Helmont's book on plague, De Peste, were being auctioned online by Bonhams. Newton's analysis of this book, which he made in Cambridge while protecting himself from London's 1665–1666 infection, is the most substantial written statement he is known to have made about the plague, according to Bonhams. As far as the therapy is concerned, Newton writes that "the best is a toad suspended by the legs in a chimney for three days, which at last vomited up earth with various insects in it, on to a dish of yellow wax, and shortly after died. Combining powdered toad with the excretions and serum made into lozenges and worn about the affected area drove away the contagion and drew out the poison". Legacy Fame The mathematician and astronomer Joseph-Louis Lagrange frequently asserted that Newton was the greatest genius who ever lived, and once added that Newton was also "the most fortunate, for we cannot find more than once a system of the world to establish." English poet Alexander Pope wrote the famous epitaph: "Nature and Nature's laws lay hid in night: / God said, Let Newton be! and all was light." But this was not allowed to be inscribed on Newton's monument at Westminster.
The epitaph added is as follows: which can be translated as follows: In 2005, in a dual survey of both the public and members of Britain's Royal Society (formerly headed by Newton) asking who had the greater effect on the history of science, Newton or Albert Einstein, both the Royal Society members and the public deemed Newton to have made the greater overall contributions. In 1999, an opinion poll of 100 of the day's leading physicists voted Einstein the "greatest physicist ever," with Newton the runner-up, while a parallel survey of rank-and-file physicists by the site PhysicsWeb gave the top spot to Newton. New Scientist called Newton "the supreme genius and most enigmatic character in the history of science". Newton has been called the "most influential figure in the history of Western science". Einstein kept a picture of Newton on his study wall alongside ones of Michael Faraday and James Clerk Maxwell. Physicist Lev Landau ranked physicists on a logarithmic scale of productivity ranging from 0 to 5. The highest ranking, 0, was assigned to Newton. Albert Einstein was ranked 0.5. A rank of 1 was awarded to the "founding fathers" of quantum mechanics, Niels Bohr, Werner Heisenberg, Paul Dirac and Erwin Schrödinger. Landau, a Nobel prize winner and discoverer of superfluidity, ranked himself as 2. The SI derived unit of force is named the newton in his honour. Woolsthorpe Manor is a Grade I listed building by Historic England through being his birthplace and "where he discovered gravity and developed his theories regarding the refraction of light". In 1816, a tooth said to have belonged to Newton was sold for £730 in London to an aristocrat who had it set in a ring. Guinness World Records 2002 classified it as the most valuable tooth in the world, valued at approximately £25,000 (35,700) in late 2001. Who bought it and who currently has it has not been disclosed. Apple incident Newton himself often told the story that he was inspired to formulate his theory of gravitation by watching the fall of an apple from a tree. The story is believed to have passed into popular knowledge after being related by Catherine Barton, Newton's niece, to Voltaire. Voltaire then wrote in his Essay on Epic Poetry (1727), "Sir Isaac Newton walking in his gardens, had the first thought of his system of gravitation, upon seeing an apple falling from a tree." Although it has been said that the apple story is a myth and that he did not arrive at his theory of gravity at any single moment, acquaintances of Newton (such as William Stukeley, whose manuscript account of 1752 has been made available by the Royal Society) do in fact confirm the incident, though not the apocryphal version that the apple actually hit Newton's head. Stukeley recorded in his Memoirs of Sir Isaac Newton's Life a conversation with Newton in Kensington on 15 April 1726: John Conduitt, Newton's assistant at the Royal Mint and husband of Newton's niece, also described the event when he wrote about Newton's life: It is known from his notebooks that Newton was grappling in the late 1660s with the idea that terrestrial gravity extends, in an inverse-square proportion, to the Moon; however, it took him two decades to develop the full-fledged theory. The question was not whether gravity existed, but whether it extended so far from Earth that it could also be the force holding the Moon to its orbit.
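A rough version of that "Moon test", stated with modern round values rather than Newton's own figures: the Moon's centripetal acceleration, computed from its orbit, should equal the surface value of gravity reduced by the square of the distance ratio of about 60 Earth radii.

\[
a_{\text{Moon}} = \frac{4\pi^{2} r}{T^{2}}
\approx \frac{4\pi^{2}\,(3.84\times 10^{8}\ \text{m})}{(2.36\times 10^{6}\ \text{s})^{2}}
\approx 2.7\times 10^{-3}\ \text{m/s}^{2},
\qquad
\frac{g}{60^{2}} \approx \frac{9.8\ \text{m/s}^{2}}{3600} \approx 2.7\times 10^{-3}\ \text{m/s}^{2}.
\]

The agreement between the two figures is exactly what an inverse-square law predicts.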
Newton showed that if the force decreased as the inverse square of the distance, one could indeed calculate the Moon's orbital period, and get good agreement. He guessed the same force was responsible for other orbital motions, and hence named it "universal gravitation". Various trees are claimed to be "the" apple tree which Newton describes. The King's School, Grantham claims that the tree was purchased by the school, uprooted and transported to the headmaster's garden some years later. The staff of the (now) National Trust-owned Woolsthorpe Manor dispute this, and claim that a tree present in their gardens is the one described by Newton. A descendant of the original tree can be seen growing outside the main gate of Trinity College, Cambridge, below the room Newton lived in when he studied there. The National Fruit Collection at Brogdale in Kent can supply grafts from their tree, which appears identical to Flower of Kent, a coarse-fleshed cooking variety. Commemorations Newton's monument (1731) can be seen in Westminster Abbey, at the north of the entrance to the choir against the choir screen, near his tomb. It was executed by the sculptor Michael Rysbrack (1694–1770) in white and grey marble with design by the architect William Kent. The monument features a figure of Newton reclining on top of a sarcophagus, his right elbow resting on several of his great books and his left hand pointing to a scroll with a mathematical design. Above him is a pyramid and a celestial globe showing the signs of the Zodiac and the path of the comet of 1680. A relief panel depicts putti using instruments such as a telescope and prism. From 1978 until 1988, an image of Newton designed by Harry Ecclestone appeared on Series D £1 banknotes issued by the Bank of England (the last £1 notes to be issued by the Bank of England). Newton was shown on the reverse of the notes holding a book and accompanied by a telescope, a prism and a map of the Solar System. A statue of Isaac Newton, looking at an apple at his feet, can be seen at the Oxford University Museum of Natural History. A large bronze statue, Newton, after William Blake, by Eduardo Paolozzi, dated 1995 and inspired by Blake's etching, dominates the piazza of the British Library in London. A bronze statue of Newton was erected in 1858 in the centre of Grantham where he went to school, prominently standing in front of Grantham Guildhall. The still-surviving farmhouse at Woolsthorpe By Colsterworth is a Grade I listed building by Historic England through being his birthplace and "where he discovered gravity and developed his theories regarding the refraction of light". The Enlightenment Enlightenment philosophers chose a short history of scientific predecessors—Galileo, Boyle, and Newton principally—as the guides and guarantors of their applications of the singular concept of nature and natural law to every physical and social field of the day. In this respect, the lessons of history and the social structures built upon it could be discarded. It is held by European philosophers of the Enlightenment and by historians of the Enlightenment that Newton's publication of the Principia was a turning point in the Scientific Revolution and started the Enlightenment. It was Newton's conception of the universe based upon natural and rationally understandable laws that became one of the seeds for Enlightenment ideology. 
Locke and Voltaire applied concepts of natural law to political systems advocating intrinsic rights; the physiocrats and Adam Smith applied natural conceptions of psychology and self-interest to economic systems; and sociologists criticised the current social order for trying to fit history into natural models of progress. Monboddo and Samuel Clarke resisted elements of Newton's work, but eventually rationalised it to conform with their strong religious views of nature. Works Published in his lifetime De analysi per aequationes numero terminorum infinitas (1669, published 1711) Of Natures Obvious Laws & Processes in Vegetation (unpublished, –75) De motu corporum in gyrum (1684) Philosophiæ Naturalis Principia Mathematica (1687) Scala graduum Caloris. Calorum Descriptiones & signa (1701) Opticks (1704) Reports as Master of the Mint (1701–1725) Arithmetica Universalis (1707) Published posthumously De mundi systemate (The System of the World) (1728) Optical Lectures (1728) The Chronology of Ancient Kingdoms Amended (1728) Observations on Daniel and The Apocalypse of St. John (1733) Method of Fluxions (1671, published 1736) An Historical Account of Two Notable Corruptions of Scripture (1754) See also Elements of the Philosophy of Newton, a book by Voltaire List of multiple discoveries: seventeenth century List of things named after Isaac Newton List of presidents of the Royal Society References Notes Citations Bibliography Further reading Primary Newton, Isaac. The Principia: Mathematical Principles of Natural Philosophy. University of California Press, (1999) Brackenridge, J. Bruce. The Key to Newton's Dynamics: The Kepler Problem and the Principia: Containing an English Translation of Sections 1, 2, and 3 of Book One from the First (1687) Edition of Newton's Mathematical Principles of Natural Philosophy, University of California Press (1996) Newton, Isaac. The Optical Papers of Isaac Newton. Vol. 1: The Optical Lectures, 1670–1672, Cambridge University Press (1984) Newton, Isaac. Opticks (4th ed. 1730) online edition Newton, I. (1952). Opticks, or A Treatise of the Reflections, Refractions, Inflections & Colours of Light. New York: Dover Publications. Newton, I. Sir Isaac Newton's Mathematical Principles of Natural Philosophy and His System of the World, tr. A. Motte, rev. Florian Cajori. Berkeley: University of California Press (1934)  – 8 volumes. Newton, Isaac. The correspondence of Isaac Newton, ed. H.W. Turnbull and others, 7 vols (1959–77) Newton's Philosophy of Nature: Selections from His Writings edited by H.S. Thayer (1953; online edition) Isaac Newton, Sir; J Edleston; Roger Cotes, Correspondence of Sir Isaac Newton and Professor Cotes, including letters of other eminent men, London, John W. Parker, West Strand; Cambridge, John Deighton (1850, Google Books) Maclaurin, C. (1748). An Account of Sir Isaac Newton's Philosophical Discoveries, in Four Books. London: A. Millar and J. Nourse Newton, I. (1958). Isaac Newton's Papers and Letters on Natural Philosophy and Related Documents, eds. I.B. Cohen and R.E. Schofield. Cambridge: Harvard University Press Newton, I. (1962). The Unpublished Scientific Papers of Isaac Newton: A Selection from the Portsmouth Collection in the University Library, Cambridge, ed. A.R. Hall and M.B. Hall. Cambridge: Cambridge University Press Newton, I. (1975). Isaac Newton's 'Theory of the Moon's Motion''' (1702). London: Dawson Alchemy  – Preface by Albert Einstein. 
Reprinted by Johnson Reprint Corporation, New York (1972) Keynes took a close interest in Newton and owned many of Newton's private papers. (edited by A.H. White; originally published in 1752) Trabue, J. "Ann and Arthur Storer of Calvert County, Maryland, Friends of Sir Isaac Newton," The American Genealogist 79 (2004): 13–27. Religion Dobbs, Betty Jo Tetter. The Janus Faces of Genius: The Role of Alchemy in Newton's Thought. (1991), links the alchemy to Arianism Force, James E., and Richard H. Popkin, eds. Newton and Religion: Context, Nature, and Influence. (1999), pp. xvii, 325.; 13 papers by scholars using newly opened manuscripts Science Berlinski, David. Newton's Gift: How Sir Isaac Newton Unlocked the System of the World. (2000); Cohen, I. Bernard and Smith, George E., ed. The Cambridge Companion to Newton. (2002). Focuses on philosophical issues only; excerpt and text search; complete edition online This well documented work provides, in particular, valuable information regarding Newton's knowledge of Patristics Hawking, Stephen, ed. On the Shoulders of Giants. Places selections from Newton's Principia in the context of selected writings by Copernicus, Kepler, Galileo and Einstein Newton, Isaac. Papers and Letters in Natural Philosophy'', edited by I. Bernard Cohen. Harvard University Press, 1958, 1978; . External links Enlightening Science digital project : Texts of his papers, "Popularisations" and podcasts at the Newton Project Writings by Newton Newton's works – full texts, at the Newton Project Newton's papers in the Royal Society's archives The Newton Manuscripts at the National Library of Israel – the collection of all his religious writings "Newton Papers"  – Cambridge Digital Library 1642 births 1727 deaths 17th-century alchemists 17th-century apocalypticists 17th-century English astronomers 17th-century English mathematicians 17th-century English male writers 17th-century English writers 17th-century writers in Latin 18th-century alchemists 18th-century apocalypticists 18th-century British astronomers 18th-century British scientists 18th-century English mathematicians 18th-century English male writers 18th-century English writers 18th-century writers in Latin Alumni of Trinity College, Cambridge Antitrinitarians Ballistics experts British scientific instrument makers British writers in Latin Burials at Westminster Abbey Color scientists Copernican Revolution Creators of temperature scales British critics of atheism English alchemists English Anglicans English Christians English inventors English justices of the peace English knights English mathematicians English MPs 1689–1690 English MPs 1701–1702 English physicists Enlightenment scientists Experimental physicists Fellows of the Royal Society Fellows of Trinity College, Cambridge Fluid dynamicists British geometers Linear algebraists Hermeticists History of calculus Knights Bachelor Lucasian Professors of Mathematics Masters of the Mint Members of the pre-1707 Parliament of England for the University of Cambridge Natural philosophers Nontrinitarian Christians Optical physicists People educated at The King's School, Grantham People from South Kesteven District Philosophers of science Post-Reformation Arian Christians Presidents of the Royal Society Reputed virgins Theoretical physicists Writers about religion and science
0.766147
0.999956
0.766113
Holism
Holism is the interdisciplinary idea that systems possess properties as wholes apart from the properties of their component parts. The aphorism "The whole is greater than the sum of its parts", typically attributed to Aristotle, is often given as a glib summary of this proposal. The concept of holism can inform the methodology for a broad array of scientific fields and lifestyle practices. When applications of holism are said to reveal properties of a whole system beyond those of its parts, these qualities are referred to as emergent properties of that system. Holism in all contexts is often placed in opposition to reductionism, a dominant notion in the philosophy of science that systems containing parts contain no unique properties beyond those parts. Proponents of holism consider the search for emergent properties within systems to be demonstrative of their perspective. Background The term "holism" was coined by Jan Smuts (1870–1950) in his 1926 book Holism and Evolution. While he never assigned a consistent meaning to the word, Smuts used holism to represent at least three features of reality. First, holism claims that every scientifically measurable thing, either physical or psychological, does possess a nature as a whole beyond its parts. His examples include atoms, cells, or an individual's personality. Smuts discussed this sense of holism in his claim that an individual's body and mind are not completely separated but instead connect and represent the holistic idea of a person. In his second sense, Smuts referred to holism as the cause of evolution. He argued that evolution is neither an accident nor is it brought about by the actions of some transcendent force, such as a God. Smuts criticized writers who emphasized Darwinian concepts of natural selection and genetic variation to support an accidental view of natural processes within the universe. Smuts perceived evolution as the process of nature correcting itself creatively and intentionally. In this way, holism is described as the tendency of a whole system to creatively respond to environmental stressors, a process in which parts naturally work together to bring the whole into more advanced states. Smuts used Pavlovian studies to argue that the inheritance of behavioral changes supports his idea of creative evolution as opposed to purely accidental development in nature. Smuts believed that this creative process was intrinsic within all physical systems of parts and ruled out indirect, transcendent forces. Finally, Smuts used holism to explain the concrete (nontranscendent) nature of the universe in general. In his words, holism is "the ultimate synthetic, ordering, organizing, regulative activity in the universe which accounts for all the structural groupings and syntheses in it." Smuts argued that a holistic view of the universe explains its processes and their evolution more effectively than a reductive view. Professional philosophers of science and linguistics did not take Holism and Evolution seriously upon its initial publication in 1926, and the work has received criticism for a lack of theoretical coherence. Some biological scientists, however, did offer favorable assessments shortly after its first printing. Over time, the meaning of the word holism became most closely associated with Smuts' first conception of the term, yet without any metaphysical commitments to monism, dualism, or similar concepts which can be inferred from his work.
Scientific applications Physics Nonseparability The advent of holism in the 20th century coincided with the gradual development of quantum mechanics. Holism in physics is the nonseparability of physical systems from their parts, especially quantum phenomena. Classical physics cannot be regarded as holistic, as the behavior of individual parts represents the whole. However, the state of a system in quantum theory resists a certain kind of reductive analysis. For example, two spatially separated quantum systems are described as "entangled," or nonseparable from each other, when a meaningful analysis of one system cannot be carried out independently of the other. There are different conceptions of nonseparability in physics, and its exploration is considered to offer broad insight into the ontological problem. Variants In one sense, holism for physics is a perspective about the best way to understand the nature of a physical system. In this sense, holism is the methodological claim that systems are accurately understood according to their properties as a whole. A methodological reductionist in physics might seek to explain, for example, the behavior of a liquid by examining its component molecules, atoms, ions or electrons. A methodological holist, on the other hand, believes there is something misguided about this approach; as one proponent, a condensed matter physicist, puts it: "the most important advances in this area come about by the emergence of qualitatively new concepts at the intermediate or macroscopic levels—concepts which, one hopes, will be compatible with one's information about the microscopic constituents, but which are in no sense logically dependent on it." This perspective is considered a conventional attitude among contemporary physicists. In another sense, holism is a metaphysical claim that the nature of a system is not determined by the properties of its component parts. There are three varieties of this sense of physical holism. Ontological holism: some systems are not merely composed of their physical parts Property holism: some systems have properties independent of their physical parts Nomological holism: some systems follow physical laws beyond the laws followed by their physical parts The metaphysical claim does not assert that physical systems involve abstract properties beyond the composition of their physical parts, but that there are concrete properties aside from those of their basic physical parts. Theoretical physicist David Bohm (1917–1992) supported this view directly. Bohm believed that a complete description of the universe would have to go beyond a simple list of all its particles and their positions; there would also have to be a physical quantum field, associated with the properties of those particles, guiding their trajectories. Bohm's ontological holism concerning the nature of whole physical systems was literal. Niels Bohr (1885–1962), on the other hand, held ontological holism from an epistemological angle rather than a literal one. Bohr regarded the observational apparatus as a part of the system under observation, alongside the basic physical parts themselves. His theory agrees with Bohm's in that whole systems are not merely composed of their parts, and it identifies properties such as position and momentum as properties of whole systems beyond those of their components.
But Bohr states that these holistic properties are only meaningful in experimental contexts when physical systems are under observation and that these systems, when not under observation, cannot be said to have meaningful properties, even if these properties took place outside our observation. While Bohr claims these holistic properties exist only insofar as they can be observed, Bohm took his ontological holism one step further by claiming these properties must exist regardless. Linguistics Semantic holism suggests that the meaning of individual words depends on the meaning of other words, forming a large web of interconnections. In general, meaning holism states that the properties which determine the meaning of a word are connected such that if the meaning of one word changes, the meaning of every other word in the web changes as well. The set of words that alter in meaning due to a change in the meaning of some other is not necessarily specified in meaning holism, but typically such a change is taken straightforwardly to affect the meaning of every word in the language. In scientific disciplines, reductionism is the opposing viewpoint to holism. But in the context of linguistics or the philosophy of language, reductionism is typically referred to as atomism. Specifically, atomism states that each word's meaning is independent and so there are no emergent properties within a language. Additionally, there is meaning molecularism which states that a change in one word alters the meaning of only a relatively small set of other words. The linguistic perspective of meaning holism is traced back to Quine but was subsequently formalized by analytic philosophers Michael Dummett, Jerry Fodor, and Ernest Lepore. While this holistic approach attempts to resolve a classical problem for the philosophy of language concerning how words convey meaning, there is debate over its validity mostly from two angles of criticism: opposition to compositionality and, especially, instability of meaning. The first claims that meaning holism conflicts with the compositionality of language. Meaning in some languages is compositional in that meaning comes from the structure of an expression's parts. Meaning holism suggests that the meaning of words plays an inferential role in the meaning of other words: "pet fish" might infer a meaning of "less than 3 ounces." Since holistic views of meaning assume meaning depends on which words are used and how those words infer meaning onto other words, rather than how they are structured, meaning holism stands in conflict with compositionalism and leaves statements with potentially ambiguous meanings. The second criticism claims that meaning holism makes meaning in language unstable. If some words must be used to infer the meaning of other words, then in order to communicate a message, the sender and the receiver must share an identical set of inferential assumptions or beliefs. If these beliefs were different, meaning may be lost. Many types of communication would be directly affected by the principles of meaning holism such as informative communication, language learning, and communication about psychological states. Nevertheless, some meaning holists maintain that the instability of meaning holism is an acceptable feature from several different angles. In one example, contextual holists make this point simply by suggesting we often do not actually share identical inferential assumptions but instead rely on context to counter differences of inference and support communication. 
Biology Scientific applications of holism within biology are referred to as systems biology. The opposing analytical approach of systems biology is biological organization which models biological systems and structures only in terms of their component parts. "The reductionist approach has successfully identified most of the components and many of the interactions but, unfortunately, offers no convincing concepts or methods to understand how system properties emerge...the pluralism of causes and effects in biological networks is better addressed by observing, through quantitative measures, multiple components simultaneously and by rigorous data integration with mathematical models." The objective in systems biology is to advance models of the interactions in a system. Holistic approaches to modelling have involved cellular modelling strategies, genomic interaction analysis, and phenotype prediction. Systems medicine Systems medicine is a practical approach to systems biology and accepts its holistic assumptions. Systems medicine takes the systems of the human body as made up of a complete whole and uses this as a starting point in its research and, ultimately, treatment. Lifestyle applications The term holism is also sometimes used in the context of various lifestyle practices, such as dieting, education, and healthcare, to refer to ways of life that either supplement or replace conventional practices. In these contexts, holism is not necessarily a rigorous or well-defined methodology for obtaining a particular lifestyle outcome. It is sometimes simply an adjective to describe practices which account for factors that standard forms of these practices may discount, especially in the context of alternative medicine. See also Confirmation holism Emergentism Holism and Evolution Holism in ecological anthropology Holistic education Holon (philosophy) Holarchy Isomorphism Logical holism aka Theoretical holism Mereology Monism Reductionism Systems theory References External links Holism Philosophical theories Metaphysics of science Social theories Emergence Jan Smuts
0.768693
0.996625
0.766099
Maxwell's equations in curved spacetime
In physics, Maxwell's equations in curved spacetime govern the dynamics of the electromagnetic field in curved spacetime (where the metric may not be the Minkowski metric) or where one uses an arbitrary (not necessarily Cartesian) coordinate system. These equations can be viewed as a generalization of the vacuum Maxwell's equations which are normally formulated in the local coordinates of flat spacetime. But because general relativity dictates that the presence of electromagnetic fields (or energy/matter in general) induce curvature in spacetime, Maxwell's equations in flat spacetime should be viewed as a convenient approximation. When working in the presence of bulk matter, distinguishing between free and bound electric charges may facilitate analysis. When the distinction is made, they are called the macroscopic Maxwell's equations. Without this distinction, they are sometimes called the "microscopic" Maxwell's equations for contrast. The electromagnetic field admits a coordinate-independent geometric description, and Maxwell's equations expressed in terms of these geometric objects are the same in any spacetime, curved or not. Also, the same modifications are made to the equations of flat Minkowski space when using local coordinates that are not rectilinear. For example, the equations in this article can be used to write Maxwell's equations in spherical coordinates. For these reasons, it may be useful to think of Maxwell's equations in Minkowski space as a special case of the general formulation. Summary In general relativity, the metric tensor is no longer a constant (like as in Examples of metric tensor) but can vary in space and time, and the equations of electromagnetism in vacuum become where is the density of the Lorentz force, is the inverse of the metric tensor , and is the determinant of the metric tensor. Notice that and are (ordinary) tensors, while , , and are tensor densities of weight +1. Despite the use of partial derivatives, these equations are invariant under arbitrary curvilinear coordinate transformations. Thus, if one replaced the partial derivatives with covariant derivatives, the extra terms thereby introduced would cancel out (see ). Electromagnetic potential The electromagnetic potential is a covariant vector Aα, which is the undefined primitive of electromagnetism. Being a covariant vector, its components transform from one coordinate system to another according to Electromagnetic field The electromagnetic field is a covariant antisymmetric tensor of degree 2, which can be defined in terms of the electromagnetic potential by To see that this equation is invariant, we transform the coordinates as described in the classical treatment of tensors: This definition implies that the electromagnetic field satisfies which incorporates Faraday's law of induction and Gauss's law for magnetism. This is seen from Thus, the right-hand side of that Maxwell law is zero identically, meaning that the classic EM field theory leaves no room for magnetic monopoles or currents of such to act as sources of the field. Although there appear to be 64 equations in Faraday–Gauss, it actually reduces to just four independent equations. Using the antisymmetry of the electromagnetic field, one can either reduce to an identity (0 = 0) or render redundant all the equations except for those with {λ, μ, ν} being either {1, 2, 3}, {2, 3, 0}, {3, 0, 1}, or {0, 1, 2}. 
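For reference, the definitions described in this section can be written out explicitly; the following are the standard textbook forms, with index and sign conventions that may differ slightly between sources:

F_{\alpha\beta} = \partial_\alpha A_\beta - \partial_\beta A_\alpha ,
\qquad
\partial_\gamma F_{\alpha\beta} + \partial_\alpha F_{\beta\gamma} + \partial_\beta F_{\gamma\alpha} = 0 .

The second relation, the Faraday–Gauss law just discussed, follows from the antisymmetry of F_{\alpha\beta} together with the commutativity of partial derivatives, which is why only four of its components are independent.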
The Faraday–Gauss equation is sometimes written where a semicolon indicates a covariant derivative, a comma indicates a partial derivative, and square brackets indicate anti-symmetrization (see Ricci calculus for the notation). The covariant derivative of the electromagnetic field is where Γαβγ is the Christoffel symbol, which is symmetric in its lower indices. Electromagnetic displacement The electric displacement field D and the auxiliary magnetic field H form an antisymmetric contravariant rank-2 tensor density of weight +1. In vacuum, this is given by This equation is the only place where the metric (and thus gravity) enters into the theory of electromagnetism. Furthermore, the equation is invariant under a change of scale, that is, multiplying the metric by a constant has no effect on this equation. Consequently, gravity can only affect electromagnetism by changing the speed of light relative to the global coordinate system being used. Light is only deflected by gravity because it is slower near massive bodies. So it is as if gravity increased the index of refraction of space near massive bodies. More generally, in materials where the magnetization–polarization tensor is non-zero, we have The transformation law for electromagnetic displacement is where the Jacobian determinant is used. If the magnetization-polarization tensor is used, it has the same transformation law as the electromagnetic displacement. Electric current The electric current is the divergence of the electromagnetic displacement. In vacuum, If magnetization–polarization is used, then this just gives the free portion of the current This incorporates Ampere's law and Gauss's law. In either case, the fact that the electromagnetic displacement is antisymmetric implies that the electric current is automatically conserved: because the partial derivatives commute. The Ampere–Gauss definition of the electric current is not sufficient to determine its value because the electromagnetic potential (from which it was ultimately derived) has not been given a value. Instead, the usual procedure is to equate the electric current to some expression in terms of other fields, mainly the electron and proton, and then solve for the electromagnetic displacement, electromagnetic field, and electromagnetic potential. The electric current is a contravariant vector density, and as such it transforms as follows: Verification of this transformation law: So all that remains is to show that which is a version of a known theorem (see ). Lorentz force density The density of the Lorentz force is a covariant vector density given by The force on a test particle subject only to gravity and electromagnetism is where pα is the linear 4-momentum of the particle, t is any time coordinate parameterizing the world line of the particle, Γβαγ is the Christoffel symbol (gravitational force field), and q is the electric charge of the particle. This equation is invariant under a change in the time coordinate; just multiply by and use the chain rule. It is also invariant under a change in the x coordinate system. Using the transformation law for the Christoffel symbol, we get Lagrangian In vacuum, the Lagrangian density for classical electrodynamics (in joules per cubic meter) is a scalar density where The 4-current should be understood as an abbreviation of many terms expressing the electric currents of other charged fields in terms of their variables. 
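For reference, the displacement, current and force density introduced above are conventionally related, in vacuum and in SI units, by the following standard expressions (a sketch of the usual conventions rather than of any particular reference):

\mathcal{D}^{\mu\nu} = \frac{1}{\mu_0}\,\sqrt{-g}\; g^{\mu\alpha} g^{\nu\beta} F_{\alpha\beta} ,
\qquad
J^{\mu} = \partial_\nu \mathcal{D}^{\mu\nu} ,
\qquad
f_\mu = F_{\mu\nu} J^{\nu} .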
If we separate free currents from bound currents, the Lagrangian becomes Electromagnetic stress–energy tensor As part of the source term in the Einstein field equations, the electromagnetic stress–energy tensor is a covariant symmetric tensor using a metric of signature (−, +, +, +). If using the metric with signature (+, −, −, −), the expression for will have opposite sign. The stress–energy tensor is trace-free: because electromagnetism propagates at the local invariant speed, and is conformal-invariant. In the expression for the conservation of energy and linear momentum, the electromagnetic stress–energy tensor is best represented as a mixed tensor density From the equations above, one can show that where the semicolon indicates a covariant derivative. This can be rewritten as which says that the decrease in the electromagnetic energy is the same as the work done by the electromagnetic field on the gravitational field plus the work done on matter (via the Lorentz force), and similarly the rate of decrease in the electromagnetic linear momentum is the electromagnetic force exerted on the gravitational field plus the Lorentz force exerted on matter. Derivation of conservation law: which is zero because it is the negative of itself (see four lines above). Electromagnetic wave equation The nonhomogeneous electromagnetic wave equation in terms of the field tensor is modified from the special-relativity form to where Racbd is the covariant form of the Riemann tensor, and is a generalization of the d'Alembertian operator for covariant derivatives. Using Maxwell's source equations can be written in terms of the 4-potential [ref. 2, p. 569] as or, assuming the generalization of the Lorenz gauge in curved spacetime, where is the Ricci curvature tensor. This is the same form of the wave equation as in flat spacetime, except that the derivatives are replaced by covariant derivatives and there is an additional term proportional to the curvature. The wave equation in this form also bears some resemblance to the Lorentz force in curved spacetime, where Aa plays the role of the 4-position. For the case of a metric signature in the form , the derivation of the wave equation in curved spacetime is carried out in the article. Nonlinearity of Maxwell's equations in a dynamic spacetime When Maxwell's equations are treated in a background-independent manner, that is, when the spacetime metric is taken to be a dynamical variable dependent on the electromagnetic field, then the electromagnetic wave equation and Maxwell's equations are nonlinear. This can be seen by noting that the curvature tensor depends on the stress–energy tensor through the Einstein field equation where is the Einstein tensor, G is the Newtonian constant of gravitation, gab is the metric tensor, and R (scalar curvature) is the trace of the Ricci curvature tensor. The stress–energy tensor is composed of the stress–energy from particles, but also stress–energy from the electromagnetic field. This generates the nonlinearity. Geometric formulation In the differential geometric formulation of the electromagnetic field, the antisymmetric Faraday tensor can be considered as the Faraday 2-form . In this view, one of Maxwell's two equations is where is the exterior derivative operator. 
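In the exterior-calculus notation used here, this first equation is conventionally written as follows, with the field strength regarded as a 2-form (a standard form of the identity, given for reference):

dF = 0 , \qquad F = \tfrac{1}{2}\, F_{\alpha\beta}\, dx^{\alpha} \wedge dx^{\beta} .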
This equation is completely coordinate- and metric-independent and says that the electromagnetic flux through a closed two-dimensional surface in space–time is topological, more precisely, depends only on its homology class (a generalization of the integral form of Gauss law and Maxwell–Faraday equation, as the homology class in Minkowski space is automatically 0). By the Poincaré lemma, this equation implies (at least locally) that there exists a 1-form satisfying The other equation is In this context, is the current 3-form (or even more precise, twisted 3-form), and the star denotes the Hodge star operator. The dependence of Maxwell's equation on the metric of spacetime lies in the Hodge star operator on 2-forms, which is conformally invariant. Written this way, Maxwell's equation is the same in any space–time, manifestly coordinate-invariant, and convenient to use (even in Minkowski space or Euclidean space and time, especially with curvilinear coordinates). An alternative geometric interpretation is that the Faraday 2-form is (up to a factor ) the curvature 2-form of a U(1)-connection on a principal U(1)-bundle whose sections represent charged fields. The connection is much like the vector potential, since every connection can be written as for a "base" connection , and In this view, the Maxwell "equation" is a mathematical identity known as the Bianchi identity. The equation is the only equation with any physical content in this formulation. This point of view is particularly natural when considering charged fields or quantum mechanics. It can be interpreted as saying that, much like gravity can be understood as being the result of the necessity of a connection to parallel transport vectors at different points, electromagnetic phenomena, or more subtle quantum effects like the Aharonov–Bohm effect, can be understood as a result from the necessity of a connection to parallel transport charged fields or wave sections at different points. In fact, just as the Riemann tensor is the holonomy of the Levi-Civita connection along an infinitesimal closed curve, the curvature of the connection is the holonomy of the U(1)-connection. See also Electromagnetic wave equation Inhomogeneous electromagnetic wave equation Mathematical descriptions of the electromagnetic field Covariant formulation of classical electromagnetism Theoretical motivation for general relativity Introduction to the mathematics of general relativity Electrovacuum solution Paradox of radiation of charged particles in a gravitational field Notes References External links Electromagnetic fields in curved spacetimes Maxwell's equations in curved spacetime Maxwell's equations in curved spacetime Curved spacetime
0.777081
0.985854
0.766088
Aspect ratio (aeronautics)
In aeronautics, the aspect ratio of a wing is the ratio of its span to its mean chord. It is equal to the square of the wingspan divided by the wing area. Thus, a long, narrow wing has a high aspect ratio, whereas a short, wide wing has a low aspect ratio. Aspect ratio and other features of the planform are often used to predict the aerodynamic efficiency of a wing because the lift-to-drag ratio increases with aspect ratio, improving the fuel economy in powered airplanes and the gliding angle of sailplanes. Definition The aspect ratio is the ratio of the square of the wingspan to the projected wing area , which is equal to the ratio of the wingspan to the standard mean chord : Mechanism As a useful simplification, an airplane in flight can be imagined to affect a cylinder of air with a diameter equal to the wingspan. A large wingspan affects a large cylinder of air, and a small wingspan affects a small cylinder of air. A small air cylinder must be pushed down with a greater power (energy change per unit time) than a large cylinder in order to produce an equal upward force (momentum change per unit time). This is because giving the same momentum change to a smaller mass of air requires giving it a greater velocity change, and a much greater energy change because energy is proportional to the square of the velocity while momentum is only linearly proportional to the velocity. The aft-leaning component of this change in velocity is proportional to the induced drag, which is the force needed to take up that power at that airspeed. It is important to keep in mind that this is a drastic oversimplification, and an airplane wing affects a very large area around itself. In aircraft Although a long, narrow wing with a high aspect ratio has aerodynamic advantages like better lift-to-drag-ratio (see also details below), there are several reasons why not all aircraft have high aspect-ratio wings: Structural: A long wing has higher bending stress for a given load than a short one and therefore requires higher structural-design (architectural and/or material) specifications. Also, longer wings may have some torsion for a given load, and in some applications this torsion is undesirable (e.g. if the warped wing interferes with aileron effect). Maneuverability: a low aspect-ratio wing will have a higher roll angular acceleration than one with high aspect ratio, because a high aspect-ratio wing has a higher moment of inertia to overcome. In a steady roll, the longer wing gives a higher roll moment because of the longer moment arm of the aileron. Low aspect-ratio wings are usually used on fighter aircraft, not only for the higher roll rates, but especially for longer chord and thinner airfoils involved in supersonic flight. Parasitic drag: While high aspect wings create less induced drag, they have greater parasitic drag (drag due to shape, frontal area, and surface friction). This is because, for an equal wing area, the average chord (length in the direction of wind travel over the wing) is smaller. Due to the effects of Reynolds number, the value of the section drag coefficient is an inverse logarithmic function of the characteristic length of the surface, which means that, even if two wings of the same area are flying at equal speeds and equal angles of attack, the section drag coefficient is slightly higher on the wing with the smaller chord. 
However, this variation is very small when compared to the variation in induced drag with changing wingspan. For example, the section drag coefficient of a NACA 23012 airfoil (at typical lift coefficients) is inversely proportional to chord length to the power 0.129: a 20% increase in chord length would decrease the section drag coefficient by 2.38%. Practicality: low aspect ratios have a greater useful internal volume, since the maximum thickness is greater, which can be used to house the fuel tanks, retractable landing gear and other systems. Airfield size: Airfields, hangars, and other ground equipment define a maximum wingspan, which cannot be exceeded. To generate enough lift at a given wingspan, the aircraft designer must increase wing area by lengthening the chord, thus lowering the aspect ratio. This limits the Airbus A380 to 80 m wide with an aspect ratio of 7.8, while the Boeing 787 or Airbus A350 have an aspect ratio of 9.5, influencing flight economy. Variable aspect ratio Aircraft which approach or exceed the speed of sound sometimes incorporate variable-sweep wings. These wings give a high aspect ratio when unswept and a low aspect ratio at maximum sweep. In subsonic flow, steeply swept and narrow wings are inefficient compared to a high-aspect-ratio wing. However, as the flow becomes transonic and then supersonic, the shock wave first generated along the wing's upper surface causes wave drag on the aircraft, and this drag is proportional to the span of the wing. Thus a long span, valuable at low speeds, causes excessive drag at transonic and supersonic speeds. By varying the sweep, the wing can be optimised for the current flight speed. However, the extra weight and complexity of a moveable wing mean that such a system is not included in many designs. Birds and bats The aspect ratios of birds' and bats' wings vary considerably. Birds that fly long distances or spend long periods soaring, such as albatrosses and eagles, often have wings of high aspect ratio. By contrast, birds which require good maneuverability, such as the Eurasian sparrowhawk, have wings of low aspect ratio. Details For a constant-chord wing of chord c and span b, the aspect ratio is given by AR = b/c. If the wing is swept, c is measured parallel to the direction of forward flight. For most wings the length of the chord is not a constant but varies along the wing, so the aspect ratio AR is defined as the square of the wingspan b divided by the wing area S. In symbols, AR = b^2/S. For such a wing with varying chord, the standard mean chord SMC is defined as SMC = S/b. The effect of aspect ratio AR on the lift-to-drag ratio and wingtip vortices is illustrated in the formula used to calculate the drag coefficient of an aircraft, C_D = C_D0 + C_L^2 / (π e AR), where C_D is the aircraft drag coefficient, C_D0 is the aircraft zero-lift drag coefficient, C_L is the aircraft lift coefficient, π is the circumference-to-diameter ratio of a circle (pi), e is the Oswald efficiency number, and AR is the aspect ratio. Wetted aspect ratio The wetted aspect ratio considers the whole wetted surface area of the airframe, S_wet, rather than just the wing. It is a better measure of the aerodynamic efficiency of an aircraft than the wing aspect ratio. It is defined as AR_wet = b^2 / S_wet, where b is the span and S_wet is the wetted surface. Illustrative examples are provided by the Boeing B-47 and Avro Vulcan. Both aircraft have very similar performance although they are radically different.
The B-47 has a high aspect ratio wing, while the Avro Vulcan has a low aspect ratio wing. They have, however, a very similar wetted aspect ratio. See also Wing configuration Notes References Anderson, John D. Jr, Introduction to Flight, 5th edition, McGraw-Hill. New York, NY. Anderson, John D. Jr, Fundamentals of Aerodynamics, Section 5.3 (4th edition), McGraw-Hill. New York, NY. L. J. Clancy (1975), Aerodynamics, Pitman Publishing Limited, London John P. Fielding. Introduction to Aircraft Design, Cambridge University Press, Daniel P. Raymer (1989). Aircraft Design: A Conceptual Approach, American Institute of Aeronautics and Astronautics, Inc., Washington, DC. McLean, Doug, Understanding Aerodynamics: Arguing from the Real Physics, Section 3.3.5 (1st Edition), Wiley. Aircraft aerodynamics Engineering ratios Aircraft wing design Wing configurations
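A brief numerical illustration of the aspect-ratio and induced-drag relations given in the Details section above, written as a Python sketch; the wingspan, wing area, lift coefficient, Oswald number and zero-lift drag coefficient below are assumed example values, not figures taken from this article.

import math

b = 60.0      # wingspan in metres (assumed example value)
S = 360.0     # wing area in square metres (assumed example value)
C_L = 0.5     # lift coefficient (assumed)
e = 0.85      # Oswald efficiency number (assumed)
C_D0 = 0.02   # zero-lift drag coefficient (assumed)

AR = b**2 / S                        # aspect ratio = span squared over wing area
C_Di = C_L**2 / (math.pi * e * AR)   # induced-drag term from the formula above
C_D = C_D0 + C_Di                    # total drag coefficient
print(f"AR = {AR:.1f}, induced drag C_Di = {C_Di:.4f}, total C_D = {C_D:.4f}")

Doubling the aspect ratio at the same lift coefficient halves the induced-drag term, which is the sense in which a high aspect ratio improves the lift-to-drag ratio.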
0.770582
0.994168
0.766088
Magnus effect
The Magnus effect is a phenomenon that occurs when a spinning object is moving through a fluid. A lift force acts on the spinning object and its path may be deflected in a manner not present when it is not spinning. The strength and direction of the Magnus effect depend on the speed and direction of rotation of the object. The Magnus effect is named after Heinrich Gustav Magnus, the German physicist who investigated it. The force on a rotating cylinder is an example of Kutta–Joukowski lift, named after Martin Kutta and Nikolay Zhukovsky (or Joukowski), mathematicians who contributed to the knowledge of how lift is generated in a fluid flow. Description The most readily observable case of the Magnus effect is when a spinning sphere (or cylinder) curves away from the arc it would follow if it were not spinning. It is often used by football (soccer) and volleyball players, baseball pitchers, and cricket bowlers. Consequently, the phenomenon is important in the study of the physics of many ball sports. It is also an important factor in the study of the effects of spinning on guided missiles, and has some engineering uses, for instance in the design of rotor ships and Flettner airplanes. Topspin in ball games is defined as spin about a horizontal axis perpendicular to the direction of travel that moves the top surface of the ball in the direction of travel. Under the Magnus effect, topspin produces a downward swerve of a moving ball, greater than would be produced by gravity alone. Backspin produces an upwards force that prolongs the flight of a moving ball. Likewise, side-spin causes swerve to either side, as seen during some baseball pitches, e.g. the slider. The overall behaviour is similar to that around an aerofoil (see lift force), but with a circulation generated by mechanical rotation rather than by the shape of the foil. In baseball, this effect is used to generate the downward motion of a curveball, in which the baseball is rotating forward (with 'topspin'). Participants in other sports played with a ball also take advantage of this effect. Physics The Magnus effect or Magnus force acts on a rotating body moving relative to a fluid. Examples include a "curve ball" in baseball or a tennis ball hit obliquely. The rotation alters the boundary layer between the object and the fluid. The force is perpendicular to the relative direction of motion and oriented towards the direction of rotation, i.e. the direction the "nose" of the ball is turning towards. The magnitude of the force depends primarily on the rotation rate, the relative velocity, and the geometry of the body; the magnitude also depends upon the body's surface roughness and the viscosity of the fluid. Accurate quantitative predictions of the force are difficult, but as with other examples of aerodynamic lift there are simpler, qualitative explanations: Flow deflection The diagram shows lift being produced on a back-spinning ball. The wake and trailing air-flow have been deflected downwards; according to Newton's third law of motion there must be a reaction force in the opposite direction. Pressure differences The air's viscosity and the surface roughness of the object cause the air to be carried around the object. This adds to the air velocity on one side of the object and decreases the velocity on the other side. Bernoulli's principle states that under certain conditions increased flow speed is associated with reduced pressure, implying that there is lower air pressure on one side than the other.
This pressure difference results in a force perpendicular to the direction of travel. Kutta–Joukowski lift On a cylinder, the force due to rotation is an example of Kutta–Joukowski lift. It can be analysed in terms of the vortex produced by rotation. The lift per unit length of the cylinder, F/L, is the product of the freestream velocity v (in m/s), the fluid density ρ (in kg/m3), and the circulation G due to viscous effects: F/L = ρ v G, where the vortex strength (assuming that the surrounding fluid obeys the no-slip condition) is given by G = 2π r^2 ω, where ω is the angular velocity of the cylinder (in rad/s) and r is the radius of the cylinder (in m). Inverse Magnus effect In wind tunnel studies, (rough surfaced) baseballs show the Magnus effect, but smooth spheres do not. Further study has shown that certain combinations of conditions result in turbulence in the fluid on one side of the rotating body but laminar flow on the other side. Such cases are called the inverse Magnus effect: the deflection is opposite to that of the typical Magnus effect. Magnus effect in potential flow Potential flow is a mathematical model of the steady flow of a fluid with no viscosity or vorticity present. For potential flow around a circular cylinder, it provides the following results: Non-spinning cylinder The flow pattern is symmetric about a horizontal axis through the centre of the cylinder. At each point above the axis and its corresponding point below the axis, the spacing of streamlines is the same, so velocities are also the same at the two points. Bernoulli's principle shows that, outside the boundary layers, pressures are also the same at corresponding points. There is no lift acting on the cylinder. Spinning cylinder Streamlines are more closely spaced immediately above the cylinder than below, so the air flows faster past the upper surface than past the lower surface. Bernoulli's principle shows that the pressure adjacent to the upper surface is lower than the pressure adjacent to the lower surface. The Magnus force acts vertically upwards on the cylinder. Streamlines immediately above the cylinder are curved with a radius little more than the radius of the cylinder. This means there is low pressure close to the upper surface of the cylinder. Streamlines immediately below the cylinder are curved with a larger radius than streamlines above the cylinder. This means there is higher pressure acting on the lower surface than on the upper. Air immediately above and below the cylinder is curving downwards, accelerated by the pressure gradient. A downwards force is acting on the air. Newton's third law predicts that the Magnus force and the downwards force acting on the air are equal in magnitude and opposite in direction. History The effect is named after German physicist Heinrich Gustav Magnus, who demonstrated the effect with a rapidly rotating brass cylinder and an air blower in 1852. In 1672, Isaac Newton had speculated on the effect after observing tennis players in his Cambridge college. In 1742, Benjamin Robins, a British mathematician, ballistics researcher, and military engineer, explained deviations in the trajectories of musket balls due to their rotation. Pioneering wind tunnel research on the Magnus effect was carried out with smooth rotating spheres in 1928. Lyman Briggs later studied baseballs in a wind tunnel, and others have produced images of the effect.
The studies show that a turbulent wake behind the spinning ball causes aerodynamic drag, plus there is a noticeable angular deflection in the wake, and this deflection is in the direction of spin. In sport The Magnus effect explains commonly observed deviations from the typical trajectories or paths of spinning balls in sport, notably association football, table tennis, tennis, volleyball, golf, baseball, and cricket. The curved path of a golf ball known as slice or hook is largely due to the ball's spin axis being tilted away from the horizontal due to the combined effects of club face angle and swing path, causing the Magnus effect to act at an angle, moving the ball away from a straight line in its trajectory. Backspin (upper surface rotating backwards from the direction of movement) on a golf ball causes a vertical force that counteracts the force of gravity slightly, and enables the ball to remain airborne a little longer than it would were the ball not spinning: this allows the ball to travel farther than a ball not spinning about its horizontal axis. In table tennis, the Magnus effect is easily observed, because of the small mass and low density of the ball. An experienced player can place a wide variety of spins on the ball. Table tennis rackets usually have a surface made of rubber to give the racket maximum grip on the ball to impart a spin. In cricket, the Magnus effect contributes to the types of motion known as drift, dip and lift in spin bowling, depending on the axis of rotation of the spin applied to the ball. The Magnus effect is not responsible for the movement seen in conventional swing bowling, in which the pressure gradient is not caused by the ball's spin, but rather by its raised seam, and the asymmetric roughness or smoothness of its two halves; however, the Magnus effect may be responsible for so-called "Malinga Swing", as observed in the bowling of the swing bowler Lasith Malinga. In airsoft, a system known as hop-up is used to create a backspin on a fired BB, which greatly increases its range, using the Magnus effect in a similar manner as in golf. In baseball, pitchers often impart different spins on the ball, causing it to curve in the desired direction due to the Magnus effect. The PITCHf/x system measures the change in trajectory caused by Magnus in all pitches thrown in Major League Baseball. The match ball for the 2010 FIFA World Cup has been criticised for the different Magnus effect from previous match balls. The ball was described as having less Magnus effect and as a result flies farther but with less controllable swerve. In external ballistics The Magnus effect can also be found in advanced external ballistics. First, a spinning bullet in flight is often subject to a crosswind, which can be simplified as blowing from either the left or the right. In addition to this, even in completely calm air a bullet experiences a small sideways wind component due to its yawing motion. This yawing motion along the bullet's flight path means that the nose of the bullet points in a slightly different direction from the direction the bullet travels. In other words, the bullet "skids" sideways at any given moment, and thus experiences a small sideways wind component in addition to any crosswind component. The combined sideways wind component of these two effects causes a Magnus force to act on the bullet, which is perpendicular both to the direction the bullet is pointing and the combined sideways wind. 
In a very simple case where we ignore various complicating factors, the Magnus force from the crosswind would cause an upward or downward force to act on the spinning bullet (depending on the left or right wind and rotation), causing deflection of the bullet's flight path up or down, thus influencing the point of impact. Overall, the effect of the Magnus force on a bullet's flight path itself is usually insignificant compared to other forces such as aerodynamic drag. However, it greatly affects the bullet's stability, which in turn affects the amount of drag, how the bullet behaves upon impact, and many other factors. The stability of the bullet is affected, because the Magnus effect acts on the bullet's centre of pressure instead of its centre of gravity. This means that it affects the yaw angle of the bullet; it tends to twist the bullet along its flight path, either towards the axis of flight (decreasing the yaw thus stabilising the bullet) or away from the axis of flight (increasing the yaw thus destabilising the bullet). The critical factor is the location of the centre of pressure, which depends on the flowfield structure, which in turn depends mainly on the bullet's speed (supersonic or subsonic), but also the shape, air density and surface features. If the centre of pressure is ahead of the centre of gravity, the effect is destabilizing; if the centre of pressure is behind the centre of gravity, the effect is stabilising. In aviation Some aircraft have been built to use the Magnus effect to create lift with a rotating cylinder instead of a wing, allowing flight at lower horizontal speeds. The earliest attempt to use the Magnus effect for a heavier-than-air aircraft was in 1910 by a US member of Congress, Butler Ames of Massachusetts. The next attempt was in the early 1930s by three inventors in New York state. Ship propulsion and stabilization Rotor ships use mast-like cylinders, called Flettner rotors, for propulsion. These are mounted vertically on the ship's deck. When the wind blows from the side, the Magnus effect creates a forward thrust. Thus, as with any sailing ship, a rotor ship can only move forwards when there is a wind blowing. The effect is also used in a special type of ship stabilizer consisting of a rotating cylinder mounted beneath the waterline and emerging laterally. By controlling the direction and speed of rotation, strong lift or downforce can be generated. The largest deployment of the system to date is in the motor yacht Eclipse. See also Air resistance Ball of the Century Bernoulli's principle Coandă effect Fluid dynamics Kite types Navier–Stokes equations Potential flow around a circular cylinder Reynolds number Tesla turbine References Further reading External links Magnus Cups, Ri Channel Video, January 2012 Analytic Functions, The Magnus Effect, and Wings at MathPages How do bullets fly? Ruprecht Nennstiel, Wiesbaden, Germany How do bullets fly? old version (1998), by Ruprecht Nennstiel Anthony Thyssen's Rotor Kites page Has plans on how to build a model Harnessing wind power using the Magnus effect Researchers Observe Magnus Effect in Light for First Time Quantum Maglift Video:Applications of the Magnus effect Fluid dynamics Physical phenomena
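As a numerical companion to the Kutta–Joukowski relation given in the Physics section above, the following Python sketch evaluates the lift per unit length of a spinning cylinder; the radius, spin rate, airspeed and air density are assumed example values.

import math

rho = 1.225     # air density in kg/m3 (assumed, roughly sea level)
v = 10.0        # freestream velocity in m/s (assumed)
r = 0.05        # cylinder radius in m (assumed)
omega = 300.0   # angular velocity in rad/s (assumed)

G = 2 * math.pi * r**2 * omega   # vortex strength for a no-slip cylinder
F_per_L = rho * v * G            # lift per unit length, in N/m
print(f"circulation G = {G:.3f} m^2/s, lift per unit length = {F_per_L:.1f} N/m")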
0.768066
0.997424
0.766087
Exercise physiology
Exercise physiology is the physiology of physical exercise. It is one of the allied health professions, and involves the study of the acute responses and chronic adaptations to exercise. Exercise physiologists are the highest qualified exercise professionals and utilise education, lifestyle intervention and specific forms of exercise to rehabilitate and manage acute and chronic injuries and conditions. Understanding the effect of exercise involves studying specific changes in muscular, cardiovascular, and neurohumoral systems that lead to changes in functional capacity and strength due to endurance training or strength training. The effect of training on the body has been defined as the reaction to the adaptive responses of the body arising from exercise or as "an elevation of metabolism produced by exercise". Exercise physiologists study the effect of exercise on pathology, and the mechanisms by which exercise can reduce or reverse disease progression. History British physiologist Archibald Hill introduced the concepts of maximal oxygen uptake and oxygen debt in 1922. Hill and German physician Otto Meyerhof shared the 1922 Nobel Prize in Physiology or Medicine for their independent work related to muscle energy metabolism. Building on this work, scientists began measuring oxygen consumption during exercise. Notable contributions were made by Henry Taylor at the University of Minnesota, Scandinavian scientists Per-Olof Åstrand and Bengt Saltin in the 1950s and 60s, the Harvard Fatigue Laboratory, German universities, and the Copenhagen Muscle Research Centre among others. In some countries it is a Primary Health Care Provider. Accredited Exercise Physiologists (AEP's) are university-trained professionals who prescribe exercise-based interventions to treat various conditions using dose response prescriptions specific to each individual. Energy expenditure Humans have a high capacity to expend energy for many hours during sustained exertion. For example, one individual cycling at a speed of through over 50 consecutive days expended a total of 1,145 MJ (273,850 kcal; 273,850 dieter calories) with an average power output of 173.8 W. Skeletal muscle burns 90 mg (0.5 mmol) of glucose each minute during continuous activity (such as when repetitively extending the human knee), generating ≈24 W of mechanical energy, and since muscle energy conversion is only 22–26% efficient, ≈76 W of heat energy. Resting skeletal muscle has a basal metabolic rate (resting energy consumption) of 0.63 W/kg making a 160 fold difference between the energy consumption of inactive and active muscles. For short duration muscular exertion, energy expenditure can be far greater: an adult human male when jumping up from a squat can mechanically generate 314 W/kg. Such rapid movement can generate twice this amount in nonhuman animals such as bonobos, and in some small lizards. This energy expenditure is very large compared to the basal resting metabolic rate of the adult human body. This rate varies somewhat with size, gender and age but is typically between 45 W and 85 W. Total energy expenditure (TEE) due to muscular expended energy is much higher and depends upon the average level of physical work and exercise done during the day. Thus exercise, particularly if sustained for very long periods, dominates the energy metabolism of the body. Physical activity energy expenditure correlates strongly with the gender, age, weight, heart rate, and VO2 max of an individual, during physical activity. 
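As a simple worked example of the resting figures quoted above, the following Python sketch converts a basal metabolic rate in watts into megajoules and dietary kilocalories per day; the 70 W figure is an assumed value inside the quoted 45-85 W range.

BASAL_POWER_W = 70.0            # assumed resting metabolic rate, within the 45-85 W range above
SECONDS_PER_DAY = 24 * 3600

energy_j_per_day = BASAL_POWER_W * SECONDS_PER_DAY   # joules expended per day at rest
energy_mj_per_day = energy_j_per_day / 1e6           # megajoules per day
energy_kcal_per_day = energy_j_per_day / 4184.0      # dietary kilocalories per day
print(f"{energy_mj_per_day:.2f} MJ/day, about {energy_kcal_per_day:.0f} kcal/day")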
Metabolic changes Rapid energy sources Energy needed to perform short-lasting, high-intensity bursts of activity is derived from anaerobic metabolism within the cytosol of muscle cells, as opposed to aerobic respiration, which utilizes oxygen, is sustainable, and occurs in the mitochondria. The quick energy sources consist of the phosphocreatine (PCr) system, fast glycolysis, and adenylate kinase. All of these systems re-synthesize adenosine triphosphate (ATP), which is the universal energy source in all cells. The most rapid source, but the most readily depleted, is the PCr system, which utilizes the enzyme creatine kinase. This enzyme catalyzes a reaction that combines phosphocreatine and adenosine diphosphate (ADP) into ATP and creatine. This resource is short lasting because oxygen is required for the resynthesis of phosphocreatine via mitochondrial creatine kinase. Therefore, under anaerobic conditions, this substrate is finite and only lasts approximately 10 to 30 seconds of high-intensity work. Fast glycolysis, however, can function for approximately 2 minutes prior to fatigue, and predominantly uses intracellular glycogen as a substrate. Glycogen is broken down rapidly via glycogen phosphorylase into individual glucose units during intense exercise. Glucose is then oxidized to pyruvate and under anaerobic conditions is reduced to lactic acid. This reaction oxidizes NADH to NAD+, thereby releasing a hydrogen ion, promoting acidosis. For this reason, fast glycolysis cannot be sustained for long periods of time. Plasma glucose Plasma glucose is said to be maintained when there is an equal rate of glucose appearance (entry into the blood) and glucose disposal (removal from the blood). In the healthy individual, the rates of appearance and disposal are essentially equal during exercise of moderate intensity and duration; however, prolonged exercise or sufficiently intense exercise can result in an imbalance leaning towards a higher rate of disposal than appearance, at which point glucose levels fall, producing the onset of fatigue. The rate of glucose appearance is dictated by the amount of glucose being absorbed at the gut as well as liver (hepatic) glucose output. Although glucose absorption from the gut is not typically a source of glucose appearance during exercise, the liver is capable of catabolizing stored glycogen (glycogenolysis) as well as synthesizing new glucose from specific reduced carbon molecules (glycerol, pyruvate, and lactate) in a process called gluconeogenesis. The ability of the liver to release glucose into the blood from glycogenolysis is unique, since skeletal muscle, the other major glycogen reservoir, is incapable of doing so. Unlike skeletal muscle, liver cells contain the enzyme glucose-6-phosphatase, which removes a phosphate group from glucose-6-P to release free glucose. In order for glucose to exit a cell membrane, the removal of this phosphate group is essential. Although gluconeogenesis is an important component of hepatic glucose output, it alone cannot sustain exercise. For this reason, when glycogen stores are depleted during exercise, glucose levels fall and fatigue sets in. Glucose disposal, the other side of the equation, is controlled by the uptake of glucose by the working skeletal muscles. During exercise, despite decreased insulin concentrations, muscle increases GLUT4 translocation and glucose uptake. The mechanism for increased GLUT4 translocation is an area of ongoing research.
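A toy Python sketch of the appearance/disposal balance described above; the rates and tolerance are purely illustrative assumptions, not physiological measurements.

def plasma_glucose_trend(appearance_mg_per_min, disposal_mg_per_min, tolerance=5.0):
    """Classify the plasma glucose trend from the two rates (illustrative only)."""
    net = appearance_mg_per_min - disposal_mg_per_min
    if abs(net) <= tolerance:
        return "maintained (appearance roughly equals disposal)"
    if net < 0:
        return "falling (disposal exceeds appearance; fatigue onset approaches)"
    return "rising (appearance exceeds disposal)"

# During prolonged exercise, hepatic output may no longer match muscle uptake.
print(plasma_glucose_trend(appearance_mg_per_min=180.0, disposal_mg_per_min=260.0))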
glucose control: As mentioned above, insulin secretion is reduced during exercise, and does not play a major role in maintaining normal blood glucose concentration during exercise, but its counter-regulatory hormones appear in increasing concentrations. Principle among these are glucagon, epinephrine, and growth hormone. All of these hormones stimulate liver (hepatic) glucose output, among other functions. For instance, both epinephrine and growth hormone also stimulate adipocyte lipase, which increases non-esterified fatty acid (NEFA) release. By oxidizing fatty acids, this spares glucose utilization and helps to maintain blood sugar level during exercise. Exercise for diabetes: Exercise is a particularly potent tool for glucose control in those who have diabetes mellitus. In a situation of elevated blood glucose (hyperglycemia), moderate exercise can induce greater glucose disposal than appearance, thereby decreasing total plasma glucose concentrations. As stated above, the mechanism for this glucose disposal is independent of insulin, which makes it particularly well-suited for people with diabetes. In addition, there appears to be an increase in sensitivity to insulin for approximately 12–24 hours post-exercise. This is particularly useful for those who have type II diabetes and are producing sufficient insulin but demonstrate peripheral resistance to insulin signaling. However, during extreme hyperglycemic episodes, people with diabetes should avoid exercise due to potential complications associated with ketoacidosis. Exercise could exacerbate ketoacidosis by increasing ketone synthesis in response to increased circulating NEFA's. Type II diabetes is also intricately linked to obesity, and there may be a connection between type II diabetes and how fat is stored within pancreatic, muscle, and liver cells. Likely due to this connection, weight loss from both exercise and diet tends to increase insulin sensitivity in the majority of people. In some people, this effect can be particularly potent and can result in normal glucose control. Although nobody is technically cured of diabetes, individuals can live normal lives without the fear of diabetic complications; however, regain of weight would assuredly result in diabetes signs and symptoms. Oxygen Vigorous physical activity (such as exercise or hard labor) increases the body's demand for oxygen. The first-line physiologic response to this demand is an increase in heart rate, breathing rate, and depth of breathing. Oxygen consumption (VO2) during exercise is best described by the Fick Equation: VO2=Q x (a-vO2diff), which states that the amount of oxygen consumed is equal to cardiac output (Q) multiplied by the difference between arterial and venous oxygen concentrations. More simply put, oxygen consumption is dictated by the quantity of blood distributed by the heart as well as the working muscle's ability to take up the oxygen within that blood; however, this is a bit of an oversimplification. Although cardiac output is thought to be the limiting factor of this relationship in healthy individuals, it is not the only determinant of VO2 max. That is, factors such as the ability of the lung to oxygenate the blood must also be considered. Various pathologies and anomalies cause conditions such as diffusion limitation, ventilation/perfusion mismatch, and pulmonary shunts that can limit oxygenation of the blood and therefore oxygen distribution. 
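A small Python sketch of the Fick relationship described above; the resting and exercise values are typical textbook-style illustrations and are assumptions rather than data from this article.

def vo2_ml_per_min(cardiac_output_l_per_min, a_v_o2_diff_ml_per_100ml):
    """Fick equation: VO2 equals cardiac output times the arteriovenous O2 difference."""
    # Convert cardiac output from L/min to dL/min so units match mL O2 per dL of blood.
    return cardiac_output_l_per_min * 10.0 * a_v_o2_diff_ml_per_100ml

print(vo2_ml_per_min(5.0, 5.0))    # rest: about 250 mL O2/min (assumed values)
print(vo2_ml_per_min(20.0, 15.0))  # heavy exercise: about 3000 mL O2/min (assumed values)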
In addition, the oxygen carrying capacity of the blood is also an important determinant of the equation. Oxygen carrying capacity is often the target of ergogenic aids used in endurance sports to increase the volume percentage of red blood cells (hematocrit), such as through blood doping or the use of erythropoietin (EPO). Furthermore, peripheral oxygen uptake is reliant on a rerouting of blood flow from relatively inactive viscera to the working skeletal muscles, and within the skeletal muscle, the capillary-to-muscle-fiber ratio influences oxygen extraction. Dehydration Dehydration refers both to hypohydration (dehydration induced prior to exercise) and to exercise-induced dehydration (dehydration that develops during exercise). The latter reduces aerobic endurance performance and results in increased body temperature, heart rate, perceived exertion, and possibly increased reliance on carbohydrate as a fuel source. Although the negative effects of exercise-induced dehydration on exercise performance were clearly demonstrated in the 1940s, athletes continued to believe for years thereafter that fluid intake was not beneficial. More recently, negative effects on performance have been demonstrated with modest (<2%) dehydration, and these effects are exacerbated when the exercise is performed in a hot environment. The effects of hypohydration may vary, depending on whether it is induced through diuretics or sauna exposure, which substantially reduce plasma volume, or prior exercise, which has much less impact on plasma volume. Hypohydration reduces aerobic endurance, but its effects on muscle strength and endurance are not consistent and require further study. Intense prolonged exercise produces metabolic waste heat, and this is removed by sweat-based thermoregulation. A male marathon runner loses each hour around 0.83 L in cool weather and 1.2 L in warm (losses in females are about 68 to 73% lower). People doing heavy exercise may lose two and a half times as much fluid in sweat as urine. This can have profound physiological effects. Cycling for 2 hours in the heat (35 °C) with minimal fluid intake causes body mass to decline by 3 to 5%, blood volume likewise by 3 to 6%, body temperature to rise constantly, and, in comparison with proper fluid intake, higher heart rates, lower stroke volumes and cardiac outputs, reduced skin blood flow, and higher systemic vascular resistance. These effects are largely eliminated by replacing 50 to 80% of the fluid lost in sweat. Other Plasma catecholamine concentrations increase 10-fold in whole body exercise. Ammonia is produced by exercised skeletal muscles from ADP (the precursor of ATP) by purine nucleotide deamination and amino acid catabolism of myofibrils. Interleukin-6 (IL-6) increases in blood circulation due to its release from working skeletal muscles. This release is reduced if glucose is taken, suggesting it is related to energy depletion stresses. Sodium absorption is affected by the release of interleukin-6, as this can cause the secretion of arginine vasopressin which, in turn, can lead to exercise-associated dangerously low sodium levels (hyponatremia). This loss of sodium in blood plasma can result in swelling of the brain. This can be prevented by awareness of the risk of drinking excessive amounts of fluids during prolonged exercise. Brain At rest, the human brain receives 15% of total cardiac output, and uses 20% of the body's energy consumption.
The brain is normally dependent for its high energy expenditure upon aerobic metabolism. The brain as a result is highly sensitive to failure of its oxygen supply with loss of consciousness occurring within six to seven seconds, with its EEG going flat in 23 seconds. Therefore, the brain's function would be disrupted if exercise affected its supply of oxygen and glucose. Protecting the brain from even minor disruption is important since exercise depends upon motor control. Because humans are bipeds, motor control is needed for keeping balance. For this reason, brain energy consumption is increased during intense physical exercise due to the demands in the motor cognition needed to control the body. Exercise Physiologists treat a range of neurological conditions including (but not limited to): Parkinson's, Alzheimer's, Traumatic Brain Injury, Spinal Cord Injury, Cerebral Palsy and mental health conditions. Cerebral oxygen Cerebral autoregulation usually ensures the brain has priority to cardiac output, though this is impaired slightly by exhaustive exercise. During submaximal exercise, cardiac output increases and cerebral blood flow increases beyond the brain's oxygen needs. However, this is not the case for continuous maximal exertion: "Maximal exercise is, despite the increase in capillary oxygenation [in the brain], associated with a reduced mitochondrial O2 content during whole body exercise" The autoregulation of the brain's blood supply is impaired particularly in warm environments Glucose In adults, exercise depletes the plasma glucose available to the brain: short intense exercise (35 min ergometer cycling) can reduce brain glucose uptake by 32%. At rest, energy for the adult brain is normally provided by glucose but the brain has a compensatory capacity to replace some of this with lactate. Research suggests that this can be raised, when a person rests in a brain scanner, to about 17%, with a higher percentage of 25% occurring during hypoglycemia. During intense exercise, lactate has been estimated to provide a third of the brain's energy needs. There is evidence that the brain might, however, in spite of these alternative sources of energy, still suffer an energy crisis since IL-6 (a sign of metabolic stress) is released during exercise from the brain. Hyperthermia Humans use sweat thermoregulation for body heat removal, particularly to remove the heat produced during exercise. Moderate dehydration as a consequence of exercise and heat is reported to impair cognition. These impairments can start after body mass lost that is greater than 1%. Cognitive impairment, particularly due to heat and exercise is likely to be due to loss of integrity to the blood brain barrier. Hyperthermia can also lower cerebral blood flow, and raise brain temperature. Fatigue Intense activity Researchers once attributed fatigue to a build-up of lactic acid in muscles. However, this is no longer believed. Rather, lactate may stop muscle fatigue by keeping muscles fully responding to nerve signals. The available oxygen and energy supply, and disturbances of muscle ion homeostasis are the main factors determining exercise performance, at least during brief very intense exercise. Each muscle contraction involves an action potential that activates voltage sensors, and so releases Ca2+ ions from the muscle fibre's sarcoplasmic reticulum. The action potentials that cause this also require ion changes: Na influxes during the depolarization phase and K effluxes for the repolarization phase. 
Cl− ions also diffuse into the sarcoplasm to aid the repolarization phase. During intense muscle contraction, the ion pumps that maintain homeostasis of these ions are inactivated and this (with other ion related disruption) causes ionic disturbances. This causes cellular membrane depolarization, inexcitability, and so muscle weakness. Ca2+ leakage from type 1 ryanodine receptor channels has also been identified with fatigue. Endurance failure After intense prolonged exercise, there can be a collapse in body homeostasis. Some famous examples include: Dorando Pietri in the 1908 Summer Olympic men's marathon ran the wrong way and collapsed several times. Jim Peters in the marathon of the 1954 Commonwealth Games staggered and collapsed several times, and though he had a five-kilometre (three-mile) lead, failed to finish. Though it was formerly believed that this was due to severe dehydration, more recent research suggests it was the combined effects upon the brain of hyperthermia, hypertonic hypernatraemia associated with dehydration, and possibly hypoglycaemia. Gabriela Andersen-Schiess in the women's marathon at the 1984 Los Angeles Summer Olympics staggered through the race's final 400 meters, stopping occasionally and showing signs of heat exhaustion. Though she fell across the finish line, she was released from medical care only two hours later. Central governor Tim Noakes, based on an earlier idea by the 1922 Nobel Prize in Physiology or Medicine winner Archibald Hill, has proposed the existence of a central governor. In this model, the brain continuously adjusts the power output by muscles during exercise in regard to a safe level of exertion. These neural calculations factor in the prior length of strenuous exercise, the planned duration of further exertion, and the present metabolic state of the body. This adjusts the number of activated skeletal muscle motor units, and is subjectively experienced as fatigue and exhaustion. The idea of a central governor rejects the earlier idea that fatigue is only caused by mechanical failure of the exercising muscles ("peripheral fatigue"). Instead, the brain models the metabolic limits of the body to ensure that whole body homeostasis is protected, in particular that the heart is guarded from hypoxia, and that an emergency reserve is always maintained. The idea of the central governor has been questioned, since 'physiological catastrophes' can and do occur, suggesting that if it did exist, athletes (such as Dorando Pietri, Jim Peters and Gabriela Andersen-Schiess) can override it. Other factors Exercise fatigue has also been suggested to be affected by: brain hyperthermia; glycogen depletion in brain cells; depletion of muscle and liver glycogen (see "hitting the wall"); reactive oxygen species impairing skeletal muscle function; reduced levels of glutamate secondary to uptake of ammonia in the brain; fatigue in the diaphragm and abdominal respiratory muscles limiting breathing; impaired oxygen supply to muscles; ammonia effects upon the brain; and serotonin pathways in the brain. Cardiac biomarkers Prolonged exercise such as marathons can increase cardiac biomarkers such as troponin, B-type natriuretic peptide (BNP), and ischemia-modified (aka MI) albumin. This can be misinterpreted by medical personnel as signs of myocardial infarction or cardiac dysfunction. In these clinical conditions, such cardiac biomarkers are produced by irreversible injury of muscles.
In contrast, the processes that create them after strenuous exertion in endurance sports are reversible, with their levels returning to normal within 24-hours (further research, however, is still needed). Human adaptations Humans are specifically adapted to engage in prolonged strenuous muscular activity (such as efficient long distance bipedal running). This capacity for endurance running may have evolved to allow the running down of game animals by persistent slow but constant chase over many hours. Central to the success of this is the ability of the human body to effectively remove muscle heat waste. In most animals, this is stored by allowing a temporary increase in body temperature. This allows them to escape from animals that quickly speed after them for a short duration (the way nearly all predators catch their prey). Humans, unlike other animals that catch prey, remove heat with a specialized thermoregulation based on sweat evaporation. One gram of sweat can remove 2,598 J of heat energy. Another mechanism is increased skin blood flow during exercise that allows for greater convective heat loss that is aided by our upright posture. This skin based cooling has resulted in humans acquiring an increased number of sweat glands, combined with a lack of body fur that would otherwise stop air circulation and efficient evaporation. Because humans can remove exercise heat, they can avoid the fatigue from heat exhaustion that affects animals chased in a persistent manner, and so eventually catch them. Selective breeding experiments with rodents Rodents have been specifically bred for exercise behavior or performance in several different studies. For example, laboratory rats have been bred for high or low performance on a motorized treadmill with electrical stimulation as motivation. The high-performance line of rats also exhibits increased voluntary wheel-running behavior as compared with the low-capacity line. In an experimental evolution approach, four replicate lines of laboratory mice have been bred for high levels of voluntary exercise on wheels, while four additional control lines are maintained by breeding without regard to the amount of wheel running. These selected lines of mice also show increased endurance capacity in tests of forced endurance capacity on a motorized treadmill. However, in neither selection experiment have the precise causes of fatigue during either forced or voluntary exercise been determined. Exercise-induced muscle pain Physical exercise may cause pain both as an immediate effect that may result from stimulation of free nerve endings by low pH, as well as a delayed onset muscle soreness. The delayed soreness is fundamentally the result of ruptures within the muscle, although apparently not involving the rupture of whole muscle fibers. Muscle pain can range from a mild soreness to a debilitating injury depending on intensity of exercise, level of training, and other factors. There is some preliminary evidence to suggest that moderate intensity continuous training has the ability to increase someone's pain threshold. Education in exercise physiology Accreditation programs exist with professional bodies in most developed countries, ensuring the quality and consistency of education. In Canada, one may obtain the professional certification title – Certified Exercise Physiologist for those working with clients (both clinical and non clinical) in the health and fitness industry. 
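As a rough cross-check of the sweat-cooling figures quoted earlier in this article (about 0.83 to 1.2 L of sweat per hour and roughly 2,598 J of heat removed per gram of evaporated sweat), the cooling power and fractional body-mass loss can be estimated as follows. This is a sketch: the 70 kg body mass and the assumption of complete evaporation are illustrative, not values from the text.

```python
# Rough cross-check of the sweat figures quoted in this article:
# ~0.83-1.2 L of sweat per hour for a male marathon runner, and
# ~2,598 J of heat removed per gram of evaporated sweat.
# Assumptions (NOT from the article): 1 L of sweat ~ 1,000 g, a 70 kg runner,
# and complete evaporation (some sweat drips off, so real cooling is lower).

HEAT_PER_GRAM_J = 2598.0
BODY_MASS_KG = 70.0

for litres_per_hour in (0.83, 1.2):
    grams_per_hour = litres_per_hour * 1000.0
    cooling_watts = grams_per_hour * HEAT_PER_GRAM_J / 3600.0
    mass_loss_percent_per_hour = (litres_per_hour / BODY_MASS_KG) * 100.0
    print(f"{litres_per_hour:.2f} L/h: ~{cooling_watts:.0f} W of cooling, "
          f"~{mass_loss_percent_per_hour:.1f}% of body mass lost per hour")
```

At these rates, a single unreplaced hour of sweating already approaches the roughly 2% body-mass loss at which performance decrements were noted above.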
In Australia, one may obtain the professional certification title - Accredited Exercise Physiologist (AEP) through the professional body Exercise and Sports Science Australia (ESSA). In Australia, it is common for an AEP to also have the qualification of an Accredited Exercise Scientist (AES). The premier governing body is the American College of Sports Medicine. An exercise physiologist's area of study may include but is not limited to biochemistry, bioenergetics, cardiopulmonary function, hematology, biomechanics, skeletal muscle physiology, neuroendocrine function, and central and peripheral nervous system function. Furthermore, exercise physiologists range from basic scientists, to clinical researchers, to clinicians, to sports trainers. Colleges and universities offer exercise physiology as a program of study at various levels, including undergraduate, graduate degrees and certificates, and doctoral programs. The basis of Exercise Physiology as a major is to prepare students for a career in the field of health sciences. Such a program focuses on the scientific study of the physiological processes involved in physical or motor activity, including sensorimotor interactions, response mechanisms, and the effects of injury, disease, and disability, and includes instruction in muscular and skeletal anatomy; molecular and cellular basis of muscle contraction; fuel utilization; neurophysiology of motor mechanics; systemic physiological responses (respiration, blood flow, endocrine secretions, and others); fatigue and exhaustion; muscle and body training; physiology of specific exercises and activities; physiology of injury; and the effects of disabilities and disease. Careers available with a degree in Exercise Physiology can include: non-clinical, client-based work; strength and conditioning specialists; cardiopulmonary treatment; and clinical-based research. To cover these multiple areas of study, students are taught processes to follow when working with clients. Practical and lecture teaching takes place in the classroom and in a laboratory setting. These include: Health and risk assessment: In order to work safely with a client, practitioners must first know the benefits and risks associated with physical activity. Examples of this include knowing specific injuries the body can experience during exercise, how to properly screen a client before their training begins, and what factors to look for that may inhibit their performance. Exercise testing: Coordinating exercise tests in order to measure body composition, cardiorespiratory fitness, muscular strength/endurance, and flexibility. Functional tests are also used in order to gain understanding of a more specific part of the body. Once the information is gathered about a client, exercise physiologists must also be able to interpret the test data and decide what health-related outcomes have been discovered. Exercise prescription: Forming training programs that best meet an individual's health and fitness goals. Prescribers must be able to take into account different types of exercise, the reasons/goals for a client's workout, and pre-screened assessments. Knowing how to prescribe exercises for special considerations and populations is also required. These may include age differences, pregnancy, joint diseases, obesity, pulmonary disease, etc. Curriculum The curriculum for exercise physiology includes biology, chemistry, and applied sciences.
The purpose of the classes selected for this major is to give students a proficient understanding of human anatomy, human physiology, and exercise physiology. Not only is a full class schedule needed to complete a degree in Exercise Physiology, but a minimum amount of practicum experience is required and internships are recommended. See also Bioenergetics Excess post-exercise oxygen consumption (EPOC) Hill's model Physical therapy Sports science Sports medicine References External links Athletic training Endurance games Evolutionary biology Human evolution Physiology Strength training Physical exercise
Friction
Friction is the force resisting the relative motion of solid surfaces, fluid layers, and material elements sliding against each other. Types of friction include dry, fluid, lubricated, skin, and internal -- an incomplete list. The study of the processes involved is called tribology, and has a history of more than 2000 years. Friction can have dramatic consequences, as illustrated by the use of friction created by rubbing pieces of wood together to start a fire. Another important consequence of many types of friction can be wear, which may lead to performance degradation or damage to components. It is known that frictional energy losses account for about 20% of the total energy expenditure of the world. As briefly discussed later, there are many different contributors to the retarding force in friction, ranging from asperity deformation to the generation of charges and changes in local structure. Friction is not itself a fundamental force, it is a non-conservative force – work done against friction is path dependent. In the presence of friction, some mechanical energy is transformed to heat as well as the free energy of the structural changes and other types of dissipation, so mechanical energy is not conserved. The complexity of the interactions involved makes the calculation of friction from first principles difficult and it is often easier to use empirical methods for analysis and the development of theory. Types There are several types of friction: Dry friction is a force that opposes the relative lateral motion of two solid surfaces in contact. Dry friction is subdivided into static friction ("stiction") between non-moving surfaces, and kinetic friction between moving surfaces. With the exception of atomic or molecular friction, dry friction generally arises from the interaction of surface features, known as asperities (see Figure). Fluid friction describes the friction between layers of a viscous fluid that are moving relative to each other. Lubricated friction is a case of fluid friction where a lubricant fluid separates two solid surfaces. Skin friction is a component of drag, the force resisting the motion of a fluid across the surface of a body. Internal friction is the force resisting motion between the elements making up a solid material while it undergoes deformation. History Many ancient authors including Aristotle, Vitruvius, and Pliny the Elder, were interested in the cause and mitigation of friction. They were aware of differences between static and kinetic friction with Themistius stating in 350 that "it is easier to further the motion of a moving body than to move a body at rest". The classic laws of sliding friction were discovered by Leonardo da Vinci in 1493, a pioneer in tribology, but the laws documented in his notebooks were not published and remained unknown. These laws were rediscovered by Guillaume Amontons in 1699 and became known as Amonton's three laws of dry friction. Amontons presented the nature of friction in terms of surface irregularities and the force required to raise the weight pressing the surfaces together. This view was further elaborated by Bernard Forest de Bélidor and Leonhard Euler (1750), who derived the angle of repose of a weight on an inclined plane and first distinguished between static and kinetic friction. John Theophilus Desaguliers (1734) first recognized the role of adhesion in friction. Microscopic forces cause surfaces to stick together; he proposed that friction was the force necessary to tear the adhering surfaces apart. 
The understanding of friction was further developed by Charles-Augustin de Coulomb (1785). Coulomb investigated the influence of four main factors on friction: the nature of the materials in contact and their surface coatings; the extent of the surface area; the normal pressure (or load); and the length of time that the surfaces remained in contact (time of repose). Coulomb further considered the influence of sliding velocity, temperature and humidity, in order to decide between the different explanations on the nature of friction that had been proposed. The distinction between static and dynamic friction is made in Coulomb's friction law (see below), although this distinction was already drawn by Johann Andreas von Segner in 1758. The effect of the time of repose was explained by Pieter van Musschenbroek (1762) by considering the surfaces of fibrous materials, with fibers meshing together, which takes a finite time in which the friction increases. John Leslie (1766–1832) noted a weakness in the views of Amontons and Coulomb: If friction arises from a weight being drawn up the inclined plane of successive asperities, then why is it not balanced through descending the opposite slope? Leslie was equally skeptical about the role of adhesion proposed by Desaguliers, which should on the whole have the same tendency to accelerate as to retard the motion. In Leslie's view, friction should be seen as a time-dependent process of flattening, pressing down asperities, which creates new obstacles in what were cavities before. In the long course of the development of the law of conservation of energy and of the first law of thermodynamics, friction was recognised as a mode of conversion of mechanical work into heat. In 1798, Benjamin Thompson reported on cannon boring experiments. Arthur Jules Morin (1833) developed the concept of sliding versus rolling friction. In 1842, Julius Robert Mayer frictionally generated heat in paper pulp and measured the temperature rise. In 1845, Joule published a paper entitled The Mechanical Equivalent of Heat, in which he specified a numerical value for the amount of mechanical work required to "produce a unit of heat", based on the friction of an electric current passing through a resistor, and on the friction of a paddle wheel rotating in a vat of water. Osborne Reynolds (1866) derived the equation of viscous flow. This completed the classic empirical model of friction (static, kinetic, and fluid) commonly used today in engineering. In 1877, Fleeming Jenkin and J. A. Ewing investigated the continuity between static and kinetic friction. In 1907, G.H. Bryan published an investigation of the foundations of thermodynamics, Thermodynamics: an Introductory Treatise dealing mainly with First Principles and their Direct Applications. He noted that for a driven hard surface sliding on a body driven by it, the work done by the driver exceeds the work received by the body. The difference is accounted for by heat generated by friction. Over the years, for example in his 1879 thesis, but particularly in 1926, Planck advocated regarding the generation of heat by rubbing as the most specific way to define heat, and the prime example of an irreversible thermodynamic process. The focus of research during the 20th century has been to understand the physical mechanisms behind friction. Frank Philip Bowden and David Tabor (1950) showed that, at a microscopic level, the actual area of contact between surfaces is a very small fraction of the apparent area. 
This actual area of contact, caused by asperities, increases with pressure. The development of the atomic force microscope (ca. 1986) enabled scientists to study friction at the atomic scale, showing that, on that scale, dry friction is the product of the inter-surface shear stress and the contact area. These two discoveries explain Amontons' first law (below): the macroscopic proportionality between normal force and static frictional force between dry surfaces. Laws of dry friction The elementary properties of sliding (kinetic) friction were discovered by experiment in the 15th to 18th centuries and were expressed as three empirical laws: Amontons' First Law: The force of friction is directly proportional to the applied load. Amontons' Second Law: The force of friction is independent of the apparent area of contact. Coulomb's Law of Friction: Kinetic friction is independent of the sliding velocity. Dry friction Dry friction resists relative lateral motion of two solid surfaces in contact. The two regimes of dry friction are 'static friction' ("stiction") between non-moving surfaces, and kinetic friction (sometimes called sliding friction or dynamic friction) between moving surfaces. Coulomb friction, named after Charles-Augustin de Coulomb, is an approximate model used to calculate the force of dry friction. It is governed by the model: F ≤ μN, where F is the force of friction exerted by each surface on the other. It is parallel to the surface, in a direction opposite to the net applied force. μ is the coefficient of friction, which is an empirical property of the contacting materials, and N is the normal force exerted by each surface on the other, directed perpendicular (normal) to the surface. The Coulomb friction may take any value from zero up to μN, and the direction of the frictional force against a surface is opposite to the motion that surface would experience in the absence of friction. Thus, in the static case, the frictional force is exactly what it must be in order to prevent motion between the surfaces; it balances the net force tending to cause such motion. In this case, rather than providing an estimate of the actual frictional force, the Coulomb approximation provides a threshold value for this force, above which motion would commence. This maximum force is known as traction. The force of friction is always exerted in a direction that opposes movement (for kinetic friction) or potential movement (for static friction) between the two surfaces. For example, a curling stone sliding along the ice experiences a kinetic force slowing it down. For an example of potential movement, the drive wheels of an accelerating car experience a frictional force pointing forward; if they did not, the wheels would spin, and the rubber would slide backwards along the pavement. Note that it is not the direction of movement of the vehicle they oppose, it is the direction of (potential) sliding between tire and road. Normal force The normal force is defined as the net force compressing two parallel surfaces together, and its direction is perpendicular to the surfaces. In the simple case of a mass resting on a horizontal surface, the only component of the normal force is the force due to gravity, where N = mg. In this case, conditions of equilibrium tell us that the magnitude of the friction force is zero, F = 0. In fact, the friction force always satisfies F ≤ μN, with equality reached only at a critical ramp angle (given by θ = arctan μs) that is steep enough to initiate sliding.
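A minimal numerical sketch of the Coulomb model and the critical ramp angle described above; the block mass, ramp angles and friction coefficients are illustrative values, not taken from the article.

```python
import math

# Minimal sketch of the Coulomb dry-friction model for a block on a ramp.
# Illustrative inputs (not from the article): 2.0 kg block, mu_s = 0.5, mu_k = 0.4.
m, g = 2.0, 9.81          # mass (kg), gravitational acceleration (m/s^2)
mu_s, mu_k = 0.5, 0.4     # static and kinetic friction coefficients

critical_angle = math.degrees(math.atan(mu_s))  # ramp angle at which sliding starts

for angle_deg in (10.0, 25.0, 40.0):
    theta = math.radians(angle_deg)
    normal = m * g * math.cos(theta)      # N = m*g*cos(theta) on the incline
    driving = m * g * math.sin(theta)     # gravity component along the incline
    if driving <= mu_s * normal:
        # Static case: friction exactly balances the driving force, no sliding.
        friction = driving
        state = "sticks"
    else:
        # Kinetic case: friction takes the value mu_k * N and the block slides.
        friction = mu_k * normal
        state = "slides"
    print(f"{angle_deg:4.1f} deg: friction = {friction:5.2f} N, block {state} "
          f"(critical angle = {critical_angle:.1f} deg)")
```

With these values the block sticks below about 26.6 degrees and slides above it, matching θ = arctan μs.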
The friction coefficient is an empirical (experimentally measured) structural property that depends only on various aspects of the contacting materials, such as surface roughness. The coefficient of friction is not a function of mass or volume. For instance, a large aluminum block has the same coefficient of friction as a small aluminum block. However, the magnitude of the friction force itself depends on the normal force, and hence on the mass of the block. Depending on the situation, the calculation of the normal force might include forces other than gravity. If an object is on a level surface and subjected to an external force P tending to cause it to slide, then the normal force between the object and the surface is just N = W + Py, where W is the block's weight and Py is the downward component of the external force. Prior to sliding, this friction force is F = Px, where Px is the horizontal component of the external force. Thus, F ≤ μsN in general. Sliding commences only after this frictional force reaches the value F = μsN. Until then, friction is whatever it needs to be to provide equilibrium, so it can be treated as simply a reaction. If the object is on a tilted surface such as an inclined plane, the normal force from gravity is smaller than W, because less of the force of gravity is perpendicular to the face of the plane. The normal force and the frictional force are ultimately determined using vector analysis, usually via a free body diagram. In general, the process for solving any statics problem with friction is to treat contacting surfaces tentatively as immovable so that the corresponding tangential reaction force between them can be calculated. If this frictional reaction force satisfies F ≤ μsN, then the tentative assumption was correct, and it is the actual frictional force. Otherwise, the friction force must be set equal to F = μkN, and the resulting force imbalance would then determine the acceleration associated with slipping. Coefficient of friction The coefficient of friction (COF), often symbolized by the Greek letter μ, is a dimensionless scalar value which equals the ratio of the force of friction between two bodies and the force pressing them together, either during or at the onset of slipping. The coefficient of friction depends on the materials used; for example, ice on steel has a low coefficient of friction, while rubber on pavement has a high coefficient of friction. Coefficients of friction range from near zero to greater than one. The coefficient of friction between two surfaces of similar metals is greater than that between two surfaces of different metals; for example, brass has a higher coefficient of friction when moved against brass, but less if moved against steel or aluminum. For surfaces at rest relative to each other, μ = μs, where μs is the coefficient of static friction. This is usually larger than its kinetic counterpart. The coefficient of static friction exhibited by a pair of contacting surfaces depends upon the combined effects of material deformation characteristics and surface roughness, both of which have their origins in the chemical bonding between atoms in each of the bulk materials and between the material surfaces and any adsorbed material. The fractality of surfaces, a parameter describing the scaling behavior of surface asperities, is known to play an important role in determining the magnitude of the static friction. For surfaces in relative motion, μ = μk, where μk is the coefficient of kinetic friction.
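The tentative-assumption procedure for statics problems with friction described above can be sketched in a few lines; the block mass, force components and coefficients below are illustrative, not from the article.

```python
# Sketch of the statics procedure described above: tentatively assume no sliding,
# compute the tangential reaction needed for equilibrium, and compare it with mu_s * N.
# Illustrative inputs (NOT from the article).
m, g = 10.0, 9.81                 # block mass (kg), gravity (m/s^2)
mu_s, mu_k = 0.6, 0.5             # static and kinetic friction coefficients
P_x, P_y = 40.0, 15.0             # external force: horizontal and downward components (N)

W = m * g                         # weight of the block
N = W + P_y                       # normal force on a level surface: N = W + P_y
required_friction = P_x           # tangential reaction needed for equilibrium

if required_friction <= mu_s * N:
    # Tentative assumption holds: the surfaces do not slip.
    friction = required_friction
    acceleration = 0.0
else:
    # Slipping: friction drops to the kinetic value and the imbalance accelerates the block.
    friction = mu_k * N
    acceleration = (P_x - friction) / m

print(f"N = {N:.1f} N, friction = {friction:.1f} N, a = {acceleration:.2f} m/s^2")
```

For these inputs the required reaction (40 N) is below μsN, so the block remains in static equilibrium.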
The Coulomb friction is equal to μkN, and the frictional force on each surface is exerted in the direction opposite to its motion relative to the other surface. Arthur Morin introduced the term and demonstrated the utility of the coefficient of friction. The coefficient of friction is an empirical measurement: it has to be measured experimentally, and cannot be found through calculations. Rougher surfaces tend to have higher effective values. Both static and kinetic coefficients of friction depend on the pair of surfaces in contact; for a given pair of surfaces, the coefficient of static friction is usually larger than that of kinetic friction; in some sets the two coefficients are equal, such as teflon-on-teflon. Most dry materials in combination have friction coefficient values between 0.3 and 0.6. Values outside this range are rarer, but teflon, for example, can have a coefficient as low as 0.04. A value of zero would mean no friction at all, an elusive property. Rubber in contact with other surfaces can yield friction coefficients from 1 to 2. Occasionally it is maintained that μ is always < 1, but this is not true. While in most relevant applications μ < 1, a value above 1 merely implies that the force required to slide an object along the surface is greater than the normal force of the surface on the object. For example, silicone rubber or acrylic rubber-coated surfaces have a coefficient of friction that can be substantially larger than 1. While it is often stated that the COF is a "material property," it is better categorized as a "system property." Unlike true material properties (such as conductivity, dielectric constant, yield strength), the COF for any two materials depends on system variables like temperature, velocity, atmosphere and also what are now popularly described as aging and deaging times; as well as on geometric properties of the interface between the materials, namely surface structure. For example, a copper pin sliding against a thick copper plate can have a COF that varies from 0.6 at low speeds (metal sliding against metal) to below 0.2 at high speeds when the copper surface begins to melt due to frictional heating. The latter speed, of course, does not determine the COF uniquely; if the pin diameter is increased so that the frictional heating is removed rapidly, the temperature drops, the pin remains solid and the COF rises to that of a 'low speed' test. In systems with significant non-uniform stress fields, because local slip occurs before the system slides, the macroscopic coefficient of static friction depends on the applied load, system size, or shape; Amontons' law is not satisfied macroscopically. Approximate coefficients of friction Under certain conditions some materials have very low friction coefficients. An example is (highly ordered pyrolytic) graphite which can have a friction coefficient below 0.01. This ultralow-friction regime is called superlubricity. Static friction Static friction is friction between two or more solid objects that are not moving relative to each other. For example, static friction can prevent an object from sliding down a sloped surface. The coefficient of static friction, typically denoted as μs, is usually higher than the coefficient of kinetic friction. Static friction is considered to arise as the result of surface roughness features across multiple length scales at solid surfaces.
These features, known as asperities, are present down to nano-scale dimensions and result in true solid to solid contact existing only at a limited number of points accounting for only a fraction of the apparent or nominal contact area. The linearity between applied load and true contact area, arising from asperity deformation, gives rise to the linearity between static frictional force and normal force, found for typical Amonton–Coulomb type friction. The static friction force must be overcome by an applied force before an object can move. The maximum possible friction force between two surfaces before sliding begins is the product of the coefficient of static friction and the normal force: Fmax = μsN. When there is no sliding occurring, the friction force can have any value from zero up to Fmax. Any force smaller than Fmax attempting to slide one surface over the other is opposed by a frictional force of equal magnitude and opposite direction. Any force larger than Fmax overcomes the force of static friction and causes sliding to occur. The instant sliding occurs, static friction is no longer applicable: the friction between the two surfaces is then called kinetic friction. However, an apparent static friction can be observed even in the case when the true static friction is zero. An example of static friction is the force that prevents a car wheel from slipping as it rolls on the ground. Even though the wheel is in motion, the patch of the tire in contact with the ground is stationary relative to the ground, so it is static rather than kinetic friction. Upon slipping, the wheel friction changes to kinetic friction. An anti-lock braking system operates on the principle of allowing a locked wheel to resume rotating so that the car maintains static friction. The maximum value of static friction, when motion is impending, is sometimes referred to as limiting friction, although this term is not used universally. Kinetic friction Kinetic friction, also known as dynamic friction or sliding friction, occurs when two objects are moving relative to each other and rub together (like a sled on the ground). The coefficient of kinetic friction is typically denoted as μk, and is usually less than the coefficient of static friction for the same materials. However, Richard Feynman comments that "with dry metals it is very hard to show any difference." The friction force between two surfaces after sliding begins is the product of the coefficient of kinetic friction and the normal force: F = μkN. This is responsible for the Coulomb damping of an oscillating or vibrating system. New models are beginning to show how kinetic friction can be greater than static friction. In many other cases roughness effects are dominant, for example in rubber to road friction. Surface roughness and contact area affect kinetic friction for micro- and nano-scale objects where surface area forces dominate inertial forces. The origin of kinetic friction at nanoscale can be rationalized by an energy model. During sliding, a new surface forms at the back of a sliding true contact, and existing surface disappears at the front of it. Since all surfaces involve the thermodynamic surface energy, work must be spent in creating the new surface, and energy is released as heat in removing the surface. Thus, a force is required to move the back of the contact, and frictional heat is released at the front. Angle of friction For certain applications, it is more useful to define static friction in terms of the maximum angle before which one of the items will begin sliding.
This is called the angle of friction or friction angle. It is defined as: tan θ = μs, and thus: θ = arctan μs, where θ is the angle from horizontal and μs is the static coefficient of friction between the objects. This formula can also be used to calculate μs from empirical measurements of the friction angle. Friction at the atomic level Determining the forces required to move atoms past each other is a challenge in designing nanomachines. In 2008 scientists for the first time were able to move a single atom across a surface, and measure the forces required. Using ultrahigh vacuum and nearly zero temperature (5 K), a modified atomic force microscope was used to drag a cobalt atom, and a carbon monoxide molecule, across surfaces of copper and platinum. Limitations of the Coulomb model The Coulomb approximation follows from the assumptions that: surfaces are in atomically close contact only over a small fraction of their overall area; that this contact area is proportional to the normal force (until saturation, which takes place when all area is in atomic contact); and that the frictional force is proportional to the applied normal force, independently of the contact area. The Coulomb approximation is fundamentally an empirical construct. It is a rule-of-thumb describing the approximate outcome of an extremely complicated physical interaction. The strength of the approximation is its simplicity and versatility. Though the relationship between normal force and frictional force is not exactly linear (and so the frictional force is not entirely independent of the contact area of the surfaces), the Coulomb approximation is an adequate representation of friction for the analysis of many physical systems. When the surfaces are conjoined, Coulomb friction becomes a very poor approximation (for example, adhesive tape resists sliding even when there is no normal force, or a negative normal force). In this case, the frictional force may depend strongly on the area of contact. Some drag racing tires are adhesive for this reason. However, despite the complexity of the fundamental physics behind friction, the relationships are accurate enough to be useful in many applications. "Negative" coefficient of friction As of 2012, a single study has demonstrated the potential for an effectively negative coefficient of friction in the low-load regime, meaning that a decrease in normal force leads to an increase in friction. This contradicts everyday experience in which an increase in normal force leads to an increase in friction. This was reported in the journal Nature in October 2012 and involved the friction encountered by an atomic force microscope stylus when dragged across a graphene sheet in the presence of graphene-adsorbed oxygen. Numerical simulation of the Coulomb model Despite being a simplified model of friction, the Coulomb model is useful in many numerical simulation applications such as multibody systems and granular material. Even its most simple expression encapsulates the fundamental effects of sticking and sliding which are required in many applied cases, although specific algorithms have to be designed in order to efficiently numerically integrate mechanical systems with Coulomb friction and bilateral or unilateral contact. Some quite nonlinear effects, such as the so-called Painlevé paradoxes, may be encountered with Coulomb friction. Dry friction and instabilities Dry friction can induce several types of instabilities in mechanical systems which display a stable behaviour in the absence of friction.
These instabilities may be caused by the decrease of the friction force with an increasing velocity of sliding, by material expansion due to heat generation during friction (the thermo-elastic instabilities), or by pure dynamic effects of sliding of two elastic materials (the Adams–Martins instabilities). The latter were originally discovered in 1995 by George G. Adams and João Arménio Correia Martins for smooth surfaces and were later found in periodic rough surfaces. In particular, friction-related dynamical instabilities are thought to be responsible for brake squeal and the 'song' of a glass harp, phenomena which involve stick and slip, modelled as a drop of friction coefficient with velocity. A practically important case is the self-oscillation of the strings of bowed instruments such as the violin, cello, hurdy-gurdy, erhu, etc. A connection between dry friction and flutter instability in a simple mechanical system has also been discovered. Frictional instabilities can lead to the formation of new self-organized patterns (or "secondary structures") at the sliding interface, such as in-situ formed tribofilms which are utilized for the reduction of friction and wear in so-called self-lubricating materials. Fluid friction Fluid friction occurs between fluid layers that are moving relative to each other. This internal resistance to flow is named viscosity. In everyday terms, the viscosity of a fluid is described as its "thickness". Thus, water is "thin", having a lower viscosity, while honey is "thick", having a higher viscosity. The less viscous the fluid, the greater its ease of deformation or movement. All real fluids (except superfluids) offer some resistance to shearing and therefore are viscous. For teaching and explanatory purposes it is helpful to use the concept of an inviscid fluid or an ideal fluid which offers no resistance to shearing and so is not viscous. Lubricated friction Lubricated friction is a case of fluid friction where a fluid separates two solid surfaces. Lubrication is a technique employed to reduce wear of one or both surfaces in close proximity moving relative to each other by interposing a substance called a lubricant between the surfaces. In most cases the applied load is carried by pressure generated within the fluid due to the frictional viscous resistance to motion of the lubricating fluid between the surfaces. Adequate lubrication allows smooth continuous operation of equipment, with only mild wear, and without excessive stresses or seizures at bearings. When lubrication breaks down, metal or other components can rub destructively over each other, causing heat and possibly damage or failure. Skin friction Skin friction arises from the interaction between the fluid and the skin of the body, and is directly related to the area of the surface of the body that is in contact with the fluid. Skin friction follows the drag equation and rises with the square of the velocity. Skin friction is caused by viscous drag in the boundary layer around the object. There are two ways to decrease skin friction: the first is to shape the moving body so that smooth flow is possible, like an airfoil. The second method is to decrease the length and cross-section of the moving object as much as is practicable. Internal friction Internal friction is the force resisting motion between the elements making up a solid material while it undergoes deformation.
Plastic deformation in solids is an irreversible change in the internal molecular structure of an object. This change may be due to either (or both) an applied force or a change in temperature. The change of an object's shape is called strain. The force causing it is called stress. Elastic deformation in solids is reversible change in the internal molecular structure of an object. Stress does not necessarily cause permanent change. As deformation occurs, internal forces oppose the applied force. If the applied stress is not too large these opposing forces may completely resist the applied force, allowing the object to assume a new equilibrium state and to return to its original shape when the force is removed. This is known as elastic deformation or elasticity. Radiation friction As a consequence of light pressure, Einstein in 1909 predicted the existence of "radiation friction" which would oppose the movement of matter. He wrote, "radiation will exert pressure on both sides of the plate. The forces of pressure exerted on the two sides are equal if the plate is at rest. However, if it is in motion, more radiation will be reflected on the surface that is ahead during the motion (front surface) than on the back surface. The backward-acting force of pressure exerted on the front surface is thus larger than the force of pressure acting on the back. Hence, as the resultant of the two forces, there remains a force that counteracts the motion of the plate and that increases with the velocity of the plate. We will call this resultant 'radiation friction' in brief." Other types of friction Rolling resistance Rolling resistance is the force that resists the rolling of a wheel or other circular object along a surface caused by deformations in the object or surface. Generally the force of rolling resistance is less than that associated with kinetic friction. Typical values for the coefficient of rolling resistance are 0.001. One of the most common examples of rolling resistance is the movement of motor vehicle tires on a road, a process which generates heat and sound as by-products. Braking friction Any wheel equipped with a brake is capable of generating a large retarding force, usually for the purpose of slowing and stopping a vehicle or piece of rotating machinery. Braking friction differs from rolling friction because the coefficient of friction for rolling friction is small whereas the coefficient of friction for braking friction is designed to be large by choice of materials for brake pads. Triboelectric effect Rubbing two materials against each other can lead to charge transfer, either electrons or ions. The energy required for this contributes to the friction. In addition, sliding can cause a build-up of electrostatic charge, which can be hazardous if flammable gases or vapours are present. When the static build-up discharges, explosions can be caused by ignition of the flammable mixture. Belt friction Belt friction is a physical property observed from the forces acting on a belt wrapped around a pulley, when one end is being pulled. The resulting tension, which acts on both ends of the belt, can be modeled by the belt friction equation. In practice, the theoretical tension acting on the belt or rope calculated by the belt friction equation can be compared to the maximum tension the belt can support. This helps a designer of such a rig to know how many times the belt or rope must be wrapped around the pulley to prevent it from slipping. 
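The belt friction equation mentioned above is usually written in the capstan form T_load = T_hold · e^(μφ), with φ the total wrap angle in radians. A brief sketch follows; the friction coefficient and holding force are illustrative assumptions, not values from the article.

```python
import math

# Sketch of the capstan (belt friction) relation: T_load = T_hold * exp(mu * phi),
# where phi is the total wrap angle in radians.
# Illustrative values (NOT from the article): mu = 0.3 between rope and drum,
# a holding force of 50 N on the free end.
mu = 0.3
T_hold = 50.0  # force applied on the holding end (N)

for turns in (0.5, 1, 2, 3):
    phi = turns * 2.0 * math.pi            # wrap angle in radians
    T_load = T_hold * math.exp(mu * phi)   # maximum load the held end can restrain
    print(f"{turns:>3} turn(s): can hold a load of up to ~{T_load:,.0f} N")
```

The exponential growth with wrap angle is why a few extra turns of rope around a pulley or bollard let a small holding force restrain a very large load.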
Mountain climbers and sailing crews demonstrate a standard knowledge of belt friction when accomplishing basic tasks. Reduction Devices Devices such as wheels, ball bearings, roller bearings, and air cushion or other types of fluid bearings can change sliding friction into a much smaller type of rolling friction. Many thermoplastic materials such as nylon, HDPE and PTFE are commonly used in low friction bearings. They are especially useful because the coefficient of friction falls with increasing imposed load. For improved wear resistance, very high molecular weight grades are usually specified for heavy duty or critical bearings. Lubricants A common way to reduce friction is by using a lubricant, such as oil, water, or grease, which is placed between the two surfaces, often dramatically lessening the coefficient of friction. The science of friction and lubrication is called tribology. Lubricant technology is the application of science to the formulation and use of lubricants, especially for industrial or commercial objectives. Superlubricity, a recently discovered effect, has been observed in graphite: it is the substantial decrease of friction between two sliding objects, approaching zero levels. A very small amount of frictional energy would still be dissipated. Lubricants to overcome friction need not always be thin, turbulent fluids or powdery solids such as graphite and talc; acoustic lubrication actually uses sound as a lubricant. Another way to reduce friction between two parts is to superimpose micro-scale vibration to one of the parts. This can be sinusoidal vibration as used in ultrasound-assisted cutting or vibration noise, known as dither. Energy of friction According to the law of conservation of energy, no energy is destroyed due to friction, though it may be lost to the system of concern. Mechanical energy is transformed into heat. A sliding hockey puck comes to rest because friction converts its kinetic energy into heat which raises the internal energy of the puck and the ice surface. Since heat quickly dissipates, many early philosophers, including Aristotle, wrongly concluded that moving objects come to rest spontaneously. When an object is pushed along a surface along a path C, the energy converted to heat is given by a line integral, in accordance with the definition of work: Eth = ∫C Ffric ⋅ dx = ∫C μk Fn ⋅ dx, where Ffric is the friction force, Fn is the vector obtained by multiplying the magnitude of the normal force by a unit vector pointing against the object's motion, μk is the coefficient of kinetic friction, which is inside the integral because it may vary from location to location (e.g. if the material changes along the path), and x is the position of the object. Dissipation of energy by friction in a process is a classic example of thermodynamic irreversibility. Work of friction The work done by friction can translate into deformation, wear, and heat that can affect the contact surface properties (even the coefficient of friction between the surfaces). This can be beneficial as in polishing. The work of friction is used to mix and join materials such as in the process of friction welding. Excessive erosion or wear of mating sliding surfaces occurs when work due to frictional forces rises to unacceptable levels. Harder corrosion particles caught between mating surfaces in relative motion (fretting) exacerbate the wear caused by frictional forces. As surfaces are worn by work due to friction, fit and surface finish of an object may degrade until it no longer functions properly.
For example, bearing seizure or failure may result from excessive wear due to work of friction. In the reference frame of the interface between two surfaces, static friction does no work, because there is never displacement between the surfaces. In the same reference frame, kinetic friction is always in the direction opposite the motion, and does negative work. However, friction can do positive work in certain frames of reference. One can see this by placing a heavy box on a rug, then pulling on the rug quickly. In this case, the box slides backwards relative to the rug, but moves forward relative to the frame of reference in which the floor is stationary. Thus, the kinetic friction between the box and rug accelerates the box in the same direction that the box moves, doing positive work. When sliding takes place between two rough bodies in contact, the algebraic sum of the works done is different from zero, and the algebraic sum of the quantities of heat gained by the two bodies is equal to the quantity of work lost by friction, and the total quantity of heat gained is positive. In a natural thermodynamic process, the work done by an agency in the surroundings of a thermodynamic system or working body is greater than the work received by the body, because of friction. Thermodynamic work is measured by changes in a body's state variables, sometimes called work-like variables, other than temperature and entropy. Examples of work-like variables, which are ordinary macroscopic physical variables and which occur in conjugate pairs, are pressure – volume, and electric field – electric polarization. Temperature and entropy are a specifically thermodynamic conjugate pair of state variables. They can be affected microscopically at an atomic level, by mechanisms such as friction, thermal conduction, and radiation. The part of the work done by an agency in the surroundings that does not change the volume of the working body but is dissipated in friction, is called isochoric work. It is received as heat, by the working body and sometimes partly by a body in the surroundings. It is not counted as thermodynamic work received by the working body. Applications Friction is an important factor in many engineering disciplines. Transportation Automobile brakes inherently rely on friction, slowing a vehicle by converting its kinetic energy into heat. Incidentally, dispersing this large amount of heat safely is one technical challenge in designing brake systems. Disk brakes rely on friction between a disc and brake pads that are squeezed transversely against the rotating disc. In drum brakes, brake shoes or pads are pressed outwards against a rotating cylinder (brake drum) to create friction. Since braking discs can be more efficiently cooled than drums, disc brakes have better stopping performance. Rail adhesion refers to the grip wheels of a train have on the rails, see Frictional contact mechanics. Road slipperiness is an important design and safety factor for automobiles Split friction is a particularly dangerous condition arising due to varying friction on either side of a car. Road texture affects the interaction of tires and the driving surface. Measurement A tribometer is an instrument that measures friction on a surface. A profilograph is a device used to measure pavement surface roughness. Household usage Friction is used to heat and ignite matchsticks (friction between the head of a matchstick and the rubbing surface of the match box). 
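As a small worked example of the frictional heating behind uses like the matchstick above: for constant μk and normal force, the line integral in the Energy of friction section reduces to E = μk · N · d. The numbers below are illustrative assumptions, not values from the article.

```python
# Frictional heating sketch: for constant mu_k and normal force the line integral
# in the Energy of friction section reduces to E = mu_k * N * d.
# Illustrative inputs (NOT from the article): pressing with 5 N over 10 strokes
# of 4 cm each, with mu_k = 0.8 assumed for a rough pair of surfaces.
mu_k = 0.8
normal_force = 5.0          # N
stroke_length = 0.04        # m
strokes = 10

distance = strokes * stroke_length
heat_joules = mu_k * normal_force * distance
print(f"Sliding distance: {distance:.2f} m")
print(f"Mechanical energy converted to heat: {heat_joules:.2f} J")
```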
Sticky pads are used to prevent objects from slipping off smooth surfaces by effectively increasing the friction coefficient between the surface and the object. See also Contact dynamics Contact mechanics Factor of adhesion Friction Acoustics Frictionless plane Galling Lateral adhesion Non-smooth mechanics Normal contact stiffness Stick-slip phenomenon Transient friction loading Triboelectric effect Unilateral contact Friction torque References External links Coefficients of Friction – tables of coefficients, plus many links Measurement of friction power Physclips: Mechanics with animations and video clips from the University of New South Wales Values for Coefficient of Friction – CRC Handbook of Chemistry and Physics Coefficients of friction of various material pairs in atmosphere and vacuum. Classical mechanics Force Tribology
Thermophoresis
Thermophoresis (also thermomigration, thermodiffusion, the Soret effect, or the Ludwig–Soret effect) is a phenomenon observed in mixtures of mobile particles where the different particle types exhibit different responses to the force of a temperature gradient. This phenomenon tends to move light molecules to hot regions and heavy molecules to cold regions. The term thermophoresis most often applies to aerosol mixtures whose mean free path is comparable to their characteristic length scale, but may also commonly refer to the phenomenon in all phases of matter. The term Soret effect normally applies to liquid mixtures, which behave according to different, less well-understood mechanisms than gaseous mixtures. Thermophoresis may not apply to thermomigration in solids, especially multi-phase alloys. Thermophoretic force The phenomenon is observed at the scale of one millimeter or less. An example that may be observed by the naked eye with good lighting is when the hot rod of an electric heater is surrounded by tobacco smoke: the smoke goes away from the immediate vicinity of the hot rod. As the small particles of air nearest the hot rod are heated, they create a fast flow away from the rod, down the temperature gradient. While the kinetic energy of the particles is similar at the same temperature, lighter particles acquire higher velocity compared to the heavy ones. When they collide with the large, slower-moving particles of the tobacco smoke they push the latter away from the rod. The force that has pushed the smoke particles away from the rod is an example of a thermophoretic force, as the mean free path of air at ambient conditions is 68 nm and the characteristic length scales are between 100–1000 nm. Thermodiffusion is labeled "positive" when particles move from a hot to cold region and "negative" when the reverse is true. Typically the heavier/larger species in a mixture exhibit positive thermophoretic behavior while the lighter/smaller species exhibit negative behavior. In addition to the sizes of the various types of particles and the steepness of the temperature gradient, the heat conductivity and heat absorption of the particles play a role. Recently, Braun and coworkers have suggested that the charge and entropy of the hydration shell of molecules play a major role for the thermophoresis of biomolecules in aqueous solutions. The quantitative description is given by the flux relation J = −D ∇c − c DT ∇T, where c is the particle concentration, D the diffusion coefficient, and DT the thermodiffusion coefficient. The quotient of both coefficients, ST = DT/D, is called the Soret coefficient. The thermophoresis factor has been calculated from molecular interaction potentials derived from known molecular models. Applications The thermophoretic force has a number of practical applications. The basis for applications is that, because different particle types move differently under the force of the temperature gradient, the particle types can be separated by that force after they have been mixed together, or prevented from mixing if they are already separated. Impurity ions may move from the cold side of a semiconductor wafer towards the hot side, since the higher temperature makes the transition structure required for atomic jumps more achievable. The diffusive flux may occur in either direction (either up or down the temperature gradient), dependent on the materials involved. Thermophoretic force has been used in commercial precipitators for applications similar to electrostatic precipitators.
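A small numerical sketch of the quantitative description above: setting the net flux to zero gives the steady-state concentration profile c(x) = c_ref · exp(−ST · (T(x) − T_ref)). The Soret coefficient, temperatures and gap geometry below are illustrative assumptions, not values from the article.

```python
import math

# Steady-state Soret sketch: setting the flux J = -D*dc/dx - c*D_T*dT/dx to zero
# gives dc/c = -S_T * dT, i.e. c(x) = c_ref * exp(-S_T * (T(x) - T_ref)).
# Illustrative values (NOT from the article): S_T = 0.05 per kelvin and a linear
# temperature profile from 300 K to 320 K across a small gap.
S_T = 0.05          # Soret coefficient D_T / D, in 1/K
T_cold, T_hot = 300.0, 320.0
c_ref = 1.0         # concentration at the cold wall (arbitrary units)

for frac in (0.0, 0.25, 0.5, 0.75, 1.0):
    T = T_cold + frac * (T_hot - T_cold)
    c = c_ref * math.exp(-S_T * (T - T_cold))
    print(f"x = {frac:4.2f} of gap, T = {T:5.1f} K, c = {c:.3f}")
```

With a positive Soret coefficient the particles accumulate on the cold side, matching the "positive" thermodiffusion convention described above.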
It is exploited in the manufacturing of optical fiber in vacuum deposition processes. It can be important as a transport mechanism in fouling. Thermophoresis has also been shown to have potential in facilitating drug discovery by allowing the detection of aptamer binding by comparison of the bound versus unbound motion of the target molecule. This approach has been termed microscale thermophoresis. Furthermore, thermophoresis has been demonstrated as a versatile technique for manipulating single biological macromolecules, such as genomic-length DNA, and HIV virus in micro- and nanochannels by means of light-induced local heating. Thermophoresis is one of the methods used to separate different polymer particles in field flow fractionation. History Thermophoresis in gas mixtures was first observed and reported by John Tyndall in 1870 and further understood by John Strutt (Baron Rayleigh) in 1882. Thermophoresis in liquid mixtures was first observed and reported by Carl Ludwig in 1856 and further understood by Charles Soret in 1879. James Clerk Maxwell wrote in 1873 concerning mixtures of different types of molecules (and this could include small particulates larger than molecules): "This process of diffusion... goes on in gases and liquids and even in some solids.... The dynamical theory also tells us what will happen if molecules of different masses are allowed to knock about together. The greater masses will go slower than the smaller ones, so that, on an average, every molecule, great or small, will have the same energy of motion. The proof of this dynamical theorem, in which I claim the priority, has recently been greatly developed and improved by Dr. Ludwig Boltzmann." It has been analyzed theoretically by Sydney Chapman. Thermophoresis at solids interfaces was numerically discovered by Schoen et al. in 2006 and was experimentally confirmed by Barreiro et al. Negative thermophoresis in fluids was first noticed in 1967 by Dwyer in a theoretical solution, and the name was coined by Sone. Negative thermophoresis at solids interfaces was first observed by Leng et al. in 2016. See also Deposition (aerosol physics) Dufour effect Maxwell–Stefan diffusion Microscale thermophoresis References External links A short introduction to thermophoresis, including helpful animated graphics, is at aerosols.wustl.edu Ternary mixtures HCl Alkali bromides Non-equilibrium thermodynamics Aerosols
Planck constant
The Planck constant, or Planck's constant, denoted by h, is a fundamental physical constant of foundational importance in quantum mechanics: a photon's energy is equal to its frequency multiplied by the Planck constant, and the wavelength of a matter wave equals the Planck constant divided by the associated particle momentum. The closely related reduced Planck constant, equal to h/(2π) and denoted ħ, is commonly used in quantum physics equations. The constant was postulated by Max Planck in 1900 as a proportionality constant needed to explain experimental black-body radiation. Planck later referred to the constant as the "quantum of action". In 1905, Albert Einstein associated the "quantum" or minimal element of the energy to the electromagnetic wave itself. Max Planck received the 1918 Nobel Prize in Physics "in recognition of the services he rendered to the advancement of Physics by his discovery of energy quanta". In metrology, the Planck constant is used, together with other constants, to define the kilogram, the SI unit of mass. The SI units are defined in such a way that, when the Planck constant is expressed in SI units, it has the exact value h = 6.62607015×10^−34 J⋅s. History Origin of the constant Planck's constant was formulated as part of Max Planck's successful effort to produce a mathematical expression that accurately predicted the observed spectral distribution of thermal radiation from a closed furnace (black-body radiation). This mathematical expression is now known as Planck's law. In the last years of the 19th century, Max Planck was investigating the problem of black-body radiation first posed by Kirchhoff some 40 years earlier. Every physical body spontaneously and continuously emits electromagnetic radiation. There was no expression or explanation for the overall shape of the observed emission spectrum. At the time, Wien's law fit the data for short wavelengths and high temperatures, but failed for long wavelengths. Also around this time, but unknown to Planck, Lord Rayleigh had derived theoretically a formula, now known as the Rayleigh–Jeans law, that could reasonably predict long wavelengths but failed dramatically at short wavelengths. Approaching this problem, Planck hypothesized that the equations of motion for light describe a set of harmonic oscillators, one for each possible frequency. He examined how the entropy of the oscillators varied with the temperature of the body, trying to match Wien's law, and was able to derive an approximate mathematical function for the black-body spectrum, which gave a simple empirical formula for long wavelengths. Planck tried to find a mathematical expression that could reproduce Wien's law (for short wavelengths) and the empirical formula (for long wavelengths). This expression included a constant, h, which is thought to stand for Hilfsgrösse (auxiliary variable), and which subsequently became known as the Planck constant. The expression formulated by Planck showed that the spectral radiance per unit frequency of a body for frequency ν at absolute temperature T is given by Bν(ν, T) = (2hν^3/c^2) · 1/(e^(hν/kBT) − 1), where kB is the Boltzmann constant, h is the Planck constant, and c is the speed of light in the medium, whether material or vacuum. The spectral radiance of a body, Bν, describes the amount of energy it emits at different radiation frequencies. It is the power emitted per unit area of the body, per unit solid angle of emission, per unit frequency. The spectral radiance can also be expressed per unit wavelength instead of per unit frequency.
Substituting in the relation above we get , showing how radiated energy emitted at shorter wavelengths increases more rapidly with temperature than energy emitted at longer wavelengths. Planck's law may also be expressed in other terms, such as the number of photons emitted at a certain wavelength, or the energy density in a volume of radiation. The SI unit of is , while that of is . Planck soon realized that his solution was not unique. There were several different solutions, each of which gave a different value for the entropy of the oscillators. To save his theory, Planck resorted to using the then-controversial theory of statistical mechanics, which he described as "an act of desperation". One of his new boundary conditions was With this new condition, Planck had imposed the quantization of the energy of the oscillators, "a purely formal assumption ... actually I did not think much about it ..." in his own words, but one that would revolutionize physics. Applying this new approach to Wien's displacement law showed that the "energy element" must be proportional to the frequency of the oscillator, the first version of what is now sometimes termed the "Planck–Einstein relation": Planck was able to calculate the value of from experimental data on black-body radiation: his result, , is within 1.2% of the currently defined value. He also made the first determination of the Boltzmann constant from the same data and theory. Development and application The black-body problem was revisited in 1905, when Lord Rayleigh and James Jeans (together) and Albert Einstein independently proved that classical electromagnetism could never account for the observed spectrum. These proofs are commonly known as the "ultraviolet catastrophe", a name coined by Paul Ehrenfest in 1911. They contributed greatly (along with Einstein's work on the photoelectric effect) in convincing physicists that Planck's postulate of quantized energy levels was more than a mere mathematical formalism. The first Solvay Conference in 1911 was devoted to "the theory of radiation and quanta". Photoelectric effect The photoelectric effect is the emission of electrons (called "photoelectrons") from a surface when light is shone on it. It was first observed by Alexandre Edmond Becquerel in 1839, although credit is usually reserved for Heinrich Hertz, who published the first thorough investigation in 1887. Another particularly thorough investigation was published by Philipp Lenard (Lénárd Fülöp) in 1902. Einstein's 1905 paper discussing the effect in terms of light quanta would earn him the Nobel Prize in 1921, after his predictions had been confirmed by the experimental work of Robert Andrews Millikan. The Nobel committee awarded the prize for his work on the photo-electric effect, rather than relativity, both because of a bias against purely theoretical physics not grounded in discovery or experiment, and dissent amongst its members as to the actual proof that relativity was real. Before Einstein's paper, electromagnetic radiation such as visible light was considered to behave as a wave: hence the use of the terms "frequency" and "wavelength" to characterize different types of radiation. The energy transferred by a wave in a given time is called its intensity. 
The light from a theatre spotlight is more intense than the light from a domestic lightbulb; that is to say that the spotlight gives out more energy per unit time and per unit space (and hence consumes more electricity) than the ordinary bulb, even though the color of the light might be very similar. Other waves, such as sound or the waves crashing against a seafront, also have their intensity. However, the energy account of the photoelectric effect did not seem to agree with the wave description of light. The "photoelectrons" emitted as a result of the photoelectric effect have a certain kinetic energy, which can be measured. This kinetic energy (for each photoelectron) is independent of the intensity of the light, but depends linearly on the frequency; and if the frequency is too low (corresponding to a photon energy that is less than the work function of the material), no photoelectrons are emitted at all, unless a plurality of photons, whose energetic sum is greater than the energy of the photoelectrons, acts virtually simultaneously (multiphoton effect). Assuming the frequency is high enough to cause the photoelectric effect, a rise in intensity of the light source causes more photoelectrons to be emitted with the same kinetic energy, rather than the same number of photoelectrons to be emitted with higher kinetic energy. Einstein's explanation for these observations was that light itself is quantized; that the energy of light is not transferred continuously as in a classical wave, but only in small "packets" or quanta. The size of these "packets" of energy, which would later be named photons, was to be the same as Planck's "energy element", giving the modern version of the Planck–Einstein relation: Einstein's postulate was later proven experimentally: the constant of proportionality between the frequency of incident light and the kinetic energy of photoelectrons was shown to be equal to the Planck constant . Atomic structure In 1912 John William Nicholson developed an atomic model and found the angular momentum of the electrons in the model were related by h/2. Nicholson's nuclear quantum atomic model influenced the development of Niels Bohr 's atomic model and Bohr quoted him in his 1913 paper of the Bohr model of the atom. Bohr's model went beyond Planck's abstract harmonic oscillator concept: an electron in a Bohr atom could only have certain defined energies where is the speed of light in vacuum, is an experimentally determined constant (the Rydberg constant) and . This approach also allowed Bohr to account for the Rydberg formula, an empirical description of the atomic spectrum of hydrogen, and to account for the value of the Rydberg constant in terms of other fundamental constants. In discussing angular momentum of the electrons in his model Bohr introduced the quantity , now known as the reduced Planck constant as the quantum of angular momentum. Uncertainty principle The Planck constant also occurs in statements of Werner Heisenberg's uncertainty principle. Given numerous particles prepared in the same state, the uncertainty in their position, , and the uncertainty in their momentum, , obey where the uncertainty is given as the standard deviation of the measured value from its expected value. There are several other such pairs of physically measurable conjugate variables which obey a similar rule. One example is time vs. energy. 
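In standard notation, with σ denoting the standard deviations referred to above and ħ = h/2π the reduced Planck constant, the position–momentum relation and its energy–time analogue (which requires a more careful interpretation of Δt) are written:

$$\sigma_x\,\sigma_p \;\ge\; \frac{\hbar}{2}, \qquad \Delta E\,\Delta t \;\gtrsim\; \frac{\hbar}{2}.$$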
The inverse relationship between the uncertainty of the two conjugate variables forces a tradeoff in quantum experiments, as measuring one quantity more precisely results in the other quantity becoming imprecise. In addition to some assumptions underlying the interpretation of certain values in the quantum mechanical formulation, one of the fundamental cornerstones to the entire theory lies in the commutator relationship between the position operator and the momentum operator : where is the Kronecker delta. Photon energy The Planck relation connects the particular photon energy with its associated wave frequency : This energy is extremely small in terms of ordinarily perceived everyday objects. Since the frequency , wavelength , and speed of light are related by , the relation can also be expressed as de Broglie wavelength In 1923, Louis de Broglie generalized the Planck–Einstein relation by postulating that the Planck constant represents the proportionality between the momentum and the quantum wavelength of not just the photon, but the quantum wavelength of any particle. This was confirmed by experiments soon afterward. This holds throughout the quantum theory, including electrodynamics. The de Broglie wavelength of the particle is given by where denotes the linear momentum of a particle, such as a photon, or any other elementary particle. The energy of a photon with angular frequency is given by while its linear momentum relates to where is an angular wavenumber. These two relations are the temporal and spatial parts of the special relativistic expression using 4-vectors. Statistical mechanics Classical statistical mechanics requires the existence of (but does not define its value). Eventually, following upon Planck's discovery, it was speculated that physical action could not take on an arbitrary value, but instead was restricted to integer multiples of a very small quantity, the "[elementary] quantum of action", now called the Planck constant. This was a significant conceptual part of the so-called "old quantum theory" developed by physicists including Bohr, Sommerfeld, and Ishiwara, in which particle trajectories exist but are hidden, but quantum laws constrain them based on their action. This view has been replaced by fully modern quantum theory, in which definite trajectories of motion do not even exist; rather, the particle is represented by a wavefunction spread out in space and in time. Related to this is the concept of energy quantization which existed in old quantum theory and also exists in altered form in modern quantum physics. Classical physics cannot explain quantization of energy. Dimension and value The Planck constant has the same dimensions as action and as angular momentum. In SI units, the Planck constant is expressed with the unit joule per hertz (J⋅Hz) or joule-second (J⋅s). = = = . The above values have been adopted as fixed in the 2019 revision of the SI. Since 2019, the numerical value of the Planck constant has been fixed, with a finite decimal representation. This fixed value is used to define the SI unit of mass, the kilogram: "the kilogram [...] is defined by taking the fixed numerical value of to be when expressed in the unit J⋅s, which is equal to kg⋅m2⋅s−1, where the metre and the second are defined in terms of speed of light and duration of hyperfine transition of the ground state of an unperturbed caesium-133 atom ." 
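For reference, the fixed numerical value adopted in the 2019 revision of the SI (the one used in the kilogram definition quoted above) and the corresponding reduced constant are:

$$h = 6.626\,070\,15 \times 10^{-34}\ \mathrm{J\,s}\ \text{(exact)}, \qquad \hbar = \frac{h}{2\pi} \approx 1.054\,571\,817 \times 10^{-34}\ \mathrm{J\,s}.$$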
Technologies of mass metrology such as the Kibble balance measure refine the value of kilogram applying fixed value of the Planck constant. Significance of the value The Planck constant is one of the smallest constants used in physics. This reflects the fact that on a scale adapted to humans, where energies are typical of the order of kilojoules and times are typical of the order of seconds or minutes, the Planck constant is very small. When the product of energy and time for a physical event approaches the Planck constant, quantum effects dominate. Equivalently, the order of the Planck constant reflects the fact that everyday objects and systems are made of a large number of microscopic particles. For example, in green light (with a wavelength of 555 nanometres or a frequency of ) each photon has an energy . That is a very small amount of energy in terms of everyday experience, but everyday experience is not concerned with individual photons any more than with individual atoms or molecules. An amount of light more typical in everyday experience (though much larger than the smallest amount perceivable by the human eye) is the energy of one mole of photons; its energy can be computed by multiplying the photon energy by the Avogadro constant, , with the result of , about the food energy in three apples. Reduced Planck constant Many equations in quantum physics are customarily written using the reduced Planck constant, equal to and denoted (pronounced h-bar). The fundamental equations look simpler when written using as opposed to and it is usually rather than that gives the most reliable results when used in order-of-magnitude estimates. For example, using dimensional analysis to estimate the ionization energy of a hydrogen atom, the relevant parameters that determine the ionization energy are the mass of the electron the electron charge and either the Planck constant or the reduced Planck constant : Since both constants have the same dimensions, they will enter the dimensional analysis in the same way, but with the estimate is within a factor of two, while with the error is closer to Names and symbols The reduced Planck constant is known by many other names: reduced Planck's constant ), the rationalized Planck constant (or rationalized Planck's constant , the Dirac constant (or Dirac's constant ), the Dirac (or Dirac's ), the Dirac (or Dirac's ), and h-bar. It is also common to refer to this as "Planck's constant" while retaining the relationship . By far the most common symbol for the reduced Planck constant is . However, there are some sources that denote it by instead, in which case they usually refer to it as the "Dirac " (or "Dirac's "). History The combination appeared in Niels Bohr's 1913 paper, where it was denoted by For the next 15 years, the combination continued to appear in the literature, but normally without a separate symbol. Then, in 1926, in their seminal papers, Schrödinger and Dirac again introduced special symbols for it: in the case of Schrödinger, and in the case of Dirac. Dirac continued to use in this way until 1930, when he introduced the symbol in his book The Principles of Quantum Mechanics. 
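As a quick numerical check of the green-light example above, the sketch below (Python, using the exact SI values of the constants and the 555 nm wavelength quoted in the text) evaluates the photon energy and the energy of one mole of such photons:

```python
# Photon energy of 555 nm green light, and the energy of a mole of such photons.
h = 6.62607015e-34      # Planck constant, J*s (exact by definition since 2019)
c = 2.99792458e8        # speed of light in vacuum, m/s (exact)
N_A = 6.02214076e23     # Avogadro constant, 1/mol (exact)

wavelength = 555e-9                  # m, the green-light example in the text
frequency = c / wavelength           # ~5.40e14 Hz
photon_energy = h * frequency        # ~3.58e-19 J per photon
mole_energy = photon_energy * N_A    # ~2.16e5 J per mole of photons

print(f"frequency      : {frequency:.3e} Hz")
print(f"photon energy  : {photon_energy:.3e} J")
print(f"mole of photons: {mole_energy:.3e} J")
```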
See also Committee on Data of the International Science Council International System of Units Introduction to quantum mechanics List of scientists whose names are used in physical constants Planck units Wave–particle duality Notes References Citations Sources External links "The role of the Planck constant in physics" – presentation at the 26th CGPM meeting at Versailles, France, November 2018, when the vote took place. "The Planck constant and its units" – presentation at the 35th Symposium on Chemical Physics at the University of Waterloo, Waterloo, Ontario, Canada, November 3, 2019. Fundamental constants 1900 in science Max Planck
0.766285
0.999718
0.766069
Entropy as an arrow of time
Entropy is one of the few quantities in the physical sciences that require a particular direction for time, sometimes called an arrow of time. As one goes "forward" in time, the second law of thermodynamics says, the entropy of an isolated system can increase, but not decrease. Thus, entropy measurement is a way of distinguishing the past from the future. In thermodynamic systems that are not isolated, local entropy can decrease over time, accompanied by a compensating entropy increase in the surroundings; examples include objects undergoing cooling, living systems, and the formation of typical crystals. Much like temperature, despite being an abstract concept, everyone has an intuitive sense of the effects of entropy. For example, it is often very easy to tell the difference between a video being played forwards or backwards. A video may depict a wood fire that melts a nearby ice block; played in reverse, it would show a puddle of water turning a cloud of smoke into unburnt wood and freezing itself in the process. Surprisingly, in either case, the vast majority of the laws of physics are not broken by these processes, with the second law of thermodynamics being one of the only exceptions. When a law of physics applies equally when time is reversed, it is said to show T-symmetry; in this case, entropy is what allows one to decide if the video described above is playing forwards or in reverse as intuitively we identify that only when played forwards the entropy of the scene is increasing. Because of the second law of thermodynamics, entropy prevents macroscopic processes showing T-symmetry. When studying at a microscopic scale, the above judgements cannot be made. Watching a single smoke particle buffeted by air, it would not be clear if a video was playing forwards or in reverse, and, in fact, it would not be possible as the laws which apply show T-symmetry. As it drifts left or right, qualitatively it looks no different; it is only when the gas is studied at a macroscopic scale that the effects of entropy become noticeable (see Loschmidt's paradox). On average it would be expected that the smoke particles around a struck match would drift away from each other, diffusing throughout the available space. It would be an astronomically improbable event for all the particles to cluster together, yet the movement of any one smoke particle cannot be predicted. By contrast, certain subatomic interactions involving the weak nuclear force violate the conservation of parity, but only very rarely. According to the CPT theorem, this means they should also be time irreversible, and so establish an arrow of time. This, however, is neither linked to the thermodynamic arrow of time, nor has anything to do with the daily experience of time irreversibility. Overview The second law of thermodynamics allows for the entropy to remain the same regardless of the direction of time. If the entropy is constant in either direction of time, there would be no preferred direction. However, the entropy can only be a constant if the system is in the highest possible state of disorder, such as a gas that always was, and always will be, uniformly spread out in its container. The existence of a thermodynamic arrow of time implies that the system is highly ordered in one time direction only, which would by definition be the "past". Thus this law is about the boundary conditions rather than the equations of motion. 
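For reference, the statistical definition that underlies these statements (standard, although not written out in this article) is Boltzmann's entropy formula together with the second-law inequality for an isolated system:

$$S = k_{\mathrm B}\ln\Omega, \qquad \Delta S \ge 0 \ \text{(isolated system)},$$

where Ω is the number of microstates compatible with the macrostate and k_B is the Boltzmann constant.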
The second law of thermodynamics is statistical in nature, and therefore its reliability arises from the huge number of particles present in macroscopic systems. It is not impossible, in principle, for all 6 × 1023 atoms in a mole of a gas to spontaneously migrate to one half of a container; it is only fantastically unlikely—so unlikely that no macroscopic violation of the Second Law has ever been observed. The thermodynamic arrow is often linked to the cosmological arrow of time, because it is ultimately about the boundary conditions of the early universe. According to the Big Bang theory, the Universe was initially very hot with energy distributed uniformly. For a system in which gravity is important, such as the universe, this is a low-entropy state (compared to a high-entropy state of having all matter collapsed into black holes, a state to which the system may eventually evolve). As the Universe grows, its temperature drops, which leaves less energy [per unit volume of space] available to perform work in the future than was available in the past. Additionally, perturbations in the energy density grow (eventually forming galaxies and stars). Thus the Universe itself has a well-defined thermodynamic arrow of time. But this does not address the question of why the initial state of the universe was that of low entropy. If cosmic expansion were to halt and reverse due to gravity, the temperature of the Universe would once again grow hotter, but its entropy would also continue to increase due to the continued growth of perturbations and the eventual black hole formation, until the latter stages of the Big Crunch when entropy would be lower than now. An example of apparent irreversibility Consider the situation in which a large container is filled with two separated liquids, for example a dye on one side and water on the other. With no barrier between the two liquids, the random jostling of their molecules will result in them becoming more mixed as time passes. However, if the dye and water are mixed then one does not expect them to separate out again when left to themselves. A movie of the mixing would seem realistic when played forwards, but unrealistic when played backwards. If the large container is observed early on in the mixing process, it might be found only partially mixed. It would be reasonable to conclude that, without outside intervention, the liquid reached this state because it was more ordered in the past, when there was greater separation, and will be more disordered, or mixed, in the future. Now imagine that the experiment is repeated, this time with only a few molecules, perhaps ten, in a very small container. One can easily imagine that by watching the random jostling of the molecules it might occur—by chance alone—that the molecules became neatly segregated, with all dye molecules on one side and all water molecules on the other. That this can be expected to occur from time to time can be concluded from the fluctuation theorem; thus it is not impossible for the molecules to segregate themselves. However, for a large number of molecules it is so unlikely that one would have to wait, on average, many times longer than the current age of the universe for it to occur. Thus a movie that showed a large number of molecules segregating themselves as described above would appear unrealistic and one would be inclined to say that the movie was being played in reverse. See Boltzmann's second law as a law of disorder. 
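The scale argument above can be made concrete with a short calculation. The sketch below (Python) assumes the usual idealization that each molecule is independently equally likely to be found in either half of the container, and evaluates the probability that all of them happen to sit in one chosen half:

```python
# Probability that all n molecules are found in one chosen half of the container,
# assuming each molecule independently occupies either half with probability 1/2.

def prob_all_in_one_half(n_molecules: float) -> float:
    return 0.5 ** n_molecules

for n in (10, 100, 6.022e23):
    # For macroscopic n the result underflows to 0.0 in floating point, which is
    # the point: the fluctuation is unobservably rare for a mole of gas.
    print(f"n = {n:g}: probability = {prob_all_in_one_half(n):.3e}")
```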
Mathematics of the arrow The mathematics behind the arrow of time, entropy, and the basis of the second law of thermodynamics derives from the following set-up, as detailed by Carnot (1824), Clapeyron (1832), and Clausius (1854). Here, as common experience demonstrates, when a hot body T1, such as a furnace, is put into physical contact, such as being connected via a body of fluid (working body), with a cold body T2, such as a stream of cold water, energy will invariably flow from hot to cold in the form of heat Q, and given time the system will reach equilibrium. Entropy, defined as Q/T, was conceived by Rudolf Clausius as a function to measure the molecular irreversibility of this process, i.e. the dissipative work the atoms and molecules do on each other during the transformation. In this set-up, one can calculate the entropy change ΔS for the passage of the quantity of heat Q from the temperature T1, through the "working body" of fluid (see heat engine), which was typically a body of steam, to the temperature T2. Moreover, one could assume, for the sake of argument, that the working body contains only two molecules of water. Next, if we make the assignment, as originally done by Clausius, $S = \frac{Q}{T}$, then the entropy change or "equivalence-value" for this transformation is $\Delta S = S_{\text{final}} - S_{\text{initial}}$, which equals $\Delta S = \frac{Q}{T_2} - \frac{Q}{T_1}$, and by factoring out Q, we have the following form, as was derived by Clausius: $\Delta S = Q\left(\frac{1}{T_2} - \frac{1}{T_1}\right)$. Thus, for example, if Q was 50 units, T1 was initially 100 degrees, and T2 was 1 degree, then the entropy change for this process would be $\Delta S = 50\left(\tfrac{1}{1} - \tfrac{1}{100}\right) = 49.5$. Hence, entropy increased for this process, the process took a certain amount of "time", and one can correlate entropy increase with the passage of time. For this system configuration, the increase of entropy is subsequently an "absolute rule". This rule is based on the fact that all natural processes are irreversible by virtue of the fact that molecules of a system, for example two molecules in a tank, not only do external work (such as to push a piston), but also do internal work on each other, in proportion to the heat used to do work (see: Mechanical equivalent of heat) during the process. Entropy accounts for the fact that internal inter-molecular friction exists. Correlations An important difference between the past and the future is that in any system (such as a gas of particles) its initial conditions are usually such that its different parts are uncorrelated, but as the system evolves and its different parts interact with each other, they become correlated. For example, whenever dealing with a gas of particles, it is always assumed that its initial conditions are such that there is no correlation between the states of different particles (i.e. the speeds and locations of the different particles are completely random, up to the need to conform with the macrostate of the system). This is closely related to the second law of thermodynamics: for example, in a finite system interacting with finite heat reservoirs, entropy is equivalent to system-reservoir correlations, and thus both increase together. Take for example (experiment A) a closed box that is, at the beginning, half-filled with ideal gas. As time passes, the gas obviously expands to fill the whole box, so that the final state is a box full of gas. This is an irreversible process, since if the box is full at the beginning (experiment B), it does not become only half-full later, except for the very unlikely situation where the gas particles have very special locations and speeds. 
But this is precisely because we always assume that the initial conditions in experiment B are such that the particles have random locations and speeds. This is not correct for the final conditions of the system in experiment A, because the particles have interacted between themselves, so that their locations and speeds have become dependent on each other, i.e. correlated. This can be understood if we look at experiment A backwards in time, which we'll call experiment C: now we begin with a box full of gas, but the particles do not have random locations and speeds; rather, their locations and speeds are so particular, that after some time they all move to one half of the box, which is the final state of the system (this is the initial state of experiment A, because now we're looking at the same experiment backwards!). The interactions between particles now do not create correlations between the particles, but in fact turn them into (at least seemingly) random, "canceling" the pre-existing correlations. The only difference between experiment C (which defies the Second Law of Thermodynamics) and experiment B (which obeys the Second Law of Thermodynamics) is that in the former the particles are uncorrelated at the end, while in the latter the particles are uncorrelated at the beginning. In fact, if all the microscopic physical processes are reversible (see discussion below), then the Second Law of Thermodynamics can be proven for any isolated system of particles with initial conditions in which the particles states are uncorrelated. To do this, one must acknowledge the difference between the measured entropy of a system—which depends only on its macrostate (its volume, temperature etc.)—and its information entropy, which is the amount of information (number of computer bits) needed to describe the exact microstate of the system. The measured entropy is independent of correlations between particles in the system, because they do not affect its macrostate, but the information entropy does depend on them, because correlations lower the randomness of the system and thus lowers the amount of information needed to describe it. Therefore, in the absence of such correlations the two entropies are identical, but otherwise the information entropy is smaller than the measured entropy, and the difference can be used as a measure of the amount of correlations. Now, by Liouville's theorem, time-reversal of all microscopic processes implies that the amount of information needed to describe the exact microstate of an isolated system (its information-theoretic joint entropy) is constant in time. This joint entropy is equal to the marginal entropy (entropy assuming no correlations) plus the entropy of correlation (mutual entropy, or its negative mutual information). If we assume no correlations between the particles initially, then this joint entropy is just the marginal entropy, which is just the initial thermodynamic entropy of the system, divided by the Boltzmann constant. However, if these are indeed the initial conditions (and this is a crucial assumption), then such correlations form with time. In other words, there is a decreasing mutual entropy (or increasing mutual information), and for a time that is not too long—the correlations (mutual information) between particles only increase with time. 
Therefore, the thermodynamic entropy, which is proportional to the marginal entropy, must also increase with time (note that "not too long" in this context is relative to the time needed, in a classical version of the system, for it to pass through all its possible microstates—a time that can be roughly estimated as , where is the time between particle collisions and S is the system's entropy. In any practical case this time is huge compared to everything else). Note that the correlation between particles is not a fully objective quantity. One cannot measure the mutual entropy, one can only measure its change, assuming one can measure a microstate. Thermodynamics is restricted to the case where microstates cannot be distinguished, which means that only the marginal entropy, proportional to the thermodynamic entropy, can be measured, and, in a practical sense, always increases. Arrow of time in various phenomena Phenomena that occur differently according to their time direction can ultimately be linked to the second law of thermodynamics, for example ice cubes melt in hot coffee rather than assembling themselves out of the coffee and a block sliding on a rough surface slows down rather than speeds up. The idea that we can remember the past and not the future is called the "psychological arrow of time" and it has deep connections with Maxwell's demon and the physics of information; memory is linked to the second law of thermodynamics if one views it as correlation between brain cells (or computer bits) and the outer world: Since such correlations increase with time, memory is linked to past events, rather than to future events. Current research Current research focuses mainly on describing the thermodynamic arrow of time mathematically, either in classical or quantum systems, and on understanding its origin from the point of view of cosmological boundary conditions. Dynamical systems Some current research in dynamical systems indicates a possible "explanation" for the arrow of time. There are several ways to describe the time evolution of a dynamical system. In the classical framework, one considers an ordinary differential equation, where the parameter is explicitly time. By the very nature of differential equations, the solutions to such systems are inherently time-reversible. However, many of the interesting cases are either ergodic or mixing, and it is strongly suspected that mixing and ergodicity somehow underlie the fundamental mechanism of the arrow of time. While the strong suspicion may be but a fleeting sense of intuition, it cannot be denied that, when there are multiple parameters, the field of partial differential equations comes into play. In such systems there is the Feynman–Kac formula in play, which assures for specific cases, a one-to-one correspondence between specific linear stochastic differential equation and partial differential equation. Therefore, any partial differential equation system is tantamount to a random system of a single parameter, which is not reversible due to the aforementioned correspondence. Mixing and ergodic systems do not have exact solutions, and thus proving time irreversibility in a mathematical sense is impossible. The concept of "exact" solutions is an anthropic one. Does "exact" mean the same as closed form in terms of already know expressions, or does it mean simply a single finite sequence of strokes of a/the writing utensil/human finger? 
There are myriad of systems known to humanity that are abstract and have recursive definitions but no non-self-referential notation currently exists. As a result of this complexity, it is natural to look elsewhere for different examples and perspectives. Some progress can be made by studying discrete-time models or difference equations. Many discrete-time models, such as the iterated functions considered in popular fractal-drawing programs, are explicitly not time-reversible, as any given point "in the present" may have several different "pasts" associated with it: indeed, the set of all pasts is known as the Julia set. Since such systems have a built-in irreversibility, it is inappropriate to use them to explain why time is not reversible. There are other systems that are chaotic, and are also explicitly time-reversible: among these is the baker's map, which is also exactly solvable. An interesting avenue of study is to examine solutions to such systems not by iterating the dynamical system over time, but instead, to study the corresponding Frobenius-Perron operator or transfer operator for the system. For some of these systems, it can be explicitly, mathematically shown that the transfer operators are not trace-class. This means that these operators do not have a unique eigenvalue spectrum that is independent of the choice of basis. In the case of the baker's map, it can be shown that several unique and inequivalent diagonalizations or bases exist, each with a different set of eigenvalues. It is this phenomenon that can be offered as an "explanation" for the arrow of time. That is, although the iterated, discrete-time system is explicitly time-symmetric, the transfer operator is not. Furthermore, the transfer operator can be diagonalized in one of two inequivalent ways: one that describes the forward-time evolution of the system, and one that describes the backwards-time evolution. As of 2006, this type of time-symmetry breaking has been demonstrated for only a very small number of exactly-solvable, discrete-time systems. The transfer operator for more complex systems has not been consistently formulated, and its precise definition is mired in a variety of subtle difficulties. In particular, it has not been shown that it has a broken symmetry for the simplest exactly-solvable continuous-time ergodic systems, such as Hadamard's billiards, or the Anosov flow on the tangent space of PSL(2,R). Quantum mechanics Research on irreversibility in quantum mechanics takes several different directions. One avenue is the study of rigged Hilbert spaces, and in particular, how discrete and continuous eigenvalue spectra intermingle. For example, the rational numbers are completely intermingled with the real numbers, and yet have a unique, distinct set of properties. It is hoped that the study of Hilbert spaces with a similar inter-mingling will provide insight into the arrow of time. Another distinct approach is through the study of quantum chaos by which attempts are made to quantize systems as classically chaotic, ergodic or mixing. The results obtained are not dissimilar from those that come from the transfer operator method. 
For example, the quantization of the Boltzmann gas, that is, a gas of hard (elastic) point particles in a rectangular box reveals that the eigenfunctions are space-filling fractals that occupy the entire box, and that the energy eigenvalues are very closely spaced and have an "almost continuous" spectrum (for a finite number of particles in a box, the spectrum must be, of necessity, discrete). If the initial conditions are such that all of the particles are confined to one side of the box, the system very quickly evolves into one where the particles fill the entire box. Even when all of the particles are initially on one side of the box, their wave functions do, in fact, permeate the entire box: they constructively interfere on one side, and destructively interfere on the other. Irreversibility is then argued by noting that it is "nearly impossible" for the wave functions to be "accidentally" arranged in some unlikely state: such arrangements are a set of zero measure. Because the eigenfunctions are fractals, much of the language and machinery of entropy and statistical mechanics can be imported to discuss and argue the quantum case. Cosmology Some processes that involve high energy particles and are governed by the weak force (such as K-meson decay) defy the symmetry between time directions. However, all known physical processes do preserve a more complicated symmetry (CPT symmetry), and are therefore unrelated to the second law of thermodynamics, or to the day-to-day experience of the arrow of time. A notable exception is the wave function collapse in quantum mechanics, an irreversible process which is considered either real (by the Copenhagen interpretation) or apparent only (by the many-worlds interpretation of quantum mechanics). In either case, the wave function collapse always follows quantum decoherence, a process which is understood to be a result of the second law of thermodynamics. The universe was in a uniform, high density state at its very early stages, shortly after the Big Bang. The hot gas in the early universe was near thermodynamic equilibrium (see Horizon problem); in systems where gravitation plays a major role, this is a state of low entropy, due to the negative heat capacity of such systems (this is in contrary to non-gravitational systems where thermodynamic equilibrium is a state of maximum entropy). Moreover, due to its small volume compared to future epochs, the entropy was even lower as gas expansion increases its entropy. Thus the early universe can be considered to be highly ordered. Note that the uniformity of this early near-equilibrium state has been explained by the theory of cosmic inflation. According to this theory the universe (or, rather, its accessible part, a radius of 46 billion light years around Earth) evolved from a tiny, totally uniform volume (a portion of a much bigger universe), which expanded greatly; hence it was highly ordered. Fluctuations were then created by quantum processes related to its expansion, in a manner supposed to be such that these fluctuations went through quantum decoherence, so that they became uncorrelated for any practical use. This is supposed to give the desired initial conditions needed for the Second Law of Thermodynamics; different decoherent states ultimately evolved to different specific arrangements of galaxies and stars. 
The universe is apparently an open universe, so that its expansion will never terminate, but it is an interesting thought experiment to imagine what would have happened had the universe been closed. In such a case, its expansion would stop at a certain time in the distant future, and then begin to shrink. Moreover, a closed universe is finite. It is unclear what would happen to the second law of thermodynamics in such a case. One could imagine at least two different scenarios, though in fact only the first one is plausible, as the other requires a highly smooth cosmic evolution, contrary to what is observed: The broad consensus among the scientific community today is that smooth initial conditions lead to a highly non-smooth final state, and that this is in fact the source of the thermodynamic arrow of time. Gravitational systems tend to gravitationally collapse to compact bodies such as black holes (a phenomenon unrelated to wavefunction collapse), so the universe would end in a Big Crunch that is very different than a Big Bang run in reverse, since the distribution of the matter would be highly non-smooth; as the universe shrinks, such compact bodies merge to larger and larger black holes. It may even be that it is impossible for the universe to have both a smooth beginning and a smooth ending. Note that in this scenario the energy density of the universe in the final stages of its shrinkage is much larger than in the corresponding initial stages of its expansion (there is no destructive interference, unlike in the second scenario described below), and consists of mostly black holes rather than free particles. A highly controversial view is that instead, the arrow of time will reverse. The quantum fluctuations—which in the meantime have evolved into galaxies and stars—will be in superposition in such a way that the whole process described above is reversed—i.e., the fluctuations are erased by destructive interference and total uniformity is achieved once again. Thus the universe ends in a Big Crunch, which is similar to its beginning in the Big Bang. Because the two are totally symmetric, and the final state is very highly ordered, entropy must decrease close to the end of the universe, so that the second law of thermodynamics reverses when the universe shrinks. This can be understood as follows: in the very early universe, interactions between fluctuations created entanglement (quantum correlations) between particles spread all over the universe; during the expansion, these particles became so distant that these correlations became negligible (see quantum decoherence). At the time the expansion halts and the universe starts to shrink, such correlated particles arrive once again at contact (after circling around the universe), and the entropy starts to decrease—because highly correlated initial conditions may lead to a decrease in entropy. Another way of putting it, is that as distant particles arrive, more and more order is revealed because these particles are highly correlated with particles that arrived earlier. In this scenario, the cosmological arrow of time is the reason for both the thermodynamic arrow of time and the quantum arrow of time. Both will slowly disappear as the universe will come to a halt, and will later be reversed. In the first and more consensual scenario, it is the difference between the initial state and the final state of the universe that is responsible for the thermodynamic arrow of time. This is independent of the cosmological arrow of time. 
See also Arrow of time Cosmic inflation Entropy H-theorem History of entropy Loschmidt's paradox References Further reading (technical). Dover has reprinted the monograph in 2003. For a short paper listing "the essential points of that argument, correcting presentation points that were confusing ... and emphasizing conclusions more forcefully than previously" see External links Thermodynamic Asymmetry in Time at the online Stanford Encyclopedia of Philosophy Thermodynamic entropy Asymmetry
0.774353
0.989288
0.766058
Penrose process
The Penrose process (also called Penrose mechanism) is theorised by Sir Roger Penrose as a means whereby energy can be extracted from a rotating black hole. The process takes advantage of the ergosphere – a region of spacetime around the black hole dragged by its rotation faster than the speed of light, meaning that from the point of view of an outside observer any matter inside is forced to move in the direction of the rotation of the black hole. In the process, a working body falls (black thick line in the figure) into the ergosphere (gray region). At its lowest point (red dot) the body fires a propellant backwards; however, to a faraway observer both seem to continue to move forward due to frame-dragging (albeit at different speeds). The propellant, being slowed, falls (thin gray line) to the event horizon of the black hole (black disk). The remains of the body, being sped up, fly away (thin black line) with an excess of energy (that more than offsets the loss of the propellant and the energy used to shoot it). The maximum amount of energy gain possible for a single particle decay via the original (or classical) Penrose process is 20.7% of its mass in the case of an uncharged black hole (assuming the best case of maximal rotation of the black hole). The energy is taken from the rotation of the black hole, so there is a limit on how much energy one can extract by Penrose process and similar strategies (for an uncharged black hole no more than 29% of its original mass; larger efficiencies are possible for charged rotating black holes). Details of the ergosphere The outer surface of the ergosphere is the surface at which light that moves in the direction opposite to the rotation of the black hole remains at a fixed angular coordinate, according to an external observer. Since massive particles necessarily travel slower than light, massive particles will necessarily move along with the black hole's rotation. The inner boundary of the ergosphere is the event horizon, the spatial perimeter beyond which light cannot escape. Inside the ergosphere even light cannot keep up with the rotation of the black hole, as the trajectories of stationary (from the outside perspective) objects become space-like, rather than time-like (that normal matter would have), or light-like. Mathematically, the component of the metric changes its sign inside the ergosphere. That allows matter to have negative energy inside of the ergosphere as long as it moves counter the black hole's rotation fast enough (or, from outside perspective, resists being dragged along to a sufficient degree). Penrose mechanism exploits that by diving into the ergosphere, dumping an object that was given negative energy, and returning with more energy than before. In this way, rotational energy is extracted from the black hole, resulting in the black hole being spun down to a lower rotational speed. The maximum amount of energy (per mass of the thrown in object) is extracted if the black hole is rotating at the maximal rate, the object just grazes the event horizon and decays into forwards and backwards moving packets of light (the first escapes the black hole, the second falls inside). In an adjunct process, a black hole can be spun up (its rotational speed increased) by sending in particles that do not split up, but instead give their entire angular momentum to the black hole. However, this is not a reverse of the Penrose process, as both increase the entropy of the black hole by throwing material into it. 
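For an uncharged, maximally rotating (extremal Kerr) black hole, the two limits quoted above have standard closed forms: the maximum fractional energy gain of a single particle decay, and the maximum fraction of the black hole's mass-energy that is stored as extractable rotational energy,

$$\eta_{\text{single}} = \frac{\sqrt{2}-1}{2} \approx 20.7\%, \qquad \frac{E_{\text{rot}}^{\max}}{Mc^{2}} = 1-\frac{1}{\sqrt{2}} \approx 29\%.$$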
See also High Life, a 2018 science-fiction film that includes a mission to harness the process References Further reading Black holes Energy sources Hypothetical technology
0.770298
0.994479
0.766045
THESEUS
Transient High-Energy Sky and Early Universe Surveyor (THESEUS) is a space telescope mission proposal by the European Space Agency that would study gamma-ray bursts and X-rays for investigating the early universe. If developed, the mission would investigate star formation rates and metallicity evolution, as well as studying the sources and physics of reionization. Overview THESEUS is a mission concept that would monitor transient events in the high-energy Universe across the whole sky and over the entirety of cosmic history. In particular, it expects to make a complete census of gamma-ray bursts (GRBs) from the Universe's first billion years, to help understand the life cycle of the first stars. THESEUS would provide real-time triggers and accurate locations of the sources, which could also be followed up by other space- or ground-based telescopes operating at complementary wavelengths. The concept was selected in May 2018 as a finalist to become the fifth Medium-class mission (M5) of the Cosmic Vision programme by the European Space Agency (ESA). The other finalist was EnVision, a Venus orbiter. The winner, EnVision, was selected in June 2021 for launch in 2031. In November 2023, following a new selection process (2022) and a Phase-0 study (2023), THESEUS was selected by ESA for a new 2.5 year Phase-A study as one of the three candidates M7 missions (together with M-Matisse and Plasma Observatory). The space observatory would study GRBs and X-rays and their association with the explosive death of massive stars, supernova shock break-outs, black hole tidal disruption events, and magnetar flares. This can provide fundamental information on the cosmic star formation rate, the number density and properties of low-mass galaxies, the neutral hydrogen fraction, and the escape fraction of ultraviolet photons from galaxies. Scientific payload The conceptual payload of THESEUS includes: Soft X-ray Imager (SXI), sensitive to 0.3-6 keV is a set of 4 lobster-eye telescope units, covering a total field of view (FOV) of 1 sr with source location accuracy <1-2 arcmin. InfraRed Telescope (IRT), sensitive to 0.7-1.8 μm is a 0.7 m NIR telescope with 15x15 arcmin FOV, for fast response, with both imaging and moderate spectroscopic capabilities (R~400). Mass: 112.6 kg. X-Gamma ray Imaging Spectrometer (XGIS), sensitive to 2 keV-20 MeV, is a set of coded-mask cameras using monolithic X-gamma ray detectors based on bars of silicon diodes coupled with CsI crystal scintillator, granting a 1.5 sr FOV, a source location accuracy of 5 arcmin in 2-30 keV and an unprecedentedly broad energy band. Mass: 37.3 kg. See also Gamma-ray astronomy List of proposed space observatories X-ray astronomy References Cosmic Vision Gamma-ray telescopes X-ray telescopes Space telescopes European Space Agency satellites Classical mythology in popular culture 2010s in science 2020s in science 2037 in science
0.766107
0.999895
0.766027
Energy harvesting
Energy harvesting (EH) – also known as power harvesting, energy scavenging, or ambient power – is the process by which energy is derived from external sources (e.g., solar power, thermal energy, wind energy, salinity gradients, and kinetic energy, also known as ambient energy), then stored for use by small, wireless autonomous devices, like those used in wearable electronics, condition monitoring, and wireless sensor networks. Energy harvesters usually provide a very small amount of power for low-energy electronics. While the input fuel to some large-scale energy generation costs resources (oil, coal, etc.), the energy source for energy harvesters is present as ambient background. For example, temperature gradients exist from the operation of a combustion engine and in urban areas, there is a large amount of electromagnetic energy in the environment due to radio and television broadcasting. One of the first examples of ambient energy being used to produce electricity was the successful use of electromagnetic radiation (EMR) to generate the crystal radio. The principles of energy harvesting from ambient EMR can be demonstrated with basic components. Operation Energy harvesting devices converting ambient energy into electrical energy have attracted much interest in both the military and commercial sectors. Some systems convert motion, such as that of ocean waves, into electricity to be used by oceanographic monitoring sensors for autonomous operation. Future applications may include high-power output devices (or arrays of such devices) deployed at remote locations to serve as reliable power stations for large systems. Another application is in wearable electronics, where energy-harvesting devices can power or recharge cell phones, mobile computers, and radio communication equipment. All of these devices must be sufficiently robust to endure long-term exposure to hostile environments and have a broad range of dynamic sensitivity to exploit the entire spectrum of wave motions. In addition, one of the latest techniques to generate electric power from vibration waves is the utilization of Auxetic Boosters. This method falls under the category of piezoelectric-based vibration energy harvesting (PVEH), where the harvested electric energy can be directly used to power wireless sensors, monitoring cameras, and other Internet of Things (IoT) devices. Accumulating energy Energy can also be harvested to power small autonomous sensors such as those developed using MEMS technology. These systems are often very small and require little power, but their applications are limited by the reliance on battery power. Scavenging energy from ambient vibrations, wind, heat, or light could enable smart sensors to function indefinitely. Typical power densities available from energy harvesting devices are highly dependent upon the specific application (affecting the generator's size) and the design itself of the harvesting generator. In general, for motion-powered devices, typical values are a few μW/cm3 for human body-powered applications and hundreds of μW/cm3 for generators powered by machinery. Most energy-scavenging devices for wearable electronics generate very little power. Storage of power In general, energy can be stored in a capacitor, super capacitor, or battery. Capacitors are used when the application needs to provide huge energy spikes. Batteries leak less energy and are therefore used when the device needs to provide a steady flow of energy. These aspects of the battery depend on the type that is used. 
A common type of battery that is used for this purpose is the lead acid or lithium-ion battery although older types such as nickel metal hydride are still widely used today. Compared to batteries, super capacitors have virtually unlimited charge-discharge cycles and can therefore operate forever, enabling a maintenance-free operation in IoT and wireless sensor devices. Use of the power Current interest in low-power energy harvesting is for independent sensor networks. In these applications, an energy harvesting scheme puts power stored into a capacitor then boosts/regulates it to a second storage capacitor or battery for use in the microprocessor or in the data transmission. The power is usually used in a sensor application and the data is stored or transmitted, possibly through a wireless method. Motivation One of the main driving forces behind the search for new energy harvesting devices is the desire to power sensor networks and mobile devices without batteries that need external charging or service. Batteries have several limitations, such as limited lifespan, environmental impact, size, weight, and cost. Energy harvesting devices can provide an alternative or complementary source of power for applications that require low power consumption, such as remote sensing, wearable electronics, condition monitoring, and wireless sensor networks.  Energy harvesting devices can also extend the battery life or enable batteryless operation of some applications. Another motivation for energy harvesting is the potential to address the issue of climate change by reducing greenhouse gas emissions and fossil fuel consumption. Energy harvesting devices can utilize renewable and clean sources of energy that are abundant and ubiquitous in the environment, such as solar, thermal, wind, and kinetic energy. Energy harvesting devices can also reduce the need for power transmission and distribution systems that cause energy losses and environmental impacts. Energy harvesting devices can therefore contribute to the development of a more sustainable and resilient energy system. Recent research in energy harvesting has led to the innovation of devices capable of powering themselves through user interactions. Notable examples include battery-free game boys and other toys, which showcase the potential of devices powered by the energy generated from user actions, such as pressing buttons or turning knobs. These studies highlight how energy harvested from interactions can not only power the devices themselves but also extend their operational autonomy, promoting the use of renewable energy sources and reducing reliance on traditional batteries. Energy sources There are many small-scale energy sources that generally cannot be scaled up to industrial size in terms of comparable output to industrial size solar, wind or wave power: Some wristwatches are powered by kinetic energy (called automatic watches) generated through movement of the arm when walking. The arm movement causes winding of the watch's mainspring. Other designs, like Seiko's Kinetic, use a loose internal permanent magnet to generate electricity. Photovoltaics is a method of generating electrical power by converting solar radiation into direct current electricity using semiconductors that exhibit the photovoltaic effect. Photovoltaic power generation employs solar panels composed of a number of cells containing a photovoltaic material. Photovoltaics have been scaled up to industrial size and large-scale solar farms now exist. 
Thermoelectric generators (TEGs) consist of the junction of two dissimilar materials and the presence of a thermal gradient. High-voltage outputs are possible by connecting many junctions electrically in series and thermally in parallel. Typical performance is 100–300 μV/K per junction. These can be utilized to capture mWs of energy from industrial equipment, structures, and even the human body. They are typically coupled with heat sinks to improve temperature gradient. Micro wind turbines are used to harvest kinetic energy readily available in the environment in the form of wind to fuel low-power electronic devices such as wireless sensor nodes. When air flows across the blades of the turbine, a net pressure difference is developed between the wind speeds above and below the blades. This will result in a lift force generated which in turn rotates the blades. Similar to photovoltaics, wind farms have been constructed on an industrial scale and are being used to generate substantial amounts of electrical energy. Piezoelectric crystals or fibers generate a small voltage whenever they are mechanically deformed. Vibration from engines can stimulate piezoelectric materials, as can the heel of a shoe or the pushing of a button. Special antennas can collect energy from stray radio waves. This can also be done with a Rectenna and theoretically at even higher frequency EM radiation with a Nantenna. Power from keys pressed during use of a portable electronic device or remote controller, using magnet and coil or piezoelectric energy converters, may be used to help power the device. Vibration energy harvesting, based on electromagnetic induction, uses a magnet and a copper coil in the most simple versions to generate a current that can be converted into electricity. Electrically-charged humidity produces electricity in the Air-gen, a nanopore-based device invented by a group at the University of Massachusetts at Amherst led by Jun Yao. Ambient-radiation sources A possible source of energy comes from ubiquitous radio transmitters. Historically, either a large collection area or close proximity to the radiating wireless energy source is needed to get useful power levels from this source. The nantenna is one proposed development which would overcome this limitation by making use of the abundant natural radiation (such as solar radiation). One idea is to deliberately broadcast RF energy to power and collect information from remote devices. This is now commonplace in passive radio-frequency identification (RFID) systems, but the Safety and US Federal Communications Commission (and equivalent bodies worldwide) limit the maximum power that can be transmitted this way to civilian use. This method has been used to power individual nodes in a wireless sensor network. Fluid flow Various turbine and non-turbine generator technologies can harvest airflow. Towered wind turbines and airborne wind energy systems (AWES) harness the flow of air. Multiple companies are developing these technologies, which can operate in low-light environments, such as HVAC ducts, and can be scaled and optimized for the energy requirements of specific applications. The flow of blood can also be utilized to power devices. For example, a pacemaker developed at the University of Bern, uses blood flow to wind up a spring, which then drives an electrical micro-generator. Water energy harvesting has seen advancements in design, such as generators with transistor-like architecture, achieving high energy conversion efficiency and power density. 
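An order-of-magnitude estimate for the airflow harvesters described above follows from the classic wind-power relation P = ½ρAv³C_p. In the sketch below (Python), the rotor size, duct airflow speed, and power coefficient are illustrative assumptions, not figures taken from the text:

```python
# Rough power estimate for a micro wind turbine in a low-speed airflow (e.g. an
# HVAC duct). Rotor diameter, airflow speed, and power coefficient are assumed.
import math

rho = 1.2        # air density, kg/m^3
diameter = 0.10  # rotor diameter, m (assumed)
v = 3.0          # airflow speed, m/s (assumed)
cp = 0.25        # power coefficient (assumed; the Betz limit is about 0.59)

area = math.pi * (diameter / 2.0) ** 2          # swept area, m^2
p_harvested = 0.5 * rho * area * v ** 3 * cp    # W

print(f"swept area      : {area*1e4:.1f} cm^2")
print(f"harvested power : {p_harvested*1e3:.1f} mW")
```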
Photovoltaic Photovoltaic (PV) energy harvesting wireless technology offers significant advantages over wired or solely battery-powered sensor solutions: virtually inexhaustible sources of power with little or no adverse environmental effects. Indoor PV harvesting solutions have to date been powered by specially tuned amorphous silicon (aSi), a technology most often used in solar calculators. In recent years new PV technologies have come to the forefront in energy harvesting, such as dye-sensitized solar cells (DSSC). The dyes absorb light much like chlorophyll does in plants. Electrons released on impact escape to the TiO2 layer and from there diffuse through the electrolyte; because the dye can be tuned to the visible spectrum, much higher power can be produced, and under indoor lighting a DSSC can deliver useful power per square centimetre. Piezoelectric The piezoelectric effect converts mechanical strain into electric current or voltage. This strain can come from many different sources. Human motion, low-frequency seismic vibrations, and acoustic noise are everyday examples. Except in rare instances the piezoelectric effect operates in AC, requiring time-varying inputs at mechanical resonance to be efficient. Most piezoelectric electricity sources produce power on the order of milliwatts, too small for system application, but enough for hand-held devices such as some commercially available self-winding wristwatches. One proposal is that they be used for micro-scale devices, such as in a device harvesting micro-hydraulic energy. In this device, the flow of pressurized hydraulic fluid drives a reciprocating piston supported by three piezoelectric elements which convert the pressure fluctuations into an alternating current. As piezo energy harvesting has been investigated only since the late 1990s, it remains an emerging technology. Nevertheless, some interesting improvements were made with the self-powered electronic switch at the INSA school of engineering, implemented by the spin-off Arveni. In 2006, the proof of concept of a battery-less wireless doorbell push button was created, and recently a product showed that a classical wireless wall switch can be powered by a piezo harvester. Other industrial applications appeared between 2000 and 2005, for example to harvest energy from vibration and supply sensors, or to harvest energy from shock. Piezoelectric systems can convert motion from the human body into electrical power. DARPA has funded efforts to harness energy from leg and arm motion, shoe impacts, and blood pressure for low-level power to implantable or wearable sensors. Nanobrushes are another example of a piezoelectric energy harvester; they can be integrated into clothing. Multiple other nanostructures have been exploited to build energy-harvesting devices; for example, a single-crystal PMN-PT nanobelt was fabricated and assembled into a piezoelectric energy harvester in 2016. Careful design is needed to minimise user discomfort, since these energy-harvesting sources are by nature coupled to the body. The Vibration Energy Scavenging Project is another project set up to try to scavenge electrical energy from environmental vibrations and movements. A microbelt can be used to gather electricity from respiration. In addition, because vibration from human motion comes in three directions, a single-cantilever omni-directional piezoelectric energy harvester has been created by using 1:2 internal resonance. Finally, a millimeter-scale piezoelectric energy harvester has also already been created.
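For vibration-driven piezoelectric harvesters operated at mechanical resonance, a common first-order estimate of the available mechanical power is the linear resonant-harvester model P ≈ m·a²/(4·ζ·ωn). The sketch below applies this with assumed parameter values (proof mass, base acceleration, damping ratio, and resonant frequency are all illustrative, not taken from the text); only a fraction of this mechanical power is converted to electricity in practice.

```python
import math

# First-order estimate of the power available from a resonant vibration
# harvester (linear spring-mass-damper model driven at its resonance):
#   P ~ m * a**2 / (4 * zeta * omega_n)
# All parameter values are illustrative assumptions.

m = 0.010      # proof mass, kg (assumed)
a = 2.0        # base acceleration amplitude, m/s^2 (~0.2 g, assumed)
zeta = 0.05    # total damping ratio (assumed)
f_n = 100.0    # resonant frequency, Hz (assumed)

omega_n = 2 * math.pi * f_n
P = m * a**2 / (4 * zeta * omega_n)   # watts
print(f"Estimated harvestable power: {P * 1e3:.2f} mW")
```

With these assumptions the estimate lands in the sub-milliwatt range, in line with the milliwatt-order figures quoted above.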
Piezo elements are being embedded in walkways to recover the "people energy" of footsteps. They can also be embedded in shoes to recover "walking energy". Researchers at MIT developed the first micro-scale piezoelectric energy harvester using thin-film PZT in 2005. Arman Hajati and Sang-Gook Kim invented the ultra-wide-bandwidth micro-scale piezoelectric energy harvesting device by exploiting the nonlinear stiffness of a doubly clamped microelectromechanical systems (MEMS) resonator. The stretching strain in a doubly clamped beam shows a nonlinear stiffness, which provides passive feedback and results in amplitude-stiffened Duffing-mode resonance. Typically, piezoelectric cantilevers are adopted for the above-mentioned energy harvesting system. One drawback is that the piezoelectric cantilever has a gradient strain distribution, i.e., the piezoelectric transducer is not fully utilized. To address this issue, triangle-shaped and L-shaped cantilevers have been proposed for uniform strain distribution. In 2018, Soochow University researchers reported hybridizing a triboelectric nanogenerator and a silicon solar cell by sharing a mutual electrode. This device can collect solar energy or convert the mechanical energy of falling raindrops into electricity. UK telecom company Orange UK created an energy-harvesting T-shirt and boots. Other companies have also done the same. Energy from smart roads and piezoelectricity The brothers Pierre and Jacques Curie demonstrated the piezoelectric effect in 1880. The piezoelectric effect converts mechanical strain into voltage or electric current and generates electric energy from motion, weight, vibration and temperature changes. Based on the piezoelectric effect in thin-film lead zirconate titanate (PZT), microelectromechanical systems (MEMS) power-generating devices have been developed. In recent work on piezoelectric technology, Aqsa Abbasi differentiated two resonance modes in vibration converters and re-designed them to resonate at specific frequencies of an external vibration energy source, thereby creating electrical energy via the piezoelectric effect using an electromechanically damped mass. Beam-structured electrostatic devices, by contrast, are more difficult to fabricate than comparable PZT MEMS devices, because general silicon processing involves many more mask steps than the PZT film requires. Piezoelectric sensors and actuators have a cantilever-beam structure consisting of a membrane, a bottom electrode, a piezoelectric film, and a top electrode. Several mask steps are required for patterning each layer, and the induced voltage is very low. Pyroelectric crystals have a unique polar axis along which spontaneous polarization exists; this restricts pyroelectricity to the ten polar crystal classes. The special polar axis, the crystallophysical axis, coincides with a crystallographic axis of the crystal or lies in a unique mirror plane. Under external effects the electric centers of positive and negative charges of an elementary cell are displaced from their equilibrium positions, i.e., the spontaneous polarization of the crystal changes; thus all of the considered crystals possess a spontaneous polarization, and the piezoelectric effect in pyroelectric crystals arises as a result of changes in this spontaneous polarization under external effects (electric fields, mechanical stresses). As a result of such displacements, Abbasi described the change in the polarization components along all three axes.
To a first approximation, the change in polarization is proportional to the mechanical stress causing it, ΔPi = dij σj, where σ represents the mechanical stress and d represents the piezoelectric moduli. PZT thin films have attracted attention for applications such as force sensors, accelerometers, gyroscopes, actuators, tunable optics, micro pumps, ferroelectric RAM, display systems and smart roads. When energy sources are limited, energy harvesting plays an important role. Smart roads have the potential to play an important role in power generation: embedding piezoelectric material in the road can convert the pressure exerted by moving vehicles into voltage and current. Smart transportation intelligent systems Piezoelectric sensors are most useful in smart-road technologies that can be used to create systems that are intelligent and improve productivity in the long run. Imagine highways that alert motorists to a traffic jam before it forms, bridges that report when they are at risk of collapse, or an electric grid that fixes itself when blackouts hit. For many decades, scientists and experts have argued that the best way to fight congestion is intelligent transportation systems, such as roadside sensors to measure traffic and synchronized traffic lights to control the flow of vehicles. But the spread of these technologies has been limited by cost. There are also some other shovel-ready smart-technology projects which could be deployed fairly quickly, but most of the technologies are still at the development stage and might not be practically available for five years or more. Pyroelectric The pyroelectric effect converts a temperature change into electric current or voltage. It is analogous to the piezoelectric effect, which is another type of ferroelectric behavior. Pyroelectricity requires time-varying inputs and suffers from small power outputs in energy harvesting applications due to its low operating frequencies. However, one key advantage of pyroelectrics over thermoelectrics is that many pyroelectric materials are stable up to 1200 °C or higher, enabling energy harvesting from high-temperature sources and thus increasing thermodynamic efficiency. One way to directly convert waste heat into electricity is by executing the Olsen cycle on pyroelectric materials. The Olsen cycle consists of two isothermal and two isoelectric field processes in the electric displacement–electric field (D–E) diagram. The principle of the Olsen cycle is to charge a capacitor via cooling under a low electric field and to discharge it under heating at a higher electric field. Several pyroelectric converters have been developed to implement the Olsen cycle using conduction, convection, or radiation. It has also been established theoretically that pyroelectric conversion based on heat regeneration using an oscillating working fluid and the Olsen cycle can reach Carnot efficiency between a hot and a cold thermal reservoir. Moreover, recent studies have established polyvinylidene fluoride trifluoroethylene [P(VDF-TrFE)] polymers and lead lanthanum zirconate titanate (PLZT) ceramics as promising pyroelectric materials to use in energy converters due to their large energy densities generated at low temperatures. Additionally, a pyroelectric scavenging device that does not require time-varying inputs was recently introduced.
The energy-harvesting device uses the edge-depolarizing electric field of a heated pyroelectric to convert heat energy into mechanical energy instead of drawing electric current off two plates attached to the crystal-faces. Thermoelectrics In 1821, Thomas Johann Seebeck discovered that a thermal gradient formed between two dissimilar conductors produces a voltage. At the heart of the thermoelectric effect is the fact that a temperature gradient in a conducting material results in heat flow; this results in the diffusion of charge carriers. The flow of charge carriers between the hot and cold regions in turn creates a voltage difference. In 1834, Jean Charles Athanase Peltier discovered that running an electric current through the junction of two dissimilar conductors could, depending on the direction of the current, cause it to act as a heater or cooler. The heat absorbed or produced is proportional to the current, and the proportionality constant is known as the Peltier coefficient. Today, due to knowledge of the Seebeck and Peltier effects, thermoelectric materials can be used as heaters, coolers and generators (TEGs). Ideal thermoelectric materials have a high Seebeck coefficient, high electrical conductivity, and low thermal conductivity. Low thermal conductivity is necessary to maintain a high thermal gradient at the junction. Standard thermoelectric modules manufactured today consist of P- and N-doped bismuth-telluride semiconductors sandwiched between two metallized ceramic plates. The ceramic plates add rigidity and electrical insulation to the system. The semiconductors are connected electrically in series and thermally in parallel. Miniature thermocouples have been developed that convert body heat into electricity and generate 40 μ W at 3 V with a 5-degree temperature gradient, while on the other end of the scale, large thermocouples are used in nuclear RTG batteries. Practical examples are the finger-heartratemeter by the Holst Centre and the thermogenerators by the Fraunhofer-Gesellschaft. Advantages to thermoelectrics: No moving parts allow continuous operation for many years. Thermoelectrics contain no materials that must be replenished. Heating and cooling can be reversed. One downside to thermoelectric energy conversion is low efficiency (currently less than 10%). The development of materials that are able to operate in higher temperature gradients, and that can conduct electricity well without also conducting heat (something that was until recently thought impossible ), will result in increased efficiency. Future work in thermoelectrics could be to convert wasted heat, such as in automobile engine combustion, into electricity. Electrostatic (capacitive) This type of harvesting is based on the changing capacitance of vibration-dependent capacitors. Vibrations separate the plates of a charged variable capacitor, and mechanical energy is converted into electrical energy. Electrostatic energy harvesters need a polarization source to work and to convert mechanical energy from vibrations into electricity. The polarization source should be in the order of some hundreds of volts; this greatly complicates the power management circuit. Another solution consists in using electrets, that are electrically charged dielectrics able to keep the polarization on the capacitor for years. It's possible to adapt structures from classical electrostatic induction generators, which also extract energy from variable capacitances, for this purpose. 
The resulting devices are self-biasing, and can directly charge batteries, or can produce exponentially growing voltages on storage capacitors, from which energy can be periodically extracted by DC/DC converters. Magnetic induction Magnetic induction refers to the production of an electromotive force (i.e., voltage) in a changing magnetic field. This changing magnetic field can be created by motion, either rotation (i.e. Wiegand effect and Wiegand sensors) or linear movement (i.e. vibration). Magnets wobbling on a cantilever are sensitive to even small vibrations and generate microcurrents by moving relative to conductors due to Faraday's law of induction. By developing a miniature device of this kind in 2007, a team from the University of Southampton made possible the planting of such a device in environments that preclude having any electrical connection to the outside world. Sensors in inaccessible places can now generate their own power and transmit data to outside receivers. One of the major limitations of the magnetic vibration energy harvester developed at University of Southampton is the size of the generator, in this case approximately one cubic centimeter, which is much too large to integrate into today's mobile technologies. The complete generator including circuitry is a massive 4 cm by 4 cm by 1 cm nearly the same size as some mobile devices such as the iPod nano. Further reductions in the dimensions are possible through the integration of new and more flexible materials as the cantilever beam component. In 2012, a group at Northwestern University developed a vibration-powered generator out of polymer in the form of a spring. This device was able to target the same frequencies as the University of Southampton groups silicon based device but with one third the size of the beam component. A new approach to magnetic induction based energy harvesting has also been proposed by using ferrofluids. The journal article, "Electromagnetic ferrofluid-based energy harvester", discusses the use of ferrofluids to harvest low frequency vibrational energy at 2.2 Hz with a power output of ~80 mW per g. Quite recently, the change in domain wall pattern with the application of stress has been proposed as a method to harvest energy using magnetic induction. In this study, the authors have shown that the applied stress can change the domain pattern in microwires. Ambient vibrations can cause stress in microwires, which can induce a change in domain pattern and hence change the induction. Power, of the order of uW/cm2 has been reported. Commercially successful vibration energy harvesters based on magnetic induction are still relatively few in number. Examples include products developed by Swedish company ReVibe Energy, a technology spin-out from Saab Group. Another example is the products developed from the early University of Southampton prototypes by Perpetuum. These have to be sufficiently large to generate the power required by wireless sensor nodes (WSN) but in M2M applications this is not normally an issue. These harvesters are now being supplied in large volumes to power WSNs made by companies such as GE and Emerson and also for train bearing monitoring systems made by Perpetuum. Overhead powerline sensors can use magnetic induction to harvest energy directly from the conductor they are monitoring. Blood sugar Another way of energy harvesting is through the oxidation of blood sugars. These energy harvesters are called biobatteries. 
They could be used to power implanted electronic devices (e.g., pacemakers, implanted biosensors for diabetics, implanted active RFID devices, etc.). At present, the Minteer Group of Saint Louis University has created enzymes that could be used to generate power from blood sugars. However, the enzymes would still need to be replaced after a few years. In 2012, a pacemaker was powered by implantable biofuel cells at Clarkson University under the leadership of Dr. Evgeny Katz. Tree-based Tree metabolic energy harvesting is a type of bio-energy harvesting. Voltree has developed a method for harvesting energy from trees. These energy harvesters are being used to power remote sensors and mesh networks as the basis for a long term deployment system to monitor forest fires and weather in the forest. According to Voltree's website, the useful life of such a device should be limited only by the lifetime of the tree to which it is attached. A small test network was recently deployed in a US National Park forest. Other sources of energy from trees include capturing the physical movement of the tree in a generator. Theoretical analysis of this source of energy shows some promise in powering small electronic devices. A practical device based on this theory has been built and successfully powered a sensor node for a year. Metamaterial A metamaterial-based device wirelessly converts a 900 MHz microwave signal to 7.3 volts of direct current (greater than that of a USB device). The device can be tuned to harvest other signals including Wi-Fi signals, satellite signals, or even sound signals. The experimental device used a series of five fiberglass and copper conductors. Conversion efficiency reached 37 percent. When traditional antennas are close to each other in space they interfere with each other. But since RF power goes down by the cube of the distance, the amount of power is very very small. While the claim of 7.3 volts is grand, the measurement is for an open circuit. Since the power is so low, there can be almost no current when any load is attached. Atmospheric pressure changes The pressure of the atmosphere changes naturally over time from temperature changes and weather patterns. Devices with a sealed chamber can use these pressure differences to extract energy. This has been used to provide power for mechanical clocks such as the Atmos clock. Ocean energy A relatively new concept of generating energy is to generate energy from oceans. Large masses of waters are present on the planet which carry with them great amounts of energy. The energy in this case can be generated by tidal streams, ocean waves, difference in salinity and also difference in temperature. , efforts are underway to harvest energy this way. United States Navy recently was able to generate electricity using difference in temperatures present in the ocean. One method to use the temperature difference across different levels of the thermocline in the ocean is by using a thermal energy harvester that is equipped with a material that changes phase while in different temperatures regions. This is typically a polymer-based material that can handle reversible heat treatments. When the material is changing phase, the energy differential is converted into mechanical energy. The materials used will need to be able to alter phases, from liquid to solid, depending on the position of the thermocline underwater. 
These phase change materials within thermal energy harvesting units would be an ideal way to recharge or power an unmanned underwater vehicle (UUV) being that it will rely on the warm and cold water already present in large bodies of water; minimizing the need for standard battery recharging. Capturing this energy would allow for longer-term missions since the need to be collected or return for charging can be eliminated. This is also a very environmentally friendly method of powering underwater vehicles. There are no emissions that come from utilizing a phase change fluid, and it will likely have a longer lifespan than that of a standard battery. Future directions Electroactive polymers (EAPs) have been proposed for harvesting energy. These polymers have a large strain, elastic energy density, and high energy conversion efficiency. The total weight of systems based on EAPs (electroactive polymers) is proposed to be significantly lower than those based on piezoelectric materials. Nanogenerators, such as the one made by Georgia Tech, could provide a new way for powering devices without batteries. As of 2008, it only generates some dozen nanowatts, which is too low for any practical application. Noise has been the subject of a proposal by NiPS Laboratory in Italy to harvest wide spectrum low scale vibrations via a nonlinear dynamical mechanism that can improve harvester efficiency up to a factor 4 compared to traditional linear harvesters. Combinations of different types of energy harvesters can further reduce dependence on batteries, particularly in environments where the available ambient energy types change periodically. This type of complementary balanced energy harvesting has the potential to increase reliability of wireless sensor systems for structural health monitoring. See also Airborne wind energy Automotive thermoelectric generators EnOcean Future energy development IEEE 802.15 Ultra Wideband (UWB) List of energy resources Outline of energy Parasitic load Real-time locating system (RTL) Rechargeable battery Rectenna Solar charger Thermoacoustic heat engine Thermoelectric generator Ubiquitous Sensor Network Unmanned aerial vehicles can be powered by energy harvesting Wireless power transfer References External links Microtechnology Energy harvesting research centers
0.773336
0.990543
0.766023
Ballistic training
Ballistic training, also known as compensatory acceleration training, uses exercises which accelerate a force through the entire range of motion. It is a form of power training which can involve throwing weights, jumping with weights, or swinging weights in order to increase explosive power. The intention in ballistic exercises is to maximise the acceleration phase of an object's movement and minimise the deceleration phase. For instance, throwing a medicine ball maximises the acceleration of the ball. This can be contrasted with a standard weight training exercise where there would be a pronounced deceleration phase at the end of the repetition i.e. at the end of a bench press exercise the barbell is decelerated and brought to a halt. Similarly, an athlete jumping whilst holding a trap bar maximises the acceleration of the weight through the process of holding it whilst they jump- where as they would decelerate it at the end of a standard trap bar deadlift. History The word ballistic comes from the Greek word βάλλειν (ballein), which means “to throw”. Evidence of ballistic training can be seen throughout recorded history, especially in depictions which show the throwing of a large stone (stone put). Other ballistic disciplines from antiquity include the javelin throw and the discus throw. The hammer throw is a younger discipline, known from the 16th century. Such throws have been both a popular sporting pastime, and a training method employed by soldiers. Ballistic training was first used in the modern day by elite athletes when they were looking to enhance their ability to perform explosively. Commonly used modern ballistic training exercises are medicine ball throws, bench throws, jump squats, and kettlebell swings. Focus and effects Ballistic training requires the muscles to adapt to contracting very quickly and forcefully. This training requires the central nervous system and muscular system to coordinate and produce the greatest amount of force in the shortest time possible i.e. to increase the rate of force development (RFD). Ballistic training exercises involve dramatically increasing the acceleration phase of the weight's movement and reducing the deceleration phase. For example, in a medicine ball throw the weight is accelerated throughout the exercise in order to propel it into the air. In a weighted jump, the weight continues to be held and so continues to be accelerated throughout the concentric phase of the jumping action. This can be contrasted with standard weight training exercises where the weight is decelerated and brought to a halt at the end of the repetition. For example, in a bench press the barbell is decelerated to a halt at the end of a standard repetition, but in a bench press throw it continues to be accelerated as it is thrown into the air. An exercise performed in a ballistic manner allows for the weight to be moved more forcefully. Criteria 1. Muscle recruitment principles. Ballistic lifts force the muscles to produce the greatest amount of force in the shortest amount of time. In accordance with Henneman's size principle muscle fibers are recruited from a low to a high threshold as force requirements increase. 2. Speed of the movement. To ensure full muscle fiber recruitment the speed of the lift must be propulsive through the entire range of the movement up until release. 3. Intensity of the exercise. The duration of the lift should be measured by repetitions or time. The lift should be stopped when the bar decelerates. 
Research has shown the 6-8 repetitions or 20–30 seconds produces the best results. 4. Cardiovascular benefits. Ballistic exercises performed continuously for a minimum of 20 seconds followed by a 30-second rest period and then repeated until deceleration occurs has been proven to elevate the heart rate to training zone level. 5. Co-ordination. Research at the University of Connecticut found that high-intensity training has profound effects on the nervous system. The exercise had to be of an intensity that elevate the heart rate to 90% of maximum rate and had to sustain that rate for at least 20 seconds. 6. Electronic measurement. There are several electronic measurement systems that measure the velocity, power, and effectiveness of a lift. The athlete should stop the lift when the speed of a lift has fallen to 90% of their previous lift. The 90% number signals that there has been a significant change in the recruitment of the fast-twitch muscle fibers. Below the 90% number the lift is no longer ballistic 7. Specificity of training. Ballistic training emphasizes throwing and jumping with a weighted object. Research has resulted in positive increases in vertical jump, throwing velocity, and running speed. There is limited transfer to a specific sport. Use in metabolic conditioning Ballistic exercises have traditionally been left out of metabolic conditioning workouts and training programs. This may be due to the fact that they are often technical lifts, or lifts/exercises for which technique is crucial to safe and effective completion. However, with the extensive availability of information and guidance in learning and developing proficiency in ballistic exercise, this trend is changing. Many training programs which employ circuit training or metabolic conditioning now include ballistic exercises such as kettlebell cleans and snatches, Olympic lifts and variations, throws and plyometric variations. The benefits of their inclusion in these types of programs include higher levels of motor unit recruitment, higher caloric burn and improvements in a number of measurable athletic outputs. See also Calisthenics Complex training Plyometrics Power training Strength training Velocity Based Training (VBT) References Power training Physical exercise
0.784471
0.976478
0.766018
Position and momentum spaces
In physics and geometry, there are two closely related vector spaces, usually three-dimensional but in general of any finite dimension. Position space (also real space or coordinate space) is the set of all position vectors r in Euclidean space, and has dimensions of length; a position vector defines a point in space. (If the position vector of a point particle varies with time, it will trace out a path, the trajectory of a particle.) Momentum space is the set of all momentum vectors p a physical system can have; the momentum vector of a particle corresponds to its motion, with units of [mass][length][time]−1. Mathematically, the duality between position and momentum is an example of Pontryagin duality. In particular, if a function is given in position space, f(r), then its Fourier transform obtains the function in momentum space, φ(p). Conversely, the inverse Fourier transform of a momentum space function is a position space function. These quantities and ideas transcend all of classical and quantum physics, and a physical system can be described using either the positions of the constituent particles, or their momenta, both formulations equivalently provide the same information about the system in consideration. Another quantity is useful to define in the context of waves. The wave vector k (or simply "k-vector") has dimensions of reciprocal length, making it an analogue of angular frequency ω which has dimensions of reciprocal time. The set of all wave vectors is k-space. Usually r is more intuitive and simpler than k, though the converse can also be true, such as in solid-state physics. Quantum mechanics provides two fundamental examples of the duality between position and momentum, the Heisenberg uncertainty principle ΔxΔp ≥ ħ/2 stating that position and momentum cannot be simultaneously known to arbitrary precision, and the de Broglie relation p = ħk which states the momentum and wavevector of a free particle are proportional to each other. In this context, when it is unambiguous, the terms "momentum" and "wavevector" are used interchangeably. However, the de Broglie relation is not true in a crystal. Position and momentum spaces in classical mechanics Lagrangian mechanics Most often in Lagrangian mechanics, the Lagrangian L(q, dq/dt, t) is in configuration space, where q = (q1, q2,..., qn) is an n-tuple of the generalized coordinates. The Euler–Lagrange equations of motion are (One overdot indicates one time derivative). Introducing the definition of canonical momentum for each generalized coordinate the Euler–Lagrange equations take the form The Lagrangian can be expressed in momentum space also, L′(p, dp/dt, t), where p = (p1, p2, ..., pn) is an n-tuple of the generalized momenta. A Legendre transformation is performed to change the variables in the total differential of the generalized coordinate space Lagrangian; where the definition of generalized momentum and Euler–Lagrange equations have replaced the partial derivatives of L. 
The product rule for differentials allows the exchange of differentials in the generalized coordinates and velocities for the differentials in generalized momenta and their time derivatives, which after substitution simplifies and rearranges to Now, the total differential of the momentum space Lagrangian L′ is so by comparison of differentials of the Lagrangians, the momenta, and their time derivatives, the momentum space Lagrangian L′ and the generalized coordinates derived from L′ are respectively Combining the last two equations gives the momentum space Euler–Lagrange equations The advantage of the Legendre transformation is that the relation between the new and old functions and their variables are obtained in the process. Both the coordinate and momentum forms of the equation are equivalent and contain the same information about the dynamics of the system. This form may be more useful when momentum or angular momentum enters the Lagrangian. Hamiltonian mechanics In Hamiltonian mechanics, unlike Lagrangian mechanics which uses either all the coordinates or the momenta, the Hamiltonian equations of motion place coordinates and momenta on equal footing. For a system with Hamiltonian H(q, p, t), the equations are Position and momentum spaces in quantum mechanics In quantum mechanics, a particle is described by a quantum state. This quantum state can be represented as a superposition (i.e. a linear combination as a weighted sum) of basis states. In principle one is free to choose the set of basis states, as long as they span the space. If one chooses the eigenfunctions of the position operator as a set of basis functions, one speaks of a state as a wave function in position space (our ordinary notion of space in terms of length). The familiar Schrödinger equation in terms of the position r is an example of quantum mechanics in the position representation. By choosing the eigenfunctions of a different operator as a set of basis functions, one can arrive at a number of different representations of the same state. If one picks the eigenfunctions of the momentum operator as a set of basis functions, the resulting wave function is said to be the wave function in momentum space. A feature of quantum mechanics is that phase spaces can come in different types: discrete-variable, rotor, and continuous-variable. The table below summarizes some relations involved in the three types of phase spaces. Relation between space and reciprocal space The momentum representation of a wave function is very closely related to the Fourier transform and the concept of frequency domain. Since a quantum mechanical particle has a frequency proportional to the momentum (de Broglie's equation given above), describing the particle as a sum of its momentum components is equivalent to describing it as a sum of frequency components (i.e. a Fourier transform). This becomes clear when we ask ourselves how we can transform from one representation to another. Functions and operators in position space Suppose we have a three-dimensional wave function in position space , then we can write this functions as a weighted sum of orthogonal basis functions : or, in the continuous case, as an integral It is clear that if we specify the set of functions , say as the set of eigenfunctions of the momentum operator, the function holds all the information necessary to reconstruct and is therefore an alternative description for the state . 
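As a numerical sketch of this change of basis, the snippet below builds a Gaussian wave packet on a finite grid and obtains its momentum-space representation with a discrete Fourier transform; natural units with ħ = 1, the grid parameters, and NumPy's FFT conventions are assumptions made here for illustration only.

```python
import numpy as np

# A Gaussian wave packet with carrier wavenumber k0 in position space becomes
# a Gaussian in momentum space centred at p = hbar*k0 (hbar = 1 here, so p = k).
# Grid size and packet parameters are arbitrary illustrative choices.

N, L = 2048, 100.0
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
dx = x[1] - x[0]

k0, sigma = 2.0, 3.0
psi = np.exp(-x**2 / (4 * sigma**2)) * np.exp(1j * k0 * x)
psi /= np.sqrt(np.sum(np.abs(psi)**2) * dx)          # normalise in position space

k = 2 * np.pi * np.fft.fftfreq(N, d=dx)              # wavenumber (momentum) grid
phi = np.fft.fft(psi) * dx / np.sqrt(2 * np.pi)      # momentum-space amplitudes (up to a phase)
dk = 2 * np.pi / (N * dx)

print("norm in k-space:", np.sum(np.abs(phi)**2) * dk)      # ~1 (Parseval)
print("peak of |phi(k)| at k =", k[np.argmax(np.abs(phi))])  # ~k0
```

The transform preserves the norm (Parseval's theorem) and peaks at k ≈ k0, i.e. at momentum p = ħk0, so the same state is fully described in either representation.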
In quantum mechanics, the momentum operator is given by (see matrix calculus for the denominator notation) with appropriate domain. The eigenfunctions are and eigenvalues ħk. So and we see that the momentum representation is related to the position representation by a Fourier transform. Functions and operators in momentum space Conversely, a three-dimensional wave function in momentum space can be expressed as a weighted sum of orthogonal basis functions , or as an integral, The position operator is given by with eigenfunctions and eigenvalues r. So a similar decomposition of can be made in terms of the eigenfunctions of this operator, which turns out to be the inverse Fourier transform, Unitary equivalence between position and momentum operator The r and p operators are unitarily equivalent, with the unitary operator being given explicitly by the Fourier transform, namely a quarter-cycle rotation in phase space, generated by the oscillator Hamiltonian. Thus, they have the same spectrum. In physical language, p acting on momentum space wave functions is the same as r acting on position space wave functions (under the image of the Fourier transform). Reciprocal space and crystals For an electron (or other particle) in a crystal, its value of k relates almost always to its crystal momentum, not its normal momentum. Therefore, k and p are not simply proportional but play different roles. See k·p perturbation theory for an example. Crystal momentum is like a wave envelope that describes how the wave varies from one unit cell to the next, but does not give any information about how the wave varies within each unit cell. When k relates to crystal momentum instead of true momentum, the concept of k-space is still meaningful and extremely useful, but it differs in several ways from the non-crystal k-space discussed above. For example, in a crystal's k-space, there is an infinite set of points called the reciprocal lattice which are "equivalent" to k = 0 (this is analogous to aliasing). Likewise, the "first Brillouin zone" is a finite volume of k-space, such that every possible k is "equivalent" to exactly one point in this region. See also Phase space Reciprocal space Configuration space Fractional Fourier transform Footnotes References Momentum Quantum mechanics de:Impulsraum
0.77658
0.986378
0.766001
Buckingham π theorem
In engineering, applied mathematics, and physics, the Buckingham theorem is a key theorem in dimensional analysis. It is a formalisation of Rayleigh's method of dimensional analysis. Loosely, the theorem states that if there is a physically meaningful equation involving a certain number n of physical variables, then the original equation can be rewritten in terms of a set of p = n − k dimensionless parameters 1, 2, ..., p constructed from the original variables, where k is the number of physical dimensions involved; it is obtained as the rank of a particular matrix. The theorem provides a method for computing sets of dimensionless parameters from the given variables, or nondimensionalization, even if the form of the equation is still unknown. The Buckingham theorem indicates that validity of the laws of physics does not depend on a specific unit system. A statement of this theorem is that any physical law can be expressed as an identity involving only dimensionless combinations (ratios or products) of the variables linked by the law (for example, pressure and volume are linked by Boyle's law – they are inversely proportional). If the dimensionless combinations' values changed with the systems of units, then the equation would not be an identity, and the theorem would not hold. History Although named for Edgar Buckingham, the theorem was first proved by the French mathematician Joseph Bertrand in 1878. Bertrand considered only special cases of problems from electrodynamics and heat conduction, but his article contains, in distinct terms, all the basic ideas of the modern proof of the theorem and clearly indicates the theorem's utility for modelling physical phenomena. The technique of using the theorem ("the method of dimensions") became widely known due to the works of Rayleigh. The first application of the theorem in the general case to the dependence of pressure drop in a pipe upon governing parameters probably dates back to 1892, a heuristic proof with the use of series expansions, to 1894. Formal generalization of the theorem for the case of arbitrarily many quantities was given first by in 1892, then in 1911—apparently independently—by both A. Federman and D. Riabouchinsky, and again in 1914 by Buckingham. It was Buckingham's article that introduced the use of the symbol "" for the dimensionless variables (or parameters), and this is the source of the theorem's name. Statement More formally, the number of dimensionless terms that can be formed is equal to the nullity of the dimensional matrix, and is the rank. For experimental purposes, different systems that share the same description in terms of these dimensionless numbers are equivalent. In mathematical terms, if we have a physically meaningful equation such as where are any physical variables, and there is a maximal dimensionally independent subset of size , then the above equation can be restated as where are dimensionless parameters constructed from the by dimensionless equations — the so-called Pi groups — of the form where the exponents are rational numbers. (They can always be taken to be integers by redefining as being raised to a power that clears all denominators.) If there are fundamental units in play, then . Significance The Buckingham theorem provides a method for computing sets of dimensionless parameters from given variables, even if the form of the equation remains unknown. 
However, the choice of dimensionless parameters is not unique; Buckingham's theorem only provides a way of generating sets of dimensionless parameters and does not indicate the most "physically meaningful". Two systems for which these parameters coincide are called similar (as with similar triangles, they differ only in scale); they are equivalent for the purposes of the equation, and the experimentalist who wants to determine the form of the equation can choose the most convenient one. Most importantly, Buckingham's theorem describes the relation between the number of variables and fundamental dimensions. Proof For simplicity, it will be assumed that the space of fundamental and derived physical units forms a vector space over the real numbers, with the fundamental units as basis vectors, and with multiplication of physical units as the "vector addition" operation, and raising to powers as the "scalar multiplication" operation: represent a dimensional variable as the set of exponents needed for the fundamental units (with a power of zero if the particular fundamental unit is not present). For instance, the standard gravity has units of (length over time squared), so it is represented as the vector with respect to the basis of fundamental units (length, time). We could also require that exponents of the fundamental units be rational numbers and modify the proof accordingly, in which case the exponents in the pi groups can always be taken as rational numbers or even integers. Rescaling units Suppose we have quantities , where the units of contain length raised to the power . If we originally measure length in meters but later switch to centimeters, then the numerical value of would be rescaled by a factor of . Any physically meaningful law should be invariant under an arbitrary rescaling of every fundamental unit; this is the fact that the pi theorem hinges on. Formal proof Given a system of dimensional variables in fundamental (basis) dimensions, the dimensional matrix is the matrix whose rows correspond to the fundamental dimensions and whose columns are the dimensions of the variables: the th entry (where and ) is the power of the th fundamental dimension in the th variable. The matrix can be interpreted as taking in a combination of the variable quantities and giving out the dimensions of the combination in terms of the fundamental dimensions. So the (column) vector that results from the multiplication consists of the units of in terms of the fundamental independent (basis) units. If we rescale the th fundamental unit by a factor of , then gets rescaled by , where is the th entry of the dimensional matrix. In order to convert this into a linear algebra problem, we take logarithms (the base is irrelevant), yielding which is an action of on . We define a physical law to be an arbitrary function such that is a permissible set of values for the physical system when . We further require to be invariant under this action. Hence it descends to a function . All that remains is to exhibit an isomorphism between and , the (log) space of pi groups . We construct an matrix whose columns are a basis for . It tells us how to embed into as the kernel of . That is, we have an exact sequence Taking tranposes yields another exact sequence The first isomorphism theorem produces the desired isomorphism, which sends the coset to . This corresponds to rewriting the tuple into the pi groups coming from the columns of . 
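The kernel computation at the heart of the proof is straightforward to carry out symbolically. The sketch below (assuming SymPy is available) sets up the dimensional matrix in the (M, L, T) basis for the simple-pendulum variables treated in the Examples section below and reads the single pi group off its null space.

```python
from sympy import Matrix

# Dimensional matrix for the simple-pendulum variables used later in the
# Examples section: period t, mass m, length l, gravitational acceleration g.
# Rows are the fundamental dimensions (M, L, T); columns hold the exponents
# of each variable in that basis:  t ~ T,  m ~ M,  l ~ L,  g ~ L T^-2.
A = Matrix([
    [0, 1, 0,  0],   # M
    [0, 0, 1,  1],   # L
    [1, 0, 0, -2],   # T
])

# n = 4 variables, rank k = 3, so p = n - k = 1 dimensionless group.
print("rank:", A.rank())
for v in A.nullspace():
    print("exponents (t, m, l, g):", list(v))   # -> [2, 0, -1, 1], i.e. pi = g*t**2/l
```

The null-space vector [2, 0, −1, 1] says that π = t²g/l is dimensionless and that the mass cannot enter, which is exactly what the pendulum analysis below concludes.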
The International System of Units defines seven base units, which are the ampere, kelvin, second, metre, kilogram, candela and mole. It is sometimes advantageous to introduce additional base units and techniques to refine the technique of dimensional analysis. (See orientational analysis and reference.) Examples Speed This example is elementary but serves to demonstrate the procedure. Suppose a car is driving at 100 km/h; how long does it take to go 200 km? This question considers dimensioned variables: distance time and speed and we are seeking some law of the form Any two of these variables are dimensionally independent, but the three taken together are not. Thus there is dimensionless quantity. The dimensional matrix is in which the rows correspond to the basis dimensions and and the columns to the considered dimensions where the latter stands for the speed dimension. The elements of the matrix correspond to the powers to which the respective dimensions are to be raised. For instance, the third column states that represented by the column vector is expressible in terms of the basis dimensions as since For a dimensionless constant we are looking for vectors such that the matrix-vector product equals the zero vector In linear algebra, the set of vectors with this property is known as the kernel (or nullspace) of the dimensional matrix. In this particular case its kernel is one-dimensional. The dimensional matrix as written above is in reduced row echelon form, so one can read off a non-zero kernel vector to within a multiplicative constant: If the dimensional matrix were not already reduced, one could perform Gauss–Jordan elimination on the dimensional matrix to more easily determine the kernel. It follows that the dimensionless constant, replacing the dimensions by the corresponding dimensioned variables, may be written: Since the kernel is only defined to within a multiplicative constant, the above dimensionless constant raised to any arbitrary power yields another (equivalent) dimensionless constant. Dimensional analysis has thus provided a general equation relating the three physical variables: or, letting denote a zero of function which can be written in the desired form (which recall was ) as The actual relationship between the three variables is simply In other words, in this case has one physically relevant root, and it is unity. The fact that only a single value of will do and that it is equal to 1 is not revealed by the technique of dimensional analysis. The simple pendulum We wish to determine the period of small oscillations in a simple pendulum. It will be assumed that it is a function of the length the mass and the acceleration due to gravity on the surface of the Earth which has dimensions of length divided by time squared. The model is of the form (Note that it is written as a relation, not as a function: is not written here as a function of ) Period, mass, and length are dimensionally independent, but acceleration can be expressed in terms of time and length, which means the four variables taken together are not dimensionally independent. 
Thus we need only dimensionless parameter, denoted by and the model can be re-expressed as where is given by for some values of The dimensions of the dimensional quantities are: The dimensional matrix is: (The rows correspond to the dimensions and and the columns to the dimensional variables For instance, the 4th column, states that the variable has dimensions of ) We are looking for a kernel vector such that the matrix product of on yields the zero vector The dimensional matrix as written above is in reduced row echelon form, so one can read off a kernel vector within a multiplicative constant: Were it not already reduced, one could perform Gauss–Jordan elimination on the dimensional matrix to more easily determine the kernel. It follows that the dimensionless constant may be written: In fundamental terms: which is dimensionless. Since the kernel is only defined to within a multiplicative constant, if the above dimensionless constant is raised to any arbitrary power, it will yield another equivalent dimensionless constant. In this example, three of the four dimensional quantities are fundamental units, so the last (which is ) must be a combination of the previous. Note that if (the coefficient of ) had been non-zero then there would be no way to cancel the value; therefore be zero. Dimensional analysis has allowed us to conclude that the period of the pendulum is not a function of its mass (In the 3D space of powers of mass, time, and distance, we can say that the vector for mass is linearly independent from the vectors for the three other variables. Up to a scaling factor, is the only nontrivial way to construct a vector of a dimensionless parameter.) The model can now be expressed as: Then this implies that for some zero of the function If there is only one zero, call it then It requires more physical insight or an experiment to show that there is indeed only one zero and that the constant is in fact given by For large oscillations of a pendulum, the analysis is complicated by an additional dimensionless parameter, the maximum swing angle. The above analysis is a good approximation as the angle approaches zero. Electric power To demonstrate the application of the theorem, consider the power consumption of a stirrer with a given shape. The power, P, in dimensions [M · L2/T3], is a function of the density, ρ [M/L3], and the viscosity of the fluid to be stirred, μ [M/(L · T)], as well as the size of the stirrer given by its diameter, D [L], and the angular speed of the stirrer, n [1/T]. Therefore, we have a total of n = 5 variables representing our example. Those n = 5 variables are built up from k = 3 independent dimensions, e.g., length: L (SI units: m), time: T (s), and mass: M (kg). According to the -theorem, the n = 5 variables can be reduced by the k = 3 dimensions to form p = n − k = 5 − 3 = 2 independent dimensionless numbers. Usually, these quantities are chosen as , commonly named the Reynolds number which describes the fluid flow regime, and , the power number, which is the dimensionless description of the stirrer. Note that the two dimensionless quantities are not unique and depend on which of the n = 5 variables are chosen as the k = 3 dimensionally independent basis variables, which, in this example, appear in both dimensionless quantities. The Reynolds number and power number fall from the above analysis if , n, and D are chosen to be the basis variables. If, instead, , n, and D are selected, the Reynolds number is recovered while the second dimensionless quantity becomes . 
We note that is the product of the Reynolds number and the power number. Other examples An example of dimensional analysis can be found for the case of the mechanics of a thin, solid and parallel-sided rotating disc. There are five variables involved which reduce to two non-dimensional groups. The relationship between these can be determined by numerical experiment using, for example, the finite element method. The theorem has also been used in fields other than physics, for instance in sports science. See also Blast wave Dimensionless quantity Natural units Similitude (model) Reynolds number References Notes Citations Bibliography Original sources External links Some reviews and original sources on the history of pi theorem and the theory of similarity (in Russian) Articles containing proofs Dimensional analysis Eponymous theorems of physics
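The same null-space computation applies to the stirrer example discussed above. A brief sketch (again assuming SymPy; the column order P, ρ, μ, D, n is a choice made here) confirms that there are two independent dimensionless groups and that the conventional Reynolds and power numbers both lie in the kernel of the dimensional matrix.

```python
from sympy import Matrix

# Dimensional matrix for the stirrer example, in the (M, L, T) basis.
# Columns: P [M L^2 T^-3], rho [M L^-3], mu [M L^-1 T^-1], D [L], n [T^-1].
A = Matrix([
    [ 1,  1,  1, 0,  0],   # M
    [ 2, -3, -1, 1,  0],   # L
    [-3,  0, -1, 0, -1],   # T
])

print("p = n - k =", A.cols - A.rank())          # -> 2 dimensionless groups

# The conventional groups lie in the kernel: exponent vectors (P, rho, mu, D, n)
Re = Matrix([0, 1, -1, 2, 1])     # Reynolds number  rho*n*D**2/mu
Np = Matrix([1, -1, 0, -5, -3])   # power number     P/(rho*n**3*D**5)
print("A*Re =", list(A * Re))     # -> [0, 0, 0]
print("A*Np =", list(A * Np))     # -> [0, 0, 0]
```

Any other dimensionless combination of the five variables is a product of powers of these two, which is why the choice of pi groups is not unique.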
0.770952
0.993537
0.765969
Third law of thermodynamics
The third law of thermodynamics states that the entropy of a closed system at thermodynamic equilibrium approaches a constant value when its temperature approaches absolute zero. This constant value cannot depend on any other parameters characterizing the system, such as pressure or applied magnetic field. At absolute zero (zero kelvins) the system must be in a state with the minimum possible energy. Entropy is related to the number of accessible microstates, and there is typically one unique state (called the ground state) with minimum energy. In such a case, the entropy at absolute zero will be exactly zero. If the system does not have a well-defined order (if its order is glassy, for example), then there may remain some finite entropy as the system is brought to very low temperatures, either because the system becomes locked into a configuration with non-minimal energy or because the minimum energy state is non-unique. The constant value is called the residual entropy of the system. Formulations The third law has many formulations, some more general than others, some equivalent, and some neither more general nor equivalent. The Planck statement applies only to perfect crystalline substances:As temperature falls to zero, the entropy of any pure crystalline substance tends to a universal constant. That is, , where is a universal constant that applies for all possible crystals, of all possible sizes, in all possible external constraints. So it can be taken as zero, giving . The Nernst statement concerns thermodynamic processes at a fixed, low temperature, for condensed systems, which are liquids and solids: The entropy change associated with any condensed system undergoing a reversible isothermal process approaches zero as the temperature at which it is performed approaches 0 K. That is, . Or equivalently, At absolute zero, the entropy change becomes independent of the process path. That is, where represents a change in the state variable . The unattainability principle of Nernst: It is impossible for any process, no matter how idealized, to reduce the entropy of a system to its absolute-zero value in a finite number of operations. This principle implies that cooling a system to absolute zero would require an infinite number of steps or an infinite amount of time. The statement in adiabatic accessibility: It is impossible to start from a state of positive temperature, and adiabatically reach a state with zero temperature. The Einstein statement: The entropy of any substance approaches a finite value as the temperature approaches absolute zero. That is, where is the entropy, the zero-point entropy is finite-valued, is the temperature, and represents other relevant state variables. This implies that the heat capacity of a substance must (uniformly) vanish at absolute zero, as otherwise the entropy would diverge. There is also a formulation as the impossibility of "perpetual motion machines of the third kind". History The third law was developed by chemist Walther Nernst during the years 1906 to 1912 and is therefore often referred to as the Nernst heat theorem, or sometimes the Nernst-Simon heat theorem to include the contribution of Nernst's doctoral student Francis Simon. The third law of thermodynamics states that the entropy of a system at absolute zero is a well-defined constant. This is because a system at zero temperature exists in its ground state, so that its entropy is determined only by the degeneracy of the ground state. 
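Because the zero-temperature entropy is fixed by the ground-state degeneracy, the residual molar entropy of a disordered solid can be estimated directly as S0 = R ln g, where g is the number of ground-state configurations per formula unit. The short sketch below evaluates this for a two-fold degenerate ground state and for Pauling's g = 3/2 estimate for proton-disordered ice Ih, both cases that are discussed later in this article; the code itself is only an illustration.

```python
import math

# Residual molar entropy S0 = R*ln(g) for a solid whose ground state has
# g equivalent configurations per formula unit.
R = 8.314462618  # molar gas constant, J/(mol*K)

cases = [
    ("two-fold degenerate ground state (e.g. net half-integer spin)", 2.0),
    ("Pauling's estimate for proton-disordered ice Ih", 1.5),
]
for label, g in cases:
    print(f"{label}: S0 = R*ln({g}) = {R * math.log(g):.2f} J/(mol*K)")
```

These values of a few joules per mole-kelvin are small but non-negligible on a molar scale, which is why glasses, ice Ih, and frustrated magnets retain a measurable residual entropy as the temperature approaches zero.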
In 1912 Nernst stated the law thus: "It is impossible for any procedure to lead to the isotherm T = 0 in a finite number of steps." An alternative version of the third law of thermodynamics was enunciated by Gilbert N. Lewis and Merle Randall in 1923: If the entropy of each element in some (perfect) crystalline state be taken as zero at the absolute zero of temperature, every substance has a finite positive entropy; but at the absolute zero of temperature the entropy may become zero, and does so become in the case of perfect crystalline substances. This version states that not only will the entropy change ΔS reach zero at 0 K, but the entropy S itself will also reach zero, as long as the crystal has a ground state with only one configuration. Some crystals form defects which cause a residual entropy. This residual entropy disappears when the kinetic barriers to transitioning to one ground state are overcome. With the development of statistical mechanics, the third law of thermodynamics (like the other laws) changed from a fundamental law (justified by experiments) to a derived law (derived from even more basic laws). The basic law from which it is primarily derived is the statistical-mechanics definition of entropy for a large system, S = kB ln Ω, where S is the entropy, kB is the Boltzmann constant, and Ω is the number of microstates consistent with the macroscopic configuration. The counting of states is from the reference state of absolute zero, which corresponds to the entropy of S = 0. Explanation In simple terms, the third law states that the entropy of a perfect crystal of a pure substance approaches zero as the temperature approaches zero. The alignment of a perfect crystal leaves no ambiguity as to the location and orientation of each part of the crystal. As the energy of the crystal is reduced, the vibrations of the individual atoms are reduced to nothing, and the crystal becomes the same everywhere. The third law provides an absolute reference point for the determination of entropy at any other temperature. The entropy of a closed system, determined relative to this zero point, is then the absolute entropy of that system. Mathematically, the absolute entropy of any system at zero temperature is the natural log of the number of ground states times the Boltzmann constant kB. The entropy of a perfect crystal lattice as defined by Nernst's theorem is zero provided that its ground state is unique, because ln(1) = 0. If the system is composed of one billion atoms that are all alike and lie within the matrix of a perfect crystal, the number of combinations of one billion identical things taken one billion at a time is Ω = 1. Hence S − S0 = kB ln Ω = kB ln 1 = 0. The difference is zero; hence the initial entropy S0 can be any selected value so long as all other such calculations include that as the initial entropy. As a result, the initial entropy value of zero is selected for convenience. Example: Entropy change of a crystal lattice heated by an incoming photon Suppose a system consisting of a crystal lattice with volume V of N identical atoms at T = 0 K, and an incoming photon of wavelength λ and energy ε. Initially, there is only one accessible microstate, so S0 = kB ln Ω = kB ln 1 = 0. Let us assume the crystal lattice absorbs the incoming photon. There is a unique atom in the lattice that interacts and absorbs this photon. So after absorption, there are N possible microstates accessible by the system, each corresponding to one excited atom while the other atoms remain in the ground state. The entropy, energy, and temperature of the closed system rise and can be calculated.
The entropy change is From the second law of thermodynamics: Hence Calculating entropy change: We assume and . The energy change of the system as a result of absorbing the single photon whose energy is : The temperature of the closed system rises by This can be interpreted as the average temperature of the system over the range from . A single atom is assumed to absorb the photon, but the temperature and entropy change characterizes the entire system. Systems with non-zero entropy at absolute zero An example of a system that does not have a unique ground state is one whose net spin is a half-integer, for which time-reversal symmetry gives two degenerate ground states. For such systems, the entropy at zero temperature is at least (which is negligible on a macroscopic scale). Some crystalline systems exhibit geometrical frustration, where the structure of the crystal lattice prevents the emergence of a unique ground state. Ground-state helium (unless under pressure) remains liquid. Glasses and solid solutions retain significant entropy at 0 K, because they are large collections of nearly degenerate states, in which they become trapped out of equilibrium. Another example of a solid with many nearly-degenerate ground states, trapped out of equilibrium, is ice Ih, which has "proton disorder". For the entropy at absolute zero to be zero, the magnetic moments of a perfectly ordered crystal must themselves be perfectly ordered; from an entropic perspective, this can be considered to be part of the definition of a "perfect crystal". Only ferromagnetic, antiferromagnetic, and diamagnetic materials can satisfy this condition. However, ferromagnetic materials do not, in fact, have zero entropy at zero temperature, because the spins of the unpaired electrons are all aligned and this gives a ground-state spin degeneracy. Materials that remain paramagnetic at 0 K, by contrast, may have many nearly degenerate ground states (for example, in a spin glass), or may retain dynamic disorder (a quantum spin liquid). Consequences Absolute zero The third law is equivalent to the statement that It is impossible by any procedure, no matter how idealized, to reduce the temperature of any closed system to zero temperature in a finite number of finite operations. The reason that cannot be reached according to the third law is explained as follows: Suppose that the temperature of a substance can be reduced in an isentropic process by changing the parameter X from X2 to X1. One can think of a multistage nuclear demagnetization setup where a magnetic field is switched on and off in a controlled way. If there were an entropy difference at absolute zero, could be reached in a finite number of steps. However, at T = 0 there is no entropy difference, so an infinite number of steps would be needed. The process is illustrated in Fig. 1. Example: magnetic refrigeration To be concrete, we imagine that we are refrigerating magnetic material. Suppose we have a large bulk of paramagnetic salt and an adjustable external magnetic field in the vertical direction. Let the parameter represent the external magnetic field. At the same temperature, if the external magnetic field is strong, then the internal atoms in the salt would strongly align with the field, so the disorder (entropy) would decrease. Therefore, in Fig. 1, the curve for is the curve for lower magnetic field, and the curve for is the curve for higher magnetic field. The refrigeration process repeats the following two steps: Isothermal process. 
Here, we have a chunk of salt in magnetic field and temperature . We divide the chunk into two parts: a large part playing the role of "environment", and a small part playing the role of "system". We slowly increase the magnetic field on the system to , but keep the magnetic field constant on the environment. The atoms in the system would lose directional degrees of freedom (DOF), and the energy in the directional DOF would be squeezed out into the vibrational DOF. This makes it slightly hotter, and then it would lose thermal energy to the environment, to remain in the same temperature . (The environment is now discarded.) Isentropic cooling. Here, the system is wrapped in adiathermal covering, and the external magnetic field is slowly lowered to . This frees up the direction DOF, absorbing some energy from the vibrational DOF. The effect is that the system has the same entropy, but reaches a lower temperature . At every two-step of the process, the mass of the system decreases, as we discard more and more salt as the "environment". However, if the equations of state for this salt is as shown in Fig. 1 (left), then we can start with a large but finite amount of salt, and end up with a small piece of salt that has . Specific heat A non-quantitative description of his third law that Nernst gave at the very beginning was simply that the specific heat of a material can always be made zero by cooling it down far enough. A modern, quantitative analysis follows. Suppose that the heat capacity of a sample in the low temperature region has the form of a power law asymptotically as , and we wish to find which values of are compatible with the third law. We have By the discussion of third law above, this integral must be bounded as , which is only possible if . So the heat capacity must go to zero at absolute zero if it has the form of a power law. The same argument shows that it cannot be bounded below by a positive constant, even if we drop the power-law assumption. On the other hand, the molar specific heat at constant volume of a monatomic classical ideal gas, such as helium at room temperature, is given by with the molar ideal gas constant. But clearly a constant heat capacity does not satisfy Eq.. That is, a gas with a constant heat capacity all the way to absolute zero violates the third law of thermodynamics. We can verify this more fundamentally by substituting in Eq., which yields In the limit this expression diverges, again contradicting the third law of thermodynamics. The conflict is resolved as follows: At a certain temperature the quantum nature of matter starts to dominate the behavior. Fermi particles follow Fermi–Dirac statistics and Bose particles follow Bose–Einstein statistics. In both cases the heat capacity at low temperatures is no longer temperature independent, even for ideal gases. For Fermi gases with the Fermi temperature TF given by Here is the Avogadro constant, the molar volume, and the molar mass. For Bose gases with given by The specific heats given by Eq. and both satisfy Eq.. Indeed, they are power laws with and respectively. Even within a purely classical setting, the density of a classical ideal gas at fixed particle number becomes arbitrarily high as goes to zero, so the interparticle spacing goes to zero. The assumption of non-interacting particles presumably breaks down when they are sufficiently close together, so the value of gets modified away from its ideal constant value. Vapor pressure The only liquids near absolute zero are 3He and 4He. 
Their heat of evaporation has a limiting value given by with and constant. If we consider a container partly filled with liquid and partly gas, the entropy of the liquid–gas mixture is where is the entropy of the liquid and is the gas fraction. Clearly the entropy change during the liquid–gas transition ( from 0 to 1) diverges in the limit of T→0. This violates Eq.. Nature solves this paradox as follows: at temperatures below about 100 mK, the vapor pressure is so low that the gas density is lower than the best vacuum in the universe. In other words, below 100 mK there is simply no gas above the liquid. Miscibility If liquid helium with mixed 3He and 4He were cooled to absolute zero, the liquid must have zero entropy. This either means they are ordered perfectly as a mixed liquid, which is impossible for a liquid, or that they fully separate out into two layers of pure liquid. This is precisely what happens. For example, if a solution with 3 3He to 2 4He atoms were cooled, it would start the separation at 0.9 K, purifying more and more, until at absolute zero, when the upper layer becomes purely 3He, and the lower layer becomes purely 4He. Surface tension Let be the surface tension of liquid, then the entropy per area is . So if a liquid can exist down to absolute zero, then since its entropy is constant no matter its shape at absolute zero, its entropy per area must converge to zero. That is, its surface tension would become constant at low temperatures. In particular, the surface tension of 3He is well-approximated by for some parameters . Latent heat of melting The melting curves of 3He and 4He both extend down to absolute zero at finite pressure. At the melting pressure, liquid and solid are in equilibrium. The third law demands that the entropies of the solid and liquid are equal at . As a result, the latent heat of melting is zero, and the slope of the melting curve extrapolates to zero as a result of the Clausius–Clapeyron equation. Thermal expansion coefficient The thermal expansion coefficient is defined as With the Maxwell relation and Eq. with it is shown that So the thermal expansion coefficient of all materials must go to zero at zero kelvin. See also Adiabatic process Ground state Laws of thermodynamics Quantum thermodynamics Residual entropy Thermodynamic entropy Timeline of thermodynamics, statistical mechanics, and random processes Quantum heat engines and refrigerators References Further reading Goldstein, Martin & Inge F. (1993) The Refrigerator and the Universe. Cambridge MA: Harvard University Press. . Chpt. 14 is a nontechnical discussion of the Third Law, one including the requisite elementary quantum mechanics. 3
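The power-law argument for the specific heat given above can be written out explicitly; assuming a low-temperature heat capacity of the form C(T, X) = a T^α:
\[ S(T, X) - S(0, X) = \int_0^{T} \frac{C(T', X)}{T'}\, \mathrm{d}T' = \int_0^{T} a\, T'^{\,\alpha - 1}\, \mathrm{d}T' = \frac{a}{\alpha} T^{\alpha} , \]
which is finite only for α > 0, so C must vanish at absolute zero. For the classical monatomic ideal gas with constant C_V = (3/2)R the same integral behaves as (3/2)R ln(T/T'), which diverges at the lower limit, reproducing the contradiction with the third law noted in the text.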
0.768258
0.997004
0.765956
Circular polarization
In electrodynamics, circular polarization of an electromagnetic wave is a polarization state in which, at each point, the electromagnetic field of the wave has a constant magnitude and is rotating at a constant rate in a plane perpendicular to the direction of the wave. In electrodynamics, the strength and direction of an electric field is defined by its electric field vector. In the case of a circularly polarized wave, the tip of the electric field vector, at a given point in space, relates to the phase of the light as it travels through time and space. At any instant of time, the electric field vector of the wave indicates a point on a helix oriented along the direction of propagation. A circularly polarized wave can rotate in one of two possible senses: right-handed circular polarization (RHCP) in which the electric field vector rotates in a right-hand sense with respect to the direction of propagation, and left-handed circular polarization (LHCP) in which the vector rotates in a left-hand sense. Circular polarization is a limiting case of elliptical polarization. The other special case is the easier-to-understand linear polarization. All three terms were coined by Augustin-Jean Fresnel, in a memoir read to the French Academy of Sciences on 9 December 1822. Fresnel had first described the case of circular polarization, without yet naming it, in 1821. The phenomenon of polarization arises as a consequence of the fact that light behaves as a two-dimensional transverse wave. Circular polarization occurs when the two orthogonal electric field component vectors are of equal magnitude and are out of phase by exactly 90°, or one-quarter wavelength. Characteristics In a circularly polarized electromagnetic wave, the individual electric field vectors, as well as their combined vector, have a constant magnitude, and with changing phase angle. Given that this is a plane wave, each vector represents the magnitude and direction of the electric field for an entire plane that is perpendicular to the optical axis. Specifically, given that this is a circularly polarized plane wave, these vectors indicate that the electric field, from plane to plane, has a constant strength while its direction steadily rotates. Refer to these two images in the plane wave article to better appreciate this dynamic. This light is considered to be right-hand, clockwise circularly polarized if viewed by the receiver. Since this is an electromagnetic wave, each electric field vector has a corresponding, but not illustrated, magnetic field vector that is at a right angle to the electric field vector and proportional in magnitude to it. As a result, the magnetic field vectors would trace out a second helix if displayed. Circular polarization is often encountered in the field of optics and, in this section, the electromagnetic wave will be simply referred to as light. The nature of circular polarization and its relationship to other polarizations is often understood by thinking of the electric field as being divided into two components that are perpendicular to each other. The vertical component and its corresponding plane are illustrated in blue, while the horizontal component and its corresponding plane are illustrated in green. Notice that the rightward (relative to the direction of travel) horizontal component leads the vertical component by one quarter of a wavelength, a 90° phase difference. 
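To see how two equal-amplitude components in quadrature give a field of constant magnitude, consider the transverse field at a fixed point in space, writing E₀ for the common amplitude and ω for the angular frequency (a sketch in simplified notation):
\[ \mathbf{E}(t) = E_0\, \hat{\mathbf{x}} \cos(\omega t) \pm E_0\, \hat{\mathbf{y}} \sin(\omega t) , \qquad |\mathbf{E}(t)| = E_0 \sqrt{\cos^2 \omega t + \sin^2 \omega t} = E_0 . \]
The two linear components individually oscillate, but their 90° offset keeps the resultant length fixed while its direction rotates at the angular rate ω; the sign choice selects the sense of rotation.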
It is this quadrature phase relationship that creates the helix and causes the points of maximum magnitude of the vertical component to correspond with the points of zero magnitude of the horizontal component, and vice versa. The result of this alignment are select vectors, corresponding to the helix, which exactly match the maxima of the vertical and horizontal components. To appreciate how this quadrature phase shift corresponds to an electric field that rotates while maintaining a constant magnitude, imagine a dot traveling clockwise in a circle. Consider how the vertical and horizontal displacements of the dot, relative to the center of the circle, vary sinusoidally in time and are out of phase by one quarter of a cycle. The displacements are said to be out of phase by one quarter of a cycle because the horizontal maximum displacement (toward the left) is reached one quarter of a cycle before the vertical maximum displacement is reached. Now referring again to the illustration, imagine the center of the circle just described, traveling along the axis from the front to the back. The circling dot will trace out a helix with the displacement toward our viewing left, leading the vertical displacement. Just as the horizontal and vertical displacements of the rotating dot are out of phase by one quarter of a cycle in time, the magnitude of the horizontal and vertical components of the electric field are out of phase by one quarter of a wavelength. The next pair of illustrations is that of left-handed, counterclockwise circularly polarized light when viewed by the receiver. Because it is left-handed, the rightward (relative to the direction of travel) horizontal component is now lagging the vertical component by one quarter of a wavelength, rather than leading it. Reversal of handedness Waveplate To convert circularly polarized light to the other handedness, one can use a half-waveplate. A half-waveplate shifts a given linear component of light one half of a wavelength relative to its orthogonal linear component. Reflection The handedness of polarized light is reversed reflected off a surface at normal incidence. Upon such reflection, the rotation of the plane of polarization of the reflected light is identical to that of the incident field. However, with propagation now in the opposite direction, the same rotation direction that would be described as "right-handed" for the incident beam, is "left-handed" for propagation in the reverse direction, and vice versa. Aside from the reversal of handedness, the ellipticity of polarization is also preserved (except in cases of reflection by a birefringent surface). Note that this principle only holds strictly for light reflected at normal incidence. For instance, right circularly polarized light reflected from a dielectric surface at grazing incidence (an angle beyond the Brewster angle) will still emerge as right-handed, but elliptically, polarized. Light reflected by a metal at non-normal incidence will generally have its ellipticity changed as well. Such situations may be solved by decomposing the incident circular (or other) polarization into components of linear polarization parallel and perpendicular to the plane of incidence, commonly denoted p and s respectively. The reflected components in the p and s linear polarizations are found by applying the Fresnel coefficients of reflection, which are generally different for those two linear polarizations. 
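A minimal Jones-style sketch of that decomposition, with r_p and r_s denoting the amplitude reflection coefficients of the two linear components and the incident circular state written in the p-s basis (notation introduced only for this sketch):
\[ \mathbf{E}_{\mathrm{in}} = \frac{1}{\sqrt{2}} \begin{pmatrix} 1 \\ i \end{pmatrix} \;\longmapsto\; \mathbf{E}_{\mathrm{out}} = \frac{1}{\sqrt{2}} \begin{pmatrix} r_p \\ i\, r_s \end{pmatrix} . \]
Whenever r_p ≠ r_s, as at oblique incidence on a dielectric or a metal, the two reflected components differ in magnitude (and generally in phase), so the reflected light is elliptically rather than circularly polarized; only equal coefficients leave the circular state intact apart from the reversal of handedness described above.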
Only in the special case of normal incidence, where there is no distinction between p and s, are the Fresnel coefficients for the two components identical, leading to the above property. Conversion to linear polarization Circularly polarized light can be converted into linearly polarized light by passing it through a quarter-waveplate. Passing linearly polarized light through a quarter-waveplate with its axes at 45° to its polarization axis will convert it to circular polarization. In fact, this is the most common way of producing circular polarization in practice. Note that passing linearly polarized light through a quarter-waveplate at an angle other than 45° will generally produce elliptical polarization. Handedness conventions Circular polarization may be referred to as right-handed or left-handed, and clockwise or anti-clockwise, depending on the direction in which the electric field vector rotates. Unfortunately, two opposing historical conventions exist. From the point of view of the source Using this convention, polarization is defined from the point of view of the source. When using this convention, left- or right-handedness is determined by pointing one's left or right thumb from the source, in the direction that the wave is propagating, and matching the curling of one's fingers to the direction of the temporal rotation of the field at a given point in space. When determining if the wave is clockwise or anti-clockwise circularly polarized, one again takes the point of view of the source, and while looking from the source and in the direction of the wave's propagation, one observes the direction of the field's temporal rotation. Using this convention, the electric field vector of a left-handed circularly polarized wave is as follows: As a specific example, refer to the circularly polarized wave in the first animation. Using this convention, that wave is defined as right-handed because when one points one's right thumb in the same direction of the wave's propagation, the fingers of that hand curl in the same direction of the field's temporal rotation. It is considered clockwise circularly polarized because, from the point of view of the source, looking in the same direction of the wave's propagation, the field rotates in the clockwise direction. The second animation is that of left-handed or anti-clockwise light, using this same convention. This convention is in conformity with the Institute of Electrical and Electronics Engineers (IEEE) standard and, as a result, it is generally used in the engineering community. Quantum physicists also use this convention of handedness because it is consistent with their convention of handedness for a particle's spin. Radio astronomers also use this convention in accordance with an International Astronomical Union (IAU) resolution made in 1973. From the point of view of the receiver In this alternative convention, polarization is defined from the point of view of the receiver. Using this convention, left- or right-handedness is determined by pointing one's left or right thumb the source, the direction of propagation, and then matching the curling of one's fingers to the temporal rotation of the field. When using this convention, in contrast to the other convention, the defined handedness of the wave matches the handedness of the screw type nature of the field in space. 
Specifically, if one freezes a right-handed wave in time, when one curls the fingers of one's right hand around the helix, the thumb will point in the direction of progression for the helix, given the sense of rotation. Note that, in the context of the nature of all screws and helices, it does not matter in which direction you point your thumb when determining its handedness. When determining if the wave is clockwise or anti-clockwise circularly polarized, one again takes the point of view of the receiver and, while looking the source, the direction of propagation, one observes the direction of the field's temporal rotation. Just as in the other convention, right-handedness corresponds to a clockwise rotation, and left-handedness corresponds to an anti-clockwise rotation. Many optics textbooks use this second convention. It is also used by SPIE as well as the International Union of Pure and Applied Chemistry (IUPAC). Uses of the two conventions As stated earlier, there is significant confusion with regards to these two conventions. As a general rule, the engineering, quantum physics, and radio astronomy communities use the first convention, in which the wave is observed from the point of view of the source. In many physics textbooks dealing with optics, the second convention is used, in which the light is observed from the point of view of the receiver. To avoid confusion, it is good practice to specify "as defined from the point of view of the source" or "as defined from the point of view of the receiver" when discussing polarization matters. The archive of the US Federal Standard 1037C proposes two contradictory conventions of handedness. Note that the IEEE defines RHCP and LHCP the opposite as those used by physicists. The IEEE 1979 Antenna Standard will show RHCP on the South Pole of the Poincare Sphere. The IEEE defines RHCP using the right hand with thumb pointing in the direction of transmit, and the fingers showing the direction of rotation of the E field with time. The rationale for the opposite conventions used by Physicists and Engineers is that Astronomical Observations are always done with the incoming wave traveling toward the observer, where as for most engineers, they are assumed to be standing behind the transmitter watching the wave traveling away from them. This article is not using the IEEE 1979 Antenna Standard and is not using the +t convention typically used in IEEE work. FM radio FM broadcast radio stations sometimes employ circular polarization to improve signal penetration into buildings and vehicles. It is one example of what the International Telecommunication Union refers to as "mixed polarization", i.e. radio emissions that include both horizontally- and vertically-polarized components. In the United States, Federal Communications Commission regulations state that horizontal polarization is the standard for FM broadcasting, but that "circular or elliptical polarization may be employed if desired". Dichroism Circular dichroism (CD) is the differential absorption of left- and right-handed circularly polarized light. Circular dichroism is the basis of a form of spectroscopy that can be used to determine the optical isomerism and secondary structure of molecules. In general, this phenomenon will be exhibited in absorption bands of any optically active molecule. As a consequence, circular dichroism is exhibited by most biological molecules, because of the dextrorotary (e.g., some sugars) and levorotary (e.g., some amino acids) molecules they contain. 
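Quantitatively, circular dichroism is commonly reported as the difference in absorbance, or in molar absorptivity, between the two circular components (ΔA and Δε are the conventional symbols, introduced here only for this definition):
\[ \Delta A = A_{\mathrm{L}} - A_{\mathrm{R}} , \qquad \Delta \varepsilon = \varepsilon_{\mathrm{L}} - \varepsilon_{\mathrm{R}} , \]
where the subscripts denote left- and right-handed circularly polarized light; a non-zero Δε across an absorption band is the signal used to probe optical isomerism and secondary structure.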
Noteworthy as well is that a secondary structure will also impart a distinct CD to its respective molecules. Therefore, the alpha helix, beta sheet and random coil regions of proteins and the double helix of nucleic acids have CD spectral signatures representative of their structures. Also, under the right conditions, even non-chiral molecules will exhibit magnetic circular dichroism — that is, circular dichroism induced by a magnetic field. Luminescence Circularly polarized luminescence (CPL) can occur when either a luminophore or an ensemble of luminophores is chiral. The extent to which emissions are polarized is quantified in the same way it is for circular dichroism, in terms of the dissymmetry factor, also sometimes referred to as the anisotropy factor. This value is given by: where corresponds to the quantum yield of left-handed circularly polarized light, and to that of right-handed light. The maximum absolute value of gem, corresponding to purely left- or right-handed circular polarization, is therefore 2. Meanwhile, the smallest absolute value that gem can achieve, corresponding to linearly polarized or unpolarized light, is zero. Mathematical description The classical sinusoidal plane wave solution of the electromagnetic wave equation for the electric and magnetic fields is: where k is the wavenumber; is the angular frequency of the wave; is an orthogonal matrix whose columns span the transverse x-y plane; and is the speed of light. Here, is the amplitude of the field, and is the normalized Jones vector in the x-y plane. If is rotated by radians with respect to and the x amplitude equals the y amplitude, the wave is circularly polarized. The Jones vector is: where the plus sign indicates left circular polarization, and the minus sign indicates right circular polarization. In the case of circular polarization, the electric field vector of constant magnitude rotates in the x-y plane. If basis vectors are defined such that: and: then the polarization state can be written in the "R-L basis" as: where: and: Antennas A number of different types of antenna elements can be used to produce circularly polarized (or nearly so) radiation; following Balanis, one can use dipole elements: "... two crossed dipoles provide the two orthogonal field components.... If the two dipoles are identical, the field intensity of each along zenith ... would be of the same intensity. Also, if the two dipoles were fed with a 90° degree time-phase difference (phase quadrature), the polarization along zenith would be circular.... One way to obtain the 90° time-phase difference between the two orthogonal field components, radiated respectively by the two dipoles, is by feeding one of the two dipoles with a transmission line which is 1/4 wavelength longer or shorter than that of the other," p.80; or helical elements: "To achieve circular polarization [in axial or end-fire mode] ... the circumference C of the helix must be ... with C/wavelength = 1 near optimum, and the spacing about S = wavelength/4," p.571; or patch elements: "... circular and elliptical polarizations can be obtained using various feed arrangements or slight modifications made to the elements.... Circular polarization can be obtained if two orthogonal modes are excited with a 90° time-phase difference between them. This can be accomplished by adjusting the physical dimensions of the patch.... For a square patch element, the easiest way to excite ideally circular polarization is to feed the element at two adjacent edges.... 
The quadrature phase difference is obtained by feeding the element with a 90° power divider," p.859. In quantum mechanics In the quantum mechanical view, light is composed of photons. Polarization is a manifestation of the spin angular momentum of light. More specifically, in quantum mechanics, the direction of spin of a photon is tied to the handedness of the circularly polarized light, and the spin of a beam of photons is similar to the spin of a beam of particles, such as electrons. In nature Only a few mechanisms in nature are known to systematically produce circularly polarized light. In 1911, Albert Abraham Michelson discovered that light reflected from the golden scarab beetle Chrysina resplendens is preferentially left-polarized. Since then, circular polarization has been measured in several other scarab beetles such as Chrysina gloriosa, as well as some crustaceans such as the mantis shrimp. In these cases, the underlying mechanism is the molecular-level helicity of the chitinous cuticle. The bioluminescence of the larvae of fireflies is also circularly polarized, as reported in 1980 for the species Photuris lucicrescens and Photuris versicolor. For fireflies, it is more difficult to find a microscopic explanation for the polarization, because the left and right lanterns of the larvae were found to emit polarized light of opposite senses. The authors suggest that the light begins with a linear polarization due to inhomogeneities inside aligned photocytes, and it picks up circular polarization while passing through linearly birefringent tissue. Circular polarization has been detected in light reflected from leaves and photosynthetic microbes. Water-air interfaces provide another source of circular polarization. Sunlight that gets scattered back up towards the surface is linearly polarized. If this light is then totally internally reflected back down, its vertical component undergoes a phase shift. To an underwater observer looking up, the faint light outside Snell's window therefore is (partially) circularly polarized. Weaker sources of circular polarization in nature include multiple scattering by linear polarizers, as in the circular polarization of starlight, and selective absorption by circularly dichroic media. Radio emission from pulsars can be strongly circularly polarized. Two species of mantis shrimp have been reported to be able to detect circular polarized light. See also Polarizer 3D film Chirality Sinusoidal plane-wave solutions of the electromagnetic wave equation Starlight polarization Waveplate References Further reading External links Circularly polarized light: beetles and displays Article on the mantis shrimp and circular polarization Animation of Circular Polarization (on YouTube) Comparison of Circular Polarization with Linear and Elliptical Polarizations (YouTube Animation) Reversal of handedness of circularly polarized light by mirror. A demonstration – simple, cheap & instructive Concepts in astrophysics Polarization (waves) Stellar astronomy
0.769313
0.995618
0.765942
Automatic differentiation
In mathematics and computer algebra, automatic differentiation (auto-differentiation, autodiff, or AD), also called algorithmic differentiation, computational differentiation, is a set of techniques to evaluate the partial derivative of a function specified by a computer program. Automatic differentiation exploits the fact that every computer calculation, no matter how complicated, executes a sequence of elementary arithmetic operations (addition, subtraction, multiplication, division, etc.) and elementary functions (exp, log, sin, cos, etc.). By applying the chain rule repeatedly to these operations, partial derivatives of arbitrary order can be computed automatically, accurately to working precision, and using at most a small constant factor of more arithmetic operations than the original program. Difference from other differentiation methods Automatic differentiation is distinct from symbolic differentiation and numerical differentiation. Symbolic differentiation faces the difficulty of converting a computer program into a single mathematical expression and can lead to inefficient code. Numerical differentiation (the method of finite differences) can introduce round-off errors in the discretization process and cancellation. Both of these classical methods have problems with calculating higher derivatives, where complexity and errors increase. Finally, both of these classical methods are slow at computing partial derivatives of a function with respect to many inputs, as is needed for gradient-based optimization algorithms. Automatic differentiation solves all of these problems. Applications Automatic differentiation is particularly important in the field of machine learning. For example, it allows one to implement backpropagation in a neural network without a manually-computed derivative. Forward and reverse accumulation Chain rule of partial derivatives of composite functions Fundamental to automatic differentiation is the decomposition of differentials provided by the chain rule of partial derivatives of composite functions. For the simple composition the chain rule gives Two types of automatic differentiation Usually, two distinct modes of automatic differentiation are presented. forward accumulation (also called bottom-up, forward mode, or tangent mode) reverse accumulation (also called top-down, reverse mode, or adjoint mode) Forward accumulation specifies that one traverses the chain rule from inside to outside (that is, first compute and then and at last ), while reverse accumulation has the traversal from outside to inside (first compute and then and at last ). More succinctly, Forward accumulation computes the recursive relation: with , and, Reverse accumulation computes the recursive relation: with . The value of the partial derivative, called seed, is propagated forward or backward and is initially or . Forward accumulation evaluates the function and calculates the derivative with respect to one independent variable in one pass. For each independent variable a separate pass is therefore necessary in which the derivative with respect to that independent variable is set to one and of all others to zero. In contrast, reverse accumulation requires the evaluated partial functions for the partial derivatives. Reverse accumulation therefore evaluates the function first and calculates the derivatives with respect to all independent variables in an additional pass. Which of these two types should be used depends on the sweep count. 
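To make the two traversal orders concrete, consider a composition y = f(g(h(x))) with intermediate values w₀ = x, w₁ = h(w₀), w₂ = g(w₁), w₃ = f(w₂) = y (a sketch; the wᵢ are introduced only for this illustration). The chain rule factors dy/dx into ∂y/∂w₂ · ∂w₂/∂w₁ · ∂w₁/∂w₀, and the two modes evaluate this product in opposite orders:
\[ \frac{\partial w_i}{\partial x} = \frac{\partial w_i}{\partial w_{i-1}} \, \frac{\partial w_{i-1}}{\partial x} \quad \text{(forward, seeded with } \partial w_0 / \partial x = 1 \text{)}, \]
\[ \frac{\partial y}{\partial w_i} = \frac{\partial y}{\partial w_{i+1}} \, \frac{\partial w_{i+1}}{\partial w_i} \quad \text{(reverse, seeded with } \partial y / \partial w_3 = 1 \text{)}. \]
Forward mode therefore carries derivatives with respect to one chosen input through the program, while reverse mode carries derivatives of one chosen output back through it.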
The computational complexity of one sweep is proportional to the complexity of the original code. Forward accumulation is more efficient than reverse accumulation for functions with as only sweeps are necessary, compared to sweeps for reverse accumulation. Reverse accumulation is more efficient than forward accumulation for functions with as only sweeps are necessary, compared to sweeps for forward accumulation. Backpropagation of errors in multilayer perceptrons, a technique used in machine learning, is a special case of reverse accumulation. Forward accumulation was introduced by R.E. Wengert in 1964. According to Andreas Griewank, reverse accumulation has been suggested since the late 1960s, but the inventor is unknown. Seppo Linnainmaa published reverse accumulation in 1976. Forward accumulation In forward accumulation AD, one first fixes the independent variable with respect to which differentiation is performed and computes the derivative of each sub-expression recursively. In a pen-and-paper calculation, this involves repeatedly substituting the derivative of the inner functions in the chain rule: This can be generalized to multiple variables as a matrix product of Jacobians. Compared to reverse accumulation, forward accumulation is natural and easy to implement as the flow of derivative information coincides with the order of evaluation. Each variable is augmented with its derivative (stored as a numerical value, not a symbolic expression), as denoted by the dot. The derivatives are then computed in sync with the evaluation steps and combined with other derivatives via the chain rule. Using the chain rule, if has predecessors in the computational graph: As an example, consider the function: For clarity, the individual sub-expressions have been labeled with the variables . The choice of the independent variable to which differentiation is performed affects the seed values and . Given interest in the derivative of this function with respect to , the seed values should be set to: With the seed values set, the values propagate using the chain rule as shown. Figure 2 shows a pictorial depiction of this process as a computational graph. {| class="wikitable" !Operations to compute value !!Operations to compute derivative |- | || (seed) |- | || (seed) |- | || |- | || |- | || |} To compute the gradient of this example function, which requires not only but also , an additional sweep is performed over the computational graph using the seed values . Implementation Pseudocode Forward accumulation calculates the function and the derivative (but only for one independent variable each) in one pass. The associated method call expects the expression Z to be derived with regard to a variable V. The method returns a pair of the evaluated function and its derivative. The method traverses the expression tree recursively until a variable is reached. If the derivative with respect to this variable is requested, its derivative is 1, 0 otherwise. Then the partial function as well as the partial derivative are evaluated. 
tuple<float, float> evaluateAndDerive(Expression Z, Variable V) {
   if isVariable(Z)
      if (Z = V) return {valueOf(Z), 1};
      else return {valueOf(Z), 0};
   else if (Z = A + B)
      {a, a'} = evaluateAndDerive(A, V);
      {b, b'} = evaluateAndDerive(B, V);
      return {a + b, a' + b'};
   else if (Z = A - B)
      {a, a'} = evaluateAndDerive(A, V);
      {b, b'} = evaluateAndDerive(B, V);
      return {a - b, a' - b'};
   else if (Z = A * B)
      {a, a'} = evaluateAndDerive(A, V);
      {b, b'} = evaluateAndDerive(B, V);
      return {a * b, b * a' + a * b'};
}
C++
#include <iostream>

struct ValueAndPartial { float value, partial; };
struct Variable;

struct Expression {
   virtual ValueAndPartial evaluateAndDerive(Variable *variable) = 0;
};

struct Variable: public Expression {
   float value;
   Variable(float value): value(value) {}
   ValueAndPartial evaluateAndDerive(Variable *variable) {
      float partial = (this == variable) ? 1.0f : 0.0f;
      return {value, partial};
   }
};

struct Plus: public Expression {
   Expression *a, *b;
   Plus(Expression *a, Expression *b): a(a), b(b) {}
   ValueAndPartial evaluateAndDerive(Variable *variable) {
      auto [valueA, partialA] = a->evaluateAndDerive(variable);
      auto [valueB, partialB] = b->evaluateAndDerive(variable);
      return {valueA + valueB, partialA + partialB};
   }
};

struct Multiply: public Expression {
   Expression *a, *b;
   Multiply(Expression *a, Expression *b): a(a), b(b) {}
   ValueAndPartial evaluateAndDerive(Variable *variable) {
      auto [valueA, partialA] = a->evaluateAndDerive(variable);
      auto [valueB, partialB] = b->evaluateAndDerive(variable);
      return {valueA * valueB, valueB * partialA + valueA * partialB};
   }
};

int main() {
   // Example: Finding the partials of z = x * (x + y) + y * y at (x, y) = (2, 3)
   Variable x(2), y(3);
   Plus p1(&x, &y);
   Multiply m1(&x, &p1);
   Multiply m2(&y, &y);
   Plus z(&m1, &m2);
   float xPartial = z.evaluateAndDerive(&x).partial;
   float yPartial = z.evaluateAndDerive(&y).partial;
   std::cout << "∂z/∂x = " << xPartial << ", "
             << "∂z/∂y = " << yPartial << std::endl;
   // Output: ∂z/∂x = 7, ∂z/∂y = 8
   return 0;
}
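As a quick check of the example's output, the same partials follow by hand: expanding z = x(x + y) + y² = x² + xy + y² gives ∂z/∂x = 2x + y and ∂z/∂y = x + 2y, so at (x, y) = (2, 3)
\[ \frac{\partial z}{\partial x} = 2 \cdot 2 + 3 = 7 , \qquad \frac{\partial z}{\partial y} = 2 + 2 \cdot 3 = 8 , \]
in agreement with the program above; note that obtaining both partials required two forward evaluations, one seeded for each independent variable.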
The operations to compute the derivative using reverse accumulation are shown in the table below (note the reversed order): The data flow graph of a computation can be manipulated to calculate the gradient of its original calculation. This is done by adding an adjoint node for each primal node, connected by adjoint edges which parallel the primal edges but flow in the opposite direction. The nodes in the adjoint graph represent multiplication by the derivatives of the functions calculated by the nodes in the primal. For instance, addition in the primal causes fanout in the adjoint; fanout in the primal causes addition in the adjoint; a unary function in the primal causes in the adjoint; etc. Implementation Pseudo code Reverse accumulation requires two passes: In the forward pass, the function is evaluated first and the partial results are cached. In the reverse pass, the partial derivatives are calculated and the previously derived value is backpropagated. The corresponding method call expects the expression Z to be derived and seed with the derived value of the parent expression. For the top expression, Z derived with regard to Z, this is 1. The method traverses the expression tree recursively until a variable is reached and adds the current seed value to the derivative expression. void derive(Expression Z, float seed) { if isVariable(Z) partialDerivativeOf(Z) += seed; else if (Z = A + B) derive(A, seed); derive(B, seed); else if (Z = A - B) derive(A, seed); derive(B, -seed); else if (Z = A * B) derive(A, valueOf(B) * seed); derive(B, valueOf(A) * seed); } C++ #include <iostream> struct Expression { float value; virtual void evaluate() = 0; virtual void derive(float seed) = 0; }; struct Variable: public Expression { float partial; Variable(float value) { this->value = value; partial = 0.0f; } void evaluate() {} void derive(float seed) { partial += seed; } }; struct Plus: public Expression { Expression *a, *b; Plus(Expression *a, Expression *b): a(a), b(b) {} void evaluate() { a->evaluate(); b->evaluate(); value = a->value + b->value; } void derive(float seed) { a->derive(seed); b->derive(seed); } }; struct Multiply: public Expression { Expression *a, *b; Multiply(Expression *a, Expression *b): a(a), b(b) {} void evaluate() { a->evaluate(); b->evaluate(); value = a->value * b->value; } void derive(float seed) { a->derive(b->value * seed); b->derive(a->value * seed); } }; int main { // Example: Finding the partials of z = x * (x + y) + y * y at (x, y) = (2, 3) Variable x(2), y(3); Plus p1(&x, &y); Multiply m1(&x, &p1); Multiply m2(&y, &y); Plus z(&m1, &m2); z.evaluate(); std::cout << "z = " << z.value << std::endl; // Output: z = 19 z.derive(1); std::cout << "∂z/∂x = " << x.partial << ", " << "∂z/∂y = " << y.partial << std::endl; // Output: ∂z/∂x = 7, ∂z/∂y = 8 return 0; } Beyond forward and reverse accumulation Forward and reverse accumulation are just two (extreme) ways of traversing the chain rule. The problem of computing a full Jacobian of with a minimum number of arithmetic operations is known as the optimal Jacobian accumulation (OJA) problem, which is NP-complete. Central to this proof is the idea that algebraic dependencies may exist between the local partials that label the edges of the graph. In particular, two or more edge labels may be recognized as equal. The complexity of the problem is still open if it is assumed that all edge labels are unique and algebraically independent. 
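To make the reverse sweep concrete for the same running example, z = x(x + y) + y·y at (x, y) = (2, 3), label the intermediates p₁ = x + y = 5, m₁ = x·p₁ = 10, m₂ = y² = 9 (a sketch of the adjoint bookkeeping, with a bar denoting ∂z/∂(·)):
\[ \bar{z} = 1, \quad \bar{m}_1 = \bar{m}_2 = 1, \quad \bar{p}_1 = \bar{m}_1 \, x = 2 , \]
\[ \bar{x} = \bar{m}_1 \, p_1 + \bar{p}_1 = 5 + 2 = 7 , \qquad \bar{y} = \bar{m}_2 \, (2y) + \bar{p}_1 = 6 + 2 = 8 , \]
matching the program output; both partials come out of a single backward pass, at the price of having stored the forward intermediates on the tape.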
Automatic differentiation using dual numbers Forward mode automatic differentiation is accomplished by augmenting the algebra of real numbers and obtaining a new arithmetic. An additional component is added to every number to represent the derivative of a function at the number, and all arithmetic operators are extended for the augmented algebra. The augmented algebra is the algebra of dual numbers. Replace every number with the number , where is a real number, but is an abstract number with the property (an infinitesimal; see Smooth infinitesimal analysis). Using only this, regular arithmetic gives using . Now, polynomials can be calculated in this augmented arithmetic. If , then where denotes the derivative of with respect to its first argument, and , called a seed, can be chosen arbitrarily. The new arithmetic consists of ordered pairs, elements written , with ordinary arithmetics on the first component, and first order differentiation arithmetic on the second component, as described above. Extending the above results on polynomials to analytic functions gives a list of the basic arithmetic and some standard functions for the new arithmetic: and in general for the primitive function , where and are the derivatives of with respect to its first and second arguments, respectively. When a binary basic arithmetic operation is applied to mixed arguments—the pair and the real number —the real number is first lifted to . The derivative of a function at the point is now found by calculating using the above arithmetic, which gives as the result. Implementation An example implementation based on the dual number approach follows. Pseudo code C++ #include <iostream> struct Dual { float realPart, infinitesimalPart; Dual(float realPart, float infinitesimalPart=0): realPart(realPart), infinitesimalPart(infinitesimalPart) {} Dual operator+(Dual other) { return Dual( realPart + other.realPart, infinitesimalPart + other.infinitesimalPart ); } Dual operator*(Dual other) { return Dual( realPart * other.realPart, other.realPart * infinitesimalPart + realPart * other.infinitesimalPart ); } }; // Example: Finding the partials of z = x * (x + y) + y * y at (x, y) = (2, 3) Dual f(Dual x, Dual y) { return x * (x + y) + y * y; } int main { Dual x = Dual(2); Dual y = Dual(3); Dual epsilon = Dual(0, 1); Dual a = f(x + epsilon, y); Dual b = f(x, y + epsilon); std::cout << "∂z/∂x = " << a.infinitesimalPart << ", " << "∂z/∂y = " << b.infinitesimalPart << std::endl; // Output: ∂z/∂x = 7, ∂z/∂y = 8 return 0; } Vector arguments and functions Multivariate functions can be handled with the same efficiency and mechanisms as univariate functions by adopting a directional derivative operator. That is, if it is sufficient to compute , the directional derivative of at in the direction may be calculated as using the same arithmetic as above. If all the elements of are desired, then function evaluations are required. Note that in many optimization applications, the directional derivative is indeed sufficient. High order and many variables The above arithmetic can be generalized to calculate second order and higher derivatives of multivariate functions. However, the arithmetic rules quickly grow complicated: complexity is quadratic in the highest derivative degree. Instead, truncated Taylor polynomial algebra can be used. The resulting arithmetic, defined on generalized dual numbers, allows efficient computation using functions as if they were a data type. 
Once the Taylor polynomial of a function is known, the derivatives are easily extracted. Implementation Forward-mode AD is implemented by a nonstandard interpretation of the program in which real numbers are replaced by dual numbers, constants are lifted to dual numbers with a zero epsilon coefficient, and the numeric primitives are lifted to operate on dual numbers. This nonstandard interpretation is generally implemented using one of two strategies: source code transformation or operator overloading. Source code transformation (SCT) The source code for a function is replaced by an automatically generated source code that includes statements for calculating the derivatives interleaved with the original instructions. Source code transformation can be implemented for all programming languages, and it is also easier for the compiler to do compile time optimizations. However, the implementation of the AD tool itself is more difficult and the build system is more complex. Operator overloading (OO) Operator overloading is a possibility for source code written in a language supporting it. Objects for real numbers and elementary mathematical operations must be overloaded to cater for the augmented arithmetic depicted above. This requires no change in the form or sequence of operations in the original source code for the function to be differentiated, but often requires changes in basic data types for numbers and vectors to support overloading and often also involves the insertion of special flagging operations. Due to the inherent operator overloading overhead on each loop, this approach usually demonstrates weaker speed performance. Operator overloading and source code transformation Overloaded Operators can be used to extract the valuation graph, followed by automatic generation of the AD-version of the primal function at run-time. Unlike the classic OO AAD, such AD-function does not change from one iteration to the next one. Hence there is any OO or tape interpretation run-time overhead per Xi sample. With the AD-function being generated at runtime, it can be optimised to take into account the current state of the program and precompute certain values. In addition, it can be generated in a way to consistently utilize native CPU vectorization to process 4(8)-double chunks of user data (AVX2\AVX512 speed up x4-x8). With multithreading added into account, such approach can lead to a final acceleration of order 8 × #Cores compared to the traditional AAD tools. A reference implementation is available on GitHub. See also Differentiable programming Notes References Further reading External links www.autodiff.org, An "entry site to everything you want to know about automatic differentiation" Automatic Differentiation of Parallel OpenMP Programs Automatic Differentiation, C++ Templates and Photogrammetry Automatic Differentiation, Operator Overloading Approach Compute analytic derivatives of any Fortran77, Fortran95, or C program through a web-based interface Automatic Differentiation of Fortran programs Description and example code for forward Automatic Differentiation in Scala finmath-lib stochastic automatic differentiation, Automatic differentiation for random variables (Java implementation of the stochastic automatic differentiation). 
Adjoint Algorithmic Differentiation: Calibration and Implicit Function Theorem C++ Template-based automatic differentiation article and implementation Tangent Source-to-Source Debuggable Derivatives Exact First- and Second-Order Greeks by Algorithmic Differentiation Adjoint Algorithmic Differentiation of a GPU Accelerated Application Adjoint Methods in Computational Finance Software Tool Support for Algorithmic Differentiationop More than a Thousand Fold Speed Up for xVA Pricing Calculations with Intel Xeon Scalable Processors Sparse truncated Taylor series implementation with VBIC95 example for higher order derivatives Differential calculus Computer algebra Articles with example pseudocode Articles with example Python (programming language) code Articles with example C++ code
0.768493
0.996675
0.765938
Non-inertial reference frame
A non-inertial reference frame (also known as an accelerated reference frame) is a frame of reference that undergoes acceleration with respect to an inertial frame. An accelerometer at rest in a non-inertial frame will, in general, detect a non-zero acceleration. While the laws of motion are the same in all inertial frames, in non-inertial frames, they vary from frame to frame, depending on the acceleration. In classical mechanics it is often possible to explain the motion of bodies in non-inertial reference frames by introducing additional fictitious forces (also called inertial forces, pseudo-forces, and d'Alembert forces) to Newton's second law. Common examples of this include the Coriolis force and the centrifugal force. In general, the expression for any fictitious force can be derived from the acceleration of the non-inertial frame. As stated by Goodman and Warner, "One might say that F ma holds in any coordinate system provided the term 'force' is redefined to include the so-called 'reversed effective forces' or 'inertia forces'." In the theory of general relativity, the curvature of spacetime causes frames to be locally inertial, but globally non-inertial. Due to the non-Euclidean geometry of curved space-time, there are no global inertial reference frames in general relativity. More specifically, the fictitious force which appears in general relativity is the force of gravity. Avoiding fictitious forces in calculations In flat spacetime, the use of non-inertial frames can be avoided if desired. Measurements with respect to non-inertial reference frames can always be transformed to an inertial frame, incorporating directly the acceleration of the non-inertial frame as that acceleration as seen from the inertial frame. This approach avoids the use of fictitious forces (it is based on an inertial frame, where fictitious forces are absent, by definition) but it may be less convenient from an intuitive, observational, and even a calculational viewpoint. As pointed out by Ryder for the case of rotating frames as used in meteorology: Detection of a non-inertial frame: need for fictitious forces That a given frame is non-inertial can be detected by its need for fictitious forces to explain observed motions. For example, the rotation of the Earth can be observed using a Foucault pendulum. The rotation of the Earth seemingly causes the pendulum to change its plane of oscillation because the surroundings of the pendulum move with the Earth. As seen from an Earth-bound (non-inertial) frame of reference, the explanation of this apparent change in orientation requires the introduction of the fictitious Coriolis force. Another famous example is that of the tension in the string between two spheres rotating about each other. In that case, the prediction of the measured tension in the string based on the motion of the spheres as observed from a rotating reference frame requires the rotating observers to introduce a fictitious centrifugal force. In this connection, it may be noted that a change in coordinate system, for example, from Cartesian to polar, if implemented without any change in relative motion, does not cause the appearance of fictitious forces, although the form of the laws of motion varies from one type of curvilinear coordinate system to another. Fictitious forces in curvilinear coordinates A different use of the term "fictitious force" often is used in curvilinear coordinates, particularly polar coordinates. 
To avoid confusion, this distracting ambiguity in terminologies is pointed out here. These so-called "forces" are non-zero in all frames of reference, inertial or non-inertial, and do not transform as vectors under rotations and translations of the coordinates (as all Newtonian forces do, fictitious or otherwise). This incompatible use of the term "fictitious force" is unrelated to non-inertial frames. These so-called "forces" are defined by determining the acceleration of a particle within the curvilinear coordinate system, and then separating the simple double-time derivatives of coordinates from the remaining terms. These remaining terms then are called "fictitious forces". More careful usage calls these terms "generalized fictitious forces" to indicate their connection to the generalized coordinates of Lagrangian mechanics. The application of Lagrangian methods to polar coordinates can be found here. Relativistic point of view Frames and flat spacetime If a region of spacetime is declared to be Euclidean, and effectively free from obvious gravitational fields, then if an accelerated coordinate system is overlaid onto the same region, it can be said that a uniform fictitious field exists in the accelerated frame (we reserve the word gravitational for the case in which a mass is involved). An object accelerated to be stationary in the accelerated frame will "feel" the presence of the field, and they will also be able to see environmental matter with inertial states of motion (stars, galaxies, etc.) to be apparently falling "downwards" in the field al g curved trajectories as if the field is real. In frame-based descriptions, this supposed field can be made to appear or disappear by switching between "accelerated" and "inertial" coordinate systems. More advanced descriptions As the situation is modeled in finer detail, using the general principle of relativity, the concept of a frame-dependent gravitational field becomes less realistic. In these Machian models, the accelerated body can agree that the apparent gravitational field is associated with the motion of the background matter, but can also claim that the motion of the material as if there is a gravitational field, causes the gravitational field - the accelerating background matter "drags light". Similarly, a background observer can argue that the forced acceleration of the mass causes an apparent gravitational field in the region between it and the environmental material (the accelerated mass also "drags light"). This "mutual" effect, and the ability of an accelerated mass to warp lightbeam geometry and lightbeam-based coordinate systems, is referred to as frame-dragging. Frame-dragging removes the usual distinction between accelerated frames (which show gravitational effects) and inertial frames (where the geometry is supposedly free from gravitational fields). When a forcibly-accelerated body physically "drags" a coordinate system, the problem becomes an exercise in warped spacetime for all observers. See also Rotating reference frame Fictitious force Centrifugal force Coriolis effect Inertial frame of reference Free motion equation References and notes Frames of reference Classical mechanics
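For the classical rotating-frame case discussed above, the fictitious forces can be written out explicitly. A standard sketch, for a frame rotating with angular velocity ω relative to an inertial frame, with r and v_rel the position and velocity measured in the rotating frame (symbols introduced only for this sketch):
\[ \mathbf{F}_{\mathrm{fict}} = -\, m\, \boldsymbol{\omega} \times (\boldsymbol{\omega} \times \mathbf{r}) \;-\; 2 m\, \boldsymbol{\omega} \times \mathbf{v}_{\mathrm{rel}} \;-\; m\, \frac{\mathrm{d}\boldsymbol{\omega}}{\mathrm{d}t} \times \mathbf{r} , \]
the three terms being the centrifugal, Coriolis and Euler forces; adding F_fict to the real forces is the bookkeeping that lets Newton's second law keep its usual form in the non-inertial frame, in the sense of the "reversed effective forces" quoted earlier.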
0.774148
0.989378
0.765925
Kinesis (biology)
Kinesis, like a taxis or tropism, is a movement or activity of a cell or an organism in response to a stimulus (such as gas exposure, light intensity or ambient temperature). Unlike taxis, the response to the stimulus provided is non-directional. The animal does not move toward or away from the stimulus but moves at either a slow or fast rate depending on its "comfort zone." In this case, a fast (non-random) movement means that the animal is searching for its comfort zone, while a slow movement indicates that it has found it. Types There are two main types of kineses, both resulting in aggregations. However, the stimulus does not act to attract or repel individuals. Orthokinesis: in which the speed of movement of the individual is dependent upon the stimulus intensity. For example, the locomotion of the collembolan Orchesella cincta in relation to water: the speed of its movement changes with the degree of water saturation in the soil. Klinokinesis: in which the frequency or rate of turning is proportional to stimulus intensity. For example, the behaviour of the flatworm (Dendrocoelum lacteum), which turns more frequently in response to increasing light, thus ensuring that it spends more time in dark areas. Basic model of kinesis The kinesis strategy controlled by the locally and instantly evaluated well-being (fitness) can be described in simple words: Animals stay longer in good conditions and leave bad conditions more quickly. If the well-being is measured by the local reproduction coefficient then the minimal reaction-diffusion model of kinesis can be written as follows: For each population in the biological community, where: is the population density of the ith species, represents the abiotic characteristics of the living conditions (can be multidimensional), is the reproduction coefficient, which depends on all and on s, is the equilibrium diffusion coefficient (defined for equilibrium ). The coefficient characterises the dependence of the diffusion coefficient on the reproduction coefficient. The models of kinesis were tested in typical situations. It was demonstrated that kinesis is beneficial for the assimilation of both patches and fluctuations of food distribution. Kinesis may delay invasion and spreading of species with the Allee effect. See also Brownian motion Chemokinesis Cranial kinesis Cytokinesis Diffusion Nastic movements Photokinesis Rapid plant movement Taxis References Kendeigh, S. Charles. 1961. Animal Ecology. Prentice-Hall, Inc., Englewood Cliffs, N.J., 468 p. External links Host-plant finding by insects: orientation, sensory input and search patterns Physiology Perception Signal transduction
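The orthokinesis rule described in this entry, move fast when conditions are poor and slow down when they are good, can be illustrated with a toy agent-based random walk. This is only a sketch and not the reaction-diffusion model referred to above; the comfort function and the step-size rule below are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)

def comfort(x):
    # Hypothetical "comfort zone" centred at x = 0 on a 1-D habitat.
    return np.exp(-x**2)

agents = rng.uniform(-5, 5, size=1000)        # initial positions
for _ in range(5000):
    # Orthokinesis: step size is large where comfort is low, small where it is high.
    step = 0.5 * (1.0 - comfort(agents))
    agents += step * rng.choice([-1.0, 1.0], size=agents.size)
    agents = np.clip(agents, -5, 5)           # keep agents inside the habitat
print(np.mean(np.abs(agents) < 1.0))          # fraction that ends up near the comfort zone

Because each move is non-directional, the aggregation near the comfortable region arises purely from the modulation of speed, which is the defining feature of a kinesis as opposed to a taxis.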
0.789342
0.970299
0.765897
Principal component analysis
Principal component analysis (PCA) is a linear dimensionality reduction technique with applications in exploratory data analysis, visualization and data preprocessing. The data is linearly transformed onto a new coordinate system such that the directions (principal components) capturing the largest variation in the data can be easily identified. The principal components of a collection of points in a real coordinate space are a sequence of unit vectors, where the -th vector is the direction of a line that best fits the data while being orthogonal to the first vectors. Here, a best-fitting line is defined as one that minimizes the average squared perpendicular distance from the points to the line. These directions (i.e., principal components) constitute an orthonormal basis in which different individual dimensions of the data are linearly uncorrelated. Many studies use the first two principal components in order to plot the data in two dimensions and to visually identify clusters of closely related data points. Principal component analysis has applications in many fields such as population genetics, microbiome studies, and atmospheric science. Overview When performing PCA, the first principal component of a set of variables is the derived variable formed as a linear combination of the original variables that explains the most variance. The second principal component explains the most variance in what is left once the effect of the first component is removed, and we may proceed through iterations until all the variance is explained. PCA is most commonly used when many of the variables are highly correlated with each other and it is desirable to reduce their number to an independent set. The first principal component can equivalently be defined as a direction that maximizes the variance of the projected data. The -th principal component can be taken as a direction orthogonal to the first principal components that maximizes the variance of the projected data. For either objective, it can be shown that the principal components are eigenvectors of the data's covariance matrix. Thus, the principal components are often computed by eigendecomposition of the data covariance matrix or singular value decomposition of the data matrix. PCA is the simplest of the true eigenvector-based multivariate analyses and is closely related to factor analysis. Factor analysis typically incorporates more domain-specific assumptions about the underlying structure and solves eigenvectors of a slightly different matrix. PCA is also related to canonical correlation analysis (CCA). CCA defines coordinate systems that optimally describe the cross-covariance between two datasets while PCA defines a new orthogonal coordinate system that optimally describes variance in a single dataset. Robust and L1-norm-based variants of standard PCA have also been proposed. History PCA was invented in 1901 by Karl Pearson, as an analogue of the principal axis theorem in mechanics; it was later independently developed and named by Harold Hotelling in the 1930s. Depending on the field of application, it is also named the discrete Karhunen–Loève transform (KLT) in signal processing, the Hotelling transform in multivariate quality control, proper orthogonal decomposition (POD) in mechanical engineering, singular value decomposition (SVD) of X (invented in the last quarter of the 19th century), eigenvalue decomposition (EVD) of XTX in linear algebra, factor analysis (for a discussion of the differences between PCA and factor analysis see Ch. 
7 of Jolliffe's Principal Component Analysis), Eckart–Young theorem (Harman, 1960), or empirical orthogonal functions (EOF) in meteorological science (Lorenz, 1956), empirical eigenfunction decomposition (Sirovich, 1987), quasiharmonic modes (Brooks et al., 1988), spectral decomposition in noise and vibration, and empirical modal analysis in structural dynamics. Intuition PCA can be thought of as fitting a p-dimensional ellipsoid to the data, where each axis of the ellipsoid represents a principal component. If some axis of the ellipsoid is small, then the variance along that axis is also small. To find the axes of the ellipsoid, we must first center the values of each variable in the dataset on 0 by subtracting the mean of the variable's observed values from each of those values. These transformed values are used instead of the original observed values for each of the variables. Then, we compute the covariance matrix of the data and calculate the eigenvalues and corresponding eigenvectors of this covariance matrix. Then we must normalize each of the orthogonal eigenvectors to turn them into unit vectors. Once this is done, each of the mutually-orthogonal unit eigenvectors can be interpreted as an axis of the ellipsoid fitted to the data. This choice of basis will transform the covariance matrix into a diagonalized form, in which the diagonal elements represent the variance of each axis. The proportion of the variance that each eigenvector represents can be calculated by dividing the eigenvalue corresponding to that eigenvector by the sum of all eigenvalues. Biplots and scree plots (degree of explained variance) are used to interpret findings of the PCA. Details PCA is defined as an orthogonal linear transformation on a real inner product space that transforms the data to a new coordinate system such that the greatest variance by some scalar projection of the data comes to lie on the first coordinate (called the first principal component), the second greatest variance on the second coordinate, and so on. Consider an data matrix, X, with column-wise zero empirical mean (the sample mean of each column has been shifted to zero), where each of the n rows represents a different repetition of the experiment, and each of the p columns gives a particular kind of feature (say, the results from a particular sensor). Mathematically, the transformation is defined by a set of size of p-dimensional vectors of weights or coefficients that map each row vector of X to a new vector of principal component scores , given by in such a way that the individual variables of t considered over the data set successively inherit the maximum possible variance from X, with each coefficient vector w constrained to be a unit vector (where is usually selected to be strictly less than to reduce dimensionality). The above may equivalently be written in matrix form as where , , and . First component In order to maximize variance, the first weight vector w(1) thus has to satisfy Equivalently, writing this in matrix form gives Since w(1) has been defined to be a unit vector, it equivalently also satisfies The quantity to be maximised can be recognised as a Rayleigh quotient. A standard result for a positive semidefinite matrix such as XTX is that the quotient's maximum possible value is the largest eigenvalue of the matrix, which occurs when w is the corresponding eigenvector. 
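A minimal NumPy sketch of the statement above, using synthetic data (the matrix and variable names are illustrative only): the first weight vector is the top eigenvector of XTX, and the variance of the resulting scores is at least as large as the variance of the projection onto any other unit direction.

import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 3)) @ np.array([[3.0, 1.0, 0.5],
                                          [0.0, 1.0, 0.2],
                                          [0.0, 0.0, 0.1]])   # correlated synthetic data
X = X - X.mean(axis=0)                       # column-wise zero empirical mean

eigvals, eigvecs = np.linalg.eigh(X.T @ X)   # eigendecomposition of the symmetric matrix XTX
w1 = eigvecs[:, np.argmax(eigvals)]          # first weight vector, a unit vector

t1 = X @ w1                                  # scores on the first principal component
v = rng.normal(size=3)
v /= np.linalg.norm(v)                       # an arbitrary competing unit direction
print(t1.var() >= (X @ v).var())             # True: no other unit direction yields more variance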
With w(1) found, the first principal component of a data vector x(i) can then be given as a score t1(i) = x(i) ⋅ w(1) in the transformed co-ordinates, or as the corresponding vector in the original variables, {x(i) ⋅ w(1)} w(1). Further components The k-th component can be found by subtracting the first k − 1 principal components from X: and then finding the weight vector which extracts the maximum variance from this new data matrix It turns out that this gives the remaining eigenvectors of XTX, with the maximum values for the quantity in brackets given by their corresponding eigenvalues. Thus the weight vectors are eigenvectors of XTX. The k-th principal component of a data vector x(i) can therefore be given as a score tk(i) = x(i) ⋅ w(k) in the transformed coordinates, or as the corresponding vector in the space of the original variables, {x(i) ⋅ w(k)} w(k), where w(k) is the kth eigenvector of XTX. The full principal components decomposition of X can therefore be given as where W is a p-by-p matrix of weights whose columns are the eigenvectors of XTX. The transpose of W is sometimes called the whitening or sphering transformation. Columns of W multiplied by the square root of corresponding eigenvalues, that is, eigenvectors scaled up by the variances, are called loadings in PCA or in Factor analysis. Covariances XTX itself can be recognized as proportional to the empirical sample covariance matrix of the dataset XT. The sample covariance Q between two of the different principal components over the dataset is given by: where the eigenvalue property of w(k) has been used to move from line 2 to line 3. However eigenvectors w(j) and w(k) corresponding to eigenvalues of a symmetric matrix are orthogonal (if the eigenvalues are different), or can be orthogonalised (if the vectors happen to share an equal repeated value). The product in the final line is therefore zero; there is no sample covariance between different principal components over the dataset. Another way to characterise the principal components transformation is therefore as the transformation to coordinates which diagonalise the empirical sample covariance matrix. In matrix form, the empirical covariance matrix for the original variables can be written The empirical covariance matrix between the principal components becomes where Λ is the diagonal matrix of eigenvalues λ(k) of XTX. λ(k) is equal to the sum of the squares over the dataset associated with each component k, that is, λ(k) = Σi tk2(i) = Σi (x(i) ⋅ w(k))2. Dimensionality reduction The transformation T = X W maps a data vector x(i) from an original space of p variables to a new space of p variables which are uncorrelated over the dataset. However, not all the principal components need to be kept. Keeping only the first L principal components, produced by using only the first L eigenvectors, gives the truncated transformation where the matrix TL now has n rows but only L columns. In other words, PCA learns a linear transformation where the columns of matrix form an orthogonal basis for the L features (the components of representation t) that are decorrelated. By construction, of all the transformed data matrices with only L columns, this score matrix maximises the variance in the original data that has been preserved, while minimising the total squared reconstruction error or . Such dimensionality reduction can be a very useful step for visualising and processing high-dimensional datasets, while still retaining as much of the variance in the dataset as possible. 
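A short sketch of the truncated transformation and of the variance it retains, in the same illustrative NumPy style (the data and the choice L = 2 are arbitrary examples):

import numpy as np

rng = np.random.default_rng(2)
X = rng.normal(size=(200, 5))
X[:, 1] = 2.0 * X[:, 0] + 0.1 * X[:, 1]      # introduce a strong correlation
X = X - X.mean(axis=0)

eigvals, W = np.linalg.eigh(X.T @ X)
order = np.argsort(eigvals)[::-1]            # sort components by decreasing eigenvalue
eigvals, W = eigvals[order], W[:, order]

L = 2
T_L = X @ W[:, :L]                           # truncated score matrix (n rows, L columns)
explained = eigvals[:L].sum() / eigvals.sum()
X_hat = T_L @ W[:, :L].T                     # reconstruction from the first L components
print(explained, np.linalg.norm(X - X_hat)**2 / np.linalg.norm(X)**2)

The two printed numbers sum to 1 (up to rounding), reflecting that the discarded eigenvalues are exactly the squared reconstruction error.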
For example, selecting L = 2 and keeping only the first two principal components finds the two-dimensional plane through the high-dimensional dataset in which the data is most spread out, so if the data contains clusters these too may be most spread out, and therefore most visible to be plotted out in a two-dimensional diagram; whereas if two directions through the data (or two of the original variables) are chosen at random, the clusters may be much less spread apart from each other, and may in fact be much more likely to substantially overlay each other, making them indistinguishable. Similarly, in regression analysis, the larger the number of explanatory variables allowed, the greater is the chance of overfitting the model, producing conclusions that fail to generalise to other datasets. One approach, especially when there are strong correlations between different possible explanatory variables, is to reduce them to a few principal components and then run the regression against them, a method called principal component regression. Dimensionality reduction may also be appropriate when the variables in a dataset are noisy. If each column of the dataset contains independent identically distributed Gaussian noise, then the columns of T will also contain similarly identically distributed Gaussian noise (such a distribution is invariant under the effects of the matrix W, which can be thought of as a high-dimensional rotation of the co-ordinate axes). However, with more of the total variance concentrated in the first few principal components compared to the same noise variance, the proportionate effect of the noise is less—the first few components achieve a higher signal-to-noise ratio. PCA thus can have the effect of concentrating much of the signal into the first few principal components, which can usefully be captured by dimensionality reduction; while the later principal components may be dominated by noise, and so disposed of without great loss. If the dataset is not too large, the significance of the principal components can be tested using parametric bootstrap, as an aid in determining how many principal components to retain. Singular value decomposition The principal components transformation can also be associated with another matrix factorization, the singular value decomposition (SVD) of X, Here Σ is an n-by-p rectangular diagonal matrix of positive numbers σ(k), called the singular values of X; U is an n-by-n matrix, the columns of which are orthogonal unit vectors of length n called the left singular vectors of X; and W is a p-by-p matrix whose columns are orthogonal unit vectors of length p and called the right singular vectors of X. In terms of this factorization, the matrix XTX can be written where is the square diagonal matrix with the singular values of X and the excess zeros chopped off that satisfies . Comparison with the eigenvector factorization of XTX establishes that the right singular vectors W of X are equivalent to the eigenvectors of XTX, while the singular values σ(k) of are equal to the square-root of the eigenvalues λ(k) of XTX. Using the singular value decomposition the score matrix T can be written so each column of T is given by one of the left singular vectors of X multiplied by the corresponding singular value. This form is also the polar decomposition of T. 
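These identities are easy to verify numerically; the following sketch uses NumPy's SVD on synthetic data purely as an illustration:

import numpy as np

rng = np.random.default_rng(3)
X = rng.normal(size=(100, 4))
X = X - X.mean(axis=0)

U, s, Wt = np.linalg.svd(X, full_matrices=False)   # X = U diag(s) W^T
T = X @ Wt.T                                       # score matrix computed from the weights
print(np.allclose(T, U * s))                       # True: columns of T are left singular vectors times singular values

eigvals = np.linalg.eigvalsh(X.T @ X)[::-1]        # eigenvalues of XTX, in decreasing order
print(np.allclose(s**2, eigvals))                  # True: squared singular values equal the eigenvalues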
Efficient algorithms exist to calculate the SVD of X without having to form the matrix XTX, so computing the SVD is now the standard way to calculate a principal components analysis from a data matrix, unless only a handful of components are required. As with the eigen-decomposition, a truncated score matrix TL can be obtained by considering only the first L largest singular values and their singular vectors: The truncation of a matrix M or T using a truncated singular value decomposition in this way produces a truncated matrix that is the nearest possible matrix of rank L to the original matrix, in the sense of the difference between the two having the smallest possible Frobenius norm, a result known as the Eckart–Young theorem [1936]. Further considerations The singular values (in Σ) are the square roots of the eigenvalues of the matrix XTX. Each eigenvalue is proportional to the portion of the "variance" (more correctly of the sum of the squared distances of the points from their multidimensional mean) that is associated with each eigenvector. The sum of all the eigenvalues is equal to the sum of the squared distances of the points from their multidimensional mean. PCA essentially rotates the set of points around their mean in order to align with the principal components. This moves as much of the variance as possible (using an orthogonal transformation) into the first few dimensions. The values in the remaining dimensions, therefore, tend to be small and may be dropped with minimal loss of information (see below). PCA is often used in this manner for dimensionality reduction. PCA has the distinction of being the optimal orthogonal transformation for keeping the subspace that has largest "variance" (as defined above). This advantage, however, comes at the price of greater computational requirements if compared, for example, and when applicable, to the discrete cosine transform, and in particular to the DCT-II which is simply known as the "DCT". Nonlinear dimensionality reduction techniques tend to be more computationally demanding than PCA. PCA is sensitive to the scaling of the variables. If we have just two variables and they have the same sample variance and are completely correlated, then the PCA will entail a rotation by 45° and the "weights" (they are the cosines of rotation) for the two variables with respect to the principal component will be equal. But if we multiply all values of the first variable by 100, then the first principal component will be almost the same as that variable, with a small contribution from the other variable, whereas the second component will be almost aligned with the second original variable. This means that whenever the different variables have different units (like temperature and mass), PCA is a somewhat arbitrary method of analysis. (Different results would be obtained if one used Fahrenheit rather than Celsius for example.) Pearson's original paper was entitled "On Lines and Planes of Closest Fit to Systems of Points in Space" – "in space" implies physical Euclidean space where such concerns do not arise. One way of making the PCA less arbitrary is to use variables scaled so as to have unit variance, by standardizing the data and hence use the autocorrelation matrix instead of the autocovariance matrix as a basis for PCA. However, this compresses (or expands) the fluctuations in all dimensions of the signal space to unit variance. Mean subtraction (a.k.a. 
"mean centering") is necessary for performing classical PCA to ensure that the first principal component describes the direction of maximum variance. If mean subtraction is not performed, the first principal component might instead correspond more or less to the mean of the data. A mean of zero is needed for finding a basis that minimizes the mean square error of the approximation of the data. Mean-centering is unnecessary if performing a principal components analysis on a correlation matrix, as the data are already centered after calculating correlations. Correlations are derived from the cross-product of two standard scores (Z-scores) or statistical moments (hence the name: Pearson Product-Moment Correlation). Also see the article by Kromrey & Foster-Johnson (1998) on "Mean-centering in Moderated Regression: Much Ado About Nothing". Since covariances are correlations of normalized variables (Z- or standard-scores) a PCA based on the correlation matrix of X is equal to a PCA based on the covariance matrix of Z, the standardized version of X. PCA is a popular primary technique in pattern recognition. It is not, however, optimized for class separability. However, it has been used to quantify the distance between two or more classes by calculating center of mass for each class in principal component space and reporting Euclidean distance between center of mass of two or more classes. The linear discriminant analysis is an alternative which is optimized for class separability. Table of symbols and abbreviations Properties and limitations Properties Some properties of PCA include: Property 1: For any integer q, 1 ≤ q ≤ p, consider the orthogonal linear transformation where is a q-element vector and is a (q × p) matrix, and let be the variance-covariance matrix for . Then the trace of , denoted , is maximized by taking , where consists of the first q columns of is the transpose of . ( is not defined here) Property 2: Consider again the orthonormal transformation with and defined as before. Then is minimized by taking where consists of the last q columns of . The statistical implication of this property is that the last few PCs are not simply unstructured left-overs after removing the important PCs. Because these last PCs have variances as small as possible they are useful in their own right. They can help to detect unsuspected near-constant linear relationships between the elements of , and they may also be useful in regression, in selecting a subset of variables from , and in outlier detection. Property 3: (Spectral decomposition of ) Before we look at its usage, we first look at diagonal elements, Then, perhaps the main statistical implication of the result is that not only can we decompose the combined variances of all the elements of into decreasing contributions due to each PC, but we can also decompose the whole covariance matrix into contributions from each PC. Although not strictly decreasing, the elements of will tend to become smaller as increases, as is nonincreasing for increasing , whereas the elements of tend to stay about the same size because of the normalization constraints: . Limitations As noted above, the results of PCA depend on the scaling of the variables. This can be cured by scaling each feature by its standard deviation, so that one ends up with dimensionless features with unital variance. The applicability of PCA as described above is limited by certain (tacit) assumptions made in its derivation. 
In particular, PCA can capture linear correlations between the features but fails when this assumption is violated (see Figure 6a in the reference). In some cases, coordinate transformations can restore the linearity assumption and PCA can then be applied (see kernel PCA). Another limitation is the mean-removal process before constructing the covariance matrix for PCA. In fields such as astronomy, all the signals are non-negative, and the mean-removal process will force the mean of some astrophysical exposures to be zero, which consequently creates unphysical negative fluxes, and forward modeling has to be performed to recover the true magnitude of the signals. As an alternative method, non-negative matrix factorization focuses only on the non-negative elements in the matrices and is well-suited for astrophysical observations. See more at Relation between PCA and Non-negative Matrix Factorization. PCA is at a disadvantage if the data has not been standardized before applying the algorithm to it. PCA transforms original data into data that is relevant to the principal components of that data, which means that the new data variables cannot be interpreted in the same ways that the originals were. They are linear interpretations of the original variables. Also, if PCA is not performed properly, there is a high likelihood of information loss. PCA relies on a linear model. If a dataset has a pattern hidden inside it that is nonlinear, then PCA can actually steer the analysis in the complete opposite direction of progress. Researchers at Kansas State University discovered that the sampling error in their experiments impacted the bias of PCA results. "If the number of subjects or blocks is smaller than 30, and/or the researcher is interested in PC's beyond the first, it may be better to first correct for the serial correlation, before PCA is conducted". The researchers at Kansas State also found that PCA could be "seriously biased if the autocorrelation structure of the data is not correctly handled". PCA and information theory Dimensionality reduction results in a loss of information, in general. PCA-based dimensionality reduction tends to minimize that information loss, under certain signal and noise models. Under the assumption that the data vector is the sum of the desired information-bearing signal and a noise signal, one can show that PCA can be optimal for dimensionality reduction, from an information-theoretic point-of-view. In particular, Linsker showed that if is Gaussian and is Gaussian noise with a covariance matrix proportional to the identity matrix, the PCA maximizes the mutual information between the desired information and the dimensionality-reduced output. If the noise is still Gaussian and has a covariance matrix proportional to the identity matrix (that is, the components of the vector are iid), but the information-bearing signal is non-Gaussian (which is a common scenario), PCA at least minimizes an upper bound on the information loss. The optimality of PCA is also preserved if the noise is iid and at least more Gaussian (in terms of the Kullback–Leibler divergence) than the information-bearing signal. In general, even if the above signal model holds, PCA loses its information-theoretic optimality as soon as the noise becomes dependent. Computation using the covariance method The following is a detailed description of PCA using the covariance method as opposed to the correlation method. 
The goal is to transform a given data set X of dimension p to an alternative data set Y of smaller dimension L. Equivalently, we are seeking to find the matrix Y, where Y is the Karhunen–Loève transform (KLT) of matrix X: Organize the data set Suppose you have data comprising a set of observations of p variables, and you want to reduce the data so that each observation can be described with only L variables, L < p. Suppose further that the data are arranged as a set of n data vectors with each representing a single grouped observation of the p variables. Write as row vectors, each with p elements. Place the row vectors into a single matrix X of dimensions n × p. Calculate the empirical mean Find the empirical mean along each column j = 1, ..., p. Place the calculated mean values into an empirical mean vector u of dimensions p × 1. Calculate the deviations from the mean Mean subtraction is an integral part of the solution towards finding a principal component basis that minimizes the mean square error of approximating the data. Hence we proceed by centering the data as follows: Subtract the empirical mean vector from each row of the data matrix X. Store mean-subtracted data in the n × p matrix B. where h is an n × 1 column vector of all 1s: In some applications, each variable (column of B) may also be scaled to have a variance equal to 1 (see Z-score). This step affects the calculated principal components, but makes them independent of the units used to measure the different variables. Find the covariance matrix Find the p × p empirical covariance matrix C from matrix B: where is the conjugate transpose operator. If B consists entirely of real numbers, which is the case in many applications, the "conjugate transpose" is the same as the regular transpose. The reasoning behind using n − 1 instead of n to calculate the covariance is Bessel's correction. Find the eigenvectors and eigenvalues of the covariance matrix Compute the matrix V of eigenvectors which diagonalizes the covariance matrix C: where D is the diagonal matrix of eigenvalues of C. This step will typically involve the use of a computer-based algorithm for computing eigenvectors and eigenvalues. These algorithms are readily available as sub-components of most matrix algebra systems, such as SAS, R, MATLAB, Mathematica, SciPy, IDL (Interactive Data Language), or GNU Octave as well as OpenCV. Matrix D will take the form of a p × p diagonal matrix, where is the jth eigenvalue of the covariance matrix C, and Matrix V, also of dimension p × p, contains p column vectors, each of length p, which represent the p eigenvectors of the covariance matrix C. The eigenvalues and eigenvectors are ordered and paired. The jth eigenvalue corresponds to the jth eigenvector. Matrix V denotes the matrix of right eigenvectors (as opposed to left eigenvectors). In general, the matrix of right eigenvectors need not be the (conjugate) transpose of the matrix of left eigenvectors. Rearrange the eigenvectors and eigenvalues Sort the columns of the eigenvector matrix V and eigenvalue matrix D in order of decreasing eigenvalue. Make sure to maintain the correct pairings between the columns in each matrix. Compute the cumulative energy content for each eigenvector The eigenvalues represent the distribution of the source data's energy among each of the eigenvectors, where the eigenvectors form a basis for the data. 
The cumulative energy content g for the jth eigenvector is the sum of the energy content across all of the eigenvalues from 1 through j: Select a subset of the eigenvectors as basis vectors Save the first L columns of V as the p × L matrix W: where Use the vector g as a guide in choosing an appropriate value for L. The goal is to choose a value of L as small as possible while achieving a reasonably high value of g on a percentage basis. For example, you may want to choose L so that the cumulative energy g is above a certain threshold, like 90 percent. In this case, choose the smallest value of L such that Project the data onto the new basis The projected data points are the rows of the matrix That is, the first column of is the projection of the data points onto the first principal component, the second column is the projection onto the second principal component, etc. Derivation using the covariance method Let X be a d-dimensional random vector expressed as column vector. Without loss of generality, assume X has zero mean. We want to find an orthonormal transformation matrix P so that PX has a diagonal covariance matrix (that is, PX is a random vector with all its distinct components pairwise uncorrelated). A quick computation assuming were unitary yields: Hence holds if and only if were diagonalisable by . This is very constructive, as cov(X) is guaranteed to be a non-negative definite matrix and thus is guaranteed to be diagonalisable by some unitary matrix. Covariance-free computation In practical implementations, especially with high dimensional data (large ), the naive covariance method is rarely used because it is not efficient due to high computational and memory costs of explicitly determining the covariance matrix. The covariance-free approach avoids the operations of explicitly calculating and storing the covariance matrix, instead utilizing one of the matrix-free methods, for example, based on the function evaluating the product at the cost of operations. Iterative computation One way to compute the first principal component efficiently is shown in the following pseudo-code, for a data matrix X with zero mean, without ever computing its covariance matrix.
r = a random vector of length p
r = r / norm(r)
do c times:
      s = XT(X r) (a vector of length p)
      λ = rT s
      r = s / norm(s)
return λ, r
This power iteration algorithm simply calculates the vector XT(X r), normalizes, and places the result back in r. The eigenvalue is approximated by rT(XTX) r, which is the Rayleigh quotient on the unit vector r for the covariance matrix XTX. If the largest singular value is well separated from the next largest one, the vector r gets close to the first principal component of X within the number of iterations c, which is small relative to p, at the total cost 2cnp. The power iteration convergence can be accelerated without noticeably sacrificing the small cost per iteration using more advanced matrix-free methods, such as the Lanczos algorithm or the Locally Optimal Block Preconditioned Conjugate Gradient (LOBPCG) method. Subsequent principal components can be computed one-by-one via deflation or simultaneously as a block. In the former approach, imprecisions in already computed approximate principal components additively affect the accuracy of the subsequently computed principal components, thus increasing the error with every new computation. The latter approach in the block power method replaces single-vectors and with block-vectors, matrices and . Every column of approximates one of the leading principal components, while all columns are iterated simultaneously. 
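As a compact illustration of the covariance-method steps listed earlier in this section (center, covariance matrix, eigendecomposition, sort, cumulative energy, selection of L, projection), here is a minimal NumPy sketch; the data and the 90 percent threshold are only examples:

import numpy as np

rng = np.random.default_rng(4)
X = rng.normal(size=(50, 6))                 # n x p data matrix, one observation per row

u = X.mean(axis=0)                           # empirical mean of each column
B = X - u                                    # deviations from the mean
C = (B.T @ B) / (B.shape[0] - 1)             # empirical covariance matrix (Bessel's correction)

eigvals, V = np.linalg.eigh(C)               # eigenvalues and eigenvectors of C
order = np.argsort(eigvals)[::-1]            # sort by decreasing eigenvalue
eigvals, V = eigvals[order], V[:, order]

g = np.cumsum(eigvals) / eigvals.sum()       # cumulative energy content
L = int(np.searchsorted(g, 0.90) + 1)        # smallest L reaching at least 90 percent of the energy
W = V[:, :L]                                 # basis vectors
T = B @ W                                    # projection of the centered data onto the new basis
print(L, T.shape)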
The main calculation is evaluation of the product . Implemented, for example, in LOBPCG, efficient blocking eliminates the accumulation of the errors, allows using high-level BLAS matrix-matrix product functions, and typically leads to faster convergence, compared to the single-vector one-by-one technique. The NIPALS method Non-linear iterative partial least squares (NIPALS) is a variant of the classical power iteration with matrix deflation by subtraction implemented for computing the first few components in a principal component or partial least squares analysis. For very-high-dimensional datasets, such as those generated in the *omics sciences (for example, genomics, metabolomics) it is usually only necessary to compute the first few PCs. The non-linear iterative partial least squares (NIPALS) algorithm updates iterative approximations to the leading scores and loadings t1 and r1T by the power iteration multiplying on every iteration by X on the left and on the right, that is, calculation of the covariance matrix is avoided, just as in the matrix-free implementation of the power iterations, based on the function evaluating the product . The matrix deflation by subtraction is performed by subtracting the outer product, t1r1T from X leaving the deflated residual matrix used to calculate the subsequent leading PCs. For large data matrices, or matrices that have a high degree of column collinearity, NIPALS suffers from loss of orthogonality of PCs due to machine precision round-off errors accumulated in each iteration and matrix deflation by subtraction. A Gram–Schmidt re-orthogonalization algorithm is applied to both the scores and the loadings at each iteration step to eliminate this loss of orthogonality. NIPALS reliance on single-vector multiplications cannot take advantage of high-level BLAS and results in slow convergence for clustered leading singular values—both these deficiencies are resolved in more sophisticated matrix-free block solvers, such as the Locally Optimal Block Preconditioned Conjugate Gradient (LOBPCG) method. Online/sequential estimation In an "online" or "streaming" situation with data arriving piece by piece rather than being stored in a single batch, it is useful to make an estimate of the PCA projection that can be updated sequentially. This can be done efficiently, but requires different algorithms. Qualitative variables In PCA, it is common that we want to introduce qualitative variables as supplementary elements. For example, many quantitative variables have been measured on plants. For these plants, some qualitative variables are available as, for example, the species to which the plant belongs. These data were subjected to PCA for quantitative variables. When analyzing the results, it is natural to connect the principal components to the qualitative variable species. For this, the following results are produced. Identification, on the factorial planes, of the different species, for example, using different colors. Representation, on the factorial planes, of the centers of gravity of plants belonging to the same species. For each center of gravity and each axis, a p-value to judge the significance of the difference between the center of gravity and the origin. These results are what is called introducing a qualitative variable as supplementary element. This procedure is detailed in Husson, Lê, & Pagès (2009) and Pagès (2013). Few software offer this option in an "automatic" way. 
This is the case of SPAD that historically, following the work of Ludovic Lebart, was the first to propose this option, and the R package FactoMineR. Applications Intelligence The earliest application of factor analysis was in locating and measuring components of human intelligence. It was believed that intelligence had various uncorrelated components such as spatial intelligence, verbal intelligence, induction, deduction etc and that scores on these could be adduced by factor analysis from results on various tests, to give a single index known as the Intelligence Quotient (IQ). The pioneering statistical psychologist Spearman actually developed factor analysis in 1904 for his two-factor theory of intelligence, adding a formal technique to the science of psychometrics. In 1924 Thurstone looked for 56 factors of intelligence, developing the notion of Mental Age. Standard IQ tests today are based on this early work. Residential differentiation In 1949, Shevky and Williams introduced the theory of factorial ecology, which dominated studies of residential differentiation from the 1950s to the 1970s. Neighbourhoods in a city were recognizable or could be distinguished from one another by various characteristics which could be reduced to three by factor analysis. These were known as 'social rank' (an index of occupational status), 'familism' or family size, and 'ethnicity'; Cluster analysis could then be applied to divide the city into clusters or precincts according to values of the three key factor variables. An extensive literature developed around factorial ecology in urban geography, but the approach went out of fashion after 1980 as being methodologically primitive and having little place in postmodern geographical paradigms. One of the problems with factor analysis has always been finding convincing names for the various artificial factors. In 2000, Flood revived the factorial ecology approach to show that principal components analysis actually gave meaningful answers directly, without resorting to factor rotation. The principal components were actually dual variables or shadow prices of 'forces' pushing people together or apart in cities. The first component was 'accessibility', the classic trade-off between demand for travel and demand for space, around which classical urban economics is based. The next two components were 'disadvantage', which keeps people of similar status in separate neighbourhoods (mediated by planning), and ethnicity, where people of similar ethnic backgrounds try to co-locate. About the same time, the Australian Bureau of Statistics defined distinct indexes of advantage and disadvantage taking the first principal component of sets of key variables that were thought to be important. These SEIFA indexes are regularly published for various jurisdictions, and are used frequently in spatial analysis. Development indexes PCA can be used as a formal method for the development of indexes. As an alternative confirmatory composite analysis has been proposed to develop and assess indexes. The City Development Index was developed by PCA from about 200 indicators of city outcomes in a 1996 survey of 254 global cities. The first principal component was subject to iterative regression, adding the original variables singly until about 90% of its variation was accounted for. The index ultimately used about 15 indicators but was a good predictor of many more variables. Its comparative value agreed very well with a subjective assessment of the condition of each city. 
The coefficients on items of infrastructure were roughly proportional to the average costs of providing the underlying services, suggesting the Index was actually a measure of effective physical and social investment in the city. The country-level Human Development Index (HDI) from UNDP, which has been published since 1990 and is very extensively used in development studies, has very similar coefficients on similar indicators, strongly suggesting it was originally constructed using PCA. Population genetics In 1978 Cavalli-Sforza and others pioneered the use of principal components analysis (PCA) to summarise data on variation in human gene frequencies across regions. The components showed distinctive patterns, including gradients and sinusoidal waves. They interpreted these patterns as resulting from specific ancient migration events. Since then, PCA has been ubiquitous in population genetics, with thousands of papers using PCA as a display mechanism. Genetics varies largely according to proximity, so the first two principal components actually show spatial distribution and may be used to map the relative geographical location of different population groups, thereby showing individuals who have wandered from their original locations. PCA in genetics has been technically controversial, in that the technique has been performed on discrete non-normal variables and often on binary allele markers. The lack of any measures of standard error in PCA are also an impediment to more consistent usage. In August 2022, the molecular biologist Eran Elhaik published a theoretical paper in Scientific Reports analyzing 12 PCA applications. He concluded that it was easy to manipulate the method, which, in his view, generated results that were 'erroneous, contradictory, and absurd.' Specifically, he argued, the results achieved in population genetics were characterized by cherry-picking and circular reasoning. Market research and indexes of attitude Market research has been an extensive user of PCA. It is used to develop customer satisfaction or customer loyalty scores for products, and with clustering, to develop market segments that may be targeted with advertising campaigns, in much the same way as factorial ecology will locate geographical areas with similar characteristics. PCA rapidly transforms large amounts of data into smaller, easier-to-digest variables that can be more rapidly and readily analyzed. In any consumer questionnaire, there are series of questions designed to elicit consumer attitudes, and principal components seek out latent variables underlying these attitudes. For example, the Oxford Internet Survey in 2013 asked 2000 people about their attitudes and beliefs, and from these analysts extracted four principal component dimensions, which they identified as 'escape', 'social networking', 'efficiency', and 'problem creating'. Another example from Joe Flood in 2008 extracted an attitudinal index toward housing from 28 attitude questions in a national survey of 2697 households in Australia. The first principal component represented a general attitude toward property and home ownership. The index, or the attitude questions it embodied, could be fed into a General Linear Model of tenure choice. The strongest determinant of private renting by far was the attitude index, rather than income, marital status or household type. Quantitative finance In quantitative finance, PCA is used in financial risk management, and has been applied to other problems such as portfolio optimization. 
PCA is commonly used in problems involving fixed income securities and portfolios, and interest rate derivatives. Valuations here depend on the entire yield curve, comprising numerous highly correlated instruments, and PCA is used to define a set of components or factors that explain rate movements, thereby facilitating the modelling. One common risk management application is to calculating value at risk, VaR, applying PCA to the Monte Carlo simulation. Here, for each simulation-sample, the components are stressed, and rates, and in turn option values, are then reconstructed; with VaR calculated, finally, over the entire run. PCA is also used in hedging exposure to interest rate risk, given partial durations and other sensitivities. Under both, the first three, typically, principal components of the system are of interest (representing "shift", "twist", and "curvature"). These principal components are derived from an eigen-decomposition of the covariance matrix of yield at predefined maturities; and where the variance of each component is its eigenvalue (and as the components are orthogonal, no correlation need be incorporated in subsequent modelling). For equity, an optimal portfolio is one where the expected return is maximized for a given level of risk, or alternatively, where risk is minimized for a given return; see Markowitz model for discussion. Thus, one approach is to reduce portfolio risk, where allocation strategies are applied to the "principal portfolios" instead of the underlying stocks. A second approach is to enhance portfolio return, using the principal components to select companies' stocks with upside potential. PCA has also been used to understand relationships between international equity markets, and within markets between groups of companies in industries or sectors. PCA may also be applied to stress testing, essentially an analysis of a bank's ability to endure a hypothetical adverse economic scenario. Its utility is in "distilling the information contained in [several] macroeconomic variables into a more manageable data set, which can then [be used] for analysis." Here, the resulting factors are linked to e.g. interest rates – based on the largest elements of the factor's eigenvector – and it is then observed how a "shock" to each of the factors affects the implied assets of each of the banks. Neuroscience A variant of principal components analysis is used in neuroscience to identify the specific properties of a stimulus that increases a neuron's probability of generating an action potential. This technique is known as spike-triggered covariance analysis. In a typical application an experimenter presents a white noise process as a stimulus (usually either as a sensory input to a test subject, or as a current injected directly into the neuron) and records a train of action potentials, or spikes, produced by the neuron as a result. Presumably, certain features of the stimulus make the neuron more likely to spike. In order to extract these features, the experimenter calculates the covariance matrix of the spike-triggered ensemble, the set of all stimuli (defined and discretized over a finite time window, typically on the order of 100 ms) that immediately preceded a spike. 
The eigenvectors of the difference between the spike-triggered covariance matrix and the covariance matrix of the prior stimulus ensemble (the set of all stimuli, defined over the same length time window) then indicate the directions in the space of stimuli along which the variance of the spike-triggered ensemble differed the most from that of the prior stimulus ensemble. Specifically, the eigenvectors with the largest positive eigenvalues correspond to the directions along which the variance of the spike-triggered ensemble showed the largest positive change compared to the variance of the prior. Since these were the directions in which varying the stimulus led to a spike, they are often good approximations of the sought after relevant stimulus features. In neuroscience, PCA is also used to discern the identity of a neuron from the shape of its action potential. Spike sorting is an important procedure because extracellular recording techniques often pick up signals from more than one neuron. In spike sorting, one first uses PCA to reduce the dimensionality of the space of action potential waveforms, and then performs clustering analysis to associate specific action potentials with individual neurons. PCA as a dimension reduction technique is particularly suited to detect coordinated activities of large neuronal ensembles. It has been used in determining collective variables, that is, order parameters, during phase transitions in the brain. Relation with other methods Correspondence analysis Correspondence analysis (CA) was developed by Jean-Paul Benzécri and is conceptually similar to PCA, but scales the data (which should be non-negative) so that rows and columns are treated equivalently. It is traditionally applied to contingency tables. CA decomposes the chi-squared statistic associated to this table into orthogonal factors. Because CA is a descriptive technique, it can be applied to tables for which the chi-squared statistic is appropriate or not. Several variants of CA are available including detrended correspondence analysis and canonical correspondence analysis. One special extension is multiple correspondence analysis, which may be seen as the counterpart of principal component analysis for categorical data. Factor analysis Principal component analysis creates variables that are linear combinations of the original variables. The new variables have the property that the variables are all orthogonal. The PCA transformation can be helpful as a pre-processing step before clustering. PCA is a variance-focused approach seeking to reproduce the total variable variance, in which components reflect both common and unique variance of the variable. PCA is generally preferred for purposes of data reduction (that is, translating variable space into optimal factor space) but not when the goal is to detect the latent construct or factors. Factor analysis is similar to principal component analysis, in that factor analysis also involves linear combinations of variables. Different from PCA, factor analysis is a correlation-focused approach seeking to reproduce the inter-correlations among variables, in which the factors "represent the common variance of variables, excluding unique variance". In terms of the correlation matrix, this corresponds with focusing on explaining the off-diagonal terms (that is, shared co-variance), while PCA focuses on explaining the terms that sit on the diagonal. 
However, as a side result, when trying to reproduce the on-diagonal terms, PCA also tends to fit relatively well the off-diagonal correlations. Results given by PCA and factor analysis are very similar in most situations, but this is not always the case, and there are some problems where the results are significantly different. Factor analysis is generally used when the research purpose is detecting data structure (that is, latent constructs or factors) or causal modeling. If the factor model is incorrectly formulated or the assumptions are not met, then factor analysis will give erroneous results. -means clustering It has been asserted that the relaxed solution of -means clustering, specified by the cluster indicators, is given by the principal components, and the PCA subspace spanned by the principal directions is identical to the cluster centroid subspace. However, that PCA is a useful relaxation of -means clustering was not a new result, and it is straightforward to uncover counterexamples to the statement that the cluster centroid subspace is spanned by the principal directions. Non-negative matrix factorization Non-negative matrix factorization (NMF) is a dimension reduction method where only non-negative elements in the matrices are used, which is therefore a promising method in astronomy, in the sense that astrophysical signals are non-negative. The PCA components are orthogonal to each other, while the NMF components are all non-negative and therefore constructs a non-orthogonal basis. In PCA, the contribution of each component is ranked based on the magnitude of its corresponding eigenvalue, which is equivalent to the fractional residual variance (FRV) in analyzing empirical data. For NMF, its components are ranked based only on the empirical FRV curves. The residual fractional eigenvalue plots, that is, as a function of component number given a total of components, for PCA have a flat plateau, where no data is captured to remove the quasi-static noise, then the curves drop quickly as an indication of over-fitting (random noise). The FRV curves for NMF is decreasing continuously when the NMF components are constructed sequentially, indicating the continuous capturing of quasi-static noise; then converge to higher levels than PCA, indicating the less over-fitting property of NMF. Iconography of correlations It is often difficult to interpret the principal components when the data include many variables of various origins, or when some variables are qualitative. This leads the PCA user to a delicate elimination of several variables. If observations or variables have an excessive impact on the direction of the axes, they should be removed and then projected as supplementary elements. In addition, it is necessary to avoid interpreting the proximities between the points close to the center of the factorial plane. The iconography of correlations, on the contrary, which is not a projection on a system of axes, does not have these drawbacks. We can therefore keep all the variables. The principle of the diagram is to underline the "remarkable" correlations of the correlation matrix, by a solid line (positive correlation) or dotted line (negative correlation). A strong correlation is not "remarkable" if it is not direct, but caused by the effect of a third variable. Conversely, weak correlations can be "remarkable". For example, if a variable Y depends on several independent variables, the correlations of Y with each of them are weak and yet "remarkable". 
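The PCA/NMF contrast described earlier in this section (orthogonal components that may contain negative entries versus non-negative, generally non-orthogonal components) can be seen directly with scikit-learn; the estimator names PCA and NMF below are the standard scikit-learn classes, and the data are synthetic:

import numpy as np
from sklearn.decomposition import PCA, NMF

rng = np.random.default_rng(5)
W_true = rng.uniform(0, 1, size=(200, 3))
H_true = rng.uniform(0, 1, size=(3, 10))
X = W_true @ H_true + 0.01 * rng.uniform(0, 1, size=(200, 10))   # non-negative data

pca = PCA(n_components=3).fit(X)
nmf = NMF(n_components=3, init='nndsvda', max_iter=500).fit(X)

# PCA components are mutually orthogonal and may have negative entries;
# NMF components are non-negative and generally non-orthogonal.
print(np.round(pca.components_ @ pca.components_.T, 3))   # approximately the identity matrix
print((nmf.components_ < 0).any())                         # False: no negative entries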
Generalizations Sparse PCA A particular disadvantage of PCA is that the principal components are usually linear combinations of all input variables. Sparse PCA overcomes this disadvantage by finding linear combinations that contain just a few input variables. It extends the classic method of principal component analysis (PCA) for the reduction of dimensionality of data by adding a sparsity constraint on the input variables. Several approaches have been proposed, including a regression framework, a convex relaxation/semidefinite programming framework, a generalized power method framework, an alternating maximization framework, forward-backward greedy search, exact methods using branch-and-bound techniques, and a Bayesian formulation framework. The methodological and theoretical developments of Sparse PCA as well as its applications in scientific studies were recently reviewed in a survey paper. Nonlinear PCA Most of the modern methods for nonlinear dimensionality reduction find their theoretical and algorithmic roots in PCA or K-means. Pearson's original idea was to take a straight line (or plane) which will be "the best fit" to a set of data points. Trevor Hastie expanded on this concept by proposing Principal curves as the natural extension for the geometric interpretation of PCA, which explicitly constructs a manifold for data approximation followed by projecting the points onto it. See also the elastic map algorithm and principal geodesic analysis. Another popular generalization is kernel PCA, which corresponds to PCA performed in a reproducing kernel Hilbert space associated with a positive definite kernel. In multilinear subspace learning, PCA is generalized to multilinear PCA (MPCA) that extracts features directly from tensor representations. MPCA is solved by performing PCA in each mode of the tensor iteratively. MPCA has been applied to face recognition, gait recognition, etc. MPCA is further extended to uncorrelated MPCA, non-negative MPCA and robust MPCA. N-way principal component analysis may be performed with models such as Tucker decomposition, PARAFAC, multiple factor analysis, co-inertia analysis, STATIS, and DISTATIS. Robust PCA While PCA finds the mathematically optimal method (as in minimizing the squared error), it is still sensitive to outliers in the data that produce large errors, something that the method tries to avoid in the first place. It is therefore common practice to remove outliers before computing PCA. However, in some contexts, outliers can be difficult to identify. For example, in data mining algorithms like correlation clustering, the assignment of points to clusters and outliers is not known beforehand. A recently proposed generalization of PCA based on a weighted PCA increases robustness by assigning different weights to data objects based on their estimated relevancy. Outlier-resistant variants of PCA have also been proposed, based on L1-norm formulations (L1-PCA). Robust principal component analysis (RPCA) via decomposition in low-rank and sparse matrices is a modification of PCA that works well with respect to grossly corrupted observations. Similar techniques Independent component analysis Independent component analysis (ICA) is directed to similar problems as principal component analysis, but finds additively separable components rather than successive approximations. Network component analysis Given a matrix , it tries to decompose it into two matrices such that . 
Network component analysis Given a data matrix, network component analysis tries to decompose it into the product of two matrices, a connectivity matrix and a regulatory layer. A key difference from techniques such as PCA and ICA is that some of the entries of the connectivity matrix are constrained to be 0. While in general such a decomposition can have multiple solutions, it can be shown to be unique up to multiplication by a scalar if the following conditions are satisfied: the connectivity matrix has full column rank; each column of the connectivity matrix contains a number of zero entries at least equal to its number of columns minus one (equivalently, the number of rows of the regulatory layer minus one), the justification being that if a node is removed from the regulatory layer along with all the output nodes connected to it, the result must still be characterized by a connectivity matrix with full column rank; and the regulatory layer has full row rank. Discriminant analysis of principal components Discriminant analysis of principal components (DAPC) is a multivariate method used to identify and describe clusters of genetically related individuals. Genetic variation is partitioned into two components, variation between groups and within groups, and it maximizes the former. Linear discriminants are linear combinations of alleles which best separate the clusters. Alleles that most contribute to this discrimination are therefore those that are the most markedly different across groups. The contributions of alleles to the groupings identified by DAPC can allow identifying regions of the genome driving the genetic divergence among groups. In DAPC, data is first transformed using a principal components analysis (PCA) and subsequently clusters are identified using discriminant analysis (DA). A DAPC can be realized in R using the package adegenet. (More information: adegenet on the web.) Directional component analysis Directional component analysis (DCA) is a method used in the atmospheric sciences for analysing multivariate datasets. Like PCA, it allows for dimension reduction, improved visualization and improved interpretability of large data-sets. Also like PCA, it is based on a covariance matrix derived from the input dataset. The difference between PCA and DCA is that DCA additionally requires the input of a vector direction, referred to as the impact. Whereas PCA maximises explained variance, DCA maximises probability density given the impact. The motivation for DCA is to find components of a multivariate dataset that are both likely (measured using probability density) and important (measured using the impact). DCA has been used to find the most likely and most serious heat-wave patterns in weather prediction ensembles, and the most likely and most impactful changes in rainfall due to climate change. Software/source code ALGLIB – a C++ and C# library that implements PCA and truncated PCA. Analytica – The built-in EigenDecomp function computes principal components. ELKI – includes PCA for projection, including robust variants of PCA, as well as PCA-based clustering algorithms. Gretl – principal component analysis can be performed either via the pca command or via the princomp() function. Julia – Supports PCA with the pca function in the MultivariateStats package. KNIME – A Java-based, node-oriented analytics platform; nodes such as PCA, PCA compute, PCA Apply and PCA inverse make principal component analysis straightforward. Maple (software) – The PCA command is used to perform a principal component analysis on a set of data. Mathematica – Implements principal component analysis with the PrincipalComponents command using both covariance and correlation methods. MathPHP – PHP mathematics library with support for PCA.
MATLAB – The SVD function is part of the basic system. In the Statistics Toolbox, the functions princomp and pca (R2012b) give the principal components, while the function pcares gives the residuals and reconstructed matrix for a low-rank PCA approximation. Matplotlib – Python library that has a PCA package in the .mlab module. mlpack – Provides an implementation of principal component analysis in C++. mrmath – A high-performance math library for Delphi and FreePascal that can perform PCA, including robust variants. NAG Library – Principal components analysis is implemented via the g03aa routine (available in the Fortran versions of the Library). NMath – Proprietary numerical library containing PCA for the .NET Framework. GNU Octave – Free software computational environment mostly compatible with MATLAB; the function princomp gives the principal components. OpenCV Oracle Database 12c – Implemented via DBMS_DATA_MINING.SVDS_SCORING_MODE by specifying setting value SVDS_SCORING_PCA. Orange (software) – Integrates PCA in its visual programming environment. PCA displays a scree plot (degree of explained variance) where the user can interactively select the number of principal components. Origin – Contains PCA in its Pro version. Qlucore – Commercial software for analyzing multivariate data with instant response using PCA. R – Free statistical package; the functions princomp and prcomp can be used for principal component analysis; prcomp uses singular value decomposition, which generally gives better numerical accuracy. Some packages that implement PCA in R include, but are not limited to: ade4, vegan, ExPosition, dimRed, and FactoMineR. SAS – Proprietary statistical software that implements principal component analysis. scikit-learn – Python library for machine learning which contains PCA, Probabilistic PCA, Kernel PCA, Sparse PCA and other techniques in the decomposition module. Scilab – Free and open-source, cross-platform numerical computational package; the function princomp computes principal component analysis, and the function pca computes principal component analysis with standardized variables. SPSS – Proprietary software most commonly used by social scientists for PCA, factor analysis and associated cluster analysis. Weka – Java library for machine learning which contains modules for computing principal components. See also Correspondence analysis (for contingency tables) Multiple correspondence analysis (for qualitative variables) Factor analysis of mixed data (for quantitative and qualitative variables) Canonical correlation CUR matrix approximation (can replace low-rank SVD approximation) Detrended correspondence analysis Directional component analysis Dynamic mode decomposition Eigenface Expectation–maximization algorithm Exploratory factor analysis (Wikiversity) Factorial code Functional principal component analysis Geometric data analysis Independent component analysis Kernel PCA L1-norm principal component analysis Low-rank approximation Matrix decomposition Non-negative matrix factorization Nonlinear dimensionality reduction Oja's rule Point distribution model (PCA applied to morphometry and computer vision) Principal component analysis (Wikibooks) Principal component regression Singular spectrum analysis Singular value decomposition Sparse PCA Transform coding Weighted least squares References Further reading Jackson, J.E. (1991). A User's Guide to Principal Components (Wiley). Husson François, Lê Sébastien & Pagès Jérôme (2009). Exploratory Multivariate Analysis by Example Using R. Chapman & Hall/CRC The R Series, London.
224 p. Pagès Jérôme (2014). Multiple Factor Analysis by Example Using R. Chapman & Hall/CRC The R Series, London. 272 p. External links A Tutorial on Principal Component Analysis (a video of less than 100 seconds). See also the list of Software implementations Matrix decompositions Dimension reduction
0.766285
0.999493
0.765896
Optical flow
Optical flow or optic flow is the pattern of apparent motion of objects, surfaces, and edges in a visual scene caused by the relative motion between an observer and a scene. Optical flow can also be defined as the distribution of apparent velocities of movement of brightness patterns in an image. The concept of optical flow was introduced by the American psychologist James J. Gibson in the 1940s to describe the visual stimulus provided to animals moving through the world. Gibson stressed the importance of optic flow for affordance perception, the ability to discern possibilities for action within the environment. Followers of Gibson and his ecological approach to psychology have further demonstrated the role of the optical flow stimulus for the perception of movement by the observer in the world; perception of the shape, distance and movement of objects in the world; and the control of locomotion. The term optical flow is also used by roboticists, encompassing related techniques from image processing and control of navigation including motion detection, object segmentation, time-to-contact information, focus of expansion calculations, luminance, motion compensated encoding, and stereo disparity measurement. Estimation Sequences of ordered images allow the estimation of motion as either instantaneous image velocities or discrete image displacements. Fleet and Weiss provide a tutorial introduction to gradient based optical flow. John L. Barron, David J. Fleet, and Steven Beauchemin provide a performance analysis of a number of optical flow techniques; it emphasizes the accuracy and density of measurements. The optical flow methods try to calculate the motion between two image frames which are taken at times $t$ and $t + \Delta t$ at every voxel position. These methods are called differential since they are based on local Taylor series approximations of the image signal; that is, they use partial derivatives with respect to the spatial and temporal coordinates. For a (2D + t)-dimensional case (3D or n-D cases are similar) a voxel at location $(x, y, t)$ with intensity $I(x, y, t)$ will have moved by $\Delta x$, $\Delta y$ and $\Delta t$ between the two image frames, and the following brightness constancy constraint can be given: $I(x, y, t) = I(x + \Delta x, y + \Delta y, t + \Delta t)$. Assuming the movement to be small, the image constraint at $I(x, y, t)$ can be developed with a Taylor series to get: $I(x + \Delta x, y + \Delta y, t + \Delta t) = I(x, y, t) + \frac{\partial I}{\partial x}\Delta x + \frac{\partial I}{\partial y}\Delta y + \frac{\partial I}{\partial t}\Delta t$ + higher-order terms. By truncating the higher order terms (which performs a linearization) it follows that: $\frac{\partial I}{\partial x}\Delta x + \frac{\partial I}{\partial y}\Delta y + \frac{\partial I}{\partial t}\Delta t = 0$ or, dividing by $\Delta t$, which results in $\frac{\partial I}{\partial x}V_x + \frac{\partial I}{\partial y}V_y + \frac{\partial I}{\partial t} = 0$, where $V_x$ and $V_y$ are the $x$ and $y$ components of the velocity or optical flow of $I(x, y, t)$, and $\frac{\partial I}{\partial x}$, $\frac{\partial I}{\partial y}$ and $\frac{\partial I}{\partial t}$ are the derivatives of the image at $(x, y, t)$ in the corresponding directions. $I_x$, $I_y$ and $I_t$ can be written for the derivatives in the following. Thus: $I_x V_x + I_y V_y = -I_t$ or $\nabla I \cdot \vec{V} = -I_t$. This is an equation in two unknowns and cannot be solved as such. This is known as the aperture problem of the optical flow algorithms. To find the optical flow another set of equations is needed, given by some additional constraint. All optical flow methods introduce additional conditions for estimating the actual flow.
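For example, the Lucas–Kanade method listed below supplies the extra constraint by assuming the flow is constant over a small patch, which turns the single under-determined constraint $I_x V_x + I_y V_y = -I_t$ into an over-determined least-squares problem. A minimal NumPy sketch (the synthetic data and function name are my own, not taken from any particular library):

import numpy as np

def lucas_kanade_patch(I1, I2):
    """Estimate one (Vx, Vy) for a small patch, assuming constant flow within it."""
    Ix = np.gradient(I1, axis=1)      # spatial derivative along x (columns)
    Iy = np.gradient(I1, axis=0)      # spatial derivative along y (rows)
    It = I2 - I1                      # temporal derivative
    A = np.stack([Ix.ravel(), Iy.ravel()], axis=1)
    b = -It.ravel()
    v, *_ = np.linalg.lstsq(A, b, rcond=None)   # least-squares solution of A [Vx, Vy]^T = b
    return v

# Tiny check: shift a smooth pattern by one pixel in the x direction
x, y = np.meshgrid(np.arange(32), np.arange(32))
I1 = np.sin(0.3 * x) + np.cos(0.2 * y)
I2 = np.sin(0.3 * (x - 1)) + np.cos(0.2 * y)
print(lucas_kanade_patch(I1, I2))     # approximately (1.0, 0.0)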
Methods for determination Phase correlation – inverse of normalized cross-power spectrum Block-based methods – minimizing sum of squared differences or sum of absolute differences, or maximizing normalized cross-correlation Differential methods of estimating optical flow, based on partial derivatives of the image signal and/or the sought flow field and higher-order partial derivatives, such as: Lucas–Kanade method – regarding image patches and an affine model for the flow field Horn–Schunck method – optimizing a functional based on residuals from the brightness constancy constraint, and a particular regularization term expressing the expected smoothness of the flow field Buxton–Buxton method – based on a model of the motion of edges in image sequences Black–Jepson method – coarse optical flow via correlation General variational methods – a range of modifications/extensions of Horn–Schunck, using other data terms and other smoothness terms. Discrete optimization methods – the search space is quantized, and then image matching is addressed through label assignment at every pixel, such that the corresponding deformation minimizes the distance between the source and the target image. The optimal solution is often recovered through Max-flow min-cut theorem algorithms, linear programming or belief propagation methods. Many of these, in addition to the current state-of-the-art algorithms are evaluated on the Middlebury Benchmark Dataset. Other popular benchmark datasets are KITTI and Sintel. Uses Motion estimation and video compression have developed as a major aspect of optical flow research. While the optical flow field is superficially similar to a dense motion field derived from the techniques of motion estimation, optical flow is the study of not only the determination of the optical flow field itself, but also of its use in estimating the three-dimensional nature and structure of the scene, as well as the 3D motion of objects and the observer relative to the scene, most of them using the image Jacobian. Optical flow was used by robotics researchers in many areas such as: object detection and tracking, image dominant plane extraction, movement detection, robot navigation and visual odometry. Optical flow information has been recognized as being useful for controlling micro air vehicles. The application of optical flow includes the problem of inferring not only the motion of the observer and objects in the scene, but also the structure of objects and the environment. Since awareness of motion and the generation of mental maps of the structure of our environment are critical components of animal (and human) vision, the conversion of this innate ability to a computer capability is similarly crucial in the field of machine vision. Consider a five-frame clip of a ball moving from the bottom left of a field of vision, to the top right. Motion estimation techniques can determine that on a two dimensional plane the ball is moving up and to the right and vectors describing this motion can be extracted from the sequence of frames. For the purposes of video compression (e.g., MPEG), the sequence is now described as well as it needs to be. However, in the field of machine vision, the question of whether the ball is moving to the right or if the observer is moving to the left is unknowable yet critical information. 
Not even if a static, patterned background were present in the five frames, could we confidently state that the ball was moving to the right, because the pattern might have an infinite distance to the observer. Optical flow sensor Various configurations of optical flow sensors exist. One configuration is an image sensor chip connected to a processor programmed to run an optical flow algorithm. Another configuration uses a vision chip, which is an integrated circuit having both the image sensor and the processor on the same die, allowing for a compact implementation. An example of this is a generic optical mouse sensor used in an optical mouse. In some cases the processing circuitry may be implemented using analog or mixed-signal circuits to enable fast optical flow computation using minimal current consumption. One area of contemporary research is the use of neuromorphic engineering techniques to implement circuits that respond to optical flow, and thus may be appropriate for use in an optical flow sensor. Such circuits may draw inspiration from biological neural circuitry that similarly responds to optical flow. Optical flow sensors are used extensively in computer optical mice, as the main sensing component for measuring the motion of the mouse across a surface. Optical flow sensors are also being used in robotics applications, primarily where there is a need to measure visual motion or relative motion between the robot and other objects in the vicinity of the robot. The use of optical flow sensors in unmanned aerial vehicles (UAVs), for stability and obstacle avoidance, is also an area of current research. See also Ambient optic array Optical mouse Range imaging Vision processing unit Continuity Equation Motion field References External links Finding Optic Flow Art of Optical Flow article on fxguide.com (using optical flow in visual effects) Optical flow evaluation and ground truth sequences. Middlebury Optical flow evaluation and ground truth sequences. mrf-registration.net - Optical flow estimation through MRF The French Aerospace Lab: GPU implementation of a Lucas-Kanade based optical flow CUDA Implementation by CUVI (CUDA Vision & Imaging Library) Horn and Schunck Optical Flow: Online demo and source code of the Horn and Schunck method TV-L1 Optical Flow: Online demo and source code of the Zach et al. method Robust Optical Flow: Online demo and source code of the Brox et al. method Motion in computer vision
0.771174
0.993153
0.765894
Governor (device)
A governor, or speed limiter or controller, is a device used to measure and regulate the speed of a machine, such as an engine. A classic example is the centrifugal governor, also known as the Watt or fly-ball governor on a reciprocating steam engine, which uses the effect of inertial force on rotating weights driven by the machine output shaft to regulate its speed by altering the input flow of steam. History Centrifugal governors were used to regulate the distance and pressure between millstones in windmills since the 17th century. Early steam engines employed a purely reciprocating motion, and were used for pumping water – an application that could tolerate variations in the working speed. It was not until the Scottish engineer James Watt introduced the rotative steam engine, for driving factory machinery, that a constant operating speed became necessary. Between the years 1775 and 1800, Watt, in partnership with industrialist Matthew Boulton, produced some 500 rotative beam engines. At the heart of these engines was Watt's self-designed "conical pendulum" governor: a set of revolving steel balls attached to a vertical spindle by link arms, where the controlling force consists of the weight of the balls. The theoretical basis for the operation of governors was described by James Clerk Maxwell in 1868 in his seminal paper 'On Governors'. Building on Watt's design was American engineer Willard Gibbs who in 1872 theoretically analyzed Watt's conical pendulum governor from a mathematical energy balance perspective. During his Graduate school years at Yale University, Gibbs observed that the operation of the device in practice was beset with the disadvantages of sluggishness and a tendency to over-correct for the changes in speed it was supposed to control. Gibbs theorized that, analogous to the equilibrium of the simple Watt governor (which depends on the balancing of two torques: one due to the weight of the "balls" and the other due to their rotation), thermodynamic equilibrium for any work producing thermodynamic system depends on the balance of two entities. The first is the heat energy supplied to the intermediate substance, and the second is the work energy performed by the intermediate substance. In this case, the intermediate substance is steam. These sorts of theoretical investigations culminated in the 1876 publication of Gibbs' famous work On the Equilibrium of Heterogeneous Substances and in the construction of the Gibbs’ governor. These formulations are ubiquitous today in the natural sciences in the form of the Gibbs' free energy equation, which is used to determine the equilibrium of chemical reactions; also known as Gibbs equilibrium. Governors were also to be found on early motor vehicles (such as the 1900 Wilson-Pilcher), where they were an alternative to a hand throttle. They were used to set the required engine speed, and the vehicle's throttle and timing were adjusted by the governor to hold the speed constant, similar to a modern cruise control. Governors were also optional on utility vehicles with engine-driven accessories such as winches or hydraulic pumps (such as Land Rovers), again to keep the engine at the required speed regardless of variations of the load being driven. Speed limiters Governors can be used to limit the top speed for vehicles, and for some classes of vehicle such devices are a legal requirement. 
They can more generally be used to limit the rotational speed of the internal combustion engine or protect the engine from damage due to excessive rotational speed. Cars Today, BMW, Audi, Volkswagen and Mercedes-Benz limit their production cars to . Certain Audi Sport GmbH and AMG cars, and the Mercedes/McLaren SLR are exceptions. The BMW Rolls-Royces are limited to . Jaguars, although British, also have a limiter, as do the Swedish Saab and Volvo on cars where it is necessary. German manufacturers initially started the "gentlemen's agreement", electronically limiting their vehicles to a top speed of , since such high speeds are more likely on the Autobahn. This was done to reduce the political desire to introduce a legal speed limit. In European markets, General Motors Europe sometimes choose to discount the agreement, meaning that certain high-powered Opel or Vauxhall cars can exceed the mark, whereas their Cadillacs do not. Ferrari, Lamborghini, Maserati, Porsche, Aston Martin and Bentley also do not limit their cars, at least not to . The Chrysler 300C SRT8 is limited to 270 km/h. Most Japanese domestic market vehicles are limited to only or . The top speed is a strong sales argument, though speeds above about are not likely reachable on public roads. Many performance cars are limited to a speed of to limit insurance costs of the vehicle, and reduce the risk of tires failing. Mopeds Mopeds in the United Kingdom have had to have a speed limiter since 1977. Most other European countries have similar rules (see the main article). Public services vehicles Public service vehicles often have a legislated top speed. Scheduled coach services in the United kingdom (and also bus services) are limited to 65 mph. Urban public buses often have speed governors which are typically set to between and . Trucks All heavy vehicles in Europe and New Zealand have law/by-law governors that limits their speeds to or . Fire engines and other emergency vehicles are exempt from this requirement. Example uses Aircraft Aircraft propellers are another application. The governor senses shaft RPM, and adjusts or controls the angle of the blades to vary the torque load on the engine. Thus as the aircraft speeds up (as in a dive) or slows (in climb) the RPM is held constant. Small engines Small engines, used to power lawn mowers, portable generators, and lawn and garden tractors, are equipped with a governor to limit fuel to the engine to a maximum safe speed when unloaded and to maintain a relatively constant speed despite changes in loading. In the case of generator applications, the engine speed must be closely controlled so the output frequency of the generator will remain reasonably constant. Small engine governors are typically one of three types: Pneumatic: the governor mechanism detects air flow from the flywheel blower used to cool an air-cooled engine. The typical design includes an air vane mounted inside the engine's blower housing and linked to the carburetor's throttle shaft. A spring pulls the throttle open and, as the engine gains speed, increased air flow from the blower forces the vane back against the spring, partially closing the throttle. Eventually, a point of equilibrium will be reached and the engine will run at a relatively constant speed. Pneumatic governors are simple in design and inexpensive to produce. They do not regulate engine speed very accurately and are affected by air density, as well as external conditions that may influence airflow. 
Centrifugal: a flyweight mechanism driven by the engine is linked to the throttle and works against a spring in a fashion similar to that of the pneumatic governor, resulting in essentially identical operation. A centrifugal governor is more complex to design and produce than a pneumatic governor. The centrifugal design is more sensitive to speed changes and hence is better suited to engines that experience large fluctuations in loading. Electronic: a servo motor is linked to the throttle and controlled by an electronic module that senses engine speed by counting electrical pulses emitted by the ignition system or a magnetic pickup. The frequency of these pulses varies directly with engine speed, allowing the control module to apply a proportional voltage to the servo to regulate engine speed. Due to their sensitivity and rapid response to speed changes, electronic governors are often fitted to engine-driven generators designed to power computer hardware, as the generator's output frequency must be held within narrow limits to avoid malfunction. Turbine controls In steam turbines, the steam turbine governing is the procedure of monitoring and controlling the flow rate of steam into the turbine with the objective of maintaining its speed of rotation as constant. The flow rate of steam is monitored and controlled by interposing valves between the boiler and the turbine. In water turbines, governors have been used since the mid-19th century to control their speed. A typical system would use a Flyball governor acting directly on the turbine input valve or the wicket gate to control the amount of water entering the turbine. By 1930, mechanical governors started to use PID controllers for more precise control. In the later part of the twentieth century, electronic governors and digital systems started to replace mechanical governors. Electrical generator For electrical generation on synchronous electrical grids, prime movers drive electrical generators which are electrically coupled to any other generators on the grid. With droop speed control, the frequency of the entire grid determines the fuel delivered to each generator, so that if the grid runs faster, the fuel is reduced to each generator by its governor to limit the speed. Elevator Governors are used in elevators. It acts as a stopping mechanism in case the elevator runs beyond its tripping speed (which is usually a factor of the maximum speed of the lift and is preset by the manufacturer as per the international lift safety guidelines). This device must be installed in traction elevators and roped hydraulic elevators. Music box Governors are used in some wind-up music boxes to keep the music playing at a somewhat constant speed while the tension on the spring is decreasing. See also Regulator Servomechanism Hit and miss engine Centrifugal governor References Mechanisms (engineering) Mechanical power control Articles containing video clips sv:Regulator (reglerteknik)
0.771409
0.992843
0.765888
Reynolds-averaged Navier–Stokes equations
The Reynolds-averaged Navier–Stokes equations (RANS equations) are time-averaged equations of motion for fluid flow. The idea behind the equations is Reynolds decomposition, whereby an instantaneous quantity is decomposed into its time-averaged and fluctuating quantities, an idea first proposed by Osborne Reynolds. The RANS equations are primarily used to describe turbulent flows. These equations can be used with approximations based on knowledge of the properties of flow turbulence to give approximate time-averaged solutions to the Navier–Stokes equations. For a stationary flow of an incompressible Newtonian fluid, these equations can be written in Einstein notation in Cartesian coordinates as: The left hand side of this equation represents the change in mean momentum of a fluid element owing to the unsteadiness in the mean flow and the convection by the mean flow. This change is balanced by the mean body force, the isotropic stress owing to the mean pressure field, the viscous stresses, and apparent stress owing to the fluctuating velocity field, generally referred to as the Reynolds stress. This nonlinear Reynolds stress term requires additional modeling to close the RANS equation for solving, and has led to the creation of many different turbulence models. The time-average operator is a Reynolds operator. Derivation of RANS equations The basic tool required for the derivation of the RANS equations from the instantaneous Navier–Stokes equations is the Reynolds decomposition. Reynolds decomposition refers to separation of the flow variable (like velocity ) into the mean (time-averaged) component and the fluctuating component. Because the mean operator is a Reynolds operator, it has a set of properties. One of these properties is that the mean of the fluctuating quantity is equal to zero . Thus, where is the position vector. Some authors prefer using instead of for the mean term (since an overbar is sometimes used to represent a vector). In this case, the fluctuating term is represented instead by . This is possible because the two terms do not appear simultaneously in the same equation. To avoid confusion, the notation , , and will be used to represent the instantaneous, mean, and fluctuating terms, respectively. The properties of Reynolds operators are useful in the derivation of the RANS equations. Using these properties, the Navier–Stokes equations of motion, expressed in tensor notation, are (for an incompressible Newtonian fluid): where is a vector representing external forces. Next, each instantaneous quantity can be split into time-averaged and fluctuating components, and the resulting equation time-averaged, to yield: The momentum equation can also be written as, On further manipulations this yields, where, is the mean rate of strain tensor. Finally, since integration in time removes the time dependence of the resultant terms, the time derivative must be eliminated, leaving: Equations of Reynolds stress The time evolution equation of Reynolds stress is given by: This equation is very complicated. If is traced, turbulence kinetic energy is obtained. The last term is turbulent dissipation rate. All RANS models are based on the above equation. 
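For reference, the Reynolds decomposition and the stationary incompressible RANS momentum equation described above are commonly written in the following standard textbook form (a sketch of the usual notation, which may not match the original article's exact symbols):

% Reynolds decomposition of an instantaneous quantity
u_i = \bar{u}_i + u_i'

% Stationary, incompressible RANS momentum equation (Einstein notation)
\rho \bar{u}_j \frac{\partial \bar{u}_i}{\partial x_j}
  = \rho \bar{f}_i
  + \frac{\partial}{\partial x_j}
    \left[ -\bar{p}\,\delta_{ij}
         + \mu \left( \frac{\partial \bar{u}_i}{\partial x_j}
                    + \frac{\partial \bar{u}_j}{\partial x_i} \right)
         - \rho \overline{u_i' u_j'} \right]

The last term, $-\rho \overline{u_i' u_j'}$, is the Reynolds stress discussed in the text, and it is the term that requires a turbulence model to close the equations.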
Applications (RANS modelling) A model for testing performance was determined that, when combined with the vortex lattice (VLM) or boundary element method (BEM), RANS was found useful for modelling the flow of water between two contrary rotation propellers, where VLM or BEM are applied to the propellers and RANS is used for the dynamically fluxing inter-propeller state. The RANS equations have been widely utilized as a model for determining flow characteristics and assessing wind comfort in urban environments. This computational approach can be executed through direct calculations involving the solution of the RANS equations, or through an indirect method involving the training of machine learning algorithms using the RANS equations as a basis. The direct approach is more accurate than the indirect approach but it requires expertise in numerical methods and computational fluid dynamics (CFD), as well as substantial computational resources to handle the complexity of the equations. Notes See also Favre averaging References Fluid dynamics Turbulence Turbulence models Computational fluid dynamics
0.77268
0.991207
0.765886
Curl (mathematics)
In vector calculus, the curl, also known as rotor, is a vector operator that describes the infinitesimal circulation of a vector field in three-dimensional Euclidean space. The curl at a point in the field is represented by a vector whose length and direction denote the magnitude and axis of the maximum circulation. The curl of a field is formally defined as the circulation density at each point of the field. A vector field whose curl is zero is called irrotational. The curl is a form of differentiation for vector fields. The corresponding form of the fundamental theorem of calculus is Stokes' theorem, which relates the surface integral of the curl of a vector field to the line integral of the vector field around the boundary curve. The notation is more common in North America. In the rest of the world, particularly in 20th century scientific literature, the alternative notation is traditionally used, which comes from the "rate of rotation" that it represents. To avoid confusion, modern authors tend to use the cross product notation with the del (nabla) operator, as in which also reveals the relation between curl (rotor), divergence, and gradient operators. Unlike the gradient and divergence, curl as formulated in vector calculus does not generalize simply to other dimensions; some generalizations are possible, but only in three dimensions is the geometrically defined curl of a vector field again a vector field. This deficiency is a direct consequence of the limitations of vector calculus; on the other hand, when expressed as an antisymmetric tensor field via the wedge operator of geometric calculus, the curl generalizes to all dimensions. The circumstance is similar to that attending the 3-dimensional cross product, and indeed the connection is reflected in the notation for the curl. The name "curl" was first suggested by James Clerk Maxwell in 1871 but the concept was apparently first used in the construction of an optical field theory by James MacCullagh in 1839. Definition The curl of a vector field , denoted by , or , or , is an operator that maps functions in to functions in , and in particular, it maps continuously differentiable functions to continuous functions . It can be defined in several ways, to be mentioned below: One way to define the curl of a vector field at a point is implicitly through its components along various axes passing through the point: if is any unit vector, the component of the curl of along the direction may be defined to be the limiting value of a closed line integral in a plane perpendicular to divided by the area enclosed, as the path of integration is contracted indefinitely around the point. More specifically, the curl is defined at a point as where the line integral is calculated along the boundary of the area in question, being the magnitude of the area. This equation defines the component of the curl of along the direction . The infinitesimal surfaces bounded by have as their normal. is oriented via the right-hand rule. The above formula means that the component of the curl of a vector field along a certain axis is the infinitesimal area density of the circulation of the field in a plane perpendicular to that axis. This formula does not a priori define a legitimate vector field, for the individual circulation densities with respect to various axes a priori need not relate to each other in the same way as the components of a vector do; that they do indeed relate to each other in this precise manner must be proven separately. 
To this definition fits naturally the Kelvin–Stokes theorem, as a global formula corresponding to the definition. It equates the surface integral of the curl of a vector field to the above line integral taken around the boundary of the surface. Another way one can define the curl vector of a function at a point is explicitly as the limiting value of a vector-valued surface integral around a shell enclosing divided by the volume enclosed, as the shell is contracted indefinitely around . More specifically, the curl may be defined by the vector formula where the surface integral is calculated along the boundary of the volume , being the magnitude of the volume, and pointing outward from the surface perpendicularly at every point in . In this formula, the cross product in the integrand measures the tangential component of at each point on the surface , and points along the surface at right angles to the tangential projection of . Integrating this cross product over the whole surface results in a vector whose magnitude measures the overall circulation of around , and whose direction is at right angles to this circulation. The above formula says that the curl of a vector field at a point is the infinitesimal volume density of this "circulation vector" around the point. To this definition fits naturally another global formula (similar to the Kelvin-Stokes theorem) which equates the volume integral of the curl of a vector field to the above surface integral taken over the boundary of the volume. Whereas the above two definitions of the curl are coordinate free, there is another "easy to memorize" definition of the curl in curvilinear orthogonal coordinates, e.g. in Cartesian coordinates, spherical, cylindrical, or even elliptical or parabolic coordinates: The equation for each component can be obtained by exchanging each occurrence of a subscript 1, 2, 3 in cyclic permutation: 1 → 2, 2 → 3, and 3 → 1 (where the subscripts represent the relevant indices). If are the Cartesian coordinates and are the orthogonal coordinates, then is the length of the coordinate vector corresponding to . The remaining two components of curl result from cyclic permutation of indices: 3,1,2 → 1,2,3 → 2,3,1. Usage In practice, the two coordinate-free definitions described above are rarely used because in virtually all cases, the curl operator can be applied using some set of curvilinear coordinates, for which simpler representations have been derived. The notation has its origins in the similarities to the 3-dimensional cross product, and it is useful as a mnemonic in Cartesian coordinates if is taken as a vector differential operator del. Such notation involving operators is common in physics and algebra. Expanded in 3-dimensional Cartesian coordinates (see Del in cylindrical and spherical coordinates for spherical and cylindrical coordinate representations), is, for composed of (where the subscripts indicate the components of the vector, not partial derivatives): where , , and are the unit vectors for the -, -, and -axes, respectively. This expands as follows: Although expressed in terms of coordinates, the result is invariant under proper rotations of the coordinate axes but the result inverts under reflection. In a general coordinate system, the curl is given by where denotes the Levi-Civita tensor, the covariant derivative, is the determinant of the metric tensor and the Einstein summation convention implies that repeated indices are summed over. 
Due to the symmetry of the Christoffel symbols participating in the covariant derivative, this expression reduces to the partial derivative: where are the local basis vectors. Equivalently, using the exterior derivative, the curl can be expressed as: Here and are the musical isomorphisms, and is the Hodge star operator. This formula shows how to calculate the curl of in any coordinate system, and how to extend the curl to any oriented three-dimensional Riemannian manifold. Since this depends on a choice of orientation, curl is a chiral operation. In other words, if the orientation is reversed, then the direction of the curl is also reversed. Examples Example 1 Suppose the vector field describes the velocity field of a fluid flow (such as a large tank of liquid or gas) and a small ball is located within the fluid or gas (the center of the ball being fixed at a certain point). If the ball has a rough surface, the fluid flowing past it will make it rotate. The rotation axis (oriented according to the right hand rule) points in the direction of the curl of the field at the center of the ball, and the angular speed of the rotation is half the magnitude of the curl at this point. The curl of the vector field at any point is given by the rotation of an infinitesimal area in the xy-plane (for z-axis component of the curl), zx-plane (for y-axis component of the curl) and yz-plane (for x-axis component of the curl vector). This can be seen in the examples below. Example 2 The vector field can be decomposed as Upon visual inspection, the field can be described as "rotating". If the vectors of the field were to represent a linear force acting on objects present at that point, and an object were to be placed inside the field, the object would start to rotate clockwise around itself. This is true regardless of where the object is placed. Calculating the curl: The resulting vector field describing the curl would at all points be pointing in the negative direction. The results of this equation align with what could have been predicted using the right-hand rule using a right-handed coordinate system. Being a uniform vector field, the object described before would have the same rotational intensity regardless of where it was placed. Example 3 For the vector field the curl is not as obvious from the graph. However, taking the object in the previous example, and placing it anywhere on the line , the force exerted on the right side would be slightly greater than the force exerted on the left, causing it to rotate clockwise. Using the right-hand rule, it can be predicted that the resulting curl would be straight in the negative direction. Inversely, if placed on , the object would rotate counterclockwise and the right-hand rule would result in a positive direction. Calculating the curl: The curl points in the negative direction when is positive and vice versa. In this field, the intensity of rotation would be greater as the object moves away from the plane . Further examples In a vector field describing the linear velocities of each part of a rotating disk in uniform circular motion, the curl has the same value at all points, and this value turns out to be exactly two times the vectorial angular velocity of the disk (oriented as usual by the right-hand rule). More generally, for any flowing mass, the linear velocity vector field at each point of the mass flow has a curl (the vorticity of the flow at that point) equal to exactly two times the local vectorial angular velocity of the mass about the point. 
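The rigid-rotation example above can be checked symbolically. A small sketch using SymPy's vector module (assuming SymPy is installed; the angular-speed symbol omega and the sample scalar field are chosen for the example):

from sympy import symbols
from sympy.vector import CoordSys3D, curl, gradient

N = CoordSys3D('N')
w = symbols('omega', positive=True)

# Linear velocity field of a disk rotating about the z-axis with angular speed omega
v = -w * N.y * N.i + w * N.x * N.j
print(curl(v))               # 2*omega*N.k, i.e. twice the vectorial angular velocity

# A gradient field is irrotational (an identity discussed below)
f = N.x**2 * N.y + N.z
print(curl(gradient(f)))     # the zero vector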
For any solid object subject to an external physical force (such as gravity or the electromagnetic force), one may consider the vector field representing the infinitesimal force-per-unit-volume contributions acting at each of the points of the object. This force field may create a net torque on the object about its center of mass, and this torque turns out to be directly proportional and vectorially parallel to the (vector-valued) integral of the curl of the force field over the whole volume. Of the four Maxwell's equations, two—Faraday's law and Ampère's law—can be compactly expressed using curl. Faraday's law states that the curl of an electric field is equal to the opposite of the time rate of change of the magnetic field, while Ampère's law relates the curl of the magnetic field to the current and the time rate of change of the electric field. Identities In general curvilinear coordinates (not only in Cartesian coordinates), the curl of a cross product of vector fields and can be shown to be Interchanging the vector field and operator, we arrive at the cross product of a vector field with curl of a vector field: where is the Feynman subscript notation, which considers only the variation due to the vector field (i.e., in this case, is treated as being constant in space). Another example is the curl of a curl of a vector field. It can be shown that in general coordinates and this identity defines the vector Laplacian of , symbolized as . The curl of the gradient of any scalar field is always the zero vector field which follows from the antisymmetry in the definition of the curl, and the symmetry of second derivatives. The divergence of the curl of any vector field is equal to zero: If is a scalar valued function and is a vector field, then Generalizations The vector calculus operations of grad, curl, and div are most easily generalized in the context of differential forms, which involves a number of steps. In short, they correspond to the derivatives of 0-forms, 1-forms, and 2-forms, respectively. The geometric interpretation of curl as rotation corresponds to identifying bivectors (2-vectors) in 3 dimensions with the special orthogonal Lie algebra of infinitesimal rotations (in coordinates, skew-symmetric 3 × 3 matrices), while representing rotations by vectors corresponds to identifying 1-vectors (equivalently, 2-vectors) and these all being 3-dimensional spaces. Differential forms In 3 dimensions, a differential 0-form is a real-valued function ; a differential 1-form is the following expression, where the coefficients are functions: a differential 2-form is the formal sum, again with function coefficients: and a differential 3-form is defined by a single term with one function as coefficient: (Here the -coefficients are real functions of three variables; the "wedge products", e.g. , can be interpreted as some kind of oriented area elements, , etc.) The exterior derivative of a -form in is defined as the -form from above—and in if, e.g., then the exterior derivative leads to The exterior derivative of a 1-form is therefore a 2-form, and that of a 2-form is a 3-form. On the other hand, because of the interchangeability of mixed derivatives, and antisymmetry, the twofold application of the exterior derivative yields (the zero -form). Thus, denoting the space of -forms by and the exterior derivative by one gets a sequence: Here is the space of sections of the exterior algebra vector bundle over Rn, whose dimension is the binomial coefficient ; note that for or . 
Writing only dimensions, one obtains a row of Pascal's triangle: the 1-dimensional fibers correspond to scalar fields, and the 3-dimensional fibers to vector fields, as described below. Modulo suitable identifications, the three nontrivial occurrences of the exterior derivative correspond to grad, curl, and div. Differential forms and the differential can be defined on any Euclidean space, or indeed any manifold, without any notion of a Riemannian metric. On a Riemannian manifold, or more generally pseudo-Riemannian manifold, -forms can be identified with -vector fields (-forms are -covector fields, and a pseudo-Riemannian metric gives an isomorphism between vectors and covectors), and on an oriented vector space with a nondegenerate form (an isomorphism between vectors and covectors), there is an isomorphism between -vectors and -vectors; in particular on (the tangent space of) an oriented pseudo-Riemannian manifold. Thus on an oriented pseudo-Riemannian manifold, one can interchange -forms, -vector fields, -forms, and -vector fields; this is known as Hodge duality. Concretely, on this is given by: 1-forms and 1-vector fields: the 1-form corresponds to the vector field . 1-forms and 2-forms: one replaces by the dual quantity (i.e., omit ), and likewise, taking care of orientation: corresponds to , and corresponds to . Thus the form corresponds to the "dual form" . Thus, identifying 0-forms and 3-forms with scalar fields, and 1-forms and 2-forms with vector fields: grad takes a scalar field (0-form) to a vector field (1-form); curl takes a vector field (1-form) to a pseudovector field (2-form); div takes a pseudovector field (2-form) to a pseudoscalar field (3-form) On the other hand, the fact that corresponds to the identities for any scalar field , and for any vector field . Grad and div generalize to all oriented pseudo-Riemannian manifolds, with the same geometric interpretation, because the spaces of 0-forms and -forms at each point are always 1-dimensional and can be identified with scalar fields, while the spaces of 1-forms and -forms are always fiberwise -dimensional and can be identified with vector fields. Curl does not generalize in this way to 4 or more dimensions (or down to 2 or fewer dimensions); in 4 dimensions the dimensions are so the curl of a 1-vector field (fiberwise 4-dimensional) is a 2-vector field, which at each point belongs to 6-dimensional vector space, and so one has which yields a sum of six independent terms, and cannot be identified with a 1-vector field. Nor can one meaningfully go from a 1-vector field to a 2-vector field to a 3-vector field (4 → 6 → 4), as taking the differential twice yields zero. Thus there is no curl function from vector fields to vector fields in other dimensions arising in this way. However, one can define a curl of a vector field as a 2-vector field in general, as described below. Curl geometrically 2-vectors correspond to the exterior power ; in the presence of an inner product, in coordinates these are the skew-symmetric matrices, which are geometrically considered as the special orthogonal Lie algebra of infinitesimal rotations. This has dimensions, and allows one to interpret the differential of a 1-vector field as its infinitesimal rotations. Only in 3 dimensions (or trivially in 0 dimensions) we have , which is the most elegant and common case. 
In 2 dimensions the curl of a vector field is not a vector field but a function, as 2-dimensional rotations are given by an angle (a scalar – an orientation is required to choose whether one counts clockwise or counterclockwise rotations as positive); this is not the div, but is rather perpendicular to it. In 3 dimensions the curl of a vector field is a vector field as is familiar (in 1 and 0 dimensions the curl of a vector field is 0, because there are no non-trivial 2-vectors), while in 4 dimensions the curl of a vector field is, geometrically, at each point an element of the 6-dimensional Lie algebra The curl of a 3-dimensional vector field which only depends on 2 coordinates (say and ) is simply a vertical vector field (in the direction) whose magnitude is the curl of the 2-dimensional vector field, as in the examples on this page. Considering curl as a 2-vector field (an antisymmetric 2-tensor) has been used to generalize vector calculus and associated physics to higher dimensions. Inverse In the case where the divergence of a vector field is zero, a vector field exists such that . This is why the magnetic field, characterized by zero divergence, can be expressed as the curl of a magnetic vector potential. If is a vector field with , then adding any gradient vector field to will result in another vector field such that as well. This can be summarized by saying that the inverse curl of a three-dimensional vector field can be obtained up to an unknown irrotational field with the Biot–Savart law. See also Helmholtz decomposition Hiptmair–Xu preconditioner Del in cylindrical and spherical coordinates Vorticity References Further reading External links Differential operators Linear operators in calculus Vector calculus Analytic geometry
0.767326
0.9981
0.765868
Isentropic nozzle flow
In fluid mechanics, isentropic nozzle flow describes the movement of a fluid through a narrow opening without an increase in entropy (an isentropic process). Overview Whenever a gas is forced through a tube, the gaseous molecules are deflected by the tube's walls. If the speed of the gas is much less than the speed of sound, the gas density will remain constant and the velocity of the flow will increase. However, as the speed of the flow approaches the speed of sound, compressibility effects on the gas are to be considered. The density of the gas becomes position dependent. While considering flow through a tube, if the flow is very gradually compressed (i.e. area decreases) and then gradually expanded (i.e. area increases), the flow conditions are restored (i.e. return to its initial position). So, such a process is a reversible process. According to the Second Law of Thermodynamics, whenever there is a reversible and adiabatic flow, constant value of entropy is maintained. Engineers classify this type of flow as an isentropic flow of fluids. Isentropic is the combination of the Greek word "iso" (which means - same) and entropy. When the change in flow variables is small and gradual, isentropic flows occur. The generation of sound waves is an isentropic process. A supersonic flow that is turned while there is an increase in flow area is also isentropic. Since there is an increase in area, therefore we call this an isentropic expansion. If a supersonic flow is turned abruptly and the flow area decreases, the flow is irreversible due to the generation of shock waves. The isentropic relations are no longer valid and the flow is governed by the oblique or normal shock relations. Set of Equations Below are nine equations commonly used when evaluating isentropic flow conditions. These assume the gas is calorically perfect; i.e. the ratio of specific heats is a constant across the temperature range. In typical cases the actual variation is only slight. Properties without a subscript are evaluated at the point of interest (this point may be chosen anywhere along the length of the nozzle, but once chosen, all properties in a calculation must be evaluated at the same point) Subscript denotes a property at total/stagnation conditions. In a rocket or jet engine, this means the conditions inside the combustion chamber. For example, is total pressure/stagnation pressure/chamber pressure (all equivalent). is the local Mach number of the gas is the speed of the gas (m/s) is the local speed of sound through the gas (m/s) is the ratio of specific heats of the gas is the pressure of the gas (Pa) is the density of the gas (kg/m3) is the temperature of the gas (K) is the cross sectional area of the nozzle at the point of interest (m2) is the cross sectional area of the nozzle at the sonic point, or the point where gas velocity is Mach 1 (m2). Ideally this will occur at the nozzle throat. Stagnation properties In fluid dynamics, a stagnation point is a point in a flow field where the local velocity of the fluid is zero. The isentropic stagnation state is the state a flowing fluid would attain if it underwent a reversible adiabatic deceleration to zero velocity. There are both actual and the isentropic stagnation states for a typical gas or vapor. Sometimes it is advantageous to make a distinction between the actual and the isentropic stagnation states. 
The actual stagnation state is the state achieved after an actual deceleration to zero velocity (as at the nose of a body placed in a fluid stream), and there may be irreversibility associated with the deceleration process. Therefore, the term "stagnation property" is sometimes reserved for the properties associated with the actual state, and the term total property is used for the isentropic stagnation state. The enthalpy is the same for both the actual and isentropic stagnation states (assuming that the actual process is adiabatic). Therefore, for an ideal gas, the actual stagnation temperature is the same as the isentropic stagnation temperature. However, the actual stagnation pressure may be less than the isentropic stagnation pressure. For this reason the term "total pressure" (meaning isentropic stagnation pressure) has particular meaning compared to the actual stagnation pressure. Flow analysis The isentropic efficiency is . The variation of fluid density for compressible flows requires attention to density and other fluid property relationships. The fluid equation of state, often unimportant for incompressible flows, is vital in the analysis of compressible flows. Also, temperature variations for compressible flows are usually significant and thus the energy equation is important. Curious phenomena can occur with compressible flows. For simplicity, the gas is assumed to be an ideal gas. The gas flow is isentropic. The gas flow is constant. The gas flow is along a straight line from gas inlet to exhaust gas exit. The gas flow behavior is compressible. There are numerous applications where a steady, uniform, isentropic flow is a good approximation to the flow in conduits. These include the flow through a jet engine, through the nozzle of a rocket, from a broken gas line, and past the blades of a turbine. = Mach number = velocity = universal gas constant = pressure = specific heat ratio = temperature * = sonic conditions = density = area = molar mass Energy equation for the steady flow: To model such situations, consider the control volume in the changing area of the conduit of Fig. The continuity equation between two sections an infinitesimal distance apart is If only the first-order terms in a differential quantity are retained, continuity takes the form The energy equation is: This simplifies to, neglecting higher-order terms: Assuming an isentropic flow, the energy equation becomes: Substitute from the continuity equation to obtain or, in terms of the Mach number: This equation applies to a steady, uniform, isentropic flow. There are several observations that can be made from an analysis of Eq. (9.26). They are: For a subsonic flow in an expanding conduit ( and ), the flow is decelerating. For a subsonic flow in a converging conduit ( and ), the flow is accelerating. For a supersonic flow in an expanding conduit ( and ), the flow is accelerating. For a supersonic flow in a converging conduit ( and ), the flow is decelerating. At a throat where , either or (the flow could be accelerating through , or it may reach a velocity such that ). Supersonic flow A nozzle for a supersonic flow must increase in area in the flow direction, and a diffuser must decrease in area, opposite to a nozzle and diffuser for a subsonic flow. So, for a supersonic flow to develop from a reservoir where the velocity is zero, the subsonic flow must first accelerate through a converging area to a throat, followed by continued acceleration through an enlarging area. 
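The stagnation-to-static ratios and the ratio of local area to the critical (sonic) throat area used throughout this analysis are straightforward to evaluate numerically. A short Python sketch, assuming a calorically perfect gas with gamma = 1.4 as an approximation for air (the function name is chosen for the example):

def isentropic_ratios(M, gamma=1.4):
    """Standard isentropic-flow relations for a calorically perfect gas at Mach M."""
    T0_T = 1.0 + 0.5 * (gamma - 1.0) * M**2                 # T0/T
    p0_p = T0_T ** (gamma / (gamma - 1.0))                  # p0/p
    rho0_rho = T0_T ** (1.0 / (gamma - 1.0))                # rho0/rho
    A_Astar = (1.0 / M) * ((2.0 / (gamma + 1.0)) * T0_T) ** ((gamma + 1.0) / (2.0 * (gamma - 1.0)))
    return T0_T, p0_p, rho0_rho, A_Astar

# At the throat (M = 1) for air: T*/T0 is about 0.833 and p*/p0 is about 0.528
T0_T, p0_p, _, _ = isentropic_ratios(1.0)
print(1.0 / T0_T, 1.0 / p0_p)

These are the critical ratios that are applied at the throat in the discussion that follows.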
The nozzles on a rocket designed to place satellites in orbit are constructed using such converging-diverging geometry. The energy and continuity equations can take on particularly helpful forms for the steady, uniform, isentropic flow through the nozzle. Apply the energy equation with between the reservoir and some location in the nozzle to obtain Any quantity with a zero subscript refers to a stagnation point where the velocity is zero, such as in the reservoir. Using several thermodynamic relations, equations can be put in the forms: If the above equations are applied at the throat (the critical area signified by an asterisk (*) superscript, where ), the energy equation takes the forms The critical area is often referenced even though a throat does not exist. For air with , the equations above provide The mass flux through the nozzle is of interest and is given by: With the use of Eq. (9.28), the mass flux, after applying some algebra, can be expressed as If the critical area is selected where , this takes the form which, when combined with previous it provides: Converging nozzle Consider a converging nozzle connecting a reservoir with a receiver. If the reservoir pressure is held constant and the receiver pressure reduced, the Mach number at the exit of the nozzle will increase until is reached, indicated by the left curve in figure 2. After is reached at the nozzle exit for , the condition of choked flow occurs and the velocity throughout the nozzle cannot change with further decreases in . This is due to the fact that pressure changes downstream of the exit cannot travel upstream to cause changes in the flow conditions. The right curve of figure 2. represents the case when the reservoir pressure is increased and the receiver pressure is held constant. When , the condition of choked flow also occurs; but Eq indicates that the mass flux will continue to increase as is increased. This is the case when a gas line ruptures. It is interesting that the exit pressure is able to be greater than the receiver pressure . Nature allows this by providing the streamlines of a gas the ability to make a sudden change of direction at the exit and expand to a much greater area resulting in a reduction of the pressure from to . The case of a converging-diverging nozzle allows a supersonic flow to occur, providing the receiver pressure is sufficiently low. This is shown in figure 3 assuming a constant reservoir pressure with a decreasing receiver pressure. If the receiver pressure is equal to the reservoir pressure, no flow occurs, represented by curve A. If is slightly less than , the flow is subsonic throughout, with a minimum pressure at the throat, represented by curve B. As the pressure is reduced still further, a pressure is reached that result in at the throat with subsonic flow throughout the remainder of the nozzle. There is another receiver pressure substantially below that of curve C that also results in isentropic flow throughout the nozzle, represented by curve D; after the throat the flow is supersonic. Pressures in the receiver in between those of curve C and curve D result in non-isentropic flow (a shock wave occurs in the flow). If is below that of curve D, the exit pressure is greater than . Once again, for receiver pressures below that of curve C, the mass flux remains constant since the conditions at the throat remain unchanged. It may appear that the supersonic flow will tend to separate from the nozzle, but just the opposite is true. 
A supersonic flow can turn very sharp angles, since nature provides expansion fans that do not exist in subsonic flows. To avoid separation in subsonic nozzles, the expansion angle should not exceed 10°. For larger angles, vanes are used so that the angle between the vanes does not exceed 10°. See also de Laval nozzle Fanno flow Supersonic gas separation Compressible flow References Colbert, Elton J. Isentropic Flow Through Nozzles. University of Nevada, Reno. 3 May 2001. Accessed 15 July 2014. Benson, Tom. "Isentropic Flow". NASA.gov. National Aeronautics and Space Administration. 21 June 2014. Accessed 15 July 2014. Bar-Meir, Genick. "Isenotropic Flow". Potto.org. Potto Project. 21 November 2007. Accessed 15 July 2014. Thermodynamic processes Thermodynamic entropy
A supersonic flow can turn very sharp angles, since nature provides expansion fans that do not exist in subsonic flows. To avoid separation in subsonic nozzles, the expansion angle should not exceed 10°. For larger angles, vanes are used so that the angle between the vanes does not exceed 10°.

See also
de Laval nozzle
Fanno flow
Supersonic gas separation
Compressible flow

References
Colbert, Elton J. Isentropic Flow Through Nozzles. University of Nevada, Reno. 3 May 2001. Accessed 15 July 2014.
Benson, Tom. "Isentropic Flow". NASA.gov. National Aeronautics and Space Administration. 21 June 2014. Accessed 15 July 2014.
Bar-Meir, Genick. "Isentropic Flow". Potto.org. Potto Project. 21 November 2007. Accessed 15 July 2014.

Thermodynamic processes Thermodynamic entropy
Van de Graaff generator
A Van de Graaff generator is an electrostatic generator which uses a moving belt to accumulate electric charge on a hollow metal globe on the top of an insulated column, creating very high electric potentials. It produces very high voltage direct current (DC) electricity at low current levels. It was invented by American physicist Robert J. Van de Graaff in 1929. The potential difference achieved by modern Van de Graaff generators can be as much as 5 megavolts. A tabletop version can produce on the order of 100 kV and can store enough energy to produce visible electric sparks. Small Van de Graaff machines are produced for entertainment, and for physics education to teach electrostatics; larger ones are displayed in some science museums. The Van de Graaff generator was originally developed as a particle accelerator for physics research, as its high potential can be used to accelerate subatomic particles to great speeds in an evacuated tube. It was the most powerful type of accelerator until the cyclotron was developed in the early 1930s. Van de Graaff generators are still used as accelerators to generate energetic particle and X-ray beams for nuclear research and nuclear medicine. The voltage produced by an open-air Van de Graaff machine is limited by arcing and corona discharge to about 5 MV. Most modern industrial machines are enclosed in a pressurized tank of insulating gas; these can achieve potentials as large as about 25 MV. History Background The concept of an electrostatic generator in which charge is mechanically transported in small amounts into the interior of a high-voltage electrode originated with the Kelvin water dropper, invented in 1867 by William Thomson (Lord Kelvin), in which charged drops of water fall into a bucket with the same polarity charge, adding to the charge. In a machine of this type, the gravitational force moves the drops against the opposing electrostatic field of the bucket. Kelvin himself first suggested using a belt to carry the charge instead of water. The first electrostatic machine that used an endless belt to transport charge was constructed in 1872 by Augusto Righi. It used an india rubber belt with wire rings along its length as charge carriers, which passed into a spherical metal electrode. The charge was applied to the belt from the grounded lower roller by electrostatic induction using a charged plate. John Gray also invented a belt machine about 1890. Another more complicated belt machine was invented in 1903 by Juan Burboa A more immediate inspiration for Van de Graaff was a generator W. F. G. Swann was developing in the 1920s in which charge was transported to an electrode by falling metal balls, thus returning to the principle of the Kelvin water dropper. Initial development The Van de Graaff generator was developed, starting in 1929, by physicist Robert J. Van de Graaff at Princeton University, with help from colleague Nicholas Burke. The first model was demonstrated in October 1929. The first machine used an ordinary tin can, a small motor, and a silk ribbon bought at a five-and-dime store. After that, he went to the chairman of the physics department requesting $100 to make an improved version. He did get the money, with some difficulty. By 1931, he could report achieving 1.5 million volts, saying "The machine is simple, inexpensive, and portable. An ordinary lamp socket provides the only power needed." 
According to a patent application, it had two 60-cm-diameter charge-accumulation spheres mounted on borosilicate glass columns 180 cm high; the apparatus cost $90 in 1931. Van de Graaff applied for a second patent in December 1931, which was assigned to Massachusetts Institute of Technology in exchange for a share of net income; the patent was later granted. In 1933, Van de Graaff built a 40 ft (12 m) model at MIT's Round Hill facility, the use of which was donated by Colonel Edward H. R. Green. One consequence of the location of this generator in an aircraft hangar was the "pigeon effect": arcing from accumulated droppings on the outer surface of the spheres. Higher energy machines In 1937, the Westinghouse Electric company built a machine, the Westinghouse Atom Smasher capable of generating 5 MeV in Forest Hills, Pennsylvania. It marked the beginning of nuclear research for civilian applications. It was decommissioned in 1958 and was partially demolished in 2015. (The enclosure was laid on its side for safety reasons.) A more recent development is the tandem Van de Graaff accelerator, containing one or more Van de Graaff generators, in which negatively charged ions are accelerated through one potential difference before being stripped of two or more electrons, inside a high-voltage terminal, and accelerated again. An example of a three-stage operation has been built in Oxford Nuclear Laboratory in 1964 of a 10 MV single-ended "injector" and a 6 MV EN tandem. By the 1970s, as much as 14 MV could be achieved at the terminal of a tandem that used a tank of high-pressure sulfur hexafluoride (SF6) gas to prevent sparking by trapping electrons. This allowed the generation of heavy ion beams of several tens of MeV, sufficient to study light-ion direct nuclear reactions. The greatest potential sustained by a Van de Graaff accelerator is 25.5 MV, achieved by the tandem in the Holifield Radioactive Ion Beam Facility in Oak Ridge National Laboratory. A further development is the pelletron, where the rubber or fabric belt is replaced by a chain of short conductive rods connected by insulating links, and the air-ionizing electrodes are replaced by a grounded roller and inductive charging electrode. The chain can be operated at a much greater velocity than a belt, and both the voltage and currents attainable are much greater than with a conventional Van de Graaff generator. The 14 UD Heavy Ion Accelerator at the Australian National University houses a 15 MV pelletron. Its chains are more than 20 m long and can travel faster than . The Nuclear Structure Facility (NSF) at Daresbury Laboratory was proposed in the 1970s, commissioned in 1981, and opened for experiments in 1983. It consisted of a tandem Van de Graaff generator operating routinely at 20 MV, housed in a distinctive building 70 m high. During its lifetime, it accelerated 80 different ion beams for experimental use, ranging from protons to uranium. A particular feature was the ability to accelerate rare isotopic and radioactive beams. Perhaps the most important discovery made using the NSF was that of super-deformed nuclei. These nuclei, when formed from the fusion of lighter elements, rotate very rapidly. The pattern of gamma rays emitted as they slow down provided detailed information about the inner structure of the nucleus. Following financial cutbacks, the NSF closed in 1993. 
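A back-of-the-envelope sketch of the tandem energy bookkeeping described above: a singly charged negative ion gains e·V on the way in to the terminal, is stripped to charge state +q, and gains a further q·e·V on the way out. The terminal voltages and charge states below are illustrative examples, not data from any specific facility.

# Illustrative only: energy gained by an ion in a tandem accelerator,
# assuming injection as a singly charged negative ion and stripping to
# charge state +q at the high-voltage terminal.

def tandem_energy_MeV(terminal_MV: float, charge_state_after_stripping: int) -> float:
    """Final kinetic energy (MeV) of an ion injected with charge 1-."""
    return terminal_MV * (1 + charge_state_after_stripping)

print(tandem_energy_MeV(14.0, 6))   # e.g. carbon stripped to 6+ at a 14 MV terminal -> 98 MeV
print(tandem_energy_MeV(25.0, 1))   # proton beam (H- in, H+ out) at 25 MV -> 50 MeV

This is how a terminal of a few tens of megavolts can deliver heavy-ion beams of several tens of MeV, as described above.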
Description A simple Van de Graaff generator consists of a belt of rubber (or a similar flexible dielectric material) moving over two rollers of differing material, one of which is surrounded by a hollow metal sphere. A comb-shaped metal electrode with sharp points (2 and 7 in the diagram), is positioned near each roller. The upper comb (2) is connected to the sphere, and the lower one (7) to ground. When a motor is used to drive the belt, the triboelectric effect causes the transfer of electrons from the dissimilar materials of the belt and the two rollers. In the example shown, the rubber of the belt will become negatively charged while the acrylic glass of the upper roller will become positively charged. The belt carries away negative charge on its inner surface while the upper roller accumulates positive charge. Next, the strong electric field surrounding the positive upper roller (3) induces a very high electric field near the points of the nearby comb (2). At the points of the comb, the field becomes strong enough to ionize air molecules. The electrons from the air molecules are attracted to the outside of the belt, while the positive ions go to the comb. At the comb they are neutralized by electrons from the metal, thus leaving the comb and the attached outer shell (1) with fewer net electrons and a net positive charge. By Gauss's law (as illustrated in the Faraday ice pail experiment), the excess positive charge is accumulated on the outer surface of the outer shell, leaving no electric field inside the shell. Continuing to drive the belt causes further electrostatic induction, which can build up large amounts of charge on the shell. Charge will continue to accumulate until the rate of charge leaving the sphere (through leakage and corona discharge) equals the rate at which new charge is being carried into the sphere by the belt. Outside the terminal sphere, a high electric field results from the high voltage on the sphere, which would prevent the addition of further charge from the outside. However, since electrically charged conductors do not have any electric field inside, charges can be added continuously from the inside without needing to overcome the full potential of the outer shell. The larger the sphere and the farther it is from ground, the higher its peak potential. The sign of the charge (positive or negative) can be controlled by the selection of materials for the belt and rollers. Higher potentials on the sphere can also be achieved by using a voltage source to charge the belt directly, rather than relying solely on the triboelectric effect. A Van de Graaff generator terminal does not need to be sphere-shaped to work, and in fact, the optimum shape is a sphere with an inward curve around the hole where the belt enters. A rounded terminal minimizes the electric field around it, allowing greater potentials to be achieved without ionization of the air, or other dielectric gas, surrounding it. Since a Van de Graaff generator can supply the same small current at almost any level of electrical potential, it is an example of a nearly ideal current source. The maximal achievable potential is roughly equal to the sphere radius R multiplied by the electric field Emax at which corona discharges begin to form within the surrounding gas. For air at standard temperature and pressure (STP) the breakdown field is about . Therefore, a polished spherical electrode in diameter could be expected to develop a maximal voltage of about . 
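A minimal sketch of the estimate described above, assuming V_max ≈ E_max · R with a nominal breakdown field of about 3 MV/m for air at STP; the sphere sizes are illustrative.

# Rough estimate only: peak potential of a polished spherical terminal,
# V_max ~ E_max * R, with E_max ~ 3 MV/m taken as a typical breakdown
# field for air at STP (an assumed figure, not quoted from the text).

E_MAX_AIR = 3.0e6  # V/m

def max_potential(diameter_m: float, e_max: float = E_MAX_AIR) -> float:
    """Estimate the peak potential (volts) of a spherical terminal of given diameter."""
    return e_max * diameter_m / 2.0

for d in (0.30, 1.0):  # a 30 cm classroom sphere and a 1 m demonstration sphere
    print(f"{d:.2f} m sphere -> ~{max_potential(d)/1e3:.0f} kV")

Real terminals fall short of these figures because of surface roughness, supports and humidity, which is one reason industrial machines are enclosed in a pressurized tank of insulating gas.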
This explains why Van de Graaff generators are often made with the largest possible diameter. Use as a particle accelerator The initial motivation for the development of the Van de Graaff generator was as a source of high voltage to accelerate particles for nuclear physics experiments. The high potential difference between the surface of the terminal and ground results in a corresponding electric field. When an ion source is placed near the surface of the sphere (typically within the sphere itself) the field will accelerate charged particles of the appropriate sign away from the sphere. By insulating the generator with pressurized gas, the breakdown voltage can be raised, increasing the maximum energy of accelerated particles. Tandem accelerators Particle-beam Van de Graaff accelerators are often used in a "tandem" configuration with the high potential terminal located at the center of the machine. Negatively charged ions are injected at one end, where they are accelerated by attractive force toward the terminal. When the particles reach the terminal, they are stripped of some electrons to make them positively charged, and are subsequently accelerated by repulsive forces away from the terminal. This configuration results in two accelerations for the cost of one Van de Graaff generator and has the added advantage of leaving the ion source instrumentation accessible near ground potential. Pelletron The pelletron is a style of tandem accelerator designed to overcome some of the disadvantages of using a belt to transfer charge to the high voltage terminal. In the pelletron, the belt is replaced with "pellets", metal spheres joined by insulating links into a chain. This chain of spheres serves the same function as the belt in a traditional Van de Graff accelerator – to convey charge to the high voltage terminal. The separate charged spheres and higher durability of the chain mean that higher voltages can be achieved at the high voltage terminal, and charge can be conveyed to the terminal more quickly. Entertainment and educational generators The largest air-insulated Van de Graaff generator in the world, built by Dr. Van de Graaff in the 1930s, is now displayed permanently at Boston's Museum of Science. With two conjoined aluminium spheres standing on columns tall, this generator can often obtain 2 MV (2 million volts). Shows using the Van de Graaff generator and several Tesla coils are conducted two to three times a day. Many science museums, such as the American Museum of Science and Energy, have small-scale Van de Graaff generators on display, and exploit their static-producing qualities to create "lightning" or make people's hair stand up. Van de Graaff generators are also used in schools and science shows. Comparison with other electrostatic generators Other electrostatic machines such as the Wimshurst machine or Bonetti machine work similarly to the Van De Graaff generator; charge is transported by moving plates, disks, or cylinders to a high voltage electrode. For these generators, however, corona discharge from exposed metal parts at high potentials and poorer insulation result in smaller voltages. In an electrostatic generator, the rate of charge transported (current) to the high-voltage electrode is very small. After the machine is started, the voltage on the terminal electrode increases until the leakage current from the electrode equals the rate of charge transport. Therefore, leakage from the terminal determines the maximum voltage attainable. 
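The charge balance described in the last paragraph can be illustrated with a toy model in which the belt supplies a roughly constant current and leakage is approximated, purely for illustration, as ohmic (V / R_leak); all component values below are hypothetical.

import math

# Toy charging model: constant belt current, linear leakage. Real leakage is
# dominated by corona discharge and is strongly nonlinear; this is only a sketch.

I_BELT = 10e-6      # 10 microamps of charge transport (order-of-magnitude guess)
R_LEAK = 50e9       # 50 gigaohm effective leakage resistance (hypothetical)
C_TERM = 20e-12     # 20 pF terminal capacitance (hypothetical sphere)

def terminal_voltage(t_seconds: float) -> float:
    """Terminal voltage at time t after start-up, for the linear-leakage model."""
    v_eq = I_BELT * R_LEAK                  # voltage at which leakage equals belt current
    tau = R_LEAK * C_TERM                   # charging time constant
    return v_eq * (1.0 - math.exp(-t_seconds / tau))

for t in (0.1, 0.5, 2.0):
    print(f"t = {t:4.1f} s  ->  V ~ {terminal_voltage(t)/1e3:.0f} kV")
print(f"equilibrium ~ {I_BELT * R_LEAK/1e6:.1f} MV")

The terminal charges toward an equilibrium voltage V_eq = I_belt · R_leak, the point at which leakage exactly balances the rate of charge transport.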
In the Van de Graaff generator, the belt allows the transport of charge into the interior of a large hollow spherical electrode. This is the ideal shape to minimize leakage and corona discharge, so the Van de Graaff generator can produce the greatest voltage. This is why the Van de Graaff design has been used for all electrostatic particle accelerators. In general, the larger the diameter and the smoother the sphere is, the higher the voltage that can be achieved. Patents — "Electrostatic Generator" — "Apparatus For Reducing Electron Loading In Positive-Ion Accelerators" See also – Metalworking process used to fabricate thin metal spheres References External links How Van de Graaff Generators Work with how to build, HowStuffWorks Interactive Java tutorial – Van de Graaff Generator National High Magnetic Field Laboratory Tandem Van de Graaff Accelerator Western Michigan University Physics Dr. Van de Graaff's huge machine at Museum of Science Van de Graaff Generator Frequently Asked Questions, Science Hobbyist Illustration from Report on Van de Graaff Generator From "Progress Report on the M.I.T. High-Voltage Generator at Round Hill" Nikola Tesla, "". Scientific American, March, 1934. (.doc format) Paolo Brenni,The Van de Graaff Generator – An Electrostatic Machine for the 20th Century Bulletin of the Scientific Instrument Society No. 63 (1999) Charrier Jacques "Le générateur de Van de Graaff". Faculté des Sciences de Nantes. Hellborg, Ragnar, ed. Electrostatic Accelerators: Fundamentals and Applications [N.Y., N.Y.: Springer, 2005]. Available online at: https://books.google.com/books?id=tc6CEuIV1jEC&pg=PA51&lpg=PA51&dq=electrostatic+accelerator+book American Physical Society names ORNL's Holifield Facility historic physics site Accelerator physics American inventions Electrostatic generators 1929 introductions
Pierre Curie
Pierre Curie ( ; ; 15 May 1859 – 19 April 1906) was a French physicist, a pioneer in crystallography, magnetism, piezoelectricity, and radioactivity. In 1903, he received the Nobel Prize in Physics with his wife, Marie Skłodowska–Curie, and Henri Becquerel, "in recognition of the extraordinary services they have rendered by their joint researches on the radiation phenomena discovered by Professor Henri Becquerel". With their win, the Curies became the first married couple to win the Nobel Prize, launching the Curie family legacy of five Nobel Prizes. Early life Born in Paris on 15 May 1859, Pierre Curie was the son of Eugène Curie (1827–1910), a doctor of French Huguenot Protestant origin from Alsace, and Sophie-Claire Curie (née Depouilly; 1832–1897). He was educated by his father and in his early teens showed a strong aptitude for mathematics and geometry. When he was 16, he earned his Bachelor of Science in mathematics. By the age of 18, he earned his license in physical sciences from the Faculty of Sciences at the Sorbonne, also known as the University of Paris. He did not proceed immediately to a doctorate due to lack of money. Instead, he worked as a laboratory instructor. When Pierre Curie was preparing for his Bachelor of Science degree, he worked in the laboratory of Jean-Gustave Bourbouze in the Faculty of Science. In 1895, he went on to receive his doctorate at the University of Paris. The submission material for his doctorate consisted of his research over magnetism. After obtaining his doctorate, he became professor of physics and in 1900, he became professor in the faculty of sciences. In 1880, Pierre and his older brother Paul-Jacques (1856–1941) demonstrated that an electric potential was generated when crystals were compressed, i.e., piezoelectricity. To aid this work they invented the piezoelectric quartz electrometer. The following year they demonstrated the reverse effect: that crystals could be made to deform when subject to an electric field.<ref name="Brothers" /> Almost all digital electronic circuits now rely on this in the form of crystal oscillators. In subsequent work on magnetism Pierre Curie defined the Curie scale. This work also involved delicate equipment – balances, electrometers, etc. Pierre Curie was introduced to Maria Skłodowska by their friend, physicist Józef Wierusz-Kowalski. Curie took her into his laboratory as his student. His admiration for her grew when he realized that she would not inhibit his research. He began to regard Skłodowska as his muse. She refused his initial proposal, but finally agreed to marry him on 26 July 1895. The Curies had a happy, affectionate marriage, and they were known for their devotion to each other. Research Before his famous doctoral studies on magnetism, he designed and perfected an extremely sensitive torsion balance for measuring magnetic coefficients. Variations on this equipment were commonly used by future workers in that area. Pierre Curie studied ferromagnetism, paramagnetism, and diamagnetism for his doctoral thesis, and discovered the effect of temperature on paramagnetism which is now known as Curie's law. The material constant in Curie's law is known as the Curie constant. He also discovered that ferromagnetic substances exhibited a critical temperature transition, above which the substances lost their ferromagnetic behavior. This is now known as the Curie temperature. 
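A minimal sketch of Curie's law as stated above, χ = C/T, with the Curie constant treated as a material-dependent input; the value used below is purely illustrative.

# Curie's law for an ideal paramagnet: susceptibility chi = C / T.
# C_example is a hypothetical Curie constant, not a measured value from the text.

def curie_susceptibility(C: float, T_kelvin: float) -> float:
    """Magnetic susceptibility of an ideal paramagnet at absolute temperature T."""
    if T_kelvin <= 0:
        raise ValueError("temperature must be positive")
    return C / T_kelvin

C_example = 1.5e-3  # K (dimensionless susceptibility times kelvin), illustrative
for T in (100.0, 300.0, 600.0):
    print(f"T = {T:5.0f} K  ->  chi = {curie_susceptibility(C_example, T):.2e}")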
The Curie temperature is used to study plate tectonics, treat hypothermia, measure caffeine, and to understand extraterrestrial magnetic fields. The Curie is a unit of measurement (3.7 × 1010 decays per second or 37 gigabecquerels) used to describe the intensity of a sample of radioactive material and was named after Marie and Pierre Curie by the Radiology Congress in 1910. Pierre Curie formulated what is now known as the Curie Dissymmetry Principle: a physical effect cannot have a dissymmetry absent from its efficient cause. For example, a random mixture of sand in zero gravity has no dissymmetry (it is isotropic). Introduce a gravitational field, and there is a dissymmetry because of the direction of the field. Then the sand grains can 'self-sort' with the density increasing with depth. But this new arrangement, with the directional arrangement of sand grains, actually reflects the dissymmetry of the gravitational field that causes the separation. Curie worked with his wife in isolating polonium and radium. They were the first to use the term "radioactivity", and were pioneers in its study. Their work, including Marie Curie's celebrated doctoral work, made use of a sensitive piezoelectric electrometer constructed by Pierre and his brother Jacques Curie. Pierre Curie's 26 December 1898 publication with his wife and M. G. Bémont for their discovery of radium and polonium was honored by a Citation for Chemical Breakthrough Award from the Division of History of Chemistry of the American Chemical Society presented to the ESPCI ParisTech (officially the École supérieure de physique et de Chimie industrielles de la Ville de Paris) in 2015. In 1903, to honor the Curies' work, the Royal Society of London invited Pierre to present their research. Marie Curie was not permitted to give the lecture so Lord Kelvin sat beside her while Pierre spoke on their research. After this, Lord Kelvin held a luncheon for Pierre. While in London, Pierre and Marie were awarded the Davy Medal of the Royal Society of London. In the same year, Pierre and Marie Curie, as well as Henri Becquerel, were awarded a Nobel Prize in physics for their research of radioactivity. Curie and one of his students, Albert Laborde, made the first discovery of nuclear energy, by identifying the continuous emission of heat from radium particles. Curie also investigated the radiation emissions of radioactive substances, and through the use of magnetic fields was able to show that some of the emissions were positively charged, some were negative and some were neutral. These correspond to alpha, beta and gamma radiation. Spiritualism In the late nineteenth century, Pierre Curie was investigating the mysteries of ordinary magnetism when he became aware of the spiritualist experiments of other European scientists, such as Charles Richet and Camille Flammarion. Pierre Curie initially thought the systematic investigation into the paranormal could help with some unanswered questions about magnetism. He wrote to Marie, then his fiancée: "I must admit that those spiritual phenomena intensely interest me. I think they are questions that deal with physics." Pierre Curie's notebooks from this period show he read many books on spiritualism. He did not attend séances such as those of Eusapia Palladino in Paris in June 1905 as a mere spectator, and his goal certainly was not to communicate with spirits. He saw the séances as scientific experiments, tried to monitor different parameters, and took detailed notes of every observation. 
Despite studying spiritualism, Curie was an atheist. Family Pierre Curie's grandfather, Paul Curie (1799–1853), a doctor of medicine, was a committed Malthusian humanist and married Augustine Hofer, daughter of Jean Hofer and great-granddaughter of Jean-Henri Dollfus, great industrialists from Mulhouse in the second half of the 18th century and the first part of the 19th century. Through this paternal grandmother, Pierre Curie is also a direct descendant of the Basel scientist and mathematician Jean Bernoulli (1667–1748), as is Pierre-Gilles de Gennes, winner of the 1991 Nobel Prize in Physics. Pierre and Marie Curie's daughter, Irène, and their son-in-law, Frédéric Joliot-Curie, were also physicists involved in the study of radioactivity, and each also received Nobel prizes for their work. The Curies' other daughter, Ève, wrote a noted biography of her mother. She was the only member of the Curie family to not become a physicist. Ève married Henry Richardson Labouisse Jr., who received a Nobel Peace Prize on behalf of UNICEF in 1965. Pierre and Marie Curie's granddaughter, Hélène Langevin-Joliot, is a professor of nuclear physics at the University of Paris, and their grandson, Pierre Joliot, who was named after Pierre Curie, is a noted biochemist. Death Pierre Curie died in a street collision in Paris on 19 April 1906. Crossing the busy Rue Dauphine in the rain at the Quai de Conti, he slipped and fell under a heavy horse-drawn cart. One of the wheels ran over his head, fracturing his skull and killing him instantly. Both the Curies experienced radium burns, both accidentally and voluntarily, and were exposed to extensive doses of radiation while conducting their research. They experienced radiation sickness and Marie Curie died from radiation-induced aplastic anemia in 1934. Even now, all their papers from the 1890s, even her cookbooks, are too dangerous to touch. Their laboratory books are kept in special lead boxes and people who want to see them have to wear protective clothing. Most of these items can be found at . Had Pierre Curie not been killed in an accident as he was, he would most likely have eventually died of the effects of radiation, as did his wife, their daughter Irène, and her husband Frédéric Joliot. In April 1995, Pierre and Marie Curie were moved from their original resting place, a family cemetery, and enshrined in the crypt of the Panthéon in Paris. 
Awards Nobel Prize in Physics, with Marie Curie and Henri Becquerel (1903) Davy Medal, with Marie Curie (1903) Matteucci Medal, with Marie Curie (1904) Elliott Cresson Medal (1909) awarded posthumously during Marie Curie's award ceremony Citation for Chemical Breakthrough Award from the Division of History of Chemistry of the American Chemical Society (2015) References External links NobelPrize.org: History of Pierre and Marie Pierre Curie's Nobel prize including the Nobel Lecture, 6 June 1905 Radioactive Substances, Especially Radium Biography American Institute of Physics Annotated bibliography for Pierre Curie from the Alsos Digital Library for Nuclear Issues Alsos Digital Library closure Curie's publication in French Academy of Sciences papers 1859 births 1906 deaths 19th-century French chemists 20th-century French chemists 19th-century French physicists 20th-century French physicists 19th-century atheists 20th-century atheists French atheists French nuclear physicists Pierre Discoverers of chemical elements French Nobel laureates Nobel laureates in Physics Members of the French Academy of Sciences University of Paris alumni Academic staff of the University of Paris Legion of Honour refusals Burials at the Panthéon, Paris Pedestrian road incident deaths Road incident deaths in France Scientists from Paris Deaths by acute radiation syndrome Recipients of the Matteucci Medal
Undulatory locomotion
Undulatory locomotion is the type of motion characterized by wave-like movement patterns that act to propel an animal forward. Examples of this type of gait include crawling in snakes or swimming in the lamprey. Although this is typically the type of gait utilized by limbless animals, some creatures with limbs, such as the salamander, forgo use of their legs in certain environments and exhibit undulatory locomotion. In robotics this movement strategy is studied in order to create novel robotic devices capable of traversing a variety of environments.

Environmental interactions
In limbless locomotion, forward locomotion is generated by propagating flexural waves along the length of the animal's body. Forces generated between the animal and the surrounding environment produce alternating sideways forces that act to move the animal forward. These forces generate thrust and drag.

Hydrodynamics
Simulations predict that thrust and drag are dominated by viscous forces at low Reynolds numbers and by inertial forces at higher Reynolds numbers. When the animal swims in a fluid, two main forces are thought to play a role:
Skin friction: generated by the resistance of a fluid to shearing and proportional to the speed of the flow. This dominates undulatory swimming in spermatozoa and the nematode.
Form force: generated by the differences in pressure on the surface of the body; it varies with the square of the flow speed.
At low Reynolds number (Re ~ 10^0), skin friction accounts for nearly all of the thrust and drag. For animals that undulate at intermediate Reynolds number (Re ~ 10^1), such as ascidian larvae, both skin friction and form force account for the production of drag and thrust. At high Reynolds number (Re ~ 10^2), both skin friction and form force act to generate drag, but only form force produces thrust.

Kinematics
In animals that move without use of limbs, the most common feature of the locomotion is a rostral-to-caudal wave that travels down the body. However, this pattern can change based on the particular undulating animal, the environment, and the metric the animal is optimizing (e.g. speed or energy). The most common mode of motion is simple undulation, in which lateral bending is propagated from head to tail. Snakes can exhibit five different modes of terrestrial locomotion: (1) lateral undulation, (2) sidewinding, (3) concertina, (4) rectilinear, and (5) slide-pushing. Lateral undulation closely resembles the simple undulatory motion observed in many other animals, such as lizards, eels and fish, in which waves of lateral bending propagate down the snake's body. The American eel typically moves in an aquatic environment, though it can also move on land for short periods of time. It is able to move successfully in both environments by producing traveling waves of lateral undulation. However, differences between the terrestrial and aquatic locomotor strategies suggest that the axial musculature is being activated differently (see muscle activation patterns below). In terrestrial locomotion, all points along the body move on approximately the same path, and therefore the lateral displacements along the length of the eel's body are approximately the same. However, in aquatic locomotion, different points along the body follow different paths, with lateral amplitude increasing posteriorly. In general, the amplitude of the lateral undulation and the angle of intervertebral flexion are much greater during terrestrial locomotion than during aquatic locomotion.
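The aquatic-versus-terrestrial kinematics described above can be sketched as a travelling wave whose amplitude is either roughly constant along the body (terrestrial-like) or grows toward the tail (aquatic-like). The Python below is illustrative only; the wavelength, period and amplitudes are arbitrary choices, not measured eel parameters.

import math

# Body midline as a rostral-to-caudal travelling wave:
# y(x, t) = A(x) * sin(2*pi*(x/wavelength - t/period))

def midline(t, n_points=11, body_length=1.0, wavelength=1.0, period=1.0,
            aquatic=False, base_amp=0.05):
    """Return (x, y) samples of the body midline at time t."""
    pts = []
    for i in range(n_points):
        x = body_length * i / (n_points - 1)
        # aquatic-like: amplitude grows toward the tail; terrestrial-like: constant
        amp = base_amp * (0.2 + 0.8 * x / body_length) if aquatic else base_amp
        y = amp * math.sin(2.0 * math.pi * (x / wavelength - t / period))
        pts.append((round(x, 2), round(y, 4)))
    return pts

print("terrestrial-like:", midline(0.25, aquatic=False)[:4])
print("aquatic-like:    ", midline(0.25, aquatic=True)[:4])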
Musculoskeletal system Muscle architecture A typical characteristic of many animals that utilize undulatory locomotion is that they have segmented muscles, or blocks of myomeres, running from their head to tails which are separated by connective tissue called myosepta. In addition, some segmented muscle groups, such as the lateral hypaxial musculature in the salamander are oriented at an angle to the longitudinal direction. For these obliquely oriented fibers the strain in the longitudinal direction is greater than the strain in the muscle fiber direction leading to an architectural gear ratio greater than 1. A higher initial angle of orientation and more dorsoventral bulging produces a faster muscle contraction but results in a lower amount of force production. It is hypothesized that animals employ a variable gearing mechanism that allows self-regulation of force and velocity to meet the mechanical demands of the contraction. When a pennate muscle is subjected to a low force, resistance to width changes in the muscle cause it to rotate which consequently produces a higher architectural gear ratio (AGR) (high velocity). However, when subject to a high force, the perpendicular fiber force component overcomes the resistance to width changes and the muscle compresses producing a lower AGR (capable of maintaining a higher force output). Most fishes bend as a simple, homogenous beam during swimming via contractions of longitudinal red muscle fibers and obliquely oriented white muscle fibers within the segmented axial musculature. The fiber strain (εf) experienced by the longitudinal red muscle fibers is equivalent to the longitudinal strain (εx). The deeper white muscle fibers fishes show diversity in arrangement. These fibers are organized into cone-shaped structures and attach to connective tissue sheets known as myosepta; each fiber shows a characteristic dorsoventral (α) and mediolateral (φ) trajectory. The segmented architecture theory predicts that, εx > εf. This phenomenon results in an architectural gear ratio, determined as longitudinal strain divided by fiber strain (εx / εf), greater than one and longitudinal velocity amplification; furthermore, this emergent velocity amplification may be augmented by variable architectural gearing via mesolateral and dorsoventral shape changes, a pattern seen in pennate muscle contractions. A red-to-white gearing ratio (red εf / white εf) captures the combined effect of the longitudinal red muscle fiber and oblique white muscle fiber strains. Simple bending behavior in homogenous beams suggests ε increases with distance from the neutral axis (z). This poses a problem to animals, such as fishes and salamanders, which undergo undulatory movement. Muscle fibers are constrained by the length-tension and force-velocity curves. Furthermore, it has been hypothesized that muscle fibers recruited for a particular task must operate within an optimal range of strains (ε) and contractile velocities to generate peak force and power respectively. Non-uniform ε generation during undulatory movement would force differing muscle fibers recruited for the same task to operate on differing portions of the length-tension and force-velocity curves; performance would not be optimal. Alexander predicted that the dorsoventral (α) and mediolateral (φ) orientation of the white fibers of the fish axial musculature may allow more uniform strain across varying mesolateral fiber distances. 
Unfortunately, the white muscle fiber musculature of fishes is too complex to study uniform strain generation; however, Brainerd and Azizi studied this phenomenon using a simplified salamander model. Siren lacertina, an aquatic salamander, utilizes swimming motions similar to the aforementioned fishes yet contains hypaxial muscle fibers (which generate bending) characterized by a simpler organization. The hypaxial muscle fibers of S. lacertina are obliquely oriented, but have a near zero mediolateral (φ) trajectory and a constant dorsolateral (α) trajectory within each segment. Therefore, the effect of dorsolateral (α) trajectory and the distance between a given hypaxial muscle layer and the neutral axis of bending (z) on muscle fiber strain (ε) can be studied. Brainerd and Azizi found that longitudinal contractions of the constant volume hypaxial muscles were compensated by an increase in the dorsoventral dimensions. Bulging was accompanied by fiber rotation as well as an increase in both α hypaxial fiber trajectory and architectural gear ratio (AGR), a phenomenon also seen in pennate muscle contractions. They constructed a mathematical model to predict the final hypaxial fiber angle, AGR and dorsoventral height, where: λx = longitudinal extension ratio of the segment (portion of final longitudinal length after contraction to initial longitudinal length), β = final fiber angle, α = initial fiber angle, f = initial fiber length, and and = longitudinal and fiber strain respectively. AGR = This relationship shows that AGR increase with an increase in fiber angle from α to β. In addition, final fiber angle (β) increases with dorsolateral bulging (y) and fiber contraction, but decreases as a function of initial fiber length. The application of the latter conclusions can be seen in S. lacertina. This organism undulates as a homogenous beam (just as in fishes) during swimming; thus the distance of a muscle fiber from the neutral axis (z) during bending must be greater for external oblique muscle layers (EO) than internal oblique muscle layers (IO). The relationship between the strains experienced by the EO and IO and their respective z values is given by the following equation: where EO and IO = strain of the external and internal oblique muscle layers, and zEO and zIO = distance of the external and internal oblique muscle layers respectively from the neutral axis. EO = IO (zEO / zIO) Via this equation, we see that z is directly proportional to ; the strain experienced by the EO exceeds that of the IO. Azizi et al. discovered that the initial hypaxial fiber α trajectory in the EO is greater than that of the IO. Because initial α trajectory is proportional to the AGR, the EO contracts with a greater AGR than the IO. The resulting velocity amplification allows both layers of muscles to operate at similar strains and shortening velocities; this enables the EO and IO to function on comparable portions of the length-tension and force-velocity curves. Muscles recruited for a similar task ought to operate at similar strains and velocities to maximize force and power output. Therefore, variability in AGR within the hypaxial musculature of the Siren lacertina counteracts varying mesolateral fiber distances and optimizes performance. Azizi et al. termed this phenomenon as fiber strain homogeneity in segmented musculature. 
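A small numerical sketch of the two relations used above: the architectural gear ratio AGR = εx/εf, and the beam-bending strain relation εEO = εIO(zEO/zIO). The strains and distances below are hypothetical, chosen only to show how a larger distance z from the neutral axis forces the external oblique layer to operate at a higher gear ratio if its fibers are to experience the same strain.

# Illustrative numbers only; not measurements from Siren lacertina.

def architectural_gear_ratio(longitudinal_strain: float, fiber_strain: float) -> float:
    """AGR = longitudinal strain / fiber strain."""
    return longitudinal_strain / fiber_strain

def external_oblique_strain(internal_strain: float, z_EO: float, z_IO: float) -> float:
    """Longitudinal strain of the external oblique layer implied by simple beam bending."""
    return internal_strain * (z_EO / z_IO)

eps_IO = 0.06            # hypothetical internal-oblique longitudinal strain during a bend
z_IO, z_EO = 2.0, 3.0    # hypothetical distances (mm) from the neutral axis
eps_EO = external_oblique_strain(eps_IO, z_EO, z_IO)
print(f"epsilon_EO = {eps_EO:.3f} (> epsilon_IO = {eps_IO:.3f})")

# If both layers' fibers shorten by the same amount (say 4 % fiber strain),
# the EO must operate at a higher gear ratio to stay strain-matched:
fiber_strain = 0.04
print("AGR_IO =", round(architectural_gear_ratio(eps_IO, fiber_strain), 2))
print("AGR_EO =", round(architectural_gear_ratio(eps_EO, fiber_strain), 2))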
Muscle activity
In addition to the rostral-to-caudal kinematic wave that travels down the animal's body during undulatory locomotion, there is a corresponding wave of muscle activation that travels in the rostro-caudal direction. However, while this pattern is characteristic of undulatory locomotion, it too can vary with environment.

American eel
Aquatic locomotion: Electromyogram (EMG) recordings of the American eel reveal a pattern of muscle activation during aquatic movement similar to that of fish. At slow speeds only the most posterior portion of the eel's musculature is activated, with more anterior muscle recruited at higher speeds. As in many other animals, the muscles activate late in the lengthening phase of the muscle strain cycle, just prior to muscle shortening, a pattern believed to maximize work output from the muscle.
Terrestrial locomotion: EMG recordings show a longer absolute duration and duty cycle of muscle activity during locomotion on land. Also, the absolute intensity is much higher on land, which is expected from the increased gravitational loading acting on the animal. However, the intensity level decreases more posteriorly along the length of the eel's body. Also, the timing of muscle activation shifts to later in the strain cycle of muscle shortening.

Energetics
Animals with elongated bodies and reduced or no legs have evolved differently from their limbed relatives. In the past, some have speculated that this evolution was due to a lower energetic cost associated with limbless locomotion. The biomechanical arguments used to support this rationale include that (1) there is no cost associated with the vertical displacement of the center of mass typically found with limbed animals, (2) there is no cost associated with accelerating or decelerating limbs, and (3) there is a lower cost for supporting the body. This hypothesis has been studied further by examining the oxygen consumption rates of snakes during different modes of locomotion: lateral undulation, concertina, and sidewinding. The net cost of transport (NCT), which indicates the amount of energy required to move a unit of mass a given distance, for a snake moving with a lateral undulatory gait is identical to that of a limbed lizard of the same mass. However, a snake using concertina locomotion produces a much higher net cost of transport, while sidewinding actually produces a lower net cost of transport. Therefore, the mode of locomotion is of primary importance when determining energetic cost. The reason that lateral undulation has the same energetic efficiency as limbed locomotion, and not less as hypothesized earlier, might be the additional biomechanical cost associated with this type of movement: the force needed to bend the body laterally, push its sides against a vertical surface, and overcome sliding friction.

Neuromuscular system
Intersegmental coordination
Wave-like motor patterns typically arise from a series of coupled segmental oscillators. Each segmental oscillator is capable of producing a rhythmic motor output in the absence of sensory feedback. One such example is the half-center oscillator, which consists of two mutually inhibitory neurons that produce activity 180 degrees out of phase. The phase relationships between these oscillators are established by the emergent properties of the oscillators and the coupling between them.
Forward swimming can be accomplished by a series of coupled oscillators in which the anterior oscillators have a shorter endogenous frequency than the posterior oscillators. In this case, all oscillators will be driven at the same period but the anterior oscillators will lead in phase. In addition, the phase relations can be established by asymmetries in the couplings between oscillators or by sensory feedback mechanisms. Leech The leech moves by producing dorsoventral undulations. The phase lags between body segments is about 20 degrees and independent of cycle period. Thus, both hemisegments of the oscillator fire synchronously to produce a contraction. Only the ganglia rostral to the midpoint are capable of producing oscillation individually. There is U-shaped gradient in endogenous segment oscillation as well with the highest oscillations frequencies occurring near the middle of the animal. Although the couplings between neurons spans six segments in both the anterior and posterior direction, there are asymmetries between the various interconnections because the oscillators are active at three different phases. Those that are active in the 0 degree phase project only in the descending direction while those projecting in the ascending direction are active at 120 degrees or 240 degrees. In addition, sensory feedback from the environment may contribute to resultant phase lag. Lamprey The lamprey moves using lateral undulation and consequently left and right motor hemisegments are active 180 degrees out of phase. Also, it has been found that the endogenous frequency of the more anterior oscillators is higher than that of the more posterior ganglia. In addition, inhibitory interneurons in the lamprey project 14-20 segments caudally but have short rostral projections. Sensory feedback may be important for appropriately responding to perturbations, but seems to be less important for the maintenance of appropriate phase relations. Robotics Based on biologically hypothesized connections of the central pattern generator in the salamander, a robotic system has been created which exhibits the same characteristics of the actual animal. Electrophysiology studies have shown that stimulation of the mesencephalic locomotor region (MLR) located in the brain of the salamander produce different gaits, swimming or walking, depending on intensity level. Similarly, the CPG model in the robot can exhibit walking at low levels of tonic drive and swimming at high levels of tonic drive. The model is based on the four assumptions that: Tonic stimulation of the body CPG produces spontaneous traveling waves. When the limb CPG is activated it overrides the body CPG. The strength of the coupling from the limb to the body CPG is stronger than that from body to limb. Limb oscillators saturate and stop oscillating at higher tonic drives. Limb oscillators have lower intrinsic frequencies than body CPGs at the same tonic drive. This model encompasses the basic features of salamander locomotion. 
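A minimal chain of coupled phase oscillators gives a concrete sense of how intersegmental coupling can produce the head-to-tail travelling wave discussed above. This is a generic sketch, not a model fitted to leech, lamprey or salamander data; the frequency, coupling gain and target phase lag are arbitrary.

import math

# Chain of N phase oscillators with nearest-neighbour coupling and an imposed
# phase bias; the stable solution is a travelling wave with a constant
# intersegmental lag equal to PHASE_LAG.

N = 10                        # number of segments
FREQ = 1.0                    # intrinsic frequency, Hz (identical here for simplicity)
COUPLING = 4.0                # coupling gain
PHASE_LAG = 2 * math.pi / N   # desired intersegmental lag (one full wave per body)
DT = 0.001

phases = [0.1 * i for i in range(N)]   # arbitrary initial phases

def step(phases):
    new = []
    for i, th in enumerate(phases):
        dth = 2 * math.pi * FREQ
        if i > 0:       # coupling from the rostral neighbour, biased by PHASE_LAG
            dth += COUPLING * math.sin(phases[i - 1] - th - PHASE_LAG)
        if i < N - 1:   # coupling from the caudal neighbour
            dth += COUPLING * math.sin(phases[i + 1] - th + PHASE_LAG)
        new.append(th + DT * dth)
    return new

for _ in range(20000):          # 20 s of simulated time, enough to settle
    phases = step(phases)

lags = [(phases[i] - phases[i + 1]) % (2 * math.pi) for i in range(N - 1)]
print("settled intersegmental lags (rad):", [round(l, 2) for l in lags])
print("target lag (rad):", round(PHASE_LAG, 2))

After the chain settles, each segment lags its rostral neighbour by roughly the imposed phase bias, i.e. a travelling wave with a constant intersegmental lag, qualitatively like the fixed segment-to-segment lag reported for the leech above.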
See also Animal locomotion Aquatic locomotion Locomotion in space Locomotive Robot locomotion Terrestrial locomotion References External links Biologically Inspired Robotics Group @ EPFL Center for Biologically Inspired Design at Georgia Tech Functional Morphology and Biomechanics Laboratory, Brown University Peristalsis Robot Snake Locomotion Animal locomotion Wave mechanics
Lambda-CDM model
The Lambda-CDM, Lambda cold dark matter, or ΛCDM model is a mathematical model of the Big Bang theory with three major components: a cosmological constant, denoted by lambda (Λ), associated with dark energy the postulated cold dark matter, denoted by CDM ordinary matter It is referred to as the standard model of Big Bang cosmology because it is the simplest model that provides a reasonably good account of: the existence and structure of the cosmic microwave background the large-scale structure in the distribution of galaxies the observed abundances of hydrogen (including deuterium), helium, and lithium the accelerating expansion of the universe observed in the light from distant galaxies and supernovae The model assumes that general relativity is the correct theory of gravity on cosmological scales. It emerged in the late 1990s as a concordance cosmology, after a period of time when disparate observed properties of the universe appeared mutually inconsistent, and there was no consensus on the makeup of the energy density of the universe. Some alternative models challenge the assumptions of the ΛCDM model. Examples of these are modified Newtonian dynamics, entropic gravity, modified gravity, theories of large-scale variations in the matter density of the universe, bimetric gravity, scale invariance of empty space, and decaying dark matter (DDM). Overview The ΛCDM model includes an expansion of metric space that is well documented, both as the redshift of prominent spectral absorption or emission lines in the light from distant galaxies, and as the time dilation in the light decay of supernova luminosity curves. Both effects are attributed to a Doppler shift in electromagnetic radiation as it travels across expanding space. Although this expansion increases the distance between objects that are not under shared gravitational influence, it does not increase the size of the objects (e.g. galaxies) in space. It also allows for distant galaxies to recede from each other at speeds greater than the speed of light; local expansion is less than the speed of light, but expansion summed across great distances can collectively exceed the speed of light. The letter Λ (lambda) represents the cosmological constant, which is associated with a vacuum energy or dark energy in empty space that is used to explain the contemporary accelerating expansion of space against the attractive effects of gravity. A cosmological constant has negative pressure, , which contributes to the stress–energy tensor that, according to the general theory of relativity, causes accelerating expansion. The fraction of the total energy density of our (flat or almost flat) universe that is dark energy, , is estimated to be 0.669 ± 0.038 based on the 2018 Dark Energy Survey results using Type Ia supernovae or based on the 2018 release of Planck satellite data, or more than 68.3 % (2018 estimate) of the mass–energy density of the universe. Dark matter is postulated in order to account for gravitational effects observed in very large-scale structures (the "non-keplerian" rotation curves of galaxies; the gravitational lensing of light by galaxy clusters; and the enhanced clustering of galaxies) that cannot be accounted for by the quantity of observed matter. 
The ΛCDM model proposes specifically cold dark matter, hypothesized as:
Non-baryonic: consists of matter other than protons and neutrons (and electrons, by convention, although electrons are not baryons)
Cold: its velocity is far less than the speed of light at the epoch of radiation–matter equality (thus neutrinos are excluded, being non-baryonic but not cold)
Dissipationless: cannot cool by radiating photons
Collisionless: dark matter particles interact with each other and other particles only through gravity and possibly the weak force
Dark matter constitutes about 26.5 % of the mass–energy density of the universe. The remaining 4.9 % comprises all ordinary matter observed as atoms, chemical elements, gas and plasma, the stuff of which visible planets, stars and galaxies are made. The great majority of ordinary matter in the universe is unseen, since visible stars and gas inside galaxies and clusters account for less than 10 % of the ordinary matter contribution to the mass–energy density of the universe.
The model includes a single originating event, the "Big Bang", which was not an explosion but the abrupt appearance of expanding spacetime containing radiation at temperatures of around 10^15 K. This was immediately (within 10^-29 seconds) followed by an exponential expansion of space by a scale multiplier of 10^27 or more, known as cosmic inflation. The early universe remained hot (above 10 000 K) for several hundred thousand years, a state that is detectable as a residual cosmic microwave background, or CMB, a very low-energy radiation emanating from all parts of the sky. The "Big Bang" scenario, with cosmic inflation and standard particle physics, is the only cosmological model consistent with the observed continuing expansion of space, the observed distribution of lighter elements in the universe (hydrogen, helium, and lithium), and the spatial texture of minute irregularities (anisotropies) in the CMB radiation. Cosmic inflation also addresses the "horizon problem" in the CMB; indeed, it seems likely that the universe is larger than the observable particle horizon. The model uses the Friedmann–Lemaître–Robertson–Walker metric, the Friedmann equations, and the cosmological equations of state to describe the observable universe from approximately 0.1 s to the present.

Cosmic expansion history
The expansion of the universe is parameterized by a dimensionless scale factor (with time counted from the birth of the universe), defined relative to the present time, so ; the usual convention in cosmology is that subscript 0 denotes present-day values, so denotes the age of the universe. The scale factor is related to the observed redshift of the light emitted at time by The expansion rate is described by the time-dependent Hubble parameter, , defined as where is the time-derivative of the scale factor. The first Friedmann equation gives the expansion rate in terms of the matter+radiation density, the curvature, and the cosmological constant, where, as usual, is the speed of light and is the gravitational constant. A critical density is the present-day density which gives zero curvature, assuming the cosmological constant is zero, regardless of its actual value. Substituting these conditions into the Friedmann equation gives, where is the reduced Hubble constant. If the cosmological constant were actually zero, the critical density would also mark the dividing line between eventual recollapse of the universe to a Big Crunch and unlimited expansion.
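The definitions referred to above (with symbols elided in this copy) are, in their standard form, ρ_crit = 3H0²/(8πG) for the critical density and Ω_i = ρ_i/ρ_crit for the density parameters. The sketch below assumes those forms; the Hubble constant value is an illustrative choice rather than a quoted measurement.

import math

# Standard definitions assumed: rho_crit = 3 * H0^2 / (8 * pi * G).

G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
MPC_IN_M = 3.0857e22   # metres per megaparsec

def critical_density(H0_km_s_Mpc: float) -> float:
    """Critical density in kg/m^3 for a given Hubble constant in km/s/Mpc."""
    H0 = H0_km_s_Mpc * 1e3 / MPC_IN_M      # convert to s^-1
    return 3.0 * H0**2 / (8.0 * math.pi * G)

rho_c = critical_density(67.4)             # illustrative H0
print(f"rho_crit ~ {rho_c:.2e} kg/m^3")
print(f"          ~ {rho_c / 1.67e-27:.1f} proton masses per cubic metre")

For H0 near 70 km/s/Mpc this gives a critical density of order 10^-26 kg/m^3, equivalent to only a few proton masses per cubic metre.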
For the Lambda-CDM model with a positive cosmological constant (as observed), the universe is predicted to expand forever regardless of whether the total density is slightly above or below the critical density; though other outcomes are possible in extended models where the dark energy is not constant but actually time-dependent. It is standard to define the present-day density parameter for various species as the dimensionless ratio where the subscript is one of for baryons, for cold dark matter, for radiation (photons plus relativistic neutrinos), and for dark energy. Since the densities of various species scale as different powers of , e.g. for matter etc., the Friedmann equation can be conveniently rewritten in terms of the various density parameters as where is the equation of state parameter of dark energy, and assuming negligible neutrino mass (significant neutrino mass requires a more complex equation). The various parameters add up to by construction. In the general case this is integrated by computer to give the expansion history and also observable distance–redshift relations for any chosen values of the cosmological parameters, which can then be compared with observations such as supernovae and baryon acoustic oscillations. In the minimal 6-parameter Lambda-CDM model, it is assumed that curvature is zero and , so this simplifies to Observations show that the radiation density is very small today, ; if this term is neglected the above has an analytic solution where this is fairly accurate for or million years. Solving for gives the present age of the universe in terms of the other parameters. It follows that the transition from decelerating to accelerating expansion (the second derivative crossing zero) occurred when which evaluates to or for the best-fit parameters estimated from the Planck spacecraft. Historical development The discovery of the cosmic microwave background (CMB) in 1964 confirmed a key prediction of the Big Bang cosmology. From that point on, it was generally accepted that the universe started in a hot, dense state and has been expanding over time. The rate of expansion depends on the types of matter and energy present in the universe, and in particular, whether the total density is above or below the so-called critical density. During the 1970s, most attention focused on pure-baryonic models, but there were serious challenges explaining the formation of galaxies, given the small anisotropies in the CMB (upper limits at that time). In the early 1980s, it was realized that this could be resolved if cold dark matter dominated over the baryons, and the theory of cosmic inflation motivated models with critical density. During the 1980s, most research focused on cold dark matter with critical density in matter, around 95 % CDM and 5 % baryons: these showed success at forming galaxies and clusters of galaxies, but problems remained; notably, the model required a Hubble constant lower than preferred by observations, and observations around 1988–1990 showed more large-scale galaxy clustering than predicted. These difficulties sharpened with the discovery of CMB anisotropy by the Cosmic Background Explorer in 1992, and several modified CDM models, including ΛCDM and mixed cold and hot dark matter, came under active consideration through the mid-1990s. 
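Returning to the expansion history above: for a flat universe containing only matter and a cosmological constant (radiation neglected), the elided analytic solution and acceleration-transition condition take the standard forms t0 = [2/(3H0√ΩΛ)]·arsinh(√(ΩΛ/Ωm)) and a³ = Ωm/(2ΩΛ). The sketch below assumes those forms with illustrative Planck-like parameter values; nothing here is quoted from the text.

import math

H0_KM_S_MPC = 67.4               # illustrative Hubble constant
OMEGA_M = 0.315                  # illustrative matter density parameter
OMEGA_L = 1.0 - OMEGA_M          # flat universe assumed

H0 = H0_KM_S_MPC * 1e3 / 3.0857e22   # convert to s^-1
GYR = 3.156e16                       # seconds per gigayear

# Age of the universe for flat matter + Lambda:
t0 = 2.0 / (3.0 * H0 * math.sqrt(OMEGA_L)) * math.asinh(math.sqrt(OMEGA_L / OMEGA_M))
print(f"age of universe ~ {t0 / GYR:.2f} Gyr")

# Deceleration-to-acceleration transition: a^3 = Omega_m / (2 Omega_L)
a_acc = (OMEGA_M / (2.0 * OMEGA_L)) ** (1.0 / 3.0)
z_acc = 1.0 / a_acc - 1.0
print(f"expansion began accelerating at a ~ {a_acc:.2f} (z ~ {z_acc:.2f})")

With Ωm ≈ 0.31 and H0 ≈ 67 km/s/Mpc this gives an age of about 13.8 billion years and an acceleration onset near z ≈ 0.6.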
The ΛCDM model then became the leading model following the observations of accelerating expansion in 1998, and was quickly supported by other observations: in 2000, the BOOMERanG microwave background experiment measured the total (matter–energy) density to be close to 100 % of critical, whereas in 2001 the 2dFGRS galaxy redshift survey measured the matter density to be near 25 %; the large difference between these values supports a positive Λ or dark energy. Much more precise spacecraft measurements of the microwave background from WMAP in 2003–2010 and Planck in 2013–2015 have continued to support the model and pin down the parameter values, most of which are constrained below 1 percent uncertainty. Research is active into many aspects of the ΛCDM model, both to refine the parameters and to resolve the tensions between recent observations and the ΛCDM model, such as the Hubble tension and the CMB dipole. In addition, ΛCDM has no explicit physical theory for the origin or physical nature of dark matter or dark energy; the nearly scale-invariant spectrum of the CMB perturbations, and their image across the celestial sphere, are believed to result from very small thermal and acoustic irregularities at the point of recombination. Historically, a large majority of astronomers and astrophysicists support the ΛCDM model or close relatives of it, but recent observations that contradict the ΛCDM model have led some astronomers and astrophysicists to search for alternatives to the ΛCDM model, which include dropping the Friedmann–Lemaître–Robertson–Walker metric or modifying dark energy. On the other hand, Milgrom, McGaugh, and Kroupa have long been leading critics of the ΛCDM model, attacking the dark matter portions of the theory from the perspective of galaxy formation models and supporting the alternative modified Newtonian dynamics (MOND) theory, which requires a modification of the Einstein field equations and the Friedmann equations as seen in proposals such as modified gravity theory (MOG theory) or tensor–vector–scalar gravity theory (TeVeS theory). Other proposals by theoretical astrophysicists of cosmological alternatives to Einstein's general relativity that attempt to account for dark energy or dark matter include f(R) gravity, scalar–tensor theories such as galileon theories, brane cosmologies, the DGP model, and massive gravity and its extensions such as bimetric gravity. Successes In addition to explaining many pre-2000 observations, the model has made a number of successful predictions: notably the existence of the baryon acoustic oscillation feature, discovered in 2005 in the predicted location; and the statistics of weak gravitational lensing, first observed in 2000 by several teams. The polarization of the CMB, discovered in 2002 by DASI, has been successfully predicted by the model: in the 2015 Planck data release, there are seven observed peaks in the temperature (TT) power spectrum, six peaks in the temperature–polarization (TE) cross spectrum, and five peaks in the polarization (EE) spectrum. The six free parameters can be well constrained by the TT spectrum alone, and then the TE and EE spectra can be predicted theoretically to few-percent precision with no further adjustments allowed. Challenges Over the years, numerous simulations of ΛCDM and observations of our universe have been made that challenge the validity of the ΛCDM model, to the point where some cosmologists believe that the ΛCDM model may be superseded by a different, as yet unknown cosmological model. 
Lack of detection Extensive searches for dark matter particles have so far shown no well-agreed detection, while dark energy may be almost impossible to detect in a laboratory, and its value is extremely small compared to vacuum energy theoretical predictions. Violations of the cosmological principle The ΛCDM model has been shown to satisfy the cosmological principle, which states that, on a large-enough scale, the universe looks the same in all directions (isotropy) and from every location (homogeneity); "the universe looks the same whoever and wherever you are." The cosmological principle exists because when the predecessors of the ΛCDM model were being developed, there was not sufficient data available to distinguish between more complex anisotropic or inhomogeneous models, so homogeneity and isotropy were assumed to simplify the models, and the assumptions were carried over into the ΛCDM model. However, recent findings have suggested that violations of the cosmological principle, especially of isotropy, exist. These violations have called the ΛCDM model into question, with some authors suggesting that the cosmological principle is obsolete or that the Friedmann–Lemaître–Robertson–Walker metric breaks down in the late universe. This has additional implications for the validity of the cosmological constant in the ΛCDM model, as dark energy is implied by observations only if the cosmological principle is true. Violations of isotropy Evidence from galaxy clusters, quasars, and type Ia supernovae suggest that isotropy is violated on large scales. Data from the Planck Mission shows hemispheric bias in the cosmic microwave background in two respects: one with respect to average temperature (i.e. temperature fluctuations), the second with respect to larger variations in the degree of perturbations (i.e. densities). The European Space Agency (the governing body of the Planck Mission) has concluded that these anisotropies in the CMB are, in fact, statistically significant and can no longer be ignored. Already in 1967, Dennis Sciama predicted that the cosmic microwave background has a significant dipole anisotropy. In recent years, the CMB dipole has been tested, and the results suggest our motion with respect to distant radio galaxies and quasars differs from our motion with respect to the cosmic microwave background. The same conclusion has been reached in recent studies of the Hubble diagram of Type Ia supernovae and quasars. This contradicts the cosmological principle. The CMB dipole is hinted at through a number of other observations. First, even within the cosmic microwave background, there are curious directional alignments and an anomalous parity asymmetry that may have an origin in the CMB dipole. Separately, the CMB dipole direction has emerged as a preferred direction in studies of alignments in quasar polarizations, scaling relations in galaxy clusters, strong lensing time delay, Type Ia supernovae, and quasars and gamma-ray bursts as standard candles. The fact that all these independent observables, based on different physics, are tracking the CMB dipole direction suggests that the Universe is anisotropic in the direction of the CMB dipole. Nevertheless, some authors have stated that the universe around Earth is isotropic at high significance by studies of the cosmic microwave background temperature maps. 
Violations of homogeneity Based on N-body simulations in ΛCDM, Yadav and his colleagues showed that the spatial distribution of galaxies is statistically homogeneous if averaged over scales 260/h Mpc or more. However, many large-scale structures have been discovered, and some authors have reported some of the structures to be in conflict with the predicted scale of homogeneity for ΛCDM, including The Clowes–Campusano LQG, discovered in 1991, which has a length of 580 Mpc The Sloan Great Wall, discovered in 2003, which has a length of 423 Mpc U1.11, a large quasar group discovered in 2011, which has a length of 780 Mpc The Huge-LQG, discovered in 2012, which is three times longer than and twice as wide as is predicted possible according to ΛCDM The Hercules–Corona Borealis Great Wall, discovered in November 2013, which has a length of 2000–3000 Mpc (more than seven times that of the SGW) The Giant Arc, discovered in June 2021, which has a length of 1000 Mpc The Big Ring, reported in 2024, which has a diameter of 399 Mpc and is shaped like a ring Other authors claim that the existence of structures larger than the scale of homogeneity in the ΛCDM model does not necessarily violate the cosmological principle in the ΛCDM model. El Gordo galaxy cluster collision El Gordo is a massive interacting galaxy cluster in the early Universe. The extreme properties of El Gordo in terms of its redshift, mass, and the collision velocity leads to strong tension with the ΛCDM model. The properties of El Gordo are however consistent with cosmological simulations in the framework of MOND due to more rapid structure formation. KBC void The KBC void is an immense, comparatively empty region of space containing the Milky Way approximately 2 billion light-years (600 megaparsecs, Mpc) in diameter. Some authors have said the existence of the KBC void violates the assumption that the CMB reflects baryonic density fluctuations at or Einstein's theory of general relativity, either of which would violate the ΛCDM model, while other authors have claimed that supervoids as large as the KBC void are consistent with the ΛCDM model. Hubble tension Statistically significant differences remain in measurements of the Hubble constant based on the cosmic background radiation compared to astronomical distance measurements. This difference has been called the Hubble tension. The Hubble tension in cosmology is widely acknowledged to be a major problem for the ΛCDM model. In December 2021, National Geographic reported that the cause of the Hubble tension discrepancy is not known. However, if the cosmological principle fails (see Violations of the cosmological principle), then the existing interpretations of the Hubble constant and the Hubble tension have to be revised, which might resolve the Hubble tension. Some authors postulate that the Hubble tension can be explained entirely by the KBC void, as measuring galactic supernovae inside a void is predicted by the authors to yield a larger local value for the Hubble constant than cosmological measures of the Hubble constant. However, other work has found no evidence for this in observations, finding the scale of the claimed underdensity to be incompatible with observations which extend beyond its radius. Important deficiencies were subsequently pointed out in this analysis, leaving open the possibility that the Hubble tension is indeed caused by outflow from the KBC void. As a result of the Hubble tension, other researchers have called for new physics beyond the ΛCDM model. 
Moritz Haslbauer et al. proposed that MOND would resolve the Hubble tension. Another group of researchers led by Marc Kamionkowski proposed a cosmological model with early dark energy to replace ΛCDM.

S8 tension
The S8 tension in cosmology is another major problem for the ΛCDM model. The parameter S8 in the ΛCDM model quantifies the amplitude of matter fluctuations in the late universe and is defined as S8 ≡ σ8(Ωm/0.3)^1/2, where σ8 is the amplitude of matter fluctuations on a scale of 8 h⁻¹ Mpc and Ωm is the matter density parameter. Early-time measurements (e.g. from CMB data collected using the Planck observatory) and late-time measurements (e.g. of weak gravitational lensing events) provide increasingly precise values of S8. However, the values of S8 obtained from these two categories of measurement differ by more than their uncertainties can account for. This discrepancy is called the S8 tension. The name "tension" reflects that the disagreement is not merely between two data sets: the many sets of early- and late-time measurements agree well within their own categories, but there is an unexplained difference between values obtained from different points in the evolution of the universe. Such a tension indicates that the ΛCDM model may be incomplete or in need of correction. Values of S8 have been reported by Planck (2020), KiDS (2021), DES (2022), DES+KiDS (2023), HSC-SSP (2023) and eROSITA (2024). Values have also been obtained using peculiar velocities in two 2020 studies, among other methods.

Axis of evil

Cosmological lithium problem
The actual observable amount of lithium in the universe is less than the amount calculated from the ΛCDM model by a factor of 3–4. If every calculation is correct, then solutions beyond the existing ΛCDM model might be needed.

Shape of the universe
The ΛCDM model assumes that the shape of the universe is of zero curvature (i.e. flat) and has an undetermined topology. In 2019, an interpretation of Planck data suggested that the curvature of the universe might be positive (often called "closed"), which would contradict the ΛCDM model. Some authors have suggested that the Planck data detecting a positive curvature could be evidence of a local inhomogeneity in the curvature of the universe rather than the universe actually being globally a 3-manifold of positive curvature.

Violations of the strong equivalence principle
The ΛCDM model assumes that the strong equivalence principle is true. However, in 2020 a group of astronomers analyzed data from the Spitzer Photometry and Accurate Rotation Curves (SPARC) sample, together with estimates of the large-scale external gravitational field from an all-sky galaxy catalog. They concluded that there was highly statistically significant evidence of violations of the strong equivalence principle in weak gravitational fields in the vicinity of rotationally supported galaxies. They observed an effect inconsistent with tidal effects in the ΛCDM model. These results have been challenged as failing to consider inaccuracies in the rotation curves and correlations between galaxy properties and clustering strength, and as inconsistent with similar analysis of other galaxies.

Cold dark matter discrepancies
Several discrepancies between the predictions of cold dark matter in the ΛCDM model and observations of galaxies and their clustering have arisen. Some of these problems have proposed solutions, but it remains unclear whether they can be solved without abandoning the ΛCDM model.

Cuspy halo problem
The density distributions of dark matter halos in cold dark matter simulations (at least those that do not include the impact of baryonic feedback) are much more sharply peaked than what is inferred from the observed rotation curves of galaxies.
Dwarf galaxy problem Cold dark matter simulations predict large numbers of small dark matter halos, more numerous than the number of small dwarf galaxies that are observed around galaxies like the Milky Way. Satellite disk problem Dwarf galaxies around the Milky Way and Andromeda galaxies are observed to be orbiting in thin, planar structures whereas the simulations predict that they should be distributed randomly about their parent galaxies. However, latest research suggests this seemingly bizarre alignment is just a quirk which will dissolve over time. High-velocity galaxy problem Galaxies in the NGC 3109 association are moving away too rapidly to be consistent with expectations in the ΛCDM model. In this framework, NGC 3109 is too massive and distant from the Local Group for it to have been flung out in a three-body interaction involving the Milky Way or Andromeda Galaxy. Galaxy morphology problem If galaxies grew hierarchically, then massive galaxies required many mergers. Major mergers inevitably create a classical bulge. On the contrary, about 80 % of observed galaxies give evidence of no such bulges, and giant pure-disc galaxies are commonplace. The tension can be quantified by comparing the observed distribution of galaxy shapes today with predictions from high-resolution hydrodynamical cosmological simulations in the ΛCDM framework, revealing a highly significant problem that is unlikely to be solved by improving the resolution of the simulations. The high bulgeless fraction was nearly constant for 8 billion years. Fast galaxy bar problem If galaxies were embedded within massive halos of cold dark matter, then the bars that often develop in their central regions would be slowed down by dynamical friction with the halo. This is in serious tension with the fact that observed galaxy bars are typically fast. Small scale crisis Comparison of the model with observations may have some problems on sub-galaxy scales, possibly predicting too many dwarf galaxies and too much dark matter in the innermost regions of galaxies. This problem is called the "small scale crisis". These small scales are harder to resolve in computer simulations, so it is not yet clear whether the problem is the simulations, non-standard properties of dark matter, or a more radical error in the model. High redshift galaxies Observations from the James Webb Space Telescope have resulted in various galaxies confirmed by spectroscopy at high redshift, such as JADES-GS-z13-0 at cosmological redshift of 13.2. Other candidate galaxies which have not been confirmed by spectroscopy include CEERS-93316 at cosmological redshift of 16.4. Existence of surprisingly massive galaxies in the early universe challenges the preferred models describing how dark matter halos drive galaxy formation. It remains to be seen whether a revision of the Lambda-CDM model with parameters given by Planck Collaboration is necessary to resolve this issue. The discrepancies could also be explained by particular properties (stellar masses or effective volume) of the candidate galaxies, yet unknown force or particle outside of the Standard Model through which dark matter interacts, more efficient baryonic matter accumulation by the dark matter halos, early dark energy models, or the hypothesized long-sought Population III stars. Missing baryon problem Massimo Persic and Paolo Salucci first estimated the baryonic density today present in ellipticals, spirals, groups and clusters of galaxies. 
They performed an integration of the baryonic mass-to-light ratio over luminosity (in the following ), weighted with the luminosity function over the previously mentioned classes of astrophysical objects: The result was: where . Note that this value is much lower than the prediction of standard cosmic nucleosynthesis , so that stars and gas in galaxies and in galaxy groups and clusters account for less than 10 % of the primordially synthesized baryons. This issue is known as the problem of the "missing baryons". The missing baryon problem is claimed to be resolved. Using observations of the kinematic Sunyaev–Zel'dovich effect spanning more than 90 % of the lifetime of the Universe, in 2021 astrophysicists found that approximately 50 % of all baryonic matter is outside dark matter haloes, filling the space between galaxies. Together with the amount of baryons inside galaxies and surrounding them, the total amount of baryons in the late time Universe is compatible with early Universe measurements. Unfalsifiability It has been argued that the ΛCDM model is built upon a foundation of conventionalist stratagems, rendering it unfalsifiable in the sense defined by Karl Popper. Parameters The simple ΛCDM model is based on six parameters: physical baryon density parameter; physical dark matter density parameter; the age of the universe; scalar spectral index; curvature fluctuation amplitude; and reionization optical depth. In accordance with Occam's razor, six is the smallest number of parameters needed to give an acceptable fit to the observations; other possible parameters are fixed at "natural" values, e.g. total density parameter = 1.00, dark energy equation of state = −1. (See below for extended models that allow these to vary.) The values of these six parameters are mostly not predicted by theory (though, ideally, they may be related by a future "Theory of Everything"), except that most versions of cosmic inflation predict the scalar spectral index should be slightly smaller than 1, consistent with the estimated value 0.96. The parameter values, and uncertainties, are estimated using large computer searches to locate the region of parameter space providing an acceptable match to cosmological observations. From these six parameters, the other model values, such as the Hubble constant and the dark energy density, can be readily calculated. Commonly, the set of observations fitted includes the cosmic microwave background anisotropy, the brightness/redshift relation for supernovae, and large-scale galaxy clustering including the baryon acoustic oscillation feature. Other observations, such as the Hubble constant, the abundance of galaxy clusters, weak gravitational lensing and globular cluster ages, are generally consistent with these, providing a check of the model, but are less precisely measured at present. Parameter values listed in the table are from the Planck Collaboration Cosmological parameters 68 % confidence limits for the base ΛCDM model from Planck CMB power spectra, in combination with lensing reconstruction and external data (BAO + JLA + H0). See also Planck (spacecraft). Extended models Extended models allow one or more of the "fixed" parameters above to vary, in addition to the basic six; so these models join smoothly to the basic six-parameter model in the limit that the additional parameter(s) approach the default values. 
For example, possible extensions of the simplest ΛCDM model allow for spatial curvature ( may be different from 1); or quintessence rather than a cosmological constant where the equation of state of dark energy is allowed to differ from −1. Cosmic inflation predicts tensor fluctuations (gravitational waves). Their amplitude is parameterized by the tensor-to-scalar ratio (denoted ), which is determined by the unknown energy scale of inflation. Other modifications allow hot dark matter in the form of neutrinos more massive than the minimal value, or a running spectral index; the latter is generally not favoured by simple cosmic inflation models. Allowing additional variable parameter(s) will generally increase the uncertainties in the standard six parameters quoted above, and may also shift the central values slightly. The table below shows results for each of the possible "6+1" scenarios with one additional variable parameter; this indicates that, as of 2015, there is no convincing evidence that any additional parameter is different from its default value. Some researchers have suggested that there is a running spectral index, but no statistically significant study has revealed one. Theoretical expectations suggest that the tensor-to-scalar ratio should be between 0 and 0.3, and the latest results are within those limits. See also Bolshoi cosmological simulation Galaxy formation and evolution Illustris project List of cosmological computation software Millennium Run Weakly interacting massive particles (WIMPs) The ΛCDM model is also known as the standard model of cosmology, but is not related to the Standard Model of particle physics. References Further reading External links Cosmology tutorial/NedWright Millennium Simulation WMAP estimated cosmological parameters/Latest Summary Dark matter Dark energy Concepts in astronomy Scientific models
Six degrees of freedom
Six degrees of freedom (6DOF), or sometimes six degrees of movement, refers to the six mechanical degrees of freedom of movement of a rigid body in three-dimensional space. Specifically, the body is free to change position through forward/backward (surge), up/down (heave), and left/right (sway) translation along three perpendicular axes, combined with changes in orientation through rotation about three perpendicular axes, often termed yaw (normal axis), pitch (transverse axis), and roll (longitudinal axis). Three degrees of freedom (3DOF), a term often used in the context of virtual reality, typically refers to tracking of rotational motion only: pitch, yaw, and roll.

Robotics
Serial and parallel manipulator systems are generally designed to position an end-effector with six degrees of freedom, consisting of three in translation and three in orientation. This provides a direct relationship between actuator positions and the configuration of the manipulator, defined by its forward and inverse kinematics. Robot arms are described by their degrees of freedom. This is a practical metric, in contrast to the abstract definition of degrees of freedom, which measures the aggregate positioning capability of a system. In 2007, Dean Kamen, inventor of the Segway, unveiled a prototype robotic arm with 14 degrees of freedom for DARPA. Humanoid robots typically have 30 or more degrees of freedom, with six degrees of freedom per arm, five or six in each leg, and several more in the torso and neck.

Engineering
The term is important in mechanical systems, especially biomechanical systems, for analyzing and measuring properties of systems that need to account for all six degrees of freedom. Measurement of the six degrees of freedom is accomplished today through both AC and DC magnetic or electromagnetic fields in sensors that transmit positional and angular data to a processing unit. The data is made useful through software that integrates it according to the needs and programming of the user. The six degrees of freedom of a mobile unit are divided into two motional classes, as described below.

Translational envelopes:
Moving forward and backward on the X-axis. (Surge)
Moving left and right on the Y-axis. (Sway)
Moving up and down on the Z-axis. (Heave)

Rotational envelopes:
Tilting side to side on the X-axis. (Roll)
Tilting forward and backward on the Y-axis. (Pitch)
Turning left and right on the Z-axis. (Yaw)

In terms of a headset, such as the kind used for virtual reality, the rotational envelopes can also be thought of in the following terms:
Pitch: Nodding "yes"
Yaw: Shaking "no"
Roll: Bobbling from side to side
A short sketch below shows how these six components combine into a single rigid-body pose.

Operational envelope types
There are three types of operational envelope for the six degrees of freedom: Direct, Semi-direct (conditional), and Non-direct. This classification holds regardless of the time remaining to execute the maneuver, the energy available to execute it, and whether the motion is commanded by a biological entity (e.g. a human), a robotic entity (e.g. a computer), or both.
Direct type: a degree of freedom that can be commanded directly, without particular conditions, as part of normal operation (e.g. an aileron on a basic airplane).
Semi-direct type: a degree of freedom that can be commanded only when specific conditions are met (e.g. reverse thrust on an aircraft).
Non-direct type: a degree of freedom that is achieved through interaction with the environment and cannot be commanded (e.g. the pitching motion of a vessel at sea).
A transitional type also exists in some vehicles, as illustrated by the Space Shuttle example below.
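To make the surge/sway/heave and roll/pitch/yaw decomposition described above concrete, the following sketch composes the three translations and three rotations into a single rigid-body pose. It is a minimal illustration only: the Z–Y–X (yaw, pitch, roll) composition order and the function name pose_6dof are choices made here for the example, not something specified by any particular tracking system or standard.

```python
import numpy as np

def pose_6dof(surge, sway, heave, roll, pitch, yaw):
    """Build a 4x4 homogeneous transform from six degrees of freedom.

    Translations (surge, sway, heave) are along the X, Y, Z axes; rotations
    (roll, pitch, yaw) are about X, Y, Z in radians, composed in Z-Y-X order.
    """
    cr, sr = np.cos(roll), np.sin(roll)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cy, sy = np.cos(yaw), np.sin(yaw)

    Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])    # roll about X
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])    # pitch about Y
    Rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])    # yaw about Z

    T = np.eye(4)
    T[:3, :3] = Rz @ Ry @ Rx               # the three rotational degrees of freedom
    T[:3, 3] = [surge, sway, heave]        # the three translational degrees of freedom
    return T

# Example: surge 2 m forward, heave 0.5 m up, and yaw 90 degrees to the left.
print(pose_6dof(2.0, 0.0, 0.5, 0.0, 0.0, np.pi / 2).round(3))
```

A 3DOF headset tracker, by contrast, would supply only the roll, pitch and yaw inputs and leave the translation components at zero.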
For example, when the Space Shuttle operated in low Earth orbit, the craft was described as fully-direct-six because in the vacuum of space, its six degrees could be commanded via reaction wheels and RCS thrusters. However, when the Space Shuttle was descending through the Earth's atmosphere for its return, the fully-direct-six degrees were no longer applicable as it was gliding through the air using its wings and control surfaces. Game controllers Six degrees of freedom also refers to movement in video game-play. First-person shooter (FPS) games generally provide five degrees of freedom: forwards/backwards, slide left/right, up/down (jump/crouch/lie), yaw (turn left/right), and pitch (look up/down). If the game allows leaning control, then some consider it a sixth DOF; however, this may not be completely accurate, as a lean is a limited partial rotation. The term 6DOF has sometimes been used to describe games which allow freedom of movement, but do not necessarily meet the full 6DOF criteria. For example, Dead Space 2, and to a lesser extent, Homeworld and Zone Of The Enders allow freedom of movement. Some examples of true 6DOF games, which allow independent control of all three movement axes and all three rotational axes, include Elite Dangerous, Shattered Horizon, the Descent franchise, the Everspace franchise, Retrovirus, Miner Wars, Space Engineers, Forsaken and Overload (from the same creators of Descent). The space MMO Vendetta Online also features 6 degrees of freedom. Motion tracking hardware devices such as TrackIR and software-based apps like Eyeware Beam are used for 6DOF head tracking. This device often finds its places in flight simulators and other vehicle simulators that require looking around the cockpit to locate enemies or simply avoiding accidents in-game. The acronym 3DOF, meaning movement in the three dimensions but not rotation, is sometimes encountered. The Razer Hydra, a motion controller for PC, tracks position and rotation of two wired nunchucks, providing six degrees of freedom on each hand. The SpaceOrb 360 is a 6DOF computer input device released in 1996 originally manufactured and sold by the SpaceTec IMC company (first bought by Labtec, which itself was later bought by Logitech). They now offer the 3Dconnexion range of 6DOF controllers, primarily targeting the professional CAD industry. The controllers sold with HTC VIVE provide 6DOF information by the lighthouse, which adopts Time of Flight (TOF) technology to determine the position of controllers. See also References Mechanics Biomedical engineering Video game control methods Robot kinematics
Gravitational constant
The gravitational constant is an empirical physical constant involved in the calculation of gravitational effects in Sir Isaac Newton's law of universal gravitation and in Albert Einstein's theory of general relativity. It is also known as the universal gravitational constant, the Newtonian constant of gravitation, or the Cavendish gravitational constant, denoted by the capital letter . In Newton's law, it is the proportionality constant connecting the gravitational force between two bodies with the product of their masses and the inverse square of their distance. In the Einstein field equations, it quantifies the relation between the geometry of spacetime and the energy–momentum tensor (also referred to as the stress–energy tensor). The measured value of the constant is known with some certainty to four significant digits. In SI units, its value is approximately The modern notation of Newton's law involving was introduced in the 1890s by C. V. Boys. The first implicit measurement with an accuracy within about 1% is attributed to Henry Cavendish in a 1798 experiment. Definition According to Newton's law of universal gravitation, the magnitude of the attractive force between two bodies each with a spherically symmetric density distribution is directly proportional to the product of their masses, and , and inversely proportional to the square of the distance, , directed along the line connecting their centres of mass: The constant of proportionality, , in this non-relativistic formulation is the gravitational constant. Colloquially, the gravitational constant is also called "Big G", distinct from "small g", which is the local gravitational field of Earth (also referred to as free-fall acceleration). Where is the mass of the Earth and is the radius of the Earth, the two quantities are related by: The gravitational constant appears in the Einstein field equations of general relativity, where is the Einstein tensor (not the gravitational constant despite the use of ), is the cosmological constant, is the metric tensor, is the stress–energy tensor, and is the Einstein gravitational constant, a constant originally introduced by Einstein that is directly related to the Newtonian constant of gravitation: Value and uncertainty The gravitational constant is a physical constant that is difficult to measure with high accuracy. This is because the gravitational force is an extremely weak force as compared to other fundamental forces at the laboratory scale. In SI units, the CODATA-recommended value of the gravitational constant is: = The relative standard uncertainty is . Natural units Due to its use as a defining constant in some systems of natural units, particularly geometrized unit systems such as Planck units and Stoney units, the value of the gravitational constant will generally have a numeric value of 1 or a value close to it when expressed in terms of those units. Due to the significant uncertainty in the measured value of G in terms of other known fundamental constants, a similar level of uncertainty will show up in the value of many quantities when expressed in such a unit system. Orbital mechanics In astrophysics, it is convenient to measure distances in parsecs (pc), velocities in kilometres per second (km/s) and masses in solar units . In these units, the gravitational constant is: For situations where tides are important, the relevant length scales are solar radii rather than parsecs. 
In these units, the gravitational constant is: In orbital mechanics, the period of an object in circular orbit around a spherical object obeys where is the volume inside the radius of the orbit, and is the total mass of the two objects. It follows that This way of expressing shows the relationship between the average density of a planet and the period of a satellite orbiting just above its surface. For elliptical orbits, applying Kepler's 3rd law, expressed in units characteristic of Earth's orbit: where distance is measured in terms of the semi-major axis of Earth's orbit (the astronomical unit, AU), time in years, and mass in the total mass of the orbiting system. The above equation is exact only within the approximation of the Earth's orbit around the Sun as a two-body problem in Newtonian mechanics, the measured quantities contain corrections from the perturbations from other bodies in the solar system and from general relativity. From 1964 until 2012, however, it was used as the definition of the astronomical unit and thus held by definition: Since 2012, the AU is defined as exactly, and the equation can no longer be taken as holding precisely. The quantity —the product of the gravitational constant and the mass of a given astronomical body such as the Sun or Earth—is known as the standard gravitational parameter (also denoted ). The standard gravitational parameter appears as above in Newton's law of universal gravitation, as well as in formulas for the deflection of light caused by gravitational lensing, in Kepler's laws of planetary motion, and in the formula for escape velocity. This quantity gives a convenient simplification of various gravity-related formulas. The product is known much more accurately than either factor is. Calculations in celestial mechanics can also be carried out using the units of solar masses, mean solar days and astronomical units rather than standard SI units. For this purpose, the Gaussian gravitational constant was historically in widespread use, , expressing the mean angular velocity of the Sun–Earth system. The use of this constant, and the implied definition of the astronomical unit discussed above, has been deprecated by the IAU since 2012. History of measurement Early history The existence of the constant is implied in Newton's law of universal gravitation as published in the 1680s (although its notation as dates to the 1890s), but is not calculated in his Philosophiæ Naturalis Principia Mathematica where it postulates the inverse-square law of gravitation. In the Principia, Newton considered the possibility of measuring gravity's strength by measuring the deflection of a pendulum in the vicinity of a large hill, but thought that the effect would be too small to be measurable. Nevertheless, he had the opportunity to estimate the order of magnitude of the constant when he surmised that "the mean density of the earth might be five or six times as great as the density of water", which is equivalent to a gravitational constant of the order: ≈ A measurement was attempted in 1738 by Pierre Bouguer and Charles Marie de La Condamine in their "Peruvian expedition". Bouguer downplayed the significance of their results in 1740, suggesting that the experiment had at least proved that the Earth could not be a hollow shell, as some thinkers of the day, including Edmond Halley, had suggested. 
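The orbital-mechanics relations above, the standard gravitational parameter μ = GM and the link between a planet's mean density and the period of a low orbit, can be illustrated with a short numerical sketch. The constants below are rounded values used purely for illustration; a real calculation would start from the precisely known standard gravitational parameters rather than from G itself.

```python
import math

# Rounded constants, for illustration only.
G       = 6.674e-11       # m^3 kg^-1 s^-2
M_sun   = 1.989e30        # kg
M_earth = 5.972e24        # kg
R_earth = 6.371e6         # m (mean radius)
AU      = 1.496e11        # m
pc      = 3.0857e16       # m

# G in the astrophysical units mentioned above: parsecs, km/s and solar masses.
print(f"G ~ {G * M_sun / pc / 1e6:.2e} pc (km/s)^2 / M_sun")          # ~ 4.3e-3

# The standard gravitational parameter mu = G*M is known far more precisely than
# G or M separately.  Kepler's third law for a circular orbit: T = 2*pi*sqrt(a^3/mu).
mu_sun = G * M_sun
T_earth = 2 * math.pi * math.sqrt(AU**3 / mu_sun)
print(f"orbital period at 1 AU ~ {T_earth / 86400:.1f} days")          # ~ 365

# A satellite skimming a planet's surface has a period set only by the mean density:
# T^2 = 3*pi / (G * rho).
rho_earth = M_earth / (4.0 / 3.0 * math.pi * R_earth**3)
T_low = math.sqrt(3.0 * math.pi / (G * rho_earth))
print(f"Earth mean density ~ {rho_earth:.0f} kg/m^3, low-orbit period ~ {T_low / 60:.0f} min")
```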
The Schiehallion experiment, proposed in 1772 and completed in 1776, was the first successful measurement of the mean density of the Earth, and thus indirectly of the gravitational constant. The result reported by Charles Hutton (1778) suggested a density of ( times the density of water), about 20% below the modern value. This immediately led to estimates on the densities and masses of the Sun, Moon and planets, sent by Hutton to Jérôme Lalande for inclusion in his planetary tables. As discussed above, establishing the average density of Earth is equivalent to measuring the gravitational constant, given Earth's mean radius and the mean gravitational acceleration at Earth's surface, by setting Based on this, Hutton's 1778 result is equivalent to . The first direct measurement of gravitational attraction between two bodies in the laboratory was performed in 1798, seventy-one years after Newton's death, by Henry Cavendish. He determined a value for implicitly, using a torsion balance invented by the geologist Rev. John Michell (1753). He used a horizontal torsion beam with lead balls whose inertia (in relation to the torsion constant) he could tell by timing the beam's oscillation. Their faint attraction to other balls placed alongside the beam was detectable by the deflection it caused. In spite of the experimental design being due to Michell, the experiment is now known as the Cavendish experiment for its first successful execution by Cavendish. Cavendish's stated aim was the "weighing of Earth", that is, determining the average density of Earth and the Earth's mass. His result, , corresponds to value of . It is surprisingly accurate, about 1% above the modern value (comparable to the claimed relative standard uncertainty of 0.6%). 19th century The accuracy of the measured value of has increased only modestly since the original Cavendish experiment. is quite difficult to measure because gravity is much weaker than other fundamental forces, and an experimental apparatus cannot be separated from the gravitational influence of other bodies. Measurements with pendulums were made by Francesco Carlini (1821, ), Edward Sabine (1827, ), Carlo Ignazio Giulio (1841, ) and George Biddell Airy (1854, ). Cavendish's experiment was first repeated by Ferdinand Reich (1838, 1842, 1853), who found a value of , which is actually worse than Cavendish's result, differing from the modern value by 1.5%. Cornu and Baille (1873), found . Cavendish's experiment proved to result in more reliable measurements than pendulum experiments of the "Schiehallion" (deflection) type or "Peruvian" (period as a function of altitude) type. Pendulum experiments still continued to be performed, by Robert von Sterneck (1883, results between 5.0 and ) and Thomas Corwin Mendenhall (1880, ). Cavendish's result was first improved upon by John Henry Poynting (1891), who published a value of , differing from the modern value by 0.2%, but compatible with the modern value within the cited relative standard uncertainty of 0.55%. In addition to Poynting, measurements were made by C. V. Boys (1895) and Carl Braun (1897), with compatible results suggesting = . The modern notation involving the constant was introduced by Boys in 1894 and becomes standard by the end of the 1890s, with values usually cited in the cgs system. Richarz and Krigar-Menzel (1898) attempted a repetition of the Cavendish experiment using 100,000 kg of lead for the attracting mass. 
The precision of their result of was, however, of the same order of magnitude as the other results at the time. Arthur Stanley Mackenzie in The Laws of Gravitation (1899) reviews the work done in the 19th century. Poynting is the author of the article "Gravitation" in the Encyclopædia Britannica Eleventh Edition (1911). Here, he cites a value of = with a relative uncertainty of 0.2%. Modern value Paul R. Heyl (1930) published the value of (relative uncertainty 0.1%), improved to (relative uncertainty 0.045% = 450 ppm) in 1942. However, Heyl used the statistical spread as his standard deviation, and he admitted himself that measurements using the same material yielded very similar results while measurements using different materials yielded vastly different results. He spent the next 12 years after his 1930 paper to do more precise measurements, hoping that the composition-dependent effect would go away, but it did not, as he noted in his final paper from the year 1942. Published values of derived from high-precision measurements since the 1950s have remained compatible with Heyl (1930), but within the relative uncertainty of about 0.1% (or 1000 ppm) have varied rather broadly, and it is not entirely clear if the uncertainty has been reduced at all since the 1942 measurement. Some measurements published in the 1980s to 2000s were, in fact, mutually exclusive. Establishing a standard value for with a relative standard uncertainty better than 0.1% has therefore remained rather speculative. By 1969, the value recommended by the National Institute of Standards and Technology (NIST) was cited with a relative standard uncertainty of 0.046% (460 ppm), lowered to 0.012% (120 ppm) by 1986. But the continued publication of conflicting measurements led NIST to considerably increase the standard uncertainty in the 1998 recommended value, by a factor of 12, to a standard uncertainty of 0.15%, larger than the one given by Heyl (1930). The uncertainty was again lowered in 2002 and 2006, but once again raised, by a more conservative 20%, in 2010, matching the relative standard uncertainty of 120 ppm published in 1986. For the 2014 update, CODATA reduced the uncertainty to 46 ppm, less than half the 2010 value, and one order of magnitude below the 1969 recommendation. The following table shows the NIST recommended values published since 1969: In the January 2007 issue of Science, Fixler et al. described a measurement of the gravitational constant by a new technique, atom interferometry, reporting a value of , 0.28% (2800 ppm) higher than the 2006 CODATA value. An improved cold atom measurement by Rosi et al. was published in 2014 of . Although much closer to the accepted value (suggesting that the Fixler et al. measurement was erroneous), this result was 325 ppm below the recommended 2014 CODATA value, with non-overlapping standard uncertainty intervals. As of 2018, efforts to re-evaluate the conflicting results of measurements are underway, coordinated by NIST, notably a repetition of the experiments reported by Quinn et al. (2013). In August 2018, a Chinese research group announced new measurements based on torsion balances, and based on two different methods. These are claimed as the most accurate measurements ever made, with standard uncertainties cited as low as 12 ppm. The difference of 2.7σ between the two results suggests there could be sources of error unaccounted for. 
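The "2.7σ" disagreement quoted above is the usual way of expressing how strongly two measurements with independent Gaussian uncertainties conflict. The short sketch below shows the calculation in general form; the two input values are hypothetical placeholders chosen only to produce a tension of roughly that size, not the actual published 2018 results.

```python
def tension_sigma(x1, u1, x2, u2):
    """Number of combined standard deviations separating two measurements
    x1 +/- u1 and x2 +/- u2, assuming independent Gaussian uncertainties."""
    return abs(x1 - x2) / (u1**2 + u2**2) ** 0.5

# Hypothetical placeholder values of G (in 1e-11 m^3 kg^-1 s^-2), each quoted
# with a 12 ppm relative uncertainty as in the measurements described above.
g1, g2 = 6.67400, 6.67430
u1, u2 = g1 * 12e-6, g2 * 12e-6
print(f"tension ~ {tension_sigma(g1, u1, g2, u2):.1f} sigma")
```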
Constancy Analysis of observations of 580 type Ia supernovae shows that the gravitational constant has varied by less than one part in ten billion per year over the last nine billion years. See also Gravity of Earth Standard gravity Gaussian gravitational constant Orbital mechanics Escape velocity Gravitational potential Gravitational wave Strong gravity Dirac large numbers hypothesis Accelerating expansion of the universe Lunar Laser Ranging experiment Cosmological constant References Footnotes Citations Sources (Complete report available online: PostScript; PDF. Tables from the report also available: Astrodynamic Constants and Parameters) External links Newtonian constant of gravitation at the National Institute of Standards and Technology References on Constants, Units, and Uncertainty The Controversy over Newton's Gravitational Constant — additional commentary on measurement problems Gravity Fundamental constants
EOn
eOn was a volunteer computing project running on the Berkeley Open Infrastructure for Network Computing (BOINC) platform, which uses theoretical chemistry techniques to solve problems in condensed matter physics and materials science. It was a project of the Institute for Computational Engineering and Sciences at the University of Texas. Traditional molecular dynamics can accurately model events that occur within a fraction of a millisecond. In order to model events that take place on much longer timescales, Eon combines transition state theory with kinetic Monte Carlo. The result is a combination of classical mechanics and quantum methods like density functional theory. Since the generation of new work units depended on the results of previous units, the project could only give each host a few units at a time. On May 26, 2014, it was announced that eOn would be retiring from BOINC. See also List of volunteer computing projects References Science in society Free science software Volunteer computing projects
Scale invariance
In physics, mathematics and statistics, scale invariance is a feature of objects or laws that do not change if scales of length, energy, or other variables, are multiplied by a common factor, and thus represent a universality. The technical term for this transformation is a dilatation (also known as dilation). Dilatations can form part of a larger conformal symmetry. In mathematics, scale invariance usually refers to an invariance of individual functions or curves. A closely related concept is self-similarity, where a function or curve is invariant under a discrete subset of the dilations. It is also possible for the probability distributions of random processes to display this kind of scale invariance or self-similarity. In classical field theory, scale invariance most commonly applies to the invariance of a whole theory under dilatations. Such theories typically describe classical physical processes with no characteristic length scale. In quantum field theory, scale invariance has an interpretation in terms of particle physics. In a scale-invariant theory, the strength of particle interactions does not depend on the energy of the particles involved. In statistical mechanics, scale invariance is a feature of phase transitions. The key observation is that near a phase transition or critical point, fluctuations occur at all length scales, and thus one should look for an explicitly scale-invariant theory to describe the phenomena. Such theories are scale-invariant statistical field theories, and are formally very similar to scale-invariant quantum field theories. Universality is the observation that widely different microscopic systems can display the same behaviour at a phase transition. Thus phase transitions in many different systems may be described by the same underlying scale-invariant theory. In general, dimensionless quantities are scale-invariant. The analogous concept in statistics are standardized moments, which are scale-invariant statistics of a variable, while the unstandardized moments are not. Scale-invariant curves and self-similarity In mathematics, one can consider the scaling properties of a function or curve under rescalings of the variable . That is, one is interested in the shape of for some scale factor , which can be taken to be a length or size rescaling. The requirement for to be invariant under all rescalings is usually taken to be for some choice of exponent Δ, and for all dilations . This is equivalent to   being a homogeneous function of degree Δ. Examples of scale-invariant functions are the monomials , for which , in that clearly An example of a scale-invariant curve is the logarithmic spiral, a kind of curve that often appears in nature. In polar coordinates , the spiral can be written as Allowing for rotations of the curve, it is invariant under all rescalings ; that is, is identical to a rotated version of . Projective geometry The idea of scale invariance of a monomial generalizes in higher dimensions to the idea of a homogeneous polynomial, and more generally to a homogeneous function. Homogeneous functions are the natural denizens of projective space, and homogeneous polynomials are studied as projective varieties in projective geometry. Projective geometry is a particularly rich field of mathematics; in its most abstract forms, the geometry of schemes, it has connections to various topics in string theory. Fractals It is sometimes said that fractals are scale-invariant, although more precisely, one should say that they are self-similar. 
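The defining relation discussed above can be checked numerically. The sketch below assumes the convention f(λx) = λ^Δ f(x), under which a monomial x^n has Δ = n; sign conventions for Δ differ between sources, so this choice is an assumption, and the helper name is_scale_invariant is used only here.

```python
import numpy as np

def is_scale_invariant(f, delta, lambdas, xs, tol=1e-9):
    """Numerically test whether f(lam * x) == lam**delta * f(x) for all
    tested scale factors lam and sample points x."""
    for lam in lambdas:
        for x in xs:
            if abs(f(lam * x) - lam**delta * f(x)) > tol * abs(f(lam * x)):
                return False
    return True

xs = np.linspace(0.5, 5.0, 7)
lambdas = [0.1, 2.0, 17.3]

# A monomial x**3 is scale-invariant with Delta = 3 ...
print(is_scale_invariant(lambda x: x**3, 3, lambdas, xs))        # True
# ... while adding a fixed scale (the constant term) breaks the invariance.
print(is_scale_invariant(lambda x: x**3 + 1.0, 3, lambdas, xs))  # False
```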
A fractal is equal to itself typically for only a discrete set of values , and even then a translation and rotation may have to be applied to match the fractal up to itself. Thus, for example, the Koch curve scales with , but the scaling holds only for values of for integer . In addition, the Koch curve scales not only at the origin, but, in a certain sense, "everywhere": miniature copies of itself can be found all along the curve. Some fractals may have multiple scaling factors at play at once; such scaling is studied with multi-fractal analysis. Periodic external and internal rays are invariant curves . Scale invariance in stochastic processes If is the average, expected power at frequency , then noise scales as with Δ = 0 for white noise, Δ = −1 for pink noise, and Δ = −2 for Brownian noise (and more generally, Brownian motion). More precisely, scaling in stochastic systems concerns itself with the likelihood of choosing a particular configuration out of the set of all possible random configurations. This likelihood is given by the probability distribution. Examples of scale-invariant distributions are the Pareto distribution and the Zipfian distribution. Scale-invariant Tweedie distributions Tweedie distributions are a special case of exponential dispersion models, a class of statistical models used to describe error distributions for the generalized linear model and characterized by closure under additive and reproductive convolution as well as under scale transformation. These include a number of common distributions: the normal distribution, Poisson distribution and gamma distribution, as well as more unusual distributions like the compound Poisson-gamma distribution, positive stable distributions, and extreme stable distributions. Consequent to their inherent scale invariance Tweedie random variables Y demonstrate a variance var(Y) to mean E(Y) power law: , where a and p are positive constants. This variance to mean power law is known in the physics literature as fluctuation scaling, and in the ecology literature as Taylor's law. Random sequences, governed by the Tweedie distributions and evaluated by the method of expanding bins exhibit a biconditional relationship between the variance to mean power law and power law autocorrelations. The Wiener–Khinchin theorem further implies that for any sequence that exhibits a variance to mean power law under these conditions will also manifest 1/f noise. The Tweedie convergence theorem provides a hypothetical explanation for the wide manifestation of fluctuation scaling and 1/f noise. It requires, in essence, that any exponential dispersion model that asymptotically manifests a variance to mean power law will be required express a variance function that comes within the domain of attraction of a Tweedie model. Almost all distribution functions with finite cumulant generating functions qualify as exponential dispersion models and most exponential dispersion models manifest variance functions of this form. Hence many probability distributions have variance functions that express this asymptotic behavior, and the Tweedie distributions become foci of convergence for a wide range of data types. Much as the central limit theorem requires certain kinds of random variables to have as a focus of convergence the Gaussian distribution and express white noise, the Tweedie convergence theorem requires certain non-Gaussian random variables to express 1/f noise and fluctuation scaling. 
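The variance-to-mean power law described above is easy to demonstrate with simulated count data. The sketch below uses Poisson samples (a Tweedie case in which the variance equals the mean, so the fitted exponent should come out near 1) and estimates the exponent by ordinary regression on log–log axes; it is a toy demonstration, not the "method of expanding bins" referred to in the text.

```python
import numpy as np

rng = np.random.default_rng(0)

# For each mean value, draw a batch of Poisson counts and record the batch's
# sample mean and sample variance.
means = [0.5, 1, 2, 5, 10, 20, 50, 100]
sample_means, sample_vars = [], []
for m in means:
    counts = rng.poisson(m, size=20000)
    sample_means.append(counts.mean())
    sample_vars.append(counts.var())

# Fit var = a * mean**p on log-log axes; for Poisson data, p should be close to 1.
p, log_a = np.polyfit(np.log(sample_means), np.log(sample_vars), 1)
print(f"fitted variance-to-mean exponent p ~ {p:.2f} (Poisson expectation: 1)")
```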
Cosmology In physical cosmology, the power spectrum of the spatial distribution of the cosmic microwave background is near to being a scale-invariant function. Although in mathematics this means that the spectrum is a power-law, in cosmology the term "scale-invariant" indicates that the amplitude, , of primordial fluctuations as a function of wave number, , is approximately constant, i.e. a flat spectrum. This pattern is consistent with the proposal of cosmic inflation. Scale invariance in classical field theory Classical field theory is generically described by a field, or set of fields, φ, that depend on coordinates, x. Valid field configurations are then determined by solving differential equations for φ, and these equations are known as field equations. For a theory to be scale-invariant, its field equations should be invariant under a rescaling of the coordinates, combined with some specified rescaling of the fields, The parameter Δ is known as the scaling dimension of the field, and its value depends on the theory under consideration. Scale invariance will typically hold provided that no fixed length scale appears in the theory. Conversely, the presence of a fixed length scale indicates that a theory is not scale-invariant. A consequence of scale invariance is that given a solution of a scale-invariant field equation, we can automatically find other solutions by rescaling both the coordinates and the fields appropriately. In technical terms, given a solution, φ(x), one always has other solutions of the form Scale invariance of field configurations For a particular field configuration, φ(x), to be scale-invariant, we require that where Δ is, again, the scaling dimension of the field. We note that this condition is rather restrictive. In general, solutions even of scale-invariant field equations will not be scale-invariant, and in such cases the symmetry is said to be spontaneously broken. Classical electromagnetism An example of a scale-invariant classical field theory is electromagnetism with no charges or currents. The fields are the electric and magnetic fields, E(x,t) and B(x,t), while their field equations are Maxwell's equations. With no charges or currents, these field equations take the form of wave equations where c is the speed of light. These field equations are invariant under the transformation Moreover, given solutions of Maxwell's equations, E(x, t) and B(x, t), it holds that E(λx, λt) and B(λx, λt) are also solutions. Massless scalar field theory Another example of a scale-invariant classical field theory is the massless scalar field (note that the name scalar is unrelated to scale invariance). The scalar field, is a function of a set of spatial variables, x, and a time variable, . Consider first the linear theory. Like the electromagnetic field equations above, the equation of motion for this theory is also a wave equation, and is invariant under the transformation The name massless refers to the absence of a term in the field equation. Such a term is often referred to as a `mass' term, and would break the invariance under the above transformation. In relativistic field theories, a mass-scale, is physically equivalent to a fixed length scale through and so it should not be surprising that massive scalar field theory is not scale-invariant. φ4 theory The field equations in the examples above are all linear in the fields, which has meant that the scaling dimension, Δ, has not been so important. 
However, one usually requires that the scalar field action is dimensionless, and this fixes the scaling dimension of . In particular, where is the combined number of spatial and time dimensions. Given this scaling dimension for , there are certain nonlinear modifications of massless scalar field theory which are also scale-invariant. One example is massless φ4 theory for  = 4. The field equation is (Note that the name 4 derives from the form of the Lagrangian, which contains the fourth power of .) When  = 4 (e.g. three spatial dimensions and one time dimension), the scalar field scaling dimension is Δ = 1. The field equation is then invariant under the transformation The key point is that the parameter must be dimensionless, otherwise one introduces a fixed length scale into the theory: For 4 theory, this is only the case in  = 4. Note that under these transformations the argument of the function is unchanged. Scale invariance in quantum field theory The scale-dependence of a quantum field theory (QFT) is characterised by the way its coupling parameters depend on the energy-scale of a given physical process. This energy dependence is described by the renormalization group, and is encoded in the beta-functions of the theory. For a QFT to be scale-invariant, its coupling parameters must be independent of the energy-scale, and this is indicated by the vanishing of the beta-functions of the theory. Such theories are also known as fixed points of the corresponding renormalization group flow. Quantum electrodynamics A simple example of a scale-invariant QFT is the quantized electromagnetic field without charged particles. This theory actually has no coupling parameters (since photons are massless and non-interacting) and is therefore scale-invariant, much like the classical theory. However, in nature the electromagnetic field is coupled to charged particles, such as electrons. The QFT describing the interactions of photons and charged particles is quantum electrodynamics (QED), and this theory is not scale-invariant. We can see this from the QED beta-function. This tells us that the electric charge (which is the coupling parameter in the theory) increases with increasing energy. Therefore, while the quantized electromagnetic field without charged particles is scale-invariant, QED is not scale-invariant. Massless scalar field theory Free, massless quantized scalar field theory has no coupling parameters. Therefore, like the classical version, it is scale-invariant. In the language of the renormalization group, this theory is known as the Gaussian fixed point. However, even though the classical massless φ4 theory is scale-invariant in D = 4, the quantized version is not scale-invariant. We can see this from the beta-function for the coupling parameter, g. Even though the quantized massless φ4 is not scale-invariant, there do exist scale-invariant quantized scalar field theories other than the Gaussian fixed point. One example is the Wilson–Fisher fixed point, below. Conformal field theory Scale-invariant QFTs are almost always invariant under the full conformal symmetry, and the study of such QFTs is conformal field theory (CFT). Operators in a CFT have a well-defined scaling dimension, analogous to the scaling dimension, ∆, of a classical field discussed above. However, the scaling dimensions of operators in a CFT typically differ from those of the fields in the corresponding classical theory. The additional contributions appearing in the CFT are known as anomalous scaling dimensions. 
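For reference, the massless φ⁴ field equation referred to above, and the rescaling under which it is invariant, can be written out explicitly. The displayed equations appear to have been lost from the text, so the following reproduces the standard textbook form; sign and metric conventions vary between sources.

```latex
% Massless \varphi^4 theory in D spacetime dimensions: field equation
\partial_\mu \partial^\mu \varphi + g\,\varphi^{3} = 0 ,
% with the scaling dimension fixed by requiring a dimensionless action,
\Delta = \frac{D-2}{2} .
% If \varphi(x) is a solution, then so is the rescaled configuration
\varphi_\lambda(x) = \lambda^{-\Delta}\,\varphi(\lambda^{-1}x),
% provided the coupling g is dimensionless, which happens only for D = 4
% (where \Delta = 1).
```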
Scale and conformal anomalies The φ4 theory example above demonstrates that the coupling parameters of a quantum field theory can be scale-dependent even if the corresponding classical field theory is scale-invariant (or conformally invariant). If this is the case, the classical scale (or conformal) invariance is said to be anomalous. A classically scale-invariant field theory, where scale invariance is broken by quantum effects, provides an explication of the nearly exponential expansion of the early universe called cosmic inflation, as long as the theory can be studied through perturbation theory. Phase transitions In statistical mechanics, as a system undergoes a phase transition, its fluctuations are described by a scale-invariant statistical field theory. For a system in equilibrium (i.e. time-independent) in spatial dimensions, the corresponding statistical field theory is formally similar to a -dimensional CFT. The scaling dimensions in such problems are usually referred to as critical exponents, and one can in principle compute these exponents in the appropriate CFT. The Ising model An example that links together many of the ideas in this article is the phase transition of the Ising model, a simple model of ferromagnetic substances. This is a statistical mechanics model, which also has a description in terms of conformal field theory. The system consists of an array of lattice sites, which form a -dimensional periodic lattice. Associated with each lattice site is a magnetic moment, or spin, and this spin can take either the value +1 or −1. (These states are also called up and down, respectively.) The key point is that the Ising model has a spin-spin interaction, making it energetically favourable for two adjacent spins to be aligned. On the other hand, thermal fluctuations typically introduce a randomness into the alignment of spins. At some critical temperature, , spontaneous magnetization is said to occur. This means that below the spin-spin interaction will begin to dominate, and there is some net alignment of spins in one of the two directions. An example of the kind of physical quantities one would like to calculate at this critical temperature is the correlation between spins separated by a distance . This has the generic behaviour: for some particular value of , which is an example of a critical exponent. CFT description The fluctuations at temperature are scale-invariant, and so the Ising model at this phase transition is expected to be described by a scale-invariant statistical field theory. In fact, this theory is the Wilson–Fisher fixed point, a particular scale-invariant scalar field theory. In this context, is understood as a correlation function of scalar fields, Now we can fit together a number of the ideas seen already. From the above, one sees that the critical exponent, , for this phase transition, is also an anomalous dimension. This is because the classical dimension of the scalar field, is modified to become where is the number of dimensions of the Ising model lattice. So this anomalous dimension in the conformal field theory is the same as a particular critical exponent of the Ising model phase transition. Note that for dimension , can be calculated approximately, using the epsilon expansion, and one finds that . In the physically interesting case of three spatial dimensions, we have =1, and so this expansion is not strictly reliable. However, a semi-quantitative prediction is that is numerically small in three dimensions. 
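The generic critical behaviour alluded to above is conventionally written as follows; since the displayed formulas appear to have been stripped from the text, this restates the standard definitions of the exponent η and the associated scaling dimension.

```latex
% Spin-spin correlation at the critical temperature, for spins separated by a
% distance r in d spatial dimensions, defining the critical exponent \eta:
G(r) = \langle S(0)\, S(r) \rangle \sim \frac{1}{r^{\,d-2+\eta}} .
% Equivalently, the scaling dimension of the order-parameter field is
\Delta_\phi = \frac{d-2+\eta}{2} ,
% so \eta measures the shift away from the classical value (d-2)/2.
```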
On the other hand, in the two-dimensional case the Ising model is exactly soluble. In particular, it is equivalent to one of the minimal models, a family of well-understood CFTs, and it is possible to compute (and the other critical exponents) exactly, . Schramm–Loewner evolution The anomalous dimensions in certain two-dimensional CFTs can be related to the typical fractal dimensions of random walks, where the random walks are defined via Schramm–Loewner evolution (SLE). As we have seen above, CFTs describe the physics of phase transitions, and so one can relate the critical exponents of certain phase transitions to these fractal dimensions. Examples include the 2d critical Ising model and the more general 2d critical Potts model. Relating other 2d CFTs to SLE is an active area of research. Universality A phenomenon known as universality is seen in a large variety of physical systems. It expresses the idea that different microscopic physics can give rise to the same scaling behaviour at a phase transition. A canonical example of universality involves the following two systems: The Ising model phase transition, described above. The liquid-vapour transition in classical fluids. Even though the microscopic physics of these two systems is completely different, their critical exponents turn out to be the same. Moreover, one can calculate these exponents using the same statistical field theory. The key observation is that at a phase transition or critical point, fluctuations occur at all length scales, and thus one should look for a scale-invariant statistical field theory to describe the phenomena. In a sense, universality is the observation that there are relatively few such scale-invariant theories. The set of different microscopic theories described by the same scale-invariant theory is known as a universality class. Other examples of systems which belong to a universality class are: Avalanches in piles of sand. The likelihood of an avalanche is in power-law proportion to the size of the avalanche, and avalanches are seen to occur at all size scales. The frequency of network outages on the Internet, as a function of size and duration. The frequency of citations of journal articles, considered in the network of all citations amongst all papers, as a function of the number of citations in a given paper. The formation and propagation of cracks and tears in materials ranging from steel to rock to paper. The variations of the direction of the tear, or the roughness of a fractured surface, are in power-law proportion to the size scale. The electrical breakdown of dielectrics, which resemble cracks and tears. The percolation of fluids through disordered media, such as petroleum through fractured rock beds, or water through filter paper, such as in chromatography. Power-law scaling connects the rate of flow to the distribution of fractures. The diffusion of molecules in solution, and the phenomenon of diffusion-limited aggregation. The distribution of rocks of different sizes in an aggregate mixture that is being shaken (with gravity acting on the rocks). The key observation is that, for all of these different systems, the behaviour resembles a phase transition, and that the language of statistical mechanics and scale-invariant statistical field theory may be applied to describe them. Other examples of scale invariance Newtonian fluid mechanics with no applied forces Under certain circumstances, fluid mechanics is a scale-invariant classical field theory. 
The fields are the velocity of the fluid flow, , the fluid density, , and the fluid pressure, . These fields must satisfy both the Navier–Stokes equation and the continuity equation. For a Newtonian fluid these take the respective forms where is the dynamic viscosity. In order to deduce the scale invariance of these equations we specify an equation of state, relating the fluid pressure to the fluid density. The equation of state depends on the type of fluid and the conditions to which it is subjected. For example, we consider the isothermal ideal gas, which satisfies where is the speed of sound in the fluid. Given this equation of state, Navier–Stokes and the continuity equation are invariant under the transformations Given the solutions and , we automatically have that and are also solutions. Computer vision In computer vision and biological vision, scaling transformations arise because of the perspective image mapping and because of objects having different physical size in the world. In these areas, scale invariance refers to local image descriptors or visual representations of the image data that remain invariant when the local scale in the image domain is changed. Detecting local maxima over scales of normalized derivative responses provides a general framework for obtaining scale invariance from image data. Examples of applications include blob detection, corner detection, ridge detection, and object recognition via the scale-invariant feature transform. See also Invariant (mathematics) Inverse square potential Power law Scale-free network References Further reading Extensive discussion of scale invariance in quantum and statistical field theories, applications to critical phenomena and the epsilon expansion and related topics. Symmetry Scaling symmetries Conformal field theory Critical phenomena
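Returning to the computer-vision paragraph above, the idea of detecting maxima of scale-normalized derivative responses can be illustrated in a few lines. The following Python sketch assumes NumPy and SciPy; the synthetic blob image and the grid of sigma values are illustrative choices, not part of the original text.

import numpy as np
from scipy.ndimage import gaussian_laplace

# Synthetic image: one Gaussian blob of width ~8 pixels on a dark background.
size, blob_width = 128, 8.0
y, x = np.mgrid[0:size, 0:size]
image = np.exp(-((x - 64) ** 2 + (y - 64) ** 2) / (2.0 * blob_width ** 2))

# Scale-normalized Laplacian-of-Gaussian response: the factor sigma**2 makes
# responses at different scales comparable, which is what enables scale selection.
sigmas = np.linspace(2.0, 20.0, 50)
responses = [sigma ** 2 * np.abs(gaussian_laplace(image, sigma)).max() for sigma in sigmas]

selected_scale = sigmas[int(np.argmax(responses))]
print(selected_scale)   # close to blob_width: the selected scale tracks the blob size

The same maximize-over-scales idea underlies blob detectors used in the scale-invariant feature transform mentioned above.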
Modern physics
Modern physics is a branch of physics that developed in the early 20th century and onward or branches greatly influenced by early 20th century physics. Notable branches of modern physics include quantum mechanics, special relativity, and general relativity. Classical physics is typically concerned with everyday conditions: speeds are much lower than the speed of light, sizes are much greater than that of atoms, and energies are relatively small. Modern physics, however, is concerned with more extreme conditions, such as high velocities that are comparable to the speed of light (special relativity), small distances comparable to the atomic radius (quantum mechanics), and very high energies (relativity). In general, quantum and relativistic effects are believed to exist across all scales, although these effects may be very small at human scale. While quantum mechanics is compatible with special relativity (See: Relativistic quantum mechanics), one of the unsolved problems in physics is the unification of quantum mechanics and general relativity, which the Standard Model of particle physics currently cannot account for. Modern physics is an effort to understand the underlying processes of the interactions of matter using the tools of science and engineering. In a literal sense, the term modern physics means up-to-date physics. In this sense, a significant portion of so-called classical physics is modern. However, since roughly 1890, new discoveries have caused significant paradigm shifts: especially the advent of quantum mechanics (QM) and relativity (ER). Physics that incorporates elements of either QM or ER (or both) is said to be modern physics. It is in this latter sense that the term is generally used. Modern physics is often encountered when dealing with extreme conditions. Quantum mechanical effects tend to appear when dealing with "lows" (low temperatures, small distances), while relativistic effects tend to appear when dealing with "highs" (high velocities, large distances), the "middles" being classical behavior. For example, when analyzing the behavior of a gas at room temperature, most phenomena will involve the (classical) Maxwell–Boltzmann distribution. However, near absolute zero, the Maxwell–Boltzmann distribution fails to account for the observed behavior of the gas, and the (modern) Fermi–Dirac or Bose–Einstein distributions have to be used instead. Very often, it is possible to find – or "retrieve" – the classical behavior from the modern description by analyzing the modern description at low speeds and large distances (by taking a limit, or by making an approximation). When doing so, the result is called the classical limit. Hallmarks These are generally considered to be the topics regarded as the "core" of the foundation of modern physics: See also References Notes External links History of physics
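To illustrate the classical-limit remark above with numbers, the three occupation-number distributions can be compared directly. The following Python sketch assumes NumPy; the temperature, chemical potential and energy grid are illustrative choices.

import numpy as np

k_B = 8.617e-5                      # Boltzmann constant, eV/K
T, mu = 300.0, 0.0                  # room temperature and an illustrative chemical potential
E = np.linspace(0.05, 0.5, 10)      # single-particle energies above mu, in eV

x = (E - mu) / (k_B * T)
maxwell_boltzmann = np.exp(-x)
fermi_dirac = 1.0 / (np.exp(x) + 1.0)
bose_einstein = 1.0 / (np.exp(x) - 1.0)

# When (E - mu) >> k_B * T the +1 / -1 terms become negligible and both quantum
# distributions reduce to the classical Maxwell-Boltzmann form.
print(np.abs(fermi_dirac - maxwell_boltzmann) / maxwell_boltzmann)
print(np.abs(bose_einstein - maxwell_boltzmann) / maxwell_boltzmann)

The relative deviations shrink rapidly as the energy grows compared with k_B·T, which is the sense in which the classical behaviour is "retrieved" as a limit of the modern description.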
Naval architecture
Naval architecture, or naval engineering, is an engineering discipline incorporating elements of mechanical, electrical, electronic, software and safety engineering as applied to the engineering design process, shipbuilding, maintenance, and operation of marine vessels and structures. Naval architecture involves basic and applied research, design, development, design evaluation (classification) and calculations during all stages of the life of a marine vehicle. Preliminary design of the vessel, its detailed design, construction, trials, operation and maintenance, launching and dry-docking are the main activities involved. Ship design calculations are also required for ships being modified (by means of conversion, rebuilding, modernization, or repair). Naval architecture also involves formulation of safety regulations and damage-control rules and the approval and certification of ship designs to meet statutory and non-statutory requirements. Main subjects The word "vessel" includes every description of watercraft, mainly ships and boats, but also including non-displacement craft, WIG craft and seaplanes, used or capable of being used as a means of transportation on water. The principal elements of naval architecture are detailed in the following sections. Hydrostatics Hydrostatics concerns the conditions to which the vessel is subjected while at rest in water and to its ability to remain afloat. This involves computing buoyancy, displacement, and other hydrostatic properties such as trim (the measure of the longitudinal inclination of the vessel) and stability (the ability of a vessel to restore itself to an upright position after being inclined by wind, sea, or loading conditions). Hydrodynamics Hydrodynamics concerns the flow of water around the ship's hull, bow, and stern, and over bodies such as propeller blades or rudder, or through thruster tunnels. Ship resistance and propulsion concern resistance towards motion in water primarily caused due to flow of water around the hull. Powering calculation is done based on this. Propulsion is used to move the vessel through water using propellers, thrusters, water jets, sails etc. Engine types are mainly internal combustion. Some vessels are electrically powered using nuclear or solar energy. Ship motions involves motions of the vessel in seaway and its responses in waves and wind. Controllability (maneuvering) involves controlling and maintaining position and direction of the vessel. Flotation and stability While atop a liquid surface a floating body has 6 degrees of freedom in its movements, these are categorized in either translation or rotation. Translation Sway: transverse Surge: fore and aft Heave: vertical Rotation Yaw: about a vertical axis Pitch or trim: about a transverse axis Roll or heel: about a fore and aft axis Longitudinal stability for longitudinal inclinations, the stability depends upon the distance between the center of gravity and the longitudinal meta-center. In other words, the basis in which the ship maintains its center of gravity is its distance set equally apart from both the aft and forward section of the ship. While a body floats on a liquid surface it still encounters the force of gravity pushing down on it. In order to stay afloat and avoid sinking there is an opposed force acting against the body known as the hydrostatic pressures. The forces acting on the body must be of the same magnitude and same line of motion in order to maintain the body at equilibrium. 
This description of equilibrium is only present when a freely floating body is in still water, when other conditions are present the magnitude of which these forces shifts drastically creating the swaying motion of the body. The buoyancy force is equal to the weight of the body, in other words, the mass of the body is equal to the mass of the water displaced by the body. This adds an upward force to the body by the amount of surface area times the area displaced in order to create an equilibrium between the surface of the body and the surface of the water. The stability of a ship under most conditions is able to overcome any form or restriction or resistance encountered in rough seas; however, ships have undesirable roll characteristics when the balance of oscillations in roll is two times that of oscillations in heave, thus causing the ship to capsize. Structures Structures involves selection of material of construction, structural analysis of global and local strength of the vessel, vibration of the structural components and structural responses of the vessel during motions in seaway. Depending on type of ship, the structure and design will vary in what material to use as well as how much of it. Some ships are made from glass reinforced plastics but the vast majority are steel with possibly some aluminium in the superstructure. The complete structure of the ship is designed with panels shaped in a rectangular form consisting of steel plating supported on four edges. Combined in a large surface area the Grillages create the hull of the ship, deck, and bulkheads while still providing mutual support of the frames. Though the structure of the ship is sturdy enough to hold itself together the main force it has to overcome is longitudinal bending creating a strain against its hull, its structure must be designed so that the material is disposed as much forward and aft as possible. The principal longitudinal elements are the deck, shell plating, inner bottom all of which are in the form of grillages, and additional longitudinal stretching to these. The dimensions of the ship are in order to create enough spacing between the stiffeners in prevention of buckling. Warships have used a longitudinal system of stiffening that many modern commercial vessels have adopted. This system was widely used in early merchant ships such as the SS Great Eastern, but later shifted to transversely framed structure another concept in ship hull design that proved more practical. This system was later implemented on modern vessels such as tankers because of its popularity and was then named the Isherwood System. The arrangement of the Isherwood system consists of stiffening decks both side and bottom by longitudinal members, they are separated enough so they have the same distance between them as the frames and beams. This system works by spacing out the transverse members that support the longitudinal by about 3 or 4 meters, with the wide spacing this causes the traverse strength needed by displacing the amount of force the bulkheads provide. Arrangements Arrangements involves concept design, layout and access, fire protection, allocation of spaces, ergonomics and capacity. Construction Construction depends on the material used. When steel or aluminium is used this involves welding of the plates and profiles after rolling, marking, cutting and bending as per the structural design drawings or models, followed by erection and launching. 
Other joining techniques are used for other materials like fibre reinforced plastic and glass-reinforced plastic. The process of construction is thought-out cautiously while considering all factors like safety, strength of structure, hydrodynamics, and ship arrangement. Each factor considered presents a new option for materials to consider as well as ship orientation. When the strength of the structure is considered the acts of ship collision are considered in the way that the ships structure is altered. Therefore, the properties of materials are considered carefully as applied material on the struck ship has elastic properties, the energy absorbed by the ship being struck is then deflected in the opposite direction, so both ships go through the process of rebounding to prevent further damage. Science and craft Traditionally, naval architecture has been more craft than science. The suitability of a vessel's shape was judged by looking at a half-model of a vessel or a prototype. Ungainly shapes or abrupt transitions were frowned on as being flawed. This included rigging, deck arrangements, and even fixtures. Subjective descriptors such as ungainly, full, and fine were used as a substitute for the more precise terms used today. A vessel was, and still is described as having a ‘fair’ shape. The term ‘fair’ is meant to denote not only a smooth transition from fore to aft but also a shape that was ‘right.’ Determining what is ‘right’ in a particular situation in the absence of definitive supporting analysis encompasses the art of naval architecture to this day. Modern low-cost digital computers and dedicated software, combined with extensive research to correlate full-scale, towing tank and computational data, have enabled naval architects to more accurately predict the performance of a marine vehicle. These tools are used for static stability (intact and damaged), dynamic stability, resistance, powering, hull development, structural analysis, green water modelling, and slamming analysis. Data are regularly shared in international conferences sponsored by RINA, Society of Naval Architects and Marine Engineers (SNAME) and others. Computational Fluid Dynamics is being applied to predict the response of a floating body in a random sea. The naval architect Due to the complexity associated with operating in a marine environment, naval architecture is a co-operative effort between groups of technically skilled individuals who are specialists in particular fields, often coordinated by a lead naval architect. This inherent complexity also means that the analytical tools available are much less evolved than those for designing aircraft, cars and even spacecraft. This is due primarily to the paucity of data on the environment the marine vehicle is required to work in and the complexity of the interaction of waves and wind on a marine structure. 
A naval architect is an engineer who is responsible for the design, classification, survey, construction, and/or repair of ships, boats, other marine vessels, and offshore structures, both commercial and military, including:
Merchant ships – oil tankers, gas tankers, cargo ships, bulk carriers, container ships
Passenger/vehicle ferries, cruise ships
Warships – frigates, destroyers, aircraft carriers, amphibious ships
Submarines and underwater vehicles
Icebreakers
High speed craft – hovercraft, multi-hull ships, hydrofoil craft
Workboats – barges, fishing boats, anchor handling tug supply vessels, platform supply vessels, tug boats, pilot vessels, rescue craft
Yachts, power boats, and other recreational watercraft
Offshore platforms and subsea developments
Some of these vessels are amongst the largest (such as supertankers), most complex (such as aircraft carriers), and most highly valued movable structures produced by mankind. They are typically the most efficient method of transporting the world's raw materials and products. Modern engineering on this scale is essentially a team activity conducted by specialists in their respective fields and disciplines. Naval architects integrate these activities. This demanding leadership role requires managerial qualities and the ability to bring together the often-conflicting demands of the various design constraints to produce a product which is fit for the purpose. In addition to this leadership role, a naval architect also has a specialist function in ensuring that a safe, economic, environmentally sound and seaworthy design is produced. To undertake all these tasks, a naval architect must have an understanding of many branches of engineering and must be at the forefront of high-technology areas. He or she must be able to effectively utilize the services provided by scientists, lawyers, accountants, and business people of many kinds. Naval architects typically work for shipyards, ship owners, design firms and consultancies, equipment manufacturers, classification societies, regulatory bodies (Admiralty law), navies, and governments. A small number of naval architects also work in education; only five universities in the United States have accredited Naval Architecture & Marine Engineering programs. The United States Naval Academy is home to one of the most knowledgeable professors of Naval Architecture, CAPT. Michael Bito, USN.
See also
References
Further reading
Paasch, H. Dictionary of Naval Terms, from Keel to Truck. London: G. Philip & Son, 1908.
Engineering disciplines Marine occupations Shipbuilding Nautical terminology
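To connect the flotation and stability discussion earlier in this article to an actual calculation, here is a small Python sketch of the initial transverse stability of a box-shaped barge. The dimensions and centre-of-gravity height are illustrative, and the relations KB ≈ T/2 and BM = I/∇ are the usual box-barge textbook approximations, not figures from this article.

def box_barge_metacentric_height(length, beam, draft, kg):
    """Initial transverse metacentric height GM = KB + BM - KG for a
    box-shaped barge floating upright in still water (all lengths in metres).
    KB: height of the centre of buoyancy above the keel (~ draft/2 for a box).
    BM: metacentric radius I / V, with I the waterplane second moment of area."""
    kb = draft / 2.0
    waterplane_inertia = length * beam ** 3 / 12.0   # about the fore-and-aft axis
    displaced_volume = length * beam * draft
    bm = waterplane_inertia / displaced_volume
    return kb + bm - kg

# Illustrative barge: 60 m long, 10 m beam, 3 m draft, centre of gravity 3.5 m above keel.
gm = box_barge_metacentric_height(length=60.0, beam=10.0, draft=3.0, kg=3.5)
print(gm)   # about 0.78 m; positive GM indicates initial stability in roll

A positive GM means the metacentre lies above the centre of gravity, so a small heel produces a righting moment that returns the vessel to the upright position.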
Thermoeconomics
Thermoeconomics, also referred to as biophysical economics, is a school of heterodox economics that applies the laws of statistical mechanics to economic theory. Thermoeconomics can be thought of as the statistical physics of economic value and is a subfield of econophysics. It is the study of the ways and means by which human societies procure and use energy and other biological and physical resources to produce, distribute, consume and exchange goods and services, while generating various types of waste and environmental impacts. Biophysical economics builds on both social sciences and natural sciences to overcome some of the most fundamental limitations and blind spots of conventional economics. It makes it possible to understand some key requirements and framework conditions for economic growth, as well as related constraints and boundaries. Thermodynamics "Rien ne se perd, rien ne se crée, tout se transforme" "Nothing is lost, nothing is created, everything is transformed." -Antoine Lavoisier, one of the fathers of chemistryThermoeconomists maintain that human economic systems can be modeled as thermodynamic systems. Thermoeconomists argue that economic systems always involve matter, energy, entropy, and information. Then, based on this premise, theoretical economic analogs of the first and second laws of thermodynamics are developed. The global economy is viewed as an open system. Moreover, many economic activities result in the formation of structures. Thermoeconomics applies the statistical mechanics of non-equilibrium thermodynamics to model these activities. In thermodynamic terminology, human economic activity may be described as a dissipative system, which flourishes by consuming free energy in transformations and exchange of resources, goods, and services. Energy Return on Investment Thermoeconomics is based on the proposition that the role of energy in biological evolution should be defined and understood not through the second law of thermodynamics but in terms of such economic criteria as productivity, efficiency, and especially the costs and benefits (or profitability) of the various mechanisms for capturing and utilizing available energy to build biomass and do work. Peak oil Political Implications "[T]he escalation of social protest and political instability around the world is causally related to the unstoppable thermodynamics of global hydrocarbon energy decline and its interconnected environmental and economic consequences." Energy Backed Credit Under this analysis, a reduction of GDP in advanced economies is now likely: when we can no longer access consumption via adding credit, and with a shift towards lower quality and more costly energy and resources. The 20th  century experienced increasing energy quality and decreasing energy prices. The 21st century will be a story of decreasing energy quality and increasing energy cost. See also Econophysics Ecodynamics Kinetic exchange models of markets Systems ecology Ecological economics Nicholas Georgescu-Roegen Energy quality Limits to growth Myron Tribus References Further reading Chen, Jing (2015). The Unity of Science and Economics: A New Foundation of Economic Theory: Springer. Charles A.S. Hall, Kent Klitgaard (2018). Energy and the Wealth of Nations: An Introduction to Biophysical Economics: Springer. Jean-Marc Jancovici, Christopher Blain (2020). World Without End. Europe Comics N.J. Hagens (2019). Economics for the future – Beyond the superorganism. Science Direct. Nafeez Ahmed (2017). 
Failing States, Collapsing Systems: BioPhysical Triggers of Political Violence. Springer Briefs in Energy.
Smil, Vaclav (2018). Energy and Civilization: A History. MIT Press.
External links
Yuri Yegorov, "Econo-physics: A Perspective of Matching Two Sciences", Evol. Inst. Econ. Rev. 4(1): 143–170 (2007).
Borisas Cimbleris (1998). "Economy and Thermodynamics".
Schwartzman, David (2007). "The Limits to Entropy: the Continuing Misuse of Thermodynamics in Environmental and Marxist Theory", in press, Science & Society.
Saslow, Wayne M. (1999). "An Economic Analogy to Thermodynamics", American Association of Physics Teachers.
Biophysical Economics Institute
Schools of economic thought Industrial ecology Ecological economics
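As a concrete handle on the Energy Return on Investment idea discussed earlier in this article, here is a minimal Python sketch. The figures used are purely illustrative and are not taken from the sources listed above.

def energy_return_on_investment(energy_delivered, energy_invested):
    """EROI = usable energy delivered / energy spent obtaining it.
    Both arguments must be in the same units (e.g. joules)."""
    return energy_delivered / energy_invested

def net_energy_fraction(eroi):
    """Fraction of gross energy output left for the rest of the economy
    after paying the energy cost of extraction: 1 - 1/EROI."""
    return 1.0 - 1.0 / eroi

for eroi in (50.0, 20.0, 10.0, 5.0, 2.0):
    # As EROI declines, the share of output available for other economic
    # activity shrinks rapidly -- the so-called "net energy cliff".
    print(eroi, net_energy_fraction(eroi))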
Thermochemistry
Thermochemistry is the study of the heat energy which is associated with chemical reactions and/or phase changes such as melting and boiling. A reaction may release or absorb energy, and a phase change may do the same. Thermochemistry focuses on the energy exchange between a system and its surroundings in the form of heat. Thermochemistry is useful in predicting reactant and product quantities throughout the course of a given reaction. In combination with entropy determinations, it is also used to predict whether a reaction is spontaneous or non-spontaneous, favorable or unfavorable. Endothermic reactions absorb heat, while exothermic reactions release heat. Thermochemistry coalesces the concepts of thermodynamics with the concept of energy in the form of chemical bonds. The subject commonly includes calculations of such quantities as heat capacity, heat of combustion, heat of formation, enthalpy, entropy, and free energy. Thermochemistry is one part of the broader field of chemical thermodynamics, which deals with the exchange of all forms of energy between system and surroundings, including not only heat but also various forms of work, as well the exchange of matter. When all forms of energy are considered, the concepts of exothermic and endothermic reactions are generalized to exergonic reactions and endergonic reactions. History Thermochemistry rests on two generalizations. Stated in modern terms, they are as follows: Lavoisier and Laplace's law (1780): The energy change accompanying any transformation is equal and opposite to energy change accompanying the reverse process. Hess' law of constant heat summation (1840): The energy change accompanying any transformation is the same whether the process occurs in one step or many. These statements preceded the first law of thermodynamics (1845) and helped in its formulation. Thermochemistry also involves the measurement of the latent heat of phase transitions. Joseph Black had already introduced the concept of latent heat in 1761, based on the observation that heating ice at its melting point did not raise the temperature but instead caused some ice to melt. Gustav Kirchhoff showed in 1858 that the variation of the heat of reaction is given by the difference in heat capacity between products and reactants: dΔH / dT = ΔCp. Integration of this equation permits the evaluation of the heat of reaction at one temperature from measurements at another temperature. Calorimetry The measurement of heat changes is performed using calorimetry, usually an enclosed chamber within which the change to be examined occurs. The temperature of the chamber is monitored either using a thermometer or thermocouple, and the temperature plotted against time to give a graph from which fundamental quantities can be calculated. Modern calorimeters are frequently supplied with automatic devices to provide a quick read-out of information, one example being the differential scanning calorimeter. Systems Several thermodynamic definitions are very useful in thermochemistry. A system is the specific portion of the universe that is being studied. Everything outside the system is considered the surroundings or environment. 
A system may be:
a (completely) isolated system, which can exchange neither energy nor matter with the surroundings, such as an insulated bomb calorimeter
a thermally isolated system, which can exchange mechanical work but not heat or matter, such as an insulated closed piston or balloon
a mechanically isolated system, which can exchange heat but not mechanical work or matter, such as an uninsulated bomb calorimeter
a closed system, which can exchange energy but not matter, such as an uninsulated closed piston or balloon
an open system, which can exchange both matter and energy with the surroundings, such as a pot of boiling water
Processes
A system undergoes a process when one or more of its properties changes. A process relates to the change of state. An isothermal (same-temperature) process occurs when the temperature of the system remains constant. An isobaric (same-pressure) process occurs when the pressure of the system remains constant. A process is adiabatic when no heat exchange occurs.
See also
Calorimetry Chemical kinetics Cryochemistry Differential scanning calorimetry Isodesmic reaction Important publications in thermochemistry Photoelectron photoion coincidence spectroscopy Principle of maximum work Reaction Calorimeter Thermodynamic databases for pure substances Thermodynamics Thomsen-Berthelot principle Julius Thomsen
References
External links
Physical chemistry Branches of thermodynamics
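As a worked illustration of Kirchhoff's relation dΔH/dT = ΔCp quoted in the History section above, the following Python sketch shifts a reaction enthalpy from one temperature to another. The numerical inputs are illustrative rather than data from this article, and ΔCp is taken as constant over the interval.

def delta_h_at_temperature(delta_h_ref, t_ref, t_target, delta_cp):
    """Kirchhoff's law with a temperature-independent heat-capacity change:
    dH(T2) = dH(T1) + dCp * (T2 - T1).
    delta_h_ref in kJ/mol, temperatures in K, delta_cp in kJ/(mol*K)."""
    return delta_h_ref + delta_cp * (t_target - t_ref)

# Illustrative numbers: an exothermic reaction with dH = -92.0 kJ/mol at 298 K
# and a heat-capacity change of -0.040 kJ/(mol*K) between products and reactants.
dh_500 = delta_h_at_temperature(delta_h_ref=-92.0, t_ref=298.0,
                                t_target=500.0, delta_cp=-0.040)
print(dh_500)   # -100.08 kJ/mol: the reaction is slightly more exothermic at 500 K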
Vector fields in cylindrical and spherical coordinates
Note: This page uses common physics notation for spherical coordinates, in which is the angle between the z axis and the radius vector connecting the origin to the point in question, while is the angle between the projection of the radius vector onto the x-y plane and the x axis. Several other definitions are in use, and so care must be taken in comparing different sources. Cylindrical coordinate system Vector fields Vectors are defined in cylindrical coordinates by (ρ, φ, z), where ρ is the length of the vector projected onto the xy-plane, φ is the angle between the projection of the vector onto the xy-plane (i.e. ρ) and the positive x-axis (0 ≤ φ < 2π), z is the regular z-coordinate. (ρ, φ, z) is given in Cartesian coordinates by: or inversely by: Any vector field can be written in terms of the unit vectors as: The cylindrical unit vectors are related to the Cartesian unit vectors by: Note: the matrix is an orthogonal matrix, that is, its inverse is simply its transpose. Time derivative of a vector field To find out how the vector field A changes in time, the time derivatives should be calculated. For this purpose Newton's notation will be used for the time derivative. In Cartesian coordinates this is simply: However, in cylindrical coordinates this becomes: The time derivatives of the unit vectors are needed. They are given by: So the time derivative simplifies to: Second time derivative of a vector field The second time derivative is of interest in physics, as it is found in equations of motion for classical mechanical systems. The second time derivative of a vector field in cylindrical coordinates is given by: To understand this expression, A is substituted for P, where P is the vector (ρ, φ, z). This means that . After substituting, the result is given: In mechanics, the terms of this expression are called: Spherical coordinate system Vector fields Vectors are defined in spherical coordinates by (r, θ, φ), where r is the length of the vector, θ is the angle between the positive Z-axis and the vector in question (0 ≤ θ ≤ π), and φ is the angle between the projection of the vector onto the xy-plane and the positive X-axis (0 ≤ φ < 2π). (r, θ, φ) is given in Cartesian coordinates by: or inversely by: Any vector field can be written in terms of the unit vectors as: The spherical unit vectors are related to the Cartesian unit vectors by: Note: the matrix is an orthogonal matrix, that is, its inverse is simply its transpose. The Cartesian unit vectors are thus related to the spherical unit vectors by: Time derivative of a vector field To find out how the vector field A changes in time, the time derivatives should be calculated. In Cartesian coordinates this is simply: However, in spherical coordinates this becomes: The time derivatives of the unit vectors are needed. They are given by: Thus the time derivative becomes: See also Del in cylindrical and spherical coordinates for the specification of gradient, divergence, curl, and Laplacian in various coordinate systems. References Vector calculus Coordinate systems
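As a check on the unit-vector relations summarised above, the cylindrical case can be verified symbolically in a few lines. The following Python sketch assumes SymPy; the spherical case can be treated in exactly the same way.

import sympy as sp

t = sp.symbols('t')
rho, phi, z = (sp.Function(name)(t) for name in ('rho', 'phi', 'z'))

# Cylindrical unit vectors written in the fixed Cartesian basis.
rho_hat = sp.Matrix([sp.cos(phi), sp.sin(phi), 0])
phi_hat = sp.Matrix([-sp.sin(phi), sp.cos(phi), 0])
z_hat = sp.Matrix([0, 0, 1])

# Expected relations: d(rho_hat)/dt = phi' * phi_hat and d(phi_hat)/dt = -phi' * rho_hat.
assert rho_hat.diff(t) - sp.diff(phi, t) * phi_hat == sp.zeros(3, 1)
assert phi_hat.diff(t) + sp.diff(phi, t) * rho_hat == sp.zeros(3, 1)
assert z_hat.diff(t) == sp.zeros(3, 1)

# Second time derivative of the position vector P = rho * rho_hat + z * z_hat,
# projected back onto the cylindrical basis.
accel = (rho * rho_hat + z * z_hat).diff(t, 2)
radial = sp.simplify(accel.dot(rho_hat))       # -> rho'' - rho * phi'**2
transverse = sp.simplify(accel.dot(phi_hat))   # -> rho * phi'' + 2 * rho' * phi'
print(radial)
print(transverse)

The printed projections reproduce the familiar centripetal and Coriolis-type terms that appear in the second time derivative discussed in the cylindrical-coordinates section.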