question | context | answer |
---|---|---|
what's the melting temperature of diamond? | <p> diamond is thermodynamically stable at high pressures and temperatures, with the phase transition from graphite occurring at greater temperatures as the pressure increases. thus, underneath continents it becomes stable at temperatures of 950 degrees celsius and pressures of 4.5 gigapascals, corresponding to depths of 150 kilometers or greater. in subduction zones, which are colder, it becomes stable at temperatures of 800 degrees c and pressures of 3.5 gigapascals. at depths greater than 240 km, iron-nickel metal phases are present and carbon is likely to be either dissolved in them or in the form of carbides. thus, the deeper origin of some diamonds may reflect unusual growth environments.
<p> bullet::::2. diamond is the best natural conductor of heat; it even feels cold to the touch. its thermal conductivity (2,200 w/m•k) is five times greater than the most conductive metal (ag at 429); 300 times higher than the least conductive metal (pu at 6.74); and nearly 4,000 times that of water (0.58) and 100,000 times that of air (0.0224). this high thermal conductivity is used by jewelers and gemologists to separate diamonds from imitations.
<p> typically, diamond is formed by heating carbon at very high temperatures (5,000 k) and pressures (120,000 atmospheres). however, narayan and his group used kinetics and time control of pulsed nanosecond laser melting to overcome thermodynamic limitations and create a supercooled state that enables conversion of carbon into q-carbon and diamond at ambient temperatures and pressures. the process uses a high-powered laser pulse, similar to that used in eye surgery, lasting approximately 200 nanoseconds. this raises the temperature of the carbon to approximately 4,000 k (3,700 °c; 6,700 °f) at atmospheric pressure. the resulting liquid is then quenched (rapidly cooled); it is this stage that is the source of the "q" in the material's name. the degree of supercooling below the melting temperature determines the new phase of carbon, whether q-carbon or diamond. higher rates of cooling result in q-carbon, whereas diamond tends to form when the free energy of the carbon liquid equals that of diamond.
<p> above the triple point, the melting point of diamond increases slowly with increasing pressure; but at pressures of hundreds of gpa, it decreases. at high pressures, silicon and germanium have a bc8 body-centered cubic crystal structure, and a similar structure is predicted for carbon at high pressures. at , the transition is predicted to occur at .
<p> diamond is the hardest known material to date, with a vickers hardness in the range of 70–150 gpa. diamond demonstrates both high thermal conductivity and electrically insulating properties and much attention has been put into finding practical applications of this material. however, diamond has several limitations for mass industrial application, including its high cost and oxidation at temperatures above 800 °c. in addition, diamond dissolves in iron and forms iron carbides at high temperatures and therefore is inefficient in cutting ferrous materials including steel. therefore, recent research of superhard materials has been focusing on compounds which would be thermally and chemically more stable than pure diamond.
<p> an attempt to predict the bulk melting point of crystalline materials was first made in 1910 by frederick lindemann. the idea behind the theory was the observation that the average amplitude of thermal vibrations increases with increasing temperature. melting initiates when the amplitude of vibration becomes large enough for adjacent atoms to partly occupy the same space. the lindemann criterion states that melting is expected when the vibration root mean square amplitude exceeds a threshold value.
<p> the chemical element with the highest melting point is tungsten, at ; this property makes tungsten excellent for use as filaments in light bulbs. the often-cited carbon does not melt at ambient pressure but sublimes at about ; a liquid phase only exists above pressures of and estimated (see ). tantalum hafnium carbide (tahfc) is a refractory compound with a very high melting point of 4215 k (3942 °c, 7128 °f). at the other end of the scale, helium does not freeze at all at normal pressure even at temperatures arbitrarily close to absolute zero; a pressure of more than twenty times normal atmospheric pressure is necessary. | Diamonds are made out of carbon, so the melting point should be around the melting point of carbon, 3,552 °C. Edit: Refined from the CRC Handbook of Chemistry and Physics: Carbon (diamond), C, 7782-40-3: 4440 °C (12.4 GPa). It has to be at 12 GPa for that melting point, apparently. |
does the volume of excess solvent affect the rate at which a solute is dissolved? | <p> when a solute dissolves, it may form several species in the solution. for example, an aqueous suspension of ferrous hydroxide, , will contain the series [(oh)] as well as other species. furthermore, the solubility of ferrous hydroxide and the composition of its soluble components depend on ph. in general, solubility in the solvent phase can be given only for a specific solute that is thermodynamically stable, and the value of the solubility will include all the species in the solution (in the example above, all the iron-containing complexes).
<p> a solute dissolves in a solvent when it forms favorable interactions with the solvent. this dissolving process all depends upon the free energy change of both solute and solvent. the free energy of solvation is a combination of several factors.
<p> suggests that pressure and volume can also be changed to force a system into a supersaturated state. if the volume of solvent is decreased, the concentration of the solute can be above the saturation point and thus create a supersaturated solution. the decrease in volume is most commonly generated through evaporation. similarly, an increase in pressure can drive a solution to a supersaturated state. all three of these mechanisms rely on the fact that the conditions of the solution can be changed quicker than the solute can precipitate or crystallize out.
<p> addition of solute to form a solution stabilizes the solvent in the liquid phase, and lowers the solvent chemical potential so that solvent molecules have less tendency to move to the gas or solid phases. as a result, liquid solutions slightly above the solvent boiling point at a given pressure become stable, which means that the boiling point increases. similarly, liquid solutions slightly below the solvent freezing point become stable meaning that the freezing point decreases. both the boiling point elevation and the freezing point depression are proportional to the lowering of vapour pressure in a dilute solution.
<p> the solubility of a given solute in a given solvent typically depends on temperature. depending on the nature of the solute the solubility may increase or decrease with temperature. for most solids and liquids, their solubility increases with temperature. in liquid water at high temperatures, (e.g., that approaching the critical temperature), the solubility of ionic solutes tends to decrease due to the change of properties and structure of liquid water; the lower dielectric constant results in a less polar solvent.
<p> maroncelli’s research seeks to develop a fundamental understanding of the molecular nature of solvation and how it affects chemical reactions taking place in solution. solvation involves the interactions between dissolved molecules (solutes) and molecules of the solvent. favorable arrangements of solvent molecules around the solute lower its energy, which leads to dissolution. the interactions involved are typically very rapid, taking place in as short a time as 1 ps (10^-12 s). because the key steps in most chemical reactions also occur on these fast time scales, time-dependent aspects of solvation partly determine how a solvent influences the rate and course of chemical reactions. maroncelli uses ultrafast spectroscopic techniques in combination with modern computational-chemistry methods to observe, analyze, and predict the solvation process and its impact on the chemical steps that occur during the particular reaction being investigated.
<p> what complicates the effect is that a solute can exist in a different concentration at the surface of a solvent than in its bulk. this difference varies from one solute–solvent combination to another. | Yes. Diffusion is driven by a chemical potential gradient, and the larger the gradient, the faster the diffusion. With excess solvent, compared with less solvent, the concentration of solute in solution stays low, so the chemical potential gradient remains high. Since the gradient is high, the rate of mass transfer from the solid phase to the solution phase will be greater. |
why is a normal curve shaped like it is? | <p> the shape of the curve results from the interaction of bound oxygen molecules with incoming molecules. the binding of the first molecule is difficult. however, this facilitates the binding of the second, third and fourth, this is due to the induced conformational change in the structure of the hemoglobin molecule induced by the binding of an oxygen molecule.
<p> if a smooth simple closed curve undergoes the curve-shortening flow, it remains smoothly embedded without self-intersections. it will eventually become convex, and once it does so it will remain convex. after this time, all points of the curve will move inwards, and the shape of the curve will converge to a circle as the whole curve shrinks to a single point. this behavior is sometimes summarized by saying that every simple closed curve shrinks to a "round point".
<p> a curve is called simple if it does not intersect itself. a closed regular plane simple curve "c" is convex "if and only if" its curvature is either always non-negative or always non-positive—i.e., if and only if the "turning angle" (the angle of the tangent to the curve) is a weakly monotone function of the parametrization of the curve.
<p> a curve is called a general helix or cylindrical helix if its tangent makes a constant angle with a fixed line in space. a curve is a general helix if and only if the ratio of curvature to torsion is constant.
<p> the definition of a curve includes figures that can hardly be called curves in common usage. for example, the image of a simple curve can cover a square in the plane (space-filling curve) and thus have a positive area. fractal curves can have properties that are strange for the common sense. for example, a fractal curve can have a hausdorff dimension bigger than one (see koch snowflake) and even a positive area. an example is the dragon curve, which has many other unusual properties.
<p> any shape that stays self-similar but shrinks under the mean curvature flow forms an ancient solution to the flow, one that can be extrapolated backwards for all time. however, the reverse is not true. in the same paper in which he published the angenent torus, angenent also described the angenent ovals; these are not self-similar, but they are the only simple closed curves in the plane, other than a circle, that give ancient solutions to the curve-shortening flow.
<p> a curve is called equichordal when it has an equichordal point. such a curve may be constructed as the pedal curve of a curve of constant width. for instance, the pedal curve of a circle is either another circle (when the center of the circle is the pedal point) or a limaçon; both are equichordal curves. | Are you asking why f(x) = exp[-x^(2)] is shaped the way it is, or why lots of random variables in nature seem to be distributed like Gaussians? |
is there evidence that mankind has already polluted the deepest parts of the ocean? | <p> bullet::::- 15 november – a study led by newcastle university finds that sea life in some of the deepest parts of the pacific ocean – as far down as 11 km (7 miles) – is contaminated with plastic pollution.
<p> a recent survey of global ocean health concluded that all parts of the ocean have been impacted by human development and that 41 percent has been fouled with human polluted runoff, overfishing, and other abuses. pollution is not easy to fix, because pollution sources are so dispersed, and are built into the economic systems we depend on.
<p> with increasing ocean exploration over the last two decades has come the realisation that humans have had an extensive impact on the world’s oceans, not just close to our shores, but also reaching down into the deep sea. from destructive fishing practices and exploitation of mineral resources to pollution and litter, evidence of human impact can be found in virtually all deep-sea ecosystems. in response, the international community has set a series of ambitious goals aimed at protecting the marine environment and its resources for future generations. three of these initiatives, decided on by world leaders during the 2002 world summit on sustainable development (johannesburg), are to achieve a significant reduction in biodiversity loss by 2010, to introduce an ecosystems approach to marine resource assessment and management by 2010, and to designate a network of marine protected areas by 2012. a crucial requirement for implementing these is the availability of high-quality scientific data and knowledge, as well as effective science-policy interfaces to ensure the policy relevance of research and to enable the rapid translation of scientific information into science policy.
<p> these data suggest human occupation when the sea level was lower than present, and that submerged archaeological sites could occur along the paleocoastline beyond the current shorelines of haida gwaii (fedje & christensen, 1999) and southeast alaska.
<p> if deep-sea ocean sequestration becomes a common practice, long term effects will continue to be investigated to predict future scenarios of deep sea impacts by carbon dioxide. ocean sequestration of liquid carbon dioxide would not only impact deep-sea ecosystems, but in the long-run would begin to affect surface-water species. it is estimated that organisms not fit for high carbon dioxide levels will begin to experience permanent effects at levels of 400/500ppm of carbon dioxide and/or shifts of 0.1-0.3 units in ph. these levels of carbon dioxide are predicted to be met solely as a result of atmospheric carbon dioxide acidifying the surface waters over a matter of a century, without considering ocean sequestration effects.
<p> however, the deep sea is increasingly threatened by humans: most of this deep-ocean frontier lies within europe's exclusive economic zone (eez) and has significant potential for the exploitation of biological, energy, and mineral resources. research and exploration over the last two decades has shown clear signs of direct and indirect anthropogenic impacts in the deep sea, resulting from such activities as overfishing, littering and pollution. this raises concerns because deep-sea processes and ecosystems are not only important for the marine web of life, but also fundamentally contribute to the global biogeochemical cycle.
<p> ocean sequestration in deep sea sediments has the potential to impact deep sea life. the chemical and physical composition of the deep sea does not undergo changes in the way that surface waters do. due to its limited contact with the atmosphere, most organisms have evolved with very little physical and chemical disturbance and exposed to minimal levels of carbon dioxide. most of their energy is obtained from feeding off of particulate matter that descends from the surface water of the ocean and its ecosystems. deep sea ecosystems do not have rapid reproduction rates nor give birth to many offspring because of their limited access to oxygen and nutrients. in particular, species that inhabit the 2000-3000m deep range of the ocean have small, diverse populations. introducing lethal amounts of carbon dioxide into the environment of such a species can have a serious impact on the population size and will take longer to recover relative to surface water species. | Polluted in what way? Oceanic acidification has the potential to significantly change the chemical make-up of our oceans. And because our oceans act as a buffer for atmospheric CO2 levels, it will not only be our oceans that are affected. It's a problem and is most definitely being facilitated by humankind. |
do birds ping? | <p> their most distinctive behaviour is the beating and whistling sound their wings make when they take off. this is most likely to draw the attention of predators to birds on the wing, and away from any birds remaining on the ground, and as an alarm call to other pigeons. when the birds land, their tails tilt upwards and the flight patterns are similar to those of the spotted turtle dove. they are generally solitary. although they can be seen in pairs, they can be highly social and tend to be seen in flocks. they are highly gregarious birds when in contact with humans.
<p> if startled, the crested pigeon takes to the air with a distinctive whistling 'call', the source of the noise can be attributed to the way the air rushes over a modified primary feather found on the wings.
<p> its flight is quick, performed by regular beats, with an occasional sharp flick of the wings, characteristic of pigeons in general. it takes off with a loud clattering. it perches well, and in its nuptial display walks along a horizontal branch with swelled neck, lowered wings, and fanned tail. during the display flight the bird climbs, the wings are smartly cracked like a whiplash, and the bird glides down on stiff wings. the common wood pigeon is gregarious, often forming very large flocks outside the breeding season. like many species of pigeon, wood pigeons take advantage of trees and buildings to gain a vantage point over the surrounding area, and their distinctive call means that they are usually heard before they are seen.
<p> the new zealand pigeons make occasional soft "coo" sounds (hence the onomatopoeic names), and their wings make a very distinctive "whooshing" sound as they fly. the bird's flight is also distinctive. birds will often ascend slowly before making impressively steep parabolic dives; it is thought that this behaviour is often associated with nesting, or nest failure.
<p> one or two birds and small flocks are usually found; large flocks are occasionally seen. the pigeon flies swiftly and directly. it plucks fruits from branches in the canopy, and it flies across the sea to search for food. its calls include a rising and repeated "c-wooooohooo" given when the bird is upright, a loud series of descending coos while bobbing up and down, and a high-pitched "crrrrrurrr". in display, it flies up at an angle of 70° and then glides. breeding has been observed from june to september and in march. the nest is built at the end of a branch and made of twigs. one egg is laid.
<p> this bird has an aerial display, which involves flying high in circles, followed by a powerful stoop during which the bird makes a "drumming" sound, caused by vibrations of modified outer tail feathers.
<p> this bird has a spectacular aerial display, which involves flying high in circles, followed by a powerful stoop during which the bird makes a drumming sound, caused by vibrations of modified outer tail feathers. | I know a few bird species that do this (it may not be common across a lot of them): cardinals, blue jays, chickadees, and a variety of songbirds use "contact calls" to locate one another. |
what makes toxic waste, war heads, and other sour candies so sour? | <p> warheads extreme sour hard candy derive their strong sour flavor primarily from malic acid, which is applied as a coating to the outside of the small, hard candies. the intense sour flavor fades after about 5 to 10 seconds, leaving a fairly mild candy that contains the much less sour ascorbic acid and citric acid.
<p> some toxic waste products have a hard, sour exterior and a sour liquid filling. toxic waste candy products are made in brazil, pakistan and spain. the product is distributed in the united states, united kingdom, by newbridge foods of bromborough, wirral, europe, canada, south africa and other countries.
<p> bullet::::2. bitter: it is poisonous, as it contains a high level of cyanic acid. it is not suitable for direct human consumption or for direct feeding to animals. it must be processed into flour, pellets, alcohol, or another derivative.
<p> toxic waste is a line of sour candies owned and marketed by american company candy dynamics inc., which is headquartered in indianapolis, indiana. the products are sold primarily in the united states and canada as well as several international markets such as the united kingdom, south africa and ireland. the toxic waste candy is packed in novelty drum containers, each holding 16 pieces of sour candy which come in five different flavors.
<p> raxacoricofallapatorians are vulnerable to acetic acid, which reacts explosively—and fatally—with their bodies, making slitheen allergic to vinegar, ketchup and coca-cola. one of the raxacoricofallapatorian methods of execution is the lowering of the condemned into a cauldron of acetic acid, which is then heated to boiling. the acidity of the solution is formulated to dissolve the skin, allowing the internal organs to drop into the liquid while the condemned is still alive, reducing them to "soup" in a slow and painful death. in "world war three", when a single slitheen was electrocuted, the effects were transmitted to other slitheen, even those across the city.
<p> malic acid, when added to food products, is denoted by e number e296. malic acid is the source of extreme tartness in united states-produced confectionery, the so-called "extreme candy". it is also used with or in place of the less sour citric acid in sour sweets. these sweets are sometimes labeled with a warning stating that excessive consumption can cause irritation of the mouth. it is approved for use as a food additive in the eu, us and australia and new zealand (where it is listed by its ins number 296).
<p> to render food unpleasant or dangerous to consume, it is denatured by adding a substance known as a denaturant. aversive agents—primarily bitterants and pungent agents—are used to produce an unpleasant flavor. for example, the bitterant denatonium might be added to food used in a laboratory, where such food is not intended for human consumption. a poisonous substance may be added as an even more powerful deterrent. for example, methanol is blended with ethanol to produce denatured alcohol. the addition of methanol, which is poisonous, renders denatured alcohol unfit for consumption, as ingesting denatured alcohol may result in serious injury or death. thus denatured alcohol is not subject to the taxes usually levied on the production and sale of alcoholic beverages. aniline was used to denature colza oil in the 1980s. | Sour is one of the five primary tastes (think like primary colors on a color wheel). Sour is the taste of acid. The more acidic the food, the more sour it will taste. Lemons, vinegar, spoiled milk, vomit, all of these contain significant quantities of acid. Sour candies like Warheads are dusted with a powder of acid (often citric acid) mixed with sugar. This is known as sour sanding. The acid is also why your mouth gets super irritated after eating lots of sour candies or Salt & Vinegar chips. The skin of your mouth is being literally corroded away, and it needs time to heal. |
how much pressure does water put on each inch of the item holding it? | <p> small pressure vessels are normally tested using a water jacket test. the vessel is visually examined for defects and then placed in a container filled with water, and in which the change in volume of the vessel can be measured, usually by monitoring the water level in a calibrated tube. the vessel is then pressurised for a specified period, usually 30 or more seconds, and if specified, the expansion will be measured by reading off the amount of liquid that has been forced into the measuring tube by the volume increase of the pressurised vessel. the vessel is then depressurised, and the permanent volume increase due to plastic deformation while under pressure is measured by comparing the final volume in the measuring tube with the volume before pressurisation. a leak will give a similar result to permanent set, but will be detectable by holding the volume in the pressurised vessel by closing the inlet valve for a period before depressurising, as the pressure will drop steadily during this period if there is a leak. in most cases a permanent set that exceeds the specified maximum will indicate failure. a leak may also be a failure criterion, but it may be that the leak is due to poor sealing of the test equipment. if the vessel fails, it will normally go through a condemning process marking the cylinder as unsafe.
<p> pressure vessels are held together against the gas pressure due to tensile forces within the walls of the container. the normal (tensile) stress in the walls of the container is proportional to the pressure and radius of the vessel and inversely proportional to the thickness of the walls. therefore, pressure vessels are designed to have a thickness proportional to the radius of tank and the pressure of the tank and inversely proportional to the maximum allowed normal stress of the particular material used in the walls of the container.
<p> inches of water, inches of water gauge (iwg or in.w.g.), inches water column (inch wc or just wc), inaq, aq, or inh2o is a non-si unit for pressure. the units are conventionally used for measurement of certain pressure differentials such as small pressure differences across an orifice, or in a pipeline or shaft.
<p> design pressure is the pressure a pressurized item is designed to, and is higher than any expected operating pressures. due to the availability of standard wall thickness materials, many components will have a mawp higher than the required design pressure. for pressure vessels, all pressures are defined as being at highest point of the unit in the operating position, and do not include static head pressure. the equipment designer needs to account for the higher pressures occurring at some components due to static head pressure.
<p> pressure describes the amount of force exerted on a system by the contained and pressurized gas. most compressed gases will not exceed 2,000 to 2,640 pounds per square inch gage (psig), but some can reach pressures of 6,000 psig. the system's weakest point determines the pressure limit, so any parts weakened by heat, corrosion, or stress may potentially lower the maximum pressure of the system or cause the vessel to rupture. many times this is at the point of welds.
<p> the operating pressure is commonly up to 7 bars for metal. the improvement of the technology makes it possible to remove large amount of moisture at 16 bar of pressure and operate at 30 bars. however, the pressure is 4-5 bars for wood or plastic frames. if the concentration of solids in the feed tank increase until the solid particles are attached to each other. it is possible to install moving blades in the filter press to reduce resistance to flow of liquid through the slurry.
<p> millimeters, water gauge, also known as a millimetre of water (us spelling millimeter of water) or millimetres water column and abbreviated to mmwg, mmh2o or mmwc, respectively, is a less commonly used unit of pressure. it may be defined as the pressure exerted by a column of water of 1 mm in height at 4 °c (temperature of maximum density) at the standard acceleration of gravity, so that = × × 1 mm = ≈ , but conventionally a nominal maximum water density of is used, giving . | It would depend entirely on the depth/amount of the water; pressure in water increases by one atmosphere (1 atm) for every 10 meters of depth. If this is a flood situation we're talking about, though, the flow of water would also increase the pressure, and weight would be more important: 1 L of seawater has a mass of about 1 kg, so it depends on how much those doors would be able to take. |
how is personality formed? | <p> personality development is the relatively enduring pattern of the thoughts, feelings, and behaviours that distinguish individuals from one another. the dominant view in the field of personality psychology today holds that personality emerges early and continues to change in meaningful ways throughout the lifespan.
<p> personality is defined as the characteristic set of behaviors, cognitions, and emotional patterns that evolve from biological and environmental factors. while there is no generally agreed upon definition of personality, most theories focus on motivation and psychological interactions with one's environment. trait-based personality theories, such as those defined by raymond cattell define personality as the traits that predict a person's behavior. on the other hand, more behaviorally based approaches define personality through learning and habits. nevertheless, most theories view personality as relatively stable.
<p> personality can be defined as a dynamic and organized set of personal traits and patterns of behavior. "personality includes attitudes, modes of thought, feelings, impulses, strivings, actions, responses to opportunity and stress and everyday modes of interacting with others." personality style is apparent "when these elements of personality are expressed in a characteristically repeated and dynamic combination."
<p> identity formation, also known as individuation, is the development of the distinct personality of an individual regarded as a persisting entity (known as personal continuity) in a particular stage of life in which individual characteristics are possessed and by which a person is recognized or known (such as the establishment of a reputation). this process defines individuals to others and themselves. pieces of the person's actual identity include a sense of continuity, a sense of uniqueness from others, and a sense of affiliation. identity formation leads to a number of issues of personal identity and an identity where the individual has some sort of comprehension of themselves as a discrete and separate entity. this may be through individuation whereby the undifferentiated individual tends to become unique, or undergoes stages through which differentiated facets of a person's life tend toward becoming a more indivisible whole.
<p> personality is the overall characteristics that a person possesses. all of these characteristics are acquired within a culture. however, when a person changes his or her culture, his or her personality automatically changes because the person learns to follow the norms and values of the new culture, and this, in turn, influences the individual's personal characteristics.
<p> personality also changes through life stages. this may be due to physiological changes associated with development but also experiences that impact behavior. adolescence and young adulthood have been found to be prime periods of personality changes, especially in the domains of extraversion and agreeableness. it has long been believed that personality development is shaped by life experiences that intensify the propensities that led individuals to those experiences in the first place, which is known as the corresponsive principle.
<p> personality can be defined as a set of characteristics or traits that drive individual differences in human behavior. from a biological perspective, these traits can be traced back to brain structures and neural mechanisms. however, this definition and theory of biological basis is not universally accepted. there are many conflicting theories of personality in the fields of psychology, psychiatry, philosophy, and neuroscience. a few examples of this are the nature vs. nurture debate and how the idea of a 'soul' fits into biological theories of personality. | Something I can actually answer! I am on the train at the moment so references will be sparse, but most of the information will come from Funder's 2001 paper. Okay, so there are many different ideas, approaches and factors to take into account, so I will try and outline some of the main approaches and what they believe. There is the behaviourist approach, which believes our personality emerges from our experience and interactions with our environment. This occurs through mechanisms such as classical conditioning, which is where we learn to associate co-occurring stimuli. This can be seen in Pavlov's dog experiments and Watson's (1925) Little Albert experiment. Another mechanism is operant conditioning, proposed by B. F. Skinner; this basically claims we will perform tasks we are rewarded for more often, and ones we are punished for less. Another approach is the biological approach, which claims that our personality is determined by chemicals, hormones and neurotransmitters in the brain. An example of this is serotonin, which, amongst other things, has been linked to happiness and has been effectively harnessed to create effective anti-depressant medications. There is also the evolutionary approach, which posits that we inherit our personality through genes and natural selection. Some evidence does exist for this, such as Loehlin and Nicholas (1976), which displayed behavioural concordance between twins. There is also the socio-cognitive approach, which believes that personality comes from thought-processing styles and social experience. Evidence for this can be seen in Bandura's (1977) Bobo doll experiment, where he taught aggressive behaviour to children through them observing aggressive behaviour. Other theories in this area also include Baldwin's (1999) relational schemas, which claim that our behaviour is determined by our relation to those around us. Another, but contentious, approach is psychodynamics, which is widely known as Freud's area of psychology. This approach believes that personality is formed from developmental stages in early life and the conflict between the id (desires), ego (implementing reality onto desires) and superego (conscience). The humanist approach also has views on personality, but provides little in the way of testable theories. This approach claims that people can only be understood through their unique experience of reality, and has therefore brought into question the validity of many cross-cultural approaches to testing personality. Studies such as Hofstede (1976, 2011) have attempted to examine the effects of culture on personality and have found significant effects, but an important thing to note is that whilst means differ, all types of personality can be found everywhere. When we talk about measures of personality, we often measure it with the Big Five measure (Goldberg et al., 1980; Digman, 1989). This measure includes openness to new experience, conscientiousness, agreeableness, neuroticism, and extraversion.
There is more to say but I cannot be too extensive currently; hope this helps. If people want more info, just say and I can fill in more detail later. Sources: Funder, D. C. (2001). Personality. Annual Review of Psychology, 52, 197-221. Other sources I cannot access on a train. BSc Psychology, University of Sheffield |
how is personality formed? | <p> personality development is the relatively enduring pattern of the thoughts, feelings, and behaviours that distinguish individuals from one another. the dominant view in the field of personality psychology today holds that personality emerges early and continues to change in meaningful ways throughout the lifespan.
<p> personality is defined as the characteristic set of behaviors, cognitions, and emotional patterns that evolve from biological and environmental factors. while there is no generally agreed upon definition of personality, most theories focus on motivation and psychological interactions with one's environment. trait-based personality theories, such as those defined by raymond cattell define personality as the traits that predict a person's behavior. on the other hand, more behaviorally based approaches define personality through learning and habits. nevertheless, most theories view personality as relatively stable.
<p> personality can be defined as a dynamic and organized set of personal traits and patterns of behavior. "personality includes attitudes, modes of thought, feelings, impulses, strivings, actions, responses to opportunity and stress and everyday modes of interacting with others." personality style is apparent "when these elements of personality are expressed in a characteristically repeated and dynamic combination."
<p> identity formation, also known as individuation, is the development of the distinct personality of an individual regarded as a persisting entity (known as personal continuity) in a particular stage of life in which individual characteristics are possessed and by which a person is recognized or known (such as the establishment of a reputation). this process defines individuals to others and themselves. pieces of the person's actual identity include a sense of continuity, a sense of uniqueness from others, and a sense of affiliation. identity formation leads to a number of issues of personal identity and an identity where the individual has some sort of comprehension of themselves as a discrete and separate entity. this may be through individuation whereby the undifferentiated individual tends to become unique, or undergoes stages through which differentiated facets of a person's life tend toward becoming a more indivisible whole.
<p> personality is the overall characteristics that a person possesses. all of these characteristics are acquired within a culture. however, when a person changes his or her culture, his or her personality automatically changes because the person learns to follow the norms and values of the new culture, and this, in turn, influences the individual's personal characteristics.
<p> personality also changes through life stages. this may be due to physiological changes associated with development but also experiences that impact behavior. adolescence and young adulthood have been found to be prime periods of personality changes, especially in the domains of extraversion and agreeableness. it has long been believed that personality development is shaped by life experiences that intensify the propensities that led individuals to those experiences in the first place, which is known as the corresponsive principle.
<p> personality can be defined as a set of characteristics or traits that drive individual differences in human behavior. from a biological perspective, these traits can be traced back to brain structures and neural mechanisms. however, this definition and theory of biological basis is not universally accepted. there are many conflicting theories of personality in the fields of psychology, psychiatry, philosophy, and neuroscience. a few examples of this are the nature vs. nurture debate and how the idea of a 'soul' fits into biological theories of personality. | The MaTcH study is a meta-analysis of twin studies (Nature Genetics, 2015). The link is to their interactive webpage which is quite nice at allowing you to explore the 'nature vs nurture' proportion for various factors. Unsurprisingly, things like disease markers are very largely inherited. The study seems to suggest a roughly 50-50 split between genetic factors and environmental factors under the subchapter measure of 'Temperament and Personality Functions'. From my reading of personality psych papers over the last few years, it seems we have some reasonably consistent personality traits (like Extraversion and Neuroticism) that are likely related to inherited biological factors. However, early environment obviously plays a large role in how these biological markers are expressed in later life. For example, one could be born with a tendency to respond more strongly to negative stimuli, but those with this trait and an undesirable childhood may be much more likely to develop anxiety and depression issues overall. (did a PhD partly involving personality, and have spoken to a few professors about this very subject) |
is it possible to create superconductor batteries? | <p> in superconductors, charge can flow without any resistance. it is possible to make pieces of superconductor with a large built-in persistent current, either by creating the superconducting state (cooling the material) while charge is flowing through it, or by changing the magnetic field around the superconductor after creating the superconducting state. this principle is used in superconducting electromagnets to generate sustained high magnetic fields that only require a small amount of power to maintain. the persistent current was first identified by onnes, and attempts to set a lower bound on their duration have reached values of over 100,000 years.
<p> superinsulators could potentially be used to create batteries that do not lose charge when not in use. combined with superconductors, superinsulators could be used to create electrical circuits with no energy lost as heat.
<p> before such devices can be created a major problem needs to be overcome. even though all of these devices use a superconductor in the role of a permanent magnet and even though the superconductor can trap potentially huge magnetic fields (greater than 10 t) the problem is the induction of the magnetic fields, this applies both to bulk and to coils operating in persistent mode. there are four possible known methods:
<p> high-temperature superconductors (hts) promise to revolutionize power distribution by providing lossless transmission of electrical power. the development of superconductors with transition temperatures higher than the boiling point of liquid nitrogen has made the concept of superconducting power lines commercially feasible, at least for high-load applications. it has been estimated that the waste would be halved using this method, since the necessary refrigeration equipment would consume about half the power saved by the elimination of the majority of resistive losses. some companies such as consolidated edison and american superconductor have already begun commercial production of such systems. in one hypothetical future system called a supergrid, the cost of cooling would be eliminated by coupling the transmission line with a liquid hydrogen pipeline.
<p> superconducting electric machines are electromechanical systems that rely on the use of one or more superconducting elements. since superconductors have no dc resistance, they typically have greater efficiency. the most important parameter that is of utmost interest in superconducting machine is the generation of a very high magnetic field that is not possible in a conventional machine. this leads to a substantial decrease in the motor volume; which means a great increase in the power density. however, since superconductors only have zero resistance under a certain superconducting transition temperature, "t" that is hundreds of degrees lower than room temperature, cryogenics are required.
<p> the first practical application of superconductivity was developed in 1954 with dudley allen buck's invention of the cryotron. two superconductors with greatly different values of critical magnetic field are combined to produce a fast, simple switch for computer elements.
<p> covellite was the first identified naturally occurring superconductor. the framework of cus /cus allow for an electron excess that facilitate superconduction during particular states, with exceptionally low thermal loss. material science is now aware of several of covellite's favorable properties and several researchers are intent on synthesizing covellite. uses of covellite cus superconductivity research can be seen in lithium batteries’ cathodes, ammonium gas sensors, and solar electric devices with metal chalcogenide thin films. | Pretty much exactly yes. A major limitation is that superconductors are not very strong and the enormous magnetic field is trying to crush your system to a point. Superconductors are also not free and need to be cooled to extremely low temperatures to work. Nevertheless, superconducting magnetic energy storage is extremely efficient and switches on much faster than most other batteries - they're used a lot as buffers where you need a very stable supply of electricity. |
does the earth actually make 366.25 rotation's in a year? | <p> earth's rate of rotation must be integrated to obtain time, which is earth's angular position (specifically, the orientation of the meridian of greenwich relative to the fictitious mean sun). integrating +1.7 ms/d/cy and centering the resulting parabola on the year 1820 yields (to a first approximation) seconds for δ"t". smoothed historical measurements of δ"t" using total solar eclipses are about +17190 s in the year −500 (501 bc), +10580 s in 0 (1 bc), +5710 s in 500, +1570 s in 1000, and +200 s in 1500. after the invention of the telescope, measurements were made by observing occultations of stars by the moon, which allowed the derivation of more closely spaced and more accurate values for δ"t". δ"t" continued to decrease until it reached a plateau of +11 ± 6 s between 1680 and 1866. for about three decades immediately before 1902 it was negative, reaching −6.64 s. then it increased to +63.83 s in january 2000 and +68.97 s in january 2018. this will require the addition of an ever-greater number of leap seconds to utc as long as utc tracks ut1 with one-second adjustments. (the si second as now used for utc, when adopted, was already a little shorter than the current value of the second of mean solar time.) physically, the meridian of greenwich in universal time is almost always to the east of the meridian in terrestrial time, both in the past and in the future. +17190 s or about h corresponds to 71.625°e. this means that in the year −500 (501 bc), earth's faster rotation would cause a total solar eclipse to occur 71.625° to the east of the location calculated using the uniform tt.
<p> the angular speed of earth's rotation in inertial space is ± . multiplying by (180°/π radians) × (86,400 seconds/day) yields , indicating that earth rotates more than 360° relative to the fixed stars in one solar day. earth's movement along its nearly circular orbit while it is rotating once around its axis requires that earth rotate slightly more than once relative to the fixed stars before the mean sun can pass overhead again, even though it rotates only once (360°) relative to the mean sun. multiplying the value in rad/s by earth's equatorial radius of (wgs84 ellipsoid) (factors of 2π radians needed by both cancel) yields an equatorial speed of . some sources state that earth's equatorial speed is slightly less, or . this is obtained by dividing earth's equatorial circumference by . however, the use of only one circumference unwittingly implies only one rotation in inertial space, so the corresponding time unit must be a sidereal hour. this is confirmed by multiplying by the number of sidereal days in one mean solar day, , which yields the equatorial speed in mean solar hours given above of .
<p> the rotation rate of the earth ("ω" = 7.2921 × 10⁻⁵ rad/s) can be calculated as 2"π" / "t" radians per second, where "t" is the rotation period of the earth which is one "sidereal" day (23 hr 56 m 4.1 s). in the midlatitudes, the typical value for formula_2 is about 10⁻⁴ rad/s. inertial oscillations on the surface of the earth have this frequency. these oscillations are the result of the coriolis effect.
<p> the surface velocity due to the earth's rotation is a maximum at the equator and is equal to the circumference (pi × the diameter of the earth) per 24 hours (or 3.14159 × 12,756 ÷ 24 = 1670 km/h = 1 equatorial velocity unit, evu). the time of an earth's rotation is inversely related to the angular velocity and the surface velocity (t = 1 day for 2 pi radians, or at the equator, 1 circumferential unit per 1 evu = 40,075 km ÷ 1670 km/h ÷ 24 h/day = 1 day).
<p> at the equator, the radius of the earth is "r" = 6,378,137 meters. in addition, the rotation of the earth needs to be taken into account. this imparts on an observer an angular velocity of formula_18 of 2"π" divided by the sidereal period of the earth's rotation, 86162.4 seconds. so formula_19. the proper time equation then produces
<p> after 380 to 390 years or so, the kidney-bean-shaped orbit approaches earth again from the other side, and the earth, once more, alters the orbit of cruithne so that its period of revolution around the sun is again slightly "less" than a year (this last happened with a series of close approaches centred on 1902, and will next happen with a series centered on 2676). the pattern then repeats itself.
<p> the earth revolves around the earth-moon barycentre once a sidereal month, with 1/81 the speed of the moon, or about per second. this motion is superimposed on the much larger revolution of the earth around the sun at a speed of about per second. | Close enough. The Earth makes 366.242199 _rotations_, without an apostrophe, in a year, i.e. from one vernal equinox to the next. |
the male angler fish famously attach itself to its female, fuse with its body and shares its circulatory system. how do they manage this without triggering an immune rejection like for a transplant ? | <p> the methods by which the anglerfish locate mates are variable. some species have minute eyes unfit for identifying females, while others have underdeveloped nostrils, making it unlikely that they effectively find females using olfaction. when a male finds a female, he bites into her skin, and releases an enzyme that digests the skin of his mouth and her body, fusing the pair down to the blood-vessel level. the male becomes dependent on the female host for survival by receiving nutrients via their now-shared circulatory system, and provides sperm to the female in return. after fusing, males increase in volume and become much larger relative to free-living males of the species. they live and remain reproductively functional as long as the female stays alive, and can take part in multiple spawnings. this extreme sexual dimorphism ensures that when the female is ready to spawn she has a mate immediately available. multiple males can be incorporated into a single individual female with up to eight males in some species, though some taxa appear to have a one male per female rule.
<p> in some species of anglerfish, when a male finds a female, he bites into her skin, and releases an enzyme that digests the skin of his mouth and her body, fusing the pair down to the blood-vessel level. the male becomes dependent on the female host for survival by receiving nutrients via their shared circulatory system, and provides sperm to the female in return. after fusing, males increase in volume and become much larger relative to free-living males of the species. they live and remain reproductively functional as long as the female lives, and can take part in multiple spawnings. this extreme sexual dimorphism ensures, when the female is ready to spawn, she has a mate immediately available. multiple males can be incorporated into a single individual female with up to eight males in some species, though some taxa appear to have a one male per female rule.
<p> the males in some deep sea anglerfishes are much smaller than the females. when they find a female they bite into her skin, releasing an enzyme that digests the skin of their mouth and her body and fusing the pair down to the blood-vessel level. the male then slowly atrophies, losing first his digestive organs, then his brain, heart, and eyes, ending as nothing more than a pair of gonads, which release sperm in response to hormones in the female's bloodstream indicating egg release. this ensures that, when the female is ready to spawn, she has a mate immediately available. a single anglerfish female can "mate" with many males in this manner.
<p> a unique form of insemination has been described in "corydoras aeneus". when these fish reproduce, the male will present his abdomen to the female. the female will attach her mouth to the male's genital opening, creating the well-known "t-position" many "corydoras" exhibit during courtship. the female will then drink the sperm. the sperm rapidly moves through her intestines and is discharged together with her eggs into a pouch formed by her pelvic fins. the female can then swim away and deposit the pouch somewhere else alone. because the t-position is exhibited in other species than just "c. aeneus", it is likely that this behavior is common in the genus.
<p> mating is initiated when up to five males follow closely behind a female and bite at her fins and body, possibly cued by pheromones indicating the female's readiness. each male attempts to seize the female by engulfing one of her pectoral fins; at times two males might grasp a female on both sides simultaneously. once engaged, the sharks sink to the bottom, whereupon the male (or males) rotates one of his claspers forward, inflates the associated siphon sac (a subcutaneous abdominal organ that takes in seawater that is used to flush sperm into the female), and attempts to make contact with the female's vent. in many cases, the female resists by pressing her belly against the bottom and arching her tail; this may reflect mate choice on her part. the male has a limited time in which to achieve copulation, as while he is holding the female's pectoral fin in his mouth he is being deprived of oxygen. on the other hand, if the female is willing, the pair settles side-by-side with their heads pressed against the bottom and their bodies at an upward angle.
<p> the males in some deep sea anglerfishes are much smaller than the females. when they find a female they bite into her skin, releasing an enzyme that digests the skin of their mouth and her body and fusing the pair down to the blood-vessel level. the male then slowly atrophies, losing first his digestive organs, then his brain, heart, and eyes, ending as nothing more than a pair of gonads, which release sperm in response to hormones in the female's bloodstream indicating egg release. this extreme sexual dimorphism ensures that, when the female is ready to spawn, she has a mate immediately available. a single anglerfish female can "mate" with many males in this manner.
<p> in laboratory observations, it was found that bronze corydoras have a unique method of insemination. when these fish reproduce, the male will present his abdomen to the female. the female will attach her mouth to the male's genital opening, creating the well-known "t-position" many "corydoras" exhibit during courtship. the female will then drink the sperm. the sperm rapidly moves through her intestines and is discharged together with her eggs into a pouch formed by her pelvic fins. the female can then swim away and deposit the pouch somewhere else alone. because the t-position is exhibited in other species than just "c. aeneus", it is likely that they also exhibit this behavior. in the wild, eggs are laid on waterweeds. | I just want to give your question an answer since you aren't getting one. The answer is nobody knows. Anglerfish are deep-sea fish, so even seeing them is rare, and studying their antibodies in action would be near impossible. I will say this, however: many organisms have developed mechanisms to deter rejection by a host body. Most parasitic species have to do this. I imagine the male angler does something similar, but I'm just speculating. |
does the dynamic range of your eyes increase when wearing sunglasses? | <p> in humans, the total optical power of the relaxed eye is approximately 60 dioptres. the cornea accounts for approximately two-thirds of this refractive power (about 40 dioptres) and the crystalline lens contributes the remaining one-third (about 20 dioptres). in focusing, the ciliary muscle contracts to reduce the tension or stress transferred to the lens by the suspensory ligaments. this results in increased convexity of the lens which in turn increases the optical power of the eye. the amplitude of accommodation is about 15 to 20 dioptres in the very young, decreasing to about 10 dioptres at age 25, and to around 1 dioptre above age 50.
<p> vertex distance is the space between the front of the eye and the back surface of the lens. in glasses with powers beyond ±4.00d, the vertex distance can affect the effective power of the glasses. a shorter vertex distance can expand the field of view, but if the vertex distance is too small, the eyelashes will come into contact with the back of the lens, smudging the lens and causing annoyance for the wearer. a skilled frame stylist will help the wearer select a good balance of fashionable frame size with good vertex distance in order to achieve ideal aesthetics and field of view. the average vertex distance in a pair of glasses is 12-14mm. a contact lens is placed directly on the eye and thus has a vertex distance of zero.
<p> as the eye shifts its gaze from looking through the optical center of the corrective lens, the lens-induced astigmatism value increases. in a spherical lens, especially one with a strong correction whose base curve is not in the best spherical form, such increases can significantly impact the clarity of vision in the periphery.
<p> during the accommodation reflex, the pupil constricts to increase the depth of focus of the eye by blocking the light scattered by the periphery of the cornea. the lens then increases its curvature to become more biconvex, thus increasing refractive power. the ciliary muscles are responsible for the lens accommodation response.
<p> adjustable focus eyeglasses are eyeglasses with an adjustable focal length. they compensate for refractive errors (such as presbyopia) by providing variable focusing, allowing users to adjust them for desired distance or prescription, or both.
<p> the lens is flexible and its curvature is controlled by ciliary muscles through the zonules. by changing the curvature of the lens, one can focus the eye on objects at different distances from it. this process is called accommodation. at short focal distance the ciliary muscle contracts, zonule fibers loosen, and the lens thickens, resulting in a rounder shape and thus high refractive power. changing focus to an object at a greater distance requires the relaxation of the
<p> sunglasses are often worn to reduce glare; polarized sunglasses are designed to reduce glare caused by light reflected from non-metallic surfaces such as water, glossy printed matter or painted surfaces. an anti-reflective treatment on eyeglasses reduces the glare at night and glare from inside lights and computer screens that is caused by light bouncing off the lens. some types of eyeglasses can reduce glare that occurs because of the imperfections on the surface of the eye. | "Dynamic range" means the extremes of light and dark that the eyes can *adjust to.* Once your eyes adjust to lower light, like photo film, there's a response curve where details are lost above and below a given optimum. I don't believe that curve is any wider in dim light than in bright, but it wouldn't be surprising if a small difference could be teased out. It would be a much smaller effect than the total range of human vision. The eyes use two independent sets of "sensors" with widely different ranges and capabilities. Night vision is monochrome and very sensitive. Day vision sees colors and is not so sensitive. Sunglasses, in most cases, aren't dark enough to shift from day vision (cones) to night vision (rods) because you can still see colors wearing sunglasses. Next, there are also two separate regulating mechanisms: retinal adaptation and pupil size. Both of these affect the "dynamic range" on the fly, and retinal adaptation is even localized, so that you can stare at a bright object surrounded by a dark background and after a little time, both will become better defined, until you look away and the retinal image moves. I don't see sunglasses changing this much. |
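The pupil-size mechanism mentioned in the answer above can be put in rough numbers. A minimal Python sketch, assuming a typical 2 mm to 8 mm pupil diameter range and an illustrative 10^8 overall adaptation range; these figures are textbook-style assumptions, not values from the original row:

```python
import math

# Rough numbers for the pupil-size mechanism mentioned in the answer above.
# The 2 mm / 8 mm pupil diameters are typical textbook values, not figures
# taken from the original text.
d_min_mm, d_max_mm = 2.0, 8.0

area_ratio = (d_max_mm / d_min_mm) ** 2   # light gathered scales with pupil area
print(f"pupil area ratio: {area_ratio:.0f}x (~{math.log10(area_ratio):.1f} log10 units)")

# Overall light/dark adaptation (mostly retinal, rods + cones) is commonly
# quoted as spanning many orders of magnitude; 1e8 is an illustrative assumption.
overall_range = 1e8
pupil_share = math.log10(area_ratio) / math.log10(overall_range)
print(f"pupil contribution: ~{100 * pupil_share:.0f}% of the adaptation range (log scale)")
```

The point of the sketch is only that the pupil accounts for a small slice of the total adjustment; the rest is retinal, as the answer says.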
this crazy science comment was posted earlier on reddit. can you guys tell me if what he said can actually happen? if so, could you elaborate? | <p> "he spoke about a friend who was a psychic and experiments they did. he said he set up a pinwheel experiment – i don't know how, but he knew how to set up an experiment that would be valid – and he told me that for about a week he could turn it with his mind, with his thoughts, but after about a week he couldn't do it anymore. he also told me a story about being in a car parked on the street, he was into thought experiments, and he said he projected a thought into her mind to get into my car, and as the woman was walking by the car she stopped, opened the door and sat down and looked at him, and i don't know if she shrieked or what but she was absolutely stunned at what she was doing. he said, i willed her to get into the car, and she did. i think he was as shocked as she was. they were both shocked.
<p> bullet::::- nostradamus: he is an overweight, pimply nerd who falsely believes that he can predict anything, he is known for saying he knew something was going to happen after it already did happen. because of that he is often seen as a nuisance. he is a lover of science fiction and fantasy. he was also seen hanging with marilyn monroe at the grassy knoll.
<p> herbie reports that he has seen a scene enacted in the near future which he could not understand until his childhood research in astronomy has explained it to him: he has learned about something called a "nova." what he has really seen, and had not wanted to tell his audience, was that "tomorrow – the sun is going to explode."
<p> "no, it is more of a study of how it might have been or how i feel it might have been. i mean, for example, some of the people i have met. we all met in a bar, there was a blond french guy sitting at a table, he bought us drinks. and, two or three days later, i saw his face in the headlines of a paris paper. he had been arrested and was later guillotined. that stuck in my mind."
<p> how do you do? mr. carl laemmle feels it would be a little unkind to present this picture without just a word of friendly warning: we are about to unfold the story of frankenstein, a man of science who sought to create a man after his own image without reckoning upon god. it is one of the strangest tales ever told. it deals with the two great mysteries of creation; life and death. i think it will thrill you. it may shock you. it might even" horrify "you. so, if any of you feel that you do not care to subject your nerves to such a strain, now's your chance to uh, well,––we "warned" you!!
<p> scientists who have reviewed "what the bleep do we know!?" have described distinct assertions made in the film as pseudoscience. lisa randall refers to the film as "the bane of scientists". amongst the assertions in the film that have been challenged are that water molecules can be influenced by thought (as popularized by masaru emoto), that meditation can reduce violent crime rates, and that quantum physics implies that "consciousness is the ground of all being." the film was also discussed in a letter published in "physics today" that challenges how physics is taught, saying teaching fails to "expose the mysteries physics has encountered [and] reveal the limits of our understanding". in the letter, the authors write: "the movie illustrates the uncertainty principle with a bouncing basketball being in several places at once. there's nothing wrong with that. it's recognized as pedagogical exaggeration. but the movie gradually moves to quantum 'insights' that lead a woman to toss away her antidepressant medication, to the quantum channeling of ramtha, the 35,000-year-old lemurian warrior, and on to even greater nonsense." it went on to say that "most laypeople cannot tell where the quantum physics ends and the quantum nonsense begins, and many are susceptible to being misguided," and that "a physics student may be unable to convincingly confront unjustified extrapolations of quantum mechanics," a shortcoming which the authors attribute to the current teaching of quantum mechanics, in which "we tacitly deny the mysteries physics has encountered".
<p> "as loath as i am to give any credit for what's happened here, which was egregious, i think it's clear that some of the conversations that this has generated, some of the debate, actually probably needed to happen," he said. "it's unfortunate they didn't happen some time ago, but if there's a good side to this, that's it." | Sounds like he's talking about quantum tunneling. It's not because the atoms line up (that wouldn't work anyway because the electrons repel each other), but rather because trapped quantum particles have a non-zero chance of escaping their trap. With a person in a room, it would never happen. |
why do we use uranium as the primary atom in nuclear reactions? | <p> after the discoveries of fission, moderation and of the theoretical possibility of a nuclear chain reaction, early experimental results rapidly showed that natural uranium could only undergo a sustained chain reaction using graphite or heavy water as a moderator. while the world's first reactors (cp-1, x10 etc.) were successfully reaching criticality, uranium enrichment began to develop from theoretical concept to practical applications in order to meet the goal of the manhattan project, to build a nuclear explosive.
<p> most nuclear fuels contain heavy fissile actinide elements that are capable of undergoing and sustaining nuclear fission. the three most relevant fissile isotopes are uranium-233, uranium-235 and plutonium-239. when the unstable nuclei of these atoms are hit by a slow-moving neutron, they split, creating two daughter nuclei and two or three more neutrons. these neutrons then go on to split more nuclei. this creates a self-sustaining chain reaction that is controlled in a nuclear reactor, or uncontrolled in a nuclear weapon.
<p> the atomic nucleus of u-235 will nearly always fission when struck by a free neutron, and the isotope is therefore said to be a "fissile" isotope. the nucleus of a u-238 atom on the other hand, rather than undergoing fission when struck by a free neutron, will nearly always absorb the neutron and yield an atom of the isotope u-239. this isotope then undergoes natural radioactive decay to yield pu-239, which, like u-235, is a fissile isotope. the atoms of u-238 are said to be fertile, because, through neutron irradiation in the core, some eventually yield atoms of fissile pu-239.
<p> in nature, uranium is found as uranium-238 (99.2742%) and uranium-235 (0.7204%). isotope separation concentrates (enriches) the fissionable uranium-235 for nuclear weapons and most nuclear power plants, except for gas cooled reactors and pressurised heavy water reactors. most neutrons released by a fissioning atom of uranium-235 must impact other uranium-235 atoms to sustain the nuclear chain reaction. the concentration and amount of uranium-235 needed to achieve this is called a 'critical mass'.
<p> natural uranium is a mix of several isotopes, mainly a trace amount of u-235 and over 99% u-238. when they undergo fission, both of these elements release fast neutrons with an energy distribution peaking around 1 to 2 mev. this energy is too low to cause fission in u-238, which means it cannot sustain a chain reaction. u-235 will undergo fission when struck by neutrons of this energy, so it is possible for u-235 to sustain a chain reaction, as is the case in a nuclear bomb. however, the probability of one neutron causing fission in another u-235 atom before it escapes the fuel is too low to maintain criticality in a mass of natural uranium, so the chain reaction can only occur in fuels with increased amounts of u-235. this is accomplished by concentrating, or "enriching", the fuel, increasing the amount of u-235 to produce enriched uranium, while the leftover, now mostly u-238, is a waste product known as depleted uranium.
<p> the uranium isotope u is used as the fuel for nuclear reactors and nuclear weapons. it is the only isotope existing in nature to any appreciable extent that is fissile, that is, fissionable by thermal neutrons. the isotope u is also important because it absorbs neutrons to produce a radioactive isotope that subsequently decays to the isotope 239pu (plutonium), which also is fissile. uranium in its natural state comprises just 0.71% u and 99.3% u, and the main focus of uranium metallurgy is the enrichment of uranium through isotope separation.
<p> uranium enrichment is difficult because the chemical properties of u and u are identical, so physical processes such as gaseous diffusion, gas centrifuge or mass spectrometry must be used for isotopic separation based on small differences in mass. because enrichment is the main technical hurdle to production of nuclear fuel and simple nuclear weapons, enrichment technology is politically sensitive. | Because it's relatively plentiful and its nucleus is big and unstable enough to undergo the fission chain reactions needed to generate power. |
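The chain-reaction condition described in the passages above (each fission must, on average, trigger at least one more) can be written as a one-line toy model. A sketch, assuming the textbook value of roughly 2.4 neutrons per U-235 fission; the per-neutron probabilities are made up for illustration:

```python
# Toy version of the chain-reaction condition from the passages above.
# nu ~ 2.4 neutrons per U-235 fission is a standard textbook value; the
# per-neutron "success" probabilities below are made up for illustration.
nu = 2.4

def multiplication_factor(p_cause_fission):
    """k = expected number of follow-on fissions per fission (very crude model)."""
    return nu * p_cause_fission

for p in (0.3, 1 / nu, 0.6):
    k = multiplication_factor(p)
    state = "supercritical" if k > 1 else ("critical" if abs(k - 1) < 1e-9 else "subcritical")
    print(f"p = {p:.3f} -> k = {k:.2f} ({state})")
```

Enrichment, moderation and geometry are all ways of pushing that per-neutron probability high enough that k reaches 1.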
is heat generation using ac in anyway more efficient or easy than with dc? | <p> power supply (psu) is made quieter through the use of higher efficiency (which reduces waste heat and need for airflow), quieter fans, more intelligent fan controllers (ones for which the correlation between temperature and fan speed is more complex than linear), more effective heat sinks, and designs that allow air to flow through with less resistance. for a given power supply size, more efficient supplies such as those certified 80 plus generate less heat.
<p> efficient conversion of dc power to ac requires the inverter to store energy from the panel while the grid's ac voltage is near zero, and then release it again when it rises. this requires considerable amounts of energy storage in a small package. the lowest-cost option for the required amount of storage is the electrolytic capacitor, but these have relatively short lifetimes normally measured in years, and those lifetimes are shorter when operated hot, like on a rooftop solar panel. this has led to considerable development effort on the part of microinverter developers, who have introduced a variety of conversion topologies with lowered storage requirements, some using the much less capable but far longer lived thin film capacitors where possible.
<p> inverters convert low frequency main ac power to higher frequency for use in induction heating. to do this, ac power is first rectified to provide dc power. the inverter then changes the dc power to high frequency ac power. due to the reduction in the number of dc sources employed, the structure becomes more reliable and the output voltage has higher resolution due to an increase in the number of steps so that the reference sinusoidal voltage can be better achieved. this configuration has recently become very popular in ac power supply and adjustable speed drive applications. this new inverter can avoid extra clamping diodes or voltage balancing capacitors.
<p> moreover, in installations where electricity is converted to ac, such as solar power plants, the actual total electricity generation capacity is limited by the inverter, which is usually sized at a lower peak capacity than the solar system for economic reasons. since the peak dc power is reached only for a few hours each year, using a smaller inverter allows to save money on the inverter while clipping (wasting) only a very small portion of the total energy production. the capacity of the power plant after dc-ac conversion is usually reported in w as opposed to w or w.
<p> the electric power is generally dc rather than ac, even though this requires large rectifiers. dc motors were formerly more efficient for railway applications, and once a dc system is in place, converting it to ac is generally considered infeasible.
<p> a test in 2005 revealed computer power supplies are generally about 70–80% efficient. for a 75% efficient power supply to produce 75 w of dc output it would require 100 w of ac input and dissipate the remaining 25 w in heat. higher-quality power supplies can be over 80% efficient; as a result, energy-efficient psus waste less energy in heat and require less airflow to cool, resulting in quieter operation.
<p> one advantage of direct current over ac is that dc current penetrates the entire conductor as opposed to ac current which only penetrates to the skin depth. for the same conductor size the effective resistance is greater with ac than dc, hence more power is lost as heat. in general the total costs for hvdc are less than an ac line if the line length is over 500–600 miles, and with advances in conversion technology this distance has been reduced considerably. a dc line is also ideal for connecting two ac systems that are not synchronized with each another. hvdc lines can help stabilize a power grid against cascading blackouts, since power flow through the line is controllable. | The other comments seem to be missing the obvious. You are describing welding. Your estimates are for power volts and amps are all low (compared to typical welding power supplies), the materials aren't ideal, but the general idea is obviously sound. What you're doing involves electric power at levels which will trivially blind, burn, and kill you. Tread carefully. |
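The skin-effect point in the last passage above (AC current only penetrates to the skin depth, so effective resistance is higher than for DC) is easy to quantify. A small sketch using the standard formula delta = sqrt(2*rho/(omega*mu)) with handbook values for copper; the frequencies are arbitrary examples:

```python
import math

# Skin depth delta = sqrt(2*rho / (omega*mu)) -- the depth to which AC current
# effectively penetrates a conductor. Copper resistivity and mu0 are standard
# handbook constants; the frequencies are arbitrary illustrations.
rho_cu = 1.68e-8               # ohm*m, copper resistivity
mu0    = 4 * math.pi * 1e-7    # H/m, vacuum permeability (~ copper's permeability)

for f_hz in (50.0, 60.0, 1e6):
    omega = 2 * math.pi * f_hz
    delta = math.sqrt(2 * rho_cu / (omega * mu0))
    print(f"f = {f_hz:10.0f} Hz -> skin depth ~ {delta * 1000:.3f} mm")
```

At 50-60 Hz the skin depth in copper is around 9 mm, so the penalty matters mainly for very thick conductors and long transmission lines, which is why the passage frames it as an HVDC advantage.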
are planets with a large orbital path occasionally closer to the sun than those with a shorter orbit? | <p> this method has two major disadvantages. first, planetary transits are observable only when the planet's orbit happens to be perfectly aligned from the astronomers' vantage point. the probability of a planetary orbital plane being directly on the line-of-sight to a star is the ratio of the diameter of the star to the diameter of the orbit (in small stars, the radius of the planet is also an important factor). about 10% of planets with small orbits have such an alignment, and the fraction decreases for planets with larger orbits. for a planet orbiting a sun-sized star at 1 au, the probability of a random alignment producing a transit is 0.47%. therefore, the method cannot guarantee that any particular star is not a host to planets. however, by scanning large areas of the sky containing thousands or even hundreds of thousands of stars at once, transit surveys can find more extrasolar planets than the radial-velocity method. several surveys have taken that approach, such as the ground-based mearth project, superwasp, kelt, and hatnet, as well as the space-based corot and kepler missions. the transit method has also the advantage of detecting planets around stars that are located a few thousand light years away. the most distant planets detected by sagittarius window eclipsing extrasolar planet search are located near the galactic center. however, reliable follow-up observations of these stars are nearly impossible with current technology.
<p> terrestrial planets in multiple star systems, those containing three or more stars, are not likely to have stable orbits in the long term. stable orbits in binary systems take one of two forms: s-type (satellite or circumstellar) orbits around one of the stars, and p-type (planetary or circumbinary) orbits around the entire binary pair. eccentric jupiters may also disrupt the orbits of planets in habitable zones.
<p> several planets or dwarf planets in the solar system (such as neptune and pluto) have orbital periods that are in resonance with each other or with smaller bodies (this is also common in satellite systems). all except mercury and venus have natural satellites, often called "moons". earth has one, mars has two, and the giant planets have numerous moons in complex planetary-type systems. many moons of the giant planets have features similar to those on the terrestrial planets and dwarf planets, and some have been studied as possible abodes of life (especially europa).
<p> the planet's orbit has a low orbital eccentricity, like most of the planets in the solar system. the semimajor axis of the orbit is only 0.63 au, similar to that of venus. however, its star is less massive and energetic than the sun (with a luminosity of 0.62 ), thereby putting the planet within its habitable zone.
<p> the orbits of the trappist-1 planetary system are very flat and compact. all seven of trappist-1's planets orbit much closer than mercury orbits the sun. except for "b", they orbit farther than the galilean satellites do around jupiter, but closer than most of the other moons of jupiter. the distance between the orbits of "b" and "c" is only 1.6 times the distance between the earth and the moon. the planets should appear prominently in each other's skies, in some cases appearing several times larger than the moon appears from earth. a year on the closest planet passes in only 1.5 earth days, while the seventh planet's year passes in only 18.8 days.
<p> a necessary condition for the existence of a planet in this system are stable zones where the object can remain in orbit for long intervals. for hypothetical planets in a circular orbit around the individual members of this star system, this maximum orbital radius is computed to be 1.01 au for the primary and 0.41 au for the secondary. (note that the orbit of the earth is 1 au from the sun.) a planet orbiting outside of both stars would need to be at least 18.4 au distant.
<p> orbital resonance from major orbiting bodies creates regions around the sun that are free of long-term stable orbits. results from simulations of planetary formation support the idea that a randomly chosen stable planetary system will likely satisfy a titius–bode law. | Of the eight major planets in our solar system, none has a perihelion that's closer than the aphelion of any of the major planets with shorter orbital periods. In other words, none of their orbits "cross". However if you look at smaller objects, like dwarf planets, asteroids, and comets, this happens all the time. Here's the range of distances from the Sun of the four dwarf planets beyond Neptune, listed in order of their average distance from the Sun/orbital period:

Name | Perihelion (AU) | Aphelion (AU)
---------|---------|---------
Pluto | 29.7 | 48.9
Haumea | 34.7 | 51.5
Makemake | 38.5 | 53.1
Eris | 38.3 | 97.7

As you can see, at a given time any of the four can be closest to the Sun, and any of the four can be farthest. Pluto is sometimes closer to the Sun than Neptune. And a comet with an orbital period of thousands of years can have a perihelion closer to the Sun than Mercury. |
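A quick way to verify the "orbits cross" part of the answer is to compare the radial ranges in the table. A minimal sketch using only the numbers quoted above:

```python
# Check of the "orbit crossing" claim using the numbers from the table in the
# answer above (all values in AU, copied from that table).
orbits = {
    "Pluto":    (29.7, 48.9),
    "Haumea":   (34.7, 51.5),
    "Makemake": (38.5, 53.1),
    "Eris":     (38.3, 97.7),
}

# Two orbits overlap radially when each body's perihelion lies inside the
# other's aphelion.
names = list(orbits)
for i, a in enumerate(names):
    for b in names[i + 1:]:
        peri_a, aph_a = orbits[a]
        peri_b, aph_b = orbits[b]
        overlap = peri_a < aph_b and peri_b < aph_a
        print(f"{a:9s} vs {b:9s}: radial ranges overlap = {overlap}")
```

Every pair overlaps, which is consistent with the claim that any of the four can be closest to or farthest from the Sun at a given time (the actual closest/farthest also depends on where each body currently sits along its orbit).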
have we discovered all the elements? | <p> from the work of henry moseley in 1914, it was known that several elements had not yet been discovered. their chemical properties could be deduced from the vacant places in the periodic table of dmitri mendeleev. several scientists claimed the discovery of the missing elements.
<p> the history of the discovery and use of the elements began with primitive human societies that found native elements like carbon, sulfur, copper and gold. later civilizations extracted elemental copper, tin, lead and iron from their ores by smelting, using charcoal. alchemists and chemists subsequently identified many more; all of the naturally occurring elements were known by 1950.
<p> the discovery of the 118 chemical elements known to exist as of 2019 is presented in chronological order. the elements are listed generally in the order in which each was first defined as the pure element, as the exact date of discovery of most elements cannot be accurately determined. there are plans to synthesise more elements, and it is not known how many elements are possible.
<p> this chapter mostly emphasized the discoveries of the last elements in the periodic table. glenn seaborg and albert ghiorso with joint efforts worked at uc berkeley and found at least one-sixth of the elements on the table, more elements than anyone else in history. discovering elements involved many experiments where one little mistake could ruin the whole experiment and waste thousands of dollars. kean discussed the many arguments and fights raised for the naming rights of these final elements. the russians found element 104 in 1964 before the berkeley team did and later discovered element 105, but fights arose when both teams found element 106 just months apart and the big feuds for naming rights began. the disagreements ran into the 1990s, and the fights and feuds were so extreme that an outside body had to give the final names. they studied the data of both teams and came up with a list of names. both teams had lists of names they wanted. seaborg was alive when an element was named after him and he was the first person to have that happen while still alive.
<p> at the beginning of the 19th century only 55 of the 92 naturally occurring elements had been discovered. scientists had no idea how many more they might find, or indeed if there were an infinite number of elements. they also sought to answer a fundamental question, namely: is there a pattern to the elements?
<p> meanwhile, the american team had discovered seaborgium, and the next six elements had been discovered by a german team: bohrium, hassium, meitnerium, darmstadtium, roentgenium, and copernicium. element 113, nihonium, was discovered by a japanese team; the last five known elements, flerovium, moscovium, livermorium, tennessine, and oganesson, were discovered by russian–american collaborations and complete the seventh row of the periodic table.
<p> as of 2010, there are 118 known elements (in this context, "known" means observed well enough, even from just a few decay products, to have been differentiated from other elements). of these 118 elements, 94 occur naturally on earth. six of these occur in extreme trace quantities: technetium, atomic number 43; promethium, number 61; astatine, number 85; francium, number 87; neptunium, number 93; and plutonium, number 94. these 94 elements have been detected in the universe at large, in the spectra of stars and also supernovae, where short-lived radioactive elements are newly being made. the first 94 elements have been detected directly on earth as primordial nuclides present from the formation of the solar system, or as naturally occurring fission or transmutation products of uranium and thorium. | We probably haven't discovered all of the possible elements. But the ones that we have yet to discover are likely too unstable to be found in nature, except possibly in very violent astrophysical events which may produce them. |
why do some fabrics change shades when rubbed in certain directions? | <p> american researcher alan d. adler, confirming the presence of bilirubin on the fabric, noted that it is not light-stable and may change the color under any light. according to adler, since the image fibers are at or near saturation while the surrounding cloth is not, the latter will gradually get darker until the image first becomes a silhouette and later finally vanishes.
<p> lampshades are made of fabric, parchment, glass, tiffany glass, paper or plastic. common fabric materials include silk, linen and cotton. fabric shades are reinforced by metal frames to give the lampshades their shape, while paper or plastic shades can hold their shape without support. for this reason, paper shades can be more fragile than fabric shades. darker shades sometimes add a reflective liner such as gold or silver in order to maximize light output.
<p> metzinger's response in "coucher de soleil no. 1", in addition to illustrating actual radiation emanating in concentric circles from the sun, was to separate colors in such a way as to avoid mixtures, leading to inert tones. contrary to the impressionists related hues, often placed on top of one another while still wet—leading to a result the divisionists found dull—contrasting hues placed side by side for the effect or creating optical vibrations were essential to divisionists.
<p> linen fabric feels cool to touch, a phenomenon which indicates its higher conductivity (the same principle that makes metals feel "cold"). it is smooth, making the finished fabric lint-free, and gets softer the more it is washed. however, constant creasing in the same place in sharp folds will tend to break the linen threads. this wear can show up in collars, hems, and any area that is iron creased during laundering. linen has poor elasticity and does not spring back readily, explaining why it wrinkles so easily.
<p> when mixing the printing inks the lightfastness of the ink being weaker by its lightfastness defines the lightfastness of the whole color. the fading of one of the pigments leads to tone shift towards to component with better lightfastness. if it is required that there will be something visible from the printing even though its dominant pigment would fade, small amount of pigment with excellent lightfastness can be mixed with it.
<p> some red inclusions, found in some blankets, may be made of wool fabric and were used by the salish weavers in the last quarter of the 1800s. strips were torn from imported blankets or other materials and used in weaving. it is likely that the introduction of these foreign materials into the weaving was based on colour; strips of richly dyed fabric are common in the later plain or solid style salish blankets while brightly coloured commercial yarns are included in many of the decorative blankets. in most cases the introduced fabric strips and yarns use colours not available through native plant or mineral dyes.
<p> flannel may be brushed to create extra softness or remain unbrushed. brushing is a mechanical process wherein a fine metal brush rubs the fabric to raise fine fibres from the loosely spun yarns to form a nap on one or both sides. if the flannel is not napped, it gains its softness through the loosely spun yarn in its woven form. | Nap : And : |
why is so much extra light needed when filming with a high frame rate? | <p> according to robert yeoman, filming proved to be difficult because natural light lasted between seven and a half or eight hours and the film stock was slow. the crew solved those problems by working faster through the day to get the shots.
<p> the high resolution photographic film used for cinema projection is exposed at the rate of 24 frames per second but usually projected at 48, each frame getting projected twice helping to minimise flicker. one exception to this was the 1986 national film board of canada short film "momentum", which briefly experimented with both filming and projecting at 48 frame/s, in a process known as imax hd.
<p> for the purposes of making the above illustration readable a projection speed of 10 frames per second (frame/s) has been selected, in fact film is usually projected at 24 frame/s making the equivalent slow overcranking rare, but available on professional equipment.
<p> in the early 20th century when 35mm movie film was developed, producers found that 18–24 frames per second was adequate for portraying motion in a movie theater environment. flicker was still a problem at these rates, but projectors solved this by projecting each frame twice, thus creating a refresh rate of 36–48 hz without using excessive amounts of film. however when television was developed, there was no corresponding way to capture a video frame and project it twice. the solution to this was interlace, which had a side effect that 50 to 60 images per second were presented to the viewer.
<p> the future presence of digital projectors in theaters opens up the possibility that hollywood movies could someday include high motion—perhaps in action films intercut with 24 frame/s for non-action scenes. the maxivision48 3-perf film format promotes this use with its ability to switch from 24 frame/s to 48 frame/s on the fly during projection. however, 3-perf has not seen much adaptation as a projection format.
<p> in recent years, work has taken place to recreate the effects of daylight artificially. this is however expensive in terms of both equipment and energy consumption and is applied almost exclusively in specialist areas such as filmmaking, where light of such intensity is required anyway. in some filmmaking locations, such as sweden or norway, there is too much light due to long summer days. as a result, in location films such as "marianne" (2011), night scenes have to be shot during daylight hours and are digitally altered later.
<p> when motion picture film was developed, the movie screen had to be illuminated at a high rate to prevent visible flicker. the exact rate necessary varies by brightness — 50 hz is (barely) acceptable for small, low brightness displays in dimly lit rooms, whilst 80 hz or more may be necessary for bright displays that extend into peripheral vision. the film solution was to project each frame of film three times using a three-bladed shutter: a movie shot at 16 frames per second illuminated the screen 48 times per second. later, when sound film became available, the higher projection speed of 24 frames per second enabled a two bladed shutter to produce 48 times per second illumination—but only in projectors incapable of projecting at the lower speed. | Taking a photograph on film is a chemical reaction. When light hits film, it causes chemical changes in the film itself, and the pattern of these changes makes up the image. Like any chemical reaction, it takes a particular amount of time. If I expose a particular film to light, it might take 1/24 of a second for the chemical reaction to occur, as is the case for standard movie film. If I want to increase the frame rate with that film to 1/48 of a second, I am going to have a problem. The same chemical reaction will have to happen in half of the time. A potential solution? Double the amount of light to compensate. Another way to analogize this is that is takes a particular number of photons to record an image. If I want to capture images more quickly, I can simply increase the number of photons per second, ie. the brightness, to compensate. |
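The reciprocity argument in the answer above (same exposure = illuminance x time, so halving the per-frame time means doubling the light) in a short sketch; the 180-degree shutter angle and the 1000 lux reference level are assumptions for illustration only:

```python
# Reciprocity arithmetic behind the answer above: per-frame exposure is
# roughly (scene illuminance) x (per-frame exposure time). The 180-degree
# shutter angle and the 1000 lux reference are illustrative assumptions.
def exposure(illuminance_lux, fps, shutter_angle_deg=180.0):
    exposure_time_s = (shutter_angle_deg / 360.0) / fps
    return illuminance_lux * exposure_time_s

reference = exposure(1000.0, 24)            # baseline exposure at 24 fps
for fps in (24, 48, 120):
    needed_lux = 1000.0 * fps / 24.0        # scale light to match the baseline
    assert abs(exposure(needed_lux, fps) - reference) < 1e-9
    print(f"{fps:3d} fps -> need ~{needed_lux:.0f} lux ({fps / 24:.0f}x the light)")
```

In practice the light requirement can also be met by opening the aperture or using a faster (more sensitive) stock or sensor, but the proportionality is the same.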
the effects of gravity on the fabric of space time is often depicted as the earth supported by a two dimensional plane being bent downwards as if it were a taut blanket holding a bowling ball. is there any diagrams that depict the actual effect in three dimensional space? a vector field perhaps? | <p> according to einstein's theory of general relativity, particles of negligible mass travel along geodesics in the space-time. in flat space-time, far from a source of gravity, these geodesics correspond to straight lines; however, they may deviate from straight lines when the space-time is curved. the equation for the geodesic lines is
<p> in both newtonian mechanics and special relativity, space and then spacetime are assumed to be flat, and we can construct a global cartesian coordinate system. in general relativity, these restrictions on the shape of spacetime and on the coordinate system to be used are lost. therefore, a different definition of inertial motion is required. in relativity, inertial motion occurs along timelike or null geodesics as parameterized by proper time. this is expressed mathematically by the geodesic equation:
<p> in the 1980s, 't hooft's attention was drawn to the subject of gravity in 3 spacetime dimensions. together with deser and jackiw he published an article in 1984 describing the dynamics of flat space where the only local degrees of freedom were propagating point defects. his attention returned to this model at various points in time, showing that gott pairs would not cause causality violating timelike loops, and showing how the model could be quantized. more recently he proposed generalizing this piecewise flat model of gravity to 4 spacetime dimensions.
<p> einstein proposed that spacetime is curved by matter, and that free-falling objects are moving along locally straight paths in curved spacetime. these straight paths are called geodesics. like newton's first law of motion, einstein's theory states that if a force is applied on an object, it would deviate from a geodesic. for instance, we are no longer following geodesics while standing because the mechanical resistance of the earth exerts an upward force on us, and we are non-inertial on the ground as a result. this explains why moving along the geodesics in spacetime is considered inertial.
<p> in general relativity, gravity has curvature effects on the four dimensions of the universe. a common analogy is placing a heavy object on a stretched out rubber sheet, causing the sheet to bend downward. this curves the coordinate system around the object, much like an object in the universe curves the coordinate system it sits in. the mathematics here are conceptually more complex than on earth, as it results in four dimensions of curved coordinates instead of three as used to describe a curved 2d surface.
<p> according to einstein's theory of general relativity, particles of negligible mass travel along geodesics in the space-time. in uncurved space-time, far from a source of gravity, these geodesics correspond to straight lines; however, they may deviate from straight lines when the space-time is curved. the equation for the geodesic lines is
<p> what this means is that in a spacetime with non-vanishing curvature, gravity is modified from newtonian gravity. at distances comparable to the radius of the space, objects feel an additional linear repulsion from the center of coordinates. | I know this is probably stretching (sorry) the analogy too far, but as space time expands, does that mean mass will have less of an effect on space time, making gravity weaker? |
how do microwave detectors work? picture in comment. | <p> this device emits microwaves from a transmitter and detects any reflected microwaves or reduction in beam intensity using a receiver. the transmitter and receiver are usually combined inside a single housing (monostatic) for indoor applications, and separate housings (bistatic) for outdoor applications. to reduce false alarms this type of detector is usually combined with a passive infrared detector, or dual tec brand or similar alarm.
<p> bullet::::1. microwave impedator (aka "mister fuck up"): roughly a briefcase-sized device, it can render useless infrared and other photo-electric detectors. also can jam transmitters of audio and motion detectors, which operate upon the doppler principle. has a built-in self-destruct mechanism.
<p> bullet::::4. microwave sensors. similar to the ultrasonic sensor, a microwave sensor also works on the doppler shift principle. a microwave sensor will send high frequency microwaves in an area and will check for their reflected patterns. if the reflected pattern is changing continuously then it assumes that there is occupancy and the lighting load connected is turned on. if the reflected pattern is the same for a preset time then the sensor assumes there is no occupancy and the load is switched off. a microwave sensor has high sensitivity as well as detection range compared to other types of sensors.
<p> non contact microwave-based radar sensors are able to see through low conductivity 'microwave-transparent' (non-conductive) glass/plastic windows or vessel walls through which the microwave beam can be passed and measure a 'microwave reflective' (conductive) liquid inside (in the same way as to use a plastic bowl in a microwave oven). they are also largely unaffected by high temperature, pressure, vacuum or vibration. as these sensors do not require physical contact with the process material, so the transmitter /receiver can be mounted a safe distance above/from the process, even with an antenna extension of several metres to reduce temperature, yet still respond to the changes in level or distance changes e.g. they are ideal for measurement of molten metal products at over 1200 °c. microwave transmitters also offer the same key advantage of ultrasonics: the presence of a microprocessor to process the signal, provide numerous monitoring, controls, communications, setup and diagnostic capabilities and are independent of changing density, viscosity and electrical properties. additionally, they solve some of the application limitations of ultrasonics: operation in high pressure and vacuum, high temperatures, dust, temperature and vapor layers.
<p> microwave detectors respond to a doppler shift in the frequency of the reflected energy, by a phase shift, or by a sudden reduction of the level of received energy. any of these effects may indicate motion of an intruder.
<p> a piece of calibrated equipment is required for these tests to detect and measure leakage of the 2.4 ghz microwave radiation, it is usually a hand-held device with a sensing antenna that can be scanned over the areas where the door meets the casing to find any radiation "hot-spots" whilst the unit is operating. as microwave ovens are not normally designed to be operated without a load this will usually take the form of an open container containing a quantity of water which is used to absorb the energy and as it gets warmed gives an indication that a unit not previously examined by a tester is actually producing microwaves. after checking for leakage the door is required to be opened by whatever means is provided and the measurement device is not to record a level above the given limit. in some scenarios a known quantity of water is heated for a known period of time and the temperature rise over the period of operation is used to generate an indication of the effective power output of the magnetron. this can be helpful to determine whether the oven is operating at the expected power levels indicated by labelling.
<p> some microwave meters use a ceramic probe that is directly inserted into the sample. this allows the meter to have direct contact to the sample in question. however, this limits the types of slurries and sludges that can flow through the pipe line. abrasive slurries with particulates can damage the sensor probe. | The two legs of the LED, when spread apart, make a pretty good receiver antenna. The length is close to half wavelength: The other component twisted onto the LED legs is probably an RF diode for rectifying the high frequency radio signal into direct current for driving the LED. The LED is also a diode but its response time is probably too slow to rectify the RF signal. Not so sure how the Green/Red stuff works. Probably different threshold voltages. |
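The "close to half wavelength" remark in the answer can be checked directly, assuming the usual 2.45 GHz microwave-oven frequency (an assumption; the picture and link referenced in the original row are not preserved here):

```python
# Half-wavelength check for the "legs are about a half wavelength" remark,
# assuming the usual 2.45 GHz microwave-oven frequency.
c = 299_792_458.0   # m/s, speed of light
f = 2.45e9          # Hz, assumed oven frequency

wavelength_m = c / f
print(f"wavelength      : {wavelength_m * 100:.1f} cm")  # ~12.2 cm
print(f"half wavelength : {wavelength_m * 50:.1f} cm")   # ~6.1 cm across both legs
```

That gives a total span of roughly 6 cm, which is in the same general ballpark as two spread LED leads plus the attached diode.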
would a kidney transplant cure kidney stones? is the kidney the problem or something else? | <p> in patients with a history of kidney disease, intravenous ascorbic acid therapy has been shown to exacerbate symptoms and may cause kidney failure. moreover, individuals with a high risk of kidney stones formation should not partake in intravenous ascorbic acid therapy. high doses of ascorbic acid has been shown to increase urinary oxalate and uric acid secretion in individuals with renal dysfunction and thus, lead to the formation of kidney stones.
<p> kidneys can be used from category ii donors, and all organs except the heart can potentially be used from category iii, iv and v donors. an unsuccessful kidney recipient can remain on dialysis, unlike recipients of some other organs, meaning that a failure will not result in death.
<p> for some illnesses, there are alternatives today that do not require the extraction of a kidney. such alternatives include renal embolization for those who are poor candidates for surgery, or partial nephrectomy if possible.
<p> transplants with artificial organs do not pose any problems in jewish law (with the exception of artificial heart transplants), as long as the prospects for success are greater than the risks. therefore, there is no conflict with jewish law against artificial heart valves, bone parts, joints, and use of dialysis. artificial heart transplants are not permissible according to jewish law due to low success rates and the serious medical complications involved. medical science has not reached the point of being able to use artificial organs or animal organs as protocol for transplantation.
<p> when the kidneys are no longer able to sustain the demands of the body, end-stage kidney failure is said to have occurred. without renal replacement therapy, death from kidney failure will eventually result. dialysis is an artificial method of replacing some kidney function to prolong life. renal transplantation replaces kidney function by inserting into the body a healthier kidney from an organ donor and inducing immunologic tolerance of that organ with immunosuppression. at present, renal transplantation is the most effective treatment for end-stage kidney failure although its worldwide availability is limited by lack of availability of donor organs.
<p> people with kidney failure are often malnourished, which may contribute to gynecomastia development. dialysis may attenuate malnutrition of kidney failure. additionally, many kidney failure patients experience a hormonal imbalance due to the suppression of testosterone production and testicular damage from high levels of urea also known as uremia-associated hypogonadism.
<p> bullet::::- kidney stone is also an associated risk of the intestinal bypass surgery. this is mainly due to enteric hyperoxaluria. increased absorption of oxalate in colons rises the risk of the formation of kidney stones. | It depends! There are quite a few different types of kidney stones. Some of these are the result of renal problems, whereas others are the result of... other problems. The kidney is essentially nothing more than an insanely complicated filter for the blood. It works by taking arterial blood (from the renal arteries) and pushing that blood through specialized blood vessels such that the plasma of the blood (the portion of it that carries things like ions, drugs, chemicals, sugars, and so on) is pushed into the kidney proper. The kidney then selectively reabsorbs what it wants into the blood and the rest becomes urine. So, a simplified schematic of the kidney's function might be: **Blood with stuff -- > Kidney -- > Blood with less stuff** (where the stuff that isn't there anymore is what went into the urine) So, with the physiology down, we can look at the pathophysiology (i.e. when things go wrong...). What if we increase the amount of bad stuff in the blood? Then we'd have Blood with tons of stuff -- > Kidney -- > Blood with stuff The kidney in this scenario is filtering lots of extra stuff, which therefore exists in the urine in **very high concentrations**. The higher the concentrations of things, the more likely that they might crystallize or solidify into stones. Examples of stones caused by this could be calcium stones. If you have lots of calcium in the blood (hypercalcemia), then more calcium also exists in the urine, and that calcium coalesces into calcium stones. You could have hypercalcemia for a number of reasons, including cancer or hyperparathyroidism, so in this case a kidney transplant would only temporarily eliminate stones, as it wouldn't prevent the formation of new ones. Another example is gout, in which you might have a defect in uric acid metabolism that causes lots of uric acid to exist in the blood. That can precipitate in your joints, causing the classic gout sign of a swollen joint, or it can precipitate in your kidneys, causing kidney stones. An interesting tangent to both of these examples is tumor lysis syndrome. Imagine that you have a patient with leukemia, and then you give them a powerful chemotherapy drug that kills all the cancer at once. You suddenly have LOADS of cellular debris in the blood, and that goes into the kidney, and then you go into acute kidney failure because your kidney can't handle it. Now, that was a bunch of prerenal pathology (i.e. the problem was with too much stuff being created). But we can also have problems with the kidneys themselves. Transporters can be disrupted, or we can have tumors that disrupt the flow and lead to fluid stasis (which can encourage stones or chronic or acute renal failure). So, if the kidney stone is the result of an intrinsic problem with the kidney, yes, a transplant could cure the problem. For the sake of completion: we can also have postrenal problems, like a urethra that's blocked by benign prostatic hyperplasia (BPH, which affects many older men). If the urethra is blocked, urine can't be excreted, and the system backs up. This causes the kidney to be unable to filter fluid (because it's already full). That was a long answer. Hopefully it wasn't too rambly. Feel free to ask followup questions. |
how does the spectrum of electromagnetic radiation emitted by an object change as the object's temperature changes? | <p> all objects emit electromagnetic radiation of a wavelength dependent on the object's temperature. the frequency of the radiation is inversely proportional to the temperature. in infrared thermography, the radiation is detected and measured with infrared imagers (radiometers). the imagers contain an infrared detector that converts the emitting radiation into electrical signals that are displayed on a color or black and white computer display monitor.
<p> temperature is related to the average kinetic energy (energy of motion) of the atoms or molecules in a material, so agitating the molecules in this way increases the temperature of the material. thus, dipole rotation is a mechanism by which energy in the form of electromagnetic radiation can raise the temperature of an object. there are also many other mechanisms by which this conversion occurs.
<p> variations in temperature will cause a multitude of effects. the object will change in size by thermal expansion, which will be detected as a strain by the gauge. resistance of the gauge will change, and resistance of the connecting wires will change.
<p> the thermal motion of ions will result in a shift of emission lines up or down, depending on whether the ion is moving toward or away from the observer. the magnitude of the shift is proportional to the velocity along the line of sight. the net effect is a characteristic broadening of spectral lines, known as doppler broadening, from which the ion temperature can be determined.
<p> these energy states are quantized, meaning they can assume only some "discrete" values of energy. when electromagnetic radiation is shined on a sample, the molecules can absorb energy from the radiation and change their vibrational energy state. however, the molecules can absorb energy from radiation only under certain condition, namely- there should be a change in the electric dipole moment of the molecule when it is vibrating. this change in the electric dipole moment of the molecule leads to the transition dipole moment of the molecule, for transition from the lower to higher energy state, being non-zero which is an essential condition for any transition to take place in the vibrational state of the molecule (due to selection rules).
<p> part of the radiation reaching an object is absorbed and the remainder reflected. usually the absorbed radiation is converted to thermal energy, increasing the object's temperature. manmade or natural systems, however, can convert part of the absorbed radiation into another form such as electricity or chemical bonds, as in the case of photovoltaic cells or plants. the proportion of reflected radiation is the object's reflectivity or albedo.
<p> if an electron in an atom is moving on an orbit with period "t", classically the electromagnetic radiation will repeat itself every orbital period. if the coupling to the electromagnetic field is weak, so that the orbit doesn't decay very much in one cycle, the radiation will be emitted in a pattern which repeats every period, so that the fourier transform will have frequencies which are only multiples of 1/"t". this is the classical radiation law: the frequencies emitted are integer multiples of 1/"t". | This is the distribution. You can see it plotted for various temperatures in the figure. |
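The figure referenced in the answer (its link is not preserved) is presumably a plot of the Planck blackbody distribution at several temperatures. A small sketch of how its two standard summary quantities shift with temperature, the peak wavelength (Wien) and the total emitted power (Stefan-Boltzmann); the temperatures are arbitrary examples:

```python
# Two standard summary quantities of the Planck (blackbody) distribution,
# evaluated at a few arbitrary temperatures.
WIEN_B = 2.897771955e-3   # m*K, Wien's displacement constant
SIGMA  = 5.670374419e-8   # W m^-2 K^-4, Stefan-Boltzmann constant

for T in (300.0, 3000.0, 6000.0):                # kelvin
    peak_nm = WIEN_B / T * 1e9                   # peak wavelength shifts as 1/T
    total_w = SIGMA * T**4                       # total emitted power grows as T^4
    print(f"T = {T:6.0f} K  peak ~ {peak_nm:7.0f} nm  total ~ {total_w:.2e} W/m^2")
```

Higher temperature moves the peak to shorter wavelengths and raises the whole curve, which is the qualitative change the question asks about.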
if co2 is heavier than air, why doesn't it all sink to the ground? | <p> since cfc molecules are heavier than air (nitrogen or oxygen), it is commonly believed that the cfc molecules cannot reach the stratosphere in significant amount. however, atmospheric gases are not sorted by weight; the forces of wind can fully mix the gases in the atmosphere. lighter cfcs are evenly distributed throughout the turbosphere and reach the upper atmosphere, although some of the heavier cfcs are not evenly distributed.
<p> in this developing region, low-grade fuels are used to meet high demands for food, and energy. so and co are released in the air, and due to deforestation and the growing amount of air pollution, the air pollutants in the atmosphere are slowly building up.
<p> at 106 g co/mj, the carbon dioxide emissions of peat are higher than those of coal (at 94.6 g co/mj) and natural gas (at 56.1). according to one study, increasing the average amount of wood in the fuel mixture from the current 2.6% to 12.5% would take the emissions down to 93 g co/mj. that said, little effort is being made to achieve this.
<p> as well as replacing coal with gas, which is cleaner and emits less carbon dioxide (co). opponents argue that these are outweighed by the potential environmental impacts, which include risks of ground and surface water contamination, air and noise pollution, and the triggering of earthquakes, along with the consequential hazards to public health and the environment.
<p> much of the carbon in the peat deposits produced by coal forests came from photosynthetic splitting of existing carbon dioxide, which released the accompanying split-off oxygen into the atmosphere. this process may have greatly increased the oxygen level, possibly as high as about 35%, making the air more easily breathable by animals with inefficient respiratory systems, as indicated by the size of "meganeura" compared to modern dragonflies.
<p> the effects of fossil fuels emissions, the largest contributor to climate change, cause rising co2 levels in the earth’s atmosphere. this raises atmospheric temperatures and levels of precipitation in the northwestern forested mountains. being a very mountainous region, weather patterns contribute higher levels of precipitation. this can cause landslides, channel erosion and floods. the warmer air temperatures also create more rain and less snow, something dangerous for many animal and tree species; with less snow pack comes more vulnerability for trees and insects.
<p> the atmosphere of soil, or soil gas, is very different from the atmosphere above. the consumption of oxygen by microbes and plant roots, and their release of carbon dioxide, decrease oxygen and increase carbon dioxide concentration. atmospheric co concentration is 0.04%, but in the soil pore space it may range from 10 to 100 times that level, thus potentially contributing to the inhibition of root respiration. calcareous soils regulate co concentration by carbonate buffering, contrary to acid soils in which all co respired accumulates in the soil pore system. at extreme levels co is toxic. this suggests a possible negative feedback control of soil co concentration through its inhibitory effects on root and microbial respiration (also called 'soil respiration'). in addition, the soil voids are saturated with water vapour, at least until the point of maximal hygroscopicity, beyond which a vapour-pressure deficit occurs in the soil pore space. adequate porosity is necessary, not just to allow the penetration of water, but also to allow gases to diffuse in and out. movement of gases is by diffusion from high concentrations to lower, the diffusion coefficient decreasing with soil compaction. oxygen from above atmosphere diffuses in the soil where it is consumed and levels of carbon dioxide in excess of above atmosphere diffuse out with other gases (including greenhouse gases) as well as water. soil texture and structure strongly affect soil porosity and gas diffusion. it is the total pore space (porosity) of soil, not the pore size, and the degree of pore interconnection (or conversely pore sealing), together with water content, air turbulence and temperature, that determine the rate of diffusion of gases into and out of soil. platy soil structure and soil compaction (low porosity) impede gas flow, and a deficiency of oxygen may encourage anaerobic bacteria to reduce (strip oxygen) from nitrate no to the gases n, no, and no, which are then lost to the atmosphere, thereby depleting the soil of nitrogen. aerated soil is also a net sink of methane ch but a net producer of methane (a strong heat-absorbing greenhouse gas) when soils are depleted of oxygen and subject to elevated temperatures. | If left undisturbed, CO2 *does* sink lower to the ground than oxygen, although they both form separate exponential profiles (it's not like the bottom half of a volume is 100% CO2 while the top half is 100% oxygen - they both have distributions that tail off exponentially with height - it's just that the CO2 /oxygen density ratio is higher at the bottom of a large undisturbed container than at the top). For example, in large grain silos where CO2 can chemically build up, the CO2 sinks more to the bottom. A worker who walks directly into the bottom of the silo may find that he has too much CO2/not enough oxygen and can't breath properly. From the Penn State agricultural website: "Like carbon dioxide, nitrogen dioxide is heavier than air so the highest concentration of gas is typically located at the silage surface, which is the area where a person will be going if they need to enter the silo for any reason." Out in the open atmosphere, there is enough motion in the air to keep the oxygen and carbon dioxide mixed up. This motion of the air mostly comes from convection currents (i.e. wind) caused by temperature differences. But if you go up high enough in the atmosphere, the temperature differences even out enough that the vertical mixing becomes insignificant. 
The higher part of the atmosphere does indeed have the different molecules settle out to different altitudes depending on their density. The altitude that marks the point separating the part of the atmosphere that is mixed from the higher part of the atmosphere that is not mixed is called the turbopause. In the region above the turbopause, hydrogen and helium are the lightest and form the highest parts of the atmosphere. Oxygen is heavier and is farther down in this region. The turbopause is at roughly 100 km above Earth's surface, far above the highest mountains and the altitudes where airplanes fly. UPDATE: I cleaned up the language to be more clear. |
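As a rough illustration of the exponential profiles mentioned in the answer above, here is a minimal Python sketch of the isothermal barometric distribution for O2 and CO2. The container height and temperature are illustrative assumptions, and the model deliberately ignores the chemical CO2 sources and bulk convection that dominate in a real silo; it only shows the size of the pure gravitational-settling effect at equilibrium.

```python
import numpy as np

# Isothermal barometric law: n(h) = n(0) * exp(-M * g * h / (R * T))
R = 8.314            # J/(mol K), gas constant
g = 9.81             # m/s^2
T = 293.0            # K, assumed temperature
M_O2, M_CO2 = 0.032, 0.044   # kg/mol

def scale_height(M):
    """Height over which the equilibrium density of a gas falls by a factor of e."""
    return R * T / (M * g)

def profile(M, h):
    """Equilibrium density at heights h (metres), relative to the floor."""
    return np.exp(-M * g * h / (R * T))

h = np.linspace(0.0, 10.0, 6)    # assumed 10 m tall, perfectly still container

print(f"scale height O2 : {scale_height(M_O2) / 1000:.1f} km")
print(f"scale height CO2: {scale_height(M_CO2) / 1000:.1f} km")

# CO2/O2 ratio at height h, relative to the ratio at the floor.
enrichment = profile(M_CO2, h) / profile(M_O2, h)
for height, e in zip(h, enrichment):
    print(f"h = {height:4.1f} m   CO2/O2 ratio vs floor: {e:.6f}")
```

At equilibrium the purely gravitational enrichment over a few metres is tiny (a few parts in ten thousand); the hazardous CO2 layers in silos come mainly from CO2 being generated near the bottom faster than it mixes away, which is consistent with the answer's point about chemical build-up.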
how is a singularity formed? | <p> it was known that singularities (including those that, roughly speaking, occur after the flow has continued for an infinite amount of time) must occur in many cases. however, any singularity that develops in a finite time is essentially a "pinching" along certain spheres corresponding to the prime decomposition of the 3-manifold. furthermore, any "infinite time" singularities result from certain collapsing pieces of the jsj decomposition. perelman's work proves this claim and thus proves the geometrization conjecture.
<p> an example of singularity formation is given by the ricci flow: richard s. hamilton showed that while short time solutions exist, singularities will usually form after a finite time. grigori perelman's solution of the poincaré conjecture depended on a deep study of these singularities, where he showed how to continue the solution past the singularities.
<p> while in a non-rotating black hole the singularity occurs at a single point in the model coordinates, called a "point singularity", in a rotating black hole, also known as a kerr black hole, the singularity occurs on a ring (a circular line), known as a "ring singularity". such a singularity may also theoretically become a wormhole.
<p> the fictional singularity corresponding to the powers (0, 0, 1) arises as a result of time line coordinates crossing over some 2-dimensional "focal surface". as pointed out in, a synchronous reference frame can always be chosen in such a way that this inevitable time line crossing occurs exactly on such a surface (instead of a 3-dimensional caustic surface). therefore, a solution with such a fictional singularity, simultaneous for the whole space, must exist with a full set of arbitrary functions needed for the general solution. close to the point "t" = 0 it allows a regular expansion by whole powers of "t".
<p> from concepts drawn from rotating black holes, it is shown that a singularity, spinning rapidly, can become a ring-shaped object. this results in two event horizons, as well as an ergosphere, which draw closer together as the spin of the singularity increases. when the outer and inner event horizons merge, they shrink toward the rotating singularity and eventually expose it to the rest of the universe.
<p> a coordinate singularity occurs when an apparent singularity or discontinuity occurs in one coordinate frame, which can be removed by choosing a different frame. an example is the apparent singularity at the 90 degree latitude in spherical coordinates. an object moving due north (for example, along the line 0 degrees longitude) on the surface of a sphere will suddenly experience an instantaneous change in longitude at the pole (in the case of the example, jumping from longitude 0 to longitude 180 degrees). this discontinuity, however, is only apparent; it is an artifact of the coordinate system chosen, which is singular at the poles. a different coordinate system would eliminate the apparent discontinuity, e.g. by replacing the latitude/longitude representation with an n-vector representation.
<p> in general relativity, a singularity is a place that objects or light rays can reach in a finite time where the curvature becomes infinite, or space-time stops being a manifold. singularities can be found in all the black-hole spacetimes, the schwarzschild metric, the reissner–nordström metric, the kerr metric and the kerr–newman metric and in all cosmological solutions that do not have a scalar field energy or a cosmological constant. | Any star collapsing beyond its Schwarzschild radius will develop a singularity and become a black hole. The Schwarzschild radius will then be the event horizon for that black hole. The event horizon is just the 'point of no return'. If you were to pass the event horizon of a black hole large enough you wouldn't see or feel anything different. Contrary to some beliefs the event horizon itself wouldn't annihilate you. |
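Since the answer above hinges on the Schwarzschild radius, here is a one-off Python sketch of the standard formula r_s = 2GM/c^2; the example masses are just illustrative values.

```python
G = 6.674e-11      # m^3 kg^-1 s^-2, gravitational constant
c = 2.998e8        # m/s, speed of light

def schwarzschild_radius(mass_kg: float) -> float:
    """Radius below which a mass of this size is inside its own event horizon."""
    return 2.0 * G * mass_kg / c**2

M_SUN = 1.989e30    # kg
M_EARTH = 5.972e24  # kg

print(f"Sun  : r_s = {schwarzschild_radius(M_SUN) / 1000:.1f} km")   # ~3 km
print(f"Earth: r_s = {schwarzschild_radius(M_EARTH) * 100:.1f} cm")  # ~0.9 cm
```

Any star compressed within its own r_s (about 3 km for a solar mass) is inside its event horizon, which is the "point of no return" the answer describes.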
how is meth different from adhd meds? | <p> in both adults and children, adhd has a high rate of comorbidity with other mental health disorders such as learning disability, conduct disorder, anxiety disorder, major depressive disorder, bipolar disorder, and substance use disorders.
<p> adhd is a neurodevelopmental disorder which is most pronounced in children. current pharmacological treatments consist of stimulant medications (e.g. methylphenidate), non-stimulant medication (e.g. atomoxetin) and α2 agonists. these medications have a great deal of adverse effects as well as being potentially addictive. developing alternative treatments is therefore desirable. "in vivo" studies show potential of using hr antagonists in adhd to aid in attention and cognitive activity by elevating release of neurotransmitters such as acetylcholine and dopamine.
<p> approximately 70% of those who use these stimulants see improvements in adhd symptoms. children with adhd who use stimulant medications generally have better relationships with peers and family members, generally perform better in school, are less distractible and impulsive, and have longer attention spans. people with adhd have an increased risk of substance use disorders, and stimulant medications reduce this risk. some studies suggest that since adhd diagnosis is increasing significantly around the world, using the drug may cause more harm than good in some populations using methylphenidate as a "study drug". this applies to people who potentially may be experiencing a different issue and are misdiagnosed with adhd. people in this category can then experience negative side-effects of the drug which worsen their condition, and make it harder for them to receive adequate care as providers around them may believe the drugs are sufficient and the problem lies with the user. methylphenidate is not approved for children under six years of age. immediate release methylphenidate is used daily along with the longer-acting form to achieve full-day control of symptoms.
<p> adhd has no single cause but can be genetically inherited in many cases, and roughly 76% of those diagnosed inherited it from their parent(s). for the remaining percentage of individuals, 14-15%, adhd may have been caused due to their environment, such as trauma in the womb or during birth . changes in the genes that influence the neurochemicals serotonin, dopamine, and norepinephrine levels can cause them to be overactive or under active, possibly playing a role in the development of an individual with adhd. it has also been shown that activity in the frontal lobe is decreased in an individual with adhd compared to an individual without adhd. the adult adhd self-reporting scale was created to estimate the pervasiveness of an adult with adhd in an easy self survey.
<p> due to these concerns regarding prevalence rates of adhd, the american academy of pediatrics (aap, 2000) and the national institute of health (nih, 1998) have stressed the need to develop new standardized, evidence-based assessments that have strong psychometric properties, and are easily administered in schools and other clinical settings.
<p> reviews of mri studies on individuals with adhd suggest that the long-term treatment of attention deficit hyperactivity disorder (adhd) with stimulants, such as amphetamine or methylphenidate, decreases abnormalities in brain structure and function found in subjects with adhd, and improves function in several parts of the brain, such as the right caudate nucleus of the basal ganglia.
<p> adhd is generally believed to be a children’s disorder and is not commonly studied in adults. however, adhd in adults results in lower household incomes, less educational achievement as well as a higher risk of marital issues and substance abuse. activities such as driving can be affected; adults who suffer from inattentiveness due to adhd experience increased rates of car accidents. adults with adhd tend to be more creative, vibrant, aware of multiple activities, and are able to multitask when interested in a certain topic. | Methamphetamine is a second-line treatment for ADHD. The difference between the version you get at the pharmacy and meth the street drug is that it is prescribed by a doctor and properly dosed and produced up to pharmaceutical standards. With amphetamines, dosage is extremely important as they are addictive and the relapse rate is high: Brecht ML, Herbeck D (June 2014). "Time to relapse following treatment for methamphetamine use: a long-term perspective on patterns and predictors". Drug Alcohol Depend. 139: 18–25. doi:10.1016/j.drugalcdep.2014.02.702. So I would say a proper dosage and regime with professional oversight are the main difference. |
how is meth different from adhd meds? | <p> in both adults and children, adhd has a high rate of comorbidity with other mental health disorders such as learning disability, conduct disorder, anxiety disorder, major depressive disorder, bipolar disorder, and substance use disorders.
<p> adhd is a neurodevelopmental disorder which is most pronounced in children. current pharmacological treatments consist of stimulant medications (e.g. methylphenidate), non-stimulant medication (e.g. atomoxetin) and α2 agonists. these medications have a great deal of adverse effects as well as being potentially addictive. developing alternative treatments is therefore desirable. "in vivo" studies show potential of using hr antagonists in adhd to aid in attention and cognitive activity by elevating release of neurotransmitters such as acetylcholine and dopamine.
<p> approximately 70% of those who use these stimulants see improvements in adhd symptoms. children with adhd who use stimulant medications generally have better relationships with peers and family members, generally perform better in school, are less distractible and impulsive, and have longer attention spans. people with adhd have an increased risk of substance use disorders, and stimulant medications reduce this risk. some studies suggest that since adhd diagnosis is increasing significantly around the world, using the drug may cause more harm than good in some populations using methylphenidate as a "study drug". this applies to people who potentially may be experiencing a different issue and are misdiagnosed with adhd. people in this category can then experience negative side-effects of the drug which worsen their condition, and make it harder for them to receive adequate care as providers around them may believe the drugs are sufficient and the problem lies with the user. methylphenidate is not approved for children under six years of age. immediate release methylphenidate is used daily along with the longer-acting form to achieve full-day control of symptoms.
<p> adhd has no single cause but can be genetically inherited in many cases, and roughly 76% of those diagnosed inherited it from their parent(s). for the remaining percentage of individuals, 14-15%, adhd may have been caused due to their environment, such as trauma in the womb or during birth . changes in the genes that influence the neurochemicals serotonin, dopamine, and norepinephrine levels can cause them to be overactive or under active, possibly playing a role in the development of an individual with adhd. it has also been shown that activity in the frontal lobe is decreased in an individual with adhd compared to an individual without adhd. the adult adhd self-reporting scale was created to estimate the pervasiveness of an adult with adhd in an easy self survey.
<p> due to these concerns regarding prevalence rates of adhd, the american academy of pediatrics (aap, 2000) and the national institute of health (nih, 1998) have stressed the need to develop new standardized, evidence-based assessments that have strong psychometric properties, and are easily administered in schools and other clinical settings.
<p> reviews of mri studies on individuals with adhd suggest that the long-term treatment of attention deficit hyperactivity disorder (adhd) with stimulants, such as amphetamine or methylphenidate, decreases abnormalities in brain structure and function found in subjects with adhd, and improves function in several parts of the brain, such as the right caudate nucleus of the basal ganglia.
<p> adhd is generally believed to be a children’s disorder and is not commonly studied in adults. however, adhd in adults results in lower household incomes, less educational achievement as well as a higher risk of marital issues and substance abuse. activities such as driving can be affected; adults who suffer from inattentiveness due to adhd experience increased rates of car accidents. adults with adhd tend to be more creative, vibrant, aware of multiple activities, and are able to multitask when interested in a certain topic. | Methamphetamine is actually prescribed sometimes for ADHD. Its drug name is Desoxyn. See: The only difference between Desoxyn and street meth is purity and formulation (although to be fair, formulation is pretty important for determining the effects of a drug, and as u/CanaryBean pointed out the route of administration is also important). |
how is meth different from adhd meds? | <p> in both adults and children, adhd has a high rate of comorbidity with other mental health disorders such as learning disability, conduct disorder, anxiety disorder, major depressive disorder, bipolar disorder, and substance use disorders.
<p> adhd is a neurodevelopmental disorder which is most pronounced in children. current pharmacological treatments consist of stimulant medications (e.g. methylphenidate), non-stimulant medication (e.g. atomoxetin) and α2 agonists. these medications have a great deal of adverse effects as well as being potentially addictive. developing alternative treatments is therefore desirable. "in vivo" studies show potential of using hr antagonists in adhd to aid in attention and cognitive activity by elevating release of neurotransmitters such as acetylcholine and dopamine.
<p> approximately 70% of those who use these stimulants see improvements in adhd symptoms. children with adhd who use stimulant medications generally have better relationships with peers and family members, generally perform better in school, are less distractible and impulsive, and have longer attention spans. people with adhd have an increased risk of substance use disorders, and stimulant medications reduce this risk. some studies suggest that since adhd diagnosis is increasing significantly around the world, using the drug may cause more harm than good in some populations using methylphenidate as a "study drug". this applies to people who potentially may be experiencing a different issue and are misdiagnosed with adhd. people in this category can then experience negative side-effects of the drug which worsen their condition, and make it harder for them to receive adequate care as providers around them may believe the drugs are sufficient and the problem lies with the user. methylphenidate is not approved for children under six years of age. immediate release methylphenidate is used daily along with the longer-acting form to achieve full-day control of symptoms.
<p> adhd has no single cause but can be genetically inherited in many cases, and roughly 76% of those diagnosed inherited it from their parent(s). for the remaining percentage of individuals, 14-15%, adhd may have been caused due to their environment, such as trauma in the womb or during birth . changes in the genes that influence the neurochemicals serotonin, dopamine, and norepinephrine levels can cause them to be overactive or under active, possibly playing a role in the development of an individual with adhd. it has also been shown that activity in the frontal lobe is decreased in an individual with adhd compared to an individual without adhd. the adult adhd self-reporting scale was created to estimate the pervasiveness of an adult with adhd in an easy self survey.
<p> due to these concerns regarding prevalence rates of adhd, the american academy of pediatrics (aap, 2000) and the national institute of health (nih, 1998) have stressed the need to develop new standardized, evidence-based assessments that have strong psychometric properties, and are easily administered in schools and other clinical settings.
<p> reviews of mri studies on individuals with adhd suggest that the long-term treatment of attention deficit hyperactivity disorder (adhd) with stimulants, such as amphetamine or methylphenidate, decreases abnormalities in brain structure and function found in subjects with adhd, and improves function in several parts of the brain, such as the right caudate nucleus of the basal ganglia.
<p> adhd is generally believed to be a children’s disorder and is not commonly studied in adults. however, adhd in adults results in lower household incomes, less educational achievement as well as a higher risk of marital issues and substance abuse. activities such as driving can be affected; adults who suffer from inattentiveness due to adhd experience increased rates of car accidents. adults with adhd tend to be more creative, vibrant, aware of multiple activities, and are able to multitask when interested in a certain topic. | Most of the good stuff has been covered, but what hasn't been covered is that both amphetamine and methamphetamine are analogues of a chemical that is already in your body called phenethylamine. This is used by your body to regulate dopamine and a number of other neurotransmitters, and all that amphetamine and methamphetamine do are to replicate the action of this normal body chemical. |
how is meth different from adhd meds? | <p> in both adults and children, adhd has a high rate of comorbidity with other mental health disorders such as learning disability, conduct disorder, anxiety disorder, major depressive disorder, bipolar disorder, and substance use disorders.
<p> adhd is a neurodevelopmental disorder which is most pronounced in children. current pharmacological treatments consist of stimulant medications (e.g. methylphenidate), non-stimulant medication (e.g. atomoxetin) and α2 agonists. these medications have a great deal of adverse effects as well as being potentially addictive. developing alternative treatments is therefore desirable. "in vivo" studies show potential of using hr antagonists in adhd to aid in attention and cognitive activity by elevating release of neurotransmitters such as acetylcholine and dopamine.
<p> approximately 70% of those who use these stimulants see improvements in adhd symptoms. children with adhd who use stimulant medications generally have better relationships with peers and family members, generally perform better in school, are less distractible and impulsive, and have longer attention spans. people with adhd have an increased risk of substance use disorders, and stimulant medications reduce this risk. some studies suggest that since adhd diagnosis is increasing significantly around the world, using the drug may cause more harm than good in some populations using methylphenidate as a "study drug". this applies to people who potentially may be experiencing a different issue and are misdiagnosed with adhd. people in this category can then experience negative side-effects of the drug which worsen their condition, and make it harder for them to receive adequate care as providers around them may believe the drugs are sufficient and the problem lies with the user. methylphenidate is not approved for children under six years of age. immediate release methylphenidate is used daily along with the longer-acting form to achieve full-day control of symptoms.
<p> adhd has no single cause but can be genetically inherited in many cases, and roughly 76% of those diagnosed inherited it from their parent(s). for the remaining percentage of individuals, 14-15%, adhd may have been caused due to their environment, such as trauma in the womb or during birth . changes in the genes that influence the neurochemicals serotonin, dopamine, and norepinephrine levels can cause them to be overactive or under active, possibly playing a role in the development of an individual with adhd. it has also been shown that activity in the frontal lobe is decreased in an individual with adhd compared to an individual without adhd. the adult adhd self-reporting scale was created to estimate the pervasiveness of an adult with adhd in an easy self survey.
<p> due to these concerns regarding prevalence rates of adhd, the american academy of pediatrics (aap, 2000) and the national institute of health (nih, 1998) have stressed the need to develop new standardized, evidence-based assessments that have strong psychometric properties, and are easily administered in schools and other clinical settings.
<p> reviews of mri studies on individuals with adhd suggest that the long-term treatment of attention deficit hyperactivity disorder (adhd) with stimulants, such as amphetamine or methylphenidate, decreases abnormalities in brain structure and function found in subjects with adhd, and improves function in several parts of the brain, such as the right caudate nucleus of the basal ganglia.
<p> adhd is generally believed to be a children’s disorder and is not commonly studied in adults. however, adhd in adults results in lower household incomes, less educational achievement as well as a higher risk of marital issues and substance abuse. activities such as driving can be affected; adults who suffer from inattentiveness due to adhd experience increased rates of car accidents. adults with adhd tend to be more creative, vibrant, aware of multiple activities, and are able to multitask when interested in a certain topic. | The biggest difference is dosage. Psychiatrists will typically decrease dosage if a patient reports inadequate sleep, and provided the patient is providing accurate information, this usually results in a dosage that isn't incredibly euphoric or "recreational." Meth users, on the other hand, often take dosages that keep them up for days. Even one night of sleep deprivation is neurotoxic, so recurrent recreational meth use is extremely dangerous through chronic sleep deprivation alone. |
how is meth different from adhd meds? | <p> in both adults and children, adhd has a high rate of comorbidity with other mental health disorders such as learning disability, conduct disorder, anxiety disorder, major depressive disorder, bipolar disorder, and substance use disorders.
<p> adhd is a neurodevelopmental disorder which is most pronounced in children. current pharmacological treatments consist of stimulant medications (e.g. methylphenidate), non-stimulant medication (e.g. atomoxetin) and α2 agonists. these medications have a great deal of adverse effects as well as being potentially addictive. developing alternative treatments is therefore desirable. "in vivo" studies show potential of using hr antagonists in adhd to aid in attention and cognitive activity by elevating release of neurotransmitters such as acetylcholine and dopamine.
<p> approximately 70% of those who use these stimulants see improvements in adhd symptoms. children with adhd who use stimulant medications generally have better relationships with peers and family members, generally perform better in school, are less distractible and impulsive, and have longer attention spans. people with adhd have an increased risk of substance use disorders, and stimulant medications reduce this risk. some studies suggest that since adhd diagnosis is increasing significantly around the world, using the drug may cause more harm than good in some populations using methylphenidate as a "study drug". this applies to people who potentially may be experiencing a different issue and are misdiagnosed with adhd. people in this category can then experience negative side-effects of the drug which worsen their condition, and make it harder for them to receive adequate care as providers around them may believe the drugs are sufficient and the problem lies with the user. methylphenidate is not approved for children under six years of age. immediate release methylphenidate is used daily along with the longer-acting form to achieve full-day control of symptoms.
<p> adhd has no single cause but can be genetically inherited in many cases, and roughly 76% of those diagnosed inherited it from their parent(s). for the remaining percentage of individuals, 14-15%, adhd may have been caused due to their environment, such as trauma in the womb or during birth . changes in the genes that influence the neurochemicals serotonin, dopamine, and norepinephrine levels can cause them to be overactive or under active, possibly playing a role in the development of an individual with adhd. it has also been shown that activity in the frontal lobe is decreased in an individual with adhd compared to an individual without adhd. the adult adhd self-reporting scale was created to estimate the pervasiveness of an adult with adhd in an easy self survey.
<p> due to these concerns regarding prevalence rates of adhd, the american academy of pediatrics (aap, 2000) and the national institute of health (nih, 1998) have stressed the need to develop new standardized, evidence-based assessments that have strong psychometric properties, and are easily administered in schools and other clinical settings.
<p> reviews of mri studies on individuals with adhd suggest that the long-term treatment of attention deficit hyperactivity disorder (adhd) with stimulants, such as amphetamine or methylphenidate, decreases abnormalities in brain structure and function found in subjects with adhd, and improves function in several parts of the brain, such as the right caudate nucleus of the basal ganglia.
<p> adhd is generally believed to be a children’s disorder and is not commonly studied in adults. however, adhd in adults results in lower household incomes, less educational achievement as well as a higher risk of marital issues and substance abuse. activities such as driving can be affected; adults who suffer from inattentiveness due to adhd experience increased rates of car accidents. adults with adhd tend to be more creative, vibrant, aware of multiple activities, and are able to multitask when interested in a certain topic. | Doctor here. I don't see this mentioned in any of the top responses, so I'll give a try at explaining. Sometimes the drugs used to treat ADHD are methamphetamine. Sometimes they are similar drugs of the same class. They basically act in a similar way: they are stimulants that increase the availability of catecholamines in the synapse. But a lot of what causes addiction in drugs is the speed at which they act. Faster-acting drugs tend to cause more addiction. The rush is higher, and the crash afterwards is more intense, too. This makes you want to go back and take another. This is one of the reasons why heroin causes more addiction than methadone, even though both act basically on the same opiate receptors. ADHD medications are usually taken orally, which has a slower absorption and lower peak of effect than if they were smoked, inhaled or injected, so they tend to cause less addiction, too. Some of the drugs used most effectively to treat ADHD have a longer half-life, either because they are absorbed more slowly, or because they need to be metabolized in our bodies to produce the most active form of the drug. This speed affects the way they act. Additionally, the context and circumstances matter a lot. Getting a drug from a doctor, you know that you have oversight and you can't just go buy more. It's a controlled situation. It's very different from buying it recreationally where only you decide when to get more. Also, having low levels of life satisfaction can increase your chances of becoming addicted. If you are diagnosed with ADHD by a doctor, generally that means you have either a family that cares enough to take you to the doctor, or a job that allows you to afford it, both of which make it less likely that you'll become addicted. Edit: there's a great explanation here |
how is meth different from adhd meds? | <p> in both adults and children, adhd has a high rate of comorbidity with other mental health disorders such as learning disability, conduct disorder, anxiety disorder, major depressive disorder, bipolar disorder, and substance use disorders.
<p> adhd is a neurodevelopmental disorder which is most pronounced in children. current pharmacological treatments consist of stimulant medications (e.g. methylphenidate), non-stimulant medication (e.g. atomoxetin) and α2 agonists. these medications have a great deal of adverse effects as well as being potentially addictive. developing alternative treatments is therefore desirable. "in vivo" studies show potential of using hr antagonists in adhd to aid in attention and cognitive activity by elevating release of neurotransmitters such as acetylcholine and dopamine.
<p> approximately 70% of those who use these stimulants see improvements in adhd symptoms. children with adhd who use stimulant medications generally have better relationships with peers and family members, generally perform better in school, are less distractible and impulsive, and have longer attention spans. people with adhd have an increased risk of substance use disorders, and stimulant medications reduce this risk. some studies suggest that since adhd diagnosis is increasing significantly around the world, using the drug may cause more harm than good in some populations using methylphenidate as a "study drug". this applies to people who potentially may be experiencing a different issue and are misdiagnosed with adhd. people in this category can then experience negative side-effects of the drug which worsen their condition, and make it harder for them to receive adequate care as providers around them may believe the drugs are sufficient and the problem lies with the user. methylphenidate is not approved for children under six years of age. immediate release methylphenidate is used daily along with the longer-acting form to achieve full-day control of symptoms.
<p> adhd has no single cause but can be genetically inherited in many cases, and roughly 76% of those diagnosed inherited it from their parent(s). for the remaining percentage of individuals, 14-15%, adhd may have been caused due to their environment, such as trauma in the womb or during birth . changes in the genes that influence the neurochemicals serotonin, dopamine, and norepinephrine levels can cause them to be overactive or under active, possibly playing a role in the development of an individual with adhd. it has also been shown that activity in the frontal lobe is decreased in an individual with adhd compared to an individual without adhd. the adult adhd self-reporting scale was created to estimate the pervasiveness of an adult with adhd in an easy self survey.
<p> due to these concerns regarding prevalence rates of adhd, the american academy of pediatrics (aap, 2000) and the national institute of health (nih, 1998) have stressed the need to develop new standardized, evidence-based assessments that have strong psychometric properties, and are easily administered in schools and other clinical settings.
<p> reviews of mri studies on individuals with adhd suggest that the long-term treatment of attention deficit hyperactivity disorder (adhd) with stimulants, such as amphetamine or methylphenidate, decreases abnormalities in brain structure and function found in subjects with adhd, and improves function in several parts of the brain, such as the right caudate nucleus of the basal ganglia.
<p> adhd is generally believed to be a children’s disorder and is not commonly studied in adults. however, adhd in adults results in lower household incomes, less educational achievement as well as a higher risk of marital issues and substance abuse. activities such as driving can be affected; adults who suffer from inattentiveness due to adhd experience increased rates of car accidents. adults with adhd tend to be more creative, vibrant, aware of multiple activities, and are able to multitask when interested in a certain topic. | Ok so apparently I am the first medicinal chemist to discover this post! I have some things I can shed light on that nobody else has seemed to cover! So, yes, amphetamine, the main ingredient in Adderall, is extremely similar to methamphetamine. In fact, meth is simply amphetamine with an added methyl group at the N-position. The addition of this methyl group has two consequences that make methamphetamine a more powerful drug than amphetamine. 1. The methyl group makes the molecule overall more lipophilic (fat-soluble). As such, fat-soluble compounds diffuse across the blood-brain barrier much more quickly and in higher concentrations. This in turn elicits a more powerful rush and euphoric high, because the drug rushes into the brain much quicker. This effect is enhanced by quicker routes of administration such as smoking or injecting that already send a large amount of the drug directly to the bloodstream. 2. The methyl group has effects on metabolism. Methamphetamine is active on its own, but as soon as it enters the body, the methyl group is slowly cleaved as the molecule is metabolized into amphetamine. This increases the duration of the drug's effects by a large percentage, because not only does methamphetamine have to go through its elimination half-life before it is cleared from the body, but the methamphetamine that is metabolized into amphetamine is active on its own, and must go through its own half-life just as if someone had taken the amphetamine alone. So yeah, meth is innately a stronger and more euphoric/addictive drug than amphetamine because of these medicinal chemistry properties, but I would argue that this isn't what makes street meth so much more dangerous than prescription meth; the other answers reflect this a lot better. The purity of the drug is a huge danger as you don't know the exact ingredients like you would with pharm-grade drugs. The lack of accurately measured dosages is a big danger, especially since even 10 mg of meth may be cut with 5 mg or more of inactive or different ingredients with unknown effects. Also, people redose and redose for days on end because you can buy tons of meth in powder form; this is when amphetamine psychosis kicks in and people start doing stereotypical meth head shit. Amphetamine psychosis can happen to people on ADD meds too; I saw it happen to my GF in college as she picked bugs out of her face even when she knew they were not there. And yeah, the worst thing about street meth/amphetamines vs ADD meds is route of administration. Just as I said, the pharmacological differences of meth are enhanced by more direct methods of administration such as smoking or injecting; these are the methods most often associated with the most danger. There isn't really a way to achieve the same type of rush from prepared ADHD medications as one does from smoking or injecting straight crystalline forms of the drug. 
Now in the UK, speed, which is a clandestine amphetamine preparation, is popular, and I am sure you see all of the same shit you see from meth in the US, despite the fact that amphetamine is the same chemical that is in Adderall. Preparation, method of administration and dosage measurement are the main differences between street and ADHD stimulants. |
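To make the half-life point in the last answer concrete, here is a small Python sketch of a one-compartment parent-metabolite model: a single dose of the parent drug is eliminated while a fraction of it is converted into an active metabolite with its own elimination half-life. The half-lives, the conversion fraction, and the single-dose setup are illustrative assumptions, not clinical values.

```python
import numpy as np

# Illustrative one-compartment model: parent drug eliminated with rate k_m;
# a fraction f of it is converted into an active metabolite eliminated with
# rate k_a. All numbers below are assumptions for the sake of the example.
t_half_parent     = 10.0   # h, assumed parent half-life
t_half_metabolite = 11.0   # h, assumed metabolite half-life
f_convert         = 0.5    # assumed fraction of parent converted to metabolite

k_m = np.log(2) / t_half_parent
k_a = np.log(2) / t_half_metabolite

t = np.linspace(0, 48, 7)   # hours after a single dose
dose = 1.0                  # arbitrary units

parent = dose * np.exp(-k_m * t)
# Bateman equation for the metabolite formed from the parent
if abs(k_m - k_a) < 1e-9:   # limiting form when the rate constants coincide
    metabolite = f_convert * dose * k_m * t * np.exp(-k_m * t)
else:
    metabolite = f_convert * dose * k_m / (k_a - k_m) * (np.exp(-k_m * t) - np.exp(-k_a * t))

for ti, p, m in zip(t, parent, metabolite):
    print(f"t = {ti:4.0f} h   parent {p:.2f}   metabolite {m:.2f}   total active {p + m:.2f}")
```

Because the metabolite is itself active, the total active exposure decays more slowly than the parent alone, which is the longer-duration effect the answer describes.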
if we know an asteroid's mass and velocity, why does it have a "chance" of hitting the earth? | <p> for asteroids that are actually on track to hit earth the predicted probability of impact continues to increase as more observations are made. this initially very similar pattern makes it difficult to quickly differentiate between asteroids which will be millions of kilometres from earth and those which will actually hit it. this in turn makes it difficult to decide when to raise an alarm as gaining more certainty takes time, which reduces the time available to react to a predicted impact. however raising the alarm too soon has the danger of causing a false alarm and creating a boy who cried wolf effect if the asteroid in fact misses earth.
<p> for asteroids that are actually on track to hit earth the predicted probability of impact continues to increase as more observations are made. this similar pattern makes it difficult to differentiate between asteroids that will only come close to earth and those that will actually hit it. this in turn makes it difficult to decide when to raise an alarm as gaining more certainty takes time, which reduces time available to react to a predicted impact. however raising the alarm too soon has the danger of causing a false alarm and creating a boy who cried wolf effect if the asteroid in fact misses earth.
<p> the energy released by an impactor depends on diameter, density, velocity, and angle. the diameter of most near-earth asteroids that have not been studied by radar or infrared can generally only be estimated within about a factor of two based on the asteroid brightness. the density is generally assumed because the diameter and mass are also generally estimates. due to earth's escape velocity, the minimum impact velocity is 11 km/s with asteroid impacts averaging around 17 km/s on the earth. the most probable impact angle is 45 degrees.
<p> bullet::::- 17 july – astronomers rule out the chances of an asteroid hitting earth in september 2019 by eliminating the possibility of its passing through an area where it would have to be if it were on an impacting orbit. prior to this, the asteroid had been given a one-in-7,000 chance of hitting earth.
<p> a number of considerations arise concerning means for avoiding a devastating collision with an asteroidal object, should one be discovered on a trajectory that were determined to lead to earth impact at some future date. one of the main challenges is how to transmit the impulse required (possibly quite large), to an asteroid of unknown mass, composition, and mechanical strength, without shattering it into fragments, some of which might be themselves dangerous to earth if left in a collision orbit.
<p> analysis of the uncertainty involved in nuclear deflection shows that the ability to protect the planet does not imply the ability to target the planet. a nuclear explosion that changes an asteroid's velocity by 10 meters/second (plus or minus 20%) would be adequate to push it out of an earth-impacting orbit. however, if the uncertainty of the velocity change was more than a few percent, there would be no chance of directing the asteroid to a particular target.
<p> for larger asteroids ( 100m to 1 km across), prediction is based on cataloging the asteroid, years to centuries before it could impact. this technique is possible as they can be seen from a long distance due to their large size. their orbits therefore can be measured and any future impacts predicted long before they are on their final approach to earth. this long period of warning is important as an impact from a 1 km object would cause worldwide damage and a long lead time would be needed to deflect it away from earth. as of 2018, the inventory is nearly complete for the kilometer-size objects (around 900) which would cause global damage, and approximately one third complete for 140 meter objects (around 8500) which would cause major regional damage. | Basically, we *don't* know the position, velocity, or mass (although mass isn't very important here). You have to take measurements over a period of time to track its movement, and there are always uncertainties. Sometimes we don't even spot an asteroid until it's on the way out of the Earth-moon system. But even with known asteroids, there's a range of possible orbits that are all within observational uncertainty, and some fraction of those will intersect the earth - that gives the odds that it'll hit the Earth. Additionally, sometimes there are very sensitive unstable points in an orbit. Here, a very small difference in position at this point can cause a huge difference in possible trajectories in the future. If an asteroid passes through a narrow keyhole it should collide with the Earth on its next pass - but this is a very narrow window, and you can't really tell if an asteroid is going to pass through it until it's basically already there. For the second part, basically it's actually quite hard to get "sucked into" a planet or star (or even black hole). If you fall in from far away, you accelerate as you fall inwards, such that you are always above escape velocity. The gravity of the planet/star/black-hole can help "focus" incoming orbits, so you don't have to be aimed *exactly* at the planet or whatever, but you typically do have to aim pretty close to actually hit the object. This is also why supermassive black holes don't just suck up the whole galaxy. |
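The answer above describes impact odds as the fraction of orbits consistent with the observations that actually intersect Earth, plus a small boost from gravitational focusing. Below is a deliberately simplified Monte Carlo sketch of that idea in Python; the nominal miss distance, its uncertainty, and the approach speed are hypothetical numbers, and real impact monitoring works with full 3-D orbit fits rather than a single Gaussian miss distance.

```python
import numpy as np

rng = np.random.default_rng(0)

R_EARTH = 6.371e6   # m
V_ESC   = 11.2e3    # m/s, Earth escape velocity at the surface
v_inf   = 15e3      # m/s, assumed hyperbolic excess speed of the asteroid

# Gravitational focusing: an asteroid whose straight-line (unperturbed) miss
# distance is b still hits if b <= b_eff.
b_eff = R_EARTH * np.sqrt(1.0 + (V_ESC / v_inf) ** 2)

# Assumed observation: nominal unperturbed miss distance and 1-sigma uncertainty,
# both hypothetical numbers standing in for a real orbit fit.
b_nominal = 3.0e7   # m (~4.7 Earth radii)
sigma_b   = 2.0e7   # m

samples = rng.normal(b_nominal, sigma_b, size=1_000_000)
p_impact = np.mean(np.abs(samples) <= b_eff)

print(f"effective capture radius    : {b_eff / R_EARTH:.2f} Earth radii")
print(f"estimated impact probability: {p_impact:.3%}")
```

With these made-up numbers the sampled probability comes out around ten percent; tightening the uncertainty with more observations pushes it toward either zero or certainty, which is the behaviour described in the first two context paragraphs.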
is the twin paradox an actual paradox in curved space? | <p> in physics, the twin paradox is a thought experiment in special relativity involving identical twins, one of whom makes a journey into space in a high-speed rocket and returns home to find that the twin who remained on earth has aged more. this result appears puzzling because each twin sees the other twin as moving, and so, according to an incorrect and naive application of time dilation and the principle of relativity, each should paradoxically find the other to have aged less. however, this scenario can be resolved within the standard framework of special relativity: the travelling twin's trajectory involves two different inertial frames, one for the outbound journey and one for the inbound journey, and so there is no symmetry between the spacetime paths of the twins. therefore, the twin paradox is not a paradox in the sense of a logical contradiction.
<p> the twin paradox is a thought experiment involving identical twins, one of whom makes a journey into space in a high-speed rocket, returning home to find that the twin who remained on earth has aged more. this result appears puzzling because each twin observes the other twin as moving, and so at first glance, it would appear that each should find the other to have aged less. the twin paradox sidesteps the justification for mutual time dilation presented above by avoiding the requirement for a third clock. nevertheless, the "twin paradox" is not a true paradox because it is easily understood within the context of special relativity.
<p> the impression that a paradox exists stems from a misunderstanding of what special relativity states. special relativity does not declare all frames of reference to be equivalent, only inertial frames. the traveling twin's frame is not inertial during periods when she is accelerating. furthermore, the difference between the twins is observationally detectable: the traveling twin needs to fire her rockets to be able to return home, while the stay-at-home twin does not.
<p> if einstein showed that space-time was curved, nottale shows that it is not only curved, but also fractal. nottale has proven a key theorem which shows that a space which is continuous and non-differentiable is necessarily fractal. it means that such a space depends on scale.
<p> for the classic peano and hilbert space-filling curves, where two subcurves intersect (in the technical sense), there is self-contact without self-crossing. a space-filling curve can be (everywhere) self-crossing if its approximation curves are self-crossing. a space-filling curve's approximations can be self-avoiding, as the figures above illustrate. in 3 dimensions, self-avoiding approximation curves can even contain knots. approximation curves remain within a bounded portion of "n"-dimensional space, but their lengths increase without bound.
<p> the resolution of the paradox again lies in the relativity of simultaneity (ferraro 2007). the length of a physical object is defined as the distance between two "simultaneous" events occurring at each end of the body, and since simultaneity is relative, so is this length. this variability in length is just the lorentz contraction. similarly, a physical angle is defined as the angle formed by three "simultaneous" events, and this angle will also be a relative quantity. in the above paradox, although the rod and the plane of the ring are parallel in the rest frame of the ring, they are not parallel in the rest frame of the rod. the uncontracted rod passes through the lorentz-contracted ring because the plane of the ring is rotated relative to the rod by an amount sufficient to let the rod pass through.
<p> bullet::::- nielsen realization problem. kravetz claimed to solve this in 1959 by first showing that teichmuller space is negatively curved, but in 1974 masur showed that it is not negatively curved. the nielsen realization problem was finally solved in 1980 by kerckhoff. | Funny enough, my PhD advisor wrote a paper addressing exactly this question. He and a collaborator looked at a twin-paradox-type setup in *compact spaces*, where if you travel far enough you come back "out the other side." The resolution is that the compact nature of the universe would pick out a preferred frame, throwing out the notion, from flat space, that both twins' age measurements are equally correct. Here's why. In a compact universe, faraway points are *identified*, meaning we declare that they're the same point. This is a lot like what happens on a circle; we say that the angles 0 and 2π (or 360°) are one and the same. If we didn't identify those points, then we'd have a line segment instead of a circle. So in a compact universe, different spatial points would be identified with each other. But this picks out a preferred frame in which all spatial points are identified with other points *at the same time*. In another frame, point A would be identified with point B 100 years in its future, for example. (Of course, this doesn't mean anything is travelling through time. It's just an artifact of the fact that different observers will disagree on whether two events occurred simultaneously.) It's a short and nicely-written paper; even if you skim over the math in the middle you should be able to pick up a lot. |
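For readers who want the flat-space numbers behind the context paragraphs, here is a tiny Python sketch of the standard special-relativistic proper-time calculation for the two twins; the speed and trip duration are arbitrary example values, the turnaround is treated as instantaneous, and the sketch deliberately ignores the compact-space subtlety the answer is actually about.

```python
import math

c = 1.0          # work in units where c = 1
v = 0.8          # traveller's speed as a fraction of c (example value)
t_earth = 10.0   # years elapsed on Earth between departure and return

gamma = 1.0 / math.sqrt(1.0 - (v / c) ** 2)
t_traveller = t_earth / gamma   # proper time along the traveller's two-leg worldline

print(f"gamma = {gamma:.3f}")
print(f"Earth twin ages      : {t_earth:.1f} years")
print(f"Travelling twin ages : {t_traveller:.1f} years")
```

With these numbers the travelling twin ages 6 years while the Earth twin ages 10, which is the asymmetry the context paragraphs describe.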
do you get tanned through normal windows? | <p> cabin windows, made from much lighter than glass stretched acrylic glass, consists of multiple panes: an outer one built to support four times the maximum cabin pressure, an inner one for redundancy and a scratch pane near the passenger. acrylic is susceptible to crazing : a network of fine cracks appears but can be polished to restore optical transparency, removal and polishing typically undergo every 2–3 years for uncoated windows.
<p> to reduce the heat transfer from a surface, such as a glass window, a clear reflective film with a low emissivity coating can be placed on the interior of the surface. “low-emittance (low-e) coatings are microscopically thin, virtually invisible, metal or metallic oxide layers deposited on a window or skylight glazing surface primarily to reduce the u-factor by suppressing radiative heat flow”. by adding this coating we are limiting the amount of radiation that leaves the window thus increasing the amount of heat that is retained inside the window.
<p> a warm filter is a photographic filter that improves the color of all skin tones and absorbs blue cast often caused by electronic flash or outdoor shade. they add warmth to pale, washed-out flesh tones and are ideal for portraits as they smooth facial details while adding warmth to skin tones (for color imaging).
<p> window tints can be used in applications like shopfront windows, office block windows, and house windows. this is often done to increase privacy, and decrease heating and cooling costs. window tints are used in some energy efficient buildings.
<p> when the temperature is high and the relative humidity is low, evaporation of water is rapid; soil dries, wet clothes hung on a line or rack dry quickly, and perspiration readily evaporates from the skin. wooden furniture can shrink, causing the paint that covers these surfaces to fracture.
<p> in construction, capping or window capping (window cladding, window wrapping) refers to the application of aluminum or vinyl sheeting cut and formed with a brake to fit over the exterior, wood trim of a building. the aluminum is intended to make aging trim with peeling paint look better, reduce future paint maintenance, and provide a weather-proof layer to control the infiltration of water.
<p> often, clerestory windows also shine onto interior wall surfaces painted white or another light color. these walls are placed so as to reflect indirect light to interior areas where it is needed. this method has the advantage of reducing the directionality of light to make it softer and more diffuse, reducing shadows. | Nothing is completely transparent to everything in the EM spectrum. Even windows absorb small amounts of light - consider the "greening" effect of facing "infinite" mirrors. They assuredly block a decent amount of shorter wavelength UV light, and you can get commercial glass designed to be almost entirely opaque to the UV spectrum. While that won't prevent you from "cooking skin" via infrared, it will slow any attempt at tanning to a noticeable degree. |
what would be the effects on gps if einstein didn't discover relativity. (effects of time dilation on everyday technology) | <p> time dilation is of practical importance. for instance, the clocks in gps satellites experience this effect due to the reduced gravity they experience (making their clocks appear to run more quickly than those on earth) and must therefore incorporate relativistically corrected calculations when reporting locations to users. if general relativity were not accounted for, a navigational fix based on the gps satellites would be false after only 2 minutes, and errors in global positions would continue to accumulate at a rate of about 10 kilometers each day.
<p> the effect of gravitational frequency shift on the gps due to general relativity is that a clock closer to a massive object will be slower than a clock farther away. applied to the gps, the receivers are much closer to earth than the satellites, causing the gps clocks to be faster by a factor of 5×10^(−10), or about 45.9 μs/day. this gravitational frequency shift is noticeable.
<p> although the global positioning system (gps) is not designed as a test of fundamental physics, it must account for the gravitational redshift in its timing system, and physicists have analyzed timing data from the gps to confirm other tests. when the first satellite was launched, some engineers resisted the prediction that a noticeable gravitational time dilation would occur, so the first satellite was launched without the clock adjustment that was later built into subsequent satellites. it showed the predicted shift of 38 microseconds per day. this rate of discrepancy is sufficient to substantially impair function of gps within hours if not accounted for. an excellent account of the role played by general relativity in the design of gps can be found in ashby 2003.
<p> special and general relativity predict that the clocks on the gps satellites would be seen by the earth's observers to run 38 microseconds faster per day than the clocks on the earth. the gps-calculated positions would quickly drift into error, accumulating to about 10 kilometers per day. this was corrected for in the design of gps.
<p> later tests can be done with the global positioning system (gps), which must account for the gravitational redshift in its timing system, and physicists have analyzed timing data from the gps to confirm other tests. when the first satellite was launched, it showed the predicted shift of 38 microseconds per day. this rate of the discrepancy is sufficient to substantially impair the function of gps within hours if not accounted for. an excellent account of the role played by general relativity in the design of gps can be found in ashby 2003.
<p> to calculate the amount of daily time dilation experienced by gps satellites relative to earth we need to separately determine the amounts due to special relativity (velocity) and general relativity (gravity) and add them together.
<p> inconsistencies of atmospheric conditions affect the speed of the gps signals as they pass through the earth's atmosphere, especially the ionosphere. correcting these errors is a significant challenge to improving gps position accuracy. these effects are smallest when the satellite is directly overhead and become greater for satellites nearer the horizon since the path through the atmosphere is longer (see airmass). once the receiver's approximate location is known, a mathematical model can be used to estimate and compensate for these errors. | Since GPS depends on synchronizing time between satellites and Earth (it uses time to determine distance), a GPS system without relativity compensation would probably see the location accuracy of the determined location slowly worsen over time. We would probably notice this, do experiments with stationary receivers on Earth, find some kind of function that approximates the time drift, and apply it as a correction to the satellite time that is transmitted. By this time we would probably be wondering why the same atomic clock, which is supposed to be super precise, would begin drifting apart when all that is different is its location and velocity. We would probably engineer the equation, based on empirical observation, before a theorist comes up with the theory. |
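The next-to-last context paragraph says the two relativistic terms can be computed separately and added; a back-of-the-envelope Python version of that calculation is sketched below. The orbital radius is the standard approximate GPS value, and the two-term model ignores smaller corrections (orbital eccentricity, the ground clock's own motion, the Sagnac effect), so it only roughly reproduces the 38 microseconds per day and roughly 10 km per day figures quoted above.

```python
import math

G   = 6.674e-11   # m^3 kg^-1 s^-2
M   = 5.972e24    # kg, Earth mass
c   = 2.998e8     # m/s
R   = 6.371e6     # m, Earth radius (ground clock)
r   = 2.6571e7    # m, approximate GPS orbital radius
day = 86400.0     # s

v = math.sqrt(G * M / r)   # circular orbital speed, ~3.9 km/s

# Special relativity: the moving satellite clock runs slow.
sr_shift = -(v ** 2) / (2 * c ** 2)
# General relativity: the clock higher in the gravity well runs fast.
gr_shift = G * M * (1 / R - 1 / r) / c ** 2

net = sr_shift + gr_shift
print(f"velocity term : {sr_shift * day * 1e6:6.1f} microseconds/day")    # ~ -7
print(f"gravity term  : {gr_shift * day * 1e6:6.1f} microseconds/day")    # ~ +46
print(f"net offset    : {net * day * 1e6:6.1f} microseconds/day")         # ~ +38
print(f"ranging error if uncorrected: {net * day * c / 1000:.1f} km/day") # ~ 11
```

Multiplying the net clock drift by the speed of light is what turns a microsecond-scale timing error into the kilometre-scale position error described in the context.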
can there be an arctic methane release large enough to cause an extinction level event and how long would that take? | <p> the sudden release of large amounts of natural gas from methane clathrate deposits in runaway climate change could be a cause of past, future, and present climate changes. the release of this trapped methane is a potential major outcome of a rise in temperature; some have suggested that this was a main factor in the planet warming 6 °c, which happened during the end-permian extinction, as methane is much more powerful as a greenhouse gas than carbon dioxide. despite its atmospheric lifetime of around 12 years, it has a global warming potential of 72 over 20 years, 25 over 100 years, and 33 when accounted for aerosol interactions. the theory also predicts this will greatly affect available oxygen and hydroxyl radical content of the atmosphere.
<p> two events possibly linked to methane excursions are the permian–triassic extinction event and the paleocene–eocene thermal maximum (petm). equatorial permafrost methane clathrate may have had a role in the sudden warm-up of "snowball earth", 630 million years ago. however, warming at the end of the last ice age is not thought to be due to methane release. a similar event is the methane hydrate releases, following ice-sheet retreat during the last glacial period, around 12,000 years ago, in response to the bølling-allerød warming.
<p> bullet::::- dave valentine of uc santa barbara and chris reddy of the woods hole oceanographic institute wrote concerning increased elevation of methane near the well: "total quantity of methane and other hydrocarbons is enough to cause problems with the regional ecosystem, there is no plausible scenario by which this event alone will cause global-scale extinctions." the article also notes that the permian event lasted a millennium and was not an overnight event.
<p> even with existing levels of warming and melting of the arctic region, submarine methane releases linked to clathrate breakdown have been discovered, and demonstrated to be leaking into the atmosphere. a 2011 russian survey off the east siberian coast found plumes wider than one kilometer releasing methane directly into the atmosphere.
<p> the queensland government report also stated: "significantly, this probably represents the first recorded mammalian extinction due to anthropogenic climate change." the report said the "root cause" of the extinction was sea-level rise as a consequence of global warming. senior scientist for climate change biology with conservation international lee hannah said the species could have been saved.
<p> current photochemical models cannot explain the apparent rapid variability of the methane levels in mars. research suggests that the implied methane destruction lifetime is as long as ≈ 4 earth years and as short as ≈ 0.6 earth years. this unexplained fast destruction rate also suggests a very active replenishing source. a team from the italian national institute for astrophysics suspects that the methane detected by the "curiosity" rover may have been released from a nearby area called medusae fossae formation located about 500 km east of gale crater. the region is fractured and is likely volcanic in origin.
<p> shakhova et al. (2008) estimate that not less than 1,400 gigatonnes (gt) of carbon is presently locked up as methane and methane hydrates under the arctic submarine permafrost, and 5–10% of that area is subject to puncturing by open taliks. they conclude that "release of up to 50 gt of predicted amount of hydrate storage [is] highly possible for abrupt release at any time". that would increase the methane content of the planet's atmosphere by a factor of twelve. | Mmm, very interesting question. Let's take "extinction level event" to mean a mass extinction like the big 5. The closest analogue to this is the PETM, the Paleocene-Eocene thermal maximum. This was an event about 56 million years ago with a short-term increase in global temperatures of about 5 degrees C. The best evidence suggests that it was caused by a cascade of methane, leading to global warming and sea level rise larger than what we are looking at today. This is the best candidate for a historic arctic methane event, and if there had been others we would probably know about them. So we know that it is possible, but very rare, to have this kind of event. We can't tell exactly how long it took; we know the event lasted under 10,000 years from beginning to end, but that's largely because it's tough to resolve short time periods in the geologic record. It could have been much shorter, especially on the release end. The next question is -- did the PETM cause a mass extinction? The answer is no, it did not. The Paleocene/Eocene boundary is associated with a faunal changeover but not a mass extinction. However, this doesn't mean that there wasn't a smaller extinction associated with the PETM! Almost all transitions between geologic periods are defined by changes in fauna of moderate size, especially in groups we'd expect to be hit hard by a change in ocean chemistry and temperature, like benthic foraminifera. However, even these "hard hit" groups didn't really show mass-extinction levels of turnover. An alternative is that rates of evolution were higher for many groups, so their old forms disappear from the fossil record while their descendants simply look different -- a "pseudo-extinction". It seems very likely that the methane release was the cause of this worldwide shift in biota. The next part of answering "can there be..." is to think about whether the PETM is the worst possible event that could happen... if we could show it's likely that an event could be much worse, we might decide the answer to your question is "very likely so". The world today is different from what it was in Paleocene time. First off, we have massive continental ice sheets -- there were no continental ice sheets then. A sudden increase in global temperature interacting with continental ice might cause cascading events that lead to a much worse outcome. Next, if much larger quantities of methane were locked up in the sea floor, you could potentially have a much larger temperature excursion, including one that would cause a mass extinction. I don't think anyone really knows enough about methane to say if this is plausible or likely. So, we certainly can't exclude the possibility that an arctic methane event could cause a mass extinction. However, it also doesn't seem that likely. We are, after all, talking about a single event from 56 million years ago. The conditions for a massive release clearly aren't common. To summarize, methane hydrate release seems to have caused a "small" extinction about 56 million years ago, in a very short period of time. 
While that event didn't cause a truly large extinction, there are reasons to think it wasn't as bad as it could have been. We cannot exclude arctic methane release as a potential cause of a major extinction, if everything lined up right... but it also doesn't seem very likely. |
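The "factor of twelve" figure in the last context paragraph is easy to sanity-check with rough numbers. The atmospheric mass, methane mixing ratio, and molar masses below are standard approximate values supplied for illustration, not figures taken from the passage.

```python
# Rough check of the claim that an abrupt 50 Gt carbon release as methane would
# raise atmospheric methane by roughly a factor of twelve (approximate inputs).
M_ATM = 5.15e18          # total mass of the atmosphere, kg
CH4_PPB = 1800           # methane mixing ratio, parts per billion by volume (~1.8 ppm)
M_AIR, M_CH4, M_C = 28.97, 16.04, 12.01   # molar masses, g/mol

ch4_mass_fraction = CH4_PPB * 1e-9 * (M_CH4 / M_AIR)
ch4_burden_gt = M_ATM * ch4_mass_fraction / 1e12      # current CH4 in the air, Gt
carbon_in_ch4_gt = ch4_burden_gt * (M_C / M_CH4)      # carbon content of that CH4, Gt

release_gt_c = 50                                     # hypothesised abrupt release, Gt carbon
print(f"current atmospheric CH4 ~ {ch4_burden_gt:.1f} Gt (~ {carbon_in_ch4_gt:.1f} Gt as carbon)")
print(f"50 Gt C release ~ {release_gt_c / carbon_in_ch4_gt:.0f}x the current burden")
```

With these inputs the release comes out at roughly thirteen times the current atmospheric burden, consistent with the order-of-magnitude "factor of twelve" quoted in the context.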
is it even possible to offer a simple explanation for how computers work? | <p> bullet::::- computer – is a device that can be instructed to carry out sequences of arithmetic or logical operations automatically via computer programming. modern computers have the ability to follow generalized sets of operations, called "programs." these programs enable computers to perform an extremely wide range of tasks.
<p> computers – programmable machines designed to automatically carry out sequences of arithmetic or logical operations. the sequences of operations can be changed readily, allowing computers to solve more than one kind of problem.
<p> a computer is a machine that manipulates data according to a set of instructions called a computer program. the program has an executable form that the computer can use directly to execute the instructions. the same program in its human-readable source code form, enables a programmer to study and develop a sequence of steps known as an algorithm. because the instructions can be carried out in different types of computers, a single set of source instructions converts to machine instructions according to the cpu type.
<p> bullet::::- computer – is a device that can be instructed to carry out sequences of arithmetic or logical operations automatically via computer programming. modern computers have the ability to follow generalized sets of operations, called "programs." these programs enable computers to perform an extremely wide range of tasks. a "complete" computer including the hardware, the operating system (main software), and peripheral equipment required and used for "full" operation can be referred to as a computer system. this term may as well be used for a group of computers that are connected and work together, in particular a computer network or computer cluster.
<p> a mechanical computer is built from mechanical components such as levers and gears, rather than electronic components. the most common examples are adding machines and mechanical counters, which use the turning of gears to increment output displays. more complex examples could carry out multiplication and division—friden used a moving head which paused at each column—and even differential analysis. one model sold in the 1960s calculated square roots.
<p> bullet::::- computers – general purpose devices that can be programmed to carry out a finite set of arithmetic or logical operations. since a sequence of operations can be readily changed, computers can solve more than one kind of problem.
<p> bullet::::- computer (see below) – general purpose device that can be programmed to carry out a set of arithmetic or logical operations automatically. since a sequence of operations (an algorithm) can be readily changed, the computer can solve more than one kind of problem. | The ones and zeros of binary correspond to a circuit being on or off. So, if there is current, we call that "1"; if there isn't, we call it "0". Now, we can mathematically do quite a lot of things with just a binary number system, using Boolean algebra. For quite a few decades now, we've been able to create physical implementations of Boolean algebraic functions using various methods and materials, the most ubiquitous being silicon logic gates. And Boolean algebra can be used to perform arithmetic. This is how, in a tiny nutshell, we can make electronic calculators, which, from a historical point of view, is the really hard part. Getting from this to Skyrim is... complicated. Without getting into too many details, once you've got a mathematical system that is sufficiently complex to do arithmetic, you can also do quite a lot more. For example, we can build a series of chips connected to lights and control the behavior of those chips by defining a matrix, so that different bulbs light up depending on which functions the chips perform. Now we've got a simple visual display. From that point on, it's really just a matter of increased mathematical and electronic sophistication. tl;dr: Your personal computer is a bunch of materials engineered to reliably behave in mathematically well-defined ways, which we can freely manipulate using progressively higher levels of abstraction. |
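To make the "Boolean algebra can be used to perform arithmetic" step concrete, here is a minimal Python sketch that adds integers using nothing but the Boolean operations a logic gate provides (AND, OR, XOR). It is an illustration of the idea, not a description of how any particular CPU is built.

```python
# Building integer addition out of nothing but Boolean operations,
# the way logic gates do it in hardware.

def full_adder(a: int, b: int, carry_in: int) -> tuple[int, int]:
    """Add three single bits; return (sum_bit, carry_out) using only Boolean ops."""
    sum_bit = a ^ b ^ carry_in                    # XOR gives the sum bit
    carry_out = (a & b) | (carry_in & (a ^ b))    # carry whenever any two inputs are 1
    return sum_bit, carry_out

def add_bits(x: list[int], y: list[int]) -> list[int]:
    """Ripple-carry addition of two little-endian bit lists of equal length."""
    result, carry = [], 0
    for a, b in zip(x, y):
        s, carry = full_adder(a, b, carry)
        result.append(s)
    result.append(carry)                          # final carry-out bit
    return result

# 6 (binary 110) + 3 (binary 011), little-endian: [0,1,1] + [1,1,0] -> [1,0,0,1] = 9
print(add_bits([0, 1, 1], [1, 1, 0]))
```

Chaining the full adder bit by bit is exactly the ripple-carry adder structure that hardware builds out of gates; everything above it is layers of abstraction.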
do objects orbiting one another travel in straight lines through curved space? | <p> bullet::::- "floating" objects in a spacecraft in leo are actually in independent orbits around the earth. if two objects are placed side-by-side (relative to their direction of motion), they will be orbiting the earth in different orbital planes. since all orbital planes pass through the center of the earth, any two orbital planes intersect along a line. therefore, two objects placed side-by-side (at any distance apart) will come together after one quarter of a revolution. if they are placed so they miss each other, they will oscillate past each other, with the same period as the orbit. this corresponds to an inward acceleration of 0.128 μ"g" per meter horizontal distance from the center at 400 km altitude. if they are placed one ahead of the other in the same orbital plane, they will maintain their separation. if they are placed one above the other (at different radii from the center of the earth), they will have different potential energies, so the size, eccentricity, and period of their orbits will be different, causing them to move in a complex looping pattern relative to each other.
<p> it can be said that two objects in space orbiting each other in the absence of other forces are in free fall around each other, e.g. that the moon or an artificial satellite "falls around" the earth, or a planet "falls around" the sun. assuming spherical objects means that the equation of motion is governed by newton's law of universal gravitation, with solutions to the gravitational two-body problem being elliptic orbits obeying kepler's laws of planetary motion. this connection between falling objects close to the earth and orbiting objects is best illustrated by the thought experiment, newton's cannonball.
<p> according to einstein's theory of general relativity, particles of negligible mass travel along geodesics in the space-time. in uncurved space-time, far from a source of gravity, these geodesics correspond to straight lines; however, they may deviate from straight lines when the space-time is curved. the equation for the geodesic lines is
<p> according to einstein's theory of general relativity, particles of negligible mass travel along geodesics in the space-time. in flat space-time, far from a source of gravity, these geodesics correspond to straight lines; however, they may deviate from straight lines when the space-time is curved. the equation for the geodesic lines is
<p> as an example: an inertial body moving along a geodesic through space can be trapped into an orbit around a large gravitational mass without ever experiencing acceleration. this is possible because spacetime is radically curved in close vicinity to a large gravitational mass. in such a situation the geodesic lines bend inward around the center of the mass and a free-floating (weightless) inertial body will simply follow those curved geodesics into an elliptical orbit. an accelerometer on-board would never record any acceleration.
<p> there is another kinematic way of understanding parallel transport and geodesic curvature in terms of "rolling without slipping or twisting". although well known to differential geometers since the early part of the twentieth century, it has also been applied to problems in engineering and robotics. consider the 2-sphere as a rigid body in three-dimensional space rolling without slipping or twisting on a horizontal plane. the point of contact will describe a curve in the plane and on the surface. at each point of contact the different tangent planes of the sphere can be identified with the horizontal plane itself and hence with one another.
<p> considering possible hovering positions or orbits of the tractor around the asteroid, note that if two objects are gravitationally bound in a mutual orbit, then if one receives an arbitrary impulse which is less than that needed to free it from orbit around the other, because of the gravitational forces between them, the impulse will alter the momentum of both, together regarded as a composite system. | In a word, Yes. The orbits are straight in a *local* sense: the elapsed relativistic proper time (the time measured by a perfect wristwatch affixed to one of the objects) is invariant under small changes in the orbit. Non-inertial trajectories don't have that property: in any orbit other than the inertial one, changing the trajectory slightly would change the proper time along the trajectory. This view of curvature is fundamental to the *calculus of variations*, which is about finding curves with certain invariance properties. You can use the calculus of variations to find straight lines in normal Euclidean space -- in Euclidean space, "the shortest distance between two points is a straight line" is equivalent to "a straight line is the curve whose length is invariant under small perturbations of its shape", because Euclidean space has no curves that are local *maxima* of length -- only local *minima*, so any curve whose length is invariant under infinitesimal changes must be a local minimum. Einsteinian relativity uses hyperbolic space, so the inertial path is actually a local *maximum* in proper time -- if you follow any course other than the inertial one between two events in spacetime, your wristwatch will show *less* elapsed time than if you followed the inertial trajectory. Just like, in Euclidean space, if you follow any curve other than a straight line between two points, you'll find *more* distance covered than on the straight line path. That notion of curvature is tied very deeply into the shift from classical mechanics to quantum mechanics: we now understand classical mechanics to be due to constructive interference from a *sheaf* of possible trajectories taken by any object. The observed trajectory of a large object like ISS or a Soyuz must be a trajectory whose total phase shift (a measure of elapsed proper time) is stationary under perturbations, because that property causes constructive interference of the possible paths taken by the object. That sounds spooky and weird at first, but it's exactly analogous to the construction of ray optics from the theory of electromagnetic waves.
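For readers who want the statement above in symbols, here is the standard textbook formulation (general notation, not specific to this thread): the proper time along a worldline is the quantity that gets extremized, and demanding that it be stationary under small deformations of the path yields the geodesic equation quoted in the contexts.

```latex
% Proper time along a worldline (flat spacetime shown for simplicity):
\[
  \tau_{AB} \;=\; \int_{A}^{B} d\tau,
  \qquad
  d\tau^{2} \;=\; dt^{2} - \frac{dx^{2}+dy^{2}+dz^{2}}{c^{2}} .
\]
% Requiring the proper time to be stationary under small changes of the path,
% \delta\tau_{AB} = 0, gives the geodesic equation in a general (curved) metric:
\[
  \frac{d^{2}x^{\mu}}{d\tau^{2}}
  \;+\;
  \Gamma^{\mu}_{\;\alpha\beta}\,
  \frac{dx^{\alpha}}{d\tau}\,\frac{dx^{\beta}}{d\tau}
  \;=\; 0 .
\]
```

For timelike paths the stationary point is a maximum of elapsed proper time, which is exactly the "your wristwatch shows the most time on the inertial path" statement in the answer.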
is solid hydrogen flammable? | <p> hydrogen fluoride (hf), like water, is a polar molecule, and due to its polarity it can dissolve many ionic compounds. its melting point is −84 °c, and its boiling point is 19.54 °c (at atmospheric pressure); the difference between the two is a little more than 100 k. hf also makes hydrogen bonds with its neighbor molecules, as do water and ammonia. it has been considered as a possible solvent for life by scientists such as peter sneath and carl sagan.
<p> unlike hydrogen fluoride, anhydrous liquid hydrogen bromide is difficult to work with as a solvent, because its boiling point is low, it has a small liquid range, its dielectric constant is low and it does not dissociate appreciably into h2br+ and hbr2− ions – the latter, in any case, are much less stable than the bifluoride ions (hf2−) due to the very weak hydrogen bonding between hydrogen and bromine, though its salts with very large and weakly polarising cations such as cs+ and nr4+ (r = me, et, bu) may still be isolated. anhydrous hydrogen bromide is a poor solvent, only able to dissolve small molecular compounds such as nitrosyl chloride and phenol, or salts with very low lattice energies such as tetraalkylammonium halides.
<p> hf is miscible with water (dissolves in any proportion). in contrast, the other hydrogen halides exhibit limiting solubilities in water. hydrogen fluoride forms a monohydrate hf·h2o with melting point −40 °c (−40 °f), which is 44 °c (79 °f) above the melting point of pure hf.
<p> hydrogen fluoride is an excellent solvent. reflecting the ability of hf to participate in hydrogen bonding, even proteins and carbohydrates dissolve in hf and can be recovered from it. in contrast, most non-fluoride inorganic chemicals react with hf rather than dissolving.
<p> unlike hydrogen fluoride, anhydrous liquid hydrogen chloride is difficult to work with as a solvent, because its boiling point is low, it has a small liquid range, its dielectric constant is low and it does not dissociate appreciably into h2cl+ and hcl2− ions – the latter, in any case, are much less stable than the bifluoride ions (hf2−) due to the very weak hydrogen bonding between hydrogen and chlorine, though its salts with very large and weakly polarising cations such as cs+ and nr4+ (r = me, et, bu) may still be isolated. anhydrous hydrogen chloride is a poor solvent, only able to dissolve small molecular compounds such as nitrosyl chloride and phenol, or salts with very low lattice energies such as tetraalkylammonium halides. it readily protonates electrophiles containing lone-pairs or π bonds. solvolysis, ligand replacement reactions, and oxidations are well-characterised in hydrogen chloride solution:
<p> hydrogen and fluorine combine to yield hydrogen fluoride, in which discrete molecules form clusters by hydrogen bonding, resembling water more than hydrogen chloride. it boils at a much higher temperature than heavier hydrogen halides and unlike them is fully miscible with water. hydrogen fluoride readily hydrates on contact with water to form aqueous hydrogen fluoride, also known as hydrofluoric acid. unlike the other hydrohalic acids, which are strong, hydrofluoric acid is a weak acid at low concentrations. however, it can attack glass, something the other acids cannot do.
<p> embrittlement of materials when tensile loaded in contact with gaseous hydrogen is known as hydrogen environment embrittlement or external hydrogen embrittlement. it has been observed in alloy steels and alloys of nickel, titanium, uranium and niobium. | Yes. Not that it's going to stay solid for long, given its melting/boiling points. |
is there a middle ground between short term and long term memory? | <p> in contrast to the short-term memory, long-term memory refers to the ability to hold information for a prolonged time and is possibly the most complex component of the human memory system. the atkinson–shiffrin model of memory (atkinson 1968) suggests that the items stored in short-term memory moves to long-term memory through repeated practice and use. long-term storage may be similar to learning—the process by which information that may be needed again is stored for recall on demand. the process of locating this information and bringing it back to working memory is called retrieval. this knowledge that is easily recalled is explicit knowledge, whereas most long-term memory is implicit knowledge and is not readily retrievable. scientists speculate that the hippocampus is involved in the creation of long-term memory. it is unclear where long-term memory is stored, although there is evidence depicting long-term memory is stored in various parts of the nervous system. long-term memory is permanent. memory can be recalled, which, according to the dual-store memory search model, enhances the long-term memory. forgetting may occur when the memory fails to be recalled on later occasions.
<p> long-term memory is the site for which information such as facts, physical skills and abilities, procedures and semantic material are stored. long-term memory is important for the retention of learned information, allowing for a genuine understanding and meaning of ideas and concepts. in comparison to short-term memory, the storage capacity of long-term memory can last for days, months, years or for an entire lifetime. long-term memory has three components. procedural memory is responsible for guiding how we perform certain tasks and providing the knowledge of how to do things, such as walking or talking. semantic memory is responsible for providing general world knowledge through the information we have accumulated over our lives. episodic memory is responsible for storing autobiographical events that we have personally experienced, which can be stated explicitly.
<p> not all researchers agree that short-term and long-term memory are separate systems. some theorists propose that memory is unitary over all time scales, from milliseconds to years. support for the unitary memory hypothesis comes from the fact that it has been difficult to demarcate a clear boundary between short-term and long-term memory. for instance, tarnow shows that the recall probability vs. latency curve is a straight line from 6 to 600 seconds (ten minutes), with the probability of failure to recall only saturating after 600 seconds. if there were really two different memory stores operating in this time frame, one could expect a discontinuity in this curve. other research has shown that the detailed pattern of recall errors looks remarkably similar for recall of a list immediately after learning (it is presumed, from short-term memory) and recall after 24 hours (necessarily from long-term memory).
<p> short term memory is defined as the ability to store information for a short period of time. if it is rehearsed enough, it will be transferred into long term memory. this is important to know in regards to eyewitness testimonies because children have problems transferring short term memories to long term, as discussed previously.
<p> short-term memory is responsible for retaining and processing information very temporarily. it is the information that we are currently aware of thinking about. the storage capacity and duration of short-term memory is very limited; information can be lost easily with distraction. a famous paper written by psychologist george miller in 1956 analyses this concept further. miller wrote how short-term memory only has the ability to process or hold seven, plus or minus two items at a time, which then expires after roughly 30 seconds. this is due to short-term memory only having a certain number of "slots" in which to store information in. despite the quick disappearance of information, short-term memory is an essential step for retaining information in long-term memory stores. without it, information would not be able to be relayed into long-term memory.
<p> short-term memory (or "primary" or "active memory") is the capacity for holding, but not manipulating, a small amount of information in mind in an active, readily available state for a short period of time. for example, short-term memory can be used to remember a phone number that has just been recited. the duration of short-term memory (when rehearsal or active maintenance is prevented) is believed to be in the order of seconds. the most commonly cited capacity is "the magical number seven, plus or minus two" (which is frequently referred to as "miller's law"), despite the facts that miller himself stated that the figure was intended as "little more than a joke" (miller, 1989, page 401) and that cowan (2001) provided evidence that a more realistic figure is 4±1 units. in contrast, long-term memory can hold the information indefinitely.
<p> short-term memory is also known as working memory. short-term memory allows recall for a period of several seconds to a minute without rehearsal. its capacity is also very limited: george a. miller (1956), when working at bell laboratories, conducted experiments showing that the store of short-term memory was 7±2 items (the title of his famous paper, "the magical number 7±2"). modern estimates of the capacity of short-term memory are lower, typically of the order of 4–5 items; however, memory capacity can be increased through a process called chunking. for example, in recalling a ten-digit telephone number, a person could chunk the digits into three groups: first, the area code (such as 123), then a three-digit chunk (456) and lastly a four-digit chunk (7890). this method of remembering telephone numbers is far more effective than attempting to remember a string of 10 digits; this is because we are able to chunk the information into meaningful groups of numbers. this may be reflected in some countries in the tendency to display telephone numbers as several chunks of two to four numbers. | Long-term memories must be reinforced to be maintained. The more frequently you access those memories, the more you are able to recall them. If you want to remember something, you should review it periodically. Even better if you begin using that knowledge in your every day life to get very strong memories. You might get an especially strong memory of the meaning of this particular term if you propose participation in the referenced activity to someone else. |
how can there be a layer of ice underneath an ocean of liquid water on titan? if ice is less dense than liquid water shouldn't it float to the top? | <p> titan is probably partially differentiated into distinct layers with a rocky center. this rocky center is surrounded by several layers composed of different crystalline forms of ice. its interior may still be hot enough for a liquid layer consisting of a "magma" composed of water and ammonia between the ice i crust and deeper ice layers made of high-pressure forms of ice. the presence of ammonia allows water to remain liquid even at a temperature as low as (for eutectic mixture with water). the "cassini" probe discovered the evidence for the layered structure in the form of natural extremely-low-frequency radio waves in titan's atmosphere. titan's surface is thought to be a poor reflector of extremely-low-frequency radio waves, so they may instead be reflecting off the liquid–ice boundary of a subsurface ocean. surface features were observed by the "cassini" spacecraft to systematically shift by up to between october 2005 and may 2007, which suggests that the crust is decoupled from the interior, and provides additional evidence for an interior liquid layer. further supporting evidence for a liquid layer and ice shell decoupled from the solid core comes from the way the gravity field varies as titan orbits saturn. comparison of the gravity field with the radar-based topography observations also suggests that the ice shell may be substantially rigid.
<p> large bodies of liquid hydrocarbons are thought to be present on the surface of titan, although they are not large enough to be considered oceans and are sometimes referred to as "lakes" or seas. the cassini–huygens space mission initially discovered only what appeared to be dry lakebeds and empty river channels, suggesting that titan had lost what surface liquids it might have had. later flybys of titan provided radar and infrared images that showed a series of hydrocarbon lakes in the colder polar regions. titan is thought to have a subsurface liquid-water ocean under the ice in addition to the hydrocarbon mix that forms atop its outer crust.
<p> any waves on the lake are also far smaller than those that would be on a sizable body of liquid water on earth; their estimated maximum height was less than 3 mm during observations of a radar specular reflection during "cassini"'s t49 flyover of july 2009. on titan, waves can be generated at lower wind speeds than on earth, due to the four times greater atmospheric density, and should be seven times higher at a given wind speed, due to titan's surface gravity being one seventh as strong. on the other hand, pure liquid methane is only half as dense as water and may not be dense enough to form a wave in the first place, comparable of building a sand castle with bone dry sand. alternatively, the lack of waves could indicate either wind speeds less than 0.5 m/s, or an unexpectedly viscous composition of the hydrocarbon-mix fluid. in any case, the apparent presence of a wave-generated beach on the lake's northeast shore suggests that at times considerably higher waves form.
<p> the water below the ice remains liquid since geothermal heating balances the heat loss at the ice surface. the pressure causes the melting point of water to be below 0 °c. the ceiling of the subglacial lake will be at the level where the pressure melting point of water intersects the temperature gradient. in lake vostok the ice over the lake is thus much thicker than the ice sheet around it. hypersaline lakes remain liquid due to their salt content.
<p> the density of ice i is 0.917 g/cm³ which is less than that of liquid water. this is attributed to the presence of hydrogen bonds which causes atoms to become more distant in the solid phase. ice floats on water, which is highly unusual when compared to other materials. the solid phase of materials is usually more closely and neatly packed and has a higher density than the liquid phase. when lakes freeze, they only do so at the surface while the bottom of the lake remains near 4 °c because water is densest at this temperature. no matter how cold the surface becomes, there is always a layer at the bottom of the lake that is 4 °c. this anomalous behavior of water and ice is what allows fish to survive harsh winters. the density of ice i increases when cooled, down to about ; below that temperature, the ice expands again (negative thermal expansion).
<p> the smallest solid objects can have water. at earth, falling particles returned by high-altitude planes and balloons show water contents. in the outer solar system, atmospheres show water spectra where water should have been depleted. the atmospheres of giant planets and titan are replenished by infall from an external source. micrometeorites and interplanetary dust particles contain h2o, some co, and possibly co.
<p> because the density of pure ice is about 920 kg/m³, and that of seawater about 1025 kg/m³, typically about one-tenth of the volume of an iceberg is above water (which follows from archimedes's principle of buoyancy). the shape of the underwater portion can be difficult to judge by looking at the portion above the surface. | The liquid on Titan is not water, but rather hydrocarbons, possibly methane or ethane. These hydrocarbons may not share water's unusual property of having a solid phase that is less dense than its liquid phase. |
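As a rough illustration of the answer's point, here is a tiny buoyancy check. The water-ice figure appears in one of the context paragraphs above; the solid- and liquid-methane densities are approximate literature values supplied for the example, and whether a real Titan lake (a methane/ethane/nitrogen mix) behaves this way depends on its composition.

```python
# A solid floats on its own liquid only if the solid is less dense.
# Densities in g/cm^3; the methane values are approximate and assumed.
substances = {
    "water":   {"solid": 0.917, "liquid": 1.000},   # ice I floats (unusual)
    "methane": {"solid": 0.49,  "liquid": 0.42},    # solid methane sinks (typical)
}

for name, d in substances.items():
    behaviour = "floats" if d["solid"] < d["liquid"] else "sinks"
    print(f"solid {name}: {d['solid']} vs liquid {d['liquid']} g/cm^3 -> {behaviour}")
```

Water ice is the odd one out; for most substances, including simple hydrocarbons, the solid phase is the denser one and would sink rather than form a floating cap.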
are there any examples of a species that became an invasive pest where it was introduced, meanwhile becoming extinct where it originated? | <p> yet another prominent example of an introduced species that became invasive is the european rabbit in australia. thomas austin, a british landowner had rabbits released on his estate in victoria because he missed hunting them. a more recent example is the introduction of the common wall lizard to north america by a cincinnati boy, george rau, around 1950 after a family vacation to italy.
<p> a number of non-native, invasive species have been identified as a threat to native biodiversity, including giant hogweed, japanese knotweed and rhododendron. in may 2008 it was announced that psyllid lice from japan, which feed on the knotweed, may be introduced to the uk to bring the plant under control. this would be the first time that an alien species has been used in britain in this way. scientists at the commonwealth agricultural bureaux international do not believe the lice will cause any environmental damage. over-grazing caused by the large numbers of red deer and sheep has also resulted in the impoverishment of moorland and upland habitats and a loss of native woodland.
<p> most accidentally or intentionally introduced species do not become invasive as the ones mentioned above. for instance some 179 coccinellid species have been introduced to the u.s. and canada; about 27 of these non-native species have become established, and only a handful can be considered invasive, including the intentionally introduced "harmonia axyridis", multicolored asian lady beetle. however the small percentage of introduced species that become invasive can produce profound ecological changes. in north america "harmonia axyridis" has become the most abundant lady beetle and probably accounts for more observations than all the native lady beetles put together.
<p> not all introduced species are invasive, nor all invasive species deliberately introduced. in cases such as the zebra mussel, invasion of us waterways was unintentional. in other cases, such as mongooses in hawaii, the introduction is deliberate but ineffective (nocturnal rats were not vulnerable to the diurnal mongoose). in other cases, such as oil palms in indonesia and malaysia, the introduction produces substantial economic benefits, but the benefits are accompanied by costly unintended consequences.
<p> invasive species include "l. ferocissimum", which was introduced to australia and new zealand and has become a dense, thorny pest plant there. it injures livestock, harbors pest mammals and insects, and displaces native species.
<p> most introduced species do not become invasive. examples of introduced animals that have become invasive include the gypsy moth in eastern north america, the zebra mussel and alewife in the great lakes, the canada goose and gray squirrel in europe, the muskrat in europe and asia, the cane toad and red fox in australia, nutria in north america, eurasia, and africa, and the common brushtail possum in new zealand. in taiwan, the success of introduced bird species was related to their native range size and body size; larger species with larger native range sizes were found to have larger introduced range sizes.
<p> another example of an invasive species introduced in the 19th century is the fire tree, which is a small shrub that was brought from the azores, madeira, and the canary islands as an ornamental plant or for firewood. however, now it poses a serious threat to native plants on young volcanic sites, lowland forests, and shrublands, where it forms dense monocultural stands | Well, they aren't extinct yet, but apparently Asian carp are rare in China, whereas the US is doing all it can to stop them from moving upstream into the Great Lakes. This is a good article from the New Yorker: 'People say it’s difficult to find Asian carp in China because they’re all fished out. I like to think, sellin’ silver carp and bighead carp to the Chinese, that we’re sendin’ their own product back to ’em' |
is travelling against the earth's rotation faster than going the other way? | <p> the tangential speed of earth's rotation at a point on earth can be approximated by multiplying the speed at the equator by the cosine of the latitude. for example, the kennedy space center is located at latitude 28.59° n, which yields a speed of: cos 28.59° × 1674.4 km/h = 1470.2 km/h.
<p> because of the curvature of the earth's surface (due to it being curved around as a globe), the chariot would generally not continue to point due south as it moves. for example, if the chariot moves along a geodesic (as approximated by any great circle) the pointer should instead stay at a fixed angle to the path. also, if two chariots travel by different routes between the same starting and finishing points, their pointers, which were aimed in the same direction at the start, usually do not point in the same direction at the finish. likewise, if a chariot goes around a closed loop, starting and finishing at the same point on the earth's surface, its pointer generally does not aim in the same direction at the finish as it did at the start. the difference is the holonomy of the path, and is proportional to the enclosed area. if the journeys are short compared with the radius of the earth, these discrepancies are small and may have no practical importance. nevertheless, they show that this type of chariot, based on differential gears, would be an imperfect compass even if constructed exactly and used in ideal conditions.
<p> if there were a railway line running round the earth's equator, a train moving westward along it fast enough would remain stationary in a frame moving (but not rotating) with the earth; it would stand still as the earth spun beneath it. in this inertial frame the situation is easy to analyze. the only forces acting on the train (assuming no wind resistance or other horizontal forces) are its gravity (downward) and the equal and opposite (upward) force from the track. there is no net force on the train and it therefore remains stationary.
<p> all other planetary bodies in the solar system also appear to periodically switch direction as they cross earth's sky. though all stars and planets appear to move from east to west on a nightly basis in response to the rotation of earth, the outer planets generally drift slowly eastward relative to the stars. asteroids and kuiper belt objects (including pluto) exhibit apparent retrogradation. this motion is normal for the planets, and so is considered direct motion. however, since earth completes its orbit in a shorter period of time than the planets outside its orbit, it periodically overtakes them, like a faster car on a multi-lane highway. when this occurs, the planet being passed will first appear to stop its eastward drift, and then drift back toward the west. then, as earth swings past the planet in its orbit, it appears to resume its normal motion west to east. inner planets venus and mercury appear to move in retrograde in a similar mechanism, but as they can never be in opposition to the sun as seen from earth, their retrograde cycles are tied to their inferior conjunctions with the sun. they are unobservable in the sun's glare and in their "new" phase, with mostly their dark sides toward earth; they occur in the transition from evening star to morning star.
<p> to travel along a circular path, an object needs to be subject to a centripetal acceleration (e.g.: the moon circles around the earth because of gravity; a car turns its front wheels inward to generate a centripetal force). if a vehicle traveling on a straight path were to suddenly transition to a tangential circular path, it would require centripetal acceleration suddenly switching at the tangent point from zero to the required value; this would be difficult to achieve (think of a driver instantly moving the steering wheel from straight line to turning position, and the car actually doing it), putting mechanical stress on the vehicle's parts, and causing much discomfort (causing jerk).
<p> because of a planet's rotation around its own axis, the gravitational acceleration is less at the equator than at the poles. in the 17th century, following the invention of the pendulum clock, french scientists found that clocks sent to french guiana, on the northern coast of south america, ran slower than their exact counterparts in paris. measurements of the acceleration due to gravity at the equator must also take into account the planet's rotation. any object that is stationary with respect to the surface of the earth is actually following a circular trajectory, circumnavigating the earth's axis. pulling an object into such a circular trajectory requires a force. the acceleration that is required to circumnavigate the earth's axis along the equator at one revolution per sidereal day is 0.0339 m/s². providing this acceleration decreases the effective gravitational acceleration. at the equator, the effective gravitational acceleration is 9.7805 m/s². this means that the true gravitational acceleration at the equator must be 9.8144 m/s² (9.7805 + 0.0339 = 9.8144).
<p> if the speed is higher than the orbital velocity, but not high enough to leave earth altogether (lower than the escape velocity), it will continue revolving around earth along an elliptical orbit. (d) for example horizontal speed of 7,300 to approximately 10,000 m/s for earth. | If you're launching from the surface of the Earth, the answer is no. From your perspective, the Earth is stationary, because *both* you and the surface of the Earth are currently travelling around the center of the Earth at the same speed. If you want to travel against the rotation of the Earth you'll have to accelerate the exact same amount as to travel with the rotation of the Earth. Now, if you're coming from, say, the surface of the sun, the answer is yes, because you're not starting from a position at rest relative to the surface of the Earth. |
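The cosine rule quoted in the first context paragraph of this question is easy to verify. A quick sketch, using the equatorial speed and Kennedy Space Center latitude given in that paragraph:

```python
import math

def surface_rotation_speed(latitude_deg: float, equatorial_kmh: float = 1674.4) -> float:
    """Approximate eastward speed of Earth's surface at a given latitude, in km/h."""
    return equatorial_kmh * math.cos(math.radians(latitude_deg))

print(f"Kennedy Space Center (28.59 N): {surface_rotation_speed(28.59):.1f} km/h")  # ~1470 km/h
print(f"Pole (90 N):                    {surface_rotation_speed(90.0):.1f} km/h")   # ~0 km/h
```

This is the speed of the ground itself; as the answer says, anything launched from the ground already shares that speed, so travelling east or west relative to the ground takes the same effort either way.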
if the universe is expanding, is the distance between my atoms increasing right now? | <p> based on large quantities of experimental observation and theoretical work, the scientific consensus is that "space itself is expanding", and that it expanded very rapidly within the first fraction of a second after the big bang. this kind of expansion is known as "metric expansion". in mathematics and physics, a "metric" means a measure of distance, and the term implies that "the sense of distance within the universe is itself changing".
<p> based on a huge amount of experimental observation and theoretical work, it is now believed that the reason for the observation is that "space itself is expanding", and that it expanded very rapidly within the first fraction of a second after the big bang. this kind of expansion is known as a ""metric"" expansion. in the terminology of mathematics and physics, a "metric" is a measure of distance that satisfies a specific list of properties, and the term implies that "the sense of distance within the universe is itself changing", although at this time it is far too small an effect to see on less than an intergalactic scale.
<p> even if the overall spatial extent is infinite and thus the universe cannot get any "larger", we still say that space is expanding because, locally, the characteristic distance between objects is increasing. as an infinite space grows, it remains infinite.
<p> regardless of the overall shape of the universe, the question of what the universe is expanding into is one which does not require an answer according to the theories which describe the expansion; the way we define space in our universe in no way requires additional exterior space into which it can expand since an expansion of an infinite expanse can happen without changing the infinite extent of the expanse. all that is certain is that the manifold of space in which we live simply has the property that the distances between objects are getting larger as time goes on. this only implies the simple observational consequences associated with the metric expansion explored below. no "outside" or embedding in hyperspace is required for an expansion to occur. the visualizations often seen of the universe growing as a bubble into nothingness are misleading in that respect. there is no reason to believe there is anything "outside" of the expanding universe into which the universe expands.
<p> the second complication is cosmological concerns of redshift and the expanding universe, which must be considered when looking at distant objects. in these cases, the quantity of interest is the comoving distance, which is a constant distance between two objects assuming that they are moving away from each other solely with the expansion of the universe, known as the hubble flow. in effect, this comoving distance is the object's separation if the universe's expansion were neglected, and it can be easily related to the actual distance by accounting for how it would have expanded. the comoving distance can be used to calculate the respective comoving volume as usual, or a relation between the actual and comoving volumes can also be easily established. if z is the object's redshift, relating to how far emitted light is shifted toward longer wavelengths as a result of the object moving away from us with the universal expansion, d and v are the actual distance and volume (or what would be measured today) and d and v are the comoving distance and volumes of interest, then
<p> however, the metric expansion of space is accelerating. an ant on a rubber rope whose expansion increases with time is not guaranteed to reach the endpoint. the light from sufficiently distant galaxies may still therefore never reach earth.
<p> universe will expand forever. contrary to this he shows that if ω is a number greater than 1 then the universe will eventually collapse into itself in a "big crunch", the opposite of the big bang. ferris then shows, in a third possibility, that the universe is hanging in the balance in a "critical density" that says ω looks to be "exactly" 1. ferris makes the summation the universe will then always expand, but at a slower and slower rate that never completely comes to a halt. he follows this up with some exceptions that ω is not always observed as "exactly" 1 by all cosmologists. | Here is a very similar recently asked question. The answer is that the distance between your atoms *would* be increasing if it weren't for the fact that the forces holding your atoms together, well, keep holding them together. It's like if you stretch out a spring -- the spring doesn't stay stretched out -- it springs back to its natural length. Incidentally, even if the atoms weren't being held together, the expansion of space is so small on the distance scale of atoms that the atoms would separate at about 1/10^23 mm/s. In other words it would take well more than a trillion years for the atoms to separate by a millimeter. But again, long before that happens, the atoms "fall back" to their original separation due to the attractive force between them. |
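To put a number on the "well more than a trillion years" claim, here is a rough sketch using a Hubble constant of about 70 km/s/Mpc and an assumed separation of one nanometre between two unbound points; both figures are illustrative assumptions, and, as the answer says, atoms bound by interatomic forces do not actually drift apart at all.

```python
# Rough rate at which two *unbound* points 1 nm apart would recede due to
# cosmic expansion (illustrative only; bound systems do not expand).
H0_km_s_per_Mpc = 70.0
Mpc_in_m = 3.086e22
H0_per_s = H0_km_s_per_Mpc * 1000 / Mpc_in_m       # ~2.3e-18 per second

separation_m = 1e-9                                 # one nanometre
v_m_per_s = H0_per_s * separation_m
print(f"recession speed ~ {v_m_per_s:.1e} m/s (~ {v_m_per_s * 1000:.1e} mm/s)")

seconds_per_year = 3.156e7
years_to_drift_1mm = 1e-3 / v_m_per_s / seconds_per_year
print(f"time to drift 1 mm ~ {years_to_drift_1mm:.1e} years")
```

The exact figure depends on the separation you assume, but the conclusion matches the answer: the drift would be on the order of 10^16 years per millimetre, utterly negligible, and in any case the binding forces win long before then.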
x, y, z axis and the dimension of "time" | <p> the -axis represents the worldline of a clock resting in , with representing the duration between two events happening on this worldline, also called the proper time between these events. length upon the -axis represents the rest length or proper length of a rod resting in . the same interpretation can also be applied to distance upon the - and -axes for clocks and rods resting in .
<p> a temporal dimension is a dimension of time. time is often referred to as the "fourth dimension" for this reason, but that is not to imply that it is a spatial dimension. a temporal dimension is one way to measure physical change. it is perceived differently from the three spatial dimensions in that there is only one of it, and that we cannot move freely in time but subjectively move in one direction.
<p> in 1895, "the time machine" by h. g. wells used time as an additional "dimension" in this sense, taking the four-dimensional model of classical physics and interpreting time as a space-like dimension in which humans could travel with the right equipment. wells also used the concept of parallel universes as a consequence of time as the fourth dimension in stories like "the wonderful visit" and "men like gods", an idea proposed by the astronomer simon newcomb, who talked about both time and parallel universes; "add a fourth dimension to space, and there is room for an indefinite number of universes, all alongside of each other, as there is for an indefinite number of sheets of paper when we pile them upon each other"
<p> in the theory of special relativity, physical quantities are expressed in terms of four-vectors that include time as a fourth coordinate along with the three space coordinates. these vectors are generally represented by capital letters, for example for position. the expression for the "four-momentum" depends on how the coordinates are expressed. time may be given in its normal units or multiplied by the speed of light so that all the components of the four-vector have dimensions of length. if the latter scaling is used, an interval of proper time, , defined by
<p> the same "c"("x", "y") is called the autocovariance function in two instances: in time series (to denote exactly the same concept except that "x" and "y" refer to locations in time rather than in space), and in multivariate random fields (to refer to the covariance of a variable with itself, as opposed to the cross covariance between two different variables at different locations, cov("z"("x"), "y"("x"))).
<p> bullet::::- "time dimension": while the triple bottom line incorporates the social, economical and environmental (people, planet, profit) dimensions of sustainable development, it does not explicitly address the fourth dimension: time. the time dimension focuses on preserving current value in all three other dimensions for later. this means assessment of short term, longer term and long term consequences of any action.
<p> note that this article will use notation that includes time as a dimension, i.e. we consider ("d" − 1)-dimensional space together with 1-dimensional time. the theory and notation easily carry over to "d"-dimensional space (either including time herein or in a setting involving no time at all). | Not in reality, but if we think of the fourth dimension as one of space rather than one of time, we can do some interesting things. After thinking about it for a few minutes: in order to rotate an object through time, part of the object would have to be moving backwards in time, which is impossible in reality. Though, like I said, if you treat time the way you would treat space, you can do some fun things with geometry. |
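For what "treating time like space" looks like in symbols: an ordinary rotation mixes two space coordinates, while in relativity the analogous operation that mixes x and t is a Lorentz boost, a "hyperbolic rotation". This is standard textbook material added here for illustration, not something from the thread above.

```latex
% Ordinary rotation in the x-y plane (mixes two space directions):
\[
  x' = x\cos\theta - y\sin\theta, \qquad y' = x\sin\theta + y\cos\theta .
\]
% The closest thing to "rotating into time": a Lorentz boost with rapidity \phi,
% which mixes x and ct using hyperbolic functions instead of circular ones:
\[
  x' = x\cosh\phi - ct\,\sinh\phi, \qquad ct' = -x\sinh\phi + ct\,\cosh\phi .
\]
```

The hyperbolic functions are what prevent a boost from ever tipping a worldline over into running backwards in time, which is the obstruction the answer is pointing at.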
how can ants count their steps? we've developed numbers as a system of organization and measurement, so we can literally speak "1, 2, 3" but how can an insect keep track of the amount of steps it has taken? | <p> ants were shown to be able to count up to 20 and add and subtract numbers within 5. in highly social species such as red wood ants scouting individuals can transfer to foragers the information about the number of branches of a special “counting maze” they had to go to in order to obtain syrup. the findings concerning number sense in ants are based on comparisons of duration of information contacts between scouts and foragers which preceded successful trips by the foraging teams. similar to some archaic human languages, the length of the code of a given number in ants’ communication is proportional to its value. in experiments in which the bait appeared on different branches with different frequencies, the ants used simple additions and subtractions to optimize their messages.
<p> the ant appears to use an internal pedometer to count its steps in a harsh environment where odors quickly vanish, enabling it to "count back" to its nest. when stilts were glued on to the ants' legs, they overshot the distance to their nests, while ants with cut legs traveled short of their nest. it's suspected that while the ants are unlikely to have the brainpower to literally count steps, they are somehow doing it intuitively.
<p> ants are able to use quantitative values and transmit this information. for instance, ants of several species are able to estimate quite precisely numbers of encounters with members of other colonies on their feeding territories. numeracy has been described in the yellow mealworm beetle ("tenebrio molitor") and the honeybee.
<p> ants are simple animals and their behavioural repertory is limited to somewhere between ten and forty elementary behaviours. this is an attempt to explain the different patterns of self-organization in ants.
<p> instead, ants use a flexible task-allocation system that allows the colony to respond rapidly to changing needs for achieving these goals. this task-allocation system, similar to a division of labor is flexible in that all tasks rely on either the number of ant encounters (which take the form of antennal contact) and the sensing of chemical gradients (using olfactory sensing for pheromone trails) and can thus be applied to the entire ant population. while recent research has shown that certain tasks may have physiologically and age-based response thresholds, all tasks can be completed by "any" ant in the colony.
<p> with an aco algorithm, the shortest path in a graph, between two points a and b, is built from a combination of several paths. it is not easy to give a precise definition of what algorithm is or is not an ant colony, because the definition may vary according to the authors and uses. broadly speaking, ant colony algorithms are regarded as populated metaheuristics with each solution represented by an ant moving in the search space. ants mark the best solutions and take account of previous markings to optimize their search. they can be seen as probabilistic multi-agent algorithms using a probability distribution to make the transition between each iteration. in their versions for combinatorial problems, they use an iterative construction of solutions. according to some authors, the thing which distinguishes aco algorithms from other relatives (such as algorithms to estimate the distribution or particle swarm optimization) is precisely their constructive aspect. in combinatorial problems, it is possible that the best solution eventually be found, even though no ant would prove effective. thus, in the example of the travelling salesman problem, it is not necessary that an ant actually travels the shortest route: the shortest route can be built from the strongest segments of the best solutions. however, this definition can be problematic in the case of problems in real variables, where no structure of 'neighbours' exists. the collective behaviour of social insects remains a source of inspiration for researchers. the wide variety of algorithms (for optimization or not) seeking self-organization in biological systems has led to the concept of "swarm intelligence", which is a very general framework in which ant colony algorithms fit.
<p> the "trail level" represents a posteriori indication of the desirability of that move. trails are updated usually when all ants have completed their solution, increasing or decreasing the level of trails corresponding to moves that were part of "good" or "bad" solutions, respectively. | The authors of the study that I believe you are referring to weren't sure what internal mechanism is guiding it. And this paper was put out nine years ago and I don't see anything on it since then. However, insects and animals in general are capable of doing instinctively things that we would have to think about (counting, for example). Same idea with termites "knowing" how to design their mounds so that they naturally have efficient air circulation, or honey bees building honeycombs out of hexagons so efficiently. Natural selection has selected for ants that can count their steps to, as the authors believe, aid in navigation (it's unclear whether the entire family Formicidae can count like this, as the study only used foraging Saharan desert ants, *Cataglyphis fortis*). |
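A step counter only gets the ant home if it is combined with direction information; the combination is usually called path integration or dead reckoning. Here is a toy sketch of that idea (the step list and stride length are made up for illustration; real desert ants use a sky compass plus, per the study above, something like a stride counter):

```python
import math

# Toy path integration: accumulate each leg of the outbound trip as a vector;
# the home vector is simply the negative of the running sum.
STEP_LENGTH = 1.0                          # arbitrary units per stride (assumed)
outbound = [(0, 30), (90, 10), (45, 20)]   # made-up legs as (heading_degrees, n_steps)

x = y = 0.0
for heading_deg, n_steps in outbound:
    heading = math.radians(heading_deg)
    x += n_steps * STEP_LENGTH * math.cos(heading)
    y += n_steps * STEP_LENGTH * math.sin(heading)

home_distance = math.hypot(x, y)
home_heading = math.degrees(math.atan2(-y, -x)) % 360
print(f"nest is {home_distance:.1f} strides away, bearing {home_heading:.1f} degrees")
```

Gluing on stilts or trimming legs changes the effective stride length without changing the step count, which is exactly why the manipulated ants in the study overshot or undershot the nest.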
how/why do modern fighter jets use turbofan engines? | <p> turbofans in civilian aircraft usually have a pronounced large front area to accommodate a very large fan, as their design involves a much larger mass of air bypassing the core so they can benefit from these effects, while in military aircraft, where noise and efficiency are less important compared to performance and drag, a smaller amount of air typically bypasses the core. turbofans designed for subsonic civilian aircraft also usually have a just a single front fan, because their additional thrust is generated by a large additional mass of air which is only moderately compressed, rather than a smaller amount of air which is greatly compressed.
<p> turbofans differ from turbojets in that they have an additional fan at the front of the engine, which accelerates air in a duct bypassing the core gas turbine engine. turbofans are the dominant engine type for medium and long-range airliners.
<p> modern turbofans are a development of the turbojet; they are basically a turbojet that includes a new section called the "fan stage". rather than using all of its exhaust gases to provide direct thrust like a turbojet, the turbofan engine extracts some of the power from the exhaust gases inside the engine and uses it to power the fan stage. the fan stage accelerates a large volume of air through a duct, bypassing the "engine core" (the actual gas turbine component of the engine), and expelling it at the rear as a jet, creating thrust. a proportion of the air that comes through the fan stage enters the engine core rather than being ducted to the rear, and is thus compressed and heated; some of the energy is extracted to power the compressors and fans, while the remainder is exhausted at the rear. this high-speed, hot-gas exhaust blends with the low speed, cool-air exhaust from the fan stage, and both contribute to the overall thrust of the engine. depending on what proportion of cool air is bypassed around the engine core, a turbofan can be called "low-bypass", "high-bypass", or "very-high-bypass" engines.
<p> a turbofan engine is much the same as a turbojet, but with an enlarged fan at the front that provides thrust in much the same way as a ducted propeller, resulting in improved fuel efficiency. though the fan creates thrust like a propeller, the surrounding duct frees it from many of the restrictions that limit propeller performance. this operation is a more efficient way to provide thrust than simply using the jet nozzle alone, and turbofans are more efficient than propellers in the transsonic range of aircraft speeds and can operate in the supersonic realm. a turbofan typically has extra turbine stages to turn the fan. turbofans were among the first engines to use multiple "spools"—concentric shafts that are free to rotate at their own speed—to let the engine react more quickly to changing power requirements. turbofans are coarsely split into low-bypass and high-bypass categories. bypass air flows through the fan, but around the jet core, not mixing with fuel and burning. the ratio of this air to the amount of air flowing through the engine core is the bypass ratio. low-bypass engines are preferred for military applications such as fighters due to high thrust-to-weight ratio, while high-bypass engines are preferred for civil use for good fuel efficiency and low noise. high-bypass turbofans are usually most efficient when the aircraft is traveling at 500 to 550 miles per hour (800 to 885 km/h), the cruise speed of most large airliners. low-bypass turbofans can reach supersonic speeds, though normally only when fitted with afterburners.
<p> turbofans were invented to circumvent an awkward feature of turbojets, which was that they were inefficient for subsonic flight. to raise the efficiency of a turbojet, the obvious approach would be to increase the burner temperature, to give better carnot efficiency and fit larger compressors and nozzles. however, while that does increase thrust somewhat, the exhaust jet leaves the engine with even higher velocity, which at subsonic flight speeds, takes most of the extra energy with it, wasting fuel.
<p> most modern jet planes use turbofan jet engines, which balance the advantages of a propeller while retaining the exhaust speed and power of a jet. this is essentially a ducted propeller attached to a jet engine, much like a turboprop, but with a smaller diameter. when installed on an airliner, it is efficient so long as it remains below the speed of sound (or subsonic). jet fighters and other supersonic aircraft that do not spend a great deal of time supersonic also often use turbofans, but to function, air intake ducting is needed to slow the air down so that when it arrives at the front of the turbofan, it is subsonic. when passing through the engine, it is then re-accelerated back to supersonic speeds. to further boost the power output, fuel is dumped into the exhaust stream, where it ignites. this is called an afterburner and has been used on both pure jet aircraft and turbojet aircraft although it is only normally used on combat aircraft due to the amount of fuel consumed, and even then may only be used for short periods of time. supersonic airliners (e.g. concorde) are no longer in use largely because flight at supersonic speed creates a sonic boom, which is prohibited in most heavily populated areas, and because of the much higher consumption of fuel supersonic flight requires.
<p> to boost fuel economy and reduce noise, almost all of today's jet airliners and most military transport aircraft (e.g., the c-17) are powered by low-specific-thrust/high-bypass-ratio turbofans. these engines evolved from the high-specific-thrust/low-bypass-ratio turbofans used in such aircraft in the 1960s. (modern combat aircraft tend to use low-bypass ratio turbofans, and some military transport aircraft use turboprops.) | * You asked if a low bypass turbofan is "essentially" a turbojet? There is a continuum of bypass ratios. On one side, with a bypass ratio of 0:1 for turbojets, through some intermediate for turbofans, and 1:0 for turboprops. * In general, turbofans have a higher thrust at zero speed (takeoff thrust) than turbojets. This is useful for aircraft that must take off from short runways on aircraft carriers. * The afterburner is after the turbine, and I *think* the bypass stream goes through the afterburner too. The point is that the afterburner can get to higher temperature than the main combustor without worrying about damaging the relatively fragile turbine blades |
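The bypass-ratio trade-off running through this row's context and answer can be summarized with two standard momentum-theory relations. These are generic textbook formulas, not equations quoted from the text; here \dot{m} is the air mass flow, v_0 the flight speed and v_j the average jet speed.

```latex
% Momentum-theory sketch: thrust from accelerating a mass flow \dot{m} of air
% from flight speed v_0 to jet speed v_j, and the resulting propulsive efficiency.
F \approx \dot{m}\,(v_j - v_0),
\qquad
\eta_p \approx \frac{2}{1 + v_j / v_0}.
```

For a given thrust you can either push a large \dot{m} a little faster (high bypass: v_j/v_0 stays near 1, so \eta_p is high, which suits subsonic airliners) or a small \dot{m} much faster (low bypass: worse \eta_p, but a slim, high-velocity jet suited to supersonic fighters), which is exactly the trade-off described above.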
are there superconductors for other forces or types of energy? | <p> for superconductors the energy gap is a region of suppressed density of states around the fermi energy, with the size of the energy gap much smaller than the energy scale of the band structure. the superconducting energy gap is a key aspect in the theoretical description of superconductivity and thus features prominently in bcs theory. here, the size of the energy gap indicates the energy gain for two electrons upon formation of a cooper pair.
<p> superconductivity is the set of physical properties observed in certain materials, wherein electrical resistance no longer exists and from which magnetic flux fields are expelled. any material exhibiting these properties is a superconductor. unlike an ordinary metallic conductor, whose resistance decreases gradually as its temperature is lowered even down to near absolute zero, a superconductor has a characteristic critical temperature below which the resistance drops abruptly to zero. an electric current through a loop of superconducting wire can persist indefinitely with no power source.
<p> the bcs theory of superconductivity has a fermion condensate. a pair of electrons in a metal with opposite spins can form a scalar bound state called a cooper pair. then, the bound states themselves form a condensate. since the cooper pair has electric charge, this fermion condensate breaks the electromagnetic gauge symmetry of a superconductor, giving rise to the wonderful electromagnetic properties of such states.
<p> superconductors are materials that have exactly zero resistance and infinite conductance, because they can have v=0 and i≠0. this also means there is no joule heating, or in other words no dissipation of electrical energy. therefore, if superconductive wire is made into a closed loop, current flows around the loop forever. superconductors require cooling to temperatures near 4 k with liquid helium for most metallic superconductors like niobium–tin alloys, or cooling to temperatures near 77k with liquid nitrogen for the expensive, brittle and delicate ceramic high temperature superconductors.
<p> in superconductors, charge can flow without any resistance. it is possible to make pieces of superconductor with a large built-in persistent current, either by creating the superconducting state (cooling the material) while charge is flowing through it, or by changing the magnetic field around the superconductor after creating the superconducting state. this principle is used in superconducting electromagnets to generate sustained high magnetic fields that only require a small amount of power to maintain. the persistent current was first identified by onnes, and attempts to set a lower bound on their duration have reached values of over 100,000 years.
<p> for heavy-fermion superconductors it is generally believed that the coupling mechanism cannot be phononic in nature. in contrast to many other unconventional superconductors, for upd2al3 there actually exists strong experimental evidence (namely from neutron scattering and tunneling spectroscopy) that superconductivity is magnetically mediated.
<p> examples of multi-component superconductivity are the multi-band superconductors magnesium diboride and the oxypnictides, and exotic superconductors with nontrivial cooper-pairing. there, one can distinguish two or more superconducting components associated, for example, with electrons belonging to different bands of the band structure. a different example of two-component systems is the projected superconducting states | At the very least, we expect color superconductors to exist; these are superconductors of the strong force rather than the electromagnetic force. We haven't observed them yet, but they might be relevant for neutron stars, the early Universe, and/or heavy ion collisions. What about the weak force? Well, you can kind of think of the entire Universe as a weak superconductor, since the Higgs field gives mass to W/Z bosons exactly like the electron-pair condensate gives mass to photons inside a superconductor. In this way of thinking, the reason the weak force is weak is the same reason electric forces don't penetrate through (super)conductors. As for gravity, there's not really any analogy to a normal conductor (a neutral object with freely moving charges) since gravity always attracts (nothing is neutral) and mass isn't freely moving (there is always as much inertia as there is gravitational attraction; compare electrons, where electric forces overwhelm inertia). So I don't know what to say about a gravitational superconductor. Finally, I don't know much about it but this link suggests that superfluid helium-4 is in fact a perfect conductor of heat.
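The answer's remark that the Higgs field gives mass to the W/Z bosons just as the electron-pair condensate gives mass to photons inside a superconductor can be made slightly more quantitative with the standard London-equation relations. The formulas below are textbook results (SI units, n_s the superconducting electron density), not taken from the context paragraphs.

```latex
% London equation and penetration depth; B decays inside the superconductor as
% if the photon had acquired an effective mass m_gamma ~ hbar/(lambda_L c):
\nabla^{2}\mathbf{B} = \frac{\mathbf{B}}{\lambda_L^{2}},
\qquad
\lambda_L = \sqrt{\frac{m_e}{\mu_0\, n_s e^{2}}},
\qquad
m_\gamma \sim \frac{\hbar}{\lambda_L\, c}.
```

The electroweak analogue is the Higgs condensate (vacuum expectation value v) giving the W boson a mass m_W = g v / 2, which is why the weak force is short-ranged: in the answer's language, the vacuum behaves like a "weak superconductor" in the same way a superconductor's pair condensate screens out magnetic fields.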
what would happen to a piece of paper inside a glass jar while it is surrounded by extreme heat? | <p> for pieces to be framed under glass, except for the most disposable and inexpensive posters or temporary displays, the glass must be raised off the surface of the paper. this is done by means of matting, a lining of plastic "spacers", shadowboxing, stacking two mouldings with the glass in between, and similar methods. if the paper (or other media) were to touch the glass directly, any condensation inside the glass would absorb directly into the art, having no room to evaporate. this is harmful to almost any medium. it causes art sticking to the glass, mildew or mold spore growth, and other ill effects. raising the glass is also necessary when a piece is done in a loose media such as charcoal or pastel, to prevent smudging. care should be taken with these works however, if acrylic glass is used, as a static charge can build up which will attract the pigment particles off the paper. using real glass helps to prevent this.
<p> a hot glass bulb may fracture on contact with cold objects. when the glass envelope breaks, the bulb implodes, exposing the filament to ambient air. the air then usually destroys the hot filament through oxidation.
<p> heat-strengthened glass can take a strong direct hit without shattering, but has a weak edge. by simply tapping the edge of heat-strengthened glass with a solid object, it is possible to shatter the entire sheet.
<p> to avoid trapped air, the mould is perforated with a small vent hole. the hot glass otherwise forms a good seal with the lip of the mould and an air bubble is trapped. such a trapped bubble often causes problems - when cooling this air may contract to form a partial vacuum that is enough to break the glass. as the glass is not heated enough to become liquid, this air cannot escape as bubbles and so venting is required.
<p> in a similar vein, when a glass rod was put lightly in contact with dried woodchips, the rod would burn the wood and cause it to smoke, or if pressed against a woodchip, it would quickly burn through the chip, leaving behind a charred hole. all the while the glass rod remained cool, with the heating confined to the tip. when a glass rod is pressed lightly against a glass plate, it etches the glass plate, while if it is pressed, it bores right through the plate. microscopic examinations showed that the debris given off includes finely powdered glass and globules of molten glass.
<p> methods of storage, if done incorrectly, can also damage papyri. the traditional method in papyrus storage was to place the papyri between two sheets of glass and then seal the edges with cloth tape. this threatens the papyrus inside since it allows the object to come into direct contact with the glass, which could cause heat and moisture to come into contact with the papyrus. this could make the papyrus stick to the glass. this could also damage the ink and the surface of the papyrus when the glass is removed. a grayish material, which has been identified as a composite of sodium chloride and traces of vegetable carbohydrates, was seen around the edges of some papyri. other storage methods which use cellulose nitrate, paper backing, drumming techniques, hinges, adhesive mounts, dry mount systems and plexiglass which presses directly on the papyrus are not considered safe methods of storage. the threats from these include introducing materials which could stain the artifact, degrading the object or placing pressure on the object. these effects are also not easily reversed.
<p> the original form of the device is just a glass bottle partially filled with water, with a metal wire passing through a cork closing it. the role of the outer plate is provided by the hand of the experimenter. soon john bevis found (in 1747) that it was possible to coat the exterior of the jar with metal foil, and he also found that he could achieve the same effect by using a plate of glass with metal foil on both sides. these developments inspired william watson in the same year to have a jar made with a metal foil lining both inside and outside, dropping the use of water. | I'm assuming by zero air molecules you mean vacuum. It could also mean the jar is otherwise filled with helium, alcohol, sand, etc. The paper, being mostly cellulose, a carbohydrate, would decompose to charcoal, water and a lot of organic and inorganic solids, liquids and gases, such as CO, CO2, H2, CH4, benzene, tar, etc., many of them combustible, some carcinogenic. At higher temperatures all compounds would break up and be atomized, forming a plasma. At even higher temperatures the atomic nuclei would fuse to form other elements, such as magnesium, silicon, iron, uranium. At even higher temperatures the atomic nuclei would break apart again and other particles could be formed such as Higgs bosons. Someone would get a nobel prize for creating an indestructible glass jar. |
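A rough order-of-magnitude estimate makes the answer's point about gas evolution concrete. All numbers below are illustrative assumptions (1 g of paper treated as pure cellulose, complete breakdown into roughly ten small gas molecules per C6H10O5 monomer unit, a sealed 1-litre jar at about 1000 K), not figures from the text.

```latex
% Illustrative ideal-gas estimate only; every input value is an assumption.
n \approx \frac{1\ \mathrm{g}}{162\ \mathrm{g\,mol^{-1}}} \times 10 \approx 0.06\ \mathrm{mol},
\qquad
P = \frac{nRT}{V} \approx \frac{0.06 \times 8.314 \times 1000}{10^{-3}\ \mathrm{m^{3}}}
\approx 5\times 10^{5}\ \mathrm{Pa} \approx 5\ \mathrm{atm}.
```

Several atmospheres of hot pyrolysis gas is more than an ordinary sealed jar can contain, which is presumably why the answer jokes that an indestructible glass jar would be prize-worthy.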
is listening to relaxing meditative music during study a help or a hindrance? | <p> some relaxation methods can also be used during other activities, for example, autosuggestion and prayer. at least one study has suggested that listening to certain types of music, particularly new-age music and classical music, can increase feelings associated with relaxation, such as peacefulness and a sense of ease.
<p> since the 1970s, clinical psychology and psychiatry have developed meditation techniques for numerous psychological conditions. mindfulness practice is employed in psychology to alleviate mental and physical conditions, such as reducing depression, stress, and anxiety. mindfulness is also used in the treatment of drug addiction. studies demonstrate that meditation has a moderate effect to reduce pain. there is insufficient evidence for any effect of meditation on positive mood, attention, eating habits, sleep, or body weight.
<p> evidence suggests that music therapy is beneficial for all individuals, both physically and mentally. benefits of music therapy include improved heart rate, reduced anxiety, stimulation of the brain, and improved learning. music therapists use their techniques to help their patients in many areas, ranging from stress relief before and after surgeries to neuropathologies such as alzheimer's disease. one study found that children who listened to music while having an iv inserted into their arms showed less distress and felt less pain than the children who did not listen to music while having an iv inserted. studies on patients diagnosed with mental disorders such as anxiety, depression, and schizophrenia have shown a visible improvement in their mental health after music therapy.
<p> in "neurology now", published by the american academy of neurology, the article "meditation as medicine" states that various well-designed studies show that meditation can increase attention span, sharpen focus, improve memory, and dull the perception of pain, and lists passage meditation as a common meditation method.
<p> although more research should be done to increase the reliability of this method of treatment, research suggests that music therapy can improve sleep quality in acute and chronic sleep disorders. in one particular study, participants (18 years or older) who had experienced acute or chronic sleep disorders were put in a randomly controlled trial and their sleep efficiency (overall time asleep) was observed. in order to assess sleep quality, researchers used subjective measures (i.e. questionnaires) and objective measures (i.e. polysomnography). the results of the study suggest that music therapy did improve sleep quality in subjects with acute or chronic sleep disorders, however only when tested subjectively. although these results are not fully conclusive and more research should be conducted, it still provides evidence that music therapy can be an effective treatment for sleep disorders.
<p> thousands of studies on meditation have been conducted, though the overall methodological quality of some of the studies is poor. recent reviews have pointed out many of these issues. nonetheless, mindfulness meditation is a popular subject for research, and many claim potential benefits for a wide array of conditions and outcomes. for example, the practice of mindfulness has been used as a potential tool for weight management, to achieve optimal athletic performance, as a beneficial intervention for children with special needs and their parents, as a viable treatment option for people with insomnia an effective intervention for healthy aging, as a strategy for managing dermatological conditions and as a useful intervention during pregnancy and the perinatal period. recent studies have also demonstrated that mindfulness meditation significantly attenuates physical pain through multiple, unique mechanisms.
<p> the us national center for complementary and alternative medicine states that "meditation may be practiced for many reasons, such as to increase calmness and physical relaxation, to improve psychological balance, to cope with illness, or to enhance overall health and well-being." meditation techniques have been used in western counseling and psychotherapy. relaxation training works toward achieving mental and muscle relaxation to reduce daily stresses. sahaja (mental silence) meditators scored above the control group for emotional well-being and mental health measures on sf-36 ratings. | Being relaxed has been shown to have amazing effects on memory. We remember our relaxed moments way better and in more detail than stressed moments, which are often not really recorded at all, but rather as some jumbled mess of half made up things. This is why eyewitness testimony is often crap. Personally, I like listening to OCRemix or other non-vocalized music while studying (or programming now, my job). When I did exams, I often thought of a song or hummed it really quietly, which helped me relax.
i know that caffeine and alcohol dehydrate you, but if you were dying of dehydration would drinking a coffee or a beer actually make you worse off? | <p> a caffeinated alcoholic drink is an alcoholic beverage that also contains caffeine, often in the form of an energy drink. the combination can result in reduced subjective alcohol intoxication but does not result in lowered objective intoxication.
<p> combined use of caffeine and alcohol may increase the rate of alcohol-related injury. energy drinks can mask the influence of alcohol, and a person may misinterpret their actual level of intoxication. since caffeine and alcohol are both diuretics, combined use increases the risk of dehydration, and the mixture of a stimulant (caffeine) and depressant (alcohol) sends contradictory messages to the nervous system and can lead to increased heart rate and palpitations. although people decide to drink energy drinks with alcohol with the intent of counteracting alcohol intoxication, many others do so to hide the taste of alcohol. however, in the 2015, the efsa concluded, that “consumption of other constituents of energy drinks at concentrations commonly present in such beverages would not affect the safety of single doses of caffeine up to 200 mg.” also the consumption of alcohol, leading to a blood alcohol content of about 0.08%, would, according to the efsa, not affect the safety of single doses of caffeine up to 200 mg. up to these levels of intake, caffeine is unlikely to mask the subjective perception of alcohol intoxication.
<p> caffeinated alcoholic energy drinks can be hazardous as caffeine can mask the influence of alcohol and may lead a person to misinterpret their actual level of intoxication. however, in 2012 the scientific review paper "energy drinks mixed with alcohol: misconception, myths and facts" was published, discussing the available scientific evidence on the effects of mixing energy drinks with alcohol. the authors note that excessive and irresponsible consumption of alcoholic drinks has adverse effects on human health and behaviour, but it should be clear that this is due to the alcohol, and not the mixer. they concluded that there is no consistent evidence that energy drinks alter the perceived level of intoxication of people who mix energy drinks with alcohol and found no evidence that co-consumption of energy drinks causes increased alcohol consumption.
<p> the guidelines recommend that people not mix alcohol and beverages containing caffeine, as this combined intake may result in greater alcohol consumption, with a greater risk of alcohol-related injury.
<p> bullet::::- some beverages combine alcohol with caffeine to create a caffeinated alcoholic drink. the stimulant effects of caffeine may mask the depressant effects of alcohol, potentially reducing the user's awareness of their level of intoxication. such beverages have been the subject of bans due to safety concerns. in particular, the united states food and drug administration has classified caffeine added to malt liquor beverages as an "unsafe food additive".
<p> the counteracting effects of caffeine and alcohol often causes the consumer to drink more than they normally would because of the delayed "drunk" feeling, as caffeine can mask some of the sensory cues individuals might normally rely on to determine their level of intoxication. the consumption of this drink and the delayed intoxication impairment could lead to negative consequences such as the increased stimulation and intoxication levels while decreasing the ability to operate a motor vehicle.
<p> ethanol has a dehydrating effect by causing increased urine production (diuresis), which could cause thirst, dry mouth, dizziness and may lead to an electrolyte imbalance. studies suggest that electrolyte changes play only a minor role in the genesis of the alcohol hangover and are caused by dehydration effects. drinking water may help relieve symptoms as a result of dehydration but it is unlikely that rehydration significantly reduces the presence and severity of alcohol hangover. alcohol's effect on the stomach lining can account for nausea because alcohol stimulates the production of hydrochloric acid in the stomach. | The whole premise of the question is flawed as the notion that caffeinated drinks make you dehydrated is a myth. |
how is it possible that the strong coupling constant is greater than 1? | <p> moreover, the perturbative beta function tells us that the coupling continues to increase, and qed becomes "strongly coupled" at high energy. in fact the coupling apparently becomes infinite at some finite energy. this phenomenon was first noted by lev landau, and is called the landau pole. however, one cannot expect the perturbative beta function to give accurate results at strong coupling, and so it is likely that the landau pole is an artifact of applying perturbation theory in a situation where it is no longer valid. the true scaling behaviour of the coupling at large energies is not known.
<p> in quantum field theory and string theory, a coupling constant is a number that controls the strength of interactions in the theory. for example, the strength of gravity is described by a number called newton's constant, which appears in newton's law of gravity and also in the equations of albert einstein's general theory of relativity. similarly, the strength of the electromagnetic force is described by a coupling constant, which is related to the charge carried by a single proton.
<p> in physics, a coupling constant or gauge coupling parameter (or, more simply, a coupling), is a number that determines the strength of the force exerted in an interaction. usually, the lagrangian or the hamiltonian of a system describing an interaction can be separated into a "kinetic part" and an "interaction part". the coupling constant determines the strength of the interaction part with respect to the kinetic part, or between two sectors of the interaction part. for example, the electric charge of a particle is a coupling constant that characterizes an interaction with two charge-carrying fields and one photon field (hence the common feynman diagram with two arrows and one wavy line). since photons carry electromagnetism, this coupling determines how strongly electrons feel such a force, and has its value fixed by experiment.
<p> in a quantum field theory with a dimensionless coupling "g", if "g" is much less than 1, the theory is said to be "weakly coupled". in this case, it is well described by an expansion in powers of "g", called perturbation theory. if the coupling constant is of order one or larger, the theory is said to be "strongly coupled". an example of the latter is the hadronic theory of strong interactions (which is why it is called strong in the first place). in such a case, non-perturbative methods need be used to investigate the theory.
<p> therefore, the coupling constant in string theory is a dynamical variable, in contrast to quantum field theory where it is constant. as long as supersymmetry is unbroken, such scalar fields can take arbitrary values (they are moduli). however, supersymmetry breaking usually creates a potential energy for the scalar fields, and the scalar fields localize near a minimum whose position should in principle be calculable in string theory.
<p> the fact that the sign in front of the lowest-order term is positive suggests that the coupling constant increases with energy. if this behavior persisted at large couplings, this would indicate the presence of a landau pole at finite energy, arising from quantum triviality. however, the question can only be answered non-perturbatively, since it involves strong coupling.
<p> in the limit where the coupling constants we have added go to zero, one gets back to the original theory, plus the fermions we have added; the latter remain good degrees of freedom at every energy scale, as they are free fermions at this limit. the gauge symmetry anomaly can be computed at any energy scale, and must always be zero, so that the theory is consistent. one may now get the anomaly of the symmetry in the original theory by subtracting the free fermions we have added, and the result is independent of the energy scale. | > Doesn't this make an average strong interaction an infinite order interaction? When the strong coupling constant is large, the assumption that you can treat interactions perturbatively breaks down. That's why when you expand in a power series in the coupling constant, you reach this conclusion. Using the technique of expanding the S-matrix in a perturbation series is no longer useful, because you have infinitely many gluon self-couplings, and each contributes a vertex factor on the order of 1 or greater. So instead of using perturbation theory, you would use Feynman path integrals. You can put QCD on a lattice to simplify things, and use Monte Carlo methods to do very high-dimensional integrals. These techniques allow you to study QCD at low energies, before the onset of asymptotic freedom, where standard perturbation theory doesn't really work. |
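The context's point that a perturbative expansion in powers of the coupling breaks down, and the answer's point about lattice methods at low energies, both follow from the standard one-loop running of the QCD coupling. The expression below is the usual textbook formula (n_f is the number of quark flavours, \Lambda_{\mathrm{QCD}} \approx 200 MeV), not something quoted from the text.

```latex
% One-loop running of the QCD coupling:
\alpha_s(Q^{2}) \approx \frac{12\pi}{\left(33 - 2 n_f\right)\,
\ln\!\left(Q^{2} / \Lambda_{\mathrm{QCD}}^{2}\right)}.
```

For Q of tens of GeV this gives \alpha_s of roughly 0.1 to 0.2, and an expansion in powers of \alpha_s converges usefully (asymptotic freedom); as Q drops toward a GeV and below, the logarithm shrinks, \alpha_s grows past 1, and each extra vertex in a diagram is no longer a small correction, which is exactly why non-perturbative tools such as lattice QCD with Monte Carlo integration are used there.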
is betelgeuse changing in ways other than brightness that might indicate a supernova is imminent? | <p> due to misunderstandings caused by the 2009 publication of the star's 15% contraction, apparently of its outer atmosphere, betelgeuse has frequently been the subject of scare stories and rumors suggesting that it will explode within a year, leading to exaggerated claims about the consequences of such an event. the timing and prevalence of these rumors have been linked to broader misconceptions of astronomy, particularly to doomsday predictions relating to the mayan calendar. betelgeuse is not likely to produce a gamma-ray burst and is not close enough for its x-rays, ultraviolet radiation, or ejected material to cause significant effects on earth.
<p> some media outlets tied the fact that the red supergiant star betelgeuse would undergo a supernova at some point in the future to the 2012 phenomenon. however, while betelgeuse was certainly in the final stages of its life, and would die as a supernova, there was no way to predict the timing of the event to within 100,000 years. to be a threat to earth, a supernova would need to be no further than 25 light years from the solar system. betelgeuse is roughly 600 light years away, and so its supernova would not affect earth. in december 2011, nasa's francis reddy issued a press release debunking the possibility of a supernova occurring in 2012.
<p> observations have failed to note signs of accretion leading up to type ia supernovae, and this is now thought to be because the star is first loaded up to above the chandrasekhar limit while also being spun up to a very high rate by the same process. once the accretion stops the star gradually slows until the spin is no longer enough to prevent the explosion.
<p> hubble's comment remained relatively unknown as the physical phenomenon of the explosion was not known at the time. eleven years later, when the fact that supernovae are very bright phenomena was highlighted by walter baade and fritz zwicky and when their nature was suggested by zwicky, nicholas mayall proposed that the star of 1054 was actually a supernova, based on the speed of expansion of the cloud, measured by spectroscopy, which allows astronomers to determine its physical size and distance, which he estimated at 5000 light-years. this was under the assumption that the velocities of expansion along the line of sight and perpendicularly to it were identical. based on the reference to the brightness of the star which featured in the first documents discovered in 1934, he deduced that it was a supernova rather than a nova.
<p> as of february 2006, the phenomenon was not yet well understood. however, an optical afterglow to the gamma-ray burst has been detected and is brightening, and some scientists believe that the appearance of a supernova (sn 2006aj) may be ongoing.
<p> betelgeuse is a red supergiant that has evolved from an o-type main sequence star. its core will eventually collapse, producing a supernova explosion and leaving behind a compact remnant. the details depend on the exact initial mass and other physical properties of that main sequence star.
<p> another hypothesis discussed is that effects of a supernova could have been a factor in the younger dryas. effects of a supernova have been suggested before, but without confirming evidence. potential evidence that these effects could have been caused by a celestial event, a supernova are observations of gamma-ray bursts and x-ray flashes have been compared to nebular records to test this as well as supernovae flash models, comparable to the records of in-galaxy supernovae, to study the effects of such an event on earth. these effects include depletion in the ozone layer, increased uv exposure, global cooling, and nitrogen changes in the earth's surface and troposphere. as brakenridge states, the only supernova possible at that time was the vela supernova, or classified as the vela supernova remnant. | > The news is all a-twitter about the possibility that the current dimming of Betelgeuse might be leading to a supernova soon. Not really, at least not coming from astronomers. Its brightness has varied for as long as we have reliable measurements, and it is not expected to dim in particular before a supernova either. We don't know how far it is in its evolution. There are indications that it didn't start carbon burning yet, if that is correct then a supernova is at least thousands of years away. Hours before a supernova (light speed delay subtracted) neutrino detectors will measure a rapid rise of intensity followed by an extremely intense burst from the supernova itself. A few hours later we'll get the light from it - it needs more time because the star is not transparent. Neutrino detectors will issue an alarm via SNEWS, everyone can sign up to receive a mail in that case. |
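The distances quoted in the context (a supernova would need to be within roughly 25 light years to threaten Earth, while Betelgeuse is roughly 600 light years away) already settle the safety question via the inverse-square law; the comparison below is simple arithmetic on those two numbers, not a calculation taken from the text.

```latex
% Inverse-square comparison of received flux, using the two distances above:
\frac{F_{\mathrm{Betelgeuse}}}{F_{25\,\mathrm{ly}}}
= \left(\frac{25\ \mathrm{ly}}{600\ \mathrm{ly}}\right)^{2}
\approx 1.7\times 10^{-3}.
```

So for the same intrinsic output, Earth would receive only about 0.2% of the radiation dose that a supernova at the quoted danger distance would deliver.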
why do humans ovulate every month? | <p> ovulation occurs at the ovary surface and is described as the process in which an oocyte (female germ cell) is released from the follicle. ovulation is a non-deleterious 'inflammatory response' which is initiated by a luteinizing hormone (lh) surge. the mechanism of ovulation varies between species. in humans the ovulation process occurs around day 14 of the menstrual cycle; this can also be referred to as 'cyclical spontaneous ovulation'. however, while the monthly menstruation process is typically linked to humans and primates, all other animal species ovulate by various other mechanisms.
<p> in humans, ovulation occurs about midway through the menstrual cycle, after the follicular phase. the few days surrounding ovulation (from approximately days 10 to 18 of a 28-day cycle), constitute the most fertile phase. the time from the beginning of the last menstrual period (lmp) until ovulation is, on average, 14.6 days, but with substantial variation between females and between cycles in any single female, with an overall 95% prediction interval of 8.2 to 20.5 days.
<p> the start of ovulation can be detected by signs. because the signs are not readily discernible by people other than the female, humans are said to have a concealed ovulation. in many animal species there are distinctive signals indicating the period when the female is fertile. several explanations have been proposed to explain concealed ovulation in humans.
<p> human females, however, engage in sex throughout their ovulatory cycles, and even beyond their reproductive years. additionally, they do not show obvious physical signals of high fertility. this has led many researchers to conclude that humans lost their estrus through evolution. it has been hypothesized that this could be due to the adaptive benefits of concealed ovulation and extended sexuality.
<p> ovulation is based on a monthly cycle; the 14th day is the most fertile. on days one to four, menstruation and production of estrogen and progesterone decreases, and the endometrium starts thinning. the endometrium is sloughed off for the next three to six days. once menstruation ends, the cycle begins again with an fsh surge from the pituitary gland. days five to thirteen are known as the pre-ovulatory stage. during this stage, the pituitary gland secretes follicle-stimulating hormone (fsh). a negative feedback loop is enacted when estrogen is secreted to inhibit the release of fsh. estrogen thickens the endometrium of the uterus. a surge of luteinizing hormone (lh) triggers ovulation. on day 14, the lh surge causes a graafian follicle to surface the ovary. the follicle ruptures and the ripe ovum is expelled into the abdominal cavity. the fallopian tubes pick up the ovum with the fimbria. the cervical mucus changes to aid the movement of sperm. on days 15 to 28—the post-ovulatory stage, the graafian follicle—now called the corpus luteum—secretes estrogen. production of progesterone increases, inhibiting lh release. the endometrium thickens to prepare for implantation, and the ovum travels down the fallopian tubes to the uterus. if the ovum is not fertilized and does not implant, menstruation begins.
<p> some mammals (e.g. domestic cats, rabbits and camelids) are termed "induced ovulators". for these species, the female ovulates due to an external stimulus during, or just prior to, mating, rather than ovulating cyclically or spontaneously. stimuli causing induced ovulation include the sexual behaviour of coitus, sperm and pheromones.
<p> several days after ovulation, the increasing amount of estrogen produced by the corpus luteum may cause one or two days of fertile cervical mucus, lower basal body temperatures, or both. this is known as a "secondary estrogen surge". | Monthly ovulation is a result of how ovums (egg cells) are formed in the ovaries. It's not necessarily a function of single-child bearing, although other mammals who bear single offspring tend to have similar reproductive cycles to humans. The timing of it is highly variable, even within species, but very loosely correlates with size (i.e. a mouse will ovulate more frequently than a gorilla). In many cases, ovulation will be seasonal (so offspring are birthed in the spring, for instance). All mammals (well, female mammals) have an estrous cycle, which essentially describes the periodic release of hormones such as gonadotrophin releasing hormone (GRH), luteinising hormone (LH), and follicle-stimulating hormone (FSH). These hormones determine when ovulation occurs. FSH begins by stimulating the ovaries to begin developing follicles (which contain oocytes, or immature ovums). This makes the follicles grow, which takes time. As they grow, they produce hormones of their own, and when there's enough (i.e. the follicle is large enough), the brain releases LH, which causes ovulation (i.e. the ovum moves into the fallopian tubes). Humans actually develop all of their follicle cells in the womb. This graph shows just how many we start with, over a million! That number is reduced quite rapidly though, down to about 600,000 at menarche. Of course, of these, only a tiny percentage will ever release ovums. Each menstrual cycle begins by stimulating the development of several of these follicles, not just one. As the cycle progresses, however, most of these die off, leaving just one to ovulate. So in order to store all of these things in the relatively small ovaries, they need to be stored in immature forms and grown before being released. The time needed to grow, and then move through the fallopian tubes is what determines the (roughly) monthly human menstrual cycle. Note however that only humans and a few simian species menstruate (i.e. shed the endometrium in the event that the ovum is not fertilised). I'm not entirely sure why this is, but would welcome any insight from those more informed! Finally, most animals begin ovulating right after birth because on a large enough scale, it increases the chances of evolutionary survival. If the offspring dies (which was much more common throughout most of evolutionary history than it is today), then the best chances for passing on genes comes from having more babies as soon as possible. For instance, kangaroos will give birth relatively rapidly, and immediately become sexually receptive again. If the newly released ovum is fertilised, it's put 'on hold' while the current joey matures. If it dies, or when it leaves, the stored embryo begins to develop. It's also worth mentioning that while not ideal, it's perfectly possible to raise a newborn whilst carrying another child to term for many mothers. |
how did the galileo galilei form the galileo hypothesis before newton discovered gravity? | <p> newton's gravitational theory simplified and formalized galileo's and kepler's ideas by recognizing kepler's "animal force or some other equivalent" beyond gravity and inertia were not needed, deducing from kepler's planetary laws how gravity reduces with distance.
<p> sometime prior to 1638, galileo turned his attention to the phenomenon of objects in free fall, attempting to characterize these motions. galileo was not the first to investigate earth's gravitational field, nor was he the first to accurately describe its fundamental characteristics. however, galileo's reliance on scientific experimentation to establish physical principles would have a profound effect on future generations of scientists. it is unclear if these were just hypothetical experiments used to illustrate a concept, or if they were real experiments performed by galileo, but the results obtained from these experiments were both realistic and compelling. a biography by galileo's pupil vincenzo viviani stated that galileo had dropped balls of the same material, but different masses, from the leaning tower of pisa to demonstrate that their time of descent was independent of their mass. in support of this conclusion, galileo had advanced the following theoretical argument: he asked if two bodies of different masses and different rates of fall are tied by a string, does the combined system fall faster because it is now more massive, or does the lighter body in its slower fall hold back the heavier body? the only convincing resolution to this question is that all bodies must fall at the same rate.
<p> newton's classical theory of gravity offered no prospect of identifying any mediator of gravitational interaction. his theory assumed that gravitation acts instantaneously, regardless of distance. kepler's observations gave strong evidence that in planetary motion angular momentum is conserved. (the mathematical proof is valid only in the case of a euclidean geometry.) gravity is also known as a force of attraction between two objects because of their mass.
<p> bullet::::- isaac newton (1643–1727) built upon the work of kepler, galileo and huygens. he showed that an inverse square law for gravity explained the elliptical orbits of the planets, and advanced the law of universal gravitation. his development of infinitesimal calculus (along with leibniz) opened up new applications of the methods of mathematics to science. newton taught that scientific theory should be coupled with rigorous experimentation, which became the keystone of modern science.
<p> newton's contribution to gravitational theory was to unify the motions of heavenly bodies, which aristotle had assumed were in a natural state of constant motion, with falling motion observed on the earth. he proposed a law of gravity that could account for the celestial motions that had been described earlier using kepler's laws of planetary motion.
<p> galileo's theoretical and experimental work on the motions of bodies, along with the largely independent work of kepler and rené descartes, was a precursor of the classical mechanics developed by sir isaac newton. galileo conducted several experiments with pendulums. it is popularly believed (thanks to the biography by vincenzo viviani) that these began by watching the swings of the bronze chandelier in the cathedral of pisa, using his pulse as a timer. later experiments are described in his "two new sciences". galileo claimed that a simple pendulum is isochronous, i.e. that its swings always take the same amount of time, independently of the amplitude. in fact, this is only approximately true, as was discovered by christiaan huygens. galileo also found that the square of the period varies directly with the length of the pendulum. galileo's son, vincenzo, sketched a clock based on his father's theories in 1642. the clock was never built and, because of the large swings required by its verge escapement, would have been a poor timekeeper. (see engineering above.)
<p> in "principia", newton formulated the laws of motion and universal gravitation that formed the dominant scientific viewpoint until it was superseded by the theory of relativity. newton used his mathematical description of gravity to prove kepler's laws of planetary motion, account for tides, the trajectories of comets, the precession of the equinoxes and other phenomena, eradicating doubt about the solar system's heliocentricity. he demonstrated that the motion of objects on earth and celestial bodies could be accounted for by the same principles. newton's inference that the earth is an oblate spheroid was later confirmed by the geodetic measurements of maupertuis, la condamine, and others, convincing most european scientists of the superiority of newtonian mechanics over earlier systems. | Neither Newton nor Galileo "discovered" gravity. Galileo figured out that acceleration was uniform by rolling weights down ramps and timing how long the weights would take to reach certain points. He noticed that if an object went one unit of distance in the first unit of time, it would go three in the next, then five, then seven, etc. From systematic measurements like he concluded that gravity makes things fall with uniform acceleration. It was Newton who made the connection between terrestrial gravity and celestial motion. |
does pi have any recognizable pattern when represented in anything other than base 10? | <p> variant pi or "pomega" (formula_1 or ϖ) is a glyph variant of lower case pi sometimes used in technical contexts as though it were a lower-case omega with a macron, though historically it is simply a cursive form of pi, with its legs bent inward to meet. it is used as a symbol for:
<p> only three kinds of bipyramids can have all edges of the same length (which implies that all faces are equilateral triangles, and thus the bipyramid is a deltahedron): the triangular, tetragonal, and pentagonal bipyramids. the tetragonal bipyramid with identical edges, or regular octahedron, counts among the platonic solids, while the triangular and pentagonal bipyramids with identical edges count among the johnson solids (j and j).
<p> referring to the figure, the 50 finite ordinary double points are arrayed as the vertices of 20 roughly tetrahedral shapes oriented such that the bases of these four-sided "outward pointing" shapes form the triangular faces of a regular icosidodecahedron. to these 30 icosidodecahedral vertices are added the summit vertices of the 20 tetrahedral shapes. these 20 points themselves are the vertices of a concentric regular dodecahedron circumscribed about the inner icosidodecahedron. together, these are the 50 finite ordinary double points of the figure.
<p> the parity of heptagonal numbers follows the pattern odd-odd-even-even. like square numbers, the digital root in base 10 of a heptagonal number can only be 1, 4, 7 or 9. five times a heptagonal number, plus 1 equals a triangular number.
<p> in geometry, the perles configuration is a configuration of 9 points and 9 lines that can be realized in the euclidean plane but for which every realization has at least one irrational number as one of its coordinates. it is not a projective configuration, however, because its points and lines do not all have the same number of incidences as each other. it was introduced by micha perles in the 1960s.
<p> in mathematics, a polygonal number is a number represented as dots or pebbles arranged in the shape of a regular polygon. the dots are thought of as alphas (units). these are one type of 2-dimensional figurate numbers.
<p> those positing a hebrew name have speculated "pi-hahiroth" might mean "mouth of the gorges", descriptive of its location as the end of a canal or river. in fact, part of the mystery may be resolved by understanding the initial syllable ′pi,′ which corresponds to the egyptian word "ipi" or "ipu", as "house of" such as in ′"pithom"′ or ′"pi-ramesses"′. the next literary fragment ′ha′ would indicate the ′desert hills or mountains to the west′ normally associated with libya, but a more ethereal rendering could possibly indicate the prominent mountainous range west of nuweiba beach on the west coast of the gulf of aqaba. | It probably is, though a proof doesn't exist to the best of my knowledge. The criterion you're asking about is whether or not pi is a normal number, and, as you can see from that article, we think that it probably is but haven't found a proof. |
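The answer's notion of a "normal number" (every digit equally frequent in every base) is easy to probe empirically in a base other than 10. The sketch below is illustrative only: it assumes the third-party mpmath library for arbitrary-precision arithmetic, and counting finitely many digits can suggest but never prove normality.

```python
# Count hexadecimal (base-16) digit frequencies in the fractional part of pi.
# Illustrative sketch only; assumes the third-party mpmath package is installed.
import math
from collections import Counter
from mpmath import mp

def pi_digits(base=16, n_digits=5000, guard=50):
    """Return the first n_digits of pi's fractional part in the given base."""
    # Use enough decimal precision that the low-order digits are trustworthy.
    mp.dps = int(n_digits * math.log10(base)) + guard
    x = mp.pi - 3              # fractional part of pi
    digits = []
    for _ in range(n_digits):
        x *= base
        d = int(x)             # next digit in this base
        digits.append(d)
        x -= d
    return digits

if __name__ == "__main__":
    digs = pi_digits(base=16, n_digits=5000)
    counts = Counter(digs)
    for d in range(16):
        share = 100.0 * counts[d] / len(digs)
        print("%x: %5d digits (%.2f%%)" % (d, counts[d], share))
    # A number normal in base 16 would have each digit tending toward 1/16 = 6.25%.
```

The frequencies hover near 6.25% for every hex digit, which is consistent with (but no proof of) the conjecture in the answer that pi is normal; the same loop works for any base.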
is it possible for the wind to blow hard enough to change the direction of a photon? | <p> the breaking of rotation and boost invariance causes direction dependence in the theory as well as unconventional energy dependence that introduces novel effects, including lorentz-violating neutrino oscillations and modifications to the dispersion relations of different particle species, which naturally could make particles move faster than light.
<p> since the scattering is isotropic, the net momentum is transferred in the forward direction. on the quantum level, we picture the gradient force as forward rayleigh scattering in which identical photons are created and annihilated concurrently, while in the scattering (radiation) force the incident photons travel in the same direction and ‘scatter’ isotropically. by conservation of momentum, the particle must accumulate the photons' original momenta, causing a forward force in the latter.
<p> in the low-energy limit, the electric field of the incident wave (photon) accelerates the charged particle, causing it, in turn, to emit radiation at the same frequency as the incident wave, and thus the wave is scattered. thomson scattering is an important phenomenon in plasma physics and was first explained by the physicist j. j. thomson. as long as the motion of the particle is non-relativistic (i.e. its speed is much less than the speed of light), the main cause of the acceleration of the particle will be due to the electric field component of the incident wave. in a first approximation, the influence of the magnetic field can be neglected. the particle will move in the direction of the oscillating electric field, resulting in electromagnetic dipole radiation. the moving particle radiates most strongly in a direction perpendicular to its acceleration and that radiation will be polarized along the direction of its motion. therefore, depending on where an observer is located, the light scattered from a small volume element may appear to be more or less polarized.
<p> if the particle is located at the center of the beam, then individual rays of light are refracting through the particle symmetrically, resulting in no net lateral force. the net force in this case is along the axial direction of the trap, which cancels out the scattering force of the laser light. the cancellation of this axial gradient force with the scattering force is what causes the bead to be stably trapped slightly downstream of the beam waist.
<p> once the wind speed reaches a certain critical value, termed the "impact" or "fluid threshold", the drag and lift forces exerted by the fluid are sufficient to lift some particles from the surface. these particles are accelerated by the fluid, and pulled downward by gravity, causing them to travel in roughly ballistic trajectories. if a particle has obtained sufficient speed from the acceleration by the fluid, it can eject, or "splash", other particles in saltation, which propagates the process. depending on the surface, the particle could also disintegrate on impact, or eject much finer sediment from the surface. in air, this process of "saltation bombardment" creates most of the dust in dust storms. in rivers, this process repeats continually, gradually eroding away the river bed, but also transporting-in fresh material from upstream.
<p> robertson considered dust motion in a beam of radiation emanating from a point source. a. w. guess later considered the problem for a spherical source of radiation and found that for particles far from the source the resultant forces are in agreement with those concluded by poynting.
<p> bullet::::- if the electrons emit a light wave which is 270° out of phase with the light wave shaking them, it will cause the wave to travel faster. this is called "anomalous refraction", and is observed close to absorption lines (typically in infrared spectra), with x-rays in ordinary materials, and with radio waves in earth's ionosphere. it corresponds to a permittivity less than 1, which causes the refractive index to be also less than unity and the phase velocity of light greater than the speed of light in vacuum "c" (note that the signal velocity is still less than "c", as discussed above). if the response is sufficiently strong and out-of-phase, the result is a negative value of permittivity and imaginary index of refraction, as observed in metals or plasma. | **Short answer:** Nope. **Long answer:** There was actually a very good experiment done in the 19th century (Fizeau's 1851 experiment) that tested whether the speed of light would change in a moving medium - flowing water in particular. It was found that the speed shifted by far less than simply adding the water's velocity would suggest. This was a big deal, and it meant that something really peculiar was going on with the motion of light. This exact experiment was actually a big piece of Einstein's inspiration when developing the special theory of relativity. Light will follow the same path whether or not the medium is moving, up to a small relativistic correction. However, in reality, moving fluids tend to also have funny pressure and temperature gradients, and that can change the index of refraction of the light, slightly warping the image that you're looking at. This is the source of stars twinkling, road mirages, and other familiar experiences with light.
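The flowing-water result mentioned in the answer is usually stated as the Fresnel "drag" coefficient, which relativistic velocity addition reproduces at low speed. Both lines below are standard textbook relations (n is the refractive index of the water, v its flow speed, with v much less than c), not equations from the text.

```latex
% Fresnel drag coefficient measured by Fizeau, and the same result recovered
% from the relativistic velocity-addition law:
u \approx \frac{c}{n} + v\left(1 - \frac{1}{n^{2}}\right),
\qquad
u = \frac{c/n + v}{1 + v/(n c)} \approx \frac{c}{n} + v\left(1 - \frac{1}{n^{2}}\right).
```

In other words, the moving water drags the light's speed along only partially, far less than naive velocity addition would give; this is the puzzling measurement that special relativity later explained with the exact addition law on the right.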
during the carboniferous, o2 levels were 163% modern levels while co2 was 800ppm. with so many plants, why were co2 levels so high relative to modern levels? | <p> the carboniferous spans from 359 million to 299 million years ago. during this period, average global temperatures were exceedingly high: the early carboniferous averaged at about 20 degrees celsius (but cooled to 10 degrees during the middle carboniferous). tropical swamps dominated the earth, and the large amounts of trees created much of the carbon that became coal deposits (hence the name carboniferous). the high oxygen levels caused by these swamps allowed massive arthropods, normally limited in size by their respiratory systems, to proliferate. perhaps the most important evolutionary development of the time was the evolution of amniotic eggs, which allowed amphibians to move farther inland and remain the dominant vertebrates throughout the period. also, the first reptiles and synapsids evolved in the swamps. throughout the carboniferous, there was a cooling pattern, which eventually led to the glaciation of gondwana as much of it was situated around the south pole, in an event known as the permo-carboniferous glaciation or the carboniferous rainforest collapse.
<p> the carboniferous spanned from 359 million to 299 million years ago. during this time, average global temperatures were exceedingly high; the early carboniferous averaged at about 20 degrees celsius (but cooled to 10 °c during the middle carboniferous). tropical swamps dominated the earth, and the lignin stiffened trees grew to greater heights and number. as the bacteria and fungi capable of eating the lignin had not yet evolved, their remains were left buried, which created much of the carbon that became the coal deposits of today (hence the name "carboniferous"). perhaps the most important evolutionary development of the time was the evolution of amniotic eggs, which allowed amphibians to move farther inland and remain the dominant vertebrates for the duration of this period. also, the first reptiles and synapsids evolved in the swamps. throughout the carboniferous, there was a cooling trend, which led to the permo-carboniferous glaciation or the carboniferous rainforest collapse. gondwana was glaciated as much of it was situated around the south pole.
<p> however, alternative hypotheses have been proposed. predictions of past co levels suggest that they may have previously dropped as precipitously low as that seen during the expansion of land plants: approximately 300 mya, during the proterozoic era. this being the case, there might have been a similar evolutionary pressure that resulted in the development of the pyrenoid, though it must be noted that in this case, a pyrenoid or pyrenoid-like structure could have developed, and have been lost as co levels then rose, only to be gained or developed again during the period of land colonisation by plants. evidence of multiple gains and losses of pyrenoids over relatively short geological time spans was found in hornworts.
<p> bullet::::5. reconstruction of paleoclimatological co2 concentrations demonstrates that carbon dioxide concentration today is near its lowest level since the cambrian era some 550 million years ago, when there was almost 20 times as much co2 in the atmosphere as there is today without causing a “runaway greenhouse effect.”
<p> it is expected for [co2] to reach 500–1000 ppm by 2100. 96% of the past 400,000 years experienced below 280 ppm co2 levels. from this figure, it is highly probable that the genotypes of today's plants diverged from their pre-industrial relatives.
<p> there are several hypotheses as to the origin of pyrenoids. with the rise of large terrestrial-based flora following the colonisation of land by ancestors of charophyte algae, co2 levels dropped dramatically, with a concomitant increase in atmospheric o2 concentration. it has been suggested that this sharp fall in co2 levels acted as an evolutionary driver of ccm development, and thus gave rise to pyrenoids, in doing so ensuring that the rate of supply of co2 did not become a limiting factor for photosynthesis in the face of declining atmospheric co2 levels.
<p> a study for the european space agency found that up to 2.57 billion tons of carbon were released to the atmosphere in 1997 as a result of burning peat and vegetation in indonesia. this is equivalent to 40% of the average annual global carbon emissions from fossil fuels, and contributed greatly to the largest annual increase in atmospheric co2 concentration detected since records began in 1957. additionally, the 2002-3 fires released between 200 million and 1 billion tons of carbon into the atmosphere. | Are you sure you mean the Carboniferous here? That period actually saw a huge drop in global CO2 levels (the interval usually marked "C" on geological CO2 reconstructions, around 300-350 million years ago). Those pCO2 levels had been steadily falling since the Cambrian period, but likely saw an extra-large drop during the Carboniferous as the climate transitioned from greenhouse to icehouse and massive glaciations occurred...and as temperatures fall, ocean CO2 solubility increases.
what all goes into making a vaccine? | <p> a vaccine is a biological preparation that provides active acquired immunity to a particular disease. a vaccine typically contains an agent that resembles a disease-causing microorganism and is often made from weakened or killed forms of the microbe, its toxins, or one of its surface proteins. the agent stimulates the body's immune system to recognize the agent as a threat, destroy it, and to further recognize and destroy any of the microorganisms associated with that agent that it may encounter in the future. vaccines can be prophylactic (example: to prevent or ameliorate the effects of a future infection by a natural or "wild" pathogen), or therapeutic (e.g., vaccines against cancer are being investigated).
<p> a vaccine is a biological preparation that improves immunity to a particular disease. a vaccine typically contains an agent that resembles a disease-causing microorganism, and is often made from weakened or killed forms of the microbe or its toxins. the agent stimulates the body's immune system to recognize the agent as foreign, destroy it, and "remember" it, so that the immune system can more easily recognize and destroy any of these microorganisms that it later encounters.
<p> a vaccine is an antigenic preparation used to produce active immunity to a disease, in order to prevent or reduce the effects of infection by any natural or "wild" pathogen. many vaccines require multiple doses for maximum effectiveness, either to produce sufficient initial immune response or to boost response that fades over time. for example, tetanus vaccine boosters are often recommended every 10 years. vaccine schedules are developed by governmental agencies or physicians groups to achieve maximum effectiveness using required and recommended vaccines for a locality while minimizing the number of health care system interactions. over the past two decades, the recommended vaccination schedule has grown rapidly and become more complicated as many new vaccines have been developed.
<p> a vaccine administration may be oral, by injection (intramuscular, intradermal, subcutaneous), by puncture, transdermal or intranasal. several recent clinical trials have aimed to deliver the vaccines via mucosal surfaces to be up-taken by the common mucosal immunity system, thus avoiding the need for injections.
<p> a synthetic vaccine is a vaccine consisting mainly of synthetic peptides, carbohydrates, or antigens. they are usually considered to be safer than vaccines from bacterial cultures. creating vaccines synthetically has the ability to increase the speed of production. this is especially important in the event of a pandemic.
<p> the vaccine is given by injection. an initial dose provides protection lasting one year starting 2–4 weeks after vaccination; the second booster dose, given six to 12 months later, provides protection for over 20 years.
<p> as of august 2013, allison rice-ficht, ph.d. at texas a&m university and her team claim to be close to creating a human vaccine. it would primarily be used to immunize members of the military in case of exposure to weaponized "brucella" on the battlefield. | A vaccine is either an inactivated (previously virulent) micro-organism or an attenuated form (containing foreign antigens, such as surface proteins). After injection, your immune system recognizes the micro-organism or proteins thereof as foreign and starts to produce antibodies against it and "stores" that information. Then, if you ever come in contact with it again, your immune system can easily and quickly produce antibodies and the pathogen is killed off. |
what actually happens to your finger when you "jam it" like when it gets hit with a basketball straight on while extended and rigid? | <p> the finger roll is a specialized type of basketball layup shot where the ball is rolled off the tips of the player's fingers. the advantage of the finger roll is that the ball can travel high in the air over a defender that might otherwise block a regular jump shot or dunk, while the spin applied by the rolling over the fingers will carry the ball to the basket off the backboard. the shot was pioneered by center wilt chamberlain in the 1960s.
<p> trigger finger, also known as stenosing tenosynovitis, is a disorder characterized by catching or locking of the involved finger. pain may occur in the palm of the hand or knuckles. the name is due to the popping sound made by the affected finger when moved. most commonly the ring finger or thumb is affected.
<p> a finger roll is performed when a player shoots the ball with one hand during a layup and then lifts his fingers, rolling the ball into the basket. the rotation produced provides the ball with a soft touch, and the ball will roll around the rim and then drop into the basket. guard george "the iceman" gervin was known for having one of the best finger rolls in the game along center wilt chamberlain. michael jordan and scottie pippen are other notable practitioners, while former nba star jason kidd is renowned for his smooth finger rolls as well.
<p> bullet::::- the gesture of "flipping someone off" by hitting the wrist against the inside of the elbow (sometimes called "a banana" in brazil) is considered playful and not very offensive (in some other parts of the world, this is more akin to "the finger").
<p> bullet::::- the ball may bounce an unlimited number of times, but if it switches to merely rolling or if it comes to rest, (including by another player stamping on it with one foot while keeping the other in their own square), they are 'out'.
<p> the finger roll is notorious for being very difficult to master, and few players use it as their primary shot. another disadvantage is that the shot is one-handed, and therefore harder to protect the ball while executing. one famous exception was san antonio spurs forward george gervin, who turned the shot into a nearly invincible weapon when he led the national basketball association in scoring between 1978 and 1980.
<p> "at impact the back of the left hand faces toward your target. the wrist bone is definitely raised. it points to the target and, at the moment the ball is contacted, it is out in front, nearer to the target than any part of the hand." | All credit for this comment goes to /u/Ohmyquad Your finger takes enough force that this force directly translates as pressure to the bones. The bones bump at the joints, causing a bone bruise at the joint. The injury can be accompanied by strain on tendons/fibers depending on how the injury occurred. Pain is caused by the chemical response to the injury, and the nerve endings available to receive the signals. In your fingers, there are a lot of nerve endings, presumably (from an evolutionary standpoint) to allow for sensitivity and control. We use our hands for so much that it is good to be able to tell a lot about our environment from the hands. The greater number of nerve endings means a more intense response (pain). |
how long could you live on life support machines? | <p> the second patient, tom christerson, who was given less than a 20 percent chance of surviving 30 days at the time of his surgery, lived for 512 days after receiving the abiocor, dying on february 7, 2003 due to the wearing out of an internal membrane of the abiocor. an additional 12 patients had the device implanted into 2004, resulting in an average life span of less than five months among all 14 patients. in some cases the device extended survival by several months, allowing the patients to spend valuable time with family and friends. in two cases, the device extended survival by 10 and 17 months respectively, and one patient was discharged from the hospital to go home. for a patient to be eligible for implantation with the abiocor, the person must have had severe heart failure (with failure of both ventricles) and had to be likely to die within two weeks without transplantation.
<p> the machine is 5.07 lbs, 0.86 inches thick, and has between six and ten hours of battery life depending on usage. other features include its backlit keyboard, stereo speakers with dolby home theater, and motion control.
<p> as each component within a product is reviewed, those with a relatively short useful life span are identified. one example of this is an electrolytic capacitor. many designs have a useful life limitation of 10 years. since constant failure rates are only valid during the useful life period, this metric is valuable for interpreting fmeda result limitations.
<p> although long bearing life is often desirable, it is sometimes not necessary. describes a bearing for a rocket motor oxygen pump that gave several hours life, far in excess of the several tens of minutes life needed.
<p> people can live long normal lives with the devices. many patients have multiple implants. a patient in houston, texas had an implant at the age of 18 in 1994 by the recent dr. antonio pacifico. he was awarded "youngest patient with defibrillator" in 1996. though today these devices are implanted into small babies shortly after birth.
<p> the ultimate goals of life support depend on the specific patient situation. typically, life support is used to sustain life while the underlying injury or illness is being treated or evaluated for prognosis. life support techniques may also be used indefinitely if the underlying medical condition cannot be corrected, but a reasonable quality of life can still be expected.
<p> with more frequent measurements, it is possible to calculate sehcat retention whole-body half-life; this is not routinely measured in a clinical setting. a half-life of greater than 2.8 days has been quoted as normal. | No we can't. We can replace the heart, the lungs and the kidneys more or less sufficiently for quite some time, but these systems are not perfect, and sooner or later you'd die anyway. Moreover it is not possible to replace liver function from the outside, at least not fully. So when the liver fails and you don't get a transplant, you're toast. |
why do vestigial structures exist? | <p> vestigial structures are anatomical structures of organisms in a species which are considered to have lost much or all of their original function through evolution. these body parts can be classed as additional to the required functioning of the body. in human anatomy the vermiform appendix is sometimes classed as a vestigial remnant.
<p> in the context of human evolution, human vestigiality involves those traits (such as organs or behaviors) occurring in humans that have lost all or most of their original function through evolution. although structures called "vestigial" often appear functionless, a vestigial structure may retain lesser functions or develop minor new ones. in some cases, structures once identified as vestigial simply had an unrecognized function.
<p> vestigial structures are often called "vestigial organs", although many of them are not actually organs. such vestigial structures typically are degenerate, atrophied, or rudimentary, and tend to be much more variable than homologous non-vestigial parts. although structures commonly regarded "vestigial" may have lost some or all of the functional roles that they had played in ancestral organisms, such structures may retain lesser functions or may have become adapted to new roles in extant populations.
<p> vestigial features may take various forms; for example, they may be patterns of behavior, anatomical structures, or biochemical processes. like most other physical features, however functional, vestigial features in a given species may successively appear, develop, and persist or disappear at various stages within the life cycle of the organism, ranging from early embryonic development to late adulthood.
<p> the perichondrium (from greek περί ("peri" 'around') and χόνδρος ("chondros" 'cartilage')) is a layer of dense irregular connective tissue that surrounds the cartilage of developing bone. it consists of two separate layers: an outer fibrous layer and inner chondrogenic layer. the fibrous layer contains fibroblasts, which produce collagenous fibers. the chondrogenic layer remains undifferentiated and can form chondroblasts or chondrocytes. perichondrium can be found around the perimeter of elastic cartilage and hyaline cartilage.
<p> vestigial structures have been noticed since ancient times, and the reason for their existence was long speculated upon before darwinian evolution provided a widely accepted explanation. in the 4th century bc, aristotle was one of the earliest writers to comment, in his "history of animals", on the vestigial eyes of moles, calling them "stunted in development" due to the fact that moles can scarcely see. however, only in recent centuries have anatomical vestiges become a subject of serious study. in 1798, étienne geoffroy saint-hilaire noted on vestigial structures:
<p> during evolution, some structures may lose their original function and become vestigial structures. such structures may have little or no function in a current species, yet have a clear function in ancestral species, or other closely related species. examples include pseudogenes, the non-functional remains of eyes in blind cave-dwelling fish, wings in flightless birds, the presence of hip bones in whales and snakes, and sexual traits in organisms that reproduce via asexual reproduction. examples of vestigial structures in humans include wisdom teeth, the coccyx, the vermiform appendix, and other behavioural vestiges such as goose bumps and primitive reflexes. | A vestigial structure would exist because there is a lack of selective pressure for it to disappear. If we assume the appendix is actually useless (there's a lot of info that says it has uses, even if they're minor uses), it doesn't affect our ability to live and breed in any significantly negative way (assuming appendicitis isn't significant enough), so there's no reason for it to go away. If an organ became useless, and keeping it had a negative effect on the animal's ability to breed, that would be a selective pressure for the structure to diminish. The Texas blind salamander that you're talking about actually has very small vestigial eyes that are really just little black dots. I had trouble finding info on the evolution of its eyes, but it can be assumed that having normal-sized eyes was detrimental in some way to it (maybe proneness to infection, or the amount of energy used to upkeep an unused system), and so smaller and smaller eyes became a more beneficial trait.
are there any natural occurring bodies of water that are sterile i.e. devoid of life? | <p> out of the 771 cases in 2001, only 28% were fresh water drowning. living diatoms do not inhabit domestic water sources, which limits the situations that diatoms can be used to create flora profiles or time of death estimations. diatoms can only tell when or where evidence was found in some situations and not the time of death if there is no body fluid sample available to be collected. if a body is placed in freshwater post mortem then diatoms cannot be used to evaluate time of death. without the inhalation of water and some circulation present in the victim, the diatoms will not be able to enter the alveolar system and blood stream making it difficult to extract a reliable sample. another issue with the use of diatoms in order to provide evidential support is that diatoms can also be found on clothes, in food and drink, or air. in a study conducted by spitz and schneider in 1964, 500 cubic meters of air was filtered for three days in april and there was between 662 and 1564 individual diatoms present on the filtrate. because the body can preserve these microscopic algae, the presence of diatoms may not only be on a victim or suspect through their relation to a crime scene, which affects the reliability of the results collected from a scene. diatoms can also be destroyed based on the biological make up of the body it encounters, this could affect the results in a criminal investigation.
<p> diatoms should normally never be present in human tissue unless water was aspirated, and their presence in tissues such as bone marrow suggests drowning, however, they are present in soil and the atmosphere and samples may easily be contaminated. an absence of diatoms does not rule out drowning, as they are not always present in water. a match of diatom shells to those found in the water may provide supporting evidence of the place of death. drowning in salt water can leave significantly different concentrations of sodium and chloride ions in the left and right chambers of the heart, but this will dissipate if the person survived for some time after the aspiration, or if cpr was attempted, and have been described in other causes of death.
<p> the concept of canals with flowing water and a world where life was possible were later proved erroneous by more accurate observation of the planet. later landings by american probes such as the two viking missions found a dead world too cold (and with far too thin an atmosphere) for water to exist in its fluid state.
<p> living organisms can live in a limited range of conditions on the earth that are limited by temperature and the existence of liquid water. the potential habitability of other planets or moons can also be assessed by the existence of liquid water.
<p> sewage contaminated water contains many viruses, over one hundred species are reported and can lead to diseases that affect human beings. for example, hepatitis, gastroenteritis, meningitis, fever, rash, and conjunctivitis can all be spread through contaminated water. more viruses are being discovered in water because of new detection and characterization methods, although only some of these viruses are human pathogens.
<p> class ii habitats include bodies which initially enjoy earth-like conditions, but do not keep their ability to sustain liquid water on their surface due to stellar or geophysical conditions. mars, and possibly venus are examples of this class where complex life forms may not develop.
<p> the ideas of canals with flowing water and an inhabited, if dying world, were later disproved by more accurate observation of the planet, and fly-bys and landings by russian and american probes such as the two viking missions which found a dead, frozen world where water could not exist in a liquid state. | I have never heard of it; however, it would become non-sterile pretty much as soon as something touched it. As to whether all water must contain life: no, it does not mean that. Since our planet is the only example we have of life developing, something like the oceans of Europa may not have life, because its environment has developed so differently from ours. Even if life did arise there, the organisms would have to survive, which is another highly unlikely scenario; it'd be a fallacy to say "well, it happened here so it can happen there", because a lot of the circumstances and evolution had to occur just right for survival.
have humans evolved to the consumption of alcohol over the last several millenniums? | <p> discovery of late stone age jugs suggest that intentionally fermented drinks existed at least as early as the neolithic period (cir. 10,000 bc). many animals also consume alcohol when given the opportunity and are affected in much the same way as humans, although humans are the only species known to produce alcoholic drinks intentionally.
<p> extensive research of western cultures has consistently shown increased survival associated with light to moderate alcohol consumption. a 23-year prospective study of 12,000 male british physicians aged 48–78, found that overall mortality was significantly lower in current drinkers compared to non-drinkers even after correction for ex-drinkers. this benefit was strongest for ischemic heart disease, but was also noted for other vascular disease and respiratory disease. death rate amongst current drinkers was higher for 'alcohol augmentable' disease such as liver disease and oral cancers, but these deaths were much less common than cardiovascular and respiratory deaths. the lowest mortality rate was found for consumption of 8 to 14 'units' per week. in the uk a unit is defined as 10ml or 8g of pure alcohol. higher consumption increased overall mortality rate, but not above that of non-drinkers. other studies have found age-dependent mortality risks of low-to-moderate alcohol use: an increased risk for individuals aged 16–34 (due to increased risk of cancers, accidents, liver disease, and other factors), but a decreased risk for individuals ages 55+ (due to lower incidence of ischemic heart disease).
<p> alcohol has a long history of use and misuse throughout recorded history. biblical, egyptian and babylonian sources record the history of abuse and dependence on alcohol. in some ancient cultures alcohol was worshiped and in others, its abuse was condemned. excessive alcohol misuse and drunkenness were recognized as causing social problems even thousands of years ago. however, the defining of habitual drunkenness as it was then known as and its adverse consequences were not well established medically until the 18th century. in 1647 a greek monk named agapios was the first to document that chronic alcohol misuse was associated with toxicity to the nervous system and body which resulted in a range of medical disorders such as seizures, paralysis, and internal bleeding. in 1920 the effects of alcohol abuse and chronic drunkenness boosted membership of the temperance movement and led to the prohibition of alcohol in the united states, a nationwide constitutional ban on the production, importation, transportation, and sale of alcoholic beverages that remained in place until 1933; this policy resulted in the decline of death rates from cirrhosis and alcoholism. in 2005 alcohol dependence and abuse was estimated to cost the us economy approximately 220 billion dollars per year, more than cancer and obesity.
<p> michael niederman of "new york theatre review" notes, "caporale takes us on a millennia-long journey into just how much western civilization has been influenced by the introduction and preservation of alcohol consumption. basically, it is his thesis that without alcohol, it is very likely that we wouldn’t even have a civilization. in short, the regular consumption of alcoholic beverages made it possible for human beings to, in his words, 'not die'."
<p> the fermentation of sugar into ethanol is one of the earliest biotechnologies employed by humans. the intoxicating effects of ethanol consumption have been known since ancient times. ethanol has been used by humans since prehistory as the intoxicating ingredient of alcoholic beverages. dried residue on 9,000-year-old pottery found in china suggests that neolithic people consumed alcoholic beverages.
<p> in his research for the genocide convention, raphael lemkin proposed that distribution of alcohol was one of several tools (such as forced relocations, destruction of cultural symbols, and "re-education" of children) by which european colonists obliterated indigenous cultures--not only in the americas, but also in tasmania and australia. lemkin theorized that the availability of alcohol undermined social integrity, promoted violence, impeded organized resistance, and contributed to the belief that native americans were culturally inferior. lemkin argued that once a people becomes dependent on alcohol "the desire for cheap individual pleasure [would] be substituted for the desire for collective feelings and ideals based on a higher morality."
<p> these profound economic and social changes, and the breakup of native culture contributed to the increasing addiction to alcohol. before the spanish arrived, the incas had consumed alcohol only during religious ceremonies. indian use of the coca leaf also increased, and, according to one chronicler, at the end of the 16th century "in potosí alone, the trade in coca amounts to over half a million pesos a year, for 95,000 baskets of it are consumed." | Alcohol consumption by humans probably has origins in eating naturally fermented fruit. Many animals (elephants, monkeys, et al) become intoxicated this way. Human alcohol consumption became routine once we figured out how to "domesticate" the fermentation process--which was originally a means of preserving foods; intoxication was just a side effect. Even beer was originally a way of preserving foods, so running out of beer was a genuine crisis. I doubt humans as a species could become desensitized/tolerant unless alcohol gradually became the dominant food source and our liver, pancreas, kidneys, et al evolved accordingly. |
nitrate reactions | <p> nitrate reductases are molybdoenzymes that reduce nitrate (no) to nitrite (no). this reaction is critical for the production of protein in most crop plants, as nitrate is the predominant source of nitrogen in fertilized soils.
<p> sodium nitrate is a white solid very soluble in water. it is a readily available source of the nitrate anion (no₃⁻), which is useful in several reactions carried out on industrial scales for the production of fertilizers, pyrotechnics and smoke bombs, glass and pottery enamels, food preservatives (esp. meats), and solid rocket propellant. it has been mined extensively for these purposes.
<p> a nitrate test is a chemical test used to determine the presence of nitrate ion in solution. testing for the presence of nitrate via wet chemistry is generally difficult compared with testing for other anions, as almost all nitrates are soluble in water. in contrast, many common ions give insoluble salts, e.g. halides precipitate with silver, and sulfate precipitate with barium.
<p> the sources of nitrate can include fertilizers used in agricultural lands, waste dumps or pit latrines. for example, cases of blue baby syndrome have been reported in villages in romania and bulgaria, and were thought to be caused by groundwater polluted by nitrate leaching from pit latrines. nitrate levels are subject to monitoring to comply with drinking water quality standards in the united states and other countries. the link between blue baby syndrome and nitrates in drinking water is widely accepted, but some studies indicate that other contaminants, or dietary nitrate sources, may also play a role in the syndrome.
<p> nitrate reductase (nr) is regulated at the transcriptional and translational levels induced by light, nitrate, and possibly a negative feedback mechanism. first, nitrate assimilation is initiated by the uptake of nitrate from the root system, reduced to nitrite by nitrate reductase, and then nitrite is reduced to ammonia by nitrite reductase. ammonia then goes into the gs-gogat pathway to be incorporated into amino acids. when the plant is under stress, instead of reducing nitrate via nr to be incorporated into amino acids, the nitrate is reduced to nitric oxide which can have many damaging effects on the plant. thus, the importance of regulating nitrate reductase activity is to limit the amount of nitric oxide being produced.
<p> the reduction of nitrate into nitrite occurs in the second step of the mechanism where the two dimethyl-dithiolene ligands have a key role in spreading the excess of negative charge near the mo atom to make it available for the chemical reaction. the reaction involves the oxidation of the sulfur atoms and not of the molybdenum as previously suggested. the mechanism is that of molybdenum and sulfur-based redox chemistry instead of the currently accepted redox chemistry based only on the mo ion. the second part of the mechanism involves two protonation steps that are promoted by the presence of mo(v) species. mo(vi) intermediates might also be present in this stage depending on the availability of protons and electrons. once the water molecule is generated only the mo(vi) species allow water molecule dissociation and the concomitant enzymatic turnover.
<p> the historical standard method of testing for nitrate is the cadmium reduction method, which is reliable and accurate although it is dependent on a toxic metal cadmium and thus not suitable for all applications. an alternative method for nitrate and nitrite analysis is enzymatic reduction using nitrate reductase, which has recently been proposed by the us environmental protection agency as an alternate test procedure for determining nitrate. an open source photometer has been developed for this method to accurately detect nitrate in water, soils, forage, etc. | Nitrogen gas is a diatomic molecule with a very strong triple bond. When bonds are formed, energy is released, and the stronger the bond formed, the more energy is released. Nitro compounds can very easily be triggered into liberating nitrogen gas - because of the considerable energy released by this process, we call it an explosion. |
why do some people who contract infections show no symptoms? | <p> the symptoms of an infection depends on the type of disease. some signs of infection affect the whole body generally, such as fatigue, loss of appetite, weight loss, fevers, night sweats, chills, aches and pains. others are specific to individual body parts, such as skin rashes, coughing, or a runny nose.
<p> the majority of infections result in mild illness, including fever and headache. when infection is more severe the person may experience headache, high fever, neck stiffness, stupor, disorientation, coma, tremors, occasional convulsions and spastic paralysis. fatality ranges from . elderly people are more likely to have a fatal infection.
<p> an individual may only develop signs of an infection after a period of subclinical infection, a duration that is called the incubation period. this is the case, for example, for subclinical sexually transmitted diseases such as aids and genital warts. individuals with such subclinical infections, and those that never develop overt illness, creates a reserve of individuals that can transmit an infectious agent to infect other individuals. because such cases of infections do not come to clinical attention, health statistics can often fail to measure the true prevalence of an infection in a population, and this prevents the accurate modeling of its infectious transmission.
<p> because of the nonspecific nature of these symptoms, they are often not recognized as signs of hiv infection. even if patients go to their doctors or a hospital, they will often be misdiagnosed as having one of the more common infectious diseases with the same symptoms. as a consequence, these primary symptoms are not used to diagnose hiv infection, as they do not develop in all cases and because many are caused by other more common diseases. however, recognizing the syndrome can be important because the patient is much more infectious during this period.
<p> in humans, the virus can cause several syndromes. usually, sufferers have either no symptoms or only a mild illness with fever, headache, muscle pains, and liver abnormalities. in a small percentage of cases (< 2%), the illness can progress to hemorrhagic fever syndrome, meningoencephalitis (inflammation of the brain and tissues lining the brain), or affect the eye. patients who become ill usually experience fever, generalised weakness, back pain, dizziness, and weight loss at the onset of the illness. typically, people recover within two to seven days after onset.
<p> due to their nonspecific character, these symptoms are not often recognized as signs of hiv infection. even cases that do get seen by a family doctor or a hospital are often misdiagnosed as one of the many common infectious diseases with overlapping symptoms. thus, it is recommended that hiv be considered in people presenting with an unexplained fever who may have risk factors for the infection.
<p> many of those who are infected never develop symptoms. symptoms, when they occur, may include watery blisters in the skin or mucous membranes of the mouth, lips, nose, or genitals. lesions heal with a scab characteristic of herpetic disease. sometimes, the viruses cause mild or atypical symptoms during outbreaks. however, they can also cause more troublesome forms of herpes simplex. as neurotropic and neuroinvasive viruses, hsv-1 and -2 persist in the body by hiding from the immune system in the cell bodies of neurons. after the initial or primary infection, some infected people experience sporadic episodes of viral reactivation or outbreaks. in an outbreak, the virus in a nerve cell becomes active and is transported via the neuron's axon to the skin, where virus replication and shedding occur and cause new sores. | Almost all symptoms of an infectious disease are caused by the immune system's response and not the pathogen itself. The objective of an infectious pathogen is actually to remain undetected by the immune system: the more pathogenic it is, the less likely it will be able to reproduce and infect another host, as either the immune system will kill the pathogen or the host will die. Viruses especially are good at hiding from immune systems, as they are able to rapidly evolve to form new strains capable of evading them. This is why you need a new flu shot each year: the genetic variation is random, and some strains evolve to be more pathogenic than others and cause flu outbreaks etc., which is bad for both us and the virus. Therefore most of us are actually infected with pathogens right now, but only a small portion of us have an immune reaction, and usually only to a particular strain of each pathogen; otherwise cohabitation would be impossible.
is there a limit to how acidic (or basic) something can be? | <p> an acid is classified as "strong" when the concentration of its undissociated species is too low to be measured, as the equilibrium is shifted very far to the right (in the forward direction) because of a very large "k". any acid with a p"k" value of less than −2 is more than 99.9% dissociated at low ph. this is known as solvent leveling since all such acids are "fully dissociated", regardless of their p"k" values. hydrochloric acid, hcl, which has a p"k" value, estimated from thermodynamic quantities, of −9.3 in water is an example of a strong acid. it is said to be "fully dissociated" in aqueous solution because the amount of undissociated acid, in equilibrium with the dissociation products, is below the detection limit.
<p> when a strong acid is neutralized by a strong base there are no excess hydrogen ions left in the solution. the solution is said to be neutral as it is neither acidic nor alkaline. the ph of such a solution is close to a value of 7; the exact ph value is dependent on the temperature of the solution.
<p> any acid with a p"k" value which is less than about -2 is classed as a strong acid. this results from the very high buffer capacity of solutions with a ph value of 1 or less and is known as the leveling effect.
<p> when the acidic medium in question is a dilute aqueous solution, the "h0" is approximately equal to the ph value, which is a negative logarithm of the concentration of aqueous h+ in solution. the ph of a simple solution of an acid in water is determined by both "k" and the acid concentration. for weak acid solutions, it depends on the degree of dissociation, which may be determined by an equilibrium calculation. for concentrated solutions of acids, especially strong acids for which ph < 0, the "h0" value is a better measure of acidity than the ph.
<p> acetic acid is an example of a weak acid. the ph of the neutralized solution is not close to 7, as with a strong acid, but depends on the acid dissociation constant (p"k") of the acid. the ph at the end-point or equivalence point in a titration may be easily calculated. at the end-point the acid is completely neutralized so the analytical hydrogen ion concentration, "t", is zero and the concentration of the conjugate base, a⁻, is effectively equal to the analytical concentration of the acid; writing ah for the acid, [a⁻] = "t". defining the acid dissociation constant, p"k", as p"k" = −log "k", with "k" = [a⁻][h⁺]/[ah].
<p> acid is a monocarboxylic β-hydroxy acid and natural product with the molecular formula . at room temperature, pure acid occurs as a transparent, colorless to light yellow liquid which is soluble in water. acid is a weak acid with a p"k" of 4.4. its refractive index (formula_1) is 1.42.
<p> nitric acid is normally considered to be a strong acid at ambient temperatures. there is some disagreement over the value of the acid dissociation constant, though the p"k" value is usually reported as less than −1. this means that the nitric acid in diluted solution is fully dissociated except in extremely acidic solutions. the p"k" value rises to 1 at a temperature of 250 °c. | The way we calculate pH is by taking the concentration of H^+ ions in a solution and inputting it into this formula: pH = -log[H^+ ]. So in theory, as long as the concentration of H^+ is high enough, a negative pH is possible. The same goes for alkaline compounds (though there you're concerned with the concentration of OH^- ). The problem comes in practical applications. Measuring a negative pH is very difficult, as the glass detectors can't measure the very low pH correctly and will therefore return higher values of pH than might actually be there. Further complicating matters is the fact that at very high concentrations even the stronger acids (for example HCl) don't fully dissociate (release their H^+ ), so calculating the pH from the concentration of HCl wouldn't yield the correct pH either. In short, theoretically there is no limit to how high or low a pH can go; in practice, you'll find yourself limited by the ability to measure the actual very low or high pHs and by less-than-full dissociation at high concentrations.
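To make the arithmetic in the answer above concrete, here is a minimal sketch of the idealized pH calculation it describes. The concentration values are made up for illustration, and the sketch deliberately ignores the activity corrections and incomplete dissociation the answer warns about.

```python
import math

def ph_from_h_concentration(h_molar: float) -> float:
    """Idealized pH = -log10([H+]); ignores activity and dissociation effects."""
    return -math.log10(h_molar)

# Hypothetical H+ concentrations in mol/L (for illustration only).
for conc in (1e-7, 1e-3, 1.0, 5.0, 12.0):
    print(f"[H+] = {conc:g} M  ->  pH = {ph_from_h_concentration(conc):+.2f}")
# Concentrations above 1 M give a negative idealized pH, matching the point
# that nothing in the formula itself forbids pH < 0.
```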
is there a limit to how acidic (or basic) something can be? | <p> an acid is classified as "strong" when the concentration of its undissociated species is too low to be measured, as the equilibrium is shifted very far to the right (in the forward direction) because of a very large "k". any acid with a p"k" value of less than −2 is more than 99.9% dissociated at low ph. this is known as solvent leveling since all such acids are "fully dissociated", regardless of their p"k" values. hydrochloric acid, hcl, which has a p"k" value, estimated from thermodynamic quantities, of −9.3 in water is an example of a strong acid. it is said to be "fully dissociated" in aqueous solution because the amount of undissociated acid, in equilibrium with the dissociation products, is below the detection limit.
<p> when a strong acid is neutralized by a strong base there are no excess hydrogen ions left in the solution. the solution is said to be neutral as it is neither acidic nor alkaline. the ph of such a solution is close to a value of 7; the exact ph value is dependent on the temperature of the solution.
<p> any acid with a p"k" value which is less than about -2 is classed as a strong acid. this results from the very high buffer capacity of solutions with a ph value of 1 or less and is known as the leveling effect.
<p> when the acidic medium in question is a dilute aqueous solution, the "h0" is approximately equal to the ph value, which is a negative logarithm of the concentration of aqueous h+ in solution. the ph of a simple solution of an acid in water is determined by both "k" and the acid concentration. for weak acid solutions, it depends on the degree of dissociation, which may be determined by an equilibrium calculation. for concentrated solutions of acids, especially strong acids for which ph < 0, the "h0" value is a better measure of acidity than the ph.
<p> acetic acid is an example of a weak acid. the ph of the neutralized solution is not close to 7, as with a strong acid, but depends on the acid dissociation constant (p"k") of the acid. the ph at the end-point or equivalence point in a titration may be easily calculated. at the end-point the acid is completely neutralized so the analytical hydrogen ion concentration, "t", is zero and the concentration of the conjugate base, a⁻, is effectively equal to the analytical concentration of the acid; writing ah for the acid, [a⁻] = "t". defining the acid dissociation constant, p"k", as p"k" = −log "k", with "k" = [a⁻][h⁺]/[ah].
<p> acid is a monocarboxylic β-hydroxy acid and natural product with the molecular formula . at room temperature, pure acid occurs as a transparent, colorless to light yellow liquid which is soluble in water. acid is a weak acid with a p"k" of 4.4. its refractive index (formula_1) is 1.42.
<p> nitric acid is normally considered to be a strong acid at ambient temperatures. there is some disagreement over the value of the acid dissociation constant, though the p"k" value is usually reported as less than −1. this means that the nitric acid in diluted solution is fully dissociated except in extremely acidic solutions. the p"k" value rises to 1 at a temperature of 250 °c. | There aren't really any theoretical limits on the pKa* of a compound. Butyllithium, for example, corresponds to a pKa of nearly 50 (that of its conjugate acid, butane). That said, there are practical limits on them. In water, you're limited by the properties of water itself, which has a pKa of about 16. That leads to the issues you see when mixing extremely high-pKa compounds with water (namely fire, because the reaction is extremely exothermic). *Side note here: pKa is a measure of the acidity/basicity of a compound; pH is the acidity/basicity of a solution.
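As a companion to the pKa discussion above, here is a small sketch of how the fraction of a monoprotic weak acid that is dissociated at a given pH follows from its pKa via the Henderson–Hasselbalch relation; the pKa values below are made-up placeholders chosen only to show the trend.

```python
def fraction_dissociated(pka: float, ph: float) -> float:
    """For a monoprotic acid HA: [A-] / ([HA] + [A-]) = 1 / (1 + 10**(pKa - pH))."""
    return 1.0 / (1.0 + 10.0 ** (pka - ph))

# Hypothetical acids; the labels and pKa values are illustrative only.
examples = {"strong-ish acid (pKa -1)": -1.0,
            "acetic-like acid (pKa 4.8)": 4.8,
            "very weak acid (pKa 16)": 16.0}
for name, pka in examples.items():
    print(f"{name}: {fraction_dissociated(pka, ph=7.0):.3%} dissociated at pH 7")
# A pKa far below the pH means essentially complete dissociation; a pKa far
# above it means almost none - which is why 'strong' vs 'weak' tracks pKa.
```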
how do mirrors work on an atomic level? why is the angle of reflection equal to the angle of incidence? | <p> reflection from the first surface amounts to an early reflection with unaltered chirp. this is prevented by sparing some layers for anti-reflective coating. in a simple case this is done with a single layer of mgf (which has a refractive index of 1.38 in the near infrared). the bandwidth is large, but not one octave. as the incidence varies from normal to brewster's angle, p-polarized light is less and less reflected. to eliminate residual reflections from the surface in the case of multiple mirrors, the distance between the surface and the stack is different for every mirror.
<p> generally, the reflections will have the same shape as the incident signal, but their sign and magnitude depend on the change in impedance level. if there is a step increase in the impedance, then the reflection will have the same sign as the incident signal; if there is a step decrease in impedance, the reflection will have the opposite sign. the magnitude of the reflection depends not only on the amount of the impedance change, but also upon the loss in the conductor.
<p> as the incidence angle on the lower surface of the lower prism is less than the critical angle, total internal reflection does not occur. to mitigate this problem, a mirror coating is used on this surface. typically an aluminum mirror coating (reflectivity of 87% to 93%) or silver mirror coating (reflectivity of 95% to 98%) is used.
<p> with this specific grid layout reflections from the walls act as mirror sources. interference of the subwoofers and mirror sources create a plane wave up to a certain frequency. the more sources the higher the frequency. this cutoff frequency formula_12 can be calculated for each dimension as follows:
<p> total internal reflection describes the fact that radiation (e.g. visible light) can, at certain angles, be totally reflected from an interface between two media of different indices of refraction (see snell's law). total internal reflection occurs when the first medium has a larger refractive index than the second medium, for example, light that starts in water and bounces off the water-to-air interface.
<p> in the diagram, a light ray po strikes a vertical mirror at point o, and the reflected ray is oq. by projecting an imaginary line through point o perpendicular to the mirror, known as the "normal", we can measure the "angle of incidence", "θ_i", and the "angle of reflection", "θ_r". the "law of reflection" states that "θ_i" = "θ_r", or in other words, the angle of incidence equals the angle of reflection.
<p> the reflections inside the prism are not caused by total internal reflection, since the beams are incident at an angle less than the critical angle (the minimum angle for total internal reflection). instead, the two faces are coated to provide mirror surfaces. the two opposite transmitting faces are often coated with an antireflection coating to reduce spurious reflections. the fifth face of the prism is not used optically but truncates what would otherwise be an awkward angle joining the two mirrored faces. | Quantum electrodynamics deals with the interaction of light and matter. Here is a short, non-technical explanation for this: We know light can be absorbed by atoms, and we know light can be emitted by atoms. When a photon is absorbed then emitted, the direction of the emitted photon is _random_. That means it is entirely possible to have photons being emitted straight back to the source, regardless of angle. (Disclaimer: the terms "absorption" and "emission" used here are different from molecular absorption and emission - i.e., it doesn't involve real electronic transitions, but rather _virtual_ energy level transitions) However, a mirror isn't a single atom or molecule that absorbs and emits light. You can treat mirrors as an array of points that absorb and emit light. For the purposes of this explanation, we'll look at one dimension only - but it will be obvious that it applies to two dimensions. If we set up a source of light and a detector, we can trace a line that goes from the source, to a point in the mirror, then to the detector. This is one possible path of light that leads to a "reflection". If we draw up all of them you'll get a diagram like this. Notice that the angle of incidence does not have to equal the angle of reflection, since, as mentioned before, emission direction is random - and we're mapping out all the possible paths from the source, to the mirror, to the detector. However, we also know light is a wave. This means that the waves emitted from different points are able to constructively and destructively interfere with each other. For points that are next to each other, if they're out of phase, they can destructively interfere. In the diagram, the arrows on the bottom are one way to depict phase - if they're pointing in the same direction, they are in phase. Notice how adjacent points near the left and right of the diagram are often vastly out of phase, while points near the centre are in phase? This means there is mostly destructive interference near the two sides, and constructive interference near the centre. (_Addendum_: This is due to the difference in path length between the path from the source and the path to the detector.) If we integrate all these paths and examine the resultant probability, you'll find that there is an _overwhelming probability_ that a photon will follow the path described by classical mechanics: where angle of incidence equals angle of reflection. In the case of a mirror, this result applies for _all_ wavelengths. However, because we're dealing with light as waves, you can selectively cut out reflective surfaces such that you only get constructive interference for certain wavelengths - that is, you can look at the phase in the diagram provided and cut out adjacent points that are vastly out of phase. This is how a diffraction grating works.
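The path-summing argument in the answer above can be checked numerically. The sketch below is a toy model written for this compendium (the geometry and wavelength are invented, and it is not the diagram the answer refers to): it adds up phasors exp(2πiL/λ) for paths source → mirror point → detector and compares a strip around the specular point with an equally wide strip near the mirror's edge.

```python
import numpy as np

# Toy geometry: a 2 m wide flat mirror along the x-axis, with the source and
# detector 0.3 m above it; a 3 cm wavelength keeps the sum easy to resolve.
wavelength = 0.03
src = np.array([-0.5, 0.3])
det = np.array([+0.5, 0.3])

x = np.linspace(-1.0, 1.0, 40001)                       # mirror sample points
path = np.hypot(x - src[0], src[1]) + np.hypot(x - det[0], det[1])
phasors = np.exp(2j * np.pi * path / wavelength)        # one phasor per path

total = np.abs(phasors.sum())
centre = np.abs(phasors[np.abs(x) < 0.1].sum())         # around the specular point
edge = np.abs(phasors[np.abs(x - 0.8) < 0.1].sum())     # same-width strip near the edge

print(f"centre strip / whole mirror: {centre / total:.2f}")
print(f"edge strip   / whole mirror: {edge / total:.2f}")
# The centre strip reproduces nearly all of the amplitude while the edge strip
# largely cancels out: paths near the stationary (specular) point dominate,
# which is the answer's point about why angle in equals angle out.
```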
is it possible to create a better numeral system? | <p> duodecimal numeric systems have some practical advantages over decimal. it is much easier to divide the base digit twelve (which is a highly composite number) by many important divisors in market and trade settings, such as the numbers 2, 3, 4 and 6.
<p> as already mentioned, many older processors (and possibly some current ones) do not natively support fractional mathematics. in this case, fractional values can be scaled into integers by multiplying them by ten to the power of whatever decimal precision you want to retain. in other words, if you want to preserve "n" digits to the right of the decimal point, you need to multiply the entire number by 10^"n". (or if you're working in binary and you want to save "m" digits to the right of the binary point, then you would multiply the number by 2^"m", or alternately, bit-shift the value "m" places to the left). for example, consider a set of real-world fractional values, as in the sketch below:
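A minimal sketch of this scaling approach, assuming decimal fixed point with "n" = 2 digits; the prices and helper names are made up for illustration, and non-negative values are assumed to keep it short.

```python
# Decimal fixed point with n = 2 digits after the decimal point.
SCALE = 10 ** 2

def to_fixed(value: float) -> int:
    """Scale a fractional value into an integer, rounding to the nearest 1/100."""
    return round(value * SCALE)

def to_string(fixed: int) -> str:
    """Format a scaled (non-negative) integer back into a decimal string."""
    return f"{fixed // SCALE}.{fixed % SCALE:02d}"

# Hypothetical prices; all arithmetic below happens on plain integers.
prices = [1.99, 0.05, 12.30]
subtotal = sum(to_fixed(p) for p in prices)
print(to_string(subtotal))   # -> 14.34
```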
<p> in computers, the main numeral systems are based on the positional system in base 2 (binary numeral system), with two binary digits, 0 and 1. positional systems obtained by grouping binary digits by three (octal numeral system) or four (hexadecimal numeral system) are commonly used. for very large integers, bases 2^32 or 2^64 (grouping binary digits by 32 or 64, the length of the machine word) are used, as, for example, in gmp.
<p> some historical numeral systems may be described as non-standard positional numeral systems. e.g., the sexagesimal babylonian notation and the chinese rod numerals, which can be classified as standard systems of base 60 and 10, respectively, counting the space representing zero as a numeral, can also be classified as non-standard systems, more specifically, mixed-base systems with unary components, considering the primitive repeated glyphs making up the numerals.
<p> a bijective numeral system with base "b" uses "b" different numerals to represent all non-negative integers. however, the numerals have values 1, 2, 3, etc. up to and including "b", whereas zero is represented by an empty digit string. for example, it is possible to have decimal without a zero.
<p> with a second level of multiplicative method – multiplication by 10,000 – it became possible to expand the numeral set. the most common method, used by aristarchus, involved placing a numeral-phrase above a large m character (m = myriads = 10,000), to indicate multiplication by 10,000. this way they could express numbers up to 100,000,000 (10^8).
<p> very few computer languages include built-in support for fixed point values other than with the radix point immediately to the right of the least significant digit, because for most applications, binary or decimal floating-point representations are usually simpler to use and accurate enough. floating-point representations are easier to use than fixed-point representations, because they can handle a wider dynamic range and do not require programmers to specify the number of digits after the radix point. however, if they are needed, fixed-point numbers can be implemented even in programming languages like c and c++, which do not commonly include such support. | Arithmetic isn't really what makes most things hard. When you're working out a problem, it's pretty common to just have letters in place of all the constants that you assume, and constants that arise during a calculation can usually be expressed concisely with the integers we know and love, plus a small number of common constants we already have names for. (And if there's a common constant we don't have a name for, we invent one!) Most of the arithmetic we do these days is on computers, and they have their own fairly reasonable set of number representations. Some of these representations can still be surprisingly tricky in certain situations, so it's still probably too soon to say there's no room for improvement.
how are measurements in quantum physics made, and how does the uncertainty principle manifest itself? | <p> one of the key formulae of quantum mechanics is heisenberg's uncertainty principle, which shows that the uncertainty in the measurement of a particle's position (δ) and momentum (δ) cannot both be arbitrarily small at the same time (where is planck's constant):
<p> before a particular measurement is performed on a quantum system, the theory usually gives only a probability distribution for the outcome, and the form that this distribution takes is completely determined by the quantum state and the observable describing the measurement. these probability distributions arise for both mixed states and pure states: it is impossible in quantum mechanics (unlike classical mechanics) to prepare a state in which all properties of the system are fixed and certain. this is exemplified by the uncertainty principle, and reflects a core difference between classical and quantum physics. even in quantum theory, however, for every observable there are some states that have an exact and determined value for that observable.
<p> the uncertainty principle arose as an answer to the question: how does one measure the location of an electron around a nucleus if an electron is a wave? when quantum mechanics was developed, it was seen to be a relation between the classical and quantum descriptions of a system using wave mechanics.
<p> in quantum mechanics, the uncertainty principle (also known as heisenberg's uncertainty principle) is any of a variety of mathematical inequalities asserting a fundamental limit to the precision with which certain pairs of physical properties of a particle, known as complementary variables or canonically conjugate variables such as position "x" and momentum "p", can be known or, depending on interpretation, to what extent such conjugate properties maintain their approximate meaning, as the mathematical framework of quantum physics does not support the notion of simultaneously well-defined conjugate properties expressed by a single value.
<p> heisenberg's uncertainty principle states that when two complementary measurements are made, there is a limit to the product of their accuracy. as an example, if one measures the position with an accuracy of δx and the momentum with an accuracy of δp, then δx δp ≥ "h"/4π. if we make further measurements in order to get more information, we disturb the system and change the trajectory into a new one depending on the measurement setup; therefore, the measurement results are still subject to heisenberg's uncertainty relation.
<p> once a quantum system has been prepared in laboratory, some measurable quantity such as position or energy is measured. for pedagogic reasons, the measurement is usually assumed to be ideally accurate. the state of a system after measurement is assumed to "collapse" into an eigenstate of the operator corresponding to the measurement. repeating the same measurement without any evolution of the quantum state will lead to the same result. if the preparation is repeated, subsequent measurements will likely lead to different results.
<p> uncertainty of a measurement can be determined by repeating a measurement to arrive at an estimate of the standard deviation of the values. then, any single value has an uncertainty equal to the standard deviation. however, if the values are averaged, then the mean measurement value has a much smaller uncertainty, equal to the standard error of the mean, which is the standard deviation divided by the square root of the number of measurements. this procedure neglects systematic errors, however. | A way I like to think about it: In my research a standard task is to measure the spin of a single electron in a certain direction, let's call it the 'z-basis'. In the system I work with this can be done optically: typically I hit it with a laser and count photon clicks on a detector. If I measure 0 photon clicks, this means I measure the spin to be 'down'; if I measure more than 0 photons, it's 'up'. That's the product you mentioned: click or no click, very simple in my case. (It depends of course very much on the system you do your experiments on!) Prior to the measurement, the spin has a well-defined state that can be a superposition state, say, something of the form cos(x/2) * up + sin(x/2) * down. Now I measure, and the spin gets projected into up or down (and I know which because of photons or no photons on the detector). In QM terms, I measured the z-component of the spin operator. In principle I could also measure, for instance, the x-component, which is orthogonal to that. In terms of up (=z) and down (=-z), the two basis states I can measure then are up+down = x, and up-down = -x. If I measure the x-component after measuring the z-component, I will get, with 50 percent probability each, those two outcomes, x and -x (because up = x + (-x)). Experimentally, I would measure that by first rotating the spin by 90 degrees (rather easy with magnetic resonance techniques), such that x becomes z and -x becomes -z, and then measuring z. Then my measurement result is again click or no click. In other words: you start with some superposition state of your system. Measuring an observable will have a back-action on your system, and it will end up in an eigenstate of that observable. If you measure the same observable over and over again, you will always get the same outcome. Subsequently measuring in a basis that is orthogonal to that, however, will have maximum uncertainty, because an eigenstate in one basis is a superposition in an orthogonal basis. With position and momentum it's essentially the same as in the spin case I used above, just that x and p have a continuous spectrum and not just two eigenvalues like a spin-1/2. Hope that's somewhat clear and answers what you wanted to know :)
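The spin example in the answer above is easy to simulate. The sketch below is a toy two-level simulation written for this compendium (not the answerer's actual experiment): it prepares cos(x/2)·up + sin(x/2)·down, "measures" in the z-basis by sampling the Born-rule probabilities, and then shows that a follow-up x-basis measurement on the collapsed state is a 50/50 coin flip.

```python
import numpy as np

rng = np.random.default_rng(0)
up, down = np.array([1.0, 0.0]), np.array([0.0, 1.0])
x_plus, x_minus = (up + down) / np.sqrt(2), (up - down) / np.sqrt(2)

def measure(state, basis):
    """Born rule: pick a basis state with probability |<basis_i|state>|^2."""
    probs = np.array([abs(b @ state) ** 2 for b in basis])
    i = rng.choice(len(basis), p=probs / probs.sum())
    return i, basis[i]                     # outcome index, post-measurement state

theta = 0.4                                 # arbitrary preparation angle
psi = np.cos(theta / 2) * up + np.sin(theta / 2) * down

z_outcomes, x_outcomes = [], []
for _ in range(10000):
    zi, collapsed = measure(psi, [up, down])            # z-basis measurement
    xi, _ = measure(collapsed, [x_plus, x_minus])       # then x-basis
    z_outcomes.append(zi)
    x_outcomes.append(xi)

print("P(z = up):", 1 - np.mean(z_outcomes), "predicted:", np.cos(theta / 2) ** 2)
print("P(x = +):", 1 - np.mean(x_outcomes), "(~0.5 after any z measurement)")
# Once the spin has collapsed into a z eigenstate, the orthogonal x observable
# is maximally uncertain - the two-outcome analogue of position vs momentum.
```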
why must we use radians (and not degrees) in calculus? | <p> when using trigonometric functions in calculus, their argument is generally not an angle, but rather a real number. in this case, it is more suitable to express the argument of the trigonometric function as the length of the arc of the unit circle delimited by an angle with the center of the circle as vertex. therefore, one uses the radian as angular unit: a radian is the angle that delimits an arc of length 1 on the unit circle. a complete turn is thus an angle of 2π radians.
<p> in calculus and most other branches of mathematics beyond practical geometry, angles are universally measured in radians. this is because radians have a mathematical "naturalness" that leads to a more elegant formulation of a number of important results.
<p> a theodolite can be considerably more accurate if used correctly, but it is also considerably more difficult to use correctly. there is no inherent way to align a theodolite with north and so the scale has to be calibrated using astronomical observation, usually the position of the sun. because the position of celestial bodies changes with the time of day due to the earth's rotation, the time of these calibration observations must be accurately known, or else there will be a systematic error in the measurements. horizon altitudes can be measured with a theodolite or a clinometer.
<p> calculus can be used in conjunction with other mathematical disciplines. for example, it can be used with linear algebra to find the "best fit" linear approximation for a set of points in a domain. or it can be used in probability theory to determine the probability of a continuous random variable from an assumed density function. in analytic geometry, the study of graphs of functions, calculus is used to find high points and low points (maxima and minima), slope, concavity and inflection points.
<p> the trigonometric functions rely on angles, and mathematicians generally use radians as units of measurement. π plays an important role in angles measured in radians, which are defined so that a complete circle spans an angle of 2π radians. the angle measure of 180° is equal to π radians, and 1° = π/180 radians.
<p> since a radian is mathematically defined as the angle formed when the length of a circular arc equals the radius of the circle, a milliradian is the angle formed when the length of a circular arc equals one thousandth of the radius of the circle. just like the radian, the milliradian is dimensionless, but unlike the radian, where the same unit must be used for radius and arc length, the milliradian needs to have a ratio between the units where the subtension is a thousandth of the radius when using the simplified formula.
<p> the radian is the si unit for measuring angles, and is the standard unit of angular measure used in many areas of mathematics. the length of an arc of a unit circle is numerically equal to the measurement in radians of the angle that it subtends; one radian is just under 57.3 degrees. the unit was formerly an si supplementary unit, but this category was abolished in 1995 and the radian is now considered an si derived unit. | You don't have to. Radians and degrees are different units for the same thing, just like inches vs centimeters. But since everyone else uses radians, and all the formulas are written in radians, you'd just be making your life more complicated and confusing. But if you go to any formula that has an angle X in radians in it, and replace X with X / 360 * 2π, you now have a formula in degrees.
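A small sketch of the conversion described in the answer, and of why the calculus formulas are tidiest in radians: differentiating sin numerically once "per radian" and once "per degree" shows the extra π/180 factor that degrees drag into every derivative. The helper name deg_to_rad and the sample angle of 30 degrees are illustrative choices, not anything from the original text.

```python
import math

# Hypothetical helper implementing the substitution from the answer:
# X degrees -> X / 360 * 2*pi radians (equivalently X * pi / 180).
def deg_to_rad(x_deg):
    return x_deg / 360 * 2 * math.pi

h = 1e-6                      # small step for a numerical derivative
x = deg_to_rad(30.0)          # 30 degrees expressed in radians

# Derivative of sin taken "per radian": matches cos(x) ~ 0.866.
d_per_radian = (math.sin(x + h) - math.sin(x)) / h

# Derivative of sin taken "per degree": picks up a factor of pi/180 ~ 0.0175.
d_per_degree = (math.sin(deg_to_rad(30.0 + h)) - math.sin(x)) / h

print(round(d_per_radian, 4))                 # ~0.8660
print(round(d_per_degree, 4))                 # ~0.0151
print(round(math.pi / 180 * math.cos(x), 4))  # same ~0.0151
```

The point is that d/dx sin(x) = cos(x) only when x is measured in radians; in degrees, every such formula carries that constant factor around.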
robitussin is a combination of an anti-tussive and expectorant. isn't this counterproductive? won't you end up producing lots of mucus and not coughing it out? | <p> crotamiton is a drug that is used both as a scabicidal (for treating scabies) and as a general antipruritic (anti-itching drug). it is a prescription, lotion-based medicine that is applied to the whole body to get rid of the scabies parasite that burrows under the skin and causes itching.
<p> bisacodyl (inn) is an organic compound that is used as a stimulant laxative drug. it works directly on the colon to produce a bowel movement. it is typically prescribed for relief of episodic and chronic constipation and for the management of neurogenic bowel dysfunction, as well as part of bowel preparation before medical examinations, such as for a colonoscopy.
<p> rifampicin is easily absorbed from the gastrointestinal (gi) tract; its ester functional group is quickly hydrolyzed in bile, and it is catalyzed by a high ph and substrate-specific esterases. after about 6 hours, almost all of the drug is deacetylated. even in this deacetylated form, rifampicin is still a potent antibiotic; however, it can no longer be reabsorbed by the intestines and is eliminated from the body. only about 7% of the administered drug is excreted unchanged in urine, though urinary elimination accounts for only about 30% of the drug excretion. about 60% to 65% is excreted through feces.
<p> propantheline bromide (inn) is an antimuscarinic agent used for the treatment of excessive sweating (hyperhidrosis), cramps or spasms of the stomach, intestines (gut) or bladder, and involuntary urination (enuresis). it can also be used to control the symptoms of irritable bowel syndrome and similar conditions. this agent can also be used for patients who experience intense gi symptoms while tapering off of tcas.
<p> maropitant has weak anti-inflammatory effects, and has thus been used as an adjunct treatment in severe bronchitis. it also alleviates visceral pain, and has been found to reduce the amount of general anesthesia (both sevoflurane and isoflurane) needed in some operations. some believe that maropitant can also be used in rabbits and guinea pigs to relieve pain caused by ileus (impaired bowel movements), though it lacks antiemetic effects in rabbits, which cannot vomit.
<p> co-danthrusate is a combination of dantron and docusate. dantron is a mild peristaltic stimulant which acts on the lower bowel to encourage normal bowel movement without causing irritation. it belongs to the group of medicines under the term stimulant laxative. it stimulates the nerves in the gut wall, which causes the gut muscles to contract. this medicine is used for analgesic-induced constipation. it takes six to twelve hours to work. it can cause discoloration of the urine, as well as bowel and liver tumors.
<p> carbocisteine, also called carbocysteine, is a mucolytic that reduces the viscosity of sputum and so can be used to help relieve the symptoms of chronic obstructive pulmonary disorder (copd) and bronchiectasis by allowing the sufferer to bring up sputum more easily. carbocisteine should not be used with antitussives (cough suppressants) or medicines that dry up bronchial secretions. | During a minor upper respiratory infection, there are two reasons why you'd cough: 1) your mucus is too thick or too abundant to be cleared by normal motility 2) your upper airways are inflamed, triggering the cough reflex whether or not there is excess mucus present. An expectorant can help with #1, an anti-tussive can help with #2. |
how is a child born addicted to something? | <p> some claim the existence of “addictive beliefs” in people more likely to develop addictions, such as “i cannot make an impact on my world” or “i am not good enough”, which may lead to developing traits associated with addiction, such as depression and emotional insecurity. people who strongly believe that they control their own lives and are mostly self-reliant in learning information (rather than relying on others) are less likely to become addicted. however, it is unclear whether these traits are causes, results or merely associated coincidentally. for example, depression due to physical disease can cause feelings of hopelessness that are mitigated after successful treatment of the underlying condition, and addiction can increase dependence on others.
<p> people with addictive personalities typically switch from one addiction to the next. these individuals may show impulsive behavior such as excessive caffeine consumption, internet use, eating chocolate or other sugar-laden foods, television watching, or even running.
<p> when a non-addict takes a drug or performs a behavior for the first time he/she does not automatically become an addict. over time the non-addict chooses to continue to engage in a behavior or ingest a substance because of the pleasure the non-addict receives. the now addict has lost the ability to choose or forego the behavior or substance and the behavior becomes a compulsive action. the change from non-addict to addict occurs largely from the effects of prolonged substance use and behavior activities on brain functioning. addiction affects the brain circuits of reward and motivation, learning and memory, and the inhibitory control over behavior.
<p> addiction is a disorder of the brain's reward system which arises through transcriptional and neuroepigenetic mechanisms and occurs over time from chronically high levels of exposure to an addictive stimulus (e.g., morphine, cocaine, sexual intercourse, gambling, etc.). transgenerational epigenetic inheritance of addictive phenotypes has been noted to occur in preclinical studies.
<p> an addictive behavior is a behavior, or a stimulus related to a behavior (e.g., sex or food), that is both rewarding and reinforcing, and is associated with the development of an addiction. addictions involving addictive behaviors are normally referred to as behavioral addictions.
<p> addicts often believe that being in control of others is how to achieve success and happiness in life. people who follow this rule use it as a survival skill, having usually learned it in childhood. as long as they make the rules, no one can back them into a corner with their feelings.
<p> it seems that wherever one finds intoxication, one likely will find addiction. recently, researchers have argued that the addiction process is like the disease model, with a target organ, a defect, and symptoms of the disease. in other accounts, addiction is a disorder of genes, reward, memory, stress, and choice. | It's not necessarily that the baby is addicted, but rather that it has a tolerance to the drugs it was exposed to in the womb. When someone (adult or baby) uses drugs for a while, the body responds by creating more receptors for the drug to bind to, so after a while a higher dose is needed to give the same effect as an earlier dose, because the same percentage of receptors has to be activated. Now, when such babies are born they have so many more receptors than normal that unless a high number of them are activated the body won't function as it should; therefore, to keep a sensible number of receptors activated, they are given drugs that act on those receptors and hit the right percentage. The baby has no psychological need to take the drug, but instead a physiological need to take it.
how does our body's tolerance to certain compounds work? | <p> pharmacokinetic tolerance (dispositional tolerance) occurs because of a decreased quantity of the substance reaching the site it affects. this may be caused by an increase in induction of the enzymes required for degradation of the drug, e.g. cyp450 enzymes. this is most commonly seen with substances such as ethanol.
<p> tolerance is a physiologic process where the body adjusts to a medication that is frequently present, usually requiring higher doses of the same medication over time to achieve the same effect. it is a common occurrence in individuals taking high doses of opioids for extended periods, but does not predict any relationship to misuse or addiction.
<p> tolerance is a process characterized by neuroadaptations that result in reduced drug effects. while receptor upregulation may often play an important role, other mechanisms are also known. tolerance is more pronounced for some effects than for others; tolerance occurs slowly to the effects on mood, itching, urinary retention, and respiratory depression, but occurs more quickly to the analgesia and other physical side effects. however, tolerance does not develop to constipation or miosis (the constriction of the pupil of the eye to less than or equal to two millimeters). this idea has been challenged, however, with some authors arguing that tolerance "does" develop to miosis.
<p> the plateau effect is also experienced in acclimation, which is the process that allows organisms to adjust to changes in its environment. in humans, this is seen when the nose becomes acclimated to a certain smell. this immunity is the body's natural defense to distraction from stimulus. this is similar to drug tolerance, when a person's reaction to a specific drug is progressively reduced, requiring an increase in the amount of the drug they receive. over the counter medications, in particular, have a maximum possible effect, regardless of dose.
<p> behavioral tolerance occurs with the use of certain psychoactive drugs, where tolerance to a behavioral effect of a drug, such as increased motor activity by methamphetamine, occurs with repeated use. it may occur through drug-independent learning or as a form of pharmacodynamic tolerance in the brain; the former mechanism of behavioral tolerance occurs when one learns how to actively overcome drug-induced impairment through practice. behavioral tolerance is often context-dependent, meaning tolerance depends on the environment in which the drug is administered, and not on the drug itself. behavioral sensitization describes the opposite phenomenon.
<p> the opposite concept to drug tolerance is drug reverse tolerance (or drug sensitization), in which case the subject's reaction or effect will increase following its repeated use. the two notions are not incompatible and tolerance may sometimes lead to reverse tolerance. for example, heavy drinkers initially develop tolerance to alcohol (requiring them to drink larger amounts to achieve a similar effect) but excessive drinking can cause liver damage, which then puts them at risk of intoxication when drinking even very small amounts of alcohol.
<p> cross-tolerance is a phenomenon that occurs when tolerance to the effects of a certain drug produces tolerance to another drug. it often happens between two drugs with similar functions or effects—for example, acting on the same cell receptor or affecting the transmission of certain neurotransmitters. cross-tolerance has been observed with pharmaceutical drugs such as anti-anxiety agents and illicit substances, and sometimes the two of them together. often, a person who uses one drug can be tolerant to a drug that has a completely different function. this phenomenon allows one to become tolerant to a drug that they have never even used before. | (Nearly?) All drugs work by interacting with receptors. This is almost a truism, in the sense that a pharmacologist (one who studies the behaviour of drugs) calls anywhere a drug binds its "receptor", though in general these binding sites are on specific proteins, and the whole protein gets referred to as the receptor (e.g. the Mu-opioid receptor is a large protein that has a binding site for morphine). If you're struggling to picture this, imagine a big blob with a keyhole in it. The binding site is the keyhole, and the receptor is the whole blob. (We had to change our terminology because a single protein can have multiple binding sites.) So drugs come along and they bind to the receptor (fit in the keyhole), and this changes the behaviour of the receptor. Some drugs make the receptor work (which usually means it causes some biochemical reaction to begin inside the cell that the receptor lives on), but other drugs STOP the receptor from working. Some drugs just prevent other drugs from activating the receptor. Respectively, these are referred to as "agonists", "inverse agonists" and "antagonists". Generally speaking, it's what happens after the drug interacts with the receptor that decides whether you get tolerance. Some drugs, particularly agonists (activators), cause the receptor to get less sensitive. The two main ways this can happen are by the strength with which the agonist binds to the receptor decreasing (the cell modifies the receptor slightly), or by the cell literally removing receptors from its surface. Whether this happens, how fast it happens and how strongly it happens depends on the receptor AND the drug. Some drug-receptor interactions cause this to happen very strongly. Others don't. GENERALLY speaking, agonists (activators) of a receptor cause it to happen, and antagonists don't. Drugs which block enzymes generally don't cause it either. Morphine is an agonist while aspirin blocks enzymes. We don't really know how paracetamol/acetaminophen works. Caffeine is an odd one. Caffeine is not an agonist of much. BUT some of its effects are likely to be caused by caffeine blocking one receptor (adenosine 2A), which in turn causes more noradrenaline to be released, which then activates adrenoreceptors. So there could be a source of tolerance there. Honestly, I don't know any research (off the top of my head) that shows the degree of caffeine tolerance, nor how it occurs. I'm willing to bet it's not anywhere near as strong as opioid tolerance though, because it doesn't matter how much coffee you drink, you will feel one good espresso (and no, I don't believe you if you say "I drink 20 coffees a day I don't feel it"). Hopefully that sorts it out for you. If it doesn't, reply away.
if friction between air molecules causes heat, why wouldn't vibrations (sounds) have the same result? | <p> when surfaces in contact move relative to each other, the friction between the two surfaces converts kinetic energy into thermal energy (that is, it converts work to heat). this property can have dramatic consequences, as illustrated by the use of friction created by rubbing pieces of wood together to start a fire. kinetic energy is converted to thermal energy whenever motion with friction occurs, for example when a viscous fluid is stirred. another important consequence of many types of friction can be wear, which may lead to performance degradation or damage to components. friction is a component of the science of tribology.
<p> coulomb damping absorbs energy with friction, which converts that kinetic energy into thermal energy or heat. the coulomb friction law is associated with two aspects. static and kinetic frictions occur in a vibrating system undergoing coulomb damping. static friction occurs when the two objects are stationary or undergoing no relative motion. for static friction, the friction force exerted between the surfaces having no relative motion cannot exceed a value that is proportional to the product of the normal force and the coefficient of static friction: F_s ≤ μ_s·F_N.
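A compact sketch of the relations behind this paragraph, written out from the verbal description rather than copied from the source: the static bound, the kinetic friction force, and the mechanical energy converted into heat over a sliding distance d, which for a vibration of amplitude X (sliding path 4X per cycle) gives the standard Coulomb-damping energy loss per cycle.

```latex
F_s \le \mu_s F_N, \qquad F_k = \mu_k F_N,
\qquad
E_{\text{heat}} = \int F_k \,\mathrm{d}s = \mu_k F_N \, d,
\qquad
E_{\text{per cycle}} = 4\,\mu_k F_N X .
```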
<p> new models are beginning to show how kinetic friction can be greater than static friction. kinetic friction is now understood, in many cases, to be primarily caused by chemical bonding between the surfaces, rather than interlocking asperities; however, in many other cases roughness effects are dominant, for example in rubber to road friction. surface roughness and contact area affect kinetic friction for micro- and nano-scale objects where surface area forces dominate inertial forces.
<p> since sound waves are produced by a vibrating body, the vibrating object moves in one direction and compresses the air directly in front of it. as the vibrating object moves in the opposite direction, the pressure on the air is lessened so that an expansion, or rarefaction, of air molecules occurs. one compression and one rarefaction make up one longitudinal wave. the vibrating air molecules move back and forth parallel to the direction of motion of the wave, receiving energy from adjacent molecules nearer the source and passing the energy to adjacent molecules farther from the source.
<p> kinetic friction, also known as dynamic friction or sliding friction, occurs when two objects are moving relative to each other and rub together (like a sled on the ground). the coefficient of kinetic friction is typically denoted as "μ_k", and is usually less than the coefficient of static friction for the same materials. however, richard feynman comments that "with dry metals it is very hard to show any difference."
<p> in chemistry it is known that increased temperature increases the rate of reaction of an experiment; however, vibrational bonds are not formed like covalent bonds, where electrons are shared between the two bonding atoms. vibrational bonds are created at high energy, where the muonium bounces back and forth between bromine atoms "like a ping pong ball bouncing between two bowling balls," according to donald fleming. this bouncing action lowers the potential energy of the brmubr molecule, and therefore slows the rate of the reaction.
<p> static friction is friction between two or more solid objects that are not moving relative to each other. for example, static friction can prevent an object from sliding down a sloped surface. the coefficient of static friction, typically denoted as "μ_s", is usually higher than the coefficient of kinetic friction. static friction is considered to arise as the result of surface roughness features across multiple length-scales at solid surfaces. these features, known as asperities, are present down to nano-scale dimensions and result in true solid to solid contact existing only at a limited number of points accounting for only a fraction of the apparent or nominal contact area. the linearity between applied load and true contact area, arising from asperity deformation, gives rise to the linearity between static frictional force and normal force, found for typical amonton-coulomb type friction. | Friction between air molecules does not cause heat. Perhaps you mean "if collisions between air molecules can transfer thermal energy, can sound do this, too?" To which the answer is yes. Regarding your thought experiment: If you thermally isolated the system, you would be adding energy to it by playing the music into it. This would cause a small increase in the temperature.
in parts of the world with venomous snakes (like africa, australia and the americas) what happens to larger game when bitten? | <p> throughout western asia, the species responsible for the majority of bites tend to be more venomous than european snakes, but deaths are infrequent. studies estimate that perhaps 100 fatal bites occur each year. the palestine viper and lebetine viper are the most important species. while larger and more venomous elapids, such as the egyptian cobra, are also found throughout the middle east, these species inflict fewer bites.
<p> although africa is home to four venomous snake families—atractaspididae, colubridae, elapidae, and viperidae—approximately 60% of all bites are caused by vipers alone. in drier regions of the continent, such as sahels and savannas, the saw-scaled vipers inflict up to 90% of all bites. the puff adder is responsible for the most fatalities overall, although saw-scaled vipers inflict more bites in north african countries, where the puff adder is typically not found. the black mamba, although responsible for far fewer snakebite incidents, is the species which has the highest mortality rate in africa and in the world.
<p> there are also venomous colubrids in africa, although of these only two arboreal genera, the boomslang and the twig snakes, are likely to inflict life-threatening bites. of the atractaspididae, "atractaspis" is the species involved in the majority of bites. since these snakes are nocturnal and fossorial, living in burrows underground, bites remain rare, peaking at 1 to 3% in certain areas of the sudanian savanna. however, there is no antivenom or other effective therapy for "atractaspis" envenomation, and the case fatality rate remains approximately 10%, with death typically occurring quickly.
<p> in europe, nearly all of the snakes responsible for venomous bites belong to the viper family, and of these, the coastal viper, nose-horned viper, asp viper, and lataste's viper inflict the majority of bites. although europe has a population of some 731 million people, snake bites are only responsible for between 1 and 7 (average of 4) fatalities each year, largely due to wide access to health care services and antivenom, as well as the relatively mild potency of many native species' venom.
<p> the type of snake that most often delivers serious bites depends on the region of the world. in africa, it is mambas, egyptian cobras, puff adders, and carpet vipers. in the middle east, it is carpet vipers and elapids. in central and south america, it is snakes of the "bothrops" and "crotalus" types, the latter including rattlesnakes. in north america, rattlesnakes are the primary concern, and up to 95% of all snakebite-related deaths in the united states are attributed to the western and eastern diamondback rattlesnakes. in south asia, it was previously believed that indian cobras, common kraits, russell's viper, and carpet vipers were the most dangerous; other snakes, however, may also cause significant problems in this area of the world.
<p> the number of venomous snakebites that occur each year may be as high as five million. they result in about 2.5 million poisonings and 20,000 to 125,000 deaths. the frequency and severity of bites vary greatly among different parts of the world. they occur most commonly in africa, asia, and latin america, with rural areas more greatly affected. deaths are relatively rare in australia, europe and north america. for example, in the united states, about seven to eight thousand people per year are bitten by venomous snakes (about one in 40 thousand people) and about five people die (about one death per 65 million people).
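As a quick consistency check of the parenthetical rates in the paragraph above — taking the US population to be roughly 325 million, which is an assumption not stated in the text — the two rates line up with the bite and death counts given:

```latex
\frac{325{,}000{,}000}{40{,}000} \approx 8{,}100 \ \text{bites per year},
\qquad
\frac{325{,}000{,}000}{65{,}000{,}000} = 5 \ \text{deaths per year}.
```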
<p> the varieties of snakes that most often cause serious snakebites depend on the region of the world. in africa, the most dangerous species include black mambas, puff adders, and carpet vipers. in the middle east the species of greatest concern are carpet vipers and elapids; in central and south america, "bothrops" (including the terciopelo or fer-de-lance) and "crotalus" (rattlesnakes) are of greatest concern. in south asia, it has historically been believed that indian cobras, common kraits, russell's viper and carpet vipers were the most dangerous species; however other snakes may also cause significant problems in this area of the world. while several species of snakes may cause more bodily destruction than others, any of these venomous snakes are still very capable of causing human fatalities should a bite go untreated, regardless of their venom capabilities or behavioral tendencies. | Elephants have been killed by venomous snakes before, mostly king cobras. Due to elephants being so large, it takes a few hours for them to die once bitten, unlike a human, which can take as little as 15 minutes. King cobra venom is not the most potent of all snake venoms, but the snake injects a large enough amount to kill an elephant. Because elephants are so large, it's possible that they may be able to survive a bite from a less venomous snake.
why does basal body temperature increase when a woman begins ovulation? | <p> basal body temperature changes during the menstrual cycle. higher levels of progesterone released during the menstrual cycle causes an abrupt increase in basal body temperature by 0.5 °c to 1 °c at the time of ovulation. this enables identification of the fertile window through the use of commercial thermometers. this test can also indicate if there are issues with ovulation.
<p> body temperature is sensitive to many hormones, so women have a temperature rhythm that varies with the menstrual cycle, called a "circamensal" rhythm. a woman's basal body temperature rises sharply after ovulation, as estrogen production decreases and progesterone increases. fertility awareness programs use this change to identify when a woman has ovulated in order to achieve or avoid pregnancy. during the luteal phase of the menstrual cycle, both the lowest and the average temperatures are slightly higher than during other parts of the cycle. however, the amount that the temperature rises during each day is slightly lower than typical, so the highest temperature of the day is not very much higher than usual. hormonal contraceptives both suppress the circamensal rhythm and raise the typical body temperature by about .
<p> also, during the week following ovulation, progesterone levels increase, resulting in a woman experiencing difficulty achieving orgasm. although the last days of the menstrual cycle are marked by a constant testosterone level, women's libido may get a boost as a result of the thickening of the uterine lining which stimulates nerve endings and makes a woman feel aroused. also, during these days, estrogen levels decline, resulting in a decrease of natural lubrication.
<p> after menstruation and directly under the influence of estrogen, the cervix undergoes a series of changes in position and texture. during most of the menstrual cycle, the cervix remains firm, and is positioned low and closed. however, as ovulation approaches, the cervix becomes softer and rises to open in response to the higher levels of estrogen present. these changes are also accompanied by changes in cervical mucus, described below.
<p> in most girls, menarche does not mean that ovulation has occurred. in postmenarchal girls, about 80% of the cycles were anovulatory in the first year after menarche, 50% in the third and 10% in the sixth year. regular ovulation is usually indicated by predictable and consistent intervals between menses, predictable and consistent durations of menses, and predictable and consistent patterns of flow (e.g., heaviness or cramping). continuing ovulation typically requires a body fat content of at least 22%. an anthropological term for this state of potential fertility is nubility.
<p> during the follicular phase (which lasts from the first day of menstruation until the day of ovulation), the average basal body temperature in women ranges from 36.45 to 36.7 °c (97.6 to 98.1 °f). within 24 hours of ovulation, women experience an elevation of 0.15–0.45 °c (0.2–0.9 °f) due to the increased metabolic rate caused by sharply elevated levels of progesterone. the basal body temperature ranges between 36.7–37.3 °c (98.1–99.2 °f) throughout the luteal phase, and drops down to pre-ovulatory levels within a few days of menstruation. women can chart this phenomenon to determine whether and when they are ovulating, so as to aid conception or contraception.
<p> when menarche occurs, it confirms that the girl has had a gradual estrogen-induced growth of the uterus, especially the endometrium, and that the "outflow tract" from the uterus, through the cervix to the vagina, is open. | Estrogen will lower body temperature, and progesterone will increase it. When ovulation occurs and progesterone is increased, the body temperature will increase, indicating the body is ready for conception. Research that I've seen seems to indicate that there is no "clear answer," but some believe the reason progesterone increases body temperature is that it acts as a potent vasoconstrictor. When the blood vessels constrict (especially at the skin's surface), the body isn't able to exchange heat properly with the outside and maintains a higher temperature internally. Progesterone also increases T4 (a thyroid hormone), which increases your body's metabolism...and thus...your temperature.
why are there all those enormous stars out there, but (unless i'm wrong) we never hear about any planets like that? | <p> although some of the stars named in works of science fiction are purely imaginary, many authors and artists have preferred to use the names of real stars that are well known to astronomers, and indeed the lay public, either because they are notably bright in the sky or because they are relatively close to earth.
<p> stars twinkle because they are so far from earth that they appear as point sources of light easily disturbed by earth's atmospheric turbulence, which acts like lenses and prisms diverting the light's path. large astronomical objects closer to earth, like the moon and other planets, encompass many points in space and can be resolved as objects with observable diameters. with multiple observed points of light traversing the atmosphere, their light's deviations average out and the viewer perceives less variation in light coming from them.
<p> most of the stars below are solar-type, mainly in the spectral classes f, g, and k, because astronomers tend to look for planets around stars similar to the sun. others are giants, which have used up all the hydrogen in their cores. finding planets around giant stars gives clues as to how planetary systems evolve and how the properties of planets change with the evolution of the stars.
<p> the "fixed stars" appear to be of different bignesses, not because they really are so, but because they are not all equally distant from us. those that are nearest will excel in lustre and bigness; the more remote "stars" will give a fainter light, and appear smaller to the eye. hence arise the distribution of "stars", according to their order and dignity, into "classes"; the first class containing those which are nearest to us, are called "stars" of the first magnitude; those that are next to them, are "stars" of the second magnitude ... and so forth, 'till we come to the "stars" of the sixth magnitude, which comprehend the smallest "stars" that can be discerned with the bare eye. for all the other "stars", which are only seen by the help of a telescope, and which are called telescopical, are not reckoned among these six orders. altho' the distinction of "stars" into six degrees of magnitude is commonly received by "astronomers"; yet we are not to judge, that every particular "star" is exactly to be ranked according to a certain bigness, which is one of the six; but rather in reality there are almost as many orders of "stars", as there are "stars", few of them being exactly of the same bigness and lustre. and even among those "stars" which are reckoned of the brightest class, there appears a variety of magnitude; for "sirius" or "arcturus" are each of them brighter than "aldebaran" or the "bull's" eye, or even than the "star" in "spica"; and yet all these "stars" are reckoned among the "stars" of the first order: and there are some "stars" of such an intermedial order, that the "astronomers" have differed in classing of them; some putting the same "stars" in one class, others in another. for example: the little "dog" was by "tycho" placed among the "stars" of the second magnitude, which "ptolemy" reckoned among the "stars" of the first class: and therefore it is not truly either of the first or second order, but ought to be ranked in a place between both.
<p> stars of this type are particularly rare; only 0.00002% (1 in 5,000,000) to 0.00005% (1 in 2,000,000) of all stars are o-type, but because they are very bright they can be seen at great distances and four of the 90 brightest stars as seen from earth are o type. due to their high mass, o-type stars end their lives rather quickly in violent supernova explosions, resulting in black holes or neutron stars. most of these stars are young massive main sequence, giant, or supergiant stars, but the central stars of planetary nebulae, old low-mass stars near the end of their lives, also usually have o spectra.
<p> "the fixed stars appear to be of different bignesses, not because they really are so, but because they are not all equally distant from us. those that are nearest will excel in lustre and bigness; the more remote stars will give a fainter light, and appear smaller to the eye. hence arise the distribution of stars, according to their order and dignity, into classes; the first class containing those which are nearest to us, are called stars of the first magnitude; those that are next to them, are stars of the second magnitude ... and so forth, 'till we come to the stars of the sixth magnitude, which comprehend the smallest stars that can be discerned with the bare eye. for all the other stars, which are only seen by the help of a telescope [...]"
<p> as of june 2014, 50 giant planets have been discovered around giant stars. however, these giant planets are more massive than the giant planets found around solar-type stars. this could be because giant stars are more massive than the sun (less massive stars will still be on the main sequence and will not have become giants yet) and more massive stars are expected to have more massive planets. however, the masses of the planets that have been found around giant stars do not correlate with the masses of the stars; therefore, the planets could be growing in mass during the stars' red giant phase. the growth in planet mass could be partly due to accretion from stellar wind, although a much larger effect would be roche lobe overflow causing mass-transfer from the star to the planet when the giant expands out to the orbital distance of the planet. | Anything that's more than about 80 times the mass of Jupiter will have enough pressure at its core for the hydrogen to start fusing into helium, and then it is a star. Objects between about 12 and 80 times the mass of Jupiter are in a gray area called brown dwarfs, which are really neither stars nor planets.
what exactly does "reinforcing neurological pathways" mean in the context of rem sleep? | <p> one key use of rem sleep is for the brain to process and store information from the previous day. in a sense, the brain is learning by establishing new neuronal connections for things that have been learned. neurophysiological studies have indicated a relationship between increased p-wave density during post-training rem sleep and learning performance. basically, the abundance of pgo waves translates into longer periods of rem sleep, which thereby allows the brain to have longer periods where neuronal connections are formed.
<p> according to the activation-synthesis hypothesis proposed by robert mccarley and allan hobson in 1975–1977, control over rem sleep involves pathways of "rem-on" and "rem-off" neurons in the brain stem. rem-on neurons are primarily cholinergic (i.e., involve acetylcholine); rem-off neurons activate serotonin and noradrenaline, which among other functions suppress the rem-on neurons. mccarley and hobson suggested that the rem-on neurons actually stimulate rem-off neurons, thereby serving as the mechanism for the cycling between rem and non-rem sleep. they used lotka–volterra equations to describe this cyclical inverse relationship. kayuza sakai and michel jouvet advanced a similar model in 1981. whereas acetylcholine manifests in the cortex equally during wakefulness and rem, it appears in higher concentrations in the brain stem during rem. the withdrawal of orexin and gaba may cause the absence of the other excitatory neurotransmitters; researchers in recent years increasingly include gaba regulation in their models.
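For readers unfamiliar with the reference, the Lotka–Volterra equations mentioned above are the classic predator–prey system; a generic form is sketched below, with x standing for the activity of the REM-on population and y for the REM-off population. The variable names and the coefficients a, b, c, d are illustrative — the exact parameterization used by McCarley and Hobson is not given in this paragraph.

```latex
\frac{\mathrm{d}x}{\mathrm{d}t} = a\,x - b\,x\,y,
\qquad
\frac{\mathrm{d}y}{\mathrm{d}t} = -c\,y + d\,x\,y ,
```

whose solutions oscillate out of phase, giving the cyclical inverse relationship described in the text.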
<p> non-rem sleep is initiated by neurons in the preoptic and anterior hypothalamic area, whereas rem sleep is eventually elicited by the cells in the pontine tegmentum. electroencephalography is used to analyze brain wave patterns during sleep and has the capability to differentiate between rem sleep from non-rem sleep. rem sleep cycles mimic conscious brain patterns to an extent. night terrors, for example, involve the partial arousal out of non-rem sleep. similarly, rem behavior disorder occurs when patients have fits of violent behavior during rem sleep. benzodiazepines are the most common treatments for sleep-related disorders.
<p> other theories are that rem sleep warms the brain, stimulates and stabilizes the neural circuits that have not been activated during waking, or creates internal stimulation to aid development of the cns; while some argue that rem lacks any purpose, and simply results from random brain activation.
<p> an investigation of the differential brain structures can be conducted by clinico-anatomical correlations. here, the mechanisms associated with rem sleep are removed to observe whether there is a cessation in dreaming as well, then the areas thought to be associated with dreaming are removed to see if rem sleep is also made impossible. these studies, with the exception of natural accidents, are conducted with animals. a main problem with obliterating rem sleep is that the associated area, the brain stem, is responsible for consciousness. lesions large enough to stop rem completely can also render the subject unconscious.
<p> the neocortex has also been shown to play an influential role in sleep, memory and learning processes. semantic memories appear to be stored in the neocortex, specifically the anterolateral temporal lobe of the neocortex. it is also involved in instrumental conditioning; responsible for transmitting sensory information and information about plans for movement to the basal ganglia. the firing rate of neurons in the neocortex also has an effect on slow-wave sleep. when the neurons are at rest and are hyperpolarizing, a period of inhibition occurs during a slow oscillation, called the down state. when the neurons of the neocortex are in the excitatory depolarizing phase and are firing briefly at a high rate, a period of excitation occurs during a slow oscillation, called the up state.
<p> reentry is a neural structuring of the brain, specifically in humans, which is characterized by the ongoing bidirectional exchange of signals along reciprocal axonal fibers linking two or more brain areas. it is hypothesized to allow for widely distributed groups of neurons to achieve integrated and synchronized firing, which is proposed to be a requirement for consciousness, as outlined by gerald edelman and giulio tononi in their book "a universe of consciousness". | REM sleep is necessary for some types of memory consolidation. Avi Karni and Bob Stickgold have each done some work in this area. During REM sleep, the thalamocortical networks are quite active and desynchronized, levels of acetylcholine are high, and levels of noradrenaline are low.
how do undersea cables compensate for tectonic movement? | <p> a shunt fault occurs when the cable insulation becomes damaged, such that there is a short circuit from the metallic core to the seawater directly. in this situation the apparent location of the virtual ground will move to the shunt fault location. as long as the power feed equipment farthest from the shunt fault has the capability of generating the additional voltage required to maintain the same current, the cable system will continue to carry traffic.
<p> subsea cable protection systems can encounter wear due to movement, and general changes in composition due to being submerged for a prolongued period of time, such as corrosion or changes in polymer based compounds. consideration should be given to the induced effects on the cps resulting from the dynamic elements in the environment. simple changes such as changes in temperature, current or salinity can result in changes in the ability of the cps to offer protection for the life of the cable. it is advisable to carefully assess the potential effects of movement of the cps, relating to the dynamic abilities of the cable. the cps may withstand the worst conditions seen over a 100yr period, but would the cable inside the cps survive these movements. in some instances, such as shore ends for fibre optic cables where rocky outcrops are present, dynamic influences can be reduced by securing the articulated pipe to the seabed rock, thus reducing the degree of movement remaining.
<p> special cable constructions and termination techniques are required for cables installed in ships. such assemblies are subjected to environmental and mechanical extremes. therefore, in addition to electrical and fire safety concerns, such cables may also be required to be pressure-resistant where they penetrate a vessel's bulkheads. they must also resist corrosion caused by salt water or salt spray, which is accomplished through the use of thicker, specially constructed jackets, and by tinning the individual wire stands.
<p> finally, the cable may be armored to protect it from environmental hazards, such as construction work or gnawing animals. undersea cables are more heavily armored in their near-shore portions to protect them from boat anchors, fishing gear, and even sharks, which may be attracted to the electrical power that is carried to power amplifiers or repeaters in the cable.
<p> seabed stability is an important factor associated with cable protection systems. should the cable protection system be too buoyant, it is less likely to remain in contact with the seabed, thus the cps is more likely to require additional remedial stability measures, such as installation of concrete mattresses, rockbags, or rockdumping.
<p> power can be transferred in either direction (which might become more likely due to the closure of longannet power station), but it is necessary for the link to be offline for a sufficient time prior to reversing the direction. mass impregnated non-draining (mind) cables are used in a bipolar arrangement, but no sea- or earth-return path is permitted for environmental reasons, meaning that both cables must be in service for the link to be operational. the cables are spaced apart to minimise thermal interference, but not so far as to materially impact any marine life which navigates using the magnetic field of the earth.
<p> the most common laying engine in use is the linear cable engine (lce). the lce is used to feed the cable down to the ocean floor, but this device can also be reversed and used to bring back up cable needing repair. these engines can feed 800 feet of cable a minute. however, ships are limited to a speed of 8 knots while laying cable to ensure the cable lies on the sea floor properly and to compensate for any small adjustments in course that might affect the cables' position, which must be carefully mapped so that they can be found again if they need to be repaired. linear cable engines are also equipped with a brake system that allows the flow of cable to be controlled or stopped if a problem arises. a common system used is a fleeting drum, a mechanical drum fitted with raised surfaces on the drum face that help slow and guide the cable into the lce. | Protecting the cable from physical damage such as rubbing against rocks is quite simple: the cables have protective layers around them just like most other wires/cables do. Avoiding making the cables longer than necessary due to mountains, etc. is also a fairly easy task these days because, although we have only mapped a very small portion of the oceans with great detail, we do have approximate terrain data on most of it. As well, the addition of a couple kilometres of cable in a cable that is thousands of kilometres long is not really a huge issue, as the latency and cost would only increase by a fraction of a percent. Protecting against tectonic movement is more complicated, and this is a very good question. All materials are able to stretch some amount before breaking, and this fact helps with this problem, but the problem still exists. Even if a sudden earthquake caused a change in the required length of only a few metres, this could be disastrous because, although the cable can stretch, for this to happen, part of the cable has to slide along the ocean floor. The reason this is a problem isn't because of the potential for abrasive damage to occur; it's because of friction. The stretching of the cable would generate very large stresses in it, and the frictional force fighting its motion would increase these stresses. The protective layers I mentioned earlier also help with this. Additionally, these cables aren't under constant tension (though this could significantly reduce their length); they do have a bit of slack in them. TL;DR: Undersea fiber-optic cables aren't just glass fibers; they have several protective layers surrounding them. This image demonstrates this well, though some fiber-optic cables may have even more protective layers than this. P.S. Another relevant question would be: "how are these cables protected against other forms of damage such as a sinking ship falling on them?" The answer is: they aren't. This actually happened several years ago, and the cable had to be repaired (with great difficulty, of course).
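A rough consistency check on the two figures in the context above (the 800 feet-per-minute feed rate and the 8-knot speed limit); the conversion factors are standard values, not taken from the text:

```latex
8\ \text{knots} \;=\; 8 \times 6076\ \tfrac{\text{ft}}{\text{h}}
\;\approx\; 48{,}600\ \tfrac{\text{ft}}{\text{h}}
\;\approx\; 810\ \tfrac{\text{ft}}{\text{min}},
```

so the engine's maximum pay-out rate of roughly 800 ft/min matches the ship's maximum laying speed: the cable leaves the ship at about the rate the ship advances, with any extra pay-out providing slack for the seabed terrain.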
why do many doctors prescribe prednisone for patients with infections? | <p> these drugs were widely used as a first line treatment for many infections, including very commons ones like acute sinusitis, acute bronchitis, and uncomplicated urinary tract infections. reports of serious adverse events began emerging, and the fda first added a boxed warning to fluoroquinolones in july 2008 for the increased risk of tendinitis and tendon rupture. in february 2011, the risk of worsening symptoms for those with myasthenia gravis was added to the boxed warning. in august 2013, the agency required updates to the labels to describe the potential for irreversible peripheral neuropathy (serious nerve damage).
<p> the practice of doctors prescribing placebos that are disguised as real medication is controversial. a chief concern is that it is deceptive and could harm the doctor–patient relationship in the long run. while some say that blanket consent, or the general consent to unspecified treatment given by patients beforehand, is ethical, others argue that patients should always obtain specific information about the name of the drug they are receiving, its side effects, and other treatment options. this view is shared by some on the grounds of patient autonomy. there are also concerns that legitimate doctors and pharmacists could open themselves up to charges of fraud or malpractice by using a placebo. critics also argued that using placebos can delay the proper diagnosis and treatment of serious medical conditions.
<p> the aim of treatment is mostly supportive, such as pain control, reducing the duration of symptoms and viral shedding, and in some cases preventing outbreaks. antibiotics are rarely prescribed to treat bacterial superinfection of oral lesions. antiviral drugs such as aciclovir, valaciclovir, and famciclovir are used to treat herpetic gingivostomatitis, and in resistant cases foscarnet can be used. treatment does not prevent recurrence. most individuals who are immunocompetent will fully recover from recurrent herpes labialis in 7 to 14 days. however, treatment with antipyretics, oral anaesthetics and analgesics is often needed. in severe cases of herpetic gingivostomatitis, mouth rinses are useful in relieving oral discomfort. these contain topical anaesthetic agents such as lidocaine and diphenhydramine as well as coating agents such as magnesium-containing antacids. in order to prevent dehydration, oral fluid intake is encouraged. other treatment options include good oral hygiene and gentle debridement of the mouth.
<p> the antivenom manufacturer's product information recommends one vial, although more has been used. past guidelines indicated two vials, with a further two vials recommended if symptoms did not resolve within two hours; however, recent guidelines state "antivenom is sometimes given if there is a history, symptoms and signs consistent with systemic envenoming, and severe pain unresponsive to oral analgesics ... however recent trials show antivenom has a low response rate little better than placebo, and any effect is less than might be achieved with optimal use of standard analgesics." the antivenom can be given by injection intramuscularly (im) or intravenously (iv). the manufacturer recommends im use, with iv administration reserved for life-threatening cases. in january 2008 toxicologist geoffrey isbister suggested im antivenom was not as effective as iv antivenom, after proposing that im antivenom took longer to reach the blood serum. isbister subsequently found the difference between iv and im routes of administration was, at best, small and did not justify routinely choosing one route over the other.
<p> patients who are immunocompromised, either with hiv/aids or as a result of chemotherapy, may require systemic prevention or treatment with oral or intravenous administered anti-fungals. however there is strong evidence that drugs that are absorbed or partially absorbed from the gi tract can prevent candidiasis more effectively than drugs that are not absorbed in the same way.
<p> treatment is often started without confirmation of infection because of the serious complications that may result from delayed treatment. treatment depends on the infectious agent and generally involves the use of antibiotic therapy although there is no clear evidence of which antibiotic regimen is more effective and safe in the management of pid. if there is no improvement within two to three days, the patient is typically advised to seek further medical attention. hospitalization sometimes becomes necessary if there are other complications. treating sexual partners for possible stis can help in treatment and prevention.
<p> treatment for proctitis varies depending on severity and the cause. for example, the physician may prescribe antibiotics for proctitis caused by bacterial infection. if the proctitis is caused by crohn's disease or ulcerative colitis, the physician may prescribe the drug 5-aminosalicylic acid (5asa) or corticosteroids applied directly to the area in enema or suppository form, or taken orally in pill form. enema and suppository applications are usually more effective, but some patients may require a combination of oral and rectal applications. | A well-functioning immune system often has undesirable byproducts when mounting an immune response. Additionally, the immune system is quite complex, with numerous signaling pathways and different types of white blood cells responding. For example, the cascade of cytokines, which helps propagate an immune response, also gives you that "I feel like shit" feeling. Sometimes, the immune response is actually partially harmful. Many types of pneumonia, meningitis, or other widespread infections with big inflammatory responses show better outcomes when steroids are used. The idea, again, is that while the inflammatory response may be good at killing a pathogen, it may also be good at causing airway swelling along with it, for example. Here is a study showing a list of different illnesses where steroids have been shown to help.
how does the body know the relative position of its parts (eg, where you arm is)? | <p> anatomical terms used to describe location are based on a body positioned in what is called the "standard anatomical position". this position is one in which a person is standing, feet apace, with palms forward and thumbs facing outwards. just as maps are normally oriented with north at the top, the standard body "map," or anatomical position, is that of the body standing upright, with the feet at shoulder width and parallel, toes forward. the upper limbs are held out to each side, and the palms of the hands face forward.
<p> standard anatomical position is rigidly defined for human anatomy. in standard anatomical position, the human body is standing erect and at rest. unlike the situation in other vertebrates, the limbs are placed in positions reminiscent of the supine position imposed on cadavers during autopsy. therefore, the body has its feet together (or slightly separated), and its arms are rotated outward so that the palms are forward, and the thumbs are pointed away from the body (forearms supine). as well, the arms are usually moved slightly out from the body, so that the hands do not touch the sides. the positions of the limbs (and the arms in particular) have important implications for directional terms in those appendages. the penis in the anatomical position is described in its erect position and therefore lies against the abdomen, hence the ventral surface of the penis is actually anterior when the penis is pointing down between the legs.
<p> bullet::::- instead of trying to visualize where the bones of the arm and shoulder are to get the above angle measured, the judge could use the angle between then point of shoulder and the humerus, which should be at the angle of around 85 degrees.
<p> individual vertebrae of the human vertebral column can be felt and used as surface anatomy, with reference points are taken from the middle of the vertebral body. this provides anatomical landmarks that can be used to guide procedures such as a lumbar puncture and also as vertical reference points to describe the locations of other parts of human anatomy, such as the positions of organs.
<p> for the body different kind of points are used, but, as with the head, the distances between these points are measured. seventy-three so-called anthropometry landmarks were extracted from the scans of a database used to create this system. these are point-to-point distances. the landmarks identify key bone joint structure and are adequate to segment the body and produce anatomical reference axis systems for the key body segments and joints.
<p> bullet::::- proprioception – provides the information on the relative "position" of the parts of the body. proprioception and touch are related in subtle ways, and their impairment results in surprising and deep deficits in perception and action.
<p> bullet::::- "axial", the plane that is horizontal and parallel to the axial plane of the body in the standard anatomical position. it contains (and thus is defined by) the lateral and the medial axes of the brain. | The concept is called proprioception. I'm sure someone will give a more in depth answer, but essentially there are multiple body maps in your brain, and these maps are used in conjunction with your peripheral nervous system in order to determine body position. For example, proprioceptors are found in skeletal muscles. The relative stretching/compression of these proprioreceptors gives information about the position of the limb. |
are there any carnivorous animals that only eat other carnivores? | <p> "carnivore" also may refer to the mammalian order carnivora, but this is somewhat misleading: many, but not all, carnivora are meat eaters, and even fewer are true obligate carnivores (see below). for example, while the arctic polar bear eats meat almost exclusively (more than 90% of its diet is meat), most species of bears are actually omnivorous, and the giant panda is exclusively herbivorous. there are also many carnivorous species that are not members of carnivora.
<p> though carnivora is a taxon for species classification, no such equivalent exists for omnivores, as omnivores are widespread across multiple taxonomic clades. the carnivora order does not include all carnivorous species, and not all species within the carnivora taxon are carnivorous. it is common to find physiological carnivores consuming materials from plants or physiological herbivores consuming material from animals, e.g. felines eating grass and deer eating birds. from a behavioral aspect, this would make them omnivores, but from the physiological standpoint, this may be due to zoopharmacognosy. physiologically, animals must be able to obtain both energy and nutrients from plant and animal materials to be considered omnivorous. thus, such animals are still able to be classified as carnivores and herbivores when they are just obtaining nutrients from materials originating from sources that do not seemingly complement their classification. for instance, it is well documented that animals such as giraffes, camels, and cattle will gnaw on bones, preferably dry bones, for particular minerals and nutrients. felines, which are usually regarded as obligate carnivores, occasionally eat grass to regurgitate indigestibles (e.g. hair, bones), aid with hemoglobin production, and as a laxative.
<p> carnivores are sometimes characterized by their type of prey. for example, animals that eat mainly insects and similar invertebrates are called insectivores, while those that eat mainly fish are called piscivores. the first tetrapods, or land-dwelling vertebrates, were piscivorous amphibians known as labyrinthodonts. they gave rise to insectivorous vertebrates and, later, to predators of other tetrapods.
<p> obligate or "true" carnivores are those whose diet requires nutrients found only in animal flesh. while obligate carnivores might be able to ingest small amounts of plant matter, they lack the necessary physiology required to digest it. in fact, some obligate carnivorous mammals will only ingest vegetation for the sole purpose of its use as an emetic, to self-induce vomiting of the vegetation along with the other food it had ingested that upset its stomach.
<p> some physiological carnivores consume plant matter and some physiological herbivores consume meat. from a behavioral aspect, this would make them omnivores, but from the physiological standpoint, this may be due to zoopharmacognosy. physiologically, animals must be able to obtain both energy and nutrients from plant and animal materials to be considered omnivorous. thus, such animals are still able to be classified as carnivores and herbivores when they are just obtaining nutrients from materials originating from sources that do not seemingly complement their classification. for example, it is well documented that some ungulates such as giraffes, camels, and cattle, will gnaw on bones to consume particular minerals and nutrients. also, cats, which are generally regarded as obligate carnivores, occasionally eat grass to regurgitate indigestible material (such as hairballs), aid with hemoglobin production, and as a laxative.
<p> carnivorans have teeth and claws adapted for catching and eating other animals. many hunt in packs and are social animals, giving them an advantage over larger prey. some carnivorans, such as cats and pinnipeds, depend entirely on meat for their nutrition. others, such as raccoons and bears, are more omnivorous, depending on the habitat. the giant panda is largely a herbivore, but also feeds on fish, eggs and insects. the polar bear subsists mainly on seals.
<p> characteristics commonly associated with carnivores include strength, speed, and keen senses for hunting, as well as teeth and claws for capturing and tearing prey. however, some carnivores do not hunt and are scavengers, lacking the physical characteristics to bring down prey; in addition, most hunting carnivores will scavenge when the opportunity arises. carnivores have comparatively short digestive systems, as they are not required to break down the tough cellulose found in plants. | The first one that came to mind for me was the Northern Fur Seal. Other seals or sea lions may follow this pattern, but I can't confirm that. My guess is that you'll be more likely to find these types of animals in the marine environment, particularly animals that spend some part of their life in the open ocean, solely due to the size of the ecosystems and food webs.
how much do we actually know about the bubonic plague? was it a virus? bacterial? how did it kill people? what was the pathogenesis/etiology? | <p> in september 1896 the first case of bubonic plague was detected in mandvi by acacio gabriel viegas. it spread rapidly to other parts of the city, and the death toll was estimated at 1,900 people per week through the rest of the year. many people fled from bombay at this time, and in the census of 1901, the population had actually fallen to 780,000. viegas correctly diagnosed the disease as bubonic plague and tended to patients at great personal risk. he then launched a vociferous campaign to clean up slums and exterminate rats, the carriers of the fleas which spread the plague bacterium. to confirm veigas' findings, four teams of independent experts were brought in. with his diagnosis proving to be correct, the governor of bombay invited w m haffkine, who had earlier formulated a vaccine for cholera, to do the same for the epidemic.
<p> the reference above to bubonic plague seems improbable. typhoid is far more likely; it was both endemic and epidemic at the period, killing prince albert in 1861, but bubonic plague had a heyday from 1348 to about 1700. cholera is just possible; there were outbreaks in 1832 in liverpool and reputedly as late as 1860 in london.
<p> the bubonic plague outbreak existed in three pandemic waves and is known as the black death. in the 1300s alone, an estimated 20–30 million people were killed in europe and approximately 12 million people were killed in china. these deaths were at least 30 percent of the european population at that time. the last major outbreak of the bubonic plague occurred in london from 1665–1666 and is known as the great plague.
<p> the bubonic plague was endemic in populations of infected ground rodents in central asia, and was a known cause of death among migrant and established human populations in that region for centuries. an influx of new people due to political conflicts and global trade led to the distribution of this disease throughout the world.
<p> bubonic plague is a variant of the deadly flea-borne disease plague, which is caused by the enterobacteria "yersinia pestis", that devastated human populations beginning in the 14th century. bubonic plague is primarily spread by fleas that lived on the black rat, an animal that originated in south asia and spread to europe by the 6th century. it became common to cities and villages, traveling by ship with explorers. a human would become infected after being bitten by an infected flea. the first sign of an infection of bubonic plague is swelling of the lymph nodes, and the formation of buboes. these buboes would first appear in the groin or armpit area, and would often ooze pus or blood. eventually infected individuals would become covered with dark splotches caused by bleeding under the skin. the symptoms would be accompanied by a high fever, and within four to seven days of infection, more than half the victims would die. during the 14th and 15th century, humans did not know that a bacterium was the cause of plague, and efforts to slow the spread of disease were futile.
<p> the disease is generally believed to have been bubonic plague, an infection by the bacterium "yersinia pestis", transmitted via a rat vector. other symptom patterns of the bubonic plague, such as septicemic plague and pneumonic plague were also present.
<p> the best-known symptom of bubonic plague is one or more infected, enlarged, and painful lymph nodes, known as buboes. after being transmitted via the bite of an infected flea, the "y. pestis" bacteria become localized in an inflamed lymph node, where they begin to colonize and reproduce. buboes associated with the bubonic plague are commonly found in the armpits, upper femoral, groin and neck region. acral gangrene (i.e., of the fingers, toes, lips and nose) is another common symptom. | You might actually want to start out on the wikipedia page for Bubonic plague. It does actually have some good details that will likely answer your basic questions. If you read that, and still have questions, please come back and ask them! |
if the heaviest elements sink toward the center of the earth, shouldn't there be stratified layers of heavy metals - one of them being gold? | <p> concentrations of heavy metals below the crust are generally higher, with most being found in the largely iron-silicon-nickel core. platinum, for example, comprises approximately 1 part per billion of the crust whereas its concentration in the core is thought to be nearly 6,000 times higher. recent speculation suggests that uranium (and thorium) in the core may generate a substantial amount of the heat that drives plate tectonics and (ultimately) sustains the earth's magnetic field.
<p> high-density materials tend to sink through lighter materials. this tendency is affected by the relative structural strengths, but such strength is reduced at temperatures where both materials are plastic or molten. iron, the most common element that is likely to form a very dense molten metal phase, tends to congregate towards planetary interiors. with it, many siderophile elements (i.e. materials that readily alloy with iron) also travel downward. however, not all heavy elements make this transition as some chalcophilic heavy elements bind into low-density silicate and oxide compounds, which differentiate in the opposite direction.
<p> the earth's crust is made of approximately 5% of heavy metals by weight, with iron comprising 95% of this quantity. light metals (~20%) and nonmetals (~75%) make up the other 95% of the crust. despite their overall scarcity, heavy metals can become concentrated in economically extractable quantities as a result of mountain building, erosion, or other geological processes.
<p> williamson and adams first developed the theory in 1923. they concluded that "it is therefore impossible to explain the high density of the earth on the basis of compression alone. the dense interior cannot consist of ordinary rocks compressed to a small volume; we must therefore fall back on the only reasonable alternative, namely, the presence of a heavier material, presumably some metal, which, to judge from its abundance in the earth's crust, in meteorites and in the sun, is probably iron."
<p> using the chondritic reference model and combining known compositions of the crust and mantle, the unknown component, the composition of the inner and outer core, can be determined; 85% fe, 5% ni, 0.9% cr, 0.25% co, and all other refractory metals at very low concentration. this leaves earth's core with a 5–10% weight deficit for the outer core, and a 4–5% weight deficit for the inner core; which is attributed to lighter elements that should be cosmically abundant and are iron-soluble; h, o, c, s, p, and si. earth's core contains half the earth's vanadium and chromium, and may contain considerable niobium and tantalum. earth's core is depleted in germanium and gallium.
<p> the iron may have been deposited by volcanic exhalation, perhaps in the presence of microorganisms. gold ore mineralization is most intense in the main ledge, at the surface, and the 9 ledge, at the 3200 level (feet below the incline shaft, at 1594 m above sea level).
<p> although most elemental metals have higher densities than most nonmetals, there is a wide variation in their densities, lithium being the least dense (0.534 g/cm³) and osmium (22.59 g/cm³) the most dense. magnesium, aluminium and titanium are light metals of significant commercial importance. their respective densities of 1.7, 2.7 and 4.5 g/cm³ can be compared to those of the older structural metals, like iron at 7.9 and copper at 8.9 g/cm³. an iron ball would thus weigh about as much as three aluminium balls. | I think that the geological activity at these depths would be faster than the stratification of the heavy metals. The plate tectonics and magma flows would probably be faster than the rate at which the metals sink, and would constantly be mixing up the mantle and breaking up the lower crust.
if the earth had multiple moons, would they all be in the same phase at once or in different phases? | <p> in western culture, the "four principal phases" of the moon are new moon, first quarter, full moon, and third quarter (also known as last quarter). these are the instances when the moon's ecliptic longitude and the sun's ecliptic longitude differ by 0°, 90°, 180°, and 270°, respectively. each of these phases occur at slightly different times when viewed from different points on earth. during the intervals between principal phases, the moon's apparent shape is either crescent or gibbous. these shapes, and the periods when the moon shows them, are called the "intermediate phases" and last one-quarter of a synodic month, or 7.38 days, on average. however, their durations vary slightly because the moon's orbit is rather elliptical, so the satellite's orbital speed is not constant. the descriptor "waxing" is used for an intermediate phase when the moon's apparent shape is thickening, from new to full moon, and "waning" when the shape is thinning.
<p> if the earth had a perfectly circular orbit centered around the sun, and the moon's orbit was also perfectly circular and centered around the earth, and both orbits were coplanar (on the same plane) with each other, then two eclipses would happen every lunar month (29.53 days). a lunar eclipse would occur at every full moon, a solar eclipse every new moon, and all solar eclipses would be the same type.
<p> the moon makes a complete orbit around earth with respect to the fixed stars about once every 27.3 days (its sidereal period). however, because earth is moving in its orbit around the sun at the same time, it takes slightly longer for the moon to show the same phase to earth, which is about 29.5 days (its synodic period). unlike most satellites of other planets, the moon orbits closer to the ecliptic plane than to the planet's equatorial plane. the moon's orbit is subtly perturbed by the sun and earth in many small, complex and interacting ways. for example, the plane of the moon's orbit gradually rotates once every 18.61 years, which affects other aspects of lunar motion. these follow-on effects are mathematically described by cassini's laws.
<p> earth and its moon have compositions so similar they must have come from the same body. the common explanation is that the moon formed from material blasted free by early collision with a mars-sized body. in his 2015 paper hamilton argues instead for lunar formation by the generally disfavored option of fissioning, spun off from a still partially molten, and rapidly spinning, young earth as it reached full size. slow fractionation of a magma ocean is commonly assumed to have formed lunar highlands, but geochronology, and petrologic problems with that explanation, led hamilton to suggest that here too whole-planet fractionation was complete by about 4.5 b.y., and subsequent surface magmatism was due to impact melting.
<p> the saturnian moons janus and epimetheus share their orbits, the difference in semi-major axes being less than either's mean diameter. this means the moon with the smaller semi-major axis will slowly catch up with the other. as it does this, the moons gravitationally tug at each other, increasing the semi-major axis of the moon that has caught up and decreasing that of the other. this reverses their relative positions proportionally to their masses and causes this process to begin anew with the moons' roles reversed. in other words, they effectively swap orbits, ultimately oscillating both about their mass-weighted mean orbit.
<p> although no other moons of earth have been found to date, there are various types of near-earth objects in 1:1 resonance with it, which are known as quasi-satellites. quasi-satellites orbit the sun from the same distance as a planet, rather than the planet itself. their orbits are unstable, and will fall into other resonances or be kicked into other orbits over thousands of years. quasi-satellites of earth include , ,
<p> this definition of double planet depends on the pair's distance from the sun. if the earth–moon system happened to orbit farther away from the sun than it does now, then earth would win the tug of war. for example, at the orbit of mars, the moon's tug-of-war value would be 1.05. also, several tiny moons discovered since asimov's proposal would qualify as double planets by this argument. neptune's small outer moons neso and psamathe, for example, have tug-of-war values of 0.42 and 0.44, less than that of earth's moon. yet their masses are tiny compared to neptune's, with an estimated ratio of 1.5 () and 0.4 (). | Assuming that the moons would be at different distances from the Earth, each would orbit the Earth in a different amount of time, with those farthest away taking longer to orbit. Therefore, most of the time they would be in different phases. Occasionally, if two moons were in the same position in the sky, their phases would appear the same. |
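To put a rough number on the answer above, here is a minimal sketch. It applies Kepler's third law to the real Moon and to a purely hypothetical second moon at half the Moon's distance; the physical constants are standard values, but the second moon, its distance, and the simple realignment estimate are illustrative assumptions, not anything taken from the quoted sources.

```python
# Minimal sketch: orbital periods of two moons at different distances,
# via Kepler's third law T = 2*pi*sqrt(a^3 / (G*M)).  The "inner moon"
# at half the real Moon's distance is hypothetical, for illustration only.
import math

G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
M_EARTH = 5.972e24     # mass of Earth, kg
A_MOON = 3.844e8       # Moon's semi-major axis, m
A_INNER = A_MOON / 2   # hypothetical second moon, half the distance

def period_days(a_m):
    """Sidereal orbital period in days for a circular orbit of radius a_m."""
    return 2 * math.pi * math.sqrt(a_m**3 / (G * M_EARTH)) / 86400

t_outer = period_days(A_MOON)    # ~27.4 days (the familiar sidereal month)
t_inner = period_days(A_INNER)   # ~9.7 days

# Because the periods differ, the two moons drift in and out of step; they
# line up on the same side of Earth (and so show the same phase) roughly
# every 1 / |1/T_inner - 1/T_outer| days.
t_realign = 1 / abs(1 / t_inner - 1 / t_outer)

print(f"outer moon period : {t_outer:5.1f} days")
print(f"inner moon period : {t_inner:5.1f} days")
print(f"phases line up    : roughly every {t_realign:5.1f} days")
```

The realignment interval ignores the Sun's motion (which is what sets the true synodic month), but it captures the point of the answer: moons at different distances run through their phases at different rates and only line up occasionally.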
are ionic solids conductive? why or why not | <p> soluble ionic compounds like salt can easily be dissolved to provide electrolyte solutions. this is a simple way to control the concentration and ionic strength. the concentration of solutes affects many colligative properties, including increasing the osmotic pressure, and causing freezing-point depression and boiling-point elevation. because the solutes are charged ions they also increase the electrical conductivity of the solution. the increased ionic strength reduces the thickness of the electrical double layer around colloidal particles, and therefore the stability of emulsions and suspensions.
<p> although ionic compounds contain charged atoms or clusters, these materials do not typically conduct electricity to any significant extent when the substance is solid. in order to conduct, the charged particles must be mobile rather than stationary in a crystal lattice. this is achieved to some degree at high temperatures when the defect concentration increases the ionic mobility and solid state ionic conductivity is observed. when the ionic compounds are dissolved in a liquid or are melted into a liquid, they can conduct electricity because the ions become completely mobile. this conductivity gain upon dissolving or melting is sometimes used as a defining characteristic of ionic compounds.
<p> ionic compounds, such as salts, can dissociate in solution into their constituent ions, so there is not a one-to-one relationship between the molarity and the osmolarity of a solution. for example, sodium chloride (nacl) dissociates into na and cl ions. thus, for every 1 mole of nacl in solution, there are 2 osmoles of solute particles (i.e., a 1 mol/l nacl solution is a 2 osmol/l nacl solution). both sodium and chloride ions affect the osmotic pressure of the solution.
<p> in electrolytes, electrical conduction happens not by band electrons or holes, but by full atomic species (ions) traveling, each carrying an electrical charge. the resistivity of ionic solutions (electrolytes) varies tremendously with concentration – while distilled water is almost an insulator, salt water is a reasonable electrical conductor. conduction in ionic liquids is also controlled by the movement of ions, but here we are talking about molten salts rather than solvated ions. in biological membranes, currents are carried by ionic salts. small holes in cell membranes, called ion channels, are selective to specific ions and determine the membrane resistance.
<p> the charged components that make up ionic solids cannot exist in the high-density sea of delocalized electrons characteristic of strong metallic bonding. some molecular salts, however, feature both ionic bonding among molecules and substantial one-dimensional conductivity, indicating a degree of metallic bonding among structural components along the axis of conductivity. examples include tetrathiafulvalene salts.
<p> although early thinking was that a higher ratio of a cation's ion charge to ionic radius, or the charge density, resulted in more solvation, this does not stand up to scrutiny for ions like iron(iii) or lanthanides and actinides, which are readily hydrolyzed to form insoluble (hydrous) oxides. as these are solids, it is apparent that they are not solvated.
<p> when ionic compounds dissolve, the individual ions dissociate and are solvated by the solvent and dispersed throughout the resulting solution. because the ions are released into solution when dissolved, and can conduct charge, soluble ionic compounds are the most common class of strong electrolytes, and their solutions have a high electrical conductivity. | Adding onto the other commenter, some ionic solids do conduct with electrons and holes. Many semiconductors (GaAs, FeS2, CdTe) are ionic solids, and conduct because of semiconductor doping. (they have defects like impurity atoms or vacancies that make them conductive) Other things that come to mind are perovskites (SrTiO3 and others) that exhibit a really wide range of electronic behavior. In materials science at least you come to understand covalent vs ionic compounds as a real spectrum with many different outcomes in properties. Lots of ionic solids conduct with electrons and holes, lots conduct via the diffusion of ions (electrolytes), and lots are insulators. |
what about the dinosaurs made them unable to survive the k-t extinction? | <p> the k–pg extinction had a profound effect on the evolution of life on earth. the elimination of dominant cretaceous groups allowed other organisms to take their place, spurring a remarkable series of adaptive radiations in the paleogene. the most striking example is the replacement of dinosaurs by mammals. after the k–pg extinction, mammals evolved rapidly to fill the niches left vacant by the dinosaurs. also significant, within the mammalian genera, new species were approximately 9.1% larger after the k–pg boundary.
<p> bullet::::- russel reviewed various proposed hypotheses for the extinction of the non-avian dinosaurs. he concluded that the only viable proposal was that the dinosaurs had been wiped out by radiation emitted by a nearby supernova.
<p> a wide range of species perished in the k–pg extinction, the best-known being the non-avian dinosaurs. it also destroyed a plethora of other terrestrial organisms, including some mammals, pterosaurs, birds, lizards, insects, and plants. in the oceans, the k–pg extinction killed off plesiosaurs and the giant marine lizards (mosasauridae) and devastated fish, sharks, mollusks (especially ammonites, which became extinct), and many species of plankton. it is estimated that 75% or more of all species on earth vanished. yet the extinction also provided evolutionary opportunities: in its wake, many groups underwent remarkable adaptive radiation—sudden and prolific divergence into new forms and species within the disrupted and emptied ecological niches. mammals in particular diversified in the paleogene, evolving new forms such as horses, whales, bats, and primates. birds, fish, and perhaps lizards also radiated.
<p> the extinction of the dinosaurs at the end of the cretaceous period is generally thought to have been caused by the cretaceous–paleogene impact event, which created the chicxulub crater, demonstrating that impacts are a serious threat to life on earth. astronomers have speculated that without jupiter to mop up potential impactors, extinction events might have been more frequent on earth, and complex life might not have been able to develop. this is part of the argument used in the rare earth hypothesis.
<p> several researchers have stated that some non-avian dinosaurs survived into the paleocene and therefore the extinction of non-avian dinosaurs was gradual. their arguments were based on the finding of dinosaur remains in the hell creek formation up to above (40,000 years later than) the k–pg boundary. similar reports have come from other parts of the world, including china.
<p> bullet::::- a. audova analyzed the circumstances of the extinction of the dinosaurs and concluded that they were driven extinct gradually when earth's climate cooled too severely for their embryos to fully develop in the egg. he dismissed the idea that they went extinct due to factors like racial senility.
<p> excluding a few controversial claims, scientists agree that all non-avian dinosaurs became extinct at the k–pg boundary. the dinosaur fossil record has been interpreted to show both a decline in diversity and no decline in diversity during the last few million years of the cretaceous, and it may be that the quality of the dinosaur fossil record is simply not good enough to permit researchers to distinguish between the options. there is no evidence that late maastrichtian non-avian dinosaurs could burrow, swim, or dive, which suggests they were unable to shelter themselves from the worst parts of any environmental stress that occurred at the k–pg boundary. it is possible that small dinosaurs (other than birds) did survive, but they would have been deprived of food, as herbivorous dinosaurs would have found plant material scarce and carnivores would have quickly found prey in short supply. | This is an excellent, but incredibly complex question. But here goes: just a few notes on the subject but not at all a comprehensive answer: First off, dinosaurs weren't the only casualties - lots of groups went extinct, from many of the planktonic foraminifera, on up to many groups of mammals (yes, mammals went extinct at the KT, too), all of the marine reptiles (the ones that weren't already extinct) and many many more. Dinosaurs are the most conspicuous extinction because they were the dominant tetrapods of the time. However, what often gets glossed over without any mention is that dinosaurs were struggling before KT event. Sure T. rex and Triceratops were rocking the party up until closing time, but overall dino diversity had been shrinking steadily for a couple of million years before the main event. This was also true for other groups as well (marine icthyosaurs were extinct long before the KT). So a lot factors were leading up to a bad time for Earth even before the celestial body arrived. All that said, nothing over a certain size survived, eliminating all the big things. And as for the small dinosaurs, overspecialization was probably their biggest problem (highly specialized life history strategies and inability to generalize). I'm not sure what the person below meant about the smaller ones surviving. No dinosaurs survived the KT except for the true birds (which were already true birds by that time). |
what part of your brain gets activated when you "talk to yourself"? | <p> it is found that when the brain of an individual is activated by a piece of information of an event in which he/she has taken part, the brain of the individual will respond differently from that of a person who has received the same information from secondary sources (non-experiential).
<p> during auditory verbal imagery, the inferior frontal cortex and the insula were activated as well as the supplementary motor area, left superior temporal/inferior parietal region, the right posterior cerebellar cortex, the left precentral, and superior temporal gyri. other areas of the brain have been activated during auditory imagery however there hasn’t been an encoding process attributed to it yet such as frontopolar areas, and the subcallosal gyrus.
<p> much like the mcgurk effect, when listeners were also able to see the words being spoken, they were much more likely to correctly identify the missing phonemes. like every sense, the brain will use every piece of information it deems important to make a judgement about what it is perceiving. using the visual cues of mouth movements, the brain will use both in top-down processing to make a decision about what phoneme is supposed to be heard. vision is the primary sense for humans and for the most part assists in speech perception the most.
<p> the output of sense organs is first received by the thalamus. part of the thalamus' stimuli goes directly to the amygdala or "emotional/irrational brain", while other parts are sent to the neocortex or "thinking/rational brain". if the amygdala perceives a match to the stimulus, i.e., if the record of experiences in the hippocampus tells the amygdala that it is a fight, flight or freeze situation, then the amygdala triggers the hpa (hypothalmic-pituitary-adrenal) axis and hijacks the rational brain. this emotional brain activity processes information milliseconds earlier than the rational brain, so in case of a match, the amygdala acts before any possible direction from the neocortex can be received. if, however, the amygdala does not find any match to the stimulus received with its recorded threatening situations, then it acts according to the directions received from the neocortex. when the amygdala perceives a threat, it can lead that person to react irrationally and destructively.
<p> the noradrenergic neurons in the brain form a neurotransmitter system, that, when activated, exerts effects on large areas of the brain. the effects are manifested in alertness, arousal, and readiness for action.
<p> when people focus on things in a social context, the medial prefrontal cortex and precuneus areas of the brain are activated, however when people focus on a non-social context there is no activation of these areas. straube et al. hypothesized that the areas of the brain involved in mental processes were mainly responsible for social cue processing. it is believed that when iconic gestures are involved, the left temporal and occipital regions would be activated and when emblematic gestures were involved the temporal poles would be activated. when it came to abstract speech and gestures, the left frontal gyrus would be activated according to straube et al. after conducting an experiment on how body position, speech and gestures affected activation in different areas of the brain straube et al. came to the following conclusions:
<p> however, the results of neural imaging have to be taken with caution because the regions of the brain activated during spontaneous, natural internal speech diverge from those that are activated on demand. in research studies, individuals are asked to talk to themselves on demand, which is different than the natural development of inner speech within one's mind. the concept of internal monologue is an elusive study and is subjective to many implications with future studies. | When it comes to what's happening in our brain and our bodies during our inner speech, there are actually a lot of similarities between the words that we say out loud and the voice we hear in our head. Muscles in your larynx move when you speak out loud. But researchers have also uncovered that tiny muscular movements happen in the larynx when you talk to yourself silently in your head, too. They are only detectable via sensitive measuring techniques like electromyography, however, which is probably why you're not even aware of them. It gets even stranger though. The area of the brain that is active when we speak out loud — the left inferior frontal gyrus, also known as Broca's area — is also active when we 'speak' in our heads. What's more, scientists have shown that disrupting this region of the brain can interfere with our ability to engage in inner speech, much like it can interrupt our ability to speak audibly. This is probably because it's performing a similar function for our bodies, whether we're speaking out loud or just talking to ourselves silently. According to Dr. Nathan E. Chrone, an associate professor of neurology at Johns Hopkins, "We found that rather than carrying out the articulation of speech, Broca’s area is developing a plan for articulation, and then monitoring what is said to correct errors and make adjustments in the flow of speech." Why our body's physical actions are so similar, regardless of whether we're speaking out loud or inaudibly in our heads, is still unclear, but we are gaining a better understanding of how we can tell what voices are our own — whether internal or spoken — versus the voices of other people. That process has to do with a brain signal called "corollary discharge." As researcher Mark Scott of the University of British Columbia explains, "We spend a lot of time speaking and that can swamp our auditory system, making it difficult for us to hear other sounds when we are speaking. By attenuating the impact our own voice has on our hearing — using the ‘corollary discharge’ prediction — our hearing can remain sensitive to other sounds." Corollary discharge is essentially a copy of a motor signal which allows us to predict our own movements, including vocalizations, and which tells us that we're the ones moving or speaking rather than someone else. It's also thought that a malfunction in this process is part of what differentiates those that "hear voices" from everyone else who can distinguish their inner voice as "theirs." Source |
why is there a lyme vaccine for dogs, but we have yet to have an effective vaccine for humans? | <p> vaccinations are an important preventative animal health measure. the specific vaccinations recommended for dogs varies depending on geographic location, environment, travel history, and the activities the animal frequently engages in. in the united states, regardless of any of these factors, it is usually highly recommended that dogs be vaccinated against rabies, "canine parvovirus", canine distemper, and infectious canine hepatitis (using "canine adenovirus type 2" to avoid reaction). the decision on whether to vaccinate against other diseases, including leptospirosis, lyme disease, "bordetella bronchiseptica", "parainfluenza virus", and "canine coronavirus", should be made between an owner and a veterinarian, taking into account factors specific to the dog.
<p> in 2014, habib reportedly argued booster shots for pet vaccines increased risk of immune disorders, whereas medical experts cited large, long-term studies that show the benefit of vaccination outweighs the minimal risk of the adverse immune response. he supports “core vaccinations” for young dogs and remains opposed to booster shots due to his belief that it could increase risk of immune disorders, though veterinary scientists continue to defend giving regular booster shots to maintain immunity.
<p> programs supporting regular vaccination of dogs have contributed both to the health of dogs and to the public health. in countries where routine rabies vaccination of dogs is practiced, for example, rabies in humans is reduced to a very rare event.
<p> dogs are used for research because they can be domesticated, and because they have been used in studies concerning diabetes in the past. for example, dogs were used as subjects in a study of the effects of diet-induced obesity on insulin dispersion. in this experiment, it was found that a high-fat diet caused insulin resistance, contributing to cardiovascular disease, cancer, and type 2 diabetes.
<p> currently, there are geographically defined "core vaccines" and individually chosen "non-core vaccine" recommendations for dogs. a number of controversies surrounding adverse reactions to vaccines have resulted in authoritative bodies revising their guidelines as to the type, frequency, and methods/locations for dog vaccination.
<p> a vaccine is effective against "b. canis canis" (dogs in the mediterranean region), but is ineffective against "b. c. rossi". "b. imitans" causes a mild form of the disease that frequently resolves without treatment (dogs in southeast asia).
<p> the organization is working to end the inhumane culling of stray dogs, which many countries do in a misguided effort to eliminate rabies. the organization points out that vaccination programs are the only effective way to eliminate rabies, and work with governments on vaccination programs. in 2012, a mass vaccination program was started in the shaanxi, guizhou and anhui provinces of china, working with the chinese animal disease control centre; as of june 2014, 750 veterinarians have been trained and over 90,000 dogs have been vaccinated. mass vaccination programs have also been delivered in bali, the philippines, bangladesh, kenya, zanzibar, and kathmandu, nepal. | There was a human vaccine in the late 90s for Lyme disease called Lymerix. There were concerns at the time about it, namely that a certain epitope in the adjuvanted B. burgdorferi outer surface protein (OspA) might increase the risk of autoimmune Lyme arthritis. This later turned out to be false, but enough people were scared away or not offered it that the pharmaceutical company took it off the market owing to poor sales. |
why does rocket exhaust sometimes look like this. | <p> the rocket engine was a lox-ethanol, film-cooled, pressure-fed, blow-down design with a 10 to -long exhaust plume. plume-seeding technology allowed the plume color to vary from red to green to yellow to better facilitate race spectators in keeping track of specific racers while in the air.
<p> a rocket engine's formula_3 is usually high due to the high combustion temperatures and pressures, and the long converging-diverging nozzle used. it varies slightly with altitude due to changing atmospheric pressure, but can be up to 70%. most of the remainder is lost as heat in the exhaust.
<p> the jet exhaust is transparent and usually not visible in air. but in cold weather the water vapor, which is a large part of the steam-gas mixture, condenses soon after it leaves the nozzle, enveloping the pilot in a cloud of fog (for this reason, the very first tethered flights of the bell rocket belt were carried out in a hangar). the jet exhaust is also visible if the fuel is not decomposed completely in the gas generator, which can occur if the catalyst or the hydrogen peroxide is contaminated.
<p> some exhausts, notably alcohol fuelled rockets, can show visible shock diamonds. these are due to cyclic variations in the jet pressure relative to ambient creating shock waves that form 'mach disks'.
<p> the exhaust plume may also take on a corkscrew appearance as it is whipped around by upper level wind currents. it is typically seen within two to three minutes after a launch has occurred. depending on weather conditions, it could remain in the sky for up to half an hour before dispersing.
<p> the design of the rocket system caused some problems. the hot rocket exhaust could not be vented into the fighting compartment nor could the barrel withstand the pressure if the gasses were not vented. therefore, a ring of ventilation shafts was put around the barrel which channeled the exhaust and gave the weapon something of a pepperbox appearance.
<p> an exhaust plume contributes a significant infrared signature. one means to reduce ir signature is to have a non-circular tail pipe (a slit shape) to minimize the exhaust cross-sectional volume and maximize the mixing of hot exhaust with cool ambient air (see lockheed f-117 nighthawk). often, cool air is deliberately injected into the exhaust flow to boost this process (see ryan aqm-91 firefly and northrop grumman b-2 spirit). sometimes, the jet exhaust is vented above the wing surface to shield it from observers below, as in the lockheed f-117 nighthawk, and the unstealthy fairchild republic a-10 thunderbolt ii. to achieve infrared stealth, the exhaust gas is cooled to the temperatures where the brightest wavelengths it radiates are absorbed by atmospheric carbon dioxide and water vapor, dramatically reducing the infrared visibility of the exhaust plume. another way to reduce the exhaust temperature is to circulate coolant fluids such as fuel inside the exhaust pipe, where the fuel tanks serve as heat sinks cooled by the flow of air along the wings. | Those are Shock Diamonds, which are a complex phenomenon resulting from supersonic flow in air: You don't always see them because they need specific conditions to form, or at least to be visible, including excess fuel in the exhaust stream. |
(beta-positive decay) how can a proton decay to a neutron and a positron when it is lighter than a neutron? | <p> a very small minority of free neutron decays (about four per million) are so-called "two-body decays", in which the proton, electron and antineutrino are produced, but the electron fails to gain the 13.6 ev energy necessary to escape the proton, and therefore simply remains bound to it, as a neutral hydrogen atom. in this type of beta decay, in essence all of the neutron decay energy is carried off by the antineutrino.
<p> the two methods for this conversion are mediated by the weak force, and involve types of beta decay. in the simplest beta decay, neutrons are converted to protons by emitting a negative electron and an antineutrino. this is always possible outside a nucleus because neutrons are more massive than protons by an equivalent of about 2.5 electrons. in the opposite process, which only happens within a nucleus, and not to free particles, a proton may become a neutron by ejecting a positron. this is permitted if enough energy is available between parent and daughter nuclides to do this (the required energy difference is equal to 1.022 mev, which is the mass of 2 electrons). if the mass difference between parent and daughter is less than this, a proton-rich nucleus may still convert protons to neutrons by the process of electron capture, in which a proton simply electron captures one of the atom's k orbital electrons, emits a neutrino, and becomes a neutron.
<p> a very small minority of neutron decays (about four per million) are so-called "two-body (neutron) decays", in which a proton, electron and antineutrino are produced as usual, but the electron fails to gain the 13.6 ev necessary energy to escape the proton (the ionization energy of hydrogen), and therefore simply remains bound to it, as a neutral hydrogen atom (one of the "two bodies"). in this type of free neutron decay, almost all of the neutron decay energy is carried off by the antineutrino (the other "body"). (the hydrogen atom recoils with a speed of only about (decay energy)/(hydrogen rest energy) times the speed of light, or 250 km/s.)
<p> a very small minority of neutron decays (about four per million) are so-called "two-body (neutron) decays", in which a proton, electron and antineutrino are produced as usual, but the electron fails to gain the 13.6 ev necessary energy to escape the proton (the ionization energy of hydrogen), and therefore simply remains bound to it, as a neutral hydrogen atom (one of the "two bodies"). in this type of free neutron decay, in essence all of the neutron decay energy is carried off by the antineutrino (the other "body").
<p> the two types of beta decay are known as "beta minus" and "beta plus". in beta minus (β−) decay, a neutron is converted to a proton, and the process creates an electron and an electron antineutrino; while in beta plus (β+) decay, a proton is converted to a neutron and the process creates a positron and an electron neutrino. β+ decay is also known as positron emission.
<p> beta decay is characterized by the emission of a neutrino and a negatron, which is equivalent to an electron. this process occurs when a nucleus has an excess of neutrons with respect to protons, as compared to the stable isobar. this type of transition converts a neutron into a proton; similarly, a positron is released when a proton is converted into a neutron. these decays follow the relations n → p + e⁻ + ν̄ (beta minus) and p → n + e⁺ + ν (beta plus).
<p> beta decay conserves a quantum number known as the lepton number, or the number of electrons and their associated neutrinos (other leptons are the muon and tau particles). these particles have lepton number +1, while their antiparticles have lepton number −1. since a proton or neutron has lepton number zero, β+ decay (a positron, or antielectron) must be accompanied by an electron neutrino, while β− decay (an electron) must be accompanied by an electron antineutrino. | Before we begin, note that this occurs in proton-rich isotopes; free protons are stable as far as we can tell. First, it's best not to think of mass as "stuff," but as a number that relates to the energy of a particle by virtue of its own existence. This is Einstein's famous E=mc^(2). Second, in such decays, the resulting nucleus will actually be lighter overall than the original despite trading a lighter proton for a more massive neutron. A common example is sodium-22, whose positron decay produces neon-22, which is nearly 6 electron masses lighter. This mass difference comes from the nuclear binding energy of the atom, often called the mass defect. Groups of protons and neutrons have less mass than the sum of their individual masses; this mass difference corresponds to the energy that binds them together.
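As a quick check on the sodium-22 example in the answer above, the energetics can be written out. This is a sketch using approximate tabulated atomic masses (rounded here, and not taken from the quoted passages), with M denoting atomic rather than nuclear masses, which is why two electron masses are subtracted:

```latex
% Approximate Q-value for the beta-plus decay of sodium-22 to neon-22.
% A positive Q means the decay is allowed even though the neutron is
% heavier than the proton: the nuclear binding energy pays the difference.
\begin{align*}
Q_{\beta^+} &= \big[\,M(^{22}\mathrm{Na}) - M(^{22}\mathrm{Ne})\,\big]c^{2} - 2m_{e}c^{2} \\
            &\approx (21.99444\,\mathrm{u} - 21.99139\,\mathrm{u})\,c^{2} - 1.022\ \mathrm{MeV} \\
            &\approx 2.84\ \mathrm{MeV} - 1.02\ \mathrm{MeV} \approx 1.8\ \mathrm{MeV} > 0 .
\end{align*}
```

The roughly 0.003 u difference between the two neutral atoms is about 5.6 electron masses, which matches the "nearly 6 electron masses lighter" figure in the answer.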
how does emergency contraception (morning after pill) containing levonorgestrel work? pre-fertilization or post-fertilization moa? | <p> levonorgestrel, taken alone in a single high dose, was first evaluated as a form of emergency contraception in 1973. it was the second progestin to be evaluated for such purposes, following a study of quingestanol acetate in 1970. in 1974, the yuzpe regimen, which consisted of high doses of a combined birth control pill containing ethinylestradiol and norgestrel, was described as a method of emergency contraception by a. albert yuzpe and colleagues, and saw widespread interest. levonorgestrel-only emergency contraception was introduced under the brand name "postinor" by 1978. ho and kwan published the first study comparing levonorgestrel only and the yuzpe regimen as methods of emergency contraception in 1993 and found that they had similar effectiveness but that levonorgestrel alone was better-tolerated. in relation to this, the yuzpe regimen has largely been replaced as a method of emergency contraception by levonorgrestrel-only preparations. levonorgestrel-only emergency contraception was approved in the united states under the brand name "plan b" in 1999, and has also been marketed widely elsewhere throughout the world under other brand names such as "levonelle" and "norlevo" in addition to "postinor". in 2013, the food and drug administration approved "plan b one-step" for sale over-the-counter in the united states without a prescription or age restriction.
<p> as a type of emergency contraception, levonorgestrel is used after unprotected intercourse to reduce the risk of pregnancy. however, it can serve different hormonal purposes in its different methods of delivery. it is available for use in a variety of forms:
<p> the primary mechanism of action of levonorgestrel as a progestogen-only emergency contraceptive pill is, according to international federation of gynecology and obstetrics (figo), to prevent fertilization by inhibition of ovulation and thickening of cervical mucus. figo has stated that: "review of the evidence suggests that lng [levonorgestreol] ecps cannot prevent implantation of a fertilized egg. language on implantation should not be included in lng ecp product labeling." in november 2013, the european medicines agency (ema) approved a change to the label saying it cannot prevent implantation of a fertilized egg.
<p> the primary mechanism of action of progestogen-only emergency contraceptive pills is to prevent fertilization by inhibition of ovulation. the best available evidence is that they do not have any post-fertilization effects such as the prevention of implantation. the u.s. fda-approved labels and european ema-approved labels (except for hra pharma's "norlevo") levonorgestrel emergency contraceptive pills (based on labels for regular oral contraceptive pills) say they may cause endometrial changes that discourage implantation. daily use of regular oral contraceptive pills can alter the endometrium (although this has not been proven to interfere with implantation), but the isolated use of a levonorgestrel emergency contraceptive pill does not have time to alter the endometrium. in march 2011, the international federation of gynecology and obstetrics (figo) issued a statement that: "review of the evidence suggests that lng [levonorgestreol] ecps cannot prevent implantation of a fertilized egg. language on implantation should not be included in lng ecp product labeling." in june 2012, a "new york times" editorial called on the fda to remove from the label the unsupported suggestion that levonorgestrel emergency contraceptive pills inhibit implantation. in november 2013, the european medicines agency (ema) approved a change to the label for hra pharma's "norlevo" saying it cannot prevent implantation of a fertilized egg.
<p> levonorgestrel can be taken by mouth as a form of emergency birth control. the typical dosage is either 1.5 mg taken once or 0.75 mg taken 12-24 hours apart. the effectiveness of both methods is similar. the most widely used form of oral emergency contraception is the progestin-only pill, which contains a 1.5 mg dosage of levonorgestrel. levonorgestrel-only emergency contraceptive pills are reported to have an 89% effectiveness rate if taken within the recommended 72 hours after sex. the efficacy of the drug decreases by 50% for each 12-hour delay in taking the dose after the emergency contraceptive regimen has been started.
<p> development of a progesterone-containing intrauterine device (iud) for contraception began in the 1960s. incorporation of progesterone into iuds was initially studied to help reduce the risk of iud expulsion. however, while addition of progesterone to iuds showed no benefit on expulsion rates, it was unexpectedly found by antonio scommegna to induce endometrial atrophy. this led to the development and introduction of progestasert, a progesterone-containing product and the first progestogen-containing iud, in 1976. unfortunately, the product had various problems that limited its use. these included a short duration of efficacy of only one year, a high cost, a relatively high 2.9% failure rate, a lack of protection against ectopic pregnancy, and difficult and sometimes painful insertions that could necessitate use of a local anesthetic or analgesic. as a result of these issues, progestasert never became widely used, and was discontinued in 2001. it was used mostly in the united states and france while it was marketed.
<p> progestogen-only injectable contraceptives (poics) are a form of hormonal contraception and progestogen-only contraception that are administered by injection and provide long-lasting birth control. as opposed to combined injectable contraceptives, they contain only a progestogen without an estrogen, and include two progestin preparations: | Both. Pre-fertilisation: it can prevent the egg from being released or from being fertilised (by making it harder for sperm to enter the womb). Post-fertilisation: it's effectively an abortifacient, which works by preventing the fertilised egg from embedding in the wall of the uterus. The divergence is probably down to the fact that there is more than one morning-after drug.
engineering, how does i2c bus work? | <p> the heci bus allows the host operating system (os) to communicate directly with the management engine (me) integrated in the chipset. this bi-directional, variable data-rate bus enables the host and me to communicate system management information and events in a standards-compliant way, essentially replacing the system management bus (smbus). the bus consists of four wires: a request and grant pair along with a serial transmit and receive data pair.
<p> the bus is completely asynchronous, allowing a mixture of fast and slow devices. it allows the overlapping of arbitration (selection of the next "bus master") while the current bus master is still performing data transfers. the 18 address lines allow the addressing of a maximum of 256 kB (2^18 bytes). typically, the top 8 kB is reserved for the registers of the memory-mapped i/o devices used in the pdp-11 architecture.
<p> the itanium bus interfaces to the rest of the system via a chipset. enterprise server manufacturers differentiate their systems by designing and developing chipsets that interface the processor to memory, interconnections, and peripheral controllers. the chipset is the heart of the system-level architecture for each system design. development of a chipset costs tens of millions of dollars and represents a major commitment to the use of the itanium. ibm created a chipset in 2003, and intel in 2002, but neither of them developed chipsets to support newer technologies such as ddr2 or pci express.
<p> the external bus interface, usually shortened to ebi, is a computer bus for interfacing small peripheral devices like flash memory with the processor. it is used to expand the internal bus of the processor to enable connection with external memories or other peripherals. ebi can be used to share i/o pins controlling memory devices that are connected to two different memory controllers. use of ebi reduces the total number of system pins required causing the system cost to come down. ebi manufacturers include barco,
<p> an iscsi host bus adapter (more commonly, hba) implements a hardware initiator. a typical hba is packaged as a combination of a gigabit (or 10 gigabit) ethernet network interface controller, some kind of tcp/ip offload engine (toe) technology and a scsi bus adapter, which is how it appears to the operating system.
<p> the primary use for hypertransport is to replace the intel-defined front-side bus, which is different for every type of intel processor. for instance, a pentium cannot be plugged into a pci express bus directly, but must first go through an adapter to expand the system. the proprietary front-side bus must connect through adapters for the various standard buses, like agp or pci express. these are typically included in the respective controller functions, namely the "northbridge" and "southbridge".
<p> the lpc bus was introduced by intel in 1998 as a software-compatible substitute for the industry standard architecture (isa) bus. it resembles isa to software, although physically it is quite different. the isa bus has a 16-bit data bus and a 24-bit address bus that can be used for both 16-bit i/o port addresses and 24-bit memory addresses; both run at speeds up to 8.33 mhz. the lpc bus uses a heavily multiplexed four-bit-wide bus operating at four times the clock speed (33.3 mhz) to transfer addresses and data with similar performance. | I2C Manual The I2C-Bus Specification If you still have questions after reading these, come back and ask again. |
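Since the answer above only points at the manual and specification documents, here is a minimal sketch of what a single I2C register read looks like from user space on Linux. It assumes the i2c-dev kernel interface is available and the third-party smbus2 Python package is installed; the bus number, the 7-bit device address 0x48, and the register 0x00 are placeholder values for a hypothetical sensor, not anything taken from the linked documents.

```python
# Minimal sketch of one I2C transaction on Linux via the i2c-dev interface,
# using the smbus2 package.  Bus number, device address and register are
# placeholders -- substitute the values for your own part.
from smbus2 import SMBus

I2C_BUS = 1            # e.g. /dev/i2c-1 on many single-board computers
DEVICE_ADDR = 0x48     # hypothetical 7-bit slave address
REGISTER = 0x00        # hypothetical register to read

with SMBus(I2C_BUS) as bus:
    # On the wire this is: START, address + write bit, register byte,
    # repeated START, address + read bit, one data byte from the slave,
    # then STOP.  An ACK/NAK follows every byte, and both SDA and SCL are
    # open-drain lines pulled high by resistors.
    value = bus.read_byte_data(DEVICE_ADDR, REGISTER)
    print(f"device 0x{DEVICE_ADDR:02x}, register 0x{REGISTER:02x} -> 0x{value:02x}")
```

The comment describes the generic combined write-then-read transaction that most register-based I2C peripherals use; the full timing rules (ACK/NAK details, clock stretching, multi-master arbitration, 10-bit addressing) are what the referenced specification documents cover.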
do people have "brain prints"? people have unique fingerprints, irises, and dna, do we have unique brain structures as well? | <p> most scientists working on the relation between the human brain and neurologic or psychiatric diseases (or animal models of these diseases) use paxinos's maps and concepts of brain organisation. his human brain atlases are the most accurate available for identification of deep structures and are used in surgical theatres.
<p> the brain seems to be able to discriminate and adapt particularly well in certain contexts. for instance, human beings seem to have an enormous capacity for memorizing and recognizing faces. one of the key goals of computational neuroscience is to dissect how biological systems carry out these complex computations efficiently and potentially replicate these processes in building intelligent machines.
<p> cognitive scientists are very interested in finding out what brain structures are involved with mental imaging in order to provide consistent, localized, and more tangible evidence. it has been established that auditory imagery makes use of the right lobe since people with right lobe lesions tend to have difficulty generating auditory images. this is because auditory imaging requires the usage of the frontal and superior temporal right lobe as well as a lot of the right auditory association cortices. these portions of the brain are usually involved with interpreting the inflections of sounds (such as sad or angry sounds).
<p> however, mosso's manuscripts have remained largely unknown for more than a century, and therefore it was structural radiographic techniques that came to dominate the field of human brain imaging. unfortunately, because the brain is almost entirely composed of soft tissue that is not radio-opaque, it remains essentially invisible to ordinary or plain x-ray examination. this is also true of most brain abnormalities, though there are exceptions such as a calcified tumour (e.g. meningioma, craniopharyngioma, some types of glioma); whilst calcification in such normal structures as the pineal body, the choroid plexuses, or large brain arteries may indirectly give important clues to the presence of structural disease in the brain itself.
<p> "the brain" is a comprehensive compendium of every known criminal in london. it consists of an index card per criminal in alphabetical order, with the following information: their age, their date and place of birth, family history, schooling and service records; recognised methods of operation, known confederates, cell mates, bed mates and habitats; physical description, aliases, arrests, convictions and time served.
<p> kurzweil describes a series of thought experiments which suggest to him that the brain contains a hierarchy of pattern recognizers. based on this he introduces his pattern recognition theory of mind (prtm). he says the neocortex contains 300 million very general pattern recognition circuits and argues that they are responsible for most aspects of human thought. he also suggests that the brain is a "recursive probabilistic fractal" whose line of code is represented within the 30-100 million bytes of compressed code in the genome.
<p> there is considerable evidence that a person's cortex is essentially divided into two functional streams: an occipital-parietal-frontal pathway that processes "where" information and an occipital-temporal-frontal pathway that provides "what" information to the individual. | The short answer would be yes. The longer answer would be that the neurons in your brain form in a way unique to you; however, the adult human brain has around 100 billion neurons, so making a "print" of this would be extraordinarily difficult.
why don't i have to mow my lawn in the winter? | <p> maintaining a green lawn sometimes requires large amounts of water. this is not normally a problem in the temperate british isles, where the concept of the lawn originated, as natural rainfall is usually sufficient to maintain a lawn's health, although in times of drought hosepipe bans may be implemented by the water suppliers. the exportation of the lawn ideal to more arid regions of the world, however, such as the u.s. southwest and australia, has crimped already scarce water resources in such areas, requiring larger, more environmentally invasive water supply systems. grass typically goes dormant during cold, winter months, and during hot, dry summer months turns brown, thereby reducing its demand for water. most grasses typically recover quite well from a drought, but many property owners consider the brown "dead" appearance unacceptable, or are misled by it, and increase watering during the summer months.
<p> many us municipalities and homeowners' associations have rules which require lawns to be maintained to certain specifications, sanctioning those who allow the grass to grow too long. in communities with drought problems, watering of lawns may be restricted to certain times of day or days of the week.
<p> mowing a lawn can bring a person into contact with these hairs. one alternative is to adopt a grass mulching technique to reduce possible contact, and to speed up the biological breakdown of the irritant hairs.
<p> maintaining a rough lawn requires only occasional cutting with a suitable machine, or grazing by animals. maintaining a smooth and closely cut lawn, be it for aesthetic or practical reasons or because social pressure from neighbors and local municipal ordinances requires it, necessitates more organized and regular treatments. usually once a week is adequate for maintaining a lawn in most climates. however, in the hot and rainy seasons of regions contained in hardiness zones greater than 8, lawns may need to be maintained up to two times a week.
<p> a small amount of thatch may provide a beneficial insulating effect against fluctuations in temperature and moisture. however, excessive thatch can cause root problems and lawn mower difficulties. a dethatcher may be used to remove thatch from a lawn.
<p> lawn rollers are designed to even out or firm up the lawn surface, especially in climates where heaving causes the lawn to be lumpy. heaving may result when the ground freezes and thaws many times over winter. where this occurs, gardeners are advised to give the lawn a light rolling with a lawn roller in the spring. clay or wet soils should not be rolled as they become compacted.
<p> the government cut the grass everywhere to prevent disease. therefore, it was rare that one would own a lawn mower. the houses had silver stainless steel garbage cans with lids. 55-gallon drums were spaced out along the streets, labeled "dry trash". garbage was collected several times weekly by crews. | Because it's too cold for the grass to grow. Plants need a certain temperature to carry on chemical processes for growing. They also need more light than is typically given in the winter. Plenty of plants are green in below-freezing temperatures, and even under the snow, but that's because they have stopped growing and the chlorophyll is preserved somehow: pansies, foxglove, petunias. Anyway, my type of grass simply goes brown in the winter. It is no longer growing. It is actually in some type of hibernation.
how are electron spins read? | <p> the spin of the electron is an intrinsic angular momentum that is separate from the angular momentum due to its orbital motion. the magnitude of the projection of the electron's spin along an arbitrary axis is ħ/2, implying that the electron acts as a fermion by the spin-statistics theorem. like orbital angular momentum, the spin has an associated magnetic moment, the magnitude of which is expressed as
<p> the electron has an intrinsic angular momentum or spin of ħ/2. this property is usually stated by referring to the electron as a spin-1/2 particle. for such particles the spin magnitude is (√3/2)ħ, while the result of the measurement of a projection of the spin on any axis can only be ±ħ/2. in addition to spin, the electron has an intrinsic magnetic moment along its spin axis. it is approximately equal to one bohr magneton, $\mu_\mathrm{B} = \frac{e\hbar}{2m_\mathrm{e}}$, which is a physical constant equal to about 9.274 × 10⁻²⁴ j/t. the orientation of the spin with respect to the momentum of the electron defines the property of elementary particles known as helicity.
<p> an isolated electron has an angular momentum and a magnetic moment resulting from its spin. while an electron's spin is sometimes visualized as a literal rotation about an axis, it cannot be attributed to mass distributed identically to the charge. the above classical relation does not hold, giving the wrong result by a dimensionless factor called the electron "g"-factor, denoted g_e (or just g when there is no risk of confusion):
<p> in a real atom, the spin of a moving electron can interact with the electric field of the nucleus through relativistic effects, a phenomenon known as spin-orbit interaction. when one takes this coupling into account, the spin and the orbital angular momentum are no longer conserved, which can be pictured by the electron precessing. therefore, one has to replace the quantum numbers "l", "m" and the projection of the spin "m" by quantum numbers that represent the total angular momentum (including spin), "j" and "m", as well as the quantum number of parity.
<p> electron spin plays an important role in magnetism, with applications for instance in computer memories. the manipulation of "nuclear spin" by radiofrequency waves (nuclear magnetic resonance) is important in chemical spectroscopy and medical imaging.
<p> the terms "spin up" and "spin down" are relative to a chosen direction, conventionally the z direction. an electron may be in a superposition of spin up and spin down, which corresponds to the spin axis pointing in some other direction. the spin state may depend on location.
<p> by the postulates of quantum mechanics, an experiment designed to measure the electron spin on the x-, y-, or z-axis can only yield an eigenvalue of the corresponding spin operator (s_x, s_y or s_z) on that axis, i.e. +ħ/2 or −ħ/2. the quantum state of a particle (with respect to spin) can be represented by a two-component spinor: | Basically what you do is fire an electron through a magnetic field. Since the electron can be spin up or spin down, it is deflected either up or down, to produce two distinct patches on the detector.
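As a rough numerical companion to that deflection picture, the following sketch computes the outcome probabilities when a spin-1/2 state prepared along the z axis is measured along a tilted axis, using the standard Pauli matrices. The tilt angle and the prepared state are arbitrary choices for illustration, not values from the passages above.

```python
# Spin-1/2 measurement sketch: outcome probabilities when a spin prepared
# "up along z" is measured along an axis tilted by angle theta in the x-z plane.
# Purely illustrative; the angle and state are arbitrary choices.
import numpy as np

theta = np.deg2rad(60.0)                     # tilt of the measurement axis

# Pauli matrices and the spin operator along the tilted axis (units of hbar/2).
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
s_n = np.cos(theta) * sz + np.sin(theta) * sx

# State prepared as "spin up along z", written as a two-component spinor.
psi = np.array([1.0, 0.0], dtype=complex)

# Eigenvectors of s_n correspond to the two possible outcomes (+1 / -1);
# the Born rule gives each probability as |<eigenvector | psi>|^2.
eigvals, eigvecs = np.linalg.eigh(s_n)
for val, vec in zip(eigvals, eigvecs.T):
    prob = abs(np.vdot(vec, psi)) ** 2
    print(f"outcome {val:+.0f} (in units of hbar/2): probability {prob:.3f}")
```

For a 60-degree tilt the probabilities come out as roughly 0.75 and 0.25, matching the textbook cos²(θ/2) rule for spin-1/2 measurements.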
what happens to oxygen (gasses in general) that get lost into the vacuum of space? could (theoretically) we pump oxygen in space to create breathable pockets? | <p> oxygen generators on board the international space station produce oxygen from water using electrolysis; the hydrogen produced was previously discarded into space. as astronauts consume oxygen, carbon dioxide is produced, which must then be removed from the air and discarded as well. this approach required copious amounts of water to be regularly transported to the space station for oxygen generation in addition to that used for human consumption, hygiene, and other uses—a luxury that will not be available to future long-duration missions beyond low earth orbit.
<p> human physiology is adapted to living within the atmosphere of earth, and a certain amount of oxygen is required in the air we breathe. if the body does not get enough oxygen, then the astronaut is at risk of becoming unconscious and dying from hypoxia. in the vacuum of space, gas exchange in the lungs continues as normal but results in the removal of all gases, including oxygen, from the bloodstream. after 9 to 12 seconds, the deoxygenated blood reaches the brain, and it results in the loss of consciousness. exposure to vacuum for up to 30 seconds is unlikely to cause permanent physical damage. animal experiments show that rapid and complete recovery is normal for exposures shorter than 90 seconds, while longer full-body exposures are fatal and resuscitation has never been successful. there is only a limited amount of data available from human accidents, but it is consistent with animal data. limbs may be exposed for much longer if breathing is not impaired.
<p> bullet::::- air depleted of oxygen has also proven fatal. in the past, anesthesia machines have malfunctioned, delivering low-oxygen gas mixtures to patients. additionally, oxygen in a confined space can be consumed if carbon dioxide scrubbers are used without sufficient attention to supplementing the oxygen which has been consumed.
<p> a hypoxic, carbon dioxide free metabolically inert gas is provided for inhalation by confining the gas supply and the head in an impermeable bag which prevents contamination with oxygen from the surrounding air, minimising the amount of gas required. (the same effect could be reached by flooding any enclosed space with the gas, but much more gas would be needed, and this would be hazardous to a third party entering the space, an effect which is well known as a cause of industrial fatalities.)
<p> the board determined the oxygen tank failure was caused by an unlikely chain of events. tanks storing cryogens, such as liquid oxygen and liquid hydrogen, require either venting, extremely good insulation, or both, in order to avoid excessive pressure buildup due to vaporization of the tanks' contents. the service module oxygen tanks were so well insulated that they could safely contain supercritical hydrogen and oxygen for years. each oxygen tank held several hundred pounds of oxygen, which was used for breathable air and the production of electricity and water. the construction of the tanks made internal inspection impossible.
<p> oxygen deficiency gas monitors are used for employee and workforce safety. cryogenic substances such as liquid nitrogen (ln2), liquid helium (he), and liquid argon (ar) are inert and can displace oxygen (o) in a confined space if a leak is present. a rapid decrease of oxygen can provide a very dangerous environment for employees, who may not notice this problem before they suddenly lose consciousness. with this in mind, an oxygen gas monitor is important to have when cryogenics are present. laboratories, mri rooms, pharmaceutical, semiconductor, and cryogenic suppliers are typical users of oxygen monitors.
<p> to prevent ebullism, a pure oxygen (o2) atmosphere was used in early space flights to eliminate nitrogen in the blood. there are major fire hazards associated with using pure o2 as a breathing gas, which was central to the death of three astronauts in a fire during a ground test with apollo 1. nonetheless nasa continued to use a [nominally] pure oxygen atmosphere throughout the apollo program but switched to air for the follow-on space transport system "space shuttle". russian cosmonauts used pure oxygen before changing to a higher-pressure nitrox mixture, leading to incompatibility problems in 1975 on the apollo-soyuz test project. space suits are often pressurized to several psi lower than stations' capsules or shuttles and since they still use pure o2, an acclimation period is common in the airlock to remove nitrogen and other gases from the bloodstream. | Gravity. You need something to "anchor" that gas locally, and only gravity can do that. To have enough gravity, you need a big enough lump of something. That's basically a planet. TLDR: You need a planet to keep the gas from wandering off into space. EDIT: Otherwise the gas cloud just keeps expanding.
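A quick back-of-the-envelope check on that answer, assuming standard physical constants and an Earth-like mass and radius: a body holds onto a gas only when its escape velocity comfortably exceeds the typical thermal speed of the molecules. The factor of roughly six used below is a common rule of thumb for long-term retention against Jeans escape, not a figure from the passages above.

```python
# Rough comparison: thermal speed of O2 vs. escape velocity of an Earth-like body.
# Illustrative only; uses standard constants and a common ~6x retention heuristic.
import math

k_B = 1.380649e-23        # Boltzmann constant, J/K
G = 6.674e-11             # gravitational constant, m^3 kg^-1 s^-2
m_O2 = 32 * 1.6605e-27    # mass of one O2 molecule, kg
T = 300.0                 # gas temperature, K
M_earth = 5.972e24        # kg
R_earth = 6.371e6         # m

v_thermal = math.sqrt(3 * k_B * T / m_O2)          # rms molecular speed
v_escape = math.sqrt(2 * G * M_earth / R_earth)    # surface escape velocity

print(f"rms speed of O2 at {T:.0f} K : {v_thermal / 1e3:.2f} km/s")
print(f"escape velocity (Earth)     : {v_escape / 1e3:.2f} km/s")
print("retained long-term (rule of thumb: v_escape > ~6 * v_rms)?",
      v_escape > 6 * v_thermal)
```

Running it gives oxygen molecules moving at about 0.5 km/s against an 11 km/s escape velocity, which is why a planet keeps its atmosphere while a free-floating gas pocket, with essentially no gravity of its own, simply disperses.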