question (stringlengths 6–296) | context (stringlengths 1.9k–8.48k) | answer (stringlengths 0–9.92k) |
---|---|---|
why is it that magnets affected older computer monitors, whereas now they don't? | <p> unlike other display technologies, electronic paper does not use any power while displaying an image. crt monitors typically use more power than lcd monitors. they also contain significant amounts of lead. lcd monitors typically use a cold-cathode fluorescent bulb to provide light for the display. some newer displays use an array of light-emitting diodes (leds) in place of the fluorescent bulb, which reduces the amount of electricity used by the display. fluorescent back-lights also contain mercury, whereas led back-lights do not.
<p> modern crts are much less susceptible to burn-in than older models due to improvements in phosphor coatings, and because modern computer images are generally lower contrast than the stark green- or white-on-black text and graphics of earlier machines. lcd computer monitors, including the display panels used in laptop computers, are not susceptible to burn-in because the image is not directly produced by phosphors (although they can suffer from a less extreme and usually non-permanent form of image persistence).
<p> since their appearance, anti-magnetic watches have been favored by people who deal with high magnetic fields. they are widespread among electronic engineers and in other professions where strong magnetic fields are present.
<p> new lighting systems have not used magnetic ballasts since the turn of the century; however, some older installations still remain. fluorescent lamps with magnetic ballasts flicker at a normally unnoticeable frequency of 50 or 60 hz. this flickering can cause problems for some individuals with light sensitivity and is associated with headaches and eyestrain. such lamps are listed as problematic for some individuals with autism, epilepsy, lupus, chronic fatigue syndrome, lyme disease, and vertigo. newer fluorescent lights without magnetic ballasts have essentially eliminated flicker.
<p> anti-magnetic (non-magnetic) watches are those that are able to run with minimal deviation when exposed to a certain level of magnetic field. the international organization for standardization issued a standard for magnetic-resistant watches, which many countries have adopted.
<p> the twisted nematic effect ("tn-effect") was a main technology breakthrough that made lcds practical. unlike earlier displays, tn-cells did not require a current to flow for operation and used low operating voltages suitable for use with batteries. the introduction of tn-effect displays led to their rapid expansion in the display field, quickly pushing out other common technologies like monolithic leds and crts for most electronics. by the 1990s, tn-effect lcds were largely universal in portable electronics. in the meantime, many applications of lcds are using alternatives to the tn-effect such as in-plane switching (ips) or vertical alignment (va).
<p> magnetos have advantages of simplicity and reliability, but are limited in size owing to the magnetic flux available from their permanent magnets. the fixed excitation of a magneto made it difficult to control its terminal voltage or reactive power production when operating on a synchronized grid. this restricted their use for high-power applications. power generation magnetos were limited to narrow fields, such as powering arc lamps or lighthouses, where their particular features of output stability or simple reliability were most valued. | Old monitors, specifically CRT monitors, operated by shooting electrons at a layer of phosphor. The electrons striking the phosphor would cause it to light up. The image is created by an electron beam that is continuously moving across the phosphor layer, being changed in intensity depending on the image. For colour monitors, there would be multiple (typically three) beams and a raster containing three different phosphors that would each emit a different colour. Moving the electron beam is done using magnetic fields. Magnetic fields alter the direction of moving charged particles. Since electrons are charged and in motion in this case, a magnetic field can be used to steer them in the correct direction. Holding another magnet close to the monitor interferes with this process, which distorts the image. Modern monitors (LCD) work in a completely different way. An LCD has a grid of liquid crystals that can switch between opaque and transparent depending on the electric charge that is applied. This is used to control which pixels are on and which are off. Next you simply place a light source behind it, the so-called backlight, to generate a bright image. This process is not affected by magnetic fields (unless they're very strong, much stronger than what you need to affect a CRT monitor). |
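As an illustrative aside to the answer above (not part of the original Q&A): a rough, non-relativistic estimate of how strongly a magnetic field bends a CRT's electron beam. The 25 kV accelerating voltage and the field strengths are assumed, typical-order values.

```python
import math

e = 1.602e-19   # electron charge, C
m = 9.109e-31   # electron mass, kg
V = 25e3        # assumed CRT accelerating voltage, volts

# Beam speed from the accelerating voltage (ignoring the ~10% relativistic correction).
v = math.sqrt(2 * e * V / m)

# Radius of circular motion in a uniform field: r = m*v / (e*B).
for B, label in [(50e-6, "Earth's field (~50 uT)"), (0.01, "small hand magnet (~0.01 T, assumed)")]:
    r = m * v / (e * B)
    print(f"{label}: bending radius ~ {r:.2f} m")

# Earth's field alone gives a radius of ~10 m, enough to shift the beam by a few
# millimetres over a ~0.4 m tube (which is why CRTs include degaussing coils); a
# hand magnet bends the beam ~200x more sharply, visibly warping the picture.
```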
what exactly causes this kind of rainbow coloration on stainless steel pots | <p> like many lake pigments, the exact colors produced depend on the ph of the mixture and the fixative used. aluminum mordants used with brazilin produce the standard red colors, while the use of a tin mordant, in the form of sncl₂ or sncl₄ added to the extract, is capable of yielding a pink color.
<p> tarbuttite is white, yellow, red, green, brown, or colorless; in transmitted light it is colorless. traces of copper cause green coloring, while iron hydroxides cause the other colors. colorless crystals tend to be transparent while colored specimens have varying degrees of transparency.
<p> rainbow quartz have been treated with a combination of titanium and gold. titanium molecules are bonded to the quartz by the natural electrostatic charge of the crystal in a process known as magnetron ionization. the brilliant color of flame aura is the result of optical interference effects produced by layers of titanium. since only electricity is used to deposit the titanium layers and create these colors, very little heat is involved and the integrity of the crystal is maintained. the crystal does not become brittle or prone to breakage as with other treatments.
<p> the lake superior agate is noted for its rich red, orange, and yellow coloring. this color scheme is caused by the oxidation of iron. iron leached from rocks provided the pigment that gives the gemstone its beautiful array of color. the concentration of iron and the amount of oxidation determine the color within or between an agate's bands. there can also be white, grey, black and tan strips of color as well.
<p> the color and flavor of the flesh depends on the diet and freshness of the trout. farmed trout and some populations of wild trout, especially anadromous steelhead, have reddish or orange flesh as a result of high astaxanthin levels in their diets. astaxanthin is a powerful antioxidant that may be from a natural source or a synthetic trout feed. rainbow trout raised to have pinker flesh from a diet high in astaxanthin are sometimes sold in the u.s. with labeling calling them "steelhead". as wild steelhead are in decline in some parts of their range, farmed rainbow are viewed as a preferred alternative. in chile and norway, rainbow trout farmed in saltwater sea cages are sold labeled as steelhead.
<p> a monochrome or red rainbow is an optical and meteorological phenomenon and a rare variation of the more commonly seen multicolored rainbow. its formation process is identical to that of a normal rainbow (namely the reflection/refraction of light in water droplets), the difference being that a monochrome rainbow requires the sun to be close to the horizon; i.e., near sunrise or sunset. the low angle of the sun results in a longer distance for its light to travel through the atmosphere, causing shorter wavelengths of light, such as blue, green and yellow, to be scattered and leaving primarily red.
<p> the petrified wood of this tree is frequently referred to as "rainbow wood" because of the large variety of colors some specimens exhibit. the red and yellow are produced by large particulate forms of iron oxide, the yellow being limonite and the red being hematite. the purple hue comes from extremely fine spherules of hematite distributed throughout the quartz matrix, and not from manganese, as has sometimes been suggested. | Heating forms a thin layer of oxide over the steel, which causes the color effect by means of an optical process known as thin-film interference: basically the same effect that you can see in soap bubbles and oil puddles. |
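A minimal sketch of the thin-film interference condition behind the answer above; the refractive index and oxide thickness are assumed, illustrative values, and the extra phase shift at the oxide/metal interface is ignored.

```latex
% Near-normal-incidence condition for constructive interference in a film of
% refractive index n and thickness t, assuming a half-wave phase shift at only
% one interface (m = 0, 1, 2, ...):
2\, n\, t \cos\theta_t = \left(m + \tfrac{1}{2}\right)\lambda
% Worked example with assumed values n \approx 2.4, t \approx 60\,\mathrm{nm}, m = 0:
\lambda = \frac{2 \times 2.4 \times 60\,\mathrm{nm}}{1/2} \approx 576\,\mathrm{nm}
% A slightly thicker or thinner oxide shifts this wavelength, which is why heat
% tint on steel grades through straw, gold, purple and blue as temperature rises.
```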
why are carbon nanotubes (cnts) so much stronger than steel? | <p> carbon nanotubes (cnts) have attracted much attention because of their materials properties, including a high elastic modulus (~1–2 tpa), a high tensile strength (~13–53 gpa), and a high conductivity (metallic tubes can theoretically carry an electric current density of 4×10⁹ a/cm², which is ~1000 times higher than for other metals such as copper). cnt thin films have been used as transparent electrodes in tcfs because of these good electronic properties.
<p> carbon nanotubes are the strongest and stiffest materials yet discovered in terms of tensile strength and elastic modulus. this strength results from the covalent sp² bonds formed between the individual carbon atoms. in 2000, a multiwalled carbon nanotube was tested to have a tensile strength of 63 gigapascals (gpa). (for illustration, this translates into the ability to endure tension of a weight equivalent to 6,422 kilograms-force on a cable with cross-section of 1 square millimetre). further studies, such as one conducted in 2008, revealed that individual cnt shells have strengths of up to ≈100 gpa, which is in agreement with quantum/atomistic models. because carbon nanotubes have a low density for a solid of 1.3 to 1.4 g/cm³, its specific strength of up to 48,000 kn·m·kg⁻¹ is the best of known materials, compared to high-carbon steel's 154 kn·m·kg⁻¹.
<p> carbon nanotubes are the strongest and stiffest materials yet discovered in terms of tensile strength and elastic modulus respectively. this strength results from the covalent sp² bonds formed between the individual carbon atoms. in 2000, a multi-walled carbon nanotube was tested to have a tensile strength of 63 gigapascals (9,100,000 psi). (for illustration, this translates into the ability to endure tension of a weight equivalent to 6,422 kilograms-force (62,980 n; 14,160 lbf) on a cable with cross-section of 1 square millimetre (0.0016 sq in).) further studies, such as one conducted in 2008, revealed that individual cnt shells have strengths of up to ≈100 gigapascals (15,000,000 psi), which is in agreement with quantum/atomistic models. since carbon nanotubes have a low density for a solid of 1.3 to 1.4 g/cm³, its specific strength of up to 48,000 kn·m·kg⁻¹ is the best of known materials, compared to high-carbon steel's 154 kn·m·kg⁻¹.
<p> because of the strong covalent carbon–carbon bonding in the sp² configuration, carbon nanotubes are chemically inert and are able to transport large amounts of electric current. in theory, carbon nanotubes are also able to conduct heat nearly as well as diamond or sapphire, and because of their miniaturized dimensions, the cntfet should switch reliably using much less power than a silicon-based device.
<p> carbon nanotubes are the strongest and stiffest materials yet discovered in terms of tensile strength and elastic modulus respectively. this strength results from the covalent sp² bonds formed between the individual carbon atoms. a multi-walled carbon nanotube was tested to have a tensile strength of 63 gigapascals (gpa). further studies, conducted in 2008, revealed that individual cnt shells have strengths of up to ~100 gpa, which is in good agreement with quantum/atomistic models. since carbon nanotubes have a low density for a solid of 1.3 to 1.4 g/cm³, its specific strength of up to 48,000 kn·m·kg⁻¹ is the best of known materials, compared to high-carbon steel's 154 kn·m·kg⁻¹.
<p> although the strength of individual cnt shells is extremely high, weak shear interactions between adjacent shells and tubes lead to significant reduction in the effective strength of multiwalled carbon nanotubes and carbon nanotube bundles down to only a few gpa. this limitation has been recently addressed by applying high-energy electron irradiation, which crosslinks inner shells and tubes, and effectively increases the strength of these materials to ≈60 gpa for multiwalled carbon nanotubes and ≈17 gpa for double-walled carbon nanotube bundles. cnts are not nearly as strong under compression. because of their hollow structure and high aspect ratio, they tend to undergo buckling when placed under compressive, torsional, or bending stress.
<p> carbon nanotubes (cnts) are another type of nanomaterial which has attracted a lot of interest for their potential as building blocks for bottom-up applications. they have excellent mechanical, electrical, and thermal properties and can be fabricated to a wide range of nanoscale diameters, making them attractive and appropriate for the development of electronic and mechanical devices. they demonstrate metal-like properties and are able to act as remarkable conductors. | When we talk about material strength, we are usually referring to the stress at fracture. When you apply a force to a material, it stretches. The higher the stress it can take before breaking, the stronger the material. To consider the strength of a carbon nanotube, we need to figure out the amount of force required to break all of the bonds around the circumference of the tube. A bit of physics shows that the resulting stress at fracture is proportional to the binding energy of a single carbon-carbon bond divided by the carbon-carbon bond length squared and the nanotube diameter. The value of the numerator is far larger in this situation than the denominator. To put some numbers on it, the stress at fracture for graphene is about 200 GPa, while the value for good steel is around 2.4 GPa. Their Young's moduli aren't as far apart (1000 GPa vs. 200 GPa), but that is because the nanotubes can sustain far more strain before they fail. |
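A hedged back-of-envelope check of the bond-counting argument in the answer above, treating the tube wall as a single graphene layer rather than dividing by the tube diameter; the bond energy, bond length and wall thickness below are assumed, textbook-order values.

```python
AVOGADRO = 6.022e23
E_bond = 480e3 / AVOGADRO   # assumed ~480 kJ/mol per C-C bond in graphene, converted to J
a = 1.42e-10                # C-C bond length, m
t = 3.4e-10                 # effective wall thickness (graphite interlayer spacing), m

f_bond = E_bond / a         # characteristic force needed to break one bond, N
sigma = f_bond / (a * t)    # spread one bond over an (a x t) patch of wall, Pa
print(f"estimated fracture stress ~ {sigma / 1e9:.0f} GPa")   # comes out on the order of 100 GPa

# Specific-strength ratio quoted in the context paragraphs above:
cnt, steel = 48_000, 154    # kN*m/kg
print(f"CNT vs high-carbon steel specific strength: ~{cnt / steel:.0f}x")
```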
why did eris, the dwarf planet that caused pluto to get demoted, only get discovered in 2005? | <p> with the decision of the international astronomical union to reclassify pluto as a dwarf planet in 2006, the flyby of neptune by "voyager 2" in 1989 became the point when every known planet in the solar system had been visited at least once by a space probe.
<p> clyde tombaugh's discovery of pluto in 1930 appeared to validate lowell's hypothesis, and pluto was officially named the ninth planet. in 1978, pluto was conclusively determined to be too small for its gravity to affect the giant planets, resulting in a brief search for a tenth planet. the search was largely abandoned in the early 1990s, when a study of measurements made by the "voyager 2" spacecraft found that the irregularities observed in uranus's orbit were due to a slight overestimation of neptune's mass. after 1992, the discovery of numerous small icy objects with similar or even wider orbits than pluto led to a debate over whether pluto should remain a planet, or whether it and its neighbours should, like the asteroids, be given their own separate classification. although a number of the larger members of this group were initially described as planets, in 2006 the international astronomical union (iau) reclassified pluto and its largest neighbours as dwarf planets, leaving neptune the farthest known planet in the solar system.
<p> when pluto was demoted to a dwarf planet, mnemonics could no longer include the final "p". the first notable suggestion came from kyle sullivan of lumberton, mississippi, usa, whose mnemonic was published in the jan. 2007 issue of astronomy magazine: "my violent evil monster just scared us nuts". in august 2006, for the eight planets recognized under the new definition, phyllis lugger, professor of astronomy at indiana university suggested the following modification to the common mnemonic for the nine planets: "my very educated mother just served us nachos". she proposed this mnemonic to owen gingerich, chair of the international astronomical union (iau) planet definition committee and published the mnemonic in the american astronomical society committee on the status of women in astronomy bulletin board on august 25, 2006. it also appeared in indiana university's iu news room star trak on august 30, 2006. this mnemonic is used by the iau on their website for the public. others angry at the iau's decision to "demote" pluto composed sarcastic mnemonics in protest. schott's miscellany by ben schott included the mnemonic, "many very educated men justify stealing unique ninth". mike brown, who discovered eris, mentioned hearing "many very educated men just screwed up nature". one particular 9 planet mnemonic, "my very easy memory jingle seems useful naming planets", was easily changed once the demotion occurred, becoming the 8 planet mnemonic, "my very easy memory jingle seems useless now". slightly risque versions include, "mary's 'virgin' explanation made joseph suspect upstairs neighbor" and perhaps simplest of all: "my very easy method: just sun".
<p> in 1850, he "lost" a star that he had been observing, which lt. matthew maury, the superintendent of the observatory, claimed was evidence for a 9th planet (pluto had not yet been discovered). in 1878, however, chf peters, director of the hamilton college observatory in new york, showed that the star had not in fact vanished, and that the previous results had been due to human error.
<p> bullet::::- 2006 – as a result of the discovery of eris, a kuiper belt object larger than pluto, pluto is demoted to a "dwarf planet" after being considered a planet for 76 years, redefining the solar system to have eight planets and three dwarf planets.
<p> only a few months before the reclassification of pluto from a planet to a dwarf planet, with the debate going on about the issue, she said in an interview, "at my age, i've been largely indifferent [to the debate]; though i suppose i would prefer it to remain a planet."
<p> due to merest chance, humason missed discovering pluto. eleven years before clyde tombaugh, humason took a set of four photographs in which the image of pluto appeared. there is persistent speculation that he missed discovering the dwarf planet because it fell on a defect in the photographic plate. this is unlikely, however, given that it appeared in four separate photographs over three different nights. | I don't know for a fact but I'd guess it would be due to its highly eccentric orbit. Most of the time it's also further from the sun than any other planet in the solar system, and not on the same plane as the rest of the planets, so you just wouldn't know where to look. Anything smaller than a major planet is really hard to see unless you look directly at it (i.e. already know exactly where it is). |
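As a hedged illustration of why a distant dwarf planet is easy to miss: reflected sunlight from an outer solar-system body falls off roughly as the fourth power of its distance (1/d² on the way out from the Sun and roughly 1/d² again on the way back to Earth). The distances below are approximate.

```python
d_pluto = 39.0   # Pluto's average distance from the Sun, AU (approximate)
d_eris = 97.0    # Eris's distance around its 2005 discovery, AU (approximate)

dimming = (d_eris / d_pluto) ** 4
print(f"Eris appears roughly {dimming:.0f}x fainter than Pluto would at the same size and albedo")
# ~38x fainter from distance alone, on top of its orbit being tilted well out of
# the ecliptic plane where planet searches traditionally concentrated.
```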
do dogs, on average, really dislike cats more than other animals? | <p> dogs and cats have a range of interactions. the natural instincts of each species lead towards antagonistic interactions, though individual animals can have non-aggressive relationships with each other, particularly under conditions where humans have socialized non-aggressive behaviors.
<p> the cultural assumption that cats are distant from people and lack affection compared to dogs has complications. animals have individual characteristics based on their environment, particularly their past interactions with people.
<p> cats are considered as obligate carnivores and dogs are known as omnivores. cats are thus unable to down-regulate the amount of enzymes they are using based on the amount of protein in the body. regardless if they were on a high- or low-protein diet, they would be using the same amount of the enzymes to break down protein. in contrast, dogs are able to regulate the amount of nitrogen catabolic enzymes based on if they are consuming a high- or low-protein diet. cats also use much more protein for body maintenance than for growth, which is the opposite to dogs, meaning that cats have a higher protein turnover that consequentially increased their protein requirements.
<p> the reason that cats are seen as "yōkai" in japanese mythology is attributed to many of the characteristics that they possess: for example, the way the irises of their eyes change shape depending on the time of day, the way their fur seems to cause sparks due to static electricity when they are petted (especially in winter), the way they sometimes lick blood, the way they can walk without making a sound, their wild nature that remains despite the gentleness they can show at times, the way they are difficult to control (unlike dogs), the sharpness of their claws and teeth, their nocturnal habits, and their speed and agility.
<p> if appropriately socialized, cats and dogs may have relationships that are not antagonistic, and dogs raised with cats may prefer the presence of cats to other dogs. even cats and dogs that have got along together in the same household may revert to aggressive reactions due to external stimuli, illness, or play that escalates.
<p> the comedy films "cats & dogs," released in 2001, and its sequel "," released in 2010, both projected and amplified the above-mentioned antipathy between dogs and cats into an all-out war between the two species wherein cats are shown as being out-and-out enemies of humans, whereas dogs are shown as being more sympathetic to humans.
<p> the signals and behaviors that cats and dogs use to communicate are different and can lead to signals of aggression, fear, dominance, friendship or territoriality being misinterpreted by the other species. dogs have a natural instinct to chase smaller animals that flee, an instinct common among cats. most cats flee from a dog, while others take actions such as hissing, arching their backs and swiping at the dog. after being scratched by a cat, some dogs can become fearful of cats. | Dogs have a tendency to chase anything, including other dogs, tennis balls, squirrels, etc... Cats tend to be solitary animals that react with their claws when aggravated, say, by an exuberant puppy. So, if you have a cat that was chased by dogs, it will not tolerate dogs, and if a dog has been badly scratched by a cat, it will be hostile to cats. Now, if you have a cat and dog that have not had negative experiences with the other, there won't be hostility. |
question about the signals we are sending into space | <p> any physical quantity that exhibits variation in space or time can be a signal used, among other possibilities, to share messages between observers. according to the "ieee transactions on signal processing", a signal can be audio, video, speech, image, sonar and radar-related and so on. in a later effort of redefining a signal, anything that is only a function of space, such as an image, is excluded from the category of signals. also, it is stated that a signal may or may not contain any information.
<p> carrying on from rocket research, radio telemetry was used routinely as space exploration got underway. spacecraft are in a place where a physical connection is not possible, leaving radio or other electromagnetic waves (such as infrared lasers) as the only viable option for telemetry. during manned space missions it is used to monitor not only parameters of the vehicle, but also the health and life support of the astronauts. during the cold war telemetry found uses in espionage. us intelligence found that they could monitor the telemetry from soviet missile tests by building a telemeter of their own to intercept the radio signals and hence learn a great deal about soviet capabilities.
<p> however, during the mission people discover a mysterious alien signal from outer space, which brings great danger to our own planet. an expedition is sent to the source of the signal to establish its nature.
<p> interplanetary-radio communication system not only communicate with spacecraft, but are also used to determine their position. radar can track targets near the earth, but spacecraft in deep space must have a working transponder on board to echo a radio signal back. orientation information can be obtained using star trackers.
<p> in radio communication systems, information is carried across space using radio waves. at the sending end, the information to be sent, in the form of a time-varying electrical signal, is applied to a radio transmitter. the information signal can be an audio signal representing sound from a microphone, a video signal representing moving images from a video camera, or a digital signal representing data from a computer. in the transmitter, an electronic oscillator generates an alternating current oscillating at a radio frequency, called the "carrier wave" because it serves to "carry" the information through the air. the information signal is used to modulate the carrier, altering some aspect of it, "piggybacking" the information on the carrier. the modulated carrier is amplified and applied to an antenna. the oscillating current pushes the electrons in the antenna back and forth, creating oscillating electric and magnetic fields, which radiate the energy away from the antenna as radio waves. the radio waves carry the information to the receiver location.
<p> in the year 2013, radio waves from outer space are suddenly received by scientists, proving space aliens to people. in the year 2015, the earth defense force, which is a unified multinational army organization sponsored by every country, is founded in case the aliens prove to be hostile.
<p> radio signals from the spacecraft are received by two widely separated deep-space ground stations on earth and the difference in the times of signal arrival is precisely measured (and used to calculate a bearing). this is corrected using information about the current delays due to earth's atmosphere, obtained by simultaneously tracking (from each ground location) radio signals from a quasar (within 10 degrees of the same direction). | They're really weak. Electromagnetic waves lose strength with the square of the distance they travel. The signal strength degrades so quickly that it's swamped by background noise within the first few light-years away from Earth. Also, with modern broadcasting being digital and encrypted, it's even less likely that potential listeners would notice. |
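To put a rough number on "really weak" (a sketch with assumed values, not figures from the source): an isotropic transmitter's flux falls off as P / (4πr²).

```python
import math

P = 1e6                 # assumed 1 MW isotropic transmitter power, W
r = 4.2 * 9.461e15      # ~4.2 light-years (roughly the nearest star) in metres

flux = P / (4 * math.pi * r**2)
print(f"flux at ~4.2 light-years: {flux:.1e} W/m^2")   # ~5e-29 W/m^2

# Even collected by a 100 m dish (~7,850 m^2 of area) that is only ~4e-25 W,
# which has to compete with receiver and galactic background noise; real
# TV/radio stations also radiate far less power toward any particular star.
```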
since it's possible to build antibodies through small levels of exposure, can humans become "immune" to salmonella from eating raw chicken? | <p> catching "wild" chickenpox as a child has been thought to commonly result in lifelong immunity. indeed, parents have deliberately ensured this in the past with "pox parties". historically, exposure of adults to contagious children has boosted their immunity, reducing the risk of shingles. the cdc and corresponding national organisations are carefully observing the failure rate which may be high compared with other modern vaccines—large outbreaks of chickenpox having occurred at schools which required their children to be vaccinated.
<p> vaccines are less effective among high-risk patients, as well as being more dangerous because they contain attenuated live virus. in a study performed on children with an impaired immune system, 30% had lost the antibody after five years, and 8% had already caught wild chickenpox in that five-year period.
<p> measles and chicken pox are very dangerous and potentially fatal for people on methylprednisolone therapy. exposure to these infections is especially risky for people who are not immune to them. exposures like these should be reported to a physician immediately, and may be treated with prophylactic immunoglobulin. also, live, attenuated vaccines can be bad for people taking immunosuppressive doses of methylprednisolone. the exception to this rule is patients receiving complete corticosteroid replacement therapy, e.g., for addison's disease, who may follow standard immunization protocols.
<p> because chickenpox is usually more severe in adults than it is in children, some parents deliberately expose their children to the virus, for example by taking them to "chickenpox parties". doctors counter that children are safer getting the vaccine, which is a weakened form of the virus, rather than getting the disease, which can be fatal. repeated exposure to chickenpox may protect against zoster.
<p> salmonella is mostly associated with under-cooked chicken meat. people who have weak immune systems, such as the elderly, young children, and those with various medical conditions, are most at risk. proper sanitation and cooking practices lessen the threat of contracting salmonellosis.
<p> the use may cause resistance in "salmonella" present in the intestinal tract of the target animal. resistant "salmonella" may also contaminate the carcass at slaughter and transfer to humans when used as food. when humans are infected and treated with a fourth-generation cephalosporin, effectiveness may be compromised.
<p> bullet::::- a new study by researchers at stanford university indicates the genetic engineering method known as crispr may trigger an immune response in humans, thus rendering it potentially ineffective in them. | Well... yes and no. 1) Raw chicken is a common breeding ground for Salmonella; however, without testing or cross-contaminating each piece, not all raw chicken is guaranteed to carry it. 2) There are several groups of Salmonella with different proteins on their surfaces for antibodies to target, i.e., it would take a while and a lot of infections to create antibodies for each group. 3) There are several THOUSAND serotypes of Salmonella with varying proteins on their surfaces... so this lowers your chances even MORE of acquiring antibodies for every variation of Salmonella. But don't let me stop you, just stay near a toilet during your experiment and clear your schedule for the month. |
what causes the atmospheric phenomenon known as glory? | <p> orographic precipitation, also known as relief precipitation, is precipitation generated by a forced upward movement of air upon encountering a physiographic upland (see anabatic wind). this lifting can be caused by two mechanisms:
<p> a glory is an optical phenomenon, resembling an iconic saint's halo around the shadow of the observer's head, caused by sunlight or (more rarely) moonlight interacting with the tiny water droplets that compose mist or clouds. the glory consists of one or more concentric, successively dimmer rings, each of which is red on the outside and bluish towards the centre. due to its appearance, the phenomenon is sometimes mistaken for a circular rainbow, but the latter has a much larger diameter and is caused by different physical processes.
<p> the opposition surge (sometimes known as the opposition effect, opposition spike or seeliger effect) is the brightening of a rough surface, or an object with many particles, when illuminated from directly behind the observer. the term is most widely used in astronomy, where generally it refers to the sudden noticeable increase in the brightness of a celestial body such as a planet, moon, or comet as its phase angle of observation approaches zero. it is so named because the reflected light from the moon and mars appear significantly brighter than predicted by simple lambertian reflectance when at astronomical opposition. two physical mechanisms have been proposed for this observational phenomenon: shadow hiding and coherent backscatter.
<p> sprites are large-scale electrical discharges which occur high above a thunderstorm cloud, or cumulonimbus, giving rise to a quite varied range of visual shapes. they are triggered by the discharges of positive lightning between the thundercloud and the ground. the phenomena were named after the mischievous sprite, e.g., shakespeare's ariel or puck; the name is also an acronym for stratospheric/mesospheric perturbations resulting from intense thunderstorm electrification. they normally are colored reddish-orange or greenish-blue, with hanging tendrils below and arcing branches above. they can also be preceded by a reddish halo. they often occur in clusters, lying 50 to 90 km above the earth's surface. sprites have been witnessed thousands of times. sprites have been held responsible for otherwise unexplained accidents involving high altitude vehicular operations above thunderstorms.
<p> because the propagation of the wave is fundamentally caused by an imbalance of the forces acting on the air (which is often thought of in terms of air parcels when considering wave motion), the types of waves and their propagation characteristics vary latitudinally, principally because the coriolis effect on horizontal flow is maximal at the poles and zero at the equator.
<p> positive lightning has also been shown to trigger the occurrence of upward lightning flashes from the tops of tall structures and is largely responsible for the initiation of sprites several tens of kilometers above ground level. positive lightning tends to occur more frequently in winter storms, as with thundersnow, during intense tornadoes and in the dissipation stage of a thunderstorm. huge quantities of extremely low frequency (elf) and very low frequency (vlf) radio waves are also generated.
<p> initially, the centre for earth science studies (cess) stated that the likely cause of the red rain was an exploding meteor, which had dispersed about 1,000 kg (one ton) of material. a few days later, following a basic light microscopy evaluation, the cess retracted this as they noticed the particles resembled spores, and because debris from a meteor would not have continued to fall from the stratosphere onto the same area while unaffected by wind. a sample was, therefore, handed over to the tropical botanical garden and research institute (tbgri) for microbiological studies, where the spores were allowed to grow in a medium suitable for growth of algae and fungi. the inoculated petri dishes and conical flasks were incubated for three to seven days and the cultures were observed under a microscope. | A glory is seen when you are looking down at a cloud or fog with the sun directly behind you. It's possible to see one from a plane, a mountaintop, or even a tall building. If you see a shadow in the center of the glory, it's known as a Brocken spectre. The glory is not a rainbow (it's caused by a different phenomenon), and the spectre is directly linked: the shadow appears because you're between the sun and the clouds. Rainbows are caused by a prism effect, but glories seem to be caused by light passing *near* tiny water droplets through a process known as wave tunneling. It's described in more detail here. I will defer to a physics or meteorology panelist for a more complete explanation, because I know about this from my background in environmental science, but I cannot pretend to have a deep understanding of the physics behind it. Another non-rainbow daytime atmospheric phenomenon I like is the sundog, which is formed by sunlight passing through ice crystals. |
where does the oceanic crust created at the mid-antlantic ridge subduct? | <p> this process is operative beneath and behind the inner walls of oceanic trenches (subduction zone) where slices of oceanic crust and mantle are ripped from the upper part of the descending plate and wedged and packed in high pressure assemblages against the leading edge of the other plate.
<p> the other process proposed to contribute to the formation of new oceanic crust at mid-ocean ridges is the "mantle conveyor" (see image). however, there have been some studies which have shown that the upper mantle (asthenosphere) is too plastic (flexible) to generate enough friction to pull the tectonic plate along. moreover, unlike in the image above, mantle upwelling that causes magma to form beneath the ocean ridges appears to involve only its upper , as deduced from seismic tomography and from studies of the seismic discontinuity at about . the relatively shallow depths from which the upwelling mantle rises below ridges are more consistent with the "slab-pull" process. on the other hand, some of the world's largest tectonic plates such as the north american plate are in motion, yet are nowhere being subducted.
<p> the subducting pacific plate was old, cold and dense, easily sinking into the mantle at a steep angle. the hinge zone of the plate also migrated oceanwards over time. so the trench retreated oceanwards, and the old trench and ocean floor become part of the continental plate. volcanoes formed inwards from the trench. the part of the oceanic plate attached to the continent was compressed, the suboceanic crust was severely shortened and thickened as well, giving rise to a duplex structure. this happened at the end of the ordovician period and in the early silurian. the sediments were heavily folded and overthrusted resulting in severe crustal shortening. in the canberra area the sediments were raised above sea level and eroded. the land to the west (around wagga wagga) was raised higher.
<p> the thinning of the overriding plate at the back-arc (i.e., back-arc rifting) can lead to the formation of new oceanic crust (i.e., back-arc spreading). as the lithosphere stretches, the asthenospheric mantle below rises to shallow depths and partially melts due to adiabatic decompression melting. as this melt nears the surface spreading begins.
<p> the ridge is buoyant, resulting in flat slab subduction of the nazca plate underneath peru. buoyancy is related to crustal age, and the buoyancy effect can be seen in oceanic crust aged from 30-40 ma. the nazca plate is dated to 45 ma where it subducts into the peru-chile trench. the extreme thickness of the buoyant ridge is responsible for the flat slab subduction of the older underlying plate. modeling has shown that this type of subduction is only concurrent with submarine ridges, and accounts for approximately 10% of convergent boundaries. the most recent estimate of the subduction angle for the nazca plate is 20° to a depth of 24 km at 110 km inland. at 80 km depth, approximately 220 km inland, the plate shifts to a horizontal orientation, and continues to travel horizontally for up to 700 km inland, before resuming subduction into the asthenosphere.
<p> off the western coast of south america, the oceanic nazca plate subducts beneath the south america plate in the peru-chile trench. volcanism associated with subduction in the region has been ongoing since the jurassic. dehydration of the downgoing slab causes melts to form in the abovelying asthenosphere which drive the activity in the volcanic arc.
<p> the trench is a result of a convergent boundary, where the eastern edge of the oceanic nazca plate is being subducted beneath the continental south american plate. two seamount ridges within the nazca plate enter the subduction zone along this trench: the nazca ridge and the juan fernández ridge. | The Atlantic has passive margins on either side, meaning the surrounding continents move along with the sea floor spreading away from the divergent boundary at the Mid-Atlantic Ridge, so the new crust isn't consumed at the Atlantic's own edges. In North America, for example, spreading at the Mid-Atlantic Ridge helps push the NA plate westward, forcing the Juan de Fuca plate to subduct beneath it and producing a complicated strike-slip boundary between the Pacific and NA plates. |
if acoustic energy is converted to heat when a material absorbs sound, is it possible for an audio source to produce enough acoustic energy to ignite something? | <p> absorbing sound spontaneously converts part of the sound energy to a very small amount of heat in the intervening object (the absorbing material), rather than sound being transmitted or reflected. there are several ways in which a material can absorb sound. the choice of sound absorbing material will be determined by the frequency distribution of noise to be absorbed and the acoustic absorption profile required
<p> the energy dissipated within a medium as sound travels through it is analogous to the energy dissipated in electrical resistors or that dissipated in mechanical dampers for mechanical motion transmission systems. all three are equivalent to the resistive part of a system of resistive and reactive elements. the resistive elements dissipate energy (irreversibly into heat) and the reactive elements store and release energy (reversibly, neglecting small losses). the reactive parts of an acoustic medium are determined by its bulk modulus and its density, analogous to respectively an electrical capacitor and an electrical inductor, and analogous to, respectively, a mechanical spring attached to a mass.
<p> acoustic absorption refers to the process by which a material, structure, or object takes in sound energy when sound waves are encountered, as opposed to reflecting the energy. part of the absorbed energy is transformed into heat and part is transmitted through the absorbing body. the energy transformed into heat is said to have been 'lost'.
<p> acoustic absorption is of particular interest in soundproofing. soundproofing aims to absorb as much sound energy (often in particular frequencies) as possible converting it into heat or transmitting it away from a certain location.
<p> acoustic oscillations in a medium are a set of time depending properties, which may transfer energy along its path. along the path of an acoustic wave, pressure and density are not the only time dependent property, but also entropy and temperature. temperature changes along the wave can be invested to play the intended role in the thermoacoustic effect. the interplay of heat and sound is applicable in both conversion ways. the effect can be used to produce acoustic oscillations by supplying heat to the hot side of a stack, and sound oscillations can be used to induce a refrigeration effect by supplying a pressure wave inside a resonator where a stack is located. in a thermoacoustic prime mover, a high temperature gradient along a tube where a gas media is contained induces density variations. such variations in a constant volume of matter force changes in pressure. the cycle of thermoacoustic oscillation is a combination of heat transfer and pressure changes in a sinusoidal pattern. self-induced oscillations can be encouraged, according to lord rayleigh, by the appropriate phasing of heat transfer and pressure changes.
<p> the goal of a sound absorber is to convert acoustical energy into heat. in a traditional absorber, the sound wave propagates into the absorber. because of the proximity of the porous material, the oscillating air molecules inside the absorber lose their acoustical energy due to friction.
<p> if the conductivity "σ" of the material is small, or the frequency is high, such that (with ), then dielectric heating is the dominant mechanism of loss of energy from the electromagnetic field into the medium. | It IS possible, though as you said yourself highly improbable. Sound is simply the vibration of particles in the air (hence why there's no sound in space), and that vibration hits an object and can vibrate it or induce heat, which is why you can break glass with a sound (not the only reason, mind you). If you played a sound at a certain frequency and amplitude (loudness) you could light something on fire; that said, your speaker would catch fire long before that, due to the way speakers work (which is another question entirely). |
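For scale, a standard conversion from sound level to acoustic power density (the comparison points in the comments are assumptions for illustration):

```python
I0 = 1e-12   # reference intensity for 0 dB SPL, W/m^2

for level_db in (85, 120, 150, 194):
    intensity = I0 * 10 ** (level_db / 10)   # W/m^2
    print(f"{level_db:3d} dB -> {intensity:.3g} W/m^2")

# 120 dB (around the threshold of pain) is only ~1 W/m^2, roughly a thousandth
# of bright sunlight, so ordinary audio gear deposits trivial heat. Intensities
# comparable to focused sunlight only appear around ~150 dB and above, far beyond
# what consumer speakers can produce and close to where pressure waves in air
# stop behaving as ordinary sound at all.
```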
how does artificially curling hair work? | <p> curling and straightening hair requires the stylist to use a curling rod or a flat iron to get a desired look. these irons use heat to manipulate the hair into a variety of waves, curls and reversing natural curls and temporarily straightening the hair. straightening or even curling hair can damage it due to direct heat from the iron and applying chemicals afterwards to keep its shape.
<p> an early alternative method for curling hair that was suitable for use on people was invented in 1905 by german hairdresser karl nessler. he used a mixture of cow urine and water. the first public demonstration took place on 8 october 1905, but nessler had been working on the idea since 1896. previously, wigs had been set with caustic chemicals to form curls, but these recipes were too harsh to use next to human skin. his method, called the spiral heat method, was only useful for long hair. the hair was wrapped in a spiral around rods connected to a machine with an electric heating device. sodium hydroxide (caustic soda) was applied and the hair was heated to or more for an extended period of time. the process used about twelve brass rollers and took six hours to complete. these hot rollers were kept from touching the scalp by a complex system of countering weights which were suspended from an overhead chandelier and mounted on a stand. nessler conducted his first experiments on his wife, katharina laible. the first two attempts resulted in completely burning her hair off and some scalp burns, but the method was improved and his electric permanent wave machine was used in london in 1909 on the long hair of the time.
<p> the natural hair can be twisted or braided, but is most commonly styled into cornrows before affixing the synthetic hair. using a latch hook or crochet hook, the synthetic hair (in the form of "loose bulk" or "braiding" hair) is then attached. parts of the hair extensions are grabbed by the hook and pulled through the underside of each cornrow, working from the front of the hair to the back at a 90 degree angle. this process can take up to 4-6 hours. popular methods include traditional, individuals, and "invisible knot method".
<p> a second type of curler was invented later, allegedly in 1924 by a czech hairdresser called josef mayer. in this method, the hair was fed through a small clamp which, after winding, would hold the two ends of a roller. the ends of the hair were held on the roller which was wound around a point until it reached the clamp into which it was inserted. for obvious reasons, this was called point-winding. mayer attempted to claim a patent on this method of winding, which was challenged in a federal lawsuit by the national hairdressers' and cosmetologists' association.
<p> partial curling can even be achieved by "nod" factor alone. this was demonstrated by the isolation of "nod" factors and their application to parts of the root hair. the root hairs curled in the direction of the application, demonstrating the action of a root hair attempting to curl around a bacterium. even application on lateral roots caused curling. this demonstrated that it is the "nod" factor itself, not the bacterium that causes the stimulation of the curling.
<p> a jheri curl requires a two-part application that consists of a softener (often called a "rearranging cream") to loosen the hair and a solution to set the curls. the rearranging cream uses pungent chemicals, causing the naturally tight curls to loosen. the looser curls are then set and a chemical solution is then added to the hair to permanently curl it.
<p> the pin curl is a staple of the pin-up style, "women utilized pin curls for their main hair curling technique". originating in the 1920s from the "water-waving technique", the hair style of the 1940s consisted of a fuller, gentle curl. the drying technique consists of curling a damp piece of hair from the end to the root and pinning it in place. once the curl is dry, it is brushed through to create the desired soft curl, with a voluminous silhouette. | Your hair is essentially long strands of protein (keratin, specifically). When protein is heated it loses its structure (think of unraveling a tangled ball of string). When you use a curling iron you're heating the protein and "reshaping" it into the curl you want. This is the same principle as hair straightening. Eventually the hair goes back to its native curly or straight state (which I can't explain in detail; I imagine that, left alone long enough, the proteins naturally reconfigure themselves). Chemical treatments work on the same principle, essentially denaturing (unraveling) the protein. Someone will probably give a better answer, but I think this is basically what you asked. |
before the invention of wooden/leather/hollow gourd canteens - how did hominids drink water? would they visit a watering hole and just drink a gallon of water at once for the entire day? | <p> in ancient peru, the nazca people employed a system of interconnected wells and an underground watercourse known as puquios. in spain and spanish america, a community operated watercourse known as an acequia, combined with a simple sand filtration system, provided potable water. beginning in the roman era a water wheel device known as a noria supplied water to aqueducts and other water distribution systems in major cities in europe and the middle east. london water supply infrastructure developed over many centuries from early mediaeval conduits, through major 19th-century treatment works built in response to cholera threats, to modern, large-scale reservoirs.
<p> the earliest missoulians drew their water directly from the clark fork river or nearby rattlesnake creek. the first water system consisted of a native american known as one-eyed riley and his friend filling buckets of water from the rattlesnake creek and hauling them door to door on a donkey cart. in 1871 city co-founder frank worden began construction of a log pipe and wooden main system that flowed from the rattlesnake creek north of the city. with the addition of two small covered reservoirs, the first municipal water system was begun in 1880. with an intake dam built in 1901 with a settling basin capacity of , the rattlesnake creek continued to meet demands of the city until 1935 when five wells were added to respond to increased summer and fall demand. this system is still maintained as an emergency backup, but was discontinued as a primary source after giardia outbreak in 1983. since then, missoula has relied on the missoula valley aquifer as the sole source of water. in 1889, the first electrical plant was built by to power his major downtown properties such as the missoula mercantile and the florence hotel. in 1905, the missoula mercantile (by then owned by copper king william a. clark purchased the water system and consolidated it with its vast electrical holdings to create the missoula light and water company (ml&w) a year later. electricity and water remained bundled after ml&w's sale to the montana power company (mpc) in 1929. in 1979, mpc sold its water utility holdings as mountain water company to park water company in downey, california, which since 2011 has been a subsidiary of the carlyle group. in 2015, the city of missoula was legally granted its right to acquire the water system by exercising its power of eminent domain", but the decision was upheld by a district court.
<p> cultures from arid regions often associated gourds with water, and they appear in many creation myths. since the beginning of their history, they have had a multitude of uses, including food, kitchen tools, toys, musical instruments and decoration. today, gourds are commonly used for a wide variety of crafts, including jewelry, furniture, dishes, utensils and a wide variety of decorations using carving, burning and other techniques.
<p> other theories have been proposed that suggest wading and the exploitation of aquatic food sources (providing essential nutrients for human brain evolution or critical fallback foods) may have exerted evolutionary pressures on human ancestors promoting adaptations which later assisted full-time bipedalism. it has also been thought that consistent water-based food sources had developed early hominid dependency and facilitated dispersal along seas and rivers.
<p> gourds continued to be used throughout history in almost every culture throughout the world. european contact in north america found extensive gourd use, including the use of bottle gourds as birdhouses to attract purple martins, which provided bug control for agriculture. almost every culture had musical instruments made of gourds, including drums, stringed instruments common to africa and wind instruments, including the nose flutes of the pacific.
<p> at this time the people still lived in traditional humpies. water was fetched from a well mainly by donkey wagons, but also by foot or by camel. children and women would travel back and forwards most of the day collecting water from the well and carrying it to the humpy area. the community obtained its food from rations from the station (flour, salt and meat). people also collected bush tucker including goannas, kangaroos, witchetty grubs, bush tomatoes and bush bananas.
<p> "kon-tiki" carried of drinking water in 56 water cans, as well as a number of sealed bamboo rods. the purpose stated by heyerdahl for carrying modern and ancient containers was to test the effectiveness of ancient water storage. for food "kon-tiki" carried 200 coconuts, sweet potatoes, bottle gourds and other assorted fruit and roots. the u.s. army quartermaster corps provided field rations, tinned food and survival equipment. in return, the "kon-tiki" explorers reported on the quality and utility of the provisions. they also caught plentiful numbers of fish, particularly flying fish, "dolphin fish", yellowfin tuna, bonito and shark. | They did the same thing all animals do, visited rivers or other water sources and drank when they needed to. They wouldn't have needed to drink a gallon of water a day, they would need to drink even less water than we do on average with a generally higher dietary content of fruits and vegetables and less salty foods. |
if i were to stand on titan, if i looked in the direction of saturn would i be able to see it or would the atmosphere be too thick? | <p> observations from the "voyager" space probes have shown that the titanean atmosphere is denser than earth's, with a surface pressure about 1.45 times that of earth's. titan's atmosphere is about 1.19 times as massive as earth's overall, or about 7.3 times more massive on a per surface area basis. it supports opaque haze layers that block most visible light from the sun and other sources and renders titan's surface features obscure. the atmosphere is so thick and the gravity so low that humans could fly through it by flapping "wings" attached to their arms. titan's lower gravity means that its atmosphere is far more extended than earth's; even at a distance of 975 km, the "cassini" spacecraft had to make adjustments to maintain a stable orbit against atmospheric drag. the atmosphere of titan is opaque at many wavelengths and a complete reflectance spectrum of the surface is impossible to acquire from the outside. it was not until the arrival of "cassini–huygens" in 2004 that the first direct images of titan's surface were obtained. the "huygens" probe was unable to detect the direction of the sun during its descent, and although it was able to take images from the surface, the "huygens" team likened the process to "taking pictures of an asphalt parking lot at dusk".
<p> titan is never visible to the naked eye, but can be observed through small telescopes or strong binoculars. amateur observation is difficult because of the proximity of titan to saturn's brilliant globe and ring system; an occulting bar, covering part of the eyepiece and used to block the bright planet, greatly improves viewing. titan has a maximum apparent magnitude of +8.2, and mean opposition magnitude 8.4. this compares to +4.6 for the similarly sized ganymede, in the jovian system.
<p> titan's vertical atmospheric structure is similar to earth. they both have a troposphere, stratosphere, mesosphere, and thermosphere. however, titan's lower surface gravity creates a more extended atmosphere, with scale heights of 15-50km in comparison to 5-8km on earth. voyager data, combined with data from "huygens" and radiative-convective models provide increased understanding of titan's atmospheric structure.
<p> bullet::::- titan, at 5,149 km diameter, is the second largest moon in the solar system and saturn's largest. out of all the large moons, titan is the only one with a dense (surface pressure of 1.5 atm), cold atmosphere, primarily made of nitrogen with a small fraction of methane. the dense atmosphere frequently produces bright white convective clouds, especially over the south pole region. on june 6, 2013, scientists at the iaa-csic reported the detection of polycyclic aromatic hydrocarbons in the upper atmosphere of titan. on june 23, 2014, nasa claimed to have strong evidence that nitrogen in the atmosphere of titan came from materials in the oort cloud, associated with comets, and not from the materials that formed saturn in earlier times. the surface of titan, which is difficult to observe due to persistent atmospheric haze, shows only a few impact craters and is probably very young. it contains a pattern of light and dark regions, flow channels and possibly cryovolcanos. some dark regions are covered by longitudinal dune fields shaped by tidal winds, where sand is made of frozen water or hydrocarbons. titan is the only body in the solar system beside earth with bodies of liquid on its surface, in the form of methane–ethane lakes in titan's north and south polar regions. the largest lake, kraken mare, is larger than the caspian sea. like europa and ganymede, it is believed that titan has a subsurface ocean made of water mixed with ammonia, which can erupt to the surface of the moon and lead to cryovolcanism. on july 2, 2014, nasa reported the ocean inside titan may be "as salty as the earth's dead sea".
<p> from 2005, the findings of the cassini–huygens probe have revealed a largely smooth surface of titan, with some notable abnormalities. many titanean "mountains" are little more than hills. however, some of these mountains rise to some several hundreds of meters high. doom mons is currently believed to be possibly the largest titanean mountain range and with the eponymous peak one of the highest; the title of highest peak on titan is thought to be held by the mithrim montes, which may have been formed by global contraction. doom mons is believed to be a twin-peak that rises above the relatively flat surrounding plain, and a probable massive cryovolcano. it has a 500–600 m deep indentation on its western side, containing a circular pit that is another 400 m deep, while sotra patera is immediately to its east.
<p> bullet::::- hyperion is titan's nearest neighbor in the saturn system. the two moons are locked in a 4:3 mean-motion resonance with each other, meaning that while titan makes four revolutions around saturn, hyperion makes exactly three. with an average diameter of about 270 km, hyperion is smaller and lighter than mimas. it has an extremely irregular shape, and a very odd, tan-colored icy surface resembling a sponge, though its interior may be partially porous as well. the average density of about 0.55 g/cm3 indicates that the porosity exceeds 40% even assuming it has a purely icy composition. the surface of hyperion is covered with numerous impact craters—those with diameters 2–10 km are especially abundant. it is the only moon besides the small moons of pluto known to have a chaotic rotation, which means hyperion has no well-defined poles or equator. while on short timescales the satellite approximately rotates around its long axis at a rate of 72–75° per day, on longer timescales its axis of rotation (spin vector) wanders chaotically across the sky. this makes the rotational behavior of hyperion essentially unpredictable.
<p> titan's atmosphere supports an opaque cloud layer that obscures titan's surface features at visible wavelengths. the haze that can be seen in the adjacent picture contributes to the moon's anti-greenhouse effect and lowers the temperature by reflecting sunlight away from the satellite. the thick atmosphere blocks most visible wavelength light from the sun and other sources from reaching titan's surface. | The interesting wikipedia article about extraterrestrial skies says that on Titan, Saturn is permanently invisible behind orange smog, and even the Sun would only be a lighter patch in the haze. |
assuming a modern refrigerator of average capacity/efficiency; further, assuming standard use: would the presence of 0.5m^3 of water inside acting as a thermal "battery" have any effect on the system's efficiency? | <p> the measured capacity of refrigeration is always dimensioned in units of power. domestic and commercial refrigerators may be rated in kj/s, or btu/h of cooling. for commercial and industrial refrigeration systems, the kilowatt (kw) is the basic unit of refrigerationexcept in north america, where the ton of refrigeration (tr) is used. (nominally the capacity to freeze one short ton of water per day, the tr is defined as 12,000 btu/hr (3.517 kw).)
<p> since refrigerators represent 30-40% of the energy consumption of a data center, and this is largely due to mechanical refrigeration, a higher water inlet temperature increases the hours in which free cooling is possible and therefore increases the efficiency of the refrigerator.
<p> it is worth noting here that the refrigerator cost in all cases is so small that there is very little percentage savings associated with reduced refrigeration demands at high temperature. this means that if a htsc, bscco for instance, works better at a low temperature, say 20k, it will certainly be operated there. for very small smes, the reduced refrigerator cost will have a more significant positive impact.
<p> it may seem odd that a hypothetical heat pump with low efficiency is being used to violate the second law of thermodynamics, but the figure of merit for refrigerator units is not efficiency, formula_21, but the coefficient of performance (cop),
<p> cooling capacity is the measure of a cooling system's ability to remove heat. the si units are watts (w). another common unit is the ton of refrigeration, which describes the amount of water at freezing temperature that can be frozen in a day (24 hours). 1 ton of refrigeration is equivalent to 211 kj/min or 200 btu/min.
<p> on april 16, 2015, as part of the national appliance energy conservation act (naeca), new minimum standards for efficiency of residential water heaters set by the united states department of energy went into effect. all new gas storage tank water heaters with capacities smaller than sold in the united states in 2015 or later shall have an energy factor of at least 60% (for 50-us-gallon units, higher for smaller units), increased from the pre-2015 minimum standard of 58% energy factor for 50-us-gallon gas units. electric storage tank water heaters with capacities less than 55 us gallons sold in the united states shall have an energy factor of at least 95%, increased from the pre-2015 minimum standard of 90% for 50-us-gallon electric units.
<p> in most cases "t" = 300 k, so for "t" ≥ 150 k the carnot efficiency is unity. the practical efficiency is a catch-all term that accounts for the many mechanical non-idealities that come into play in a refrigeration system aside from the fundamental physics of the carnot efficiency. for a large refrigeration installation there is some economy of scale, and it is possible to achieve "η" in the range of 0.2–0.3. the wall-plug power consumed by the refrigerator is then | Zero effect. The water won't change the amount of heat lost through the exterior. All it will do is add thermal mass, thus 'softening' the warm/cold spikes the interior will experience as the refrigerator cycles on and off, or the door is opened/closed. The presence of water won't affect the opening and closing, unless it's placed in such a way as to block airflow, but even then there will be the same amount of airflow on the front-facing side of the water container, so you'd just be 'protecting' the rear of the refrigerator from getting warmer, but you'd still gain heat on that side of the container, so it's all the same.
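A rough back-of-envelope sketch of the "thermal mass" point above, assuming standard textbook values for water and a purely illustrative 30 W heat leak (that figure is an assumption, not something stated in the answer):

```python
# Sketch of the "thermal battery" idea: 0.5 m^3 of water adds a lot of thermal
# mass, which damps temperature swings but does not change the total heat
# leaking through the cabinet walls.

water_volume_m3 = 0.5
density_kg_m3 = 1000.0        # liquid water
specific_heat_j = 4186.0      # J/(kg*K)

thermal_mass = water_volume_m3 * density_kg_m3 * specific_heat_j  # J/K
print(f"thermal mass: {thermal_mass / 1e6:.2f} MJ/K")             # ~2.1 MJ/K

heat_leak_w = 30.0            # assumed steady heat leak while the compressor is off
warming_per_hour = heat_leak_w * 3600 / thermal_mass
print(f"interior warms by only ~{warming_per_hour:.3f} K per compressor-off hour")
```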
when you bend a piece of metal, or plastic, what is happening on the small scale to the material? | <p> plastic bending begins when an applied moment causes the outside fibers of a cross-section to exceed the material's yield strength. loaded only by a moment, the peak bending stresses occurs at the outside fibers of a cross-section. the cross-section will not yield linearly through the section. rather, outside regions will yield first, redistributing stress and delaying failure beyond what would be predicted by elastic analytical methods. the stress distribution from the neutral axis is the same as the shape of the stress-strain curve of the material (this assumes a non-composite cross-section). after a cross-section reaches a sufficiently high condition of plastic bending, it acts as a plastic hinge.
<p> a popular misconception is that all materials that bend are "weak" and those that don't are "strong". in reality, many materials that undergo large elastic and plastic deformations, such as steel, are able to absorb stresses that would cause brittle materials, such as glass, with minimal plastic deformation ranges, to break.
<p> in plastic limit analysis of structural members subjected to bending, it is assumed that an abrupt transition from elastic to ideally plastic behaviour occurs at a certain value of moment, known as plastic moment (m). member behaviour between m and m is considered to be elastic. when m is reached, a plastic hinge is formed in the member. in contrast to a frictionless hinge permitting free rotation, it is postulated that the plastic hinge allows large rotations to occur at constant plastic moment m.
<p> most solid materials undergo plastic deformations when subjected to strong shocks. the point on the shock hugoniot at which a material transitions from a purely elastic state to an elastic-plastic state is called the hugoniot elastic limit (hel) and the pressure at which this transition takes place is denoted "p". values of "p" can range from 0.2 gpa to 20 gpa. above the hel, the material loses much of its shear strength and starts behaving like a fluid.
<p> in structural engineering, the plastic moment (m) is a property of a structural section. it is defined as the moment at which the entire cross section has reached its yield stress. this is theoretically the maximum bending moment that the section can resist - when this point is reached a plastic hinge is formed and any load beyond this point will result in theoretically infinite plastic deformation. in practice most materials are work-hardened resulting in increased stiffness and moment resistance until the material fails. this is of little significance in structural mechanics as the deflection prior to this occurring is considered to be an earlier failure point in the member.
<p> this type of deformation is also irreversible. a break occurs after the material has reached the end of the elastic, and then plastic, deformation ranges. at this point forces accumulate until they are sufficient to cause a fracture. all materials will eventually fracture, if sufficient forces are applied.
<p> bending is a manufacturing process that produces a v-shape, u-shape, or channel shape along a straight axis in ductile materials, most commonly sheet metal. commonly used equipment include box and pan brakes, brake presses, and other specialized machine presses. typical products that are made like this are boxes such as electrical enclosures and rectangular ductwork. | Bending plastic is a different mechanism than bending metal. Also, there are two different kinds of bending: Plastic deformation is where the object remains permanently deformed after bending, and elastic deformation is bending where the object springs back to its original shape. As for elastic deformation in a crystalline material (like metal), on one side the atoms in the crystal are being pulled apart slightly, causing that side of the object to be slightly longer, and on the other side they are being pushed together, making that side slightly shorter (I assumed a bending bar; you can also stretch or squeeze, or twist or shear). Because of this, when you elastically bend a material, how hard it is to bend (called the modulus of elasticity) is actually a measure of the strength of the interatomic bonds of the crystalline material. At some point of applied force (known as stress), plastic deformation will begin to occur. This is distinct from elastic deformation in that it is irreversible. What makes this different is that you are causing a permanent movement of atoms throughout the crystal lattice (lattice just means the grid of atoms that make up the crystal). Fundamental to this idea is that crystals of materials are not perfect. What I mean by this is that there might be a missing plane of atoms in the middle of the crystal, or there might be an extra plane of atoms, or there might be a twist in the crystal structure. These are called dislocations. In order for there to be plastic deformation, there **must** be dislocation motion (the correct technical term is "dislocation slip"). When you stress an object, you are causing a stress field through the material, and once that stress field reaches a certain threshold, you cause the dislocations to move through the material. Everything before dislocation motion begins is the elastic deformation which I discussed above. The point at which dislocations begin to move is called the yield stress. The point after which dislocations are moving is called plastic deformation. When the stress is removed, the dislocations remain in their new spot (instead of jumping back to their original spot). This is what causes the permanent change in shape. Just like there is a certain amount of energy required to push a car up over a hill before it can roll down the other side, a certain amount of energy is required to push a dislocation to the next "spot" in the crystal. This required energy level is what determines how strong the material is (mentioned above as yield stress). All of these ideas are inter-related and I hope I did not confuse you up to this point. On an interesting side note: If you want to make a material stronger, you can increase the energy required to move a dislocation, or in other words make the hill that you are pushing the car up taller. This will require you to insert more energy to get the dislocation or car to the top of the hill. In materials, this "heightening" can be done in many different ways. One of the first ways ever discovered was the invention of steel, where carbon atoms are inserted into the iron crystal lattice.
These carbon atoms deform the shape of the crystal, and in doing so increase the energy levels required to move a dislocation, thus making the material far stronger (before carbon addition it is simply iron, after carbon addition it becomes steel). This is solid solution strengthening, and the Wikipedia article includes a great picture of how the lattice is distorted by the tiny carbon atoms. So to bring this discussion full circle and to answer your question: just like in the car analogy, once the car is pushed over the hill, it will roll down the other side and accumulate kinetic energy. This is very similar to the motion of dislocations: as they slide into new positions, they release energy in the form of kinetic energy that heats the material. As to the plastic spork question, this is fundamentally different because plastic is a polymer, which is a material that consists of long chains of atoms all jumbled together like spaghetti. When you bend the fork, you are pulling the strings apart from each other (imagine pulling the pile of spaghetti apart). Once you bend a plastic spoon it tends to stay bent, but rubber bands are polymer chains that want to be all curled up. When you let go of the spoon, the polymer chains do not snap back to their curled up/jumbled up positions, but a rubber band does. Richard Feynman (who has forgotten more things about this subject than I would ever hope to know) explained this far better than I can in his short video, and he explained the tragic consequences of polymer elasticity when he told Congress how the Challenger exploded due to a cold O-ring.
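A tiny numerical illustration of the elastic-versus-plastic threshold described above, assuming typical textbook values for mild steel (the specific numbers are assumptions for the example, not taken from the answer):

```python
# Hooke's law below the yield stress (elastic, fully recoverable); dislocation
# motion above it (plastic, permanent). Values are typical for mild steel.

E_STEEL = 200e9        # Pa, Young's modulus ("modulus of elasticity")
YIELD_STRESS = 250e6   # Pa, roughly where dislocations start to move

def describe(stress_pa: float) -> str:
    if stress_pa < YIELD_STRESS:
        strain = stress_pa / E_STEEL   # Hooke's law: this strain springs back on unloading
        return f"{stress_pa/1e6:.0f} MPa -> elastic, strain {strain:.1e} (springs back)"
    return f"{stress_pa/1e6:.0f} MPa -> above yield: dislocations slip, permanent bend"

for stress in (100e6, 250e6, 400e6):
    print(describe(stress))
```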
how do herbicides distinguish between plants and weeds? | <p> herbicides (, ), also commonly known as weedkillers, are chemical substances used to control unwanted plants. selective herbicides control specific weed species, while leaving the desired crop relatively unharmed, while non-selective herbicides (sometimes called total weedkillers in commercial products) can be used to clear waste ground, industrial and construction sites, railways and railway embankments as they kill all plant material with which they come into contact. apart from selective/non-selective, other important distinctions include "persistence" (also known as "residual action": how long the product stays in place and remains active), "means of uptake" (whether it is absorbed by above-ground foliage only, through the roots, or by other means), and "mechanism of action" (how it works). historically, products such as common salt and other metal salts were used as herbicides, however these have gradually fallen out of favor and in some countries a number of these are banned due to their persistence in soil, and toxicity and groundwater contamination concerns. herbicides have also been used in warfare and conflict.
<p> bullet::::- selective herbicides control or suppress certain plants without affecting the growth of other plants species. selectivity may be due to translocation, differential absorption, physical (morphological) or physiological differences between plant species. 2,4-d, mecoprop, dicamba control many broadleaf weeds but remain ineffective against turfgrasses.
<p> herbicides are designed to kill plants, and are used to control unwanted plants such as agricultural weeds. however herbicides can also cause phytotoxic effects in plants that are not within the area over which the herbicide is applied, for example as a result of wind-blown spray drift or from the use of herbicide-contaminated material (such as straw or manure) being applied to the soil. the phytotoxic effects of herbicides are an important subject of study in the field of ecotoxicology.
<p> pesticides derived from plants include nicotine, rotenone, strychnine and pyrethrins. plants such as tobacco, cannabis, opium poppy, and coca yield psychotropic chemicals. poisons from plants include atropine, ricin, hemlock and curare, though many of these also have medicinal uses.
<p> as herbicides are pesticides used to kill unwanted plants, silvicides are special pesticides (cacodylic acid or msma for instance) used to kill brush and trees, or ""entire forest"" or unwanted forest species.
<p> modern herbicides are often synthetic mimics of natural plant hormones which interfere with growth of the target plants. the term organic herbicide has come to mean herbicides intended for organic farming. some plants also produce their own natural herbicides, such as the genus "juglans" (walnuts), or the tree of heaven; such action of natural herbicides, and other related chemical interactions, is called allelopathy. due to herbicide resistance - a major concern in agriculture - a number of products combine herbicides with different means of action. integrated pest management may use herbicides alongside other pest control methods.
<p> however weed control can also be achieved by the use of herbicides. selective herbicides kill certain targets while leaving the desired crop relatively unharmed. some of these act by interfering with the growth of the weed and are often based on plant hormones. herbicides are generally classified as follows: | I’m not sure how it works for every herbicide, but one example I know is atrazine, which I’ve studied a bit in the lab on hormone signaling in animals. For atrazine, it only reaches toxic levels in plants that can’t metabolize it effectively. It’s going to be a bit toxic to any plant at agriculturally relevant doses, but the plant won’t die unless it can’t prevent atrazine from accumulating within its cells. |
is there an experiment that could cause a vacuum metastability event if the universe is indeed in a false vacuum state? | <p> in quantum field theory, a false vacuum is a hypothetical vacuum that is somewhat, but not entirely, stable. it may last for a very long time in that state, and might eventually move to a more stable state. the most common suggestion of how such a change might happen is called bubble nucleation – if a small region of the universe by chance reached a more stable vacuum, this 'bubble' would spread.
<p> vacuum decay would be theoretically possible if our universe had a false vacuum in the first place, an issue that was highly theoretical and far from resolved in 1982. if this were the case, a bubble of lower-energy vacuum could come to exist by chance or otherwise in our universe, and catalyze the conversion of our universe to a lower energy state in a volume expanding at nearly the speed of light. chaotic inflation theory suggests that the universe may be in either a false vacuum or a true vacuum state.
<p> bullet::::- perfect vacuum is an ideal state of no particles at all. it cannot be achieved in a laboratory, although there may be small volumes which, for a brief moment, happen to have no particles of matter in them. even if all particles of matter were removed, there would still be photons and gravitons, as well as dark energy, virtual particles, and other aspects of the quantum vacuum.
<p> in order to best understand the false vacuum collapse theory, one must first understand the higgs field which permeates the universe. much like an electromagnetic field, it varies in strength based upon its potential. a true vacuum exists so long as the universe exists in its lowest energy state, in which case the false vacuum theory is irrelevant. however, if the vacuum is not in its lowest energy state (a false vacuum), it could tunnel into a lower energy state. this is called vacuum decay. this has the potential to fundamentally alter our universe; in more audacious scenarios even the various physical constants could have different values, severely affecting the foundations of matter, energy, and spacetime. it is also possible that all structures will be destroyed instantaneously, without any forewarning. studies of a particle similar to the higgs boson support the theory of a false vacuum collapse billions of years from now.
<p> a false vacuum is unstable due to the quantum tunnelling of instantons to lower energy states. tunnelling can be caused by quantum fluctuations or the creation of high-energy particles. the false vacuum is a local minimum, but not the lowest energy state.
<p> for each observer in any chosen point of space, false vacuum eventually tunnels into a state with the same potential energy, but which is not a vacuum (it is not at a local minimum of the potential energy – it “can decay”). this state can be seen as a true vacuum, filled with a large number of inflaton particles. however, the rate of expansion of the true vacuum does not change at that moment: only its exponential character changes to much slower expansion of the flrw metric. this ensures that the expansion rate precisely matches the energy density. | In the last few months there has been active discussion on whether or not this is even possible, in principle, if we live in a nearly Minkowski space. In MWT (or the string landscape), the vacua are really degenerate. In this case all the Hilbert spaces are orthogonal and it's really impossible to know about the existence of the rest of the vacua. When you break the degeneracy, then you might have tunneling through bubble nucleation. In order to artificially create a bubble, you would need enough energy to climb the potential wall that separates you from the other vacuum. This is model dependent, but I imagine that if the wall were small enough that you could artificially tunnel at the LHC, then the lifetime of this vacuum would be shorter than the age of the universe and we would have already crossed to the other side.
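For background, the usual semiclassical estimate of the bubble-nucleation rate per unit volume is what makes the "height of the wall versus lifetime" trade-off quantitative; the prefactor A and the bounce action S_E are model dependent, so the sketch below is generic rather than tied to any specific model in the thread:

```latex
% Semiclassical (Coleman-type) estimate of false-vacuum decay by bubble nucleation.
\[
  \frac{\Gamma}{V} \;\sim\; A\, e^{-S_E[\phi_b]/\hbar},
\]
% where \phi_b is the "bounce" solution of the Euclidean field equations and
% S_E its Euclidean action. A tall or wide barrier gives a large S_E, and the
% lifetime grows exponentially with it; this is why a barrier low enough to
% tunnel through artificially would already have decayed on its own.
```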
does an increased global average temperature affect geological processes? | <p> in the scientific literature, there is a strong consensus that global surface temperatures have increased in recent decades and that the trend is caused primarily by human-induced emissions of greenhouse gases. with regard to the global warming controversy, the scientific mainstream puts neither doubt on the existence of global warming nor on its causes and effects.
<p> in the scientific literature, there is an overwhelming consensus that global surface temperatures have increased in recent decades and that the trend is caused mainly by human-induced emissions of greenhouse gases. no scientific body of national or international standing disagrees with this view. scientific discussion takes place in journal articles that are peer-reviewed, which scientists subject to assessment every couple of years in the intergovernmental panel on climate change reports. the scientific consensus stated in the ipcc fifth assessment report is that it "is extremely likely that human influence has been the dominant cause of the observed warming since the mid-20th century".
<p> there are also concerns regarding global warming that a global average increase of 3–4 degrees celsius above the preindustrial baseline could lead to a further unchecked increase in surface temperatures. for example, releases of methane, a greenhouse gas more potent than CO2, from wetlands, melting permafrost and continental margin seabed clathrate deposits could be subject to positive feedback.
<p> the climate change study projects further temperature increases, with greater warming in the summer and higher extreme temperatures by 2050. due to the increased temperature, there is a projected moderate increase in the rate of water evaporation. reduced snowfallperhaps 15% to 30% less than current amountsand the elimination of surface hail, along with the higher likelihood of intense precipitation events are predicted by 2050. droughts may be more likely due to increased temperatures, increased evaporation rates, and potential changes in precipitation.
<p> since the late 1800s, the surface of the earth has experienced an increase of 0.6 °c in global temperatures. the earth historically has experienced periods of large increases in global temperatures. for example, around 2 million b.c the surface temperature of the earth is estimated to have been 5 °c warmer than today. while these temperatures increased as a result of the natural warming and cooling of the earth, current increases in global temperatures are attributed to increasing amounts of greenhouse gases in the atmosphere. greenhouse gases have increased since the late 19th century due to the industrialization of nations worldwide. examples of greenhouse gases include carbon dioxide, methane, nitrous oxide, and hydro-fluorocarbons. while each of these have a significant impact on the effects of greenhouse gases, carbon dioxide is considered to be the most important as approximately three-quarters of the human-generated global warming effect may attributed to increased carbon dioxide output .
<p> a rise in global temperatures is also predicted to correlate with an increase in global precipitation but because of increased runoff, floods, increased rates of soil erosion, and mass movement of land, a decline in water quality is probable, because while water will carry more nutrients it will also carry more contaminants. while most of the attention about climate change is directed towards global warming and greenhouse effect, some of the most severe effects of climate change are likely to be from changes in precipitation, evapotranspiration, runoff, and soil moisture. it is generally expected that, on average, global precipitation will increase, with some areas receiving increases and some decreases.
<p> the scientific consensus is that the global average surface temperature has risen over the past century. scientific opinion on climate change was summarized in the 2001 third assessment report of the intergovernmental panel on climate change (ipcc). the main conclusions on global warming at that time were as follows: | It does a bit, in that it feeds into a couple of feedback loops which will react accordingly. However, the effects of this would be delayed and probably not felt before the final extinction of our species. So no worries, right? The two main feedback channels are the melting of icecaps and the effects of increasing aridification on plant cover and erosion rates. When icecaps and alpine glaciers melt, it affects the distribution of mass in mountain chains and continents, and isostatic rebound will subsequently act in such a way as to counteract this effect. So, for instance, if you remove, through melting, a continental icecap (say the Greenland ice sheet), the underlying continental plate, which is floating on top of the asthenosphere, will rebound upwards at a measurable rate over the course of a few millennia. This will induce seismic activity, notably. In the context of an orogenic belt such as the Andes, there might also be effects on the critical taper and the way the collision unfolds, at least for a while. Then, an increase in temperature is said to bring about local increases in rainfall and to also increase desertification, both of which would result in increased erosion rates. This increases the amount of sediment piling up at the edge of the continental plates, including along subduction zones. Thus, over the course of the next several million years, the amount of sediment trying to work its way into the subduction zone goes up, which affects the geometry of the collision zone, the size of the accretionary prism and the amount of subducted material going through partial melting, which in turn might affect the amount of melts produced along the subduction zone, which will affect the rate of volcanism along subduction zones, etc., etc...
how right is it to say that (e)^2=(mc^2)^2+(pc)^2 is just applying the pythagorean theorem? | <p> the theorem now states that the ols estimator is a blue. the main idea of the proof is that the least-squares estimator is uncorrelated with every linear unbiased estimator of zero, i.e., with every linear combination formula_41 whose coefficients do not depend upon the unobservable formula_30 but whose expected value is always zero.
<p> looman pointed out that the function given by "f"("z") = exp(−"z") for "z" ≠ 0, "f"(0) = 0 satisfies the cauchy–riemann equations everywhere but is not analytic (or even continuous) at "z" = 0. this shows that the function "f" must be assumed continuous in the theorem.
<p> cobham's thesis holds that p is the class of computational problems that are "efficiently solvable" or "tractable". this is inexact: in practice, some problems not known to be in p have practical solutions, and some that are in p do not, but this is a useful rule of thumb.
<p> furthermore, the pcp theorem asserts that the number of proof accesses can be brought all the way down to a constant. that is, . they used this valuable characterization of np to prove that approximation algorithms do not exist for the optimization versions of certain np-complete problems unless p = np. such problems are now studied in the field known as hardness of approximation.
<p> in mathematics, the pythagorean theorem, also known as pythagoras' theorem, is a fundamental relation in euclidean geometry among the three sides of a right triangle. it states that the area of the square whose side is the hypotenuse (the side opposite the right angle) is equal to the sum of the areas of the squares on the other two sides. this theorem can be written as an equation relating the lengths of the sides "a", "b" and "c", often called the "pythagorean equation":
<p> the pythagorean theorem is derived from the axioms of euclidean geometry, and in fact, were the pythagorean theorem to fail for some right triangle, then the plane in which this triangle is contained cannot be euclidean. more precisely, the pythagorean theorem implies, and is implied by, euclid's parallel (fifth) postulate. thus, right triangles in a non-euclidean geometry
<p> bullet::::- if the two functions are "f1" = "z" and "f2" = "e^z" then the theorem implies the hermite–lindemann theorem that "e^α" is transcendental for any nonzero algebraic α, otherwise α, 2α, 3α... would be an infinite number of values at which both "f1" and "f2" are algebraic. | It *is* fundamentally related to the Pythagorean theorem, but in a deeper way than the Pythagoreans or Euclid would have known. If you want to know the distance between two points in 2 dimensions you use the Pythagorean theorem to find that c^2 = a^2 + b^2 . Let's change these variables to something we're a bit more used to in physics: call the overall distance ds, the distance in x, dx, and the distance in y, dy. ds^2 = dx^2 + dy^2 . In 3 dimensions, it extends to ds^2 = dx^2 + dy^2 + dz^2 . Pretty simple. But if you want to know the distance between two points (called events) in space**-time**, the formula changes slightly: ds^2 = -(c*dt)^2 + dx^2 + dy^2 + dz^2 (I'm making some convention choices to simplify the discussion), where c is the speed of light and dt is the time separation of the points. Note the minus sign in front of the time component. This is where we begin to extend the Pythagorean theorem to new kinds of geometries beyond the one you're familiar with in grade school. Specifically this is called a hyperbolic geometry, and there's a lot more cool stuff we can talk about there. Well it turns out, when you start to mess around with physics in space-time (this simple space-time, there are more complicated ones), we get that in general there's some constant number, let's call it Q, that can be calculated from the timelike and spacelike components of things called 4-vectors (like the space 3-vectors; if you're unfamiliar with vectors they're not that important to the present discussion). Well the next most useful 4-vector, imo, is the energy-momentum 4-vector, aka 4-momentum. Energy is the timelike bit, and momentum is the spacelike bit. Well anyway, -E^2 + (**p**c)^2 is the combination we form, and the constant it equals? -(mc^2)^2 . That is, -(mc^2)^2 = -E^2 + (**p**c)^2 . Now we can rearrange that to get E^2 = (**p**c)^2 + (mc^2)^2 , the fuller expression. And just like the video says, it's related to the Pythagorean theorem, but in a very roundabout way. It happens, of course, that it's still a useful mathematical tool.
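A compact restatement of the derivation above, in the same (-,+,+,+) sign convention the answer chose:

```latex
\begin{align*}
  ds^2 &= -(c\,dt)^2 + dx^2 + dy^2 + dz^2
      && \text{(the spacetime ``Pythagorean'' rule)} \\
  p^\mu &= \left(\tfrac{E}{c},\ \mathbf{p}\right)
      && \text{(energy-momentum 4-vector)} \\
  p_\mu p^\mu &= -\tfrac{E^2}{c^2} + |\mathbf{p}|^2 = -(mc)^2
      && \text{(its invariant ``length squared'')} \\
  \Rightarrow\ E^2 &= (|\mathbf{p}|c)^2 + (mc^2)^2
\end{align*}
```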
why do clouds have shadows instead of displaying the color spectrum? | <p> clouds are white for the same reason as ice. they are composed of water droplets or ice crystals mixed with air, very little light that strikes them is absorbed, and most of the light is scattered, appearing to the eye as white. shadows of other clouds above can make clouds look gray, and some clouds have their own shadow on the bottom of the cloud.
<p> dark clouds appear so because of sub-micrometre-sized dust particles, coated with frozen carbon monoxide and nitrogen, which effectively block the passage of light at visible wavelengths. also present are molecular hydrogen, atomic helium, C18O (CO with oxygen as the 18O isotope), CS, NH3 (ammonia), H2CO (formaldehyde), c-C3H2 (cyclopropenylidene) and a molecular ion N2H+ (diazenylium), all of which are relatively transparent. these clouds are the spawning grounds of stars and planets, and understanding their development is essential to understanding star formation.
<p> the luminance or brightness of a cloud is determined by how light is reflected, scattered, and transmitted by the cloud's particles. its brightness may also be affected by the presence of haze or photometeors such as halos and rainbows. in the troposphere, dense, deep clouds exhibit a high reflectance (70% to 95%) throughout the visible spectrum. tiny particles of water are densely packed and sunlight cannot penetrate far into the cloud before it is reflected out, giving a cloud its characteristic white color, especially when viewed from the top. cloud droplets tend to scatter light efficiently, so that the intensity of the solar radiation decreases with depth into the gases. as a result, the cloud base can vary from a very light to very-dark-grey depending on the cloud's thickness and how much light is being reflected or transmitted back to the observer. high thin tropospheric clouds reflect less light because of the comparatively low concentration of constituent ice crystals or supercooled water droplets which results in a slightly off-white appearance. however, a thick dense ice-crystal cloud appears brilliant white with pronounced grey shading because of its greater reflectivity.
<p> the color of a cloud, as seen from the earth, tells much about what is going on inside the cloud. dense deep tropospheric clouds exhibit a high reflectance (70% to 95%) throughout the visible spectrum. tiny particles of water are densely packed and sunlight cannot penetrate far into the cloud before it is reflected out, giving a cloud its characteristic white color, especially when viewed from the top. cloud droplets tend to scatter light efficiently, so that the intensity of the solar radiation decreases with depth into the gases. as a result, the cloud base can vary from a very light to very dark grey depending on the cloud's thickness and how much light is being reflected or transmitted back to the observer. thin clouds may look white or appear to have acquired the color of their environment or background. high tropospheric and non-tropospheric clouds appear mostly white if composed entirely of ice crystals and/or supercooled water droplets.
<p> in contrast, the water droplets that make up clouds are of a comparable size to the wavelengths in visible light, and the scattering is described by mie's model rather than that of rayleigh. here, all wavelengths of visible light are scattered approximately identically, and the clouds therefore appear to be white or grey.
<p> if parts of clouds contain small water droplets or ice crystals of similar size, their cumulative effect is seen as colors. the cloud must be optically thin, so that most rays encounter only a single droplet. iridescence is therefore mostly seen at cloud edges or in semi-transparent clouds, while newly forming clouds produce the brightest and most colorful iridescence. when the particles in a thin cloud are very similar in size over a large extent, the iridescence takes on the structured form of a corona, a bright circular disk around the sun or moon surrounded by one or more colored rings.
<p> clouds obscure the view of other objects in the sky, though varying thicknesses of cloudcover have differing effects. a very thin cirrus cloud in front of the moon might produce a rainbow-colored ring around the moon. stars and planets are too small or dim to take on this effect, and are instead only dimmed (often to the point of invisibility). thicker cloudcover obscures celestial objects entirely, making the sky black or reflecting city lights back down. clouds are often close enough to afford some depth perception, though they are hard to see without moonlight or light pollution. | **TL;DR: Cloud droplets are about 100 times smaller than raindrops, so they scatter light differently and do not produce a rainbow.** Rain drops are typically around 1 mm across (0.04 inches), while cloud droplets are typically around 10 micrometers across (0.01 mm, or 0.0004 inches). Though they are essentially the same thing (droplets of water), because of this large difference in size they scatter light in very different ways. When light encounters a rain drop, the drop acts like a small prism. The light is refracted as it enters the drop (its path is bent), and since the angle of refraction depends on the wavelength, the white light separates into a spectrum of colors, each traveling at a slightly different angle. The light is then reflected off the back of the drop, and refracted again as it travels back at an angle relative to its original direction of motion (see this handy diagram). This explanation is quite simplified, but is a good basic primer on the process. When light encounters a cloud droplet, because the droplet is on the same size scale as the wavelength of light, it is difficult for light to undergo this refraction-reflection-refraction process cleanly (for complicated reasons which I don't even fully understand). The process by which these smaller cloud droplets scatter light is known as Mie scattering, and the scattering does not have the same strong directionality as the larger drops. Edit: As for the reason clouds cast shadows, well, rain casts a shadow as well, you just have to be in the right place to see it. All this light scattered away by the drops means that less light gets through the cloud, so the area behind the cloud is darker. A shadow is the area behind something that is blocking sunlight, whether it be a cloud or rain or a house or a person. Rainbows are due to reflection of light, which is a different phenomenon. Bonus facts: Because rainbows are formed by a precise reflection of light, they will always appear in the same position with respect to the sun: it will always be about 42 degrees away from the antisolar point (the point at the exact opposite of the sky from the sun). For this reason, if you manage to get high enough in the air, you can see the rainbow as a full circle around the antisolar point.
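A rough numerical version of the size argument, assuming green light at 550 nm and the droplet/drop sizes quoted above (the regime boundaries in the comments are standard rough rules of thumb, not figures from the answer):

```python
# The dimensionless "size parameter" x = 2*pi*r / wavelength separates the
# scattering regimes: x of order 1-100 is the Mie regime (clouds: nearly
# wavelength-independent scattering, hence white/grey), while x in the
# thousands means geometric optics is a good approximation (raindrops act
# like tiny prisms, hence rainbows).
import math

WAVELENGTH_M = 550e-9                  # green light, middle of the visible band
radii_m = {
    "cloud droplet (~10 um diameter)": 10e-6 / 2,
    "raindrop (~1 mm diameter)":       1e-3 / 2,
}

for name, radius in radii_m.items():
    x = 2 * math.pi * radius / WAVELENGTH_M
    print(f"{name}: size parameter x ≈ {x:,.0f}")
```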
is the earth's rotation still slowing down? if so, could it eventually stop? | <p> the main reason for the slowing down of the earth's rotation is tidal friction, which alone would lengthen the day by 2.3 ms/century. other contributing factors are the movement of the earth's crust relative to its core, changes in mantle convection, and any other events or processes that cause a significant redistribution of mass. these processes change the earth's moment of inertia, affecting the rate of rotation due to conservation of angular momentum. some of these redistributions increase earth's rotational speed, shorten the solar day and oppose tidal friction. for example, glacial rebound shortens the solar day by 0.6 ms/century and the 2004 indian ocean earthquake is thought to have shortened it by 2.68 microseconds. it is evident from the figure that the earth's rotation has slowed at a decreasing rate since the initiation of the current system in 1971, and the rate of leap second insertions has therefore been decreasing.
<p> this scenario is unique because it doesn't happen overnight, but rather over a given period of time: the earth rotates at 1,000 miles an hour, but is gradually slowing down, yet this slowing is too slow to be noticed on human timescales. but what if it significantly slowed and eventually stopped? (the reason for this is that if the earth stopped spinning instantly, everything on its surface, including buildings and trees, would be blown away eastward across the planet by winds of over a thousand miles per hour, which would essentially kill every living thing on the surface in the process)
<p> the earth's rate of rotation is slowing down mainly because of tidal interactions with the moon and the sun. since the solid parts of the earth are ductile, the earth's equatorial bulge has been decreasing in step with the decrease in the rate of rotation.
<p> the spin of the earth starts slowing down dramatically. it is estimated the earth would stop spinning in as little as 5 years. the first effect is the isolation between the global positioning system satellites and ground-based atomic clocks. then stock markets crash because of uncertainty about humanity's future. as time goes on the oceanic bulge of water at the equator moves northward and southward. the water floods russia, canada, antarctica and northern europe. the atmosphere, once shaking solar heat out over the world and shifting air, stops and whirls to the poles. the air starts to thin at the equator and people have to migrate to more northerly and southerly cities in order to keep up with denser air. there is a higher risk of solar radiation as the magnetosphere weakens because of the slowing inner core. as the earth slows, the crust, mantle and the molten core slow down at different speeds, causing massive friction. this creates tremendous earthquakes where there have never been earthquakes before.
<p> the earth's rotation rate is still slowing down, though gradually, by about two thousandths of a second per rotation every 100 years. estimates of how fast the earth was rotating in the past vary, because it is not known exactly how the moon was formed. estimates of the earth's rotation 500 million years ago are around 20 modern hours per "day".
<p> the earth's axis shifted by estimates of between and . this deviation led to a number of small planetary changes, including the length of a day, the tilt of the earth, and the chandler wobble. the speed of the earth's rotation increased, shortening the day by 1.8 microseconds due to the redistribution of earth's mass. the axial shift was caused by the redistribution of mass on the earth's surface, which changed the planet's moment of inertia. because of conservation of angular momentum, such changes of inertia result in small changes to the earth's rate of rotation. these are expected changes for an earthquake of this magnitude. the earthquake also generated infrasound waves detected by perturbations in the orbit of the goce satellite, which thus serendipitously became the first seismograph in orbit.
<p> as the collapse continues, the rotation rate can increase to the point where the accreting protostar can break up due to centrifugal force at the equator. thus the rotation rate must be braked during the first 100,000 years to avoid this scenario. one possible explanation for the braking is the interaction of the protostar's magnetic field with the stellar wind in magnetic braking. the expanding wind carries away the angular momentum and slows down the rotation rate of the collapsing protostar. | Given an infinite amount of time, the limit would be that it would become tidally locked to the moon (like the moon is to the earth), so that a day would last a month (which would actually be longer than it is now). In actuality, the sun will become a red giant and earth will either be consumed or be rendered unable to support life long before that happens. |
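A back-of-envelope check of the timescale, naively extrapolating the "two thousandths of a second per rotation every 100 years" figure quoted above (an assumption: the real slowdown rate is not constant, and the Moon's orbit evolves too):

```python
# How long until the day stretches to roughly a lunar month at ~2 ms/century?
current_day_s = 86_400.0
lunar_month_s = 27.3 * 86_400.0        # sidereal month, in today's seconds
rate_s_per_century = 2.0e-3            # ~2 ms of day-lengthening per century

centuries_needed = (lunar_month_s - current_day_s) / rate_s_per_century
print(f"~{centuries_needed * 100:.1e} years")   # on the order of 1e11 years

# For comparison, the Sun becomes a red giant in roughly 5e9 years,
# so that happens long before any tidal locking.
```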
why does body hair stop exactly where clothes start? | <p> the real action of leg hair takes place below the skin or the epidermis. the cells that are in the hair follicles divide and multiply. when the space fills up in the follicle it pushes older cells out and that is what becomes the leg hair. after the older cells become hard and leave the follicle, they form a hair shaft. the hair shaft is mostly made up of dead tissue and a protein that is known as keratin.
<p> hair is a protein filament that grows from follicles in the dermis, or skin. with the exception of areas of glabrous skin, the human body is covered in follicles which produce thick terminal and fine vellus hair. it is an important biomaterial primarily composed of protein, notably keratin.
<p> leg hair is hair that grows on the legs of humans, generally appearing after the onset of puberty. for hygienic or aesthetic reasons and for some sports, people shave, wax, or use hair removal creams to remove the hair from their legs: see leg shaving.
<p> in the later decades of life, especially after the 5th decade, there begins a noticeable reduction in body hair especially in the legs. the reason for this is not known but it could be due to poorer circulation, lower free circulating hormone amounts or other reasons.
<p> care of the hair and care of the scalp skin may appear separate, but are actually intertwined because hair grows from beneath the skin. the living parts of hair (hair follicle, hair root, root sheath and sebaceous gland) are beneath the skin, while the actual hair shaft which emerges (the cuticle which covers the cortex and medulla) has no living processes. damage or changes made to the visible hair shaft cannot be repaired by a biological process, though much can be done to manage hair and ensure that the cuticle remains intact.
<p> people have between 100,000 and 150,000 hairs on their head. the number of strands normally lost in a day varies but on average is 100. in order to maintain a normal volume, hair must be replaced at the same rate at which it is lost. the first signs of hair thinning that people will often notice are more hairs than usual left in the hairbrush after brushing or in the basin after shampooing. styling can also reveal areas of thinning, such as a wider parting or a thinning crown.
<p> body hair (on the chest, shoulders, back, abdomen, buttocks, thighs, tops of hands, and tops of feet) turns, over time, from terminal ("normal") hairs to tiny, blonde vellus hairs. arm, perianal, and perineal hair is reduced but may not turn to vellus hair on the latter two regions (some cisgender women also have hair in these areas). underarm hair changes slightly in texture and length, and pubic hair becomes more typically female in pattern. lower leg hair becomes less dense. all of these changes depend to some degree on genetics. | maybe clothes are made to stop where the hair starts. |
geologist of reddit can you tell me what these rocks are? | <p> within these rocks are abundant mineral resources that include uranium, coal, petroleum, and natural gas. study of the area's unusually clear geologic history (which is laid bare due to the arid and semiarid conditions) has greatly advanced that science.
<p> the rocks of the area are complex and have featured in international geological debate since the 1950s. the site has attracted geologists from all over the world and featured in a number of theories that have been put forward to explain the unusual rock relationships. some of these theories have now become an accepted part of geological science.
<p> geologists work in the energy and mining sectors searching for natural resources such as petroleum, natural gas, precious and base metals. they are also in the forefront of preventing and mitigating damage from natural hazards and disasters such as earthquakes, volcanoes, tsunamis and landslides. their studies are used to warn the general public of the occurrence of these events. geologists are also important contributors to climate change discussions.
<p> point of rocks is a geologically significant outcropping located along u.s. highway 12 (us 12) southwest of baraboo, wisconsin. the formation is made up of baraboo quartzite and is part of the baraboo range; it dates from the precambrian and is roughly 1.7 billion years old. along with the nearby van hise rock, the formation was instrumental in the university of wisconsin–madison's development of the field of geology in the late 19th and early 20th centuries. the formation is listed on the national register of historic places.
<p> geologists are involved in the study of ore deposits, which includes the study of ore genesis and the processes within the earth's crust that form and concentrate ore minerals into economically viable quantities.
<p> robert g. coleman (born 1923) is an american geologist. he is a member of the united states national academy of sciences. his primary field of expertise is the formation and plate tectonic setting of ophiolites and ultramafic rocks. he is a retired professor of geology from stanford university and retired from the u.s. geological survey. he continues to conduct research and publish scientific books and articles.
<p> the rock is named in honor of charles van hise, a prominent geologist who chaired the university of wisconsin department of mineralogy and geology. among his significant accomplishments in geology and politics, van hise determined how the quartzite in the baraboo range had formed. building on earlier discoveries that the quartzite formed in the precambrian era and had metamorphosed, van hise was able to determine the forces that deformed the rock; his discoveries helped form the key principles of structural geology. van hise also used his studies of the quartzite's conversion from sandstone to write the "treatise on metamorphism" in 1904, the first publication to describe the process of metamorphism in detail. university of wisconsin geologists used van hise rock to demonstrate the properties and principles which van hise discovered; the rock is still used in geological education and research. | The one on the top ~~left~~ **EDIT** RIGHT (damn it) is a bunch of quartz crystals. Compare the shape with amethyst (quartz with purple coloration) - identical. May be worth a few dollars just because people like crystals. The one on the top left looks like a sandstone or limestone breccia (broken-up rocks) cemented by what is perhaps calcite (the larger white crystals). It has an interesting infilling of a different material (the grey with white flecks) with a much finer grain. To test, put a little acid on it - if it's calcite/limestone it will fizz. If it is as I describe, it's not valuable. The bottom one could be a lot of things, but the green colour on the worn top surface forming a crust is suggestive of a large amount of copper ions. The dark colour of the main body could imply either a mudstone/clay or perhaps a basalt (especially with the iron-rich red weathered bits). You'd need to look using a hand-lens to be sure. One outside possibility: If the green stuff on the top looks like little crystals (the pale green fuzzy bit in the middle of the picture), it might be olivine, implying the rock contains material from very deep down in a mid-ocean ridge. Valuation? Almost certainly not valuable. Sorry :)
how many man made objects are in space? | <p> the first commercial product manufactured in space were microscopic polystyrene beads, 10 µm in diameter, that were made during sts-6 aboard the space shuttle challenger april 4–9, 1983. the beads were made to be used for the calibration of particle size measuring instruments such as optical and electron microscopes. the manufacturing process took advantage of being able to form near perfect spheres in a microgravity environment. the technology necessary to produce the beads was jointly developed by lehigh university and nasa.
<p> in february 2017, taylor became the first private citizen to manufacture an item in space when a gravity meter he commissioned and co-designed was printed on the international space station. the item was subsequently donated to the museum of science and industry in chicago.
<p> made in space, inc. (mis) is an america-based company, specializing in the engineering and manufacturing of three-dimensional printers for use in microgravity. headquartered in mountain view, california on moffett field, made in space's 3d printer (zero-g printer) was the first manufacturing device in space.
<p> the company "made in space," which has developed a 3d printer adapted to the constraints of space travel, was founded at singularity university. the first prototype of made in space, the "zero-g printer", was developed with nasa and sent into space in september, 2014.
<p> another potential source of raw materials, at least in the short term, is recycled orbiting satellites and other man-made objects in space. some consideration was given to the use of the space shuttle external fuel tanks for this purpose, but nasa determined that the potential benefits were outweighed by the increased risk to crew and vehicle.
<p> bullet::::- in a 2015 episode of "murdoch mysteries", set in about 1905, an inventor works with tsiolkovsky's daughter to build a suborbital rocket based on his ideas and be the first man in space; a second rocket built to the same design is adapted as a ballistic missile for purposes of extortion.
<p> in 2015–2016, other 3d-printed spacecraft assemblies were ground-tested, including high-temperature, high-pressure rocket engine combustion chambers and the entire mechanical spaceframe and propellant tanks for a small satellite of a few hundred kilograms. | Take a look at this. Edit: The US Space Surveillance Network currently tracks about 8000 objects. |
are there any non-poisonous fish other than sharks that can kill a human? | <p> contrary to popular belief, only a limited number of shark species are known to pose a serious threat to humans. the species that are most dangerous can be indiscriminate and will take any potential meal they happen to come across (as an oceanic whitetip might eat a person floating in the water after a shipwreck), or may bite out of curiosity or mistaken identity (as with a great white shark attacking a human on a surfboard possibly because it resembles its favoured prey, a seal).
<p> although salmon sharks are thought to be capable of injuring humans, few, if any, attacks on humans have been reported, but reports of divers encountering salmon sharks and salmon sharks bumping fishing vessels have been given. these reports, however, may need positive identification of the shark species involved.
<p> only a few species of shark are dangerous to humans. out of more than 480 shark species, only three are responsible for two-digit numbers of fatal unprovoked attacks on humans: the great white, tiger and bull; however, the oceanic whitetip has probably killed many more castaways which have not been recorded in the statistics. these sharks, being large, powerful predators, may sometimes attack and kill people; however, they have all been filmed in open water by unprotected divers. the 2010 french film "oceans" shows footage of humans swimming next to sharks in the ocean. it is possible that the sharks are able to sense the presence of unnatural elements on or about the divers, such as polyurethane diving suits and air tanks, which may lead them to accept temporary outsiders as more of a curiosity than prey. uncostumed humans, however, such as those surfboarding, light snorkeling or swimming, present a much greater area of exposed skin surface to sharks. in addition, the presence of even small traces of blood, recent minor abrasions, cuts, scrapes or bruises, may lead sharks to attack a human in their environment. sharks seek out prey through electroreception, sensing the electric fields that are generated by all animals due to the activity of their nerves and muscles.
<p> in addition to the four species responsible for a significant number of fatal attacks on humans, a number of other species have attacked humans without being provoked, and have on extremely rare occasions been responsible for a human death. this group includes the shortfin mako, hammerhead, galapagos, gray reef, blacktip, lemon, silky shark and blue sharks. these sharks are also large, powerful predators which can be provoked simply by being in the water at the wrong time and place, but they are normally considered less dangerous to humans than the previous group.
<p> contrary to popular belief, only a few sharks are dangerous to humans. out of more than 470 species, only four have been involved in a significant number of fatal, unprovoked attacks on humans: the great white, oceanic whitetip, tiger, and bull sharks. these sharks are large, powerful predators, and may sometimes attack and kill people. despite being responsible for attacks on humans they have all been filmed without using a protective cage.
<p> bullet::::- sharks have often been portrayed as monsters who will immediately attack anything that swims in their vicinity. contrary to popular belief, only a few sharks are dangerous to humans. out of more than 470 species, only four have been involved in a significant number of fatal, unprovoked attacks on humans: the great white, oceanic whitetip, tiger, and bull sharks. these sharks are large, powerful predators, and may sometimes attack and kill people. however, even then, shark attacks on humans are extremely rare. the average number of fatalities worldwide per year between 2001 and 2006 from unprovoked shark attacks is 4.3.
<p> adult american crocodiles have no natural predators. they are known predators of lemon sharks, and sharks avoid areas with american crocodiles. nonetheless, a single recorded fatality was reported for a small adult american crocodile when a great white shark killed the american crocodile as it was swimming out at sea. | anything in the whale and porpoise family can kill us. some of them can kill us just by calling (vocalizing) too close to us. octopi can kill us by drowning us (removing our mask or breathing apparatus) or by biting us with their parrot-like beak. barracuda can kill us with their teeth. swordfish can impale us with their nose spike (bill)
hoverboards? | <p> a hoverboard (or hover board) is a fictional levitating board used for personal transportation, first described by author m. k. joseph in 1967 and popularized by the "back to the future" film franchise. hoverboards are generally depicted as resembling a skateboard without wheels. during the 1990s there were rumors, fueled by director robert zemeckis, that hoverboards were in fact real, but not marketed because they were deemed too dangerous by parents' groups. these rumors have been conclusively debunked. the hoverboard concept has been used by many authors in various forms of media, for instance in the 1998 film futuresport, used by dean cain's character.
<p> the malloy hoverbike is a single seater turbo-fan powered quadrocopter developed in 2006 by new zealand inventor chris malloy and has been contracted by an american engineering firm to produce such bikes for the united states department of defense. by the use of turbofans, it gives its user the ability to hover in the air like a helicopter in the manner of riding a motorbike.
<p> in 2004, jamie hyneman and his team built a makeshift hovercraft for "mythbusters", dubbed the "hyneman hoverboard", from a surfboard and leafblower. however, jamie's hoverboard was not very effective.
<p> a spiker (also known as a spike driver) is a piece of rail transport maintenance of way equipment. its purpose is to drive rail spikes into the ties on a rail track to hold the rail in place. many different sizes of spikers are manufactured and in use around the world.
<p> hoverrace was created in 1996, though some pieces of information cached on the internet suggest it may have origins in 1995. it was designed by grokksoft, with richard langlois as its principal programmer and john ferber responsible for the company's marketing and advertising of the game. in the shareware version of the game users could only race with the basic hovercraft and race three of the company's tracks. users who bought a registration key for $16 could race with all hovercraft, play any track, and/or even create their own.
<p> the idea of the modern hovercraft is most often associated with a british mechanical engineer sir christopher cockerell. cockerell's group was the first to develop the use of a ring of air for maintaining the cushion, the first to develop a successful skirt, and the first to demonstrate a practical vehicle in continued use.
<p> the name “hoveround” is the brainchild of tom kruse. he blended the word “hover” (based on the hovering look of the wheelchair), with the beach boys’ song “i get around”. kruse had been listening to the song on the radio while driving to a power chair promotional event and came up with the hoveround name. | > But since the X and Y directions would be constantly changing allowing free movement. tl;dr: No. The trapped superconductor would turn with the magnetic field. See, the tracks these superconductors fly over only work because the direction (vector) of the magnetic field lines is different in the middle than on the sides of the track. The important thing is that the direction of these vectors changes with height and with moving from side to side, but they don't change when sliding along the track. A superconductor below its transition ("jump") temperature just tries to maintain the direction of the magnetic field throughout its body (more specifically, throughout the imperfections of its body). For your idea to work, the magnetic field must have field vectors that change when traversing the z axis but stay constant along the x and y axes. There is a theoretical vector field that satisfies this requirement, but its field vectors would have to spiral along the z axis (kinda like a spiral staircase). I am not entirely sure, but I think you can't create such a field with any configuration of magnets. Now, you may think that you actually CAN create such a field by erecting vertical stacks of horizontally-poled magnets, each rotated a bit relative to its top and bottom neighbour. But I am afraid this would not work, unless you manage to make the magnetic field propagate only within a very narrow vertical height (no idea how that would work) AND keep the field very uniform along the x-y plane (you could use very VERY strong magnets for that). I don't think it's doable with today's technology (including technology available in the next 5 years), so hoverboards in 2015 will remain sci-fi.
the expansion of the universe is accelerating, will the speed of acceleration asymptotically approach the speed of light, surpass it, or neither? | <p> to determine if the expansion rate of the universe is speeding up or slowing down over time, cosmologists make use of the finite velocity of light. it takes billions of years for light from a distant galaxy to reach the earth. since the universe is expanding, the universe was smaller (galaxies were closer together) when light from distant galaxies was emitted. if the expansion rate of the universe is speeding up due to dark energy, then the size of the universe increases more rapidly with time than if the expansion were slowing down. using supernovae, we cannot quite measure the size of the universe versus time. instead we can measure the size of the universe (at the time the star exploded) and the distance to the supernova. with the distance to the exploding supernova in hand, astronomers can use the value of the speed of light along with the theory of general relativity to determine how long it took the light to reach the earth. this then tells them the age of the universe when the supernova exploded.
<p> current evidence suggests that the expansion rate of the universe is accelerating, which means that the second derivative of the scale factor a(t) is positive, or equivalently that its first derivative da/dt is increasing over time. this also implies that any given galaxy recedes from us with increasing speed over time, i.e. for that galaxy the recession velocity v(t) is increasing with time. in contrast, the hubble parameter seems to be decreasing with time, meaning that if we were to look at some fixed distance d and watch a series of different galaxies pass that distance, later galaxies would pass that distance at a smaller velocity than earlier ones.
<p> the accelerating expansion of the universe is the observation that the expansion of the universe is such that the velocity at which a distant galaxy is receding from the observer is continuously increasing with time.
<p> he was also a lead investigator on the aaomega "wigglez" project, which provided some of the key evidence showing that the expansion of the universe is accelerating, driven by the previously unknown dark energy. he described the concept thus: "everything – stars and in particular galaxies – is moving away from each other in all directions at a faster rate. something, which has been called dark energy, is driving that because the most common force that controls motions in the universe, gravity, would cause things to slow down not speed up." the project started in 2006 and ran for four years, taking detailed measurements of 240,000 galaxies and building a three-dimensional map of galaxies. the team of twenty researchers used the 3.9 metre aat and also worked with collaborators in toronto, canada and at the california institute of technology and the jet propulsion laboratory in the us.
<p> another common source of confusion is that the accelerating universe does "not" imply that the hubble parameter is actually increasing with time; since h = (da/dt)/a, in most accelerating models the scale factor a increases relatively faster than da/dt, so h decreases with time. (the recession velocity of one chosen galaxy does increase, but different galaxies passing a sphere of fixed radius cross the sphere more slowly at later times.)
<p> modern observations of accelerating expansion imply that more and more of the currently visible universe will pass beyond our event horizon and out of contact with us. the eventual result is not known. the λcdm model of the universe contains dark energy in the form of a cosmological constant. this theory suggests that only gravitationally bound systems, such as galaxies, will remain together, and they too will be subject to heat death as the universe expands and cools. other explanations of dark energy, called phantom energy theories, suggest that ultimately galaxy clusters, stars, planets, atoms, nuclei, and matter itself will be torn apart by the ever-increasing expansion in a so-called big rip.
<p> the expansion of the universe causes distant galaxies to recede from us faster than the speed of light, if proper distance and cosmological time are used to calculate the speeds of these galaxies. however, in general relativity, velocity is a local notion, so velocity calculated using comoving coordinates does not have any simple relation to velocity calculated locally. (see comoving and proper distances for a discussion of different notions of 'velocity' in cosmology.) rules that apply to relative velocities in special relativity, such as the rule that relative velocities cannot increase past the speed of light, do not apply to relative velocities in comoving coordinates, which are often described in terms of the "expansion of space" between galaxies. this expansion rate is thought to have been at its peak during the inflationary epoch thought to have occurred in a tiny fraction of the second after the big bang (models suggest the period would have been from around 10 seconds after the big bang to around 10 seconds), when the universe may have rapidly expanded by a factor of around 10 to 10. | Gahhh "the speed of acceleration!" Alright, fine, you edited it out :) The rate of expansion isn't a speed. It actually has units of speed divided by distance. You can see this from Hubble's law, v=Hd. v is the apparent recession velocity between two galaxies, d is the distance between them, and H - the Hubble parameter - is the rate of expansion of the Universe. As it turns out, whether the expansion is accelerating or not, there's usually *some* distance past which the recession velocity is greater than the speed of light, just pick d > c/H (often called the Hubble horizon). All that means is that, very roughly speaking, if things continue as they are (for example if H stays the same, which is the hallmark of an accelerating universe), light from regions beyond that won't be able to communicate with us because the light will in effect be unable to "outrun" the expansion. But I'd emphasize that this is a pretty heuristic way of looking at things. |
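The answer above uses Hubble's law, v = H d, and the rough boundary d > c/H beyond which recession is faster than light. A minimal sketch of that arithmetic, assuming a present-day Hubble constant of about 70 km/s/Mpc (a round illustrative value, not a precise measurement):

```python
# Rough estimate of the Hubble horizon d = c / H0, the distance beyond which
# the recession velocity in Hubble's law (v = H0 * d) exceeds c.
# H0 ~ 70 km/s/Mpc is an assumed round number, not a precise measurement.

c = 299_792.458          # speed of light in km/s
H0 = 70.0                # Hubble constant in km/s per megaparsec (assumed)

d_hubble_mpc = c / H0    # Hubble horizon in megaparsecs
mpc_in_ly = 3.2616e6     # light-years per megaparsec
d_hubble_gly = d_hubble_mpc * mpc_in_ly / 1e9

print(f"Hubble horizon ~ {d_hubble_mpc:.0f} Mpc ~ {d_hubble_gly:.1f} billion light-years")
# ~4280 Mpc, i.e. roughly 14 billion light-years: galaxies farther than this
# currently recede faster than light in the loose sense described above.
```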
is the total angular momentum of all bodies in the universe conserved as a whole? | <p> conservation of angular momentum states that j for a closed system, or j for the whole universe, is conserved. however, l and s are "not" generally conserved. for example, the spin–orbit interaction allows angular momentum to transfer back and forth between l and s, with the total j remaining constant.
<p> combining newton's second and third laws, it is possible to show that the linear momentum of a system is conserved. in a system of two particles, if p1 is the momentum of object 1 and p2 the momentum of object 2, then the total momentum p1 + p2 remains constant in the absence of external forces.
<p> under periodic boundary conditions, the linear momentum of the system is conserved, but angular momentum is not. conventional explanation of this fact is based on noether's theorem, which states that conservation of angular momentum follows from rotational invariance of lagrangian. however in a paper it is shown that this approach is not consistent. it fails to explain the absence of conservation of angular momentum of a single particle moving in a periodic cell. lagrangian of the particle is constant and therefore rotationally invariant, while angular momentum of the particle is not conserved. this contradiction is caused by the fact that noether's theorem is usually formulated for closed systems. the periodic cell exchanges mass momentum, angular momentum, and energy with the neighboring cells.
<p> the angular momentum about the "z" axis is "not" , but the quantity , which is not conserved due to the contribution from the magnetic field. the canonical momentum is the conserved quantity. it is still the case that is the linear or translational momentum along the "z" axis, which is also conserved.
<p> the momentum form is preferable since this is readily generalized to more complex systems, generalizes to special and general relativity (see four-momentum). it can also be used with the momentum conservation. however, newton's laws are not more fundamental than momentum conservation, because newton's laws are merely consistent with the fact that zero resultant force acting on an object implies constant momentum, while a resultant force implies the momentum is not constant. momentum conservation is always true for an isolated system not subject to resultant forces.
<p> conservation of angular momentum is the principle that the total angular momentum of a system has a constant magnitude and direction if the system is subjected to no external torque. angular momentum is a property of a physical system that is a constant of motion (also referred to as a "conserved" property, time-independent and well-defined) in two situations:
<p> noether's theorem states that every conservation law is associated with a symmetry (invariant) of the underlying physics. the symmetry associated with conservation of angular momentum is rotational invariance. the fact that the physics of a system is unchanged if it is rotated by any angle about an axis implies that angular momentum is conserved. | If the laws of physics are the same when you rotate the universe, then that symmetry implies that angular momentum is conserved. (Similarly, symmetry over time implies conservation of energy, and symmetry under translation along a spatial direction implies conservation of linear momentum, etc.)
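A compact version of the standard textbook argument behind the answer above; the central-potential example is a generic illustration, not drawn from any of the sources quoted here:

For a particle with Lagrangian $L_{\text{lagr}} = \tfrac{1}{2} m\,\dot{\mathbf{r}}^{\,2} - V(|\mathbf{r}|)$, nothing changes under a rotation of $\mathbf{r}$, and Noether's theorem then singles out the conserved quantity $\mathbf{L} = \mathbf{r} \times m\dot{\mathbf{r}}$. Indeed,

$$
\frac{d\mathbf{L}}{dt} = \dot{\mathbf{r}} \times m\dot{\mathbf{r}} + \mathbf{r} \times m\ddot{\mathbf{r}} = \mathbf{0} - \mathbf{r} \times \nabla V = \mathbf{0},
$$

because $\nabla V$ points along $\mathbf{r}$ when $V$ depends only on $|\mathbf{r}|$. Breaking the rotational symmetry (for example, an external field that picks out a direction) breaks the conservation law at exactly that step.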
askscience ama series: i'm /u/themeaningofhaste and i'm helping to build a galactic-scale gravitational wave detector. ask me anything! | <p> the laser interferometer space antenna (lisa) is a mission led by the european space agency to detect and accurately measure gravitational waves—tiny ripples in the fabric of space-time—from astronomical sources. lisa would be the first dedicated space-based gravitational wave detector. it aims to measure gravitational waves directly by using laser interferometry. the lisa concept has a constellation of three spacecraft arranged in an equilateral triangle with sides 2.5 million km long, flying along an earth-like heliocentric orbit. the distance between the satellites is precisely monitored to detect a passing gravitational wave.
<p> the kamioka gravitational wave detector (kagra), formerly the large scale cryogenic gravitational wave telescope (lcgt), is a project of the gravitational wave studies group at the institute for cosmic ray research (icrr) of the university of tokyo. it aims to be the world's first major (one with ability to actually detect a gravitational wave) gravitational wave observatory that is built underground, and the first major detector to use cryogenic mirrors. it will also be the first major gravitational wave observatory in asia.
<p> there are several current scientific collaborations for observing gravitational waves. there is a worldwide network of ground-based detectors, these are kilometre-scale laser interferometers including: the laser interferometer gravitational-wave observatory (ligo), a joint project between mit, caltech and the scientists of the ligo scientific collaboration with detectors in livingston, louisiana and hanford, washington; virgo, at the european gravitational observatory, cascina, italy; geo600 in sarstedt, germany, and the kamioka gravitational wave detector (kagra), operated by the university of tokyo in the kamioka observatory, japan. ligo and virgo are currently being upgraded to their advanced configurations. advanced ligo began observations in 2015, detecting gravitational waves even though not having reached its design sensitivity yet; advanced virgo is expected to start observing in 2016. the more advanced kagra is scheduled for 2018. geo600 is currently operational, but its sensitivity makes it unlikely to make an observation; its primary purpose is to trial technology.
<p> currently, a number of land-based gravitational wave detectors are in operation, and a mission to launch a space-based detector, lisa, is currently under development, with a precursor mission (lisa pathfinder) which was launched in 2015. gravitational wave observations can be used to obtain information about compact objects such as neutron stars and black holes, and also to probe the state of the early universe fractions of a second after the big bang.
<p> the advanced ligo project to enhance the original ligo detectors began in 2008 and continues to be supported by the nsf, with important contributions from the uk science and technology facilities council, the max planck society of germany, and the australian research council. the improved detectors began operation in 2015. the detection of gravitational waves was reported in 2016 by the ligo scientific collaboration (lsc) and the virgo collaboration with the international participation of scientists from several universities and research institutions. scientists involved in the project and the analysis of the data for gravitational-wave astronomy are organized by the lsc, which includes more than 1000 scientists worldwide, as well as 440,000 active einstein@home users.
<p> - ligo, the laser interferometer gravitational-wave observatory, is a large-scale physics experiment and observatory to detect cosmic gravitational waves and to develop gravitational-wave observations as an astronomical tool. currently two ligo observatories exist: ligo livingston observatory in livingston, louisiana, and ligo hanford observatory near richland, washington.
<p> the laser interferometer gravitational-wave observatory (ligo) is a large-scale physics experiment and observatory to detect cosmic gravitational waves and to develop gravitational-wave observations as an astronomical tool. two large observatories were built in the united states with the aim of detecting gravitational waves by laser interferometry. these can detect a change in the 4 km mirror spacing of less than a ten-thousandth the charge diameter of a proton. | > pre-postdoctoral researcher Does that mean PhD student? Never heard that term before. |
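The context above quotes LIGO's sensitivity as a change of less than a ten-thousandth of a proton's charge diameter over the 4 km arms. A back-of-the-envelope check of the corresponding strain, assuming a proton charge radius of roughly 0.84 fm (an approximate value used only for this sketch):

```python
# Order-of-magnitude check of LIGO's strain sensitivity h = dL / L.
# The proton charge radius (~0.84 fm) is an assumed approximate value; the
# "one ten-thousandth of its diameter" figure comes from the text above.

proton_radius_m = 0.84e-15            # proton charge radius in metres (approx.)
proton_diameter_m = 2 * proton_radius_m
dL = proton_diameter_m / 10_000       # "a ten-thousandth the charge diameter"
L = 4_000.0                           # LIGO arm length in metres

h = dL / L
print(f"dL ~ {dL:.2e} m, strain h ~ {h:.1e}")
# dL ~ 1.7e-19 m and h ~ 4e-23, consistent with the ~1e-22 to 1e-23 strains
# usually quoted for advanced LIGO detections.
```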
so, is travelling faster than the speed of light (using any method) basically impossible because of the time telegraph paradox? | <p> time travel to the past is theoretically possible in certain general relativity spacetime geometries that permit traveling faster than the speed of light, such as cosmic strings, traversable wormholes, and the alcubierre drive. the theory of general relativity does suggest a scientific basis for the possibility of backward time travel in certain unusual scenarios, although arguments from semiclassical gravity suggest that when quantum effects are incorporated into general relativity, these loopholes may be closed. these semiclassical arguments led stephen hawking to formulate the chronology protection conjecture, suggesting that the fundamental laws of nature prevent time travel, but physicists cannot come to a definite judgment on the issue without a theory of quantum gravity to join quantum mechanics and general relativity into a completely unified theory.
<p> miguel alcubierre briefly discusses some of these issues in a series of lecture slides posted online, where he writes: "beware: in relativity, any method to travel faster than light can in principle be used to travel back in time (a time machine)". in the next slide he brings up the chronology protection conjecture and writes: "the conjecture has not been proven (it wouldn’t be a conjecture if it had), but there are good arguments in its favor based on quantum field theory. the conjecture does not prohibit faster-than-light travel. it just states that if a method to travel faster than light exists, and one tries to use it to build a time machine, something will go wrong: the energy accumulated will explode, or it will create a black hole."
<p> a weaker form of einstein's locality principle remains intact, which is this: classical, history-setting information cannot be transmitted faster than the speed of light "c", not even by using quantum entanglement events. this weaker form of locality is less conceptually elegant than einstein's absolute locality, but is sufficient to prevent the emergence of causality paradoxes.
<p> since it is meaningless to measure a one-way velocity prior to the synchronisation of distant clocks, experiments claiming a measure of the one-way speed of light can often be reinterpreted as verifying the laue-weyl's round-trip condition.
<p> - some observers with sub-light relative motion will disagree about which occurs first of any two events that are separated by a space-like interval. in other words, any travel that is faster-than-light will be seen as traveling backwards in time in some other, equally valid, frames of reference, or need to assume the speculative hypothesis of possible lorentz violations at a presently unobserved scale (for instance the planck scale). therefore, any theory which permits "true" ftl also has to cope with time travel and all its associated paradoxes, or else to assume the lorentz invariance to be a symmetry of thermodynamical statistical nature (hence a symmetry broken at some presently unobserved scale).
<p> some authors such as mansouri and sexl (1977) as well as will (1992) argued that this problem doesn't affect measurements of the isotropy of the one-way speed of light, for instance, due to direction dependent changes relative to a "preferred" (aether) frame σ. they based their analysis on a specific interpretation of the rms test theory in relation to experiments in which light follows a unidirectional path and to slow clock-transport experiments. will agreed that it is impossible to measure the one-way speed between two clocks using a time-of-flight method without synchronization scheme, though he argued: ""...a test of the isotropy of the speed between the same two clocks as the orientation of the propagation path varies relative to σ should not depend on how they were synchronized..."". he added that aether theories can only be made consistent with relativity by introducing ad-hoc hypotheses. in more recent papers (2005, 2006) will referred to those experiments as measuring the ""isotropy of light speed using one-way propagation"".
<p> it is also debatable whether faster-than-light travel is physically possible, in part because of causality concerns: travel faster than light may, under certain conditions, permit travel backwards in time within the context of special relativity. proposed mechanisms for faster-than-light travel within the theory of general relativity require the existence of exotic matter and it is not known if this could be produced in sufficient quantity. | the lorentz transformation just doesn't make much sense for speeds above c. you see, the term sqrt( 1 - (v/c)^2 ) that appears everywhere becomes imaginary for v > c. but imaginary numbers don't make sense for lengths or times. there are plenty of strange things implied by ftl travel. i don't think one could say that this telegraph paradox is the reason for that; it's more of an example of how faster-than-light travel is impossible if you want to keep causality and such.
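The answer above points at the factor sqrt(1 - (v/c)^2); evaluating it directly makes the point concrete. A minimal sketch, in units where c = 1 (the sample speeds are arbitrary):

```python
# The factor sqrt(1 - (v/c)^2) from the Lorentz transformation becomes
# imaginary once v > c, which is the point the answer above is making.
import cmath

c = 1.0  # work in units where c = 1

for v in (0.5, 0.9, 0.99, 1.5):
    factor = cmath.sqrt(1 - (v / c) ** 2)
    print(f"v = {v}c  ->  sqrt(1 - v^2/c^2) = {factor}")
# For v = 1.5c the result is purely imaginary (about 1.118j), so lengths and
# times transformed with it no longer come out as real numbers.
```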
how is the human microbiome inherited and is it localized in only the gut and skin or can it also be found in other body regions like the brain? | <p> in humans, the gut microbiota has the largest numbers of bacteria and the greatest number of species compared to other areas of the body. in humans, the gut flora is established at one to two years after birth, by which time the intestinal epithelium and the intestinal mucosal barrier that it secretes have co-developed in a way that is tolerant to, and even supportive of, the gut flora and that also provides a barrier to pathogenic organisms.
<p> in humans, the gut microbiota has the largest numbers of bacteria and the greatest number of species compared to other areas of the body. in humans the gut flora is established at one to two years after birth, and by that time the intestinal epithelium and the intestinal mucosal barrier that it secretes have co-developed in a way that is tolerant to, and even supportive of, the gut flora and that also provides a barrier to pathogenic organisms.
<p> the gut microbiome has emerged in recent years as an important player in human health. its prevalent functions are related to the fermentation of indigestible food components, competitions with pathogen, strengthening of the intestinal barrier, stimulation and regulation of the immune system.
<p> over the last few decades, research on the perinatal acquisition of microbiota in humans has expanded as a result of developments in dna sequencing technology. bacteria have been detected in umbilical cord blood, amniotic fluid, and fetal membranes of healthy, term babies. the meconium, an infant’s first bowel movement of digested amniotic fluid, has also been shown to contain a diverse community of microbes. these microbial communities consist of genera commonly found in the mouth and intestines, which may be transmitted to the uterus via the blood stream, and in the vagina, which may ascend through the cervix.
<p> the gut flora is the complex community of microorganisms that live in the digestive tracts of humans and other animals. the gut metagenome is the aggregate of all the genomes of gut microbiota. the gut is one niche that human microbiota inhabit.
<p> human gut microbiota play a key role in the intestinal immune system. galacto-oligosaccharides support natural defenses of the human body via the gut microflora, indirectly by increasing the number of bacteria in the gut and inhibiting the binding or survival of "escherichia coli", "salmonella" typhimurium and "clostridia". gos can positively influence the immune system indirectly through the production of antimicrobial substances, reducing the proliferation of pathogenic bacteria.
<p> the human microbiome is the aggregate of all microbiota that resides on or within any of a number of human tissues and biofluids, including the skin, mammary glands, placenta, seminal fluid, uterus, ovarian follicles, lung, saliva, oral mucosa, conjunctiva, biliary and gastrointestinal tracts. they include bacteria, archaea, fungi, protists and viruses. though micro-animals can also live on the human body, they are typically excluded from this definition. in the context of genomics, the human microbiome is sometimes used to refer to the collective genomes of resident microorganisms; however, the term "human metagenome" has the same meaning. | Microbes in the brain would be life threatening. Other than the skin and your GI tract, the only place on the body where a stable microbial community persists in a healthy person would be the portion of the upper respiratory tract consisting of the sinus/nasal cavity, and not much anywhere else. This doesn't mean that you won't ever find microbes persisting anywhere else. Some parasitic infections, like toxoplasmosis, can be asymptomatic, and dormant viral infections can behave the same way. But these are never considered part of one's microbiome in the way commensal bacteria are.
what does the study that says that the earth's temperature was warmer 2000 years ago than it is today mean? | <p> in 2007 the national oceanic and atmospheric administration stated that the "u.s. and global annual temperatures are now approximately 1.0°f warmer than at the start of the 20th century, and the rate of warming has accelerated over the past 30 years, increasing globally since the mid-1970s at a rate approximately three times faster than the century-scale trend. the past nine years have all been among the 25 warmest years on record for the contiguous u.s., a streak which is unprecedented in the historical record."
<p> by the end of the 21st century, temperatures may increase to a level not experienced since the mid-pliocene, around 3 million years ago. at that time, models suggest that mean global temperatures were about 2–3 °c warmer than pre-industrial temperatures. in the early pliocene era, the global temperature was only 1-2°c warmer than now, but sea level was 15-25 meters higher.
<p> the temperature data was updated in 1999 to report that 1998 was the warmest year since the instrumental data began in 1880. they also found that the rate of temperature change was larger than at any time in instrument history, and concluded that the recent el niño was not solely responsible for the large temperature anomaly in 1998. in spite of this, the united states had seen a smaller degree of warming, and a region in the eastern u.s. and the western atlantic ocean had actually cooled slightly.
<p> in a january 2013 survey, pew found that 69% of americans say there is solid evidence that the earth's average temperature has gotten warmer over the past few decades, up six points since november 2011 and 12 points since 2009.
<p> the national science board's "patterns and perspectives in environmental science" report of 1972 discussed the cyclical behavior of climate, and the understanding at the time that the planet was entering a phase of cooling after a warm period. "judging from the record of the past interglacial ages, the present time of high temperatures should be drawing to an end, to be followed by a long period of considerably colder temperatures leading into the next glacial age some 20,000 years from now." but it also continued; "however, it is possible, or even likely, that human interference has already altered the environment so much that the climatic pattern of the near future will follow a different path."
<p> the u.s. national academy of sciences, both in its 2002 report to president george w. bush, and in later publications, has strongly endorsed evidence of an average global temperature increase in the 20th century.
<p> the mann, bradley and hughes reconstruction covering 1,000 years (mbh99) was published by "geophysical research letters" in march 1999 with the cautious title "northern hemisphere temperatures during the past millennium: inferences, uncertainties, and limitations". mann said that "as you go back farther in time, the data becomes sketchier. one can't quite pin things down as well, but, our results do reveal that significant changes have occurred, and temperatures in the latter 20th century have been exceptionally warm compared to the preceding 900 years. though substantial uncertainties exist in the estimates, these are nonetheless startling revelations." when mann gave a talk about the study to the national oceanic and atmospheric administration's geophysical fluid dynamics laboratory, climatologist jerry d. mahlman nicknamed the graph the "hockey stick". | All the study means is that it was hotter then than it is now. Without evidence on why that matters, it means nothing else. We don't look at evidence in a vacuum, we use it as part of a larger system of establishing causation and work from there. |
how is magnetized plasma created and what kind of gasses produce them? | <p> in the pioneering experiment, los alamos national laboratory's frx-l, a plasma is first created at low density by transformer-coupling an electric current through a gas inside a quartz tube (generally a non-fuel gas for testing purposes). this heats the plasma to about (~2.3 million degrees). external magnets confine fuel within the tube. plasmas are electrically conducting, allowing a current to pass through them. this current, generates a magnetic field that interacts with the current. the plasma is arranged so that the fields and current stabilize within the plasma once it is set up, self-confining the plasma. frx-l uses the field-reversed configuration for this purpose. since the temperature and confinement time is 100x lower than in mcf, the confinement is relatively easy to arrange and does not need the complex and expensive superconducting magnets used in most modern mcf experiments.
<p> plasma is initiated in the system by applying a strong rf (radio frequency) electromagnetic field to the wafer platter. the field is typically set to a frequency of 13.56 megahertz, applied at a few hundred watts. the oscillating electric field ionizes the gas molecules by stripping them of electrons, creating a plasma.
<p> plasma techniques are especially useful because they can deposit ultra thin (a few nm), adherent, conformal coatings. glow discharge plasma is created by filling a vacuum with a low-pressure gas (ex. argon, ammonia, or oxygen). the gas is then excited using microwaves or current which ionizes it. the ionized gas is then thrown onto a surface at a high velocity where the energy produced physically and chemically changes the surface. after the changes occur, the ionized plasma gas is able to react with the surface to make it ready for protein adhesion. however, the surfaces may lose mechanical strength or other inherent properties because of the high amounts of energy.
<p> the plasma is created by a cascaded arc plasma source, which exhausts into the spherical vacuum vessel. gases such as helium, hydrogen, nitrogen, argon, xenon can be used to create plasma. samples are mounted in direct view of the plasma. the plasma species interact with the atoms of the sample, leading to surface modifications. in general this process is called plasma processing.
<p> surrounding the entire assembly is the 2,600 tonne eight-limbed transformer which is used to induce a current into the plasma. the primary purpose of this current is to generate a poloidal field that mixes with the one supplied by the toroidal magnets to produce the twisted field inside the plasma. the current also serves the secondary purpose of ionizing the fuel and providing some heating of the plasma before other systems take over.
<p> a plasma is a fluid consisting of a large number of free charged particles (globally neutral and whose kinetic energy is larger than the electrostatic potential energy between them). the charges and currents that conform a plasma are sources of the electromagnetic fields and, in turn, these fields affect the distribution of charges and currents which makes its dynamics highly nonlinear and very different from that of a neutral gas. when the magnetic fields are capable of modifying an individual particle trajectory, it is said that the plasma is magnetized. the corona is highly magnetized and therefore, several structures are observed, some of which can maintain its stability for relatively long times as dark filaments on the surface of the sun.
<p> plasma is an ionized gas that conducts electricity. in bulk, it is modeled using magnetohydrodynamics, which is a combination of the navier–stokes equations governing fluids and maxwell's equations governing how magnetic and electric fields behave. fusion exploits several plasma properties, including: | The gases can be most anything. Hydrogen (including its fusion-fuel isotopes deuterium and/or tritium) or helium are common choices. Storage generally requires some sort of magnetic confinement. A toroidal magnetic field geometry such as in a tokamak is a common choice, though there are other geometries, such as stellarators. In tokamaks, plasma is often made by pumping the chamber full of neutral gas and then sending current through a central coil, discharging in the medium. This acts to heat the medium (through Ohmic heating) and drives a toroidal plasma current that generates a poloidal magnetic field in the plasma. Alternatively, plasma can be heated by propagating energetic neutral beams into the medium or using some sort of radio-frequency or microwave heating. You can also make magnetized plasma in the laboratory by zapping matter with an intense laser beam. One way of generating magnetic fields in such plasma is to produce misaligned gradients in electron density and electron temperature, causing a thermoelectric magnetic field to grow in the medium. These magnetic fields can be quite large, up to ~10^9 Gauss in the case of the highest intensity lasers interacting with solid-density matter. Edit: added some links
this may be a silly question or a well-known fact, but what is it that makes a person's face oilier or greasier than, say, their arms? | <p> while his physical appearance might have caused initial surprise, due to a somewhat frumpy wardrobe and unique gait, accounts of his great compassion and interest in others, his broad awareness of the world beyond the borders of his home and work, and his ability to see all sides of argument and decision made him one whose physical appearance was quickly forgotten when engaged in entertaining conversation.
<p> oily skin is caused by over-active sebaceous glands, that produce a substance called sebum, a naturally healthy skin lubricant. when the skin produces excessive sebum, it becomes heavy and thick in texture. oily skin is typified by shininess, blemishes and pimples. the oily-skin type is not necessarily bad, since such skin is less prone to wrinkling, or other signs of aging, because the oil helps to keep needed moisture locked into the epidermis (outermost layer of skin).
<p> conant wrote that he thought that kelly's sole flaw in her appearance was her jaw, which he considered too square. he would use a dog or a baby to disguise it when photographing her below her jaw. conant later said that "you trusted grace's beauty...you knew it wasn't built from clothes and makeup...this was grace: natural, unpretentious".
<p> with a playful cynicism he remarked of his popularity as a portraitist with high society women, "the essential thing is to elongate the women and especially to make them slim. after that it just remains to enlarge their jewels. they are ravished." this remark is reminiscent of another of his sayings: "painting is the most beautiful of lies".
<p> corpses swell as gases from decomposition accumulate in the torso and the increased pressure forces blood to ooze from the nose and mouth. this causes the body to look "plump", "well-fed", and "ruddy"—changes that are all the more striking if the person was pale or thin in life. in the arnold paole case, an old woman's exhumed corpse was judged by her neighbours to look more plump and healthy than she had ever looked in life. the exuding blood gave the impression that the corpse had recently been engaging in vampiric activity.
<p> for women, face, figure, coiffure, posture, and grooming had become important fashion factors in addition to clothing. in particular, cosmetics became a major industry. women did not feel ashamed for caring about their appearance and it was a declaration of self-worth and vanity, hence why they no longer wanted to achieve a natural look. for evenings and events, the popular look was a smoky eye with long lashes, rosy cheeks and a bold lip. to emphasize the eyes, kohl eyeliner became popular, and was the first time they knew anything of eyeliner (information about egyptian fashion was not discovered until later on in the 20s). women also started wearing foundation and using pressed powder. also, with the invention of the swivel lipstick, lipstick was on the rise with bright colors and they applied their lipstick to achieve a cupid's bow and “bee stung” look.
<p> in a 2013 interview, comedian and director stephen merchant remarked: "cool could genuinely contort his face; he was kind [of] extraordinary as an impressionist because he would actually change his face, without makeup, to look like the people he was doing. and he did this aquaphibian and he scrunched his face up, and i remember just actually weeping with laughter. i had never seen anything as funny as that." | Sebaceous glands. And no question is ever silly. |
how does underwater pressure work in a cave system? | <p> the cave passages in the park are said to "breathe" as air continually moves into or out of them, equalizing the atmospheric pressure of the cave and the outside air. when the air pressure is higher outside the cave than inside it, air flows into the cave, raising the cave's pressure to match the outside pressure. when the air pressure inside the cave is higher than outside it, air flows out of the cave, lowering the air pressure within the cave. a large cave such as wind cave with only a few small openings will "breathe" more obviously than a small cave with many large openings.
<p> the historically older open diving chamber, known as an open diving bell or wet bell, is in effect a compartment with an open bottom that contains a gas space above a free water surface, which allows divers to breathe underwater. the compartment may be large enough to fully accommodate the divers above the water, or may be smaller, and just accommodate head and shoulders. internal air pressure is at the pressure of the free water surface, and varies accordingly with depth. the breathing gas supply for the open bell may be self-contained, or more usually, supplied from the surface via flexible hose, which may be combined with other hoses and cables as a bell umbilical. an open bell may also contain a breathing gas distribution panel with divers' umbilicals to supply divers with breathing gas during excursions from the bell, and an on-board emergency gas supply in high-pressure storage cylinders. this type of diving chamber can only be used underwater, as the internal gas pressure is directly proportional to the depth underwater, and raising or lowering the chamber is the only way to adjust the pressure.
<p> in cave diving, a torricellian chamber is a cave chamber with an airspace above the water at less than atmospheric pressure. this is formed when the water level drops and there is no way for more air to get into the chamber. in theory such chambers could pose a risk of decompression sickness to divers, similar to flying after diving. also, in a torricellian chamber the diver's depth gauge is unlikely to give an accurate reading of pressure as most depth gauges are not designed to show depths less than zero.
<p> when used underwater there are two ways to prevent water flooding in when the submersible hyperbaric chamber's hatch is opened. the hatch could open into a moon pool chamber, and then its internal pressure must first be equalised to that of the moon pool chamber. more commonly the hatch opens into an underwater airlock, in which case the main chamber's pressure can stay constant, while it is the airlock pressure that shifts. this common design is called a lock-out chamber, and is used in submarines, submersibles, and underwater habitats as well as diving chambers.
<p> a sealable diving chamber, closed bell or dry bell is a pressure vessel with hatches large enough for people to enter and exit, and a compressed breathing gas supply to raise the internal air pressure. such chambers provide a supply of oxygen for the user, and are usually called hyperbaric chambers whether used underwater, at the water surface or on land to produce underwater pressures. however, some use the term "submersible chamber" to refer to those used underwater and "hyperbaric chamber" for those used out of water. there are two related terms that reflect particular usages rather than technically different types:
<p> releasing air underwater forms bubble rings, which are vortex rings of water with bubbles (or even a single donut-shaped bubble) trapped along its axis line. such rings are often produced by scuba divers and dolphins.
<p> the water pressure on the outer hatch is always greater than the air pressure inside the submarine, which prevents opening the hatch. only when the pressure inside the escape chamber is equal to the sea pressure can the hatch be opened. thus the compartment must be sealed off from the interior of the submarine and the pressure inside the chamber must be raised to sea pressure in order to make it possible to open the escape hatch. | It will wind up being the same. It turns out that water outside the cave is being pressed into the cave by the water pressure at that depth. So, water will rush into the cave until something exactly balances that force. For example, if there were an air pocket, the pressure of the air would have to exactly balance the pressure of the water at the air-water surface. This effect is called Pascal's law. And while it is strictly stated for fluids in an enclosed space, it still applies as a good approximation here. You can find a NASA article about it here:
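A small worked example of the pressure balance described in the answer above, using the hydrostatic relation P = P_atm + rho * g * h; the 30 m depth and the freshwater density are illustrative assumptions:

```python
# Hydrostatic pressure at depth: P = P_atm + rho * g * h.
# Any trapped air pocket in a flooded cave must sit at roughly this pressure
# at its air-water surface, otherwise water would keep flowing in or out.
# The 30 m depth is an arbitrary illustrative choice.

P_atm = 101_325.0   # atmospheric pressure at sea level, Pa
rho = 1000.0        # density of fresh water, kg/m^3 (sea water is ~1025)
g = 9.81            # gravitational acceleration, m/s^2
h = 30.0            # depth of the air pocket's water surface, m

P = P_atm + rho * g * h
print(f"Pressure at {h:.0f} m depth ~ {P/1000:.0f} kPa ~ {P/P_atm:.1f} atm")
# ~396 kPa, i.e. about 3.9 atmospheres: the trapped air is compressed until
# its pressure matches the water pressure at that depth.
```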
how does the brain physically recover from traumatic events? | <p> there are multiple responses of the body to brain injury, occurring at different times after the initial occurrence of damage, as the functions of the neurons, nerve tracts, or sections of the brain can be affected by damage. the immediate response can take many forms. initially, there may be symptoms such as swelling, pain, bruising, or loss of consciousness. post-traumatic amnesia is also common with brain damage, as is temporary aphasia, or impairment of language.
<p> traumatic brain injury happens when the head suffers from a sharp blow, or suddenly accelerates or decelerates. in these cases, the brain gets churned around, and can be damaged by the bony bumps and knobs inside the skull, or by the twisting and tearing of fibres in the brain. if the traumatic brain injury is severe enough, it can lead to an initial coma, which is then followed by a time of post-traumatic amnesia. post traumatic amnesia typically resolves itself gradually, however it will leave a mild, but permanent deficit in the patient's memory.
<p> brain injuries have far-reaching and varied consequences due to the nature of the brain as the main source of bodily control. brain-injured people commonly experience issues with memory. these can involve either long- or short-term memory, depending on the location and severity of the injury. sometimes memory can be improved through rehabilitation, although the impairment can be permanent. behavioral and personality changes are also commonly observed due to changes of the brain structure in areas controlling hormones or major emotions. headaches and pain can also occur as a result of a brain injury either directly from the damage or due to neurological conditions stemming from the injury. due to the changes in the brain as well as the issues associated with the change in physical and mental capacity, depression and low self-esteem are common side effects that can be treated with psychological help. antidepressants must be used with caution in people with brain injuries due to the potential for undesired effects because of the already altered brain chemistry.
<p> traumatic brain injuries vary in their mechanism of injury, producing a blunt or penetrating trauma resulting in a primary and secondary injury with excitotoxicity and relatively wide spread neuronal death. due to the overwhelming number of traumatic brain injuries as a result of the war on terror, tremendous amounts of research have been placed towards a better understanding of the pathophysiology of traumatic brain injuries as well as neuroprotective interventions and possible interventions prompting restorative neurogenesis. hormonal interventions, such as progesterone, estrogen, and allopregnanolone have been examined heavily in recent decades as possible neuroprotective agents following traumatic brain injuries to reduce the inflammation response stunt neuronal death. in rodents, lacking the regenerative capacity for adult neurogenesis, the activation of stem cells following administration of α7 nicotinic acetylcholine receptor agonist, pnu-282987, has been identified in damaged retinas with follow-up work examining activation of neurogenesis in mammals after traumatic brain injury. currently, there is no medical intervention that has passed phase-iii clinical trials for use in the human population.
<p> brain healing is the process that occurs after the brain has been damaged. if an individual survives brain damage, the brain has a remarkable ability to adapt. when cells in the brain are damaged and die, for instance by stroke, there will be no repair or scar formation for those cells. the brain tissue will undergo liquefactive necrosis, and a rim of gliosis will form around the damaged area.
<p> causes include falls, vehicle collisions, and violence. brain trauma occurs as a consequence of a sudden acceleration or deceleration within the cranium or by a complex combination of both movement and sudden impact. in addition to the damage caused at the moment of injury, a variety of events following the injury may result in further injury. these processes include alterations in cerebral blood flow and the pressure within the skull. some of the imaging techniques used for diagnosis include computed tomography and magnetic resonance imaging (mris).
<p> traumatic brain injury is defined as damage to the brain resulting from external mechanical force, such as rapid acceleration or deceleration, impact, blast waves, or penetration by a projectile. brain function is temporarily or permanently impaired and structural damage may or may not be detectable with current technology. | Hey u/frying_pans - hope I can help out some. Assuming you are talking about psychological trauma, and not physical - there hasn't been too much research that I can find about recovery without the use of pharmaceuticals. First off, stress has numerous effects on the body and the brain - more than can be described in a reddit post. You can delve into the sympathetic nervous system for months in high level physiology classes, significantly more than my knowledge. However, I do have a pretty decent understanding of medical terminology and the sympathetic response, so I will try and dissect a study for you - found here: . To read the full study, you would need access through a university or institution, or pay a ridiculous price unless this is your field. The main focus of this study is corticostriatal circuitry, which is the set of connections in the brain that influence your goals, behavior, motivation, and cognitive actions. Major stress can reduce the effectiveness of this circuitry. How, I am not too sure. However, part of the recovery process involves the rebuilding of this network, and strengthening the integrity of these connections. This allows for a lower fractional anisotropy, which relates to diffusion of chemicals and substances. Diffusion is the process of particles moving down a concentration gradient, whether it be into cells or out of some cells. In layman's terms, the connections in the brain rebuild themselves, allowing chemicals to more easily diffuse into cell membranes, leading to better decision making, motivation, behavior, and goals. The study mentioned was a two year study, with relatively few candidates - however neuro is a field which is evolving every day. I would love to check back in a few years and read up more on this! Hope I helped some!
why does a higher gear consume less than a lower gear at the same speed? | <p> different gears and ranges of gears are appropriate for different people and styles of cycling. multi-speed bicycles allow gear selection to suit the circumstances: a cyclist could use a high gear when cycling downhill, a medium gear when cycling on a flat road, and a low gear when cycling uphill. in a lower gear every turn of the pedals leads to fewer rotations of the rear wheel. this allows the energy required to move the same distance to be distributed over more pedal turns, reducing fatigue when riding uphill, with a heavy load, or against strong winds. a higher gear allows a cyclist to make fewer pedal turns to maintain a given speed, but with more effort per turn of the pedals.
<p> while long steep hills and/or heavy loads may indicate a need for lower gearing, this can result in a very low speed. balancing a bicycle becomes more difficult at lower speeds. for example, a bottom gear around 16 gear inches gives an effective speed of perhaps 3 miles/hour (5 km/hour) or less, at which point it might be quicker to walk.
<p> the impetus is to minimize overdrive use and provide a higher ratio first gear, which means more gears between the first and the last to keep the engine at its most efficient speed. this is part of the reason that modern automobiles tend to have larger numbers of gears in their transmissions. it is also why more than one overdrive gear is seldom seen in a vehicle except in special circumstances i.e. where high (numerical) differential gear is required to get the vehicle moving as in trucks or performance cars though double overdrive transmissions are common in other vehicles, often with a small number on the axle gear reduction, but usually only engage at speeds exceeding .
<p> for instance, a given car traveling on a road of a given slope presents a load which the engine must act against. because air resistance increases with speed, the motor must put out more torque at a higher speed in order to maintain the speed. by shifting to a higher gear, one may be able to meet the requirement with a higher torque and a lower engine speed, whereas shifting to a lower gear has the opposite effect. accelerating increases the load, whereas decelerating decreases the load.
<p> it is possible for the next higher gear to be such that upshifting lowers the engine speed excessively, resulting in the engine being operated outside its "power band". for example, the 1967 porsche 911 s produced 160 hp at 6600/min and 179 nm of torque at 5200/min. using the standard transmission gear ratios above, assuming the driver shifts from 2nd to 3rd gear at 6,600/min, the engine speed would fall to 4,990/min (which is 6600 x 1.27 / 1.78). in this case, shifting up to 3rd gear causes the engine speed to be slightly below the speed at which maximum power is produced. by using a close-ratio gearbox, such as the hill climb example above, shifting to 3rd gear would drop engine speed to 5,110/min (6600 x 1.55 / 2.00), which almost coincides with the maximum power output of the engine.
<p> the gearing range indicates the difference between bottom gear and top gear, and provides some measure of the range of conditions (high speed versus steep hills) with which the gears can cope; the strength, experience, and fitness level of the cyclist are also significant. a range of 300% or 3:1 means that for the same pedalling speed a cyclist could travel 3 times as fast in top gear as in bottom gear (assuming sufficient strength, etc.). conversely, for the same pedalling effort, a cyclist could climb a much steeper hill in bottom gear than in top gear.
<p> factory 4-speed or 5-speed transmission ratios generally have a greater difference between gear ratios and tend to be effective for ordinary driving and moderate performance use. wider gaps between ratios allow a higher 1st gear ratio for better manners in traffic, but cause engine speed to decrease more when shifting. narrowing the gaps will increase acceleration at speed, and potentially improve top speed under certain conditions, but acceleration from a stopped position and operation in daily driving will suffer. | ICE engines are designed to run at an optimal RPM; this is due to things like sealing, piston weight, etc. Any time you can operate in this optimal condition, the engine delivers more of its power to the drivetrain instead of losing it to things like friction. So when you switch to a higher gear at the same road speed, the engine can run at a lower RPM, closer to that optimal condition, and it burns less fuel. |
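The answer above comes down to simple drivetrain arithmetic: at a fixed road speed, engine RPM is proportional to the overall gear ratio, which is also the relation the Porsche example in the context uses for engine speed after an upshift. Below is a minimal Python sketch of that relation; the gear ratios, final-drive ratio, and wheel diameter are made-up illustrative values, not figures taken from the context.

```python
import math

def engine_rpm(road_speed_kmh, gear_ratio, final_drive, wheel_diameter_m=0.63):
    """Engine speed needed to hold a given road speed with a given overall ratio."""
    wheel_rpm = (road_speed_kmh * 1000 / 60) / (math.pi * wheel_diameter_m)
    return wheel_rpm * gear_ratio * final_drive

# Hypothetical ratios: 4th gear 1.00:1 vs. 5th (overdrive) 0.80:1, final drive 3.7:1
for gear, ratio in [("4th", 1.00), ("5th", 0.80)]:
    print(gear, round(engine_rpm(100, ratio, 3.7)), "rpm at 100 km/h")
```

The overdrive gear turns the same road speed into roughly 20% fewer engine revolutions, which is where the fuel saving described in the answer comes from.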
how does dna get divided in half on spermatozoons and eggs? | <p> spermatozoa are produced in a multi-step process. a primary spermatocyte with the full diploid number of chromosomes divides to form two secondary spermatocytes which are haploid, i.e. each has half the diploid number of chromosomes. each secondary spermatocyte then divides to produce two spermatids which undergo further development to form spermatozoa. in synspermia, two or more spermatids from the same spermatocyte fuse together and are enclosed in an envelope, forming a "capsule". this contrasts with cleistospermia, where the capsules enclose individual spermatozoa. after transfer to the female in either form, decapsulation and activation are necessary before the resulting spermatozoa can fertilize eggs.
<p> the entire process of spermatogenesis can be broken up into several distinct stages, each corresponding to a particular type of cell in humans. in the following table, ploidy, copy number and chromosome/chromatid counts are for one cell, generally prior to dna synthesis and division (in g1 if applicable). the primary spermatocyte is arrested after dna synthesis and prior to division.
<p> ovulated eggs become arrested in metaphase ii until fertilization triggers the second meiotic division. similar to the segregation events of mitosis, the pairs of sister chromatids resulting from the separation of bivalents in meiosis i are further separated in anaphase of meiosis ii. in oocytes, one sister chromatid is segregated into the second polar body, while the other stays inside the egg. during spermatogenesis, each meiotic division is symmetric such that each primary spermatocyte gives rise to 2 secondary spermatocytes after meiosis i, and eventually 4 spermatids after meiosis ii. meiosis ii-nondisjunction may also result in aneuploidy syndromes, but only to a much smaller extent than do segregation failures in meiosis i.
<p> during the formation of sperm, protamine binds to the phosphate backbone of dna using the arginine-rich domain as an anchor. dna is then folded into a toroid, an o-shaped structure, although the mechanism is not known. a sperm cell can contain up to 50,000 toroid-shaped structures in its nucleus with each toroid containing about 50 kilobases. before the toroid is formed, histones are removed from the dna by transition nuclear proteins, so that protamine can condense it. the effects of this change are 1) an increase in sperm hydrodynamics for better flow through liquids by reducing the head size 2) decrease in the occurrence of dna damage 3) removal of the epigenetic markers that occur with histone modifications.
<p> in eukaryotes, dna is organized with the help of histones into compact particles called nucleosomes, where sequences of about 147 dna base pairs make ~1.65 turns around histone protein octamers. dna within nucleosomes is inaccessible to many transcription factors. some transcription factors, so-called pioneering factors are still able to bind their dna binding sites on the nucleosomal dna. for most other transcription factors, the nucleosome should be actively unwound by molecular motors such as chromatin remodelers. alternatively, the nucleosome can be partially unwrapped by thermal fluctuations, allowing temporary access to the transcription factor binding site. in many cases, a transcription factor needs to compete for binding to its dna binding site with other transcription factors and histones or non-histone chromatin proteins. pairs of transcription factors and other proteins can play antagonistic roles (activator versus repressor) in the regulation of the same gene.
<p> the spermatozoon that fertilizes an oocyte will contribute its pronucleus, the other half of the zygotic genome. in some species, the spermatozoon will also contribute a centriole, which will help make up the zygotic centrosome required for the first division. however, in some species, such as in the mouse, the entire centrosome is acquired maternally. currently under investigation is the possibility of other cytoplasmic contributions made to the embryo by the spermatozoon.
<p> nuclear dna has two copies per cell (except for sperm and egg cells), one copy being inherited from the father and the other from the mother. mitochondrial dna, however, is strictly inherited from the mother and each mitochondrial organelle typically contains between 2 and 10 mtdna copies. during cell division the mitochondria segregate randomly between the two new cells. those mitochondria make more copies, normally reaching 500 mitochondria per cell. as mtdna is copied when mitochondria proliferate, they can accumulate random mutations, a phenomenon called heteroplasmy. if only a few of the mtdna copies inherited from the mother are defective, mitochondrial division may cause most of the defective copies to end up in just one of the new mitochondria (for more detailed inheritance patterns, see human mitochondrial genetics). mitochondrial disease may become clinically apparent once the number of affected mitochondria reaches a certain level; this phenomenon is called "threshold expression". | Check out meiosis - the process by which haploid cells are generated - and chromosomal crossover - how you can have exchange of information between homologous chromosomes. |
could nano-technology ever be used to re-arrange atoms into something else? | <p> in 2011, new york university scientists have developed artificial structures that can self-replicate, a process that has the potential to yield new types of materials. they have demonstrated that it is possible to replicate not just molecules like cellular dna or rna, but discrete structures that could in principle assume many different shapes, have many different functional features, and be associated with many different types of chemical species.
<p> present-day technologies are limited in various ways. large atomically precise structures (that is, virtually defect-free) do not exist. complex 3d nanoscale structures exist in the form of folded linear molecules such as dna origami and proteins. it is also possible to build very small atomically precise structures using scanning probe microscopy to construct molecules such as feco and triangulene, or to perform hydrogen depassivation lithography. but it is not yet possible to combine components in a systematic way to build larger, more complex systems.
<p> by integrating synthetic biology with materials science, it would be possible to use cells as microscopic molecular foundries to produce materials with properties whose properties were genetically encoded. re-engineering has produced curli fibers, the amyloid component of extracellular material of biofilms, as a platform for programmable nanomaterial. these nanofibers were genetically constructed for specific functions, including adhesion to substrates, nanoparticle templating and protein immobilization.
<p> in "safe exponential manufacturing", which was published in a 2004 issue of "nanotechnology", it was suggested that creating manufacturing systems with the ability to self-replicate by the use of their own energy sources would not be needed. the foresight institute also recommended embedding controls in the molecular machines. these controls would be able to prevent anyone from purposely abusing nanotechnology, and therefore avoid the gray goo scenario.
<p> bullet::::- 2001 - scientists assembled molecules into basic circuits, raising hopes for a new world of nanoelectronics. if researchers can wire these circuits into intricate computer chip architectures, this new generation of molecular electronics will undoubtedly provide computing power to launch scientific breakthroughs for decades.
<p> beyond synthesis techniques to create single molecules, the key challenge of atomically precise manufacturing is in the assembly of molecular building blocks into larger and more complex objects that are also atomically precise. the two known methods for doing this are self-assembly and positional assembly. molecules that have been designed or have evolved to bind together, typically along conformal surfaces, will self-assemble under the right conditions. in the production of atomically precise membranes, molecules can arrange themselves on the surface of a liquid and then be chemically bound to each other . complex atomically precise self-assembled objects are also possible: striking examples include the robot-like enterobacteria phage t4 and the bacterial flagellar motor . in these cases, free-floating "parts" (proteins) in solution self-assemble into three-dimensional objects. self-assembly apm is experimentally accessible today.
<p> the most advanced form of molecular nanotechnology is often imagined to involve self-replicating molecular machines, and there have been some speculations suggesting something similar might be possible with analogues of molecules composed of nucleons rather than atoms. for example, the astrophysicist frank drake once speculated about the possibility of self-replicating organisms composed of such nuclear molecules living on the surface of a neutron star, a suggestion taken up in the science fiction novel "dragon's egg" by the physicist robert forward. it is thought by physicists that nuclear molecules may be possible, but they would be very short-lived, and whether they could actually be made to perform complex tasks such as self-replication, or what type of technology could be used to manipulate them, is unknown. | You use the term nanotechnology so vaguely here that this question isn't really possible to answer. I'd suggest that it is more likely that some form of printing will allow for something similar to what you are suggesting. Look into the printing of meat, for example. This is like asking whether something around the size of a meter could be used to travel in space. |
why is it possible to freeze semen and then have it function properly when thawed? | <p> semen extender is a liquid diluent which is added to semen to preserve its fertilizing ability. it acts as a buffer to protect the sperm cells from their own toxic byproducts, and it protects the sperm cells from cold shock and osmotic shock during the chilling and shipping process (the sperm is chilled to reduce metabolism and allow it to live longer). the extender allows the semen to be shipped to the female, rather than requiring the male and female to be near to each other. special freezing extender use also allows cryogenic preservation of sperm ("frozen semen"), which may be transported for use, or used on-site at a later date.
<p> semen is collected, extended, then cooled or frozen. it can be used on site or shipped to the female's location. if frozen, the small plastic tube holding the semen is referred to as a "straw". to allow the sperm to remain viable during the time before and after it is frozen, the semen is mixed with a solution containing glycerol or other cryoprotectants. an "extender" is a solution that allows the semen from a donor to impregnate more females by making insemination possible with fewer sperm. antibiotics, such as streptomycin, are sometimes added to the sperm to control some bacterial venereal diseases. before the actual insemination, estrus may be induced through the use of progestogen and another hormone (usually pmsg or prostaglandin f2α).
<p> semen is frozen using either a controlled-rate, slow-cooling method (slow programmable freezing or spf) or a newer flash-freezing process known as vitrification. vitrification gives superior post-thaw motility and cryosurvival than "slow programmable freezing".
<p> the addition of extender to semen protects the sperm against possible damage by toxic seminal plasma, as well as providing nutrients and cooling buffers if the semen is to be cooled also to protecting sperm from bacteria by adding antibiotics to it to prevent increase of bacteria. in the case of freezing extenders, one or more penetrating cryoprotectants will be added. typical cryoprotectants include glycerol, dmso and dimethylformamide. egg yolk, which has cryoprotective properties, is also a common component.
<p> lyophilization, or freeze drying, is a process that removes water from a liquid drug creating a solid powder, or cake. the lyophilized product is stable for extended periods of time and could allow storage at higher temperatures. in protein formulations, stabilizers are added to replace the water and preserve the structure of the molecule.
<p> in bioseparations, freeze-drying can be used also as a late-stage purification procedure, because it can effectively remove solvents. furthermore, it is capable of concentrating substances with low molecular weights that are too small to be removed by a filtration membrane. freeze-drying is a relatively expensive process. the equipment is about three times as expensive as the equipment used for other separation processes, and the high energy demands lead to high energy costs. furthermore, freeze-drying also has a long process time, because the addition of too much heat to the material can cause melting or structural deformations. therefore, freeze-drying is often reserved for materials that are heat-sensitive, such as proteins, enzymes, microorganisms, and blood plasma. the low operating temperature of the process leads to minimal damage of these heat-sensitive products.
<p> freeze-dried products can be rehydrated (reconstituted) much more quickly and easily because the process leaves microscopic pores. the pores are created by the ice crystals that sublimate, leaving gaps or pores in their place. this is especially important when it comes to pharmaceutical uses. freeze-drying can also be used to increase the shelf life of some pharmaceuticals for many years. | Before I offer my insight I would point out: sperm are not organisms. They are differentiated cells of an organism. Bacteria in laboratory settings are frozen at -80°C on a regular basis. I haven't been in the lab for long, but I'm yet to encounter any stored for under two years that have not grown when thawed. My understanding is that most biological cell samples (including sperm) are frozen in a glycerol stock (a low percentage usually 10-20%), which massively reduces the formation of ice crystals that damage the cell membrane. As for limitations, there are many. Only certain small multicellular organisms such as some select insects can survive freezing, as they have adapted to protect against and repair cellular damage. The temperature is also an important factor, and -80°C is the generally accepted temperature (-196°C aka liquid nitrogen is also an option). At these temperatures the molecular mobility is low enough to halt cellular function. The duration for which the biological sample is frozen is also a factor, largely due to accumulative DNA damage that prevents the cell(s) from functioning properly. Edit: Another important factor that is being highlighted in this discussion is that not all the sperm need survive. Even if 99% of the sperm died (which is a grossly exaggerated proportion) there is a chance of fertilization. Healthy sperm are more likely to achieve fertilization, and a large portion of the frozen sample will be undamaged. |
what is the coulomb force, and is it possible that it might move faster than the speed of light? | <p> an air or water mass moving with speed formula_48 subject only to the coriolis force travels in a circular trajectory called an 'inertial circle'. since the force is directed at right angles to the motion of the particle, it moves with a constant speed around a circle whose radius formula_49 is given by:
<p> equals about 166 hz. this would be easy to notice. however, the pulsar is spinning at a quarter of the speed of light at the equator, and its radius is only three times more than its schwarzschild radius. when such fast motion and such strong gravitational fields exist in a system, the simplified approach of separating gravitomagnetic and gravitoelectric forces can be applied only as a very rough approximation.
<p> variations on this basic formula describe the magnetic force on a current-carrying wire (sometimes called laplace force), the electromotive force in a wire loop moving through a magnetic field (an aspect of faraday's law of induction), and the force on a particle which might be traveling near the speed of light (relativistic form of the lorentz force).
<p> (in si units). variations on this basic formula describe the magnetic force on a current-carrying wire (sometimes called laplace force), the electromotive force in a wire loop moving through a magnetic field (an aspect of faraday's law of induction), and the force on a charged particle which might be traveling near the speed of light (relativistic form of the lorentz force).
<p> coulomb's law states that the force on a charged particle due to the field from another particle is dependent on the magnitudes of the two charges as well as the distance between them. the further away the particle is, the weaker the force on it is. positive charges exert attractive forces on negative charges (and vice versa) while positive charges exert repulsive forces on other positive charges (and similarly for the force between negative charges). the si units of force are newtons (n).
<p> a particle, carrying a charge of one coulomb, and moving perpendicularly through a magnetic field of one tesla, at a speed of one metre per second, experiences a force with magnitude one newton, according to the lorentz force law. as an si derived unit, the tesla can also be expressed as
<p> two thin, straight, stationary, parallel wires, a distance "r" apart in free space, each carrying a current "i", will exert a force on each other. ampère's force law states that the force per length "l" is given by | I'll first address the Coulomb force, then explain your article. The 'Coulomb force' is the force that is a result of Coulomb's Law, which means it's just electromagnetism. Coulomb's Law describes how two charged particles will feel a force between them. If both particles are the same charge (both positive or both negative), then the force will be repulsive; the charges will be pushed away from each other. If the charges are different (one positive and one negative), the charges will attract each other. Coulomb's Law describes mathematically how this attraction or repulsion is weaker the farther the charges are from each other. Now, about the article. What the science teacher in the article is trying to test is this: how fast does a change in the coulomb force travel? He extends this to pondering how fast changes in gravity travel as well. Here's a good way to think about the problem: Imagine if the Sun suddenly, instantly vanished, leaving no mass and no trace behind. It would take eight minutes for people on Earth to stop seeing light from it, since light takes eight minutes to reach Earth from where the Sun is (or was). So what about gravity? Would Earth stop orbiting the instant the star vanished, or would it keep orbiting an empty point in space for eight minutes? If Mr. Robnett is correct, the Earth would stop orbiting immediately. The same would be true for a particle in a magnetic field. As soon as the field producer vanishes, the particle instantly stops feeling the field. This, at least, is his idea. While I hesitate to make a right or wrong statement, given the things scientists have been wrong about in the past and the ways physics has surprised us, it's extremely unlikely Mr. Robnett is correct. If he is, it would mean there is a fundamental flaw in our current knowledge of relativity, which has been experimentally confirmed thousands of times to extreme precision. The consequences of faster-than-light forces or information would be revolutionary and would strike at the foundations of modern physics. We already have evidence to indicate that changes in gravity travel at the speed of light or slower, and we have a huge amount of mathematical and experimental evidence that prohibits the carrier of the electromagnetic force (and the 'Coulomb force'), the photon, from going any speed but the speed of light. All the forces that we know about (except gravity) are transmitted by particles that move at finite speeds, the fastest of which is the photon. While we're not sure yet how gravity is transmitted, there is no evidence to indicate it would be faster than the speed of light. |
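The context paragraphs describe Coulomb's law qualitatively; quantitatively it is F = k q1 q2 / r^2 with k ≈ 8.99e9 N·m²/C². A minimal sketch with made-up example charges and distance:

```python
K_COULOMB = 8.9875e9  # Coulomb constant, N*m^2/C^2

def coulomb_force(q1_coulombs, q2_coulombs, r_meters):
    """Electrostatic force between two point charges (Coulomb's law).
    A positive result means repulsion, a negative result means attraction."""
    return K_COULOMB * q1_coulombs * q2_coulombs / r_meters**2

# Two +1 microcoulomb charges held 1 cm apart (illustrative values): ~90 N, repulsive
print(coulomb_force(1e-6, 1e-6, 0.01))
```

Note that nothing in this static formula says how fast a change in the force propagates; that is the separate question the answer addresses, and the propagation speed is the speed of light.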
are there substances with a pH value that is outside the traditional 0-14, and how is that possible? | <p> studies have shown that phmg in solution has fungicidal as well as bactericidal activity against both gram-positive and gram-negative bacteria. the substance also has detergent, anti-corrosive, and flocculant properties and prevents biofouling. phmg-p is a white powdered solid, and as all polyguanidine salts, readily soluble in water.
<p> typically phacs are found in low concentrations, (<1 ug/l) making acute toxicity effects fairly unlikely. however, because of their continual input to the environment it is possible for chronic toxicity effects to occur. one major area of concern with several compounds being present at low levels at the same time is what happens when the compounds mix? it is possible and truly likely that these mixtures will have additive, neutralistic or synergistic effects. but again testing would be both time consuming and very expensive to test all of the combined effects.
<p> phosphine, a toxic, colourless gas, is the most stable phosphorus hydride and is the first of the homologous straight-chain polyphosphane series ph ("n" = 1–9) that become increasingly thermally unstable as "n" increases. other cyclic and condensed polyphosphane series are known, from ph to ph, amounting to 85 known phosphanes in 1997. insoluble in water but soluble in organic liquids (as well as carbon disulfide and trichloroacetic acid), phosphine is a strong reducing agent.
<p> unlike the related polymer polyhexanide (phmb), phmg has been described as a relatively new compound with properties, potency, and effects being not yet fully recognized. preliminary findings indicate that phmg and its derivatives primarily rely on damaging the cell membrane by inhibiting the activity of cellular dehydrogenases.
<p> phenampromide is in schedule i of the controlled substances act 1970 of the united states as a narcotic with acscn 9638 with a zero aggregate manufacturing quota as of 2014. the free base conversion ratio for salts includes 0.88 for the hydrochloride. it is listed under the single convention for the control of narcotic substances 1961 and is controlled in most countries in the same fashion as is morphine.
<p> there are two reasons why phenol makes such an effective purifier for nucleic acid samples. the first is that it is a non-polar compound. because nucleic acids are highly polar, they do not dissolve in the presence of phenol. the second is that phenol has a density of 1.07 g/cm, which is higher than the density of water (1.00 g/cm). thus, when phenol is added to a cell sample solution the water and phenol remain separate. two “phases” form when phenol is added to the solution and centrifuged. there is an aqueous, polar phase at the top of the solution containing nucleic acids and water, and an organic phase containing denatured proteins and other cell components at the bottom of the solution. the aqueous phase is always on top of the organic because, as mentioned above, phenol is denser than water. nucleic acids are polar, and therefore stay in the aqueous phase, whereas non-polar cellular components move into the organic phase.
<p> bullet::::- phenolic substances (such as phenol (also called "carbolic acid"), cresols such as thymol, halogenated (chlorinated, brominated) phenols, such as hexachlorophene, triclosan, trichlorophenol, tribromophenol, pentachlorophenol, salts and isomers thereof), | I copied this from Quora... It's the best answer and I couldn't say it better: pH, as the word says, is -log[H+], so it reflects the concentration of protons in a solution. When this solution is water there are 2 limits: 1. The lower limit is pH -1.74. Where does this number come from? 1 L of water contains 55.5 moles of water, so [H2O] = 55.5 M, and -log(55.5) = -log(10^1.74) = -1.74. What this means is that you can put at most 55.5 moles of H+ in one liter of water (H+ doesn't like to be alone... it likes to be with water as H3O+). 2. The upper limit is pH = 15.74. It comes from this calculation: 1.00x10^-14 / 55.5 = 10^-14 / 10^1.74 = 10^-15.74. Why? Because now you want to know the minimum [H+] you could possibly have. Just like before, at the very extreme all your water will be entirely [OH-], which also has a concentration of 55.5 M. You also have a constraint: [H+][OH-] = Kw = 1x10^-14. Now if we say that we can have at most 55.5 M of [OH-], you substitute: [H+] = 1x10^-14 / 55.5 = 10^-15.74. So the lowest and highest pH you can measure in water are -1.74 and 15.74, respectively. |
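A short sketch reproducing the two limits quoted in the answer, assuming pure water at 25 °C (Kw = 1.0e-14, [H2O] ≈ 55.5 M):

```python
import math

water_molarity = 1000 / 18.015   # ~55.5 mol of H2O per litre of water
Kw = 1.0e-14                     # ion product of water at 25 C

pH_min = -math.log10(water_molarity)        # all water protonated  -> ~ -1.74
pH_max = -math.log10(Kw / water_molarity)   # [H+] = Kw / 55.5      -> ~ 15.74

print(round(pH_min, 2), round(pH_max, 2))
```

In practice, concentrated strong acids and bases (well above 1 M) do give measured pH values below 0 or above 14, so the familiar 0-14 scale is a convenience for dilute aqueous solutions rather than a hard physical limit.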
why do wider, non-riveted tires offer more traction on cars? | <p> bullet::::4. using low rolling resistance tires (tires were often made to give a quiet, smooth ride, high grip, etc., but efficiency was a lower priority). tires cause mechanical drag, once again making the engine work harder, consuming more fuel. hybrid cars may use special tires that are more inflated than regular tires and stiffer or by choice of carcass structure and rubber compound have lower rolling resistance while retaining acceptable grip, and so improving fuel economy whatever the power source.
<p> radial tires have different characteristics of springiness from those of bias-ply tires, and a different degree of slip while steering. a benefit was that cars could now be made lighter because they would not have to make up for the deficiencies of bias-ply tires.
<p> tires are often improved on off-road vehicles in order to better traverse rough terrain. regular automotive tires don't provide enough traction to help a vehicle through sand, dirt, snow and ice, so specialized tires are normally used on off-road 4x4 vehicles. large overall wheel diameter provides a better ride comfort and road clearance. wide tires help to distribute the weight on sand, while narrower tires help get better traction in the snow or on ice. each tire type has its own tread type to provide a proper grip in certain road conditions. common off-road tire types are: sand tires, mud-terrain tire, snow tires and all-terrain tire.
<p> in some applications, there is a complicated set of trade-offs in choosing materials. for example, soft rubbers often provide better traction but also wear faster and have higher losses when flexed—thus reducing efficiency. choices in material selection may have a dramatic effect. for example: tires used for track racing cars may have a life of 200 km, while those used on heavy trucks may have a life approaching 100,000 km. the truck tires have less traction and also thicker rubber.
<p> for offroad vehicles, the emphasis is on lengthening the suspension travel and installing larger tires. larger tires—with or without larger wheels—increase ground clearance, travel over rough terrain more smoothly, provide additional cushioning, and decrease ground pressure (which is important on soft surfaces).
<p> in general, softer rubber, higher hysteresis rubber and stiffer cord configurations increase road holding and improve handling. on most types of poor surfaces, large diameter wheels perform better than lower wider wheels. the depth of tread remaining greatly affects aquaplaning (riding over deep water without reaching the road surface). increasing tire pressures reduces their slip angle, but lessening the contact area is detrimental in usual surface conditions and should be used with caution.
<p> on automobiles, camber thrust may contribute to or subtract from the total centripetal force generated by the tire, depending on the camber angle. on a well-aligned vehicle, camber thrust from the tires on each side balances out. on a surface rough enough for one front tire to momentarily lose traction, camber thrust from the other front tire can cause the vehicle to wander or feel skittish. | I believe you mean "rivulets" and not "rivetes". Rivulets being the channels in tires made to direct water from underneath the tire. A simple answer is that race tires are very specialized tires made for a limited, specific purpose: racing on DRY race tracks. Smooth racing tires don't work on rainy days. Here is a little racing rain history: Indy 500 Rain History and wikipedia: **Rainout** - Some auto racing series do not compete in rain, especially series that race on paved oval tracks. The rain severely diminishes the traction between the slick tires and the surface. Other series, especially those that race on road courses such as Formula One and public roads as in rallying, use special treaded rain tires while the surface is wet but not in excessively heavy rain, standing water, or lightning (which is an automatic cessation of racing because of pit crew, race marshals, and safety). Dirt track racing can be run in a light rain as the vehicles have treaded tires. Rallying can be held in rain or snow. |
why did apes and monkeys never gain a foothold in the americas? after all some monkeys made it to cold climates in asia, and northeast asia was for a time linked to the americas. | <p> the old world monkeys are native to africa and asia today, inhabiting numerous environments: tropical rain forests, savannas, shrublands, and mountainous terrain. they inhabited much of europe in the past; today the only survivors in europe are the barbary macaques of gibraltar.
<p> old world monkeys are native to africa and asia today, inhabiting numerous environments: tropical rain forests, savannas, shrublands, and mountainous terrain. they inhabited much of europe in the past; today the only survivors in europe are the barbary macaques of gibraltar. it is unknown whether they are native to gibraltar, or were brought by humans.
<p> during the miocene, much of asia and africa was covered by expanses of forest and damp woodland. it was here that miocene apes thrived, and dozens of genera with many species of early apes lived during this time. miocene apes originated in africa during the early miocene, but dispersed into europe and asia during the middle to late miocene, about 15 to 5 million years ago. when the world began to get cooler and drier, miocene apes died off, leaving animals behind who were more easily able to adapt to new environments.
<p> later (by 36 ma ago) primates followed, again from africa in a fashion similar to that of the rodents. primates capable of migrating had to be small. like caviomorph rodents, south american monkeys are believed to be a clade (i.e., monophyletic). however, although they would have had little effective competition, all extant new world monkeys appear to derive from a radiation that occurred long afterwards, in the early miocene about 18 ma ago. subsequent to this, monkeys apparently most closely related to titis island-hopped to cuba, hispaniola and jamaica. additionally, a find of seven 21-ma-old apparent cebid teeth in panama suggests that south american monkeys had dispersed across the seaway separating central and south america by that early date. however, all extant central american monkeys are believed to be descended from much later migrants, and there is as yet no evidence that these early central american cebids established an extensive or long-lasting population, perhaps due to a shortage of suitable rainforest habitat at the time.
<p> various fossil primates have been found in south america and adjacent regions such as panama and the caribbean. presently, 78 species of new world monkeys have been registered in south america. around the middle of the cenozoic, approximately 34 million years ago, two types of mammals appeared for the first time in south america: rodents and primates. both of these groups had already been inhabiting other continents for millions of years and they simply arrived in south america rather than originated there. analyses of evolutionary relationships have shown that their closest relatives were living in africa at the time. therefore, the most likely explanation is that they somehow crossed the atlantic ocean, which was less wide than today, landed in south america, and founded new populations of rodents and primates.
<p> the native range of these monkeys is sub-saharan africa from senegal and ethiopia south to south africa. however, in previous centuries, a number of them were taken as pets by slavers, and were transported across the atlantic ocean to the caribbean islands, along with the enslaved africans. the monkeys subsequently escaped or were released and became naturalized. the descendants of those populations are found on the west indian islands of barbados, saint kitts, nevis, anguilla, and saint martin. a colony also exists in broward county, florida.
<p> new world monkeys are all simian primates. while they are endemic to south and central america, their ancestors rafted over or traversed via land bridge from africa across the atlantic ocean when it was much narrower than at present. | As your edited version notes, monkeys made it over to the Americas but apes did not. How did monkeys get to South America from Africa? Probably they drifted on tree rafts -- trees blown into the ocean while monkeys were on it. So first, it's pretty extraordinary that anything made it across to successfully colonize at all (remember at least one male and female would have to get across at the same time, and probably quite a few more to reduce the inbreeding risk). Second, small things are much more likely than large ones to make it: a bunch of monkeys could survive on, say, a fig tree in the ocean for longer than even one orangutan. And finally (I think) the Atlantic ocean has been getting wider due to continental drift, and monkeys are evolutionarily older than apes; maybe by the time apes were around, the ocean was too wide to cross by this means for anything. |
how do you go about decontaminating a nuclear reactor? | <p> to produce weapons grade plutonium, the uranium nuclear fuel must spend no longer than several weeks in the reactor core before being removed, creating a low fuel burnup. for this to be carried out in a pressurized water reactor - the most common reactor design for electricity generation - the reactor would have to prematurely reach cold shut down after only recently being fueled, meaning that the reactor would need to cool decay heat and then have its reactor pressure vessel be depressurized, followed by a fuel rod defueling. if such an operation were to be conducted, it would be easily detectable, and require prohibitively costly reactor modifications.
<p> nuclear reaction control is provided by a single rod of boron carbide, which is a neutron absorber. the reactor is intended to be launched cold, preventing the formation of highly radioactive fission products. once the reactor reaches its destination, the neutron absorbing boron rod is removed to allow the nuclear chain reaction to start. once the reaction is initiated, decay of a series of fission products cannot be stopped completely. however, the depth of control rod insertion provides a mechanism to adjust the rate at which uranium fissions, allowing the heat output to match the load.
<p> the decommission of a nuclear reactor can only take place after the appropriate licence has been granted pursuant to the relevant legislation. as part of the licensing procedure, various documents, reports and expert opinions have to be written and delivered to the competent authority, e.g. safety report, technical documents and an environmental impact study (eis).
<p> removing the fuel from a nuclear reactor requires a specially trained team. the coolant is drained first. a reactor must be cooled down for at least three years after its final shutdown before this can be done. the hull above the reactor is then removed, followed by the top shield. the fuel elements are extracted and transported by ship and then rail to a storage facility.
<p> nuclear decommissioning is the administrative and technical process whereby a nuclear facility such as a nuclear power plant (npp), a research reactor, an isotope production plant, a particle accelerator, or uranium mine is dismantled to the point that it no longer requires measures for radiation protection.
<p> in order to start up a controllable fission reaction, the assembly must be delayed-critical. in other words, "k" must be greater than 1 (supercritical) without crossing the prompt-critical threshold. in nuclear reactors this is possible due to delayed neutrons. because it takes some time before these neutrons are emitted following a fission event, it is possible to control the nuclear reaction using control rods.
<p> some of the fission products, such as xenon-135 and samarium-149, have a high neutron absorption cross section. since a nuclear reactor depends on a balance in the neutron production and absorption rates, those fission products that remove neutrons from the reaction will tend to shut the reactor down or "poison" the reactor. nuclear fuels and reactors are designed to address this phenomenon through such features as burnable poisons and control rods. build-up of xenon-135 during shutdown or low-power operation may poison the reactor enough to impede restart or to interfere with normal control of the reaction during restart or restoration of full power, possibly causing or contributing to an accident scenario. | Radioactive contamination is like glitter: it takes forever to get rid of even a small amount, and getting rid of a large amount is even harder. Basically, just about every piece of material in the building is radioactive and needs to be sealed in water and lead shielding, then transported to a similarly shielded facility that will contain it throughout its decay, which can take millennia to fall to a safe level. It doesn't help that few people are willing to work in radiation areas, fewer are qualified, and these aren't small plants we're talking about. |
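To put a number on the answer's point that decay can take centuries to millennia: for a single isotope, the time to reach a given fraction of its initial activity follows directly from the half-life. A minimal sketch, using Cs-137 (half-life ~30.2 years) and Pu-239 (~24,100 years) as illustrative isotopes; real clearance thresholds depend on the whole isotope mix and on regulatory limits, which this ignores.

```python
import math

def years_to_fraction(half_life_years, fraction_remaining):
    """Years for the activity of one isotope to decay to the given fraction."""
    return half_life_years * math.log(fraction_remaining) / math.log(0.5)

print(round(years_to_fraction(30.2, 0.001)))    # Cs-137 down to 0.1%: ~300 years
print(round(years_to_fraction(24100, 0.001)))   # Pu-239 down to 0.1%: ~240,000 years
```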
could the universe ever come to a complete standstill? | <p> however, the universe is not destroyed for not entirely clear reasons. blair mentions an "eternal sphere backup" earlier, but there's no evidence that it was applied. the characters decide that even if they really are just programs, they have achieved "consciousness" and therefore cannot be deleted. alternatively, others suggest maria's power of alteration has something to do with it, perhaps even implying that their universe has truly become a reality unto itself and therefore not subject to deletion.
<p> an easy example would be the statement ″the sun will rise tomorrow″. although many reasons could be devised for which that statement could turn out to be false (the earth could stop turning, aliens could destroy the sun with their star-killer doomsday weapon, the universe might be a simulation and could be shut down, etc.), and in fact it is known that it will "not" be true forever (since the sun will eventually become a red giant, engulf earth, and then become a white dwarf), none of these arguments are rationally compelling for practical reasoning, making it indistinguishable in practice from a strictly true statement. the statement ″the sun will rise tomorrow″ is thus defeasible.
<p> knowing the universe will end soon, reed richards and susan storm choose jessica and natasha romanoff to copilot a ship that will contain a handpicked few to restart humanity and escape the destruction of the universe. their ship is shot down when the children of tomorrow from the ultimate universe invade, and she and the ship's passengers are killed in the ensuing explosion. this timeline and the resulting deaths were later undone.
<p> commenting on "final call", kitaro said, "i have always felt we all must respect the providence of the universe. unfortunately, through the course of time and the growth of civilization, many living creatures that we now know will become extinct. if we don't alter how we treat each other and our planet earth, many habitats and portions of this earth may become devastated and eventually disappear."
<p> hawking also discusses how the universe could have been. for example, if the universe formed and then collapsed quickly, there would not be enough time for life to form. another example would be a universe that expanded too quickly. if a universe expanded too quickly, it would become almost empty. the idea of many universes is called the "many-worlds interpretation".
<p> given the challenges confronting humans in determining how the universe may evolve over billions and trillions of our years, it is difficult to say how long this arrow may be and the its eventual end state. at this time some prominent investigators suggest that much if not most of the visible matter of the universe will collapse into black holes which can be depicted as isolated, in a static cosmology.
<p> because our universe entered the dark energy dominated era about five billion years ago, our universe is probably approaching a de sitter universe in the infinite future. if the current acceleration of our universe is due to a cosmological constant then as the universe continues to expand all of the matter and radiation will be diluted. eventually there will be almost nothing left but the vacuum energy, tiny thermal fluctuations, quantum fluctuations and our universe will have become a de sitter universe. | As far as our understanding goes, based on what information we currently have available, our universe will continuously disorder potential energy during its existence until the point of virtual heat death. At that point, it will approach absolute heat death to an infinitesimal proximity (but never actually reach it, we guess). When the Universe is in this phase of its existence, it will be "essentially" unable to do anything, but not absolutely unable to do anything. Uncertainty will still allow for interactions, but such events will become less and less probable at an exponentially increasing rate (with the geometric expansion of the Universe "thinning it out") As far as we can tell, energy can only be transformed but never destroyed. If all potential energy were removed from the Cosmos, then it would not exist. This violates our current understanding of physics. |
why do paramedics wrap newborn babies in foil? | <p> modern specialized baby swaddles are designed to make it easier to swaddle a baby than with traditional square blanket. they are typically fabric blankets in a triangle, 't' or 'y' shape, with 'wings' that fold around the baby's torso or down over the baby's shoulders and around underneath the infant. some of these products employ velcro patches or other fasteners. some parents prefer a specialized device because of the relative ease of use, and many parents prefer a large square receiving blanket or wrap because they can get a tighter and custom fit and the baby will not outgrow the blanket.
<p> wraps (sometimes called "wraparounds" or "wraparound slings") are lengths of fabric (usually between 2 metres and 6 metres, or 2.5-7 yards long, and 15-30 inches wide), which are wrapped around both the baby and the wearer and then tied. there are different carrying positions possible with a wrap, depending on the length of the fabric. a baby or toddler can be carried on the wearer's front, back or hip. with shorter wraps it is possible to do a one-shouldered carry, similar to those done with a pouch or a ring sling, although most carries involve the fabric going over both shoulders of the wearer and often around the waist to offer maximum support.
<p> after delivery, plastic wraps or warm mattresses are useful to keep the infant warm on their way to the neonatal intensive care unit (nicu). in developed countries premature infants are usually cared for in an nicu. the physicians who specialize in the care of very sick or premature babies are known as neonatologists. in the nicu, premature babies are kept under radiant warmers or in incubators (also called isolettes), which are bassinets enclosed in plastic with climate control equipment designed to keep them warm and limit their exposure to germs. modern neonatal intensive care involves sophisticated measurement of temperature, respiration, cardiac function, oxygenation, and brain activity. treatments may include fluids and nutrition through intravenous catheters, oxygen supplementation, mechanical ventilation support, and medications. in developing countries where advanced equipment and even electricity may not be available or reliable, simple measures such as "kangaroo care" (skin to skin warming), encouraging breastfeeding, and basic infection control measures can significantly reduce preterm morbidity and mortality. bili lights may also be used to treat newborn jaundice (hyperbilirubinemia).
<p> due to the risk of latex allergies among users, the original composition of elastic bandages has changed. while some bandages are still manufactured with latex, many woven and knitted elastic bandages provide adequate compression without the use of natural rubber or latex. the modern elastic bandage is constructed from cotton, polyester and latex-free elastic yarns. by varying the ratio of cotton, polyester, and the elastic yarns within a bandage, manufacturers are able to offer various grades of compression and durability in their wraps. often aluminum or stretchable clips are used to fasten the bandage in place once it has been wrapped around the injury. some elastic bandages even use velcro closures to secure and stabilize the wrap in place.
<p> there are two main types of wrap - stretchy and woven. stretchy wraps are generally made of knits such as jersey or interlock. it is easy to take babies in and out of a stretchy wrap. this can be easier for the wearer as the sling often remains tied on and the baby is lifted out and put back in as required. several factors influence stretchiness: carriers with any spandex or lycra content will tend to be very stretchy, carriers which are 100% cotton or other natural fibers will tend to have less lengthwise stretch.
<p> baby bundle is a parenting mobile app for iphone and ipad. it was designed to help new parents through pregnancy and the first two years of parenthood. developed in collaboration with medical experts, it helps track and record the child’s development and growth, offers parental advice, manages vaccinations and health check-ups, stores photos and provides baby monitoring services.
<p> when the baby is in the carrier, the baby's weight puts tension on the fabric, and the combination of fabric tension, friction of fabric surfaces against each other and the rings combine to "lock" the sling in position. this type of sling can adjust to different wearers' sizes and accommodate different wearing positions easily: the wearer supports the baby's weight with one hand and uses the other hand to pull more fabric through the rings to tighten or loosen the sling. | To keep them warm. The foil reflects the infrared radiation from the baby. |
i may be really misinformed but why the panic over helium? after we "use" it for balloons wouldn't that helium go back into the atmosphere? | <p> in 2019 a global shortage of helium sharply reduced supply for helium-filled balloons, due to the us rationing helium because of a reduction in supply by 30% stemming from a saudi-boycott of producer country qatar, hurting party stores such as party city, one of the reasons the company cited in closing 45 of its 870 stores.
<p> helium is a natural atmospheric gas, but as a land-resource, it is limited. as of 2012 the united states national helium reserve accounted for 30 percent of the world's helium, and was expected to run out of helium in 2018. some geophysicists fear the world's helium could be gone in a generation. for this reason, balloon releases are seen as a wasteful use of this limited resource.
<p> helium was initially selected for the lifting gas because it was the safest to use in airships, as it is not flammable. one proposed measure to save helium was to make double-gas cells for 14 of the 16 gas cells; an inner hydrogen cell would be protected by an outer cell filled with helium, with vertical ducting to the dorsal area of the envelope to permit separate filling and venting of the inner hydrogen cells. at the time, however, helium was also relatively rare and extremely expensive as the gas was available in industrial quantities only from distillation plants at certain oil fields in the united states. hydrogen, by comparison, could be cheaply produced by any industrialized nation and being lighter than helium also provided more lift. because of its expense and rarity, american rigid airships using helium were forced to conserve the gas at all costs and this hampered their operation.
<p> later, the united states began to use helium because it is non-flammable and has 92.7% of the buoyancy (lifting power) of hydrogen. following a series of airship disasters in the 1930s, and especially the hindenburg disaster where the airship burst into flames, hydrogen fell into disuse.
<p> all degassed helium is lost to space eventually, due to the average speed of helium exceeding the escape velocity for the earth. thus, it is assumed the helium content and ratios of earth's atmosphere have remained essentially stable.
<p> it is becoming more common for balloons to be filled with air instead of helium, as air-filled balloons will not release into the atmosphere or deplete the earthly helium supply. there are numerous party games and school-related activities that can use air-filled balloons as opposed to helium balloons. when age-appropriate, these activities often include the added fun of blowing the balloons up. in many events, the balloons will contain prizes, and party-goers can pop the balloons to retrieve the items inside.
<p> a computer-controlled mechanism inflates the balloon with helium from a cylinder during diastole, usually linked to either an electrocardiogram (ecg) or a pressure transducer at the distal tip of the catheter; some iabps, such as the datascope system 98xt, allow asynchronous counterpulsation at a set rate, though this setting is rarely used. helium is used because its low viscosity allows it to travel quickly through the long connecting tubes, and has a lower risk than air of causing an embolism should the balloon rupture. | Helium, being much lighter than air, rises to the top of the atmosphere. What's above the atmosphere? Space. Helium atoms are light enough that they eventually reach escape velocity and are lost to space, which is not true of heavier gases like argon. |
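A rough number behind the "lost to space" point in the context: helium atoms are so light that, at the top of the atmosphere, the fast tail of their thermal speed distribution reaches Earth's escape velocity (~11.2 km/s). The sketch below computes the rms thermal speed for an assumed thermosphere temperature of about 1000 K (an illustrative figure); escape happens from the high-speed tail of the distribution, not from the average atom, which is why the leak is slow but steady.

```python
import math

k_B = 1.380649e-23           # Boltzmann constant, J/K
m_He = 4.0026 * 1.6605e-27   # mass of one helium atom, kg
T = 1000.0                   # assumed thermosphere temperature, K (illustrative)

v_rms = math.sqrt(3 * k_B * T / m_He)   # ~2.5 km/s
print(round(v_rms), "m/s rms, versus ~11200 m/s escape velocity")
```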
mathematics: given a set of outcomes, and an infinite number of trials, what can we say about the certainty of at least one trial producing a given outcome? | <p> for example, in a trial the participants are aware the outcome of all the previous history of trials. they also assume that each outcome is equally probable. together this allows a single unconditional value of probability to be defined.
<p> in this example, one tries to increase the probability of a rare event occurring at least once by carrying out more trials. for example, a job seeker might argue, "if i send my résumé to enough places, the law of averages says that someone will eventually hire me." assuming a non-zero probability, it is true that conducting more trials increases the overall likelihood of the desired outcome. however, there is no particular number of trials that guarantees that outcome; rather, the probability that it will already have occurred approaches but never quite reaches 100%.
<p> some treatments of probability assume that the various outcomes of an experiment are always defined so as to be equally likely. however, there are experiments that are not easily described by a set of equally likely outcomes— for example, if one were to toss a thumb tack many times and observe whether it landed with its point upward or downward, there is no symmetry to suggest that the two outcomes should be equally likely.
<p> some treatments of probability assume that the various outcomes of an experiment are always defined so as to be equally likely. however, there are experiments that are not easily described by a sample space of equally likely outcomes—for example, if one were to toss a thumb tack many times and observe whether it landed with its point upward or downward, there is no symmetry to suggest that the two outcomes should be equally likely.
<p> consider a sequence of trials, where each trial has only two possible outcomes (designated failure and success). the probability of success is assumed to be the same for each trial. in such a sequence of trials, the geometric distribution is useful to model the number of failures before the first success. the distribution gives the probability that there are zero failures before the first success, one failure before the first success, two failures before the first success, and so on.
<p> it is possible to construct an expected value equal to the probability of an event by taking the expectation of an indicator function that is one if the event has occurred and zero otherwise. this relationship can be used to translate properties of expected values into properties of probabilities, e.g. using the law of large numbers to justify estimating probabilities by frequencies.
<p> there are two clear limitations to the classical definition. firstly, it is applicable only to situations in which there is only a 'finite' number of possible outcomes. but some important random experiments, such as tossing a coin until it rises heads, give rise to an infinite set of outcomes. and secondly, you need to determine in advance that all the possible outcomes are equally likely without relying on the notion of probability to avoid circularity—for instance, by symmetry considerations. | You could flip a fair coin an infinite number of times and always get heads. The probability of that occurring approaches zero as number of trials approaches infinity, but there is nothing saying it has to occur. On each single toss it is always a fifty fifty chance. |
why do certain sounds sound appealing and others don't? | <p> according to sevdaliza, "i think my sound would mostly be described as pure and raw. i'm not necessarily drawn to a genre, but to a process, or towards a certain mood like melancholy. the interesting thing is that the music my music gets compared to is not necessarily music i've listened to, which makes it super interesting. i was performing once, and a conservatory professor came to me after the show, saying that he could really hear that i draw inspiration from old persian singers. i asked him, "wow, that's really interesting. how do you hear that?," and he said because i use certain semitones and microtones when i sing. but i've never had a singing lesson in my life, and i've never listened to persian music in my life! it's really interesting to me that some things just come to you unconsciously like that. it's like you have this brain and it's unconsciously and consciously registering everything and in combination with your dna it just becomes... something."
<p> pinard et al also suggested that pure word deafness and general auditory agnosia represent different degrees of the same disorder. they suggested that environmental sounds are spared in the mild cases because they are easier to perceive than speech parts. they argued that environmental sounds are more distinct than speech sounds because they are more varied in their duration and loudness. they also proposed that environmental sounds are easier to perceive because they are composed of a repetitive pattern (e.g., the bark of a dog or the siren of the ambulance).
<p> while a natural environment provides more sensory input than the soundscape there are indications that the soundscape alone also affords restoration. a majority of humans indicate that they find natural sounds pleasurable.
<p> "sounds so good" is a song written and recorded by american country music artist ashton shepherd. it was released in may 2008 as the second single and title track from her debut album "sounds so good".
<p> according to allmusic's kieran mccarthy "it's next to impossible to describe their sound, because — by design — it rarely follows consistent patterns". some of their music has been described as having "a majestic ebb and flow that suggests natural wonders" or a "witchy, tribal side". either way, at any one time it may incorporate chanting and punchy drums, dancey polyrhythms atonal composition or psychedelia.
<p> it can be noted that use of language such as certain accents may result in an individual experiencing prejudice. for example, some accents hold more prestige than others depending on the cultural context. however, with so many dialects, it can be difficult to determine which is the most preferable. the best answer linguists can give, such as the authors of "do you speak american?", is that it depends on the location and the individual. research has determined however that some sounds in languages may be determined to sound less pleasant naturally. also, certain accents tend to carry more prestige in some societies over other accents. for example, in the united states speaking general american (i.e., an absence of a regional, ethnic, or working class accent) is widely preferred in many contexts such as television journalism. also, in the united kingdom, the received pronunciation is associated with being of higher class and thus more likeable. in addition to prestige, research has shown that certain accents may also be associated with less intelligence, and having poorer social skills. an example can be seen in the difference between southerners and northerners in the united states, where people from the north are typically perceived as being less likable in character, and southerners are perceived as being less intelligent.
<p> some of these sounds are very rare. for example, has only one dictionary entry word-internally (in , 'heavy') and two entries word-initially. likewise, has only two dictionary entries: ('blue; unripe') and ('crooked, curved'). | Three incomplete explanations: 1. A sound can be appealing because it is associated with a pleasant experience. If you grew up near the ocean, for example, a distant foghorn might be a pleasant sound. 2. As for the quality of sound, it might be that we are born appreciating certain sounds that are associated with well-being or survival, such as kind human voices or trickling water. Both could indicate safety and security. 3. In music, we tend to appreciate certain harmonies. These tend to feature a moderate amount of dissonance. Two tones played in unison (both the same tone) sounds nice but is uninteresting, as the frequencies (as measured in Hertz) have a simple 1:1 ratio. An octave has a 1:2 ratio and sounds a little more colorful. Other harmonies have more complex ratios like 2:3, 3:5, etc. These are more colorful still. As the ratios get more complex they start to sound sad, and then eventually hideous. The most complex ratio within an octave on a piano keyboard is 45:32, known as a tritone. It sounds very "ugly". To hear a tritone, play the notes C and F# simultaneously.
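As a rough companion to point 3 of the answer, the sketch below (my own illustration, not from the original reply) prints the just-intonation frequency ratios it mentions, written as upper note to lower note, alongside the equal-tempered tritone for comparison. The interval names are standard music-theory labels, not something stated in the thread.

```python
# Just-intonation ratios for a few intervals; the answer writes some of them
# inverted (1:2, 2:3, 3:5), which names the same intervals from the other side.
just_ratios = {
    "unison": (1, 1),
    "octave": (2, 1),
    "perfect fifth": (3, 2),
    "major sixth": (5, 3),
    "tritone (C to F#)": (45, 32),
}
for name, (upper, lower) in just_ratios.items():
    print(f"{name:>17}: {upper}:{lower} = {upper / lower:.3f}")

# In equal temperament the tritone spans 6 of the 12 semitones in an octave.
print(f"equal-tempered tritone: 2**(6/12) = {2 ** (6 / 12):.3f}")
```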
what is the purpose of the near vertical wing tips on some newer planes? | <p> an all-wing design, the centre section has a v-shaped lower profile deepening its keel and is sharply tapered both front and rear, while the outer sections are sharply swept at approximately 45° and tapered, giving the leading edge a sweep greater than 45° and the trailing edge an m-shaped outline from above. the wing tips are turned down, giving them a slight anhedral.
<p> wing tips: these are removable assemblies to allow easy replacement in the event of damage. their shape has evolved over a number of years of “in the field” testing to provide the best possible swath width without compromising aircraft performance, and maintaining small, controlled wingtip vortices.
<p> aircraft designers may increase dihedral angle to provide greater clearance between the wing tips and the runway. this is of particular concern with swept-wing aircraft, whose wingtips could hit the runway on rotation/touchdown. in military aircraft dihedral angle space may be used for mounting materiel and drop-tanks on wing hard points, especially in aircraft with low wings. the increased dihedral effect caused by this design choice may need to be compensated for, perhaps by decreasing the dihedral angle on the horizontal tail.
<p> the bottom wing was rigged with 5° dihedral while the top wing lacked any dihedral; this meant that the gap between the wings was less at the tips than at the roots; this change had been made at the suggestion of fred sigrist, the sopwith works manager, as a measure to simplify the aircraft's construction. the upper wing featured a central cutout section for the purpose of providing improved upwards visibility for the pilot.
<p> most aircraft have been designed with planar wings with simple dihedral (or anhedral). some older aircraft such as the vought f4u corsair and the beriev be-12 were designed with gull wings bent near the root. modern polyhedral wing designs generally bend upwards near the wingtips (also known as "tip dihedral"), increasing dihedral effect without increasing the angle the wings meet at the root, which may be restricted to meet other design criteria.
<p> in real-world designs, the wing root, where the wing meets the fuselage, is thicker than the wing tip. this is because the wing spar has to support the forces from the entire wing outboard, meaning there is very little force on the spar at the tip, and the lift force of the entire wing at the root. spars generally get much larger as they approach the root to account for these forces, and streamlining the wing profile around such designs generally requires the wing to be much thicker and be more heavily curved at the root than the tip.
<p> wingtips, wing-to-nacelle joints, tips and edge of stabilizers and control surfaces (excluding the horizontal stabilizer and elevator) were all smoothly rounded, blended or filleted. the overall design was exceptionally clean and fluid as the aircraft possessed very few sharp corners or edges. | They slightly increase fuel efficiency by reducing the induced drag created by wingtip vortices. The rising cost of fuel has made them more cost-effective recently, which is why we are seeing them more these days.
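One way to put a number on the fuel saving is the classic induced-drag relation for a finite wing, C_Di = C_L^2 / (pi * e * AR); wingtip devices act roughly like an increase in the span-efficiency factor e (or in effective aspect ratio). The sketch below uses purely illustrative values, not data for any particular aircraft.

```python
import math

# Induced-drag coefficient of a finite wing: C_Di = C_L**2 / (pi * e * AR).
# Winglets are often modelled as a modest increase in span efficiency e.
def induced_drag_coeff(c_l: float, aspect_ratio: float, e: float) -> float:
    return c_l ** 2 / (math.pi * e * aspect_ratio)

c_l, aspect_ratio = 0.5, 9.0     # hypothetical cruise lift coefficient and aspect ratio
for e in (0.80, 0.85):           # hypothetical span efficiency without / with winglets
    print(f"e = {e:.2f}: C_Di = {induced_drag_coeff(c_l, aspect_ratio, e):.5f}")
```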
how far is the closest potentially habitable exoplanet? | <p> at 4.2 light-years (1.3 parsecs, 40 trillion km, or 25 trillion miles) away from earth, the closest potentially habitable exoplanet is proxima centauri b, which was discovered in 2016. this means it would take more than 18,100 years to get there if a vessel could consistently travel as fast as the juno spacecraft (250,000 kilometers per hour or 150,000 miles per hour). in other words, it is currently not feasible to send humans or even probes to search for biosignatures outside of our solar system. given this fact, the only way to search for biosignatures outside of our solar system is by observing exoplanets with telescopes.
<p> to date, it is the fifth-closest known potentially habitable exoplanet to earth. the closest potentially habitable exoplanet is proxima centauri b at 4.2 light years. second is ross 128 b at 11 light years away, followed by the unconfirmed planets tau ceti e and f, just under 12 light years distant. fourthly is wolf 1061c at 13.8 light years from the sun.
<p> in 2014, gliese 832 was announced to be hosting the closest potentially habitable earth-mass-range exoplanet to the solar system. this star achieved perihelion some 52,920 years ago when it came within an estimated of the sun.
<p> a 2015 review concluded that the exoplanets kepler-62f, kepler-186f and kepler-442b were likely the best candidates for being potentially habitable. these are at a distance of 1,200, 490 and 1,120 light-years away, respectively. of these, kepler-186f is similar in size to earth with a 1.2-earth-radius measure and it is located towards the outer edge of the habitable zone around its red dwarf.
<p> a review in 2015 identified exoplanets kepler-62f, kepler-186f and kepler-442b as the best candidates for being potentially habitable. these are at a distance of 1200, 490 and 1,120 light-years away, respectively. of these, kepler-186f is in similar size to earth with its 1.2-earth-radius measure, and it is located towards the outer edge of the habitable zone around its red dwarf star.
<p> on 13 may 2016, researchers at university of california, los angeles (ucla) announced that they had found various scenarios that allow the exoplanet to be habitable. they tested several simulations based on kepler-62f having an atmosphere that ranges in thickness from the same as earth's all the way up to 12 times thicker than our planet's, various concentrations of carbon dioxide in its atmosphere, ranging from the same amount as is in the earth's atmosphere up to 2,500 times that level and several different possible configurations for its orbital path.
<p> in 2007, a new, potentially habitable exoplanet, gliese 581c, was found, orbiting gliese 581. the minimum mass estimated by its discoverers (a team led by stephane udry) is . the discoverers estimate its radius to be 1.5 times that of earth (). since then gliese 581d, which is also potentially habitable, was discovered. | Not too far away! The closest exoplanet in its star's habitable zone is actually in orbit around our closest star, Proxima Centauri. It's only about 4.2 light years away, and should be the right temperature for life like us. Unfortunately, Proxima Centauri b is probably not ACTUALLY habitable (except underground) because it likely has no atmosphere and is tidally locked, scorching one side while freezing the other. The closest exoplanet that might actually BE habitable, with an atmosphere, not *completely* tidally locked (necessarily), and possibly the right conditions for liquid water on the surface, is Luyten b, 12.2 light years away. It's the third-nearest exoplanet discovered in its star's habitable zone, and unlike the nearer two, it's one of the most earth-like exoplanets we've discovered so far.
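A back-of-the-envelope check of the travel-time figure quoted in the first context paragraph (4.2 light-years at roughly the Juno probe's 250,000 km/h), sketched here for convenience:

```python
# 4.2 light-years expressed in kilometres, then divided by a Juno-like cruise speed.
LIGHT_YEAR_KM = 9.461e12        # kilometres per light-year
distance_km = 4.2 * LIGHT_YEAR_KM
speed_kmh = 250_000.0
years = distance_km / speed_kmh / (24 * 365.25)
print(f"distance ≈ {distance_km:.2e} km, travel time ≈ {years:,.0f} years")
```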
how are supersonic aircraft able to slow the air coming into the intakes so that the shock wave doesn't damage anything internally? | <p> however, shock waves can form on some parts of an aircraft moving at less than the speed of sound. low pressure regions around an aircraft cause the flow to accelerate, and at transonic speeds this local acceleration can exceed mach 1. localized supersonic flow must return to the freestream conditions around the rest of the aircraft, and as the flow enters an adverse pressure gradient in the aft section of the wing, a discontinuity emerges in the form of a shock wave as the air is forced to rapidly slow and return to ambient pressure.
<p> as flight speed increases supersonically the shock system is initially external. for the sr-71 this was until about m1.6 to m1.8 and m2 for the xb-70. the intake is said to be unstarted. further increase in speed produces supersonic speeds inside the duct with a plane shock near the throat. the intake is said to be started. upstream or downstream disturbances, such as gusts/atmospheric temperature gradients and engine airflow changes, both intentional and unintentional(from surging), tend to cause the shock to be expelled almost instantaneously. expulsion of the shock, known as an unstart, causes all the supersonic compression to take place externally through a single plane shock. the intake has changed in a split second from its most efficient configuration with most of its supersonic compression taking place inside the duct to the least efficient as shown by the large loss in pressure recovery, from about 80% to about 20% at m3 flight speeds. there is a large drop in intake pressure and loss in thrust together with temporary loss of control of the aircraft.
<p> several smaller shock waves can and usually do form at other points on the aircraft, primarily at any convex points, or curves, the leading wing edge, and especially the inlet to engines. these secondary shockwaves are caused by the air being forced to turn around these convex points, which generates a shock wave in supersonic flow.
<p> at speeds above the critical mach number, the airflow begins to become transonic, with local airflow in some places causing small sonic shock waves to form. this soon leads to the shock stall, causing a rapid increase in drag. the wings of fast subsonic craft such as jet airliners tend to be swept in order to delay the onset of these shock waves.
<p> in supersonic flight (mach numbers greater than 1.0), wave drag is the result of shockwaves present in the fluid and attached to the body, typically oblique shockwaves formed at the leading and trailing edges of the body. in highly supersonic flows, or in bodies with turning angles sufficiently large, unattached shockwaves, or bow waves will instead form. additionally, local areas of transonic flow behind the initial shockwave may occur at lower supersonic speeds, and can lead to the development of additional, smaller shockwaves present on the surfaces of other lifting bodies, similar to those found in transonic flows. in supersonic flow regimes, wave drag is commonly separated into two components, supersonic lift-dependent wave drag and supersonic volume-dependent wave drag.
<p> on supersonic military jets, the inlets are usually much more complex and use shock waves to slow down the air, and movable internal vanes to shape and control the flow. supersonic flight speeds form shock waves in the intake system and reduce the recovered pressure at the compressor, so some supersonic intakes use devices, such as a cone or ramp, to increase pressure recovery by making more efficient use of the shock waves. the complexity of these inlets increases with an increase in top speed. planes with top speeds over mach 2 require much more elaborate inlet designs. this limits most modern combat aircraft to top speeds of mach 1.8-2.0.
<p> if the p is lowered enough, the shock wave will sit at the nozzle exit (figure 1d). due to the very long region of acceleration (the entire nozzle length) the flow speed will reach its maximum just before the shock front. however, after the shock the flow in the jet will be subsonic. | Inlet cones are one of the methods for creating a shock wave immediately upstream of the engine hardware. Behind that shock the air is subsonic, which is what the compressor and combustor are designed for. But you don't always have to have subsonic flow for successful combustion. Check out the scramjet.
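For a concrete number behind "the shock slows the air to subsonic", the textbook normal-shock relation for an ideal gas with gamma = 1.4 gives the Mach number just downstream of the shock from the upstream value. This is standard gas dynamics rather than anything stated in the reply above.

```python
import math

# Normal-shock relation: M2^2 = ((g - 1) * M1^2 + 2) / (2 * g * M1^2 - (g - 1)),
# with g the ratio of specific heats (1.4 for air). The downstream flow is
# subsonic whenever the upstream flow is supersonic.
def mach_after_normal_shock(m1: float, gamma: float = 1.4) -> float:
    return math.sqrt(((gamma - 1) * m1 ** 2 + 2) / (2 * gamma * m1 ** 2 - (gamma - 1)))

for m1 in (1.5, 2.0, 3.0):
    print(f"M1 = {m1:.1f} -> M2 = {mach_after_normal_shock(m1):.3f}")
```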
i know it's fundamentally impossible to travel faster than c per se, but are there ways around it? | <p> more generally, it is normally impossible for information or energy to travel faster than "c". one argument for this follows from the counter-intuitive implication of special relativity known as the relativity of simultaneity. if the spatial distance between two events a and b is greater than the time interval between them multiplied by "c" then there are frames of reference in which a precedes b, others in which b precedes a, and others in which they are simultaneous. as a result, if something were travelling faster than "c" relative to an inertial frame of reference, it would be travelling backwards in time relative to another frame, and causality would be violated. in such a frame of reference, an "effect" could be observed before its "cause". such a violation of causality has never been recorded, and would lead to paradoxes such as the tachyonic antitelephone.
<p> on 7 november 1946 the daily telegraph, having interviewed hartree, quoted him as saying: "the implications of the machine are so vast that we cannot conceive how they will affect our civilisation. here you have something which is making one field of human activity 1,000 times faster. in the field of transportation, the equivalent to ace would be the ability to travel from london to cambridge ... in five seconds as a regular thing. it is almost unimaginable."
<p> manned travel at a speed not close to the speed of light, would require either that we overcome our own mortality with technologies like radical life extension or traveling with a generation ship. if traveling at a speed closer to the speed of light, time dilation would allow intergalactic travel in a timespan of decades of on-ship time.
<p> physical communication is by nafal ships, nearly as fast as light. the physics is never explained: the ship vanishes from where it was and reappears somewhere else many years later. the trip takes slightly longer than it would to cross the same distance at the speed of light, but ship-time is just a few hours for those on board. it cannot apparently be used for trips within a solar system. trips can begin or end close to a planet, but if used without a "retemporalizer", there are drastic physical effects at the end of long trips, at least according to the shing, whose information may be suspect. it is also lethal if the traveler is pregnant.
<p> to make the numbers easy, the ship is assumed to attain full speed in a negligible time upon departure (even though it would actually take close to a year accelerating at 1 "g" to get up to speed). similarly, at the end of the outgoing trip, the change in direction needed to start the return trip is assumed to occur in a negligible time.
<p> astronomical distances and the impossibility of faster-than-light travel pose a challenge to most science-fiction authors. they can be dealt with in several ways: accept them as such (hibernation, slow boats, generation ships, time dilation – the crew will perceive the distance as much shorter and thus flight time will be short from their perspective), find a way to move faster than light (warp drive), "fold" space to achieve instantaneous translation (e.g. the dune universe's holtzman effect), access some sort of shortcut (wormholes), utilize a closed timelike curve (e.g. stross' "singularity sky"), or sidestep the problem in an alternate space: hyperspace, with spacecraft able to use hyperspace sometimes said to have a hyperdrive.
<p> in 1966, mathematician paul cooper theorized that the fastest, most efficient way to travel across continents would be to bore a straight hollow tube directly through the earth, connecting a set of antipodes, remove the air from the tube and fall through. the first half of the journey consists of free-fall acceleration, while the second half consists of an exactly equal deceleration. the time for such a journey works out to be 42 minutes. even if the tube does not pass through the exact center of the earth, the time for a journey powered entirely by gravity (known as a gravity train) always works out to be 42 minutes, so long as the tube remains friction-free, as while the force of gravity would be lessened, the distance traveled is reduced at an equal rate. (the same idea was proposed, without calculation by lewis carroll in 1893 in "sylvie and bruno concluded".) now we know that is not true, and it only would take about 38 minutes. | Not without "exotic matter", which probably doesn't exist...
is it possible to simulate a quantum computer within a quantum computer? | <p> quantum simulators can solve problems which are difficult to simulate on classical computers because they directly exploit quantum properties of real particles. in particular, they exploit a property of quantum mechanics called superposition, wherein a quantum particle is made to be in two distinct states at the same time, for example, aligned and anti-aligned with an external magnetic field. crucially, simulators also take advantage of a second quantum property called entanglement, allowing the behavior of even physically well separated particles to be correlated.
<p> it is speculated that in a quantum computer, such simulations would be much more efficient and exact than that done in a classical computer, because it can perform the tunneling directly, rather than needing to add it by hand. moreover, it may be able to do this without the tight error controls needed to harness the quantum entanglement used in more traditional quantum algorithms. some confirmation of this is found in exactly solvable models.
<p> quantum computers offer a search advantage over classical computers by searching many database elements at once as a result of quantum superpositions. a sufficiently advanced quantum computer would break current encryption methods by factorizing large numbers several orders of magnitude faster than any existing classical computer. any computable problem may be expressed as a general quantum search algorithm although classical computers may have an advantage over quantum search when using more efficient tailored classical algorithms. the issue with quantum computers is that a measurement must be made to determine if the problem is solved which collapses the superposition. vedral points out that unintentional interaction with the environment can be mitigated with redundancy, and this would be necessary if we were to scale up current quantum computers to achieve greater utility, i.e. to utilize 10 qubits have a 100 atom quantum system so that if one atom decoheres a consensus will still be held by the other 9 for the state of the same qubit.
<p> quantum simulators permit the study of quantum systems that are difficult to study in the laboratory and impossible to model with a supercomputer. in this instance, simulators are special purpose devices designed to provide insight about specific physics problems.
<p> quantum computers are the ultimate quantum network, combining 'quantum bits' or 'qubit' which are devices that can store and process quantum data (as opposed to binary data) with links that can transfer quantum information between qubits. in doing this, quantum computers are predicted to calculate certain algorithms significantly faster than even the largest classical computer available today.
<p> the idea that quantum computers might be more powerful than classical computers originated in richard feynman's observation that classical computers seem to require exponential time to simulate many-particle quantum systems. since then, the idea that quantum computers can simulate quantum physical processes exponentially faster than classical computers has been greatly fleshed out and elaborated. efficient (that is, polynomial-time) quantum algorithms have been developed for simulating both bosonic and fermionic systems and in particular, the simulation of chemical reactions beyond the capabilities of current classical supercomputers requires only a few hundred qubits. quantum computers can also efficiently simulate topological quantum field theories. in addition to its intrinsic interest, this result has led to efficient quantum algorithms for estimating quantum topological invariants such as jones and homfly polynomials, and the turaev-viro invariant of three-dimensional manifolds.
<p> the quantum computer may be physically implemented in arbitrary ways but the common apparatus considered to date features a mach–zehnder interferometer. the quantum computer is set in a superposition of "not running" and "running" states by means such as the quantum zeno effect. those state histories are quantum interfered. after many repetitions of very rapid projective measurements, the "not running" state evolves to a final value imprinted into the properties of the quantum computer. measuring that value allows for learning the result of some types of computations such as grover's algorithm even though the result was derived from the non-running state of the quantum computer. | Well, you can *simulate* a quantum computer in a regular computer, so you can definitely simulate a quantum computer inside another quantum computer. You don't actually get any quantum speedup if you do it this way, though. |
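To make "you can simulate a quantum computer in a regular computer" concrete, here is a minimal state-vector sketch (my own illustration, not from the thread): keep all 2^n complex amplitudes and apply gates as matrices. The exponential growth of that vector is exactly why this brute-force route gives no speedup, whatever hardware runs it.

```python
import numpy as np

n = 3
state = np.zeros(2 ** n, dtype=complex)
state[0] = 1.0                                  # start in |000>

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)    # Hadamard gate
I = np.eye(2)

# Apply H to the leftmost qubit by building the full 8x8 operator H (x) I (x) I.
op = np.kron(H, np.kron(I, I))
state = op @ state

# Measurement probabilities: |amplitude|^2 for each basis state.
for idx, amp in enumerate(state):
    p = abs(amp) ** 2
    if p > 1e-12:
        print(f"|{idx:0{n}b}> with probability {p:.2f}")
```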
why are male ligers "sterile", but not the females? | <p> males with lns do not reproduce due to the characteristics of the disease. however, if a male with a less severe phenotype reproduces, all of his daughters are carriers, and none of his sons will be affected.
<p> leydig cell hypoplasia does not occur in biological females as they do not have either leydig cells or testicles. however, the cause of the condition in males, luteinizing hormone insensitivity, does affect females, and because lh plays a role in the female reproductive system, it can result in primary amenorrhea or oligomenorrhea (absent or reduced menstruation), infertility due to anovulation, and ovarian cysts.
<p> vaginal anomalies are abnormal structures that are formed (or not formed) during the prenatal development of the female reproductive system and are rare congenital defects that result in an abnormal or absent vagina. when present, they are often found with uterine, skeletal and urinary abnormalities. this is because these structures, like the vagina, are most susceptible to disruption during crucial times of organ-genesis. many of these defects are classified under the broader term müllerian duct anomalies. müllerian duct anomalies are caused by a disturbance during the embryonic time of genitourinary development. the other isolated incidents of vaginal anomalies can occur with no apparent cause. oftentimes vaginal anomalies are part of a cluster of defects or syndromes. in addition, inheritance can play a part as can prenatal exposure to some teratogens. many vaginal anomalies are not detected at birth because the external genitalia appear to be normal. other organs of the reproductive system may not be affected by an abnormality of the vagina. the uterus, fallopian tubes and ovaries can be functional despite the presence of a defect of the vagina and external genitalia. a vaginal anomaly may not affect fertility. though it depends on the extent of the vaginal defect, it is possible for conception to occur. in instances where a functional ovary exists, ivf may be successful. functioning ovaries in a woman with a vaginal defect allows the implantation of a fertilized ovum into the uterus of an unaffected gestational carrier. a successful conception and can occur. vaginal length varies from 6.5 to 12.5 cm. since this is slightly shorter than older descriptions, it may impact the diagnosis of women with vaginal agenesis or hypoplasia who may unnecessarily be encouraged to undergo treatment to increase the size of the vagina. vaginal anomalies may cause difficulties in urination, conception, pregnancy, impair sex. psychosocial effects can also exist.
<p> in men, indirect hernias follow the same route as the descending testes, which migrate from the abdomen into the scrotum during the development of the urinary and reproductive organs. the larger size of their inguinal canal, which transmitted the testicle and accommodates the structures of the spermatic cord, might be one reason why men are 25 times more likely to have an inguinal hernia than women. although several mechanisms such as strength of the posterior wall of the inguinal canal and shutter mechanisms compensating for raised intra-abdominal pressure prevent hernia formation in normal individuals, the exact importance of each factor is still under debate. the physiological school of thought thinks that the risk of hernia is due to a physiological difference between patients who suffer hernia and those who do not, namely the presence of aponeurotic extensions from the transversus abdominis aponeurotic arch.
<p> xx males are sterile due to no sperm content and there is currently no treatment to address this infertility. genital ambiguities, while not necessary to treat for medical reasons, can be treated through the use of hormonal therapy, surgery, or both. since xx male syndrome is variable in its presentation, the specifics of treatment varies widely as well. in some cases gonadal surgery can be performed to remove partial or whole female genitalia. this may be followed by plastic and reconstructive surgery to make the individual appear more externally male. conversely, the individual may wish to become more feminine and feminizing genitoplasty can be performed to make the ambiguous genitalia appear more female. hormonal therapy may also aid in making an individual appear more male or female.
<p> males wait for a female to molt, and immediately afterwards inseminate her, breaking off their genitalia within the female, which thereby acts as a plug to prevent other males from mating with her. the now sterile male then spends the rest of his life (life span: about one year) driving away other males. nevertheless, females with several dismembered male organs within them have been found.
<p> as stated, males are known, but are very rare. one possible reason for this scarcity is the presence of a bacterium in the genus "wolbachia", which is endosymbiotic in the females' gametes. a female infected with "wolbachia" produces only diploid eggs; the bacteria in the cells of the ovaries presumably cause the fusion of the pronuclei, which leads to entirely female progeny. when the females were treated with antibiotics, they were then able to produce normal male and female eggs. | This is called "Haldane's rule": if there is sterility in a hybrid population, it is more likely that the heterogametic sex (that is, the one carrying two different sex chromosomes, such as XY) will be the sterile one. In mammals the male is XY, while in birds it's the female that is heterogametic (her chromosomes are labelled ZW rather than XY, but the principle is the same), and so hybrid male mammals are more likely to be sterile, while hybrid bird females are more likely to be. The reason for this isn't well understood, but there are some hypotheses listed in the Wikipedia article:
a squeaky door does not squeak when opened fast. is this because it squeaks in the inaudible range? | <p> also, squeaking sounds made by a house's materials that happen as a result of changes in temperature and humidity are also nowadays called "yanari." they often occur in newly-built houses where the materials have not yet been there for long, and in worse cases it can result in defective housing causing trouble for the construction company and homeowner.
<p> outgrabe: humpty says " 'outgribing' is something between bellowing and whistling, with a kind of sneeze in the middle". carroll's book appendices suggest it is the past tense of the verb to 'outgribe', connected with the old verb to 'grike' or 'shrike', which derived 'shriek' and 'creak' and hence 'squeak'.
<p> unlike most moths, which generate noise by rubbing external body parts together, all three species within the genus "acherontia" are capable of producing a "squeak" from the pharynx, a response triggered by external agitation. the moth sucks in air, causing an internal flap between the mouth and throat to vibrate at a rapid speed. the "squeak" described is produced upon the exhalation when the flap is open. each cycle of inhalation and exhalation takes approximately one fifth of a second.
<p> it is unclear exactly why the moth emits this sound. one thought is that the squeak may be used to deter potential predators. due to its unusual method of producing sound, the squeak created by "acherontia atropos" is especially startling. another hypothesis suggests that the squeak relates to the moth's honey bee hive raiding habits. the squeak produced from this moth mimics the piping noise produced from a honey bee hive's queen, a noise in which she utilizes to signal the worker bees to stop moving.
<p> bullet::::- disengaging – the motor driven mechanism (the opener portion) is not joined to the closer, but only engages the closer when it is needed to open the door. when opening the door manually, the opener portion is still, so the opening is smooth and quiet.
<p> adults can make rasping sounds with their wings and can emit high pitched squeaking sounds that are audible to humans. these sounds have been found to affect bat behavior, as the squeaks of this insect cause bats to avoid the noxious moth. bats that could associate squeaking or clicking sounds as indicative of toxic prey quickly used sound alone as a deterrent.
<p> in recent years, the engine has developed a noise, referred to as "the squeak". while the cause of this noise is not definitively known, it is presumed to come from the low pressure valve. there are no physical indications of scuffing, galling, or damage to components indicating a metal-to-metal contact. the squeak is more pronounced as the engine warms up, and goes away as the engine speed increases. the problem has so vexed the engineers that they have started a tongue-in-cheek fund, whereby visitors are required to donate $1 to the repair fund if they wish to talk to an engineer about it. (this fund may also be diverted to a beer fund, at the discretion of the engineer.) | A phenomenon called stick-slip friction is what causes doors to squeak. There is a critical speed at which stick-slip friction is most prevalent, and when you move faster than this speed the phenomenon disappears. Stick-slip friction is caused by oscillations in the frictional force, and in turn the velocity, between two objects (thus the name stick-slip). When two surfaces that are in contact go from being stationary to moving with respect to each other, the amount of friction between them changes. The friction is roughly constant while they are stationary, and drops to a smaller, roughly constant value at high relative speeds. The interesting behavior comes in between these two regimes, at slow speeds where the total friction varies strongly with velocity (see this simple graph of friction vs velocity). There is a region of speeds (the downward-sloping part of the linked graph) that is very susceptible to combining (resonating) with deformations of the objects to create vibrations. These vibrations are what cause the noise you hear. When you open the door quickly you move out of this slow "critical velocity" and into a fast region with more constant frictional force where the vibrations stop. Primary source is this thesis on stick-slip friction here
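Below is a toy numerical version of the mechanism described in the answer (my own sketch, not the thesis it cites): a block dragged by a spring whose far end moves at a slow, constant speed, with static friction larger than kinetic friction. At a low drive speed the block repeatedly sticks, lets the spring load up, and then slips, which is the oscillation that makes a hinge squeak; at a fast drive speed the cycling disappears. All parameter values are arbitrary.

```python
m, k, g = 1.0, 50.0, 9.81        # mass (kg), spring stiffness (N/m), gravity (m/s^2)
mu_s, mu_k = 0.6, 0.4            # static > kinetic friction coefficient (arbitrary)
v_drive, dt = 0.05, 1e-3         # slow drive speed (m/s) and time step (s)

x, v, x_drive = 0.0, 0.0, 0.0
stuck, slips = True, 0
for _ in range(200_000):                           # 200 s of simulated dragging
    x_drive += v_drive * dt
    f_spring = k * (x_drive - x)
    if stuck:
        if abs(f_spring) <= mu_s * m * g:          # static friction still holds
            continue
        stuck, slips = False, slips + 1            # breakaway: a slip event starts
    ref = v if v != 0 else f_spring
    direction = 1.0 if ref > 0 else -1.0
    a = (f_spring - direction * mu_k * m * g) / m  # kinetic friction opposes motion
    v_new = v + a * dt
    if v != 0 and v * v_new < 0:                   # velocity crossed zero: try to re-stick
        v = 0.0
        stuck = abs(f_spring) <= mu_s * m * g
    else:
        v = v_new
        x += v * dt

print(f"slip events during 200 s of slow dragging: {slips}")
```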
why does the hydrophobic effect increase in strength with temperature? | <p> the hydrophobic effect can be quantified by measuring the partition coefficients of non-polar molecules between water and non-polar solvents. the partition coefficients can be transformed to free energy of transfer which includes enthalpic and entropic components, "δg = δh - tδs". these components are experimentally determined by calorimetry. the hydrophobic effect was found to be entropy-driven at room temperature because of the reduced mobility of water molecules in the solvation shell of the non-polar solute; however, the enthalpic component of transfer energy was found to be favorable, meaning it strengthened water-water hydrogen bonds in the solvation shell due to the reduced mobility of water molecules. at the higher temperature, when water molecules become more mobile, this energy gain decreases along with the entropic component. the hydrophobic effect depends on the temperature, which leads to "cold denaturation" of proteins.
<p> the hydrophobic effect is the desire for non-polar molecules to aggregate in aqueous solutions in order to separate from water. this phenomenon leads to minimum exposed surface area of non-polar molecules to the polar water molecules (typically spherical droplets), and is commonly used in biochemistry to study protein folding and other various biological phenomenon. the effect is also commonly seen when mixing various oils (including cooking oil) and water. over time, oil sitting on top of water will begin to aggregate into large flattened spheres from smaller droplets, eventually leading to a film of all oil sitting atop a pool of water. however the hydrophobic effect is not considered a non-covalent interaction as it is a function of entropy and not a specific interaction between two molecules, usually characterized by entropy.enthalpy compensation. an essentially enthalpic hydrophobic effect materializes if a limited number of water molecules are restricted within a cavity; displacement of such water molecules by a ligand frees the water molecules which then in the bulk water enjoy a maximum of hydrogen bonds close to four.
<p> the hydrophobic interaction is mostly an entropic effect originating from the disruption of the highly dynamic hydrogen bonds between molecules of liquid water by the nonpolar solute forming a clathrate-like structure around the non-polar molecules. this structure formed is more highly ordered than free water molecules due to the water molecules arranging themselves to interact as much as possible with themselves, and thus results in a higher entropic state which causes non-polar molecules to clump together to reduce the surface area exposed to water and decrease the entropy of the system. thus, the 2 immiscible phases (hydrophilic vs. hydrophobic) will change so that their corresponding interfacial area will be minimal. this effect can be visualized in the phenomenon called phase separation.
<p> hydrophobic interactions are essentially entropic interactions basically due to order/disorder phenomena in an aqueous medium. the free energy associated with minimizing interfacial areas is responsible for minimizing the surface area of water droplets and air bubbles in water. this same principle is the reason that hydrophobic amino acid side chains are oriented away from water, minimizing their interaction with water. the hydrophilic groups on the outside of the molecule result in protein water solubility. characterizing this phenomenon can be done by treating these hydrophobic relationships with interfacial free energy concepts. accordingly, one can think of the driving force of these interactions as the minimization of total interfacial free energy, i.e. minimization of surface area.
<p> the hydrophobic effect is the observed tendency of nonpolar substances to aggregate in an aqueous solution and exclude water molecules. the word hydrophobic literally means "water-fearing", and it describes the segregation of water and nonpolar substances, which maximizes hydrogen bonding between molecules of water and minimizes the area of contact between water and nonpolar molecules.
<p> in chemistry, hydrophobicity is the physical property of a molecule (known as a hydrophobe) that is seemingly repelled from a mass of water. (strictly speaking, there is no repulsive force involved; it is an absence of attraction.) in contrast, hydrophiles are attracted to water.
<p> hydrophobic forces are the attractive entropic forces between any two hydrophobic groups in aqueous media, e.g. the forces between two long hydrocarbon chains in aqueous solutions. the magnitude of these forces depends on the hydrophobicity of the interacting groups as well as the distance separating them (they are found to decrease roughly exponentially with the distance). the physical origin of these forces is a debated issue but they have been found to be long-ranged and are the strongest among all the physical interaction forces operating between biological surfaces and molecules. due to their long range nature, they are responsible for rapid coagulation of hydrophobic particles in water and play important roles in various biological phenomena including folding and stabilization of macromolecules such as proteins and fusion of cell membranes. | So the "strength" of the interaction that you're referring to can be related to the Gibbs free energy, where a larger negative ΔG means a stronger effect. The equation for ΔG is ΔG = ΔH - TΔS. The enthalpy (ΔH) stays roughly constant with respect to temperature, but at higher temperature the contribution of the entropy (ΔS) to the Gibbs energy is larger. If the net reaction increases entropy, then a higher temperature will lead to a "stronger" interaction. If the reaction products were lower in entropy, the reverse would be true: higher temperature would decrease the strength of the effect. A common misconception with the hydrophobic effect is that the reaction describes only the interaction of the two nonpolar molecules (which has lower entropy). This is part of the hydrophobic effect, but it neglects the water. Without going into the details, water has more structure and thus less entropy when solvating a nonpolar molecule than in bulk solvent. So the actual reaction that the hydrophobic effect relates to is [nonpolar molecule + water (structured)] + [second nonpolar molecule + water (structured)] => [nonpolar-nonpolar] + 2 water (free). Freeing this water from its nonpolar-solvating state drives a net increase in entropy, and so a net decrease in the Gibbs energy. Increasing the temperature makes it even more favourable to release this water.
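A tiny worked example of the ΔG = ΔH - TΔS argument in the answer, using made-up but physically sensible numbers (a small unfavourable enthalpy and a favourable entropy gain from released water): ΔG becomes more negative as T rises, i.e. the association gets stronger.

```python
# Illustrative values only, not measured thermodynamic data.
dH = 2.0        # kJ/mol, slightly unfavourable enthalpy (hypothetical)
dS = 0.030      # kJ/(mol*K), favourable net entropy change (hypothetical)

for T in (280.0, 300.0, 330.0):     # temperatures in kelvin
    dG = dH - T * dS
    print(f"T = {T:.0f} K: dG = {dG:+.1f} kJ/mol")
```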
why is there still sunlight after the sun set? | <p> the sun can sometimes appear as a green spot for a second or two as it is rising or setting: this is known as green flash. roughly speaking, the red light from the sun is blocked by earth, the blue light is scattered by the atmosphere, and the green light is refracted by the atmosphere to the observer. a similar effect can occasionally be seen with other astronomical objects such as the moon and bright planets.
<p> the luminosity of the sun increases as it progresses through its life cycle and are visible over the course of millions of years. sunspots can form on the sun's surface, which can cause greater variability in the emissions that earth receives.
<p> indirectly scattered sunlight comes from two directions. from the atmosphere itself, and from outer space. in the first case, the sun has just set but still illuminates the upper atmosphere directly. because the amount of scattered sunlight is proportional to the number of scatterers (i.e. air molecules) in the line of sight, the intensity of this light decreases rapidly as the sun drops further below the horizon and illuminates less of the atmosphere.
<p> the visible surface of the sun, the photosphere, is the layer below which the sun becomes opaque to visible light. photons produced in this layer escape the sun through the transparent solar atmosphere above it and become solar radiation, sunlight. the change in opacity is due to the decreasing amount of h ions, which absorb visible light easily. conversely, the visible light we see is produced as electrons react with hydrogen atoms to produce h ions.
<p> significantly, the smm's acrim instrument package showed that contrary to expectations, the sun is actually brighter during the sunspot cycle maximum (when the greatest number of dark 'sunspots' appear). this is because sunspots are surrounded by bright features called faculae, which more than cancel the darkening effect of the sunspot.
<p> however, it soon was recognized by sir arthur eddington and others that the total amount of energy available through this mechanism only allowed the sun to shine for millions of years rather than the billions of years that the geological and biological evidence suggested for the age of the earth. (kelvin himself had argued that the earth was millions, not billions, of years old.) the true source of the sun's energy remained uncertain until the 1930s, when it was shown by hans bethe to be nuclear fusion.
<p> why does the sun shine? (the sun is a mass of incandescent gas) is an ep by alternative rock band they might be giants, released in 1993. the ep is notable for being their first release with a full-band lineup, rather than only the two original members (john flansburgh and john linnell) performing. it was also released as a single on 7" vinyl. | It is because of scattering. A lot of the light from the sun does not follow a straight path through the atmosphere. Instead it is being scattered in all directions. There are two processes for that: Rayleigh scattering and some Mie scattering. Rayleigh scattering of course also causes the blue colour of the sky. Another effect that plays a role is refraction in the atmosphere. The air has a different refractive index than the vacuum surrounding Earth. This leads to refraction according to Snell's law. It bends the light, and you actually see the sun longer than would be possible by direct line of sight. It also means that the midnight sun in the summer lasts longer than the complete darkness in the winter.
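A small sketch of the Snell's-law bending mentioned in the answer, using the standard sea-level refractive index of air (about 1.0003). A single flat interface is a big simplification; the real atmosphere bends the ray gradually, and near the horizon the accumulated refraction adds up to roughly half a degree, which is why the sun remains visible for a short while after it has geometrically set.

```python
import math

# Snell's law at one vacuum-to-air boundary: n1 * sin(t1) = n2 * sin(t2).
n_vacuum, n_air = 1.0, 1.0003
theta_in = math.radians(85.0)       # incidence angle measured from the normal
theta_out = math.asin(n_vacuum * math.sin(theta_in) / n_air)
print(f"incident {math.degrees(theta_in):.2f} deg -> refracted {math.degrees(theta_out):.2f} deg")
```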
can biologists agree on a singular definition of "life"? | <p> there is currently no consensus regarding the definition of life. one popular definition is that organisms are open systems that maintain homeostasis, are composed of cells, have a life cycle, undergo metabolism, can grow, adapt to their environment, respond to stimuli, reproduce and evolve. however, several other definitions have been proposed, and there are some borderline cases of life, such as viruses or viroids.
<p> although there is no universal agreement on the definition of life, scientists generally accept that the biological manifestation of life is characterized by organization, metabolism, growth, adaptation, response to stimuli, and reproduction. life may also be said to be simply the characteristic state of organisms.
<p> although there is no universal agreement on the definition of life, scientists generally accept that the biological manifestation of life is characterized by organization, metabolism, growth, adaptation, response to stimuli and reproduction. life may also be said to be simply the characteristic state of organisms. in biology, the science of living organisms, "life" is the condition which distinguishes active organisms from inorganic matter, including the capacity for growth, functional activity and the continual change preceding death.
<p> in the past, there have been many attempts to define what is meant by "life" through obsolete concepts such as odic force, hylomorphism, spontaneous generation and vitalism, that have now been disproved by biological discoveries. aristotle was the first person to classify organisms. later, carl linnaeus introduced his system of binomial nomenclature for the classification of species. eventually new groups and categories of life were discovered, such as cells and microorganisms, forcing dramatic revisions of the structure of relationships between living organisms. though currently only known on earth, life need not be restricted to it, and many scientists speculate in the existence of extraterrestrial life. artificial life is a computer simulation or man-made reconstruction of any aspect of life, which is often used to examine systems related to natural life.
<p> since there is no unequivocal definition of life, most current definitions in biology are descriptive. life is considered a characteristic of something that preserves, furthers or reinforces its existence in the given environment. this characteristic exhibits all or most of the following traits:
<p> life is a characteristic that distinguishes physical entities that have biological processes, such as signaling and self-sustaining processes, from those that do not, either because such functions have ceased (they have died), or because they never had such functions and are classified as inanimate. various forms of life exist, such as plants, animals, fungi, protists, archaea, and bacteria. the criteria can at times be ambiguous and may or may not define viruses, viroids, or potential synthetic life as "living". biology is the science concerned with the study of life.
<p> biology depends on the precise coincidence of the laws of nuclear physics, gravity, electro-magnetic forces and thermodynamics that allow stars, habitable planets, chemistry and biology to exist. further, life has an effective purpose, self-propagation. therefore a universe that contains life, contains purpose. in these respects the universe came to a unique point in life. | No, biologists don't agree on a singular definition. There are multiple criteria for life, and different biologists think that certain ones are vital while others aren't. There are also competing theories for a single, simple definition. Because of this, they don't all even agree entirely on *what* is alive. Although there is broad consensus that viruses are sort of in a gray area between what is and isn't alive, some biologists subscribe to the belief that they are definitely alive, and others believe that they are definitely not.
on a big bang in an infinite universe | <p> the designer universe theory of john gribbin suggests that the universe could have been made deliberately by an advanced civilization in another part of the multiverse, and that this civilization may have been responsible for causing the big bang.
<p> "the big bang was "small"": it is misleading to visualize the big bang by comparing its size to everyday objects. when the size of the universe at big bang is described, it refers to the size of the observable universe, and not the entire universe.
<p> the big bang itself had been proposed in 1931, long before this period, by georges lemaître, a belgian physicist, who suggested that the evident expansion of the universe in time required that the universe, if contracted backwards in time, would continue to do so until it could contract no further. this would bring all the mass of the universe to a single point, a "primeval atom", to a state before which time and space did not exist. hoyle is credited with coining the term "big bang" during a 1949 bbc radio broadcast, saying that lemaître's theory was "based on the hypothesis that all the matter in the universe was created in one big bang at a particular time in the remote past." it is popularly reported that hoyle intended this to be pejorative, but hoyle explicitly denied this and said it was just a striking image meant to highlight the difference between the two models. lemaître's model was needed to explain the existence of deuterium and nuclides between helium and carbon, as well as the fundamentally high amount of helium present, not only in stars but also in interstellar space. as it happened, both lemaître and hoyle's models of nucleosynthesis would be needed to explain the elemental abundances in the universe.
<p> english astronomer fred hoyle is credited with coining the term "big bang" during a 1949 bbc radio broadcast, saying: "these theories were based on the hypothesis that all the matter in the universe was created in one big bang at a particular time in the remote past."
<p> the big bang explains the evolution of the universe from a density and temperature that is well-beyond humanity's capability to replicate, so extrapolations to most extreme conditions and earliest times are necessarily more speculative. georges lemaître called this initial state the ""primeval atom"" while george gamow called the material ""ylem"". how the initial state of the universe originated is still an open question, but the big bang model does constrain some of its characteristics. for example, observations indicate the universe is consistent with being flat which implies a balance between gravitational potential energy and other forms requiring no additional energy to be created, while quantum fluctuations in the early universe can provide the circumstances for dense regions of matter (such as superclusters) to form. ultimately, the big bang theory, built upon the equations of classical general relativity, indicates a singularity at the origin of cosmic time, and such an infinite energy density may be a physical impossibility. in any case, the physical theories of general relativity and quantum mechanics as currently realized are not applicable before the planck epoch, and correcting this will require the development of a correct treatment of quantum gravity certain quantum gravity treatments, such as the wheeler–dewitt equation, imply that time itself could be an emergent property. as such, physics may conclude that time did not exist before the big bang so there might be no "beginning" or "before".
<p> carroll has also worked on the arrow of time problem. he and jennifer chen posit that the big bang is not a unique occurrence as a result of all of the matter and energy in the universe originating in a singularity at the beginning of time, but rather one of many cosmic inflation events resulting from quantum fluctuations of vacuum energy in a cold de sitter space. they claim that the universe is infinitely old but never reaches thermodynamic equilibrium as entropy increases continuously without limit due to the decreasing matter and energy density attributable to recurrent cosmic inflation. they assert that the universe is "statistically time-symmetric," insofar as it contains equal progressions of time "both forward and backward". some of his work has been on violations of fundamental symmetries, the physics of dark energy, modifications of general relativity, and the arrow of time. recently he started focusing on issues at the foundations of cosmology, statistical mechanics, quantum mechanics, and complexity.
<p> "big bang" chronicles the history and development of the big bang model of the universe, from the ancient greek scientists who first measured the distance to the sun to the 20th century detection of the cosmic radiation still echoing the dawn of time. | That's the thing about infinity; there's always room for more. |
a hot day feels hotter to humans when humidity is high, as their sweat cannot evaporate as well. are animals who do not sweat at higher risk of overheating during humid days? | <p> in equatorial climates and during temperate summers, overheating (hyperthermia) is as great a threat as cold. in hot conditions, many warm-blooded animals increase heat loss by panting, which cools the animal by increasing water evaporation in the breath, and/or flushing, increasing the blood flow to the skin so the heat will radiate into the environment. hairless and short-haired mammals, including humans, also sweat, since the evaporation of the water in sweat removes heat. elephants keep cool by using their huge ears like radiators in automobiles. their ears are thin and the blood vessels are close to the skin, and flapping their ears to increase the airflow over them causes the blood to cool, which reduces their core body temperature when the blood moves through the rest of the circulatory system.
<p> humidity plays an important role for surface life. for animal life dependent on perspiration (sweating) to regulate internal body temperature, high humidity impairs heat exchange efficiency by reducing the rate of moisture evaporation from skin surfaces. this effect can be calculated using a heat index table, also known as a humidex.
<p> when ambient temperature is excessive, humans and many animals cool themselves below ambient by evaporative cooling of sweat (or other aqueous liquid; saliva in dogs, for example); this helps prevent potentially fatal hyperthermia. the effectiveness of evaporative cooling depends upon humidity. wet-bulb temperature, which takes humidity into account, or more complex calculated quantities such as wet-bulb globe temperature (wbgt), which also takes solar radiation into account, give useful indications of the degree of heat stress and are used by several agencies as the basis for heat-stress prevention guidelines. (wet-bulb temperature is essentially the lowest skin temperature attainable by evaporative cooling at a given ambient temperature and humidity.)
<p> the majority of mammals, including humans, rely on evaporative cooling to maintain body temperature. most medium-to-large mammals rely on panting, while humans rely on sweating, to dissipate heat. advantages to panting include cooler skin surface, little salt loss, and heat loss by forced convection instead of reliance on wind or other means of convection. on the other hand, sweating is advantageous in that evaporation occurs over a much larger surface area (the skin), and it is independent of respiration, thus is a much more flexible mode of cooling during intense activity such as running. because human sweat glands are under a higher level of neuronal control than those of other species, they allow for the excretion of more sweat per unit surface area than any other species. heat dissipation of later hominins was also enhanced by the reduction in body hair. by ridding themselves of an insulating fur coat, running humans are better able to dissipate the heat generated by exercise.
<p> living organisms can survive only within a certain temperature range. when the ambient temperature is excessive, humans and many animals cool themselves below ambient by evaporative cooling (sweat in humans and horses, saliva and water in dogs and other mammals); this helps to prevent potentially fatal hyperthermia due to heat stress. the effectiveness of evaporative cooling depends upon humidity; wet-bulb temperature, or more complex calculated quantities such as wet bulb globe temperature (wbgt) which also takes account of solar radiation, give a useful indication of the degree of heat stress, and are used by several agencies as the basis for heat stress prevention guidelines.
<p> xerocoles have developed a variety of mechanisms to reduce water loss via evaporation. mammalian xerocoles sweat much less than their non-desert counterparts. for example, the camel can survive ambient temperatures as high as without sweating, and the kangaroo rat lacks sweat glands entirely. both birds and mammals in the desert have oils on the surface of their skin to "waterproof" it and inhibit evaporation.
<p> humans are sensitive to humid air because the human body uses evaporative cooling as the primary mechanism to regulate temperature. under humid conditions, the "rate" at which perspiration evaporates on the skin is lower than it would be under arid conditions. because humans perceive the rate of heat transfer from the body rather than temperature itself, we feel warmer when the relative humidity is high than when it is low. | I can think of one example in which the answer is yes. Kangaroos do not sweat the way humans do, and when temperatures are high they have to lick their forearms and dig themselves into the dirt to keep their body temperature down. This was shown on the BBC's The Life of Mammals with David Attenborough, but I haven't been able to find a good source from a quick Google search.
was this a real scientific experiment, or just a falsely construed 'story'? (monkeys, ladders, and bananas) | <p> experiment perilous is a 1944 melodrama set at the turn of the 20th century. the film is based on a 1943 novel by margaret carpenter and directed by jacques tourneur. albert s. d'agostino, jack okey, darrell silvera, and claude e. carpenter were nominated for an academy award for best art direction-interior decoration, black-and-white. hedy lamarr's singing voice was dubbed by paula raymond.
<p> peirce's experiment inspired other researchers in psychology and education, which developed a research tradition of randomized experiments in laboratories and specialized textbooks in the eighteen-hundreds.
<p> english accounts of a similar practice began to circulate in victorian times. it involved placing some food in a coconut or other container which would then trap the animal, since it would not unclench its fist. from this tradition originates the modern idiom of 'a monkey trap', used of a clever trap of any sort that owes its success to the ineptitude or gullibility of the victim. it also underlies the brazilian proverb "macaco velho não mete a mão em cumbuca" (an old monkey will not stick his hand into a jar), with the meaning that an experienced hand cannot be bamboozled.
<p> in 1913, wolfgang köhler started writing a book on problem solving titled "the mentality of apes" (1917). in this research, köhler observed the manner in which chimpanzees solve problems, such as that of retrieving bananas when positioned out of reach. he found that they stacked wooden crates to use as makeshift ladders in order to retrieve the food. if the bananas were placed on the ground outside of the cage, they used sticks to lengthen the reach of their arms.
<p> the experiment is a 2010 american drama thriller film directed by paul t. scheuring and starring adrien brody, forest whitaker, cam gigandet, clifton collins, jr., and maggie grace, about an experiment which resembles philip zimbardo's stanford prison experiment in 1971.
<p> most of the monkeys placed inside it were at least three months old and had already bonded with others. the point of the experiment was to break those bonds in order to create the symptoms of depression. the chamber was a small, metal, inverted pyramid, with slippery sides, slanting down to a point. the monkey was placed in the point. the opening was covered with mesh. the monkeys would spend the first day or two trying to climb up the slippery sides. after a few days, they gave up. harlow wrote, "most subjects typically assume a hunched position in a corner of the bottom of the apparatus. one might presume at this point that they find their situation to be hopeless."blum 1994, p. 218. stephen j. suomi, another of harlow's doctoral students, placed some monkeys in the chamber in 1970 for his phd. he wrote that he could find no monkey who had any defense against it. even the happiest monkeys came out damaged. he concluded that even a happy, normal childhood was no defense against depression.
<p> the utopia experiment was an experiment by dylan evans, set up in the scottish highlands. it involved the establishing and running of a microcommunity of catastrophists. it was time-limited to 18 months, and served as both a learning community (where everyone had a skill or knowledge they could teach the others) and a working community (where everyone would contribute by working). | Apparently it is a real study conducted by Stephenson et al., entitled *Cultural acquisition of a specific learned response among rhesus monkeys*... but I can't find a PDF of the article. It makes sense that social animals would be able to learn this way... it allows us to learn that venomous snakes are dangerous without having to be bitten by one first.
what are the consequences of "losing brain cells"? | <p> there is speculation of several mechanisms by which the brain cells could be lost. one mechanism consists of an abnormal accumulation of the protein alpha-synuclein bound to ubiquitin in the damaged cells. this insoluble protein accumulates inside neurones forming inclusions called lewy bodies. according to the braak staging, a classification of the disease based on pathological findings proposed by heiko braak, lewy bodies first appear in the olfactory bulb, medulla oblongata and pontine tegmentum; individuals at this stage may be asymptomatic or may have early non-motor symptoms (such as loss of sense of smell, or some sleep or automatic dysfunction). as the disease progresses, lewy bodies develop in the substantia nigra, areas of the midbrain and basal forebrain and, finally, the neocortex. these brain sites are the main places of neuronal degeneration in pd; however, lewy bodies may not cause cell death and they may be protective (with the abnormal protein sequestered or walled off). other forms of alpha-synuclein (e.g., oligomers) that are not aggregated in lewy bodies and lewy neurites may actually be the toxic forms of the protein. in people with dementia, a generalized presence of lewy bodies is common in cortical areas. neurofibrillary tangles and senile plaques, characteristic of alzheimer's disease, are not common unless the person is demented.
<p> alzheimer's disease damages and kills brain cells. compared to a healthy brain, the brain of someone with alzheimer’s has fewer cells and there are fewer connections among surviving cells. this inevitably leads to brain shrinkage. this disease characterises two types of abnormalities: plaques and tangles. plaques are clumps of a protein called beta-amyloid. they may damage and destroy brain cells by interfering with cell-to-cell communication, among others. the collection of beta-amyloid on the outside of brain cells is thought to be implicated in the cause of this disease. tangles are threads of another protein, tau. tau twist into abnormal tangles inside brain cells, resulting in failure of the transport system, which is also implicated in the death of brain cells. the brain relies on this internal support and transport system in order to carry nutrients and essential materials, requiring the normal structure and functioning of tau.
<p> diseases that cause neurodegeneration, such as alzheimer's disease, can also be a factor in a person's short-term and eventually long-term memory. damage to certain sections of the brain due to this disease causes a shrinkage in the cerebral cortex which disables the ability to think and recall memories.
<p> in addition to damaging effects on brain cells, ischemia and infarction can result in loss of structural integrity of brain tissue and blood vessels, partly through the release of matrix metalloproteases, which are zinc- and calcium-dependent enzymes that break down collagen, hyaluronic acid, and other elements of connective tissue. other proteases also contribute to this process. the loss of vascular structural integrity results in a breakdown of the protective blood brain barrier that contributes to cerebral edema, which can cause secondary progression of the brain injury.
<p> the mechanism by which the brain cells in parkinson's are lost "may" consist of an abnormal accumulation of the protein alpha-synuclein bound to ubiquitin in the damaged cells. the alpha-synuclein-ubiquitin complex cannot be directed to the proteasome. this protein accumulation forms proteinaceous cytoplasmic inclusions called lewy bodies. the latest research on pathogenesis of disease has shown that the death of dopaminergic neurons by alpha-synuclein is due to a defect in the machinery that transports proteins between two major cellular organelles – the endoplasmic reticulum (er) and the golgi apparatus. certain proteins like rab1 may reverse this defect caused by alpha-synuclein in animal models.
<p> global brain ischemia occurs when blood flow to the brain is halted or drastically reduced. this is commonly caused by cardiac arrest. if sufficient circulation is restored within a short period of time, symptoms may be transient. however, if a significant amount of time passes before restoration, brain damage may be permanent. while reperfusion may be essential to protecting as much brain tissue as possible, it may also lead to reperfusion injury. reperfusion injury is classified as the damage that ensues after restoration of blood supply to ischemic tissue.
<p> brain damage can occur both during and after oxygen deprivation. during oxygen deprivation, cells die due to an increasing acidity in the brain tissue (acidosis). additionally, during the period of oxygen deprivation, materials that can easily create free radicals build up. when oxygen enters the tissue these materials interact with oxygen to create high levels of oxidants. oxidants interfere with the normal brain chemistry and cause further damage (this is known as "reperfusion injury"). | "Brain Cells" generally refers to our neurons, though it could also refer to our glial cells or other cells. Essentially, they all play parts in our cognition/thought. Their connectivity and interaction (upon billions of them) produces nearly all of what we think and essentially define our behavior. Individual neurons play little significant role in brain activity, and the brain can regenerate from small amounts of damage fairly easily. Any kind of damage (even jostling) could potentially kill or damage these cells. However, the more severe the damage, the greater the possible damage. Damage to the brain can result in obvious cognitive deteriorating or just smaller things like increased memory recall time, or slower ability to perceive patterns, etc. |
many antipsychotics have blurred vision as a side effect. why/how does this happen? | <p> bullet::::- medication: if the separation alone is not working, antipsychotics are often prescribed for a short time to prevent the delusions. antipsychotics are medications that reduce or relieve symptoms of psychosis such as delusions or hallucinations (seeing or hearing something that is not there). other uses of antipsychotics include stabilizing moods for people with mood swings and mood disorders ( i.e. in bipolar patients), reducing anxiety in anxiety disorders and lessening tics in people with tourettes. antipsychotics do not cure psychosis but they do help reduce the symptoms and when paired with therapy, the afflicted person has the best chance of recovering. while antipsychotics are powerful, and often effective, they do have side effects such as inducing involuntary movements and should only be taken if absolutely required and under the supervision of a psychiatrist.
<p> there is tentative evidence that discontinuation of antipsychotics can result in psychosis. it may also result in reoccurrence of the condition that is being treated. rarely tardive dyskinesia can occur when the medication is stopped. | Hey, so antipsychotics work by blocking D2 dopamine receptors in the brain. These receptors are thought to signal when something significant happens, and they are thought to be overactive in diseases that can cause psychosis, leading to insignificant things gaining importance in the sufferer's mind and hence to delusions and hallucinations. The problem is the drugs aren't perfect and they block receptors for other neurotransmitters as well. They can block (muscarinic) acetylcholine receptors, which serve several functions including secreting saliva (this is why you can get a dry mouth) and controlling the pupil and the eye's focusing muscle. The blurred vision, I'm pretty sure, is caused by impairing that focusing muscle, which stops the eye from changing its point of focus over distance. Newer antipsychotics (atypicals) were thought to have fewer side effects, but I think that's largely been shown to be false. Many antipsychotic side effects are caused either by blocking dopamine receptors too much (leading to Parkinsonism or dystonias) or by blocking other neurotransmitter sites. Edit: this change in the depth of focus is called accommodation. Other symptoms caused by anticholinergic effects include dry mouth, urinary retention and constipation.
are copenhagen interpretation and multiple worlds interpretation equivalent? can one be right and other be wrong? | <p> the copenhagen interpretation intends to indicate the proper ways of thinking and speaking about the physical meaning of the mathematical formulations of quantum mechanics and the corresponding experimental results. it offers due respect to discontinuity, probability, and a conception of wave–particle dualism. in some respects, it denies standing to causality.
<p> the term 'copenhagen interpretation' suggests some definite set of rules for interpreting the mathematical formalism of quantum mechanics. however, no such text exists, apart from some informal popular lectures by bohr and heisenberg, which contradict each other on several important issues.
<p> there have been many objections to the copenhagen interpretation over the years. these include: discontinuous jumps when there is an observation, the probabilistic element introduced upon observation, the subjectiveness of requiring an observer, the difficulty of defining a measuring device, and the necessity of invoking classical physics to describe the "laboratory" in which the results are measured.
<p> the term 'copenhagen interpretation' suggests something more than just a spirit, such as some definite set of rules for interpreting the mathematical formalism of quantum mechanics, presumably dating back to the 1920s. however, no such text exists, apart from some informal popular lectures by bohr and heisenberg, which contradict each other on several important issues. it appears that the particular term, with its more definite sense, was coined by heisenberg in the 1950s, while criticizing alternate "interpretations" (e.g., david bohm's) that had been developed. lectures with the titles 'the copenhagen interpretation of quantum theory' and 'criticisms and counterproposals to the copenhagen interpretation', that heisenberg delivered in 1955, are reprinted in the collection "physics and philosophy". before the book was released for sale, heisenberg privately expressed regret for having used the term, due to its suggestion of the existence of other interpretations, that he considered to be "nonsense".
<p> the copenhagen interpretation is an expression of the meaning of quantum mechanics that was largely devised from 1925 to 1927 by niels bohr and werner heisenberg. it remains one of the most commonly taught interpretations of quantum mechanics.
<p> the copenhagen interpretation is the "standard" interpretation of quantum mechanics formulated by niels bohr and werner heisenberg while collaborating in copenhagen around 1927. bohr and heisenberg extended the probabilistic interpretation of the wavefunction proposed originally by max born. the copenhagen interpretation rejects questions like "where was the particle before i measured its position?" as meaningless. the measurement process randomly picks out exactly one of the many possibilities allowed for by the state's wave function in a manner consistent with the well-defined probabilities that are assigned to each possible state. according to the interpretation, the interaction of an observer or apparatus that is external to the quantum system is the cause of wave function collapse, thus according to paul davies, "reality is in the observations, not in the electron". in general, after a measurement (click of a geiger counter or a trajectory in a spark or bubble chamber) it ceases to be relevant unless subsequent experimental observations can be performed.
<p> the views of several early pioneers of quantum mechanics, such as niels bohr and werner heisenberg, are often grouped together as the "copenhagen interpretation", though physicists and historians of physics have argued that this terminology obscures differences between the views so designated. while copenhagen-type ideas were never universally embraced, challenges to a perceived copenhagen orthodoxy gained increasing attention in the 1950s with the pilot-wave interpretation of david bohm and the many-worlds interpretation of hugh everett iii. | Here's the thing with interpretations of quantum mechanics: they're just that, interpretations. They're people trying to come up with analogies for what's happening "behind the scenes". They tell different stories about what is going on, but they make the same experimental predictions, and none of them contradicts any established experimental results. So in that sense, they're *all* "right". You'll find that many physicists endorse the "shut up and calculate" interpretation, because the equations correctly predict the outcomes of experiments, and everything else is more philosophy than science.
do deaf children learn to read by associating words with signs? could you learn exclusively written language? | <p> deaf children use sign to express themselves, discuss events, ask questions, and refer to things in their settings, just as hearing children use spoken language. the human brain is naturally wired to crave information and constant access to communication, and social settings with accessible language provide that. the earlier that deaf children have the chance to naturally acquire sign language with constant language input, the better their cognitive and social skills, because they are able to receive information about actions, objects, experiences, and events in time.
<p> children who are deaf and employ a sign language as their primary language learn to read in slightly different ways than their hearing counterparts. much as speakers of oral languages most frequently achieve spoken fluency before they learn to read and write, the most successful profoundly deaf readers first learn to communicate in a sign language. research suggests that there is a mapping process, in which features from the sign language are accessed as a basis for the written language, similar to the way hearing unimodal bilinguals access their primary language when communicating in their second language. profoundly deaf asl signers show that fluency in asl is the best predictor of high reading skills in predicting proficiency in written english. in addition, highly proficient signing deaf children use more evaluative devices when writing than less proficient signing deaf children, and the relatively frequent omission of articles when writing in english by proficient signers may suggest a stage in which the transfer effect (that normally facilitates deaf children in reading) facilitates a mix of the morphosyntactic systems of written english and asl. deaf children then appear to map the new morphology, syntax, and lexical choices of their written language onto the existing structures of their primary sign language.
<p> there are mixed results in how important phonological information is to deaf individuals when reading and when that information is obtained. alphabets, abugidas, abjads, and syllabaries all seem to require the reader/writer to know something about the phonology of their target language prior to learning the system. profoundly deaf children do not have access to the same auditory base that hearing children do. orally trained deaf children do not always use phonological information in reading tasks, word recognition tasks or homophonic tasks; however, deaf signers who are not orally trained do utilize phonological information in word-rhyming tasks. furthermore, when performing on tasks with phonologically confusable initial sounds, hearing readers made more errors than deaf readers. yet when given sentences that are sublexically confusable when translated into asl, deaf readers made more errors than hearing readers. the body of literature clearly shows that skilled deaf readers can employ phonological skills, even if they don’t all the time; without additional longitudinal studies it is uncertain if a profoundly deaf person must know something about the phonology of the target language to become a skilled reader (less than 75% of the deaf population) or if by becoming a skilled reader a deaf person learns how to employ phonological skills of the target language.
<p> bullet::::- certain deaf adults who neither have capability to learn a spoken language nor have access to a sign language, known as home signers, in fact communicate with both others like them and the outside world using gestures and self-created signing. although they have no experience in language or how it works, they are able to conceptualize more than iconic words but move into the abstract, suggesting that they could understand that before creating a gesture to show it. ildefonso, a homesigner who learned a main sign language at twenty-seven years of age, found that although his thinking became easier to communicate, he had lost his ability to communicate with other homesigners as well as recall how his thinking worked without language.
<p> bilingual education for deaf aims to acquire #jsl and written language. some parents select other modality of language as well with sign language, like spoken language, to communicate with their children. some parents also select to use other tools, cochlear implants and hearing aids, for their deaf children with sign language. in regards to the deaf education, using sign was cited in studies as it prevents from acquiring written language for a long time. however, recent articles reported that the children who have fluent first language have the ability to acquire second language, like other foreign language learners, even though the modalities are different. therefore, the most important thing is to acquire fluency in the first language. the future task is to think about how to make the bridge between japanese sign language and written language in bilingual education. in japan, the bilingual education has been in free school (tatsunoko gakuen) since 1999 and school (meisei gakuen) since 2009.
<p> because new hearing parents of deaf children are frequently advised by professionals without training in the field of language acquisition in deaf children, they are often advised to use sign as a last resort, only after the child has failed to learn spoken language. this subjects these children to language deprivation in the time before they are exposed to accessible language input, which has been shown to increase the likelihood of permanent, irreversible effects to their brains. these effects include not only a detrimental impact on language acquisition, but other cognitive and mental health difficulties as well.
<p> the written forms of language can be considered another modality. sign languages do not have widely accepted written forms, so deaf individuals learn to read and write an oral language. this is known as sign–print bilingualism—a deaf individual has fluency in (at least) one sign language as their primary language and has literacy skills in the written form of (at least) one oral language, without access to other resources of the oral language that are gained through auditory stimuli. orthographic systems employ the morphology, syntax, lexical choices, and often phonetic representation of their target language in at least superficial ways; one must learn these new features of the target language in order to read or write. in communities where there is standardized education for the deaf, such as the united states and the netherlands, deaf individuals do gain skill sets in reading and writing in the oral language of the community. in such a state, bilingualism is achieved between a sign language and the written form of the community's oral language. in this view, all sign–print bilinguals are bimodal bilinguals, but all bimodal bilinguals may not be sign–print bilinguals. | There is a written form for people who use sign language [here is a link to a site that explains it; it also gives ASL/BSL tutorials for free unless you become a member. it's not like a trial thing, more like reddit and reddit gold] ()
i've heard in a recent post that water is at its densest at 4 degrees celsius. how does this play in the largest bodies of water? (lakes, oceans, etc...) | <p> super-dense water is water that has been contained in an environment with both molecular uniformity and extreme depth, which causes the molecules of water to be packed tightly together and thus gain a tougher solidity and higher density than regular ice. super dense water is found on planets, such as the moons tethys, ganymede, callisto, and europa in the solar system, which are covered entirely in water and have little to no landmass.
<p> in the equation z = (ρf / (ρs − ρf)) · h, the thickness of the freshwater zone above sea level is represented as h and that below sea level is represented as z. the two thicknesses h and z are related by the densities ρf and ρs, where ρf is the density of freshwater and ρs is the density of saltwater. freshwater has a density of about 1.000 grams per cubic centimeter (g/cm³) at 20 °c, whereas that of seawater is about 1.025 g/cm³. the equation can be simplified to z ≈ 40h.
<p> concentrations in fresh water vary more significantly. surface water such as rivers or lakes generally contains between 0.01–0.3 ppm. groundwater (well water) concentrations vary even more, depending on the presence of local fluoride-containing minerals. for example, natural levels of under 0.05 mg/l have been detected in parts of canada but up to 8 mg/l in parts of china; in general levels rarely exceed 10 mg/litre
<p> where ρm is the density of the mantle (ca. 3,300 kg/m³), ρc is the density of the crust (ca. 2,750 kg/m³) and ρw is the density of the water (ca. 1,000 kg/m³). thus, we may generally consider:
<p> subantarctic mode water (samw) is an important water mass in the earth's oceans. it is formed near the subantarctic front on the northern flank of the antarctic circumpolar current. the surface density of subantarctic mode water ranges between about 1026.0 and 1027.0 kg/m³ and the core of this water mass is often identified as a region of particularly low stratification.
<p> the total volume of water on earth is estimated at 1.386 billion km³ (333 million cubic miles), with 97.5% being salt water and 2.5% being fresh water. of the fresh water, only 0.3% is in liquid form on the surface.
<p> this corresponds to a mean density about 4 higher than that of water (i.e., about ), about 20% below the modern value, but still significantly larger than the mean density of normal rock, suggesting for the first time that the interior of the earth might be substantially | Well, it causes convection currents in the ocean, which help circulate nutrients and allow life to exist there; nutrients and minerals from deep-sea vents rise with the superheated water. Also, since water is less dense as a solid, ice floats; otherwise, when the water froze it would sink, killing anything under it and causing more water to freeze on top and sink until no liquid was left.
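As a rough illustration of the density argument in the answer above, here is a minimal sketch using approximate literature values for pure water and ice (the numbers are assumptions for illustration; they do not come from the question or its context):

```python
# Approximate densities of pure water and ice, in kg/m^3 (rounded
# literature values, assumed here purely for illustration).
densities = {
    "ice at 0 C":    917.0,
    "water at 0 C":  999.8,
    "water at 4 C": 1000.0,   # the density maximum for fresh water
    "water at 10 C": 999.7,
    "water at 20 C": 998.2,
}

densest = max(densities, key=densities.get)
print("densest:", densest)  # -> water at 4 C

# Consequence for a cooling lake: once the surface drops below ~4 C it is
# lighter than the 4 C water beneath it, so it stays on top and eventually
# freezes there, while the deep water sits near 4 C through the winter.
```

Note that this is the freshwater picture; in seawater at typical salinity, dissolved salt pushes the temperature of maximum density below the freezing point, so ocean overturning is driven more by cooling and salinity than by a 4 °C maximum.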
north korea's july 4th missile reached an altitude of 1700 miles. why are scientists saying its range is only 4160 miles? why couldn't it orbit/deorbit to anywhere on earth? | <p> north korea stated that the missile reached an altitude of around 4,475 km and traveled some 950 km downrange with a flight time of 53 minutes. based on its trajectory and distance, the missile would have a range of more than 13,000 km (8,100 miles) – more than enough to reach washington d.c. and the rest of the united states, albeit with a reduced payload according to the union of concerned scientists. in addition, the range covers several of the united states's international allies such as the united kingdom and france, as well as all of earth's continents, except south america, the caribbean and most of antarctica.
<p> on july 28, 2017, north korea launched an additional ballistic missile from chagang province, reaching an altitude of 3,000 km (1,865 mi). jeffrey lewis, researcher at the james martin center for nonproliferation studies, estimated that the missile could have a range of approximately 10,000 km based on its 45-minute flight time. with this range, the missile could potentially reach major u.s. cities such as denver and chicago. this is the fourteenth missile test conducted by north korea in the year 2017. as with the missile launched on july 4, this missile has also been estimated to be of type hwasong-14.
<p> on september 15 at about 6:30am kst, north korea fired a hwasong-12 missile from the pyongyang international airport, which, for a second time, overflew hokkaido, japan. the missile traveled and reached a maximum height of ; this is the furthest distance any north korean irbm missile has ever reached.
<p> on july 4, 2017, north korea launched hwasong-14 from banghyon airfield, near kusong, in a lofted trajectory it claims lasted 39 minutes for 930 km (578 mi), landing in the waters of the japanese exclusive economic zone. us pacific command said the missile was aloft for 37 minutes, meaning that in a standard trajectory it could have reached all of alaska, a distance of 6,690 km (4,160 mi).
<p> the missile traveled 3,700 kilometres (2,300 mi) achieving a maximum apogee of 770 kilometres (480 mi) during its 19-minute flight. it was the furthest any north korean irbm missile has gone above and beyond japan. on september 18, north korea announced that any further sanctions would only cause acceleration of their nuclear program.
<p> on august 29, 2017, at 5:57 am kst, north korea launched a hwasong-12 ballistic missile that passed over hokkaido, the second largest island of japan. the missile travelled and reached a maximum height of . this was the second successful test flight of the hwasong-12 missile, following three failed tests.
<p> bullet::::- on 22 june 2016, north korea successfully launched its land-based medium-range missile hwasong-10 to an altitude of and a range of . the missile test demonstrates that the missile's range could be as far as about 3500 km. even though some experts are skeptical about whether hwasong-10 has the capability to deliver the warhead to the u.s. guam military base at the configuration used in this test, they agreed that guam is in the range if the weight of the warhead can be reduced from 650 kg to less than 500 kg. | Circularizing an orbit requires much more energy than just bringing your spacecraft high enough, if you orbit close to the planet. For a circular orbit at the height of Sputnik, my little napkin calculation gave me the result that circularizing its orbit requires about 4 times more energy than just shooting it to the same height (assuming no drag, which is not a very good assumption, but still maybe something to get an idea). [edit: On the other hand, bringing something to 1700 miles requires just twice as much energy as bringing something to 583 miles.] To illustrate my point a little more: the kinetic energy of an object on a circular orbit is GMm/(2r), where G is the gravitational constant, M the mass of the earth, m the mass of the object, and r the radius of the orbit. The amount of energy needed to get something to escape velocity is GMm/R, where R is the radius of the earth. That means if the earth were in vacuum, making something orbit directly around the earth at a negligible distance from the planet would require half as much energy as sending a spacecraft *arbitrarily* far away from earth, which shows that just comparing heights of objects is a bad idea. Getting something to 'orbit' infinitely far away from earth requires just twice the energy of letting something orbit 0 m from earth (again, all in vacuum of course).
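For anyone who wants to reproduce the napkin numbers in the answer above, here is a minimal sketch. It assumes a non-rotating Earth, no drag, and standard textbook values for Earth's gravitational parameter and radius; the 583-mile figure is treated as roughly Sputnik's apogee. None of these constants are given in the thread itself.

```python
# Specific (per-kilogram) energy to climb to a height vs. to circularize
# there. Assumes a non-rotating Earth and no drag; MU and R are standard
# textbook values, not taken from the answer above.
MU = 3.986e14      # Earth's gravitational parameter GM, m^3/s^2
R = 6.371e6        # mean Earth radius, m
MILE = 1609.344    # metres per mile

def lift_energy(h):
    """J/kg just to raise a payload from the surface to height h."""
    return MU * (1.0 / R - 1.0 / (R + h))

def orbit_energy(h):
    """J/kg to raise the payload AND give it circular orbital speed at h."""
    return lift_energy(h) + MU / (2.0 * (R + h))

h_sputnik = 583 * MILE    # ~938 km, roughly Sputnik's apogee
h_missile = 1700 * MILE   # ~2,700 km, the quoted missile apogee

print(orbit_energy(h_sputnik) / lift_energy(h_sputnik))  # ~4.4: circularizing costs ~4x
print(lift_energy(h_missile) / lift_energy(h_sputnik))   # ~2.3: 1700 mi vs 583 mi apogee
```

The first ratio is why "it reached 1,700 miles" by itself says little about orbital capability: most of the cost of a low orbit is sideways speed, not altitude.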
how does mustard gas work chemically? | <p> mustard gas was first used effectively in world war i by the german army against british and canadian soldiers near ypres, belgium, in 1917 and later also against the french second army. the name "yperite" comes from its usage by the german army near the town of ypres. the allies did not use mustard gas until november 1917 at cambrai, france, after the armies had captured a stockpile of german mustard-gas shells. it took the british more than a year to develop their own mustard gas weapon, with production of the chemicals centred on avonmouth docks. (the only option available to the british was the despretz–niemann–guthrie process). this was used first in september 1918 during the breaking of the hindenburg line with the hundred days' offensive.
<p> sulfur mustard, commonly known as mustard gas, is the prototypical substance of the sulfur-based family of cytotoxic and vesicant chemical warfare agents, which can form large blisters on exposed skin and in the lungs. they have a long history of use as a blister-agent in warfare and, along with organoarsenic compounds such as lewisite, are the most well-studied of such agents. related chemical compounds with similar chemical structure and similar properties form a class of compounds known collectively as sulfur mustards or mustard agents. pure sulfur mustards are colorless, viscous liquids at room temperature. when used in impure form, such as warfare agents, they are usually yellow-brown and have an odor resembling mustard plants, garlic, or horseradish, hence the name. the common name of "mustard gas" is considered inaccurate because the sulfur mustard is not actually vaporized, but dispersed as a fine mist of liquid droplets. sulfur mustard was originally assigned the name lost, after the scientists wilhelm lommel and wilhelm steinkopf, who developed a method of large-scale production for the imperial german army in 1916.
<p> mustard agents are regulated under the 1993 chemical weapons convention. three classes of chemicals are monitored under this convention, with sulfur and nitrogen mustard grouped in schedule 1, as substances with no use other than in chemical warfare. mustard agents could be deployed by means of artillery shells, aerial bombs, rockets, or by spraying from warplanes or other aircraft.
<p> the pungency of the condiment mustard results when ground mustard seeds are mixed with water, vinegar, or other liquid (or even when chewed). under these conditions, a chemical reaction between the enzyme myrosinase and a glucosinolate known as sinigrin from the seeds of black mustard ("brassica nigra") or brown indian mustard ("brassica juncea") produces allyl isothiocyanate. by distillation one can produce a very sharp-tasting essential oil, sometimes called volatile oil of mustard, containing more than 92% allyl isothiocyanate. the pungency of allyl isothiocyanate is due to the activation of the trpa1 ion channel in sensory neurons. white mustard ("brassica hirta") does not yield "allyl" isothiocyanate, but a different and milder isothiocyanate.
<p> mustard gas was first used effectively in world war i by the german army against british and canadian soldiers near ypres, belgium, in 1917 and later also against the french second army. the name yprite comes from its usage by the german army near the town of ypres. the allies did not use mustard gas until november 1917 at cambrai, france, after the armies had captured a stockpile of german mustard-gas shells. it took the british more than a year to develop their own mustard gas weapon, with production of the chemicals centred on avonmouth docks. (the only option available to the british was the despretz–niemann–guthrie process). this was used first in september 1918 during the breaking of the hindenburg line with the hundred days' offensive.
<p> bullet::::- hd – codenamed pyro by the british, and distilled mustard by the us. distilled sulfur mustard (bis(2-chloroethyl) sulfide); approximately 96% pure. the term "mustard gas" usually refers to this variety of sulfur mustard. a much-used path of synthesis was based upon the reaction of thiodiglycol with hydrochloric acid.
<p> nitrogen mustards are cytotoxic chemotherapy agents derived from mustard gas. although their common use is medicinal, in principle these compounds can also be deployed as chemical warfare agents. nitrogen mustards are nonspecific dna alkylating agents. nitrogen mustard gas was stockpiled by several nations during the second world war, but it was never used in combat. as with all types of mustard gas, nitrogen mustards are powerful and persistent blister agents and the main examples (hn1, hn2, hn3, see below) are therefore classified as schedule 1 substances within the chemical weapons convention. production and use is therefore strongly restricted. | It doesn't have much to do with water. One of the chlorides in the mustard gas structure comes off and forms a cyclic sulfur species, which alkylates guanine bases in the DNA inside the cell. This prevents the cell from dividing and causes cell death, giving nasty chemical burns.
since we measure nuclear warhead yields in terms of tonnes of tnt, would detonating an equivalent amount of tnt actually produce a similar explosion in terms of size, temperature, blast wave etc? | <p> so, one can state that a nuclear bomb has a yield of 15 kt (63×10^12 or 6.3×10^13 j); but an actual explosion of a 15 000 ton pile of tnt may yield (for example) 8×10^13 j due to additional carbon/hydrocarbon oxidation not present with small open-air charges.
<p> bullet::::- one megatonne of tnt equivalent amounts to approx. 4 petajoules and is the approximate energy released on igniting one million tonnes of tnt. the unit is often used in measuring the explosive power of nuclear weapons.
<p> bullet::::- kt/mt – this is an approximate measure of how much energy is released by the detonation of a nuclear weapon; kt stands for kilotons tnt, mt stands for megatons tnt. conventional science of the period contemporary to the manhattan project came up with these measures so as to reasonably analogize the incredible energy of a nuclear detonation in a form that would be understandable to the military, politicians, or civilians. trinitrotoluene (tnt) was and is a high explosive with industrial and military uses, and is around 40% more powerfully explosive than an equivalent weight of gunpowder. a ton is equivalent to 1000 kg or approximately 2200 pounds. a 20 kt nuclear device, therefore, liberates as much energy as does the explosion of 20,000 tons of tnt (this is the origin of the term, for the exact definition see tnt equivalent). this is a large quantity of energy. in addition, unlike tnt, the detonation of a nuclear device also emits ionizing radiation that can harm living organisms, including humans; the prompt radiation from the blast itself and the fallout can persist for a long period of time, though within hours to weeks, the radiation from a single nuclear detonation will drop enough to permit humans to remain at the site of the blast indefinitely without incurring acute fatal exposure to radiation.
<p> the yield of 10 tons tnt equivalent was just below the largest yield for any conventional bomb built until the 1950s, t-12 cloudmaker (designed in 1944), at a mass of close to 20 metric tons yielding a blast of 11 tons tnt equivalent.
<p> prior to the detonation of the hiroshima bomb, the size of the halifax explosion (about 3 kt tnt equivalent, or 1.26×10^13 j) was the standard for this type of relative measurement. each explosion had been the largest known man-made detonation to date.
<p> the vast majority of explosives are chemical explosives. explosives usually have less potential energy than fuels, but their high rate of energy release produces a great blast pressure. tnt has a detonation velocity of 6,940 m/s compared to 1,680 m/s for the detonation of a pentane-air mixture, and the 0.34-m/s stoichiometric flame speed of gasoline combustion in air.
<p> the "tonne of trinitrotoluene (tnt)" is used as a proxy for energy, usually of explosions (tnt is a common high explosive). prefixes are used: kiloton(ne), megaton(ne), gigaton(ne), especially for expressing nuclear weapon yield, based on a specific combustion energy of tnt of about 4.2 mj/kg (or one thermochemical calorie per milligram). hence, 1 t tnt = 4.2 gj, 1 kt tnt = 4.2 tj, 1 mt tnt = 4.2 pj. | Assuming a density > 1 g/cc the volume of 50 megatons of TNT is on the order of 300x300x300 meters^3. The initial conditions of the blast are quite different because the fusion event occurs in a much smaller volume and hence temperature and energy density are greater-initially. Thus the initial shockwave is faster and accompanied by UV, x-ray, and gamma radiation. In the case of a pile of TNT the detonation would propagate through the pile in several hundred ms and the initial fireball would be considerably cooler than the equivalent sized nuclear fireball. That said, since a large fraction of the nuclear event is in the form of heat and radiation, the mechanical shockwave would be less powerful than that produced by a equivalent energy mass of TNT. In the latter case, the shock is the result of the sudden production of hot gas (Nitrogen, Carbon dioxide) at thousands of degrees in contrast to a smaller volume at tens to hundreds of thousands of degrees during a fusion event. So less energy is dissipated (initially)-as heat-in the chemical event, and more energy goes into the shockwave via gas expansion. RAND has declassified documents that go into greater detail with graphs and tables. You can find some online. Also, the US did tests with 1 kt (ish) piles of dynamite to simulate nuclear blasts. Videos can be found on the youtube. |
since we measure nuclear warhead yields in terms of tonnes of tnt, would detonating an equivalent amount of tnt actually produce a similar explosion in terms of size, temperature, blast wave etc? | <p> so, one can state that a nuclear bomb has a yield of 15 kt (63×10^12 or 6.3×10^13 j); but an actual explosion of a 15 000 ton pile of tnt may yield (for example) 8×10^13 j due to additional carbon/hydrocarbon oxidation not present with small open-air charges.
<p> bullet::::- one megatonne of tnt equivalent amounts to approx. 4 petajoules and is the approximate energy released on igniting one million tonnes of tnt. the unit is often used in measuring the explosive power of nuclear weapons.
<p> bullet::::- kt/mt – this is an approximate measure of how much energy is released by the detonation of a nuclear weapon; kt stands for kilotons tnt, mt stands for megatons tnt. conventional science of the period contemporary to the manhattan project came up with these measures so as to reasonably analogize the incredible energy of a nuclear detonation in a form that would be understandable to the military, politicians, or civilians. trinitrotoluene (tnt) was and is a high explosive with industrial and military uses, and is around 40% more powerfully explosive than an equivalent weight of gunpowder. a ton is equivalent to 1000 kg or approximately 2200 pounds. a 20 kt nuclear device, therefore, liberates as much energy as does the explosion of 20,000 tons of tnt (this is the origin of the term, for the exact definition see tnt equivalent). this is a large quantity of energy. in addition, unlike tnt, the detonation of a nuclear device also emits ionizing radiation that can harm living organisms, including humans; the prompt radiation from the blast itself and the fallout can persist for a long period of time, though within hours to weeks, the radiation from a single nuclear detonation will drop enough to permit humans to remain at the site of the blast indefinitely without incurring acute fatal exposure to radiation.
<p> the yield of 10 tons tnt equivalent was just below the largest yield for any conventional bomb built until the 1950s, t-12 cloudmaker (designed in 1944), at a mass of close to 20 metric tons yielding a blast of 11 tons tnt equivalent.
<p> prior to the detonation of the hiroshima bomb, the size of the halifax explosion (about 3 kt tnt equivalent, or 1.26×10^13 j) was the standard for this type of relative measurement. each explosion had been the largest known man-made detonation to date.
<p> the vast majority of explosives are chemical explosives. explosives usually have less potential energy than fuels, but their high rate of energy release produces a great blast pressure. tnt has a detonation velocity of 6,940 m/s compared to 1,680 m/s for the detonation of a pentane-air mixture, and the 0.34-m/s stoichiometric flame speed of gasoline combustion in air.
<p> the "tonne of trinitrotoluene (tnt)" is used as a proxy for energy, usually of explosions (tnt is a common high explosive). prefixes are used: kiloton(ne), megaton(ne), gigaton(ne), especially for expressing nuclear weapon yield, based on a specific combustion energy of tnt of about 4.2 mj/kg (or one thermochemical calorie per milligram). hence, 1 t tnt = 4.2 gj, 1 kt tnt = 4.2 tj, 1 mt tnt = 4.2 pj. | Not an explosives expert, can't help you on the specifics of such a detonation. However your follow up is easy: - TNT (Trinitrotoluene) has a density of 1,654 kg/m^(3) - 50 x 10^6 tonnes is 5 x 10^10 kg - 5 x 10^10 kg/ 1,654 kg/m^3 = 3.02 x 10^7 m3 This would be the volume occupied by this TNT pile (ignoring air gaps between pieces or containers). This is about 10 times the volume of NASA's Vehicle Assembly Building, or slightly more than the volume of concrete that makes up the Three Gorges Dam. If this volume were a cube, each side would be 310m (1020 feet) long. |
since we measure nuclear warhead yields in terms of tonnes of tnt, would detonating an equivalent amount of tnt actually produce a similar explosion in terms of size, temperature, blast wave etc? | <p> so, one can state that a nuclear bomb has a yield of 15 kt (63×10^12 or 6.3×10^13 j); but an actual explosion of a 15 000 ton pile of tnt may yield (for example) 8×10^13 j due to additional carbon/hydrocarbon oxidation not present with small open-air charges.
<p> bullet::::- one megatonne of tnt equivalent amounts to approx. 4 petajoules and is the approximate energy released on igniting one million tonnes of tnt. the unit is often used in measuring the explosive power of nuclear weapons.
<p> bullet::::- kt/mt – this is an approximate measure of how much energy is released by the detonation of a nuclear weapon; kt stands for kilotons tnt, mt stands for megatons tnt. conventional science of the period contemporary to the manhattan project came up with these measures so as to reasonably analogize the incredible energy of a nuclear detonation in a form that would be understandable to the military, politicians, or civilians. trinitrotoluene (tnt) was and is a high explosive with industrial and military uses, and is around 40% more powerfully explosive than an equivalent weight of gunpowder. a ton is equivalent to 1000 kg or approximately 2200 pounds. a 20 kt nuclear device, therefore, liberates as much energy as does the explosion of 20,000 tons of tnt (this is the origin of the term, for the exact definition see tnt equivalent). this is a large quantity of energy. in addition, unlike tnt, the detonation of a nuclear device also emits ionizing radiation that can harm living organisms, including humans; the prompt radiation from the blast itself and the fallout can persist for a long period of time, though within hours to weeks, the radiation from a single nuclear detonation will drop enough to permit humans to remain at the site of the blast indefinitely without incurring acute fatal exposure to radiation.
<p> the yield of 10 tons tnt equivalent was just below the largest yield for any conventional bomb built until the 1950s, t-12 cloudmaker (designed in 1944), at a mass of close to 20 metric tons yielding a blast of 11 tons tnt equivalent.
<p> prior to the detonation of the hiroshima bomb, the size of the halifax explosion (about 3 kt tnt equivalent, or 1.26×10^13 j) was the standard for this type of relative measurement. each explosion had been the largest known man-made detonation to date.
<p> the vast majority of explosives are chemical explosives. explosives usually have less potential energy than fuels, but their high rate of energy release produces a great blast pressure. tnt has a detonation velocity of 6,940 m/s compared to 1,680 m/s for the detonation of a pentane-air mixture, and the 0.34-m/s stoichiometric flame speed of gasoline combustion in air.
<p> the "tonne of trinitrotoluene (tnt)" is used as a proxy for energy, usually of explosions (tnt is a common high explosive). prefixes are used: kiloton(ne), megaton(ne), gigaton(ne), especially for expressing nuclear weapon yield, based on a specific combustion energy of tnt of about 4.2 mj/kg (or one thermochemical calorie per milligram). hence, 1 t tnt = 4.2 gj, 1 kt tnt = 4.2 tj, 1 mt tnt = 4.2 pj. | It would release the same amount of *energy* (that's what tonnage equivalent means) but the devil is in the details. Conventional explosives release mostly mechanical energy and some heat; a nuke releases a bigger share of heat and a lot of ionizing radiation. |
why do standard radios only go from ~88-107 mhz? | <p> 160 meters refers to the band of radio frequencies between 1,800 and 2,000 khz, just above the mediumwave broadcast band. for many decades the lowest radio frequency band allocated for use by amateur radio, before the adoption, at the beginning of the 21st century in most countries, of the 630 and 2200 meter bands. older amateur operators often refer to 160 meters as the top band it is also sometimes referred to as the "gentleman's band" in contrast to the often-freewheeling activity in the 80 and 20 meter bands.
<p> the center frequencies of the fm channels are spaced in increments of 200 khz. the frequency of 87.9 mhz, while technically part of tv channel 6 (82 to 88 mhz), is used by just two fm class-d stations in the united states. portable radio tuners often tune down to 87.5 mhz, so that the same radios can be made and sold worldwide. automobiles usually have fm radios that can tune down to 87.7 mhz, so that tv channel 6's audio at 87.75 mhz (±10 khz) could be received, such as in birmingham, alabama, and denver, colorado. with the advent of digital television in the united states, this ability will soon be irrelevant when the remaining analog lptv stations are required by the fcc to shut down or convert to digital by september 2015—but there are still analog television stations in the sparsely-populated regions of northern canada. there are also analog tv stations on the other continents and on scores of different islands.
<p> 87.5–87.9 mhz is a radio frequency which, in most of the world, is used for fm broadcasting. in north america, however, this bandwidth is allocated to vhf television channel 6 (82–88 mhz). the analog audio for tv channel 6 is broadcast at 87.75 mhz (adjustable down to 87.74). several stations, most notably those joining the pulse 87 franchise, have operated on this frequency as radio stations, though they use television licenses. as a result, fm radio receivers such as those found in automobiles which are designed to tune into this frequency range could receive the audio for analog-mode programming on the local tv channel 6 while in north america.
<p> in may 1940, the federal communications commission (fcc), a u.s. government agency, formally allocated the 42 – 50 mhz band for fm radio broadcasting. it was soon apparent that distant fm signals from up to distance would often interfere with local stations during the summer months.
<p> the receivers on many of these modern 800 mhz radios can be easily modified to receive higher than 870 mhz, to about 904 mhz with good sensitivity. in addition, the transmitters on many of the aforementioned 900 mhz radios can be easily modified to transmit lower than 935 mhz, to about 926 mhz with acceptable power output. with this in mind, many amateurs have opted to set up repeaters with -25 mhz splits using modified 800 mhz radios as receivers and modified 900 mhz radios as transmitters.
<p> in the united states, a number of low-power radio stations operate on analog television channel 6; this channel broadcasts its audio on the 87.75 mhz frequency. while most of these stations market themselves on "87.7," (due to the .2 mhz odd-decimal spacing used in the united states) such stations are equally audible on 87.8 mhz.
<p> when the itu approved the extension of the "top end" of the am band to 1700 khz in 1988, few consumer radios could tune higher than about 1620 or 1630 khz. however, it was reported at the time that fcc "officials have been meeting with american manufacturers of radio receivers to make an early start on producing sets capable of receiving signals in the new band..." and when the first u.s. expanded band radio station began operating in late 1995, it was estimated that by now there were 280 million radios capable of receiving the full expanded band. | a) The antennas are not optimized for frequencies outside that range. b) Regulation. That part of the spectrum (roughly 87.5–108 MHz) is reserved for terrestrial FM broadcasting, and no one else is supposed to run devices that interfere with those frequencies.
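To make the regulation point concrete, here is a small illustrative sketch based on the US FM band plan mentioned in the context above (channels on 200 kHz centers at odd decimals); treat the exact list as an assumption about US allocations rather than something stated in the answer.

```python
# US broadcast FM channels sit on 200 kHz centers from 87.9 to 107.9 MHz,
# which is why consumer tuners (and their antennas) only cover ~88-108 MHz.
channels_mhz = [round(87.9 + 0.2 * n, 1) for n in range(101)]

print(channels_mhz[0], "...", channels_mhz[-1])   # 87.9 ... 107.9
print(len(channels_mhz), "channels")              # 101
```

Elsewhere in the world tuners typically start at 87.5 MHz, which is why many portable radios are built to cover roughly 87.5–108 MHz.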
g-force, pushing vs. pulling? can you tell? | <p> the term g-force is technically incorrect as it is a measure of "acceleration", not force. while acceleration is a vector quantity, g-force accelerations ("g-forces" for short) are often expressed as a scalar, with positive g-forces pointing downward (indicating upward acceleration), and negative g-forces pointing upward. thus, a g-force is a vector of acceleration. it is an acceleration that must be produced by a mechanical force, and cannot be produced by simple gravitation. objects acted upon "only" by gravitation experience (or "feel") no g-force, and are weightless.
<p> g-forces, when multiplied by a mass upon which they act, are associated with a certain type of mechanical "force" in the correct sense of the term force, and this force produces compressive stress and tensile stress. such forces result in the operational sensation of weight, but the equation carries a sign change due to the definition of positive weight in the direction downward, so the direction of weight-force is opposite to the direction of g-force acceleration:
<p> if a g-force (acceleration) is vertically upward and is applied by the ground (which is accelerating through space-time) or applied by the floor of an elevator to a standing person, most of the body experiences compressive stress which at any height, if multiplied by the area, is the related mechanical force, which is the product of the g-force and the supported mass (the mass above the level of support, including arms hanging down from above that level). at the same time, the arms themselves experience a tensile stress, which at any height, if multiplied by the area, is again the related mechanical force, which is the product of the g-force and the mass hanging below the point of mechanical support. the mechanical resistive force spreads from points of contact with the floor or supporting structure, and gradually decreases toward zero at the unsupported ends (the top in the case of support from below, such as a seat or the floor, the bottom for a hanging part of the body or object). with compressive force counted as negative tensile force, the rate of change of the tensile force in the direction of the g-force, per unit mass (the change between parts of the object such that the slice of the object between them has unit mass), is equal to the g-force plus the non-gravitational external forces on the slice, if any (counted positive in the direction opposite to the g-force).
<p> the g-force experienced by an object is due to the vector sum of all non-gravitational and non-electromagnetic forces acting on an object's freedom to move. in practice, as noted, these are surface-contact forces between objects. such forces cause stresses and strains on objects, since they must be transmitted from an object surface. because of these strains, large g-forces may be destructive.
<p> a pushdown is a strength training exercise used for strengthening the triceps muscles in the back of the arm. the exercise is completed by pushing an object downward against resistance. this exercise is an example of the primary function of the triceps, extension of the elbow joint. it is a little-known fact that the triceps pushdown also works the biceps muscle; the reverse holds for biceps curls, which also work the triceps.
<p> the gravitational force equivalent, or, more commonly, g-force, is a measurement of the type of force per unit mass – typically acceleration – that causes a perception of weight, with a g-force of 1 g equal to the conventional value of gravitational acceleration on earth, "g", of about 9.8 m/s². since g-forces indirectly produce weight, any g-force can be described as a "weight per unit mass" (see the synonym specific weight). when the g-force is produced by the surface of one object being pushed by the surface of another object, the reaction force to this push produces an equal and opposite weight for every unit of an object's mass. the types of forces involved are transmitted through objects by interior mechanical stresses. gravitational acceleration (except certain electromagnetic force influences) is the cause of an object's acceleration in relation to free fall.
<p> concepts related to force include: thrust, which increases the velocity of an object; drag, which decreases the velocity of an object; and torque, which produces changes in rotational speed of an object. in an extended body, each part usually applies forces on the adjacent parts; the distribution of such forces through the body is the internal mechanical stress. such internal mechanical stresses cause no acceleration of that body as the forces balance one another. pressure, the distribution of many small forces applied over an area of a body, is a simple type of stress that if unbalanced can cause the body to accelerate. stress usually causes deformation of solid materials, or flow in fluids. | Any rear-wheel drive car is still _pushing_ you, regardless of where the engine is. To flip it on its head, if a rear-engine car were front wheel drive, would it still be pushing you? It's not a question of where the engine is converting chemical energy into kinetic energy; it's where this kinetic energy is being applied to the road. Even so, the reason rear-wheel-drive layouts tend to 'feel better' is that as the car 'squats' back due to inertia, it puts weight over the driven axle, whereas in a front wheel drive car, it unloads the driven axle. Things like that, and the ensuing understeer/oversteer behavior bias of FWD and RWD, make RWD the more 'fun' or 'preferable' configuration for the type of people you describe. Where the engine is, however, doesn't change whether the driven wheels are pushing or pulling you. That doesn't make any sense.
does exercising in the morning really increase your metabolism all day vs exercising any other time? | <p> phillips maintains that aerobic exercise is more effective for fat loss when done first thing in the morning, because it raises the metabolism for the remainder of the day, and because the body draws more heavily on its fat stores after fasting overnight.
<p> the body's basal metabolic rate increases with increases in muscle mass, which promotes long-term fat loss and helps dieters avoid yo-yo dieting. moreover, intense workouts elevate metabolism for several hours following the workout, which also promotes fat loss.
<p> studies have shown that exercise reduces stress. exercise effectively reduces fatigue, improves sleep, enhances overall cognitive function such as alertness and concentration, decreases overall levels of tension, and improves self-esteem. because many of these are depleted when an individual experiences chronic stress, exercise provides an ideal coping mechanism. despite popular belief, it is not necessary for exercise to be routine or intense in order to reduce stress. as little as five minutes of aerobic exercise can begin to stimulate anti-anxiety effects. further, a 10-minute walk may have the same psychological benefits as a 45-minute workout, reinforcing the assertion that exercise in any amount or intensity will reduce stress.
<p> exercise is an activity that can facilitate or inhibit sleep quality; people who exercise experience better quality of sleep than those who do not, but exercising too late in the day can be activating and delay falling asleep. increasing exposure to bright and natural light during the daytime and avoiding bright light in the hours before bedtime may help promote a sleep-wake schedule aligned with nature's daily light-dark cycle.
<p> higher intensity exercise, such as high-intensity interval training (hiit), increases the resting metabolic rate (rmr) in the 24 hours following high intensity exercise, ultimately burning more calories than lower intensity exercise; low intensity exercise burns more calories during the exercise, due to the increased duration, but fewer afterwards.
<p> a patient's metabolic rate may change, causing an increase or decrease in weight and energy levels, changes to sleep patterns, and temperature sensitivity. androgen deprivation leads to slower metabolism and a loss of muscle tone. building muscle takes more work. the addition of a progestogen may increase energy, although it may increase appetite as well.
<p> physical exercise rapidly triggers substantial changes at the organismal level, including the secretion of myokines and metabolites by muscle cells. for instance, aerobic exercise in humans leads to significant structural alterations in the brain, while wheel-running in rodents promotes neurogenesis and improves synaptic transmission in particular in the hippocampus. moreover, physical exercise triggers histone modifications and protein synthesis which ultimately positively influence mood and cognitive abilities. notably, regular exercise is somewhat associated with a better sleep quality, which could be mediated by the muscle secretome. | Nope. It's a myth, like locational fat burning (e.g. doing crunches to burn tummy fat) or burning more calories at a lower heart rate. Metabolic rate spikes after exercise but then tails off after a few hours back down to slightly above the BMR for up to a couple of days, regardless of the time of day you start the exercise. The amount it spikes by is pretty closely related to how hard you exercise. You perform better later in the day (optimally around 6pm), and you have the best mix of temperature and hormones at that time to avoid injury in comparison to training with cold muscles first thing in the morning too. HOWEVER, there are a bunch of studies that show that exercise first thing in the morning is more habit forming than exercise later in the day. Personally I don't believe in relying on some motivation or inspiration to achieve an end. I think you have goals and you train yourself to have habits that satisfy those goals. You don't need to motivate yourself to get out of bed and hit the gym, it's just what you always do. Which is why I tend to train first thing, despite the slightly elevated risks.
i'm getting flu/cold. what does science say i should do? | <p> man flu is a phrase that refers to the idea that men, when they have a common cold, experience and self-report symptoms of greater severity, akin to those experienced during the flu. while it is a commonly-used phrase in much of the english-speaking world, there is a continuing discussion over the scientific basis.
<p> a study published in 2009 was reported by a number of outlets including "the daily telegraph" as supporting a scientific basis for the existence of "man flu". however, the study had nothing to do with the flu (the experiment was related to bacterial, not viral, infection) and was performed on genetically modified mice rather than human beings, so the results are not necessarily applicable to humans.
<p> i’ve heard some folks try to dodge the evidence [of global climate change] by saying they’re not scientists; that we don’t have enough information to act. well, i’m not a scientist, either. but you know what, i know a lot of really good scientists at nasa, and at noaa, and at our major universities. and the best scientists in the world are all telling us that our activities are changing the climate, and if we don’t act forcefully, we’ll continue to see rising oceans, longer, hotter heat waves, dangerous droughts and floods, and massive disruptions that can trigger greater migration and conflict and hunger around the globe.
<p> according to roach, "make no mistake, good science writing is medicine. it is a cure for ignorance and fallacy. good science writing peels away the blindness, generates wonder, and brings the open palm to the forehead: 'oh! now i get it!'" regarding her skepticism about the world around her, roach states in her book "spook,"
<p> influenza research involves investigating molecular virology, pathogenesis, host immune responses, genomics, and epidemiology regarding influenza. the main goal of research is to develop influenza countermeasures such as vaccines, therapies and diagnostic tools.
<p> i don't agree with (the who) because i think it's a panic metre, not a pandemic metre. [...] if that flu-like illness is not deadly, i don't know what the cause for alarm is for people who are not really sickened by this virus. [...] i'm really eager to know how much worse this is than seasonal flu. so far it's looking like it's not that serious.
<p> grif and simmons present various tips to combat the cold and flu, though they have little expert knowledge of the subject. to supplement this, doc appears to offer his own advice, though it becomes obvious that he considers many of the side effects of 's possession of him to be common ailments that are met with the cold and flu, along with a series of listed effects. it is revealed that caboose has been suffering from avian influenza, from which he believes he can fly, and falls off a rock trying to demonstrate his ability. finally, donut advises to stay warm and avoid computer viruses. as grif pretends to be sick himself, sarge prepares his own unorthodox treatments. | We aren't allowed to give medical advice. This is because none of us here can properly examine and diagnose you over the Internet. Go see a doctor. |
why does time slow down when favorable or unfavorable things happen? | <p> possibly related to the oddball effect, research suggests that time seems to slow down for a person during dangerous events (such as a car accident, a robbery, or when a person perceives a potential predator or mate), or when a person skydives or bungee jumps, where they're capable of complex thoughts in what would normally be the blink of an eye (see fight-or-flight response). this reported slowing in temporal perception may have been evolutionarily advantageous because it may have enhanced one's ability to intelligibly make quick decisions in moments that were of critical importance to our survival. however, even though observers commonly report that time seems to have moved in slow motion during these events, it is unclear whether this is a function of increased time resolution during the event, or instead an illusion created by the remembering of an emotionally salient event.
<p> possibly related to the oddball effect, research suggests that time seems to slow down for a person during intense events—such as a car accident, a robbery, a chase, skydiving or bungee jumping, a potential predator threat or an intimacy with sexual partner (which would elicit sexual excitement, which in turn release adrenaline), where they're capable of complex thoughts in what would normally be the blink of an eye caused by fight-or-flight response. this reported slowing in temporal perception may have been evolutionary advantageous because it may have enhanced one's ability to intelligibly make quick decisions in moments that were of critical importance to our survival. however, even though observers commonly report that time seems to have moved in slow motion during these events, it is unclear whether this is a function of increased time resolution during the event, or instead an illusion created by the remembering of an emotionally salient event.
<p> retrocausality or backwards causation is a concept of cause and effect where the effect precedes its cause in time, so that a later event in time affects an earlier event. in quantum physics, the distinction between cause and effect is not made at the most fundamental level, so time-symmetric systems can be viewed as causal or retro-causal. philosophical considerations of time travel often address the same issues as retrocausality, as do treatments of the subject in fiction, but the two phenomena are distinct.
<p> there are real phenomena that cause time dilation similar to that of a stasis field. extremely high velocities approaching light speed or immensely powerful gravitational fields such as those existing near the event horizons of black holes will cause time to progress more slowly. however, there is no known theoretical way of causing such time dilation independently of such conditions.
<p> temporally, a large number of processes or events can cause change, but for sake of simplicity they can be categorized roughly as either abrupt or gradual. abrupt changes are generally referred to as disturbances; these include things like wildfires, high winds, landslides, floods, avalanches and the like. their causes are usually external (exogenous) to the community—they are natural processes occurring (mostly) independently of the natural processes of the community (such as germination, growth, death, etc.). such events can change vegetation structure and composition very quickly and for long time periods, and they can do so over large areas. very few ecosystems are without some type of disturbance as a regular and recurring part of the long term system dynamic. fire and wind disturbances are particularly common throughout many vegetation types worldwide. fire is particularly potent because of its ability to destroy not only living plants, but also the seeds, spores, and living meristems representing the potential next generation, and because of fire's impact on fauna populations, soil characteristics and other ecosystem elements and processes (for further discussion of this topic see fire ecology).
<p> a strong time dilation effect has been reported for perception of objects that were looming, but not of those retreating, from the viewer, suggesting that the expanding discs — which mimic an approaching object — elicit self-referential processes which act to signal the presence of a possible danger. anxious people, or those in great fear, experience greater "time dilation" in response to the same threat stimuli due to higher levels of epinephrine, which increases brain activity (an adrenaline rush). in such circumstances, an illusion of time dilation could assist an efficacious escape. when exposed to a threat, three-year-old children were observed to exhibit a similar tendency to overestimate elapsed time.
<p> an israeli research team have observed a strong time dilation effect in objects that were looming, but not the retreating, from the subjects. they theorize that the expanding discs—which mimic an approaching object—elicit self-referential processes which act to signal the presence of a possible danger. anxious people, or those in great fear, experience greater "time dilation" in response to the same threat stimuli brought on by the increased brain activity caused by epinephrine, or an adrenaline rush. in such circumstances, an illusion of time dilation could assist an efficacious escape. when exposed to a threat, three-year-old children were observed to exhibit a similar tendency to overestimate elapsed time. | We do not perceive time as steady. We perceive time as a series of events. Emotional and stressful events are more memorable. When we look back at these events they seem to have occurred over a longer period of time than the normal flow of events in our daily life. "...The richness of novel experiences means they pass by more slowly than do experiences that command little attention and hold little interest." |
about how many water molecules are in an average size water droplet? | <p> the table below shows how the internal pressure of a water droplet increases with decreasing radius. for not very small drops the effect is subtle, but the pressure difference becomes enormous when the drop sizes approach the molecular size. (in the limit of a single molecule the concept becomes meaningless.)
<p> mass: a million cubic millimeters (small droplets) of water would have a volume of one litre and a mass of one kilogram. a million millilitres or cubic centimetres (one cubic metre) of water has a mass of a million grams or one tonne.
<p> note however, that it is not uncommon to express aqueous concentrations—particularly in drinking-water reports intended for the general public—using parts-per notation (2.1 ppm, 0.8 ppb, etc.) and further, for those reports to state that the notations denote milligrams per liter or micrograms per liter. although "2.1 mg/l" is not a dimensionless quantity, it is assumed in scientific circles that "2.1 mg/kg" (2.1 ppm) is the true measure because one liter of water has a mass of about one kilogram. the goal in all technical writing (including drinking-water reports for the general public) is to clearly communicate to the intended audience with minimal confusion. drinking water is intuitively a volumetric quantity in the public’s mind so measures of contamination expressed on a per-liter basis are considered to be easier to grasp. still, it is technically possible, for example, to "dissolve" more than one liter of a very hydrophilic chemical in 1 liter of water; parts-per notation would be confusing when describing its solubility in water (greater than a million parts per million), so one would simply state the volume (or mass) that will dissolve into a liter, instead.
<p> physicist robert l. park, former executive director of the american physical society, is quoted as saying: "since the least amount of a substance in a solution is one molecule, a 30c solution would have to have at least one molecule of the original substance dissolved in a minimum of 1,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000 [or 10^60] molecules of water. this would require a container more than 30,000,000,000 times the size of the earth." park is also quoted as saying that, "to expect to get even one molecule of the 'medicinal' substance allegedly present in 30x pills, it would be necessary to take some two billion of them, which would total about a thousand tons of lactose plus whatever impurities the lactose contained".
<p> the primary factors influencing the initial droplet size produced are frequency of vibration, surface tension, and viscosity of the liquid. frequencies are commonly in the range of 20–180 khz, beyond the range of human hearing, where the highest frequencies produce the smallest drop size.
<p> one litre of water has a mass of almost exactly one kilogram when measured at its maximal density, which occurs at about 4 °c. similarly: one millilitre (1 ml) of water has a mass of about 1 g; 1,000 litres of water has a mass of about 1,000 kg (1 tonne). this relationship holds because the gram was originally defined as the mass of 1 ml of water; however, this definition was abandoned in 1799 because the density of water changes with temperature and, very slightly, with pressure.
<p> if the surface tension of water is known which is 72 dyne/cm, we can calculate the surface tension of the specific fluid from the equation. the more drops we weigh, the more precisely we can calculate the surface tension from the equation. the stalagmometer must be kept clean for meaningful readings. there are commercial tubes for stalagmometric method in three sizes: 2.5, 3.5, and 5.0 (ml). the 2.5-ml size is suitable for small volumes and low viscosity, that of 3.5 (ml) for relatively viscous fluids, and that of 5.0 (ml) for large volumes and high viscosity. the 2.5-ml size is suitable for most fluids. | Given that a water/rain droplet has a large variance (see here), I'm going to assume ~1 mL. Given that the density of water is 0.99997 g/mL, this corresponds to 0.99997 g of water. Since the molecular weight of water is 18.01528 g/mol, this corresponds to 0.0555068 mol of water. Now, to convert to number of water molecules, you need to use Avogadro's number (6.02214129×10^23 molecules/mol), which gives the number of molecules for one mole of a compound (An explanation of the mole unit here). This means that there are **~3.343x10^22 water molecules** in one drop of water. Hope this helps. |
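The mole arithmetic in the droplet answer above is easy to check numerically. A minimal sketch in Python, assuming the same ~1 mL droplet volume the answer assumes (the function name and the alternative 0.05 mL figure are just illustrative; the constants are the standard values quoted in the answer):

```python
# Count water molecules in a droplet: volume (mL) -> mass (g) -> moles -> molecules.
AVOGADRO = 6.02214076e23      # molecules per mole
DENSITY_WATER = 0.99997       # g/mL near room temperature
MOLAR_MASS_WATER = 18.01528   # g/mol

def molecules_in_droplet(volume_ml: float) -> float:
    mass_g = volume_ml * DENSITY_WATER
    moles = mass_g / MOLAR_MASS_WATER
    return moles * AVOGADRO

print(f"{molecules_in_droplet(1.0):.3e}")    # ~3.343e+22 for the 1 mL drop assumed above
print(f"{molecules_in_droplet(0.05):.3e}")   # ~1.7e+21 for a more typical 0.05 mL raindrop
```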
how could a backdoor be put into a random number generator? | <p> an example of a simple mathematical trapdoor is "6895601 is the product of two prime numbers. what are those numbers?" a typical solution would be to try dividing 6895601 by several prime numbers until finding the answer. however, if one is told that 1931 is one of the numbers, one can find the answer by entering "6895601 ÷ 1931" into any calculator. this example is not a sturdy trapdoor function – modern computers can guess all of the possible answers within a second – but this sample problem could be improved by using the product of two much larger primes.
<p> a backdoor may take the form of a hidden part of a program, a separate program (e.g. back orifice may subvert the system through a rootkit), code in the firmware of the hardware, or parts of an operating system such as windows. trojan horses can be used to create vulnerabilities in a device. a trojan horse may appear to be an entirely legitimate program, but when executed, it triggers an activity that may install a backdoor. although some are secretly installed, other backdoors are deliberate and widely known. these kinds of backdoors have "legitimate" uses such as providing the manufacturer with a way to restore user passwords.
<p> a sophisticated form of black box backdoor is a compiler backdoor, where not only is a compiler subverted (to insert a backdoor in some other program, such as a login program), but it is further modified to detect when it is compiling itself and then inserts both the backdoor insertion code (targeting the other program) and the code-modifying self-compilation, like the mechanism through which retroviruses infect their host. this can be done by modifying the source code, and the resulting compromised compiler (object code) can compile the original (unmodified) source code and insert itself: the exploit has been boot-strapped.
<p> a trapdoor in cryptography has the very specific aforementioned meaning and is not to be confused with a backdoor (these are frequently used interchangeably, which is incorrect). a backdoor is a deliberate mechanism that is added to a cryptographic algorithm (e.g., a key pair generation algorithm, digital signing algorithm, etc.) or operating system, for example, that permits one or more unauthorized parties to bypass or subvert the security of the system in some fashion.
<p> the threat of backdoors surfaced when multiuser and networked operating systems became widely adopted. petersen and turn discussed computer subversion in a paper published in the proceedings of the 1967 afips conference. they noted a class of active infiltration attacks that use "trapdoor" entry points into the system to bypass security facilities and permit direct access to data. the use of the word "trapdoor" here clearly coincides with more recent definitions of a backdoor. however, since the advent of public key cryptography the term "trapdoor" has acquired a different meaning (see trapdoor function), and thus the term "backdoor" is now preferred, only after the term trapdoor went out of use. more generally, such security breaches were discussed at length in a rand corporation task force report published under arpa sponsorship by j.p. anderson and d.j. edwards in 1970.
<p> a backdoor in a login system might take the form of a hard coded user and password combination which gives access to the system. an example of this sort of backdoor was used as a plot device in the 1983 film "wargames", in which the architect of the "wopr" computer system had inserted a hardcoded password which gave the user access to the system, and to undocumented parts of the system (in particular, a video game-like simulation mode and direct interaction with the artificial intelligence).
<p> a backdoor is a method of bypassing normal authentication procedures, usually over a connection to a network such as the internet. once a system has been compromised, one or more backdoors may be installed in order to allow access in the future, invisibly to the user. | Conceptually, it's somewhat straightforward. Let's say we have a deterministic RNG which requires some sort of seed. If, for some reason, someone were able to manipulate the seed source, the output of the RNG would no longer be pseudorandom. Criticisms of RdRand were the following: 1. You had to put trust in Intel's CPU and its microcode. 2. Under extremely heavy loads, Intel states it is theoretically possible for the demand for random numbers to exceed the rate at which the HW can supply them. This could potentially be exploited by an attacker who runs a program on the same hardware that attempts to overload the RNG while the victim is attempting to generate a key in parallel. 3. (This is, I think, what you're really asking) The concern is that some combination of microcode modifications could enable seed data with low entropy to be used to generate the random bits, hence giving some attacker the ability to narrow down the possible set of keys generated for any number of crytpographic algorithms (for instance, an RSA key or an initial vector for a hash). The solution to this, if you do not completely trust Intel, is to use RdRand in combination with some other source of entropy to make random numbers. This is, AFAIK, what Linux is currently doing. To recap: A "backdoor" in a random number generator could take the form of a modification to the seed going in, thereby making the RNG output's resemblance to a completely random number less than negligible. If enough entropy is removed, then an attacker who knows the inner workings of the RNG will be able to predict with greater accuracy what keys will be generated. |
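To make the "low-entropy seed" point in the answer above concrete, here is a toy sketch in Python. Everything in it is hypothetical and invented for illustration (the `backdoored_key` function, the 16-bit mask, the brute-force loop); it has nothing to do with RdRand or any real hardware. It only shows that if a saboteur silently restricts the seed space, every "random" key the victim generates can be enumerated.

```python
import random

# Toy "backdoored" generator: it appears to return a random 128-bit key,
# but the seed is silently reduced to 16 bits of entropy (the backdoor).
def backdoored_key(seed: int) -> bytes:
    rng = random.Random(seed & 0xFFFF)                    # sabotage: only 65,536 possible states
    return bytes(rng.getrandbits(8) for _ in range(16))   # 16 bytes = 128-bit "key"

# The victim seeds it with what they believe are 64 random bits.
victim_key = backdoored_key(random.SystemRandom().getrandbits(64))

# An attacker who knows about the restriction just tries every reduced seed.
recovered = next(s for s in range(2**16) if backdoored_key(s) == victim_key)
print("recovered reduced seed:", recovered)
```

The mitigation is the one the answer already mentions: mix the hardware output with independent entropy sources, so that no single component gets to choose the seed on its own.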
what does black hole at center of the milky way move relative to? | <p> co-0.40-0.22 is a high velocity compact gas cloud near the centre of the milky way. it is 200 light years away from the centre in the central molecular zone. the cloud is in the shape of ellipse. the differences in the velocity, termed velocity dispersion, of the gas is unusually high at 100 km/s. the velocity dispersion may be due to an intermediate-mass black hole (imbh) with a mass of about 100,000 solar masses. if it exists, this black hole would be the second largest known in the milky way. further observations with the atacama large millimeter/submillimeter array were not consistent with such a large imbh, but subsequent studies of the gas cloud and nearby imbh candidates have re-opened the possibility.
<p> our milky way galaxy contains several stellar-mass black hole candidates (bhcs) which are closer to us than the supermassive black hole in the galactic center region. most of these candidates are members of x-ray binary systems in which the compact object draws matter from its partner via an accretion disk. the probable black holes in these pairs range from three to more than a dozen solar masses.
<p> in the center of the milky way is the core, a bar-shaped bulge with what is believed to be a supermassive black hole at its center. this is surrounded by four primary arms that spiral from the core. this is a region of active star formation that contains many younger, population i stars. the disk is surrounded by a spheroid halo of older, population ii stars, as well as relatively dense concentrations of stars known as globular clusters.
<p> stellar proper motions have been used to infer the presence of a super-massive black hole at the center of the milky way. this black hole is suspected to be sgr a*, with a mass of 4.2 × 10^6 m☉, where m☉ is the solar mass.
<p> a 2010 paper suggested that the black hole may be displaced from the galactic center by about . the displacement was claimed to be in the opposite direction of the jet, indicating acceleration of the black hole by the jet. another suggestion was that the change in location occurred during the merger of two supermassive black holes. however, a 2011 study did not find any statistically significant displacement, and a 2018 study of high-resolution images of m87 concluded that the apparent spatial offset was caused by temporal variations in the jet's brightness rather than a physical displacement of the black hole from the galaxy's center.
<p> astronomers long suspected that a black hole exists at the center of the milky way, but their theory was unproven. conclusive evidence was obtained after 16 years of monitoring the galactic center with eso telescopes at the la silla and paranal observatories.
<p> its changing apparent position has been monitored since 1995 by two groups (at ucla and at the max planck institute for extraterrestrial physics) as part of an effort to gather evidence for the existence of a supermassive black hole in the center of the milky way galaxy. the accumulating evidence points to sagittarius a* as being the site of such a black hole. by 2008, s2 had been observed for one complete orbit. | It rotates relative to me. There is no absolute frame. So scientists will pick one that makes the math easier. |
are we inside of a black hole's event horizon? | <p> the event horizon of a black hole may be thought of as a surface moving outward at the local speed of light and is just on the edge between escaping and falling back. the event horizon of a white hole is a surface moving inward at the local speed of light and is just on the edge between being swept outward and succeeding in reaching the center. they are two different kinds of horizons—the horizon of a white hole is like the horizon of a black hole turned inside-out.
<p> the black hole event horizon is teleological in nature, meaning that we need to know the entire future space-time of the universe to determine the current location of the horizon, which is essentially impossible. because of the purely theoretical nature of the event horizon boundary, the traveling object does not necessarily experience strange effects and does, in fact, pass through the calculatory boundary in a finite amount of proper time.
<p> the event horizons bounding the black hole and white hole interior regions are also a pair of straight lines at 45 degrees, reflecting the fact that a light ray emitted at the horizon in a radial direction (aimed outward in the case of the black hole, inward in the case of the white hole) would remain on the horizon forever. thus the two black hole horizons coincide with the boundaries of the future light cone of an event at the center of the diagram (at "t"="x"=0), while the two white hole horizons coincide with the boundaries of the past light cone of this same event. any event inside the black hole interior region will have a future light cone that remains in this region (such that any world line within the event's future light cone will eventually hit the black hole singularity, which appears as a hyperbola bounded by the two black hole horizons), and any event inside the white hole interior region will have a past light cone that remains in this region (such that any world line within this past light cone must have originated in the white hole singularity, a hyperbola bounded by the two white hole horizons). note that although the horizon looks as though it is an outward expanding cone, the area of this surface, given by "r" is just formula_46, a constant. i.e., these coordinates can be deceptive if care is not exercised.
<p> the defining feature of a black hole is the appearance of an event horizon—a boundary in spacetime through which matter and light can only pass inward towards the mass of the black hole. nothing, not even light, can escape from inside the event horizon. the event horizon is referred to as such because if an event occurs within the boundary, information from that event cannot reach an outside observer, making it impossible to determine if such an event occurred.
<p> the location of the event horizon is determined by the larger root of formula_58. when formula_59 (i.e. formula_60), there are no (real valued) solutions to this equation, and there is no event horizon. with no event horizons to hide it from the rest of the universe, the black hole ceases to be a black hole and will instead be a naked singularity.
<p> an observer crossing the event horizon of a non-rotating and uncharged (or schwarzschild) black hole cannot avoid the central singularity, which lies in the future world line of everything within the horizon. thus one cannot avoid spaghettification by the tidal forces of the central singularity.
<p> event horizon - the boundary separating a black hole from the rest of the universe. anything crossing the event horizon into the black hole cannot ever come back, since nothing can ever cross the event horizon from the other direction. | Inside of a black hole, everything moves directly toward the center of a black hole. If you're capable of moving up, down, left, right, forward, and backward, you're not in a black hole. If you're capable of seeing light in any direction you look, you're not in a black hole.
do we know which direction the center of the universe is in, based on the speed of galaxies around us. | <p> based upon a radial velocity of about 10,500 km s, the interacting pair of galaxies at the northwest are located at a distance of from us (assuming a hubble constant value of ). if we further assume that the third galaxy lies at the same distance away from us, we find that the galaxies are separated by a projected linear distance of roughly , though later findings from hubble may cast this assumption into doubt (see below)
<p> american astronomer edwin hubble observed that the distances to faraway galaxies were strongly correlated with their redshifts. this was interpreted to mean that all distant galaxies and clusters are receding away from our vantage point with an apparent velocity proportional to their distance: that is, the farther they are, the faster they move away from us, regardless of direction. assuming the copernican principle (that the earth is not the center of the universe), the only remaining interpretation is that all observable regions of the universe are receding from all others. since we know that the distance between galaxies increases today, it must mean that in the past galaxies were closer together. the continuous expansion of the universe implies that the universe was denser and hotter in the past.
<p> one such frame of reference is the hubble flow, the apparent motions of galaxy clusters due to the expansion of space. individual galaxies, including the milky way, have peculiar velocities relative to the average flow. thus, to compare the milky way to the hubble flow, one must consider a volume large enough so that the expansion of the universe dominates over local, random motions. a large enough volume means that the mean motion of galaxies within this volume is equal to the hubble flow. astronomers believe the milky way is moving at approximately with respect to this local co-moving frame of reference. the milky way is moving in the general direction of the great attractor and other galaxy clusters, including the shapley supercluster, behind it. the local group (a cluster of gravitationally bound galaxies containing, among others, the milky way and the andromeda galaxy) is part of a supercluster called the local supercluster, centered near the virgo cluster: although they are moving away from each other at as part of the hubble flow, this velocity is less than would be expected given the 16.8 million pc distance due to the gravitational attraction between the local group and the virgo cluster.
<p> the milky way galaxy is moving through space and many astronomers believe the velocity of this motion to be approximately relative to the observed locations of other nearby galaxies. another reference frame is provided by the cosmic microwave background. this frame of reference indicates that the milky way is moving at around .
<p> in some applications use is made of rectangular coordinates based on galactic longitude and latitude and distance. in some work regarding the distant past or future the galactic coordinate system is taken as rotating so that the -axis always goes to the centre of the galaxy.
<p> to illustrate further, consider the question: "does our universe rotate?" to answer, we might attempt to explain the shape of the milky way galaxy using the laws of physics, although other observations might be more definitive, that is, provide larger discrepancies or less measurement uncertainty, like the anisotropy of the microwave background radiation or big bang nucleosynthesis. the flatness of the milky way depends on its rate of rotation in an inertial frame of reference. if we attribute its apparent rate of rotation entirely to rotation in an inertial frame, a different "flatness" is predicted than if we suppose part of this rotation actually is due to rotation of the universe and should not be included in the rotation of the galaxy itself. based upon the laws of physics, a model is set up in which one parameter is the rate of rotation of the universe. if the laws of physics agree more accurately with observations in a model with rotation than without it, we are inclined to select the best-fit value for rotation, subject to all other pertinent experimental observations. if no value of the rotation parameter is successful and theory is not within observational error, a modification of physical law is considered, for example, dark matter is invoked to explain the galactic rotation curve. so far, observations show any rotation of the universe is very slow, no faster than once every 60·10 years (10 rad/yr), and debate persists over whether there is "any" rotation. however, if rotation were found, interpretation of observations in a frame tied to the universe would have to be corrected for the fictitious forces inherent in such rotation in classical physics and special relativity, or interpreted as the curvature of spacetime and the motion of matter along the geodesics in general relativity.
<p> the andromeda galaxy is approaching the milky way at about 110 km/s, as indicated by blueshift. however, the lateral speed (measured as proper motion) is very difficult to measure with enough precision to draw reasonable conclusions: a lateral speed of only 7.7 km/s would mean that the andromeda galaxy is moving toward a point 177,800 light-years to the side of the milky way ((7.7 km/s) / (110 km/s) × (2,540,000 ly)), and such a speed over an eight-year timeframe amounts to only 1/3,000th of a hubble space telescope pixel (hubble's resolution≈0.05 arcsec: (7.7 km/s)/(300,000 km/s)×(8 y)/(2,540,000 ly)×180°/π×3600 = 0.000017 arcsec). until 2012, it was not known whether the possible collision was definitely going to happen or not. in 2012, researchers concluded that the collision is certain, using hubble to track the motion of stars in andromeda between 2002 and 2010 with sub-pixel accuracy. andromeda's tangential or sideways velocity with respect to the milky way was found to be much smaller than the speed of approach and therefore it is expected that it will directly collide with the milky way in around four and a half billion years. | There's no center.
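The back-of-the-envelope figures in the andromeda passage directly above (the 177,800 light-year offset and the 0.000017 arcsec proper motion) can be reproduced with a short script. A minimal sketch in Python using exactly the numbers quoted in the passage (the variable names are invented for readability):

```python
import math

lateral_kms  = 7.7          # assumed lateral speed of andromeda, km/s
approach_kms = 110.0        # approach speed used in the passage, km/s
distance_ly  = 2_540_000    # distance to andromeda, light-years
c_kms        = 300_000.0    # speed of light, km/s (rounded as in the passage)
years        = 8            # observation baseline, years

# How far to the side of the milky way andromeda would be heading:
offset_ly = lateral_kms / approach_kms * distance_ly
print(f"offset: {offset_ly:,.0f} light-years")           # ~177,800

# Apparent angular motion accumulated over the 8-year baseline:
angle_rad = (lateral_kms / c_kms) * years / distance_ly
print(f"proper motion: {angle_rad * 180 / math.pi * 3600:.6f} arcsec")   # ~0.000017
```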
why is fiberglass safe vs asbestos? | <p> the north american insulation manufacturers association (naima) claims that glass fiber is fundamentally different from asbestos, since it is man-made instead of naturally occurring. they claim that glass fiber "dissolves in the lungs", while asbestos remains in the body for life. although both glass fiber and asbestos are made from silica filaments, naima claims that asbestos is more dangerous because of its crystalline structure, which causes it to cleave into smaller, more dangerous pieces, citing the u.s. department of health and human services:
<p> glass fiber has increased in popularity since the discovery that asbestos causes cancer and its subsequent removal from most products. however, the safety of glass fiber is also being called into question, as research shows that the composition of this material (asbestos and glass fiber are both silicate fibers) can cause similar toxicity as asbestos.
<p> fiberglass will irritate the eyes, skin, and the respiratory system. potential symptoms include irritation of eyes, skin, nose, throat, dyspnea (breathing difficulty); sore throat, hoarseness and cough. scientific evidence demonstrates that fiber glass is safe to manufacture, install and use when recommended work practices are followed to reduce temporary mechanical irritation. unfortunately these work practices are not always followed; and fiberglass is often left exposed in basements that later become occupied. fiberglass insulation should never be left exposed in an occupied area, according to the american lung association.
<p> fiberglass will irritate the eyes, skin, and the respiratory system. potential symptoms include irritation of eyes, skin, nose, and throat, dyspnea (breathing difficulty), sore throat, hoarseness and cough. scientific evidence demonstrates that fiberglass is safe to manufacture, install and use when recommended work practices are followed to reduce temporary mechanical irritation. unfortunately these work practices are not always followed, and fiberglass is often left exposed in basements that later become occupied. fiberglass insulation should never be left exposed in an occupied area, according to the american lung association.
<p> bullet::::- asbestos: is a material that was once used for the insulation of buildings, and some businesses are still using this material to manufacture roofing materials and brakes. inhalation of asbestos fibers can lead to lung cancer and asbestosis.
<p> asbestos is found in older homes and buildings, but occurs most commonly in schools, hospitals and industrial settings. although all asbestos is hazardous, products that are friable, e.g. sprayed coatings and insulation, pose a significantly higher hazard as they are more likely to release fibers to the air. the us federal government and some states have set standards for acceptable levels of asbestos fibers in indoor air. there are particularly stringent regulations applicable to schools.
<p> fiberglass is the most common residential insulating material, and is usually applied as batts of insulation, pressed between studs. health and safety issues include potential cancer risk from exposure to glass fibers, formaldehyde off-gassing from the backing/resin, use of petrochemicals in the resin, and the environmental health aspects of the production process. green building practices shun fiberglass insulation. | First of all, the premise of the question is wrong: fiberglass is not safe, especially if it is made out of exceedingly thin fibers. Inhalation of fine fiberglass dust will probably lead to silicosis and other unpleasant health effects. The difference in harmfulness between fiberglass and asbestos is due to the different chemical composition and the ease of forming ultrathin, fine airborne dust. Asbestos is a mineral that naturally separates into fine, fragile fibers. Fiberglass is produced by drawing glass fibers out of a melt. They are much thicker and much less fragile than asbestos.
is mathematics a science? | <p> the mathematical sciences are a group of areas of study that includes, in addition to mathematics, those academic disciplines that are primarily mathematical in nature but may not be universally considered subfields of mathematics proper.
<p> like all formal sciences, mathematics is not concerned with the validity of theories based on observations in the empirical world, but rather, mathematics is occupied with the theoretical, abstract study of such topics as quantity, structure, space and change. methods of the mathematical sciences are, however, applied in constructing and testing scientific models dealing with observable reality. albert einstein wrote, "one reason why mathematics enjoys special esteem, above all other sciences, is that its laws are absolutely certain and indisputable, while those of other sciences are to some extent debatable and in constant danger of being overthrown by newly discovered facts."
<p> "mathematics", first of all known as the science of numbers which is classified in arithmetic and algebra, is classified as a formal science, has both similarities and differences with the empirical sciences (the natural and social sciences). it is similar to empirical sciences in that it involves an objective, careful and systematic study of an area of knowledge; it is different because of its method of verifying its knowledge, using "a priori" rather than empirical methods.
<p> mathematics is essential in the formation of hypotheses, theories, and laws in the natural and social sciences. for example, it is used in quantitative scientific modeling, which can generate new hypotheses and predictions to be tested. it is also used extensively in observing and collecting measurements. statistics, a branch of mathematics, is used to summarize and analyze data, which allow scientists to assess the reliability and variability of their experimental results.
<p> the opinions of mathematicians on this matter are varied. many mathematicians feel that to call their area a science is to downplay the importance of its aesthetic side, and its history in the traditional seven liberal arts; others feel that to ignore its connection to the sciences is to turn a blind eye to the fact that the interface between mathematics and its applications in science and engineering has driven much development in mathematics. one way this difference of viewpoint plays out is in the philosophical debate as to whether mathematics is "created" (as in art) or "discovered" (as in science). it is common to see universities divided into sections that include a division of "science and mathematics", indicating that the fields are seen as being allied but that they do not coincide. in practice, mathematicians are typically grouped with scientists at the gross level but separated at finer levels. this is one of many issues considered in the philosophy of mathematics.
<p> mathematical sciences: refers to academic disciplines that are mathematical in nature, but are not considered proper subfields of mathematics. examples include statistics, cryptography, game theory and actuarial science.
<p> the exact sciences, sometimes called the exact mathematical sciences, are those sciences "which admit of absolute precision in their results"; especially the mathematical sciences. examples of the exact sciences are mathematics, optics, astronomy, and physics, which many philosophers from descartes, leibniz, and kant to the logical positivists took as paradigms of rational and objective knowledge. these sciences have been practiced in many cultures from antiquity to modern times. given their ties to mathematics, the exact sciences are characterized by accurate quantitative expression, precise predictions and/or rigorous methods of testing hypotheses involving quantifiable predictions and measurements. | My layman's two cents is that it itself is not a science, but a method with which we may perform science. Also, I'm not quite sure you'll get the most concrete possible answer here. Give r/askphilosophy a shot as well.
what percentage of the atoms in our bodies were in us a year ago? ten years ago? | <p> evidence for the existence of atoms was the law of definite proportions proposed by him in 1792. richter found that the ratio by weight of the compounds consumed in a chemical reaction was always the same. it took 615 parts by weight of magnesia (mgo), for example, to neutralize 1000 parts by weight of sulfuric acid. from his data, ernst gottfried fischer calculated in 1802 the first table of chemical equivalents, taking sulphuric acid as the standard with the figure 1000. when joseph proust reported his work on the constant composition of chemical compounds, the time was ripe for the reinvention of an atomic theory. the law of definite proportions and constant composition do not prove that atoms exist, but they are difficult to explain without assuming that chemical compounds are formed when atoms combine in constant proportions.
<p> small amounts of other substances are found, including amino acids at concentrations of up to 2 micrograms of nitrogen atoms per liter, which are thought to have played a key role in the origin of life.
<p> this is an index of lists of molecules (i.e. by year, number of atoms, etc.). millions of molecules have existed in the universe before the formation of earth, elements have being mixed and formed molecules for millions of years, three of them, carbon dioxide, water and oxygen were necessary for the growth of life, even thought, we were able to see these substances we did not know what was their components.
<p> because the age of the earth is about 4.6×10^9 years (4.6 billion years), the half-life of the given nuclides must be greater than about 10^8 years (100 million years) for practical considerations. for example, for a nuclide with half-life 6×10^7 years (60 million years), this means 77 half-lives have elapsed, meaning that for each mole (6.02×10^23 atoms) of that nuclide being present at the formation of earth, only 4 atoms remain today (see the numerical check after this entry).
<p> about 90 atoms of flerovium have been observed: 58 were synthesized directly, and the rest were made from the radioactive decay of heavier elements. all of these flerovium atoms have been shown to have mass numbers from 284 to 290. the most stable known flerovium isotope, flerovium-289, has a half-life of around 1.9 seconds, but it is possible that the unconfirmed flerovium-290 with one extra neutron may have a longer half-life of 19 seconds; this would be one of the longest half-lives of any isotope of any element at these farthest reaches of the periodic table. flerovium is predicted to be near the centre of the theorized island of stability, and it is expected that heavier flerovium isotopes, especially the possibly doubly magic flerovium-298, may have even longer half-lives.
<p> there is evidence for high co2 concentrations between 200 and 150 million years ago of over 3,000 ppm, and between 600 and 400 million years ago of over 6,000 ppm. in more recent times, atmospheric co2 concentration continued to fall after about 60 million years ago. about 34 million years ago, the time of the eocene–oligocene extinction event and when the antarctic ice sheet started to take its current form, co2 was about 760 ppm, and there is geochemical evidence that co2 concentrations were less than 300 ppm by about 20 million years ago. decreasing co2 concentration, with a tipping point of 600 ppm, was the primary agent forcing antarctic glaciation. low co2 concentrations may have been the stimulus that favored the evolution of c4 plants, which increased greatly in abundance between 7 and 5 million years ago. based on an analysis of fossil leaves, wagner et al. argued that atmospheric co2 concentrations during the last 7,000–10,000 year period were significantly higher than 300 ppm and contained substantial variations that may be correlated to climate variations. others have disputed such claims, suggesting they are more likely to reflect calibration problems than actual changes in co2. relevant to this dispute is the observation that greenland ice cores often report higher and more variable co2 values than similar measurements in antarctica. however, the groups responsible for such measurements (e.g. h.j. smith et al.) believe the variations in greenland cores result from "in situ" decomposition of calcium carbonate dust found in the ice. when dust concentrations in greenland cores are low, as they nearly always are in antarctic cores, the researchers report good agreement between measurements of antarctic and greenland co2 concentrations.
<p> in 1939, martin kamen and samuel ruben of the radiation laboratory at berkeley began experiments to determine if any of the elements common in organic matter had isotopes with half-lives long enough to be of value in biomedical research. they synthesized carbon-14 using the laboratory's cyclotron accelerator and soon discovered that the atom's half-life was far longer than had been previously thought. this was followed by a prediction by serge a. korff, then employed at the franklin institute in philadelphia, that the interaction of thermal neutrons with nitrogen-14 in the upper atmosphere would create carbon-14. it had previously been thought that carbon-14 would be more likely to be created by deuterons interacting with carbon-13. at some time during world war ii, willard libby, who was then at berkeley, learned of korff's research and conceived the idea that it might be possible to use radiocarbon for dating. | While most of your cells are constantly replaced, and even those that aren't will have a portion of their atoms exchanged through normal biological activity, some parts undergo exchange very slowly. For cells that don't divide and never get replaced, the DNA doesn't incorporate new, or lose old, atoms, unless they're damaged.
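The numerical check referred to in the nuclide passage above (77 half-lives leaving only about 4 atoms of an original mole) takes a couple of lines to verify. A minimal sketch, assuming the 4.6-billion-year age of the earth and the 60-million-year half-life quoted in that passage:

```python
# One mole of a nuclide with a 60-million-year half-life, present when the earth formed.
AVOGADRO = 6.02214076e23                         # atoms in one mole
age_of_earth_yr = 4.6e9
half_life_yr = 6.0e7

half_lives = age_of_earth_yr / half_life_yr      # ~76.7, rounded to 77 in the passage
atoms_left = AVOGADRO / 2 ** round(half_lives)   # 6.02e23 / 2**77

print(f"half-lives elapsed: {half_lives:.1f} (about {round(half_lives)})")
print(f"atoms remaining from one mole: {atoms_left:.1f}")   # ~4
```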
during the time that pangea existed, were there other islands? | <p> from 320 ma onward, gondwana, laurussia, and intervening terranes merged to form the supercontinent pangea. pangea's main amalgamation occurred during the carboniferous but continents continued to be added and rifted away in the late paleozoic to early mesozoic. pangea ruptured during the jurassic, preceded by and associated with widespread magmatic activity, including the karoo flood basalts and related dyke swarms in south africa and the ferrar province in east antarctica.
<p> pangea was created by the continent of gondwanaland and the continent of laurussia. during the carboniferous period the two continents came together to form the supercontinent of pangea. the mountain building events that happened at this time created the appalachian mountains and the variscan belt of central europe. however, not all landmasses on earth had attached themselves to pangea; it was not until the late permian that the siberian land mass collided with it. the only land masses not to be part of pangea were the former north and south china plates, which formed a much smaller land mass in the ocean. there was a massive ocean that encompassed the world, called panthalassa; because most of the continental crust was sutured together into one giant continent, there was a giant ocean to match.
<p> pangea broke apart after 70 million years. the supercontinent was torn apart through fragmentation, which is where parts of the main landmass would break off in stages. there were two main events that led to the dispersal of pangea. the first was a passive rifting event that occurred in the triassic period. this rifting event caused the atlantic ocean to form. the other event was an active rifting event. this happened in the lower jurassic and caused the opening of the indian ocean. this breakup took 17 million years to complete.
<p> during the late triassic, pangea began to be torn apart when a three-pronged fissure grew between africa, south america, and north america. rifting began as magma welled up through the weakness in the crust, creating a volcanic rift zone. volcanic eruptions spewed ash and volcanic debris across the landscape as these severed continent-sized fragments of pangea diverged. the gash between the spreading continents gradually grew to form a new ocean basin, the atlantic. the rift zone known as the mid-atlantic ridge continued to provide the raw volcanic materials for the expanding ocean basin.
<p> the continents that had drifted away from rodinia drifted together again during the paleozoic: gondwana, euramerica, and siberia/angara collided to form the supercontinent of pangea during the devonian and carboniferous periods, some 350 million years ago. pangea was a short-lived supercontinent; it began to break apart again in the early jurassic. while pangea existed it created opportunities for intermixing of the flora and fauna.
<p> during the triassic period about 250 million years ago chile was part of the supercontinent pangaea, which concentrated the world's major land masses. africa, antarctica, australia and india were near chile. when pangaea began to split apart during the jurassic period, south america and the adjacent land masses formed gondwana. floral affinities among these now-distant landmasses date from the gondwanaland period. south america separated from antarctica and australia 27 million years ago with the development of the drake passage. across the drake passage lie the mountains of the antarctic peninsula, south of the scotia plate, which appear to be a continuation of the andes. in the extreme south, the magallanes–fagnano fault separates tierra del fuego from the small scotia plate.
<p> at the end of the silurian period (c. 420 million years ago) the iapetus ocean had completely disappeared and the combined mass of the three continents formed the "new" continent of laurasia, which would itself be the northern component of the singular supercontinent of pangaea. | Sure, just as there are along coastlines today. Just because all the continental plates were joined, doesn't mean there was anything precluding islands. Life on islands is generally smaller than on the mainland, but you would certainly expect to find vertebrates and trees like on coastal islands today. |
is it possible that our sun is orbiting another planet/star? | <p> in a 2009 interview with the discovery channel, mike brown noted that, while it is not impossible that the sun has a distant planetary companion, such an object would have to be lying very far from the observed regions of the solar system to have no detectable gravitational effect on the other planets. a mars-sized object could lie undetected at 300 au (10 times the distance of neptune); a jupiter-sized object at 30,000 au. to travel 1000 au in two years, an object would need to be moving at 2400 km/s – faster than the galactic escape velocity. at that speed, any object would be shot out of the solar system, and then out of the milky way galaxy into intergalactic space.
<p> a planet orbiting the sun so that it was always on the other side of the sun from earth could (in theory) have such an orbit because it was the same distance from the sun and had the same mass as earth. thus, what would make it undetectable to astronomers (or any other human beings) on earth would also make it habitable to beings at least similar to humans. with the same size and distance from the sun as earth, it could have the same (or very similar) surface environment—gravity, atmospheric pressure, and surface temperature range. at the same time such a planet could have the same orbiting velocity and path as earth, so that if it was positioned 180 degrees from earth, it would remain behind the sun being blocked from view from earth indefinitely.
<p> planet nine could have been captured from outside the solar system during a close encounter between the sun and another star. if a planet was in a distant orbit around this star, three-body interactions during the encounter could alter the planet's path, leaving it in a stable orbit around the sun. a planet originating in a system without jupiter-massed planets could remain in a distant eccentric orbit for a longer time, increasing its chances of capture. the wider range of possible orbits would reduce the odds of its capture in a relatively low inclination orbit to 1–2 percent. this process could also occur with rogue planets, but the likelihood of their capture is much smaller, with only 0.05–0.10% being captured in orbits similar to that proposed for planet nine.
<p> in 1998, a team using a european southern observatory telescope in chile announced a planet orbiting the star. this team retracted the claim in 2002, but found a different periodicity of 7 days, possibly due to stellar rotation.
<p> in 1998 the california and carnegie planet search team, after following a suggestion by kevin apps, a briton who at the time was an undergraduate student, found a possible planet orbiting the star. there were also indications of another, more distant body orbiting the star, and this claim was published in 2006. this planet was confirmed in 2009.
<p> an encounter with another star could also alter the orbit of a distant planet, shifting it from a circular to an eccentric orbit. the "in situ" formation of a planet at this distance would require a very massive and extensive disk, or the outward drift of solids in a dissipating disk forming a narrow ring from which the planet accreted over a billion years. if a planet formed at such a great distance while the sun was in its original cluster, the probability of it remaining bound to the sun in a highly eccentric orbit is roughly 10%. a previous article reported that if the massive disk extended beyond 80 au some objects scattered outward by jupiter and saturn would have been left in high inclination (inc 50°), low eccentricity orbits which have not been observed. an extended disk would also have been subject to gravitational disruption by passing stars and by mass loss due to photoevaporation while the sun remained in the open cluster where it formed.
<p> as of 2011, three extrasolar planets have been found to orbit the star. announced on the first of november 1999, the first planet (hd 37124 b) was discovered orbiting its parent star around the inner edge of the habitable zone, causing the planet to have a somewhat similar insolation to that of venus. a second planet became apparent by 2003, thought to orbit with a period of 1940 days on an eccentric orbit, but this was subsequently found to be unstable. solving this, a three-planet solution was announced in 2005: this contained a second planet (hd 37124 c) orbiting at the outer edge of the habitable zone with an insolation similar to that of mars, and a third planet (hd 37124 d). while not obviously in any orbital resonances in 2005, an updated solution announced in 2011 found planets c and d to likely be in a 2:1 resonance. | No, there's nothing to suggest this. There are no nearby stars massive enough to have that effect on the sun. We have a good understanding of all the massive objects within several light-years of us.
how can tsa/airport security workers stand next to x and t ray machines all day everyday without any ill effects? | <p> the uk trialed a controversial new method of screening passengers to further improve airport security using backscatter x-ray machines that provide a 360-degree view of a person, as well as "see" under clothes, right down to the skin and bones. they are no longer used and were replaced by millimeter wave scanners which shows any hidden items while not showing the body of the passenger.
<p> screening technology has advanced to detect any harmful materials under a traveler's clothes and also detect any harmful materials that may have been consumed internally. full body scanners or advanced imaging technology (ait) were introduced to u.s. airports in 2006. two types of body screening that are currently being used at all airports internationally are backscatters and millimeter wave scanners. backscatters use a high-speed yet low-intensity x-ray beam to produce a digital image of an individual's body. millimeter wave scanners use millimeter waves to create a 3-d image based on the energy reflected from the individual's body.
<p> the u.s. government is also supplying higher-radiation through-body x-ray machines to at least two african countries "for the purposes of airport security — the kind that can see through flesh, and which deliver real doses of radiation. the u.s.-supplied scanners have apparently been deployed at one airport in ghana and four in nigeria", which has caused some to question how far the u.s. government intends to go with the technology.
<p> since the introduction of full body x-ray scanners to airports in 2007, many concerns over traveler privacy have arisen. individuals are asked to step inside a rectangular machine that takes an alternate wavelength image of the person's naked body for the purpose of detecting metal and non-metal objects being carried under the clothes of the traveler. this screening technology comes in two forms, millimeter wave technology (mm-wave technology) or backscatter x-rays (similar to x-rays used by dentists). full-body scanners were introduced into airports to increase security and improve the quality of screening for objects such as weapons or explosives due to an increase of terrorist attacks involving airplanes occurring in the early 2000s.
<p> airport checkpoint screening has been significantly tightened since 2001, and security personnel are more thoroughly trained to detect weapons or explosives. in addition to standard metal detectors, many u.s. airports now employ full-body scanning machines, in which passengers are screened with millimeter wave technology to check for potential hidden weapons or explosives on their persons. initially, early body scanners provoked quite a bit of controversy because the images produced by the machines were deemed graphic and intrusive. many considered this an invasion of personal privacy, as tsa screeners were essentially shown an image of each passenger's naked body. newer body scanners have since been introduced which do not produce an image, but rather alert tsa screeners of areas on the body where an unknown item or substance may be hidden. a tsa security screener then inspects the indicated area(s) manually.
<p> - x-ray transmission scanners: these have a very high detection probability. however, the scan is carcinogenic, so it is not used in airport security. these machines exist in some u.s. prisons and have proved their capability to detect objects hidden in body cavities.
<p> - the national radiation safety standard (see below) sets a dose per screening limit for the general-use category. to meet the requirements of the general-use category a full-body x-ray security system must deliver less than the dose a person receives during 4 minutes of airline flight. tsa has set their dose limit to ensure a person receives less radiation from one scan with a tsa general-use x-ray security system than from 2 minutes of airline flight. | Like any radiation worker, they apply ALARA. That means that you should take steps to make your radiation exposure "As Low As Reasonably Achievable". The ways to do this are to maximize distance from the source, minimize time near it, and use shielding when possible. If you pay close attention when passing through security, you'll see that they rotate between positions throughout the day. So the people operating the x-ray machines rotate around to other positions as well. You may also notice that some employees are wearing badge dosimeters. These are little badges that you wear on your body. Over time they accumulate on average the same radiation exposure that your body does. Every few months you send them in for testing to see if you had an abnormally high exposure within that time. I don't know much about the manufacture of their machines (I'd guess it's not something they want the public to know much about), but it's not hard to add some shielding to strongly attenuate x-rays.
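The three ALARA levers in the answer above (time, distance, shielding) can be put into one back-of-envelope formula. A minimal sketch, assuming a simple point source; every number here is a made-up placeholder, not a real TSA or dosimetry figure:

```python
# Back-of-envelope dose model for the three ALARA levers: time, distance, shielding.
# All numbers below are made-up placeholders, not real TSA or dosimetry data.
import math

def dose_uSv(rate_at_1m_uSv_per_h, hours, distance_m, mu_per_cm=0.0, shield_cm=0.0):
    """Dose ~ (rate / distance^2) * time * exp(-mu * thickness), point-source approximation."""
    geometry = rate_at_1m_uSv_per_h / distance_m ** 2   # inverse-square falloff
    attenuation = math.exp(-mu_per_cm * shield_cm)      # exponential shielding
    return geometry * hours * attenuation

# Hypothetical 8-hour shift right next to the source, versus a rotated 4-hour stint
# 3 m away behind a thin shield.
close = dose_uSv(0.5, hours=8, distance_m=1.0)
rotated = dose_uSv(0.5, hours=4, distance_m=3.0, mu_per_cm=0.5, shield_cm=2.0)
print(close, rotated)  # the second figure is a small fraction of the first
```

The only point of the comparison is that distance enters as an inverse square and shielding as an exponential, so modest changes in either drive the dose down quickly.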
if the integral of dx/dt is position, what is the integral of position? | <p> in mathematics, the laplacian of the indicator of the domain "d" is a generalisation of the derivative of the dirac delta function to higher dimensions, and is non-zero only on the "surface" of "d". it can be viewed as the "surface delta prime function". it is analogous to the second derivative of the heaviside step function in one dimension. it can be obtained by letting the laplace operator work on the indicator function of some domain "d".
<p> the first integral on the right-hand side integrates out to the boundaries, in x and "t", of the integration domain and is zero since the variations "δφ" are taken to be zero at these boundaries. for variations "δφ" which are zero at the free surface and the bed, the second integral remains, which is only zero for arbitrary "δφ" in the fluid interior if there the laplace equation holds:
<p> in general, an eigenvector of a linear operator "d" defined on some vector space is a nonzero vector in the domain of "d" that, when "d" acts upon it, is simply scaled by some scalar value called an eigenvalue. in the special case where "d" is defined on a function space, the eigenvectors are referred to as eigenfunctions. that is, a function "f" is an eigenfunction of "d" if it satisfies the equation
<p> so that "a" = −"a". this operator is a continuous linear transformation on d("u"). so, if "t" ∈ d′("u") is a distribution, then the partial derivative of "t" with respect to the coordinate "x" is defined by the formula
<p> where "x" is an initial point on the line and θ is a unit vector giving the direction of the line "l". the latter integral is not regarded in the oriented sense: it is the integral with respect to the 1-dimensional lebesgue measure on the euclidean line "l".
<p> the divergence operator "δ" (to be more precise, "δ", since it depends on the dimension) is now defined to be the adjoint of ∇ in the hilbert space sense, in the hilbert space "l"(r, "b"(r), "γ"; r). in other words, "δ" acts on a vector field "v" : r → r to give a scalar function "δv" : r → r, and satisfies the formula
<p> a linear operator "t" on a vector space "v" is defined to be nilpotent if there is a positive integer "k" such that "t" = 0. for example, any operator given by a matrix whose entries are zero on and below its diagonal, such as | Time integral of position would have units of m*s. If x was, for example constant, the result would be x*t. I don't see it as a meaningful quantity. |
are water levels in landlocked fresh water lakes affected by the rise of global sea levels? | <p> the intergovernmental panel on climate change (ipcc) estimate global mean sea-level rise from 1990 to 2100 to be between nine and eighty eight centimetres. it is also predicted that with climate change there will be an increase in the intensity and frequency of storm events such as hurricanes. this suggests that coastal flooding from storm surges will become more frequent with sea level rise. a rise in sea level alone threatens increased levels of flooding and permanent inundation of low-lying land as sea level simply may exceed the land elevation. this therefore indicates that coastal flooding associated with sea level rise will become a significant issue into the next 100 years especially as human populations continue to grow and occupy the coastal zone.
<p> rising sea levels will further threaten coastal areas and erode and alter landscapes whilst also resulting in salt water intrusion into soils, reducing soil quality and limiting plant species growth. the ministry for the environment says by 2050–2070, storms and high tides which produce extreme coastal water levels will occur on average at least once a year instead of once every 100 years. gns climate scientist tim naish says in the event of a two metre rise in sea-level by the end of the century, a one-in-100-year flooding event will become a daily event. naish says: "we are a coastal nation so we are going to get whacked by sea-level rise. in many areas, we have to retreat, which comes with massive disruption and social and economic issues."
<p> it has been said that one way to prevent significant flooding of coastal areas now and into the future is by reducing global sea level rise. this could be minimised by further reducing greenhouse gas emissions. however, even if significant emission decreases are achieved, there is already a substantial commitment to sea level rise into the future. international climate change policies like the kyoto protocol are seeking to mitigate the future effects of climate change, including sea level rise.
<p> sea levels are expected to get up to one meter higher by 2100, though this projection is disputed. a rise in the sea level would result in an agricultural land loss, in particular in areas such as south east asia. erosion, submergence of shorelines, salinity of the water table due to the increased sea levels, could mainly affect agriculture through inundation of low-lying lands.
<p> thermal expansion of water and increased melting of oceanic glaciers from an increase in temperature gives way to a rise in sea level. this can affect the fresh water supply to coastal areas as well. as river mouths and deltas with higher salinity get pushed further inland, an intrusion of saltwater results in an increase of salinity in reservoirs and aquifers. sea-level rise may also consequently be caused by a depletion of groundwater, as climate change can affect the hydrologic cycle in a number of ways. uneven distributions of increased temperatures and increased precipitation around the globe results in water surpluses and deficits, but a global decrease in groundwater suggests a rise in sea level, even after meltwater and thermal expansion were accounted for, which can provide a positive feedback to the problems sea-level rise causes to fresh-water supply.
<p> in addition, the national university of colombia analyzed possible impacts of a doubling of the carbon dioxide emissions between the years 2050 to 2080 and projects a sea level rise of 2 to 5mm per year. sea level increases will likely cause saline intrusion into aquifer-based freshwater supplies in insular and coastal areas. freshwater systems and their biological diversity will be severely affected. moreover, prognostic modeling of small islands has identified major land loss if no action is taken. for example, in san andres island, the first national communications estimated a loss of 17% of land area, including most of the coastal zone by 2060 (50 cm increase in sea level).
<p> humans impact how much water is stored on land. building dams prevents large masses of water from flowing into the sea and therefore increases the storage of water on land. on the other hand, humans extract water from lakes, wetlands and underground reservoirs for food production leading to rising seas. furthermore, the hydrological cycle is influenced by climate change and deforestation, which can lead to further positive and negative contributions to sea level rise. in the 20th century, these processes roughly balanced, but dam building has slowed down and is expected to stay low for the 21st century. | Well, that depends on their source of water. If it's already got seasonal effects due to snow-melt, then yes, it will get higher until the sources start running out of snow, or maybe the melt season gets shorter and less goes down the mountain. If it's rain based, it would require finding out whether the average rainfall changes over the lake or the rivers leading to the lake. Additionally, because of the increased temperature they could just evaporate more than is replenished. So both scenarios, increased and decreased water levels, are possible. Climate change will affect it; how it will affect it is harder to say.
askscience cosmos q & a thread. episode 7: the clean room | <p> the third video in the series is 4 minutes, 21 seconds in length and was released on november 23, 2009. "our place in the cosmos" features carl sagan, richard dawkins, michio kaku, and robert jastrow. samples were taken from "cosmos", "genius of charles darwin", a ted talk, "stephen hawking's universe", interviews and visuals from "baraka" and "koyaanisqatsi", history channel's universe series, and "cosmic voyage".
<p> creatures of the cosmos is an anthology of fantasy and science fiction short stories for younger readers, edited by catherine crook de camp. it was first published in hardcover by westminster press in 1977. it was the third such anthology assembled by de camp, following the earlier "3000 years of fantasy and science fiction" (1972) and "tales beyond time" (1973), both of which she edited together with her husband l. sprague de camp.
<p> the library's william n. deramus iii cosmology theater shows images of the cosmos from the hubble space telescope and nasa science missions. these images are delivered via viewspace to the library with daily updates (via the internet) that provide the library with new content for visitors.
<p> the following is a list of all episodes of the science documentary television series "cosmos". the first season, "", was broadcast in 1980 on pbs and hosted by carl sagan. the second season, "", was broadcast in 2014 on fox and hosted by neil degrasse tyson. a third season, "possible worlds", is slated to air on fox in spring 2019 with tyson returning as host.
<p> "cosmos" utilizes a light, conversational tone to render complex scientific topics readable for a lay audience. on many topics, the book encompasses a more concise, refined presentation of previous ideas about which sagan had written.
<p> - (special volume): "mohaya uchū wa meikyū no kagami no yōni" ("now the cosmos is like the mirrors of labyrinth"), 2017-07, sairyū sha. this book is not contained in the complete collection, but is essentially part of the collection. after the complete collection had been released, this book was published.
<p> "cosmos at the castle" is an award-winning interactive astronomy exhibit that takes place at the observatory, featuring four cinema sized screens that share information with visitors on the big bang, the evolution of life on earth, and the existence of extraterrestrial life in the universe. | How do they know if a specific meteorite came from our solar system, and not from another? |
is the super-massive black hole at the center of the milky way spinning in the plane of the rest of our galaxy? would this just be a coincidence, or does one or the other have the ability to influence the other into matching it? | <p> stars at the centre of the milky way are so densely packed that special imaging techniques (such as adaptive optics) were needed to boost the resolution of the vlt. thanks to these techniques, astronomers were able to watch individual stars with unprecedented accuracy as they circled the galactic center. their paths conclusively demonstrated that they were orbiting in the immense gravitational grip of a supermassive black hole nearly three million times more massive than the sun. the vlt observations also revealed flashes of infrared light emerging from the region at regular intervals. while the cause of this phenomenon is unknown, observers have suggested that the black hole may be spinning rapidly.
<p> in the 1990s, a research group led by john kormendy demonstrated that a supermassive black hole is present within the sombrero galaxy. using spectroscopy data from both the cfht and the hubble space telescope, the group showed that the speed of revolution of the stars within the center of the galaxy could not be maintained unless a mass 1 billion times the mass of the sun is present in the center. this is among the most massive black holes measured in any nearby galaxies.
<p> donald lynden-bell and martin rees hypothesized in 1971 that the center of the milky way galaxy would contain a massive black hole. sagittarius a* was discovered and named on february 13 and 15, 1974, by astronomers bruce balick and robert brown using the green bank interferometer of the national radio astronomy observatory. they discovered a radio source that emits synchrotron radiation; it was found to be dense and immobile because of its gravitation. this was, therefore, the first indication that a supermassive black hole exists in the center of the milky way.
<p> this nebula is seen as circumstantial evidence that the magnetic fields at the center of the galaxy are extremely strong, more than 1,000 times stronger than those of the sun. if so, they may be driven by the massive disc of gas orbiting the central super-massive black hole.
<p> stellar proper motions have been used to infer the presence of a super-massive black hole at the center of the milky way. this black hole is suspected to be sgr a*, with a mass of about 4.2 million solar masses.
<p> our milky way galaxy contains several stellar-mass black hole candidates (bhcs) which are closer to us than the supermassive black hole in the galactic center region. most of these candidates are members of x-ray binary systems in which the compact object draws matter from its partner via an accretion disk. the probable black holes in these pairs range from three to more than a dozen solar masses.
<p> astronomers now have evidence there is a supermassive black hole at the center of the galaxy. sagittarius a* (abbreviated sgr a*) is agreed to be the most plausible candidate for the location of this supermassive black hole. the very large telescope and keck telescope detected stars orbiting sgr a* at speeds greater than that of any other stars in the galaxy. one star, designated s2, was calculated to orbit sgr a* at speeds of over 5,000 kilometers per second at its closest approach. | > Is the SMBH really massive enough to gravitationally pull the rest of the galaxy to rotate around it? Or as it migrated (I'm assuming that's what happened?) to the center, did the galaxy impart the rotational direction? The BH is part of the galaxy; it appeared and grew as a part of normal processes that exist in most galaxies. The galaxy as a whole has a spin. When galactic matter falls into the BH, its angular momentum is conserved. The BH grows by feeding on galactic stuff, in the center of the galaxy, more or less. Ergo, the spin of the BH should follow the direction of the overall galactic spin pretty closely.
[physics] is entropy quantifiable, and if so, what unit(s) is it expressed in? | <p> the sum is over all possible states "i" of the system in question, such as the positions of gas particles in a container. moreover, "p" is the probability that the state "i" is attained and "k" is the boltzmann constant. similarly, entropy in information theory measures the quantity of information. if a message recipient may expect any one of "n" possible messages with equal likelihood, then the amount of information conveyed by any one such message is quantified as log₂("n") bits.
<p> in boltzmann's definition, entropy is a measure of the number of possible microscopic states (or microstates) of a system in thermodynamic equilibrium, consistent with its macroscopic thermodynamic properties (or macrostate). to understand what microstates and macrostates are, consider the example of a gas in a container. at a microscopic level, the gas consists of a vast number of freely moving atoms, which occasionally collide with one another and with the walls of the container. the microstate of the system is a description of the positions and momenta of all the atoms. in principle, all the physical properties of the system are determined by its microstate. however, because the number of atoms is so large, the details of the motion of individual atoms is mostly irrelevant to the behavior of the system as a whole. provided the system is in thermodynamic equilibrium, the system can be adequately described by a handful of macroscopic quantities, called "thermodynamic variables": the total energy "e", volume "v", pressure "p", temperature "t", and so forth. the macrostate of the system is a description of its thermodynamic variables.
<p> where "k" is a positive constant. shannon then states that "any quantity of this form, where "k" merely amounts to a choice of a unit of measurement, plays a central role in information theory as measures of information, choice, and uncertainty." then, as an example of how this expression applies in a number of different fields, he references r.c. tolman's 1938 "principles of statistical mechanics", stating that "the form of "h" will be recognized as that of entropy as defined in certain formulations of statistical mechanics where "p" is the probability of a system being in cell "i" of its phase space… "h" is then, for example, the "h" in boltzmann's famous h theorem." as such, over the last fifty years, ever since this statement was made, people have been overlapping the two concepts or even stating that they are exactly the same.
<p> a key measure in information theory is "entropy". entropy quantifies the amount of uncertainty involved in the value of a random variable or the outcome of a random process. for example, identifying the outcome of a fair coin flip (with two equally likely outcomes) provides less information (lower entropy) than specifying the outcome from a roll of a die (with six equally likely outcomes). some other important measures in information theory are mutual information, channel capacity, error exponents, and relative entropy.
<p> in the branch of physics called statistical mechanics, entropy is a measure of the randomness or disorder of a physical system. this concept was studied in the 1870s by the austrian physicist ludwig boltzmann, who showed that the thermodynamic properties of a gas could be derived from the combined properties of its many constituent molecules. boltzmann argued that by averaging the behaviors of all the different molecules in a gas, one can understand macroscopic properties such as volume, temperature, and pressure. in addition, this perspective led him to give a precise definition of entropy as the natural logarithm of the number of different states of the molecules (also called "microstates") that give rise to the same macroscopic features.
<p> which is the famous boltzmann entropy formula when "k" is boltzmann's constant, which may be interpreted as the thermodynamic entropy per nat. there are many ways of demonstrating the equivalence of "information entropy" and "physics entropy", that is, the equivalence of "shannon entropy" and "boltzmann entropy". nevertheless, some authors argue for dropping the word entropy for the "h" function of information theory and using shannon's other term "uncertainty" instead.
<p> in information theory, "entropy" is the measure of the amount of information that is missing before reception and is sometimes referred to as "shannon entropy". shannon entropy is a broad and general concept which finds applications in information theory as well as thermodynamics. it was originally devised by claude shannon in 1948 to study the amount of information in a transmitted message. the definition of the information entropy is, however, quite general, and is expressed in terms of a discrete set of probabilities "p so that | Yes, entropy is a quantity. It has dimensions of [energy]/[temperature]. |
[physics] is entropy quantifiable, and if so, what unit(s) is it expressed in? | <p> the sum is over all possible states "i" of the system in question, such as the positions of gas particles in a container. moreover, "p" is the probability that the state "i" is attained and "k" is the boltzmann constant. similarly, entropy in information theory measures the quantity of information. if a message recipient may expect any one of "n" possible messages with equal likelihood, then the amount of information conveyed by any one such message is quantified as log₂("n") bits.
<p> in boltzmann's definition, entropy is a measure of the number of possible microscopic states (or microstates) of a system in thermodynamic equilibrium, consistent with its macroscopic thermodynamic properties (or macrostate). to understand what microstates and macrostates are, consider the example of a gas in a container. at a microscopic level, the gas consists of a vast number of freely moving atoms, which occasionally collide with one another and with the walls of the container. the microstate of the system is a description of the positions and momenta of all the atoms. in principle, all the physical properties of the system are determined by its microstate. however, because the number of atoms is so large, the details of the motion of individual atoms is mostly irrelevant to the behavior of the system as a whole. provided the system is in thermodynamic equilibrium, the system can be adequately described by a handful of macroscopic quantities, called "thermodynamic variables": the total energy "e", volume "v", pressure "p", temperature "t", and so forth. the macrostate of the system is a description of its thermodynamic variables.
<p> where "k" is a positive constant. shannon then states that "any quantity of this form, where "k" merely amounts to a choice of a unit of measurement, plays a central role in information theory as measures of information, choice, and uncertainty." then, as an example of how this expression applies in a number of different fields, he references r.c. tolman's 1938 "principles of statistical mechanics", stating that "the form of "h" will be recognized as that of entropy as defined in certain formulations of statistical mechanics where "p" is the probability of a system being in cell "i" of its phase space… "h" is then, for example, the "h" in boltzmann's famous h theorem." as such, over the last fifty years, ever since this statement was made, people have been overlapping the two concepts or even stating that they are exactly the same.
<p> a key measure in information theory is "entropy". entropy quantifies the amount of uncertainty involved in the value of a random variable or the outcome of a random process. for example, identifying the outcome of a fair coin flip (with two equally likely outcomes) provides less information (lower entropy) than specifying the outcome from a roll of a die (with six equally likely outcomes). some other important measures in information theory are mutual information, channel capacity, error exponents, and relative entropy.
<p> in the branch of physics called statistical mechanics, entropy is a measure of the randomness or disorder of a physical system. this concept was studied in the 1870s by the austrian physicist ludwig boltzmann, who showed that the thermodynamic properties of a gas could be derived from the combined properties of its many constituent molecules. boltzmann argued that by averaging the behaviors of all the different molecules in a gas, one can understand macroscopic properties such as volume, temperature, and pressure. in addition, this perspective led him to give a precise definition of entropy as the natural logarithm of the number of different states of the molecules (also called "microstates") that give rise to the same macroscopic features.
<p> which is the famous boltzmann entropy formula when "k" is boltzmann's constant, which may be interpreted as the thermodynamic entropy per nat. there are many ways of demonstrating the equivalence of "information entropy" and "physics entropy", that is, the equivalence of "shannon entropy" and "boltzmann entropy". nevertheless, some authors argue for dropping the word entropy for the "h" function of information theory and using shannon's other term "uncertainty" instead.
<p> in information theory, "entropy" is the measure of the amount of information that is missing before reception and is sometimes referred to as "shannon entropy". shannon entropy is a broad and general concept which finds applications in information theory as well as thermodynamics. it was originally devised by claude shannon in 1948 to study the amount of information in a transmitted message. the definition of the information entropy is, however, quite general, and is expressed in terms of a discrete set of probabilities "p so that | There is also dimensionless entropy, just the logarithm of the number of microstates. Then there is the thermodynamic β (coldness), the derivative of entropy with respect to internal energy. It is only for historical reasons that one uses the kelvin scale instead of this coldness parameter. |
what would it take for an aircraft carrier to fly? | <p> these carriers had hangars for storing and maintaining the aircraft, but no flight deck as in a true aircraft carrier. instead, they used cranes to lower the aircraft into the sea for takeoff and to recover them after landing. the ships were normally converted merchant vessels rather than specially constructed for the task. as aircraft improved, the problems of using seaplanes became more of a handicap. the aircraft could only be operated in a smooth sea and the ship had to stop for launching or recovery, both of which took around 20 minutes. the tender was often stationed or so in front of the main battle fleet with the cruiser screen so that it would not fall hopelessly behind when it launched its aircraft. seaplanes also had poorer performance than other aircraft because of the drag and weight of the floats. seaplane tenders had largely been superseded by aircraft carriers in the battle fleet by the end of the first world war, although aircraft were still of minor importance compared to the firepower of naval artillery.
<p> landing larger and faster aircraft on a flight deck was made possible through the use of arresting cables installed on the flight deck and a tailhook installed on the aircraft. early carriers had a very large number of arrestor cables or "wires". current u.s. navy carriers have three or four steel cables stretched across the deck at intervals, which bring a plane, traveling at landing speed, to a complete stop within a short distance.
<p> it was known that aircraft could maintain flight with a greater payload than that possible during take off. major robert h. mayo, the technical general manager at imperial airways, proposed mounting a small, long-range seaplane on top of a larger carrier aircraft, using the combined power of both to bring the smaller aircraft to operational height, at which time the two aircraft would separate, the carrier aircraft returning to base while the other flew on to its destination. the british air ministry issued specification "13/33" to cover this project.
<p> because the take-off speed of early aircraft was so low, it was possible for an aircraft to make a very short take off when the launching ship was steaming into the wind. later, removable "flying-off platforms" appeared on the gun turrets of battleships and battlecruisers, allowing aircraft to be flown off for scouting purposes, although there was no chance of recovery.
<p> the aircraft carriers are of a stobar configuration: short take-off but arrested recovery. short take-off is achieved by using a 12-degree ski-jump on the bow. there is also an angled deck with arresting wires, which allows aircraft to land without interfering with launching aircraft. the flight deck has a total area of . two aircraft elevators, on the starboard side forward and aft of the island, move aircraft between the hangar deck and the flight deck.
<p> successful air navigation involves piloting an aircraft from place to place without getting lost, not breaking the laws applying to aircraft, or endangering the safety of those on board or on the ground. air navigation differs from the navigation of surface craft in several ways; aircraft travel at relatively high speeds, leaving less time to calculate their position en route. aircraft normally cannot stop in mid-air to ascertain their position at leisure. aircraft are safety-limited by the amount of fuel they can carry; a surface vehicle can usually get lost, run out of fuel, then simply await rescue. there is no in-flight rescue for most aircraft. additionally, collisions with obstructions are usually fatal. therefore, constant awareness of position is critical for aircraft pilots.
<p> naval aviation is typically projected to a position nearer the target by way of an aircraft carrier. carrier-based aircraft must be sturdy enough to withstand demanding carrier operations. they must be able to launch in a short distance and be sturdy and flexible enough to come to a sudden stop on a pitching flight deck; they typically have robust folding mechanisms that allow higher numbers of them to be stored in below-decks hangars and small spaces on flight decks. these aircraft are designed for many purposes, including air-to-air combat, surface attack, submarine attack, search and rescue, matériel transport, weather observation, reconnaissance and wide area command and control duties. | 100,000 tons. The MV-22 weighs 27 tons (all up) and has 10,000 hp with a disc loading of 100 kg/sq m. Scale it up and you need 40 million hp (30 GW) and 1 million sq m (1000 m x 1000 m, or 1 sq km) of disc area to lift 100,000 tons. Although if you made it efficiently out of aluminium you could probably get the weight down to 20,000 tons. Planes like Boeing's cost about $1 million per ton, so a hundred-thousand-ton plane would cost a hundred billion dollars. The Nimitz class costs $30,000 per ton excluding the planes.
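The back-of-envelope scaling in that answer can be reproduced directly. A minimal sketch; the MV-22 figures are the answer's own round numbers, not official specifications, and power is scaled linearly with weight because the disc loading is held fixed:

```python
# Rough scale-up of a tiltrotor to aircraft-carrier weight at constant disc loading.
# Figures follow the answer's round numbers (27 t, 10,000 hp, 100 kg/m^2); at fixed
# disc loading, ideal hover power per unit weight is constant, so power scales linearly.
HP_TO_W = 745.7                                 # watts per (mechanical) horsepower

mv22_mass_kg = 27_000                           # ~27 t all-up weight
mv22_power_hp = 10_000
disc_loading_kg_m2 = 100                        # supported mass per m^2 of rotor disc

carrier_mass_kg = 100_000 * 1_000               # 100,000 t

scale = carrier_mass_kg / mv22_mass_kg          # ~3,700x heavier
power_hp = mv22_power_hp * scale                # ~37 million hp (answer rounds to 40M)
power_gw = power_hp * HP_TO_W / 1e9             # ~28 GW (answer rounds to 30 GW)
disc_area_m2 = carrier_mass_kg / disc_loading_kg_m2   # 1,000,000 m^2 = 1 km^2

print(f"{scale:,.0f}x heavier, {power_hp / 1e6:.0f} million hp, "
      f"{power_gw:.0f} GW, {disc_area_m2 / 1e6:.1f} km^2 of rotor disc")
```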
are there in fact legitimate health concerns associated with microwaving styrofoam cup noodles cups? | <p> cup noodles are often seen in the 2012 video game binary domain, which is set in a futuristic version of tokyo. it is commonly seen in billboards and advertisements throughout the city, and is even seen being eaten by some characters. cup noodles were also prominently featured as product placement in the 2016 video game "final fantasy xv". this partnership also resulted in a crossover tv ad in japan. cup noodles have also been noted in the upcoming video game star citizen as a fictional product placement under the manufacture of a corporation known as "big benny's".
<p> the popularity of cup noodles has also resulted in the creation of a cup noodle museum. the museum features displays on cup noodles and their founder, momofuku ando. the museum is located in yokohama, japan.
<p> goop, and by extension paltrow, have drawn criticism by showcasing expensive products and promoting medically and scientifically impossible treatments, many of which have harmful consequences. the controversies have included vaginal steaming, the use of jade eggs, a dangerous coffee enema device, and "body vibes", wearable stickers that were claimed to "rebalance the energy frequency in our bodies" and which goop falsely claimed were made of a nasa-developed material. goop settled a lawsuit regarding the health claims it made over the jade eggs.
<p> kelp noodles are cholesterol-, fat- and gluten-free, and are also rich in nutrients. a 1/2 cup serving includes 186 milligrams of sodium, 134 milligrams of calcium, 2.28 milligrams of iron, and 52.8 micrograms of vitamin k. they are a good dietary source of iodine. consumers with thyroid and heart disease should take the sodium and iodine content into account.
<p> nowadays, the traditional methods of making sanxiang noodles are rarely used because of the huge demand of the market and the development of technology. machines that make sanxiang noodles have been widely adopted. besides, the recipes for making sanxiang noodles have also been updated because of the diversity of food and people's appetites.
<p> in the uk, ireland, south africa, australia, and canada, the brand "polyfilla", multi-purpose filler, is used as a generic term for spackling paste, even though it differs from spackle in being cellulose based. the manufacturers claim that it has an advantage over spackle in that it doesn't shrink or crack.
<p> cup noodles were introduced in 1990 by maruchan. due to its popularity, instant noodles are often referred to simply as "maruchan". today, many local brands such as "la moderna" and "herdez" have developed their own cup noodles, along with nissin, which is also a newcomer. | But it is also possible for chemicals from the styrofoam to leach into the water.
is it possible that the reason the universe is expanding is that it is still in its acceleration phase of the big bang? | <p> based on a huge amount of experimental observation and theoretical work, it is now believed that the reason for the observation is that "space itself is expanding", and that it expanded very rapidly within the first fraction of a second after the big bang. this kind of expansion is known as a ""metric"" expansion. in the terminology of mathematics and physics, a "metric" is a measure of distance that satisfies a specific list of properties, and the term implies that "the sense of distance within the universe is itself changing", although at this time it is far too small an effect to see on less than an intergalactic scale.
<p> the expansion is thought to have been triggered by the phase transition that marked the end of the preceding grand unification epoch at approximately 10 seconds after the big bang. one of the theoretical products of this phase transition was a scalar field called the inflaton field. as this field settled into its lowest energy state throughout the universe, it generated a repulsive force that led to a rapid expansion of space. this expansion explains various properties of the current universe that are difficult to account for without such an inflationary epoch.
<p> the big bang is not an explosion of matter moving outward to fill an empty universe. instead, space itself expands with time everywhere and increases the physical distance between two comoving points. in other words, the big bang is not an explosion "in space", but rather an expansion "of space". because the flrw metric assumes a uniform distribution of mass and energy, it applies to our universe only on large scales—local concentrations of matter such as our galaxy are gravitationally bound and as such do not experience the large-scale expansion of space.
<p> observations made by edwin hubble during the 1920s–1950s found that galaxies appeared to be moving away from each other, leading to the currently accepted big bang theory. this suggests that the universe began – very small and very dense – about 13.8 billion years ago, and it has expanded and (on average) become less dense ever since. confirmation of the big bang mostly depends on knowing the rate of expansion, average density of matter, and the physical properties of the mass–energy in the universe.
<p> the big crunch scenario hypothesized that the density of matter throughout the universe is sufficiently high that gravitational attraction will overcome the expansion which began with the big bang. the flrw cosmology can predict whether the expansion will eventually stop based on the average energy density, hubble parameter, and cosmological constant. if the metric expansion stopped, then contraction will inevitably follow, accelerating as time passes and finishing the universe in a kind of gravitational collapse.
<p> since the hubble "constant" is a constant only in space, not in time, the radius of the hubble sphere may increase or decrease over various time intervals. the subscript '0' indicates the value of the hubble constant today. current evidence suggests that the expansion of the universe is accelerating ("see" accelerating universe), meaning that, for any given galaxy, the recession velocity dd/dt is increasing over time as the galaxy moves to greater and greater distances; however, the hubble parameter is actually thought to be decreasing with time, meaning that if we were to look at some "fixed" distance d and watch a series of different galaxies pass that distance, later galaxies would pass that distance at a smaller velocity than earlier ones.
<p> the expansion of space is sometimes described as a force which acts to push objects apart. though this is an accurate description of the effect of the cosmological constant, it is not an accurate picture of the phenomenon of expansion in general. for much of the universe's history the expansion has been due mainly to inertia. the matter in the very early universe was flying apart for unknown reasons (most likely as a result of cosmic inflation) and has simply continued to do so, though at an ever-decreasing rate due to the attractive effect of gravity. | Some people have postulated theories like this, but one of the biggest problems is that as far as we know, the rate of expansion of the universe is actually *accelerating*, not slowing down. Since we are not only expanding, but also show no signs of slowing down, it seems unlikely that we will end in a "Big Crunch" (unless, of course, there is some hidden mechanism for reversing the rate of expansion we have not yet observed).
can anyone explain how a mimosa plant can sense touch? | <p> plants are not passive, but respond to external signals such as light, touch, and injury by moving or growing towards or away from the stimulus, as appropriate. tangible evidence of touch sensitivity is the almost instantaneous collapse of leaflets of "mimosa pudica", the insect traps of venus flytrap and bladderworts, and the pollinia of orchids.
<p> there are also forms of tactile communication that do not involve direct touch, including vibrational communication. some chameleon species communicate with one another by vibrating the substrate that they are standing on, such as a tree branch or leaf. animals that use vibrational communication exhibit unique adaptations in morphology (i.e., body form) that enable them to detect vibration and use it in communication. these include unique adaptations in ear and jaw morphology that give the animal direct contact with the surface they are standing on, and enable them to detect subtle vibrations. lizards that live on substrates that can be easily moved (such as thin tree branches or leaves) are probably more likely to use vibrational communication than lizards that live on substrates that do not transmit vibrations as easily, such as the ground or thick tree trunks.
<p> it has been concluded that loss of turgor pressure within the leaves of "mimosa pudica" is responsible for the reaction the plant has when touched. other factors such as changes in osmotic pressure, protoplasmic contraction and increase in cellular permeability have been observed to affect this response. it has also been recorded that turgor pressure is different in the upper and lower pulvinar cells of the plant, and the movement of potassium and calcium ions throughout the cells cause the increase in turgor pressure. when touched, the pulvinus is activated and exudes contractile proteins, which in turn increases turgor pressure and closes the leaves of the plant.
<p> thigmonastic movements, those that occur in response to touch, are used as a defense in some plants. the leaves of the sensitive plant, "mimosa pudica", close up rapidly in response to direct touch, vibration, or even electrical and thermal stimuli. the proximate cause of this mechanical response is an abrupt change in the turgor pressure in the pulvini at the base of leaves resulting from osmotic phenomena. this is then spread via both electrical and chemical means through the plant; only a single leaflet need be disturbed. this response lowers the surface area available to herbivores, which are presented with the underside of each leaflet, and results in a wilted appearance. it may also physically dislodge small herbivores, such as insects.
<p> plants are capable of detecting invaders through the recognition of non-self signals despite the lack of a circulatory or immune system like those found in animals. often a plant's first line of defense against microbes occurs at the plant cell surface and involves the detection of microorganism-associated molecular patterns (mamps). mamps include nucleic acids common to viruses and endotoxins on bacterial cell membranes which can be detected by specialized pattern-recognition receptors. another method of detection involves the use of plant immune receptors to detect effector molecules released into plant cells by pathogens. detection of these signals in infected cells leads to an activation of effector-triggered immunity (eti), a type of innate immune response.
<p> thigmomorphogenesis (thigma -- to touch in greek) is the response by plants to mechanical sensation (touch) by altering their growth patterns. in the wild, these patterns can be evinced by wind, raindrops, and rubbing by passing animals.
<p> wilhelm pfeffer, a german botanist during the 19th century, used "mimosa" in one of the first experiments testing plant habituation. further experimentation was done in 1965, when holmes and gruenberg discovered that "mimosa" could distinguish between two stimuli, a water drop and a finger touch. their findings also demonstrated that the habituated behavior was not due to fatigue since the leaf-folding response returned when another stimulus was presented. | First of all, it is important to know that you are looking at a compound leaf. Each little "leaf" is actually a leaflet, and the leaf itself is the larger structure that they make up. At the base of each leaflet, there is a swelling called a pulvinus. It is a structure made up of two plant tissue types--woody sclerenchyma and turgid collenchyma. The collenchyma surrounds the sclerenchyma like this: 0 = collenchyma X = sclerenchyma 000000000000000000 XXXXXXXXXXXXXXXX 000000000000000000 When you touch the plant, you set off a series of signals that go through the plant's cellular ion channels much in the same way nerve signals work in animals. This causes an ion gradient to form on one side of the collenchyma, which causes it to expel water on one side, which makes it "flex" like this: OOOO OOOXXXXOOO OOXXXX0000XXXOO XXXX000 0000XXX 00000 0000 Hope that helps. The best part of this behavior is the name. It is called "Thigmonasty". Edit: I apologize for my lack of illustrating skills. Maybe someone can help me out. Edit 2: Wikipedia to the rescue!
other than living in a simulation, what other possible implications does this have? | <p> bostrom argues that "if" "the fraction of all people with our kind of experiences that are living in a simulation is very close to one", "then" it follows that we probably live in a simulation. some philosophers disagree, proposing that perhaps "sims" do not have conscious experiences the same way that unsimulated humans do, or that it can otherwise be self-evident to a human that they are a human rather than a sim. philosopher barry dainton modifies bostrom's trilemma by substituting "neural ancestor simulations" (ranging from literal brains in a vat, to far-future humans with induced high-fidelity hallucinations that they are their own distant ancestors) for bostrom's "ancestor simulations", on the grounds that every philosophical school of thought can agree that sufficiently high-tech neural ancestor simulation experiences would be indistinguishable from non-simulated experiences. even if high-fidelity computer sims are never conscious, dainton's reasoning leads to the following conclusion: either the fraction of human-level civilizations that reach a posthuman stage and are able and willing to run large numbers of neural ancestor simulations is close to zero, or we are in some kind of (possibly neural) ancestor simulation.
<p> epistemologically, it is not impossible to tell whether we are living in a simulation. for example, bostrom suggests that a window could "pop up" saying: "you are living in a simulation. click here for more information." however, imperfections in a simulated environment might be difficult for the native inhabitants to identify and for purposes of authenticity, even the simulated memory of a blatant revelation might be purged programmatically. nonetheless, should any evidence come to light, either for or against the skeptical hypothesis, it would radically alter the aforementioned probability.
<p> simulation theory argues that mental simulations do not fully exclude the external information that surrounds the user. rather that the mediated stimuli are reshaped into imagery and memories of the user in order to run the simulation. it explains why the user is able to form these experiences without the use of technology, because it points to the relevance of construction and internal processing.
<p> bullet::::- simulation is a term that can be defined as a virtual representation of reality. for example, according to new media and visual culture, virtually, things seem real based on experience, but they are not real because they have not actually happened. french theorist, jean baudrillard, believed that simulation was the modern stage of simulacrum.
<p> bostrom goes on to use a type of anthropic reasoning to claim that, "if" the third proposition is the one of those three that is true, and almost all people with our kind of experiences live in simulations, "then" we are almost certainly living in a simulation.
<p> live, virtual, & constructive (lvc) simulation is a broadly used taxonomy for classifying models and simulation (m&s). however, categorizing a simulation as a live, virtual, or constructive environment is problematic since there is no clear division between these categories. the degree of human participation in a simulation is infinitely variable, as is the degree of equipment realism. the categorization of simulations also lacks a category for simulated people working real equipment.
<p> they can help to understand the connections between factors and events and to examine their dynamics. simulation is a process that represents a structure and change of a system. in simulation some aspects of reality are duplicated or reproduced, usually within the model. | It sounds like nonsense (edit: opinion revised below, the way the video presents things seems exaggerated in its implications, but the mathematical similarity seems legitimate), though it's impossible to know what the actual facts are from a short question and answer session, especially given the extremely complicated physical nature of the subject. The best I can get from it is that this guy has interpreted things in nature as binary strings and that either: * He's found that they exactly match some specific computer code, which is probably just coincidence driven by the way there's a *lot* of computer code out there and a *lot* of physics to make random interpretations of until it matches. * He's found that they look in general like computer code, which doesn't really mean anything.