what is pink floyd's the wall about?
pink floyd – the wall is a 1982 british surrealist live-action/animated musical drama film directed by alan parker with animated segments by political cartoonist gerald scarfe, and is based on the 1979 pink floyd album of the same name. the film centers on a solitary rocker named pink, who, after being driven into insanity by the death of his father and many depressive moments during his lifetime, constructs a metaphorical (and sometimes physical) wall to protect himself from the world and the emotional situations around him. when this coping mechanism backfires he puts himself on trial and sets himself free. the screenplay was written by pink floyd vocalist and bassist roger waters.

the wall is the eleventh studio album by english rock band pink floyd, released 30 november 1979 on harvest and columbia records. a rock opera, its story explores pink, a jaded rockstar whose eventual self-imposed isolation from society is symbolised by a wall. the record was a commercial success, charting at number one in the us for 15 weeks, and number three in the uk. in 1982, the album was adapted into a feature film of the same name, directed by alan parker.

the three parts of "another brick in the wall" appear on pink floyd's 1979 album "the wall," a rock opera that explores abandonment and isolation, symbolised by a wall. during "part 1", the protagonist, pink, begins building a metaphorical wall around himself following the death of his father. in "part 2", traumas including his overprotective mother and abusive schoolteachers become metaphorical bricks in the wall. following a violent breakdown in "part 3", pink dismisses everyone he knows as "just bricks in the wall".

"the wall" tells the story of pink, an embittered and alienated rock star. at this point in the album's narrative, pink has achieved wealth and fame, and is usually away from home, due to the demands of his career as a touring performer. he is having casual sex with groupies to relieve the tedium of the road, and is living a separate life from his wife.

"the wall" is a rock opera that explores abandonment and isolation, symbolised by a wall. the songs create an approximate storyline of events in the life of the protagonist, pink (who is introduced in the songs "in the flesh?" and "the thin ice"), a character based on syd barrett as well as roger waters, whose father was killed during wwii. pink's father also dies in a war ("another brick in the wall (part 1)"), which is where pink starts to build a metaphorical wall around himself. pink is oppressed by his overprotective mother ("mother") and tormented at school by tyrannical, abusive teachers ("the happiest days of our lives"). all of these traumas become metaphorical "bricks in the wall" ("another brick in the wall (part 2)"). the protagonist eventually becomes a rock star, his relationships marred by infidelity, drug use, and outbursts of violence. he soon marries and is about to complete his "wall" ("empty spaces"). while touring in america, he brings a groupie home after learning of his wife's infidelity. ruminating on his failed marriage, he trashes his room and scares the groupie away in a violent fit of rage ("one of my turns"). as his marriage crumbles ("don't leave me now"), he dismisses everyone he's known as "just bricks in the wall" ("another brick in the wall (part 3)") and finishes building his wall ("goodbye cruel world"), completing his isolation from human contact.

the wall is a 1963 novel by austrian writer marlen haushofer. considered the author's finest work, "the wall" is an example of dystopian fiction. the english translation by shaun whiteside was published by cleis press in 1990.

"the wall" tells the story of pink, an alienated young rock star who is retreating from society and isolating himself. in "hey you", pink realizes his mistake of shunning society and attempts to regain contact with the outside world. however, he cannot see or hear beyond the wall. pink's call becomes more and more desperate as he begins to realize there is no escape.
It's about the walls we erect around ourselves to shelter us from the pain inflicted by the world and by other people.
what does natural air smell like, i.e. air that's not polluted?
biological sources of air pollution are also found indoors, as gases and airborne particulates. pets produce dander, people produce dust from minute skin flakes and decomposed hair, dust mites in bedding, carpeting and furniture produce enzymes and micrometre-sized fecal droppings, inhabitants emit methane, mold forms on walls and generates mycotoxins and spores, air conditioning systems can incubate legionnaires' disease and mold, and houseplants, soil and surrounding gardens can produce pollen, dust, and mold. indoors, the lack of air circulation allows these airborne pollutants to accumulate more than they otherwise would in nature.

an air pollutant is a material in the air that can have adverse effects on humans and the ecosystem. the substance can be solid particles, liquid droplets, or gases. a pollutant can be of natural origin or man-made.

air pollution is commonly associated with the image of billowing clouds of smoke rising into the sky from a large factory. while such fumes and smoke are certainly a prominent form of air pollution, they are not the only one. air pollution can also come from the emission of cars, smoking, and other sources. air pollution does not affect only birds, as one might think: it affects mammals, birds, reptiles, and any other organism that requires oxygen to live. frequently, if there is any highly dangerous air pollution, the animal observation process will be rather simple: there will be an abundance of dead animals in the vicinity of the pollution.

air pollution is the introduction into the atmosphere of chemicals, particulate matter, or biological materials that cause harm or discomfort to humans or other living organisms, or damage the natural environment. many urban areas have significant problems with smog, a type of air pollution derived from vehicle emissions from internal combustion engines and industrial fumes that react in the atmosphere with sunlight to form secondary pollutants that also combine with the primary emissions to form photochemical smog.

removing the source of an unpleasant odor will decrease the chance that people will smell it. ventilation is also important to maintaining indoor air quality and can aid in eliminating unpleasant odors. simple cleaners such as white vinegar and baking soda, as well as natural absorbents like activated charcoal and zeolite, are effective at removing odors. other solutions are bad-smell removers adapted to different types of odor. the result is odor-free air that is also pollution-free and safer to breathe. some house plants may also aid in the removal of toxic substances from the air in building interiors.

scientific evidence has indicated that indoor air pollution can be worse than outdoor pollutants in large and industrialized cities. many products and chemicals used inside the home, for cooking and heating, and for appliances and home décor are primary sources of indoor air pollutants. everything we use in the home contributes to the pollution, and can possibly degrade the environment. air pollution is responsible for 7 million premature deaths around the world each year. when pollutants enter the body through our respiratory system, they can be absorbed in the blood and travel throughout the body, and can directly damage the heart and other vital organs.

the caa defines "air pollutant" as "any air pollution agent or combination of such agents, including any physical, chemical, biological, radioactive ... substance or matter which is emitted into or otherwise enters the ambient air". the majority opinion commented that "greenhouse gases fit well within the caa's capacious definition of air pollutant."
Never been out to the middle of nowhere? Air generally is very smell-neutral. (Which makes sense: smell is supposed to help you figure out what's going on around you and whether food is good to eat. The ambient air is always there, so being alert to it would just be wasted attention, and the brain rightly ignores it.)
when i listen to someone playing the piano, why do i know when they make a mistake even if i've never heard the song they're playing?
<p> "why this book? because few instrumentalists understand why the piano so often betrays their thinking. all the elements - stability and fingerprint, true relaxation, tactile and cerebral awareness - give the means for a real and not only intentional sound requirement." <p> bullet::::- the 1991 party game "notability" was played by people trying to guess a song played on a toy piano, while, according to the rules, "shoot the piano player!" was to be shouted if someone thought the player was cheating (playing out of tune/tempo). <p> "i am tempted to copy out a small piano piece for you, because i would like to know how you agree with it. it is teeming with dissonances! these may [well] be correct and [can] be explained—but maybe they won’t please your palate, and now i wished, they would be less correct, but more appetizing and agreeable to your taste. the little piece is exceptionally melancholic and ‘to be played very slowly’ is not an understatement. every bar and every note must sound like a ritard[ando], as if one wanted to suck melancholy out of each and every one, lustily and with pleasure out of these very dissonances! good lord, this description will [surely] awaken your desire!" <p> pitch detection is often the detection of individual notes that might make up a melody in music, or the notes in a chord. when a single key is pressed upon a piano, what we hear is not just "one" frequency of sound vibration, but a "composite" of multiple sound vibrations occurring at different mathematically related frequencies. the elements of this composite of vibrations at differing frequencies are referred to as harmonics or partials. <p> while very few people have the ability to name a pitch with no external reference, pitch memory can be activated by repeated exposure. people who are not skilled singers will often sing popular songs in the correct key, and can usually recognize when tv themes have been shifted into the wrong key. members of the venda culture in south africa also sing familiar children's songs in the key in which the songs were learned. <p> studies suggest that individuals are capable of automatically detecting a difference or anomaly in a melody such as an out of tune pitch which does not fit with their previous music experience. this automatic processing occurs in the secondary auditory cortex. brattico, tervaniemi, naatanen, and peretz (2006) performed one such study to determine if the detection of tones that do not fit an individual's expectations can occur automatically. they recorded event-related potentials (erps) in nonmusicians as they were presented unfamiliar melodies with either an out of tune pitch or an out of key pitch while participants were either distracted from the sounds or attending to the melody. both conditions revealed an early frontal negativity independent of where attention was directed. this negativity originated in the auditory cortex, more precisely in the supratemporal lobe (which corresponds with the secondary auditory cortex) with greater activity from the right hemisphere. the negativity response was larger for pitch that was out of tune than that which was out of key. ratings of musical incongruity were higher for out of tune pitch melodies than for out of key pitch. in the focused attention condition, out of key and out of tune pitches produced late parietal positivity. the findings of brattico et al. (2006) suggest that there is automatic and rapid processing of melodic properties in the secondary auditory cortex. 
the findings that pitch incongruities were detected automatically, even in processing unfamiliar melodies, suggests that there is an automatic comparison of incoming information with long term knowledge of musical scale properties, such as culturally influenced rules of musical properties (common chord progressions, scale patterns, etc.) and individual expectations of how the melody should proceed. the auditory area processes the sound of the music. the auditory area is located in the temporal lobe. the temporal lobe deals with the recognition and perception of auditory stimuli, memory, and speech (kinser, 2012). <p> elijah wood had worked with a teacher three weeks prior to going to barcelona and found it stressful having to play the piano and speak at the same time saying, "it was incredibly technical [...] lots of moments where it was jumping from where i'd play, listen to a click, listen to music, have to be in the right place and the right time and hear dialogue and repeat dialogue".
Your ears will naturally lock on to the key/tune of the piece of music. So if someone deviates from it, your ears will notice.
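As a toy illustration of the comparison your brain appears to be running, here is a short Python sketch (my own, not from any study quoted above): map each note's frequency to a pitch class and flag anything outside the scale the piece has established. It assumes equal temperament with A4 = 440 Hz, and the key and melody are invented for the example; real auditory processing is of course far messier.

```python
# Sketch: flag notes that fall outside an expected key, mirroring the
# automatic "does this pitch fit the scale?" check described above.
# Assumes equal temperament with A4 = 440 Hz; key and melody are made up.

import math

NOTE_NAMES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]
MAJOR_SCALE_STEPS = {0, 2, 4, 5, 7, 9, 11}  # semitone offsets from the tonic

def pitch_class(freq_hz: float) -> int:
    """Map a frequency to one of the 12 pitch classes (0 = C)."""
    semitones_from_a4 = round(12 * math.log2(freq_hz / 440.0))
    return (semitones_from_a4 + 9) % 12  # A sits 9 semitones above C

def in_key(freq_hz: float, tonic: str) -> bool:
    offset = (pitch_class(freq_hz) - NOTE_NAMES.index(tonic)) % 12
    return offset in MAJOR_SCALE_STEPS

# A melody in C major with one wrong note (F#4 at 369.99 Hz).
for f in [261.63, 293.66, 329.63, 369.99, 392.00]:
    print(f"{f:7.2f} Hz -> {NOTE_NAMES[pitch_class(f)]}",
          "ok" if in_key(f, "C") else "<- out of key")
```

Your brain does this without naming any notes: once a few bars have implied a scale, a pitch outside it simply sounds wrong.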
why do your gums and teeth feel weird when you don't get enough sleep?
common symptoms include drooling or dribbling, increased chewing, mood changes, irritability or crankiness, and swollen gums. crying, sleeplessness, restless sleep at night, and mild fever are also associated with teething. teething can begin as early as 3 months and continue until a child's third birthday. in rare cases, an area filled with fluid can appear over where a tooth is erupting and cause the gums to be even more sensitive. pain is often associated more with large molars since they cannot penetrate through the gums as easily as the other teeth.

drooling or sialorrhea can occur during sleep. it is often the result of open-mouth posture from cns depressant intake or from sleeping on one's side. sometimes while sleeping, saliva does not build up at the back of the throat and does not trigger the normal swallow reflex, leading to the condition. freud conjectured that drooling occurs during deep sleep, and within the first few hours of falling asleep, since those who are affected by the symptom suffer the most severe harm while napping, rather than during overnight sleep.

soreness of teeth when chewing, or when the teeth touch, is typical. adults usually feel the soreness 12 to 24 hours later, but younger patients tend to react sooner (e.g., 2 to 6 hours). adults are sometimes prescribed headgear but this is less frequent. the headgear is one of the most useful appliances available to the orthodontist, but many patients find it difficult to comply with daytime wear, so it is mainly worn in the evenings and when sleeping. a similar appliance is the reverse-pull headgear or orthodontic facemask, which pulls the patient's teeth forward (rather than back, as in this case).

"half an ounce of a tincture produced narcotic symptoms, confusing the head, causing a tendency to snore even when awake, and giving feelings of tingling, etc., with a strong odour of the drug from the breath and skin which only passed off after a day or two".

some noticeable symptoms that a baby has entered the teething stage include chewing on their fingers or toys to help relieve pressure on their gums. babies might also refuse to eat or drink due to the pain. symptoms will generally fade on their own, but a doctor should be notified if they worsen or are persistent. teething may cause signs and symptoms in the mouth and gums, but does not cause problems elsewhere in the body.

salivary flow rate is decreased during sleep, which may lead to a transient sensation of dry mouth upon waking. this disappears with eating or drinking or with oral hygiene. when associated with halitosis, this is sometimes termed "morning breath". dry mouth is also a common sensation during periods of anxiety, probably owing to enhanced sympathetic drive. dehydration is known to cause hyposalivation, the result of the body trying to conserve fluid. physiologic age-related changes in salivary gland tissues may lead to a modest reduction in salivary output and partially explain the increased prevalence of xerostomia in older people. however, polypharmacy is thought to be the major cause in this group, with no significant decreases in salivary flow rate being likely to occur through aging alone.

as a consequence, night-time sleep does not include as much deep sleep, so the brain tries to "catch up" during the day, hence eds (excessive daytime sleepiness). people with narcolepsy may visibly fall asleep at unpredicted moments (such motions as head bobbing are common). people with narcolepsy fall quickly into what appears to be very deep sleep, and they wake up suddenly and can be disoriented when they do (dizziness is a common occurrence). they have very vivid dreams, which they often remember in great detail. people with narcolepsy may dream even when they only fall asleep for a few seconds. along with vivid dreaming, people with narcolepsy are known to have audio or visual hallucinations prior to falling asleep.
I've never felt this. Is this really a thing?
what are the dangers/benefits of having a low birthrate and a large percentage of your population over the age of 65?
these rates are especially pronounced for children under the age of 5 years old, particularly in lower-income, developing countries. these children have a much greater chance of dying of diseases that have become very preventable in higher-income parts of the world. the instances of these children dying of things like malaria, respiratory infections, diarrhea, perinatal conditions, or measles are much more pronounced in developing nations. data shows that after the age of 5 these preventable causes level out between high and low-income countries. the only cause of death that affects people aged 30-59 at a significantly higher rate in low income.

according to the united nations population fund (unfpa), "pregnancies among girls less than 18 years of age have irreparable consequences. it violates the rights of girls, with life-threatening consequences in terms of sexual and reproductive health, and poses high development costs for communities, particularly in perpetuating the cycle of poverty." health consequences include not yet being physically ready for pregnancy and childbirth leading to complications and malnutrition as the majority of adolescents tend to come from lower-income households. the risk of maternal death for girls under age 15 in low and middle income countries is higher than for women in their twenties. teenage pregnancy also affects girls' education and income potential as many are forced to drop out of school which ultimately threatens future opportunities and economic prospects.

this occurs where birth and death rates are both low, leading to a total population stability. death rates are low for a number of reasons, primarily lower rates of diseases and higher production of food. the birth rate is low because people have more opportunities to choose if they want children; this is made possible by improvements in contraception or women gaining more independence and work opportunities. the dtm is only a suggestion about the future population levels of a country, not a prediction.

birth rates ranging from 10-20 births per 1000 are considered low, while rates from 40-50 births per 1000 are considered high. there are problems associated with both an extremely high birth rate and an extremely low birth rate. high birth rates can cause stress on the government welfare and family programs to support a youthful population. additional problems faced by a country with a high birth rate include educating a growing number of children, creating jobs for these children when they enter the workforce, and dealing with the environmental effects that a large population can produce. low birth rates can put stress on the government to provide adequate senior welfare systems and also the stress on families to support the elders themselves. there will be fewer children or working-age people to support the constantly growing aging population.

birth rates ranging from 10–20 births per 1,000 are considered low, while rates from 40–50 births per 1,000 are considered high. there are problems associated with both extremes. high birth rates may stress government welfare and family programs. additional problems faced by a country with a high birth rate include educating a growing number of children, creating jobs for these children when they enter the workforce, and dealing with the environmental impact of a large population. low birth rates may stress the government to provide adequate senior welfare systems and stress families who must support the elders themselves. there will be fewer children (and a smaller working-age population) to support an aging population.

in the uk, around half of all pregnancies to under 18 are concentrated among the 30% most deprived population, with only 14% occurring among the 30% least deprived. for example, in italy, the teenage birth rate in the well-off central regions is only 3.3 per 1,000, while in the poorer mezzogiorno it is 10.0 per 1,000. similarly, in the u.s., sociologist mike a. males noted that teenage birth rates closely mapped poverty rates in california:

under natural conditions, mortality rates for girls under five are slightly lower than boys for biological reasons. however, after birth, neglect and diverting resources to male children can lead to some countries having a skewed ratio with more boys than girls, with such practices killing approximately 230,000 girls under five in india each year. while sex-selective abortion is more common among the higher income population, who can access medical technology, abuse after birth, such as infanticide and abandonment, is more common among the lower income population. female infanticide in pakistan is a common practice.
People over 65 generally work (much) less than young people, but they consume much, much more of a country's social services, like healthcare. An aging population and low birthrate suggest that in the future there will be many fewer young workers to support the growing needs of the aged group within the society. Adding to this pressure is the fact that most social insurance and government pension programs are built on models that presume future funding from new workers, who will in turn have their end-of-life needs funded by yet another generation of young workers.
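To put rough numbers on that pressure, here is a tiny illustration of the old-age dependency ratio, the figure pension and social-insurance planners watch: how many people 65+ there are per 100 working-age people. The populations below are invented purely to show the mechanism, not real demographic data.

```python
# Sketch: old-age dependency ratio = people 65+ per 100 people of
# working age (15-64). All population figures here are invented.

def old_age_dependency_ratio(pop_65_plus: float, pop_15_to_64: float) -> float:
    return 100 * pop_65_plus / pop_15_to_64

# Younger, high-birthrate country: many workers share each retiree's costs.
print(old_age_dependency_ratio(pop_65_plus=10, pop_15_to_64=65))  # ~15 per 100 workers

# Aging, low-birthrate country: the same retirees lean on far fewer workers,
# so each worker must fund much more pension and healthcare spending.
print(old_age_dependency_ratio(pop_65_plus=28, pop_15_to_64=55))  # ~51 per 100 workers
```

When the ratio triples like this, pay-as-you-go systems must raise taxes on workers, cut benefits, or borrow, which is exactly the danger the question asks about.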
why are nasal vaccines used so sparingly?
in adults short term use of nasal decongestants may have a small benefit. antihistamines may improve symptoms in the first day or two; however, there is no longer-term benefit and they have adverse effects such as drowsiness. other decongestants such as pseudoephedrine appear effective in adults. combined oral analgesics, antihistaminics and decongestants are generally effective for older children and adults. ipratropium nasal spray may reduce the symptoms of a runny nose but has little effect on stuffiness. the safety and effectiveness of nasal decongestant use in children is unclear.

decongestant nasal sprays are available over-the-counter in many countries. they work to very quickly open up nasal passages by constricting blood vessels in the lining of the nose. prolonged use of these types of sprays can damage the delicate mucous membranes in the nose. this causes increased inflammation, an effect known as rhinitis medicamentosa or the rebound effect. decongestant nasal sprays are advised for short-term use only, preferably 5 to 7 days at maximum. some doctors advise to use them 3 days at maximum. a recent clinical trial has shown that a corticosteroid nasal spray may be useful in reversing this condition. topical nasal decongestants include:

nasal administration is a route of administration in which drugs are insufflated through the nose. it can be a form of either topical administration or systemic administration, as the drugs thus locally delivered can go on to have either purely local or systemic effects. nasal sprays are locally acting drugs such as decongestants for cold and allergy treatment, whose systemic effects are usually minimal. examples of systemically active drugs available as nasal sprays are migraine drugs, nicotine replacement, and hormone treatments.

rhinitis affects the nasal mucosa, while rhinosinusitis or sinusitis affects the nose and paranasal sinuses, including frontal, ethmoid, maxillary, and sphenoid sinuses. nasopharyngitis (rhinopharyngitis or the common cold) affects the nares, pharynx, hypopharynx, uvula, and tonsils generally. without involving the nose, pharyngitis inflames the pharynx, hypopharynx, uvula, and tonsils. similarly, epiglottitis (supraglottitis) inflames the superior portion of the larynx and supraglottic area; laryngitis is in the larynx; laryngotracheitis is in the larynx, trachea, and subglottic area; and tracheitis is in the trachea and subglottic area.

there is a connection between the acoustic production of laryngeals and nasals, as can be seen from the antiformants both can produce when viewed via a spectrogram. this is because both sounds in a sense have branched resonators: in the production of nasal sound, both the oral cavity and the nasal cavity act as resonators. for laryngeals, the space below the glottis acts as a second resonator, which in turn can produce slight antiformants.

simple nasals are differentiated from stops only by a lowered velum that allows the air to escape through the nose during the occlusion. nasals are acoustically sonorants, as they have a non-turbulent airflow and are nearly always voiced, but they are articulatorily obstruents, as there is complete blockage of the oral cavity. the term occlusive may be used as a cover term for both nasals and stops.

in terms of acoustics, nasals are sonorants, which means that they do not significantly restrict the escape of air (as it can freely escape out the nose). however, nasals are also obstruents in their articulation because the flow of air through the mouth is blocked. this duality, a sonorant airflow through the nose along with an obstruction in the mouth, means that nasal occlusives behave both like sonorants and like obstruents. for example, nasals tend to pattern with other sonorants such as and , but in many languages, they may develop from or into stops.
Shots are the quickest way into the bloodstream. While a nasal spray vaccination works, it has to be absorbed through the blood vessels in the nostrils to trigger an immune response. More importantly, the flu shot uses a killed (inactivated) strain of the virus, while the nasal spray uses a live attenuated strain. Neither can give you the flu, but the spray typically has more serious, flu-like side effects. In addition, the spray is not recommended for children under 2, while the shot can be administered once a baby is older than six months. The shot, while mildly painful, is actually the better option of the two in my opinion.
why do so many babies do that thing where they fidget and kick so much when you're changing their diaper?
babies may have their diapers changed five or more times a day. parents and other primary child care givers often carry spare diapers and necessities for diaper changing in a specialized diaper bag. diapering may possibly serve as a good bonding experience for parent and child. children who wear diapers may experience skin irritation, commonly referred to as diaper rash, due to continual contact with fecal matter; feces contain urease, which catalyzes the conversion of the urea in urine to ammonia, which can irritate the skin and cause painful redness.

although most commonly worn by and associated with babies and children, diapers are also worn by adults for a variety of reasons. in the medical community, they are usually referred to as "adult absorbent briefs" rather than diapers, which are associated with children and may have a negative connotation. the usage of adult diapers can be a source of embarrassment, and products are often marketed under euphemisms such as incontinence pads. the most common adult users of diapers are those with medical conditions which cause them to experience urinary incontinence (like bed wetting) or fecal incontinence, or those who are bedridden or otherwise limited in their mobility.

babies are likely to accumulate gas in the stomach while feeding and experience considerable discomfort (and agitation) until assisted. burping an infant involves placing the child in a position conducive to gas expulsion (for example against the adult's shoulder, with the infant's stomach resting on the adult's chest) and then lightly patting the lower back. because burping can cause vomiting, a "burp cloth" or "burp pad" is sometimes employed on the shoulder to protect clothing.

many toy store chains and online retailers sell diapers or nappies as a loss leader in order to entice parents into the store in the hopes that the children will spot toys, bottles or other items that the family "needs".

parents report that the squat or "potty" position that they tend to use to hold their baby in order to go is very comfortable for the baby. the position aligns the digestive tract and supports relaxation, as well as contraction of the pelvic floor muscles, helping babies to release their urine or stool and simultaneously build control of the urinary and anal sphincter muscles. this especially helps babies who are suffering from mild constipation. many babies find defecating to be an unsettling process, especially as they transition to solid food. with ec, parents hold their infant in a supportive position as they defecate into the toilet or a suitable receptacle, offering loving emotional and physical support during this process.

for infants and toddlers, less frequent diaper changes can lead to increased instances of diaper rash and urinary tract infections, which can hospitalize the baby. when parents cannot afford diapers, they resort to leaving their child in a diaper for much longer than they should. some parents will leave their child in a wet or dirty diaper, and other parents will "clean" a used disposable diaper and then put it on their baby many times. some parents also attempt to potty train their baby at less than one year old, whereas diaper manufacturers claim most children should not be potty trained until they are two or three years old. furthermore, the experience of diapering has been identified as a significant conduit for mother-infant bonding and a source of confidence for mothers. parents' inability to provide adequate diaper changes has been linked to parenting stress and maternal depression. in households where parents experience high levels of stress and depression, children are at greater risk of social, emotional and behavioral problems.

babywearing allows the wearer to have two free hands to accomplish tasks such as laundry while caring for the baby's need to be held or be breastfed. babywearing offers a safer alternative to placing a car seat on top of a shopping cart. it also allows children to be involved in social interactions and to see their surroundings as an adult would.
Probably because they're either uncomfortable or stimulated by new sensations. They're always bundled up; then, suddenly, their most sensitive parts (especially if they have diaper rash) are wet and exposed to the open air. Then you're wiping sensitive skin, which can sting. If they're not uncomfortable, they may simply enjoy the novelty: it feels different to make those motions without a diaper on.
is there a way to 'stop' in space, or would we in theory always have velocity above 0 m/s?
if one's goal is simply to "reach space", for example in competing for the ansari x prize, horizontal motion is not needed. in this case the lowest required delta-v, to reach 100 km altitude, is about 1.4 km/s. moving slower, with less free-fall, would require more delta-v.

if the speed is higher than the orbital velocity, but not high enough to leave earth altogether (lower than the escape velocity), it will continue revolving around earth along an elliptical orbit. (d) for example horizontal speed of 7,300 to approximately 10,000 m/s for earth.

the escape velocity from earth is about at the surface. more generally, escape velocity is the speed at which the sum of an object's kinetic energy and its gravitational potential energy is equal to zero; an object which has achieved escape velocity is neither on the surface, nor in a closed orbit (of any radius). with escape velocity in a direction pointing away from the ground of a massive body, the object will move away from the body, slowing forever and approaching, but never reaching, zero speed. once escape velocity is achieved, no further impulse needs to be applied for it to continue in its escape. in other words, if given escape velocity, the object will move away from the other body, continually slowing, and will asymptotically approach zero speed as the object's distance approaches infinity, never to come back. speeds higher than escape velocity have a positive speed at infinity. note that the minimum escape velocity assumes that there is no friction (e.g., atmospheric drag), which would increase the required instantaneous velocity to escape the gravitational influence, and that there will be no future acceleration or deceleration (for example from thrust or gravity from other objects), which would change the required instantaneous velocity.

defined a little more formally, "escape velocity" is the initial speed required to go from an initial point in a gravitational potential field to infinity and end at infinity with a residual speed of zero, without any additional acceleration. all speeds and velocities are measured with respect to the field. additionally, the escape velocity at a point in space is equal to the speed that an object would have if it started at rest from an infinite distance and was pulled by gravity to that point.

in common usage, the initial point is on the surface of a planet or moon. on the surface of the earth, the escape velocity is about 11.2 km/s, which is approximately 33 times the speed of sound (mach 33) and several times the muzzle velocity of a rifle bullet (up to 1.7 km/s). however, at 9,000 km altitude in "space", it is slightly less than 7.1 km/s.

one problem with velocity is that it conflates work done with planning accuracy. in other words, a team can inflate velocity by estimating tasks more conservatively. if a team says that a task will take four hours or is worth 4 points instead of taking two hours or being worth two points, their velocity will look better (sometimes called point inflation). velocity should not be used as a performance metric.

at a specific horizontal firing speed called escape velocity, dependent on the mass of the planet, an open orbit (e) is achieved that has a parabolic path. at even greater speeds the object will follow a range of hyperbolic trajectories. in a practical sense, both of these trajectory types mean the object is "breaking free" of the planet's gravity, and "going off into space" never to return.
The question of how to "stop" in space is incomplete: stopped relative to what? There is no universal rest frame, so velocity only has meaning relative to some chosen object. You can stop relative to any particular object by matching its course and speed.
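As a minimal sketch of what "stopping" means in practice (all vectors invented for illustration): the burn you need is simply the vector difference between your velocity and the target's, and after it you are at rest relative to that target, even though both of you keep moving relative to everything else.

```python
# Sketch: "stopping" in space means zeroing your velocity *relative to*
# a chosen object. The burn required is just the vector difference
# between the two velocities. All vectors are invented (km/s).

def delta_v_to_match(v_self, v_target):
    """Velocity change so that v_self + dv equals v_target (3-vectors)."""
    return tuple(t - s for s, t in zip(v_self, v_target))

ship    = (7.8, 0.0, 0.0)   # roughly low-Earth-orbit speed, km/s
station = (7.6, 0.3, 0.0)

dv = delta_v_to_match(ship, station)
print(dv)  # approx (-0.2, 0.3, 0.0): after this burn you are "stopped"
           # relative to the station, though both still circle the Earth.
```

This is exactly what a docking spacecraft does: it is never at "absolute" zero velocity, only at zero velocity relative to its target.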
why has there been such a marked increase in spam/scam phone calls in the past few years, and is there anything that can be done about it?
the lesser and geographically uneven prevalence of mobile phone spam is attributable to geographic variation of prevalence of mobile vs non-mobile electronic communications, the higher cost (to spammers) of and technological barriers to sending mobile messages in some areas, and to law enforcement in others. today, particularly in north america, most mobile phone spam is sent from mobile devices that have prepaid unlimited messaging rate plans. while the rate plans allow for unlimited messaging, in reality the relatively slow sending rate (on the order of magnitude of 1/s) limits the number of messages that may be sent before an abusing mobile is shut down.

the law required the ftc to report back to congress within 24 months of the effectiveness of the act. no changes were recommended. it also requires the ftc to promulgate rules to shield consumers from unwanted mobile phone spam. on december 20, 2005, the ftc reported that the volume of spam had begun to level off, and due to enhanced anti-spam technologies, less was reaching consumer inboxes. a significant decrease in sexually explicit e-mail was also reported.

mobile phone spam is a form of spam (unsolicited messages, especially advertising), directed at the text messaging or other communications services of mobile phones or smartphones. as the popularity of mobile phones surged in the early 2000s, frequent users of text messaging began to see an increase in the number of unsolicited (and generally unwanted) commercial advertisements being sent to their telephones through text messaging. this can be particularly annoying for the recipient because, unlike in email, some recipients may be charged a fee for every message received, including spam. mobile phone spam is generally less pervasive than email spam, where in 2010 around 90% of email was spam. the amount of mobile spam varies widely from region to region. in north america, mobile spam steadily increased from 2008 to 2012 and is projected to account for half of all mobile phone traffic in 2019. in parts of asia up to 30% of messages were spam in 2012.

despite the high number of phone users, there has not been much phone spam, because there is a charge for sending sms. recently, there have also been observations of mobile phone spam delivered via browser push notifications. these can be a result of allowing websites which are malicious or delivering malicious ads to send a user notifications.

because of the international nature of spam, the spammer, the hijacked spam-sending computer, the spamvertised server, and the user target of the spam are all often located in different countries. as much as 80% of spam received by internet users in north america and europe can be traced to fewer than 200 spammers.

- 1996: vodacom became the first network to introduce prepay mobile phones under the 'vodago' package, using an 'intelligent network' platform. this made it possible to debit customers' accounts in real time, and led to a dramatic increase in use.

after revelations that german chancellor angela merkel's mobile was being tapped, the tech industry rushed to create a secure cell phone. according to "techrepublic", revelations from the nsa leaks "rocked the it world" and had a "chilling effect". the three biggest impacts were seen as increased interest in encryption, business leaving u.s. companies, and a reconsideration of the safety of cloud technology. the blackphone, which "the new yorker" called "a phone for the age of snowden"—described as "a smartphone explicitly designed for security and privacy", created by the makers of geeksphone, silent circle, and pgp, provided encryption for phone calls, emails, texts, and internet browsing.
The level and detail of information about people is so accurate now that these companies can afford to ring you. Before, they would need to randomly dial every number for a few hits. Now they can purchase data on things like people who have had a car crash, people who have bought a PC, etc. Our data is everywhere: what you buy, when you buy it, and so on are all easily collected. A store loyalty card isn't there because they really, really like you; it's there because it tells them whether people in a particular area prefer Pepsi or Coca-Cola. They also make mega bucks by passing these sorts of details to marketing people, who buy these lists from all over the show and then sell big lists to anyone who will pay. This means you can afford to ring only the 100,000 people on your list about that car crash they've had, rather than the entire country.
ELI5: When you mail a letter some place, you usually put a return address on it. However, there is nobody who actually checks to verify the letter came from where you say it did. You could live in California and pretend to be from Washington, and if you use a re-mailer service the postmarks will even show it's from Washington. It is the same with telephone numbers in the digital age, due to the ability of many voice over IP customers to change the phone number displayed when they call someone, much like setting a fake return address above. This allows scammers, robodialers, telemarketers, even bill collectors, to call a person without revealing their real phone number, or even to pretend to be somebody else like the IRS, a neighbor, the police department, or a business.

Detailed explanation: Telephone systems used to work using a protocol called SS7, or Signalling System 7, which uses point codes instead of IP addresses. SS7 packets contain information about the source point code, the destination, and information on who placed the call and where the call is destined. Because the telephone company had exclusive access to this network, it was not possible to fake a telephone number. Then came voice over IP, which uses TCP/IP networking to send telephone calls over a data network using things like SIGTRAN or SIP (thanks for the correction, Databeast), which help establish calls over IP networks. SIP information can be sent by the telephone company, but if you have access to a SIP provider, then it is possible to change the displayed number and make a telephone call appear to come from any phone number you wish, the same way changing the return address on a letter can. This allows spammers and scammers to hide their real telephone number and make the call appear to come from any phone number they want: for instance the IRS 800 number, your local police department's number, friends or family, or even your own local area code and prefix so they can pretend to be a local call. This makes it very easy to abuse the telephone system in a hard-to-trace manner while remaining anonymous, so your victims have little information to find or incriminate you. This is why telephone abuse is becoming more prevalent, even against numbers on national do-not-call lists.

Some people have asked why the phone companies don't block these calls, and the answer is that it was against the law and they could incur FCC fines for disrupting telephone calls. The FCC is working on new rules that would allow a user to give their phone provider permission to block these types of calls without incurring fines. It is a good policy to set your phone's default ring tone to silence or 24-hour "do not disturb", and specifically add the phone numbers of friends and family to the exclusion list so the phone will still ring when they call. And if you see a phone call placed from the first 6 digits of your real phone number, it is almost guaranteed to be a scam: if your phone number is 210 855 4444 and you see a phone call from 210 855 1234, it's a scam.
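To make the fake-return-address analogy concrete, here is a schematic Python sketch of the relevant part of a SIP INVITE; every number, domain, and address is made up, and the code only builds a text string, it doesn't place a call. The point is that the From header, which is what caller ID displays, is ordinary self-reported text, so unless a carrier verifies it the recipient sees whatever the sender typed.

```python
# Illustrative only: the From header in a SIP INVITE is self-reported,
# like the return address on an envelope. Every value here is made up,
# and this just builds a string; it does not contact anything.

claimed_number = "+12108551234"   # what the callee's phone will display
actual_source  = "198.51.100.7"   # where the packets really originate

invite = "\r\n".join([
    "INVITE sip:+12108554444@carrier.example SIP/2.0",
    f"Via: SIP/2.0/UDP {actual_source}:5060",   # network path, set hop by hop
    f"From: <sip:{claimed_number}@carrier.example>;tag=1928301774",  # caller ID is read from here
    "To: <sip:+12108554444@carrier.example>",
    "CSeq: 314159 INVITE",
])
print(invite)
```

This is also why mitigations such as STIR/SHAKEN focus on having carriers cryptographically attest to the From value: the header by itself proves nothing.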
just watched ford v. ferrari. how was the 1964 gt40 able to achieve a top speed of 210+ when modern supercars are still barely pushing 200?
the next opportunity to reach the claimed top speed was a shootout at nardò ring organized by "auto, motor und sport". ferrari sent two cars but neither could reach more than , beaten by the porsche 959 s, which attained a top speed of , and the ruf ctr, which attained a top speed of . both were limited production cars with only 29 built, so while the f40 never was the world's fastest sports car as self-appraised by ferrari, it could still claim the title of the fastest production car with over 500 units built until the arrival of the lamborghini diablo.

despite an average weight of , published performance test data shows the 1966 toronado was capable of accelerating from in 7.5 seconds, and through the standing 1/4 mile (~400 m) in 16.4 seconds at . it was also capable of a maximum speed of . testers found the toronado's handling, despite its noticeable front weight bias and consequent understeer, was not substantially different from other full-size u.s. cars when driven under normal conditions. in fact, many contemporary testers felt that the toronado was more poised and responsive than other cars, and when pushed to the limits, exhibited superior handling characteristics, although it was essentially incapable of terminal oversteer.

the british magazine autocar got hold of what they described as the first production model ferrari 212 in 1950, which outperformed any car that they had previously tested. it recorded a top speed of over and acceleration times of 0 to 60 mph (96 km/h) of 10.5 seconds and 100 mph (161 km/h) in 22.5 seconds; the magazine however noted they had limited the engine to 6,500 rpm out of respect for the newness and low mileage of the car they were using, which suggested that even better performance would be available from a fully "run in" model. the test appears also to have been the autocar team's first encounter with a five speed gear box.

the gtc4lusso's "ferrari f140" 65° v12 engine rated at at 8,000 rpm and of torque at 5,750 rpm, also thanks to a compression ratio raised to 13.5:1. ferrari claims a top speed of , unchanged from the ff, and a acceleration time of 3.4 seconds.

a "road & track" road test recorded acceleration from 0–60 mph in 22.4 seconds, "almost half of the vw’s 39.2." however the magazine noted that at , a common american cruising speed at the time, the metropolitan was revving at 4300 rpm, which shortened engine life, whereas the volkswagen could travel at the same speed at only 3000 rpm. "road & track"s testers also said that the car had “more than its share of roll and wallow on corners” and there was “little seat-of-the-pants security when the rear end takes its time getting back in line.”

on 8 april 2010, ferrari announced official details of the 599 gto. the car was a road-legal version of the 599xx track day car and at the time ferrari claimed that the 599 gto was their fastest ever road car, able to lap the fiorano test circuit in 1 minute 24 seconds, one second faster than the ferrari enzo ferrari. its engine generated a power output of at 8,250 rpm and of torque at 6,500 rpm. the car has the multiple shift program for the gearbox from the 599xx along with the exhaust system. ferrari claimed that the 599 gto could accelerate from in under 3.3 seconds and has a top speed of over . at , the 599 gto weighs almost less than the standard gtb. production was limited to 599 cars. of these, approximately 125 were produced for the united states market.

the 250 gt/l lusso used a colombo-designed v12 engine with a displacement of . this engine developed an output of at 7,500 rpm and torque at 5,500 rpm. it was able to attain a maximum speed of , thus becoming the fastest passenger car of that period, and required only 7 to 8 seconds to accelerate from . certain components, such as the valves and the crankshaft, were derived from the engine of the 250 gt swb, while others, such as the pistons and the cylinder block, were derived from the 250 gte.
It's not that they can't; Hennessey makes road cars that can hit 270 mph. A lot of factors, and a certain degree of risk/reward, come into play when you're going that fast. It's simply not worth it for a lot of manufacturers.
A modern supercar is quite different from a race car. Race cars are spartan, lightweight, have no emissions controls (to speak of), and are designed to go very, very fast. Also, most of them will kill anyone who's stupid. The 1964 GT40 was technically a prototype car. Yes, there were several, but they were all hand-built, hand-tested, hand-tuned, and driven by very talented pilots. Supercars are *production* vehicles; they're designed for the roads you and I drive on. Top speed of a supercar isn't really relevant, and if you're going to take one to the track, then you're probably rich enough to afford the modifications necessary for it to compete there. If you look at even modern-day 24 Hours of Le Mans races, you'll notice that the "production" cars there are not what you'd see on the showroom floor. They're purpose-built race cars (for example, the Corvette C7.R and C8.R).
just watched ford v. ferrari. how was the 1964 gt40 able to achieve a top speed of 210+ when modern supercars are still barely pushing 200?
<p> the next opportunity to reach the claimed top speed was a shootout at nardò ring organized by "auto, motor und sport". ferrari sent two cars but neither could reach more than , beaten by the porsche 959 s, which attained a top speed of , and the ruf ctr, which attained a top speed of . both were limited production cars with only 29 built, so while the f40 never was the world's fastest sports car as self-appraised by ferrari, it could still claim the title of the fastest production car with over 500 units built until the arrival of the lamborghini diablo. <p> despite an average weight of , published performance test data shows the 1966 toronado was capable of accelerating from in 7.5 seconds, and through the standing 1/4 mile (~400 m) in 16.4 seconds at . it was also capable of a maximum speed of . testers found the toronado's handling, despite its noticeable front weight bias and consequent understeer, was not substantially different from other full-size u.s. cars when driven under normal conditions. in fact, many contemporary testers felt that the toronado was more poised and responsive than other cars, and when pushed to the limits, exhibited superior handling characteristics, although it was essentially incapable of terminal oversteer. <p> the british magazine autocar got hold of what they described as the first production model ferrari 212 in 1950, which outperformed any car that they had previously tested. it recorded a top speed of over and acceleration times of 0 to 60 mph (96 km/h) of 10.5 seconds and 100 mph (161 km/h) in 22.5 seconds; the magazine however noted they had limited the engine to 6,500 rpm out of respect for the newness and low mileage of the car they were using, which suggested that even better performance would be available from a fully "run in" model. the test appears also to have been the autocar team's first encounter with a five speed gear box. <p> the gtc4lusso's "ferrari f140" 65° v12 engine rated at at 8,000 rpm and of torque at 5,750rpm, also thanks to a compression ratio raised to 13.5:1. ferrari claims a top speed of , unchanged from the ff, and a acceleration time of 3.4 seconds. <p> a "road & track" road test recorded acceleration from 0–60 mph in 22.4 seconds, "almost half of the vw’s 39.2." however the magazine noted that at , a common american cruising speed at the time, the metropolitan was revving at 4300 rpm, which shortened engine life, whereas the volkswagen could travel at the same speed at only 3000 rpm. "road & track"s testers also said that the car had “more than its share of roll and wallow on corners” and there was “little seat-of-the-pants security when the rear end takes its time getting back in line.” <p> on 8 april 2010, ferrari announced official details of the 599 gto. the car was a road-legal version of the 599xx track day car and at the time ferrari claimed that the 599 gto was their fastest ever road car, able to lap the fiorano test circuit in 1 minute 24 seconds, one second faster than the ferrari enzo ferrari. its engine generated a power output of at 8,250 rpm and of torque at 6,500 rpm. the car has the multiple shift program for the gearbox from the 599xx along with the exhaust system. ferrari claimed that the 599 gto could accelerate from in under 3.3 seconds and has a top speed of over . at , the 599 gto weighs almost less than the standard gtb. production was limited to 599 cars. of these, approximately 125 were produced for the united states market. 
<p> the 250 gt/l lusso used a colombo-designed v12 engine with a displacement of . this engine developed an output of at 7,500 rpm and torque at 5,500 rpm. it was able to attain a maximum speed of , thus becoming the fastest passenger car of that period, and required only 7 to 8 seconds to accelerate from . certain components, such as the valves and the crankshaft, were derived from the engine of the 250 gt swb, while others, such as the pistons and the cylinder block, were derived from the 250 gte.
First of all, you're comparing a 55-year-old racing prototype to brand-new road cars. The Ford GT40 was made to go as fast as it could, reliably enough to win a 24-hour race; a modern 'supercar' is designed to look pretty, be comfortable, and meet government safety regulations, reliably, for years. A more apt comparison is between the GT40 and a new Le Mans prototype. In the GT40's era, the biggest advantage came from being as fast as possible down the Mulsanne Straight. The course at Le Mans now has chicanes deliberately added to the straight to force drivers to slow down. There's only so much space to reach top speed now, so carrying as much speed through corners and accelerating as quickly as possible is more advantageous. Last year's Le Mans winner, the Toyota TS050, had a top speed of 217.5 mph, which isn't all that much more than the GT40. However, it reached that top speed on much shorter straights and can corner much quicker than the GT40 ever could. Dan Gurney set the fastest lap in '66, at 3:30. Last year's fastest lap was 3:17, with two chicanes in the Mulsanne Straight. Overall the TS050 is **much** faster.
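To put those lap times in perspective, here's a rough back-of-envelope comparison, a minimal sketch assuming approximate circuit lengths (roughly 13.5 km in 1966 and roughly 13.6 km today; both figures are assumptions for illustration):

```python
# Rough average-lap-speed comparison; circuit lengths are approximations.
def avg_speed_kmh(track_km: float, lap_time_s: float) -> float:
    """Average lap speed in km/h given track length and lap time."""
    return track_km / (lap_time_s / 3600.0)

gt40_1966 = avg_speed_kmh(13.5, 3 * 60 + 30)   # Gurney's 3:30 lap
ts050_2018 = avg_speed_kmh(13.6, 3 * 60 + 17)  # the modern 3:17 lap

print(f"1966 GT40 average lap speed:   {gt40_1966:.0f} km/h")
print(f"Modern TS050 average lap speed: {ts050_2018:.0f} km/h")
# Despite similar top speeds, the modern car averages roughly 17 km/h
# more per lap, even with two chicanes slowing the Mulsanne Straight.
```

The point of the arithmetic: average lap speed, not peak top speed, is what wins the race, and that's where the modern prototype pulls away.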
I don't know where you get the "barely pushing 200" from:

- 1993 McLaren F1 - 240.1 mph
- 2005 Bugatti Veyron - 253 mph
- 2007 Shelby SuperCars Ultimate Aero - 256.18 mph
- 2010 Bugatti Veyron Super Sport - 267.857 mph
- 2014 Hennessey Venom GT - 270.49 mph
- 2017 Koenigsegg Agera RS - 277.87 mph
- 2019 Bugatti Chiron - 304.77 mph

Plus, as others have said, these are production cars. The GT40 was a purpose-built race car. Modern NASCAR race cars are purpose-built to do 210-ish mph on the top end and average about 180 mph for 500 miles (depending on the track). Le Mans cars average around 150 mph over the course of the race. Funny cars and dragsters are purpose-built and regularly hit 330+ mph in under 4 seconds. By comparison, the 2017 Bugatti Chiron took 32.6 seconds to reach 249 mph.
someone dies before they get a chance to retire. what happens to all of their social security benefits?
<p> similarly to u.s. citizens, a person who worked in h-1b status may be eligible to receive social security benefit payments at retirement. generally, a worker must have worked in the u.s. and paid social security taxes obtaining at least 40 credits before retirement. the person will not be eligible for payments if the person moves outside the u.s. and is a citizen of a country with a social insurance system or a pension system that pays periodic payments upon old age, retirement, or death. <p> if a worker covered by social security dies, a surviving spouse can receive survivors' benefits. in some instances, survivors' benefits are available even to a divorced spouse. a father or mother with minor or disabled children in his or her care can receive benefits which are not actuarially reduced. the earliest age for a non-disabled widow(er)'s benefit is age 60. the benefit is equal to the worker's basic retirement benefit (pia) (reduced if the deceased was receiving reduced benefits) for spouses who are at, or older than, normal retirement age. if the surviving spouse starts benefits before normal retirement age, there is an actuarial reduction. if the worker earned delayed retirement credits by waiting to start benefits after their normal retirement age, the surviving spouse will have those credits applied to their benefit. <p> some federal, state, local and education government employees pay no social security but have their own retirement, disability systems that nearly always pay much better retirement and disability benefits than social security. these plans typically require vesting—working for 5–10 years for the same employer before becoming eligible for retirement. but their retirement typically only depends on the average of the best 3–10 years salaries times some retirement factor (typically 0.875%–3.0%) times years employed. this retirement benefit can be a "reasonably good" (75–85% of salary) retirement at close to the monthly salary they were last employed at. for example, if a person joined the university of california retirement system at age 25 and worked for 35 years they could receive 87.5% (2.5% × 35) of their average highest three year salary with full medical coverage at age 60. police and firemen who joined at 25 and worked for 30 years could receive 90% (3.0% × 30) of their average salary and full medical coverage at age 55. these retirements have cost of living adjustments (cola) applied each year but are limited to a maximum average income of $350,000/year or less. spousal survivor benefits are available at 100–67% of the primary benefits rate for 8.7% to 6.7% reduction in retirement benefits, respectively. ucrp retirement and disability plan benefits are funded by contributions from both members and the university (typically 5% of salary each) and by the compounded investment earnings of the accumulated totals. these contributions and earnings are held in a trust fund that is invested. the retirement benefits are much more generous than social security but are believed to be actuarially sound. the main difference between state and local government sponsored retirement systems and social security is that the state and local retirement systems use compounded investments that are usually heavily weighted in the stock market securities—which historically have returned more than 7.0%/year on average despite some years with losses. short term federal government investments may be "more" secure but pay much lower average percentages. 
nearly all other federal, state and local retirement systems work in a similar fashion with different benefit retirement ratios. some plans are now combined with social security and are "piggy backed" on top of social security benefits. for example, the current federal employees retirement system, which covers the vast majority of federal civil service employees hired after 1986, combines social security, a modest defined-benefit pension (1.1% per year of service) and the defined-contribution thrift savings plan. <p> due to changing needs or personal preferences, a person may go back to work after retiring. in this case, it is possible to get social security retirement or survivors benefits and work at the same time. a worker who is of full retirement age or older may (with spouse) keep all benefits, after taxes, regardless of earnings. but, if this worker or the worker's spouse are younger than full retirement age and receiving benefits and earn "too much", the benefits will be reduced. if working under full retirement age for the entire year and receiving benefits, social security deducts $1 from the worker's benefit payments for every $2 earned above the annual limit of $15,120 (2013). deductions cease when the benefits have been reduced to zero and the worker will get one more year of income and age credit, slightly increasing future benefits at retirement. for example, if you were receiving benefits of $1,230/month (the average benefit paid) or $14,760 a year and have an income of $29,520/year above the $15,120 limit ($44,640/year) you would lose all ($14,760) of your benefits. if you made $1,000 more than the $15,120/year limit you would "only lose" $500 in benefits. you would get no benefits for the months you work until the $1 deduction for $2 income "squeeze" is satisfied. your first social security check will be delayed for several months—the first check may only be a fraction of the "full" amount. the benefit deductions change in the year you reach full retirement age and are still working—social security only deducts $1 in benefits for every $3 you earn above $40,080 in 2013 for that year and has no deduction thereafter. the income limits change (presumably for inflation) year by year. <p> for those few cases where a worker's earnings over a long working lifetime were too low to receive full retirement credits, so that the recipient would receive a very small social security retirement benefit, a "special minimum benefit" (special minimum pia) provides a "minimum" of $804 per month in social security benefits in 2013. to be eligible the recipient along with their auxiliaries and survivors must have very low assets and not be eligible for other retirement system benefits. about 75,000 people in 2013 receive this benefit. <p> retired members of the united states armed forces who cease to be u.s. citizens may lose their entitlement to veterans' benefits, if the right to benefits is dependent on the retiree's continued military status. <p> in late 2010, discussions related to cutting federal taxes raised anew the following concern: how much would an annuity cost a retiree if he or she had to replace his or her social security income? assuming that the average benefit from social security is $14,000 per year, the replacement cost would be about $250,000 for a 66-year-old individual. the figures are based upon the individual receiving an inflation-adjusted stream that would pay for life and be insured.
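For anyone who wants to see the earnings-test arithmetic from the passage above in code form, here's a minimal sketch (the function name and structure are my own; the figures are the 2013 numbers quoted above):

```python
# Social Security earnings test (2013 rules, per the figures above):
# $1 of benefits withheld for every $2 earned over the annual limit.
def withheld_benefits(annual_benefit: float, earnings: float,
                      limit: float = 15120.0) -> float:
    """Return how much of the annual benefit is withheld."""
    excess = max(0.0, earnings - limit)
    return min(annual_benefit, excess / 2.0)

# Worked example from the text: $14,760/yr benefit, $44,640 earnings
# -> $29,520 over the limit -> $14,760 withheld (all of it).
print(withheld_benefits(14760, 44640))  # 14760.0
# Earning $1,000 over the limit only costs $500 in benefits.
print(withheld_benefits(14760, 16120))  # 500.0
```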
Social security isn't a personal bank account. There's no fixed total sum of money each person is entitled to. There's a spousal benefit if the spouse survives. There's also a children's benefit with some limits. If there's no spouse or qualifying children, there's nobody entitled to a benefit. So there's no benefit. Because it's not a personal bank account, there's no money that then has to get redirected somewhere else.
why don't general physicians cover teeth?
<p> often oral health education and training is limited for healthcare aides and nurses, leading to suboptimal oral care for dependent patients in long-term care and hospital settings. the toothette is inaccurately used in the long-term care and hospital setting as the predominant tool for oral care, and toothbrushes are rarely used. grap et al. found that nursing staff in an intensive care unit most commonly use toothettes and mouthwash as the predominant tool for oral care, especially for intubated patients. this is concerning because it is well-established that the toothette does not effectively remove oral biofilm, and the toothbrush is significantly better at promoting health of the gums and controlling oral biofilm. when the efficacy of the toothbrush and toothette are compared, the toothbrush is better at removing plaque from the oral cavity. <p> one of their main concerns is tooth decay prevention. not only do they work with the teeth, pediatric dentists also look at the gums, throat muscles and nervous system of the head, neck and jaw, the tongue, and salivary glands. they do this to check for lumps, swellings, ulcers, discolorations, and other anomalies. another duty of theirs is to test for oral cancer and perform biopsies, if needed. <p> the faculty of general dental practice of the royal college of surgeons of england publication selection criteria in dental radiography holds that given current evidence full mouth series are to be discouraged due to the large numbers of radiographs involved, many of which will not be necessary for the patient's treatment. an alternative approach using bitewing screening with selected periapical views is suggested as a method of minimising radiation dose to the patient while maximizing diagnostic yield. contrary to advice that emphasises only conducting radiographs when in the patient's interest, recent evidence suggests that they are used more frequently when dentists are paid under fee-for-service <p> dentists also encourage prevention of oral diseases through proper hygiene and regular, twice yearly, checkups for professional cleaning and evaluation. oral infections and inflammations may affect overall health and conditions in the oral cavity may be indicative of systemic diseases, such as osteoporosis, diabetes, celiac disease or cancer. many studies have also shown that gum disease is associated with an increased risk of diabetes, heart disease, and preterm birth. the concept that oral health can affect systemic health and disease is referred to as "oral-systemic health". <p> by nature of their general training they can carry out the majority of dental treatments such as restorative (fillings, crowns, bridges), prosthetic (dentures), endodontic (root canal) therapy, periodontal (gum) therapy, and extraction of teeth, as well as performing examinations, radiographs (x-rays), and diagnosis. dentists can also prescribe medications such as antibiotics, sedatives, and any other drugs used in patient management. <p> toothettes and foam swabs are effective at stimulating the tissue between oral care, and are used for patients who are unable to care for their own oral health. oral swabs are especially helpful when a patient suffers from gross mucositis, potentially arising from chemotherapy. this is because the oral swabs can apply moisture to the oral cavity, therefore soothing the tissues.
additionally, toothettes are indicated when toothbrushing is contraindicated, particularly when an individual's platelet counts are below 40000-50000 and when there are issues accessing the oral cavity. it is also necessary to use oral swabs for oral care when an individual has thrombocytopenia in order to reduce risk of exacerbated bleeding. <p> there are a number of recommendations for dentists that can help reduce the risk of developing musculoskeletal pain. the use of magnification or loupes and good lighting aids an improvement in posture by preventing the need to crane the neck and back for better vision. the use of a saddle seat also assists improved posture by keeping the spine in its natural 's' curve. patients should be positioned with enough distance to allow the shoulders to be in a relaxed, neutral position and elbows at about a 90 degree or less flexion. however, according to a cochrane review published in 2018, there is insufficient evidence about the effects of ergonomic interventions in preventing musculoskeletal disorders among dentists and other dental care practitioners.
Dentistry is more complicated than you'd think. Dentistry needs to consider not only teeth, but the entire oral cavity. It's not just making sure someone doesn't have a cavity; you also need to understand how the bone structure of the skull and the associated soft tissue play into things.
why do people go to different doctors for dentistry, surgery, and primary care but pets go to one vet for everything?
<p> most vets work in clinical settings, treating animals directly. these vets may be involved in a general practice, treating animals of all types; may be specialized in a specific group of animals such as companion animals, livestock, laboratory animals, zoo animals or horses; or may specialize in a narrow medical discipline such as surgery, dermatology, laboratory animal medicine, or internal medicine. <p> vets are often assisted by registered veterinary nurses, who are able to both assist the vet and to autonomously practice a range of skills of their own, including minor surgery under direction from a responsible vet. <p> as with healthcare professionals, vets face ethical decisions about the care of their patients. current debates within the profession include the ethics of purely cosmetic procedures on animals, such as declawing of cats, docking of tails, cropping of ears and debarking on dogs. <p> pets for vets is a 501(c)(3) non-profit organization in the united states dedicated to providing a second chance to shelter dogs by rescuing, training, and matching them with american veterans who need a companion pet. it was founded in 2009 to help veterans who were suffering from combat stress and other emotional issues. each companion dog is rescued in connection with local animal rescue groups. <p> pets for vets developed a program focusing on addressing these issues by bringing together animals needing to be rescued and veterans needing a companion for a better quality of life. not every veteran qualifies for a psychiatric service dog, however everyone who wants one can benefit from a companion or pet animal. <p> as opposed to human medicine, general practice veterinarians greatly outnumber veterinary specialists. most veterinary specialists work at the veterinary schools, or at a referral center in large cities. as opposed to human medicine, where each organ system has its own medical and surgical specialties, veterinarians often combine both the surgical and medical aspect of an organ system into one field. the specialties in veterinary medicine often encompass several medical and surgical specialties that are found in human medicine. <p> veterinarians treat disease, disorder or injury in animals, which includes diagnosis, treatment and aftercare. the scope of practice, specialty and experience of the individual veterinarian will dictate exactly what interventions they perform, but most will perform surgery (of differing complexity).
The extent to which a human will pay for/enroll in specialized services and micro-management of their physical condition created a large market of providers. In other words, there is enough money and patronage in the broad field to allow doctors to focus on the education, experience and infrastructure required to be a top pro in a given field. The extent to which a human will pay "good money" to resolve an animal's physical difficulty is much less. Yes, there are pet owners out there who will pony up money (and there are some vets who do specialize, due to the growing number of people willing to spend a fortune on their pets). But for the most part, it's "blood work and we'll get the lab results back to you," and then either "we have a cheap medicine that can make things okay for your pet" or "you might want to consider putting your pet down" as the typical options. Not because vets are unwilling, but because the free market has tested this out for a very long time and the results are in: people will pay a limited amount for a cured animal, a very limited amount for treatment of an animal who can't be cured, and that's about it. Beyond that, it's Old Yeller, not to be cold about it. I have dropped easily 15 grand on pets at the vet; I'm a softy when it comes to that (and no, I can't afford it). I spend about 250 a month at this point on two elderly cats who would probably die within 60 to 90 days without their medicine, certainly less than a year. What would a human pay to keep their elderly parents alive? Everything they own. So with more money and customers comes a greater ability to sustain the infrastructure required to specialize.
why are there no sentient plant-based species? why is base intelligence so abundant and diverse in animals, but non-existent in the plant kingdom?
<p> it has been argued that although plants are capable of adaptation, it should not be called intelligence "per se", as plant neurobiologists rely primarily on metaphors and analogies to argue that complex responses in plants can only be produced by intelligence. "a bacterium can monitor its environment and instigate developmental processes appropriate to the prevailing circumstances, but is that intelligence? such simple adaptation behaviour might be bacterial intelligence but is clearly not animal intelligence." however, plant intelligence fits a definition of intelligence proposed by david stenhouse in a book about evolution and animal intelligence, in which he describes it as "adaptively variable behaviour during the lifetime of the individual". critics of the concept have also argued that a plant cannot have goals once it is past the developmental stage of seedling because, as a modular organism, each module seeks its own survival goals and the resulting organism-level behavior is not centrally controlled. this view, however, necessarily accommodates the possibility that a tree is a collection of individually intelligent modules cooperating, competing, and influencing each other to determine behavior in a bottom-up fashion. the development into a larger organism whose modules must deal with different environmental conditions and challenges is not universal across plant species, however, as smaller organisms might be subject to the same conditions across their bodies, at least, when the below and aboveground parts are considered separately. moreover, the claim that central control of development is completely absent from plants is readily falsified by apical dominance. <p> it is also possible to see in animals that a high genetic diversity is beneficial in providing resiliency against harsh abiotic stressors. this acts as a sort of stock room when a species is plagued by the perils of natural selection. a variety of galling insects are among the most specialized and diverse herbivores on the planet, and their extensive protections against abiotic stress factors have helped the insect in gaining that position of honor. <p> it has been observed that predators tend to select the most common morph in a population or species. the "search image hypothesis" proposes that an individual's sensory system becomes better able to detect a specific prey phenotype after recent experience with that same phenotype. it is clear that plant-pollinator interactions differ from predator-prey relationships, as it is beneficial to both the plant and animal for the pollinator to locate the plant. however, it has been suggested that cognitive constraints on short-term memory capabilities may limit pollinators from identifying and handling more than one floral type at a time, making plant-pollinator relationships theoretically similar to predator-prey relationships in regards to the ability to identify food sources. although plant traits that have evolved to attract pollinators are not cryptic, corolla colors can be more or less conspicuous with the background and pollinators that are more efficient at detecting a particular morph will minimize their search time. studies have demonstrated that the degree of frequency-dependence increases with the number of flowers visited, which suggests this is a learned response that develops gradually. 
<p> the concepts of plant perception, communication, and intelligence have parallels in other biological organisms for which such phenomena appear foreign to or incompatible with traditional understandings of biology, or have otherwise proven difficult to study or interpret. similar mechanisms exist in bacterial cells, choanoflagellates, fungal hyphae, and sponges, among many other examples. all of these organisms, despite being devoid of a brain or nervous system, are capable of sensing their immediate and momentary environment and responding accordingly. in the case of unicellular life, the sensory pathways are even more primitive in the sense that they take place on the surface of a single cell, as opposed to within a network of many related cells. <p> the plants are of considerable biological and evolutionary interest because of their adaptations to particular pollinators, such as flies in the families tabanidae, acroceridae, bombyliidae, and most spectacularly, nemestrinidae. <p> they are used as model systems for higher plants because of their relatively high homogeneity and high growth rate, while still exhibiting the general behaviour of plant cells. the diversity of cell types within any part of a naturally grown plant "(in vivo)" makes it very difficult to investigate and understand some general biochemical phenomena of living plant cells. the transport of a solute in or out of the cell, for example, is difficult to study because the specialized cells in a multicellular organism behave differently. cell suspension cultures such as tobacco by-2 provide good model systems for these studies at the level of a single cell and its compartments because tobacco by-2 cells behave very similarly to one another. the influence of neighbouring cells' behaviour in the suspension is not as important as it would be in an intact plant. as a result, any changes observed after a stimulus is applied can be statistically correlated and it can be decided whether these changes are reactions to the stimulus or merely coincidental. at present, by-2 cells are relatively well understood and often used in research. this model plant system is especially useful for studies of cell division, cytoskeletons, plant hormone signaling, intracellular trafficking, and organelle differentiation. <p> plant defense may explain, in part, why herbivores employ different life history strategies. monophagous species (animals that eat plants from a single genus) must produce specialized enzymes to detoxify their food, or develop specialized structures to deal with sequestered chemicals. polyphagous species (animals that eat plants from many different families), on the other hand, produce more detoxifying enzymes (specifically mfo) to deal with a range of plant chemical defenses. polyphagy often develops when a herbivore's host plants are rare as a necessity to gain enough food. monophagy is favored when there is interspecific competition for food, where specialization often increases an animal's competitive ability to use a resource.
> Is there something inherent to “plant cells” that prohibits that possibility? It is more something inherent to plant biology that prohibits the possibility, and it is related to the lack of nerves within plants. Brains require *a lot* of energy! The human brain consumes about 20% of the total energy used by the human body, which is immense considering it is only about 2% of the total weight. A plant sitting out in the sun just isn't going to soak up enough energy through photosynthesis to maintain a significant brain. On top of that, the energy extracted isn't enough to run all the other things required to act on such thinking; a plant can't power a beating heart for a robust circulatory system, or a respiratory system capable of supporting muscle cells (which plants also generally lack). Without all of those things a nervous system is fairly useless (what would it control?), and the result is that even if a plant somehow had a free brain it would be pointless!
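A rough back-of-envelope sketch of the energy problem; every number here is an order-of-magnitude assumption (a ~20 W human brain, ~1000 W/m² peak sunlight, ~1% photosynthetic efficiency, and strong sun available perhaps a quarter of the time on average):

```python
# Back-of-envelope: how much leaf area would a "plant brain" need?
BRAIN_POWER_W = 20.0         # human brain, ~20% of a ~100 W resting body
PEAK_SUNLIGHT_W_M2 = 1000.0  # full midday sun
PHOTOSYNTH_EFF = 0.01        # ~1% of sunlight captured as usable energy
DUTY_CYCLE = 0.25            # strong sun only part of the day

avg_capture_w_m2 = PEAK_SUNLIGHT_W_M2 * PHOTOSYNTH_EFF * DUTY_CYCLE
leaf_area_m2 = BRAIN_POWER_W / avg_capture_w_m2

print(f"Average capture: {avg_capture_w_m2:.1f} W per m^2 of leaf")
print(f"Leaf area just to feed a brain: {leaf_area_m2:.0f} m^2")
# ~8 m^2 of leaf dedicated to a brain alone, before powering anything
# that could act on its "thoughts" - no muscles, no circulation.
```

Even if the assumed numbers are off by a factor of two in either direction, the conclusion holds: photosynthesis is a very thin energy budget to run a brain on.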
why do unreleased cars get tested with the black wrap all over them?
<p> due to its high development costs in the midst of a competitive market, these testing sessions are intended to be as secretive as possible to prevent competitors gaining an advantage and sometimes developing a similar vehicle of their own. it has become a common practice for car manufacturers to mask details of their prototypes to make the car very difficult to be recognised, sometimes using "protection cars" that drive alongside the test car to block the view of the prototype from photographers. aside from the motoring press, lehmann's photographs have appeared in the german news magazine "stern", and he had additionally offered to sell his photographs to rival japanese car manufacturers. <p> the v5 document records who the registered keeper of the vehicle is; it does not establish legal ownership of the vehicle. these documents used to be blue on the front. however, they were changed to red in 2010/11 after approximately 2.2 million blank blue v5 documents were stolen, allowing thieves to clone stolen vehicles much more easily. <p> very little is known about the lm variant due to non-availability of records, though there are photos to suggest that at least five cars were produced (three in dark green, one in white and one in the same blue as the standard car which is believed to be the prototype). the cars were sold to a buyer in japan. the blue car was bought by a car collector in the uk sometime after 2013, making it the first xjr-15 lm outside of japan and thus making the existence of such a variant known. <p> ford used several models over the years. they were coded by the color of the plastic wire strain relief, or "grommet" as it is most often called, in order to make them easy to identify. in addition to the color-coding, the modules may have a keyway molded into the electrical connectors to prevent accidental use in the wrong vehicle. <p> because of the unavailability of certain car models, demand for grey market vehicles arose in the late 1970s. importing them into the us involved modifying or adding certain equipment, such as headlamps, sidemarker lights, bumpers, and a catalytic converter as required by the relevant regulations. the nhtsa and epa would review the paperwork and then approve possession of the vehicle. it was also possible for these agencies to reject the application and order the automobile destroyed or re-exported. the grey market provided an alternative method for americans to acquire desirable vehicles, and still obtain certification. tens of thousands of cars were imported this way each year during the 1980s. <p> the all out format was created because of rich christensen's displeasure with 'sandbagging' – feathering or decelerating to create a false elapsed time and hide actual performance – on the original "pinks". this format, where brothers and technical directors adam and nate pritchett rigorously select a group of closely matched cars, was made to provide the drama associated with closer racing. <p> manufacturers may give the same item different model numbers in different countries, even though the functions of the item are identical, so that they can identify grey imports. manufacturers can also use supplier codes to enable similar tracing of grey imports. parallel market importers often decode the product in order to avoid the identification of the supplier.
in the united states, courts have ruled that decoding is legal; however, manufacturers and brand owners may have rights if they can prove that the decoding has materially altered the product, where certain trademarks have been defaced, or that the decoding has removed the manufacturer's ability to enforce quality-control measures. for example, if the decoding defaces the logo of the product or brand, or if the batch code is removed, preventing the manufacturer from recalling defective batches.
The manufacturer doesn't want their competitors or their customers to know exactly what they are developing until the product is actually released. It takes years to develop a product like a new car. If Chrysler were to know how the 2021 Corvette was designed they might borrow from that to make their own sports car. And if the public knows too much about what's coming out in the future they might not buy what you're trying to sell right now.
why do cartoons show cats being scared of dogs when in reality most dogs are scared of cats?
<p> kittens are vulnerable because they like to find dark places to hide, sometimes with fatal results if they are not watched carefully. cats have a habit of seeking refuge under or inside cars or on top of car tires during stormy or cold weather. this often leads to broken bones, burns, heat stroke, damaged internal organs or death. <p> the signals and behaviors that cats and dogs use to communicate are different and can lead to signals of aggression, fear, dominance, friendship or territoriality being misinterpreted by the other species. dogs have a natural instinct to chase smaller animals that flee, an instinct common among cats. most cats flee from a dog, while others take actions such as hissing, arching their backs and swiping at the dog. after being scratched by a cat, some dogs can become fearful of cats. <p> - when cats are frightened they tend to stretch their backs to appear bigger and more menacing. if that doesn't help they will quickly flee or jump past their aggressor. cats also have a tendency to climb up trees and often refuse (or are unable) to come down, forcing their owner to call the fire service to rescue the cat. this type of behavior led to the expressions "scaredy-cat", "acting like a pussy" and the dutch saying "een kat in het nauw maakt rare sprongen" (translation: "a threatened cat makes odd jumps", which means "desperate needs lead to desperate deeds."). <p> - the cat is a small innocent cat which the little dog is terrified of, despite its being harmless. the big dog's bark causes the cat to freeze in terror; however, the cat is not afraid of the big dog unless he barks. <p> the reason that cats are seen as "yōkai" in japanese mythology is attributed to many of the characteristics that they possess: for example, the way the irises of their eyes change shape depending on the time of day, the way their fur seems to cause sparks due to static electricity when they are petted (especially in winter), the way they sometimes lick blood, the way they can walk without making a sound, their wild nature that remains despite the gentleness they can show at times, the way they are difficult to control (unlike dogs), the sharpness of their claws and teeth, their nocturnal habits, and their speed and agility. <p> the comedy films "cats & dogs," released in 2001, and its sequel "," released in 2010, both projected and amplified the above-mentioned antipathy between dogs and cats into an all-out war between the two species wherein cats are shown as being out-and-out enemies of humans, whereas dogs are shown as being more sympathetic to humans. <p> domestic cats, especially young kittens, are known for their love of play. this behavior mimics hunting and is important in helping kittens learn to stalk, capture, and kill prey. cats also engage in play fighting, with each other and with humans. this behavior may be a way for cats to practice the skills needed for real combat, and might also reduce any fear they associate with launching attacks on other animals.
Back in the day, people kept their pets outdoors more often than now. Dogs would be leashed outside or kept in a fenced yard, often as guard animals, and cats would typically be put out for the night. Strange animals would often come into contact with one another with little human supervision, and the territorial dogs would chase away anything smaller than themselves, often killing any cat they caught. These days, pets spend more of their time supervised and indoors, and have a better chance at acclimating to one another.
when does a country go from a developing nation to a developed nation and when was this first coined?
<p> bullet::::- the origin and definition of developing countries: like walt whitman rostow, mohammed tamim believes that, beginning with the industrial revolution in england during the 18th and 19th centuries, developing countries can be defined as countries in transition from various traditional ways of life toward the modern way of life. <p> there is criticism for using the term "developing country". the term could imply inferiority of this kind of country compared with a developed country. it could assume a desire to develop along the traditional western model of economic development which a few countries, such as cuba and bhutan, choose not to follow. alternative measurements such as gross national happiness have been suggested as important indicators. <p> terms linked to the concept "developed country" include "advanced country", "industrialized country", "'more developed country" (mdc), "more economically developed country" (medc), "global north country", "first world country", and "post-industrial country". the term industrialized country may be somewhat ambiguous, as industrialisation is an ongoing process that is hard to define. the first industrialized country was the united kingdom, followed by belgium. later it spread further to germany, united states, france and other western european countries. according to some economists such as jeffrey sachs, however, the current divide between the developed and developing world is largely a phenomenon of the 20th century. <p> the concept of the developing nation is found, under one term or another, in numerous theoretical systems having diverse orientations — for example, theories of decolonization, liberation theology, marxism, anti-imperialism, modernization, social change and political economy. <p> the term "developing" describes a currently observed situation and not a changing dynamic or expected direction of progress. since the late 1990s, developing countries tended to demonstrate higher growth rates than developed countries. developing countries include, in decreasing order of economic growth or size of the capital market: newly industrialized countries, emerging markets, frontier markets, least developed countries. therefore, the least developed countries are the poorest of the developing countries. <p> starting in the report for 2007, the first category is referred to as "developed countries", and the last three are all grouped in "developing countries". the original "very high human development" (0.8 to 1) has been split into two as above in the report for 2007. <p> ‘"developing countries"’ loosely refers to the global south. following independence and decolonization in the 20th century, these states had dire need of new infrastructure, industry and economic stimulation. many relied on foreign investment. this funding focused on improving infrastructure and industry, but led to a system of systemic exploitation. they exported raw materials, such as rubber, for a bargain. companies based in the western world have often used the cheaper labor in the global south for production. the west benefited significantly from this system, but left the global south undeveloped.
There's technically no clear-cut definition and thus no 'true' answer to what makes a nation developed rather than developing. Definitions differ and can carry different ideological content, since 'developed' versus 'developing' touches several aspects (which were already dealt with in the other comments). The website of the OECD states this clearly while referring to the United Nations. Some aspects can be distilled, though, by comparing the economies on the 'developed' list with those that are not:

- Economy: it is claimed that an economy that diversifies itself well enough and is not solely dependent on resource export or industrial manufacturing, thus having a wider palette of economic activities, is developed. Attached to this is the claim that a robust economy (one with these diversified activities) will be able to weather an economic crisis, and so shows signs of being mature, thus developed. A good example is the developing character of the Russian economy: its performance dropped severely when the economic crisis started in 2007, because the slowdown of European economies cut resource exports, on which too big a portion of the Russian economy depended.

- Democracy and free speech: this is a more difficult and ambiguous matter, where you get closer to an ideological story than anything else. What counts as democracy? These parameters seem to favor certain nation models more than others. There's a huge difference between democracy and free speech in the United States and, for example, Western Europe or Japan and South Korea, yet no one is going to argue that those nations are not developed. This difference in state models has become very apparent now that the Princeton report states the US is more of an oligarchy than a democracy.

That's why, taking all this into account, what makes an economy developed? In my view, it would be an economy that has the maturity (diversity in economic activity) to withstand economic shocks without overly serious long-term consequences, and that has the capacity to take care of the people living in that country, in several ways:

a. Jobs: people can be absorbed easily into the economy

b. Opportunities: people living in the economy enjoy a wide range of consumer goods and privileges

c. Security: the economy has the capacity to take care of those who cannot enjoy the full benefits the system has to offer (for whatever reason)
what is a proxy, how do i get one and why do i want to?
<p> proxy is defined by supreme courts as "an "authority" or power to "do" a certain thing." a person can confer on his proxy any power which he himself possesses. he may also give him secret instructions as to voting upon particular questions. but a proxy is ineffectual when it is contrary to law or public policy. where the proxy is duly appointed and he acts within the scope of the proxy, the person authorizing the proxy is bound by his appointee's acts, including his errors or mistakes. when the appointer sends his appointee to a meeting, the proxy may do anything at that meeting necessary to a full and complete exercise of the appointer's right to vote at such meeting. this includes the right to vote to take the vote by ballot, or to adjourn (and, hence, he may also vote on other ordinary parliamentary motions, such as to refer, postpone, reconsider, etc., when necessary or when deemed appropriate and advantageous to the overall object or purpose of the proxy). <p> an open proxy is a proxy server that is accessible by any internet user. generally, a proxy server only allows users "within a network group" (i.e. a closed proxy) to store and forward internet services such as dns or web pages to reduce and control the bandwidth used by the group. with an "open" proxy, however, any user on the internet is able to use this forwarding service. <p> a proxy list is a list of open http/https/socks proxy servers all on one website. proxies allow users to make indirect network connections to other computer network services. proxy lists include the ip addresses of computers hosting open proxy servers, meaning that these proxy servers are available to anyone on the internet. proxy lists are often organized by the various proxy protocols the servers use. many proxy lists index web proxies, which can be used without changing browser settings. <p> an open proxy is a forwarding proxy server that is accessible by any internet user. as of 2008, gordon lyon estimates there are "hundreds of thousands" of open proxies on the internet. an "anonymous open proxy" allows users to conceal their ip address while browsing the web or using other internet services. there are varying degrees of anonymity however, as well as a number of methods of 'tricking' the client into revealing itself regardless of the proxy being used. <p> in computer programming, the proxy pattern is a software design pattern. a "proxy", in its most general form, is a class functioning as an interface to something else. the proxy could interface to anything: a network connection, a large object in memory, a file, or some other resource that is expensive or impossible to duplicate. in short, a proxy is a wrapper or agent object that is being called by the client to access the real serving object behind the scenes. use of the proxy can simply be forwarding to the real object, or can provide additional logic. in the proxy, extra functionality can be provided, for example caching when operations on the real object are resource intensive, or checking preconditions before operations on the real object are invoked. for the client, usage of a proxy object is similar to using the real object, because both implement the same interface. <p> in computer networks, a proxy server is a server (a computer system or an application) that acts as an intermediary for requests from clients seeking resources from other servers. 
a client connects to the proxy server, requesting some service, such as a file, connection, web page, or other resource available from a different server and the proxy server evaluates the request as a way to simplify and control its complexity. proxies were invented to add structure and encapsulation to distributed systems. <p> an "open" proxy is one which will create connections for "any" client to "any" server, without authentication. like open relays, open proxies were once relatively common, as many administrators did not see a need to restrict access to them.
Basically it makes it so people can't track your IP address or trace downloads and other browsing back to you: your traffic goes to the proxy first, which forwards it on, so the site you visit sees the proxy's address instead of yours.
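For a concrete sense of how that works in practice, here's a minimal sketch using Python's `requests` library; the address `proxy.example.com:8080` is a placeholder, not a real server:

```python
import requests

# Placeholder address - you'd substitute a proxy you actually have
# access to (a paid proxy service, a server you run yourself, etc.).
proxies = {
    "http": "http://proxy.example.com:8080",
    "https": "http://proxy.example.com:8080",
}

# The request goes to the proxy, which forwards it to the destination.
# The destination server sees the proxy's IP address, not yours.
response = requests.get("https://example.org", proxies=proxies, timeout=10)
print(response.status_code)
```

Most browsers and operating systems also let you set a proxy globally in their network settings, so every connection is forwarded the same way without any code.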
how come we can see the contrails of planes when they are high up but not when they are low down?
<p> this is when an aircraft is moving at very low altitude over a surface that has a regular repeating pattern, for example ripples on water. the pilot's eyes can misinterpret the altitude if each eye lines up different parts of the pattern rather than both eyes lining up on the same part. this leads to a large error in altitude perception, and any descent can result in impact with the surface. this illusion is of particular danger to helicopter pilots operating at a few metres altitude over calm water. <p> in good weather a pilot can fly by looking out the window. however, when flying in cloud or at night at least one gyroscopic instrument is necessary to orient the aircraft, being either an artificial horizon, turn and slip, or a gyro compass. <p> therefore, when a skydiver exits a forward-moving aircraft such as an aeroplane, the relative wind emanates from the direction the aeroplane is facing due to the skydiver's initial forward ( horizontal ) momentum. <p> anyone in an aircraft that is making a coordinated turn, no matter how steep, will have little or no sensation of being tilted in the air unless the horizon is visible. similarly, it is possible to gradually climb or descend without a noticeable change in pressure against the seat. in some aircraft, it is possible to execute a loop without pulling negative g so that, without visual reference, the pilot could be upside down without being aware of it. this is because a gradual change in any direction of movement may not be strong enough to activate the fluid in the vestibular system, so the pilot may not realize that the aircraft is accelerating, decelerating, or banking. <p> to make an aircraft descend (i.e. lose altitude), the pilot will "lower the nose" lower than it was in the cruise attitude. for many light aircraft, this will correspond to a sight picture where the aircraft nose appears to be "slightly below" the horizon. the actual amount of down movement usually will not exceed about 10 degrees for most "normal" descents. <p> due to the fog, neither crew was able to see the other plane on the runway ahead of them. in addition, neither of the aircraft could be seen from the control tower, and the airport was not equipped with ground radar. <p> aerodrome or "tower" controllers work in tall towers with large windows allowing them, in good weather, to see the aircraft flying in the vicinity of the aerodrome, unless the aircraft is not in sight from the tower (e.g. a helicopter departing from a ramp area). also, aircraft in the vicinity of an aerodrome tend to be flying at lower speeds. therefore, if the aerodrome controller can see both aircraft, or both aircraft report that they can see each other, or a following aircraft reports that it can see the preceding one, controllers may reduce the standard separation to whatever is adequate to prevent a collision.
The same reason you can't see your breath on a warm day. Cold air cools down your breath, or in this case the jet exhaust, and allows the water vapour to condense into droplets (and, at altitude, ice crystals) which are then visible. At cruising altitude the air is many tens of degrees below freezing, so the exhaust's vapour condenses almost instantly; down low the air is usually warm enough to hold that water as invisible vapour.
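A minimal sketch of the underlying physics, using the Magnus approximation for saturation vapour pressure (the constants are a common published fit; treat the exact outputs as approximate):

```python
import math

def saturation_vapour_pressure_hpa(temp_c: float) -> float:
    """Magnus approximation: max water vapour pressure air can hold (hPa)."""
    return 6.112 * math.exp(17.62 * temp_c / (243.12 + temp_c))

ground = saturation_vapour_pressure_hpa(15.0)   # mild day at ground level
cruise = saturation_vapour_pressure_hpa(-40.0)  # typical cruise altitude

print(f"At +15 C air can hold ~{ground:.2f} hPa of water vapour")
print(f"At -40 C air can hold ~{cruise:.2f} hPa of water vapour")
print(f"Ratio: ~{ground / cruise:.0f}x")
# Near -40 C the air holds roughly 90x less vapour, so the water in
# jet exhaust condenses and freezes almost instantly into a visible trail.
```

The takeaway: the colder the air, the less water vapour it can hold before condensing, which is why the same exhaust is invisible near the ground and a bright white trail at altitude.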
rene descartes' proof of god and his alleged circular argument
<p> many commentators, both at the time that descartes wrote and since, have argued that this involves a circular argument, as he relies upon the principle of clarity and distinctness to argue for the existence of god, and then claims that god is the guarantor of his clear and distinct ideas. the first person to raise this criticism was marin mersenne, in the "second set of objections" to the "meditations": <p> descartes argued that god's existence can be deduced from his nature, just as geometric ideas can be deduced from the nature of shapes—he used the deduction of the sizes of angles in a triangle as an example. he suggested that the concept of god is that of a supremely perfect being, holding all perfections. he seems to have assumed that existence is a predicate of a perfection. thus, if the notion of god did not include existence, it would not be supremely perfect, as it would be lacking a perfection. consequently, the notion of a supremely perfect god who does not exist, descartes argues, is unintelligible. therefore, according to his nature, god must exist. <p> descartes argued that he had a clear and distinct idea of god. in the same way that the cogito was self-evident, so too is the existence of god, as his perfect idea of a perfect being could not have been caused by anything less than a perfect being. <p> initially, descartes arrives at only a single first principle: i think. thought cannot be separated from me, therefore, i exist ("discourse on the method" and "principles of philosophy"). most notably, this is known as "cogito ergo sum" (english: "i think, therefore i am"). therefore, descartes concluded, if he doubted, then something or someone must be doing the doubting, therefore the very fact that he doubted proved his existence. "the simple meaning of the phrase is that if one is skeptical of existence, that is in and of itself proof that he does exist." these two first principles—i think and i exist—were later confirmed by descartes's clear and distinct perception (delineated in his third meditation): that i clearly and distinctly perceive these two principles, descartes reasoned, ensures their indubitability. <p> descartes then claimed that because he discovered the cogito through perceiving it clearly and distinctly, anything he can perceive clearly and distinctly must be true. then he argues that he can conceive of an infinite being, but finite beings cannot produce infinite ideas and hence an infinite being must have put the idea into his mind. he uses this argument, commonly known as an ontological argument, to invoke the existence of an omni-benevolent god as the indubitable foundation that makes all sciences possible. many people admired descartes intentions, but were unsatisfied with this solution. some accused him of circularity, proclaiming his ontological argument uses his definition of truth as a premise, while his proof of his definition of truth uses his ontological argument as a premise. hence the problems of solipsism, truth and the existence of the external world came to dominate 17th century western thought. <p> rené descartes, with "je pense donc je suis" or "cogito ergo sum" or "i think, therefore i am", argued that "the self" is something that we can know exists with epistemological certainty. descartes argued further that this knowledge could lead to a proof of the certainty of the existence of god, using the ontological argument that had been formulated first by anselm of canterbury. 
<p> descartes argues – for example, in the third of his "meditations on first philosophy" – that whatever one clearly and distinctly perceives is true: "i now seem to be able to lay it down as a general rule that whatever i perceive very clearly and distinctly is true." (at vii 35) he goes on in the same meditation to argue for the existence of a benevolent god, in order to defeat his skeptical argument in the first meditation that god might be a deceiver. he then says that without his knowledge of god's existence, none of his knowledge could be certain.
"concept of God as that than which nothing greater can be conceived. To think of such a being as existing only in thought and not also in reality involves a contradiction, since a being that lacks real existence is not a being than which none greater can be conceived. A yet greater being would be one with the further attribute of existence. Thus the unsurpassably perfect being must exist; otherwise it would not be unsurpassably perfect." --- You start with the assumption that God is perfect (God=perfection) " God as that than which nothing greater can be conceived. " You then make the assumption that existing is part of being perfect (perfection=existing) "a being that lacks real existence is not a being than which none greater can be conceived. " The argument can be summed up as, If God is perfect, and perfect beings exist, God must exist. God=perfect=existing Therefore God=existing --- It isn't so much circular as it is childish. It's easy to see that the argument doesn't make sense, but harder to point out *why*. It's playing on defining perfection in a certain way. I could easily say that Unicorns are the perfect type of horse, perfect beings exist, therefore unicorns exist. It doesn't mean that they exist. --- It isn't really circular, but there is a circular argument that relies of his proof of God.
what is the difference in coffee roasts such as medium and light?
<p> bullet::::- "dark roast" coffee tastes subjectively stronger than medium roasts. standards are based on medium roasts, and the equivalent strength for a dark roast requires using a lower brewing ratio. <p> the degree of roast has an effect upon coffee flavor and body. darker roasts are generally bolder because they have less fiber content and a more sugary flavor. lighter roasts have a more complex and therefore perceived stronger flavor from aromatic oils and acids otherwise destroyed by longer roasting times. roasting does not alter the amount of caffeine in the bean, but does give less caffeine when the beans are measured by volume because the beans expand during roasting. <p> at lighter roasts, the coffee will exhibit more of its "origin character"—the flavors created by its variety, processing, altitude, soil content, and weather conditions in the location where it was grown. as the beans darken to a deep brown, the origin flavors of the bean are eclipsed by the flavors created by the roasting process itself. at darker roasts, the "roast flavor" is so dominant that it can be difficult to distinguish the origin of the beans used in the roast. <p> roasting coffee using hot air is a commonly used method by most roasting plants, but it takes away the original flavor of the coffee. doutor coffee explored other ways to roast the coffee, but in a more effective way that retains the flavor in the coffee. doutor coffee utilizes the flame roasting approach which is laborious and time-extensive, but it allows richly flavored coffee beans. since flame roasting is used more for small shops due to the fact that it can only roast 5 kg to 20 kg of beans at a time, doutor coffee is trying to create an industrialized flame roasting technique. <p> the most popular, but probably the least accurate, method of determining the degree of roast is to judge the bean's color by eye (the exception to this is using a spectrophotometer to measure the ground coffee reflectance under infrared light and comparing it to standards such as the agtron scale). as the coffee absorbs heat, the color shifts to yellow and then to increasingly darker shades of brown. during the later stages of roasting, oils appear on the surface of the bean. the roast will continue to darken until it is removed from the heat source. coffee also darkens as it ages, making color alone a poor roast determinant. most roasters use a combination of temperature, smell, color, and sound to monitor the roasting process. <p> in the united states, white coffee may also refer to coffee beans which have been roasted to a yellow roast level. when prepared as espresso these beans produce a thin yellow brew, with a high acidic note. there is a debate about whether white coffee is more highly caffeinated than darker roasted coffee. in fact, the sublimation point of caffeine is , about one hundred degrees lower than the typical very dark roast. coffee beans can catch fire at temperatures lower than . white coffee is generally used only for making espresso drinks, not simple brewed coffee. with shorter roasting times, natural sugars are not caramelized within the coffee beans, making the coffee less bitter. the flavor of white coffee is frequently described as nutlike, with pronounced acidity. <p> although not considered part of the processing pipeline proper, nearly all coffee sold to consumers throughout the world is sold as roasted coffee in general one of four degrees of roasting: light, medium, medium-dark, and dark. 
consumers can also elect to buy unroasted coffee to be roasted at home.
The amount of time the beans are roasted for. Roasting for longer changes some of the chemical composition in the beans, which affects the flavor and mouth feel. Light roasts tend to have a sharper taste (called 'acidity' in coffee lingo but it's not talking about actual acid), while darker roasts tend to have a smoother taste.
why aren't some of google's features available in some countries?
<p> competitors of google include baidu and soso.com in china; naver.com and daum.net in south korea; yandex in russia; seznam.cz in the czech republic; yahoo in japan, taiwan and the us, as well as bing and duckduckgo. some smaller search engines offer facilities not available with google, e.g. not storing any private or tracking information. <p> while initially only available in the united states, over time google videos had become available to users in more countries and could be accessed from many other countries, including the united kingdom, france, germany, italy, canada and japan. <p> limitations of application in a jurisdiction include the inability to require removal of information held by companies outside the jurisdiction. there is no global framework to allow individuals control over their online image. however, professor viktor mayer-schönberger, an expert from oxford internet institute, university of oxford, said that google cannot escape compliance with the law of france implementing the decision of the european court of justice in 2014 on the right to be forgotten. mayer-schönberger said nations, including the us, had long maintained that their local laws have "extra-territorial effects". <p> google earth has been viewed by some as a threat to privacy and national security, leading to the program being banned in multiple countries. some countries have requested that certain areas be obscured in google's satellite images, usually areas containing military facilities. <p> the list of most-downloaded google play applications includes most of the free apps that have been downloaded more than 500 million times and most of the paid apps that have been downloaded over one million times on unique android devices. there are numerous android apps that have been downloaded over one million times from the google play app store and it was reported in july 2017 that there are 319 apps which have been downloaded at least 100 million times and 4,098 apps have been downloaded at least ten million times. the barrier for entry on this list is set at 500 million for free apps to limit its size. many of the applications in this list are distributed pre-installed on top-selling android devices and may be considered bloatware by some people because users did not actively choose to download them. the table below shows the number of google play apps in each category. <p> google has been criticized both for disclosing too much information to governments too quickly and for not disclosing information that governments need to enforce their laws. in april 2010, google, for the first time, released details about how often countries around the world ask it to hand over user data or to censor information. online tools make the updated data available to everyone. <p> due to low user engagement and disclosed software design flaws that potentially allowed outside developers access to personal information of its users, the google+ developer api was discontinued on march 7, 2019 and google+ was shut down for business use and consumers on april 2, 2019.
I just tried some of the things you mentioned, such as "etymology for euthanasia" and "university of Iowa acceptance rate", by visiting /ncr (to override the redirect to my local Google). I'm using IE11; even when I turned compatibility view on (IE7 mode, where Google puts a black link bar at the top of the page), it still worked. I'm using Google in the English language though, and that setting does carry over for me; if you're using another language, that may explain why it isn't working.
how can i weigh 252 pounds at 10pm and then weigh 249 pounds at 6am the next morning?
<p> rates are rarely reported, but in 1725 and 1761 the rate was 18 pounds per person per trip. it rose to 21 pounds in 1770 and reached 42 pounds in 1790 (fortunately for the traveler, it is stated that a "sleeping bag weighing 10 pounds" travels free). <p> in the early nineteenth century, there were no standard weight classes. in 1823, the "dictionary of the vulgar tongue" said the limit for a "light weight" was 12 stone (168 lb, 76.2 kg) while "sportsman's slang" the same year gave 11 stone (154 lb, 69.9 kg) as the limit. <p> the allowed carbohydrate amounts are a maximum of 6 grams for breakfast, 12 grams for lunch, and 12 grams for dinner, for a combined maximum of 30 grams of carbohydrate per day for a 140 pound patient. so if a child weighs 35 pounds, he should get 7.5 grams instead of 30 grams per day (see march 2017 teleseminar on youtube). however, these 30 grams are not to be adjusted if, for instance, one weighs 130 pounds. also, if one weighs 200 pounds and these 30 grams do not give him enough healthy vegetables, he can increase the amount of vegetables (see september 2015 teleseminar). <p> in 2005, he weighed 960 pounds (68st 8 lb, 435 kg). five years later, he had dropped down to 450 pounds (32st 2 lb, 204 kg). at one stage he had to weigh himself on the scales in a post office which he had to access from the back entrance so he wouldn't be seen. he achieved his weight loss with diet and exercise, and with help from his manager lucille star. <p> in 1920, the minimum weight for a heavyweight was set at 175 pounds (12 st 7 lb, 79 kg), which today is the light heavyweight division maximum. since 1980, for most boxing organizations, the maximum weight for a cruiserweight has been 200 pounds. <p> readjusting any weight exceeding 18% down to that value is done, in principle, on a quarterly basis. however, whenever a constituent reaches a weight exceeding 20% during a quarter (intra-quarter breach), then the weight is brought back to 18% without waiting for the next quarterly review. <p> anthony lapsley initially weighed in at 174 pounds, over the welterweight limit allowance of 171 pounds. lapsley was given an additional two hours to lose the weight. he successfully weighed in at 171 pounds two hours later.
Poop? Pee? Sweat? You lost weight. Were you wearing more or less? Not to mention just errors in the scale, if you were standing on it differently, etc.
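The 3 pounds is mostly water leaving overnight. A rough back-of-the-envelope check (every figure below is an illustrative assumption, not a measurement):

```python
# Can ~3 lb plausibly disappear between 10pm and 6am without burning fat?
LB_PER_KG = 2.205

urine_kg  = 0.4   # assumed: one bathroom trip, ~400 ml
breath_kg = 0.5   # assumed: water vapor plus carbon exhaled as CO2 over ~8 hours
sweat_kg  = 0.3   # assumed: insensible perspiration while sleeping

total_lb = (urine_kg + breath_kg + sweat_kg) * LB_PER_KG
print(f"~{total_lb:.1f} lb lost overnight")  # ~2.6 lb
```

Add a pound or so of scale error on top of that and the 252-to-249 swing is unremarkable; it says nothing about fat gained or lost.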
what makes the https protocol secure?
<p> historically, https connections were primarily used for payment transactions on the world wide web, e-mail and for sensitive transactions in corporate information systems. https is now used more often by web users than the original non-secure http, primarily to protect page authenticity on all types of websites; secure accounts; and keep user communications, identity, and web browsing private. <p> hypertext transfer protocol secure (https) is an extension of the hypertext transfer protocol (http). it is used for secure communication over a computer network, and is widely used on the internet. in https, the communication protocol is encrypted using transport layer security (tls), or, formerly, its predecessor, secure sockets layer (ssl). the protocol is therefore also often referred to as http over tls, or http over ssl. <p> https creates a secure channel over an insecure network. this ensures reasonable protection from eavesdroppers and man-in-the-middle attacks, provided that adequate cipher suites are used and that the server certificate is verified and trusted. <p> the principal motivation for https is authentication of the accessed website and protection of the privacy and integrity of the exchanged data while in transit. it protects against man-in-the-middle attacks. the bidirectional encryption of communications between a client and server protects against eavesdropping and tampering of the communication. in practice, this provides a reasonable assurance that one is communicating without interference by attackers with the website that one intended to communicate with, as opposed to an impostor. <p> the security of https is that of the underlying tls, which typically uses long-term public and private keys to generate a short-term session key, which is then used to encrypt the data flow between client and server. x.509 certificates are used to authenticate the server (and sometimes the client as well). as a consequence, certificate authorities and public key certificates are necessary to verify the relation between the certificate and its owner, as well as to generate, sign, and administer the validity of certificates. while this can be more beneficial than verifying the identities via a web of trust, the 2013 mass surveillance disclosures drew attention to certificate authorities as a potential weak point allowing man-in-the-middle attacks. an important property in this context is forward secrecy, which ensures that encrypted communications recorded in the past cannot be retrieved and decrypted should long-term secret keys or passwords be compromised in the future. not all web servers provide forward secrecy. <p> netscape communications created https in 1994 for its netscape navigator web browser. originally, https was used with the ssl protocol. as ssl evolved into transport layer security (tls), https was formally specified by rfc 2818 in may 2000. in february 2018, google announced that its chrome browser would mark http sites as "not secure" after july 2018. this move was to encourage website owners to implement https, as an effort to secure the internet. <p> http is not encrypted and is vulnerable to man-in-the-middle and eavesdropping attacks, which can let attackers gain access to website accounts and sensitive information, and modify webpages to inject malware or advertisements. https is designed to withstand such attacks and is considered secure against them (with the exception of older, deprecated versions of ssl).
The 's' in https means secure. Jokes aside, https uses SSL/TLS encryption between your browser and the webserver. There are groups called Certificate Authorities (CAs) who exist to vouch for the identity of different websites. The system uses keypair cryptography: there are two matching keys, and what one key locks, only the other can unlock. The website keeps its "private" key to itself and publishes an SSL certificate, which is basically the matching "public" key plus a signed promise from a CA that this really is the public key belonging to that website. When you connect to a site over https, your browser checks the CA's signature on the certificate, then uses that public key to agree on a temporary session key with the server. Since only the real website holds the private key, only the real website can complete that handshake, so you know you're talking to the actual site and not someone sitting between you and it. Everything after that, including the page you download and the password you send back, is encrypted with the session key, so an eavesdropper can neither read it nor tamper with it.
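You can watch this happen yourself. A minimal sketch using Python's standard-library ssl module (example.com is just a placeholder hostname; any https site works):

```python
import socket
import ssl

host = "example.com"  # placeholder host

# create_default_context() loads the system's trusted CA certificates
# and enables certificate and hostname verification.
ctx = ssl.create_default_context()

with socket.create_connection((host, 443)) as sock:
    with ctx.wrap_socket(sock, server_hostname=host) as tls:
        # Reaching this point means the handshake succeeded: the server
        # presented a certificate for this hostname that chains up to a
        # CA the system trusts.
        print("TLS version:", tls.version())
        print("Cipher suite:", tls.cipher())
        print("Issuer:", dict(x[0] for x in tls.getpeercert()["issuer"]))
```

If the certificate can't be verified (expired, wrong hostname, or not signed by a trusted CA), wrap_socket raises ssl.SSLCertVerificationError, which is the same failure your browser surfaces as a certificate warning.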
why don't computer processors always run at 100% when under load? wouldn't it complete the job faster?
<p> in a processor-based system, the speed of the processor is always more than that of the main memory. as a result, unnecessary wait-states are developed when instructions or data are being fetched from the main memory. this causes a hampering of the performance of the system. a cache memory is basically developed to increase the efficiency of the system and to maximise the utilisation of the entire computational speed of the processor. <p> most modern cpus are so fast that for most program workloads, the bottleneck is the locality of reference of memory accesses and the efficiency of the caching and memory transfer between different levels of the hierarchy. as a result, the cpu spends much of its time idling, waiting for memory i/o to complete. this is sometimes called the "space cost", as a larger memory object is more likely to overflow a small/fast level and require use of a larger/slower level. the resulting load on memory use is known as "pressure" (respectively "register pressure", "cache pressure", and (main) "memory pressure"). terms for data being missing from a higher level and needing to be fetched from a lower level are, respectively: register spilling (due to register pressure: register to cache), cache miss (cache to main memory), and (hard) page fault (main memory to disk). <p> computer microprocessors generally run much faster than the computer's other subsystems, which hold the data the cpu reads and writes. even memory, the fastest of these, cannot supply data as fast as the cpu could process it. in an example from 2011, typical pc processors like the intel core 2 and the amd athlon 64 x2 run with a clock of several ghz, which means that one clock cycle is less than 1 nanosecond (typically about 0.3 ns to 0.5 ns on modern desktop cpus), while main memory has a latency of about 15–30 ns. some second-level cpu caches run slower than the processor core. <p> cray took another approach. at the time, cpus generally ran slower than the main memory to which they were attached. for instance, a processor might take 15 cycles to multiply two numbers, while each memory access took only one or two. this meant there was a significant time where the main memory was idle. it was this idle time that the 6600 exploited. <p> the shared bus between the program memory and data memory leads to the "von neumann bottleneck", the limited throughput (data transfer rate) between the central processing unit (cpu) and memory compared to the amount of memory. because the single bus can only access one of the two classes of memory at a time, throughput is lower than the rate at which the cpu can work. this seriously limits the effective processing speed when the cpu is required to perform minimal processing on large amounts of data. the cpu is continually forced to wait for needed data to move to or from memory. since cpu speed and memory size have increased much faster than the throughput between them, the bottleneck has become more of a problem, a problem whose severity increases with every new generation of cpu. <p> as microprocessors are becoming faster, mainly because of the cores being added every few months, memory latency gap is becoming wider. memory latency was few cycles in 1980 and it is reaching nowadays almost 1000 cycles. if the micro-processor has enough cores and hopefully they are not sending requests to the main memory at the same time, there will be partial aggregate hiding of memory latency. some cores might be executing while others are waiting for memory response. 
this is not the best situation for multi-core processors. high performance computing experts are striving to keep all cores busy all the time. so, if each core is kept busy all the time, a complete utilization of the whole micro-processor is possible. creating software based threads won't solve the problem for one obvious reason: context switching threads to main memory is a much more expensive operation when compared to memory latency. for example, in the cell broadband engine, context switching any of the cores' threads takes 2000 micro-seconds in the best cases. some software techniques like double or multi-buffering may solve the memory latency problem. however, they can be used in regular algorithms, where the program knows where the next data chunk to retrieve from memory is; in this case it sends a request to memory while it is processing the previously requested data. however, this technique won't work if the program does not know the next data chunk to retrieve from memory. in other words, it won't work in combinatorial algorithms, such as tree spanning or random list ranking. in addition, multi-buffering assumes that memory latency is constant and can be hidden statically. however, reality shows that memory latency changes from one application to another. it depends on the overall load on the microprocessor's shared resources, such as the rate of memory requests on the shared cores' interconnections. <p> the performance of an underclocked machine will often be better than might be expected. under normal desktop use, the full power of the cpu is rarely needed. even when the system is busy, a large amount of time is usually spent waiting for data from memory, disk, or other devices. such devices communicate with the cpu through a bus which operates at a much lower bandwidth. generally, the lower the cpu multiplier (and thus clockrate of a cpu), the closer its performance will be to that of the bus, and the less time it will spend waiting.
When they need to, they do. There are some reasons they may not though: 1. CPU speed isn't the bottleneck: if the program needs to pull a lot of data from storage, the limiting factor is usually the speed of the storage drive (aka hard drive or SSD). The CPU can't do much if it's waiting to receive data. 2. The program isn't written to use all cores: today's processors are all multi-core, meaning they are like 2 or 4 or even 8 CPUs in one. In order to take advantage of all the cores, the program you're running has to be written to do so. A lot of today's programs still aren't set up to use more than 1 or 2 cores at a time (I'm looking at you, Microsoft Excel).
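Point 2 is easy to demonstrate. A minimal sketch in Python (the workload and chunk sizes are arbitrary; multiprocessing is used rather than threads because CPython's GIL keeps threads on one core for CPU-bound work):

```python
import multiprocessing as mp
import time

def busy(n: int) -> int:
    # CPU-bound work: nothing to wait on, so a core can run flat out.
    total = 0
    for i in range(n):
        total += i * i
    return total

if __name__ == "__main__":
    chunks = [5_000_000] * 8

    t0 = time.perf_counter()
    for c in chunks:            # one core does everything, the rest idle
        busy(c)
    serial = time.perf_counter() - t0

    t0 = time.perf_counter()
    with mp.Pool() as pool:     # one worker process per core by default
        pool.map(busy, chunks)
    parallel = time.perf_counter() - t0

    print(f"serial: {serial:.2f}s  parallel: {parallel:.2f}s")
```

Watch a CPU monitor while it runs: the serial loop pins a single core near 100% while overall CPU usage stays low, and the pool version lights up every core.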
why is artificial coloring perceived as worse than natural?
<p> widespread public belief that artificial food coloring causes adhd-like hyperactivity in children originated from benjamin feingold, a pediatric allergist from california, who proposed in 1973 that salicylates, artificial colors, and artificial flavors cause hyperactivity in children; however, there is no evidence to support broad claims that food coloring causes food intolerance and adhd-like behavior in children. it is possible that certain food colorings may act as a trigger in those who are genetically predisposed, but the evidence is weak. <p> because many consumers are worried about possible health consequences of synthetic dyes, some companies are beginning to use natural food colours. since these food colours are natural, they do not require any certification from the food and drug administration. the most popular natural food colours are: <p> industrial melanism is an evolutionary effect in insects such as the peppered moth, "biston betularia" in areas subject to industrial pollution. darker pigmented individuals are favored by natural selection, apparently because they are better camouflaged against polluted backgrounds. when pollution was later reduced, lighter forms regained the advantage and melanism became less frequent. other explanations have been proposed, such as that the melanin pigment enhances function of immune defences, or a thermal advantage from the darker coloration. <p> because it is fast and in many cases can use few colors, greedy coloring can be used in applications where a good but not optimal graph coloring is needed. one of the early applications of the greedy algorithm was to problems such as course scheduling, in which a collection of tasks must be assigned to a given set of time slots, avoiding incompatible tasks being assigned to the same time slot. <p> designers need to take into account that color-blindness is highly sensitive to differences in material. for example, a red-green colorblind person who is incapable of distinguishing colors on a map printed on paper may have no such difficulty when viewing the map on a computer screen or television. in addition, some color blind people find it easier to distinguish problem colors on artificial materials, such as plastic or in acrylic paints, than on natural materials, such as paper or wood. third, for some color blind people, color can only be distinguished if there is a sufficient "mass" of color: thin lines might appear black, while a thicker line of the same color can be perceived as having color. <p> designers should also note that red-blue and yellow-blue color combinations are generally safe. so instead of the ever-popular "red means bad and green means good" system, using these combinations can lead to a much higher ability to use color coding effectively. this will still cause problems for those with monochromatic color blindness, but it is still something worth considering. <p> alternative hair coloring products are designed to create hair colors not typically found in nature. these are also referred to as "vivid color" in the hairstyling industry. the available colors are diverse, such as the colors green and fuchsia.
Some people are allergic to some additives. A lot of people believe that a lot of additives are in some way toxic or carcinogenic. It makes food seem more 'natural', which is something a lot of people like. So it's also good marketing.
how is it that we are still not able to truly soundproof a room without turning it into a fortress? it seems like the only solution is concrete.
<p> several different materials may be used for sound barriers. these materials can include masonry, earthwork (such as earth berm), steel, concrete, wood, plastics, insulating wool, or composites. walls that are made of absorptive material mitigate sound differently than hard surfaces. it is now also possible to make noise barriers with active materials such as solar photovoltaic panels to generate electricity while also reducing traffic noise. <p> safe rooms in the basement or on a concrete slab can be built with concrete walls, a building technique that is normally not possible on the upper floors of wood-framed structures unless there is significant structural reinforcement to the building. <p> masonry has been used in structures for thousands of years, and can take the form of stone, brick or blockwork. masonry is very strong in compression but cannot carry tension (because the mortar between bricks or blocks is unable to carry tension). because it cannot carry structural tension, it also cannot carry bending, so masonry walls become unstable at relatively small heights. high masonry structures require stabilisation against lateral loads from buttresses (as with the flying buttresses seen in many european medieval churches) or from windposts. <p> buildings that are made of flammable materials such as wood are different from building materials such as concrete. generally, a "fire-resistant" building is designed to limit fire to a small area or floor. other floors can be safe by preventing smoke inhalation and damage. all buildings suspected of being on fire must be evacuated, regardless of fire rating. <p> 1. airborne transmission - a noise source in one room sends air pressure waves which induce vibration to one side of a wall or element of structure setting it moving such that the other face of the wall vibrates in an adjacent room. structural isolation therefore becomes an important consideration in the acoustic design of buildings. highly sensitive areas of buildings, for example recording studios, may be almost entirely isolated from the rest of a structure by constructing the studios as effective boxes supported by springs. air tightness also becomes an important control technique. a tightly sealed door might have reasonable sound reduction properties, but if it is left open only a few millimeters its effectiveness is reduced to practically nothing. the most important acoustic control method is adding mass into the structure, such as a heavy dividing wall, which will usually reduce airborne sound transmission better than a light one. <p> concrete is one of the most commonly used materials in home construction. when pockets of air are not removed, or the mixture is not allowed to cure properly, the concrete can crack, which allows water to force its way through the wall. <p> geometry of area structures is an important input, since the presence of buildings or walls can block sound under certain circumstances, but reflective properties can augment sound energy at other locations.
Sound is vibration of matter. It is pretty hard to stop vibration from spreading. If you put a wall up, the vibration will simply transfer to the wall, then through the wall and out the other side. The only real way to stop sound is to suspend the source in a vacuum somehow, and that isn't really possible on earth. Every soundproofing solution we have simply tries to bounce the sound back or force it through various substances to reduce its intensity before it gets out.
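The "adding mass" point from the context above can be made roughly quantitative with the acoustician's "mass law" for a single solid wall. A sketch (the formula is a standard first-order approximation; the surface densities are ballpark assumptions):

```python
import math

def mass_law_tl(frequency_hz: float, surface_density_kg_m2: float) -> float:
    """Approximate airborne sound transmission loss (dB) of one solid wall,
    via the empirical mass law: TL ~ 20*log10(f*m) - 47.
    Real walls deviate (resonances, coincidence dip, flanking paths)."""
    return 20 * math.log10(frequency_hz * surface_density_kg_m2) - 47

for name, m in [("13 mm drywall (~10 kg/m^2)", 10),
                ("brick wall (~220 kg/m^2)", 220),
                ("20 cm concrete (~460 kg/m^2)", 460)]:
    print(f"{name}: ~{mass_law_tl(500, m):.0f} dB at 500 Hz")
    # roughly 27, 54, and 60 dB respectively
```

Note the catch: doubling the wall's mass only buys about 6 dB. That's why "just build a thicker wall" runs out of steam quickly, and why serious soundproofing moves to decoupled double walls, air gaps, and sprung rooms like the studio "boxes" mentioned in the context.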
why is it called a semi truck?
<p> in the united states, canada, and the philippines "truck" is usually reserved for commercial vehicles larger than normal cars, and includes pickups and other vehicles having an open load bed. in australia, new zealand and south africa, the word "truck" is mostly reserved for larger vehicles; in australia and new zealand, a pickup truck is usually called a "ute" (short for "utility"), while in south africa it is called a "bakkie" (afrikaans: "small open container"). in the united kingdom, india, malaysia, singapore, ireland and hong kong "lorry" is used instead of "truck", but only for the medium and heavy types. <p> a truck or lorry is a motor vehicle designed to transport cargo. trucks vary greatly in size, power, and configuration; smaller varieties may be mechanically similar to some automobiles. commercial trucks can be very large and powerful and may be configured to be mounted with specialized equipment, such as in the case of refuse trucks, fire trucks, concrete mixers, and suction excavators. strictly speaking, a commercial vehicle without a tractor or other articulation is a "straight truck" while one designed specifically to pull a trailer is not a truck but a "tractor". <p> in british english the word "truck" refers to large open topped freight vehicles or rail freight waggons. a "lorry" is a hgv road vehicle. a "van" is used for an enclosed railway freight carriage or medium or smaller commercial road vehicles. <p> a truck is a nautical term for a wooden ball, disk, or bun-shaped cap at the top of a mast, with holes in it through which flag halyards are passed. trucks are also used on wooden flagpoles, to prevent them from splitting. <p> a semi-trailer truck (more commonly semi truck or simply "semi") is the combination of a tractor unit and one or more semi-trailers to carry freight. a semi-trailer attaches to the tractor with a fifth-wheel coupling (hitch), with much of its weight borne by the tractor. the result is that both the tractor and semi-trailer will have a design distinctly different from that of a rigid truck and trailer. <p> the "trucks" (usually referred to in american releases as the freight cars) transport goods. there are various designs of trucks, designed for different purposes: the open-topped "wagons" carry most goods; liquids are carried in the tankers; and anything which needs protection from the elements can be carried in the "vans". <p> the first known usage of "truck" was in 1611, when it referred to the small strong wheels on ships' cannon carriages. in its extended usage it came to refer to carts for carrying heavy loads, a meaning known since 1771. its expanded application to "motor-powered load carrier" has been in usage since 1930, shortened from "motor truck", which dates back to 1901.
The "semi" doesn't refer to the truck. It's called a semi truck because it's built to carry what's known as a semi-*trailer*: a trailer which doesn't have front wheels on it, because it just slides on top of the truck. (There are full trailers that do have front wheels, but they're much rarer.)
how efficient are our muscles at converting energy to movement?
<p> the conversion efficiency of energy from respiration into mechanical (physical) power depends on the type of food and on the type of physical energy usage (e.g., which muscles are used, whether the muscle is used aerobically or anaerobically). in general, the efficiency of muscles is rather low: only 18 to 26% of the energy available from respiration is converted into mechanical energy. this low efficiency is the result of about 40% efficiency of generating atp from the respiration of food, losses in converting energy from atp into mechanical work inside the muscle, and mechanical losses inside the body. the latter two losses are dependent on the type of exercise and the type of muscle fibers being used (fast-twitch or slow-twitch). for an overall efficiency of 20%, one watt of mechanical power is equivalent to per hour. for example, a manufacturer of rowing equipment shows calories released from 'burning' food as four times the actual mechanical work, plus per hour, which amounts to about 20% efficiency at 250 watts of mechanical output. it can take up to 20 hours of little physical output (e.g., walking) to "burn off" more than a body would otherwise consume. for reference, each kilogram of body fat is roughly equivalent to of food energy (i.e., 3,500 kilocalories per pound). <p> the energy that is absorbed by the muscle can be converted into elastic recoil energy, and can be recovered and reused by the body. this creates more efficiency because the body is able to use the energy for the next movement, decreasing the initial impact or shock of the movement. <p> the efficiency of human muscle has been measured (in the context of rowing and cycling) at 18% to 26%. the efficiency is defined as the ratio of mechanical work output to the total metabolic cost, as can be calculated from oxygen consumption. this low efficiency is the result of about 40% efficiency of generating atp from food energy, losses in converting energy from atp into mechanical work inside the muscle, and mechanical losses inside the body. the latter two losses are dependent on the type of exercise and the type of muscle fibers being used (fast-twitch or slow-twitch). for an overall efficiency of 20 percent, one watt of mechanical power is equivalent to 4.3 kcal per hour. for example, one manufacturer of rowing equipment calibrates its rowing ergometer to count burned calories as equal to four times the actual mechanical work, plus 300 kcal per hour, this amounts to about 20 percent efficiency at 250 watts of mechanical output. the mechanical energy output of a cyclic contraction can depend upon many factors, including activation timing, muscle strain trajectory, and rates of force rise & decay. these can be synthesized experimentally using work loop analysis. <p> muscular energy reserves, or stores for biomechanical exertion, stem from metabolic, immediate production of atp and increased o2 consumption. muscular exertion generated depends on the muscle length and the velocity at which it is able to shorten, or contract. <p> skeletal muscle burns 90 mg (0.5 mmol) of glucose each minute during continuous activity (such as when repetitively extending the human knee), generating ≈24 w of mechanical energy, and since muscle energy conversion is only 22–26% efficient, ≈76 w of heat energy. resting skeletal muscle has a basal metabolic rate (resting energy consumption) of 0.63 w/kg making a 160 fold difference between the energy consumption of inactive and active muscles. 
for short duration muscular exertion, energy expenditure can be far greater: an adult human male when jumping up from a squat can mechanically generate 314 w/kg. such rapid movement can generate twice this amount in nonhuman animals such as bonobos, and in some small lizards. <p> energy minimization is widely considered a primary goal of the central nervous system. the rate at which a human expends metabolic energy while walking (gross metabolic rate) increases nonlinearly with increasing speed. however, humans also require a continuous basal metabolic rate to maintain normal function. the energetic cost of walking itself is therefore best understood by subtracting basal metabolic rate from gross metabolic rate, yielding net metabolic rate. in human walking, net metabolic rate also increases nonlinearly with speed. these measures of walking energetics are based on how much oxygen people consume per unit time. many locomotion tasks, however, require walking a fixed distance rather than for a set time. dividing gross metabolic rate by walking speed results in gross cost of transport. for human walking, gross cost of transport is u-shaped. similarly, dividing net metabolic rate by walking speed yields a u-shaped net cost of transport. these curves reflect the cost of moving a given distance at a given speed and may better reflect the energetic cost associated with walking. <p> while running, tendons are able to reduce the metabolic rate of muscle activity by reducing the volume of the muscle that is active to produce force. the timing of muscle activation is very important for utilizing the mechanical and energetic benefits of tendon elasticity. power attenuation by the use of the tendons can allow the muscle-tendon system the ability to absorb energy at a rate beyond the muscles maximum capacity to absorb energy. power amplification mechanisms are able to work because the spring and muscles contain different intrinsic limits of power. muscles in a skeletal system can be limited in their maximum power production. power amplification by the use of the tendons allows the muscle to produce power beyond the muscle’s capacity. the mechanical functions of tendons contain a structural basis and are not subjected to limitation of power production.
Our muscles are around 25% efficient. Electric motors can exceed 90% so an electrically powered robot could do much better, especially as they might be able to recapture some energy regeneratively. Still, the advantage of electric motors is not as big as it seems, since converting other forms of energy into electricity is very inefficient.
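That efficiency figure is easy to turn into numbers. A quick sketch reproducing the "1 watt of mechanical power is equivalent to 4.3 kcal per hour" figure quoted in the context above (assuming 20% efficiency):

```python
# To sustain 1 W of mechanical output at 20% efficiency,
# the body must burn 5 W of food energy.
KCAL_PER_JOULE = 1 / 4184  # 1 kcal = 4184 J

def food_kcal_per_hour(mechanical_watts: float, efficiency: float = 0.20) -> float:
    metabolic_watts = mechanical_watts / efficiency
    return metabolic_watts * 3600 * KCAL_PER_JOULE

print(food_kcal_per_hour(1))    # ~4.3 kcal/h per mechanical watt
print(food_kcal_per_hour(250))  # ~1075 kcal/h at 250 W of rowing
```

At 250 W on a rowing machine, that works out to roughly 1,000 to 1,100 kcal of food energy per hour, which matches the ergometer calibration described in the context.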
why are bugs attracted to the indoors? and why do they struggle to go out the window once they’re in?
<p> during certain times of the year boxelder bugs cluster together in large groups while sunning themselves on warm surfaces near their host tree (e.g. on rocks, shrubs, trees, and man-made structures). this is especially a problem in the fall when they are seeking a warm place to overwinter. large numbers are often seen congregating on houses seeking an entry point. once they have gained access, they remain inactive behind siding and inside of walls while the weather is cool. once the home's heating system becomes active for the season, the insects may falsely perceive it to be springtime and enter inhabited parts of the home in search of food and water. once inside inhabited areas of a home, their excreta may stain upholstery, carpets, and drapes, and they may feed on certain types of house plants. in the spring, the bugs leave their winter hibernation locations to feed and lay eggs on maple or ash trees. clustered masses of boxelder bugs may be seen again at this time, and depending on the temperature, throughout the summer. their outdoor congregation habits and indoor excreta deposits are perceived as a nuisance by many people, therefore boxelder bugs are often considered pests. however, boxelder bugs are harmless to people and pets. the removal of boxelder trees and maple trees can help control boxelder bug populations. spiders are minor predators, but because of the boxelder bug's chemical defenses few birds or other animals will eat them. boxelder bug populations are not affected by any major diseases or parasites. <p> bat bugs are moderately common in the midwest us and have been recorded in scotland, and are found in houses and buildings that harbor bats. infestations in human dwellings are usually introduced by bats carrying the bugs on their skin. bat bugs usually remain in close proximity to the roosting locations of bats (attics, chimneys, etc.) but explore the rest of the building if the bats leave or are eliminated. in some cases, they move into harborages that are more typical of bedbugs, such as mattresses and bed frames. <p> bed bugs are attracted to their hosts primarily by carbon dioxide, secondarily by warmth, and also by certain chemicals. "cimex lectularius" only feeds every five to seven days, which suggests that it does not spend the majority of its life searching for a host. when a bed bug is starved, it leaves its shelter and searches for a host. it returns to its shelter after successful feeding or if it encounters exposure to light. "cimex lectularius" aggregate under all life stages and mating conditions. bed bugs may choose to aggregate because of predation, resistance to desiccation, and more opportunities to find a mate. airborne pheromones are responsible for aggregations. <p> heavy populations of fungus beetles may first show up trapped in bathtubs, sinks or around lamps and tv sets. they do not bite, sting, spread human diseases nor damage wood, food, fabric, etc. they are just annoying little bugs that will not go away. <p> infestation is rarely caused by a lack of hygiene. transfer to new places is usually in the personal items of the human they feed upon. dwellings can become infested with bed bugs in a variety of ways, such as: <p> they are often found roaming in a home and can cover great distances in a house. they are quite safe spiders to have in a home and can deal with other insect problems because of the amount they travel in a short period of time. <p> bed bugs are obligatory hematophagous (bloodsucking) insects.
most species feed on humans only when other prey are unavailable. they obtain all the additional moisture they need from water vapor in the surrounding air. bed bugs are attracted to their hosts primarily by carbon dioxide, secondarily by warmth, and also by certain chemicals. bedbugs prefer exposed skin, preferably the face, neck, and arms of a sleeping person.
There are a LOT of insects. Some are bound to get in by accident. You just don't notice all the ones outside, or even the ones that try to get in but fail.
why do i go cross-eyed and get blurry vision when i'm fighting falling asleep (such as during class or in traffic)?
<p> if blood is allowed to pool in the lower areas of the body, the brain will be deprived of blood, leading to temporary hypoxia. hypoxia first causes a greyout (a dimming of the vision), also called brownout, followed by tunnel vision and ultimately complete loss of vision 'blackout' followed by g-induced loss of consciousness or 'g-loc'. the danger of g-loc to aircraft pilots is magnified because on relaxation of g there is a period of disorientation before full sensation is re-gained. <p> it can cause dizziness, lightheadedness, headache, blurred or dimmed vision and fainting, because the brain does not get sufficient blood supply. this, in turn, is caused by gravity, pulling the blood into the lower part of the body. <p> diabetic retinopathy often has no early warning signs. even macular edema, which can cause rapid vision loss, may not have any warning signs for some time. in general, however, a person with macular edema is likely to have blurred vision, making it hard to do things like read or drive. in some cases, the vision will get better or worse during the day. <p> relatedly, the japanese scientist tatsuji inouye examined soldiers who had been shot through their visual cortex during battle and had lost random spots of vision. inouye figured that the spots of missing vision were connected with the spots that their brain had been shot through, and set out to map the visual cortex through talking to these soldiers. <p> glaucoma—increased pressure in the eye, causing poor night vision, blind spots, and loss of vision to either side. a major cause of blindness. glaucoma can happen gradually or suddenly—if sudden, it is a medical emergency. <p> because of the high level of sensitivity that the eye's retina has to hypoxia, symptoms are usually first experienced visually. as the retinal blood pressure decreases below globe pressure (usually 10–21 mm hg), blood flow begins to cease to the retina, first affecting perfusion farthest from the optic disc and retinal artery with progression towards central vision. skilled pilots can use this loss of vision as their indicator that they are at maximum turn performance without losing consciousness. recovery is usually prompt following removal of "g"-force but a period of several seconds of disorientation may occur. absolute incapacitation is the period of time when the aircrew member is physically unconscious and averages about 12 seconds. relative incapacitation is the period in which the consciousness has been regained, but the person is confused and remains unable to perform simple tasks. this period averages about 15 seconds. upon regaining cerebral blood flow, the g-loc victim usually experiences myoclonic convulsions (often called the 'funky chicken') and often full amnesia of the event is experienced. brief but vivid dreams have been reported to follow g-loc. if g-loc occurs at low altitude, this momentary lapse can prove fatal and even highly experienced pilots can pull straight to a g-loc condition without first perceiving the visual onset warnings that would normally be used as the sign to back off from pulling any more "g"s. <p> accommodative infacility is the inability to change the accommodation of the eye with enough speed and accuracy to achieve normal function. this can result in visual fatigue, headaches, and difficulty reading. the delay in accurate accommodation also makes vision blurry for a moment when switching between distant and near objects.
the duration and extent of this blurriness depend on the extent of the deficit.
You go cross-eyed and get blurry vision when you're fighting falling asleep because your brain is literally trying to shut down and you're not letting it. Eventually, your brain wins. Listen. I have fallen asleep at the wheel once. I woke up literally flying through the air, having veered off and ramped up a driveway, heading directly for a solid cement electrical pole at ~40 mph. Thankfully I landed just before I hit the pole, swerved to the side, and proceeded to immediately pull over and hyperventilate for the next ten minutes. Imagine if I had been on the highway, where I usually go 75? I got lucky and only got a scare, but driving while sleepy **will** kill you. It has been repeatedly proven to be as dangerous as driving drunk. Meanwhile, a five-minute cat-nap *vastly* improves alertness, mental acuity, and reflex speed, and a 20-minute power nap is even better. Are you really in so much of a hurry that 5 minutes is worth risking your life? There's only been a couple of times in my life I could honestly say yes to that question, and I bet it's the same for you.
what makes a food item filling? and why is it that some high-calorie items aren't necessarily “filling” foods? (ex. fries)
<p> this is a list of stuffed dishes, comprising dishes and foods that are prepared with various fillings and stuffings. some dishes are not actually stuffed; the added ingredients are simply spread atop the base food. one cannot truly stuff an oyster or a mussel or a pizza. <p> some products are sold with fillers, which increase the legal weight of the product with something that costs the producer very little compared to what the consumer thinks that he or she is buying. food is an example of this, where meat is injected with broth or even brine (up to 15%), or tv dinners are filled with gravy or other sauce instead of meat. malt and ham have been used as filler in peanut butter. there are also non-meat fillers which may look starchy in their makeup; they are high in carbohydrate and low in nutritional value. one example is known as a cereal binder and usually contains some combination of flours and oatmeal. <p> stuffing or filling is an edible substance or mixture, normally consisting primarily of small cut-up pieces of bread or a similar starch and served as a side dish or used to fill a cavity in another food item while cooking. many foods may be stuffed, including eggs, poultry, seafood, mammals, and vegetables, but chickens and turkey are the most common. stuffing serves the dual purpose of helping to keep the meat moist while also adding to the mix of flavours of both the stuffing and the thing it is stuffed in. <p> the pumpability of viscous or pasty products has a key effect on the reliable function of a vacuum filler. filling products in the food sector can be characterised with the aid of various different properties related to their pumpability (“fillability”). they are either physical characteristics that can be measured directly or they are sensory attributes. <p> fillings are used if the object has suffered considerable damage. the process of filling depends on the objects chemical composition consisting of- refractive indexes, transparency, low viscosity, and its compatibility with the rest of the object. <p> many food filled packages are filled with nitrogen to extend shelf life. food manufacturers are often looking for ways to improve their geographical reach or otherwise extending the shelf life of their product without the use of chemicals. nitrogen filling is a natural means of extending shelf life. more and more manufacturers are choosing to create and control their own nitrogen supply by using an on demand nitrogen generators. <p> a fillet or filet (, ; from the french word "filet" ) is a cut or slice of boneless meat or fish. the fillet is often a prime ingredient in many cuisines, and many dishes call for a specific type of fillet as one of the ingredients.
Part of the feeling of being full comes from having your stomach stretched and your digestive system engaged. Foods differ in the amount of work your body has to do in order to get at the calories. Foods like pork take a lot of work to digest because it's got a lot of tightly bound-up protein: it gets prepared by chewing, then macerated in the stomach, and then the intestines have a go at it. Foods like maltose (the stuff in Maltesers) are readily available and only need saliva to separate the glucose out. Combination foods can also have an effect. If we take the pork, bread it, and fry it, we add fats and sugars that the body can pick up quickly and then use as energy to help digest the protein.
if somebody is pointing a gun at me, how far away roughly would i need to be to be able to duck and miss the bullet if the trigger was pulled? ps i know this would change from gun to gun, but would like an example.
<p> neddie is doubtful. he says "how can someone shoot themselves by pointing their finger at their head like this and going..." at that point there is the sound of a gunshot, followed by neddie's body falling to the ground. <p> with pistol quick kill, the pistol is gripped and pointed at a target much like a person would point their finger. "when you point, you naturally do not attempt to sight or aim your finger. it will be somewhat below your eye level in your peripheral vision, perhaps 2-4 inches below eye level." <p> the same applies when pointing a gun at a target. just as with pointing their finger, the user will "...see the end of the barrel and/or front sight while looking at the target...you have not looked at the gun or front sight, just the target." <p> pointed. when presented with a target, the soldier keeps the rifle at his side and quickly fires a single shot or burst. he keeps both eyes open and uses his instinct and peripheral vision to line up the rifle with the target. using this technique, a target at 15 meters or less may be engaged in less than one second. <p> aimed. when presented with a target, the soldier brings the rifle up to his shoulder and quickly fires a single shot. his firing eye looks through or just over the rear sight aperture. he uses the front sight post to aim at the target. using this technique, a target at 25 meters or less may be accurately engaged in one second or less. <p> the roy rogers effect allows you to make any trick shot you can imagine, eliminating all cover your target may be behind. of course, you can't actually "kill" anyone except at high noon... <p> seesaw 60 – two people stand atop a giant seesaw. they have 60 seconds to move a 10 kg barrel from one side to the other without letting either end of the seesaw touch the floor. a third person gets to call out advice to the other two people. this challenge has had 1 victory.
Human response time: ~200 ms. Muzzle velocity of an average 9mm round is about 1200 fps. Assuming you even have the visual acuity to see the shooter pull the trigger, by the time you've reacted the bullet will have traveled 240 ft. Hitting a target at that distance is pushing it for even the best of marksmen. So if someone is shooting at you with a pistol from a reasonable distance, the bullet will hit you before you've even registered the pull of the trigger.
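Putting the same numbers in one place (reaction time and muzzle velocity as above; the time to actually move your body is my own generous assumption):

```python
# How far away must the shooter be for you to even have a chance?
reaction_s = 0.200   # ~200 ms visual reaction time, as above
move_s     = 0.300   # assumed: time to actually displace your body
muzzle_fps = 1200    # typical 9mm muzzle velocity, ft/s, as above

# Distance the bullet covers before you've merely *registered* the shot:
print(reaction_s * muzzle_fps)              # 240 ft
# Distance needed to both register the shot AND move out of the way:
print((reaction_s + move_s) * muzzle_fps)   # 600 ft (~200 yd)
```

So as a rough example: inside a room, no chance at all; somewhere past 200 yards you could in principle start moving in time, and at that range the shooter's pistol accuracy, not your reflexes, becomes the limiting factor. This ignores bullet drop and air resistance, which only help you slightly.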
oled displays: samsung vs apple
<p> universal display's oled screens currently feature in samsung's galaxy s, s ii and s iii, s iv and s v smartphones. the galaxy s3 sold 10 million units in the first three months after its launch in april 2012. also, their galaxy note has sold 10 million units since launch. <p> the samsung galaxy sl has a superclear lcd touch screen, protected by gorilla glass. the sc-lcd is cheaper than the amoled display used in samsung galaxy s. furthermore, the display consumes more power compared to superamoled displays, although the phone ships with a higher capacity battery than the original galaxy s to compensate for it. an advantage of the superclear lcd display over the superamoled one is that the latter uses a pentile matrix layout that some users find less visually appealing, while the former is a true rgb display. <p> sony had not used oled panels in their smartphones previously, however the xz3 is the first sony smartphone to come with an oled panel. it is a qhd+ (2880x1440) display, with a 2:1 aspect ratio (marketed as 18:9). being a sony device, it features their triluminos and x-reality technology and supports 10-bit colour, which means it is certified for the bt.2020 standard and hdr10 playback. <p> the samsung galaxy s ii uses a wvga (800 x 480) super amoled plus capacitive touchscreen that is covered by gorilla glass with an oleophobic fingerprint-resistant coating. the display is an upgrade of its predecessor, and the "plus" signifies that the display panel has done away with pentile matrix to regular rgb matrix display which results in a 50% increase in sub-pixels. this translates to grain reduction and sharper images and text. in addition, samsung has claimed that super amoled plus displays are 18% more power efficient than the older super amoled displays. some phones have display issues, with a few users reporting a "yellow tint" on the left bottom edge of the display when a neutral grey background is displayed. <p> apple began using oled panels in its watches in 2015 and in its laptops in 2016 with the introduction of an oled touchbar to the macbook pro. in 2017, apple announced the introduction of their tenth anniversary iphone x with their own optimized oled display licensed from universal display corporation. <p> an oled display works without a backlight. thus, it can display deep black levels and can be thinner and lighter than a liquid crystal display (lcd). in low ambient light conditions such as a dark room an oled screen can achieve a higher contrast ratio than an lcd, whether the lcd uses cold cathode fluorescent lamps or led backlight. oleds are expected to replace other forms of display in near future. <p> uniquely on oled display panels, while an oled will consume around 40% of the power of an lcd displaying an image that is primarily black, for the majority of images it will consume 60–80% of the power of an lcd. however, an oled can use more than three times as much power to display an image with a white background, such as a document or web site. this can lead to reduced battery life in mobile devices, when white backgrounds are used.
They can boast and claim whatever they want; it's generally done with enough weasel words and qualifiers that it's "accurate". Reviewers can also call them out on their odd claims, and they do, but it's still the best display that's been in an iPhone yet. Samsung doesn't particularly care; they're getting nearly $100/screen, which they're likely quite happy about.
why is it that sometimes you have to hold the toilet handle down to flush it?
<p> toilet seats often have a lid. this lid is frequently left open. it can be closed to prevent small items from falling in, to reduce odors, for aesthetic purposes or to provide a chair in the toilet room. some people also close the lid to prevent the spread of aerosols on flushing ("toilet plume"). <p> in those settings, bucket toilets are more likely to be used without a liner, or the liner is not removed each time the bucket is emptied. this is because the users cannot afford to regularly discard suitably sized, sturdy liners. instead, the users may place some dry material in the base of the bucket (newspaper, sawdust, leaves, straw, or similar) in order to facilitate easier emptying. <p> the holders in many public toilets are designed to make it difficult for patrons to steal the toilet rolls. various contraptions have been devised to lock the spare rolls away, and release them only when the active roll is used up. <p> toilets without cisterns are often flushed through a simple flush valve or "flushometer" connected directly to the water supply. these are designed to rapidly discharge a limited volume of water when the lever or button is pressed then released. <p> many public toilets do not have soap for washing hands, or towels for drying hands. many people carry a handkerchief with them for such occasions, and some even carry soap. some public toilets are fitted with powerful hand dryers to reduce the volume of waste generated from paper towels. hand dryers and taps are sometimes installed with motion-sensors as an additional resource-saving measure. <p> roomettes often have their own toilet and wash basin which folds into the wall, as well as hot and cold taps. in older-style roomette cars, the corridor runs down the car in a straight line, and the floor area of the compartments is rectangular. because the bed occupies most of this area when folded down, the toilet cannot be unfolded and used while the bed is down. this means that if the passenger wishes to use the toilet, they must temporarily fold the bed at least partially upwards. <p> some toilets also use the siphon principle to obtain the actual flush from the cistern. the flush is triggered by a lever or handle that operates a simple diaphragm-like piston pump that lifts enough water to the crest of the siphon to start the flow of water which then completely empties the contents of the cistern into the toilet bowl. the advantage of this system was that no water would leak from the cistern excepting when flushed. these were mandatory in the uk until 2011.
The way tank toilets flush is by adding water to the bowl (the part you pee into) until the rising water level starts a siphon that pulls everything down the toilet drain. When you flush, the handle lifts a stopper at the bottom of the toilet's water tank (the upper part of the toilet where the handle is), which lets water from the tank rush into the bowl, creating the siphon and sending everything down the drain. On toilets that aren't super effective, pressing the handle quickly sometimes doesn't leave the stopper open long enough to drain enough water into the bowl; the stopper closes too soon unless you hold the handle down and force it to stay open. You can see all of this happen on your own toilet if you take the top off of the tank!
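To see why hold time matters, here is a toy numerical sketch. The tank cross-section, flapper diameter, starting depth, and the volume a siphon needs are all made-up values for illustration; the model just drains the tank through the open stopper using Torricelli's law.

```python
import math

TANK_AREA = 0.06                  # m^2, tank cross-section (assumed)
VALVE_AREA = math.pi * 0.025**2   # m^2, 5 cm diameter flapper (assumed)
G = 9.81                          # m/s^2
NEEDED_LITRES = 4.0               # volume needed to start the siphon (assumed)

def litres_delivered(hold_seconds: float, start_depth: float = 0.10,
                     dt: float = 0.001) -> float:
    """Integrate Torricelli outflow (v = sqrt(2*g*h)) while the flapper
    stays open, returning litres delivered to the bowl."""
    h, delivered, t = start_depth, 0.0, 0.0
    while t < hold_seconds and h > 0:
        q = VALVE_AREA * math.sqrt(2 * G * h)  # flow rate, m^3/s
        delivered += q * dt
        h -= q * dt / TANK_AREA                # tank level drops
        t += dt
    return delivered * 1000

for hold in (0.5, 1.0, 2.0, 4.0):
    vol = litres_delivered(hold)
    verdict = "flushes" if vol >= NEEDED_LITRES else "weak flush"
    print(f"hold {hold:3.1f} s -> {vol:4.1f} L delivered ({verdict})")
```

Under these made-up numbers, a half-second tap delivers only about a litre, while holding the handle for a couple of seconds delivers enough to start the siphon; a well-adjusted flapper is buoyant enough to stay open that long on its own.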
why are high rise buildings safer than shorter buildings in the event of an earthquake?
<p> traditional seismic design assumes that the lower stories of a building are stronger than the upper stories; where this is not the case (if the lower story is less strong than the upper structure), the structure will not respond to earthquakes in the expected fashion. using modern design methods, it is possible to take a weak lower story into account. several failures of this type in one large apartment complex caused most of the fatalities in the 1994 northridge earthquake. <p> regions with low seismic risk are safe for most earth buildings, but historic construction techniques often cannot resist even medium earthquake levels effectively because of earthen buildings' three highly undesirable qualities as a seismic building material: being relatively 'weak, heavy and brittle'. however, earthen buildings can be built to resist seismic loads. <p> however, only certain types of structures are vulnerable to this resonance effect. taller buildings have their own frequencies of vibration. those that are six to fifteen stories tall also vibrate at the 2.5-second cycle, making them act like tuning forks in the event of an earthquake. the low-frequency waves of an earthquake are amplified by the mud of the lakebed and are, in turn, amplified by the building itself. this causes these buildings to shake more violently than the earthquake proper as the earthquake progresses. many of the older colonial buildings have survived hundreds of years on the lakebed simply because they are not tall enough to be affected by the resonance effect. <p> the skyline has seen rapid growth due to improvements in seismic design standards, which have made certain building types highly earthquake-resistant. many of the new skyscrapers contain a housing or hotel component. <p> high-rise structures pose particular design challenges for structural and geotechnical engineers, particularly if situated in a seismically active region or if the underlying soils have geotechnical risk factors such as high compressibility or bay mud. they also pose serious challenges to firefighters during emergencies in high-rise structures. new and old building design, building systems like the building standpipe system, hvac systems (heating, ventilation and air conditioning), fire sprinkler system and other things like stairwell and elevator evacuations pose significant problems. studies are often required to ensure that pedestrian wind comfort and wind danger concerns are addressed. in order to allow less wind exposure, to transmit more daylight to the ground and to appear more slender, many high-rises have a design with setbacks. <p> multi-storey buildings were then constructed using a reinforced concrete frame of columns and beams with brick infill panels. holmes and his colleagues believed that in a major earthquake these rigid outer walls, which were poorly connected to the relatively flexible inner frame, would take the brunt of the seismic forces, causing them to "shatter, fall and destroy the building." <p> because the then-new principles of "skyscraper" design were not yet fully understood, the building was overbuilt, with its steel foundation anchored deeply into bedrock five stories below street level. this overly sturdy construction helped this tall, slender building withstand the collapse of two world trade towers only 220 yards (201 m) to the west on september 11, 2001, with only minimal damage, despite an impact measured at the time as a magnitude 3.3 seismic event.
I'm no expert here, but I'll give a quick answer till someone can go into actual detail. It depends on the earthquake, as different earthquakes produce different frequencies of vibration. Taller buildings have different resonance frequencies than shorter buildings. A resonance frequency is the specific frequency (number of vibrations per unit time) at which an object's shaking keeps building up rather than dying out. Think of pushing someone on a swing: to get the maximum height, you need to push at a specific moment. If you pushed at random times, you might slow them down, or even push them right off of the swing. So more vibrations do not always mean more damage; what matters is whether the shaking matches the building's resonance frequency.
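To put rough numbers on the swing analogy: a common engineering rule of thumb (not from the context) puts a building's fundamental period at roughly 0.1 seconds per storey. The sketch below compares that against an assumed 2.5-second shaking period, echoing the lakebed example above. Note the rule is a firm-ground approximation; soft soil or accumulated damage can lengthen a building's effective period considerably, which is how 6-to-15-storey buildings on the lakebed ended up resonating with 2.5-second waves.

```python
# Crude resonance check: flag buildings whose (rule-of-thumb) natural
# period sits near the dominant period of the ground shaking.
# Both the 0.1 s/storey rule and the 2.5 s quake period are assumptions.

QUAKE_PERIOD_S = 2.5  # dominant period of the shaking (assumed)

def fundamental_period_s(storeys: int) -> float:
    """Very rough natural period of a building: ~0.1 s per storey."""
    return 0.1 * storeys

for storeys in (5, 10, 15, 20, 25, 30, 50):
    t = fundamental_period_s(storeys)
    at_risk = abs(t - QUAKE_PERIOD_S) <= 0.6
    flag = "resonance risk" if at_risk else "off-resonance"
    print(f"{storeys:>2} storeys: T ~ {t:.1f} s -> {flag}")
```

The point is not the exact numbers but the shape of the result: for any given earthquake there is a band of building heights that resonates, and both shorter and much taller buildings sit outside it.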
how do those images where a 3d image appears when you cross your eyes work?
<p> when one image is presented to one eye and a very different image is presented to the other (also known as dichoptic presentation), instead of the two images being seen superimposed, one image is seen for a few moments, then the other, then the first, and so on, randomly for as long as one cares to look. for example, if a set of vertical lines is presented to one eye, and a set of horizontal lines to the same region of the retina of the other, sometimes the vertical lines are seen with no trace of the horizontal lines, and sometimes the horizontal lines are seen with no trace of the vertical lines. <p> the lenses are accurately aligned with the interlaces of the image, so that light reflected off each strip is refracted in a slightly different direction, but the light from all pixels originating from the same original image is sent in the same direction. the end result is that a single eye looking at the print sees a single whole image, but two eyes will see different images, which leads to stereoscopic 3d perception. <p> by focusing the lenses on a nearby autostereogram where patterns are repeated and by converging the eyeballs at a distant point behind the autostereogram image, one can trick the brain into seeing 3d images. if the patterns received by the two eyes are similar enough, the brain will consider these two patterns a match and treat them as coming from the same imaginary object. this type of visualization is known as "wall-eyed viewing", because the eyeballs adopt a wall-eyed convergence on a distant plane, even though the autostereogram image is actually closer to the eyes. because the two eyeballs converge on a plane farther away, the perceived location of the imaginary object is behind the autostereogram. the imaginary object also appears bigger than the patterns on the autostereogram because of foreshortening. <p> given two or more images of the same 3d scene, taken from different points of view, the correspondence problem refers to the task of finding a set of points in one image which can be identified as the same points in another image. to do this, points or features in one image are matched with the corresponding points or features in another image. the images can be taken from a different point of view, at different times, or with objects in the scene in general motion relative to the camera(s). <p> starting with a 2d image, image points are extracted which correspond to corners in an image. the projection rays from the image points are reconstructed from the 2d points so that the 3d points, which must be incident with the reconstructed rays, can be determined. <p> the parallel viewing method uses an image pair with the left-eye image on the left and the right-eye image on the right. the fused three-dimensional image appears larger and more distant than the two actual images, making it possible to convincingly simulate a life-size scene. the viewer attempts to look "through" the images with the eyes substantially parallel, as if looking at the actual scene. this can be difficult with normal vision because eye focus and binocular convergence are habitually coordinated. one approach to decoupling the two functions is to view the image pair extremely close up with completely relaxed eyes, making no attempt to focus clearly but simply achieving comfortable stereoscopic fusion of the two blurry images by the "look-through" approach, and only then exerting the effort to focus them more clearly, increasing the viewing distance as necessary.
regardless of the approach used or the image medium, for comfortable viewing and stereoscopic accuracy the size and spacing of the images should be such that the corresponding points of very distant objects in the scene are separated by the same distance as the viewer's eyes, but not more; the average interocular distance is about 63 mm. viewing much more widely separated images is possible, but because the eyes never diverge in normal use it usually requires some previous training and tends to cause eye strain. <p> the cross-eyed viewing method, in traditional stereoscopy, swaps the left and right eye images so that they will be correctly seen cross-eyed, the left eye viewing the image on the right and vice versa. a fused three-dimensional image thus appears to the eye, though it also appears to be smaller and closer than the actual images, so that large objects and scenes appear miniaturized.
All the random dots repeat in fixed-width columns. Crossing your eyes allows you to view adjacent columns overlapping as if they were one, though they will still look flat. To get the 3D effect, any individual dot can be made to appear nearer or farther by shortening or widening the repeat distance for just that dot (thinner or wider columns for just those dots). The eye can pick out those offset dots quite easily, and the brain reads the offset as depth, making them appear 3D.
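This column trick is easy to demonstrate in code. Below is a minimal random-dot autostereogram sketch; the image size, column period, maximum shift, and the floating-rectangle depth map are all arbitrary choices, and it writes a plain PGM file so no imaging library is needed. (With wall-eyed viewing the rectangle should float in front of the background; cross-eyed viewing inverts the depth.)

```python
import numpy as np

WIDTH, HEIGHT = 400, 300
PERIOD = 80       # base column width in pixels (assumed)
MAX_SHIFT = 20    # how much a "near" point shrinks the repeat distance

# Depth map: 0 = background, 1 = nearest. Here, a floating rectangle.
depth = np.zeros((HEIGHT, WIDTH))
depth[100:200, 150:250] = 1.0

rng = np.random.default_rng(0)
image = rng.integers(0, 2, size=(HEIGHT, WIDTH)).astype(np.uint8)

for y in range(HEIGHT):
    for x in range(PERIOD, WIDTH):
        # Nearer points repeat at a shorter distance (thinner columns),
        # which the two eyes see as disparity, i.e. depth.
        shift = int(MAX_SHIFT * depth[y, x])
        image[y, x] = image[y, x - (PERIOD - shift)]

# Write a binary PGM so the result can be viewed without extra libraries.
with open("autostereogram.pgm", "wb") as f:
    f.write(b"P5\n%d %d\n255\n" % (WIDTH, HEIGHT))
    f.write((image * 255).tobytes())
```

The whole effect lives in the single line that copies each pixel from `PERIOD - shift` pixels to its left: where the depth map is flat, the columns repeat exactly; where it is raised, the repeat distance shrinks, producing exactly the offset dots described above.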
assume the universe is infinite; are there then other realities in which everything is almost exactly the same as on earth?
<p> "only the pythagoreans place the infinite among the objects of sense (they do not regard number as separable from these), and assert that what is outside the heaven is infinite. plato, on the other hand, holds that there is no body outside (the forms are not outside because they are nowhere), yet that the infinite is present not only in the objects of sense but in the forms also. (aristotle)" <p> everything (or every thing) is all that exists; the opposite of nothing, or its complement. it is the totality of things relevant to some subject matter. without expressed or implied limits, it may refer to anything. the universe is everything that exists theoretically, though a multiverse may exist according to theoretical cosmology predictions. it may refer to an anthropocentric worldview, or the sum of human experience, history, and the human condition in general. every object and entity is a part of everything, including all physical bodies and in some cases all abstract objects. <p> since there is believed to be no "center" or "edge" of the universe, there is no particular reference point with which to plot the overall location of the earth in the universe. because the observable universe is defined as that region of the universe visible to terrestrial observers, earth is, because of the constancy of the speed of light, the center of earth's observable universe. reference can be made to the earth's position with respect to specific structures, which exist at various scales. it is still undetermined whether the universe is infinite. there have been numerous hypotheses that the known universe may be only one such example within a higher multiverse; however, no direct evidence of any sort of multiverse has been observed, and some have argued that the hypothesis is not falsifiable. <p> this infinite or god (also the reality) is the enticing and elusive dimension of our human life. god is ever approachable, but never attainable exhaustively. like the horizon, that invites and cajoles us and recedes from us, god is always near and far at the same time. he bases this insight on scientific details like the lowest temperature reachable (t →0) and knowing that the beginning of big bang (t →0) and is like the "horizon", which is never fully attainable. <p> if we [...] define being in the universal sense as the principle of manifestation, and at the same time as comprising in itself the totality of possibilities of all manifestation, we must say that being is not infinite because it does not coincide with total possibility; and all the more so because being, as the principle of manifestation, although it does indeed comprise all the possibilities of manifestation, does so only insofar as they are actually manifested. outside of being, therefore, are all the rest, that is all the possibilities of non-manifestation, as well as the possibilities of manifestation themselves insofar as they are in the unmanifested state; and included among these is being itself, which cannot belong to manifestation since it is the principle thereof, and in consequence is itself unmanifested. for want of any other term, we are obliged to designate all that is thus outside and beyond being as "non-being", but for us this negative term is in no way synonym for 'nothingness'. <p> according to avicenna, the universe consists of a chain of actual beings, each giving existence to the one below it and responsible for the existence of the rest of the chain below. 
because an actual infinite is deemed impossible by avicenna, this chain as a whole must terminate in a being that is wholly simple and one, whose essence is its very existence, and therefore is self-sufficient and not in need of something else to give it existence. because its existence is not contingent on or necessitated by something else but is necessary and eternal in itself, it satisfies the condition of being the necessitating cause of the entire chain that constitutes the eternal world of contingent existing things. thus his ontological system rests on the conception of god as the "wajib al-wujud" (necessary existent). there is a gradual multiplication of beings through a timeless emanation from god as a result of his self-knowledge. <p> philoponus originated the argument now known as the traversal of the infinite. if the existence of something requires that something else exist before it, then the first thing cannot come into existence without the thing before it existing. an infinite number cannot actually exist, nor be counted through or 'traversed', or be increased. something cannot come into existence if this requires an infinite number of other things existing before it. therefore, the world cannot be infinite.
that's not necessarily true. just because something is infinite does not mean it contains every possible arrangement. i can come up with an infinite string of digits that never repeats itself but is entirely bland: 101001000100001000001... (i.e. add one more 0 between the 1s every time). it goes on forever without repeating, yet nothing interesting ever appears in it.
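For what it's worth, that counterexample takes only a few lines to generate (the function name here is just for illustration):

```python
from itertools import count, islice

def bland_sequence():
    """Yield "1", then n zeros, for n = 1, 2, 3, ... forever:
    an infinite, never-repeating, yet entirely bland string."""
    for n in count(1):
        yield "1"
        yield "0" * n

# First few chunks of the infinite string:
print("".join(islice(bland_sequence(), 12)))  # 101001000100001000001000000
```

The string never repeats (the gaps between 1s keep growing), yet it never even contains two 1s in a row; infinity alone does not force every pattern to occur.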
how come we are legally adults and can be tried as adults, but still can't buy alcohol?
<p> the age at which people are legally allowed to purchase alcohol is 18 or over in most circumstances. adults purchasing alcohol on behalf of a person under 18 in a pub or from an off-licence are potentially liable to prosecution along with the vendor. <p> persons under 18 years cannot drink alcohol on licensed premises under any circumstances. until 13 september 2018, licensees could supply liquor to a minor for consumption on a licensed premises as part of a meal if the minor was accompanied by a parent, guardian, or spouse, and minors could not be on licensed premises (i.e. premises on which alcohol may be sold or consumed) unless accompanied by an adult or in other limited circumstances. <p> it is legal for a person under 18 years to drink alcohol within private premises, with the supervision of a parent/guardian. it is illegal for a person under the age of 18 years to purchase alcohol, or to have alcohol bought for them in public places, or to attend a licensed venue without parental supervision (there are some special circumstances). it is illegal for licensed premises to sell alcohol to someone under the age of 18 years. <p> alcohol is legal for adults 21 and over in the state of california to possess, purchase, and consume. sale of alcohol is regulated and a license must be granted by county authorities before a store, bar, or restaurant may sell alcohol. <p> most people are aware that serving alcohol to people who are below the legal age for the consumption of alcohol is illegal in the united states. exceptions from that prohibition for service of alcohol to minors in family settings, for religious reasons and other purposes vary by state. in some states a person who serves alcohol to a minor may potentially be held liable if the alcohol provided is found to have contributed to the commission of a crime. <p> a person must be at least 21 years old in new jersey to purchase alcoholic beverages in a retail establishment, or to possess or consume alcoholic beverages in a public (for example, a park or on the street) or semi-public area (e.g. restaurant, automobile). a person only needs to be 18 to own a liquor license, or to sell or serve alcohol (for example, a waiter). state law also prohibits an underage person from misrepresenting their age in a licensed establishment. <p> except for the specific exempt circumstances provided in maryland law, it is also illegal for anyone to purchase alcohol for someone under 21, or to give it to them. maryland alcohol laws require that the defendant knew the person was under 21, and purchased or furnished alcohol for that underage person to consume. in addition, it is also illegal for an adult who owns or leases property, and lives at that property, to knowingly and willfully allow anyone under 21 to consume alcohol there, unless they are members of the same immediate family. this law does not necessarily make homeowners criminally responsible for any illegal drinking at their residence, unless they were both aware of it and intentionally allowed it to happen.
There's some evidence that alcohol abuse is significantly more harmful at 18 than at, say, 30. Because of this, the federal government made a share of highway funding contingent on states setting their drinking age to 21 (the National Minimum Drinking Age Act of 1984), and every state eventually complied. The age of legal adulthood and the drinking age are simply set by different laws with different rationales, so they don't have to match.