id | input | output | meta
---|---|---|---|
rh5zl | Why does my voice tend to go higher around new company or in formal situations? | [
{
"answer": "Higher pitched voices are often perceived as less threatening. So it is a way of reassuring new people you are not a threat. It could also be the excitement of meeting a new person being expressed as a higher pitch.",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "49749599",
"title": "Deep voice privilege",
"section": "",
"start_paragraph_id": 1,
"start_character": 0,
"end_paragraph_id": 1,
"end_character": 246,
"text": "Deep voice privilege, an idea in sociology and psychology, is the privilege that a man gains from having a deeper speaking voice. According to one study, there is a correlation between voice pitch, CEO salary, and size of firm that the CEO runs.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "125909",
"title": "Procedural justice",
"section": "Section::::In relation to communication.\n",
"start_paragraph_id": 5,
"start_character": 0,
"end_paragraph_id": 5,
"end_character": 1163,
"text": "The ability and right to a voice is linked with feelings of respect and value, which emphasizes the importance of the interpersonal factors of procedural justice. This is important in the workplace because employees will feel more satisfied and respected, which can help to increase job task and contextual performance. There is an emphasis on the interpersonal and social aspects of the procedure, which results in employees feeling more satisfied when their voices are able to be heard. This was argued by Greenberg and Folger. Procedural justice is also a major factor that contributes to the expression of employee dissent. It correlates positively with managers' upward dissent. With procedural justice there is a greater degree of fairness in the workplace. There are six rules that apply to procedural justice, \"Leventhal's rules\": consistency, bias suppression, accuracy, correctability, representativeness, and ethicality. With procedural justice in the workplace and in communication, things need to be fair to everyone: when something is applied it has to be applied to everyone, and procedures need to be consistent with moral and ethical values.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "24581594",
"title": "Exit, Voice, and Loyalty Model",
"section": "Section::::Exit, Voice, Loyalty, Neglect Model.:Voice.\n",
"start_paragraph_id": 34,
"start_character": 0,
"end_paragraph_id": 34,
"end_character": 399,
"text": "Voice refers to any attempt to change, rather than escape from, the dissatisfying situation. Voice can be a constructive response, such as recommending ways for management to improve the situation, or it can be more confrontational, such as filing formal grievances. In the extreme, some employees might engage in counterproductive behaviors to get attention and force changes in the organization.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "34307928",
"title": "Voice change",
"section": "Section::::Anatomical changes.\n",
"start_paragraph_id": 4,
"start_character": 0,
"end_paragraph_id": 4,
"end_character": 541,
"text": "Most of the voice change begins around puberty. Adult pitch is reached 2–3 years later but the voice does not stabilize until the early years of adulthood. It usually happens months or years before the development of significant facial hair. Under the influence of androgens, the voice box, or larynx, grows in both sexes. This growth is far more prominent in boys than in girls and is more easily perceived. It causes the voice to drop and deepen. Along with the larynx, the vocal folds (vocal cords) grow significantly longer and thicker.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "1871417",
"title": "Exit, Voice, and Loyalty",
"section": "Section::::Applying the theory to membership organizations.\n",
"start_paragraph_id": 9,
"start_character": 0,
"end_paragraph_id": 9,
"end_character": 258,
"text": "Some studies confirm Hirschman's assertion that greater exit and entry costs heighten the likelihood of voice. Particularly when examining dispute resolution in contexts with limited exit opportunities, increased entry costs make workers' voice more likely.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "34307928",
"title": "Voice change",
"section": "Section::::Anatomical changes.\n",
"start_paragraph_id": 5,
"start_character": 0,
"end_paragraph_id": 5,
"end_character": 529,
"text": "The facial bones begin to grow as well. Cavities in the sinuses, the nose, and the back of the throat grow bigger, thus creating more space within the head to allow the voice to resonate. Occasionally, voice change is accompanied by unsteadiness of vocalization in the early stages of untrained voices. Due to the significant drop in pitch to the vocal range, people may unintentionally speak in head voice or even strain their voices using pitches which were previously chest voice, the lowest part of the modal voice register.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "3315213",
"title": "Sex differences in human physiology",
"section": "Section::::Evolution of sexual dimorphism in human voice pitch.\n",
"start_paragraph_id": 15,
"start_character": 0,
"end_paragraph_id": 15,
"end_character": 729,
"text": "The pitch of the male voice is about half as high as that of the female voice. Even after controlling for body height and volume, the male voice remains lower. Some scientists have suggested that the human voice evolved through intersexual sexual selection, via female mate choice. Puts (2005) showed that preference for male voice pitch changed according to the stage of the menstrual cycle, whilst Puts (2006) found women preferred lower male voices mainly for short-term, sexual relationships. Intrasexual selection, via male competition, also causes selection on voice pitch. Pitch is related to interpersonal power, and males tend to adjust their pitch according to their perceived dominance when speaking to a competitor.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
1rkgwb | Is it possible to cryogenically freeze an entire ovary to save the eggs for later? | [
{
"answer": "I don't believe we've reached a point where we can 'revive' tissue after cryopreservation. In the case of gametes and embryos, special freezing media is required to protect them from cold shock and freezing damage. Cryoprotectants such as low density lipoproteins and glycerol stabilise the plasma membrane during chilling and replace intracellular fluid to prevent ice crystals forming within the cell. As the follicular fluid does not have these properties, I doubt the oocytes would survive being frozen within the ovary itself.",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "219284",
"title": "Cryobiology",
"section": "Section::::Applied cryobiology.:In humans.\n",
"start_paragraph_id": 33,
"start_character": 0,
"end_paragraph_id": 33,
"end_character": 1489,
"text": "Cryopreservation in humans with regards to infertility involves preservation of embryos, sperm, or oocytes via freezing. Conception, \"in vitro\", is attempted when the sperm is thawed and introduced to the 'fresh' eggs, the frozen eggs are thawed and sperm is placed with the eggs and together they are placed back into the uterus or a frozen embryo is introduced to the uterus. Vitrification has flaws and is not as reliable or proven as freezing fertilized sperm, eggs, or embryos as traditional slow freezing methods because eggs alone are extremely sensitive to temperature. Many researchers are also freezing ovarian tissue in conjunction with the eggs in hopes that the ovarian tissue can be transplanted back into the uterus, stimulating normal ovulation cycles. In 2004, Donnez of Louvain in Belgium reported the first successful ovarian birth from frozen ovarian tissue. In 1997, samples of ovarian cortex were taken from a woman with Hodgkin's lymphoma and cryopreserved in a (Planer, UK) controlled-rate freezer and then stored in liquid nitrogen. Chemotherapy was initiated after the patient had premature ovarian failure. In 2003, after freeze-thawing, orthotopic autotransplantation of ovarian cortical tissue was done by laparoscopy and after five months, reimplantation signs indicated recovery of regular ovulatory cycles. Eleven months after reimplantation, a viable intrauterine pregnancy was confirmed, which resulted in the first such live birth – a girl named Tamara.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "50642488",
"title": "Cryoconservation of animal genetic resources",
"section": "Section::::Description.\n",
"start_paragraph_id": 5,
"start_character": 0,
"end_paragraph_id": 5,
"end_character": 892,
"text": "Cryoconservation is the process of freezing cells and tissues using liquid nitrogen to achieve extreme low temperatures with the intent of using the preserved sample to prevent the loss of genetic diversity. Semen, embryos, oocytes, somatic cells, nuclear DNA, and other types of biomaterial such as blood and serum can be stored using cryopreservation, in order to preserve genetic materials. The primary benefit of cryoconservation is the ability to save germplasms for extended periods of time, therefore maintaining the genetic diversity of a species or breed. There are two common techniques of cryopreservation: slow freezing and vitrification. Slow freezing helps eliminate the risk of intracellular ice crystals. If ice crystals form in the cells, there can be damage or destruction of genetic material. Vitrification is the process of freezing without the formation of ice crystals.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "10990255",
"title": "Oocyte cryopreservation",
"section": "",
"start_paragraph_id": 1,
"start_character": 0,
"end_paragraph_id": 1,
"end_character": 1048,
"text": "Human oocyte cryopreservation (egg freezing) is a procedure to preserve a woman's eggs (oocytes). This technique has been used to enable women to postpone pregnancy to a later date - whether for medical reasons such as cancer treatment or for social reasons such as employment or studying. Several studies have proven that most infertility problems are due to germ cell deterioration related to ageing. Surprisingly, the uterus remains completely functional in most elderly women. This implies that the factor which needs to be preserved is the woman's eggs. The eggs are extracted, frozen and stored. The intention of the procedure is that the woman may choose to have the eggs thawed, fertilized, and transferred to the uterus as embryos to facilitate a pregnancy in the future. The procedure's success rate (the chances of a live birth using frozen eggs) varies depending on the age of the woman, and ranges from 14.8 percent (if the eggs were extracted when the woman was 40) to 31.5 percent (if the eggs were extracted when the woman was 25).\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "50642488",
"title": "Cryoconservation of animal genetic resources",
"section": "Section::::Limitations.\n",
"start_paragraph_id": 68,
"start_character": 0,
"end_paragraph_id": 68,
"end_character": 863,
"text": "Cryoconservation is limited by the cells and tissues that can be frozen and successfully thawed. Cells and tissues that can be successfully frozen are limited by their surface area. To keep cells and tissues viable, they must be frozen quickly to prevent ice crystal formation. Thus, a large surface area is beneficial. Another limitation is the species being preserved. There have been difficulties using particular methods of cryoconservation with certain species. For example, artificial insemination is more difficult in sheep than cattle, goats, pigs, or horses due to posterior folds in the cervix of ovines. Cryopreservation of embryos is dependent on the species and the stage of development of the embryo. Pig embryos are the most difficult to freeze, thaw, and utilize to produce live offspring due to their sensitivity to chilling and high lipid content.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "19349845",
"title": "Cryopreservation",
"section": "",
"start_paragraph_id": 1,
"start_character": 0,
"end_paragraph_id": 1,
"end_character": 1213,
"text": "Cryo-preservation or cryo-conservation is a process where organelles, cells, tissues, extracellular matrix, organs, or any other biological constructs susceptible to damage caused by unregulated chemical kinetics are preserved by cooling to very low temperatures (typically −80 °C using solid carbon dioxide or −196 °C using liquid nitrogen). At low enough temperatures, any enzymatic or chemical activity which might cause damage to the biological material in question is effectively stopped. Cryopreservation methods seek to reach low temperatures without causing additional damage caused by the formation of ice crystals during freezing. Traditional cryopreservation has relied on coating the material to be frozen with a class of molecules termed cryoprotectants. New methods are constantly being investigated due to the inherent toxicity of many cryoprotectants. By default it should be considered that cryopreservation alters or compromises the structure and function of cells unless it is proven otherwise for a particular cell population. Cryoconservation of animal genetic resources is the process in which animal genetic material is collected and stored with the intention of conservation of the breed.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "19349845",
"title": "Cryopreservation",
"section": "Section::::History.\n",
"start_paragraph_id": 7,
"start_character": 0,
"end_paragraph_id": 7,
"end_character": 782,
"text": "Cryopreservation was applied to humans beginning in 1954 with three pregnancies resulting from the insemination of previously frozen sperm. Fowl sperm was cryopreserved in 1957 by a team of scientists in the UK directed by Christopher Polge. During 1963, Peter Mazur, at Oak Ridge National Laboratory in the U.S., demonstrated that lethal intracellular freezing could be avoided if cooling was slow enough to permit sufficient water to leave the cell during progressive freezing of the extracellular fluid. That rate differs between cells of differing size and water permeability: a typical cooling rate around 1 °C/minute is appropriate for many mammalian cells after treatment with cryoprotectants such as glycerol or dimethyl sulphoxide, but the rate is not a universal optimum.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "10990255",
"title": "Oocyte cryopreservation",
"section": "Section::::Indications.\n",
"start_paragraph_id": 8,
"start_character": 0,
"end_paragraph_id": 8,
"end_character": 492,
"text": "Oocyte cryopreservation is an option for individuals undergoing IVF who object, either for religious or ethical reasons, to the practice of freezing embryos. Having the option to fertilize only as many eggs as will be utilized in the IVF process, and then freeze any remaining unfertilized eggs can be a solution. In this way, there are no excess embryos created, and there need be no disposition of unused frozen embryos, a practice which can create complex choices for certain individuals.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
9am27c | how do we know that pet euthanasia is truly painless? | [
{
"answer": "First, I'm sorry to hear you're going through this. It's the absolute worst part of owning a pet.\n\nTo answer your question, though, pet euthanasia is essentially done with a large dose of anesthesia. Have you ever had surgery? It's the same process, but with an alternative end. The feeling you felt while being put under is the same feeling your pet will feel. No pain at all. They'll slowly drift away peacefully.",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "258700",
"title": "Pet adoption",
"section": "Section::::Unwanted pets.\n",
"start_paragraph_id": 4,
"start_character": 0,
"end_paragraph_id": 4,
"end_character": 1815,
"text": "People deal with their unwanted pets in many ways. Some people have the pet euthanized (also known as \"putting it down\" or \"putting it to sleep\"), although many veterinarians do not consider this to be an ethical use of their resources for young and healthy animals, while others argue that euthanasia is a more humane option than leaving a pet in a cage for very long periods of time. Other people simply release the pet into the wild or otherwise abandon it, with the expectation that it will be able to take care of itself or that it will be found and adopted. More often, these pets succumb to hunger, weather, traffic, or common and treatable health problems. Some people euthanize pets because of terminal illnesses or injuries, while others even do it for common health problems that they cannot, or will not, pay for treating. More responsible owners will take the pet to a shelter, or call a rescue organization, where it will be cared for properly until a home can be found. One more way is to rehome a dog (find another owner for this dog) that can occur because of allergy to a dog, pet-owner death, divorce, baby born or even relocation. Homes cannot always be found, however, and euthanasia is often used for the excess animals to make room for newer pets, unless the organization has a no-kill policy. The Humane Society of the United States estimates that 2.4 million healthy, adoptable cats and dogs are euthanized each year in the US because of a lack of homes. Animal protection advocates campaign for adoption instead of buying animals in order to reduce the number of animals who have to be euthanized. Many shelters and animal rescues encourage the education of spaying or neutering a pet in order to reduce the number of animals euthanized in shelters and to help control the pet population.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "60857",
"title": "People for the Ethical Treatment of Animals",
"section": "Section::::Philosophy and activism.:Profile.\n",
"start_paragraph_id": 19,
"start_character": 0,
"end_paragraph_id": 19,
"end_character": 221,
"text": "Recently, PETA has been accused of mass euthanasia of animals within PETA-owned shelters, in violation of the euthanasia laws of the state of Virginia. These actions have prompted the creation of the no-kill movement.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "60857",
"title": "People for the Ethical Treatment of Animals",
"section": "Section::::Philosophy and activism.:Euthanizing shelter animals.\n",
"start_paragraph_id": 61,
"start_character": 0,
"end_paragraph_id": 61,
"end_character": 1537,
"text": "PETA opposes the no-kill movement, attempts to address the animal-overpopulation crisis at its source through spaying and neutering companion animals as well as by opposing breeders and puppy mills, transfers adoptable animals to open-admission shelters, and euthanizes most of the animals who end up at its \"shelter of last resort.\" According to its 2014 recent filing with the Virginia Department of Agriculture and Consumer Services (VDACS), PETA euthanized 81 percent of the animals who ended up at its shelter. According to VDACS, PETA took 3,017 animals into its shelters in 2014, of which 2,455 were euthanized, 162 were adopted, 353 were released to other shelters, and 6 were reclaimed by their original owners. The group justifies its euthanasia policies toward animals who are not adopted by saying that it takes in feral cat colonies with diseases such as feline AIDS and leukemia, stray dogs, litters of parvo-infected puppies, and backyard dogs and says that it would be unrealistic to follow a \"no-kill\" policy in such instances. PETA offers free euthanasia services to counties that kill unwanted animals via gassing or shooting—the group recommends the use of an intravenous injection of sodium pentobarbital if administered by a trained professional and for severely ill or dying animals when euthanasia at a veterinarian is unaffordable. The group recommends not breeding pit bulls and supports euthanasia in certain situations for animals in shelters: for example, for those living for long periods in cramped cages.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "16045584",
"title": "Philippine Animal Welfare Society",
"section": "Section::::Fight against animal cruelty.:Tambucho killing and euthanasia.\n",
"start_paragraph_id": 37,
"start_character": 0,
"end_paragraph_id": 37,
"end_character": 311,
"text": "PAWS recommends using the methods that cause a rapid loss of consciousness and that cause minimal pain, distress, and suffering in the animal. PAWS opposes any euthanasia methods or techniques that do not meet these humane principles. PAWS opposes “tambucho” gassing and electrocution as methods of euthanasia.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "21972643",
"title": "National Animal Interest Alliance",
"section": "Section::::Positions.:Pets.\n",
"start_paragraph_id": 12,
"start_character": 0,
"end_paragraph_id": 12,
"end_character": 424,
"text": "The Humane Society of the United States reports that about 4 million cats and dogs are euthanized yearly in shelters in the US. But, the NAIA says that the Humane Society does not differentiate in its reporting on euthanasia between the number of adoptable animals and others. For instance, shelters generally consider feral cats as unadoptable, as are a number of dogs that are too old, too sick or have behavior problems.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "5256121",
"title": "Society for the Prevention of Cruelty to Animals (Hong Kong)",
"section": "Section::::Controversy.\n",
"start_paragraph_id": 48,
"start_character": 0,
"end_paragraph_id": 48,
"end_character": 207,
"text": "The SPCA (HK)'s policy on animal euthanasia has long been controversial among animal lovers and pet owners. According to the SPCA (HK)'s annual report, 4,128 animals were put down by the society in 2007–08.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "18687970",
"title": "MSPCA-Angell",
"section": "Section::::Statements of belief.\n",
"start_paragraph_id": 27,
"start_character": 0,
"end_paragraph_id": 27,
"end_character": 368,
"text": "BULLET::::- Euthanasia of Shelter Animals: recognizing that the number of animals in shelter exceeds the number of responsible people available to adopt them, the society condones euthanasia but condemns the use of high-altitude decompression chambers, electrocution, injectable paralytic agents, unfiltered and uncooled carbon monoxide, and drowning for this purpose\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
5v34pk | Why not send a drone to Mars that can recharge with solar panels? | [
{
"answer": "NASA is considering the possibility of a helicopter drone for a future Mars mission. You can read about this [from NASA](_URL_3_), as well as some articles on others sites, such as [this](http://www._URL_0_/28360-nasa-mars-helicopter-drone.html) (from _URL_0_) and [this](https://www._URL_1_/extreme/229937-nasa-testing-helicopter-drone-to-accompany-next-mars-rover) (from _URL_1_).\n\nThough it's a different kind of thing, there is also consideration of a [glider](_URL_5_).",
"provenance": null
},
{
"answer": "I feel that if you were to send a flying drone to Mars, a major issue would be latency. Very high latency from Earth is already dangerous for rovers, and unless we can make a device that flies predictably, if something went wrong we wouldn't be able to know about it or fix it in time.",
"provenance": null
},
{
"answer": "Martian dust is an important mention here. Due to the small size of dust grains and the low gravity, they are lifted very easily and stay suspended in the atmosphere. The lack of humidity also allows them to stay afloat for a long time.\n\nThanks to wind erosion, dust grains are rounded, which makes them far less abrasive than lunar dust. However it is still very harmful as it can clog bearings (or any moving parts for that matter), damage camera lenses or cover solar panels. In fact, one of the goals of Curiosity's \"skycrane\" was to keep the retrorockets as far as possible from the ground to minimize the amount of lifted dust. The Spirit rover had already lost function of one of its wheels due to dust in the bearings before it got stuck in a sand pit.\n\nAlso flying with an upwards propeller requires a lot of energy. The drone would take few short flights of few minutes in length and stay on charge mode for a long time. These frequent landings and takeoffs would lift dust every time making things worse.\n\nOf course it's not impossible, just a huge engineering challenge that makes rovers preferable in many cases.\n\nThe other comment gave some links about a flying drone. Hopping robots and walking robots have also been already thought of, see \"Introduction to Space Robotics\" by Giancarlo Genta. The greatest advantage of wheels is their low energy requirement, though of course, they have trouble on harsh terrains.\n",
"provenance": null
},
{
"answer": "In short, there will never be a self-powered drone or helicopter on Mars. The reason has everything to do with lift.\n\nOn Mars, as you stated, the atmosphere is incredibly thin at only 0.02 kg/m^3. Now let's look at wings; fundamentally, a wing generates lift by creating an area of higher static pressure on the underside. This is done by accelerating the flow over the top and taking advantage of Bernoulli's equation, which states that:\n\n\nTotal Pressure = Static Pressure + Dynamic Pressure\n\nwhere: Dynamic Pressure = 0.5*(density)*(velocity)^2\n\n\nNow, we know that the total pressure remains the same at a constant altitude, so let's look at the surface, which has a total pressure of 600 pascals (Pascal = N/m^2). Now let's say that we need a drone that has a mass of 250 kg because it will be carrying scientific instruments (the Curiosity rover's dry mass was 899 kg). And, due to volume constraints set by the launch provider, our maximum wingspan is 10 m and the chord length is 1 m, giving us a wing area of 10 m^2.\n\n\nNow, we need to generate enough lift to keep this 250 kg drone in the air, so let's look at the numbers. The weight of the drone will be 250 kg * g, where g = 9.81 m/s^2 * 0.38 (Mars gravity is about 38% of Earth's). Thus, the weight on Mars is 932 N. We need 932 N of lift to stay flying.\n\n\nLift = (Coefficient of Lift)*(Dynamic Pressure)*(Wing Area)\n\nSo if we divide the required lift by the total wing area and set Cl = 1, which is typical for many airfoils, we get a required dynamic pressure of 93.2 N/m^2. Now let's break the formula down further.\n\n93.2 N/m^2 = (0.5)*(density)*(velocity)^2\n\n9320 = (velocity)^2\n\nVelocity = 96.5 m/s = 216 mph\n\n\nNow, we have done some back-of-the-envelope calculations and found that our 250 kg drone needs to get up to 216 mph just to stay flying, but here comes the real kicker... we need to make enough thrust. So what? Well, we decide to make the propellers bigger, but now the mass has increased, and once again we need to increase the velocity, and we are stuck in a cycle where we keep throwing more mass at the problem and it still doesn't go away. (Also, fundamentally you may have a misunderstanding of how a propeller works: the act of generating thrust induces drag on the propeller, so when you say \"Spinning huge propellers shouldn't be a problem because they won't face much air resistance,\" that is actually a misconception. However much thrust you generate is directly related to the drag you generate; it does not depend on air resistance.)\n\n\nBottom line: by trying to build a drone you end up with a much heavier design (and with space travel, more mass = more money). So now the question becomes: why do we want a drone? Instead of putting lots of mass and money into making large wings and big propellers, why can't we put all of that mass into building a nice satellite that can orbit Mars and take pictures? Why can't we build another rover to do ground work? And the real question is: what does a drone add in terms of science that we cannot get any other way? The answer is that it doesn't. By building rovers and orbiters we can get much more science, because we can spend our money on instruments instead of the material and mass needed to make a cow fly on Mars.\n\n\nOverall, engineering is all about trying to come up with the best solutions, and sadly self-powered drones will never belong on Mars. That is not to say, though, that we won't send drones to other areas in our solar system! As an undergrad I worked with a professor who was testing inflatable wings for a UAV design for Titan; the atmosphere there is extremely dense, and in that case a drone can get more science than an orbiter because it could get below the cloud cover. Plus, planes always love denser air!\n\n\nedit: fixed equation format",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "266344",
"title": "Space debris",
"section": "Section::::Threats.:To spacecraft.:Unmanned spacecraft.\n",
"start_paragraph_id": 44,
"start_character": 0,
"end_paragraph_id": 44,
"end_character": 201,
"text": "Although spacecraft are protected by Whipple shields, solar panels, which are exposed to the Sun, wear from low-mass impacts. These produce a cloud of plasma which is an electrical risk to the panels.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "39076510",
"title": "Asteroid Redirect Mission",
"section": "Section::::Spacecraft overview.:Propulsion.\n",
"start_paragraph_id": 21,
"start_character": 0,
"end_paragraph_id": 21,
"end_character": 315,
"text": "Even at a destination, the SEP system can be configured to provide power to maintain the systems or prevent propellant boil-off before the crew arrives. However, existing flight-qualified solar-electric propulsion is at levels of 1–5 kW. A Mars cargo mission would require ~100 kW, and a crewed flight ~150–300 kW.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "179100",
"title": "Rosetta (spacecraft)",
"section": "Section::::History.:Deep space manoeuvres.\n",
"start_paragraph_id": 30,
"start_character": 0,
"end_paragraph_id": 30,
"end_character": 763,
"text": "On 25 February 2007, the craft was scheduled for a low-altitude flyby of Mars, to correct the trajectory. This was not without risk, as the estimated altitude of the flyby was a mere . During that encounter, the solar panels could not be used since the craft was in the planet's shadow, where it would not receive any solar light for 15 minutes, causing a dangerous shortage of power. The craft was therefore put into standby mode, with no possibility to communicate, flying on batteries that were originally not designed for this task. This Mars manoeuvre was therefore nicknamed \"The Billion Euro Gamble\". The flyby was successful, with \"Rosetta\" even returning detailed images of the surface and atmosphere of the planet, and the mission continued as planned.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "42302371",
"title": "Mars habitat",
"section": "Section::::Overview.:Power.\n",
"start_paragraph_id": 86,
"start_character": 0,
"end_paragraph_id": 86,
"end_character": 976,
"text": "For a 500-day crewed Mars mission NASA has studied using solar power and nuclear power for its base, as well as power storage systems (e.g. batteries). Some of the challenges for solar power include a reduction in solar intensity because Mars is farther from the sun, dust accumulation, and periodic dust storms, in addition to the usual challenges of solar power such as storing power for the night-time. One of the difficulties is enduring the global Mars dust storms, which cause lower temperatures and reduce sunlight reaching the surface. Two ideas for overcoming this are to use an additional array deployed during a dust storm and to use some nuclear power to provide base-line power that is not affected by the storms. NASA has studied nuclear-power fission systems in the 2010s for Mars surface missions. One design was planned for an output of 40 kilowatts, and its more independent of the sunlight reaching the surface of Mars which can be affected by dust storms.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "77178",
"title": "Spaceflight",
"section": "Section::::Phases.:Leaving orbit.\n",
"start_paragraph_id": 27,
"start_character": 0,
"end_paragraph_id": 27,
"end_character": 436,
"text": "Robotic missions do not require an abort capability or radiation minimization, and because modern launchers routinely meet \"instantaneous\" launch windows, space probes to the Moon and other planets generally use direct injection to maximize performance. Although some might coast briefly during the launch sequence, they do not complete one or more full parking orbits before the burn that injects them onto an Earth escape trajectory.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "34481270",
"title": "Materials Adherence Experiment",
"section": "Section::::Purpose.\n",
"start_paragraph_id": 3,
"start_character": 0,
"end_paragraph_id": 3,
"end_character": 938,
"text": "Using solar power on the Martian surface is challenging because the Martian atmosphere has a significant amount of dust suspended in it. In addition to blocking sunlight from reaching Mars's surface, dust particles gradually settle out of the air and onto objects. As \"Pathfinder\" was NASA's first Mars surface mission to be solar-powered, the effect of Martian dust settling on solar cells was not well understood before the mission. It was predicted at the time that dust particles in the Martian atmosphere would settle on the solar cells powering \"Pathfinder\", blocking sunlight from striking them and slowly causing \"Pathfinder\" to lose power. Since knowing how the settling of dust out of Mars's atmosphere would affect solar cell performance would be critical to subsequent solar-powered missions on Mars, the MAE was included aboard the \"Sojourner\" rover to measure the degradation in performance of a solar cell as dust settled.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "38082",
"title": "Viking program",
"section": "Section::::Viking orbiters.:Power.\n",
"start_paragraph_id": 16,
"start_character": 0,
"end_paragraph_id": 16,
"end_character": 259,
"text": "The power to the two orbiter craft was provided by eight 1.57 × 1.23 m solar panels, two on each wing. The solar panels comprised a total of 34,800 solar cells and produced 620 W of power at Mars. Power was also stored in two nickel-cadmium 30-A·h batteries.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
5p2gx1 | Conspiracy people claim the Apollo Astronauts would have been killed by radiation outside of the protection of the Van Allen Belt. How much of this is pseudo science? | [
{
"answer": "Well all of it. Apollo 11 that carried Neil Armstrong and Buzz Aldrin was deliberately launched from the descending node of the geomagnetic plane specifically so that it would be almost completely out of reach of the Van Allen Belts by the time it was far enough away from the Earth to no longer be protected. The trajectory was carefully planned, is very well known, and you can verify this fact for yourself.\n\nThe Van Allen Belts aren't like a sphere of death that surrounds the planet.\n\nedit: Also, the title of the post seems to insinuate that space everywhere is deadly radiant and the \"Van Allen Belt\" is something that protects from it. This is almost the opposite of the truth, the background radiation in space is not terribly harmful, it's the Van Allen Belts themselves that are a deadly concentration of radiation.",
"provenance": null
},
{
"answer": "Well, they're not entirely wrong... but it takes a while for radiation to kill you. A recent study found that the Apollo astronauts died from cardiovascular disease at a rate 4-5 times higher than that of astronauts who only traveled to low earth orbit (where the ISS is) or had never flown. \n\nThe entire paper can be found here for free: [Apollo Lunar Astronauts Show Higher Cardiovascular Disease Mortality: Possible Deep Space Radiation Effects on the Vascular Endothelium](_URL_0_)",
"provenance": null
},
{
"answer": "They did **risk** severe exposure once they left the Earth's magnetosphere, but fortunately no ill-timed solar eruptions took place.\n\nThere *was* a large solar storm in August 1972, 4 months after Apollo 16, 4 months before Apollo 17. Had astronauts been on the lunar surface when that event occurred, the radiation dose may have been deadly/life-threatening.",
"provenance": null
},
{
"answer": "The pseudo science is in saying that this means the Apollo missions couldn't have happened. The sun does, now and then, kick out bursts of radiation that would have killed the Apollo crew. NASA was well aware of that, and also knew that there is a degree of predictability to these events, so that the mission could take place when the probability of such an event was low.\n\nThe Apollo crew got the full measure of the normal radiation that the sun kicks out and that the Van Allen belts protect us from, but it wasn't lethal, though it probably took some time off their clocks.",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "39761773",
"title": "Central nervous system effects from radiation exposure during spaceflight",
"section": "Section::::Conclusion.\n",
"start_paragraph_id": 64,
"start_character": 0,
"end_paragraph_id": 64,
"end_character": 1546,
"text": "Reliable projections for CNS risks from space radiation exposure cannot be made at this time due to a paucity of data on the subject. Existing animal and cellular data do suggest that space radiation can produce neurological and behavioral effects; therefore, it is possible that mission operations will be impacted. The significance of these results on the morbidity to astronauts has not been elucidated, however. It is to be noted that studies, to date, have been carried out with relatively small numbers of animals (10 per dose group); this means that testing of dose threshold effects at lower doses (0.5 Gy) has not yet been carried out to a sufficient extent. As the problem of extrapolating space radiation effects in animals to humans will be a challenge for space radiation research, such research could become limited by the population size that is typically used in animal studies. Furthermore, the role of dose protraction has not been studied to date. An approach has not been discovered to extrapolate existing observations to possible cognitive changes, performance degradation, or late CNS effects in astronauts. Research on new approaches to risk assessment may be needed to provide the data and knowledge that will be necessary to develop risk projection models of the CNS from space radiation. A vigorous research program, which will be required to solve these problems, must rely on new approaches to risk assessment and countermeasure validation because of the absence of useful human radio-epidemiology data in this area.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "80740",
"title": "Moon landing conspiracy theories",
"section": "Section::::Hoax claims and rebuttals.:Environment.\n",
"start_paragraph_id": 63,
"start_character": 0,
"end_paragraph_id": 63,
"end_character": 357,
"text": "1. The astronauts could not have survived the trip because of exposure to radiation from the Van Allen radiation belt and galactic ambient radiation (see radiation poisoning and health threat from cosmic rays). Some conspiracists have suggested that Starfish Prime (a high-altitude nuclear test in 1962) was a failed attempt to disrupt the Van Allen belts.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "18896",
"title": "Human spaceflight",
"section": "Section::::Safety concerns.:Environmental hazards.:Medical issues.:Radiation.\n",
"start_paragraph_id": 85,
"start_character": 0,
"end_paragraph_id": 85,
"end_character": 839,
"text": "Without proper shielding, the crews of missions beyond low Earth orbit (LEO) might be at risk from high-energy protons emitted by solar flares and associated solar particle events (SPEs). Lawrence Townsend of the University of Tennessee and others have studied the overall most powerful solar storm ever recorded. The flare was seen by the British astronomer Richard Carrington in September 1859. Radiation doses astronauts would receive from a Carrington-type storm could cause acute radiation sickness and possibly even death. Another storm that could have incurred a lethal radiation dose if astronauts were outside the Earth's protective magnetosphere occurred during the Space Age, in fact, shortly after Apollo 16 landed and before Apollo 17 launched. This solar storm of August 1972 would likely at least have caused acute illness.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "151196",
"title": "Acute radiation syndrome",
"section": "Section::::Cause.:Spaceflight.\n",
"start_paragraph_id": 20,
"start_character": 0,
"end_paragraph_id": 20,
"end_character": 519,
"text": "During spaceflight, particularly flights beyond low Earth orbit (LEO), astronauts are exposed to both galactic cosmic radiation (GCR) and solar particle event (SPE) radiation. Evidence indicates past SPE radiation levels that would have been lethal for unprotected astronauts. One possible such event occurred in 1859, but another occurred during the Space Age, in fact in a few months gap between Apollo missions, in early August 1972. GCR levels that might lead to acute radiation poisoning are less well understood.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "3609096",
"title": "A Funny Thing Happened on the Way to the Moon",
"section": "Section::::Overview.\n",
"start_paragraph_id": 3,
"start_character": 0,
"end_paragraph_id": 3,
"end_character": 610,
"text": "Sibrel's claims that the moon landing was a hoax making claims about supposed photographic anomalies; disasters such as the destruction of Apollo 1; technical difficulties experienced in the 1950s and 1960s; and the problems of traversing the Van Allen radiation belts. Sibrel proposes that the most condemning evidence is a piece of footage that he claims was secret, and inadvertently sent to him by NASA; he alleges that the footage shows Apollo 11 astronauts attempting to create the illusion that they were from Earth (or roughly halfway to the Moon) when, he claims, they were only in a low Earth orbit.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "59071829",
"title": "Solar storm of August 1972",
"section": "Section::::Impacts.:Human spaceflight.\n",
"start_paragraph_id": 30,
"start_character": 0,
"end_paragraph_id": 30,
"end_character": 1040,
"text": "Occurring between Apollo missions, the storm has long been chronicled within NASA. Apollo 16 had returned home in April, and the final Apollo mission was a Moon landing planned for the following December. Those inside an Apollo command module would be shielded from 90% of incoming radiation, which could still have exposed astronauts to radiation sickness if they were located outside the protective magnetic field of Earth, which was the case for much of a lunar mission. A moonwalker or one on EVA in orbit could have faced severe acute illness and potentially a nearly universally fatal dose. An enhanced risk of contracting cancer would have been unavoidable regardless of the location of astronauts or spacecraft. This is one of only a handful of solar storms occurring in the Space Age that could cause severe illness, and was the most hazardous thus far. Had the most intense solar activity of early August occurred during a mission it would have forced contingency measures up to an emergency return landing for medical treatment.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "80740",
"title": "Moon landing conspiracy theories",
"section": "Section::::Claimed motives of the United States and NASA.:NASA funding and prestige.\n",
"start_paragraph_id": 39,
"start_character": 0,
"end_paragraph_id": 39,
"end_character": 680,
"text": "Mary Bennett and David Percy have claimed in \"Dark Moon: Apollo and the Whistle-Blowers\", that, with all the known and unknown hazards, NASA would not risk broadcasting an astronaut getting sick or dying on live television. The counter-argument generally given is that NASA in fact \"did\" incur a great deal of public humiliation and potential political opposition to the program by losing an entire crew in the Apollo 1 fire during a ground test, leading to its upper management team being questioned by Senate and House of Representatives space oversight committees. There was in fact no video broadcast during either the landing or takeoff because of technological limitations.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
80i8b7 | how does the body store water? | [
{
"answer": "It gets absorbed and distributed throughout all the cells in your body. It's not really stored anywhere, but when your cells are in a nicely hydrated state, any excess water that enters your body will go to your bladder, which is why the more water you drink, the clearer, or more like water, your urine gets. ",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "22331743",
"title": "Water retention (medicine)",
"section": "",
"start_paragraph_id": 2,
"start_character": 0,
"end_paragraph_id": 2,
"end_character": 308,
"text": "Water is found both inside and outside the body’s cells. It forms part of the blood, helping to carry the blood cells around the body and keeping oxygen and important nutrients in solution so that they can be taken up by tissues such as glands, bone and muscle. Even the organs and muscles are mostly water.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "305679",
"title": "Body water",
"section": "",
"start_paragraph_id": 1,
"start_character": 0,
"end_paragraph_id": 1,
"end_character": 423,
"text": "In physiology, body water is the water content of an animal body that is contained in the tissues, the blood, the bones and elsewhere. The percentages of body water contained in various fluid compartments add up to total body water (TBW). This water makes up a significant fraction of the human body, both by weight and by volume. Ensuring the right amount of body water is part of fluid balance, an aspect of homeostasis.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "5925110",
"title": "List of macronutrients",
"section": "Section::::Macronutrients that do not provide energy.:Water.\n",
"start_paragraph_id": 76,
"start_character": 0,
"end_paragraph_id": 76,
"end_character": 459,
"text": "Water is the most important substance for life on Earth. It provides the medium in which all metabolic processes proceed. As such it is necessary for the absorption of macronutrients, but it provides no nutritional value in and of itself. Water often contains naturally occurring micronutrients such as calcium and salts, and others can be introduced to the water supply such as chlorine and fluoride for various purposes such as sanitation or dental health.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "305679",
"title": "Body water",
"section": "Section::::Functions.\n",
"start_paragraph_id": 31,
"start_character": 0,
"end_paragraph_id": 31,
"end_character": 209,
"text": "Water in the animal body performs a number of functions: as a solvent for transportation of nutrients; as a medium for excretion; a means for heat control; as a lubricant for joints; and for shock absorption.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "14434367",
"title": "Fluid compartments",
"section": "",
"start_paragraph_id": 2,
"start_character": 0,
"end_paragraph_id": 2,
"end_character": 815,
"text": "About two thirds of the total body water of humans is held in the cells, mostly in the cytosol, and the remainder is found in the extracellular compartment. The extracellular fluids may be divided into three types: interstitial fluid in the \"interstitial compartment\" (surrounding tissue cells and bathing them in a solution of nutrients and other chemicals), blood plasma and lymph in the \"intravascular compartment\" (inside the blood vessels and lymphatic vessels), and small amounts of transcellular fluid such as ocular and cerebrospinal fluids in the \"transcellular compartment\". The interstitial and intravascular compartments readily exchange water and solutes but the third extracellular compartment, the transcellular, is thought of as separate from the other two and not in dynamic equilibrium with them.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "158539",
"title": "Fremen",
"section": "Section::::Customs.:Water conservation.:Collection.\n",
"start_paragraph_id": 20,
"start_character": 0,
"end_paragraph_id": 20,
"end_character": 815,
"text": "Water is collected from the atmosphere in windtraps that condense the humidity and add it to the underground water store (caches). Water can also be collected from dead animals and people, using a deathstill to remove the water from a corpse for addition to the sietch water store. The Fremen who obtains the body — through discovery or honorable killing — is then given a set of water rings whose markings denote the volume collected. These rings are used as a form of currency, and are backed by fixed volumes of water (analogous to the historical gold standard). For example, the victor of a sanctioned duel would claim his dead opponent's water, and a dead Fremen's water can be inherited by his/her spouse or children. Water rings have a profound significance in matters of birth, death, and courtship ritual.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "33306",
"title": "Water",
"section": "Section::::Effects on human civilization.:Human uses.:For drinking.\n",
"start_paragraph_id": 81,
"start_character": 0,
"end_paragraph_id": 81,
"end_character": 783,
"text": "The human body contains from 55% to 78% water, depending on body size. To function properly, the body requires between of water per day to avoid dehydration; the precise amount depends on the level of activity, temperature, humidity, and other factors. Most of this is ingested through foods or beverages other than drinking straight water. It is not clear how much water intake is needed by healthy people, though the British Dietetic Association advises that 2.5 liters of total water daily is the minimum to maintain proper hydration, including 1.8 liters (6 to 7 glasses) obtained directly from beverages. Medical literature favors a lower consumption, typically 1 liter of water for an average male, excluding extra requirements due to fluid loss from exercise or warm weather.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
eh7fbl | Why did Japan and South Korea turn into a democratic state with little corruption but other East Asian countries did not? | [
{
"answer": "I don't think that the premise of this question (that Japan and South Korea have avoided corruption, unlike the rest of East Asia) really holds up. If you look at the Economist's [Democracy Index](_URL_0_), and Transparency International's [Corruption Perceptions Index](_URL_1_), you'll find that things are a little more complicated in that region.\n\nYou're right that South Korea and Japan are at the top of the DI in Asia. But Taiwan is up there with them, and other East and Southeast Asian countries, such as Malaysia, the Philippines, Singapore, Indonesia, and Hong Kong are in the \"flawed democracy\" category as well (albeit at the lower end).\n\nOn the CPI, Singapore leads in East and Southeast Asia by a large margin, and is tied for the 3rd least-corrupt country in the world. Japan sits at number three in East/Southeast Asia, just behind Hong Kong. Taiwan is next, then South Korea, whose score of 57 puts it in the \"middling\" range, close to countries such as Rwanda and Costa Rica.\n\nTo address the question itself: When talking about the economies of these countries in the 20th century, economists like to refer to the \"Japanese Economic Miracle\" and the \"Miracle of the Four Asian Tigers\" (South Korea, Taiwan, Singapore, and Hong Kong), essentially asking \"How did these five impoverished countries, ravaged by World War II, create their highly-developed economies within a generation?\" \\*\n\nLikely factors include: a high degree of state intervention in the economy, an emphasis on export-oriented policies, low taxes for foreign corporations, and early investment in infrastructure, technological innovation, and universal primary education. At the time, there were few limits to state power in any of these countries, so economic directives from the top could be implemented quickly and effectively.\n\nSome writers and politicians include a \"cultural\" factor into the mix, claiming that there is a certain set of \"Asian values\" such as hard work, stability, collective success, and respect toward authority. This same set of values was used to explain why these countries retained their authoritarian governments—until, of course, they didn't.\n\nSouth Korea and Taiwan both experienced peaceful democratic revolutions in the 1980s, a development that many political scientists attribute to their newly-educated, aspirational, and globally-aware young populations who wanted a greater say in their futures (both revolutions were initiated by student groups). And in 1993, Japan's Liberal Democratic Party, which had ruled the country since the end of the American occupation, lost their parliamentary majority for the first time and peacefully ceded power to the opposition, a huge milestone in solidifying democratic institutions in any country. Hong Kong and Singapore made democratic and transparency reforms during this period but never transitioned into fully democratic states.\n\nAnswering why a country did become a democratic state is a lot easier than speculating on why one didn't, so I'm not going to touch on the remainder of East and Southeast Asia. Suffice to say there's a whole lot of variation across the continent, and each country has its unique set of advantages and challenges.\n\n\\* Scholars are finally beginning to reconsider using the term \"miracle\" when discussing these developments. Using the term \"miracle\" implies that the success of these countries could have only come from sheer luck or an act of God, which feels pretty condescending.",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "905755",
"title": "General Sherman incident",
"section": "Section::::Background.\n",
"start_paragraph_id": 6,
"start_character": 0,
"end_paragraph_id": 6,
"end_character": 321,
"text": "However, the Joseon dynasty court which ruled Korea was well aware of the displacement of the traditional ruling classes of China as a result of the First and the Second Opium Wars, and maintained a strict policy of isolationism forbidding any of those they ruled to trade with the outside world to avoid a similar fate.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "12838533",
"title": "Korean mixed script",
"section": "Section::::History.:Mixed script.\n",
"start_paragraph_id": 15,
"start_character": 0,
"end_paragraph_id": 15,
"end_character": 1186,
"text": "The Korean kingdoms had traditionally become client states of China under nominal tributary status. As western colonial and trade expansion into Asia occurred, it exposed the weakness of China due to centuries of isolation, and led Japan to modernize and nurture its own colonial designs, but many of the skirmishes occurred in Korea, and ultimately, was a battle over political and cultural control of Korea. The clear waning of Chinese protection and the looming threat of Japanese occupation led to numerous uprisings such as the Donghak Peasant Rebellion (東學農民革命, 동학 농민 혁명). In response, King Gojong instituted a series of proclamations, the Gap-o Reforms (甲午, 갑오) of 1894-1896. This led to the termination of the client relationship with China when King Gojong proclaimed himself Emperor Gwangmu (高宗光武帝, 고종 광무제). Emperor Gwangmu also ended the \"gwageo\" examination system and the use of literary Chinese as the language of the royal court, courts, government records and sanctioned literature. Korean written in the 'national letters' (國文, 국문)—now understood as an alternate name for \"hangul\"—but actually referred to the mixed script was made the official language of governance.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "48605",
"title": "History of Korea",
"section": "Section::::Joseon Dynasty of Korea.:Political history.\n",
"start_paragraph_id": 94,
"start_character": 0,
"end_paragraph_id": 94,
"end_character": 471,
"text": "However, corruption in government and social unrest prevailed in the years thereafter, causing numerous civil uprisings and revolts. The government made sweeping reforms in the late 19th century, but adhered to a strict isolationist policy, earning Korea the nickname \"Hermit Kingdom\". The policy had been established primarily for protection against Western imperialism, but soon the Joseon dynasty was forced to open trade, beginning an era leading into Japanese rule.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "2311556",
"title": "United States Army Military Government in Korea",
"section": "",
"start_paragraph_id": 2,
"start_character": 0,
"end_paragraph_id": 2,
"end_character": 617,
"text": "The country during this period was plagued with political and economic chaos, which arose from a variety of causes. The after-effects of the Japanese occupation were still being felt in the occupation zone, as well as in the Soviet zone in the North. Popular discontent stemmed from the U.S. Military Government's support of the Japanese colonial government; then once removed, keeping the former Japanese governors on as advisors; by ignoring, censoring and forcibly disbanding the functional and popular People's Republic of Korea (PRK); and finally by supporting United Nations elections that divided the country.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "13139823",
"title": "Post-classical history",
"section": "Section::::History by region in the Old World.:East Asia.\n",
"start_paragraph_id": 66,
"start_character": 0,
"end_paragraph_id": 66,
"end_character": 639,
"text": "Korea and Japan sinicized because their ruling class were largely impressed by China's bureaucracy. The major influences China had on these countries were the spread of Confucianism, the spread of Buddhism, and the establishment of centralized governance. In the times of the Sui, Tang and Song dynasties (581–1279), China remained the world's largest economy and most technologically advanced society. Inventions such as gunpowder, woodblock printing and the magnetic compass were improved upon. China stood in contrast to other areas at the time as the imperial governments exhibited concentrated central authority instead of feudalism.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "1878871",
"title": "Korean independence movement",
"section": "Section::::History.:Japanese rule.\n",
"start_paragraph_id": 12,
"start_character": 0,
"end_paragraph_id": 12,
"end_character": 724,
"text": "Japanese rule was oppressive but changed over time. Initially, there was very harsh repression in the decade following annexation. Japan's rule was markedly different than in its other colony, Formosa. This period is called \"amhukki\", the dark period by Koreans. Tens of thousands of Koreans were arrested for political reasons. The harshness of Japanese rule increased support for the Korean independence movement. Many Koreans left the Korean Peninsula, some of whom formed resistance groups and societies in Manchuria to agitate for Korean independence. Some went to Japan, where groups agitated clandestinely. There was a prominent group of Korean Communists in Japan, who were in danger for their political activities.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "7714083",
"title": "China–South Korea relations",
"section": "Section::::Joint stance on Japan.:Japanese war crimes.\n",
"start_paragraph_id": 46,
"start_character": 0,
"end_paragraph_id": 46,
"end_character": 274,
"text": "Both the governments of China and South Korea take a firm stand on issues in relation to Japanese war crimes. Korea had been under Japanese rule after the collapse of the Joseon Dynasty in 1910. During the Second Sino-Japanese War, Japan invaded and occupied eastern China.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
1p03gl | dual citizenship | [
{
"answer": "Some countries allow dual citizenship, some do not, and some are very vague about whether they allow it or not. \n\n > Are you responsible for both nations' laws?\n\nAs a citizen or non-citizen, you are always responsible for any country's laws, and ignorance is never accepted as an excuse. \n\n > Do you pay two taxes?\n\nFor the most part, no. However, the one big exception to this rule is the USA. The USA demands that all USA citizens earning income, whether residing in the USA or abroad, must file their taxes. \n\nThis can get very complicated and expensive, and for this reason many USA citizens living abroad have renounced their USA citizenship. \n\n > Can you come and go as you please?\n\nIf you have the money, yes. \n\n > Do you have to have two houses?\n\nThere is no requirement to own property.",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "993845",
"title": "United States nationality law",
"section": "Section::::Dual citizenship.\n",
"start_paragraph_id": 87,
"start_character": 0,
"end_paragraph_id": 87,
"end_character": 749,
"text": "Based on the U.S. Department of State regulation on dual citizenship (7 FAM 082), the Supreme Court of the United States has stated that dual citizenship is a \"status long recognized in the law\" and that \"a person may have and exercise rights of nationality in two countries and be subject to the responsibilities of both. The mere fact he asserts the rights of one citizenship does not, without more, mean that he renounces the other\", \"Kawakita v. U.S.\", 343 U.S. 717 (1952). In \"Schneider v. Rusk\", 377 U.S. 163 (1964), the U.S. Supreme Court ruled that a naturalized U.S. citizen has the right to return to his native country and to resume his former citizenship, and also to remain a U.S. citizen even if he never returns to the United States.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "2265454",
"title": "South African nationality law",
"section": "Section::::Dual nationality.\n",
"start_paragraph_id": 24,
"start_character": 0,
"end_paragraph_id": 24,
"end_character": 411,
"text": "BULLET::::- Dual Nationality does not necessarily refer to two citizenships; it can mean more than that. The definition of dual citizenship in international law is the basis for obtaining MANY citizenships. If, for instance, a person is granted Irish-South African Citizenship AT birth and later on moves to New Zealand, they can obtain New Zealand Citizenship should they wish. This is a common misinterpretation.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "30876330",
"title": "Multiple citizenship",
"section": "Section::::Effects and potential issues.:Appearance of foreign allegiance.\n",
"start_paragraph_id": 112,
"start_character": 0,
"end_paragraph_id": 112,
"end_character": 1116,
"text": "In the United States, dual citizenship is associated with two categories of security concerns: foreign influence and foreign preference. Contrary to common misconceptions, dual citizenship in itself is not the major problem in obtaining or retaining security clearance in the United States. As a matter of fact, if a security clearance applicant's dual citizenship is \"based solely on parents' citizenship or birth in a foreign country\", that can be a mitigating condition. However, taking advantage of the entitlements of a non-US citizenship can cause problems. For example, possession or use of a foreign passport is a condition disqualifying one from security clearance and \"is not mitigated by reasons of personal convenience, safety, requirements of foreign law, or the identity of the foreign country\" as is explicitly clarified in a Department of Defense policy memorandum which defines a guideline requiring that \"any clearance be denied or revoked unless the applicant surrenders the foreign passport or obtains official permission for its use from the appropriate agency of the United States Government\".\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "1475751",
"title": "Persons of Indian Origin Card",
"section": "",
"start_paragraph_id": 4,
"start_character": 0,
"end_paragraph_id": 4,
"end_character": 350,
"text": "Later, the Citizenship (Amendment) Act, 2005, expanded the scope of grant of OCI for PIOs of all countries except Pakistan and Bangladesh as long as their home country allows dual citizenship under their local law. It must be noted here that the OCI is not actually a dual citizenship as the Indian constitution forbids dual nationality (Article 9).\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "80366",
"title": "German diaspora",
"section": "Section::::Germany's policy on dual citizenship.\n",
"start_paragraph_id": 183,
"start_character": 0,
"end_paragraph_id": 183,
"end_character": 210,
"text": "BULLET::::2. If dual citizenship was obtained at birth. Some countries do not accept the \"dual-citizenship-by-birth principle,\" so the concerned person must later choose one citizenship and renounce the other.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "252858",
"title": "Security clearance",
"section": "Section::::United States.:Dual citizenship.\n",
"start_paragraph_id": 92,
"start_character": 0,
"end_paragraph_id": 92,
"end_character": 1660,
"text": "Dual citizenship is associated with two categories of security concerns: foreign influence and foreign preference. Dual citizenship in itself is not the major problem in obtaining or retaining security clearance in the USA. If a security clearance applicant's dual citizenship is \"based solely on parents' citizenship or birth in a foreign country\", that can be a mitigating condition. However, \"exercising\" (taking advantage of the entitlements of) a non-U.S. citizenship can cause problems. For example, possession and/or use of a foreign passport is a condition disqualifying from security clearance and \"is not mitigated by reasons of personal convenience, safety, requirements of foreign law, or the identity of the foreign country\" as is explicitly clarified in a Department of Defense policy memorandum which defines a guideline requiring that \"any clearance be denied or revoked unless the applicant surrenders the foreign passport or obtains official permission for its use from the appropriate agency of the United States Government\". This guideline has been followed in administrative rulings by the Department of Defense (DoD) Defense Office of Hearings and Appeals (DOHA) office of Industrial Security Clearance Review (ISCR), which decides cases involving security clearances for Contractor personnel doing classified work for all DoD components. In one such case, an administrative judge ruled that it is not clearly consistent with U.S. national interest to grant a request for a security clearance to an applicant who was a dual national of the United States and Ireland. DOHA can rule on issues involving the granting of security clearances.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "8455160",
"title": "Joseph Carens",
"section": "Section::::Citizenship.:Legal dimension.\n",
"start_paragraph_id": 12,
"start_character": 0,
"end_paragraph_id": 12,
"end_character": 469,
"text": "There are a lot of objections against dual citizenship; some are based on the following questions that come to mind. If people hold more than one citizenship, which state is responsible? Are dual citizens subject to two conflicting sets of laws (i.e., marriage and divorce laws) and obligations (i.e., taxes and military duty)? Usually these problems are easily solved. In most cases the laws and obligations of the state in which the person lives need to be followed.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
aikt5j | what is standard error and confidence interval? | [
{
"answer": "Both answers so far have been laymen's guesses at what are specific terms.\n\nStandard Error is an estimate, normally of the standard deviation from the mean of a population based on a sample. It's estimated by dividing the standard deviation of the sample by the square root of the sample size.\n\nIf you have a population of 100, and you sample 5 from the population, and you find the sample mean to be 3, you can simplisticly estimate that the mean of the population is 3. However, without considering the deviation from the sample mean, you can't begin to guess at how accurate your estimate is.\n\nIt is logical that if the five numbers in your sample are -102,100,3,-950,964, and the mean happens to be 3, then there's a good chance that the whole population also exhibits significant variation, and as a result, it is relatively unlikely that your mean is accurate.\n\nHowever, if the five numbers are, 2, 4, 3, 3, 3 then it suggests that your estimate of 3 is probably fairly accurate.\n\nIn the first case, the Standard Deviation of our sample is very high (as an example 676.24). We can then calculate our Standard Error as 676.24/sqrt(5), which is about 302.42.\n\nIn the second case, our standard deviation is just 0.71, so we can calculate our standard error as 0.71/sqrt(5), which is about 0.32.\n\nNote that this **isn't** an estimate of the population mean, it's an estimate of the accuracy of our sampling, and in particular, the standard deviation of the numbers produced by repeated sampling (ie: if we continued to sample repeatedly, then 302.42 is a good estimate of the standard deviation of the sample means).\n\nIn order to estimate the population mean, this is where we turn to confidence intervals. If we take our sample with the highest variation, then our estimates for the mean should range wildly, because the sample is not highly consistent. So let's say, we need to know estimate with 95% confidence what the mean will be. 
In this case, we'll need to provide a range.\n\nNow, we know what the Standard Error of our sample is (676.24), and we know that our sample mean was 3. Now what we need to do is refer to a magic table of numbers to find a multiplier that gives 95% confidence. These numbers will vary based on the type of distribution, etc., but for a normal distribution, we can use a multiplier of 1.96. So now, we can work out the upper bound of our confidence, which will be our mean plus the standard error multiplied by our magic number 3 + 676.24 \\* 1.96 = 1 328.43. We can use the same idea except with subtractions to estimate a lower bound: 3 - 676.24 \\* 1.96 = -1 322.43. So we can say that we are 95% confident that the population mean is between -1 322.43 and 1 328.43. This is called a **confidence interval**.\n\nWith our less varied sample, we can be 95% confident that the mean falls between 2.46 and 3.54 (see below maths).\n\n3 - 0.19 \\* 1.96 = 2.46\n\n3 + 0.19 \\* 1.96 = 3.54\n\nIn short:\n\n* **Standard Error** is an estimate of the accuracy of a specific statistical measure (most commonly the mean) based on the variation within a sample compared to the overall size of the population.\n* A **Confidence Interval** is a range within which a mean can be expected to fall with a specific level of confidence, based on the estimated accuracy of a sample\n\nThe two things work very closely together to help statisticians estimate the mean of a population based on the mean and the variation within a sample.",
"provenance": null
},
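The calculation walked through in the answer above can be sketched in Python. This is an illustrative helper (the function name is ours, not from the answer), using the sample standard deviation and the normal-approximation interval mean ± z × SE:

```python
import math

def mean_se_ci(sample, z=1.96):
    """Return (mean, standard error, confidence interval) for a sample,
    using the normal-approximation interval mean ± z * SE."""
    n = len(sample)
    m = sum(sample) / n
    # sample standard deviation (n - 1 in the denominator)
    sd = math.sqrt(sum((x - m) ** 2 for x in sample) / (n - 1))
    se = sd / math.sqrt(n)  # standard error, not the raw SD
    return m, se, (m - z * se, m + z * se)

m, se, (lo, hi) = mean_se_ci([2, 4, 3, 3, 3])
print(round(m, 2), round(se, 2))   # → 3.0 0.32
print(round(lo, 2), round(hi, 2))  # → 2.38 3.62
```

Note that the interval is built from the standard error (SD divided by sqrt(n)), which is what makes it narrow for consistent samples and wide for highly varied ones.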
{
"answer": null,
"provenance": [
{
"wikipedia_id": "49489200",
"title": "Population proportion",
"section": "Section::::Estimation.:Common errors and misinterpretations from estimation.\n",
"start_paragraph_id": 48,
"start_character": 0,
"end_paragraph_id": 48,
"end_character": 322,
"text": "A very common error that arises from the construction of a confidence interval is the belief that the level of confidence such as formula_67 means 95% chance. This is incorrect. The level of confidence is based on a measure of certainty, not probability. Hence, the values of formula_68 fall between 0 and 1, exclusively.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "16941667",
"title": "Rule of three (statistics)",
"section": "Section::::Derivation.\n",
"start_paragraph_id": 4,
"start_character": 0,
"end_paragraph_id": 4,
"end_character": 842,
"text": "A 95% confidence interval is sought for the probability \"p\" of an event occurring for any randomly selected single individual in a population, given that it has not been observed to occur in \"n\" Bernoulli trials. Denoting the number of events by \"X\", we therefore wish to find the values of the parameter \"p\" of a binomial distribution that give Pr(\"X\" = 0) ≥ 0.05. The rule can then be derived either from the Poisson approximation to the binomial distribution, or from the formula (1−\"p\") for the probability of zero events in the binomial distribution. In the latter case, the edge of the confidence interval is given by Pr(\"X\" = 0) = 0.05 and hence (1−\"p\") = .05 so \"n\" ln(1–\"p\") = ln .05 ≈ −2.996. Rounding the latter to −3 and using the approximation, for \"p\" close to 0, that ln(1−\"p\") ≈ −\"p\", we obtain the interval's boundary 3/\"n\".\n",
"bleu_score": null,
"meta": null
},
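The derivation in the excerpt above can be checked numerically. A quick Python sketch (function names are illustrative) comparing the exact bound obtained by solving (1 − p)^n = 0.05 with the 3/n approximation:

```python
def rule_of_three_upper(n):
    """Approximate 95% upper confidence bound on p after n trials
    with zero observed events (the 'rule of three')."""
    return 3.0 / n

def exact_upper(n):
    """Exact bound from solving (1 - p)**n = 0.05 for p."""
    return 1.0 - 0.05 ** (1.0 / n)

for n in (10, 100, 1000):
    print(n, rule_of_three_upper(n), round(exact_upper(n), 5))
```

The approximation tightens as n grows, since ln(1 − p) ≈ −p only holds for small p.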
{
"wikipedia_id": "280911",
"title": "Confidence interval",
"section": "",
"start_paragraph_id": 1,
"start_character": 0,
"end_paragraph_id": 1,
"end_character": 796,
"text": "In statistics, a confidence interval (CI) is a type of interval estimate, computed from the statistics of the observed data, that might contain the true value of an unknown population parameter. The interval has an associated confidence level that, loosely speaking, quantifies the level of confidence that the parameter lies in the interval. More strictly speaking, the confidence level represents the frequency (i.e. the proportion) of possible confidence intervals that contain the true value of the unknown population parameter. In other words, if confidence intervals are constructed using a given confidence level from an infinite number of independent sample statistics, the proportion of those intervals that contain the true value of the parameter will be equal to the confidence level.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "277379",
"title": "Margin of error",
"section": "Section::::Concept.:Calculations assuming random sampling.\n",
"start_paragraph_id": 19,
"start_character": 0,
"end_paragraph_id": 19,
"end_character": 621,
"text": "Note that there is not necessarily a strict connection between the true confidence interval, and the true standard error. The true \"p\" percent confidence interval is the interval [\"a\", \"b\"] that contains \"p\" percent of the distribution, and where (100 − \"p\")/2 percent of the distribution lies below \"a\", and (100 − \"p\")/2 percent of the distribution lies above \"b\". The true standard error of the statistic is the square root of the true sampling variance of the statistic. These two may not be directly related, although in general, for large distributions that look like normal curves, there is a direct relationship.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "536062",
"title": "Prediction interval",
"section": "",
"start_paragraph_id": 2,
"start_character": 0,
"end_paragraph_id": 2,
"end_character": 538,
"text": "Prediction intervals are used in both frequentist statistics and Bayesian statistics: a prediction interval bears the same relationship to a future observation that a frequentist confidence interval or Bayesian credible interval bears to an unobservable population parameter: prediction intervals predict the distribution of individual future points, whereas confidence intervals and credible intervals of parameters predict the distribution of estimates of the true population mean or other quantity of interest that cannot be observed.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "280911",
"title": "Confidence interval",
"section": "Section::::Conceptual basis.:Philosophical issues.\n",
"start_paragraph_id": 23,
"start_character": 0,
"end_paragraph_id": 23,
"end_character": 1571,
"text": "The principle behind confidence intervals was formulated to provide an answer to the question raised in statistical inference of how to deal with the uncertainty inherent in results derived from data that are themselves only a randomly selected subset of a population. There are other answers, notably that provided by Bayesian inference in the form of credible intervals. Confidence intervals correspond to a chosen rule for determining the confidence bounds, where this rule is essentially determined before any data are obtained, or before an experiment is done. The rule is defined such that over all possible datasets that might be obtained, there is a high probability (\"high\" is specifically quantified) that the interval determined by the rule will include the true value of the quantity under consideration. The Bayesian approach appears to offer intervals that can, subject to acceptance of an interpretation of \"probability\" as Bayesian probability, be interpreted as meaning that the specific interval calculated from a given dataset has a particular probability of including the true value, conditional on the data and other information available. The confidence interval approach does not allow this since in this formulation and at this same stage, both the bounds of the interval and the true values are fixed values, and there is no randomness involved. On the other hand, the Bayesian approach is only as valid as the prior probability used in the computation, whereas the confidence interval does not depend on assumptions about the prior probability.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "231442",
"title": "Reference range",
"section": "Section::::Standard definition.:Establishment methods.:Normal distribution.:Confidence interval of limit.\n",
"start_paragraph_id": 29,
"start_character": 0,
"end_paragraph_id": 29,
"end_character": 214,
"text": "These confidence intervals reflect random error, but do not compensate for systematic error, which in this case can arise from, for example, the reference group not having fasted long enough before blood sampling.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
23uklv | physiologically speaking, how do falls from heights kill people? | [
{
"answer": "They can die from heart attacks, but mostly its the major organ failure caused by the sudden deceleration.",
"provenance": null
},
{
"answer": "Its not the fall that kills you, its the sudden deceleration at the end.\n\nSo your body is tough, but its not unbreakable. Hitting concrete at 120 miles per hour (roughly terminal velocity for a human I believe) is a lot of force. \n\nSo your bones are breaking into pieces which is going to wreck, rip and tear all sorts of things in your body.\n\nMeanwhile your vital organs (which are being shredded by your now fragmented bones) are smashed into the ground or into the bones around them. This is not good for the organs and most of your insides would be pulp. This includes your brain which is not going to be in great shape having just smashed itself into your unyielding skull.\n\nThen the impact (and bones again) will probably break a lot of blood vessels so internal/external bleeding is a nasty option as well.\n\nEnd of the day, a whole lot of different stuff combines to just kill you. People have survived falls like this, but those are rare cases. ",
"provenance": null
},
{
"answer": "Generally falls are mostly blunt force trauma.\n\nExtreme, full body blunt force trauma causes bones to break/shatter, organs to tear/rupture, broken bones can pierce and rip through soft tissue.\n\nDepending on how the person lands will determine what could kill them first. If the heart, brain, or lungs are preserved blood loss could do it (internal and external). \n\nSevere head trauma is a factor in near instant death otherwise the person is waiting to bleed out, have the heart stop from trauma or blood loss, or asphyxiation leading to cardiac arrest; really it's a bunch of stuff that are situation dependent. ",
"provenance": null
},
{
"answer": "I hate to break it to you OP, but there's a good chance you won't be killed instantly. Luckily for you though, there is a good chance you'll be knocked unconscious... and, then die.",
"provenance": null
},
{
"answer": "Hmm so what's the fastest a human can decelerate without dying??",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "58565157",
"title": "Tagging system",
"section": "Section::::Injuries and fatalities caused from improper safety measures.\n",
"start_paragraph_id": 37,
"start_character": 0,
"end_paragraph_id": 37,
"end_character": 255,
"text": "According to the HSE “Falls from height are one of the biggest causes of workplace fatalities and major injuries. Common causes are falls from ladders and through fragile roofs. The purpose of WAHR is to prevent death and injury from a fall from height.\"\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "58565157",
"title": "Tagging system",
"section": "Section::::Injuries and fatalities caused from improper safety measures.\n",
"start_paragraph_id": 36,
"start_character": 0,
"end_paragraph_id": 36,
"end_character": 484,
"text": "Falling from height in the workplace accounts for nearly half of fatal injuries per year (on average), the majority of these fatalities are within the Construction Industry. There are roughly 40 fatalities per year from people falling from a height. “The most common hazards associated with scaffolding are falls, falling objects from a higher level of scaffolding, electric shock from nearby power lines and failures of either the scaffolding itself or the planks used as flooring.”\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "1232575",
"title": "Suicide methods",
"section": "Section::::Jumping from height.\n",
"start_paragraph_id": 50,
"start_character": 0,
"end_paragraph_id": 50,
"end_character": 333,
"text": "Jumping from height is the act of jumping from high altitudes, for example, from a window (self-defenestration or auto-defenestration), balcony or roof of a high rise building, cliff, dam or bridge. This method, in most cases, results in severe consequences if the attempt fails, such as paralysis, organ damage, and bone fractures.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "455905",
"title": "Acrophobia",
"section": "",
"start_paragraph_id": 2,
"start_character": 0,
"end_paragraph_id": 2,
"end_character": 485,
"text": "Most people experience a degree of natural fear when exposed to heights, known as the fear of falling. On the other hand, those who have little fear of such exposure are said to have a head for heights. A head for heights is advantageous for those hiking or climbing in mountainous terrain and also in certain jobs such as steeplejacks or wind turbine mechanics. Some people may also be afraid of the high wind, as an addition of falling. This is actually known as added ancraophobia.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "18498968",
"title": "Falling (accident)",
"section": "Section::::Height and severity.\n",
"start_paragraph_id": 25,
"start_character": 0,
"end_paragraph_id": 25,
"end_character": 436,
"text": "Injuries caused by falls from buildings vary depending on the building's height and the age of the person. Falls from a building's second floor/story (American English) or first floor/storey (British English and equivalent idioms in continental European languages) usually cause injuries but are not fatal. Overall, the height at which 50% of children die from a fall is between four and five storey heights (around ) above the ground.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "34480714",
"title": "Autokabalesis",
"section": "Section::::Prevalence.\n",
"start_paragraph_id": 7,
"start_character": 0,
"end_paragraph_id": 7,
"end_character": 215,
"text": "A study of 1998 (Joyce & Fleminger) reported that according to the Office of Census and Surveys 1990-1994, 4% of all deaths by suicide were accountable by falling from height or jumping in front of a moving object.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "33900050",
"title": "Fear of falling",
"section": "",
"start_paragraph_id": 1,
"start_character": 0,
"end_paragraph_id": 1,
"end_character": 591,
"text": "The fear of falling (FOF), also referred to as basophobia (or basiphobia), is a natural fear and is typical of most humans and mammals, in varying degrees of extremity. It differs from acrophobia (the fear of heights), although the two fears are closely related. The fear of falling encompasses the anxieties accompanying the sensation and the possibly dangerous effects of falling, as opposed to the heights themselves. Those who have little fear of falling may be said to have a head for heights. Basophobia is sometimes associated with astasia-abasia, the fear of walking/standing erect.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
to32i | how do microchips know time? | [
{
"answer": "CPUs use a clock signal as sort of a metronome to control the signal flow. The clock signal is produced using a crystal oscillator circuit.",
"provenance": null
},
{
"answer": "[Real Time Clocks](_URL_0_)",
"provenance": null
},
{
"answer": "Crystal oscillators\n[Wikipedia](_URL_1_)\n > A crystal oscillator is an electronic oscillator circuit that uses the mechanical resonance of a vibrating crystal of piezoelectric material to create an electrical signal with a very precise frequency. This frequency is commonly used to keep track of time (as in quartz wristwatches), to provide a stable clock signal for digital integrated circuits, and to stabilize frequencies for radio transmitters and receivers. The most common type of piezoelectric resonator used is the quartz crystal, so oscillator circuits designed around them became known as \"crystal oscillators.\"\n\nHope that helps.\n\nYou were also asking about the flashing LED\nThe LED is wired up to another little chip, which again gets its clock from some kind of an crystal oscillator. But you dont need a new crystal for every chip. \nIt´s possible to divide the clock rate in half by using [JK latches](_URL_0_). (Linking fixed, thanks to droneprime)",
"provenance": null
},
{
"answer": "It literally uses the exact same thing. The Real Time Clock on your motherboard utilizes a quartz crystal oscillator just like your wrist watch. It uses an oversized watch battery to run it while your PC is powered off.",
"provenance": null
},
{
"answer": "There are a lot of answers here for how the time pulse works, but none on how time is stored. In a digital system, typically time is stored in a 32 bit or 64 bit integer. Depending on the system, this integer is the number of seconds since jan 1 1970 (unix), or the number of 100 nanosecond intervals since jan 1 1601(windows). The software or microchip then has a routine that converts this integer number into a real world representable date/time. I recommend reading the Wikipedia article on \"system time\"",
"provenance": null
},
{
"answer": "I am an electrical engineer, so i'll provide an answer. Your question depends on what sort of time signal you think about. Do you want the time, in hours, minutes and seconds(and date, month, year, etc)? Or just a signal that flash eg. once per second?\n\nIn the first case, you will need a real-time clock(RTC). It is basically an oscillator with some counters and perhaps some memory. The oscillator is usually a crystal, as they provide the most precise signals, but it can be something like an RC circuit, as other people mention. The crystal will actually oscillate and vibrate at a very precise and known frequency. The built-in counter counts the oscillations, and when a certain value is counted, it knows that now one second has passed. A typical RTC oscillator frequency is 32.768kHz. This means that the counter must count to 2^15 in order to know that one second is passed. Then, when the value is reached, it resets, but sends a signal to another counter. This counter then count to 60, and this indicates seconds. this again sends a signal to the next counter, which counts to 60. The next counter counts to 24 to indicate hour, and so on. There is typically some other logic to take into account leap-years and date of certain months and so on. Some memory might be present to store the counter values in case you need to change the battery or depending on the implementation of the counters.\n\nAs you can see, each year is dependent on each date, which is dependent on each hour, which is dependent on each minute, which again is dependent on each second, which is finally dependent on 32768 oscillations of the crystal. Thus, you can imagine how a small imprecision in the oscillations of the crystal will ripple through the system and potentially provide a wrong answer. Fortunately, crystals are very precise.\n\nIf you only need a light to blink eg. once every second, but don't care about the date or time, there is no need to build such a complicated system. 
It will be much easier to just use an RC oscillator. It will be less precise and prone to age of the components as well as the temperature, but since you only want a blinking light and not a precise time reference, you usually don't care if the light blinks 1.000 per second or 1.001 per second. It will also be cheaper to build in terms of components(crystals are expensive). The funny part is, that the light source will use much more power than the RTC clock.\n\n[Here is a little writeup on the power consumption of an RTC](_URL_0_). For reference, i can tell you that an LED uses around 10-20mA, which would be enough to run off the CR2032 cell for around 100-200 hours, not counting the power consumption of the RC timer itself :)\n\n\n",
"provenance": null
},
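The counter cascade described in the answer above (32,768 crystal ticks make a second, 60 seconds a minute, 60 minutes an hour, 24 hours a day) can be sketched as a simple simulation. This is an illustrative model of the counting scheme, not real RTC firmware:

```python
def rtc_counts(oscillations):
    """Counter cascade: 32768 crystal ticks advance the seconds counter;
    60 s -> 1 min; 60 min -> 1 h; 24 h -> 1 day."""
    total_seconds, _ = divmod(oscillations, 32768)
    total_minutes, seconds = divmod(total_seconds, 60)
    total_hours, minutes = divmod(total_minutes, 60)
    days, hours = divmod(total_hours, 24)
    return days, hours, minutes, seconds

# one full day of crystal ticks
print(rtc_counts(32768 * 86400))  # → (1, 0, 0, 0)
```

Each `divmod` plays the role of one hardware counter resetting and carrying into the next, which is also why a tiny error in the crystal frequency ripples all the way up the chain.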
{
"answer": "It seems like nobody has really given you a complete explanation, so let me take a knock at it.\n\nThere's a guy that referenced charge and decay rates from RC circuits. While it is true that you can measure time by taking voltage measurements, this is NOT how modern day microchips know time. RC analog circuits are simply not accurate enough to perform the precision you need for digital clocks.\n\nThe short answer has already been given to you with [crystal oscillators](_URL_2_) and [phase locked loops](_URL_1_), but there's a little more to the answer than this. Getting a stable clock source is only one part of the equation.\n\nLet's say you have a steady 1 kHz clock (using a crystal clock and phase locked loop, of course), meaning that every 1 second the signal goes on and off 1000 times. This means that for all intents and purposes, your circuit or \"clock\" could never measure more accurately than .001 fraction of a second. This is called your 'resolution.' While this resolution isn't bad, it's still far from great when you're talking digital circuits. You can see why having a much faster clock (GHz range instead of kHz) can be so important when you're talking about precise measurements.\n\nNow, even chips that run in the GHz range will work out long time delays such as 10 seconds or even hours. How does this work? \n\nThe simplest way is to use a counter. As an example, let's go back to our kHz clock that turns on and off 1000 times a second. If you wanted to measure **five seconds**, and flash an LED, you could start a counter that increments from 0 to 5000, toggles the LED, and repeats. Given any clock frequency, you can figure out how high your counter should go to calculate any time. (Counter = ClockFrequency * TimeDelay)\n\nWhile this is the simplest solution, it's FAR from the BEST solution. The most common solution is done in software using [interrupts...](_URL_0_) specifically 'timed' or 'periodic' interrupts. 
Interrupts are features that are already built into a microchip that allow you to basically tell a computer, in X amount of time, wake up and do something. Without getting too involved, that's the essence of a periodic interrupt.\n\nThe answer gets even more complicated when you're talking about different 'threads' or 'cores' running at the same time, but hopefully this quick answer helps a little bit.",
"provenance": null
},
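The Counter = ClockFrequency * TimeDelay relationship from the answer above can be shown as a minimal sketch (a hypothetical helper, not real microcontroller code):

```python
def counter_ticks(clock_hz, delay_s):
    """Counter = ClockFrequency * TimeDelay: how high a counter must
    count at a given clock rate to measure the requested delay."""
    return int(clock_hz * delay_s)

# 1 kHz clock, toggle an LED every five seconds -> count 0..5000
print(counter_ticks(1_000, 5))   # → 5000
# 32.768 kHz watch crystal, one second
print(counter_ticks(32_768, 1))  # → 32768
```

In practice a periodic interrupt does this same arithmetic in hardware: the timer peripheral is loaded with the tick count, and the CPU sleeps until the counter expires.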
{
"answer": "I'm not sure there's a full, accurate answer here yet, so I'll add my two cents.\n\nAs a preface, I'll mention that tiny quartz crystals in the shape of a tuning rod can be precisely calibrated to resonate at a specific frequency. Due to the piezoelectric effect this allows the crystal's vibration to be driven by electric current and also to generate a precisely timed series of electrical pulses. Those pulses are then used in digital circuitry which does little more than add numbers together in order to keep track of seconds, minutes, days, months, years, etc.\n\nIn a typical computer there is an entire subsystem that is effectively just a little quartz watch. This is called the [Real Time Clock](_URL_0_). Computer systems can use this clock to keep track of time, and they can use it in conjunction with it's own sub-systems to keep track of extremely short timescales as well (since the CPU is also powered by a precisely controlled high frequency \"clock\" signal). This sub-system contains a battery so that even when the power is off your computer will still keep track of time.\n\nAdditionally, modern computers call out to trusted time servers on the local network or the internet to keep their clocks calibrated over longer periods of time.",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "25480226",
"title": "Time capture",
"section": "",
"start_paragraph_id": 1,
"start_character": 0,
"end_paragraph_id": 1,
"end_character": 423,
"text": "Time capture is the concept of making sense of time-related data based on timestamps generated by system software. Software that run on PCs and other digital devices rely on internal software clocks to generate timestamps. In turn, these timestamps serve as the basis for representing when an event has occurred (i.e. when an outgoing call was made), and for how long that event lasted (i.e. the duration of a phone call).\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "20556944",
"title": "Tor (anonymity network)",
"section": "Section::::Weaknesses.:Mouse fingerprinting.\n",
"start_paragraph_id": 93,
"start_character": 0,
"end_paragraph_id": 93,
"end_character": 482,
"text": "In March 2016 a security researcher based in Barcelona, demonstrated laboratory techniques using time measurement via JavaScript at the 1-millisecond level could potentially identify and correlate a user's unique mouse movements provided the user has visited the same \"fingerprinting\" website with both the Tor browser and a regular browser. This proof of concept exploits the \"time measurement via JavaScript\" issue which has been an open ticket on the Tor Project for ten months.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "956709",
"title": "Time clock",
"section": "Section::::Biometrics.\n",
"start_paragraph_id": 31,
"start_character": 0,
"end_paragraph_id": 31,
"end_character": 783,
"text": "Biometric time clocks are a feature of more advanced time and attendance systems. Rather than using a key, code or chip to identify the user, they rely on a unique attribute of the user, such as a hand print, finger print, finger vein, palm vein, facial recognition, iris or retina. The user will have their attribute scanned into the system. Biometric readers are often used in conjunction with an access control system, granting the user access to a building, and at the same time clocking them in recording the time and date. These systems also attempt to cut down on fraud such as \"buddy clocking.\" When combined with an access control system they can help prevent other types of fraud such as 'ghost employees', where additional identities are added to payroll but don't exist.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "13338132",
"title": "Medipix",
"section": "Section::::Versions.\n",
"start_paragraph_id": 16,
"start_character": 0,
"end_paragraph_id": 16,
"end_character": 360,
"text": "Timepix is a device conceptually originating from Medipix-2. It adds two more modes to the pixels, in addition to counting of detected signals: Time-over-Threshold (TOT) and Time-of-Arrival (TOA). The detected pulse height is recorded in the pixel counter in the TOT mode. The TOA mode measures time between trigger and arrival of the radiation into each pixel.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "25480226",
"title": "Time capture",
"section": "",
"start_paragraph_id": 2,
"start_character": 0,
"end_paragraph_id": 2,
"end_character": 301,
"text": "Time capture software uses data mining techniques to index, cleanse and make sense of this data. Applications include automated time tracking, where software can track the time a user spends on various PC-based tasks, such as time in applications, files/documents, web pages (via browser), and emails.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "6613070",
"title": "System time",
"section": "Section::::History.\n",
"start_paragraph_id": 10,
"start_character": 0,
"end_paragraph_id": 10,
"end_character": 501,
"text": "Microcontrollers operating within embedded systems (such as the Raspberry Pi, Arduino, and other similar systems) do not always have internal hardware to keep track of time. Many such controller systems operate without knowledge of the external time. Those that require such information typically initialize their base time upon rebooting by obtaining the current time from an external source, such as from a time server or external clock, or by prompting the user to manually enter the current time.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "1006035",
"title": "Unix time",
"section": "Section::::Definition.\n",
"start_paragraph_id": 4,
"start_character": 0,
"end_paragraph_id": 4,
"end_character": 284,
"text": "Two layers of encoding make up Unix time. The first layer encodes a point in time as a scalar real number which represents the number of seconds that have passed since 00:00:00 UTC on Thursday, 1 January 1970. The second layer encodes that number as a sequence of bits or decimal digits.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
qdarr | Does light and sound truly travel in a wave-like manner as we draw it (sine wave), or is the pattern of travel misrepresented by our pictures of the sine wave and the actual travel motion something different? | [
{
"answer": "No, they don't 'travel' in a sine wave. Rather, if you take a sound wave, and you measure the air pressure along its path, you'll notice that you will measure a sine wave in pressure.\n\nSimilarly, for light, if you measure the electric field along the path of light, you will measure a sine wave.",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "686036",
"title": "Wave vector",
"section": "Section::::Direction of the wave vector.\n",
"start_paragraph_id": 26,
"start_character": 0,
"end_paragraph_id": 26,
"end_character": 233,
"text": "For example, when a wave travels through an anisotropic medium, such as light waves through an asymmetric crystal or sound waves through a sedimentary rock, the wave vector may not point exactly in the direction of wave propagation.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "2200436",
"title": "Front velocity",
"section": "Section::::Various velocities.\n",
"start_paragraph_id": 4,
"start_character": 0,
"end_paragraph_id": 4,
"end_character": 627,
"text": "Associated with propagation of a disturbance are several different velocities. For definiteness, consider an amplitude modulated electromagnetic carrier wave. The phase velocity is the speed of the underlying carrier wave. The group velocity is the speed of the modulation or envelope. Initially it was thought that the group velocity coincided with the speed at which \"information\" traveled. However, it turns out that this speed can exceed the speed of light in some circumstances, causing confusion by an apparent conflict with the theory of relativity. That observation led to consideration of what constitutes a \"signal\".\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "686036",
"title": "Wave vector",
"section": "Section::::Direction of the wave vector.\n",
"start_paragraph_id": 24,
"start_character": 0,
"end_paragraph_id": 24,
"end_character": 552,
"text": "The direction in which the wave vector points must be distinguished from the \"direction of wave propagation\". The \"direction of wave propagation\" is the direction of a wave's energy flow, and the direction that a small wave packet will move, i.e. the direction of the group velocity. For light waves, this is also the direction of the Poynting vector. On the other hand, the wave vector points in the direction of phase velocity. In other words, the wave vector points in the normal direction to the surfaces of constant phase, also called wavefronts.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "863741",
"title": "Light field",
"section": "Section::::The 4D light field.:Sound analog.\n",
"start_paragraph_id": 15,
"start_character": 0,
"end_paragraph_id": 15,
"end_character": 586,
"text": "This two-dimensionality, compared with the apparent four-dimensionality of light, is because light travels in rays (0D at a point in time, 1D over time), while by Huygens–Fresnel principle, a sound wave front can be modeled as spherical waves (2D at a point in time, 3D over time): light moves in a single direction (2D of information), while sound simply expands in every direction. However, light travelling in non-vacuous media may scatter in a similar fashion, and the irreversibility or information lost in the scattering is discernible in the apparent loss of a system dimension.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "61328143",
"title": "Performance and Modelling of AC Transmission",
"section": "Section::::Long Transmission Line.:Travelling waves in Transmission Line.\n",
"start_paragraph_id": 298,
"start_character": 0,
"end_paragraph_id": 298,
"end_character": 465,
"text": "Travelling waves are the current and voltage waves that creates a disturbance and moves along the transmission line from the sending end of a transmission line to the other end at a constant speed. The travelling wave plays a major role in knowing the voltages and currents at all the points in the power system. These waves also help in designing the insulators, protective equipment, the insulation of the terminal equipment, and overall insulation coordination.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "33125",
"title": "Wavelength",
"section": "Section::::More general waveforms.\n",
"start_paragraph_id": 44,
"start_character": 0,
"end_paragraph_id": 44,
"end_character": 296,
"text": "If a traveling wave has a fixed shape that repeats in space or in time, it is a \"periodic wave\". Such waves are sometimes regarded as having a wavelength even though they are not sinusoidal. As shown in the figure, wavelength is measured between consecutive corresponding points on the waveform.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "33125",
"title": "Wavelength",
"section": "Section::::Sinusoidal waves.:General media.:Nonuniform media.\n",
"start_paragraph_id": 35,
"start_character": 0,
"end_paragraph_id": 35,
"end_character": 461,
"text": "Waves that are sinusoidal in time but propagate through a medium whose properties vary with position (an \"inhomogeneous\" medium) may propagate at a velocity that varies with position, and as a result may not be sinusoidal in space. The figure at right shows an example. As the wave slows down, the wavelength gets shorter and the amplitude increases; after a place of maximum response, the short wavelength is associated with a high loss and the wave dies out.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
3qexv5 | what processes or treatments are performed upon meat to classify it as a carcinogen according to the who study? what can i look for on the nutrition facts label to determine whether it's relatively safe? | [
{
"answer": "The most recent one (about processed or red meat like bacon, sausage, and steak) is actually a report based on over 800 independent studies they've aggregated the information on.\n\nThe methods vary, but the results show that consumption of those products leads to an increased cancer risk, thus they are \"carcinogens\". They do not necessarily cause cancer; the studies have merely shown a link between consumption and raised risk.\n\nIt's important to note, almost everything is a carcinogen. So many things, in fact, it's not worth worrying about. There is no labeling to indicate carcinogen status.\n\nFor instance: anything that is browned or burned, from toast to roasted garlic to a marshmallow, is carcinogenic. Sunlight is carcinogenic. Birth control pills, alcohol, vinyl chloride (used to make the white PVC pipes in your house), diesel exhaust, ginger, salted fish, wood dust, mineral oil, various dyes, nickel, breathing non-filtered air, and many other things are all in the same \"Group 1\" of carcinogens, along with many other things.",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "18940",
"title": "Meat",
"section": "Section::::Health.:Cancer.\n",
"start_paragraph_id": 124,
"start_character": 0,
"end_paragraph_id": 124,
"end_character": 702,
"text": "There are concerns about a relationship between the consumption of meat, in particular processed and red meat, and increased cancer risk. The International Agency for Research on Cancer (IARC), a specialized agency of the World Health Organization (WHO), classified processed meat (e.g., bacon, ham, hot dogs, sausages) as, \"\"carcinogenic to humans\" (Group 1), based on \"sufficient evidence\" in humans that the consumption of processed meat causes colorectal cancer.\" IARC also classified red meat as \"\"probably carcinogenic to humans\" (Group 2A), based on \"limited evidence\" that the consumption of red meat causes cancer in humans and \"strong\" mechanistic evidence supporting a carcinogenic effect.\"\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "1764200",
"title": "Red meat",
"section": "Section::::Human health.:Cancer.\n",
"start_paragraph_id": 18,
"start_character": 0,
"end_paragraph_id": 18,
"end_character": 803,
"text": "The International Agency for Research on Cancer (IARC) classified processed meat (e.g., bacon, ham, hot dogs, sausages) as, \"\"carcinogenic to humans\" (Group 1), based on \"sufficient evidence\" in humans that the consumption of processed meat causes colorectal cancer.\" IARC also classified red meat as \"\"probably carcinogenic to humans\" (Group 2A), based on \"limited evidence\" that the consumption of red meat causes cancer in humans and \"strong\" mechanistic evidence supporting a carcinogenic effect.\" Subsequent studies have shown that taxing processed meat products could save lives, particularly in the West where meat intensive diets are the norm. If the amount of taxation was linked to the level of harm they caused, some processed meats, such as bacon and sausages, would nearly double in price.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "6494435",
"title": "Curing (food preservation)",
"section": "Section::::Effect of meat preservation.:On health.\n",
"start_paragraph_id": 46,
"start_character": 0,
"end_paragraph_id": 46,
"end_character": 220,
"text": "In 2015, the International Agency for Research on Cancer of the World Health Organization classified processed meat, that is, meat that has undergone salting, curing, fermenting, or smoking, as \"carcinogenic to humans\".\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "1855289",
"title": "International Agency for Research on Cancer",
"section": "Section::::Controversies.:Glyphosate Monograph (2015–2018).:Criticism of Monographs methodology.\n",
"start_paragraph_id": 37,
"start_character": 0,
"end_paragraph_id": 37,
"end_character": 495,
"text": "On 26 October 2015, a Working Group of 22 experts from 10 countries evaluated the carcinogenicity of the consumption of red meat and processed meat and classified the consumption of red meat as \"probably carcinogenic to humans (Group 2A)\", mainly related to colorectal cancer, and to pancreatic and prostate cancer. It also evaluated processed meat to be \"carcinogenic to humans (Group 1)\", due to \"sufficient evidence in humans that the consumption of processed meat causes colorectal cancer\".\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "1781370",
"title": "Processed meat",
"section": "Section::::Relationship to cancer.\n",
"start_paragraph_id": 4,
"start_character": 0,
"end_paragraph_id": 4,
"end_character": 262,
"text": "The International Agency for Research on Cancer at the World Health Organization classifies processed meat as Group 1 (carcinogenic to humans), because the IARC has found sufficient evidence that consumption of processed meat by humans causes colorectal cancer.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "10834",
"title": "Food preservation",
"section": "",
"start_paragraph_id": 3,
"start_character": 0,
"end_paragraph_id": 3,
"end_character": 284,
"text": "Some methods of food preservation are known to create carcinogens. In 2015, the International Agency for Research on Cancer of the World Health Organization classified processed meat, i.e. meat that has undergone salting, curing, fermenting, and smoking, as \"carcinogenic to humans\".\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "294295",
"title": "Glyphosate",
"section": "Section::::Toxicity.:Government and organization positions.:European Food Safety Authority.\n",
"start_paragraph_id": 83,
"start_character": 0,
"end_paragraph_id": 83,
"end_character": 399,
"text": "A 2013 systematic review by the German Institute for Risk Assessment (BfR) examined more than 1000 epidemiological studies, animal studies, and \"in vitro\" studies. It found that \"no classification and labelling for carcinogenicity is warranted\" and did not recommend a carcinogen classification of either 1A or 1B. It provided the review to EFSA in January 2014 which published it in December 2014.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
26fvvl | How did walled cities deal with urban sprawl when walls were critical for city defense? | [
{
"answer": "In the case of Rome, they would just build a new, larger wall around the city. It is important to note that city walls were huge undertakings that were extremely costly, so usually walls were only built when it was believed that the cost of not building a wall would exceed the cost of building one. By the time gunpowder was popular, city walls had generally ceased to be effective enough to merit the investment in their construction.",
"provenance": null
},
{
"answer": "Many times they'd just be built around and/or knocked down.\n\nSiena still has its walls up and fully intact due to post-bubonic-plague demographics, as the city didn't regain its pre-plague population level until the 1800s.\n\nHere's a picture of a fully modern city still surrounded by walls, and they still close the gates every night \n\n_URL_1_\n\n_URL_2_\n\n_URL_0_",
"provenance": null
},
{
"answer": "As a piggyback question: in the novel A Clash of Kings, by George R. R. Martin, a character orders sprawl around the wall to be burned down, so enemy soldiers can't climb it and get over the wall.\n\nWas this ever a concern during a siege? Were walls ever climbed by enemy armies?",
"provenance": null
},
{
"answer": "For one thing, sprawl didn't exist in the way it does today. True sprawl, like we see today in America, wasn't possible without motorized transport. \n\nHowever, when the city started to expand beyond historic walls, in some cases it just became a matter of money. In medieval Dublin, living within the wall came with certain taxes. In return, those people obviously got the protection of the city defenses. Those who didn't want or couldn't afford the tax had to risk living outside the wall, and the city didn't have the responsibility to give them much protection. The fixation line of the wall still exists in Dublin between the area around St. Patrick's cathedral and the Liberties neighborhood across the street. ",
"provenance": null
},
{
"answer": "In the Netherlands, the growth of cities was severely restricted by their walls. (For a typical example of how those walls looked, see the [city of Brielle](_URL_4_). The suburb to the south is twentiest-century.)\n\nThe problem with suburbs and urban sprawl is not so much that they are undefended, but rather that they stand in the way of your cannons, making any construction directly outside the city walls impossible. Thus, you have to build the walls first, as in the case of [Amsterdam](_URL_3_) mentioned above.\n\nMost cities in the Netherlands kept their walls into the second half of the nineteenth century[1]; by that time, most Dutch cities were very, *very* crowded.\n\nLook at the city of Utrecht, which took down its walls very early, in 1830: compare [this map of 1865](_URL_2_) with [this one of 1649](_URL_5_). In 1865, the city is still not much larger than in 1649, while having almost twice the inhabitants[2].\n\nA very nice book on overpopulation in Dutch cities in the nineteenth century is [Koninkrijk vol sloppen](_URL_0_), but it is in Dutch.\n\n[1] The [fortress law of 1874](_URL_6_) listed a large number of cities that were finally allowed to demolish their walls and was the end of the paradigm of defensible cities (with [one exception](_URL_1_)). Most walls were turned into much-needed public parks, so that the form of the old fortifications can often still be recognised today.\n\n[2] According to [this table on wikipedia](_URL_7_).",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "139114",
"title": "Defensive wall",
"section": "Section::::Decline.\n",
"start_paragraph_id": 33,
"start_character": 0,
"end_paragraph_id": 33,
"end_character": 451,
"text": "In the wake of city growth and the ensuing change of defensive strategy, focusing more on the defense of forts around cities, many city walls were demolished. Also, the invention of gunpowder rendered walls less effective, as siege cannons could then be used to blast through walls, allowing armies to simply march through. Today, the presence of former city fortifications can often only be deduced from the presence of ditches, ring roads or parks.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "30275656",
"title": "Wars of the Roses",
"section": "Section::::Aftermath.\n",
"start_paragraph_id": 117,
"start_character": 0,
"end_paragraph_id": 117,
"end_character": 388,
"text": "Many areas did little or nothing to change their city defences, perhaps an indication that they were left untouched by the wars. City walls were either left in their ruinous state or only partially rebuilt. In the case of London, the city was able to avoid being devastated by convincing the York and Lancaster armies to stay out after the inability to recreate the defensive city walls.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "58412746",
"title": "Ancient South Arabian art",
"section": "Section::::Architecture.:Secular architecture.\n",
"start_paragraph_id": 17,
"start_character": 0,
"end_paragraph_id": 17,
"end_character": 441,
"text": "All cities were protected by a city wall (two consecutive walls in the case of Shabwa), with at least two gates, which could be protected by towers. The course of the walls, which was either simply structured or included bastions, had to follow the terrain, especially in mountainous regions, and this is what created irregular city plans. Sometimes cities were protected by citadels, as in Shabwa, Raidan, Qana', and the Citadel of Rada'a.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "15331525",
"title": "Murage",
"section": "",
"start_paragraph_id": 5,
"start_character": 0,
"end_paragraph_id": 5,
"end_character": 298,
"text": "Some of the walls were probably enclosing towns for the first time. Others, such as at Worcester, were to extend walls in order to bring suburbs inside the town, or to fund the repair of existing walls, as was the case at Canterbury, to which murage was granted in 1378, 1379, 1385, 1399 and 1402.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "1355817",
"title": "Bastide",
"section": "Section::::Structural Elements.:City walls.\n",
"start_paragraph_id": 24,
"start_character": 0,
"end_paragraph_id": 24,
"end_character": 575,
"text": "When bastides were founded most had no city walls or fortifications. This was because it was a peaceful time in history, and walls were prohibited by the Treaty of Paris (1229). Fortifications were added later. This was paid for either through a special tax, or carried out through a law that required that the people of the city helped build the walls. A good example is Libourne. Ten years after the city was founded, the people asked for money to build city walls. Once they had received the money, they spent it on making their city prettier, rather than building walls.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "139114",
"title": "Defensive wall",
"section": "Section::::Composition.\n",
"start_paragraph_id": 27,
"start_character": 0,
"end_paragraph_id": 27,
"end_character": 770,
"text": "Urban areas outside the city walls, so-called Vorstädte, were often enclosed by their own set of walls and integrated into the defense of the city. These areas were often inhabited by the poorer population and held the \"noxious trades\". In many cities, a new wall was built once the city had grown outside of the old wall. This can often still be seen in the layout of the city, for example in Nördlingen, and sometimes even a few of the old gate towers are preserved, such as the \"white tower\" in Nuremberg. Additional constructions prevented the circumvention of the city, through which many important trade routes passed, thus ensuring that tolls were paid when the caravans passed through the city gates, and that the local market was visited by the trade caravans.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "346079",
"title": "City gate",
"section": "Section::::Uses.\n",
"start_paragraph_id": 5,
"start_character": 0,
"end_paragraph_id": 5,
"end_character": 506,
"text": "With increased stability and freedom, many walled cities removed such fortifications as city gates, although many still survive; albeit for historic interest rather than security. Many surviving gates have been heavily restored, rebuilt or new ones created to add to the appearance of a city, such as Bab Bou Jalous in Fes. With increased levels of traffic, city gates have come under threat in the past for impeding the flow of traffic, such as Temple Bar in London which was removed in the 19th century.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
2e7xwh | Do people transitioning through HRT experience changes in muscle tone and physical ability? | [
{
"answer": "I am unaware of any studies specifically about muscle tone in transgender people, if anyone knows about them I'd love to read up on them.\n\nIn terms of policy, the International Olympics Committee recognizes that hormone replacement therapy significantly alters an athlete's abilities. Transgender people who have been on HRT for at least two years and have had sex reassignment surgery are eligible to compete in sports as their recognized gender. ",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "14673086",
"title": "Transgender hormone therapy (female-to-male)",
"section": "Section::::Effects.:Psychological changes.\n",
"start_paragraph_id": 141,
"start_character": 0,
"end_paragraph_id": 141,
"end_character": 363,
"text": "The psychological changes are harder to define, since HRT is usually the first physical action that takes place when transitioning. This fact alone has a significant psychological impact, which is hard to distinguish from hormonally induced changes. Most trans men report an increase of energy and an increased sex drive. Many also report feeling more confident.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "997173",
"title": "Electromyography",
"section": "Section::::Technique.:Other measurements.\n",
"start_paragraph_id": 26,
"start_character": 0,
"end_paragraph_id": 26,
"end_character": 507,
"text": "EMG can also be used for indicating the amount of fatigue in a muscle. The following changes in the EMG signal can signify muscle fatigue: an increase in the mean absolute value of the signal, increase in the amplitude and duration of the muscle action potential and an overall shift to lower frequencies. Monitoring the changes of different frequency changes the most common way of using EMG to determine levels of fatigue. The lower conduction velocities enable the slower motor neurons to remain active.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "32070582",
"title": "Aging movement control",
"section": "Section::::Training consequences.\n",
"start_paragraph_id": 28,
"start_character": 0,
"end_paragraph_id": 28,
"end_character": 584,
"text": "Neural changes like reduced motor unit discharge rates, increased variability of motor unit discharge activity, altered recruitment and derecruitment behavior mediate modifications in muscle control. On the other hand, physiological deleterious factors including motor unit loss, increased motor unit innervation ratios also affect muscle force. Through strength training, old adults can significantly improve their force control. The rapid adaptation suggests modifications in motor unit activation, increased excitability of motoneuron pool, and decreased antagonist cocontraction.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "59782908",
"title": "Psychological stress and sleep",
"section": "Section::::Immune mediation.\n",
"start_paragraph_id": 18,
"start_character": 0,
"end_paragraph_id": 18,
"end_character": 732,
"text": "This IL-6 increase is also observed during times of increased psychological stress. In a laboratory setting, individuals exposed to psychological stressors have had raised IL-6 (and acute phase protein CRP) measured especially in those who displayed anger or anxiety in response to stressful stimulus. Just as the human body responds to inflammation-inducing illness with increased fatigue or reduced sleep quality, so too does it respond to psychological stress with a sickness behaviour of tiredness and poor sleep quality. While sleep is important for recovery from stress, as with an inflammatory illness, continuous and long term increases of inflammatory markers with its associated behaviours may be considered maladaptive. \n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "14673089",
"title": "Transgender hormone therapy (male-to-female)",
"section": "Section::::Physical and mental effects.:Neurological changes.\n",
"start_paragraph_id": 106,
"start_character": 0,
"end_paragraph_id": 106,
"end_character": 307,
"text": "All aforementioned physical changes can, and reportedly do, change the experience of sensation compared to prior to HRT. Areas affected include, but aren't limited to, the basic senses, erogenous stimulus, perception of emotion, perception of social interaction, and processing of feelings and experiences.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "530708",
"title": "Muscle memory",
"section": "Section::::Physiology.:Strength training and adaptations.\n",
"start_paragraph_id": 20,
"start_character": 0,
"end_paragraph_id": 20,
"end_character": 825,
"text": "Evidence has shown that increases in strength occur well before muscle hypertrophy, and decreases in strength due to detraining or ceasing to repeat the exercise over an extended period of time precede muscle atrophy. To be specific, strength training enhances motor neuron excitability and induces synaptogenesis, both of which would help in enhancing communication between the nervous system and the muscles themselves. However, neuromuscular efficacy is not altered within a two-week time period following cessation of the muscle usage; instead, it is merely the neuron's ability to excite the muscle that declines in correlation with the muscle's decrease in strength. This confirms that muscle strength is first influenced by the inner neural circuitry, rather than by external physiological changes in the muscle size.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "1137227",
"title": "DNA methylation",
"section": "Section::::In mammals.:In exercise.\n",
"start_paragraph_id": 38,
"start_character": 0,
"end_paragraph_id": 38,
"end_character": 535,
"text": "High intensity exercise has been shown to result in reduced DNA methylation in skeletal muscle. Promoter methylation of PGC-1α and PDK4 were immediately reduced after high intensity exercise, whereas PPAR-γ methylation was not reduced until three hours after exercise. By contrast, six months of exercise in previously sedentary middle-age men resulted in increased methylation in adipose tissue. One study showed a possible increase in global genomic DNA methylation of white blood cells with more physical activity in non-Hispanics.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
595hir | is there really a difference between large tv screens and computer screens anymore? | [
{
"answer": "Many TVs actually have post-processing in order to make their picture look better than similar competitors. This software causes slight input lag that may not be noticeable for some, but with PC gamers it can be noticeable. \n\n",
"provenance": null
},
{
"answer": "I'm using a 40\" Vizio as a monitor for my computer right now. It's working great. \n\nThe most visually intensive thing I do is play is Kerbal Space Program, and it works great.\n\nUse a TV, it will be fine.",
"provenance": null
},
{
"answer": "There is usually a huge difference that I'm surprised no one mentioned: [Input lag](_URL_0_). TVs tend to have much greater input lag, that is, it takes more time from receiving the frame to displaying it. That's not a big deal for watching TV, but it means more time passes between doing something with a controller and the effect occurring in the game you're playing. [This gamer](_URL_1_) noticed a big difference when trying a low-latency monitor for a first-person shooter.",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "14682695",
"title": "Technology of television",
"section": "Section::::Aspect ratios.\n",
"start_paragraph_id": 41,
"start_character": 0,
"end_paragraph_id": 41,
"end_character": 815,
"text": "Recently \"widescreen\" has spread from television to computing where both desktop and laptop computers are commonly equipped with widescreen displays. There are some complaints about distortions of movie picture ratio due to some DVD playback software not taking account of aspect ratios; but this may subside as the DVD playback software matures. Furthermore, computer and laptop widescreen displays are in the 16:10 aspect ratio both physically in size and in pixel counts, and not in 16:9 of consumer televisions, leading to further complexity. This was a result of widescreen computer display engineers' assumption that people viewing 16:9 content on their computer would prefer that an area of the screen be reserved for playback controls, subtitles or their Taskbar, as opposed to viewing content full-screen.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "1306721",
"title": "On-screen display",
"section": "Section::::History.\n",
"start_paragraph_id": 4,
"start_character": 0,
"end_paragraph_id": 4,
"end_character": 400,
"text": "When electronics became more advanced, it became clear that adding some extra devices for an OSD was cheaper than adding a second display device. TV screens had become much bigger and could display much more information than a small second display. OSDs display graphical information superimposed over the picture, which is done by synchronizing the reading from OSD video memory with the TV signal.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "14564013",
"title": "16:10 aspect ratio",
"section": "Section::::History.:Computer displays.:Industry moves towards 16:9 from 2008.\n",
"start_paragraph_id": 10,
"start_character": 0,
"end_paragraph_id": 10,
"end_character": 593,
"text": "The primary reason for this move was considered to be production efficiency - since display panels for TVs use the aspect ratio, it became more efficient for display manufacturers to produce computer display panels in the same aspect ratio as well. A 2008 report by DisplaySearch also cited a number of other reasons, including the ability for PC and monitor manufacturers to expand their product ranges by offering products with wider screens and higher resolutions, helping consumers to adopt such products more easily and \"stimulating the growth of the notebook PC and LCD monitor market\".\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "24777336",
"title": "Home screen",
"section": "",
"start_paragraph_id": 1,
"start_character": 0,
"end_paragraph_id": 1,
"end_character": 383,
"text": "A home screen, homescreen, or start screen is the main screen on a mobile operating system or computer program. Home screens are not identical because users rearrange icons as they please, and home screens often differ across mobile operating systems. Almost every smartphone has some form of home screen, which typically displays links to applications, settings, and notifications.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "30876233",
"title": "Aspect ratio (image)",
"section": "Section::::Visual comparisons.\n",
"start_paragraph_id": 48,
"start_character": 0,
"end_paragraph_id": 48,
"end_character": 505,
"text": "Televisions and other displays typically list their size by their diagonal. Given the same diagonal, a 4:3 screen has more area compared to 16:9. For CRT-based technology, an aspect ratio that is closer to square is cheaper to manufacture. The same is true for projectors, and other optical devices such as cameras, camcorders, etc. For LCD and plasma displays, however, the cost is more related to the area. Producing wider and shorter screens can yield the same advertised diagonal, but with less area.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "8400335",
"title": "Software portability",
"section": "Section::::Strategies for portability.:Different processors.\n",
"start_paragraph_id": 13,
"start_character": 0,
"end_paragraph_id": 13,
"end_character": 504,
"text": " the majority of desktop and laptop computers used microprocessors compatible with the 32- and 64-bit x86 instruction sets. Smaller portable devices use processors with different and incompatible instruction sets, such as ARM. The difference between larger and smaller devices is such that detailed software operation is different; an application designed to display suitably on a large screen cannot simply be ported to a pocket-sized smartphone with a tiny screen even if the functionality is similar.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "5548053",
"title": "Best coding practices",
"section": "Section::::Coding standards.:Keep the code simple.\n",
"start_paragraph_id": 90,
"start_character": 0,
"end_paragraph_id": 90,
"end_character": 747,
"text": "Finally, very terses layouts might better utilize modern wide-screen computer displays. In the past screens were limited to 40 or 80 characters (such limits originated far earlier: manuscripts, printed books, and even scrolls, have for millennia used quite short lines (see for example Gutenberg Bible). Modern screens can easily display 200 or more characters, allowing extremely long lines. Most modern coding styles and standards do not take up that entire width. Thus, if using one window as wide as the screen, a great deal of available space is wasted. On the other hand, with multiple windows, or using an IDE or other tool with various information in side panes, the available width for code is in the range familiar from earlier systems.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
nx0vm | Why does drinking whiskey help my throat when it is sore? | [
{
"answer": "When you drink whiskey (and don't have a sore throat) it feels kind of hot and tingly, right? That happens because compounds in the whiskey (primarily alcohol and some tannins) are able to partially turn on the same neurons that normally sense heat and pain. It's a relatively weak, short-lived effect, but it happens.\n\nNow, neurons that transmit pain and heat information to the brain run in networks with other neurons that sense benign stimuli like touch and pressure, and these adjacent neurons influence the signalling of one another. The overall neuron network has a limited information-carrying capacity (since neurons can only conduct information at a limited rate, and have a period of time after firing during which they cannot fire again).\n\nThink about a time when you've poked a finger, stubbed a toe, or banged your knee on a table. What did you do next? Probably started shaking your finger, walking around quickly (saying \"ouch, ouch, ouch\") on the toe, or rubbed your knee. Right? This behavior exploits the limited information-carrying capacity of the pain network. You flood the neuron \"pipe\" with benign, non-pain information and thus effectively block some of the pain signal from getting to the brain.\n\nThe whiskey on your sore throat has a similar effect. The neurons that sense warmth and one type of pain get stimulated by the whiskey, and temporarily block the other pain neurons from delivering their \"my throat is sore\" information to the brain. It also increases the latency period of the pain neurons for a while, meaning they are able to fire less often, and thus deliver less total pain signal to your brain. ",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "26083160",
"title": "Throat irritation",
"section": "Section::::Treatment.\n",
"start_paragraph_id": 27,
"start_character": 0,
"end_paragraph_id": 27,
"end_character": 320,
"text": "Home remedies for throat irritation include gargling with warm water twice a day, sipping honey and lemon mixture or sucking on medicated lozenges. If the cause is dry air, then one should humidify the home. Since smoke irritates the throat, stop smoking and avoid all fumes from chemicals, paints and volatile liquids.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "1002473",
"title": "Gastritis",
"section": "Section::::Pathophysiology.:Acute.\n",
"start_paragraph_id": 26,
"start_character": 0,
"end_paragraph_id": 26,
"end_character": 247,
"text": "Also, note that alcohol consumption does not cause chronic gastritis. It does, however, erode the mucosal lining of the stomach; low doses of alcohol stimulate hydrochloric acid secretion. High doses of alcohol do not stimulate secretion of acid.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "22713",
"title": "Opium",
"section": "Section::::Modern production and use.:Consumption.\n",
"start_paragraph_id": 107,
"start_character": 0,
"end_paragraph_id": 107,
"end_character": 447,
"text": "In Eastern culture, opium is more commonly used in the form of paregoric to treat diarrhea. This is a weaker solution than laudanum, an alcoholic tincture which was prevalently used as a pain medication and sleeping aid. Tincture of opium has been prescribed for, among other things, severe diarrhea. Taken thirty minutes prior to meals, it significantly slows intestinal motility, giving the intestines greater time to absorb fluid in the stool.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "1353592",
"title": "Kretek",
"section": "Section::::Health effects.\n",
"start_paragraph_id": 16,
"start_character": 0,
"end_paragraph_id": 16,
"end_character": 338,
"text": "The eugenol in clove smoke causes a numbing of the throat which can diminish the gag reflex in users, leading researchers to recommend caution for individuals with respiratory infections. There have also been a few cases of aspiration pneumonia in individuals with normal respiratory tracts possibly because of the diminished gag reflex.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "310094",
"title": "Sore throat",
"section": "Section::::Differential diagnosis.\n",
"start_paragraph_id": 7,
"start_character": 0,
"end_paragraph_id": 7,
"end_character": 448,
"text": "A sore throat is usually from irritation or inflammation. The most common cause (80%) is acute viral pharyngitis, a viral infection of the throat. Other causes include other infections (such as streptococcal pharyngitis), trauma, and tumors. Gastroesophageal (acid) reflux disease can cause stomach acid to back up into the throat and also cause the throat to become sore. In children streptococcal pharyngitis is the cause of 37% of sore throats.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "3187173",
"title": "Health effects of wine",
"section": "Section::::Effect on the body.:Cardiovascular system.\n",
"start_paragraph_id": 14,
"start_character": 0,
"end_paragraph_id": 14,
"end_character": 881,
"text": "Studies have shown that heavy drinkers put themselves at greater risk for heart disease and developing potentially fatal cardiac arrhythmias. Excessive alcohol consumption can cause higher blood pressure, increased cholesterol levels and weakened heart muscles. Studies have shown that moderate wine drinking can improve the balance of low-density lipoprotein (LDL or \"bad\" cholesterol) to high-density lipoprotein (HDL or \"good\" cholesterol), which has been theorized as to clean up or remove LDL from blocking arteries. The main cause of heart attacks and the pain of angina is the lack of oxygen caused by blood clots and atheromatous plaque build up in the arteries. The alcohol in wine has anticoagulant properties that limits blood clotting by making the platelets in the blood less prone to stick together and reducing the levels of fibrin protein that binds them together.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "26083160",
"title": "Throat irritation",
"section": "Section::::Acid reflux.\n",
"start_paragraph_id": 18,
"start_character": 0,
"end_paragraph_id": 18,
"end_character": 712,
"text": "This affliction is a common cause of throat irritation. Normally the stomach produces acid in the stomach which is neutralized in the small intestine. To prevent acid from flowing backwards, the lower part of the swallowing tube (esophagus) has a valve which closes after food passes through. In some individuals, this valve becomes incompetent and acid goes up into the esophagus. Reflux episodes often occur at night and one may develop a bitter taste in the mouth. The throat can be severely irritated when acid touches the vocal cords and can lead to spasms of coughing. To prevent throat irritation from reflux, one should lose weight, stop smoking, avoid coffee beverages and sleep with the head elevated.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
425bqr | why is it hard to get a good picture of something that "glows in the dark?" | [
{
"answer": "Your eyes are magnificent sensors, and cameras are not as good. Your eye has adaptive gain control, which allows you to see better in the dark by trading \"frame rate\" for sensitivity. To get the same effect in a camera, you need a longer exposure. If you have a nice camera and a tripod, you should be able to get great images. The camera on your cell phone just has too small a lens. Your eye also slightly blooms glowing objects in a dark space, which the camera would not.",
"provenance": null
},
{
"answer": "The reason is that a camera is basically an array of small sensor elements, each of which counts how many photons (light particles) hit it during the exposure. Since the number of photons a glow-in-the-dark object sends out per unit of time is so low, you would need to record for a long time to be able to distinguish the actual signal from the noise. However, if you ''record'' too long the elements will ''overflow'' and ''leak into neighbouring elements'' (causing so-called blooming artefacts, which is what you see if you take a photo of the sun). Therefore it is easy to construct a camera that would capture great images of glow-in-the-dark products, but it would require you to hold it stable for a long time and be useless in normal lighting, since everything would become white due to blooming.",
"provenance": null
},
{
"answer": "Are you using a cell phone camera? You need a DSLR, and the trick is to shoot the photo in manual mode with a super slow shutter speed. It greatly helps to have a tripod and a remote control, since slow shutter speeds can make the photo blurry if the camera moves slightly.\n\nIf using a normal point-and-shoot digital camera, you have little control over the functionality, and thus the camera automatically compensates for lack of light by making the shots grainy. And as a rule of thumb, never use flash.\n\n[Here's a photo I took of my glow-in-the-dark LEGO ghost minifigures under a blacklight against a black background in the dark, using a slow shutter speed.](_URL_0_) This was taken with my Nikon DSLR, and with a tripod, a camera remote control, and manual focus.",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "3953330",
"title": "Photoclinometry",
"section": "Section::::Problems.\n",
"start_paragraph_id": 10,
"start_character": 0,
"end_paragraph_id": 10,
"end_character": 314,
"text": "Light direction is very important to the quality of a photoclinometric image. Light that comes from directly over the surface (behind the camera) makes it hard to distinguish the shadows. Multiple light sources are also a problem, since they destroy important shadows required for the algorithms to work properly.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "51426240",
"title": "Barry Masteller",
"section": "",
"start_paragraph_id": 5,
"start_character": 0,
"end_paragraph_id": 5,
"end_character": 608,
"text": "\"As a painter - when I pick up a camera it becomes a drawing tool. Night photography has so many things about it that fit my aesthetic - a way to truly capture light and movement. Since the light is limited at that time of day and I'm using a hand held camera I can't expect to get a \"picture.\" What I get instead is an image of long exposure red, white, yellow and green light line tracings; from street lights, signals, car and porch lights – the reseeding and approaching sunlight for its beautiful blues and violets and any other accidental light source that finds its way onto the sensor of my camera.\"\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "17357470",
"title": "Photographic lighting",
"section": "Section::::Perceptual cause and effect.\n",
"start_paragraph_id": 12,
"start_character": 0,
"end_paragraph_id": 12,
"end_character": 1069,
"text": "A skilled photographer can manipulate how a viewer is likely to react to the content of a photo by manipulating the lighting. Outdoors that can require changing location, waiting for the ideal time of day or in some cases the ideal time of year for the lighting to create the desired impression in the photo or manipulating the natural lighting by using reflectors or flash. In a studio setting there is no limit to options for lighting objects to ether make them look \"seen by eye\" normal or surreal as the goals for the photograph require. But more often than not the reaction on the part of the view will be from the baseline of whether the lighting seems normal/natural or not compared to other clues. Mistakes less skilled photographer often make when mixing flash and natural lighting is not matching with the flash the highlight and shadow clues seen in the ambient lit background. If the background is illuminated by the setting sun but the face in the foreground appears to have been photographed at noon it will not seem normal because the clues don't match.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "17357470",
"title": "Photographic lighting",
"section": "Section::::Perceptual cause and effect.\n",
"start_paragraph_id": 11,
"start_character": 0,
"end_paragraph_id": 11,
"end_character": 643,
"text": "The goal in all photographs is not to create an impression of normality. But as with magic, knowing what the audience normally expects to see required to pull off a lighting strategy which fools the brain or creates an other than normal impression. Light direction relative to the camera can make a round ball appear to be a flat disk or a sphere. The position of highlights and direction and length of shadows will provide other clues to shape and outdoors the time of day. The tone of the shadows on an object or provide contextual clues about the time of day or environment and by inference based on personal experience the mood of person.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "103177",
"title": "Daguerreotype",
"section": "Section::::\"Camera obscura\".\n",
"start_paragraph_id": 56,
"start_character": 0,
"end_paragraph_id": 56,
"end_character": 514,
"text": "You will catch these pictures on a piece of white paper, which placed vertically in the room not far from that opening, and you will see all the above-mentioned objects on this paper in their natural shapes or colors, but they will appear smaller and upside down, on account of crossing of the rays at that aperture. If these pictures originate from a place which is illuminated by the sun, they will appear colored on the paper exactly as they are. The paper should be very thin and must be viewed from the back.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "17357470",
"title": "Photographic lighting",
"section": "",
"start_paragraph_id": 1,
"start_character": 0,
"end_paragraph_id": 1,
"end_character": 590,
"text": "Photographic lighting is the illumination of scenes to be photographed. A photograph simply records patterns of light, color, and shade; lighting is all-important in controlling the image. In many cases even illumination is desired to give an accurate rendition of the scene. In other cases the direction, brightness, and color of light are manipulated for effect. Lighting is particularly important for monochrome photography, where there is no color information, only the interplay of highlights and shadows. Lighting and exposure are used to create effects such as low-key and high-key.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "2671685",
"title": "Key light",
"section": "Section::::Lighting a scene.\n",
"start_paragraph_id": 6,
"start_character": 0,
"end_paragraph_id": 6,
"end_character": 345,
"text": "Using just a key light results in a high-contrast scene, especially if the background is not illuminated. A fill light decreases contrast and adds more details to the dark areas of an image. An alternative to the fill light is to reflect existing light or to illuminate other objects in the scene (which in turn further illuminate the subject).\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
2ket9d | What should I know about Arendt before reading her? The good and bad. | [
{
"answer": "Here is a [Haaretz article on what's controversial in her writings on Eichmann](_URL_1_). [That controversy was made into a film](_URL_0_).\n\nFor her writings on totalitarianism she had no access to Russian-language sources. She relies on sayings that are out of date for the points she uses them to support, often unsourced and taken, as a Google search reveals, from newspapers, Trotskyist pamphlets, etc.\n\nIn her «Reflections on Little Rock» she voiced opposition to the federally enforced desegregation of schools in the US.\n\nHer support for Heidegger was controversial because Heidegger refused to show remorse for what he had done under the Nazi regime.",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "359542",
"title": "Paul R. Ehrlich",
"section": "Section::::Reception.\n",
"start_paragraph_id": 28,
"start_character": 0,
"end_paragraph_id": 28,
"end_character": 1157,
"text": "Dan Gardner argues that Ehrlich has been insufficiently forthright in acknowledging errors he made, while being intellectually dishonest or evasive in taking credit for things he claims he got \"right\". For example, he rarely acknowledges the mistakes he made in predicting material shortages, massive death tolls from starvation (as many as one billion in the publication \"Age of Affluence\") or regarding the disastrous effects on specific countries. Meanwhile, he is happy to claim credit for \"predicting\" the increase of AIDS or global warming. However, in the case of disease, Ehrlich had predicted the increase of a disease based on overcrowding, or the weakened immune systems of starving people, so it is \"a stretch to see this as forecasting the emergence of AIDS in the 1980s.\" Similarly, global warming was one of the scenarios that Ehrlich described, so claiming credit for it, while disavowing responsibility for failed scenarios is a double standard. Gardner believes that Ehrlich is displaying classical signs of cognitive dissonance, and that his failure to acknowledge obvious errors of his own judgement render his current thinking suspect.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "7871755",
"title": "Herbert Schildt",
"section": "Section::::Reception.\n",
"start_paragraph_id": 14,
"start_character": 0,
"end_paragraph_id": 14,
"end_character": 301,
"text": "Schildt's books have a reputation for being riddled with errors. Their technical accuracy has been challenged by many reviewers, including ISO C committee members Peter Seebach and Clive Feather, C FAQ author Steve Summit, and numerous \"C Vu\" reviewers from the Association of C and C++ Users (ACCU).\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "52590",
"title": "The Population Bomb",
"section": "Section::::Criticisms.:Predictions.\n",
"start_paragraph_id": 22,
"start_character": 0,
"end_paragraph_id": 22,
"end_character": 685,
"text": "Journalist Dan Gardner has criticized Ehrlich both for his overconfident predictions and his refusal to acknowledge his errors. \"In two lengthy interviews, Ehrlich admitted making not a single major error in the popular works he published in the late 1960s and early 1970s … the only flat-out mistake Ehrlich acknowledges is missing the destruction of the rain forests, which happens to be a point that supports and strengthens his world view—and is therefore, in cognitive dissonance terms, not a mistake at all. Beyond that, he was by his account, off a little here and there, but only because the information he got from others was wrong. Basically, he was right across the board.\"\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "1178580",
"title": "Andrew Weil",
"section": "",
"start_paragraph_id": 5,
"start_character": 0,
"end_paragraph_id": 5,
"end_character": 689,
"text": "Weil has been criticized for specific cases where he has appeared to reject aspects of evidence-based medicine, or promote unverified beliefs; and critiques by scientific watchdog organizations for his failing to disclaim in cases of his writings that have had connections to his own commercial interests, as well as for his and his peers downplaying social, structural, and environmental factors that contribute to the etiology of disease in the West, and for the clear component of entrepreneurialism associated with his establishing his brand of health care services and products. He refused to be interviewed by \"Frontline\" for their January 19, 2016 episode about health supplements.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "2640326",
"title": "Nigel Slater",
"section": "Section::::Writing.\n",
"start_paragraph_id": 14,
"start_character": 0,
"end_paragraph_id": 14,
"end_character": 420,
"text": "As he told \"The Observer\", \"The last bit of the book is very foody. But that is how it was. Towards the end I finally get rid of these two people in my life I did not like [his father and stepmother, who had been the family's cleaning lady]—and to be honest I was really very jubilant—and thereafter all I wanted to do was cook.\" Slater's negative portrayal of his stepmother is challenged, however, by his stepsisters.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "95184",
"title": "Hannah Arendt",
"section": "Section::::Work.:Arendt and the Eichmann trial (1961–1963).:Reception.\n",
"start_paragraph_id": 127,
"start_character": 0,
"end_paragraph_id": 127,
"end_character": 1073,
"text": "Arendt was profoundly shocked by the response, writing to Karl Jaspers \"People are resorting to any means to destroy my reputation ... They have spent weeks trying to find something in my past that they can hang on me\". Now she was being called arrogant, heartless and ill-informed. She was accused of being duped by Eichmann, of being a \"self-hating Jewess\", and even an enemy of Israel. Her critics included The Anti-Defamation League and many other Jewish groups, editors of publications she was a contributor to, faculty at the universities she taught at and friends from all parts of her life. Her friend Gershom Scholem, a major scholar of Jewish mysticism, broke off relations with her, publishing their correspondence without her permission. Arendt was criticized by many Jewish public figures, who charged her with coldness and lack of sympathy for the victims of the Holocaust. Because of this lingering criticism neither this book nor any of her other works were translated into Hebrew until 1999. Arendt responded to the controversies in the book's Postscript.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "1885160",
"title": "Cakes and Ale",
"section": "Section::::Plot summary.\n",
"start_paragraph_id": 7,
"start_character": 0,
"end_paragraph_id": 7,
"end_character": 582,
"text": "The story relates Ashenden's recollections of his past associations with the Driffields, especially Rosie. Due to his intimate association with her he hesitates to reveal how much information he will divulge to Driffield's second wife and Kear, who ostensibly wants a \"complete\" picture of the famous author, but who routinely glosses over the untoward stories that might upset Driffield's surviving wife. Ashenden holds the key to the deep mystery of love, and the act of love, in the life of each character, as he recounts a history of creativity, infidelity and literary memory.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
4wrfwu | the relationship between the legislative, judicial and executive branches of the us government. | [
{
"answer": "They're mutually independent branches of government that have many different responsibilities and, generally, have the capacity to limit the actions of one another.",
"provenance": null
},
{
"answer": "The legislative branch creates laws.\n\nThe executive branch approves laws (and is allowed to propose new ones).\n\nThe judicial branch \"maintains neutrality\" by keeping laws within the vision of our Constitution and ruling on all legal issues.",
"provenance": null
},
{
"answer": "Power is divided among the three branches of the federal government as a check against corruption. Each branch has ways of balancing out the other two, the idea being that if they fight each other somewhat, we can avoid someone gaining absolute power.\n\nThe Legislature writes the laws but has no power to enforce them.\n\nThe Executive enforces the laws but has no power to modify them. (Note that the Executive does put out regulations interpreting the laws, but largely because the Legislature delegates that authority).\n\nThe Judiciary passes judgment on the legality of the laws but has no power to enforce them. \n\nAdditionally, they each have checks, such as impeachment, appointment, or veto powers that help balance them out. This division of power means that the branches are often at odds, but also that multiple people have to work together between the branches to get things done. ",
"provenance": null
},
{
"answer": "The Legislative branch makes laws. They also have the power to impeach the president and confirm judicial appointments. It was also expected that few presidential candidates would reach 50% of the electoral vote, in which case the legislative branch would also get to pick the winner. However, the rise of political parties and campaigning meant that rarely happened.\n\nThe Executive branch enforces laws. They can write and propose legislation, but it has to go through Congress to be enacted. They nominate judges who are then confirmed by the legislative branch. They can veto a bill before it becomes law, but that can be overruled by a legislative supermajority. They have a degree of leeway in how they interpret the law and act within it, which has led to greater executive power in recent decades. The Vice President also serves as president of the Senate and can break tied votes.\n\nThe Judicial branch passes judgement on legality, and in the case of the Supreme Court on the legality of laws themselves with respect to the Constitution. They have the final say and can overrule the other two branches, but are appointed by those same other two branches.\n\nThere are various other little bits (like Congress having the power to declare war, but the president being in charge of commanding, but Congress basically relinquishing that power because they don't want it for political reasons, etc.), but that's the basic gist.",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "32021",
"title": "Politics of the United States",
"section": "",
"start_paragraph_id": 2,
"start_character": 0,
"end_paragraph_id": 2,
"end_character": 728,
"text": "The executive branch is headed by the president and is formally independent of both the legislature and the judiciary. The cabinet serves as a set of advisers to the president. They include the vice president and heads of the executive departments. Legislative power is vested in the two chambers of Congress, the Senate and the House of Representatives. The judicial branch (or judiciary), composed of the Supreme Court and lower federal courts, exercises judicial power. The judiciary's function is to interpret the United States Constitution and federal laws and regulations. This includes resolving disputes between the executive and legislative branches. The federal government's structure is codified in the Constitution.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "19225",
"title": "Politics of Mexico",
"section": "",
"start_paragraph_id": 2,
"start_character": 0,
"end_paragraph_id": 2,
"end_character": 515,
"text": "The executive power is exercised by the executive branch, which is headed by the President, advised by a cabinet of secretaries that are independent of the legislature. Legislative power is vested upon the Congress of the Union, a two-chamber legislature comprising the Senate of the Republic and the Chamber of Deputies. Judicial power is exercised by the judiciary, consisting of the Supreme Court of Justice of the Nation, the Council of the Federal Judiciary and the collegiate, unitary and district tribunals.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "24930943",
"title": "Federal government of Mexico",
"section": "",
"start_paragraph_id": 2,
"start_character": 0,
"end_paragraph_id": 2,
"end_character": 488,
"text": "The executive power is exercised by the executive branch, which is headed by the president and his Cabinet, which, together, are independent of the legislature. Legislative power is vested upon the Congress of the Union, a bicameral legislature comprising the Senate and the Chamber of Deputies. Judicial power is exercised by the judiciary, consisting of the Supreme Court of Justice of the Nation, the Council of the Federal Judiciary, and the collegiate, unitary, and district courts.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "23365",
"title": "Politics of Pakistan",
"section": "",
"start_paragraph_id": 3,
"start_character": 0,
"end_paragraph_id": 3,
"end_character": 562,
"text": "The Government consists of three branches: executive, legislative and judicial. The Executive branch consists of the Cabinet and is led by the Prime Minister. It is totally independent of the legislative branch that consists of a bicameral parliament. The Upper House is the Senate whilst the National Assembly is the lower house. The Judicial branch forms with the composition of the Supreme Court as an apex court, alongside the high courts and other inferior courts. The judiciary's function is to interpret the Constitution and federal laws and regulations.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "33367309",
"title": "Federal government of Brazil",
"section": "Section::::Division of powers.\n",
"start_paragraph_id": 5,
"start_character": 0,
"end_paragraph_id": 5,
"end_character": 522,
"text": "Executive power is exercised by the executive, headed by the President, advised by a Cabinet of Ministers. The President is both the head of state and the head of government. Legislative power is vested upon the National Congress, a two-chamber legislature comprising the Federal Senate and the Chamber of Deputies. Judicial power is exercised by the judiciary, consisting of the Supreme Federal Court, the Superior Court of Justice and other Superior Courts, the National Justice Council and the regional federal courts.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "924152",
"title": "Elections in Mexico",
"section": "Section::::Federal Level.\n",
"start_paragraph_id": 4,
"start_character": 0,
"end_paragraph_id": 4,
"end_character": 325,
"text": "The executive branch is headed by the president, who is also the chief of state and of the army. The legislative branch consists of the Union of Congress and is divided into an upper and lower chamber. The judicial branch is headed by the Supreme Court of Justice of the Nation and does not participate in federal elections.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "3633",
"title": "Politics of Brazil",
"section": "",
"start_paragraph_id": 2,
"start_character": 0,
"end_paragraph_id": 2,
"end_character": 570,
"text": "The federal government exercises control over the central government and is divided into three independent branches: executive, legislative and judicial. Executive power is exercised by the President, advised by a cabinet. Legislative power is vested upon the National Congress, a two-chamber legislature comprising the Federal Senate and the Chamber of Deputies. Judicial power is exercised by the judiciary, consisting of the Supreme Federal Court, the Superior Court of Justice and other Superior Courts, the National Justice Council and the Regional Federal Courts.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
8l4heg | what is a power over ethernet interface module | [
{
"answer": "Some modern networking equipment can get power via the Ethernet cord along with network access. You see it in devices that may be hardwired with ethernet, but that would otherwise be difficult to get power to - like security cameras, access points and conference phones.\n\nHowever, in order for Power over Ethernet (PoE) to work, you need a router that can do PoE. If you don't have a router that supports it, you can get a PoE Interface module that connects after the router and supplies the power.",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "782836",
"title": "Power over Ethernet",
"section": "",
"start_paragraph_id": 1,
"start_character": 0,
"end_paragraph_id": 1,
"end_character": 305,
"text": "Power over Ethernet or PoE describes any of several standard or ad-hoc systems which pass electric power along with data on twisted pair Ethernet cabling. This allows a single cable to provide both data connection and electric power to devices such as wireless access points, IP cameras, and VoIP phones.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "507910",
"title": "DECstation",
"section": "Section::::DECstation RISC workstations.:Models.:Personal DECstation 5000 Series.:I/O subsystem.\n",
"start_paragraph_id": 47,
"start_character": 0,
"end_paragraph_id": 47,
"end_character": 786,
"text": "The I/O subsystem provides the system with an 8-bit single-ended SCSI bus, 10 Mbit/s Ethernet, serial line, the Serial Desktop Bus and analog audio. SCSI is provided by a NCR 53C94 ASC (Advanced SCSI Controller). Ethernet is provided by an AMD Am7990 LANCE (Local Area Network Controller for Ethernet) and an AMD Am7992 SIA (Serial Interface Adapter) that implements the AUI interface. A single serial port capable of 50 to 19,200 baud with full modem control capability is provided by a Zilog Z85C30 SCC (Serial Communications Controller). Analog audio and ISDN support is provided by an AMD 79C30A DSC (Digital Subscriber Controller). These devices are connected to IOCTL ASIC via two 8-bit buses or one 16-bit bus. The ASIC interfaces the subsystem to the TURBOchannel interconnect.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "5761747",
"title": "DEC 3000 AXP",
"section": "Section::::Description.:I/O subsystem.\n",
"start_paragraph_id": 19,
"start_character": 0,
"end_paragraph_id": 19,
"end_character": 1040,
"text": "The I/O subsystem provides the DEC 3000 AXP with Ethernet, ISDN and audio capability, four serial lines, and a real time clock. The I/O subsystem is interfaced to TURBOchannel by the IOCTL ASIC, which also implements two 8-bit buses, known as IOBUS HI and IOBUS LO, to which the I/O devices connect to. These two 8-bit buses can be combined to serve as one 16-bit bus to provide an I/O device with more bandwidth. Ethernet is provided by an AMD Am7990 LANCE (Local Area Network Controller for Ethernet), an AMD Am7992 SIA (Serial Interface Adapter) that implements the 10BASE-T or AUI Ethernet interface, and an ESAR (Ethernet Station Address ROM) that stores the MAC address. The Am7990 is the only I/O device in the subsystem to have a 16-bit interface to the IOCTL ASIC. ISDN and telephone-quality audio is provided by an AMD Am79C30A DSC (Digital Subscriber Controller). The four serial lines are provided by two Zilog Z85C30 SCC (Serial Communications Controller) dual UARTs, and the real time clock is a Dallas Semiconductor DS1287A.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "1942195",
"title": "Local Management Interface",
"section": "Section::::Carrier Ethernet.\n",
"start_paragraph_id": 11,
"start_character": 0,
"end_paragraph_id": 11,
"end_character": 282,
"text": "Ethernet Local Management Interface (E-LMI) is an Ethernet layer operation, administration, and management (OAM) protocol defined by the Metro Ethernet Forum (MEF) for Carrier Ethernet networks. It provides information that enables auto configuration of customer edge (CE) devices.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "782836",
"title": "Power over Ethernet",
"section": "Section::::Terminology.:Power sourcing equipment.\n",
"start_paragraph_id": 30,
"start_character": 0,
"end_paragraph_id": 30,
"end_character": 339,
"text": "\"Power sourcing equipment\" (PSE) are devices that provide (\"source\") power on the Ethernet cable. This device may be a network switch, commonly called an \"endspan\" (IEEE 802.3af refers to it as \"endpoint\"), or an intermediary device between a non-PoE-capable switch and a PoE device, an external PoE \"injector\", called a \"midspan\" device.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "25757910",
"title": "Energy-Efficient Ethernet",
"section": "",
"start_paragraph_id": 1,
"start_character": 0,
"end_paragraph_id": 1,
"end_character": 328,
"text": "Energy-Efficient Ethernet (EEE) is a set of enhancements to the twisted-pair and backplane Ethernet family of computer networking standards that reduce power consumption during periods of low data activity. The intention is to reduce power consumption by 50% or more, while retaining full compatibility with existing equipment.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "40584505",
"title": "Virtual Distributed Ethernet",
"section": "",
"start_paragraph_id": 1,
"start_character": 0,
"end_paragraph_id": 1,
"end_character": 508,
"text": "Virtual Distributed Ethernet is a set of programs to provide virtual software-defined Ethernet Network Interface Controllers across multiple devices, typically computers, which are either virtual or physical. It forms part of the Virtual Square project from the Italian Bologna University whose code is available on public servers using free software licenses, mostly GPLv2. Researchers at the Department of Mathematics and Computer Science, Xavier University, Cincinnati OH are also working on the project.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
2ac9c0 | what happens to money lost due to depreciation? | [
{
"answer": "The $5000 wasn't lost. The guy you bought the car from has it.",
"provenance": null
},
{
"answer": "It evaporated. The money didn't *vanish* it just left your hands and went back into the system. Like a bucket of water left in the sun. We all just collectively agree it's not as useful or interesting as it used to be and the extra $5,000 you paid for it is off being useful and interesting elsewhere.\n\nThe key to understanding economic systems is that it's a big circle that doesn't really exist. My car has value because everyone believes it does and it's worth less than your car.. because everyone believes that's the case.\n\nThere is no beginning or end, no big pot of money, no definable physical trait of '*value*'. It's not anchored in an observable physical universe, if you dig something out the ground without civilisation around you can't *measure* its value like you can measure it's weight. All you can say is it was *y* number of times harder to find than milk is but less useful (or whatever) and work from there. It's all relative.\n\nYour car is worth $5,000 because everyone says it is. Doesn't really matter what it used to be worth because it was a made up number then and it's a made up number now. However as long as you can sell it for those made up numbers and use them to buy other stuff for made up numbers.. it's useful.",
"provenance": null
},
{
"answer": "You didn't lose money, you lost value. The car you originally purchased for $10k is only worth half of that now; the rest was lost over time, gradually, due to various factors (entropic effects on the body of the vehicle [rust, corrosion, breakdowns], the fact that newer vehicles have become available since that have better features). ",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "291268",
"title": "Depreciation",
"section": "",
"start_paragraph_id": 6,
"start_character": 0,
"end_paragraph_id": 6,
"end_character": 211,
"text": "Depreciation has been defined as the diminution in the utility or value of an asset. Depreciation is a non cash expense. It does not result in any cash outflow. Causes of depreciation are natural wear and tear.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "18934838",
"title": "Asset",
"section": "Section::::Characteristics.:Tangible assets.\n",
"start_paragraph_id": 44,
"start_character": 0,
"end_paragraph_id": 44,
"end_character": 204,
"text": "Depreciation is applied to tangible assets when those assets have an anticipated lifespan of more than one year. This process of depreciation is used instead of allocating the entire expense to one year.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "291268",
"title": "Depreciation",
"section": "Section::::Accounting concept.:Effect on cash.\n",
"start_paragraph_id": 29,
"start_character": 0,
"end_paragraph_id": 29,
"end_character": 442,
"text": "Depreciation expense does not require current outlay of cash. However, since depreciation is an expense to the P&L account, provided the enterprise is operating in a manner that covers its expenses (e.g. operating at a profit) depreciation is a source of cash in a statement of cash flows, which generally offsets the cash cost of acquiring new assets required to continue operations when existing assets reach the end of their useful lives.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "291268",
"title": "Depreciation",
"section": "Section::::Accounting concept.:Accumulated depreciation.\n",
"start_paragraph_id": 31,
"start_character": 0,
"end_paragraph_id": 31,
"end_character": 1107,
"text": "While depreciation expense is recorded on the income statement of a business, its impact is generally recorded in a separate account and disclosed on the balance sheet as accumulated under fixed assets, according to most accounting principles. Accumulated depreciation is known as a contra account, because it separately shows a negative amount that is directly associated an accumulated depreciation account on the balance sheet, depreciation expense is usually charged against the relevant asset directly. The values of the fixed assets stated on the balance sheet will decline, even if the business has not invested in or disposed of any assets. The amounts will roughly approximate fair value. Otherwise, depreciation expense is charged against accumulated depreciation. Showing accumulated depreciation separately on the balance sheet has the effect of preserving the historical cost of assets on the balance sheet. If there have been no investments or dispositions in fixed assets for the year, then the values of the assets will be the same on the balance sheet for the current and prior year (P/Y).\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "7977203",
"title": "Engineering economics",
"section": "Section::::Examples of usage.:Depreciation and Valuation.\n",
"start_paragraph_id": 43,
"start_character": 0,
"end_paragraph_id": 43,
"end_character": 754,
"text": "The fact that assets and material in the real world eventually wear down, and thence break, is a situation that must be accounted for. Depreciation itself is defined by the decreasing of value of any given asset, though some exceptions do exist. Valuation can be considered the basis for depreciation in a basic sense, as any decrease in \"value\" would be based on an \"original value\". The idea and existence of depreciation becomes especially relevant to engineering and project management is the fact that capital equipment and assets used in operations will slowly decrease in worth, which will also coincide with an increase in the likelihood of machine failure. Hence the recording and calculation of depreciation is important for two major reasons.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "1698333",
"title": "Amortization (business)",
"section": "Section::::Amortization of intangible assets.\n",
"start_paragraph_id": 11,
"start_character": 0,
"end_paragraph_id": 11,
"end_character": 407,
"text": "Depreciation is a corresponding concept for tangible assets. Methodologies for allocating amortization to each accounting period are generally the same as these for depreciation. However, many intangible assets such as goodwill or certain brands may be deemed to have an indefinite useful life and are therefore not subject to amortization (although goodwill is subjected to an impairment test every year).\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "415961",
"title": "Fractional-reserve banking",
"section": "Section::::History.\n",
"start_paragraph_id": 7,
"start_character": 0,
"end_paragraph_id": 7,
"end_character": 415,
"text": "If creditors (note holders of gold originally deposited) lost faith in the ability of a bank to pay their notes, however, many would try to redeem their notes at the same time. If, in response, a bank could not raise enough funds by calling in loans or selling bills, the bank would either go into insolvency or default on its notes. Such a situation is called a bank run and caused the demise of many early banks.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
aqoath | How was it like on Earth immediately after the impact that formed the moon? | [
{
"answer": "I think the prevailing theory is that the impact was so energetic that within hours the entire surface of the planet was raised to somewhere in the neighborhood of 3,000 degrees Celsius, meaning that it was entirely molten rock and a great deal of vaporized rock as well. If there was liquid water on the Earth before the impact, it was certainly vaporized, so I guess that means sea levels would have risen, in so far as they were all up in the air somewhere.\n\nI imagine that for a short while the Earth looked like a little, tiny star. Not even remotely as bright since the light would have been black body radiation from the molten rock, rather than light emitted from fusion reactions, but still, a cute, tiny little star. \n\nCheck this out: _URL_0_\n\nEdit: FYI, the real jam starts right around 5:50",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "11964643",
"title": "Alastair G. W. Cameron",
"section": "Section::::Career.:Formation of the Moon.\n",
"start_paragraph_id": 17,
"start_character": 0,
"end_paragraph_id": 17,
"end_character": 828,
"text": "Samples brought back from the Apollo program showed that the Moon was composed of the same material as the mantle of the Earth. This surprising result was still unexplained in the early 1970s, when Cameron began work on an explanation of the Moon's origins. He theorized that the formation of the Moon was the result of a tangential impact of an object at least the size of Mars on the early Earth. In this model, the outer silicates of the body hitting the Earth would be vaporized, whereas a metallic core would not. The more volatile materials that were emitted during the collision would escape the Solar System, whereas silicates would tend to coalesce. Hence, most of the collisional material sent into orbit would consist of silicates, leaving the coalescing Moon deficient in iron and volatile materials, such as water.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "11603215",
"title": "Geological history of Earth",
"section": "Section::::Precambrian.:Hadean Eon.\n",
"start_paragraph_id": 11,
"start_character": 0,
"end_paragraph_id": 11,
"end_character": 850,
"text": "Earth was initially molten due to extreme volcanism and frequent collisions with other bodies. Eventually, the outer layer of the planet cooled to form a solid crust when water began accumulating in the atmosphere. The Moon formed soon afterwards, possibly as a result of the impact of a large planetoid with the Earth. Some of this object's mass merged with the Earth, significantly altering its internal composition, and a portion was ejected into space. Some of the material survived to form an orbiting moon. More recent potassium isotopic studies suggest that the Moon was formed by a smaller, high-energy, high-angular-momentum giant impact cleaving off a significant portion of the Earth. Outgassing and volcanic activity produced the primordial atmosphere. Condensing water vapor, augmented by ice delivered from comets, produced the oceans.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "7592",
"title": "Caldera",
"section": "Section::::Extraterrestrial calderas.:The Moon.\n",
"start_paragraph_id": 22,
"start_character": 0,
"end_paragraph_id": 22,
"end_character": 1038,
"text": "The Moon has an outer shell of low-density crystalline rock that is a few hundred kilometers thick, which formed due to a rapid creation. The craters of the Moon have been well preserved through time and were once thought to have been the result of extreme volcanic activity, but actually were formed by meteorites, nearly all of which took place in the first few hundred million years after the Moon formed. Around 500 million years afterward, the Moon's mantle was able to be extensively melted due to the decay of radioactive elements. Massive basaltic eruptions took place generally at the base of large impact craters. Also, eruptions may have taken place due to a magma reservoir at the base of the crust. This forms a dome, possibly the same morphology of a shield volcano where calderas universally are known to form. Although caldera-like structures are rare on the Moon, they are not completely absent. The Compton-Belkovich Volcanic Complex on the far side of the Moon is thought to be a caldera, possibly an ash-flow caldera.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "19331",
"title": "Moon",
"section": "Section::::Formation.\n",
"start_paragraph_id": 12,
"start_character": 0,
"end_paragraph_id": 12,
"end_character": 252,
"text": "The prevailing hypothesis is that the Earth–Moon system formed after an impact of a Mars-sized body (named \"Theia\") with the proto-Earth (giant impact). The impact blasted material into Earth's orbit and then the material accreted and formed the Moon.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "9228",
"title": "Earth",
"section": "Section::::Chronology.:Formation.\n",
"start_paragraph_id": 11,
"start_character": 0,
"end_paragraph_id": 11,
"end_character": 554,
"text": "A subject of research is the formation of the Moon, some 4.53 Bya. A leading hypothesis is that it was formed by accretion from material loosed from Earth after a Mars-sized object, named Theia, hit Earth. In this view, the mass of Theia was approximately 10 percent of Earth, it hit Earth with a glancing blow and some of its mass merged with Earth. Between approximately 4.1 and , numerous asteroid impacts during the Late Heavy Bombardment caused significant changes to the greater surface environment of the Moon and, by inference, to that of Earth.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "1621514",
"title": "Lunar craters",
"section": "Section::::History.\n",
"start_paragraph_id": 5,
"start_character": 0,
"end_paragraph_id": 5,
"end_character": 385,
"text": "Grove Karl Gilbert suggested in 1893 that the Moon's craters were formed by large asteroid impacts. Ralph Baldwin in 1949 wrote that the Moon's craters were mostly of impact origin. Around 1960, Gene Shoemaker revived the idea. According to David H. Levy, Gene \"saw the craters on the Moon as logical impact sites that were formed not gradually, in eons, but explosively, in seconds.\"\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "48916027",
"title": "2016 in science",
"section": "Section::::Events.:January.\n",
"start_paragraph_id": 42,
"start_character": 0,
"end_paragraph_id": 42,
"end_character": 225,
"text": "BULLET::::- Research by UCLA provides further evidence that the Moon was formed by a violent, head-on collision between the early Earth and a “planetary embryo” called Theia, roughly 100 million years after the Earth formed.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
52kwis | why were old movies like "gone with the wind" and "wizard of oz" in color when movies were still in black and white until the late 50s/early 60s | [
{
"answer": "Color was expensive back then, but not impossible.\n\nMany studios would make black and white movies to keep the budget down until the technology would finally be cheap enough to be economical.",
"provenance": null
},
{
"answer": "These are both Technicolor movies, a painstakingly expensive process Hollywood adopted for a long period. Once the great depression hit the number of full-color films dropped significantly. Hollywood had largely moved away from it due to the expense, but Disney played a role in bringing it back, utilizing it for Snow White in 1938 which became the top-grossing film. This attracted lots of big studios back to it to use it for live-action.",
"provenance": null
},
{
"answer": "Technicolor was cumbersome and required expensive specialty cameras and lighting. It was at first only suitable for big-budget pictures, sort of like 3D today.\n\nAlso, many directors preferred black and white for stylistic reasons. This is even true today; look at Schindler's List.\n\nEdit: Jesus, I get it. Schindler's List is over twenty years old. You're all very clever for pointing that out.\n\nMy point was that it was made in black and white for stylistic purposes even though color film was cheap and had long become the norm.",
"provenance": null
},
{
"answer": "Same reason there are electric cars today, but most drive gas still. The tech is there, but the cost and comfort level is not.",
"provenance": null
},
{
"answer": "Night of the Living Dead was also in black and white to save money even though movies in color were becoming a lot more common simply because it was cheaper. He could use chocolate syrup for blood and nobody would've known the difference. ",
"provenance": null
},
{
"answer": "Color was basically like what 3D is now. That is to say, it was something special and expensive which was only used for big-budget movies.\n\nIncidentally, you'll notice that old color films tend to have a lot of bright and primary colors. This was done to \"justify\" the film being in color, similar to how a 3D movie will include a lot of things coming at the camera to \"justify\" it being in 3D.",
"provenance": null
},
{
"answer": "ELI5 version of why it was expensive:\n\nAll film was black and white. A Prism split the light to make: \n\na black-and-white film of the red things, \n\na black-and-white film of the blue things, and \n\na black-and-white film of the green things. \n\nEach film was dipped in a big vat to dye it the right color. \n\nWhen you stacked the dyed films on top of each other and glued them together, you could see all the colors at once!\n\nThat was a lot of work. ",
"provenance": null
},
{
"answer": "Color was the 3D of that time period. Doable but expensive. Most films aren't in color for the same reason most films now aren't in 3D.",
"provenance": null
},
{
"answer": "It was possible back then but expensive, as time goes on technology becomes cheaper and more advanced.\n\n\nThink of it like 3D or an AAA game, some companies wanna make it more fancy and spend more budget on it but in return you most consider that it makes an income for the company. \n",
"provenance": null
},
{
"answer": "When Television started to gobble up a portion of the Film audience, Film repeatedly fired back with a new innovation to draw back audience members from Television. Each time, they used an innovation that Television couldn't do, color, then 3D, then cinemascope (2.85 aspect ratio). But, film studios quickly realized that color was too expensive and after a few years stopped doing it. ",
"provenance": null
},
{
"answer": "I wonder if many people back then got sick of color movies as many are now of 3d movies and glasses and though it's not worth it",
"provenance": null
},
{
"answer": "If colour is the equivalent of 3D today then what movies stopped using it as a gimmick? I ask because I can't be arsed with today's 3D (yesterday's on the other hand like in The Creature From the Black Lagoon...) unless it serves the story I guess.\n\nI'm also thinking there's probably something interesting to ask about the contrasts of black and white in film such as noir and how colour changes that approach (like what is neo-noir in colour? Is it acid-washed or something?).",
"provenance": null
},
{
"answer": "Price was one thing. Quality was another - while contemporary black and white film had low sensitivity and required a lot if light, color was even worse. ",
"provenance": null
},
{
"answer": "Back then shooting in color was very expensive. In the days of Wizard of Oz it was the \"the three strip technicolor process\". So for each shot it was three negatives, not just one. That right there triples you cost. Add to this the specialists needed to make it look right, triple the negative processing, and a more expensive and time intensive printing process. \n\nIt was expensive that none but the biggest productions could afford it\n\nOn the plus side nothing compares to it the old technicolor films were beautiful. ",
"provenance": null
},
{
"answer": "Off topic just a bit. I just re-watched Gone with the Wind, the other weekend for the first time as an adult. I hadn't seen the movie since I was a kid. For the time it was made, it was a spectacular film and the music was amazing. Hard to believe it was filmed in 1939. Way better than some movies today.\n\nEdited to add in the movie title.",
"provenance": null
},
{
"answer": "Related follow-up question: would it be fair to say that 'The Wizard of Oz' was the first color film many people would have seen? In other words, would the transition from black-and-white into the colorful Oz during the film have really blown people away at the time, moreso than it does when watching today?",
"provenance": null
},
{
"answer": "Very similar to why only some cartoons were computer animated while majority were old style. Like Reboot, transformers, and Donkey Kong Country. Why only one movie every couple of years were CA (toy story), and almost all kid movies are. Why HD was a rare thing, later to be 3D, and now it's 4K.\n\nIt costs a ton of money, and very few companies can manage it. As time goes on, technology becomes more efficient and common, so the prices go down because more people can do it and the machines, programs, and equipment become more affordable as well.\n",
"provenance": null
},
{
"answer": "Lot's of interesting answers. Most mundane answer is everyone had Black & White TV's not color. Well into the 1970's B & W was still common in peoples homes. When color TV's became common the B & W became the second TV for the kids or spare room.",
"provenance": null
},
{
"answer": "Colored film was very expensive and toxic to developed, at the time color film came out in 1935 by Kodak. Wasn't totally worth the Hassel to develope it most the time ",
"provenance": null
},
{
"answer": "Several reasons:\n\n* Early color processes (Technicolor) used three separate films to record red, green, and blue light components of the image. So, film costs were 3x higher for color films.\n* Technicolor used a mechanical (not optical) printing process to make the final distribution prints. This was much slower and more expensive than making distribution positives from a B & W negative.\n* Single-strip color film processes didn't become readily available until around 1950, and even then both film stock and processing were still more expensive than B & W. B & W film uses two chemicals - a developer and a \"fixer\". Color negative film processes require a more expensive color developer, a bleaching agent, and a fixer.\n\nThe result - the more expensive process got used on A-list titles and roadshow movies. As color got cheaper, more movies were shot in it. \"B\" movies and the rest were shot on cheaper B & W.",
"provenance": null
},
{
"answer": "The camera equipment was also very large and very heavy (about 400 lbs after you include the sound dampening box the camera lived inside), which made them difficult to use and impractical for anything but the largest film shoots. This video gives you a good idea of the size of these things: \n\n_URL_0_",
"provenance": null
},
{
"answer": "I read somewhere that it had to do with money and resources being used for WWII. GWTW and Wizard of OZ were pre-war. _URL_0_\n",
"provenance": null
},
{
"answer": "For a while, color films were left for films that had \"fantasy.\" Black and white films were more \"realistic\" for filmmakers and filmgoers at the time. This is why Dorothy starts out in Kansas in black and white, and when she gets to the magical world of\nOz, everything turns to color. Ironic, but clearly that has changed by now!\n\nAlso, color film is more expensive.\n\nedit: magical world of Oz not Is lol",
"provenance": null
},
{
"answer": "Those two movies were shot with expensive and complicated three strip technicolor process.\n\nB & W was cheaper, and considered for more mature for films by the 1960s.",
"provenance": null
},
{
"answer": "Technicolor was a royal pain in the ass in the early days and only really expensive movies could do it. It's kind of like how Jurassic Park was able to do really good CGI Dinosaurs 1993, but it was quite a while after that until it became more common place.",
"provenance": null
},
{
"answer": "In addition to the expense and complications of creating it like other have already said, the fact is also just that... people didn't have color TVs. They didn't exist, and once they did they were stupidly expensive. Even until the 1960s black and white TVs were pretty popular. I know my dad once said how dismayed he was to realize Mr. Spock wore blue.",
"provenance": null
},
{
"answer": "To go off of this, when a black and white movie has color added to it, is this purely an additive process or is there any part of this that can be derived from the original version? ",
"provenance": null
},
{
"answer": "As a regular movie goer, I would say that it was used to help enhance the magnitude, emotions, etc. of those movies. Big things happened in them.\n",
"provenance": null
},
{
"answer": "Color was like what Imax or 3D was to us around the time that Avatar came out. Possible to film in, but expensive and highly technical. ",
"provenance": null
},
{
"answer": "It was all an expensive process. When shooting a color film in their day, that would be capture the movie on three different rolls film instead of one. One film strip for red, green, and blue. One the film was finished, during post production they run the film reels through their proper color to stain them red, green, or blue. Once stained, they'd overlay all three reels of film on top of each other and bam! They had their colored film. Obviously this process was intricate and took a lot of time and man power to accomplish, which meant budget for a movie immediately skyrocketed if was okayed to be in color. \n\nIt should also be noted that while color film, well, was in color. Black and white film looked better. This sharper image was mainly due to the fact of how long people were perfecting the black and white image long before a commercially viable color film was produced and black and white film continued to be the dominant look until the late 50s for this reason, when color filmstocks became cheaper and widescreens became industry standard looks. ",
"provenance": null
},
{
"answer": "Most of hitchcock's movies were in black and white even though he liked color, the problem was the difficulty in getting accurate color in the 50s and 60s made it much cheaper and safer to use black and white.",
"provenance": null
},
{
"answer": "In the future people will ask the same thing regarding why some producers were still delivering in 2K in 2016",
"provenance": null
},
{
"answer": "Why was Avatar in 3D but other movies are not that are made even more recent. Same answer applies to black and white.",
"provenance": null
},
{
"answer": "Adding on to past answers, technicolor wasn't just expensive to produce in terms of cameras, but it was also extremely expensive because in order to absorb the red, green, and blue color you have to use three times the lighting normally used.",
"provenance": null
},
{
"answer": "For the same reason movies today are in 3D: it's a new technology that gives the viewer a big new experience that can't reasonably be approximated anywhere else--and that sells tickets. The studios were willing to spend big money on the gamble that it would pay off for these \"epic\" new projects.\n\nExcept that with Wizard of Oz, it sadly didn't. (At the time of release.)",
"provenance": null
},
{
"answer": "Because color was the equivalent of 3D technology today. Except for the fact that color was an improvement on the technology that customers actually wanted, whereas 3D is just bullshit designed to make them money.",
"provenance": null
},
{
"answer": "My Theatre and Film appreciation class covered the Wizard of Oz bit in class last year. Back then color movies were still relatively new and seemed fake and were used for fantasy. The black and white Kansas is the real world while the color world of Oz is fantasy.",
"provenance": null
},
{
"answer": "Why are movies still in 2D when 3D was invented years ago?",
"provenance": null
},
{
"answer": "It was much more expensive and also regarded as kind of a gimmick. It was the 3D of its time.",
"provenance": null
},
{
"answer": "For the same reason movies nowadays aren't all shot in 8K or 3D. Equipment availability/intrest/director preferences.",
"provenance": null
},
{
"answer": "Also:\n\nCorrect me if I'm wrong, but, wasn't the image-quality for black & white film superior to that of color film, back then?\n\nLike, especially with still-photos, didn't a lot of photographers continue using black & white film pretty far into the color-era (of still photos) like deep into the 1970's, for this reason?\n\n(I know some of said people were doing it for artistic reasons (as in, just simply wanted to photos to be in black & white, and not color, because artistically they wanted to capture the thing in a monochromatic way, not for image-quality reasons, but just artistically-speaking or whatever. But, I was under the impression that in *addition* to this, there were also a lot of people who were doing it not for artistic-style reasons (or not purely for that reason alone) but rather, because it yielded blatantly, noticeably superior image-quality (in terms of resolution/clarity/etc type of aspects, I mean).\n\nI assume this factor would also be the case in regards to movie-film, in addition to still-photography film?\n\nAs for me personally, I've always felt that as far as my own eyeballs can tell, this does seem to be the case. Color movies in the black & white era seem to have noticeably lower image-quality than high-end black & white movies that were made in the same year, by comparison. (I think it's already noticeable at the 1080p/4k level on a tv screen, but I remember my father talking about it, since he had seen the actual optical-projection versions of the movies, in theaters, back at that time, and he said the difference in image quality in equivalency seemed to be pretty enormous).",
"provenance": null
},
{
"answer": "The same reason we have 3D production technology, but hardly ever use it. It's crazy complicated and expensive, so it's reserved for the blockbusters which will guarantee a return on investment.",
"provenance": null
},
{
"answer": "\"Why were old movies like Avatar and Cloudy With A Chance Of Meatballs in 3D when movies were still in 2D until the late 2020s\"",
"provenance": null
},
{
"answer": "One reason I don't see being mentioned very much was for aesthetic purposes. Film Noir, suspense, and mystery films were very popular in the 40's - 50's, and many of those films had high budgets which could have allowed for color (particularly in the 50's) but chose to forgo color to set a different mood.\n\n\nLook at films like Touch of Evil or Sunset Boulevard. They would be completely different in color.\n\nSome directors made those type of films but did decide to use color. Symbolism of the color Green in Hitchcock's Vertigo for example. \n\n\nFinally sometimes color wasn't used so the film could get a lighter rating or not be banned. Think if in 1960 Psycho had been color...\n\n\nFor the industry as a whole YES budget was the main constraint but for the best directors black and white was just another artistic device for their films.",
"provenance": null
},
{
"answer": "Also, they redid some of the classics in color when the originals were still in black and white. I don't know if this applies to these. ",
"provenance": null
},
{
"answer": "Many people thought colored moving pictures was a gimmick and detracted from the perpose of film making. To many the perpose was to convey meaning and emotion. This is why art films in history particularly, stayed with the traditional black and white. ",
"provenance": null
},
{
"answer": "Color was added afterwards. Dorothy's shoes in The Wizard of Oz were originally blue, not red. ",
"provenance": null
},
{
"answer": "No one seems to have mentioned Warner Bros, who decided to stick with black and white until the process became cheaper. \n\nAt the time they were known for darker, grittier films, so they pushed on with film noir and made some classics! I'd recommend Kubrick's 'The Killing'.",
"provenance": null
},
{
"answer": "GWTW and Wizard of Oz were filmed at around the same time, when there were only 7 technicolor cameras ever made. The big fire scene in GWTW even required all 7 to be used at once. \nAnd I'm not sure where I saw this and I can't seem to find it anywhere... But I remember reading that during some overlap of the two films, The Wizard of Oz had a majority of the color cameras, and some parts of GWTW had to manually be colored in. ",
"provenance": null
},
{
"answer": "Because songs like \"follow the yellow brick road\"' would have been completely baffling to the viewer in black and white",
"provenance": null
},
{
"answer": "Same reason why some movies are 3D now and some are not. It's expensive to be at the cutting edge.",
"provenance": null
},
{
"answer": "Because those films felt it was worth it to spend the extra money to film in color where other films chose black and white because they didn't believe much would be gained by shooting in color versus a much cheaper black and white option.\n\nWe've had the option to shoot in color for a good while before it became popular, it just wasn't cheap enough. We have similar technologies and options now that are dreams of technogeeks and the such, but it's just extremely expensive. ",
"provenance": null
},
{
"answer": "Not sure if this has been mentioned yet, if so, I apologise.\n\n'The wizard of Oz' started filming with B & W when Technicolor became both affordable and widely available.\n\nSince it was one of THE big spend items for the year (think a Star Wars or Avatar 2) it would have been fucked if it didn't use colour.\n\nStroke of genius - Oz is in colour, Kansas isn't.\n\nSimple, but extraordinarily effective. And everyone lauded the Wachowski sisters for their green tinge in the matrix.",
"provenance": null
},
{
"answer": "Color was expensive. One example from TV is Bewitched, which started out in black and white before becoming popular and profitable, which prompted the change to color. ",
"provenance": null
},
{
"answer": "money and time. same reason movies can still be awesome or only worthy of a rental. sometimes good writing and acting can overcome the black and white factor, just like bad writing could be balanced by a little added color. ",
"provenance": null
},
{
"answer": "Aside from it being expensive, many directors turned their nose up to colour for being too low brow and cheap entertainment. It was seen very much like Transformers 3...D in iMax is today.",
"provenance": null
},
{
"answer": "B & W is easier and cheaper than color. In 1994, Kevin Smith shot *Clerks* in B & W, not because he wanted to, but because didn't have much money to make the movie.\n\nHere are some reasons why B & W is easier:\n\n**Color casts:** Every light source has a different color. Sunlight is \"white\", a regular lightbulb makes \"white\" light, and fluorescent lights make \"white\" light, but they're not actually the same white. Each light is a different color. If you take the same film everywhere, the pictures taken outside will look blue and the pictures taken inside will look yellow. The worst one is fluorescent lighting, which can show up as a hideous green color. With B & W film you don't really need to worry about it.\n\nOne of the hardest parts is that the color of sunlight changes during the day. If it takes all day to shoot a scene, then different parts of the scene will have different colors, and they won't look right together. So you have to be careful to monitor the weather and the time of day when you're shooting. Even a few clouds can change the color of a picture dramatically. This is even a problem in the studio, because when you turn on the lights in a movie studio, they change color as they warm up.\n\nYour eyes naturally adapt to color changes so you don't notice them very much, but color film doesn't adapt like your eyes do. For color film, you have to pay close attention, and use color filters to adjust the color of the light to be just right.\n\nWith B & W film, you can even get away with shooting \"nighttime\" scenes in broad daylight, and many studios did this. This is not really possible with color film.\n\n**Technicolor:** Technicolor is actually made using three different strips of B & W film. Instead of loading one piece of film into the camera, you load three. The camera is a monster, and it has prisms and filters inside so it can split the color light into three different B & W images. 
To make the final movie for projection, you have to combine the three film strips back into one, which is tedious and expensive.\n\nThe prisms and filters in a technicolor camera were also inefficient. It took a lot of light in order to make a technicolor film. It took so much light that you had to shoot outside or with bright studio lights. Bright lights are expensive, they make the studio hot, and they make the actors uncomfortable. You can forget about shooting technicolor at night, it just won't work.\n\n**Monopack film:** Later, in the 1950s, color \"monopack\" film became available, using processes like ECN-1. This made it possible to film color using ordinary cameras, the same cameras you use for B & W. However, this film was still more difficult and expensive to process. Color film is also more sensitive to temperature. With B & W film, if you process it at the wrong temperature, you can compensate by processing for a different amount of time, and the picture will mostly be the same. With color film, if you process it at the wrong temperature, you might get different colors.\n\nB & W film is still more sensitive to light than color film, even today. This is because each color film is made out of three B & W films stacked on top of each other, and each film only receives a part of the light.\n\n**Skills:** Even when color was available, not everyone knew how to use it. People had to learn how to use filters, how to measure color during the day, how to pay attention to the weather. New artistic decisions had to be made: \"nighttime\" in a color film might mean adding blue filters to the light, \"daytime\" indoors might mean putting dark orange filters over the windows. It took many years before people making movies learned these skills.\n\nThe same thing happened with digital cameras. Digital cameras respond differently to light than film does, and so you have to be very careful when you shoot digital, and you have to change the lighting a little bit. 
Some filmmakers have a lot of experience working with film, and for them it's easier to keep using film rather than learning how to use digital, even though digital may be easier once you know how to use it.",
"provenance": null
},
{
"answer": "A few years ago they showed a new old dads army episode that was found in a barn and never aired. It was on black and white film. They discovered that it had all color markings still intact so it could be converted from the black and white to colour.because it was a barn find it was not in the best condition so they decided to restore it using modern day technology. Very interested to Learn a lot of old programs where actually recorded using a color camara but used black and white film. The camara saves the color code on the black and white film.\n\nImagen if we could convert all the non HD photos and film ever recorded to full HD.\n",
"provenance": null
},
{
"answer": "Black and white looks better and ages better than old color films. In addition to the more technical and logistical reasons being listed here, a lot of film directors preferred to continue shooting their films in black and white because it was a hallmark of the artform. Its still true that most B & W films from the 50s and early 60s look better than their color counterparts. ",
"provenance": null
},
{
"answer": "Why are movies like \"Avatar\" and \"The Hobbit\" in 3D when movies are still in 2D to this day?",
"provenance": null
},
{
"answer": "Technicolor was a process that's been available since 1935. It was just incredibly expensive, so even the big budget movies opted not to use it - only huge ticket movies like the 2 you mentioned had the budget for it.\n\nEdit: guy below me had more research haha, thank you sir",
"provenance": null
},
{
"answer": "My mom was young when The Wizard of Oz came out, and they went to see it in the theater. The movie starts in black and white, it only becomes color when Dorothy steps out of her house into Oz. That's the first time mom had ever seen any color in a movie, she still looks a little awed when she tells about it.",
"provenance": null
},
{
"answer": "It was a HUGE deal when Gone with the Wind & Wizard of Oz came out, too, that they were in color. That's part of why those were so successful at the box office-they were novelties, basically. Black & white films were still going strong through the 60s, too, because it was significantly cheaper to film. I love a b-movie, but they're b-movies for a reason: they're cheap, in terms of casting, production, costumes, etc. ",
"provenance": null
},
{
"answer": "For the same reason that many films today aren't shot in **IMAX 3D** even though the technology has been available since the 80s.\n\nThere are costs and complexities that come with shooting with advanced, proprietary film types that often wind up prohibitive given the budget/time constraints and the overall goals for a movie. \n\nAs an example think of how much the forced 3D popout scenes added into a lot of IMAX films actually adds to the movie experience other than to justify spending an extra $4 on the ticket.",
"provenance": null
},
{
"answer": "Color film was largely associated with fantasy/musical setting or storylines (one of the first examples is Journey to the Moon even though it was hand colored) most films remained in black in white until later on. Color film was also not taken very seriously by many people similar to how we tend to view animated movies as childish. (Source: I'm a film student)",
"provenance": null
},
{
"answer": "To better relate this to today - why aren't all movies shot in Imax quality 3D? Cost, difficulty, availability of equipment.",
"provenance": null
},
{
"answer": "Colour was very expensive and used for spectacle/blockbuster films, historical epics, fantasy, musicals, and so on. Black-and-white remained popular with audiences for more serious subjects, contemporary dramas, and smaller-budget films.",
"provenance": null
},
{
"answer": "Funny that you mention those two movies actually. The only reason that the scenes from Kansas are shot in Black and White is because the producers of Wizard of Oz had to relinquish their technicolor cameras for the production of Gone with the Wind. It wasn't originally supposed to be that way, they just happened to have been filming the Kansas scenes last and realized that black and white fit the setting better, which obviously worked out to their advantage. Unfortunately, Gone with the Wind beat them for best picture in the Oscars that year.",
"provenance": null
},
{
"answer": "The world was in black & white until 1966. The Wizard of Oz heralded the invention of colour itself. The real world switched to colour many years later, and the early years of colour were celebrated with Psychedelia and 70s gaudiness.\nEarly reluctance to adopt colour was down to fears that too much colour would blind us all and we'd all become triffid food.",
"provenance": null
},
{
"answer": "Not sure how old you are but, the answer would be the same as: why weren't all shows in High Definition when it first came out?\n\nCost",
"provenance": null
},
{
"answer": "I can't find a definite source, but I have anecdotal information form my father that a very few commercially screened color films even existed in the Silent Era.",
"provenance": null
},
{
"answer": "Occasionally there are directors who choose to do B & amp;W for stylistic reasons look at Psycho by Hitchcock, both the movie before and the movie after were in colour. \n\nBilly Wilder chose to do all of his films in B & amp;W, more recently I've seen a few movies, \"The Artist\" and \"Goodnight and Goodluck\" those where both B & amp;W to fit with the time period the movies take place in. ",
"provenance": null
},
{
"answer": "_URL_0_ watched that video earlier this morning. Strange that this is on the front page now. (We live in the matrix)\n\nAnyways, check out that video. It's very interesting. And if you're interested in film concepts in general, check out the YouTube channel \"Every Frame A Painting\".",
"provenance": null
},
{
"answer": "One reason is that the Wizard of Oz and Gone with the Wind were actually sharing a color camera. Which was part of the artistic decision of having Kansas in black and white and Oz in color in the Wizard of Oz.",
"provenance": null
},
{
"answer": "Actual ELI5: It was very hard and expensive to do with the stuff they had back then. So only big companies could afford it!",
"provenance": null
},
{
"answer": "I think Hollywood went to more color movies, especially in the 1950's, because that is when television became popular. Hollywood wanted to provide an experience in the theater that you couldn't get at home watching TV, so the widescreen format was introduced and color was used more.\n\nHere's an interesting bit a trivia regarding \"The Adventures of Robin Hood\" which was made in 1938 and starred Errol Flynn and Olivia de Havilland:\n\n\"The production used all 11 of the Technicolor cameras in existence in 1938 and they were all returned to Technicolor at the end of each day's filming.\"\n\n--IMDB",
"provenance": null
},
{
"answer": "We had colour film capabilities in those times, but it was expensive. And with the rise of TV, which had a complete lack of colour capabilities, it was pointless to make tv programs in colour.\n\nTheres still plenty of movies from the 50s and early 60s in colour.",
"provenance": null
},
{
"answer": "In 1905, people said movies were a fad, people prefer live theater\n\nIn 1929, people said talkies were a fad, people prefered silent film\n\nIn 1939, people said color was a fad, people prefer serious B & W\n\nIn 2009, people said 3D was a fad, people prefer flat images",
"provenance": null
},
{
"answer": "The same is true of sound in films. The first sound synced film was in 1900, but it wasn't until the early 1930s that the \"talkies\" became mainstream.",
"provenance": null
},
{
"answer": "The excellent \"Filmmaker IQ\" series explains the [whole history of color film](_URL_0_) and it is fascinating.\n\nThe TL;DR of your question is those movies were shot in technicolor, which used a [huge and expensive camera rig] (_URL_1_) to simultaneously shoot 3 rolls of film, separating the three primary light colors. Those were blockbuster films, and most smaller films wouldn't have had the budget for it. It is kind of like digital effects and 3-d is today; you don't spend that kind of money on a simple romantic comedy.",
"provenance": null
},
{
"answer": "You're actually asking two questions: why did they make few color films in the thirties and why did they make b & w films in the 50s and 60s.\nMost of the answers given are good regarding expense. That is the primary reason. There are secondary stylistic reasons that are also interesting. Color was often reserved for fantasies and spectacle. You'll even find black and white films of the thirties with color sequences. A good example is The Women, released the same year as your two examples. There is a color fashion show in the middle of the film, signaling the fantastic allure for the characters. Even into the 1950s when color was becoming more frequent and studios were trying things like widescreen as well to keep viewers from staying home with TV, color became the choice for \"travel\" films. This was the age of travelogue romances set in faraway locales, and European vistas. Rich color cinematography of Venice or Rome in films like Three Coins in the Fountain were ways to give audiences a vicarious view of the outside world. \n\nEven into the early '60s as color became ubiquitous, black and white was employed as a stylistic choice. Hitchcock had been shooting in color since the late forties, but shot Psycho in black and white intentionally. Part of the reason was to lesson the gory impact of blood running down the shower.\n\nDisney made all of his features in color (except the docudrama The Reluctant Dragon, which \"becomes color\" after a visit to the paint factory). However, he made The Shaggy Dog and the two flubber movies in black and white because he hoped it would help disguise the special effects used. ",
"provenance": null
},
{
"answer": "Same reason only some movies are made in 3D. First it's cost way more to make. Also at the time, most people didn't have color TV's just like many people today don't have 3D home setups.",
"provenance": null
},
{
"answer": "There used to be two processes, Technicolor and Kineticolor. This process consisted of the very tedious task of going through each and every slide/frame that needed color and basically taking a crayon (well, dye) and then coloring the slides manually, tinting each section with the necessary color, or using a prism-camera hybrid to make the slides/frames the right color. it's why there is so little diversity between colors, and most colors are extremes. For instance, in the Wizard of Oz, how much of the movie is a color like, say, brown or grey, as opposed to bright vibrant blues, yellows, and reds? Primary colors were easier to use. It's also why the colors tend to be extremes or mesh together poorly compared to today's filming. ",
"provenance": null
},
{
"answer": "Using color in movies was very expensive at that time. Some companies realized that they'll draw more people in to see thee movie if they use color. It was new and it made the movie, back then, pop. \n\nAlso, fun fact: Dorothy's slippers were actually silver in the book. Red was used in the movie to make them really pop. ",
"provenance": null
},
{
"answer": "The technology was rare and kind of expensive. Audiences had been accepting Black and White and it was unclear how much (if any) of an advantage color would be. \n\nThe actual cameras were initially expensive to build and there were only a few of them. For instance, some scenes in Gone with the Wind used ALL 7 Technicolor cameras then in existence.",
"provenance": null
},
{
"answer": "Today we convert black and white movies using a digital process. Once the b & w movies are digitized, the movie is gone over and a series of frames are programmed into the system. The computer then colors in the movie. Each frame is manually fine tuned until the desired effect is achieved. Before digital, each frame had to be colored manually. It was an expensive, highly tedious, time consuming process.",
"provenance": null
},
{
"answer": "Part of the styling of Black and White had to do with clarity. Black and White film was much cleaner and sharper than color film of the time.",
"provenance": null
},
{
"answer": "1. Color was very cumbersome and expensive at the time. It was reserved for \"A\" features with huge budgets, or for cartoons, which were easier to implement than live action. \n\n2. There were actually two separate cinematography Oscars awarded for color and black and white.\n\n3. Unlike 3D, audiences loved color. \"In glorious Technicolor\" was quite an effective marketing phrase. The only objections heard were from inside the industry. Actresses who didn't like the way they looked--there was no silvery glow to their complexion like there was in b/w. Directors and technicians didn't like the refrigerator-size camera, huge lighting requirements, and the finiky presence of Natalie Calmus, the mandatory \"color consultant\" on the set. \n\n4. Color prints were 100 percent compatible with black and white, so even the dump-iest neighborhood theater could show them with no modifications or special training.\n\nThere's tons more great (and technically correct) information here:\n\n_URL_0_",
"provenance": null
},
{
"answer": "The technology was there to have color film. You can see color pics and films from the 20's. But the issue is it cost alot to film in color. The main issue is that theaters don't have color projectors because cost alot. So to make a film in color you need someone to pay for innovation to do it. Sony did it, they picked up the tab because of television was there. They had to innovate or die.",
"provenance": null
},
{
"answer": "It was expensive to do color. That's like asking why doesn't everyone own a 4k TV, the tech exists, why don't people buy it?",
"provenance": null
},
{
"answer": "Back then, movies were more commerce than art. You had a budget, and a system that delivered the talent on both ends cheaply with no thought to being able to watch it over again hundreds of times. Black and white was good enough for 80 % of everything.",
"provenance": null
},
{
"answer": "One way to look at it is that it's sort of like 3D now, where it's a big spectacle for major Motion Pictures but generally isn't worth doing for all movies",
"provenance": null
},
{
"answer": "A lot of people are saying that shooting in color was expensive (and it definitely was) but you can't fail to mention the business side of it. Like others have mentioned, there were a limited number of technicolor cameras and no other options to shoot and process color film. In other words, technicolor had a bit of a monopoly until single-strand color film was developed. They could raise the price all they pleased. \n\nThe trade off is that these first color films were phenomenally successful. So dealing the difficulties of color film was financially worth it.\n\nSo there's also this aspect of it: use technicolor, an expensive monopoly, or use black and white, which was cheaper and easier. ",
"provenance": null
},
{
"answer": "Ask yourself this: Why aren't all movies in 3D?",
"provenance": null
},
{
"answer": "Technicolor was invented in the 30's. Then WWII happened and it wasn't reasonable to spend the money and materiel on color. So they went back to black and white. After the war, when industry turned back to domestic production, and money was freed from waging war, they returned to color.\n\nThis is from a book about Fred & Ginger. They shot a color scene in 1938 for Carefree (\"I Used to Be Color Blind\"). The whole movie was supposed to be color but the cost was too high. They nixed the color scene and released the whole picture in B & W. I have seen the color dream sequence on TV, so at SOME point it was restored, but I don't know when.",
"provenance": null
},
{
"answer": "If anyone is interested in why Hollywood quit making B & W movies ( except for rare occasions) is television. Selling broadcast rights to the major networks and a few indie station syndication packages. (this was before cable television) was a major revenue stream. By 1967 television decided to go 100% color broadcasts and Hollywood and indies switched to color only. By this time, there were several cheaper alternatives to technicolor...",
"provenance": null
},
{
"answer": "Black and white was cheaper but also the norm, the color portion of the Wizard of Oz is the dream sequence for example, when Dorothy woke up the film was back to normal and in black and white again.",
"provenance": null
},
{
"answer": "This is the way I like to think of it, at least in photography. Black and white at that time was along the same quality of some of the color images today (seriously, B & W stuff from that era is gorgeous) and color was just born, so we were yet to iron out the kinks. Of course it was also cheaper. But I know when I'm doing any type of photo work from that era (I did a basketball card series from around that time) I prefer black & white ten-to-one over color of the time. Why? The images are crisper, color gets real fuzzy at that time and it just seems better focused overall. So not only were there cost benefits, there were aesthetic benefits as well.",
"provenance": null
},
{
"answer": "To help understand why these movies were in color and others were in black and white, think about Avatar. \n\nTo rephrase your question, \"Why do movies like Avatar have advanced visual effects and 3D presentation when other movies don't?\" \n\nThe answer, as others have pointed out, is money. The movies with advanced visual effects (color back in the day, crazy CGI today) are big ticket movies that the studio is gambling will make a lot of money. ",
"provenance": null
},
{
"answer": "I would highly recommend looking up the cameras and detail about what cameras were used for those movies. They are way more complicated then what they had back then And today. Of course we have colored cameras that are higher quality and everything like that but the stuff they had to use back then was crazy. I remember reading about a Disney movie (i can't remember which one, but it had real people in a drawn world with like drawn animals and the people were walking on a path and animated animals would walk or skip by them using blue screen to put them in that world and it was like a 60s or 70s movie) but the camera they used was so fascinating to me. I can't explain it and I read it a few years ago. Sorry I'm not much help. But it was very cool and very hard to do what they did for that time.",
"provenance": null
},
{
"answer": "This has already been answered but I need to put my two cents in. The first colored film my grandmother saw was The Wizard of Oz. It starts out black and white, and she had no idea that when Dorothy lands in Oz, it changes to technicolor. This was simply an awesome thing to behold if all you were used to was b & w. Her mouth dropped and she said it was one of the most beautiful things she saw. This is why they went with technicolor for that film. To give Oz this truly magical feeling to the viewers.",
"provenance": null
},
{
"answer": "Pretty much the same reason not all films are shot in stereo today.\n\nIt cost more, required special techniques to get right, and a lot of moviegoers didn't care anyway.",
"provenance": null
},
{
"answer": "Same reason that movies are still shot in 2D when they could be in 3D these days.\n\nTechnology and price isn't at a standard that warrants it.",
"provenance": null
},
{
"answer": "Cuz the camera that captured color was almost the size of a small car. \n\n_URL_0_\n\nIn it, it used three different films to capture red, blue and green colors of the scene. \n\nThe price for the film itself was monumentally high, so any screw ups while filming cost them a lot of money. \n\nSo, it required a lot of work to film, a lot of time and a crap ton of money. A lot of companies avoided it instead and just filmed in b & w.",
"provenance": null
},
{
"answer": "In *Wizard of Oz* the use of black and white for part of the film was obviously an artistic decision and I assume it was the same for other movies too. \n\nThink about it this way, some people even in this modern age choose to still take photos in black and white instead of color. It is an artist's choice and typically I think they'd come up with some sort of justification along those lines, for the effect they think it might provide their art. ",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "11660365",
"title": "The Rocky Horror Show (franchise)",
"section": "Section::::Alternative versions.:Oz recut.\n",
"start_paragraph_id": 23,
"start_character": 0,
"end_paragraph_id": 23,
"end_character": 583,
"text": "Richard O'Brien originally intended for the film to be in black and white for the first 20 minutes and turning to color when Frank-N-Furter appeared, starting with red color on his lipstick and spreading color throughout the picture as the song continued—a direct allusion to \"The Wizard of Oz\". It was vetoed by 20th Century Fox for a more conventional look. In the 25th Anniversary DVD, an Easter egg appears that converts the film to a semblance of O'Brien's original vision, with the film switching to color instantly when Riff Raff swings open the doorway during the Time Warp.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "34494012",
"title": "The Fantastic Flying Books of Mr. Morris Lessmore",
"section": "Section::::Inspiration.\n",
"start_paragraph_id": 12,
"start_character": 0,
"end_paragraph_id": 12,
"end_character": 205,
"text": "Like \"The Wizard of Oz\", the film utilizes the contrast of color and black-and-white as a narrative device. In this case, the black-and-white represents the sadness and despair brought about by the storm.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "8589718",
"title": "The Wizard of Oz on television",
"section": "Section::::Telecasts in the Pre-Cable Era.:Shown in color.\n",
"start_paragraph_id": 28,
"start_character": 0,
"end_paragraph_id": 28,
"end_character": 1001,
"text": "From the beginning \"The Wizard of Oz\" was telecast in color, although few people owned color television sets in 1956. Except for 1961, all U.S. telecasts have been in color, an effect that seemed much more striking in the early 1960s, when there were still relatively few color programs on television. It was not televised in color in 1961 because color telecasts had to be paid for by their sponsors, who declined to do so that year. Between 1956 and 1965, the \"Wizard of Oz\" showings were rare exceptions to the black and white program schedule at CBS. During this period, CBS had the ability to broadcast programs in color, but generally chose not to do so unless a sponsor paid for a film or program to be shown in color. During this period, the competing network NBC was owned by RCA, which by 1960 manufactured 95% of the color sets sold in the U.S. Hence, CBS perceived that increased use of color broadcasting would primarily benefit its rival by promoting sales of RCA color television sets.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "220533",
"title": "Black and white",
"section": "Section::::Contemporary use.\n",
"start_paragraph_id": 19,
"start_character": 0,
"end_paragraph_id": 19,
"end_character": 306,
"text": "Since the late 1960s, few mainstream films have been shot in black-and-white. The reasons are frequently commercial, as it is difficult to sell a film for television broadcasting if the film is not in color. 1961 was the last year in which the majority of Hollywood films were released in black and white.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "220533",
"title": "Black and white",
"section": "Section::::Films with a color/black-and-white mix.\n",
"start_paragraph_id": 15,
"start_character": 0,
"end_paragraph_id": 15,
"end_character": 837,
"text": "\"The Wizard of Oz\" (1939) is in color when Dorothy is in Oz, but in black-and-white when she is in Kansas, although the latter scenes were actually in sepia when the film was originally released. In a similar manner, in \"Stalker\" (1979), the \"zone\", in which natural laws do not apply, is in colour, and the world outside the \"zone\" generally in sepia. In contrast, the British film \"A Matter of Life and Death\" (1946) depicts the other world in black-and-white (a character says \"one is starved of Technicolor … up there\"), and earthly events in color. Similarly, Wim Wenders's film \"Wings of Desire\" (1987) uses sepia-tone black-and-white for the scenes shot from the angels' perspective. When Damiel, the angel (the film's main character), becomes a human the film changes to color, emphasising his new \"real life\" view of the world.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "30691222",
"title": "Technicolor",
"section": "Section::::History.:Two-color Technicolor.:Process 3.\n",
"start_paragraph_id": 28,
"start_character": 0,
"end_paragraph_id": 28,
"end_character": 477,
"text": "Very few of the original camera negatives of movies made in Technicolor Process 2 or 3 survive. In the late 1940s, most were discarded from storage at Technicolor in a space-clearing move, after the studios declined to reclaim the materials. Original Technicolor prints that survived into the 1950s were often used to make black-and-white prints for television and simply discarded thereafter. This explains why so many early color films exist today solely in black and white.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "9215946",
"title": "Movie Movie",
"section": "Section::::Plot summary.\n",
"start_paragraph_id": 4,
"start_character": 0,
"end_paragraph_id": 4,
"end_character": 233,
"text": "The film is introduced by George Burns, who tells viewers that they were about to see an old-style double feature. In the old days, he explains, movies were in black-and-white, except sometimes \"when they sang it came out in color.\"\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
18jv0z | Geologists: What forces caused these adjacent mountain formations to end up looking so different? | [
{
"answer": "Different rock types. not got time to look up a geological map right now, but basically they are just weathering differently.\n\nAlso note, we wouldn't describe those as two separate ranges as they are so closely related.",
"provenance": null
},
{
"answer": "Looks like the darker mountains are cinder cones of volcanic origin. ",
"provenance": null
},
{
"answer": "The darker ones look distinctly mafic to me, so I agree that they're probably igneous, maybe volcanic in origin.\n\nGeneral geomorphology though makes me think these things you're looking at aren't mountains, more like rotated fault blocks.",
"provenance": null
},
{
"answer": "It is likely you are looking at some kind of tilted volcanic pile. It is very difficult to say the composition of the volcanics because many intermediate to felsic volcanic rocks appear dark despite their high silica content. \n\nAs to what caused the rocks to look different, it is most likely that they are different. The rocks on the left appear were most likely erupted and tilted during Basin and Range extension. The rocks that are lighter in color were probably erupted at some other time and have some different composition, but probably not significantly different from the darker ones. They do look to be tilted as well, and look to be more steeply tilted, implying that they are older. \n\nMost volcanic rocks in the Basin and Range province tend to be tuffs, not cinder cones. \n\nThe rocks may be weathering differently, but it is slightly illogical because they are both facing the same direction. \n\nBest to consult the geologic map, check out the USGS. \n",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "31468490",
"title": "Mieming Range",
"section": "Section::::Geology.:Alpine Orogeny.\n",
"start_paragraph_id": 13,
"start_character": 0,
"end_paragraph_id": 13,
"end_character": 525,
"text": "At the time of their deposition the rocks of the Northern Limestone Alps were located several hundred kilometres south of their present position. About 35 million years ago, tectonic forces, that are still active today, began to push these geological units northwards. At that time several kilometres of rock and several hundred metres of water lay on top of the rocks visible today. As a result there was a massive overlapping pressure that prevented the formations underneath from breaking up as they were pushed together.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "2235932",
"title": "Mount Michener",
"section": "Section::::Geography.\n",
"start_paragraph_id": 5,
"start_character": 0,
"end_paragraph_id": 5,
"end_character": 415,
"text": "The formation of the Rocky Mountains began in the Late Cretaceous Period and finished in the Early Tertiary Period. The pressure on the fault line caused thousands of metres of rock to thrust upward. The contorted beds near the summit of Mount Michener are visible evidence of the tremendous force that caused its formation. A system of limestone caves does exist within the mountain, but they remain undocumented.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "156403",
"title": "Mountain chain",
"section": "Section::::Formation of parallel mountain chains.\n",
"start_paragraph_id": 5,
"start_character": 0,
"end_paragraph_id": 5,
"end_character": 734,
"text": "The chain-like arrangement of summits and the formation of long, jagged mountain crests – known in Spanish as sierras (\"saws\") – is a consequence of their collective formation by mountain building forces. The often linear structure is linked to the direction of these thrust forces and the resulting mountain folding which in turn relates to the fault lines in the upper part of the earth's crust, that run between the individual mountain chains. In these fault zones, the rock, which has sometimes been pulverised, is easily eroded, so that large river valleys are carved out. These, so called longitudinal valleys reinforce the trend, during the early mountain building phase, towards the formation of parallel chains of mountains.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "156403",
"title": "Mountain chain",
"section": "Section::::Formation of parallel mountain chains.\n",
"start_paragraph_id": 6,
"start_character": 0,
"end_paragraph_id": 6,
"end_character": 758,
"text": "The tendency, especially of fold mountains (e. g. the Cordilleras) to produce roughly parallel chains is due to their rock structure and the propulsive forces of plate tectonics. The uplifted rock masses are either magmatic plutonic rocks, easily shaped because of their higher temperature, or sediments or metamorphic rocks, which have a less robust structure, that are deposited in the synclines. As a result of orogenic movements, strata of folded rock are formed that are crumpled out of their original horizontal plane and thrust against one another. The longitudinal stretching of the folds takes place at right angles to the direction of the lateral thrusting. The overthrust folds of a nappe belt (e.g. the Central Alps) are formed in a similar way.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "25459",
"title": "Rocky Mountains",
"section": "Section::::Geology.\n",
"start_paragraph_id": 18,
"start_character": 0,
"end_paragraph_id": 18,
"end_character": 560,
"text": "Further south, an unusual subduction may have caused the growth of the Rocky Mountains in the United States, where the Farallon plate dove at a shallow angle below the North American plate. This low angle moved the focus of melting and mountain building much farther inland than the normal . Scientists hypothesize that the shallow angle of the subducting plate increased the friction and other interactions with the thick continental mass above it. Tremendous thrusts piled sheets of crust on top of each other, building the broad, high Rocky Mountain range.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "13325214",
"title": "Dante's View",
"section": "Section::::Geological.\n",
"start_paragraph_id": 16,
"start_character": 0,
"end_paragraph_id": 16,
"end_character": 270,
"text": "These mountains were created when the surface of the earth was being stretched, forming a horst or a pulling force, forming grabens. The crust ruptured because of this force, and as a result, lava erupted and ended up deposited on top of the preceding sedimentary rock.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "31468490",
"title": "Mieming Range",
"section": "Section::::Geology.:Mesozoic Era.\n",
"start_paragraph_id": 8,
"start_character": 0,
"end_paragraph_id": 8,
"end_character": 611,
"text": "The mountains' geological history began in tropical climes, on the edge of a broad and shallow sea: the Alpine Tethys. To begin with, material was deposited that had been washed into the sea from the land; then the sea level rose and limestone-forming organisms began to populate it. Marine deposits that were near the shore, made of limestones, dolomitic rocks and breccias, are still layered in places in a narrow strip of land between Langlehn and Igelskar (Reichenhall Strata). Because they weather relatively easily, they form cols (\"Scharte\") and \"Törle\" like the \"Biberwierer Scharte\" or the \"Tajatörl\".\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
dvdzab | why does the weather say 39 but "feels like 30" wouldnt it just be 30 outside? | [
{
"answer": "Ambient temperature in a general area VS perceived temperature due to humidity and wind chill making it colder.",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "3572681",
"title": "30th parallel north",
"section": "",
"start_paragraph_id": 2,
"start_character": 0,
"end_paragraph_id": 2,
"end_character": 266,
"text": "It is the approximate southern border of the horse latitudes in the Northern Hemisphere, meaning that much of the land area touching the 30th parallel is arid or semi-arid. If there is a source of wind from a body of water the area would more likely be subtropical.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "617947",
"title": "Weather lore",
"section": "Section::::Where weather happens.\n",
"start_paragraph_id": 4,
"start_character": 0,
"end_paragraph_id": 4,
"end_character": 308,
"text": "It is in Earth's middle latitudes, between roughly 30° to 60° North and South, that a significant portion of \"weather\" can be said to happen, that is, where meteorological phenomena do not persist over the long term, and where it may be warm, sunny, and calm one day, and cold, overcast and stormy the next.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "617947",
"title": "Weather lore",
"section": "Section::::Where weather happens.\n",
"start_paragraph_id": 6,
"start_character": 0,
"end_paragraph_id": 6,
"end_character": 374,
"text": "Weather folklore, therefore, refers to this mid-latitude region of daily variability. While most of it applies equally to the Southern Hemisphere, the Southern Hemisphere resident may need to take into account the fact that weather systems rotate opposite to those in the North. For instance, the \"crossed winds\" rule (see below) must be reversed for the Australian reader.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "1744360",
"title": "Colonization of Mars",
"section": "Section::::Differences from Earth.\n",
"start_paragraph_id": 28,
"start_character": 0,
"end_paragraph_id": 28,
"end_character": 343,
"text": "BULLET::::- Due to the thin atmosphere, the temperature difference between day and night is much larger than on Earth, typically around 70 °C (125 °F). However, the day/night temperature variation is much lower during dust storms when very little light gets through to the surface even during the day, and instead warms the middle atmosphere.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "6292531",
"title": "Diamante, Calabria",
"section": "Section::::Chili Peppers Festival.:Geography.\n",
"start_paragraph_id": 11,
"start_character": 0,
"end_paragraph_id": 11,
"end_character": 223,
"text": "Due to its latitude temperatures can vary wildly, especially in the summer when there is a large night-day difference. There is a train station situated in Diamante, which is linked to major cities like Naples and Messina.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "3302985",
"title": "Heavy Weather (Wodehouse novel)",
"section": "",
"start_paragraph_id": 1,
"start_character": 0,
"end_paragraph_id": 1,
"end_character": 291,
"text": "Heavy Weather is a novel by P. G. Wodehouse, first published in the United States on 28 July 1933 by Little, Brown and Company, Boston, and in the United Kingdom on 10 August 1933 by Herbert Jenkins, London. It had been serialised in \"The Saturday Evening Post\" from 27 May to 15 July 1933.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "14831788",
"title": "Climate of the United States",
"section": "",
"start_paragraph_id": 1,
"start_character": 0,
"end_paragraph_id": 1,
"end_character": 301,
"text": "The climate of the United States varies due to differences in latitude, and a range of geographic features, including mountains and deserts. Generally, on the mainland, the climate of the U.S. becomes warmer the further south one travels, and drier the further west, until one reaches the West Coast.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
24r59o | Does rinsing or just running water over my hands without soap after using the bathroom do anything? | [
{
"answer": "I dont think that rubbing your hands together will kill bacteria. They are too small to unfluence that way. \nAbout the hot and cold water. You would need to put your hand in boiling water for an extended amount of time (hours) before it even remotely kills enough bacteria to be considered clean. And switching between hot and cold probably isnt gonna cut it either.\n\nAs already said the water makes it easier for bacteria to get loose from your hands. So that woulf only spread more. \n\nBest case scenario: wash with water and soap and dry using a (papr) towel and not those blowers",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "44167746",
"title": "Prevention of viral hemorrhagic fever",
"section": "Section::::Standard precautions.\n",
"start_paragraph_id": 5,
"start_character": 0,
"end_paragraph_id": 5,
"end_character": 447,
"text": "Washing hands with soap and water eliminates microorganisms from the skin and hands. This provides some protection against transmission of VHF and other diseases. This requires at least cake soap cut into small pieces, soap dishes with openings that allow water to drain away, running water or a bucket kept full with clean water, a bucket for collecting rinse water and a ladle for dipping, if running water is not available, and one-use towels.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "3263286",
"title": "Dishwashing liquid",
"section": "Section::::Primary uses.:Hand dishwashing.\n",
"start_paragraph_id": 29,
"start_character": 0,
"end_paragraph_id": 29,
"end_character": 405,
"text": "Hand dishwashing detergents utilize surfactants to play the primary role in cleaning. The reduced surface tension of dishwashing water, and increasing solubility of modern surfactant mixtures, allows the water to run off the dishes in a dish rack very quickly. However, most people also rinse the dishes with pure water to make sure to get rid of any soap residue that could affect the taste of the food.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "509020",
"title": "Mug",
"section": "Section::::History.:Shaving mugs and scuttles.\n",
"start_paragraph_id": 12,
"start_character": 0,
"end_paragraph_id": 12,
"end_character": 369,
"text": "In use, the shaving brush is dunked into the wide spout, allowing it to soak into the water and heat up. The soap is placed in the soap holder. When needed, one can take the brush and brush it against the soap, bringing up a layer of lather; excess water is drained back. This allows conservation of water and soap, whilst retaining enough heat to ensure a long shave.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "3829190",
"title": "Hand sanitizer",
"section": "Section::::Uses.:Not indicated.\n",
"start_paragraph_id": 18,
"start_character": 0,
"end_paragraph_id": 18,
"end_character": 710,
"text": "There are certain situations during which hand washing with water and soap are preferred over hand sanitizer, these include: eliminating bacterial spores of \"Clostridioides difficile\", parasites such as \"Cryptosporidium\", and certain viruses like norovirus depending on the concentration of alcohol in the sanitizer (95% alcohol was seen to be most effective in eliminating most viruses). In addition, if hands are contaminated with fluids or other visible contaminates, hand washing is preferred as well as when after using the toilet and if discomfort develops from the residue of alcohol sanitizer use. Furthermore, CDC recommends hand sanitizers are not effective in removing chemicals such as pesticides.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "2888063",
"title": "Influenza pandemic",
"section": "Section::::Strategies to slow down a flu pandemic.:Public response measures.\n",
"start_paragraph_id": 91,
"start_character": 0,
"end_paragraph_id": 91,
"end_character": 295,
"text": "BULLET::::- Handwashing Hygiene: Frequent handwashing with soap and water (or with an alcohol-based hand sanitizer) is very important, especially after coughing or sneezing, and after contact with other people or with potentially contaminated surfaces (such as handrails, shared utensils, etc.)\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "3829190",
"title": "Hand sanitizer",
"section": "",
"start_paragraph_id": 1,
"start_character": 0,
"end_paragraph_id": 1,
"end_character": 650,
"text": "Hand sanitizer is a liquid generally used to decrease infectious agents on the hands. Formulations of the alcohol-based type are preferable to hand washing with soap and water in most situations in the healthcare setting. It is generally more effective at killing microorganisms and better tolerated than soap and water. Hand washing should still be carried out if contamination can be seen or following the use of the toilet. The general use of non-alcohol based versions has no recommendations. Outside the health care setting evidence to support the use of hand sanitizer over hand washing is poor. They are available as liquids, gels, and foams.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "27654014",
"title": "Shaving soap",
"section": "Section::::Use.\n",
"start_paragraph_id": 8,
"start_character": 0,
"end_paragraph_id": 8,
"end_character": 356,
"text": "A hard shaving soap is used with a shaving brush to create lather for shaving. For soap in the form of a puck or bar, the brush is first soaked in water and then swirled vigorously over the surface of the soap, causing moist soap to coat the brush's bristles. The brush is then transferred either to a separate bowl or to the shaver's face to be lathered.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
6bk8yj | why does beer make you crave salty/fatty food? | [
{
"answer": "Alcohol releases dopamine in your brain, when it starts to wear off you start looking for something else that will release dopamine. \nFat and salt are particular good for this (to do with evolution of humans.)",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "54712",
"title": "Abdominal obesity",
"section": "Section::::Society and culture.:Colloquialisms.\n",
"start_paragraph_id": 58,
"start_character": 0,
"end_paragraph_id": 58,
"end_character": 646,
"text": "Several colloquial terms used to refer to central obesity, and to people who have it, refer to beer drinking. However, there is little scientific evidence that beer drinkers are more prone to central obesity, despite its being known colloquially as \"beer belly\", \"beer gut\", or \"beer pot\". One of the few studies conducted on the subject did not find that beer drinkers are more prone to central obesity than nondrinkers or drinkers of wine or spirits. Chronic alcoholism can lead to cirrhosis, symptoms of which include gynecomastia (enlarged breasts) and ascites (abdominal fluid). These symptoms can suggest the appearance of central obesity.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "1741326",
"title": "Fatty alcohol",
"section": "Section::::Applications.\n",
"start_paragraph_id": 11,
"start_character": 0,
"end_paragraph_id": 11,
"end_character": 425,
"text": "Fatty alcohols are mainly used in the production of detergents and surfactants. They are components also of cosmetics, foods, and as industrial solvents. Due to their amphipathic nature, fatty alcohols behave as nonionic surfactants. They find use as co-emulsifiers, emollients and thickeners in cosmetics and food industry. About 50% of fatty alcohols used commercially are of natural origin, the remainder being synthetic.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "337566",
"title": "Long-term effects of alcohol consumption",
"section": "Section::::Digestive system and weight gain.:Body composition.\n",
"start_paragraph_id": 81,
"start_character": 0,
"end_paragraph_id": 81,
"end_character": 325,
"text": "Alcohol affects the nutritional state of the chronic drinkers. It can decrease food consumption and lead to malabsorption. It can create imbalance in the skeletal muscle mass and cause muscle wasting. Chronic consumption alcohol can also increase breakdown of important proteins in our body which can affect gene expression.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "3363",
"title": "Beer",
"section": "Section::::Health effects.\n",
"start_paragraph_id": 84,
"start_character": 0,
"end_paragraph_id": 84,
"end_character": 618,
"text": "It is considered that overeating and lack of muscle tone is the main cause of a beer belly, rather than beer consumption. A 2004 study, however, found a link between binge drinking and a beer belly. But with most overconsumption, it is more a problem of improper exercise and overconsumption of carbohydrates than the product itself. Several diet books quote beer as having an undesirably high glycemic index of 110, the same as maltose; however, the maltose in beer undergoes metabolism by yeast during fermentation so that beer consists mostly of water, hop oils and only trace amounts of sugars, including maltose.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "327489",
"title": "Pale lager",
"section": "Section::::Variations.:Dry beer.\n",
"start_paragraph_id": 20,
"start_character": 0,
"end_paragraph_id": 20,
"end_character": 492,
"text": "Though all lagers are well attenuated, a more fully fermented pale lager in Germany goes by the name \"Diät-Pils\" or \"\". \"Diet\" in the instance not referring to being \"light\" in calories or body, rather its sugars are fully fermented into alcohol, allowing the beer to be targeted to diabetics due to its lower carbohydrate content. Because the available sugars are fully fermented, dry beers often have a higher alcohol content, which may be reduced in the same manner as low-alcohol beers. \n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "21640",
"title": "Low-alcohol beer",
"section": "Section::::Craft Non-Alcoholic Beer.:History.\n",
"start_paragraph_id": 59,
"start_character": 0,
"end_paragraph_id": 59,
"end_character": 244,
"text": "With an ever growing health conscious market segment, breweries began to produce craft non-alcoholic beers with as little as 10 calories per can, so that those who crave beer can fulfill their cravings without breaking their health resolution.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "3680556",
"title": "Light beer",
"section": "Section::::Reduced alcohol.\n",
"start_paragraph_id": 3,
"start_character": 0,
"end_paragraph_id": 3,
"end_character": 254,
"text": "Light beers with significantly lower alcohol content allow consumers to drink more beers in a shorter period without becoming intoxicated. Low alcohol content can also mean a less expensive beer, especially where excise is determined by alcohol content.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
eag5bs | how do usb plugs built in to outlets work with phones and other devices that use usb? don’t you need to convert ac to dc? | [
{
"answer": "Yes you need a rectifier. They are so small these days, they are build in a small board the size of a dime.",
"provenance": null
},
{
"answer": "There is either a converter built into the wall before the usb. So it goes wires > converter > USB. Or the cube is used because the normal outlet uses the two prongs and not USB.",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "2885000",
"title": "Battery charger",
"section": "Section::::Type.:USB-based charger.\n",
"start_paragraph_id": 53,
"start_character": 0,
"end_paragraph_id": 53,
"end_character": 445,
"text": "Since the Universal Serial Bus specification provides for a five-volt power supply (with limited maximum power), it is possible to use a USB cable to connect a device to a power supply. Products based on this approach include chargers for cellular phones, portable digital audio players, and tablet computers. They may be fully compliant USB peripheral devices adhering to USB power discipline, or uncontrolled in the manner of USB decorations.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "57218370",
"title": "USB hardware",
"section": "Section::::Connectors.:Connector types.\n",
"start_paragraph_id": 23,
"start_character": 0,
"end_paragraph_id": 23,
"end_character": 828,
"text": "USB connector types multiplied as the specification progressed. The original USB specification detailed standard-A and standard-B plugs and receptacles.The connectors were different so that users could not connect one computer receptacle to another. The data pins in the standard plugs are recessed compared to the power pins,so that the device can power up before establishing a data connection. Some devices operate in different modes depending on whether the data connection is made. Charging docks supply power and do not include a host device or data pins, allowing any capable USB device to charge or operate from a standard USB cable. Charging cables provide power connections, but not data. In a charge-only cable, the data wires are shorted at the device end, otherwise the device may reject the charger as unsuitable.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "32073",
"title": "USB",
"section": "Section::::Connectors.\n",
"start_paragraph_id": 97,
"start_character": 0,
"end_paragraph_id": 97,
"end_character": 830,
"text": "USB connector types multiplied as the specification progressed. The original USB specification detailed standard-A and standard-B plugs and receptacles. The connectors were different so that users could not connect one computer receptacle to another. The data pins in the standard plugs are recessed compared to the power pins, so that the device can power up before establishing a data connection. Some devices operate in different modes depending on whether the data connection is made. Charging docks supply power and do not include a host device or data pins, allowing any capable USB device to charge or operate from a standard USB cable. Charging cables provide power connections, but not data. In a charge-only cable, the data wires are shorted at the device end, otherwise the device may reject the charger as unsuitable.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "13064488",
"title": "Wireless repeater",
"section": "Section::::Connectivity.\n",
"start_paragraph_id": 14,
"start_character": 0,
"end_paragraph_id": 14,
"end_character": 323,
"text": "Some wireless range extending devices connect via a USB port. These USB adapters add Wi-Fi capability to desktop PCs and other devices that have standard USB ports. USB supports not only the data transfers required for networking, but it also supplies a power source so that these adapters do not require electrical plugs.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "30862892",
"title": "Y-cable",
"section": "Section::::Uses.:Power.:USB.\n",
"start_paragraph_id": 17,
"start_character": 0,
"end_paragraph_id": 17,
"end_character": 503,
"text": "Traditional USB Y-cables exist to enable one USB peripheral device to receive power from two USB host sockets at once, while only transceiving data with one of those sockets. As long as the host has two available USB sockets, this enables a peripheral that requires more power than one USB port can supply (but not more than two ports can supply) to be used without requiring a mains adaptor. Portable hard disk drives and optical disc drives are sometimes supplied with such Y-cables, for this reason.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "1169469",
"title": "AC adapter",
"section": "Section::::Use of USB.\n",
"start_paragraph_id": 38,
"start_character": 0,
"end_paragraph_id": 38,
"end_character": 885,
"text": "The USB connector (and voltage) has emerged as a de facto standard in low-power AC adapters for many portable devices. In addition to serial digital data exchange, the USB standard also provides , up to ( over USB 3.0). Numerous accessory gadgets (\"USB decorations\") were designed to connect to USB only for DC power and not for data interchange. The USB Implementers Forum in March, 2007 released the USB Battery Charging Specification which defines, \"...limits as well as detection, control and reporting mechanisms to permit devices to draw current in excess of the USB 2.0 specification for charging ...\". Electric fans, lamps, alarms, coffee warmers, battery chargers, and even toys have been designed to tap power from a USB connector. Plug-in adapters equipped with USB receptacles are widely available to convert or power or automotive power to USB power (see photo at right).\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "57218370",
"title": "USB hardware",
"section": "Section::::Power.:Non-standard devices.\n",
"start_paragraph_id": 115,
"start_character": 0,
"end_paragraph_id": 115,
"end_character": 617,
"text": "Some USB devices require more power than is permitted by the specifications for a single port. This is common for external hard and optical disc drives, and generally for devices with motors or lamps. Such devices can use an external power supply, which is allowed by the standard, or use a dual-input USB cable, one input of which is for power and data transfer, the other solely for power, which makes the device a non-standard USB device. Some USB ports and external hubs can, in practice, supply more power to USB devices than required by the specification but a standard-compliant device may not depend on this.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
1atho2 | how humid would air need to be for a human to breathe their liquid requirements in a 24 hour period? | [
{
"answer": "It couldn't happen. At 100 degrees F (38C) the partial pressure of water vapor at 100% relative humidity is 49mmHg and change. Humans exhale 47mmHg of water vapor. So at 100% humidity in a 100 degree environment that would effectively \"stop\" the water loss that comes from exhaling, but not add a huge amount of water back to the system. To increase the partial pressure further, you have to increase the temperature, but as you do so we'll start to lose volume from sweat which competes against the cause. To get to a point where we were in effect breathing in 2x as much water as we were exhaling, you'd have to have 100% humidty at around 122-123 degrees F (50-51C). At that temperature you have to worry not just about sweat loss but heat radiation, and at 100% RH you're looking at heat stroke and death becoming likely.\n\nFrom a theoretical standpoint (assuming we don't die of heat stroke, and assuming that we don't start losing volume as sweat as temperature increases), there probably is a point where it can happen. Assuming that a normal alveolar ventilation is on the order of 6000L per day (12 breaths per minute, 350ml of alveolar ventilation per breath), you could then figure out how many grams of water you'd need in the air to offset the 2.5L of water an average person on an average day loses. But that math is getting beyond me, I just posted these values as a starting point for anyone who actually wants to work out the ideal gas law of it for a further answer.",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "31499475",
"title": "Heated humidified high-flow therapy",
"section": "Section::::History.\n",
"start_paragraph_id": 20,
"start_character": 0,
"end_paragraph_id": 20,
"end_character": 394,
"text": "Even with quiet breathing, the inspiratory flow rate at the nares of an adult usually exceeds 12 liters a minute, and can exceed 30 liters a minute for someone with mild respiratory distress. Traditional oxygen therapy is limited to six liters a minute and does not begin to approach the inspiratory demand of an adult and therefore the oxygen is then diluted with room air during inspiration.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "1561380",
"title": "Nasal cannula",
"section": "Section::::Applications.:Nasal high flow therapy.\n",
"start_paragraph_id": 13,
"start_character": 0,
"end_paragraph_id": 13,
"end_character": 282,
"text": "Definition: Non-invasive delivery of oxygen air mixture delivered via a nasal cannula at flows that exceed the patient’s inspiratory flow demands with gas that has been optimally conditioned by warming and humidifying the gas to close to 100% relative humidity at body temperature.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "3802867",
"title": "Inert gas asphyxiation",
"section": "Section::::Physiology.\n",
"start_paragraph_id": 9,
"start_character": 0,
"end_paragraph_id": 9,
"end_character": 618,
"text": "A typical human breathes between 12 and 20 times per minute at a rate primarily influenced by carbon dioxide concentration, and thus pH, in the blood. With each breath, a volume of about 0.6 litres is exchanged from an active lung volume (tidal volume + functional residual capacity) of about 3 litres. Normal Earth atmosphere is about 78% nitrogen, 21% oxygen, and 1% argon, carbon dioxide, and other gases. After just two or three breaths of nitrogen, the oxygen concentration in the lungs would be low enough for some oxygen already in the bloodstream to exchange back to the lungs and be eliminated by exhalation.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "77485",
"title": "Altitude sickness",
"section": "Section::::Prevention.:Other methods.\n",
"start_paragraph_id": 56,
"start_character": 0,
"end_paragraph_id": 56,
"end_character": 262,
"text": "Increased water intake may also help in acclimatization to replace the fluids lost through heavier breathing in the thin, dry air found at altitude, although consuming excessive quantities (\"over-hydration\") has no benefits and may cause dangerous hyponatremia.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "66723",
"title": "Respiratory system",
"section": "Section::::Mammals.:Responses to low atmospheric pressures.\n",
"start_paragraph_id": 42,
"start_character": 0,
"end_paragraph_id": 42,
"end_character": 1666,
"text": "There is, however, a complication that increases the volume of air that needs to be inhaled per minute (respiratory minute volume) to provide the same amount of oxygen to the lungs at altitude as at sea level. During inhalation the air is warmed and saturated with water vapor during its passage through the nose passages and pharynx. Saturated water vapor pressure is dependent only on temperature. At a body core temperature of 37 °C it is 6.3 kPa (47.0 mmHg), irrespective of any other influences, including altitude. Thus at sea level, where the ambient atmospheric pressure is about 100 kPa, the moistened air that flows into the lungs from the trachea consists of water vapor (6.3 kPa), nitrogen (74.0 kPa), oxygen (19.7 kPa) and trace amounts of carbon dioxide and other gases (a total of 100 kPa). In dry air the partial pressure of O at sea level is 21.0 kPa (i.e. 21% of 100 kPa), compared to the 19.7 kPa of oxygen entering the alveolar air. (The tracheal partial pressure of oxygen is 21% of [100 kPa – 6.3 kPa] = 19.7 kPa). At the summit of Mt. Everest (at an altitude of 8,848 m or 29,029 ft) the total atmospheric pressure is 33.7 kPa, of which 7.1 kPa (or 21%) is oxygen. The air entering the lungs also has a total pressure of 33.7 kPa, of which 6.3 kPa is, unavoidably, water vapor (as it is at sea level). This reduces the partial pressure of oxygen entering the alveoli to 5.8 kPa (or 21% of [33.7 kPa – 6.3 kPa] = 5.8 kPa). The reduction in the partial pressure of oxygen in the inhaled air is therefore substantially greater than the reduction of the total atmospheric pressure at altitude would suggest (on Mt Everest: 5.8 kPa \"vs.\" 7.1 kPa).\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "56677683",
"title": "Mars suit",
"section": "Section::::Environmental design requirements.:Breathing.\n",
"start_paragraph_id": 30,
"start_character": 0,
"end_paragraph_id": 30,
"end_character": 1011,
"text": "Exhaled breath on Earth normally contains about 4% carbon dioxide and 16% oxygen, along with 78% nitrogen, plus about 0.2 to 0.3 liters of water. Carbon dioxide slowly becomes increasingly toxic in high concentrations, and must be scrubbed from the breathing gas. A concept to scrub carbon dioxide from breathing air is to use re-usable amine bead carbon dioxide scrubbers. While one carbon dioxide scrubber filters the astronaut's air, the other can vent scrubbed carbon dioxide to the Mars atmosphere. Once that process is completed, another scrubber can be used, and the one that was used can take a break. Another more traditional way to remove carbon dioxide from air is by a lithium hydroxide canister, however these need to be replaced periodically. Carbon dioxide removal systems are a standard part of habitable spacecraft designs, although their specifics vary. One idea to remove carbon dioxide is to use a zeolite molecular sieve, and then later the carbon dioxide can be removed from the material.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "28252573",
"title": "Spanish submarine Peral",
"section": "Section::::Conception.\n",
"start_paragraph_id": 6,
"start_character": 0,
"end_paragraph_id": 6,
"end_character": 589,
"text": "The first study consisted of human breath test in an enclosure for several hours. A room of square meters was used, with an air storage cell, loaded to 79 atmospheres and a storage capacity of 0.5 m. In addition to instruments to measure the temperature and moisture, there was a tube to re-oxygenate the air supply to the crew through a waterproof cloak and three water buckets to maintain the moisture. Six people locked themselves inside the room; one had to leave an hour and quarter later, but the rest remained for a total of five hours, and the test was considered a total success.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
27gmaz | laser thermometers. | [
{
"answer": "The laser part is just to visually indicate where you are measuring, the actual temperature is read by a calibrated infra red sensor. Similar to on a remote control.",
"provenance": null
},
{
"answer": "The laser is just for aiming. It has an infra red sensor and displays the average of the temperature it sees. The further back you hold it the bigger the spot the sensor sees gets, so max range depends on the size of the object your trying to get a read from. Since it works like an ir camera, it sees the surface, not the air.",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "50351656",
"title": "Thermopile laser sensor",
"section": "",
"start_paragraph_id": 1,
"start_character": 0,
"end_paragraph_id": 1,
"end_character": 497,
"text": "Thermopile laser sensors (Fig 1) are used for measuring laser power from a few µW to several W (see section 2.4). The incoming radiation of the laser is converted into heat energy at the surface. This heat input produces a temperature gradient across the sensor. Making use of the thermoelectric effect a voltage is generated by this temperature gradient. Since the voltage is directly proportional to the incoming radiation, it can be directly related to the irradiation power (see section 2.1).\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "3365641",
"title": "Infrared thermometer",
"section": "",
"start_paragraph_id": 1,
"start_character": 0,
"end_paragraph_id": 1,
"end_character": 668,
"text": "An infrared thermometer is a thermometer which infers temperature from a portion of the thermal radiation sometimes called black-body radiation emitted by the object being measured. They are sometimes called laser thermometers as a laser is used to help aim the thermometer, or non-contact thermometers or temperature guns, to describe the device's ability to measure temperature from a distance. By knowing the amount of infrared energy emitted by the object and its emissivity, the object's temperature can often be determined within a certain range of its actual temperature. Infrared thermometers are a subset of devices known as \"thermal radiation thermometers\".\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "50351656",
"title": "Thermopile laser sensor",
"section": "Section::::Applications.\n",
"start_paragraph_id": 50,
"start_character": 0,
"end_paragraph_id": 50,
"end_character": 355,
"text": "Thermopile laser sensors find their use mainly where sensitivity to a wide spectral range is needed or where high laser powers need to be measured. Thermopile sensors are integrated into laser systems and laser sources and are used for sporadic as well as continuous monitoring of laser power, e.g. in feedback control loops. Some of the applications are\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "30990",
"title": "Thermocouple",
"section": "Section::::Applications.:Thermopile radiation sensors.\n",
"start_paragraph_id": 131,
"start_character": 0,
"end_paragraph_id": 131,
"end_character": 422,
"text": "Thermopiles are used for measuring the intensity of incident radiation, typically visible or infrared light, which heats the hot junctions, while the cold junctions are on a heat sink. It is possible to measure radiative intensities of only a few μW/cm with commercially available thermopile sensors. For example, some laser power meters are based on such sensors; these are specifically known as thermopile laser sensor.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "48532100",
"title": "Laser schlieren deflectometry",
"section": "",
"start_paragraph_id": 1,
"start_character": 0,
"end_paragraph_id": 1,
"end_character": 543,
"text": "Laser schlieren deflectometry (LSD) is a method for a high-speed measurement of the gas temperature in microscopic dimensions, in particular for temperature peaks under dynamic conditions at atmospheric pressure. The principle of LSD is derived from schlieren photography: a narrow laser beam is used to scan an area in a gas where changes in properties are associated with characteristic changes of refractive index. Laser schlieren deflectometry is claimed to overcome limitations of other methods regarding temporal and spatial resolution.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "2994",
"title": "Anemometer",
"section": "Section::::Velocity anemometers.:Laser Doppler anemometers.\n",
"start_paragraph_id": 24,
"start_character": 0,
"end_paragraph_id": 24,
"end_character": 608,
"text": "In laser Doppler velocimetry, laser Doppler anemometers use a beam of light from a laser that is divided into two beams, with one propagated out of the anemometer. Particulates (or deliberately introduced seed material) flowing along with air molecules near where the beam exits reflect, or backscatter, the light back into a detector, where it is measured relative to the original laser beam. When the particles are in great motion, they produce a Doppler shift for measuring wind speed in the laser light, which is used to calculate the speed of the particles, and therefore the air around the anemometer.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "25657073",
"title": "Strontium vapor laser",
"section": "",
"start_paragraph_id": 1,
"start_character": 0,
"end_paragraph_id": 1,
"end_character": 235,
"text": "A strontium vapor laser is a laser that produces at its output, high-intensity pulsed light at a wavelength of 430.5 nm in the blue-violet region of the visible spectrum via vaporized strontium metal gas contained within a glass tube.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
2xujoh | Do waves move faster than light because of the sinusoidal path they take? | [
{
"answer": "Light \"waves\" do not move sinusoidally. This is a convenient way of representing light's wave-like properties, but they don't actually slew ftom side to side like an old truck with a sloppy steering box.",
"provenance": null
},
{
"answer": "Like other posters have said, lights doesn't move sinusoidally. That representation is actually a simplification showing the electric field amplitude at each point in space along the path of the light",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "33125",
"title": "Wavelength",
"section": "Section::::Sinusoidal waves.:General media.:Nonuniform media.\n",
"start_paragraph_id": 35,
"start_character": 0,
"end_paragraph_id": 35,
"end_character": 461,
"text": "Waves that are sinusoidal in time but propagate through a medium whose properties vary with position (an \"inhomogeneous\" medium) may propagate at a velocity that varies with position, and as a result may not be sinusoidal in space. The figure at right shows an example. As the wave slows down, the wavelength gets shorter and the amplitude increases; after a place of maximum response, the short wavelength is associated with a high loss and the wave dies out.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "33125",
"title": "Wavelength",
"section": "",
"start_paragraph_id": 5,
"start_character": 0,
"end_paragraph_id": 5,
"end_character": 223,
"text": "Assuming a sinusoidal wave moving at a fixed wave speed, wavelength is inversely proportional to frequency of the wave: waves with higher frequencies have shorter wavelengths, and lower frequencies have longer wavelengths.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "37019651",
"title": "List of equations in wave theory",
"section": "Section::::Definitions.:General fundamental quantities.\n",
"start_paragraph_id": 4,
"start_character": 0,
"end_paragraph_id": 4,
"end_character": 947,
"text": "A wave can be longitudinal where the oscillations are parallel (or antiparallel) to the propagation direction, or transverse where the oscillations are perpendicular to the propagation direction. These oscillations are characterized by a periodically time-varying displacement in the parallel or perpendicular direction, and so the instantaneous velocity and acceleration are also periodic and time varying in these directions. (the apparent motion of the wave due to the successive oscillations of particles or fields about their equilibrium positions) propagates at the phase and group velocities parallel or antiparallel to the propagation direction, which is common to longitudinal and transverse waves. Below oscillatory displacement, velocity and acceleration refer to the kinematics in the oscillating directions of the wave - transverse or longitudinal (mathematical description is identical), the group and phase velocities are separate.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "25948",
"title": "Refraction",
"section": "Section::::General explanation.\n",
"start_paragraph_id": 5,
"start_character": 0,
"end_paragraph_id": 5,
"end_character": 598,
"text": "Consider a wave going from one material to another where its speed is slower as in the figure. If it reaches the interface between the materials at an angle one side of the wave will reach the second material first, and therefore slow down earlier. With one side of the wave going slower the whole wave will pivot towards that side. This is why a wave will bend away from the surface or toward the normal when going into a slower material. In the opposite case of a wave reaching a material where the speed is higher, one side of the wave will speed up and the wave will pivot away from that side.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "25202",
"title": "Quantum mechanics",
"section": "Section::::Mathematical formulations.\n",
"start_paragraph_id": 40,
"start_character": 0,
"end_paragraph_id": 40,
"end_character": 761,
"text": "Wave functions change as time progresses. The Schrödinger equation describes how wave functions change in time, playing a role similar to Newton's second law in classical mechanics. The Schrödinger equation, applied to the aforementioned example of the free particle, predicts that the center of a wave packet will move through space at a constant velocity (like a classical particle with no forces acting on it). However, the wave packet will also spread out as time progresses, which means that the position becomes more uncertain with time. This also has the effect of turning a position eigenstate (which can be thought of as an infinitely sharp wave packet) into a broadened wave packet that no longer represents a (definite, certain) position eigenstate.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "5786179",
"title": "Acoustic wave",
"section": "Section::::Wave properties.:Reflection.\n",
"start_paragraph_id": 40,
"start_character": 0,
"end_paragraph_id": 40,
"end_character": 306,
"text": "An acoustic travelling wave can be reflected by a solid surface. If a travelling wave is reflected, the reflected wave can interfere with the incident wave causing a standing wave in the near field. As a consequence, the local pressure in the near field is doubled, and the particle velocity becomes zero.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "12778",
"title": "Group velocity",
"section": "Section::::In three dimensions.\n",
"start_paragraph_id": 40,
"start_character": 0,
"end_paragraph_id": 40,
"end_character": 210,
"text": "If the waves are propagating through an anisotropic (i.e., not rotationally symmetric) medium, for example a crystal, then the phase velocity vector and group velocity vector may point in different directions.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
20rtqo | how does apple make more money than google / android? | [
{
"answer": "Android isn't a company. Google doesn't completely own the phones that have their operating system in it, so they only get a portion of the profit from them. Apple owns the entire process. I also wouldn't be surprised if the profit on an individual phone was more for an iPhone than an android phone for a lot of reasons.\n\nAndroid phones might be a bigger part of the phone market, but they share that profit with tons of people, apple doesn't have to really share their profit.",
"provenance": null
},
{
"answer": "Because a large percentage of Android phones are low-end devices. There are no low-end Apple devices.\n\nThat makes a difference when it comes to their app stores.[ iOS customers outspend Android customers 5 to 1](_URL_0_). People who buy low-end devices are far less likely to purchase apps.\n\nApple, quite simply, has a superior business model. Yes, there are more Android devices out there, but the iOS devices are far, far more profitable.",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "856",
"title": "Apple Inc.",
"section": "Section::::Corporate affairs.:Finance.\n",
"start_paragraph_id": 224,
"start_character": 0,
"end_paragraph_id": 224,
"end_character": 234,
"text": "Apple amassed 65% of all profits made by the eight largest worldwide smartphone manufacturers in quarter one of 2014, according to a report by Canaccord Genuity. In the first quarter of 2015, the company garnered 92% of all earnings.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "30896038",
"title": "Google One Pass",
"section": "Section::::Priced Content/Subscriptions.\n",
"start_paragraph_id": 6,
"start_character": 0,
"end_paragraph_id": 6,
"end_character": 341,
"text": "Similar to the Android Market, Google shared in the revenue generated by all sales through One Pass. On its launch date, revenue was split between the publisher and Google in a 90%/10% respectively. That was significantly less than Apple's competing product that provided only 70% of the revenue to the publisher and kept the remaining 30%.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "856",
"title": "Apple Inc.",
"section": "",
"start_paragraph_id": 5,
"start_character": 0,
"end_paragraph_id": 5,
"end_character": 994,
"text": "Apple is well known for its size and revenues. Its worldwide annual revenue totaled $265billion for the 2018 fiscal year. Apple is the world's largest technology company by revenue and one of the world's most valuable companies. It is also the world's third-largest mobile phone manufacturer after Samsung and Huawei. In August 2018, Apple became the first public U.S. company to be valued at over $1 trillion. The company employs 123,000 full-time employees and maintains 504 retail stores in 24 countries . It operates the iTunes Store, which is the world's largest music retailer. , more than 1.3 billion Apple products are actively in use worldwide. The company also has a high level of brand loyalty and is ranked as the world's most valuable brand. However, Apple receives significant criticism regarding the labor practices of its contractors, its environmental practices and unethical business practices, including anti-competitive behavior, as well as the origins of source materials.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "28846270",
"title": "Mobile business intelligence",
"section": "Section::::History.:Purpose-built Mobile BI apps.\n",
"start_paragraph_id": 22,
"start_character": 0,
"end_paragraph_id": 22,
"end_character": 404,
"text": "Google Inc.’s Android has overtaken Apple Inc.’s iOS in the wildly growing arena of app downloads. In the second quarter of 2011, 44% of all apps downloaded from app marketplaces across the web were for Android devices while 31% were for Apple devices, according to new data from ABI Research. The remaining apps were for various other mobile operating systems, including BlackBerry and Windows Phone 7.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "2988323",
"title": "Intelligent enterprise",
"section": "Section::::Real Life Examples.:Apple.\n",
"start_paragraph_id": 15,
"start_character": 0,
"end_paragraph_id": 15,
"end_character": 439,
"text": "Apple when introduced to the highly competitive computer environment retailed for about $2000 but cost less than $500, as over 70% of its components were outsourced (Choo 1995). Instead, Apple focused on the design, logistics, software and product assembly. Due to the concentration of only a few knowledge adding services, Apple was able to rise to the top of the highly competitive PC market and attain great sales (Gupta, Sharma 2004).\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "2593693",
"title": "History of Apple Inc.",
"section": "Section::::2007–2011: Apple Inc., iPhone, iOS, iPad.:Resurgence compared to Microsoft.\n",
"start_paragraph_id": 116,
"start_character": 0,
"end_paragraph_id": 116,
"end_character": 644,
"text": "Since 2005, Apple's revenues, profits, and stock price have grown significantly. On May 26, 2010, Apple's stock market value overtook Microsoft's, and Apple's revenues surpassed those of Microsoft in the third quarter of 2010. After giving their results for the first quarter of 2011, Microsoft's net profits of $5.2 billion were lower for the quarter than those of Apple, which earned $6 billion in net profit for the quarter. The late April announcement of profits by the companies marked the first time in 20 years that Microsoft's profits had been lower than Apple's, a situation described by \"Ars Technica\" as \"unimaginable a decade ago\".\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "33496160",
"title": "Mobile app",
"section": "Section::::Overview.\n",
"start_paragraph_id": 9,
"start_character": 0,
"end_paragraph_id": 9,
"end_character": 482,
"text": "Market research firm Gartner predicted that 102 billion apps would be downloaded in 2013 (91% of them free), which would generate $26 billion in the US, up 44.4% on 2012's US$18 billion. By Q2 2015, the Google Play and Apple stores alone generated $5 billion. An analyst report estimates that the app economy creates revenues of more than €10 billion per year within the European Union, while over 529,000 jobs have been created in 28 EU states due to the growth of the app market.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
14zr4q | What was Richard III's role in the end of The War of the Roses? | [
{
"answer": "Is this related to your [High School English Assignment](_URL_0_) or a separate homework?",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "90444",
"title": "House of York",
"section": "Section::::Wars of the Roses.\n",
"start_paragraph_id": 8,
"start_character": 0,
"end_paragraph_id": 8,
"end_character": 478,
"text": "The Wars of the Roses began the following year, with the First Battle of St Albans. Initially, Richard aimed only to purge his Lancastrian political opponents from positions of influence over the king. It was not until October 1460 that he claimed the throne for the House of York. In that year the Yorkists had captured the king at the battle of Northampton, but victory was short-lived. Richard and his second son Edmund were killed at the battle of Wakefield on 30 December.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "50072",
"title": "Battle of Barnet",
"section": "Section::::Background.\n",
"start_paragraph_id": 5,
"start_character": 0,
"end_paragraph_id": 5,
"end_character": 1071,
"text": "The Wars of the Roses were a series of conflicts between various English lords and nobles in support of two different royal families descended from Edward III. In 1461 the conflict reached a milestone when the House of York supplanted its rival, the House of Lancaster, as the ruling royal house in England. Edward IV, leader of the Yorkists, seized the throne from the Lancastrian king, Henry VI, who was captured in 1465 and imprisoned in the Tower of London. The Lancastrian queen, Margaret of Anjou, and her son, Edward of Lancaster, fled to Scotland and organised resistance. Edward IV crushed the uprisings and pressured the Scottish government to force Margaret out; the House of Lancaster went into exile in France. As the Yorkists tightened their hold over England, Edward rewarded his supporters, including his chief adviser, Richard Neville, 16th Earl of Warwick, elevating them to higher titles and awarding them land confiscated from their defeated foes. The Earl grew to disapprove of the King's rule, however, and their relationship later became strained.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "30275656",
"title": "Wars of the Roses",
"section": "",
"start_paragraph_id": 4,
"start_character": 0,
"end_paragraph_id": 4,
"end_character": 977,
"text": "The ascension of Richard III occurred under a cloud of controversy, and shortly after assuming the throne, the wars sparked anew with Buckingham's rebellion, as many die-hard Yorkists abandoned Richard to join Lancastrians. While the rebellions lacked much central coordination, in the chaos the exiled Henry Tudor, son of Henry VI's half-brother Edmund Earl of Richmond, and the leader of the Lancastrian cause, returned to the country from exile in Brittany at the head of an army of combined Breton and English forces. Richard avoided direct conflict with Henry until the Battle of Bosworth Field on 22 August 1485. After Richard III was killed and his forces defeated at Bosworth Field, Henry assumed the throne as Henry VII and married Elizabeth of York, the eldest daughter and heir of Edward IV, thereby uniting the two claims. The House of Tudor ruled the Kingdom of England until 1603, with the death of Elizabeth I, granddaughter of Henry VII and Elizabeth of York. \n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "29731255",
"title": "The Wars of the Roses (adaptation)",
"section": "",
"start_paragraph_id": 1,
"start_character": 0,
"end_paragraph_id": 1,
"end_character": 762,
"text": "The Wars of the Roses was a 1963 theatrical adaptation of William Shakespeare's first historical tetralogy (\"1 Henry VI\", \"2 Henry VI\", \"3 Henry VI\" and \"Richard III\"), which deals with the conflict between the House of Lancaster and the House of York over the throne of England, a conflict known as the Wars of the Roses. The plays were adapted by John Barton, and directed by Barton himself and Peter Hall at the Royal Shakespeare Theatre. The production starred David Warner as Henry VI, Peggy Ashcroft as Margaret of Anjou, Donald Sinden as the Duke of York, Paul Hardwick as the Duke of Gloucester, Janet Suzman as Joan la Pucelle, Brewster Mason as the Earl of Warwick, Roy Dotrice as Edward IV, Susan Engel as Queen Elizabeth and Ian Holm as Richard III.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "56388572",
"title": "Richard III (2016 film)",
"section": "",
"start_paragraph_id": 2,
"start_character": 0,
"end_paragraph_id": 2,
"end_character": 415,
"text": "\"Richard III\" aired in 2016 as part of the concluding cycle \"The Hollow Crown: The Wars of the Roses\", along with a two-part adaptation of the other plays in Shakespeare's first tetralogy, \"Henry VI, Part 1\", \"Henry VI, Part 2\" and \"Henry VI, Part 3\". Benedict Cumberbatch was nominated for the British Academy Television Award for Best Leading Actor and \"The Wars of the Roses\" was nominated for Best Mini-Series.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "7310973",
"title": "Issue of Edward III of England",
"section": "Section::::Sons.\n",
"start_paragraph_id": 4,
"start_character": 0,
"end_paragraph_id": 4,
"end_character": 1147,
"text": "The Wars of the Roses were civil wars over the throne of the Kingdom of England fought among the descendants of King Edward III through his five surviving adult sons. Each branch of the family had competing claims through seniority, legitimacy, and/or the sex of their ancestors, despite patriarchal rule of the day. Thus, the senior Plantagenet line was ended with the death of Richard II, but not before the execution of Thomas of Woodstock for treason. The heirs presumptive through Lionel of Antwerp were passed over in favour of the powerful Henry IV, descendant of Edward III through John of Gaunt. These Lancaster kings initially survived the treason of their Edmund of Langley (York) cousins but eventually were deposed by the merged Lionel/Edmund line in the person of Edward IV. Internecine killing among the Yorks left Richard III as king, supported and then betrayed by his cousin Buckingham, the descendant of Thomas of Woodstock. Finally, the Yorks were dislodged by the remaining Lancastrian candidate, Henry VII of the House of Tudor, another descendant of John of Gaunt, who married the eldest daughter of Yorkist King Edward IV.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "333709",
"title": "List of English civil wars",
"section": "",
"start_paragraph_id": 7,
"start_character": 0,
"end_paragraph_id": 7,
"end_character": 398,
"text": "BULLET::::- Wars of the Roses (1455–1487) – in England and Wales; Richard III was the last English king to die in combat, The Wars of the Roses were a series of dynastic wars for the throne of England. They were fought between supporters of two rival branches of the royal House of Plantagenet, the houses of Lancaster and York. They were fought in several sporadic episodes between 1455 and 1487.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
9fru76 | why is there such a significant price gap between canadian crude oil prices and us crude oil prices? | [
{
"answer": "There isn't a price gap for West Texas crude; that is what the stock market is looking at when it says the price of oil is xx per barrel.\n\n\nThe price gap is caused by a misnomer: what you are calling Canadian crude oil is not actual crude. It is bitumen. Now, for Canada to ship it, it needs to be diluted so that it flows better.\n\nBitumen is more costly to refine into petroleum products than crude oil. The current infrastructure in place for Canada to get this to market has minimal capacity going to tidewater (oceans) within Canada. This means that the only market purchasing the raw product from Canada is the U.S., and typically it's about 50 to 60 percent of the oil price. \n\nCanada would get a better price per barrel if more markets were available to sell to. This is why pipelines to tidewater are important. Environmental concerns are the pushback to this. But what is overlooked is that oil is already going to tidewater by rail. Just not in any capacity to affect market prices.",
"provenance": null
},
{
"answer": "It's a function of quality, cost to extract, and transportation cost. Not all crude oil is the same, so you are not paying more for the same product.\n\nCheaper crudes tend to have more undesirable byproducts like sulfur or nitrogen compounds. Also, when the crude oil is refined, different crudes have different conversion rates to final products like diesel and gasoline. So crudes that make more, and higher-quality, final products are more expensive.",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "849508",
"title": "Peak oil",
"section": "Section::::Possible consequences.:Oil prices.:Historical oil prices.\n",
"start_paragraph_id": 91,
"start_character": 0,
"end_paragraph_id": 91,
"end_character": 591,
"text": "More recently, between 2011 and 2014 the price of crude oil was relatively stable, fluctuating around $US100 per barrel. It dropped sharply in late 2014 to below $US70 where it remained for most of 2015. In early 2016 it traded at a low of $US27. The price drop has been attributed to both oversupply and reduced demand as a result of the slowing global economy, OPEC reluctance to concede market share, and a stronger US dollar. These factors may be exacerbated by a combination of monetary policy and the increased debt of oil producers, who may increase production to maintain liquidity.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "1150532",
"title": "Brent Crude",
"section": "Section::::Futures market trading.:Pricing.\n",
"start_paragraph_id": 14,
"start_character": 0,
"end_paragraph_id": 14,
"end_character": 861,
"text": "The US Energy Information Administration attributes the price spread between WTI and Brent to an oversupply of crude oil in the interior of North America (WTI price is set at Cushing, Oklahoma) caused by rapidly increasing oil production from Canadian oil sands and tight oil formations such as the Bakken Formation, Niobrara Formation, and Eagle Ford Formation. Oil production in the interior of North America has exceeded the capacity of pipelines to carry it to markets on the Gulf Coast and east coast of North America; as a result, the oil price on the US and Canadian east coast and parts of the US Gulf Coast since 2011 has been set by the price of Brent Crude, while markets in the interior still follow the WTI price. Much US and Canadian crude oil from the interior is now shipped to the coast by railroad, which is much more expensive than pipeline.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "212685",
"title": "Ralph Klein",
"section": "Section::::Premier.:The Alberta Advantage: Klein's austerity campaign.\n",
"start_paragraph_id": 20,
"start_character": 0,
"end_paragraph_id": 20,
"end_character": 434,
"text": "From the mid-1980s to September 2003, the inflation adjusted price of a barrel of crude oil on NYMEX was generally under $25/barrel. A rebound in the price of oil worldwide led to big provincial surpluses in Alberta since the mid-1990. During 2004, the price of oil rose above $40, and then $50. A series of events led the price to exceed $60 by August 11, 2005, leading to a record-speed hike that reached $75 by the middle of 2006.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "6230797",
"title": "Connacher Oil and Gas",
"section": "Section::::OPEC, Fracking put pressure on small bitumen production players.\n",
"start_paragraph_id": 16,
"start_character": 0,
"end_paragraph_id": 16,
"end_character": 1243,
"text": "In late 2014, as the global demand for oil slows down, and production of crude oil remains high in the United States, Canada and in Organization of the Petroleum Exporting Countries, the oil market collapsed into a bear market. While the decision by OPEC to \"hold their production steady at 30 million bpd\" contributed to the continued price decline of oil, there was a rebound in oil futures on 1 December 2014. The price of West Texas Intermediate (WTI), the benchmark for North American crude dropped to $US68.93 and to the decline in the price of Western Canadian Select, which is the benchmark for emerging heavy, high TAN (acidic) crudes to US$51.93. By early December 2014, Connacher was one of several oilsands bitumen-focused producers to struggle financially due to the drop in the price of oil and a tightening of capital markets. Others include OPTI Canada Inc., Southern Pacific Resources Corp., and Sunshine Oilsands Ltd.. On 1 December Connacher which is $1.05 billion in debt hired BMO Capital Markets advisors to undertake a review process of its \"liquidity and capital structure.\" If the price of WTI per barrel was US$75, the Connacher could generate $C70 million of EBITDA in 2015, but it has $90 million in debt payments.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "1197343",
"title": "2000s energy crisis",
"section": "",
"start_paragraph_id": 1,
"start_character": 0,
"end_paragraph_id": 1,
"end_character": 499,
"text": "From the mid-1980s to September 2003, the inflation-adjusted price of a barrel of crude oil on NYMEX was generally under US$25/barrel. During 2003, the price rose above $30, reached $60 by 11 August 2005, and peaked at $147.30 in July 2008. Commentators attributed these price increases to many factors, including Middle East tension, soaring demand from China, the falling value of the U.S. dollar, reports showing a decline in petroleum reserves, worries over peak oil, and financial speculation.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "381465",
"title": "National Energy Program",
"section": "Section::::Price of oil.\n",
"start_paragraph_id": 49,
"start_character": 0,
"end_paragraph_id": 49,
"end_character": 511,
"text": "Throughout the 1950s, 1960s, and 1970s, the retail price of petroleum in Canada consistently remained close to the price of gasoline in the United States (and at oftentimes lower than prices seen in the U.S., especially during the price spikes of the 1970s). Following NEP (which raised the price of fuel in the West and coincided with a hike in provincial gas taxes in Ontario and Quebec), the retail price of gasoline in Canada became noticeably higher than in the U.S. (a trend which continues to this day).\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "21152128",
"title": "2001 world oil market chronology",
"section": "",
"start_paragraph_id": 42,
"start_character": 0,
"end_paragraph_id": 42,
"end_character": 387,
"text": "October 18 Crude Oil for November delivery falls to its lowest level since August 1999 on the New York Mercantile Exchange (NYMEX). Light, sweet crude falls 50 cents per barrel to settle at $21.31 per barrel. Brent crude for. Poor economic prospects in the next few months, and OPEC's inability to respond so far are seen as factors contributing to the sliding prices of crude oil. (OD)\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
9c4ago | who are you genetically closer to? | [
{
"answer": "Your kids. As you said, kids are 50% \"you\". But your siblings are only 25% \"you\" on average. \n\nIt is true that they are made 50% your mum and 50% your dad as well. But consider that your dad's DNA is made of copy A and B of each chromosome, and your mum copy C and D. Then you could have got copy A from your dad and C from your mum, making you A-C for example, while your siblings could have got A-C (same as you), A-D, B-C, or B-D. Repeat this idea for all 23 pairs of chromosomes. So by probability, only in 25% of cases will you have the same combination.",
"provenance": null
},
{
"answer": "You share 50% of your DNA with both your children and your parents. So you are exactly as close to each of them.\n\nTechnically you can be genetically closer to your sibling. But at the same time you might not be. It all depends on which genes you picked up.\n\nHumans have 23 pairs of chromosomes and pass down 1 of each pair to their children.\n\nTheoretically, then, there is a chance that if your parents pass down one set of chromosomes to you (let's call it set a) and pass down the other set to your sibling (set b), then technically you share 0% of your DNA with your siblings. \n\n(Well, this is actually incorrect as other things happen when the gametes are being created, but this is ELI5.)\n\nThe reverse is also true and you can share 100% of your DNA with your siblings. (Which is also very unlikely unless you are identical twins and thus split from the same fertilised egg.)\n\nSo tldr to your question: both. It depends on what genes were passed down to your siblings.",
"provenance": null
},
{
"answer": "Theoretically you can be 0% similar with a sibling. That would be very statistically unlikely. \n\nYou can be no less than 50% similar with a child or parent.",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "21549439",
"title": "Hamiltonian spite",
"section": "Section::::Doubts about the adaptive nature of spiteful behaviour.\n",
"start_paragraph_id": 9,
"start_character": 0,
"end_paragraph_id": 9,
"end_character": 358,
"text": "Second, presuming a panmictic population, the vast majority of pairs of individuals exhibit a roughly average level of relatedness. For a given individual, the majority of others are not worth helping or harming. While it is easy to identify the few most closely related ones (see: kin recognition), it is hard to identify the most genetically distant ones.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "682482",
"title": "Human",
"section": "Section::::Biology.:Biological variation.\n",
"start_paragraph_id": 77,
"start_character": 0,
"end_paragraph_id": 77,
"end_character": 299,
"text": "No two humans—not even monozygotic twins—are genetically identical. Genes and environment influence human biological variation in visible characteristics, physiology, disease susceptibility and mental abilities. The exact influence of genes and environment on certain traits is not well understood.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "265570",
"title": "Kinship",
"section": "Section::::Biology, psychology and kinship.:Evolutionary psychology.\n",
"start_paragraph_id": 84,
"start_character": 0,
"end_paragraph_id": 84,
"end_character": 1173,
"text": "The other approach, that of Evolutionary psychology, continues to take the view that genetic relatedness (or genealogy) is key to understanding human kinship patterns. In contrast to Sahlin's position (above), Daly and Wilson argue that \"the categories of 'near' and 'distant' do not 'vary independently of consanguinal distance', not in any society on earth.\" (Daly et al. 1997, p282). A current view is that humans have an inborn but culturally affected system for detecting certain forms of genetic relatedness. One important factor for sibling detection, especially relevant for older siblings, is that if an infant and one's mother are seen to care for the infant, then the infant and oneself are assumed to be related. Another factor, especially important for younger siblings who cannot use the first method, is that persons who grew up together see one another as related. Yet another may be genetic detection based on the major histocompatibility complex (See Major Histocompatibility Complex and Sexual Selection). This kinship detection system in turn affects other genetic predispositions such as the incest taboo and a tendency for altruism towards relatives.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "21549439",
"title": "Hamiltonian spite",
"section": "Section::::Theories on altruism and spitefulness.\n",
"start_paragraph_id": 3,
"start_character": 0,
"end_paragraph_id": 3,
"end_character": 345,
"text": "W. D. Hamilton published an influential paper on altruism in 1964 to explain why genetic kin tend to help each other. He argued that genetically related individuals are likely to carry the copies of the same alleles; thus, helping kin may ensure that copies of the actors' alleles pass onto next generations of both the recipient and the actor.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "4816754",
"title": "Human genetic variation",
"section": "",
"start_paragraph_id": 2,
"start_character": 0,
"end_paragraph_id": 2,
"end_character": 333,
"text": "No two humans are genetically identical. Even monozygotic twins (who develop from one zygote) have infrequent genetic differences due to mutations occurring during development and gene copy-number variation. Differences between individuals, even closely related individuals, are the key to techniques such as genetic fingerprinting.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "11717690",
"title": "Wildstorm Universe",
"section": "Section::::Fictional history.:Powers of Gen-active humans.\n",
"start_paragraph_id": 17,
"start_character": 0,
"end_paragraph_id": 17,
"end_character": 408,
"text": "All Gen-Active humans have a telepathic link to each other. This link usually is very weak, even unnoticeable to most, but stronger between relatives (they sometimes can feel when a relative is in extreme pain). The link also allows sensitive Gen-Actives to notice the presence of other Gen-Actives. In case of Team 7, the link also made the sum of their powers greater than each member's individual powers.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "148979",
"title": "Queer theory",
"section": "Section::::Intersex and the role of biology.\n",
"start_paragraph_id": 34,
"start_character": 0,
"end_paragraph_id": 34,
"end_character": 538,
"text": "Scientists who have written on the conceptual significance of intersex individuals include Anne Fausto-Sterling, Katrina Karkazis, Rebecca Jordan-Young, and Joan Roughgarden. While the medical literature focuses increasingly on genetics of intersex traits, and even their deselection, some scholars on the study of culture, such as Barbara Rogoff, argue that the traditional distinction between biology and culture as independent entities is overly simplistic, pointing to the ways in which biology and culture interact with one another.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
1jdwsl | why does white meat chicken always taste dryer than dark meat chicken? | [
{
"answer": "There's more fat in the dark meat. Fats and oils will keep the dark meat moist even when the water in the white meat has cooked off.\n\n > [Dark meat contains 2.64 times more saturated fat than white meat, per gram of protein.](_URL_0_)",
"provenance": null
},
{
"answer": "The white meat is always drier than dark because the dark meat has more fat in it. Fat is not as easily evaporated as water, so when you cook a whole chicken the dark meat holds more liquid and ends up being more moist. A way to remedy this is to stuff butter under the skin of the chicken. It creates a barrier that prevents water from evaporating as easily, making it moist and way more delicious. ",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "2019285",
"title": "White meat",
"section": "",
"start_paragraph_id": 2,
"start_character": 0,
"end_paragraph_id": 2,
"end_character": 493,
"text": "In nutritional studies however, \"white meat\" includes poultry and fish, but excludes all mammal flesh, which is considered red meat. The United States Department of Agriculture classifies meats as red if the myoglobin level is higher than 65%. This categorization is controversial as some types of fish, such as tuna, are red when raw and turn white when cooked; similarly, certain types of poultry that are sometimes grouped as \"white meat\" are actually red when raw, such as duck and goose.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "2019285",
"title": "White meat",
"section": "Section::::Poultry.\n",
"start_paragraph_id": 4,
"start_character": 0,
"end_paragraph_id": 4,
"end_character": 441,
"text": "Within poultry, there are two types of meats—white and dark. The different colours are based on the different locations and uses of the muscles. White meat can be found within the breast of a chicken or turkey. Dark muscles are fit to develop endurance or long-term use, and contain more myoglobin than white muscles, allowing the muscle to use oxygen more efficiently for aerobic respiration. White meat contains large amounts of protein. \n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "37501030",
"title": "Solid white (chicken plumage)",
"section": "",
"start_paragraph_id": 1,
"start_character": 0,
"end_paragraph_id": 1,
"end_character": 250,
"text": "In poultry standards, solid white is coloration of plumage in chickens (\"Gallus gallus domesticus\") characterized by a uniform pure white color across all feathers, which is not generally associated with depigmentation in any other part of the body.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "23197",
"title": "Poultry",
"section": "Section::::Poultry as food.:Cuts of poultry.\n",
"start_paragraph_id": 49,
"start_character": 0,
"end_paragraph_id": 49,
"end_character": 854,
"text": "Dark meat, which avian myologists refer to as \"red muscle\", is used for sustained activity—chiefly walking, in the case of a chicken. The dark colour comes from the protein myoglobin, which plays a key role in oxygen uptake and storage within cells. White muscle, in contrast, is suitable only for short bursts of activity such as, for chickens, flying. Thus, the chicken's leg and thigh meat are dark, while its breast meat (which makes up the primary flight muscles) is white. Other birds with breast muscle more suitable for sustained flight, such as ducks and geese, have red muscle (and therefore dark meat) throughout. Some cuts of meat including poultry expose the microscopic regular structure of intracellular muscle fibrils which can diffract light and produce iridescent colours, an optical phenomenon sometimes called structural colouration.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "4012192",
"title": "Reconstituted meat",
"section": "Section::::Properties and production.\n",
"start_paragraph_id": 4,
"start_character": 0,
"end_paragraph_id": 4,
"end_character": 541,
"text": "The characteristics of dark meat from poultry; such as its color, low plasticity, and high fat content; are caused by myoglobin, a pigmented chemical compound found in muscle tissue that undergoes frequent use. Because domestic poultry rarely fly, the flight muscles in the breast contain little myoglobin and appear white. Dark meat which is high in myoglobin is less useful in industry, especially fast food, because it is difficult to mold into shapes. Processing dark meat into a slurry makes it more like white meat, easier to prepare.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "2019285",
"title": "White meat",
"section": "",
"start_paragraph_id": 1,
"start_character": 0,
"end_paragraph_id": 1,
"end_character": 530,
"text": "In culinary terms, white meat is meat which is pale in color before and after cooking. The most common kind of white meat is the lighter-colored meat of poultry (light meat), coming from the breast, as contrasted with dark meat from the legs. Poultry white (\"light\") meat is made up of fast-twitch muscle fibres, while red (\"dark\") meat is made up of muscles with fibres that are slow-twitch. In traditional gastronomy, white meat also includes rabbit, the flesh of milk-fed young mammals (in particular veal and lamb), and pork.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "52993",
"title": "Roasting",
"section": "Section::::Meat.\n",
"start_paragraph_id": 15,
"start_character": 0,
"end_paragraph_id": 15,
"end_character": 437,
"text": "Red meats such as beef, lamb, and venison, and certain game birds are often roasted to be \"pink\" or \"rare\", meaning that the center of the roast is still red. Roasting is a preferred method of cooking for most poultry, and certain cuts of beef, pork, or lamb. Although there is a growing fashion in some restaurants to serve \"rose pork\", temperature monitoring of the center of the roast is the only sure way to avoid foodborne disease.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
r26xg | somalia and what's happening there | [
{
"answer": "Today 'Somalia' is basically a geographic expression; the country has had no central government since the 80s. The previous government did a lot of bad things in the north so when that government (Siad Barre) fell they were able to step away and run their own business. So the north is a pretty decent place. They have their own (unrecognized) government and their own (unrecognized) currency. You can even go there as a tourist and you'll be relatively safe.\n\nThe southern part of the country is a different story. When the Barre government collapsed there was no more governmental authority. Warlords and clan chiefs stepped into the void and spent about 15 years fighting one another in an anarchic free for all.\n\nEventually a group of Islamists called the Union of Islamic Courts was created to attempt to end the civil war and restore stability. While they were fighting to take over the south the U.S. gave millions of dollars worth of weapons to the warlords. The UIC won anyways, and for a little while they were doing some good. They were more moderate than the Taliban and while some of their punishments were barbaric they made progress in restoring a semblance of order. They also got the airports and port running for the first time since things went to shit.\n\nOnce the warlords were beaten there was a lot of concern about terrorists going to Somalia. So Ethiopia, whose military is largely a product of American money, went in (presumably with U.S. backing) and attempted to overthrow the UIC and install the U.N. backed provisional government. The provisional government has a lot of problems though, it's mostly made up of warlords and has no presence inside Somalia-- they have to hold their meetings in Kenya.\n\nEthiopia tried occupying Somalia for a little while but since they'd gotten rid of the UIC and didn't have the resources to stay that long they got out. The UIC of 2006 didn't really exist by this time, it had been splintered.
The previous leadership tended to be older and more moderate, but after the Ethiopian intervention the Islamists who were left fighting were the youngest, fiercest, and most extreme. They call themselves al-Shabaab (meaning 'youth movement'.) While the UIC cared more about restoring law in Somalia than international terrorism, al-Shabaab is much more sympathetic to terrorists who want to carry out attacks abroad. They've provided a safe haven to some al-Qaeda guys and have carried out a few attacks of their own elsewhere in Africa. Kenya and Uganda have both been bombed by al-Shabaab, in retaliation for support they've given to the provisional government.\n\nSo Somalia is a complex place, with regions that look radically different from one another. The north will probably continue to do well and likely will be recognized as an independent state some day. There's no reason to be positive about the south though; that's only getting worse and worse. Especially since we've now got all kinds of drones flying around the place.\n",
"provenance": null
},
{
"answer": " > How did Somalia reach the point of state failure?\n\nSomalia's eventual collapse has its roots in the Cold War. During the 1970s, the US and China were essentially fighting a low-level proxy war against the USSR in Ethiopia following Haile Selassie's ouster from power by the Derg, a nominally socialist military junta.\n\nHowever, both the US and the USSR provided the Derg with limited support. Initially, the USSR was more interested in retaining its influence over Somalia and Eritrea (which at that time was still part of Ethiopia although in open rebellion against the state) and any weapons that it sent to Ethiopia were sent via Somalia and Eritrea.\n\nThe turning point in Soviet relations with Ethiopia came in 1977 following another coup by [Mengistu Haile Mariam](_URL_0_) who had most of his political opponents in the Derg assassinated and assumed power. Mengistu decided to strengthen Ethiopia's relationships with other socialist countries, particularly the USSR and East Germany.\n\nFollowing Mengistu's coup and his subsequent overtures to the USSR, the Soviets began sending huge amounts of weapons to Ethiopia including helicopters, tanks, and fighter jets.\n\nMeanwhile, the leader of Somalia, [Mohammed Siad Barre](_URL_2_), straddled the line between Soviet support and his ties with the conservative Arab governments of Saudi Arabia and Egypt, which were considered allies of the US. Siad Barre recognized that he could not implement large-scale socialist reforms in Somalia and still retain the support of the numerous Somali clans on whose patronage he relied to keep power.\n\nThese factors led Siad Barre to the [Ogaden War](_URL_4_), in a region that straddled the border of Western Somalia and Eastern Ethiopia, which pitted Ethiopian troops against Somalian troops and ethnic Somalis living in Ethiopia.\n\nThis war eventually came to be known as the Horn of Africa crisis. 
The right wing of the US political elite viewed Carter's reaction to the events in the Horn of Africa as too weak towards the Soviets.\n\nRonald Reagan, who would soon be launching his bid for president, spoke of the Crisis in almost apocalyptic terms saying that \"More immediately, control of the Horn of Africa would give Moscow the ability to destabilize those governments on the Arabian peninsula which have proven themselves to be strongly anti-Communist... in a few years we may be faced with the prospect of a Soviet empire of proteges and dependencies stretching from Addis Ababa to Capetown.\"\n\nTaking a cue from these statements from the West, Siad Barre eventually abandoned Soviet support under the assumption that his government would receive support from the US and Arab states, which never fully materialized. Faced with the prospect of declining revenues, Siad Barre attempted to impose new levies and taxes in the provinces, which reignited clan loyalties over state loyalties in the Somali hinterland. \n\nIn 1988, in a final desperate gambit, Siad Barre attempted to ally himself with the Mengistu regime in Addis, a move which turned the clans on which he relied for popular support against him. The Somalian state began to crumble as inter-clan warfare broke out and by 1990 Somalia had no real government to speak of.\n\n > Who is running Somalia?\n\nToday, Somalia can essentially be divided into three parts: Somalia, Puntland, and Somaliland - see [this map](_URL_1_). Somaliland and Puntland each have their own autonomous governments that are somewhat democratic, as well as their own military forces.\n\nSomalia is a different story. 
While the central government of Somalia is theoretically governed by a \"transitional federal government\", the government has almost no power and is largely administered from Nairobi.\n\n > What's happening with the UN peacekeepers there?\n\nThe peacekeeping force in Somalia is known as the African Union Mission in Somalia, or [AMISOM](_URL_6_), and operates with a joint mandate from the UN and African Union. The force is about 10,000 strong (soon to increase to 17,000), is primarily made up of Ugandan soldiers with some Burundian soldiers, and is really only present in Mogadishu to fight back forces of [Al-Shabaab](_URL_5_).\n\nAdditionally, recent attacks by Al-Shabaab in Kenya and Ethiopia have prompted those governments to launch attacks into Somalia to try to capture key Shabaab strongholds - again see [map](_URL_1_).\n\n > Is there any hope for the future?\n\nA [recent conference](_URL_3_) on the situation in Somalia outlined a plan for a sort of federated style of government that, while far from promising, is one of the better suggested solutions in recent memory (imo).\n\nOne development that may also help is the declining support for Al-Shabaab following its refusal to allow Western food aid to reach tens of thousands of people who suffered through one of the worst droughts, and subsequent famines, in recent history.\n\n > Safe zones?\n\nSomaliland and Puntland are far safer than Southern Somalia. While Mogadishu is occupied by AMISOM forces, attacks in the city are still frequent.\n\n",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "54749133",
"title": "Marry-your-rapist law",
"section": "Section::::Campaigns for repeal.:Somalia.\n",
"start_paragraph_id": 49,
"start_character": 0,
"end_paragraph_id": 49,
"end_character": 516,
"text": "Since January 1991, Somalia has been in a [[Somali Civil War|state of civil war]], without a functioning central government that controls the entire country. The northwestern region of [[Somaliland]] unilaterally declared independence in May 1991, while the northeastern region of [[Puntland]] unilaterally declared its regional autonomy within Somalia in 1998; both gradually evolved their own legal systems, and made efforts to outlaw the practice of forcing a rape victim to marry her attacker in the late 2010s.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "56087344",
"title": "Sangnoksu Unit",
"section": "Section::::Sangnoksu Unit in Somalia (1993. 7 ~ 1994. 3).:Dispatch background.\n",
"start_paragraph_id": 6,
"start_character": 0,
"end_paragraph_id": 6,
"end_character": 1776,
"text": "Somalia, located in the Horn of Africa, has a population of about 95.6 million people, but there are 1.5 million refugees because of long-term civil war. Somalian are major(85 percent of the population), but there is a multiracial group such as Helman, Arabs, and Indian. Somalia was originally an Islamic Emirate since the end of the 19th century, but it became independent from Britain and Italy on July 1, 1960. Although the coup regime assumed power in 1969, a large and small civil war continued to ensue. In January 1991, the United Somali Congress (USC), a group of Somali rebels, expelled the Barre regime of the Somali Revolutionary Socialist Party (SRSP), which has continued to reign after the coup d'état, which has continued its dictatorship and nepotism. But in the United Somali Congress (USC), conflicts between president Mahdi and chairman of Aidid have been getting worse and consequently the civil war of anarchy has been lasted. The United Nations established the United Nations Operation in Somalia I (UNOSOM I) in April 1992 for peace and reconstruction of Somalia under the United Nations Security Council Resolution 733. However, the civil war continued as the infighting continued, and the state of anarchy continued. In December 1992, the U.S.-led multinational force was dispatched to Somalia, and the United Nations Operation in Somalia II (UNOSOM II) was established in March 1993. In February 1993, South Korea decided to dispatch troops after a field survey and three related ministries’ meeting in order to actively participate in the UN peacekeeping effort and enhance international status. On May 18, 1993, South Korea dispatched 516 members to carry out UN peacekeeping operations, which were held in Mogadishu from July 1993 to March 1994.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "27358",
"title": "Somalia",
"section": "Section::::History.:Federal government.\n",
"start_paragraph_id": 84,
"start_character": 0,
"end_paragraph_id": 84,
"end_character": 664,
"text": "The Federal Government of Somalia, the first permanent central government in the country since the start of the civil war, was later established in August 2012. By 2014, Somalia was no longer at the top of the fragile states index, dropping to second place behind South Sudan. UN Special Representative to Somalia Nicholas Kay, European Union High Representative Catherine Ashton and other international stakeholders and analysts have also begun to describe Somalia as a \"fragile state\" that is making some progress towards stability. In August 2014, the Somali government-led Operation Indian Ocean was launched against insurgent-held pockets in the countryside.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "51329",
"title": "Famine",
"section": "Section::::Regional history.:Africa.\n",
"start_paragraph_id": 44,
"start_character": 0,
"end_paragraph_id": 44,
"end_character": 267,
"text": "In 1992 Somalia became a war zone with no effective government, police, or basic services after the collapse of the dictatorship led by Siad Barre and the split of power between warlords. This coincided with a massive drought, causing over 300,000 Somalis to perish.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "27358",
"title": "Somalia",
"section": "",
"start_paragraph_id": 6,
"start_character": 0,
"end_paragraph_id": 6,
"end_character": 626,
"text": "By mid-2012, the insurgents had lost most of the territory that they had seized, and a search for more permanent democratic institutions began. A new provisional constitution was passed in August 2012, which reformed Somalia as a federation. The same month, the Federal Government of Somalia was formed and a period of reconstruction began in Mogadishu. Somalia has maintained an informal economy, mainly based on livestock, remittances from Somalis working abroad, and telecommunications. It is a member of the United Nations, the Arab League, African Union, Non-Aligned Movement and the Organisation of Islamic Cooperation.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "36817379",
"title": "Federal Government of Somalia",
"section": "",
"start_paragraph_id": 1,
"start_character": 0,
"end_paragraph_id": 1,
"end_character": 347,
"text": "The Federal Government of Somalia (FGS) (, ) is the internationally recognised government of Somalia, and the first attempt to create a central government in Somalia since the collapse of the Somali Democratic Republic. It replaced the Transitional Federal Government of Somalia on 20 August 2012 with the adoption of the Constitution of Somalia.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "55532698",
"title": "14 October 2017 Mogadishu bombings",
"section": "Section::::Background.\n",
"start_paragraph_id": 8,
"start_character": 0,
"end_paragraph_id": 8,
"end_character": 448,
"text": "The United States had a military involvement in Somalia until 1994, and had then withdrawn. Earlier in 2017 the US designated Somalia a \"zone of active hostilities\" (allowing it to apply looser rules and oversight concerning the authorization of drone strikes and ground operations), and the deployment of regular US forces to Somalia was again authorized. This saw America’s ground forces in Somalia increase from about 50 in 2016 to 400 in 2017.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
7u6ttl | why almost no smartphone protective case has a cover for the camera glass? | [
{
"answer": "A decent quality smartphone will have a hard protective layer (e.g. gorilla glass) over the lens so it doesn't get scratched. It will resist scratches pretty well. \n\nOn the other hand, smartphone cases are made of cheaper materials, usually some kind of plastic, and are much easier to scratch. So if you had a case over the lens, over time, there would be a bunch of scratches in front of the lens, all your pictures would come out blurry and terrible. \n\nAlso, even if the case is nice and clear with no scratches, it will tend to add distortion, extra glare, and so forth to your photos. \n\nSo bottom line is, they make a cut-out for the camera so your photos aren't potato quality. ",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "47926181",
"title": "Nexus 6P",
"section": "Section::::Reception.\n",
"start_paragraph_id": 39,
"start_character": 0,
"end_paragraph_id": 39,
"end_character": 368,
"text": "iFixit gave the phone a 2 out of 10 in terms of repairability, praising the solid construction which improved durability, but mentions that it is \"very difficult\" to open the device without damaging the glass camera cover due to the unibody design, and panned the difficulty in replacing the screen and the adhesive holding the rear cover panels and battery in place.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "6929013",
"title": "Lens cover",
"section": "",
"start_paragraph_id": 1,
"start_character": 0,
"end_paragraph_id": 1,
"end_character": 300,
"text": "A lens cover or lens cap provides protection from scratches and minor collisions for camera and camcorder lenses. Lens covers come standard with most cameras and lenses. Some mobile camera phones include lens covers, such as the Sony Ericsson W800, the Sony Ericsson K750 and the Sony Ericsson K550.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "1864909",
"title": "Window film",
"section": "Section::::Primary properties.\n",
"start_paragraph_id": 19,
"start_character": 0,
"end_paragraph_id": 19,
"end_character": 1352,
"text": "Security films are applied to glass so when the glass is broken it holds together, preventing dangerous shards from flying about, or to make it more difficult for an intruder to gain entry. Typically applied to commercial glass, these films are made of heavy-gauge plastic and are intended to maintain the integrity of glass when subject to heavy impact. The most robust security films are capable of preventing fragmentation and the production of hazardous glass shards from forces such as bomb blasts. Some companies have even experimented with bullet ballistics and multiple layers of security film. Another key application for security window films (safety window films) is on large areas of \"flat glass\" such as storefront windows, sliding glass doors, and larger windows that are prone to hurricane damage. These security films, if applied properly, can also provide protection for vehicles. These security films are often tinted and can be up to 400 micrometers (µm) thick, compared to less than 50 µm for ordinary tint films. If anchored correctly, they can also provide protection for architectural glazing in the event of an explosion. A layer of film (of 100 µm thickness or greater) can prevent the ejection of spall when a projectile impacts on its surface, which otherwise creates small dagger-like shards of glass that can cause injury.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "37339282",
"title": "Nexus 4",
"section": "Section::::Reception.\n",
"start_paragraph_id": 26,
"start_character": 0,
"end_paragraph_id": 26,
"end_character": 534,
"text": "Some owners however complain that the all-glass construction leads to a phone that is fragile and easily broken. Additionally, if the earlier phones are left on a smooth surface, an alarm with vibration will cause the phone to \"walk\" off the surface and fall. The glass screen is also sensitive to breakage due to the thin plastic \"surround\" that leaves little margin if the edge of the phone is crushed in an impact or when dropped, making either the plastic \"bumper\" or better still, a well-made, impact-absorbing case a necessity.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "5162378",
"title": "Screen protector",
"section": "Section::::Materials.\n",
"start_paragraph_id": 6,
"start_character": 0,
"end_paragraph_id": 6,
"end_character": 582,
"text": "Screen protectors are made of either plastics, such as polyethylene terephthalate (PET) or thermoplastic polyurethane (TPU), or of tempered glass, similar to the device’s original screen they are meant to protect. Plastic screen protectors cost less than glass and are thinner, around thick, compared to for glass. At the same price, glass will resist scratches better than plastic, and feel more like the device's screen, though higher priced plastic protectors may be better than the cheapest tempered glass models, since glass will shatter or crack with sufficient impact force.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "667163",
"title": "Camera phone",
"section": "Section::::Laws.\n",
"start_paragraph_id": 55,
"start_character": 0,
"end_paragraph_id": 55,
"end_character": 801,
"text": "Camera phones, or more specifically, widespread use of such phones as cameras by the general public, has increased exposure to laws relating to public and private photography. The laws that relate to other types of cameras also apply to camera phones. There are no special laws for camera phones. Enforcing bans on camera phones has proven nearly impossible. They are small and numerous and their use is easy to hide or disguise, making it hard for law enforcement and security personnel to detect or stop use. Total bans on camera phones would also raise questions about freedom of speech and the freedom of the press, since camera phone ban would prevent a citizen or a journalist (or a citizen journalist) from communicating to others a newsworthy event that could be captured with a camera phone.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "10347740",
"title": "Safety and security window film",
"section": "Section::::Applications.\n",
"start_paragraph_id": 7,
"start_character": 0,
"end_paragraph_id": 7,
"end_character": 205,
"text": "Safety and security films are used where there is a potential for injury from broken glass (such as glass doors or overhead glazing). These films can be applied to toughened, annealed, or laminated glass.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
45lrm8 | How would Viking warbands choose their leader? | [
{
"answer": "The leader was the person who most obviously had the qualities of leadership (I'll explain more at the end). Those qualities included:\n\n\n*Good lineage. Was your father or father's father a great leader and an honorable person who paid their debts and was honest with their business? Did they come from a hospitable family that takes in travelers? Were they distinguished? Also there is a reason there is so much politics around the idea of marriage. Marriage during this period was a way of unifying families to create a stronger grouping, bring peace between the two, or a way for families to become a part of a dynasty. Since heterosexual unions came with the assumption of children, those children became the heirs to family lines that also tie to land rights. As such rulers were the ones that typically owned land or had legal precedence to land ownership (i.e. their land was taken from them and they have a \"birth right\" to fight for it back, like King Harald Hardrada).\n\n\n*Personal strength, bravery, honesty and masculinity, which are all tied to personal honor. If you were not the strongest or bravest then why would I follow you into battle? Leaders including Kings were at the front lines leading the attack and showing an example to their men. Also since warband leaders were responsible for distributing the booty, if you are dishonest, how do I know I won't get my fair share? And of course masculinity, which ties to strength and bravery but also to living in your role as a man. For instance, magic was believed to be the realm of women and while there were some men that practiced it they did so at the risk of being labeled ergi, which means lacking in respectable masculine values. This could lead to your wife being allowed to divorce you as well as adversely affecting your reputation for future business opportunities. 
As such any remark about one's manliness was taken seriously, to the point of challenging the offender to a holmgang (a one-on-one duel) or straight up killing the person (you would be legally protected, because if you didn't kill or challenge the person you were weak and thus embodied the label of ergi). As such a person who was unquestionably manly and honest (which also tied to Norse masculine values) was usually a candidate for leadership.\n\n\n*Good hamingja (luck). Is there a sense of luck around the person, or have things happened to them that show they have a bright future and their fate is aligned for greatness? There was a belief that luck was an entity that followed and favored certain people and it only made sense that you followed a lucky person so that you could reap the trickle-down of luck and other wealth from that person. This is something that is hard to objectively describe but the Norse were all about talking about people's hamingja. For instance Leif Eriksson is also known as Leif the Lucky since he saved a handful of men from a shipwreck that led to news of lands further West (Vinland). Luck can also leave you, and with that your support for leadership can leave you too.\n\n\nNow saying the choice is obvious is very subjective to the social dynamics that went on during the time. To say a leader was always chosen a certain way is not the case, because during this era in Norway kings were killing each other left and right to claim the right to rule. What we can take out of this is that the connection between the leader and pertinent landed freemen (not all freemen had the same rights) was decided based on more personal ties (reciprocity of material and service) to that potential leader and whether their peers believed the same thing too. You could become a local leader without the \"royal/jarl\" background and still aspire for greatness, and those men were called Hersirs. 
Hersirs would lead a hundred (a unit of land area that is a county division) and show their ability for leadership through their actions in battle, which could lead to a Jarl or King bringing them into their Hird (I believe this term is primarily tied to the King of Norway) or personal retinue. That could lead to more favors (jarls were removed and replaced with loyal men), the ability to marry into the family, and as such the ability to expand their power further into the future themselves or through their kin.\n\n\nThe best example I have that you may be able to relate to is Aragorn in Lord of the Rings. He shows the virtues of bravery, strength and honesty that are tested in times of peril and known by many others around him, which leads to stories that build up his reputation. He also comes from a distinguished line and royal ties that add to his credibility as a ruler. By the end of the movie he is one of the big heroes instrumental in stopping Sauron from taking over mankind in Middle-Earth (no one can see or know about Frodo's ring mission as its nature requires secrecy, and thus none of the glory), so it was \"rightfully\" acknowledged and even pressured upon him that he was the best candidate for king of Arnor and Gondor. What I mean: there wasn't an election but an organic \"vote\" and understanding based on the qualities of the person; that is pretty much how warband leaders were typically chosen.",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "52308201",
"title": "Vikings: War of Clans",
"section": "Section::::Cultural references and critical reception.\n",
"start_paragraph_id": 12,
"start_character": 0,
"end_paragraph_id": 12,
"end_character": 613,
"text": "Paul Glader, associate professor of journalism at Berlin School of Creative Leadership, wrote about his experience of a Summer spent ruling a clan in \"Vikings: War of Clans\", summarising \"I enjoyed my Summer as a Viking chief. I learned that many of the principles of good leadership in real life apply in these virtual realms. Good leadership in either realm takes time, thought and engagement. It also takes a team. And, sometimes, when you find yourself less engaged as a leader, it's time to make a succession plan or a new leadership plan. Because that’s when your Viking clan might face its greatest test.\"\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "88554",
"title": "Hersir",
"section": "",
"start_paragraph_id": 1,
"start_character": 0,
"end_paragraph_id": 1,
"end_character": 1119,
"text": "A Hersir was a local Viking military commander of a \"hundred\" (a county subdivision) of about 100 men and owed allegiance to a jarl or king. They were also aspiring landowners, and, like the middle class in many feudal societies, supported the kings in their centralization of power. Originally, the term Hersir referred to a wealthy farmer who owned land and had the status of a leader. Throughout the Viking Age, Hersir was eventually redefined as someone who organized and led raids. In the 10th century, the influence of Hersirs began to decrease due to the development of effective national monarchies in Scandinavia. Hersir was again redefined later on, to mean a local leader or representative. The independence of the Hersir as a military leader eventually vanished, to be replaced only by the title of a royal representative. The \"Hávamál\", which was the mythical advice of the supreme creator Odin to humankind, contains a number of verses emphasizing the virtue of cautious consideration and strategical attack. This theme, in its oral form, was one of the major influences on the mind of the Viking Hersir.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "8159825",
"title": "Vinland Saga (manga)",
"section": "Section::::Plot.\n",
"start_paragraph_id": 6,
"start_character": 0,
"end_paragraph_id": 6,
"end_character": 1282,
"text": "The two Viking bands later clash when their commanders seek to capture the young Danish Prince Canute, Askeladd's company succeeding but are forced by Thorkell's forces to take refuge for the winter in the frozen north of England near the Danish encampment at Gainsborough. Upon finding the effeminate Canute timid and heavily dependent on his caretaker Ragnar, a deeply disappointed Askeladd briefly changes his initial plan of backing the prince to hold him ransom. But a sudden attack by Thorkell's brigade forces Askeladd to change his mind, murdering Ragnar to forces Canute to stand up for himself. The prince brings both Thorkell and Askeladd's remaining forces under his command as he confronts his father, who decides not to kill Canute after he proved his worth while adamant to have Harald as his heir. Canute and his companions formulate a plot that required Askeladd to be killed by the prince after he slaughters Sweyn and his attendants during an audience, Askeladd securing Canute's position as king while stopping Sweyn's intent to invade his homeland, Wales. But Thorfinn, feeling denied of his revenge, attempts to kill Caunte before being stopped. Canute, understanding Thorfinn's pain, spares him the death penalty and instead sentences him to life as a slave.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "3779598",
"title": "Great Heathen Army",
"section": "Section::::Invasion of England.\n",
"start_paragraph_id": 20,
"start_character": 0,
"end_paragraph_id": 20,
"end_character": 708,
"text": "The term \"vikingr\" simply meant pirate, and the Viking \"heres\" may well have included fighters of other nationalities than Scandinavians. The Viking leaders would often join together for mutual benefit and then dissolve once profit had been achieved. Several of the Viking leaders who had been active in Francia and Frisia joined forces to conquer the four kingdoms constituting Anglo-Saxon England. The composite force probably contained elements from Denmark, Norway, Sweden and Ireland as well as those who had been fighting on the continent. The Anglo-Saxon historian Æthelweard was very specific in his chronicle and said that \"the fleets of the viking tyrant Hingwar landed in England from the north\".\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "58169961",
"title": "Cold Pursuit",
"section": "Section::::Plot.\n",
"start_paragraph_id": 7,
"start_character": 0,
"end_paragraph_id": 7,
"end_character": 449,
"text": "Viking tries in vain to stop the gang war by using one of his own men as a scapegoat and sending White Bull the man's head. This is insufficient to placate Bull, who kills the messenger. Meanwhile, Nels kidnaps Viking's son from his prep school before Bull's men can, in order to draw Viking into an ambush. Nels treats the boy well and protects him from the violence to come, but his identity is revealed to Viking by a Janitor in the prep school.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "181076",
"title": "Berserker",
"section": "Section::::Theories.\n",
"start_paragraph_id": 30,
"start_character": 0,
"end_paragraph_id": 30,
"end_character": 308,
"text": "When Viking villages went to war in unison, the berserkers often wore special clothing, for instance furs of a wolf or bear, to indicate that this person was a berserker, and would not be able to tell friend from foe when in rage \"bersærkergang\". In this way, other allies would know to keep their distance.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "52308201",
"title": "Vikings: War of Clans",
"section": "Section::::Gameplay.\n",
"start_paragraph_id": 4,
"start_character": 0,
"end_paragraph_id": 4,
"end_character": 215,
"text": "In Vikings: War of Clans, players have to cooperate with each other to create their own clan. Each clan has the ruling hierarchy from a ranker to the chief, and each player has their correspondent authority extent.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
67h2j0 | does tire tread help when driving on wet surfaces? if so; how? | [
{
"answer": "Because low tread leaves nowhere for the water to go, so the tire kinda skims on top of the water. If there is tread, the water has grooves to go through, leaving the tread in direct contact with the road. ",
"provenance": null
},
{
"answer": "Tread refers specifically to the channels cut into the surface of a tire. The tread is designed to shed water displaced from under the road-contacting surfaces of the tire, though the actual pattern isn't terribly important so long as certain key criteria are met, and is highly stylized.\n\nAn over-inflated tire has a significant impact on improving said displacement, as the bulging center can more easily press the water from the center out. I'm not advocating you over-inflate your tires - while it also reduces rolling resistance, increasing fuel economy, it also reduces traction, so you're more likely to lose control of your vehicle, especially at higher speeds, and it wears the center of your tire excessively, greatly reducing durability.\n\nIf you can't displace water fast enough, typically due to speed, lack of tread, or an under-inflated tire having too little displacement, you'll hydroplane - the car will literally be floating. That's not driving, that's sailing.\n\nSnow and ice tires are hard rubber with bold edges to dig into the snow and ice, and use *that* as the road surface. They make pretty bad rain tires because if it's warm enough to rain and not snow, you're still driving on hard rubber that doesn't really care all that much about gripping the road surface.\n\nTread is the gaps between the road-contacting surfaces of the tire, and the less tire you have in contact with the road, the less friction. Tread actually reduces traction in ideal conditions by virtue of being \"not tire\", which is why performance tires for ideal conditions have little to no tread. Racing tires, aka \"slicks\" (which are anything but - depending on the compound, they can be as sticky as duct tape when *cold*) are illegal for road use because they are dangerous to drive on in the presence of any amount of moisture on the road. They have no means of displacing water but by casting a wake in front of the point of contact. 
I was in a Dodge Viper that nearly wiped out at 25 mph driving through a neighborhood because it rained, *two days prior*, because of the tires on it at the time.\n\nSo treads are a compromise in the design, and all season tires are the ultimate compromise.",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "65037",
"title": "Tire",
"section": "Section::::Performance characteristics.:Forces.\n",
"start_paragraph_id": 99,
"start_character": 0,
"end_paragraph_id": 99,
"end_character": 514,
"text": "BULLET::::- \"Wet traction—\"Wet traction is the tire's traction, or grip, under wet conditions. Wet traction is improved by the tread design's ability to channel water out of the tire footprint and reduce hydroplaning. However, tires with a circular cross-section, such as those found on racing bicycles, when properly inflated have a sufficiently small footprint to not be susceptible to hydroplaning. For such tires, it is observed that fully slick tires will give superior traction on both wet and dry pavement.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "2257043",
"title": "Beadlock",
"section": "Section::::Purpose.\n",
"start_paragraph_id": 9,
"start_character": 0,
"end_paragraph_id": 9,
"end_character": 1061,
"text": "High traction is desired for tires for automobile dirt track racing, off-road racing, off-road vehicles, and off-road motorcycles, so their tread is therefore coarse. Nevertheless, some riders will lower the tire pressure to cause the tread to spread out and create a larger contact patch. This practice can create a safety hazard, as there may not be enough pressure to adequately secure the tire beads to the wheel. Reactive ground forces push a tire to one side or the other, especially the outside rear tire of a racing vehicle when it is turning in a corner of a track. This could cause a bead of the tire to come off the rim completely, or enough to cause partial loss of air. It is also possible for the tire to have more traction on the ground than there is friction between the tire and rim. In this case the wheel would slip around the tire beads without turning the tire. Beadlocks, of one form or another including adhesive, are therefore used to keep the beads of off-road tires firmly seated and prevent slip, even when inflation pressure is low.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "263198",
"title": "Opposite lock",
"section": "",
"start_paragraph_id": 2,
"start_character": 0,
"end_paragraph_id": 2,
"end_character": 238,
"text": "The technique works best on loose or wet surfaces where the friction between the tires and the road is not too high, but can also be used on asphalt or other surfaces with high friction if the vehicle has enough power to maintain speed. \n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "1333814",
"title": "Tread",
"section": "Section::::Tires.:Street tires.\n",
"start_paragraph_id": 5,
"start_character": 0,
"end_paragraph_id": 5,
"end_character": 774,
"text": "The grooves in the rubber are designed to allow water to be expelled from beneath the tire and prevent hydroplaning. The proportion of rubber to air space on the road surface directly affects its traction. Design of tire tread has an effect upon noise generated, especially at freeway speeds. Generally there is a tradeoff of tread friction capability; deeper patterns often enhance safety, but simpler designs are less costly to produce and actually may afford some roadway noise mitigation. Tires intended for dry weather use will be designed with minimal pattern to increase the contact patch. Tires with a smooth tread (i.e., having no tread pattern) are known as slicks and are generally used for racing only, since they are quite dangerous if the road surface is wet.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "5193283",
"title": "Fender (vehicle)",
"section": "",
"start_paragraph_id": 2,
"start_character": 0,
"end_paragraph_id": 2,
"end_character": 520,
"text": "Sticky materials, such as mud, may adhere to the smooth outer tire surface, while smooth loose objects, such as stones, can become temporarily embedded in the tread grooves as the tire rolls over the ground. These materials can be ejected from the surface of the tire at high velocity as the tire imparts kinetic energy to the attached objects. For a vehicle moving forward, the top of the tire is rotating upward and forward, and can throw objects into the air at other vehicles or pedestrians in front of the vehicle.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "6139533",
"title": "Slip ratio",
"section": "",
"start_paragraph_id": 2,
"start_character": 0,
"end_paragraph_id": 2,
"end_character": 1253,
"text": "When accelerating or braking a vehicle equipped with tires, the observed angular velocity of the tire does not match the expected velocity for pure rolling motion, which means there appears to be apparent sliding between outer surface of the rim and the road in addition to rolling due to deformation of the part of tire above the area in contact with the road. When driving on dry pavement the fraction of slip that is caused by actual sliding taking place between road and tire contact patch is negligible in magnitude and thus does not in practice make slip ratio dependent on speed. It is only relevant in soft or slippery surfaces, like snow, mud, ice, etc and results constant speed difference in same road and load conditions independently of speed, and thus fraction of slip ratio due to that cause is inversely related to speed of the vehicle. The difference between theoretically calculated forward speed based on angular speed of the rim and rolling radius, and actual speed of the vehicle, expressed as a percentage of the latter, is called ‘slip ratio’. This slippage is caused by the forces at the contact patch of the tire, not the opposite way, and is thus of fundamental importance to determine the accelerations a vehicle can produce.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "1333814",
"title": "Tread",
"section": "Section::::Tires.:Mountain bike and motorcycle tires.\n",
"start_paragraph_id": 12,
"start_character": 0,
"end_paragraph_id": 12,
"end_character": 571,
"text": "Mountain bike and some motorcycle tires feature tread similar to off-road tires used on cars and trucks but may sometimes include an unbroken tread that runs along its center. This feature provides better traction and lower noise on asphalt at high speed and on high tire pressure, but retains the ability to provide grip on a soft or loose surface- lower tire pressure or soft ground will cause the side lugs to come into contact with the surface. Road bike tires may have shallow grooves for aesthetic purposes, but such grooves are unnecessary in narrow applications.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
2lcr4s | why is that when i say "a university student" it sounds right but when i say "an university student" like it should be in english, it sounds completely wrong. | [
{
"answer": "Someone posted something similar the other day on here, and it has more to do with pronunciation than lettering.\n\nYou use the singular designator \"a\" when the word that follows doesn't start with a vowel sound. You use \"an\" if it does.",
"provenance": null
},
{
"answer": "You don't say \"An University Student\" in English.\n\n\"An\" is used when the following word starts with a vowel sound. \"University\" does not, it starts with a consonant \"y\" sound, \"You-Ni-Verse-It-Ee\".\n\nYou would use \"an\" when saying a word such as \"umpire\" which starts with a vowel \"u\" sound, \"Uhm-Pyre\"",
"provenance": null
},
{
"answer": "You use \"an\" before vowels that is true, \nbut sometimes you should use an \"a\" too. \nThe sound makes it confusing, \nFor the vowel, you are using, \n\"Youniversity\" starts with \"Y\" and not \"U\"",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "2011",
"title": "Comparison of American and British English",
"section": "Section::::Vocabulary.:Social and cultural differences.:Education.:University.\n",
"start_paragraph_id": 54,
"start_character": 0,
"end_paragraph_id": 54,
"end_character": 537,
"text": "In the UK a university student is said to \"study\", to \"read\" or, informally, simply to \"do\" a subject. In the recent past the expression 'to read a subject' was more common at the older universities such as Oxford and Cambridge. In the US a student \"studies\" or \"majors in\" a subject (although \"concentration\" or \"emphasis\" is also used in some US colleges or universities to refer to the major subject of study). \"To major in\" something refers to the student's principal course of study; \"to study\" may refer to any class being taken. \n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "27918920",
"title": "Contextualization (sociolinguistics)",
"section": "Section::::Examples of Contextualization in Use.:Example Two: Kyoko Masuda.\n",
"start_paragraph_id": 13,
"start_character": 0,
"end_paragraph_id": 13,
"end_character": 809,
"text": "In this interaction, the cues received by the student's style of speaking suggests that they are speaking to an authority figure, because they are deferring through the use of questions. Furthermore, you can see the formality in their language throughout the brief interaction. The student speaks in elongated sentences, saying things such as \"I don't understand well\" rather than just the informal \"I don't get it.\" In examining the professor's use of language, they switch between the informal form (\"I (definitely) think so, you know.\") and the formal form (\"After all, do you mind (their behavior)?\"). This suggests that the professor used cues to learn that the student would prefer to remain in the formal form, and molded their language style to fit that. The reverse is seen within the next example: \n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "1199964",
"title": "Second-language acquisition",
"section": "Section::::Comparisons with first-language acquisition.\n",
"start_paragraph_id": 23,
"start_character": 0,
"end_paragraph_id": 23,
"end_character": 1080,
"text": "Some errors that second-language learners make in their speech originate in their first language. For example, Spanish speakers learning English may say \"Is raining\" rather than \"It is raining\", leaving out the subject of the sentence. This kind of influence of the first language on the second is known as \"negative\" language transfer. French speakers learning English, however, do not usually make the same mistake of leaving out \"it\" in \"It is raining.\" This is because pronominal and impersonal sentence subjects can be omitted (or as in this case, are not used in the first place) in Spanish but not in French. The French speaker knowing to use a pronominal sentence subject when speaking English is an example of \"positive\" language transfer. It is important to note that not all errors occur in the same ways; even two individuals with the same native language learning the same second language still have the potential to utilize different parts of their native language. Likewise, these same two individuals may develop near-native fluency in different forms of grammar.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "10224",
"title": "E-Prime",
"section": "Section::::Psychological effects.\n",
"start_paragraph_id": 24,
"start_character": 0,
"end_paragraph_id": 24,
"end_character": 476,
"text": "While teaching at the University of Florida, Alfred Korzybski counseled his students to eliminate the infinitive and verb forms of \"to be\" from their vocabulary, whereas a second group continued to use \"I am,\" \"You are,\" \"They are\" statements as usual. For example, instead of saying, \"I am depressed,\" a student was asked to eliminate that emotionally primed verb and to say something else, such as, \"I feel depressed when ...\" or \"I tend to make myself depressed about ...\"\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "4120925",
"title": "Glossary of education terms (S)",
"section": "Section::::S.\n",
"start_paragraph_id": 28,
"start_character": 0,
"end_paragraph_id": 28,
"end_character": 585,
"text": "BULLET::::- Student: Etymologically derived through Middle English from the Latin second-type conjugation verb \"stŭdērĕ\", which means \"to direct one's zeal at\"; hence a student is one who directs zeal at a subject. Also known as a disciple in the sense of a religious area of study, and/or in the sense of a \"discipline\" of learning. In widest use, \"student\" is used to mean a school or class attendee. In many countries, the word \"student\" is however reserved for higher education or university students; persons attending classes in primary or secondary schools being called pupils.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "415406",
"title": "English as a second or foreign language",
"section": "Section::::Difficulties for learners.\n",
"start_paragraph_id": 30,
"start_character": 0,
"end_paragraph_id": 30,
"end_character": 1998,
"text": "Some students may have problems due to the incoherence in rules like were, a noun is a noun and a verb is a verb because grammarians say they are. For e.g. In \"I am suffering terribly\" \"suffering\" is the verb, but in \"My suffering is terrible\", it is a noun. But both sentences expresses the same idea using the same words. Other students might have problems due to the prescribing and proscribing nature of rules in the language formulated by amateur grammarians rather than ascribing to the functional and descriptive nature of languages evidenced from distribution. For example, a cleric, Robert Lowth introduced the rule to never end a sentence with a preposition, inspired from Latin grammar through his book \"A Short Introduction to English Grammar\". Due to the inconsistencies brought from Latin language standardization of English language lead to classifying and sub-classyfing an otherwise simple language structure. Like many alphabetic writing systems English also have incorporated the principle that graphemic units should correspond to the phonemic units, however, the fidelity to the principle is compromised, compared to an exemplar like Finnish language. This is evident in the Oxford English Dictionary, for many years they experimented with many spellings of SIGN to attain a fidelity with the said principle, among them are SINE, SEGN, and SYNE, and through the diachronic mutations they settled on SIGN. Cultural differences in communication styles and preferences are also significant. For example, a study among Chinese ESL students revealed that preference of not using tense marking on verb present in the morphology of their mother tongue made it difficult for them to express time related sentences in English. Another study looked at Chinese ESL students and British teachers and found that the Chinese learners did not see classroom 'discussion and interaction' type of communication for learning as important but placed a heavy emphasis on teacher-directed lectures.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "56074466",
"title": "Spanish personal pronouns",
"section": "Section::::Subject pronouns.:Pronoun dropping and grammatical gender.\n",
"start_paragraph_id": 9,
"start_character": 0,
"end_paragraph_id": 9,
"end_character": 331,
"text": "English subject pronouns are generally not translated into Spanish when neither clarity nor emphasis is an issue. \"I think\" is generally translated as just \"Pienso\" unless the speaker is contrasting his or her views with those of someone else or placing emphasis on the fact that their views are their own and not somebody else's.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
3xaz63 | why did adam sandler seemingly stop being funny some years ago? | [
{
"answer": "I was in high school when Billy Madison and Happy Gilmore were out. That humor made me laugh 20 years ago. He hasn't changed we have. His brand of humor just doesn't stand up to the improv style of so many great comedies of the last 5+ years.",
"provenance": null
},
{
"answer": "I'd say that your sense of humor has changed, Adam Sandler's humor has always been childish/frat guy's humor",
"provenance": null
},
{
"answer": "His earlier movies still make me laugh. Waterboy, Billy Madison, Happy Gilmore, Big Daddy, The Wedding Singer. Maybe he's losing his edge as he gets further away from his stand up and sketch comedy days. ",
"provenance": null
},
{
"answer": "For any comedian, Scuba Steve would be the pinnacle of a career. \nIt's all down hill (or perhaps under water) from there.",
"provenance": null
},
{
"answer": "He's getting too old to play those adrift Man-Child characters. He has to move on to other material.",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "102690",
"title": "Adam Sandler",
"section": "Section::::Public image.\n",
"start_paragraph_id": 29,
"start_character": 0,
"end_paragraph_id": 29,
"end_character": 425,
"text": "Sandler has been referenced multiple times in various media, including in the TV shows \"The Simpsons\" in the episode \"Monty Can't Buy Me Love\", in \"Family Guy\" in the episode \"Stew-Roids\", and in \"South Park\" in the episode \"You're Getting Old\". He was also referenced in the video game \"\". The HBO series \"Animals\" episode \"The Trial\" features a mock court case to decide whether Sandler or Jim Carrey is a better comedian.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "1724850",
"title": "The Sandman (wrestler)",
"section": "Section::::Professional wrestling career.:Eastern/Extreme Championship Wrestling.:Surfer and pimp (1992–1994).\n",
"start_paragraph_id": 8,
"start_character": 0,
"end_paragraph_id": 8,
"end_character": 1024,
"text": "In 1994, Sandman changed his gimmick after ECW owner Tod Gordon suggested that he channel his own personality into his character, creating an edgier gimmick. He began a feud with his former tag team partner Tommy Cairo, after The Sandman was temporarily blinded following a match and inadvertently struck Peaches. When The Sandman regained his sight and saw Cairo assisting Peaches to her feet, he attacked Cairo. The Sandman subsequently became estranged from his wife (claiming \"life's a bitch, and then you marry one\"). After losing a match against Cairo, that led to Peaches hitting him with a strap profusely, Woman attacked Peaches and led her back to the ring where Sandman held her and Woman applied the strap to her skin before Cairo returned to save her. After this event Sandman adopted Woman as his new manager. In keeping with The Sandman's character, Woman would open his beers and light his cigarettes prior to matches. She began carrying a Singapore cane with which she would strike The Sandman's opponents.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "467628",
"title": "Margaret Dumont",
"section": "Section::::Performances with the Marx Brothers.\n",
"start_paragraph_id": 24,
"start_character": 0,
"end_paragraph_id": 24,
"end_character": 547,
"text": "For decades, film critics and historians have theorized that because Dumont never broke character or smiled at Groucho's jokes, she did not \"get\" the Marxes' humor. On the contrary, Dumont, a seasoned stage professional, maintained her \"straight\" appearance to enhance the Marxes' comedy. In 1965, shortly before Dumont's death, \"The Hollywood Palace\" featured a recreation of \"Hooray for Captain Spaulding\" (from the Marxes' 1930 film \"Animal Crackers\") in which Dumont can be seen laughing at Groucho's ad-libs — proving that she got the jokes.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "1298984",
"title": "Blue Skies (1946 film)",
"section": "",
"start_paragraph_id": 5,
"start_character": 0,
"end_paragraph_id": 5,
"end_character": 621,
"text": "The reasons for Astaire's (temporary) retirement remain a source of debate: his own view that he was \"tired and running out of gas,\" the sudden collapse in 1945 of the market for Swing music which left many of his colleagues in jazz high and dry, a desire to devote time to establishing a chain of dancing schools, and a dissatisfaction with roles, as in this film, where he was relegated to playing second fiddle to the lead. Ironically, it is for his celebrated solo performance of \"Puttin' On The Ritz,\" which featured Astaire leading an entire dance line of Astaires, that this film is most remembered by some today.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "43055",
"title": "Buster Keaton",
"section": "Section::::Career.:Early life in vaudeville.\n",
"start_paragraph_id": 8,
"start_character": 0,
"end_paragraph_id": 8,
"end_character": 244,
"text": "Keaton claimed he was having so much fun that he would sometimes begin laughing as his father threw him across the stage. Noticing that this drew fewer laughs from the audience, he adopted his famous deadpan expression whenever he was working.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "102690",
"title": "Adam Sandler",
"section": "Section::::Career.:Acting career.\n",
"start_paragraph_id": 19,
"start_character": 0,
"end_paragraph_id": 19,
"end_character": 371,
"text": "Sandler's recent comedy films, including \"Grown Ups\" and \"Grown Ups 2\", have received strongly negative reviews. In reviewing the latter, critic Mark Olsen of \"The Los Angeles Times\" remarked that Sandler had become the antithesis of Judd Apatow; he was instead \"the white Tyler Perry: smart enough to know better, savvy enough to do it anyway, lazy enough not to care.\"\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "104998",
"title": "Michael Richards",
"section": "Section::::Career.:2006 Laugh Factory incident.\n",
"start_paragraph_id": 20,
"start_character": 0,
"end_paragraph_id": 20,
"end_character": 417,
"text": "The incident was later parodied on several TV shows, including \"MadTV\", \"Family Guy\", \"South Park\", and \"Extras\". In an episode of \"Curb Your Enthusiasm\", Richards appeared as himself and poked fun at the incident. In a 2012 episode of Seinfeld's web series \"Comedians in Cars Getting Coffee\", Richards admitted that the outburst still haunted him, and was a major reason for his withdrawal from performing stand-up.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
5xqrrq | what is with the weird "bubble in your throat" phenomenon? | [
{
"answer": "I am pretty sure it's just some mucus messing with your vocal cords, as, usually, coughing to clear your voice will get rid of it. The voice changes, usually gets a deeper pitch, because the air you are exhaling is not just making the vocal cords vibrate, but also all the mucus covering them and all the temporary mucus membranes between them.",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "3284724",
"title": "Gargling",
"section": "",
"start_paragraph_id": 1,
"start_character": 0,
"end_paragraph_id": 1,
"end_character": 348,
"text": "Gargling (same root as 'gurgle') is the act of bubbling liquid in the mouth. It is also the washing of one's mouth and throat with a liquid that is kept in motion by breathing through it with a gurgling sound. Vibration caused by the muscles in the throat and back of the mouth cause the liquid to bubble and flurry around inside the mouth cavity.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "8399929",
"title": "Bubble and Squeek",
"section": "Section::::Episodes.:Fun Fair.\n",
"start_paragraph_id": 18,
"start_character": 0,
"end_paragraph_id": 18,
"end_character": 575,
"text": "Bubble is awoken by a newspaper which features an advert for a funfair. Bubble is intrigued and seems eager; but Squeek would rather work. However, Squeek quickly changes his mind when he sees a shining three-tone horn for grand prize. They can't get knock the coconut down, which is how they will get the horn. Until, a fortune-teller gives them another ball to knock it down, but that does not work. Ashamed and empty-handed, Bubble tries to kill himself, but Squeek diverts the gun and ends up destroying the funfair and the coconut, which makes them win the grand prize.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "51984254",
"title": "Physiology of decompression",
"section": "Section::::Bubble formation, growth and elimination.:Bubble distribution.\n",
"start_paragraph_id": 65,
"start_character": 0,
"end_paragraph_id": 65,
"end_character": 320,
"text": "Bubbles are also known to form within other tissues, where they may cause damage leading to symptoms of decompression sickness. This damage is likely to be caused by mechanical deformation and stresses on the cells rather than local hypoxia, which is an assumed mechanism in the case of gas embolism of the capillaries.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "47519635",
"title": "The Boy in the Bubble",
"section": "",
"start_paragraph_id": 1,
"start_character": 0,
"end_paragraph_id": 1,
"end_character": 339,
"text": "\"The Boy in the Bubble\" is a song by the American singer-songwriter Paul Simon. It was the third single from his seventh studio album, \"Graceland\" (1986), released on Warner Bros. Records. Written by Simon and Forere Motloheloa (an accordionist from Lesotho), its lyrics explore starvation and terrorism, juxtaposed with wit and optimism.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "39660483",
"title": "Bubble Butt",
"section": "Section::::Composition.\n",
"start_paragraph_id": 11,
"start_character": 0,
"end_paragraph_id": 11,
"end_character": 1110,
"text": "Musically, \"Bubble Butt\" is an electronic dance music (EDM), hip hop, and dancehall song. It is a \"swaggering\" and energetic number that draws heavily from crunk inspirations. It was composed with the intention of being played at clubs. The track features squawking samples, heavy bass, \"squiggly synths\", clap beats, and bubble-popping sound effects, culminating with the \"bub-bub-bubbing hook\". \"Consequence of Sound\"s Derek Staples noted the resemblance between \"Bubble Butt\" and Major Lazer’s \"Pon de Floor\" (2009). Both songs feature lyrics that are seen as an anthem to twerking, \"ass-shaking\" and big buttocks. Mars repeats the chorus multiple times, \"Bubble butt, bubble butt, turn around, stick it out, show the world you got a bubble butt\", which draws inspiration from Rihanna's \"Rude Boy\" (2010), while Tyga raps \"Damn, bitch, talk much?/I don't want interviews/Ha! I'm tryin' ta get into you/then make you my enemy\". Mystic is found \"lyrically wining circles around Bruno Mars and Tyga\". Critics found that the combination of the track's lyrics and its catchy beat would make a \"dance-floor hit\".\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "39660483",
"title": "Bubble Butt",
"section": "",
"start_paragraph_id": 1,
"start_character": 0,
"end_paragraph_id": 1,
"end_character": 664,
"text": "\"Bubble Butt\" is a song by American electronic dance music trio Major Lazer from their second studio album, \"Free the Universe\" (2013). It was released as the album's fourth single on May 24, 2013, for digital download. The track features American singer-songwriter Bruno Mars, and rappers Tyga and Mystic. The single version also features verses from American rapper 2 Chainz. Thomas Pentz, David Taylor, Mars, Michael Stevenson and Mystic co-wrote the track, while production was handled by Major Lazer and Valentino Khan. Musically, it is an electronic dance, hip hop and dancehall track with lyrics implying that girls twerk and show off their giant buttocks.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "47840187",
"title": "Double bubble (radiology)",
"section": "",
"start_paragraph_id": 1,
"start_character": 0,
"end_paragraph_id": 1,
"end_character": 506,
"text": "In radiology, the double bubble sign is a feature of pediatric imaging seen on radiographs or prenatal ultrasound in which two air filled bubbles are seen in the abdomen, representing two discontiguous loops of bowel in a proximal, or 'high,' small bowel obstruction. The finding is typically pathologic, and implies either duodenal atresia, duodenal web, annular pancreas, and on occasion midgut volvulus, a distinction that requires close clinical correlation and, in most cases, surgical intervention. \n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
2mty1l | a fever. | [
{
"answer": "Hi, I am a doctor.\n\nIn response to an invading pathogen (bug) the body starts an inflammatory cascade (attacks the bug). Lots of chemicals are released (cytokines etc.); these chemicals cause the brain to reset the normal body temperature to a higher value, say 40 degrees Celsius. This is believed to help the immune system fight the infection but has not been scientifically confirmed. Although the brain raises the set point, the body has to actually heat up to this new set point, so you may have a temperature of 39 but the brain says that it should be 40, so you feel cold (even though you are not) and start shivering in order to generate more heat. This is a fever, and when you shiver it's called \"rigoring\".\nWhen you get a fever from heat stroke/exhaustion the brain doesn't raise the set point, and as you heat up you actually feel hot this time; this is called hyperthermia (not a fever).",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "46253",
"title": "Fever",
"section": "",
"start_paragraph_id": 2,
"start_character": 0,
"end_paragraph_id": 2,
"end_character": 552,
"text": "A fever can be caused by many medical conditions ranging from non serious to life-threatening. This includes viral, bacterial and parasitic infections such as the common cold, urinary tract infections, meningitis, malaria and appendicitis among others. Non-infectious causes include vasculitis, deep vein thrombosis, side effects of medication, and cancer among others. It differs from hyperthermia, in that hyperthermia is an increase in body temperature over the temperature set point, due to either too much heat production or not enough heat loss.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "1330683",
"title": "Tuberculosis management",
"section": "Section::::Adverse effects.\n",
"start_paragraph_id": 117,
"start_character": 0,
"end_paragraph_id": 117,
"end_character": 1214,
"text": "Fever during treatment can be due to a number of causes. It can occur as a natural effect of tuberculosis (in which case it should resolve within three weeks of starting treatment). Fever can be a result of drug resistance (but in that case the organism must be resistant to two or more of the drugs). Fever may be due to a superadded infection or additional diagnosis (patients with TB are not exempt from getting influenza and other illnesses during the course of treatment). In a few patients, the fever is due to drug allergy. The clinician must also consider the possibility that the diagnosis of TB is wrong. If the patient has been on treatment for more than two weeks and if the fever had initially settled and then come back, it is reasonable to stop all TB medication for 72 hours. If the fever persists despite stopping all TB medication, then the fever is not due to the drugs. If the fever disappears off treatment, then the drugs need to be tested individually to determine the cause. The same scheme as is used for test dosing for drug-induced hepatitis (described below) may be used. The drug most frequently implicated as causing a drug fever is RMP: details are given in the entry on rifampicin.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "378661",
"title": "Thermoregulation",
"section": "Section::::Variation in animals.:Variations due to fever.\n",
"start_paragraph_id": 85,
"start_character": 0,
"end_paragraph_id": 85,
"end_character": 287,
"text": "Fever is a regulated elevation of the set point of core temperature in the hypothalamus, caused by circulating pyrogens produced by the immune system. To the subject, a rise in core temperature due to fever may result in feeling cold in an environment where people without fever do not.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "56448630",
"title": "Intermittent fever",
"section": "",
"start_paragraph_id": 1,
"start_character": 0,
"end_paragraph_id": 1,
"end_character": 558,
"text": "Intermittent fever is a type or pattern of fever in which there is an interval where temperature is elevated for several hours followed by an interval when temperature drops back to normal. This type of fever usually occurs during the course of an infectious disease. Diagnosis of intermittent fever is frequently based on the clinical history but some biological tests like complete blood count and blood culture are also used. In addition radiological investigations like chest X-ray, abdominal ultrasonography can also be used in establishing diagnosis. \n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "75654",
"title": "Hyperthermia",
"section": "Section::::Pathophysiology.\n",
"start_paragraph_id": 32,
"start_character": 0,
"end_paragraph_id": 32,
"end_character": 404,
"text": "A fever occurs when the core temperature is set higher, through the action of the pre-optic region of the anterior hypothalamus. For example, in response to a bacterial or viral infection, certain white blood cells within the blood will release pyrogens which have a direct effect on the anterior hypothalamus, causing body temperature to rise, much like raising the temperature setting on a thermostat.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "46253",
"title": "Fever",
"section": "",
"start_paragraph_id": 4,
"start_character": 0,
"end_paragraph_id": 4,
"end_character": 395,
"text": "Fever is one of the most common medical signs. It is part of about 30% of healthcare visits by children and occurs in up to 75% of adults who are seriously sick. While fever is a useful defense mechanism, treating fever does not appear to worsen outcomes. Fever is viewed with greater concern by parents and healthcare professionals than it usually deserves, a phenomenon known as fever phobia.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "46253",
"title": "Fever",
"section": "Section::::Society and culture.:Fever phobia.\n",
"start_paragraph_id": 85,
"start_character": 0,
"end_paragraph_id": 85,
"end_character": 695,
"text": "Fever phobia is the name given by medical experts to parents' misconceptions about fever in their children. Among them, many parents incorrectly believe that fever is a disease rather than a medical sign, that even low fevers are harmful, and that any temperature even briefly or slightly above the oversimplified \"normal\" number marked on a thermometer is a clinically significant fever. They are also afraid of harmless side effects like febrile seizures and dramatically overestimate the likelihood of permanent damage from typical fevers. The underlying problem, according to professor of pediatrics Barton D. Schmitt, is \"as parents we tend to suspect that our children’s brains may melt.\"\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
345s7x | the sudden outrage towards dr. oz | [
{
"answer": "From my point of view, it largely seems to stem from the fact he is hiding behind the first amendment to promote fringe or even quack medical and health products. Sure the first amendment gives you the right to say any crazy thing you want, but the fact that he is (supposedly) a doctor and is using his authority in that role to peddle bad products is a major issue that will likely get him kicked out of the medical profession at least. ",
"provenance": null
},
{
"answer": "This doesn't explain why it's suddenly become such a big issue, but as for the outrage itself...\n\nHe uses (abuses) his status as a medical doctor (specifically, he seems to be an excellent heart surgeon) in order to make large piles of money by promoting bullshit alternative medicine to people who don't know any better.\n\nThere was a pretty funny montage on Youtube recently of all the times he has said on his show \"I have this magic weight loss pill that will burn the fat right off you without you doing anything...\" or some close variation, and then cuts to his recent congressional hearing, being asked \"is there a magic weight loss pill?\" and him trying to evade the question but finally answering \"no.\"",
"provenance": null
},
{
"answer": "Dr. Oz spreads lies about supposed cures for diseases that seriously have no cure. I have Gastroparesis. One of the members of his team posted a blog about how all that we need to do to cure a paralyzed stomach is take a walk. Is that why I have a gastric pacemaker, a port for IV meds, and three compression fractures due to seizures brought on by malnutrition? A few weeks later, after the GP community went after them, the author posted a mediocre clarification. _URL_0_\nThe guy is a tool. ",
"provenance": null
},
{
"answer": "The outrage is not sudden. Doctors and scientists and others have been. Complaining for years. Hell, he was summoned before congress last year and there was a whole series of criticisms leveled at him at that time. Criticism just keeps building and building. But most recently, a large number of medical faculty at Columbia urged revocation of his tenure, and that action was a huge deal as it's done extremely rarely. Why now? No reason--it's just part of the overall criticisms against him that have been building .",
"provenance": null
},
{
"answer": "If you check out the doctors behind it and the timing then you'll understand: _URL_0_ ",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "10920490",
"title": "Mehmet Oz",
"section": "",
"start_paragraph_id": 2,
"start_character": 0,
"end_paragraph_id": 2,
"end_character": 574,
"text": "He is a proponent of alternative medicine, and has been criticized by physicians, government officials, and publications, including \"Popular Science\" and \"The New Yorker\", for giving non-scientific advice and promoting pseudoscience. In 2014 the British Medical Journal examined over 400 medical or health recommendations from 40 episodes of his program and found that only 46% of his claims were supported by reputable research, while 15% of his claims contradicted medical research and the remainder of Oz's advice were either vague banalities or unsupported by research.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "59526939",
"title": "Medical claims on The Dr. Oz Show",
"section": "",
"start_paragraph_id": 1,
"start_character": 0,
"end_paragraph_id": 1,
"end_character": 817,
"text": "\"The Dr. Oz Show\" is an American daytime television syndicated talk series, which debuted in 2009. Over the course of its run, various episodes and segment features have been criticized for a lack of scientific credibility in reference to the medical claims on \"The Dr. Oz Show\". A study by the British Medical Journal in 2014 concluded that less than half the claims made on the Dr Oz Show were backed by \"some\" evidence, and that fell to a third when the threshold was raised to \"believable\" evidence. The website Science Based Medicine goes even further, claiming: \"No other show on television can top The Dr. Oz Show for the sheer magnitude of bad health advice it consistently offers, all while giving everything a veneer of credibility.\" What follows is a selection of claims proven to be false and misleading.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "61949",
"title": "The Oprah Winfrey Show",
"section": "Section::::Regular segments and campaigns.:Tuesdays with Dr. Oz.\n",
"start_paragraph_id": 46,
"start_character": 0,
"end_paragraph_id": 46,
"end_character": 456,
"text": "Mehmet Oz, the head of cardiac surgery at Columbia Presbyterian Medical Center in NYC and better known to millions of Winfrey's viewers as \"Dr. Oz\", regularly appeared on Tuesdays during the 2008–2009 season. In 2009, Dr. Oz debuted \"The Dr. Oz Show\" in first-run syndication. The series is co-produced by Harpo Productions and Sony Pictures Television. Dr. Oz has been criticized as promoting pseudo-science, and was the 2009 winner of the Pigasus Award.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "10920490",
"title": "Mehmet Oz",
"section": "Section::::Controversies and criticism.:Lack of scientific validity.\n",
"start_paragraph_id": 52,
"start_character": 0,
"end_paragraph_id": 52,
"end_character": 408,
"text": "In April 2015, a group of ten physicians from across the United States, including Henry Miller, a fellow in scientific philosophy and public policy at Stanford University's Hoover Institute, sent a letter to Columbia University calling Oz's faculty position unacceptable. They accused Oz of \"an egregious lack of integrity by promoting quack treatments and cures in the interest of personal financial gain\".\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "42509765",
"title": "The Dr. Oz Show",
"section": "",
"start_paragraph_id": 1,
"start_character": 0,
"end_paragraph_id": 1,
"end_character": 299,
"text": "The Dr. Oz Show is an American daytime television talk series. Each episode has segments on health, wellness and medical information, sometimes including true crime stories and celebrity interviews. It is co-produced by Oprah Winfrey's Harpo Productions and distributed by Sony Pictures Television.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "212698",
"title": "Quackery",
"section": "Section::::Persons accused of quackery.:Living.\n",
"start_paragraph_id": 81,
"start_character": 0,
"end_paragraph_id": 81,
"end_character": 225,
"text": "BULLET::::- Mehmet Oz (born 1960), as host of \"The Dr. Oz Show\", has promoted pseudoscientific health treatments and supplements and faced a hearing at the United States Senate for helping companies sell fraudulent medicine.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "10920490",
"title": "Mehmet Oz",
"section": "Section::::Career.:Television.\n",
"start_paragraph_id": 14,
"start_character": 0,
"end_paragraph_id": 14,
"end_character": 270,
"text": "On the show, Oz addressed issues like Type 2 diabetes and promoted resveratrol supplements, which he stated were anti-aging. His \"Transplant!\" television series won both a Freddie and a Silver Telly award. He served as medical director for Denzel Washington's \"John Q\".\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
373km4 | Who is generally credited with being the first musician/band/musical act etc. to have branched beyond performance to merchandise their name as a brand? | [
{
"answer": "Some professional signers of the baroque period and later had sweet merch, although I don't believe any of them profited from it directly through royalties or such, only indirectly through spreading their celebrity. It was a bit of a \"thing\" to have little enamel miniatures of your favorite singer and you could put them on your dress as a pin, on a chain as a necklace, or on the tops of your shoes (like decorative buckles). Luigi Marchesi and Farinelli are the only ones I know off the top of my head who got fangirls enough to merit shoe-toppers. [Here is an example of one of those enamel miniatures for Farinelli.](_URL_0_) There were also plaster busts of signers that were popular to collect, [here is one of an unknown man](_URL_1_), they were very fragile and very few survived, I don't know of any for opera singers that survived to today, but we have mentions of women collecting them for their favorite signers in satires and newspapers. There were also some direct musical appeals to celebrity from music publishers, like publishing \"Favorite Songs of Sig. Farinelli\" using singer's names and their famous arias, not sure if that would count. \n\nBut for who first deliberately cultivated such non-musical branding opportunities for their own direct commercial gain like \"Pickles Nickels,\" not sure, but there's not really an equivalent in the 17th-19th centuries. The idea of \"personality rights\" wasn't really there yet. Some vague movement towards moral rights of artistry (like the right not to have your music ripped off and published at someone else's gain) but even that was very sketchy, and depended on where you were working in Europe. ",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "12122761",
"title": "Iron Horse Music Hall",
"section": "Section::::History.\n",
"start_paragraph_id": 5,
"start_character": 0,
"end_paragraph_id": 5,
"end_character": 1057,
"text": "By 1982, musical performers played the Horse on a regular basis, with many performers being nationally known or critically renowned artists. Some now well known acts such as Suzanne Vega, Stanley Jordan, George Winston, Michelle Shocked, Tracy Chapman, Dar Williams, Northampton-area native Sonya Kitchell and comedian Steven Wright performed at the Iron Horse before they became nationally known artists. The lineup of musicians has also included legendary reggae group Toots & the Maytals, Michael Franti, pop-rock icon John Mayer, monster guitarist Jorma Kaukonen (a founding member of Jefferson Airplane), pioneering free-jazz pianist Cecil Taylor, jazz pianist Mose Allison, folk-blues legend Taj Mahal, alternative-rock band They Might Be Giants, psychedelic-folk act The Incredible String Band, rock musician Jesse Malin, folk-rocker Steve Forbert, Chicago blues guitarist Jimmy Dawkins, children's musician Mister G the Five Blind Boys of Alabama, and rapper George Watsky. The Iron Horse closed for a time in the 90's, but was eventually reopened.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "34558",
"title": "20th century",
"section": "Section::::Culture and entertainment.:Music.\n",
"start_paragraph_id": 72,
"start_character": 0,
"end_paragraph_id": 72,
"end_character": 935,
"text": "The world's most popular / famous / revered music artists of the 20th century include : Louis Armstrong, Little Richard, Igor Stravinsky, Gustav Mahler, George Gershwin, Sergei Prokofiev, Benjamin Britten, Maurice Ravel, Arnold Schoenberg, Dmitri Shostakovich, Aaron Copland, Béla Bartók, Ernesto Lecuona, Sergei Rachmaninoff, Richard Strauss, Thelonious Monk, Ella Fitzgerald, Duke Ellington, Bing Crosby, ABBA, The Beach Boys, The Beatles, Harry Belafonte, Chuck Berry, James Brown, Miles Davis, Bob Dylan, Jimi Hendrix, Eagles, Michael Jackson, Elton John, Bee Gees, Barbra Streisand, Cher, Nat \"King\" Cole, Robert Johnson, Led Zeppelin, Leonard Cohen, Queen, Madonna, Bob Marley, Metallica, Charlie Parker, Pink Floyd, Elvis Presley, The Rolling Stones, Frank Sinatra, Stevie Wonder, Aretha Franklin, Tupac Shakur, Nirvana (band), The Notorious B.I.G., Amr Diab, Fairuz, Umm Kulthum, Abdel Halim Hafez, Randy Newman and many more.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "400436",
"title": "Bez (dancer)",
"section": "Section::::Career.\n",
"start_paragraph_id": 7,
"start_character": 0,
"end_paragraph_id": 7,
"end_character": 386,
"text": "The \"New Musical Express\" (NME) carried a series of articles about famous members of bands whose musical contribution to their bandmates' success was negligible. The newspaper used the name \"Bez\" as a generic label for the likes of Chas Smash of Madness, Andrew Ridgeley of Wham!, Paul Morley of Art of Noise, Linda McCartney of Wings, and Paul Rutherford of Frankie Goes to Hollywood.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "174650",
"title": "Capitol Records",
"section": "Section::::History.:Founding.\n",
"start_paragraph_id": 8,
"start_character": 0,
"end_paragraph_id": 8,
"end_character": 243,
"text": "The earliest recording artists included co-owner Mercer, Johnnie Johnston, Morse, Jo Stafford, the Pied Pipers, Tex Ritter, Tilton, Paul Weston, Whiteman, and Margaret Whiting Capitol's first gold single was Morse's \"Cow Cow Boogie\" in 1942. \n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "582930",
"title": "Klaus Wunderlich",
"section": "Section::::Biography.\n",
"start_paragraph_id": 6,
"start_character": 0,
"end_paragraph_id": 6,
"end_character": 246,
"text": "As a musician He was open to different music styles and played classical, operetta, Broadway musical, as well as popular music. He sold more than 20 million records all over the world and received 13 golden albums as well as one golden cassette.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "12505304",
"title": "Ghost band",
"section": "Section::::Related musical terminology.\n",
"start_paragraph_id": 64,
"start_character": 0,
"end_paragraph_id": 64,
"end_character": 381,
"text": "BULLET::::- There are several well-known bands that have endured for decades – bands that are promoted and perceived to be continuations of the original. These types of bands are analogous to franchises, except, instead of multiple bands touring under the same name, only one band performs, but with a turnover of musicians. Examples includeTower of Power (currently, in its year)\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "215619",
"title": "VH1",
"section": "Section::::\"VH1: Music First\" (1994–2003).:\"Legends\".\n",
"start_paragraph_id": 32,
"start_character": 0,
"end_paragraph_id": 32,
"end_character": 821,
"text": "Shortly after, VH1 created a companion series, \"Legends\" (originally sponsored by AT&T), profiling artists who have made a more significant contribution to music history to qualify as \"Legends\" (that is, those artists who have gone beyond the category of \"Behind the Music\" biographies). The artists profiled so far have included Aerosmith; the Bee Gees; David Bowie; Johnny Cash; Eric Clapton; The Clash; George Clinton; Sam Cooke; Crosby, Stills, Nash & Young; The Doors; John Fogerty; Aretha Franklin; Marvin Gaye; The Grateful Dead; Guns N' Roses; Jimi Hendrix; Michael Jackson; Eminem; Elton John; Janis Joplin; B. B. King; Led Zeppelin; John Lennon; Curtis Mayfield; Nirvana; Pink Floyd; The Pretenders; Red Hot Chili Peppers; Queen; Bruce Springsteen; Tina Turner; U2; Stevie Ray Vaughan; The Who, and Neil Young.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
60lh6z | how come household incomes haven't gone up significantly in decades if more and more women have joined the labor force? | [
{
"answer": "In part precisely because more and more women have entered the labor force. Labor supply went up faster than demand, so labor became cheaper. The other issue is that technically compensation has continued to increase. People tend to only look at wage and say that people get paid the same as three decades ago. That's not true though, because healthcare benefits are compensation too, but they've eaten up a larger share of compensation (hence the push to reduce healthcare costs with the ACA).",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "5755439",
"title": "Household income in the United States",
"section": "Section::::Recent trends.\n",
"start_paragraph_id": 13,
"start_character": 0,
"end_paragraph_id": 13,
"end_character": 243,
"text": "However, as indicated by the charts below, household income has still increased significantly since the late 1970s and early 80s in real terms, partly due to higher individual median wages, and partly due to increased opportunities for women.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "29826999",
"title": "Added worker effect",
"section": "Section::::The added worker effect after the Great Recession.:Long-term unemployment's impact on the added worker effect.\n",
"start_paragraph_id": 17,
"start_character": 0,
"end_paragraph_id": 17,
"end_character": 612,
"text": "During the Great Recession, which spanned December 2007 to June 2009, the average duration of unemployment reached a record high in the United States, which led to an increased incidence of the added worker effect (Rampell, 2010). The labor force participation rate of the wife rises with the expectation that her husband will be unemployed permanently due to aging or other factors (Maloney, p. 183). Women who expect their husbands will be unemployed for the long-run are more likely to accept a job when they have the opportunity, but without the intention of dropping out implied by the Added Worker Effect.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "8020189",
"title": "Economic mobility",
"section": "Section::::Men and women.\n",
"start_paragraph_id": 29,
"start_character": 0,
"end_paragraph_id": 29,
"end_character": 409,
"text": "However, much of this can be attributed to employment rates. The employment rate of women in their 30s has increased from 39% in 1964 to 70% in 2004; whereas, the rate of employment for men in this same age group has decreased from 91% in 1964 to 86% in 2004. This sharp increase in income for working women, in addition to stable male salaries, is the reason upward economic mobility is attributed to women.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "4982305",
"title": "Mental disorders and gender",
"section": "Section::::Socioeconomic status (SES).:Gender disparities in socioeconomic status (SES).\n",
"start_paragraph_id": 88,
"start_character": 0,
"end_paragraph_id": 88,
"end_character": 555,
"text": "When it comes to income and earning ability in the United States, women are once again at an economic disadvantage. Indeed, for a same level of education and an equivalent field of occupation, men earn a higher wage than women. Though the pay-gap has narrowed over time, according U.S Census Bureau Survey, it was still 21% in 2014. Additionally, pregnancy negatively affects professional and educational opportunities for women since \"an unplanned pregnancies can prevent women from finishing their education or sustaining employment (Cawthorne, 2008)\".\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "29826999",
"title": "Added worker effect",
"section": "Section::::Relation to income and substitution effects.\n",
"start_paragraph_id": 9,
"start_character": 0,
"end_paragraph_id": 9,
"end_character": 701,
"text": "For added workers to enter the labor market when earning power decreases, the negative income effect must outweigh the positive substitution effect (Mincer, p. 68). In families whose male head of household loses his job, “the relative decline in family income is much stronger than the relative decline in the 'expected' wage rate of the wife.” In this case, the net effect leads the wife to enter the labor market, thereby increasing the labor supply. An example of the effect can be found in a study by Arnold Katz, who attributes the bulk of the increase in married female workers in the depression of 1958 “to the distress[ed] job seeking of wives whose husbands were out of work” (1961, p. 478).\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "38961363",
"title": "Gender inequality in Tonga",
"section": "Section::::Gender Inequality Index.\n",
"start_paragraph_id": 9,
"start_character": 0,
"end_paragraph_id": 9,
"end_character": 342,
"text": "The statistics for the labor market participation of women show growth. In 1990, 36% of the female population was employed, which had grown to 52% in 2003. 74% of the male population was employed, which shows a disparity, but the gap is closing. The unemployment rate for women, 7.4%, is higher than men, who are at a 3.6% unemployment rate.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "34889609",
"title": "Gender inequality in Bolivia",
"section": "Section::::Economic participation.:Workforce participation and finances.\n",
"start_paragraph_id": 23,
"start_character": 0,
"end_paragraph_id": 23,
"end_character": 837,
"text": "Women's participation in economic development increased from 22.5 percent to 40 percent between 1976 and 2002. As of 2002, 44 percent of women worked. Women living in urban areas tend to have the least paying and unproductive types of jobs, which is believed to be due to the lack of educational opportunities for women and educational requirements for better jobs. In rural areas women struggle more due to their gender and of being indigenous. As of 1992 rural working women had risen from 18.3 percent in 1976 to 38.1 percent, but working conditions are often poor, wages low and have low productivity. Some employers require women to sign agreements not to get pregnant. Indigenous women tend to work long hours as street vendors or domestic worker. Women who work the latter tend to work more hours, with less days off and low pay.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
11a2k3 | Will the GPS coordinates of a fixed point on land change due to Continental Drift? Also, why is 0 latitude 0 longitude in the ocean instead of on land? | [
{
"answer": " > Will the GPS coordinates of a fixed point on land change due to Continental Drift?\n\nYes, Very Very VERY slowly.. at most [2-6 inches a year](_URL_2_). \n\n > Also, why is 0 latitude 0 Longitude in the ocean instead of on land?\n\nWell the Earth is a sphere so 0 latitude is the equator. Runs right round the middle of the world in a North/South orientation\n\n0 longitude is a bit different. The world power at the time of the definition of the prime meridian (0 longitude), Was England. They set 0 longitude as the line running right down the middle of the [Royal Observatory front door](_URL_1_). \n\nWhy 0,0 is out at sea.. well that's the way the World lines up when [divided into Graticules](_URL_0_) based on those standards.\n\n",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "17617",
"title": "Longitude",
"section": "Section::::Plate movement and longitude.\n",
"start_paragraph_id": 22,
"start_character": 0,
"end_paragraph_id": 22,
"end_character": 361,
"text": "If a global reference frame (such as WGS84, for example) is used, the longitude of a place on the surface will change from year to year. To minimize this change, when dealing just with points on a single plate, a different reference frame can be used, whose coordinates are fixed to a particular plate, such as \"NAD83\" for North America or \"ETRS89\" for Europe.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "45287773",
"title": "Tenerife meridian",
"section": "Section::::Tenerife.\n",
"start_paragraph_id": 5,
"start_character": 0,
"end_paragraph_id": 5,
"end_character": 745,
"text": "Determination of longitude at sea was not possible without a considerable margin of error until the mid 18th century. One of the methods that was used instead was dead reckoning, from the last point of land sighted. Lizard Point in Cornwall was a famous starting point for this, as was the 3,718 metre Teide volcano of Tenerife. From the early 1640s some Dutch cartographers were using Tenerife as a prime meridian in maps, with a significant increase in use after 1662. Joan Blaeu started using it in 1663, Frederik de Wit in 1670, and German mapmakers Weigel and Homann in the 1720s/1730s. After 1675 Tenerife was the predominant meridian on Dutch maps and in 1787 the Amsterdam Admiralty issued a formal statement of support to the meridian.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "4470647",
"title": "Point Hicks",
"section": "Section::::History.\n",
"start_paragraph_id": 13,
"start_character": 0,
"end_paragraph_id": 13,
"end_character": 646,
"text": "Cook estimated the coordinates of his Point Hicks (from a great distance) to be located at , a location in the sea over 60 km to the South West. Though measuring longitude in Cook's time was problematic due to the paucity of reliable [[marine chronometers]], Cook and his astronomer's measurements of latitude were usually very accurate. Nevertheless, the latitude of 38 degrees S placed the point more than 20 km out to sea from the East-West running coastline. It is likely that the reckoning was an error, that a cloudbank was mistaken for land, and that the true location of landfall by \"Endeavour\" lies somewhat to the East of Cape Everard.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "20053858",
"title": "Longitude (book)",
"section": "Section::::Problem of longitude.\n",
"start_paragraph_id": 3,
"start_character": 0,
"end_paragraph_id": 3,
"end_character": 279,
"text": "Determining longitude on land was fairly easy compared to the task at sea. A stable surface to work from, known coordinates to refer to, a sheltered environment for the unstable chronometers of the day, and the ability to repeat determinations over time made for great accuracy.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "13980071",
"title": "History of longitude",
"section": "Section::::Problem of longitude.\n",
"start_paragraph_id": 7,
"start_character": 0,
"end_paragraph_id": 7,
"end_character": 463,
"text": "Determining longitude at sea was also much harder than on land. A stable surface to work from, a comfortable location to live in while performing the work, and the ability to repeat determinations over time made various astronomical techniques possible on land (such as the observation of eclipses) that were unfortunately impractical at sea. Whatever could be discovered from solving the problem at sea would only improve the determination of longitude on land.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "31658772",
"title": "Iberian nautical sciences, 1400–1600",
"section": "Section::::Abraham Zacuto and the ephemerides.\n",
"start_paragraph_id": 24,
"start_character": 0,
"end_paragraph_id": 24,
"end_character": 854,
"text": "Once out of sight of the coast, Portuguese and Spanish ship pilots could rely upon the astrolabe and quadrant to determine their location on a north/south reference, however longitude was noticeably more difficult to acquire. The problem was time. Out on the vast stretches of the ocean, it is very difficult to keep track of time once leaving port. In order to calculate longitude a sailor would need to know the time difference between his current location and a fixed point somewhere on earth, usually the port of call. Even if one could determine the time of day while in deep waters, they still needed to know the time at their home port. The answer was the ephemerides, astronomical charts plotting the location of the stars over a distinct period of time. The German astronomer Regiomontanus published an accurate day-to-day Ephemerides in 1474. \n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "50225",
"title": "Prime meridian",
"section": "Section::::International prime meridian.:IERS Reference Meridian.\n",
"start_paragraph_id": 18,
"start_character": 0,
"end_paragraph_id": 18,
"end_character": 712,
"text": "Due to the movement of Earth's tectonic plates, the line of 0° longitude along the surface of the Earth has slowly moved toward the west from this shifted position by a few centimetres; that is, towards the Airy Transit Circle (or the Airy Transit Circle has moved toward the east, depending on your point of view) since 1984 (or the 1960s). With the introduction of satellite technology, it became possible to create a more accurate and detailed global map. With these advances there also arose the necessity to define a reference meridian that, whilst being derived from the Airy Transit Circle, would also take into account the effects of plate movement and variations in the way that the Earth was spinning.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
2hdew2 | How do people in space (ex: living on the ISS) keep track of time? Do they adjust their sleep-wake schedules according to one master clock? | [
{
"answer": "Pretty much. There is so much to do for astronauts whether it's science, maintenance, spacewalks etc. that their days are fairly choreographed and planned. Then they just block off time for sleeping each 24 hour period. One of the physiological issues with life on the ISS is there is a sunset/sunrise every 90 minutes and it can mess with your circadian rhythm and sleep cycles. ",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "30890",
"title": "Time zone",
"section": "Section::::Time zones in outer space.\n",
"start_paragraph_id": 93,
"start_character": 0,
"end_paragraph_id": 93,
"end_character": 537,
"text": "Orbiting spacecraft typically experience many sunrises and sunsets in a 24-hour period, or in the case of Apollo program astronauts travelling to the moon, none. Thus it is not possible to calibrate time zones with respect to the sun, and still respect a 24-hour sleep/wake cycle. A common practice for space exploration is to use the Earth-based time zone of the launch site or mission control. This keeps the sleeping cycles of the crew and controllers in sync. The International Space Station normally uses Greenwich Mean Time (GMT).\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "1614102",
"title": "Effect of spaceflight on the human body",
"section": "Section::::Psychological effects.:Sleep.\n",
"start_paragraph_id": 64,
"start_character": 0,
"end_paragraph_id": 64,
"end_character": 1298,
"text": "The amount and quality of sleep experienced in space is poor due to highly variable light and dark cycles on flight decks and poor illumination during daytime hours in the space craft. Even the habit of looking out of the window before retiring can send the wrong messages to the brain, resulting in poor sleep patterns. These disturbances in circadian rhythm have profound effects on the neurobehavioural responses of crew and aggravate the psychological stresses they already experience (see Fatigue and sleep loss during spaceflight for more information). Sleep is disturbed on the ISS regularly due to mission demands, such as the scheduling of incoming or departing space vehicles. Sound levels in the station are unavoidably high because the atmosphere is unable to thermosiphon; fans are required at all times to allow processing of the atmosphere, which would stagnate in the freefall (zero-g) environment. Fifty percent of space shuttle astronauts take sleeping pills and still get 2 hours less sleep each night in space than they do on the ground. NASA is researching two areas which may provide the keys to a better night's sleep, as improved sleep decreases fatigue and increases daytime productivity. A variety of methods for combating this phenomenon are constantly under discussion.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "45315837",
"title": "Sleep in space",
"section": "",
"start_paragraph_id": 2,
"start_character": 0,
"end_paragraph_id": 2,
"end_character": 286,
"text": "Human spaceflight often requires astronaut crews to endure long periods without rest. Studies have shown that lack of sleep can cause fatigue that leads to errors while performing critical tasks. Also, individuals who are fatigued often cannot determine the degree of their impairment.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "5607447",
"title": "Space medicine",
"section": "Section::::Effects of space-travel.:Effects of fatigue.\n",
"start_paragraph_id": 31,
"start_character": 0,
"end_paragraph_id": 31,
"end_character": 286,
"text": "Human spaceflight often requires astronaut crews to endure long periods without rest. Studies have shown that lack of sleep can cause fatigue that leads to errors while performing critical tasks. Also, individuals who are fatigued often cannot determine the degree of their impairment.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "34095626",
"title": "Neuroscience in space",
"section": "Section::::Operational aspects.\n",
"start_paragraph_id": 8,
"start_character": 0,
"end_paragraph_id": 8,
"end_character": 645,
"text": "Astronauts must remain alert and vigilant while operating complicated equipment. Therefore, getting enough sleep is a crucial factor of mission success. Weightlessness, a confined and isolated environment, and busy schedules coupled with the absence of a regular 24-hour day make sleep difficult in space. Astronauts typically average only about six hours of sleep each night. Cumulative sleep loss and sleep disruption could lead to performance errors and accidents that pose significant risk to mission success. Sleep and circadian cycles also temporally modulate a broad range of physiological, hormonal, behavioral, and cognitive functions.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "14163574",
"title": "Mission Elapsed Time",
"section": "",
"start_paragraph_id": 2,
"start_character": 0,
"end_paragraph_id": 2,
"end_character": 413,
"text": "The International Space Station (ISS) does not use an MET clock since it is a \"permanent\" and international mission. The ISS observes Coordinated Universal Time (UTC/GMT). When the shuttle visited ISS the ISS-crew usually adjusted their workday to the MET clock to make work together easier. The shuttles also had UTC clocks so that the astronauts could easily figure out what the \"official\" time aboard ISS was.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "18597893",
"title": "Sleep deprivation",
"section": "Section::::Physiological effects.:Other effects.\n",
"start_paragraph_id": 60,
"start_character": 0,
"end_paragraph_id": 60,
"end_character": 227,
"text": "Astronauts have reported performance errors and decreased cognitive ability during periods of extended working hours and wakefulness as well as due to sleep loss caused by circadian rhythm disruption and environmental factors.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
hhxyp | Is it technically possible that somewhere in the Universe some of the fundamental constants are actually variable? | [
{
"answer": "Is it possible in the sense that we can't conclusively rule it out? I guess I have to reluctantly say yes; that's the price of having empirical science.\n\nIs it possible in the sense that there is *any* reason to believe it happens, or in the sense that it's consistent with our present observations? Absolutely not.",
"provenance": null
},
{
"answer": "Go to a bank and have them change a pocket full of your money into another currency. They'll tell you that, today, a pound is worth 1.15 euros, or whatever it happens to be at that moment. Then turn around and ask them whether it's technically possible that, somewhere in the universe, the exchange rate could be something else?\n\nThe answer, of course, is no. Because the exchange rate from pounds to euros is not a *field* defined over *space.* It's just a scalar value used to convert numerical values from one basis to another. A given amount of money is the same regardless of how you choose to denominate it; nothing actually *happens* to your money when you change it from one currency to another. You just end up breaking it up into differently sized units, is all.\n\nIn the same way, neither the speed of light nor Planck's constant are *fields* defined over space. They're just unit-conversion factors. The speed of light is the conversion factor for going from units of distance to units of duration and back. Planck's constant is the conversion factor for going from units of distance *or* duration to units of energy. They're no more \"fundamental physical constants\" than the number of inches in a meter is, and in fact it's customary for physicists to work in units of measurement in which *c* and *h* are both numerically equal to one. (Throw *G* into that mix and you can also get rid of the kilogram, which is nice.)\n\nAs for *π,* that's not a constant at all, but instead a property of geometry. In flat space, the ratio of a circle's circumference to its radius is 2*π,* but this is not true in curved space. And the space we live in, as we all know, can be curved. So the value of *π* varies from place to place. 
In fact, one of the most important scientific experiments of last decade involved measuring the ratio of the circumference to the radius of the largest possible circle — a circle projected onto the surface of last scattering — to determine the overall geometry of the observable universe. If *π* were the same everywhere, there would've been no reason to do that experiment.",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "50519584",
"title": "Time-variation of fundamental constants",
"section": "",
"start_paragraph_id": 5,
"start_character": 0,
"end_paragraph_id": 5,
"end_character": 335,
"text": "In a more philosophical context, the conclusion that these quantities are constant raises the question of why they have the specific value they do in what appears to be a \"fine-tuned Universe\", while their being variable would mean that their known values are merely an accident of the current time at which we happen to measure them.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "458565",
"title": "Dimensionless physical constant",
"section": "Section::::Examples.:Martin Rees's Six Numbers.\n",
"start_paragraph_id": 49,
"start_character": 0,
"end_paragraph_id": 49,
"end_character": 399,
"text": "\"N\" and \"ε\" govern the fundamental interactions of physics. The other constants (\"D\" excepted) govern the size, age, and expansion of the universe. These five constants must be estimated empirically. \"D\", on the other hand, is necessarily a nonzero natural number and cannot be measured. Hence most physicists would not deem it a dimensionless physical constant of the sort discussed in this entry.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "23601",
"title": "Pi",
"section": "Section::::Outside mathematics.:Describing physical phenomena.\n",
"start_paragraph_id": 200,
"start_character": 0,
"end_paragraph_id": 200,
"end_character": 395,
"text": "Although not a physical constant, appears routinely in equations describing fundamental principles of the universe, often because of 's relationship to the circle and to spherical coordinate systems. A simple formula from the field of classical mechanics gives the approximate period of a simple pendulum of length , swinging with a small amplitude ( is the earth's gravitational acceleration):\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "573880",
"title": "Fine-tuned Universe",
"section": "Section::::Premise.\n",
"start_paragraph_id": 8,
"start_character": 0,
"end_paragraph_id": 8,
"end_character": 537,
"text": "The premise of the fine-tuned universe assertion is that a small change in several of the dimensionless physical constants would make the universe radically different. As Stephen Hawking has noted, \"The laws of science, as we know them at present, contain many fundamental numbers, like the size of the electric charge of the electron and the ratio of the masses of the proton and the electron. ... The remarkable fact is that the values of these numbers seem to have been very finely adjusted to make possible the development of life.\"\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "23205",
"title": "Physical constant",
"section": "Section::::Choice of units.\n",
"start_paragraph_id": 13,
"start_character": 0,
"end_paragraph_id": 13,
"end_character": 387,
"text": "It is known that the Universe would be very different if these constants took values significantly different from those we observe. For example, a few percent change in the value of the fine structure constant would be enough to eliminate stars like our Sun. This has prompted attempts at anthropic explanations of the values of some of the dimensionless fundamental physical constants.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "23205",
"title": "Physical constant",
"section": "Section::::Fine-tuned Universe.\n",
"start_paragraph_id": 46,
"start_character": 0,
"end_paragraph_id": 46,
"end_character": 944,
"text": "Some physicists have explored the notion that if the dimensionless physical constants had sufficiently different values, our Universe would be so radically different that intelligent life would probably not have emerged, and that our Universe therefore seems to be fine-tuned for intelligent life. The anthropic principle states a logical truism: the fact of our existence as intelligent beings who can measure physical constants requires those constants to be such that beings like us can exist. There are a variety of interpretations of the constants' values, including that of a divine creator (the apparent fine-tuning is actual and intentional), or that ours is one universe of many in a multiverse (e.g. the many-worlds interpretation of quantum mechanics), or even that, if information is an innate property of the universe and logically inseparable from consciousness, a universe without the capacity for conscious beings cannot exist.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "31880",
"title": "Universe",
"section": "Section::::Physical properties.:Support of life.\n",
"start_paragraph_id": 46,
"start_character": 0,
"end_paragraph_id": 46,
"end_character": 639,
"text": "The Universe may be \"fine-tuned\"; the Fine-tuned Universe hypothesis is the proposition that the conditions that allow the existence of observable life in the Universe can only occur when certain universal fundamental physical constants lie within a very narrow range of values, so that if any of several fundamental constants were only slightly different, the Universe would have been unlikely to be conducive to the establishment and development of matter, astronomical structures, elemental diversity, or life as it is understood. The proposition is discussed among philosophers, scientists, theologians, and proponents of creationism.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
coipvd | who was jeffrey epstein? why is him committing suicide suspicious? what does him committing suicide mean? | [
{
"answer": "Had a child sex slave trafficking ring with multiple elite billionaires involved but hasn’t given much info and was supposed to go to trial soon also was on suicide watch but somehow still committed “suicide” it’s suspicious because there’s a high chance it’s a coverup",
"provenance": null
},
{
"answer": "Epstein was (is?) a very wealthy and connected financier (investment banking, financial consulting, etc...) who has been under intense investigation for his ties to child sex trafficking.\n\nHis apparent suicide is suspicious because he was recently arrested (for a second time) around child trafficking. He had supposedly been under suicide watch after previously attempting it. The fact that he is connected with many high profile names (famous US presidents and politicians, British and Saudi royalty and generally wealthy and well known VIPs, etc...) leads many to believe that there’s more than meets the eye. Without him alive he can’t name drop or implicate the names being accused. Sure it’s possible he committed suicide, but there are many very wealthy, very connected people with an interest in silencing him.\n\nThere are also theories that he’s tied to a deep state spy organization in Israel and that the Israeli government body swapped him and transported him out of prison and the US. \n\nHis suicide “means” that he can’t speak or testify against any of the potential people involved.",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "129194",
"title": "Brian Epstein",
"section": "Section::::Death.\n",
"start_paragraph_id": 78,
"start_character": 0,
"end_paragraph_id": 78,
"end_character": 686,
"text": "Epstein died of an overdose of Carbitral, a form of barbiturate or sleeping pill, in his locked bedroom on 27 August 1967. He was discovered after his butler had knocked on the door and then, hearing no response, asked the housekeeper to call the police. Epstein was found on a single bed, dressed in pyjamas, with various correspondence spread over a second single bed. At the statutory inquest his death was officially ruled an accident, caused by a gradual buildup of Carbitral combined with alcohol in his system. It was revealed that he had taken six Carbitral pills in order to sleep, which was probably normal for him, but in combination with alcohol they reduced his tolerance.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "6253522",
"title": "Jeffrey Epstein",
"section": "Section::::Legal proceedings.:Second criminal case.:Trafficking charges.\n",
"start_paragraph_id": 96,
"start_character": 0,
"end_paragraph_id": 96,
"end_character": 1004,
"text": "Epstein's lawyers urged the court to allow Epstein to post bail, offering to post up to a $600million bond (including $100million from his brother, Mark) so he could leave jail and submit to house arrest in his New York mansion. Judge Richard M. Berman denied the request on July 18, saying that Epstein posed a danger to the public and a serious flight risk to avoid prosecution. On July 23, Epstein was found injured and semiconscious at 1:30 a.m. on the floor of his cell, with marks around his neck that were suspected to be from a suicide attempt or an assault. His cellmate former New York City police officer, Nicholas Tartaglione, who is charged with four counts of murder, was questioned about Epstein's condition. He denied knowledge of what happened. According to NBC News, two sources said that Epstein might have tried to hang himself, a third said the injuries were not serious and could have been staged, while a fourth source said that an assault by his cellmate, had not been ruled out.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "1504584",
"title": "Howie Epstein",
"section": "Section::::Death.\n",
"start_paragraph_id": 18,
"start_character": 0,
"end_paragraph_id": 18,
"end_character": 634,
"text": "On February 23, 2003, Epstein died from complications related to drug use. MTV News reported that Epstein's death was caused by a heroin overdose. He was 47. Investigators were told Epstein had been using heroin. On the day of his death, Howie was driven to St. Vincent Hospital in Santa Fe, New Mexico by his girlfriend, who described him as \"under distress\". Epstein was taking antibiotics for an illness and had recently suffered from influenza, stomach problems, and an abscess on his leg, friends said. Additionally, it was reported that he had been extremely distraught over the death of his 16-year-old dog a few days earlier.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "179643",
"title": "Cilla Black",
"section": "Section::::Music career.:Before August 1967.\n",
"start_paragraph_id": 23,
"start_character": 0,
"end_paragraph_id": 23,
"end_character": 446,
"text": "Epstein died of an accidental drug overdose in August 1967, not long after negotiating a contract with the BBC for Black to appear in a television series of her own. Relations between Epstein and Black had somewhat soured during the year prior to his death, largely because he was not paying her career enough attention and the fact that her singles \"A Fool Am I\" (UK No. 13, 1966) and \"What Good Am I?\" (UK No. 24, 1967) were not big successes.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "129194",
"title": "Brian Epstein",
"section": "Section::::Death.\n",
"start_paragraph_id": 76,
"start_character": 0,
"end_paragraph_id": 76,
"end_character": 370,
"text": "Epstein attended a traditional shiva in Liverpool after his father died, having just come out of the Priory clinic where he had been trying to cure his acute insomnia and addiction to amphetamines. A few days before his death he made his last visit to a Beatles recording session on 23 August 1967, at the Chappell Recording Studios on Maddox Street in Mayfair, London.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "6253522",
"title": "Jeffrey Epstein",
"section": "Section::::Legal proceedings.:First criminal case.:Initial developments (2005–2006).\n",
"start_paragraph_id": 40,
"start_character": 0,
"end_paragraph_id": 40,
"end_character": 679,
"text": "Police began an 11-month undercover investigation of Epstein, followed by a search of his home. The Federal Bureau of Investigation also became involved in the investigation. Subsequently, the police alleged that Epstein had paid several girls to perform sexual acts with him. Interviews with five alleged victims and 17witnesses under oath, a high school transcript and other items found in Epstein's trash and home allegedly showed that some of the girls involved were under 18. The police search of Epstein's home found two hidden cameras and large numbers of photos of girls throughout the house, some of whom the police had interviewed in the course of their investigation.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "1504584",
"title": "Howie Epstein",
"section": "Section::::Career.:The Heartbreakers.\n",
"start_paragraph_id": 9,
"start_character": 0,
"end_paragraph_id": 9,
"end_character": 398,
"text": "On September 1, 1982, he made his live debut at the Santa Cruz Civic Auditorium in Santa Cruz, California, on the tour to promote the album, \"Long After Dark\". Epstein was a member of the Heartbreakers until his departure due to his failing health caused by his heroin addiction. He made his final appearance with the band when they were inducted into the Rock and Roll Hall of Fame in March 2002.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
5zqjaj | if your body, very slowly, began to not get the oxygen it needs, which systems would shut down first? (and last) and why? | [
{
"answer": "Im not aware of any published evidence on this so I will give my professional opinion.\n\nFirstly it depends on why are not getting the oxygen it needs. The two main reasons are because of a lack of oxygen in the air (rare) or your lungs not oxygenating blood properly (common). \n\nNot having enough oxygen in your blood (as measured by a blood test from your artery) is termed respiratory failure. There are two types, one is just not enough oxygen with low carbon dioxide caused by hyperventilating to try to get enough oxygen in. The second type is not enough oxygen AND too much carbon dioxide because the lungs are not moving air in and out efficiently enough. \n\nIf you're talking about lack of oxygen then that would typically show the first type of respiratory failure on the arterial blood test. We would still term it respiratory failure even though the lungs were working fine. Without any shadow of a doubt your brain would be the first thing to go. Most of your organs can survive a certain amount of hypoxia but you would go unconscious fairly rapidly. Your liver and kidneys would probably go next - the liver because it is the organ that carries out the most chemical reactions and needs oxygen for this and the kidneys because they require a lot of oxygenated blood flow to keep working. \n\nIf you removed the oxygen very very slowly (over days and weeks) then other mechanisms would kick in such as the blood production mechanisms to ensure there is more haemoglobin to mop up as much as possible of the scarce oxygen that you breathe in. This is why mountaineers have to spend time acclimatising and why people who live at high altitude in for example Chile have very high haemoglobin levels. If you kept removing the oxygen though, you'd eventually pass out. \n\nAfter you'd passed out the liver and kidneys would begin to shut down next and then probably your heart. You wouldn't live long after you'd passed out. The brain is obviously the top priority. 
After this the body will just keep trying to get as much oxygen as it can until the heart stops.\nTl;dr: The brain. \n\nSource: I am a doctor.\n\nEdit: Grammar",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "8221",
"title": "Death",
"section": "Section::::Reperfusion.\n",
"start_paragraph_id": 62,
"start_character": 0,
"end_paragraph_id": 62,
"end_character": 446,
"text": "\"One of medicine's new frontiers: treating the dead\", recognizes that cells that have been without oxygen for more than five minutes die, not from lack of oxygen, but rather when their oxygen supply is resumed. Therefore, practitioners of this approach, e.g., at the Resuscitation Science institute at the University of Pennsylvania, \"aim to reduce oxygen uptake, slow metabolism and adjust the blood chemistry for gradual and safe reperfusion.\"\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "1770",
"title": "Apollo 13",
"section": "Section::::Investigation and response.:Activities and report.\n",
"start_paragraph_id": 77,
"start_character": 0,
"end_paragraph_id": 77,
"end_character": 413,
"text": "Mechanical shock forced the oxygen valves closed on the number 1 and number 3 fuel cells, leaving them operating for only about three minutes on the oxygen in the feed lines. The shock also either partially ruptured a line from the number 1 oxygen tank, or caused its check or relief valve to leak, causing its contents to leak out into space over the next 130 minutes, entirely depleting the SM's oxygen supply.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "11308093",
"title": "The Destruction Factor",
"section": "Section::::Plot.\n",
"start_paragraph_id": 5,
"start_character": 0,
"end_paragraph_id": 5,
"end_character": 588,
"text": "From measuring the oxygen buildup in the biodome the scientific task force (Exon's daughter Denise, her fiancé Howard Rogers, Nobel prize winning microbiologist Max Flinders, and a government-backed scientist named only as \"Blowers\",) discover that the oxygen output is so high that if unchecked within twenty years the oxygen balance of the planet will have been doubled to 40%, making current life all but extinct. Another emergency vent of the biodome is required, but a 747 Jumbo jet passes through the escaping oxygen bubble causing it to explode as the engines suck in pure oxygen.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "490305",
"title": "Ischemia",
"section": "Section::::Signs and symptoms.\n",
"start_paragraph_id": 3,
"start_character": 0,
"end_paragraph_id": 3,
"end_character": 438,
"text": "Since oxygen is carried to tissues in the blood, insufficient blood supply causes tissue to become starved of oxygen. In the highly metabolically active tissues of the heart and brain, irreversible damage to tissues can occur in as little as 3–4 minutes at body temperature. The kidneys are also quickly damaged by loss of blood flow (renal ischemia). Tissues with slower metabolic rates may undergo irreversible damage after 20 minutes.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "50048517",
"title": "Lance Becker",
"section": "Section::::Research.\n",
"start_paragraph_id": 6,
"start_character": 0,
"end_paragraph_id": 6,
"end_character": 384,
"text": "Becker discovered that re-introduction of oxygen, rather than loss of oxygen, was primarily responsible for cell death. Cell death can be delayed or stopped through the application of therapeutic hypothermia. In the case of Swedish skier Anna Bågenholm, who fell through ice into freezing water, the cold protected her from brain damage despite being without oxygen for over an hour.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "459471",
"title": "Breathing gas",
"section": "Section::::For diving and other hyperbaric use.:Individual component gases.:Oxygen.\n",
"start_paragraph_id": 28,
"start_character": 0,
"end_paragraph_id": 28,
"end_character": 449,
"text": "Oxygen (O) must be present in every breathing gas. This is because it is essential to the human body's metabolic process, which sustains life. The human body cannot store oxygen for later use as it does with food. If the body is deprived of oxygen for more than a few minutes, unconsciousness and death result. The tissues and organs within the body (notably the heart and brain) are damaged if deprived of oxygen for much longer than four minutes.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "261148",
"title": "Apnea",
"section": "Section::::Complications.\n",
"start_paragraph_id": 8,
"start_character": 0,
"end_paragraph_id": 8,
"end_character": 527,
"text": "Under normal conditions, humans cannot store much oxygen in the body. Prolonged apnea leads to severe lack of oxygen in the blood circulation. Permanent brain damage can occur after as little as three minutes and death will inevitably ensue after a few more minutes unless ventilation is restored. However, under special circumstances such as hypothermia, hyperbaric oxygenation, apneic oxygenation (see below), or extracorporeal membrane oxygenation, much longer periods of apnea may be tolerated without severe consequences.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
1j173e | how are synthetic materials (such as plastic) unnatural / toxic, if they are made from ingredients found on earth? | [
{
"answer": "Naturally occuring chemicals can be used to make chemicals which do not occur in nature. Think of it like baking a cake. The main ingredients in cake (sugar, flour, oil, eggs) are all naturally occuring but you would never a cake in nature. It's similar with plastics. While the chemicals used in plastic manufacturing (most petroleum based) are naturally occuring, you can combine them in specific ways to make something which is not.",
"provenance": null
},
{
"answer": "I personally dont think there is a distinction between 'natural' and 'unnatural' in a universal sense. We are a product of nature, and nature endowed us with the ability to create things, just like bird's nests.\n\nWhere the environmentalists have a point is that when we DO create a new compounds (chemicals that other processes didnt put together until now, like a new atomic leggo set), it often decays slowly and can be harmful to living organisms that have not evolved around such materials. It disrupts their biological functions because we changed some base elements into a compound that works differently than other compounds they are used to.\n\nIf you create a lot of slowly decaying poisonous things and leave them laying around, it will kill a bunch of the living things that were there for millions of years without that stuff.\n\nPlastic, for example, is made (often) of petroleum, which is found deep underground and is (often) the result of many years of decaying organic matter. We pull up the petroleum, which is now some long stringy bits of carbon, and subject it to other chemicals and heat and pressure etc.\n\nNow we can create a plastic bag. Other organisms havent done this before, so we are the first to introduce the plastic bag into the ecosystem. Whenever you introduce a new thing into the ecosystem, it messes with what was already there.\n\nMy personal take on this is that the result can be bad for US, but not universally bad usually. George Carlin has a bit about how nature will just eventually use plastic in a new species, but we'll be fucking long dead cause we screwed up the environment so bad that even we cant live there. I think there is validity to this point.\n\nIf we create a new string of carbon and stuff that causes cancer and leave it in the drinking water, WE die, along with other creatures. WE dont really want to die, so we assume this is universally bad, when in reality, it's only bad for us. 
\n\nBut bad for us is bad enough, and we should take care to keep this place clean, if for no other reason than I want to not get horrible cancer and die painfully when it could be avoided.\n\nAnother point is that we are not the first species to introduce a powerful chemical agent globally and change all of life. The Earth didnt always have an oxygen rich environment. It took photosynthetic organisms to change the entire atmosphere so that creatures that breath air could evolve and exist. The trees and whatnot changed the planet long before we did. Also, 99% of all species have gone extinct. We are likely subject to the same laws.\n\nIf we change our environment too rapidly and dont pay attention to the cause and effect chains of our decision making, we may end up in that 99% sooner rather than later. This is counter to all our instincts as living creatures, so we should try to avoid doing this to ourselves, even if at some long range universal point of view it's not a new thing to have happen.\n\nI'll bet all that new O2 killed a lot of things off back when photosynthetic organisms were starting to do their thing.",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "26145195",
"title": "Plastic",
"section": "Section::::Toxicity.\n",
"start_paragraph_id": 117,
"start_character": 0,
"end_paragraph_id": 117,
"end_character": 1031,
"text": "Pure plastics have low toxicity due to their insolubility in water and because they are biochemically inert, due to a large molecular weight. Plastic products contain a variety of additives, some of which can be toxic. For example, plasticizers like adipates and phthalates are often added to brittle plastics like polyvinyl chloride to make them pliable enough for use in food packaging, toys, and many other items. Traces of these compounds can leach out of the product. Owing to concerns over the effects of such leachates, the European Union has restricted the use of DEHP (di-2-ethylhexyl phthalate) and other phthalates in some applications, and the United States has limited the use of DEHP, DPB, BBP, DINP, DIDP, and DnOP in children's toys and child care articles with the Consumer Product Safety Improvement Act. Some compounds leaching from polystyrene food containers have been proposed to interfere with hormone functions and are suspected human carcinogens. Other chemicals of potential concern include alkylphenols.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "26145195",
"title": "Plastic",
"section": "",
"start_paragraph_id": 3,
"start_character": 0,
"end_paragraph_id": 3,
"end_character": 304,
"text": "Plastics are typically organic polymers of high molecular mass and often contain other substances. They are usually synthetic, most commonly derived from petrochemicals, however, an array of variants are made from renewable materials such as polylactic acid from corn or cellulosics from cotton linters.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "446403",
"title": "Synthetic",
"section": "Section::::In the sense of both \"combination\" and \"artificial\".\n",
"start_paragraph_id": 5,
"start_character": 0,
"end_paragraph_id": 5,
"end_character": 204,
"text": "BULLET::::- Synthetic organic compounds are tens of thousands of synthetic chemical compounds, all containing carbon, that are extremely useful, including medicines, rubbers, plastics, refrigerants, etc.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "5308868",
"title": "Synthetic resin",
"section": "",
"start_paragraph_id": 1,
"start_character": 0,
"end_paragraph_id": 1,
"end_character": 332,
"text": "Synthetic resins are industrially produced resins, typically viscous substances that convert into rigid polymers by the process of curing. In order to undergo curing, resins typically contain reactive end groups, such as acrylates or epoxides. Some synthetic resins have properties similar to natural plant resins, but many do not.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "26145195",
"title": "Plastic",
"section": "Section::::Additives.\n",
"start_paragraph_id": 103,
"start_character": 0,
"end_paragraph_id": 103,
"end_character": 254,
"text": "Blended into most plastics are additional organic or inorganic compounds. The average content of additives is a few percent. Many of the controversies associated with plastics actually relate to the additives: organotin compounds are particularly toxic.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "307065",
"title": "Tissue engineering",
"section": "Section::::Scaffolds.:Materials.\n",
"start_paragraph_id": 47,
"start_character": 0,
"end_paragraph_id": 47,
"end_character": 1118,
"text": "A commonly used synthetic material is PLA - polylactic acid. This is a polyester which degrades within the human body to form lactic acid, a naturally occurring chemical which is easily removed from the body. Similar materials are polyglycolic acid (PGA) and polycaprolactone (PCL): their degradation mechanism is similar to that of PLA, but they exhibit respectively a faster and a slower rate of degradation compared to PLA. While these materials have well maintained mechanical strength and structural integrity, they exhibit a hydrophobic nature. This hydrophobicity inhibits their biocompatibility, which makes them less effective for in vivo use as tissue scaffolding. In order to fix the lack of biocompatibility, much research has been done to combine these hydrophobic materials with hydrophilic and more biocompatible hydrogels. While these hydrogels have a superior biocompatibility, they lack the structural integrity of PLA, PCL, and PGA. By combining the two different types of materials, researchers are trying to create a synergistic relationship that produces a more biocompatible tissue scaffolding.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "5308868",
"title": "Synthetic resin",
"section": "",
"start_paragraph_id": 2,
"start_character": 0,
"end_paragraph_id": 2,
"end_character": 716,
"text": "Synthetic resins are of several classes. Some are manufactured by esterification of organic compounds. Some are thermosetting plastics in which the term \"resin\" is loosely applied to the reactant or product, or both. \"Resin\" may be applied to one of two monomers in a copolymer, the other being called a \"hardener\", as in epoxy resins. For thermosetting plastics that require only one monomer, the monomer compound is the \"resin\". For example, liquid methyl methacrylate is often called the \"resin\" or \"casting resin\" while in the liquid state, before it polymerizes and \"sets\". After setting, the resulting PMMA is often renamed acrylic glass, or \"acrylic\". (This is the same material called Plexiglas and Lucite).\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
omj88 | what do you do with your invention idea? | [
{
"answer": "Write down all your plans, print them out, and mail them to yourself, never opening the envelope. \nIt's a poor man's copyright. \nOther than that, I don't know. Hopefully someone else has more in-depth knowledge. ",
"provenance": null
},
{
"answer": "Really depends on what you've invented/what your idea is.\n\nIf it's a mass-market thing, or something that can be quickly duplicated by competitors and you haven't got the capacity to produce and distribute widely yourself, you might want to consider licensing. Find a company doing something similar or related and pitch it to them in exchange for a license fee.\n\nIf you're going it alone, then a business plan, marketing plan, etc are a must. How much money you need depends on what your startup and operating costs will be. Whether you'll need patents depends on the idea too. \n\nBefore you even start going through the process of developing a plan though, it's worth your time to discuss it with some people first. It might seem like a great idea, but we aren't always the first ones to see major flaws in our plans. Most importantly, try to avoid using family or close friends for this, as they're the least likely to be completely honest if an idea really sucks.",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "44312",
"title": "Invention",
"section": "Section::::Process of invention.:Conceptual means.\n",
"start_paragraph_id": 20,
"start_character": 0,
"end_paragraph_id": 20,
"end_character": 484,
"text": "Invention is often a creative process. An open and curious mind allows an inventor to see beyond what is known. Seeing a new possibility, connection or relationship can spark an invention. Inventive thinking frequently involves combining concepts or elements from different realms that would not normally be put together. Sometimes inventors disregard the boundaries between distinctly separate territories or fields. Several concepts may be considered when thinking about invention.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "44312",
"title": "Invention",
"section": "",
"start_paragraph_id": 1,
"start_character": 0,
"end_paragraph_id": 1,
"end_character": 507,
"text": "An invention is a unique or novel device, method, composition or process. The invention process is a process within an overall engineering and product development process. It may be an improvement upon a machine or product or a new process for creating an object or a result. An invention that achieves a completely unique function or result may be a radical breakthrough. Such works are novel and not obvious to others skilled in the same field. An inventor may be taking a big step in success or failure.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "44312",
"title": "Invention",
"section": "Section::::Process of invention.:Conceptual means.:Exploration.\n",
"start_paragraph_id": 30,
"start_character": 0,
"end_paragraph_id": 30,
"end_character": 241,
"text": "Invention is often an exploratory process with an uncertain or unknown outcome. There are failures as well as successes. Inspiration can start the process, but no matter how complete the initial idea, inventions typically must be developed.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "44312",
"title": "Invention",
"section": "Section::::Process of invention.:Conceptual means.:Re-envision.\n",
"start_paragraph_id": 26,
"start_character": 0,
"end_paragraph_id": 26,
"end_character": 645,
"text": "To invent is to see anew. Inventors often envision a new idea, seeing it in their mind's eye. New ideas can arise when the conscious mind turns away from the subject or problem when the inventor's focus is on something else, or while relaxing or sleeping. A novel idea may come in a flash—a Eureka! moment. For example, after years of working to figure out the general theory of relativity, the solution came to Einstein suddenly in a dream \"like a giant die making an indelible impress, a huge map of the universe outlined itself in one clear vision\". Inventions can also be accidental, such as in the case of polytetrafluoroethylene (Teflon).\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "33942303",
"title": "Utility in Canadian patent law",
"section": "Section::::General principles.\n",
"start_paragraph_id": 4,
"start_character": 0,
"end_paragraph_id": 4,
"end_character": 249,
"text": "An invention is useful if it does what it promises; following the directions should result in the desired effect. The inventor does not have to have created the product of the invention, but the specifications must disclose an actual way to do so. \n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "44312",
"title": "Invention",
"section": "Section::::Process of invention.:Practical means of invention.\n",
"start_paragraph_id": 15,
"start_character": 0,
"end_paragraph_id": 15,
"end_character": 414,
"text": "Idea for an Invention may be developed on paper or on a computer, by writing or drawing, by trial and error, by making models, by experimenting, by testing and/or by making the invention in its whole form. Brainstorming also can spark new ideas for an invention. Collaborative creative processes are frequently used by engineers, designers, architects and scientists. Co-inventors are frequently named on patents.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "44312",
"title": "Invention",
"section": "Section::::Process of invention.:Practical means of invention.\n",
"start_paragraph_id": 17,
"start_character": 0,
"end_paragraph_id": 17,
"end_character": 243,
"text": "In the process of developing an invention, the initial idea may change. The invention may become simpler, more practical, it may expand, or it may even \"morph\" into something totally different. Working on one invention can lead to others too.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
5aq048 | how do television ratings work? how long do i have to be tuned in to a channel for the rating to count? and what's the correlation between the rating number (i.e. 13.4) and the number of viewers? | [
{
"answer": "Ratings are based off what are called Nielsen Ratings. The Nielsen Company employs a system where they select families of a certain demographic in every single area code and \"hires\" out these families to be what are known as The Nielsen Families.\n\nHow do they gather what shows they watch? Nielsen employs a box that connects to a family's DVR or cable box as well as connects to their TV so that they know exactly what shows the family is watching, when they watch it, how they watch it (recorded or live), and how often. All of this information gets transferred into the box and that's then transmitted to their data warehouse down in Texas.\n\nThere, millions upon millions of data points are migrated, mined, and reported out to various companies who have bought media, and they receive a report around GRPs or Gross Rating Points. Gross Rating Points tell you the frequency (how often and length) and reach (# of Nielsen families). Each company has a set threshold that they wish to hit, so that's how some shows get cancelled vs others. \n\nNot everyone can impact ratings as this would require tons of data, plus not everyone wants to have their viewing habits shared with companies. You cannot choose to become a Nielsen Family; you have to live in a certain area and hit a type of demographic (income, race, make up of the family, etc) for you to be selected by The Nielsen Company.\n\nRatings count by seconds, so you can be on a channel for a brief moment for it to be counted. For example, if you're channel surfing, the box will record exactly what channels you accessed and for how long, even if it was for a second or less. They can also tell if you've accessed the channel guide. They can also tell when you switched the TV over to gaming and played a game.\n\nThere's a high correlation between the two as the ratings take into account number of viewers (reach) and frequency of viewing (how many times viewed and length of time).",
"provenance": null
},
{
"answer": "I've been a Nielsen family participant on 2 different instances. Once about 20 years ago in OK City, and again, 1 year ago in Texas.\n\nThey do allow a small fee to the participants, but all tracking in both instances for me was a manually written log provided by the Nielsen company. I would fill out and mail them the log, and they would send a new blank log back for the next period. \n\nIn both cases, I eventually gave up on the arrangement because my TV watching was very sporadic and I didn't like filling out and keeping up with the log books. \n\nIf they would have provided a box of some kind that tracks viewing, I'd still be doing it. I'm not sure what dictates who gets a box and who has to do the logs by hand.\n\n\n\n\n",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "8711222",
"title": "Target rating point",
"section": "",
"start_paragraph_id": 1,
"start_character": 0,
"end_paragraph_id": 1,
"end_character": 556,
"text": "Television rating point (TRP) for calculation purposes is a device attached to the TV set in a few thousand viewers, houses for judging purposes. These numbers are treated as a sample from the overall TV owners in different geographical and demographic sectors. Using a device a special code is telecasted during the programme, It records the time and the programme that a viewer watches on a particular day. The average is taken for a 30-day period, which gives the viewership status for the particular channel. It is also known as \"Target Rating Point\".\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "2456847",
"title": "Audience measurement",
"section": "Section::::Ratings point.\n",
"start_paragraph_id": 30,
"start_character": 0,
"end_paragraph_id": 30,
"end_character": 669,
"text": "One single television ratings point (Rtg or TVR) represents 1% of television households in the surveyed area in a given minute. As of 2004, there are an estimated 109.6 million television households in the United States. Thus, a single national ratings point represents 1%, or 1,096,000 television households for the 2004–05 season. When used for the broadcast of a program, the average rating across the duration of the show is typically given. Ratings points are often used for specific demographics rather than just households. For example, a ratings point among the key 18- to 49-year-olds demographic is equivalent to 1% of all 18- to 49-year-olds in the country.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "182410",
"title": "Television in the United Kingdom",
"section": "Section::::Channels and channel owners.:Viewing statistics.:Most viewed channels.\n",
"start_paragraph_id": 28,
"start_character": 0,
"end_paragraph_id": 28,
"end_character": 285,
"text": "The Broadcasters' Audience Research Board (BARB) measures television ratings in the UK. As of November 2017, the average weekly viewing time per person across all broadcast channels was 24 hours 16 minutes. 12 channels have a share of total viewing time across all channels of ≥ 1.0%.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "13673176",
"title": "Million Dollar Password",
"section": "Section::::Ratings.:U.S. standard ratings.\n",
"start_paragraph_id": 31,
"start_character": 0,
"end_paragraph_id": 31,
"end_character": 419,
"text": "In the following summary, \"rating\" is the percentage of all households with televisions that tuned to the show, and \"share\" is the percentage of all televisions in use at that time that are tuned in. \"18–49\" is the percentage of all adults aged 18–49 tuned into the show. \"Viewers\" is the number of viewers, in millions, watching at the time. \"Rank\" is how well the show did compared to other TV shows aired that week.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "10130609",
"title": "Samantha Who?",
"section": "Section::::U.S. television ratings.:Standard ratings.\n",
"start_paragraph_id": 26,
"start_character": 0,
"end_paragraph_id": 26,
"end_character": 418,
"text": "In the following summary, \"rating\" is the percentage of all households with televisions that tuned to the show, and \"share\" is the percentage of all televisions in use at that time that are tuned in. \"18-49\" is the percentage of all adults aged 18–49 tuned into the show. \"Viewers\" are the number of viewers, in millions, watching at the time. \"Rank\"; how well the show did compared to other TV shows aired that week.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "796129",
"title": "Japanese television drama",
"section": "Section::::Importance of ratings.:Rating system.\n",
"start_paragraph_id": 21,
"start_character": 0,
"end_paragraph_id": 21,
"end_character": 662,
"text": "The rating system is very simple. All the major Japanese television networks make up the television market, so a research firm must determine the size of an average audience. The audience size is determined using two factors: the amount of content that is transmitted and the amount that is received, as market size varies from firm to firm. The viewer count of a given episode is calculated using a variety of polling methods. Ratings are calculated using a percentage or point system. This is based on the episode's viewership numbers divided by the market size. Finally, the numbers are published on the research firm's website. A hard copy is also produced.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "11244302",
"title": "Big Shots (TV series)",
"section": "Section::::U.S. Nielsen ratings.\n",
"start_paragraph_id": 27,
"start_character": 0,
"end_paragraph_id": 27,
"end_character": 342,
"text": "In the following chart, \"rating\" is the percentage of all households with televisions that tuned to the show, and \"share\" is the percentage of all televisions in use at that time that are tuned in. \"18–49\" is the percentage of all adults aged 18–49 tuned into the show. \"Viewers\" are the number of viewers, in millions, watching at the time.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
14r93x | why are smartphones $500-700+ while laptops with the same or better specs are considerably less? | [
{
"answer": "Designing electronics when you have no, or relaxed, space constraints is **much** easier and therefore cheaper. Also, the specific parts, while maybe less powerful, are likely more efficient with regards to power (this is highly variable, of course). So even though your particular processor or whatnot is *slower,* it has a more complicated design to ensure better battery life and smaller physical size.\n\nEDIT: A lot of people are nitpicking about the fact that margins are very high in devices like Samsung's phones and the iPhone line. Just because their *raw materials cost* is low, and the profit margin is high on the device, does not mean miniaturization is irrelevant. The reason they can charge those prices is because miniaturization is **hard** and they've made new, successful, miniature devices. They are recouping their R & D costs. The market will push these prices down (as evidenced by Google's new phones) because the bulk of the R & D is done, and that cost isn't repeated. Companies learn from one another, which is in part some of the issues with patent laws, but that's another story.",
"provenance": null
},
{
"answer": "It's all about size. You wouldn't be able to fit a laptop into your pocket. I would imagine making technology smaller requires a smarter way to do so.",
"provenance": null
},
{
"answer": "Proximity sensor, gyroscope, wifi, bluetooth, 3g, 4g, GPS, multitouch super dense screen, multiple cameras, multiple microphones, light sensor, NFC stuff. There's a ton of stuff in there. I'm surprised it's so cheap. ",
"provenance": null
},
{
"answer": "This is not an ELI5 answer, but if you are really interested in smartphone costs listen to the section about it (driving smartphone costs down) on this podcast: _URL_0_. These guys are probably the most thorough tech reviewers and its still fairly easy to understand if you follow the industry at all.",
"provenance": null
},
{
"answer": "A lot of people are saying it's expensive to make things smaller, but nobody is really explaining why. It's not just that you have to do more R & D, you also have to manufacture everything to much tighter tolerances.\n\nEngineers know that when the plans for something say it should be 1mm thick, every single unit won't come out at exactly 1.0000000...mm when it's manufactured. Consequently, they design whatever it is that they're designing such that it will still work properly if all the parts are a bit larger or smaller--they incorporate a **tolerance**. If certain parts of something need to be almost exactly the specified size (perhaps the two halves of a hinge, so it can swing smoothly), they need to be manufactured to a tight tolerance.\n\nThe smaller a device is, the tighter all the tolerances have to be, because there's less room for error. And to manufacture parts with very tight tolerances, you need manufacturing equipment that is *itself* built to very tight tolerances, which in turn had to be manufactured with other tools with tight tolerances. This is, of course, expensive. If you think about it, it's amazing anything can be this accurate at all, considering the whole process started out with sticks and rocks.",
"provenance": null
},
{
"answer": "While the answers about smaller = lower tolerances and such are right, some of it is also plain price gouging. Apple sells iPhones for 3x what it costs to produce them because people will pay that for them.",
"provenance": null
},
{
"answer": "Why are laptops $500-700+ while desktops with the same or better specs are considerably less?",
"provenance": null
},
{
"answer": "The actual reason. \"Because the cost of the phone to the customer is subsidized by the carrier.\" If the carrier is going to discount the phone from $500 to $0 on a 3 year term, then there is no reason for the customer to care what the actual price is, so no reason for manufacturers to reduce price. If everyone had to buy hardware outright, the manufacturers would be forced to be competitive, and just like in the PC industry, prices would come crashing down. Ever wondered why an iPod Touch costs $179.99 but an iPhone's full price is $699.99 when it's almost the exact same hardware with an antenna? It is because the iPod is sold at low margin and needs to be marketed at a reasonable price, and the iPhone needs to be marketed at a similar price (with contract). But since carriers are willing to subsidize the price, why not charge more and take the full subsidy as profit. Neither the carrier nor the manufacturer is expecting people to buy phones outright. They would prefer you on a term.",
"provenance": null
},
{
"answer": "Everyone is saying space, which is certainly a huge factor. The other is simply supply and demand. Laptops have been declining in sales due to competition from tablets and smartphones (not to mention people are getting savvier and learning how to maintain their computer so they can keep it for more than a couple of years - computers are not as disposable now as they were 5 or 10 years ago).\n\nMore people are getting smartphones now. It's also worth pointing out that relatively few people end up paying $500-700 on a smartphone when they just buy them subsidized through their wireless providers. ",
"provenance": null
},
{
"answer": "The computer hardware industry is one that is highly competitive (many players) with very low margins (profit). This is one of the reasons why many of the older manufacturers are moving out of the business (e.g. IBM selling to Lenovo). You will often see the prices come very close to the base component cost, especially when they go on sale. \n\nSmartphones are relatively new (the iPhone was first released in 2007), and especially Apple has been making a killing off of them by having a huge profit margin. [The iPhone 5's bill of materials \\(parts\\) for the 16GB is $207, and for the 32GB is $209. The total manufacturing cost (labor to assemble) gets it to about $230.](_URL_1_) Apple not only makes money from the extra ~$400 (and much more for larger GB versions), plus iTunes store/app fees, some subscription fees, and the rest of their business (laptops, software, etc). Yes, they have R & D, advertising and store costs, but so do other 'traditional' companies like Dell, Intel, Nvidia, MS, Best Buy etc, yet they combined can sell you computers near cost. \n\nThe reason why Apple and others can sell so much is because everyone wants one. They want one because there's little competition for it so far compared to PCs, it's a closed system (you can't just buy a phone and install your own OS easily, and there's the whole closed app market system), and because people everywhere from China to the USA think of it as a status symbol. \n\nHowever, people say that this will go down soon, especially with the software help from Android. Yes, the Nexus 4 is cheaper at $300 and the Kindle Fire is only $200, but the real prices will come down with more competition. For example, did you know that the best selling smartphone in Kenya is a Huawei (Chinese) running Android that costs [only $80?](_URL_0_)",
"provenance": null
},
{
"answer": "Because people are willing to pay $500-700 for them. ",
"provenance": null
},
{
"answer": "This is a real toughie. Hmmm... I design embedded electronics, so my best analogy is this. Building a smartphone is like building a mansion in the city. Building a laptop is like building a mansion in the country. The city mansion has less space left/right, so I need to build your mansion with many stories (PCB layers). Because I built so many stories, the plumbing and electricity are more complicated (emissions/signal integrity/blind and buried via technology). Also, because your city mansion is taller than a country mansion, it's harder for me to add the jacuzzi, karaoke machine and TV systems, b/c your mansion is in, let's say, Manhattan, and it is more expensive to get the special crane to get the jacuzzi to the 5th floor. Whereas the country mansion can just have a regular one installed outside in the backyard (analogy for package-on-package memory ICs. ...... Um, \"yo dawg, I installed a chip on top of your chip?\")\n\nAnother thing is your city mansion is in an area where I cannot park big Mack trucks to deliver things easily. I have to deliver things one at a time b/c the streets are narrow, and that means some things like the swimming pool need special designers who can figure out how to make all the parts smaller and fit, and also somehow get installed in a more difficult way. That special designer is expensive (ultra-fine-pitch BGA designs for circuits and the higher accuracy machines and finer pitches needed in order to route them).\n\nTl;dr\n\nBuilding a mansion in the city is harder than building one in the open country.\n\n~Sent from my android",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "220633",
"title": "Point of sale",
"section": "Section::::History.:Modern software (post-1990s).\n",
"start_paragraph_id": 19,
"start_character": 0,
"end_paragraph_id": 19,
"end_character": 475,
"text": "As far as computers are concerned, off-the-shelf versions are usually newer and hence more powerful than proprietary POS terminals. Custom modifications are added as needed. Other products, like touchscreen tablets and laptops, are readily available in the market, and they are more portable than traditional POS terminals. The only advantage of the latter is that they are typically built to withstand rough handling and spillages; a benefit for food & beverage businesses.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "14260687",
"title": "Mobile operating system",
"section": "Section::::Market share.:Usage.\n",
"start_paragraph_id": 445,
"start_character": 0,
"end_paragraph_id": 445,
"end_character": 401,
"text": "According to StatCounter web use statistics (a proxy for all use), smartphones (alone without tablets) have majority use globally, with desktop computers used much less (and Android in particular more popular than Windows). Use varies however by continent with smartphones way more popular in the biggest continents, i.e. Asia, and the desktop still more popular in some, though not in North America.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "18457137",
"title": "Personal computer",
"section": "Section::::Types.:Portable.:Laptop.\n",
"start_paragraph_id": 48,
"start_character": 0,
"end_paragraph_id": 48,
"end_character": 426,
"text": "Unlike desktop computers, only minor internal upgrades (such as memory and hard disk drive) are feasible owing to the limited space and power available. Laptops have the same input and output ports as desktops, for connecting to external displays, mice, cameras, storage devices and keyboards. Laptops are also a little more expensive compared to desktops, as the miniaturized components for laptops themselves are expensive.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "20433252",
"title": "List of laptop brands and manufacturers",
"section": "Section::::Original design manufacturers (ODMs).:ODM laptop units sold and market shares.\n",
"start_paragraph_id": 55,
"start_character": 0,
"end_paragraph_id": 55,
"end_character": 422,
"text": "There is a discrepancy between the 2009 numbers due to the various sources cited; i.e. the units sold by all ODMs add up to 144.3 million laptops, which is much more than the given total of 125 million laptops. The market share percentages currently refer to those 144.3 million total. Sources may indicate hard drive deliveries to the ODM instead of actual laptop sales, though the two numbers may be closely correlated.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "21808348",
"title": "Computer hardware",
"section": "Section::::Types of computer systems.:Personal computer.\n",
"start_paragraph_id": 7,
"start_character": 0,
"end_paragraph_id": 7,
"end_character": 260,
"text": "The personal computer, also known as the PC, is one of the most common types of computer due to its versatility and relatively low price. Laptops are generally very similar, although they may use lower-power or reduced size components, thus lower performance.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "3854883",
"title": "Subnotebook",
"section": "",
"start_paragraph_id": 3,
"start_character": 0,
"end_paragraph_id": 3,
"end_character": 380,
"text": "Subnotebooks are smaller than full sized laptops but larger than handheld computers. They often have smaller-sized screens, less than 14 inches, and weigh less than typical laptops, usually being less than 2 kg (4.4 lbs). The savings in size and weight are usually achieved partly by omitting ports and optical disc drives. Many can be paired with docking stations to compensate.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "644662",
"title": "Pixel density",
"section": "Section::::Smartphones.\n",
"start_paragraph_id": 40,
"start_character": 0,
"end_paragraph_id": 40,
"end_character": 447,
"text": "Smartphones use small displays, but modern smartphone displays have a larger PPI rating, such as the Samsung Galaxy S7 with a quad HD display at 577 PPI, Fujitsu F-02G with a quad HD display at 564 PPI, the LG G6 with quad HD display at 564 PPI or – XHDPI or Oppo Find 7 with 534 PPI on 5.5\" display – XXHDPI (see section below). Sony's Xperia XZ Premium has a 4K display with a pixel density of 807 PPI, the highest of any smartphone as of 2017.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
3f9i76 | how are these girls doing the math in their head so fast? | [
{
"answer": "Do you notice how they're moving their hands around as the guy reads the numbers? That's because they're using a mental abacus. An abacus allows you to do fast calculations that would be very hard to do in your head. All they have to do is picture what the abacus would look like and they can read off the answer even without actually holding on to one.",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "5584076",
"title": "Cantamath",
"section": "Section::::Team Competition section.\n",
"start_paragraph_id": 5,
"start_character": 0,
"end_paragraph_id": 5,
"end_character": 792,
"text": "In the Team Competition section, each participating school sends in four selected student mathematicians per year level. The participants compete against other schools in the Christchurch Horncastle Arena. It's a speed competition and takes 30 minutes. There are 20 questions for each team to complete, the aim being for each team to answer all questions the fastest. One of the four team members is a runner who runs to a judge to check if the answer to their current question is right. Each question is worth 5 points, allowing a maximum score of 100. A team can only attempt one question at a time and have to keep working on it until they get it right. Passing is allowed, but no points will be received for that question, as well as preventing the team from returning to that question. \n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "43641932",
"title": "The Challenge: Battle of the Exes II",
"section": "Section::::Gameplay.:Challenge games.\n",
"start_paragraph_id": 42,
"start_character": 0,
"end_paragraph_id": 42,
"end_character": 1109,
"text": "BULLET::::- Don't Forget About Me: Teams have to solve a memory puzzle while hiking up and down a mountain. First, the guys will lift up a 300-pound steel door out of the sand that is connected to a rope for as long as they can. Under the steel door is an answer key that contains various colors of squares and rectangles, which the female partners will have to memorize. Whenever the girls feel that they have memorized the answer key enough, or their male partners are unable to keep the steel doors from shutting, each partner will be required to hike up a mountain with a bag containing their puzzle pieces, to their designated puzzle station, which the girls will have to solve. The process continues back and forth, and the first team to correctly solve their puzzle wins the Power Couple. Initially, the last team to correctly solve their puzzle would be automatically sent to the Dome; however, due to time constraints and the reduced amount of daylight, host T. J. Lavin explained to the last four teams that the team with the fewest correctly-solved puzzle pieces would be sent to the Dome instead.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "11482540",
"title": "World Scholar's Cup",
"section": "Section::::Events.:Team Events.:The Scholar's Bowl.\n",
"start_paragraph_id": 31,
"start_character": 0,
"end_paragraph_id": 31,
"end_character": 607,
"text": "In order to answer the questions, each team of students is given a \"clicker\" that connects to a scoring computer on stage. Students then choose their answer by pressing A, B, C, D, or E on their clicker. Once the question has been read aloud by the bowl master (usually Alpaca-In-Chief Daniel Berdichevsky), students are given 15 seconds to submit their answer. The questions gets harder each time and worth more points than the previous one. There are sometimes rapid fire questions which have to be answered in 5 seconds (5 such questions will be present and each question will usually carry 100 points).\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "2243303",
"title": "The Strawberry Alarm Clock (radio programme)",
"section": "Section::::Features on the show.\n",
"start_paragraph_id": 9,
"start_character": 0,
"end_paragraph_id": 9,
"end_character": 230,
"text": "Each morning at 8.40, Jim-Jim & Mark chat to a kid in a car on their way to school. They then must try guess what the kid is thinking about in 20 seconds. If they fail, the kid shouts the catchphrase \"Ha Ha, In Your Face Suckers\"\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "1599236",
"title": "Get the Picture (game show)",
"section": "Section::::Gameplay.:Knowledge activities.\n",
"start_paragraph_id": 41,
"start_character": 0,
"end_paragraph_id": 41,
"end_character": 228,
"text": "BULLET::::- You Can Count On It – Questions related to math were being called out, and players had to guess what number was the answer to the problem. After 30 seconds, the teams had to guess what was the picture on the screen.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "28352957",
"title": "Family Game Night (TV series)",
"section": "Section::::Games.:Season 2.:Green Scream.\n",
"start_paragraph_id": 68,
"start_character": 0,
"end_paragraph_id": 68,
"end_character": 349,
"text": "Kids roll around on a green screen floor, revealing pictures (associated with a category) for the parents to guess in 90 seconds. Up to ten are used per family and a right answer scores ten points. Highest score after the game wins. If there is a tie after the 90 second time limit, the team who correctly solved the words in the fastest time wins.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "39425949",
"title": "Gender inequality in the United States",
"section": "Section::::Current issues for women.:Education.:Gender inequality in elementary and middle schools.\n",
"start_paragraph_id": 51,
"start_character": 0,
"end_paragraph_id": 51,
"end_character": 722,
"text": "So often in our society, girls receive signals from an early age that they are not good at math, or that boys are simply better. This can occur at home, when wives ask their husbands for help when it comes to math. In 2013, women received 57% of all Bachelor’s degrees, however they only received 43% of math degrees, 19% of engineering degrees, and 18% of computer science degrees. At school and at home, many young girls receive the message that they either “have the math gene or they do not.” When a mother tells her daughter that she wasn’t good at math in school, oftentimes, the daughter’s mathematical achievement will decrease. Oftentimes, women do not realize they are sending these messages to their daughters.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
18lac1 | why it hurts to look at the sky on a cloudy day | [
{
"answer": "* ELI5 version: the sky is still very bright even when you're not looking at the sun, so it can still hurt your eyes.\n\n* Super technical version: [Here is an AskScience question that has a very detailed answer](_URL_0_).",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "12820028",
"title": "Mandelbaum effect",
"section": "Section::::Discussion.\n",
"start_paragraph_id": 3,
"start_character": 0,
"end_paragraph_id": 3,
"end_character": 344,
"text": "When visibility is poor, as at night during rainstorms or fog, the eye tends to relax and focus on its best distance, technically known as \"empty field\" or \"dark focus\". This distance is usually just under one meter (one yard), but varies considerably among people. The tendency is aggravated by objects close to the eye, drawing focus closer.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "154576",
"title": "Cloud cover",
"section": "Section::::Variability.\n",
"start_paragraph_id": 9,
"start_character": 0,
"end_paragraph_id": 9,
"end_character": 281,
"text": "On a regional scale, it can be also worth of note that some extensive areas of Earth experience cloudy conditions virtually all time such as Central America's Amazon Rainforest while other ones experience clear-sky conditions virtually all time such as the Africa's Sahara Desert.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "1077335",
"title": "Polar stratospheric cloud",
"section": "Section::::Formation.\n",
"start_paragraph_id": 4,
"start_character": 0,
"end_paragraph_id": 4,
"end_character": 211,
"text": "Due to their high altitude and the curvature of the surface of the Earth, these clouds will receive sunlight from below the horizon and reflect it to the ground, shining brightly well before dawn or after dusk.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "3657365",
"title": "Inquiry",
"section": "Section::::Example of inquiry.:Looking more closely.:Analogy of experience.:Testing.\n",
"start_paragraph_id": 78,
"start_character": 0,
"end_paragraph_id": 78,
"end_character": 318,
"text": "If the observer looks up and does not see dark clouds, or if he runs for shelter but it does not rain, then there is fresh occasion to question the utility or the validity of his knowledge base. But we must leave our foulweather friend for now and defer the logical analysis of this testing phase to another occasion.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "731893",
"title": "Grey",
"section": "Section::::In the sciences, nature, and technology.:Storm clouds.\n",
"start_paragraph_id": 25,
"start_character": 0,
"end_paragraph_id": 25,
"end_character": 474,
"text": "The whiteness or darkness of clouds is a function of their depth. Small, fluffy white clouds in summer look white because the sunlight is being scattered by the tiny water droplets they contain, and that white light comes to the viewer's eye. However, as clouds become larger and thicker, the white light cannot penetrate through the cloud, and is reflected off the top. Clouds look darkest grey during thunderstorms, when they can be as much as 20,000 to 30,000 feet high.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "8282374",
"title": "Tropical cyclone",
"section": "Section::::Physical structure.:Eye and center.\n",
"start_paragraph_id": 13,
"start_character": 0,
"end_paragraph_id": 13,
"end_character": 454,
"text": "The cloudy outer edge of the eye is called the \"eyewall\". The eyewall typically expands outward with height, resembling an arena football stadium; this phenomenon is sometimes referred to as the \"stadium effect\". The eyewall is where the greatest wind speeds are found, air rises most rapidly, clouds reach to their highest altitude, and precipitation is the heaviest. The heaviest wind damage occurs where a tropical cyclone's eyewall passes over land.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "617947",
"title": "Weather lore",
"section": "Section::::Reliability.:Sayings which may be locally accurate.:Red sky at night.\n",
"start_paragraph_id": 14,
"start_character": 0,
"end_paragraph_id": 14,
"end_character": 329,
"text": "When weather systems predominantly move from west to east, a red sky at night indicates that the high pressure air (and better weather) is westwards. In the morning the light is eastwards, and so a red sky then indicates the high pressure (and better weather) has already passed, and an area of low pressure is following behind.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
ja7yd | li5: poker | [
{
"answer": "In almost every form of poker, you make a five card hand. The hands ranked from best to worst (the notation should make sense if you're familiar with playing cards, Jc is the jack of clubs, Th is the ten of hearts, etc):\n\n* Straight flush (same suit, 5 in a row, like 5h 6h 7h 8h 9h)\n* Four of a kind (like 6c 6s 6d 6h 9c)\n* Full house (three of one rank, two of another, like 8c 8s 8d 5h 5c)\n* Flush (5 of one suit, like 3c 5c 9c Tc Qc)\n* Straight (5 in a row, like 8c 9c Tc Jc Qc)\n* Three of a kind (like 2c 2s 2d Jh Kh)\n* Two pair (like 6s 6c Ts Th Ad)\n* One pair (like 3d 3c 2h 5s 9c)\n* High card (this means none of the above, like 2c 4c 7s Tc Qd is called \"Queen high\")\n\nSome games give you more than 5 cards, some include a combination of cards just for you and what are called \"community cards\" which everyone can use in your hand. But in just about every game, you will be trying to make a 5 card hand.\n\nThe way betting works, is that it generally starts with the person left of the dealer. When the betting gets to you:\n\n* If no one has bet yet this round, you may **check** (do nothing) or **bet** (put money into the pot that others will at least have to match to continue).\n* If someone else has bet before you act, you may **fold** (give up the hand, you don't have to put any more money in), **call** (match the person's bet to stay in), or **raise** (in addition to matching the bet, you bet even more). \n\nSome games have what are called \"fixed limits.\" In every betting round, there is an amount you are allowed to bet. If the fixed limit for a round is $2, the first player may check or bet $2. If he bets $2, the next player may fold, call $2, or raise another $2 for a total of $4. In fixed limit, the bet in each round goes up in increments of the limit.\n\nOther games are called \"no limit.\" This means you may bet any or all of your chips at any time. 
Two exceptions: there is generally a minimum, and if someone has bet $x and you want to raise, you have to raise at least by another $x for a total of $2x.\n\nTwo important pieces of advice:\n\n* Tell the people you're playing with that you are a newbie. It's a heck of a lot easier to get the hang of things by having things explained to you as it goes. \n* Figure out how much money you are ok with losing before arriving. Under no circumstances should you let yourself lose more than that.",
"provenance": null
},
{
"answer": "Yes. I specifically searched for this ELI5. This is quite helpful, thank you!",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "10679",
"title": "Five-card draw",
"section": "",
"start_paragraph_id": 1,
"start_character": 0,
"end_paragraph_id": 1,
"end_character": 445,
"text": "Five-card draw (also known as a Cantrell draw) is a poker variant that is considered the simplest variant of poker, and is the basis for video poker. As a result, it is often the first variant learned by new players. It is commonly played in home games but rarely played in casino and tournament play. The variant is also offered by some online venues, although it is not as popular as other variants such as Seven-card stud and Texas hold 'em.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "75692",
"title": "List of poker variants",
"section": "Section::::Specific poker variant games.:Five-O poker.\n",
"start_paragraph_id": 43,
"start_character": 0,
"end_paragraph_id": 43,
"end_character": 540,
"text": "Five-O Poker is a heads-up poker variant in which both players must play five hands of five cards simultaneously. Four of the five cards in each hand are face-up. Once all five hands are down, there is a single round of betting. The winner is determined by matching each hand to the corresponding hand of the opponent. The player with the stronger poker hand in three (or more) out of the five columns, wins, unless a player folds on a bet that was made. If a player beats their opponent with all five hands, this is called a “Five-O” win.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "5360",
"title": "Card game",
"section": "Section::::Types.:Casino or gambling card games.:Poker games.\n",
"start_paragraph_id": 30,
"start_character": 0,
"end_paragraph_id": 30,
"end_character": 470,
"text": "Poker is a family of gambling games in which players bet into a pool, called the pot, value of which changes as the game progresses that the value of the hand they carry will beat all others according to the ranking system. Variants largely differ on how cards are dealt and the methods by which players can improve a hand. For many reasons, including its age and its popularity among Western militaries, it is one of the most universally known card games in existence.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "19464090",
"title": "Strip Poker (game show)",
"section": "Section::::Overview.\n",
"start_paragraph_id": 4,
"start_character": 0,
"end_paragraph_id": 4,
"end_character": 589,
"text": "The poker part of the game was based on five-card stud, using a deck of 24 cards ranging from 9 to Ace. Before each question, a pair of face-up cards came down a chute. The first question was directed at the guys and had to do with \"girl stuff\". If they got it right, they received control. A miss gave the girls a chance to take control by giving the correct answer. If they missed, however, the guys got control by default, because the question was in the girls' area of knowledge. Questions alternated between the two teams, with the girls being asked about \"guy stuff\" on their turns.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "28693069",
"title": "Poker Night at the Inventory",
"section": "Section::::Gameplay.\n",
"start_paragraph_id": 5,
"start_character": 0,
"end_paragraph_id": 5,
"end_character": 967,
"text": "\"Poker Night\" is a computer-based Texas Hold 'Em poker simulation between the player as an unseen participant and the four characters, Max, Tycho, The Heavy, and Strong Bad. Each player starts with a $10,000 buy-in and stays in the game until they are broke, with the goal of the player being the last player standing. The game uses no-limit betting and a gradually-increasing blind bets over the course of several rounds. Randomly, one of the four non-playable characters will not be able to front the money but will offer one of their possessions as buy-in for the game. The player can win these items as \"Team Fortress 2\" unlockable equipment only if he or she is the one to bust that non-player character out of the game. The game keeps track of the player's statistics over the course of several games, and by completing certain objects (such as number of hands or games won) can unlock different playing card or table artwork to customize the look of the game.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "23729097",
"title": "Poker strategy",
"section": "",
"start_paragraph_id": 1,
"start_character": 0,
"end_paragraph_id": 1,
"end_character": 532,
"text": "Poker is a popular card game that combines elements of chance and strategy. There are various styles of poker, all of which share an objective of presenting the least probable or highest-scoring hand. A poker hand is usually a configuration of five cards depending on the variant, either held entirely by a player or drawn partly from a number of shared, community cards. Players bet on their hands in a number of rounds as cards are drawn, employing various mathematical and intuitive strategies in an attempt to better opponents.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "23014",
"title": "Poker",
"section": "",
"start_paragraph_id": 1,
"start_character": 0,
"end_paragraph_id": 1,
"end_character": 451,
"text": "Poker is a family of card games that combines gambling, strategy, and skill. All poker variants involve betting as an intrinsic part of play, and determine the winner of each hand according to the combinations of players' cards, at least some of which remain hidden until the end of the hand. Poker games vary in the number of cards dealt, the number of shared or \"community\" cards, the number of cards that remain hidden, and the betting procedures.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
9arf3q | how did humans discover music? or is there music among animals as well? | [
{
"answer": "\"Or is there music among animals as well?\"\n\nYou - you've never heard of a bird? ",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "14918624",
"title": "Attenborough in Paradise and Other Personal Voyages",
"section": "Section::::Documentary Summaries.:\"The Song of the Earth\" (2000).\n",
"start_paragraph_id": 12,
"start_character": 0,
"end_paragraph_id": 12,
"end_character": 492,
"text": "This natural history of music begins with Attenborough playing the piano. Searching for the origins of human music, he traces its connections to the musical sounds that other animals make: the beauty of the wolf's howl, the complexity of the bat's cry, the deep rumble of the elephant's signals, the acoustically sophisticated sounds the dolphin produces and the songs of whales and birds. Why do these animals produce this amazing variety of sounds? It's all tied up with sex and territory.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "22860",
"title": "Paleolithic",
"section": "Section::::Human way of life.:Music.\n",
"start_paragraph_id": 59,
"start_character": 0,
"end_paragraph_id": 59,
"end_character": 797,
"text": "The origins of music during the Paleolithic are unknown. The earliest forms of music probably did not use musical instruments other than the human voice or natural objects such as rocks. This early music would not have left an archaeological footprint. Music may have developed from rhythmic sounds produced by daily chores, for example, cracking open nuts with stones. Maintaining a rhythm while working may have helped people to become more efficient at daily activities. An alternative theory originally proposed by Charles Darwin explains that music may have begun as a hominin mating strategy. Bird and other animal species produce music such as calls to attract mates. This hypothesis is generally less accepted than the previous hypothesis, but nonetheless provides a possible alternative.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "974035",
"title": "Richard Maurice Bucke",
"section": "Section::::\"Cosmic Consciousness\".\n",
"start_paragraph_id": 39,
"start_character": 0,
"end_paragraph_id": 39,
"end_character": 458,
"text": "In \"Cosmic Consciousness\", beginning with Part II, Bucke explains how animals developed the senses of hearing and seeing. Further development culminated in the ability to experience and enjoy music. Bucke states that, initially, only a small number of humans were able to see colors and experience music. But eventually these new abilities spread throughout the human race until only a very small number of people were unable to experience colors and music.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "1043371",
"title": "Zoomusicology",
"section": "Section::::Human interaction.\n",
"start_paragraph_id": 5,
"start_character": 0,
"end_paragraph_id": 5,
"end_character": 650,
"text": "Composers have evoked or imitated animal sounds in compositions including Jean-Philippe Rameau's \"The Hen\" (1728), Camille Saint-Saëns's \"Carnival of the Animals\" (1886), Olivier Messiaen's \"Catalogue of the Birds\" (1956–58) and Pauline Oliveros's \"El Relicario de los Animales\" (1977). Other examples include Alan Hovhaness's \"And God Created Great Whales\" (1970), George Crumb's \"Vox Balaenae\" (Voice of the Whale) (1971) and Gabriel Pareyon's \"Invention over the song of the Vireo atriccapillus\" (1999) and \"Kha Pijpichtli Kuikatl\" (2003). The Indian zoomusicologist, A. J. Mithra has composed music using bird, animal and frog sounds since 2008.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "1502471",
"title": "Prehistoric music",
"section": "Section::::Origins.\n",
"start_paragraph_id": 6,
"start_character": 0,
"end_paragraph_id": 6,
"end_character": 983,
"text": "Music can be theoretically traced to prior to the Paleolithic age. The anthropological and archaeological designation suggests that music first arose (among humans) when stone tools first began to be used by hominids. The noises produced by work such as pounding seed and roots into meal are a likely source of rhythm created by early humans. The first rhythm instruments or percussion instruments most likely involved the clapping of hands, stones hit together, or other things that are useful to create rhythm. Examples of paleolithic objects which are considered unambiguously musical are bone flutes or pipes; paleolithic finds which are currently open to interpretation include pierced phalanges (usually interpreted as \"phalangeal whistles\"), bullroarers, and rasps. These musical instruments date back as far as the paleolithic, although there is some ambiguity over archaeological finds which can be variously interpreted as either musical or non-musical instruments/tools. \n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "4390344",
"title": "Music psychology",
"section": "Section::::History.:Early history (pre-1860).\n",
"start_paragraph_id": 5,
"start_character": 0,
"end_paragraph_id": 5,
"end_character": 901,
"text": "The study of sound and musical phenomenon prior to the 19th century was focused primarily on the mathematical modelling of pitch and tone. The earliest recorded experiments date from the 6th century BCE, most notably in the work of Pythagoras and his establishment of the simple string length ratios that formed the consonances of the octave. This view that sound and music could be understood from a purely physical standpoint was echoed by such theorists as Anaxagoras and Boethius. An important early dissenter was Aristoxenus, who foreshadowed modern music psychology in his view that music could only be understood through human perception and its relation to human memory. Despite his views, the majority of musical education through the Middle Ages and Renaissance remained rooted in the Pythagorean tradition, particularly through the quadrivium of astronomy, geometry, arithmetic, and music.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "2384297",
"title": "The Lives of a Cell: Notes of a Biology Watcher",
"section": "Section::::Summary.\n",
"start_paragraph_id": 16,
"start_character": 0,
"end_paragraph_id": 16,
"end_character": 638,
"text": "Music is the only form of communication that saves us from an overwhelming amount of small talk. This is not only a human phenomenon, but happens throughout the animal world. Thomas makes examples of animals from termites and earthworms to gorillas and alligators that perform some sort of rhythmic noise making that can be interpreted as music if we had full range of hearing. From the vast number of animals that participate in music it is clear that the need to make music is a fundamental characteristic of biology. Thomas proposes that the animal world is continuing a musical memory that has been going since the beginning of time.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
2wok7x | Who was the first Ottoman Sultan to claim the title of Caliph, and how was he able to legitimize himself as such? | [
{
"answer": "Selim I \"the Grim,\" over the course of his brief reign 1512-1520, secured Mecca, Medina, and Jerusalem, the three Islamic holy cities, and utterly demolished the Mamelukes of Egypt, who had been seen as the holders/protectors of the Holy Cities. Selim's conquests totally changed the character of the Ottoman holdings, which had previously been majority Christian and heavily European, into a truly Eastern Mediterranean empire with large Muslim populations in Syria and Egypt added. With the collapse of the Mamelukes, the possession of the holy cities, and the rivalry with the Shi'ite Safavids, proclaiming the Ottoman sultan the successor to the caliph tradition and the commander of the faithful etc. was just the natural next step.\n\nIn short, Selim became the first Ottoman caliph in 1517 after his dramatic conquest of all the Mameluke holdings (Egypt, the Levant, and the Hedjaz).",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "29339048",
"title": "Abolition of the Ottoman sultanate",
"section": "Section::::End of Ottoman Empire.\n",
"start_paragraph_id": 12,
"start_character": 0,
"end_paragraph_id": 12,
"end_character": 586,
"text": "The Ottoman Dynasty embodied the Ottoman Caliphate since the fourteenth century, starting with the reign of Murad I. The Ottoman Dynasty kept the title Caliph, power over all Muslims, as Mehmed's cousin Abdülmecid II took the title. The Ottoman Dynasty left as a political-religious successor to Muhammad and a leader of the entire Muslim community without borders in a post Ottoman Empire. Abdülmecid II's title was challenged in 1916 by the leader of the Arab Revolt King Hussein bin Ali of Hejaz, who denounced Mehmet V, but his kingdom was defeated and annexed by Ibn Saud in 1925.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "4168529",
"title": "List of Caliphs",
"section": "Section::::Ecumenical caliphates.:Ottoman Caliphate (1517 – 3 March 1924).\n",
"start_paragraph_id": 15,
"start_character": 0,
"end_paragraph_id": 15,
"end_character": 269,
"text": "The head of the Ottoman dynasty was just entitled \"Sultan\" originally, but soon it started accumulating titles assumed from subjected peoples. Murad I (reigned 1362–1389) was the first Ottoman claimant to the title of Caliph; claimed the title after conquering Edirne.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "20427700",
"title": "List of sultans of the Ottoman Empire",
"section": "Section::::State organisation of the Ottoman Empire.\n",
"start_paragraph_id": 5,
"start_character": 0,
"end_paragraph_id": 5,
"end_character": 501,
"text": "After the conquest of Constantinople in 1453 by Mehmed II, Ottoman sultans came to regard themselves as the successors of the Roman Empire, hence their occasional use of the titles Caesar ( \"Qayser\") of Rûm, and emperor, as well as the caliph of Islam. Newly enthroned Ottoman rulers were girded with the Sword of Osman, an important ceremony that served as the equivalent of European monarchs' coronation. A non-girded sultan was not eligible to have his children included in the line of succession.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "216816",
"title": "Ottoman dynasty",
"section": "Section::::Titles.\n",
"start_paragraph_id": 19,
"start_character": 0,
"end_paragraph_id": 19,
"end_character": 575,
"text": "The first Ottoman ruler to actually claim the title of \"Sultan\" was Murad I, who ruled from 1362 to 1389. The holder of the title Sultan (سلطان in Arabic) was in Arabic-Islamic dynasties originally the power behind the throne of the Caliph in Bagdad and it was later used for various independent Muslim Monarchs. This title was senior to and more prestigious than that of Amir; it was not comparable to the title of Malik 'King', a secular title not yet common among Muslim rulers, or the Persian title of Shah, which was used mostly among Persian or Iranian related rulers.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "20427700",
"title": "List of sultans of the Ottoman Empire",
"section": "Section::::State organisation of the Ottoman Empire.\n",
"start_paragraph_id": 4,
"start_character": 0,
"end_paragraph_id": 4,
"end_character": 1009,
"text": "The Ottoman Empire was an absolute monarchy during much of its existence. By the second half of the fifteenth century, the sultan sat at the apex of a hierarchical system and acted in political, military, judicial, social, and religious capacities under a variety of titles. He was theoretically responsible only to God and God's law (the Islamic \"şeriat\", known in Arabic as \"sharia\"), of which he was the chief executor. His heavenly mandate was reflected in Islamic titles such as \"shadow of God on Earth\" ( \"ẓıll Allāh fī'l-ʿalem\") and \"caliph of the face of the earth\" ( \"Ḫalife-i rū-yi zemīn\"). All offices were filled by his authority, and every law was issued by him in the form of a decree called \"firman\" (). He was the supreme military commander and had the official title to all land. Osman (died 1323/4) son of Ertuğrul was the first ruler of the Ottoman state, which during his reign constituted a small principality (\"beylik\") in the region of Bithynia on the frontier of the Byzantine Empire.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "22278",
"title": "Ottoman Empire",
"section": "Section::::Government.\n",
"start_paragraph_id": 62,
"start_character": 0,
"end_paragraph_id": 62,
"end_character": 1550,
"text": "The highest position in Islam, \"caliphate\", was claimed by the sultans starting with Murad I, which was established as the Ottoman Caliphate. The Ottoman sultan, \"pâdişâh\" or \"lord of kings\", served as the Empire's sole regent and was considered to be the embodiment of its government, though he did not always exercise complete control. The Imperial Harem was one of the most important powers of the Ottoman court. It was ruled by the Valide Sultan. On occasion, the Valide Sultan would become involved in state politics. For a time, the women of the Harem effectively controlled the state in what was termed the \"Sultanate of Women\". New sultans were always chosen from the sons of the previous sultan. The strong educational system of the palace school was geared towards eliminating the unfit potential heirs, and establishing support among the ruling elite for a successor. The palace schools, which would also educate the future administrators of the state, were not a single track. First, the Madrasa (') was designated for the Muslims, and educated scholars and state officials according to Islamic tradition. The financial burden of the Medrese was supported by vakifs, allowing children of poor families to move to higher social levels and income. The second track was a free boarding school for the Christians, the \"Enderûn\", which recruited 3,000 students annually from Christian boys between eight and twenty years old from one in forty families among the communities settled in Rumelia or the Balkans, a process known as Devshirme (').\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "39831252",
"title": "List of Sheikh-ul-Islams of the Ottoman Empire",
"section": "",
"start_paragraph_id": 1,
"start_character": 0,
"end_paragraph_id": 1,
"end_character": 700,
"text": "The following is a list of Sheikh-ul-Islams of the Ottoman Empire. After the foundation of the Ottoman empire around 1300, the title of Sheikh-ul-Islam, formerly used in the Abbasid Caliphate, was given to a leader authorized to issue legal opinion or fatwa. During the reign of Sultan Murad II, (1421-1444, 1446-1451) the position became an official title, with authority over other muftis in the empire. In the late 16th century, Sheikh-ul-Islam were assigned to appoint and dismiss supreme judges, high ranking college professors, and heads of Sufi orders. Prominent figures include Zenbilli Ali Cemali Efendi (c1445-1526), Ibn-i Kemal (Kemalpasazade) (1468-1533) and Ebussuud Efendi (c1491-1574).\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
9ryupa | Why did Moscow become the capital of the USSR even though Petrograd was the center of the revolution? | [
{
"answer": "Firstly, one only has to look at a map of the front lines of the Soviet Civil War and of the pre-1939 borders. Petrograd was mere miles away from, first, the German-occupied areas of Russia signed away by the Bolsheviks, and then from the breakaway Baltic republics and Finland. Moscow, being in the centre of Bolshevik Russia, was a much more defensible position.\n\nSecondly, Moscow and St Petersburg have had a sort of duelling cultural meaning in Russian culture. St Petersburg was the city of the tsars and represented, essentially, Westernism. Moscow was the cultural heartland of Russia. In picking Moscow, the Bolsheviks in part rejected the capitalist West to build a new society out of true Russia.",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "19004",
"title": "Moscow",
"section": "Section::::History.:Soviet period (1917–1991).\n",
"start_paragraph_id": 60,
"start_character": 0,
"end_paragraph_id": 60,
"end_character": 273,
"text": "Following the success of the Russian Revolution of 1917, Vladimir Lenin, fearing possible foreign invasion, moved the capital from Saint Petersburg back to Moscow on March 12, 1918. The Kremlin once again became the seat of power and the political centre of the new state.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "1629198",
"title": "Saint Petersburg Metro",
"section": "Section::::History.:Metro projects for the imperial capital.\n",
"start_paragraph_id": 11,
"start_character": 0,
"end_paragraph_id": 11,
"end_character": 205,
"text": "In 1918 Moscow became the country's capital after the October Revolution of 1917 and the Russian Civil War (1917–1922) followed; for more than a decade plans to build a metro in St. Petersburg languished.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "364036",
"title": "History of Moscow",
"section": "",
"start_paragraph_id": 1,
"start_character": 0,
"end_paragraph_id": 1,
"end_character": 491,
"text": "The city of Moscow gradually grew around the Moscow Kremlin, beginning in the 14th century. It was the capital of the Grand Duchy of Moscow (or Muscovy), from 1340 to 1547 and in 1713 renamed as the Tsardom of Russia by Peter I \"the Great\" (when the capital was moved to Saint Petersburg). Moscow was the capital of the Russian Soviet Federative Socialist Republic from 1918, which then became the Soviet Union (1922 to 1991), and since 1991 has served as capital of the Russian Federation.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "148180",
"title": "Ivan I of Moscow",
"section": "Section::::Biography.\n",
"start_paragraph_id": 5,
"start_character": 0,
"end_paragraph_id": 5,
"end_character": 710,
"text": "According to the Russian historian Kluchevsky, the rise of Moscow under Ivan I Kalita was determined by three factors. The first one was that the Moscow principality was situated in the middle of other Russian principalities; thus, it was protected from any invasions from the East and from the West. Compared to its neighbors, Ryazan principality and Tver principality, Moscow was less often devastated. The relative safety of the Moscow region resulted in the second factor of the rise of Moscow – an influx of working and tax-paying people who were tired of constant raids and who actively relocated to Moscow from other Russian regions. The third factor was a trade route from Novgorod to the Volga river.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "19004",
"title": "Moscow",
"section": "Section::::Demographics.:Architecture.\n",
"start_paragraph_id": 131,
"start_character": 0,
"end_paragraph_id": 131,
"end_character": 1097,
"text": "For much of its architectural history, Moscow was dominated by Orthodox churches. However, the overall appearance of the city changed drastically during Soviet times, especially as a result of Joseph Stalin's large-scale effort to \"modernize\" Moscow. Stalin's plans for the city included a network of broad avenues and roadways, some of them over ten lanes wide, which, while greatly simplifying movement through the city, were constructed at the expense of a great number of historical buildings and districts. Among the many casualties of Stalin's demolitions was the Sukharev Tower, a longtime city landmark, as well as mansions and commercial buildings The city's newfound status as the capital of a deeply secular nation, made religiously significant buildings especially vulnerable to demolition. Many of the city's churches, which in most cases were some of Moscow's oldest and most prominent buildings, were destroyed; some notable examples include the Kazan Cathedral and the Cathedral of Christ the Savior. During the 1990s, both were rebuilt. Many smaller churches, however, were lost.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "9329759",
"title": "House on the Embankment",
"section": "Section::::History.\n",
"start_paragraph_id": 6,
"start_character": 0,
"end_paragraph_id": 6,
"end_character": 409,
"text": "The relocation of the capital from St. Petersburg to Moscow caused an increased need to house civil servants in Moscow. In 1927, a commission decided that a building would be constructed in the Bersenevka neighborhood, opposite the Kremlin, which had been occupied by the Wine and Salt Court, an old distillery and excise warehouse. During the Tsarist era, the area had been used mainly as a mushroom market.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "19004",
"title": "Moscow",
"section": "",
"start_paragraph_id": 6,
"start_character": 0,
"end_paragraph_id": 6,
"end_character": 600,
"text": "Moscow is a seat of power of the Government of Russia, being the site of the Moscow Kremlin, a medieval city-fortress that is today the residence for work of the President of Russia. The Moscow Kremlin and Red Square are also one of several World Heritage Sites in the city. Both chambers of the Russian parliament (the State Duma and the Federation Council) also sit in the city. Moscow is considered the center of Russian culture, having served as the home of Russian artists, scientists, and sports figures and because of the presence of museums, academic and political institutions and theatres.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
4d26uu | How much does an understanding of historical linguistics benefit study of the period? | [
{
"answer": "Do you mean historical linguistics or knowing the languages? Historical linguistics is the study of how languages change over time; it's what's used to reconstruct things like Proto-Indo-European. It's not the same as knowing the languages: a historical linguist doesn't necessarily actually know the language that he's working on, although for obvious reasons it helps. It's also generally not all that helpful for history, although it can be useful for learning the languages (I don't personally think you can learn Greek without some basic idea of how the Greek language changed from prehistory to Attic, because otherwise you have to memorize the paradigms of literally every verb you encounter like a psycho). Knowing the languages, though, is of great use. I would argue that it's nearly impossible to study ancient history and classics without knowing Greek and Latin (although there are a *very* small number of scholars who actually don't). In more contemporary fields maybe it's not as important, I don't know--as a classicist I deal more or less exclusively with the texts themselves, so I would be forced to work from translation, which is very unsatisfactory for any detail or nuance and is not always possible, as some texts have never been translated. What's important in any historical field is being able to read the sources, whether that's direct material or scholarly material.",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "8451030",
"title": "Indigenous Aryans",
"section": "Section::::Historical background.:Indo-Aryan migration theory.\n",
"start_paragraph_id": 8,
"start_character": 0,
"end_paragraph_id": 8,
"end_character": 586,
"text": "Historical linguistics provides the main basis for the theory, analysing the development and changes of languages, and establishing relations between the various Indo-European languages, including the time frame of their development. It also provides information about shared words, and the corresponding area of the origin of Indo-European, and the specific vocabulary which is to be ascribed to specific regions. The linguistic analyses and data are supplemented with archaeological data and anthropological arguments, which together provide a coherent model that is widely accepted.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "22760983",
"title": "Linguistics",
"section": "",
"start_paragraph_id": 7,
"start_character": 0,
"end_paragraph_id": 7,
"end_character": 363,
"text": "Linguistics also deals with the social, cultural, historical and political factors that influence language, through which linguistic and language-based context is often determined. Research on language through the sub-branches of historical and evolutionary linguistics also focuses on how languages change and grow, particularly over an extended period of time.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "22760983",
"title": "Linguistics",
"section": "Section::::Areas of research.:Historical linguistics.\n",
"start_paragraph_id": 100,
"start_character": 0,
"end_paragraph_id": 100,
"end_character": 774,
"text": "Historical linguists study the history of specific languages as well as general characteristics of language change. The study of language change is also referred to as \"diachronic linguistics\" (the study of how one particular language has changed over time), which can be distinguished from \"synchronic linguistics\" (the comparative study of more than one language at a given moment in time without regard to previous stages). Historical linguistics was among the first sub-disciplines to emerge in linguistics, and was the most widely practised form of linguistics in the late 19th century. However, there was a shift to the synchronic approach in the early twentieth century with Saussure, and became more predominant in western linguistics with the work of Noam Chomsky.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "14400",
"title": "History of science",
"section": "Section::::Modern science.:Social sciences.:Linguistics.\n",
"start_paragraph_id": 127,
"start_character": 0,
"end_paragraph_id": 127,
"end_character": 1059,
"text": "Historical linguistics emerged as an independent field of study at the end of the 18th century. Sir William Jones proposed that Sanskrit, Persian, Greek, Latin, Gothic, and Celtic languages all shared a common base. After Jones, an effort to catalog all languages of the world was made throughout the 19th century and into the 20th century. Publication of Ferdinand de Saussure's \"Cours de linguistique générale\" created the development of descriptive linguistics. Descriptive linguistics, and the related structuralism movement caused linguistics to focus on how language changes over time, instead of just describing the differences between languages. Noam Chomsky further diversified linguistics with the development of generative linguistics in the 1950s. His effort is based upon a mathematical model of language that allows for the description and prediction of valid syntax. Additional specialties such as sociolinguistics, cognitive linguistics, and computational linguistics have emerged from collaboration between linguistics and other disciplines.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "63630",
"title": "Historical linguistics",
"section": "Section::::History and development.\n",
"start_paragraph_id": 9,
"start_character": 0,
"end_paragraph_id": 9,
"end_character": 1126,
"text": "At first, historical linguistics served as the cornerstone of comparative linguistics primarily as a tool for linguistic reconstruction. Scholars were concerned chiefly with establishing language families and reconstructing prehistoric proto-languages, using the comparative method and internal reconstruction. The focus was initially on the well-known Indo-European languages, many of which had long written histories; the scholars also studied the Uralic languages, another European language family for which less early written material exists. Since then, there has been significant comparative linguistic work expanding outside of European languages as well, such as on the Austronesian languages and various families of Native American languages, among many others. Comparative linguistics is now, however, only a part of a more broadly conceived discipline of historical linguistics. For the Indo-European languages, comparative study is now a highly specialized field. Most research is being carried out on the subsequent development of these languages, in particular, the development of the modern standard varieties.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "8329918",
"title": "Critical period hypothesis",
"section": "",
"start_paragraph_id": 1,
"start_character": 0,
"end_paragraph_id": 1,
"end_character": 399,
"text": "The critical period hypothesis is the subject of a long-standing debate in linguistics and language acquisition over the extent to which the ability to acquire language is biologically linked to age. The hypothesis claims that there is an ideal time window to acquire language in a linguistically rich environment, after which further language acquisition becomes much more difficult and effortful.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "1560464",
"title": "Uruk period",
"section": "Section::::Society and culture.\n",
"start_paragraph_id": 45,
"start_character": 0,
"end_paragraph_id": 45,
"end_character": 639,
"text": "Scholarship is therefore interested in this period as a crucial step in the evolution of society—a long and cumulative process whose roots could be seen at the beginning of the Neolithic more than 6000 years earlier and which had picked up steam in the preceding Ubayd period in Mesopotamia. This is especially the case in English-language scholarship, in which the theoretical approaches have been largely inspired by anthropology since the 1970s, and which has studied the Uruk period from the angle of 'complexity' in analysing the appearance of early states, an expanding social hierarchy, intensification of long-distance trade, etc.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
quhwo | the concept of "hanging on" or "fighting" when you're dying from a disease like cancer | [
{
"answer": "It's less from a medical standpoint and more a matter of will. Someone hanging on or fighting means they still want to live. Once someone decides they don't want to live anymore, or they give in to death, a survival part of the brain shuts down and the illness takes over. When someone \"hangs on\" or \"fights to survive\" they are still battling their illness mentally. While not everything can be overcome this way, if someone decides they just give up and want to die, it's hard to turn it around. You can't force someone to live who just gives up.",
"provenance": null
},
{
"answer": "Being sick can be hard work. \n\nTaking meds that make you nauseous, getting painful and invasive procedures, not smoking or drinking or eating junk food, eating when you don't have an appetite, staying on top of your doctor, actively seeking out new treatments. Doing these things can be the difference between life and death, and someone who is \"fighting\" is doing all of these.\n\nAlso, getting depressed can have physical side effects that make fighting a disease harder.",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "50251642",
"title": "Battle with cancer",
"section": "",
"start_paragraph_id": 1,
"start_character": 0,
"end_paragraph_id": 1,
"end_character": 367,
"text": "Battle with cancer is a term used by the media when referring to people suffering from cancer. Those who have died are said to have \"lost their battle with cancer\", while the living are described as \"fighting cancer\". It has been argued that words such as \"battle\" and \"fight\" are inappropriate, as they suggest that cancer can be defeated if one fights hard enough.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "52995624",
"title": "Emperor of Sand",
"section": "Section::::Concept and lyrical themes.\n",
"start_paragraph_id": 7,
"start_character": 0,
"end_paragraph_id": 7,
"end_character": 281,
"text": "\"At the end of the story, the person simultaneously dies and is saved,\" Dailor said. \"It's about going through cancer, going through chemotherapy and all the things associated with that. I didn't want to be literal about it. But it's all in there. You can read between the lines.\"\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "1480420",
"title": "Metacognition",
"section": "Section::::Definitions.\n",
"start_paragraph_id": 17,
"start_character": 0,
"end_paragraph_id": 17,
"end_character": 223,
"text": "It has been used, albeit off the original definition, to describe one's own knowledge that we will die. Writers in the 1990s involved with the grunge music scene often used the term to describe self-awareness of mortality.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "52308555",
"title": "Assisted death in the United States",
"section": "Section::::Controversy.:Debate over Whether Assisted Death is Suicide.\n",
"start_paragraph_id": 38,
"start_character": 0,
"end_paragraph_id": 38,
"end_character": 440,
"text": "Suicide refers to someone taking their own lives. Opponents feel that this term is appropriate to describe assisted death, because of the social and personal dynamics that can pressure someone into choosing death. Opponents also cite the fact that oncologists and other non-psychiatric physicians responsible for referring patients for counseling are not trained to detect complex, potentially invisible disorders like clinical depression.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "12817201",
"title": "AIDS and Its Metaphors",
"section": "Section::::Overview.\n",
"start_paragraph_id": 3,
"start_character": 0,
"end_paragraph_id": 3,
"end_character": 592,
"text": "\"Illness as Metaphor\" was a response to Sontag’s experiences as a cancer patient, as she noticed that the cultural myths surrounding cancer negatively impacted her as a patient. She finds that, a decade later, cancer is no longer swathed in secrecy and shame, but has been replaced by AIDS as the disease most demonized by society. She finds that the metaphors that we associate with disease contribute not only to stigmatizing the disease, but also stigmatizing those who are ill. She believes that the distractions of metaphors and myths ultimately cause more fatalities from this disease.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "52308555",
"title": "Assisted death in the United States",
"section": "Section::::Controversy.:Debate over Whether Assisted Death is Suicide.\n",
"start_paragraph_id": 39,
"start_character": 0,
"end_paragraph_id": 39,
"end_character": 794,
"text": "Proponents feel that \"medical aid in dying\" differs from suicide because a patient must be confirmed by two physicians to be terminally ill with a prognosis of 6 months or less to live and must also be confirmed by two physicians to be mentally capable to make medical decisions. That is why proponents support death certificates that list their underlying condition as the cause of death. According to the proponents, suicide is a solitary, unregulated act whereas aid in dying is medically authorized and is intended to allow for the presence of loved ones. Proponents define \"suicide\" as an irrational act committed in the throes of mental illness. They assert that the latter act is fundamentally distinct from the practice that they are advocating, as it is intended to be a measured act.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "105219",
"title": "Cancer",
"section": "Section::::Society and culture.\n",
"start_paragraph_id": 183,
"start_character": 0,
"end_paragraph_id": 183,
"end_character": 516,
"text": "In the United States and some other cultures, cancer is regarded as a disease that must be \"fought\" to end the \"civil insurrection\"; a War on Cancer was declared in the US. Military metaphors are particularly common in descriptions of cancer's human effects, and they emphasize both the state of the patient's health and the need to take immediate, decisive actions himself rather than to delay, to ignore or to rely entirely on others. The military metaphors also help rationalize radical, destructive treatments. \n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
r38al | what's wrong with the word 'negro'? how is 'black' politically more correct than 'negro'? | [
{
"answer": "Apparently words that have a neutral definition can become slurs if they are constantly used to describe someone we don't like. Negro, just as a word outside of any context, is completely neutral; it's the Spanish word for black. However, because it was previously used to name people that we oppressed, the word is now bad. Just like Chinaman or Jap. Chinaman is bad but Englishman is good, and Jap is bad but Brit is good. These words are bad because at some point in time, they were used negatively. If there was a big war, and the word \"person\" was used to describe the enemy in propaganda, you would not be allowed to call anyone a \"person\" afterwards.",
"provenance": null
},
{
"answer": "In American society, while the word negro wasn't as racially charged a slur as the other word, it was the word the oppressor/the Man/the Other traditionally used to name blacks in America through slavery and segregation and beyond. (That's a whole other ELI5.) \"Black\" (not necessarily capitalized) doesn't have those same connotations and is also the term people choose to call themselves.\n\nAlso, \"negro\" is almost a 'dated' word in the American English lexicon; it has archaic tones in relation to current American society and terminology, and has fallen out of general use and favor in our language.",
"provenance": null
},
{
"answer": "deep_sea2 is right, and it is generally frowned upon to refer to a \"person of color\" as a Negro, at least in the United States. I say generally because you might notice it is still a [racial category](_URL_0_) on the 2010 US Census. There are a significant number of people in the US who prefer to racially identify themselves as Negroes.",
"provenance": null
},
{
"answer": "The only thing I can think of honestly is because whenever I was called \"negro\" by a white person it was used as a slur. When I was young and ashamed of the color of my skin, being called \"black\" empowered me. Seeing documentaries and looking at old photos of my mom with the huge afro, when someone called me \"black\" I mentally related it to being strong, and still do.",
"provenance": null
},
{
"answer": "So, I have a question here. Did the offensive word \"nigger\" come from the harmless word \"negro\"? ",
"provenance": null
},
{
"answer": "It's all about context, intention, and knowledge overall. You can't expect someone who doesn't know a word's history to understand the historical reasons behind the word. I had a white friend who purposely would say negro instead of the \"n-word\" because he knew I didn't like it, but he would emphasize negro just the same (to get on my nerves). The issue is is it ever used positively? That's why the United Negro College fund, there's no problem with it. But racists saying they don't want negro's (I highly doubt they would say negro) is just negative all around. ",
"provenance": null
},
{
"answer": "I think it's a question of who used it, when, and what the emotional attachment was to the word.\n\nBasically, any word used by non-African Americans to describe African Americans while American society was still fairly racist ended up adopting the connotation that went along with the social asymmetry of the time. It's not until we used clinical technical terms like \"African American\" that we achieved any level of neutrality.\n\nThe simple fact is that African Americans still face hurdles that others do not. It sounds like a stereotype, but they are far more likely to be stopped by a police officer for no reason. So long as that sort of inequity exists, the words negro, colored, etc., will always retain their negative connotation.\n\nThere are signs that things are getting better, though: when New Orleans was flooded and mostly African Americans were forcibly displaced, some people started referring to them as \"refugees\". This is just a technical description of their status due to the events, and should have been a neutral description. Jesse Jackson then decided to make a stink and insist that they not be called that, because it was demeaning and made it seem as though they were foreigners from the third world, and thus not from first-world America. However, most people rejected this, as there was no connotation of any kind intended one way or another.\n",
"provenance": null
},
{
"answer": "Negro is widely used in Latin America, and French and Spanish speaking countries all over the world. \n\n\nIn the US, \"Negro\" had noble connotations in black and white America during the early 1900s. Activists and thinkers of the Civil Rights Movement during the 1940s-'60s distanced themselves from the word because it had connotations with Marcus Garvey's Pan-African Movement in the 1920s that had helped empower and unify blacks yet stirred up a lot of racial hatred. Promoting the \"____-American\" label helped with integration. According to their train of thought, an America made up of distinct, separate races like \"white,\" \"negro,\" \"yellow,\" or \"red,\" as it had been under segregation would not be inclined to live with each other. A nation of \"African-Americans,\" \"Native-Americans,\" \"European-Americans,\" etc, however, would probably see each other as different shades of the same tribe, and may vote for integration. They did.\n\n\nNot to mention what's already been said about using \"negro\" negatively. It's like if Harijan became a bad word because people still used it to describe people of that caste as untouchable and second-class, as opposed to \"child of God\" as Gandhi meant. ",
"provenance": null
},
{
"answer": "\"Negro\" is similar to \"Oriental\" in that, while not primarily meant as a pejorative, was used as a means of creating a hierarchical separation -- these words were used in a context that tried to scientifically distinguish people of different ethnicities and was often used as a means of patriarchal domination. \n\nELI12: Dog-whistle words for ethnic discrimination.\n\nELI5: It doesn't seem like it would be a bad word at pure face value, but it is. ",
"provenance": null
},
{
"answer": "\"Is there something I can call you that's less offensive than Mexican?\" - Michael Scott",
"provenance": null
},
{
"answer": "Negro is the politically correct term in Salvador, Brazil, where about 80 percent of the population is black. It is considered insulting to use 'black' to describe an individual, with the rationale that the term should be reserved for the description of inanimate objects. People are proud to be called Negros there. I spent 6 weeks there, just thought it would be an interesting aside to the conversation. ",
"provenance": null
},
{
"answer": "You might also want to look at the concept of the \"Euphemism Treadmill\".\n\n_URL_0_\n\nTerms that are perfectly acceptable gradually become associated with negative qualities, and then eventually become looked at as insults.\n\nFor example, in the US, the original term used was \"Negro\" (as in the United Negro College Fund)\n\nThen, when that began taking on negative connotations, someone started using the word \"Colored\". (For example, the National Association for the Advancement of Colored People (NAACP))\n\nEventually that hit the treadmill and was replaced by Black... Then African American.... Now I think the politically correct term is \"Person of Color\"\n\n",
"provenance": null
},
{
"answer": "Meanwhile, the NAACP isn't budging :)",
"provenance": null
},
{
"answer": "I think there's some confusion here. \"Negro\" is not so much politically incorrect now as it is antiquated. As far as I know, \"Negro\" was never an offensive term; it was, in fact, the preferred term for a long time until for whatever reason it just fell out of favor -- just like \"colored\" or, for that matter, \"square.\"\n\nThe cultural lingo just changed. That's all. People in this thread seem to assume that it's offensive now just because it's not used anymore.",
"provenance": null
},
{
"answer": "Is this really an ELI5 topic?\n\nEDIT: I mean doesn't it belong in [/r/answers](/r/answers) ? ",
"provenance": null
},
{
"answer": "What about Porch Monkey? We should take it back.",
"provenance": null
},
{
"answer": "Because white people wouldn't like to be called Caucasoids. ",
"provenance": null
},
{
"answer": "Connotations.\n\nThe same way the Hitler mustache isn't ever worn by anyone anymore.",
"provenance": null
},
{
"answer": "When people call me 'negro' or 'black' I don't get offended, I just think they are crazy because I'm white.",
"provenance": null
},
{
"answer": "In the 1960s, negro was a perfectly acceptable term for blacks. Martin Luther King used the word negro often to refer to black people, though 'nigger' certainly would have been very offensive back then.\n\nHowever, many black people felt a sense of unease with being black. When you're disrespected every day for the color of your skin, and there are almost no positive icons in popular American culture, you could see how many black people could develop a stigma with being black. It's not that black people wanted to be white (most didn't). But many black people in America just didn't want to be black and the heavy baggage that came with it. \n\nMalcolm X was the catalyst for embracing blackness and ridding black America of this self-hatred. [You can see Malcolm X here in 1962 speaking on this topic.](_URL_0_). Malcolm exhorted black people in America to be proud of who they were despite a stifling culture of disrespect and disenfranchisement. He emphasized the importance of self-esteem and self-reliance within the community. He worked to improve black America from within, while Martin Luther King worked to improve black America from without.\n\nA part of Malcolm's work was the subconscious meaning of words. Although you hear Malcolm X using the word 'negro' in the previous video, he eventually distanced himself from it in favor of 'black'. He saw 'negro' as being associated with slavery and segregation, so by making this explicit break with the word 'negro' and embracing 'black', he marked a subtle but important path to self-determination.\n\nMalcolm X's work (along with others) eventually evolved into what was the black power movement. The spirit behind this black power movement was really captured elegantly in James Brown's 1968 hit ['I'm black and I'm proud'](_URL_1_). This song got black people on a mass level to proudly embrace 'blackness'. No longer was being black a derogatory term. Black became beautiful, black became strong. 
\n\nThe end result of all this, is that 'black' became a word of pride while 'negro' was seen as a classification imposed on black people by outsiders. So 'black' slowly but surely became widely accepted as 'negro' fell to the wayside.",
"provenance": null
},
{
"answer": "Different cultures have different meanings. In South America, Negro is considered a polite term. Like saying \"Friend\". [Example here](_URL_0_)",
"provenance": null
},
{
"answer": "Phrases have to be changed very often, otherwise they become stigmatising.\n\nDisabled has become Special Needs which has become Additional Needs\n\nSpecial Education has become Individualized Education which has in turn become Adapted Education which has become Additional Support. (that was within three years)\n\n\n\n",
"provenance": null
},
{
"answer": "[Kaffir](_URL_0_) is our version of nigger in South Africa. Be careful when and where you use this word.\n\nWhile I was in high school it started getting used as a way to refer to a person as useless, or if they did something so dumb that \"only a kaffir would do\" it, for some reason. Let's just say people started getting a bit too familiar with the word.\n\nI feel racist typing this.\n\nedit: This word can't be used as in, \"my nigger.\" It is purely a racist term.",
"provenance": null
},
{
"answer": "A website that's 97% white people explaining why black people find something offensive. I always enjoy watching that. \n\nCalling someone by their race is offensive for obvious reasons. Just because it is still an accepted term does not make it less offensive. \n\nUnless obviously it's how you say \"black\" in your native tongue, but then again why are you describing people by their race in the first place.",
"provenance": null
},
{
"answer": "I think that it isn't accepted as a term because it sounds too much like the word \"nigger\". ",
"provenance": null
},
{
"answer": "The word \"black\" to describe someone's race has only just become socially acceptable, as the word negro is out of step with modern US vocabulary and African American is simply not accurate most of the time. \n\nHowever the word Negro is still the official terminology for the race, and it is still widely used on most, if not all, government or record keeping forms. \n\nEven the 2010 census has the word Negro on it. ",
"provenance": null
},
{
"answer": "I have always wondered the same thing, OP. In my country, \"Negro\" is the \"politically correct\" term to address a black person and \"Black\" is the pejorative term. Living and learning.....",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "18855505",
"title": "Negro",
"section": "Section::::In English.:United States.\n",
"start_paragraph_id": 9,
"start_character": 0,
"end_paragraph_id": 9,
"end_character": 397,
"text": "However, during the 1950s and 1960s, some black American leaders, notably Malcolm X, objected to the word \"Negro\" because they associated it with the long history of slavery, segregation, and discrimination that treated African Americans as second class citizens, or worse. Malcolm X preferred \"Black\" to \"Negro\", but also started using the term \"Afro-American\" after leaving the Nation of Islam.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "545753",
"title": "Magical Negro",
"section": "",
"start_paragraph_id": 3,
"start_character": 0,
"end_paragraph_id": 3,
"end_character": 285,
"text": "Critics use the word \"Negro\" because it is considered archaic, and usually offensive, in modern English. This underlines their message that a \"magical black character\" who goes around selflessly helping white people is a throwback to stereotypes such as the \"Sambo\" or \"noble savage\".\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "18855505",
"title": "Negro",
"section": "Section::::In English.:United States.\n",
"start_paragraph_id": 8,
"start_character": 0,
"end_paragraph_id": 8,
"end_character": 1382,
"text": "\"Negro\" superseded \"colored\" as the most polite word for African Americans at a time when \"black\" was considered more offensive. In Colonial America during the 17th century the term Negro was, according to one historian, also used to describe Native Americans. John Belton O'Neall's The Negro Law of South Carolina (1848) stipulated that \"the term negro is confined to slave Africans, (the ancient Berbers) and their descendants. It does not embrace the free inhabitants of Africa, such as the Egyptians, Moors, or the negro Asiatics, such as the Lascars.\" The American Negro Academy was founded in 1897, to support liberal arts education. Marcus Garvey used the word in the names of black nationalist and pan-Africanist organizations such as the Universal Negro Improvement Association (founded 1914), the \"Negro World\" (1918), the Negro Factories Corporation (1919), and the Declaration of the Rights of the Negro Peoples of the World (1920). W. E. B. Du Bois and Dr. Carter G. Woodson used it in the titles of their non-fiction books, \"The Negro\" (1915) and \"The Mis-Education of the Negro\" (1933) respectively. \"Negro\" was accepted as normal, both as exonym and endonym, until the late 1960s, after the later Civil Rights Movement. One well-known example is the identification by Martin Luther King, Jr. of his own race as \"Negro\" in his famous \"I Have a Dream\" speech of 1963.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "4745",
"title": "Black people",
"section": "Section::::North America.:United States.\n",
"start_paragraph_id": 97,
"start_character": 0,
"end_paragraph_id": 97,
"end_character": 1146,
"text": "By the 1900s, \"nigger\" had become a pejorative word in the United States. In its stead, the term \"colored\" became the mainstream alternative to \"negro\" and its derived terms. After the civil rights movement, the terms \"colored\" and \"negro\" gave way to \"black\". \"Negro\" had superseded \"colored\" as the most polite word for African Americans at a time when \"black\" was considered more offensive. This term was accepted as normal, including by people classified as Negroes, until the later Civil Rights movement in the late 1960s. One well-known example is the identification by Reverend Martin Luther King, Jr. of his own race as \"Negro\" in his famous speech of 1963, I Have a Dream. During the American civil rights movement of the 1950s and 1960s, some African-American leaders in the United States, notably Malcolm X, objected to the word \"Negro\" because they associated it with the long history of slavery, segregation, and discrimination that treated African Americans as second-class citizens, or worse. Malcolm X preferred \"Black\" to \"Negro\", but later gradually abandoned that as well for \"Afro-American\" after leaving the Nation of Islam.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "36593329",
"title": "Post-blackness",
"section": "Section::::Definitions of blackness.:The Kinship Schema.\n",
"start_paragraph_id": 24,
"start_character": 0,
"end_paragraph_id": 24,
"end_character": 759,
"text": "American racial categories are, since they thus do not give any positive definition of blackness, groundless and have no empirical foundation. She argues that racial designations refer to physical characteristics of individuals, which were for one inherited from their forebears but also inherent in people in a physical way. So if someone is being called “black” in common American usage, this does not only refer to the looks of the person defined but about the looks of all black people and how the person resembles them. What is perceived as typically black is what scientists now view as the mythology of race which is nowadays closely intertwined with the historical conditions under which the now-disproved scientific theories of race were formulated.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "2409762",
"title": "Ethnonym",
"section": "Section::::Change over time.\n",
"start_paragraph_id": 8,
"start_character": 0,
"end_paragraph_id": 8,
"end_character": 292,
"text": "Four decades later, a similar difference of opinion remains. In 2006, one commentator suggested that the term Negro is outdated or offensive in many quarters; similarly, the word \"colored\" still appears in the name of the NAACP, or National Association for the Advancement of Colored People.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "18855505",
"title": "Negro",
"section": "Section::::In other languages.:Latin America (Portuguese and Spanish).\n",
"start_paragraph_id": 18,
"start_character": 0,
"end_paragraph_id": 18,
"end_character": 1077,
"text": "In certain parts of Latin America, the usage of \"negro\" to directly address black people can be colloquial. It is important to understand that this is not similar to the use of the word \"nigga\" in English in urban hip hop subculture in the United States, given that \"negro\" is not a racist term. For example, one might say to a friend, \"\"\" (literally 'Hey, black-one, how are you doing?'). In such a case, the diminutive ' can also be used, as a term of endearment meaning 'pal'/'buddy'/'friend'. ' has thus also come to be used to refer to a person of any ethnicity or color, and also can have a sentimental or romantic connotation similar to 'sweetheart' or 'dear' in English. In other Spanish-speaking South American countries, the word \"\" can also be employed in a roughly equivalent term-of-endearment form, though it is not usually considered to be as widespread as in Argentina or Uruguay (except perhaps in a limited regional and/or social context). It is consequently occasionally encountered, due to the influence of \"nigga\", in Chicano English in the United States.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
1ffr4d | Do the deaf need to wear hearing protection? | [
{
"answer": "Depends why they're deaf. If it's a neurological thing, the physical ear being in good shape, it would make sense to try and preserve your ears in case of a medical advance that could restore your hearing.",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "520289",
"title": "Hearing aid",
"section": "Section::::Types.:Eyeglass aids.\n",
"start_paragraph_id": 54,
"start_character": 0,
"end_paragraph_id": 54,
"end_character": 339,
"text": "These are generally worn by people with a hearing loss who either prefer a more cosmetic appeal of their hearing aids by being attached to their glasses or where sound cannot be passed in the normal way, via a hearing aids, perhaps due to a blockage in the ear canal. pathway or if the client suffers from continual infections in the ear.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "1153576",
"title": "Earmuffs",
"section": "Section::::Hearing protection.:Specific considerations for hearing protection for workers with hearing loss.\n",
"start_paragraph_id": 44,
"start_character": 0,
"end_paragraph_id": 44,
"end_character": 855,
"text": "Workers may want to wear their hearing aids under an earmuff. According to OSHA, hearing aids should not be used in areas with dangerous noise levels. However, OSHA allows for the professional(s) in charge of the hearing loss protection program to decide on a case-by-case basis if a worker can wear their hearing aids under an earmuff in high-level noise environments. Workers are not permitted to wear their hearing aids (even if they are turned off) instead of using HPD. OSHA specifies that hearing aids are not \"hearing protectors\" and do not attenuate enough sound to be used instead of HPD. Wearing hearing aids alone, without the use of earmuffs, could potentially cause additional noise-induced hearing loss. It is recommended that workers should not use their hearing aids without the use of an earmuff when exposed to sound levels over 80 dBA.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "5089299",
"title": "Adapted physical education",
"section": "Section::::Teaching for specific disabilities.:Deafness.\n",
"start_paragraph_id": 81,
"start_character": 0,
"end_paragraph_id": 81,
"end_character": 261,
"text": "Being deaf or hard of hearing typically has little impact on the development of motor skills, fitness levels, and participation in sports. However, it is still important to accommodate students who are deaf or hard of hearing in the physical education setting.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "51984",
"title": "Models of deafness",
"section": "Section::::Social Model.\n",
"start_paragraph_id": 11,
"start_character": 0,
"end_paragraph_id": 11,
"end_character": 1438,
"text": "Through this lens individuals who are deaf are considered disabled due to their inability to hear, which hearing counterparts in their surroundings have historically viewed as a disadvantage. People with disabilities affirm that the design of the environment often disables them. In more accessible environments where those that are deaf have access to language that is not only spoken they are disabled less, or not at all. Areas where hearing and deaf individuals interact, called contact zones, often leave deaf individuals at a disadvantage because of the environment being tailored to suit the needs of the hearing counterpart. The history of Martha's Vineyard, when looking specifically at Martha's Vineyard Sign Language, supports this notion. At one point in time, the deaf population on the island was so great that it was commonplace for hearing residents to know and use both signed and spoken language to communicate with their neighbors. In this environmental design, it was not \"bad\" or \"disabling\" if one was not able to hear in order to communicate. With certain disabilities, medical intervention can improve subsequent health issues. This is true to parts of the deaf population, as in some cases hearing can be gained with the assistance of medical technologies. The social model acknowledges the hard truth that medical intervention does not address societal issues that prevail - regardless of its extent or success.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "39700685",
"title": "MDP syndrome",
"section": "Section::::Management.:Deafness.\n",
"start_paragraph_id": 32,
"start_character": 0,
"end_paragraph_id": 32,
"end_character": 253,
"text": "Deafness is a feature of MDP syndrome as a result of the nerves not working well and people often have difficulty getting hearing aids because of the small size of their ears. Digital hearing aids can be helpful and audiometry follow up will be needed.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "56315577",
"title": "Hearing protection device",
"section": "Section::::Types.:Dual Hearing Protection.\n",
"start_paragraph_id": 18,
"start_character": 0,
"end_paragraph_id": 18,
"end_character": 466,
"text": "Dual hearing protection refers to the use of earplugs under ear muffs. This type of hearing protection is particularly recommended for workers in the Mining industry because they are exposed to extremely high noise levels, such as an 105 dBA TWA. Fortunately, there is an option of adding electronic features to dual hearing protectors. These features help with communication by making speech more clear, especially for those workers who already have hearing loss. \n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "49604",
"title": "Hearing loss",
"section": "Section::::Sign language.:Government policies.:Health care.\n",
"start_paragraph_id": 245,
"start_character": 0,
"end_paragraph_id": 245,
"end_character": 424,
"text": "Not only can communication barriers between deaf and hearing people affect family relationships, work, and school, but they can also have a very significant effect on a deaf individual’s physical and mental health care. As a result of poor communication between the health care professional and the deaf or hard of hearing patient, many patients report that they are not properly informed about their disease and prognosis.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
3q3nk8 | what are 'short-links' such as _url_1_, _url_2_, and _url_0_ used for? | [
{
"answer": "The goal is to make URLs smaller, which is beneficial if e.g. you have a comment section or tweet you want to send and there's a character limit. It may also just look better than a medium- to long-sized URL.\n\nAn exception is that some of those URL shorteners are also used maliciously by criminals to hide the original URL, which may have looked less safe. \n\nSo, keep an eye out when clicking those URLs.",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "3919945",
"title": "URL shortening",
"section": "Section::::History.\n",
"start_paragraph_id": 27,
"start_character": 0,
"end_paragraph_id": 27,
"end_character": 225,
"text": "The shortest possible long-term URLs were generated by NanoURL from December 2009 until about 2011, associated with the top-level \".to\" (Tonga) domain, in the form , where represents a sequence of random numbers and letters.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "24408333",
"title": "Bitly",
"section": "Section::::Technology.\n",
"start_paragraph_id": 9,
"start_character": 0,
"end_paragraph_id": 9,
"end_character": 403,
"text": "The company uses HTTP 301 redirects for its links. The shortcuts are intended to be permanent and cannot be changed once they are created. URLs that are shortened with the bitly service use the codice_3 domain or any other generic domain that the service offers. Information about any short bitly URL codice_4 is available at codice_5 (that is, the URL with a plus sign appended), for example codice_6.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "3919945",
"title": "URL shortening",
"section": "Section::::Advantages.\n",
"start_paragraph_id": 30,
"start_character": 0,
"end_paragraph_id": 30,
"end_character": 379,
"text": "The main advantage of a short link is that it is, in fact, short, looks neat and clean and can be easily communicated and entered without error. To a very limited extent it may obscure the destination of the URL, though easily discoverable; this may be advantageous, disadvantageous, or irrelevant. A short link which expires, or can be terminated, has some security advantages.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "3919945",
"title": "URL shortening",
"section": "Section::::Expiry and time-limited services.\n",
"start_paragraph_id": 19,
"start_character": 0,
"end_paragraph_id": 19,
"end_character": 394,
"text": "A permanent URL is not necessarily a good thing. There are security implications, and obsolete short URLs remain in existence and may be circulated long after they cease to point to a relevant or even extant destination. Sometimes a short URL is useful simply to give someone over a telephone conversation for a one-off access or file download, and no longer needed within a couple of minutes.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "2003680",
"title": "TinyURL",
"section": "",
"start_paragraph_id": 1,
"start_character": 0,
"end_paragraph_id": 1,
"end_character": 266,
"text": "TinyURL is a URL shortening web service, which provides short aliases for redirection of long URLs. Kevin Gilbertson, a web developer, launched the service in January 2002 as a way to post links in newsgroup postings which frequently had long, cumbersome addresses.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "2003680",
"title": "TinyURL",
"section": "Section::::Service.\n",
"start_paragraph_id": 5,
"start_character": 0,
"end_paragraph_id": 5,
"end_character": 417,
"text": "Short URL aliases are seen as useful because they are easier to be written down, remembered or distributed. They also fit in text boxes with a limited number of characters allowed. Some examples of limited text boxes are IRC channel topics, email signatures, microblogs, certain printed newspapers (such as \".net\" magazine or even \"Nature\"), and email clients that impose line breaks on messages at a certain length.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "3919945",
"title": "URL shortening",
"section": "Section::::Expiry and time-limited services.\n",
"start_paragraph_id": 20,
"start_character": 0,
"end_paragraph_id": 20,
"end_character": 608,
"text": "Some URL shorteners offer a time-limited service, which will expire after a specified period. Services available include an ordinary, easy-to-say word as the URL with a lifetime from 5 minutes up to 24 hours, creation of a URL which will expire on a specified date or after a specified period, creation of a very-short-lived URL of only 5 characters for typing into a smartphone, restriction by the creator of the total number of uses of the URL, and password protection. A Microsoft Security Brief recommends the creation of short-lived URLs, but for reasons explicitly of security rather than convenience.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
5459pn | Is there any historical inspiration or precedence behind the "Ludovico technique," or is it strictly a creation of Anthony Burgess and/or Stanley Kubrick for A Clockwork Orange? | [
{
"answer": "What springs to mind when reading this is Ivan Pavlov and his theory of classical conditioning (for which he used dogs to illustrate this) among other learning theories.\n\n**Now who was Ivan Pavlov?**\n\nIvan Pavlov was a Russian Behaviourist who in 1927 conducted an experiment which would (alongside *operant conditioning*, essentially learning by trial and error) form a theory (\"Classical/Pavlovian Conditioning\", essentially learning by association) and mold part of our understanding on how we learn.\n\n**What did this study entail?**\n\n* Aim/Hypothesis: To demonstrate what animals learn by association.\n\n* Method/Procedure: Pavlov placed food in front of dogs; when the food was being brought to the dogs they began salivating. Pavlov would ring a bell every time the dogs ate the food. \n\nEven when there was no food given after the bell was rung, the dogs still salivated. They had *learnt* to salivate. Ordinarily when the dogs were given food they would salivate; Pavlov called this an *unconditioned response* to an *unconditioned stimulus* (food) as this happened naturally without experimentation.\n\nAfter a few rings of the bell, the dog began to associate that sound with food; Pavlov called this a *conditioned response* as it had been learnt (and an association attached) over time. Thus the bell now was a *conditioned stimulus*.\n\nThere is more to Pavlov's study (e.g. to do with how long the learning would last and the specific parameters needed or not needed to evoke the response) however that delves more into science than history (But I'm still willing to explain that if you wish).\n\nHowever the \"ludovico technique\" involves inducing fear to emit a negative response so they feel averse to doing that again. When thinking of that it makes me think of two other things:\n\n* Watson & Rayner's \"Little Albert\" experiment.\n* Aversion therapy & Phobias \n\n**Watson, Rayner and Little Albert**\n\n* Aim/Hypothesis: In 1920, psychologist John B. 
Watson and his graduate student Rosalie Rayner already knew from studies in classical conditioning that fear of certain noises (e.g. a loud bang) was an *unconditioned response* in humans. What they wanted to find out was whether you could *condition* someone to fear a specific thing (e.g. furry toys or animals)\n\n* Method/Procedure: Watson & Rayner tested their hypothesis by attempting to scare an 11-month-old orphan known as \"Little Albert\". They presented Albert with a series of items (white rats, rabbits, dogs, furry & non-furry masks, a Santa Claus mask, cotton wool and burning paper); Albert showed no fear when he saw the items. \n\nAlbert was given the white rat which he happily played with. Upon playing with the rat, Watson & Rayner struck a metal bar, frightening Albert. Watson & Rayner repeated this several times; upon the 7th time Albert was shown only the rat without the noise. Now Albert became increasingly distressed and began to cry. \n\nW & R had turned a *neutral stimulus* (the rat) into a *conditioned stimulus* whilst also changing an *unconditioned response* (original fear of the loud bar) into a *conditioned* one (emotional fear). In later experiments W & R would *generalise* (this is one of the features Pavlov discovered) Albert's responses by showing him similar but different stimuli. \n\n**What can we do to remove these *conditioned* responses (e.g. phobias)?** \n\nWell W & R did try to *desensitize* Albert's conditioned responses but there wasn't time to do this. If one wanted treatment for their phobias, there are two main options:\n\n* Flooding\n* Systematic Desensitization\n\nFlooding essentially does exactly what you'd think: it floods the person (all at once) with stimuli that they fear in an attempt to shock them out of it. E.g. if you're scared of spiders they'd get you to be in a room full of spiders.\n\nSystematic Desensitization is a little more thought out and methodical. 
In SD the person is gradually exposed to their fear (e.g. entering a state of relaxation, touching a picture of a spider, watching a video of a spider, touching a real spider, holding a real spider).\n\n**Aversion Therapy**\n\nThis is similar to classical conditioning and works by getting the person to experience an extremely negative reaction when viewing unwanted stimuli. E.g. an alcoholic would be given an emetic (a drug to make them vomit), they would then be given alcohol and an emetic which would also cause vomiting but condition the person to associate alcohol with being sick. \n\nAversion therapy was the main form of therapy when people tried to convert people from homosexuality to heterosexuality. \n\nHopefully this helped :) You can view how Pavlov went about his experiment [here](_URL_1_) and view the footage from the Little Albert experiment [here](_URL_0_) \n\n**Sources & Further Reading**\n\n* Pavlov. I. P. (1927): \"Conditioned Reflexes: An Investigation of the Physiological Activity of the Cerebral Cortex\"\n\n* Asratyan. E. A (1953): \"I. P. Pavlov: His Life and Work\" \n\n* Watson. B. J & Rayner. R (1920): \"Conditioned emotional reactions\"\n\n* Boswell. K et al (2009): \"AQA GCSE Psychology\"\n\n* Billingham. M et al (2008): \"AQA Psychology B AS: Student's Book\"\n\n\n\n\n\n\n \n\n",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "12765929",
"title": "Mind control in popular culture",
"section": "Section::::Science fiction.\n",
"start_paragraph_id": 8,
"start_character": 0,
"end_paragraph_id": 8,
"end_character": 553,
"text": "BULLET::::- In the Anthony Burgess novel \"A Clockwork Orange\", later adapted into a film by Stanley Kubrick, the \"Ludovico Technique\" is a form of mind control that causes the subject, in this case the thug anti-hero Alex, to feel sickness and pain whenever he has a violent or anti-social impulse. This backfires because of Alex's association with the music of Ludwig van Beethoven to ultraviolence, an unintended side effect means that he has the same physical reaction to the music alone, which is exploited later by a man whose wife Alex had raped.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "7824269",
"title": "Ludovico Technique LLC",
"section": "",
"start_paragraph_id": 1,
"start_character": 0,
"end_paragraph_id": 1,
"end_character": 277,
"text": "Ludovico Technique LLC was an art and entertainment production company which produces a variety of media, from feature films, to comic books. Their name comes from the Ludovico technique, a fictitious brainwashing technique from both the novel and the film A Clockwork Orange.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "43069535",
"title": "György Lukács",
"section": "Section::::Work.:\"Realism in the Balance\" and defence of literary realism.\n",
"start_paragraph_id": 59,
"start_character": 0,
"end_paragraph_id": 59,
"end_character": 791,
"text": "Lukács believed that desirable alternative to such modernism must therefore take the form of Realism, and he enlists the realist authors Maxim Gorky, Thomas and Heinrich Mann, and Romain Rolland to champion his cause. To frame the debate, Lukács introduces the arguments of critic Ernst Bloch, a defender of Expressionism, and the author to whom Lukács was chiefly responding. He maintains that modernists such as Bloch are too willing to ignore the realist tradition, an ignorance that he believes derives from a modernist rejection of a crucial tenet of Marxist theory, a rejection which he quotes Bloch as propounding. This tenet is the belief that the system of capitalism is \"an objective totality of social relations,\" and it is fundamental to Lukács's arguments in favour of realism.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "1728474",
"title": "The Gizmo",
"section": "Section::::Origins.:10cc.\n",
"start_paragraph_id": 11,
"start_character": 0,
"end_paragraph_id": 11,
"end_character": 558,
"text": "The Gizmo was first used on 10cc's instrumental \"Gizmo My Way\", a song arranged as a type of laid back beach music, where it appears as a slide guitar effect and sustained background effect. \"Gizmo My Way\" was the B-side to \"The Wall Street Shuffle\", and appeared on 10cc's second album, \"Sheet Music\" (1974), which included more uses of The Gizmo, most notably on the track \"Old Wild Men\". Its presence is heard throughout most of the track as a unique shimmering background guitar effect. The Gizmo was also used on the \"Sheet Music\" track \"Baron Samedi\".\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "10858",
"title": "Franz Kafka",
"section": "Section::::Legacy.:\"Kafkaesque\".\n",
"start_paragraph_id": 86,
"start_character": 0,
"end_paragraph_id": 86,
"end_character": 661,
"text": "The term \"Kafkaesque\" is used to describe concepts and situations reminiscent of his work, particularly (\"The Trial\") and \"Die Verwandlung\" (\"The Metamorphosis\"). Examples include instances in which bureaucracies overpower people, often in a surreal, nightmarish milieu which evokes feelings of senselessness, disorientation, and helplessness. Characters in a Kafkaesque setting often lack a clear course of action to escape a labyrinthine situation. Kafkaesque elements often appear in existential works, but the term has transcended the literary realm to apply to real-life occurrences and situations that are incomprehensibly complex, bizarre, or illogical.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "1933881",
"title": "Rudolf Hausner",
"section": "Section::::Conflict with Surrealists and later life.\n",
"start_paragraph_id": 7,
"start_character": 0,
"end_paragraph_id": 7,
"end_character": 822,
"text": "In 1957, Hausner painted his first \"Adam\" picture. He came into conflict with the Surrealist orthodoxy, who condemned as heretical his attempt to give equal importance to both conscious and unconscious processes. In 1959 he co-founded the Vienna School of Fantastic Realism together with his old surrealist group members: Ernst Fuchs, Fritz Janschka, Wolfgang Hutter, Anton Lehmden and Arik Brauer. In 1962, Hausner met Paul Delvaux, René Magritte, Victor Brauner, and Dorothea Tanning while traveling in Germany, the Netherlands, Belgium, and France. The 1st Burda Prize for Painting was awarded to him in 1967. In 1969, he was awarded the Prize of the City of Vienna. Shortly after, he separated from Hermine Jedliczka and moved to Hietzing together with his daughter Xenia and Anne Wolgast, whom he had met in Hamburg.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "5863654",
"title": "Takehisa Kosugi",
"section": "Section::::Biography.\n",
"start_paragraph_id": 3,
"start_character": 0,
"end_paragraph_id": 3,
"end_character": 536,
"text": "Kosugi is probably best known for the experimental music that he created in from 1960 until 1975, first in the early 1960s with the Tokyo-based seven-member ensemble Group Ongaku (music group) and thereafter as a solo artist and with itinerant octet Taj Mahal Travellers (1969–75). Kosugi's primary instrument was the violin, which he sent through various echo-chambers and effects to create a bizarre, jolting music quite at odds with the drones of other more well-known Fluxus artists, such as Tony Conrad, John Cale and Henry Flynt.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
4gkexz | If a planet was orbiting a star, and that star were to go supernova, would the planet continue to orbit it? Would there be a delay before it stops orbiting it? | [
{
        "answer": "In our current understanding of gravity (general relativity), gravitational effects travel at the speed of light. If you're sitting at the exploding star, any orbiting object 5 light-hours away will appear to continue as normal for 10 hours (5 for the effect to reach it, 5 more for that news to make it back to you). If you're sitting on the planet, it will seem to happen immediately, as the gravitational change arrives with the light. (Simultaneity depends on your reference frame in relativity.) \n\nThe orbit WILL change, though, yes. This happens with binary stars as well as planets. Since the supernova is removing mass from the primary star, there's less gravitational pull on the orbiting object after it happens. But the object is still moving at the old orbital speed. The resulting \"kick\" that the binary companion receives (be it a planet or a star) is called the [\"Blaauw kick\"](_URL_0_). Looking at the [virial theorem](_URL_1_), it becomes clear that if half the mass of the binary or more is lost in the supernova ejecta, the system won't remain bound. Think of it like spinning a sling around your head and then letting go. In this case, the planet/binary companion will go flying off at high speed. \n\nThere's a bit of extra complication, though, in that supernovae aren't necessarily symmetrical. There's an extra \"kick\" from a bit more mass being ejected in one direction than another. Depending on how these kicks line up with the velocity of the orbiting object at the time of the supernova, it's possible that a supernova losing less than half the binary mass results in an unbound system, or that one losing more than half the binary mass remains bound. How big these asymmetric kicks are, how common they are, and whether they apply to black holes as well as neutron stars is still an active subject of research. \n\nEven if the system remains bound, the orbit will end up being highly elliptical for a while even if it was circular to start with. The supernova gives it a strong push in one direction, mucking about with the symmetry. ",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "49168255",
"title": "Planet Nine",
"section": "Section::::Batygin and Brown hypothesis.:Origin.\n",
"start_paragraph_id": 19,
"start_character": 0,
"end_paragraph_id": 19,
"end_character": 777,
"text": "Planet Nine could have been captured from outside the Solar System during a close encounter between the Sun and another star. If a planet was in a distant orbit around this star, three-body interactions during the encounter could alter the planet's path, leaving it in a stable orbit around the Sun. A planet originating in a system without Jupiter-massed planets could remain in a distant eccentric orbit for a longer time, increasing its chances of capture. The wider range of possible orbits would reduce the odds of its capture in a relatively low inclination orbit to 1–2 percent. This process could also occur with rogue planets, but the likelihood of their capture is much smaller, with only 0.05–0.10% being captured in orbits similar to that proposed for Planet Nine.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "36104002",
"title": "GD 356",
"section": "Section::::Possible companion.\n",
"start_paragraph_id": 14,
"start_character": 0,
"end_paragraph_id": 14,
"end_character": 555,
"text": "A planet could possibly get into this situation by evaporating while orbiting inside the gaseous shell of the red giant and at the same time having its orbit decay due to bow-shock friction with the gas. Tides induce on the expanded star by the planet would also cause the orbit to decay, rather than expand as might have been expected to loss of gas from the star. These possibilities have been studied because that is the expected future of the Earth. Another hypothesis is that close-in planets could have formed during the merger of two white dwarfs.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "49168255",
"title": "Planet Nine",
"section": "Section::::Batygin and Brown hypothesis.:Origin.\n",
"start_paragraph_id": 20,
"start_character": 0,
"end_paragraph_id": 20,
"end_character": 967,
"text": "An encounter with another star could also alter the orbit of a distant planet, shifting it from a circular to an eccentric orbit. The \"in situ\" formation of a planet at this distance would require a very massive and extensive disk, or the outward drift of solids in a dissipating disk forming a narrow ring from which the planet accreted over a billion years. If a planet formed at such a great distance while the Sun was in its original cluster, the probability of it remaining bound to the Sun in a highly eccentric orbit is roughly 10%. A previous article reported that if the massive disk extended beyond 80 AU some objects scattered outward by Jupiter and Saturn would have been left in high inclination (inc 50°), low eccentricity orbits which have not been observed. An extended disk would also have been subject to gravitational disruption by passing stars and by mass loss due to photoevaporation while the Sun remained in the open cluster where it formed.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "21694580",
"title": "Interstellar object",
"section": "",
"start_paragraph_id": 3,
"start_character": 0,
"end_paragraph_id": 3,
"end_character": 525,
"text": "It is possible for objects orbiting a star to be ejected due to interaction with a third massive body, thereby becoming interstellar objects. Such a process was initiated in early 1980s when C/1980 E1, initially gravitationally bound to the Sun, passed near Jupiter and was accelerated sufficiently to reach escape velocity from the Solar System. This changed its orbit from elliptical to hyperbolic and made it the most eccentric known object at the time, with an eccentricity of 1.057. It is headed for interstellar space.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "252372",
"title": "Planetary system",
"section": "Section::::System architectures.:Components.:Planets.\n",
"start_paragraph_id": 45,
"start_character": 0,
"end_paragraph_id": 45,
"end_character": 225,
"text": "When the O-type star goes supernova any planets that had formed would become free-floating due to the loss of stellar mass unless the natal kick of the resulting remnant pushes it in the same direction as an escaping planet.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "36221385",
"title": "HD 97658",
"section": "Section::::Planetary system.\n",
"start_paragraph_id": 3,
"start_character": 0,
"end_paragraph_id": 3,
"end_character": 967,
"text": "On November 1, 2010, a super-earth was announced orbiting the star along with Gliese 785 b as part of the NASA-UC Eta-Earth program. The planet orbits in just under 9.5 days and was originally thought to have a minimum mass of 8.2 ± 1.2 M. Spurred by the possibility of transits, additional data was acquired for less than a year which found a lower mass for the star and hence reduced the minimum mass of the planet to 6.4 ± 0.7 M, and improved certainty on the time of possible transit. Transits of the planet were apparently detected and announced on September 12, 2011; this would make HD 97658 the second-to-brightest star with a transiting planet after 55 Cancri and indicating a low-density planet like Gliese 1214 b. However, the occurrence of transits was quietly retracted on April 11, 2012, and three days later it was announced that observations by the MOST space telescope could not confirm transits. Transits of radii larger than 1.87 R were ruled out.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "48543200",
"title": "List of Solar System objects by greatest aphelion",
"section": "",
"start_paragraph_id": 3,
"start_character": 0,
"end_paragraph_id": 3,
"end_character": 333,
"text": "Comets are thought to orbit the Sun at great distances, but then be perturbed by passing stars and the galactic tides. As they come into or leave the inner Solar System they may have their orbit changed by the planets, or alternatively be ejected from the Solar System. It is also possible they may collide with the Sun or a planet.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
4ll4gz | Why did the KPD, SPD and Members of the Many Socialist and Communist Militias and Organizations Not Offer Any Significant Resistance to the Nazis in 1933? With Not Even a Real Attempt at a General Strike like During the Kapp Putsch? | [
{
        "answer": "I have read many different takes on this question. A decent if incomplete explanation is that these forces were (a) demoralized and (b) split against each other.\n\n(b) is the less complex explanation, so I'll briefly address it first. To the KPD, the SPD was a \"fascist\" political party that had ruined the promise of the 1918 revolution. In 1932, the KPD joined the Nazis in a transit strike in Berlin. In other words, the KPD had no interest in maintaining democratic institutions. We can assume, also, that the Reichswehr would have moved with alacrity against any uprising on the part of the KPD or associated militias.\n\n(a) requires us to backtrack to the events of summer 1932. Franz von Papen is Chancellor. Eager to break the power of the SPD, he deposes the SPD government of Prussia by force. The Prussian state government was the last bastion of SPD power. Prussian Premier Otto Braun and Minister of the Interior Carl Severing, whose powers extended over Prussia's large and well-armed police force, would have been the ones in a position of sufficient authority to call a strike together. But contrast their position to that of Friedrich Ebert and Gustav Noske when these men called their strike in 1920. They were, respectively, President and Minister of Defense. They spoke with the authority of the nation in some sense, while those launching the coup were of the widely hated and discredited forces of monarchy and militarism. And while much of the military was on the fence in 1920, Reichswehr leader von Seeckt was, for instance, unwilling to take a positive move before an outcome was decided. In 1932, on the other hand, the Reichswehr stood with Minister of Defense Kurt von Schleicher. An outright strike could well have precipitated a violent reaction from the national government, and may indeed have reinforced its narrative of Communist/Socialist troublemaking requiring more authoritarian government.\n\nIn sum - unlike in 1920, Braun and Severing, and by extension the SPD, were in the position of illegitimacy. As Erich Eyck writes, \"large numbers of Germans, many of them quite influential, were jubilant at the prospect of getting rid of the Socialists and, if possible, of the unions as well.\" Finally, if the SPD had poor prospects of launching such a strike in 1932, their prospects were even more grim in 1933, after months of electoral drubbing and the loss of almost every position of power they had once held.\n\nYet, if they had truly believed in the Republic, wasn't it worth a last-ditch effort? Probably not. It is doubtful that the mass of workers could have been called to an effective strike. Not only had many millions of them defected to the KPD and NSDAP, but the unemployment rate would have made such a tactic ineffective. Eyck observes: \"For how could the trade unions call the workers from their posts when they knew that millions of unemployed were waiting the moment when these places might become vacant?\"\n\nA final point. I wrote [here](_URL_0_) several weeks ago that Schleicher reached several labor union ministers and persuaded them that they would be granted positions of power in the new order, thus keeping them from launching a coup. I have to admit, I may have been seduced by good story-telling. While it's plausible, I haven't seen the evidence for it. \n\nCited:\n\nErich Eyck, *History of the Weimar Republic, Vol. II*\n\nJohn Wheeler-Bennett, *Nemesis of Power*",
"provenance": null
},
{
        "answer": "The Comintern followed a policy of forcing splits inside socialist parties up until 1934. Parties would split into socialists (trying to work within the parliamentary system) and communists (following a revolutionary path to power). Nazi rule was expected to be short and to cause an uproar that would allow communists to come to power. In this context, the KPD didn't put up any resistance as the NSDAP was climbing to power, even assisting it in certain situations. As the NSDAP managed to hold on to power and crack down on the opposition far more effectively than was expected, the Comintern changed its policy and in 1934 started propagating the idea of \"popular fronts\" in the countries not yet under fascist rule - for instance, the Blum Popular Front government in France, a wide coalition of parties intended to oppose the spread of fascism. To achieve this, the Comintern and the communist parties eased up on the whole \"revolutionary\" rhetoric. ",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "36729062",
"title": "Merger of the KPD and SPD into the Socialist Unity Party of Germany",
"section": "Section::::Background.\n",
"start_paragraph_id": 4,
"start_character": 0,
"end_paragraph_id": 4,
"end_character": 556,
"text": "Among circles of the workers' parties KPD and SPD there were different interpretations of the reasons for the rise of the Nazis and their electoral success. A portion of the Social Democrats blamed the devastating role of Communists in the final phase of the Weimar Republic. The Communist Party, in turn, insulted the Social Democrats as \"social fascists\" (\"Sozialfaschisten\"). Others believed that the splitting of the labour movement into the SPD and KPD prevented them effectively opposing the power of the Nazis, made possible by the First World War.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "19860926",
"title": "History of the Social Democratic Party of Germany",
"section": "Section::::Social Democracy in Germany until 1945.:Weimar Republic (1918–1933).\n",
"start_paragraph_id": 26,
"start_character": 0,
"end_paragraph_id": 26,
"end_character": 1230,
"text": "Subsequently, the Social Democratic Party and the newly founded Communist Party of Germany (KPD), which consisted mostly of former members of the SPD, became bitter rivals, not least because of the legacy of the German Revolution. Under Defense Minister of Germany Gustav Noske, the party aided in putting down the Communist and left wing Spartacist uprising throughout Germany in early 1919 with the use of the Freikorps, a morally questionable decision that has remained the source of much controversy amongst historians to this day. While the KPD remained in staunch opposition to the newly established parliamentary system, the SPD became a part of the so-called Weimar Coalition, one of the pillars of the struggling republic, leading several of the short-lived interwar cabinets. The threat of the Communists put the SPD in a difficult position. The party had a choice between becoming more radical (which could weaken the Communists but lose its base among the middle class) or stay moderate, which would damage its base among the working class. Splinter groups formed: In 1928, a small group calling itself Neu Beginnen, in the autumn of 1931, the Socialist Workers' Party of Germany, and in December 1931 the Iron Front.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "8761118",
"title": "Free Association of German Trade Unions",
"section": "",
"start_paragraph_id": 2,
"start_character": 0,
"end_paragraph_id": 2,
"end_character": 1322,
"text": "During the years following its formation, the FVdG began to adopt increasingly radical positions. During the German socialist movement's debate over the use of mass strikes, the FVdG advanced the view that the general strike must be a weapon in the hands of the working class. The federation believed the mass strike was the last step before a socialist revolution and became increasingly critical of parliamentary action. Disputes with the mainstream labor movement finally led to the expulsion of FVdG members from the Social Democratic Party of Germany (SPD) in 1908 and the complete severing of relations between the two organizations. Anarchist and especially syndicalist positions became increasingly popular within the FVdG. During World War I, the FVdG rejected the SPD's and mainstream labor movement's cooperation with the German state—known as the \"Burgfrieden\"—but was unable to organize any significant resistance to or continue its regular activities during the war. Immediately after the November Revolution, the FVdG very quickly became a mass organization. It was particularly attractive to miners from the Ruhr area opposed to the mainstream unions' reformist policies. In December 1919, the federation merged with several minor left communist unions to become the Free Workers' Union of Germany (FAUD).\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "265557",
"title": "Communist Party of Germany",
"section": "Section::::Early history.\n",
"start_paragraph_id": 15,
"start_character": 0,
"end_paragraph_id": 15,
"end_character": 859,
"text": "Under the leadership of Liebknecht and Luxemburg, the KPD was committed to a revolution in Germany, and during 1919 and 1920 attempts to seize control of the government continued. Germany's Social Democratic government, which had come to power after the fall of the Monarchy, was vehemently opposed to the KPD's idea of socialism. With the new regime terrified of a Bolshevik Revolution in Germany, Defense Minister Gustav Noske formed a series of anti-communist paramilitary groups, dubbed \"Freikorps\", out of demobilized World War I veterans. During the failed Spartacist uprising in Berlin of January 1919, Liebknecht and Luxemburg, who had not initiated the uprising but joined once it had begun, were captured by the Freikorps and murdered. The Party split a few months later into two factions, the KPD and the Communist Workers Party of Germany (KAPD).\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "302252",
"title": "Council communism",
"section": "Section::::History.\n",
"start_paragraph_id": 9,
"start_character": 0,
"end_paragraph_id": 9,
"end_character": 459,
"text": "In Germany, the left communists were expelled from the Communist Party of Germany and formed the Communist Workers Party (KAPD). Similar parties were formed in the Netherlands, Bulgaria and Britain. The KAPD rapidly lost most of its members and it eventually dissolved. However, some of its militants had been instrumental in organising factory-based unions like the AAUD and AAUD-E, the latter being opposed to separate party organisation (see syndicalism).\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "43441278",
"title": "Socialist Workers Party (United States)",
"section": "Section::::History.:Communist League of America.\n",
"start_paragraph_id": 5,
"start_character": 0,
"end_paragraph_id": 5,
"end_character": 774,
"text": "The rise of fascism in Nazi Germany and the failure of the communist and social democratic left to unite against the common danger created a situation where certain radical parties throughout the world reexamined their priorities and sought a mechanism for building united action. As early as December 1933, a Trotskyist splinter group called the Communist League of Struggle (CLS), headed by former Socialist Party youth section leader Albert Weisbord and his wife Vera Buch, approached Norman Thomas of the Socialist Party of America seeking a united front hunger march of the two organizations followed by a general strike. This suggestion was dismissed as \"poppycock\" by SP Executive Secretary Clarence Senior, but the seed of the idea of joint action had been planted.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "44379525",
"title": "Spartacus League",
"section": "Section::::History.\n",
"start_paragraph_id": 6,
"start_character": 0,
"end_paragraph_id": 6,
"end_character": 630,
"text": "In December 1918, the Spartakusbund formally renamed itself the Communist Party of Germany (KPD). In January 1919, the KPD, along with the Independent Socialists, launched the Spartacist uprising. This included staging massive street demonstrations intended to destabilize the Weimar government, led by the centrists of the SPD under Chancellor Friedrich Ebert. The government accused the opposition of planning a general strike and communist revolution in Berlin. With the aid of the Freikorps (Free corps), Ebert's administration quickly crushed the uprising. Luxemburg and Liebknecht were taken prisoner and killed in custody.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
3oodaq | How were the letters written by Apostle Paul delivered? | [
{
        "answer": "First off, Saint Paul never knew Jesus. This is a very common misconception. He was not one of the original 12 apostles. He was a Jew with Roman citizenship who at first persecuted Christians, then later converted, became a missionary, and became arguably the key founder of Christianity. He was from Tarsus, Cilicia, and lived from roughly 5 to 67 AD.\n\nAnyway, I'll try to answer the question as best I can; I'm no expert. \n\nThe Roman postal system was incredibly sophisticated and advanced for the time period. It was very simple for one to send a letter almost anywhere in the empire as long as it had an address. Paul's letters were often addressed to cities where he or other missionaries had started Christian followings. Often they were written to answer theological questions that arose in these congregations from the lack of a central or canonical Christian law. The beginning of Christianity saw much debate and disagreement over the different interpretations of the different gospels and the reality/divinity of Jesus. The letters would often be sent from Paul to the known Christian leaders of specific towns. These letters were to be read aloud to the gathered Christians during their time of worship and congregation together. During the infancy of the Church it was common for letters to be sent out; there were always questions that needed to be clarified by someone they believed had authority on the matter.",
"provenance": null
},
{
"answer": "You already received a comment about the postal system so I wanted to point out that he actually states in a few of the letters who was delivering them. I just got to work and can't put a lot of time into this, but the two examples I was able to quickly find are Phoebe who was sent to Rome (Romans 16:1) and Timothy in Thessalonica (1 Thessalonians 3:2). \n\nPaul states that these two were sent as leaders of the church to help that particular congregation. It is possible that they followed the letters or preceded them, but the general consensus in my studies was they brought the letters themselves (they were going anyways) and would read and instruct through them to make sure the message was understood. ",
"provenance": null
},
{
"answer": "So I'm going to begin with one caveat. What I know about is the 4th century letter networks primarily amongst bishops, not the high imperial postal system of the 1st-2nd century of Paul's world. So bear that in mind.\n\nLetters outside of official government communication were not delivered by the imperial post (i.e. post office), they were delivered individually, by slaves, friends, or trusted individuals who would be chancing by the recipients location at personal cost to the sender or the messenger. \n\nI don't precisely know what the mediums were, but I presume a wax tabula or a scroll. Tabulas tended to more durable, but considering the length of some of these letters, I can't imagine papyrus scrolls not being used for communique.\n\nThere was a low expectation of privacy amongst late antique letters (doubly so outside of secret government communication) and many were written with the intent for them to be published. \n\nThis is \"non-presumption of privacy\" is an important thing to consider, as this is why late antique letters don't read like modern ones. Modern letters are considered intensely personal, not meant for dissemination, and usually a chronicle of autobiography or recent history. Ancient letters straddle what would be considered many modern genres. They could be philosophical treatises, essays, political commentary, panegyric-like praises, educational recommendations, in addition to autobiography and history. They were intended to be \"instructive\" as well as \"informative.\" This is why, regardless of whether an author cared for a letter to be saved, letters would be recirculated and sometimes published. At the very least, most letters (at least those that were saved) were not intended to be fully private.\n\nTo whom were the letters sent? To whomever they wanted. There are plenty of records of soldiers on the frontiers sending letters back to their family at home. 
However keep in mind, that outside the military, the only people who had the need to send a letter, were usually people who had the means or could afford to not be forced to live as part of the agrarian 80% of the roman world, tied to their lands by basic necessity or law, i.e. merchants, bishops, officials, senators, etc.\n\nWhen letters were received, they could be read alone or in a large group, but often the letter bearer was thought of as a stand-in for the person delivering the letter, so he would sometimes be invited in as a guest and quizzed about whatever details and circumstances regarding the letter and the letter sender. This would naturally result in the letter being re-read to the close ones of the recipient as well.\n\nOff the top of my head, I don't have a late antique example of multiple copies being made, but considering the known unreliability of letter transportation, for important communication this must've been so. I know as an example from way later, from the 16th century, Matteo Ricci would send multiple copies of his letters from China back to Rome via dual ships traveling in different directions. One east via the Americas, one west via the Indian ocean. In the Late Antique world, it was considered the responsibility of the letter sender to maintain regular contact, which was frequently revealed in the introductory rhetoric of most letters, lamenting delays or praising prompt replies, so if multiple letters weren't sent, I presume a summary would be attached in the next one if one was lost.\n\nSome sources I'm pulling from:\n\n* Ebbeler, Jennifer. “Tradition, Innovation, and Epistolary Mores.” In A Companion to Late Antiquity, edited by Philip Rousseau and Jutta Raithel, 270–84. Blackwell Companions to the Ancient World. Chichester, U.K. ; Malden, MA: Wiley-Blackwell, 2009.\n\n* Gibson, Roy K. “On the Nature of Ancient Letter Collections.” Journal of Roman Studies 102 (November 2012): 56–78. 
doi:10.1017/S0075435812000019.\n\n* Sotinel, Claire. “How Were Bishops Informed? Information Transmission across the Adriatic Sea in Late Antiquity.” In Travel, Communication and Geography in Late Antiquity: Sacred and Profane, edited by Linda Ellis and Frank L. Kidner. Aldershot, Hants, England ; Burlington, VT: Ashgate Pub Ltd, 2004.\n\n* Walsh, P. G., trans. Letters of St. Paulinus of Nola, Vol. 1. Ancient Christian Writers 35. Westminster, Md.: Paulist Press, 1966.\n",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "9952",
"title": "First Epistle to the Thessalonians",
"section": "Section::::Composition.\n",
"start_paragraph_id": 3,
"start_character": 0,
"end_paragraph_id": 3,
"end_character": 660,
"text": "Most New Testament scholars believe Paul the Apostle wrote this letter from Corinth, although information appended to this work in many early manuscripts (e.g., Codices Alexandrinus, Mosquensis, and Angelicus) state that Paul wrote it in Athens after Timothy had returned from Macedonia with news of the state of the church in Thessalonica (; ). For the most part, the letter is personal in nature, with only the final two chapters spent addressing issues of doctrine, almost as an aside. Paul's main purpose in writing is to encourage and reassure the Christians there. Paul urges them to go on working quietly while waiting in hope for the return of Christ.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "24140",
"title": "Paul the Apostle",
"section": "Section::::Writings.:Authorship.\n",
"start_paragraph_id": 102,
"start_character": 0,
"end_paragraph_id": 102,
"end_character": 496,
"text": "BULLET::::- First, they have found a difference in these letters' vocabulary, style, and theology from Paul's acknowledged writings. Defenders of the authenticity say that they were probably written in the name and with the authority of the Apostle by one of his companions, to whom he distinctly explained what had to be written, or to whom he gave a written summary of the points to be developed, and that when the letters were finished, Paul read them through, approved them, and signed them.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "9869",
"title": "Epistle to the Ephesians",
"section": "Section::::Composition.:Authorship.\n",
"start_paragraph_id": 9,
"start_character": 0,
"end_paragraph_id": 9,
"end_character": 349,
"text": "The first verse in the letter identifies Paul as its author. While early lists of New Testament books, including Marcion's canon and the Muratorian fragment, attribute the letter to Paul, more recently there have been challenges to Pauline authorship on the basis of the letter's characteristically non-Pauline syntax, terminology, and eschatology.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "9869",
"title": "Epistle to the Ephesians",
"section": "Section::::Composition.:Place, date, and purpose of the writing of the letter.\n",
"start_paragraph_id": 19,
"start_character": 0,
"end_paragraph_id": 19,
"end_character": 607,
"text": "If Paul was the author of the letter, then it was probably written from Rome during Paul's first imprisonment (; ; ), and probably soon after his arrival there in the year 62, four years after he had parted with the Ephesian elders at Miletus. However, scholars who dispute Paul's authorship date the letter to between 70–80 AD. In the latter case, the possible location of the authorship could have been within the church of Ephesus itself. Ignatius of Antioch himself seemed to be very well versed in the epistle to the Ephesians, and mirrors many of his own thoughts in his own epistle to the Ephesians.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "11378",
"title": "First Epistle to the Corinthians",
"section": "Section::::Composition.\n",
"start_paragraph_id": 9,
"start_character": 0,
"end_paragraph_id": 9,
"end_character": 226,
"text": "By comparing Acts of the Apostles and mentions of Ephesus in the Corinthian correspondence, scholars suggest that the letter was written during Paul's stay in Ephesus, which is usually dated as being in the range of AD 53–57.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "301005",
"title": "Marcionism",
"section": "Section::::Recent scholarship.\n",
"start_paragraph_id": 37,
"start_character": 0,
"end_paragraph_id": 37,
"end_character": 373,
"text": "David Trobisch argues that comparison of the oldest manuscripts of Paul’s letters show evidence that several epistles had been previously assembled as an anthology and published separate from the New Testament, and this anthology as a whole was then incorporated into the New Testament. Trobisch further argues for Paul as the assembler of his own letters for publication.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "9961",
"title": "Epistle to the Romans",
"section": "Section::::Hermeneutics.:Protestant interpretation.\n",
"start_paragraph_id": 88,
"start_character": 0,
"end_paragraph_id": 88,
"end_character": 289,
"text": "Martin Luther described Paul's letter to the Romans as \"the most important piece in the New Testament. It is purest Gospel. It is well worth a Christian's while not only to memorize it word for word but also to occupy himself with it daily, as though it were the daily bread of the soul\".\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
2eb1xd | Why did 1960s Communist China engage in so many territorial conflicts over tiniest bits of land with such major powers as India and as the USSR? | [
{
"answer": "It wasn't necessarily \"Mao\" that was responsible for these actions. If anything, Zhou Enlai played a bigger part, having been China's foreign minister up until the 1960s.\n\nBefore we get to the Chola incident and the 1963 Sino-Pakistan agreement, you should understand that the Chola incident was a result of the Sino-Indian War of 1962. This was due to a border conflict between China and India. India was concerned about seeming weak due to territorial conflicts with Pakistan, while China was concerned that India was allying with the Soviets to surround China, as well as subverting Chinese rule in recently annexed Tibet. Indian Prime Minister Nehru instituted the Forward Policy, which authorized Indian troops to move into disputed regions held by Chinese troops. As they moved deeper into these regions, they came into conflict with PRC troops, eventually resulting in several firefights. The Chinese were incensed as they believed this was part of a plan to destabilize Tibet, so they elected to attack India to punish them. The resulting treaty resulted in a peace that more or less demarcates the current borders, although there were still border disputes for years after the war. As a result of this incident, China courted Pakistan as an ally against India, to help offset the Soviet Union's courting of India.\n\nOn a similar note, the Sino-Soviet split had already been in motion for a long time. China had significant territorial claims on Russia, who had signed one of the \"unequal treaties\" in the 1800s to claim Outer Manchuria, or Primorye, as well as the annexation of the area known as Tannu Tava in the West, as well as border incidents in Mongolia and Xinjiang. As a result, to satisfy Chinese revanchism, China attempted to negotiate with the Soviet Union to \"revise\" these treaties which the Soviet Union found to be unacceptable. 
As a result, Chinese troops attacked Soviet forces in a series of border skirmishes over Zhenbao Island and various other territories.\n\nSources: \n\nMaxwell, India's China War\n\nLüthi, The Sino-Soviet Split",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "2947802",
"title": "Japan–Soviet Union relations",
"section": "",
"start_paragraph_id": 1,
"start_character": 0,
"end_paragraph_id": 1,
"end_character": 605,
"text": "Relations between the Soviet Union and Japan between the Communist takeover in 1917 and the collapse of Communism in 1991 tended to be hostile. Japan had sent troops to counter the Bolshevik presence in Russia's Far East during the Russian Civil War, and both countries had been in opposite camps during World War II and the Cold War. In addition, territorial conflicts over the Kuril Islands and South Sakhalin were a constant source of tension. These, with a number of smaller conflicts, prevented both countries from signing a peace treaty after World War II, and even today matters remain unresolved.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "4438586",
"title": "History of Sino-Russian relations",
"section": "Section::::Soviet Union, Republic of China, People's Republic of China.:Second Sino-Japanese War and World War II.\n",
"start_paragraph_id": 59,
"start_character": 0,
"end_paragraph_id": 59,
"end_character": 497,
"text": "In 1931, the Empire of Japan invaded Manchuria and created the puppet state of Manchukuo (1932), which signalled the beginning of the Second Sino-Japanese War. In 1937, a month after the Marco Polo Bridge Incident, the Soviet Union established a non-aggression pact with the Republic of China. During the World War II-period, the two countries suffered more losses than any other country, with China (in the Second Sino-Japanese war) losing over 35 million and the Soviet Union 27 million people.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "1098416",
"title": "Anti-Russian sentiment",
"section": "Section::::By country.:Rest of the world.:China.\n",
"start_paragraph_id": 134,
"start_character": 0,
"end_paragraph_id": 134,
"end_character": 448,
"text": "In the 1960s, tensions between two communist nations had emerged into a [[Sino-Soviet border conflict|border conflict]], in which almost resulted with Soviet Union attempt to use nuclear bombs to nuke China. The conflict would only last at 1989 and ended at 1991 with the collapse of USSR, however there is still a modern sense of resentment against Russia by a minority of Chinese, who see Russia as the perpetrator for crimes within the country.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "30964882",
"title": "Sino-Russian relations since 1991",
"section": "Section::::History.\n",
"start_paragraph_id": 4,
"start_character": 0,
"end_paragraph_id": 4,
"end_character": 368,
"text": "China and the USSR were rivals after the Sino-Soviet split in 1961, competing for control of the worldwide Communist movement. There was a serious possibility of a major war in the early 1960s; a brief border war took place in 1969. This enmity began to lessen after the death of Mao Zedong in 1976, but relations were poor until the fall of the Soviet Union in 1991.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "552565",
"title": "Sino-Soviet border conflict",
"section": "Section::::Assessment.:Aftermath.\n",
"start_paragraph_id": 41,
"start_character": 0,
"end_paragraph_id": 41,
"end_character": 628,
"text": "China's relations with the USSR remained sour after the conflict, despite the border talks, which began in 1969 and continued inconclusively for a decade. Domestically, the threat of war caused by the border clashes inaugurated a new stage in the Cultural Revolution; that of China's thorough militarization. The 9th National Congress of the Communist Party of China, held in the aftermath of the Zhenbao Island incident, confirmed Defense Minister Lin Biao as Mao's heir apparent. Following the events of 1969, the Soviet Union further increased its forces along the Sino-Soviet border, and in the Mongolian People's Republic.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "13831734",
"title": "Shripad Amrit Dange",
"section": "Section::::Sino-Indian War.:Sino-Soviet differences.\n",
"start_paragraph_id": 122,
"start_character": 0,
"end_paragraph_id": 122,
"end_character": 581,
"text": "Another issue that fueled the split in the Communist Party of India was parting of the ways between the USSR and China. Though the conflict had a long history, it came out in open in 1959, Khrushchev sought to appease the West during a period of the Cold War known as 'The Thaw', by holding a summit meeting with U.S. President Dwight Eisenhower. Two other reasons were USSR's unwillingness to support Chinese nuclear program and their neutrality in the initial days of Sino-Indian border conflict. These events greatly offended Mao Zedong and the other Chinese Communist leaders.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "10402850",
"title": "Soviet Volunteer Group",
"section": "Section::::Background.\n",
"start_paragraph_id": 4,
"start_character": 0,
"end_paragraph_id": 4,
"end_character": 1402,
"text": "Sino–Soviet diplomatic ties had been cut following the Sino-Soviet conflict (1929). At the time the Soviet Union was undergoing a country wide program of mass industrialization in preparation for a potential war on two fronts (with Germany and Japan respectively). The establishment of Manchukuo complicated the situation as its territory now housed a colony of 40,000 Soviet citizens working on the Chinese Eastern Railway. Although the Soviets refused to officially recognize the new state, they sold the railway to the Japanese in March 1935, at a cut-rate following a series of Japanese provocations. The Soviets felt unready for a new confrontation with Japan, opting to improve relations with China as a temporary countermeasure. The League of Nations remained silent on the issue of Japanese imperialism, pushing China to reactivate its unofficial communication channels with its only remaining potential ally. The Anti-Comintern Pact, signed on 25 November 1936, erased the last doubts held by both sides regarding the ongoing reconciliation efforts. On 7 July 1937, the Marco Polo Bridge Incident marked the beginning of the Second Sino-Japanese War. On 21 August, China and the Soviet Union signed a non–aggression pact. Although the pact made no mention of Soviet military support, it de facto established a tacit understanding that the Soviets would provide both military and material aid.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
6geqsm | i have terrible vision, but sometimes if i blink hard enough, my vision goes crystal clear til i blink again. why? | [
{
"answer": "As someone who's spent 4 years studying, researching and working clinically with eyeballs, here's my guess: \n\nYou're likely forcing your focusing system to focus through as much blur as it possibly can, assuming that while you \"blink enough\" you're concentrating your gaze, at a single object or direction. Both your cornea and your crystalline lens will change shape in order for you to be able to focus; younger people, especially kids, have a much greater dynamic range for focusing then do older folks, so if you're young, that's probably most of it. If you know you have terrible vision, meaning a high prescription in one and or both eyes, you definitely should not do this. In that case, you'll probably get headaches if you do it enough. Just use glasses. ",
"provenance": null
},
{
"answer": "Could be that blinking hard, your eyelids are pressing on your cornea enough to flatten them, essentially making your nearsightedness less. The effect lasts until you blink again, and your cornea resumes its usual shape and the clarity in your vision disappears.",
"provenance": null
},
{
"answer": "It is because you spread layer of sticky tears on your cornea. It just happens to be concave at the right spot ( assuming you are myopic ). \n\nUsually this layer is convex and just worsen the vision. I sometimes have flush my eyes to restore precise vision. I do not understand why the liquid is sometimes stickier causing these problems.",
"provenance": null
},
{
"answer": "These answers are all over the board and everyone sounds 100% sure of themselves. You may want to find your way over to r/askscience and hopefully an ophthalmologist can chime in.",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "13863144",
"title": "Fundus photography",
"section": "Section::::Fundus camera.:Modes.:Resolve artifact in fundus photography.\n",
"start_paragraph_id": 26,
"start_character": 0,
"end_paragraph_id": 26,
"end_character": 399,
"text": "Blinking results in blurred and incomplete image of the fundus. It is imperative to instruct the patient not to blink when the fundus photo is taken.The patient may blink normally at any other time to prevent the excessive drying of the eye. A dry eye may also lead to a blurred fundus photo. When dry eye is suspected, ask the patient to blink several times to lubricate the eye before continuing.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "39464282",
"title": "The Light That Failed (1939 film)",
"section": "Section::::Plot.\n",
"start_paragraph_id": 7,
"start_character": 0,
"end_paragraph_id": 7,
"end_character": 224,
"text": "When his vision starts to blur, he goes to see a doctor (Halliwell Hobbes), who gives him a grim prognosis: as a result of his old war injury, he will go blind, in a year if he avoids strain, \"not very long\" if he does not.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "7407460",
"title": "Optic neuropathy",
"section": "Section::::Causes.:Mitochondrial optic neuropathies.:Nutritional optic neuropathies.\n",
"start_paragraph_id": 32,
"start_character": 0,
"end_paragraph_id": 32,
"end_character": 555,
"text": "Patients who suffer from nutritional optic neuropathy may notice that colors are not as vivid or bright as before and that the color red is washed out. This normally occurs in both eyes at the same time and is not associated with any eye pain. They might initially notice a blur or fog, followed by a drop in vision. While vision loss may be rapid, progression to blindness is unusual. These patients tend to have blind spots in the center of their vision with preserved peripheral vision. In most cases, the pupils continue to respond normally to light.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "58816250",
"title": "Phacolytic glaucoma",
"section": "Section::::Signs and symptoms.\n",
"start_paragraph_id": 5,
"start_character": 0,
"end_paragraph_id": 5,
"end_character": 327,
"text": "Another symptom includes the fading of visual clarity. This symptom makes the eye create an image commonly described to appear as though looking through a waterfall. If the lens becomes completely opaque the individual will become blind, even though the photoreceptors are completely functional. Other common symptoms include;\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "3132998",
"title": "Lilac chaser",
"section": "Section::::Explanation.\n",
"start_paragraph_id": 13,
"start_character": 0,
"end_paragraph_id": 13,
"end_character": 638,
"text": "BULLET::::3. When a blurry stimulus is presented to a region of the visual field away from where we are fixating, and we keep our eyes still, that stimulus will disappear even though it is still physically presented. This is called Troxler's fading. It occurs because although our eyes move a little when we are fixating on a point, away from that point (in \"peripheral vision\") the movements are not large enough to shift the lilac discs to new neurons of the visual system. Their afterimages essentially cancel the original images, so that all one sees of the lilac discs is grey, except for the gap where the green afterimage appears.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "725992",
"title": "Blinking",
"section": "Section::::Blinking in everyday life.:Adults.\n",
"start_paragraph_id": 25,
"start_character": 0,
"end_paragraph_id": 25,
"end_character": 479,
"text": "When the eyes dry out or become fatigued due to reading on a computer screen, it can be an indication of Computer Vision Syndrome. Computer Vision Syndrome can be prevented by taking regular breaks, focusing on objects far from the screen, having a well-lit workplace, or using a blink reminder application such as EyeLeo or VisionProtect. Studies suggest that adults can learn to maintain a healthy blinking rate while reading or looking at a computer screen using biofeedback.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "43568708",
"title": "Mitochondrial optic neuropathies",
"section": "Section::::Causes.:Nutritional optic neuropathies.\n",
"start_paragraph_id": 11,
"start_character": 0,
"end_paragraph_id": 11,
"end_character": 846,
"text": "Nutritional deficiency may be the cause of a genuine optic neuropathy, sometimes associated with involvement of the peripheral nervous system, called peripheral neuropathy. Loss of vision is usually bilateral, painless, chronic, insidious and slowly progressive. Most often, they present as a non-specific retrobulbar optic neuropathy. Patients may notice that colors are not as vivid or bright as before and that the color red is washed out. This normally occurs in both eyes at the same time and is not associated with any eye pain. They might initially notice a blur or fog, followed by a drop in vision. While vision loss may be rapid, progression to blindness is unusual. These patients tend to have blind spots in the center of their vision with preserved peripheral vision. In most cases, the pupils continue to respond normally to light.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
bkgu44 | to anybody who has used nesquik milkshake powder, why is it that the chocolate powder never mixes in with the milk fully, yet the banana powder does? | [
{
"answer": "The primary ingredients in the banana powder are cane sugar and maltodextrin (which is a white powder made from flour starch and used as a food additive). Both are very soluble in water (or milk), so it dissolves easily.\n\n\nThe chocolate powder, the primary ingredients are cane sugar (dissolves easily) and cocoa powder. Cocoa powder is about 22% fat, which is insoluble (doesn't dissolve well in water or milk). So the bits that don't dissolve are the cocoa powder, due largely to the fat.\n\n\nIt will dissolve better using hot water, and vigorous stirring, but may not do it perfectly even then.",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "657067",
"title": "Powdered milk",
"section": "Section::::Food and health uses.\n",
"start_paragraph_id": 10,
"start_character": 0,
"end_paragraph_id": 10,
"end_character": 573,
"text": "Powdered milk is frequently used in the manufacture of infant formula, confectionery such as chocolate and caramel candy, and in recipes for baked goods where adding liquid milk would render the product too thin. Powdered milk is also widely used in various sweets such as the famous Indian milk balls known as gulab jamun and a popular Indian sweet delicacy (sprinkled with desiccated coconut) known as chum chum (made with skim milk powder). Many no-cook recipes that use nut butters use powdered milk to prevent the nut butter from turning liquid by absorbing the oil. \n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "47270314",
"title": "Stracciatella (ice cream)",
"section": "Section::::Description.\n",
"start_paragraph_id": 3,
"start_character": 0,
"end_paragraph_id": 3,
"end_character": 557,
"text": "Makers produce the effect by drizzling melted chocolate into plain milk ice cream towards the end of the churning process; chocolate solidifies immediately coming in contact with the cold ice cream, and is then broken up and incorporated into the ice cream with a spatula. This process creates the shreds of chocolate that give stracciatella its name. (\"Straciatella\" in Italian means \"little shred\".) While straciatella ice cream traditionally involves milk ice cream and milk chocolate, modern variations can also be made with vanilla and dark chocolate.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "292340",
"title": "Flavonoid",
"section": "Section::::Dietary sources.:Cocoa.\n",
"start_paragraph_id": 50,
"start_character": 0,
"end_paragraph_id": 50,
"end_character": 267,
"text": "Flavonoids exist naturally in cocoa, but because they can be bitter, they are often removed from chocolate, even dark chocolate. Although flavonoids are present in milk chocolate, milk may interfere with their absorption; however this conclusion has been questioned.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "38992849",
"title": "Nesquik",
"section": "Section::::Advertising campaigns.:2012–2013 attempted TV ad ban in England.\n",
"start_paragraph_id": 96,
"start_character": 0,
"end_paragraph_id": 96,
"end_character": 561,
"text": "The ad for Nesquik chocolate milkshake stated: \"You know, kids only grow up once, which is why they pack their days full of the good stuff. So start theirs with a tasty glass of Nesquik at breakfast. It has essential vitamins and minerals to help them grow and develop because all this laughing and playing can be hard work.\" An animation showed the ingredients \"Vitamins D, B & C\", \"Iron\", and \"Magnesium\" adjacent to a glass of the product, mixed with milk. On-screen text during the ad read, \"Enjoy Nesquik as part of a balanced diet and healthy lifestyle\".\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "19714",
"title": "Milk",
"section": "Section::::Varieties and brands.:Additives and flavoring.\n",
"start_paragraph_id": 168,
"start_character": 0,
"end_paragraph_id": 168,
"end_character": 383,
"text": "Milk often has flavoring added to it for better taste or as a means of improving sales. Chocolate milk has been sold for many years and has been followed more recently by strawberry milk and others. Some nutritionists have criticized flavored milk for adding sugar, usually in the form of high-fructose corn syrup, to the diets of children who are already commonly obese in the U.S.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "347493",
"title": "Crème anglaise",
"section": "",
"start_paragraph_id": 2,
"start_character": 0,
"end_paragraph_id": 2,
"end_character": 893,
"text": "The cream is made by whipping egg yolks and sugar together until the yolk is almost white, and then slowly adding hot milk, while whisking. Vanilla beans (seeds) may be added for extra flavour and visual appeal. The sauce is then cooked over low heat (excessive heating may cause the yolks to cook, resulting in scrambled eggs) and stirred constantly with a spoon until it is thick enough to coat the back of a spoon, and then removed from the heat. It is also possible to set the sauce into custard cups and bake in a bain-marie until the egg yolks set. If the sauce reaches too high a temperature, it will curdle, although it can be salvaged by straining into a container placed in an ice bath. Cooking temperature should be between 70 °C (156 °F) and 83 °C (180 °F); the higher the temperature, the thicker the resulting cream, as long as the yolks are fully incorporated into the mixture.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "2001439",
"title": "Hot milk cake",
"section": "",
"start_paragraph_id": 4,
"start_character": 0,
"end_paragraph_id": 4,
"end_character": 287,
"text": "Hot milk cake gets its distinctive flavor from the scalded milk that is the liquid component of the batter. It differs from traditional sponge cakes in that it contains baking powder as leavening, and the eggs are beaten together whole instead of whipped as yolks and whites separately.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
atu5zz | why does a scientific calculator show "0" as a result if i add 1 to a really high number and then substract said high number although it should show "1"? | [
{
"answer": "Your calculator doesn't store all of the digits for 2^50, so the 1 at the very end gets removed from the memory. How many digits a calculator actually holds depends from calculator to calculator. ",
"provenance": null
},
{
"answer": "For 32 bit floating point numbers in IEEE754 format there's something called precision error and rounding error and a whole bunch if other problems.\n\nA 32 bit integer number can store all numbers up to roughly 4 billion.\n\nBut a 32 bit float can store up to roughly 3x10^38 which is much higher than 4 billion.\n\nHow is that possible?\n\nIt's because after roughly 16 million, reals don't store every integer anymore. There start to be gaps in what integer can be stored accurately and the gaps keep getting larger.\n\nSo let's say if you're adding 1 to 17000000 the result is still 17000000 but if you add 2 then the result is 17000002.\n\nHowever at 20000000 you need to add 3 to get something larger because now neither 20000001 nor 20000002 can be represented.\n\nIf the magnitude difference between the two numbers you add is larger than roughly 10^7 you will have problems.\n\nIf variable X is s real and you perform X=X+1 over and over, X will increment roughly until 10^7 and then it will stop adding because the result of 1+10^7=10^7.\n",
"provenance": null
},
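The gap behavior described above can be reproduced in a few lines of Python by round-tripping ordinary doubles through IEEE 754 single precision with the standard `struct` module. This is an illustration of the general float32 mechanism, not of any particular calculator's internals:

```python
import struct

def f32(x: float) -> float:
    # Round a Python float (a double) to the nearest IEEE 754 single-precision value.
    return struct.unpack('f', struct.pack('f', x))[0]

big = f32(2.0 ** 50)
print(f32(big + 1.0) == big)        # True: the +1 vanishes below float32 precision
print(f32(f32(big + 1.0) - big))    # 0.0, matching the calculator's answer

# Integer gaps open up past 2**24 = 16777216:
print(f32(16_777_216.0 + 1.0))      # 16777216.0 -- 16777217 has no float32 representation
print(f32(16_777_216.0 + 2.0))      # 16777218.0 -- even neighbors in this range still fit
```

Between 2^24 and 2^25 only even integers are representable, which is why adding 1 is silently lost while adding 2 survives.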
{
"answer": "The floating point and rounding situation other people mention is true, but I think there is also an order of operation that is important here.\n\nLet's say your input is a + b - c, the calculator processes a + b first (which equals 2^50 due to said storage depth, the 1 is dropped) then it subtracts c (which equals zero since 2^50 - 2^50).\n\nWhat happens if you input 2^50 - 2^50 + 1?\n",
"provenance": null
},
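The order-of-operations point can be checked by simulating a single-precision calculator register in Python's `struct` module. This is a simplification (real calculators typically use decimal arithmetic with 10 to 14 digits rather than binary float32), but the rounding effect is analogous:

```python
import struct

def f32(x: float) -> float:
    # Emulate a calculator register that rounds every intermediate result to float32.
    return struct.unpack('f', struct.pack('f', x))[0]

big = f32(2.0 ** 50)

# Left to right, as a simple calculator evaluates it:
print(f32(f32(big + 1.0) - big))   # 0.0 -- the 1 is rounded away in the first step
# Reordered so the cancellation happens first:
print(f32(f32(big - big) + 1.0))   # 1.0 -- no precision is lost
```

So entering 2^50 - 2^50 + 1 should indeed give 1, because the large terms cancel before the small one is added.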
{
"answer": null,
"provenance": [
{
"wikipedia_id": "59715",
"title": "Scientific notation",
"section": "Section::::E-notation.\n",
"start_paragraph_id": 20,
"start_character": 0,
"end_paragraph_id": 20,
"end_character": 1181,
"text": "Most calculators and many computer programs present very large and very small results in scientific notation, typically invoked by a key labelled (for \"exponent\"), (for \"enter exponent\"), , , , or depending on vendor and model. Because superscripted exponents like 10 cannot always be conveniently displayed, the letter \"E\" (or \"e\") is often used to represent \"times ten raised to the power of\" (which would be written as \"× 10\") and is followed by the value of the exponent; in other words, for any two real numbers \"m\" and \"n\", the usage of \"\"m\"E\"n\"\" would indicate a value of \"m\" × 10. In this usage the character \"e\" is not related to the mathematical constant \"e\" or the exponential function \"e\" (a confusion that is unlikely if scientific notation is represented by a capital \"E\"). Although the \"E\" stands for \"exponent\", the notation is usually referred to as \"(scientific) E-notation\" rather than \"(scientific) exponential notation\". The use of E-notation facilitates data entry and readability in textual communication since it minimizes keystrokes, avoids reduced font sizes and provides a simpler and more concise display, but it is not encouraged in some publications.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "4521681",
"title": "Sinclair Sovereign",
"section": "Section::::Design.:Functions.\n",
"start_paragraph_id": 12,
"start_character": 0,
"end_paragraph_id": 12,
"end_character": 837,
"text": "As well as addition, subtraction, multiplication and division, it had reciprocal and square-root functions, and the ability to multiply by a fixed constant. With an eight-digit display, the calculator could display positive numbers between 0.0000001 and 99,999,999, and negative numbers between -0.000001 and -9,999,999. Calculators of the time tended to have displays of between 3 and 12 digits, as reducing the number of digits was an effective way of reducing the cost of the calculator. A number outside that range leads to an overflow, and the screen flashes and all keys except the clear key are rendered inoperable to inform the user of the error. A independent memory register could read information from the screen, and information could only be taken from the memory onto the screen. Five keys were used for memory operations.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "567292",
"title": "Mental calculation",
"section": "Section::::Methods and techniques.:Approximating common logs (log base 10).\n",
"start_paragraph_id": 302,
"start_character": 0,
"end_paragraph_id": 302,
"end_character": 243,
"text": "The same process applies for numbers between 0 and 1. For example, 0.045 would be written as 4.5 × 10^−2. The only difference is that b is now negative, so when adding you are really subtracting. This would yield the result 0.653 − 2, or −1.347.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "2698660",
"title": "Methods of computing square roots",
"section": "Section::::Approximations that depend on the floating point representation.\n",
"start_paragraph_id": 207,
"start_character": 0,
"end_paragraph_id": 207,
"end_character": 329,
"text": "where \"a\" is a bias for adjusting the approximation errors. For example, with \"a\" = 0 the results are accurate for even powers of 2 (e.g., 1.0), but for other numbers the results will be slightly too big (e.g., 1.5 for 2.0 instead of 1.414... with 6% error). With \"a\" = -0x4B0D2, the maximum relative error is minimized to ±3.5%.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "3821",
"title": "Binary-coded decimal",
"section": "Section::::Subtraction with BCD.\n",
"start_paragraph_id": 65,
"start_character": 0,
"end_paragraph_id": 65,
"end_character": 359,
"text": "Thus the result of the subtraction is 1001 1001 0010 0101 (−925). To confirm the result, note that the first digit is 9, which means negative. This seems to be correct, since 357 − 432 should result in a negative number. The remaining nibbles are BCD, so 1001 0010 0101 is 925. The ten's complement of 925 is 1000 − 925 = 75, so the calculated answer is −75.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "1352428",
"title": "Modulo operation",
"section": "",
"start_paragraph_id": 3,
"start_character": 0,
"end_paragraph_id": 3,
"end_character": 458,
"text": "For example, the expression \"5 mod 2\" would evaluate to 1 because 5 divided by 2 has a quotient of 2 and a remainder of 1, while \"9 mod 3\" would evaluate to 0 because the division of 9 by 3 has a quotient of 3 and leaves a remainder of 0; there is nothing to subtract from 9 after multiplying 3 times 3. (Doing the division with a calculator will not show the result referred to here by this operation; the quotient will be expressed as a decimal fraction.)\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "27918833",
"title": "Numeric precision in Microsoft Excel",
"section": "Section::::Accuracy and binary storage.\n",
"start_paragraph_id": 6,
"start_character": 0,
"end_paragraph_id": 6,
"end_character": 790,
"text": "For \"x\"′s that are not simple powers of 2, a noticeable error can occur even when \"x\" is quite large. For example, if \"x\" = 1/1000, then the computed result is 9.9999999999989 × 10^−4, an error in the 13th significant figure. In this case, if Excel simply added and subtracted the decimal numbers, avoiding the conversion to binary and back again to decimal, no round-off error would occur and accuracy actually would be better. Excel has the option to \"Set precision as displayed\". With this option, depending upon circumstance, accuracy may turn out to be better or worse, but you will know exactly what Excel is doing. (It should be noted, however, that only the selected precision is retained, and one cannot recover extra digits by reversing this option.) Some similar examples can be found at this link.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
b99cju | how does regenerative brakes work ? | [
{
"answer": "Electrical induction.\n\nYou have probably made an electromagnet out of a coil of wire around a nail and a battery in school. Electricity flowing through a conductor will form a magnetic field around the conductor, and the reverse is true as well. A magnetic field moving around a conductor will cause an electrical current within it.\n\nAn electrical motor and an electrical generator are basically the same device, the difference being the input and the output. A generator takes the physical turning of magnets past wires to make electricity and a motor takes electricity moving through wires to make a magnetic field to turn the magnets.\n\nRegenerative braking is using the momentum of a moving car to turn the magnets and create electricity. This slows the car down, and the most effective use of that captured electricity is to turn around and use it to accelerate the car again through the reverse process.",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "305992",
"title": "Regenerative brake",
"section": "",
"start_paragraph_id": 1,
"start_character": 0,
"end_paragraph_id": 1,
"end_character": 787,
"text": "Regenerative braking is an energy recovery mechanism which slows a vehicle or object by converting its kinetic energy into a form which can be either used immediately or stored until needed. In this mechanism, the electric motor uses the vehicle's momentum to recover energy that would be otherwise lost to the brake discs as heat. This contrasts with conventional braking systems, where the excess kinetic energy is converted to unwanted and wasted heat by friction in the brakes, or with dynamic brakes, where energy is recovered by using electric motors as generators but is immediately dissipated as heat in resistors. In addition to improving the overall efficiency of the vehicle, regeneration can greatly extend the life of the braking system as its parts do not wear as quickly.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "305992",
"title": "Regenerative brake",
"section": "Section::::General.\n",
"start_paragraph_id": 3,
"start_character": 0,
"end_paragraph_id": 3,
"end_character": 634,
"text": "The most common form of regenerative brake involves an electric motor as an electric generator. In electric railways the electricity generated is fed back into the supply system. In battery electric and hybrid electric vehicles, the energy is stored chemically in a battery, electrically in a bank of capacitors, or mechanically in a rotating flywheel. Hydraulic hybrid vehicles use hydraulic motors to store energy in the form of compressed air. In a fuel cell powered vehicle, the electric energy generated by the motor is used to break waste water down into oxygen, and hydrogen which goes back into the fuel cell for later reuse.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "11567567",
"title": "Energy-efficient driving",
"section": "Section::::Techniques.:Acceleration and deceleration (braking).\n",
"start_paragraph_id": 21,
"start_character": 0,
"end_paragraph_id": 21,
"end_character": 300,
"text": "Conventional brakes dissipate kinetic energy as heat, which is irrecoverable. Regenerative braking, used by hybrid/electric vehicles, recovers some of the kinetic energy, but some energy is lost in the conversion, and the braking power is limited by the battery's maximum charge rate and efficiency.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "8166749",
"title": "Hybrid electric vehicle",
"section": "Section::::History.:Predecessors of present technology.\n",
"start_paragraph_id": 32,
"start_character": 0,
"end_paragraph_id": 32,
"end_character": 385,
"text": "The regenerative brake concept was further developed in the early 1980s by David Arthurs, an electrical engineer, using off-the shelf components, military surplus, and an Opel GT. The voltage controller to link the batteries, motor (a jet-engine starter motor), and DC generator was Arthurs'. The vehicle exhibited fuel efficiency, and plans for it were marketed by Mother Earth News.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "65423",
"title": "Brake",
"section": "Section::::Background.\n",
"start_paragraph_id": 3,
"start_character": 0,
"end_paragraph_id": 3,
"end_character": 725,
"text": "Most brakes commonly use friction between two surfaces pressed together to convert the kinetic energy of the moving object into heat, though other methods of energy conversion may be employed. For example, regenerative braking converts much of the energy to electrical energy, which may be stored for later use. Other methods convert kinetic energy into potential energy in such stored forms as pressurized air or pressurized oil. Eddy current brakes use magnetic fields to convert kinetic energy into electric current in the brake disc, fin, or rail, which is converted into heat. Still other braking methods even transform kinetic energy into different forms, for example by transferring the energy to a rotating flywheel.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "1277811",
"title": "Hybrid Synergy Drive",
"section": "Section::::Operation.:Performance.\n",
"start_paragraph_id": 81,
"start_character": 0,
"end_paragraph_id": 81,
"end_character": 573,
"text": "BULLET::::- gradual braking: Regenerative brakes re-use the energy of braking, but cannot absorb energy as fast as conventional brakes. Gradual braking recovers energy for re-use, boosting mileage; hard braking wastes the energy as heat, just as for a conventional car. Use of the \"B\" (braking) selector on the transmission control is useful on long downhill runs to reduce heat and wear on the conventional brakes, but it does not recover additional energy. Constant use of \"B\" is discouraged by Toyota as it \"may cause decreased fuel economy\" compared to driving in \"D\".\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "1519764",
"title": "Brake fade",
"section": "",
"start_paragraph_id": 2,
"start_character": 0,
"end_paragraph_id": 2,
"end_character": 423,
"text": "Brake fade is caused by a buildup of heat in the braking surfaces and the subsequent changes and reactions in the brake system components and can be experienced with both drum brakes and disc brakes. Loss of stopping power, or fade, can be caused by friction fade, mechanical fade, or fluid fade. Brake fade can be significantly reduced by appropriate equipment and materials design and selection, as well as good cooling.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
2ojwue | why do some tv shows have a sign language interpreter on the screen? why can't they just use subtitles? | [
{
"answer": "As far as the interpreter goes, though, some deaf people may prefer it because they're accessing information in their own language (one that is readily accessible)... English is usually the second language learned for deaf people, so that may be a secondary choice.",
"provenance": null
},
{
"answer": "Might it be because it is live television?",
"provenance": null
},
{
"answer": "When broadcasting live, it's much faster to translate to sign language than to write subtitles",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "519148",
"title": "Television in the United States",
"section": "Section::::Television channels and networks.:Broadcast television.:Broadcast television in languages other than English.:Other languages.\n",
"start_paragraph_id": 48,
"start_character": 0,
"end_paragraph_id": 48,
"end_character": 591,
"text": "There have also been a few local stations that have broadcast programming in American Sign Language, accompanied by English closed captioning. Prior to the development of closed captioning, it was not uncommon for some public television programs to incorporate ASL translations by an on-screen interpreter. An interpreter may still be utilized for the deaf and hard-of-hearing community for on-air emergency broadcasts (such as severe weather alerts given by local governments) as well as televised press conferences by local and state government officials accompanied by closed captioning.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "24946015",
"title": "Subtitle editor",
"section": "Section::::Purpose.\n",
"start_paragraph_id": 3,
"start_character": 0,
"end_paragraph_id": 3,
"end_character": 223,
"text": "In television, subtitles are used for \"clarification, translation, services for the deaf, as well as identifying places or people in the news.\" In movies, subtitles are mainly used for translations from foreign languages. \n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "7224224",
"title": "Subtitle",
"section": "",
"start_paragraph_id": 3,
"start_character": 0,
"end_paragraph_id": 3,
"end_character": 303,
"text": "Sometimes, mainly at film festivals, subtitles may be shown on a separate display below the screen, thus saving the film-maker from creating a subtitled copy for perhaps just one showing. Television subtitling for the deaf and hard of hearing is also referred to as closed captioning in some countries.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "54470227",
"title": "A Quiet Place (film)",
"section": "Section::::Production.:Use of sign language.\n",
"start_paragraph_id": 34,
"start_character": 0,
"end_paragraph_id": 34,
"end_character": 657,
"text": "The producers Andrew Form and Bradley Fuller said they initially planned not to provide on-screen subtitles for sign-language dialogue with \"context clues,\" but realized that for the scene in which the deaf daughter and her hearing father argue about the modified hearing aid, subtitles were necessary. The producers subsequently added subtitles for all sign-language dialogue in the film. Producer Brad Fuller said, \"And I think once you put one subtitle in, you subtitle the whole movie. You don't take liberties like, 'Oh they probably know what I love you is, but we don't subtitle it.' It's just gonna live everywhere and that's the world we live by.\"\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "22348",
"title": "Opera",
"section": "Section::::Language and translation issues.\n",
"start_paragraph_id": 95,
"start_character": 0,
"end_paragraph_id": 95,
"end_character": 793,
"text": "In the 1980s, supertitles (sometimes called surtitles) began to appear. Although supertitles were first almost universally condemned as a distraction, today many opera houses provide either supertitles, generally projected above the theatre's proscenium arch, or individual seat screens where spectators can choose from more than one language. TV broadcasts typically include subtitles even if intended for an audience who knows well the language (for example, a RAI broadcast of an Italian opera). These subtitles target not only the hard of hearing but the audience generally, since a sung discourse is much harder to understand than a spoken one—even in the ears of native speakers. Subtitles in one or more languages have become standard in opera broadcasts, simulcasts, and DVD editions.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "8860",
"title": "Dubbing (filmmaking)",
"section": "Section::::Alternatives.:Subtitles.\n",
"start_paragraph_id": 260,
"start_character": 0,
"end_paragraph_id": 260,
"end_character": 489,
"text": "In the Netherlands, Flanders, Nordic countries, Estonia and Portugal, films and television programmes are shown in the original language (usually English) with subtitles, and only cartoons and children's movies and programs are dubbed, such as the \"Harry Potter\" series, \"Finding Nemo\", \"Shrek\", \"Charlie and the Chocolate Factory\" and others. Cinemas usually show both a dubbed version and one with subtitles for this kind of movie, with the subtitled version shown later in the evening.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "40883",
"title": "Closed captioning",
"section": "Section::::Application.\n",
"start_paragraph_id": 38,
"start_character": 0,
"end_paragraph_id": 38,
"end_character": 504,
"text": "In the United States, the National Captioning Institute noted that English as a foreign or second language (ESL) learners were the largest group buying decoders in the late 1980s and early 1990s before built-in decoders became a standard feature of US television sets. This suggested that the largest audience of closed captioning was people whose native language was not English. In the United Kingdom, of 7.5 million people using TV subtitles (closed captioning), 6 million have no hearing impairment.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
66k21l | "the core of the planet earth is made of iron and nickel": how scientists can determine that if no one has been in the core of the earth? | [
{
"answer": "We have a pretty good idea about what's on the inside of the Earth, based on the geologist's equivalent of a CAT scan or an MRI -- earthquake data. When an earthquake happens, it sends waves bouncing around the inside of the planet. These waves change direction and speed based on the kinds of materials they pass through. Geologists can detect the movement of these waves by taking measurements at different locations all across the planet, and in so doing, build a picture of how the inside of the planet is constructed.\n\nThat's how we know that the interior of the Earth is separated into four layers, that the innermost is made of something solid, and that at least one of them is an actual liquid. From here, scientists can use other information to get an idea of what elements the interior is actually composed of.\n\nBased on the estimated density of the solid inner core, we can guess that it's probably made of iron. This hypothesis is supported by the fact that iron appears to be exceedingly plentiful in the solar system. Given how plentiful it is, and given that we know it's a very dense element, and given what we know about how a dense metal like iron would behave in a still-molten Earth when it was forming, it makes sense that Iron is probably what our core is made of. \n\nWe can guess a few more things about how the inner and outer cores behave, based on the fact that the Earth has a magnetic field. We know that the inner core must be rotating, and that the outer core must be convecting, because without those two things, the Earth would not have a magnetic field. So the existence of some external factors can tell us a lot about the internal factors of our planet.",
"provenance": null
},
{
"answer": "The truth is no one really knows, as no direct measurements can be made. That said, the theoretical composition of the core of the earth has been estimated based on a few things:\n\n1. The earth's magnetic field could only be formed with a large iron mass at its core\n2. Iron and nickel are relatively dense and would tend to have migrated towards the center of the earth when it was still a giant ball of liquid. \n3. An interesting theory exists that at the center of the core is a large mass of uranium which acts as a natural fission reactor. ",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "55321500",
"title": "Geochemistry of carbon",
"section": "Section::::Core.\n",
"start_paragraph_id": 53,
"start_character": 0,
"end_paragraph_id": 53,
"end_character": 683,
"text": "Earth's core is believed to be mostly an alloy of iron and nickel. The density indicates that it also contains a significant amount of lighter elements. Elements such as hydrogen would be stable in the Earth's core, however the conditions at the formation of the core would not be suitable for its inclusion. Carbon is a very likely constituent of the core. Preferential partitioning of the carbon isotope C into the metallic core, during its formation, may explain why there seems to be more C on the surface and mantle of the Earth compared to other solar system bodies (−5‰ compared to -20‰). The difference can also help to predict the value of the carbon proportion of the core.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "37615833",
"title": "2013 in science",
"section": "Section::::Events, discoveries and inventions.:July.\n",
"start_paragraph_id": 463,
"start_character": 0,
"end_paragraph_id": 463,
"end_character": 227,
"text": "BULLET::::- A radical new theory of the composition of the Earth's core is published. It proposes that the shape of the solid iron core is determined by the atomic structure of the different forms of iron of which it consists.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "17468893",
"title": "J. Marvin Herndon",
"section": "Section::::Theories on Earth.\n",
"start_paragraph_id": 3,
"start_character": 0,
"end_paragraph_id": 3,
"end_character": 277,
"text": "Herndon suggested that the composition of the inner core of Earth is nickel silicide; the conventional view is that it is iron–nickel alloy. More recently, he has suggested \"georeactor\" planetocentric nuclear fission reactors as energy sources for the gas giant outer planets.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "2939202",
"title": "Earth's inner core",
"section": "",
"start_paragraph_id": 2,
"start_character": 0,
"end_paragraph_id": 2,
"end_character": 457,
"text": "There are no samples of the Earth's core available for direct measurement, as there are for the Earth's mantle. The information that we have about it mostly comes from analysis of seismic waves and the magnetic field. The inner core is believed to be composed of an iron–nickel alloy with some other elements. The temperature at the inner core's surface is estimated to be approximately 9806 °F, which is about the temperature at the surface of the Sun.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "2939202",
"title": "Earth's inner core",
"section": "Section::::Age.\n",
"start_paragraph_id": 76,
"start_character": 0,
"end_paragraph_id": 76,
"end_character": 405,
"text": "Theories about the age of the core are necessarily part of theories of the history of Earth as a whole. This has been a long debated topic and is still under discussion at the present time. It is widely believed that the Earth's solid inner core formed out of an initially completely liquid core as the Earth cooled down. However, there is still no firm evidence about the time when this process started.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "2939202",
"title": "Earth's inner core",
"section": "Section::::Discovery.\n",
"start_paragraph_id": 4,
"start_character": 0,
"end_paragraph_id": 4,
"end_character": 747,
"text": "The Earth was discovered to have a solid inner core distinct from its molten outer core in 1936, by the Danish seismologist Inge Lehmann, who deduced its presence by studying seismograms from earthquakes in New Zealand. She observed that the seismic waves reflect off the boundary of the inner core and can be detected by sensitive seismographs on the Earth's surface. She inferred a radius of 1400 km for the inner core, not very far from the currently accepted value of 1221 km. In 1938, B. Gutenberg and C. Richter analyzed a more extensive set of data and estimated the thickness of the outer core as 1950 km with a steep but continuous 300 km thick transition to the inner core; implying a radius between 1230 and 1530 km for the inner core.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "47503",
"title": "Carbon cycle",
"section": "Section::::Deep carbon cycle.:Carbon in the core.\n",
"start_paragraph_id": 39,
"start_character": 0,
"end_paragraph_id": 39,
"end_character": 1239,
"text": "Although the presence of carbon in the Earth's core is well-constrained, recent studies suggest large inventories of carbon could be stored in this region. Shear (S) waves moving through the inner core travel at about fifty percent of the velocity expected for most iron-rich alloys. Because the core's composition is believed to be an alloy of crystalline iron and a small amount of nickel, this seismic anomaly indicates the presence of light elements, including carbon, in the core. In fact, studies using diamond anvil cells to replicate the conditions in the Earth's core indicate that iron carbide (FeC) matches the inner core's wave speed and density. Therefore, the iron carbide model could serve as evidence that the core holds as much as 67% of the Earth's carbon. Furthermore, another study found that in the pressure and temperature condition of the Earth's inner core, carbon dissolved in iron and formed a stable phase with the same FeC composition—albeit with a different structure from the one previously mentioned. In summary, although the amount of carbon potentially stored in the Earth's core is not known, recent studies indicate that the presence of iron carbides can explain some of the geophysical observations.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
9xyujs | Iberian Peninsula in Medieval times | [
{
"answer": "I'm so happy you asked this. I've been reading a book called Spain: [The Root and the Flower by John A. Crow](_URL_0_) that has been extremely interesting to me. I've really learned a lot and I'd highly suggest his work.\n\nIn short, Iberia was fundamentally different from the rest of Europe because, during the time of the artistic and intellectual renaissance in Italy, Flanders, and the rest of Europe, the kingdom of Castille (the closest thing to \"Spain\" there was back then) was still busily reconquering the Iberian peninsula from the \"Moorish invaders.\" I put that in quotes because by the time Spain reconquered Moorish cities like Seville, Granada, and Cadiz, the moors had been there for 600+ years, and had seriously better art, science, and math than the Spanish, who had spent the majority of the years 700-1400 in a constant state of conflict against the Moors. (okay- under some rulers, Moors, Jews, and Christians lived harmoniously, but for the most part of Spanish history, rulers used religion to unify the country, and if you weren't Christian, you were taxed, tortured, and kicked out of the country once the Spanish Inquisition started.)\n\nThe \"re-birth\" Spain experienced wasn't a rebirth of art and culture like that of Italy, but rather, a re-harnessing of the conquistador spirit that suddenly had no more Spanish land to conquer. In 1492, Isabelle of Castille banned the Jews and Moors, thus unifying Spain under the Catholic cross. Simultaneously, Christopher Columbus \"discovered\" America, a land full of gold and natives to convert. For Spain, the decision to invade was an obvious one. The Royals were more concerned with spreading their influence and increasing their wealth than with creating art and re-discovering the human experience. They weren't exactly a country of romantics.
\n\nHowever, I want to point out to you that during the golden age of Spain (which will be defined by different times depending on who you ask; for these purposes, we'll say 1492-1650) there were several of the world's first, and most significant, dramatic written stories. I'm sure you've heard of Don Quijote by Miguel Cervantes, published in 1615 (often called the first novel), but that story was actually preceded by an even older written story called \"La Celestina\", which was written in dramatic dialogue yet was never intended to be performed or told, but rather, read.\n\nAlso, you should check out El Greco and Diego Velazquez for paintings; they were both amazing artists who worked in the Spanish Courts.\n\nSource: Spain: [The Root and the Flower: An Interpretation of Spain and the Spanish People Third Edition](_URL_0_) by John A. Crow",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "503345",
"title": "High Middle Ages",
"section": "Section::::Historical events and politics.:Spain and Italy.\n",
"start_paragraph_id": 11,
"start_character": 0,
"end_paragraph_id": 11,
"end_character": 308,
"text": "Much of the Iberian peninsula had been occupied by the Moors after 711, although the northernmost portion was divided between several Christian states. In the 11th century, and again in the thirteenth, the Christian kingdoms of the north gradually drove the Muslims from central and most of southern Iberia.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "1977868",
"title": "Slavery in medieval Europe",
"section": "Section::::Slavery in Al-Andalus.\n",
"start_paragraph_id": 84,
"start_character": 0,
"end_paragraph_id": 84,
"end_character": 587,
"text": "The medieval Iberian Peninsula was the scene of episodic warfare among Muslims and Christians (although sometimes Muslims and Christians were allies). Periodic raiding expeditions were sent from Al-Andalus to ravage the Christian Iberian kingdoms, bringing back booty and people. For example, in a raid on Lisbon in 1189 the Almohad caliph Yaqub al-Mansur took 3,000 female and child captives, and his governor of Córdoba took 3,000 Christian slaves in a subsequent attack upon Silves in 1191; an offensive by Alfonso VIII of Castile in 1182 brought him over two-thousand Muslim slaves.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "13299",
"title": "History of Spain",
"section": "Section::::Gothic Hispania (5th–8th centuries).:Visigothic rule.\n",
"start_paragraph_id": 29,
"start_character": 0,
"end_paragraph_id": 29,
"end_character": 713,
"text": "The Visigothic Kingdom conquered all of Hispania and ruled it until the early 8th century, when the peninsula fell to the Muslim conquests. The Muslim state in Iberia came to be known as Al-Andalus. After a period of Muslim dominance, the medieval history of Spain is dominated by the long Christian \"Reconquista\" or \"reconquest\" of the Iberian Peninsula from Muslim rule. The Reconquista gathered momentum during the 12th century, leading to the establishment of the Christian kingdoms of Portugal, Aragon, Castile and Navarre and by 1250, had reduced Muslim control to the Emirate of Granada in the south-east of the peninsula. Muslim rule in Granada survived until 1492, when it fell to the Catholic Monarchs.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "52374650",
"title": "King",
"section": "Section::::History.\n",
"start_paragraph_id": 15,
"start_character": 0,
"end_paragraph_id": 15,
"end_character": 237,
"text": "BULLET::::- In the Iberian Peninsula, the remnants of the Visigothic Kingdom, the petty kingdoms of Asturias and Pamplona, expanded into the kingdom of Portugal, the Crown of Castile and the Crown of Aragon with the ongoing Reconquista.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "14603821",
"title": "History of Alicante",
"section": "Section::::Before the 20th Century.\n",
"start_paragraph_id": 6,
"start_character": 0,
"end_paragraph_id": 6,
"end_character": 384,
"text": "The Moors ruled southern and eastern Spain until the 11th century \"reconquista\" (reconquest). Alicante was finally taken in 1246 by the Castilian king Alfonso X, but it passed soon and definitely to the Kingdom of Valencia in 1298 with the Catalan King James II of Aragon. It gained the status of Royal Village (\"Vila Reial\") with representation in the medieval Valencian Parliament.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "22983160",
"title": "Christianity in the 8th century",
"section": "Section::::Christianity and Islam.:Iberian Peninsula and the Reconquista.\n",
"start_paragraph_id": 45,
"start_character": 0,
"end_paragraph_id": 45,
"end_character": 246,
"text": "Between 711–718 the Iberian peninsula had been conquered by Muslims in the Umayyad conquest of Hispania; between 722 and 1492 the Christian kingdoms that later would become Spain and Portugal reconquered it from the Moorish states of Al-Ándalus.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "26667",
"title": "Spain",
"section": "",
"start_paragraph_id": 4,
"start_character": 0,
"end_paragraph_id": 4,
"end_character": 765,
"text": "In the early eighth century the Visigothic Kingdom fell to the Moors of the Umayyad Islamic Caliphate, who arrived to rule most of the peninsula in the year 726, leaving only a handful of small Christian realms in the north and lasting up to seven centuries in the Kingdom of Granada. This led to many wars during a long reconquering period across the Iberian Peninsula, which led to the creation of Kingdom of Leon, Kingdom of Castille, Kingdom of Aragon and Kingdom of Navarre as the main Christian kingdoms to face the invasion. Following the Moorish conquest, Europeans began a gradual process of retaking the region known as the Reconquista, which by the late 15th century culminated in the emergence of Spain as a unified country under the Catholic Monarchs.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
3uz0ho | Were the British involved in instigating the 19th century revolutions against Spain in Latin America? | [
{
"answer": "Sorry for the long delay in getting to this, unfortunately it has been a hectic week. The British were definitely involved in the Latin American revolutions to a greater or lesser extent. It is worth noting that the British were fighting Napoleon during the early 19th century and had a giant army all ready to go and get involved in the Western hemisphere. Not only that, but following the American Revolution, the British adopted a trade policy that allowed them to trade with countries that they would not formally recognize. Furthermore, many of the leaders of the Revolutions were anglophiles who actively sought British aid and support. Simon Bolivar, of obvious fame, wanted the British to get involved and even suggested placing the newly free countries under the British wing, though not their direct control. Though it wasn't a revolution, the British pressure on King Joao was the direct cause of his declaring Brazil a sovereign kingdom, equal to Portugal and any other country, for that matter.\n\nSo that's the prelude, on the ground, The British Legion, which consisted of 800 soldiers on five ships, were sent to aid Bolivar in his revolution, those these were not official troops. Meaning that the government was not willing to officially endorse the Revolution, but were willing to help out of they could. Not all of those troops made it to Venezuela, and they were not particularly helpful, but they were sent.\n\nAfter the Revolutions, the British were very big on nation building, sending tons of ships to trade with the new nations. especially British manufactured goods for the various export goods of Latin America. Whether or not this was a good thing has been a huge debate in Latin American history, since some, building on extraction theory, have said that is simply changed them from a de jure colony to a de facto colony, but nevertheless, they were welcomed at the time. 
For a while the British was the largest shipping country in the western hemisphere, outdoing even the United States. \n\nSources: John Chasteen, *Americanos*, Leon Fink, *Sweatshops at Sea*",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "39937022",
"title": "International relations of the Great Powers (1814–1919)",
"section": "Section::::1814–1830: Restoration and reaction.:Spain loses its colonies.\n",
"start_paragraph_id": 18,
"start_character": 0,
"end_paragraph_id": 18,
"end_character": 1095,
"text": "Multiple revolutions in Latin America allowed the region to break free of the mother country. Repeated attempts to regain control failed, as Spain had no help from European powers. Indeed, Britain and the United States worked against Spain, enforcing the Monroe Doctrine. British merchants and bankers took a dominant role in Latin America. In 1824, the armies of generals José de San Martín of Argentina and Simón Bolívar of Venezuela defeated the last Spanish forces; the final defeat came at the Battle of Ayacucho in southern Peru. After the loss of its colonies, Spain played a minor role in international affairs. Spain kept Cuba, which repeatedly revolted in three wars of independence, culminating in the Cuban War of Independence. The United States demanded reforms from Spain, which Spain refused. The U.S. intervened by war in 1898. Winning easily, the U.S. took Cuba and gave it independence. The U.S. also took the Spanish colonies of the Philippines and Guam. Though it still had small colonial holdings in North Africa, Spain's role in international affairs was essentially over.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "490486",
"title": "Latin American wars of independence",
"section": "Section::::World reaction.:United States and Great Britain.\n",
"start_paragraph_id": 53,
"start_character": 0,
"end_paragraph_id": 53,
"end_character": 295,
"text": "Great Britain's trade with Latin America greatly expanded during the revolutionary period, which until then was restricted due to Spanish mercantilist trade policies. British pressure was sufficient to prevent Spain from attempting any serious reassertion of its control over its lost colonies.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "14458743",
"title": "Presidency of James Monroe",
"section": "Section::::Foreign affairs.:Latin America.:Monroe Doctrine.\n",
"start_paragraph_id": 50,
"start_character": 0,
"end_paragraph_id": 50,
"end_character": 837,
"text": "The British had a strong interest in ensuring the demise of Spanish colonialism, as the Spanish followed a mercantilist policy that imposed restrictions on trade between Spanish colonies and foreign powers. In October 1823, Ambassador Rush informed Secretary of State Adams that Foreign Secretary George Canning desired a joint declaration to deter any other power from intervening in Central and South America. Canning was motivated in part by the restoration of King Ferdinand VII of Spain by France. Britain feared that either France or the \"Holy Alliance\" of Austria, Prussia, and Russia would help Spain regain control of its colonies, and sought American cooperation in opposing such an intervention. Monroe and Adams deliberated the British proposal extensively, and Monroe conferred with former presidents Jefferson and Madison.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "61312670",
"title": "History of U.S. foreign policy, 1801–1829",
"section": "Section::::Latin America.:Monroe Doctrine.\n",
"start_paragraph_id": 62,
"start_character": 0,
"end_paragraph_id": 62,
"end_character": 837,
"text": "The British had a strong interest in ensuring the demise of Spanish colonialism, as the Spanish followed a mercantilist policy that imposed restrictions on trade between Spanish colonies and foreign powers. In October 1823, Ambassador Rush informed Secretary of State Adams that Foreign Secretary George Canning desired a joint declaration to deter any other power from intervening in Central and South America. Canning was motivated in part by the restoration of King Ferdinand VII of Spain by France. Britain feared that either France or the \"Holy Alliance\" of Austria, Prussia, and Russia would help Spain regain control of its colonies, and sought American cooperation in opposing such an intervention. Monroe and Adams deliberated the British proposal extensively, and Monroe conferred with former presidents Jefferson and Madison.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "57762757",
"title": "British intervention in Spanish American independence",
"section": "Section::::Background.\n",
"start_paragraph_id": 6,
"start_character": 0,
"end_paragraph_id": 6,
"end_character": 706,
"text": "The colonial status quo seemed to be guaranteed in 1809, when a pact was signed between the Spanish government and the United Kingdom, which established aid against the French invasion. This agreement was ambiguous with regard to South America, since the efforts of Bonaparte they felt they didn't need to invade Spanish territory there. A weakened Spain distracted and virtually cut off from her colonies, meant that insurrections there would flare up. The Royal Navy nevertheless were allowed to reach Spanish ports of both hemispheres. Thus, while the American revolutionaries rejected the French commissioners, and their adhesion to Napoleonic Spain, the British improved their own colonial interests.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "21193597",
"title": "Convention of Pardo",
"section": "Section::::Aftermath.\n",
"start_paragraph_id": 18,
"start_character": 0,
"end_paragraph_id": 18,
"end_character": 414,
"text": "Some issues were eventually resolved in the 1750 Treaty of Madrid, but illegal British trade with the Spanish colonies continued to flourish. The Spanish Empire in the Caribbean remained intact and victorious despite several English attempts to seize some of its heavily defended and fortified colonies. Spain would later use its trading routes and resources to help the rebels' cause in the American Revolution. \n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "197229",
"title": "George Canning",
"section": "Section::::Foreign Secretary and Leader of the House.:Latin America.\n",
"start_paragraph_id": 57,
"start_character": 0,
"end_paragraph_id": 57,
"end_character": 535,
"text": "Britain had a strong interest in ensuring the demise of Spanish colonialism, and to open the newly independent Latin American colonies to its trade. The Latin Americans received a certain amount of unofficial aid – arms and volunteers – from outside, but no outside official help at any stage from Britain or any other power. Britain refused to aid Spain and opposed any outside intervention on behalf of Spain by other powers. The Royal Navy was a decisive factor in the struggle for independence of certain Latin American countries.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
3401xs | the event horizon of a black hole | [
{
"answer": "Because any direction past the event horizon points inward. Space itself is warped so massively beyond the horizon that nothing can get out not only because the escape velocity is greater than the speed of light, but there is literally no direction that is \"out\".",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "151013",
"title": "T-symmetry",
"section": "Section::::Macroscopic phenomena: black holes.\n",
"start_paragraph_id": 20,
"start_character": 0,
"end_paragraph_id": 20,
"end_character": 471,
"text": "The event horizon of a black hole may be thought of as a surface moving outward at the local speed of light and is just on the edge between escaping and falling back. The event horizon of a white hole is a surface moving inward at the local speed of light and is just on the edge between being swept outward and succeeding in reaching the center. They are two different kinds of horizons—the horizon of a white hole is like the horizon of a black hole turned inside-out.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "2089093",
"title": "Kruskal–Szekeres coordinates",
"section": "Section::::Qualitative features of the Kruskal–Szekeres diagram.\n",
"start_paragraph_id": 28,
"start_character": 0,
"end_paragraph_id": 28,
"end_character": 1349,
"text": "The event horizons bounding the black hole and white hole interior regions are also a pair of straight lines at 45 degrees, reflecting the fact that a light ray emitted at the horizon in a radial direction (aimed outward in the case of the black hole, inward in the case of the white hole) would remain on the horizon forever. Thus the two black hole horizons coincide with the boundaries of the future light cone of an event at the center of the diagram (at \"T\"=\"X\"=0), while the two white hole horizons coincide with the boundaries of the past light cone of this same event. Any event inside the black hole interior region will have a future light cone that remains in this region (such that any world line within the event's future light cone will eventually hit the black hole singularity, which appears as a hyperbola bounded by the two black hole horizons), and any event inside the white hole interior region will have a past light cone that remains in this region (such that any world line within this past light cone must have originated in the white hole singularity, a hyperbola bounded by the two white hole horizons). Note that although the horizon looks as though it is an outward expanding cone, the area of this surface, given by \"r\" is just formula_46, a constant. I.e., these coordinates can be deceptive if care is not exercised.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "456715",
"title": "Kerr metric",
"section": "Section::::Overextreme Kerr solutions.\n",
"start_paragraph_id": 63,
"start_character": 0,
"end_paragraph_id": 63,
"end_character": 350,
"text": "The location of the event horizon is determined by the larger root of formula_58. When formula_59 (i.e. formula_60), there are no (real valued) solutions to this equation, and there is no event horizon. With no event horizons to hide it from the rest of the universe, the black hole ceases to be a black hole and will instead be a naked singularity.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "4650",
"title": "Black hole",
"section": "Section::::Properties and structure.:Event horizon.\n",
"start_paragraph_id": 37,
"start_character": 0,
"end_paragraph_id": 37,
"end_character": 466,
"text": "The defining feature of a black hole is the appearance of an event horizon—a boundary in spacetime through which matter and light can only pass inward towards the mass of the black hole. Nothing, not even light, can escape from inside the event horizon. The event horizon is referred to as such because if an event occurs within the boundary, information from that event cannot reach an outside observer, making it impossible to determine if such an event occurred.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "29320146",
"title": "Event horizon",
"section": "Section::::Event horizon of a black hole.\n",
"start_paragraph_id": 6,
"start_character": 0,
"end_paragraph_id": 6,
"end_character": 795,
"text": "One of the best-known examples of an event horizon derives from general relativity's description of a black hole, a celestial object so massive that no nearby matter or radiation can escape its gravitational field. Often, this is described as the boundary within which the black hole's escape velocity is greater than the speed of light. However, a more accurate description is that within this horizon, all lightlike paths (paths that light could take) and hence all paths in the forward light cones of particles within the horizon, are warped so as to fall farther into the hole. Once a particle is inside the horizon, moving into the hole is as inevitable as moving forward in time, and can actually be thought of as equivalent to doing so, depending on the spacetime coordinate system used.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "29320146",
"title": "Event horizon",
"section": "",
"start_paragraph_id": 3,
"start_character": 0,
"end_paragraph_id": 3,
"end_character": 446,
"text": "The black hole event horizon is teleological in nature, meaning that we need to know the entire future space-time of the universe to determine the current location of the horizon, which is essentially impossible. Because of the purely theoretical nature of the event horizon boundary, the traveling object does not necessarily experience strange effects and does, in fact, pass through the calculatory boundary in a finite amount of proper time.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "29320146",
"title": "Event horizon",
"section": "",
"start_paragraph_id": 1,
"start_character": 0,
"end_paragraph_id": 1,
"end_character": 252,
"text": "In astrophysics, an event horizon is a boundary beyond which events cannot affect an observer on the opposite side of it. An event horizon is most commonly associated with black holes, where gravitational forces are so strong that light cannot escape.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
368adm | what does it mean when a wound gets "infected?" | [
{
"answer": "It means that bacteria or fungus has set in the wound and has begun to grow off of the tissue in that area.",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "37220",
"title": "Infection",
"section": "Section::::Signs and symptoms.\n",
"start_paragraph_id": 43,
"start_character": 0,
"end_paragraph_id": 43,
"end_character": 307,
"text": "The symptoms of an infection depends on the type of disease. Some signs of infection affect the whole body generally, such as fatigue, loss of appetite, weight loss, fevers, night sweats, chills, aches and pains. Others are specific to individual body parts, such as skin rashes, coughing, or a runny nose.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "49338973",
"title": "Postoperative wounds",
"section": "Section::::Complications.:Infection.\n",
"start_paragraph_id": 10,
"start_character": 0,
"end_paragraph_id": 10,
"end_character": 485,
"text": "Infection will complicate healing of surgical wounds and is commonly observed. Most infections are present within the first 30 days after surgery. Surgical wounds can become infected by bacteria, regardless if the bacteria is already present on the patient's skin or if the bacteria is spread to the patient due to contact with infected individuals. Wound infections can be superficial (skin only), deep (muscle and tissue), or spread to the organ or space where the surgery occurred.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "2029171",
"title": "Chromoblastomycosis",
"section": "Section::::Presentation.\n",
"start_paragraph_id": 4,
"start_character": 0,
"end_paragraph_id": 4,
"end_character": 552,
"text": "Several complications may occur. Usually, the infection slowly spreads to the surrounding tissue while still remaining localized to the area around the original wound. However, sometimes the fungi may spread through the blood vessels or lymph vessels, producing metastatic lesions at distant sites. Another possibility is secondary infection with bacteria. This may lead to lymph stasis (obstruction of the lymph vessels) and elephantiasis. The nodules may become ulcerated, or multiple nodules may grow and coalesce, affecting a large area of a limb.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "42731481",
"title": "Chronic wound pain",
"section": "Section::::Infection.\n",
"start_paragraph_id": 25,
"start_character": 0,
"end_paragraph_id": 25,
"end_character": 412,
"text": "Infection results when the wound’s micro-organisms overcome the immune system’s natural defense to fight off replicating micro-organisms. Chronic wounds that persist for more than 12 weeks should be evaluated for delayed healing, increase exudate, foul odor, additional areas of skin breakdown or slough on the wound bed, and bright red discoloration of granulation tissue, which may be indicative of infection.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "37220",
"title": "Infection",
"section": "Section::::Pathophysiology.:Colonization.\n",
"start_paragraph_id": 51,
"start_character": 0,
"end_paragraph_id": 51,
"end_character": 1108,
"text": "Wound colonization refers to nonreplicating microorganisms within the wound, while in infected wounds, replicating organisms exist and tissue is injured. All multicellular organisms are colonized to some degree by extrinsic organisms, and the vast majority of these exist in either a mutualistic or commensal relationship with the host. An example of the former is the anaerobic bacteria species, which colonizes the mammalian colon, and an example of the latter are the various species of staphylococcus that exist on human skin. Neither of these colonizations are considered infections. The difference between an infection and a colonization is often only a matter of circumstance. Non-pathogenic organisms can become pathogenic given specific conditions, and even the most virulent organism requires certain circumstances to cause a compromising infection. Some colonizing bacteria, such as \"Corynebacteria sp.\" and \"viridans streptococci\", prevent the adhesion and colonization of pathogenic bacteria and thus have a symbiotic relationship with the host, preventing infection and speeding wound healing.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "875883",
"title": "Hospital-acquired infection",
"section": "",
"start_paragraph_id": 1,
"start_character": 0,
"end_paragraph_id": 1,
"end_character": 1148,
"text": "A hospital-acquired infection (HAI), also known as a nosocomial infection, is an infection that is acquired in a hospital or other health care facility. To emphasize both hospital and nonhospital settings, it is sometimes instead called a health care–associated infection (HAI or HCAI). Such an infection can be acquired in hospital, nursing home, rehabilitation facility, outpatient clinic, or other clinical settings. Infection is spread to the susceptible patient in the clinical setting by various means. Health care staff also spread infection, in addition to contaminated equipment, bed linens, or air droplets. The infection can originate from the outside environment, another infected patient, staff that may be infected, or in some cases, the source of the infection cannot be determined. In some cases the microorganism originates from the patient's own skin microbiota, becoming opportunistic after surgery or other procedures that compromise the protective skin barrier. Though the patient may have contracted the infection from their own skin, the infection is still considered nosocomial since it develops in the health care setting.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "12845091",
"title": "Angiostrongyliasis",
"section": "Section::::Symptoms.\n",
"start_paragraph_id": 5,
"start_character": 0,
"end_paragraph_id": 5,
"end_character": 231,
"text": "Infection first presents with severe abdominal pain, nausea, vomiting, and weakness, which gradually lessens and progresses to fever, and then to central nervous system (CNS) symptoms and severe headache and stiffness of the neck.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
4dtad2 | do painkillers (advil, tylenol, etc) reduce pain in the specific area that is hurting or do they affect the whole body but you only notice it woking on the area that is in pain? | [
{
"answer": "The pain killers you listed reduce inflammation in different ways so they would help calm down a throbbing injury where inflammatory response is strongest -- they act at the site of the pain. However, the effective anti-inflammation molecules are in your blood so its not like they can't affect more than one region. If you took Tylenol for a sore back and later stubbed your toe you wouldn't have to take more Tylenol for the new injury.\n\nPain killers like Vicodin act in the central nervous system and lower your emotional response to pain. These don't affect the inflamed area at all -- they just make your perception of pain less unpleasant.",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "215199",
"title": "Otitis media",
"section": "Section::::Management.\n",
"start_paragraph_id": 41,
"start_character": 0,
"end_paragraph_id": 41,
"end_character": 637,
"text": "Oral and topical pain killers are effective to treat the pain caused by otitis media. Oral agents include ibuprofen, paracetamol (acetaminophen), and opiates. Evidence for the combination over single agents is lacking. Topical agents shown to be effective include antipyrine and benzocaine ear drops. Decongestants and antihistamines, either nasal or oral, are not recommended due to the lack of benefit and concerns regarding side effects. Half of cases of ear pain in children resolve without treatment in three days and 90% resolve in seven or eight days. The use of steroids is not supported by the evidence for acute otitis media. \n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "33632441",
"title": "Psychoactive drug",
"section": "Section::::Uses.:Pain management.\n",
"start_paragraph_id": 23,
"start_character": 0,
"end_paragraph_id": 23,
"end_character": 544,
"text": "Psychoactive drugs are often prescribed to manage pain. The subjective experience of pain is primarily regulated by endogenous opioid peptides. Thus, pain can often be managed using psychoactives that operate on this neurotransmitter system, also known as opioid receptor agonists. This class of drugs can be highly addictive, and includes opiate narcotics, like morphine and codeine. NSAIDs, such as aspirin and ibuprofen, are also analgesics. These agents also reduce eicosanoid-mediated inflammation by inhibiting the enzyme cyclooxygenase.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "25161",
"title": "Idiopathic intracranial hypertension",
"section": "Section::::Treatment.:Medication.\n",
"start_paragraph_id": 35,
"start_character": 0,
"end_paragraph_id": 35,
"end_character": 292,
"text": "Various analgesics (painkillers) may be used in controlling the headaches of intracranial hypertension. In addition to conventional agents such as paracetamol, a low dose of the antidepressant amitriptyline or the anticonvulsant topiramate have shown some additional benefit for pain relief.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "54904694",
"title": "Pain management in children",
"section": "Section::::Management.:Acute pain treatment.\n",
"start_paragraph_id": 105,
"start_character": 0,
"end_paragraph_id": 105,
"end_character": 298,
"text": "The approach to acute pain should take into account the severity of the pain. Non-opioid analgesics, such as acetaminophen and NSAIDs, can be used alone to treat mild pain. For moderate to severe pain, it is optimal to use a combination of multiple agents, including opioid and non-opioid agents. \n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "38074",
"title": "Kidney stone disease",
"section": "Section::::Treatment.:Pain management.\n",
"start_paragraph_id": 95,
"start_character": 0,
"end_paragraph_id": 95,
"end_character": 305,
"text": "Management of pain often requires intravenous administration of NSAIDs or opioids. NSAIDs appear somewhat better than opioids or paracetamol in those with normal kidney function. Medications by mouth are often effective for less severe discomfort. The use of antispasmodics does not have further benefit.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "7930039",
"title": "Opiorphin",
"section": "",
"start_paragraph_id": 4,
"start_character": 0,
"end_paragraph_id": 4,
"end_character": 337,
"text": "Such action extends the duration of enkephalin effect where the natural pain killers are released physiologically in response to specific potentially painful stimuli, in contrast with administration of narcotics, which floods the entire body and causes many undesirable adverse reactions, including addiction liability and constipation.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "167683",
"title": "Mittelschmerz",
"section": "Section::::Treatment.\n",
"start_paragraph_id": 20,
"start_character": 0,
"end_paragraph_id": 20,
"end_character": 237,
"text": "The pain is not harmful and does not signify the presence of disease. No treatment is usually necessary. Pain relievers (analgesics) such as NSAIDS (Non-steroidal anti inflammatories) may be needed in cases of prolonged or intense pain.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
vephh | What benefits would the addition of a third eye bring? | [
{
"answer": "Eyes (and cameras) are basically devices for taking in a bunch of light, and sorting it out by angle. If you see something in one eye, you know that it's somewhere along a ray starting at your eye and going out in a particular direction, but you don't know where along the ray it is.\n\nAdding a second eye gives you another ray, starting in a different place, and pointing in a different direction. The intersection of these rays is a *unique* point in space; there's no more ambiguity.\n\nThe best way to improve depth perception (I'll be a bit more specific and define \"improve depth perception\" as \"reduce the uncertainty in range\") is to move the eyes further apart.\n\nA third camera can improve estimated positions of points in 3D space just by providing an additional measurement, but I don't think this is much of a problem for animals. The main benefit of a third eye would be, as pointed out in *300*, having another spare.",
"provenance": null
},
{
"answer": "Ask a [tuatara](_URL_0_). Juveniles have a third eye that scales over as they grow. Bummer.",
"provenance": null
},
{
"answer": "Are we brushing aside the liabilities as well? More chances for infections*, needs to be protected under the brow and in a socket, caloric cost, potential epileptic trigger, the added brain circuitry to process more visual stimuli and make sense of it (and the cost of that as well), etc. Who knows, but all these speculation based questions often ignore the cost and just focus on potential benefits which seems unfair. \n\n*Up until fairly recently, a bad infection was a death sentence. The less holes a human has then the better.",
"provenance": null
},
{
"answer": "It depends on where the eye is located. A third eye on the forehead, in the same orientation as the regular eyes, may not add much to vision, but an eye on the back of the head would provide increased awareness, and could even be adaptive in areas where large predators [still attack humans](_URL_0_).",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "311888",
"title": "Cornea",
"section": "",
"start_paragraph_id": 2,
"start_character": 0,
"end_paragraph_id": 2,
"end_character": 315,
"text": "While the cornea contributes most of the eye's focusing power, its focus is fixed. Accommodation (the refocusing of light to better view near objects) is accomplished by changing the geometry of the lens. Medical terms related to the cornea often start with the prefix \"\"kerat-\"\" from the Greek word κέρας, \"horn\".\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "44022646",
"title": "Stylophthalmine trait",
"section": "",
"start_paragraph_id": 3,
"start_character": 0,
"end_paragraph_id": 3,
"end_character": 216,
"text": "The work of Weihhs and Moser (1981) showed that the eye's elliptical shape allows a stylophthalmine to dramatically enlarge its field of view through rotation on the stalk, giving a much larger effective pupil size.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "1157448",
"title": "Accommodation reflex",
"section": "Section::::Pupil constriction and lens accommodation.\n",
"start_paragraph_id": 6,
"start_character": 0,
"end_paragraph_id": 6,
"end_character": 330,
"text": "During the accommodation reflex, the pupil constricts to increase the depth of focus of the eye by blocking the light scattered by the periphery of the cornea. The lens then increases its curvature to become more biconvex, thus increasing refractive power. The ciliary muscles are responsible for the lens accommodation response.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "95646",
"title": "Dioptre",
"section": "Section::::In vision correction.\n",
"start_paragraph_id": 7,
"start_character": 0,
"end_paragraph_id": 7,
"end_character": 647,
"text": "In humans, the total optical power of the relaxed eye is approximately 60 dioptres. The cornea accounts for approximately two-thirds of this refractive power (about 40 dioptres) and the crystalline lens contributes the remaining one-third (about 20 dioptres). In focusing, the ciliary muscle contracts to reduce the tension or stress transferred to the lens by the suspensory ligaments. This results in increased convexity of the lens which in turn increases the optical power of the eye. The amplitude of accommodation is about 15 to 20 dioptres in the very young, decreasing to about 10 dioptres at age 25, and to around 1 dioptre above age 50.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "19065640",
"title": "Adjustable-focus eyeglasses",
"section": "",
"start_paragraph_id": 1,
"start_character": 0,
"end_paragraph_id": 1,
"end_character": 244,
"text": "Adjustable focus eyeglasses are eyeglasses with an adjustable focal length. They compensate for refractive errors (such as presbyopia) by providing variable focusing, allowing users to adjust them for desired distance or prescription, or both.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "305465",
"title": "Lens (anatomy)",
"section": "Section::::Function.:Nourishment.\n",
"start_paragraph_id": 38,
"start_character": 0,
"end_paragraph_id": 38,
"end_character": 203,
"text": "The lens is metabolically active and requires nourishment in order to maintain its growth and transparency. Compared to other tissues in the eye, however, the lens has considerably lower energy demands.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "29701119",
"title": "Reduced eye",
"section": "",
"start_paragraph_id": 1,
"start_character": 0,
"end_paragraph_id": 1,
"end_character": 513,
"text": "The reduced eye is an idealized model of the optics of the human eye. Introduced by Franciscus Donders, the reduced eye model replaces the several refracting bodies of the eye (the cornea, lens, aqueous humor, and vitreous humor) with an ideal air/water interface surface that is located 20 mm from a model retina. This converts a system with six cardinal points (two focal points, two principal points and two nodal points) into one with three cardinal points (two focal points and one nodal point).\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
1ke4z8 | In WWI, did executions of soldiers suffering from PTSD or "shell shock" for the crime of desertion actually occur, and if so how common were they? | [
{
"answer": "There was one brief period of time during WWI when an army executing its deserters was common. This was June 1917, when the French army was hit with widespread mutinies, in the wake of the failed Nivelle offensive of May 1917. General Pétain replaced Nivelle as the commander of the French army and instituted the following reforms. He curtailed the supply of wine to the French soldiers, improved their leave policy, promised no more futile frontal attacks, and prohibited pacifist and Bolshevik literature from being distributed at the front. The French army admits to passing out 412 death sentences and claims 356 were commuted. The actual number of soldiers executed by the French army in June of 1917 is a matter of speculation, but the official French numbers are much too low. There were 170 major acts of mutiny, and some of their ringleaders were shot without a trial. Pétain was looking to weed out the soldiers that went to Paris on leave and went back to the front with a bale of Bolshevik or pacifist propaganda. The ringleaders the French shot were not just simple deserters. They were trying to get their entire regiment to join them in \"voting for peace with their legs\", to use the Bolshevik phrase that was popular in 1917. Source: D J Goodspeed, \"The German Wars\", p. 235",
"provenance": null
},
{
"answer": "This is actually a very complex question for reasons that I’ll try to outline. Please note that I will be using “shell shock” and PTSD interchangeably and will approach the question largely from the British perspective.\n\nEarly in the war, physicians began to handle cases of psychological breakdown, paralysis, and disturbing, uncontrolled physical behavior among men who had been in combat. C.S. Myers was one of the first to coin the term “shell shock,” as doctors assumed that artillery fire and the like had caused concussion-like damage and possibly physical lesions somewhere in the brain. Other doctors saw the same thing, but Myers discovered that many men experiencing these symptoms hadn't been near artillery bombardments and so he tried to withdraw the term, but it stuck. The condition was called “soldier’s heart” in the American Civil War and “combat fatigue” in the Second World War, and now we call it PTSD. It’s not until 1980 that PTSD gets into the medical handbook as a legitimate syndrome, which means that doctors can treat it and that those who suffer from it can receive a pension.\n\n* **Why was it so difficult to pin down a definition for “shell shock?”** \n\nThe medical profession of the time was conservative and relatively endogenous. Many of them thought that shell shock was a license for cowardice or a renunciation of “manliness,” which made it partly a problem of gender. It’s important to understand that although we usually think of PTSD as a psychological disability, it often manifests itself in physical ways. At the time, the conversion of mental symptoms to physical ones was called hysteria – a term reserved for women. 
This meant that men suffering from “hysteria” were transgressing Victorian gender norms, and we can see the stigma of this diagnosis clash with social conventions – only enlisted men were diagnosed with hysteria, while officers were diagnosed with “nervous breakdown.” The difference in diagnosis was paralleled by differences in treatment – treatment for enlisted men was largely punitive and coercive, while treatment for officers was based more on persuasion, sometimes through psychotherapy. Lest you think officers were in a better position, remember that the casualty rate for them was almost double that of enlisted men.\n\nDiagnosis and treatment were further complicated by the difficulty in identifying who legitimately had a problem and who was just trying to get away from the front. For some physicians, the solution was to make treatments more painful than returning to the front. For example, electric shock therapy could be used on mutes to try and stimulate the tongue so that they would make noise. In Austria, future Nobel Prize winner Julius Wagner Jauregg was accused of torturing his patients because he used electroconvulsive shock treatment to discourage malingering. In general, the war tore up the Hippocratic Oath because doctors became servants of armies that needed men to return to the front as soon as possible. Thus, the principal aim of doctors was to heal the injured enough to send them back to the front. This meant that if a soldier had a physical wound in addition to psychological symptoms, doctors would often treat the wound and then send the soldier back. Treatments were thus largely coercive in nature – there’s a famous French story in which an army doctor told a soldier “Yes, you are going to get this.” The enlisted man responded, “No, I’m not.” “Yes you are, I’m your officer, I gave you an order.” The exchange continued back and forth until the doctor moved to put the electrodes on his forehead and the enlisted man knocked him out. 
The soldier was then court-martialed, found guilty, fined one franc, and dismissed from the army without a war pension. This is the sort of thing that contributed to desertion, especially from men who felt they had no way out.\n\nAs you can see, there were numerous problems with the medical profession’s approach to the symptoms, diagnosis, and treatment of shell shock. Consequently, we really don’t know how many suffered from it. The British Army recorded 80,000 cases, but this likely underestimates the actual number. Regardless, we can be sure that a significant number of those that went through artillery barrages and trench warfare experienced something like it at some point. While the number is significant, it’s important to remember that a minority of soldiers suffered shell shock, and consequently it does fit into the spectrum of individual refusal. \n\n* **What about executions?**\n\nIn the late 90s there was a movement in England to apologize to those that refused to continue fighting in the war. There were 306 men that had been shot for cowardice or desertion and although the British government refused to make a formal apology, one of Tony Blair’s last acts as prime minister was to posthumously pardon them. The problem here should be obvious – it’s unclear how many were shell shocked and convicted of cowardice or desertion when they really were insane. There’s serious doubt as to how many men actually thought it through and decided that they couldn't fight anymore and were going to leave. \n\nIn the French case there was a terrible period at the beginning of the war when there were many summary executions. It’s a perfect example of what happened when officials and the professional army feared the effects that desertion might have on the rest of the men that had been mobilized at the start of the war. 
The French CiC, Joffre, felt that if offensives didn't proceed because people were “allowed to act as cowards,” the rest of the mobilized army, made up of millions of reservists, would be contaminated. The upshot was the summary executions of numerous soldiers. The French parliament set up a special tribunal in 1932 to reexamine many of the cases, and a number of those who had been executed were subsequently pardoned, some on grounds that they had originally been denied the right of appeal despite being citizens. There is an important distinction to make here – French soldiers had the vote and could appeal to their representatives for better legal treatment, while millions of British soldiers could not since they were subjects of the crown. By the end of the war, every capital sentence required the approval of the French president.\n\n* **Why do we think PTSD began with “shell shock?\"**\n\nWorld War I was the first to really introduce mental illness to mass society. The notion of traumatic memory that was brought back home and reappeared in literature helped normalize mental illness in the absence of consensus by the medical profession as to what it was. Although PTSD existed long before the First World War, the circumstances of the war pushed hundreds of thousands of men beyond the limits of human endurance. They faced weapons that denied any chance for heroism or courage or even military skill because the artillery weapons that caused 60 percent of all casualties were miles away from the battlefield. The enthusiastic men that signed up in 1914 were loyal, patriotic, and genuinely believed that they were fighting to defend their homeland. While they consented to national defense, it’s not clear that they consented to fight an industrialized assembly-line murderous war that emerged after 1914. Unlike previous wars, there was no beginning, middle, and end. Trench warfare was seen as a prelude to a breakout, but those breakouts never really occurred. 
Many men withdrew from the reality of the war into their own minds, and in this sense shell shock can be seen as a mutiny against the war. PTSD has numerous symptoms, but among them is the sense that the war the soldier lived had escaped from human control. This is why many PTSD sufferers are constantly reliving the trauma – the horror of combat never goes away and time has no hold over it. There’s a wonderful autobiography by Robert Graves called Good-Bye to All That; it’s one of the most famous World War I memoirs. Of course, the great irony is that he can’t say good-bye to all that - his life is constantly affected by his war experience, even 10 years after the war ended. There are so many great World War I memoirs, but I’d highly recommend the following:\n\n***The Secret Battle* by A.P. Herbert**\n \n***The Case of Sergeant Grischa* by Arnold Zweig**\n\nBoth deal with executions and the perversion of military justice during the war. I believe the Secret Battle is available online for free. You can knock it out in an afternoon. There are some other books I’d recommend that deal with shell shock but I’m not at home at the moment and need to find them. I would recommend ***The Legacy of the Great War*** and ***Remembering War***. Both are by Jay Winter, who specializes in historical memory and World War I. This is definitely the longest post I’ve ever written, but I’ll leave you with one final note: I was lucky enough to study under Jay Winter back in 2011, and he told me that when he was teaching at Cambridge in the late 70s and early 80s, he travelled to Warwick hospital to study some of the records of patients that had been institutionalized there during the war for shell shock. When he went there, he discovered that there were still several men that had been kept in the asylum without treatment since the Great War. 
Once enthusiastic young men, psychologically crippled by the war, had spent the next 70 years constantly reliving their trauma, locked away from a society that didn't understand what was wrong with them. I can’t think of a more horrible fate.",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "165119",
"title": "Cowardice",
"section": "Section::::Military law.\n",
"start_paragraph_id": 9,
"start_character": 0,
"end_paragraph_id": 9,
"end_character": 736,
"text": "Generally, cowardice was punishable by execution during World War I, and those who were caught were often court-martialed and, in many cases, executed by firing squad. British men executed for cowardice were often not commemorated on war memorials, and their families often did not receive benefits and had to endure social stigma. However, many decades later, those soldiers all received posthumous pardons in the Armed Forces Act 2006 and have been commemorated with the Shot at Dawn Memorial. Unlike British, French, German, and Soviet/Russian forces, the United States forces tried soldiers for cowardice but never followed through with execution while German commanders were less inclined to use execution as a form of punishment.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "21759521",
"title": "Combat Stress (charitable organisation)",
"section": "Section::::History before 1919.:World War I.\n",
"start_paragraph_id": 14,
"start_character": 0,
"end_paragraph_id": 14,
"end_character": 314,
"text": "During the war, 306 British soldiers were executed for cowardice; many of whom were victims of shell shock. On 7 November 2006, the Government of the United Kingdom gave them all a posthumous conditional pardon. The Shot at Dawn Memorial at the National Memorial Arboretum in Staffordshire commemorates these men.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "367273",
"title": "International Military Tribunal for the Far East",
"section": "Section::::Aftermath.:Parole for war criminals movement.\n",
"start_paragraph_id": 114,
"start_character": 0,
"end_paragraph_id": 114,
"end_character": 494,
"text": "In 1950, after most Allied war crimes trials had ended, thousands of convicted war criminals sat in prisons across Asia and Europe, detained in the countries where they had been convicted. Some executions had not yet been carried out, as Allied courts agreed to reexamine their verdicts. Sentences were reduced in some cases, and a system of parole was instituted, but without relinquishing control over the fate of the imprisoned (even after Japan and Germany had regained their sovereignty).\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "24611478",
"title": "Nieuw Vosseveld",
"section": "Section::::History.:Camp in post-war times.\n",
"start_paragraph_id": 9,
"start_character": 0,
"end_paragraph_id": 9,
"end_character": 402,
"text": "After World War II, the camp was first used as a prison for Germans and \"wrong\" people: Dutch SS-men, (suspected) collaborators and/or their children, and war criminals. At first, they were guarded by Allied soldiers, but shortly after by the Dutch. As a parliamentary enquiry (the Committee A.M. Baron Tuyll van Serooskerken) showed in 1950, this resulted in maltreatment and even summary executions.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "63764",
"title": "Dysentery",
"section": "Section::::Notable cases.\n",
"start_paragraph_id": 53,
"start_character": 0,
"end_paragraph_id": 53,
"end_character": 486,
"text": "BULLET::::- 1942 – The Selarang Barracks incident in the summer of 1942 during World War II involved the forced crowding of 17,000 Anglo-Australian prisoners-of-war (POWs) by their Japanese captors in the areas around the barracks square for nearly five days with little water and no sanitation after the Selarang Barracks POWs refused to sign a pledge not to escape. The incident ended with the surrender of the Australian commanders due to the spreading of dysentery among their men.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "10772574",
"title": "Self-inflicted wound",
"section": "Section::::Punishments.\n",
"start_paragraph_id": 13,
"start_character": 0,
"end_paragraph_id": 13,
"end_character": 372,
"text": "In the British army during World War I, the maximum penalty for a self-inflicted wound (\"Wilfully maiming himself with intent to render himself unfit for service\" as it was described) under Section 18 of the Army Act 1881 was imprisonment, rather than capital punishment. In the British Army, some 3,894 men were found guilty, and were sent to prison for lengthy periods.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "53565228",
"title": "Civil Resettlement Units",
"section": "Section::::Background.\n",
"start_paragraph_id": 3,
"start_character": 0,
"end_paragraph_id": 3,
"end_character": 920,
"text": "During the First World War and shortly afterwards, many psychiatrists including Sigmund Freud assumed that soldiers who had been captured were 'virtually immune' from psychological harm because they were at a safe distance from battle. This was linked with the belief that shell shock might be a way of escaping from danger. Around the time of the Second World War, this view began to change. Psychiatrists and psychologists such as Millais Culpin and Adolf Vischer argued that POWs were at risk of mental harm, and Vischer coined the term \"barbed-wire disease\" to describe this condition. Psychiatrists had been keen to look into these ideas, and the outbreak of war gave them the opportunity to conduct research. The 1929 Geneva Convention had changed how POWs were dealt with by setting forth rules for prisoner exchange which made it possible for POWs to be returned to their home nations before the end of the war.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
54hbne | difference of chinese dialects and written languages | [
{
"answer": "The written language obviously shares a considerable history with the spoken language. But unlike this language I'm typing in here, with these letters, the Chinese written language does not fundamentally express the way words sound when spoken. It has symbols for different words; over its history, as they made symbols for new things by creating compounds of existing symbols, they would often build a compound using one symbol for the sound and one for the meaning, or some variation on that. But it's not a rule, it's just a history of how those symbols came to be. They don't fundamentally say how they're spoken at all. \n\nThis means that there is a gap between the written language and the spoken language in a way that doesn't really exist if you have a language that's designed to express how words sound. There are other countries that aren't China that have taken the Chinese writing system, and they can write out sentences that you can understand if you know the Chinese written language even if you don't know the spoken language of the area. However, the design of making sentences in the Chinese written language is naturally very interrelated with how things are structured, the grammar, of the Chinese spoken language. \n\nThey're not strictly the same language, and they're not wholly independent. Speaking Chinese and reading/writing Chinese is probably more akin to knowing how to program in two languages than being fluent in two entirely separate languages. The underlying logic is there, even if everything has different names; you need a new vocabulary. ",
"provenance": null
},
{
"answer": "**Spoken**\n\nThere are (I hate this word) actually many, many more dialects of Chinese than just Mandarin and Cantonese, although it's true that those are the two largest and most influential. Mandarin in particular enjoys a strong legal status as the official language of the People's Republic of China, including as the language of instruction in schools. \n\nIf you go anywhere in China, then it is likely that the local people where you live will have their own language, whether it is the language of the province, that area within the province, or even just a particular village. Some of these dialects are basically just Mandarin with an accent; others are completely mutually unintelligible with Mandarin.\n\nNevertheless, because of the strong legal status of Mandarin, with the notable exception of the elderly, the very poor, and those living in very far-flung regions (particularly areas of Tibet and Xinjiang, China's far northwestern province) virtually everyone can at a bare minimum understand Mandarin and (in my experience) definitely over 90% can speak it. If you're talking about young, educated people in an urban center then it's > 99%, although (not totally dissimilar to Britain) there's a certain preoccupation with accents and a rich and often self-deprecating humor that surrounds less-than-standard pronunciation. In general (including in Guangzhou) people will respond to you in whatever language you use to speak to them.\n\nComparing Mandarin and Cantonese specifically, the two are not really mutually intelligible. 
There is a limited amount of vocabulary that you might be able to guess at from one or the other and/or go \"oh!\" if it were explained to you, but overall the tones, vocab, and even to a certain extent grammar are different.\n\nPeople in Guangdong are nevertheless quite proud of Cantonese, which also enjoys a degree of cachet as a commonly-used language in relatively wealthy and culturally influential Hong Kong and among overseas Chinese, many of whom have family origins in southeast China. Because of this there's also definitely a corresponding degree of language politics that takes place in China and particularly Hong Kong about the official statuses of the two languages that can occasionally become heated.\n\n**Written**\n\nToday in Chinese there are \"simplified\" characters and \"traditional\" characters. Simplified characters are used throughout the PRC, \"traditional\" characters mostly in Taiwan. The idea of simplifying the writing system goes back a long way, and the work to create the current set of simplified characters was (somewhat ironically) begun by the Chinese Nationalist Party who (after going through a lot of changes) eventually moved to Taiwan and stuck with traditional characters.\n\nYou can think of the two writing systems as basically being different fonts (albeit sometimes *very* different) in the sense that there's a one-to-one correspondence between the two sets. It's easy with software to transcribe the one into the other.\n\nEither set of characters can be used to write almost all dialects of Chinese, although you will find the odd word in dialect for which there simply is no character, and this or that dialect might commonly use a character that is rare in other dialects.\n\nThere's a lot more to Chinese than that and the history of the language is pretty interesting, but that's a broad overview of the questions you were asking.\n",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "91225",
"title": "Written Chinese",
"section": "Section::::Function.\n",
"start_paragraph_id": 42,
"start_character": 0,
"end_paragraph_id": 42,
"end_character": 1466,
"text": "Chinese languages and dialects vary by not only pronunciation, but also, to a lesser extent, vocabulary and grammar. Modern written Chinese, which replaced Classical Chinese as the written standard as an indirect result of the May Fourth Movement of 1919, is not technically bound to any single variety; however, it most nearly represents the vocabulary and syntax of Mandarin, by far the most widespread Chinese dialectal family in terms of both geographical area and number of speakers. This version of written Chinese is called Vernacular Chinese, or 白話/白话 \"báihuà\" (literally, \"plain speech\"). Despite its ties to the dominant Mandarin language, Vernacular Chinese also permits some communication between people of different dialects, limited by the fact that Vernacular Chinese expressions are often ungrammatical or unidiomatic in non-Mandarin dialects. This role may not differ substantially from the role of other linguae francae, such as Latin: For those trained in written Chinese, it serves as a common medium; for those untrained in it, the graphic nature of the characters is in general no aid to common understanding (characters such as \"one\" notwithstanding). In this regard, Chinese characters may be considered a large and inefficient phonetic script. However, Ghil'ad Zuckermann’s exploration of phono-semantic matching in Standard Chinese concludes that the Chinese writing system is multifunctional, conveying both semantic and phonetic content.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "1197992",
"title": "Chinese Wikipedia",
"section": "Section::::Wikipedia in other varieties of Chinese.\n",
"start_paragraph_id": 39,
"start_character": 0,
"end_paragraph_id": 39,
"end_character": 1469,
"text": "The varieties of Chinese are a diverse group encompassing many regional varieties, most of which are mutually unintelligible and often divided up into several larger dialect groups, such as Wu (including Shanghainese and Suzhounese), Min Nan (of which Taiwanese is a notable dialect), and Cantonese. In regions that speak non-Mandarin languages or regional Mandarin dialects, the Vernacular Chinese standard largely corresponding to Standard Chinese is nevertheless used exclusively as the Chinese written standard; this written standard differs sharply from the local dialects in vocabulary and grammar, and is often read in local pronunciation while preserving the vocabulary and grammar of Standard Chinese. After the founding of Wikipedia, many users of non-Mandarin Chinese varieties began to ask for the right to have Wikipedia editions in non-Mandarin varieties as well. However, they also met with significant opposition, based on the fact that Mandarin-based Vernacular Chinese is the only form used in scholarly or academic contexts. Some also proposed the implementation of an automatic conversion system similar to that between Simplified and Traditional Chinese; however, others pointed out that although conversion between Simplified and Traditional Chinese consists mainly of glyph and sometimes vocabulary substitutions, different regional varieties of Chinese differ so sharply in grammar, syntax, and semantics that it was unrealistic to implement an automatic conversion program.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "5751",
"title": "Chinese language",
"section": "",
"start_paragraph_id": 2,
"start_character": 0,
"end_paragraph_id": 2,
"end_character": 909,
"text": "The varieties of Chinese are usually described by native speakers as dialects of a single Chinese language, but linguists note that they are as diverse as a language family. The internal diversity of Chinese has been likened to that of the Romance languages, but may be even more varied. There are between 7 and 13 main regional groups of Chinese (depending on classification scheme), of which the most spoken by far is Mandarin (about 800 million, e.g. Southwestern Mandarin), followed by Min (75 million, e.g. Southern Min), Wu (74 million, e.g. Shanghainese), Yue (68 million, e.g. Cantonese), etc. Most of these groups are mutually unintelligible, and even dialect groups within Min Chinese may not be mutually intelligible. Some, however, like Xiang and certain Southwest Mandarin dialects, may share common terms and a certain degree of intelligibility. All varieties of Chinese are tonal and analytic.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "152827",
"title": "Han Chinese",
"section": "Section::::Culture.:Language.\n",
"start_paragraph_id": 56,
"start_character": 0,
"end_paragraph_id": 56,
"end_character": 730,
"text": "During the early 20th century, written vernacular Chinese based on Mandarin dialects, which had been developing for several centuries, was standardized and adopted to replace literary Chinese. While written vernacular forms of other varieties of Chinese exist, such as written Cantonese, written Chinese based on Mandarin is widely understood by speakers of all varieties and has taken up the dominant position among written forms, formerly occupied by literary Chinese. Thus, although residents of different regions would not necessarily understand each other's speech, they generally share a common written language, Standard Written Chinese and Literary Chinese (these two writing styles can merge into a 半白半文 writing style). \n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "243875",
"title": "Languages of China",
"section": "",
"start_paragraph_id": 1,
"start_character": 0,
"end_paragraph_id": 1,
"end_character": 940,
"text": "The languages of China are the languages that are spoken in China. The predominant language in China, which is divided into seven major language groups (classified as dialects by the Chinese government for political reasons), is known as \"Hanyu\" () and its study is considered a distinct academic discipline in China. \"Hanyu\", or Han language, spans eight primary varieties, that differ from each other morphologically and phonetically to such a degree that they will often be mutually unintelligible, similarly to English and German or Danish. The languages most studied and supported by the state include Chinese, Mongolian, Tibetan, Uyghur and Zhuang. China has 302 living languages listed at Ethnologue. According to the 2010 edition of the \"Nationalencyklopedin\", 955 million out of China's then-population of 1.34 billion spoke some variety of Mandarin Chinese as their first language, accounting for 71% of the country's population.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "19359",
"title": "Mandarin Chinese",
"section": "Section::::History.\n",
"start_paragraph_id": 16,
"start_character": 0,
"end_paragraph_id": 16,
"end_character": 498,
"text": "The hundreds of modern local varieties of Chinese developed from regional variants of Old Chinese and Middle Chinese. Traditionally, seven major groups of dialects have been recognized. Aside from Mandarin, the other six are Wu, Gan, and Xiang in central China, and Min, Hakka, and Yue on the southeast coast. The \"Language Atlas of China\" (1987) distinguishes three further groups: Jin (split from Mandarin), Huizhou in the Huizhou region of Anhui and Zhejiang, and Pinghua in Guangxi and Yunnan.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "321538",
"title": "Chinese grammar",
"section": "",
"start_paragraph_id": 1,
"start_character": 0,
"end_paragraph_id": 1,
"end_character": 417,
"text": "The grammar of Standard Chinese or Mandarin shares many features with other varieties of Chinese. The language almost entirely lacks inflection, so that words typically have only one grammatical form. Categories such as number (singular or plural) and verb tense are frequently not expressed by any grammatical means, although there are several particles that serve to express verbal aspect, and to some extent mood.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
fj1jym | Rules Roundtable VI: No Historical "What-If?" Questions or Counterfactuals | [
{
"answer": "What-ifs are *really* popular questions sometimes, but the thing is, with a little work most 'what if' questions can actually be turned into really good, really interesting questions that match the rules. It's all about the angle and perspective you have when asking the question. If you ever need a bit of help phrasing things, let us know!",
"provenance": null
},
{
"answer": "What is the difference between /r/HistoricalWhatIf and /r/HistoryWhatIf?",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "2783063",
"title": "Linguistic modality",
"section": "Section::::Modal categories.:Realis vs. irrealis.\n",
"start_paragraph_id": 34,
"start_character": 0,
"end_paragraph_id": 34,
"end_character": 817,
"text": "Counterfactuals refer to things that are contrary to the actual situation. In English, counterfactuals can be expressed implicitly in \"if\"-clauses by using a tense form that normally refers to a time prior to the time actually semantically referred to in the \"if\"-clause. For example, \"If I knew that, I wouldn't have to ask\" contains the counterfactual \"If I knew\", which refers to the present tense despite the form of the verb, and which denies the proposition \"I know that\". This contrasts with the construction \"If I know that...\", which is not a counterfactual because it means that maybe I know it and maybe I don't (or maybe I will know it, and maybe I will not). Likewise, \"If I had known that, I would have gone there\" contains the counterfactual \"If I had known\", denying the proposition that I had known.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "35736414",
"title": "Fréchet inequalities",
"section": "",
"start_paragraph_id": 1,
"start_character": 0,
"end_paragraph_id": 1,
"end_character": 793,
"text": "In probabilistic logic, the Fréchet inequalities, also known as the Boole–Fréchet inequalities, are rules implicit in the work of George Boole and explicitly derived by Maurice Fréchet that govern the combination of probabilities about logical propositions or events logically linked together in conjunctions (AND operations) or disjunctions (OR operations) as in Boolean expressions or fault or event trees common in risk assessments, engineering design and artificial intelligence. These inequalities can be considered rules about how to bound calculations involving probabilities without assuming independence or, indeed, without making any dependence assumptions whatsoever. The Fréchet inequalities are closely related to the Boole–Bonferroni–Fréchet inequalities, and to Fréchet bounds.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "491578",
"title": "Counterfactual conditional",
"section": "",
"start_paragraph_id": 1,
"start_character": 0,
"end_paragraph_id": 1,
"end_character": 546,
"text": "A counterfactual conditional (abbreviated ), is a conditional with a false if-clause. The term \"counterfactual conditional\" was coined by Nelson Goodman in 1947, extending Roderick Chisholm's (1946) notion of a \"contrary-to-fact conditional\". The study of counterfactual speculation has increasingly engaged the interest of scholars in a wide range of domains such as philosophy, human geography, psychology, cognitive psychology, history, political science, economics, social psychology, law, organizational theory, marketing, and epidemiology.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "5915049",
"title": "Lindley's paradox",
"section": "",
"start_paragraph_id": 1,
"start_character": 0,
"end_paragraph_id": 1,
"end_character": 432,
"text": "Lindley's paradox is a counterintuitive situation in statistics in which the Bayesian and frequentist approaches to a hypothesis testing problem give different results for certain choices of the prior distribution. The problem of the disagreement between the two approaches was discussed in Harold Jeffreys' 1939 textbook; it became known as Lindley's paradox after Dennis Lindley called the disagreement a paradox in a 1957 paper.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "17286936",
"title": "Counterfactual thinking",
"section": "Section::::Overview.\n",
"start_paragraph_id": 3,
"start_character": 0,
"end_paragraph_id": 3,
"end_character": 819,
"text": "\"The term \"Counterfactual\"\" is defined by the Merriam-Webster Dictionary as contrary to the facts. A counterfactual thought occurs when a person modifies a factual prior event and then assesses the consequences of that change. A person may imagine how an outcome could have turned out differently, if the antecedents that led to that event were different. For example, a person may reflect upon how a car accident could have turned out by imagining how some of the factors could have been different, for example, \"If only I hadn't been speeding...\". These alternatives can be better or worse than the actual situation, and in turn give improved or more disastrous possible outcomes, \"If only I hadn't been speeding, my car wouldn't have been wrecked\" or \"If I hadn't been wearing a seatbelt, I would have been killed\".\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "1409541",
"title": "Berkson's paradox",
"section": "",
"start_paragraph_id": 1,
"start_character": 0,
"end_paragraph_id": 1,
"end_character": 437,
"text": "Berkson's paradox also known as Berkson's bias or Berkson's fallacy is a result in conditional probability and statistics which is often found to be counterintuitive, and hence a veridical paradox. It is a complicating factor arising in statistical tests of proportions. Specifically, it arises when there is an ascertainment bias inherent in a study design. The effect is related to the explaining away phenomenon in Bayesian networks.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "2274504",
"title": "French verbs",
"section": "Section::::Tenses and aspects.:Tenses and aspects of the subjunctive mood.:Uses.\n",
"start_paragraph_id": 49,
"start_character": 0,
"end_paragraph_id": 49,
"end_character": 539,
"text": "Finally, as in English, counterfactual conditions in the past are expressed by backshifting the apparent time reference. In English this backshifted form is called the pluperfect subjunctive, and unless it is expressed in inverted form it is identical in form to the pluperfect indicative; it is called subjunctive because of the change in implied time of action. In French, however, there is a distinction in form between the seldom used pluperfect subjunctive and the pluperfect indicative, which is used in this situation. For example,\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
18c4s3 | Are all sperm Clones? It doesn't matter Which sperm got to the egg, i was going to be me no matter what correct? (contemplating the miracle of my existence) | [
{
"answer": "First, I suggest reading about meiosis. \n\nHumans are diploid, meaning we have two sets of each chromosome in our cells, one from each parent. So, your parents have a set of 23 from each of their parents (your grandparents). Sperm (and eggs) form through meiosis, which really just picks a random chromosome from each of the 23 sets. Chromosome 1, 3, 9, 10, 20, etc. could be from your dad's dad, but the rest could be from your dad's mom. Each single sperm has this random assortment.\n\nCombine that with the same thing happening in the egg, and you get a lot of different combinations.\n\nEdit: This gets even more complicated with recombination...but this should answer your basic question. ",
"provenance": null
},
{
"answer": "No, they are not clones at all.\n\nEach sperm represents a completely different shuffled assortment of your dad's genetic material.\n\nBecause of recombination, there are millions and millions of possible sperm and eggs that your parents can create.\n\nIT IS NOT TRUE that your parents are giving you a set of fully intact chromosomes, as suggested by hobo & abbe (\"Chromosome 1, 3, 9, 10, 20, etc. could be from your dad's dad, but the rest could be from your dad's mom\") These answers are ignoring the process of recombination.\n\nIt is possible to sometimes share a chromosome with only one grandparent or the other, but the most likely outcome is that the child will inherit a recombined chromosome containing DNA from both the child's paternal grandfather and paternal grandmother (or maternal grandfather and maternal grandmother if we're talking about the egg rather than the sperm).\n\n[Graph One](_URL_3_) All of this child's maternal chromosomes contain both grandparents' DNA -- she shares some of her DNA with her grandfather (segments shown in green) and some with her grandmother (not pictured but her DNA would fill in the grey gaps). For example, on chromosome 10, she shares roughly the first half of the chromosome with her grandfather, but then a recombination event took place and she shares the second half with her grandmother.\n\n[Graph Two](_URL_1_) This child is a bit different, she does have several chromosomes that she shares only with one grandparent. She shares no DNA with her maternal grandmother (pictured in blue) on chromosomes 5, 15, 16, 18, and 22, meaning she inherited those chromosomes entirely from the maternal grandfather.\n\nThe places in the genome where these recombination events occur are not fixed (although it's true that [recombination does occur more often](_URL_2_) in certain spots).\n\nThis is why there are sooooo many possible combinations of sperm that your dad can make -- [there are roughly 27.6 recombinations per paternal meiosis](_URL_0_) and the location/size etc. of these recombination events is highly variable from one gamete to another.\n\nTL;DR Genetically, you really are a unique little snowflake.",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "3315213",
"title": "Sex differences in human physiology",
"section": "Section::::Sex determination and differentiation.\n",
"start_paragraph_id": 7,
"start_character": 0,
"end_paragraph_id": 7,
"end_character": 301,
"text": "A human egg contains only one set of chromosomes (23) and is said to be haploid. Sperm also have only one set of 23 chromosomes and are therefore haploid. When an egg and sperm fuse at fertilization, the two sets of chromosomes come together to form a unique \"diploid\" individual with 46 chromosomes.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "1751707",
"title": "Christian views on cloning",
"section": "Section::::Dignity of the Human Person.:Defining Dignity.\n",
"start_paragraph_id": 15,
"start_character": 0,
"end_paragraph_id": 15,
"end_character": 767,
"text": "The belief in intrinsic dignity, nonetheless, leads the Christians who hold this belief to also argue that if the soul enters the body at the moment when the sperm and the egg unite, producing cloned zygotes that are unlikely to survive is equivalent to murder. Therefore, if one believes, as Catholics do, that the zygotes have souls and are therefore human, in the words of John Paul II, \"regardless of the objective for which it was done, human embryonic cloning conflicts with the international legal norms that protect human dignity.\" Some Christian conservatives even express concern that cloned embryos would have no soul, since it is, in their view, born outside of God's parameters, as its creation is in a laboratory setting rather than natural conception.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "36532474",
"title": "XXXY syndrome",
"section": "Section::::Cause.\n",
"start_paragraph_id": 15,
"start_character": 0,
"end_paragraph_id": 15,
"end_character": 506,
"text": "In the case where the sperm is the genetic cause of 48, XXXY syndrome, the sperm would have to contain two X chromosomes and one Y chromosome. This would be caused by two non-disjunction events in spermatogenesis, both meiosis I and meiosis II. The duplicated X chromosome in the sperm would have to fail to separate in both meiosis I and meiosis II, and the X and Y chromosomes would have to be in the same sperm. Then the XXY sperm would fertilize a normal oocyte to make a XXXY zygote.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "36202",
"title": "5α-Reductase deficiency",
"section": "Section::::Signs and symptoms.:Fertility.\n",
"start_paragraph_id": 9,
"start_character": 0,
"end_paragraph_id": 9,
"end_character": 314,
"text": "Since the gonad tissue develops into testes rather than ovaries, they are thus unable to create ova but may be able to create sperm. Male fertility can still be possible if viable sperm is present in the testes and is able to be extracted. In general, individuals with 5-ARD are capable of producing viable sperm.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "18899714",
"title": "Sperm sorting",
"section": "Section::::Applications.\n",
"start_paragraph_id": 14,
"start_character": 0,
"end_paragraph_id": 14,
"end_character": 791,
"text": "Sperm undergoes a process of natural selection when millions of sperm enter vagina but only few reach the egg cell and then only one is usually allowed to fertilize it. The sperm is selected not only by its highest motility but also by other factors such as DNA integrity, production of reactive oxygen species and viability. This selection is largely circumvented in case of in-vitro fertilization which leads to higher incidence of birth defects associated with assisted reproductive techniques. Egg cells are often fertilized by sperm which would have low chance of fertilizing it in natural conditions. Sperm sorting could thus be used to decrease risks associated with assisted reproduction. Additionally, there is ongoing debate about using sperm sorting for choosing the child's sex.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "6633670",
"title": "Hamster zona-free ovum test",
"section": "Section::::Procedure.\n",
"start_paragraph_id": 6,
"start_character": 0,
"end_paragraph_id": 6,
"end_character": 351,
"text": "Contrary to some assertions by medical professionals, if the human sperm penetrates the hamster egg, a hybrid embryo is indeed created. The resulting hamster-hybrid is known as a humster. These embryos are typically destroyed before they divide into two cells and are never known to have been implanted in either a hamster or human uterus to develop.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "10813956",
"title": "Female sperm",
"section": "",
"start_paragraph_id": 5,
"start_character": 0,
"end_paragraph_id": 5,
"end_character": 442,
"text": "Since the late 1980s, scientists have explored how to produce sperm where all of the chromosomes come from a female donor. In the late 1990s, this concept became a partial reality when scientists in Japan developed chicken female sperm by injecting bone marrow stem cells from a female chicken into a rooster's testicles. This technique proved to fall below expectations, however, and has not yet been successfully adapted for use on humans.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
xqo79 | Why do people seem to make mistakes more often when in front of people? | [
{
"answer": "If you're longboarding and showing people (or not, I guess), it's likely that you start (subconsciously or not) focusing on doing it \"properly\" by thinking through the steps you take one by one, instead of focusing on the whole--which would be the same reason most sports coaches become worse at their sport when they start teaching.\n\nThat, or self-consciousness.",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "11865833",
"title": "SPEAKING",
"section": "Section::::Rich Points.:Mistake.\n",
"start_paragraph_id": 36,
"start_character": 0,
"end_paragraph_id": 36,
"end_character": 654,
"text": "Mistakes in conversation occur when participants in the conversation are operating with different implicit rules and expectations for the SPEAKING model. Mistakes often results from disagreements about inclusion of participants, mismatched ends, unexpected act sequences, keys or instrumentalities. In general mistakes and conflicts arise when there is a deviation in the conversation from the norm. In some genres, such as gossip, rapid turn-taking and interrupting is not only accepted, but expected. If one participant is not active in this type of speech they may come across as ambivalent to the conversation; this would be an example of a mistake.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "21922177",
"title": "Grounding in communication",
"section": "Section::::Consequences of a lack of common ground.:Multiple Ignorances.\n",
"start_paragraph_id": 69,
"start_character": 0,
"end_paragraph_id": 69,
"end_character": 499,
"text": "People base their decisions and contribution based on their own point of view. When there is a lack of common ground in the points of views of individuals within a team, misunderstandings occur. Sometimes these misunderstandings remain undetected, which means that decisions would be made based on ignorant or misinformed point of views, which in turn lead to multiple ignorances. The team may not be able to find the right solution because it does not have a correct representation of the problem.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "1490105",
"title": "How to Win Friends and Influence People",
"section": "Section::::Major sections and points.:Be a Leader: How to Change People Without Giving Offense or Arousing Resentment.\n",
"start_paragraph_id": 45,
"start_character": 0,
"end_paragraph_id": 45,
"end_character": 274,
"text": "BULLET::::2. Call attention to people's mistakes indirectly. No one likes to make mistakes, especially in front of others. Scolding and blaming only serve to humiliate. If we subtly and indirectly show people mistakes, they will appreciate us and be more likely to improve.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "56137525",
"title": "Diving safety",
"section": "Section::::Human factors.\n",
"start_paragraph_id": 19,
"start_character": 0,
"end_paragraph_id": 19,
"end_character": 378,
"text": "Human error is inevitable and everyone makes mistakes at some time. The consequences of these errors are varied and depend on many factors. Most errors are minor and do not cause significant harm, but others can have catastrophic consequences. Examples of human error leading to accidents are available in vast numbers, as it is the direct cause of 60% to 80% of all accidents.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "13416473",
"title": "Carelessness",
"section": "Section::::Associated areas of concern.:Education.\n",
"start_paragraph_id": 5,
"start_character": 0,
"end_paragraph_id": 5,
"end_character": 644,
"text": "In any education environment, careless mistakes are those errors that occur in areas within which the student has had training. Careless mistakes are common occurrences for students both within and outside of the learning environment. They are often associated with a lapse in judgment or what are known as mind slips because the students had know-how to have avoided making the mistakes, but did not for undeterminable reasons. Given that students that are competent of the subject and focused are most likely to make careless mistakes, concerns for students exhibiting careless mistakes often turn toward neurological disorders as the cause.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "856726",
"title": "Cass Sunstein",
"section": "Section::::Career.\n",
"start_paragraph_id": 14,
"start_character": 0,
"end_paragraph_id": 14,
"end_character": 341,
"text": "People often make poor choices – and look back at them with bafflement! We do this because as human beings, we all are susceptible to a wide array of routine biases that can lead to an equally wide array of embarrassing blunders in education, personal finance, health care, mortgages and credit cards, happiness, and even the planet itself.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "3449686",
"title": "Speech error",
"section": "Section::::Psycholinguistic explanations.\n",
"start_paragraph_id": 5,
"start_character": 0,
"end_paragraph_id": 5,
"end_character": 930,
"text": "Speech errors are made on an occasional basis by all speakers. They occur more often when speakers are nervous, tired, anxious or intoxicated. During live broadcasts on TV or on the radio, for example, nonprofessional speakers and even hosts often make speech errors because they are under stress. Some speakers seem to be more prone to speech errors than others. For example, there is a certain connection between stuttering and speech errors. Charles F. Hockett explains that \"whenever a speaker feels some anxiety about possible lapse, he will be led to focus attention more than normally on what he has just said and on what he is just about to say. These are ideal breeding grounds for stuttering.\" Another example of a \"chronic sufferer\" is Reverend William Archibald Spooner, whose peculiar speech may be caused by a cerebral dysfunction, but there is much evidence that he invented his famous speech errors (spoonerisms).\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
e7wqhb | why isn't the night sky just one big light? | [
{
"answer": "It's called Olbers' paradox, and actually there is a lot of light. We just can't see it because it's out of our visible spectrum. This is because as galaxies move away, their light changes (redshifts), so we may not be able to see it anymore.",
"provenance": null
},
{
"answer": "Visible light is only a small portion of the light that's out there; there are many other types of light that we can't see with our eyes.",
"provenance": null
},
{
"answer": "Two things:\n\n1) Regardless of how big the universe is, light still takes time to get places. The universe is 13.77 billion years old, so light has only had 13.77 billion years to get here. Because the universe is expanding, we can see stuff from much farther away than that, but there's still a limit on how far away stars can be and still have had time for the light to get to us.\n\n2) As the universe expands, it stretches light passing through it, causing the light to be redshifted, which means its wavelength gets longer. Visible light from the very edges of the visible universe can get redshifted out of the visible spectrum and into infrared or radio waves. That's why the Cosmic Microwave Background is, well, microwaves. It used to include a *lot* of visible light, but it's so old and it's been shifted so much that it's all microwaves, now.",
"provenance": null
},
{
"answer": "But the observable universe isn't infinite. Because the universe is expanding everywhere at once, there is a distance where objects are moving away from us faster than the speed of light (important to note they aren't moving faster than light, but the expansion of the universe is causing the distance between us to grow faster than the speed of light). Because of this, light from those objects will never reach us.\n\nSo when you stare out at the blackness between the stars, you're actually looking at the edge of the known universe, and I think that's fucking cool",
"provenance": null
},
{
"answer": "Another thing people haven't said is that your assumption is wrong:\n\n > the universe is infinite, so theoretically in every possible direction we look at some point there should be a star somewhere out there, right?\n\nThis isn't true; just because it's infinite doesn't imply this. It could be infinite but still have empty space somewhere. Just because it goes on forever doesn't mean the stars are evenly distributed.\n\nI think it's simpler to think about it in terms of numbers. Pi is infinitely long, 3.1415... forever, but what if we took out every single 7? It would still be infinitely long, still be a unique number, but just have no sevens.\n\nNow with that in mind, what if we divided the sky into ten sections 0-9 and looked at all the stars in the sky in order of how close they are to us? Every time a star shows up, we add its section to the end of a number. Say the first star is in section 1, then the next is in section 5, then 4, and so on: 154381289345, etc. As in the example above, a seven doesn't have to show up, which would mean 10% of the sky has no stars even though there are an infinite number of stars",
"provenance": null
},
{
"answer": "1) Light diffuses rather significantly with distance. The sky is indeed awash with stars, but most of them are too far away to be even remotely visible.\n\n2) Because of universal expansion, light from far away sources gets redshifted to frequencies below that of human vision limits.\n\n3) Expanding on the above; the night sky, bluntly, **is** one big light. But most of that light is at relatively low frequencies below what the human eye can see, even before redshifting is taken into account.",
"provenance": null
},
{
"answer": "If you draw a line on a balloon with a sharpie, then inflate the balloon, the line you drew will get stretched. What was one solid black stroke at the beginning is now a large faded line.\n\nNow imagine the balloon is the universe and the line is light from a far away object. Eventually it gets stretched so much that we can’t see it anymore.",
"provenance": null
},
{
"answer": "Lots of nice comments, and several explain the physics in a way I've forgotten since I studied it, so kudos.\n\nBut I'd like to add that the night sky is really actually quite bright. If you can get somewhere without massive light pollution, there really isn't any direction which doesn't have light.\n\nIf you find a \"dark patch\" and look at it through a telescope you'll generally see stuff... And if there's a dark patch in that then you get a bigger telescope etc.\n\nEdit: cause apparently I can't English today",
"provenance": null
},
{
"answer": "My favourite [minute physics video](_URL_0_) explains this exact thing really well!",
"provenance": null
},
{
"answer": "Turn your radio on but not to a channel. Hear that static? That’s the one big light. \n\nSame for an analog tv that hasn’t been tuned.\n\nWe call it CMB. Cosmic microwave background.",
"provenance": null
},
{
"answer": "1. We don't actually know the universe is infinite.\n2. The black patches might be the stars being too far away for enough light to make it to us to be visible to the naked eye\n3. There's a theory that this proves the universe is expanding because if it weren't the sky would be filled with stars.",
"provenance": null
},
{
"answer": "It's called Olbers' Paradox--\"why is the sky dark at night?\" \n\n[_URL_1_](_URL_0_)\n\nThe idea's been around a long time.",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "195193",
"title": "Sky",
"section": "Section::::During the night.\n",
"start_paragraph_id": 16,
"start_character": 0,
"end_paragraph_id": 16,
"end_character": 594,
"text": "The term night sky refers to the sky as seen at night. The term is usually associated with skygazing and astronomy, with reference to views of celestial bodies such as stars, the Moon, and planets that become visible on a clear night after the Sun has set. Natural light sources in a night sky include moonlight, starlight, and airglow, depending on location and timing. The fact that the sky is not completely dark at night can be easily observed. Were the sky (in the absence of moon and city lights) absolutely dark, one would not be able to see the silhouette of an object against the sky.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "997476",
"title": "Night sky",
"section": "Section::::Brightness.\n",
"start_paragraph_id": 7,
"start_character": 0,
"end_paragraph_id": 7,
"end_character": 242,
"text": "The fact that the sky is not completely dark at night, even in the absence of moonlight and city lights, can be easily observed, since if the sky were absolutely dark, one would not be able to see the silhouette of an object against the sky.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "2571012",
"title": "National Dark-Sky Week",
"section": "Section::::Goal.\n",
"start_paragraph_id": 8,
"start_character": 0,
"end_paragraph_id": 8,
"end_character": 397,
"text": "Jennifer Barlow states, \"The night sky is a gift of such tremendous beauty that should not be hidden under a blanket of wasted light. It should be visible so that future generations do not lose touch with the wonder of our universe.\" Barlow explains, \"It is my wish that people see the night sky in all of its glory, without excess light in the sky as our ancestors saw it hundreds of years ago.\"\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "4968799",
"title": "Sky brightness",
"section": "",
"start_paragraph_id": 1,
"start_character": 0,
"end_paragraph_id": 1,
"end_character": 298,
"text": "Sky brightness refers to the visual perception of the sky and how it scatters and diffuses light. The fact that the sky is not completely dark at night is easily visible. If light sources (e.g. the Moon and light pollution) were removed from the night sky, only direct starlight would be visible. \n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "195193",
"title": "Sky",
"section": "",
"start_paragraph_id": 3,
"start_character": 0,
"end_paragraph_id": 3,
"end_character": 703,
"text": "During daylight, the sky appears to be blue because air scatters more blue sunlight than red. At night, the sky appears to be a mostly dark surface or region spangled with stars. During the day, the Sun can be seen in the sky unless obscured by clouds. In the night sky (and to some extent during the day) the Moon, planets and stars are visible in the sky. Some of the natural phenomena seen in the sky are clouds, rainbows, and aurorae. Lightning and precipitation can also be seen in the sky during storms. Birds, insects, aircraft, and kites are often considered to fly in the sky. Due to human activities, smog during the day and light pollution during the night are often seen above large cities.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "526237",
"title": "Twilight",
"section": "Section::::Astronomical twilight.:Astronomical dawn and dusk.:Definition.\n",
"start_paragraph_id": 29,
"start_character": 0,
"end_paragraph_id": 29,
"end_character": 557,
"text": "However, in other places, especially those with skyglow, astronomical twilight may be almost indistinguishable from night. In the evening, even when astronomical twilight has yet to end and in the morning when astronomical twilight has already begun, most casual observers would consider the entire sky fully dark. Because of light pollution, observers in some localities, generally in large cities, may never have the opportunity to view even fourth-magnitude stars, irrespective of the presence of any twilight at all, and to experience truly dark skies.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "997476",
"title": "Night sky",
"section": "",
"start_paragraph_id": 1,
"start_character": 0,
"end_paragraph_id": 1,
"end_character": 253,
"text": "The term night sky, usually associated with astronomy from Earth, refers to the nighttime appearance of celestial objects like stars, planets, and the Moon, which are visible in a clear sky between sunset and sunrise, when the Sun is below the horizon.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
81b2f7 | Why does northern Canada look so strange on Google Maps? | [
{
"answer": "What you are seeing here are features left by the passage of the last continental glaciation. Most of Canada was under 2-3 km of ice a mere 12 000 years ago. That ice sheet flowed, and then melted, leaving behind all kinds of features.\n\nIn this one, I note a prominent group of elongated hills trending NW-SE, probably [drumlins](_URL_2_) or perhaps some kind of [moraine](_URL_1_). There are a few N-S trending [eskers](_URL_0_) (essentially sand and gravel infilling of meltwater channels in the decaying glacier).",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "8296742",
"title": "Northern (genre)",
"section": "Section::::Characteristics.\n",
"start_paragraph_id": 4,
"start_character": 0,
"end_paragraph_id": 4,
"end_character": 426,
"text": "Northerns are similar to westerns but are set in the frozen north of North America; that is, Canada or Alaska. Of the two, Canada was the most common setting, although many tropes could apply to both. Popular locations within Canada are the Yukon, the Barren Grounds, and area around Hudson Bay. Generic names used for this general setting included the \"Far North\", the \"Northlands\", the \"North Woods\", and the \"Great Woods\".\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "52606392",
"title": "Digital divide in Canada",
"section": "",
"start_paragraph_id": 2,
"start_character": 0,
"end_paragraph_id": 2,
"end_character": 410,
"text": "The territories situated within Northern Canada in particular have been technologically divided compared to the rest of the country due to economic and geographical obstacles creating challenges regarding having high speed internet connections set up between distant and sparsely populated towns, along with the low digital literacy rates and lack of access to technology that some northern residents possess.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "31063805",
"title": "Google Street View in Canada",
"section": "Section::::Privacy concerns.\n",
"start_paragraph_id": 19,
"start_character": 0,
"end_paragraph_id": 19,
"end_character": 1305,
"text": "While Canada, like other jurisdictions, has raised the issue of privacy concerns regarding Google Street View, the presence of Google cameras in one Canadian city in March 2009 gave rise to a different complaint. Les MacPherson, a columnist with the Saskatoon Star-Phoenix, complained in a March 28, 2009, column that the timing of the imaging, at the end of a protracted winter season and before the true onset of spring would cast an unfavourable image of Saskatoon and other cities. \"What worries me more than any loss of privacy is the prospect of presenting to the world a highly unflattering impression of Canadian cities. With the possible exception of Victoria, they do not show off well in the spring. Google could not have picked a more inauspicious time to do its scanning. Saskatoon is unfortunately typical. For Google to record its images of the city at this most visually unappealing time of year is like photographing a beautiful woman who has just awakened from a six-month coma,\" he wrote. In early October 2009, the first Canadian cities began to appear on Street View; several, including Saskatoon, were not included in the initial roll-out. One city that was included, Calgary, included images taken in both summer and winter. Images of Saskatoon were rolled out on December 2, 2009.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "26080303",
"title": "Google Street View privacy concerns",
"section": "Section::::Americas.:Canada.\n",
"start_paragraph_id": 8,
"start_character": 0,
"end_paragraph_id": 8,
"end_character": 1305,
"text": "While Canada, like other jurisdictions, has raised the issue of privacy concerns regarding Google Street View, the presence of Google cameras in one Canadian city in March 2009 gave rise to a different complaint. Les MacPherson, a columnist with the Saskatoon Star-Phoenix, complained in a March 28, 2009, column that the timing of the imaging, at the end of a protracted winter season and before the true onset of spring would cast an unfavourable image of Saskatoon and other cities. \"What worries me more than any loss of privacy is the prospect of presenting to the world a highly unflattering impression of Canadian cities. With the possible exception of Victoria, they do not show off well in the spring. Google could not have picked a more inauspicious time to do its scanning. Saskatoon is unfortunately typical. For Google to record its images of the city at this most visually unappealing time of year is like photographing a beautiful woman who has just awakened from a six-month coma,\" he wrote. In early October 2009, the first Canadian cities began to appear on Street View; several, including Saskatoon, were not included in the initial roll-out. One city that was included, Calgary, included images taken in both summer and winter. Images of Saskatoon were rolled out on December 2, 2009.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "79987",
"title": "Northern Canada",
"section": "",
"start_paragraph_id": 1,
"start_character": 0,
"end_paragraph_id": 1,
"end_character": 525,
"text": "Northern Canada, colloquially the North, is the vast northernmost region of Canada variously defined by geography and politics. Politically, the term refers to three territories of Canada: Yukon, Northwest Territories, and Nunavut. Similarly, \"the Far North\" (when contrasted to \"the North\") may refer to the Canadian Arctic: the portion of Canada that lies north of the Arctic Circle, east of Alaska and west of Greenland. This area covers about 39% of Canada's total land area, but has less than 1% of Canada's population.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "52163976",
"title": "Canadian Arctic tundra",
"section": "",
"start_paragraph_id": 1,
"start_character": 0,
"end_paragraph_id": 1,
"end_character": 316,
"text": "The Canadian Arctic tundra is a biogeographic designation for Northern Canada's terrain generally lying north of the tree line or boreal forest, that corresponds with the Scandinavian Alpine tundra to the east and the Siberian Arctic tundra to the west inside the circumpolar tundra belt of the Northern Hemisphere.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "1035744",
"title": "Nordicity",
"section": "",
"start_paragraph_id": 14,
"start_character": 0,
"end_paragraph_id": 14,
"end_character": 745,
"text": "The term is used by the Canadian government that has a set system for measuring nordicity. This system is used for determining a number of regulations in fields such as environmental protection, infrastructure, and many others. Northern Canada, apropos, is normally divided into three areas. The \"Middle North\" covering the northern parts of most provinces, as well as parts of the territories is largely populated by those of European descent and has significant resource extraction albeit a low population. The \"Far North\" covers the northern part of the continent and the southern Arctic Archipelago. The \"Extreme North\" covers the northernmost islands and is largely uninhabitable. Other countries have their own systems of measuring nordicity.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
1zj7j2 | Why did the United States use images of Native Americans on its coins during an era of Indian persecution? | [
{
"answer": "You could try r/Anthropology, or r/CulturalAnthro with this question too; this is a topic that they might be able to shed a different light on than historians on this sub (not downplaying their knowledge or expertise, but just offering a different light on the subject). \n\nI'm not an expert on Native American history by any means as I am trained as an Anthropologist, but what you have described, and the co-opting of imagery and creation of a mythos in this situation is partly due to the idea of a \"free\", and \"natural\" society, one which was idolized, yet paradoxically, was also being controlled and oppressed. This is a condensed answer, but it gives a broad stroke answer to the question. Like I said I'm not an expert, but I would recommend reading either Vine Deloria or Philip Deloria (his book, \"Playing Indian\" refers to the kind of cultural romanticism and co-opting which has become associated with many Native American cultures and practices); they are both Native American scholars and very respected in their work. I know this doesn't really answer your question, but I hope that maybe it helps point you in a direction that might be helpful. \n\nTo the mods: this is my first time commenting on a post in this sub and I'm aware that the rules are enforced; if something I have posted is not in keeping with these rules I will revise the post so that it does.",
"provenance": null
},
{
"answer": "I don't know if it's the same period you're thinking of, but I know that during World Wars 1 and 2 the US armed forces liked to use imagery of \"Red Indians\". I have always assumed - but am willing to be corrected by those who know better - that a \"fierce warrior\" image is being invoked. For example:\n\n*The Lafayette Squadron, an American Volunteer unit in the French Air Force during World War 1, used a \"Sioux Chief\" as its [squadron motif](_URL_1_).\n\n*US Paratroopers during World War 2 would habitually shave their heads into \"Mohawk\" cuts before a combat drop and would often also apply \"war paint\" - see [this picture](_URL_0_).\n\n",
"provenance": null
},
{
"answer": "**Tl;dr** It does seem counter-intuitive to honor Native Americans on coins while denying their humanity and the right to their own language, religion, culture and customs, but this simply didn't slow America down. Phil Deloria does a great job of showing how the *idea* of Native Americans has pretty much always been divorced from the realities of Native American life or U.S.-Indian policy. Indeed, as he puts it when writing of the early 19th century organization [The Improved Order of Red Men](_URL_0_), \"They desired Indianness, not Indians.\" (The Wiki here notes that membership was restricted to whites until the 1970s.) It's the same type of appropriation of cultural motifs and generic imagery that /u/Brickie78 is referring to and still exists today in the form of many professional, collegiate, and high school athletic mascots.\n\nAs /u/ggarcimer15 suggests, read Vine Deloria (his most famous work being *Custer Died for Your Sins*) and his son Phil's *Playing Indian*. Phil especially goes into how Native Americans were a convenient \"other\" for Euro-Americans. They were variously portrayed as savage; noble; epitomes of freedom; enemies of the United States; threats to Christian civilization; or the last vestiges of a pre-modern society, tragically fading away under the superior technology and lifestyle of white Americans. \n\nSo how did Indian head coins come about? In the abstract, especially in the early days of the United States, Native Americans were symbols of freedom and liberty - think of early [personifications of Colombia](_URL_3_) or the U.S. Capitol's [Statue of Freedom](_URL_2_), with their vaguely Native American headdresses and attire, or more explicitly, the Boston Tea Party disguising themselves as Native Americans because of their popular association with freedom.\n\nAs a perception that predates America, it was influential enough to survive the demonization of Native Americans during the 19th century \"Indian Wars,\" and afterwards the idea of Indians as paragons of virility and ruggedness (compared to the effete, urbanized late 19th century American) came back in full force - this time frame also saw the rise of organizations like the [Camp Fire Girls](_URL_1_) and the Boy Scouts, which co-founder Ernest Thompson Seton explicitly linked to Native Americans: \"Indian teachings in the fields of art, handicraft, woodcraft, agriculture, social life, health, and joy need no argument beyond presentation; they speak for themselves. The Red Man is the apostle of outdoor life, his example and precept are what young America needs today above any other teaching of which I have knowledge.\" Putting \"his\" face (the designer of the \"Buffalo nickel\" claimed not to have drawn a portrait, but a \"type\") on coinage was another way of using the image of Native Americans to reinforce the idea of American uniqueness and freedom, just like the \"Mohawks\" in Boston Harbor.\n\nAlso see Jared Farmer's *On Zion's Mount* for more on the late 19th/early 20th century obsession with the declining virility of the American male and how embracing certain aspects of Native American lifestyle (albeit a heavily idealized lifestyle) was seen as a remedy.",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "1370087",
"title": "Edward S. Curtis",
"section": "Section::::Collections of Curtis materials.:Library of Congress.\n",
"start_paragraph_id": 37,
"start_character": 0,
"end_paragraph_id": 37,
"end_character": 588,
"text": "The Library of Congress acquired these images as copyright deposits from about 1900 through 1930. The dates on them are dates of registration, not the dates when the photographs were taken. About two-thirds (1,608) of these images were not published in \"The North American Indian\" and therefore offer a different glimpse into Curtis's work with indigenous cultures. The original glass plate negatives, which had been stored and nearly forgotten in the basement of the Morgan Library, in New York, were dispersed during World War II. Many others were destroyed and some were sold as junk.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "29963204",
"title": "Native Americans in film",
"section": "",
"start_paragraph_id": 1,
"start_character": 0,
"end_paragraph_id": 1,
"end_character": 898,
"text": "Portrayals of Native Americans in film have historically tended to be based in inaccurate stereotypes, notably caricatures of the Plains Indians, depicted in Hollywood Westerns. But throughout Hollywood history, images of Native Americans have alternated between violent, uncivilized villains along with positive, romantic portrayals. Early short one- and two-reel movies tended to show diverse portrayals of positive and negative images and occasionally featured stories of Indian/white interracial marriages. Negative images dominated the mid- to late-1930s until the watershed movie Broken Arrow (1950 film) appeared that many credit as the first postwar Western to depict Native American sympathetically. Starting in the 1990s, authentic films by Native American filmmakers and often independent films have focused on portraying Indigenous Peoples from their own Native American point of view.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "39016128",
"title": "G.E.E. Lindquist Native American Photographs Collection",
"section": "",
"start_paragraph_id": 2,
"start_character": 0,
"end_paragraph_id": 2,
"end_character": 343,
"text": "They depict the people, places, and practices of Native Americans and their communities from at least 34 States, plus Canada and Mexico in the period from 1909-1953. The majority of the images were taken by G. E. E. Lindquist (1886-1967), an itinerant representative of the ecumenical Home Missions Council of the Federal Council of Churches.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "39938071",
"title": "White slave propaganda",
"section": "Section::::Modern analysis.\n",
"start_paragraph_id": 20,
"start_character": 0,
"end_paragraph_id": 20,
"end_character": 544,
"text": "Gwendolyn DuBois Shaw, in \"Portraits of a People,\" has argued that the usage of props, such as the American flag and books, helped to provide context for Northern viewers, and also to emphasize that the purpose of the photos was to raise money for education of former slaves and schools in Louisiana. She also noted that the use of \"white\" children to illustrate the damage caused by institutional slavery, whose victims were overwhelmingly visibly people of color, demonstrated the contemporary racism of both southern and northern societies.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "38825214",
"title": "Lewis and Clark Exposition gold dollar",
"section": "Section::::Design.\n",
"start_paragraph_id": 14,
"start_character": 0,
"end_paragraph_id": 14,
"end_character": 882,
"text": "Art historian Cornelius Vermeule, in his volume on American coinage, pointed out that some people liked the Lewis and Clark Exposition dollar as it depicted historic figures who affected the course of American history, rather than a bust intended to be Liberty, and that Barber's coin presaged the 1909 Lincoln cent and the 1932 Washington quarter. Nevertheless, Vermeule deprecated the piece, as well as the earlier American gold commemorative, the Louisiana Purchase Exposition dollar. \"The lack of spark in these coins, as in so many designs by Barber or Assistant Engraver (later Chief Engraver) Morgan, stems from the fact that the faces, hair and drapery are flat and the lettering is small, crowded, and even.\" According to Vermeule, when the two engravers collaborated on a design, such as the 1916 McKinley Birthplace Memorial dollar, \"the results were almost oppressive\".\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "19556004",
"title": "Salvage anthropology",
"section": "Section::::Changing Meanings of Artifacts.\n",
"start_paragraph_id": 8,
"start_character": 0,
"end_paragraph_id": 8,
"end_character": 1016,
"text": "Since American Indians were erroneously thought to be going extinct, white American anthropologists did not trust them to preserve their own traditions within their communities and began an effort in the late nineteenth century to dispossess communities of spiritual and other items, which would be transplanted into museums. As Euro-Americans removed sacred objects from their communities, they placed spiritual items into an educational context. Although the collectors believed they were using these objects to showcase the memory of a “vanishing” people, the objects were taken from actual people, many of whom believed that public display was disrespectful and potentially harmful to viewers. Many American Indians also believed that exhibiting sacred objects stripped the items of their spiritual power. By creating new meanings for the objects on display, in attempts to externally preserve a culture, anthropologists and collectors diminished the meaning that items held for the people who had created them.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "5070047",
"title": "Piebaldism",
"section": "Section::::History.\n",
"start_paragraph_id": 10,
"start_character": 0,
"end_paragraph_id": 10,
"end_character": 228,
"text": "Early photographers captured many images of African piebalds used as a form of amusement, and George Catlin is believed to have painted several portraits of Native Americans of the Mandan tribe who were affected by piebaldism. \n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
19rwf9 | How would bread have been cut/served prior to the invention of the sandwich? | [
{
"answer": "This isn't perhaps the cutting techniques you're looking for, but stale bread often used to be cut into a square shape and used as a plate, in what was called a 'trencher'. A 'good trencherman' would be one who ate a lot of food. These bits of bread would be given out as alms after a nobleman's meal if those eating didn't want them. ",
"provenance": null
},
{
"answer": "Quite often by \"breaking\" or simply tearing and sharing. In the Bible, the phrase \"break(ing) bread\" occurs dozens of times. It is of course an ancient phrase meaning \"to eat together.\" In many uses in the New Testament, it came to mean taking communion and/or fellowship.\n\nBread has always been a staple in the human diet and has been found on every continent man has inhabited, all made from local ingredients. Modern table breads are distinctly soft, while older style or more \"rustic\" breads have a tough crusty exterior, and to separate it you would literally have to \"break bread\" (if you have ever gotten your hands on a loaf of good French Bread or a Baguette, you know you break or tear the bread). ",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "82425",
"title": "Sandwich",
"section": "Section::::History.\n",
"start_paragraph_id": 5,
"start_character": 0,
"end_paragraph_id": 5,
"end_character": 401,
"text": "The modern concept of a sandwich using slices of bread as found within the West can arguably be traced to 18th-century Europe. However, the use of some kind of bread or bread-like substance to lie under (or under \"and\" over) some other food, or used to scoop up and enclose or wrap some other type of food, long predates the eighteenth century, and is found in numerous much older cultures worldwide.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "251320",
"title": "Sliced bread",
"section": "",
"start_paragraph_id": 1,
"start_character": 0,
"end_paragraph_id": 1,
"end_character": 282,
"text": "Sliced bread is a loaf of bread that has been sliced with a machine and packaged for convenience. It was first sold in 1928, advertised as \"the greatest forward step in the baking industry since bread was wrapped\". This led to the popular idiom \"greatest thing since sliced bread\".\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "149707",
"title": "Otto Frederick Rohwedder",
"section": "Section::::Career.\n",
"start_paragraph_id": 11,
"start_character": 0,
"end_paragraph_id": 11,
"end_character": 467,
"text": "In 1927 Rohwedder successfully designed a machine that not only sliced the bread but wrapped it. He applied for patents to protect his invention and sold the first machine to a friend and baker Frank Bench, who installed it at the Chillicothe Baking Company, in Chillicothe, Missouri, in 1928. The first loaf of sliced bread was sold commercially on July 7, 1928. Sales of the machine to other bakeries increased and sliced bread became available across the country.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "1604462",
"title": "Sealed crustless sandwich",
"section": "Section::::Controversial patent.\n",
"start_paragraph_id": 12,
"start_character": 0,
"end_paragraph_id": 12,
"end_character": 490,
"text": "That is, the patent described a sandwich with a layer of filling in between two pieces of bread which are crimped shut and have their crust removed. The other nine claims of the patent elaborate the idea further, including the coating of two sides of the bread with peanut butter first before putting the jelly in the middle, so that the jelly would not seep into the bread—the layers of filling \"are engaged to one another to form a reservoir for retaining the second filling in between\".\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "149707",
"title": "Otto Frederick Rohwedder",
"section": "Section::::Career.\n",
"start_paragraph_id": 13,
"start_character": 0,
"end_paragraph_id": 13,
"end_character": 392,
"text": "In 1930 Continental Baking Company introduced Wonder Bread as a sliced bread. It was followed by other major companies when they saw how the bread was received. By 1932 the availability of standardized slices had boosted sales of automatic, pop-up toasters, an invention of 1926 by Charles Strite. In 1933 American bakeries for the first time produced more sliced than unsliced bread loaves.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "242672",
"title": "Toaster",
"section": "Section::::History.\n",
"start_paragraph_id": 22,
"start_character": 0,
"end_paragraph_id": 22,
"end_character": 328,
"text": "Before the development of the electric toaster, sliced bread was toasted by placing it in a metal frame or on a long-handled toasting-fork and holding it near a fire or over a kitchen grill. Utensils for toasting bread over open flames appeared in the early 19th century, including decorative implements made from wrought iron.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "4118653",
"title": "Rodilla",
"section": "Section::::History.\n",
"start_paragraph_id": 3,
"start_character": 0,
"end_paragraph_id": 3,
"end_character": 408,
"text": "About 1939 or 1940 Antonio Rodilla opened a confectionery shop in Callao Square in downtown Madrid. After some years he decided to sell a new line of products, cold meat (\"fiambre\") sandwiches. Since it was difficult at the time to find a good bread supplier he decided to make his own sliced bread as well, called \"English bread\", or \"pan de molde\" (as opposed to the more traditional baguette-like bread).\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
2f9ic2 | how has the economy managed to compensate for a majority of women entering the workforce in the past several decades, along with a rise in unmarried households - essentially doubling the demand for high paying jobs in a short period of time? | [
{
"answer": "The economy hasn't doubled high-paying jobs. An interesting book on the subject is called \"The Two-Income Trap\" by Elizabeth Warren. \n\nThe simple answer is that women entering the workforce made quality housing more expensive and made it so unmarried mothers have almost no chance to move up in social classes - most unmarried mothers have low-paying jobs. In general terms, the highest paying jobs are held by men and women who are married, have college degrees, and they combine incomes. To move up in life you really need both incomes.\n\nOur economy in the past fifty years has seen an explosion of low-paying service jobs like retail and customer service and an explosion in creative jobs like computer programming. There is very little in the middle.",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "29826999",
"title": "Added worker effect",
"section": "Section::::The added worker effect after the Great Recession.:Prevalence of women as primary workers.\n",
"start_paragraph_id": 19,
"start_character": 0,
"end_paragraph_id": 19,
"end_character": 711,
"text": "A prolonged period of unemployment can lead to what economists call the discouraged worker effect, where workers drop out of the labor supply. The wives of discouraged workers do not behave as secondary workers, altering their labor supply in response to their spouses' transitory bouts with unemployment, but rather, these wives become breadwinners (Maloney, p. 183). Between 2007 and 2009, the United States saw a large increase in women's contribution to family income, resulting from a decrease in husband's earnings because three out of four eliminated jobs had belonged to men (Mattingly & Smith, p. 344). Working wives also worked more hours if their spouses stopped working (Mattingly & Smith, p. 355).\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "29098205",
"title": "Women Employed",
"section": "Section::::Current initiatives.:Promoting fair workplaces.\n",
"start_paragraph_id": 12,
"start_character": 0,
"end_paragraph_id": 12,
"end_character": 488,
"text": "Despite many improvements in women's economic status over the past three decades, employment discrimination and unfairness in the workplace are still a fact of life for many women. On average, women make only 80 cents for every dollar a man makes, and can lose an immense amount of wages over a lifetime due to the wage gap which persists despite education level. A disproportionate number of women are clustered in low-paying, part-time jobs, often without benefits or dependable hours.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "2430477",
"title": "Personal Responsibility and Work Opportunity Act",
"section": "Section::::Criticism.:Gendered and racial poverty.\n",
"start_paragraph_id": 89,
"start_character": 0,
"end_paragraph_id": 89,
"end_character": 404,
"text": "But the income disparity is not the only form of disadvantage that women face in the labor market. Many women are unable to obtain a full time job not just due to gender discrimination, but also because of unavailable, expensive, or inadequate day care. This problem is only amplified when considering the issue of the segregation of women into underpaid work, limiting possibilities of economic growth.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "14433622",
"title": "Women in the workforce",
"section": "",
"start_paragraph_id": 1,
"start_character": 0,
"end_paragraph_id": 1,
"end_character": 618,
"text": "Women in the workforce earning wages or salary are part of a modern phenomenon, one that developed at the same time as the growth of paid employment for men, but women have been challenged by inequality in the workforce. Until modern times, legal and cultural practices, combined with the inertia of longstanding religious and educational conventions, restricted women's entry and participation in the workforce. Economic dependency upon men, and consequently the poor socio-economic status of women, have had the same impact, particularly as occupations have become professionalized over the 19th and 20th centuries.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "648470",
"title": "Oppression",
"section": "Section::::Social oppression.:Economic oppression.\n",
"start_paragraph_id": 43,
"start_character": 0,
"end_paragraph_id": 43,
"end_character": 637,
"text": "Women, in contrast, are still expected to fulfill the caretaker role and take time off for domestic needs such as pregnancy and ill family members, preventing them from conforming to the \"ideal-worker norm\". With the current norm in place, women are forced to juggle full-time jobs and family care at home. Others believe that this difference in wage earnings is likely due to the supply and demand for women in the market because of family obligations. Eber and Weichselbaumer argue that \"over time, raw wage differentials worldwide have fallen substantially. Most of this decrease is due to better labor market endowments of females\".\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "16084422",
"title": "Women in Jordan",
"section": "Section::::Social representation.:Employment.\n",
"start_paragraph_id": 22,
"start_character": 0,
"end_paragraph_id": 22,
"end_character": 1471,
"text": "Unemployment, underemployment, differences in wages and occupational segregation are the four main factors in the economy that impact women’s level of labor. In terms of unemployment, 15% of men are unemployed while 25% of women are unemployed and 82% of young women ages 15–29 are unemployed. Women are underemployed as they tend to be hired less than men with lesser education because large sections of the Jordanian economy are and have traditionally been closed off to women. Less educated men often hold more jobs while women are often better educated, leading to many women settling for jobs requiring lesser education than they have. Wage discrimination in Jordan is no different from anywhere else in the world, but in combination with traditional and cultural factors – like being responsible for the private sphere (the family and the home) – women are driven away from the workforce. Jordanian law suggests that wives should be obedient to their husbands because the men financially support the family, and if she is disobedient her husband can discontinue financial support. In addition, men have assumed the power to forbid their wives from working, and the Jordanian courts have upheld these laws. Furthermore, as honor killings consistently occur and are currently on the rise, women are less motivated to leave the safety of their homes. Laws in Jordan regarding honor killings continue to make it possible for courts to deal with perpetrators leniently.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "4881604",
"title": "Work–life balance",
"section": "Section::::Role of gender and family.\n",
"start_paragraph_id": 22,
"start_character": 0,
"end_paragraph_id": 22,
"end_character": 548,
"text": "\"The past two decades have witnessed a sharp decline in men's provider role, caused in part by growing female labor participation and in part by the weakening of men's absolute power due to increased rates of unemployment and underemployment,\" states sociologist Jiping Zuo. She continues, \"Women's growing earning power and commitment to the paid workforce together with the stagnation of men's social mobility make some families more financially dependent on women. As a result, the foundations of the male dominance structure have been eroded.\"\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
3kvj1s | what does the president of france do as co-prince of andorra? | [
{
"answer": "The co-princes have, like most heads of state of modern monarchies, more of a ceremonial function than a political one. They don't even have the right to veto governmental decisions. They also have representatives in place, so the President of France will normally not concern himself with Andorran affairs that often.\n\nThe real power lies with the parliament and their head of government, Antoni Martí. So not much difference there compared to other democracies. ",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "23507790",
"title": "Andorra–France relations",
"section": "Section::::History.\n",
"start_paragraph_id": 4,
"start_character": 0,
"end_paragraph_id": 4,
"end_character": 734,
"text": "Official diplomatic relations between Andorra and France were established after the signing of a joint \"Treaty of Good Neighborhood, Friendship and Cooperation\" between Andorra, France and Spain; after Andorra adopted a new constitution establishing them as a parliamentary democracy. The President of France acts as a co-Prince (along with the Spanish Bishop of Urgell) in Andorra. In 1993, France opened a resident embassy in Andorra la Vella. In October 1967, French President (and co-Prince) Charles de Gaulle paid a visit to Andorra. It was the first visit by a French President to the nation. President de Gaulle paid a second visit in 1969. Since then, there have been several bilateral visits between leaders of both nations.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "34749",
"title": "1967",
"section": "Section::::Events.:October.\n",
"start_paragraph_id": 315,
"start_character": 0,
"end_paragraph_id": 315,
"end_character": 348,
"text": "BULLET::::- October 23 – Charles de Gaulle becomes the first French Co-Prince of Andorra to visit his Andorran subjects. In addition to being President of France, de Gaulle is a joint ruler (along with Spain's Bishop of Urgel) of the tiny nation located in the mountains between France and Spain, pursuant to the 1278 agreement creating the nation.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "563975",
"title": "François Hollande",
"section": "",
"start_paragraph_id": 1,
"start_character": 0,
"end_paragraph_id": 1,
"end_character": 529,
"text": "François Gérard Georges Nicolas Hollande (; born 12 August 1954) is a French politician who served as President of the French Republic and \"ex officio\" Co-Prince of Andorra from 2012 to 2017. He was previously the First Secretary of the Socialist Party from 1997 to 2008, Mayor of Tulle from 2001 to 2008, and President of the Corrèze General Council from 2008 to 2012. Hollande also served in the National Assembly of France twice for the department of Corrèze's 1st constituency from 1988 to 1993, and again from 1997 to 2012.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "67582",
"title": "Politics of Andorra",
"section": "Section::::Government.:Executive branch.\n",
"start_paragraph_id": 7,
"start_character": 0,
"end_paragraph_id": 7,
"end_character": 870,
"text": "Under the 1993 constitution, the co-princes continue as heads of state, but the head of government retains executive power. The two co-princes serve coequally with limited powers that do not include veto over government acts. Both are represented in Andorra by a delegate, although since 1993, both France and Spain have their own embassies. As co-princes of Andorra, the President of France and the Bishop of Urgell maintain supreme authority in approval of all international treaties with France and Spain, as well as all those that deal with internal security, defense, Andorran territory, diplomatic representation, and judicial or penal cooperation. Although the institution of the co-princes is viewed by some as an anachronism, the majority sees them as both a link with Andorra's traditions and a way to balance the power of Andorra's two much larger neighbors.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "34980665",
"title": "Executive Council of Andorra",
"section": "",
"start_paragraph_id": 2,
"start_character": 0,
"end_paragraph_id": 2,
"end_character": 870,
"text": "Under the 1993 constitution, the co-princes continue as heads of state, but the head of government retains executive power. The two co-princes serve coequally with limited powers that do not include veto over government acts. Both are represented in Andorra by a delegate, although since 1993, both France and Spain have their own embassies. As co-princes of Andorra, the President of France and the Bishop of Urgell maintain supreme authority in approval of all international treaties with France and Spain, as well as all those that deal with internal security, defense, Andorran territory, diplomatic representation, and judicial or penal cooperation. Although the institution of the co-princes is viewed by some as an anachronism, the majority sees them as both a link with Andorra's traditions and a way to balance the power of Andorra's two much larger neighbors.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "34980665",
"title": "Executive Council of Andorra",
"section": "",
"start_paragraph_id": 3,
"start_character": 0,
"end_paragraph_id": 3,
"end_character": 494,
"text": "The way the two co-princes are chosen makes Andorra one of the most politically distinct nations on Earth. One co-prince is the current sitting President of France, currently Emmanuel Macron (it has historically been any head of state of France, including kings and emperors of the French). The other is the current Roman Catholic bishop of the Catalan city of La Seu d'Urgell, currently Joan Enric Vives i Sicilia. As neither prince lives in Andorra, their role is almost entirely ceremonial.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "39764",
"title": "Jacques Chirac",
"section": "",
"start_paragraph_id": 1,
"start_character": 0,
"end_paragraph_id": 1,
"end_character": 306,
"text": "Jacques René Chirac (; born 29 November 1932) is a French politician who served as President of the French Republic and \"ex officio\" Co-Prince of Andorra from 1995 to 2007. Chirac previously was Prime Minister of France from 1974 to 1976 and from 1986 to 1988, as well as Mayor of Paris from 1977 to 1995.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
18fkbp | What do most people not understand or realize about WWI? | [
{
"answer": "Penicillin wasn't discovered until 1928 and arguably didn't save a life until 1942 and wasn't ready for mass production until 1945. I'll leave it to the readers to imagine a life in which minor injuries and small sniffles proved fatal without effective antibiotics. \n \nA large part of the driving force behind the sheer scale of WWI was the [Haber process](_URL_0_) of 1909. Prior to the industrialisation of this process nations were limited in how fast they could blow people up by the rate at which they could scrape bird shit off of small islands far far away. Making explosives from the very air around us helped speed up the killing no end. \n \nThe \"trench warfare\" of WWI wasn't confined to surface trenches but continued on down through the clay into tunnels and bunkers six or seven stories underground. Sappers from both sides were probing and counter probing to extend tunnels under the other's lines and lay massive amounts of explosives prior to surface movements. ",
"provenance": null
},
{
"answer": "The toll that it took on the British aristocracy in terms of casualties and the impact that had on the loosening of the class system in subsequent decades. In terms of proportion of aristocratic males killed it was a greater rate than the English Civil war of the 17th c. This changed the social fabric in unexpected ways, it ended dynasties, caused many women to have to marry 'below' them, eradicated many of the serving jobs, brought much land out of private ownership. \n\nIt was WWI that gutted the landed gentry that had existed since time immemorial, they had been in decline for centuries but WWI was a blow from which they would never recover, there are a few remnants today but nothing the like the pre-war generation. ",
"provenance": null
},
{
"answer": "The displacement of millions of refugees throughout Europe. In modern day memory, we have an image of static warfare, men living in trenches for years and making little movement, yet civilian displacement was unprecedented.\n\n10+ million Russian refugees flee into the interior.\nSerbian refugees, french refugees, Polish, Armenians- you name it.\n\nAlso, Britain sees an the biggest influx of refugees it has ever seen (apart from the Irish Potato Famine). Belgian refugees total 250,000-300,000 and become a familiar presence in the war-time economy. So much so, the British government creates miniature Belgian cities within the country and gives entire control of them to the Belgian government (See Birtley).\n\nEdit: If you have any questions, just reply. This stuff is interesting (to me, at least)!",
"provenance": null
},
{
"answer": "I am going to list some of my unanswered questions about the First World War... Some of these may be answered in books I haven't read yet though.\n\nWhat happened to German POWs? Were they treated well? Did the Allies follow Geneva Conventions? If not, was it out of malicious intent or bureaucratic necessity?\n\nHow do we explain the weakness of peace movements in the belligerent nations in 1914? Is it just a matter of war nationalism overpowering the long intellectual history of opposition to war? How important was the Belgian cause in influencing support for the war among the Allied countries? Was it more important than nationalism? \n\nIs French Canada the only place that can demonstrate serious and vocal opposition to the war (at least, among the belligerent nations) in September 1914? Why is French Canada different from the rest of the belligerent nations? (my work is answering these questions)\n\nHow do we measure military success and failure in the First World War? Casualties inflicted/received? Land taken? Expended material vs land taken vs men lost? \n\nWhat are the exact cost of offensives hour by hour in terms of casualties? How many soldiers were lost for gaining a kilometre of land in 1914 vs 1916 vs 1918? What role did the terrain play in influencing the success or failure of operations? No role? Was there areas of the front which operations were more successful or less successful? Why? (though again, how do we measure success)\n\nDid armies gain effectiveness and perform better (if we could agree on how to measure performance) over the course of the war? Was there a learning curve? Or, was the war simply a matter of attrition/disease/material advantage? Does that negate the influence of leadership (good and bad) in the armed forces of the belligerent countries?\n\nHow did Catholics at war deal with the papal opposition to the conflict? What were the different reactions among different Catholics in different nations? 
(Belgium vs France vs German vs Australian vs English vs Irish vs Irish Canadian vs French Canadian - just to name a few off the top of my head) What consequence, if any, did this religious difference cause among individual Catholics? Within the Vatican?\n\nHow do we explain the different memories of the war? Britain has focused on the \"Lost Generation\" and the tragedy of the war whereas Canada remembers the war as the beginning of its national independence even as French Canadians see it as the beginning of their long path away from Confederation. How do you write about these national/societal narratives without diminishing the many many experiences of the war that do not align with them? Can the memory of the war ever align with the history of the war? \n\nI could probably keep posing these questions for hours... I find the scholarship is really weak/narrow. Don't even get me started on Canadian-specific literature and questions. \n\n",
"provenance": null
},
{
"answer": "[NMW wrote a great post](_URL_0_) that basically gets at how the general perception of the soldier's view of WWI comes from a small group of highly educated, upper class poets.",
"provenance": null
},
{
"answer": "WWI (or The Great War and other names used back then) had an interesting line of events leading up to it, and they played out so that virtually every country in Europe and a *massive* majority of the national populations welcomed the war from day 1. Millions of people volunteered and it was an extremely popular war in the beginning. We might find that rather odd in this day and age because we're used to the aggressor/victim role used almost exclusively in the media today, but back then the sentiments were very different: Old scores were to be settled, the national pride was at stake and everyone expect a short, victorious campaign. Needless to say, that wasn't going to happen.",
"provenance": null
},
{
"answer": "I'd argue that one thing that people don't understand is why trench warfare developed and why it continued after it developed. There seems to be a popular viewpoint of WWI military leaders that paints them as being callous and cruel towards the lives of their men because they had them charging across the landscape at entrenched fortifications, without understanding the realities of the situation. \n\nAnother thing that's a common perception is that trench warfare was invented in WWI. At best this is partially true--the realities on the ground meant that WWI developed the concept of trench warfare beyond anything that had happened previously. However there were some precursors to trench warfare in the American Civil war, particularly the siege of Petersburg. [This](_URL_0_) is an example of trenchworks during the siege of Petersburg and they're quite complex.",
"provenance": null
},
{
"answer": "How much of a global pacifist movement resulted from the war. Check out the [Kellog-Briand Pact](_URL_0_). People back then were really serious about achieving global peace and went so far as to \"outlaw\" war. Unfortunately, the pacifists just weren't the ones in power.",
"provenance": null
},
{
"answer": "1) It was probably the first war to occur between a group of highly industrialized AND bureaucratized countries. The latter point was key in maintaining the war despite the hideous casualties and large forces required. The British, for example, developed a book before the war that detailed everyone's role (right down to how to deal with the influx of marriage licenses!) in a major war with a continental power. Lyn MacDonald covers this in \"1914.\"\n\n2) Defensive artillery, not machine guns or barbed wire, probably had the biggest impact on beginning and perpetuating the stalemate. There tends to be a correlation between effective counter battery measures (i.e. using artillery to destroy enemy artillery) and successful offensives.",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "4764461",
"title": "World War I",
"section": "Section::::Legacy and memory.:Cultural memory.\n",
"start_paragraph_id": 303,
"start_character": 0,
"end_paragraph_id": 303,
"end_character": 253,
"text": "World War I had a lasting impact on social memory. It was seen by many in Britain as signalling the end of an era of stability stretching back to the Victorian period, and across Europe many regarded it as a watershed. Historian Samuel Hynes explained:\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "4764461",
"title": "World War I",
"section": "Section::::Legacy and memory.:Cultural memory.\n",
"start_paragraph_id": 305,
"start_character": 0,
"end_paragraph_id": 305,
"end_character": 982,
"text": "These beliefs did not become widely shared because they offered the only accurate interpretation of wartime events. In every respect, the war was much more complicated than they suggest. In recent years, historians have argued persuasively against almost every popular cliché of World War I. It has been pointed out that, although the losses were devastating, their greatest impact was socially and geographically limited. The many emotions other than horror experienced by soldiers in and out of the front line, including comradeship, boredom, and even enjoyment, have been recognised. The war is not now seen as a 'fight about nothing', but as a war of ideals, a struggle between aggressive militarism and more or less liberal democracy. It has been acknowledged that British generals were often capable men facing difficult challenges, and that it was under their command that the British army played a major part in the defeat of the Germans in 1918: a great forgotten victory.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "3184822",
"title": "Harry Elmer Barnes",
"section": "Section::::Early career.\n",
"start_paragraph_id": 14,
"start_character": 0,
"end_paragraph_id": 14,
"end_character": 395,
"text": "the truth about the causes of the World War is one of the livest and most important practical issues of the present day. It is basic to the whole matter of the present European and world situation, resting as it does upon an unfair and unjust Peace Treaty, which was itself erected upon a most uncritical and complete acceptance of the grossest forms of war-time illusions concerning war guilt.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "5530",
"title": "Conspiracy theory",
"section": "Section::::Sociological interpretations.\n",
"start_paragraph_id": 64,
"start_character": 0,
"end_paragraph_id": 64,
"end_character": 249,
"text": "Sociological historian Holger Herwig found in studying German explanations for the origins of World War I, \"Those events that are most important are hardest to understand because they attract the greatest attention from myth makers and charlatans.\"\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "34558",
"title": "20th century",
"section": "Section::::Wars and politics.\n",
"start_paragraph_id": 32,
"start_character": 0,
"end_paragraph_id": 32,
"end_character": 424,
"text": "BULLET::::- Rising nationalism and increasing national awareness were among the many causes of World War I (1914–1918), the first of two wars to involve many major world powers including Germany, France, Italy, Japan, Russia/USSR, the British Empire and the United States. World War I led to the creation of many new countries, especially in Eastern Europe. At the time, it was said by many to be the \"war to end all wars\".\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "34865681",
"title": "The Great War and Modern Memory",
"section": "Section::::Genres.\n",
"start_paragraph_id": 4,
"start_character": 0,
"end_paragraph_id": 4,
"end_character": 420,
"text": "Also, I was very interested in the Great War, as it was called then, because it was the initial twentieth-century shock to European culture. By the time we got to the Second World War, everybody was more or less used to Europe being badly treated and people being killed in multitudes. The Great War introduced those themes to Western culture, and therefore it was an immense intellectual and cultural and social shock.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "282291",
"title": "Aftermath of World War I",
"section": "Section::::Social trauma.\n",
"start_paragraph_id": 113,
"start_character": 0,
"end_paragraph_id": 113,
"end_character": 455,
"text": "The experiences of the war in the west are commonly assumed to have led to a sort of collective national trauma afterward for all of the participating countries. The optimism of 1900 was entirely gone and those who fought became what is known as \"the Lost Generation\" because they never fully recovered from their suffering. For the next few years, much of Europe mourned privately and publicly; memorials were erected in thousands of villages and towns.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
f85lvv | Grover Cleveland met his wife when she was born and he was 27. He took care of her after her father died and married her when she turned 21. How was this relationship viewed by the public? | [
{
"answer": "More input is always welcome; in the meantime, this exact question came up last month, and you may be interested in what u/WovenCoverlet and u/sunagainstgold [had to say on the topic](_URL_0_).",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "53904146",
"title": "Richard Falley Cleveland",
"section": "Section::::Marriage and family.\n",
"start_paragraph_id": 9,
"start_character": 0,
"end_paragraph_id": 9,
"end_character": 584,
"text": "Cleveland's fifth son, Grover Cleveland, became the 22nd and 24th President of the United States, the only president to serve non-consecutive terms. He was 16 years old at the time of his father's death and reputedly learned of the event from a boy hawking newspapers. Grover Cleveland spoke highly of his father in later life, praising his godliness and devotion to family, and named one of his sons (Richard F. Cleveland) after him. His sister Rose (the family's youngest child) acted as First Lady for the first year or so of his presidency, before his marriage to Frances Folsom.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "287938",
"title": "Marjory Stoneman Douglas",
"section": "Section::::Personal life.:Mental health.\n",
"start_paragraph_id": 50,
"start_character": 0,
"end_paragraph_id": 50,
"end_character": 556,
"text": "Douglas suggested she had what she referred to as \"blank periods\" before and during her marriage, but they were brief. She connected these lapses to her mother's insanity. She eventually quit the newspaper, but after her father's death in 1941 she suffered a third and final breakdown, when her neighbors found her roaming the neighborhood one night screaming. She realized she had a \"father complex\", explaining it by saying, \"Having been brought up without him all those years, and then coming back and finding him so sympathetic had a powerful effect\".\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "52403024",
"title": "Richard F. Cleveland",
"section": "Section::::Early life.\n",
"start_paragraph_id": 3,
"start_character": 0,
"end_paragraph_id": 3,
"end_character": 443,
"text": "Cleveland was born in Princeton, New Jersey, the eldest son of Grover Cleveland, the 22nd and 24th President of the United States, and Frances Folsom. He was born nearly eight months after the end of his father's second term, and was named for his grandfather, Richard Falley Cleveland. He was the next to youngest of five siblings: sisters Ruth (1891–1904), Esther (1893–1980), and Marion (1895–1977), and brother Francis Grover (1903–1995).\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "1110272",
"title": "Ruth Cleveland",
"section": "",
"start_paragraph_id": 1,
"start_character": 0,
"end_paragraph_id": 1,
"end_character": 550,
"text": "Ruth Cleveland (October 3, 1891 – January 7, 1904), popularly known as Baby Ruth, was the eldest of five children born to United States President Grover Cleveland and First Lady Frances Cleveland. Her birth between Cleveland's two terms of office caused a national sensation. Interest in her continued even after her father's second presidential term was over. A sickly child, Ruth Cleveland contracted diphtheria on January 2, 1904. Doctors thought her case was mild, but she died five days after her diagnoses. She is buried in Princeton Cemetery.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "51851793",
"title": "Presidencies of Grover Cleveland",
"section": "Section::::First presidency (1885–1889).:Administration.:Marriage and children.\n",
"start_paragraph_id": 15,
"start_character": 0,
"end_paragraph_id": 15,
"end_character": 563,
"text": "Cleveland entered the White House as a bachelor, and his sister Rose Cleveland acted as hostess for the first two years of his administration. On June 2, 1886, Cleveland married Frances Folsom in the Blue Room at the White House. He was the second president to wed while in office, after John Tyler. Though Cleveland had supervised Frances's upbringing after her father's death, the public took no exception to the match. At 21 years, Frances Folsom Cleveland was the youngest First Lady in history, and the public soon warmed to her beauty and warm personality.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "59039050",
"title": "Francis Cleveland",
"section": "Section::::Early life.\n",
"start_paragraph_id": 3,
"start_character": 0,
"end_paragraph_id": 3,
"end_character": 288,
"text": "Cleveland was born in 1903 in Buzzards Bay, Massachusetts, a part of the Town of Bourne. His father, Grover Cleveland, was the 22nd and 24th president of the United States; his mother, Frances Folsom, was First Lady. He had a brother, Richard, and three sisters, Ruth, Marion and Esther \n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "415642",
"title": "Frances Cleveland",
"section": "Section::::Later life.\n",
"start_paragraph_id": 15,
"start_character": 0,
"end_paragraph_id": 15,
"end_character": 671,
"text": "After her husband's death in 1908, Cleveland remained in Princeton, New Jersey. On February 10, 1913, at the age of 48, she married Thomas J. Preston, Jr., a professor of archaeology at her alma mater, Wells College. She was the first presidential widow to remarry. She was vacationing at St. Moritz, Switzerland, with her daughters Marion and Esther and her son Francis when World War I started in August 1914. They returned to the United States via Genoa on October 1, 1914. Soon afterwards, she became a member of the pro-war National Security League, becoming its director of the Speaker's Bureau and the \"Committee on Patriotism through Education\" in November 1918.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
24oey1 | how do criminal defendants end up with charges like "four counts of murder" when only two people are killed? | [
{
"answer": "Sometimes it is hard to prove that an accused murderer had all the requirements of a crime. With 1st degree murder, the prosecution needs to prove everything in 2nd degree murder, PLUS the act/s were premeditated. \n\nIf the jury agreed that he was reacting, and not making specific plans, that would eliminate 1 st degree. If the jury found at any point the defendant was in fear, using self-defense of life, or local versions of 'stand your ground, and 'castle doctrine', they could nullify any of the murder charges. \n\nEach States laws are a little different, some would call them 'included offenses'. In this case, if you prove murder 1, you have to also prove murder 2- even though the same act . Technically, that act could also be murder 3, manslaughter, and aggravated assault. \nBut our system only punishes the highest crime of the inclusive 'stack' - for each separate action. \n\nThe jury instructions are [here](_URL_0_). Thee jury was asked to determine if the facts met all 4possible crimes, jury says yes. \nUnless there is something I missed that makes these 2crimes x2victims, the sentencing will only show a conviction and penalty for the most heinous criminal act. \n\nIf these other charges were not given to the jury now, they could not choose to convict, and the defendant might be protected under double jeopardy. Many times a criminal will be charged with lesser versions of the same crime, just to avoid letting them walk on a paperwork issue . \n\n",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "26665023",
"title": "Felony murder rule (Florida)",
"section": "Section::::Penalties.\n",
"start_paragraph_id": 27,
"start_character": 0,
"end_paragraph_id": 27,
"end_character": 290,
"text": "If a person committing a predicate felony directly contributed to the death of the victim then the person will be charged with murder in the first degree - felony murder which is a capital felony. The only two sentences available for that statute are life in prison and the death penalty. \n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "3768697",
"title": "Murder in English law",
"section": "Section::::Proceedings.:Indictment.:Joinder of counts.\n",
"start_paragraph_id": 77,
"start_character": 0,
"end_paragraph_id": 77,
"end_character": 212,
"text": "A count of murder may be joined with a count charging another offence of murder, or a count charging a different offence. A count of conspiracy to murder may be joined with a count of aiding and abetting murder.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "18916677",
"title": "Double Jeopardy Clause",
"section": "Section::::\"Twice put in jeopardy\".:Retrial after conviction.\n",
"start_paragraph_id": 40,
"start_character": 0,
"end_paragraph_id": 40,
"end_character": 535,
"text": "An example of this are the charges of \"conspiring to commit murder\" and \"murder\". Both charges typically have facts distinct from each other. A person can be charged with \"conspiring to commit murder\" even if the murder never actually takes place if all facts necessary to support the charge can be demonstrated through evidence. Further, a person convicted or acquitted of murder can, additionally, be tried on conspiracy as well if it has been determined after the conviction or acquittal that a conspiracy did, in fact, take place.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "2514888",
"title": "Jury selection",
"section": "Section::::\"Voir dire\".:Canada.\n",
"start_paragraph_id": 22,
"start_character": 0,
"end_paragraph_id": 22,
"end_character": 275,
"text": "Where multiple offences are tried together, the greatest number applicable is used (i.e., in an offence involving first-degree murder and armed robbery, the accused and the prosecutor are each entitled to twenty peremptory challenges) [s. 634 (3), Criminal Code of Canada]. \n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "52062458",
"title": "People v. Bland",
"section": "",
"start_paragraph_id": 3,
"start_character": 0,
"end_paragraph_id": 3,
"end_character": 360,
"text": "The court further distinguishes that if a defendant intends to kill a target and also kills others in the kill zone, then they are guilty of the murder of each person killed, i.e., is guilty of multiple counts of murder. However, if the defendant by the same act fails to kill anyone, defendant is only guilty of a single count of attempted murder by the act.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "26665023",
"title": "Felony murder rule (Florida)",
"section": "Section::::Penalties.\n",
"start_paragraph_id": 28,
"start_character": 0,
"end_paragraph_id": 28,
"end_character": 255,
"text": "If a person commits a predicate felony, but was not the direct contributor to the death of the victim then the person will be charged with murder in the second degree - felony murder which is a felony of the first degree. The maximum prison term is life.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "2514888",
"title": "Jury selection",
"section": "Section::::\"Voir dire\".:Canada.\n",
"start_paragraph_id": 23,
"start_character": 0,
"end_paragraph_id": 23,
"end_character": 423,
"text": "When multiple accused are tried together, each accused is entitled to the same number that they would receive if tried separately, while the prosecutor has as many challenges as the total number available to all of the accused (i.e., in a case wherein two co-accused are charged with first-degree murder, each receives twenty peremptory challenges, and the prosecutor receives forty) [s. 634 (4), Criminal Code of Canada].\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
5ti657 | How far did the KGB infiltrate the American government? | [
{
"answer": "As a follow-up, would the handling of foreign spies for the USSR generally be the responsibility of GRU, KGB, or a different agency? Or all three? ",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "51191775",
"title": "1995 CIA disinformation controversy",
"section": "Section::::Background.\n",
"start_paragraph_id": 4,
"start_character": 0,
"end_paragraph_id": 4,
"end_character": 1487,
"text": "Aldrich Ames, a CIA counterintelligence agent working in the SE Division, approached the Soviet Embassy in Washington, D.C. on April 16, 1985, and within a month received $50,000 from the KGB in exchange for espionage service. Meeting with Soviet official and go-between Sergey Dmitriyevich Chuvakhin on June 13, Ames passed him copied documents identifying over ten Soviet agents working for the CIA and FBI. As a CIA review related, the Soviets began arresting and sometimes executing U.S. operatives later in 1985, and the CIA realized that it \"was faced with a major CI problem.\" Suspicions initially fell on Edward Lee Howard, a former CIA officer who also compromised CIA operations in 1985 and defected to the Soviet Union on September 21. However, the CIA realized by fall 1985 that Howard was not responsible for all of the damage. Three agents arrested in fall 1985 and later executed, some of the CIA's most valuable sources, had been betrayed by Ames, not Howard. By December, six SE agents had vanished, a trend which continued into 1986; according to a congressional report, the over 20 operations Ames revealed \"amounted to a virtual collapse of the CIA's Soviet operations.\" Ames, who stated that he spied for the KGB due to his financial debt, gave thousands of pages of classified documents to the Soviets and later admitted to disclosing over 100 CIA, FBI, military, and allied operations; he was deemed responsible for the arrests and executions of ten U.S. sources.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "5716625",
"title": "Sleepers (TV series)",
"section": "Section::::Plot summary.\n",
"start_paragraph_id": 5,
"start_character": 0,
"end_paragraph_id": 5,
"end_character": 538,
"text": "The KGB also begin to take action to retrieve the two rogue agents and send Major Grishina - a darkly attractive female officer - to the United Kingdom in order to bring them back to the Soviet Union. Her arrival alerts the CIA and MI5 that something big must be happening for the KGB to send such a high-ranking officer to Britain. Her arrival also shakes up the Soviet representatives in the UK. The chief KGB officer in the UK is more decadent than the locals and is originally discovered watching an American baseball game on the TV.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "5451409",
"title": "Rem Krassilnikov",
"section": "",
"start_paragraph_id": 3,
"start_character": 0,
"end_paragraph_id": 3,
"end_character": 395,
"text": "During the 1980s, he was Chief of the First (American) Department within the KGB's Second Chief Directorate, which placed him in charge of investigating and disrupting the operations of the American Central Intelligence Agency in the Soviet Union's capital of Moscow. Prior to that he headed up the Second Department of the SCD, which targeted the intelligence operations of the United Kingdom.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "142690",
"title": "Disinformation",
"section": "Section::::Defections reveal covert operations.\n",
"start_paragraph_id": 20,
"start_character": 0,
"end_paragraph_id": 20,
"end_character": 1018,
"text": "In 1985, Aldrich Ames gave the KGB a significant amount of information on CIA agents, and the Soviet government swiftly moved to arrest these individuals. Soviet intelligence feared this rapid action would alert the CIA that Ames was a spy. In order to reduce the chances the CIA would discover Ames's duplicity, the KGB manufactured disinformation as to the reasoning behind the arrests of U.S. intelligence agents. During summer 1985, a KGB officer who was a double agent working for the CIA on a mission in Africa traveled to a dead drop in Moscow on his way home but never reported in. The CIA heard from a European KGB source that their agent was arrested. Simultaneously the FBI and CIA learned from a second KGB source of their agent's arrest. Only after Ames had been outed as a spy for the KGB did it become apparent that the KGB had known all along that both of these agents were double agents for the U.S. government, and had played them as pawns to send disinformation to the CIA in order to protect Ames.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "41254380",
"title": "Operation Shocker",
"section": "Section::::Overview.\n",
"start_paragraph_id": 3,
"start_character": 0,
"end_paragraph_id": 3,
"end_character": 565,
"text": "The operation began in 1959 when U.S. Army First Sergeant Joseph Edward Cassidy (1920-2011), assigned to the Army's nuclear power office near Washington, D.C., was approached (with Army permission) by the FBI. Cassidy, despite having no previous training, was able to make contact with a Soviet naval attache believed to be a spy, and set up an arrangement where he would provide information to the Soviets in exchange for money. Soviet requests for information were passed to the US Joint Chiefs of Staff, and various classified information provided as a result. \n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "38817942",
"title": "Active Measures Working Group",
"section": "Section::::Background.\n",
"start_paragraph_id": 6,
"start_character": 0,
"end_paragraph_id": 6,
"end_character": 661,
"text": "In October 1979 Stanislav Levchenko, head of the Active Measures Line of the KGB Rezidentura in Tokyo, contacted American officials and was granted political asylum in the United States. Levchenko explained the workings of the Soviet apparatus and how it was carried out, under his direction, in Japan. Levchenko's information, combined with that of Ladislav Bittman, who had been the deputy head of the Czechoslovakian Intelligence Service's Disinformation Department, was instrumental in helping the CIA understand many of the operations that were being carried out against the United States. This information was also reported to policy makers and Congress.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "797178",
"title": "First Chief Directorate",
"section": "Section::::FCD residency organization.\n",
"start_paragraph_id": 88,
"start_character": 0,
"end_paragraph_id": 88,
"end_character": 638,
"text": "In return for money, they gave the KGB the names of officers of the KGB residency in Washington, DC, and other places, who cooperated with the FBI and/or the CIA. Line KR officers immediately arrested a number of people, including Major General Dmitri Polyakov, a high-ranking military intelligence officer (GRU). He was cooperating with the CIA and FBI. Ames reported that Colonel Oleg Gordievsky, London resident, had spied for the Secret Intelligence Service (SIS or MI6). Line KR officers arrested many others, whom they sent to Moscow. There they were passed into the hands of the KGB Second Chief Directorate (counterintelligence).\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |